\begin{document} \title{H\"older regularity of generic manifolds} \begin{center} {\it Dedicated to Professor J\'ozef Siciak on his $85^{th}$ birthday} \end{center} \vskip 0.3 cm \noindent{\bf Abstract:} In this paper we study the H\"older continuity of the pluricomplex Green function with logarithmic growth at infinity of a smooth generic submanifold of $\C^n$. In particular, we prove that the pluricomplex Green function of any $C^2$-smooth generic compact submanifold of $\C^n$ (without boundary) is Lipschitz continuous in $\C^n$. \noindent{{\bf Key words:} Generic manifold, attached analytic discs, plurisubharmonic Green function, pluripolar sets, pluriregular sets, H\"older continuity}. \noindent{\bf AMS Classification: 32U05, 32U15, 32U35, 32E30, 32V40} \section{Introduction and statement of the main result} Real $m$-planes $\Pi \sub \C^n$, $\dim_{\R}\Pi = m$, $m \in \N^+$, which are not contained in any proper complex subspace of $\C^n$ are important in complex analysis and pluripotential theory. The $\C$-hull of such a plane $\Pi$ is all of $\C^n$, i.e. $\Pi + J \Pi = \C^n$ (where $J$ is the standard complex structure on $\C^n$), and any nonempty open subset of $\Pi$ is non-pluripolar in $\C^n$. Such planes are called {\it generic} (real) subspaces of $\C^n$. Correspondingly, a real smooth submanifold $M \subset \C^n$ is said to be {\it generic} if for each $z \in M$ its real tangent space $T_z M$ is a generic subspace of $\C^n$, i.e. $T_z M + J T_z M = \C^n$. Such a submanifold has real dimension $m \geq n$. The case of minimal dimension $\dim M = n$ is the most relevant for our purposes. In this case, for each $z \in M$ the tangent space $T_z M$ does not contain any complex line, i.e. $T_z M \cap J T_z M = \{0\}$, and $M$ is said to be {\it totally real}. Observe that any smooth Jordan curve in $\C$ is totally real; hence any product of $n$ smooth Jordan curves in $\C$ is a smooth compact totally real submanifold of dimension $n$ in $\C^n$. 
Moreover, the class of smooth compact totally real submanifolds of dimension $n$ in $\C^n$ is stable under small $C^{2}$-perturbations. Generic submanifolds of $\C^n$ play an important role in Complex Analysis and Pluripotential Theory (see [Pi74], [KC76], [Sa76], [C92], [EW10], [SZ12]). --------------------------------------------------------------------\\ *) The first author was partially supported by the fundamental research fund of the Khorezm Mamun Academy, Grant $\Phi 4-\Phi A-0-16928$.\\ In our previous paper [SZ12], we used the method of attached analytic discs to investigate non-plurithinness of generic submanifolds of $\C^n$. We proved in [SZ12] that subsets of full measure in a generic $C^2$-smooth submanifold are non-plurithin at any point. Here we continue our investigation of the {\it pluripotential} properties ({\it pluripolarity, pluriregularity}) of generic submanifolds of $\C^n$ by studying the H\"older continuity of their pluricomplex Green functions. All these properties can be expressed in terms of the pluricomplex Green function, defined as follows. Given a bounded subset $E \Subset \C^n$, we define its pluricomplex Green function by $$ V_E (z) := \sup \{u (z) : u \in \mathcal L (\C^n), u|_E \leq 0\}, $$ where $\mathcal L (\C^n)$ is the Lelong class of $psh$ functions $u$ in $\C^n$ with logarithmic growth at infinity, i.e. $ \sup \{u (z) - \log^+ \Vert z \Vert : z \in \C^n\} < + \infty$ (see [Sic62], [Sic81], [Za76], [Kl]). Our main result is the following. \vskip 0.2 cm \noindent {\bf Main theorem.} {\it Let $M \subset \C^n$ be a $C^2$-smooth generic compact submanifold without boundary. Then its pluricomplex Green function $V_M$ is Lipschitz continuous in $\C^n$.} \vskip 0.3 cm This theorem concerns compact submanifolds without boundary. In Section 5, we will consider the more general case of a $C^2$-smooth generic submanifold and prove that its extremal function is Lipschitz near each of its compact subsets (see Theorem 5.1). 
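A classical example illustrates the statement of the Main theorem. For the unit torus $T^n := \{z \in \C^n : \vert z_1 \vert = \cdots = \vert z_n \vert = 1\}$, a compact real-analytic totally real submanifold of dimension $n$, the polynomial hull is the closed unit polydisc, so that $$ V_{T^n} (z) = \max_{1 \leq j \leq n} \log^+ \vert z_j \vert, $$ which is Lipschitz continuous in $\C^n$ (with constant $1$), in accordance with the Main theorem.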
In the last section we consider the case of a compact $C^2$-smooth generic submanifold with boundary and discuss the H\"older continuity of its pluricomplex Green function. From the Lipschitz continuity, or more generally the H\"older continuity, of the pluricomplex Green function $V_E^*$ of a compact set $E \subset \C^n$, it follows that the compact set $E$ satisfies the following Markov inequality: there exist positive constants $A, r > 0$ such that $$ \Vert \nabla P \Vert_E \leq A d^r \Vert P \Vert_E $$ for any polynomial $P$ of degree $d$. This inequality plays an important role in approximation theory, yields sharp inequalities for polynomials, and is useful for constructing continuous extension operators for smooth functions from subsets of $\R^n$ to $\C^n$ (see [PP86], [Ze93]). On the other hand, Complex Dynamics provides many examples of compact sets whose pluricomplex Green function $V_E^*$ is H\"older continuous (see [FS92, K95, Ks97]). More recently, an important result of T.-C. Dinh, V.-A. Nguyen and N. Sibony shows that the Monge-Amp\`ere measure of a H\"older continuous plurisubharmonic function is a {\it moderate measure} (see [DNS]). In particular, the equilibrium Monge-Amp\`ere measure $\mu_E := (dd^c V_E^*)^n$ of a compact set whose pluricomplex Green function $V_E^*$ is H\"older continuous is a moderate measure, which means that it satisfies the following uniform version of Skoda's integrability theorem: for any compact family $\mathcal U$ of $psh$ functions in a neighborhood of a given ball $\B \subset \C^n$, there exist $\varepsilon > 0$ and a constant $C > 0$ such that $$ \int_{\B} e^{- \varepsilon u} d \mu_E \leq C, \quad \forall u \in \mathcal U. 
$$ From this property it follows that the equilibrium measure $\mu_E$ is ``well dominated'' by the Monge-Amp\`ere capacity (see [Ze01]), in the sense that for any given ball $\B \subset \C^n$ there is a constant $A > 0$ such that for any Borel set $S \subset \B$, $$ \mu_E (S) \leq A \exp \left(- A \, \text{cap}_{\B} (S)^{- 1 \slash n}\right), $$ where $\text{cap}_{\B} (S)$ is the Monge-Amp\`ere capacity ([BT82]). This property turns out to play an important role in the theory of complex Monge-Amp\`ere equations, as was discovered by S. Kolodziej (see [Ko98], [Ce98], [GZ07]). \vskip 0.3 cm \noindent {\bf Acknowledgments:} 1. It is a pleasure for us to dedicate this paper to Professor J\'ozef Siciak on his $85^{th}$ birthday. His remarkable achievements in Pluripotential Theory and Polynomial Approximation have been a great inspiration for us. \\ 2. We would like to thank Evgeniy Chirka and Norm Levenberg for interesting and helpful discussions on these problems. \\ 3. This work was started during the visit of the two authors to ICTP in June 2012. The authors warmly thank this Institute for providing excellent conditions for mathematical research, and especially Professor Claudio Arezzo for the invitation. \\ 4. We would like to thank the referee for his remarks. \section{Definitions and preliminaries} Let us recall the following definitions. \begin{defn} 1. We say that a subset $P \subset \C^n$ is {\it pluripolar} if there is a plurisubharmonic ($psh$) function $u$ such that $u \not\equiv -\infty$ but $u|_P \equiv -\infty$. \\ 2. We say that $E$ is {\it pluriregular} if its pluricomplex Green function satisfies $V^*_E|_E \equiv 0$, i.e. $V_E = V_E^*$ on $\C^n$. \end{defn} Observe that any pluriregular set is non-pluripolar. It is well known that if $E$ is non-pluripolar then $V_E^* \in \mathcal L (\C^n)$. Moreover, if $E$ is a pluriregular compact set then $V_E = V_E^*$ is continuous in $\C^n$ (see [Sic81]). 
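For instance, for the closed unit ball $\overline{\B} = \{z \in \C^n : \Vert z \Vert \leq 1\}$ it is classical that $$ V_{\overline{\B}} (z) = \log^+ \Vert z \Vert, $$ which vanishes on $\overline{\B}$ and is continuous in $\C^n$; thus the closed unit ball is a pluriregular compact set.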
On the other hand, we know from [BT82] that if $E$ is non-pluripolar then the locally bounded $psh$ function $V_E^*$ satisfies the complex Monge-Amp\`ere equation $$ (dd^c V_E^*)^n = 0 \ \ \text{on} \ \ \C^n \setminus \overline{E}, $$ which means that the equilibrium measure of $E$, defined as $$ \mu_E := (dd^c V_E^*)^n, $$ is a Borel measure supported in the closed set $\overline E$. Here we introduce the following important notion. \begin{defn} We say that a set $E$ is $\Lambda_{\alpha}$-pluriregular, $\alpha > 0$, if for every compact $K \subset E$ there exist a constant $A = A_K > 0$ and a neighborhood $O = O_K$ of $K$ such that \begin {equation} \label{eq:H1} V_E^*(z) \leq A d^{\alpha} (z,K), \ \forall z \in O, \end {equation} where $d$ is the Euclidean distance in $\C^n$. \end{defn} Roughly speaking, this definition means that the pluricomplex Green function $V_E$ of the set $E$ is H\"older continuous near any compact subset $K \subset E$. The following observation, essentially due to Z. B{\l}ocki, shows that if the set $E$ itself is compact, the definition means that its pluricomplex Green function is H\"older continuous (see [Sic97]). \begin{lem} If $E \subset \C^n$ is a $\Lambda_{\alpha}$-pluriregular compact set, then its pluricomplex Green function $V_E$ is H\"older continuous of order $\alpha$ globally in $\C^n$, i.e. for any $z, w \in \C^n$ we have $$ \vert V_E (z) - V_E (w) \vert \leq A \vert z - w \vert^{\alpha}. $$ \end{lem} \begin{proof} Observe that $V_E^*$ has logarithmic growth at infinity. Therefore, if $E$ is a $\Lambda_{\alpha}$-pluriregular compact set, then its pluricomplex Green function $V_E$ satisfies (\ref{eq:H1}) for all $z \in \C^n$, i.e. for some constant $A > 0$ we have \begin {equation} \label{eq:H2} V_E(z) \leq A d^{\alpha} (z,E), \ \forall z \in \C^n. 
\end {equation} To prove that $V_E$ is H\"older continuous of order $\alpha$ globally in $\C^n$, fix $h \in \C^n$ with $\vert h \vert \leq \delta$ and observe that for any $z \in E$, $d (z+h,E) \leq \delta$, which implies by the H\"older condition (\ref{eq:H2}) that $V_E (z + h) \leq A \delta^{\alpha}$ for any $z \in E$. Therefore the function defined by $u (z) := V_E (z + h) - A \delta^{\alpha}$ is a plurisubharmonic function such that $u \in \mathcal L (\C^n)$ and $u \leq 0$ on $E$. By the definition of $V_E$, we conclude that $u (z) \leq V_E (z)$ for any $z \in \C^n$, i.e. $V_E (z + h) - V_E (z) \leq A \delta^{\alpha}$. Taking $\delta = \vert h \vert$ and interchanging the roles of $z$ and $z + h$, we conclude that $V_E$ is H\"older continuous of order $\alpha$. \end{proof} \section{Analytic discs attached to generic manifolds} \subsection{Construction of attached analytic discs} Let $\U := \{\zeta \in \C : \vert \zeta \vert < 1\}$ be the open unit disc and $\T := \partial \U$ the unit circle. An analytic disc in $\C^n$ is a continuous map $f : \overline{\U} \longrightarrow \C^n$ which is holomorphic on $\U$. Let $M \subset \C^n$ be a given subset of $\C^n$ and $\gamma \subset \overline{\U}$ a given connected subset of the closed disc $\overline{\U}$. We say that the analytic disc $f$ is attached to $M$ along $\gamma$ if $f (\gamma) \subset M$. If $f : \ove{\U} \longrightarrow \C^n$ is an analytic disc and $F$ is a holomorphic function on a neighborhood $D$ of $f (\ove{\U})$, then $F \circ f$ is a holomorphic function on the unit disc $\U$. Similarly, if $u$ is a plurisubharmonic function on $D$, then $u \circ f$ is a subharmonic function on $\U$. Therefore analytic discs enable us to reduce multidimensional complex problems to corresponding one-dimensional ones. In the proof of our theorem, we need a smooth family of analytic discs; we will use Bishop's equation to construct such a family (see [B65], [Pi74]). 
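To illustrate how attached discs will be used below: if $u \in \mathcal L (\C^n)$ satisfies $u \leq 0$ on $M$ and $f$ is an analytic disc attached to $M$ along $\gamma \subset \T$, then $u \circ f$ is subharmonic on $\U$, bounded above on $\overline{\U}$, and nonpositive on $\gamma$, so the theorem of two constants yields $$ u \circ f (\zeta) \leq C \left[ 1 + \omega^* (\zeta, \gamma, \U) \right], \quad \zeta \in \U, $$ where $C$ is an upper bound of $u \circ f$ on $\overline{\U}$ and $\omega^*$ is the harmonic measure of $\gamma$ introduced in Subsection 3.2. This is exactly the scheme used in the proof of the main theorem in Section 5.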
Let $M$ be a totally real submanifold of dimension $n$ given locally by the equation $$ M := \left\{ z = x + iy \in B \times \R^n : y = h\left( x \right) \right\}, $$ where $B \subset \R^n$ is a ball centered at $0$ and $h : B \rightarrow \R^n$ is a smooth map such that $$ h\left( 0 \right) = 0 \ \ \text{and} \ \ Dh\left( 0 \right) = 0. $$ Let $v : \T \rightarrow \R^+$ be a $C^\infty$ function on the unit circle $\T$ such that $$ v|_{\left\{e^{i \theta} \, : \, \theta \in (0,\pi)\right\}} = 0 \ \ \text{and} \ \ v|_{\left\{e^{i \theta} \, : \, \theta \in (\pi ,2\pi) \right\}} > 0. $$ Assume that there exists a continuous mapping $X : \T \to \R^n$ which is a solution of the following Bishop equation \begin{equation} X\left( \tau \right) = c - \Im \left( {h \circ X + tv} \right)\left( \tau \right), \, \, \, \tau \in \T, \end{equation} where $\left( {c,t} \right) \in Q = Q_c \times Q_t \subset \R^n \times \R^n$ is a fixed parameter and $\Im$ is the harmonic conjugate operator defined by the Schwarz integral formula \begin{equation} \Im \left( X \right)\left( \zeta \right) = \frac{1}{{2\pi }}\int\limits_\T {X\left( \tau \right)\operatorname{Im} \frac{{e^{i\tau } + \zeta }}{{e^{i\tau } - \zeta }}d\tau } \,\,,\,\,\,\,\zeta = re^{i\theta }, \end{equation} normalized by the condition $$ \Im X\left( 0 \right) = 0. 
$$ We will consider the unique harmonic extension $X\left( \zeta \right)$ of the mapping $X$ to the unit disk $\U$. Then the mapping \begin{equation} \begin{array}{ll} \Phi \left( {c,t,\zeta } \right) := X\left( {c,t,\zeta } \right) + i\left[ {h^{*}\left( {c,t,\zeta } \right) + tv\left( \zeta \right)} \right] = \\ = c + i\left\{ {h^{*}\left({c,t,\zeta } \right) + tv\left( \zeta \right) + i\Im \left[ {h^{*}\left( {c,t,\zeta } \right) + tv\left( \zeta \right)} \right]} \right\} \end{array} \end{equation} provides a family of analytic disks $\Phi \left( {c,t,\zeta } \right) : \overline \U \to \C^n$ such that \begin{equation} \forall \left( {c,t} \right) \in Q\,,\,\forall \, \tau \in \gamma \,,\,\Phi \left( {c,t,\tau } \right) \in M. \end{equation} Here $X\left( {c,t,\zeta } \right)$, $h^{*}\left( {c,t,\zeta } \right)$ and $v\left( \zeta \right)$ are the harmonic extensions of $X\left( {c,t,\tau } \right)$, $h \circ X\left( {c,t,\tau } \right)$ and $v\left( \tau \right)$ to the unit disk $\U$, respectively. We need a smooth family of disks $\Phi \left( {c,t,\zeta} \right)$. Many constructions of analytic discs attached to generic manifolds along a part of the circle have been given by different authors, depending on the smoothness properties of the manifold (see [Pi74], [75], [Sa76], [C92]). The most general and sharp result was proved by B. Coupet [C92]: \vskip 0.3 cm \noindent {\bf Theorem} ([C92]). {\it Let $p > 2n+1$, $q \geq 1$ be integers and $h \in C^q (B)$. 
Then there exists a constant $\delta_0 > 0$, not depending on $h$ and $p$, such that for every $C^q$-smooth mapping $k(c,t,\tau) : \R^{2n+1} \to \R^n$ with compact support and $\parallel k \parallel_{W^{q,p}} \leq \delta_0$, the equation \begin {equation} u = -\Im (h \circ u) + k \end {equation} has a unique solution $u \in W^{q,p}(\T \times \R^{2n})$. Moreover, the harmonic extensions of $u$ and $h \circ u$ to the unit disk $\U$ belong to $C^q{(\U \times \R^{2n})}$.} \vskip 0.3 cm Let now $h \in C^q (B)$. Observe that Bishop's equation (3.1) is a particular case of equation (3.5). Therefore, from the theorem of Coupet and the Sobolev embedding theorem $W^{q,p} \subset C^{q-1}$, it follows that for a small enough neighborhood $Q \ni 0$, $(c,t) \in Q$, the Bishop equation (3.1) has a unique solution $X(\tau,c,t)$ with $X, h \circ X \in C^{q-1}{(\overline \U \times Q)} \cap C^{q}{(\U \times Q)}$. Note that the operator $\Im : W^{q,p} \to W^{q,p}$ is continuous. Therefore, for a $C^2$-smooth generic submanifold $M \subset \C^n$, we obtain a smooth family of disks (3.3), attached to $M$, such that $$ \left\| \Im X \right\|_1 \leq A \left\| X \right\|_1, \quad \left\| \Im\, h \circ X \right\|_1 \leq A \left\| h \circ X \right\|_1, $$ where $A$ is a constant and $\left\| \cdot \right\|_1$ is the $C^1$-norm in $\tau \in \T$. \subsection{Harmonic measure of a boundary set of the unit disk.} For arbitrary $\gamma \subset \T$ we denote by $\aleph (\gamma,\U)$ the class of functions $$\left\{u(\zeta) : \, u \in \text{sh}(\U) \cap C(\overline{\U}),\,\, u|_\U < 0,\,\, u|_\gamma \leq -1 \right\},$$ and set $$ \omega(\zeta,\gamma,\U) = \sup\{u(\zeta) : u \in \aleph (\gamma,\U)\}, \ \zeta \in \U. $$ Then the upper semi-continuous regularization $\omega^*(\zeta,\gamma,\U)$ is called the {\it harmonic measure} of $\gamma$ with respect to $\U$ at the point $\zeta$ (with this sign convention it is the negative of the classical harmonic measure). 
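For instance, if $\gamma$ is a half-circle, then by the mean value property the value at the center is $$ \omega^*(0,\gamma,\U) = -\frac{1}{2}, $$ the average of the boundary values $-1$ on $\gamma$ and $0$ on $\T \setminus \gamma$.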
The function $\omega^*$ is the unique solution of the Dirichlet problem: $$ \Delta \omega^* = 0, \,\, \omega^*|_\T = -\chi_{\gamma}, $$ where $\chi_{\gamma}$ is the characteristic function of $\gamma$. By the Poisson formula $$ \omega^*(\zeta,\gamma,\U) = -\frac{1}{{2\pi}}\int\limits_\T {\chi _{\gamma} \left( \tau \right)\operatorname{Re} \frac{{e^{i\tau } + \zeta}}{{e^{i\tau } - \zeta }}d\tau } \,\,,\,\,\,\,\zeta = re^{i\theta }. $$ For $\gamma = \{e^{i\varphi} : \, 0 \leqslant \varphi \leqslant \pi \}$ the harmonic measure $\omega^*$ can be expressed as follows: \begin {equation} \omega^*(\zeta,\gamma,\U) = \frac{1}{\pi} \, \, \text{arg}\,i\,\frac{1-\zeta}{1+\zeta} - 1. \end {equation} Let us define the sector at the point $1 = e^{i \cdot 0} \in \ove \U$ as follows $$ \Omega_{0,\alpha} = \cup \{ l \cap \ove {\U} : \,\, l \ni 1,\,\, \pi/2 \leq \text{arg}\,{l} \leq \pi/2 + \alpha\}, $$ where $l$ stands for a real line passing through the point $1$ and $0 \leq \alpha \leq \pi/2$ is fixed. The sector $\Omega_{a,\alpha}$ at the point $e^{ia} \in \ove \U$ can be defined in the same way. From (3.6) it clearly follows that \begin {equation} \omega^*(\zeta, \gamma, \U) \leq -1 + \alpha/\pi, \, \, \, \, \forall \zeta \in \Omega_{0,\alpha}. \end{equation} A sector $\Omega_{a,\alpha}$ at the point $e^{ia}$ is said to be {\it admissible} if $\Omega_{a,\alpha} \cap \partial \U \subset \gamma$. From the last fact, we deduce the following statement. \begin{lem} Let $\gamma = \text{arc}[e^{ia},e^{ib}] \subset \mathbb{T}$, $0 \leq a < b \leq 2\pi$, be an arbitrary arc on $\mathbb{T}$, and let $\Omega_{a,\alpha}$ be an admissible sector at the point $e^{ia}$. Then $\omega^* (\zeta,\gamma,\U)$ is $\Lambda_1$-continuous in $\U \cup \mathbb{T} \setminus \{e^{ia}, e^{ib}\}$, $\omega^*|_{{\gamma}^{\circ}} \equiv -1$, $\omega^*|_{\mathbb{T} \setminus {\gamma}} \equiv 0$, and $\omega^*$ satisfies (3.7) in $\Omega_{a,\alpha}$. 
\end{lem} Here ${\gamma}^{\circ}$ denotes the interior of the arc $\gamma$. We note that if $\gamma_0 \Subset \gamma$ is an arc with nonempty interior, then there exists $\alpha = \alpha(\gamma_0,\gamma) > 0$ such that $\Omega_{\tau, \alpha}$ is admissible for every $\tau \in \gamma_0$. \section {Transversality of attached discs to a generic manifold.} It is clear that the family of analytic discs constructed above, $$ \Phi (c,t,\zeta) = X (c,t,\zeta) + i (h^{*} (c,t,\zeta) + t v (\zeta)), $$ for $(c,t) \in Q = Q_c \times Q_t$, $\zeta \in \bar \U$, satisfies the following properties: \begin{equation} \label{eq:PFE} X (c,t,\tau) = c - \Im \left( h \circ X (c,t,\tau) + t v(\tau) \right), \,\, (c,t) \in Q, \,\, \tau \in \partial \U. \end{equation} \begin{equation} h^{^{*}}(c,t, \tau) = h \circ X (c,t, \tau), \,\, (c,t) \in Q, \,\, \tau \in \partial \U. \end{equation} \begin{equation}\begin{array}{ll} X(c,0,\zeta) \equiv c, \,\, h^{^{*}}(c,0, \zeta) \equiv h(c), \,\,\, \text{so that} \\ \Phi(c,0, \zeta) \equiv c + ih(c) \in M, \,\, c \in Q_c. \end{array} \end{equation} \begin{equation} X(c,t,0) = \frac{1}{2\pi} \int_{\T} X(c,t, \tau) d \tau \equiv c, \,\, (c,t) \in Q. \end{equation} \begin{equation} \left\| X \right\| \leq {O}(\left\| c \right\| + \left\| t \right\|), \quad \left\| D_\tau X \right\| \leq {O}( \left\| t \right\|). \end{equation} Here and below $\left\| \cdot \right\|$ is the Euclidean norm. The following geometric transversality property will be crucial for the proof of our main theorem. \begin{lem} Let $\gamma_0 \subset\subset \gamma$ be an arc with nonempty interior. Then, for $Q$ small enough, the attached disks $\Phi (c,t,\zeta)$, $t \neq 0$, meet $M$ transversally as $\zeta \rightarrow \tau \in \gamma_0$. 
\end{lem} \begin{proof} For the normal derivative $D_{\mathop n\limits^ \to }$ at the points $\tau \in \gamma_0$ we have $$ \text{Im}\, D_{\mathop n\limits^ \to }\Phi(c,t,\tau) = D_{\mathop n\limits^ \to }h^*(c,t,\tau) + tD_{\mathop n\limits^ \to }v(\tau) $$ and $$ \left|\text{Im}\, D_{\mathop n\limits^ \to } \Phi(c,t,\tau)\right| \geq \parallel t \parallel b - O(\varepsilon)\parallel t \parallel = \parallel t \parallel (b - O(\varepsilon)), $$ where $$ b := \mathop {\inf }\limits_{\gamma_0} \left| {D_{\mathop n\limits^ \to } v\left( \tau \right)} \right| > 0 \,\, \text{and} \,\,\, \varepsilon = \text{sup}\left\{\left\|c\right\| + \left\|t\right\| : \,\, c \in Q_c,\, t \in Q_t \right\}. $$ It follows that if $O(\varepsilon) < \frac{b}{2}$, then \begin{equation} \left|\text{Im}\, D_{\mathop n\limits^ \to } \Phi(c,t,\tau)\right| \geq \parallel t \parallel b/2 \,\,\,\,\, \forall \tau \in \gamma_0, \end {equation} i.e. the disks $\Phi(c,t,\zeta)$ meet $M$ transversally as $\zeta \rightarrow \tau \in \gamma_0$. \end{proof} \begin{cor} Let $Q' = \{ \parallel t \parallel = \sigma \} \subset Q_t$, where $\sigma > 0$. Then there exist a neighborhood $\Omega' \supset \gamma_0$ and a constant $C > 0$ such that \begin {eqnarray} d\,_\C (\zeta,\gamma_0) & \leq & C d_{\C^n} [\Phi(c,t,\zeta ),M] \,\,\,\\ d_{\C^n} [\Phi(c,t,\zeta ), \Phi(c,t,\gamma_0 )] & \leq & C d_{\C^n} [\Phi(c,t,\zeta ),M], \nonumber \end{eqnarray} $\forall \,\, \zeta \in \Omega = \overline {\U \cap \Omega'}$, $t \in Q'$, $c \in \overline{Q}_c$. Here $d\,_\C$ and $d_{\C^n}$ are the Euclidean distances on $\C$ and $\C^n$, respectively. 
\end{cor} \begin{proof} The statement clearly follows from (4.6), because for every fixed $t^0 \in Q'$, $c^0 \in \overline Q_c$ we can write (4.7), which then holds in some neighborhoods $B_c \ni c^0$, $B_t \ni t^0$; the corollary follows by a standard compactness argument. \end{proof} \begin{lem} For every $\Omega' \supset \gamma_0$ and every $Q'_t = \{\parallel t \parallel = \sigma \} \subset Q_t$, with $\sigma > 0$ small enough, the closed set $W = \{ \Phi(c,t,\zeta ) \in \C^n : c \in \overline Q_c,\, t \in Q'_t,\, \zeta \in \Omega = \overline {\U \cap \Omega'} \}$ contains the point $0 \in M$ in its interior in $\C^n$, i.e. $0 \in \dot{W}$. \end{lem} \begin{proof} By (4.3), $X(c,t,\zeta) \equiv c$ if $t = 0$. Since $X$ is smooth, for a small enough fixed $t^0$ and an arbitrary fixed $\tau^0 \in \gamma_0$, the image $\{X(c,t^0,\tau^0) : \, c \in Q_c\}$ contains $0 \in \R^n$. It follows that $0 \in W$. Moreover, $\dot{W} \neq \emptyset$, and if for some $\|t^0\| \leq \sigma$, $\zeta^0 \in \ove \U$, \begin{equation} X(c, t^0, \zeta^0) \in \frac{1}{2} Q_c, \,\, \text{then} \,\, c \in Q_c. \end{equation} Now we assume by contradiction that $0 \in \partial W$. Then $\C^n \setminus W$ is open and contains $0$ on its boundary. It is clear that near $0$ there exists a point $p^0 = (x^0,y^0) \in {\partial \dot{W} \setminus M}$ such that $x^0 \in \frac{1}{2} Q_c$ and $p^0 = \Phi(c^0,t^0,\zeta^0)$ for some $c^0 \in \ove{Q}_c$, $\parallel t^0 \parallel = \sigma$, $\zeta^0 \in \Omega' \cap \U$. For simplicity we may assume that $t^0 = (0,...,0,\sigma)$ and set $'c = \left(c_1 ,...,c_{n - 1}\right)$, $'t = \left(t_1 ,...,t_{n - 1}\right)$. 
From (4.8) it also follows that $c^0 \in Q_c$. We consider the transformation \begin{eqnarray} S \left({'c} ,{'t},\zeta\right) = \Phi \left({'c},c_n^0,{'t},t_n^0,\zeta\right) : \,\, {'Q} \times \overline \U \longrightarrow \C^n, \end{eqnarray} where $'Q := \{(c,t) \in Q : \,\, c_n = c_n^0,\ t_n = t_n^0\} \subset \R^{2n - 2}$. Then $S\left( {'c^0,'t^0 ,\zeta ^0}\right) = p^0$ and its Jacobian is given by $$ J('c,'t,\zeta) = \hat{J}(c,t,\zeta)|_{c_n = c^0_n,\ t_n = t^0_n}, $$ where $$ \hat{J}(c,t,\zeta) = \left| {\begin{matrix} {\frac{{\partial X_1 }} {{\partial c_1 }}} & {...} & {\frac{{\partial X_{n - 1} }} {{\partial c_1 }}} & | & {\frac{{\partial Y_1 }} {{\partial c_1 }}} & {...} & {\frac{{\partial Y_{n - 1} }} {{\partial c_1 }}} & | & {\frac{{\partial X_n }} {{\partial c_1 }}} & {\frac{{\partial Y_n }} {{\partial c_1 }}} \\ {...} & {...} & {...} & | & {...} & {...} & {...} & | & {...} & {...} \\ {\frac{{\partial X_1 }} {{\partial c_{n - 1} }}} & {...} & {\frac{{\partial X_{n - 1} }} {{\partial c_{n - 1} }}} & | & {\frac{{\partial Y_1 }} {{\partial c_{n - 1} }}} & {...} & {\frac{{\partial Y_{n - 1} }} {{\partial c_{n - 1} }}} & | & {\frac{{\partial X_n }} {{\partial c_{n - 1} }}} & {\frac{{\partial Y_n }} {{\partial c_{n - 1} }}} \\ { - - } & { - - } & { - - } & | & { - - } & { - - } & { - - } & | & { - - } & { - - } \\ {\frac{{\partial X_1 }} {{\partial t_1 }}} & {...} & {\frac{{\partial X_{n - 1} }} {{\partial t_1 }}} & | & {\frac{{\partial Y_1 }} {{\partial t_1 }}} & {...} & {\frac{{\partial Y_{n - 1} }} {{\partial t_1 }}} & | & {\frac{{\partial X_n }} {{\partial t_1 }}} & {\frac{{\partial Y_n }} {{\partial t_1 }}} \\ {...} & {...} & {...} & | & {...} & {...} & {...} & | & {...} & {...} \\ {\frac{{\partial X_1 }} {{\partial t_{n - 1} }}} & {...} & {\frac{{\partial X_{n - 1} }} {{\partial t_{n - 1} }}} & | & {\frac{{\partial Y_1 }} {{\partial t_{n - 1} }}} & {...} & {\frac{{\partial Y_{n - 1} }} {{\partial t_{n - 1} }}} & | & {\frac{{\partial X_n }} {{\partial t_{n - 1} }}} & {\frac{{\partial Y_n }} {{\partial t_{n - 1} }}} \\ { - - } & { - - } & { - - } & | & { - - } & { - - } & { - - } & | & { - - } & { - - } \\ {\frac{{\partial X_1 }} {{\partial \zeta '}}} & {...} & {\frac{{\partial X_{n - 1} }} {{\partial \zeta '}}} & | & {\frac{{\partial Y_1 }} {{\partial \zeta '}}} & {...} & {\frac{{\partial Y_{n - 1} }} {{\partial \zeta '}}} & | & {\frac{{\partial X_n }} {{\partial \zeta '}}} & {\frac{{\partial Y_n }} {{\partial \zeta '}}} \\ {\frac{{\partial X_1 }} {{\partial \zeta ''}}} & {...} & {\frac{{\partial X_{n - 1} }} {{\partial \zeta ''}}} & | & {\frac{{\partial Y_1 }} {{\partial \zeta ''}}} & {...} & {\frac{{\partial Y_{n - 1} }} {{\partial \zeta ''}}} & | & {\frac{{\partial X_n }} {{\partial \zeta ''}}} & {\frac{{\partial Y_n }} {{\partial \zeta ''}}} \\ \end{matrix} } \right|. $$ Here $\zeta = \zeta' + i \zeta''$ and $Y_k (c,t,\zeta) = h_k^{*}\left( c ,t,\zeta \right) + t_k v\left( \zeta\right)$, $k = 1, \cdots, n$. The determinant $\hat{J}$ is composed of $9$ blocks $D_{ij}$, $i,j = 1,2,3$. We will show that $J ('c^0,'t^0,\zeta^0) \neq 0$, which will imply that the operator $S$ is a local diffeomorphism in a neighborhood of the point $\left( {'c}^0 ,{'t}^0 ,\zeta^0\right)$. 
Indeed, by (4.3), $X\left(c,0,\zeta\right) \equiv c$, $h^{*}\left( {c,0,\zeta} \right) \equiv h\left( c \right)$, and then $$ \left| {\begin{matrix} {D_{1 1}} & {D_{1 2}} \\ {D_{2 1}} & D_{2 2} \\ \end{matrix} } \right|_{(c,0,\zeta)} = D_{11} \cdot D_{22} = v^{n - 1} \left( {\zeta } \right) $$ and \begin{equation} \left| {\begin{matrix} {D_{1 1}} & {D_{1 2}} \\ {D_{2 1}} & D_{2 2} \\ \end{matrix} } \right|_{(c,t,\zeta)} = v^{n - 1} \left( {\zeta } \right) + O\left(\varepsilon\right), \end{equation} where we recall that $\varepsilon = \text{sup}\left\{\left\|c\right\| + \left\|t\right\| : \,\, c \in Q_c,\, t \in Q_t \right\}$. Note also that $$ D_{3 3} = \left| {\begin{matrix} {\frac{{\partial X_n }}{{\partial \zeta' }}} & {\frac{{\partial Y_n }} {{\partial \zeta'}}} \\ {\frac{{\partial X_n }} {{\partial \zeta''}}} & {\frac{{\partial Y_n }} {{\partial \zeta'' }}} \\ \end{matrix} } \right| = \left|\frac{ d }{d \zeta}(X_n + i Y_n)\right|^2. $$ Now consider the right hand side near the arc $\gamma$. It is clear that for every $s > 0$ there is an open set $\tilde{\Omega} \supset \gamma_0$ such that \begin{equation} \left|\frac{ d }{d \zeta}(X_n + i Y_n) (c,t,\zeta) \right|^2 \geq \left| D_{\tau} X_n (c,t,\tau)\right|^2 - s, \quad \forall \zeta \in \U \cap \tilde{\Omega}, \ \tau \in \gamma_0. \end{equation} We now calculate $D_\tau X\left( {c,t,\tau } \right)$ for $\left({c,t} \right) \in Q$, $\tau \in \T$: \begin{equation} D_\tau X\left( {c,t,\tau } \right) = - D_\tau \Im\, h \circ X\left( {c,t,\tau } \right) - tD_\tau \Im\, v\left( \tau \right). \end{equation} Since $D_\tau \Im\, v\left( \tau \right) = D_{\mathop n\limits^ \to } v\left( \tau \right)$ by the Cauchy-Riemann equations, where $D_{\mathop n\limits^ \to }$ is the derivative in the normal direction $\mathop n\limits^ \to$, (4.12) implies $$ D_\tau X\left( {c,t,\tau } \right) + tD_{\mathop n\limits^ \to } v\left( \tau \right) = -\, \Im\, D_\tau\, h \circ X\left( {c,t,\tau } \right). 
$$ For the $k$-th coordinate of the vector $X\left( \tau \right) = X\left( {c,t,\tau } \right)$ we have \begin{equation}\begin{array}{ll} \left\| {D_\tau X_k \left(c,t, \tau \right) + t_{k}D_{\mathop n\limits^ \to } v\left( \tau \right)} \right\| = \left\| {\Im\, D_\tau h_{k} \circ X\left(c,t, \tau \right)} \right\| \leqslant \\ \leqslant {const}\left\| {D_\tau h_{k} \circ X\left(c,t, \tau \right)} \right\| \leqslant O\left( \varepsilon \right)\left\| {D_\tau X\left(c,t, \tau \right)} \right\|. \end{array} \end{equation} Therefore, \begin{equation}\begin{array}{ll} \left| {t_k D_{\mathop n\limits^ \to } v\left( \tau \right)} \right| - O\left( \varepsilon \right)\left\| t \right\| \leqslant \left| {D_\tau X_k \left( {c,t,\tau } \right)} \right| \leqslant \\ \leqslant \left| {t_k D_{\mathop n\limits^ \to } v\left( \tau \right)} \right| + O\left( \varepsilon \right)\left\| t \right\|\,\,,\,\,1 \leqslant k \leqslant n\,\,,\,\,\tau \in \T. \end{array} \end{equation} The second part of (4.14) implies \begin{equation} \left\| {D_\tau X\left( {c,t,\tau } \right)} \right\| \leqslant C\left\| t \right\|\,\,,\,\,\left( {c,t,\tau } \right) \in Q \times \T, \end{equation} where $C$ is a constant. As in Lemma 4.1, if $b = \mathop {\inf }\limits_{\gamma_0} \left| {D_{\mathop n\limits^ \to } v\left( \tau \right)} \right| > 0$ and $O(\varepsilon) < \frac{b}{2}$, then the first part of $(4.14)$ implies \begin{equation} \vert D_{\tau} X_k (c,t,\tau)\vert \geq \vert t_k\vert b - \Vert t \Vert b \slash 2, \end{equation} for $\tau \in \gamma_0$, $1 \leq k \leq n$. 
By $(4.10)$ and $(4.11)$ it follows that \begin{eqnarray*} \left|J ('c,'t,\zeta)\right| &= & \left|D_{1 1}\right| \cdot \left|D_{2 2}\right| \cdot \left|\frac{ d }{d \zeta}(X_n + i Y_n) ('c,'t,\zeta)\right|^2 + O (\varepsilon) \\ &\geq & \left[v^{n - 1} (\zeta) + O (\varepsilon)\right] \cdot \left[|t_n b \slash 2|^2 - s \right] + O (\varepsilon), \end{eqnarray*} for all $('c,'t,\zeta) \in {'Q} \times \left[\U \cap \Omega'\right]$, because $\parallel t^0 \parallel = |t^0_n|$. We can take $\tilde{\Omega} \cap \Omega'$ instead of $\Omega$ and observe that all the functions $O (\cdot)$ do not depend on $\zeta$. Therefore, if we take $\varepsilon, s$ small enough, then $\left|J ('c^0,'t^0,\zeta^0)\right| > 0$. Since the plane $\{t_n = t^0_n\}$ is tangent to the sphere $\|t\| = \sigma$ at the point $t^0$, the Jacobian of the restriction $\breve{S}('c,'t,\zeta) = \Phi \left('c, c^0_n, {'t}, \sqrt{\sigma^2 - |t_1|^2 - ... - |t_{n-1}|^2}, \zeta \right)$ is also nonzero at the point $('c^0,'t^0,\zeta^0)$. In particular, the operator $$ \breve{S} : \,\, U_1 \times U_2 \times U_3 \rightarrow U (p) $$ is a homeomorphism, where $U_1 \subset \R^{n-1}$ is a neighborhood of the point $'c^0$, $U_2$ is a neighborhood of $t^0$ in $Q'_t$, and $U_3 = \{\vert \zeta - \zeta^0 \vert < \sigma'\} \subset \Omega$, $\sigma' > 0$, is a neighborhood of $\zeta^0$. It follows that the open set $U(p) \subset W$, which contradicts the choice of $p^0 \in \partial \dot{W}$. \end{proof} \section {Proof of the main Theorem} First we observe that, from the results of Edigarian-Wiegerinck [EW10] and of the authors [SZ12], it follows that $M \subset \C^n$ is a pluriregular set. Indeed, it was proved in [SZ12] that a set of full measure in a generic manifold $M$ is non-plurithin. Since the set $P = \{z \in M : V^*_M(z) > 0 \}$, where $V^*_M(z)$ is the Green function of $M$, is pluripolar by Bedford and Taylor ([BT82]), it has zero measure (see [Sa76, C92]), and then the set $M \setminus P$ is non-plurithin. Therefore $V^*_M \equiv 0$ on $M$, i.e. $M$ is pluriregular. 
Note that in [EW10] the non-thinness of $M \setminus P$ was proved for a $C^1$-smooth manifold $M$ and a pluripolar set $P \subset M$, which implies that an arbitrary $C^1$-smooth generic manifold is pluriregular. Our main theorem will be a consequence of the following result, thanks to Lemma 2.3. \begin{thm} Any $C^2$-smooth generic submanifold $M \subset \C^n$ is $\Lambda_1$-pluriregular. \end{thm} \begin{proof} We first reduce to the case of a totally real submanifold. Fix a point, say $z^0 = 0 \in M$. Changing holomorphic coordinates in $\C^n$, we can assume that the tangent space $T_0 M$, which by definition is not contained in any complex hyperplane, can be written as $$ T_0 M = \{z = x + i y \in \C^n : y_1 = \cdots = y_{2 n - m} = 0\}. $$ Hence, for a small neighborhood $G = G_1 \times G_2$ of the origin with $$G_1 = \{ (x,y'') = (x, y_{2 n - m + 1}, ... , y_n) \in \R^n \times \R^{m - n} : \vert x \vert \leq \delta, \vert y''\vert < \delta \},$$ $$G_2 = \{ y' = (y_1, \cdots, y_{2n - m}) \in \R^{2 n - m} : \vert y'\vert < \delta\},$$ we can represent $M$ as a graph $$ M \cap G = \{z \in G : y' = h (x,y'') \}, $$ where $h$ is a $C^2$-smooth mapping from $G_1$ into $G_2$. Observe that for each small enough $y''_0$ the intersection $M \cap \Pi\{y''_0\}$ of $M$ with the plane $\Pi\{y''_0\} := \{ z \in \C^n : y'' = y''_0\}$ is an $n$-dimensional generic manifold. Moreover, since the Green function is monotone, i.e. $V(z,E_1) \geq V(z,E_2)$ for $E_1 \subset E_2$, it is enough to prove the theorem in the case when $M$ is generic of dimension $n$, hence totally real of dimension $n$. In this case, we show the local $\Lambda_1$-pluriregularity of $M$, using the results of Section 4. Fix a point $p \in M$ and a ball $B(p) = B_x \times B_y \subset \C^n$ centered at the point $p$ such that $M_p := M \cap B (p)$ is the graph of a $C^2$-smooth function. 
Then by Corollary 4.2, for an arbitrary fixed small $\sigma >0$ there exist a neighborhood $\Omega' \supset \gamma_0$ and a constant $C>0$, depending on the point $p$, such that the inequalities (4.7) hold. By Lemma 4.3, $p \in O (p)$, where $O (p) = W^0$ is the interior of the set $W=W(p,\Omega', \sigma)$ constructed in Lemma 4.3. Fix a point $z^0 \in O (p) \setminus M $ and an analytic disc $ \Phi(c,t,\zeta )$ with $\Phi(c^0,t^0,\zeta^0)=z^0,$ where $c^0 \in \overline Q_c,\, t^0\in Q'_t, \zeta^0 \in \U \cap \Omega'$. Then the function $V_{M_p}\circ \Phi(c^0,t^0,\cdot)$ is subharmonic in $\U$ and $V_{M_p}\circ \Phi|_\gamma\equiv 0$. Let $C'' =\mathop {\max }\limits_{B (p)} V_{M_p}(z) <\infty.$ By the theorem of two constants we have \begin{equation} V_{M_p}\circ \Phi (c^0,t^0,\zeta)\leq C'' [\omega^*(\zeta,\gamma,\U)+1],\,\,\zeta\in \U. \end{equation} Therefore the first part of Lemma 3.1 and (4.7) yield the inequality \begin{eqnarray} V_{M_p}(z^0) = V_{M_p} \circ \Phi (c^0,t^0,\zeta^0) &\leq& C'' \left[\omega^*(\zeta^0,\gamma,\U)+1\right] \nonumber \\ &\leq& C' C'' d_{\C}(\zeta^0,\gamma_0)\leq C_p d_{\C^n}(z^0,M_p), \end{eqnarray} for all $z^0 \in O(p)$, where $C_p := C C' C''$ depends on the fixed point $p \in M$ and on the corresponding family of analytic discs attached to $M$ locally in a neighborhood of $p.$ Now given a compact set $K \subset M$ we can apply the previous estimate to each point of $K$. Then by compactness we can find a finite number of points $p_1, \ldots, p_k$ of $K$, a finite number of balls $B (p_1), \ldots, B(p_k)$ and a finite number of open sets $ O (p_1), \ldots, O (p_k)$ such that \begin{eqnarray*} V_{M_p} (z) \leq C_p d_{\C^n}(z,M_p), \end{eqnarray*} for any $z \in O (p)$ and $p = p_1, \ldots , p_k$.
Now observe that $O = \cup_{1 \leq i \leq k} O (p_i)$ is a neighborhood of $K$ and, shrinking the open sets $O (p_i)$ slightly, we can assume that for any $p=p_i$ and $z \in O (p)$ we have $d_{\C^n}(z,M_p) \leq d (z, M).$ Since $V_M \leq V_{M_p}$, it follows that $ V_{M} (z) \leq A d_{\C^n}(z,K), $ for any $z \in O$. \end{proof} \section{Open problems } Let $D\subset M$ be a domain with $C^1$-smooth boundary $\partial D.$ Lemma 4.2 states that a neighborhood of the generic manifold locally coincides with the interior $\dot{W}$ of the set $W=\{ \Phi(c,t,\zeta ): c\in \overline Q_c,\, t\in Q'_t, \zeta \in \Omega =\overline {\U \cap \Omega'} \}.$ It seems clear, at least intuitively, that if here, instead of $\Omega$, we take its part $\Omega_{a,\alpha},\,\,\alpha>0$ (see Lemma 3.1), then $\dot{W}$ should contain some wedge $$\left\{z\in \C^n: \,\, d_{\C^n}\left(z, M \right)< C_\alpha \cdot d_{\C^n}\left(z, \partial D \right)\right\},$$ where $C_\alpha >0$ is a constant. If this is true then we could prove: {\it the closure $\overline D$ of an arbitrary $C^1$-smooth domain $D$ in a $C^2$-smooth generic manifold is pluriregular, i.e. the Green function $V^*(z, \overline D)$ is continuous in} $\, \C^n.$ The proof easily follows from the well-known criterion of pluriregularity (see [Sa80]) and from the following lemma. \vskip 0.3 cm \textbf {Lemma 6.1.} {\it If $f(\lambda)$ is a $C^1$-smooth function on $[0,1] \subset \R$, then for every $\varepsilon >0$ there exists a polynomial $p(\lambda)$ such that} $$ | p(\lambda)-f(\lambda)| \leq \varepsilon \lambda,\,\, \lambda\in [0,1]. $$ The authors do not know any proof of the following \vskip 0.3 cm \textbf {Conjecture:} Let $ D \subset M $ be a bounded domain in $M$ with smooth boundary. Then $\overline{D} \subset \C^{n}$ is $\Lambda_{1 \slash 2}\,$-pluriregular, i.e. its pluricomplex Green function $V(z,\overline{D})$ is H\"older continuous of order $1 \slash 2$ in $\C^n$.
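For completeness, here is one possible argument for Lemma 6.1 (our sketch, not taken from the text); it uses only the Weierstrass approximation theorem applied to $f'$:

```latex
% Sketch (ours). Since f' is continuous on [0,1], the Weierstrass theorem gives
% a polynomial q with \sup_{[0,1]} |q - f'| \le \varepsilon. Define the polynomial
p(\lambda) \;=\; f(0) \,+\, \int_0^{\lambda} q(s)\,ds .
% Then, for every \lambda \in [0,1],
\bigl|\,p(\lambda)-f(\lambda)\,\bigr|
   \;=\; \Bigl|\int_0^{\lambda}\bigl(q(s)-f'(s)\bigr)\,ds\Bigr|
   \;\le\; \varepsilon\,\lambda .
```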
\vskip 0.3 cm We note that if $M$ is a real analytic generic manifold then the conjecture is true. \begin{center}\textbf{ References} \end{center} \vskip 0.3 cm \noindent [B65] E. Bishop: {\it Differentiable manifolds in complex Euclidean spaces}. Duke Math. J., V.32 (1965), no.1, 1-21. \noindent [BT82] E. Bedford, B. A. Taylor: {\it A new capacity for plurisubharmonic functions}. Acta Math., V.149 (1982), no. 1-2, 1-40. \noindent [C92] B. Coupet: {\it Construction de disques analytiques et r\'egularit\'e de fonctions holomorphes au bord}. Math. Z., V.209 (1992), no.2, 179-204. \noindent [DNS] T.-C. Dinh, V.-A. Nguy\^en, N. Sibony: {\it Exponential estimates for plurisubharmonic functions and stochastic dynamics}. J. Differential Geom., V.84 (2010), no. 3, 465-488. \noindent [EW10] A. Edigarian, J. Wiegerinck: {\it Shcherbina's theorem for finely holomorphic functions}. Math. Z., V.266 (2010), no.2, 393-398. \noindent [FS92] J. E. Fornaess, N. Sibony: {\it Complex H\'enon mappings in $\C^2$ and Fatou-Bieberbach domains}. Duke Math. J., V.65 (1992), 345-380. \noindent [GZ07] V. Guedj, A. Zeriahi: {\it The weighted Monge-Amp\`{e}re energy of quasiplurisubharmonic functions}. J. Funct. Anal., V.250 (2007), 442-482. \noindent [HC76] G. M. Henkin, E. M. Chirka: {\it Boundary properties of holomorphic functions of several variables}. J. Math. Sci., V.5 (1976), 612-687. \noindent [Kl91] M. Klimek: {\it Pluripotential theory}. London Mathematical Society Monographs. New Series, 6. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1991. \noindent [Kol98] S. Kolodziej: {\it The complex Monge-Amp\`ere equation}. Acta Math., V.180 (1998), 69-117. \noindent [Kos97] M. Kosek: {\it H\"older continuity property of filled-in Julia sets in $\C^n$}. Proc. Amer. Math. Soc., V.125 (1997), no. 7, 2029-2032. \noindent [PP86] W. Pawlucki, W. Plesniak: {\it Markov's inequality and $C^ {\infty}$ functions on sets with polynomial cusps}. Math. Ann., V.275 (1986), no. 3, 467-480.
\noindent [Pi74] S. Pinchuk: {\it A boundary-uniqueness theorem for holomorphic functions of several complex variables}. Mat. Zametki, V.15 (1974), no.2, 205-212. \noindent [Sa76] A. Sadullaev: {\it A boundary-uniqueness theorem in $\C^n$}. Mat. Sbornik, V.101(143) (1976), no.4, 568-583 = Math. USSR Sb., V.30 (1976), 510-524. \noindent [Sa80] A. Sadullaev: {\it P-regularity of sets in $\C^n$}. Lect. Notes in Math., V.798 (1980), 402-407. \noindent [Sa81] A. Sadullaev: {\it Plurisubharmonic measure and capacity on complex manifolds}. Uspehi Mat. Nauk, V.36(220) (1981), no.4, 53-105 = Russian Math. Surveys, V.36 (1981), 61-119. \noindent [SZ12] A. Sadullaev, A. Zeriahi: {\it Subsets of full measure in a generic submanifold in $\C^n$ are non-plurithin}. Math. Z., V.274 (2013), 1155-1163. \noindent [Si62] J. Siciak: {\it On some extremal functions and their applications in the theory of analytic functions of several complex variables}. Trans. Amer. Math. Soc., V.105 (1962), 322-357. \noindent [Si81] J. Siciak: {\it Extremal plurisubharmonic functions in ${\bf C}^{n}$}. Ann. Polon. Math., V.39 (1981), 175-211. \noindent [Si97] J. Siciak: {\it Wiener's type sufficient conditions in $\C^N$}. Univ. Iagel. Acta Math., V.35 (1997), 47-74. \noindent [Za74] V. P. Zahariuta: {\it Extremal plurisubharmonic functions, Hilbert scales, and the isomorphism of spaces of analytic functions of several variables. I, II}. (Russian) Teor. Funkcii Funkcional. Anal. i Prilozhen., V.19 (1974), 133-157; V.21 (1974), 65-83. \noindent [Za76] V. P. Zahariuta: {\it Extremal plurisubharmonic functions, orthogonal polynomials and Bernstein-Walsh theorems for analytic functions of several complex variables}. Ann. Polon. Math., V.33 (1976/77), no. 1-2, 137-148. \noindent [Ze87] A. Zeriahi: {\it Meilleure approximation polynomiale et croissance des fonctions enti\`eres sur certaines vari\'et\'es alg\'ebriques affines}. Ann. Inst. Fourier (Grenoble), V.37 (1987), no.2, 79-104. \noindent [Ze91] A.
Zeriahi: {\it Fonction de Green pluricomplexe \`a p\^ole \`a l'infini sur un espace de Stein parabolique et applications}. Math. Scand., V.69 (1991), no.1, 89-126. \noindent [Ze93] A. Zeriahi: {\it In\'egalit\'es de Markov et d\'eveloppement en s\'erie de polyn\^omes orthogonaux des fonctions $C^{\infty}$ et $A^{\infty}$}. Several complex variables (Stockholm, 1987/1988), 683-701, Math. Notes, 38, Princeton Univ. Press, Princeton, NJ, 1993. \noindent [Ze01] A. Zeriahi: {\it Volume and capacity of sublevel sets of a Lelong class of plurisubharmonic functions}. Indiana Univ. Math. J., V.50 (2001), no.1, 671-703. \\ \noindent A. Sadullaev, National University of Uzbekistan, \\ 100174 Tashkent, Uzbekistan\\ {[email protected]} \\ \noindent A. Zeriahi, Institut de Math\'ematiques de Toulouse \\ Universit\'e Paul Sabatier, 118 Route de Narbonne, \\ 31062 Toulouse \\ [email protected] \end{document}
\begin{document} \baselineskip 18pt \title{Classification of polarized deformation quantizations} \author {Joseph Donin\thanks{Partially supported by Israel Academy of Sciences Grant no. 8007/99-01 }\\ {\normalsize Dept. of Math., Bar-Ilan University}} \date{} \maketitle \begin{abstract} We give a classification of polarized deformation quantizations on a symplectic manifold with a (complex) polarization. We also establish a formula which relates the characteristic class of a polarized deformation quantization to its Fedosov class and the Chern class of the polarization. \end{abstract} \tableofcontents \section{Introduction} Let $(M,\omega)$ be a symplectic manifold and $\mathcal{T}_M$ its {\em complexified} tangent bundle. It is known that equivalence classes of deformation quantizations on $(M,\omega)$ are in one-to-one correspondence with their Fedosov classes, the elements of $\omega+tH^2(M,\mathbb{C}[[t]])$. The set $\omega+tH^2(M,\mathbb{C}[[t]])$ may be interpreted in the following way. Let ${\mathcal X}$ be the set of formal closed 2-forms on $M$ of the form $\omega+t\omega_1+t^2\omega_2+\cdots$. Let $Aut(M)$ be the group of formal automorphisms of $M$ of the form $e^{tX}$, where $X=X_0+tX_1+t^2X_2+\cdots$, $X_i\in \mathcal{T}_M$, is a formal vector field. Since $\omega$ is nondegenerate, it is easy to see that the orbit of an element $\omega_t\in{\mathcal X}$ under the action of $Aut(M)$ is $\omega_t+t\cdot d(\Gamma(M,\mathcal{T}_M^*))$. It follows that $\omega+tH^2(M,\mathbb{C}[[t]])$ may be identified with the set of orbits in ${\mathcal X}$ under the $Aut(M)$-action. So, the equivalence classes of deformation quantizations on $(M,\omega)$ are in one-to-one correspondence with the orbits in ${\mathcal X}$. In this paper, we extend this picture to polarized deformation quantizations (PDQ). Let $(M,\omega,\mathcal{P})$ be a polarized symplectic manifold, i.e. $\mathcal{P}$ is a Lagrangian integrable subbundle of the complexified tangent bundle to $M$.
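Returning briefly to the orbit description above: the claim that the $Aut(M)$-orbit of $\omega_t$ is $\omega_t+t\cdot d(\Gamma(M,\mathcal{T}_M^*))$ can be seen, to first order in $t$, from Cartan's formula (this computation is our sketch, not spelled out in the text):

```latex
% Sketch (ours): first-order action of e^{tX} on the closed form \omega_t.
(e^{tX})^*\omega_t \;=\; \omega_t + t\,L_X\omega_t + O(t^2)
                   \;=\; \omega_t + t\,d(\iota_X\omega_t) + O(t^2),
% using L_X = d\,\iota_X + \iota_X\, d and d\omega_t = 0. Since \omega is
% nondegenerate, X \mapsto \iota_X\omega_t is surjective onto 1-forms, so the
% orbit sweeps out \omega_t + t\, d(\Gamma(M,\mathcal{T}_M^*)); higher orders
% in t are handled recursively.
```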
A PDQ on $(M,\omega,\mathcal{P})$ is a pair $(\mathbb{A}_t,\mathbb{O}_t)$, where $\mathbb{A}_t$ is a deformation quantization on $(M,\omega)$ and $\mathbb{O}_t$ is a commutative $t$-adically complete subalgebra of $\mathbb{A}_t$ such that $\mathbb{O}_0=\mathcal{O}_\mathcal{P}$, the algebra of functions constant along $\mathcal{P}$. Let ${\mathcal Y}$ denote the set of pairs $(\omega_t,\mathcal{P}_t)$, where $\omega_t\in {\mathcal X}$ and $\mathcal{P}_t$ is a polarization of $\omega_t$ such that $\mathcal{P}_0=\mathcal{P}$. Our result is that the equivalence classes of PDQ's on $(M,\omega,\mathcal{P})$ are in one-to-one correspondence with the orbits in ${\mathcal Y}$ under the $Aut(M)$-action. Let us describe this correspondence more precisely. First, we show that any PDQ is equivalent to a polarized star-product (PSP). By a PSP we mean a triple $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$, where $(\mathbb{C}M,\mu_t)$ is a star-product, $\mathcal{O}_t=\mathcal{O}_{\mathcal{P}_t}$, the algebra of functions from $\mathbb{C}M$ constant along a deformed polarization $\mathcal{P}_t$, and the multiplication $\mu_t$ satisfies the condition $\mu_t(f,g)=fg$ (the usual multiplication) for $f\in\mathcal{O}_t$, $g\in\mathbb{C}M$. Further, we assign to any PSP $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ a pair $(\omega_t,\mathcal{P}_t)\in{\mathcal Y}$ in the following way. We put $\mathcal{P}_t=\mathcal{P}_{\mathcal{O}_t}$, the sheaf of formal vector fields annihilating $\mathcal{O}_t$. The form $\omega_t$ is equal, locally, to $\sum_i dy_i\wedge dx_i$, where $x_i\in\mathcal{O}_t$, $y_i\in\mathbb{C}M$, $i=1,\ldots,\frac{1}{2}\dim M$, are Darboux coordinates with respect to $[,]=[,]_{\mu_t}$, the commutator of $\mu_t$; namely $[x_i,x_j]=[y_i,y_j]=0$, $[y_i,x_j]=\delta_{ij}$. It turns out that $\omega_t$ is well defined, i.e. does not depend on the choice of local Darboux coordinates. Denote the constructed map from PSP's to ${\mathcal Y}$ by $\tau$.
The map $\tau$ turns out to descend to an isomorphism between the set of classes of PDQ's and the set $\bigl[{\mathcal Y}\bigr]$ of orbits in ${\mathcal Y}$. So, we obtain the following commutative diagram of maps: \be{}\label{funCD} \begin{CD} \{\mbox{PSP's}\}&@>\tau>>&{\mathcal Y}\\ @VVV & & @VVV\\ \{\mbox{classes of PDQ's}\}&@>>>&\bigl[{\mathcal Y}\bigr], \end{CD} \ee{} where the left downward arrow is an epimorphism and the bottom arrow is an isomorphism of sets. We show that the top arrow $\tau$ is an epimorphism with the following properties. 1) Two PSP's are equivalent if and only if their images with respect to $\tau$ lie on the same orbit. 2) Two PSP's $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ and $(\mathbb{C}M,\widetilde\mu_t,\widetilde{\mathcal{O}}_t)$ have the same image with respect to $\tau$ if and only if $\mathcal{O}_t=\widetilde{\mathcal{O}}_t$ and $[,]_{\mu_t}=[,]_{\widetilde\mu_t}$. 3) Let $(\omega_t,\mathcal{P}_t)=\tau(\mathbb{C}M,\mu_t,\mathcal{O}_t)$. Then, the 2-form \be{}\label{twoform} \theta_t=\omega_t+\frac{t}{2}\, tr(\nabla^2|_{\mathcal{P}_t}) \ee{} represents the Fedosov class of the star-product $(\mathbb{C}M,\mu_t)$. Here $\nabla$ is a connection on $M$ preserving $\omega_t$ and $\mathcal{P}_t$, and flat on $\mathcal{P}_t$ along $\mathcal{P}_t$. We prove that such a connection always exists and that for it $tr(\nabla^2|_{\mathcal{P}_t})$ belongs to $\Gamma(M,d\mathcal{P}^\perp_t)$. By definition, $\omega_t$ belongs to $\omega+t\Gamma(M,d\mathcal{P}_t^\perp)$. So, it follows from (\ref{twoform}) that the Fedosov class of the star-product $(\mathbb{C}M,\mu_t)$ can be represented by a 2-form belonging to $\omega+t\Gamma(M,d\mathcal{P}_t^\perp)$ as well. In particular, both $(M,\omega_t,\mathcal{P}_t)$ and $(M,\theta_t,\mathcal{P}_t)$ are formal polarized symplectic manifolds that are deformations of $(M,\omega,\mathcal{P})$. Another consequence of (\ref{twoform}) is the following. Let $\mathbb{A}_t$ be a deformation quantization on $(M,\omega)$.
Suppose its Fedosov class $cl_F(\mathbb{A}_t)$ is represented by a 2-form $\theta_t$ that has a polarization $\mathcal{P}_t$. Then $\mathbb{A}_t$ can be extended to a PDQ $(\mathbb{A}_t,\mathbb{O}_t)$, i.e. there exists a commutative subalgebra $\mathbb{O}_t\subset \mathbb{A}_t$ isomorphic to $\mathcal{O}_{\mathcal{P}_t}$. There is the following interpretation of the image $[\omega_t]$ of the element $\omega_t$ in the $\mathbb{C}[[t]]$-module $\bigl(\Gamma(M,d\mathcal{P}^\perp)/d(\Gamma(M,\mathcal{P}^\perp))\bigr)[[t]]$. Let $F(\mu_t,\mathcal{O}_t)=\{a\in\mathbb{C}M;\ [a,\mathcal{O}_t]_{\mu_t}\subset\mathcal{O}_t\}$. Then, there is the following exact sequence of $\mathcal{O}_t$-module and Lie algebra sheaves: \be{}\label{seqOL} \begin{CD} 0@>>>\mathcal{O}_t@>>>F(\mu_t,\mathcal{O}_t)@>>>Der(\mathcal{O}_t)@>>>0. \end{CD} \ee{} According to \cite{BB} and \cite{BK}, $F(\mu_t,\mathcal{O}_t)$ is called an $\mathcal{O}_t$-extension of $Der(\mathcal{O}_t)$. Equivalence classes of such extensions are described by their extension classes, which are elements of $\bigl(\Gamma(M,d\mathcal{P}^\perp)/d(\Gamma(M,\mathcal{P}^\perp))\bigr)[[t]]$. We show that $[\omega_t]$ is just the extension class of (\ref{seqOL}). Analogously, $-[tr(\nabla^2|_{\mathcal{P}_t})]$ is the extension class of the extension \be{}\label{seqOB} \begin{CD} 0@>>>\mathcal{O}_t@>>>\widetilde{T}_{\det(\mathcal{P}_t)}@>>>Der(\mathcal{O}_t)@>>>0, \end{CD} \ee{} where $\widetilde{T}_{\det(\mathcal{P}_t)}$ is the sheaf of $\mathcal{O}_t$-differential operators of order at most one on the $\mathcal{O}_t$-line bundle $\det(\mathcal{P}_t)$. Note that $-tr(\nabla^2|_{\mathcal{P}_t})$ divided by $2\pi\sqrt{-1}$ represents the first Chern class of $\mathcal{P}$, \cite{KN}. So, formula (\ref{twoform}) gives a relation between the Fedosov and extension classes of a PDQ. Among results related to ours we mention the following. In \cite{RY}, N. Reshetikhin and M. Yakimov considered the case of a real polarization on $M$ defined by a Lagrangian fiber bundle $M\to B$.
In \cite{Ka1}, A. Karabegov constructed star-products with separation of variables on K\"ahler manifolds. This case corresponds to two polarizations on $M$ defined by holomorphic and anti-holomorphic vector fields. In the case of quantization on a K\"ahler manifold with separation of variables the class of $\omega_t$ in $H^2(M,\mathbb{C}[[t]])$ coincides with the class defined by Karabegov in \cite{Ka1}. A formula relating the Karabegov and Fedosov classes in the case of K\"ahler manifolds was found in \cite{Ka2}, see also \cite{KS}, \cite{Ne}. Our proof of the existence of a polarized star-product associated with any orbit in ${\mathcal Y}$ uses the Fedosov method adapted to the case with polarization. An analogous method was applied by M. Bordemann and S. Waldmann, \cite{BW}, for constructing a quantization with separation of variables on a K\"ahler manifold. Another approach to proving a formula relating the Fedosov and extension classes, using the Deligne classes, was presented in \cite{BD}. Unfortunately, there is a deficiency in the proof of Lemma 4.3 of that paper relating the extension and Deligne classes; however, the proof becomes correct for PSP's with the same polarization. The paper is organized as follows. In Section 2, we study cohomologies of the differential Hochschild complex on $M$ in the presence of a distribution. Also, we prove a version of the Kostant-Hochschild-Rosenberg theorem for functions constant along a distribution. We use these results later in proving that any PDQ is equivalent to a PSP. In Section 3, we introduce the notion of a $\mathbb{C}$-{\em symplectic manifold}, which will be convenient for our considerations. This notion is a generalization of the notion of a symplectic manifold. Namely, we suppose that the symplectic form $\omega$ on a $\mathbb{C}$-symplectic manifold is a {\em complex} one and, locally, there exist {\em complex} Darboux coordinates with respect to $\omega$.
For a usual symplectic manifold, when $\omega$ is real, such coordinates exist by the Darboux theorem. In this section, we establish some facts on $\mathbb{C}$-symplectic manifolds with polarization. By a polarization of $\omega$ we mean a Lagrangian subbundle, $\mathcal{P}$, of the {\em complexified} tangent bundle on $M$ such that, locally on $M$, there exist Darboux coordinates $x_i, y_i$, $i=1,\ldots,\frac{1}{2}\dim M$, where $x_i\in\mathcal{O}_\mathcal{P}$ for all $i$. So, (pseudo-)K\"ahler manifolds as well as purely real polarizations are included in our considerations. Note that from an analog of the ``Dolbeault Lemma'' proved in \cite{Ra} one can derive sufficient conditions for $\mathcal{P}$ to be a complex polarization of $\omega$. In Section 4, we study properties of formal (or deformed) polarized symplectic manifolds. In Section 5, we prove the existence of a polarized symplectic connection on a formal polarized symplectic manifold, $(M,\omega_t,\mathcal{P}_t)$, and with its help introduce the characteristic class $\widetilde c_1(M,\omega_t,\mathcal{P}_t)$ of a polarized symplectic manifold. In Section 6, we prove some technical statements related to deformations of Poisson brackets on $M$. Such deformations appear, in particular, as commutators of star-products. In Section 7, we study properties of PDQ's. In particular, we prove the important fact that any PDQ is equivalent to a PSP. In Section 8, we define the extension class of a PDQ. Besides, we assign to any PSP an element of ${\mathcal Y}$, and to any class of PDQ's an orbit in ${\mathcal Y}$. We prove that the latter assignment is a monomorphism that, actually, is an isomorphism, as we show in the next section. In Section 9, we prove that each element of ${\mathcal Y}$ corresponds to a PSP. To this end, we adapt the Fedosov method for constructing a PSP corresponding to a given pair $(\omega_t,\mathcal{P}_t)\in{\mathcal Y}$.
By this method, a polarized symplectic connection, $\nabla$, extends to a Fedosov connection on the bundle of Weyl algebras on $M$. This connection has two scalar curvatures: the Weyl curvature, $\theta_t$, and the Wick curvature, which turns out to be just $\omega_t$. We show that these curvatures differ from each other by $\frac{t}{2}tr(\nabla^2|_{\mathcal{P}_t})$, which immediately proves (\ref{twoform}). In Section 10, we formulate the main theorem collecting the results of the paper and give some corollaries. {\bf Acknowledgments.} I thank J. Bernstein, P. Bressler, B. Fedosov, A. Karabegov, and A. Mudrov for helpful discussions. \section{Complex distributions} \label{subsec2.2} For a smooth manifold $M$ we will denote by $\mathbb{C}m$ the sheaf of {\em complex valued} smooth functions on $M$ and by $\mathcal{T}^{\mathbb{C}}_M=\mathcal{T}_M\otimes_\mathbb{R}\mathbb{C}$ the complexified tangent bundle on $M$. We say that a set of smooth functions $x_i$, $i=1,\ldots,\dim M$, given on an open subset $U\subset M$ forms a system of (complex) coordinates on $U$ if $dx_i$ are linearly independent at each point of $U$. Since any 1-form on $U$ can be uniquely written as $\sum_ia_idx_i$, one can define vector fields $\partial/\partial x_i\in\mathcal{T}^{\mathbb{C}}_M$ in the following way. If $f$ is a function on $U$ and $df=\sum_ia_idx_i$, then $(\partial/\partial x_i)f=a_i$. Let $Q$ be a subbundle in a complex vector bundle $E$ over $M$. We denote by $Q^\perp$ the subbundle in $E^*$, the complex dual to $E$, orthogonal to $Q$. If sections $e_i$ form a local frame in $Q$, we set $(e_i)^\perp=Q^\perp$. A {\em (complex) distribution} on a manifold $M$ is a subbundle of $\mathcal{T}^{\mathbb{C}}_M$. \begin{definition} A distribution $\mathcal{P}$ is said to be {\em integrable} if, locally on $M$, there exist (complex valued) functions $f_1,\dots,f_k$ such that $df_1,\dots, df_k$ give a local frame in $\mathcal{P}^\perp$, i.e. $df_i$ are linearly independent at each point and $\mathcal{P}=(df_i)^\perp$.
\end{definition} An integrable distribution $\mathcal{P}$ is obviously involutive, i.e. $[\mathcal{P},\mathcal{P}]\subset\mathcal{P}$. Let $\mathcal{P}$ be an integrable distribution on $M$. We will denote by $\mathcal{O}_\mathcal{P}$ the sheaf of functions on $M$ constant along $\mathcal{P}$, i.e. $f\in\mathcal{O}_\mathcal{P}$ if and only if $Xf=0$ for any vector field $X\in\mathcal{P}$. \subsection{The Kostant-Hochschild-Rosenberg theorem in the presence of a distribution} Let $M$ be a smooth manifold. Let ${\mathcal D}^n$ be the sheaf of $n$-differential operators on $M$ and ${\mathcal D}^\bullet$ the corresponding Hochschild complex with differential $d$. Let $\wedge^\bullet\mathcal{T}$ be the complex of sheaves of polyvector fields on $M$ with zero differential, $\mathcal{T}=\mathcal{T}^{\mathbb{C}}_M$. There is the following ``smooth'' version of the Kostant-Hochschild-Rosenberg theorem, \cite{Ko}, Thm. 4.6.1.1. \begin{propn}\label{propKHR} The natural embedding \be{}\label{KHR} \wedge^\bullet\mathcal{T}\to{\mathcal D}^\bullet \ee{} is a quasiisomorphism of complexes. Moreover, if $\varphi\in{\mathcal D}^n$ is a Hochschild cocycle, then its alternation $Alt(\varphi)$ is a polyvector field of $\wedge^\bullet\mathcal{T}$ cohomologous to $\varphi$. \end{propn} \begin{proof} The arguments of this proof will also be used in proving the next proposition. The proposition is local on $M$, so it is enough to prove it replacing $M$ by an open set $U\subset M$ having complex coordinates $x_i$, $i=1,\ldots,\dim M$. Any differential operator on $U$ may be uniquely presented as a polynomial in $\partial/\partial x_i$ with coefficients being smooth functions on $U$. Hence, ${\mathcal D}^\bullet$ coincides over $U$ with the complex $\mathbb{C}c^\bullet(\mathcal{T})$.
Here, for any vector bundle $E$, we denote by $\mathbb{C}c^\bullet(E)$ the complex $(\otimes^\bullet Sym(E),d)$ with differential of the form $$ d:\otimes^n Sym(E)\to \otimes^{n+1} Sym(E), $$ $$ d(a_1\otimes...\otimes a_n)=1\otimes a_1\otimes...\otimes a_n+ $$ $$ +\sum_{i=1}^n(-1)^ia_1\otimes...\otimes\Delta a_i\otimes...\otimes a_n +(-1)^{n+1}a_1\otimes...\otimes a_n, $$ where $\Delta$ is the comultiplication in the symmetric algebra $Sym(E)$ generated by the rule $\Delta(a)=a\otimes 1+1\otimes a$ for $a\in E$. One has the following well-known statement (see, for example, the proof of Thm. 4.6.1.1 in \cite{Ko}). \begin{lemma}\label{lemKHR} Let $E$ be a vector bundle over $M$. Then the conclusion of Proposition \ref{propKHR} holds for the map \be{}\label{GKHR} \wedge^\bullet E\to\mathbb{C}c^\bullet(E). \ee{} \end{lemma} Applying this lemma to $E=\mathcal{T}$ we prove the proposition. \end{proof} Let $(M,\mathcal{P})$ be a smooth manifold with an integrable distribution. We call an $n$-chain $\nu\in {\mathcal D}^n$ {\em polarized} if $\nu(a_1,...,a_n)=0$ whenever $a_1,...,a_n\in\mathcal{O}_\mathcal{P}$. We call $\nu$ {\em strongly polarized} if $\nu(a_1,...,a_n)=0$ whenever $a_1,...,a_{n-1}\in\mathcal{O}_\mathcal{P}$ and $a_n\in\mathbb{C}m$. \begin{propn}\label{propdop} Let $\nu\in {\mathcal D}^2$ be a polarized Hochschild $2$-cochain such that $d\nu$ is strongly polarized. Then, there exists a polarized differential operator $b$ such that $\nu+db$ is strongly polarized. \end{propn} \begin{proof} Since the subsheaves of polarized and strongly polarized cochains are subsheaves of $\mathbb{C}m$-modules, it is enough to prove the proposition locally on $M$. So, let $U$ be an open set with coordinates $x_i$, $i=1,\ldots,\dim M$, such that $\partial/\partial x_i$, $i=1,\ldots,k$, form a local frame in $\mathcal{P}$. Let $\mathcal{Q}$ be the subbundle in $\mathcal{T}$ generated by $\partial/\partial x_i$, $i=k+1,\ldots,n$. Thus, $\mathcal{T}=\mathcal{Q}\oplus\mathcal{P}$ over $U$.
There is the natural isomorphism of the complex $\mathbb{C}c^\bullet(\mathcal{T})$ with the tensor product of complexes $\mathbb{C}c^\bullet(\mathcal{Q},\mathcal{P})=\mathbb{C}c^\bullet(\mathcal{Q})\otimes\mathbb{C}c^\bullet(\mathcal{P})$, so we can identify $\mathbb{C}c^\bullet(\mathcal{T})$ with $\mathbb{C}c^\bullet(\mathcal{Q},\mathcal{P})$. Similarly, we identify the complex $\wedge^\bullet(\mathcal{T})$ with $\wedge^\bullet(\mathcal{Q},\mathcal{P})=\wedge^\bullet(\mathcal{Q})\otimes\wedge^\bullet(\mathcal{P})$. Thus, the map (\ref{GKHR}) generates the map of complexes \be{}\label{gmap} \wedge^\bullet(\mathcal{Q},\mathcal{P})\to\mathbb{C}c^\bullet(\mathcal{Q},\mathcal{P}). \ee{} The complex $\mathbb{C}c^\bullet(\mathcal{Q},\mathcal{P})$ obviously decomposes into a direct sum of subcomplexes $\mathbb{C}c^\bullet_{k,l}(\mathcal{Q},\mathcal{P})$, where $\mathbb{C}c^\bullet_{k,l}(\mathcal{Q},\mathcal{P})$ consists of elements of total degree $k$ with respect to $\mathcal{P}$ and $l$ with respect to $\mathcal{Q}$. The same is true for $\wedge^\bullet(\mathcal{Q},\mathcal{P})$. Due to Proposition \ref{propKHR}, the map (\ref{gmap}) is a quasiisomorphism of bigraded complexes. It is clear that an element of $\mathbb{C}c^n(\mathcal{Q},\mathcal{P})$ is polarized if it is a sum of tensor monomials having degree $>0$ with respect to $\mathcal{P}$. An element of $\mathbb{C}c^n(\mathcal{Q},\mathcal{P})$ is strongly polarized if it is a sum of tensor monomials of the form $a_1\otimes\cdots\otimes a_n$ where $a_1\otimes\cdots\otimes a_{n-1}$ is polarized. So, we may suppose that the given $\nu$ is a polarized Hochschild cochain on $U$ belonging to $\mathbb{C}c^2(\mathcal{Q},\mathcal{P})$. It can be written as $\nu=\nu_0+\nu_1$, where $\nu_0=\sum (a_i\otimes c_i)$ is the sum of all tensor monomials in $\nu$ such that $a_i\in\mathbb{C}c^1(\mathcal{Q})$, $c_i\in\mathbb{C}c^1(\mathcal{P})$. Let us denote $b=\sum a_ic_i$.
It is clear that $\nu'=\nu+db$ does not contain tensor monomials of that type. The proposition will be proved if we show that $\nu'$ is strongly polarized. Let us prove this. Suppose $\nu'=\nu'_0+\nu'_1$, where $\nu'_0$ is not strongly polarized and $\nu'_1$ is strongly polarized. Then $\nu'_0$ has the form \be{}\label{nu} \nu'_0=\sum (a_i\otimes b_i)(1\otimes c_i), \ee{} where $(a_i\otimes b_i)\in \otimes^2 Sym(\mathcal{Q})$ and $c_i\in Sym(\mathcal{P})$ are linearly independent. Besides, the $b_i$ are of degree $>0$. The element $d\nu=d\nu'=d\nu'_0+d\nu'_1$ is strongly polarized by the hypothesis of the proposition, and $d\nu'_1$, being the coboundary of the strongly polarized element $\nu'_1$, is strongly polarized too. All summands of $d\nu'_0$ whose first two factors are of degree zero with respect to $\mathcal{P}$ are $(d(a_i\otimes b_i)+a_i\otimes b_i\otimes 1)(1\otimes 1\otimes c_i)$. These summands are not strongly polarized. Hence, $\sum_i (d(a_i\otimes b_i)+a_i\otimes b_i\otimes 1)(1\otimes 1\otimes c_i)=0$. Since the elements $1\otimes 1\otimes c_i$ are linearly independent, it follows that $d(a_i\otimes b_i)=-a_i\otimes b_i\otimes 1$ for all $i$, which is only possible if $a_i\otimes b_i=0$ for all $i$. It follows from (\ref{nu}) that $\nu'_0=0$. So, $\nu'$ is equal to $\nu'_1$, which is strongly polarized. \end{proof} \subsection{Differential operators in the presence of a distribution} Let $(M,\mathcal{P})$ be a smooth manifold with an integrable distribution. Let $z_i$, $y_j$ be complex coordinates on an open set $U\subset M$ such that $\mathcal{P}=(dz_i)^\perp$. The vector fields $\partial/\partial y_j$ form a local frame in $\mathcal{P}$. By definition, $\mathcal{O}_\mathcal{P}$ consists of functions $a\in \mathbb{C}m$ such that $da$ has the form $\sum_ia_idz_i$. Since $da$ is closed, $\partial a_i/\partial y_j=0$ for all $i,j$, which implies that $a_i=\partial a/\partial z_i\in\mathcal{O}_\mathcal{P}$ for all $i$.
Let $\mathcal{Q}$ be a subbundle in $\mathcal{T}=\mathcal{T}^{\mathbb{C}}_M$ generated by $\partial/\partial z_j$, so that $\mathcal{T}=\mathcal{Q}\oplus\mathcal{P}$ over $U$. The vector bundle $\mathcal{T}/\mathcal{P}$ may be considered as the sheaf of derivations from $\mathcal{O}_\mathcal{P}$ to $\mathbb{C}m$, $Der(\mathcal{O}_\mathcal{P},\mathbb{C}m)$. Locally, such derivations can be presented in the form $\sum b_i\partial/\partial z_i$, $b_i\in\mathbb{C}m$, i.e. as sections of $\mathcal{Q}$. Denote by $Der(\mathcal{O}_\mathcal{P})$ the $\mathcal{O}_\mathcal{P}$-submodule of $Der(\mathcal{O}_\mathcal{P},\mathbb{C}m)$ consisting of operators which take $\mathcal{O}_\mathcal{P}$ to itself. It is clear that $Der(\mathcal{O}_\mathcal{P},\mathbb{C}m)=\mathbb{C}m\otimes_{\mathcal{O}_\mathcal{P}} Der(\mathcal{O}_\mathcal{P})$. Locally, elements of $Der(\mathcal{O}_\mathcal{P})$ have the form $\sum_ia_i\partial/\partial z_i$, $a_i\in\mathcal{O}_\mathcal{P}$. Let $\wedge^\bullet(\mathcal{T}/\mathcal{P})$ denote the complex of sheaves of polyvector fields on $M$ from $\mathcal{O}_\mathcal{P}$ to $\mathbb{C}m$. Let ${\mathcal D}^\bullet(\mathcal{O}_\mathcal{P},\mathbb{C}m)$ denote the restriction of the Hochschild complex ${\mathcal D}^\bullet$ to $\mathcal{O}_\mathcal{P}$. So, the sheaf ${\mathcal D}^n(\mathcal{O}_\mathcal{P},\mathbb{C}m)$ may be considered as the sheaf of $n$-differential operators from $\mathcal{O}_\mathcal{P}$ to $\mathbb{C}m$. Locally, elements of ${\mathcal D}^1(\mathcal{O}_\mathcal{P},\mathbb{C}m)$, the sheaf of differential operators from $\mathcal{O}_\mathcal{P}$ to $\mathbb{C}m$, can be presented as polynomials in $\partial/\partial z_i$ with smooth coefficients. So, locally on $M$, the complex ${\mathcal D}^\bullet(\mathcal{O}_\mathcal{P},\mathbb{C}m)$ is isomorphic to the complex $\mathbb{C}c^\bullet(\mathcal{Q})$ (see the previous subsection). We will need the following version of the Kostant-Hochschild-Rosenberg theorem. \begin{propn}\label{propKHR1} The natural embedding \be{}\label{imbO} \wedge^\bullet(\mathcal{T}/\mathcal{P})\to{\mathcal D}^\bullet(\mathcal{O}_\mathcal{P},\mathbb{C}m) \ee{} is a quasiisomorphism of complexes.
Moreover, if $\varphi\in{\mathcal D}^n(\mathcal{O}_\mathcal{P},\mathbb{C}m)$ is a cocycle, then $Alt(\varphi)$ is a polyvector field of $\wedge^n(\mathcal{T}/\mathcal{P})$ cohomologous to $\varphi$. \end{propn} \begin{proof} As follows from the above, the embedding (\ref{imbO}) is locally isomorphic to the embedding \be{*} \wedge^\bullet\mathcal{Q}\to\mathbb{C}c^\bullet(\mathcal{Q}). \ee{*} Now the proposition follows from Lemma \ref{lemKHR} with $E=\mathcal{Q}$. \end{proof} \begin{remark} All conclusions of Propositions \ref{propKHR}, \ref{propdop}, \ref{propKHR1} remain true for global sections of the corresponding sheaves, since they are sheaves of $\mathbb{C}m$-modules. \end{remark} \subsection{Differential forms in the presence of a distribution} Let $(M,\mathcal{P})$ be a smooth manifold with an integrable distribution. The sheaf $\mathcal{P}^\perp=(\mathcal{T}/\mathcal{P})^*$ of differential forms on $M$ which give zero when applied to vector fields from $\mathcal{P}$ may be written as $\mathbb{C}m\, d\mathcal{O}_\mathcal{P}$. Denote $\Omega_{\mathcal{O}_\mathcal{P}}^1=Hom_{\mathcal{O}_\mathcal{P}}(Der(\mathcal{O}_\mathcal{P}),\mathcal{O}_\mathcal{P})$, the sheaf of 1-forms on $\mathcal{O}_\mathcal{P}$. It is clear that $\Omega_{\mathcal{O}_\mathcal{P}}^1=\mathcal{O}_\mathcal{P}\, d\mathcal{O}_\mathcal{P}$. Denote by $\Omega_{\mathcal{O}_\mathcal{P}}^{1cl}$ the subsheaf of closed forms of $\Omega_{\mathcal{O}_\mathcal{P}}^1$. \begin{lemma}\label{lemma2.2} a) The sequence of sheaves \be{}\label{seqomega} \begin{CD} 0 @>>> \mathbb{C} @>>> \mathcal{O}_\mathcal{P} @>d>> \Omega_{\mathcal{O}_\mathcal{P}}^{1cl} @>>> 0 \end{CD} \ee{} is exact. b) The sequence of sheaves \be{}\label{seqphi} \begin{CD} 0 @>>> \Omega_{\mathcal{O}_\mathcal{P}}^{1cl} @>>> \mathcal{P}^\perp @>d>> d\mathcal{P}^\perp @>>> 0 \end{CD} \ee{} is exact. \end{lemma} \begin{proof} Let the functions $z_i$, $y_j$ form local coordinates on $M$ with $\mathcal{P}=(dz_i)^\perp$. Let us prove a). It is sufficient to establish the exactness at the third term of (\ref{seqomega}). Let $\alpha=\sum_i a_idz_i\in\Omega_{\mathcal{O}_\mathcal{P}}^{1cl}$.
Since $\alpha$ is closed, there exists, locally, $f\in C^\infty_M$ such that $df=\alpha$. Since $\alpha$ does not contain terms of the form $g\,dy_j$, one has $\partial f/\partial y_j=0$ for all $j$. Hence, $f\in\mathcal{O}_\mathcal{P}$. To prove b), it is sufficient to establish exactness at $\mathcal{P}^\perp$. To this end, suppose $\beta=\sum_i b_idz_i\in\mathcal{P}^\perp$ and $d\beta=0$. Since $\beta$ is closed, $\partial b_i/\partial y_j=0$ for all $i,j$. This means that all $b_i\in\mathcal{O}_\mathcal{P}$. Thus, $\beta\in\Omega_{\OO_\PP}^1$ and is closed, i.e. $\beta\in\Omega_{\OO_\PP}^{1cl}$. \end{proof} \begin{propn}\label{fundiso} Let $(M,\mathcal{P})$ be a smooth manifold with an integrable distribution. Then, there is a natural isomorphism \be{}\label{fundiso1} H^1(M,\Omega_{\OO_\PP}^{1cl})\simeq \Gamma(M,d\mathcal{P}^\perp)/d(\Gamma(M,\mathcal{P}^\perp)). \ee{} \end{propn} \begin{proof} This is an immediate consequence of the cohomological exact sequence for (\ref{seqphi}) and of $H^i(M,\mathcal{P}^\perp)=0$ for $i>0$. \end{proof} \section{$\mathbb{C}$-symplectic manifolds and their polarizations} \subsection{$\mathbb{C}$-symplectic manifolds} \begin{definition} By a {\em $\mathbb{C}$-symplectic manifold} we mean a pair $(M,\omega)$, where $M$ is a smooth manifold and $\omega$ a closed nondegenerate {\em complex} 2-form on $M$ satisfying the following condition: each point of $M$ has a neighborhood $U$ and complex coordinates $x_i,y_i$, $i=1,...,\frac{1}{2}\dim M$, on $U$ such that the form $\omega$ on $U$ can be presented as \be{} \omega=\sum_idy_i\wedge dx_i. \ee{} \end{definition} If the form $\omega$ is real, such a presentation is possible by the Darboux theorem. One has \be{}\label{darb} \{x_i,x_j\}=\{y_i,y_j\}=0, \ \ \ \{y_i,x_j\}=\delta_{ij} \ee{} for all $i,j$, where $\{\cdot,\cdot\}$ is the Poisson bracket inverse to $\omega$.
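A simple illustration (a standard example, recorded here for concreteness): on $M=\mathbb{R}^2$ with real coordinates $q,p$ and the real symplectic form $\omega=dp\wedge dq$, the complex functions $x=q+ip$, $y=p$ satisfy
$$dy\wedge dx=dp\wedge(dq+i\,dp)=dp\wedge dq=\omega,\qquad \{y,x\}=\{p,q\}=1,$$
so they realize the presentation above although neither function is real. Note that $(dx)^\perp$ is spanned by the complex vector field $\partial/\partial q+i\,\partial/\partial p$, so the functions annihilated by it are the holomorphic functions of $q+ip$.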
We call such functions $x_i,y_i$ {\em Darboux coordinates} with respect to $\omega$ (or $\{\cdot,\cdot\}$). In what follows we deal only with $\mathbb{C}$-symplectic manifolds, so we simply call them symplectic ones. \subsection{Polarization} \begin{definition}\label{def1.1} Let $(M,\omega)$ be a symplectic manifold. We call a (complex) distribution $\mathcal{P}$ on $M$ a polarization of $\omega$ if, locally, there exist (complex) Darboux coordinates $x_i$, $y_i$ with respect to $\omega$ such that $\mathcal{P}=(dx_i)^\perp$, i.e. $x_i\in\mathcal{O}_\mathcal{P}$. We call the triple $(M,\omega,\mathcal{P})$ a {\em polarized symplectic manifold} (PSM). \end{definition} It follows that a polarization of $\omega$ is, in particular, an integrable distribution and a Lagrangian subbundle with respect to $\omega$. \begin{propn} Let $(M,\omega,\mathcal{P})$ be a PSM. Then $\omega\in\Gamma(M,d\mathcal{P}^\perp)$. \end{propn} \begin{proof} Let $x_i$, $y_i$ be local Darboux coordinates on $M$ such that $\mathcal{P}=(dx_i)^\perp$ and $\omega=\sum dy_i\wedge dx_i$. Then, locally, $\omega=d(\sum y_i dx_i)$ and $\sum y_i dx_i\in\mathcal{P}^\perp$. \end{proof} \begin{propn}\label{propmax} Let $(M,\omega,\mathcal{P})$ be a PSM. Then, $\mathcal{O}_\mathcal{P}$ is a maximal commutative Lie subalgebra of $C^\infty_M$ with respect to the Poisson bracket $\omega^{-1}$. \end{propn} \begin{proof} It follows from Definition \ref{def1.1} that, locally, the bracket $\omega^{-1}$ may be written in the form \be{}\label{eq10} \{\cdot,\cdot\}=\sum_i\bar\partial_i\wedge\partial_i, \ee{} where $\partial_i=\{y_i,\cdot\}$, $\bar\partial_i=\{\cdot,x_i\}$. The module $\mathcal{O}_\mathcal{P}$ consists, locally, of elements $g\in C^\infty_M$ such that $\{g,x_i\}=\bar\partial_ig=0$ for all $i$. Substituting two such elements $g_1,g_2$ into (\ref{eq10}), we obtain $\{g_1,g_2\}=0$. So $\mathcal{O}_\mathcal{P}$ is commutative.
The maximality of $\mathcal{O}_\mathcal{P}$ is obvious. Indeed, if $a\in C^\infty_M$ commutes with $\mathcal{O}_\mathcal{P}$, then, in particular, $\{x_i,a\}=0$ for all $i$, hence $a\in\mathcal{O}_\mathcal{P}$. \end{proof} \section{Deformations of a polarized symplectic manifold} \subsection{Formal everything} Let $t$ be a formal parameter. We will consider on $M$ formal functions, formal vector fields, formal forms, etc., which are elements of $C^\infty_M[[t]]$, $\TT^\C_M[[t]]$, $\Phi^k[[t]]$, etc. In the formal case all sheaves over $M$ and their morphisms will be sheaves and morphisms of $\mathbb{C}[[t]]$-modules. Let $\mathcal{B}$ be a sheaf over $M$. We call the map $\sigma:\mathcal{B}[[t]]\to \mathcal{B}$, $b_0+tb_1+\cdots\mapsto b_0$, the {\em symbol map}. For a subsheaf $\mathcal{F}_t\subset\mathcal{B}[[t]]$, we denote $\mathcal{F}_0=\sigma(\mathcal{F}_t)\subset\mathcal{B}$. Let $\mathcal{F}_t$ be a subsheaf of $\mathcal{B}[[t]]$. We call $\mathcal{F}_t$ a $t$-{\em regular} subsheaf if it is complete in the $t$-adic topology and $tb\in\mathcal{F}_t$, $b\in\mathcal{B}[[t]]$, implies $b\in\mathcal{F}_t$. For a $t$-regular subsheaf $\mathcal{F}_t\subset\mathcal{B}[[t]]$ the natural map $\mathcal{F}_t/t\mathcal{F}_t\to\mathcal{F}_0$ is an isomorphism. In this case we call $\mathcal{F}_t$ a {\em deformation} of $\mathcal{F}_0$. Let $\mathcal{B}$ be a vector bundle over $M$. Then, a subsheaf $\mathcal{F}_t\subset\mathcal{B}[[t]]$ is called a (formal) subbundle if it is a $t$-regular subsheaf of $C^\infty_M[[t]]$-modules. All notions and statements above carry over to the formal case. For example, a deformation of a distribution $\mathcal{P}$ on $M$ is a subbundle, $\mathcal{P}_t$, of $\TT^\C_M[[t]]$ such that $\mathcal{P}_0=\mathcal{P}$. A system of (formal) coordinates on an open set $U\subset M$ is a set of formal functions $x_i=x_{i0}+tx_{i1}+\cdots$, $i=1,...,\dim M$, whose differentials are $\mathbb{C}[[t]]$-linearly independent at each point of $U$.
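To illustrate $t$-regularity (an example not used later): the subsheaf $\mathcal{O}_\mathcal{P}[[t]]\subset C^\infty_M[[t]]$ is $t$-regular and is a deformation of $\mathcal{O}_\mathcal{P}$. By contrast, $\mathcal{F}_t=\mathcal{O}_\mathcal{P}+tC^\infty_M[[t]]$ is complete in the $t$-adic topology but not $t$-regular: one has $tb\in\mathcal{F}_t$ for every $b\in C^\infty_M[[t]]$, while $b$ itself need not lie in $\mathcal{F}_t$; accordingly, the natural map $\mathcal{F}_t/t\mathcal{F}_t\to\mathcal{F}_0=\mathcal{O}_\mathcal{P}$ fails to be an isomorphism.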
A formal polarized symplectic manifold is a triple $(M,\omega_t,\mathcal{P}_t)$, where $\omega_t$ is a formal symplectic form on $M$ (i.e. a closed 2-form of the form $\omega_t=\omega_0+t\omega_1+\cdots$ with nondegenerate $\omega_0$) and $\mathcal{P}_t$ is a polarization of $\omega_t$ in the sense of Definition \ref{def1.1}, i.e., locally, there exist formal Darboux coordinates $x_i$, $y_i$ with respect to $\omega_t$ such that $\mathcal{P}_t=(dx_i)^\perp$. We say that $(M,\omega_t,\mathcal{P}_t)$ is a deformation of a polarized symplectic manifold $(M,\omega,\mathcal{P})$ if $(M,\omega_0,\mathcal{P}_0)=(M,\omega,\mathcal{P})$. The following proposition, which follows from Propositions \ref{prop12} and \ref{prop11} below, shows that any deformation of a (polarized) symplectic manifold is a formal (polarized) symplectic manifold. \begin{propn} a) Let $\omega_t$ be a formal closed 2-form. If $\omega_0$ admits, locally, Darboux coordinates, then they admit lifts which are formal Darboux coordinates for $\omega_t$. b) Let $\mathcal{P}_t$ be an integrable distribution which is Lagrangian with respect to $\omega_t$. Let $\mathcal{P}_0$ be a polarization of $\omega_0$. Then $\mathcal{P}_t$ is a polarization of $\omega_t$. \end{propn} \subsection{Local structure of deformed polarizations} It is clear that formal vector fields $tX_1+t^2X_2+\cdots\in t\TT^\C_M[[t]]$ form a sheaf of pro-nilpotent Lie algebras. It follows that the elements $e^{tX}$, $X\in \TT^\C_M[[t]]$, form a sheaf of pro-unipotent Lie groups of formal automorphisms of $M$. Let $x_i$ be formal coordinates on $U$ and $a_i$, $i=1,...,\dim M$, arbitrary formal functions on $U$. Then there exists a derivation, $D$, of $C^\infty_M[[t]]$ that takes $x_i$ to $a_i$, namely $D=\sum_i a_i\partial/\partial x_i$. This implies the following \begin{lemma}\label{lemma1.1} Let $x_i$, $x'_i$, $i=1,...,\dim M$, be two systems of formal coordinates on an open set $U\subset M$, and $x_i=x'_i \mod t$.
Then, there exists a formal automorphism on $U$ that takes $x_i$ to $x'_i$. \end{lemma} \begin{propn}\label{prop19} a) Let $\mathcal{P}_t$ be an integrable distribution on $M$ which is a deformation of a distribution $\mathcal{P}$. Then, locally, there exists a formal vector field, $X$, such that $e^{tX}$ gives an isomorphism of $\mathcal{P}_t$ with $\mathcal{P}[[t]]$. b) Let $(M,\omega_t,\mathcal{P}_t)$ be a deformation of a polarized symplectic manifold $(M,\omega,\mathcal{P})$. Then, each point of $M$ has a neighborhood, $U$, and a formal vector field $X$ on $U$ such that $e^{tX}$ gives an isomorphism of $(M,\omega_t,\mathcal{P}_t)|_U$ with the trivial deformation $(U,\omega,\mathcal{P}[[t]])$. \end{propn} \begin{proof} a) Locally, there exist functions $x_{it}=x_{i0}+tx_{i1}+\cdots$, $i=1,...,k$, such that $\mathcal{P}_t=(dx_{it})^\perp$ and hence $\mathcal{P}=(dx_{i0})^\perp$. Let us add functions $x_{(k+1)0},...,x_{n0}$ in such a way that all $x_{i0}$, $i=1,...,n$, form a coordinate system. According to Lemma \ref{lemma1.1}, there exists a formal automorphism which takes the coordinates $x_{it}$, $i=1,...,k$, $x_{j0}$, $j=k+1,...,n$, to the coordinates $x_{i0}$, $i=1,...,n$. This formal automorphism obviously gives an isomorphism of $\mathcal{P}_t$ onto $\mathcal{P}[[t]]$. b) Let $U$ be an open set in $M$ on which there exist Darboux coordinates $x_{it}=x_{i0}+tx_{i1}+\cdots$, $y_{it}=y_{i0}+ty_{i1}+\cdots$, $i=1,...,\frac{1}{2}\dim M$, with $\mathcal{P}_t=(dx_{it})^\perp$. Then, $x_{i0}$, $y_{i0}$ are Darboux coordinates with respect to $\omega_0=\omega$ such that $\mathcal{P}=(dx_{i0})^\perp$. By Lemma \ref{lemma1.1} there exists a formal vector field $X$ on $U$ such that the formal automorphism $e^{tX}$ takes the coordinates $x_{it}$, $y_{it}$ to $x_{i0}$, $y_{i0}$. Such an $X$ satisfies the conclusion of the proposition. \end{proof} Let $\mathcal{P}$ be an integrable distribution. Denote by $\mathcal{O}_\mathcal{P}$ the sheaf of functions constant along $\mathcal{P}$.
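As a simple instance of part a) of the proposition above (an explicit check on the plane): let $M=\mathbb{R}^2$ with coordinates $x,y$, $\mathcal{P}=(dx)^\perp$, and $\mathcal{P}_t=(d(x+ty))^\perp$. For the vector field $X=-y\,\partial/\partial x$ one has $X(x)=-y$, $X(y)=0$, and $X^2(x)=0$, so the formal automorphism $e^{tX}$ acts on functions by
$$x\mapsto x-ty,\qquad y\mapsto y,$$
whence $e^{tX}(x+ty)=x$, and $e^{tX}$ takes $\mathcal{P}_t$ onto $\mathcal{P}[[t]]=(dx)^\perp$.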
Let $\mathcal{P}_t$ be a deformation of $\mathcal{P}$. It follows from the previous proposition that, locally, the pair $(C^\infty_M[[t]],\mathcal{O}_{\mathcal{P}_t})$ is isomorphic to the pair $(C^\infty_M[[t]],\mathcal{O}_\mathcal{P}[[t]])$; hence $\mathcal{O}_{\mathcal{P}_t}$ is a $t$-regular subalgebra of $C^\infty_M[[t]]$. The following converse assertion holds. \begin{propn}\label{propdis} Let $\mathcal{P}$ be an integrable distribution on $M$. Let $\mathcal{O}_t$ be a $t$-regular subalgebra of $C^\infty_M[[t]]$ such that $\mathcal{O}_0=\mathcal{O}_\mathcal{P}$. Then, there exists a deformation, $\mathcal{P}_t$, of $\mathcal{P}$ such that $\mathcal{O}_t=\mathcal{O}_{\mathcal{P}_t}$. \end{propn} \begin{proof} Let us prove that $d\mathcal{O}_t\subset (\TT^\C_M)^*[[t]]$ is $t$-regular. Let $b\in(\TT^\C_M)^*[[t]]$ and $tb\in d\mathcal{O}_t$. Then, there exists $a=a_0+ta'\in\mathcal{O}_t$ such that $da=tb$. It follows that $a_0$ is a constant, so $ta'\in\mathcal{O}_t$. Since $\mathcal{O}_t$ is $t$-regular, $a'\in\mathcal{O}_t$, too. Therefore, $b=da'\in d\mathcal{O}_t$, so that $d\mathcal{O}_t$ is a $t$-regular submodule of $(\TT^\C_M)^*[[t]]$. This implies that $C^\infty_M[[t]]\, d\mathcal{O}_t$ is a subbundle of $(\TT^\C_M)^*[[t]]$, so $\mathcal{P}_t=(d\mathcal{O}_t)^\perp$ is a subbundle of $\TT^\C_M[[t]]$. Moreover, $\mathcal{P}_0=\mathcal{P}$. So, $\mathcal{P}_t$ is a deformation of $\mathcal{P}$. One has $\mathcal{O}_t\subset\mathcal{O}_{\mathcal{P}_t}$. Since these two subalgebras are $t$-regular and coincide at $t=0$, we have $\mathcal{O}_t=\mathcal{O}_{\mathcal{P}_t}$. \end{proof} The last proposition shows that there is a one-to-one correspondence between deformations of an integrable distribution $\mathcal{P}$ and deformations of $\mathcal{O}_\mathcal{P}$. \subsection{Action of formal automorphisms on a symplectic form} Let $(M,\omega_t)$ be a formal symplectic manifold, $\mathcal{T}=\TT^\C_M$.
The well known formula for the Lie derivative $$L_X=i(X)d+di(X)$$ implies \be{} L_X\omega_t=d\alpha(X), \ee{} where $\alpha: \mathcal{T}[[t]]\to \mathcal{T}^*[[t]]$ is the map defined by $X\mapsto\omega_t(X,\cdot)$. Since $\alpha$ is an isomorphism, we have the following lemma. \begin{lemma}\label{lemma1} The orbit of $\omega_t$ under the action of the group of formal automorphisms $e^{tX}$, $X\in\Gamma(M,\mathcal{T}[[t]])$, is $\omega_t+td(\Gamma(M,\mathcal{T}^*[[t]]))$. \end{lemma} The lemma shows that the orbit of $\omega_t$ corresponds to the cohomology class of $\omega_t$ in $\omega_0+tH^2(M,\mathbb{C}[[t]])$. Let $(M,\omega_t,\mathcal{P}_t)$ be a formal PSM and $X\in\Gamma(M,\mathcal{P}_t)$. Since $\alpha(\mathcal{P}_t)=\mathcal{P}_t^\perp$, one has $L_X\omega_t=d\alpha(X)\in \Gamma(M,d\mathcal{P}_t^\perp)$. The same argument as above implies \begin{lemma}\label{lemma3} Under the action of the group generated by the $e^{tX}$, $X\in\Gamma(M,\mathcal{P}_t)$, the orbit of $\omega_t$ is $\omega_t+td(\Gamma(M,\mathcal{P}_t^\perp))$. \end{lemma} The lemma shows that the orbit of $\omega_t$ corresponds to the class of $\omega_t$ in $\Gamma(M,d\mathcal{P}_t^\perp)/t\,d(\Gamma(M,\mathcal{P}_t^\perp))$. \section{Polarized symplectic connection and characteristic class of a polarized symplectic manifold}\label{PSC} \subsection{Polarized symplectic connection} Let $(M,\omega,\mathcal{P})$ be a (formal) PSM. Denote, for brevity, $\mathcal{T}=\TT^\C_M[[t]]$. \begin{definition} We call a connection, $\nabla$, on $M$ a $\mathcal{P}$-{\em symplectic connection} if a) it preserves $\omega$ and is torsion free, i.e. is a symplectic connection; b) it preserves $\mathcal{P}$, i.e. $\nabla_X(\mathcal{P})\subset\mathcal{P}$ for any $X\in\mathcal{T}$; c) it is flat on $\mathcal{P}$ along $\mathcal{P}$, i.e. for any $X,Y\in\mathcal{P}$ one has $(\nabla_X\nabla_Y-\nabla_Y\nabla_X-\nabla_{[X,Y]})(\mathcal{P})=0$.
\end{definition} \begin{propn}\label{prop1.6} Let $(M,\omega,\mathcal{P})$ be a (formal) PSM. Then, there exists a $\mathcal{P}$-symplectic connection on $M$. \end{propn} \begin{proof} Let functions $a_1,...,a_{2n}$, $2n=\dim M$, form local Darboux coordinates on an open set $U\subset M$ and be such that $a_i\in\mathcal{O}_\mathcal{P}$ for $i=1,...,n$. Let $X_i=X_{a_i}$ be the corresponding Hamiltonian vector fields. Then, the vector fields $X_i$, $i=1,...,n$, form a local frame in $\mathcal{P}$. Also, all the $X_i$, $i=1,...,2n$, commute and form a local frame in $\mathcal{T}$. Let $\nabla$ be the standard flat connection on $U$ associated with the coordinates $a_i$. This connection is defined on $U$ by the rule $\nabla_{X_i}X_j=0$. It is easy to see that $\nabla$ is a $\mathcal{P}$-symplectic connection on $U$. Moreover, since $X_f\in\mathcal{P}$ is equivalent to $df\in\mathcal{P}^\perp$, the connection $\nabla$ satisfies the following property for Hamiltonian vector fields: \be{}\label{hamilt} \nabla_{X_f} X_g=0 \quad\quad \mbox{for\ } X_f,X_g\in\mathcal{P}. \ee{} Now, let us prove the existence of a global connection. Let $\{U_\alpha\}$ be an open covering of $M$ such that on each $U_\alpha$ there is a $\mathcal{P}$-symplectic connection $\nabla_\alpha$ as above. Then the differences $\nabla_\alpha-\nabla_\beta$ defined on $U_\alpha\cap U_\beta$ form a \v Cech cocycle $\psi_{\alpha,\beta}\in Hom(\mathcal{T}\otimes\mathcal{T}, \mathcal{T})$, $\psi_{\alpha,\beta}(X,Y)=\nabla_{\alpha X}Y-\nabla_{\beta X}Y$. The elements $\psi_{\alpha,\beta}$ satisfy the following properties. As follows from (\ref{hamilt}), \be{}\label{hamilt1} \psi_{\alpha,\beta}(X,Y)=0 \quad\quad \mbox{for\ } X,Y\in\mathcal{P}. \ee{} Since all $\nabla_\alpha$ are torsion free, the $\psi_{\alpha,\beta}$ are symmetric. Since all $\nabla_\alpha$ preserve $\mathcal{P}$, one has $\psi_{\alpha,\beta}(X,Y)\in \mathcal{P}$ for $Y\in\mathcal{P}$.
In addition, the $\psi_{\alpha,\beta}$, considered as elements of $Hom(\mathcal{T}, Hom(\mathcal{T},\mathcal{T}))$, $X\mapsto\psi_{\alpha,\beta}(X,\cdot)$, belong to $Hom(\mathcal{T},\mathfrak{sp}(\mathcal{T}))$, where $\mathfrak{sp}(\mathcal{T})$ consists of the endomorphisms of $\mathcal{T}$ preserving $\omega$. Since all the properties above are $C^\infty_M[[t]]$-linear, one can find, using a partition of unity, tensors $\psi_\alpha\in Hom(\mathcal{T}\otimes\mathcal{T}, \mathcal{T})$ satisfying all of them and such that $\psi_\alpha-\psi_\beta=\psi_{\alpha,\beta}$. Then $\nabla=\nabla_\alpha-\psi_\alpha=\nabla_\beta-\psi_\beta$ is a globally defined connection. Flatness of $\nabla$ on $\mathcal{P}$ along $\mathcal{P}$ follows from the fact that the tensors $\psi_\alpha$ satisfy property (\ref{hamilt1}) for all $\alpha$, i.e. $\psi_\alpha(X,Y)=0$ for $X,Y\in\mathcal{P}$. Also, $\nabla$ is torsion free because all $\psi_\alpha$ are symmetric. So, $\nabla$ satisfies the proposition. \end{proof} \subsection{Characteristic class of a polarized symplectic manifold} \label{subsec5.2} Let $\nabla$ be a $\mathcal{P}_t$-symplectic connection on a formal polarized symplectic manifold $(M,\omega_t,\mathcal{P}_t)$. Let us denote by $\nabla^2|_{\mathcal{P}_t}$ the curvature of $\nabla$ restricted to $\mathcal{P}_t$. Then, $tr(\nabla^2|_{\mathcal{P}_t})$ is a closed 2-form on $M$ which represents, up to a constant factor, the first Chern class of $\mathcal{P}_0$. \begin{lemma}\label{lempolform} Let $\nabla$ be a $\mathcal{P}_t$-symplectic connection. Then $tr(\nabla^2|_{\mathcal{P}_t})$ belongs to $\Gamma(M,d\mathcal{P}^\perp_t)$. If $\nabla_1$ is another $\mathcal{P}_t$-symplectic connection, then $tr(\nabla_1^2|_{\mathcal{P}_t})$ differs from $tr(\nabla^2|_{\mathcal{P}_t})$ by an element of $d(\Gamma(M,\mathcal{P}^\perp_t))$. \end{lemma} \begin{proof} This follows from the flatness of $\mathcal{P}_t$ along $\mathcal{P}_t$ with respect to the connection.
\end{proof} The lemma allows us to consider the element of $\Gamma(M,d\mathcal{P}_t^\perp)/d(\Gamma(M,\mathcal{P}_t^\perp))$ represented by the form $tr(\nabla^2|_{\mathcal{P}_t})$, where $\nabla$ is a $\mathcal{P}_t$-symplectic connection, as a characteristic class of the polarized symplectic manifold $(M,\omega_t,\mathcal{P}_t)$. Due to Proposition \ref{fundiso}, one can also consider this class as an element of $H^1(M,\Omega^{1cl}_{\mathcal{O}_{\mathcal{P}_t}})$. \section{Deformations of Poisson brackets} In this section we prove three technical statements which we use throughout the paper. Let $\pi_0=\{\cdot,\cdot\}$ be a nondegenerate Poisson bracket on a smooth manifold $M$ of dimension $2n$. We say that a formal sum $\pi_t=\pi_0+t\pi_1+\cdots$ is a deformation of $\pi_0$ if all $\pi_i$ are {\em bidifferential operators} on $M$ and $\pi_t$ defines a Lie algebra structure on the sheaf $C^\infty_M[[t]]$. We will also denote $\pi_t$ by $[\cdot,\cdot]$. Let us recall the symbol map $\sigma:C^\infty_M[[t]]\to C^\infty_M$, $a=a_0+ta_1+\cdots\mapsto a_0$. We call $a$ a lift of $a_0$. We say that functions $\hat x_i$, $\hat\xi_i$, $i=1,...,n$, on an open set $U\subset M$ form Darboux coordinates with respect to $[\cdot,\cdot]$ if $[\hat x_j,\hat x_k] = [\hat\xi_j,\hat\xi_k] = 0$, $[\hat\xi_j,\hat x_k] = \delta_{jk}$ for all $j,k$. It is clear that then the functions $x_i=\sigma(\hat x_i)$, $\xi_i=\sigma(\hat\xi_i)$ form Darboux coordinates with respect to $\{\cdot,\cdot\}$. \begin{propn}\label{prop12} Let $[\cdot,\cdot]$ be a deformation of a Poisson bracket $\{\cdot,\cdot\}$ on $M$.
Let $\hat x^{(i)}_1,\ldots,\hat x^{(i)}_n, \hat\xi^{(i)}_1,\ldots,\hat\xi^{(i)}_n\in C^\infty_M[[t]]$, $i=1,2$, be two systems of Darboux coordinates with respect to $[\cdot,\cdot]$ on a contractible open set $U$ in $M$ satisfying \be{*} \sigma(\hat x^{(1)}_j) = \sigma(\hat x^{(2)}_j), \ \ \sigma(\hat\xi^{(1)}_j) = \sigma(\hat\xi^{(2)}_j). \ee{*} Then, there exists $B\in C^\infty_M[[t]]$ on $U$ such that the automorphism $\Phi=\exp(t\cdot ad(B))$, where $ad(B)=[B,\cdot]$, satisfies $\Phi(\hat x^{(1)}_j) = \hat x^{(2)}_j$ and $\Phi(\hat\xi^{(1)}_j) = \hat\xi^{(2)}_j$. \end{propn} \begin{proof} Let $B_0 = 0$. Assume that $B_m$ is such that the automorphism $\Phi_m = \exp(t\cdot ad(B_m))$ satisfies the conclusion of the proposition modulo $t^{m+1}$. This assumption is valid for $m=0$. Then, $\hat x^{(2)}_j = \Phi_m(\hat x^{(1)}_j) + t^{m+1}y_j \mod t^{m+2}$, $\hat\xi^{(2)}_j =\Phi_m(\hat\xi^{(1)}_j) + t^{m+1}\eta_j \mod t^{m+2}$ for suitable $y_j, \eta_j\in C^\infty_M$. The Darboux commutation relations for $\hat x^{(i)}_1,\ldots, \hat x^{(i)}_n,\hat\xi^{(i)}_1,\ldots,\hat\xi^{(i)}_n$ imply that the functions $y_1,\ldots,y_n,\eta_1,\ldots, \eta_n$ satisfy $\{x_j,y_k\}-\{x_k,y_j\}=0$, $\{\xi_j,\eta_k\}-\{\xi_k,\eta_j\}=0$, $\{\xi_j,y_k\}-\{x_k,\eta_j\} =0$, where $x_j=\sigma(\hat x^{(1)}_j) = \sigma(\hat x^{(2)}_j)$, $\xi_j=\sigma(\hat\xi^{(1)}_j) = \sigma(\hat\xi^{(2)}_j)$. Equivalently, the differential form $\alpha =\sum_j (y_j\,dx_j + \eta_j\,d\xi_j)$ is closed. By the Poincar\'e Lemma there exists $f\in C^\infty_M$ such that $df=\alpha$; equivalently, $y_j=\{\xi_j,f\}$ and $\eta_j=\{f,x_j\}$. There exists $B_{m+1}\in C^\infty_M[[t]]$ such that \be{*} \exp(t\cdot ad(B_{m+1})) = \exp(ad(t^{m+1} f))\circ\exp(t\cdot ad(B_m)) \ee{*} and $B_{m+1} = B_m\mod t^{m+1}$. The limit $B = \underset{m\to\infty}{\lim}B_m$ exists and satisfies the conclusions of the proposition.
\end{proof} \begin{propn}\label{prop11} Let $[\cdot,\cdot]$ be a deformation of a Poisson bracket $\{\cdot,\cdot\}$ on $M$ and $\mathcal{O}_t$ a $t$-adically complete submodule of $C^\infty_M[[t]]$ which is a commutative Lie subalgebra with respect to $[\cdot,\cdot]$. Let functions $x_i\in\mathcal{O}_0$, $\xi_i\in C^\infty_M$ form Darboux coordinates with respect to $\{\cdot,\cdot\}$ on a contractible open set $U\subset M$. Then, there exist lifts $\hat x_i\in\mathcal{O}_t$, $\hat\xi_i\in C^\infty_M[[t]]$ of them on $U$ which are Darboux coordinates with respect to $[\cdot,\cdot]$. \end{propn} \begin{proof} Since, by definition, $\mathcal{O}_t\to\sigma(\mathcal{O}_t)$ is surjective, we choose arbitrary lifts $\hat x_j\in \mathcal{O}_t$ of the $x_j$. Note that $\hat x_j$, $\xi_j$ satisfy both conclusions modulo $t$. Let $\hat x_{j,0}:=\hat x_j$, $\hat \xi_{j,0} :=\xi_j$. Suppose that $m\geq 0$ and $\hat x_{j,m}\in\mathcal{O}_t$, $\hat\xi_{j,m}\in C^\infty_M[[t]]$ satisfy the conclusion of the proposition modulo $t^{m+1}$. This assumption implies that \be{*} [\hat x_{j,m},\hat x_{k,m}]&=&0, \\ \ [\hat\xi_{j,m},\hat x_{k,m}]& =& \delta_{jk} + y_{jk}t^{m+1}\mod t^{m+2},\\ \ [\hat \xi_{j,m},\hat \xi_{k,m}]&=&t^{m+1}z_{jk}\mod t^{m+2} \ee{*} for suitable $y_{jk}, z_{jk}\in C^\infty_M$. The Jacobi identity and the commutation relations for $x_j$, $\xi_j$ imply that \be{*} \{y_{jk},x_l\}-\{y_{jl},x_k\}=0,\\ \ \{z_{lj},x_k\}+\{\xi_j,y_{lk}\}-\{\xi_l,y_{jk}\}=0,\\ \ \{z_{jk},\xi_l\} + \{z_{kl},\xi_j\} + \{z_{lj},\xi_k\} = 0. \ee{*} These identities say that the differential form $$\alpha=\sum_{j,k}(y_{jk}\,dx_j\wedge d\xi_k+z_{jk}\,dx_j\wedge dx_k)$$ is closed. By the Poincar\'e Lemma there exists a 1-form $\beta=\sum a_id\xi_i+\sum b_idx_i$ on $U$ such that $d\beta=\alpha$. Note that $\{x_k,a_i\}=0$ for all pairs $i,k$, since $\alpha$ contains no terms of the form $fd\xi_i\wedge d\xi_k$. Hence, $a_i\in\sigma(\mathcal{O}_t)$. Let $\hat a_i$ be a lift of $a_i$ in $\mathcal{O}_t$.
It is easy to check that $\hat x_{i,m+1}=\hat x_{i,m}+t^{m+1}\hat a_i$ and $\hat\xi_{i,m+1}=\hat\xi_{i,m}+t^{m+1}b_i$ satisfy the conclusions of the proposition modulo $t^{m+2}$. Hence, the limits $\hat x_i = \underset{m\to\infty}{\lim}\hat x_{i,m}$, $\hat \xi_i= \underset{m\to\infty}{\lim}\hat \xi_{i,m}$ exist and satisfy the conclusions of the proposition. \end{proof} We will need the following stronger assertion. \begin{propn}\label{prop11a} Let $[\cdot,\cdot]$ be a deformation of a Poisson bracket $\{\cdot,\cdot\}$ on $M$ and $\mathcal{O}_t$ a $t$-regular submodule of $C^\infty_M[[t]]$ which is a commutative Lie subalgebra with respect to $[\cdot,\cdot]$. Let functions $x_i\in\mathcal{O}_t$, $\xi_i\in C^\infty_M[[t]]$ form Darboux coordinates modulo $t^k$, $k>0$, with respect to $[\cdot,\cdot]$ on a contractible open set $U\subset M$. Then, there exist functions $a_i\in\mathcal{O}_t$, $b_i\in C^\infty_M[[t]]$ on $U$ such that the functions $x_i+t^ka_i$, $\xi_i+t^kb_i$ form Darboux coordinates with respect to $[\cdot,\cdot]$. \end{propn} \begin{proof} The same as for Proposition \ref{prop11}. \end{proof} \section{Deformation quantization on a polarized symplectic manifold} \subsection{Deformation quantization} Let us recall some definitions and facts about deformation quantization on a smooth manifold $M$, see \cite{BFFLS}, \cite{De}, \cite{Fe}. \begin{definition} a) Let $C^\infty_M$ be the sheaf of smooth complex valued functions on $M$. A {\em formal deformation} of $C^\infty_M$ is a sheaf of $\mathbb{C}[[t]]$-algebras, $\mathbb{A}_t$, with an epimorphism $\sigma:\mathbb{A}_t\to C^\infty_M$ of $\mathbb{C}[[t]]$-algebras (called the {\em symbol map}) satisfying the following condition: there exists an isomorphism of $\mathbb{C}[[t]]$-modules $\mathbb{A}_t\to C^\infty_M[[t]]$ commuting with the symbol maps. (Recall that the symbol map $\sigma:C^\infty_M[[t]]\to C^\infty_M$ takes $f_0+tf_1+\cdots$ to $f_0$.)
b) Two formal deformations $\mathbb{A}_{t1}$ and $\mathbb{A}_{t2}$ of $C^\infty_M$ are equivalent if there exists a map of sheaves of $\mathbb{C}[[t]]$-algebras $\mathbb{A}_{t1}\to\mathbb{A}_{t2}$ commuting with the symbol maps. \end{definition} Suppose $\mathbb{A}_t$ is a formal deformation of $C^\infty_M$. The formula \be{} \{a,b\}=\sigma(\frac{1}{t}[\widetilde a,\widetilde b]), \ee{} where $a$ and $b$ are locally defined functions on $M$ and $\widetilde a$, $\widetilde b$ are their lifts with respect to $\sigma$, gives a well defined Poisson bracket on $C^\infty_M$. It is clear that equivalent formal deformations have the same Poisson bracket. \begin{definition} A deformation quantization (DQ) on a symplectic manifold $(M,\omega)$ is a formal deformation of $C^\infty_M$ whose Poisson bracket is equal to $\omega^{-1}$. \end{definition} \begin{definition} a) A star-product (SP) on $(M,\omega)$ is the structure of an associative algebra on the sheaf $C^\infty_M[[t]]$ with the multiplication of the form \be{} f\ast g=\sum_{i\geq 0} t^i\mu_i(f,g), \qquad f,g\in C^\infty_M, \ee{} where all $\mu_i$, $i>0$, are bidifferential operators on $M$ vanishing on constants, i.e. $\mu_i(f,g)=0$ if $f$ or $g$ is a constant, $\mu_0(f,g)=fg$, and $\mu_1(f,g)-\mu_1(g,f)=\{f,g\}$, the Poisson bracket inverse to $\omega$. b) Two star-products $(C^\infty_M[[t]],\mu')$ and $(C^\infty_M[[t]],\mu'')$ on $(M,\omega)$ are equivalent if there exists a power series $B=1+tB_1+\cdots$, where the $B_i$ are differential operators vanishing on constants, such that $\mu''(f,g)=B\mu'(B^{-1}f,B^{-1}g)$. \end{definition} It is clear that any SP $(C^\infty_M[[t]],\mu)$ defines a DQ with the natural symbol map $C^\infty_M[[t]]\to C^\infty_M$, $f_0+tf_1+\cdots\mapsto f_0$. \begin{propn} The above assignment gives a one-to-one correspondence between the equivalence classes of SP's and DQ's. \end{propn} \begin{proof} Let $\mathbb{A}_t$ be a DQ. Let us prove that it is equivalent to a star-product.
By the definition of a DQ, there exists an isomorphism of $\mathbb{C}[[t]]$-modules $C^\infty_M[[t]]\to\mathbb{A}_t$ commuting with the symbol maps. Let $\mu=\mu_0+t\mu_1+\cdots$ be the multiplication on the sheaf $C^\infty_M[[t]]$ obtained as the pullback of the multiplication in $\mathbb{A}_t$. In order to prove that each $\mu_i$ is a bidifferential operator it is enough to prove, according to the Peetre theorem, that $supp(\mu(f,g))\subset supp(f)\cap supp(g)$ for any functions $f$ and $g$ on $M$. But this is obvious: if, for example, $f=0$ on an open set $U$, then $\mu(f,g)=0$ on $U$, since $\mu$ is a map of sheaves. The same argument proves that two equivalent DQ's correspond to equivalent SP's. \end{proof} \begin{example}\label{ex1} {\em Moyal-Weyl star-product.} Let $U$ be an open set in a symplectic manifold $(M,\omega)$ in which there exist Darboux coordinates, $x_i$, $y_i$, $i=1,...,\frac{1}{2}\dim M$, so that $\omega=\sum_idy_i\wedge dx_i$. Then, $\omega^{-1}=\sum_i\partial/\partial y_i\wedge \partial/\partial x_i$ and for $f,g\in C^\infty_U$ the multiplication formula \be{} f\otimes g\mapsto m\exp\left(\frac{t}{2}\sum_i(\partial/\partial y_i \otimes\partial/\partial x_i-\partial/\partial x_i \otimes\partial/\partial y_i)\right)(f\otimes g), \ee{} where $m$ is the usual multiplication of functions, defines a star-product on $(U, \omega)$. This star-product is called the Moyal-Weyl star-product. \end{example} The following statement is well known and can be proven with the help of Hochschild and de Rham cohomology. \begin{propn} Locally, any star-product on $(M,\omega)$ is equivalent to the Moyal-Weyl star-product.
\end{propn} \subsection{Polarized deformation quantization} \begin{definition} a) A polarized deformation quantization (PDQ) on a polarized symplectic manifold $(M,\omega,\mathcal{P})$ is a pair $(\mathbb{A}_t,\mathbb{O}_t)$, where $\mathbb{A}_t$ is a DQ on $(M,\omega)$ and $\mathbb{O}_t$ is a $t$-adically complete commutative subalgebra of $\mathbb{A}_t$ such that $\sigma(\mathbb{O}_t)=\mathcal{O}_\mathcal{P}$, the sheaf of functions constant along $\mathcal{P}$. b) Two PDQ's $(\mathbb{A}_{t1},\mathbb{O}_{t1})$ and $(\mathbb{A}_{t2},\mathbb{O}_{t2})$ on $(M,\omega,\mathcal{P})$ are equivalent if there exists an equivalence map of DQ's $\mathbb{A}_{t1}\to\mathbb{A}_{t2}$ which takes $\mathbb{O}_{t1}$ to $\mathbb{O}_{t2}$. \end{definition} Since any DQ is equivalent to a SP, any PDQ is equivalent to a triple $(C^\infty_M[[t]],\mu_t,\mathcal{O}_t)$, where $(C^\infty_M[[t]],\mu_t)$ is a SP and $\mathcal{O}_t$ is a commutative subalgebra of $(C^\infty_M[[t]],\mu_t)$. \begin{propn}\label{propmaxq} Let $(\mathbb{A}_t, \mathbb{O}_t)$ be a PDQ. Then, a) $\mathbb{O}_t$ is a maximal commutative subalgebra of $\mathbb{A}_t$; b) $\mathbb{O}_t$ is a $t$-regular subalgebra of $\mathbb{A}_t$. \end{propn} \begin{proof} a) Locally, the bracket $[a,b]=\frac{1}{t}(ab-ba)$, $a,b\in\mathbb{A}_t$, is a deformation of the Poisson bracket $\omega^{-1}$. So, the statement easily follows from Proposition \ref{propmax}. b) Follows from a). \end{proof} The following definition will play an auxiliary role in the paper.
\begin{definition} A {\em weakly polarized star-product} (wPSP) on a polarized symplectic manifold $(M,\omega,\mathcal{P})$ is a triple $(C^\infty_M[[t]],\mu_t,\mathcal{O}_t)$, where $(C^\infty_M[[t]],\mu_t)$ is a SP and $\mathcal{O}_t$ is a $t$-adically complete $\mathbb{C}[[t]]$-submodule of $C^\infty_M[[t]]$ satisfying the conditions: a) $\sigma(\mathcal{O}_t)=\mathcal{O}_\mathcal{P}$; b) $\mathcal{O}_t$ is a commutative subalgebra of $C^\infty_M[[t]]$ with respect to the usual multiplication in $C^\infty_M[[t]]$; c) $\mu_t$ restricted to $\mathcal{O}_t$ coincides with the usual multiplication. \end{definition} \begin{definition} We say that a wPSP $(C^\infty_M[[t]],\mu_t,\mathcal{O}_t)$ is a {\em polarized star-product} (PSP) if for any $a\in\mathcal{O}_t$, $b\in C^\infty_M[[t]]$ the product $\mu_t(a,b)$ coincides with the usual product in $C^\infty_M[[t]]$. \end{definition} So, both wPSP's and PSP's are particular cases of a PDQ. We are going to prove that, in fact, any PDQ is equivalent to a PSP. But first we prove the following \begin{lemma}\label{prop13} Any PDQ is equivalent to a wPSP. \end{lemma} \begin{proof} Since any deformation quantization is equivalent to a star-product, it is enough to prove the following. Let $\mathcal{O}_t$ be a commutative subalgebra of a star-product $(C^\infty_M[[t]],\mu_t)$ such that the triple $(C^\infty_M[[t]],\mu_t,\mathcal{O}_t)$ forms a PDQ; in particular, $\mu_t(a,b)=\mu_t(b,a)$ for $a,b\in \mathcal{O}_t$. Then, there exists a differential operator $D_t=1+tD_1+\cdots$ on $M$ such that the multiplication $\widetilde\mu_t(f,g)=D_t^{-1}\mu_t(D_tf,D_tg)$ restricted to $D_t^{-1}\mathcal{O}_t$ is the usual multiplication. Suppose $\mu_t$ restricted to $\mathcal{O}_t$ coincides with the usual multiplication modulo $t^n$. Then, $\mu_t(a,b)=ab+t^n\nu(a,b) \mod t^{n+1}$ for $a,b\in\mathcal{O}_t$.
It follows from Proposition \ref{propmaxq} b) that, considered modulo $t$, the bilinear form $\nu$ defines a Hochschild cocycle $\bar{\nu}\in \Gamma(M,{\mathcal D}^2({\OO_\PP},C^\infty_M))$ (see Proposition \ref{propKHR1}). Since $\bar{\nu}$ is symmetric, it follows from Proposition \ref{propKHR1} that it is a coboundary. So, there exists a differential operator $\bar{D}\in\Gamma(M,{\mathcal D}^1({\OO_\PP},C^\infty_M))$ such that $d_{Hoch}\bar{D}=\bar{\nu}$. Let $\widetilde{D}$ be a lift of $\bar{D}$ to a differential operator on $M$. Let $D_n=1+t^n\widetilde{D}$. It is easy to see that $D_n^{-1}\mathcal{O}_t$ is a commutative subalgebra of the star-product $(C^\infty_M[[t]],\widetilde{\mu}_t)$ with $\widetilde{\mu}_t(a,b)=D_n^{-1}\mu_t(D_na,D_nb)$, and that $\widetilde{\mu}_t(a,b)=ab$ modulo $t^{n+1}$ for $a,b\in D_n^{-1}\mathcal{O}_t$. By induction, we construct such differential operators $D_n$ for all $n$. Let $D=\prod_{n=1}^\infty D_n$, $\mathcal{O}'_t=D^{-1}\mathcal{O}_t$, and $\mu'_t(a,b)=D^{-1}\mu_t(Da,Db)$. Then $D$ gives an isomorphism between the PDQ's $(C^\infty_M[[t]],\mu_t,\mathcal{O}_t)$ and $(C^\infty_M[[t]],\mu'_t,\mathcal{O}'_t)$, and the second triple is a wPSP, which proves the lemma. \end{proof} \begin{propn}\label{prop13a} For any wPSP $(C^\infty_M[[t]],\mu_t,\mathcal{O}_t)$, there exists a differential operator $D=1+tD_1+\cdots$ on $M$ such that $Df=f$ for $f\in\mathcal{O}_t$ and the multiplication $\mu'_t(a,b)=D^{-1}\mu_t(Da,Db)$ defines a PSP $(C^\infty_M[[t]],\mu'_t,\mathcal{O}_t)$. \end{propn} \begin{proof} It is obvious that $\mu=\mu_t$ defines a PSP modulo $t$. Proceeding by induction, we assume that there exists a wPSP multiplication $\mu'$ equivalent to $\mu$ and being a PSP modulo $t^n$ with respect to $\mathcal{O}_t$.
The proposition will be proved if we find a differential operator, $D_n$, such that $D_n(f)=0$ for all $f\in\mathcal{O}_0$ and the multiplication \be{}\label{form} \mu''(a,b)=D^{-1}\mu'(Da,Db), \ee{} where $D=1+t^nD_n$, defines a PSP modulo $t^{n+1}$ with respect to $\mathcal{P}_t$. Let $$\mu'=\mu_0+t\mu'_1+\cdots +t^{n-1}\mu'_{n-1}+t^n\nu \quad \mod t^{n+1}.$$ By our assumption, the elements $\mu'_1,...,\mu'_{n-1}$ are strongly polarized and $\nu$ is polarized with respect to $\mathcal{O}_t$ (see the definition before Proposition \ref{propdop}). It follows from the associativity of $\mu'$ that $$d_{Hoch}\nu(a,b,c)=\sum_{i+j=n}(\mu'_i(a,\mu'_j(b,c))-\mu'_i(\mu'_j(a,b),c)).$$ It is easy to check that each term on the right-hand side is strongly polarized, since all $\mu'_i$, $i=1,...,n-1$, are such. So, $\nu$ satisfies the hypothesis of Proposition \ref{propdop}. Hence, there exists a polarized differential operator $D_n$ such that $\nu+d_{Hoch}D_n$ is a strongly polarized bidifferential operator. It is obvious that the multiplication $\mu''$ defined in (\ref{form}) with the $D_n$ just constructed is as required. \end{proof} \begin{cor}\label{cor1} Any PDQ is equivalent to a PSP. \end{cor} \begin{proof} Follows from Lemma \ref{prop13} and Proposition \ref{prop13a}. \end{proof} \begin{propn}\label{prop20} a) Let $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ be a PSP on $(M,\omega,\mathcal{P})$. Let $\mathcal{P}_t=(d\mathcal{O}_t)^\perp$. Then, $\mathcal{P}_t$ is a deformation of $\mathcal{P}$ and $\mathcal{O}_t=\mathcal{O}_{\mathcal{P}_t}$. b) Let $(\mathbb{C}M,\widetilde\mu_t,\widetilde\mathcal{O}_t)$ be another PSP on $(M,\omega,\mathcal{P})$ equivalent to $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ as a PDQ. Then, there exists a formal automorphism of $M$ which takes $\mathcal{P}_t$ to $\widetilde\mathcal{P}_t=(d\widetilde\mathcal{O}_t)^\perp$.
c) Let $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ and $(\mathbb{C}M,\widetilde\mu_t,\mathcal{O}_t)$ be two equivalent PSP's with the same $\mathcal{O}_t$, and let $D=1+tD_1+\cdots$ give an equivalence. Then, there exists a decomposition, $D=D'e^{tX}$, where $D'$ is a differential operator identical on $\mathcal{O}_t$ and $X$ is a formal vector field on $M$ taking $\mathcal{O}_t$ to itself. \end{propn} \begin{proof} a) follows from Proposition \ref{propdis}. b). Let us put $X_0=0$. Then the automorphism $e^{tX_0}=Id$ takes $\mathcal{P}_t$ to $\widetilde\mathcal{P}_t$ modulo $t$. Suppose we have constructed a formal vector field $X_{k-1}$ such that the formal automorphism $e^{tX_{k-1}}$ of $M$ takes $\mathcal{P}_t$ to $\widetilde\mathcal{P}_t$ modulo $t^{k}$. Then, replacing $\mathcal{P}_t$ by $e^{tX_{k-1}}\mathcal{P}_t$, we may assume that $\mathcal{P}_t$ and $\widetilde\mathcal{P}_t$ coincide modulo $t^k$. The proposition will be proved if we show that it is possible to find a vector field $Y$ such that $e^{t^kY}$ takes $\mathcal{P}_t$ to $\widetilde\mathcal{P}_t$ modulo $t^{k+1}$. Since, by our assumption, the SP's $(\mathbb{C}M,\mu_t)$ and $(\mathbb{C}M,\widetilde\mu_t)$ are equivalent and coincide modulo $t^k$, there exists a differential operator $1+t^kD_k$ realizing that equivalence. Since $1+t^kD_k$ takes $\mathcal{O}_t$ to $\widetilde\mathcal{O}_t$ and on both of these subalgebras the respective multiplications $\mu_t$ and $\widetilde\mu_t$ are trivial, $D_k$ restricted to $\mathcal{O}_0$ is a derivation from $\mathcal{O}_0$ to $\mathbb{C}m$. Let $Y$ be an extension of that restricted $D_k$ to a derivation on $\mathbb{C}m$. It is clear that $e^{t^kY}$ takes $\mathcal{P}_t$ to $\widetilde\mathcal{P}_t$ modulo $t^{k+1}$. c). The operator $D$ restricted to $\mathcal{O}_t$ is a formal automorphism of $\mathcal{O}_t$.
Since formal automorphisms form a pro-unipotent group, there exists $X'\in Der(\mathcal{O}_t)$ such that $D$ restricted to $\mathcal{O}_t$ is equal to $e^{tX'}$. Let $X\in\operatorname{Der}(\mathbb{C}m)$ be a lift of $X'$. We put $D'=De^{-tX}$, which is obviously identical on $\mathcal{O}_t$. \end{proof} \begin{example}\label{examp2} {\em The Moyal-Wick PSP.} Let $(M,\omega,\mathcal{P})$ be a polarized symplectic manifold. Let $U$ be an open set in $M$ where there exist Darboux coordinates, $x_i$, $y_i$, $(dx_i)^\perp=\mathcal{P}$, $i=1,...,\half\dim M$, so that $\omega=\sum_idy_i\wedge dx_i$. Then, $\omega^{-1}=\sum_i\partial/\partial y_i\wedge \partial/\partial x_i$ and for $f,g\in C^\infty_U$ the multiplication formula \be{} f\otimes g\mapsto m\exp\left(t\sum_i\partial/\partial y_i \otimes\partial/\partial x_i\right)(f\otimes g), \ee{} where $m$ is the usual multiplication of functions, defines a PSP on $(U, \omega, \mathcal{P})$. This PSP is called the Moyal-Wick polarized star-product. Note that the functions $a_i,f_i$ satisfying the Darboux relations with respect to the Poisson bracket $\omega^{-1}$ also satisfy the Darboux relations with respect to the deformed bracket $\tm[\cdot,\cdot]$, where $[\cdot,\cdot]$ is the commutator of the Moyal-Wick PSP. Let us remark that the Moyal-Weyl SP from Example \ref{ex1} constructed using the same Darboux coordinates gives just a wPSP but not a PSP. \end{example} \begin{propn} Locally, any PSP on $(M,\omega,\mathcal{P})$ is equivalent to the Moyal-Wick PSP. \end{propn} \begin{proof} Let us prove that any two PSP's are locally equivalent. Since any two SP's are locally equivalent, we may suppose that we are given two PSP's, $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ and $(\mathbb{C}M,\mu_t,\widetilde\mathcal{O}_t)$, with the same multiplication and different polarizations, and we have to prove that they are, locally, equivalent.
Let $x_i$, $y_i$ be Darboux coordinates with respect to $\omega$ such that $(dx_i)^\perp=\mathcal{P}$. By Proposition \ref{prop11}, there exist their lifts $x_{it}$, $y_{it}$ and $x'_{it}$, $y'_{it}$ which satisfy the Darboux relations with respect to the bracket $[a,b]=\frac{1}{t}(\mu_t(a,b)-\mu_t(b,a))$, and $x_{it}\in\mathcal{O}_t$, $x'_{it}\in\widetilde\mathcal{O}_t$. By Proposition \ref{prop12}, there exists, locally, an inner automorphism of the SP $(\mathbb{C}M,\mu_t)$ that takes $x_{it}$, $y_{it}$ to $x'_{it}$, $y'_{it}$. It follows that this automorphism takes $\mathcal{O}_t$ to $\widetilde\mathcal{O}_t$. \end{proof} \section{Characteristic classes of PDQ's and PSP's} \subsection{Extension class associated with a PDQ} Let $(\mathbb{A}_t,\mathbb{O}_t)$ be a PDQ on a polarized symplectic manifold $(M,\omega,\mathcal{P})$. Since any PDQ is equivalent to a PSP, the sheaf $\mathbb{O}_t$ is isomorphic to $\mathcal{O}_{\mathcal{P}_t}$ for some deformed distribution $\mathcal{P}_t$. Thus, the sheaves $Der(\mathbb{O}_t)$ and $\Omega^{1cl}_{\mathbb{O}_t}$ are well defined (see Section \ref{subsec2.2}). Let \be{}\label{exten} F(\mathbb{A}_t,\mathbb{O}_t)=\{b\in\mathbb{A}_t;\ [b,\mathbb{O}_t]\subset\mathbb{O}_t\}, \ee{} where $[\cdot,\cdot]$ denotes the commutator in $\mathbb{A}_t$. It is clear that $F(\mathbb{A}_t,\mathbb{O}_t)$ is a sheaf of Lie algebras with the bracket $\ft[\cdot,\cdot]$ and the center $\mathbb{O}_t$. Moreover, any element $b\in F(\mathbb{A}_t,\mathbb{O}_t)$ determines the derivation $\ft[b,\cdot]$ of $\mathbb{O}_t$ and, due to Proposition \ref{prop11}, this correspondence defines an epimorphism $\sigma:F(\mathbb{A}_t,\mathbb{O}_t)\to Der(\mathbb{O}_t)$. We consider $F(\mathbb{A}_t,\mathbb{O}_t)$ as a left $\mathbb{O}_t$-module with respect to multiplication in $\mathbb{A}_t$. As a Lie algebra sheaf, $F(\mathbb{A}_t,\mathbb{O}_t)$ is an extension of $Der(\mathbb{O}_t)$.
So, we have the following exact sequence of Lie algebras and $\mathbb{O}_t$-modules: \be{}\label{defoext} \begin{CD} 0 @>>> \mathbb{O}_t @>\iota>> F(\mathbb{A}_t,\mathbb{O}_t) @>\sigma>> Der(\mathbb{O}_t) @>>> 0. \end{CD} \ee{} According to the terminology of \cite{BB}, \cite{BK}, $F(\mathbb{A}_t,\mathbb{O}_t)$ is called {\em an $\mathbb{O}_t$-extension of $Der(\mathbb{O}_t)$}. We say that a map of Lie algebras and $\mathbb{O}_t$-modules, $s: Der(\mathbb{O}_t)\to F(\mathbb{A}_t,\mathbb{O}_t)$, given over an open set of $M$ is a splitting of (\ref{defoext}) if $\sigma\circ s=id$. Since $(\mathbb{A}_t,\mathbb{O}_t)$ can be realized as a PSP and, locally, there exist Darboux coordinates with respect to $\ft[\cdot,\cdot]$, the sequence (\ref{defoext}) locally splits (see the next subsection, where splittings are presented with the help of Darboux coordinates). \begin{lemma}\label{splitting} Let $s$ and $s'$ be two splittings of (\ref{defoext}) over an open set of $M$. Then, $s-s'\in \Omega^{1cl}_{\mathbb{O}_t}$. \end{lemma} \begin{proof} Direct calculation. \end{proof} Let us define the extension class of (\ref{exten}) in the following way. Let $\{U_\alpha\}$ be an open covering of $M$ such that over each $U_\alpha$ there is a splitting, $s_\alpha$, of $F(\mathbb{A}_t,\mathbb{O}_t)$. By Lemma \ref{splitting}, $f_{\alpha,\beta}=s_\beta-s_\alpha$ is a section of $\Omega^{1cl}_{\mathbb{O}_t}$ over $U_\alpha\cap U_\beta$. We define ${\rm cl}e(F(\mathbb{A}_t,\mathbb{O}_t))$ as the element of $H^1(M,\Omega^{1cl}_{\mathbb{O}_t})$ represented by the collection $\{f_{\alpha,\beta}\}$. One can prove that, given $\mathbb{O}_t$, the extension class ${\rm cl}e(F(\mathbb{A}_t,\mathbb{O}_t))$ determines an $\mathbb{O}_t$-extension of $Der(\mathbb{O}_t)$ up to equivalence. We will denote the element ${\rm cl}e(F(\mathbb{A}_t,\mathbb{O}_t))$ by ${\rm cl}e(\mathbb{A}_t,\mathbb{O}_t)$ and call it the {\em extension class} of the PDQ $(\mathbb{A}_t,\mathbb{O}_t)$.
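The Čech construction of the extension class can be illustrated in a toy coordinate model (our own illustration, not part of the construction above): on three overlapping charts of $\mathbb{R}^2$ we take local splittings of the form $s_\alpha=y\,dx+dh_\alpha$ for hypothetical chart functions $h_\alpha$, and check with sympy that the differences $f_{\alpha,\beta}=s_\beta-s_\alpha$ are closed 1-forms satisfying the cocycle identity $f_{\alpha,\beta}+f_{\beta,\gamma}=f_{\alpha,\gamma}$.

```python
import sympy as sp

x, y = sp.symbols('x y')

def d(form):
    """Exterior derivative of a 1-form (A, B) = A dx + B dy,
    returned as the coefficient of dx /\\ dy."""
    A, B = form
    return sp.simplify(sp.diff(B, x) - sp.diff(A, y))

# Hypothetical chart functions h_alpha; the splittings
# s_alpha = y dx + dh_alpha all have the same differential.
hs = [sp.Integer(0), x*y, sp.sin(x)]
s = [(y + sp.diff(h, x), sp.diff(h, y)) for h in hs]

# All splittings have the same exterior derivative (the global 2-form).
assert len({d(si) for si in s}) == 1

def diff_form(a, b):
    """The Cech difference f_{ab} = s_b - s_a, componentwise."""
    return (s[b][0] - s[a][0], s[b][1] - s[a][1])

f01, f12, f02 = diff_form(0, 1), diff_form(1, 2), diff_form(0, 2)
# Each difference is a closed 1-form, as in the lemma on splittings.
assert all(d(f) == 0 for f in (f01, f12, f02))
# Cocycle identity f_{01} + f_{12} = f_{02}.
assert sp.simplify(f01[0] + f12[0] - f02[0]) == 0
assert sp.simplify(f01[1] + f12[1] - f02[1]) == 0
```

Here the common 2-form $ds_\alpha$ plays the role of the characteristic form constructed in the next subsection.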
In the next subsection, the extension class of a PSP, $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$, will be represented as an element of $\omega_0+t\Gamma(M,d\mathcal{P}_t^\perp)$ taken modulo $t\,d(\Gamma(M,\mathcal{P}_t^\perp))$, $\mathcal{P}_t=\mathcal{P}_{\mathcal{O}_t}$, with the help of a characteristic 2-form associated with that PSP. \subsection{Characteristic 2-form associated with a PSP} \label{subsecextcl} Let $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ be a PSP, which we denote for brevity by $(\mu_t,\mathcal{O}_t)$. We denote $\mathcal{P}_t=\mathcal{P}_{\mathcal{O}_t}$. Then the extension $F(\mu_t,\mathcal{O}_t)$ coincides as a left $\mathcal{O}_t$-module with an $\mathcal{O}_t$-submodule of $\mathbb{C}M$ with respect to the usual multiplication in $\mathbb{C}M$. In this case, the local splittings of $F(\mu_t,\mathcal{O}_t)$ are differential forms of $Hom_{\mathcal{O}_t}(Der(\mathcal{O}_t),\mathbb{C}M)=\mathcal{P}_t^\perp$. These forms can be described explicitly in Darboux coordinates. Let $\{U_\alpha\}$ be an open covering of $M$ such that each $U_\alpha$ has Darboux coordinates $x_{\alpha i}$, $y_{\alpha i}$, $x_{\alpha i}\in \mathcal{O}_t$, $i=1,...,n$, with respect to the bracket $[a,b]=\ft(\mu_t(a,b)-\mu_t(b,a))$ (in particular, $[y_{\alpha i},x_{\alpha j}]=\delta_{ij}$). By Proposition \ref{prop11} such a covering exists. Then, on $U_\alpha$, the $\mathcal{O}_t$-submodule $F(\mu_t,\mathcal{O}_t)\subset\mathbb{C}M$ is equal to $\mathcal{O}_t\oplus\mathcal{O}_ty_{\alpha 1}\oplus\cdots\oplus\mathcal{O}_ty_{\alpha n}$. Splittings $s_\alpha$ may be defined by the condition $x_{\alpha i}\mapsto y_{\alpha i}$, and the corresponding forms are \be{}\label{Darsplit} s_\alpha=\sum_i y_{\alpha i}dx_{\alpha i}. \ee{} By Lemma \ref{splitting}, $ds_\alpha=ds_\beta$ on $U_\alpha\cap U_\beta$. So, the forms $s_\alpha$ define a global 2-form \be{}\label{gform} \omega_t\in \omega_0+t\Gamma(M,d\mathcal{P}_t^\perp), \quad\quad \omega_t=ds_\alpha=\sum_i dy_{\alpha i}\wedge dx_{\alpha i}.
\ee{} \begin{lemma}\label{lemrepcl} This form represents the extension class ${\rm cl}e(\mu_t,\mathcal{O}_t)$ under the isomorphism (\ref{fundiso1}). \end{lemma} \begin{proof} Clear. \end{proof} Lemma \ref{splitting} shows that if we take other splittings of $F(\mu_t,\mathcal{O}_t)$, the procedure above gives the same form $\omega_t$. So, in the case of a PSP, not only the class ${\rm cl}e(\mu_t,\mathcal{O}_t)$ but also its representative in $\omega_0+t\Gamma(M,d\mathcal{P}_t^\perp)$ is well defined; we call it the {\em characteristic 2-form} of the PSP and denote it by ${\rm cl}P(\mu_t,\mathcal{O}_t)$. Given an SP, $(\mathbb{C}M,\mu_t)$, let us denote $[a,b]_{\mu_t}=\frac{1}{t}(\mu_t(a,b)-\mu_t(b,a))$, $a,b\in\mathbb{C}M$. The bracket $[,]_{\mu_t}$ is a deformation of the initial Poisson bracket on $M$. \begin{propn}\label{propcom} Let $(\mu_t,\mathcal{O}_t)$, $(\widetilde\mu_t,\mathcal{O}_t)$ be PSP's with the same $\mathcal{O}_t$. Then, ${\rm cl}P(\mu_t,\mathcal{O}_t)={\rm cl}P(\widetilde\mu_t,\mathcal{O}_t)$ if and only if $[,]_{\mu_t}=[,]_{\widetilde\mu_t}$. \end{propn} \begin{proof} If ${\rm cl}P(\mu_t,\mathcal{O}_t)={\rm cl}P(\widetilde\mu_t,\mathcal{O}_t)$, then the forms (\ref{Darsplit}) can be taken to be the same. This implies that there exist common Darboux coordinates with respect to $[,]_{\mu_t}$ and $[,]_{\widetilde\mu_t}$. \end{proof} \begin{propn}\label{propeq} Let $(\mu_t,\mathcal{O}_t)$, $(\widetilde\mu_t,\mathcal{O}_t)$ be two PSP's with the same $\mathcal{O}_t$. Let $\mathcal{P}_t=\mathcal{P}_{\mathcal{O}_t}$ and let $\Gamma\mathcal{P}_t$ denote the module of global sections of $\mathcal{P}_t$. Then, the 2-forms ${\rm cl}P(\mu_t,\mathcal{O}_t)$, ${\rm cl}P(\widetilde\mu_t,\mathcal{O}_t)$ lie on the same orbit of $e^{t\Gamma\mathcal{P}_t}$ if and only if there exists a formal differential operator $D=1+tD_1+\cdots$ identical on $\mathcal{O}_t$ such that $\widetilde\mu_t(a,b)=D^{-1}\mu_t(Da,Db)$.
\end{propn} \begin{proof} Let us denote $\omega_t={\rm cl}P(\mu_t,\mathcal{O}_t)$, $\widetilde\omega_t={\rm cl}P(\widetilde\mu_t,\mathcal{O}_t)$. Assume such a $D$ exists. Let $s_\alpha$ be the forms from (\ref{Darsplit}) corresponding to $(\mu_t,\mathcal{O}_t)$, and let $\widetilde s_\alpha$ be the forms obtained from $s_\alpha$ by applying $D$. Lemmas \ref{splitting} and \ref{lemma2.2} imply that there exist $f_{\alpha,\beta}\in\mathcal{O}_t$ over $U_\alpha\cap U_\beta$ such that $s_\alpha-s_\beta=df_{\alpha,\beta}$. Since $D$ acts on $\mathcal{O}_t$ trivially, one has $\widetilde s_\alpha-\widetilde s_\beta=df_{\alpha,\beta}$, too. This means that $\widetilde s_\alpha-s_\alpha$ does not depend on $\alpha$ and gives a global form, $b$, of $\mathcal{P}_t^\perp$. Since $\omega_t=ds_\alpha$, $\widetilde\omega_t=d\widetilde s_\alpha$, one has $\widetilde\omega_t=\omega_t+db$. By Lemma \ref{lemma3}, there exists a formal automorphism, $e^{tY}$, $Y\in\Gamma\mathcal{P}_t$, taking $\omega_t$ to $\widetilde\omega_t$. Conversely, let us suppose that $\omega_t$ and $\widetilde\omega_t$ are on the same orbit of $e^{t\Gamma\mathcal{P}_t}$. Assume that we have found a differential operator identical on $\mathcal{O}_t$ which transforms $\widetilde\mu_t$ to a multiplication $\widetilde\mu'_t$ that is equal to $\mu_t$ modulo $t^k$, i.e. \be{}\label{modtk} \widetilde\mu'_t-\mu_t=t^k\nu+\cdots. \ee{} By the previous part of the proof, the corresponding form $\widetilde\omega'_t$ also lies on the same orbit as $\omega_t$. The proposition will be proved if we find a differential operator $1+t^kD_k$, $D_k(\mathcal{O}_t)=0$, which transforms $\widetilde\mu'_t$ to $\mu_t$ modulo $t^{k+1}$. Let us prove that. Since $\widetilde\mu'_t=\mu_t$ mod $t^k$, we can choose, by Proposition \ref{prop11a}, systems of Darboux coordinates with respect to $[,]_{\widetilde\mu'_t}$ and $[,]_{\mu_t}$ that coincide modulo $t^k$. It follows that $\widetilde\omega'_t=\omega_t$ mod $t^k$.
Hence, there is a formal automorphism $e^{t^kX}$, $X\in\Gamma\mathcal{P}_t$, which takes $\widetilde\omega'_t$ to $\omega_t$. Applying $e^{t^kX}$ to $\widetilde\mu'_t$, we obtain a multiplication that is still equal to $\mu_t$ modulo $t^k$ but whose characteristic form is equal to $\omega_t$. So, we come to the situation where the multiplications $\widetilde\mu'_t$ and $\mu_t$ have the same characteristic form $\omega_t$. By Proposition \ref{propcom}, $\widetilde\mu'_t$ and $\mu_t$ have the same commutator. This implies that the bidifferential operator $\nu$ in (\ref{modtk}) is commutative. Moreover, it is a strongly polarized Hochschild cocycle (see Section 2). So, there exists a polarized differential operator $D_k$ such that $d_{Hoch}D_k=\nu$. It follows that the transformation $1+t^kD_k$ applied to $\widetilde\mu'_t$ gives a multiplication equal to $\mu_t$ modulo $t^{k+1}$, while being identical on $\mathcal{O}_t$. \end{proof} \subsection{Characteristic pairs for PSP's and PDQ's}\label{subsclass} Let $(M,\omega,\mathcal{P})$ be a polarized symplectic manifold. Let us denote by $Aut(M)$ the group of formal automorphisms of $M$ and by ${\mathcal Y}={\mathcal Y}(M,\omega,\mathcal{P})$ the set of pairs $(\omega_t,\mathcal{P}_t)$, where $\omega_t=\omega_0+t\omega_1+\cdots$ is a formal symplectic form deforming $\omega=\omega_0$ and $\mathcal{P}_t$ is a polarization of $\omega_t$ deforming $\mathcal{P}$. The group $Aut(M)$ naturally acts on ${\mathcal Y}$. It is natural to assign to a PSP $(\mu_t,\mathcal{O}_t)$ on $(M,\omega,\mathcal{P})$ the pair $(\omega_t,\mathcal{P}_t)\in{\mathcal Y}$, where $\omega_t={\rm cl}P(\mu_t,\mathcal{O}_t)$ and $\mathcal{P}_t=\mathcal{P}_{\mathcal{O}_t}$. So, we obtain the map \be{*} \tau:\{\mbox{PSP's}\}\to{\mathcal Y}.
\ee{*} \begin{propn}\label{eqcl} Two PSP's $(\mu_t,\mathcal{O}_t)$ and $(\widetilde\mu_t,\widetilde\mathcal{O}_t)$ are equivalent if and only if $\tau(\mu_t,\mathcal{O}_t)$ and $\tau(\widetilde\mu_t,\widetilde\mathcal{O}_t)$ lie on the same orbit in ${\mathcal Y}$ under the $Aut(M)$-action. \end{propn} \begin{proof} Let $(\mu_t,\mathcal{O}_t)$ and $(\widetilde\mu_t,\widetilde\mathcal{O}_t)$ be equivalent. Let us prove that the pairs $\tau(\mu_t,\mathcal{O}_t)$ and $\tau(\widetilde\mu_t,\widetilde\mathcal{O}_t)$ lie on the same orbit. By Proposition \ref{prop20} b), c), one can find a formal automorphism of $M$ such that, after applying it, $(\widetilde \mu_t,\widetilde \mathcal{O}_t)$ turns into a PSP, $(\widetilde \mu_t,\mathcal{O}_t)$, with the same $\mathcal{O}_t$ as in $(\mu_t,\mathcal{O}_t)$, and the equivalence morphism from $(\mu_t,\mathcal{O}_t)$ to $(\widetilde \mu_t,\mathcal{O}_t)$ is given by a differential operator identical on $\mathcal{O}_t$. Now the statement follows from Proposition \ref{propeq}. Conversely, suppose that for $(\mu_t,\mathcal{O}_t)$ and $(\widetilde\mu_t,\widetilde\mathcal{O}_t)$ the corresponding pairs $(\omega_t,\mathcal{P}_t)$, $(\widetilde\omega_t,\widetilde\mathcal{P}_t)$ lie on the same orbit. Let us prove that those PSP's are equivalent. Let $B$ be a formal automorphism of $M$ sending $\widetilde\mathcal{P}_t$ to $\mathcal{P}_t$. Applying $B$ to $(\widetilde\mu_t,\widetilde\mathcal{O}_t)$, we come to the case where $\widetilde\mathcal{P}_t=\mathcal{P}_t$. So, we may suppose that $(\widetilde\omega_t,\widetilde\mathcal{P}_t)=(\widetilde\omega_t,\mathcal{P}_t)$. Since the pairs $(\omega_t,\mathcal{P}_t)$, $(\widetilde\omega_t,\mathcal{P}_t)$ lie on the same orbit, there exists $X\in\Gamma\mathcal{P}_t$ such that $\widetilde\omega_t=e^{tX}\omega_t$. Now the statement follows from Proposition \ref{propeq}.
\end{proof} Let us denote by $\bigl[{\mathcal Y}\bigr]$ the set of orbits in ${\mathcal Y}={\mathcal Y}(M,\omega,\mathcal{P})$. \begin{cor}\label{corin} The map $\tau$ induces the embedding \be{*} {\rm cl}PQ: \{\mbox{classes of PDQ's}\}\to \bigl[{\mathcal Y}\bigr]. \ee{*} \end{cor} \begin{proof} Let $(\mathbb{A}_t,\mathbb{O}_t)$ be a PDQ on $(M,\omega,\mathcal{P})$. Then, by Corollary \ref{cor1}, there exists a PSP, $(\mu_t,\mathcal{O}_t)$, equivalent to $(\mathbb{A}_t,\mathbb{O}_t)$. We put ${\rm cl}PQ(\mathbb{A}_t,\mathbb{O}_t)=[\tau(\mu_t,\mathcal{O}_t)]$, the orbit passing through the pair $\tau(\mu_t,\mathcal{O}_t)$. By Propositions \ref{prop20} and \ref{eqcl}, this map is correctly defined, i.e. does not depend on the choice of an equivalent PSP. Proposition \ref{eqcl} also shows that, descended to equivalence classes of PDQ's, ${\rm cl}PQ$ is an embedding. \end{proof} In the next section we will prove that any element of ${\mathcal Y}$ is equal to $\tau(\mu_t,\mathcal{O}_t)$ for some PSP $(\mu_t,\mathcal{O}_t)$, which implies that the map ${\rm cl}PQ$ is, in fact, a bijection. \section{Existence of polarized deformation quantizations and relation between the extension and Fedosov classes of a PDQ} Let $(M,\omega_0)$ be a symplectic manifold. It is known that all equivalence classes of deformation quantizations (DQ) on $M$ with the Poisson bracket $\omega_0^{-1}$ can be obtained by the Fedosov method. According to this method, starting with a symplectic connection, one constructs a flat connection, $D$ (called the Fedosov connection), in the Weyl algebra defined on the cotangent bundle to $M$ via the Poisson bracket $\omega_0^{-1}$. The quantized algebra, $\mathbb{A}_t$, is realized as the subalgebra of flat sections in the Weyl algebra.
The Weyl curvature of $D$ (see below), being a closed scalar 2-form of the form $\theta_t=\omega_0+t\omega_1+\cdots$, defines the Fedosov class \be{}\label{formtheta} cl_F(\mathbb{A}_t)=[\theta_t]\in [\omega_0]+tH^2(M,\mathbb{C}[[t]]). \ee{} It is also known that the correspondence $\mathbb{A}_t\mapsto cl_F(\mathbb{A}_t)$ is a bijection between the set of equivalence classes of DQ's on $(M,\omega_0)$ and the set $[\omega_0]+tH^2(M,\mathbb{C}[[t]])$, \cite{Fe}, \cite{NT}, \cite{Xu}. Let $(M,\omega,\mathcal{P})$ be a polarized symplectic manifold and $(\omega_t,\mathcal{P}_t)\in{\mathcal Y}(M,\omega,\mathcal{P})$ be a deformation of the pair $(\omega,\mathcal{P})$; see Subsection \ref{subsclass}. We adapt the Fedosov method to construct a PSP, $(\mu_t,\mathcal{O}_t)$, such that $\tau(\mu_t,\mathcal{O}_t)=(\omega_t,\mathcal{P}_t)$. We start with a $\mathcal{P}_t$-symplectic connection, $\nabla$, and construct the Fedosov connection on the same Weyl algebra. We will see that, by realizing the Fedosov scheme in the presence of a polarization, the form $\omega_t$ appears as a so-called Wick curvature of the Fedosov connection. Moreover, $\omega_t$ differs from the Weyl curvature of that Fedosov connection by the form $\ftt tr(\nabla^2|_{\mathcal{P}_t})$. \subsection{Some notations} Let $\mathcal{E}$ be a formal vector bundle over $M$, i.e. a free $\mathbb{C}M$-module of finite rank over $M$. Denote by $T^k(\mathcal{E})$ the $k$-th tensor power of $\mathcal{E}$ over $\mathbb{C}M$ and by $T(\mathcal{E})$ the corresponding tensor algebra completed in the $\{\mathcal{E},t\}$-adic topology. Similarly, we define the completed symmetric algebra $S(\mathcal{E})$. For a subbundle $\mathcal{P}$ of $\mathcal{E}$, we denote by $sym_\mathcal{P}: S(\mathcal{P})\to T(\mathcal{E})$ the natural embedding of $\mathbb{C}M$-modules defined by symmetrization. Let $\Lambda(\mathcal{E})$ be the exterior algebra of $\mathcal{E}$ over $\mathbb{C}M$.
We will consider $T(\mathcal{E})\otimes\Lambda(\mathcal{E})$ as a graded super-algebra, regarding a section $x\in T(\mathcal{E})\otimes\Lambda^k(\mathcal{E})$ of degree $k$ as even (odd) if $k$ is even (odd). Denote by $\delta_{T(\mathcal{E})}$ the continuous $\mathbb{C}M$-linear derivation of $T(\mathcal{E})\otimes\Lambda(\mathcal{E})$ defined by the map $T^1(\mathcal{E})\otimes 1\to 1\otimes\Lambda^1(\mathcal{E})$, $v\otimes 1\mapsto 1\otimes v$, where $v$ is a section of $\mathcal{E}$. It is clear that $\delta_{T(\mathcal{E})}$ is a derivation of degree $1$ and $\delta_{T(\mathcal{E})}^2=0$. It is easy to see that for any subbundle $\mathcal{P}\subset \mathcal{E}$, the map $\delta_{T(\mathcal{E})}$ restricted to $S(\mathcal{P})\otimes\Lambda(\mathcal{P})$ via the embedding $sym_\mathcal{P}\otimes id_\mathcal{P}$ gives a derivation of the algebra $S(\mathcal{P})\otimes\Lambda(\mathcal{P})$; we denote it by $\delta_\mathcal{P}$. On the algebra $S(\mathcal{P})\otimes\Lambda(\mathcal{P})$, there is another derivation, $\delta^*_\mathcal{P}$, of degree $-1$ generated by the map $1\otimes\Lambda^1(\mathcal{P})\to S^1(\mathcal{P})\otimes1$, $1\otimes v\mapsto v\otimes 1$, $v\in\mathcal{P}$. It is easy to check that $(\delta^*_\mathcal{P})^2=0$ and $[\delta_\mathcal{P},\delta^*_\mathcal{P}]=\delta_\mathcal{P}\delta^*_\mathcal{P}+\delta^*_\mathcal{P}\delta_\mathcal{P}=deg$, where $deg$ is the derivation assigning to an element $x\in S^p(\mathcal{P})\otimes\Lambda^q(\mathcal{P})$ the element $(p+q)x$. Let $\mathcal{E}$ be presented as a direct sum of $\mathbb{C}M$-submodules, $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$.
It is obvious that the derivations $\delta_\mathcal{P}$, $\delta^*_\mathcal{P}$, $\delta_\mathcal{Q}$, $\delta^*_\mathcal{Q}$ induce derivations on the algebra $\mathbf{S}(\mathcal{P},\mathcal{Q})=(S(\mathcal{P})\otimes S(\mathcal{Q}))\otimes (\Lambda(\mathcal{P})\otimes\Lambda(\mathcal{Q}))$, which we will identify in the natural way with the algebra $(S(\mathcal{P})\otimes S(\mathcal{Q}))\otimes \Lambda(\mathcal{E})$. We put $\delta_{\mathcal{P},\mathcal{Q}}=\delta_\mathcal{P}+\delta_\mathcal{Q}$ and $\delta^*_{\mathcal{P},\mathcal{Q}}=\delta^*_\mathcal{P}+\delta^*_\mathcal{Q}$. Let us define the operator $\delta^{-1}_{\mathcal{P},\mathcal{Q}}$ on $\mathbf{S}(\mathcal{P},\mathcal{Q})$ in the following way. We put $\delta^{-1}_{\mathcal{P},\mathcal{Q}}(x)=0$ for $x\in \mathbb{C}M$ and $\delta^{-1}_{\mathcal{P},\mathcal{Q}}(x)=(1/(p+r+q))\delta_{\mathcal{P},\mathcal{Q}}^*(x)$ for $x\in(S^p(\mathcal{P})\otimes S^r(\mathcal{Q}))\otimes \Lambda^q(\mathcal{E})$, $p+r+q>0$. There is the obvious relation \be{}\label{relde} \delta_{\mathcal{P},\mathcal{Q}}\delta_{\mathcal{P},\mathcal{Q}}^{-1}+ \delta_{\mathcal{P},\mathcal{Q}}^{-1}\delta_{\mathcal{P},\mathcal{Q}}= \mbox{ projection on $\mathbf{S}^+(\mathcal{P},\mathcal{Q})$ along $\mathbb{C}M$}, \ee{} where $\mathbf{S}^+(\mathcal{P},\mathcal{Q})$ is the closure of $\oplus_{p+r+q>0}(S^p(\mathcal{P})\otimes S^r(\mathcal{Q}))\otimes \Lambda^q(\mathcal{E})$. One has the embedding \be{}\label{rels} sym_\mathcal{P}\otimes sym_\mathcal{Q}\otimes id: (S(\mathcal{P})\otimes S(\mathcal{Q}))\otimes \Lambda(\mathcal{E})\to T(\mathcal{E})\otimes\Lambda(\mathcal{E}). \ee{} It is clear that $\delta_{\mathcal{P},\mathcal{Q}}$ coincides with the restriction of $\delta_{T(\mathcal{E})}$ to $\mathbf{S}(\mathcal{P},\mathcal{Q})$ via this embedding.
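The identity $\delta_\mathcal{P}\delta^*_\mathcal{P}+\delta^*_\mathcal{P}\delta_\mathcal{P}=deg$, which underlies the normalization of $\delta^{-1}_{\mathcal{P},\mathcal{Q}}$, can be checked in the smallest possible model (our own sketch, not part of the text's constructions): a rank-one $\mathcal{P}$ with a single generator $v$, where $S(\mathcal{P})\otimes\Lambda(\mathcal{P})$ is spanned by the monomials $v^p\otimes 1$ and $v^p\otimes v$, stored below as dictionaries $\{(p,q):\text{coefficient}\}$.

```python
def delta(elem):
    """delta sends v^p (x) 1 to p * v^(p-1) (x) v and kills v^p (x) v."""
    out = {}
    for (p, q), c in elem.items():
        if q == 0 and p > 0:
            out[(p - 1, 1)] = out.get((p - 1, 1), 0) + p * c
    return out

def delta_star(elem):
    """delta* sends v^p (x) v to v^(p+1) (x) 1 and kills v^p (x) 1."""
    out = {}
    for (p, q), c in elem.items():
        if q == 1:
            out[(p + 1, 0)] = out.get((p + 1, 0), 0) + c
    return out

def deg(elem):
    """The derivation multiplying each monomial by its total degree p + q."""
    return {(p, q): (p + q) * c for (p, q), c in elem.items()}

def add(a, b):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

# Both differentials square to zero.
assert delta(delta({(3, 0): 1})) == {}
assert delta_star(delta_star({(1, 1): 1})) == {}

# The anticommutator equals deg on a sample element 2*v^3(x)1 + 5*v(x)v.
elem = {(3, 0): 2, (1, 1): 5}
lhs = add(delta(delta_star(elem)), delta_star(delta(elem)))
assert lhs == deg(elem)
```

Dividing $\delta^*$ by $p+r+q$, as in the definition of $\delta^{-1}_{\mathcal{P},\mathcal{Q}}$ above, is exactly what turns this anticommutator into the projection of (\ref{relde}).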
\subsection{The Fedosov algebra} Let $\varphi:\mathcal{E}\otimes\mathcal{E}\to\mathbb{C}M$ be a $\mathbb{C}M$-linear skew-symmetric nondegenerate form and $I$ the closed ideal in $T(\mathcal{E})$ generated by the relations \be{}\label{relI} x\otimes y-y\otimes x=t\varphi(x,y). \ee{} We call $\mathcal{W}(\mathcal{E})=T(\mathcal{E})/I$ {\em the Weyl algebra} and $\mathbf{W}=\mathbf{W}(\mathcal{E})=\mathcal{W}\otimes\Lambda(\mathcal{E})$ {\em the Fedosov algebra} over $\mathcal{E}$. The derivation $\delta_{T(\mathcal{E})}$ on $T(\mathcal{E})\otimes\Lambda(\mathcal{E})$ induces a derivation of $\mathbf{W}$; indeed, $\delta_{T(\mathcal{E})}$ applied to both sides of (\ref{relI}) gives zero. We denote this derivation by $\delta$. Let $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$ be a decomposition into $\mathbb{C}M$-modules. Define the Wick map, $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$, as the composition $\mathbf{S}(\mathcal{P},\mathcal{Q})\to T(\mathcal{E})\otimes\Lambda(\mathcal{E})\to \mathbf{W}$, where the first map is (\ref{rels}) and the second one is the projection. By the PBW theorem, $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$ is an isomorphism of $\mathbb{C}M$-modules. Due to the isomorphism $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$, the operators $\delta_{\mathcal{P},\mathcal{Q}}$ and $\delta^{-1}_{\mathcal{P},\mathcal{Q}}$ are carried over from $\mathbf{S}(\mathcal{P},\mathcal{Q})$ to $\mathbf{W}$. We retain for them the same notation. Note that while $\delta_{\mathcal{P},\mathcal{Q}}$ does not depend on the decomposition of $\mathcal{E}$ and coincides with the derivation $\delta$ induced from $T(\mathcal{E})\otimes\Lambda(\mathcal{E})$, the operator $\delta^{-1}_{\mathcal{P},\mathcal{Q}}$ is not a derivation and does depend on the decomposition. In particular, one can take the trivial decomposition, $\mathcal{E}=\mathcal{E}\oplus 0$; in this case we denote $\delta^{-1}_\mathcal{E}=\delta^{-1}_{\mathcal{E},0}$.
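For a single Darboux pair, the Weyl relation (\ref{relI}) can be modeled concretely by a Moyal-Wick-type product on polynomials, following the product formula of Example \ref{examp2} (the one-variable truncation below is our own sketch; on polynomials the exponential series terminates, so no truncation of the $t$-series is needed).

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

def moyal_wick(f, g, order=10):
    """Moyal-Wick product for one Darboux pair (x, y):
    m o exp(t d/dy (x) d/dx); the series terminates on polynomials."""
    return sp.expand(sum(
        t**k / sp.factorial(k) * sp.diff(f, y, k) * sp.diff(g, x, k)
        for k in range(order)))

# PSP property: on functions of x alone the product is the usual one.
assert moyal_wick(x**2, x**3) == x**5
assert moyal_wick(x, y**2) == x*y**2

# The relation y*x - x*y = t, i.e. the Darboux relation [y, x] = 1
# for the bracket (1/t)(mu(a,b) - mu(b,a)).
assert sp.simplify((moyal_wick(y, x) - moyal_wick(x, y)) / t) == 1

# Associativity on a sample of polynomials.
f, g, h = x*y, y**2, x + y
assert sp.expand(moyal_wick(moyal_wick(f, g), h)
                 - moyal_wick(f, moyal_wick(g, h))) == 0
```

In the notation above, the polynomials in $x$ play the role of $S(\mathcal{P})$-type elements, which multiply plainly from the left, while the relation $y\otimes x-x\otimes y=t$ is the rank-one instance of (\ref{relI}).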
\begin{propn}\label{prop1.1} One has $$H(\mathbf{W},\delta)=\mathbb{C}M.$$ Moreover, if $x\in \mathcal{W}(\mathcal{E})\otimes\Lambda^{k>0}(\mathcal{E})$ with $\delta x=0$, then $y=\delta^{-1}_{\mathcal{P},\mathcal{Q}}x$ satisfies $\delta y=x$ for any decomposition $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$. \end{propn} \begin{proof} Follows from (\ref{relde}). \end{proof} \subsection{Lie subalgebras in $\mathcal{W}$} Let $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$ be a decomposition. We say that $x\in \mathbf{W}$ has $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$-degree $(p,q)$ if $\mathbf{w}_{\mathcal{P},\mathcal{Q}}^{-1}(x)\in (S^p(\mathcal{P})\otimes S^q(\mathcal{Q}))\otimes\Lambda(\mathcal{E})$. We say that $x\in \mathbf{W}$ has $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$-degree $k$ if $\mathbf{w}_{\mathcal{P},\mathcal{Q}}^{-1}(x)\in \oplus_{p+q=k}(S^p(\mathcal{P})\otimes S^q(\mathcal{Q}))\otimes\Lambda(\mathcal{E})$. We define the $\mathbf{w}_\mathcal{E}$-degree as the $\mathbf{w}_{\mathcal{E},0}$-degree for the trivial decomposition $\mathcal{E}=\mathcal{E}\oplus 0$. Let $\mathfrak{g}$ be a sheaf of Lie algebras acting on $\mathcal{E}$. We call a $\mathbb{C}M$-linear map $\lambda:\mathfrak{g}\to\mathcal{W}$ {\em a realization} of $\mathfrak{g}$ if it is a Lie algebra morphism ($\mathcal{W}$ is considered as a Lie algebra with respect to the commutator $\ft [\cdot,\cdot]$) and for any $x\in\mathfrak{g}$ and $v\in\mathcal{E}$ one has $x(v)=\ft[\lambda(x),v]$. It is easy to check that any two realizations differ by a Lie algebra morphism of $\mathfrak{g}$ to the center of $\mathcal{W}$, so if $\mathfrak{g}$ is a sheaf of semisimple Lie algebras, there is at most one realization of $\mathfrak{g}$. Denote by ${\mathfrak{sp}}(\mathcal{E})$ the sheaf of symplectic Lie algebras with respect to $\varphi$. Since ${\mathfrak{sp}}(\mathcal{E})$ is semisimple, there is a unique realization $\rho_\mathcal{E}:{\mathfrak{sp}}(\mathcal{E}) \to \mathcal{W}$.
The image of this realization consists of elements having $\mathbf{w}_\mathcal{E}$-degree two. Let $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$ be a decomposition into Lagrangian subsheaves. Denote by ${\mathfrak{sp}}(\mathcal{P},\mathcal{E})$ the subsheaf of ${\mathfrak{sp}}(\mathcal{E})$ preserving $\mathcal{P}$. It is easy to check that ${\mathfrak{sp}}(\mathcal{P},\mathcal{E})$ can be realized as the subset of elements of $\mathcal{W}$ having $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$-degrees $(1,1)$ and $(2,0)$. Denote this realization by $\rho_{\mathcal{P},\mathcal{E}}:{\mathfrak{sp}}(\mathcal{P},\mathcal{E})\to \mathcal{W}$. On the other hand, ${\mathfrak{sp}}(\mathcal{P},\mathcal{E})$ is realized in $\mathcal{W}$ by $\rho_\mathcal{E}$. \begin{lemma}\label{lem1.1} Let $b\in {\mathfrak{sp}}(\mathcal{P},\mathcal{E})$. Then $$\rho_\mathcal{E}(b)=\rho_{\mathcal{P},\mathcal{E}}(b)+\frac{t}{2}trace(\bar b),$$ where $\bar b$ is $b$ restricted to $\mathcal{P}$. \end{lemma} \begin{proof} Direct computation using the fact that $\rho_{\mathcal{P},\mathcal{E}}(\bar b)$ is the $(1,1)$ $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$-degree component of $\rho_{\mathcal{P},\mathcal{E}}(b)$ in any decomposition $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$. \end{proof} \subsection{Filtrations on $\mathcal{W}$} We define two decreasing filtrations on $\mathcal{W}$ numbered by nonnegative integers. The $T$-filtration $F^T_\bullet\mathcal{W}$ is defined as follows. We ascribe to the elements of $\mathcal{E}$ degree 1 and to $t$ degree 2. Then $F^T_n\mathcal{W}$ consists of elements of $\mathcal{W}$ having the leading term of total degree $\geq n$. The $\mathcal{P}$-filtration, $F^\mathcal{P}_\bullet\mathcal{W}$, is first defined on $S(\mathcal{P})\otimes S(\mathcal{Q})$ by the subsets $F^\mathcal{P}_n=\oplus_{i\geq n}S^i(\mathcal{P})\otimes S(\mathcal{Q})$, $n=0,1,...$, and carried over to $\mathcal{W}$ via the Wick isomorphism.
We extend these filtrations to $\mathbf{W}$ in the natural way, setting, for example, $F^T_n\mathbf{W}=F^T_n\mathcal{W}\otimes\Lambda(\mathcal{E})$. We will use the following mnemonic notation. To indicate, for example, that a section $x\in \mathbf{W}$ belongs to $F^T_n\mathbf{W}$ we write $F^T(x)\ge n$. In the following we denote $\mathbf{S}(\mathcal{P})=S(\mathcal{P})\otimes\Lambda(\mathcal{E})$ embedded in $\mathbf{S}(\mathcal{P},\mathcal{Q})$ as $(S(\mathcal{P})\otimes 1)\otimes\Lambda(\mathcal{E})$. \begin{propn}\label{prop1.2} Let $\mathcal{E}=\mathcal{P}\oplus \mathcal{Q}$ be a decomposition of $\mathcal{E}$ into Lagrangian subsheaves. Then a) The Wick map $\mathbf{w}_{\mathcal{P},\mathcal{Q}}:\mathbf{S}(\mathcal{P},\mathcal{Q})\to\mathbf{W}$ has the following property: for $a\in \mathbf{S}(\mathcal{P})$ and arbitrary $c\in\mathbf{S}(\mathcal{P},\mathcal{Q})$ one has $\mathbf{w}(ac)=\mathbf{w}(a)\mathbf{w}(c)$. The filtrations on $\mathbf{W}$ have the properties: b) for $x,y\in \mathbf{W}$, if $F^\mathcal{P}(x)\ge k$, then $F^\mathcal{P}(xy)\ge k$; c) $F^\mathcal{P}(\delta^{-1}_{\mathcal{P},\mathcal{Q}}x)\geq F^\mathcal{P}(x)$; d) $F^T(\delta^{-1}_{\mathcal{P},\mathcal{Q}}x)>F^T(x)$. \end{propn} \begin{proof} Clear. \end{proof} \subsection{Fedosov's construction in the Wick case} \label{FC} Let $(M,\omega_t,\mathcal{P}_t)$ be a formal polarized symplectic manifold. For shortness we will write $\omega=\omega_t$, $\mathcal{P}=\mathcal{P}_t$. Let us denote $\mathcal{T}=\mathcal{T}^{\mathbb{C}}_M[[t]]$. It is easy to prove that there exists on $M$ a Lagrangian subbundle $\mathcal{Q}\subset\mathcal{T}$ such that $ \mathcal{T}=\mathcal{P}\oplus\mathcal{Q}$. In the following we set $\mathcal{E}=\mathcal{T}^*$ and consider the Fedosov algebra $\mathbf{W}=\mathbf{W}(\mathcal{E})$ with respect to $\varphi$ being the Poisson bracket inverse to $\omega$.
The decomposition $\mathcal{E}=\mathcal{P}^\perp\oplus\mathcal{Q}^\perp$ is Lagrangian with respect to this $\varphi$. Let $\nabla$ be a $\mathcal{P}$-symplectic connection on $M$ (see Section \ref{PSC}). Then the induced connection $\nabla:\mathcal{E}\to\mathcal{E}\otimes\Lambda^1\mathcal{E}$ on $\mathcal{E}$ preserves $\mathcal{P}^\perp$, i.e. $\nabla(\mathcal{P}^\perp)\subset\mathcal{P}^\perp\otimes\Lambda^1(\mathcal{E})$, and is flat on $\mathcal{P}^\perp$ along $\mathcal{P}$, i.e. for any $X,Y\in\mathcal{P}$ one has $(\nabla_X\nabla_Y-\nabla_Y\nabla_X-\nabla_{[X,Y]})(\mathcal{P}^\perp)=0$. We will identify $\mathcal{P}$ with $\mathcal{P}^\perp$ and $\mathcal{Q}$ with $\mathcal{Q}^\perp$ by the isomorphism $x\mapsto\omega(x,\cdot)$ between $\mathcal{T}$ and $\mathcal{E}$. So, we will allow ourselves to write $\mathcal{E}=\mathcal{P}\oplus\mathcal{Q}$. The connection $\nabla$ gives a derivation of the Fedosov algebra $\mathbf{W}$, which is an extension of the de Rham differential on functions. Analogously, $\nabla$ gives such derivations of the algebras $T(\mathcal{E})\otimes\Lambda(\mathcal{E})$, $S(\mathcal{E})\otimes\Lambda(\mathcal{E})$, and $(S(\mathcal{P})\otimes S(\mathcal{Q}))\otimes\Lambda(\mathcal{E})$. These derivations commute with the maps (\ref{rels}) and $\mathbf{w}_{\mathcal{P},\mathcal{Q}}$. For convenience, we will mark the elements of the Fedosov algebra lying in $\mathcal{E}\otimes 1$ by letters with a hat over them ($\hat{x}$), while by $dx$ we will denote the copy of $\hat{x}$ lying in $1\otimes\Lambda^1\mathcal{E}$. Let $\omega=\omega_{ij}dx_i\wedge dx_j$ in some local coordinates. It is easy to check that for $\widetilde{\delta}=\omega_{ij}\hat{x}_i\otimes dx_j$ one has \begin{equation}\label{reldelta} \begin{split} \delta&=\frac{1}{t}\mathrm{ad}(\widetilde{\delta}), \\ \widetilde{\delta}^2&=t\omega. \end{split} \end{equation} Since the torsion of $\nabla$ is equal to zero, \be{}\label{torsion} \nabla(\widetilde\delta)=0.
\ee{} Since $\nabla^2$ is a $\mathbb{C}M$-linear derivation of degree $0$ preserving $\mathcal{P}$, there is an element $R\in \rho_{\mathcal{P},\mathcal{E}}(\mathfrak{sp}(\mathcal{P},\mathcal{E}))\otimes\Lambda^2(\mathcal{E})$ such that $\nabla^2=\frac{1}{t}\mathrm{ad}(R)$. In particular, one has \be{}\label{FPR} F^\mathcal{P}(R)\geq 1. \ee{} According to Fedosov, \cite{Fe}, we also define $R^F\in\rho_\mathcal{E}({\mathfrak{sp}(\mathcal{P},\mathcal{E})})\otimes\Lambda^2(\mathcal{E})$ satisfying $\nabla^2=\ft\mathrm{ad}(R^F)$. It follows from (\ref{torsion}) that \be{} \delta(R)=\delta(R^F)=0. \ee{} Following Fedosov, we will consider connections on $\mathbf{W}$ of the form \be{} D=\nabla+\ft\mathrm{ad}(\gamma), \quad\quad \gamma\in \mathcal{W}\otimes\Lambda^1(\mathcal{E}). \ee{} We define the Wick curvature of $D$ as \be{*} \Omega_D=R+\nabla(\gamma)+\ft\gamma^2. \ee{*} According to Fedosov, we also define the Weyl (or Fedosov) curvature of $D$ as \be{}\label{weylcur} \Omega_D^F=R^F+\nabla(\gamma)+\ft\gamma^2. \ee{} Since by Lemma \ref{lem1.1} $R^F=R+\frac{t}{2}tr(\nabla^2|_\mathcal{P})$, we have \be{}\label{wickcur} \Omega^F_D=\Omega_D+\frac{t}{2}tr(\nabla^2|_\mathcal{P}). \ee{} One checks that \be{}\label{flatness} D^2=\ft\mathrm{ad}(\Omega_D)=\ft\mathrm{ad}(\Omega_D^F). \ee{} Let us take $\gamma$ in the form \be{} \gamma=\widetilde\delta+r, \quad\quad r\in \mathcal{W}\otimes\Lambda^1(\mathcal{E}), \quad F^T(r)\geq 3. \ee{} Then the connection $D$ has the form \be{}\label{Fcon} \nabla+\delta+\ft\mathrm{ad}(r). \ee{} Using (\ref{reldelta}) and (\ref{torsion}), we obtain that its Wick curvature is \be{}\label{Fcurv} \Omega_D=R+\nabla(\widetilde\delta+r)+\frac{1}{t}(\widetilde\delta+r)^2= \omega+\delta r+R+\nabla r+\ft r^2.
\ee{} \begin{propn}\label{prop1.3} There exists an element $r\in \mathcal{W}(\mathcal{E})\otimes\Lambda^1(\mathcal{E})$ such that a) $F^T(r)\geq 3$; b) $F^\mathcal{P}(r)\geq 1$; c) the connection $D=\nabla+\delta+\ft\mathrm{ad}(r)$ is flat, i.e. $D^2=0$; d) for its Wick curvature one has $$\Omega_D=\omega;$$ e) its Weyl curvature $\Omega_D^F$ belongs to $\Gamma(M,d\mathcal{P}^\perp)$ and there is the formula \be{}\label{Wc} \Omega_D^F=\omega+\ftt tr(\nabla^2|_\mathcal{P}). \ee{} \end{propn} \begin{proof} First of all, we apply the Fedosov method (\cite{Fe}, Theorem 5.2.2) to find $r$ satisfying d). According to (\ref{Fcurv}), $r$ must obey the equation \be{} \delta r=-(R+\nabla r+\ft r^2). \ee{} Let us look for $r$ being the limit of a sequence, $r=\lim r_k$, where $r_k\in \mathcal{W}(\mathcal{E})\otimes\Lambda^1(\mathcal{E})$, $k=3,4,...$, and $F^T(r_{k}-r_{k-1})\geq k$. As in Lemma 5.2.3 of \cite{Fe}, using Proposition \ref{prop1.2} d) and the fact that $F^T(R)\geq 2$, such $r_k$ can be calculated recursively: \begin{equation}\label{calr} \begin{split} r_3&=-\delta_{\mathcal{P},\mathcal{Q}}^{-1}(R) \\ r_{k+3}&=r_3-\delta_{\mathcal{P},\mathcal{Q}}^{-1}(\nabla r_{k+2}+ \frac{1}{t}r_{k+2}^2). \end{split} \end{equation} So, a) and d) are proven. Let us prove that $F^\mathcal{P}(r_k)\geq 1$ for all $k$. The inequality $F^\mathcal{P}(r_3)\geq 1$ follows from the fact that $F^\mathcal{P}(R)\geq 1$ and from Proposition \ref{prop1.2} c). Suppose that $F^\mathcal{P}(r_i)\geq 1$ for $i<k+3$, $k>0$. Then $F^\mathcal{P}(\nabla r_{k+2})\geq 1$ because $\nabla$ preserves $\mathcal{P}$. On the other hand, $F^\mathcal{P}(r_{k+2}^2)\geq 1$ because of Proposition \ref{prop1.2} b), therefore, as follows from (\ref{calr}), $F^\mathcal{P}(r_{k+3})\geq 1$ as well.
So, we have that $r$ being the limit of the convergent sequence $r_k$ satisfies the conditions a), b), and d) of the proposition. c) obviously follows from d) and (\ref{flatness}). e) follows immediately from d), (\ref{wickcur}), and Lemma \ref{lempolform}. \end{proof} \begin{propn}\label{prop1.3a} Let $\widetilde\nabla$ be another $\mathcal{P}$-symplectic connection on $M$. Let $\widetilde r\in\mathcal{W}(\mathcal{E})\otimes\Lambda^1(\mathcal{E})$ satisfy the conclusions of Proposition \ref{prop1.3}, in particular, the connection $\widetilde D=\widetilde\nabla+\delta+\ft\mathrm{ad}(\widetilde r)$ is flat. Then, there exists an element $B\in\mathcal{W}(\mathcal{E})$ such that a) $F^T(B)\geq 3$; b) $F^\mathcal{P}(B)\geq 1$; c) $e^{\ft\mathrm{ad} B}D=\widetilde D$. \end{propn} \begin{proof} Note that $\nabla-\widetilde\nabla$ can be presented as $\ft\mathrm{ad} R_0$, where $R_0\in\rho_{\mathcal{P},\mathcal{E}}(\mathfrak{sp}(\mathcal{P},\mathcal{E}))$. Therefore, $F^T(R_0)\geq 2$ and $F^\mathcal{P}(R_0)\geq 1$. Let us put $R_1=r-\widetilde r$. Then, $F^T(R_1)\geq 3$ and $F^\mathcal{P}(R_1)\geq 1$. We have $$\widetilde D=D-\ft\mathrm{ad}(R_0+R_1).$$ Since $\Omega_D=\Omega_{\widetilde D}=\omega$, using (\ref{torsion}) we obtain $$\delta(R_0)=0.$$ It follows that the element $B_0=\delta_{\mathcal{P},\mathcal{Q}}^{-1}(R_0)$ is such that $\delta(B_0)=R_0$ and $F^T(B_0)\geq 3$, $F^\mathcal{P}(B_0)\geq 1$. Replacing $D$ by $e^{\ft\mathrm{ad} B_0}D$ we obtain $$\widetilde D=D-\ft\mathrm{ad}(R'_0+R'_1),$$ where $F^T(R'_0)\geq 3$, $F^\mathcal{P}(R'_0)\geq 1$ and $F^T(R'_1)\geq 4$, $F^\mathcal{P}(R'_1)\geq 1$. Proceeding by induction on the $F^T$-filtration, we obtain a sequence $B_i\in\mathcal{W}(\mathcal{E})$ with increasing $F^T$-filtration and such that $\prod_{i=0}^\infty e^{\ft\mathrm{ad} B_i}(D)=\widetilde D$.
Since the elements $e^{\ft\mathrm{ad} B'}$, $B'\in\mathcal{W}(\mathcal{E})$, $F^T(B')\geq 3$, form a pro-unipotent Lie group, there exists an element $B\in\mathcal{W}(\mathcal{E})$, $F^T(B)\geq 3$, $F^\mathcal{P}(B)\geq 1$, such that $\prod_{i=0}^\infty e^{\ft\mathrm{ad} B_i}=e^{\ft\mathrm{ad} B}$. \end{proof} Let $D$ be a connection satisfying Proposition \ref{prop1.3}. Denote by $\mathcal{W}_D$ the subsheaf of $\mathcal{W}$ consisting of flat sections $a$, i.e. such that $Da=0$. Since $D$ is a derivation of $\mathbf{W}$, it is clear that $\mathcal{W}_D$ is a sheaf of subalgebras. Let $\sigma=id-(\delta\delta_{\mathcal{P},\mathcal{Q}}^{-1}+\delta_{\mathcal{P},\mathcal{Q}}^{-1}\delta)$. Then, as follows from (\ref{relde}), $\sigma: \mathcal{W}\to\mathbb{C}M$ is a projection, where $\mathbb{C}M$ is considered as the center of the algebra $\mathcal{W}$. \begin{propn}\label{prop1.4} a) The map $\sigma:\mathcal{W}_D\to\mathbb{C}M$ is a bijection. b) The inverse map $\eta:\mathbb{C}M\to \mathcal{W}_D$ has the form $\eta(f)=f+\hat f$, where $F^T(\hat f)>F^T(f)$. c) If $df\in\mathcal{P}$, then $F^\mathcal{P}(\hat f)\geq 1$. d) If $df\in\mathcal{P}$, then $\sigma(\eta(f)\eta(g))=fg$ for any $g\in\mathbb{C}M$. \end{propn} \begin{proof} Again, we apply the Fedosov iteration procedure. According to \cite{Fe}, Theorem 5.2.4, we look for $\eta(f)$ as a limit, $\eta(f)=\lim a_k$, where $a_k\in \mathcal{W}$ can be calculated recursively: \begin{equation}\label{calf} \begin{split} a_0&=f \\ a_{k+1}&=a_0+\delta_{\mathcal{P},\mathcal{Q}}^{-1}(\nabla a_k+\ft\mathrm{ad}(r)(a_k)). \end{split} \end{equation} Put $\hat f=\eta(f)-a_0$. As in \cite{Fe}, Theorem 5.2.4, one proves that such $\eta(f)$ and $\hat f$ satisfy a) and b). Now observe that $a_1-a_0=\delta_{\mathcal{P},\mathcal{Q}}^{-1}(1\otimes df)$ and if $df\in\mathcal{P}$, then $F^\mathcal{P}(a_1-a_0)\geq 1$.
By induction, we conclude that $F^\mathcal{P}(a_k-a_0)\geq 1$ for all $k\geq 1$. So $F^\mathcal{P}(\eta(f)-a_0)\geq 1$ as well, which proves c). Let us prove d). We have $\eta(f)\eta(g)=f\eta(g)+\hat f\eta(g)$. Since by c) $F^\mathcal{P}(\hat f)\geq 1$, $F^\mathcal{P}(\hat f\eta(g))\geq 1$ as well. It follows that $\sigma(\hat f\eta(g))=0$ and $\sigma(\eta(f)\eta(g))=\sigma(f\eta(g))=fg$, because $\sigma$ is a $\mathbb{C}M$-linear map and $\sigma(\eta(g))=g$. \end{proof} \subsection{Existence of PSP's} Let $(M,\omega,\mathcal{P})$ be a polarized symplectic manifold. Recall that in Subsection \ref{subsclass} we have assigned to any PSP $(\mu_t, \mathcal{O}_t)$ on $(M,\omega,\mathcal{P})$ an element $\tau(\mu_t,\mathcal{O}_t)\in{\mathcal Y}(M,\omega,\mathcal{P})$, which is a pair $(\omega_t,\mathcal{P}_t)$ being a deformation of the pair $(\omega,\mathcal{P})$. The form $\omega_t$ represents the extension class ${\rm cl}_e(\mu_t,\mathcal{O}_t)$. We show now that any element of ${\mathcal Y}(M,\omega,\mathcal{P})$ corresponds to a PSP. \begin{propn}\label{essent2} a) For any pair $(\omega_t,\mathcal{P}_t)\in{\mathcal Y}(M,\omega,\mathcal{P})$, there exists a PSP, $(\mu_t,\mathcal{O}_t)$, such that \be{}\label{lpa} \tau(\mu_t,\mathcal{O}_t)=(\omega_t,\mathcal{P}_t). \ee{} b) The Fedosov class of the corresponding star-product $(\mathbb{C}M,\mu_t)$ is represented by the form in $\omega+t\Gamma(M,d(\mathcal{P}_t^\perp))$ equal to \be{}\label{lpb} \theta_t=\omega_t+\ftt tr(\nabla^2|_{\mathcal{P}_t}), \ee{} where $\nabla$ is a $\mathcal{P}_t$-symplectic connection on the formal symplectic manifold $(M,\omega_t,\mathcal{P}_t)$. \end{propn} \begin{proof} Let $\mathbf{W}$ be the Fedosov algebra on $M$ corresponding to the symplectic form $\omega_t$.
Let $\nabla$ be a $\mathcal{P}_t$-symplectic connection on $M$ corresponding to $\omega_t$ and $D$ the flat connection on $\mathbf{W}$ constructed in Proposition \ref{prop1.3} c). Let $\mathcal{W}_D$ be the sheaf of flat sections of $\mathcal{W}$. Define a star-product $(\mathbb{C}M,\mu_t)$ on $M$ by carrying over the multiplication from $\mathcal{W}_D$ to $\mathbb{C}M$ via the map $\sigma$ from Proposition \ref{prop1.4}. Point d) of that proposition shows that, in fact, this star-product presents the PSP $(\mathbb{C}M,\mu_t,\mathcal{O}_{\mathcal{P}_t})$. We are going to prove that this star-product is as required. In the following we identify $\mathcal{W}_D$ with the corresponding PSP via $\sigma$. Let us prove (\ref{lpa}). Let $(U_\alpha)$ be an open covering of $M$ such that on each $U_\alpha$ there exist formal Darboux coordinates $x_{\alpha i}$, $y_{\alpha i}$, $x_{\alpha i}\in\mathcal{O}_{\mathcal{P}_t}$, with respect to $\omega_t$. Denote by $\nabla_\alpha$ the standard flat $\mathcal{P}_t$-symplectic connections over $U_\alpha$ such that the forms $dx_{\alpha i}$, $dy_{\alpha i}$ are flat sections in $\mathcal{P}^\perp_t$. Then, the connections $D_\alpha=\nabla_\alpha+\delta$ satisfy Proposition \ref{prop1.3} on $U_\alpha$ with $r=0$. Let $\mathcal{W}_{D_\alpha}$ be the star-product on $U_\alpha$ constructed in Proposition \ref{prop1.4} via flat sections of $D_\alpha$. It is easy to see that $\mathcal{W}_{D_\alpha}$ coincides with the Moyal-Wick PSP with respect to the Darboux coordinates $x_{\alpha i}$, $y_{\alpha i}$ (see Example \ref{examp2}), so $x_{\alpha i}$, $y_{\alpha i}$ are also Darboux coordinates for the bracket $\ft[\cdot,\cdot]$ in $\mathcal{W}_{D_\alpha}$. Since $D$ and $D_\alpha$ have the same Wick curvature $\omega_t$, there exist, by Proposition \ref{prop1.3a}, elements $B_\alpha\in\mathcal{W}$ such that $e^{\ft\mathrm{ad} B_\alpha}D_\alpha=D$.
It is clear that $e^{\ft\mathrm{ad} B_\alpha}$ acting on $\mathcal{W}$ takes $\mathcal{W}_{D_\alpha}$ to $\mathcal{W}_D$, and point b) of that proposition implies that it is the identity on $\mathcal{O}_{\mathcal{P}_t}$. Let $\Psi_{\alpha,\beta}=e^{-\ft\mathrm{ad} B_\alpha}e^{\ft\mathrm{ad} B_\beta}$. These may be considered as isomorphisms over $U_\alpha\cap U_\beta$ gluing the star-products $\mathcal{W}_{D_\alpha}$ to a global star-product on $M$ that is obviously isomorphic to $\mathcal{W}_D$. We see that the functions $x_{\alpha i}$, $y_{\alpha i}$ form local Darboux coordinates corresponding to that star-product. So, the characteristic 2-form ${\rm cl}_P(\mathcal{W}_D)$ (see Subsection \ref{subsclass}) is locally represented as $dy_{\alpha i}\wedge dx_{\alpha i}$. On the other hand, this form is equal to $\omega_t$, since from the very beginning the functions $x_{\alpha i}$, $y_{\alpha i}$ have been chosen as Darboux coordinates for it. Hence, $\tau(\mathcal{W}_D)=(\omega_t,\mathcal{P}_t)$. b) Follows from Proposition \ref{prop1.3} e) and Lemma \ref{lempolform}. \end{proof} \section{The main theorem and corollaries} Let $(M,\omega,\mathcal{P})$ be a polarized symplectic manifold. Denote by ${\mathcal Y}$ the set of pairs $(\omega_t,\mathcal{P}_t)$, where $\omega_t=\omega+t\omega_1+\cdots$ is a deformed symplectic form and $\mathcal{P}_t$, $\mathcal{P}_0=\mathcal{P}$, its polarization. Let $Aut(M)$ be the group of formal automorphisms of $M$. \begin{thm}\label{thmm} a) The equivalence classes of PDQ's on $(M,\omega,\mathcal{P})$ are in one-to-one correspondence with the orbits in ${\mathcal Y}$ under the $Aut(M)$-action. b) Let the pair $(\omega_t,\mathcal{P}_t)$ be a point on the orbit corresponding to a PDQ $(\mathbb{A}_t,\mathbb{O}_t)$.
Then, $(\mathbb{A}_t,\mathbb{O}_t)$ is isomorphic to a PSP, $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$, where $\mathcal{O}_t$ consists of functions constant along $\mathcal{P}_t$ and the multiplication $\mu_t$ satisfies the condition \be{*} \mu_t(f,g)=fg \quad\quad \mbox{for}\ \ f\in\mathcal{O}_t,\ g\in\mathbb{C}M. \ee{*} c) The form $\omega_t$ represents the extension class ${\rm cl}_e(\mathbb{A}_t,\mathbb{O}_t)\in H^1(M,\Omega^{1cl}_{\mathcal{O}_t})$ associated with $(\mathbb{A}_t,\mathbb{O}_t)$. d) Under the hypothesis of b), the Fedosov class of the deformation quantization $\mathbb{A}_t$ can be represented by a form $\theta_t$ that is a deformation of $\omega$ and polarized by $\mathcal{P}_t$. It is defined by the formula \be{}\label{eqFC} \theta_t=\omega_t+\ftt tr(\nabla^2|_{\mathcal{P}_t}), \ee{} where $\nabla$ is a $\mathcal{P}_t$-symplectic connection on the formal symplectic manifold $(M,\omega_t,\mathcal{P}_t)$. \end{thm} \begin{proof} Parts a) and b) follow from Proposition \ref{eqcl}, Corollary \ref{corin}, and Proposition \ref{essent2} a). Part c) follows from Lemma \ref{lemrepcl}. Part d) is the same as Proposition \ref{essent2} b). \end{proof} \begin{remark}\label{remPC} We have interpreted the form $\omega_t$ from (\ref{eqFC}) as a representative of the extension class associated to a PDQ (see Section 8). The form $tr(\nabla^2|_{\mathcal{P}_t})$ is the curvature form of the connection induced by $\nabla|_{\mathcal{P}_t}$ on the complex line bundle $\det(\mathcal{P}_t)$. Actually, $\det(\mathcal{P}_t)$ can be presented as a line $\mathcal{O}_t$-bundle, i.e. as a locally free sheaf of $\mathcal{O}_t$-modules of rank one. Indeed, $\det(\mathcal{P}_t)=\mathbb{C}M\otimes_{\mathcal{O}_t}\mathcal{L}$, where $\mathcal{L}=\Omega_{\mathcal{O}_t}^n$, $n=\half\dim M$.
The form $-tr(\nabla^2|_{\mathcal{P}_t})$, as well as {\em minus} the curvature form of any other connection on $\mathcal{L}$, can be interpreted as a representative of the extension class of an $\mathcal{O}_t$-extension of $Der(\mathcal{O}_t)$ associated with $\mathcal{L}$. Indeed, let $\widetilde T_\mathcal{L}$ denote the sheaf of $\mathcal{O}_t$-differential operators on $\mathcal{L}$ of order at most one. Then $\widetilde T_\mathcal{L}$ equipped with the left $\mathcal{O}_t$-module structure and the Lie bracket given by the commutator naturally forms an $\mathcal{O}_t$-extension of $Der(\mathcal{O}_t)$. Splittings of this extension are flat connections on $\mathcal{L}$. Let $d_\alpha$ be local flat connections on $\mathcal{L}$ in some open covering $\{U_\alpha\}$ of $M$. Then, $d_\alpha-d_\beta$ are closed 1-forms of $\Omega_{\mathcal{O}_t}^1$ that form a \v Cech cocycle. Hence, there exist smooth 1-forms $f_\alpha\in\mathcal{P}^\perp_t$ such that $d_\alpha-d_\beta=f_\alpha-f_\beta$. The differential operators $d_\alpha-f_\alpha$ form a global connection on $\mathcal{L}$, $\nabla_\mathcal{L}$, with the curvature locally equal to $-df_\alpha$. On the other hand, by definition (see Section 8), the extension class of $\widetilde T_\mathcal{L}$ is represented by the form $df_\alpha\in d\mathcal{P}^\perp_t$, i.e. by $-\nabla_\mathcal{L}^2$. So, projecting the equality (\ref{eqFC}) to $\Gamma(M,d\mathcal{P}^\perp_t)/d(\Gamma(M,\mathcal{P}^\perp_t))$, we obtain \be{} [\theta_t]={\rm cl}_e(\mu_t,\mathcal{O}_t)-\ftt{\rm cl}_e(\widetilde T_{\det(\mathcal{P}_t)}). \ee{} Details are left to the reader. Note that the form $-\frac{1}{2\pi\sqrt{-1}}tr(\nabla^2|_{\mathcal{P}_t})$ represents the first Chern class of $\mathcal{P}_t$, \cite{KN}. \end{remark} \begin{cor} Let $\mathbb{A}_t$ be a deformation quantization on $(M,\omega)$. Suppose its Fedosov class $cl_F(\mathbb{A}_t)$ is represented by a form $\theta_t$ that has a polarization $\mathcal{P}_t$.
Then $\mathbb{A}_t$ can be extended to a PDQ $(\mathbb{A}_t,\mathbb{O}_t)$, where $\mathbb{O}_t$ is isomorphic to $\mathcal{O}_{\mathcal{P}_t}$. \end{cor} \begin{proof} Let $\nabla$ be a $\mathcal{P}_t$-symplectic connection on the formal symplectic manifold $(M,\theta_t,\mathcal{P}_t)$. Let $(\mathbb{C}M,\mu_t,\mathcal{O}_t)$ be a PSP such that $$\tau(\mathbb{C}M,\mu_t,\mathcal{O}_t)=(\theta_t-\ftt tr(\nabla^2|_{\mathcal{P}_t}),\mathcal{P}_t).$$ By (\ref{eqFC}), $cl_F(\mathbb{C}M,\mu_t)=[\theta_t]=cl_F(\mathbb{A}_t)$, therefore the star-products $\mathbb{A}_t$ and $(\mathbb{C}M,\mu_t)$ are equivalent. \end{proof} \begin{remark} All constructions of the paper can be extended to the case when $M$ is a formal manifold, $M_\lambda$, which is $M$ endowed with the function sheaf $C_M^\infty[[\lambda]]$, $\lambda$ being a formal parameter. A formal polarized symplectic manifold is a triple, $(M_\lambda,\omega_\lambda,\mathcal{P}_\lambda)$, where $\omega_\lambda$ is a formal symplectic form on $M_\lambda$ and $\mathcal{P}_\lambda$ its polarization. The above construction of a polarized star-product applied to a formal polarized symplectic manifold $(M_\lambda,\omega_\lambda,\mathcal{P}_\lambda)$ gives the following \begin{propn} Let $(\mathbb{A}_t,\mathbb{O}_t)$ be a PDQ and $(\omega_t,\mathcal{P}_t)$ an element on the orbit corresponding to $(\mathbb{A}_t,\mathbb{O}_t)$. Then, there exists on $(M_\lambda,\omega_\lambda,\mathcal{P}_\lambda)$ a PSP $$(C_M^\infty[[\lambda]][[t]],\mu_{\lambda,t},\mathcal{O}_{\lambda,t})$$ such that $(\mathbb{A}_t,\mathbb{O}_t)$ is equivalent to the diagonal sub-family $(C_M^\infty[[t]],\mu_{t,t},\mathcal{O}_{t,t})$ obtained by the substitution $\lambda=t$. \end{propn} \end{remark} \small e-mail: [email protected] \end{document}
\begin{document} \title{Lacunarity of Han-Nekrasov-Okounkov $q$-series} \begin{abstract} A power series is called lacunary if ``almost all'' of its coefficients are zero. Integer partitions have motivated the classification of lacunary specializations of Han's extension of the Nekrasov-Okounkov formula. More precisely, we consider the modular forms \[F_{a,b,c}(z) \coloneqq \frac{\eta(24az)^a \eta(24acz)^{b-a}}{\eta(24z)},\] defined in terms of the Dedekind $\eta$-function, for integers $a,c \geq 1$, where $b \geq 1$ is odd throughout. Serre \cite{Serre} determined the lacunarity of the series when $a = c = 1$. Later, Clader, Kemper, and Wage \cite{REU} extended this result by allowing $a$ to be general, and completely classified the $F_{a,b,1}(z)$ which are lacunary. Here, we consider all $c$ and show that for ${a \in \{1,2,3\}}$, there are infinite families of lacunary series. However, for $a \geq 4$, we show that there are finitely many triples $(a,b,c)$ such that $F_{a,b,c}(z)$ is lacunary. In particular, if $a \geq 4$, $b \geq 7$, and $c \geq 2$, then $F_{a,b,c}(z)$ is not lacunary. Underlying this result is the $t$-core partition conjecture, proved by Granville and Ono \cite{Granville}. \end{abstract} \section{Introduction and Statement of Results} A series $\displaystyle{\sum_{n=0}^{\infty} a(n)q^n}$ is said to be \textit{lacunary} if ``almost all'' of its coefficients are zero; that is, if we have $$ \displaystyle{\lim_{x \rightarrow \infty} \frac{ \# \{ 0 \leq n < x : a(n)=0 \} }{x}=1}. $$ Two examples of generating functions which are lacunary come from identities due to Euler and Jacobi: \begin{equation}\label{Euler} \prod_{n = 1}^{\infty} \left( 1-q^{n} \right) = \sum_{m= -\infty}^{\infty} (-1)^{m}q^{\frac{3m^{2}+m}{2}} \end{equation} \begin{equation}\label{Jacobi1} \prod_{n = 1}^\infty(1-q^{n})^3 = \sum_{m = 0}^{\infty} (-1)^{m}(2m+1)q^{\frac{m^2+m}{2}}.
\end{equation} These combinatorial identities have partition-theoretic interpretations. For example, Euler's identity (\ref{Euler}) gives a recurrence for calculating $p(n)$, the number of partitions of $n$. Further, they are all related to modular forms by the Dedekind $\eta$-function, \begin{equation} \label{eta} \eta(z) \coloneqq q^{\frac{1}{24}} \prod_{n = 1}^\infty (1-q^{n}) \end{equation} where $q=e^{2\pi i z}$ and $\text{Im}(z)>0$. It is well-known that $\eta(24z)$ is a weight $\frac{1}{2}$ modular cusp form on $\Gamma_{0}(576)$ with Nebentypus character $\left( \frac{12}{n} \right)$. For background on modular forms, see \cite{Ono}. In light of (\ref{Euler}) and (\ref{Jacobi1}), it is natural to investigate the lacunarity of $$\displaystyle{f_{r}(z) \coloneqq \prod_{n = 1}^\infty(1-q^{n})^{r}= \sum_{m=0}^{\infty} \tau_{r}(m)q^{m}}.$$ The series $f_{r}(z)$ is closely related to modular forms via the Dedekind $\eta$-function. Indeed, $\eta(24z)^{r}=q^{r}f_{r}(24z)$ is a modular cusp form of weight $\frac{r}{2}$ on $\Gamma_{0}(576)$. Via the connection of $f_{r}(z)$ to modular forms, Serre \cite{Serre} investigated the lacunarity of $\eta(24z)^{r}$ and thus of $f_{r}(z)$: he proved that, if $r$ is even, then $f_{r}(z)$ is lacunary if and only if $r \in \{ 2,4,6,8,10,14,26 \}$. \begin{remark} If $r$ is odd, the lacunarity of $f_{r}(z)$ is not as well-understood since, in this case, $\eta(24z)^r$ has half-integral weight, and so the methods developed by Serre are not applicable. Of course, it is evident from $(\ref{Euler})$ and $(\ref{Jacobi1})$ that both $f_1(z)$ and $f_3(z)$ are lacunary. \end{remark} At first glance, it is not obvious that the series $f_{r}(z)$ is always a relative of the partition generating function \[f_{-1}(z) = \sum_{m = 0}^\infty p(m) q^m = \prod_{n = 1}^\infty \frac{1}{1 - q^n}.\] However, work by Nekrasov and Okounkov \cite{NO} provides the key connection via an extension of this generating function.
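Identities such as (\ref{Euler}) and (\ref{Jacobi1}) are easy to check by machine, coefficient by coefficient. The sketch below (plain Python; the helper name `eta_product_coeffs` is ours) expands the infinite products far enough and compares them with the corresponding sparse sums.

```python
def eta_product_coeffs(power, n_terms):
    """Coefficients of prod_{n>=1} (1 - q^n)^power, power >= 1,
    up to (but not including) q^n_terms, by repeated multiplication."""
    c = [1] + [0] * (n_terms - 1)
    for n in range(1, n_terms):
        for _ in range(power):
            # multiply the truncated series by (1 - q^n) in place
            for k in range(n_terms - 1, n - 1, -1):
                c[k] -= c[k - n]
    return c

N = 200

# Euler: prod (1-q^n) = sum over all integers m of (-1)^m q^{(3m^2+m)/2}
euler = [0] * N
for m in range(-N, N + 1):
    e = (3 * m * m + m) // 2
    if 0 <= e < N:
        euler[e] += -1 if m % 2 else 1
assert eta_product_coeffs(1, N) == euler

# Jacobi: prod (1-q^n)^3 = sum over m >= 0 of (-1)^m (2m+1) q^{(m^2+m)/2}
jacobi = [0] * N
for m in range(N):
    e = (m * m + m) // 2
    if e < N:
        jacobi[e] += (2 * m + 1) * (-1 if m % 2 else 1)
assert eta_product_coeffs(3, N) == jacobi
```

Both sides agree up to $q^{199}$; the nonzero exponents on the right-hand sides grow quadratically, which is exactly the lacunarity of $f_1(z)$ and $f_3(z)$ noted in the remark above.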
Recall that for a partition $\lambda$ of an integer $n \geq 1$, we can label each box $u$ of its Ferrers diagram with a positive integer called the \textit{hook length}: the number of boxes $v$ such that $v = u$, $v$ lies in the same column and below $u$, or $v$ lies in the same row and to the right of $u$. For example, when $n = 11$, we have the following Ferrers diagram for the partition $\lambda'=6+4+1$: \begin{figure} \caption{The Ferrers diagram of $\lambda' = 6 + 4 + 1$} \label{Ferrers} \end{figure} We define the multiset of all hook lengths of $\lambda$ to be $\mathcal{H}(\lambda)$. For $\lambda'$, we have $\mathcal{H}(\lambda') = \{1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 8\}$. Nekrasov and Okounkov \cite{NO} generalized a result of Macdonald \cite{MacDonald} to create a partition-theoretic generating function. They showed that, for any $b \in \mathbb{C}$, one has $$f_{b-1}(z)= \prod_{n = 1}^\infty(1-q^{n})^{b-1} =\sum_{\lambda \in \mathcal{P}}q^{|\lambda|} \prod_{h \in \mathcal{H}(\lambda)} \left( 1- \frac{b}{h^{2}} \right) .$$ Given an integer $a \geq 1$, we consider the subset $\mathcal{H}_{a}(\lambda) \subseteq \mathcal{H}(\lambda)$ defined by \begin{equation}\label{Ha} \mathcal{H}_a(\lambda) \coloneqq \{ h\, \colon \, h \in \mathcal{H}(\lambda), h \equiv 0 \pmod{a} \}. \end{equation} A partition $\lambda$ is said to be an \textit{$a$-core} if $\mathcal{H}_{a}(\lambda)= \emptyset$. For $\lambda'$, we have that $\mathcal{H}_{2}(\lambda')= \{ 2,2,4,6,8 \}$ and that $\lambda'$ is an $a$-core for $a=7$ and all $a \geq 9$. Han \cite{Han} generalized Nekrasov and Okounkov's formula to incorporate the set $\mathcal{H}_{a}(\lambda)$: \begin{equation} \displaystyle\sum_{\lambda \in \mathcal{P}} q^{|\lambda|} \prod_{h \in \mathcal{H}_a(\lambda)} \left( y-\frac{aby}{h^2}\right) = \prod_{n = 1}^\infty \frac{(1-q^{an})^a(1 - (yq^a)^n)^{b - a}}{1 - q^{n}}, \end{equation} where $b,y \in \mathbb{C}$. When $y = a = 1$, one recovers the identity of Nekrasov and Okounkov.
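The hook-length bookkeeping above, and the Nekrasov-Okounkov formula itself, can be checked by direct computation. The sketch below (plain Python; `partitions`, `hook_lengths`, and `no_coeff` are our own helper names) recomputes $\mathcal{H}(\lambda')$ for $\lambda' = 6+4+1$ and compares both sides of the formula at $b=3$ using exact rational arithmetic.

```python
from fractions import Fraction
from math import prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_lengths(la):
    """Sorted multiset H(la): arm + leg + 1 for each box of the diagram."""
    if not la:
        return []
    conj = [sum(1 for p in la if p > j) for j in range(la[0])]  # conjugate partition
    return sorted(la[i] - j + conj[j] - i - 1
                  for i in range(len(la)) for j in range(la[i]))

H = hook_lengths((6, 4, 1))
assert H == [1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 8]            # H(lambda') as above
assert [h for h in H if h % 2 == 0] == [2, 2, 4, 6, 8]   # H_2(lambda')
assert all(h % 7 for h in H) and all(h % 9 for h in H)   # 7-core and 9-core

# Nekrasov-Okounkov at b = 3: the sum over |la| = n of prod_h (1 - 3/h^2)
# is the q^n coefficient of prod_{m>=1} (1 - q^m)^2.
def no_coeff(n, b):
    return sum(prod(Fraction(h * h - b, h * h) for h in hook_lengths(la))
               for la in partitions(n))

def product_coeffs(power, n_terms):
    c = [1] + [0] * (n_terms - 1)
    for m in range(1, n_terms):
        for _ in range(power):
            for k in range(n_terms - 1, m - 1, -1):
                c[k] -= c[k - m]
    return c

assert [no_coeff(n, 3) for n in range(7)] == product_coeffs(2, 7)
```

The same comparison at other integer values of $b$ exercises Han's refinement as well, since $y=a=1$ reduces it to the identity checked here.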
We are interested in the case where $y = q^{a(c - 1)}$ for $c \geq 1$. Hence, for $a,b,c \geq 1$ we define \begin{equation}\label{Fabc} F_{a,b,c}(z) \coloneqq \frac{\eta(24az)^a \eta(24acz)^{b-a}}{\eta(24z)}= q^{r} \prod_{n = 1}^\infty \frac{(1-q^{24an})^{a} (1-q^{24acn})^{b-a}}{1-q^{24n}}, \end{equation} where $r \coloneqq abc+a^{2}-a^{2}c-1$. \begin{remark} In Lemma \ref{Fabcmod}, we show that $F_{a,b,c}(z)$ is a weakly holomorphic modular form of weight $\frac{b-1}{2}$ whenever $b$ is odd. From now on, we assume $a,b,c \geq 1$ are all integers and $b$ is odd. \end{remark} Note that this is a generalization of Clader, Kemper, and Wage's \cite{REU} work, in which they took $c = 1$, corresponding to $y = 1$. In view of Serre's work and Han's partition-theoretic interpretation, they investigated the lacunarity of $F_{a,b,1}(z)$ and showed that there are finitely many pairs $(a,b)$ for which $F_{a,b,1}(z)$ is lacunary. The complete list they gave is replicated in the table below: \begin{center} \begin{tabular}{|c | c|} \hline $a$ & $b$ such that $F_{a,b,1}$ is lacunary \\ \hline $1$ & $\{ 3,5,7,9,11,15,27 \}$ \\ $2$ & $\{ 3,5,7 \}$ \\ $4$ & $ \{ 5,7 \}$ \\ $5$ & $ \{ 7,11 \}$ \\ $6$ & $ \emptyset $ \\ $7$ & $ \{9,15 \}$ \\ \hline \end{tabular} \end{center} We now turn our attention to the case of general $c \geq 1$. It turns out that when we allow $c$ to vary, there are infinitely many lacunary $F_{a,b,c}(z)$. In fact, our first theorem shows that for $a \in \{1,2,3\}$, there exist infinite families of triples $(a,b,c)$ such that $F_{a,b,c}(z)$ is lacunary. \begin{thm} \label{infinite} For $F_{a,b,c}(z)$ as defined above, the following are true: \\ \noindent (1) If $b = 3$ and $a \in \{1,2,3\}$, then $F_{a,b,c}(z)$ is lacunary for all $c$. \\ \noindent (2) If $b = 5$ and $a \in \{1,2\}$, then $F_{a,b,c}(z)$ is lacunary for all $c$. \end{thm} \begin{remark} Note that this is not a complete classification of lacunary $F_{a,b,c}(z)$ for $a \in \{ 1,2,3\}$.
\end{remark} However, if $a \geq 4$, then we obtain a different phenomenon. Namely, there are only finitely many triples $(a,b,c)$ such that $F_{a,b,c}(z)$ is lacunary. \begin{thm} \label{finite} If $a \geq 4$, $b \geq 1$ is odd, and $c \geq 2$, then $F_{a,b,c}(z)$ is not lacunary, apart from three possible exceptions where $(a,b,c) \in \{(4,5,3), (4,5,5), (4,5,11)\}$. \end{thm} To obtain this result, we make use of the theory of modular forms with complex multiplication, Granville and Ono's \cite{Granville} proof of the $t$-core partition conjecture, and the P\'olya-Vinogradov Inequality. We begin by recalling some preliminaries in \S \ref{prelim}. In \S \ref{proofs}, we prove Theorem \ref{infinite} and Theorem \ref{finite}. Finally, in \S \ref{examples} we conjecture that $F_{4,5,3}(z)$, $F_{4,5,5}(z)$, and $F_{4,5,11}(z)$ are lacunary and discuss a method by which one can show this. \section{Preliminaries} \label{prelim} In this section we recall the modularity properties of $\eta$-quotients as well as various conditions on modular forms for lacunarity. \subsection{$\eta$-quotients} \label{etaquot} Let $M_k(\Gamma_0(N), \chi)$ (resp.\ $S_k(\Gamma_0(N), \chi)$) denote the complex vector space of holomorphic (resp.\ cuspidal) modular forms of weight $k$ with Nebentypus character $\chi$ on $\Gamma_0(N)$. An important example of a cuspidal modular form is the Dedekind $\eta$-function, defined in (\ref{eta}). Products of powers of the Dedekind $\eta$-function, called $\eta$-quotients, are central to this paper. Formally, an $\eta$-quotient is any function $f$ that can be written as \begin{equation} f(z) = \prod_{\delta \mid N} \eta(\delta z)^{r_\delta} \end{equation} for some $N$ and integers $r_{\delta}$. Theorem 1.64 of \cite{Ono} gives conditions under which certain $\eta$-quotients are weakly holomorphic, meaning that their poles are supported at the cusps.
In particular, if there is $N$ such that \[ k \coloneqq \frac{1}{2} \sum_{\delta \mid N} r_{\delta} \in \mathbb{Z}, \] \[\sum_{\delta \mid N} \delta r_\delta \equiv 0 \pmod {24},\] and \[\sum_{\delta \mid N} \frac{N}{\delta}r_{\delta} \equiv 0 \pmod {24},\] then $f(z)$ is a weakly holomorphic modular form of weight $k$ on $\Gamma_0(N)$ with Nebentypus character $\chi(d) \coloneqq \left(\frac{(-1)^k s}{d}\right)$, where $\displaystyle s \coloneqq \prod_{\delta \mid N} \delta^{r_\delta}$. We can also compute the order of vanishing of $f(z)$ at a cusp $\frac{x}{y}$, for $y \mid N$ and $\gcd (x,y) = 1$, using the formula from Theorem 1.65 of \cite{Ono}: \begin{equation}\label{cuspeq} \frac{N}{24}\sum_{\delta \mid N}\frac{\gcd (y, \delta)^2 r_\delta}{\gcd (y, \frac{N}{y})y \delta}. \end{equation} By considering the order of vanishing for all possible $y \mid N$, one can determine whether $f(z)$ is holomorphic or cuspidal. Applying this theory to our series $F_{a,b,c}(z)$, we have the following: \begin{lemma}\label{Fabcmod} Let $F_{a,b,c}(z)$ be as in (\ref{Fabc}). Then $F_{a,b,c}(z)$ is a weakly holomorphic modular form of weight $k=\frac{b-1}{2}$ on $\Gamma_{0}(576ac)$ with Nebentypus character $$ \chi_{F}(d) = \begin{cases} \left( \frac{(-1)^{k}ac}{d} \right) & \text{if } a \text{ is even,} \\ \left( \frac{(-1)^{k}a}{d} \right) & \text{if } a \text{ is odd.} \end{cases} $$ Furthermore, $F_{a,b,c}(z)$ is holomorphic if and only if $b \geq a$ and cuspidal if and only if $b>a$. \end{lemma} \begin{remark}\label{leveloptimality} We choose to include a factor of $24$ in the definition of $F_{a,b,c}(z)$ to guarantee that the conditions mentioned above are satisfied for all triples $(a,b,c)$. This implies that $F_{a,b,c}(z)$ is a weakly holomorphic modular form on $\Gamma_0(N)$ where $N = 576ac$. In general, this $N$ is not necessarily optimal.
The optimal $N$ may require altering the factor of 24 in the definition of $F_{a,b,c}(z)$ to some smaller $m$ satisfying $mr \equiv 0 \pmod{24}$ for $r$ defined in (\ref{Fabc}). The optimal level of this new modular form is then $N_{opt} \coloneqq \operatorname{lcm}(a,c)nm$ where the integer $n \geq 1$ is chosen minimally to satisfy $\frac{\operatorname{lcm}(a,c)}{ac} \cdot n(b-a) \equiv 0 \pmod{24}$. \end{remark} \subsection{Conditions on Modular Forms for Lacunarity} \label{modlac} The lacunarity of certain modular forms is already known. Suppose first that $f(z)$ is a weight-one holomorphic modular form on a congruence subgroup. Deligne and Serre (Proposition 9.7 in \cite{DeligneSerre}) showed that $f(z)$ must be lacunary by expressing its coefficients in terms of an Artin $L$-function. Next, suppose that $f(z)$ is a weakly holomorphic modular form of integral weight $k \geq 2$. If $f(z)$ has a pole at one of its cusps, then it cannot be lacunary. This can be shown using the Hardy-Littlewood circle method, which gives estimates for the magnitude of the Fourier coefficients of $f(z)$ as in Chapter 5 of \cite{Apostol}. Additionally, if $f(z)$ is holomorphic but not cuspidal, then it cannot be lacunary, since $f(z)$ can be expressed as a linear combination of Eisenstein series together with a cusp form, and known bounds on the coefficients of such forms preclude lacunarity. In light of these remarks, we may restrict our study of the lacunarity of $F_{a,b,c}(z)$ to the space of cusp forms of level $N$ and integral weight $k \geq 2$. Central to this study are modular forms with complex multiplication. Serre \cite{Serre} proved that an element of $S_{k}(\Gamma_{0}(N),\chi)$ is lacunary if and only if it is expressible as a linear combination of forms with complex multiplication.
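Each of these criteria is a statement about Fourier coefficients, which makes direct experimentation easy: the product in (\ref{Fabc}) can be expanded by brute force. The sketch below is our own illustration (not code from the paper); it returns the coefficients of $q^{-r}F_{a,b,c}(z)$ as a series in $x = q^{24}$, and can be tested against the immediate identities $F_{1,1,1}(z) = 1$ and $q^{-2}F_{1,3,1}(z) = \prod_{n}(1-q^{24n})^{2}$.

```python
def F_series(a, b, c, N):
    """Coefficients A(0..N) with q^{-r} F_{a,b,c}(z) = sum_m A(m) x^m, x = q^24.

    Brute-force truncated-series illustration: multiply or divide by each factor
    (1 - x^{an})^a (1 - x^{acn})^{b-a} / (1 - x^n) in turn.
    """
    A = [0] * (N + 1)
    A[0] = 1

    def mul(s):  # multiply the series by (1 - x^s)
        for i in range(N, s - 1, -1):
            A[i] -= A[i - s]

    def div(s):  # multiply the series by 1/(1 - x^s) = 1 + x^s + x^{2s} + ...
        for i in range(s, N + 1):
            A[i] += A[i - s]

    for n in range(1, N + 1):
        if a * n <= N:
            for _ in range(a):
                mul(a * n)
        if a * c * n <= N:
            step = mul if b >= a else div  # exponent b - a may be negative
            for _ in range(abs(b - a)):
                step(a * c * n)
        div(n)
    return A
```

For instance, `F_series(1, 3, 1, 4)` returns the first coefficients of $\prod_{n}(1-x^{n})^{2}$.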
We denote the space of cusp forms of weight $k$ with complex multiplication and Nebentypus character $\chi$ on $\Gamma_{0}(N)$ by $S^{CM}_k(\Gamma_{0}(N), \chi)$ and will sometimes call forms with complex multiplication CM forms. Recall that given any modular form $f(z)=\sum_{m=0}^{\infty}a(m)q^{m} \in M_{k}(\Gamma_{0}(N), \chi)$, the action of the Hecke operator $T_{s}$ on $f(z)$ is defined by $$ f(z) | T_{s}= \displaystyle{\sum_{m=0}^{\infty}\left( \sum_{d \mid \gcd(s,m)} \chi(d)d^{k-1}a(sm/d^{2})\right)q^{m}},$$ where $\chi(m)=0$ if $\gcd(N,m) \neq 1$. When $s=p$ is prime, this definition reduces to $$f(z) | T_{p} = \displaystyle{\sum_{m=0}^{\infty}\left( a(pm)+\chi(p)p^{k-1}a(m/p) \right)q^{m}},$$ where we define $a(m/p) \coloneqq 0$ whenever $p \nmid m$. We now state an extension, given in \cite{REU}, of a well-known lemma of Serre \cite{Serre}. It says that given a cusp form with complex multiplication, the action of certain Hecke operators is zero. \begin{lemma} \label{cmlem} Let $g \in S^{CM}_k(\Gamma_{0}(N),\chi)$. If $s$ is an integer such that $\gcd(N,s)=1$ and there is no ideal of norm $s$ in the ring of integers of $\mathbb{Q}(\sqrt{-d})$ for any $d > 0$ with $d \mid N$, then $g | T_{s} =0$. \end{lemma} \begin{remark}Note that for a given $N$, there always exists an $s$ satisfying the hypotheses of the previous lemma. This follows since a positive proportion of all integers are relatively prime to $N$ and, for a fixed $d$, $0 \% $ of integers are norms of ideals in the ring of integers of $\mathbb{Q}(\sqrt{-d})$. \end{remark} \section{Proofs of Theorems} \label{proofs} \subsection*{Proof of Lemma \ref{Fabcmod}} For $F_{a,b,c}(z)$, take $N = 576ac$ and $\delta_1 = 24a$, $\delta_2 = 24ac$, $\delta_3 = 24$ as seen in \S \ref{etaquot}. So $r_{\delta_1} = a$, $r_{\delta_2} = b - a$, and $r_{\delta_3} = -1$.
Then $F_{a,b,c}(z)$ satisfies the appropriate conditions to be a weakly holomorphic modular form with Nebentypus character $\chi_F(d) = \left( \frac{(-1)^k 24^{b - 1}a^b c^{b - a}}{d}\right)$. Properties of the Kronecker symbol give the desired $\chi_F$ for $a$ even and odd. The order of vanishing at a cusp $\frac{x}{y}$ obtained using (\ref{cuspeq}) is given by the expression \begin{equation*} \frac{\gcd(y, 24a)^2\, ac + \gcd(y, 24ac)^2 (b - a) - \gcd(y, 24)^2\, ac}{y \gcd(y, \frac{576ac}{y})}. \end{equation*} \noindent As $\gcd(y,24a) \geq \gcd(y,24)$, the numerator is nonnegative when $b \geq a$, so $F_{a,b,c}(z)$ is holomorphic under this condition. Moreover, if $b > a$ the numerator is strictly positive for every $y$, since $\gcd(y,24ac) \geq \gcd(y,24) \geq 1$. To see that these conditions are sharp, take $y = 24$, for which the numerator equals $576(b-a)$: it is negative for $b < a$ and vanishes for $b = a$. Therefore, $F_{a,b,c}(z)$ is holomorphic if and only if $b \geq a$ and cuspidal if and only if $b > a$. \qed \subsection*{Proof of Theorem \ref{infinite}} For (1), it suffices to recall from \S \ref{modlac} that weight-one holomorphic modular forms are lacunary. For (2), if $a = 1$, then $F_{1,5,c}(z) = \eta(24cz)^4$, which is lacunary due to Serre's \cite{Serre} classification of the even $r$ for which $\eta(z)^r$ is lacunary. If $a = 2$, we use (\ref{Jacobi1}) and an identity given in \cite{Ono} to express $F_{2,5,c}(z)$ as follows: \begin{align*} F_{2,5,c}(z) &= \frac{\eta(48z)^2}{\eta(24z)} \cdot \eta(48cz)^3 = \left(\sum_{n = 0} ^ \infty q^{3(2n + 1)^2} \right) \left( \sum_{m = 0}^\infty (-1)^m (2m + 1) q^{6c (2m+1)^2}\right) \\ &= \sum_{n = 0}^\infty \sum_{m = 0}^\infty (-1)^m(2m + 1)q^{3(2n + 1)^2 + 6c(2m + 1)^2}. \end{align*} As can be seen in \cite{Serre1}, binary quadratic forms represent 0\% of the positive integers, so $F_{2,5,c}(z)$ is lacunary for all $c$. \qed \subsection*{Proof of Theorem \ref{finite}} To prove this theorem, we first show that for a fixed pair $(a,c)$, there are only finitely many $b$ such that $F_{a,b,c}(z)$ is lacunary.
Next, we show that there are finitely many pairs $(a,c)$ such that $F_{a,b,c}(z)$ is lacunary by bounding the product $ac$. This implies that there are finitely many triples $(a,b,c)$ such that $F_{a,b,c}(z)$ is lacunary. Finally, we review further steps for eliminating triples from being lacunary. In \S \ref{examples}, we discuss a method that one could use to complete this classification. \begin{lemma}\label{finb} Fix $a \geq 4$ and $c \geq 1$. Then there exist finitely many odd $b \geq 1$ such that $F_{a,b,c}(z)$ is lacunary. \end{lemma} \begin{proof} Choose $s$ satisfying the conditions of Lemma \ref{cmlem}. As seen in \S \ref{prelim}, a result of Serre \cite{Serre} states that $F_{a,b,c}(z)$ is lacunary if and only if it is expressible as a linear combination of CM forms. By Lemma \ref{cmlem}, $F_{a,b,c}(z)$ is not lacunary unless $F_{a,b,c}(z) | T_s = 0$. We show that only finitely many odd $b$ satisfy this condition. Suppose for simplicity that $s = p$ is prime. This is possible since $s$ is square-free and $T_{\alpha}T_{\beta}=T_{\alpha \beta}$ whenever $\gcd(\alpha,\beta)=1$. Define the coefficients $A_{a,b,c}(m)$ by the expansion \begin{equation} \label{DefFabc} F_{a,b,c}(z) = q^{r} \prod_{n = 1}^\infty\frac{(1-q^{24an})^{a} (1-q^{24acn})^{b-a}}{1-q^{24n}} = \sum_{m = 0}^\infty A_{a,b,c}(m)q^{24m+r}, \end{equation} for $r$ as defined in (\ref{Fabc}). Therefore, acting on $F_{a,b,c}(z)$ with $T_p$, we obtain \begin{equation}\label{DefineAabc} F_{a,b,c}(z)| T_p = \sum_{\substack{m \geq 0 \\ p \mid 24m + r}}A_{a,b,c}(m) q^{\frac{24 m + r}{p}} + \chi_F(p) p ^\frac{b - 3}{2} \sum_{m \geq 0} A_{a,b,c}(m)q^{p(24m + r)}. \end{equation} Choose $m_0 \geq 0$ to be the minimal $m$ so that $p \mid 24 m + r$. We must have $0 \leq m_{0} \leq p-1$.
We show that this term cannot appear in the second sum, so $A_{a,b,c}(m_0)$ is the coefficient of $q^{\frac{24 m_0 + r}{p}}$ in the Fourier expansion of $F_{a,b,c}(z) | T_p$ \footnote{Note that when $s$ is not prime, there is $m_0 \leq s - 1$ such that $A_{a,b,c}(m_0)$ is a coefficient in the expansion of $F_{a,b,c}(z) \mid T_s$. However this $m_0$ is not necessarily the minimal $m$ such that $s \mid 24 m + r.$}. First note that for $m \geq 1$, $$ p(24m + r) > \frac{24 m_0 + r}{p},$$ by the inequality $m_0 \leq p -1$. Next, looking at the first term of the second sum, we show that $\frac{24 m_0 + r}{p} \neq pr$. If equality held, then using the inequality $0 \leq m_0 \leq p - 1$ again, one obtains \[24(p - 1) \geq 24 m_0 = r(p^2 - 1),\] which gives \[24 \geq (p + 1)r = (p + 1)(abc + a^2 - a^2c - 1).\] This inequality can only hold if $p \leq 23$. Each prime $p \leq 23$ yields a finite set of triples $(a,b,c)$ for which the inequality holds. We reach a contradiction for each triple by calculating $m_0$ and checking equality of the first exponents of each of the two sums. Therefore, it suffices to show that $A_{a,b,c}(m_{0}) = 0$ holds for only finitely many odd $b$. Notice from (\ref{DefFabc}) that \[F_{a,b,c}(z) = q^{r}\prod_{n = 1}^\infty (1 + q^{24n} + \cdots + q^{24(a - 1)n})(1 - q^{24an})^{a - 1}(1 - q^{24acn})^{b - a} = \sum_{m = 0}^\infty A_{a,b,c}(m)q^{24m+r}.\] Since $a$ and $c$ are fixed, the coefficient $A_{a,b,c}(m)$ is a polynomial in $b$ with degree at most $\left\lfloor \frac{m}{ac} \right\rfloor$.
Let us observe this phenomenon for $a = 4$ and $c = 1$: $$ F_{4,b,1}(z) = q^r \prod_{n=1}^{\infty} \frac{ (1-q^{96n})^{b}}{1-q^{24n}} = q^r \prod_{n=1}^{\infty} \left( 1+q^{24n}+q^{48n}+q^{72n} \right) \left( 1-q^{96n} \right)^{b-1}.$$ By considering the possible ways that terms in this product could multiply to $q^{96 + r}$, we deduce that $$A_{4,b,1}(4)= -\binom{b-1}{1}+1+1+1+1=-b+5.$$ Hence $A_{4,b,1}(4)$ is a polynomial in $b$ of degree at most $\left\lfloor \frac{4}{4 \cdot 1} \right\rfloor = 1$. We now show that for $a \geq 4$, $A_{a,b,c}(m)$ is a nonzero polynomial in $b$ for all $m$, so that $A_{a,b,c}(m_0) = 0$ for only finitely many $b$. Plugging $b=a$ into (\ref{DefFabc}), we have: \begin{equation} \prod_{n = 1}^\infty \frac{(1-q^{24an})^{a}}{(1-q^{24n})}= \sum_{m = 0}^\infty A_{a,a,c}(m)q^{24m}. \end{equation} \noindent We show $A_{a,a,c}(m) \neq 0$ for $m \geq 0$. Note that from Corollary 1.9 in \cite{Han} we have the following: \begin{align*} \sum_{m = 0}^\infty A_{a,a,c}(m)q^{24m} = \prod_{n = 1}^\infty \frac{(1-q^{24an})^{a}}{1-q^{24n}} &= \sum_{\lambda \text{ is an $a$-core }}q^{24 |\lambda|} \\ &= \sum_{m = 0}^\infty \# \{ \lambda \colon | \lambda |=m \text{ and } \lambda \text{ is an $a$-core } \}q^{24m}. \end{align*} \noindent It then follows from Granville and Ono's \cite{Granville} proof of the $t$-core partition conjecture that $\# \{ \lambda \colon | \lambda | =m \text{ and } \lambda \text{ is an $a$-core } \} \geq 1$, since we are assuming $a \geq 4$. In particular $A_{a,a,c}(m) \geq 1$ for every $m \geq 0$, so the polynomial $b \mapsto A_{a,b,c}(m)$ does not vanish identically. Thus, $A_{a,b,c}(m)$ is a nonzero polynomial in $b$ for all $m \geq 0$. \end{proof} The following lemma is a direct generalization of Theorem 1.1 in \cite{REU}. In order to prove this lemma, we recall the P\'olya-Vinogradov Inequality, which states that given a nontrivial Dirichlet character $\chi$ with modulus $m$, the following holds for all $h \geq 1$: \begin{equation} \big| \sum_{x=1}^{h}\chi(x) \big| \leq 2 \sqrt{m} \log m.
\end{equation} \begin{lemma} \label{finitelymanytriples} If $a \geq 4$, then there exist only finitely many triples $(a,b,c)$ such that $F_{a,b,c}(z)$ is lacunary. \end{lemma} \begin{proof} Fix $a \geq 4$ and $c \geq 1$. Let $a'$ be the square-free part of $576ac$, the level of $F_{a,b,c}(z)$. We will show that there exists an $s \in \mathbb{Z}$ satisfying the conditions of Lemma \ref{cmlem}. In order to do this, it suffices to show that there exists an $s \in \mathbb{Z}$ such that $ \left( \frac{-1}{s} \right) = -1$ and $\left( \frac{-p}{s} \right)= -1$ for all primes $p \mid a'$. These conditions imply that $\left( \frac{-d}{s} \right)= -1$ for all positive integers $d$ dividing the level of $F_{a,b,c}(z)$ by properties of the Kronecker symbol. Recall that a prime $q$ is inert in the ring of integers of $\mathbb{Q}(\sqrt{-d})$ if $\left( \frac{-d}{q} \right) = -1$. By considering the prime factorization of $s$ and the multiplicativity of the Kronecker symbol, there does not exist any ideal of norm $s$ in the ring of integers of $\mathbb{Q}(\sqrt{-d})$ if $\left( \frac{-d}{s} \right)= -1$. For each pair $(a,c)$, there exists an $s$ satisfying the conditions of Lemma \ref{cmlem}. We show that for all but finitely many pairs, there is such an $s < ac$. By Lemma \ref{finb}, the set of odd $b$ for which $F_{a,b,c}(z)$ is lacunary is a subset of the roots of the nonzero polynomials $A_{a,b,c}(0), \ldots, A_{a,b,c}(s-1)$. All of these polynomials have degree at most $\big\lfloor \frac{s}{ac} \big\rfloor$; when $s < ac$ they are nonzero constants and hence have no roots, so no such $b$ can exist. We first assume that $a'=6ac$. We show that there is a $\kappa$ such that whenever $a' = 6ac \geq \kappa$ there exists an $s < ac$ satisfying the desired conditions. It suffices to find an $s \equiv 23 \pmod{24}$ such that $\left( \frac{-p}{s} \right) =-1$ for all primes $p \mid ac$ $\left( \text{note that if $s \equiv 23 \pmod{24}$, then $\left( \frac{-2}{s} \right) = \left( \frac{-3}{s}\right) =-1$} \right)$.
Write $ac=p_{1}p_{2} \cdots p_{m}$, where the $p_{i}$ are distinct primes not equal to $2$ or $3$. Consider the following linear combination of Dirichlet characters: \begin{equation*} g_{ac}(s) \coloneqq \frac{1}{2^{m}} \cdot \displaystyle{\sum_{d \mid ac} \mu(d)\psi_{d}(s)}, \end{equation*} where $\psi_{d}(s) \coloneqq \displaystyle{\prod_{p \mid d}} \left( \frac{-p}{s}\right)$ and by convention $\psi_1(s) = 1$. By inducting on the number of prime divisors of $ac$ and noticing that if $q$ is a prime not dividing $ac$ then $g_{ac \cdot q}(s) = \frac{\left(1- \left( \frac{-q}{s} \right) \right)g_{ac}(s)}{2}$, one can show that $$ g_{ac}(s)= \begin{cases} 1 & \text{if } \left( \frac{-p}{s} \right)=-1 \text{ for all } p \mid ac, \\ 0 & \text{otherwise.} \end{cases}$$ Thus, it is sufficient to find an $s$ satisfying $g_{ac}(s)=1$ and $s \equiv 23 \pmod{24}$. We will do this by using the P\'olya-Vinogradov Inequality to show that, if $ac$ is sufficiently large, then \begin{equation}\label{sac} \sum_{\substack{s < ac \\ s \equiv 23 (24)}} \sum_{d \mid ac} \mu(d)\psi_{d}(s) >0. \end{equation} Let $D$ be the set of Dirichlet characters modulo $24$. By orthogonality, $\displaystyle{\sum_{\chi \in D} \chi(n)}$ equals $8$ if $n \equiv 1 \pmod{24}$ and $0$ otherwise, so we have: \begin{align*} \sum_{\substack{s < ac \\ s \equiv 23 (24)}} \sum_{d \mid ac} \mu(d)\psi_{d}(s) &= \sum_{\substack{s < ac \\ s \equiv 23 (24)}} 1 + \sum_{\substack{d \mid ac \\ d>1}} \mu(d) \sum_{\substack{s <ac \\ s \equiv 23 (24)}} \psi_{d}(s) \\ &= \sum_{\substack{s<ac \\ s\equiv 23 (24)}} 1 + \sum_{\substack{d \mid ac \\ d>1}} \mu(d) \sum_{s <ac} \psi_{d}(s) \sum_{\chi \in D} \frac{\chi(-s)}{8} \\ &\geq \frac{ac}{24}-1+ \sum_{\substack{d \mid ac \\ d>1}} \frac{1}{8} \mu(d) \psi_{d}(-1) \sum_{\chi \in D} \sum_{s < ac} \psi_{d}(-s) \chi(-s). \end{align*} Note that $\psi_{d}\chi$ is a nontrivial Dirichlet character modulo $24d$.
So, by applying the P\'olya-Vinogradov Inequality to the innermost sum: \begin{align*} \sum_{\substack{s <ac \\ s \equiv 23(24)}} \sum_{d \mid ac} \mu(d) \psi_{d}(s) &\geq \frac{ac}{24} - 1 - \sum_{\substack{d \mid ac \\ d >1}} 2 \sqrt{24d} \log(24d) \\ &\geq \frac{ac}{24} -1 -2^{m+1} \sqrt{24ac} \log(24ac). \end{align*} If $m \geq 12$, then $\frac{ac}{24} -1 -2^{m+1} \sqrt{24ac} \log(24ac) >0$. Hence, there exists an integer $s <ac$ satisfying the conditions of Lemma \ref{cmlem}. So in this case, the set of $b$ for which $F_{a,b,c}(z)$ is lacunary is empty. For each $m <12$, the inequality (\ref{sac}) can only fail if $ac < \kappa$, where $\kappa = 5 \cdot 7 \cdot 11 \cdots 43 \approx 2.18 \times 10^{15}$ is the product of the primes from $5$ to $43$. So if $6ac$ is square-free, there are only finitely many pairs $(a,c)$ for which there could exist odd $b$ where $F_{a,b,c}(z)$ is lacunary. Now we turn to the case where $6ac$ is not square-free. If $\frac{a'}{6}$ has a corresponding $s \leq \frac{a'}{6} < ac,$ then $F_{a,b,c}(z)$ is not lacunary for any choice of $b$. If $\frac{a'}{6}$ does not have an $s \leq \frac{a'}{6},$ then $F_{a,b,c}(z)$ can only be lacunary for pairs $(a,c)$ such that $s \leq ac$ where $s$ is chosen minimally. By the above, there are only finitely many possible $a',$ each of which yields finitely many pairs $(a,c)$ for which there could exist odd $b$ such that $F_{a,b,c}(z)$ is lacunary. It follows from Lemma \ref{finb} that there can be only finitely many triples $(a,b,c)$ for which $F_{a,b,c}(z)$ is lacunary. \end{proof} We now prove Theorem \ref{finite}. \begin{proof} We demonstrate the process of showing that the only possible triples $(a,b,c)$ for which $F_{a,b,c}(z)$ could be lacunary are $(4,5,3)$, $(4,5,5)$, and $(4,5,11)$. We have the following algorithm: \\ \noindent (1) From Lemma \ref{finitelymanytriples} we know that there are finitely many triples $(a,b,c)$ such that $F_{a,b,c}(z)$ is lacunary.
Further, all such triples have the property that $a' \leq 6\kappa$ where $a'$ and $\kappa$ are defined above. For each $a' \leq 6\kappa$, we check whether there exists an $s \equiv 23 \pmod{24}$ with $s < \frac{a'}{6}$ and $g_{a'/6}(s) = 1$. For all $a'$ with such an $s$, the possible pairs $(a,c)$ corresponding to $a'$ yield no odd $b$ for which $F_{a,b,c}(z)$ is lacunary. For all $a'$ without such an $s,$ there are finitely many pairs $(a,c)$ such that $F_{a,b,c}(z)$ could be lacunary. Let $S_{ac}$ be the set of all such pairs. \\ \noindent (2) For all $(a,c) \in S_{ac}$, choose $s$ minimally and use Lagrange interpolation to construct the polynomials $A_{a,b,c}(ac), \ldots ,A_{a,b,c}(s-1)$ \footnote{Note that we do not need to consider the polynomials $A_{a,b,c}(0), \ldots, A_{a,b,c}(ac-1) $ since these polynomials are nonzero constant polynomials and hence have no roots.}. Now find all of the odd positive integer roots of these polynomials. By repeating this process for all $(a,c)$ with corresponding $s$, we obtain a set of all possible triples $(a,b,c)$. Denote this set by $S_{abc}$. \\ \noindent (3) For all $(a,b,c) \in S_{abc}$, choose a prime $p$ minimally satisfying $p \equiv 23 \pmod{24}$ and $g_{a'/6}(p)=1$. Recall the definition of $m_{0}$ in Lemma \ref{finb}. Find $m_{0} \in \mathbb{Z}/p\mathbb{Z}$ such that $24m_{0} \equiv -r \pmod{p}$. If $A_{a,b,c}(m_{0}) \neq 0$, then $F_{a,b,c}(z)$ cannot be lacunary. Denote the set of remaining triples by $S_{abc}'$. \\ \noindent (4) For all $(a,b,c) \in S_{abc}',$ find $p' > p$ satisfying $p' \equiv 23 \pmod{24}$ and $g_{a'/6}(p') = 1$ and repeat step (3). This should be repeated multiple times to eliminate further candidates. \\ Following steps (1)-(4), the only remaining triples $(a,b,c)$ such that $F_{a,b,c}(z)$ could be lacunary are $(4,5,3)$, $(4,5,5)$, and $(4,5,11)$.
\end{proof} \section{Discussion of Theorem \ref{finite}}\label{examples} Due to our results in \S \ref{proofs}, we conjecture that the series $F_{4,5,3}(z), F_{4,5,5}(z)$, and $F_{4,5,11}(z)$ are lacunary. In theory, it is not difficult to prove this conjecture. Namely, one has to systematically compute all of the weight $2$ modular forms with complex multiplication on a suitable level. Our calculations reveal that if this conjecture is true, then these $F_{a,b,c}(z)$ will be linear combinations of CM forms corresponding to fields in the table given below \footnote{There are fewer modular forms on lower level; hence, one should consider the optimal level as discussed in the remark in \S \ref{etaquot}.}. We have been unable to resolve this issue due to limits on our computational power. \begin{center} \begin{table}[h] \begin{tabular}{|c | c| c |} \hline & Optimal level of $F_{a,b,c}(z)$ & CM fields of $F_{a,b,c}(z)$ \\ \hline $F_{4,5,3}(z)$ & $2304$ & $\mathbb{Q}(\sqrt{-1})$, $\mathbb{Q}(\sqrt{-2})$, and $\mathbb{Q}(\sqrt{-3})$ \\ \hline $F_{4,5,5}(z)$ & $576 \cdot 4 \cdot 5$ & $\mathbb{Q}(\sqrt{-1})$, $\mathbb{Q}(\sqrt{-2})$, $\mathbb{Q}(\sqrt{-3})$, and $\mathbb{Q}(\sqrt{-5})$ \\ \hline $F_{4,5,11}(z)$ & $576 \cdot 4 \cdot 11$ & $\mathbb{Q}(\sqrt{-1})$, $\mathbb{Q}(\sqrt{-2})$, $\mathbb{Q}(\sqrt{-3})$, and $\mathbb{Q}(\sqrt{-11})$ \\ \hline \end{tabular} \end{table} \end{center} \end{document}
\begin{document} \title{Davies-Gaffney-Grigor'yan Lemma on Simplicial complexes} \author{Bobo Hua} \address{Bobo Hua, School of Mathematical Sciences, LMNS, Fudan University, Shanghai 200433, China; Shanghai Center for Mathematical Science, Fudan University, Shanghai 200433, China. } \email{[email protected]} \author{Xin Luo} \address{Xin Luo, College of Mathematics and Econometrics, Hunan University, Changsha 410082, China} \email{[email protected]} \begin{abstract} We prove the Davies-Gaffney-Grigor'yan lemma for heat kernels of bounded discrete Hodge Laplacians on simplicial complexes. \end{abstract} \maketitle \section{Introduction} The Davies-Gaffney-Grigor'yan Lemma, abbreviated as the DGG Lemma below, is useful for heat kernel estimates on both manifolds and graphs. The DGG lemma on manifolds can be stated as follows. \begin{lemma}[Davies-Gaffney-Grigor'yan]\label{DGGManifold} Let $M$ be a complete Riemannian manifold and $p_t(x,y)$ the minimal heat kernel on $M$. For any two measurable subsets $B_1$ and $B_2$ of $M$ and $t>0,$ we have \begin{equation}\label{e:DGG Riemannian}\int_{B_1}\int_{B_2} p_t(x,y)d\mathrm{vol}(x)d\mathrm{vol}(y) \leq \sqrt{\mathrm{vol}(B_1)\mathrm{vol}(B_2)}\exp\left(-\mu t\right)\exp\left(- \frac{d^2(B_1,B_2)}{4t}\right),\end{equation} where $\mu$ is the greatest lower bound of the $\ell^2$-spectrum of the Laplacian on $M$ and $d(B_1,B_2)=\inf_{x_1\in B_1,x_2\in B_2}d(x_1,x_2)$ is the distance between $B_1$ and $B_2$. \end{lemma} Davies first proved a lemma of this type in \cite{7} by adopting the argument of Gaffney \cite{14}. Li and Yau also proved an earlier version of this lemma in \cite{23}. Later, Grigor'yan proved the lemma in \cite{15} and introduced the term $\exp(-\mu t)$ on the right-hand side, which gives the sharp speed of decay of the heat kernel as $t\rightarrow \infty$ when $\mu > 0.$ Recently, variants of the Davies-Gaffney-Grigor'yan Lemma were proved by Bauer, Hua and Yau \cite{2,3} on graphs.
A weighted graph consists of a set of vertices $V$ and a set of edges $E=\{(x,y) \mid x, y \in V\}$, together with a measure $m: V \ni x \mapsto m_{x}\in (0,\infty)$ and an edge weight function $\mu: E \rightarrow [0,\infty)$, $(x,y) \mapsto \mu_{xy}$. The measure of a subset of vertices $A$ is defined as the sum of the measures of its vertices, i.e. $m(A)=\sum_{x\in A}m_{x}$. Intrinsic metrics on graphs were introduced by Frank, Lenz and Wingert in \cite{13}: a pseudo metric $\rho$ is called intrinsic if $\sum_{y\in V}\mu_{xy}\rho^{2}(x,y)\leq m_{x}$ for all $x \in V.$ The quantity $s:=\sup\{\rho(x,y) \mid x,y\in V,\ \mu_{xy}>0\}$ is called the jump size of the intrinsic metric $\rho.$ According to \cite{2,3}, we have the following. \begin{thm} Let $(V,\mu,m)$ be a weighted graph with an intrinsic metric $\rho$ with finite jump size $s>0.$ Let $A,B$ be two subsets in $V$ and $f,g\in \ell^2_m$ with $\mathrm{supp} f\subset A, \mathrm{supp} g\subset B.$ Then $$ |\langle e^{t\Delta}f,g\rangle|\leq e^{-\lambda t-\zeta_s(t,\rho(A,B))}\|f\|_{\ell^2_m}\|g\|_{\ell^2_m}, $$ where $\lambda$ is the bottom of the $\ell^2$-spectrum of the Laplacian and $$\zeta_s(t,r)=\frac{1}{s^2}(rs\,{\mbox{arcsinh}}{\frac{rs}{t}}-\sqrt{t^2+r^2s^2}+t), \quad t>0,\ r\geq0.$$ Moreover, $$\sum_{y\in B}\sum_{x\in A}p_t(x,y)m_xm_y\leq \sqrt{m(A)m(B)}e^{-\lambda t-\zeta_s(t,\rho(A,B))},$$ where $p_t(x,y)$ is the minimal heat kernel of the graph. \end{thm} The DGG lemmas can be used to obtain heat kernel estimates and eigenvalue estimates. For instance, in conjunction with the Li-Yau inequality, pointwise heat kernel estimates were obtained for manifolds in \cite{22,23}, and Chung, Grigor'yan and Yau \cite{5} obtained eigenvalue estimates on compact Riemannian manifolds using the DGG lemma. Moreover, combining the Harnack inequality and the DGG lemma, \cite{2} proved heat kernel estimates on graphs satisfying the exponential curvature dimension inequality and eigenvalue estimates on finite graphs.
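As a toy illustration of the intrinsic-metric condition (our own example, not taken from \cite{13}): on a path graph with unit edge weights and measure $m_x = \deg(x)$, the hypothetical choice $\rho(x,y)=\min(m_x,m_y)^{-1/2}$ on edges satisfies $\sum_{y}\mu_{xy}\rho^{2}(x,y)\leq m_{x}$ at every vertex.

```python
# Path graph on 4 vertices: unit edge weights mu_xy = 1, measure m_x = deg(x).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
m = {x: float(len(nbrs)) for x, nbrs in adj.items()}

def rho(x, y):
    # candidate pseudo metric on edges (an illustrative assumption)
    return min(m[x], m[y]) ** -0.5

# load[x] = sum_y mu_xy rho(x, y)^2; the metric is intrinsic if load[x] <= m_x
load = {x: sum(rho(x, y) ** 2 for y in nbrs) for x, nbrs in adj.items()}
```

Here each endpoint contributes at most $1/\min(m_x,m_y) \le 1/ \mathrm{Deg}$-type terms, so the sum stays below the vertex measure.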
In this paper, we prove the DGG Lemma for discrete Hodge Laplacians on simplicial complexes. Discrete Hodge Laplacians on simplicial complexes are generalizations of the Hodge Laplacian on differential forms, which encodes topological and geometric information. Horak and Jost \cite{18,19} obtained properties of the spectra of (discrete) Hodge Laplacians on special simplicial complexes and the effects of various constructions on the spectrum. In this paper, we consider bounded Hodge Laplacians and the associated heat kernels. A bounded Hodge Laplacian $\mathcal{L}_i$ defines the heat semigroup $\{e^{-t\mathcal{L}_i}\}_{t\geq 0},$ which is a family of bounded self-adjoint operators; hence $e^{-t\mathcal{L}_i}$ has a heat kernel. We provide a class of appropriate metrics on simplicial complexes which behave like the intrinsic metrics on graphs, and with such metrics we are able to prove the DGG Lemma for the continuous-time heat kernel of a bounded Hodge Laplacian on a simplicial complex, adapting the methods in \cite{3,6}. We state the DGG lemma on simplicial complexes as follows; for the precise definitions of the quantities used we refer to Section 2.
\begin{thm}\label{sharps} Let $(K,w)$ be an oriented weighted simplicial complex with an intrinsic metric $\rho$ with jump size $s>0.$ Let $A,B$ be two subsets in $S_{i}(K)$ and $f,g\in \ell^2_w$ acting on $i$-simplices with $\mathrm{supp} f\subset A, \mathrm{supp} g\subset B.$ Then $$|\langle e^{-t\mathcal{L}_i}f,g\rangle|\leq e^{-\lambda t-\zeta_s(t,\rho(A,B))}\|f\|_{\ell^2_w}\|g\|_{\ell^2_w}.$$ In particular, \begin{equation} \Big|\sum_{F'\in B}\sum_{F\in A}p_t(F,F')w(F)w(F')\Big|\leq \sqrt{w(A)w(B)}e^{-\lambda t-\zeta_s(t,\rho(A,B))}, \end{equation} where $\lambda$ is the bottom of the $\ell^2$-spectrum of the bounded Hodge Laplacian $\mathcal{L}_i$ and $\zeta_s(t,\rho)=\frac{1}{s^2}(\rho s\,{\mbox{arcsinh}}{\frac{\rho s}{t}}-\sqrt{t^2+\rho^2s^2}+t).$ \end{thm} Moreover, by setting $A=\{F\}, B=\{F'\},$ we obtain an upper bound for the heat kernel between two $i$-faces for $i \in \{0,1,\ldots,\dim K\}$: $$|p_t(F,F')|\leq \frac{1}{\sqrt{w(F)w(F')}}e^{-\lambda t-\zeta_s(t,\rho(F,F'))}.$$ \begin{rem} Our result is restricted to bounded Hodge Laplacians. It would be interesting to know whether it holds in the unbounded case. \end{rem} \section{Simplicial complexes} In this section we introduce the setting and definitions used throughout this paper. An abstract simplicial complex $K$ on a vertex set $V$ is a collection of subsets of $V$ which is closed under inclusion, i.e. if $F \in K$ and $F'\subset F$, then $F'\in K$. An $i$-face of $K$ is an element of cardinality $i+1$, and we denote the set of all $i$-faces by $S_i(K)$. The dimension of the simplicial complex $K$ is the largest dimension of the faces it contains. For instance, graphs are $1$-dimensional simplicial complexes, and the empty simplex has dimension $-1.$ Two $(i+1)$-faces with a common $i$-face are called $i$-down neighbors, and two $i$-faces sharing an $(i+1)$-face are called $(i+1)$-up neighbors.
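Before continuing with the definitions, here is a quick numerical sanity check (our own sketch, not part of any proof) of the decay exponent $\zeta_s$ appearing in Theorem \ref{sharps}: as the jump size $s \to 0$ one recovers the Gaussian exponent $\rho^2/2t$, and $\zeta_s$ decreases as $s$ grows.

```python
import math

def zeta(s, t, r):
    """zeta_s(t, r) = (r s arcsinh(r s / t) - sqrt(t^2 + r^2 s^2) + t) / s^2,
    the exponent from the DGG lemma above."""
    return (r * s * math.asinh(r * s / t)
            - math.sqrt(t * t + r * r * s * s) + t) / (s * s)
```

For instance, $\zeta_{10^{-4}}(1,1)$ is numerically indistinguishable from $\tfrac12 = r^2/2t$.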
We say that a face $F$ is oriented if an ordering of its vertices is given, and we then write $[F].$ Two orderings of its vertices are said to determine the same or opposite orientation according to whether the permutation relating them is even or odd. A simplicial complex $K$ is called weighted if there is a weight function $w$ on the set of all faces, \begin{equation*} w: \bigcup^{\dim K}_{i=-1}S_i(K)\rightarrow (0,+\infty). \end{equation*} The weight of a face $F$ is $w(F)$, and the weight of a subset $A$ is the sum of the weights of the faces contained in $A,$ i.e. $w(A)= \sum_{F \in A} w(F).$ For an oriented simplicial complex, the $i$-th chain group with coefficients in $\mathbb{R}$ is the vector space over $\mathbb{R}$ with basis $B_i(K,\mathbb{R})=\{[F]\mid F \in S_i(K)\}$. The cochain group is the dual of the chain group, with basis given by the set of functions $\{e_{[F]}\mid [F]\in B_i(K,\mathbb{R})\}$ where \begin{equation} e_{[F]}([F'])=\left\{ \begin{array}{r@{\quad}l} 1 & \text{if } [F']=[F],\\ 0 & \text{otherwise.} \end{array}\right. \end{equation} We denote the $i$-th chain group and cochain group by $C_i(K,\mathbb{R})$ and $C^{i}(K,\mathbb{R})$ respectively. The coboundary operator $\delta_i$ $:$ $C^{i}(K,\mathbb{R})\rightarrow C^{i+1}(K,\mathbb{R})$ is the linear map $$(\delta_if)(v_0,...,v_{i+1})=\sum_{k=0}^{i+1}(-1)^{k}f(v_0,...,\widehat{v_k},...,v_{i+1}),$$ where $\widehat{v_k}$ means that the vertex $v_k$ has been omitted. With the weight function $w$, the inner product on the cochain group is given by $$(f,g)_{C^{i}}=\sum_{F\in S_i(K)}f([F])g([F])w(F).$$ The adjoint $\delta_i^{\ast}$: $C^{i+1}(K,\mathbb{R})\rightarrow C^{i}(K,\mathbb{R})$ of the coboundary map $\delta_i$ is defined by $$ (\delta_if,g)_{C^{i+1}}=(f,\delta_i^{\ast}g)_{C^{i}}, $$ for every $f \in C^{i}(K,\mathbb{R})$ and $g \in C^{i+1}(K,\mathbb{R})$.
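The displayed coboundary formula translates directly into code, and the basic identity $\delta_{i+1}\circ\delta_{i}=0$ can then be checked on arbitrary cochains. A minimal sketch (our own illustration, ignoring weights):

```python
from itertools import combinations

def delta(f):
    """Coboundary of a cochain f (a function of i+1 vertex arguments):
    (delta f)(v_0, ..., v_{i+1}) = sum_k (-1)^k f(..., v_k omitted, ...)."""
    def df(*v):
        return sum((-1) ** k * f(*(v[:k] + v[k + 1:])) for k in range(len(v)))
    return df

f0 = lambda v: (v + 1) ** 2   # an arbitrary 0-cochain on vertices 0..3
f1 = delta(f0)                # 1-cochain: f1(u, v) = f0(v) - f0(u)
f2 = delta(f1)                # vanishes identically: delta o delta = 0
```

Evaluating `f2` on any triple of vertices returns zero, as the alternating signs cancel pairwise.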
Eckmann generalized the graph Laplacian to simplicial complexes and proved the discrete version of the Hodge theorem \cite{10}, which can be formulated as $$ \ker(\delta_i^{\ast}\delta_i+\delta_{i-1}\delta_{i-1}^{\ast})\cong \tilde{H}^{i}(K,\mathbb{R}) $$ for the sequence of linear transformations $$ ...\overset{\delta_{i+1}}\leftarrow C^{i+1}(K,\mathbb{R})\overset{\delta_{i}}\leftarrow C^{i}(K,\mathbb{R}) \overset{\delta_{i-1}}\leftarrow...\leftarrow C^{-1}(K,\mathbb{R})\leftarrow 0, $$ where $$\mathcal{L}_i(K)=\delta_i^{\ast}\delta_i+\delta_{i-1}\delta_{i-1}^{\ast}$$ is the $i$-dimensional Hodge Laplacian. The operators $\delta_i^{\ast}\delta_i$ and $\delta_{i-1}\delta_{i-1}^{\ast}$ are called the $i$-up and $i$-down Hodge Laplace operators and are denoted by $\mathcal{L}^{up}_i(K)$ and $\mathcal{L}^{down}_i(K)$ respectively. Moreover, the operators $\mathcal{L}^{up}_i(K)$, $\mathcal{L}^{down}_i(K)$ and $\mathcal{L}_i(K)$ are self-adjoint and non-negative. Let $\overline{F}=\{v_0,...,v_{i+1}\}$ be an $(i+1)$-face of a simplicial complex $K$ and $F=\{v_0,...,\widehat{v_k},...,v_{i+1}\}$ be an $i$-face of $\overline{F}$. Then the \emph{boundary of the oriented face} $[\overline{F}]$ is \begin{equation*} \partial[\overline{F}]=\sum_{k}(-1)^{k}[v_0,...,\widehat{v_k},...,v_{i+1}], \end{equation*} and the sign of $[F]$ in the boundary of $[\overline{F}]$ is denoted by $\mathrm{sgn}([F],\partial[\overline{F}])$, which is equal to $(-1)^{k}$. If $[F]$ does not appear in the boundary of $[\overline{F}]$, we set $\mathrm{sgn}([F],\partial[\overline{F}])=0$. For the sake of brevity, we will write $\mathrm{sgn}([F],\partial[\overline{F}])$ as $\sigma_{F\overline{F}}$ below.
According to \cite{18,19}, the $i$-up and $i$-down Hodge Laplace operators are given by \begin{equation}\label{eq1} (\mathcal{L}^{up}_{i}f)([F]) =\sum_{\substack{\overline{F}\in S_{i+1}(K):\\ F\in \partial\overline{F}}}\frac{w(\overline{F})}{w(F)}f([F]) +\sum_{\substack{F'\in S_i:F\neq F',\\F,F'\in \partial\overline{F}}}\frac{w(\overline{F})}{w(F)}\sigma_{F\overline{F}} \sigma_{F'\overline{F}}f([F']) \end{equation} and \begin{equation}\label{eq2} (\mathcal{L}^{down}_{i}f)([F]) =\sum_{E \in \partial F}\frac{w(F)}{w(E)}f([F]) +\sum_{F':F\cap F'=E}\frac{w(F')}{w(E)}\sigma_{EF}\sigma_{EF'}f([F']), \end{equation} where, by abuse of notation, $\partial\overline{F}$ denotes the set of all $i$-faces of $\overline{F}$. The degree of an $i$-face $F$ is the sum of the weights of all faces that contain $F$ in their boundary, that is, $$ \mathrm{deg}\, F=\sum_{\substack{\overline{F}\in S_{i+1}(K):\\F\in \partial\overline{F}}}w(\overline{F}). $$ Moreover, if the weight function $w$ on $K$ satisfies $$ w(F)=\mathrm{deg}\, F $$ for every $F \in S_i(K)$ which is not a facet of $K$, the Hodge Laplace operator is called the \emph{weighted normalized Hodge Laplacian operator}. If a simplicial complex is such that there is a positive integer $M$ with $$ \sharp\{F \in S_i(K)\mid E \in \partial F\}\leq M < \infty $$ for all $E \in S_{i-1}(K)$, then the Hodge Laplacian operator $\mathcal{L}_{i}$ acting on functions on the $i$-faces of the oriented simplicial complex $(K,w)$ is bounded from $l_{w}^{2}$ to $l_{w}^{2}$ if and only if \begin{equation}\label{bounded} b:=\sup_{F \in S_j(K)}\frac{1}{w(F)} \sum_{F \in \partial\overline{F}} w(\overline{F})<\infty \quad \text{for } j=i,i-1.\tag{$\ast$} \end{equation} Next, we introduce the following notations for different $i$-faces $F$ and $F'$: $(1)\ \tau_{FEF'}:=\frac{w(F)w(F')}{w(E)};$ $(2)\ w^{up}_{FF'}:=\left\{ \begin{array}{r@{\quad}l} w(\overline{F}), & \text{if } F',F\in \partial\overline{F};\\ 0, & \text{otherwise.}
\end{array}\right.$ $(3)\ w^{down}_{FF'}:=\left\{ \begin{array}{r@{\quad}l} \tau_{FEF'}, & \text{if } F'\cap F=E;\\ 0, & \text{otherwise.} \end{array}\right.$ $(4)\ w_{FF'}:=w^{up}_{FF'}+w^{down}_{FF'}.$ $(5)\ \mathrm{Deg}(F):=\frac{\mathrm{deg}\, F}{w(F)}.$ For convenience, we extend these notations to the whole of $S_i(K)\times S_i(K)$ by setting $\tau_{FEF}=w^{up}_{FF}=w^{down}_{FF}=w_{FF}=0$, and we write $F\sim F'$ if $w_{FF'}\neq 0$. For simplicity, we will omit $``[~]"$ for oriented faces in the following. \begin{lemma}[Green's formula]\label{l:Green} Let $\mathcal{L}_{i}$ be the bounded Hodge Laplacian operator. Then for all $f,g\in \ell_{w}^{2}$ on $i$-simplices, $$ (\mathcal{L}_{i} f,g)=(\delta_i f,\delta_i g)+(\delta_{i-1}^{\ast}f,\delta_{i-1}^{\ast}g). $$ In particular, \begin{eqnarray*} \sum_{F\in S_i(K)}(\mathcal{L}_{i} f)(F)g(F)w(F) =\sum_{F\in S_i(K)}f(F)g(F)(\sum_{F\in \partial \overline{F}} w(\overline{F})+\sum_{E\in \partial F}\tau_{FEF})\\ +\sum_{F\neq F'}f(F')g(F)(w(\overline{F})\sigma_{F\overline{F}}\sigma_{F'\overline{F}}+\tau_{FEF'}\sigma_{EF}\sigma_{EF'}). \end{eqnarray*} \end{lemma} \begin{proof} By direct computation, \begin{eqnarray*} (\mathcal{L}_{i} f,g) &=&(\delta_if,\delta_ig)+(\delta_{i-1}^{\ast}f,\delta_{i-1}^{\ast}g)\\ &=&\sum_{\overline{F}\in S_{i+1}(K)}\sum_{F,F'\in \partial\overline{F}}\sigma_{F\overline{F}}\sigma_{F'\overline{F}}f(F)g(F')w(\overline{F})\\ &+&\sum_{E \in S_{i-1}(K)}\sum_{F,F'\in S_{i}(K)}\sigma_{EF}\sigma_{EF'}\tau_{FEF'}f(F)g(F')\\ &=&\sum_{F\in S_i(K)}f(F)g(F)(\sum_{F\in \partial \overline{F}} w(\overline{F})+\sum_{E\in \partial F}\tau_{FEF})\\ &+&\sum_{F\neq F'}f(F')g(F)(w(\overline{F})\sigma_{F\overline{F}}\sigma_{F'\overline{F}}+\tau_{FEF'}\sigma_{EF}\sigma_{EF'}). \end{eqnarray*} \end{proof} In this subsection, we introduce a class of metrics on simplicial complexes which can be viewed as generalizations of the intrinsic metrics on graphs introduced in \cite{13}.
Indeed, intrinsic metrics on graphs have been applied successfully to various problems, see \cite{1,4,11,12,16,20,21}. \begin{defi}[Intrinsic metric] A pseudo metric $\rho$ is called an intrinsic metric with respect to a weighted simplicial complex $(K,w)$ if for all $F \in S_{i}(K)$ \begin{equation} \sum_{F'\in S_i(K)}w_{FF'}\rho^{2}(F,F')\leq w(F). \end{equation} \end{defi} \begin{rem} In our setting there always exists an intrinsic metric on a weighted simplicial complex, see Definition \ref{defi23}, which mimics the definition introduced by Huang \cite{17}. In general, intrinsic metrics are not unique. \end{rem} \begin{defi}\label{defi23} We define a function $\mu(F,F')$ by \begin{align*} \mu(F,F')= &\min\left\{\sqrt{\frac{w(F)}{\sum\limits_{F'}w_{FF'}}},\sqrt{\frac{w(F')}{\sum\limits_{F''}w_{F'F''}}}, 1\right\} \end{align*} for all pairs $(F,F')$ with $w_{FF'}\neq 0$. It induces a metric on the $i$-dimensional faces of $(K,w)$ by $$ \rho(F,F'):=\inf\left\{\sum_j\mu(F_{j},F_{j+1}):F_0=F,...,F_j,...,F_m=F',\ \text{s.t.}\ \forall\, 0\leq j \leq m-1,\ F_{j}\sim F_{j+1} \right\}, $$ where the infimum is taken over all chains $F_0=F,...,F_j,...,F_m=F'$ of $i$-faces between $F$ and $F'$ with $w_{F_jF_{j+1}}\neq 0$. If there is no such chain between $F$ and $F'$, we define $\rho(F,F')=\infty$; and we set $\rho(F,F)=0$ for every $i$-dimensional face $F$. \end{defi} It is straightforward to check that the above metric satisfies condition $(6)$. \begin{rem} The intrinsic metric on graphs in \cite{17} is defined as follows: $$ \delta(x,y)=\inf_{x=x_0\sim...\sim x_n=y}\sum^{n-1}_{i=0}(\mathrm{Deg}(x_i)\vee \mathrm{Deg}(x_{i+1})\vee 1)^{-\frac{1}{2}},\quad x,y \in X. $$ When the intrinsic metric of Definition \ref{defi23} is applied to graphs, one has $$\rho(x,y)=\delta(x,y).$$ \end{rem} For the bounded Hodge Laplacian satisfying (\ref{bounded}), there is a canonical intrinsic metric analogous to the combinatorial distance on graphs.
\begin{defi}\label{defi24} We define another intrinsic metric between $F, F' \in S_i(K)$ by \begin{equation*} \rho(F,F')=\left\{ \begin{array}{r@{\quad}l}\frac{\inf\sum_j\mu(F_j,F_{j+1})}{(i+1)\sqrt{b}}, & \text{if } F\neq F'; \\ 0, & \text{if } F=F', \end{array}\right. \end{equation*} where the infimum is taken over all chains $F_0=F,...,F_j,...,F_m=F'$ of $i$-faces between $F$ and $F'$ with $w_{F_jF_{j+1}}\neq 0$. If there is no such chain between $F$ and $F'$, we define $\rho(F,F')=\infty$. Here \begin{equation*} \mu(F_j,F_{j+1})=\left\{ \begin{array}{r@{\quad}l}1, & \text{if } w_{F_jF_{j+1}}\neq 0; \\ 0, & \text{otherwise.} \end{array}\right. \end{equation*} \end{defi} We now show that the above metric is indeed an intrinsic metric. \begin{proof} It is easy to see that $$\rho(F,F')\leq \frac{1}{(i+1)\sqrt{b}}\quad \text{whenever } w_{FF'}\neq 0,$$ so that \begin{eqnarray*}&& \sum_{F'\in S_i}w_{FF'}\rho^{2}(F,F')\nonumber\\ &\leq& \frac{1}{b(i+1)^{2}}\left((i+1)\sum_{F\in\partial\overline{F}}w(\overline{F})+ \sum_{E\in \partial F}\sum_{F'\neq F}\tau_{FEF'}\right)\nonumber\\ &\leq&\frac{1}{b(i+1)^{2}}\left((i+1)bw(F)+w(F) \sum_{E\in \partial F}\frac{bw(E)-w(F)}{w(E)}\right)\nonumber\\ &\leq&w(F). \end{eqnarray*} \end{proof} \begin{rem} In the graph case the normalized Laplacian always satisfies (\ref{bounded}), while for higher-dimensional simplicial complexes the normalized Hodge Laplacian operator may fail to satisfy (\ref{bounded}). Hence, for the normalized Hodge Laplacian on higher-dimensional simplicial complexes, the metric defined in Definition \ref{defi24} may not be suitable. \end{rem} The \emph{jump size} $s$ of a pseudo metric $\rho$ is given by \begin{align*} s:=\sup\{\rho(F,F')\mid F,F'\in S_i(K), F\sim F'\}\in[0,\infty]. \end{align*} If there are no $F$ and $F'$ with $w_{FF'}\neq 0$, it is reasonable to set $s=\infty$. From now on, $\rho$ always denotes an intrinsic metric and $s$ denotes its jump size.
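The defining inequality of an intrinsic metric, $\sum_{F'}w_{FF'}\rho^{2}(F,F')\leq w(F)$, can be checked numerically. The sketch below is an assumed example: the three edges of one filled triangle ($i=1$, all weights equal to $1$), for which every pair of edges has $w^{up}_{FF'}=1$ and $w^{down}_{FF'}=\tau_{FEF'}=1$, so $w_{FF'}=2$; the metric $\rho$ of Definition 2.3 then equals $\mu$ since all edges are pairwise adjacent:

```python
import numpy as np

# Assumed toy example: filled triangle, i = 1 (edges), all weights 1.
E = [(0, 1), (0, 2), (1, 2)]

# w_{FF'} = w^up + w^down = 1 + 1 = 2 for every pair of distinct edges.
wFF = np.full((3, 3), 2.0) - 2.0 * np.eye(3)
w = np.ones(3)                        # w(F) = 1 for every edge

# mu(F,F') from Definition 2.3; rho is the shortest-chain metric, which
# here is just mu because all edges are pairwise adjacent.
mu = np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        if a != b and wFF[a, b] > 0:
            mu[a, b] = min(np.sqrt(w[a] / wFF[a].sum()),
                           np.sqrt(w[b] / wFF[b].sum()), 1.0)
rho = mu

# Intrinsic-metric condition: sum_{F'} w_{FF'} rho(F,F')^2 <= w(F).
for a in range(3):
    assert (wFF[a] * rho[a] ** 2).sum() <= w[a] + 1e-12
```

In this example $\mu\equiv\frac{1}{2}$ off the diagonal and the condition holds with equality, showing the bound is sharp.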
A function $f:S_i(K)\to\mathbb{R}$ is Lipschitz (w.r.t. the metric $\rho$) if $|f(F)-f(F')|\leq \kappa\rho(F,F')$ for all $F,F'\in S_i(K)$. The minimal constant $\kappa$ satisfying this inequality is called the Lipschitz constant of $f$, and $\mathrm{supp}\, f$ denotes the support of $f$, i.e. $\mathrm{supp}\, f =\{F\mid F \in S_i(K), f(F)\neq 0\}$. \section{Proof of main theorem} Let $(K,w)$ be an oriented weighted simplicial complex. \begin{defi} We say that $u:[0,\infty)\times S_i(K)\to \mathbb{R}$ solves the heat equation if \begin{equation} \begin{cases} \frac{\partial}{\partial t}u(t,F) = -\mathcal{L}_{i} u(t,F),\\ u(0,F) = f(F). \end{cases} \end{equation} The heat kernel $p_t(F,F')$ is defined as the solution with the initial condition $f(F)=\frac{1}{w(F')}\delta_{F'}(F)$. For general initial data $f$, the solution can be written as $$u(t,F) = \sum_{F'\in S_i(K)} p_t(F,F') f(F') w(F').$$ \end{defi} The integral maximum principle on Riemannian manifolds was introduced by Grigor'yan \cite{15}. We prove a variant of the integral maximum principle on simplicial complexes. \begin{lemma}\label{l:monotonicity Delmotte} Let $(K,w)$ be an oriented weighted simplicial complex with an intrinsic metric $\rho$ of jump size $s>0$, and let $\zeta$ be a bounded Lipschitz function on the $i$-simplices with Lipschitz constant $\kappa$. Let $f:[0,\infty)\times S_i(K)\to\mathbb{R}$ solve the heat equation on $K$ and let $\lambda$ be the first eigenvalue of the bounded Hodge Laplacian. Set $$E(t):= \sum_{F\in S_i(K)}f^2(t,F)e^{\zeta(F)}w(F).$$ Then $$\exp\left({2\lambda t-\frac{2}{s^2}(\cosh(\frac{\kappa s}{2})-1)t}\right)E(t)$$ is nonincreasing in $t\in [0,\infty)$.
\end{lemma} \begin{proof} By the dominated convergence theorem, $$E'(t)=2\sum_{F \in S_i{(K)}}f(t,F)\partial_{t} f(t,F)e^{\zeta(F)}w(F).$$ Since $f$ solves the heat equation, Green's formula yields \begin{eqnarray*} E'(t)&=&-2\sum_{F\in S_i(K)}f^{2}(F)e^{\zeta(F)}\sum_{F \in \partial\overline{F}}w(\overline{F}) -2\sum_{F\in S_i(K)}f^{2}(F)e^{\zeta(F)}\sum_{E\in \partial F}\tau_{FEF} \\ &-&2\sum_{F\neq F'}f(F)f(F')e^{\zeta(F)}\sigma_{F\overline{F}}\sigma_{F'\overline{F}}w(\overline{F})\\ &-&2\sum_{F\neq F'}f(F)f(F')e^{\zeta(F)}\sigma_{EF}\sigma_{EF'}\tau_{FEF'}\\ &=&-2\sum_{F\in S_i(K)}f^{2}(F)e^{\zeta(F)}\sum_{F \in \partial\overline{F}}w(\overline{F}) -2\sum_{F\in S_i(K)}f^{2}(F)e^{\zeta(F)}\sum_{E\in \partial F}\tau_{FEF} \\ &-&\sum_{F\neq F'}f(F)f(F')\sigma_{F\overline{F}}\sigma_{F'\overline{F}}w(\overline{F})(e^{\zeta(F)}+e^{\zeta(F')})\\ &-&\sum_{F\neq F'}f(F)f(F')\sigma_{EF}\sigma_{EF'}\tau_{FEF'}(e^{\zeta(F)}+e^{\zeta(F')}). \end{eqnarray*} Moreover, \begin{eqnarray*} E'(t) &=&-2(\mathcal{L}_if e^{\frac{1}{2}\zeta},f e^{\frac{1}{2}\zeta}) +2(\mathcal{L}_if e^{\frac{1}{2}\zeta},f e^{\frac{1}{2}\zeta})+E'(t)\\ &=&I+II, \end{eqnarray*} where $I=-2(\mathcal{L}_if e^{\frac{1}{2}\zeta},f e^{\frac{1}{2}\zeta})$ and $II=2(\mathcal{L}_if e^{\frac{1}{2}\zeta},f e^{\frac{1}{2}\zeta})+E'(t)$. For the first term, by the Rayleigh quotient characterization of the first eigenvalue, $$I\leq -2\lambda\sum f^2e^{\zeta} w.$$ For the second term, using Green's formula again, \begin{eqnarray*} II&=&\sum_{F\in S_i(K)}2f^{2}(F)e^{\zeta(F)}(\sum_{F\in \partial \overline{F}} w(\overline{F})+\sum_{E\in \partial F}\tau_{FEF})\\ &+&2\sum_{F\neq F'}f(F')f(F)e^{\frac{1}{2}\zeta(F)+\frac{1}{2}\zeta(F')}(w(\overline{F})\sigma_{F\overline{F}}\sigma_{F'\overline{F}}+\tau_{FEF'}\sigma_{EF}\sigma_{EF'})\\ &-&2\sum_{F\in S_i(K)}f^{2}(F)e^{\zeta(F)}\sum_{F \in \partial\overline{F}}w(\overline{F})
-2\sum_{F\in S_i(K)}f^{2}(F)e^{\zeta(F)}\sum_{E\in \partial F}\tau_{FEF} \\ &-&\sum_{F\neq F'}f(F)f(F')\sigma_{F\overline{F}}\sigma_{F'\overline{F}}w(\overline{F})(e^{\zeta(F)}+e^{\zeta(F')})\\ &-&\sum_{F\neq F'}f(F)f(F')\sigma_{EF}\sigma_{EF'}\tau_{FEF'}(e^{\zeta(F)}+e^{\zeta(F')})\\ &=&-\sum_{F\neq F'}f(F)f(F')\sigma_{F\overline{F}}\sigma_{F'\overline{F}}w(\overline{F}) (e^{\frac{1}{2}\zeta(F)}-e^{\frac{1}{2}\zeta(F')})^{2}\nonumber\\ &-&\sum_{F\neq F'}f(F)f(F')\sigma_{EF}\sigma_{EF'}\tau_{FEF'} (e^{\frac{1}{2}\zeta(F)}-e^{\frac{1}{2}\zeta(F')})^{2}\nonumber\\ &=&-\sum_{F\neq F'}f(F)f(F')\sigma_{F\overline{F}}\sigma_{F'\overline{F}}w(\overline{F}) e^{\frac{1}{2}\zeta(F)+\frac{1}{2}\zeta(F')} (e^{\frac{1}{4}\zeta(F)-\frac{1}{4}\zeta(F')}-e^{\frac{1}{4}\zeta(F')-\frac{1}{4}\zeta(F)})^{2}\nonumber\\ &-&\sum_{F\neq F'}f(F)f(F')\sigma_{EF}\sigma_{EF'}\tau_{FEF'} e^{\frac{1}{2}\zeta(F)+\frac{1}{2}\zeta(F')} (e^{\frac{1}{4}\zeta(F)-\frac{1}{4}\zeta(F')}-e^{\frac{1}{4}\zeta(F')-\frac{1}{4}\zeta(F)})^{2}.\nonumber\\ \end{eqnarray*} Since $2(\cosh\frac{x-y}{2}-1)=(e^{\frac{1}{4}x-\frac{1}{4}y}-e^{\frac{1}{4}y-\frac{1}{4}x})^{2}$, we obtain \begin{eqnarray*} II&=&-2 \sum_{F\neq F'}f(F)f(F')\sigma_{F\overline{F}}\sigma_{F'\overline{F}}w(\overline{F}) e^{\frac{1}{2}\zeta(F)+\frac{1}{2}\zeta(F')}\left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)\nonumber\\ &-&2 \sum_{F\neq F'}f(F)f(F')\sigma_{EF}\sigma_{EF'}\tau_{FEF'} e^{\frac{1}{2}\zeta(F)+\frac{1}{2}\zeta(F')}\left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)\nonumber\\ &\leq& \sum_{F\neq F'}w(\overline{F})(f^{2}(F) e^{\zeta(F)}+f^{2}(F') e^{\zeta(F')}) \left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)\nonumber\\ &+& \sum_{F\neq F'}\tau_{FEF'}(f^{2}(F) e^{\zeta(F)}+f^{2}(F') e^{\zeta(F')}) \left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right).\nonumber\\ \end{eqnarray*} By the symmetry in $F$ and $F'$, \begin{eqnarray*} II&\leq& 2\sum_{F\neq F'}w(\overline{F})f^{2}(F) e^{\zeta(F)}
\left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)\nonumber\\ &+& 2\sum_{F\neq F'}\tau_{FEF'}f^{2}(F) e^{\zeta(F)} \left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)\nonumber\\ &=&2\sum_{F\neq F'}f^{2}(F) e^{\zeta(F)} \left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)(w^{up}_{FF'}+w^{down}_{FF'})\nonumber\\ &=&2\sum_{F\neq F'}f^{2}(F) e^{\zeta(F)} \left(\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\right)w_{FF'}.\nonumber\\ \end{eqnarray*} We claim that for any $F$ and $F'$ with $w_{FF'}\neq 0$, $$\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\leq \rho^{2}(F,F')\frac{1}{s^2}(\cosh\frac{\kappa s}{2}-1).$$ It suffices to consider $F\sim F'$ with $\rho(F,F')>0$. Since $\zeta$ is a Lipschitz function with Lipschitz constant $\kappa$, \begin{eqnarray*} &&\cosh\frac{\zeta(F)-\zeta(F')}{2}-1\\ &\leq&\cosh\frac{\kappa \rho(F,F')}{2}-1 =\rho^{2}(F,F')\frac{\cosh\frac{\kappa \rho(F,F')}{2}-1}{\rho^{2}(F,F')}\\ &\leq&\rho^{2}(F,F')\frac{1}{s^2}(\cosh\frac{\kappa s}{2}-1), \end{eqnarray*} where we used the monotonicity of the function $$t\mapsto \frac{1}{t^2}(\cosh\frac{\kappa t}{2}-1),\quad t>0,$$ and the definition of the jump size $s$ of the metric $\rho$. This proves the claim. Hence, by this claim, \begin{eqnarray*}II&\leq& 2\sum_{F\neq F'}w_{FF'}f^2(t,F)e^{\zeta(F)}\rho^{2}(F,F')\frac{1}{s^2}(\cosh\frac{\kappa s}{2}-1)\\ &\leq &\frac{2}{s^2}(\cosh\frac{\kappa s}{2}-1)\sum_{F}f^2(t,F)e^{\zeta(F)}w(F), \end{eqnarray*} where we used that $\rho$ is an intrinsic metric. Combining the estimates for the terms $I$ and $II$, we obtain, for any $t\geq 0$, $$E'(t)\leq \left(-2\lambda+\frac{2}{s^2}(\cosh\frac{\kappa s}{2}-1)\right)E(t),$$ which shows that $$\exp\left({2\lambda t-\frac{2}{s^2}(\cosh(\frac{\kappa s}{2})-1)t}\right)E(t)$$ is nonincreasing in $t\in [0,\infty)$. \end{proof} Given $s>0$, for any fixed $t>0$ and $r\geq 0$, we denote \begin{equation}\label{e:def eta} \zeta_s(t,r)=-\inf_{\kappa>0}\left(\frac{1}{s^2}(\cosh\frac{\kappa s}{2}-1)t-\frac{\kappa}{2} r\right).
\end{equation} It is easy to see that the infimum is attained at $\kappa_0=\frac{2}{s}{\mbox{arcsinh}}{\frac{rs}{t}}$ and \begin{equation}\label{eq:eta}\zeta_s(t,r)=\frac{1}{s^2}\left(rs\,{\mbox{arcsinh}}{\frac{rs}{t}}-\sqrt{t^2+r^2s^2}+t\right).\end{equation} Note that $\zeta_s(t,r)=\zeta(\frac{t}{s^{2}}, \frac{r}{s})$, where $\zeta(t,r)=r\,{\mbox{arcsinh}}{\frac{r}{t}}-\sqrt{t^2+r^2}+t.$ The function $\zeta(t,r)$ was already used by Davies, Pang and Delmotte to obtain estimates of the heat kernel \cite{8,9,24}. Moreover, according to \cite{9}, \begin{equation} \begin{cases} \zeta(t,r) \leq \frac{r^{2}}{2t}, &\text{for } t\geq0,\\ \zeta(t,r)\geq h \arcsin(h^{-1})\frac{r^{2}}{2t}, &\text{for } t\geq hr. \end{cases} \end{equation} Now we are ready to prove the DGG lemma on simplicial complexes. \begin{proof} Denote $r=\rho(A,B)$. For any $\kappa>0$, define $\zeta(F)=\kappa\rho(F,A)\wedge (\kappa\rho(A,B)+1)$, $F\in S_i(K)$. Then $\zeta$ is a Lipschitz function with Lipschitz constant at most $\kappa$, and for any function $g$ on $i$-simplices $$\sum_{B}|g|^2e^{\zeta}w\geq e^{\kappa r}\sum_{B}|g|^2w.$$ For $f\in\ell^2_w$, let $f(t,F)=e^{-t\mathcal{L}_i}f(F)$. Then the above inequality for $g(\cdot)=f(t,\cdot)$ and Lemma \ref{l:monotonicity Delmotte} yield \begin{eqnarray*}\sum_{F\in B}|f(t,F)|^2w(F)&\leq& e^{-\kappa r}E(t)\leq \exp\left(-2\lambda t+\frac{2}{s^2}(\cosh\frac{\kappa s}{2}-1)t-\kappa r\right)E(0)\\ &=&\exp\left(2(-\lambda t+\frac{1}{s^2}(\cosh\frac{\kappa s}{2}-1)t-\frac{\kappa}{2} r)\right)\sum_A f^2w,\end{eqnarray*} where we used that $\mathrm{supp}\, f\subset A$. For fixed $s,t>0$ and $r\geq0$, choose $\kappa$ such that the function $$\mathbb{R}^+\ni\kappa\mapsto {\frac{1}{s^2}(\cosh\frac{\kappa s}{2}-1)t-\frac{\kappa}{2} r}$$ attains its minimum.
Then by \eqref{e:def eta} and \eqref{eq:eta} we have $$\sum_{B}|f(t,F)|^2w \leq e^{2(-\lambda t-\zeta_s(t,r))}\sum_A |f|^2w.$$ That is, for all $f\in \ell^2(A,w)$, $$\sup_{\substack{g\in \ell^2(B,w)\\\|g\|_{\ell^2_w}=1}}|\langle e^{-t\mathcal{L}_i}f,g\rangle|^2=\sum_B|e^{-t\mathcal{L}_i}f|^2w\leq e^{2(-\lambda t-\zeta_s(t,r))}\|f\|_{\ell^2_w}^2.$$ This proves the theorem. \end{proof} Using the properties $(10)$ of $\zeta$, we obtain the following corollary. \begin{coro} Let $p_t(F,F')$ be the minimal heat kernel of the simplicial complex $K$ and let $h > 0$. Then there exists a constant $C(h,s)$ such that for any two subsets $A,B \subset S_{i}(K)$ and $t\geq sh\rho(A,B)$, \begin{equation} \Big|\sum_{F'\in B}\sum_{F\in A}p_t(F,F')w(F)w(F')\Big|\leq \sqrt{w(A)w(B)}\exp\left(-\lambda t\right)\exp\left(-C\frac{\rho^{2}(A,B)}{4t}\right). \end{equation} \end{coro} Note that for the bounded Hodge Laplacian with $b < \infty$, the jump size $s$ of the metric in Definition \ref{defi24} is equal to $\frac{1}{(i+1)\sqrt{b}}$. We thus have the following corollary. \begin{coro} For a simplicial complex $K$ with $b< \infty$, the heat kernel satisfies $$|p_t(F,F')|\leq \frac{1}{\sqrt{w(F)w(F')}}e^{-\lambda t-\zeta_{\frac{1}{(i+1)\sqrt{b}}}(t,\rho(F,F'))},$$ where $\rho$ is the distance defined in Definition \ref{defi24}. \end{coro} When applied to graphs with the normalized Laplacian, the above corollary implies Davies's heat kernel estimate [3,$~$Corollary$~$1.2]. \end{document}
\begin{document} \title{Common Coherence Witnesses and Common Coherent States } \author{Bang-Hai Wang$^{1}$} \email{[email protected]} \author{Zi-Heng Ding$^{2,3}$} \author{Zhihao Ma$^{2,3}$} \email{[email protected]} \author{Shao-Ming Fei$^{4,5}$} \email{[email protected]} \affiliation{ $^1$School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China\\ $^2$School of Mathematical Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai 200240, China\\ $^3$Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China\\ $^4$School of Mathematical Sciences, Capital Normal University, Beijing 100048, China\\ $^5$Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany } \date{\today} \begin{abstract} We study the properties and characterization of coherence witnesses. We give methods for constructing coherence witnesses for an arbitrary coherent state. We investigate the problem of finding common coherence witnesses for certain classes of states. We show that finitely many different witnesses $W_1, W_2, \cdots, W_n$ can detect some common coherent states if and only if $\sum_{i=1}^nt_iW_i$ is still a witness for any positive numbers $t_i\ (i=1,2,\cdots,n)$. We show that coherent states play the role of high-level witnesses. Thus, the common-state problem is transformed into the question of when different high-level witnesses (coherent states) can detect the same coherence witnesses. Moreover, we show that a coherent state and its robust state have no common coherence witness, and we give a general way to construct optimal coherence witnesses for any comparable states. \end{abstract} \pacs{03.65.Ud, 03.65.Ca, 03.67.Mn, 03.67.-a } \maketitle \section{I.
Introduction} Originating from the fundamental superposition principle of quantum mechanics, quantum coherence \cite{Streltsov17,Hu2018} plays a crucial role in quantum metrology \cite{Giovannetti04,Giovannetti11}, quantum algorithms \cite{Hillery 16}, nanoscale thermodynamics \cite{Aberg14,Gour15,Lostaglio15,Narasimhachar15,Francica19} and energy transportation in biological systems \cite{Lloyd11,Huelga13,Lambert13,Romero14}. Detecting and quantifying quantum coherence therefore become fundamental problems in the emerging quantum areas. Numerous impressive schemes for measures of quantum coherence have been presented \cite{Baumgratz14,Yuan2015,Fang2018,Winter2016,Streltsov2015,Du2015,Qi2017,Napoli16,Bu2017,Huang2017,Jin2018,Xi2019}. The coherence witness, inspired by entanglement witnesses, is arguably a powerful tool for coherence detection in experiments \cite{Piani16,Napoli16,Wang17,Zheng18,Ringbauer18,Nie19,Ma19} and coherence quantification in theory \cite{Wang21,Ma21}. It directly detects coherent states and gives rise to measures of quantum coherence without state tomography. Compared with the entanglement witness, the coherence witness has many different characteristics that deserve to be investigated extensively. Two natural questions arise: when can different coherence witnesses detect some common coherent states, and when can different coherent states be detected by some common coherence witnesses in finite-dimensional systems? Although the two analogous questions for entanglement witnesses have been well solved, separately \cite{Wu06,Wu07,Hou10a}, the problems of common coherence witnesses and common coherent states remain unsolved. In this paper we systematically investigate and solve the problems of common coherence witnesses and common coherent states. This paper is organized as follows. In section II, we review the concept of coherence witnesses and the methods of constructing coherence witnesses.
In section III we give sufficient and necessary conditions for any given two or more coherence witnesses to be incomparable, and deal with the problem of common coherence witnesses. In section IV, we characterize coherent states based on high-level witnesses and solve the problem of when different coherent states can be detected by common coherence witnesses. Summary and discussions are given in the last section. \section{II. Common coherence witnesses} With respect to a fixed basis $\{|i\rangle\}_{i=1,2,\cdots,d}$ of the $d$-dimensional Hilbert space $\mathcal{H}$, a state is called incoherent if it is diagonal in this basis. Denote by $\mathcal{I}$ the set of incoherent states. The density operator of an arbitrary incoherent state $\delta\in \mathcal{I}$ is of the form \begin{equation}\label{ic} \delta=\sum_{i=1}^d\delta_i|i\rangle\langle i|. \end{equation} Clearly, the set of incoherent states $\mathcal{I}$ is convex and compact. Hence, by the Hahn-Banach theorem \cite{Edwards65}, there must exist a hyperplane which separates an arbitrarily given coherent state from the set of all incoherent states. We call this hyperplane a coherence witness \cite{Piani16,Napoli16}. A coherence witness is a Hermitian operator, $W=W^\dag$, such that (i) $\text{tr}(W\delta)\geq0$ for all incoherent states $\delta\in \mathcal{I}$, and (ii) there exists a coherent state $\pi$ such that $\text{tr}(W\pi)<0$. More precisely, a Hermitian operator $W$ on $\mathcal{H}$ is a coherence witness if (i') its diagonal elements are all non-negative, and (ii') it has at least one negative eigenvalue. Following the definition of incoherent states and the Hahn-Banach theorem, we can restrict condition (i) to $\text{tr}(W\delta)=0$ and relax (ii) to $\text{tr}(W\pi)\neq0$ \cite{Napoli16,Ren2017,Wang21}. As coherence witnesses are Hermitian quantum mechanical observables, they can be experimentally implemented \cite{Wang17,Zheng18,Ringbauer18,Nie19,Ma19}.
Since the density matrix of an entangled quantum state cannot be diagonal, from the definition (\ref{ic}) an entangled quantum state must be a coherent state. Therefore, entanglement witnesses are also coherence witnesses with respect to a fixed basis. We denote by $\mathbf{S}$ the set of all separable states, $\mathbf{E}$ the set of all entangled states, $\mathbf{I}$ the set of all incoherent states and $\mathbf{C}$ the set of all coherent states. Fig. 1 (a) illustrates schematically the relations between entanglement and coherence. Therefore, we can construct coherence witnesses in a way similar to the construction of entanglement witnesses \cite{Guhne09,Horodecki09}. \begin{figure} \caption{(Color online) (a) With respect to a fixed basis, all entanglement witnesses are also coherence witnesses. (b) We denote by $\mathbf{Q}$ the set of all quantum states.} \label{fig1} \end{figure} For a given coherent state $|\psi\rangle\langle\psi|$, one has the coherence witness \begin{equation} W=\alpha I-|\psi\rangle\langle\psi|, \end{equation} where $I$ is the identity matrix and $\alpha=\max \mathrm{Tr}(\delta|\psi\rangle\langle\psi|)$, with the maximum running over all incoherent states $\delta$. Coherence witnesses can also be constructed by geometrical methods, \begin{equation} W=\frac{1}{N}(\delta-\rho+\mathrm{Tr}(\delta(\rho-\delta))I), \end{equation} where $\delta$ is the closest incoherent state to $\rho$, $N=\|\rho-\delta\|$ and $\|A\|\equiv\sqrt{\mathrm{Tr}(A^\dag A)}$. Recently, a general way of constructing a coherence witness for an arbitrary state has been provided \cite{Wang21,Ma21}: $W_{\rho}=-\rho+\Delta(\rho)$ is an optimal coherence witness detecting the coherence of $\rho$, where $\Delta(\rho)=\sum_{i=0}^{d-1}\langle i|\rho|i\rangle|i\rangle\langle i|$ is the dephasing operation in the reference basis $\{|i\rangle\}_{i=0}^{d-1}$. More general constructions of coherence witnesses are also given in \cite{Wang21,Ma21}.
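As an illustration (a numerical sketch with an assumed qubit example, the coherent state $|+\rangle\langle+|$ with $|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$), the construction $W_{\rho}=-\rho+\Delta(\rho)$ can be checked directly against conditions (i') and (ii'):

```python
import numpy as np

# Assumed example: rho = |+><+| in the computational basis.
plus = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
W = np.diag(np.diag(plus)) - plus     # W_rho = Delta(rho) - rho

# (i') diagonal elements are non-negative (here: zero, so W is optimal),
# (ii') at least one negative eigenvalue.
assert np.all(np.diag(W) >= 0)
assert np.linalg.eigvalsh(W).min() < 0

# tr(W delta) = 0 for every incoherent delta (W has zero diagonal),
# and W detects the coherence of rho itself:
assert np.trace(W @ plus) < 0
```

Here $W=\frac{1}{2}(|0\rangle\langle1|+|1\rangle\langle0|)\cdot(-1)$ has eigenvalues $\pm\frac{1}{2}$ and $\mathrm{tr}(W|+\rangle\langle+|)=-\frac{1}{2}$.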
For a coherence witness $W$, we define $D_{W}=\{\rho\,|\,\textrm{tr}\left(\rho W\right)<0\}$, namely, the set of all coherent states ``witnessed'' by $W$. Given two coherence witnesses $W_1$ and $W_2$, we say that $W_2$ is finer than $W_1$ if $D_{W_1}\subseteq D_{W_2}$, that is, if all the coherent states ``witnessed'' by $W_1$ are also ``witnessed'' by $W_2$. We call $W$ optimal if there exists no other coherence witness which is finer than it. It has been shown that a coherence witness is optimal if and only if its diagonal elements are all zero \cite{Wang21}. For normalization we set $\|W\|_\infty=1$, as there exist traceless coherence witnesses. Moreover, given two coherence witnesses $W_1$ and $W_2$, we say that $W_2$ and $W_1$ are incomparable if $D_{W_1}\cap D_{W_2}=\emptyset$. Two coherence witnesses $W_1$ and $W_2$ can detect some common coherent states if $D_{W_1}\cap D_{W_2}\neq\emptyset$. To proceed, we need the following lemma. {\textbf{Lemma 1:}} If $W_{2}$ and $W_{1}$ are incomparable, i.e., $D_{W_{1}} \cap D_{W_{2}} = \emptyset$, and if $D_{W} \subset D_{W_{1}} \cup D_{W_{2}}$, then either $D_{W} \subset D_{W_{1}}$ or $D_{W} \subset D_{W_{2}}$.\par {\it Proof.---} Suppose, on the contrary, that both $D_{W_{1}} \cap D_{W}$ and $D_{W_{2}} \cap D_{W}$ are nonempty. Take $\rho_{i} \in D_{W_{i}} \cap D_{W}$, $i = 1, 2$. Consider the segment $[\rho_{1},\rho_{2}]$ consisting of $\rho_{t}=(1-t)\rho_{1}+t\rho_{2}$, where $0 \leq t \leq 1$. As $D_{W}$ is convex, we obtain \begin{equation}\label{C} [\rho_{1},\rho_{2}] \subset D_{W} \subset D_{W_{1}} \cup D_{W_{2}}. \end{equation} Thus we have \begin{equation}\label{D} [\rho_{1},\rho_{2}] =( D_{W_{1}} \cap [\rho_{1},\rho_{2}]) \cup (D_{W_{2}} \cap [\rho_{1},\rho_{2}]), \end{equation} which means that $[\rho_{1},\rho_{2}]$ is divided into two convex parts.
It follows that there is $0 < t_{0} <1$ such that $\{\rho_{t}: 0 \leq t < t_{0}\} \subset D_{W_{1}}$, $\{\rho_{t}: t_{0} < t \leq 1\} \subset D_{W_{2}}$ and either $\rho_{t_{0}} \in D_{W_{1}}$ or $\rho_{t_{0}} \in D_{W_{2}}$.\par Assume that $\rho_{t_{0}} \in D_{W_{1}}$; then $\text{tr}(W_{1}\rho_{t_{0}}) < 0$. Thus, for sufficiently small $\varepsilon > 0$ with $t_{0} + \varepsilon \leq 1$, we have \begin{eqnarray*} 0 &\leq& \text{tr}(\rho_{t_{0}+\varepsilon}W_{1}) \\ &=& \text{tr}(\rho_{t_{0}}W_{1}) + \varepsilon[\text{tr}(\rho_{2}W_{1}) - \text{tr}(\rho_{1}W_{1})] < 0, \end{eqnarray*} which is a contradiction. Similarly, $\rho_{t_{0}} \in D_{W_{2}}$ leads to a contradiction as well. This completes the proof. $\blacksquare$ \textbf{ Theorem 1.} $W_2$ and $W_1$ are incomparable (no common coherent states can be detected) if and only if there exist $a>0$ and $b>0$ such that $W_{a,b}=aW_1+bW_2$ is positive. {\it Proof.---} If $W_{a,b}=aW_1+bW_2$ is positive for some $a>0$ and $b>0$, then for any $\rho\in D_{W_1}\cap D_{W_2}$ we would have $\text{tr}(W_{a,b}\rho)=a\,\text{tr}(W_1\rho)+b\,\text{tr}(W_2\rho)<0$, which is impossible. Hence $D_{W_1}\cap D_{W_2}=\emptyset$. Conversely, suppose that $D_{W_1}\cap D_{W_2}=\emptyset$ and take $t=\frac{a}{b}$.\par By Lemma 1, we have $D_{W_{a,b}} \subset D_{W_{1}}$ or $D_{W_{a,b}} \subset D_{W_{2}}$ for all $a > 0$ and $b > 0$. Then $D_{W_{a,b}}=D_{\frac{1}{b}W_{a,b}}=D_{tW_{1}+W_{2}}=D_{\frac{t}{1+t}W_{1}+\frac{1}{1+t}W_{2}}$. Hence, we obtain $D_{W_{a,b}}=D_{\lambda W_{1}+(1-\lambda)W_{2}}\doteq D_{W_{\lambda}}$ by taking $\lambda=\frac{t}{1+t}$, where $\lambda \in (0,1)$. We can now consider $W_{\lambda}$ instead of $W_{a,b}$. As $t$ varies from $0$ to $\infty$ continuously, $\lambda$ varies from $0$ to $1$ continuously, which means that $D_{W_{\lambda}}$ also varies from $D_{W_{2}}$ to $D_{W_{1}}$ continuously. Take $\lambda_{0} = \text{sup}\{\lambda : D_{W_{\lambda}}\subset D_{W_{2}} \}$.
\par We claim that if $D_{W_{\lambda_{0}}} \subset D_{W_{2}}$ then there exists $0 <\varepsilon<1 - \lambda_{0}$ such that $W_{\lambda_{0} + \varepsilon}$ is a positive operator. Otherwise, if $D_{W_{\lambda_{0}+\varepsilon}} \neq \emptyset$ for all $0 <\varepsilon< 1 - \lambda_{0}$, then $D_{W_{\lambda_{0}}} \subset D_{W_{2}}$, $D_{W_{\lambda_{0}+\varepsilon}}\subset D_{W_{1}}$, and for all $\rho \in D_{W_{\lambda_{0}}}$ \begin{equation}\label{E} \text{tr}(W_{\lambda_{0}}\rho) < 0,\quad \text{tr}(W_{\lambda_{0}}\rho) + \varepsilon(\text{tr}(W_{1}\rho) - \text{tr}(W_{2}\rho)) \geq 0. \end{equation} Note that $\text{tr}(W_{1}\rho) \geq 0$ and $\text{tr}(W_{2}\rho) < 0$, so the second term in the last inequality is positive; but since $\varepsilon$ can be taken arbitrarily small while $\text{tr}(W_{\lambda_{0}}\rho)<0$ is fixed, the last inequality is impossible.\par On the other hand, if $D_{W_{\lambda_{0}}} \subset D_{W_{1}}$ then there exists $0 < \varepsilon < \lambda_{0}$ such that $W_{\lambda_{0} - \varepsilon}$ is a positive operator. Otherwise, if $D_{W_{\lambda_{0} - \varepsilon}} \neq \emptyset$ for all $0 < \varepsilon < \lambda_{0}$, then $D_{W_{\lambda_{0}}} \subset D_{W_{1}}$, $D_{W_{\lambda_{0} - \varepsilon}} \subset D_{W_{2}}$, and for all $\rho \in D_{W_{\lambda_{0}}}$ we have \begin{equation}\label{F} \text{tr}(W_{\lambda_{0}}\rho) < 0,\quad \text{tr}(W_{\lambda_{0}}\rho) + \varepsilon(\text{tr}(W_{2}\rho) - \text{tr}(W_{1}\rho)) \geq 0. \end{equation} For the same reason as for Eq.(\ref{E}), Eq.(\ref{F}) is impossible as well.\par To sum up, no matter whether $D_{W_{\lambda_{0}}} \subset D_{W_{1}}$ or $D_{W_{\lambda_{0}}} \subset D_{W_{2}}$, there exists $\lambda \in(0,1)$, or equivalently $t > 0$ ($a > 0$ and $b > 0$), such that $W_{\lambda}$ ($W_{a,b}$) is a positive operator, which completes the proof of the theorem. $\blacksquare$ \textbf{ Corollary 1.} $W_2$ and $W_1$ are not incomparable if and only if $W_{a,b}=aW_1+bW_2$ is a witness for all $a>0$ and $b>0$.
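Theorem 1 and Corollary 1 admit a quick numerical illustration (assumed qubit examples built from the Pauli matrices $\sigma_x$ and $\sigma_y$, which are coherence witnesses since their diagonals vanish and their eigenvalues are $\pm1$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # detects Re(rho_01) < 0
sy = np.array([[0, -1j], [1j, 0]])               # detects Im(rho_01) > 0

# W1 = sx and W2 = -sx are incomparable, since tr(W1 rho) = -tr(W2 rho);
# consistently, a positive combination is positive (here: identically zero).
assert np.linalg.eigvalsh(sx + (-sx)).min() >= 0

# In contrast, sx and sy detect a common coherent state, e.g. a pure state
# with rho_01 in the second quadrant:
psi = np.array([1, np.exp(-1j * 3 * np.pi / 4)]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
assert np.trace(sx @ rho).real < 0 and np.trace(sy @ rho).real < 0

# Corollary 1: every positive combination a*sx + b*sy has eigenvalues
# +-sqrt(a^2 + b^2), hence it is never positive and remains a witness.
for a, b in [(1, 1), (2, 0.5), (0.1, 3)]:
    W = a * sx + b * sy
    assert np.all(np.diag(W).real >= 0)
    assert np.linalg.eigvalsh(W).min() < 0
```

The signs of $\mathrm{tr}(\sigma_x\rho)=2\,\mathrm{Re}\,\rho_{01}$ and $\mathrm{tr}(\sigma_y\rho)=-2\,\mathrm{Im}\,\rho_{01}$ make the two regimes of the theorem explicit.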
Theorem 1 can be generalized to the case of finitely many witnesses. We have the following result. \textbf{ Theorem 2.} $W_1, W_2, \cdots, W_n$ are incomparable if and only if there exist $t_i>0$ $(i=1,2,\cdots,n)$ such that $W=\sum_{i=1}^nt_iW_i$ is positive. {\it Proof.---} (i) The ``if" part. If $W=\sum_{i=1}^nt_iW_i\ge0$ for some $t_i>0$, then $D_W=\emptyset$. Let $S=\{W_i\mid 1\le i\le n\}$ and denote its convex hull by $cov(S)=\{\sum_{i=1}^Kt_iW_i\mid t_i\ge0,\sum_{i=1}^Kt_i=1,W_i\in S, K\in \mathbb{N}\}$. Without loss of generality we assume that any proper subset of $S$ can detect some coherent states simultaneously. For $n=2$, Theorem 2 holds as it reduces to Theorem 1. Now assume that Theorem 2 holds for $K\le n-1$. We prove that Theorem 2 holds for $K=n$. Indeed, we only need to prove the case of $n=3$; the case of arbitrary $n$ can be proved in a similar way. By the assumption, we have $D_{W_1}\neq\emptyset$, $D_{W_1}\cap D_{W_2}\neq\emptyset$ and $D_{W_1}\cap D_{W_3}\neq\emptyset$. But $D_{W_1}\cap D_{W_2}\cap D_{W_3}=\emptyset$, that is, $(D_{W_1}\cap D_{W_2})\cap (D_{W_1}\cap D_{W_3})=\emptyset$. Let $W_{b,c}=bW_2+cW_3$, where $b>0$ and $c>0$. We have $D_{W_1}\cap D_{W_{b,c}}\subset (D_{W_1}\cap D_{W_2})\cup(D_{W_1}\cap D_{W_3})$. Since $(D_{W_1}\cap D_{W_2})$ and $(D_{W_1}\cap D_{W_3})$ are disjoint and $D_{W_1}\cap D_{W_{b,c}}$ is convex, $D_{W_1}\cap D_{W_{b,c}}$ varies from $(D_{W_1}\cap D_{W_3})$ to $(D_{W_1}\cap D_{W_2})$ as $\frac{b}{c}$ varies from 0 to $\infty$. By an argument similar to that in the proof of Theorem 1, we conclude that there exist $b_0,c_0>0$ such that $D_{W_1}\cap D_{W_{b_0,c_0}}=\emptyset$. Therefore, $W=a'W_1+b'W_{b_0,c_0}=a'W_1+b'b_0W_2+b'c_0W_3\ge0$ for some $a'>0$ and $b'>0$. By induction on $n$ we complete the proof of (i). (ii) The ``only if" part is clear. If $D_W=\emptyset$, then there exists $W\in cov(S)$ with $W\ge0$ by the proof in (i). It follows that $W$ is not a witness, which gives a contradiction.
$\blacksquare$ \section{III. Common coherent states} A framework which assembles hierarchies of ``witnesses" has been proposed in \cite{Wang18}. In this framework, a coherence witness can witness coherent states and, on the other hand, a coherent state can also act as a ``high-level witness" of coherence witnesses, i.e., one which witnesses coherence witnesses. Concretely, when a coherence witness $W$ detects a coherent state $\rho$, we say that $W$ ``witnesses" the coherence of the state $\rho$. A question naturally arises: what ``witnesses" coherence witnesses? It is known that the set of quantum states (incoherent states and coherent states) is also convex and compact. Thus, by the Hahn-Banach theorem, there is at least one ``high-level" witness ``witnessing" a coherence witness, see Fig. 1 (b). For a high-level witness $\Pi$ of coherence witnesses, one has (i'') $\text{tr}(\Pi\varrho)\geq0$ for all quantum states $\varrho$, and (ii'') there exists at least one coherence witness $W$ such that $\text{tr}(\Pi W)<0$. Coherence witnesses ``witness'' coherent states and coherent states ``witness'' coherence witnesses; that is, coherent states play the role of witnesses. Since coherent states are also (high-level) witnesses, the question of when different coherent states can be detected by some common coherence witness can be transformed into the question of when different high-level witnesses (coherent states) can detect the same coherence witnesses. From the high-level-witness role played by coherent states and Theorem 1, we have the following result. \textbf{Theorem 3.} Two coherent states $\rho_1$ and $\rho_2$ are incomparable, i.e., $D_{\rho_1}\cap D_{\rho_2}=\emptyset$, if and only if there exists $0<t<1$ such that $\rho_t=t\rho_1+(1-t)\rho_2$ is an incoherent state.
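A concrete instance of Theorem 3 can be checked numerically. The sketch below uses the states $|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$ (our own illustrative choice, not taken from the text): each is coherent, yet the equal mixture $t=1/2$ is the maximally mixed state, which is incoherent, so the two states are incomparable.

```python
import numpy as np

# |+> and |->: both maximally coherent in the computational basis
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
rho1 = np.outer(plus, plus.conj())
rho2 = np.outer(minus, minus.conj())

# Both are coherent: nonzero off-diagonal entries.
assert abs(rho1[0, 1]) > 0 and abs(rho2[0, 1]) > 0

# t = 1/2 mixes them to the maximally mixed state I/2, which is diagonal,
# hence incoherent: by Theorem 3, rho1 and rho2 are incomparable.
t = 0.5
rho_t = t * rho1 + (1 - t) * rho2
off_diag = rho_t - np.diag(np.diag(rho_t))
assert np.allclose(off_diag, 0)
```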
The robustness of coherence $\mathscr{C}_\mathcal{R}(\rho)$ \cite{Piani16,Napoli16} of a coherent state $\rho \in {\mathscr{D}}(\mathbb{C}^d)$ is defined as \begin{equation}\label{ROC} \mathscr{C}_\mathcal{R}(\rho)= \min_{\tau \in {\mathscr{D}}(\mathbb{C}^d)} \left\{ s\geq 0\ \Big\vert\ \frac{\rho + s\ \tau}{1+s} =\delta \in {\mathcal{I}}\right\}, \end{equation} where ${\mathscr{D}}(\mathbb{C}^d)$ stands for the convex set of density operators acting on a $d$-dimensional Hilbert space. We have the following conclusions. \textbf{Corollary 2.} Any coherent state $\rho$ and the state $\tau$ attaining the minimum in (\ref{ROC}) have no common coherence witnesses. \textbf{ Corollary 3.} Two coherent states $\rho_1$ and $\rho_2$ are not incomparable if and only if there does not exist $0<t<1$ such that $\rho_t=t\rho_1+(1-t)\rho_2$ is an incoherent state. From the general construction of optimal coherence witnesses for an arbitrary coherent state \cite{Wang21,Ma21} and Corollary 3, there is also a general way of constructing a common optimal coherence witness for different coherent states. \textbf{ Corollary 4.} For two given not incomparable coherent states $\rho_1$ and $\rho_2$, the optimal coherence witness $W=aW_{\rho_1}+bW_{\rho_2}$ detects the coherence of both $\rho_1$ and $\rho_2$, where $a>0$, $b>0$ and $W_{\rho_i}=-\rho_i+\Delta(\rho_i)$ $(i=1,2)$. It is also not difficult to generalize Theorem 3 to the case of finitely many coherent states. \textbf{ Theorem 4.} The coherent states $\rho_1, \rho_2, \cdots, \rho_n$ are incomparable if and only if there exist $t_i>0$ $(i=1,2,\cdots,n)$ with $\sum_{i=1}^nt_i=1$ such that $\rho=\sum_{i=1}^nt_i\rho_i$ is an incoherent state. \section{IV. Summary and Discussions} To summarize, we have investigated the properties of coherence witnesses and the methods of constructing coherence witnesses for arbitrarily given coherent states.
We have presented the conditions for different witnesses to detect the same coherent states, as well as the conditions under which the coherence of a set of different coherent states can be detected by a common set of coherence witnesses. Here, we have mainly considered the case of discrete quantum systems in finite-dimensional Hilbert spaces. In fact, our results also hold in the infinite-dimensional case, since our main results are proved without the additional assumption $\text{tr}(W_1)=\text{tr}(W_2)$. However, coherence in continuous variable systems (such as light modes) is significantly different from the discrete case. For instance, the set of Gaussian states is closed and convex, but not necessarily bounded (cf. the Hahn-Banach theorem \cite{Bruss02}). Our investigations may stimulate further research on these related problems. \section{ACKNOWLEDGMENTS} This work is supported by the National Natural Science Foundation of China under Grant Nos. 62072119, 61672007 and 12075159, Guangdong Basic and Applied Basic Research Foundation under Grant No. 2020A1515011180, Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (Grant Nos. SIQSE202005, SIQSE202001), Natural Science Foundation of Shanghai (Grant No. 20ZR1426400), Beijing Natural Science Foundation (Z190005), the Academician Innovation Platform of Hainan Province, and Academy for Multidisciplinary Studies, Capital Normal University. \end{document}
\begin{document} \title[Dirichlet boundary value problem with discontinuous nonlinearity]{Dirichlet boundary value problem related to the $p(x)-$Laplacian with discontinuous nonlinearity} \author{M. AIT HAMMOU} \address[M. AIT HAMMOU]{Sidi Mohamed Ben Abdellah university, Laboratory LAMA, Department of Mathematics, Fez, Morocco} \email[M. AIT HAMMOU]{[email protected]} \subjclass[2010]{47H04, 47H11, 47H30, 35D30, 35J66.} \keywords{Set-valued operators, Nonlinear elliptic equation, $p(x)-$Laplacian, Sobolev spaces with variable exponent, Degree theory.} \begin{abstract} In this paper, we prove the existence of a weak solution for the Dirichlet boundary value problem related to the $p(x)-$Laplacian $$ -\mbox{div}(|\nabla u|^{p(x)-2}\nabla u)+u\in -[\underline{g}(x,u),\overline{g}(x,u)], $$ by using the degree theory after turning the problem into a Hammerstein equation. The right-hand side $g$ is a possibly discontinuous function in the second variable satisfying some non-standard growth conditions. \end{abstract} \maketitle \section{Introduction} The $p(x)$-Laplacian has been widely used in the modeling of several physical phenomena. We refer to \cite{R} for its use for electrorheological fluids and to \cite{AMSo,CLR} for its use in image processing. To date, many results have been obtained for solutions of equations related to this operator. \par We consider the following nonlinear elliptic boundary value problem \begin{equation}\label{Pr1} \left\{\begin{array}{lll} -\Delta_{p(x)}u+u\in -[\underline{g}(x,u),\overline{g}(x,u)] & \mbox{in }\; \Omega,\\ u=0 &\mbox{on }\; \partial\Omega. \end{array}\right.
\end{equation} where $-\Delta_{p(x)}u:=-\mbox{div}(|\nabla u|^{p(x)-2}\nabla u)$ is the $p(x)-$Laplacian, $\Omega\subset \mathbb{R}^N$ is a bounded domain, $p(\cdot)$ is a log-H\"older continuous exponent and $g$ is a possibly discontinuous function in the second variable satisfying some non-standard growth conditions.\par For the constant exponent $p(\cdot)\equiv p$ with values in $(2,N)$, Kim \cite{K} studied this problem using degree theory, after developing a topological degree theory for a class of locally bounded weakly upper semicontinuous set-valued operators of generalized $(S_+)$ type in real reflexive separable Banach spaces, based on the Berkovits-Tienari degree \cite{BT}.\par The aim of this paper is to prove the existence of at least one weak solution of \eqref{Pr1} using topological degree theory, after turning the problem into a Hammerstein equation. The results in \cite{K} are thus extended to a larger functional framework, that of Sobolev spaces with variable exponents.\par This paper is divided into four sections. The second section is devoted to mathematical preliminaries concerning some classes of locally bounded weakly upper semicontinuous set-valued operators of generalized $(S_+)$ type, the topological degree developed by Kim \cite{K} and some basic properties of the generalized Lebesgue-Sobolev spaces $W_0^{1,p(x)}$. The third section collects our assumptions and some technical lemmas. The fourth section is devoted to stating and proving the existence result for weak solutions of problem \eqref{Pr1}. \section{Mathematical preliminaries} \subsection{Some classes of operators and topological degree} Let $X$ and $Y$ be two real Banach spaces and $\Omega$ a nonempty subset of $X$.
A set-valued operator\\ $F:\Omega\subset X \rightarrow 2^Y$ is said to be \begin{itemize} \item {\it bounded}, if it takes any bounded set into a bounded set; \item {\it upper semicontinuous (u.s.c)}, if the set $F^{-1}(A)=\{u\in\Omega/ Fu\cap A\neq\emptyset\}$ is closed in $X$ for each closed set $A$ in $Y$; \item {\it weakly upper semicontinuous (w.u.s.c)}, if $F^{-1}(A)$ is closed in $X$ for each weakly closed set $A$ in $Y$; \item {\it compact}, if it is u.s.c and the image of any bounded set is relatively compact; \item {\it locally bounded}, if for each $u\in \Omega$ there exists a neighborhood $\mathcal{U}$ of $u$ such that the set $F(\mathcal{U})=\bigcup_{u\in\mathcal{U}}Fu$ is bounded. \end{itemize} Let $X$ be a real reflexive Banach space with dual $X^*$. A set-valued operator\\ $F:\Omega\subset X\rightarrow 2^{X^*}\backslash\emptyset$ is said to be \begin{itemize} \item {\it of class $(S_+)$}, if for any sequence $(u_n)$ in $\Omega$ and any sequence $(w_n)$ in $X^*$ with $w_n\in Fu_n$ such that $u_n \rightharpoonup u$ in $X$ and $\limsup\langle w_n,u_n - u\rangle\leq 0 $, it follows that $u_n \rightarrow u$ in $X$; \item {\it quasimonotone}, if for any sequence $(u_n)$ in $\Omega$ and any sequence $(w_n)$ in $X^*$ with $w_n\in Fu_n$ such that $u_n \rightharpoonup u$ in $X$, it follows that $$\liminf\langle w_n,u_n - u\rangle\geq 0.$$ \end{itemize} Let $T:\Omega_1\subset X\rightarrow X^*$ be a bounded operator such that $\Omega\subset\Omega_1$. A set-valued operator $F:\Omega\subset X\rightarrow 2^X\backslash\emptyset$ is said to be of {\it class $(S_+)_T$} if, for any sequence $(u_n)$ in $\Omega$ and any sequence $(w_n)$ in $X$ with $w_n\in Fu_n$ such that $u_n \rightharpoonup u$ in $X$, $y_n:=Tu_n\rightharpoonup y$ in $X^*$ and $\limsup\langle w_n,y_n-y\rangle\leq 0$, we have $u_n\rightarrow u$ in $X$.
For any set $\Omega\subset X$ with $\Omega\subset D_F$, where $D_F$ denotes the domain of $F$, and any bounded operator $T:\Omega\rightarrow X^*$, we consider the following classes of operators: \begin{eqnarray*} \mathcal{F}_1(\Omega) &:=& \{F:\Omega\rightarrow X^*\mid F \mbox{ is bounded, continuous and of class }(S_+)\}, \\ \mathcal{F}_T(\Omega) &:=& \{F:\Omega\rightarrow 2^X\mid F \mbox{ is locally bounded, w.u.s.c and of class }(S_+)_T\}. \end{eqnarray*} Let $\mathcal{O}$ be the collection of all bounded open sets in $X$. Define $$\mathcal{F}(X) := \{F \in \mathcal{F}_T(\bar{G})\mid G\in\mathcal{O}, T\in \mathcal{F}_1(\bar{G})\}.$$ Here, $T\in \mathcal{F}_1(\bar{G})$ is called an {\it essential inner map} to $F$. \begin{lemma}\label{l2.1}\cite[Lemma 1.4]{K} Let $G$ be a bounded open set in a real reflexive Banach space $X$. Suppose that $T\in \mathcal{F}_1(\bar{G})$ and $S:D_S\subset X^*\rightarrow 2^X$ is locally bounded and w.u.s.c such that $T(\bar{G})\subset D_S$. Then the following statements hold: \begin{enumerate} \item If $S$ is quasimonotone, then $I+ST\in\mathcal{F}_T(\bar{G})$, where $I$ denotes the identity operator. \item If $S$ is of class $(S_+)$, then $ST\in\mathcal{F}_T(\bar{G})$. \end{enumerate} \end{lemma} \begin{definition} Let $G$ be a bounded open subset of a real reflexive Banach space $X$, $T:\bar{G}\rightarrow X^*$ be bounded and continuous and let $F$ and $S$ be bounded and of class $(S_+)_T$. The affine homotopy $H:[0,1]\times\bar{G}\rightarrow 2^X$ defined by $$H(t,u):=(1-t)Fu+tSu \mbox{ for } (t,u)\in[0,1]\times\bar{G}$$ is called an affine homotopy with the common essential inner map $T$. \end{definition} \begin{remark}\cite[Lemma 1.6]{K} The above affine homotopy satisfies condition $(S_+)_T$.
\end{remark} As in \cite{K}, we introduce a suitable topological degree for the class $\mathcal{F}(X)$: \begin{theorem}\label{t2.1}\cite[Definition 2.9 and Theorem 2.10]{K} Let $$\mathcal{M} =\{(F,G,h)\mid G\in \mathcal{O}, T\in\mathcal{F}_1(\bar{G}), F\in\mathcal{F}_T(\bar{G}), h\notin F(\partial G)\}.$$ There exists a unique degree function $d:\mathcal{M}\rightarrow \mathbb{Z}$ that satisfies the following properties: \begin{enumerate} \item (Existence) If $d(F,G,h)\neq 0$, then the inclusion $h\in Fu$ has a solution in $G$. \item (Additivity) If $G_1$ and $G_2$ are two disjoint open subsets of $G$ such that\\ $h\not\in F(\bar{G}\setminus (G_1\cup G_2))$, then we have $$d(F,G,h)=d(F,G_1,h)+d(F,G_2,h).$$ \item (Homotopy invariance) Suppose that $H: [0,1]\times \bar{G}\rightarrow 2^X$ is a locally bounded w.u.s.c affine homotopy of class $(S_+)_T$ with the common essential inner map $T$. If $h:[0,1]\rightarrow X$ is a continuous curve in $X$ such that $h(t)\notin H(t,\partial G)$ for all $t\in [0,1]$, then the value of $d(H(t,.),G,h(t))$ is constant for all $t\in[0,1]$. \item (Normalization) For any $h\in G$, we have $$d(I,G,h)=1.$$ \end{enumerate} \end{theorem} \subsection{The spaces $W_0^{1,p(x)}(\Omega)$} We introduce the setting of our problem with some auxiliary results on the variable exponent Lebesgue and Sobolev spaces $L^{p(x)}(\Omega)$ and $W_0^{1,p(x)}(\Omega)$. For convenience, we only recall some basic facts which will be used later; we refer to \cite{FZ1,KR,ZQF} for more details.\\ Let $\Omega$ be an open bounded subset of $\mathbb{R}^N$, $N\geq2,$ with a Lipschitz boundary denoted by $\partial\Omega$.
Denote $$C_+(\bar{\Omega})=\{h\in C(\bar{\Omega})|\inf_{x\in\bar{\Omega}}h(x)>1\}.$$ For any $h\in C_+(\bar{\Omega})$, we define $$h^+:=\max\{h(x) : x\in\bar{\Omega}\}, \quad h^-:=\min\{h(x) : x\in\bar{\Omega}\}.$$ For any $p\in C_+(\bar{\Omega})$ we define the variable exponent Lebesgue space $$L^{p(x)}(\Omega)=\{u;\ u:\Omega \rightarrow \mathbb{R} \mbox{ is measurable and } \int_\Omega| u(x)|^{p(x)}\;dx<+\infty \}$$ endowed with the {\it Luxemburg norm} $$\|u\|_{p(x)}=\inf\{\lambda>0/\rho_{p(x)}(\frac{u}{\lambda})\leq 1\},$$ where $$\rho_{p(x)}(u)=\int_\Omega|u(x)|^{p(x)}\;dx, \;\;\; \forall u\in L^{p(x)}(\Omega). $$ $(L^{p(x)}(\Omega),\|\cdot\|_{p(x)})$ is a Banach space \cite[Theorem 2.5]{KR}, separable and reflexive \cite[Corollary 2.7]{KR}. Its conjugate space is $L^{p'(x)}(\Omega)$ where $\frac{1}{p(x)}+\frac{1}{p'(x)}=1$ for all $x\in \Omega.$ For any $u\in L^{p(x)}(\Omega)$ and $v\in L^{p'(x)}(\Omega)$, the H\"older inequality holds \cite[Theorem 2.1]{KR} \begin{equation}\label{hol} \left|\int_\Omega uv\;dx\right|\leq\left(\frac{1}{p^-}+\frac{1}{{p^{'}}^-}\right)\|u\|_{p(x)}\|v\|_{p'(x)}\leq 2 \|u\|_{p(x)}\|v\|_{p'(x)}. \end{equation} Notice that if $u\in L^{p(.)}(\Omega)$ then the following relations hold true (see \cite{FZ1}) \begin{equation}\label{imp0} \|u\|_{p(x)}<1(=1;>1)\;\;\;\Leftrightarrow \;\;\;\rho_{p(x)}(u)<1(=1;>1), \end{equation} \begin{equation}\label{imp1} \|u\|_{p(x)}>1\;\;\;\Rightarrow\;\;\;\|u\|_{p(x)}^{p^-}\leq\rho_{p(x)}(u)\leq\|u\|_{p(x)}^{p^+}, \end{equation} \begin{equation}\label{imp2} \|u\|_{p(x)}<1\;\;\;\Rightarrow\;\;\;\|u\|_{p(x)}^{p^+}\leq\rho_{p(x)}(u)\leq\|u\|_{p(x)}^{p^-}. \end{equation} From (\ref{imp1}) and (\ref{imp2}), we can deduce the inequalities \begin{equation}\label{e1} \|u\|_{p(x)}\leq \rho_{p(x)}(u)+1, \end{equation} \begin{equation}\label{ineq1} \rho_{p(x)}(u)\leq\|u\|_{p(x)}^{p^-}+\|u\|_{p(x)}^{p^+}.
\end{equation} If $p_1,p_2\in C_+(\bar{\Omega})$ and $p_1(x) \leq p_2(x)$ for any $x\in\bar{\Omega},$ then the embedding $L^{p_2(x)}(\Omega)\hookrightarrow L^{p_1(x)}(\Omega)$ is continuous.\\ Next, we define the variable exponent Sobolev space $W^{1,p(x)}(\Omega)$ as $$W^{1,p(x)}(\Omega)=\{u\in L^{p(x)}(\Omega)/|\nabla u | \in L^{p(x)}(\Omega)\}.$$ It is a Banach space under the norm $$||u||=\|u\|_{p(x)}+\|\nabla u\|_{p(x)}.$$ We also define $W_0^{1,p(.)}(\Omega)$ as the subspace of $W^{1,p(.)}(\Omega)$ which is the closure of $C_0^\infty(\Omega)$ with respect to the norm $||\;.\;||$. If the exponent $p(.)$ satisfies the log-H\"older continuity condition, i.e. there is a constant $\alpha>0$ such that for every $x,y\in\Omega, x\neq y$ with $|x-y|\leq\frac{1}{2}$ one has \begin{equation}\label{e4} |p(x)-p(y)|\leq\frac{\alpha}{-\log|x-y|}\;\;, \end{equation} then the Poincar\'e inequality holds (see \cite{HHKV,SD}), i.e. there exists a constant $C>0$ depending only on $\Omega$ and the function $p$ such that \begin{equation}\label{ptcar} \|u\|_{p(x)}\leq C\|\nabla u\|_{p(x)} , \forall u\in W_0^{1,p(.)}(\Omega). \end{equation} In particular, the space $W_0^{1,p(.)}(\Omega)$ has a norm $\|\cdot\|_{1,p(x)}$ given by $$\|u\|_{1,p(x)}=\|\nabla u\|_{p(.)} \mbox{ for all } u\in W_0^{1,p(x)}(\Omega),$$ which is equivalent to $\|\cdot\|.$ In addition, we have the compact embedding\\ $W_0^{1,p(.)}(\Omega)\hookrightarrow L^{p(.)}(\Omega)$ (see \cite{KR}). The space $(W_0^{1,p(x)}(\Omega),\|\cdot\|_{1,p(x)})$ is a Banach space, separable and reflexive (see \cite{FZ1,KR}).
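For intuition, the Luxemburg norm and the norm-modular relations recalled above can be checked numerically. The sketch below is purely illustrative (the domain $\Omega=(0,1)$, the exponent $p(x)=2+x$, the constant sample function and the discretization are our own assumptions, not taken from the text):

```python
import numpy as np

# Illustrative setup: Omega = (0,1), variable exponent p(x) = 2 + x.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
p = 2.0 + x
u = 0.8 * np.ones_like(x)                       # sample function u(x) = 0.8

def modular(v, lam=1.0):
    """rho_{p(x)}(v/lam) = int_Omega |v(x)/lam|^{p(x)} dx (trapezoid rule)."""
    f = np.abs(v / lam) ** p
    return float(np.sum((f[:-1] + f[1:]) / 2.0) * dx)

def luxemburg_norm(v, lo=1e-9, hi=1e9, tol=1e-10):
    """inf{lam > 0 : rho_{p(x)}(v/lam) <= 1}; the modular decreases in lam."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if modular(v, mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

nrm = luxemburg_norm(u)
# Norm-modular relations: ||u|| < 1 iff rho(u) < 1, and in that case
# ||u||^{p^+} <= rho_{p(x)}(u) <= ||u||^{p^-}.
assert (nrm < 1.0) == (modular(u) < 1.0)
assert nrm ** p.max() - 1e-8 <= modular(u) <= nrm ** p.min() + 1e-8
```

The bisection exploits the fact that $\lambda\mapsto\rho_{p(x)}(u/\lambda)$ is decreasing, so the Luxemburg infimum is the unique crossing of the modular with $1$; here one can check by hand that $\|u\|_{p(x)}=0.8$ exactly, since $\rho_{p(x)}(u/0.8)=\int_0^1 1\,dx=1$.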
The dual space of $W_0^{1,p(x)}(\Omega),$ denoted $W^{-1,p'(x)}(\Omega),$ is equipped with the norm $$\|v\|_{-1,p'(x)}=\inf\{\|v_0\|_{p'(x)}+\sum_{i=1}^{N}\|v_i\|_{p'(x)}\},$$ where the infimum is taken over all possible decompositions $v=v_0-\mbox{div}\, F$ with $v_0\in L^{p'(x)}(\Omega)$ and $F=(v_1,...,v_N)\in (L^{p'(x)}(\Omega))^N.$ \section{Basic assumptions and technical Lemmas} In this section, we study the Dirichlet boundary value problem (\ref{Pr1}) with discontinuous nonlinearity, based on the degree theory of Section 2, where $\Omega\subset \mathbb{R}^N$, $N\geq2,$ is a bounded domain with a Lipschitz boundary $\partial\Omega$, $p\in C_+(\bar{\Omega})$ satisfies the log-H\"older continuity condition (\ref{e4}) with $2\leq p^-\leq p(x)\leq p^+<\infty$, and $g:\Omega\times\mathbb{R}\rightarrow \mathbb{R}$ is a possibly discontinuous real-valued function in the sense that $$\underline{g}(x,s)=\liminf_{\eta\rightarrow s}g(x,\eta)=\lim_{\delta\rightarrow 0^+}\inf_{|\eta-s|<\delta}g(x,\eta),$$ $$\overline{g}(x,s)=\limsup_{\eta\rightarrow s}g(x,\eta)=\lim_{\delta\rightarrow 0^+}\sup_{|\eta-s|<\delta}g(x,\eta).$$ Suppose that \begin{description} \item[($g_1$)] $\overline{g}$ and $\underline{g}$ are superpositionally measurable, that is, $\overline{g}(\cdot,u(\cdot))$ and $\underline{g}(\cdot,u(\cdot))$ are measurable on $\Omega$ for any measurable function $u:\Omega\rightarrow\mathbb{R};$ \item[($g_2$)] $g$ satisfies the growth condition $$|g(x,s)|\leq k(x)+c|s|^{q(x)-1}$$ for a.e. $x\in\Omega$ and all $s\in\mathbb{R}$, where $c$ is a positive constant, $k\in L^{p'(x)}(\Omega)$ and $q\in C_+(\bar{\Omega})$ with $q^+ < p^-.$ \end{description} \begin{lemma}\cite[Proposition 1]{C}\label{usc} For each fixed $x\in \Omega$, the functions $\overline{g}(x,s)$ and $\underline{g}(x,s)$ are u.s.c functions of $s$ on $\mathbb{R}$.
\end{lemma} \begin{lemma}\cite[Theorem 3.1]{Ch}\label{L} The operator $L:W_0^{1,p(x)}(\Omega)\rightarrow W^{-1,p'(x)}(\Omega)$ defined by $$\langle Lu,v\rangle=\int_\Omega|\nabla u|^{p(x)-2}\nabla u\nabla v dx, \mbox{ for all } u,v\in W_0^{1,p(x)}(\Omega)$$ is continuous, bounded and strictly monotone. It is also a homeomorphism and a mapping of class $(S_+)$. \end{lemma} \begin{lemma}\label{A} The operator $A:W_0^{1,p(x)}(\Omega)\rightarrow W^{-1,p'(x)}(\Omega)$ defined by $$\langle Au,v\rangle=-\int_\Omega uv\;dx \mbox{ for } u,v\in W_0^{1,p(x)}(\Omega)$$ is compact. \end{lemma} \begin{proof} Since $p(x)\geq 2$, we have $p'(x)\leq 2\leq p(x)$, so the embedding\\$i:L^{p(x)}\rightarrow L^{p'(x)}$ is continuous. Since the embedding $I:W_0^{1,p(x)}(\Omega)\rightarrow L^{p(x)}(\Omega)$ is compact, it is known that the adjoint operator $I^*:L^{p'(x)}(\Omega)\rightarrow W^{-1,p'(x)}(\Omega)$ is also compact. Therefore, $A=I^*\circ i\circ I$ is compact. \end{proof} \begin{lemma}\label{N} Under assumptions $(g_1)$ and $(g_2)$, the set-valued operator\\ $N:W_0^{1,p(x)}(\Omega)\rightarrow 2^{W^{-1,p'(x)}(\Omega)}$ defined by $$Nu=\{z\in W^{-1,p'(x)}(\Omega)| \exists w\in L^{p'(x)}(\Omega); \underline{g}(x,u(x))\leq w(x)\leq \overline{g}(x,u(x)) \mbox{ a.e. } x\in\Omega$$ $$\mbox{ and } \langle z ,v\rangle=\int_\Omega wv dx,\;\;\ \forall v\in W_0^{1,p(x)}(\Omega)\}$$ is bounded, u.s.c and compact. \end{lemma} \begin{proof} Let $\phi:L^{p(x)}(\Omega)\rightarrow2^{L^{p'(x)}(\Omega)}$ be the set-valued operator given by $$\phi u=\{w\in L^{p'(x)}(\Omega)\mid \underline{g}(x,u(x))\leq w(x)\leq \overline{g}(x,u(x)) \mbox{ a.e.
} x\in\Omega\}.$$ For each $u\in W_0^{1,p(x)}(\Omega)$, we have from the growth condition $(g_2)$ $$\max\{|\underline{g}(x,s)|,|\overline{g}(x,s)|\}\leq k(x)+c|s|^{q(x)-1},$$ and by the inequalities \eqref{e1} and \eqref{ineq1}, it follows that \begin{eqnarray*} \|\underline{g}(x,u(x))\|_{p'(x)} &\leq& \rho_{p'(x)}(\underline{g}(x,u(x)))+1 \\ &=& \int_\Omega|\underline{g}(x,u(x))|^{p'(x)}\;dx+1 \\ &\leq& 2^{p'^+}(\rho_{p'(x)}(k)+\rho_{r(x)}(u))+1 \\ &\leq& 2^{p'^+}(\rho_{p'(x)}(k)+\|u\|_{r(x)}^{r^+}+\|u\|_{r(x)}^{r^-})+1, \end{eqnarray*} where $r(x)=(q(x)-1)p'(x)<p(x)$. By the continuous embedding $L^{p(x)}\hookrightarrow L^{r(x)}$, we have $$\|\underline{g}(x,u(x))\|_{p'(x)}\leq 2^{p'^+}(\rho_{p'(x)}(k)+\|u\|_{1,p(x)}^{r^+}+\|u\|_{1,p(x)}^{r^-})+1.$$ A similar inequality holds for $\overline{g}(x,u(x))$, so that $\phi$ is bounded on $W_0^{1,p(x)}(\Omega)$.\\ Let us show that $\phi$ is u.s.c, i.e., $$\forall\varepsilon>0, \exists\delta>0; \parallel u-u_0\parallel_{p(x)}<\delta\Rightarrow\phi u\subset\phi u_0+B_\varepsilon,$$ where $B_\varepsilon$ is the $\varepsilon$-ball in $L^{p'(x)}(\Omega)$.\\ To this end, given $u_0\in L^{p(x)}(\Omega)$, we consider the point sets $$E_{m,\varepsilon}=\bigcap_{t\in \mathbb{R}}G_t$$ where $$G_t=\{x\in \Omega; |t-u_0(x)|<\frac{1}{m}\Rightarrow[\underline{g}(x,t),\overline{g}(x,t)]\subset(\underline{g}(x,u_0(x))-\frac{\varepsilon}{R},\overline{g}(x,u_0(x))+ \frac{\varepsilon}{R})\};$$ $m$ being an integer and $R$ being a constant to be determined.\\ It is obvious that $$E_{1,\varepsilon}\subset E_{2,\varepsilon}\subset...$$ By Lemma \ref{usc}, $$\bigcup_{m=1}^\infty E_{m,\varepsilon}=\Omega,$$ thus there is an integer $m_0$ such that \begin{equation}\label{1.5} m(E_{m_0,\varepsilon})>m(\Omega)-\frac{\varepsilon}{R}. \end{equation} But for all $\varepsilon>0$, there is $\eta=\eta(\varepsilon)>0$ such that $m(T)<\eta$ implies \begin{equation}\label{1.6} 2^{p'^+}\int_T \left(2|k(x)|^{p'(x)}+c'(1+2^{r^+})|u_0(x)|^{r(x)}\right)dx<\frac{\varepsilon'}{3},
\end{equation} due to $k\in L^{p'(x)}(\Omega)$ and $u_0\in L^{r(x)}(\Omega)$, where $r(x)=(q(x)-1)p'(x),$\\ $c'=\max\{c^{p'^+},c^{p'^-}\}$ and $\varepsilon'=\inf\{\varepsilon^{p'^-},\varepsilon^{p'^+}\}.$ Now let \begin{equation}\label{1.8} 0<\delta<\min\{1,\frac{1}{m_0}(\frac{\eta}{2})^{\frac{1}{p^-}},(\frac{\varepsilon'}{3c'2^{p'^++r^+}})^{\frac{1}{r^-}}\}, \end{equation} \begin{equation}\label{1.9} R>\max\{\varepsilon,\frac{2\varepsilon}{\eta},(\frac{3m(\Omega)}{\varepsilon'})^{\frac{1}{p'^-}}\varepsilon\}. \end{equation} Suppose that $\|u-u_0\|_{p(x)}<\delta$, and consider the set $E =\{x\in \Omega; |u(x)-u_0(x)|\geq\frac{1}{m_0}\}$; we have \begin{equation}\label{1.10} m(E)<(m_0\delta)^{p^-}<\frac{\eta}{2}. \end{equation} If $x\in E_{m_0,\varepsilon}\setminus E$, then, for each $w\in \phi u$, $$|u(x)-u_0(x)|<\frac{1}{m_0}$$ and $$w(x)\in (\underline{g}(x,u_0(x))-\frac{\varepsilon}{R},\overline{g}(x,u_0(x))+\frac{\varepsilon}{R}).$$ Let \begin{eqnarray*} G^+ &=& \{x\in \Omega; w(x)>\overline{g}(x,u_0(x))\}, \\ G^- &=&\{x\in \Omega; w(x)<\underline{g}(x,u_0(x))\}, \\ G^0 &=& \{x\in \Omega; w(x)\in [\underline{g}(x,u_0(x)),\overline{g}(x,u_0(x))]\}, \end{eqnarray*} and \begin{eqnarray*} y(x) &=& \left\{ \begin{array}{ll} \overline{g}(x,u_0(x)), & \hbox{for } x\in G^+ ; \\ w(x), & \hbox{for } x\in G^0; \\ \underline{g}(x,u_0(x)), & \hbox{for } x\in G^-. \end{array} \right. \end{eqnarray*} Then $y\in \phi u_0$ and \begin{equation}\label{1.11} |y(x)-w(x)|<\frac{\varepsilon}{R} \mbox{ for all } x\in E_{m_0,\varepsilon}\setminus E. \end{equation} Combining \eqref{1.9} with \eqref{1.11}, we obtain \begin{equation}\label{1.12} \int_{E_{m_0,\varepsilon}\setminus E}|y(x)-w(x)|^{p'(x)}\;dx<(\frac{\varepsilon}{R})^{p'^-}m(\Omega)<\frac{\varepsilon'}{3}.
\end{equation} Let $V$ be the complement in $\Omega$ of $E_{m_0,\varepsilon}\setminus E$; then $V =(\Omega\setminus E_{m_0,\varepsilon})\cup(E_{m_0,\varepsilon}\cap E)$ and $$m(V)\leq m(\Omega\setminus E_{m_0,\varepsilon})+m(E_{m_0,\varepsilon}\cap E)<\frac{\varepsilon}{R}+m(E)<\eta,$$ in view of \eqref{1.5}, \eqref{1.10} and \eqref{1.9}. Combining $(g_2)$ and \eqref{1.6} with \eqref{1.8}, we obtain \begin{eqnarray*} \int_V|y(x)-w(x)|^{p'(x)}\;dx &\leq& \int_V(|y(x)|^{p'(x)}+|w(x)|^{p'(x)})\;dx \\ &\leq& 2^{p'^+}\int_V(|k(x)|^{p'(x)}+c^{p'(x)}|u_0(x)|^{r(x)} \\ &+&|k(x)|^{p'(x)}+c^{p'(x)}|u(x)|^{r(x)})\;dx \\ &\leq& 2^{p'^+}\int_V(2|k(x)|^{p'(x)}+c^{p'(x)}|u_0(x)|^{r(x)} \\ &+&c^{p'(x)}2^{r(x)}(|u(x)-u_0(x)|^{r(x)}+|u_0(x)|^{r(x)}))\;dx \\ &\leq& 2^{p'^+}\int_V(2|k(x)|^{p'(x)}+c'(1+2^{r(x)})|u_0(x)|^{r(x)})\;dx \\ &+& 2^{p'^+}c'\int_\Omega 2^{r(x)}|u(x)-u_0(x)|^{r(x)}\;dx \\ &\leq& \frac{\varepsilon'}{3}+\frac{\varepsilon'}{3}. \end{eqnarray*} Thus \begin{equation}\label{1.13} \int_V|y(x)-w(x)|^{p'(x)}\;dx\leq 2\frac{\varepsilon'}{3}. \end{equation} Combining \eqref{1.12} with \eqref{1.13}, we see that $$\rho_{p'(x)}(y-w)<\varepsilon'.$$ - If $\varepsilon\geq1$, then $\varepsilon'=\varepsilon^{p'^-}$. From \eqref{imp1} and \eqref{imp2}, we have $$\|y-w\|_{p'(x)}^{p'^-}<\varepsilon^{p'^-} \mbox{ or } \|y-w\|_{p'(x)}^{p'^+}<\varepsilon^{p'^-},$$ then $$\|y-w\|_{p'(x)}<\varepsilon \mbox{ or } \|y-w\|_{p'(x)}<\varepsilon^{\frac{p'^-}{p'^+}}\leq\varepsilon.$$ - If $\varepsilon<1$, then $\varepsilon'=\varepsilon^{p'^+}$. From \eqref{imp0} and \eqref{imp2}, we have $\|y-w\|_{p'(x)}^{p'^+}<\varepsilon^{p'^+},$ then $$\|y-w\|_{p'(x)}<\varepsilon.$$ Therefore $\phi$ is u.s.c.\\ Hence $N=I^*\circ\phi\circ I$ is bounded, u.s.c and compact.
\end{proof} \section{Main result} \begin{definition} We say that $u\in W_0^{1,p(x)}(\Omega)$ is a weak solution of (\ref{Pr1}) if there exists $z\in Nu$ such that $$\int_\Omega|\nabla u|^{p(x)-2}\nabla u\nabla v\;dx+\int_\Omega uv\;dx +\langle z,v\rangle= 0, \;\;\forall v\in W_0^{1,p(x)}(\Omega).$$ \end{definition} \begin{theorem} Under assumptions $(g_1)$ and $(g_2)$, the problem (\ref{Pr1}) has a weak solution $u$ in $W_0^{1,p(x)}(\Omega).$ \end{theorem} \begin{proof} Let $L, A :W_0^{1,p(x)}(\Omega)\rightarrow W^{-1,p'(x)}(\Omega)$ and $N:W_0^{1,p(x)}(\Omega)\rightarrow 2^{W^{-1,p'(x)}(\Omega)}$ be defined as in Lemmas \ref{L}, \ref{A} and \ref{N}, respectively. Then $u\in W_0^{1,p(x)}(\Omega)$ is a weak solution of \eqref{Pr1} if and only if \begin{equation}\label{eq2} Lu\in-(A+N)u. \end{equation} Thanks to the properties of the operator $L$ seen in Lemma \ref{L} and in view of the Minty-Browder Theorem (see \cite{Z}, Theorem 26A), the inverse operator \\$T:=L^{-1}:W^{-1,p'(x)}(\Omega)\rightarrow W_0^{1,p(x)}(\Omega)$ is bounded, continuous and satisfies condition $(S_+)$. Moreover, note by Lemmas \ref{A} and \ref{N} that the operator\\ $S:=A+N:W_0^{1,p(x)}(\Omega)\rightarrow 2^{W^{-1,p'(x)}(\Omega)}$ is bounded, u.s.c and quasimonotone. Consequently, equation (\ref{eq2}) is equivalent to \begin{equation}\label{eq3} u=Tv \mbox{ and } v\in -STv. \end{equation} To solve equation (\ref{eq3}), we will apply the degree theory introduced in Section 2. To do this, we first claim that the set $$B:=\{v\in W^{-1,p'(x)}(\Omega)|v\in -tSTv \mbox{ for some } t\in[0,1]\}$$ is bounded. Indeed, let $v\in B$, that is $v+ta=0$ for some $t\in[0,1]$ where $a\in STv.$ Set $u:=Tv$; then $\|Tv\|_{1,p(x)}=\|\nabla u\|_{p(x)}.$ We write $a=Au+z\in Su,$ where $z\in Nu,$ that is $\langle z,u\rangle=\int_\Omega wu\;dx,$ for some $w\in L^{p'(x)}(\Omega)$ with $\underline{g}(x,u(x))\leq w(x)\leq\overline{g}(x,u(x))$ for a.e.
$x\in\Omega.$\\ If $\|\nabla u\|_{p(x)}\leq 1$, then $\|Tv\|_{1,p(x)}$ is bounded.\\ If $\|\nabla u\|_{p(x)}>1$, then we get, by the implication \eqref{imp1}, the growth condition $(g_2)$, the H\"older inequality \eqref{hol} and the inequality \eqref{ineq1}, the estimate \begin{eqnarray*} \|Tv\|_{1,p(x)}^{p^-} &=& \|\nabla u\|_{p(x)}^{p^-} \\ &\leq& \rho_{p(x)}(\nabla u) \\ &=& \langle Lu,u\rangle \\ &=& \langle v,Tv\rangle \\ &=& -t\langle a,Tv \rangle \\ &=& -t\int_\Omega (u+w)u\; dx \\ &\leq& const\left(\int_\Omega|u|^2\;dx+\int_\Omega|k(x)u(x)|\;dx+\rho_{q(x)}(u)\right) \\ &\leq& const(\|u\|_{L^2}^2+\|k\|_{p'(x)}\|u\|_{p(x)}+\|u\|_{q(x)}^{q^+}+\|u\|_{q(x)}^{q^-})\\ &\leq& const(\|u\|_{L^2}^2+\|u\|_{p(x)}+\|u\|_{q(x)}^{q^+}+\|u\|_{q(x)}^{q^-}). \end{eqnarray*} From the Poincar\'e inequality (\ref{ptcar}) and the continuous embeddings $L^{p(x)}(\Omega)\hookrightarrow L^2(\Omega)$ and $L^{p(x)}(\Omega)\hookrightarrow L^{q(x)}(\Omega)$, we can deduce the estimate $$\|Tv\|_{1,p(x)}^{p^-} \leq const(\|Tv\|_{1,p(x)}^2+\|Tv\|_{1,p(x)}+\|Tv\|_{1,p(x)}^{q^+}).$$ It follows that $\{Tv|v\in B\}$ is bounded.\\ Since the operator $S$ is bounded, it is obvious from \eqref{eq3} that the set $B$ is bounded in $W^{-1,p'(x)}(\Omega)$.
Consequently, there exists $R>0$ such that $$\|v\|_{-1,p'(x)}<R \mbox{ for all } v\in B.$$ This says that $$v\notin -tSTv, \mbox{ for all } v\in\partial B_R(0) \mbox{ and all } t\in[0,1].$$ From Lemma \ref{l2.1} it follows that $$I+ST\in\mathcal{F}_T(\overline{B_R(0)})\mbox{ and } I=LT\in\mathcal{F}_T(\overline{B_R(0)}).$$ Consider the homotopy $H:[0,1]\times\overline{B_R(0)}\rightarrow 2^{W^{-1,p'(x)}(\Omega)}$ given by $$H(t,v):=(1-t)Iv+t(I+ST)v \mbox{ for } (t,v)\in[0,1]\times\overline{B_R(0)}.$$ Applying the homotopy invariance and normalization properties of the degree $d$ stated in Theorem \ref{t2.1}, we get $$d(I+ST,B_R(0),0)=d(I,B_R(0),0)=1,$$ and hence there exists a point $v\in B_R(0)$ such that $$v\in -STv,$$ which says that $u=Tv$ is a solution of \eqref{eq2}. We conclude that $u=Tv$ is a weak solution of \eqref{Pr1}. This completes the proof. \end{proof} \end{document}
\begin{document} \author[T.\,Prince]{Thomas Prince} \address{Mathematical Institute\\University of Oxford\\Woodstock Road\\Oxford\\OX2 6GG\\UK} \email{[email protected]} \keywords{Mirror Symmetry, Fano manifolds, toric degenerations.} \subjclass[2000]{14J33 (Primary), 14J45, 52B20 (Secondary)} \title{Polygons of Finite Mutation Type} \maketitle \begin{abstract} We classify Fano polygons with finite mutation class. This classification exploits a correspondence between Fano polygons and cluster algebras, refining the notion of singularity content due to Akhtar and Kasprzyk. We also introduce examples of cluster algebras associated to Fano polytopes in dimensions greater than two. \end{abstract} \section{Introduction} \label{sec:introduction} The notion of combinatorial, or polytope, mutation was introduced by Akhtar--Coates--Galkin--Kasprzyk~\cite{ACGK} to describe mirror partners to Fano manifolds. Following Givental \cite{Givental:Equivariant_GW,Giv95,Giv98}, Kontsevich~\cite{K98}, and Hori--Vafa~\cite{Hori--Vafa}, the mirror partner to a Fano manifold consists of a complex manifold together with a holomorphic function, the \emph{superpotential}. If this mirror manifold contains a complex torus we can write down a collection of volume-preserving birational maps of this complex torus which preserve the regularity of the superpotential. We call these rational maps (algebraic) mutations, following \cite{ACGK} and work of Galkin--Usnich~\cite{Galkin--Usnich}. Combinatorial mutation is the operation induced on the Newton polyhedra of the restrictions of the superpotential to such tori. All the polytopes we consider are \emph{Fano}, that is, polytopes which contain the origin in their interior and whose vertices are primitive lattice vectors. In joint work \cite{KNP15} with Kasprzyk and Nill we showed that, in dimension two, the notion of polytope mutation is compatible with the construction of a quiver and cluster algebra one can associate to each Fano polygon.
The idea of associating a polygon with a quiver -- or toric diagram -- has a reasonably long history, particularly in the physics literature. In that setting the polygon describes a toric Calabi--Yau singularity and the quiver is used to describe the matter content of a gauge theory arising on a stack of D$3$-branes probing the toric Calabi--Yau singularity; see, for example, \cite{FHM+,HKPW,BP06,HV07,FHH01,LV98,AH97} for a selection of the literature on this subject. The construction of a quiver (and cluster algebra) from a polygon has also been used by Gross--Hacking--Keel \cite{GHK2} in the study of associated log Calabi--Yau varieties, and to study the derived category of the toric variety, or of the associated local toric Calabi--Yau, as pursued, for example, in \cite{BS10,H04,MR04,HP11,P12}. In each setting the basic construction is the same, and we recall the version relevant to our applications in \S\ref{sec:edge_mutations}. Our main result, \textbf{Theorem~\ref{thm:finite_type}}, is a classification of the mutation classes of polygons which contain only finitely many polygons. This parallels a finite type result of Mandel~\cite{Mandel14} for rank two cluster varieties. In particular we see that finite mutation classes of polygons fall into four types: $A_1^n$, for $n \in {\mathbb{Z}}_{\geq 0}$, $A_2$, $A_3$, and $D_4$. There is a close connection between mutation classes of Fano polygons and ${\mathbb{Q}}$-Gorenstein deformations of the corresponding toric varieties, which is described in detail in \cite{A+}. Following these ideas we predict the existence of a finite type parameter space for these deformations, together with a boundary stratification such that each zero stratum corresponds to a polygon in the given mutation class, and each $1$-stratum corresponds to one of the mutation families constructed by Ilten~\cite{Ilten12}.
While our main result applies in dimension two, we note that polytope mutation is defined in all dimensions, and the construction of a quiver and cluster algebra we provide applies to a `compatible collection' of mutations in any dimension, see Definition~\ref{def:compatible_collection}. This definition is, unfortunately, less well behaved in dimensions greater than two, but we provide an example indicating that polytope mutation can detect known examples of cluster structures appearing on linear sections of Grassmannians of planes. We expect this to extend to a wide variety of other cluster structures found in Fano manifolds and their mirror manifolds. \section{Quivers and cluster algebras} \label{sec:clusters} We devote this section to fixing the various conventions and notation, as well as recalling the basic definitions. We recall the definition of a cluster algebra, and in order to address both geometric and combinatorial applications we adapt our treatment from the work of Fomin--Zelevinsky~\cite{FZ00} and the work of Fock--Goncharov~\cite{FG09} and Gross--Hacking--Keel~\cite{GHK2}. We first fix the following data: \begin{itemize} \item $N$, a fixed rank $n$ lattice with skew-symmetric form $\{-,-\}\colon N \times N \rightarrow {\mathbb{Z}}$. \item A saturated sublattice $N_{uf} \subseteq N$, the \emph{unfrozen} sublattice. \item An index set $I$, $|I| = \rk(N)$, together with a subset $I_{uf} \subseteq I$ such that $|I_{uf}| = \rk(N_{uf})$. For later convenience we shall define $m := |I_{uf}|$. \end{itemize} \begin{rem} The requirement that the form is integral is not necessary, but is sufficiently general for our applications and simplifies the exposition considerably. \end{rem} \begin{dfn}\label{def:seed} A \emph{(labelled) seed} is a pair $\mathbf{s} = (\mathcal{E},C)$, where: \begin{itemize} \item $\mathcal{E}$ is a basis of $N$ indexed by $I$, such that $\mathcal{E}|_{I_{uf}}$ is a basis for $N_{uf}$.
\item $C$ is a transcendence basis of $\mathcal{F}$, the field of rational functions in $n$ independent variables over ${\mathbb{Q}}(x_i : i \in I \setminus I_{uf})$, referred to as a \emph{cluster}. \end{itemize} \end{dfn} \begin{rem} The basis $\mathcal{E}$ is what the authors of \cite{FG09,GHK2} refer to as \emph{seed data}. Since we have fixed the lattice $N$ and the skew-symmetric form $\{-,-\}$, the variables $x_i$ can be identified with coordinate functions on the \emph{seed torus} $T_N$. \end{rem} \begin{dfn}\label{def:seed_mutation} Given a seed $\mathbf{s} = (\mathcal{E},C)$ with $\mathcal{E} = \{e_1, \ldots , e_n\}$ and $C = \{x_1,\ldots,x_n\}$, the \emph{$j$th mutation} of $(\mathcal{E},C)$ is the seed $(\mathcal{E}',C')$, where $\mathcal{E}' = \{e_1',\ldots,e_n'\}$ and $C' = \{x'_1,\ldots,x'_n\}$ are defined by: \[ e'_k = \begin{cases} -e_j, & \text{if $k = j$} \\ e_k + \max(b_{kj},0)e_j, & \text{otherwise} \end{cases} \] \noindent where $b_{kl} := \{e_k,e_l\}$ is often referred to as the \emph{exchange matrix}, and \begin{align}\label{eq:exchange_relation} x_k' = x_k\text{ if }k \ne j,&& \text{ and }&& x_jx'_j = \!\!\prod_{\stackrel{\scriptstyle k\text{ such that}}{b_{jk} > 0}}\!\!{x^{b_{jk}}_k} + \!\!\prod_{\stackrel{\scriptstyle l\text{ such that}}{b_{jl} < 0}}\!\!{x^{b_{lj}}_l}. \end{align} \end{dfn} \begin{dfn} \label{dfn:cluster_algebra} A \emph{cluster algebra} is the subalgebra of $\mathcal{F}$ generated by the cluster variables appearing in the union of all clusters obtained by mutation from a given seed. \end{dfn} Given a skew-symmetric $n \times n$ matrix $B$, we let $\mathcal{A}(B)$ denote the cluster algebra associated to $B$; this is a subalgebra of ${\mathbb{Q}}(x_1,\ldots,x_n)$. \begin{rem} Definition~\ref{dfn:cluster_algebra} is really a special case of the definition of a cluster algebra, a class referred to as the \emph{skew-symmetric cluster algebras of geometric type}.
In the general case the form $\{-,-\}$ need only be \emph{skew-symmetrizable}. One consequence of the skew-symmetry of the form $\{-,-\}$ is the identification of each exchange matrix with a \emph{quiver} $Q$. One may assign this quiver in the obvious way, assigning a vertex to each basis element of $N$, and $b_{ij}$ arrows $v_i \rightarrow v_j$, oriented according to the sign of $b_{ij}$. Having divided the vertex set into frozen and unfrozen vertices, one can replace the basis $\mathcal{E}$ with $Q$. There is a well-known notion of quiver mutation, going back to Fomin--Zelevinsky~\cite{FZ00}, generalising the reflection functors of Bernstein--Gelfand--Ponomarev~\cite{BGP73}. Mutating a seed in a skew-symmetric cluster algebra induces a corresponding mutation of the associated quiver. \end{rem} \begin{dfn}\label{def:quiver_mutation} Given a quiver $Q$ and a vertex $v$ of $Q$, the \emph{mutation of $Q$ at $v$} is the quiver $\mut(Q,v)$ obtained from $Q$ by: \begin{enumerate} \item\label{item:quiver_mutation_add_shortcuts} adding, for each subquiver $v_1 \to v \to v_2$, an arrow from $v_1$ to $v_2$; \item\label{item:quiver_mutation_delete_2_cycles} deleting a maximal set of disjoint two-cycles; \item\label{item:quiver_mutation_reverse_arrows} reversing all arrows incident to $v$. \end{enumerate} The resulting quiver is well-defined up to isomorphism, regardless of the choice of two-cycles in~\eqref{item:quiver_mutation_delete_2_cycles}. \end{dfn} Since we shall refer to quivers frequently we make the following conventions. Given a quiver $Q$, we define \begin{itemize} \item $Q_0$ to be the set of vertices of $Q$. \item $\Arr(v_i,v_j)$ to be the set of arrows from $v_i \in Q_0$ to $v_j \in Q_0$. \item $b_{ij}$ to be the cardinality of $\Arr(v_i,v_j)$, with sign indicating orientation. \end{itemize} We shall always assume $Q$ has no vertex-loops or two-cycles.
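To make Definition~\ref{def:quiver_mutation} concrete on exchange matrices, quiver mutation can be computed via the standard Fomin--Zelevinsky matrix mutation rule. The following Python sketch is ours, not part of the paper; the names \texttt{mutate} and \texttt{markov} are hypothetical:

```python
# Sketch (not from the paper): Fomin--Zelevinsky matrix mutation, the
# exchange-matrix counterpart of quiver mutation at a vertex.
# b[i][j] = number of arrows v_i -> v_j, signed by orientation.

def mutate(b, k):
    """Return the mutation of the skew-symmetric exchange matrix b at index k."""
    n = len(b)
    bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                bp[i][j] = -b[i][j]  # reverse all arrows incident to v_k
            else:
                # add shortcuts v_i -> v_k -> v_j, then cancel two-cycles;
                # (|x|y + x|y|)/2 equals sign(x) * max(x*y, 0)
                bp[i][j] = b[i][j] + (abs(b[i][k]) * b[k][j]
                                      + b[i][k] * abs(b[k][j])) // 2
    return bp

# The quiver for P^2 discussed later: three vertices, three arrows per pair.
markov = [[0, 3, -3], [-3, 0, 3], [3, -3, 0]]
assert mutate(mutate(markov, 0), 0) == markov  # mutation is an involution
```

Mutating \texttt{markov} at any vertex produces the matrix with entries $\pm 3$, $\pm 6$, matching the growth of Markov triples discussed in \S\ref{sec:classification}.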
Given a seed $\mathbf{s}$ we shall also fix notation for the dual basis $\mathcal{E}^\star$ of $M := \hom(N,{\mathbb{Z}})$ and, for each $i \in I$, set $v_i := \{e_i,-\} \in M$. We now define the $\mathcal{A}$ and $\mathcal{X}$ cluster \emph{varieties} introduced by Fock--Goncharov \cite{FG09}. Toward this, observe that to a seed $\mathbf{s}$ we can associate a pair of tori \begin{align*} \mathcal{X}_\mathbf{s} = T_M && \mathcal{A}_\mathbf{s} = T_N. \end{align*} The dual pair of bases for the respective lattices define identifications of these tori with split tori, \begin{align*} \mathcal{X}_\mathbf{s} \longrightarrow \mathbb{G}^n_m, && \mathcal{A}_\mathbf{s} \longrightarrow \mathbb{G}^n_m. \end{align*} We also associate the following birational maps to each seed, \begin{align*} \mu^\star_kz^n = z^n(1+z^{e_k})^{-\{n,e_k\}} && \mu^\star_kz^m = z^m(1+z^{v_k})^{\langle e_k,m \rangle}. \end{align*} Pulling these birational maps back along the identifications with the split torus given by the seed, the birational map $\mu_k \colon \mathcal{A}_\mathbf{s} \dashrightarrow \mathcal{A}_{\mu_k(\mathbf{s})}$ is given by the exchange relation \eqref{eq:exchange_relation}. That is, this birational map is the coordinate-free expression of the exchange relation once we identify the standard coordinates on $T_N$ with the cluster variables $x_i \in C$ (including the frozen variables $x_{n+1},\ldots, x_m$). We obtain schemes $\mathcal{X}$ and $\mathcal{A}$ by gluing the seed tori $\mathcal{A}_\mathbf{s}$ and~$\mathcal{X}_\mathbf{s}$ along the birational maps defined by the mutations $\mu_k$. For more details and related results we refer to \cite{FG09,GHK2}. We conclude this section by recalling the classifications of cluster algebras of \emph{finite type} and \emph{finite mutation type}. \begin{dfn} A cluster algebra is said to be of \emph{finite type} if it contains finitely many clusters.
\end{dfn} Given an undirected graph $G$ we say that a quiver $Q$ is an \emph{orientation} of $G$ if it has the same set of vertices and for each edge of $G$ there is precisely one arrow between the respective vertices. Given a simply-laced Dynkin diagram $D$ we say that $Q$ is of \emph{type $D$} if it is an orientation of the underlying graph of $D$. \begin{thm}[\!\!\cite{FZ03}] \label{thm:finite_cluster_algebras} There is a canonical bijection between the Cartan matrices of finite type and cluster algebras of finite type. Under this bijection, a Cartan matrix $A$ of finite type corresponds to the cluster algebra $\mathcal{A}(B)$, where $B$ is an arbitrary skew-symmetrizable matrix with Cartan companion equal to $A$. \end{thm} Theorem~\ref{thm:finite_cluster_algebras} describes the skew-symmetric cluster algebras with finitely many clusters. We can ask instead for the weaker condition that only finitely many \emph{quivers} appear associated to seeds of the cluster algebra. This is the notion of a \emph{finite mutation type} cluster algebra, for which a classification is also known. \begin{thm}[{\cite[Theorem~$6.1$]{FST09}}] \label{thm:finite_mut_type} Given a quiver $Q$ with finite mutation class, its adjacency matrix $b_{ij}$ is the adjacency matrix of a triangulation of a bordered surface or is mutation equivalent to one of eleven exceptional types. \end{thm} \begin{figure} \caption{The blocks of a \emph{block decomposition}} \label{fig:blocks} \end{figure} The class of quivers coming from triangulations of surfaces is well-studied and we make use of a combinatorial characterisation of this class of quivers via \emph{block decompositions}. A quiver $Q$ is said to admit a \emph{block decomposition} if it may be assembled from the $6$ \emph{blocks} shown in Figure~\ref{fig:blocks} by identifying the vertices of quivers shown with unfilled circles, the \emph{outlets}.
More precisely, we choose a partial matching of the combined set of outlets such that no outlet is matched to a vertex of the same block, including itself. We form $Q$ by gluing the quivers along these vertices and cancelling any two-cycles formed by this process. See \cite[Definition~$13.1$]{FST07} for further discussion and examples of this definition. \begin{thm}[{\cite[Theorem~$13.3$]{FST07}}] \label{thm:blocks} A quiver $Q$ given by the adjacency matrix of a triangulation of a surface is mutation equivalent to a quiver which admits a \emph{block decomposition}. \end{thm} \section{Mutations of Polytopes} \label{sec:edge_mutations} In two dimensions all combinatorial mutations are `tropicalisations' of cluster mutations. While this ceases to be true in higher dimensions, there is a natural class of combinatorial mutations, the \emph{edge mutations}, which do appear in this way. In terms of the definition of combinatorial mutation given in \cite{ACGK}, edge mutations are those which have a one-dimensional \emph{factor}. In particular each edge mutation is obtained by studying the effect of the following birational maps -- \emph{algebraic mutations}~\cite{ACGK} -- on the Newton polyhedra of certain Laurent polynomials. Throughout this section $N$ denotes an $n$-dimensional lattice (not necessarily related to the definition of a cluster algebra). We recall that, working over $\mathbb{C}$, if $M$ is the lattice dual to $N$, the torus $T_M$ is defined to be $\Spec(\mathbb{C}[N])$. \begin{dfn} \label{def:alg_mutation} Given an element $w \in M$, the \emph{weight vector}, and $f \in \Ann(w)$, the \emph{factor}, define a birational map $\phi_{w,f} \colon T_M \dashrightarrow T_M$ sending \[ z^n \mapsto z^n(1+z^f)^{\langle w,n\rangle}. \] Given a Laurent polynomial $W \in \mathbb{C}[N]$ such that $\phi^\star_{w,f}(W) \in \mathbb{C}[N]$, we say that $W$ is \emph{mutable} with weight vector $w$ and factor $f$.
\end{dfn} \begin{dfn}[Cf.\@ {\cite[p$12$]{ACGK}}] \label{def:mutation} Fix a Fano polytope $P \subset N_{\mathbb{Q}}$ and its dual $P^\circ \subset M_{\mathbb{Q}}$, a weight vector $w \in M$, and a factor $f \in \Ann(w)$. Define a piecewise linear map $T_{w,f}\colon M_{\mathbb{Q}} \rightarrow M_{\mathbb{Q}}$ by setting \[ T_{w,f} \colon m \mapsto m + \max(0,\langle m,f \rangle)w. \] If $T_{w,f}(P^\circ)$ is a convex polytope then we say $P$ admits the mutation $(w,f)$ and that $P$ mutates to $(T_{w,f}(P^\circ))^\circ$. \end{dfn} \begin{rem} This definition of mutation is really a `dual characterisation' of \cite[Definition~$5$]{ACGK}, which encodes how the Newton polytope of a Laurent polynomial changes under algebraic mutation. \end{rem} \begin{rem} In \cite{ACGK} the authors show that the result of applying a mutation to a Fano polytope is another Fano polytope, so the last dualization in Definition~\ref{def:mutation} is well-defined. \end{rem} \begin{pro} \label{pro:alg_to_combinatorial} Given $w \in M$,~$f \in \Ann(w)$, and a mutable Laurent polynomial $W \in \mathbb{C}[N]$, we have the following identity: \[ \Newt\left(\phi^\star_{w,f} W\right)^\circ = T_{w,f}\left(\Newt(W)^\circ\right). \] \end{pro} \begin{proof} The notion of combinatorial mutation is compatible with the algebraic mutation of $W$ by construction. The interpretation of a combinatorial mutation as a piecewise linear map is made in the proof of Proposition $4$ in \cite{ACGK}. \end{proof} \begin{dfn} \label{def:compatible_collection} We define \emph{mutation data} to be elements $(w,f) \in M\oplus N$ such that $w$ and $f$ are primitive, and $f \in \Ann(w)$. A set of mutation data $\{(w_i,f_i) \in M \oplus N : i \in I\}$ for a finite index set $I$ is called \emph{compatible} if \[ \langle w_i,f_j \rangle = -\langle w_j,f_i\rangle.
\] \end{dfn} \begin{rem} If $\dim N = 2$, any finite collection of mutation data is compatible, since $\langle w_i,f_j\rangle = w_i\wedge w_j$ for a choice of identification $\bigwedge^2 M \cong {\mathbb{Z}}$. \end{rem} \begin{dfn} To a compatible collection of mutation data $\mathcal{E}$ we associate a quiver $Q_{\mathcal{E}}$ as follows: \begin{enumerate} \item The vertex set of $Q_{\mathcal{E}}$ is $\mathcal{E}$. \item Between two vertices $(w_i,f_i)$ and $(w_j,f_j)$ there are $\langle w_i,f_j\rangle$ arrows, with sign indicating orientation. \end{enumerate} \end{dfn} Observe that, as $\langle w_i,f_j\rangle$ is skew-symmetric, the quiver $Q_\mathcal{E}$ contains no loops or two-cycles. Note that we can use this definition to assign a \emph{cluster algebra} to a compatible collection of mutation data. We now define a rule governing how compatible collections of mutations themselves mutate. \begin{dfn} \label{def:mutate_seeds} Given a compatible collection of mutation data $\mathcal{E}$, let $L$ be the sublattice of $M \oplus N$ generated by the elements of $\mathcal{E}$, and let $\{(w_i,f_i),(w_j,f_j)\} := \langle w_i,f_j\rangle$ define a skew-symmetric form on $L$. Fixing a pair $E_k = (w_k,f_k) \in \mathcal{E}$, we \emph{mutate} $\mathcal{E}$ to a new collection $\mathcal{E}_k$ as follows: \begin{itemize} \item $E_k \mapsto -E_k$; \item $E_i \mapsto E_i - \max(\{E_i,E_k\},0)E_k$, if $i \neq k$. \end{itemize} \end{dfn} This formula is identical to the mutation of seed data given in~\cite{FG09}, a connection we now make precise. Fix a compatible collection of mutations $\mathcal{E}$ and define a skew-symmetric form $[-,-]$ on ${\mathbb{Z}}^{\mathcal{E}}$ by setting $[e_i,e_j] := \{\theta(e_i),\theta(e_j)\}$, where $\theta\colon {\mathbb{Z}}^\mathcal{E} \to M\oplus N$ sends $e_i \mapsto (w_i,f_i)$. The following lemma follows immediately by comparing the formulae for mutating seed data in a cluster algebra with Definition~\ref{def:mutate_seeds}.
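For experimentation, the compatibility condition and the piecewise linear maps $T_{w,f}$ of Definition~\ref{def:mutation} are straightforward to implement. The following Python sketch is ours, not part of the paper; lattice elements are represented as integer tuples and the helper names (\texttt{pair}, \texttt{T}, \texttt{is\_compatible}) are hypothetical:

```python
# Hypothetical sketch (not from the paper): mutation data in coordinates.

def pair(w, f):
    """Canonical pairing <w, f> between dual lattices."""
    return sum(a * b for a, b in zip(w, f))

def T(w, f, m):
    """The piecewise linear map T_{w,f}: m |-> m + max(0, <m, f>) w."""
    c = max(0, pair(m, f))
    return tuple(mi + c * wi for mi, wi in zip(m, w))

def is_compatible(data):
    """Check <w_i, f_j> = -<w_j, f_i> for every pair of mutation data."""
    return all(pair(wi, fj) == -pair(wj, fi)
               for (wi, fi) in data for (wj, fj) in data)

# A compatible pair in dimension two: each f_i lies in Ann(w_i), and the
# compatibility condition reduces to skew-symmetry of the wedge product.
E = [((1, 0), (0, 1)), ((0, 1), (-1, 0))]
assert all(pair(w, f) == 0 for w, f in E)  # f in Ann(w)
assert is_compatible(E)
```

Applying \texttt{T} to the vertices of a dual polygon and checking convexity of the result is then a direct, if naive, way to test whether a polygon admits a given mutation.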
\begin{lem} \label{lem:fix_cluster} The operations of mutation given in Definition~\ref{def:mutate_seeds}, and of mutation of the seeds defined above, are intertwined by $\theta$. \end{lem} \begin{rem} \label{rem:bad_fact} In dimensions higher than two, a compatible collection of mutation data which defines a set of combinatorial mutations of a given polytope can transform under mutation into a compatible collection of mutation data which does not define a set of combinatorial mutations of the transformed polytope. In particular the piecewise linear maps may fail to preserve convexity. This appears to be an important obstruction to generalising the two-dimensional theory of mutations to higher-dimensional polytopes. \end{rem} \begin{pro} Given seed data $\mathcal{E}$ such that $Q_{\mathcal{E}}$ is an orientation of a simply-laced Dynkin diagram, the number of polytopes obtained by successive edge mutation is bounded by the number of seeds in the cluster algebra determined by $Q_{\mathcal{E}}$. If $Q_\mathcal{E}$ is of type $A_n$ this bound is the Catalan number $C_n$. \end{pro} In fact, compatible collections of mutations appear whenever we have a cluster algebra with skew-symmetric exchange matrix. \begin{pro} \label{pro:cluster_algebra_polygon} Every compatible collection of mutations determines, and is determined by, a skew-symmetric cluster algebra without frozen variables, together with a subspace $V$ of the kernel of the skew-symmetric form $\{-,-\}$ defined by the exchange matrix. \end{pro} \begin{proof} Fix a skew-symmetric cluster algebra without frozen variables and a nominated subspace $V \subset \ker \{-,-\}$. Recall that a seed defines a basis $e_i$ of a lattice, which we denote $\widetilde{N}$. Define $M := \widetilde{N}/V$ and let $p \colon \widetilde{N} \rightarrow M$ be the canonical projection.
The map $\theta\colon \widetilde{N} \rightarrow M \oplus\hom(M,{\mathbb{Z}})$ defined by $\theta \colon n \mapsto ( p(n),\{n,-\})$ defines a compatible collection of mutation data with weight vectors in the lattice $M$. \end{proof} Note that $N$ and $M$ play roles dual to those in \cite{FG09}, and we insist throughout that $P \subset N_{\mathbb{Q}}$. This exchange of roles explains the odd definition of $M$ in the proof of Proposition~\ref{pro:cluster_algebra_polygon}. To compare the birational maps associated to the two notions of mutation, let $\mathbf{s}$ be a seed of the cluster algebra determined by a compatible collection of mutation data, and let $\mathcal{E}$ be the compatible collection corresponding to $\mathbf{s}$. Fix an element $E_k = (w_k,f_k) \in \mathcal{E}$ and consider the following diagram, \begin{equation} \label{eq:commuting_mutations} \xymatrix{ \mathcal{A}_{\mathbf{s}} \ar@{-->}^{\mu_k}[rr] \ar^{p}[d] & & \mathcal{A}_{\mu_k(\mathbf{s})} \ar^{p}[d] \\ T_{M} \ar@{-->}_{\phi_{(w_k,f_k)}}[rr] & & T_{M}. } \end{equation} \begin{pro} \label{prop:seed_mutations} The diagram shown in \eqref{eq:commuting_mutations} commutes. \end{pro} \begin{proof} This is an exercise in writing out the definitions of the respective mutations, see~\cite[Section~3]{KNP15}. \end{proof} \begin{eg} The del~Pezzo surface of degree $5$ admits a toric degeneration to a toric surface $Z$ with a pair of $A_1$ singularities. A three-dimensional linear section $X$ of the Grassmannian $\Gr(2,5)$ admits a toric degeneration to the projective cone over $Z$. The fan of this toric threefold is formed by taking the cones over the faces of the reflexive polytope with PALP id $245$.
In Figure~\ref{fig:B5_mutations} we show a pentagon of polytopes obtained by successively mutating the polytope shown in the top-right with respect to the mutation data \[ \mathcal{E} := \{(w_1,f_1),(w_2,f_2)\} \] where \begin{align*} w_1 := (-1,0,0), && f_1 := (0,1,1)^T, \\ w_2 := (0,0,-1), && f_2 := (-1,0,0)^T. \end{align*} We recall that there is an $A_2$ cluster structure on the coordinate ring of the Grassmannian, and a toric degeneration of $\Gr(2,5)$ for each cluster chart in the dual Grassmannian \cite{RW15,RW17}. We expect cluster structures in the mirror to a Fano variety to be detected by such compatible collections of mutations. Note that the polytopes we show in Figure~\ref{fig:B5_mutations} are not dual to Fano polytopes. However, recalling that $B_5$ has Fano index $2$, we can obtain a reflexive polytope by dilating each of the polytopes shown in Figure~\ref{fig:B5_mutations} by a factor of two, and translating. \begin{figure} \caption{Pentagon of edge mutations among toric degenerations of $B_5$.} \label{fig:B5_mutations} \end{figure} \end{eg} In the two-dimensional case we can canonically define a maximal set of compatible mutations, making use of the notion of singularity content~\cite{AK14}. \begin{dfn}[Cf.\@ {\cite[\S$1.2$]{KNP15}}] \label{def:polygon_seed} Given a Fano polygon $P \subset N_{\mathbb{Q}}$ with singularity content $(n,\mathcal{B})$ and $m := |\mathcal{B}|+n$, we define: \begin{itemize} \item An index set $I$ of size $m$, with a subset $I_{uf}$ of size $n$, and functions: \begin{align*} \phi_{uf} \colon I_{uf} \rightarrow \{\textrm{edges of $P$}\} && \phi_f \colon I \setminus I_{uf} \rightarrow \mathcal{B} \end{align*} Here the fibre $\phi_{uf}^{-1}(E)$ has $m_E$ elements, where $m_E$ is the singularity content of $\Cone(E)$, and $\phi_f$ is a bijection.
\item A lattice map $\rho \colon {\mathbb{Z}}^m \rightarrow M$ sending each basis element to the primitive, inward-pointing normal to the edge of $P$ defined by the cone given by the specified functions $\phi_{uf}$ and $\phi_f$. \item A form $\{e_i,e_j\} := \rho(e_i) \wedge \rho(e_j)$. Note that this is an integral skew-symmetric form. \end{itemize} \end{dfn} By \cite[Proposition~$3.17$]{KNP15} the construction of a quiver from a polygon provided by Definition~\ref{def:polygon_seed} intertwines polygon and quiver mutations. We let $\mathcal{C}_P$ denote the cluster algebra associated to a Fano polygon $P$, with initial seed $(E_P,C_P)$, where $E_P$ is the standard basis $e_i$ of ${\mathbb{Z}}^n$ and $C_P$ is the standard transcendence basis of the field of rational functions in $n$ variables over ${\mathbb{Q}}(x_i : i \in I\setminus I_{uf})$. We say a Fano polygon is of \emph{finite mutation type} if it is mutation equivalent to only finitely many Fano polygons. \begin{conjecture} The cluster algebra $\mathcal{C}_P$ of a Fano polygon $P$, together with a bijection between the set of frozen variables and $\mathcal{B}$, is a complete mutation invariant of Fano polygons. \end{conjecture} \begin{eg} Consider the Fano polygon $P$ for ${\mathbb{P}}^2$ \begin{align*} \includegraphics[scale=0.75]{./images/R16} \end{align*} Computing the determinants of the inward-pointing normals, we obtain the quiver $Q_P$ \begin{align*} \xymatrix{ & \bullet \ar^3[dr] & \\ \bullet \ar^3[ur] & & \bullet \ar^3[ll] } \end{align*} The mutations of this quiver are well-known, and the triples $(3a,3b,3c)$ of non-zero entries of the exchange matrices satisfy the Markov equation $a^2 + b^2 + c^2 = 3abc$. Indeed, as the polygon $P$ is mutated, the corresponding toric surfaces are ${\mathbb{P}}(a^2, b^2, c^2)$ for the same triples $(a,b,c)$. We see that in this case the mutations of the quivers exactly capture the mutations of the polygon.
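The Markov dynamics in this example are easy to verify directly. The following Python sketch is ours, not part of the paper; the names \texttt{mutate\_triple} and \texttt{is\_markov} are hypothetical:

```python
# Sketch (not from the paper): mutation of Markov triples, matching the
# weighted projective planes P(a^2, b^2, c^2) in the example above.

def mutate_triple(t, i):
    """Mutate the Markov triple t at position i, e.g. a -> 3bc - a."""
    a, b, c = t
    if i == 0:
        return (3 * b * c - a, b, c)
    if i == 1:
        return (a, 3 * a * c - b, c)
    return (a, b, 3 * a * b - c)

def is_markov(t):
    """Check the Markov equation a^2 + b^2 + c^2 = 3abc."""
    a, b, c = t
    return a * a + b * b + c * c == 3 * a * b * c

t = (1, 1, 1)             # the triple for P^2 itself
for i in (2, 1, 2, 0):    # a few mutations: (1,1,2), (1,5,2), (1,5,13), ...
    t = mutate_triple(t, i)
    assert is_markov(t)
```

Each mutation step is an involution, mirroring the fact that quiver mutation at a fixed vertex is an involution.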
\end{eg} \begin{eg} \label{ex:special_surface} Consider the toric surface $X_{5,5/3}$ (using the notation for these surfaces appearing in \cite{A+}) associated with the Fano polygon shown below. \begin{align*} \includegraphics[scale=0.75]{./images/7} \end{align*} The quiver associated to this surface is simply the $A_2$ quiver, \[ \xymatrix{ \bullet \ar[r] & \bullet. } \] This example is important in this section because it is an example of a \emph{finite type} polygon, and because a smoothing of this surface is given by $5$ Pfaffian equations, see \cite[\S$3.3$]{CH17}, a fact closely connected to the $A_2$ quiver we construct here. \end{eg} \section{Finite Type Classification} \label{sec:classification} We now make use of the classification of finite type and finite mutation type cluster algebras to establish the following result. \begin{thm} \label{thm:finite_type} $P$ is of finite mutation type if and only if $Q_P$ is mutation equivalent to a quiver of type $(A_1)^n$, $A_2$, $A_3$, or $D_4$. \end{thm} \begin{rem} The types referred to in Theorem~\ref{thm:finite_type} may also be referred to as types $I_n$, $II$, $III$, and $IV$ respectively, in analogy with Kodaira's monodromy matrices. The relationship between these matrices, log Calabi--Yau manifolds, and monodromy in certain integral affine manifolds is explored by Mandel in~\cite{Mandel14}. \end{rem} We first make two straightforward observations. First we note that the cluster algebra $\mathcal{C}_P$ induces a sequence of surjections: \begin{equation} \label{eq:tower} \xymatrix{ \{ \textrm{Clusters of $\mathcal{C}_P$} \} \ar[d] \\ \{ \textrm{Polygons mutation equivalent to $P$} \} \ar[d] \\ \{ \textrm{Quivers mutation equivalent to $Q_P$} \}. } \end{equation} The first vertical arrow follows from the fact that algebraic mutations determine combinatorial mutations, the second from Lemma~\ref{lem:fix_cluster}.
For example, using this tower of surjections in the case of a type $A_2$ cluster algebra, we can immediately state the following result. \begin{pro}\label{pro:pentagon} If a Fano polygon $P$ has singularity content $(2,\mathcal{B})$ and the primitive inward-pointing normal vectors of the two edges corresponding to the unfrozen variables of $\mathcal{C}_P$ form a basis of the lattice $M$, then the mutation-equivalence class of $P$ has at most five members. \end{pro} \begin{proof} The quiver associated to $P$ is precisely an orientation of the $A_2$ quiver. The cluster algebra $\mathcal{C}_P$ is well-known and its cluster exchange graph forms a pentagon. Note however that the \emph{quiver} mutation graph is trivial, as the $A_2$ quiver mutates only to itself. \end{proof} Proposition~\ref{prop:seed_mutations} implies that the mutation class of $P$ has at most five elements. Note that we do not have a non-trivial lower bound: up to $\GL(2,{\mathbb{Z}})$ equivalence there is only one polygon mutation equivalent to the polygon described in Example~\ref{ex:special_surface}. Next observe that the sequence of surjections shown in \eqref{eq:tower} immediately implies that \[ \textrm{$\mathcal{C}_P$ finite type} \Rightarrow \textrm{$P$ finite mutation type} \Rightarrow \textrm{$\mathcal{C}_P$ finite mutation type}. \] \begin{lem} \label{pro:no_kronecker} Given a Fano polygon $P$ of finite mutation type, $Q_P$ does not contain a Kronecker subquiver \[ Q_k := \{\xymatrix{ v_1 \ar^k[r] & v_2 }\}, \] where $k>1$ is the number of arrows from $v_1$ to $v_2$. \end{lem} \begin{rem} This result is expected from results on the corresponding cluster algebra. The Kronecker quiver defines a rank $2$ cluster algebra which is known not to be of finite type when $k > 1$. Given that $P$ is the Newton polygon of a superpotential which is itself a combination of cluster monomials, we expect the polygon $P$ to grow as we mutate.
\end{rem} \begin{proof}[Proof of Lemma~\ref{pro:no_kronecker}] Assume there is a $Q_k$ subquiver of $Q_P$, with vertices $v_1$,~$v_2$ corresponding to edges $E_1$,~$E_2$ of $P$. We define $\rho \colon {\mathbb{Z}}^2 \to M$ by mapping the standard basis to the primitive inward normal vectors $w_i$ to $E_i$ for $i \in \{1,2\}$. Let $P' \subset {\mathbb{Q}}^2$ be the image of $P$ under $\rho^\star$. The resulting polygon in ${\mathbb{Q}}^2$ is shown schematically in Figure~\ref{fig:std_form}. \begin{figure} \caption{Schematic diagram of a polygon in standard form} \label{fig:std_form} \end{figure} The \emph{local index} of each cone in $P$ is the integral height of the corresponding edge from the origin. Let $h_i$ denote the local index of $E_i$ for $i \in \{1,2\}$. Note that, as $h_i = \langle e_i , \rho^\star v\rangle$ for any $v \in E_i$, $h_i$ is also the local index of $\rho^\star(E_i)$ in $P'$. Mutating at $v_1$ and $v_2$, we denote the new local indices \[ \xymatrix{ (h_1,h_2') & (h_1,h_2) \ar[l] \ar[r] & (h_1',h_2). } \] We first show that $\rho^\star$ increases the lattice length of $E_i$ by a factor of $k := |w_1\wedge w_2|$ for each $i \in \{1,2\}$. Indeed, letting $\ell(E)$ denote the lattice length of an edge $E$, $v^i_1$ and $v^i_2$ denote the vertices of $E_i$, and $d_i := (v^i_1-v^i_2)/\ell(E_i)$, we have that: \begin{align*} \ell(\rho^\star(E_i)) &= \langle \ell(\rho^\star(E_i))e^\star_i,e_i \rangle \\ &= \langle \rho^\star(v^i_1) - \rho^\star(v^i_2),e_i \rangle\\ &= \langle v^i_1 -v^i_2,\rho(e_i) \rangle \\ &= \langle \ell(E_i)d_i,w_i \rangle \\ &= \pm\ell(E_i)(w_1\wedge w_2), \end{align*} where signs and orientations are chosen such that $\ell(E)$ is always positive. Studying Figure~\ref{fig:std_form}, we observe that: \begin{align*} h_1' \geq kh_2 - h_1 && h_2' \geq kh_1-h_2. \end{align*} Consider the case $k \geq 3$, and assume without loss of generality that $h_2 \geq h_1$. We have that $h_1' \geq 3h_2 - h_1 \geq 2h_2 \geq 2h_1$.
Thus in this case the values in the pair $(h_1,h_2)$ grow (at least) exponentially with mutation, and in particular take infinitely many values. Next consider the case $k=2$. The inequalities above become \begin{align*} h_1' \geq 2h_2 - h_1 && h_2' \geq 2h_1-h_2, \end{align*} \noindent and we are again free to assume that $h_2 \geq h_1$. Thus $h_1' \geq 2h_2-h_1 \geq h_1$, and if $h_2 > h_1$, then $h_1' \geq 2h_2-h_1 > h_2$. Thus, assuming $h_1 \neq h_2$, one can generate an infinite increasing sequence of local indices. The only remaining case is $h := h_1 = h_2 = h'_1 = h'_2$. To eliminate this possibility observe that, since $k=2$, the edges $\rho^\star(E_1)$,~$\rho^\star(E_2)$ must meet in a vertex with coordinates $(-h,-h)$ (indeed, if this does not hold, a mutation returns us to the previous case and one of the above inequalities is strict). Note that the sublattice $\rho^\star(N)$ is determined by the fact that $\rho^\star$ doubles the edge lengths of $E_1$ and $E_2$. The lattice vectors $(a,a)$ lie in this sublattice for all $a \in {\mathbb{Z}}$. Thus, by primitivity of the vertices of $P$, $h=1$. Since the origin is in the interior of $P$, mutating in one of $v_1$ or $v_2$ returns us to the previous case. \end{proof} \begin{rem} Lemma~\ref{pro:no_kronecker} implies that the quivers we consider from now on have at most one arrow between any pair of vertices. Hence we refer to vertices as \emph{adjacent} if they are adjacent in the underlying graph. \end{rem} As well as the non-existence of Kronecker subquivers in $Q_P$ for finite mutation type polygons $P$, we make heavy use of a connectedness result for the quivers $Q_P$, which follows immediately from the definition of $Q_P$ via determinants in the plane, or equivalently from the fact that the exchange matrix has rank $2$. \begin{lem} \label{lem:transitivity} Given a Fano polygon $P$ and vertices $v_1$,~$v_2$,~$v_3$ of $Q_P$ such that $v_i$ and $v_{i+1}$ are not adjacent for $i=1$,~$2$, then $v_1$ and $v_3$ are not adjacent.
\end{lem} \begin{proof}[Proof of Theorem~\ref{thm:finite_type}] By Lemma~\ref{lem:transitivity}, if $Q_P$ is not connected, then $Q_P \cong A_1^n$ for some $n$. Similarly, if $Q_P$ is of type $A$ or $D$, then it must be one of $A_2$,~$A_3$ or $D_4$. Thus we only need to show that there is no Fano polygon $P$ of finite mutation type such that $\mathcal{C}_P$ is not of finite type. However $\mathcal{C}_P$ is of finite mutation type, and we use the classification described in Theorems~\ref{thm:finite_mut_type} and~\ref{thm:blocks}, following \cite{FST07,FST09}. In fact, using Lemma~\ref{lem:transitivity}, none of the eleven exceptional types can occur as $Q_P$ for a Fano polygon $P$. Hence we can restrict to quivers which admit a \emph{block decomposition} and work case-by-case. We claim that every quiver $Q_P$ associated to a Fano polygon $P$ which admits a block decomposition is either mutation equivalent to an orientation of a simply-laced Dynkin diagram or to a quiver which contains a subquiver $Q_k$ for $k>1$. We assume for contradiction that $Q_P$ is the quiver associated to a Fano polygon $P$ of finite mutation type which is not mutation equivalent to an orientation of a simply-laced Dynkin diagram. \proofsection{Block V} \begin{figure} \caption{Mutations of block V} \label{fig:block_V_mutations} \end{figure} First observe that, since only one vertex of the block V is an outlet, the V block quiver is a subquiver of any quiver which contains the V block in its decomposition. However this quiver mutates to a quiver with a $Q_2$ subquiver, as shown in Figure~\ref{fig:block_V_mutations}. Therefore block V never appears in a decomposition of a quiver $Q_P$. For later use we shall fix the following intermediate quiver, $\text{V}'$, shown in Figure~\ref{fig:Intermediate_block}. \begin{figure} \caption{Quiver $\text{V}'$} \label{fig:Intermediate_block} \end{figure} \proofsection{Blocks IIIa and IIIb} Assume there is a type III block (a or b) connected to a quiver $Q'$ at a vertex $v$.
If there is a vertex $v'$ of $Q'$ such that $v$ and $v'$ are not adjacent, the quiver violates Lemma~\ref{lem:transitivity}. In particular the vertex set of $Q'$ must be the vertex set of a single block. Hence, using the previous part, $Q'$ has at most four vertices. Case-by-case study shows that only the $A_3$ and $D_4$ types appear. \proofsection{Block IV} Consider the case of a decomposition only using type IV blocks. Note that the type IV block is itself of type $D_4$. Consider attaching two type IV blocks. If the blocks are attached at a single outlet, the resulting quiver contradicts Lemma~\ref{lem:transitivity}, and it is easy to see that adding further type IV blocks cannot repair this. If both pairs of outlets are matched, there are two possible quivers depending on the relative orientations of the arrow between the outlets: one orientation produces a $Q_2$ subquiver automatically, the other produces a quiver containing the quiver $\text{V}'$ as a subquiver. Thus, for a type IV block to appear in a decomposition of $Q_P$, the decomposition must also include a type I or II block. Now consider decompositions using type I and II blocks as well as type IV blocks. First note there must be exactly one type IV block (assuming there is at least one). Indeed, if two type IV blocks are not connected along both pairs of outlets, a non-outlet vertex of one IV block is not adjacent to some outlet, and to some non-outlet vertex of the other IV block. However outlets and non-outlets of a type IV block are always adjacent, violating Lemma~\ref{lem:transitivity}. \begin{figure}\label{fig:IV_blocks} \end{figure} Thus we must attach type I and II blocks to a single type IV block. By Lemma~\ref{lem:transitivity} the vertex set of the final quiver must be equal to the vertex set obtained by attaching a single block to each outlet of the IV block.
Considering these cases in turn, we note first that attaching a type I block to cancel the arrow between the two outlets produces a quiver mutation equivalent to $D_4$, which is therefore eliminated. For chains of type I blocks of length two, if a $3$-cycle is produced, a mutation at the vertex between the type I blocks produces the $\text{V}'$ quiver. If not, the same mutation produces a $Q_2$ subquiver. Attaching a type II block along two outlets of the type IV block recovers the $\text{V}'$ or $Q_2$ subquiver cases we have already seen. Attaching type II blocks to a single outlet each, we observe that every new vertex must be adjacent to both outlets of the IV block. Hence the only case without a $Q_2$ subquiver is shown on the right of Figure~\ref{fig:IV_blocks}; however, this quiver mutates to one with a $Q_2$ subquiver. Attaching further type II blocks, any quiver we obtain must contradict Lemma~\ref{lem:transitivity}. \proofsection{Blocks I and II} From what we have shown above, the block decomposition of $Q_P$ consists only of type I and type II blocks. Any connected quiver with a block decomposition into type I blocks is a path (with possibly changing orientations), which possibly closes up into a cycle. The only cases not violating Lemma~\ref{lem:transitivity} are mutation equivalent to orientations of simply-laced Dynkin diagrams. \begin{figure} \caption{Octahedron of type II blocks.} \label{fig:Octahedral_quiver} \end{figure} For decompositions of $Q_P$ with type I and II blocks we divide the proof into cases indexed by the number of type II blocks. For a single type II block, we can attach a type I block to two outlets and in this way reduce to the type III case. Attaching each type I block to a type II block in at most one outlet, we use the fact that every new vertex must be adjacent to at least two of the vertices of the type II block.
Thus we can obtain only two undirected graphs -- the underlying graph of a type IV block or the tetrahedron -- and these cases can easily be eliminated. For example, there is no orientation of the tetrahedron making every cycle oriented; hence after a single mutation we obtain a quiver violating Lemma~\ref{pro:no_kronecker}. Consider the case of a pair of type II blocks. If these have disjoint vertex sets, each outlet of a type II block cannot be adjacent to \emph{two} of the outlets of the other type II block. Thus we must cancel the arrow between these two outlets with a type I block. However this creates a pair of $1$-valent non-outlet vertices, which can be eliminated similarly to the type III case. At the other extreme, if we attach along all three outlets, we produce two easy cases. Attaching along a pair of outlets, we generate either a $Q_2$ subquiver or a $4$-cycle. Considering the $4$-cycle with two outlets $v_1$ and $v_2$ (on non-adjacent corners), to meet the conditions of Lemma~\ref{lem:transitivity} any vertex adjacent to one of $v_1$ or $v_2$ must be adjacent to the other. Moreover, if the resulting quiver contains an arrow between $v_1$ and $v_2$, a mutation at one of the non-outlet vertices gives a $Q_2$ subquiver. Given a vertex $v$ adjacent to $v_1$ and $v_2$, if this defines a path between them, mutating at this vertex and at a non-outlet in the $4$-cycle produces a $Q_2$ subquiver. If $v$ does not lie on a path between $v_1$ and $v_2$, then mutating at both outlets produces a $Q_2$ subquiver. Attaching the type II blocks at a single outlet, the four arrows incident to this vertex are now fixed, so any new vertex must be adjacent to each of the remaining four outlets by Lemma~\ref{lem:transitivity}. However this cannot be achieved with type I blocks. Attaching more than two type II blocks together, we can eliminate the case where two are connected to form a $4$-cycle as above.
Since we can easily eliminate the case that two type II blocks meet in three outlets, we assume that each type II block meets every other in at most one outlet. Some pair of type II blocks must be attached at an outlet (otherwise we can argue as in the case of type II blocks separated by type I blocks). Thus, since every new vertex must be adjacent to all four outlets formed by attaching two type II blocks, all possible quivers can be represented as an octahedron with some orientation, see Figure~\ref{fig:Octahedral_quiver}. Considering an orientation of the octahedron, if any triangular face does not form a cycle we can mutate to form a $Q_2$ subquiver. Assuming every triangle is a cycle, and possibly mutating, the vertices adjacent to the `top' of the octahedron form a type V block subquiver. Following the same reasoning as for the type V block case (although note that the type V block is not part of a block decomposition here), these cases can be eliminated. \end{proof} \end{document}
\begin{document} \title[On contact surgery and knot Floer invariants]{On contact surgery and knot Floer invariants} \author{Irena Matkovi\v{c}} \address{Department of Mathematics, Uppsala University, Sweden} \email{[email protected]} \begin{abstract} We establish some general relations between Heegaard Floer based contact invariants. In particular, we observe that if the contact invariant of large negative, respectively positive, contact surgeries along a Legendrian knot does not vanish, then the Legendrian invariant, respectively the Legendrian inverse limit invariant, of that knot is non-zero. We use sutured Floer homology, and the limit constructions due to Golla, and Etnyre, Vela-Vick and Zarev. \end{abstract} \blfootnote{2020 {\em Mathematics Subject Classification.} 57K33, 57K18.} \keywords{Legendrian knots, contact surgery, knot Floer homology} \maketitle \section{Introduction} Heegaard Floer based contact invariants are the most used and most powerful (though not complete) detectors of tightness of contact manifolds. For Legendrian knots, they were independently developed from the points of view of sutured Floer homology, knot Floer homology and grid homology, and how these invariants compare to each other attracted a lot of interest \cite{SV,BVV,G.i,EVVZ}, as did their relationship to the invariants of contact surgeries along the knots \cite{Sah,LS.t,G.s,MT}. In contrast to the extensively studied behavior of surgeries on knots in the standard $3$-sphere, our interest is first in surgeries along non-loose knots in overtwisted manifolds (that is, knots in overtwisted manifolds whose complement is tight). Recall that the contact $r$-surgery (whenever $r\neq\frac{1}{n}$) is not uniquely defined, as it depends on choices of stabilizations of the Legendrian knot and its Legendrian push-offs (when described by surgery diagrams \cite{DG}).
In the following, a special role will be played by the contact $r$-surgery with all stabilizations negative, and we will denote the resulting contact structure by $\xi_r^-$. Based on Legendrian surgeries, we can say the following about the Legendrian invariant of the knot and its torsion order. \begin{thm}\label{thm:minus} Let $L$ be a Legendrian knot in a contact manifold $(Y,\xi)$. If for every rational number $r\leq -1$ the contact $r$-surgery $\xi_r^-$ along $L$ has non-vanishing contact invariant, $c(Y_r(L),\xi_r^-)\neq 0$, then the Legendrian invariant of $L$ in $\HFK^-(-Y,L)$ is non-zero, $\Leg (L) \neq 0$. \end{thm} \begin{remark} As we recall in Lemma \ref{lem:minus}, it suffices to check the non-vanishing condition for very negative integral surgeries. \end{remark} \begin{example} Theorem \ref{thm:minus} allows us to prove the non-vanishing of the Legendrian invariants of non-loose $T_{(2,-2n+1)}$, conjectured by Lisca, Ozsv\'ath, Stipsicz and Szab\'o in \cite[Remark 6.11]{LOSSz}, as is carried out in greater generality in \cite[Theorem 4.7]{M.n}. \end{example} Theorem \ref{thm:minus} is in fact interesting only when $c(Y,\xi)=0$, and then for integral surgeries we additionally observe the following. \begin{prop}\label{prop:order} Let $L$ be a Legendrian knot in a contact manifold $(Y,\xi)$ with $c(Y,\xi)=0$. If for all $n\geq m$ the contact $(-n)$-surgery along $L$ with $m$ positive stabilizations has non-zero contact invariant, then also the contact $(-n)$-surgery $\xi_{-n}^-$ has non-zero invariant; moreover, $m$ is less than the torsion order of $\Leg(L)$, that is, $U^m\cdot\Leg(L)\neq 0$. \end{prop} \begin{example} However, notice that taking an arbitrary contact surgery in Theorem \ref{thm:minus} would not work. There are Legendrian knots for which contact $r$-surgery with all stabilizations positive results in a contact manifold with non-zero invariant for all $r\leq -1$, but the Legendrian invariant of the knot vanishes.
Such examples are the non-loose Legendrian negative torus knots $T_{(p,-q)}$ with $tb\leq-pq$ whose transverse approximation is loose (see \cite{M.n} for details). \end{example} \begin{example} The converse of Theorem \ref{thm:minus} is generally not true, not even when $\Leghat(L)\neq 0$. Take for example a non-loose Legendrian right-handed trefoil $T_{(2,3)}$ in the overtwisted $(S^3,\xi)$ with Hopf invariant $d_3(\xi)=-1$ (see \cite[Figure 51]{EVVZ}), or more generally a non-loose Legendrian $T_{(2,2n+1)}$ in the overtwisted $(S^3,\xi)$ with Hopf invariant $d_3(\xi)=-2n+1$, that is, the knot $L(n)$ in \cite[Figure 9]{LOSSz}. As observed in \cite[Remark 6.5]{LOSSz}, some negative surgery on $L(n)$ produces a necessarily overtwisted contact structure on $S^3_{2n-1}(T_{(2,2n+1)})$, even though $\Leghat(L(n))\neq0$ according to \cite[Proposition 6.2]{LOSSz}. \end{example} In the case of positive surgeries, on the other hand, we provide an alternative view on, and a generalization to all manifolds of, some known results in the $3$-sphere, making use of the Legendrian inverse limit invariant. For the latter, in turn, we observe that it is not an independent invariant of Legendrian knots. \begin{thm}\label{thm:plus} Let $L$ be a Legendrian knot in a contact manifold $(Y,\xi)$. If there exists $R\geq 1$ such that for every $r>R$ the contact $r$-surgery $\xi_r^-$ along $L$ has non-vanishing contact invariant, $c(Y_r(L),\xi_r^-)\neq 0$, then the Legendrian inverse limit invariant of $L$ is non-zero, $\EHinverse (L)\neq 0$. \end{thm} \begin{remark}\label{rmk:1+} As we write out in Lemma \ref{lem:plus}, it suffices to find a positive integral surgery with non-vanishing invariant. In particular, it would suffice to choose the negative stabilizations only for the initial integral surgery. Note however that contact surgery with all stabilizations negative corresponds to inadmissible transverse surgery \cite{C}.
\end{remark} \begin{prop}\label{prop:EHinverse} For a Legendrian knot $L$ in a contact manifold $(Y,\xi)$, the non-vanishing of the inverse limit invariant $\EHinverse (L)\neq 0$ is equivalent to the non-vanishing of both the Legendrian hat invariant $\Leghat(L)\neq0$ and the ambient contact invariant $c(\xi)\neq0$. \end{prop} \begin{cor}\label{cor:tau_xi} Let $L$ be a Legendrian knot in a contact manifold $(Y,\xi)$. If any positive (integral) contact surgery along $L$ results in a contact manifold with non-vanishing contact invariant, then $(Y,\xi)$ is tight with $c(\xi)\neq 0$. When $L$ is null-homologous, both the invariants $\Leghat(L)$ and $\Leg(L)$ are non-zero, and $\Leg(L)$ generates one of the $\mathbb F[U]$-towers of $\HFK^-(-Y,L)$; the Legendrian knot $L$ satisfies the Bennequin-type equality \[\tb(L)- \rot(L, [S]) = 2\tau_\xi(Y,L, [S])-1\] with respect to any Seifert surface $S$. \end{cor} \proof[Notation] Recall that the rank of $\HFK^-(-Y,\mathbf t_\xi, L)$ as an $\mathbb F[U]$-module is equal to the dimension of $\HFhat(-Y,\mathbf t_\xi)$, and that Hedden \cite[Definition 23]{H} defines $\tau_\xi$ as the top grading of the tower corresponding to $c(\xi)$. For example, in the case of the $3$-sphere, where looking at the knot in $-S^3$ corresponds to looking at the mirror knot in $S^3$, the invariant $\tau_\xi(S^3,L) = \tau(L) = -\tau(m(L))$. \proof Given Proposition \ref{prop:EHinverse}, both statements follow from Theorem \ref{thm:plus}. The equality is obtained by the computation of the Alexander grading of $\Leghat(L)$ in terms of the classical invariants, as given by Ozsv\'ath and Stipsicz in \cite[Theorem 1.6]{OS}. \endproof \begin{remark} Alternatively, the first statement of Corollary \ref{cor:tau_xi} is obvious from the naturality of contact invariants with respect to positive surgeries, see \cite[Theorem 1.1]{MT} of Mark and Tosun.
Meanwhile the second statement can be proven using surgery formulae, in the same way as \cite[Theorem 1.2]{MT} of Mark and Tosun for knots in $(S^3,\xi_\text{std})$. \end{remark} Recall that in \cite{LS.t} Lisca and Stipsicz defined an invariant $\tilde{c}$ of transverse and Legendrian knots using positive contact surgeries. It is defined as the class of the vector $(c(\xi_n^-(L)))_{n\in\mathbb N}$ in the inverse system \[\left(\{\HFhat(-Y_n(L))\}_{n\in\mathbb N}, \{\Phi_{n,m}=F_{\overline{W}_{\!n}}\circ\cdots\circ F_{\overline{W}_{\!m+1}}\}_{m<n}\right)\] of Heegaard Floer groups $\HFhat(-Y_n(L))$ of surgeries and the cobordism maps $F_{\overline{W}_{\!n}}: \HFhat(-Y_n(L)) \rightarrow \HFhat(-Y_{n-1}(L))$, defined through surgery exact triangles. Theorem \ref{thm:plus} tells us that non-vanishing of $\tilde{c}(L)$ implies non-vanishing of $\EHinverse (L)$. This answers \cite[Question 2]{EVVZ} of Etnyre, Vela-Vick and Zarev about the relationship between their inverse limit invariant $\EHinverse$ and the transverse invariant $\tilde{c}$ of Lisca and Stipsicz. \begin{example} Considering the converse of Theorem \ref{thm:plus}, note that a non-zero $\Leghat(L)$ and a non-zero contact invariant of the ambient contact manifold $(Y,\xi)$ alone do not suffice for the non-vanishing of the contact invariant of large positive surgeries, as can be read from the conditions in \cite[Theorem 1.1]{G.s} of Golla for knots in $(S^3,\xi_\text{std})$ (see also \cite[Theorem 1.2]{MT} of Mark and Tosun). In particular, in the standard $3$-sphere $\EHinverse(L)\neq0$ is not equivalent to $\tilde{c}(L)\neq0$; for the latter, we additionally need that the underlying smooth knot satisfies the equality $\tau(L)=\nu(L)$. \end{example} \subsubsection*{Overview} The organization of the paper is straightforward.
In Section \ref{Sec2} we briefly review relevant points about contact surgery and the Legendrian knot invariants defined in Heegaard Floer theory; throughout, we assume basic knowledge of contact topology, in particular of convex surface theory, and the standard background in knot Floer and sutured Floer homology. In Section \ref{Sec3} we give the proofs of Theorem \ref{thm:minus}, Theorem \ref{thm:plus} and Proposition \ref{prop:EHinverse}. \section{Preliminaries}\label{Sec2} \subsection{Contact surgery} Recall that contact $r$-surgery (for any non-zero $r$) is performed along a Legendrian knot with the surgery coefficient $r$ measured relative to the contact framing; in addition to the ordinary surgery, it prescribes for the contact structure to be preserved in the complement of the knot, while the extension to the glued-up torus needs to be tight. The possible contact structures on the solid torus filling are determined by a pair of slopes: the boundary slope (that is, the slope of the dividing set on the boundary) equal to the initial contact framing $0$, and the meridional slope given by the surgery coefficient $r$. They are listed in Honda's classification \cite{Ho.I} in terms of the shortest counter-clockwise path from $0$ to $r$ in the Farey graph (with the slopes $0$, $\infty$, $-1$ placed at the points $1$, $-1$, $\mathrm{i}$ of $S^1$): the successive fractions along this path correspond to the successive slopes of basic slices glued to the knot complement. Explicitly, for a sequence $r_0=0, r_1,\dots, r_k=r$ we layer the glued-up torus from outside in into $k-1$ basic slices with boundary slopes $\{r_{i-1}, r_i\}$ for $i=1,\dots,k-1$, and fill it in with a solid torus of the boundary slope $r_{k-1}$ and meridional slope $r_k$; for each basic slice we have a choice of sign, while the final solid torus admits a unique tight structure. In the particular case $r=n\in\mathbb Z$, the sequence is comprised of $0,-1,\dots, n+1,n$ for $n<0$, and of $0,\infty,n$ for $n>0$.
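For instance, for $r=-\frac{3}{2}$ the shortest path from $0$ to $-\frac{3}{2}$ in the Farey graph is
\[ 0,\ -1,\ -\tfrac{3}{2}, \]
since consecutive fractions $\frac{a}{b}$, $\frac{c}{d}$ along it satisfy $|ad-bc|=1$. The glued-up torus thus consists of a single basic slice with boundary slopes $\{0,-1\}$, for which we choose a sign, followed by the solid torus with boundary slope $-1$ and meridional slope $-\frac{3}{2}$, which admits a unique tight structure.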
On the other hand, Ding and Geiges \cite{DG} gave an algorithm to convert contact $r$-surgery into a sequence of contact $(\pm1)$-surgeries, which encodes the convex decomposition of the glued-up torus in the form of a surgery diagram. Let us recall it: if the surgery coefficient is $r=\frac{p}{q}$, $m\in\mathbb N$ is minimal such that $\frac{p}{q-mp}<0$, and $\frac{p}{q-mp}=[a_0,\dots,a_k]$, then the slicing can be described on Legendrian push-offs of the surgered knot $L$ as follows. First we perform contact $(+1)$-surgery along $m$ push-offs of $L$; then for each successive $i^\text{th}$ continued fraction block we do $(-1)$-surgery along $L_i$, where $L_i$ is obtained from $L_{i-1}$ by a Legendrian push-off and additional $a_i-1$ stabilizations, and $L_0=L$ stabilized $a_0-1$ times. All possible contact structures on the glued-up torus are then covered by all possible choices of positive or negative stabilizations. In the case $r=n\in\mathbb Z$, this amounts to a single $(-1)$-surgery along the $(|n|-1)$-times stabilized Legendrian knot for $n<0$, and to a single $(+1)$-surgery along the knot followed by $(-1)$-surgery along $n-1$ of its once stabilized push-offs for $n>0$. \subsection{Legendrian invariants in Heegaard Floer theory} The initial invariant of Legendrian knots was defined by Honda, Kazez and Mati\'c \cite{HKM.eh} in sutured Floer homology \cite{J}, in analogy to the contact invariant of closed manifolds due to Ozsv\'ath and Szab\'o \cite{OSz.c}. Subsequently, Stipsicz and V\'ertesi \cite{SV}, Golla \cite{G.i}, and Etnyre, Vela-Vick and Zarev \cite{EVVZ} found interpretations of various knot Floer invariants in terms of sutured Floer homology, which we aim to recall in this subsection. \subsubsection*{Convention} In accordance with the notation in the previous subsection on contact surgery, we parametrize the boundary of the Legendrian knot complement by the meridian of the knot taking the slope $\infty$, and the contact framing taking the slope $0$.
Thus, in the case of a null-homologous knot, our slope $0$ corresponds to the slope $\tb$ with respect to the Seifert framing. \subsubsection*{$\EH$-invariants and gluing maps} Honda, Kazez and Mati\'c \cite{HKM.eh} define $\EH(Y,\xi)$ of a contact manifold with convex boundary as a class in $\SFH(-Y,-\Gamma_\xi)$, where $\Gamma_\xi$ consists of the dividing curves of $\xi$ on $\partial Y$. The contact invariant $c(Y,\xi)$ is then identified with $\EH((Y,\xi)\backslash(B^3,\xi_\text{std}))$ in $\SFH(Y(1))$, and in the case of a Legendrian knot $L$, the invariant $\EH(L)$ is set to be $\EH(Y(L),\xi_L)$ in $\SFH(-(Y\backslash\nu L), -\Gamma_\text{0})$, where $\Gamma_\text{0}$ is a pair of oppositely oriented closed curves of slope $0$ on the boundary torus. The crucial property of these sutured contact invariants is their behaviour under gluing \cite{HKM.glue}: for sutured manifolds $(Y,\Gamma)\subset(Y',\Gamma')$, a contact structure $\zeta$ on $Y'\backslash Y$, compatible with $\Gamma\cup\Gamma'$, induces a map \[\Phi_\zeta: \SFH(-Y,-\Gamma) \rightarrow \SFH(-Y',-\Gamma')\] which in the case of contact manifolds with convex boundary connects the $\EH$-classes, that is, \[\Phi_\zeta(\EH(Y,\xi_Y))=\EH(Y',\xi_Y\cup\zeta).\] Studying Legendrian knots, the sutured manifolds we are specifically interested in are the knot complements with various boundary slopes $(Y(L),\Gamma_s)$ and their (punctured) Dehn fillings $(Y_r(L))(1)$.
The key gluing maps are: \begin{itemize}[leftmargin=.5cm] \item $\phi_{s_0,s_1}: \SFH(-Y(L),-\Gamma_{s_0})\rightarrow \SFH(-Y(L),-\Gamma_{s_1})$, associated to the addition of a tight toric annulus $(T^2\times I,\zeta_{s_0,s_1})$; in particular, the maps $\sigma^\pm_{s_0,s_1}$ associated to the basic slices, and \item $\psi_{r(s)}: \SFH(-Y(L),-\Gamma_s)\rightarrow \HFhat(-Y_r(L))$, associated to the surgery $2$-handle, glued with slope $r$ to the torus of the boundary slope $s$; note that in the case $s=0$ this corresponds to gluing up a (punctured) tight solid torus $(V_r(1),\zeta_r)$, and when $s$ and $r$ are connected in the Farey graph, in which case we denote the map simply by $\psi_r$, the contact filling is unique. \end{itemize} \subsubsection*{Invariant $\Leghat$} Stipsicz and V\'ertesi \cite{SV} interpret the Legendrian invariant $\Leghat(L)\in\HFKhat(-Y,L)$, originally defined by Lisca, Ozsv\'ath, Stipsicz and Szab\'o \cite{LOSSz}, as the $\EH$-invariant of the complement of a Legendrian knot $(Y(L),\xi_L,\Gamma_\text{0})$ completed by the negative basic slice with boundary slopes $0$ and $\infty$. Therefore, if we denote the completion $\xi_L\cup\zeta^-_{0,\infty}$ by $\overline{\xi_\infty}$, we have \[\Leghat(L)=\EH(Y(L),\overline{\xi_\infty}) \in\SFH(-Y(L),-\Gamma_\infty).\] \subsubsection*{Invariant $\Leg$} Golla \cite{G.i}, explicitly for knots in the $3$-sphere, and later Etnyre, Vela-Vick and Zarev \cite{EVVZ}, give the corresponding reinterpretation of the Legendrian invariant $\Leg(L)\in\HFK^-(-Y,L)$ \cite{LOSSz}. They both observe that the groups $\SFH(-Y(L),-\Gamma_{-i})$ together with the negative bypass attachments \[\phi^-_{-i,-j}=\sigma^-_{-i,-i-1}\circ\cdots\circ \sigma^-_{-j+1,-j}: \SFH(-Y(L),-\Gamma_{-i}) \rightarrow \SFH(-Y(L),-\Gamma_{-j})\] form a direct system $(\{\SFH(-Y(L),-\Gamma_{-i})\}_{i\in\mathbb N}, \{\phi^-_{-i,-j}\}_{j>i})$ and that its direct limit $\underrightarrow{\SFH}(-Y,L)$ is isomorphic to $\HFK^-(-Y,L)$.
Furthermore, they observe that the $\EH$-invariants of the negative stabilizations of $L$ respect this direct system, and that the class $\EHdirect(L)$ of the vector $(\EH(L^{i-}))_{i\in\mathbb N}$ is taken to $\Leg(L)$ under the above isomorphism. \subsubsection*{Invariant $\EHinverse$} Etnyre, Vela-Vick and Zarev \cite{EVVZ} also consider a parallel inverse system with positive boundary slopes $(\{\SFH(-Y(L),-\Gamma_{i})\}_{i\in\mathbb N}, \{\phi^-_{j,i}\}_{j>i})$, whose inverse limit $\underleftarrow{\SFH}(-Y,L)$ is isomorphic to $\HFK^+(-Y,L)$. They define the inverse limit invariant $\EHinverse(L)$ to be the class of the vector $(\EH(Y(L),\overline{\xi_i}))_{i\in\mathbb N}$, where $\overline{\xi_i}$ equals the extension by two negative basic slices $\xi_L\cup\zeta^-_{0,\infty}\cup\zeta^-_{\infty,i}$. \section{Proofs}\label{Sec3} \begin{lem}\label{lem:minus} If the contact invariant $c(Y_r(L),\xi_r^-)$ is non-zero for all large negative integers $r$, then it is non-zero for all $r\leq -1$. \end{lem} \proof First, utilizing the contact surgery presentation of Ding and Geiges \cite{DG}, we know that $(Y_r(L),\xi_r^-)$ for $r\in (-n,-n+1)$ and $n\in \mathbb N$ can be obtained by Legendrian surgery on $(Y_{-n}(L),\xi_{-n}^-)$; hence, if the integral surgeries have non-zero contact invariant, so do the rational ones. Furthermore, if the contact invariant of $(Y_{-n}(L),\xi_{-n}^-)$ is non-zero, then the contact invariant of $(Y_{-m}(L),\xi_{-m}^-)$ for $m<n$ is non-zero too. Indeed, the contact $(-n)$-surgery along a Legendrian knot $L$ equals some contact $(-n+1)$-surgery along a once stabilized knot $L'$.
And, since every stabilization of the knot lies on a (boundary-parallel) stabilization of the page compatible with the original Legendrian knot, the capping-off morphism of Baldwin implies that the contact invariant of the Legendrian surgery along a stabilized knot $L'$ vanishes once the contact invariant of the Legendrian surgery along the knot $L$ is zero \cite[Theorem 1.7]{B}. \endproof \proof[Proof of Theorem \ref{thm:minus}] Under the identification of $\Leg(L)$ with $\EHdirect(L)$, we need to show that $\EH(L^{i-})$ remains non-zero as the number of negative stabilizations goes to infinity. Of course, the complement $(Y(L),\xi_L)$ of the standard neighborhood of the Legendrian knot $L$, as well as the complements \[(Y(L^{i-}),\xi_{L^{i-}}) = (Y(L),\xi_L\cup\bigcup_{j=-1}^{-i}\zeta^-_{j+1,j})\] of its stabilizations, embed into $(Y,\xi)$ as sutured submanifolds. Hence, the induced map on sutured Floer homology \[\psi_{\infty(-i)}: \SFH(-Y(L), -\Gamma_\text{-i}) \rightarrow \SFH(-Y(1))\] takes \[\psi_{\infty(-i)} (\EH(L^{i-}))=c(Y,\xi).\] Negative contact surgery takes away some of the twisting around the knot; indeed, we can always replace it by an admissible transverse surgery, which in turn can be understood as a particular contact cut (see \cite{BE} for details). However, the standard complements of $L^{i-}$ for all $i<n$ still embed in the contact $(-n)$-surgery $(Y_{-n}(L),\xi_{-n}^-)$; more precisely, the contact $(-n)$-surgery $(Y_{-n}(L),\xi_{-n}^-)$ equals the complement of $L^{(n-1)-}$ filled by the solid torus of the boundary slope $-n+1$ and meridional slope $-n$.
Therefore, we have \[\psi_{-n(-i)}:\SFH(-Y(L),-\Gamma_{-i}) \rightarrow \SFH((-Y_{-n}(L))(1))\] and \[\psi_{-n(-i)}(\EH(L^{i-}))=c(Y_{-n}(L),\xi_{-n}^-) \text{ for all } i<n.\] Since we assumed that the contact invariants of large negative contact surgeries with all stabilizations negative do not vanish, the $\EH$-invariants of the negative stabilizations of the Legendrian knot $L$ do not vanish either, and thus $\EHdirect(L)=\Leg(L)\neq 0$. \endproof \proof[Proof of Proposition \ref{prop:order}] The condition that the contact $(-n)$-surgery (for every $n\geq m$) along $L$ with $m$ positive stabilizations has non-zero contact invariant means that there is a contact $-(n-m)$-surgery $\xi_{m-n}^-$ along $L^{m+}$ with non-zero contact invariant. Then, as a consequence of the capping-off morphism of Baldwin \cite[Theorem 1.7]{B}, there is also a contact $-(n-m)$-surgery $\xi_{m-n}^-$ along $L$ with non-zero contact invariant for every $n\geq m$. On the other hand, when $c(Y,\xi)=0$, we know that the $U$-order of $\Leg(L)$ is finite, say $o$; in particular, the invariant of the $o$-times positively stabilized $L$ vanishes. Now, since for all $n$ a contact $-(n-m)$-surgery along $L^{m+}$ has non-zero contact invariant, the invariant $\Leg(L^{m+}) =U^m\cdot\Leg(L) \neq0$ by Theorem \ref{thm:minus}, and hence $m<o$. \endproof \begin{lem}\label{lem:plus} If there is $n\in\mathbb N$ such that $c(Y_n(L),\xi_n^-)\neq0$, then the contact invariant $c(Y_r(L),\xi_r^-)\neq0$ for all $r\geq n$. \end{lem} \proof According to the contact surgery presentation of Ding and Geiges \cite{DG}, every $(Y_r(L),\xi_r^-)$ for $r\geq n$ and $n\in \mathbb N$ can be obtained by Legendrian surgery on $(Y_{n}(L),\xi_n^-)$.
\endproof \proof[Proof of Theorem \ref{thm:plus}] As opposed to the complements of the standard neighborhoods of the Legendrian knot stabilizations $L^{i-}$, the completion of the knot complement $(Y(L),\overline{\xi_\infty})$ never embeds in $(Y,\xi)$, and hence neither do its extensions $(Y(L),\overline{\xi_i})$ for $i\in\mathbb N$. Nevertheless, they do embed in positive contact surgeries with all stabilizations negative; indeed, in contrast to negative surgeries, these correspond to inadmissible transverse surgery and can be thought of as adding twisting along a Legendrian knot (see \cite{C}). Explicitly, a contact $r$-surgery with $r>0$ contains embedded copies of all the integral extensions with $n>r$. In particular, for $r=\frac{2n-1}{2}$ the contact $r$-surgery equals the $n$-extension $(Y(L),\overline{\xi_n})$ filled by the solid torus of boundary slope $n$ and meridional slope $r$, and the morphism induced by the surgery $2$-handle \[\psi_r:\SFH(-Y(L),-\Gamma_n) \rightarrow \SFH((-Y_r(L))(1))\] takes \[\psi_r(\EH(Y(L),\overline{\xi_n})) = c(Y_r(L),\xi_r^-).\] Now, as we have assumed that at least one positive surgery has non-vanishing contact invariant, all larger surgeries -- as observed in Remark \ref{rmk:1+} -- have non-vanishing invariant as well. Hence, for big enough $n$ the $n$-extension embeds into a closed contact manifold with non-zero contact invariant, and therefore has non-zero $\EH$-invariant. Thus, $\EHinverse(L)\neq 0$.
\endproof \proof[Proof of Proposition \ref{prop:EHinverse}] According to Etnyre, Vela-Vick and Zarev \cite[Theorem 1.7]{EVVZ}, there is a commutative triangle \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3em,column sep=3em] { \underleftarrow{\SFH}(-Y,L) & & \HFK^+(-Y,L) \\ &\ \ \HFKhat(-Y,L) & \\ }; \path[->,font=\small] (m-1-1) edge node[above] {$I_+$} (m-1-3); \path[<-,font=\small] (m-1-1) edge node[auto,swap] {$\Phi_{\text{dSV}}$} (m-2-2) (m-1-3) edge node[auto] {$\iota_*$} (m-2-2); \end{tikzpicture} \] where $I_+$ is an isomorphism and $\iota_*$ is the map induced on homology by the inclusion of complexes. Additionally, by \cite[Theorem 1.7]{EVVZ}, the map $\Phi_\text{dSV}$ sends the Legendrian invariant $\Leghat(L)$ to $\EHinverse(L)$. Now, recall that $\HFK^+$ is defined as the homology of the graded object associated to the quotient of $\CFK^\infty$ by $U\cdot\CFK^-$: \[ \CFK^+:=\CFK^\infty/\ U\cdot\CFK^- \text{ and } \HFK^+:=\text{H}_*(\gCFK^+). \] Hence, the map $\iota_*$ sends to zero exactly those elements of $\HFKhat$ which correspond to torsion elements in $\HFK^-$. The existence of the map $\Phi_\text{dSV}$ therefore implies that $\Leghat(L)\neq 0$ whenever $\EHinverse(L)\neq 0$. Moreover, since the Legendrian invariant $\Leghat(L)$, as an element of $\HFK^-(-Y,L)$, is non-torsion if and only if the ambient contact structure has non-zero invariant \cite[Theorem 1.2]{LOSSz}, the result follows. \endproof \end{document}
\begin{document} \begin{frontmatter} \title {\textbf{Technical Uncertainty in Real Options with Learning}\tnoteref{t1}\\[0.1em] {\normalsize \textbf{The Journal of Energy Markets, Forthcoming}}} \tnotetext[t1]{We are grateful to two anonymous referees for valuable comments.} \author[author1] {Ali Al-Aradi} \ead{[email protected]} \author[author2]{\'Alvaro Cartea} \ead{[email protected]} \author[author3]{Sebastian Jaimungal} \ead{[email protected]} \address[author1] {Department of Statistical Sciences, University of Toronto, Toronto, Canada} \address[author2] {Mathematical Institute, University of Oxford, Oxford, UK\\ Oxford-Man Institute of Quantitative Finance, Oxford, UK} \address[author3] {Department of Statistical Sciences, University of Toronto, Toronto, Canada} \begin{abstract} We introduce a new approach to incorporate uncertainty into the decision to invest in a commodity reserve. The investment is an irreversible one-off capital expenditure, after which the investor receives a stream of cashflow from extracting the commodity and selling it on the spot market. The investor is exposed to price uncertainty and uncertainty in the amount of available resources in the reserves (i.e. technical uncertainty). She does, however, learn about the reserve levels through time, which is a key determinant in the decision to invest. To model the reserve level uncertainty and how she learns about the estimates of the commodity in the reserve, we adopt a continuous-time Markov chain model to value the option to invest in the reserve and investigate the value that learning has prior to investment. 
\end{abstract} \begin{keyword} Real Options; Investment under Uncertainty; Technical Uncertainty; Irreversibility \end{keyword} \end{frontmatter} \section{Introduction} What market conditions are optimal for making an investment decision is an extensively studied question in the academic literature and one at the heart of the valuation and execution of projects under uncertainty. Some investment projects are endowed with the option to delay decisions until market conditions are optimal. This option is valuable because decisions are made when the potential gains stemming from the decision are maximized. The classical work of \cite{McDonaldSiegel86} is the first to formalize the investment problem as a real option to invest in a project. In their work, the value $O_t$ of the option is calculated by comparing the value of investing now with the value of making the investment at a future time. Specifically, the value of the real option is \begin{equation} O_t = \sup_{\tau \in \mathcal{T}} {\mathbb E}\left[e^{-\rho(\tau-t)}(V_{\tau} - I_{\tau})_+\right]\,, \end{equation} where $\mathcal{T}$ is the set of admissible ${\mathcal F}$-stopping (exercise) times, $\rho$ is the risk-adjusted discount rate, and $V_t$ and $I_t$ are the project value and (sunk) cost, respectively. The project value and cost are traditionally modeled with a geometric Brownian motion (GBM). The solution to this problem shows that the optimal investment strategy is to invest when the ratio ${V_t}/{I_t}$ reaches a critical boundary $B$ (the problem of optimal scrapping/divesting is similar, with the roles of $V_t$ and $I_t$ reversed). More recently, several authors have studied this problem with mean-reverting project value and costs (see, e.g., \cite{MetcalfHasset95}, \cite{Sarkar03}, \cite{JaiSouzaZubelli09}).
In another classical paper, \cite{BrennanSchwartz1985} focus on the management of a mine (controlling output rate, opening/closing of mine, abandonment, and so on) rather than the optimal timing problem (see also \cite{Dixit89}). Management decisions are modulated by the output prices which are modeled as a GBM, while costs are known. These classical works do not take into account the uncertainty associated with reserve levels. To account for such ``technical uncertainty'', \cite{Pindyck80} develops a model where the demand and reserve levels fluctuate continuously with increasing variance. Furthermore, the optimal strategy is influenced by exploration, which is introduced as a policy (i.e. control) variable in two distinct ways. The first allows exploratory effort to affect the level of ``knowledge'', which reduces the variance of reserve fluctuations. The second assumes that reserves are discovered at a rate that depends on: how much has already been discovered in the past, the amount of current effort, and exogenous noise. More recent approaches to the investment timing problem with technical uncertainty include those using Bayesian updating as in \cite{ArmstrongGalliBaileyCouet}, modeling project costs via Markov chains as in \cite{ElliotMiaoYu2009}, and using proportionality to model learning as in \cite{Sadowsky2005}. Also, \cite{CortozarSchwartzCasassus2000} describes a comprehensive approach to valuing several-stage exploration, solves the timing problem, and provides investment management (closure, opening, etc.) decision rules -- see also \cite{BrennanSchwartz1985}.
Other works that incorporate real option techniques in the valuation of flexibility and investment decisions in commodities and energy include that of \cite{himpler2014optimal} who look at the optimal timing of wind farm repowering; \cite{taschini2010real} who look at the option to switch fuels under different scenarios and fuel incentives; the work of \cite{fleten2011transmission} that looks at the option to choose the capacity of an electricity interconnector between two locations, and \cite{cartea2012much} who value an electricity interconnector as a stream of real options on the difference of prices in two locations. Meanwhile, \cite{cartea2017irreversible} study the effect that model uncertainty has on irreversible investments. This work adds to the literature by incorporating both market and reserve uncertainty, while allowing the agent to learn about the status of reserves. Reserve uncertainty is represented by a Markov chain model with transition rates that decay as time flows forward to mimic the notion of learning. The model setup is developed in the context of oil exploration; however, it may be applied to other investment problems in commodities, such as mining for precious or base metals, and natural gas fields. We value the irreversible option to invest in the exploration by developing a version of Fourier space-time stepping, as in \cite{Jai08}, \cite{Surkov11}, and \cite{jaimungal2013valuing} for equity, commodity and interest-rate derivatives, respectively. For other work on ambiguity aversion and model uncertainty in commodities and algorithmic trading see \cite{CJZ}, \cite{CDJ}, \cite{Dempster}. If estimates on exploration costs and volume estimates are available, the calibration of the model is relatively simple. We demonstrate how the model can be used to assess whether exploration costs warrant the potential benefits from finding reserves and extracting them.
Specifically, we show how to calculate the value of the option to delay investment and discuss the agent's optimal investment threshold. This threshold, also referred to as the exercise boundary, depends on a number of variables and factors including: the agent's estimate of the volume in the reserve, the rate at which the agent learns about the volume of the reserve, the rate at which the agent extracts the commodity, and the expiry of the option. We show that the value of the option to wait-and-learn is high at the beginning and gradually decreases as expiry of the option approaches because the agent has little time left to learn. We assume that the investment cost depends on the volume of the reserve. If the estimated volume is high (resp. low) the sunk cost to extract the commodity is high (resp. low). This has an effect on the optimal time to make the investment as well as the level of the commodity spot price that justifies making the sunk cost. For example, we show that when the volume estimate is low, and the option to invest is far away from expiry, the agent sets a high investment threshold as a result of two effects which make the option to delay investment valuable. First, low volume requires a high commodity spot price to justify the investment. Second, far away from expiry the investor attaches high value to waiting and learning about the volume estimates of the reserve. On the other hand, as the option approaches expiry, these two effects become weaker. In particular, the value of learning is low because there is less time to learn about the reserve estimates, so the investment decision is merely based on whether costs will be recovered given the spot price of the commodity, the rate at which the commodity is extracted, and the uncertainty around it. The remainder of this paper is organized as follows.
In Section \ref{sec:model assumptions}, we provide the details of our modeling framework, including how we model both technical uncertainty and the uncertainty in the underlying project. Moreover, we provide an approach for accounting for the agent's learning of the reserve environment through exploration. Next, Section \ref{sec:real option valuation} provides an analysis of the Fourier space-time stepping approach for valuing the early exercise features in the irreversible investment with learning. Section \ref{sec:calibration} shows how the model can be calibrated to estimates of the cost of exploration and the expected benefits of this exploration. Section \ref{sec:results} provides some numerical experiments to demonstrate the efficacy of the approach and an analysis of the qualitative behavior of the model and its implications. Finally, we provide some concluding remarks and ideas for future lines of work. \section{Model Assumptions} \label{sec:model assumptions} In this section we provide models for the two sources of uncertainty that drive the value of the real option to explore and (irreversibly) invest in a project. The setting described here is tuned to some extent for oil exploration; however, it can be modified to deal with other activities, including mining, natural and shale gas, and other natural resource exploration. Another extension to our setup is to account for the option to mothball exploration and/or extraction (once extraction begins), as well as other managerial flexibilities that might arise in exploration and investment. See for example \cite{McDonaldSiegel85}, \cite{DixitPindyck94}, \cite{Trigeorgis1999}, \cite{tsekrekos2010effect}, \cite{jaimungal2015incorporating}, and \cite{kobari2014real}. We first describe how we model technical uncertainty and then describe how we model project value uncertainty.
As usual, we assume a filtered probability space $\left( \Omega,{\mathcal F},\{{\mathcal F}_t\}_{t \geq 0},{\mathbb P} \right)$ where the filtration $\{{\mathcal F}_t\}_{t \geq 0}$ will be described in more detail at a later stage. We also assume the existence of an equivalent martingale measure, or risk-neutral measure, $\mathbb{Q}$ and all models below are written in terms of this probability measure. \subsection{Technical Uncertainty} Let $V=(V_t)_{t\ge0}$ denote the estimated reserve volume (level) process and ${\vartheta}$ be the true reserve volume. The true reserve volume ${\vartheta}$ is unknown to the investor, and can be viewed as a random variable when conditioned\footnote{For which we need to assume that ${\vartheta} < \infty$ almost surely.} on the information available to her at time $t$, but it will be revealed as $t\to\infty$. We assume for simplicity that the possible reserve levels, as well as their estimates, take on values from a finite set of possible reserve volumes. We model the estimated reserve level as a continuous-time (inhomogeneous) Markov chain because reserve estimates are updated as new information from exploration is obtained. Moreover, to capture the feature that the accuracy of estimates improves as more information becomes available through time, we assume that the transition rate between the volume estimate states decreases as time flows forward. In addition, we assume that $\displaystyle\lim_{t\to\infty}V_t = {\vartheta}$ almost surely to reflect the feature mentioned earlier that the true reserve level is revealed to the investor with the passage of time.\footnote{We could in principle derive such a model by writing $V_t={\mathbb E}[{\vartheta}|\mathcal G_t]$ where $\mathcal G=(\mathcal G_t)_{t\ge0}$ denotes the filtration that generates our information about the true reserve. Under some specific modeling assumptions, $V$ can be cast into a Markov chain representation. 
We opt not to delve into these details, as it detracts from the simplicity of the approach we are proposing, and instead model $V$ directly.} More specifically, the estimated reserve volume $V_t$ is modulated by a finite state, continuous-time, inhomogeneous Markov chain $Z_t$ taking values in $\{1,\dots,m\}$ via \begin{equation} V_t = v^{(Z_t)}\,, \end{equation} where the constants \begin{equation}\label{eqn: possible reserve volumes} \left(v^{(1)}, \cdots, v^{(m)}\right)\in\mathds R^m_+ \end{equation} are the possible reserve volumes. The generator matrix of the Markov chain $Z_t$ is denoted by ${\boldsymbol{G}}_t$ and assumed to be of the form \begin{equation}\label{eqn: generator matrix} {\boldsymbol{G}}_t = h_t\,{\boldsymbol{A}}\,, \end{equation} where $h_t$ is a deterministic, non-negative decreasing function of time, such that $h_t \xrightarrow[]{t\to\infty} 0$ and $\int_0^\infty h_u\,du<\infty$, and ${\boldsymbol{A}}$ is a constant $m\times m$ matrix with $\sum_{j=1}^m A_{ij} = 0$ and $A_{ij}>0$ for $i\ne j$. The states of the Markov chain correspond to various possible estimates for reserve level, thus capturing the uncertainty in those estimates. The function $h_t$ captures how the agent learns about the volume or quantity of the commodity in the reserve. A decreasing $h$ implies that the transition rates are also decreasing; hence the probability of changes in the estimated volumes decreases with time, and the estimates become more accurate. Optimal policies for the irreversible investment to explore, and the subsequent value of the project based on the extraction of the commodity, naturally depend on the observed estimate of reserves -- Section \ref{sec:calibration} discusses in detail the form of $h_t$ and ${\boldsymbol{A}}$, and how they are calibrated to data.
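Because the generator \eqref{eqn: generator matrix} is a scalar multiple $h_t\,{\boldsymbol{A}}$ of a fixed matrix, the generators at different times commute, and the transition matrix over any interval collapses to a single matrix exponential. The following sketch illustrates this with a hypothetical three-state generator and learning function $h_t=e^{-\eta t}$ (both illustrative choices, not the paper's calibration); note how the transition matrix approaches the identity over later intervals, reflecting learning:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state base generator: rows sum to zero, off-diagonals positive.
A = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -2.0,  1.0],
              [ 0.5,  1.5, -2.0]])

def H(t1, t2, eta=1.0):
    """Integral of the learning function h_s = exp(-eta*s) over [t1, t2]."""
    return (np.exp(-eta * t1) - np.exp(-eta * t2)) / eta

def transition_matrix(t1, t2):
    """P(Z_{t2} = j | Z_{t1} = i) for the inhomogeneous chain with G_t = h_t*A.

    Since G_t is a scalar multiple of a fixed A, the product integral of the
    chain reduces to expm( (\int_{t1}^{t2} h_s ds) * A )."""
    return expm(H(t1, t2) * A)

P_early = transition_matrix(0.0, 1.0)    # substantial transition probability
P_late  = transition_matrix(10.0, 11.0)  # nearly the identity: estimates settle
```

Each row of the returned matrix is a probability distribution, and the late-interval matrix is closer to the identity than the early one, matching the decreasing-transition-rate assumption.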
\subsection{Market Uncertainty} The second source of uncertainty stems from the spot price of the commodity, which we denote by $S=(S_t)_{t\ge0}$ and model as the exponential of an Ornstein-Uhlenbeck (OU) process: \begin{subequations}\label{eqn:S model} \begin{equation}\label{eqn:S model S} S_t = \exp\{\theta+X_t\}\,, \end{equation} where the OU process $X=(X_t)_{t\ge0}$ satisfies the stochastic differential equation (SDE) \begin{equation}\label{eqn:S model X} dX_t = -\kappa \, X_t\,dt + \sigma \,dW_t\,, \end{equation} \end{subequations} where $W=(W_t)_{t\ge0}$ is a standard Brownian motion (independent of $Z$), $\kappa>0$ is the rate of mean-reversion, $\theta$ is the (log-)level of mean-reversion, and $\sigma$ is the (log-)volatility of the spot price. Such models of commodity spot prices have been widely used in the literature; see for example \cite{CarteaFigueroa2005}, \cite{weron2007modeling}, \cite{CARTEA2008829}, \cite{kiesel2009two}, \cite{Coulon2013976}. Now that we have specified the model for the stock of the commodity in the reserve and its market price, we need one final ingredient: the market value of the commodity in the reserve. We denote this value by the process $P=(P_t)_{t\ge0}$ and show how to calculate it in steps. Suppose that investment is made at time $t$, with extraction beginning $\epsilon\geq 0$ units of time later and continuing until the random (stopping) time $\tau=t+\epsilon+\Delta$. Here $\Delta>0$ represents the amount of time required to complete the extraction process. The investor does not know how much of the commodity is in the reserve, so the time to depletion (and the extraction duration) is a random stopping time.
We also note that engineering and physical limitations prevent the total amount of the commodity stored in the reserve from being extracted, and instead only $\gamma\,{\vartheta}$ is extractable, $0<\gamma<1$.\footnote{A good example is stored natural gas where there is always a residual that cannot be extracted from storage.} The random time to depletion $\Delta$ is related to the unknown total reserve volume ${\vartheta}$ via the rate of extraction. In particular, we have: \begin{equation} \int_{t+\epsilon}^{t+\epsilon+\Delta} g(u) \, du = \gamma\,{\vartheta}\,, \end{equation} where $g(u)$ denotes the rate of extraction at time $u$. We assume that once extraction begins the commodity is extracted at the rate \begin{equation}\label{eqn:extraction rate} g(u) = \alpha \,e^{-\beta(u-(t+\epsilon))}\,,\qquad u\in[t+\epsilon,t+\epsilon+\Delta]\,, \end{equation} where $\alpha\geq 0$ and $\beta\geq 0$. Figure \ref{fig:ExtractionRate} presents a stylized picture of the exponential extraction rate \eqref{eqn:extraction rate}. Under the specific extraction rate model in \eqref{eqn:extraction rate}, the time to depletion can be written as \begin{equation}\label{eqn: time to extraction Delta} \Delta = -\frac{1}{\beta} \log\left(1 - \frac{\beta}{\alpha}\, \gamma \, {\vartheta} \right)\,, \end{equation} which is a random variable because ${\vartheta}$ is only revealed as $t\to\infty$. \begin{figure} \caption{Once the agent invests in the reserve at time $t$, extraction begins at time $t+\epsilon$ and continues until the fraction $\gamma$ of the actual (unknown) reserve volume ${\vartheta}$ has been extracted.} \label{fig:ExtractionRate} \end{figure} The value of the reserve when extraction begins is determined by a number of factors including: the price of the commodity, the state of the Markov chain linked to reserve uncertainty, and the random time to exhaustion of the reserve.
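The depletion time \eqref{eqn: time to extraction Delta} is consistent with the volume constraint $\int g(u)\,du=\gamma\,{\vartheta}$, as a quick numerical sketch confirms (all parameter values below are purely illustrative, not calibrated values from the paper):

```python
import numpy as np

def depletion_time(alpha, beta, gamma, vartheta):
    """Time Delta needed to extract the volume gamma*vartheta at the
    decaying rate g(u) = alpha*exp(-beta*u); requires
    beta*gamma*vartheta < alpha, otherwise the reserve is never depleted."""
    return -np.log(1.0 - beta * gamma * vartheta / alpha) / beta

# illustrative parameters
alpha, beta, gamma, vartheta = 10.0, 0.5, 0.8, 12.0
Delta = depletion_time(alpha, beta, gamma, vartheta)

# integrating the extraction rate over [0, Delta] recovers gamma*vartheta
extracted = alpha / beta * (1.0 - np.exp(-beta * Delta))
```

The closed-form integral `extracted` equals $\gamma\,{\vartheta}$ up to floating-point error, which is exactly the defining relation of $\Delta$.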
Specifically, the discounted value of the cash-flow generated from extracting the commodity at the rate $g(u)$ and selling it at the spot price $S_u$ is given by \begin{equation}\label{eqn: DCF} DCF_t = \int_{t+\epsilon}^{t+\epsilon+\Delta} e^{-\rho(u-t)} \,(S_u - c) \, g(u) \, du\,, \end{equation} where $\rho$ is the agent's discount factor for the level of risk she bears with the project and $c$ is a running cost that the investor incurs as long as the extraction operation lasts. To compute the market value of the commodity in the reserve, which we denote by $P_t$, we calculate the expected discounted value of the cash flows attained from selling the extracted commodity. Namely, we insert in \eqref{eqn: DCF}: the spot price of the commodity \eqref{eqn:S model}, the time to extraction completion given in \eqref{eqn: time to extraction Delta}, and the extraction rate \eqref{eqn:extraction rate}. Finally, we take expectations of $DCF_t$ to obtain \begin{align} P_t = {\mathbb E} \left[ DCF_t \,|\,{\mathcal F}_t\,\right] = {\mathbb E} \left[ \left.\int_{t+\epsilon}^{t+\epsilon+\Delta} e^{-\rho(u-t)} \,(F_t(u) - c) \, g(u) \, du\,\right|\,{\mathcal F}_t\,\right]\,, \label{eqn:ReserveValue} \end{align} where ${\mathcal F}_t = \sigma\left( (S_u, V_u)_{u \in [0,t]} \right)$ is the natural filtration generated by both $S$ and $V$ (or equivalently the Markov chain $Z$ introduced earlier), the expectation is taken under the risk-neutral measure $\mathbb{Q}$, and \begin{eqnarray*} F_t(u)&=& {\mathbb E}\left[S_u\,|\,{\mathcal F}_t\right]\\ &=&\exp\left\{ \theta + e^{-\kappa(u-t)}\,X_t + \frac{\sigma^2}{4\,\kappa}\left(1-e^{-2\kappa(u-t)}\right)\right\} \end{eqnarray*} is the forward price of the underlying asset. Thus, the expectation in \eqref{eqn:ReserveValue} is over the random time to depletion $\Delta$, which further depends on the reserve volume ${\vartheta}$.
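The forward curve $F_t(u)$ entering the discounted cash-flow has the closed form above and is straightforward to evaluate; a minimal sketch (symbols as in the text, numerical values purely illustrative):

```python
import numpy as np

def forward_price(x, t, u, kappa, theta, sigma):
    """F_t(u) = E[S_u | X_t = x] for the exponential-OU spot model:
    exp( theta + exp(-kappa*(u-t))*x
         + sigma^2/(4*kappa) * (1 - exp(-2*kappa*(u-t))) )."""
    tau = u - t
    mean = theta + np.exp(-kappa * tau) * x
    var_half = sigma**2 / (4.0 * kappa) * (1.0 - np.exp(-2.0 * kappa * tau))
    return np.exp(mean + var_half)
```

Two sanity checks follow directly from the formula: at $u=t$ the forward equals the spot $e^{\theta+x}$, and as $u\to\infty$ it converges to the long-run level $e^{\theta+\sigma^2/(4\kappa)}$, independent of $x$.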
To compute this expectation we require the conditional distribution of the unknown reserve level ${\vartheta}$, that is we require ${\mathbb P}\left({\vartheta} = v^{(j)}\,|\,{\mathcal F}_t\right)$. By our assumption that ${\vartheta}=\displaystyle\lim_{t\to\infty}V_t$ a.s., this requires determining the conditional distribution at time $t$ of the limit of the underlying Markov chain. Specifically, \[ {\mathbb P}\left({\vartheta} = v^{(j)}\,\middle|\,{\mathcal F}_t\right) = {\mathbb P}\left(\lim_{s\to\infty} V_s = v^{(j)}\,\middle|\,{\mathcal F}_t\right)= {\mathbb P}\left(\lim_{s\to\infty} Z_s = j\,\middle|\,Z_t=i\right) = \left[e^{H_t {\boldsymbol{A}}} \right]_{ij}\,, \] where $H_t = \int^{\infty}_t h_u \,du$, the notation $[\,\cdot\,]_{ij}$ denotes the $ij$ element of the matrix in the square brackets, and recall that the matrix ${\boldsymbol{A}}$ is defined above in \eqref{eqn: generator matrix}. Therefore, the final expression for the expected discounted value of the reserve is \begin{equation}\label{eqn: reserve value} P_t := p^{(Z_t)}(t,X_t)=\sum_{j=1}^m \left[e^{H_t\,{\boldsymbol{A}}}\right]_{Z_t,j}\,\int_{t+\epsilon}^{t+\epsilon-\frac{1}{\beta} \log{\left(1 - \frac{\beta}{\alpha}\, \gamma \, v^{(j)} \right)}} e^{-\rho(u-t)} \,(F_t(u) - c) \, g(u) \, du\,. \end{equation} This expression has two sources of uncertainty: the first stems from the spot price of the commodity, through the OU process $X_t$, and the second from the estimate of the reserve volume, through the state of the Markov chain $Z_t$. With the model of extraction rate being exponentially decaying in time, see \eqref{eqn:extraction rate}, it is possible to write the integral appearing in the right-hand side of equation \eqref{eqn: reserve value} in terms of special functions; however, such a rewrite does not add clarity, so we opt to keep the integral as shown above.
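The conditional distribution of the true reserve level is just a row of the matrix exponential $e^{H_t\,{\boldsymbol{A}}}$ and is cheap to evaluate. A sketch with a hypothetical three-state chain and learning function $h_s=e^{-\eta s}$ (so $H_t=e^{-\eta t}/\eta$); all volumes, rates, and the generator below are illustrative, not calibrated values:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative states and base generator (hypothetical values)
v = np.array([80.0, 100.0, 120.0])          # possible reserve volumes
A = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -2.0,  1.0],
              [ 0.5,  1.5, -2.0]])
eta = 1.0                                    # h_s = exp(-eta*s)

def limit_distribution(t, i):
    """P(vartheta = v[j] | Z_t = i): row i of expm(H_t * A),
    with H_t = int_t^infty h_s ds = exp(-eta*t)/eta."""
    H_t = np.exp(-eta * t) / eta
    return expm(H_t * A)[i]

p = limit_distribution(0.0, 1)   # distribution of the true volume at t = 0
expected_volume = p @ v          # conditional mean of the true reserve
```

For large $t$, $H_t\to0$ and the distribution concentrates on the current state, i.e., the true volume is revealed, consistent with $V_t\to\vartheta$.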
\section{Real Option Valuation} \label{sec:real option valuation} Now that we have a model for the value of the reserve, we focus on the cost required to exploit the reserve of the commodity and the value of the flexibility to decide when to make the investment. The cost of investing in the reserve is irreversible and denoted by $I^{(k)}$, where $k$ is the regime of the reserve volume estimate. Here we assume that the cost $I^{(k)}$ is linked to the volume estimate because extracting a large reserve will likely require a larger up-front investment than that required to extract the commodity from a small reserve. We assume that the investment cost is \begin{equation}\label{eqn: linear cost} I^{(k)} = c_0 + c_1\,v^{(k)}\,, \end{equation} where $c_0\geq0$ is a fixed cost, $c_1\geq 0$, and recall that $v^{(k)}$ are the possible reserve volumes, see \eqref{eqn: possible reserve volumes}. Notice that the investment cost depends on the reserve estimate. This reflects the fact that the investor will base the size of her extraction operation on her estimate of the reserve amount: the larger she believes the reserve to be, the bigger the required operation. The running cost $c$, on the other hand, is incurred over the actual extraction period and is therefore tied to the true reserve amount. We denote the value of the option by $L_t=\ell^{(Z_t)}(t,X_t)$, where the collection of functions $\ell^{(1)}(t,x)$, $\dots$, $\ell^{(m)}(t,x)$ represents the value of the real option conditional on the state $Z_t=1,\dots,m$ (indexed by the superscript) and $X_t=x$. The agent must make a decision by time $T$, or she loses the option to make the investment and exploit the reserve.
Standard theory implies that the value of the option to irreversibly invest in the reserve, and begin extraction, is given by the optimal stopping problem: \begin{subequations} \begin{align} L_t &= \sup_{\tau\in\mathcal T}{\mathbb E}\left[\,e^{-\rho \,\tau}\,\max\left(P_\tau - I^{(Z_\tau)},0\right) \,|\,{\mathcal F}_t\,\right] \\ &=\sup_{\tau\in\mathcal T}{\mathbb E}\left[\,e^{-\rho \,\tau}\,\max\left(P_\tau - I^{(Z_\tau)},0\right) \,|\, Z_t,\, X_t \,\right] \,. \end{align} \end{subequations} Here, $\mathcal T$ denotes the set of admissible stopping times, taken to be the finite collection of ${\mathcal F}$-stopping times restricted to $t_i =i\Delta t$, $i=0,\dots,N$ with $t_N\le T$. In other words, the agent is restricted to making the investment decision on days $t_i$. In the interim time, the agent can acquire more information to improve her volume reserve estimates. For notational convenience we define the deflated value process \begin{equation}\label{eqn: deflated value process} \bar\ell^{(j)}(t,x):=e^{-\rho t}\ell^{(j)}(t,x)\,, \end{equation} and observe that in between the investment dates, the deflated value processes $\bar\ell^{(j)}(t,x)$ for $j=1,\cdots,m$, are martingales. In addition, since in between the investment dates there is no opportunity to exercise the option, $\bar\ell^{(j)}(t,x)$ is the same as a European claim with payoff equal to the value at the next exercise date. Thus, \begin{equation}\label{eqn: maximization} \bar\ell^{(j)}(t_i,x) = \max\left(\;\lim_{t\downarrow t_i}\bar\ell^{(j)}(t,x) \, ;\, e^{-\rho\,t_i}\,\left(p^{(j)}(t_i,x) - I^{(j)}\right) \right)\,, \end{equation} where $p^{(j)}(t_i,x)$ is as in \eqref{eqn: reserve value}, and recall that $j = 1,\dots, m$ represents the state of the regime.
Finally, in the interval $t\in(t_i,t_{i+1}]$ the processes $\bar\ell^{(j)}(t,x) $ satisfy the coupled system of PDEs \begin{equation} (\partial_t+\mathscr{L})\,\bar\ell^{(j)}(t,x) + h_t \sum_{k=1}^m A_{jk}\, \bar\ell^{(k)}(t,x) = 0\,,\qquad t\in (t_i,t_{i+1}]\,, \label{eqn:RealOption_PDE} \end{equation} where $\mathscr{L}=-\kappa\,x\,\partial_x +\frac{1}{2}\sigma^2\partial_{xx}$ is the infinitesimal generator of the process $X_t$. The maximization in \eqref{eqn: maximization} represents the agent's option to hold on to the investment option at time $t_i$ or to invest immediately. If the second argument attains the maximum, then the agent exercises her option to invest in the reserve, at a cost of $I^{(j)}$, and receives the expected discounted value of the cash-flow $p^{(j)}(t,x)$, which results from extracting and selling the commodity on the spot market. This investment decision is tied to the reserve volume estimate through the regime $j$. Different regimes $j$ will result in different exercise policies and we explore this relationship in the next section. Motivated by the work of \cite{Surkov11}, who study options on multiple commodities driven by L\'evy processes, we solve the system of PDEs \eqref{eqn:RealOption_PDE} recursively by employing the Fourier transform of $\bar\ell^{(j)}(t,x)$ with respect to $x$, which we denote by $\tilde{\ell}^{(j)}(t,\omega)$. Specifically, we write \begin{equation} \tilde{\ell}^{(j)}(t,\omega) = \int_{-\infty}^\infty e^{-\imath\,\omega\,x}\,\bar{\ell}^{(j)}(t,x)\,dx\,, \quad \text{and} \quad \bar{\ell}^{(j)}(t,x) = \int_{-\infty}^\infty e^{\imath\,\omega\,x}\,\tilde{\ell}^{(j)}(t,\omega)\,\frac{d\omega}{2\pi}\,, \end{equation} where $\imath=\sqrt{-1}$.
Applying the Fourier transform to \eqref{eqn:RealOption_PDE}, we obtain a new PDE, without the second-order (diffusion) term, which depends on the state variable $\omega$ rather than the state variable $x$, i.e.: \begin{equation} \left[ \partial_t + (\kappa-\frac{1}{2}\sigma^2\,\omega^2)+\kappa\,\omega\,\partial_\omega \right] \tilde{\ell}^{(j)}(t,\omega) + h_t \sum_{l=1}^m A_{jl}\, \tilde{\ell}^{(l)}(t,\omega)=0\,. \end{equation} Within the interval $(t_{k},t_{k+1}]$, we introduce a moving coordinate system and write $\hat\ell^{(j)}(t,\omega) = \tilde\ell^{(j)}(t,e^{-\kappa(t_{k+1}-t)}\omega)$, which removes the derivative in $\omega$, and we find that the functions $\hat\ell^{(j)}$ satisfy the coupled linear system of ODEs \begin{equation} \partial_t \hat\ell^{(j)}(t,\omega)+ \left(\kappa-\frac{1}{2}\sigma^2\,\omega^2\,e^{-2\kappa(t_{k+1}-t)}\right)\hat\ell^{(j)}(t,\omega) + h_t \sum_{l=1}^m A_{jl}\, \hat{\ell}^{(l)}(t,\omega) = 0\,. \end{equation} By writing ${\boldsymbol{A}}=\boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^{-1}$ where $\boldsymbol{U}$ is the matrix of eigenvectors of ${\boldsymbol{A}}$, and $\boldsymbol{D}$ the diagonal matrix of eigenvalues of ${\boldsymbol{A}}$, the above coupled system of ODEs can be recast as independent ODEs, which in vector form reads \begin{equation} \partial_t \left(\boldsymbol{U}^{-1} \boldsymbol{\hat\ell}(t,\omega) \right) + \left(\psi(\omega\,e^{-\kappa(t_{k+1}-t)})\,\mathbb{I} +h_t\,\boldsymbol{D}\right)\boldsymbol{U}^{-1}\boldsymbol{\hat\ell}(t,\omega) = \boldsymbol{0}\,, \end{equation} where $\boldsymbol{\hat\ell}(t,\omega) = (\hat\ell^{(1)}(t,\omega),\dots,\hat\ell^{(m)}(t,\omega))'$, $\psi(\omega) = \kappa-\frac{1}{2}\sigma^2\,\omega^2$ and $\mathbb{I}$ is the $m\times m$ identity matrix.
These uncoupled ODEs have solution \begin{equation}\label{eqn: solution uncoupled ode} \boldsymbol{U}^{-1}\boldsymbol{\hat\ell}(t_k^+,\omega) = \exp\left\{ \int_{t_k}^{t_{k+1} } \psi(\omega\,e^{-\kappa(t_{k+1}-s)})\,ds\, \mathbb{I} + \int_{t_k}^{t_{k+1} } h_s\,ds\,\boldsymbol{D}\right\} \,\boldsymbol{U}^{-1}\boldsymbol{\hat\ell}(t_{k+1},\omega)\,, \end{equation} where $\boldsymbol{\hat\ell}(t_k^+,\omega)= \displaystyle \lim_{t\downarrow t_k}\boldsymbol{\hat\ell}(t,\omega)$. Next, we left-multiply by $\boldsymbol{U}$ to obtain \begin{equation} \boldsymbol{\hat\ell}(t_k^+,\omega) = \exp\left\{ \int_{t_k}^{t_{k+1} } \psi\left(\omega\,e^{-\kappa(t_{k+1}-s)}\right)\,ds\,\right\} \exp\left\{ \int_{t_k}^{t_{k+1} } h_s\,ds\,{\boldsymbol{A}}\right\} \, \boldsymbol{\hat\ell}(t_{k+1},\omega)\,, \end{equation} and the Fourier transform of the deflated value of the option to irreversibly invest is \begin{equation} \boldsymbol{\tilde\ell}(t_k^+,\omega) = \exp\left\{ \int_{0}^{t_{k+1}-t_k } \psi(\omega\,e^{\kappa\,s})\,ds\,\right\} \exp\left\{ \int_{t_k}^{t_{k+1} } h_s\,ds\,{\boldsymbol{A}}\right\} \, \boldsymbol{\tilde\ell}\left(t_{k+1},\omega\,e^{\kappa(t_{k+1}-t_k)}\right)\,. \end{equation} This result has a few interesting features. The first is that the role of mean-reversion decouples from the Markov chain driving the volume estimates. The second is that the value at time $t^+_k$ at frequency $\omega$ depends on the value at time $t_{k+1}$ at frequency $\omega\,e^{\kappa(t_{k+1}-t_k)}$. This requires an extrapolation in the frequency space as the algorithm to calculate the option value steps backward in time. When we discretize the state space, such extrapolations could lead to inaccurate results since the edges of the state space are the most important contributions to the extrapolated values.
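Both propagation factors in the last display admit closed forms when, for instance, $h_s=e^{-\eta s}$: the $\psi$-integral is elementary, $\int_0^{\Delta t}\psi(\omega e^{\kappa s})\,ds = \kappa\,\Delta t - \frac{\sigma^2\omega^2}{4\kappa}\left(e^{2\kappa\Delta t}-1\right)$, and the chain factor is a matrix exponential. A minimal sketch of one backward step's factors (the generator, learning function, and parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative base generator (rows sum to zero)
A = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -2.0,  1.0],
              [ 0.5,  1.5, -2.0]])

def step_factors(omega, t_k, t_k1, kappa, sigma, A, eta=1.0):
    """Scalar and matrix factors propagating the transformed value
    from t_{k+1} back to t_k^+, assuming h_s = exp(-eta*s).

    scalar = exp( int_0^{dt} psi(omega*exp(kappa*s)) ds ),
    with psi(w) = kappa - sigma^2 * w^2 / 2, integrated in closed form;
    matrix = expm( (int_{t_k}^{t_{k+1}} h_s ds) * A )."""
    dt = t_k1 - t_k
    int_psi = kappa * dt - sigma**2 * omega**2 * (np.exp(2.0 * kappa * dt) - 1.0) / (4.0 * kappa)
    int_h = (np.exp(-eta * t_k) - np.exp(-eta * t_k1)) / eta
    return np.exp(int_psi), expm(int_h * A)

scalar, chain = step_factors(omega=1.0, t_k=0.0, t_k1=0.5, kappa=2.0, sigma=0.0, A=A)
```

With $\sigma=0$ the scalar factor reduces to $e^{\kappa\,\Delta t}$, and the chain factor is a stochastic matrix (rows summing to one), since ${\boldsymbol{A}}$ is a generator.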
Instead, we make use of the scaling relationship between frequencies and real space in Fourier transforms, \[ \int_{-\infty}^\infty e^{\imath\,\omega\,x}\,g(x/a)\,dx = a\int_{-\infty}^\infty e^{\imath\,(a\,\omega)\,x}\,g(x)\,dx = a\,\tilde{g}(a\,\omega)\,, \] to write \[ \boldsymbol{\tilde\ell}(t_k^+,\omega) = e^{-\kappa(t_{k+1}-t_k)}\, \exp\left\{ \int_{0}^{t_{k+1}-t_k } \psi(\omega\,e^{\kappa\,s})\,ds\,\right\} \exp\left\{ \int_{t_k}^{t_{k+1} } h_s\,ds\,{\boldsymbol{A}}\right\} \, \boldsymbol{\tilde{\breve\ell}}(t_{k+1},\omega)\,, \] where $\boldsymbol{\breve\ell}(t_{k+1},x) = \boldsymbol\ell(t_{k+1},x\,e^{-\kappa(t_{k+1}-t_k)})$ and $\boldsymbol{\tilde{\breve\ell}}$ denotes the Fourier transform of $\boldsymbol{\breve\ell}$. Thus the value of the right-limit of the real option to invest at $t_k^+$ is determined via interpolation in $x$ (rather than extrapolation in $\omega$). In Figure \ref{fig:algo} we summarize the approach for valuing the real option to irreversibly invest in the reserve. \begin{figure} \caption{Algorithm for computing the value of the option to irreversibly invest in the reserve.} \label{fig:algo} \end{figure} \section{Calibration and learning}\label{sec:calibration} Armed with the model in Section \ref{sec:model assumptions}, and the valuation procedure developed in Section \ref{sec:real option valuation}, we discuss in detail the calibration procedure for the Markov chain parameters and the investor's learning function $h_t$ given by \eqref{eqn: generator matrix}. The general idea is to use the investor's prior information regarding the estimate to define the possible reserve estimates, represented by the states of the Markov chain, as well as the transition rates for the base generator matrix ${\boldsymbol{A}}$. The next step is to calibrate the learning function using information about how much the reserve estimate variance can be reduced by some future date.
\subsection{Calibration of Markov chain parameters} At time $t=0$ the agent has an estimate of the true reserve volume, denoted by $\mu$, and the volatility of the estimate, denoted by $\sigma_0$. Thus \begin{equation} \mu={\mathbb E}[{\vartheta}\,|\,{\mathcal F}_0] \qquad\text{and}\qquad \sigma_0^2={\mathbb V}\left[{\vartheta}\,|\,{\mathcal F}_0\right]\,. \label{eqn:initial reserve volume distribution} \end{equation} The first step in the calibration procedure is to select the states of reserve volume conditional on the Markov chain state, i.e. to select $v^{(1)},\dots,v^{(m)}$. Since the $t=0$ estimate of the reserve volume represents an unbiased estimator of the reserves, the Markov chain should be symmetric around the initial estimate of the reserve volume. To ensure this symmetry, we assume that (i) the number of states of the Markov chain $Z_t$ is odd, i.e., $m=2\,L+1$ for some positive integer $L$; (ii) $v^{(L+1)} = \mu$, the initial estimate of the reserve volume; (iii) $v^{(k)}$ is increasing in $k$; and (iv) \begin{equation} v^{(L+1+i)} - \mu = \mu - v^{(L+1-i)}, \qquad \forall \; i=1,\dots,L\,. \end{equation} We further assume that the agent's estimate of the volume is normally distributed (this assumption can be modified to any distribution the agent considers to represent her prior knowledge): \begin{equation} {\vartheta} \,|\,{{\mathcal F}_0}\sim \mathcal N(\mu, \;\sigma_0^2)\,. \end{equation} We therefore choose the $v^{(k)}$ to be equally spaced over $\pm n$ standard deviations around $\mu$, for some chosen number $n$ of standard deviations, i.e., we select \begin{equation} v^{(k)} = \mu-n\,\sigma_0 + (k-1) \,\frac{2\,n\,\sigma_0}{m-1}\,, \qquad \forall\;k=1,\dots,m\,. \end{equation} Placing symmetry on the states of the reserve volume estimator is not sufficient to ensure symmetry in its distribution.
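As a concrete sketch of the grid just described (our own illustration; the function name and parameters are hypothetical, with \texttt{n\_sd} denoting the number of standard deviations spanned on each side of $\mu$):

```python
import math

def reserve_states(mu, sigma0, n_sd, m):
    """Equally spaced reserve-volume states spanning mu +/- n_sd standard
    deviations, symmetric about the initial estimate mu (m must be odd)."""
    assert m % 2 == 1, "an odd m keeps mu as the mid-state"
    step = 2.0 * n_sd * sigma0 / (m - 1)
    return [mu - n_sd * sigma0 + (k - 1) * step for k in range(1, m + 1)]

v = reserve_states(mu=1e9, sigma0=math.sqrt(3e8), n_sd=3, m=31)
L = (31 - 1) // 2
assert abs(v[L] - 1e9) < 1e-3                # v^{(L+1)} is the initial estimate
for i in range(1, L + 1):                    # v^{(L+1+i)} - mu == mu - v^{(L+1-i)}
    assert abs((v[L + i] - 1e9) - (1e9 - v[L - i])) < 1e-3
```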
We further require symmetry in the base generator matrix ${\boldsymbol{A}}$, and assume that \begin{subequations} \begin{align} {\boldsymbol{A}}_{1,1}^{\boldsymbol{\lambda}} &= -\lambda_1\,, &{\boldsymbol{A}}_{1,2}^{\boldsymbol{\lambda}} &= \lambda_1\,, \\ {\boldsymbol{A}}_{i,i-1}^{\boldsymbol{\lambda}} &= \lambda_i\,, &{\boldsymbol{A}}_{i,i}^{\boldsymbol{\lambda}} &= -2 \lambda_i , & {\boldsymbol{A}}_{i,i+1}^{\boldsymbol{\lambda}} &= \lambda_i\,, & i = 2 \,,\dots, L\,, \\ {\boldsymbol{A}}_{L+1,L}^{\boldsymbol{\lambda}} &= \lambda_{L+1}\,, & {\boldsymbol{A}}_{L+1,L+1}^{\boldsymbol{\lambda}} &= -2 \lambda_{L+1} , & {\boldsymbol{A}}_{L+1,L+2}^{\boldsymbol{\lambda}} &= \lambda_{L+1}\,, \\ {\boldsymbol{A}}_{m-i,m-i-1}^{\boldsymbol{\lambda}} &= \lambda_{i+1}\,, & {\boldsymbol{A}}_{m-i,m-i}^{\boldsymbol{\lambda}} &= -2\lambda_{i+1}, & {\boldsymbol{A}}_{m-i,m-i+1}^{\boldsymbol{\lambda}} &= \lambda_{i+1}\,, & i = 1 \,,\dots, L-1\,, \\ {\boldsymbol{A}}_{m,m-1}^{\boldsymbol{\lambda}} &= \lambda_1\,, & {\boldsymbol{A}}_{m,m}^{\boldsymbol{\lambda}} &= -\lambda_1\,. \end{align} \end{subequations} for some set of rates ${\boldsymbol{\lambda}} = \{\lambda_1,\lambda_2,\dots,\lambda_{L+1}\}$, where $\lambda_1,\lambda_2,\dots,\lambda_{L+1}>0$. Here $\lambda_1$, $\left\{ \lambda_2,\dots,\lambda_{L} \right\}$ and $\lambda_{L+1}$ determine the transition rates out of the edge states, the interior states and the mid-state, respectively. The form of ${\boldsymbol{A}}^{\boldsymbol{\lambda}}$ ensures that transitions only occur between neighboring states and that the transition rates of states lying symmetrically across the mean estimate coincide. The parameters ${\boldsymbol{\lambda}}$ are calibrated so the invariant distribution of ${\vartheta}$ without learning coincides with a discrete approximation of a normal random variable with mean $\mu$ and variance $\sigma_0^2$.
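A minimal sketch of ${\boldsymbol{A}}^{\boldsymbol{\lambda}}$ (our own illustration; it assumes the jump rate of state $k$ is $\lambda_{\min(k,\,m+1-k)}$, i.e., indexed by the distance to the nearest edge, which makes mirrored states share a rate):

```python
def base_generator(lmbda):
    """Tridiagonal generator for m = 2L+1 states; lmbda = [lam_1..lam_{L+1}]
    gives the jump rate of state k as lam_{min(k, m+1-k)}."""
    m = 2 * len(lmbda) - 1
    A = [[0.0] * m for _ in range(m)]
    for k in range(1, m + 1):                     # 1-based state index
        rate = lmbda[min(k, m + 1 - k) - 1]
        if k > 1:
            A[k - 1][k - 2] = rate                # jump to the left neighbor
        if k < m:
            A[k - 1][k] = rate                    # jump to the right neighbor
        A[k - 1][k - 1] = -rate * ((k > 1) + (k < m))
    return A

A = base_generator([0.3, 0.5, 0.2])               # L = 2, so m = 5 states
for row in A:
    assert abs(sum(row)) < 1e-12                  # generator rows sum to zero
m = len(A)
for i in range(m):                                # mirror symmetry across the mid-state
    for j in range(m):
        assert A[i][j] == A[m - 1 - i][m - 1 - j]
```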
Matching the invariant distribution in this way ensures that the Markov chain generates an invariant distribution equal to a discrete approximation of the original estimate of the reserve volume distribution. Formally, let ${\boldsymbol{P}}^{\boldsymbol{\lambda}} = e^{{\boldsymbol{A}}^{\boldsymbol{\lambda}}}$ denote the transition probability matrix after one unit of time, and let ${\boldsymbol{\pi}}^{\boldsymbol{\lambda}}$ denote the invariant distribution of ${\boldsymbol{P}}^{\boldsymbol{\lambda}}$, i.e., ${\boldsymbol{\pi}}^{\boldsymbol{\lambda}}$ solves the eigenproblem ${\boldsymbol{P}}^{\boldsymbol{\lambda}}\,{\boldsymbol{\pi}}^{\boldsymbol{\lambda}}={\boldsymbol{\pi}}^{\boldsymbol{\lambda}}$. Then, we choose ${\boldsymbol{\lambda}}$ such that \begin{subequations} \begin{align} {\boldsymbol{\pi}}_1^{\boldsymbol{\lambda}} &= \Phi_{\mu,\sigma_0}\left( \frac{1}{2} \left( v^{(1)} + v^{(2)} \right) \right) - \Phi_{\mu,\sigma_0}\left( \frac{1}{2} \left( v^{(1)} + \left[ v^{(1)} - \left(v^{(2)} - v^{(1)} \right) \right] \right) \right)\,, \\ {\boldsymbol{\pi}}_i^{\boldsymbol{\lambda}} &= \Phi_{\mu,\sigma_0}\left( \frac{1}{2} (v^{(i+1)} + v^{(i)}) \right)- \Phi_{\mu,\sigma_0} \left( \frac{1}{2} (v^{(i)} + v^{(i-1)}) \right)\,, \quad i = 2,\dots,2L\,, \\ {\boldsymbol{\pi}}_m^{\boldsymbol{\lambda}} &= \Phi_{\mu,\sigma_0}\left( \frac{1}{2} \left( v^{(m)} + \left[ v^{(m)} + \left(v^{(m)} - v^{(m-1)} \right) \right] \right) \right) - \Phi_{\mu,\sigma_0} \left( \frac{1}{2} (v^{(m)} + v^{(m-1)}) \right) \,, \end{align} \end{subequations} where $\Phi_{\mu,\sigma_0}(\cdot)$ denotes the cumulative distribution function of a normal random variable with mean $\mu$ and variance $\sigma_0^2$. Note that $\Phi_{\mu,\sigma_0}$ is evaluated at the midpoints between possible reserve amounts, and that we extrapolate linearly to obtain the equations for the edge states. \subsection{Calibration of the learning function} As time passes, the agent gathers more, and better quality, information about the volume of the commodity in the reserve.
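The calibration targets ${\boldsymbol{\pi}}^{\boldsymbol{\lambda}}$ above can be sketched as follows (our own illustration on a hypothetical toy grid, using the error function for the normal CDF):

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def target_invariant(v, mu, sigma0):
    """Normal probabilities of the bins whose edges are the midpoints between
    states, with the two edge bins extended linearly as in the text."""
    m = len(v)
    pi = [0.0] * m
    lo_edge = v[0] - 0.5 * (v[1] - v[0])       # linear extrapolation below v1
    hi_edge = v[-1] + 0.5 * (v[-1] - v[-2])    # ... and above vm
    pi[0] = normal_cdf(0.5 * (v[0] + v[1]), mu, sigma0) - normal_cdf(lo_edge, mu, sigma0)
    for i in range(1, m - 1):
        pi[i] = (normal_cdf(0.5 * (v[i] + v[i + 1]), mu, sigma0)
                 - normal_cdf(0.5 * (v[i] + v[i - 1]), mu, sigma0))
    pi[-1] = normal_cdf(hi_edge, mu, sigma0) - normal_cdf(0.5 * (v[-1] + v[-2]), mu, sigma0)
    return pi

v = [0.5 * k for k in range(-3, 4)]            # 7 symmetric states around 0
pi = target_invariant(v, mu=0.0, sigma0=1.0)
assert all(p > 0 for p in pi)
for i in range(len(v)):                        # targets inherit the symmetry
    assert abs(pi[i] - pi[len(v) - 1 - i]) < 1e-12
```

Because the bins do not cover the whole real line, these targets sum to slightly less than one; in a full calibration one might renormalize them or widen the edge bins.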
The variance of the estimated volume in the reserve is therefore expected to decrease from $\sigma_0^2$ to $\sigma_{T'}^2 < \sigma_0^2$ by some fixed time $T' < T$. Specifically\footnote{In principle, we could develop a model that calibrates to a sequence of times and variances; however, a one-step reduction is enough to illustrate the essential ideas.} \begin{equation} \sigma_{T'}^2={\mathbb V}\left[{\vartheta}\,|\,{\mathcal F}_{T'}\right]. \label{eqn:learned reserve volume distribution} \end{equation} This parameter and the starting reserve estimate variance $\sigma_0^2$ are the main determinants of the learning function. For parsimony, we assume that the agent's learning function is of the form \[ h_t = a\,e^{-b\,t} \quad \text{for some }a,b>0\,, \] where the parameter $a$ represents the initial transition rates, i.e. $h_0=a$, between the states of the Markov chain, and hence reflects the uncertainty in the initial estimates of reserves. The learning parameter $b$ represents the rate at which the agent learns: the larger (smaller) $b$ is, the quicker (slower) the learning process, because large (small) values of $b$ make the transition rates decay faster (slower) through time, so the reserve estimates stabilize quickly (slowly). Recall that the learning rate function plays a key role in the generator matrix of the Markov chain, see \eqref{eqn: generator matrix}, in that it captures how the agent learns how much volume of the commodity is in the reserve. The parameters in the learning rate function $h$ are calibrated to obey the constraints \eqref{eqn:initial reserve volume distribution} and \eqref{eqn:learned reserve volume distribution}.
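The integral $\int_t^{\infty}h_u\,du=\frac{a}{b}\,e^{-b\,t}$, which drives the limiting transition probabilities used in the variance constraints below, can be checked quickly (our own sketch with hypothetical parameter values):

```python
import math

a, b = 2.0, 0.8                    # hypothetical learning parameters

def h(t):
    return a * math.exp(-b * t)    # learning function h_t = a e^{-b t}

def remaining_learning(t):
    """Closed form of the remaining learning budget int_t^infty h_u du."""
    return (a / b) * math.exp(-b * t)

# crude left-Riemann check of the closed form on a long horizon
t0, dt, T = 0.5, 1e-4, 40.0
approx = sum(h(t0 + i * dt) for i in range(int((T - t0) / dt))) * dt
assert abs(approx - remaining_learning(t0)) < 1e-3
```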
Due to the symmetry in the base transition rates ${\boldsymbol{A}}$, and the symmetry in the reserve volume states $v^{(k)}$, we automatically satisfy the mean constraints \begin{equation} {\mathbb E}[{\vartheta} \;|\; V_0 = \mu] = \mu\,, \qquad \text{and} \qquad {\mathbb E}[{\vartheta}\;|\; V_{T'} = \mu] = \mu\,. \end{equation} To satisfy the variance constraints we require the transition probabilities of the Markov chain from an arbitrary state at time $t$ to its infinite-horizon state, which we denote $p_{t,ij} := {\mathbb P} \left( \displaystyle \lim_{s\to\infty} Z_s =j\,|\,Z_{t} = i \right)$, to be \begin{equation} p_{t,ij} = \left[\exp\left\{\left({\textstyle\int_{t}^{\infty}} h_u\,du\right) {\boldsymbol{A}}\right\} \right]_{ij} = \left[\exp\left\{\frac{a}{b}\,e^{-b\,t}{\boldsymbol{A}}\right\} \right]_{ij}\,. \end{equation} Note that the right-hand side of the equation above is a matrix exponential and, as before, the notation $[\,\cdot\,]_{ij}$ denotes the $ij$ element of the matrix in the square brackets. Now we must solve the coupled system of two non-linear equations for the parameters $a$ and $b$: \begin{subequations} \begin{align} \sigma_0^2 = {\mathbb V}\left[{\vartheta}\,|\, V_0 = \mu\right] & = \sum_{k=1}^{2L+1} p_{0,(L+1)k} \left( v^{(k)} - \mu \right) ^2\,, \label{eqn:initial variance matching}\\ \sigma_{T'}^2= {\mathbb V}\left[{\vartheta}\,|\, V_{T'} = \mu\right] & = \sum_{k=1}^{2L+1} p_{T',(L+1)k} \left( v^{(k)} - \mu \right) ^2 \,, \end{align} \end{subequations} where ${\mathbb V}[\cdot\,|\,\cdot]$ is the variance of the first argument conditioned on the event in the second argument. Once these are solved, all technical uncertainty model parameters are calibrated to the distributional properties of the initial reserve volume estimate and the reduction in variance as a result of learning.
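The variance-matching quantities can be sketched as follows (our own toy example: a five-state chain and a plain truncated-series matrix exponential; in practice $(a,b)$ would be found with a two-dimensional root finder):

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, n_terms=60):
    """Truncated Taylor-series exponential, adequate for small generators."""
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, n_terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

def limit_variance(t, a, b, A, v, mid):
    """Var[theta | Z_t = mid] using p_{t,mid k} = [exp((a/b) e^{-b t} A)]_{mid k}."""
    c = (a / b) * math.exp(-b * t)
    P = expm([[c * x for x in row] for row in A])
    return sum(P[mid][k] * (v[k] - v[mid]) ** 2 for k in range(len(v)))

lam = 0.4                                     # toy five-state symmetric chain
A = [[-lam, lam, 0, 0, 0],
     [lam, -2 * lam, lam, 0, 0],
     [0, lam, -2 * lam, lam, 0],
     [0, 0, lam, -2 * lam, lam],
     [0, 0, 0, lam, -lam]]
v = [-2.0, -1.0, 0.0, 1.0, 2.0]
var0 = limit_variance(0.0, a=1.0, b=0.5, A=A, v=v, mid=2)
varT = limit_variance(2.0, a=1.0, b=0.5, A=A, v=v, mid=2)
assert varT < var0                            # learning shrinks the residual variance
```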
The value of the irreversible option to invest in the reserve with learning can now be computed using the approach in Section \ref{sec:real option valuation}, a summary of which is presented in the algorithm shown in Figure \ref{fig:algo}. The value of exploration, which reduces the variance of the estimators of the reserve volume, can be assessed by comparing the option value with learning to the option value without learning. \section{Numerical Results}\label{sec:results} In this section we investigate the optimal exercise policies of the agent and assess the value of learning. Throughout we use the following parameters and modeling choices: \begin{itemize} \item \textbf{Reserve volume}. Initial reserve estimate: $\mu = 10^9$, initial reserve estimate variance: $\sigma_0^2 = 3 \times 10^8$. \item \textbf{Investment costs}. Fixed cost parameter: $c_0 = 10^8$, variable cost parameter: $c_1 = 3 \times 10^6$. This implies an investment cost of $1.12 \times 10^8$ when the reserve estimate is lowest and $1.48 \times 10^8$ when the reserve estimate is highest. \item \textbf{Expiry of option}. $T=5$ years, which consists of 255 weeks. \item \textbf{Other model parameters:} \begin{itemize} \item Underlying resource model parameters: $\kappa = 0.5$, $\theta = \log(100)$, $\sigma_X = 0.5$. \item Discount rate: $\rho = 0.05$. \item Extraction rate parameters: $\alpha = 1$, $\beta = 0.05$, $\gamma = 0.9$, $\epsilon = 2$. \item Markov chain parameters: 31 states. \end{itemize} \end{itemize} Furthermore, we consider the cases of slow learning and fast learning, which we model by changing the reduction in reserve estimate variance ($\sigma_{T'}^2 = 2.5 \times 10^8$ versus $\sigma_{T'}^2 = 10^8$) with $T' = 2$ years. We also consider the cases with and without running costs ($c=20$ and $c=0$, respectively). Finally, we compare these to the case where there is no learning by setting $a = 1$ and $b = 0$.
\begin{figure} \caption{Model-implied distribution of the true reserve amount at $t=0$ and $t=2$ (variance reduction timeframe) conditional on being at the initial estimate $\mu = v^{(L+1)}$.} \label{fig:calibration} \end{figure} Figure \ref{fig:calibration} shows the effect of learning on the distribution of the true reserve amount. We find that in both the slow and fast learning cases, the distribution (conditional on being at the initial reserve estimate) is more peaked around the mid-state. This reflects the fact that the investor has learned more about the reserve amount and, given that they are in the mid-state at $t=2$ (the timeframe given for calibrated variance reduction), they are now more inclined to believe that $\mu$ is the true reserve amount. This effect is slight in the slow learning case and more pronounced with fast learning, as the investor is more confident having learned more about the reserve in the same amount of time. Note that the model-implied probabilities match the normal approximation only at $t=0$, since this was the only constraint imposed in the calibration procedure. Figure \ref{fig:exBoundary_withCosts} shows the optimal exercise boundary for an agent who learns at different rates and for different volume estimates (i.e. different states of the Markov chain). For simplicity, we only display a selection of states near the mid-state, as these represent the most relevant volume estimates for the investor. The $y$-axis of the figure shows the spot price of the commodity at which the investor would exercise the option, and the $x$-axis is the time elapsed measured in years. The figure shows that as the agent's volume estimate increases, the exercise boundary shifts down, because a larger reserve requires a lower commodity spot price to justify the investment. \begin{figure} \caption{Exercise boundary for different reserve estimate states when the agent's learning rate is low (left panel) vs. high (right panel) when they incur running costs.
Brighter lines correspond to higher reserve estimates, the dashed red line corresponds to the middle state (the initial reserve estimate), and the blue line is the exercise boundary when there is no learning.} \label{fig:exBoundary_withCosts} \end{figure} We observe that the inclusion of learning, whether fast or slow, has a profound effect on the shape of the exercise boundary. In the no-learning case, this boundary is non-increasing in time, whereas slow and fast learning lead to exercise boundaries with non-trivial shapes that increase and decrease at different rates, with behavior varying with the investor's prevailing reserve estimate. The explanation for the general shape these curves take is as follows: when the investor is in a low state (i.e. has a low reserve estimate), the boundary increases with time, which gives the learning process a chance to drive the estimate, and hence the project value, upward. Eventually, the investor believes that the learning process has provided enough information to be confident that whatever state they currently occupy is in fact the true reserve amount. At this point the exercise boundary reverts to the typical decreasing pattern. The time at which this ``inflection point'' occurs depends on the rate of learning: when learning is fast, the turnaround comes sooner. \begin{figure} \caption{Exercise boundary for different reserve estimate states when the agent's learning rate is low (left panel) vs. high (right panel) when they do not incur running costs. Brighter lines correspond to higher reserve estimates, the dashed red line corresponds to the middle state (the initial reserve estimate), and the blue line is the exercise boundary when there is no learning.} \label{fig:exBoundary_noCosts} \end{figure} Figure \ref{fig:exBoundary_noCosts} shows the exercise boundaries, for different volume states, when the agent does not incur running costs.
Observe that the exercise boundaries in both the fast and slow learning cases have the same general shape as the boundaries in Figure \ref{fig:exBoundary_withCosts}. The effect of adding running costs is an upward parallel shift of the exercise boundaries, as the commodity spot price must be higher to justify the investment after accounting for running costs. \begin{figure} \caption{Option value as a function of spot price through time, assuming the reserve estimate is equal to the mid-state (initial estimate), $V_t = \mu$, with slow learning (left panel) vs. fast learning (right panel). Dashed lines correspond to the option value at maturity when the investor does not learn.} \label{fig:optionValueThruTime} \end{figure} Figure \ref{fig:optionValueThruTime} compares the value of the option at different points in time for different learning rates, assuming that the investor's estimate is equal to the initial estimate. We compare this to the case of no learning, which is very insensitive to time-to-maturity relative to the learning case.\footnote{Since the value of the option changes only very slightly with time-to-maturity in the no-learning case, we plot the no-learning option value curve only at $t=0$ in Figure \ref{fig:optionValueThruTime}.} We find that the value of the option to wait-and-learn is high at the beginning and gradually decreases as expiry of the option approaches, because the agent has little time left to learn. As expected, this decline in value is accelerated in the fast learning case, as the investor becomes more confident about the reserve amount at an earlier point in time. Note that the effect of the passage of time on the option value is different in other states. \section{Conclusions} In this paper we show how to incorporate technical uncertainty into the decision to invest in a commodity reserve.
This uncertainty stems from not knowing the volume of the commodity stored in the reserve, and it compounds the uncertainty in the value of the reserve arising from unknown future spot prices. The agent has the option to wait-and-see before making the irreversible investment to exploit the commodity reserve. In our model, as time goes by, the agent learns about the volume of the commodity stored in the reserve, so the option to delay investment is valuable because it allows the agent to learn and to wait for the optimal market conditions (i.e. the spot price of the commodity) before sinking the investment. We adopt a continuous-time Markov chain to model the reserve volume and the technical uncertainty; as time goes by, the agent's estimates of the volume in the reserve become more accurate. We show how to calculate the value of the option to delay investment and discuss the agent's optimal investment threshold. We show how the exercise boundary depends on the agent's estimate of the volume (which depends on the Markov chain state) and on the rate at which she refines her estimates of the reserve. For example, we show that when the option to invest is far from expiry and the volume estimate is low, the value attached to waiting and gathering more information is higher for an agent who can quickly learn about the volume of the reserve than for an agent who learns at a very low rate. \end{document}
\begin{document} \author{MITSUHIRO SHISHIKURA} \address{Department of Mathematics, Kyoto University, Kyoto 606-8502, Japan} \email{[email protected]} \author{FEI YANG} \address{Department of Mathematics, Nanjing University, Nanjing 210093, P. R. China} \email{[email protected]} \title[High type quadratic Siegel disks]{The high type quadratic Siegel disks are Jordan domains} \begin{abstract} Let $\alpha$ be an irrational number of sufficiently high type and suppose $P_\alpha(z)=e^{2\pi\textup{i}\alpha}z+z^2$ has a Siegel disk $\Delta_\alpha$ centered at the origin. We prove that the boundary of $\Delta_\alpha$ is a Jordan curve, and that it contains the critical point $-e^{2\pi\textup{i}\alpha}/2$ if and only if $\alpha$ is a Herman number. \end{abstract} \subjclass[2020]{Primary 37F10; Secondary 37F50, 37F25} \keywords{Siegel disks; Jordan domains; Herman condition; high type; near-parabolic renormalization} \date{\today} \maketitle {\setcounter{tocdepth}{1} \tableofcontents } \section{Introduction}\label{introduction} Let $f$ be a non-linear holomorphic function with $f(0)=0$ and $f'(0)=e^{2\pi\textup{i}\alpha}$, where $0<\alpha<1$ is an irrational number. We say that $f$ is \textit{locally linearizable} at the fixed point 0 if there exists a holomorphic function defined near $0$ which conjugates $f$ to the \textit{rigid rotation} $R_\alpha(z)=e^{2\pi\textup{i}\alpha}z$. The maximal region in which $f$ is conjugate to $R_\alpha$ is a simply connected domain called the \textit{Siegel disk} of $f$ centered at 0. The existence of the Siegel disk of $f$ depends on the arithmetic properties of $\alpha\in(0,1)\setminus\mathbb{Q}$. Let \begin{equation} [0;a_1,a_2,\cdots]:= \dfrac{1}{a_1 +\dfrac{1}{a_2 +\dfrac{1}{\ddots}}} \end{equation} be the \textit{continued fraction expansion} of $\alpha$.
The rational numbers $p_n/q_n:=[0;a_1,\cdots,a_n]$, $n\geq 1$, are the convergents of $\alpha$, where $p_n$ and $q_n$ are coprime positive integers. If $\alpha$ belongs to the \emph{Brjuno class} \begin{equation} \mathcal{B}:=\{\alpha=[0;a_1,a_2,\cdots]\in (0,1)\setminus\mathbb{Q}~|~\sum\nolimits_{n=1}^\infty q_n^{-1}\log q_{n+1}<+\infty\}, \end{equation} then any holomorphic germ $f$ with $f(0)=0$ and $f'(0)=e^{2\pi\textup{i}\alpha}$ is locally linearizable at $0$ and hence $f$ has a Siegel disk centered at the origin \cite{Sie42, Brj71}. Yoccoz proved that the Brjuno condition is also necessary for the local linearization of the quadratic polynomial \begin{equation} P_\alpha(z):=e^{2\pi\textup{i}\alpha}z+z^2:\mathbb{C}\to\mathbb{C} \end{equation} at the origin \cite{Yoc95}. \subsection{Topology and obstructions of Siegel disk boundaries} The dynamics inside a Siegel disk is simple, so one is mainly concerned with the properties of its boundary. In the 1980s, Douady and Sullivan asked the following question \cite{Dou83}: \begin{ques} Is the boundary of a Siegel disk a Jordan curve? \end{ques} To date, this question has not been completely solved, even for quadratic polynomials\footnote{Dudko and Lyubich have announced important progress on quadratic Siegel polynomials.}. However, much progress has been made on this problem for various families of functions under preconditions. An irrational number $\alpha=[0;a_1,a_2,\cdots]$ is of \emph{bounded type} if $\sup_{n\geq 1}\{a_n\}<+\infty$. Douady-Herman, Zakeri, Yampolsky-Zakeri, Shishikura and Zhang, respectively, proved that the boundaries of bounded type Siegel disks of quadratic polynomials, cubic polynomials, some quadratic rational maps, all polynomials and all rational maps of degree at least two are quasi-circles (hence are Jordan curves) (see \cite{Dou87}, \cite{Her87}, \cite{Zak99}, \cite{YZ01}, \cite{Shi01}, \cite{Zha11}).
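For concreteness (our own sketch, not part of the paper), the convergents $p_n/q_n$ can be generated by the standard recurrence $p_n=a_n\,p_{n-1}+p_{n-2}$, $q_n=a_n\,q_{n-1}+q_{n-2}$, and they satisfy the classical bound $|\alpha-p_n/q_n|<1/q_n^2$:

```python
import math
from fractions import Fraction

def convergents(alpha, n):
    """First n continued-fraction convergents p_k/q_k of alpha in (0,1),
    via p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}."""
    p_prev, q_prev, p, q = 1, 0, 0, 1      # p_{-1}/q_{-1} and p_0/q_0
    x, out = alpha, []
    for _ in range(n):
        a = int(1 / x)                     # next digit a_k
        x = 1 / x - a
        p_prev, q_prev, p, q = p, q, a * p + p_prev, a * q + q_prev
        out.append(Fraction(p, q))
    return out

golden = (math.sqrt(5) - 1) / 2            # [0; 1, 1, 1, ...]
for pq in convergents(golden, 10):
    assert abs(golden - pq) < 1.0 / pq.denominator**2
```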
Quasi-circle boundaries have also been obtained for some transcendental entire functions (see \cite{Gey01}, \cite{Zha05}, \cite{Che06}, \cite{KZ09}, \cite{Zak10}, \cite{Yan13}, \cite{CE18}, \cite{ZFS20}). An important breakthrough was made by Petersen and Zakeri in 2004. They proved that for almost all irrational numbers $\alpha$, the boundary of the Siegel disk of the quadratic polynomial $P_\alpha$ is a Jordan curve \cite{PZ04}. We say that these irrational numbers are of \emph{PZ type}, i.e., $\log a_n=\mathcal{O}(\sqrt{n})$ as $n\to\infty$, where $a_n$ is the $n$-th digit of the continued fraction expansion of $\alpha$. Recently, Zhang generalized this result to all polynomials \cite{Zha14} and obtained the same result for the sine family \cite{Zha16}. Suppose the closure of the Siegel disk of $f$ is compactly contained in the domain of definition of $f$. One may wonder what phenomenon near the boundary of a Siegel disk prevents $f$ from having a larger linearization domain. Obviously, the presence of periodic cycles near the boundary is one such obstruction, since a Siegel disk cannot contain periodic points other than its center. It was proved by Avila and Cheraghi that, under some condition on $\alpha$, every neighborhood of the Siegel disk of $P_\alpha$ contains infinitely many cycles \cite{AC18}, which is similar to the small cycle property that prevents linearization (see \cite{Yoc88} and \cite{Per92}). On the other hand, note that a Siegel disk cannot contain a critical point. Hence the second question on the Siegel disk boundary is: \begin{ques} Does the boundary of a Siegel disk always contain a critical point? \end{ques} The answer is no. Ghys and Herman gave the first examples of polynomials having a Siegel disk whose boundary does not contain a critical point (see \cite{Ghy84}, \cite{Her86} and \cite{Dou87}).
On the other hand, the results on the regularity\footnote{The word ``regularity'' here means the topological and geometric properties of the boundaries of the Siegel disks. See \cite{BC07}.} of the boundaries of the Siegel disks mentioned above (for the bounded type or PZ type rotation numbers) also include the statement that the boundaries of those Siegel disks pass through at least one critical point. In particular, for bounded type rotation numbers, Graczyk and \'{S}wi\c{a}tek proved a very general result: if an analytic function has a Siegel disk properly contained in the domain of holomorphy and the rotation number is of bounded type, then the boundary of the corresponding Siegel disk contains a critical point \cite{GS03}. Herman was one of the pioneers in the study of analytic circle diffeomorphisms \cite{Her79}. He introduced the following subset of irrational numbers. \begin{defi}[{Herman numbers}] Let $\mathcal{H}$ be the set of irrational numbers $\alpha$ such that every orientation-preserving analytic circle diffeomorphism of rotation number $\alpha$ is analytically conjugate to the rigid rotation. \end{defi} Herman proved that the set $\mathcal{H}$ is non-empty and contains a subset of Diophantine numbers \cite{Her79}. Yoccoz proved that $\mathcal{H}$ contains all Diophantine numbers (and hence all bounded type and PZ type numbers), and also gave an arithmetic characterization of the numbers in $\mathcal{H}$ \cite{Yoc02}. Suppose $f$ is an analytic function which has a Siegel disk properly contained in the domain of holomorphy. Ghys proved that if the rotation number belongs to $\mathcal{H}$ and the boundary of the Siegel disk is a Jordan curve, then $f$ has a critical point in the boundary of the Siegel disk \cite{Ghy84}.
Later, Herman generalized this result by dropping the topological condition on the Siegel disk boundary but requiring that the restriction of $f$ to the Siegel disk boundary is injective \cite{Her85} (see also \cite{Per97}). In particular, he proved that if a unicritical polynomial has a Siegel disk whose rotation number is contained in $\mathcal{H}$, then the boundary of the Siegel disk contains a critical point. Recently, Ch\'{e}ritat and Roesch, and Benini and Fagella, respectively, generalized this result to polynomials with two critical values \cite{CR16} and to a special class of transcendental entire functions with two singular values \cite{BF18}. For polynomials, Rogers proved that if the Siegel disk $\Delta$ is fixed and the rotation number is in $\mathcal{H}$, then either $\partial\Delta$ contains a critical point or $\partial\Delta$ is an indecomposable continuum \cite{Rog98}. For the exponential map $E_\theta(z)=e^{2\pi\textup{i}\theta}(e^z-1)$, it was proved by Herman that, if $E_\theta$ has a bounded Siegel disk $\Delta_\theta$, then $E_\theta$ is injective on $\partial\Delta_\theta$. Hence it follows from Herman's result that $\Delta_\theta$ is unbounded when $\theta\in\mathcal{H}$, since $E_\theta$ has no critical points \cite{Her85}. Conversely, Herman, Baker and Rippon asked the question: if $\Delta_\theta$ is unbounded, is the singular value $-e^{2\pi\textup{i}\theta}$ necessarily contained in $\partial\Delta_\theta$? Rippon showed that this is true for almost all $\theta$ \cite{Rip94}, and the question was fully answered positively by Rempe \cite{Rem04} and independently by Buff and Fagella (unpublished). Moreover, Rempe also studied the Herman type Siegel disks of some other transcendental entire functions \cite{Rem08}.
\subsection{The statement of the main result} The proofs of the regularity results for the bounded type and PZ type Siegel disks stated previously are all based on surgery: either quasiconformal or trans-quasiconformal. In these proofs, certain pre-models (usually a single Blaschke product or a family of them) are needed. By surgery, the regularity and the existence of critical points on the boundaries of Siegel disks were proved at the same time. In this paper, without using surgery we shall prove that the Siegel disks of some holomorphic maps are Jordan domains and that a Herman type rotation number is also necessary for the existence of critical points on the Siegel disk boundaries. To this end, we must restrict the rotation numbers to a special class, since we use the near-parabolic renormalization scheme. In \cite{IS08}, a renormalization operator $\mathcal{R}$ and a compact class $\mathcal{F}$ that is invariant under $\mathcal{R}$ were introduced. All the maps in $\mathcal{F}$ have a special covering structure. They have a neutral fixed point at the origin and a unique simple critical point in their domains of definition. The renormalization operator assigns a new map in $\mathcal{F}$ to a given map of $\mathcal{F}$ that is obtained by considering the return map to a sector landing at the origin. As a return map, one iterate of $\mathcal{R} f$ corresponds to many iterates of $f\in\mathcal{F}$. To study very large iterates of $f$ near $0$, one hopes to repeat this process infinitely many times. However, to iterate $\mathcal{R}$ infinitely many times, the scheme requires the rotation number $\alpha$, where $f'(0)=e^{2\pi\textup{i}\alpha}$, to be of \emph{high type}, that is, $\alpha$ belongs to \begin{equation} \textup{HT}_{N}:=\{\alpha=[0;a_1,a_2,\cdots]\in (0,1)\setminus\mathbb{Q}~|~a_n\geq N \text{ for all } n\geq 1\} \end{equation} for some big integer\footnote{The precise value of $N$ is not known.
But the value of $N$ is likely to be not less than $20$. It is conjectured that a variation of the invariant class and renormalization may be defined for $N=1$.} $N\in\mathbb{N}$. In this paper we prove the following main result. \begin{mainthm}\label{thm-main-1} Let $\alpha$ be an irrational number of sufficiently high type and suppose $P_\alpha(z)=e^{2\pi\textup{i}\alpha}z+z^2$ has a Siegel disk $\Delta_\alpha$ centered at the origin. Then the boundary of $\Delta_\alpha$ is a Jordan curve. Moreover, it contains the critical point $-e^{2\pi\textup{i}\alpha}/2$ if and only if $\alpha$ is a Herman number. \end{mainthm} Note that $\textup{HT}_{N}$ has measure zero if $N\geq 2$. However, all the usual types of irrational numbers have non-empty intersections with $\textup{HT}_{N}$: bounded type, PZ type, Herman type and Brjuno type, etc. In particular, $\textup{HT}_{N}$ contains some irrational numbers such that the Siegel disk boundary of $P_\alpha$ has the regularity studied in \cite{ABC04}, \cite{BC07} and the self-similarity studied in \cite{McM98b}. Rogers proved that the boundary of any bounded irreducible Siegel disk $\Delta$ is either tame: the conformal map from $\Delta$ to the unit disk has a continuous extension to $\partial\Delta$, or wild: $\partial\Delta$ is an indecomposable continuum \cite{Rog92}. Recently, Ch\'{e}ritat constructed a holomorphic germ such that the corresponding Siegel disk is compactly contained in the domain of definition but the boundary is not locally connected \cite{Che11}. Our main theorem indicates that the boundaries of quadratic Siegel disks should be tame. As we have seen, in order to guarantee the existence of critical points on the boundaries of Siegel disks, the Herman condition (i.e., the rotation number is of Herman type) usually appears as a sufficient condition in the literature.
As far as we know, the necessity appears only in \cite{BCR09}, where it is proved that the Herman condition is equivalent to the existence of a critical point on the boundary of the Siegel disks of a family of toy models. In fact, besides the quadratic polynomials, the proof of the Main Theorem in this paper is also valid for all the maps in Inou-Shishikura's invariant class. Hence the Main Theorem is also true for some rational maps and transcendental entire functions. We would like to point out that it was proved in \cite{Yam08} and \cite{AL15} that the bounded type Siegel disks of the maps in Inou-Shishikura's class are quasi-disks if the rotation number is of sufficiently high type. Recently, by constructing topological models of the post-critical sets of the maps in Inou-Shishikura's class for all high type numbers, Cheraghi gave an alternative proof of the Main Theorem independently (see \cite{Che20C}). Our proofs are different: we analyze the dynamics and carry out the computations in the renormalization tower directly. \subsection{Strategy of the proof} Let $f_0$ be the normalized quadratic polynomial or a map in Inou-Shishikura's class (see \S\ref{subsec-IS-class}) satisfying $f_0(0)=0$ and $f_0'(0)=e^{2\pi\textup{i}\alpha}$, where $\alpha$ is of Brjuno type and of sufficiently high type. For $n\geq 0$, let $f_{n+1}=\mathcal{R} f_n$ be the sequence of maps generated by the near-parabolic renormalization operator $\mathcal{R}$. For each $n\geq 0$, we use $\mathcal{P}_n$ to denote the perturbed petal of $f_n$ and $\Phi_n$ the corresponding perturbed Fatou coordinate (see definitions in \S\ref{subsec-near-para}). In order to prove that the boundary of the Siegel disk of $f_0$ is a Jordan curve, we construct a sequence of continuous curves $(\gamma_0^n:[0,1]\to\mathbb{C})_{n\in\mathbb{N}}$ in the perturbed Fatou coordinate plane of $f_0$ by using a renormalization tower.
The images $\Phi_0^{-1}(\gamma_0^n)$ with $n\in\mathbb{N}$ are continuous closed curves in the Siegel disk of $f_0$. By using the uniform contraction with respect to the hyperbolic metrics in subdomains of the renormalization tower (see Lemma \ref{lema:exp-conv}) and the estimates obtained in \S\ref{subsec-esti-2} (see also Lemma \ref{lema-go-up}), we prove that the sequence of continuous curves $(\gamma_0^n:[0,1]\to\mathbb{C})_{n\in\mathbb{N}}$ converges uniformly to a limit $\gamma^\infty:[0,1]\to\mathbb{C}$ (see Proposition \ref{prop-Cauchy-sequence}). However, the convergence may not be exponentially fast. The next step is to prove that $\Phi_0^{-1}(\gamma^\infty)$ is exactly the boundary of the Siegel disk of $f_0$. We first estimate the distance between the points in a continuous curve $\gamma_n^1$ and the ``highest'' point of the preimage of the boundary of the Siegel disk of $f_{n+1}$ under the modified exponential map $\textup{Exp}$ (see the definition in \S\ref{subsec-near-para}). Then, after going up the renormalization tower, we prove that the distance between the curve $\gamma_0^n$ and the closest point in the boundary of the Siegel disk of $f_0$ tends to zero exponentially fast. This implies that the limit curve $\Phi_0^{-1}(\gamma^\infty)$ is contained in the boundary of the Siegel disk of $f_0$. Then a routine argument shows that $\Phi_0^{-1}(\gamma^\infty)$ is exactly the boundary of the Siegel disk of $f_0$ and must be a Jordan curve. For the second part of the Main Theorem, which concerns the Herman condition, we construct a canonical Jordan arc $\Gamma_0\cup\{0\}$ in the domain of definition of $f_0$ such that $\Gamma_0\cup\{0\}$ connects the critical value $-4/27$ with the origin and that $\gamma_0:=\Phi_0(\Gamma_0)$ is contained in a half-infinite strip $\mho$ with finite width.
The construction of $\Gamma_0$ guarantees that $\Gamma_n\cup\{0\}=\textup{Exp}\circ\Phi_{n-1}(\Gamma_{n-1})\cup\{0\}$ is also a Jordan arc connecting $-4/27$ with the origin and that $\gamma_n=\Phi_n(\Gamma_n)$ is contained in $\mho$ for all $n\geq 1$. We study the homeomorphism $s_{\alpha_n}:=\Phi_n\circ \textup{Exp}:\gamma_{n-1}\to\gamma_n$ from the simple curve in one level of the renormalization to that in the next by means of the estimates obtained in Lemma \ref{lema-key-esti-inverse}. Based on the sequence $(s_{\alpha_n})_{n\in\mathbb{N}}$, we define a new class of irrational numbers $\widetilde{\mathcal{H}}_N$, which is a subset of the Brjuno numbers, where $N$ is a large number. After comparing the properties of $s_{\alpha_n}$ with Yoccoz's arithmetic characterization of $\mathcal{H}$, we prove that $\widetilde{\mathcal{H}}_N$ is exactly the set of high type Herman numbers (see \S\ref{subsec-equi-irrat}). On the other hand, we prove that the boundary of the Siegel disk of $f_0$ contains the critical value $-4/27$ if and only if $\alpha\in\widetilde{\mathcal{H}}_N$ (see \S\ref{subsec-new-class}). This implies that the second part of the Main Theorem holds. \subsection{Some observations} There are several applications of Inou-Shishikura's invariant class. The first remarkable application is that Buff and Ch\'{e}ritat used it as one of the main tools to prove the existence of Julia sets of quadratic polynomials with positive area \cite{BC12}. Recently, Cheraghi and his coauthors have found several other important applications. In \cite{Che13} and \cite{Che19}, Cheraghi developed several elaborate analytic techniques based on Inou-Shishikura's results. The tools in \cite{Che13} and \cite{Che19} have led to part of the recent major progress on the dynamics of quadratic polynomials.
For example, the Feigenbaum Julia sets with positive area (which are different from the examples in \cite{BC12}) were found in \cite{AL15}, the Marmi-Moussa-Yoccoz conjecture for rotation numbers of high type was proved in \cite{CC15}, the local connectivity of the Mandelbrot set at some infinitely satellite renormalizable points was proved in \cite{CS15}, some statistical properties of quadratic polynomials were described in \cite{AC18}, and the topological structure and the Hausdorff dimension of the high type irrationally indifferent attractors were characterized in \cite{Che20C} and \cite{CDY20}, respectively. Recently, Ch\'{e}ritat generalized the near-parabolic renormalization theory to unicritical families of arbitrary finite degree \cite{Che21}. See also \cite{Yan20p} for the corresponding theory of local degree three. Hence there is hope of generalizing the Main Theorem in this paper to all unicritical polynomials.

\noindent\textbf{Acknowledgements.} We would like to thank Xavier Buff and Arnaud Ch\'{e}ritat for helpful discussions and for offering the manuscript \cite{BCR09}. We are also very grateful to Davoud Cheraghi for pointing out a gap in an earlier version of the paper and providing many invaluable comments and suggestions. The second author is indebted to the Institut de Math\'{e}matiques de Toulouse for its hospitality during his visit in 2014/2015, when part of this paper was written. He would also like to thank Davoud Cheraghi and Arnaud Ch\'{e}ritat for their persistent encouragement. This work was supported by NSFC (grant No.\,12071210), NSF of Jiangsu Province (grant No.\,BK20191246) and the CSC program (2014/2015).

\noindent \textbf{Notation.} We use $\mathbb{N}$, $\mathbb{N}^+$, $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$ to denote the set of all natural numbers, positive integers, integers, rational numbers, real numbers and complex numbers, respectively.
The Riemann sphere and the unit disk are denoted by $\widehat{\mathbb{C}}=\mathbb{C}\cup\{\infty\}$ and $\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}$ respectively. A round disk in $\mathbb{C}$ is denoted by $\mathbb{D}(a,r)=\{z\in\mathbb{C}:|z-a|<r\}$ and $\overline{\mathbb{D}}(a,r)$ is its closure. For a non-negative number $x\in\mathbb{R}$, we use $\lfloor x\rfloor$ to denote the integer part of $x$. For a set $X\subset \mathbb{C}$ and a number $\delta>0$, let $B_\delta(X):=\bigcup_{z\in X}\mathbb{D}(z,\delta)$ be the $\delta$-neighborhood of $X$. For a number $a\in\mathbb{C}$ and a set $X\subset \mathbb{C}$, we denote $aX:=\{az:z\in X\}$ and $X\pm a:=\{z\pm a:z\in X\}$. Let $A,B$ be two subsets of $\mathbb{C}$ and suppose $B$ is open. We say that $A$ is \textit{compactly contained} in $B$ if the closure of $A$ is compact and contained in $B$, and we denote this by $A\Subset B$. We use $\textup{diam}(X)$ to denote the Euclidean diameter of a set $X\subset\mathbb{C}$ and $\textup{len}(\gamma)$ the Euclidean length of a rectifiable curve $\gamma\subset\mathbb{C}$. \section{Near-parabolic renormalization scheme} In this section, we summarize some results from \cite{IS08}, \cite{BC12}, \cite{AC18} and \cite{Che19} which will be used in this paper. Parts of the theory can also be found in \cite{Shi98} and \cite{Shi00a}. \subsection{Inou-Shishikura's class}\label{subsec-IS-class} Let $P(z):=z(1+z)^2$ be the cubic polynomial with a parabolic fixed point at $0$ with multiplier $1$. Then $P$ has a critical point $\textup{cp}_{P}:=-1/3$ which is mapped to the critical value $\textup{cv}_P:=-{4}/{27}$. It also has another critical point, $-1$, which is mapped to $0$. Consider the ellipse
\begin{equation}\label{ellipse}
E:=\left\{x+y\textup{i}\in\mathbb{C}:\left(\frac{x+0.18}{1.24}\right)^2+\left(\frac{y}{1.04}\right)^2\leq 1\right\}
\end{equation}
and define\footnote{\,The domain $U$ is denoted by $V$ in \cite{IS08}.
}
\begin{equation}\label{U-and-psi-1}
U:=\psi_1(\widehat{\mathbb{C}}\setminus E), \text{~where~} \psi_1(z):=-\frac{4z}{(1+z)^2}.
\end{equation}
The domain $U$ is symmetric about the real axis, contains the parabolic fixed point $0$ and the critical point $\textup{cp}_P$, but $\overline{U}\cap(-\infty,-1]=\emptyset$ (see \cite[\S 5.A]{IS08} and Figure \ref{Fig_U-zoom}).
\begin{figure}
\caption{The domains $U$ (the gray part) and $U'$ (the white region bounded by the blue curves; see \eqref{equ-U-pri}).}
\label{Fig_U-zoom}
\end{figure}
For a given function $f$, we denote its domain of definition by $U_f$. Following \cite[\S 4]{IS08}, we define a class of maps\footnote{\,The definition of $\mathcal{IS}_0$ is based on the class $\mathcal{F}_1$ in \cite{IS08}. There the conformal map $\varphi$ in the definition of $\mathcal{IS}_0$ is required to have a quasiconformal extension to $\mathbb{C}$. This condition is used by Inou and Shishikura to prove the uniform contraction of the near-parabolic renormalization operator under the Teichm\"{u}ller metric. We modify the definition here since we will not use this property in this paper.}
\begin{equation}
\mathcal{IS}_0:= \left\{f=P\circ\varphi^{-1}:U_f\to\mathbb{C} \left| \begin{array}{l} U_f\ni 0 \text{ is open in }\mathbb{C}, ~\varphi:U\to U_f \text{ is} \\ \text{conformal},~\varphi(0)=0 \text{ and } \varphi'(0)=1 \end{array} \right. \right\}.
\end{equation}
Each map in this class has a parabolic fixed point at $0$, a unique critical point at $\textup{cp}_f:=\varphi(-1/3)\in U_f$ and a unique critical value at $\textup{cv}:=-4/27$ which is independent of $f$. For $\alpha\in\mathbb{R}$, we define
\begin{equation}
\mathcal{IS}_\alpha:=\{f(z)=f_0(e^{2\pi\textup{i}\alpha}z):e^{-2\pi\textup{i}\alpha}\,U_{f_0}\to\mathbb{C} ~|~f_0\in\mathcal{IS}_0\}.
\end{equation}
For convenience, we normalize the quadratic polynomials to
\begin{equation}
Q_\alpha(z)= e^{2\pi\textup{i}\alpha}z+\frac{27}{16}e^{4\pi\textup{i}\alpha}z^2
\end{equation}
such that all $Q_\alpha$ have the same critical value $-4/27$ as the maps in $\mathcal{IS}_\alpha$. In particular, $Q_\alpha=Q_0\circ R_\alpha$, where $R_\alpha(z)=e^{2\pi\textup{i}\alpha}z$. We would like to mention that the quadratic polynomial $Q_\alpha$ is not in the class $\mathcal{IS}_\alpha$.
\begin{thm}[{Leau-Fatou \cite[\S 10]{Mil06} and Inou-Shishikura \cite{IS08}}]\label{thm-IS-attr-rep-1}
For all $f\in\mathcal{IS}_0\cup\{Q_0\}$, there exist two simply connected domains $\mathcal{P}_{attr,f}$, $\mathcal{P}_{rep,f}\subset U_f$ and two univalent maps $\Phi_{attr,f}:\mathcal{P}_{attr,f}\to\mathbb{C}$, $\Phi_{rep,f}:\mathcal{P}_{rep,f}\to\mathbb{C}$ such that
\begin{enumerate}
\item $\mathcal{P}_{attr,f}$ and $\mathcal{P}_{rep,f}$ are bounded by piecewise analytic curves and are compactly contained in $U_f$, $\textup{cp}_f\in\partial \mathcal{P}_{attr,f}$ and $\partial \mathcal{P}_{attr,f}\cap\partial \mathcal{P}_{rep,f}=\{0\}$;
\item The image $\Phi_{attr,f}(\mathcal{P}_{attr,f})$ is a right half plane and $\Phi_{rep,f}(\mathcal{P}_{rep,f})$ is a left half plane; and
\item $\Phi_{attr,f}(f(z))=\Phi_{attr,f}(z)+1$ for all $z\in\mathcal{P}_{attr,f}$ and $\Phi_{rep,f}^{-1}(\zeta)=f(\Phi_{rep,f}^{-1}(\zeta-1))$ for all $\zeta\in\Phi_{rep,f}(\mathcal{P}_{rep,f})$.
\end{enumerate}
\end{thm}
\noindent\textbf{Normalization of $\Phi_{attr,f}$ and $\Phi_{rep,f}$.} The univalent map $\Phi_{attr,f}$ in Theorem \ref{thm-IS-attr-rep-1} is called an \emph{attracting Fatou coordinate} of $f$ and $\mathcal{P}_{attr,f}$ is called an \emph{attracting petal}. The attracting Fatou coordinate $\Phi_{attr,f}$ can be naturally extended to the immediate attracting basin $\mathcal{A}_{attr,f}$ of $0$.
Specifically, for $z\in \mathcal{A}_{attr,f}$ such that $f^{\circ k}(z)\in\mathcal{P}_{attr,f}$ with $k\geq 0$, define
\begin{equation}
\Phi_{attr,f}(z):=\Phi_{attr,f}(f^{\circ k}(z))-k.
\end{equation}
Since $\Phi_{attr,f}$ is unique up to an additive constant, we \emph{normalize} it by $\Phi_{attr,f}(\textup{cp}_f)=0$. Therefore, we have $\Phi_{attr,f}(\mathcal{P}_{attr,f})=\{\zeta\in\mathbb{C}:\textup{Re}\,\zeta>0\}$. The univalent map $\Phi_{rep,f}$ in Theorem \ref{thm-IS-attr-rep-1} is called a \emph{repelling Fatou coordinate} of $f$ and $\mathcal{P}_{rep,f}$ is called a \emph{repelling petal}. Since $\Phi_{rep,f}$ is also unique up to an additive constant, we \emph{normalize} it by
\begin{equation}
\Phi_{attr,f}(z)-\Phi_{rep,f}(z)\to 0 \text{\quad as~~} z\to 0,
\end{equation}
where $z$ is contained in a component of $\mathcal{A}_{attr,f}\cap\mathcal{P}_{rep,f}$ such that $\textup{Im}\,\Phi_{rep,f}(z)\to+\infty$ as $z\to 0$. \subsection{Near-parabolic renormalization}\label{subsec-near-para} We need to consider sequences of functions converging to a limit function, so we first define neighborhoods of a function.
\begin{defi}[{Neighborhoods of a function}]
Let $f:U_f\to\mathbb{C}$ be a given function. A \emph{neighborhood} of $f$ is
\begin{equation}
\mathcal{N}=\mathcal{N}(f;K,\varepsilon)=\left\{g:U_g\to\widehat{\mathbb{C}}\,\left|\, K\subset U_g \text{~and~} \sup_{z\in K}d_{\widehat{\mathbb{C}}}(g(z),f(z))<\varepsilon\right.\right\},
\end{equation}
where $d_{\widehat{\mathbb{C}}}$ denotes the spherical distance, $K$ is a compact subset of $U_f$ and $\varepsilon>0$. A sequence $(f_n)$ is said to \textit{converge} to $f$ \emph{uniformly on compact sets} if for any neighborhood $\mathcal{N}$ of $f$, there exists $n_0>0$ such that $f_n\in\mathcal{N}$ for all $n\geq n_0$.
\end{defi}
If $f\in\bigcup_{\alpha\in[0,1)}\mathcal{IS}_\alpha\cup\{Q_\alpha\}$, we denote by $\alpha_f\in[0,1)$ the rotation number of $f$ at the origin, i.e., the real number $\alpha_f\in[0,1)$ such that $f'(0)=e^{2\pi\textup{i}\alpha_f}$. If $\alpha_f>0$ is small, then besides the origin, the map $f$ has another fixed point $\sigma_f\neq0$ near $0$ in $U_f$, which depends continuously on $f$ (see \cite[\S 3.2]{Shi00a} or \cite[Lemma 9, p.\,707]{BC12}).
\begin{prop}[{\cite[Proposition 12, p.\,707]{BC12}, see Figure \ref{Fig_perturbed-Fatou-coor}}]\label{prop-BC-prop-12}
There exist $\textit{\textbf{k}}\in\mathbb{N}^+$ and $\varepsilon_1>0$ satisfying $\lfloor\tfrac{1}{\varepsilon_1}\rfloor-\textit{\textbf{k}}>1$, such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1]$, there exist a Jordan domain $\mathcal{P}_f\subset U_f$ and a univalent map $\Phi_f:\mathcal{P}_f\to\mathbb{C}$, such that
\begin{enumerate}
\item $\mathcal{P}_f$ contains $\textup{cv}$ and is bounded by two arcs joining $0$ and $\sigma_f$;
\item $\Phi_f(\textup{cv})=1$, $\Phi_f(\mathcal{P}_f)=\{\zeta\in\mathbb{C}:0<\textup{Re}\, \zeta<\lfloor\tfrac{1}{\alpha_f}\rfloor-\textit{\textbf{k}}\}$ with $\textup{Im}\, \Phi_f(z)\to +\infty$ as $z\to 0$ and $\textup{Im}\, \Phi_f(z)\to -\infty$ as $z\to \sigma_f$ in $\mathcal{P}_f$;
\item If $z\in\mathcal{P}_f$ and $\textup{Re}\,\Phi_f(z)<\lfloor\tfrac{1}{\alpha_f}\rfloor-\textit{\textbf{k}}-1$, then $f(z)\in\mathcal{P}_f$ and $\Phi_f(f(z))=\Phi_f(z)+1$; and
\item If $(f_n)$ is a sequence of maps in $\bigcup_{\alpha\in(0,\varepsilon_1]}\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ converging to a map $f_0\in\mathcal{IS}_0\cup\{Q_0\}$, then any compact set $K\subset\mathcal{P}_{attr,f_0}$ is contained in $\mathcal{P}_{f_n}$ for $n$ large enough and the sequence $(\Phi_{f_n})$ converges to $\Phi_{attr,f_0}$ uniformly on $K$; moreover, any compact set $K\subset\mathcal{P}_{rep,f_0}$ is contained in $\mathcal{P}_{f_n}$ for $n$
large enough and the sequence $(\Phi_{f_n}-\frac{1}{\alpha_{f_n}})$ converges to $\Phi_{rep,f_0}$ uniformly on $K$.
\end{enumerate}
\end{prop}
\begin{figure}
\caption{The perturbed Fatou coordinate $\Phi_f$ and its domain of definition $\mathcal{P}_f$.}
\label{Fig_perturbed-Fatou-coor}
\end{figure}
Proposition \ref{prop-BC-prop-12} was proved in \cite{BC12} only for Inou-Shishikura's class. However, when $f=Q_\alpha$ with sufficiently small $\alpha>0$, the existence of the domain $\mathcal{P}_f$ and the coordinate $\Phi_f:\mathcal{P}_f\to\mathbb{C}$ satisfying the properties in the above proposition is classical (see \cite{Shi00a}). The map $\Phi_f$ in Proposition \ref{prop-BC-prop-12} is called the \emph{(perturbed) Fatou coordinate} of $f$ and $\mathcal{P}_f$ is called a \emph{(perturbed) petal}.
\begin{defi}[{see Figure \ref{Fig_near-para-norm-defi}}]
Let $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1]$, where $\varepsilon_1>0$ is the constant introduced in Proposition \ref{prop-BC-prop-12}. Define
\begin{equation}\label{defi-C-f-alpha}
\begin{split}
\mathcal{C}_f:=&\,\{z\in\mathcal{P}_f:1/2\leq\textup{Re}\,\Phi_f(z)\leq 3/2 \text{~and~} -2<\textup{Im}\,\Phi_f(z)\leq 2\}, \text{~and}\\
\mathcal{C}_f^\sharp:=&\,\{z\in\mathcal{P}_f:1/2\leq\textup{Re}\,\Phi_f(z)\leq 3/2 \text{~and~} \textup{Im}\,\Phi_f(z)\geq 2\}.
\end{split}
\end{equation}
Note that $\textup{cv}=-4/27\in \operatorname{int}\, \mathcal{C}_f$ and $0\in\partial \mathcal{C}_f^\sharp$.
\end{defi}
\begin{figure}
\caption{Left: the sets $\mathcal{C}_f$ and $\mathcal{C}_f^\sharp$.}
\label{Fig_near-para-norm-defi}
\end{figure}
\begin{prop}[{\cite[Proposition 2.7]{Che19}, see Figure \ref{Fig_near-para-norm-defi}}]\label{prop-CC-2}
There exist constants $\varepsilon_1'\in(0,\varepsilon_1]$ and $\textit{\textbf{k}}_0\in\mathbb{N}^+$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1']$, there exists a positive integer $k_f\in[1,\textit{\textbf{k}}_0]$ such that
\begin{enumerate}
\item For all $1\leq k\leq k_f$, the unique connected component $(\mathcal{C}_f^\sharp)^{-k}$ of $f^{-k}(\mathcal{C}_f^\sharp)$ that contains $0$ in its closure is relatively compact in $U_f$ and $f^{\circ k}:(\mathcal{C}_f^\sharp)^{-k}\to\mathcal{C}_f^\sharp$ is an isomorphism, and the unique connected component $\mathcal{C}_f^{-k}$ of $f^{-k}(\mathcal{C}_f)$ that intersects $(\mathcal{C}_f^\sharp)^{-k}$ is relatively compact in $U_f$ and $f^{\circ k}:\mathcal{C}_f^{-k}\to\mathcal{C}_f$ is a covering of degree $2$ ramified above $\textup{cv}$; and
\item $k_f$ is the \emph{smallest} positive integer such that $\mathcal{C}_f^{-k_f}\cup(\mathcal{C}_f^\sharp)^{-k_f}\subset\{z\in\mathcal{P}_f:0<\textup{Re}\,\Phi_f(z)<\lfloor\tfrac{1}{\alpha_f}\rfloor-\textit{\textbf{k}}-\tfrac{1}{2}\}$.
\end{enumerate}
\end{prop}
The same statement as Proposition \ref{prop-CC-2} without the uniform bound on $k_f$ was proved in \cite[Proposition 13, p.\,713]{BC12}. For the corresponding statements of Propositions \ref{prop-BC-prop-12} and \ref{prop-CC-2} with $\alpha\in\mathbb{C}$ (specifically, when $|\arg\alpha|<\pi/4$ and $|\alpha|$ is small), we refer to \cite[\S 2]{CS15}.
\begin{defi}[{Near-parabolic renormalization, see Figure \ref{Fig_near-para-norm-defi}}]
For $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1']$, define
\begin{equation}
S_f:=\mathcal{C}_f^{-k_f}\cup(\mathcal{C}_f^\sharp)^{-k_f},
\end{equation}
and consider the map
\begin{equation}
\Phi_f\circ f^{\circ k_f}\circ\Phi_f^{-1}:\Phi_f(S_f)\to\mathbb{C}.
\end{equation}
This map commutes with the translation by one. Hence it projects by the \textit{modified} exponential map
\begin{equation}\label{equ-modify-exp}
\textup{Exp}(\zeta):=-\frac{4}{27}\, s(e^{2\pi\textup{i} \zeta})
\end{equation}
to a well-defined map $\mathcal{R} f$ which is defined on a set punctured at zero, where $s:z\mapsto\overline{z}$ is the complex conjugation. One can check that $\mathcal{R} f$ extends across zero and satisfies $(\mathcal{R} f)(0)=0$ and $(\mathcal{R} f)'(0)=e^{2\pi\textup{i}/\alpha_f}$. The map $\mathcal{R} f$ is called the \emph{near-parabolic renormalization} of $f$.
\end{defi}
Let $P(z)=z(1+z)^2$ be the cubic polynomial introduced at the beginning of \S\ref{subsec-IS-class}. Define
\begin{equation}\label{equ-U-pri}
U':=P^{-1}(\mathbb{D}(0,\tfrac{4}{27}e^{4\pi}))\setminus ((-\infty,-1]\cup \overline{B}),
\end{equation}
where $B$ is the connected component of $P^{-1}(\mathbb{D}(0,\frac{4}{27}e^{-4\pi}))$ containing $-1$. By an explicit calculation, one can prove that $\overline{U}\subset U'$ (see \cite[Proposition 5.2]{IS08} and Figure \ref{Fig_U-zoom}).
\begin{thm}[{\cite[Main Theorem 3]{IS08}}]\label{thm-IS-attr-rep-3}
For every $f=P\circ\varphi^{-1}\in\mathcal{IS}_\alpha$ or $f=Q_\alpha$ with $\alpha\in(0,\varepsilon_1']$, the near-parabolic renormalization $\mathcal{R} f$ is well-defined and the restriction of $\mathcal{R} f$ to a domain containing $0$ can be written as $P\circ\psi^{-1}\in\mathcal{IS}_{1/\alpha}$. Moreover, $\psi$ extends to a univalent function from $e^{-2\pi\textup{i}/\alpha}\,U'$ to $\mathbb{C}$.
\end{thm}
From Theorem \ref{thm-IS-attr-rep-3} we know that the near-parabolic renormalization of $\mathcal{R} f$ can also be defined if the fractional part of $1/\alpha$ is contained in $(0,\varepsilon_1']$. This implies that the near-parabolic renormalization operator $\mathcal{R}$ can be applied infinitely many times to $f$ if $\alpha$ is of sufficiently high type. \subsection{Some sets in the Fatou coordinate planes}\label{subsec-esti-1} Let $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1']$. We define a set in the Fatou coordinate plane of $f$:
\begin{equation}\label{equ-MD-tilde-f}
\widetilde{\mathcal{D}}_f:=\Phi_f(\mathcal{P}_f)\cup\bigcup_{j=0}^{k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2}(\Phi_f(S_f)+j).
\end{equation}
\begin{lem}\label{lema:Phi-inverse}
The map $\Phi_f^{-1}:\Phi_f(\mathcal{P}_f)\to\mathcal{P}_f$ can be extended to a holomorphic map
\begin{equation}
\Phi_f^{-1}:\widetilde{\mathcal{D}}_f\to \mathcal{P}_f\cup\bigcup_{j=0}^{k_f}f^{\circ j}(S_f),
\end{equation}
such that for all $\zeta\in\mathbb{C}$ with $\zeta,\zeta+1\in\widetilde{\mathcal{D}}_f$, we have $\Phi_f^{-1}(\zeta+1)=f\circ\Phi_f^{-1}(\zeta)$.
\end{lem}
This lemma was proved in \cite[Lemma 1.8]{AC18}. For completeness and to clarify some ideas, we include a sketch of the construction of $\Phi_f^{-1}$ here.
\begin{proof}
By \eqref{defi-C-f-alpha}, the definition of $S_f$, and Propositions \ref{prop-BC-prop-12}(b) and \ref{prop-CC-2}(a), we have $f^{\circ k_f}(S_f)=\mathcal{C}_f\cup\mathcal{C}_f^\sharp$ and $f^{\circ j}(S_f)$ is well-defined for all $0\leq j\leq k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2$. If $\zeta\in\widetilde{\mathcal{D}}_f\setminus\Phi_f(\mathcal{P}_f)$, then there exists an integer $j\in[1,k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2]$ such that $\zeta\in\Phi_f(S_f)+j$. For such $\zeta$ we define
\begin{equation}
\Phi_f^{-1}(\zeta):=f^{\circ j}(\Phi_f^{-1}(\zeta-j)).
\end{equation}
Note that there may exist two choices of $j$ for some point $\zeta$. Assume that $\zeta\in\Phi_f(S_f)+j'$ for some $j'\in[1,k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2]$ with $j'\neq j$. Then $|j'-j|=1$. Without loss of generality, we assume that $j'=j+1$. By Proposition \ref{prop-BC-prop-12}(c), we have $\Phi_f^{-1}(\zeta+1)=f\circ\Phi_f^{-1}(\zeta)$ for all $\zeta\in\mathbb{C}$ with $\zeta,\zeta+1\in\Phi_f(\mathcal{P}_f)$. Therefore, we have
\begin{equation}
f^{\circ j'}(\Phi_f^{-1}(\zeta-j'))=f^{\circ (j'-1)}(\Phi_f^{-1}(\zeta-j'+1))=f^{\circ j}(\Phi_f^{-1}(\zeta-j)).
\end{equation}
This implies that $\Phi_f^{-1}$ is well-defined in $\widetilde{\mathcal{D}}_f$, and it is straightforward to check that $\Phi_f^{-1}$ is holomorphic. Finally, a similar calculation shows that $\Phi_f^{-1}(\zeta+1)=f\circ\Phi_f^{-1}(\zeta)$ for all $\zeta\in\mathbb{C}$ with $\zeta,\zeta+1\in\widetilde{\mathcal{D}}_f$.
\end{proof}
Note that $S_f$ is contained in $\{z\in\mathcal{P}_f:0<\textup{Re}\,\Phi_f(z)<\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}-\tfrac{1}{2}\}$. For $i_0=k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2$, we have $f^{\circ i_0}(S_f)=\{z\in\mathcal{P}_f:\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}-\tfrac{3}{2}\leq\textup{Re}\,\Phi_f(z)\leq\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}-\tfrac{1}{2} \text{~and~} \textup{Im}\,\Phi_f(z)>-2\}$. If we consider the local rotation of $f$ near the origin, then by Proposition \ref{prop-CC-2}(b) this implies that
\begin{equation}\label{equ-k-f-kc}
k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2\geq \lfloor\tfrac{1}{\alpha}\rfloor+1, \text{\quad i.e.\quad} k_f\geq \textit{\textbf{k}}+3.
\end{equation}
Note that the modified exponential map $\textup{Exp}:\mathbb{C}\to\mathbb{C}\setminus\{0\}$ defined in \eqref{equ-modify-exp} is an anti-holomorphic covering map.
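For the reader's convenience we record why this is so (a standard observation, included here only for completeness): writing the map of \eqref{equ-modify-exp} as $\zeta\mapsto -\tfrac{4}{27}\,\overline{e^{2\pi\textup{i}\zeta}}$, it is the composition of the holomorphic universal covering $\zeta\mapsto e^{2\pi\textup{i}\zeta}$ of $\mathbb{C}\setminus\{0\}$ with the anti-holomorphic homeomorphism $z\mapsto-\tfrac{4}{27}\,\overline{z}$ of $\mathbb{C}\setminus\{0\}$. In particular it is $1$-periodic,
\begin{equation*}
-\tfrac{4}{27}\,\overline{e^{2\pi\textup{i}(\zeta+1)}}=-\tfrac{4}{27}\,\overline{e^{2\pi\textup{i}\zeta}},
\end{equation*}
which is exactly what makes the projection in the definition of $\mathcal{R} f$ well-defined.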
The map $\Phi_f^{-1}:\widetilde{\mathcal{D}}_f\to\mathbb{C}\setminus\{0\}$ can be lifted to obtain an anti-holomorphic map
\begin{equation}
\chi_f:\widetilde{\mathcal{D}}_f\to\mathbb{C}
\end{equation}
such that
\begin{equation}
\textup{Exp}\circ\chi_f(\zeta)=\Phi_f^{-1}(\zeta) \text{\quad for all }\zeta\in\widetilde{\mathcal{D}}_f.
\end{equation}
There are infinitely many choices of $\chi_f:\widetilde{\mathcal{D}}_f\to\mathbb{C}$, but the following result holds.
\begin{prop}[{\cite[Proposition 1.9]{AC18}}]\label{prop-unif-inverse}
There exists $\textit{\textbf{k}}_1\in\mathbb{N}^+$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1']$ and any choice of the lift $\chi_f$, we have
\begin{equation}
\sup\{|\textup{Re}\,(\zeta-\zeta')|:\zeta,\zeta'\in\chi_f(\widetilde{\mathcal{D}}_f)\}\leq \textit{\textbf{k}}_1.
\end{equation}
\end{prop}
Proposition \ref{prop-unif-inverse} was proved by applying Proposition \ref{prop-CC-2}, the pre-compactness of the class $\mathcal{IS}_\alpha$ and a uniform bound on the total spiral of the set $\mathcal{P}_f$ about the origin (see \cite[Proposition 12]{BC12} or \cite[Proposition 2.4]{Che19}). From \cite[\S 5.A]{IS08} (see also \cite[Propositions 2.6 and 2.7]{CS15}), $\mathcal{P}_f$ is contained in the image of $f$. By Lemma \ref{lema:Phi-inverse}, we have $\Phi_f^{-1}(\widetilde{\mathcal{D}}_f)\subset f(U_f)$. Since $f(U_f)\subset P(U')=\mathbb{D}(0,\tfrac{4}{27}e^{4\pi})$, we have $\textup{Im}\,\zeta>-2$ for every $\zeta\in\chi_f(\widetilde{\mathcal{D}}_f)$, where $P(z)=z(1+z)^2$ and $U'$ is defined in \eqref{equ-U-pri}. Therefore, by Proposition \ref{prop-unif-inverse}, there exists a choice of $\chi_f$, denoted by $\chi_{f,0}$, such that
\begin{equation}\label{equ-chi-choice}
\chi_{f,0}(\widetilde{\mathcal{D}}_f)\subset \{\zeta\in\mathbb{C}:1\leq\textup{Re}\,\zeta<\textit{\textbf{k}}_1+2 \text{ and }\textup{Im}\,\zeta>-2\}.
\end{equation}
We define
\begin{equation}\label{equ-MD-f}
\mathcal{D}_f:=\Phi_f(\mathcal{P}_f)\cup\bigcup_{j=0}^{k_f+\textit{\textbf{k}}+\textit{\textbf{k}}_1+2}(\Phi_f(S_f)+j),
\end{equation}
where $\textit{\textbf{k}}$, $\textit{\textbf{k}}_1\in\mathbb{N}^+$ are the integers introduced in Propositions \ref{prop-BC-prop-12} and \ref{prop-unif-inverse} respectively. Let $\textit{\textbf{k}}_0\in\mathbb{N}^+$ be the integer introduced in Proposition \ref{prop-CC-2}.
\begin{lem}\label{lema:D-n}
For all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $0<\alpha\leq \widetilde{\varepsilon}_1:=\min\{\varepsilon_1',1/(2\textit{\textbf{k}}+\textit{\textbf{k}}_1+4)\}$, we have $\mathcal{D}_f\subset\widetilde{\mathcal{D}}_f$. Moreover,
\begin{equation}\label{equ-D-n-loc}
\begin{split}
\mathcal{D}_f &\,\subset\Phi_f(\mathcal{P}_f)\cup\{\zeta\in\mathbb{C}:-\textit{\textbf{k}}\leq\textup{Re}\,\zeta-\lfloor\tfrac{1}{\alpha}\rfloor<\textit{\textbf{k}}_0+\textit{\textbf{k}}_1+\tfrac{3}{2}\}\text{ and }\\
\mathcal{D}_f &\,\supset\Phi_f(\mathcal{P}_f)\cup\{\zeta\in\mathbb{C}:-\textit{\textbf{k}}\leq\textup{Re}\,\zeta-\lfloor\tfrac{1}{\alpha}\rfloor\leq\textit{\textbf{k}}_1+3 \text{ and }\textup{Im}\,\zeta\geq 0\}.
\end{split}
\end{equation}
\end{lem}
\begin{proof}
The condition on $\alpha$ implies that $k_f+\textit{\textbf{k}}+\textit{\textbf{k}}_1+2\leq k_f+\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}} -2$. Then we have $\mathcal{D}_f\subset\widetilde{\mathcal{D}}_f$ by definition. Since $\Phi_f(S_f)\subset\{\zeta\in\mathbb{C}:0<\textup{Re}\,\zeta<\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}-\tfrac{1}{2}\}$ by Proposition \ref{prop-CC-2}(b), for $\zeta\in\mathcal{D}_f$ we have $\textup{Re}\,\zeta<\lfloor\tfrac{1}{\alpha}\rfloor+k_f+\textit{\textbf{k}}_1+\tfrac{3}{2}\leq \lfloor\tfrac{1}{\alpha}\rfloor+\textit{\textbf{k}}_0+\textit{\textbf{k}}_1+\tfrac{3}{2}$. Hence the first assertion of \eqref{equ-D-n-loc} holds.
By \eqref{ellipse} and \eqref{U-and-psi-1}, we have $U\supset\mathbb{D}(0,\tfrac{8}{9})$ (see also \cite[Lemma 6.1]{Che19}). For any $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$, by Koebe's $\tfrac{1}{4}$-theorem we have $U_f\supset\mathbb{D}(0,\tfrac{2}{9})$. Since $\textup{Exp}(\Phi_f(S_f))\supset U_{\mathcal{R} f}\setminus\{0\}$ and $\mathcal{R} f\in\mathcal{IS}_{1/\alpha}$, it follows that $\mathbb{D}(0,\tfrac{2}{9})\subset\textup{Exp}(\Phi_f(S_f))$. Since $f^{\circ k_f}(S_f)=\mathcal{C}_f\cup\mathcal{C}_f^\sharp\subset\mathcal{P}_f$, we have $\textup{Re}\,\zeta>\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}$ for all $\zeta\in\Phi_f(S_f)+k_f$. This implies that $\{\zeta\in\mathbb{C}:-\tfrac{3}{2}\leq\textup{Re}\,\zeta-(\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}})\leq 1 \text{ and }\textup{Im}\,\zeta>-\tfrac{1}{2\pi}\log\tfrac{3}{2}\}$ is contained in $\bigcup_{j=0}^{k_f}(\Phi_f(S_f)+j)$. Therefore, $\mathcal{D}_f\setminus\Phi_f(\mathcal{P}_f)$ contains $\{\zeta\in\mathbb{C}:-\textit{\textbf{k}}\leq\textup{Re}\,\zeta-\lfloor\tfrac{1}{\alpha}\rfloor\leq\textit{\textbf{k}}_1+3 \text{ and }\textup{Im}\,\zeta\geq 0\}$.
\end{proof}
\subsection{Some quantitative estimates}\label{subsec-esti-2} Let $\sigma_f\neq0$ be the other fixed point of $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ near $0$, which is contained in $\partial \mathcal{P}_f$ for small $\alpha>0$ (see Figure \ref{Fig_perturbed-Fatou-coor}). It depends continuously on $f$ and has the asymptotic expansion
\begin{equation}\label{equ-sigma-f}
\sigma_f=-4\pi\alpha\textup{i}/f_0''(0)+o(\alpha)
\end{equation}
as $f\to f_0\in\mathcal{IS}_0\cup\{Q_0\}$ in a fixed neighborhood of $0$ (see \cite[\S 3.2.1]{Shi00a}). By \cite[Main Theorem 1(a)]{IS08}, $|f_0''(0)|$ is contained in $[3,7]$ for all $f_0\in\mathcal{IS}_0$.
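Before stating the uniform bounds below, note what \eqref{equ-sigma-f} suggests heuristically (this short computation is ours, added only for orientation): since $|\sigma_f|=\tfrac{4\pi\alpha}{|f_0''(0)|}+o(\alpha)$ and $|f_0''(0)|\in[3,7]$, one expects
\begin{equation*}
\tfrac{4\pi}{7}\,\alpha+o(\alpha)\;\leq\;|\sigma_f|\;\leq\;\tfrac{4\pi}{3}\,\alpha+o(\alpha),
\end{equation*}
so for small $\alpha$ any constant slightly larger than $\tfrac{4\pi}{3}\approx 4.19$ should work in the two-sided bound that follows.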
By the pre-compactness of $\mathcal{IS}_0$, there exists a constant $D_0'>1$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_1]$, one has \begin{equation}\label{equ-range-sigma-f} \alpha/D_0'\leq |\sigma_f|\leq D_0' \alpha. \end{equation} For a general statement of \eqref{equ-range-sigma-f} (i.e., $\alpha\in\mathbb{C}$), see \cite[Lemma 3.25(1)]{CS15}. Let \begin{equation}\label{equ-tau} \tau_f(w):=\frac{\sigma_f}{1-e^{-2\pi\textup{i}\alpha w}} \end{equation} be a universal covering from $\mathbb{C}$ to $\widehat{\mathbb{C}}\setminus\{0,\sigma_f\}$ with period $1/\alpha$. Then $\tau_f(w)\to 0$ as $\textup{Im}\, w\to +\infty$ and $\tau_f(w)\to \sigma_f$ as $\textup{Im}\, w\to -\infty$. There exists a unique lift $F_f$ of $f$ under $\tau_f$ such that \begin{equation} f\circ \tau_f(w)=\tau_f\circ F_f(w) \text{\quad with\quad} \lim_{\textup{Im}\, w\to+\infty}(F_f(w)-w)=1. \end{equation} The set $\tau_f^{-1}(\mathcal{P}_f)$ consists of countably many simply connected components. Each of them is bounded by piecewise analytic curves going from $-\textup{i}\infty$ to $+\textup{i}\infty$. Let $\widetilde{\mathcal{P}}_f$ be the unique component separating $0$ from $1/\alpha$. Define \begin{equation}\label{equ-L-f} L_f:=\Phi_f\circ \tau_f:\widetilde{\mathcal{P}}_f\to\mathbb{C}. \end{equation} Then $L_f$ is univalent and it is the Fatou coordinate of $F_f$ since $L_f( F_f(w))=L_f(w)+1$ if both $w$ and $F_f(w)$ are contained in $\widetilde{\mathcal{P}}_f$. For $\alpha\in(0,\widetilde{\varepsilon}_1]$ and $R\in(0,+\infty)$, we define \begin{equation}\label{equ-Theta-alpha} \Theta_\alpha(R)=\mathbb{C}\setminus\bigcup_{n\in\mathbb{Z}}\,\mathbb{D}(n/\alpha,R).
\end{equation} For $C>0$, we denote $a_C:=C e^{5\pi\textup{i}/12}$ and define a piecewise analytic curve \begin{equation} \begin{split} \ell_C:=\{w\in\mathbb{C}:\arg(w-a_C)=\tfrac{11}{12}\pi\} & \cup \{w\in\mathbb{C}:\arg(w-\overline{a}_C)=-\tfrac{11}{12}\pi\} \\ & \cup \{C e^{\textup{i}\theta}:\theta\in[-\tfrac{5\pi}{12},\tfrac{5\pi}{12}]\}. \end{split} \end{equation} Then $\ell_C\cup(-\ell_C+1/\alpha)$ divides $\mathbb{C}$ into three connected components\footnote{We always assume that $\alpha$ is small enough that $\Theta_\alpha(C)$ is connected and hence $1/(2\alpha)\in\Theta_\alpha(C)$.}. Let $A_1(C)$ be the component of $\mathbb{C}\setminus(\ell_C\cup(-\ell_C+1/\alpha))$ containing $1/(2\alpha)$. The following result is a summary of Lemmas 6.4, 6.7(2), 6.6 and 6.11 in \cite{Che19}.
\begin{lem}\label{lema-key-pre-Che} There are constants $\varepsilon_2\in(0,\widetilde{\varepsilon}_1]$, $C_0$, $C_0'>0$ and $C_0''\geq 6$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$, we have \begin{enumerate} \item $F_f$ is defined and univalent on $\Theta_\alpha(C_0')$, and for all $r\in(0,1/2]$ and all $w\in\Theta_\alpha(r/\alpha)\cap\Theta_\alpha(C_0')$ we have \begin{equation} |F_f(w)-(w+1)|,\,|F_f'(w)-1|<\min\Big\{\frac{1}{4},C_0\frac{\alpha}{r}e^{-2\pi\alpha\textup{Im}\, w}\Big\}; \end{equation} \item For all\,\footnote{In \cite[Lemma 6.7(2)]{Che19}, $R$ is contained in $[3.25,1/(2\alpha)]$.
In fact the estimate of $|L_f'(w)|$ there still holds if $R\in [3.25,C/\alpha]$ for every $C\geq 1/2$ (the only difference is that the constants in the estimate need to be modified).} $R\in[C_0'',2/\alpha]$ and all $w$ with $\mathbb{D}(w,R)\subset A_1:=A_1(C_0')$ and $\textup{Im}\, w\geq -1/\alpha$, we have \begin{equation} \frac{1}{|L_f'(w)|} \leq 1+\frac{C_0}{R}; \end{equation} \item $L_f:\widetilde{\mathcal{P}}_f\to\mathbb{C}$ has a unique univalent extension onto $\widetilde{\mathcal{P}}_f \cup A_1$ such that $L_f( F_f(w))=L_f(w)+1$ if both $w$ and $F_f(w)$ belong to $\widetilde{\mathcal{P}}_f \cup A_1$; \item For any $r>0$ there is $K_r\geq 1$ depending only on $r$ such that\,\footnote{By Lemma \ref{lema-key-pre-Che}(c), the number $x_f$ defined in \cite[Equation (50)]{Che19} satisfies $x_f\geq \lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}$. Hence by \cite[Lemma 6.11]{Che19} this part holds for all $\zeta\in\Phi_f(\mathcal{P}_f)\setminus\mathbb{D}(0,r)$.} \begin{equation} K_r^{-1}\leq |(L_f^{-1})'(\zeta)|\leq K_r \text{ for all } \zeta\in\Phi_f(\mathcal{P}_f)\setminus\mathbb{D}(0,r). \end{equation} \end{enumerate} \end{lem}
The following Lemma \ref{lema-Cheraghi-L-f-add} and Proposition \ref{prop-Cheraghi-L-f} are useful for estimating the locations of points under $\Phi_f^{-1}$ and $\chi_f$.
\begin{lem}\label{lema-Cheraghi-L-f-add} There exists a constant $D_0>0$ such that for any $D_1'>0$, there exists $D_1>0$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$, we have \begin{enumerate} \item $D_0\leq |L_f^{-1}(\zeta)|\leq D_1$ for $\zeta\in\Phi_f(\mathcal{P}_f)\cap\overline{\mathbb{D}}(0,D_1')$; and \item $D_0\leq |L_f^{-1}(\zeta)-1/\alpha|\leq D_1$ for $\zeta\in\Phi_f(\mathcal{P}_f)\cap\overline{\mathbb{D}}(1/\alpha,D_1')$.
\end{enumerate} \end{lem}
\begin{proof} By the continuous dependence of the Fatou coordinates of the maps in $\mathcal{IS}_0$, the pre-compactness of $\mathcal{IS}_0$ and the fact that $\mathcal{P}_f$ is compactly contained in the domain of definition of $f$, there exists a constant $R_1>0$ such that \begin{equation} \mathcal{P}_f\subset\mathbb{D}(0,R_1) \text{ for all } f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\} \text{ with } \alpha\in(0,\varepsilon_2]. \end{equation} By \eqref{equ-range-sigma-f} and the formula of $\tau_f$ in \eqref{equ-tau}, a direct calculation shows that there exists a constant $D_0>0$ such that the Euclidean distance satisfies $\textup{dist}(L_f^{-1}(\zeta),\mathbb{Z}/\alpha)\geq D_0$ for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$ and all $\zeta\in\Phi_f(\mathcal{P}_f)$. By Lemma \ref{lema-key-pre-Che}(d), there exists a constant $K_1>1$ such that \begin{equation}\label{equ-bound-L-f} K_1^{-1}\leq |(L_f^{-1})'(\zeta)|\leq K_1 \end{equation} for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$ and all $\zeta\in\Phi_f(\mathcal{P}_f)\setminus\mathbb{D}$. From \cite[Proposition 6.17]{Che19}, there exists a constant $C_1>0$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$ we have \begin{equation}\label{equ-bound-pt} |L_f^{-1}(\tfrac{3}{2})|<C_1. \end{equation} Without loss of generality we assume that $D_1'>1$. Combining \eqref{equ-bound-L-f} and \eqref{equ-bound-pt}, there exists a constant $C_2>0$ depending only on $K_1$, $C_1$ and $D_1'$ such that $|L_f^{-1}(\zeta)|<C_2$ for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$ and all $\zeta\in(\Phi_f(\mathcal{P}_f)\cap\overline{\mathbb{D}}(0,D_1'))\setminus\mathbb{D}$.
On the other hand, by Lemma \ref{lema-key-pre-Che}(a) and applying \begin{equation} L_f^{-1}(\zeta)=F_f^{-1}\circ L_f^{-1}(\zeta+1), \end{equation} there exists a constant $C_3>0$ such that $|L_f^{-1}(\zeta)|<C_3$ for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$ and all $\zeta\in\Phi_f(\mathcal{P}_f)\cap\mathbb{D}$. By Lemma \ref{lema-key-pre-Che}(d) and \cite[Proposition 6.16]{Che19}, there exists a constant $C_4>0$ depending on $D_1'$ such that $|L_f^{-1}(\zeta)-1/\alpha|\leq C_4$ for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2]$ and all $\zeta\in\Phi_f(\mathcal{P}_f)\cap\overline{\mathbb{D}}(1/\alpha,D_1')$. Then the proof is complete if we set $D_1:=\max\{C_2,C_3,C_4\}$. \end{proof}
\begin{prop}[{\cite[Propositions 6.19 and 6.17]{Che19}}]\label{prop-Cheraghi-L-f} There are constants $\varepsilon_2'\in(0,\varepsilon_2]$ and $D_2>0$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_2']$, we have \begin{enumerate} \item If $\zeta\in[0,\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]+\textup{i}\,[-3,+\infty)$, then \begin{equation} |L_f^{-1}(\zeta)-\zeta|\leq D_2 \log(1+1/\alpha). \end{equation} \item If $\zeta\in [0,\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]+\textup{i}\,[-3,1/\alpha]$, then \begin{equation} |L_f^{-1}(\zeta)-\zeta|\leq D_2 \min\{\log(2+|\zeta|),\,\log(2+|\zeta-1/\alpha|)\}. \end{equation} \end{enumerate} \end{prop}
Proposition \ref{prop-Cheraghi-L-f}(a) was proved in \cite[Proposition 6.19]{Che19} (see also \cite[Proposition 6.15]{Che19}). Statement (b) was proved in \cite[Proposition 6.17]{Che19} for $\zeta\in[0,\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]$ (i.e., $\zeta\in\mathbb{R}$).
However, the arguments there apply to $\zeta\in[0,\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]+ \textup{i}\,[-3,1/\alpha]$ in a completely similar way, using \cite[Lemma 6.7]{Che19} and Lemma \ref{lema-Cheraghi-L-f-add}. For more details on the study of $L_f$ and $L_f^{-1}$, see \cite[\S\S 6.3-6.6]{Che19} and \cite[\S 3.5]{CS15}. Let $X\geq 0$ and $Y\geq 0$ be two numbers. We use $X\asymp Y$ to denote that $X$ and $Y$ are of the same order, i.e., there exist two universal positive constants $C_1$ and $C_2$ such that $C_1Y\leq X\leq C_2 Y$. Let $\mathcal{D}_f$ be the set defined in \eqref{equ-MD-f}.
\begin{lem}\label{lema-key-estimate-lp} There exist constants $\varepsilon_3\in(0,\varepsilon_2']$ and $D_3>0$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_3]$, we have \begin{enumerate} \item If $\zeta\in\mathcal{D}_f$ with $\textup{Im}\,\zeta\geq 1/\alpha$, then \begin{equation} |\Phi_f^{-1}(\zeta)| \asymp \frac{\alpha}{e^{2\pi\alpha\textup{Im}\,\zeta}} \text{\quad and\quad} \Big|\textup{Im}\, \chi_f(\zeta)-\Big(\alpha\,\textup{Im}\,\zeta+\frac{1}{2\pi}\log\frac{1}{\alpha}\Big)\Big|\leq D_3. \end{equation} \item If $\zeta\in\mathcal{D}_f$ with $\textup{Im}\, \zeta\in[-3,1/\alpha]$, then \begin{equation} |\Phi_f^{-1}(\zeta)|\asymp~ \max\Big\{\frac{1}{1+|\zeta|},\frac{1}{1+|\zeta-1/\alpha|}\Big\} \text{\quad and} \end{equation} \begin{equation} \big|\textup{Im}\, \chi_f(\zeta)-\tfrac{1}{2\pi}\min\big\{\log(1+|\zeta|),\log(1+|\zeta-1/\alpha|)\big\}\big|\leq D_3. \end{equation} \end{enumerate} \end{lem}
\begin{proof} By the definition of $\Phi_f^{-1}$ in Lemma \ref{lema:Phi-inverse}, if $\zeta\in\mathcal{D}_f\setminus\Phi_f(\mathcal{P}_f)$, then there exists a positive integer $j\in[1,k_f+\textit{\textbf{k}}+\textit{\textbf{k}}_1+2]$ such that $\zeta-j\in\Phi_f(\mathcal{P}_f)$ and $\Phi_f^{-1}(\zeta)=f^{\circ j}(\Phi_f^{-1}(\zeta-j))$.
By the pre-compactness of $\mathcal{IS}_\alpha$, it is sufficient to prove the statements in this lemma for $\zeta\in\Phi_f(\mathcal{P}_f)$. (a) By Proposition \ref{prop-Cheraghi-L-f}(a), we have \begin{equation} \textup{Im}\,\zeta-D_2\log(1+1/\alpha)\leq \textup{Im}\, L_f^{-1}(\zeta) \leq \textup{Im}\,\zeta+D_2\log(1+1/\alpha). \end{equation} If $\alpha$ is small, then $\alpha\log(1+1/\alpha)$ is also small. Since $\textup{Im}\,\zeta\geq 1/\alpha$, by decreasing $\alpha$ if necessary we may assume that $\textup{Im}\, \zeta-D_2\log(1+1/\alpha)>1/(2\alpha)$. Denote $w:=L_f^{-1}(\zeta)$. Then $|e^{-2\pi\textup{i}\alpha w}|=|e^{2\pi\alpha \textup{Im}\, w}\cdot e^{-2\pi\textup{i}\alpha \textup{Re}\, w}|>e^\pi$. Note that $\alpha\log(1+1/\alpha)$ is uniformly bounded above. Since $\textup{Im}\,\zeta\geq 1/\alpha$, we have \begin{equation} |1-e^{-2\pi\textup{i}\alpha w}|\asymp e^{2\pi\alpha\textup{Im}\, w}\asymp e^{2\pi\alpha\textup{Im}\,\zeta}. \end{equation} By \eqref{equ-range-sigma-f}, \eqref{equ-tau} and \eqref{equ-L-f}, we have \begin{equation} |\Phi_f^{-1}(\zeta)|=|\tau_f\circ L_f^{-1}(\zeta)|=\left|\frac{\sigma_f}{1-e^{-2\pi\textup{i}\alpha w}}\right| \asymp \frac{\alpha}{e^{2\pi\alpha\textup{Im}\,\zeta}}. \end{equation} Denote $y:=\textup{Im}\, \textup{Exp}o^{-1}\circ\Phi_f^{-1}(\zeta)$. By definition we have $\tfrac{4}{27}e^{-2\pi y}\asymp \alpha/e^{2\pi\alpha\textup{Im}\,\zeta}$. A direct calculation shows that $y=\alpha\,\textup{Im}\,\zeta+\tfrac{1}{2\pi}\log\frac{1}{\alpha}+\mathcal{O}(1)$, where $\mathcal{O}(1)$ is a number whose absolute value is less than a universal constant. (b) We divide the arguments into two cases. Firstly we assume that $\textup{Re}\,\zeta\in[0,1/(2\alpha)]$. By Proposition \ref{prop-Cheraghi-L-f}(b), we have \begin{equation}\label{equ-Lf-D-2} |L_f^{-1}(\zeta)-\zeta|\leq D_2\log(2+|\zeta|).
\end{equation} Let $D_1'>0$ be the smallest constant depending only on $D_2$ such that if $|\zeta|\geq D_1'$, then $|\zeta|\geq D_2\log(2+|\zeta|)+1$. If $|\zeta|\geq D_1'$, $\textup{Re}\,\zeta\in[0,1/(2\alpha)]$ and $\textup{Im}\,\zeta\in[-3,1/\alpha]$, by \eqref{equ-Lf-D-2} we have \begin{equation}\label{equ-Lf-asym} |L_f^{-1}(\zeta)|\asymp |\zeta|+1. \end{equation} If $|\zeta|\leq D_1'$, $\textup{Re}\,\zeta\in[0,1/(2\alpha)]$ and $\textup{Im}\,\zeta\in[-3,1/\alpha]$, by Lemma \ref{lema-Cheraghi-L-f-add}(a), there exists a constant $D_1>1$ depending only on $D_1'$ such that $D_0\leq |L_f^{-1}(\zeta)|\leq D_1$. Therefore, we still have \eqref{equ-Lf-asym}. Next we assume that $\textup{Re}\,\zeta\in[1/(2\alpha),\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]$. By Proposition \ref{prop-Cheraghi-L-f}(b), we have \begin{equation}\label{equ-Lf-D-2-2} |L_f^{-1}(\zeta)-\zeta|\leq D_2\log(2+|\zeta-1/\alpha|). \end{equation} If $|\zeta-1/\alpha|\geq D_1'$, then $|\zeta-1/\alpha|\geq D_2\log(2+|\zeta-1/\alpha|)+1$. If $|\zeta-1/\alpha|\geq D_1'$, $\textup{Re}\,\zeta\in[1/(2\alpha),\,\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]$ and $\textup{Im}\,\zeta\in[-3,1/\alpha]$, by \eqref{equ-Lf-D-2-2} we have \begin{equation}\label{equ-Lf-asym-2} |L_f^{-1}(\zeta)-1/\alpha|=|(L_f^{-1}(\zeta)-\zeta)+(\zeta-1/\alpha)|\asymp |\zeta-1/\alpha|+1. \end{equation} If $|\zeta-1/\alpha|\leq D_1'$, $\textup{Re}\,\zeta\in[1/(2\alpha),\lfloor\tfrac{1}{\alpha}\rfloor-\textit{\textbf{k}}\,]$ and $\textup{Im}\,\zeta\in[-3,1/\alpha]$, by Lemma \ref{lema-Cheraghi-L-f-add}(b), we have $D_0\leq |L_f^{-1}(\zeta)-1/\alpha|\leq D_1$. Therefore, in this case we still have \eqref{equ-Lf-asym-2}. Denote $w:=L_f^{-1}(\zeta)$. By \eqref{equ-Lf-D-2} and \eqref{equ-Lf-D-2-2}, if $\alpha$ is small enough, then $-\tfrac{1}{4}\leq\textup{Re}\,(\alpha w)\leq \tfrac{5}{4}$ and $|\alpha w|\leq\tfrac{3}{2}$.
By \eqref{equ-range-sigma-f}, \eqref{equ-L-f}, \eqref{equ-Lf-asym} and \eqref{equ-Lf-asym-2}, we have \begin{equation} \begin{split} |\Phi_f^{-1}(\zeta)|=\left|\frac{\sigma_f}{1-e^{-2\pi\textup{i}\alpha w}}\right| \asymp &~\max\Big\{\frac{1}{|w|},\frac{1}{|w-1/\alpha|}\Big\}\\ \asymp&~ \max\Big\{\frac{1}{1+|\zeta|},\frac{1}{1+|\zeta-1/\alpha|}\Big\}. \end{split} \end{equation} Then the estimate of $\textup{Im}\,\textup{Exp}o^{-1}\circ\Phi_f^{-1}(\zeta)$ follows by a direct calculation. \end{proof}
\begin{rmk} The estimates in Lemma \ref{lema-key-estimate-lp}(a) and (b) overlap. Indeed, if $\zeta\in\mathcal{D}_f$ and $\textup{Im}\,\zeta\asymp 1/\alpha$, then \begin{equation} |\Phi_f^{-1}(\zeta)|\asymp\alpha \quad\text{and}\quad \textup{Im}\, \textup{Exp}o^{-1}\circ\Phi_f^{-1}(\zeta)=\tfrac{1}{2\pi}\log\tfrac{1}{\alpha}+\mathcal{O}(1). \end{equation} \end{rmk}
The following lemma can be seen as an inverse version of Lemma \ref{lema-key-estimate-lp}.
\begin{lem}\label{lema-key-esti-inverse} There exist constants $D_4$, $D_5>1$ and $\varepsilon_3'\in(0,\varepsilon_3]$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_3']$, we have \begin{enumerate} \item If $\zeta\in\mathbb{C}$ satisfies $\textup{Im}\,\zeta\geq\frac{1}{2\pi}\log\tfrac{1}{\alpha}+D_4$, $\textup{Exp}o(\zeta)\in\mathcal{P}_f$ and $\Phi_f\circ \textup{Exp}o(\zeta)\in (0,2]+\textup{i}\,[-2,+\infty)$, then \begin{equation} \left|\textup{Im}\, \Phi_f\circ \textup{Exp}o(\zeta)-\frac{1}{\alpha}\Big(\textup{Im}\,\zeta-\frac{1}{2\pi}\log\frac{1}{\alpha}\Big)\right|\leq\frac{D_5}{\alpha}.
\end{equation} \item If $\zeta\in\mathbb{C}$ satisfies $\textup{Im}\,\zeta< \frac{1}{2\pi}\log\tfrac{1}{\alpha}+D_4$, $\textup{Exp}o(\zeta)\in\mathcal{P}_f$ and $\Phi_f\circ \textup{Exp}o(\zeta)\in (0,2]+\textup{i}\,[-2,+\infty)$, then \begin{equation} \big|\log(3+\textup{Im}\, \Phi_f\circ \textup{Exp}o(\zeta))-2\pi\textup{Im}\,\zeta\big|\leq D_5. \end{equation} \end{enumerate} \end{lem}
\begin{proof} (a) Denote $\zeta'=\Phi_f\circ \textup{Exp}o(\zeta)\in\Phi_f(\mathcal{P}_f)$. By Lemma \ref{lema-key-estimate-lp}(a), if $\textup{Im}\,\zeta'\geq 1/\alpha$ we have \begin{equation}\label{equ-down-1} \Big|\textup{Im}\, \zeta'-\frac{1}{\alpha}\Big(\textup{Im}\,\zeta-\frac{1}{2\pi}\log\frac{1}{\alpha}\Big)\Big|\leq\frac{D_3}{\alpha}. \end{equation} Suppose $\textup{Im}\, \zeta'\in[-2,1/\alpha)$. By Lemma \ref{lema-key-estimate-lp}(b), we have \begin{equation} \textup{Im}\,\zeta\leq \frac{1}{2\pi}\log(1+|\zeta'|)+D_3<\frac{1}{2\pi}\log\frac{1}{\alpha}+D_3+1. \end{equation} Therefore, if $\textup{Im}\,\zeta\geq \tfrac{1}{2\pi}\log\tfrac{1}{\alpha}+D_3+1$, then $\textup{Im}\,\zeta'\geq 1/\alpha$ or $\textup{Im}\,\zeta'<-2$. By the assumption in the lemma we have $\textup{Im}\,\zeta'\geq 1/\alpha$ and \eqref{equ-down-1} holds. Then Part (a) follows if we set $D_4:=D_3+1$ and $D_5:=D_3$. (b) Denote $\zeta'=\Phi_f\circ \textup{Exp}o(\zeta)\in (0,2]+\textup{i}\,[-2,+\infty)$. By \eqref{equ-down-1}, if $\textup{Im}\,\zeta'\in[1/\alpha,(1+2D_3)/\alpha]$, we have $|\log\tfrac{1}{\alpha}+2\pi\alpha\textup{Im}\,\zeta'-2\pi\textup{Im}\,\zeta|\leq 2\pi D_3$ and hence \begin{equation} \begin{split} |\log(3+\textup{Im}\,\zeta')-2\pi\textup{Im}\,\zeta| \leq &~ |\log(3\alpha+\alpha\textup{Im}\,\zeta')-2\pi\alpha\textup{Im}\,\zeta'|+2\pi D_3\\ \leq &~ \log(4+2 D_3)+6\pi D_3+2\pi.
\end{split} \end{equation} By Lemma \ref{lema-key-estimate-lp}(b), if $\textup{Im}\,\zeta'<1/\alpha$ we have $|\log(1+|\zeta'|)-2\pi\textup{Im}\, \zeta|\leq 2\pi D_3$ and hence \begin{equation} \begin{split} |\log(3+\textup{Im}\,\zeta')-2\pi\textup{Im}\,\zeta| \leq &~ |\log(3+\textup{Im}\,\zeta')-\log(1+|\zeta'|)|+2\pi D_3\\ \leq &~ \log 5+2\pi D_3. \end{split} \end{equation} Set $D_5=\log(4+2 D_3)+6\pi D_3+2\pi$. Then if $\textup{Im}\,\zeta'<(1+2D_3)/\alpha$ we have \begin{equation}\label{equ-down-2} |\log(3+\textup{Im}\,\zeta')-2\pi\textup{Im}\,\zeta|\leq D_5. \end{equation} Suppose $\textup{Im}\, \zeta'\geq (1+2D_3)/\alpha$. By Lemma \ref{lema-key-estimate-lp}(a), we have \begin{equation} \textup{Im}\,\zeta\geq \alpha\,\textup{Im}\,\zeta'+\frac{1}{2\pi}\log\frac{1}{\alpha}-D_3\geq \frac{1}{2\pi}\log\frac{1}{\alpha}+D_3+1. \end{equation} Therefore, if $\textup{Im}\,\zeta< \tfrac{1}{2\pi}\log\tfrac{1}{\alpha}+D_3+1$, then $\textup{Im}\,\zeta'< (1+2D_3)/\alpha$ and we have \eqref{equ-down-2}. Summarizing the constants in Parts (a) and (b), the lemma follows if we set $D_4:=D_3+1$ and $D_5:=\log(4+2 D_3)+6\pi D_3+2\pi$. \end{proof}
In the following, we use $h'$ to denote $\partial h/\partial z$ if $h$ is holomorphic and $\partial \overline{h}/\partial z$ if $h$ is anti-holomorphic. The following result is useful for estimating the Euclidean length of curves in Fatou coordinate planes.
\begin{prop}\label{prop-key-estimate-yyy} There exist positive constants $\varepsilon_4\in(0,\varepsilon_3']$ and $D_2'$, $D_6'$, $D_6>1$ such that for all $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in(0,\varepsilon_4]$, we have \begin{enumerate} \item If $\zeta\in\mathcal{D}_f$ with $\textup{Im}\,\zeta\geq 1/(4\alpha)$, then \begin{equation} |\chi_f'(\zeta)-\alpha| \leq D_6\alpha e^{-2\pi\alpha\textup{Im}\, \zeta}.
\end{equation} \item If $\zeta\in\mathcal{D}_f$ with $\textup{Im}\, \zeta\in[-2,1/(4\alpha)]$ and $r=\min\{|\zeta|,\,|\zeta-1/\alpha|\}\geq D_6'$, then \begin{equation} |\chi_f'(\zeta)|\leq \frac{\alpha}{1-e^{-2\pi\alpha(r-D_2'\log(2+r))}}\left(1+\frac{D_6}{r}\right), \end{equation} where $D_2'$ and $D_6'$ are chosen such that $r-2D_2'\log(2+r)\geq 4$ if $r\geq D_6'$. \end{enumerate} \end{prop} \begin{proof} Part (a) is proved in \cite[Proposition 3.3]{Che13}. We only prove Part (b). For the continuous function \begin{equation} \varphi(z):=|1-e^{2\pi\textup{i} z}|, \end{equation} where $z\in\Xi_\varrho:=\{\varrho e^{\textup{i}\theta}:\theta\in[-\tfrac{\pi}{4},\tfrac{5\pi}{4}]\}$ with $0<\varrho\leq\tfrac{2}{3}$, by a direct calculation\footnote{By setting $r:=2\pi\varrho$ and $\beta:=\theta-\frac{\pi}{2}$, it suffices to verify that $e^{-r \cos\beta}\sin\beta-\sin(\beta-r\sin\beta)>0$ for any $r\in(0,\frac{4\pi}{3}]$ and $\beta\in(0,\frac{3\pi}{4}]$. This can be done by considering three cases: (1) $\beta-r\sin\beta\in[-\pi,0]$; (2) $\beta-r\sin\beta\in(0,\frac{\pi}{2}]$ and $\beta\in(0,\frac{\pi}{2}]$; and (3) $\beta-r\sin\beta\in(0,\frac{3\pi}{4}]$ and $\beta\in(\frac{\pi}{2},\frac{3\pi}{4}]$. } we have \begin{equation}\label{equ-varphi-min} \min_{z\in\Xi_\varrho}\varphi(z)=\varphi(\varrho e^{\textup{i}\frac{\pi}{2}})=\varphi(\varrho\textup{i})=1-e^{-2\pi \varrho}. \end{equation} \textbf{Case 1.} We first consider $\zeta\in\Lambda_1:=\mathcal{D}_f\cap\{\zeta\in\mathbb{C}:\textup{Re}\,\zeta\in(0,1/(2\alpha)]\text{ and } \textup{Im}\,\zeta\in[-2,1/(4\alpha)]\}$ and denote $w:=L_f^{-1}(\zeta)\in\widetilde{\mathcal{P}}_f$. By \eqref{equ-modify-exp}, \eqref{equ-tau}, \eqref{equ-L-f} and a straightforward calculation we have \begin{equation}\label{equ-chi-deri-1} \begin{split} \chi_f'(\zeta)=&~(\textup{Exp}o^{-1}\circ\Phi_f^{-1})'(\zeta)=(\textup{Exp}o^{-1}\circ\tau_f\circ L_f^{-1})'(\zeta)\\ =&~\frac{\alpha}{1-e^{2\pi\textup{i}\alpha w}}\cdot\frac{1}{L_f'(w)}. 
\end{split} \end{equation} By Proposition \ref{prop-Cheraghi-L-f}(b), we have \begin{equation}\label{equ:w-zeta} w\in\overline{\mathbb{D}}(\zeta,D_2\log(2+|\zeta|)). \end{equation} Let $C_0''\geq 6$ be the constant and $A_1=A_1(C_0')$ be the domain introduced in Lemma \ref{lema-key-pre-Che}(b). Let $C_1\geq 1$ be a constant depending only on $C_0''$ and $D_2$ such that if $|\zeta|\geq C_1$, then \begin{equation}\label{equ-D-w-C} |\zeta|-2D_2\log(2+|\zeta|)\geq 4 \text{\quad and\quad} \overline{\mathbb{D}}(w,C_0'')\subset A_1. \end{equation} We assume that $\widehat{\varepsilon}_1>0$ is small such that if $\alpha\in(0,\widehat{\varepsilon}_1]$, then $\alpha|\zeta|<\tfrac{3}{5}$ and $D_2\alpha\log(2+|\zeta|)< \tfrac{1}{15}$ for all $\zeta\in \Lambda_1$. Hence \begin{equation}\label{equ:alpha-zeta} \alpha|\zeta|+D_2\alpha\log(2+|\zeta|)<\tfrac{2}{3} \text{\quad for all } \zeta\in\Lambda_1. \end{equation} By \eqref{equ:w-zeta}, \eqref{equ-D-w-C} and \eqref{equ:alpha-zeta}, for $\zeta\in\Lambda_1':=\Lambda_1\cap\{\zeta\in\mathbb{C}:|\zeta|\geq C_1\}$ we have $\alpha w\in\{\varrho e^{\textup{i}\theta}:0<\varrho\leq\frac{2}{3} \text{ and }-\tfrac{\pi}{4}<\theta<\tfrac{3\pi}{4}\}$. According to \eqref{equ-varphi-min}, we have \begin{equation}\label{equ-1-exp-1} |1-e^{2\pi\textup{i}\alpha w}|\geq 1-e^{-2\pi\alpha(|\zeta|-D_2\log(2+|\zeta|))}. \end{equation} On the other hand, by \eqref{equ-D-w-C}, Lemma \ref{lema-key-pre-Che}(b)(d) and Proposition \ref{prop-Cheraghi-L-f}(b), there exists a constant $C_2\geq 1$ depending only on $C_1$ and $D_2$ such that if $\zeta\in\Lambda_1'$ then \begin{equation}\label{equ-L-f-deri-1} \frac{1}{|L_f'(w)|}\leq 1+\frac{C_2}{|\zeta|}. \end{equation} Combining \eqref{equ-chi-deri-1}, \eqref{equ-1-exp-1} and \eqref{equ-L-f-deri-1}, if $\zeta\in\Lambda_1'$ we have \begin{equation} |\chi_f'(\zeta)|\leq\frac{\alpha}{1-e^{-2\pi\alpha(|\zeta|-D_2\log(2+|\zeta|))}}\left(1+\frac{C_2}{|\zeta|}\right).
\end{equation} \textbf{Case 2.} Suppose $\zeta\in\Lambda_2:=\mathcal{D}_f\cap\{\zeta\in\mathbb{C}:\textup{Re}\,\zeta>1/(2\alpha) \text{ and } \textup{Im}\,\zeta\in[-2,1/(4\alpha)]\}$. By the definition of $\mathcal{D}_f$ in \eqref{equ-MD-f}, there exist an integer $J\geq 1$ which is independent of $f$ and an integer $j_0\in\mathbb{N}$ with $j_0\leq J$ such that $\zeta-j_0\in\Phi_f(\mathcal{P}_f)\cap\{\zeta:\textup{Re}\,\zeta>1/(2\alpha)\}$. We denote $w:=L_f^{-1}(\zeta-j_0)\in\widetilde{\mathcal{P}}_f$ and $\widetilde{w}:=F_f^{\circ j_0}(w)$. Then \begin{equation}\label{equ-chi-deri-2} \begin{split} \chi_f'(\zeta)=&~(\textup{Exp}o^{-1}\circ f^{\circ j_0}\circ \Phi_f^{-1})'(\zeta-j_0)\\ =&~(\textup{Exp}o^{-1}\circ\tau_f\circ F_f^{\circ j_0}\circ L_f^{-1})'(\zeta-j_0)=\frac{\alpha}{1-e^{2\pi\textup{i}\alpha \widetilde{w}}}\cdot\frac{(F_f^{\circ j_0})'(w)}{L_f'(w)}. \end{split} \end{equation} By Proposition \ref{prop-Cheraghi-L-f}(b), we have $w\in\overline{\mathbb{D}}\big(\zeta-j_0,D_2\log(2+|\zeta-j_0-\tfrac{1}{\alpha}|)\big)$. Let $C_0''\geq 6$ and $A_1=A_1(C_0')$ be introduced as in Lemma \ref{lema-key-pre-Che}(b). By Lemma \ref{lema-key-pre-Che}(a), there exist two positive constants $C_1'$ and $C_1''$ depending only on $C_0''$, $D_2$ and $J$ such that if $|\zeta-1/\alpha|\geq C_1'$, then \begin{equation}\label{equ-D-w-C-2} \overline{\mathbb{D}}(w,C_0'')\subset A_1 \text{ and } |F_f^{\circ j}(w)-1/\alpha|\geq C_1''|\zeta-1/\alpha| \end{equation} for all $j=0,1,\cdots,j_0$. Also by Lemma \ref{lema-key-pre-Che}(a), there exists a constant $D_2'\geq D_2$ depending only on $C_0''$, $C_1''$, $D_2$ and $J$ such that \begin{equation} \widetilde{w}=F_f^{\circ j_0}(w)\in\mathbb{D}\big(\zeta,D_2'\log(2+|\zeta-1/\alpha|)\big) \end{equation} and \begin{equation}\label{equ-F-deri} |(F_f^{\circ j_0})'(w)|\leq 1+\frac{D_2'}{|\zeta-1/\alpha|}.
\end{equation} Let $C_2'\geq C_1'$ be a constant depending only on $C_1'$ and $D_2'$ such that if $|\zeta-1/\alpha|\geq C_2'$, then \begin{equation} |\zeta-1/\alpha|-2D_2'\log(2+|\zeta-1/\alpha|)\geq 4. \end{equation} Moreover, we assume that $\widehat{\varepsilon}_2>0$ is small such that if $\alpha\in(0,\widehat{\varepsilon}_2]$, then \begin{equation} \alpha|\zeta-1/\alpha|+D_2'\alpha\log(2+|\zeta-1/\alpha|)<\tfrac{2}{3} \text{\quad for all } \zeta\in\Lambda_2. \end{equation} For $\zeta\in\Lambda_2':=\Lambda_2\cap\{\zeta\in\mathbb{C}:|\zeta-1/\alpha|\geq C_2'\}$, we have $\alpha \widetilde{w}-1\in\{\varrho e^{\textup{i}\theta}:0<\varrho\leq\frac{2}{3} \text{ and }\tfrac{\pi}{4}<\theta<\tfrac{5\pi}{4}\}$. By \eqref{equ-varphi-min} and $\varphi(z)=|1-e^{2\pi\textup{i} z}|=|1-e^{2\pi\textup{i}(z-1)}|$, we have \begin{equation}\label{equ-1-exp-2} |1-e^{2\pi\textup{i}\alpha \widetilde{w}}|\geq 1-e^{-2\pi\alpha(|\zeta-1/\alpha|-D_2'\log(2+|\zeta-1/\alpha|))}. \end{equation} Similarly, by \eqref{equ-D-w-C-2}, Lemma \ref{lema-key-pre-Che}(b)(d) and Proposition \ref{prop-Cheraghi-L-f}(b), there exists a constant $C_3\geq 1$ depending only on $C_1''$, $C_2'$ and $D_2'$ such that if $\zeta\in\Lambda_2'$ then \begin{equation}\label{equ-L-f-deri-2} \frac{1}{|L_f'(w)|}\leq 1+\frac{C_3}{|\zeta-1/\alpha|}. \end{equation} Combining \eqref{equ-chi-deri-2}, \eqref{equ-F-deri}, \eqref{equ-1-exp-2} and \eqref{equ-L-f-deri-2}, if $\zeta\in\Lambda_2'$ we have \begin{equation} |\chi_f'(\zeta)|\leq\frac{\alpha}{1-e^{-2\pi\alpha(|\zeta-1/\alpha|-D_2'\log(2+|\zeta-1/\alpha|))}}\left(1+\frac{C_3'}{|\zeta-1/\alpha|}\right) \end{equation} for a constant $C_3'>0$ depending only on $C_3$ and $D_2'$. The proof is complete if we set $\varepsilon_4:=\min\{\varepsilon_3',\widehat{\varepsilon}_1,\widehat{\varepsilon}_2\}$, $D_6':=\max\{C_1,C_2'\}$ and $D_6:=\max\{C_2,C_3'\}$.
\end{proof}
\begin{rmk} Proposition \ref{prop-key-estimate-yyy} will be used in the proof of Lemma \ref{lema-go-up}. In \cite[Proposition 6.18]{Che19}, an estimate of $|\chi_f'(\zeta)|$ was obtained for $\zeta\in[1,1/(2\alpha)]$ in another form. \end{rmk}
\subsection{Renormalization tower and orbit relations}\label{subsec-basic-defi} In the rest of this paper, we always assume that the integer $N$ is large so that $N\geq 1/\varepsilon_4$, where $\varepsilon_4>0$ is the constant introduced in Proposition \ref{prop-key-estimate-yyy}. Let $[0;a_1,a_2,\cdots]$ be the continued fraction expansion of $\alpha\in\textup{HT}_N$. Define $\alpha_0:=\alpha$, and inductively for $n\geq 1$, define the sequence of real numbers $\alpha_n\in(0,1)$ as \begin{equation}\label{equ-gauss} \alpha_n=\frac{1}{\alpha_{n-1}}-\Big\lfloor\frac{1}{\alpha_{n-1}}\Big\rfloor, \text{ where } n\geq 1. \end{equation} Then each $\alpha_n$ has continued fraction expansion $[0;a_{n+1},a_{n+2},\cdots]$. By definition, we have $\alpha_n\in(0,\varepsilon_4]$ for all $n\in\mathbb{N}$. Let $\alpha\in \textup{HT}_N$ and $f_0\in\mathcal{IS}_\alpha\cup \{Q_\alpha\}$. By Theorem \ref{thm-IS-attr-rep-3}, the following sequence of maps is well-defined for all $n\geq 0$: \begin{equation} f_{n+1}:=\mathcal{R} f_n:U_{f_{n+1}}\to\mathbb{C}. \end{equation} Let $U_n:=U_{f_n}$ be the domain of definition of $f_n$ for $n\geq 0$. Then for all $n$, we have \begin{equation} f_n:U_n\to \mathbb{C}, ~f_n(0)=0,~f_n'(0)=e^{2\pi\textup{i}\alpha_n} \text{\quad and\quad} \textup{cv}=\textup{cv}_{f_n}=-4/27. \end{equation} For $n\geq 0$, let $\Phi_n:=\Phi_{f_n}$ be the Fatou coordinate of $f_n:U_n\to\mathbb{C}$ defined in the perturbed petal $\mathcal{P}_n:=\mathcal{P}_{f_n}$ and let $\mathcal{C}_n:=\mathcal{C}_{f_n}$ and $\mathcal{C}_n^\sharp:=\mathcal{C}_{f_n}^\sharp$ be the corresponding sets for $f_n$ defined in \eqref{defi-C-f-alpha}.
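The recursion \eqref{equ-gauss} above is the Gauss map acting on continued fractions: it deletes the first partial quotient $a_n$. A minimal numeric sketch (the periodic choice $a_i\equiv 20$ is only an illustration and plays no role in the text):

```python
import math

def gauss_step(a):
    """One step of the recursion alpha_n = 1/alpha_{n-1} - floor(1/alpha_{n-1})."""
    x = 1.0 / a
    return x - math.floor(x)

# alpha = [0; 20, 20, 20, ...] solves alpha = 1/(20 + alpha), i.e. alpha = sqrt(101) - 10
alpha = math.sqrt(101) - 10
assert math.floor(1 / alpha) == 20        # first partial quotient a_1
alpha1 = gauss_step(alpha)
assert abs(alpha1 - alpha) < 1e-9         # shifting the periodic expansion reproduces it
assert 0 < alpha1 < 1 / 20                # alpha_n < 1/a_{n+1} <= 1/N, as used in the text
```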
Let $k_n:=k_{f_n}$ be the positive integer in Proposition \ref{prop-CC-2} such that \begin{equation} S_n^0:=S_{f_n}=\mathcal{C}_n^{-k_n}\cup(\mathcal{C}_n^\sharp)^{-k_n}\subset\{z\in\mathcal{P}_n:0<\textup{Re}\,\Phi_n(z)<\lfloor\tfrac{1}{\alpha_n}\rfloor-\textbf{\textit{k}}-\tfrac{1}{2}\}. \end{equation} For $n\geq 0$, let $\widetilde{\mathcal{D}}_n:=\widetilde{\mathcal{D}}_{f_n}$ and $\mathcal{D}_n:=\mathcal{D}_{f_n}$ be the sets defined in \eqref{equ-MD-tilde-f} and \eqref{equ-MD-f} respectively. Note that $\mathcal{D}_n\subset\widetilde{\mathcal{D}}_n$ by Lemma \ref{lema:D-n}. According to Lemma \ref{lema:Phi-inverse}, we have a holomorphic map \begin{equation} \Phi_n^{-1}:\widetilde{\mathcal{D}}_n\to U_n\setminus\{0\} \end{equation} such that $\Phi_n^{-1}(\zeta+1)=f_n\circ\Phi_n^{-1}(\zeta)$ if $\zeta$, $\zeta+1\in\widetilde{\mathcal{D}}_n$. We denote the lift $\chi_{f_n,0}$ in \eqref{equ-chi-choice} by $\chi_{n,0}$. Then, for $n\geq 1$ we have \begin{equation}\label{equ-chi-n-0} \chi_{n,0}(\widetilde{\mathcal{D}}_n)\subset\{\zeta\in\mathbb{C}:1\leq\textup{Re}\,\zeta<\textit{\textbf{k}}_1+2 \text{ and }\textup{Im}\,\zeta>-2\}\subset\Phi_{n-1}(\mathcal{P}_{n-1}). \end{equation} Each $\chi_{n,0}$ is anti-holomorphic. For $j\in\mathbb{Z}$ we define \begin{equation}\label{eq:chi-n} \chi_{n,j}:=\chi_{n,0}+j. \end{equation} In the following we are mainly interested in $\chi_{n,j}$ with $0\leq j\leq a_n=\lfloor\tfrac{1}{\alpha_{n-1}}\rfloor$. For $\delta>0$, let $B_\delta(X)$ be the $\delta$-neighborhood of a set $X\subset\mathbb{C}$. The following lemma will be used to prove the uniform contraction with respect to the hyperbolic metrics in the domains of adjacent renormalization levels (see Lemma \ref{lema:exp-conv}).
\begin{lem}[{\cite[Lemma 2.1]{AC18}}]\label{lema:comp-inclu} There exists a constant $\delta_0>0$ depending only on the class $\mathcal{IS}_0$, such that for all $n\geq 1$ and $0\leq j\leq a_n$, we have \begin{equation} B_{\delta_0}\big(\chi_{n,j}(\mathcal{D}_n)\big)\subset\mathcal{D}_{n-1}. \end{equation} \end{lem}
For $n\geq 1$, we define an anti-holomorphic map $\psi_n$ by \begin{equation}\label{equ-psi-n} \psi_n:=\Phi_{n-1}^{-1}\circ\chi_{n,0}\circ\Phi_n:\mathcal{P}_n\to\mathcal{P}_{n-1}. \end{equation} Each $\psi_n$ extends continuously to $0\in\partial\mathcal{P}_n$ by mapping it to $0$. For $n\geq 1$, we define the composition \begin{equation} \Psi_n:=\psi_1\circ\psi_2\circ\cdots\circ\psi_n:\mathcal{P}_n\to\mathcal{P}_0\subset U_0. \end{equation} For $n\geq 0$ and $i\geq 1$, define the sector \begin{equation} S_n^i:=\psi_{n+1}\circ\cdots\circ\psi_{n+i}(S_{n+i}^0)\subset \mathcal{P}_n. \end{equation} In particular, $S_0^n\subset\mathcal{P}_0$ for all $n\geq 0$. Define \begin{equation} \mathcal{P}_n':=\{z\in\mathcal{P}_n:0<\textup{Re}\, \Phi_n(z)<\lfloor\tfrac{1}{\alpha_n}\rfloor-\textbf{\textit{k}}-1\}. \end{equation} Let $q_n$ be the denominator of the convergent $[0;a_1,\cdots,a_n]$ of the continued fraction expansion of $\alpha$. The following lemma was proved in \cite[\S 3]{Che19} and parts of the results can also be found in \cite[\S 1.5.5]{BC12}. The proof is based on the definition of near-parabolic renormalization.
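The denominators $q_n$ satisfy the classical recursion $q_n=a_nq_{n-1}+q_{n-2}$ with $q_{-1}=0$ and $q_0=1$; a small sketch (standard continued fraction arithmetic, not specific to the text above):

```python
from fractions import Fraction

def convergent_denominators(a_list):
    """q_n for the convergents of [0; a_1, a_2, ...] via q_n = a_n q_{n-1} + q_{n-2}."""
    q_prev, q = 0, 1          # q_{-1} = 0, q_0 = 1
    out = []
    for a in a_list:
        q_prev, q = q, a * q + q_prev
        out.append(q)
    return out

def continued_fraction_value(a_list):
    """Evaluate the finite continued fraction [0; a_1, ..., a_n] exactly."""
    x = Fraction(0)
    for a in reversed(a_list):
        x = Fraction(1, a + x)
    return x

a = [1, 2, 3, 4]
qs = convergent_denominators(a)
assert qs == [1, 3, 10, 43]
# the last q_n is the denominator of the full convergent (in lowest terms)
assert continued_fraction_value(a).denominator == qs[-1]
```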
\begin{lem}[{\cite[Lemmas 3.3 and 3.4]{Che19}}]\label{lema-Cheraghi-2} For every $n\geq 1$, we have \begin{enumerate} \item For every $z\in\mathcal{P}_n'$, $f_{n-1}^{\circ a_n}\circ\psi_n(z)=\psi_n\circ f_n(z)$ and $f_0^{\circ q_n}\circ\Psi_n(z)=\Psi_n\circ f_n(z)$; \item For every $z\in S_n^0$, $f_{n-1}^{\circ (k_n a_n+1)}\circ\psi_n(z)=\psi_n\circ f_n^{\circ k_n}(z)$ and $f_0^{\circ (k_n q_n+q_{n-1})}\circ\Psi_n(z)=\Psi_n\circ f_n^{\circ k_n}(z)$; and \item For every $m<n$, $f_n:\mathcal{P}_n'\to \mathcal{P}_n$ and $f_n^{\circ k_n}:S_n^0\to \mathcal{C}_n\cup\mathcal{C}_n^\sharp$ are conjugate to some iterates of $f_m$ on the set $\psi_{m+1}\circ\cdots\circ\psi_n(\mathcal{P}_n)$. \end{enumerate} \end{lem} The above lemma implies that one iteration at a deep level of the renormalization corresponds to several iterations at a shallow one. For each $n\in\mathbb{N}$, by \eqref{equ-k-f-kc} we define \begin{equation}\label{equ-b-n} b_n:=k_n+\lfloor\tfrac{1}{\alpha_n}\rfloor-\textbf{\textit{k}}-2\geq a_{n+1}+1. \end{equation} By the definition of $\widetilde{\mathcal{D}}_n$ in \eqref{equ-MD-tilde-f}, the following set is well-defined for each $n\geq 0$: \begin{equation}\label{equ-omega-n} \Omega_n^0:=\bigcup_{j=0}^{b_n}f_n^{\circ j}(S_n^0)\cup\{0\}. \end{equation} By Lemma \ref{lema-Cheraghi-2}, the dynamics of $f_n$ can be transferred to the dynamics of $f_0$. Specifically, the first $k_n$ iterates of $f_n$ on $S_n^0$ correspond to $k_n q_n+q_{n-1}$ iterates of $f_0$, and the next $\lfloor\tfrac{1}{\alpha_n}\rfloor-\textbf{\textit{k}}-2$ iterates correspond to $q_n(\lfloor\tfrac{1}{\alpha_n}\rfloor-\textbf{\textit{k}}-2)$ iterates of $f_0$. Therefore, we can define \begin{equation} \Omega_0^n:=\bigcup_{j=0}^{b_n q_n+q_{n-1}}f_0^{\circ j}(S_0^n)\cup\{0\}. \end{equation} \begin{defi}[High-type Brjuno numbers] Let $N$ be the integer fixed before.
Define \begin{equation}\label{equ:Brjuno} \mathcal{B}_N:= \left\{\alpha=[0;a_1,a_2,\cdots]\in (0,1)\setminus\mathbb{Q} \left| \begin{array}{l} \alpha \text{ is Brjuno and}\\ a_n\geq N, \,\forall\, n\geq 1 \end{array} \right. \right\}. \end{equation} Then $\mathcal{B}_N$ is strictly contained in $\textup{HT}_N$. \end{defi} \begin{prop}[{\cite[Propositions 3.5 and 5.10(2)]{Che19}}]\label{prop-Cherahi-nest} Let $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in \textup{HT}_N$. Then for all $n\geq 0$, we have \begin{enumerate} \item $\Omega_0^{n+1}$ is compactly contained in the interior of $\Omega_0^n$ and $f_0(\Omega_0^{n+1})\subset\Omega_0^n$; \item If $\alpha\in\mathcal{B}_N$, then $\operatorname{int}\,(\bigcap_{n=0}^\infty\Omega_0^n)=\Delta_0$, where $\Delta_0$ is the Siegel disk of $f_0$. \end{enumerate} \end{prop} In the rest of this paper, unless otherwise stated, for a given map $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in\textup{HT}_N$, we use $f_n$ to denote the map obtained after the $n$-th near-parabolic renormalization, and we use $U_n$, $\mathcal{P}_n$, $\Phi_n$, etc.\ to denote the domain of definition, the perturbed petal, the Fatou coordinate, etc.\ of $f_n$. \section{The suitable heights}\label{sec-height} \subsection{Radii of Siegel disks} The following classical distortion theorem can be found in \cite[Theorem 1.6, p.\,21]{Pom75}. \begin{thm}[{Koebe's distortion theorem}]\label{thm-Koebe} Suppose $f:\mathbb{D}\to\mathbb{C}$ is a univalent map with $f(0)=0$ and $f'(0)=1$. Then for each $z\in\mathbb{D}$ we have \begin{enumerate} \item $\tfrac{1-|z|}{(1+|z|)^3}\leq|f'(z)|\leq\tfrac{1+|z|}{(1-|z|)^3}$; \item $\tfrac{|z|}{(1+|z|)^2}\leq|f(z)|\leq\tfrac{|z|}{(1-|z|)^2}$; and \item $|\arg f'(z)|\leq 2\log\tfrac{1+|z|}{1-|z|}$. \end{enumerate} \end{thm} Let $\alpha_0:=\alpha\in \mathcal{B}_N$ and, for $n\geq 1$, let $\alpha_n\in(0,1)$ be the numbers defined inductively as in \eqref{equ-gauss}.
Denote $\beta_{-1}=1$ and $\beta_n:=\prod_{i=0}^n\alpha_i$ for $n\geq 0$. The \emph{Brjuno sum} $\mathcal{B}(\alpha)$ of $\alpha$ in the sense of Yoccoz is defined as \begin{equation}\label{equ-Brjuno-sum-Yoccoz} \mathcal{B}(\alpha):=\sum_{n=0}^{+\infty}\beta_{n-1}\log\frac{1}{\alpha_n}=\log\frac{1}{\alpha_0}+\alpha_0\log\frac{1}{\alpha_1}+\alpha_0\alpha_1\log\frac{1}{\alpha_2}+\cdots. \end{equation} It is proved in \cite[\S 1.5]{Yoc95} that $|\mathcal{B}(\alpha)-\sum_{n=0}^\infty q_n^{-1}\log q_{n+1}|\leq C'$ for a universal constant $C'>0$. Suppose a holomorphic map $f$ has a Siegel disk $\Delta_f$ centered at the origin which is compactly contained in the domain of definition of $f$. The \emph{inner radius} of $\Delta_f$ is the radius of the largest open disk centered at the origin that is contained in $\Delta_f$. \begin{lem}\label{lema-fixed-disk} There exists a universal constant $D_7>1$ such that for all $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in \mathcal{B}_N$, the inner radius of the Siegel disk of $f_n$ is $c_ne^{-\mathcal{B}(\alpha_n)}$ with $1/D_7 \leq c_n\leq D_7$ for every $n\in\mathbb{N}$. \end{lem} \begin{proof} By the definition of near-parabolic renormalization, it follows that $f_n\in\mathcal{IS}_{\alpha_n}$ with $\alpha_n\in \mathcal{B}_N$ for all $n\geq 1$. Then according to \cite{Brj71}, each $f_n$ with $n\geq 0$ has a Siegel disk centered at the origin. By the definition of Inou-Shishikura's class and Koebe's distortion theorem (Theorem \ref{thm-Koebe}(b)), $f_n$ is univalent in $\mathbb{D}(0,\widetilde{c})$ for a universal constant $\widetilde{c}>0$.
According to Yoccoz \cite[p.\,21]{Yoc95}, the Siegel disk of $f_n$ contains a round disk $\mathbb{D}(0, C_1 e^{-\mathcal{B}(\alpha_n)})$ for a universal constant $C_1>0$, where \begin{equation}\label{equ:Brj-Yoccoz-n} \mathcal{B}(\alpha_n):=\log\frac{1}{\alpha_n}+\sum_{k=1}^{+\infty}\alpha_n\cdots\alpha_{n+k-1}\log\frac{1}{\alpha_{n+k}} \end{equation} is the Brjuno sum of $\alpha_n$ defined as in \eqref{equ-Brjuno-sum-Yoccoz}. On the other hand, by \cite[Theorem G]{Che19}, there is a universal constant $C_2>1$ such that the inner radius of the Siegel disk of $f_n$ is bounded above by $C_2 e^{-\mathcal{B}(\alpha_n)}$ for all $n\in\mathbb{N}$. The lemma follows if we set $D_7:=\max\{C_2,1/C_1\}$. \end{proof} \subsection{Definition of the heights} In the following, we use $\Delta_n$ to denote the Siegel disk of $f_n$ for all $n\geq 0$, where $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in \mathcal{B}_N$ and $f_n$ is obtained by applying the near-parabolic renormalization operator. \begin{defi}[{The heights}] Let $M\geq 1$. For $n\geq 0$, we define \begin{equation}\label{defi-eta} h_n:=\frac{\mathcal{B}(\alpha_{n+1})}{2\pi}+\frac{M}{\alpha_n}. \end{equation} \end{defi} There are many choices of the height $h_n$. One of the candidates is $\tfrac{\mathcal{B}(\alpha_{n+1})}{2\pi}+M$. In order to apply Lemma \ref{lema-key-estimate-lp}(a) directly, we choose $h_n$ above so that $h_n> 1/\alpha_n$. Similar to \eqref{defi-C-f-alpha} (see Figure \ref{Fig_near-para-norm-defi}), we define \begin{equation} \widetilde{\mathcal{C}}_n^\sharp:=\{z\in\mathcal{P}_n:1/2\leq\textup{Re}\,\Phi_n(z)\leq 3/2 \text{~and~} \textup{Im}\,\Phi_n(z)\geq h_n\}. \end{equation} Let $(\widetilde{\mathcal{C}}_n^\sharp)^{-k_n}$ be the component of $f_n^{-k_n}(\widetilde{\mathcal{C}}_n^\sharp)$ contained in $(\mathcal{C}_n^\sharp)^{-k_n}$. Recall that $\psi_n$ is defined in \eqref{equ-psi-n}.
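To make the heights concrete, here is a small numerical sketch (ours, purely illustrative, not used anywhere in the argument) of the truncated Brjuno sum and of $h_n=\tfrac{\mathcal{B}(\alpha_{n+1})}{2\pi}+\tfrac{M}{\alpha_n}$, assuming the Gauss-map orbit $(\alpha_k)$ is given as a finite list. For the golden mean the orbit is constant, so $\mathcal{B}(\alpha)=\log(1/\alpha)/(1-\alpha)$ in closed form, which gives a check; that number is of course not of high type.

```python
from math import log, pi

def brjuno_sum(alphas):
    """Truncated Brjuno sum: sum_{k>=0} beta_{k-1} * log(1/alpha_k),
    where beta_{-1} = 1 and beta_k = alpha_0 * ... * alpha_k."""
    total, beta = 0.0, 1.0
    for a in alphas:
        total += beta * log(1 / a)
        beta *= a
    return total

def height(alphas, n, M=1.0):
    # Toy version of h_n = B(alpha_{n+1})/(2*pi) + M/alpha_n.
    return brjuno_sum(alphas[n + 1:]) / (2 * pi) + M / alphas[n]

phi = (5 ** 0.5 - 1) / 2     # golden mean: alpha_k = phi for every k
orbit = [phi] * 200          # the exact (constant) Gauss-map orbit
print(abs(brjuno_sum(orbit) - log(1 / phi) / (1 - phi)))  # tiny truncation error
print(height(orbit, 0) > 1 / phi)  # h_n > 1/alpha_n, as required above
```

Note that the orbit is fed in exactly rather than computed by floating-point iteration of the Gauss map, which is numerically unstable since the map is expanding.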
For $n\geq 0$ and $i\geq 1$, we denote \begin{equation} V_n^0:=(\widetilde{\mathcal{C}}_n^\sharp)^{-k_n}\subset S_n^0 \text{\quad and\quad} V_n^i:=\psi_{n+1}\circ\cdots\circ\psi_{n+i}(V_{n+i}^0)\subset S_n^i. \end{equation} \begin{lem}\label{lema-Jordan-domain} There exists a universal constant $M_1\geq 1$ such that if $M\geq M_1$, then for all $n\geq 0$ and $i\geq 0$, $V_n^i$ is compactly contained in $\Delta_n$. \end{lem} \begin{proof} We first prove that $V_n^0$ is compactly contained in $\Delta_n$ for all $n\geq 0$ if $M\geq 1$ is large enough. By a straightforward calculation, the image of $\Phi_n(\widetilde{\mathcal{C}}_n^\sharp)$ under $\textup{Exp}$ is a punctured round disk centered at the origin with radius \begin{equation} \iota_n:=\frac{4}{27}e^{-2\pi h_n}=\frac{4}{27}e^{-\frac{2\pi M}{\alpha_n}}\cdot e^{-\mathcal{B}(\alpha_{n+1})}<\frac{1}{D_7}e^{-\mathcal{B}(\alpha_{n+1})} \end{equation} if $M\geq M_1:=\frac{1}{2\pi}\log D_7+1$, where $D_7> 1$ is the universal constant introduced in Lemma \ref{lema-fixed-disk}. This implies that $\textup{Exp}\circ\Phi_n(\widetilde{\mathcal{C}}_n^\sharp)$ is compactly contained in the Siegel disk of $f_{n+1}$ if $M\geq M_1$. Hence there exists a small open neighborhood $D$ of $\widetilde{\mathcal{C}}_n^\sharp$ in $\mathcal{P}_n$ such that $\textup{Exp}\circ\Phi_n(D)$ is compactly contained in the Siegel disk $\Delta_{n+1}$. By Lemma \ref{lema-Cheraghi-2}(c), it follows that $f_n$ can be iterated infinitely many times in $D$ and the orbit is compactly contained in the domain of definition of $f_n$. Note that $0$ is contained in $\overline{D}$. Therefore, $D$ is contained in the Siegel disk of $f_n$ and $\widetilde{\mathcal{C}}_n^\sharp\Subset\Delta_n$. Since $f_n^{\circ k_n}(V_n^0)=\widetilde{\mathcal{C}}_n^\sharp$ and $0\in\partial V_n^0$, we have $V_n^0\Subset\Delta_n$.
For each $z\in V_n^0$, there exists a small open neighborhood of $z$ on which $f_n$ can be iterated infinitely many times. By Lemma \ref{lema-Cheraghi-2}(b), there exists a small open neighborhood of $\Psi_n(z)\in V_0^n$ on which $f_0$ can also be iterated infinitely many times. Since each $z\in V_n^0$ satisfies this property and $0\in\partial V_0^n$, it follows that $V_0^n\Subset\Delta_0$. By a completely similar argument, we have $V_n^i\Subset \Delta_n$ for any $i> 0$ and $n> 0$. \end{proof} Note that the forward orbit of $V_n^i$ is compactly contained in $\Delta_n$ for any $n\geq 0$ and $i\geq 0$. Moreover, the backward orbit of $V_n^i$ is also compactly contained in $\Delta_n$ if the preimage under $f_n$ is chosen in $\Delta_n$. In the following, we always assume that $M\geq M_1$ unless otherwise stated. \subsection{The location of the neighborhoods}\label{subsec-loc-neig} For $n\geq 0$, each $V_n^0\cup\{0\}$ is a closed topological triangle\footnote{Here we use the fact that for any $x\in (0,\lfloor\tfrac{1}{\alpha_n}\rfloor-\textit{\textbf{k}})$, $\lim_{y\to+\infty}\Phi_n^{-1}(x+y\textup{i})=0$. See \cite[Proposition 2.4(a)]{CS15} or \cite[Lemma 6.9]{Che19}.} whose boundary consists of three analytic curves. We use $\partial^l V_n^0$, $\partial^r V_n^0$ and $\partial^b V_n^0$ to denote the three smooth edges of $V_n^0$, where $f_n(\partial^l V_n^0)=\partial^r V_n^0$ and $\partial^l V_n^0\cap \partial^r V_n^0=\{0\}$. The superscripts `$l$', `$r$' and `$b$' denote `left', `right' and `bottom', respectively. A similar naming convention is adopted for $V_n^i$ and their forward images for all $n\geq 0$ and $i\geq 0$. For example, $\partial^l V_n^i:=\psi_{n+1}\circ\cdots\circ\psi_{n+i}(\partial^l V_{n+i}^0)$ if $i$ is even, while $\partial^l V_n^i:=\psi_{n+1}\circ\cdots\circ\psi_{n+i}(\partial^r V_{n+i}^0)$ if $i$ is odd (note that each $\psi_j$ is anti-holomorphic).
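The parity rule above (the `left' and `right' edges swap exactly when $i$ is odd) reflects the elementary fact that a composition of anti-holomorphic maps is holomorphic or anti-holomorphic according to whether their number is even or odd. A toy check, with the $\psi_j$ replaced by anti-affine maps $z\mapsto a\bar z+b$ (an assumption made purely for illustration):

```python
# Toy illustration (not the actual maps psi_j): each psi is modeled by an
# anti-affine map z -> a * conj(z) + b, which is anti-holomorphic. Composing
# an even number of them yields a holomorphic (affine) map, an odd number
# an anti-holomorphic one.

def anti_affine(a, b):
    return lambda z: a * z.conjugate() + b

def compose(maps):
    def h(z):
        for f in reversed(maps):
            z = f(z)
        return z
    return h

def is_holomorphic(f, z=0.3 + 0.2j, h=1e-6):
    # A map is complex-differentiable at z iff the difference quotients in
    # the directions 1 and i agree (a finite-difference Cauchy-Riemann test).
    d1 = (f(z + h) - f(z)) / h
    d2 = (f(z + 1j * h) - f(z)) / (1j * h)
    return abs(d1 - d2) < 1e-6

psis = [anti_affine(0.5 + 0.1j, 0.2), anti_affine(0.3 - 0.2j, -0.1j)]
print(is_holomorphic(psis[0]))        # False: one anti-holomorphic map
print(is_holomorphic(compose(psis)))  # True: an even number of them
```

For affine models the finite-difference test is exact up to rounding, which is why a crude tolerance suffices here.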
For simplicity, we denote the segment \begin{equation}\label{eq-I-n} I_n^0:=\partial^b V_n^0\subset\Delta_n. \end{equation} The `left' and the `right' end points of $I_n^0$ are denoted by $\partial^l I_n^0$ and $\partial^r I_n^0$ respectively, so that $f_n(\partial^l I_n^0)=\partial^r I_n^0$. A similar naming convention is adopted for $I_n^i$ and their forward images for all $n\geq 0$ and $i\geq 0$. In particular, by Lemma \ref{lema-Cheraghi-2}(a) we have $f_0^{\circ q_n}(\partial^l I_0^n)=\partial^r I_0^n$ if $n$ is even and $f_0^{\circ q_n}(\partial^r I_0^n)=\partial^l I_0^n$ if $n$ is odd. Moreover, let $\partial^l S_n^i$ and $\partial^r S_n^i$ be the smooth edges of $S_n^i$ containing $\partial^l V_n^i$ and $\partial^r V_n^i$ respectively. Let $k_n=k_{f_n}\geq 1$ be the integer introduced in Proposition \ref{prop-CC-2}, $D_3>0$ be a constant introduced in Lemma \ref{lema-key-estimate-lp} and $\mathcal{D}_n=\mathcal{D}_{f_n}$ be the set defined in \eqref{equ-MD-f}. \begin{lem}[{see Figure \ref{Fig_W-n}}]\label{lema-stru-R-n} There exists a constant $M_2\geq 1$ such that if $M\geq M_2$, then for all $n\in\mathbb{N}$, we have \begin{enumerate} \item $\textup{diam}(\Phi_n(I_n^0))\leq 2$ and $|\textup{Im}\,\zeta- h_n|\leq 1$ for all $\zeta\in\Phi_n(I_n^0)$; \item For all $y\geq h_n-1$, $u_n(y):=\{\zeta\in\mathbb{C}:\textup{Im}\,\zeta=y\}\cap\Phi_n(\partial^l S_n^0)$ is a singleton; \item $\textup{diam}(\beta_n')\leq 1$, where $\beta_n'$ is the arc in $\Phi_n(\partial^l S_n^0)$ connecting $u_n( h_n)$ with $\Phi_n(\partial^l I_n^0)$. \end{enumerate} \end{lem} \begin{proof} The proof is mainly based on applying Koebe's distortion theorem and the definition of near-parabolic renormalization. (a) By the definition of near-parabolic renormalization, we have \begin{equation} f_{n+1}(\textup{Exp}\circ\Phi_n(V_n^0))=\textup{Exp}\circ\Phi_n(\widetilde{\mathcal{C}}_n^\sharp).
\end{equation} Note that $\textup{Exp}\circ\Phi_n(\widetilde{\mathcal{C}}_n^\sharp)\cup\{0\}$ is a closed round disk with radius \begin{equation} \iota_n=\frac{4}{27}e^{-\frac{2\pi M}{\alpha_n}}\cdot e^{-\mathcal{B}(\alpha_{n+1})}. \end{equation} By Lemma \ref{lema-fixed-disk}, $\Delta_{n+1}$ contains the disk $\mathbb{D}(0,\varsigma_n)$, where \begin{equation} \varsigma_n:=D_7^{-1} e^{-\mathcal{B}(\alpha_{n+1})}. \end{equation} Therefore, \begin{equation}\label{equ-inv-f-g} g:=f_{n+1}^{-1}:\mathbb{D}(0,\varsigma_n)\to\Delta_{n+1} \end{equation} is a well-defined univalent map with $|g'(0)|=1$. If $M$ is large enough that $\iota_n$ is much smaller than $\varsigma_n$, then by Theorem \ref{thm-Koebe} the distortion of the circle $g(\partial\mathbb{D}(0,\iota_n))$ relative to $\partial\mathbb{D}(0,\iota_n)$ can be made arbitrarily small. Part (a) is proved once we notice that $\Phi_n(I_n^0)$ is the closure of a connected component of $\textup{Exp}^{-1}\circ g(\partial\mathbb{D}(0,\iota_n)\setminus\{\iota_n\})$. (b) Still by the definition of near-parabolic renormalization, we have \begin{equation} f_{n+1}(\textup{Exp}\circ\Phi_n(\partial^l S_n^0))=(0,\tfrac{4}{27}e^{4\pi}]. \end{equation} Since $\mathbb{D}(0,\varsigma_n)\subset\Delta_{n+1}$, we have $f_{n+1}^{-1}([0,\tfrac{4}{27}e^{4\pi}])\cap g(\mathbb{D}(0,\varsigma_n))=g([0,\varsigma_n))$, where $g$ is defined in \eqref{equ-inv-f-g}. On the other hand, by \eqref{equ-inv-f-g} and Theorem \ref{thm-Koebe}(b), we may assume that $M$ is so large that $\iota_n$ is small and $g(\mathbb{D}(0,\varsigma_n))\supset\overline{\mathbb{D}}(0,e^{2\pi}\iota_n)$. According to Theorem \ref{thm-Koebe}(c), we may assume further that $M$ is so large that $g([0,\varsigma_n))\cap \partial\mathbb{D}(0,r)$ is a singleton for any $0<r\leq e^{2\pi}\iota_n$.
Therefore, \begin{equation} \textup{Exp}\circ\Phi_n(\partial^l S_n^0)\cap\{z\in\mathbb{C}:|z|=r\} \end{equation} is a singleton, where $0<r\leq e^{2\pi}\iota_n=\tfrac{4}{27}e^{-2\pi( h_n-1)}$. This proves Part (b). (c) By the definition of near-parabolic renormalization, we have \begin{equation} \textup{Exp}(u_n( h_n))=g([0,\varsigma_n))\cap\partial\mathbb{D}(0,\iota_n) \text{\quad and\quad} \textup{Exp}\circ\Phi_n(\partial^l I_n^0)=g(\iota_n). \end{equation} Moreover, by the definition of $\beta_n'$ we have $\textup{Exp}(\beta_n')\subset g([0,\varsigma_n))$. By Theorem \ref{thm-Koebe}, the Euclidean length of the arc $\textup{Exp}(\beta_n')$ with end points $g([0,\varsigma_n))\cap\partial\mathbb{D}(0,\iota_n)$ and $g(\iota_n)$ can be made arbitrarily small if $M$ is large enough. This proves Part (c). \end{proof} Let $D_3>0$ be introduced in Lemma \ref{lema-key-estimate-lp}. In the following we always assume that $M\geq\max\{M_2,D_3+\tfrac{1}{2\pi}\log\tfrac{4D_7}{27}+2\}$ unless otherwise stated. Then \begin{equation}\label{equ-y-n-alpha} y_n:=\frac{\mathcal{B}(\alpha_{n+1})}{2\pi}+M-D_3-\frac{3}{2} > \frac{\mathcal{B}(\alpha_{n+1})}{2\pi}-\frac{1}{2\pi}\log\frac{27 c_{n+1}}{4}. \end{equation} This implies that if $\textup{Im}\,\zeta\geq y_n$, then $\zeta\in\textup{Exp}^{-1}(\Delta_{n+1})$. \section{The sequence of curves is convergent}\label{sec-convergent} In this section, we define a sequence of continuous curves $(\gamma_n^i)_{n\in\mathbb{N}}$ in the Fatou coordinate planes with $i\in\mathbb{N}$. The image of each $\gamma_n^i$ under $\Phi_n^{-1}$ is a continuous closed curve contained in the Siegel disk $\Delta_n$ of $f_n$. We shall prove that $(\gamma_0^n)_{n\in\mathbb{N}}$ converges uniformly to the boundary of $\Delta_0$. \subsection{Definition of the curves and their parametrization} For each $n\in\mathbb{N}$, note that $a_{n+1}=\lfloor\tfrac{1}{\alpha_n}\rfloor$.
Recall that \begin{equation} u_n:=u_n( h_n)=\{\zeta\in\mathbb{C}:\textup{Im}\,\zeta= h_n\}\cap\Phi_n(\partial^l S_n^0) \end{equation} is introduced in Lemma \ref{lema-stru-R-n}(b). Since $f_n^{\circ k_n}(S_n^0)=\mathcal{C}_n\cup\mathcal{C}_n^\sharp$, we have $\textup{Re}\,\zeta>a_{n+1}-\textit{\textbf{k}}$ for all $\zeta\in\Phi_n(S_n^0)+k_n$. Therefore, we have \begin{equation}\label{equ-u-n-domain} a_{n+1}-\textit{\textbf{k}}-k_n< \textup{Re}\, u_n< a_{n+1}-\textit{\textbf{k}}-\tfrac{3}{2}. \end{equation} We denote \begin{equation} u_n':=a_{n+1}-\textit{\textbf{k}}-k_n-\tfrac{1}{2}+ h_n\textup{i}. \end{equation} According to \eqref{equ-u-n-domain}, we have $\textup{Re}\, u_n'<\textup{Re}\, u_n$. Denote \begin{equation} u_n'':=\Phi_n(\partial^l I_n^0). \end{equation} Let $\beta_n'$ be the arc in $\Phi_n(\partial^l S_n^0)$ connecting $u_n$ with $u_n''$. See Figure \ref{Fig_W-n}. We first give the definitions of two curves $\gamma_n^0(t)$ and $\gamma_n^1(t)$, where $t\in[0,1]$, and then define the curves $(\gamma_n^i(t))_{i\in\mathbb{N}}$ inductively.
\textbf{Definition of $\gamma_n^0$}: The curve $\gamma_n^0(t):[0,1]\to\mathbb{C}$ is defined piecewise as follows: \begin{enumerate} \item[(a$_0$)] For $t\in[0,1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}}]$, define $\gamma_n^0(t):=a_{n+1}t+\tfrac{1}{2}+ h_n\textup{i}$; \item[(b$_0$)] Let $\gamma_n^0(t):[1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}},1-\tfrac{k_n}{a_{n+1}}]\to [u_n',u_n]\cup\beta_n'$ be a homeomorphism such that \begin{equation} \gamma_n^0(1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}})=u_n' \text{ and } \gamma_n^0(1-\tfrac{k_n}{a_{n+1}})=u_n''; \end{equation} \item[(c$_0$)] Let $\gamma_n^0(t):[1-\tfrac{k_n}{a_{n+1}},1-\tfrac{k_n-1}{a_{n+1}}]\to \Phi_n(I_n^0)$ be a homeomorphism such that \begin{equation} \gamma_n^0(1-\tfrac{k_n}{a_{n+1}})=u_n'' \text{ and }\gamma_n^0(1-\tfrac{k_n-1}{a_{n+1}})=u_n''+1; \end{equation} \item[(d$_0$)] For $t\in[1-\tfrac{k_n-j}{a_{n+1}},1-\tfrac{k_n-j-1}{a_{n+1}}]$ with $1\leq j\leq k_n-1$, define $\gamma_n^0(t):=\gamma_n^0(t-\tfrac{j}{a_{n+1}})+j$. \end{enumerate} \begin{figure} \caption{The sketch of the construction of the continuous curve $\gamma_n^0$ (in blue) in the Fatou coordinate plane of $f_n$. The two red dots denote the initial and terminal points of $\gamma_n^0$; they have the same image under the map $\Phi_n^{-1}$.} \label{Fig_W-n} \end{figure} \begin{lem}[{See Figure \ref{Fig_W-n}}]\label{lema-gamma-n-0} The map $\gamma_n^0(t):[0,1]\to\mathbb{C}$ has the following properties: \begin{enumerate} \item $\gamma_n^0$ and $\gamma_n^0+1$ are simple arcs in $\mathcal{D}_n$; \item $\gamma_n^0(0)=\tfrac{1}{2}+ h_n\textup{i}$ and $\gamma_n^0(1)=u_n''+k_n$; \item $\Phi_n^{-1}(\gamma_n^0)$ is a continuous closed curve in $\Delta_n$; and \item $|\textup{Im}\,\gamma_n^0(t)- h_n|\leq 1$ for all $t\in[0,1]$. \end{enumerate} \end{lem} \begin{proof} Parts (a) and (b) follow from the definition of $\gamma_n^0$.
For Part (c), since $f_n^{\circ k_n}(\Phi_n^{-1}(u_n''))=\Phi_n^{-1}(\tfrac{1}{2}+ h_n\textup{i})$, we have $\Phi_n^{-1}(\tfrac{1}{2}+ h_n\textup{i})=\Phi_n^{-1}(u_n''+k_n)$ by Lemma \ref{lema:Phi-inverse}. This implies that $\Phi_n^{-1}(\gamma_n^0)$ is a continuous closed curve in $\Delta_n$. Part (d) is an immediate consequence of Lemma \ref{lema-stru-R-n}(a)(c). \end{proof} Before introducing $\gamma_n^1$, we define a thickened curve $\widetilde{\gamma}_n^0(t):[0,1]\to\mathbb{C}$ of $\gamma_n^0$: \begin{equation}\label{equ-gamma-tilde-n-0} \widetilde{\gamma}_n^0(t):= \left\{ \begin{aligned} & \gamma_n^0\Big(\tfrac{a_{n+1}}{a_{n+1}-1}t\Big) & ~~~\text{if } & t\in[0,1-\tfrac{1}{a_{n+1}}],\\ & \gamma_n^0(t)+1 & ~~~\text{if } & t\in (1-\tfrac{1}{a_{n+1}},1]. \end{aligned} \right. \end{equation} One can see that $\widetilde{\gamma}_n^0=\gamma_n^0\cup (\gamma_n^0([1-\tfrac{1}{a_{n+1}},1])+1)=\gamma_n^0\cup (\Phi_n(I_n^0)+k_n)$ and that $\widetilde{\gamma}_n^0(t):[0,1]\to\mathbb{C}$ is a continuous curve in $\mathcal{D}_n$. Let $\chi_{n,0}:=\chi_{f_n,0}$ be the anti-holomorphic map defined in \eqref{equ-chi-choice}. \textbf{Definition of $\gamma_n^1$}: The curve $\gamma_n^1(t):[0,1]\to\mathbb{C}$ is defined piecewise as follows: \begin{enumerate} \item[(a$_1$)] For $t\in[0,\tfrac{1}{a_{n+1}}]$, define $\gamma_n^1(t):=\chi_{n+1,0}\circ \widetilde{\gamma}_{n+1}^0(1-a_{n+1}t)$; \item[(b$_1$)] For $t\in (\tfrac{j}{a_{n+1}},\tfrac{j+1}{a_{n+1}}]$, where $1\leq j\leq a_{n+1}-1$, define \begin{equation} \gamma_n^1(t):=\chi_{n+1,j}\circ \gamma_{n+1}^0(j+1-a_{n+1}t), \end{equation} where $\chi_{n+1,j}=\chi_{n+1,0}+j$ is defined in \eqref{eq:chi-n}. \end{enumerate} Let $D_3>0$ be the constant introduced in Lemma \ref{lema-key-estimate-lp}.
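The gluing in the definition of the thickened curve is continuous precisely because $\gamma_n^0(t)=\gamma_n^0(t-\tfrac{1}{a_{n+1}})+1$ on the last subinterval, so the two branches agree at $t=1-\tfrac{1}{a_{n+1}}$. A toy model (a straight line with the same translation property, not the actual Fatou-coordinate curve) makes this easy to check numerically:

```python
# Toy model of the thickening: if gamma satisfies
# gamma(t) = gamma(t - 1/a) + 1 near t = 1, then the two branches of the
# thickened curve match at the gluing parameter t = 1 - 1/a.

a = 7  # stand-in for a_{n+1}

def gamma(t):
    # Toy curve with the translation property gamma(t) = gamma(t - 1/a) + 1:
    # simply a straight line of slope a.
    return a * t + 0.5j

def gamma_tilde(t):
    # First branch reparametrizes gamma over [0, 1 - 1/a]; second branch
    # traverses the translated last piece gamma + 1.
    if t <= 1 - 1 / a:
        return gamma(a / (a - 1) * t)
    return gamma(t) + 1

t0 = 1 - 1 / a
left = gamma_tilde(t0)            # = gamma(1)
right = gamma_tilde(t0 + 1e-9)    # ~ gamma(1 - 1/a) + 1 = gamma(1)
print(abs(left - right))          # tiny: the gluing is continuous
```

The same computation applies verbatim to the thickened curves of the higher-level curves, since they are glued by the identical formula.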
\begin{lem}\label{lema-gamma-n-1} The map $\gamma_n^1(t):[0,1]\to\mathbb{C}$ has the following properties: \begin{enumerate} \item $\gamma_n^1$ and $\gamma_n^1+1$ are continuous curves in $\mathcal{D}_n$; \item $\gamma_n^1(0)=\chi_{n+1,0}(\gamma_{n+1}^0(1)+1)$ and $\gamma_n^1(1)=\chi_{n+1,0}(\gamma_{n+1}^0(1))+a_{n+1}$; \item $\Phi_n^{-1}(\gamma_n^1(0))=\Phi_n^{-1}(\gamma_n^1(1))$ and $\Phi_n^{-1}(\gamma_n^1)$ is a continuous closed curve in $\Delta_n$; and \item There exists a constant $D_8>0$ which is independent of $n$ such that for all $t\in[0,1]$, $|\textup{Re}\,\gamma_n^0(t)-\textup{Re}\,\gamma_n^1(t)|\leq D_8$ and $|\textup{Im}\,\gamma_n^1(t)-\tfrac{\mathcal{B}(\alpha_{n+1})}{2\pi}-M|\leq D_3+\tfrac{1}{2}$. \end{enumerate} \end{lem} \begin{proof} (a) Since $\chi_{n+1,j}$ is anti-holomorphic for all $j\in\mathbb{Z}$, we have \begin{equation} \chi_{n+1,j}(\gamma_{n+1}^0(0))=\chi_{n+1,j}(\gamma_{n+1}^0(1))+1=\chi_{n+1,j+1}(\gamma_{n+1}^0(1)), \end{equation} where $0\leq j\leq a_{n+1}-2$. Therefore, $\gamma_n^1(t):[0,1]\to\mathbb{C}$ is a continuous curve. By Lemma \ref{lema:comp-inclu}, $\gamma_n^1$ and $\gamma_n^1+1$ are continuous curves in $\mathcal{D}_n$. (b) By the definition of $\gamma_n^1$, we have \begin{equation} \gamma_n^1(0)=\chi_{n+1,0}\circ \widetilde{\gamma}_{n+1}^0(1)=\chi_{n+1,0}(\gamma_{n+1}^0(1)+1) \end{equation} and \begin{equation} \begin{split} \gamma_n^1(1)=&~\chi_{n+1,a_{n+1}-1}(\gamma_{n+1}^0(0)) \\ =&~\chi_{n+1,a_{n+1}-1}(\gamma_{n+1}^0(1))+1=\chi_{n+1,0}(\gamma_{n+1}^0(1))+a_{n+1}. \end{split} \end{equation} (c) By Lemma \ref{lema-Cheraghi-2}(a), we have \begin{equation} \Phi_n^{-1}\circ\chi_{n+1,0}(\gamma_{n+1}^0(1)+1)=f_n^{\circ a_{n+1}}(\Phi_n^{-1}\circ\chi_{n+1,0}(\gamma_{n+1}^0(1))). \end{equation} This implies that $\Phi_n^{-1}(\gamma_n^1(0))=\Phi_n^{-1}(\gamma_n^1(1))$ by Part (b). Therefore, $\Phi_n^{-1}(\gamma_n^1)$ is a continuous closed curve in $\Delta_n$.
(d) By \eqref{equ-chi-n-0} we have \begin{equation}\label{equ-chi-n-1} \textup{Re}\,\chi_{n+1,j}(\widetilde{\gamma}_{n+1}^0)\subset[1+j,\textit{\textbf{k}}_1+2+j], \text{ where } j\in\mathbb{Z}. \end{equation} Hence for $t\in[0,1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}}]$, we have \begin{equation} |\textup{Re}\,\gamma_n^0(t)-\textup{Re}\,\gamma_n^1(t)|\leq \textit{\textbf{k}}_1+\tfrac{3}{2}. \end{equation} For $t\in[1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}},1-\tfrac{k_n}{a_{n+1}}]$, by \eqref{equ-u-n-domain} and Lemma \ref{lema-stru-R-n}(c) we have \begin{equation} \textup{Re}\,\gamma_n^0(t)\in[\textup{Re}\, u_n'-\tfrac{1}{2},\textup{Re}\, u_n+1]\subset[a_{n+1}-\textit{\textbf{k}}\,-k_n-1,a_{n+1}-\textit{\textbf{k}}-\tfrac{1}{2}]. \end{equation} If $t\in[1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}},1-\tfrac{k_n}{a_{n+1}}]$, then $\gamma_n^1(t)\in\bigcup_{i=0}^{\textit{\textbf{k}}}\chi_{n+1,a_{n+1}-\textit{\textbf{k}}-k_n-1+i}(\gamma_{n+1}^0)$. By \eqref{equ-chi-n-1} we have \begin{equation} \textup{Re}\,\gamma_n^1(t)\in[a_{n+1}-k_n-\textit{\textbf{k}},\,a_{n+1}-k_n+\textit{\textbf{k}}_1+1]. \end{equation} Therefore, for $t\in[1-\tfrac{\textit{\textbf{k}}\,+k_n+1}{a_{n+1}},1-\tfrac{k_n}{a_{n+1}}]$ we have \begin{equation} |\textup{Re}\,\gamma_n^0(t)-\textup{Re}\,\gamma_n^1(t)|\leq \max\{k_n-\tfrac{1}{2},\textit{\textbf{k}}+\textit{\textbf{k}}_1+2\}. \end{equation} By Lemma \ref{lema-stru-R-n}(a)(c), we have \begin{equation}\label{equ-u-n-1} u_n''\in\overline{\mathbb{D}}(u_n,1) \text{ and } \Phi_n(I_n^0)\subset \overline{\mathbb{D}}(u_n'',2). \end{equation} For $t\in [1-\tfrac{k_n}{a_{n+1}},1-\tfrac{k_n-1}{a_{n+1}}]$, by \eqref{equ-u-n-domain} and \eqref{equ-u-n-1} we have \begin{equation} \textup{Re}\,\gamma_n^0(t)\in[a_{n+1}-\textit{\textbf{k}}-k_n-3,a_{n+1}-\textit{\textbf{k}}+\tfrac{3}{2}]. \end{equation} On the other hand, we have \begin{equation} \textup{Re}\,\gamma_n^1(t)\in[a_{n+1}-k_n+1,a_{n+1}-k_n+\textit{\textbf{k}}_1+2].
\end{equation} Since $\gamma_n^i(t+\tfrac{1}{a_{n+1}})=\gamma_n^i(t)+1$ for $t\in [1-\tfrac{k_n}{a_{n+1}},1-\tfrac{1}{a_{n+1}}]$, where $i=0,1$, it follows that for all $t\in[1-\tfrac{k_n}{a_{n+1}},1]$, we have \begin{equation} |\textup{Re}\,\gamma_n^0(t)-\textup{Re}\,\gamma_n^1(t)|\leq \max\{k_n-\textit{\textbf{k}}+\tfrac{1}{2},\textit{\textbf{k}}+\textit{\textbf{k}}_1+5\}. \end{equation} Since $k_n\leq\textit{\textbf{k}}_0$ by Proposition \ref{prop-CC-2}, it follows that $|\textup{Re}\,\gamma_n^0(t)-\textup{Re}\,\gamma_n^1(t)|\leq D_8:=\max\{\textit{\textbf{k}}_0-\tfrac{1}{2},\textit{\textbf{k}}+\textit{\textbf{k}}_1+5\}$ for all $t\in[0,1]$. Finally, the statement on $\textup{Im}\,\gamma_n^1(t)$ follows immediately from Lemma \ref{lema-key-estimate-lp}(a) and Lemma \ref{lema-gamma-n-0}(d). \end{proof} By \eqref{equ-y-n-alpha} and Lemma \ref{lema-gamma-n-1}(d), for any $t\in[0,1]$ and $\zeta\in\textup{Exp}^{-1}(\partial\Delta_{n+1})$, we have \begin{equation}\label{equ-Siegel-below} \textup{Im}\,\gamma_n^1(t)\geq \frac{\mathcal{B}(\alpha_{n+1})}{2\pi}+M-D_3-\frac{1}{2} >1+\textup{Im}\,\zeta. \end{equation} For $\ell=1$, we define a thickened curve $\widetilde{\gamma}_n^\ell(t):[0,1]\to\mathbb{C}$ of $\gamma_n^\ell$: \begin{equation}\label{equ-gam-tilde-n} \widetilde{\gamma}_n^\ell(t):= \left\{ \begin{aligned} & \gamma_n^\ell\Big(\tfrac{a_{n+1}}{a_{n+1}-1}t\Big) & ~~~\text{if } & t\in[0,1-\tfrac{1}{a_{n+1}}],\\ & \gamma_n^\ell(t)+1 & ~~~\text{if } & t\in (1-\tfrac{1}{a_{n+1}},1]. \end{aligned} \right. \end{equation} One can see that $\widetilde{\gamma}_n^\ell=\gamma_n^\ell\cup (\gamma_n^\ell([1-\tfrac{1}{a_{n+1}},1])+1)=\gamma_n^\ell\cup\chi_{n+1,a_{n+1}}(\gamma_{n+1}^{\ell-1})$, and $\widetilde{\gamma}_n^\ell(t):[0,1]\to\mathbb{C}$ is a continuous curve in $\mathcal{D}_n$.
\textbf{Define $\gamma_n^i$ inductively}: For all $n\in\mathbb{N}$ and $1\leq \ell\leq i$ with $i\geq 1$, we assume that the curves $\gamma_n^\ell(t):[0,1]\to\mathbb{C}$ and $\widetilde{\gamma}_n^\ell(t):[0,1]\to\mathbb{C}$ are defined and satisfy \begin{enumerate} \item[(a$_\ell$)] $\widetilde{\gamma}_n^\ell$ is defined as in \eqref{equ-gam-tilde-n}; \item[(b$_\ell$)] $\gamma_n^\ell(t):=\chi_{n+1,0}\circ \widetilde{\gamma}_{n+1}^{\ell-1}(1-a_{n+1}t)$ for $t\in [0,\tfrac{1}{a_{n+1}})$, and $\gamma_n^\ell(t):=\chi_{n+1,j}\circ \gamma_{n+1}^{\ell-1}(j+1-a_{n+1}t)$ for $t\in(\tfrac{j}{a_{n+1}},\tfrac{j+1}{a_{n+1}}]$ with $1\leq j\leq a_{n+1}-1$; \item[(c$_\ell$)] $\gamma_n^\ell$ and $\gamma_n^\ell+1$ are continuous curves in $\mathcal{D}_n$; \item[(d$_\ell$)] $\gamma_n^\ell(0)=\chi_{n+1,0}(\gamma_{n+1}^{\ell-1}(1)+1)$ and $\gamma_n^\ell(1)=\chi_{n+1,0}(\gamma_{n+1}^{\ell-1}(1))+a_{n+1}$; and \item[(e$_\ell$)] $\Phi_n^{-1}(\gamma_n^\ell(0))=\Phi_n^{-1}(\gamma_n^\ell(1))$ and $\Phi_n^{-1}(\gamma_n^\ell)$ is a continuous closed curve in $\Delta_n$. \end{enumerate} Similar to the construction of $\gamma_n^i$, the curve $\gamma_n^{i+1}(t):[0,1]\to\mathbb{C}$ is defined as: \begin{enumerate} \item[(a$_{i+1}$)] For $t\in[0,\tfrac{1}{a_{n+1}}]$, define $\gamma_n^{i+1}(t):=\chi_{n+1,0}\circ \widetilde{\gamma}_{n+1}^i(1-a_{n+1}t)$; \item[(b$_{i+1}$)] For $t\in (\tfrac{j}{a_{n+1}},\tfrac{j+1}{a_{n+1}}]$, where $1\leq j\leq a_{n+1}-1$, define \begin{equation}\label{equ-defi-gam} \gamma_n^{i+1}(t):=\chi_{n+1,j}\circ \gamma_{n+1}^i(j+1-a_{n+1}t).
\end{equation} \end{enumerate} \begin{lem}\label{lema-gamma-n-i} The map $\gamma_n^{i+1}(t):[0,1]\to\mathbb{C}$ has the following properties: \begin{enumerate} \item $\gamma_n^{i+1}$ and $\gamma_n^{i+1}+1$ are continuous curves in $\mathcal{D}_n$; \item $\gamma_n^{i+1}(0)=\chi_{n+1,0}(\gamma_{n+1}^i(1)+1)$ and $\gamma_n^{i+1}(1)=\chi_{n+1,0}(\gamma_{n+1}^i(1))+a_{n+1}$; \item $\Phi_n^{-1}(\gamma_n^{i+1}(0))=\Phi_n^{-1}(\gamma_n^{i+1}(1))$ and $\Phi_n^{-1}(\gamma_n^{i+1})$ is a continuous closed curve in $\Delta_n$. \end{enumerate} \end{lem} The proof of Lemma \ref{lema-gamma-n-i} is completely similar to that of Lemma \ref{lema-gamma-n-1}. Moreover, one can define the thickened curve $\widetilde{\gamma}_n^{\ell}$ of $\gamma_n^{\ell}$ with $\ell=i+1$ as in \eqref{equ-gam-tilde-n} similarly. By the definition of $\widetilde{\gamma}_n^i$, we have the following. \begin{lem}\label{lema:gam-inverse} For each $t_0\in[0,1]$, there exist two sequences $(t_n)_{n\in\mathbb{N}}$ with $t_n\in[0,1]$ and $(j_n)_{n\geq 1}$ with $0\leq j_n\leq a_n$, such that for all $n\geq 1$ and all $i\in\mathbb{N}$, \begin{equation} \widetilde{\gamma}_{n-1}^{i+1}(t_{n-1})=\chi_{n,j_n}\big(\widetilde{\gamma}_n^{i}(t_n)\big). \end{equation} \end{lem} \subsection{The curves are convergent}\label{subsec-unif-cont} Our main goal in this subsection is to prove: \begin{prop}\label{prop-Cauchy-sequence} There exists a constant $K>0$ such that for all $n\in\mathbb{N}$, we have \begin{equation}\label{equ-Cauchy} \sum_{i=0}^n\sup_{t\in[0,1]}|\gamma_0^i(t)-\gamma_0^{i+1}(t)|\leq K. \end{equation} In particular, the sequence of continuous curves $(\gamma_0^n(t):[0,1]\to\mathbb{C})_{n\in\mathbb{N}}$ converges uniformly as $n\to\infty$.
\end{prop} In order to estimate the distance between $\gamma_0^i(t)$ and $\gamma_0^{i+1}(t)$ with $t\in[0,1]$, we will combine the uniform contraction with respect to the hyperbolic metrics with some quantitative estimates (with respect to the Euclidean metric) obtained in \S\ref{subsec-esti-2}. For any hyperbolic domain $X\subset\mathbb{C}$, we use $\rho_X(z)|\textup{d} z|$ to denote the hyperbolic metric of $X$. The following lemma appears in \cite[Lemma 5.5]{Che19} in another form. For completeness we include a proof here. \begin{lem}\label{lema-uni-con-prep} Let $X$, $Y$ be two hyperbolic domains in $\mathbb{C}$ satisfying $\textup{diam}\,(\textup{Re}\,(X))\leq A'$ and $B_\delta(X)\subset Y$, where $A'$ and $\delta$ are positive constants. Then there exists a number $0<\lambda<1$ depending only on $A'$ and $\delta$ such that for any $z\in X$, \begin{equation} \rho_Y(z)\leq \lambda\,\rho_X(z). \end{equation} \end{lem} \begin{proof} For any fixed $z_0\in X$, we consider the holomorphic function \begin{equation} F(z):=z+\frac{\delta\,(z-z_0)}{z-z_0+2A'+\delta}:X\to\mathbb{C}. \end{equation} Since $\textup{diam}\,(\textup{Re}\,(X))\leq A'$, it follows that $|z-z_0|<|z-z_0+2A'+\delta|$ if $z\in X$. Thus we have $|F(z)-z|<\delta$ and $F(X)\subset Y$ by the assumption. Applying the Schwarz--Pick lemma to $F:X\to Y$ at $F(z_0)=z_0$, we have \begin{equation} \rho_Y(F(z_0))|F'(z_0)|=\rho_Y(z_0)\left(1+\frac{\delta}{2A'+\delta}\right)\leq \rho_X(z_0). \end{equation} The proof is finished if we set $\lambda:=(2A'+\delta)/(2A'+2\delta)$. \end{proof} Let $X$ be a set in $\mathbb{C}$ and $z_0\in X$. We use $\operatorname{Comp}_{z_0}X$ to denote the connected component of $X$ containing $z_0$. Let $\mathcal{D}_n$ be the set defined in \eqref{equ-MD-f}.
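The key identities in the proof of the contraction lemma above, namely $F'(z_0)=1+\tfrac{\delta}{2A'+\delta}$ together with $\lambda\bigl(1+\tfrac{\delta}{2A'+\delta}\bigr)=1$, can be checked numerically. The sketch below (sample points and constants chosen by us, purely as an illustration) also verifies that $|F(z)-z|<\delta$ whenever $\textup{Re}(z-z_0)\geq -A'$:

```python
# Numerical sanity check (illustration only) of the map used in the proof:
# F(z) = z + delta*(z - z0)/(z - z0 + 2A' + delta) moves points by less than
# delta when Re(z - z0) >= -A', and its derivative at z0 is
# 1 + delta/(2A' + delta), which forces the hyperbolic contraction factor
# lambda = (2A' + delta)/(2A' + 2*delta) < 1.

Ap, delta = 2.0, 0.5   # stand-ins for A' and delta
z0 = 0.3 + 0.4j

def F(z):
    return z + delta * (z - z0) / (z - z0 + 2 * Ap + delta)

# Sample points with Re(z - z0) in [-A', A'] and arbitrary imaginary parts.
samples = [z0 + x + 1j * y
           for x in (-Ap, -1.0, 0.0, 1.5, Ap)
           for y in (-50.0, 0.0, 7.0)]
assert all(abs(F(z) - z) < delta for z in samples)

h = 1e-7
deriv = (F(z0 + h) - F(z0)) / h       # finite-difference approximation
print(abs(deriv - (1 + delta / (2 * Ap + delta))))  # ~ 0
```

The identity $\lambda\,|F'(z_0)|=1$ is exactly what turns the Schwarz--Pick inequality into the stated contraction $\rho_Y\leq\lambda\,\rho_X$.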
For $n\in\mathbb{N}$, we define \begin{equation}\label{equ-D-n-pri} \mathcal{D}_n':=\operatorname{Comp}_1(\operatorname{int} \mathcal{D}_n\cap\{\zeta\in\mathbb{C}:-3<\textup{Im}\,\zeta< h_n+2\}), \end{equation} where $ h_n$ is the height defined in \eqref{defi-eta}. Note that each $\mathcal{D}_n'$ is a hyperbolic domain. Let $\rho_n(z)|\textup{d} z|$ be the hyperbolic metric of $\mathcal{D}_n'$. We use $\textup{len}(\cdot)$ and $\textup{len}_{\rho_n}(\cdot)$ to denote the length of curves with respect to the Euclidean metric and the hyperbolic metric $\rho_n(z)|\textup{d} z|$ respectively. \begin{lem}\label{lema:exp-conv} Let $A'>0$ and $\delta>0$ be two constants. Then there exist $A>0$ and $0<\nu<1$ depending only on $A'$ and $\delta$ such that for any piecewise continuous curve $\vartheta_n$ in $\mathcal{D}_n'$ with $\textup{len}(\vartheta_n)\leq A'$ and $B_\delta(\vartheta_n)\subset\mathcal{D}_n'$, we have \begin{equation} \textup{len}(\chi_{1,j_1}\circ\cdots\circ\chi_{n,j_n}(\vartheta_n))\leq A\cdot\nu^n, \end{equation} where $0\leq j_i\leq a_i$ and $1\leq i\leq n$. \end{lem} \begin{proof} Let $1\leq i\leq n$ and $0\leq j_i\leq a_i$. Note that we have assumed that $M>D_3$ in \eqref{equ-y-n-alpha}. By Lemma \ref{lema-key-estimate-lp}, for $\zeta\in\mathcal{D}_i'$, we have \begin{equation}\label{equ-im-chi-1} \textup{Im}\,\chi_{i,j_i}(\zeta)\leq\tfrac{\mathcal{B}(\alpha_i)}{2\pi}+M+D_3+1<\tfrac{\mathcal{B}(\alpha_i)}{2\pi}+\tfrac{M}{\alpha_{i-1}}+1= h_{i-1}+1. \end{equation} Since $\Phi_i^{-1}(\mathcal{D}_i)$ is contained in the image of $f_i$, by the definition of near-parabolic renormalization (see also \eqref{equ-chi-choice}), we have \begin{equation}\label{equ-im-chi-2} \textup{Im}\,\chi_{i,j_i}(\zeta)>-2, \text{ for all }\zeta\in\mathcal{D}_i. \end{equation} By Lemma \ref{lema:comp-inclu}, we have $B_{\delta_0}(\chi_{i,j_i}(\mathcal{D}_i))\subset\mathcal{D}_{i-1}$ for a constant $\delta_0$ depending only on the class $\mathcal{IS}_0$.
Without loss of generality, we assume that $\delta_0<1$. Combining \eqref{equ-im-chi-1} and \eqref{equ-im-chi-2}, we have \begin{equation}\label{equ-B-delta-D} B_{\delta_0}(\chi_{i,j_i}(\mathcal{D}_i'))\subset\mathcal{D}_{i-1}'. \end{equation} Note that $\chi_{i,j_i}:(\mathcal{D}_i',\rho_i)\to (\mathcal{D}_{i-1}',\rho_{i-1})$ can be decomposed as \begin{equation} (\mathcal{D}_i',\rho_i) \xlongrightarrow{\chi_{i,j_i}} (\chi_{i,j_i}(\mathcal{D}_i'), \tilde{\rho}_i) \overset{inc.}{\hookrightarrow} (\mathcal{D}_{i-1}',\rho_{i-1}), \end{equation} where $\tilde{\rho}_i(z)|dz|$ is the hyperbolic metric of $\chi_{i,j_i}(\mathcal{D}_i')$. According to Proposition \ref{prop-unif-inverse}, we have $\textup{diam}\,(\textup{Re}\,\chi_{i,j_i}(\mathcal{D}_i'))\leq \textit{\textbf{k}}_1$. By Lemma \ref{lema-uni-con-prep}, the inclusion map \begin{equation} (\chi_{i,j_i}(\mathcal{D}_i'), \tilde{\rho}_i) \overset{inc.}{\hookrightarrow} (\mathcal{D}_{i-1}',\rho_{i-1}) \end{equation} is uniformly contracting with respect to the hyperbolic metrics (and the contraction factor depends only on $\textit{\textbf{k}}_1$ and $\delta_0$). Since $\chi_{i,j_i}:\mathcal{D}_i'\to \chi_{i,j_i}(\mathcal{D}_i')$ does not expand the hyperbolic metric, it follows that $\chi_{i,j_i}:(\mathcal{D}_i',\rho_i)\to (\mathcal{D}_{i-1}',\rho_{i-1})$ is also uniformly contracting. Since $\vartheta_n$ is a piecewise continuous curve satisfying $\textup{len}(\vartheta_n)\leq A'$ and $B_\delta(\vartheta_n)\subset\mathcal{D}_n'$, there exists a constant $A''>0$ depending only on $A'$ and $\delta$ (not on $n$) such that $\textup{len}_{\rho_n}(\vartheta_n)\leq A''$. Define \begin{equation} G_n:=\chi_{1,j_1}\circ\cdots\circ\chi_{n,j_n}: \mathcal{D}_n'\to \mathcal{D}_0'.
\end{equation} By the uniform contraction of $\chi_{i,j_i}$ for $1\leq i\leq n$ with respect to the hyperbolic metrics, there exists a constant $0<\nu<1$ depending only on $\textit{\textbf{k}}_1$ and $\delta_0$ such that \begin{equation} \textup{len}_{\rho_0} (G_n(\vartheta_n))\leq A''\cdot\nu^n. \end{equation} Since $B_{\delta_0}\big(G_n(\mathcal{D}_n')\big)\subset \mathcal{D}_0'$, the Euclidean metric and the hyperbolic metric $\rho_0$ of $\mathcal{D}_0'$ are comparable in $G_n(\mathcal{D}_n')$. Since $G_n(\vartheta_n)\subset G_n(\mathcal{D}_n')\subset \mathcal{D}_0'$, there exists a constant $A>0$ depending only on $A'$ and $\delta$ such that $\textup{len} (G_n(\vartheta_n))\leq A\cdot\nu^n$. \end{proof} Let $D_6'>1$ be the constant introduced in Proposition \ref{prop-key-estimate-yyy}. \begin{lem}\label{lema-go-up} There exists $K_1>0$ such that for any $n\geq 1$ and any continuous curve $\eta_n:[0,1]\to \mathcal{D}_n$ with $\eta_n(0)\in\widetilde{\gamma}_n^0$ and $\textup{len} \big(\eta_n\big)\leq h_n-D_6'-1$, we have \begin{equation} \textup{len} \big(\chi_{n,0}(\eta_n)\big)\leq \tfrac{1}{2\pi}\mathcal{B}(\alpha_n)+K_1. \end{equation} \end{lem} \begin{proof} By Proposition \ref{prop-key-estimate-yyy}, we define \begin{equation} \begin{aligned} & \phi_1(r):=(1+D_6 e^{-2\pi\alpha_nr})\alpha_n ~~~&\text{if }& r\in[\tfrac{1}{4\alpha_n},+\infty),\\ & \phi_2(r):=\frac{\alpha_n}{1-e^{-2\pi\alpha_n(r-D_2'\log(2+r))}}\left(1+\frac{D_6}{r}\right) ~~~&\text{if }& r\in[D_6',\tfrac{1}{4\alpha_n}]. \end{aligned} \end{equation} A direct calculation shows that \begin{equation}\label{equ-integ-1} J':=\int_{1/(4\alpha_n)}^{ h_n-1}\phi_1(r)\,\textup{d}r< \frac{1}{2\pi}\alpha_n\mathcal{B}(\alpha_{n+1})+M+D_6. \end{equation} We claim that there exists $K_1'>0$ which is independent of $\alpha_n$ such that \begin{equation}\label{equ-J} J'':=\int_{D_6'}^{1/(4\alpha_n)}\phi_2(r)\,\textup{d}r< \frac{1}{2\pi}\log\frac{1}{\alpha_n}+K_1'.
\end{equation} In fact, a direct calculation shows that $J''=J_1+D_2' J_2+D_6 J_3$, where \begin{equation} \begin{split} J_1=&~ \frac{1}{2\pi}\int_{D_6'}^{\frac{1}{4\alpha_n}}\frac{2\pi\alpha_n e^{2\pi\alpha_n r}-2\pi\alpha_n D_2'(r+2)^{2\pi\alpha_n D_2'-1}}{e^{2\pi\alpha_n r}-(r+2)^{2\pi\alpha_n D_2'}}\,\textup{d}r,\\ J_2=&~ \int_{D_6'}^{\frac{1}{4\alpha_n}}\frac{\alpha_n (r+2)^{2\pi\alpha_n D_2'-1}}{e^{2\pi\alpha_n r}-(r+2)^{2\pi\alpha_n D_2'}}\,\textup{d}r, \text{\quad and} \\ J_3=&~ \int_{D_6'}^{\frac{1}{4\alpha_n}}\frac{\alpha_n e^{2\pi\alpha_n r}}{e^{2\pi\alpha_n r}-(r+2)^{2\pi\alpha_n D_2'}}\cdot\frac{1}{r}\,\textup{d}r. \end{split} \end{equation} We assume that $\alpha_n$ is so small that $2\pi\alpha_n D_2'\leq 1/2$ and $2\pi\alpha_n D_2'\log(2+\tfrac{1}{4\alpha_n})\leq 1/2$. Since $1+t\leq e^t\leq 1+2t$ for $t\in[0,1]$, if $D_6'\leq r\leq \frac{1}{4\alpha_n}$, we have \begin{equation}\label{equ-J-1} \begin{split} e^{2\pi\alpha_n r}-(r+2)^{2\pi\alpha_n D_2'} \geq &~ 1+2\pi\alpha_n r-(1+4\pi\alpha_n D_2'\log(r+2)) \\ = &~ 2\pi\alpha_n (r-2 D_2'\log(r+2)), \end{split} \end{equation} where $r-2D_2'\log(2+r)\geq 2$ if $r\geq D_6'$. By \eqref{equ-J-1}, there exist $C_1$, $C_1'>0$ which are independent of $\alpha_n$ such that \begin{equation} J_1\leq C_1-\frac{1}{2\pi}\log(e^{2\pi\alpha_n D_6'}-(D_6'+2)^{2\pi\alpha_n D_2'})\leq \frac{1}{2\pi}\log\frac{1}{\alpha_n}+C_1'. \end{equation} For $J_2$, since the integral \begin{equation} \int_{D_6'}^{+\infty}\frac{1}{r-2 D_2'\log(2+r)}\cdot \frac{1}{(r+2)^{1/2}}\,\textup{d}r \end{equation} is convergent, it follows that there exists a constant $C_2>0$ which is independent of $\alpha_n$ so that $J_2\leq C_2$. Similarly, there exists a constant $C_3>1$ which is independent of $\alpha_n$ so that $J_3\leq C_3$. Hence \eqref{equ-J} follows if we set $K_1':=C_1'+C_2 D_2'+C_3D_6$. Without loss of generality, we assume that $r\mapsto r-D_2'\log(2+r)$ is monotonically increasing on $[D_6',+\infty)$.
Therefore, $\phi_1(r)$ and $\phi_2(r)$ are monotonically decreasing on $[\tfrac{1}{4\alpha_n},+\infty)$ and $[D_6',\tfrac{1}{4\alpha_n}]$ respectively. Denote \begin{equation}\label{equ:phi-r} \phi(r):= \left\{ \begin{aligned} & \phi_1(r) & ~~~\text{if } & r\in[\tfrac{1}{4\alpha_n},+\infty),\\ & \max\big\{\phi_2(r),\phi_1(\tfrac{1}{4\alpha_n})\big\} & ~~~\text{if } & r\in[D_6',\tfrac{1}{4\alpha_n}). \end{aligned} \right. \end{equation} Then $\phi(r)$ is monotonically decreasing on $[D_6',+\infty)$. By Lemma \ref{lema-gamma-n-0}(d), we have $|\textup{Im}\,\eta_n(0)- h_n|\leq 1$. Since $\textup{len} \big(\eta_n\big)\leq h_n-D_6'-1$, we have $\eta_n\cap \big(\mathbb{D}(0,D_6')\cup\mathbb{D}(1/\alpha_n,D_6')\big)=\emptyset$. By \eqref{equ-integ-1} and \eqref{equ-J} we have \begin{equation}\label{equ-integral} \begin{split} \textup{len} \big(\chi_{n,0}(\eta_n)\big) \le &~ \int_{D_6'}^{ h_n-1}\phi(r)\,\textup{d}r \leq J' +\Big(J''+\big(\tfrac{1}{4\alpha_n}-D_6'\big)\phi_1\big(\tfrac{1}{4\alpha_n}\big)\Big) \\ < &~ J'+J''+ \frac{1}{4}(D_6+1)<\frac{1}{2\pi}\mathcal{B}(\alpha_n)+K_1, \end{split} \end{equation} where $K_1:=M+\frac{3}{2}D_6+K_1'$ and the last inequality uses the relation $\mathcal{B}(\alpha_n)=\log\frac{1}{\alpha_n}+\alpha_n\mathcal{B}(\alpha_{n+1})$. The proof is complete. \end{proof} \begin{proof}[{Proof of Proposition \ref{prop-Cauchy-sequence}}] Note that $\gamma_0^n(t)=\widetilde{\gamma}_0^n(\tfrac{a_1-1}{a_1}t)$ for all $t\in[0,1]$ and all $n\in\mathbb{N}$. In order to prove \eqref{equ-Cauchy}, it suffices to prove that there exist $K>0$ and a sequence of non-negative numbers $(y_i)_{i\geq 0}$ such that for any $n\in\mathbb{N}$, any $0\leq i\leq n$ and any $t_0\in[0,1]$, we have \begin{equation}\label{equ-convergent-1} |\widetilde{\gamma}_0^i(t_0)-\widetilde{\gamma}_0^{i+1}(t_0)|\leq y_i \text{\quad and\quad} \sum_{i=0}^n y_i\leq K. \end{equation} We divide the argument into several steps. \textbf{Step 1.
Basic settings.} For any $t_0\in[0,1]$, by Lemma \ref{lema:gam-inverse}, there exist two sequences $(t_n)_{n\in\mathbb{N}}$ with $t_n\in[0,1]$ and $(j_n)_{n\geq 1}$ with $0\leq j_n\leq a_n$ such that for all $n\geq 1$ and all $i\in\mathbb{N}$, \begin{equation}\label{equ-seg-t-n} \widetilde{\gamma}_{n-1}^{i+1}(t_{n-1})=\chi_{n,j_n}\big(\widetilde{\gamma}_n^{i}(t_n)\big). \end{equation} For $n\in\mathbb{N}$, let \begin{equation} \xi_n^0:[0,1]\to [\widetilde{\gamma}_n^0(t_n),\widetilde{\gamma}_n^1(t_n)] \end{equation} be the segment with $\xi_n^0(0)=\widetilde{\gamma}_n^0(t_n)$ and $\xi_n^0(1)=\widetilde{\gamma}_n^1(t_n)$ (we assume that the parametrization of $\xi_n^0$ on $[0,1]$ is linear). By Lemma \ref{lema:D-n}, \eqref{equ-chi-n-1} and \eqref{equ-Siegel-below}, we have $B_{1/2}(\xi_n^0)\subset\mathcal{D}_n'$ for all $n\in\mathbb{N}$. For $\ell\geq 1$, we define the Jordan arc $\xi_n^\ell:[0,1]\to\mathbb{C}$ as \begin{equation}\label{equ-xi-i-ell} \xi_n^\ell(s):=\chi_{n+1,j_{n+1}}\circ\cdots\circ \chi_{n+\ell,j_{n+\ell}}(\xi_{n+\ell}^0(s)), \text{ where } s\in[0,1]. \end{equation} By \eqref{equ-seg-t-n} and \eqref{equ-xi-i-ell}, the following curve is continuous: \begin{equation}\label{equ-Xi-i-1} \begin{split} \eta_n^\ell:=\xi_n^0\cup\xi_n^1\cup\cdots\cup\xi_n^\ell=&~\xi_n^0\cup\chi_{n+1,j_{n+1}}(\eta_{n+1}^{\ell-1}) \\ =&~\xi_n^0\cup\chi_{n+1,j_{n+1}}\big(\xi_{n+1}^0\cup\cdots\cup\xi_{n+1}^{\ell-1}\big). \end{split} \end{equation} Denote $\eta_n^0:=\xi_n^0$. According to \eqref{equ-B-delta-D}, for any $n\geq 0$ and $\ell\ge 0$ we have \begin{equation} B_{\delta}(\eta_n^\ell)\subset\mathcal{D}_n', \text{\quad where\quad} \delta:=\min\{\delta_0,1/4\}.
\end{equation} We give a parametrization of the continuous curve $\eta_n^\ell:[0,1]\to\mathbb{C}$ by \begin{equation}\label{equ-Xi-i-para} \eta_n^\ell(s):=\xi_n^j\big((\ell+1)s-j\big), \end{equation} where $s\in[\tfrac{j}{\ell+1},\tfrac{j+1}{\ell+1}]$ and $0\leq j\leq \ell$ (note that $\xi_n^j(1)=\xi_n^{j+1}(0)$ for every $0\leq j\leq \ell-1$). By definition, we have $|\widetilde{\gamma}_0^i(t_0)-\widetilde{\gamma}_0^{i+1}(t_0)|\leq\textup{len}(\xi_0^i)$ for all $i\in\mathbb{N}$. Therefore, in order to obtain \eqref{equ-convergent-1}, it suffices to prove that there exist $K>0$ and non-negative numbers $(y_i)_{i\geq 0}$ such that for any $n\in\mathbb{N}$ and any $0\leq i\leq n$, we have \begin{equation}\label{equ-seq-seg} \textup{len} (\xi_0^i)\leq y_i \text{\quad and\quad} \sum_{i=0}^n y_i\leq K. \end{equation} \textbf{Step 2. Decompositions of the curves.} In \eqref{equ-y-n-alpha} we assume that $M>D_3+\tfrac{1}{2\pi}\log\tfrac{4D_7}{27}+2> D_3+\frac{3}{2}$ (since $D_7> 1$). By \eqref{equ-gam-tilde-n}, it follows that Lemma \ref{lema-gamma-n-1}(d) holds also for $\widetilde{\gamma}_n^0$ and $\widetilde{\gamma}_n^1$. By a direct calculation, we have \begin{equation}\label{equ-Xi-est} \begin{split} &~\textup{len} (\eta_n^0)=\textup{len} (\xi_n^0)=|\widetilde{\gamma}_n^0(t_n)-\widetilde{\gamma}_n^1(t_n)|\\ \leq &~ h_n+1-\tfrac{\mathcal{B}(\alpha_{n+1})}{2\pi}-M+D_3+\tfrac{1}{2}+D_8< h_n-\tfrac{\mathcal{B}(\alpha_{n+1})}{2\pi}+D_8. \end{split} \end{equation} Hence $\eta_n^0=\xi_n^0:[0,1]\to\mathbb{C}$ can be written as the union of two continuous curves $\eta_{n,(0)}^0:=\eta_n^0([0,s_n])$ and $\eta_{n,(1)}^0:=\eta_n^0([s_n,1])$ for some $s_n\in (0,1)$ (the choice of $s_n$ is not unique), such that \begin{equation}\label{equ:decomp-1} \textup{len}(\eta_{n,(0)}^0)\leq h_n-D_6'-1 \text{\quad and\quad}\textup{len}(\eta_{n,(1)}^0)\leq D_6'+D_8+1.
\end{equation} Since $B_{\delta}(\eta_n^0)\subset\mathcal{D}_n'$, there exists a constant $K_2'>0$ depending only on $\delta$ and $D_6'+D_8+1$ such that \begin{equation}\label{equ:decomp-2} \textup{len}_{\rho_n}(\eta_{n,(1)}^0)\leq K_2', \end{equation} where $\rho_n(z)|dz|$ is the hyperbolic metric of $\mathcal{D}_n'$. Let $K_1>0$ be the constant introduced in Lemma \ref{lema-go-up}. There exists a constant $K_2> K_2'$ depending only on $A':=K_1+D_6'+D_8+1$ and $\delta$, such that for any $n\in\mathbb{N}$ and any piecewise continuous curve $\xi'$ in $\mathcal{D}_n'$ with $B_{\delta}(\xi')\subset\mathcal{D}_n'$ and $\textup{len}(\xi')\leq K_1+D_6'+D_8+1$, one has \begin{equation}\label{equ:K-2} \textup{len}_{\rho_n}(\xi')\leq K_2. \end{equation} Let $\nu\in (0,1)$ be the number in Lemma \ref{lema:exp-conv} depending only on $A'$ and $\delta$. Suppose $n\geq 1$. By Lemma \ref{lema-go-up} together with \eqref{equ:decomp-1}, and by Lemma \ref{lema:exp-conv} together with \eqref{equ:decomp-2}, the curve $\xi_{n-1}^1$ is the union of two continuous curves $\chi_{n,j_n}(\eta_{n,(0)}^0)$ and $\chi_{n,j_n}(\eta_{n,(1)}^0)$, where \begin{equation}\label{equ:len-1} \begin{split} \textup{len}\big(\chi_{n,j_n}(\eta_{n,(0)}^0)\big)\leq &~\tfrac{1}{2\pi}\mathcal{B}(\alpha_n)+K_1 \text{\quad and } \\ \textup{len}_{\rho_{n-1}}\big(\chi_{n,j_n}(\eta_{n,(1)}^0)\big)\leq &~ K_2'\nu < K_2\nu. \end{split} \end{equation} Therefore, by \eqref{equ-Xi-est} we have \begin{equation}\label{equ:len-2} \begin{split} &~ \textup{len}\big(\xi_{n-1}^0\cup \chi_{n,j_n}(\eta_{n,(0)}^0)\big)\\ \leq &~\big( h_{n-1}-\tfrac{\mathcal{B}(\alpha_n)}{2\pi}+D_8\big) + \big(\tfrac{\mathcal{B}(\alpha_n)}{2\pi}+K_1\big)=h_{n-1}+D_8+K_1.
\end{split} \end{equation} This implies that $\xi_{n-1}^0\cup \chi_{n,j_n}(\eta_{n,(0)}^0)=\xi_{n-1}^0\cup \xi_{n-1}^1([0,s_n])=\eta_{n-1}^1([0,\tfrac{1+s_n}{2}])$ can be written as the union of two continuous curves $\eta_{n-1,(0)}^1:=\eta_{n-1}^1([0,s_{n-1}])$ and $\eta_{n-1,(1)}^1:=\eta_{n-1}^1([s_{n-1},\tfrac{1+s_n}{2}])$ for some $s_{n-1}\in (0,\tfrac{1+s_n}{2})$, where \begin{equation}\label{equ:len-3} \begin{split} \textup{len}(\eta_{n-1,(0)}^1)\leq &~ h_{n-1}-D_6'-1 \text{\quad and } \\ \textup{len}(\eta_{n-1,(1)}^1)\leq &~ A'=K_1+D_6'+D_8+1. \end{split} \end{equation} Since $B_{\delta}(\eta_{n-1}^1)\subset\mathcal{D}_{n-1}'$, by \eqref{equ:K-2} we have $\textup{len}_{\rho_{n-1}}(\eta_{n-1,(1)}^1)\leq K_2$. Denote $\eta_{n-1,(2)}^1:=\eta_{n-1}^1([\frac{1+s_n}{2},1])=\chi_{n,j_n}(\eta_{n,(1)}^0)$, $s_{n-1}^{(1)}:=s_{n-1}$ and $s_{n-1}^{(2)}:=\tfrac{1+s_n}{2}$. Then the continuous curve \begin{equation} \eta_{n-1}^1=\xi_{n-1}^0\cup\xi_{n-1}^1=\eta_{n-1,(0)}^1\cup\eta_{n-1,(1)}^1\cup \eta_{n-1,(2)}^1 \end{equation} satisfies: \begin{itemize} \item $\eta_{n-1,(0)}^1=\eta_{n-1}^1([0,s_{n-1}^{(1)}])$, $\eta_{n-1,(1)}^1=\eta_{n-1}^1([s_{n-1}^{(1)},s_{n-1}^{(2)}])$ and $\eta_{n-1,(2)}^1=\eta_{n-1}^1([s_{n-1}^{(2)},1])$; and \item $\textup{len}(\eta_{n-1,(0)}^1)\leq h_{n-1}-D_6'-1$, $\textup{len}_{\rho_{n-1}}(\eta_{n-1,(1)}^1)\leq K_2$ and $\textup{len}_{\rho_{n-1}}(\eta_{n-1,(2)}^1)\leq K_2\nu$. \end{itemize} \textbf{Step 3. 
Inductive procedure.} Suppose there exists $1\leq i\leq n-1$ such that $\eta_{n-i}^i=\bigcup_{\ell=0}^i \xi_{n-i}^\ell=\bigcup_{k=0}^{i+1}\eta_{n-i,(k)}^i$ with $B_{\delta}(\eta_{n-i}^i)\subset\mathcal{D}_{n-i}'$ has the following properties: \begin{itemize} \item $\eta_{n-i,(k)}^i=\eta_{n-i}^i([s_{n-i}^{(k)},s_{n-i}^{(k+1)}])$ for some $0=s_{n-i}^{(0)}<s_{n-i}^{(1)}<\cdots<s_{n-i}^{(i+1)}<s_{n-i}^{(i+2)}=1$, where $0\leq k\leq i+1$; and \item $\textup{len}(\eta_{n-i,(0)}^i)\leq h_{n-i}-D_6'-1$ and $\textup{len}_{\rho_{n-i}}(\eta_{n-i,(k)}^i)\leq K_2\nu^{k-1}$ for every $1\leq k\leq i+1$. \end{itemize} By a similar argument to \eqref{equ:len-1}, \eqref{equ:len-2} and \eqref{equ:len-3}, there exist $0=s_{n-i-1}^{(0)}<s_{n-i-1}^{(1)}<\cdots<s_{n-i-1}^{(i+2)}<s_{n-i-1}^{(i+3)}=1$ such that the continuous curve $\eta_{n-i-1}^{i+1}=\bigcup_{\ell=0}^{i+1} \xi_{n-i-1}^\ell=\bigcup_{k=0}^{i+2}\eta_{n-i-1,(k)}^{i+1}$ with $B_{\delta}(\eta_{n-i-1}^{i+1})\subset\mathcal{D}_{n-i-1}'$ has the following properties: \begin{itemize} \item $\eta_{n-i-1,(k)}^{i+1}=\eta_{n-i-1}^{i+1}([s_{n-i-1}^{(k)},s_{n-i-1}^{(k+1)}])$, where $0\leq k\leq i+2$; and \item $\textup{len}(\eta_{n-i-1,(0)}^{i+1})\leq h_{n-i-1}-D_6'-1$ and $\textup{len}_{\rho_{n-i-1}}(\eta_{n-i-1,(k)}^{i+1})\leq K_2\nu^{k-1}$ for every $1\leq k\leq i+2$. \end{itemize} Inductively (as $i$ increases), there exist $0=s_0^{(0)}<s_0^{(1)}<\cdots<s_0^{(n+1)}<s_0^{(n+2)}=1$ such that the continuous curve $\eta_0^n=\bigcup_{\ell=0}^n \xi_0^\ell=\bigcup_{k=0}^{n+1}\eta_{0,(k)}^n$ with $B_{\delta}(\eta_0^n)\subset\mathcal{D}_0'$ has the following properties: \begin{itemize} \item $\eta_{0,(k)}^n=\eta_0^n([s_0^{(k)},s_0^{(k+1)}])$, where $0\leq k\leq n+1$; and \item $\textup{len}(\eta_{0,(0)}^n)\leq h_0-D_6'-1$ and $\textup{len}_{\rho_0}(\eta_{0,(k)}^n)\leq K_2\nu^{k-1}$ for every $1\leq k\leq n+1$. \end{itemize} \textbf{Step 4. 
The conclusion.} Since $B_{\delta}(\eta_0^n)\subset\mathcal{D}_0'$, the Euclidean metric and the hyperbolic metric $\rho_0$ of $\mathcal{D}_0'$ are comparable in a small neighborhood of $\eta_0^n$. Hence there exists a constant $C>0$ depending only on $\delta$ such that \begin{equation} \sum_{k=1}^{n+1}\textup{len}(\eta_{0,(k)}^n)\leq C\sum_{k=1}^{n+1}\textup{len}_{\rho_0}(\eta_{0,(k)}^n) \leq \frac{C K_2}{1-\nu}. \end{equation} Therefore, for all $n\geq 0$ we have \begin{equation} \textup{len}(\eta_0^n)=\sum_{i=0}^n\textup{len}(\xi_0^i)=\sum_{k=0}^{n+1}\textup{len}(\eta_{0,(k)}^n)\leq K:=h_0-D_6'-1+\frac{C K_2}{1-\nu}. \end{equation} By \eqref{equ:phi-r}, \eqref{equ-integral} and estimates similar to \eqref{equ:len-1} and \eqref{equ:len-2} in the above inductive procedure, it follows that for any $n\geq 0$, there exists a sequence of non-negative numbers $\{y_i^{(n)}:0\leq i\leq n\}$ which is independent of the sequence $(t_n)_{n\in\mathbb{N}}$ such that for any $0\leq i\leq n$, we have \begin{equation} \textup{len} (\xi_0^i)\leq y_i^{(n)} \text{\quad and\quad} \sum_{i=0}^n y_i^{(n)}\leq K. \end{equation} Then \eqref{equ-seq-seg} holds if we set $y_i:=\inf_{n\in\mathbb{N}}\big\{y_i^{(n)}\big\}$. The estimate \eqref{equ-Cauchy} implies that the sequence of continuous curves $(\widetilde{\gamma}_0^n(t))_{n\in\mathbb{N}}$ converges uniformly on $[0,1]$. Since $\gamma_0^n(t)=\widetilde{\gamma}_0^n(\tfrac{a_1-1}{a_1}t)$ for all $t\in[0,1]$ and $n\in\mathbb{N}$, it follows that $(\gamma_0^n(t))_{n\in\mathbb{N}}$ converges uniformly on $[0,1]$. \end{proof} \begin{rmk} If $\alpha$ is of bounded type, or if there exists a universal constant $C>0$ such that $\mathcal{B}(\alpha_{n+1})\geq C/\alpha_n$ for all $n\in\mathbb{N}$, then the sequence $(\gamma_0^n(t))_{n\in\mathbb{N}}$ converges exponentially fast as $n\to\infty$.
\end{rmk} \subsection{The Siegel disks are Jordan domains}\label{subsec-conv-to-bdy} By Proposition \ref{prop-Cauchy-sequence}, the sequence of continuous curves $(\gamma_0^n(t))_{n\geq 0}$ has a limit: \begin{equation} \gamma_0^\infty(t):=\lim_{n\to\infty}\gamma_0^n(t), \text{\quad where } t\in[0,1]. \end{equation} \begin{prop}\label{prop-tend-to-bdy} The limit $\Phi_0^{-1}(\gamma_0^\infty)$ is the boundary of the Siegel disk of $f_0$. \end{prop} \begin{proof} For $\zeta_0\in\gamma_0^{n+1}$, there exists $\zeta_n\in\widetilde{\gamma}_n^1\subset\bigcup_{j_{n+1}=0}^{a_{n+1}}\chi_{n+1,j_{n+1}}(\widetilde{\gamma}_{n+1}^0)$ such that \begin{equation} \zeta_0=\chi_{1,j_1}\circ\cdots\circ\chi_{n,j_n}(\zeta_n) \end{equation} for some sequence $(j_1,\cdots,j_n)$, where $0\leq j_i\leq a_i$ and $1\leq i\leq n$. By Lemma \ref{lema-gamma-n-1}(d) and \eqref{equ-chi-choice}, we have \begin{equation}\label{equ-zeta-n} \Big|\textup{Im}\,\zeta_n-\frac{1}{2\pi}\mathcal{B}(\alpha_{n+1})-M\Big|\leq D_3+\frac{1}{2} \text{\quad and\quad} 1\leq \textup{Re}\,\zeta_n\leq a_{n+1}+\textit{\textbf{k}}_1+2. \end{equation} For each $n\in\mathbb{N}$, $\Phi_n^{-1}$ is defined in $\mathcal{D}_n$ (see Lemma \ref{lema:Phi-inverse}). We denote \begin{equation} \Delta_n':=\{\zeta\in\mathcal{D}_n:\Phi_n^{-1}(\zeta)\in\Delta_n\}. \end{equation} Then $\mathbb{E}\textup{xp}(\Delta_n')=\Delta_{n+1}$. By Lemma \ref{lema-fixed-disk}, the inner radius of the Siegel disk of $f_{n+1}$ is $c_{n+1}e^{-\mathcal{B}(\alpha_{n+1})}$, where $c_{n+1}\in[D_7^{-1},D_7]$ and $D_7>1$ is a universal constant.
According to the definition of near-parabolic renormalization $f_{n+1}=\mathcal{R} f_n$, there exists a point $\zeta_n'\in\partial\Delta_n'\cap\mathbb{E}\textup{xp}^{-1}(\partial\Delta_{n+1})$ such that \begin{equation}\label{equ-y-widehat} |\textup{Re}\,(\zeta_n-\zeta_n')|\leq \frac{1}{2} \quad\text{and}\quad \textup{Im}\, \zeta_n'=\frac{1}{2\pi}\mathcal{B}(\alpha_{n+1})-\frac{1}{2\pi}\log\frac{27 c_{n+1}}{4}. \end{equation} Let $[\zeta_n,\zeta_n']$ be the closed segment connecting $\zeta_n$ with $\zeta_n'$. By Lemma \ref{lema:D-n}, \eqref{equ-chi-n-1} and the choice of $ h_n$, it follows that $B_{1/2}([\zeta_n,\zeta_n'])\subset\mathcal{D}_n'$. By \eqref{equ-Siegel-below}, we have $[\zeta_n,\zeta_n')\subset\Delta_n'$. Combining \eqref{equ-zeta-n} and \eqref{equ-y-widehat}, there exists a constant $A'>0$ which is independent of $n$ so that $|\zeta_n'-\zeta_n|\leq A'$. According to Lemma \ref{lema:exp-conv}, there exist two constants $A>0$ and $0<\nu<1$ which are independent of $n$ such that \begin{equation} \textup{len}(\chi_{1,j_1}\circ\cdots\circ\chi_{n,j_n}([\zeta_n,\zeta_n']))\leq A\cdot\nu^n, \end{equation} where $0\leq j_i\leq a_i$ and $1\leq i\leq n$. Denote $\zeta_0':=\chi_{1,j_1}\circ\cdots\circ\chi_{n,j_n}(\zeta_n')$. Then $|\zeta_0-\zeta_0'|\leq A\cdot\nu^n$. Since $\zeta_0'\in\partial\Delta_0'$, this implies that \begin{equation}\label{equ:zeta-0} \textup{dist}(\zeta_0,\partial\Delta_0')\leq A\cdot\nu^n . \end{equation} For any $t_0\in[0,1]$ and $n\geq 1$, we choose $\zeta_0=\zeta_0^{(n)}:=\widetilde{\gamma}_0^{n+1}(t_0)$. By \eqref{equ:zeta-0} we have $\gamma_0^\infty(t_0)\in\partial\Delta_0'$. Since $t_0\in[0,1]$ was arbitrary, it follows that $\gamma_0^\infty\subset\partial\Delta_0'$. Therefore we have $\Phi_0^{-1}(\gamma_0^\infty)\subset\partial\Delta_0$. By Lemma \ref{lema-gamma-n-i}(c), $\Phi_0^{-1}(\gamma_0^n)$ is a continuous closed curve for all $n\geq 0$.
Since $\gamma_0^n(t)$ converges uniformly to the limit $\gamma_0^\infty(t)$ on $[0,1]$ as $n\to\infty$, it follows that $\Phi_0^{-1}(\gamma_0^\infty)$ is a continuous closed curve which separates $\Delta_0$ from $U_0\setminus\overline{\Delta}_0$, where $U_0$ is the domain of definition of $f_0$. In particular, we have $\Phi_0^{-1}(\gamma_0^\infty)=\partial\Delta_0$. \end{proof} \begin{proof}[{Proof of the first part of the Main Theorem}] Suppose $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$, where $\alpha\in\mathcal{B}_N$ with $N$ sufficiently large. By Proposition \ref{prop-tend-to-bdy}, the boundary of the Siegel disk $\partial\Delta_0=\Phi_0^{-1}(\gamma_0^\infty)$ of $f_0$ is connected and locally connected. On the other hand, the Siegel disk $\Delta_0$ is compactly contained in the domain of definition of $f_0$ by Proposition \ref{prop-Cherahi-nest}(b). By the definition of $\Delta_0$, there exists a conformal map $\phi:\mathbb{D}\to\Delta_0$ such that $f_0\circ\phi(w)=\phi(e^{2\pi\textup{i}\alpha}w)$. By Carath\'{e}odory's theorem, the map $\phi$ extends continuously to $\phi:\overline{\mathbb{D}}\to\overline{\Delta}_0$. For each $\theta\in[0,2\pi)$, let $\gamma_\theta:=\{\phi(r e^{\textup{i}\theta}):0\leq r\leq 1\}$ be the internal ray of $\Delta_0$. Suppose there are two different rays $\gamma_{\theta_1}$ and $\gamma_{\theta_2}$ landing at a common point on $\partial\Delta_0$, i.e., $\phi(e^{\textup{i}\theta_1})=\phi(e^{\textup{i}\theta_2})$. Then $\gamma_{\theta_1}\cup \gamma_{\theta_2}$ is a Jordan curve contained in $\overline{\Delta}_0$. By the maximum modulus principle, $\{f_0^{\circ n}\}_{n\in\mathbb{N}}$ forms a normal family in the bounded domain $D_{\theta_1,\theta_2}$ which is bounded by $\gamma_{\theta_1}\cup \gamma_{\theta_2}$.
This implies that $D_{\theta_1,\theta_2}$ is contained in the Fatou set and hence contained in $\Delta_0$. However, by the F.\ and M.\ Riesz theorem, $\phi$ must then be a constant. This is a contradiction. Hence each point in $\partial\Delta_0$ is the landing point of exactly one internal ray, and $\partial\Delta_0$ is a Jordan curve. \end{proof} \section{A canonical Jordan arc and a new class of irrationals}\label{sec-canonical-Jordan} In this section, we first define a canonical Jordan arc $\Gamma$ connecting the origin with the critical value $\textup{cv}=-4/27$ in the domain of definition of $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in\textup{HT}_N$. In particular, this arc is contained in $\mathcal{P}_f$. Then we define a new class of irrational numbers based on the mapping relations between the different levels of the renormalization. \subsection{A Jordan arc corresponding to $\alpha\in \textup{HT}_N$}\label{subsec-Jordan-arc} Let $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in\textup{HT}_N$, where $N\geq 1/\varepsilon_4$ is assumed in \S\ref{subsec-basic-defi}. We define a half-infinite strip \begin{equation}\label{equ-E} \mho:=\{\zeta\in\mathbb{C}:1/4<\textup{Re}\,\zeta< 7/4 \text{ and } \textup{Im}\,\zeta>-2\} \end{equation} and a topological triangle \begin{equation} \mathcal{Q}_f:=\{z\in\mathcal{P}_f:\Phi_f(z)\in \mho\}. \end{equation} \begin{lem}\label{lema-height} There exists $\varepsilon_4'\in(0,\varepsilon_4]$ such that for all $f\in\mathcal{IS}_\alpha$ with $\alpha\in(0,\varepsilon_4']$, \begin{equation}\label{equ-M-c-subset} \overline{\mathcal{Q}}_f\setminus\{0\} \subset \mathbb{D}(0,\tfrac{4}{27}e^{3\pi})\setminus [0,\tfrac{4}{27}e^{3\pi}). \end{equation} \end{lem} We postpone the proof of Lemma \ref{lema-height} to Appendix \ref{sec-arc-straight}. The inclusion relation \eqref{equ-M-c-subset} is proved first for the maps in $\mathcal{IS}_0$, and then a continuity argument is used.
For $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ with $\alpha\in\textup{HT}_N$, let $f_n:=\mathcal{R} f_{n-1}$ be the maps defined by the renormalization operator inductively, where $n\geq 1$. In the following, we always assume that $N\geq 1/\varepsilon_4'$ and denote $\mathcal{Q}_n:=\mathcal{Q}_{f_n}$. For $X\subset\mathbb{C}$ and $\delta>0$, we denote $B_\delta(X):=\bigcup_{z\in X}\mathbb{D}(z,\delta)$. \begin{cor}\label{cor-anti-holo} For each $n\geq 1$, there exists a unique anti-holomorphic inverse branch of the modified exponential map $\mathbb{E}\textup{xp}$: \begin{equation} \mathbb{L}\textup{og}:\mathcal{Q}_n\to\Phi_{n-1}(\mathcal{Q}_{n-1})=\mho, \end{equation} such that $\mathbb{L}\textup{og}(-\tfrac{4}{27})=1$. Moreover, $B_{1/4}(\mathbb{L}\textup{og}(\overline{\mathcal{Q}}_n\setminus\{0\}))\subset\mho$ and $\Phi_{n-1}^{-1}\circ\mathbb{L}\textup{og}:\overline{\mathcal{Q}}_n\setminus\{0\}\to\mathcal{Q}_{n-1}$ is well defined. \end{cor} \begin{proof} Since $\overline{\mathcal{Q}}_n\setminus\{0\}$ is simply connected and avoids the origin, and $\mathbb{E}\textup{xp}$ takes the value $-4/27$ at each integer, the map $\mathbb{E}\textup{xp}$ has an inverse branch $\mathbb{L}\textup{og}$ defined on $\overline{\mathcal{Q}}_n\setminus\{0\}$ such that $\mathbb{L}\textup{og}(-4/27)=1$. By Lemma \ref{lema-height}, we have $\textup{Re}\,\mathbb{L}\textup{og}(\overline{\mathcal{Q}}_n\setminus\{0\})\subset (1/2,3/2)$ and $\textup{Im}\,\mathbb{L}\textup{og}(\overline{\mathcal{Q}}_n\setminus\{0\})> -3/2$. Therefore, $B_{1/4}(\mathbb{L}\textup{og}(\overline{\mathcal{Q}}_n\setminus\{0\}))$ is contained in $\mho$ and $\Phi_{n-1}^{-1}\circ\mathbb{L}\textup{og}:\overline{\mathcal{Q}}_n\setminus\{0\}\to\mathcal{Q}_{n-1}$ is well defined.
\end{proof} Define a half-infinite strip \begin{equation}\label{equ-mho-new} \mho':=\{\zeta\in\mathbb{C}:1/2<\textup{Re}\,\zeta< 3/2 \text{ and } \textup{Im}\,\zeta>-7/4\}\subset\mho \end{equation} and a topological triangle for every $n\geq 0$: \begin{equation} \mathcal{Q}_n':=\{z\in\mathcal{P}_n:\Phi_n(z)\in \mho'\}. \end{equation} \begin{defi} Let $K_0:=\mathcal{Q}_0'$. For each $n\geq 1$, define \begin{equation} K_n:=\Phi_0^{-1}\circ\mathbb{L}\textup{og}\circ\cdots\circ\Phi_{n-1}^{-1}\circ\mathbb{L}\textup{og}(\mathcal{Q}_n'). \end{equation} By Corollary \ref{cor-anti-holo}, $K_{n+1}\subset K_n$ for all $n\geq 0$, the critical value $-4/27$ is contained in the interior of $K_n$ and $0\in\partial K_n$. Define \begin{equation}\label{equ-Gamma} \Gamma:=\bigcap_{n\geq 0}K_n. \end{equation} \end{defi} \begin{lem}\label{lem-Jordan-arc} The set $\Gamma\cup\{0\}$ is a Jordan arc connecting $-4/27$ with $0$. \end{lem} \begin{proof} The general idea of the proof is to use the uniform contraction with respect to the hyperbolic metrics to show that $\Gamma\cup\{0\}$ is locally connected, and then to prove that it must be a Jordan arc. Let us give the details. \textbf{Step 1}: We first define two continuous curves $\gamma_{0,\pm}^0:[0,+\infty)\to\mho$ as \begin{equation} \gamma_{0,\pm}^0(t):= \left\{ \begin{aligned} & 1\pm\tfrac{1}{2}+(t-\tfrac{11}{4})\,\textup{i} ~~~&\text{if }& t\in[1,+\infty),\\ & 1\pm\tfrac{t}{2}-\tfrac{7}{4}\,\textup{i} ~~~&\text{if }& t\in[0,1). \end{aligned} \right. \end{equation} Then $\gamma_{0,+}^0$ and $\gamma_{0,-}^0$ have the same initial point $\gamma_{0,\pm}^0(0)=1-\tfrac{7}{4}\textup{i}$ and $\gamma_{0,+}^0\cup \gamma_{0,-}^0=\partial\mho'$, where $\mho'$ is defined in \eqref{equ-mho-new}.
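For the reader's convenience, we note that the two branches in the definition of $\gamma_{0,\pm}^0$ match at $t=1$:
\begin{equation*}
\lim_{t\uparrow 1}\Big(1\pm\tfrac{t}{2}-\tfrac{7}{4}\,\textup{i}\Big)=1\pm\tfrac{1}{2}-\tfrac{7}{4}\,\textup{i}=1\pm\tfrac{1}{2}+\big(1-\tfrac{11}{4}\big)\,\textup{i}=\gamma_{0,\pm}^0(1),
\end{equation*}
so $\gamma_{0,\pm}^0$ are indeed continuous (and injective) on $[0,+\infty)$.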
For $\alpha\in(0,1)$, we define \begin{equation}\label{equ-varphi} \varphi_\alpha(t):= \left\{ \begin{aligned} & \frac{1}{\alpha}\Big(t-\frac{1}{2\pi}\log\frac{1}{\alpha}+1\Big) ~~~&\text{if }& t\geq\frac{1}{2\pi}\log\frac{1}{\alpha},\\ & e^{2\pi t} ~~~&\text{if }& t< \frac{1}{2\pi}\log\frac{1}{\alpha}. \end{aligned} \right. \end{equation} It is easy to see that $\varphi_\alpha$ is continuous on $\mathbb{R}$ and strictly increasing. For $n\geq 1$, we define $\varphi_n:=\varphi_{\alpha_n}$. Then $\varphi_n\circ\cdots\circ\varphi_1(t)\to +\infty$ as $n\to\infty$ for all $t\in\mathbb{R}$. In the following, we define two sequences of continuous curves $(\gamma_{n,\pm}^0)_{n\geq 0}$ inductively. For $n\geq 1$, suppose $\gamma_{n-1,\pm}^0:[0,+\infty)\to\partial\mho'$ has been defined. We define $\gamma_{n,\pm}^0:[0,+\infty)\to\partial\mho'$ as \begin{equation}\label{equ-gamma-n-plus} \gamma_{n,\pm}^0(t):= \left\{ \begin{aligned} & 1\pm\tfrac{1}{2}+\big(\varphi_n(\textup{Im}\, \gamma_{n-1,\pm}^0(t))-e^{-7\pi/2}-\tfrac{7}{4}\big)\,\textup{i} ~~~&\text{if }& t\in[1,+\infty),\\ & 1\pm\tfrac{t}{2}-\tfrac{7}{4}\,\textup{i} ~~~&\text{if }& t\in[0,1). \end{aligned} \right. \end{equation} Note that $\gamma_{n,+}^0(1)=\tfrac{3}{2}-\tfrac{7}{4}\,\textup{i}$ and $\gamma_{n,-}^0(1)=\tfrac{1}{2}-\tfrac{7}{4}\,\textup{i}$. Then both $\gamma_{n,+}^0:[0,+\infty)\to\partial\mho'$ and $\gamma_{n,-}^0:[0,+\infty)\to\partial\mho'$ are continuous injections and they have the same initial point $\gamma_{n,\pm}^0(0)=1-\tfrac{7}{4}\textup{i}$. Moreover, $\gamma_{n,+}^0\cup \gamma_{n,-}^0=\partial\mho'$. 
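Note that the inductive definition \eqref{equ-gamma-n-plus} uses implicitly that each $\varphi_n$ is a homeomorphism of $\mathbb{R}$ onto $(0,+\infty)$. Indeed, the two formulas in \eqref{equ-varphi} agree at the junction $t_*:=\frac{1}{2\pi}\log\frac{1}{\alpha}$, since
\begin{equation*}
e^{2\pi t_*}=\frac{1}{\alpha}=\frac{1}{\alpha}\Big(t_*-\frac{1}{2\pi}\log\frac{1}{\alpha}+1\Big),
\end{equation*}
and both branches are strictly increasing, with $e^{2\pi t}\to 0$ as $t\to-\infty$.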
For $t\in[0,+\infty)$, all $n\geq 1$ and $1\leq i\leq n$, by Corollary \ref{cor-anti-holo} the following curves are well-defined: \begin{equation}\label{equ-gam-anti} \gamma_{n-i,\pm}^i(t):= \left\{ \begin{aligned} & \mathbb{L}\textup{og}\circ\Phi_{n-i+1}^{-1}\circ\cdots\circ\mathbb{L}\textup{og}\circ\Phi_n^{-1}(\gamma_{n,\pm}^0(t)) & ~~~\text{if }& i \text{ is even},\\ & \mathbb{L}\textup{og}\circ\Phi_{n-i+1}^{-1}\circ\cdots\circ\mathbb{L}\textup{og}\circ\Phi_n^{-1}(\gamma_{n,\mp}^0(t)) & ~~~\text{if }& i \text{ is odd}. \end{aligned} \right. \end{equation} In particular, $\gamma_{n-i,\pm}^i\subset \overline{\mho'}$ for every $0\leq i\leq n$. Define \begin{equation} \Gamma_{n-i,\pm}^i(t):=\Phi_{n-i}^{-1}(\gamma_{n-i,\pm}^i(t)), \text{ where } t\in[0,+\infty). \end{equation} Then $\Gamma_{n-i,+}^i\cup\{0\}$ and $\Gamma_{n-i,-}^i\cup\{0\}$ are Jordan arcs, and $\Gamma_{n-i,+}^i\cup\Gamma_{n-i,-}^i\cup\{0\}$ is a Jordan curve\footnote{As before we use the fact that $\lim_{\textup{Im}\,\zeta\to+\infty}\Phi_{n-i}^{-1}(\zeta)=0$, where $\zeta\in\Phi_{n-i}(\mathcal{P}_{n-i})$.}. In particular, we have $\Gamma_{0,+}^n\cup\Gamma_{0,-}^n\cup\{0\}=\partial K_n$ and two sequences of continuous curves $\gamma_{0,\pm}^n:[0,+\infty)\to\overline{\mho'}$, where $n\in\mathbb{N}$. In the following we prove that $\gamma_{0,\pm}^n(t)$ and $\Gamma_{0,\pm}^n(t)$ converge uniformly on $[0,+\infty)$ as $n\to\infty$. \textbf{Step 2}: We first estimate the distance between $\gamma_{n-1,\pm}^0(t)$ and $\gamma_{n-1,\pm}^1(t)$ for all $n\geq 1$ and $t\in[0,+\infty)$. Let $t_n\in(1,+\infty)$ be the unique parameter such that \begin{equation} \textup{Im}\,\gamma_{n,\pm}^0(t_n)=\varphi_n(\textup{Im}\, \gamma_{n-1,\pm}^0(t_n))-e^{-7\pi/2}-\tfrac{7}{4}=\tfrac{1}{\alpha_n}. \end{equation} Then we have $\tfrac{1}{2\pi}\log\tfrac{1}{\alpha_n}<\textup{Im}\, \gamma_{n-1,\pm}^0(t_n)<\tfrac{1}{2\pi}\log\tfrac{1}{\alpha_n}+2\alpha_n$.
By definition, we have \begin{equation} \begin{split} |\gamma_{n-1,\pm}^0(t)-\gamma_{n-1,\pm}^1(t)| =&~|\gamma_{n-1,\pm}^0(t)-\mathbb{L}\textup{og}\circ\Phi_n^{-1}(\gamma_{n,\mp}^0(t))|\\ \leq &~1+|\textup{Im}\,\gamma_{n-1,\pm}^0(t)-\textup{Im}\,\mathbb{L}\textup{og}\circ\Phi_n^{-1}(\gamma_{n,\mp}^0(t))|. \end{split} \end{equation} If $t\geq t_n$, then $\textup{Im}\,\gamma_{n,\pm}^0(t)\geq\tfrac{1}{\alpha_n}$. By \eqref{equ-varphi}, \eqref{equ-gamma-n-plus} and Lemma \ref{lema-key-estimate-lp}(a), we have \begin{equation} \begin{split} &~|\textup{Im}\,\gamma_{n-1,\pm}^0(t)-\textup{Im}\,\mathbb{L}\textup{og}\circ\Phi_n^{-1}(\gamma_{n,\mp}^0(t))|\\ \leq &~D_3+\big|\textup{Im}\,\gamma_{n-1,\pm}^0(t)-\alpha_n\textup{Im}\,\gamma_{n,\mp}^0(t)-\tfrac{1}{2\pi}\log\tfrac{1}{\alpha_n}\big|\\ \leq &~D_3+1+\alpha_n(e^{-7\pi/2}+\tfrac{7}{4})<D_3+2. \end{split} \end{equation} If $t< t_n$, then $\textup{Im}\,\gamma_{n,\pm}^0(t)<\tfrac{1}{\alpha_n}$. By \eqref{equ-varphi}, \eqref{equ-gamma-n-plus} and Lemma \ref{lema-key-estimate-lp}(b), there exist two universal constants $C_1$, $C_2\geq 1$ such that \begin{equation} \begin{split} &~|\textup{Im}\,\gamma_{n-1,\pm}^0(t)-\textup{Im}\,\mathbb{L}\textup{og}\circ\Phi_n^{-1}(\gamma_{n,\mp}^0(t))|\\ \leq &~D_3+\big|\textup{Im}\,\gamma_{n-1,\pm}^0(t)-\tfrac{1}{2\pi}\log(1+|\gamma_{n,\mp}^0(t)|)\big|\\ \leq &~D_3+C_1+\big|\textup{Im}\,\gamma_{n-1,\pm}^0(t)-\tfrac{1}{2\pi}\log(1+|\textup{Im}\,\gamma_{n,\mp}^0(t)|)\big| \leq D_3+C_1+C_2. \end{split} \end{equation} Therefore, for all $n\geq 1$ and $t\in[0,+\infty)$, we have \begin{equation}\label{equ:gamma-n-1} |\gamma_{n-1,\pm}^0(t)-\gamma_{n-1,\pm}^1(t)|\leq D_3+C_1+C_2+1. \end{equation} \textbf{Step 3}: Let $\rho_\mho(\zeta)|\textup{d} \zeta|$ and $\rho_n(z)|\textup{d} z|$ be the hyperbolic metrics of $\mho$ and $\mathcal{Q}_n$ respectively. Note that $\gamma_{n-1,\pm}^0$, $\gamma_{n-1,\pm}^1\subset\overline{\mho'}$ and $B_{1/4}(\overline{\mho'})\subset\mho$.
By \eqref{equ:gamma-n-1}, there exists $C_3>0$ such that the hyperbolic distance between $\gamma_{n-1,\pm}^0$ and $\gamma_{n-1,\pm}^1$ satisfies \begin{equation}\label{equ:dist-rho} \textup{dist}_{\rho_\mho}(\gamma_{n-1,\pm}^0(t),\gamma_{n-1,\pm}^1(t))\leq C_3 \text{\quad for any } n\geq 1 \text{ and } t\in[0,+\infty). \end{equation} According to Corollary \ref{cor-anti-holo}, for $1\leq i\leq n$, each map $\mathbb{L}\textup{og}\circ\Phi_i^{-1}:(\mho,\rho_\mho)\to (\mho,\rho_\mho)$ can be decomposed as: \begin{equation} \begin{split} \mathbb{L}\textup{og}\circ\Phi_i^{-1}:(\mho,\rho_\mho) &~\xlongrightarrow{\Phi_i^{-1}} (\mathcal{Q}_i,\rho_i) \xlongrightarrow{\mathbb{L}\textup{og}} (\mathbb{L}\textup{og}(\mathcal{Q}_i),\tilde{\rho}_i) \\ &~\overset{inc.}{\hookrightarrow} (B_{1/4}(\mathbb{L}\textup{og}(\mathcal{Q}_i)),\hat{\rho}_i) \overset{inc.}{\hookrightarrow} (\mho,\rho_\mho), \end{split} \end{equation} where $\tilde{\rho}_i$ and $\hat{\rho}_i$ are hyperbolic metrics of $\mathbb{L}\textup{og}(\mathcal{Q}_i)$ and $B_{1/4}(\mathbb{L}\textup{og}(\mathcal{Q}_i))$ respectively. Since $\textup{diam}(\textup{Re}\,(\mathbb{L}\textup{og}(\mathcal{Q}_i)))\leq 1$, by Lemma \ref{lema-uni-con-prep}, the inclusion map \begin{equation} (\mathbb{L}\textup{og}(\mathcal{Q}_i),\tilde{\rho}_i) \overset{inc.}{\hookrightarrow} (B_{1/4}(\mathbb{L}\textup{og}(\mathcal{Q}_i)),\hat{\rho}_i) \end{equation} is uniformly contracting with respect to their hyperbolic metrics. Since $\Phi_i^{-1}$, $\mathbb{L}\textup{og}$ and the second inclusion map do not expand the hyperbolic metrics, it follows that $\mathbb{L}\textup{og}\circ\Phi_i^{-1}:(\mho,\rho_\mho)\to (\mho,\rho_\mho)$ is uniformly contracting. By the definition of $\gamma_{0,\pm}^n$, there exists a constant $0<\nu<1$ such that \begin{equation} \textup{dist}_{\rho_\mho}(\gamma_{0,\pm}^{n-1}(t),\gamma_{0,\pm}^n(t))\leq C_3\cdot\nu^{n-1}, \text{ where }n\geq 1 \text{ and }t\in[0,+\infty).
\end{equation} This implies that the hyperbolic distance between $\Gamma_{0,\pm}^{n-1}(t)$ and $\Gamma_{0,\pm}^n(t)$ in $\mathcal{Q}_0=\Phi_0^{-1}(\mho)$ satisfies \begin{equation} \textup{dist}_{\rho_0}(\Gamma_{0,\pm}^{n-1}(t),\Gamma_{0,\pm}^n(t))\leq C_3\cdot\nu^{n-1}, \text{ where }n\geq 1 \text{ and }t\in[0,+\infty). \end{equation} Let $\check{\mathcal{Q}}_0:=B_1(\mathcal{Q}_0)$ and let $\check{\rho}_0(z)|\textup{d} z|$ be the hyperbolic metric of $\check{\mathcal{Q}}_0$. Then the Euclidean and hyperbolic metrics (with respect to $\check{\rho}_0$) are comparable on $\mathcal{Q}_0$. According to the Schwarz--Pick lemma, we have $\check{\rho}_0(z)<\rho_0(z)$ for all $z\in\mathcal{Q}_0$. Therefore, there exists a constant $C_4>0$ such that the distance in the Euclidean metric satisfies \begin{equation} |\Gamma_{0,\pm}^{n-1}(t)-\Gamma_{0,\pm}^n(t)|\leq C_4\cdot\nu^{n-1}, \text{ where }n\geq 1 \text{ and }t\in[0,+\infty). \end{equation} Therefore, the following limit exists and the convergence is uniform for $t\in [0,+\infty)$: \begin{equation} \Gamma_{0,\pm}^\infty(t):=\lim_{n\to\infty} \Gamma_{0,\pm}^n(t). \end{equation} Note that $1\in\mho$ and $\mathbb{L}\textup{og}\circ\Phi_n^{-1}(1)=1$. Since $\mathbb{L}\textup{og}\circ\Phi_i^{-1}:(\mho,\rho_\mho)\to (\mho,\rho_\mho)$ is uniformly contracting for all $1\leq i\leq n$, we have \begin{equation} \begin{split} \lim_{n\to\infty}\Gamma_{0,\pm}^n(0) =&~\lim_{n\to\infty}\Phi_0^{-1}\circ\mathbb{L}\textup{og}\circ\Phi_1^{-1}\circ\cdots\circ\mathbb{L}\textup{og}\circ\Phi_n^{-1}(1-\tfrac{7}{4}\textup{i}) \\ =&~\Phi_0^{-1}(1)=-\tfrac{4}{27}. \end{split} \end{equation} Since $\gamma_{n-1,\pm}^0 \subset\overline{\mho'}$ and $B_{1/4}(\overline{\mho'})\subset\mho$, there exists a constant $C_3'>0$ such that \begin{equation} \textup{dist}_{\rho_\mho}(\gamma_{n-1,+}^0(t),\gamma_{n-1,-}^0(t))\leq C_3' \text{\quad for any } n\geq 1 \text{ and } t\in[0,+\infty).
\end{equation} By a similar argument as above, we have \begin{equation} \Gamma_{0,+}^\infty(t)=\Gamma_{0,-}^\infty(t), \text{\quad where } t\in [0,+\infty). \end{equation} Note that $\Gamma$ is the intersection of the nested sequence $(K_n)_{n\geq 0}$, where $K_n$ is the bounded component of $\mathbb{C}\setminus(\Gamma_{0,+}^n\cup\Gamma_{0,-}^n\cup\{0\})$ for all $n\geq 0$. Therefore, $\Gamma=\Gamma_{0,+}^\infty=\Gamma_{0,-}^\infty$ and $\Gamma\cup\{0\}$ is a Jordan arc connecting $-4/27$ with $0$. \end{proof} \subsection{Dynamical behavior of the points on the arcs} Let $\phi_0:=\textup{id}$. For each $n\geq 1$, we denote \begin{equation} \phi_n:=\mathbb{E}\textup{xp}\circ\Phi_{n-1}\circ\cdots\circ\mathbb{E}\textup{xp}\circ\Phi_0. \end{equation} Let $\Gamma$ be the Jordan arc defined in \eqref{equ-Gamma}. By the proof of Lemma \ref{lem-Jordan-arc}, $\phi_n$ can be defined on $\Gamma_0:=\Gamma$ since \begin{equation} \Gamma_n:=\phi_n(\Gamma_0)\subset \mathcal{Q}_n'=\Phi_n^{-1}(\mho'), \text{ where }n\geq 1. \end{equation} Note that the restriction of $\mathbb{E}\textup{xp}\circ\Phi_{n-1}$ to $\Gamma_{n-1}$ is a homeomorphism. Hence each $\Gamma_n\cup\{0\}$ is also a Jordan arc connecting $-\tfrac{4}{27}$ with $0$ in the dynamical plane of $f_n$. For each $n\geq 1$, the map $\phi_n:\Gamma_0\to \Gamma_n$ can be extended homeomorphically to $\phi_n:\Gamma_0\cup\{0\}\to \Gamma_n\cup\{0\}$ such that $\phi_n(-\tfrac{4}{27})=-\tfrac{4}{27}$ and $\phi_n(0)=0$. Moreover, \begin{equation}\label{equ-gamma-n} \gamma_n:=\Phi_n(\Gamma_n) \end{equation} is an unbounded arc in $\mho'$ with the initial point $1$. \begin{defi} For $n\geq 1$, we define \begin{equation}\label{equ-s-alpha-n} s_{\alpha_n}:=\Phi_n\circ \mathbb{E}\textup{xp}:\gamma_{n-1}\to\gamma_n. \end{equation} Then $s_{\alpha_n}$ is a homeomorphism with $s_{\alpha_n}(1)=1$.
\end{defi} In the following, we assume that $\alpha=\alpha_0\in\mathcal{B}_N$, where $\mathcal{B}_N$ is the set of high type Brjuno numbers defined in \eqref{equ:Brjuno}. Let $\mathcal{B}(\alpha_n)$ be the Brjuno sum defined in \eqref{equ:Brj-Yoccoz-n}. \begin{defi} For $n\geq 0$, we define \begin{equation}\label{equ-height-new} \widetilde{\mathcal{B}}(\alpha_n):=\frac{\mathcal{B}(\alpha_n)}{2\pi}+ M, \end{equation} where $M\geq 1$ is a constant which will be determined in a moment. \end{defi} \begin{lem}\label{lema-all-greater} There exists a constant $M_0>1$ such that if $M\geq M_0$ and $\zeta\in\gamma_{n-1}$ satisfies $\textup{Im}\,\zeta\geq\widetilde{\mathcal{B}}(\alpha_n)$ for some $n\geq 1$, then $\textup{Im}\, s_{\alpha_n}(\zeta)\geq\widetilde{\mathcal{B}}(\alpha_{n+1})$. \end{lem} \begin{proof} Let $D_4>0$ be the constant introduced in Lemma \ref{lema-key-esti-inverse}. If $M\geq D_4$, then \begin{equation} \widetilde{\mathcal{B}}(\alpha_n)=\frac{\mathcal{B}(\alpha_n)}{2\pi}+ M>\frac{1}{2\pi}\log\frac{1}{\alpha_n}+D_4. \end{equation} By Lemma \ref{lema-key-esti-inverse}(a), if $M\geq 2D_5$ and $\textup{Im}\,\zeta\geq\widetilde{\mathcal{B}}(\alpha_n)$, then \begin{equation} \begin{split} \textup{Im}\, s_{\alpha_n}(\zeta) &~\geq \frac{1}{\alpha_n}\left(\textup{Im}\,\zeta-\frac{1}{2\pi}\log\frac{1}{\alpha_n}-D_5\right) \geq \frac{1}{\alpha_n}\left(\widetilde{\mathcal{B}}(\alpha_n)-\frac{1}{2\pi}\log\frac{1}{\alpha_n}-D_5\right) \\ &~=\widetilde{\mathcal{B}}(\alpha_{n+1})+\frac{1}{\alpha_n}\big((1-\alpha_n) M- D_5\big)\geq \widetilde{\mathcal{B}}(\alpha_{n+1}). \end{split} \end{equation} Then the lemma follows by setting $M_0:=\max\{D_4,2D_5\}$. \end{proof} Since $\alpha\in\mathcal{B}_N$, every $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ has a Siegel disk $\Delta_0$ centered at the origin. Let $D_7> 1$ be the universal constant in Lemma \ref{lema-fixed-disk}.
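The last equality in the proof of Lemma \ref{lema-all-greater} rests on the functional equation $\mathcal{B}(\alpha_n)=\log\frac{1}{\alpha_n}+\alpha_n\mathcal{B}(\alpha_{n+1})$ satisfied by the Yoccoz--Brjuno sum. The following Python sketch (illustrative only, not part of the proof) checks this algebraic step with arbitrary sample values; the constants $M$ and $D_5$ below are placeholders, not the actual ones from Lemma \ref{lema-key-esti-inverse}:

```python
import math, random

random.seed(1)
M, D5 = 5.0, 2.0          # sample placeholder values
two_pi = 2 * math.pi

for _ in range(100):
    alpha_n = random.uniform(1e-4, 0.5)
    B_next = random.uniform(0.0, 50.0)            # stands for B(alpha_{n+1})
    B_n = math.log(1/alpha_n) + alpha_n * B_next  # Yoccoz functional equation
    Bt_n = B_n / two_pi + M                       # widetilde{B}(alpha_n)
    Bt_next = B_next / two_pi + M                 # widetilde{B}(alpha_{n+1})
    # left side of the displayed equality in the proof
    lhs = (Bt_n - math.log(1/alpha_n)/two_pi - D5) / alpha_n
    # right side of the displayed equality in the proof
    rhs = Bt_next + ((1 - alpha_n) * M - D5) / alpha_n
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```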
In the following we fix \begin{equation}\label{equ:M} M\geq \max\left\{M_0,\frac{1}{2\pi}\log\frac{27 D_7}{4} \right\}. \end{equation} Let $\Gamma_0\cup\{0\}$ be the Jordan arc connecting the critical value $\textup{cv}=-\frac{4}{27}$ with $0$ corresponding to $f_0$ (see Lemma \ref{lem-Jordan-arc}). For a given point $z_0\in\Gamma_0$, let $(\zeta_n)_{n\geq 0}$ be the sequence defined by \begin{equation}\label{equ-sequence} \zeta_0:=\Phi_0(z_0)\in\gamma_0 \text{\quad and\quad} \zeta_n:=s_{\alpha_n}(\zeta_{n-1})\in\gamma_n \text{\quad for } n\geq 1. \end{equation} \begin{lem}\label{lema-eventually-above} If $z_0\in\Gamma_0\cap\Delta_0$, then there exists $n_0\geq 0$ such that $\textup{Im}\, \zeta_n\geq \widetilde{\mathcal{B}}(\alpha_{n+1})$ for all $n\geq n_0$. \end{lem} \begin{proof} Let $z_0\in\Gamma_0\cap\Delta_0$. By Lemma \ref{lema-fixed-disk}, for every $n\in\mathbb{N}$, the inner radius of the Siegel disk of $f_n$ is $c_ne^{-\mathcal{B}(\alpha_n)}$, where $c_n\in[1/D_7,D_7]$. Let $\mho$ be the half-infinite strip defined in \eqref{equ-E}. By the definition of the near-parabolic renormalization $f_{n+1}=\mathcal{R} f_n$, there exists $\widetilde{\zeta}_n\in \overline{\mho'}$ such that $\mathbb{E}\textup{xp}(\widetilde{\zeta}_n)\in\partial\Delta_{n+1}$ and (see \eqref{equ-y-widehat}) \begin{equation}\label{equ-y-n-hat-2} \textup{Im}\, \widetilde{\zeta}_n=\frac{1}{2\pi}\mathcal{B}(\alpha_{n+1})-\frac{1}{2\pi}\log\frac{27 c_{n+1}}{4}. \end{equation} Assume there exists a subsequence $(n_j)_{j\geq 1}$ such that $\textup{Im}\, \zeta_{n_j}< \widetilde{\mathcal{B}}(\alpha_{n_j+1})=\frac{1}{2\pi}\mathcal{B}(\alpha_{n_j+1})+M$.
If $\textup{Im}\,\zeta_{n_j}\leq \textup{Im}\, \widetilde{\zeta}_{n_j}$, there exists $\zeta'_{n_j}\in\Phi_{n_j}(\partial\Delta_{n_j}\cap\mathcal{P}_{n_j})\cap \overline{\mho'}$ with $\textup{Im}\,\zeta_{n_j}'=\textup{Im}\,\zeta_{n_j}$ such that \begin{equation}\label{equ:Euc-1} |\zeta_{n_j}-\zeta_{n_j}'|\leq 1. \end{equation} If $\textup{Im}\,\zeta_{n_j}> \textup{Im}\, \widetilde{\zeta}_{n_j}$, we have \begin{equation} \frac{1}{2\pi}\mathcal{B}(\alpha_{n_j+1})-\frac{1}{2\pi}\log\frac{27 c_{n_j+1}}{4}<\textup{Im}\, \zeta_{n_j}< \frac{1}{2\pi}\mathcal{B}(\alpha_{n_j+1})+M \end{equation} and hence \begin{equation}\label{equ:Euc-2} |\zeta_{n_j}-\widetilde{\zeta}_{n_j}|^2 \leq 1+\Big(M+\frac{1}{2\pi}\log\frac{27 D_7}{4}\Big)^2. \end{equation} By \eqref{equ:Euc-1} and \eqref{equ:Euc-2}, for each $\zeta_{n_j}$ with $j\geq 1$, one can find a point ($\zeta_{n_j}'$ or $\widetilde{\zeta}_{n_j}$) in $\Phi_{n_j}(\partial\Delta_{n_j}\cap\mathcal{P}_{n_j})\cap \overline{\mho'}$ such that the hyperbolic distance between them with respect to $\rho_{\mho}$ is uniformly bounded above. By a similar argument to Proposition \ref{prop-tend-to-bdy} based on Lemma \ref{lema-uni-con-prep}, we conclude that $\zeta_0\in \Phi_0(\partial\Delta_0\cap\mathcal{P}_0)\cap \overline{\mho'}$ and $z_0\in\partial\Delta_0$, which violates our assumption that $z_0\in\Delta_0$. Therefore, there exists $n_0\geq 0$ such that $\textup{Im}\, \zeta_n\geq \widetilde{\mathcal{B}}(\alpha_{n+1})$ for all $n\geq n_0$. \end{proof} \begin{lem}\label{lema-in-disk} $\Gamma_0\cap\partial\Delta_0$ is a singleton. In particular, $\Gamma_0\setminus\{\textup{cv}\}\subset\Delta_0$ if and only if $\textup{cv}\in\partial\Delta_0$.
\end{lem} \begin{proof} Since $\Gamma_0\cup\{0\}$ is a Jordan arc connecting $\textup{cv}=-\frac{4}{27}$ with $0$, there exists a homeomorphism $\beta:[0,1]\to\Gamma_0\cup\{0\}$ such that $\beta(0)=\textup{cv}$ and $\beta(1)=0$. Assume that $\Gamma_0\cap\partial\Delta_0$ is not a singleton. Then there exist $0\leq t_1<t_2<1$ such that \begin{itemize} \item $\beta(t_i)\in\partial\Delta_0$ for $i=1,2$; and \item $\beta([0,t_1])\cap\Delta_0=\emptyset$ and $\beta((t_2,1])\subset\Delta_0$. \end{itemize} Let $\Gamma_0':=\beta([t_1,t_2])$ be a subarc of $\Gamma_0$. Then we have the following two cases. (1) Assume $\Gamma_0'\subset\partial\Delta_0$. Since the restriction of $f_0$ to $\partial\Delta_0$ is conjugate to a rigid rotation, there exists $z_0\in\Gamma_0'$ such that $f_0^{\circ q_n}(z_0)\in\Gamma_0'$ for some large integer $n$. Denote $\Gamma_n':=\mathbb{E}\textup{xp}\circ\Phi_{n-1}\circ\cdots\circ\mathbb{E}\textup{xp}\circ\Phi_0(\Gamma_0')$. Then $\Gamma_n'$ is a Jordan arc contained in $\Gamma_n\subset\mathcal{Q}_n'$. By Lemma \ref{lema-Cheraghi-2}(a), $\Gamma_n'$, and hence $\Gamma_n$, contains a point $z_n$ together with $f_n(z_n)$, which is impossible. (2) Assume $\Gamma_0'\not\subset\partial\Delta_0$. Let $W\neq\Delta_0$ be any bounded component of $\mathbb{C}\setminus(\partial\Delta_0\cup\Gamma_0')$. Since $\partial\Delta_0\cup\Gamma_0'$ and $W$ can be iterated infinitely many times by $f_0$, it follows from the maximum modulus principle that $W$ is contained in the Fatou set of $f_0$. Since $\partial W\cap\partial\Delta_0$ contains a subarc of $\partial\Delta_0$, it follows that $W$ is contained in $\Delta_0$, which is a contradiction. This finishes the proof that $\Gamma_0\cap\partial\Delta_0$ is a singleton. From $\Gamma_0\setminus\{\textup{cv}\}\subset\Delta_0$ we obtain $\textup{cv}\in\partial\Delta_0$ immediately.
If $\textup{cv}\in\partial\Delta_0$, since $\Gamma_0$ is a Jordan arc and $\Gamma_0\cap\partial\Delta_0$ is a singleton, we conclude that $\Gamma_0\setminus\{\textup{cv}\}\subset\Delta_0$. \end{proof} \subsection{A new class of irrational numbers}\label{subsec-new-class} For $n\geq 1$, let $s_{\alpha_n}:\gamma_{n-1}\to\gamma_n$ be the homeomorphism defined in \eqref{equ-s-alpha-n}. In the following, we use $\Gamma_\alpha$ (resp. $\gamma_\alpha$) to denote $\Gamma_0$ (resp. $\gamma_0=\Phi_0(\Gamma_0)$) when we want to emphasize the dependence on $\alpha=\alpha_0\in\textup{HT}_N$. \begin{defi} Let $\widetilde{\mathcal{H}}_N$ be a subset of $\mathcal{B}_N$ defined as \begin{equation} \widetilde{\mathcal{H}}_N:= \left\{\alpha\in\mathcal{B}_N \left| \begin{array}{l} \forall\,\zeta\in\gamma_\alpha\setminus\{1\}, \,\exists\,n\geq 1 \text{ such that}\\ \textup{Im}\, s_{\alpha_n}\circ\cdots\circ s_{\alpha_1}(\zeta)\geq\widetilde{\mathcal{B}}(\alpha_{n+1}) \end{array} \right. \right\}. \end{equation} \end{defi} In the next section we show that $\widetilde{\mathcal{H}}_N$ is independent of the choice of $f_0\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ by proving that $\widetilde{\mathcal{H}}_N$ coincides with the set of high type Herman numbers. \begin{prop}\label{prop-equi-alpha} The critical value $\textup{cv}=-\frac{4}{27}$ is contained in $\partial\Delta_0$ if and only if $\alpha\in\widetilde{\mathcal{H}}_N$. \end{prop} \begin{proof} For each $\zeta\in\gamma_\alpha\setminus\{1\}$ and $n\geq 1$, we denote \begin{equation} \zeta_n:=s_{\alpha_n}\circ\cdots\circ s_{\alpha_1}(\zeta). \end{equation} Suppose $\alpha\in\widetilde{\mathcal{H}}_N$. Then there exists $n\geq 1$ such that $\textup{Im}\,\zeta_n\geq\widetilde{\mathcal{B}}(\alpha_{n+1})$. By \eqref{equ-y-n-hat-2} and the choice of $M$ in \eqref{equ:M}, we have $\Phi_n^{-1}(\zeta_n)\in\Delta_n$ and hence $\Phi_0^{-1}(\zeta)\in\Delta_0$.
Therefore, $\Gamma_\alpha\setminus\{\textup{cv}\}=\Phi_0^{-1}(\gamma_\alpha\setminus\{1\})$ is contained in $\Delta_0$ and $\textup{cv}\in\partial\Delta_0$. Suppose $\alpha\in\mathcal{B}_N$ and $\textup{cv}\in\partial\Delta_0$. By Lemma \ref{lema-in-disk}, we have $\Phi_0^{-1}(\zeta)\in\Delta_0\cap\Gamma_\alpha$. According to Lemma \ref{lema-eventually-above}, there exists an integer $n\geq 1$ so that $\textup{Im}\, \zeta_n\geq\widetilde{\mathcal{B}}(\alpha_{n+1})$. This implies that $\alpha\in\widetilde{\mathcal{H}}_N$. \end{proof} \section{Optimality of the Herman condition} The Herman condition is not easy to verify in general. Yoccoz gave an arithmetic characterization of this condition so that one can easily check whether an irrational number is of Herman type. In this section, we first recall Yoccoz's characterization and then prove that under the high type condition, an irrational number is of Herman type if and only if it belongs to the set $\widetilde{\mathcal{H}}_N$ defined in \S\ref{subsec-new-class}. \subsection{Yoccoz's characterization of $\mathcal{H}$}\label{subsec-Yoc} For $\alpha\in(0,1)$ and $x\in\mathbb{R}$, define \begin{equation} r_\alpha(x):= \left\{ \begin{aligned} & \frac{1}{\alpha}\Big(x-\log\frac{1}{\alpha}+1\Big) & ~~~\text{if}\quad x\geq \log\frac{1}{\alpha},\\ & e^x & ~~~\text{if}\quad x< \log\frac{1}{\alpha}. \end{aligned} \right. \end{equation} The map $r_\alpha$ is of class $C^1$ on $\mathbb{R}$, satisfying $r_\alpha(\log\frac{1}{\alpha})=r_\alpha'(\log\frac{1}{\alpha})=\frac{1}{\alpha}$, $x+1\leq r_\alpha(x)\leq e^x$ for all $x\in\mathbb{R}$, and $r_\alpha'(x)\geq 1$ for all $x\geq 0$. For an irrational number $\alpha\in(0,1)$, we use $(\alpha_n)_{n\geq 0}$ to denote the sequence of irrationals defined as in \eqref{equ-gauss}. Let $\mathcal{B}(\alpha)$ be the Brjuno sum of $\alpha$ (see \eqref{equ-Brjuno-sum-Yoccoz}).
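The stated properties of $r_\alpha$ are elementary; for instance, on the affine branch the bound $x+1\leq r_\alpha(x)$ follows from $\log\frac{1}{\alpha}\leq\frac{1-\alpha}{\alpha}$. They can also be confirmed numerically, as in the following Python sketch (illustrative only, not part of the argument):

```python
import math

def r(alpha, x):
    """Yoccoz's comparison map r_alpha."""
    L = math.log(1/alpha)
    if x >= L:
        return (x - L + 1) / alpha   # affine branch
    return math.exp(x)               # exponential branch

for alpha in (0.01, 0.2, 0.45):
    L = math.log(1/alpha)
    # C^1 matching at the junction: value and one-sided slopes agree with 1/alpha
    assert abs(r(alpha, L) - 1/alpha) < 1e-9
    h = 1e-6
    left = (r(alpha, L) - r(alpha, L - h)) / h
    right = (r(alpha, L + h) - r(alpha, L)) / h
    assert abs(left - 1/alpha) < 1e-3 and abs(right - 1/alpha) < 1e-3
    # x + 1 <= r_alpha(x) <= e^x on a sample grid
    for k in range(-40, 41):
        x = k / 4
        assert x + 1 <= r(alpha, x) + 1e-12 <= math.exp(x) + 1e-12
```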
A Brjuno number $\alpha$ is a Herman number (or is of Herman type) if every orientation-preserving analytic circle diffeomorphism of rotation number $\alpha$ is analytically conjugate to a rigid rotation. Let $\mathcal{H}$ be the set of all Herman numbers. \begin{thm}[{\cite[\S 2.5]{Yoc02}}]\label{thm-Yoccoz} The Herman condition has the following arithmetic characterization: \begin{equation} \mathcal{H}=\big\{\alpha\in\mathcal{B}: \forall\, m\geq 0, \,\exists\, n>m \text{ such that } r_{\alpha_{n-1}}\circ\cdots\circ r_{\alpha_m}(0)\geq \mathcal{B}(\alpha_n)\big\}. \end{equation} \end{thm} \subsection{Two conditions are equivalent}\label{subsec-equi-irrat} In this subsection, we prove that under the high type condition the set of Herman numbers is equal to the set $\widetilde{\mathcal{H}}_N$ defined in \S\ref{subsec-new-class}. \begin{lem}[{\cite[Lemma 4.9]{Yoc02}}]\label{lema-Yoccoz} Let $\alpha$ be irrational and $x\geq 0$. Then $\alpha\not\in\mathcal{H}$ if and only if there exist $m$ and an infinite set $I=I(m,x,\alpha)\subset\mathbb{N}$ such that, for all $k\in I$, we have \begin{equation} r_{\alpha_{m+k-1}}\circ\cdots\circ r_{\alpha_m}(x)<\log\tfrac{1}{\alpha_{m+k}}. \end{equation} \end{lem} Let $D_4$ and $D_5>1$ be the constants introduced in Lemma \ref{lema-key-esti-inverse}. \begin{defi} For $\alpha\in(0,1)$ and $y\in\mathbb{R}$, we define \begin{equation}\label{equ-s-alpha-star} \overline{s}_\alpha(y):= \left\{ \begin{aligned} & \frac{1}{\alpha}\Big(y-\frac{1}{2\pi}\log\frac{1}{\alpha}+D_5\Big) & ~~~\text{if}\quad y\geq \frac{1}{2\pi}\log\frac{1}{\alpha}+D_4,\\ & e^{D_5}\, e^{2\pi y} & ~~~\text{if}\quad y< \frac{1}{2\pi}\log\frac{1}{\alpha}+D_4. \end{aligned} \right. \end{equation} \end{defi} Let $\gamma_\alpha=\gamma_{\alpha_0}$ be the unbounded arc defined in \eqref{equ-gamma-n} and $s_{\alpha_n}:=\Phi_n\circ \mathbb{E}\textup{xp}:\gamma_{n-1}\to\gamma_n$ the map defined in \eqref{equ-s-alpha-n}.
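The mechanism of the comparison argument below can be tested numerically: seeding $x_{m-1}=2C_0$ and $y_{m-1}=1$ and iterating $r_\alpha$ and $\overline{s}_\alpha$ with a common random $\alpha$ at each step, the gap $x\geq 2\pi y+C_0$ of \eqref{equ-key-inequality} persists. The Python sketch below is illustrative only and uses sample values for $D_4$ and $D_5$ (the actual constants come from Lemma \ref{lema-key-esti-inverse}):

```python
import math, random

random.seed(2)
D4, D5 = 2.0, 2.0                                 # sample placeholder constants
C0 = 8 * math.pi * math.exp(D5 + 2*math.pi*D4)    # the constant C_0 of \eqref{equ:C-0}
two_pi = 2 * math.pi

def r(alpha, x):
    """Yoccoz's comparison map r_alpha."""
    L = math.log(1/alpha)
    return (x - L + 1)/alpha if x >= L else math.exp(x)

def s_bar(alpha, y):
    """The upper comparison map s-bar_alpha of \\eqref{equ-s-alpha-star}."""
    L = math.log(1/alpha) / two_pi
    if y >= L + D4:
        return (y - L + D5) / alpha
    return math.exp(D5) * math.exp(two_pi * y)

for _ in range(50):
    x, y = 2*C0, 1.0                 # seeds x_{m-1} = 2C_0 and y_{m-1} = 1
    for _ in range(8):
        alpha = random.uniform(0.01, 0.1)
        x, y = r(alpha, x), s_bar(alpha, y)
        assert x >= two_pi * y + C0  # the invariant of \eqref{equ-key-inequality}
```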
By Lemma \ref{lema-key-esti-inverse} and the definition of $\overline{s}_\alpha$, we have the following immediate result. \begin{lem}\label{lema-less-than} For each $\alpha\in\mathcal{B}_N$ and $\zeta\in\gamma_\alpha$, we have \begin{equation} \textup{Im}\, s_\alpha(\zeta)\leq \overline{s}_\alpha(\textup{Im}\, \zeta). \end{equation} \end{lem} Define $\mathcal{H}_N:=\mathcal{H}\cap\mathcal{B}_N$. \begin{lem}\label{lem-subset-alpha} We have $\widetilde{\mathcal{H}}_N\subset\mathcal{H}_N$. \end{lem} \begin{proof} Assume by contradiction that $\alpha\in\widetilde{\mathcal{H}}_N\setminus\mathcal{H}_N$. Define \begin{equation}\label{equ:C-0} C_0:=8\pi e^{D_5+2\pi D_4}. \end{equation} By Lemma \ref{lema-Yoccoz}, for the number $2C_0$, there exist $m\geq 1$ and an infinite subset $I=I(m,2C_0,\alpha)$ of $\mathbb{N}$ such that for all $k\in I$, we have \begin{equation}\label{equ-not-Herman} r_{\alpha_{m+k-1}}\circ\cdots\circ r_{\alpha_m}(2C_0)<\log\tfrac{1}{\alpha_{m+k}}. \end{equation} Denote $x_{m-1}:=2C_0$ and $y_{m-1}:=1$. For $k\geq 1$, we define \begin{equation} x_{m+k-1}:=r_{\alpha_{m+k-1}}\circ\cdots\circ r_{\alpha_m}(2C_0) \text{\quad and\quad} y_{m+k-1}:=\overline{s}_{\alpha_{m+k-1}}\circ\cdots\circ \overline{s}_{\alpha_m}(1), \end{equation} where $\overline{s}_{\alpha_n}$ is the map defined in \eqref{equ-s-alpha-star}. We claim that \begin{equation}\label{equ-key-inequality} x_{m+k-1}\geq 2\pi y_{m+k-1}+C_0 \text{\quad for all } k\geq 0. \end{equation} Assume that \eqref{equ-key-inequality} holds temporarily. Since $\gamma_\alpha$ is an arc starting at the point $1$ and eventually going up to infinity, there exists $\zeta\in\gamma_{\alpha_{m-1}}$ so that $\textup{Im}\, \zeta=1$. For $k\geq 1$, we denote \begin{equation} \zeta_{m+k-1}:=s_{\alpha_{m+k-1}}\circ\cdots\circ s_{\alpha_m}(\zeta), \end{equation} where each $s_{\alpha_n}$ is defined in \eqref{equ-s-alpha-n}.
By Lemma \ref{lema-less-than}, we have $y_{m+k-1}\geq \textup{Im}\, \zeta_{m+k-1}$ for all $k\geq 1$. Since $\alpha\in\widetilde{\mathcal{H}}_N$, by the definition of $\widetilde{\mathcal{H}}_N$ and Lemma \ref{lema-all-greater}, there exists an integer $k_0\geq 1$ such that for all $k\geq k_0$, one has \begin{equation} y_{m+k-1}\geq \textup{Im}\, \zeta_{m+k-1}\geq \widetilde{\mathcal{B}}(\alpha_{m+k})=\frac{\mathcal{B}(\alpha_{m+k})}{2\pi}+M>\frac{1}{2\pi}\log\frac{1}{\alpha_{m+k}}+M. \end{equation} On the other hand, since $\alpha\not\in\mathcal{H}_N$, by \eqref{equ-not-Herman} there exists $k\in I$ with $k\geq k_0$ such that $x_{m+k-1}<\log\tfrac{1}{\alpha_{m+k}}$. This is a contradiction since by \eqref{equ-key-inequality} we have $x_{m+k-1}\geq 2\pi y_{m+k-1}+C_0>\log\tfrac{1}{\alpha_{m+k}}$. Hence it suffices to prove the claim \eqref{equ-key-inequality}. Obviously, \eqref{equ-key-inequality} is true when $k=0$ since $C_0\geq 2\pi$. Suppose $x_{m+k-1}\geq 2\pi y_{m+k-1}+C_0$ for some $k\geq 0$. It suffices to obtain $x_{m+k}\geq 2\pi y_{m+k}+C_0$. The argument is divided into the following three cases. \textbf{Case I}: Suppose $x_{m+k-1}<\log\frac{1}{\alpha_{m+k}}$ and $y_{m+k-1}<\frac{1}{2\pi}\log\frac{1}{\alpha_{m+k}}+D_4$. By \eqref{equ:C-0}, we have $C_0>2 (D_5+\log(2\pi))$ and hence $e^{y+C_0}>e^{y+D_5+\log(2\pi)}+C_0$ for any $y\geq 1$. Therefore, \begin{equation} \begin{split} x_{m+k} =&~e^{x_{m+k-1}}\geq e^{2\pi y_{m+k-1}+C_0}> e^{2\pi y_{m+k-1}+D_5+\log(2\pi)}+C_0\\ =&~2\pi\,\overline{s}_{\alpha_{m+k}}(y_{m+k-1})+C_0=2\pi y_{m+k}+C_0. \end{split} \end{equation} \textbf{Case II}: Suppose $x_{m+k-1}\geq\log\frac{1}{\alpha_{m+k}}$ and $y_{m+k-1}\geq\frac{1}{2\pi}\log\frac{1}{\alpha_{m+k}}+D_4$.
Then \begin{equation} \begin{split} x_{m+k}&~=\frac{1}{\alpha_{m+k}}\Big(x_{m+k-1}-\log\frac{1}{\alpha_{m+k}}+1\Big)\\ &~\geq \frac{2\pi}{\alpha_{m+k}}\Big(y_{m+k-1}-\frac{1}{2\pi}\log\frac{1}{\alpha_{m+k}}+D_5\Big) +\frac{1}{\alpha_{m+k}}(C_0+1-2\pi D_5)\\ &~\geq 2\pi y_{m+k}+2(C_0+1-2\pi D_5)> 2\pi y_{m+k}+C_0. \end{split} \end{equation} \textbf{Case III}: Suppose $x_{m+k-1}\geq\log\frac{1}{\alpha_{m+k}}$ and $y_{m+k-1}<\frac{1}{2\pi}\log\frac{1}{\alpha_{m+k}}+D_4$. We consider the following two subcases: Subcase (i): Suppose $2\pi y_{m+k-1}<\log\frac{1}{\alpha_{m+k}}-\frac{C_0}{4}$. Note that \begin{equation} x_{m+k}=\frac{1}{\alpha_{m+k}}\left(x_{m+k-1}-\log\frac{1}{\alpha_{m+k}}+1\right)\geq \frac{1}{\alpha_{m+k}}. \end{equation} Since $x_{m-1}=2C_0$, we have $x_{m+k}\geq \max\big\{2C_0,\frac{1}{\alpha_{m+k}}\big\}$. By \eqref{equ:C-0}, we have $C_0>4D_5+4\log(4\pi)$ and hence $2\pi e^{D_5-C_0/4}<1/2$. Then \begin{equation} \begin{split} x_{m+k}\geq \max\left\{2C_0,\frac{1}{\alpha_{m+k}}\right\} \geq &~ \frac{2\pi e^{D_5-C_0/4}}{\alpha_{m+k}}+C_0 \\ \geq &~ 2\pi e^{D_5} e^{2\pi y_{m+k-1}}+C_0=2\pi y_{m+k}+C_0. \end{split} \end{equation} Subcase (ii): Suppose $\log\frac{1}{\alpha_{m+k}}-\frac{C_0}{4}\leq 2\pi y_{m+k-1}<\log\frac{1}{\alpha_{m+k}}+2\pi D_4$. Then \begin{equation}\label{equ-compare-1} \begin{split} &~\alpha_{m+k}\big(x_{m+k}-(2\pi y_{m+k}+C_0)\big) \\ =&~ x_{m+k-1}-\log\tfrac{1}{\alpha_{m+k}}+1-\alpha_{m+k}(2\pi e^{D_5} e^{2\pi y_{m+k-1}}+C_0) \\ \geq &~ 2\pi y_{m+k-1}+C_0+1-\log\tfrac{1}{\alpha_{m+k}}-2\pi \alpha_{m+k} e^{D_5} e^{2\pi y_{m+k-1}}-C_0\alpha_{m+k}. \end{split} \end{equation} For $\alpha\in(0,1/2]$, we consider the following continuous function: \begin{equation} h(t):=t+C_0+1-\log\tfrac{1}{\alpha}-2\pi \alpha e^{D_5} e^t-C_0\alpha, \quad \text{where } t\in\mathbb{R}. \end{equation} Then $h'(t)=1-2\pi \alpha e^{D_5} e^t$. 
Hence $h$ is increasing on $(-\infty,\log\frac{1}{\alpha}-D_5-\log(2\pi)]$ and decreasing on $[\log\frac{1}{\alpha}-D_5-\log(2\pi),+\infty)$. By \eqref{equ:C-0} and a direct calculation, we have \begin{equation}\label{equ-value-ends} \begin{split} h(\log\tfrac{1}{\alpha}-\tfrac{C_0}{4})=&~(\tfrac{3}{4}-\alpha)C_0+1-2\pi e^{D_5-C_0/4}>0,\text{ and } \\ h(\log\tfrac{1}{\alpha}+2\pi D_4)=&~(1-\alpha)C_0+2\pi D_4+1-2\pi e^{D_5+2\pi D_4}>0. \end{split} \end{equation} By \eqref{equ-compare-1} and \eqref{equ-value-ends}, we have $x_{m+k}>2\pi y_{m+k}+C_0$. This finishes the proof of the claim \eqref{equ-key-inequality} and the lemma holds. \end{proof} Let $D_3>0$ be the constant introduced in Lemma \ref{lema-key-estimate-lp}. \begin{defi} For $\alpha\in(0,1)$ and $y\in\mathbb{R}$, we define \begin{equation}\label{equ-s-alpha-star-1} \underline{s}_{\,\alpha}(y):= \left\{ \begin{aligned} & \frac{1}{\alpha}\Big(y-\frac{1}{2\pi}\log\frac{1}{\alpha}-D_3\Big) & ~~~\text{if}\quad y\geq \frac{1}{2\pi}\log\frac{1}{\alpha}+D_3+1,\\ & e^{-D_5}\, e^{2\pi y}-3 & ~~~\text{if}\quad y< \frac{1}{2\pi}\log\frac{1}{\alpha}+D_3+1. \end{aligned} \right. \end{equation} \end{defi} \begin{lem}\label{lema-less-than-1} For each $\alpha\in\mathcal{B}_N$ and $\zeta\in\gamma_\alpha$, we have \begin{equation} \underline{s}_{\,\alpha}(\textup{Im}\, \zeta)\leq \textup{Im}\, s_\alpha(\zeta). \end{equation} \end{lem} \begin{proof} It follows from the proof of Lemma \ref{lema-key-esti-inverse} that $D_4=D_3+1$. Moreover, we choose $D_5=D_3$ in the proof of Lemma \ref{lema-key-esti-inverse}(a). Then this lemma follows immediately from Lemma \ref{lema-key-esti-inverse} and the definition of $\underline{s}_{\,\alpha}$. \end{proof} \begin{lem}\label{lem-subset-alpha-yy} We have $\mathcal{H}_N\subset\widetilde{\mathcal{H}}_N$. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{lem-subset-alpha}.
Suppose $\alpha\in\mathcal{H}_N\setminus\widetilde{\mathcal{H}}_N$ by contradiction. Since $\alpha\not\in\widetilde{\mathcal{H}}_N$, by the definition of $\widetilde{\mathcal{H}}_N$, there exist a point $\zeta\in\gamma_\alpha\setminus\{1\}$ and an infinite sequence $(n_k)_{k\in\mathbb{N}}$ such that \begin{equation}\label{equ-im-s-alpha-n} \textup{Im}\, \zeta_{n_k}<\widetilde{\mathcal{B}}(\alpha_{n_k+1}), \end{equation} where \begin{equation} \zeta_n:=s_{\alpha_n}\circ\cdots\circ s_{\alpha_1}(\zeta) \text{\quad for all } n\in\mathbb{N}. \end{equation} By the uniform contraction with respect to the hyperbolic metric as in the proofs of Proposition \ref{prop-tend-to-bdy} and Lemma \ref{lem-Jordan-arc}, there exists an integer $m\geq 1$ such that \begin{equation} \zeta_{m-1}\in\gamma_{m-1} \text{\quad and\quad} \textup{Im}\, \zeta_{m-1}\geq 2C_0, \end{equation} where $C_0>2M$ is a large number and $M\geq 1$ is introduced in the definition of $\widetilde{\mathcal{B}}(\alpha_n)$. Then by \eqref{equ-im-s-alpha-n} there exists an infinite subset $I'=I'(m,\zeta,\alpha)$ of $\mathbb{N}$ such that for all $k\in I'$, we have \begin{equation}\label{equ-not-in-H} \textup{Im}\, \zeta_{m+k-1}<\widetilde{\mathcal{B}}(\alpha_{m+k}). \end{equation} Since $\alpha\in\mathcal{H}_N$, by Theorem \ref{thm-Yoccoz}, there exists $k_0=k_0(m)\geq 1$ such that $r_{\alpha_{m+k_0-1}}\circ\cdots\circ r_{\alpha_m}(0)\geq \mathcal{B}(\alpha_{m+k_0})$. A direct calculation shows that for all $k\geq k_0$, one has \begin{equation}\label{equ-not-Herman-yy} r_{\alpha_{m+k-1}}\circ\cdots\circ r_{\alpha_m}(0)\geq \mathcal{B}(\alpha_{m+k}). \end{equation} Denote $x_{m-1}:=0$ and $y_{m-1}:=2C_0$.
For $k\geq 1$, we define \begin{equation} x_{m+k-1}:=r_{\alpha_{m+k-1}}\circ\cdots\circ r_{\alpha_m}(0) \text{\quad and\quad} y_{m+k-1}:=\underline{s}_{\,\alpha_{m+k-1}}\circ\cdots\circ \underline{s}_{\,\alpha_m}(2C_0), \end{equation} where $\underline{s}_{\,\alpha_n}$ is the map defined in \eqref{equ-s-alpha-star-1}. We claim that if $C_0$ is large enough, then \begin{equation}\label{equ-key-inequality-yy} 2\pi y_{m+k-1}\geq x_{m+k-1}+C_0 \text{\quad for all } k\geq 0. \end{equation} Assume that \eqref{equ-key-inequality-yy} holds temporarily. By Lemma \ref{lema-less-than-1}, we have $y_{m+k-1}\leq \textup{Im}\, \zeta_{m+k-1}$ for all $k\geq 1$. By \eqref{equ-not-in-H}, there exists an integer $k\in I'$ with $k\geq k_0$ such that \begin{equation} y_{m+k-1}\leq \textup{Im}\, \zeta_{m+k-1}< \widetilde{\mathcal{B}}(\alpha_{m+k})=\frac{\mathcal{B}(\alpha_{m+k})}{2\pi}+M. \end{equation} On the other hand, by \eqref{equ-not-Herman-yy}, we have $x_{m+k-1}\geq\mathcal{B}(\alpha_{m+k})$. However, by \eqref{equ-key-inequality-yy} we have $x_{m+k-1}\le 2\pi y_{m+k-1}-C_0<\mathcal{B}(\alpha_{m+k})$, which is a contradiction. Hence it suffices to prove the claim \eqref{equ-key-inequality-yy}. Obviously, \eqref{equ-key-inequality-yy} is true when $k=0$. Suppose $2\pi y_{m+k-1}\geq x_{m+k-1}+C_0$ for some $k\geq 0$. It suffices to obtain $2\pi y_{m+k}\geq x_{m+k}+C_0$. One can divide the arguments into three cases as in Lemma \ref{lem-subset-alpha}. We omit the details since the rest of the proof is exactly the same. \end{proof} \begin{rmk} In fact, if $\alpha\in\mathcal{H}_N$, then according to \cite{Ghy84} and \cite{Her85}, the boundary of the Siegel disk of each $f\in\mathcal{IS}_\alpha\cup\{Q_\alpha\}$ contains the unique critical value $-\frac{4}{27}$. This implies that $\alpha\in\widetilde{\mathcal{H}}_N$ by Proposition \ref{prop-equi-alpha}. Therefore in this way we also obtain $\mathcal{H}_N\subset\widetilde{\mathcal{H}}_N$.
\end{rmk} \begin{proof}[{Proof of the second part of the Main Theorem}] Let $\alpha\in\textup{HT}_N$ be an irrational number of sufficiently high type. By Lemmas \ref{lem-subset-alpha} and \ref{lem-subset-alpha-yy}, $\alpha\in\mathcal{H}_N$ if and only if $\alpha\in\widetilde{\mathcal{H}}_N$. By Proposition \ref{prop-equi-alpha}, $\alpha\in\widetilde{\mathcal{H}}_N$ if and only if $\textup{cv}=f(\textup{cp}_f)\in\partial\Delta_f$, where $\Delta_f$ is the Siegel disk of $f\in\mathcal{IS}_\alpha\cup \{Q_\alpha\}$ and $\textup{cp}_f$ is the unique critical point of $f$. Therefore, $\alpha\in\mathcal{H}_N$ if and only if $\textup{cp}_f\in\partial\Delta_f$. \end{proof} \appendix \section{Some calculations in Fatou coordinate planes}\label{sec-arc-straight} In this appendix we give the proof of Lemma \ref{lema-height} based on some estimates in \cite{IS08}. Let $0<\alpha<1/2$. Define \begin{equation} Y:=\Big\{w=x+ y\textup{i}\in\mathbb{C}:-\frac{1}{2\pi\alpha}\Big(\arccos\frac{\sqrt{3}}{2e^{2\pi\alpha y}}-\frac{\pi}{6}\Big)<x<\frac{2}{3\alpha} \text{ and } y>1\Big\} \end{equation} and $R:=\tfrac{4}{27}e^{3\pi}$ (see Figure \ref{Fig_Y-w}). \begin{figure} \caption{The domain $Y$ and its image under $e^{-2\pi\textup{i}\alpha w}$.} \label{Fig_Y-w} \end{figure} \begin{lem}\label{lema-calcu} There exists $\varepsilon'>0$ such that for all $f\in\mathcal{IS}_\alpha$ with $\alpha\in(0,\varepsilon']$, \begin{equation} \tau_f(Y)\subset\mathbb{D}(0,R)\setminus [0,R), \end{equation} where $\tau_f:\mathbb{C}\to\widehat{\mathbb{C}}\setminus\{0,\sigma_f\}$ is the universal covering defined in \eqref{equ-tau}.
\end{lem} \begin{proof} By a direct calculation, we have \begin{equation} \begin{split} \{e^{-2\pi\textup{i}\alpha w}: w\in Y\} = &~\Big\{\xi\in\mathbb{C}: |\xi|>e^{2\pi\alpha} \text{ and } -\frac{4\pi}{3}<\arg\xi<\arccos\frac{\sqrt{3}}{2|\xi|}-\frac{\pi}{6}\Big\}\\ = &~\mathbb{C}\setminus\Big(\overline{\mathbb{D}}(0,e^{2\pi\alpha})\cup\Big\{\xi\in\mathbb{C}:\frac{\pi}{3}\leq\arg\Big(\xi-\frac{1-\sqrt{3}\textup{i}}{2}\Big)\leq\frac{2\pi}{3}\Big\}\Big). \end{split} \end{equation} Since $4\pi\alpha/(3R)<e^{2\pi\alpha}-1$, we have (see Figure \ref{Fig_Y-w}) \begin{equation} e^{-2\pi\textup{i}\alpha w}\in \mathbb{C}\setminus\Big(\overline{\mathbb{D}}\big(1,\frac{4\pi\alpha}{3R}\big) \cup\Big\{\xi\in\mathbb{C}:\frac{\pi}{3}\leq\arg\big(\xi-1\big)\leq\frac{2\pi}{3}\Big\}\Big). \end{equation} This implies that \begin{equation}\label{equ-calcu-1} \frac{1}{1-e^{-2\pi\textup{i}\alpha w}}\in\mathbb{D}\Big(0,\frac{3R}{4\pi\alpha}\Big)\setminus\Big\{\xi\in\mathbb{C}:\frac{\pi}{3}\leq\arg\xi\leq\frac{2\pi}{3}\Big\}. \end{equation} Note that $\arcsin x\leq \tfrac{\pi}{3}x$ for $0\leq x\leq 1/2$. By \cite[Main Theorem 1(a)]{IS08}, $|f_0''(0)-4.91|\leq 1.14$ for all $f_0\in\mathcal{IS}_0$. Hence $|\arg f_0''(0)|<\arcsin\tfrac{1}{3}\leq\tfrac{\pi}{9}$ and \begin{equation} -\frac{4\pi\textup{i}\alpha}{f_0''(0)}\in \Big\{z\in\mathbb{C}:\frac{4\pi\alpha}{7}< |z|< \frac{8\pi\alpha}{7}\text{ and }\frac{25\pi}{18}<\arg z<\frac{29\pi}{18}\Big\}. \end{equation} By \eqref{equ-sigma-f} and the pre-compactness of $\mathcal{IS}_\alpha$, there exists a small $\varepsilon'>0$ such that for all $f\in\mathcal{IS}_\alpha$ with $\alpha\in(0,\varepsilon']$, we have \begin{equation}\label{equ-tau-est-2} \sigma_f\in\Big\{z\in\mathbb{C}:\frac{\pi\alpha}{2}< |z|< \frac{4\pi\alpha}{3}\text{ and }\frac{4\pi}{3}<\arg z<\frac{5\pi}{3}\Big\}.
\end{equation} By \eqref{equ-calcu-1} and \eqref{equ-tau-est-2} we have \begin{equation} \tau_f(w)=\frac{\sigma_f}{1-e^{-2\pi\textup{i}\alpha w}}\in\mathbb{D}(0,R)\setminus [0,R). \end{equation} The proof is complete. \end{proof} For each $C\geq 1$, we define a subset of $\mho$ (see \eqref{equ-E}): \begin{equation}\label{equ-mho-1} \mho_1(C):=\{\zeta\in\mathbb{C}:1/4<\textup{Re}\,\zeta< 7/4 \text{ and } \textup{Im}\,\zeta\geq C\}. \end{equation} \begin{lem}\label{lema-calcu-1} There exist $C> 1$ and $\varepsilon''>0$ such that for all $f\in\mathcal{IS}_\alpha$ with $\alpha\in(0,\varepsilon'']$, we have \begin{equation} L_f^{-1}(\overline{\mho_1(C)})\subset Y, \end{equation} where $L_f:\widetilde{\mathcal{P}}_f\to\mathbb{C}$ is the univalent map defined in \eqref{equ-L-f}. \end{lem} \begin{proof} Let $D_2>0$ be the constant introduced in Proposition \ref{prop-Cheraghi-L-f}. For $y>0$, we define \begin{equation} \varphi_1(y):=\log(2+\sqrt{y^2+(7/4)^2}). \end{equation} There exists a constant $C>0$ depending only on $D_2$ such that if $y\geq C$, then \begin{equation}\label{equ:y-2D-2} y-2D_2 \varphi_1(y)>1 \end{equation} and \begin{equation}\label{equ-y-arc} \frac{y}{2\pi}\Big(\arccos\frac{\sqrt{3}}{2e^{2\pi}}-\frac{\pi}{6}\Big)-D_2\varphi_1(y)>0. \end{equation} Let $0<\alpha\leq 1/C$. By Proposition \ref{prop-Cheraghi-L-f}, we have $L_f^{-1}(\overline{\mho_1(C)})\subset X_1\cup X_2\cup X_3$, where \begin{equation} \begin{split} X_1&~=\{x+\textup{i} y:-D_2\log(1+\tfrac{1}{\alpha})\leq x\leq D_2\log(1+\tfrac{1}{\alpha})+\tfrac{7}{4} \text{ and }y\geq \tfrac{1}{\alpha}\},\\ X_2&~=\{x+\textup{i} y:-D_2 \varphi_1(y)\leq x\leq D_2 \varphi_1(y)+\tfrac{7}{4} \text{ and }y\in[C,\tfrac{1}{\alpha}]\} \end{split} \end{equation} and \begin{equation} X_3=\{x+\textup{i} y:-D_2 \varphi_1(C)\leq x\leq D_2 \varphi_1(C)+\tfrac{7}{4} \text{ and }y\in[C-D_2\varphi_1(C),C]\}.
\end{equation} For $y>0$, we define a continuous function \begin{equation} \phi(y):=\frac{1}{2\pi\alpha}\Big(\arccos\frac{\sqrt{3}}{2e^{2\pi\alpha y}}-\frac{\pi}{6}\Big). \end{equation} Note that $\alpha\log(1+1/\alpha)$ is uniformly bounded above for $0<\alpha<1$. There exists a constant $\kappa_1>0$ depending only on $D_2$ such that if $\alpha\in(0, \kappa_1]$, then for $y\geq 1/\alpha$, \begin{equation} \phi(y)-D_2\log\Big(1+\frac{1}{\alpha}\Big)\geq \frac{1}{2\pi\alpha}\Big(\arccos\frac{\sqrt{3}}{2e^{2\pi}}-\frac{\pi}{6}\Big)-D_2\log\Big(1+\frac{1}{\alpha}\Big)>0. \end{equation} For $y\in[C,1/\alpha]$, we denote $t=2\pi\alpha y\in[2\pi\alpha C,2\pi]$. Then \begin{equation} \phi(y)-D_2\varphi_1(y)=y\psi(t)-D_2\varphi_1(y), \end{equation} where \begin{equation}\label{equ-psi-t} \psi(t):=\frac{1}{t}\Big(\arccos\frac{\sqrt{3}}{2e^t}-\frac{\pi}{6}\Big). \end{equation} A direct calculation shows that $\psi(t)$ is decreasing on $(0,2\pi]$. By \eqref{equ-y-arc} we have \begin{equation} \phi(y)-D_2\varphi_1(y)\geq \frac{y}{2\pi}\Big(\arccos\frac{\sqrt{3}}{2e^{2\pi}}-\frac{\pi}{6}\Big)-D_2\varphi_1(y)>0. \end{equation} Finally, let $y\in[C-D_2\varphi_1(C),C]$ and we still denote $t=2\pi\alpha y$. A direct calculation shows that $\lim_{t\to 0^+}\psi(t)=\sqrt{3}$, where $\psi$ is defined in \eqref{equ-psi-t}. By \eqref{equ:y-2D-2}, there exists a constant $\kappa_2>0$ depending only on $D_2$ such that if $\alpha\in(0, \kappa_2]$, then for $y\in[C-D_2\varphi_1(C),C]$ we have \begin{equation} \phi(y)-D_2\varphi_1(C)\geq y-D_2\varphi_1(C)\geq (C-D_2\varphi_1(C))-D_2\varphi_1(C)>1. \end{equation} Let $\kappa_3>0$ be a constant depending only on $D_2$ such that $D_2\varphi_1(\tfrac{1}{\alpha})+\tfrac{7}{4}<\tfrac{2}{3\alpha}$ for all $\alpha\in(0,\kappa_3]$. The proof is finished if we set $\varepsilon'':=\min\{1/C,\kappa_1,\kappa_2,\kappa_3\}$. 
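As a quick numerical sanity check (not part of the proof), the monotonicity of $\psi$ on $(0,2\pi]$ and the limit $\psi(t)\to\sqrt{3}$ as $t\to 0^+$ can be verified in a few lines; the grid below is an arbitrary choice:

```python
import math

def psi(t):
    # psi(t) = (arccos(sqrt(3)/(2 e^t)) - pi/6) / t, as in the text
    return (math.acos(math.sqrt(3) / (2 * math.exp(t))) - math.pi / 6) / t

# psi is strictly decreasing on (0, 2*pi] ...
ts = [0.01 * k for k in range(1, 629)]          # grid of points in (0, 2*pi]
vals = [psi(t) for t in ts]
assert all(a > b for a, b in zip(vals, vals[1:]))

# ... and psi(t) -> sqrt(3) as t -> 0+
assert abs(psi(1e-8) - math.sqrt(3)) < 1e-6
```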
\end{proof} \begin{proof}[{Proof of Lemma \ref{lema-height}}] For $f_0\in\mathcal{IS}_0$, one can define $\mathcal{C}_{f_0}$ and $\mathcal{C}_{f_0}^\sharp$ as in \eqref{defi-C-f-alpha} similarly (replacing $\mathcal{P}_f$ and $\Phi_f$ by $\mathcal{P}_{attr,f_0}$ and $\Phi_{attr,f_0}$). We first show that \eqref{equ-M-c-subset} holds for $f_0\in\mathcal{IS}_0$ and then use a continuity argument. The Main Theorem 1 in \cite{IS08} was proved by transferring the parabolic fixed point $0$ of $f_0\in\mathcal{IS}_0$ to $\infty$, and a class corresponding to $\mathcal{IS}_0$ was defined (see \cite[\S 5.A]{IS08}): \begin{equation} \mathcal{IS}_0^Q:= \left\{F=Q\circ\varphi^{-1} \left| \begin{array}{l} \varphi:\widehat{\mathbb{C}}\setminus E\to \widehat{\mathbb{C}}\setminus\{0\} \text{ is univalent}, \\ \varphi(\infty)=\infty \text{ and } \varphi'(\infty)=1 \end{array} \right. \right\}, \end{equation} where $E$ is the ellipse defined in \eqref{ellipse} and $Q(z)=z(1+\frac{1}{z})^6/(1-\frac{1}{z})^4$ is a parabolic map. Each map in this class has a parabolic fixed point at $\infty$, a critical point at $\textup{cp}_F:=\varphi(5+2\sqrt{6})$ and a critical value at $\textup{cv}_Q=27$ which is independent of $F$. By \cite[Lemma 5.14(a)]{IS08}, $P$ and $Q$ are related by $Q=\psi_0^{-1}\circ P\circ \psi_1$, where $\psi_1(z)=-4z/(1+z)^2$ is defined in \eqref{U-and-psi-1} and $\psi_0(z)=-4/z$. By \cite[Proposition 5.3(c)]{IS08}, there exists a one-to-one correspondence between $\mathcal{IS}_0$ and $\mathcal{IS}_0^Q$. For $F\in\mathcal{IS}_0^Q$, one has natural definitions of the attracting petal $\mathcal{P}_{attr,F}$, repelling petal $\mathcal{P}_{rep,F}$, attracting Fatou coordinate $\Phi_{attr,F}$ and repelling Fatou coordinate $\Phi_{rep,F}$, etc., based on the definitions relating to $f_0\in\mathcal{IS}_0$ in \S\ref{subsec-IS-class}. For example, the attracting Fatou coordinate of $F$ is defined as $\Phi_{attr,F}(z)=\Phi_{attr,f_0}\circ\psi_0(z)$.
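As an aside, the critical data of $Q$ quoted above are easy to confirm numerically (a sanity check, not part of the argument):

```python
import math

# Q(z) = z (1 + 1/z)^6 / (1 - 1/z)^4, with critical point 5 + 2*sqrt(6)
# and critical value 27, as stated in the text.
def Q(z):
    return z * (1 + 1 / z)**6 / (1 - 1 / z)**4

cp = 5 + 2 * math.sqrt(6)
assert abs(Q(cp) - 27) < 1e-9                 # critical value is exactly 27

h = 1e-6                                      # Q'(cp) = 0, checked by a
fd = (Q(cp + h) - Q(cp - h)) / (2 * h)        # central finite difference
assert abs(fd) < 1e-4
```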
For $f_0\in\mathcal{IS}_0$, we define a topological triangle \begin{equation} \mathcal{Q}_{f_0}:=\{z\in\mathcal{P}_{attr,f_0}:\Phi_{attr,f_0}(z)\in \mho\}. \end{equation} In order to prove \eqref{equ-M-c-subset}, it is convenient to work in the corresponding dynamical plane of $F=\psi_0^{-1}\circ f_0\circ\psi_0\in\mathcal{IS}_0^Q$. Define \begin{equation} D_{0,F}:=\{z\in\mathcal{P}_{attr,F}:0<\textup{Re}\,\Phi_{attr,F}(z)< 1 \text{~and~} \textup{Im}\,\Phi_{attr,F}(z)>-2\} \end{equation} and $D_{1,F}:=F(D_{0,F})$. By \cite[Proposition 5.7(e)]{IS08}, for $z\in \overline{D}_{0,F}$ we have \begin{equation} |z|\geq 0.05>27\,e^{-3\pi} \text{\quad and\quad} z\not\in \mathbb{R}_-. \end{equation} By \cite[Proposition 5.6(b)]{IS08}, for $z\in \overline{D}_{1,F}$ we have \begin{equation} |z|\geq \tfrac{25}{\sqrt{3}}\sin\tfrac{\pi}{3}=\tfrac{25}{2}>27\,e^{-3\pi} \text{\quad and\quad} z\not\in \mathbb{R}_-. \end{equation} Let $R=\tfrac{4}{27}e^{3\pi}$. We have \begin{equation}\label{equ-omega-new} \overline{D}_{0,F}\cup \overline{D}_{1,F}\subset \psi_0^{-1}\big(\mathbb{D}(0,R)\setminus [0,R)\big)=\mathbb{C}\setminus\big(\overline{\mathbb{D}}(0,27\,e^{-3\pi})\cup\mathbb{R}^-\big). \end{equation} By the definition of $\mathcal{Q}_{f_0}$, we have \begin{equation} \psi_0^{-1}(\mathcal{Q}_{f_0})=\{z\in\mathcal{P}_{attr,F}:1/4<\textup{Re}\,\Phi_{attr,F}(z)< 7/4\ \text{~and~} \textup{Im}\,\Phi_{attr,F}(z)> -2\}. \end{equation} Therefore, by \eqref{equ-omega-new} we have $\psi_0^{-1}(\overline{\mathcal{Q}}_{f_0}\setminus\{0\})\subset \overline{D}_{0,F}\cup \overline{D}_{1,F}$. This implies that \begin{equation}\label{equ:MQ-f0} \overline{\mathcal{Q}}_{f_0}\setminus\{0\}\subset \mathbb{D}(0,R)\setminus [0,R) \text{\quad for all } f_0\in\mathcal{IS}_0. \end{equation} Let $C>1$ be the constant introduced in Lemma \ref{lema-calcu-1} and $\mho_1=\mho_1(C)$ be defined in \eqref{equ-mho-1}.
By Lemmas \ref{lema-calcu} and \ref{lema-calcu-1}, for every $f\in\mathcal{IS}_\alpha$ with $0<\alpha\leq \min\{\varepsilon',\varepsilon''\}$, we have \begin{equation} \Phi_f^{-1}(\overline{\mho}_1)=\tau_f\circ L_f^{-1}(\overline{\mho}_1)\subset \mathbb{D}(0,R)\setminus [0,R). \end{equation} Define \begin{equation} \mho_2:=\overline{\mho\setminus\mho_1}=\{\zeta\in\mathbb{C}:1/4\leq\textup{Re}\,\zeta\leq 7/4 \text{ and } -2\leq\textup{Im}\,\zeta\leq C\}. \end{equation} By \eqref{equ:MQ-f0}, the continuity of the Fatou coordinates in Proposition \ref{prop-BC-prop-12}(d) (see also \cite[Proposition 3.2.2]{Shi00a}) and the pre-compactness of $\mathcal{IS}_0$, there exists a constant $0<\varepsilon_4'\leq \min\{\varepsilon',\varepsilon''\}$ such that for all $f\in\mathcal{IS}_\alpha$ with $\alpha\in(0,\varepsilon_4']$, we have $\Phi_f^{-1}(\mho_2)\subset \mathbb{D}(0,R)\setminus [0,R)$ and hence $\overline{\mathcal{Q}}_f\setminus\{0\}=\Phi_f^{-1}(\overline{\mho})\subset\mathbb{D}(0,R)\setminus [0,R)$. \end{proof} \end{document}
\begin{document} \rightline{\Huge \bf WHAT IS ... a Graphon?} \vskip1cm \rightline{\it \huge Daniel Glasscock} \vskip1in \begin{multicols}{2} \noindent Large graphs are ubiquitous in mathematics, and describing their structure is an important goal of modern combinatorics. One way to study large, finite objects is to pass from sequences of larger and larger such objects to ideal limiting objects. Done properly, properties of the limiting objects reflect properties of the finite objects which approximate them, and vice versa. Graphons, short for graph functions, are the limiting objects for sequences of large, finite graphs with respect to the so-called cut metric. They were introduced and developed by C. Borgs, J. T. Chayes, L. Lov\'{a}sz, V. T. S\'{o}s, B. Szegedy, and K. Vesztergombi in \cite{lovasz2008} and \cite{lovasz2006}. Graphons arise naturally wherever sequences of large graphs appear: extremal graph theory, property testing of large graphs, quasi-random graphs, random networks, et cetera. Let's begin with some definitions and a motivating example. A \emph{graph} $G$ is a set of vertices $V(G)$ and a set of edges $E(G)$ between the vertices (excluding loops and multiple edges). A \emph{graph homomorphism} from $H$ to $G$ is a map $\varphi$ from $V(H)$ to $V(G)$ that preserves edge adjacency; that is, for every edge $\{v,w\}$ in $E(H)$, the edge $\{\varphi(v),\varphi(w)\}$ is in $E(G)$. Denote by $\hom (H,G)$ the number of homomorphisms from $H$ to $G$. For example, $\hom(\vcenter{\hbox{\includegraphics[scale=0.1]{dot.pdf}}},G) = |V(G)|$, $\hom(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G) = 2 |E(G)|$, and $\hom(\vcenter{\hbox{\includegraphics[scale=0.1]{tri.pdf}}},G)$ is 6 times the number of triangles in $G$. Normalizing by the total number of possible maps, we get the \emph{homomorphism density} of $H$ into $G$, \[t(H,G) = \frac{\hom(H,G)}{|V(G)|^{|V(H)|}},\] the probability that a randomly chosen map from $V(H)$ to $V(G)$ preserves edge adjacency.
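For small graphs these quantities are easy to compute by brute force; a minimal sketch (the encoding of a graph as a vertex count plus an edge list is ours, chosen for illustration):

```python
from itertools import product

def hom_count(H, G):
    """Count homomorphisms from H to G; a graph is (n, edges) with
    vertices 0..n-1 and edges given as pairs."""
    (nH, EH), (nG, EG) = H, G
    edge_set = {frozenset(e) for e in EG}
    total = 0
    for phi in product(range(nG), repeat=nH):       # every vertex map
        if all(frozenset((phi[u], phi[v])) in edge_set for (u, v) in EH):
            total += 1
    return total

def t(H, G):
    """Homomorphism density t(H, G) = hom(H, G) / |V(G)|^|V(H)|."""
    return hom_count(H, G) / G[0] ** H[0]

edge = (2, [(0, 1)])
triangle = (3, [(0, 1), (1, 2), (0, 2)])

# For G a triangle: hom(edge, G) = 2|E(G)| = 6, and hom(triangle, G) = 6.
assert hom_count(edge, triangle) == 6
assert hom_count(triangle, triangle) == 6
assert abs(t(edge, triangle) - 2 / 3) < 1e-12
```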
The homomorphism density also represents the density of $H$ as a subgraph in $G$ asymptotically as $n = |V(G)| \rightarrow \infty$. For example, $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G) = 2 |E(G)| / n^2$ while the density of edges in $G$ is $2 |E(G)| / n(n - 1)$; these two expressions are nearly the same when $n$ is large. Consider the following problem from extremal graph theory: \begin{center} \emph{How many 4-cycles must there be in a graph with edge density at least $1/2$?} \end{center} It is easy to see that there are at most on the order of $n^4$ 4-cycles in any graph; a theorem of Erd\H{o}s gives that graphs with at least half the number of possible edges have \emph{at least} on the order of $n^4$ 4-cycles. More specifically, for any graph $G$, \[t(\vcenter{\hbox{\includegraphics[scale=0.1]{4cycle.pdf}}},G) \geq t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G)^4,\] meaning that if $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G) \geq 1/2$, then $t(\vcenter{\hbox{\includegraphics[scale=0.1]{4cycle.pdf}}},G) \geq 1/16$. In light of this, the problem may be reformulated as a minimization problem: \emph{Minimize $t(\vcenter{\hbox{\includegraphics[scale=0.1]{4cycle.pdf}}},G)$ over finite graphs $G$ satisfying $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G) \geq 1/2$}. With some work, it may be shown that no finite graph $G$ with $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G) \geq 1/2$ achieves the minimum $t(\vcenter{\hbox{\includegraphics[scale=0.1]{4cycle.pdf}}},G) = 1/16$. It's useful at this point to draw an analogy with a problem from elementary analysis: \emph{Minimize $x^3 - 6x$ over rational numbers $x$ satisfying $x \geq 0$}. This polynomial has a unique minimum on $x \geq 0$ at $x = \sqrt 2$, so the best we may do over the rationals is show that the polynomial achieves values approaching this minimum along a sequence of rationals approaching $\sqrt 2$.
We know well to avoid this complication by completing the rational numbers to the reals and realizing the limit of such a sequence as $\sqrt 2$. There is a sequence of finite graphs with edge density at least 1/2 and 4-cycle density approaching 1/16. Let $R_n$ be an instance of a random graph on $n$ vertices where each edge is decided independently with probability $1/2$. Throwing away those $R_n$'s for which $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},R_n) < 1/2$, the 4-cycle density in the remaining graph sequence limits to 1/16 almost surely. Following the $\sqrt 2$ analogy, we should look to realize the limit of this sequence of finite graphs and understand how it solves the minimization problem at hand. What might the limit of the sequence of random graphs $(R_n)_n$ be? From the adjacency matrix of a labeled graph, construct the graph's \emph{pixel picture} by turning the 1's into black squares, erasing the 0's, and scaling to the unit square $[0,1]^2$. {\centering \begin{tabular}{ccc} & & \\ $\left(\rule{0cm}{.8cm}\right.$ \kern-.5em {\small \begin{tabular}{p{.09cm}p{.09cm}p{.09cm}p{.09cm}} 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{tabular}} \kern-.5em $\left.\rule{0cm}{.8cm}\right)$ & $\boldsymbol{\longrightarrow}$ & $\vcenter{\hbox{\includegraphics[scale=.39]{4cyclepixel.pdf}}}$ \\ & & \end{tabular}\par} \noindent Pixel pictures may be seen to ``converge'' graphically; those of larger and larger random graphs with edge probability 1/2, regardless of how they are labeled, seem to converge to a gray square, the constant 1/2 function on $[0,1]^2$. \vskip8pt \begin{center}\includegraphics[scale=.39]{randomlimit.pdf}\end{center} \vskip5pt The constant 1/2 function on $[0,1]^2$ is an example of a labeled graphon. 
A \emph{labeled graphon} is a symmetric, Lebesgue-measurable function from $[0,1]^2$ to $[0,1]$ (modulo the usual identification almost everywhere); they may be thought of as edge-weighted graphs on the vertex set $[0,1]$. An \emph{unlabeled graphon} is a graphon up to re-labeling, where a \emph{re-labeling} is the result of applying an invertible, measure-preserving transformation to the $[0,1]$ interval. Note that any pixel picture is a labeled graphon, meaning that (labeled) graphs are (labeled) graphons. As another example of this convergence, consider the \emph{growing uniform attachment} graph sequence $(G_n)_n$ defined inductively as follows. Let $G_1 = \vcenter{\hbox{\includegraphics[scale=0.1]{dot.pdf}}}$. For $n \geq 2$, construct $G_n$ from $G_{n-1}$ by adding one new vertex, then, considering each pair of non-adjacent vertices in turn, drawing an edge between them with probability $1/n$. This sequence almost surely limits to the graphon $1-\max(x,y)$. (Matrices are indexed with $(0,0)$ in the top left corner, and so too are graphons.) \vskip8pt \begin{center}\includegraphics[scale=.39]{gualimit.pdf}\end{center} \vskip5pt There are two natural ways to label a complete bipartite graph, and each suggests a different limit graphon for the complete bipartite graph sequence. Both sequences of labeled graphons in fact have the same limit, as indicated in the diagram; the reader is encouraged to return to this example after we define this convergence more precisely. \vskip8pt \begin{center}\includegraphics[scale=.39]{bipartlimit3.pdf}\end{center} \vskip5pt Homomorphism densities extend naturally to graphons. For a finite graph $G$, the density $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},G)$ may be computed by giving each vertex of $G$ a mass of $1/n$ and integrating the edge indicator function over all pairs of vertices.
In exactly the same way, the edge density $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},W)$ of a labeled graphon $W$ is \[\int_{[0,1]^2} W(x,y) \ dx dy,\] and the 4-cycle density $t(\vcenter{\hbox{\includegraphics[scale=0.1]{4cycle.pdf}}},W)$ is \begin{align*} \int_{[0,1]^4} &W(x_1,x_2)W(x_2,x_3) \quad \\ & \quad W(x_3,x_4)W(x_4,x_1) \ dx_1 dx_2 dx_3 dx_4. \end{align*} It is straightforward from here to write down the expression for the homomorphism density $t(H,W)$ of a finite graph $H$ into a graphon $W$. This allows us to see how the constant graphon $W \equiv 1/2$ solves the minimization problem: $t(\vcenter{\hbox{\includegraphics[scale=0.1]{edge.pdf}}},W) = 1/2$ while $t(\vcenter{\hbox{\includegraphics[scale=0.1]{4cycle.pdf}}},W) = 1/16$. To see the space of graphons as the completion of the space of finite graphs and make graphon convergence precise, define the \emph{cut distance} $\delta_\square (W,U)$ between two labeled graphons $W$ and $U$ by \begin{align*} \inf_{\varphi, \psi} \ \sup_{S, T} \bigg| \ \int \limits_{S \times T} & W\big(\varphi (x), \varphi (y)\big) \quad \\ & \quad - U \big(\psi(x),\psi(y) \big) \ dx dy \ \bigg|, \end{align*} where the infimum is taken over all re-labelings $\varphi$ of $W$ and $\psi$ of $U$, and the supremum is taken over all measurable subsets $S$ and $T$ of $[0,1]$. The cut distance first measures the maximum discrepancy between the integrals of two labeled graphons over measurable boxes (hence the $\square$) of $[0,1]^2$, then minimizes that maximum discrepancy over all possible re-labelings. (It is possible to define the cut distance between two finite graphs combinatorially, without any analysis, but the definition is quite involved.) The infimum in the definition of the cut distance makes it well defined on the space of unlabeled graphons, but it is not yet a metric. 
Graphons $W$ and $U$ for which $t(H,W) = t(H,U)$ for all finite graphs $H$ are called \emph{weakly isomorphic}; it turns out that $W$ and $U$ are weakly isomorphic if and only if $\delta_\square (W,U) = 0$. The cut distance becomes a genuine metric on the space $\mathcal{G}$ of unlabeled graphons up to weak isomorphism. The examples of pixel picture convergence above provide examples of convergent sequences and their limits in $\mathcal{G}$ (up to weak isomorphism). We conclude by highlighting some fundamental results on graphons. \noindent \textbf{Theorem 1} \quad \emph{Every graphon is the $\delta_\square$-limit of a sequence of finite graphs.} To approximate a labeled graphon $W$ by a finite labeled graph, let $S$ be a set of $n$ randomly chosen points from $[0,1]$, then construct a graph on $S$ where the edge $\{s_i,s_j\}$ is included with probability $W(s_i,s_j)$. With high probability (as $|S| \to \infty$), this labeled graph approximates $W$ well in cut distance. \noindent \textbf{Theorem 2} \quad \emph{The space $(\mathcal{G},\delta_\square)$ is compact.} This implies that $\mathcal{G}$ is complete; combining this fact with Theorem 1, we see that the space of graphons is the completion of the space of finite graphs with the cut metric! This theorem also demonstrates how graphons provide a bridge between different forms of Szemer\'{e}di's Regularity Lemma: Theorem 2 may be deduced from a weak form of the lemma, while a stronger regularity lemma follows from the compactness of $\mathcal{G}$. \noindent \textbf{Theorem 3} \quad \emph{For every finite graph $H$, the map $t(H,\cdot): \mathcal{G} \to [0,1]$ is Lipschitz continuous.} Theorems 2 and 3 combine with elementary analysis to show that minimization problems in extremal graph theory (such as the one considered above) are guaranteed to have solutions in the space of graphons. These graphon solutions provide a ``template'', via Theorem 1, for approximate solutions in the space of finite graphs. 
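The sampling construction described after Theorem 1 (a so-called $W$-random graph) takes only a few lines to sketch, and its edge density does concentrate near the graphon's edge density, here an ordinary integral approximated by a midpoint rule. The function names are ours:

```python
import random

def t_edge(W, n=200):
    """Edge density t(edge, W) of a labeled graphon, by a midpoint rule."""
    pts = [(i + 0.5) / n for i in range(n)]
    return sum(W(x, y) for x in pts for y in pts) / n**2

def w_random_graph(W, n, rng):
    """Theorem 1's sampling step: choose x_1, ..., x_n uniformly in [0,1],
    then join i ~ j independently with probability W(x_i, x_j)."""
    xs = [rng.random() for _ in range(n)]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < W(xs[i], xs[j])}

W = lambda x, y: 1 - max(x, y)         # the uniform-attachment limit graphon
assert abs(t_edge(W) - 1 / 3) < 1e-3   # integral of 1 - max(x, y) is 1/3

rng = random.Random(0)
G = w_random_graph(W, 300, rng)
sampled_density = 2 * len(G) / (300 * 299)
# with high probability the sample's edge density is close to t(edge, W)
assert abs(sampled_density - 1 / 3) < 0.1
```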
The interested reader is encouraged to consult L. Lov\'{a}sz's book \cite{lovasz} for more! \end{multicols} \end{document}
\begin{document} \title{Quantum repeater without Bell measurements in double quantum dot systems} \author{Xiao-Feng Yi} \email[Corresponding author: ]{[email protected]} \author{Peng Xu} \author{Qi Yao} \affiliation{School of Physics and Technology, Wuhan University, Wuhan 430072, China} \author{Xianfu Quan} \affiliation{State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China} \affiliation{University of Chinese Academy of Sciences, Beijing 10079, China} \date{\today} \begin{abstract} We propose a Bell-measurement-free scheme to implement a quantum repeater in GaAs/AlGaAs double quantum dot systems. We prove that four pairs of double quantum dots compose an entanglement unit, provided that the initial states are singlet states. Our scheme differs from the famous Duan-Lukin-Cirac-Zoller (DLCZ) protocol in that Bell measurements are unnecessary for the entanglement swapping, which provides great advantages and conveniences in experimental implementations. Our scheme significantly improves the success probability of quantum repeaters based on solid-state quantum devices. \end{abstract} \maketitle \section{\label{sec:levell}INTRODUCTION} The quantum repeater is a basic building block in quantum communication, quantum computing, and quantum teleportation. After the original idea of the quantum repeater by Briegel {\it et al.}~\cite{PhysRevLett.81.5932} in 1998, Duan {\it et al.}~\cite{Duan2001Long} presented the widely adopted DLCZ scheme, which is based on atomic ensembles and linear optics with many Bell measurements. Shortly afterwards, the robustness of a quantum repeater was checked by Zhao {\it et al.}~\cite{PhysRevLett.98.240502} and Jiang {\it et al.}~\cite{PhysRevA.76.012301}. Numerous quantum repeater schemes followed, further enhancing the noise resistance. In all these schemes, photons serve as the information carrier and Bell measurements are required.
As in the DLCZ scheme, the success probability of entanglement swapping is 50\%~\cite{PhysRevLett.80.3891}. To improve the success probability, entanglement purification is also adopted~\cite{Pan2000Entanglement}. Many physical systems serve as candidates for realizing quantum information processing~\cite{PhysRevLett.83.4204}, for example, trapped ions, quantum dots (QDs), photons, neutral cold atoms in optical lattices, nitrogen-vacancy centers~\cite{PhysRevA.95.052336} in diamond, and superconducting qubits~\cite{Ladd2010Quantum}. Among these systems, we focus on semiconductor QDs with the aid of optical cavities. A QD is often called an artificial atom; it consists of electrons or holes confined in a potential well. Many materials may form QDs~\cite{RevModPhys.79.1217}, such as semiconductors (InAs or GaAs) and graphene. In this paper we specifically consider semiconductor QDs constructed from heterostructures of GaAs and AlGaAs grown with the molecular beam epitaxy technique~\cite{cho1975molecular}. QDs have been employed to realize a quantum repeater. In 2014, C. Wang {\it et al.} presented a scheme to construct a quantum repeater based on QDs~\cite{Wang2014Construction}. In 2015, Jianping Wang {\it et al.} experimentally achieved a scalable entangled-photon source with self-assembled QDs~\cite{PhysRevLett.115.067401}, which raises the hope of implementing a quantum repeater in QDs. However, these schemes are based on photons and Bell measurements, and entanglement between two distant QDs has still not been realized~\cite{Mcmahon2015Towards, PhysRevB.90.235421}. In this paper, we present an alternative quantum repeater scheme based on double quantum dots (DQDs). Different from previous proposals, our scheme employs the product of local measurements on QD electron spins instead of Bell measurements of photons.
Such local measurements are easier to implement than Bell measurements in QD experiments. Our scheme consists of three steps: (i) preparation of entangled pairs in DQDs; (ii) entanglement swapping between 4 QDs, with the help of optical cavities which couple the adjacent DQDs~\cite{RevModPhys.81.865, petta2005coherent, PhysRevLett.85.2392, Chen2006Scheme, Bouwmeester1997Experimental}; (iii) extension of the entanglement distance many times to realize a quantum repeater, as shown in Fig.~\ref{fig:figure1}(a). \begin{figure} \caption{(a) Quantum circuit diagram of entanglement swapping. The initial state for the two pairs of DQDs is a singlet state $|\beta_{11}\rangle$.} \label{fig:figure1} \end{figure} \section{\label{sec:level2}Quantum repeater based on DQDs} \subsection{Preparation of initial $(1,1)$ singlet states} We consider a quantum repeater implemented with many DQDs, each initialized in a $(1,1)$ singlet state (a Bell state). Figure~\ref{fig:figure1}(b) is a schematic diagram of a typical DQD. To prepare the initial singlet state in a DQD, as shown in Fig.~\ref{fig:figure3}(a), we consider two electrons in the DQD. The charge configurations include $(0,2)$ and $(1,1)$, in which the two electrons are both in the right QD or one in each QD, respectively. For the $(0,2)$ charge configuration with a large bias (detuning) $\Delta$, the ground state of the DQD is the singlet state $(0,2)_S$ due to the Pauli exclusion principle, as shown in Fig.~\ref{fig:figure2}(a). For the $(1,1)$ charge configuration at zero bias $\Delta$, two kinds of spin states are possible: a singlet state $(1,1)_S$ and three triplet states $(1,1)_T$. The bias $\Delta$ can be adjusted in experiments by the gate voltages on gates L and R.
By adiabatically and appropriately lowering the bias $\Delta$ in a magnetic field which splits the triplet states of the electrons, as shown in Fig.~\ref{fig:figure2}(b), one reaches the singlet state in which the two electron spins, one in each QD, are entangled. This process has been successfully demonstrated in experiments~\cite{petta2005coherent}, and the probability of $(1,1)_S$ can be detected by employing a quantum point contact (QPC) sensor. \begin{figure} \caption{(a) Preparation diagram of a singlet state ${(1,1)_S}$.} \label{fig:figure2} \end{figure} \subsection{\label{sec:level2}Entanglement swapping in a pair of DQDs} \begin{figure} \caption{(a) Entanglement swapping in two pairs of DQDs. The QDs 2 and 3 are coupled within a cavity. Entanglement is established between QD 1 and QD 8 (b) and between QD 1 and QD 16 (c).} \label{fig:figure3} \end{figure} After the step of preparing many $(1,1)_S$ DQD singlet states, which means we have many locally entangled electron spin pairs, we need to extend the entanglement distance. As a second step, we extend the entanglement distance from a single DQD to a pair of DQDs, i.e., we build entanglement between the electron spins in QD 1 and QD 4. We choose the well-known entanglement swapping method, which can be achieved by coupling the middle QDs 2 and 3 equally to a single-mode cavity~\cite{PhysRevLett.85.2392}, as shown in Fig.~\ref{fig:figure3}(a). This is the famous Tavis-Cummings model (TCM)~\cite{PhysRev.170.379}, whose Hamiltonian is: \begin{equation} \label{equ1} H={\omega}a^{\dag}a+\frac{1}{2}\omega_2\sigma_{2Z}+\frac{1}{2}\omega_3\sigma_{3Z} +\sum\limits_{i=2,3}g(a^{\dag}\sigma_i^- + a\sigma_i^+) \end{equation} where $\omega$ is the single-mode frequency of the cavity, $\omega_2$ and $\omega_3$ are the Zeeman splittings of the electron spins in QD 2 and QD 3, respectively (we assume $\omega_2=\omega_3$), and $g$ is the coupling strength between the QDs and the cavity.
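As an illustration of Eq.~(\ref{equ1}), the TCM Hamiltonian can be written down numerically with a truncated cavity; the parameter values below are illustrative placeholders, not fitted to GaAs DQDs:

```python
import numpy as np

# Sketch of the Tavis-Cummings Hamiltonian of Eq. (1): one cavity mode
# (truncated to n_ph Fock states) coupled to the spins in QDs 2 and 3.
n_ph, omega, omega_q, g = 4, 1.0, 1.2, 0.05

a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)    # cavity annihilation operator
sz = np.diag([1.0, -1.0])                      # sigma_z in the |e>, |g> basis
sp = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma^+ = |e><g|
sm = sp.T                                      # sigma^- = |g><e|
I2, Ic = np.eye(2), np.eye(n_ph)

def kron3(A, B, C):
    """Operator acting on cavity (x) QD2 (x) QD3."""
    return np.kron(A, np.kron(B, C))

H = (omega * kron3(a.T @ a, I2, I2)
     + 0.5 * omega_q * (kron3(Ic, sz, I2) + kron3(Ic, I2, sz))
     + g * (kron3(a.T, sm, I2) + kron3(a, sp, I2)
            + kron3(a.T, I2, sm) + kron3(a, I2, sp)))

# H is Hermitian and commutes with the total excitation number
# N = a^dag a + |e><e|_2 + |e><e|_3, the hallmark of the RWA coupling.
N = (kron3(a.T @ a, I2, I2)
     + kron3(Ic, sp @ sm, I2) + kron3(Ic, I2, sp @ sm))
assert np.allclose(H, H.conj().T)
assert np.allclose(H @ N, N @ H)
```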
The annihilation operator of the cavity mode is $a$, and $\sigma _i^ + = {\left| e \right\rangle _{ii}}\left\langle g \right|$ and $\sigma _i^ - = {\left| g \right\rangle _{ii}}\left\langle e \right|$, where ${\left| g \right\rangle _i}$ and ${\left| e \right\rangle _i}$ are the ground state and excited state of the $i$th QD, respectively. We have set $\hbar = 1$. In the interaction picture defined by $H_0 = {\omega}a^{\dag}a+\frac{1}{2}\omega_2\sigma_{2Z} + \frac{1}{2}\omega_3\sigma_{3Z}$, the Hamiltonian of the system under the rotating wave approximation can be written as \begin{equation} \label{equ2} {H_i} = g\sum\limits_{i = 2,3} {({e^{ - i\Delta t}}{a^\dag }\sigma _i^ - + \mathrm{H.c.})}, \end{equation} where $\Delta$ is the detuning between the cavity frequency $\omega$ and the QD transition frequency $\omega _i$. There is no energy exchange between the cavity and the QDs in the case $\Delta \gg g$. By further employing the Nakajima transformation~\cite{PhysRevLett.85.2392}, we obtain the effective Hamiltonian \begin{equation} \label{equ3} \begin{split} H = \lambda [\sum\limits_{i = 2,3} {({{\left| e \right\rangle }_{ii}\left\langle e \right|a{a^\dag }} - {{\left| g \right\rangle }_{ii}\left\langle g \right|{a^\dag }a)}}\\ + (\sigma _2^ + \sigma _3^ - + \sigma _3^ + \sigma _2^ - )] \end{split} \end{equation} with $\lambda = {g^2}/{\Delta }$. Supposing that the cavity is prepared in a vacuum state, the effective Hamiltonian reduces to~\cite{PhysRevLett.85.2392} \begin{equation} \label{equ4} H = \lambda [\sum\limits_{i = 2,3} {{{\left| e \right\rangle }_{ii}\left\langle e \right| + (\sigma _2^ + \sigma _3^ - + \sigma _2^ - \sigma _3^ + )}} ]. 
\end{equation} Under such a Hamiltonian, QDs $i$ and $j$ prepared in the product states ${\left| {ee} \right\rangle _{ij}}$, ${\left| {eg} \right\rangle _{ij}}$, ${\left| {ge} \right\rangle _{ij}}$, and ${\left| {gg} \right\rangle _{ij}}$ evolve after a time $t$ as, respectively, \begin{equation} \begin{split} \label{equ6} \begin{array}{l} {\left| {eg} \right\rangle _{ij}} \to {e^{ - i\lambda t}}[\cos (\lambda t){\left| {eg} \right\rangle _{ij}} - i\sin (\lambda t){\left| {ge} \right\rangle _{ij}}], \\ {\left| {ee} \right\rangle _{ij}} \to {e^{ - i2\lambda t}}{\left| {ee} \right\rangle _{ij}}, \\ {\left| {gg} \right\rangle _{ij}} \to {\left| {gg} \right\rangle _{ij}}. \end{array} \end{split} \end{equation} The initial four QDs are in a product state of a pair of singlet states, i.e., \begin{equation} \label{equ5} \psi_{1,2,3,4} = \psi_{1,2} \otimes \psi_{3,4} \end{equation} with \begin{equation} \begin{split} \psi_{1,2} = \frac{1}{{\sqrt 2 }}(\left| {eg} \right\rangle - \left| {ge} \right\rangle )_{1,2}, \\ \psi_{3,4} = \frac{1}{{\sqrt 2 }}(\left| {eg} \right\rangle - \left| {ge} \right\rangle )_{3,4}. \end{split} \end{equation} After an evolution time $t$, the whole state of the four QDs evolves into \begin{equation} \begin{split} \label{equ8} {\left| \psi \right\rangle _{1,2,3,4}} &= \frac{1}{2}\Big\{{\left| {ge} \right\rangle _{1,4}}{e^{ - i\lambda t}}\left[ {\cos \left( {\lambda t} \right){{\left| {eg} \right\rangle }_{2,3}} - i\sin (\lambda t){{\left| {ge} \right\rangle }_{2,3}}} \right]\\ &\quad- {e^{ - i2\lambda t}}{\left| {gg} \right\rangle _{1,4}}{\left| {ee} \right\rangle _{2,3}}- {\left| {ee} \right\rangle _{1,4}}{\left| {gg} \right\rangle _{2,3}}\\ &\quad+ {\left| {eg} \right\rangle _{1,4}}{e^{ - i\lambda t}}\left[ {\cos \left( {\lambda t} \right){{\left| {ge} \right\rangle }_{2,3}} - i\sin (\lambda t){{\left| {eg} \right\rangle }_{2,3}}} \right]\Big\}. 
\end{split} \end{equation} At the special moment $\lambda t = \pi /4$, the above state Eq.~(\ref{equ8}) turns into \begin{equation} \label{equ9} \begin{split} \left| \psi \right\rangle _{1,2,3,4} &= \frac{{{e^{ - i\pi /4}}}}{{2\sqrt{2}}}\Big\{ -\sqrt{2}\,{e^{ - i\frac{\pi }{4}}} {\left| {gg} \right\rangle _{1,4}}{\left| {ee} \right\rangle _{2,3}} - \sqrt{2}\,{e^{i\frac{\pi }{4}}} {\left| {ee} \right\rangle _{1,4}}{\left| {gg} \right\rangle _{2,3}} \\ &\quad+ {\left| {eg} \right\rangle _{2,3}}[{\left| {ge} \right\rangle _{1,4}} - i{\left| {eg} \right\rangle _{1,4}}]\\ &\quad + {\left| {ge} \right\rangle _{2,3}}[{\left| {eg} \right\rangle _{1,4}} - i{\left| {ge} \right\rangle _{1,4}}]\Big\}. \end{split} \end{equation} By measuring the state of QDs 2 and 3, there are four possible outcomes: ${\left| {gg} \right\rangle _{2,3}}$, ${\left| {ee} \right\rangle _{2,3}}$, ${\left| {eg} \right\rangle _{2,3}}$, and ${\left| {ge} \right\rangle _{2,3}}$. For the first two results, ${\left| {gg} \right\rangle _{2,3}}$ and ${\left| {ee} \right\rangle _{2,3}}$, no entanglement swapping occurs, i.e., QDs 1 and 4 are not entangled. However, if we get one of the last two results, ${\left| {eg} \right\rangle _{2,3}}$ or ${\left| {ge} \right\rangle _{2,3}}$, the state collapses into an entangled state between QDs 1 and 4, namely \begin{equation} \label{equ10} \left|\psi\right\rangle _{1,4} = \frac{1}{\sqrt{2}}[{\left| {ge} \right\rangle _{1,4}} - i{\left| {eg}\right\rangle _{1,4}}] \end{equation} or \begin{equation} \left|\psi\right\rangle _{1,4}^{'} = \frac{1}{\sqrt{2}}[{\left| {eg} \right\rangle _{1,4}} - i{\left| {ge} \right\rangle _{1,4}}], \end{equation} where we have ignored the overall phase. \subsection{\label{sec:level2}Quantum repeater} For a practical quantum repeater, one needs to extend the distance between the two entangled qubits (QDs here) as far as possible. 
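The swapping step above can be checked numerically. The sketch below (our own verification code, not part of the original scheme) evolves the two-singlet initial state under the effective Hamiltonian of Eq.~(\ref{equ4}) applied to QDs 2 and 3, stops at $\lambda t = \pi/4$, and post-selects the outcome $|eg\rangle_{2,3}$; the branch probability is $1/4$ (so the total success probability over both accepted outcomes is $1/2$) and the conditional state of QDs 1 and 4 matches Eq.~(\ref{equ10}):

```python
import numpy as np

g_, e_ = np.array([1, 0]), np.array([0, 1])        # |g>, |e>
Pe, I2 = np.outer(e_, e_), np.eye(2)
sp = np.outer(e_, g_)                              # sigma^+ = |e><g|
sm = sp.T                                          # sigma^-

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# two singlets (|eg> - |ge>)/sqrt(2) on pairs (1,2) and (3,4)
singlet = (kron(e_, g_) - kron(g_, e_)) / np.sqrt(2)
psi = kron(singlet, singlet)                       # qubit ordering 1,2,3,4

lam = 1.0                                          # effective rate (units of g^2/Delta)
H23 = lam * (np.kron(Pe, I2) + np.kron(I2, Pe)     # effective H on QDs 2,3
             + np.kron(sp, sm) + np.kron(sm, sp))
H = kron(I2, H23, I2)

t = np.pi / (4 * lam)                              # the moment lambda * t = pi/4
w, V = np.linalg.eigh(H)
psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

A = psi_t.reshape(2, 2, 2, 2)
branch = A[:, 1, 0, :]                             # outcome |eg>_{2,3}: q2=e, q3=g
prob = np.sum(abs(branch) ** 2)                    # probability of this branch
phi14 = (branch / np.sqrt(prob)).reshape(4)

target = np.zeros(4, complex)                      # (|ge> - i|eg>)_{1,4}/sqrt(2)
target[0 * 2 + 1] = 1 / np.sqrt(2)                 # |g>_1 |e>_4
target[1 * 2 + 0] = -1j / np.sqrt(2)               # |e>_1 |g>_4
fidelity = abs(np.vdot(target, phi14)) ** 2
print(round(prob, 2), round(float(fidelity), 3))   # -> 0.25 1.0
```

By symmetry, the $|ge\rangle_{2,3}$ branch has the same probability and yields the primed state.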
In our protocol, starting from many singlet states of two adjacent QDs, we have shown that entanglement swapping can be achieved, i.e., QD 1 and QD 4 form an entangled pair and the entanglement distance is doubled. We then need to extend the entanglement distance further to 8 QDs, 16 QDs and more~(Fig.~\ref{fig:figure3}(b)), until we find a periodicity of the entanglement. For an 8-QD unit, there are three possible initial entangled states of the system: \begin{equation}\begin{split} \label{equ11} \begin{array}{l} (1).\; {\psi _{\rm{L}}} = {\psi _{1,4}},\ {\psi _R} = {\psi _{5,8}},\\ (2).\; {\psi _{\rm{L}}} = \psi _{1,4}^{'},\ {\psi _R} = \psi _{5,8}^{'},\\ (3).\; {\psi _{\rm{L}}} = {\psi _{1,4}},\ {\psi _R} = \psi _{5,8}^{'},\\ \ \ \ \ \ {\rm or}\ {\psi _{\rm{L}}} = \psi _{1,4}^{'},\ {\psi _R} = {\psi _{5,8}}. \end{array} \end{split} \end{equation} \textbf{Case (1)}\\ The initial state is a product state of two entangled QD pairs, \begin{equation} \label{equ12} \begin{split} \phi &= {\psi _{1,4}}\otimes {\psi _{5,8}} \\ &= \frac{1}{2}[{\left| {ge} \right\rangle _{1,8}}{\left| {eg} \right\rangle _{4,5}} - i{\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}}\\ &\quad- i{\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}} - {\left| {eg} \right\rangle _{1,8}}{\left| {ge} \right\rangle _{4,5}}]. 
\end{split} \end{equation} By switching on the coupling between the cavity and the QDs 4 and 5, the system evolves after a time $t$ into \begin{equation} \label{equ13} \begin{split} \phi ^{'} &= \frac{1}{2}\{ {\left| {ge} \right\rangle _{1,8}}{e^{ - i\lambda t}}[\cos (\lambda t){\left| {eg} \right\rangle _{4,5}} - i\sin (\lambda t){\left| {ge} \right\rangle _{4,5}}]\\ &\quad- i{e^{ - i2\lambda t}}{\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}}- i{\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}} \\ &\quad- {\left| {eg} \right\rangle _{1,8}}{e^{ - i\lambda t}}[\cos (\lambda t){\left| {ge} \right\rangle _{4,5}} - i\sin (\lambda t){\left| {eg} \right\rangle _{4,5}}]\}. \end{split} \end{equation} Choosing $\lambda t = \pi /4$, we find \begin{equation} \label{equ14} \begin{split} \phi ^{'} &= \frac{{{e^{ - \frac{\pi }{4}i}}}}{{2\sqrt 2 }}\Big[\left( {{{\left| {ge} \right\rangle }_{1,8}} + i{{\left| {eg} \right\rangle }_{1,8}}} \right){\left| {eg} \right\rangle _{4,5}} \\ &\quad- ({\left| {eg} \right\rangle _{1,8}} + i{\left| {ge} \right\rangle _{1,8}}){\left| {ge} \right\rangle _{4,5}}\\ &\quad- \sqrt 2 i{e^{ - \frac{\pi }{4}i}}{\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}} \\ &\quad- \sqrt 2 i{e^{\frac{\pi }{4}i}}{\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}}\Big]. \end{split} \end{equation} Similarly, by detecting the states of QDs 4 and 5, we throw away the results if they are ${\left| {gg} \right\rangle _{4,5}}$ or ${\left| {ee} \right\rangle _{4,5}}$. Otherwise, the final state collapses into one of the entangled states between QDs 1 and 8, namely \begin{equation} \label{equ15} \begin{split} \left| \psi \right\rangle _{1,8 }= \frac{1}{{\sqrt 2 }}({{\left| {ge} \right\rangle }_{1,8}} + i{{\left| {eg} \right\rangle }_{1,8}} ),\\ \left| \psi \right\rangle _{1,8}^{'} = \frac{1}{{\sqrt 2 }}({\left| {eg} \right\rangle _{1,8}} + i{\left| {ge} \right\rangle _{1,8}}). 
\end{split} \end{equation} Obviously, the entanglement distance is doubled again. \textbf{Case (2)}\\ In this case, the initial state is \begin{eqnarray} \label{equ16} \phi &=& {\psi' _{1,4}}\otimes{\psi' _{5,8}} \nonumber\\ &=& \frac{1}{2}\big[{{\left| {eg} \right\rangle }_{1,8}}{{\left| {ge} \right\rangle }_{4,5}}- i{{\left| {ee} \right\rangle }_{1,8}}{{\left| {gg} \right\rangle }_{4,5}}\nonumber\\ &-& i{{\left| {gg} \right\rangle }_{1,8}}{{\left| {ee} \right\rangle }_{4,5}}-{{\left| {ge} \right\rangle }_{1,8}}{{\left| {eg} \right\rangle }_{4,5}}\big]. \end{eqnarray} By switching on the coupling between the cavity and the QDs 4 and 5, the system evolves after a time $t$ into \begin{equation} \label{equ17} \begin{split} \phi' &= \frac{1}{2}\{ {\left| {eg} \right\rangle _{1,8}}{e^{ - i\lambda t}}\left[ {\cos (\lambda t){{\left| {ge} \right\rangle }_{4,5}} - i\sin (\lambda t){{\left| {eg} \right\rangle }_{4,5}}} \right]\\ &\quad- i{\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}} - i{e^{ - i2\lambda t}}{\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}}\\ &\quad- {\left| {ge} \right\rangle _{1,8}}{e^{ - i\lambda t}}\left[ {\cos (\lambda t){{\left| {eg} \right\rangle }_{4,5}} - i\sin (\lambda t){{\left| {ge} \right\rangle }_{4,5}}} \right]\}. \end{split} \end{equation} Choosing $\lambda t = \pi /4$, we obtain \begin{equation} \label{equ18} \begin{split} \phi' &= \frac{{{e^{ - i\frac{\pi }{4}}}}}{{2\sqrt 2 }}\{ ({\left| {eg} \right\rangle _{1,8}} + i{\left| {ge} \right\rangle _{1,8}}){\left| {ge} \right\rangle _{4,5}} - ({\left| {ge} \right\rangle _{1,8}}\\ &\quad+ i{\left| {eg} \right\rangle _{1,8}}){\left| {eg} \right\rangle _{4,5}}- \sqrt 2 i{e^{i\frac{\pi }{4}}}{\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}}\\ &\quad- i{e^{ - i\frac{\pi }{4}}}\sqrt 2 {\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}}\}. 
\end{split} \end{equation} By detecting the states of QDs 4 and 5, we throw away the results if they are ${\left| {gg} \right\rangle _{4,5}}$ or ${\left| {ee} \right\rangle _{4,5}}$. Otherwise, the final state collapses into one of the entangled states between QDs 1 and 8, namely \begin{equation} \begin{split} \label{equ19} \begin{array}{l} \psi _{1,8}^{'} = \frac{1}{{\sqrt 2 }}({\left| {eg} \right\rangle _{1,8}} + i{\left| {ge} \right\rangle _{1,8}}),\\ {\psi _{1,8}} = \frac{1}{{\sqrt 2 }}({\left| {ge} \right\rangle _{1,8}} + i{\left| {eg} \right\rangle _{1,8}}). \end{array} \end{split} \end{equation} \textbf{Case (3)}\\ In this case, the initial state is \begin{eqnarray} \label{equ20} \phi &=&{\psi _{1,4}}\otimes{\psi' _{5,8}}\nonumber\\ &=&\frac{1}{2}\big[{{\left| {gg} \right\rangle }_{1,8}}{{\left| {ee} \right\rangle }_{4,5}} - i{{\left| {ge} \right\rangle }_{1,8}}{{\left| {eg} \right\rangle }_{4,5}}\nonumber\\ &-& i{{\left| {eg} \right\rangle }_{1,8}}{{\left| {ge} \right\rangle }_{4,5}} - {{\left| {ee} \right\rangle }_{1,8}}{{\left| {gg} \right\rangle }_{4,5}} \big]. \end{eqnarray} By switching on the coupling between the cavity and the QDs 4 and 5, the system evolves after a time $t$ into \begin{equation} \label{equ21} \begin{split} \phi' &=\frac{1}{2}[{e^{ - i2\lambda t}}{\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}} \\ &\quad- i{\left| {ge} \right\rangle _{1,8}}{e^{ - i\lambda t}}\left[ {\cos (\lambda t){{\left| {eg} \right\rangle }_{4,5}}-i\sin (\lambda t){{\left| {ge} \right\rangle }_{4,5}}} \right]\\ &\quad- i{\left| {eg} \right\rangle _{1,8}}{e^{ - i\lambda t}}\left[ {\cos (\lambda t){{\left| {ge} \right\rangle }_{4,5}} - i\sin (\lambda t){{\left| {eg} \right\rangle }_{4,5}}}\right]\\ &\quad- {\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}}]. 
\end{split} \end{equation} Choosing $\lambda t = \pi/4$, we obtain \begin{equation} \label{equ22} \begin{split} \phi' &= \frac{{{e^{ - i\frac{\pi }{4}}}}}{{2\sqrt 2 }}[\sqrt 2 {e^{ - i\frac{\pi }{4}}}{\left| {gg} \right\rangle _{1,8}}{\left| {ee} \right\rangle _{4,5}} - \sqrt 2 {e^{i\frac{\pi }{4}}}{\left| {ee} \right\rangle _{1,8}}{\left| {gg} \right\rangle _{4,5}}\\ &\quad- ({\left| {eg} \right\rangle _{1,8}} + i{\left| {ge} \right\rangle _{1,8}}){\left| {eg} \right\rangle _{4,5}}\\ &\quad- ({\left| {ge} \right\rangle _{1,8}} + i{\left| {eg} \right\rangle _{1,8}}){\left| {ge} \right\rangle _{4,5}}]. \end{split} \end{equation} By detecting the states of QDs 4 and 5, we throw away the results if they are ${\left| {gg} \right\rangle _{4,5}}$ or ${\left| {ee} \right\rangle _{4,5}}$. Otherwise, the final state collapses into one of the entangled states between QDs 1 and 8, namely \begin{equation} \label{equ23} \begin{split} \begin{array}{l} {\psi _{1,8}} = \frac{1}{{\sqrt 2 }}({\left| {ge} \right\rangle _{1,8}} + i{\left| {eg} \right\rangle _{1,8}}),\\ \psi _{1,8}^{'} = \frac{1}{{\sqrt 2 }}({\left| {eg} \right\rangle _{1,8}} + i{\left| {ge} \right\rangle _{1,8}}). \end{array} \end{split} \end{equation} By comparing the results for 4 QDs and 8 QDs, we find that they are exactly the same, indicating the periodicity of the quantum repeater. It is straightforward to extend the quantum entanglement to 16 QDs, starting from a product state of a pair of entangled 8-QD blocks~(Fig.~\ref{fig:figure3}(c)). Similarly, there are also three cases in the 16-QD system: \begin{equation} \label{equ24} \begin{split} \begin{array}{l} (1).\; {\psi _{\rm{L}}} = {\psi _R} = {\psi _{1,8}},\\ (2).\; {\psi _{\rm{L}}} = {\psi _R} = \psi _{1,8}^{'},\\ (3).\; {\psi _{\rm{L}}} = {\psi _{1,8}},\; {\psi _R} = \psi _{1,8}^{'};\; {\rm or}\; {\psi _{\rm{L}}} = \psi _{1,8}^{'},\; {\psi _R} = {\psi _{1,8}}. 
\end{array} \end{split} \end{equation} By repeating this process, it is easy to extend the entanglement distance to 32 QDs and beyond. Our scheme of a quantum repeater differs from the traditional DLCZ scheme in three aspects (see Fig.~\ref{fig:figure4} for a summary). (i) Our system is based on solid-state QDs, which differ from coupled atom-photon systems. (ii) The starting unit of our scheme is 4 entangled QDs, which are prepared from 2 pairs of QDs initially in singlet states. This is different from the atom-photon systems. (iii) The measurements we need in our scheme are local, instead of the Bell measurements required in the DLCZ scheme. These local measurements are easier to implement in experiments than the Bell measurements. Although our scheme of the quantum repeater is different from the traditional one, its success probability is the same as that of the DLCZ scheme, which is higher than other schemes based on solid-state systems. \begin{figure} \caption{A complete process to establish entanglement between QD 1 and QD 16 (and more distant QDs) through coupling two central QDs within cavities.} \label{fig:figure4} \end{figure} \section{\label{sec:level3}Conclusion} In conclusion, we propose a quantum repeater scheme based on solid-state QDs. The starting unit is 4 entangled QDs, prepared from two pairs of DQDs experimentally initialized in their singlet states. Our scheme reaches the same success probability as the famous DLCZ scheme but without the requirement of Bell measurements. Such an advantage is favorable for experimentalists due to the simplicity of the scheme. A quantum repeater is a fundamental building block in quantum communication, quantum computing, and quantum teleportation~\cite{Bouwmeester1997Experimental}. For a practical quantum repeater, other factors, such as decoherence, environmental effects, and the measurement efficiency, need to be included. 
The performance of the quantum repeater scheme in a real situation, as well as new methods to improve the success probability of entanglement swapping using entanglement purification~\cite{Pan2000Entanglement,PhysRevLett.76.722,PhysRevA.57.R4075} and noise suppression, also require further exploration. \section*{ACKNOWLEDGMENT} X.~F.~Yi thanks Wenxian Zhang, Feng Mang, Yong Zhang and Zhang-qi Yin for valuable discussions. This work was supported by the National Natural Science Foundation of China under Grant No. 11574239 and the Open Research Fund Program of the State Key Laboratory of Low Dimensional Quantum Physics under Grant No. KF201614. \end{document}
\begin{document} \sloppy \newenvironment{proo}{\begin{trivlist} \item{\sc {Proof.}}} { $\square$ \end{trivlist}} \long\def\symbolfootnote[#1]#2{\begingroup \def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup} \title{From deformation theory of wheeled props to\\ classification of Kontsevich formality maps} \author{Assar~Andersson} \address{Assar~Andersson: Mathematics Research Unit, Luxembourg University, Maison du Nombre, 6 Avenue de la Fonte, L-4364 Esch-sur-Alzette, Grand Duchy of Luxembourg } \email{[email protected]} \author{Sergei~Merkulov} \address{Sergei~Merkulov: Mathematics Research Unit, Luxembourg University, Maison du Nombre, 6 Avenue de la Fonte, L-4364 Esch-sur-Alzette, Grand Duchy of Luxembourg } \email{[email protected]} \begin{abstract} We study the homotopy theory of the wheeled prop controlling Poisson structures on formal graded finite-dimensional manifolds and prove, in particular, that the Grothendieck-Teichm\"uller group acts on that wheeled prop faithfully and homotopy non-trivially. Next we apply this homotopy theory to the study of the deformation complex of an arbitrary M.\ Kontsevich formality map and compute the full cohomology group of that deformation complex in terms of the cohomology of a certain graph complex introduced earlier by M.\ Kontsevich in \cite{Ko} and studied by T.\ Willwacher in \cite{Wi1}. \end{abstract} \maketitle {\Large \section{\bf Introduction} } \label{sec:introduction} \subsection{Wheeled props, formal Poisson structures and the Grothendieck-Teichm\"uller group} Let $V$ be an arbitrary finite-dimensional ${\mathbb Z}$-graded vector space over a field ${\mathbb K}$ of characteristic zero (say, $V={\mathbb R}^d$, ${\mathbb K}={\mathbb R}$) and $V^*:=\mathrm{Hom}(V,{\mathbb K})$ its dual. 
Then the completed symmetric algebra ${\mathcal O}_{\mathcal M}:=\widehat{\odot^\bullet}V$ can be understood as the ${\mathbb K}$-algebra of formal smooth functions on the dual vector space $V^*$ understood as a formal manifold ${\mathcal M}$, and the Lie algebra of derivations of ${\mathcal O}_{\mathcal M}$, $$ {\mathcal T}_{\mathcal M}:= \mathrm{Der}({\mathcal O}_{\mathcal M})\simeq \mathrm{Hom}(V,\widehat{\odot^\bullet} V)\simeq \prod_{m\geq 0} \mathrm{Hom}(V, \odot^m V), $$ as the Lie algebra of formal smooth vector fields on ${\mathcal M}$. A {\em formal graded Poisson structure}\, on ${\mathcal M}$ is a degree 2 element $\pi$ in the Schouten Lie algebra $$ {\mathcal T}_{poly}{{\mathcal M}}:=\wedge^\bullet {\mathcal T}_{\mathcal M} \simeq \prod_{m\geq 0,n\geq 0} \mathrm{Hom}(\wedge^n V, \odot^m V)[-n]=\prod_{m\geq 0,n\geq 0} \mathrm{Hom}(\odot^n (V[1]), \odot^m V)=\prod_{k\geq 0}\odot^k (V^*[-1]\oplus V) $$ of polyvector fields satisfying the Maurer-Cartan equation, $$ [\pi,\pi]=0, $$ where the Schouten Lie bracket $[\ ,\ ]$ (of degree $-1$) originates essentially from the canonical pairing map $V^*[-1]\times V \rightarrow {\mathbb K}[-1]$. 
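For orientation, here is a standard special case (a well-known fact, not taken from the text above): when $V$ is concentrated in degree zero, a degree 2 element of ${\mathcal T}_{poly}{\mathcal M}$ has only the components $\pi_2^m\in \mathrm{Hom}(\wedge^2 V,\odot^m V)$, i.e.\ $\pi$ is an ordinary formal bivector field on ${\mathcal M}$, and the Maurer-Cartan equation is equivalent, up to convention-dependent prefactors, to the Jacobi identity for the bracket $\{f,g\}:=\pi(df\wedge dg)$:

```latex
% Classical check: for f, g, h in O_M and a bivector pi with [pi,pi] = 0,
% the bracket {f,g} := pi(df /\ dg) satisfies the Jacobi identity
% (the prefactor 1/2 depends on the chosen convention for the Schouten bracket):
\{f,\{g,h\}\} + \{g,\{h,f\}\} + \{h,\{f,g\}\}
  \;=\; \tfrac{1}{2}\,[\pi,\pi](df\wedge dg\wedge dh) \;=\; 0 .
```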
Thus a formal Poisson structure is a formal power series, \begin{equation}\label{1: pi formal power series} \pi=\sum_{n,m=0}^\infty\pi_n^m,\ \ \ \ \ \pi_n^m\in \mathrm{Hom}(\wedge^n V, \odot^m V)[2-n] \end{equation} and hence can be understood as a representation in the vector space $V$, $$ \rho_\pi: \mathcal{H}\mathit{olieb}_{0,1}^{\star} \longrightarrow {\mathcal E}nd_V, $$ of a certain prop of {\em formal Poisson structures}, which is by definition the free prop\footnote{See, e.g., \cite{Ma,Me3} and the first sections of \cite{V} for an elementary introduction into the theory of props and wheeled props.} generated by a collection of 1-dimensional ${\mathbb S}_m^{op}\times {\mathbb S}_n$ bimodules ${\mbox{1 \hskip -8pt 1}}_m\otimes {\mathit s \mathit g\mathit n}_n[n-2]$. A useful observation \cite{Me0} is that this prop comes equipped with a natural differential $\delta^\star$ such that the Maurer-Cartan equation $[\pi,\pi]=0$ gets encoded into the compatibility of the representation $\rho_\pi$ with the differentials. The strange notation $\mathcal{H}\mathit{olieb}_{0,1}^{\star}$ comes from the fact that this particular prop belongs to the family of dg props $\mathcal{H}\mathit{olieb}_{c,d}^{\star}$ which control Maurer-Cartan elements $\pi$ in the graded commutative algebra \begin{equation}\label{1: odot V[-c] + V[-d]} \prod_{k\geq 0}\odot^k (V^*[-d]\oplus V[-c]) \end{equation} equipped with the obvious Poisson type Lie bracket (of homological degree $-c-d$). The case $c=0$, $d=1$ corresponds to formal Poisson structures while the case $c=1$, $d=1$ corresponds to extended {\em homotopy Lie bialgebra}\, structures. \smallskip The superscript $\star$ in the notation indicates that we consider in this paper an {\em extended version}\, of the family of props $\mathcal{H}\mathit{olieb}_{c,d}$ studied earlier in \cite{MW1}. 
The latter family controls the {\em truncated}\, version of the above formal power series (\ref{1: pi formal power series}) which allows only monomials with $m,n\geq 1$, $m+n\geq 3$; such a truncation makes perfect sense in the context of the theory of minimal resolutions of $(c,d)$ Lie bialgebras. However we are interested in this paper in the ``full story" with no restrictions on the integer parameters $m$ and $n$, and that ``full story" turns out to be sometimes quite different from the truncated one. \smallskip Note that under an appropriate completion of the above graded commutative algebra (\ref{1: odot V[-c] + V[-d]}) all the above structures (the convergent Lie bracket, Maurer-Cartan elements $\pi$ and associated representations $\rho_\pi$ of the props $\mathcal{H}\mathit{olieb}_{c,d}^\star$) make sense also for {\em infinite-dimensional}\, vector spaces $V$. An important point of this paper is that the deformation theories of these structures behave quite differently in finite and infinite dimensions. Indeed, in infinite dimensions they can be understood as representations of {\em ordinary}\, props $$ \rho_\pi: \mathcal{H}\mathit{olieb}_{c,d}^{\bigstar} \longrightarrow {\mathcal E}nd_V $$ while in {\em finite}\, dimensions as representations of their {\em wheeled closures} (cf.\ \cite{Me1,MMS}), $$ \rho_\pi: \mathcal{H}\mathit{olieb}^{\bigstar\circlearrowright}_{c,d} \longrightarrow {\mathcal E}nd_V $$ which have quite different deformation theories or, equivalently, quite different dg Lie algebras, $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{\bigstar})$ and $\mathrm{Der}(\mathcal{H}\mathit{olieb}^{\bigstar\circlearrowright}_{c,d})$, of derivations (see \S 3 for their precise definitions). 
The dg prop $\mathcal{H}\mathit{olieb}_{c,d}^{\bigstar}$ is a proper subprop of $\mathcal{H}\mathit{olieb}^{\bigstar\circlearrowright}_{c,d}$, the latter containing many more universal operations (involving, roughly speaking, the trace operation $V\otimes V^*\rightarrow {\mathbb K}$ which makes no sense in general when $\dim V=\infty$). \smallskip The first main purpose of this paper is the study of the deformation theory of both props $\mathcal{H}\mathit{olieb}_{c,d}^{\bigstar}$ and $\mathcal{H}\mathit{olieb}^{\bigstar\circlearrowright}_{c,d}$ (in fact of their completed versions) and the computation of the cohomologies of the associated complexes of derivations in terms of the M.\ Kontsevich graph complexes $\mathsf{GC}_d^{\geq 2}$ introduced\footnote{The symbol $\mathsf{GC}_d$ stands often in the literature for the graph complex generated by connected graphs with all vertices trivalent; we denote by $\mathsf{GC}_d^{\geq 2}$ its extension which allows connected graphs with at least bivalent vertices (see \S 3.2 for more details and references). The latter complex has a quasi-isomorphic version, $\mathsf{dGC}_d^{\geq 2}$, spanned by graphs with fixed directions on edges; the subcomplex of $\mathsf{dGC}_d^{\geq 2}$ spanned by {\em oriented}\, graphs, that is, directed graphs with no closed paths of directed edges, is denoted by $\mathsf{GC}_d^{or}$. These complexes have been studied in \cite{Wi1,Wi2,Z}.} in \cite{Ko} and studied in \cite{Wi1}, and of its oriented version $\mathsf{GC}_d^{or}$ which was studied in \cite{Wi2,Z}. These complexes are spanned by {\em connected graphs}. 
It is often useful \cite{MW1,MW3} to add to these classical graph complexes an additional element $\emptyset$ concentrated in degree zero, ``a graph with no vertices and edges", and define the {\em full graph complexes}\, of not necessarily connected graphs as the completed graded symmetric tensor algebras \begin{equation}\label{1: fGC in terms of GC} \mathsf{fGC}^{\geq 2}_{d}:=\widehat{\odot^\bullet}\left((\mathsf{GC}_{d}^{\geq 2}\oplus {\mathbb K} )[-d]\right)[d], \end{equation} \begin{equation}\label{1: fGC^or in terms of GC^0r} \mathsf{fGC}^{or}_{d}:=\widehat{\odot^\bullet}\left((\mathsf{GC}_{d}^{or}\oplus {\mathbb K} )[-1-c-d]\right)[d], \end{equation} the summands ${\mathbb K}$ being generated by $\emptyset$. The formal class $\emptyset$ takes care of (homotopy non-trivial) rescaling operations of the (wheeled) props under consideration, and essentially leads us in applications to the full Grothendieck-Teichm\"uller group $GRT=GRT_1 \rtimes{\mathbb K}^*$ (see \cite{D2}) rather than to its reduced version $GRT_1$. The Lie bracket of $\emptyset$ with elements $\Gamma$ of $\mathsf{GC}_{d}^{\geq 2}$ or $\mathsf{GC}_{d}^{or}$ is defined as the multiplication of $\Gamma$ by twice the number of its loops. \subsubsection{\bf Proposition} {\em There are morphisms of dg Lie algebras, $$ F^\circlearrowright: \mathsf{fGC}^{\geq 2}_{c+d+1} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}^{\bigstar\circlearrowright}_{c,d}), \ \ \ \ F: \mathsf{fGC}_{c+d+1}^{or} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{\bigstar}) $$ which are quasi-isomorphisms. 
} \smallskip It was proven in \cite{Wi1,Wi2} that $H^\bullet(\mathsf{GC}_{c+d+1}^{\geq 2}) =H^\bullet(\mathsf{GC}_{c+d+2}^{or})$ and that $$ H^0(\mathsf{fGC}_2^{\geq 2})=H^0(\mathsf{fGC}_3^{or})=\mathfrak{grt}\ \stackrel{\text{as a vector space}}{\simeq}\ \mathfrak{grt}_1 \oplus {\mathbb K}, $$ where $\mathfrak{grt}$ (resp., $\mathfrak{grt}_1$) is the Lie algebra of the Grothendieck-Teichm\"uller group $GRT$ (resp., $GRT_1$). It is easy to see that $H^0(\mathsf{GC}_2^{or})=0$ and $H^0(\mathsf{GC}_3^{\geq 2})=0$. \subsubsection{\bf Corollary} {\em There is an isomorphism of Lie algebras $$ H^0(\mathrm{Der}(\swhHoLB_{0,1}))=\mathfrak{grt}, $$ that is, the Grothendieck-Teichm\"uller group $GRT$ acts up to homotopy faithfully (and essentially transitively) on the vertex completion of the wheeled properad $\swhHoLB_{0,1}$ governing {\em finite}-dimensional formal Poisson structures. \smallskip By contrast, $$ H^0(\mathrm{Der}(\mathcal{H}\mathit{olieb}^{\bigstar}_{0,1}))=0, $$ that is, the completion of the properad $\mathcal{H}\mathit{olieb}^{\bigstar}_{0,1}$ governing {\em infinite}-dimensional formal Poisson structures admits no homotopy non-trivial automorphisms at all. } \smallskip Note that the above Proposition applied to another interesting case $c=d=1$ gives us quite the opposite picture, $$ H^0(\mathrm{Der}(\swhHoLB_{1,1}))=0, \ \ \ \ \ H^0(\mathrm{Der}(\mathcal{H}\mathit{olieb}^{\bigstar}_{1,1}))=\mathfrak{grt}, $$ cf.\ \cite{MW1}. 
These results are by no means surprising --- the graph complex $\mathsf{fGC}_2^{\geq 2}$ can be understood as a kind of universal incarnation of the Chevalley-Eilenberg deformation complex of the Lie algebra ${\mathcal T}_{poly}({\mathcal M})$ for any finite-dimensional formal manifold \cite{Ko}, and the fact that $H^0(\mathsf{fGC}_2^{\geq 2})=\mathfrak{grt}$ already implies \cite{Wi1} that the Grothendieck-Teichm\"uller group $GRT$ acts (up to homotopy) by universal ${\mathcal L}ie_\infty$ automorphisms of ${\mathcal T}_{poly}({\mathcal M})$; this action is given in terms of certain iterations of the canonical $GL(V)$-invariant $BV$ operator on ${\mathcal T}_{poly}({\mathcal M})$, so what the above Corollary says essentially is that even if one drops this restriction on the possible structure of linear operators acting on ${\mathcal T}_{poly}({\mathcal M})$, the action of $GRT$ remains homotopy non-trivial. The above Proposition can be inferred from the theory of stable cohomology of the Lie algebra of polyvector fields developed in \cite{Wi3} (but not immediately). In any case, our proof of Proposition 1.1.1 is very short, so we decided to show a new direct argument behind that claim in \S 4.1 below. \smallskip The main advantage of our study of the homotopy theory of the vertex completion $\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\bigstar\circlearrowright}$ of the wheeled prop $\swhHoLB_{0,1}$ is that it gives us --- almost immediately! --- a full insight into the homotopy theory of M.\ Kontsevich's formality maps, which is the second main topic of this paper. 
\subsection{Homotopy classification of M.\ Kontsevich formality maps} The M.\ Kontsevich formality map \cite{Ko2} associates to any {\em finite}-dimensional formal Poisson structure $\pi$ on a formal graded manifold ${\mathcal M}=V^*$ a curved ${\mathcal A}ss_\infty$-structure on the ${\mathbb R}$-algebra ${\mathcal O}_{\mathcal M}=\widehat{\odot^\bullet} V$ of formal smooth functions on ${\mathcal M}$ which is given in terms of polydifferential operators constructed from $\pi$. In our approach $\pi$ is a representation in $V$ of the wheeled prop $\swhHoLB_{0,1}$, and the construction of polydifferential operators from $\pi$ can be conveniently encoded into the polydifferential functor \cite{MW3} $$ {\mathcal O}: \mathsf{Category\ of\ dg\ props} \longrightarrow \mathsf{Category\ of\ dg\ operads} $$ applied to the prop $\swhHoLB_{0,1}$: for any dg prop ${\mathcal P}$ the associated dg operad ${\mathcal O}({\mathcal P})$ has the property that for any representation $\rho$ of ${\mathcal P}$ in a vector space $V$ the operad ${\mathcal O}({\mathcal P})$ has a canonically associated representation ${\mathcal O}(\rho)$ in the completed graded commutative algebra $\widehat{\odot^\bullet} V$ given in terms of polydifferential operators. Curved ${\mathcal A}ss_\infty$ algebra structures are controlled by the well-known (non-cofibrant) dg operad $c{\mathcal A}ss_\infty$, so that the M.\ Kontsevich universal formality map from \cite{Ko} (or any other universal formality map) can be understood as a morphism of dg operads $$ {\mathcal F}:{c{\mathcal A}ss}_\infty \longrightarrow {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\bigstar\circlearrowright}) $$ satisfying certain non-triviality conditions (see \S 5 for details). We give in this paper a very short and elementary (based essentially on the contractibility of the permutahedra polytopes) proof of the following classification theorem. 
\subsubsection{\bf Theorem} {\em Let\, ${\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\swhHoLB_{0,1})\right)$ be the deformation complex of any given formality map ${\mathcal F}$ (in particular, of the M.\ Kontsevich map from \cite{Ko2}). Then there is a canonical morphism of complexes $$ \mathsf{fGC}_2^{\geq 2} \longrightarrow {\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\swhHoLB_{0,1})\right)[1] $$ which is a quasi-isomorphism.} This result implies the equality of cohomology groups, for any $i\in {\mathbb Z}$, $$ H^{i+1}\left({\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)\right)=H^i(\mathsf{fGC}_2^{\geq 2}), $$ which in the special case $i=0$ reads $$ H^1\left({\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)\right)=H^0(\mathsf{fGC}_2^{\geq 2}) =\mathfrak{grt}, $$ and hence gives us a new (very short) proof of the following remarkable theorem of V.\ Dolgushev. \subsubsection{\bf Theorem \cite{Do}} {\em The Grothendieck-Teichm\"uller group $GRT$ acts freely and transitively on the set of homotopy classes of universal formality morphisms.} \smallskip This Theorem implies the identification of the set of homotopy classes of formality maps with the set of V.\ Drinfeld associators \cite{D2}. \subsection{Some notation} The set $\{1,2, \ldots, n\}$ is abbreviated to $[n]$; the group of bijections $[n]\rightarrow [n]$ is denoted by ${\mathbb S}_n$; the trivial (resp., sign) one-dimensional representation of ${\mathbb S}_n$ is denoted by ${\mbox{1 \hskip -8pt 1}}_n$ (resp., ${\mathit s \mathit g\mathit n}_n$).
The cardinality of a finite set $S$ is denoted by $\# S$. We work in this paper in the category of ${\mathbb Z}$-graded vector spaces over a field ${\mathbb K}$ of characteristic zero. If $V=\oplus_{i\in {\mathbb Z}} V^i$ is a graded vector space, then $V[k]$ stands for the graded vector space with $V[k]^i:=V^{i+k}$; for $v\in V^i$ we set $|v|:=i$. If $V$ is a complex with a differential $d$, then $V[k]$ is also a complex, with the differential given by $(-1)^k d$. \smallskip For the basic notions and facts of the theory of props and properads we refer to the papers \cite{Ma,MV,V} (and references cited there), and for their wheeled versions to \cite{MMS, Me1}. A short introduction to these theories can be found in \cite{Me3}. We assume that every (wheeled) prop ${\mathcal P}$ we work with in this paper has a unit, denoted by $\uparrow\in {\mathcal P}(1,1)$. \smallskip { \Large \section{\bf Wheeled properads of homotopy Lie bialgebras and their extensions} } \smallskip \subsection{Reminder on props of Lie $(c,d)$-bialgebras and their minimal resolutions} Consider for any pair of integers $c,d\in {\mathbb Z}$ a quadratic prop \cite{MW1} $$ \mathcal{L}\mathit{ieb}_{c,d}:={\mathcal F} ree\langle e\rangle/\langle{\mathcal R}\rangle, $$ defined as the quotient of the free prop generated by an ${\mathbb S}$-bimodule $e=\{e(m,n)\}_{m,n\geq 0}$ with all $e(m,n)=0$ except\footnote{When representing elements of various props below as graphs we always assume by default that all edges and legs are {\em directed}\, with the flow running from the bottom of the graph to the top.} $$ e(2,1):={\mbox{1 \hskip -8pt 1}}_1\otimes {\mathit s \mathit g\mathit n}_2^{c}[c-1]=\mbox{span}\left\langle \begin{array}{c}\begin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_2}}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_1}}**@{}, \end{xy}\end{array} =(-1)^{c} \betagin{array}{c}\betagin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^{_1}}**@{}, <-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^{_2}}**@{}, \end{xy}\end{array} \bar{i}ght{\bar{a}}ngle $$ $$ e(1,2):= {\mathit s \mathit g\mathit n}_2^{d}\otimes {\mbox{1 \hskip -8pt 1}}_1[d-1]=\mbox{span}\left\lambdangle \betagin{array}{c}\betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{}, \end{xy}\end{array} =(-1)^{d} \betagin{array}{c}\betagin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_1}}**@{}, <-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_2}}**@{}, \end{xy}\end{array} \bar{i}ght{\bar{a}}ngle $$ by the ideal generated by the following elements \betagin{equation}\lambdabel{R for LieB} {\mathcal R}:\left\{ \betagin{array}{c} \betagin{array}{c}\resizebox{7mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{}, \end{xy}}\end{array} + \betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, 
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^2}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^1}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^3}**@{}, \end{xy}}\end{array} + \betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-}, <0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-}, <-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-}, <-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{}, <-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-}, <-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-}, <0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^1}**@{}, <-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^3}**@{}, <-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^2}**@{}, \end{xy}}\end{array} \ \ , \ \ \betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{}, \end{xy}}\end{array} + \betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^2}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^1}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^3}**@{}, \end{xy}}\end{array} + \betagin{array}{c}\resizebox{8.4mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-}, 
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-}, <-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{}, <-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-}, <-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^1}**@{}, <-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^3}**@{}, <-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^2}**@{}, \end{xy}}\end{array} \\ \betagin{array}{c}\resizebox{5mm}{!}{\betagin{xy} <0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-}, <-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-}, <0mm,3mm>*{\circ};<0mm,3mm>*{}**@{}, <0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-}, <0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{}, <-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{}, <0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{}, <0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy}}\end{array} - \betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{}, \end{xy}}\end{array} - (-1)^{d} \betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{}, 
\end{xy}}\end{array} - (-1)^{d+c} \betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{}, \end{xy}}\end{array} - (-1)^{c} \betagin{array}{c}\resizebox{7mm}{!}{\betagin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.4mm,2.4mm>*{\circ};<2.4mm,2.4mm>*{}**@{}, <2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-}, <2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{}, <2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{}, <2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{}, \end{xy}}\end{array} \end{array} \bar{i}ght. 
\end{equation} Thus a representation, $$ \rho: \mathcal{L}\mathit{ieb}_{c,d} \longrightarrow {\mathcal E} nd_V $$ of this prop in a differential graded (dg, for short) vector space $V$ is uniquely determined by the values of $\rho$ on the generators, $$ \rho \left( \begin{xy} <0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-}, <0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-}, <-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\right): V[-c]\rightarrow \odot^2(V[-c])[1], \ \ \ \ \rho\left( \begin{xy} <0mm,0.66mm>*{};<0mm,3mm>*{}**@{-}, <0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-}, <-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-}, <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy} \right): \odot^2(V[d]) \rightarrow V[1+d], $$ which equip $V$ with (degree shifted) dg Lie algebra and Lie coalgebra structures satisfying the Drinfeld compatibility condition (the latter is assured by the vanishing under $\rho$ of the bottom graph in ${\mathcal R}$). \smallskip The minimal resolution of the prop $\mathcal{L}\mathit{ieb}_{c,d}$ is a free cofibrant prop $\mathcal{H}\mathit{olieb}_{c,d}$ generated by the ${\mathbb S}$-bimodule ${E}=\{{E}(m,n)\}$ with ${E}(m,n)\neq 0$ only for $m+n\geq 3$ and $m,n\geq 1$, \begin{equation}\label{2: symmetries of HoLiebcd corollas} {E}(m,n):={\mathit s \mathit g\mathit n}_m^{\otimes |c|}\otimes {\mathit s \mathit g\mathit n}_n^{\otimes |d|}[c(m-1) + d(n-1)-1]=:\text{span}\left\langle \begin{array}{c}\resizebox{17mm}{!}{\begin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-10.5mm,5.9mm>*{^{\sigma(1)}}**@{}, <0mm,0mm>*{};<-4mm,5.9mm>*{^{\sigma(2)}}**@{}, <0mm,0mm>*{};<10.0mm,5.9mm>*{^{\sigma(m)}}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-10.5mm,-6.9mm>*{^{\tau(1)}}**@{}, <0mm,0mm>*{};<-4mm,-6.9mm>*{^{\tau(2)}}**@{}, <0mm,0mm>*{};<10.0mm,-6.9mm>*{^{\tau(n)}}**@{}, \end{xy}}\end{array} =(-1)^{c|\sigma|+d|\tau|} \begin{array}{c}\resizebox{14mm}{!}{\begin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}}\end{array} \right\rangle_{ \forall \sigma\in {\mathbb S}_m, \forall\tau\in {\mathbb S}_n} \end{equation} The differential on $\mathcal{H}\mathit{olieb}_{c,d}$ is given on the generators by \begin{equation}\label{LBcd_infty} \delta \begin{array}{c}\resizebox{14mm}{!}{\begin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44
mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}}\end{array} \ \ = \ \ \sum_{[1,\ldots,m]=I_1\sqcup I_2\atop {|I_1|\geq 0, |I_2|\geq 1}} \sum_{[1,\ldots,n]=J_1\sqcup J_2\atop {|J_1|\geq 1, |J_2|\geq 1} }\hspace{0mm} {\partial}m \betagin{array}{c}\resizebox{22mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<12.4mm,4.8mm>*{}**@{-}, <0mm,0mm>*{};<-2mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<-2mm,9mm>*{^{I_1}}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<0mm,-10.6mm>*{_{J_1}}**@{}, <13mm,5mm>*{};<13mm,5mm>*{\circ}**@{}, <12.6mm,5.44mm>*{};<5mm,10mm>*{}**@{-}, <12.6mm,5.7mm>*{};<8.5mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,10mm>*{\ldots}**@{}, <13.4mm,5.7mm>*{};<16.5mm,10mm>*{}**@{-}, <13.6mm,5.44mm>*{};<20mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,12mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<13mm,14mm>*{^{I_2}}**@{}, <12.4mm,4.3mm>*{};<8mm,0mm>*{}**@{-}, <12.6mm,4.3mm>*{};<12mm,0mm>*{\ldots}**@{}, <13.4mm,4.5mm>*{};<16.5mm,0mm>*{}**@{-}, <13.6mm,4.8mm>*{};<20mm,0mm>*{}**@{-}, <13mm,5mm>*{};<14.3mm,-2mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<14.3mm,-4.5mm>*{_{J_2}}**@{}, \end{xy}}\end{array} \end{equation} where the signs on the r.h.s\ 
are uniquely fixed for $c+d\in 2{\mathbb Z}$ by the condition that they all equal $+1$ when $c$ and $d$ are even integers; for $c+d\in 2{\mathbb Z}+1$ the signs are given explicitly in \cite{Me1}. Note that the props $\mathcal{H}\mathit{olieb}_{c,d}$ and $\mathcal{H}\mathit{olieb}_{d,c}$ are canonically isomorphic to each other via reversing the flow on the generating graphs. \smallskip A representation of $\mathcal{H}\mathit{olieb}_{c,d}$ in a finite-dimensional vector space $V$ can be identified with a degree $c+d+1$ element $$ \pi\in \prod_{m,n\geq 1\atop m+n\geq 3} {\mathrm H\mathrm o\mathrm m}(\odot^n (V[d]), \odot^m (V[-c]))\subset \prod_{k\geq 0}\odot^k (V^*[-d]\oplus V[-c]) $$ of the completed graded commutative algebra on the right-hand side, equipped with the obvious Poisson type Lie bracket of degree $-c-d$. \subsection{Non-cofibrant extensions of $\mathcal{H}\mathit{olieb}_{c,d}$} Consider a dg prop $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}$ generated by the ${\mathbb S}$-bimodule $E^{\star}=\{E^\star(m,n)\}_{m,n\geq 0}$ with {\em all}\, $E^\star(m,n)$ non-zero and given by the same formula as in (\ref{2: symmetries of HoLiebcd corollas}).
The differential $\delta^{\star}$ on $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\uparrow}}}$ is given formally by the formula (\ref{LBcd_infty}) with the summation over partitions of the sets $[m]$ and $[n]$ appropriately extended, \betagin{equation}\lambdabel{2: differential in HoLBcd^star} \delta^{\star} \betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}}\end{array} \ \ = \ \ \sum_{[1,\ldots,m]=I_1\sqcup I_2\atop {|I_1|\geq 0, |I_2|\geq 0}} \sum_{[1,\ldots,n]=J_1\sqcup J_2\atop {|J_1|\geq 0, |J_2|\geq 0} }\hspace{0mm} {\partial}m \betagin{array}{c}\resizebox{22mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<12.4mm,4.8mm>*{}**@{-}, <0mm,0mm>*{};<-2mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<-2mm,9mm>*{^{I_1}}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, 
<0mm,0mm>*{};<0mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<0mm,-10.6mm>*{_{J_1}}**@{}, <13mm,5mm>*{};<13mm,5mm>*{\circ}**@{}, <12.6mm,5.44mm>*{};<5mm,10mm>*{}**@{-}, <12.6mm,5.7mm>*{};<8.5mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,10mm>*{\ldots}**@{}, <13.4mm,5.7mm>*{};<16.5mm,10mm>*{}**@{-}, <13.6mm,5.44mm>*{};<20mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,12mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<13mm,14mm>*{^{I_2}}**@{}, <12.4mm,4.3mm>*{};<8mm,0mm>*{}**@{-}, <12.6mm,4.3mm>*{};<12mm,0mm>*{\ldots}**@{}, <13.4mm,4.5mm>*{};<16.5mm,0mm>*{}**@{-}, <13.6mm,4.8mm>*{};<20mm,0mm>*{}**@{-}, <13mm,5mm>*{};<14.3mm,-2mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<14.3mm,-4.5mm>*{_{J_2}}**@{}, \end{xy}}\end{array} \end{equation} The ideal $I^0$ generated by all $(m,n)$-corollas\footnote{We often call corollas of type $(0,n)$ (resp.\, $(m,0)$) {\em sources} (resp., {\em targets}). Note that the $(0,0)$ corolla $\bullet$ is the unique generator which is both a source and a target. The $(1,1)$-corolla is often called a {\em passing vertex}.} with $m=0$ or $n=0$ is a differential ideal, and the quotient properad $\mathcal{H}\mathit{olieb}_{c,d}^{\star}/I^{0}$ is denoted by $\mathcal{H}\mathit{olieb}_{c,d}^+$ (there exists a general ``plus'' endofunctor ${\mathcal P}\rightarrow {\mathcal P}^+$ in the category of props, and the non-cofibrant prop $\mathcal{H}\mathit{olieb}_{c,d}^+$ can be understood as the result of applying that construction to $\mathcal{H}\mathit{olieb}_{c,d}$). \smallskip The dg prop $\mathcal{H}\mathit{olieb}_{c,d}^+$ contains in turn the differential ideal $I^+$ generated by the $(1,1)$-corolla, and the quotient properad is precisely $\mathcal{H}\mathit{olieb}_{c,d}$.
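The two quotient constructions just described can be summarized in a tower of dg props (a restatement of the above):
$$
\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}
\;\twoheadrightarrow\;
\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}/I^{0}=\mathcal{H}\mathit{olieb}_{c,d}^{+}
\;\twoheadrightarrow\;
\mathcal{H}\mathit{olieb}_{c,d}^{+}/I^{+}=\mathcal{H}\mathit{olieb}_{c,d}.
$$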
\subsection{Wheeled closures} We refer to \cite{Me1,MMS} for the full details of the wheelification functor, but as we work in this paper only with free props ${\mathcal P}$ generated by certain $(m,n)$-corollas, $m,n\in {\mathbb N}$, it is easy to explain what the wheeled closure ${\mathcal P}^\circlearrowright$ of ${\mathcal P}$ is: if elements of ${\mathcal P}$ are obtained in general by glueing output legs of generating corollas to input legs of other corollas in such a way that directed paths of edges in the resulting directed graph never form a cycle (a ``wheel''), then elements of ${\mathcal P}^\circlearrowright$ are constructed in the same way but with the latter restriction dropped. For example, $$ \begin{xy} <0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-}, <0.4mm,0.0mm>*{};<2.4mm,2.1mm>*{}**@{-}, <-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{}**@{-}, <0mm,-0.8mm>*{\circ};<0mm,0.8mm>*{}**@{}, <2.96mm,2.4mm>*{\circ};<2.45mm,2.35mm>*{}**@{}, <2.4mm,2.8mm>*{};<0mm,5mm>*{}**@{-}, <3.35mm,2.9mm>*{};<5.5mm,5mm>*{}**@{-}, <0mm,-1.3mm>*{};<0mm,-5.4mm>*{_1}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{\circ}**@{}, <-2.8mm,2.5mm>*{};<0mm,5mm>*{}**@{-}, <2.96mm,5mm>*{};<2.96mm,7.5mm>*{\circ}**@{}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <0.2mm,5.1mm>*{};<2.8mm,7.5mm>*{}**@{-}, <5.5mm,5mm>*{};<2.8mm,7.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,10.5mm>*{}**@{-}, <2.9mm,7.5mm>*{};<2.9mm,11.1mm>*{^1}**@{}, \end{xy}\in \mathcal{H}\mathit{olieb}_{c,d},\ \ \ \ \begin{xy} <0mm,2.47mm>*{};<0mm,-0.5mm>*{}**@{-}, <0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-}, <-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-}, <0mm,3mm>*{\circ};<0mm,3mm>*{}**@{}, <0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <0mm,-0.8mm>*{};<-2.2mm,-3.5mm>*{}**@{-}, <0mm,0mm>*{};<-2.5mm,10.1mm>*{^1}**@{}, <-2.5mm,5.7mm>*{\circ};<0mm,0mm>*{}**@{}, <-2.5mm,5.7mm>*{};<-2.5mm,9.4mm>*{}**@{-}, <-2.5mm,5.7mm>*{};<-5mm,3mm>*{}**@{-}, <-5mm,3mm>*{};<-5mm,-0.8mm>*{}**@{-}, <-2.5mm,-4.2mm>*{\circ};<0mm,3mm>*{}**@{}, <-2.8mm,-3.6mm>*{};<-5mm,-0.8mm>*{}**@{-},
<-2.5mm,-4.6mm>*{};<-2.5mm,-7.3mm>*{}**@{-}, <0mm,0mm>*{};<-2.5mm,-8.9mm>*{_1}**@{}, (0.4,3.6)*{} \ar@{->}@(ur,dr) (0.1,-0.6)*{} \end{xy}\in \mathcal{H}\mathit{olieb}_{c,d}^\circlearrowright $$ where the orientation of edges is assumed to flow from the bottom to the top unless shown explicitly. Clearly, ${\mathcal P}$ is a subprop of its wheeled closure ${\mathcal P}^\circlearrowright$. It makes sense to talk about representations of ordinary props in any vector space (finite- or infinite-dimensional), while their wheeled closures can be represented, in general, only in {\em finite-dimensional}\, vector spaces $V$, since graphs with wheels induce trace operations of the form $V\otimes {\mathrm H\mathrm o\mathrm m}(V,{\mathbb K}) \rightarrow {\mathbb K}$ which make no sense in infinite dimensions. \smallskip The wheeled closures of $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}$ and $\mathcal{H}\mathit{olieb}_{c,d}^+$ are denoted by $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}$ and $\mathcal{H}\mathit{olieb}_{c,d}^{+\circlearrowright}$ respectively. \smallskip Denote by $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ (resp., $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}$) the vertex completion of the prop $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}$ (resp., of the prop $\mathcal{H}\mathit{olieb}_{c,d}^{+\circlearrowright}$).
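The finite-dimensionality constraint can be seen already in the simplest wheel (an elementary illustration, not needed in what follows): for $V$ finite-dimensional with basis $(e_\alpha)$ and dual basis $(e^\alpha)$, closing the output of the identity $(1,1)$-corolla onto its own input evaluates to
$$
\mathrm{tr}(\mathrm{id}_V)\;=\;\sum_\alpha \langle e^\alpha, e_\alpha\rangle \;=\; \dim V,
$$
a sum with one summand per basis vector, and it is precisely this sum which diverges when $\dim V=\infty$.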
One must be careful about definitions of representations of these completed props, but for our purposes the following remark will suffice: given any representation of the prop $\swhHoLB_{1,0}$ in a finite-dimensional dg vector space $V$, $$ \rho: \mathcal{H}\mathit{olieb}_{1,0}^{_{\atop ^{\bigstar\circlearrowright}}} \longrightarrow {\mathcal E} nd_V, $$ that is, a formal Poisson structure $\pi\in {\mathcal T}_{poly}{\mathcal M}$ on $V^*$ viewed as a formal graded manifold, there is an associated {\em continuous}\, morphism of topological props $$ \hat{\rho}: \widehat{\mathcal{H}\mathit{olieb}}_{1,0}^{\atop \bigstar\circlearrowright}\longrightarrow {\mathcal E} nd_V[[\hbar]] $$ whose value on any generating corolla $e$ of $\widehat{\mathcal{H}\mathit{olieb}}_{1,0}^{\atop \bigstar\circlearrowright}$ equals $\hbar \rho(e)$; it corresponds to the formal Poisson structure $\hbar \pi\in {\mathcal T}_{poly}{\mathcal M}[[\hbar]]$. Here $\hbar$ is a formal parameter of homological degree zero (the ``Planck constant''). \smallskip \subsubsection{{\bf Proposition}} \label{2: prop on acyclicity of HoLBs+,+} {\em (i) The dg subprop of $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}$ spanned by graphs with at least one ingoing or at least one outgoing leg is acyclic, while its complement $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}(0,0)$ has non-trivial cohomology, equal to $H^\bullet(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \circlearrowright}(0,0), \delta)$. \smallskip (ii) The dg prop $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ is acyclic.} \begin{proof} Consider a filtration of the complex $(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}, \delta^+)$ by the number of vertices of valency $\geq 3$. The induced differential on the associated graded attaches the $(1,1)$-corolla to each leg.
We can consider another filtration such that the induced differential attaches $(1,1)$-corolla only to the input (or output) leg labelled by number 1. This complex is obviously acyclic. This proves the claim for the required subprop of $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}$. The second claim about non-triviality of $H^\bullet(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}(0,0), \delta^+)$ follows from the direct examples of non-trivial cohomology classes such as (see \cite{Me1}) $$ \betagin{array}{c}\resizebox{18mm}{!}{ \betagin{xy} <-5mm,5mm>*{\bulletllet}; <-5mm,5mm>*{};<-5mm,8mm>*{}**@{-}, <-5mm,5mm>*{};<-7mm,4mm>*{}**@{-}, <0mm,0mm>*{\bulletllet}; <0mm,0mm>*{};<-5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<1.5mm,1.5mm>*{}**@{-}, <0mm,0mm>*{};<1.5mm,-1.5mm>*{}**@{-}; <-5mm,-5mm>*{\bulletllet}; <-5mm,-5mm>*{};<-7mm,-2mm>*{}**@{-}, <-5mm,-5mm>*{};<-5mm,-8mm>*{}**@{-}, \ar@{->}@(ul,dl) (-5.0,8.0)*{};(-7.0,4.0)*{}, \ar@{->}@(ur,dr) (1.5,1.5)*{};(1.5,-1.5)*{}, \ar@{->}@(ul,dl) (-7.0,-2.0)*{};(-5.0,-8.0)*{}, \end{xy}}\end{array} - \betagin{array}{c}\resizebox{18mm}{!}{ \betagin{xy} <0mm,4mm>*{\bulletllet}; <0mm,4mm>*{};<0mm,8mm>*{}**@{-}, <0mm,4mm>*{};<3mm,2mm>*{}**@{-}, <0mm,-5mm>*{\bulletllet}; <0mm,-5mm>*{};<0mm,4mm>*{}**@{-}, <0mm,-5mm>*{};<2mm,-3mm>*{}**@{-}, <0mm,-5mm>*{};<0mm,-8mm>*{}**@{-}, <-5mm,-5mm>*{};<0mm,4mm>*{}**@{-}; <-5mm,-5mm>*{\bulletllet}; <-5mm,-5mm>*{};<-7mm,-2mm>*{}**@{-}, <-5mm,-5mm>*{};<-5mm,-8mm>*{}**@{-}, \ar@{->}@(ur,dr) (0,8.0)*{};(3.0,2.0)*{}, \ar@{->}@(ur,dr) (2.0,-3.0)*{};(0.0,-8.0)*{}, \ar@{->}@(ul,dl) (-7.0,-2.0)*{};(-5.0,-8.0)*{}, \end{xy}}\end{array} + \betagin{array}{c}\resizebox{18mm}{!}{ \betagin{xy} <0mm,-4mm>*{\bulletllet}; <0mm,-4mm>*{};<0mm,-8mm>*{}**@{-}, <0mm,-4mm>*{};<3mm,-2mm>*{}**@{-}, <0mm,5mm>*{\bulletllet}; <0mm,5mm>*{};<0mm,-4mm>*{}**@{-}, <0mm,5mm>*{};<2mm,3mm>*{}**@{-}, <0mm,5mm>*{};<0mm,8mm>*{}**@{-}, <-5mm,5mm>*{};<0mm,-4mm>*{}**@{-}; 
<-5mm,5mm>*{\bulletllet}; <-5mm,5mm>*{};<-7mm,2mm>*{}**@{-}, <-5mm,5mm>*{};<-5mm,8mm>*{}**@{-}, \ar@{->}@(ur,dr) (3.0,-2.0)*{};(0.0,-8.0)*{}, \ar@{->}@(ur,dr) (0.0,8.0)*{};(2.0,3.0)*{}, \ar@{->}@(ul,dl) (-5.0,8.0)*{};(-7.0,2.0)*{}, \end{xy}}\end{array} \in H(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}(0,0), \delta^+)\ \ {\mathcal O}orall c,d\in {\mathbb Z} \ {\mathbf e}xt{with} \ c+d\in 2{\mathbb Z}+1, $$ or even a simpler one $$ \betagin{array}{c}\resizebox{20mm}{!}{ \betagin{xy} <-5mm,5mm>*{\bulletllet}; <-5mm,5mm>*{};<-5mm,8mm>*{}**@{-}, <-5mm,5mm>*{};<-7mm,4mm>*{}**@{-}, <0mm,0mm>*{\bulletllet}; <0mm,0mm>*{};<-5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<1.5mm,1.5mm>*{}**@{-}, <0mm,0mm>*{};<1.5mm,-1.5mm>*{}**@{-}; \ar@{->}@(ul,dl) (-5.0,8.0)*{};(-7.0,4.0)*{}, \ar@{->}@(ur,dr) (1.5,1.5)*{};(1.5,-1.5)*{}, \end{xy}}\end{array} \in H(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}(0,0), \delta^+) \ \ {\mathcal O}orall c,d\in {\mathbb Z}. $$ It is easy to see (cf.\ \cite{Wi2}) that graphs containing passing vertices do not contribute to the cohomology so that $$ H^\bullet(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop +\circlearrowright}(0,0), \delta^+)= H^\bullet(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \circlearrowright}(0,0), \delta). $$ {\mathsf i}p Consider next a filtration of each complex $(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}(m,n), \delta^{\star})$ with $m+n\geq 1$ by the total number of vertices with no input edges or no output edges. The induced differential in the associated graded is precisely $\delta^+$ so that the argument as above proves its acyclicity. {\mathsf i}p Finally, consider the complex $(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}(0,0), \delta^{\star})$. 
Call univalent vertices and passing vertices of graphs from $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}(0,0)$ {\em stringy}\, vertices, and call {\em strings} the maximal connected subgraphs (if any) of a graph $\Gamma$ from $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}(0,0)$ which consist of stringy vertices and contain at least one univalent vertex. The vertices of $\Gamma$ which do not belong to strings are called {\em core}\, vertices. Thus strings are subgraphs or graphs of the following three types, \begin{itemize} \item[(i)] $\begin{array}{c} \resizebox{55mm}{!} { \xy (-45,1)*+{_\text{core vertex}}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", \ar @{->} "-1";"0" <0pt> \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \endxy} \end{array}$\ \text{$n\geq 0$ stringy vertices (shown as black bullets)} \item[(ii)] $\begin{array}{c} \resizebox{55mm}{!} { \xy (-45,1)*+{_\text{core vertex}}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", \ar @{<-} "-1";"0" <0pt> \ar @{<-} "0";"1" <0pt> \ar @{<-} "1";"2" <0pt> \ar @{<-} "3";"4" <0pt> \ar @{<-} "4";"5" <0pt> \endxy} \end{array}$\ \text{$n\geq 0$ stringy vertices} \item[(iii)] $\begin{array}{c} \resizebox{55mm}{!} { \xy (-40,1)*{\bullet}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*+{\bullet}="4", (10,1)*+{\bullet}="5", (20,1)*+{\bullet}="6", \ar @{->} "-1";"0" <0pt> \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \ar @{->} "5";"6" <0pt> \endxy} \end{array}$ \ \text{$n\geq 1$ stringy vertices} \end{itemize} Consider a (complete, exhaustive, bounded above) filtration of
$(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}(0,0), \delta^{\star})$ by the number of core vertices, $$ F_{-p}\ \text{is spanned by graphs whose number of core vertices is}\ \geq p. $$ The differential in the associated graded acts non-trivially only on strings of types (i) and (ii) (resp., (iii)) with an {\em even}\, (resp., {\em odd}) number of stringy vertices, by increasing that number by one. Hence the complexes $C_{(i)}$, $C_{(ii)}$ and $C_{(iii)}$ generated by strings of types (i), (ii) and (iii) respectively are acyclic. \smallskip If the set of core vertices is empty, we are in the situation of the complex $C_{(iii)}$, so that the associated cohomology vanishes. \smallskip If the set of core vertices is non-empty, then the associated graded is isomorphic to the unordered tensor product $$ \bigotimes_{v} \odot^\bullet C_{(i)}^v \otimes \odot^\bullet C_{(ii)}^v $$ over the set of core vertices of the graded symmetric tensor algebras of the acyclic complexes $C_{(i)}$ and $C_{(ii)}$, and hence is acyclic itself.
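\smallskip To make the acyclicity of the complexes $C_{(i)}$, $C_{(ii)}$ and $C_{(iii)}$ used above fully explicit, denote by $s_k$ the generator with $k$ stringy vertices of any of the three complexes. The description of the induced differential just given says that, up to inessential signs, $$ d s_{2k}=\pm s_{2k+1},\ \ d s_{2k+1}=0\ \ \text{in}\ C_{(i)}\ \text{and}\ C_{(ii)}\ \ (k\geq 0), \ \ \ \ \ \ d s_{2k+1}=\pm s_{2k+2},\ \ d s_{2k+2}=0\ \ \text{in}\ C_{(iii)}\ \ (k\geq 0), $$ so that each complex decomposes into a direct sum of two-term subcomplexes of the form ${\mathbb K}\, s \stackrel{\cong}{\rightarrow} {\mathbb K}\, ds$, each of which is manifestly acyclic.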
\end{proof} \smallskip \smallskip { \Large \section{\bf Deformation complexes of wheeled props and graph complexes} } \smallskip \subsection{Derivations of wheeled props} A wheeled prop ${\mathcal P}^\circlearrowright$ in the category of complexes is an ${\mathbb S}$-bimodule, that is, a collection $\{{\mathcal P}^\circlearrowright(m,n)\}$ of $({\mathbb S}_m)^{op}\times {\mathbb S}_n$-modules, equipped with two basic operations satisfying certain axioms (see \S 2 in \cite{MMS} for full details, or just pictures 5 and 6 in \cite{MMS}): \begin{itemize} \item[(i)] the horizontal composition (``a map from the disjoint union of two decorated corollas into a single corolla'') $$ \begin{array}{rccc} \circ_h: &{\mathcal P}^\circlearrowright(m_1,n_1) \otimes {\mathcal P}^\circlearrowright(m_2,n_2) &\longrightarrow & {\mathcal P}^\circlearrowright(m_1+m_2,n_1+n_2)\\ & a\otimes b & \longmapsto & a\circ_h b \end{array} $$ \item[(ii)] the trace operation, defined for any $m,n\geq 1$ and any $i\in [m]$, $j\in [n]$ (``gluing the $i$-th output leg to the $j$-th input leg, and then contracting the resulting internal edge''), $$ \begin{array}{rccc} Tr_j^i: &{\mathcal P}^\circlearrowright(m,n) &\longrightarrow & {\mathcal P}^\circlearrowright(m-1,n-1)\\ & a & \longmapsto & Tr_j^i(a).
\end{array} $$ \end{itemize} The Lie algebra of derivations of ${\mathcal P}^\circlearrowright$ is defined as the vector space $\mathrm{Der}({\mathcal P}^\circlearrowright)\hookrightarrow {\mathrm H\mathrm o\mathrm m}_{{\mathbb S}}({\mathcal P}^\circlearrowright,{\mathcal P}^\circlearrowright)$ of those endomorphisms $D: {\mathcal P}^\circlearrowright \longrightarrow {\mathcal P}^\circlearrowright$ of the ${\mathbb S}$-bimodule ${\mathcal P}^\circlearrowright$ which satisfy the following two conditions: (i) for any $a,b\in {\mathcal P}^\circlearrowright$ one has $$ D(a\circ_h b) = D(a)\circ_h b + (-1)^{|D||a|} a\circ_h D(b), $$ and (ii) for any $c\in {\mathcal P}^\circlearrowright(m,n)$ with $m,n\geq 1$ and any $i\in [m]$ and $j\in [n]$ $$ D(Tr_j^i(c))= Tr_j^i(D(c)). $$ If $\delta$ is a differential in the wheeled prop ${\mathcal P}^\circlearrowright$, then $\delta$ is a Maurer-Cartan element in $\mathrm{Der}({\mathcal P}^\circlearrowright)$, so that the latter also becomes a complex with the differential $d=[\delta,\ ]$; indeed, $d^2=\frac{1}{2}[[\delta,\delta],\ ]=0$. \smallskip We are interested in the complex of derivations of the {\em completed}\, (by the number of vertices) prop $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ but, abusing notation, denote it from now on by $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$ (cf.\ \cite{MW1}). Any derivation of $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ is uniquely determined by its values on the generators of the prop $\mathcal{H}\mathit{olieb}_{c,d}^{\atop \bigstar \circlearrowright}$: every element is an iterated horizontal composition and trace of generating corollas, and conditions (i) and (ii) express the value of $D$ on any such composite through its values on the factors.
Hence we have isomorphisms of graded vector spaces, \betagin{equation}\lambdabel{2: Der(Holieb++) as graphs} \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}) = {\partial}rod_{m,n\geq 0} \left(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}(m,n) \otimesimes {\mathit s \mathit g\mathit n}_m^{\otimes |c|}\otimesimes {\mathit s \mathit g\mathit n}_n^{\otimes |d|}\bar{i}ght)^{{\mathbb S}_m\times {\mathbb S}_m}[1+c(1-m)+d(1-n)]. \end{equation} Thus elements of this complex can be interpreted as directed (not necessarily connected) graphs which might have incoming or outgoing legs and wheels, for example $$ \betagin{array}{c} \resizebox{15mm}{!}{ \xy (0,0)*{\bullet}="d1", (10,0)*{\bullet}="d2", (-5,-5)*{}="dl", (5,-5)*{}="dc", (15,-5)*{}="dr", (0,10)*{\bullet}="u1", (10,10)*{\bullet}="u2", (5,15)*{}="uc", (5,15)*{}="uc", (15,15)*{}="ur", (0,15)*{}="ul", \ar @{<-} "d1";"d2" <0pt> \ar @{<-} "u1";"d1" <0pt> \ar @{->} "u1";"u2" <0pt> \ar @{<-} "u1";"d2" <0pt> \ar @{->} "u2";"d2" <0pt> \ar @{<-} "u2";"d1" <0pt> \endxy} \end{array} \hspace{-5mm} \betagin{array}{c} \resizebox{15mm}{!}{ \xy (0,0)*{\bullet}="d1", (10,0)*{\bullet}="d2", (-5,-5)*{}="dl", (5,-5)*{}="dc", (15,-5)*{}="dr", (0,10)*{\bullet}="u1", (10,10)*{\bullet}="u2", (5,15)*{}="uc", (5,15)*{}="uc", (15,15)*{}="ur", (0,15)*{}="ul", \ar @{<-} "d1";"d2" <0pt> \ar @{<-} "d2";"dc" <0pt> \ar @{<-} "d2";"dr" <0pt> \ar @{<-} "u1";"d1" <0pt> \ar @{->} "u1";"u2" <0pt> \ar @{<-} "u1";"d2" <0pt> \ar @{->} "u2";"d2" <0pt> \ar @{<-} "u2";"d1" <0pt> \ar @{<-} "uc";"u2" <0pt> \ar @{<-} "ur";"u2" <0pt> \ar @{<-} "ul";"u1" <0pt> \endxy} \end{array} \in \mathrm{Der}(\swhHoLB_{c,d}) $$ Its subcomplex spanned by {\em oriented} (i.e.\ with no wheels) directed graphs is precisely the derivation complex of $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}})$, e.g. 
$$ \betagin{array}{c} \resizebox{15mm}{!}{ \xy (0,0)*{\bullet}="d1", (10,0)*{\bullet}="d2", (-5,-5)*{}="dl", (5,-5)*{}="dc", (15,-5)*{}="dr", (0,10)*{\bullet}="u1", (10,10)*{\bullet}="u2", (5,15)*{}="uc", (5,15)*{}="uc", (15,15)*{}="ur", (0,15)*{}="ul", \ar @{<-} "d1";"d2" <0pt> \ar @{<-} "u1";"d1" <0pt> \ar @{->} "u1";"u2" <0pt> \ar @{<-} "u1";"d2" <0pt> \ar @{<-} "u2";"d2" <0pt> \ar @{<-} "u2";"d1" <0pt> \ar @{<-} "ul";"u1" <0pt> \endxy} \end{array} \in \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}) $$ Note that the outgoing or ingoing legs (if any) of these graphs are not assigned particular numerical labels; more precisely, their numerical labels are (skew)symmetrized in accordance with the parity of the integer parameters $c$ and $d$. {\mathsf i}p The Lie algebra $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$ contains a Maurer-Cartan element $$ \gamma^{\star}:=\sum_{m,n\geq 0}\sum_{[m]=I_1\sqcup I_2, [n]=J_1\sqcup J_2 \atop |I_1|, |I_2|, |J_1|, |J_2|\geq 0} \hspace{-1mm} {\partial}m \betagin{array}{c}\resizebox{22mm}{!}{ \betagin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<12.4mm,4.8mm>*{}**@{-}, <0mm,0mm>*{};<-2mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<-2mm,9mm>*{^{I_1}}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <0mm,0mm>*{};<0mm,-10.6mm>*{_{J_1}}**@{}, <13mm,5mm>*{};<13mm,5mm>*{\circ}**@{}, <12.6mm,5.44mm>*{};<5mm,10mm>*{}**@{-}, <12.6mm,5.7mm>*{};<8.5mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,10mm>*{\ldots}**@{}, <13.4mm,5.7mm>*{};<16.5mm,10mm>*{}**@{-}, 
<13.6mm,5.44mm>*{};<20mm,10mm>*{}**@{-}, <13mm,5mm>*{};<13mm,12mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<13mm,14mm>*{^{I_2}}**@{}, <12.4mm,4.3mm>*{};<8mm,0mm>*{}**@{-}, <12.6mm,4.3mm>*{};<12mm,0mm>*{\ldots}**@{}, <13.4mm,4.5mm>*{};<16.5mm,0mm>*{}**@{-}, <13.6mm,4.8mm>*{};<20mm,0mm>*{}**@{-}, <13mm,5mm>*{};<14.3mm,-2mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }}**@{}, <13mm,5mm>*{};<14.3mm,-4.5mm>*{_{J_2}}**@{}, \end{xy}}\end{array}, $$ which corresponds to the differential $\delta^{\star}$ in $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}$. Hence the differential in the complex $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{\atop \bigstar\circlearrowright})$ is given by \betagin{equation}\lambdabel{d in Der(Holieb^+)} d^{\star} \Gamma :=[\gamma^{\star},\Gamma]= \delta^{\star}\Gamma {\partial}m \sum \betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<10mm,6mm>*{}**@{-}, <0mm,0mm>*{};<12.0mm,7.5mm>*{\Gamma}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, \end{xy}}\end{array} \mp \sum \betagin{array}{c}\resizebox{14mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<-10mm,-6mm>*{}**@{-}, <0mm,0mm>*{};<-12.0mm,-7.5mm>*{\Gamma}**@{}, <-0.6mm,-0.44mm>*{};<8mm,5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, \end{xy}}\end{array} \end{equation} 
where the differential in the first term, $$ \delta^{\star}\Gamma= (-1)^{|\Gamma|}\sum_v\Gamma\circ_v \xy (0,-2)*{\bulletllet}="a", (0,2)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy, $$ acts on the vertices of $\Gamma$ by formula{\mathcal O}ootnote{That formula might be understood as a substitution into each vertex $v$ the graph $ \xy (0,-2)*{\bulletllet}="a", (0,2)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy$ and redistributing all edges of $v$ along the pair of new created vertices in all possible ways.} (\ref{d in Der(Holieb^+)}) while in the remaining two terms one attaches $(m,n+1)$-corollas and, respectively, $(m+1,n)$-corollas to each outgoing leg (if any), and, respectively each ingoing leg (if any) of $\Gamma$, and sums over all $m,n$ satisfying $m,n\geq 0$. {\mathsf i}p It is often useful (cf.\ \cite{Wi1,MW1}) to include the graph $\uparrow$ without vertices into the complex $\mathrm{Der}(\mathcal{H}\mathit{olieb}^{_{\atop ^{\bigstar\circlearrowright}}}cd)$ and set, in accordance with the above general formula for $d^\star$, $$ d^\star \uparrow \, = \sum_{m,n\geq 0}(m-n) \overbrace{ \underbrace{ \betagin{array}{c}\resizebox{6mm}{!} {\xy (0,4.5)*+{...}, (0,-4.5)*+{...}, (0,0)*{\bullet}="o", (-5,5)*{}="1", (-3,5)*{}="2", (3,5)*{}="3", (5,5)*{}="4", (-3,-5)*{}="5", (3,-5)*{}="6", (5,-5)*{}="7", (-5,-5)*{}="8", \ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} }_{n\times} }^{m\times}, $$ The derivation $d^\star \uparrow$ corresponds to the universal automorphism of {\em any}\, dg wheeled prop ${\mathcal P}^\circlearrowright$ which sends every element $a\in {\mathcal P}^\circlearrowright(m,n)$ into $\lambda^{m-n}a$ for any $\lambda\in {\mathbb K}{\mathsf e}tminus 0$. 
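\smallskip As a consistency check of the coefficient $(m-n)$, note that differentiating the above family of rescaling automorphisms at $\lambda=1$ (we write $\phi_\lambda$ for the automorphism $a\mapsto \lambda^{m-n}a$) yields the derivation $$ \left.\frac{d}{d\lambda}\right|_{\lambda=1}\phi_\lambda(a)=(m-n)\, a \ \ \ \text{for}\ a\in {\mathcal P}^\circlearrowright(m,n), $$ which multiplies every $(m,n)$-corolla by $m-n$, in agreement with the formula for $d^\star \uparrow$ above.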
\smallskip It is important to notice that the subspace $$ \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})_{conn}\subset \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}) $$ spanned by {\em connected}\, graphs is a subcomplex\footnote{The meaning of the complex $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})_{conn}$ is that it describes derivations of $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ as a {\em properad}\, rather than as a prop.}, and that there is a canonical isomorphism of complexes \begin{equation}\label{3: Der in terms of Der_connected} \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})=\left(\widehat{\odot^\bullet} \left(\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})_{conn}[-1-c-d]\right)\right)[1+c+d]. \end{equation} As the (completed) symmetric tensor product functor is exact in characteristic zero, it is enough to compute the cohomology of the subcomplex $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})_{conn}$. We do it in the next section in terms of the cohomology of certain M.\ Kontsevich graph complexes \cite{Ko}, which we recall in the next subsection. \subsection{Reminder on graph complexes} A {\em graph}\, $\Gamma$ is a 1-dimensional $CW$ complex whose 0-cells are called {\em vertices}\, and 1-cells are called {\em edges}. The set of vertices of $\Gamma$ is denoted by $V(\Gamma)$ and the set of edges by $E(\Gamma)$. A graph $\Gamma$ is called {\em directed}\, if each edge $e\in E(\Gamma)$ comes equipped with a fixed orientation. If a vertex $v$ of a directed graph has $m\geq 0$ outgoing edges and $n\geq 0$ incoming edges, then we say that $v$ is an $(m,n)$-{\em vertex}. A $(1,1)$-vertex is called {\em passing}.
\smallskip Let ${\mathsf G}_{n,l}$ be the set of directed graphs $\Gamma$ with $n$ vertices and $l$ edges such that some bijections $V(\Gamma)\rightarrow [n]$ and $E(\Gamma)\rightarrow [l]$ are fixed, i.e.\ every edge and every vertex of $\Gamma$ has a fixed numerical label. There is a natural right action of the group ${\mathbb S}_n \times {\mathbb S}_l$ on the set ${\mathsf G}_{n,l}$, with ${\mathbb S}_n$ acting by relabeling the vertices and ${\mathbb S}_l$ by relabeling the edges. Consider the graded vector space (``directed full graph complex'') $$ \mathsf{dFGC}_d= \prod_{l\geq 0}\prod_{n\geq 1} {\mathbb K} \langle {\mathsf G}_{n,l}\rangle \otimes_{{\mathbb S}_n\times {\mathbb S}_l} \left({\mathit s \mathit g\mathit n}_n^{\otimes |d|} \otimes {\mathit s \mathit g\mathit n}_l^{\otimes |d-1|}\right) [d(1-n) + l(d-1)]. $$ This space is spanned by directed graphs with no numerical labels on vertices and edges but with a choice of an {\em orientation}: for $d$ even (resp., odd) this is a choice of an ordering of edges (resp., vertices) up to an even permutation. This graded vector space has a Lie algebra structure with $$ [\Gamma_1,\Gamma_2]:= \sum_{v\in V(\Gamma_1)} \Gamma_1\circ_v \Gamma_2 - (-1)^{|\Gamma_1||\Gamma_2|} \sum_{v\in V(\Gamma_2)} \Gamma_2\circ_v \Gamma_1 $$ where $\Gamma_1\circ_v \Gamma_2$ is defined by substituting the graph $\Gamma_2$ into the vertex $v$ of $\Gamma_1$ and taking a sum over re-attachments of the dangling edges (attached earlier to $v$) to vertices of $\Gamma_2$ in all possible ways. It is easy to see that the degree $1$ graph $\xy (0,0)*{\bullet}="a", (5,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy$ in $\mathsf{dFGC}_d$ (indeed, with $n=2$ vertices and $l=1$ edges its degree is $d(n-1)+l(1-d)=1$) is a Maurer-Cartan element, so that one can make the latter into a complex with the differential $$ \delta:= [\xy (0,0)*{\bullet}="a", (5,0)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy ,\ ].
$$ The complex $\mathsf{dFGC}_d$ contains a subcomplex $\mathsf{FGC}_d^{or}$ spanned by {\em oriented graphs}, that is, graphs with no closed paths of directed edges (``wheels''). \smallskip One can define an {\em undirected}\, full graph complex as $$ \mathsf{FGC}_d= \prod_{l\geq 0}\prod_{n\geq 1} {\mathbb K} \langle {\mathsf G}_{n,l}\rangle \otimes_{{\mathbb S}_n\times ({\mathbb S}_l \ltimes ({\mathbb S}_2)^l)} \left({\mathit s \mathit g\mathit n}_n^{\otimes |d|} \otimes ({\mathit s \mathit g\mathit n}_l^{\otimes |d-1|}\otimes ({\mathit s \mathit g\mathit n}_2^{\otimes |d|})^{\otimes l})\right) [d(1-n) + l(d-1)] $$ where the group $({\mathbb S}_2)^l$ acts on edges by reversing their directions. This graph complex is spanned by graphs with the directions on edges forgotten for $d$ even, and fixed up to a flip and multiplication by $(-1)$ for $d$ odd. \smallskip These dg Lie algebras contain dg subalgebras $\mathsf{dcGC}_{d}\subset \mathsf{dFGC}_d$, $\mathsf{cGC}_d^{or}\subset \mathsf{FGC}_d^{or}$ and $\mathsf{cGC}_d\subset \mathsf{FGC}_d$ spanned by {\em connected}\, graphs, which in turn contain dg Lie subalgebras $\mathsf{dcGC}_d^{\geq 2}$, $\mathsf{GC}_d^{or,\geq 2}$ and, respectively, $\mathsf{GC}_d^{\geq 2}$ spanned by graphs with all vertices of valency $\geq 2$. The dg Lie algebras $\mathsf{dcGC}_d^{\geq 2}$ and $\mathsf{GC}_d^{or,\geq 2}$ (resp., $\mathsf{GC}_d^{\geq 2}$) contain in turn dg Lie subalgebras $\mathsf{dGC}_d$ and $\mathsf{GC}_d^{or}$ spanned by graphs with no passing vertices (resp., $\mathsf{GC}_d$ spanned by graphs with all vertices at least trivalent). The canonical inclusion maps $$ \mathsf{dGC}_d \longrightarrow \mathsf{dcGC}_d^{\geq 2} \longrightarrow \mathsf{dcGC}_d, \ \ \mathsf{GC}_d^{or} \longrightarrow \mathsf{GC}_d^{or,\geq 2}\longrightarrow \mathsf{cGC}^{or}_d $$ are all quasi-isomorphisms \cite{Wi1,Wi2}.
There is also a canonical morphism of dg Lie algebras \begin{equation}\label{3: from GC to dGC} \mathsf{GC}^{\geq 2}_2\longrightarrow \mathsf{dcGC}_2, \end{equation} which sends a graph with no directions on edges into the sum of graphs with all possible directions on edges; it is also a quasi-isomorphism \cite{Wi1}. It was proven in \cite{Wi1,Wi2} that $$ H^\bullet(\mathsf{GC}_{d}^{\geq 2}) =H^\bullet(\mathsf{GC}_{d+1}^{or}) $$ and that $$ H^0(\mathsf{dGC}_2)=H^0(\mathsf{GC}_2^{\geq 2})=H^0(\mathsf{GC}_3^{or})=\mathfrak{grt}_1, $$ where $\mathfrak{grt}_1$ is the Lie algebra of the Grothendieck-Teichm\"uller group $GRT_1$. It is easy to see that $H^0(\mathsf{GC}_2^{or})=0$ and $H^0(\mathsf{GC}_3^{\geq 2})=0$. \smallskip One has canonical monomorphisms of complexes $$ \odot^{\bullet} \left(\mathsf{dGC}_d [-d]\right)[d] \rightarrow \mathsf{dFGC}_d,\ \ \ \odot^{\bullet} \left(\mathsf{GC}^{\geq 2}_d [-d]\right)[d]\rightarrow \mathsf{FGC}_d, \ \ \ \odot^{\bullet}\left(\mathsf{GC}^{or}_d [-d]\right)[d]\rightarrow \mathsf{FGC}^{or}_d $$ which are quasi-isomorphisms. Hence it is enough to study only the connected graph complexes. \smallskip It is often useful \cite{MW1,MW3} to consider slightly extended dg Lie algebras, \begin{equation}\label{3: extended GC} \mathsf{dGC}_d\oplus {\mathbb K} ,\ \ \ \mathsf{GC}_d^{\geq 2}\oplus {\mathbb K} ,\ \ \ \mathsf{GC}_d^{or}\oplus {\mathbb K} \end{equation} where the summand ${\mathbb K}$ is generated by an additional element $\emptyset$ concentrated in degree zero, ``a graph with no vertices and edges'', whose Lie bracket $[\emptyset, \Gamma]$ with an element $\Gamma$ of $\mathsf{dGC}_d$, $\mathsf{GC}_{d}^{\geq 2}$ or $\mathsf{GC}_{d}^{or}$ is defined as the multiplication of $\Gamma$ by twice the number of its loops (in particular, $\emptyset$ is a cycle with respect to the differential $\delta$).
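\smallskip For instance, as the graphs in question are connected, a graph $\Gamma$ with $n$ vertices and $l$ edges has precisely $l-n+1$ loops, so that $$ [\emptyset, \Gamma]= 2(l-n+1)\, \Gamma; $$ e.g.\ for the theta graph $\Theta$ with two vertices and three edges one gets $[\emptyset,\Theta]=4\,\Theta$.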
In this case the zero-th cohomology groups of the first two of these extended complexes for $d=2$ and, respectively, of the last complex for $d=3$ are all equal to the Lie algebra $\mathfrak{grt}$ of the ``full'' Grothendieck-Teichm\"uller group, rather than to its reduced version $\mathfrak{grt}_1$. This very useful fact prompts us to define the {\em full graph complexes}\, of not necessarily connected graphs as the completed graded symmetric tensor algebras (\ref{1: fGC in terms of GC}) and (\ref{1: fGC^or in terms of GC^0r}). { \Large \section{\bf Cohomology of the derivation complex of $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}$} } \smallskip \subsection{From directed graph complex to the complex of properadic derivations} Following \cite{MW1} one notices that there is a natural right action of the dg Lie algebra $\mathsf{dcGC}_{c+d+1}$ on the dg wheeled {\em properad}\, $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ by properadic derivations, i.e.\ there is a canonical morphism of dg Lie algebras, \begin{equation}\label{3: Morhism F from dcGC to Der^++} \begin{array}{rccc} F^{\star}\colon & \mathsf{dcGC}_{c+d+1} &\to & \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})_{conn}\\ & \Gamma & \longmapsto & F^{\star}(\Gamma) \end{array} \end{equation} where the derivation $F^{\star}(\Gamma)$ has, by definition, the following values on the generators of the completed properad $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop\bigstar\circlearrowright}$ \begin{equation} \label{5:derivation Fstar(Ga)} \left(\begin{array}{c}\resizebox{12mm}{!}{\begin{xy} <0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, <-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-}, <-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{}, <0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-}, <0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{}, <-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-}, <-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{}, <0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-}, <0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{}, <0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{}, <0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{}, \end{xy}}\end{array}\bar{i}ght)\cdot F^{\star}(\Gamma) = \sum_{s:[n]\bar{i}ghtarrow V(\Gamma)\atop \hat{s}:[m]\bar{i}ghtarrow V(\Gamma)} \betagin{array}{c}\resizebox{9mm}{!} {\xy (-6,7)*{^1}, (-3,7)*{^2}, (2.5,7)*{}, (7,7)*{^m}, (-3,-8)*{_2}, (3,-6)*{}, (7,-8)*{_n}, (-6,-8)*{_1}, (0,4.5)*+{...}, (0,-4.5)*+{...}, (0,0)*+{\Gamma}="o", (-6,6)*{}="1", (-3,6)*{}="2", (3,6)*{}="3", (6,6)*{}="4", (-3,-6)*{}="5", (3,-6)*{}="6", (6,-6)*{}="7", (-6,-6)*{}="8", \ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array}\ \ \ \ \ \ {\mathcal O}orall\ m,n\geq 0, \end{equation} with the sum being taken over all ways of attaching the incoming and outgoing legs to the graph $\Gamma$. The image $$ \delta^{\star}:=F^{\star}\left(\xy (0,-2)*{\bulletllet}="a", (0,2)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy\bar{i}ght) $$ gives us the standard differential (\ref{2: differential in HoLBcd^star}) in $\mathcal{H}\mathit{olieb}_{c,d}^{\star\circlearrowright}$. The monomorphism $\mathsf{dGC}_{c+d+1}\hookrightarrow \mathsf{dcGC}_{c+d+1}$ is a quasi-isomorphism so that, from the cohomological viewpoint, it is enough to study the restriction of the above map to the dg Lie subalgebra $\mathsf{dGC}_{c+d+1}$ (we denote this restriction by the same symbol). 
\subsubsection{\bf Theorem}\lambdabel{5: Theorem on qis to Def(Holieb star_wheeled)} {\em For any $c,d\in {\mathbb Z}$ the morphism of dg Lie algebras \betagin{equation}\lambdabel{5: F^star from dGC to Der HoLB^star} F^\star : \mathsf{dGC}_{c+d+1} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}^{_{\atop ^{\bigstar\circlearrowright}}}cd)_{conn} \end{equation} is a quasi-isomorphism up to one rescaling class represented by the series $$ r^\star= \sum_{m,n\geq 0}(m+n-2) \overbrace{ \underbrace{ \betagin{array}{c}\resizebox{6mm}{!} {\xy (0,4.5)*+{...}, (0,-4.5)*+{...}, (0,0)*{\bullet}="o", (-5,5)*{}="1", (-3,5)*{}="2", (3,5)*{}="3", (5,5)*{}="4", (-3,-5)*{}="5", (3,-5)*{}="6", (5,-5)*{}="7", (-5,-5)*{}="8", \ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} }_{n\times} }^{m\times}. $$ } \betagin{proof} For a graph $\Gamma$ in $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$ let $V^{\lhd 2}(\Gamma)\subset V(\Gamma)$ be the subset of univalent vertices and passing vertices, and let $V^{\unrhd 2}(\Gamma)$ be its complement, i.e.\ the subset of non-passing vertices of valency $\geq 2$ of $\Gamma$. Consider the following filtration of the complex $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$, \betagin{equation}\lambdabel{5: filtration od Der by number of geq 2 nonpass vertices} F_{-p}(\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})):={\mathbf e}xt{linear span of graphs $\Gamma$ with}\ \# V^{\unrhd 2}(\Gamma) \geq p. 
\end{equation} For a graph $\Gamma\in \mathsf{dGC}_{c+d+1}$ one has $V(\Gamma)= V^{\unrhd 2}(\Gamma)$ so that an analogous filtration of the l.h.s.\ in (\ref{3: Morhism F from dcGC to Der^++}) takes the form \betagin{equation}\lambdabel{5: filtration ofdGC by number of vertices} F_{-p}(\mathrm{Der}(\mathsf{dGC}_{c+d+1})):={\mathbf e}xt{linear span of graphs $\Gamma$ with}\ \#V(\Gamma) \geq p. \end{equation} The morphism (\ref{3: Morhism F from dcGC to Der^++}) respects these filtrations and hence induces the morphism of the associated graded complexes (all denoted by the same letters), \betagin{equation}\lambdabel{5: gr F} {\mathcal F}^\star: (\mathsf{dcGC}_{c+d+1}, 0) \to (\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}),\hat{d}) \end{equation} where the induced differential in the l.h.s.\ is trivial while the induced differential in the r.h.s.\ is given by $$ \hat{d}\Gamma=\hat{\delta}\Gamma \ \ {\partial}m \sum_{{\mathbf e}xt{in-legs}\atop {\mathbf e}xt{of}\ \Gamma}\left( \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,7.5mm>*{\Gamma}; <0.0mm,0.44mm>*{};<0mm,5.5mm>*{}**@{-}, \end{xy}}\end{array} {\partial}m \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,7.5mm>*{\Gamma}; <0.0mm,0.44mm>*{};<0mm,5.5mm>*{}**@{-}, <0.0mm,0.0mm>*{};<0mm,-5.5mm>*{}**@{-}, \end{xy}}\end{array} \bar{i}ght) {\partial}m\sum_{{\mathbf e}xt{out-legs}\atop {\mathbf e}xt{of}\ \Gamma}\left( \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,-7.5mm>*{\Gamma}; <0.0mm,-0.44mm>*{};<0mm,-5.5mm>*{}**@{-}, \end{xy}}\end{array} {\partial}m \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,-7.5mm>*{\Gamma}; <0.0mm,-0.44mm>*{};<0mm,-5.5mm>*{}**@{-}, <0.0mm,0.0mm>*{};<0mm,5.5mm>*{}**@{-}, \end{xy}}\end{array} \bar{i}ght) $$ where $$ \hat{\delta}\Gamma=\delta^\star \Gamma \bmod {\mathbf e}xt{terms creating new $(m,n)$-corollas with $m\geq 2$ or 
$n\geq 2$}. $$ Let us call the ingoing or outgoing legs (if any) of graphs from (\ref{2: Der(Holieb++) as graphs}) {\em hairs}\, and consider the following complete, exhaustive and bounded above filtration of both sides of the arrow in (\ref{5: gr F}), \begin{equation}\label{5: filtration by hairs + s + t} F'_{-p}(\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})):=\text{span of graphs with $\#$hairs + $\#$univalent sources + $\#$univalent targets}\ \geq p, \end{equation} and $$ F'_{-p}(\mathsf{dGC}_{c+d+1}):=\left\{\begin{array}{cl} \mathsf{dGC}_{c+d+1} & \text{for}\ p\leq 0\\ 0 & \text{for}\ p\geq 1. \end{array} \right. $$ Note that the unique graph $\bullet$ consisting of the zero-valent vertex is counted twice --- once as a source and once as a target --- so that $\bullet$ belongs to $F'_{-2}(\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}))$; similarly, the derivation $\uparrow$ (see \S 3.1) is assumed by definition to have two hairs and hence also belongs to $F'_{-2}(\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}))$.
The map ${\mathcal F}^\star$ respects both filtrations and hence induces a morphism (denoted by the same letter again) of the associated graded complexes \betagin{equation}\lambdabel{5: gr F^st} {\mathcal F}^\star: (\mathsf{dcGC}_{c+d+1}, 0) \to (\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}), d_0)=:(C,d_{0}) \end{equation} where the induced differential $d_0$ is given on two exceptional graphs by $$ d_0\, \bullet=\betagin{array}{c}\resizebox{1.4mm}{!} {\xy (0,-2.5)*{\bullet}="a", (0,2.5)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array} \ , \ \ \ d_0\uparrow \, = \betagin{array}{c}\resizebox{2.3mm}{!} { \xy (0,-2.5)*{\bullet}="a", (0,2.5)*{}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array} - \betagin{array}{c}\resizebox{1.44mm}{!} {\xy (0,-2.5)*{}="a", (0,2.5)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array} $$ and on all other graphs by the formula \betagin{equation}\lambdabel{5: differential d_0} {d}_0\Gamma={\delta}^+\Gamma \ \ {\partial}m \sum_{{\mathbf e}xt{in-legs}\atop {\mathbf e}xt{of}\ \Gamma}\left( \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,7.5mm>*{\Gamma}; <0.0mm,0.44mm>*{};<0mm,5.5mm>*{}**@{-}, \end{xy}}\end{array} {\partial}m \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,7.5mm>*{\Gamma}; <0.0mm,0.44mm>*{};<0mm,5.5mm>*{}**@{-}, <0.0mm,0.0mm>*{};<0mm,-5.5mm>*{}**@{-}, \end{xy}}\end{array} \bar{i}ght) {\partial}m\sum_{{\mathbf e}xt{out-legs}\atop {\mathbf e}xt{of}\ \Gamma}\left( \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,-7.5mm>*{\Gamma}; <0.0mm,-0.44mm>*{};<0mm,-5.5mm>*{}**@{-}, \end{xy}}\end{array} {\partial}m \betagin{array}{c}\resizebox{1.5mm}{!}{\betagin{xy} <0mm,0mm>*{\bullet}; <0.0mm,-7.5mm>*{\Gamma}; <0.0mm,-0.44mm>*{};<0mm,-5.5mm>*{}**@{-}, <0.0mm,0.0mm>*{};<0mm,5.5mm>*{}**@{-}, \end{xy}}\end{array} \bar{i}ght) \end{equation} where \betagin{equation}rn \delta^+\Gamma &:=& 
\hat{\delta}\Gamma \ \text{modulo terms creating new univalent vertices}. \end{equation}rn By analogy with the proof of Proposition {\ref{2: prop on acyclicity of HoLBs+,+}}, let us call the univalent vertices and the passing bivalent vertices of graphs $\Gamma$ from $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$ {\em stringy}; the maximal connected subgraphs (if any) of a graph $\Gamma$ consisting of stringy vertices with at least one univalent vertex or with at least one hair are called {\em strings}. Let us call the non-passing vertices of valency $\geq 2$ which do not belong to the strings (if any) the {\em core vertices}, and let $\Gamma^{core}$ be the full subgraph of $\Gamma$ spanned by the core vertices; in principle any graph from the set of generators of $\mathsf{dcGC}_{c+d+1}$ can occur as the core graph $\Gamma^{core}$ of some graph $\Gamma\in \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$. A string is a subgraph (if any) of $\Gamma$ of one of the following nine types (we classify the unique graph in $\mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})$ consisting of the zero-valent vertex $\bullet$ as well as the graph with no vertices $\uparrow$ as {\em strings}\, as well --- they correspond to the elements $\alpha^\bullet_1$ and $\alpha^{\uparrow\downarrow}_0$ listed below), \begin{align*} \alpha^\bullet _n &\simeq \begin{array}{c} \resizebox{55mm}{!} { \xy (-40,1)*{\bullet}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", (20,1)*{\bullet}="6", \ar @{->} "-1";"0" <0pt> \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \ar @{->} "5";"6" <0pt> \endxy} \end{array} \ \text{$n\geq 1$ stringy vertices}\\ \alpha^{\uparrow}_n &\simeq \begin{array}{c} \resizebox{55mm}{!}
{ \xy (-40,1)*{\bullet}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", (20,1)*{}="6", \ar @{->} "-1";"0" <0pt> \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \ar @{->} "5";"6" <0pt> \endxy} \end{array} \ {\mathbf e}xt{$n\geq 1$ stringy vertices}\\ \alpha^{\downarrow}_n & {\mathsf i}meq \betagin{array}{c} \resizebox{55mm}{!} { \xy (-40,1)*{}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", (20,1)*{\bullet}="6", \ar @{->} "-1";"0" <0pt> \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \ar @{->} "5";"6" <0pt> \endxy} \end{array} \ {\mathbf e}xt{$n\geq 1$ stringy vertices}\\ \alpha^{\uparrow\downarrow}_n &{\mathsf i}meq \betagin{array}{c} \resizebox{55mm}{!} { \xy (-40,1)*{}="-1", (-30,1)*{\bullet}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", (20,1)*{}="6", \ar @{->} "-1";"0" <0pt> \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \ar @{->} "5";"6" <0pt> \endxy} \end{array} \ {\mathbf e}xt{$n\geq 0$ stringy vertices} \\ &\\ \beta^{\bullet,\uparrow}_n & {\mathsf i}meq \betagin{array}{c} \resizebox{35mm}{!} { \xy (-30,1)*+{_v}*\cir{}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \endxy} \end{array}\ {\mathbf e}xt{$n\geq 1$ stringy vertices (shown as black bullets)} \\ \beta^{\uparrow}_n & {\mathsf i}meq \betagin{array}{c} \resizebox{35mm}{!} { \xy (-30,1)*+{_v}*\cir{}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{}="5", \ar @{->} "0";"1" <0pt> 
\ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \endxy} \end{array}\ \text{$n\geq 0$ stringy vertices} \\ \beta^{\bullet,\downarrow}_n & \simeq \begin{array}{c} \resizebox{35mm}{!} { \xy (-30,1)*+{_v}*\cir{}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{\bullet}="5", \ar @{<-} "0";"1" <0pt> \ar @{<-} "1";"2" <0pt> \ar @{<-} "3";"4" <0pt> \ar @{<-} "4";"5" <0pt> \endxy} \end{array}\ \text{$n\geq 1$ stringy vertices} \\ \beta^{\downarrow}_n & \simeq \begin{array}{c} \resizebox{35mm}{!} { \xy (-30,1)*+{_v}*\cir{}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*{}="5", \ar @{<-} "0";"1" <0pt> \ar @{<-} "1";"2" <0pt> \ar @{<-} "3";"4" <0pt> \ar @{<-} "4";"5" <0pt> \endxy} \end{array}\ \text{$n\geq 0$ stringy vertices} & \\ & \\ \gamma_n & \simeq \begin{array}{c} \resizebox{35mm}{!} { \xy (-30,1)*+{_v}*\cir{}="0", (-20,1)*{\bullet}="1", (-13,1)*{}="2", (-10,1)*{\ldots}, (-7,1)*{}="3", (0,1)*{\bullet}="4", (10,1)*+{_w}*\cir{}="5", \ar @{->} "0";"1" <0pt> \ar @{->} "1";"2" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"5" <0pt> \endxy} \end{array}\ \text{$n\geq 1$ passing vertices (shown as black bullets)} \end{align*} where $v$ and $w$ stand for any pair of (not necessarily distinct) {\em core}\, vertices. Note that $\beta^{\bullet, \uparrow}_0\equiv \beta^{\bullet, \downarrow}_0$ stand for one and the same element --- a core vertex $v$ with no strings attached. \smallskip The associated graded complex $C$ in the r.h.s.\ of (\ref{5: gr F^st}) splits into a direct sum \begin{equation}\label{5: C = C_empty core + C_non-empty core} C=C_{\text{empty core}}\ \ \ \oplus\ \ \ C_{\text{non-empty core}} \end{equation} where the first (resp., second) summand is spanned by graphs $\Gamma$ with the set $V(\Gamma^{core})$ empty (resp., non-empty).
Thus $$ C_{\text{empty core}}:= \text{span}\left\langle \alpha_{n}^{\bullet}\ , \alpha_{n}^{\uparrow}\ , \alpha_{n}^{\downarrow}\ , \alpha_{n}^{\uparrow\downarrow}\ , \uparrow\right\rangle_{n\geq 1} $$ with the induced differential $d_0$ given on the generators by $$ d_0 \alpha_n^{\bullet}= \left\{\begin{array}{ll} 0 & \text{if $n$ is even}\\ \pm \alpha_{n+1}^{\bullet} & \text{if $n$ is odd} \end{array}\right.\ \ \ , \ \ \ d_0 \alpha_n^{\uparrow\downarrow}= \left\{\begin{array}{ll} \pm\alpha_{n+1}^{\uparrow} \pm\alpha_{n+1}^{\downarrow} \pm\alpha_{n+1}^{\uparrow\downarrow} & \text{if $n$ is odd}\\ \pm\alpha_{n+1}^{\uparrow} \pm\alpha_{n+1}^{\downarrow} & \text{if $n$ is even}\ \end{array}\right. $$ $$ d_0 \alpha_n^{\uparrow}= \left\{\begin{array}{ll} \pm\alpha_{n+1}^{\bullet} & \text{if $n$ is odd}\\ \pm\alpha_{n+1}^{\bullet} \pm \alpha_{n+1}^{\uparrow} & \text{if $n$ is even}\ \end{array}\right. , \ \ \ d_0 \alpha_n^{\downarrow}= \left\{\begin{array}{ll} \pm\alpha_{n+1}^{\bullet} & \text{if $n$ is odd}\\ \pm\alpha_{n+1}^{\bullet} \pm \alpha_{n+1}^{\downarrow} & \text{if $n$ is even}\ \end{array}\right.
$$ It is easy to see that the cohomology of this complex is one-dimensional and is spanned by the sum of the two cycles \begin{equation}\label{5: graded ass of rescaling class} \left(\bullet + \begin{array}{c}\resizebox{2mm}{!} { \xy (0,-2.5)*{\bullet}="a", (0,2.5)*{}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array}\right) + \left(\bullet + \begin{array}{c}\resizebox{1.24mm}{!} {\xy (0,-2.5)*{}="a", (0,2.5)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array}\right)= 2\bullet + \begin{array}{c}\resizebox{2mm}{!} { \xy (0,-2.5)*{\bullet}="a", (0,2.5)*{}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array} + \begin{array}{c}\resizebox{1.24mm}{!} {\xy (0,-2.5)*{}="a", (0,2.5)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array} \end{equation} (whose difference is a coboundary as $d_0 \uparrow= \begin{array}{c}\resizebox{2mm}{!} { \xy (0,-2.5)*{\bullet}="a", (0,2.5)*{}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array} - \begin{array}{c}\resizebox{1.24mm}{!} {\xy (0,-2.5)*{}="a", (0,2.5)*{\bullet}="b", \ar @{->} "a";"b" <0pt> \endxy}\end{array}$). This is precisely the representative of the rescaling class $r^\star$. \smallskip Consider next the second complex $(C_{\text{non-empty core}}, d_0)$.
It decomposes into the completed direct sum (parameterized by arbitrary graphs $\Gamma^{core}$ from $\mathsf{dcGC}_{c+d+1}$) of the tensor products of complexes $$ C_{\text{non-empty core}}\simeq \prod_{\Gamma^{core}}C_{\Gamma^{core}},\ \ \ \ C_{\Gamma^{core}}:= \bigotimes_{v\in V(\Gamma^{core})} X_v \bigotimes_{e\in E(\Gamma^{core})} X_e $$ where \begin{itemize} \item for each edge $e$, $ X_{e}:= {\mathbb K}[0] \oplus \text{span} \left\langle \gamma_n\right\rangle_{n\geq 1} $ with differential given on generators by $d(1\in {\mathbb K}[0])=0$ and $d\gamma_n=\pm \gamma_{n+1}$ so that $H^\bullet(X_e)={\mathbb K}[0]$; hence the factors $X_e$ can be ignored in the above formula for the complex $C_{\Gamma^{core}}$; \item the complexes $X_v$ can be different for different core vertices $v$ but their classification is rather simple and is discussed next. \end{itemize} \smallskip For each core vertex $v$ consider two complexes, $$ C_v^\uparrow:=\text{span}\left\langle \beta_{n}^{\bullet, \uparrow}\ , \beta_{m}^{\uparrow}\ \right\rangle_{n\geq 1,m\geq 0}, \ \ \ \ C_v^\downarrow:=\text{span}\left\langle \beta_{n}^{\bullet, \downarrow}\ , \ \beta_{m}^{\downarrow}\ \right\rangle_{n\geq 1,m\geq 0} $$ equipped with the differentials given by $$ d_v \beta_n^{\bullet, \uparrow}= \left\{\begin{array}{ll} \pm \beta_{n+1}^{\bullet, \uparrow} & \text{if $n$ is even}\\ 0 & \text{if $n$ is odd}\ \end{array}\right.\ \ , \ \ d_v \beta_n^{\uparrow}= \left\{\begin{array}{ll} \pm\beta_{n+1}^{\bullet, \uparrow} & \text{if $n$ is even}\\ \pm\beta_{n+1}^{\bullet, \uparrow} \pm \beta_{n+1}^{\uparrow} & \text{if $n$ is odd}\ \end{array}\right.
$$ $$ d_v \beta_n^{\bullet, \downarrow}= \left\{\begin{array}{ll} \pm \beta_{n+1}^{\bullet, \downarrow} & \text{if $n$ is even}\\ 0 & \text{if $n$ is odd}\ \end{array}\right. , \ \ \ d_v \beta_n^{\downarrow}= \left\{\begin{array}{ll} \pm\beta_{n+1}^{\bullet,\downarrow} & \text{if $n$ is even}\\ \pm\beta_{n+1}^{\bullet,\downarrow} \pm \beta_{n+1}^{\downarrow} & \text{if $n$ is odd}\ \end{array}\right. $$ It is easy to see that both complexes $C_v^\uparrow$ and $C_v^\downarrow$ are acyclic. Indeed, consider a filtration of, say, the complex $C_v^\uparrow$ by the number of vertices of the form $\beta^{\bullet, \uparrow}_n$; the cohomology of the associated graded complex is 2-dimensional and is spanned by $\beta^{\bullet, \uparrow}_1$ and $\beta^{\uparrow}_0$ with the induced differential given on the generators by the isomorphism $\beta^{\uparrow}_0\rightarrow \beta^{\bullet, \uparrow}_1$. Next we have to consider several types of non-empty core graphs. \smallskip {\sc Case 1}: the case $\Gamma^{core}=\bullet$, the single vertex without any edges. In this case $$ C_{\Gamma^{core}}=\prod_{p+q\geq 3} {\odot}^p C_v^\uparrow \otimes \odot^q C_v^\downarrow \ \ \oplus\ \ \ \odot^2 C_v^\uparrow \ \ \ \oplus \ \ \ \odot^2 C_v^\downarrow. $$ Due to the acyclicity of the complexes $C_v^\downarrow$ and $C_v^\uparrow$ and exactness of the (symmetric) tensor product functor, we conclude that $H^\bullet(C_{\Gamma^{core}})=0$ in this case. \smallskip {\sc Case 2}: the core graph $\Gamma^{core}$ contains at least one vertex $v$ which either has valency one or is a passing\footnote{Note that a passing vertex in $\Gamma^{core}$ can {\em not}\, be passing in $\Gamma$.} vertex.
Then $C_{\Gamma^{core}}$ has the following tensor factor $$ X_v=\left\{ \begin{array}{ll}\prod_{p+q\geq 2} \odot^p C_v^\uparrow \otimes \odot^q C_v^\downarrow \ \oplus\ C_v^\uparrow & \text{if}\ |v|=|v|_{out}=1\\ \prod_{p+q\geq 2} \odot^p C_v^\uparrow \otimes \odot^q C_v^\downarrow \ \oplus\ C_v^{\downarrow} & \text{if}\ |v|=|v|_{in}=1\\ \prod_{p+q\geq 1} \odot^p C_v^\uparrow \otimes \odot^q C_v^\downarrow & \text{if}\ |v|=2, |v|_{in}=|v|_{out}=1 \end{array} \right. $$ which is in all cases acyclic, $H^\bullet(X_v)=0$, so that $H^\bullet(C_{\Gamma^{core}})=0$. \smallskip Thus we conclude that only generators $\Gamma^{core}$ of the subspace $\mathsf{dGC}_{c+d+1}$ can contribute to $H^\bullet(C_{\text{non-empty core}})$. \smallskip {\sc Case 3}: Consider finally the case when $\Gamma^{core}$ is a generator of $\mathsf{dGC}_{c+d+1}$. Then $$ C_{\Gamma^{core}}:= \bigotimes_{v\in V(\Gamma^{core})}\hspace{-2mm} X_v \bigotimes_{e\in E(\Gamma^{core})} \hspace{-2mm} X_e\ \ \ \text{with}\ \ X_v=\prod_{p,q\geq 0} \odot^p C_v^\uparrow \otimes \odot^q C_v^\downarrow \ \ \ \text{and} \ \ X_{e}:= {\mathbb K}[0] \oplus \text{span} \left\langle \gamma_n\right\rangle_{n\geq 1}. $$ We conclude that for each $v\in V(\Gamma^{core})$ (resp., each edge $e\in E(\Gamma^{core})$) the associated cohomology group $H^\bullet(X_v)$ (resp., $H^\bullet(X_e)$) is concentrated in degree zero and is equal to ${\mathbb K}$ so that $H^\bullet(C_{\Gamma^{core}})=\text{span}\left\langle\Gamma^{core}\right\rangle$ and hence $$ H^\bullet(C_{\text{non-empty core}})\simeq \mathsf{dGC}_{c+d+1}. $$ Moreover, this isomorphism holds true at the level of complexes when turning the page of our spectral sequence.
By the spectral sequence comparison theorem we conclude that the map of the original complexes (\ref{5: F^star from dGC to Der HoLB^star}) is a quasi-isomorphism up to one rescaling class $r^\star$. \end{proof} \subsubsection{\bf Remarks} (i) In terms of representations of the prop $\swhHoLB_{0,1}$, that is, in terms of formal Poisson structures, the rescaling class $r^\star$ corresponds to the following universal automorphism $$ \pi=\sum_{m,n\geq 0} \pi_n^m \longrightarrow \pi^{new}=\sum_{m,n\geq 0} \lambda^{m+n-2} \pi_n^m, \ \ \ \forall \ \lambda\in {\mathbb K}^\star, $$ of the set of formal (finite- or infinite-dimensional) Poisson structures. (ii) In terms of the extended graph complexes (\ref{3: extended GC}) the above result can be re-written as a quasi-isomorphism of dg Lie algebras $$ \mathsf{dGC}_{c+d+1}\oplus {\mathbb K} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}})_{conn} $$ where the generator $\emptyset$ of ${\mathbb K}$ is mapped into the rescaling class. Note that the l.h.s.\, is {\em not}\, a direct sum of Lie algebras, only of graded vector spaces. (iii) Composing the quasi-isomorphism (\ref{3: from GC to dGC}) with the quasi-isomorphism (\ref{3: Morhism F from dcGC to Der^++}) and using equalities (\ref{1: fGC in terms of GC}) and (\ref{3: Der in terms of Der_connected}) we obtain a canonical quasi-isomorphism of dg Lie algebras $$ F^\circlearrowright: \mathsf{fGC}^{\geq 2}_{c+d+1} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar\circlearrowright}}}) $$ and hence prove the first part of Proposition 1.1.1. Similarly one can study the deformation theory of the ordinary (non-wheeled) properad $\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}$ and obtain the following result.
\subsubsection{\bf Proposition}\label{5: F from GC^0r to Der(HoLB^star)} {\em There is a quasi-isomorphism of dg Lie algebras} $$ F: \mathsf{fGC}_{c+d+1}^{or} \longrightarrow \mathrm{Der}(\mathcal{H}\mathit{olieb}_{c,d}^{_{\atop ^{\bigstar}}}) $$ We skip the details, which are identical to the arguments used in the proof of Theorem 4.1.1. {\Large \section{\bf Classification of universal quantizations \\ of Poisson structures} } \subsection{Polydifferential functor on wheeled props} There is a polydifferential functor\footnote{In fact in the subsequent paper \cite{MW3} the functor ${\mathcal O}$ was further extended to a {\em polydifferential endofunctor}\, ${\mathcal D}$ in the category of (wheeled) props such that ${\mathcal O}$ is the operadic part of ${\mathcal D}$.} \cite{MW2} $$ {\mathcal O}: \text{\sf Category of dg props} \longrightarrow \text{\sf Category of dg operads} $$ which extends verbatim (on the l.h.s.) to the category of dg wheeled props and has the property that for any dg (wheeled) prop ${\mathcal P}$ and any of its representations, $\rho: {\mathcal P}\rightarrow {\mathcal E} nd_V$, in a dg vector space $V$ the dg operad ${\mathcal O}({\mathcal P})$ carries an associated representation, ${\mathcal O}(\rho): {\mathcal O}({\mathcal P})\rightarrow {\mathcal E} nd_{\widehat{\odot^\bullet} V}$, in the (completed) graded commutative algebra $\widehat{\odot^\bullet} V$ given in terms of polydifferential (with respect to the standard multiplication in $\widehat{\odot^\bullet} V$) operators.
We refer to \cite{MW2} for full details and explain here only the explicit structure of the dg operad $$ {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})=\left\{ {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(k)\right\}_{k\geq 0}. $$ A typical element $a$ in the ${\mathbb S}_k$-module ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(k)$ is a linear combination, $$ \lambda_1 \hat{e}_1 + \ldots + \lambda_N \hat{e}_N, \ \ \ \lambda_1,\ldots,\lambda_N\in {\mathbb K}, $$ where each generator, say, $\hat{e}_s\in {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(k)$, $s\in [N]$, is constructed from some graph $e_s\in \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}(m_s,n_s)$ as follows: \begin{itemize} \item[(i)] draw $k$ new big white vertices labelled from $1$ to $k$ (the ``inputs'' of $\hat{e}_s$) and one extra big white output vertex, \item[(ii)] symmetrize all $m_s$ output legs of $e_s$ (if $m_s\geq 1$) and attach them to the unique output white vertex; if $m_s=0$, the output big white vertex receives no incoming edges; \item[(iii)] partition the set $[n_s]$ of input legs of $e_s$ into $k$ ordered disjoint subsets $$ [n_s]=I_1\sqcup \ldots \sqcup I_k, \ \ \ \ \#I_i\geq 0, i\in [k], $$ and then symmetrize the legs in each subset $I_i$ and attach them (if any) to the $i$-labelled input white vertex.
\end{itemize} For example, the element $$ e=\begin{array}{c}\resizebox{12mm}{!}{ \xy (0,0)*{\circ}="o", (-2,5)*{}="2", (4,5)*{\circ}="3", (4,10)*{}="u", (4,0)*{}="d1", (7,0)*{}="d2", (10,0)*{}="d3", (-1.5,-5)*{}="5", (1.5,-5)*{}="6", (4,-5)*{}="7", (-4,-5)*{}="8", (-2,7)*{_1}, (4,12)*{_2}, (-1.5,-7)*{_2}, (1.5,-7)*{_3}, (10.4,-1.6)*{_6}, (-4,-7)*{_1}, (4,-1.6)*{_4}, (7,-1.6)*{_5}, \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"8" <0pt> \ar @{-} "3";"u" <0pt> \ar @{-} "3";"d1" <0pt> \ar @{-} "3";"d2" <0pt> \ar @{-} "3";"d3" <0pt> \endxy}\end{array} \in \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}(2,6) $$ can generate the following element $$ \hat{e}=\begin{array}{c}\resizebox{18mm}{!}{ \xy (-1.5,5)*{}="1", (1.5,5)*{}="2", (9,5)*{}="3", (0,0)*{\circ}="A"; (9,3)*{\circ}="O"; (5,12)*+{\hspace{2mm}}*\frm{o}="X"; (-6,-10)*+{_1}*\frm{o}="B"; (6,-10)*+{_2}*\frm{o}="C"; (14,-10)*+{_3}*\frm{o}="D"; (22,-10)*+{_4}*\frm{o}="E"; "A"; "B" **\crv{(-5,-0)}; "A"; "D" **\crv{(5,-0.5)}; "A"; "C" **\crv{(-5,-7)}; "A"; "O" **\crv{(5,5)}; \ar @{-} "O";"C" <0pt> \ar @{-} "O";"D" <0pt> \ar @{-} "O";"X" <0pt> \ar @{-} "A";"X" <0pt> \ar @{-} "O";"B" <0pt> \endxy} \end{array} \in {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(4) $$ in the associated polydifferential operad. If we erase the top big white vertex and all its attached edges, then elements of ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})$ become precisely the graphs used by M.\ Kontsevich in \cite{Ko2}.
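Let us also record a simple count which is implicit in the construction above (our remark, not taken from \cite{MW2}): since step (iii) assigns each of the $n_s$ input legs of $e_s$ independently to one of the $k$ input white vertices, a fixed graph $e_s\in \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}(m_s,n_s)$ gives rise to $$ \#\left\{[n_s]=I_1\sqcup \ldots \sqcup I_k\right\}=k^{n_s} $$ ordered partitions, and hence to at most that many generators $\hat{e}_s$; for the element $e\in \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}(2,6)$ above and $k=4$ this gives $4^6=4096$ possible assignments, with different assignments possibly producing equal generators when $e$ has symmetries.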
The operad ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})$ admits a filtration by the number of small white vertices (that is, by the number of vertices coming from the underlying generators of $\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}$) which we call from now on {\em internal vertices}. The big white vertices of graphs from ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})$ are called the {\em external}\, ones. Note that incoming external vertices are {\em not}\, ordered from left to right as one might infer from the pictures above --- they are only labelled by distinct integers. Note also that ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})$ contains elements with no internal vertices at all, for example, $$ \begin{array}{c}\resizebox{9mm}{!}{ \xy (0,9)*+{\hspace{2mm}}*\frm{o}="X"; (-5,0)*+{_1}*\frm{o}="B"; (5,0)*+{_2}*\frm{o}="C"; \endxy }\end{array} \in {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(2). $$ The latter graph admits an automorphism which swaps the numerical labels of its vertices (cf.\ \cite{MW2,MW3}); it controls the canonical graded commutative multiplication in $\widehat{\odot^\bullet}V$.
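For orientation, let us sketch (in our words; see \cite{MW2} for the precise statement) how this internal-vertex-free graph acts: given any representation $\rho$ of the wheeled prop, the induced representation ${\mathcal O}(\rho)$ sends it to the standard product $$ f\otimes g \longmapsto f\cdot g, \ \ \ \ f,g\in \widehat{\odot^\bullet} V, $$ so the boundary condition on formality maps below, whose $n=2$ part begins with exactly this graph, expresses the fact that a quantized product is a deformation of the standard one.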
For any $i\in [n]$ the operadic composition $$ \begin{array}{rccc} \circ_i: &{\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(n)\otimes {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(m) & \longrightarrow & {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})(m+n-1)\\ & \Gamma_1\otimes \Gamma_2 &\longrightarrow & \Gamma_1\circ_i \Gamma_2 \end{array} $$ is defined by substituting the graph $\Gamma_2$ (with the output external vertex erased so that all edges, if any, connected to that external vertex become ``dangling in the air'') inside the big circle of the $i$-labelled external vertex of $\Gamma_1$ and erasing that big circle (so that all edges of $\Gamma_1$ connected to the $i$-th external vertex, if any, also become ``dangling in the air''), and then taking the sum over all possible ways to do the following operations \begin{itemize} \item[(i)] gluing some (or all or none) hanging edges of $\Gamma_2$ to some hanging edges of $\Gamma_1$, \item[(ii)] attaching some (or all or none) hanging edges of $\Gamma_2$ to the output external vertex of $\Gamma_1$, \item[(iii)] attaching some (or all or none) hanging edges of $\Gamma_1$ to the external input vertices of $\Gamma_2$, \end{itemize} in such a way that no dangling edges are left. We refer to \cite{MW2,MW3} for concrete examples. \subsection{Kontsevich formality map as a morphism of dg operads} M.\ Kontsevich's formality map from \cite{Ko2} provides us with a universal quantization of arbitrary (formal) graded Poisson structures.
It can be understood as a morphism of dg props\footnote{Similarly, a universal formality map behind Drinfeld's deformation quantizations of Lie bialgebras can be understood as a morphism of dg props, see \cite{MW3}.}, \begin{equation}\label{5: quant map F} {\mathcal F}: c{\mathcal A} ss_\infty \longrightarrow {\mathcal O}(\swhHoLB_{0,1}) \end{equation} satisfying a certain non-triviality condition (which is given explicitly below). Here $c{\mathcal A} ss_\infty$ is the dg operad of {\em curved $A_\infty$-algebras} defined as the free operad generated by the ${\mathbb S}$-module $$ E(n):={\mathbb K}[{\mathbb S}_n][n-2]= \mbox{span} \left(\begin{array}{c}\resizebox{20mm}{!}{ \begin{xy} <0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{}, <0mm,0mm>*{};<-8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<0mm,-4mm>*{\ldots}**@{}, <0mm,0mm>*{};<4.5mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<8mm,-5mm>*{}**@{-}, <0mm,0mm>*{};<-11mm,-7.9mm>*{^{\sigma(1)}}**@{}, <0mm,0mm>*{};<-4mm,-7.9mm>*{^{\sigma(2)}}**@{}, <0mm,0mm>*{};<10.0mm,-7.9mm>*{^{\sigma(n)}}**@{}, <0mm,0mm>*{};<0mm,5mm>*{}**@{-}, \end{xy}}\end{array}\ \ \right)_{\sigma\in {\mathbb S}_n},\ \ \ \forall\ n\geq 0 $$ and equipped with the differential given on the generators by the formula $$ \delta \begin{array}{c}\resizebox{10mm}{!}{ \begin{xy} <0mm,0mm>*{\bullet}, <0mm,5mm>*{}**@{-}, <-5mm,-5mm>*{}**@{-}, <-2mm,-5mm>*{}**@{-}, <2mm,-5mm>*{}**@{-}, <5mm,-5mm>*{}**@{-}, <0mm,-7mm>*{_{1\ \ \ \ldots\ \ \ n}}, \end{xy}}\end{array} =\sum_{k=0}^{n}\sum_{l=0}^{n-k} (-1)^{k+l(n-k-l)+1} \begin{array}{c}\resizebox{30mm}{!}{ \begin{xy} <0mm,0mm>*{\bullet}, <0mm,5mm>*{}**@{-}, <4mm,-7mm>*{^{1\ \ \dots \ \ k\qquad\ \ (k+l+1)\ \ \dots\ n}}, <-14mm,-5mm>*{}**@{-}, <-6mm,-5mm>*{}**@{-}, <20mm,-5mm>*{}**@{-}, <8mm,-5mm>*{}**@{-}, <0mm,-5mm>*{}**@{-}, <0mm,-5mm>*{\bullet}; <-5mm,-10mm>*{}**@{-}, <-2mm,-10mm>*{}**@{-}, <2mm,-10mm>*{}**@{-},
<5mm,-10mm>*{}**@{-}, <0mm,-12mm>*{_{k+1\ \dots\ k+l }}, \end{xy}}\end{array}. $$ It is non-cofibrant and acyclic (as is the dg operad ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})$). \smallskip The non-triviality condition on the map (\ref{5: quant map F}) reads as the following approximation on the values of ${\mathcal F}$ on the generating $n$-corollas of $c{\mathcal A} ss_\infty$ for any $n\geq 0$ (modulo graphs with the number of internal vertices $\geq 2$ whose linear span is denoted below by $O(2)$) \begin{equation}\label{5: Boundary cond for formality map} {\mathcal F}\left( \begin{array}{c}\resizebox{10mm}{!}{ \begin{xy} <0mm,0mm>*{\bullet}, <0mm,5mm>*{}**@{-}, <-5mm,-5mm>*{}**@{-}, <-2mm,-5mm>*{}**@{-}, <2mm,-5mm>*{}**@{-}, <5mm,-5mm>*{}**@{-}, <0mm,-7mm>*{_{1\ \ \ \ldots\ \ \ n}}, \end{xy}}\end{array} \right)=\left\{ \begin{array}{ll} \begin{array}{c}\resizebox{9mm}{!}{ \xy (0,9)*+{\hspace{2mm}}*\frm{o}="X"; (-5,0)*+{_1}*\frm{o}="B"; (5,0)*+{_2}*\frm{o}="C"; \endxy}\end{array} + \sum_{p\geq 0}\frac{1}{p!} \begin{array}{c}\resizebox{10mm}{!}{ \xy (0,2.8)*{^p}; (0,1)*{...}; (0,-3)*{\circ}="a"; (0,9)*+{\hspace{2mm}}*\frm{o}="X"; (-7,-9)*+{_1}*\frm{o}="B"; (7,-9)*+{_2}*\frm{o}="C"; "a"; "X" **\crv{(-5,-0)}; "a"; "X" **\crv{(+5,-0)}; "a"; "X" **\crv{(9,-0)}; "a"; "X" **\crv{(-9,-0)}; \ar @{-} "a";"B" <0pt> \ar @{-} "a";"C" <0pt> \endxy}\end{array} + O(2) & \text{if}\ n=2\\ \sum_{p\geq 0}\frac{1}{p!} \begin{array}{c}\resizebox{13mm}{!}{ \xy (0,2.8)*{^p}; (0,1)*{...}; (3.5,-10)*{...}; (0,-3)*{\circ}="a"; (0,9)*+{\hspace{2mm}}*\frm{o}="X"; (-10,-10)*+{_1}*\frm{o}="B"; (-3,-10)*+{_2}*\frm{o}="C"; (10,-10)*+{_{k}}*\frm{o}="E"; "a"; "X" **\crv{(-5,-0)}; "a"; "X" **\crv{(+5,-0)}; "a"; "X" **\crv{(9,-0)}; "a"; "X" **\crv{(-9,-0)}; \ar @{-} "a";"B" <0pt> \ar @{-} "a";"C"
<0pt> \ar @{-} "a";"E" <0pt> \endxy}\end{array} + O(2) & \text{otherwise}\\ \end{array} \right. \end{equation} where the summations $\sum_{p\geq 0}$ run over the number of edges connecting the internal vertex to the external out-vertex. A morphism of dg operads (\ref{5: quant map F}) satisfying the above non-triviality condition is called a {\em formality map} (after \cite{Ko2}). \subsection{Deformation complexes of morphisms of props} Let ${\mathcal P}$ be an arbitrary dg free prop, ${\mathcal Q}$ an arbitrary dg prop, and $f: {\mathcal P} \rightarrow {\mathcal Q}$ a morphism between them. Then there is a standard construction of the {\em deformation complex}\, ${\mathsf D\mathsf e\mathsf f }({\mathcal P} \stackrel{f}{\rightarrow}{\mathcal Q})$ of the morphism $f$ described in several ways in \cite{MV}; in general, ${\mathsf D\mathsf e\mathsf f }({\mathcal P} \stackrel{f}{\rightarrow}{\mathcal Q})$ is a filtered ${\mathcal L} ie_\infty$ algebra. This construction builds on earlier works which describe deformation complexes of morphisms of dg {\em operads} \cite{KS,VdL}. The constructions in \cite{MV} generalize straightforwardly to the case when ${\mathcal P}$ and ${\mathcal Q}$ are dg {\em wheeled}\, props. For example, when ${\mathcal P}={\mathcal Q}=\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ and $f$ is the identity map, the associated deformation complex $$ {\mathsf D\mathsf e\mathsf f }(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}\stackrel{{\mathrm I\mathrm d}}{\longrightarrow} \widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright})[1]\simeq \mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}) $$ is, up to the degree shift, precisely the derivation complex of $\widehat{\mathcal{H}\mathit{olieb}}_{c,d}^{\atop \bigstar\circlearrowright}$ (but the Lie algebra structure is different!).
The machinery of \cite{KS,MV,VdL} gives us a well-defined dg Lie algebra $$ {\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right) $$ which controls the deformation theory of any formality map ${\mathcal F}$. Our second main result in this paper is the computation of its cohomology in terms of the M.\ Kontsevich graph complex $\mathsf{fGC}^{\geq 2}_2$. \subsection{Theorem (Classification of formality maps)}\label{5: Corollary on GCor and Def(assb to Dlie)} {\em For any formality morphism ${\mathcal F}$ $$ {\mathcal F}: c\mathcal{A} \mathit{ss}_\infty {\longrightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}) $$ there is a canonically associated morphism of complexes $$ f_{\mathcal F}: \mathsf{fGC}^{\geq 2}_2 \longrightarrow {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1] $$ which is a quasi-isomorphism.} \begin{proof} The proof of this theorem is very similar to the proof of Proposition 5.4.1 in \cite{MW3} and is based essentially on the contractibility of the permutahedra polytopes. Let us first explain the naturality of the morphism $f_{{\mathcal F}}$.
Any derivation of the dg wheeled prop $\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}$, that is, any deformation $D$ of the identity automorphism of $\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}$, $$ D\in {\mathsf D\mathsf e\mathsf f }\left( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}\stackrel{{\mathrm I\mathrm d}}{\longrightarrow} \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}\right) $$ induces an associated deformation ${\mathcal O}(D)$ of the identity automorphism of ${\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})$, $$ {\mathcal O}(D)\in {\mathsf D\mathsf e\mathsf f }\left({\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\stackrel{{\mathrm I\mathrm d}}{\longrightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}) \right) $$ and hence, via the composition of ${\mathcal O}(D)$ with the given map ${\mathcal F}$, gives us a canonical morphism of complexes $$ g_{{\mathcal F}}: {\mathsf D\mathsf e\mathsf f }\left(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}\stackrel{{\mathrm I\mathrm d}}{\longrightarrow} \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}\right)\longrightarrow {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right) $$ or, equivalently, \begin{equation}\label{5: g_cF} g_{{\mathcal F}}: \mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}) \longrightarrow {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1] \end{equation} Composing this
map $g_{{\mathcal F}}$ with the canonical quasi-isomorphism $F$ from Proposition 1.1.1, we obtain the required map $f_{{\mathcal F}}$. Thus to prove the theorem it is enough to show that the map $g_{{\mathcal F}}$ is a quasi-isomorphism, which is easy. \smallskip Both complexes in (\ref{5: g_cF}) admit filtrations by the number of edges in the graphs, and the map $g_{{\mathcal F}}$ preserves these filtrations and hence induces a morphism of the associated spectral sequences, $$ g^r_{{\mathcal F}}: ({\mathcal E}_r\mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}), d_r) \longrightarrow \left({\mathcal E}_r {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1], \delta_r\right). $$ The induced differential $d_0$ on the initial page of the spectral sequence of the l.h.s.\ is trivial, $d_0=0$. The induced differential on the initial page of the spectral sequence of the r.h.s.
is not trivial and is determined by the following summand in ${\mathcal F}$ (see (\ref{5: Boundary cond for formality map})), $$ \begin{array}{c}\resizebox{9mm}{!}{ \xy (0,9)*+{\hspace{2mm}}*\frm{o}="X"; (-5,0)*+{_1}*\frm{o}="B"; (5,0)*+{_2}*\frm{o}="C"; \endxy}\end{array} $$ Hence the differential $\delta_0$ acts only on big input white vertices of graphs from ${\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1] $ by splitting each such big white vertex $ \begin{array}{c}\resizebox{4mm}{!}{ \xy (0,1.5)*+{_v}*\frm{o}; \endxy }\end{array} $ into two big white vertices $ \begin{array}{c}\resizebox{10mm}{!}{ \xy (-5,2)*+{_{v'}}*\frm{o}="B"; (5,2)*+{_{v''}}*\frm{o}="C"; \endxy}\end{array} $ and redistributing all edges (if any) attached to $v$ in all possible ways among the new vertices $v'$ and $v''$. The cohomology $$ {\mathcal E}_1 {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1]= H\left({\mathcal E}_0 {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1], \delta_0\right)$$ is spanned by graphs all of whose white vertices are univalent and skew-symmetrized (see, e.g., Theorem 3.2.4 in \cite{Me-p} where this result is obtained from the cell complexes of permutahedra, or Appendix A in \cite{Wi1} for another purely algebraic argument) and hence is isomorphic (after erasing these no longer needed big white vertices) to $\mathrm{Der}(\swhHoLB_{0,1})$ as a graded vector space.
The boundary condition (\ref{5: Boundary cond for formality map}) says that the induced differential $\delta_1$ in the complex $ {\mathcal E}_1 {\mathsf D\mathsf e\mathsf f }\left(c\mathcal{A} \mathit{ss}_\infty \stackrel{{\mathcal F}}{\longrightarrow} {\mathcal O}( \widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1]$ agrees precisely with the induced differential $d_1$ in ${\mathcal E}_1\mathrm{Der}(\swhHoLB_{0,1})$, so that the induced morphism of the next pages of the spectral sequences, $$ g_{{\mathcal F}}^1: ({\mathcal E}_1\mathrm{Der}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright}), d_1)\longrightarrow \left({\mathcal E}_1{\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)[1], \delta_1\right) $$ is an isomorphism. By the spectral sequence comparison theorem, the morphism $g_{{\mathcal F}}$ is a quasi-isomorphism. \end{proof} We conclude that for any $i\in {\mathbb Z}$, $$ H^{i+1}\left({\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)\right)=H^i(\mathsf{fGC}_2^{\geq 2}). $$ The special case $i=0$ reads as $$ H^1\left({\mathsf D\mathsf e\mathsf f }\left(c{\mathcal A} ss_\infty \stackrel{{\mathcal F}}{\rightarrow} {\mathcal O}(\widehat{\mathcal{H}\mathit{olieb}}_{0,1}^{\atop \bigstar\circlearrowright})\right)\right)=H^0(\mathsf{fGC}_2^{\geq 2}) =\mathfrak{grt}, $$ which is equivalent to the main result of the remarkable paper \cite{Do} by V.\ Dolgushev. \begin{thebibliography}{10} \bibitem[Do]{Do} V.\ Dolgushev, {\em Stable Formality Quasi-isomorphisms for Hochschild Cochains I}, arXiv:1109.6031 (2011). \bibitem[D]{D2} V.
Drinfeld, {\em On quasitriangular quasi-Hopf algebras and a group closely connected with $Gal(\bar{Q}/Q)$}, Leningrad Math. J. {\bf 2}, No.\ 4 (1991), 829--860. \bibitem[K1]{Ko} M.\ Kontsevich, {\em Formality Conjecture}. In: D.\ Sternheimer et al. (eds.), Deformation Theory and Symplectic Geometry, Kluwer 1997, 139--156. \bibitem[K2]{Ko2} M.\ Kontsevich, {\em Deformation quantization of Poisson manifolds}, Lett.\ Math.\ Phys. {\bf 66} (2003), 157--216. \bibitem[KS]{KS} M.\ Kontsevich and Y.\ Soibelman, {\em Deformations of algebras over operads and the Deligne conjecture}. In: Conference Moshe Flato 1999, Vol. I (Dijon), Math. Phys. Stud., vol. 21, Kluwer Acad. Publ., Dordrecht (2000), pp. 255--307. \bibitem[Ma]{Ma} M.\ Markl, {\em Operads and props}. In: ``Handbook of Algebra'' vol. 5, 87--140, Elsevier 2008. \bibitem[MMS]{MMS} M.\ Markl, S.\ Merkulov and S.\ Shadrin, {\em Wheeled props and the master equation}, preprint math.AG/0610683; J.\ Pure and Appl.\ Algebra {\bf 213} (2009), 496--535. \bibitem[Me1]{Me0} S.A.\ Merkulov, {\em Prop profile of Poisson geometry}, Commun.\ Math.\ Phys. {\bf 262} (2006), 117--135. \bibitem[Me2]{Me1} S.A.\ Merkulov, {\em Graph complexes with loops and wheels}. In: ``Algebra, Arithmetic and Geometry -- Manin Festschrift'' (eds. Yu.\ Tschinkel and Yu.\ Zarhin), Progress in Mathematics, Birkh\"auser (2010), 311--354. \bibitem[Me3]{Me3} S.A.\ Merkulov, {\em Wheeled props in algebra, geometry and quantization}. In: Proceedings of the 5th European Congress of Mathematics, Amsterdam, 14--18 July, 2008. EMS Publishing House, 2010, pp. 84--114. \bibitem[Me4]{Me-p} S.A.\ Merkulov, {\em Permutahedra, HKR isomorphism and polydifferential Gerstenhaber-Schack complex}. In: ``Higher Structure in Geometry and Physics: In Honor of Murray Gerstenhaber and Jim Stasheff'', Cattaneo, A.S., Giaquinto, A., Xu, P. (Eds.), Progress in Mathematics 287, XV, Birkh\"auser, Boston (2011).
\bibitem[MV]{MV} S.A.\ Merkulov and B.\ Vallette, {\em Deformation theory of representations of prop(erad)s I \& II}, Journal f\"ur die reine und angewandte Mathematik (Crelle) {\bf 634}, 51--106, \& {\bf 636}, 123--174 (2009). \bibitem[MW1]{MW1} S.\ Merkulov and T.\ Willwacher, {\em Deformation theory of Lie bialgebra properads}. In: Geometry and Physics: A Festschrift in honour of Nigel Hitchin, Oxford University Press 2018, pp. 219--248. \bibitem[MW2]{MW2} S.\ Merkulov and T.\ Willwacher, {\em Props of ribbon graphs, involutive Lie bialgebras and moduli spaces of curves}, preprint arXiv:1511.07808 (2015), 51pp. \bibitem[MW3]{MW3} S.A.\ Merkulov and T.\ Willwacher, {\em Classification of universal formality maps for quantizations of Lie bialgebras}, preprint arXiv:1605.01282 (2016). \bibitem[V]{V} B.\ Vallette, {\em A Koszul duality for props}, Trans.\ Amer.\ Math.\ Soc. {\bf 359} (2007), 4865--4943. \bibitem[VdL]{VdL} P.\ Van der Laan, {\em Operads up to Homotopy and Deformations of Operad Maps}, preprint arXiv:math.QA/0208041 (2002). \bibitem[W1]{Wi1} T.\ Willwacher, {\em M.\ Kontsevich's graph complex and the Grothendieck-Teichm\"uller Lie algebra}, Invent.\ Math. {\bf 200} (2015), no. 3, 671--760. \bibitem[W2]{Wi2} T.\ Willwacher, {\em The oriented graph complexes}, Comm.\ Math.\ Phys. {\bf 334} (2015), no. 3, 1649--1666. \bibitem[W3]{Wi3} T.\ Willwacher, {\em Stable cohomology of polyvector fields}, Comm.\ Math.\ Phys. {\bf 21} (2014), 1501--1530. \bibitem[Z]{Z} M.\ \v Zivkovi\' c, {\em Multi-oriented graph complexes and quasi-isomorphisms between them I: oriented graphs}, preprint arXiv:1703.09605 (2017). \end{thebibliography} \end{document}
\begin{document} \title{Bertini and Northcott} \author{{F}abien {P}azuki and {M}artin {Widmer}} \address{Fabien Pazuki. University of Copenhagen, Institute of Mathematics, Universitetsparken 5, 2100 Copenhagen, Denmark, and Universit\'e de Bordeaux, IMB, 351, cours de la Lib\'eration, 33400 Talence, France.} \email{[email protected]} \address{Martin Widmer. Department of Mathematics, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, United Kingdom} \email{[email protected]} \maketitle \begin{abstract} We prove a new Bertini-type Theorem with explicit control of the genus, degree, height, and the field of definition of the constructed curve. As a consequence we provide a general strategy to reduce certain height and rank estimates on abelian varieties over a number field $K$ to the case of jacobian varieties defined over a suitable extension of $K$. \end{abstract} {\flushleft \textbf{Keywords:} Bertini, Northcott, height, abelian varieties.\\ \textbf{Mathematics Subject Classification:} 11G10, 11G30, 11G50, 14G40, 14K15.} \begin{center} --------- \end{center} \begin{center} \textbf{Bertini et Northcott} \end{center} \begin{abstract} Nous pr\'esentons un th\'eor\`eme de Bertini avec contr\^ole sur le genre, le degr\'e, la hauteur et le corps de d\'efinition de la courbe obtenue. Ceci fournit une strat\'egie de r\'eduction de certains \'enonc\'es d'estimations sur la hauteur ou le rang des vari\'et\'es ab\'eliennes g\'en\'erales d\'efinies sur un corps de nombres $K$ au cas des jacobiennes d\'efinies sur une extension de $K$ appropri\'ee. \end{abstract} {\flushleft \textbf{Mots-Clefs:} Bertini, Northcott, hauteur, vari\'et\'es ab\'eliennes.\\ } \begin{center} --------- \end{center} \thispagestyle{empty} \section{Introduction} Let $A$ be an abelian variety of dimension $g$ over a number field $K$. We denote by $\Delta_K$ the discriminant of $K$. 
Let $\mathrm{rank}(A(K))$ denote the Mordell-Weil rank of $A(K)$ and $\hF(A/K)$ the relative Faltings height of $A$. The first author recently gave in \cite{Paz16} a proof of the inequality \begin{alignat}1\label{heightrank} \mathrm{rank}(A(K))\leq c(g) [K:\Q]^3 \max\{1, \hF(A/K), \log\vert \Delta_K\vert\}, \end{alignat} for an explicit constant $c(g)>0$. It is unknown whether a dependence on $\Delta_K$ is needed. However, (\ref{heightrank}) was achieved by combining two inequalities: the first one is a classical descent inequality between the Mordell-Weil rank of $A(K)$ and the logarithm of the product of the norms of the primes of bad reduction for $A/K$. The second one is an inequality between the logarithm of the product of the norms of the primes of bad reduction for $A/K$ and the Faltings height of $A$, obtained using the strategy of reducing the general abelian case to the jacobian case. In the present paper, we axiomatize this strategy of \textit{reducing to the jacobian case} to enable us to use it in other contexts. A classical dimension argument shows that when the dimension $g$ is big, jacobian varieties become rarer in the moduli space of principally polarized abelian varieties, at least over $\C$. Over $\overline{\mathbb{Q}}$, Tsimerman \cite{Tsi12} proved that for every $g\geq 4$ there exist abelian varieties of dimension $g$ that are not isogenous to any jacobian. Masser and Zannier \cite{MaZa20} even recently proved the following: for every $g\geq 4$ ``almost every'' principally polarized abelian variety of dimension $g$, defined over an extension of $\Q$ of degree at most $2^{16g^4}$, is Hodge generic and not isogenous to any jacobian (see Theorem 1.3 in \cite{MaZa20} for the precise statement). Hence, reducing to the case of jacobians is \textit{a priori} non-trivial. The jacobians we reduce to may have significantly larger dimension than the original abelian variety. 
The main advantage of this reduction lies in the fact that more tools are available for jacobians than for general abelian varieties. \\ Before stating our first result we need to introduce the Northcott number of a set of algebraic numbers, following the terminology of \cite{ViVi16} page 59. Here we use $h_\infty(\cdot)$ for the absolute logarithmic Weil height on $\Qbar$ (defined in Section \ref{defs}), whereas in \cite{ViVi16} the exponential Weil height is used, so that their Northcott number is the exponential of the Northcott number defined here. \begin{defin}\label{Northcott} Let $S$ be a set of algebraic numbers. For any real number $t$, define the set $S_t=\{s\in{S}\,\vert\, h_\infty(s)\leq t\}$. Let $\m(S)=\inf\{t\,\vert\, S_t \, \mathrm{is}\; \mathrm{infinite}\}$. We will call $\m(S)$ the \emph{Northcott number} of $S$. \end{defin} In particular we have $\m(\Qbar)=0$ and $\m(K)=\infty$ for any number field $K$. \begin{convention}\label{convention} For the remainder of the introduction we fix an arbitrary infinite subset $S$ of the algebraic numbers $\Qbar$ with finite Northcott number $\m(S)$. \end{convention} For a projective variety $Y$, let $\hp(Y)$ stand for the height of a Chow form of $Y$ (this is a projective height defined in Section \ref{defs}). In this paper varieties are always assumed to be geometrically irreducible. Our first result is a Bertini-type statement with some control on the genus, the degree, the height, and the field of definition of the resulting curve. \begin{thm}\label{theo1} Let $K$ be a subfield of $\overline{\mathbb{Q}}$, and let $X$ be a non-singular closed subvariety of $\mathbb{P}^N_{K}$ with $\dim X\geq2$. Then there exists a finite subset $s\subset S$, and a non-singular, geometrically irreducible curve $C$ on $X$, defined over $K(s)$, with genus $g(C)\leq (\deg X)^2+\deg X$, with $\deg C\leq \deg X$ and with $\hp(C)\leq \hp(X)+(\dim X)(\deg X)(N+1)(\m(S)+2)$. 
\end{thm} We will now apply Theorem \ref{theo1} to the case of abelian varieties and derive some consequences. All number fields are assumed to lie in a fixed algebraic closure $\overline{\mathbb{Q}}$. For each abelian variety $A$ whose (minimal) field of definition is a number field $K$, we consider a real-valued map $q(A,\cdot)$ with domain the set of all finite extensions $L$ of $K$. For example, one could take $q(A, L)$ to be the Mordell-Weil rank of the group of rational points $A(L)$ or the dimension of $A$. We write $$\Qf:=\{q(A,\cdot)\,\vert\, A \text{ is an abelian variety defined over a number field}\}$$ for the family of these maps $q(A,\cdot)$. In this article $\N=\{0,1,2,3,\ldots\}$ denotes the set of non-negative integers, $\R^+$ the set of positive real numbers, and $\R^+_0$ the set of non-negative real numbers. All functions $c_i(\cdot)$ are real-valued and non-negative. \begin{defin}\label{admissible} Given maps $\czero, \cone, \ctwo: \mathbb{N}\to \mathbb{R}^+_0$ with $\ctwo$ increasing (not necessarily strictly), we say that the family of maps $\Qf$ is $(S, \czero, \cone, \ctwo)$-admissible if it satisfies the following: for every number field $K$, every abelian variety $A/K$ of dimension $g\geq2$, every finite subset $s\subset S$, and every abelian variety $B/K(s)$, the following three properties hold: \begin{description} \item[(E)] $q(A,K(s))\geq \czero(g) q(A,K)$, \item[(P)] $q(A\times B,K(s))\geq \cone(g) q(A,K(s)) $, \item[\;(I)] $\vert q(A,K(s))-q(B,K(s))\vert\leq \ctwo(g)$ whenever $A$ and $B$ are $K(s)$-isogenous. \end{description} \end{defin} Note that all homomorphisms between abelian varieties of dimension $g$ defined over $K$ are defined over a finite extension of $K$ whose degree is bounded above in terms of $g$ only; this is true by \cite{Sil92}, see also \cite{Rem20} for a recent optimized bound.
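To illustrate Definition \ref{Northcott} numerically (an illustrative sketch only, not part of the argument, using the standard identity $h_\infty(\alpha^n)=n\,h_\infty(\alpha)$, so that $h_\infty(2^{1/n})=(\log 2)/n$): the set $S=\{2^{1/n}\,\vert\,n\geq1\}$ has $\m(S)=0$, since $S_t$ is infinite for every $t>0$, whereas $\m(K)=\infty$ for any number field $K$ by Northcott's theorem.

```python
import math

# Weil heights of the algebraic numbers 2^(1/n): since h(alpha^n) = n*h(alpha),
# h(2^(1/n)) = log(2)/n, which tends to 0 as n grows.
heights = {n: math.log(2) / n for n in range(1, 1001)}

# For any threshold t > 0, all but finitely many elements of S lie in the
# truncation S_t, so S_t is infinite and the Northcott number m(S) equals 0.
t = 0.01
count = sum(1 for h in heights.values() if h <= t)
print(count)  # number of n <= 1000 with h(2^(1/n)) <= t
```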
The following result shows that certain estimates for abelian varieties using the stable Faltings height $\hF(A)$ can be reduced to the case of jacobians. \begin{thm} \label{machine} Let $\czero,\cone,\ctwo,\constii,\constiii, \cthree,\cfour:\N\to \R^+_0$, and assume $\czero,\cone,\constiii, \cthree$ are positive. Assume that the family of maps $\Qf$ is $(S, \czero, \cone, \ctwo)-$admissible. Let $K$ be a number field, $A/K$ an abelian variety defined over $K$ of dimension $g\geq 2$, and suppose there exists a finite set $s\subset S$ and a non-singular, geometrically irreducible curve $C\subset A$, defined over $K(s)$, and of genus $g_0$, such that \begin{description} \item[(i)] there exists a closed immersion $A\to \Jac(C)$, defined over the field of definition of $C\subset A$, \item[(ii)] $g_0\leq \constii(g)$, \item[(iii)] $\hF(\Jac(C))\leq \constiii(g)(\hF(A)+m(S)+1)$. \end{description} Then, if $$\hF(\Jac(C))\geq \cthree(g) q(\Jac(C), K(s)) - \cfour(g)$$ holds, we also have $$\hF(A)\geq \cfive(g) q(A,K) - \csix(g),$$ where $\cfive(g)=\frac{\czero(g)\cone(g)\cthree(g)}{\constiii(g)}$ and $\csix(g)=\frac{\ctwo(\constii(g))\cthree(g)}{\constiii(g)}+\frac{\cfour(g)}{\constiii(g)}+\m(S)+1$. \end{thm} Theorem \ref{theo1}, in conjunction with work of Cadoret and Tamagawa, allows us to prove the existence of a curve $C$ with the required properties, provided $A$ is principally polarized. We also need an additional technical assumption which can always be achieved by extending the ground field. Let $K$ be a number field, and let $A$ be an abelian variety of dimension $g$ defined over $K$. We say $A/K$ satisfies the condition $({SS\Theta})$ if: \\ $({SS\Theta})$\begin{tabular}{ll} &$A/K$ is semi-stable and equipped with a projective embedding $\Theta: A\to \mathbb{P}_K^{4^{2g}-1}$,\\ & corresponding to the $16$th power of a symmetric theta divisor, with a theta\\ & structure of level $r=4$, given by the modified theta coordinates of \cite{DavPhi} \\ & section 3.1. 
\end{tabular} \\ For any positive integer $N$, we denote by $A[N]$ the set of all $N$-torsion points in $A(\overline{\mathbb{Q}})$. We will then denote by $K(A[N])$ the smallest field extension $F/K$ such that $A[N]\subset A(F)$. If $A/K$ is principally polarized then we can ensure $({SS\Theta})$ by replacing the ground field $K$ with $K(A[48])$. Indeed, an embedding $\Theta$ always exists (based on Mumford's construction, see for instance paragraph 2.3 in \cite{Paz12}) after replacing the base field $K$ with the extension $K(A[16])$. Moreover, $A$ is semi-stable over the field $K(A[N])$ whenever $N$ has at least two coprime divisors $d_1, d_2 \geq 3$ (by Raynaud's criterion from \cite{SGA7}, Proposition 4.7. See also \cite{SiZa95} for extensions of this result). Note also that for any positive integer $N$ we have $[K(A[N]):K] \leq N^{4g^2}$ by Lemme 4.7 page 2078 of \cite{GaRe2}. \begin{Proposition} \label{Cexists} There exists a map $\constiii:\N\to \R^+$ such that the following holds. If $K$ is a number field, and $A/K$ is a principally polarized abelian variety defined over $K$ of dimension $g\geq 2$ satisfying $({SS\Theta})$, then there exists a finite set $s\subset S$ and a non-singular, geometrically irreducible curve $C\subset A$ defined over $K(s)$, and of genus $g_0$, such that \begin{description} \item[(i)] there exists a closed immersion $A\to \Jac(C)$, defined over the field of definition of $C\subset A$, \item[(ii)] $g_0\leq (16^gg!)^2+16^gg!$, \item[(iii)] $\hF(\Jac(C))\leq \constiii(g)(\hF(A)+m(S)+1)$. \end{description} \end{Proposition} Note that the curve $C$ is not necessarily semi-stable over $K(s)$, and that $\Jac(C)$ does not come automatically with a theta structure of level 4 over $K(s)$. 
This is why the item $(iii)$ concerns stable Faltings heights, and not heights over $K(s)$.\\ \begin{cor} \label{machinecorrollary} Let $\czero, \cone, \ctwo, \cthree, \cfour:\N\to \R^+_0$, with $\czero,\cone,\cthree$ positive, and suppose the family of maps $\Qf$ is $(S, \czero, \cone, \ctwo)-$admissible. Set $\constii(g)=(16^gg!)^2+16^gg!$. Then there exists a map $\constiii:\N\to \R^+$ such that the following holds: If $K$ is a number field and the inequality \begin{description} \item[(J)] \hspace{2.6cm} $\hF(J)\geq \cthree(g) q(J, K(s)) - \cfour(g)$ \end{description} holds for all finite $s\subset S$, for all jacobians $J/K(s)$ of dimension $g_0\leq \constii(g)$, then $$\hF(A)\geq \cfive(g)q(A,K) - \csix(g)$$ for all principally polarized abelian varieties $A/K$ of dimension $g\geq 2$ satisfying $({SS\Theta})$, where $\cfive(g)$ and $\csix(g)$ are as in Theorem \ref{machine}. \end{cor} Sometimes Zarhin's trick can be used to get rid of the assumption that $A$ be principally polarized and the property $({SS\Theta})$. Let us denote the dual of $A$ by $\check{A}$. We set $$Z(A)=A^4\times \check{A}^4.$$ \begin{cor} \label{machinecorrollaryallabvar} Let $\czero, \cone, \ctwo, \cthree, \cfour:\N\to \R^+_0$, with $\czero,\cone,\cthree$ positive, and suppose the family of maps $\Qf$ is $(S, \czero, \cone, \ctwo)-$admissible. Set $\constii(g)=(16^gg!)^2+16^gg!$. Then there exists a map $\constiii:\N\to \R^+$ such that, with $\cfive(g)$ and $\csix(g)$ as in Theorem \ref{machine}, the following holds: Let $A/K$ be an abelian variety of dimension $g$. 
i) If $Z(A)/K$ satisfies $({SS\Theta})$, and the hypothesis (J) (with $8g$ instead of $g$) holds true, then we have $$\hF(A)\geq \frac{\cfive(8g)\cone(g)}{8} q(A,K) - \frac{\csix(8g)}{8}.$$ ii) If the hypothesis (J) from Corollary \ref{machinecorrollary} (with $8g$ instead of $g$) holds for all extensions $L$ with $[L:K] \leq 48^{256 g^2}$ instead of just $K$, then we have $$\hF(A)\geq \frac{\cfive(8g)\cone(g)}{8} q(A,L_0) - \frac{\csix(8g)}{8},$$ for some extension $L_0$ of $K$ with $[L_0:K] \leq 48^{256 g^2}$, even if $Z(A)/K$ does not satisfy $({SS\Theta})$. \end{cor} To deduce Corollary \ref{machinecorrollaryallabvar} we first note that $Z(A)$ admits a principal polarization. For the first claim we combine Theorem \ref{machine} (with $\constii(g)=(16^gg!)^2+16^gg!$ and $\constiii(\cdot)$ as in Proposition \ref{Cexists}) and Proposition \ref{Cexists} with $A$ replaced by $Z(A)$. Using that $\dim Z(A)=8g$, $\hF(Z(A))=8 \hF(A)$, and that by (P) we have $q(Z(A), K)\geq \cone(g)q(A,K)$ the claim follows. For the second part we use that by the discussion before Proposition \ref{Cexists} there exists an extension $L_0$ of $K$ such that $Z(A)/L_0$ satisfies $({SS\Theta})$ and $[L_0:K] \leq 48^{256 g^2}$. We replace $K$ by $L_0$ and conclude as before. Let us illustrate Corollary \ref{machinecorrollaryallabvar} by a result of the first author \cite{Paz16} that pioneered the strategy used in this paper. We take \begin{equation*} q(A,K)=\frac{1}{[K:\mathbb{Q}]}\log \left(N_{K/\mathbb{Q}}\Big(\prod_{\mathfrak{p} \, \mathrm{bad}\, s.s.}\mathfrak{p}\Big)\right), \end{equation*} where the product runs over the semi-stable bad prime ideals of $A/K$. It is clear that $(P)$ holds with $\cone(g)=1$. Isogenous abelian varieties share the same semi-stable bad reduction primes by the N{\'e}ron-Ogg-Shafarevich criterion, because they have the same Tate modules (see Theorem 1 page 493 of \cite{SeTa68} and Corollary 2 page 22 of \cite{Fal86}). Hence, (I) holds with $\ctwo(g)=0$. 
For (E) we need to assume that our fixed $S$ from Convention \ref{convention} is such that the ramification of $K(S)/K$ above the finite prime ideals of $K$ is uniformly bounded, with a bound independent of $K$ (cf. Proposition \ref{NN2Trem} for the construction of such an $S$). If $M/K$ is a finite extension then $A/K$ has semi-stable bad reduction at $\mathfrak{p}\subset {\mathcal O}_K$ if and only if $A/M$ has semi-stable bad reduction at $\mathfrak{B}\subset {\mathcal O}_M$ for each prime $\mathfrak{B}$ above $\mathfrak{p}$. Hence, with an $S$ as above, a straightforward computation shows that (E) holds. Finally, as shown in \cite{Paz16}, hypothesis (J) follows essentially from the arithmetic Noether's formula of \cite{MB}, Th\'{e}or\`{e}me 2.5 page 496. By Corollary \ref{machinecorrollaryallabvar} (ii) it follows that there exist $\cfive:\N\to \R^+_0$ and $\csix:\N\to \R^+$ such that \begin{equation}\label{Faltbad} \hF(A)\geq \frac{\cfive(8g)}{8\cdot 48^{256g^2}[K:\mathbb{Q}]}\log \left(N_{K/\mathbb{Q}}\Big(\prod_{\mathfrak{p} \, \mathrm{bad}\, s.s.}\mathfrak{p}\Big)\right) -\frac{\csix(8g)}{8}. \end{equation} Combining (\ref{Faltbad}) with an additional argument (Lemma 3.5 of \cite{Paz16}) shows that one can take the product in (\ref{Faltbad}) even over all primes of bad reduction, but at the expense of replacing the stable Faltings height $\hF(A)$ on the left-hand side with the relative Faltings height $\hF(A/K)$. This was used in \cite{Paz16} to establish (\ref{heightrank}). Next let us consider another consequence of Theorem \ref{machine}. Taking $q(A,K)=\mathrm{rank}(A(K))$ we see that $\Qf$ is $(S,1,1,0)-$admissible. Given an abelian variety $A/K$ of dimension $g\geq 1$, the quantity $q(A,L)$ is unbounded as $L$ runs over all finite extensions of $K$. Let $\constii(\cdot), \constiii(\cdot)$ be given, and choose $\czero(g)=1,\cone(g)=1,\ctwo(g)=0,\cthree(g)=1$ and $\cfour(g)=0$. Take an extension $L/K$ such that $\hF(A)< \cfive(g) q(A,L) - \csix(g)$.
Then apply Theorem \ref{machine} with $A/K$ replaced by $A/L$ to deduce the following corollary. \begin{cor}\label{rank intro} Let $A$ be an abelian variety over $\overline{\mathbb{Q}}$ of dimension $g\geq2$, let $\constii, \constiii:\N\to \R^+$. Then there exists a number field $L$ such that $A$ is defined over $L$, and for every finite set $s\subset S$, and for every non-singular, geometrically irreducible curve $C\subset A$ defined over $L(s)$, and satisfying $(i), (ii), (iii)$ from Theorem \ref{machine} (with the given maps $\constii(\cdot), \constiii(\cdot)$), we have $$\hF(\Jac(C))< \mathrm{rank}(\Jac(C)(L(s))).$$ \end{cor} Let us recall that Honda conjectured in \cite{Honda} page 98 that for any abelian variety $A/K$ there exists a constant $c_A>0$ such that if $M$ is an extension of the number field $K$ then $$\mathrm{rank}(A(M))\leq c_A [M:\mathbb{Q}].$$ As a consequence of Proposition \ref{Cexists} we can reduce Honda's conjecture to the case (of a strong form) of jacobians. \begin{cor}\label{Honda} Let $A$ be a principally polarized abelian variety over $K$ of dimension $g\geq2$ satisfying $({SS\Theta})$. Suppose that there exists a map $\ctwenty: \N\times \R\to \R^+$, increasing in both variables, and such that $$\mathrm{rank}(\Jac(C)(M))\leq \ctwenty(g_0,\hF(\Jac(C))) [M:\mathbb{Q}],$$ for all finite subsets $s\subset S$, for all non-singular, geometrically irreducible curves $C\subset A$, defined over $K(s)$, and of genus $g_0\leq (16^gg!)^2+16^gg!$, and for all finite extensions $M/K(s)$. 
Then there exist a finite subset $s_A\subset S$ and a map $\ctwentyseven:\N\times \R\times \R^+_0\to \R^+$ such that for each finite extension $M/K(s_A)$ $$\mathrm{rank}(A(M))\leq \ctwentyseven(g,\hF(A),\m(S))[M:\mathbb{Q}].$$ \end{cor} Using again Zarhin's trick, we conclude the following: if $\mathrm{rank}(J(M))\leq \ctwenty(g_0,\hF(J)) [M:\mathbb{Q}]$ holds for all finite subsets $s\subset S$, all extensions $L/K$ of degree at most $48^{256 g^2}$, all jacobians $J/L(s)$ of dimension $g_0\leq (16^{8g}(8g)!)^2+16^{8g}(8g)!$, and all finite extensions $M/L(s)$, then there exist a finite subset $s_A\subset S$ and a map $\ctwentyseven:\N\times \R\times \R^+_0\to \R^+$ such that for each finite extension $M/K(s_A)$ $$\mathrm{rank}(A(M))\leq 48^{256 g^2}\ctwentyseven(8g,8\hF(A),\m(S))[M:\mathbb{Q}].$$ In particular, if the strong form of Honda's conjecture (with $c_A=\ctwenty(g,\hF(A))$ as above) holds true for all jacobians, then Honda's conjecture holds true for all abelian varieties over $\Qbar$. A different contribution related to Honda's conjecture is in Pasten's recent paper \cite{Pas19}. We define the main tools in Section \ref{defs}. As Theorem \ref{theo1} is expressed using the Philippon height of Chow forms, and as we would like to obtain corollaries involving the Faltings height, we also gather what is needed to translate the inequalities into this other height theory. We prove Theorem \ref{theo1} in Section \ref{Bertinisec} (for background on Bertini theorems, a good reference is \cite{Jou83}). As usual we proceed by intersecting the projective variety $X$ with hyperplanes, but we use the fact that all their coefficients can be assumed to lie in our fixed set $S$ with finite Northcott number. To control the height and the degree we use previous work of R\'emond \cite{Remond}. The control on the genus is obtained through previous work of Cadoret and Tamagawa \cite{CaTa12}.
In Section \ref{sectionProposition} we explain how to apply Theorem \ref{theo1} to abelian varieties. In Section \ref{exmachina} we prove Theorem \ref{machine}, and in Section \ref{ProofHonda} we prove Corollary \ref{Honda}. Finally, in Section \ref{North}, we provide some methods to construct infinite sets with finite Northcott numbers. In particular, we show how to construct an infinite set $S\subset \Qbar$ with finite Northcott number such that $K(S)/K$ has uniformly bounded ramification for every number field $K$, with a bound that is independent of $K$. The authors thank Philipp Habegger and Ga\"el R\'emond for their helpful feedback on an earlier version of the text, and Hector Pasten for helpful comments on Honda's conjecture. It is our pleasure to thank the referee for their very dedicated work and for helpful feedback, which led to improvements in several places. The first author is supported by ANR-17-CE40-0012 Flair and ANR-20-CE40-0003-01 Jinvariant. \section{Definitions and height comparisons}\label{defs} \subsection{Height of algebraic numbers and projective points} Let $K$ be a number field and $M_K$ the set of places of $K$. We denote by $M_{K}^{0}$ the set of finite places and by $M_K^{\infty}$ the set of archimedean places. For any prime number $p$, we normalize the absolute values so that $\vert p\vert_v=p$ for $v$ archimedean, and $\vert p\vert_v=p^{-1}$ for $v$ a finite place dividing $p$. We denote by $K_v$ the completion of $K$ with respect to $\vert \cdot\vert_v$. Let $d=[K:\mathbb{Q}]$. For any place $v$ of $K$, let $d_v=[K_v:\mathbb{Q}_v]$. Let $N\geq 1$ be an integer and $x=(x_0,\ldots ,x_N)$ a vector of algebraic numbers in $K$, not all zero. Let $\displaystyle{\Vert x\Vert_v=\max_{0\leq i\leq N}\vert x_i\vert_v}$ for $v$ a non-archimedean place of $K$ and $\displaystyle{\Vert x\Vert_v=\Big(\sum_{0\leq i\leq N} \vert x_i\vert_v^2\Big)^{1/2}}$ for $v$ an archimedean place of $K$.
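With these normalizations the product formula $\sum_{v\in M_K}d_v\log\vert x\vert_v=0$ holds for every $x\in K^{\times}$; it is this identity that makes the heights below well defined on projective space. A quick numerical sanity check over $K=\mathbb{Q}$ (where $d_v=1$, the archimedean place contributes $\log\vert x\vert$, and a prime $p$ contributes $-v_p(x)\log p$) might look as follows; the helper names are ours, for illustration only:

```python
import math
from fractions import Fraction

def prime_factorization(n):
    # Exponents v_p(n) of a positive integer n, returned as a dict {p: v_p(n)}.
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def product_formula_sum(x):
    # Sum of log|x|_v over all places of Q: the archimedean term log|x|,
    # plus -v_p(x)*log(p) for each prime p, since |x|_p = p^{-v_p(x)}.
    x = Fraction(x)
    total = math.log(abs(x))
    for p, e in prime_factorization(abs(x.numerator)).items():
        total -= e * math.log(p)   # here v_p(x) = e > 0
    for p, e in prime_factorization(x.denominator).items():
        total += e * math.log(p)   # here v_p(x) = -e < 0
    return total

for x in (Fraction(12, 35), Fraction(-7, 8), Fraction(100, 1)):
    assert abs(product_formula_sum(x)) < 1e-12
print("product formula verified")
```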
For $P=(x_0:\cdots:x_N)\in\mathbb{P}^{N}_{\Qbar}$ we define the $l_2$-height and the $l_\infty$-height (or Weil height) as \begin{alignat*}1 h_2(P)&=\sum_{v\in{M_K}}\frac{d_v}{d}\log \Vert x\Vert_v,\\ h_{\infty}(P)&=\sum_{v\in{M_K}}\frac{d_v}{d}\log \max_{0\leq i\leq N}\vert x_i\vert_v, \end{alignat*} where $K$ is any number field containing the coordinates $x_0,\ldots,x_N$. Dividing by the degree $d=[K:\Q]$ makes the value independent of the particular choice of $K$, and thanks to the product formula these heights are well-defined on $\mathbb{P}^{N}_{\Qbar}$. Moreover, they are comparable; we have \begin{equation}\label{compar} h_{\infty}(P)\leq h_2(P) \leq h_{\infty}(P)+\frac{1}{2}\log(N+1). \end{equation} Besides the height of a projective point we also need to measure the height of an algebraic number $x$. By abuse of notation we write \begin{alignat*}1 h_2(x)&=h_2(P),\\ h_{\infty}(x)&=h_\infty(P), \end{alignat*} where $P=(1:x)\in \mathbb{P}^{1}_{\Qbar}$. \subsection{Height of a polynomial with algebraic coefficients} If $P=x_0X^N+\cdots +x_{N-1}X+x_N\in \Qbar[X]$ is a non-zero polynomial we define $h_2(P)=h_2(x_0:\cdots:x_N)$. More generally, if $$P=\sum_{i_1=0}^{N}\cdots \sum_{i_n=0}^{N}x_{{i_1}\cdots{i_n}}X_1^{i_1}\cdots X_n^{i_n}\in \Qbar[X_1,\ldots,X_n]\backslash\{0\}$$ then we define $$h_2(P)=h_2(\cdots: x_{{i_1}\cdots{i_n}} :\cdots).$$ Analogously, we define $$h_\infty(P)=h_\infty(\cdots: x_{{i_1}\cdots{i_n}} :\cdots).$$ \subsection{Height of a Chow form}\label{Chow} Let us now consider $X$ a geometrically irreducible projective variety inside $\mathbb{P}^N$ defined over the number field $K$. We follow \cite{Remond} to define the height of $X$. Let $F$ be its Chow form. 
We define the height of $X$ by $$h_{\mathbb{P}^N}(X)=\sum_{v\in{M_K^{0}}}\frac{d_v}{d}\log \Vert F\Vert_v +\sum_{v\in{M_K^{\infty}}}\frac{d_v}{d}\Big[(\dim X +1)(\deg X)\sum_{j=1}^N \frac{1}{2j}+\int_{S_{N+1}^{\dim X +1}}\log \vert F_v\vert \tau\Big], $$ where $\tau$ is the invariant measure of total mass 1 on $S_{N+1}^{\dim X +1}$, the $(\dim X +1)$-power of the unit sphere $S_{N+1}$, and in the expression $\Vert F\Vert_v$ we identify $F$ with the vector of its coefficients. Again, this definition is independent of the choice of $K$. When $X$ is a general closed subscheme of $\mathbb{P}^N$ defined over a number field, we define its height as the sum of the previously defined heights of its irreducible components. We note that $h_{\mathbb{P}^N}(\cdot)$ is non-negative (see for instance \cite{Phi3} paragraph 1 page 346). \subsection{Height of an abelian variety} For the special case of abelian varieties, we will mostly use the Faltings height, and in some estimates also the theta height. We recall their definition and give a brief summary of some useful properties and comparisons of these heights. \subsubsection{Faltings height} Let $A$ be an abelian variety of dimension $g\geq1$ defined over a number field $K$. Let ${\mathcal O}_K$ be the ring of integers of $K$ and let $\pi\colon {\mathcal A}\longrightarrow \Spec(\mathcal{O}_K) $ be the N\'eron model of $A$ over $\Spec(\mathcal{O}_K)$. Let $\varepsilon\colon \Spec(\mathcal{O}_K)\longrightarrow {\mathcal A}$ be the zero section of $\pi$ and let $\omega_{{\mathcal A}/\mathcal{O}_K}$ be the pullback along $\varepsilon$ of the maximal exterior power (the determinant) of the sheaf of relative differentials $$\omega_{{\mathcal A}/\mathcal{O}_K}:=\varepsilon^{\star}\Omega^g_{{\mathcal A}/\mathcal{O}_K}\;.$$ \noindent For any archimedean place $v$ of $K$, let $\sigma$ be an embedding of $K$ in $\mathbb{C}$ associated to $v$. 
The associated line bundle $$\omega_{{\mathcal A}/\mathcal{O}_K,\sigma}=\omega_{{\mathcal A}/\mathcal{O}_K}\otimes_{{\mathcal O}_K,\sigma}\mathbb{C}\simeq H^0({\mathcal A}_{\sigma}(\mathbb{C}),\Omega^g_{{\mathcal A}_\sigma}(\mathbb{C}))\;$$ is equipped with the $L^2$-metric $\Vert.\Vert_{v}$ given by $$\Vert s\Vert_{v}^2=\frac{i^{g^2}}{\gamma^{g}}\int_{{\mathcal A}_{\sigma}(\mathbb{C})}s\wedge\overline{s}\;$$ where $\gamma>0$ is a normalizing constant. In this article we choose $\gamma=(2\pi)^2$. The projective ${\mathcal O}_K$-module $\omega_{{\mathcal A}/\mathcal{O}_K}$ is of rank $1$ and together with the hermitian norms $\Vert.\Vert_{v}$ at infinity it defines a hermitian line bundle $\overline{\omega}_{{\mathcal A}/\mathcal{O}_K}=({\omega}_{{\mathcal A}/\mathcal{O}_K}, (\Vert .\Vert_v)_{v\in{M_K^\infty}})$ over $\mathcal{O}_K$. It has a well defined Arakelov degree $\widehat{\degr}(\overline{\omega}_{{\mathcal A}/\mathcal{O}_K})$, given by $$\widehat{\degr}(\overline{\omega}_{{\mathcal A}/\mathcal{O}_K})=\log\#\left({\omega}_{{\mathcal A}/\mathcal{O}_K}/{s{\mathcal O}}_K\right)-\sum_{v\in{M_{K}^{\infty}}}d_v\log\Vert s\Vert_{v}\;,$$ where $s$ is any non-zero section of ${\omega}_{{\mathcal A}/\mathcal{O}_K}$. The resulting number does not depend on the choice of $s$ in view of the product formula on the number field $K$. The height of $A/K$ is defined as $$\hF(A/K):=\frac{1}{[K:\mathbb{Q}]}\widehat{\degr}(\overline{\omega}_{{\mathcal A}/\mathcal{O}_K})\;.$$ It does not depend on any choice of projective embedding of $A$. We emphasise that by our choice of normalization $\gamma=(2\pi)^2$ the Faltings height $\hF(\cdot)$ is non-negative (see Remarque 3.3 in \cite{Paz19} and a detailed proof in the appendix of \cite{GaRe3} for the semi-stable case, the general case follows by property (2) below). Faltings \cite{Falt} used the normalization $\gamma=2$ so that $\hF(A/K)=h_{Faltings}(A/K)+\frac{g}{2}\log(2\pi^2)$. 
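As a sanity check (a short verification from the definitions above, not part of the original argument), the effect of the normalizing constant $\gamma$ on $\hF$ can be traced explicitly. Writing $\Vert s\Vert_{v,\gamma}$ for the norm defined with parameter $\gamma$,

```latex
\Vert s\Vert_{v,\gamma}^2
  = \frac{i^{g^2}}{\gamma^{g}}\int_{{\mathcal A}_{\sigma}(\mathbb{C})}s\wedge\overline{s}
  = \Big(\frac{2}{\gamma}\Big)^{g}\,\Vert s\Vert_{v,2}^2,
\qquad\text{so}\qquad
\log\Vert s\Vert_{v,\gamma}
  = \log\Vert s\Vert_{v,2}+\frac{g}{2}\log\frac{2}{\gamma}.
```

Since $\widehat{\degr}$ subtracts $\sum_{v\in M_K^\infty} d_v\log\Vert s\Vert_v$ and $\sum_{v\in M_K^\infty} d_v=[K:\mathbb{Q}]$, dividing by $[K:\mathbb{Q}]$ gives $\hF(A/K)=h_{Faltings}(A/K)+\frac{g}{2}\log\frac{\gamma}{2}$, which for $\gamma=(2\pi)^2$ is exactly the shift $\frac{g}{2}\log(2\pi^2)$ stated above.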
We recall here three classical properties (see for instance \cite{Del}, in particular page 35) used in the sequel: \begin{enumerate} \item If $A=A_1\times A_2$ is a product of abelian varieties over $K$ then one has $\hF(A/K)=\hF(A_1/K)+\hF(A_2/K)$. \item If $K'/K$ is a number field extension then one has $\hF(A/K')\leq \hF(A/K)$. \item If $A/K$ is semi-stable then the height is invariant under number field extensions. \end{enumerate} \begin{defin} \label{faltings} The stable height of $A/K$ is defined as $\hF(A):=\hF(A/K')$ for any number field extension $K'/K$ such that $A/K'$ is semi-stable. \end{defin} \subsubsection{Theta height} We refer the reader to the classical work of Mumford on theta structures, recalled in some detail in paragraph 2.3 of \cite{Paz12}. Let $A$ be an abelian variety over a number field $K$. Assume $A/K$ is given with a theta structure of level $4$. It gives in particular an explicit embedding of $K$-varieties $\Theta: A\to \mathbb{P}^{4^{2g}-1}_K$, and we define $$h_{\Theta}^{(4)}(A,\Theta):= h_2(\Theta(0_A)).$$ We note that $h_{\Theta}^{(4)}(\cdot,\cdot)$ is also a non-negative height. \subsubsection{Useful inequalities} We gather here some technical inequalities between these different heights. Let us start by considering a non-singular curve $C$ in $\mathbb{P}_{\Qbar}^N$ with $N\geq4$, of degree $\deg C$ and genus $g$. Choose a ground field $K$ such that $\Jac(C)/K$ has $({SS\Theta})$. By Th\'eor\`eme 1.3 and Proposition 1.1 page 760 of \cite{Remond} one has \begin{equation}\label{remlem} h_{\Theta}^{(4)}(\Jac(C),\Theta)\leq (2\deg C+1)^2\log(N+1)^4 \, m^{20m8^{g}}(\hp(C)+1), \end{equation} where $m=4g+2\deg C -2$.
As we will also need a comparison between the Faltings height and the theta height of level $r=4$ of a principally polarized abelian variety $A/K$ of dimension $g$ with $({SS\Theta})$, we extract the following Bost-David comparison from \cite{Paz12}, Corollary 1.3 page 21 (due to the different normalization $\gamma=2\pi$ used there, our height differs by $g\log(2\pi)/2$ from the one in \cite{Paz12}); we use the factor $7$ instead of $6$ to take care of this different normalization: \begin{equation*} \vert h_{\Theta}^{(4)}(A,\Theta)-\frac{1}{2} h_{F}(A)\vert \leq 7 \cdot 4^{2g}\cdot \log(4^{2g}) \cdot\log\Big(\max\{1, h_{\Theta}^{(4)}(A,\Theta)\}+2\Big). \end{equation*} It follows that there exists an explicitly computable map $\cthirtyone:\N\to \R^+$ such that \begin{equation}\label{thetafaltings} h_{\Theta}^{(4)}(A,\Theta)-\cthirtyone(g)\leq h_{F}(A)\leq 3h_{\Theta}^{(4)}(A,\Theta)+\cthirtyone(g). \end{equation} Finally, we need a Philippon height -- Faltings height comparison lemma. \begin{lem}\label{chowfal} Let $A$ be a principally polarized abelian variety of dimension $g$ over $\overline{\mathbb{Q}}$ given with a projective embedding $\Theta: A\to \mathbb{P}^{4^{2g}-1}$ compatible with a theta structure of level $r=4$, corresponding to the 16th power of a symmetric theta divisor. There is an explicitly computable map $\cseven:\N\to \R^+$ such that $$\hp(A)\leq \cseven(g) (\hF(A)+1),$$ where $N=16^g-1$.
\end{lem} \begin{proof} We use Proposition 3.9 of \cite{DavPhi} page 665, where the authors prove that for any algebraic subvariety $V\subset A$ the inequality $$\vert \widehat{h}_{\mathbb{P}^N}(V)-h_{\mathbb{P}^N}(V)\vert \leq \ceight(g, \dim V, \deg V, h_{\Theta}^{(4)}(A,\Theta))$$ holds, where ${h}_{\mathbb{P}^N}(V)$ is the height of the variety $V$ as defined previously in paragraph \ref{Chow} (it is the same definition as the one in \cite{DavPhi} page 644), the height $\widehat{h}_{\mathbb{P}^N}(V)$ is defined in \cite{Phi} before Proposition 9 and the quantity $\ceight(g, \dim V, \deg V, h_{\Theta}^{(4)}(A,\Theta))>0$ can be taken to be $(4^{g+1}h_{\Theta}^{(4)}(A,\Theta)+3g\log2)\cdot(\dim V+1)\cdot\deg V$. Taking $V=A$, the abelian variety we focus on, we have $\widehat{h}_{\mathbb{P}^N}(A)=0$ (see Proposition 9 item (vii) page 281 of \cite{Phi}), $\dim A=g$, $\deg A = 16^g g!$ (see \cite{Mum} page 150) and \begin{equation}\label{h1} h_{\mathbb{P}^N}(A)\leq \cnine(g) (h_{\Theta}^{(4)}(A,\Theta)+1), \end{equation} where $\cnine(g)>0$ only depends on the dimension of $A$, and one can take $\cnine(g)=4^{3g+1}(g!) (g+1)$. Plugging the estimate from (\ref{thetafaltings}) into (\ref{h1}) and using that $\hF(A)\geq 0$ concludes the proof. \end{proof} \section{Bertini with height control}\label{Bertinisec} As a first step, we state a classical Bertini Theorem; then we use a result of R\'emond to control the height of a curve drawn on a projective variety. \begin{thm}\label{Bertini}(Bertini's Theorem) Let $X$ be a non-singular closed subvariety of $\mathbb{P}^N_{\overline{\mathbb{Q}}}$ with $\dim X\geq2$. There exists a hyperplane $H_0\subset \mathbb{P}^N_{\overline{\mathbb{Q}}}$ not containing $X$ and such that $X\cap H_0$ is non-singular, geometrically irreducible of dimension $\dim X -1$. Furthermore, the set of such hyperplanes forms an open dense subset $U$ of the complete linear system $\vert H_0\vert$, viewed as a projective space.
\end{thm} \begin{proof} This is a particular case of Theorem II.8.18 of \cite{Hart} page 179, see also Remark 7.9.1 of \cite{Hart} page 245. \end{proof} \begin{cor}\label{control} Let $X$ be as in Theorem \ref{Bertini}, and let $S$ be an infinite set of algebraic numbers. There exists a hyperplane $H_0$ defined with coefficients in $S$ such that $X\cap H_0$ is non-singular, geometrically irreducible, of dimension $\dim X -1$. \end{cor} \begin{proof} Let $\check{\mathbb{P}}^{N}$ be the dual projective space. Consider the isomorphism $j:\mathbb{P}^N\longrightarrow \check{\mathbb{P}}^{N}$ defined by $j([s_0:\cdots : s_N])=H_{s_0,\ldots, s_N}$, where $H_{s_0,\ldots, s_N}$ is the hyperplane defined by $s_0x_0+\cdots+s_Nx_N=0$. We look at the set $U$ obtained in Theorem \ref{Bertini}. The set $j^{-1}(U)$ is non-empty and open in $\mathbb{P}^{N}$. Let $P\in \Qbar[x_0,\ldots,x_N]$ be a non-zero homogeneous polynomial. Since $S$ is infinite there exists $[s_0:\cdots:s_N]\in \mathbb{P}^N$ with all $s_i\in S$ and $P(s_0,\ldots, s_N)\neq 0$. Hence, $$j^{-1}(U)\cap \{[s_0:\cdots:s_N]\in{\mathbb{P}^N} \,\vert\, s_0, \ldots, s_N\in{S}\;\, \textrm{not}\; \textrm{all}\; \textrm{zero}\}\neq \emptyset,$$ and this implies the claim. \end{proof} The following result of R\'emond is the main tool for Theorem \ref{theo1}. It is a direct consequence of Proposition 2.3 page 765 of \cite{Remond}. \begin{prop}(R\'emond)\label{Remond} Let $X$ be a closed subscheme of $\mathbb{P}^N_{\overline{\mathbb{Q}}}$. Let $P_1, \ldots, P_s$ be homogeneous polynomials of $\overline{\mathbb{Q}}[X_0,\ldots, X_N]$ of degree at most $D$ and of height $h_2(\cdot)$ at most $H$. 
If $\mathcal{V}$ is the family of irreducible components of the intersection $Y$ of $X$ with the zeros of $P_1,\ldots, P_s$, then $$\sum_{V\in{\mathcal{V}}}D^{\dim V}\deg V \leq D^{\dim X} \deg X,$$ and if one denotes $d=\min\{\dim V \vert V\in{\mathcal{V}}\}$, $$\sum_{V\in{\mathcal{V}}}D^{\dim V +1}\hp(V)\leq D^{\dim X +1} \hp(X) + (\dim X - d) D^{\dim X} (\deg X) H.$$ In particular, $\deg Y\leq D^{\dim X-d}\deg X$ and $$\hp(Y)\leq D^{\dim X -d} \hp(X) +(\dim X- d)D^{\dim X -d -1}(\deg X) H.$$ \end{prop} R\'emond's original version is slightly stronger, as he only requires a modified height to be bounded by $H$. The height used in the inequalities, however, is the same as the one used in the present work. \begin{prop}\label{control2} Let $S$ be an infinite set of algebraic numbers with finite Northcott number $\m(S)$. Let $X$ be a non-singular closed subvariety of $\mathbb{P}^N_{\overline{\mathbb{Q}}}$ with $\dim X\geq2$. Then there exists a hyperplane $H_0$ defined with coefficients in $S$ and such that $X\cap H_0$ is non-singular, geometrically irreducible with dimension $\dim X -1$, with $\deg(X\cap H_0)\leq \deg X$, and $$\hp(X\cap H_0) \leq \hp(X) +(\deg X) (N+1)(\m(S)+2).$$ \end{prop} \begin{proof} First let us replace $S$ with its infinite subset of elements $s$ with $h_\infty(s)<\m(S)+1$. By Corollary \ref{control} we get a hyperplane $H_0$, defined by a non-zero linear form $F_0$, with coefficients $s_0,\ldots,s_N \in S$. Thus, $$h_\infty(F_0)=h_\infty(s_0:\cdots:s_N)\leq h_\infty(1:s_0:\cdots:s_N)\leq \sum_{i=0}^N h_\infty(1:s_i)\leq (N+1)(\m(S)+1).$$ Finally, using that $h_2(F_0)\leq h_\infty(F_0)+\frac{1}{2}\log(N+1)\leq (N+1)(\m(S)+2)$, and applying Proposition \ref{Remond} with $H=(N+1)(\m(S)+2)$, $d=\dim X-1$, and $D=1$ yields the claim. \end{proof} We are now in position to prove Theorem \ref{theo1}. We will prove the following, slightly more precise, result. 
\begin{cor}\label{curves} (Bertini with height control) Let $S$ be an infinite set of algebraic numbers with finite Northcott number $\m(S)$. Let $X$ be a non-singular closed subvariety of $\mathbb{P}^N_{\overline{\mathbb{Q}}}$ with $\dim X\geq2$. Then there exists a non-singular, geometrically irreducible curve $C$ on $X$, defined over a finite extension of the field of definition of $X$ by finitely many elements of $S$, with $\deg C\leq \deg X$, and $$\hp(C) \leq \hp(X) +(\dim X)(\deg X) (N+1)(\m(S)+2).$$ Moreover, the genus of $C$ may be assumed to be bounded from above by $(\deg X)^2+\deg X$, and if $X=A$ is a principally polarized abelian variety, one may assume in addition that there is a closed immersion $A\to \Jac(C)$ over the field of definition of $C$. \end{cor} \begin{proof} We apply Proposition \ref{control2} to the successive intersections $X\cap H_1\cap\cdots \cap H_i$, where $i\geq1$ is an integer. We reach dimension $1$ in $\dim X-1$ steps; the curve $C$ will be $X\cap H_1\cap\cdots\cap H_{g-1}$ if $g=\dim X$. The control on the genus of $C$ is based on Castelnuovo's criterion of \cite{ACGH} page 116 (see also Remark 2.1 of \cite{CaTa12}). In the case where $X$ is a principally polarized abelian variety $A$ and both $A$ and $C$ are defined over an infinite field $K$ of characteristic zero, the closed immersion $A\to \Jac(C)$ is obtained from studying the fundamental groups of the successive intersections (independently of the choice of hyperplanes). The closed immersion $A\cap H_1\to A$ induces a surjective morphism between \'etale fundamental groups $\pi_1(A\cap H_1)\to \pi_1(A)$. Iterating $g-1$ times, the closed immersion $C\to A$ induces a surjective homomorphism $\pi_1(C)\to \pi_1(A)$ over $\mathrm{Gal}(\overline{K}/K)$. This implies that the Albanese morphism $\Jac(C)\to A$ is surjective with connected kernel. Hence the dual morphism $\check{A}\to\check{\Jac(C)}$ is a closed immersion.
As both $A$ and $\Jac(C)$ are principally polarized, we have a closed immersion $A\to \Jac(C)$. For more details see Lemme X.2.10 of \cite{SGA}, recalled in Lemma 4.1 of \cite{CaTa12}, and the arguments detailed in the last section of \cite{CaTa12}. The only difference with Theorem 1.2 of \cite{CaTa12} is that the hyperplanes we chose have bounded height. \end{proof} \section{Theorem \ref{theo1} applied to abelian varieties}\label{sectionProposition} We turn to an application of Theorem \ref{theo1} to abelian varieties. In particular, we prove Proposition \ref{Cexists}. \begin{lem}\label{tech} Suppose $N\geq 4$. There exists a map $\cten: \N^3\to \R^+$ such that the following holds. For any non-singular geometrically irreducible curve $C$ in $\mathbb{P}^N$ of genus $g_0$, we have \begin{equation}\label{h2} \hF(\Jac(C))\leq \cten(g_0,\deg C,N)(h_{\mathbb{P}^N}(C)+1). \end{equation} \end{lem} \begin{proof} After an extension of the base field we can assume that $\Jac(C)$ satisfies $({SS\Theta})$. We conclude from inequality (\ref{remlem}) that $h_{\Theta}^{(4)}(\Jac(C),\Theta)\leq \ctwentyeight(g_0,\deg C,N)(\hp(C)+1)$ where $\ctwentyeight(g_0,\deg C,N)>0$. Using (\ref{thetafaltings}) with $A=\Jac(C)$, and the fact that $\hp(C)\geq 0$ yields the claim. \end{proof} Next we prove Proposition \ref{Cexists}. For the convenience of the reader we recall the statement. \begin{Proposition}\label{corona} Let $S\subset \Qbar$ be an infinite set with finite Northcott number $m(S)$. Then there exists a map $\constiii:\N\to \R^+$ such that the following holds. 
If $K$ is a number field, and $A/K$ is a principally polarized abelian variety defined over $K$ of dimension $g\geq 2$, and satisfying $({SS\Theta})$, then there exists a finite set $s\subset S$ and a non-singular, geometrically irreducible curve $C\subset A$ defined over $K(s)$, and of genus $g_0$, such that \begin{description} \item[(i)] there exists a closed immersion $A\to \Jac(C)$, defined over the field of definition of $C\subset A$, \item[(ii)] $g_0\leq (16^gg!)^2+16^gg!$, \item[(iii)] $\hF(\Jac(C))\leq \constiii(g)(\hF(A)+m(S)+1)$. \end{description} \end{Proposition} \begin{proof} By assumption we can embed our abelian variety $A$ via $\Theta: A\to \mathbb{P}_K^{N}$ (compatible with a theta structure of level $r=4$, corresponding to the 16th power of a symmetric theta divisor), where $N=4^{2g}-1$. By Corollary \ref{curves} there exists a non-singular geometrically irreducible curve $C$ on $A\subset \mathbb{P}^{N}$ defined over a finite extension $K'/K$ obtained by adjoining elements of $S$, and such that \begin{equation}\label{g_0} \hp(C) \leq \hp(A) +g(\deg A)(N+1)(\m(S)+2), \end{equation} $\deg C\leq \deg A$, and with the genus $g_0$ of $C$ bounded from above by $(\deg A)^2+\deg A$, and such that there is a closed immersion $A\to \Jac(C)$, defined over the field of definition of $C\subset A$. Since $\deg A=16^g g!$, we get the first two properties. Next we prove (iii). Because $A$ is principally polarized, and by Lemma \ref{chowfal}, one has \begin{equation}\label{chowfaltings} \hp(A)\leq \cseven(g)(\hF(A)+1). \end{equation} By Lemma \ref{tech} one has \begin{equation}\label{curve} \hF(\Jac(C))\leq \cten(g_0,\deg C,N)(\hp(C)+1). 
\end{equation} Using successively (\ref{curve}), (\ref{g_0}), and (\ref{chowfaltings}), this gives \begin{equation*} \hF(\Jac(C))\leq \cthirteen(g,g_0,\deg C,N)\hF(A)+\cfourteen(g,g_0,\m(S), \deg A, \deg C, N), \end{equation*} with the quantities \begin{alignat*}1 \cthirteen(g,g_0,\deg C,N)&=\cseven(g)\cten(g_0,\deg C,N),\\ \cfourteen(g,g_0,\m(S),\deg A,\deg C,N)&=\cten(g_0,\deg C,N)\Big(\cseven(g)+1\\ &\phantom{=}\quad +g(\deg A) (N+1)(\m(S)+2)\Big). \end{alignat*} Finally, note that $g_0\leq (16^gg!)^2+16^gg!$, $\deg C\leq \deg A=16^g g!$, and $N=16^g-1$, and since $\cten(\cdot,\cdot,\cdot)$ can be assumed to be increasing in each variable, we get the required map $\constiii:\N\to \R^+$ as claimed. \end{proof} \section{Proof of Theorem \ref{machine}}\label{exmachina} Let us start with the following lemma. \begin{lem}\label{lemme clef} Let $A$ be an abelian variety of dimension $g$ over a number field $K$, and let $C\subset A$ be a non-singular geometrically irreducible algebraic curve of genus $g_0$ over a number field extension $K'$ of $K$ such that there exists a closed immersion $A\to \Jac(C)$ defined over $K'$. Then there exists an abelian variety $B$ defined over $K'$ such that $\Jac(C)$ is $K'$-isogenous to $A\times B$. \end{lem} \begin{proof} By abuse of notation, we may view $A$, via the closed immersion, as an abelian subvariety of $\Jac(C)$ defined over $K'$. From Poincar\'e's Reducibility Theorem we get that $\Jac(C)$ is $K'$-isogenous to $A\times B$, where $B$ is the quotient of $\Jac(C)$ by the image of $A$. \end{proof} We are now ready to prove Theorem \ref{machine}.
\begin{proof} By hypothesis there exists a finite set $s\subset S$ and a non-singular, geometrically irreducible curve $C\subset A$, defined over $K(s)$, and of genus $g_0$, such that \begin{description} \item[(i)] there exists a closed immersion $A\to \Jac(C)$ defined over the field of definition of $C\subset A$, \item[(ii)] $g_0\leq \constii(g)$, \item[(iii)] $\hF(\Jac(C))\leq \constiii(g)(\hF(A)+m(S)+1)$. \end{description} By Lemma \ref{lemme clef}, we know that $\Jac(C)$ is $K(s)$-isogenous to $A\times B$ for some abelian variety $B$ over $K(s)$. By hypothesis we have \begin{equation}\label{HF-ineq} \hF(\Jac(C))\geq \cthree(g) q(\Jac(C),K(s))-\cfour(g). \end{equation} Next note that $\dim(\Jac(C))\geq \dim A\geq 2$, and hence by property (I) \begin{equation*} q(\Jac(C),K(s))\geq q(A\times B,K(s))-\ctwo(g_0), \end{equation*} and by property (P) \begin{equation*} q(A\times B,K(s))\geq \cone(g)q(A,K(s)), \end{equation*} and finally by property (E) \begin{equation*} q(A,K(s))\geq \czero(g)q(A,K). \end{equation*} Plugging the last three inequalities into (\ref{HF-ineq}) yields \begin{equation*} \hF(\Jac(C))\geq \czero(g)\cone(g)\cthree(g) q(A,K)-\ctwo(g_0)\cthree(g)-\cfour(g). \end{equation*} Finally, we apply hypothesis $(iii)$ and use that $\ctwo(\cdot)$ is increasing and $g_0\leq \constii(g)$ to conclude \begin{equation*} \hF(A)\geq \frac{\czero(g)\cone(g)\cthree(g)}{\constiii(g)} q(A,K)-\frac{\ctwo(\constii(g))\cthree(g)}{\constiii(g)}-\frac{\cfour(g)}{\constiii(g)}-\m(S)-1. \end{equation*} \end{proof} \section{Proof of Corollary \ref{Honda}}\label{ProofHonda} We now turn to the proof of Corollary \ref{Honda}. \begin{proof} By Proposition \ref{Cexists} there exists a finite subset $s_A\subset S$ and a non-singular geometrically irreducible curve $C\subset A$, defined over $K(s_A)$, and of genus $g_0$, satisfying (i), (ii) and (iii). 
By Lemma \ref{lemme clef} we have that $\Jac(C)$ is $K(s_A)$-isogenous to $A\times B$ for some abelian variety $B$ defined over $K(s_A)$. Let $M/K(s_A)$ be a finite extension. Then, with $\constii(g)=(16^gg!)^2+16^gg!$, we have \begin{alignat*}2 \mathrm{rank}(A(M))&\leq \mathrm{rank}((A\times B)(M))&\\ &= \mathrm{rank}(\Jac(C)(M))&\\ &\leq \ctwenty(g_0,\hF(\Jac(C)))[M:\mathbb{Q}] & (\text{by hypothesis})\\ &\leq \ctwenty(\constii(g),\hF(\Jac(C)))[M:\mathbb{Q}] &(\text{by (ii)}) \\ &\leq \ctwenty(\constii(g),\constiii(g)(\hF(A)+m(S)+1))[M:\mathbb{Q}] &(\text{by (iii)}). \end{alignat*} \end{proof} \section{Sets and field extensions with finite Northcott number}\label{North} Recall Definition \ref{Northcott} of the Northcott number. In some applications of Theorem \ref{theo1} and Theorem \ref{machine} it is essential that the curve $C$ is defined over an extension $K(s)/K$ with uniformly bounded (or otherwise prescribed) ramification, with a bound independent of $K$. Hence, we need to construct an infinite set $S\subset \Qbar$ with finite Northcott number and such that $K(S)/K$ has uniformly bounded ramification, i.e., the ramification indices $e(\mathfrak{B}/\mathfrak{B}\cap K)$ are uniformly bounded as $\mathfrak{B}$ runs over the finite prime ideals of all finite extensions of $K$ contained in $K(S)$. It is also required that this bound is independent of $K$. Here we show that such a set $S$ exists. We also recall two other methods to produce infinite sets with finite Northcott number. \begin{lem}\label{intgen} Let $L$ be a number field of degree $d$. Then there exists an algebraic integer $\alpha$ in $L$ with $L=\mathbb{Q}(\alpha)$ and \begin{alignat*}1 h_\infty(\alpha)\leq \frac{1}{d}\log|\Delta_L|. \end{alignat*} \end{lem} \begin{proof} We make use of the exponential height given by $H(x)=\exp(h_\infty(x))$ for any algebraic number $x$.
If $L$ has a real embedding then the claim follows from \cite{VaWi13} Theorem 1.2 since the proof of this result yields an algebraic integer. Suppose the field $L$ has no real embeddings. Let $\sigma_{1},\overline{\sigma_{1}},\ldots,\sigma_{s},\overline{\sigma_{s}}$ be the $s$ pairs of complex conjugate embeddings of $L$. With $\sigma=(\sigma_1,\ldots,\sigma_{s})$ we get an embedding $\sigma:L\longrightarrow \mathbb{C}^s.$ The image of the ring of integers $\mathcal{O}_L$ is a lattice with determinant $2^{-s}\sqrt{|\Delta_L|}$. We define a convex set $S_T$ in $\mathbb{C}^s$, symmetric with respect to the origin, by \begin{alignat*}1 S_T=\{(x_1,\ldots,x_{s})\in \mathbb{C}^s\,\vert\,&|\Im(x_1)|\leq T, |\Re(x_1)|<1,|x_2|<1,\ldots,|x_{s}|<1\}. \end{alignat*} So $S_T$ has volume $4\pi^{s-1}T$. By Minkowski's convex body theorem we conclude that $S_T$ contains a non-zero lattice point whenever $T>(\pi/4)(2/\pi)^{s}\sqrt{|\Delta_L|}$. But such a lattice point $\sigma(\alpha)$ must come from a primitive point $\alpha$: otherwise there exists an embedding $\sigma'$, different from $\sigma_1(\cdot)$, with $\sigma_1(\alpha)=\sigma'(\alpha)$. If $\sigma'(\cdot)=\overline{\sigma_1}(\cdot)$ then $\Im(\sigma_1(\alpha))=0$, and if $\sigma'(\cdot)\neq \overline{\sigma_1}(\cdot)$ then $|\sigma_1(\alpha)|=|\sigma'(\alpha)|<1$. In both cases we conclude that $|\sigma_1(\alpha)\cdots \sigma_{s}(\alpha)|<1$, so the norm $|N_{L/\mathbb{Q}}(\alpha)|=|\sigma_1(\alpha)\cdots \sigma_{s}(\alpha)|^2<1$ is a non-negative rational integer, hence zero. This gives $\alpha=0$, contradicting the fact that the lattice point $\sigma(\alpha)$ is non-zero. As $\alpha$ is integral we get $H(\alpha)\leq (T^2+1)^{1/d}$, and since $|\Delta_L|\geq 2$ we may conclude $H(\alpha)\leq |\Delta_L|^{1/d}$. Taking the logarithm yields the claimed inequality. \end{proof} Since $|\Delta_L|=|\Delta_K|^{[L:K]}$ whenever $L/K$ is a finite unramified extension one can conclude from the previous lemma that if $F/K$ is an infinite unramified extension then $\m(F)\leq\frac{1}{[K:\Q]}\log\vert \Delta_{K}\vert $. In general it is not so easy to decide whether a number field $K$ has an infinite unramified extension (cf. \cite{Mai00}).
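For real quadratic fields, Lemma \ref{intgen} can be sanity-checked numerically: taking $L=\Q(\sqrt{m})$ with $m>1$ squarefree, the generator $\sqrt{m}$ is an algebraic integer of height $\frac{1}{2}\log m$, while the bound of the lemma is $\frac{1}{2}\log|\Delta_L|$. The following sketch (an illustration only, not part of the proof; the discriminant formula is hard-coded for squarefree $m$) verifies the inequality for a few values of $m$:

```python
import math

# Check of Lemma intgen for L = Q(sqrt(m)), m > 1 squarefree:
# alpha = sqrt(m) generates L and h_inf(sqrt(m)) = (1/2) log m,
# while the lemma's bound is (1/d) log|Delta_L| with d = 2.
def disc_quadratic(m):
    # discriminant of Q(sqrt(m)) for squarefree m
    return m if m % 4 == 1 else 4 * m

def check(m):
    h_alpha = 0.5 * math.log(m)                # height of sqrt(m)
    bound = 0.5 * math.log(disc_quadratic(m))  # (1/d) log|Delta_L|
    return h_alpha <= bound

assert all(check(m) for m in [2, 3, 5, 6, 7, 11, 13])
```

Note that for $m\equiv 1 \pmod 4$ the two sides coincide, so the bound of the lemma is sharp in this family.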
However, the following proposition shows that there exists an infinite set $S\subset \Qbar$ with finite Northcott number such that $K(S)/K$ has uniformly bounded ramification, with a bound independent of $K$. \begin{prop}\label{NN2Trem} There exists an infinite set $S\subset \Qbar$ with finite Northcott number, and $E_S\in \N$ such that for every number field $K$ and every finite extension $L/K$ with $L\subset K(S)$, we have $e(\mathfrak{B}/\mathfrak{B}\cap K)\leq E_S$ for every prime ideal $\mathfrak{B}$ in $\Oseen_L$. \end{prop} \begin{proof} By, e.g., \cite{Mar78} there exists a number field $K_0$ that admits an infinite unramified extension $F/K_0$. Then $KF/KK_0$ is infinite and unramified (see Proposition B.2.4 page 592 of \cite{BoGu07}). Hence, if $L/K$ is finite and $L\subset KF$ then for any prime ideal $\mathfrak{Q}$ in $\Oseen_{LK_0}$ we have $e(\mathfrak{Q}/\mathfrak{Q}\cap K)=e(\mathfrak{Q}/\mathfrak{Q}\cap KK_0)e(\mathfrak{Q}\cap KK_0/\mathfrak{Q}\cap K)= e(\mathfrak{Q}\cap KK_0/\mathfrak{Q}\cap K)\leq [KK_0:K]\leq [K_0:\Q]$. In particular, $e(\mathfrak{B}/\mathfrak{B}\cap K)\leq [K_0:\Q]$ for every prime ideal $\mathfrak{B}$ in $\Oseen_L$. Hence, with $S=F$ and $E_S=[K_0:\Q]$ the extension $K(S)/K$ has ramification uniformly bounded by $E_S$, and we know from the previous observation that $\m(S)\leq \frac{1}{[K_0:\Q]}\log\vert \Delta_{K_0}\vert$. \end{proof} The problem of finding $K_0$ that admits an infinite unramified extension and minimises $\frac{1}{[K_0:\Q]}\log\vert \Delta_{K_0}\vert$ has found much interest, see for instance Martinet \cite{Mar78} who showed that $K_0=\Q(\cos(2\pi/11),\sqrt{2},\sqrt{-23})$ admits an infinite unramified $2$-tower and satisfies $$\frac{1}{[K_0:\Q]}\log\vert \Delta_{K_0}\vert\leq 4.53.$$ If $K=\Q$ and if we are allowed to have ramification only above a single rational prime $p$, then we can take $S=\{p^{1/p^i}\,\vert\, i\in \N\}$ so that $\m(S)=0$. 
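Indeed, the minimal polynomial of $p^{1/n}$ is $x^n-p$ (Eisenstein at $p$), whose Mahler measure is $\log p$, so $h_\infty(p^{1/p^i})=\log(p)/p^i$. A quick numerical sketch (illustrative only) of these heights tending to $0$:

```python
import math

# h_inf(p**(1/n)) = log(p)/n, since the minimal polynomial x**n - p is
# Eisenstein at p and has Mahler measure log p.  For n = p**i the heights
# tend to 0, which is why the Northcott number of S = {p**(1/p**i)} is 0.
def height_p_root(p, i):
    return math.log(p) / p**i

heights = [height_p_root(2, i) for i in range(6)]
assert all(heights[i + 1] < heights[i] for i in range(5))  # strictly decreasing
assert heights[-1] < 0.03                                  # log(2)/32 < 0.03
```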
However, in this example the ramification is not uniformly bounded. A different way of describing essentially the same example is to take $S=\{x_i\,\vert\, i\in \N\}$ with $x_0=p$ and $x_i\in \Qbar$ satisfying $P(x_{i+1},x_i)=0$ for $P(x,t)=x^p-t$. This approach of constructing sets with finite Northcott number can be generalised as follows. \begin{lem}\label{Height quasi-equiv} Let $P(x,t)\in \Qbar[x,t]$ be irreducible with $\deg_x P>\deg_t P>0$, and let $S=\{x_i\,\vert\, i\in \N\}$ with $x_i$ pairwise distinct algebraic numbers satisfying $P(x_{i+1},x_i)=0$ for $i\in \N$. Then $$\m(S)\leq \deg_t P\left(\frac{\gamma_P\deg_x P}{\deg_x P-\deg_t P}\right)^2,$$ where $\gamma_P = 5(\log(2^{\min\{\deg_x P, \deg_t P\}}(\deg_x P + 1)(\deg_t P + 1))+h_\infty(P))^{1/2}$. \end{lem} \begin{proof} This follows from an explicit version of a result of N\'eron \cite{Ne65} $$\left|\frac{h_\infty(x_{i+1})}{\deg_t P}-\frac{h_\infty(x_i)}{\deg_x P} \right|\leq \gamma_P\max\left \{\frac{h_\infty(x_{i+1})}{\deg_t P},\frac{h_\infty(x_i)}{\deg_x P}\right \}^{1/2}$$ due to Habegger \cite[Theorem 1]{Hab17}. Habegger's inequality implies $$h_\infty(x_{i+1})\leq qh_\infty(x_{i})+Q\max\{h_\infty(x_{i}),h_\infty(x_{i+1})\}^{1/2},$$ where $Q=\gamma_P\sqrt{\deg_t P}$, $q=\frac{\deg_t P}{\deg_x P}$, and the upper bound in Lemma \ref{Height quasi-equiv} is just $\left(\frac{Q}{1-q}\right)^2$. We leave the details to the reader. \end{proof} For the polynomial $P(x,t)=x^2-tx-1$, Smyth (\cite[Theorem 1, pages 137-138]{Smy80})\label{Smyth} proved much more. Indeed, set $x_0=1$, suppose the $x_i\in \Qbar$ satisfy $P(x_{i+1},x_i)=0$ for $i\in \N$, and set $S=\{x_i\}_{i=0}^\infty$. Then $x_i$ has degree $2^i$ over $\Q$, each $x_i$ is totally real, and the sequence of logarithmic Weil heights $h_\infty(x_i)$ has a limit point $0.2732\ldots$. In particular, $\m(S)\leq 0.274$.
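The fixed point $(Q/(1-q))^2$ appearing in the proof of Lemma \ref{Height quasi-equiv} can be illustrated numerically. The following sketch (with arbitrary illustrative values of $q$ and $Q$, and treating the maximum as attained at $h_i$, the explicit worst case) iterates the recursion and converges to that value:

```python
import math

# Worst-case recursion from the proof: h_{i+1} = q*h_i + Q*sqrt(h_i),
# with 0 < q < 1.  Its positive fixed point is (Q/(1-q))**2, which is
# the bound appearing in the lemma.  Values q, Q below are illustrative.
q, Q = 0.5, 1.0
h = 10.0                      # arbitrary starting height
for _ in range(200):
    h = q * h + Q * math.sqrt(h)

fixed_point = (Q / (1 - q)) ** 2
assert abs(h - fixed_point) < 1e-9
```

The iteration is monotone towards the fixed point and contracts near it, so any starting height yields the same limit.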
Using the well-known identity $$h_\infty(\alpha)=\frac{1}{\deg f}\int_{0}^1\log|f(e^{2\pi i t})|dt,$$ where $f\in \Z[x]$ is the minimal polynomial of $\alpha$, one has yet another method to construct infinite subsets $S\subset \Qbar$ with finite Northcott number. Let $f=a_0x^d+\cdots +a_d \in \Z[x]$, and let us write $\|f\|_1=|a_0|+\cdots+|a_d|$ for the length of $f$. Now let $\{f_i\}_{i=0}^\infty \subset \Z[x]$ be an infinite set of non-constant irreducible polynomials. Let $S\subset \Qbar$ be such that $S$ contains a root of each polynomial $f_i$. Since $|f_i(e^{2\pi i t})|\leq \|f_i\|_1$ for all $t$, we then have $$\m(S)\leq \liminf_{i}\frac{\log \|f_i\|_1}{\deg f_i}.$$ \begin{exa} Consider the polynomials $f_i(x)=x^i-x-1\in \Z[x]$. Selmer \cite{Sel56} has shown that for $i>1$ they are all irreducible (and Osada \cite[Corollary 3]{O87} showed that they have full Galois group $S_i$). Therefore, $\m(S)=0$. \end{exa} \begin{exa} Consider a sequence of monic polynomials $f_i(x)\in \Z[x]$ of degree $i$ whose constant term is equal to $\pm p_i$ for a prime $p_i$, and such that $\|f_i\|_1<2p_i$. These polynomials are all irreducible over $\Z$. Otherwise, $f_i=gh$ with non-constant $g,h\in \Z[x]$ where, say, $g$ has constant term $\pm 1$. Hence $f_i$ would have a zero $\alpha$ of complex absolute value at most $1$, and thus $p_i=|\alpha^i+a_1\alpha^{i-1}+\cdots+a_{i-1}\alpha|\leq \|f_i\|_1-p_i<p_i$, a contradiction. If $\log p_i=o(i)$, we conclude $\m(S)=0$. \end{exa} \end{document}
\begin{document} \begin{frontmatter} \title{Distributed sub-optimal resource allocation over weight-balanced graph via singular perturbation \thanksref{footnoteinfo}} \thanks[footnoteinfo]{This paper was not presented at any IFAC meeting. Corresponding author Y.~Hong. Tel. +86-10-82541824. Fax +86-10-82541832.} \author[CAS]{Shu Liang}\ead{[email protected]}, \author[CAS]{Xianlin Zeng}\ead{[email protected]}, \author[CAS]{Yiguang Hong}\ead{[email protected]} \address[CAS]{Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China} \begin{keyword} Distributed optimization, resource allocation, sub-optimal algorithm, weight-balanced graph, continuous-time design, singular perturbation, exponential convergence. \end{keyword} \begin{abstract} In this paper, we consider distributed optimization design for resource allocation problems over weight-balanced graphs. With the help of singular perturbation analysis, we propose a simple sub-optimal continuous-time optimization algorithm. Moreover, we prove the existence and uniqueness of the algorithm equilibrium, and then show the convergence with an exponential rate. Finally, we verify the sub-optimality of the algorithm, which can approach the optimal solution as an adjustable parameter tends to zero. \end{abstract} \end{frontmatter} \section{Introduction} Distributed optimization has attracted intense research attention in recent years, due to its theoretic significance and broad applications in various research fields, and many distributed algorithms have been developed to optimize a global objective or cost function based on agents' local cost functions and information exchange between neighbors in a multi-agent network \cite{Yuan2016Zeroth,Mokhtari2017Network}. 
So far, much effort has also been devoted to distributed continuous-time algorithm design, referring to \cite{Shi2013reaching,Gharesifard2014Distributed,Liu2015Second,Lou2016Distributed,Yang2017Multi} and the references therein, partially because of its applications in physical plants or hybrid systems and the availability of continuous-time control methods. Resource allocation is one of the most important optimization problems, which has been widely investigated in various areas such as economic systems, communication networks, and power grids; and various algorithms, centralized or decentralized, have been constructed, for example, in \cite{Arrow1958Studies,Heal1969Planning,Lakshmanan2008Decentralized,Zappone2016Energy}. Different from most existing results, \cite{Cherukuri2016Initialization,Yi2016Initialization} considered distributed initialization-free continuous-time algorithms to solve the optimal resource allocation problem with applications to economic dispatch of power systems. The algorithms given in \cite{Yi2016Initialization,Gharesifard2016Price} dealt with undirected graph cases, based on the symmetry of the Laplacians associated with the given graphs. As pointed out in \cite{Gharesifard2014Distributed,Gharesifard2016Price}, there are examples where a distributed algorithm designed for undirected graphs diverges on some directed graphs. For practical applications, distributed optimization algorithms over balanced directed graphs were developed with or without the resource allocation constraint, for example, in \cite{Gharesifard2014Distributed,Cherukuri2016Initialization}. However, these algorithms, involving the usage of the eigenvalues of the Laplacians, might yield an additional computational burden in the distributed implementation, and make the convergence quite sensitive to the network topology. Partially because distributed optimization has become a hot topic only in this decade, there are quite few results on sub-optimal algorithms and the related analysis.
For example, \cite{Nedic2009Approximate} proposed an algorithm without exactly solving the considered problem, but with a fast convergence rate. In fact, sub-optimal design deserves investigation, though the exactness of optimal solutions may be sacrificed. As we know, the exact optimal solution may be hard to obtain due to technical difficulties, complexity, or computational cost; on the contrary, sub-optimal algorithms may provide considerable benefits with simple feasible designs and even performance enhancement. In distributed design for large-scale networks, we may particularly need sub-optimal algorithms to reduce the computational complexity or the sensitivity to the network topology, rather than to seek a high-cost exact optimal solution \cite{Bhatti2016Large}. Based on the above observation, the motivation of this paper is to study a distributed sub-optimal algorithm design for the resource allocation optimization over a balanced directed graph. Our algorithm is of lower dimension than existing ones, which reduces the computational burden and the amount of information exchanged. Moreover, its convergence is guaranteed over any strongly connected and weight-balanced graph because its design does not depend on any specific knowledge of the graph. To achieve this, we adopt a singular perturbation idea in the distributed sub-optimal design. Note that singular perturbation theory provides powerful tools for (continuous-time) control design \cite{Kokotovic1999Singular}, and the well-known high-gain technique and semi-global stabilization design are closely related to singular perturbation \cite{Khalil2002Nonlinear}. The contributions of this paper can be summarized as follows. (i) We first propose a distributed sub-optimal algorithm to solve the continuous-time resource allocation problem for weight-balanced graphs, without using any information of the network topology. The sub-optimal design is simpler than the exact optimization designs.
In light of conventional fixed-point theory, we prove the existence and uniqueness of the algorithm equilibrium. (ii) We adopt a singular perturbation idea in our design, totally different from those given in \cite{Gharesifard2014Distributed,Cherukuri2016Initialization,Yi2016Initialization}, and then show that the quasi-steady-state model of our algorithm is exactly the primal-dual optimization algorithm. Note that the original primal-dual algorithm may not be directly implementable in a fully distributed manner due to the coupled resource allocation constraint. (iii) We prove the convergence of the proposed sub-optimal algorithm with an exponential rate, and estimate the difference of the sub-optimal solution from the optimal one, which, in fact, is bounded linearly by an adjustable parameter. Moreover, we verify that the sub-optimal solution always satisfies the resource allocation constraint and can be made arbitrarily close to the optimal point as the parameter tends to $0$. The paper is organized as follows: Section 2 provides preliminaries and formulates the problem, while Section 3 proposes the distributed algorithms. Then Section 4 presents the algorithm analysis, and finally, Section 5 gives some concluding remarks. {\em Notations: } Let $\mathbb{R}^n$ be the $n$-dimensional real vector space and $\mathbb{B}$ be the unit ball. The Euclidean norm of vectors in $\mathbb{R}^n$ and its induced consistent matrix norm are denoted by $\|\cdot\|$. $col(x_1,...,x_N)$ stands for the column vector stacked with column vectors $x_i\,(i=1,...,N)$, i.e., $col(x_1,...,x_N) = (x_1^{T},\,x_2^{T},\,\cdots,\,x_N^{T})^{T}$, and $1_{n} = col\{1,...,1\}\in \mathbb{R}^n$. $I_n$ is the identity matrix in $\mathbb{R}^{n\times n}$. $\otimes$ denotes the Kronecker product of matrices and $\det(\cdot)$ denotes the determinant of a matrix.
For a smooth function $f:\mathbb{R}^n\to \mathbb{R}$, $\nabla f(x)$ and $\nabla^2 f(x)$ denote its gradient vector and Hessian matrix at point $x$, respectively. \section{Preliminaries and Formulation} In this section, we introduce relevant preliminary knowledge about convex analysis and graph theory, and then formulate our problem. \subsection{Preliminaries} A function $f: \mathbb{R}^n\to \mathbb{R}$ is said to be {\em convex} if $f(\lambda z_1 +(1-\lambda)z_2) \leq \lambda f(z_1) + (1-\lambda)f(z_2)$ for any $z_1, z_2 \in \mathbb{R}^n$ and $\lambda\in (0,\,1)$. Moreover, it is said to be {\em $c_0$-strongly convex} for a constant $c_0>0$ if \begin{multline} f(\lambda z_1 +(1-\lambda)z_2) \leq \lambda f(z_1) + (1-\lambda)f(z_2) \\ - \frac{1}{2}c_0\lambda(1-\lambda)\|z_1-z_2\|^2. \end{multline} For a twice continuously differentiable function $f$, it is $c_0$-strongly convex if and only if $\nabla^2 f(x) \geq c_0 I_n$. In addition, for a $c_0$-strongly convex and differentiable function $f$, there holds \begin{equation}\label{eq:stronglyConvex} f(y) \geq f(x) + \nabla f(x)^T (y-x) + \frac{1}{2}c_0\|y-x\|^2, \,\forall\, x,y\in \mathbb{R}^n. \end{equation} A function $g:\mathbb{R}^n\to \mathbb{R}$ is said to be {\em level bounded} \cite{Rockafellar1998Variational} if all sets of the form \begin{equation} \{x\in \mathbb{R}^n\,|\, g(x)\leq \alpha\}, \text{ for } \alpha\in \mathbb{R} \end{equation} are bounded. Obviously, strong convexity together with differentiability implies level boundedness by \eqref{eq:stronglyConvex}. A map $H:\mathbb{R}^n\to \mathbb{R}^n$ is said to be {\em locally Lipschitz continuous} at a point $x$ if there are constants $\delta>0$ and $\kappa = \kappa(x, \delta)$ such that \begin{equation}\label{eq:LipchitzContinuous0} \|H(x_1) - H(x_2)\| \leq \kappa \|x_1 - x_2\|, \, \forall\, x_1, x_2 \in x + \delta\mathbb{B}.
\end{equation} Moreover, $H$ is said to be {\em $\kappa$-Lipschitz continuous} if \eqref{eq:LipchitzContinuous0} holds for all $x_1, x_2 \in \mathbb{R}^n$ with the same constant $\kappa$. Consider a multi-agent network with its interaction topology described by a weighted graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathcal{A}\}$, where $\mathcal{V}= \{ 1,2, \ldots, N\}$ is the node set, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the edge set, and $\mathcal{A} = [a_{ij}]_{N\times N}$ is the adjacency matrix with $a_{ij} >0$ if $(j,i) \in \mathcal{E}$ (meaning that agent $j$ can send its information to agent $i$), and $a_{ij} =0$ otherwise. If $a_{ij} = a_{ji}, \, \forall\, i,j \in \mathcal{V}$, then $\mathcal{G}$ is undirected. A (directed) path is a sequence of vertices in which any two consecutive vertices are connected by an edge. A graph is said to be strongly connected if there is a directed path from any vertex to any other vertex. For node $i \in \mathcal{V}$, the weighted in-degree and weighted out-degree are $d_{in}^i =\sum_{j=1}^N a_{ij}$ and $d_{out}^i =\sum_{j=1}^N a_{ji}$, respectively. A graph is weight-balanced if $d_{in}^i = d_{out}^i, \, \forall\,i \in \mathcal{V}$. The following lemma characterizes graph $\mathcal{G}$ by its (in-degree) Laplacian matrix, defined as $L= \mathcal{D}_{in} - \mathcal{A}$, where $\mathcal{D}_{in} = diag\{d_{in}^1, \ldots, d_{in}^N\} \in \mathbb{R}^{N \times N}$. \begin{lemma}\cite{Bullo2009Distributed} The following statements hold. \begin{enumerate}[1)] \item Graph $\mathcal{G}$ is undirected if and only if $L = L^T$. \item Graph $\mathcal{G}$ is strongly connected if and only if zero is a simple eigenvalue of $L$. \item Graph $\mathcal{G}$ is weight-balanced if and only if $L + L^T$ is positive semidefinite. \end{enumerate} \end{lemma} \subsection{Problem formulation} The distributed resource allocation optimization problem is usually formulated as follows.
For each agent $i \in \mathcal{V}$, there is a local decision variable $x_i\in \mathbb{R}^{n}$ and a local cost function $f_i:\mathbb{R}^n\to \mathbb{R}$. The agents cooperate with each other in order to minimize the total cost function of the network, defined as $f(\bm{x}) \triangleq \sum_{i=1}^Nf_i(x_i)$, subject to the resource allocation constraint $\sum_{i=1}^Nx_i = \sum_{i=1}^Nb_i=d$. In other words, \begin{equation}\label{eq:optimizationProblem} \min_{\bm{x}\in \mathbb{R}^{nN}} f(\bm{x}), \text{ s.t. } (1_N^T\otimes I_n)\bm{x} = d, \end{equation} where $\bm{x} \triangleq col\{x_1,...,x_N\}$ and $d\in \mathbb{R}^n$. The following assumption, which is widely used, is adopted to ensure the well-posedness of \eqref{eq:optimizationProblem}. \begin{assumption}\label{assum:1} ~ \begin{enumerate}[1)] \item $f(\bm{x})$ is $c_0$-strongly convex and twice continuously differentiable. \item The interaction graph $\mathcal{G}$ is strongly connected and weight-balanced. \end{enumerate} \end{assumption} The following lemma is quite fundamental for problem \eqref{eq:optimizationProblem}. We present it with its proof here for completeness. \begin{lemma}\label{lem:optimalPrimalDual} Under Assumption \ref{assum:1}, there exists a unique optimal solution $\bm{x}^* = col\{x_1^*, ..., x_N^*\}$ of problem \eqref{eq:optimizationProblem}. In addition, there exists a unique $\bm{\lambda}^* = col\{\mu^*, ..., \mu^*\}$ such that the following condition holds. \begin{equation}\label{eq:optimalSolution} \left\{\begin{aligned} 0 & = \nabla f(\bm{x}^*) + \bm{\lambda}^*\\ 0 & = (1_N^T\otimes I_n)\bm{x}^* -d \end{aligned}\right. \end{equation} \end{lemma} \begin{proof} Since $f$ is strongly convex and differentiable, it is level bounded, which implies the existence of an optimal point over the set $\Omega = \{\bm{x}\in \mathbb{R}^{nN}\,|\,(1_N^T\otimes I_n)\bm{x} -d = 0\}$. Also, the strong convexity of $f$ implies the uniqueness of the optimal point $\bm{x}^*$.
Since the normal cone of $\Omega$ at the point $\bm{x}^*$ is $\mathcal{N}_{\Omega}(\bm{x}^*) = \{1_N\otimes \mu\,|\,\mu\in\mathbb{R}^n\}$, the conclusion follows from the necessary optimality condition $-\nabla f(\bm{x}^*) \in \mathcal{N}_{\Omega}(\bm{x}^*)$ \cite[Theorem 6.12, page 207]{Rockafellar1998Variational}. \end{proof} The goal of this paper is to design a distributed sub-optimal algorithm with a positive adjustable parameter $\varepsilon$ for problem \eqref{eq:optimizationProblem}, such that \begin{enumerate}[1)] \item the equilibrium point of the proposed algorithm is exponentially stable with the resource allocation constraint satisfied; \item it approaches the optimal solution of problem \eqref{eq:optimizationProblem} as $\varepsilon \to 0$, and its difference from the optimal solution is bounded linearly by $\varepsilon$. \end{enumerate} Of course, the design of a sub-optimal algorithm should be simpler than that of optimization algorithms. \section{Distributed algorithm design} In this section, we propose a distributed sub-optimal algorithm, and also show the relationship between its design and singular perturbation analysis. To make a comparison, we first introduce a distributed algorithm over undirected graphs for problem \eqref{eq:optimizationProblem}, obtained in the literature, such as \cite{Yi2016Initialization}: \begin{equation}\label{eq:algorithmPI} \forall\, i\in \mathcal{V}, \, \left\{\begin{aligned} \dot{x}_i & = - \nabla f_i(x_i) - \lambda_i\\ \dot{\lambda}_i & = - k_P\sum_{j=1}^Na_{ij}(\lambda_i-\lambda_j) \\ &\quad - k_I\sum_{j=1}^Na_{ij}(z_i-z_j) + x_i - b_i\\ \dot{z}_i & = \sum_{j=1}^Na_{ij}(\lambda_i-\lambda_j) \end{aligned}\right. \end{equation} where $\sum_{i=1}^N b_i= d$ and $k_P=k_I=1$ in \cite{Yi2016Initialization}. The continuous-time algorithm \eqref{eq:algorithmPI} is constructed by combining Lagrangian duality and consensus dynamics.
Roughly speaking, the dynamics of the $x_i$'s correspond to gradient descent, and the dynamics of the $\lambda_i$'s and $z_i$'s drive the local Lagrange multipliers $\lambda_i$ to reach a consensus at the optimal point of the dual problem. On the other hand, as pointed out in \cite{Gharesifard2014Distributed,Gharesifard2016Price}, continuous-time algorithms like \eqref{eq:algorithmPI} may become divergent over some directed graphs. One remedy is to tune the parameters $k_P$ and $k_I$ to stabilize the algorithm dynamics over a balanced graph, which was indeed used in \cite{Gharesifard2014Distributed,Cherukuri2016Initialization}. However, since that stabilization is based on the eigenvalues of the Laplacian of the balanced graph, which are global information, the resulting algorithm is not fully distributed, and its design increases the computational cost. In this paper, we propose a simple distributed algorithm for problem \eqref{eq:optimizationProblem} without the knowledge of the eigenvalues associated with the considered balanced graph: \begin{equation}\label{eq:algorithmNew} \forall\, i\in \mathcal{V}, \, \left\{\begin{aligned} \dot{x}_i & = - \nabla f_i(x_i) - \lambda_i\\ \varepsilon\dot{\lambda}_i & = - \sum_{j=1}^Na_{ij}(\lambda_i-\lambda_j) + \varepsilon (x_i - b_i) \end{aligned}\right. \end{equation} where $\varepsilon > 0$ is a small adjustable parameter. For simplicity, we rewrite algorithm \eqref{eq:algorithmNew} in a compact form as \begin{equation}\label{eq:algorithmCompact} \left\{\begin{aligned} \dot{\bm{x}} & = -\nabla f(\bm{x}) - \bm{\lambda}\\ \varepsilon\dot{\bm{\lambda}} & = - \bm{L}\bm{\lambda} + \varepsilon(\bm{x} - \bm{b}) \end{aligned}\right. \end{equation} where $\bm{\lambda} = col\{\lambda_1,...,\lambda_N\}, \bm{b} = col\{b_1,...,b_N\}$, and $\bm{L} = L\otimes I_n$, with $L$ the Laplacian matrix of the strongly connected and weight-balanced graph.
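For intuition, the compact dynamics \eqref{eq:algorithmCompact} can be integrated numerically by the forward Euler method. The sketch below is not part of the analysis; the quadratic costs, the directed cycle, and all parameter values are illustrative assumptions. It checks that, for a small $\varepsilon$, the trajectory settles at a point satisfying the resource constraint and lying close to the optimum:

```python
import numpy as np

# Hypothetical 3-agent example (illustrative only): quadratic costs
# f_i(x_i) = 0.5 * c_i * x_i^2 over a directed cycle 3 -> 1 -> 2 -> 3.
c = np.array([1.0, 2.0, 4.0])                 # cost curvatures (assumed)
b = np.array([1.0, 1.0, 1.0])
d = b.sum()                                   # total resource
L = np.array([[1.0, 0.0, -1.0],               # in-degree Laplacian of the cycle,
              [-1.0, 1.0, 0.0],               # weight-balanced by construction
              [0.0, -1.0, 1.0]])
eps, h, T = 0.05, 1e-3, 60.0                  # small parameter, Euler step, horizon

x, lam = np.zeros(3), np.zeros(3)
for _ in range(int(T / h)):                   # forward Euler on the compact dynamics
    dx = -c * x - lam
    dlam = -(L @ lam) / eps + (x - b)
    x, lam = x + h * dx, lam + h * dlam

# KKT optimum: c_i * x_i^* + mu^* = 0 and sum(x^*) = d  =>  mu^* = -d / sum(1/c_i)
mu_star = -d / np.sum(1.0 / c)
x_star = -mu_star / c
print(x.sum())                                # equals d: constraint holds
print(np.linalg.norm(x - x_star))             # small, of order eps
```

As expected, the constraint $1_N^T\bm{x}=d$ holds exactly at the equilibrium (multiply the second equation of \eqref{eq:algorithmCompact} by $1_N^T\otimes I_n$ and use $1_N^TL=0$), while the distance to the optimum shrinks with $\varepsilon$.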
\begin{remark} Algorithm \eqref{eq:algorithmNew} has lower dimension and less (communication) complexity than \eqref{eq:algorithmPI}, because it does not involve the dynamics of the $z_i$'s and the related information exchange. \end{remark} Since $\bm{L}$ is generally asymmetric, \eqref{eq:algorithmCompact} admits no interpretation as gradient-descent-gradient-ascent dynamics for the saddle-point computation, which is widely used for constrained convex optimization. In fact, our design is based on singular perturbation ideas as follows. Clearly, we can choose a matrix $T\in \mathbb{R}^{N\times N}$ satisfying \begin{equation} T = [1_N, M_1]^T, \quad T^{-1} = [1_N, M_2]. \end{equation} Let $[\begin{smallmatrix} \mu\\ \bm{\theta} \end{smallmatrix}] \triangleq (T\otimes I_n) \bm{\lambda}$, where $\mu \in \mathbb{R}^{n}, \bm{\theta} \in \mathbb{R}^{n(N-1)}$. Then \eqref{eq:algorithmCompact} can be written as a standard singular perturbation model as follows: \begin{equation}\label{eq:singularPerturbation} \left\{\begin{aligned} \dot{\bm{x}} & = -\nabla f(\bm{x}) - (1_N\otimes I_n)\mu - (M_2\otimes I_n)\bm{\theta}\\ \dot{\mu} & = (1_N^T\otimes I_n)\bm{x} - d\\ \varepsilon\dot{\bm{\theta}} & = - (M_1^TLM_2\otimes I_n) \bm{\theta} + \varepsilon (M_1^T\otimes I_n) (\bm{x} - \bm{b}) \end{aligned}\right. \end{equation} It can be observed from \eqref{eq:singularPerturbation} that, for a sufficiently small $\varepsilon >0$, $\bm{\theta}$ corresponds to the fast transient part and $(\bm{x}, \mu)$ corresponds to the slow part. Because all the eigenvalues of the matrix $- (M_1^TLM_2\otimes I_n)$ have negative real parts, the quasi-steady state of the fast dynamics is simply $\bm{\theta} = 0$, and then the quasi-steady-state model (or reduced model) of \eqref{eq:singularPerturbation} is \begin{equation}\label{eq:quasiSteady} \left\{\begin{aligned} \dot{\bm{x}} & = -\nabla f(\bm{x}) - (1_N\otimes I_n)\mu\\ \dot{\mu} & = (1_N^T\otimes I_n)\bm{x} - d\\ \end{aligned}\right.
\end{equation} Let us denote the solution of \eqref{eq:quasiSteady} by $(\tilde{\bm{x}}(t), \tilde{\mu}(t))$ and the solution of \eqref{eq:singularPerturbation} by $(\bm{x}(t,\varepsilon), \mu(t,\varepsilon), \bm{\theta}(t,\varepsilon))$. By existing singular perturbation results \cite[Theorems 11.2 and 11.3, pages 439 and 452]{Khalil2002Nonlinear}, \eqref{eq:singularPerturbation} is asymptotically stable and, for any $\varepsilon \in (0, \varepsilon^*)$ with some $\varepsilon^*>0$, an initial moment $t_0$, and some time $t_b>t_0$, we have \begin{equation}\label{eq:singularPerturbationResult} \begin{aligned} (\bm{x}(t,\varepsilon), \mu(t,\varepsilon))- (\tilde{\bm{x}}(t),\tilde{\mu}(t)) & = O(\varepsilon), \, t\in [t_0,\infty)\\ \bm{\theta}(t,\varepsilon) & = O(\varepsilon), \, t\in [t_b,\infty) \end{aligned} \end{equation} To sum up, we have the following statements from the singular perturbation analysis. \begin{enumerate}[1)] \item The algorithm \eqref{eq:algorithmCompact} has \eqref{eq:quasiSteady} as its quasi-steady-state model, and \eqref{eq:quasiSteady} is exactly the primal-dual optimization algorithm for problem \eqref{eq:optimizationProblem}. However, in contrast to \eqref{eq:algorithmCompact}, the algorithm \eqref{eq:quasiSteady} is not directly implementable in a fully distributed manner because the dynamics of $\mu$ need to collect the information of all $x_1, ..., x_N$ due to the coupled resource allocation constraint. \item The trajectory of algorithm \eqref{eq:algorithmCompact} stays within an $O(\varepsilon)$ bound of the (centralized) primal-dual one. Moreover, since $(\tilde{\bm{x}}(t),\tilde{\mu}(t))$ converges to the optimal primal-dual solution $(\bm{x}^*, \mu^*)$ in Lemma \ref{lem:optimalPrimalDual}, \eqref{eq:algorithmCompact} approaches a ball centered at the optimal point as $t\to \infty$, yielding a sub-optimal solution.
\end{enumerate} Note that, for our algorithm, we have to verify the existence of its equilibrium, which is not straightforward. Moreover, the $O(\varepsilon)$ estimate from singular perturbation theory may be too rough, since it is only an order-of-magnitude bound over $t\in [t_0,\infty)$. In order to clarify the effectiveness of our method, we have to find a new way for the algorithm analysis. To be specific, we will first study the existence of the equilibrium of the algorithm \eqref{eq:algorithmCompact}, and then study its convergence and sub-optimality. \section{Main Results} In this section, we analyze the equilibrium, convergence, and sub-optimality of the algorithm \eqref{eq:algorithmCompact}. \subsection{Equilibrium analysis} We first establish the existence and uniqueness of the equilibrium of algorithm \eqref{eq:algorithmCompact}. \begin{theorem}\label{thm:1} Under Assumption \ref{assum:1}, there exists $\varepsilon_0>0$ such that for any fixed $\varepsilon \in (0, \, \varepsilon_0)$, algorithm \eqref{eq:algorithmCompact} has a unique equilibrium, i.e., a unique pair $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$ satisfying the following equation \begin{equation}\label{eq:equilibriumCompact} \left\{\begin{aligned} 0 &= \nabla f(\bm{x}) + \bm{\lambda}\\ 0 &= -\varepsilon (\bm{x} - \bm{b}) + \bm{L}\bm{\lambda} \end{aligned}\right. \end{equation} \end{theorem} \begin{proof} We first show the existence and then the uniqueness. (i) Existence: Let $(\bm{x}^*, \bm{\lambda}^*)$ be the optimal solution pair in \eqref{eq:optimalSolution}. Since $f(\bm{x})$ is twice continuously differentiable, \begin{equation}\label{eq:rz} \nabla f(\bm{x}) = \nabla f(\bm{x}^*) + H \bm{z} - \bm{r}(\bm{z}), \end{equation} where \begin{equation} H \triangleq \nabla^2f(\bm{x}^*), \quad \bm{z} \triangleq \bm{x} - \bm{x}^*, \end{equation} and $\bm{r}(\bm{z})$ is a higher-order remainder, i.e., $\bm{r}(0)=0$ and $\|\bm{r}(\bm{z})\|/\|\bm{z}\|\to 0$ as $\bm{z}\to 0$.
Clearly, it follows from \eqref{eq:optimalSolution} that $\bm{L}\bm{\lambda}^* = 0$ and $\bm{L}\nabla f(\bm{x}^*) = -\bm{L}\bm{\lambda}^* = 0$. Moreover, since $(1_N^T\otimes I_n)(\bm{b} - \bm{x}^*) = 0$, there exists $\bm{\lambda}_0$ such that $\bm{L}\bm{\lambda}_0 = \bm{b} - \bm{x}^*$. Thus, by eliminating $\bm{\lambda}$ in \eqref{eq:equilibriumCompact}, we obtain an equation with respect to the variable $\bm{z}$ as \begin{equation}\label{eq:Phi} \begin{aligned} \bm{z} = \Phi(\bm{z}, \varepsilon) & \triangleq (\varepsilon I_{nN} + \bm{L}H)^{-1}(\varepsilon(\bm{b} - \bm{x}^*) + \bm{L}\bm{r}(\bm{z}))\\ & = (\varepsilon I_{nN} + \bm{L}H)^{-1}\bm{L}(\varepsilon\bm{\lambda}_0 + \bm{r}(\bm{z})) \end{aligned} \end{equation} We claim that the matrix $\varepsilon I_{nN} + \bm{L}H$ is nonsingular (and then the map $\Phi(\bm{z},\varepsilon)$ in \eqref{eq:Phi} is well-defined). In fact, $H \geq c_0I_{nN}$ and $\bm{L}+ \bm{L}^T$ is positive semidefinite according to Assumption \ref{assum:1}. Then $2v^TH^{\frac{1}{2}}\bm{L}H^{\frac{1}{2}}v = v^TH^{\frac{1}{2}}(\bm{L}+\bm{L}^T)H^{\frac{1}{2}}v \geq 0, \,\forall\, v \in \mathbb{R}^{nN}$. Due to $\det(sI_{nN} - \bm{L}H) = \det(sI_{nN} - H^{\frac{1}{2}}\bm{L}H^{\frac{1}{2}})$, all the eigenvalues of the matrix $\bm{L}H$ have nonnegative real parts. Consequently, the matrix $\varepsilon I_{nN} + \bm{L}H$ is nonsingular for any $\varepsilon>0$. Moreover, since $ (\varepsilon I_{nN} + \bm{L}H)^{-1}(\varepsilon I_{nN} + \bm{L}H) = I_{nN}$, \begin{equation} \begin{aligned} (\varepsilon I_{nN} + \bm{L}H)^{-1}\bm{L} & = H^{-1} - (H + \varepsilon^{-1}H\bm{L}H)^{-1}. \end{aligned} \end{equation} Note that $\|\nu\|^2 = \eta\nu^T(H + \varepsilon^{-1}H\bm{L}H)\nu \geq \eta c_{0} \|\nu\|^2$ for any eigenvalue $\eta$ of matrix $(H + \varepsilon^{-1}H\bm{L}H)^{-1}$ with corresponding eigenvector $\nu \neq 0$.
Hence, the spectral radius $\rho$ of matrix $(H + \varepsilon^{-1}H\bm{L}H)^{-1}$ satisfies \begin{equation} \rho((H + \varepsilon^{-1}H\bm{L}H)^{-1})\leq c_0^{-1}, \, \forall\, \varepsilon >0. \end{equation} Then, recalling \cite[Lemma 5.6.10, page 347]{Horn2013Matrix}, there exists a matrix norm $\|\cdot\|_{\sharp}$ such that \begin{equation} \|(H + \varepsilon^{-1}H\bm{L}H)^{-1}\|_{\sharp} \leq c_0^{-1} +1, \, \forall\, \varepsilon >0. \end{equation} It follows from the equivalence of matrix norms that there exists a constant $k_0>0$ such that \begin{equation} \|(H + \varepsilon^{-1}H\bm{L}H)^{-1}\| \leq k_0(c_0^{-1} +1), \, \forall\, \varepsilon >0. \end{equation} Therefore, for any $\varepsilon >0$, \begin{equation}\label{eq:k1} \begin{aligned} \|(\varepsilon I_{nN} + \bm{L}H)^{-1}\bm{L}\| & \leq \|H^{-1}\| + \|(H + \varepsilon^{-1}H\bm{L}H)^{-1}\|\\ & \leq (k_0+1)c_0^{-1} + k_0 \triangleq k_1. \end{aligned} \end{equation} Additionally, for $\bm{r}(\bm{z})$ in \eqref{eq:rz} and $k_1$ in \eqref{eq:k1}, there exists $\delta = \delta(k_1)>0$ such that \begin{equation}\label{eq:rz2} \|\bm{r}(\bm{z}) - \bm{r}(\bm{z}')\|\leq \frac{1}{k_1+1}\|\bm{z} - \bm{z}'\|, \, \forall\, \bm{z},\bm{z}' \in \delta\mathbb{B}. \end{equation} Furthermore, for the constants $k_1 >0, \delta>0$ and $\bm{\lambda}_0$ in \eqref{eq:Phi}, there exists $\varepsilon_0 = \varepsilon_0(k_1, \delta, \bm{\lambda}_0)>0$ such that \begin{equation}\label{eq:epsilone0} \varepsilon \|\bm{\lambda}_0\| \leq \frac{\delta}{k_1(k_1 +1)}, \,\forall\, \varepsilon \in (0,\,\varepsilon_0). \end{equation} Consider the map $\Phi(\bm{z}, \varepsilon)$ in \eqref{eq:Phi}. 
On the one hand, it follows from \eqref{eq:k1} and \eqref{eq:rz2} that \begin{equation}\label{eq:contraction} \|\Phi(\bm{z}, \varepsilon) - \Phi(\bm{z}', \varepsilon)\|\leq \frac{k_1}{k_1+1}\|\bm{z}-\bm{z}'\|, \, \forall\, \bm{z},\bm{z}' \in \delta\mathbb{B} \end{equation} for any fixed $\varepsilon >0$, that is, $\Phi(\cdot,\varepsilon)$ is a contraction map in $\delta\mathbb{B}$. On the other hand, it follows from \eqref{eq:rz2} and \eqref{eq:epsilone0} that \begin{equation}\label{eq:maptoitself} \|\Phi(\bm{z},\varepsilon)\|\leq k_1\varepsilon \|\bm{\lambda}_0\| + k_1\|\bm{r}(\bm{z})\|\leq \delta, \,\forall\, \bm{z}\in \delta\mathbb{B} \end{equation} for any $\varepsilon \in (0,\,\varepsilon_0)$, that is, $\Phi(\cdot,\varepsilon)$ maps the compact set $\delta\mathbb{B}$ into itself. According to the Contraction Mapping Theorem \cite[page 458]{Bertsekas2015Convex}, $\Phi(\cdot,\varepsilon)$ has a unique fixed point $\bar{\bm{z}}(\varepsilon)$ in $\delta\mathbb{B}$, which solves equation \eqref{eq:Phi}. Let \begin{equation} \bar{\bm{x}}(\varepsilon) \triangleq \bar{\bm{z}}(\varepsilon) + \bm{x}^*, \quad \bar{\bm{\lambda}}(\varepsilon) \triangleq - \nabla f(\bar{\bm{x}}(\varepsilon)). \end{equation} Thus, $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$ is a solution of equation \eqref{eq:equilibriumCompact}. (ii) Uniqueness: Suppose there are two solution pairs $(\bm{x}, \bm{\lambda})$ and $(\bm{x}', \bm{\lambda}')$ for equation \eqref{eq:equilibriumCompact}.
By some calculations, we have \begin{equation} \begin{aligned} 0 & = (\bm{x} - \bm{x}')^T(\nabla f(\bm{x}) - \nabla f(\bm{x}') + \bm{\lambda} - \bm{\lambda}') \\ &\quad + (\bm{\lambda} - \bm{\lambda}')^T(-(\bm{x} - \bm{x}') + \varepsilon^{-1}\bm{L}(\bm{\lambda} - \bm{\lambda}'))\\ & = (\bm{x} - \bm{x}')^T(\nabla f(\bm{x}) - \nabla f(\bm{x}')) \\ & \quad + \varepsilon^{-1} (\bm{\lambda} - \bm{\lambda}')^T\bm{L}(\bm{\lambda} - \bm{\lambda}') \geq 0 \end{aligned} \end{equation} Then $(\bm{x} - \bm{x}')^T(\nabla f(\bm{x}) - \nabla f(\bm{x}')) = 0$. Since $f(\bm{x})$ is strongly convex, we must have $\bm{x}' = \bm{x}$ and $\bm{\lambda}' = \bm{\lambda} = - \nabla f(\bm{x})$, which completes the proof. \end{proof} Note that the equilibrium is not known beforehand, and the existing singular perturbation techniques do not cover this problem. Instead, we use a fixed-point theorem to prove the existence and then the uniqueness. \subsection{Convergence and sub-optimality} With the existence of the equilibrium established, we now study the convergence of the proposed algorithm. \begin{theorem}\label{thm:2} Under Assumption \ref{assum:1}, the algorithm \eqref{eq:algorithmCompact} with $\varepsilon \in (0, \, \varepsilon_0)$ converges to its equilibrium point $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$. Furthermore, if the gradient map $\nabla f(\bm{x})$ is $\kappa$-Lipschitz continuous for some constant $\kappa >0$, then \eqref{eq:algorithmCompact} exponentially converges to its equilibrium point. \end{theorem} \begin{proof} Since the right-hand side of \eqref{eq:algorithmCompact} is locally Lipschitz continuous, there exists a unique trajectory $(\bm{x}(t,\varepsilon), \bm{\lambda}(t, \varepsilon))$ satisfying \eqref{eq:algorithmCompact}. Take the following Lyapunov function \begin{equation} V(\bm{x}, \bm{\lambda}) \triangleq \|\bm{x} - \bar{\bm{x}}(\varepsilon)\|^2 + \|\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon)\|^2.
\end{equation} Then $V$ is positive definite and its derivative with respect to time $t$ is \begin{equation} \begin{aligned} \dot{V}(\bm{x}, \bm{\lambda}) & = - 2(\bm{x} - \bar{\bm{x}}(\varepsilon))^T (\nabla f(\bm{x}) + \bm{\lambda}) \\ & \quad - 2(\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon))^T (\bm{b}-\bm{x} + \varepsilon^{-1}\bm{L}\bm{\lambda})\\ & = - 2(\bm{x} - \bar{\bm{x}} (\varepsilon))^T (\nabla f(\bm{x}) - \nabla f(\bar{\bm{x}} (\varepsilon)))\\ & \quad - 2\varepsilon^{-1}(\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon))^T\bm{L}(\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon))\\ & \leq - 2c_0 \|\bm{x} - \bar{\bm{x}} (\varepsilon)\|^2 \\ & \quad - \varepsilon^{-1}(\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon))^T(\bm{L} + \bm{L}^T)(\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon))\\ & \leq 0. \end{aligned} \end{equation} Moreover, $\dot{V}(\bm{x}, \bm{\lambda}) = 0$ implies $\bm{x} = \bar{\bm{x}}(\varepsilon)$, and on the largest invariant set where $\bm{x} \equiv \bar{\bm{x}}(\varepsilon)$ we have $\dot{\bm{x}} = 0$, hence $\bm{\lambda} = -\nabla f(\bar{\bm{x}}(\varepsilon)) = \bar{\bm{\lambda}}(\varepsilon)$. By the Invariance Principle \cite[page 126]{Khalil2002Nonlinear}, algorithm \eqref{eq:algorithmCompact} converges to $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$. Moreover, a linearized system of algorithm \eqref{eq:algorithmCompact} at its equilibrium can be obtained by replacing the term $\nabla f(\bm{x})$ with the affine map $F(\bm{x})$ defined as \begin{equation}\label{eq:Fx} F(\bm{x}) \triangleq \nabla f(\bar{\bm{x}}(\varepsilon)) + \nabla^2 f(\bar{\bm{x}}(\varepsilon))(\bm{x} - \bar{\bm{x}}(\varepsilon)). \end{equation} Following the same proof as above, this linear system is asymptotically stable.
Then there exist two positive definite matrices $P, Q \in \mathbb{R}^{2nN \times 2nN}$ such that \begin{equation} \begin{aligned} V_1(\bm{x}, \bm{\lambda}) & \triangleq \begin{bmatrix} \bm{x} - \bar{\bm{x}}(\varepsilon)\\ \bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon) \end{bmatrix}^T P \begin{bmatrix} \bm{x} - \bar{\bm{x}}(\varepsilon)\\ \bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon) \end{bmatrix}\\ & \geq \zeta_1(\|\bm{x} - \bar{\bm{x}}(\varepsilon)\|^2 + \|\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon)\|^2) \end{aligned} \end{equation} for some $\zeta_1>0$ and \begin{equation} \begin{aligned} \dot{V}_1(\bm{x}, \bm{\lambda}) & = - \begin{bmatrix} \bm{x} - \bar{\bm{x}}(\varepsilon)\\ \bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon) \end{bmatrix}^T Q \begin{bmatrix} \bm{x} - \bar{\bm{x}}(\varepsilon)\\ \bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon) \end{bmatrix}\\ & \quad - 2 \begin{bmatrix} \bm{x} - \bar{\bm{x}}(\varepsilon)\\ \bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon) \end{bmatrix}^T P \begin{bmatrix} \nabla f(\bm{x}) - F(\bm{x})\\ 0 \end{bmatrix} \end{aligned} \end{equation} where $F(\bm{x})$ is in \eqref{eq:Fx}. If $\nabla f(\bm{x})$ is $\kappa$-Lipschitz continuous, then there are positive constants $\zeta_2$ and $\zeta_3>0$ such that \begin{equation} \dot{V}_1(\bm{x}, \bm{\lambda}) \leq \zeta_2\|\bm{x} - \bar{\bm{x}}(\varepsilon)\|^2 - \zeta_3\|\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon)\|^2. 
\end{equation} Define a new Lyapunov function as \begin{equation} V_2(\bm{x}, \bm{\lambda}) \triangleq c_0^{-1}(\zeta_2 + \zeta_3)V(\bm{x}, \bm{\lambda}) + V_1(\bm{x}, \bm{\lambda}). \end{equation} Then \begin{equation} V_2(\bm{x}, \bm{\lambda}) \geq (c_0^{-1}(\zeta_2 + \zeta_3) + \zeta_1)(\|\bm{x} - \bar{\bm{x}}(\varepsilon)\|^2 + \|\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon)\|^2), \end{equation} and \begin{equation} \dot{V}_2(\bm{x}, \bm{\lambda}) \leq - \zeta_3 (\|\bm{x} - \bar{\bm{x}}(\varepsilon)\|^2 + \|\bm{\lambda} - \bar{\bm{\lambda}}(\varepsilon)\|^2). \end{equation} Thus, the algorithm is globally exponentially convergent with an exponential rate no greater than $-\frac{\zeta_3c_0}{\zeta_1c_0 +\zeta_2 + \zeta_3}$, which implies the conclusion. \end{proof} \begin{remark} The obtained result on the exponential rate is consistent with some existing ones for undirected graphs, such as \cite[Theorem 4.3]{Yi2016Initialization}, but our algorithm has lower-dimensional dynamics and is also applicable to balanced directed graphs. \end{remark} Next, we verify the sub-optimality of the algorithm \eqref{eq:algorithmCompact} and estimate the difference between the sub-optimal solution and the optimal one. \begin{theorem}\label{thm:3} The equilibrium $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$ of algorithm \eqref{eq:algorithmCompact} is a sub-optimal solution of problem \eqref{eq:optimizationProblem} in the sense that \begin{equation}\label{eq:resouceAllocationConstraint} (1^T_{N}\otimes I_n) \bar{\bm{x}}(\varepsilon) = d, \end{equation} and \begin{equation}\label{eq:solutionLimit} \lim_{\varepsilon \to 0} (\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon)) = (\bm{x}^*, \bm{\lambda}^*).
\end{equation} Moreover, for any $\varepsilon \in (0,\varepsilon_0)$, there hold \begin{equation}\label{eq:LipchitzContinuous} \|\bar{\bm{x}}(\varepsilon) - \bm{x}^*\| \leq \gamma_1 \varepsilon, \quad \|\bar{\bm{\lambda}}(\varepsilon) - \bm{\lambda}^*\| \leq \gamma_2 \varepsilon, \end{equation} where $\gamma_1 \triangleq k_1(k_1+1)\|\bm{\lambda}_0\|, \gamma_2 \triangleq \gamma_1 \sup_{\|\bm{z}\|\leq \delta}\{\|\nabla^2 f(\bm{x}^* + \bm{z})\|\}$, and $\varepsilon_0, k_1, \bm{\lambda}_0, \delta$ are given in the proof of Theorem \ref{thm:1}. \end{theorem} \begin{proof} Since $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$ satisfies \eqref{eq:equilibriumCompact} and $(1^T_{N}\otimes I_n)\bm{L} = 0, (1^T_{N}\otimes I_n)\bm{b} = d$, equality \eqref{eq:resouceAllocationConstraint} holds. Next, from the proof of Theorem \ref{thm:1}, $\bar{\bm{z}}(\varepsilon) = \bar{\bm{x}}(\varepsilon) - \bm{x}^*$ is a fixed point of the map $\Phi(\cdot, \varepsilon)$ in \eqref{eq:Phi}. It follows from \eqref{eq:rz2} and \eqref{eq:maptoitself} that \begin{equation} \begin{aligned} \|\bar{\bm{x}}(\varepsilon) - \bm{x}^*\| & = \|\Phi(\bar{\bm{x}}(\varepsilon) - \bm{x}^*, \varepsilon)\|\\ & \leq k_1 \|\bm{\lambda}_0\| \varepsilon + k_1\|\bm{r}(\bar{\bm{x}}(\varepsilon) - \bm{x}^*)\| \\ & \leq k_1 \|\bm{\lambda}_0\| \varepsilon + \frac{k_1}{k_1+1}\|\bar{\bm{x}}(\varepsilon) - \bm{x}^*\| \end{aligned} \end{equation} Thus, there holds \begin{equation} \|\bar{\bm{x}}(\varepsilon) - \bm{x}^*\| \leq k_1(k_1+1)\|\bm{\lambda}_0\|\varepsilon. \end{equation} On the other hand, it follows from \eqref{eq:maptoitself} that \begin{equation} \|\bar{\bm{x}}(\varepsilon) - \bm{x}^*\| \leq \delta.
\end{equation} Thus \begin{equation} \begin{aligned} \|\bar{\bm{\lambda}}(\varepsilon) - \bm{\lambda}^*\| & = \|\nabla f(\bar{\bm{x}}(\varepsilon)) - \nabla f(\bm{x}^*)\| \\ & \leq \sup_{\|\bm{z}\|\leq \delta}\{\|\nabla^2 f(\bm{x}^* + \bm{z})\|\} \cdot \|\bar{\bm{x}}(\varepsilon) - \bm{x}^*\| \end{aligned} \end{equation} Therefore, \eqref{eq:LipchitzContinuous} holds, which also implies \eqref{eq:solutionLimit}. \end{proof} Note that the sub-optimal solution depends closely not only on the parameter $\varepsilon$, but also on the parameter $\bm{b}$. The following result shows a special case when $\bm{b}$ happens to equal $\bm{x}^*$. \begin{corollary} With $\bm{L}\bm{\lambda}_0 = \bm{b}- \bm{x}^*$, we can choose $\bm{\lambda}_0 = 0$ provided $\bm{b} = \bm{x}^*$. Then algorithm \eqref{eq:algorithmCompact} with any $\varepsilon >0$ has $(\bm{x}^*, \bm{\lambda}^*)$ as its equilibrium, i.e., it gives exactly the optimal solution to problem \eqref{eq:optimizationProblem}. \end{corollary} \begin{remark} Theorems \ref{thm:2} and \ref{thm:3} show that, different from some algorithms like \eqref{eq:algorithmPI}, the algorithm \eqref{eq:algorithmNew} has simple dynamics and converges over weight-balanced directed graphs without depending on the network topology; also, the optimization error can easily be reduced by tuning the parameter $\varepsilon$. \end{remark} \subsection{Numerical example} Here we give an illustrative example for our algorithm. Consider the following problem \begin{equation*} \begin{aligned} & \min f(\bm{x}) = \frac{1}{2}(x_1^2 + \frac{1}{4}x_2^2 + x_3^2)\\ & \text{ s.t. } x_1 + x_2 + x_3 = 1 \end{aligned} \end{equation*} with a multi-agent system consisting of three agents, where agent $i$ manipulates variable $x_i$ for $i = 1, 2, 3$, and their interaction graph is shown in Fig. \ref{fig:topology}.
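As a cross-check, the optimal point of this example can be computed by solving the KKT system \eqref{eq:optimalSolution} directly as a linear equation. The following minimal numerical sketch (the matrix setup is ours, for illustration only) does so:

```python
import numpy as np

# KKT system for f(x) = 0.5*(x1^2 + x2^2/4 + x3^2) with x1 + x2 + x3 = 1:
# stationarity  H x + 1_3 * mu = 0  and the constraint  1_3^T x = 1.
H = np.diag([1.0, 0.25, 1.0])                 # Hessian of f
A = np.block([[H, np.ones((3, 1))],
              [np.ones((1, 3)), np.zeros((1, 1))]])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
sol = np.linalg.solve(A, rhs)
x_star, mu_star = sol[:3], sol[3]
print(x_star)                                 # [1/6, 2/3, 1/6]
print(mu_star)                                # -1/6, so lambda* = -(1/6) * 1_3
```

The result matches the optimal pair $(\bm{x}^*, \bm{\lambda}^*)$ discussed below.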
Our distributed algorithm can be given as follows: \begin{equation*} \left\{\begin{aligned} \dot{x}_1 & = - x_1 -\lambda_1\\ \dot{x}_2 & = - \frac{1}{4}x_2 -\lambda_2\\ \dot{x}_3 & = - x_3 - \lambda_3 \\ \varepsilon \dot{\lambda}_1 & = - (\lambda_1 - \lambda_3) + \varepsilon (x_1 - \frac{1}{3}) \\ \varepsilon \dot{\lambda}_2 & = - (\lambda_2 - \lambda_1) + \varepsilon (x_2 - \frac{1}{3}) \\ \varepsilon \dot{\lambda}_3 & = - (\lambda_3 - \lambda_2) + \varepsilon (x_3 - \frac{1}{3}) \\ \end{aligned}\right. \end{equation*} A direct calculation shows that the equilibrium point $(\bar{\bm{x}}(\varepsilon), \bar{\bm{\lambda}}(\varepsilon))$ is \begin{equation*} \begin{aligned} \begin{bmatrix} \bar{x}_1(\varepsilon)\\ \bar{x}_2(\varepsilon)\\ \bar{x}_3(\varepsilon) \end{bmatrix} & = \begin{bmatrix} \frac{1}{6}\\ \frac{2}{3}\\ \frac{1}{6} \end{bmatrix} + \frac{\varepsilon}{6(4\varepsilon^2 + 9\varepsilon + 6)} \begin{bmatrix} 4\varepsilon + 9\\ - 8\varepsilon - 12\\ 4\varepsilon +3 \end{bmatrix} \\ \begin{bmatrix} \bar{\lambda}_1(\varepsilon)\\ \bar{\lambda}_2(\varepsilon)\\ \bar{\lambda}_3(\varepsilon) \end{bmatrix} & = \begin{bmatrix} -\frac{1}{6}\\ -\frac{1}{6}\\ -\frac{1}{6} \end{bmatrix} + \frac{\varepsilon}{6(4\varepsilon^2 + 9\varepsilon + 6)} \begin{bmatrix} -(4\varepsilon + 9)\\ 2\varepsilon + 3\\ -(4\varepsilon +3) \end{bmatrix} \end{aligned} \end{equation*} Indeed, the optimal solution of the problem is $\bm{x}^* = (\frac{1}{6}, \frac{2}{3}, \frac{1}{6})^T$: by the Cauchy-Schwarz inequality, $(x_1^2 + \frac{1}{4}x_2^2 + x_3^2)(1^2 + 2^2 + 1^2) \geq (x_1+x_2+x_3)^2 = 1$, with equality if and only if $(x_1, \frac{1}{2}x_2, x_3) = k(1,2,1)$, i.e., $\bm{x} = k(1,4,1)^T$, for some $k\in \mathbb{R}$. Due to the equality constraint, $k$ must be $\frac{1}{6}$. Moreover, we observe that $\bm{\bar{x}}(\varepsilon)$ satisfies the constraint, i.e., $\bar{x}_1(\varepsilon) + \bar{x}_2(\varepsilon) + \bar{x}_3(\varepsilon) = 1$.
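As a sanity check (not part of the original text), the closed-form equilibrium displayed above can be verified numerically: it annihilates the right-hand sides of the six differential equations, satisfies the resource constraint exactly, and lies within $O(\varepsilon)$ of $\bm{x}^*$. A minimal NumPy sketch:

```python
import numpy as np

def equilibrium(eps):
    # closed-form equilibrium of the three-agent example, as displayed above
    d = 6.0 * (4.0 * eps**2 + 9.0 * eps + 6.0)
    x = np.array([1/6, 2/3, 1/6]) + (eps / d) * np.array([4*eps + 9, -8*eps - 12, 4*eps + 3])
    lam = np.array([-1/6, -1/6, -1/6]) + (eps / d) * np.array([-(4*eps + 9), 2*eps + 3, -(4*eps + 3)])
    return x, lam

def residual(x, lam, eps):
    # right-hand sides of the six ODEs; all must vanish at an equilibrium
    c = np.array([1.0, 0.25, 1.0])           # gradient of f is (x1, x2/4, x3)
    r_x = -c * x - lam
    r_lam = -(lam - lam[[2, 0, 1]]) + eps * (x - 1/3)
    return np.max(np.abs(np.concatenate([r_x, r_lam])))

x_star = np.array([1/6, 2/3, 1/6])
for eps in (1.0, 0.1, 0.01):
    x, lam = equilibrium(eps)
    assert residual(x, lam, eps) < 1e-12          # it is an equilibrium
    assert abs(x.sum() - 1.0) < 1e-12             # constraint holds
    assert np.linalg.norm(x - x_star) < 2 * eps   # O(eps) sub-optimality
```

The last assertion illustrates the Lipschitz-type bound of the preceding theorem: the equilibrium drifts toward $\bm{x}^*$ linearly in $\varepsilon$.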
Furthermore, the distance between the algorithm equilibrium and the optimal solution is dominated by a term proportional to $\varepsilon$. Simulations are run with $\varepsilon = 1$, $\varepsilon = 0.1$, and $\varepsilon = 0.01$. The trajectories and the Lyapunov function are shown in Fig. \ref{fig:F1} - Fig. \ref{fig:F4}. They indicate that our simple algorithm converges to its equilibrium, which approaches the optimal point as $\varepsilon$ tends to zero. \begin{figure} \caption{The communication graph of the three agents. } \label{fig:topology} \end{figure} \begin{figure} \caption{The trajectories of allocation of agent $1$} \label{fig:F1} \end{figure} \begin{figure} \caption{The trajectories of allocation of agent $2$} \label{fig:F2} \end{figure} \begin{figure} \caption{The trajectories of allocation of agent $3$} \label{fig:F3} \end{figure} \begin{figure} \caption{The trajectories of Lyapunov function} \label{fig:F4} \end{figure} \section{Conclusions} In this paper, a distributed sub-optimal continuous-time algorithm has been proposed for the resource allocation optimization problem. Its convergence has been proved over any strongly connected and weight-balanced graph, and its sub-optimality has been analyzed and illustrated by numerical simulations. At the same time, singular perturbation ideas have been shown to be useful in distributed sub-optimal design, even though the problems that arise are not completely covered by the existing singular perturbation theory. In fact, based on the proposed approach, we are considering systematic ways to further make singular perturbation techniques serve distributed algorithm design with various constraints. \end{document}
\begin{document} \title{Optimal Calder\'on Spaces for generalized Bessel potentials} \date{\today} \author{Elza Bakhtigareeva, Mikhail L. Goldman, and Dorothee D. Haroske} \maketitle \begin{abstract} In this paper we investigate the properties of spaces with generalized smoothness, such as Calder\'on spaces, which include the classical Nikolskii-Besov spaces and many of their generalizations, and describe differential properties of generalized Bessel potentials, which include classical Bessel potentials and Sobolev spaces. Kernels of potentials may have a non-power singularity at the origin. With the help of order-sharp estimates for moduli of continuity of potentials, we establish criteria for embeddings of potentials into Calder\'on spaces, and describe the optimal spaces for such embeddings. \end{abstract} \section{Introduction} \label{intro} The paper is devoted to generalized Bessel potentials constructed as convolutions of generalized Bessel-McDonald kernels with functions from a basic rearrangement-invariant space. If the criterion for the embedding of potentials into the space of bounded continuous functions is satisfied, we state an equivalent description for the cones of moduli of continuity of potentials in the uniform norm. This gives the opportunity to obtain a criterion for the embedding of potentials into the Calder\'on space. We develop here the results of \cite{GoHa}. Some of the results presented here were announced in \cite{GoHa-3,GoHa-4}. In the case of generalized Bessel potentials constructed over a basic weighted Lorentz space we describe explicitly the optimal Calder\'on space for such an embedding. The results of Sections 3, 4 are based on an application of some results obtained in \cite{BaGo}. The paper is organized as follows. In Section~\ref{sect-1} notation, essential concepts and definitions are presented.
We present the results concerning equivalent descriptions for the cone of moduli of continuity of generalized Bessel potentials in the uniform norm (Theorem \ref{theo-g-1.1}), and prove two-sided estimates for a variant of the continuity envelope function for the space of potentials (Theorem \ref{theo-g-ad-2.11}). The criterion of embedding for the space of potentials into the Calder\'on space is presented in Theorem \ref{theo-g-2.4}. Sections~\ref{sect-3}-\ref{sect-3b} are devoted to the description of the optimal Calder\'on space for such an embedding. Theorem \ref{theo-g-3.3} gives an explicit description in the case when the basic space for the potential is a weighted Lorentz space. The proofs of the main results of Section~\ref{sect-3} are given in Sections~\ref{sect-3a} and~\ref{sect-3b}. Section~\ref{sect-4} contains some explicit descriptions of the optimal Calder\'on space. \section{Preparations} \label{sect-1} First we fix some notation. By $\ensuremath{\mathbb N}$ we denote the set of natural numbers, by $\ensuremath{\mathbb N}_0$ the set $\ensuremath{\mathbb N} \cup \{ 0\}$. For two positive real sequences $\{\alpha_k\}_{k\in \ensuremath{\mathbb N}}$ and $\{\beta_k\}_{k\in \ensuremath{\mathbb N}}$ we mean by $\alpha_k\sim \beta_k$ that there exist constants $c_1,c_2>0$ such that $c_1 \alpha_k\leq \beta_k\leq c_2 \alpha_k$ for all $k\in \ensuremath{\mathbb N}$; similarly for positive functions. Usually $B_r = \{y\in \ensuremath{\mathbb R}^{n}: |y|<r\}$ stands for a ball in $\ensuremath{\mathbb R}^{n}$ centred at the origin and with radius $r>0$. We denote by $\mu_n$ the Lebesgue measure on $\ensuremath{\mathbb R}^{n}$, $n\in \ensuremath{\mathbb N}$, and by $V_{n}$ the volume of the $n$-dimensional unit ball, that is, $V_n = \mu_n(B_1)$. For a set $A$ we denote by $\chi_A$ its characteristic function. Given two (quasi-) Banach spaces $X$ and $Y$, we write $X\hookrightarrow Y$ if $X\subset Y$ and the natural embedding of $X$ in $Y$ is continuous.
All unimportant positive constants will be denoted by $c$, occasionally with subscripts. \subsection{Banach function spaces} \label{sec-1-1} We use some notation and general facts from the theory of Banach function spaces and rearrangement-invariant spaces; for general background material we refer to \cite{BS}. As usual we call $f^\ast$ the decreasing rearrangement of the function $f$, i.e., $0\leq f^\ast$ is a decreasing, right-continuous function on $\ensuremath{\mathbb R}_+ = (0,\infty)$, equi-measurable with $f$, \begin{equation} \mu_n(\{x\in\ensuremath{\mathbb R}^{n}: |f(x)|>s\}) = \mu_1(\{t\in\ensuremath{\mathbb R}_+: f^\ast(t)>s\}), \quad s>0. \end{equation} \begin{definition}\label{bfs-ris} \bli \item[{\bfseries\upshape (i)}] A linear space $X=X(0,T)$ of measurable functions on $(0,T)$, equipped with the norm $\|\cdot\|_X$, is called a {\em Banach function space}, shortly: {\em BFS}, if the following conditions are satisfied: \begin{itemize} \item[{\bfseries\upshape (P1)}] $\|f\|_X=0 \iff f=0\ $ $\mu$-a.e. on $(0,T)$; \item[{\bfseries\upshape (P2)}] $|f|\leq g,\ g\in X \quad\text{implies}\quad f\in X, \ \|f\|_X \leq \|g\|_X;$ \item[{\bfseries\upshape (P3)}] If for some functions $f_n\in X$, $f_n\geq 0$, $n\in\ensuremath{\mathbb N}$, and $f_n$ monotonically increasing to $f$, then $\|f_n\|_X \to \|f\|_X$, i.e. \begin{equation} \label{def-bfs} \|f\|_X=\lim_{n\to\infty}\|f_n\|_X. \end{equation} \item[{\bfseries\upshape (P4)}] For every measurable set $B\subset (0,T)$ with $\mu(B)>0$, there exists some $c_B>0$, such that for all $f\in X$, \[ \int_B |f| \ensuremath{\,\mathrm{d}} \mu \leq c_B \|f\|_X. \] \item[{\bfseries\upshape (P5)}] For every measurable set $B\subset (0,T)$ with $\mu(B)>0$, $\|\chi_B\|_X<\infty$. 
\end{itemize} \item[{\bfseries\upshape (ii)}] A BFS $E$ is called a {\em rearrangement-invariant space}, shortly: {\em RIS}, if its norm is monotone with respect to rearrangements, \begin{equation} f^\ast \leq g^\ast, \ g\in E \quad \text{implies}\quad f\in E, \ \|f\|_E\leq \|g\|_E.\label{1.3} \end{equation} \end{list} \end{definition} \begin{remark}\label{r1} Note that the limit on the right-hand side of \eqref{def-bfs} always exists (finite or infinite), because the sequence of norms increases. The finiteness of this limit is a criterion for $f \in X$. \end{remark} For an RIS $E(\ensuremath{\mathbb R}^{n})$ its associate space $E'(\ensuremath{\mathbb R}^{n})$ is an RIS again, equipped with the norm \[ \|g\|_{E'(\ensuremath{\mathbb R}^{n})} = \sup_{\|f\|_{E(\ensuremath{\mathbb R}^{n})}\leq 1} \; \int\limits_0^\infty f^\ast(s) g^\ast(s) \ensuremath{\,\mathrm{d}} s,\quad g\in E'(\ensuremath{\mathbb R}^{n}), \] see \cite[Ch.~2]{BS} for further details. \begin{remark} The following Luxemburg representation formula is known: For an RIS $E(\ensuremath{\mathbb R}^{n})$ there exists a unique RIS $\widetilde{E}(\ensuremath{\mathbb R}_+)$ such that \begin{equation} \|f\|_{E(\ensuremath{\mathbb R}^{n})} = \|f^\ast\|_{\widetilde{E}(\ensuremath{\mathbb R}_+)}.\label{1.4} \end{equation} Likewise, let $\widetilde{E}'(\ensuremath{\mathbb R}_+)$ be the Luxemburg representation for $E'(\ensuremath{\mathbb R}^{n})$, with \[ \|h\|_{\widetilde{E}'(\ensuremath{\mathbb R}_+)} = \sup\left\{ \int_0^\infty h^\ast(t)f^\ast(t)\ensuremath{\,\mathrm{d}} t:\quad f\in \widetilde{E}(\ensuremath{\mathbb R}_+), \ \|f\|_{\widetilde{E}(\ensuremath{\mathbb R}_+)}\leq 1\right\}. \] \end{remark} \begin{example}\label{ex-Lorentz} Let us mention some examples of RIS such as ${L_{p}}$, classical Lorentz and Marcinkiewicz spaces ${\Lambda _{pq}}$ and ${M_{p}}$, Orlicz spaces and other. 
Recall that the norms in generalized weighted Lorentz and Marcinkiewicz spaces ${\Lambda _{q}\left(v\right)}$ and ${M\left(v\right)}$ with a weight ${v}$ are given by \begin{align} \|f\|_{\Lambda_q(v)} & =\left(\int _0^\infty f^\ast\left(t\right)^{q}v\left(t\right) \ensuremath{\,\mathrm{d}} t\right)^{1/q},\quad 1\le q<\infty; \label{1.5} \\ \|f\|_{{M\left(v\right)}} &=\sup\left\{f^{\ast\ast}\left(t\right)v\left(t\right):\ t\in \ensuremath{\mathbb R}_+\right\},\label{1.6} \end{align} where \begin{equation} f^{\ast\ast}\left(t\right)=\frac1t \int_0^t f^\ast\left(s\right) \ensuremath{\,\mathrm{d}} s \label{f**} \end{equation} is the well-known maximal function of $f^\ast$, that is, $f^{\ast}\le f^{\ast\ast}$, $f^{\ast\ast}$ is monotonically decreasing, whereas $t f^{\ast\ast}(t)$ is monotonically increasing.\\ For special weights we obtain a variety of Lorentz and Marcinkiewicz spaces: For instance, $v_{pq}\left(t\right)=t^{{\frac{q}{p}-1}}$, $ 1\le p,q<\infty $, yields the classical Lorentz spaces \[ \Lambda_{q}\left(v_{pq}\right)=\Lambda_{pq},\quad\text{in particular,}\quad \Lambda_q(v_{qq})=\Lambda_q(1)=L_q, \] whereas the choice \[ v_b\left(t\right)=\left[t^{{\gamma }} b\left(t\right)\right]^{q} t^{{-1}},\quad \gamma\in\ensuremath{\mathbb R}, \] and ${b\left(t\right)}$ a slowly varying function of logarithmic type, leads to the so-called Lorentz-Karamata spaces, see for example \cite{GNO,neves-01}. \end{example} \subsection{Space of Bessel potentials}\label{sec-1-2} The space of Bessel potentials is introduced by an integral representation over an RIS $E=E\left(\ensuremath{\mathbb R}^{n}\right)$. 
Here we need the notion of the generalized Bessel-McDonald kernel $G=G_\Phi$, \begin{equation} G(x)=\Phi(|x|), \quad x\in \ensuremath{\mathbb R}^{n}\setminus\{0\}, \label{g1.2} \end{equation} where we make the following assumptions on the function $\Phi:\ensuremath{\mathbb R}_+\to [0,\infty)$ everywhere in the sequel: $\Phi$ is continuous, monotonically decreasing, and \begin{equation} 0<\int_0^\infty \Phi(z) z^{n-1}\ensuremath{\,\mathrm{d}} z<\infty. \label{g1.3} \end{equation} We set \begin{equation} \varphi(\tau)=\Phi\left((\tau/V_n)^{1/n}\right),\quad \tau >0, \label{g1.4} \end{equation} so that $\varphi$ is a positive continuous decreasing function such that \begin{equation*} 0<\int_0^\infty \varphi(\tau)\ensuremath{\,\mathrm{d}} \tau<\infty. \end{equation*} \begin{definition}\label{defi-HGE} Let $E=E(\ensuremath{\mathbb R}^{n})$ be an RIS, $G$ the generalized Bessel-McDonald kernel as above. Then \begin{equation} H_{E}^{G}\left(\ensuremath{\mathbb R}^{n}\right)\equiv H_{E}^G=\left\{u=G\ast f :\ f\in E\left(\ensuremath{\mathbb R}^{n}\right) \right\}, \label{g1.1} \end{equation} equipped with the norm \begin{equation*} \|u\|_{H_{E}^G}:=\inf \left\lbrace \|f\|_{E} :\quad f \in E(\ensuremath{\mathbb R}^{n});\quad G\ast f=u\right\rbrace . \end{equation*} Here \[ u(x)=(G\ast f)\left(x\right)=\int _{\ensuremath{\mathbb R}^{n}} G\left(x-y\right)f(y) \ensuremath{\,\mathrm{d}} y. \] \end{definition} Let $C(\ensuremath{\mathbb R}^{n})$ be the space of all complex-valued bounded uniformly continuous functions on $\ensuremath{\mathbb R}^{n}$, equipped with the sup-norm as usual.
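The conditions on $\Phi$ and the change of variables behind \eqref{g1.4} can be illustrated numerically. The sketch below uses the model kernel $\Phi(z)=z^{\alpha-n}e^{-z}$ with $0<\alpha<n$ (a hypothetical choice for illustration, not taken from the text): it is positive, continuous and decreasing, the integral in \eqref{g1.3} equals $\Gamma(\alpha)$, and the substitution $\tau=V_n z^n$ gives $\int_0^\infty \varphi(\tau)\,\mathrm{d}\tau = nV_n\int_0^\infty \Phi(z)z^{n-1}\,\mathrm{d}z$.

```python
import math
import numpy as np

# model kernel (hypothetical, for illustration): Phi(z) = z^{alpha-n} e^{-z}
ALPHA, N = 1.5, 3
V_N = math.pi ** (N / 2) / math.gamma(N / 2 + 1)  # volume of the unit ball; V_3 = 4*pi/3

def phi_big(z):
    return z ** (ALPHA - N) * np.exp(-z)

def phi_small(tau):
    # phi(tau) = Phi((tau / V_n)^{1/n}) as in (g1.4)
    return phi_big((tau / V_N) ** (1.0 / N))

def integral(f, lo=-40.0, hi=12.0, m=80001):
    # integral of f over (0, infinity): substitute s = e^u, then trapezoid rule
    u = np.linspace(lo, hi, m)
    s = np.exp(u)
    vals = f(s) * s                     # Jacobian ds = e^u du
    return np.sum(0.5 * (vals[1:] + vals[:-1])) * (u[1] - u[0])

I_phi = integral(lambda z: phi_big(z) * z ** (N - 1))   # the integral in (g1.3)
I_small = integral(phi_small, hi=16.0)                  # 0 < int_0^infty phi(tau) dtau < infty
assert abs(I_phi - math.gamma(ALPHA)) < 1e-6
# change of variables tau = V_n z^n:
assert abs(I_small - N * V_N * I_phi) < 1e-5
```

Both assertions confirm that \eqref{g1.3} and the finiteness of $\int_0^\infty\varphi$ are two forms of the same condition for this model kernel.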
The following estimate holds for $u=G\ast f$, with fixed $T\in\ensuremath{\mathbb R}_+$: \begin{equation*} \|u\|_{C}:=\sup\limits_{x \in \ensuremath{\mathbb R}^{n}} |u(x)|\leq c_0 \int _{0}^T \varphi(\tau)f^{\ast}(\tau)\ensuremath{\,\mathrm{d}} \tau, \quad c_{0}=1+\left( \int _{T}^{\infty} \varphi\ensuremath{\,\mathrm{d}} \tau\right) \left( \int _{0}^T \varphi\ensuremath{\,\mathrm{d}} \tau\right)^{-1}, \end{equation*} see \cite[(1.3)]{GoMa}. Thus, if we require that \begin{equation} \varphi \in\widetilde{E}'(0,T), \label{g1.5} \end{equation} then for every $f \in E(\ensuremath{\mathbb R}^{n})$ such that $G\ast f=u$, we have the inequality \begin{equation} \|u\|_{C}\leq c_0\|\varphi\|_{\widetilde{E}'(0,T)}\|f^{\ast}\|_{\widetilde{E}(\ensuremath{\mathbb R}_+)}=c_0\|\varphi\|_{\widetilde{E}'(0,T)}\|f\|_{{E}(\ensuremath{\mathbb R}^{n})}. \label{g1.6} \end{equation} We take the infimum over such functions $f$ and obtain \begin{equation} \|u\|_{C}\leq c_0\|\varphi\|_{\widetilde{E}'(0,T)}\|u\|_{H_E^G(\ensuremath{\mathbb R}^{n})}. \label{g1.7} \end{equation} Under the additional conditions \eqref{g1.15}, \eqref{g1.16} we have the embedding $ H^G_E(\ensuremath{\mathbb R}^{n})\hookrightarrow C(\ensuremath{\mathbb R}^{n})$ into the space of continuous uniformly bounded functions (see Remark \ref{rem-g-1.2} below). For later use, let us further recall the definition of differences of functions. If $f$ is an arbitrary function on $\ensuremath{\mathbb R}^{n}$, $h\in\ensuremath{\mathbb R}^{n}$ and $k\in\ensuremath{\mathbb N}$, then \begin{equation} (\Delta_h^k f)(x):=\sum_{j=0}^{k}\,\binom{k}{j}\,(-1)^{k-j}\, f(x+jh), \quad x\in\ensuremath{\mathbb R}^{n}. \label{diff} \end{equation} Note that $\Delta_h^k$ can also be defined iteratively via \[ (\Delta_h^1 f)(x)=f(x+h)-f(x) \quad \text{and} \quad (\Delta_h^{k+1} f)(x)=\Delta_h^1(\Delta_h^k f)(x), \quad k\in\ensuremath{\mathbb N}. \] For convenience we may write $\Delta_h$ instead of $\Delta_h^1$. 
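A quick numerical illustration (not part of the original text): the binomial formula \eqref{diff} agrees with the iterated first differences, and $\Delta_h^k$ annihilates polynomials of degree less than $k$, with $\Delta_h^k x^k = k!\,h^k$ — two classical consequences of the definition.

```python
import numpy as np
from math import comb, factorial

def delta_k(f, x, h, k):
    # (Delta_h^k f)(x) via the binomial formula (diff)
    return sum(comb(k, j) * (-1) ** (k - j) * f(x + j * h) for j in range(k + 1))

def delta_iterated(f, x, h, k):
    # the same difference computed by iterating Delta_h^1
    vals = np.array([f(x + j * h) for j in range(k + 1)], dtype=float)
    for _ in range(k):
        vals = vals[1:] - vals[:-1]
    return vals[0]

x0, h = 0.3, 0.17
for k in (1, 2, 3, 4):
    assert abs(delta_k(np.cos, x0, h, k) - delta_iterated(np.cos, x0, h, k)) < 1e-12

cube = lambda x: x ** 3
assert abs(delta_k(cube, x0, h, 4)) < 1e-12                      # degree 3 < k = 4
assert abs(delta_k(cube, x0, h, 3) - factorial(3) * h ** 3) < 1e-12
```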
Accordingly, the $k$-th modulus of smoothness of a function $f\in C(\ensuremath{\mathbb R}^{n})$ is defined by \begin{equation} \omega_k(f;t)=\sup_{|h|\leq t} \left\|\Delta_h^k f\right\|_{C(\ensuremath{\mathbb R}^{n})}, \quad t>0, \label{modulus} \end{equation} so that $\omega_1(f;t)=\omega(f;t)$. Recall that \eqref{diff} immediately gives \begin{equation} \omega_k(f;\lambda t) \leq \ (1+\lambda)^k \ \omega_k(f;t),\quad \lambda>0.\label{omega-dil} \end{equation} \subsection{The cone of moduli of continuity for potentials} \label{sec-1-3} Let $k\in\ensuremath{\mathbb N}$, $T>0$, and let $H^G_E(\ensuremath{\mathbb R}^{n})$ be as in Definition~\ref{defi-HGE}. We introduce the following cone of moduli of continuity of potentials, \begin{equation} M=\left\{h:\ensuremath{\mathbb R}_+\to\ensuremath{\mathbb R}_+: \ h(t)= \omega_k\left(u; t^{1/n}\right), \ t\in (0,T)\quad\text{for some}\ u\in H^G_E(\ensuremath{\mathbb R}^{n})\right\}, \label{g1.8} \end{equation} equipped with the functional \begin{equation} \label{g1.9} \varrho_M(h) = \inf\left\{ \|u\|_{H^G_E}: \ u\in H^G_E(\ensuremath{\mathbb R}^{n}), \ \omega_k\left(u; t^{1/n}\right)= h(t),\ t\in (0,T)\right\},\quad h\in M. \end{equation} Plainly, $M=M(k,E,G,T)$, but we shall usually write $M$ for convenience. Our next aim is to characterize the above cone by a simpler expression. For this purpose we define the notions of covering and equivalence of cones. We say that the cone $M=M(0,T)$ with $\varrho_M$ is covered by the cone $K=K(0,T)$ with $\varrho_K$, with covering constant $c \in (0, \infty)$, written $M\overset{c}{\prec}K$, if for any function $h\in M$ there exists some function $g\in K$, such that \begin{equation}\label{g1.13} \varrho_K(g)\leq c \varrho_M(h)\qquad \text{and}\qquad g(t)\geq h(t), \quad t\in (0,T). \end{equation} We write $M\prec K$ if there exists $c \in (0, \infty)$ such that $M\overset{c}{\prec}K$.
We denote by $c_0(M\prec K)$ the best covering constant for $M\prec K$, that is, $$c_0(M\prec K)=\inf \left\lbrace c \in \ensuremath{\mathbb R}_+: M\overset{c}{\prec}K\right\rbrace.$$ In the case of mutual covering we speak of the {\em equivalence of cones}, that is, \begin{equation} M\approx K\qquad \iff\qquad M\prec K\prec M. \label{g1.14} \end{equation} We need some further notation. For $\varphi$ as above, let \begin{equation} \label{g1.12} \Omega_\varphi(t,\tau)=\frac{\varphi(\tau)}{1+\left(\frac{\tau}{t}\right)^{k/n}},\quad t,\tau>0, \end{equation} and \[ \widetilde{E}_0(0,T) = \left\{\sigma \in \widetilde{E}(0,T): \sigma\geq 0, \ \sigma\ \text{monotonically decreasing}\right\}. \] Then the new cone $K=K(E,\varphi,T)$ is given by \begin{equation} K=\left\{h:\ensuremath{\mathbb R}_+\to\ensuremath{\mathbb R}_+: \ h(t)=\int_0^T \Omega_\varphi(t,\tau) \sigma(\tau)\ensuremath{\,\mathrm{d}} \tau\quad\text{for some}\ \sigma \in \widetilde{E}_0(0,T)\right\}, \label{g1.10} \end{equation} equipped with the functional \begin{equation} \label{g1.11} \varrho_K(h)=\inf\left\{ \left\|\sigma\right\|_{\widetilde{E}(0,T)}: \sigma \in \widetilde{E}_0(0,T), \quad \int_0^T \Omega_\varphi(t,\tau) \sigma(\tau)\ensuremath{\,\mathrm{d}} \tau = h(t)\right\}. \end{equation} Our first main result reads as follows. \begin{theorem} \label{theo-g-1.1} Let $\Phi$ be a standard function satisfying \eqref{g1.3}, and let $\varphi$, given by \eqref{g1.4}, satisfy \eqref{g1.5}.
\bli \item[{\bfseries\upshape 1.}] Assume, in addition, that $\Phi\in C^k(\ensuremath{\mathbb R}_+),$ $k\in\ensuremath{\mathbb N}$, and $\Phi$ satisfies the following estimates for some positive real numbers $a_1, a_2, z_1$, \begin{align} \max_{1\leq j\leq k} \left( z^{2j} \left|\Phi_j(z)\right|\right) \leq & \ a_1 \ \Phi(z),\quad z\in (0,z_1], \label{g1.15}\\ \max_{1\leq j\leq k} \left( z^{2j} \left|\Phi_j(z)\right|\right) \leq & \ a_2\ z^k \Phi(z),\quad z>z_1, \label{g1.16} \end{align} where \[ \Phi_j(z)=\left(\frac{1}{z}\ \frac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}} z}\right)^j \Phi(z). \] Then, for the cones $M$ given by \eqref{g1.8} and $K$ given by \eqref{g1.10} we have the covering $M \prec K$. \item[{\bfseries\upshape 2.}] If $\Phi$ additionally satisfies the following estimate for some positive real number $\delta_1$: \begin{align} (-1)^k z^k \Phi^{(k)}(z) \geq & \ \delta_1\ \Phi(z), \quad z\in (0,z_1],\label{g1.17} \end{align} then $M$ and $K$ are equivalent. The constants in the condition of mutual coverings \eqref{g1.14} depend on $k$, $n$, $T$, $a_1,a_2$, $z_1,\delta_1$ and on the norm of the embedding operator \eqref{g1.7}. \end{list} \end{theorem} \begin{remark}\label{g-ad2.7} The proof is based on the following crucial estimates. \bli \item[{\bfseries\upshape 1.}] Under the assumptions of Theorem \ref{theo-g-1.1}, Part 1, for any $u \in H_E^G(\ensuremath{\mathbb R}^{n})$, that is $u=G\ast f,\, f \in E(\ensuremath{\mathbb R}^{n}),$ the estimate holds \begin{equation} \omega_k(u, t^{1/n})\leq c_{1}\int_{0}^{T}\Omega_{\varphi}(t, \tau)f^{*}(\tau)\ensuremath{\,\mathrm{d}} \tau, \quad t \in (0,T), \label{g-ad2.28} \end{equation} see \cite{GoMa}, with $c_1=c_1(k, n,z_1,a_1,a_2, T, \|\ensuremath\mathrm{id}\|)\in \ensuremath{\mathbb R}_+$, where $\|\ensuremath\mathrm{id}\|$ refers to the norm of embedding operator in \eqref{g1.7}. 
\item[{\bfseries\upshape 2.}] Under the assumptions of Theorem \ref{theo-g-1.1}, Part 2, for $\sigma_0 \in \widetilde{E}_0(0,T)$ we set $\sigma(t) =\sigma_0(t),\, t \in (0, T),\quad \sigma(t)=0,\, t \geq T$. Then there exists $f \in E(\ensuremath{\mathbb R}^{n})$ such that $f^{*}(t)\leq \sigma(t)$, $t \in \ensuremath{\mathbb R}_+$, and such that, for $u=G\ast f \in H_E^G(\ensuremath{\mathbb R}^{n})$, \begin{equation} \omega_k(u, t^{1/n})\geq c_{2}\int_{0}^{T}\Omega_{\varphi}(t, \tau)\sigma_0(\tau)\ensuremath{\,\mathrm{d}} \tau,\quad t \in (0,T), \label{g-ad2.29} \end{equation} with $c_2=c_2(k, n,z_1,a_1,\delta_1, T)>0$, see \cite{GoMa-2, GoHa-3}. Moreover, for the best covering constants in Theorem \ref{theo-g-1.1} we have \begin{equation} \label{g-ad2.30} c_0(M\prec K)\leq c_1;\quad c_0(K\prec M)\leq c_2^{-1}. \end{equation} \end{list} \end{remark} \begin{proof} {\em Step 1}.\quad We first show that under the assumptions \eqref{g1.15}, \eqref{g1.16} there is the covering $M \overset{c}{\prec}K$ for the cones given by \eqref{g1.8} and \eqref{g1.10}, with any covering constant $c>c_1$.
Let $c=(1+\varepsilon)^{2}c_1,\quad \varepsilon \in (0, 1).$ For $h\in M$ there exists $u_\varepsilon \in H_E^G(\ensuremath{\mathbb R}^{n})$ such that \begin{align} \omega_k(u_\varepsilon; t^{1/n}) & = h(t),\, t\in (0,T), \quad\|u_\varepsilon\|_{H_E^G}\leq (1+\varepsilon)\varrho_M(h).\nonumber \end{align} Then, for $u_\varepsilon \in H_E^G(\ensuremath{\mathbb R}^{n})$, we find $f_\varepsilon \in E(\ensuremath{\mathbb R}^{n})$, such that $$u_\varepsilon=G\ast f_\varepsilon,\quad \|f_\varepsilon\|_E \leq (1+\varepsilon)\|u_\varepsilon\|_{H^G_E} \ \leq \ (1+\varepsilon)^2\varrho_M(h).$$ By \eqref{g-ad2.28} for $u=u_\varepsilon$ we have the estimate \begin{equation*} h(t)\leq g_{\varepsilon}(t):=c_1\int_0^T \Omega_\varphi(t,\tau) f_{\varepsilon}^\ast(\tau)\ensuremath{\,\mathrm{d}} \tau=\int_0^T \Omega_\varphi(t,\tau) \sigma_{\varepsilon, 0}(\tau)\ensuremath{\,\mathrm{d}} \tau,\quad t\in (0,T). \end{equation*} Here \[ \sigma_{\varepsilon, 0}=c_1f^{\ast}_{\varepsilon} \in \widetilde{E}_0(0,T);\quad \left\|\sigma_{\varepsilon, 0}\right\|_{\widetilde{E}(0,T)} \leq c_1\|f^{\ast}_\varepsilon\|_{\widetilde{E}(\ensuremath{\mathbb R}_+)}= c_1\|f_\varepsilon\|_{E}. \] We see that $$ h\leq g_{\varepsilon} \in K;\quad \varrho_{K}(g_{\varepsilon})\leq \|\sigma_{\varepsilon, 0}\|_{\widetilde{E}(0,T)} \leq c_1\|f_{\varepsilon}\|_{E}\leq c_1 (1+\varepsilon)^2\varrho_M(h)=c\varrho_M(h). $$ These estimates show that \[ M \overset{c}{\prec}K\quad \text{for all}\quad c>c_1, \quad\text{which implies}\quad c_0(M\prec K)\leq c_1. \] {\em Step 2}.\quad We will show that under the assumptions of Theorem \ref{theo-g-1.1}, Part 2, there is the covering \begin{equation}\label{g-ad2.31} K\overset{c}{\prec} M\quad \text{for all}\quad c>c_2^{-1},\quad\text{which implies}\quad c_0(K\prec M)\leq c_2^{-1}. \end{equation} Let $c=(1+\varepsilon)c_2^{-1},\, \varepsilon \in (0,1)$.
For $g\in K$ there exists $\sigma_{\varepsilon, 0} \in\widetilde{E}_0(0,T)$, such that \begin{align*} g(t)=& \int_0^T \Omega_\varphi(t,\tau)\sigma_{\varepsilon, 0}(\tau)\ensuremath{\,\mathrm{d}}\tau,\, t \in (0,T);\quad \|\sigma_{\varepsilon, 0}\|_{\widetilde{E}(0,T)}\leq (1+\varepsilon)\varrho_K(g). \end{align*} Now, let \begin{align*} \sigma_{\varepsilon}(t) = \begin{cases} \sigma_{\varepsilon,0}(t), & t \in (0,T), \\ 0, & t \geq T.\end{cases} \end{align*} Then, $$\|\sigma_{\varepsilon}\|_{\widetilde{E}(0,\infty)}=\|\sigma_{\varepsilon, 0}\|_{\widetilde{E}(0,T)}\leq (1+\varepsilon)\varrho_K(g).$$ According to \eqref{g-ad2.29} with $\sigma_{0}=\sigma_{\varepsilon, 0}$ we find $f_\varepsilon \in E(\ensuremath{\mathbb R}^{n})$ such that $f_{\varepsilon}^{\ast}\leq \sigma_{\varepsilon}$ and such that, for $u_{\varepsilon}=G\ast f_{\varepsilon}$, the estimate $\omega_{k}(u_{\varepsilon}, t^{1/n})\geq c_{2}g(t), \, t \in (0, T)$, holds. So we set $$h_{\varepsilon}(t):=\omega_{k}(c_2^{-1}u_{\varepsilon}, t^{1/n})=c_2^{-1}\omega_{k}(u_{\varepsilon}, t^{1/n})\geq g(t),\quad t \in (0,T).$$ Moreover, $h_{\varepsilon} \in M$, and \begin{align*} \varrho_M(h_{\varepsilon}) & \leq \left\| c_2^{-1} u_{\varepsilon}\right\|_{H^G_E} \leq \|c_2^{-1}f_{\varepsilon}\|_E = c_2^{-1}\|f_{\varepsilon}^\ast\|_{\widetilde{E}(0,\infty)}\\ & \leq c_2^{-1}\|\sigma_{\varepsilon}\|_{\widetilde{E}(0,\infty)} \leq (1+\varepsilon)c_2^{-1}\varrho_K(g). \end{align*} These estimates show that \[ K\overset{c}{\prec}M\quad\text{for all}\quad c>c_{2}^{-1}, \quad\text{which implies}\quad c_{0}(K\prec M)\leq c_{2}^{-1}. \] \end{proof} \begin{remark} \label{rem-g-1.2} For $\Omega_{\varphi}(t, \tau)$, see \eqref{g1.12}, $\varphi \in \widetilde{E}'(0,T)$, see \eqref{g1.5}, and $\sigma \in \widetilde{E}_0(0,T)$, the following assertions hold \begin{equation*} \Omega_{\varphi}(t, \tau)\sigma(\tau)\leq \varphi(\tau)\sigma(\tau) \in L_1(0,T);\quad \Omega_{\varphi}(t, \tau)\sigma(\tau)\rightarrow 0\,(t\rightarrow +0).
\end{equation*} Therefore, by Lebesgue's dominated convergence theorem, \eqref{g1.10} implies \begin{equation*} h \in K \quad\Rightarrow\quad h(t)\rightarrow 0 \quad (t\rightarrow +0). \end{equation*} Together with the covering $M\prec K,$ see \eqref{g1.13}, this leads to \begin{equation*} h \in M \quad\Rightarrow\quad h(t)\rightarrow 0 \quad (t\rightarrow +0). \end{equation*} This means that under the assumptions \eqref{g1.15}, \eqref{g1.16} every potential $u \in H^G_E(\ensuremath{\mathbb R}^{n})$ satisfies $\omega_k(u,t)\rightarrow 0$ as $t\rightarrow +0$, which yields $H^G_E(\ensuremath{\mathbb R}^{n})\hookrightarrow C(\ensuremath{\mathbb R}^{n})$. \begin{example}\label{exm-g-1.3} For the classical Bessel potentials the corresponding Bessel-McDonald kernels are determined by \eqref{g1.2} with \begin{equation} \Phi_\nu(x)=H_\nu(x),\quad x\in\ensuremath{\mathbb R}_+,\quad \nu =\frac{n-\alpha}{2},\quad 0<\alpha<n, \label{g1.18} \end{equation} where $H_\nu(x)=x^{-\nu} K_{\nu }(x)$, $x>0$, and $K_\nu$ is the modified Bessel function, \begin{equation} K_{\nu }\left(\varrho \right)=\frac12 \left(\frac{\varrho}{2}\right)^{\nu }\int\limits_0^\infty \xi^{-\nu -1} e^{-\xi - \varrho^2/4\xi } \ensuremath{\,\mathrm{d}}\xi, \label{1.13} \end{equation} cf. \cite{goldman-24,GoHa, nik}. From the well-known properties of these functions it follows easily that conditions \eqref{g1.15}--\eqref{g1.16} are satisfied and that \begin{equation}\label{g1.19} \Phi_\nu(y)\simeq \begin{cases} y^{-2\nu},& y\in (0,y_1],\\ y^{-\nu-\frac12} e^{-y}, & y>y_1,\end{cases} \end{equation} for some appropriate $y_1>0$ (in the sense of two-sided estimates with constants depending only on $\nu$ and $y_1$). Note that \eqref{g1.4} reads in this case as \begin{equation} \label{g1.20} \varphi(\tau) \simeq \tau^{\alpha/n-1},\quad \tau\in (0,T], \end{equation} which implies that \eqref{g1.7} is true.
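As a numerical sanity check (not in the original text) of the integral representation \eqref{1.13}, one can compare it with the elementary closed form $K_{1/2}(\varrho)=\sqrt{\pi/(2\varrho)}\,e^{-\varrho}$; the sketch below evaluates \eqref{1.13} by the trapezoid rule after the substitution $\xi=e^u$.

```python
import numpy as np

def K_nu(nu, rho, lo=-30.0, hi=6.0, m=40001):
    # (1.13): K_nu(rho) = (1/2)(rho/2)^nu * int_0^inf xi^{-nu-1} e^{-xi - rho^2/(4 xi)} d xi,
    # with xi = e^u, so that xi^{-nu-1} d xi = xi^{-nu} du
    u = np.linspace(lo, hi, m)
    xi = np.exp(u)
    vals = xi ** (-nu) * np.exp(-xi - rho ** 2 / (4.0 * xi))
    integral = np.sum(0.5 * (vals[1:] + vals[:-1])) * (u[1] - u[0])
    return 0.5 * (rho / 2.0) ** nu * integral

rho = 1.3
exact = np.sqrt(np.pi / (2.0 * rho)) * np.exp(-rho)   # K_{1/2} in closed form
assert abs(K_nu(0.5, rho) - exact) < 1e-8
```

The same routine also exhibits the monotone decay of the kernel in $\varrho$, consistent with $\Phi_\nu$ being decreasing.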
In order to apply Theorem~\ref{theo-g-1.1} we need to verify \eqref{g1.5}; in this case we find that \[ \eqref{g1.5}\quad\text{holds if, and only if,}\quad \tau^{\alpha/n-1}\in \widetilde{E}'(0,T),\quad T\in\ensuremath{\mathbb R}_+. \] Recall that $\widetilde{E}'(0,T)$ is the restriction of $\widetilde{E}'(\ensuremath{\mathbb R}_+)$ to $(0,T)$. \end{example} \begin{example}\label{exm-g-1.4} Let $\Phi\in C^k(\ensuremath{\mathbb R}_+)$ satisfy assumptions \eqref{g1.3}--\eqref{g1.5} and \eqref{g1.16} for some $z_1\in\ensuremath{\mathbb R}_+$. Assume $T=V_n z_1^n$, and \begin{equation} \label{g1.21} \Phi(z)=z^{\alpha-n} \Lambda (z),\quad 0<\alpha<n,\quad z\in (0,z_1]. \end{equation} Here $\Lambda \in C^k(0,z_1]$ is a positive function, with \begin{equation} \label{g1.22} z^j \Lambda^{(j)}(z)=\varepsilon_j(z)\Lambda(z), \quad \text{with}\quad \varepsilon_j(z)\xrightarrow[z\to 0+]{} 0,\quad j=1, \dots, k. \end{equation} Then $\Lambda$ is a slowly varying function on $(0,z_1]$, i.e., for all $\gamma>0$, \begin{equation}\begin{split} \label{g1.23} z^\gamma \Lambda(z)\quad \text{is monotonically increasing,}\\ \quad z^{-\gamma} \Lambda (z)\quad\text{is monotonically decreasing}.\end{split} \end{equation} We further conclude that \begin{equation} \label{g1.24} \varphi(\tau)=\tau^{\alpha/n-1} \lambda(\tau),\quad \lambda(\tau)=\Lambda\left(\left(\frac{\tau}{V_n}\right)^{1/n}\right),\quad \tau\in (0,T]. \end{equation} The function $\lambda$ is slowly varying on $(0,T]$ as well, and \eqref{g1.7} is satisfied. One verifies that $\Phi$ satisfies conditions \eqref{g1.15} and \eqref{g1.17}, so that Theorem~\ref{theo-g-1.1} is applicable whenever \eqref{g1.5} is true, where $\varphi$ is given by \eqref{g1.24}.
\begin{proof} We apply the Leibniz formula to $\Phi$ given by \eqref{g1.21} and get \begin{align*} \Phi^{(k)}(z)= &\ (\alpha-n)\cdots (\alpha-n-(k-1)) z^{\alpha-n-k}\Lambda(z) \\ & + \sum_{j=1}^k C_{k,j} (\alpha-n)\cdots (\alpha-n-(k-j-1)) z^{\alpha-n-(k-j)}\Lambda^{(j)}(z), \end{align*} where $C_{k,j}$ are the binomial coefficients. Therefore, by \eqref{g1.22}, \begin{align*} z^k\Phi^{(k)}(z)= &\ \Phi(z)\Big[ (-1)^k (n+k-1-\alpha)\cdots(n-\alpha)\\ &+ \sum_{j=1}^k C_{k,j} (\alpha-n)\cdots (\alpha-n-(k-j-1)) \varepsilon_j(z)\Big]. \end{align*} This assertion implies \eqref{g1.17} for sufficiently small $z_1>0$ since $\varepsilon _j(z)\to 0$ for $z\to 0+$, $j=1, \dots, k$. Note that \begin{equation}\label{g5.6} \left(z^{-1} \frac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}} z}\right)^m z^{\alpha-n} = (\alpha-n)\cdots (\alpha-n-2(m-1)) z^{\alpha-n-2m}. \end{equation} Furthermore, condition \eqref{g1.22} implies for $m=1, \dots, k$, \begin{equation}\label{g5.7} \left(z^{-1} \frac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}} z}\right)^m \Lambda(z)=\delta_m(z) z^{-2m} \Lambda(z), \end{equation} where $\delta_m(z)\to 0$ for $z\to 0+$. 
An analogue of Leibniz' rule together with \eqref{g5.6}, \eqref{g5.7} gives \begin{align*} & \left(z^{-1}\frac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}} z}\right)^l \Phi(z) \\ & = \ \sum_{j=0}^l C_{l,j}\left[\left(z^{-1}\frac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}} z}\right)^{l-j} z^{\alpha-n}\right] \left[\left(z^{-1} \frac{\ensuremath{\,\mathrm{d}}}{\ensuremath{\,\mathrm{d}} z}\right)^j \Lambda(z)\right] \\ & = \ (\alpha-n)\cdots (\alpha-n-2(l-1))z^{\alpha-n-2l}\Lambda(z) \\ & \quad~ + \sum_{j=1}^l C_{l,j} (\alpha-n)\cdots (\alpha-n-2(l-j-1))z^{\alpha-n-2(l-j)}\delta_j(z)z^{-2j}\Lambda(z)\\ & = \ \Phi(z) z^{-2l} F(z), \end{align*} where $$F(z)=(\alpha-n)\cdots (\alpha-n-2(l-1))+ \sum_{j=1}^l C_{l,j} (\alpha-n)\cdots (\alpha-n-2(l-j-1))\delta_j(z).$$ Since the term $F(z)$ is bounded on $(0,z_1]$, we obtain \eqref{g1.15}. Thus Theorem~\ref{theo-g-1.1} can be applied. Finally $\varphi$ is determined by \eqref{g1.24} with $0<\alpha<n$, and $\lambda$ being slowly varying on $(0,T]$, and we conclude \begin{equation} \label{g5.8} \int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}} \tau = \int_0^t \lambda(\tau)\tau^{\alpha/n-1} \ensuremath{\,\mathrm{d}} \tau \simeq \lambda(t) t^{\alpha/n} = \varphi(t) t, \end{equation} where the involved constants do not depend on $t\in (0,T)$, see also Remark~\ref{rem-g-5.1} below. Thus condition \eqref{g1.7} is satisfied and we have the equivalence that $ H^G_E(\ensuremath{\mathbb R}^{n})\hookrightarrow C(\ensuremath{\mathbb R}^{n})$ holds if, and only if, \[ \tau^{\alpha/n-1} \lambda(\tau) \in \widetilde{E}'(0,T). 
\] \end{proof} \end{example} \begin{remark}\label{rem-g-5.1} In \eqref{g5.8} we used some well-known properties of slowly varying functions $\lambda$, which are positive on $(0,T)$: for any $\gamma>0$, \begin{align} \label{g5.9} \int_0^t \tau^{\gamma-1} \lambda(\tau)\ensuremath{\,\mathrm{d}}\tau & \simeq t^\gamma \lambda(t),\\ \label{g5.10} \int_t^T \tau^{-\gamma-1} \lambda(\tau)\ensuremath{\,\mathrm{d}}\tau &\leq c_\gamma t^{-\gamma} \lambda(t),\\ \label{g5.11} \lambda(t) & = {\mathbf o}\left(\int_t^T \tau^{-1} \lambda(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\quad\text{for}\quad t\to 0+. \end{align} \end{remark} Now we describe an important characteristic of the smoothness of functions in $H^{G}_{E}$: the uniform majorant for the moduli of continuity, $\Omega^{k}_{EG}(t^{1/n})$, $t \in (0, T)$, namely, \begin{equation} \label{g-ad2.47} \Omega^{k}_{EG}(t^{1/n})=\sup \left\lbrace \omega_k(u, t^{1/n}):\, u \in H^{G}_{E}(\ensuremath{\mathbb R}^{n});\, \|u\|_{H^{G}_{E}}\leq 1\right\rbrace. \end{equation} In the case $k=1$ this is a variant of the continuity envelope function studied in \cite{Ha-crc,T-func} in general, and in \cite{GoHa} for Bessel potentials. \begin{theorem} \label{theo-g-ad-2.11} Let $\Phi$ be a standard function satisfying \eqref{g1.3}, and let $\varphi$, given by \eqref{g1.4}, satisfy \eqref{g1.5}. \bli \item[{\bfseries\upshape 1.}] Under the assumptions of Theorem \ref{theo-g-1.1}, Part 1, the following estimate holds: \begin{equation} \Omega^{k}_{EG}(t^{1/n})\leq c_{1}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)},\quad t \in (0, T). \label{g-ad2.48} \end{equation} \item[{\bfseries\upshape 2.}] Under the assumptions of Theorem \ref{theo-g-1.1}, Part 2, we have both estimates, \eqref{g-ad2.48} and \begin{equation} \Omega^{k}_{EG}(t^{1/n})\geq c_{2}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)},\quad t \in (0, T). \label{g-ad2.49} \end{equation} The constants $c_{1}, c_{2}$ are the same as in Theorem \ref{theo-g-1.1}.
\end{list} \end{theorem} \begin{proof} {\em Step 1}.\quad Let $u \in H^{G}_{E}(\ensuremath{\mathbb R}^{n})$. For any $f\in {E}(\ensuremath{\mathbb R}^{n})$ such that $G\ast f=u$ we have the estimate, see \eqref{g-ad2.28}, \begin{align*} \omega_k(u, t^{1/n})\leq c_{1}\int_{0}^{T}\Omega_{\varphi}(t,\tau)f^{\ast}(\tau)\ensuremath{\,\mathrm{d}} \tau & \leq c_{1}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}\|f^{\ast}\|_{\widetilde{E}(0, T)} \\ & \leq c_{1}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}\|f\|_{E(\ensuremath{\mathbb R}^{n})}. \end{align*} Therefore, \begin{equation*} \omega_k(u, t^{1/n})\leq c_{1}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}\|u\|_{H^G_E(\ensuremath{\mathbb R}^{n})}, \quad\forall u \in H^G_E(\ensuremath{\mathbb R}^{n}). \end{equation*} This yields \eqref{g-ad2.48}.\\ {\em Step 2}.\quad Note that $\Omega_{\varphi}(t, \tau)\geq 0$ decreases as a function of $\tau \in (0,T),$ so that we have by the well-known formula for an associated norm in the RIS $\widetilde{E}(0, T)$ \begin{equation*} \|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}=\sup \left\lbrace \int_0^T \Omega_{\varphi}(t, \tau)\sigma(\tau)\ensuremath{\,\mathrm{d}} \tau:\, \sigma \in \widetilde{E}_0(0, T);\, \|\sigma\|_{\widetilde{E}(0, T)}\leq 1\right\rbrace. \end{equation*} Thus, for every $\varepsilon \in (0,1)$ there exists $\sigma_{0, \varepsilon} \in \widetilde{E}_0(0, T)$ such that $\|\sigma_{0, \varepsilon}\|_{\widetilde{E}(0, T)}\leq 1$, and \begin{equation*} \int_0^T \Omega_{\varphi}(t, \tau)\sigma_{0, \varepsilon}(\tau)\ensuremath{\,\mathrm{d}} \tau \geq (1-\varepsilon)\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}. \end{equation*} Let $\sigma_{\varepsilon}\in \widetilde{E}_0(\ensuremath{\mathbb R}_+)$ be the extension of $\sigma_{0, \varepsilon}$ by zero from $(0, T)$ onto $\ensuremath{\mathbb R}_+$. 
Then, according to \eqref{g-ad2.29}, there exists $f_{\varepsilon} \in E(\ensuremath{\mathbb R}^{n})$ such that $f_{\varepsilon}^{\ast}\leq \sigma_{\varepsilon},\quad G\ast f_{\varepsilon}=u_{\varepsilon} \in H^{G}_{E}(\ensuremath{\mathbb R}^{n})$, \begin{equation} \label{g-ad2.50} \omega_k(u_{\varepsilon}, t^{1/n})\geq c_{2}\int_{0}^{T}\Omega_{\varphi}(t,\tau)\sigma_{0,\varepsilon}(\tau)\ensuremath{\,\mathrm{d}} \tau \geq (1-\varepsilon) c_{2}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}. \end{equation} Moreover, \begin{equation} \label{g-ad2.51} \|u_{\varepsilon}\|_{H^{G}_{E}(\ensuremath{\mathbb R}^{n})}\leq \|f_{\varepsilon}\|_{E(\ensuremath{\mathbb R}^{n})}=\|f_{\varepsilon}^{\ast}\|_{\widetilde{E}(\ensuremath{\mathbb R}_+)}\leq \|\sigma_{\varepsilon}\|_{\widetilde{E}(\ensuremath{\mathbb R}_+)}=\|\sigma_{0,\varepsilon}\|_{\widetilde{E}(0, T)}\leq 1. \end{equation} Consequently, by \eqref{g-ad2.47}, \eqref{g-ad2.50}, and \eqref{g-ad2.51}, \begin{equation*} \Omega_{EG}^{k}(t^{1/n})\geq (1-\varepsilon) c_{2}\|\Omega_{\varphi}(t, \cdot)\|_{\widetilde{E}'(0, T)}. \end{equation*} This leads to \eqref{g-ad2.49} by passing to the limit for $\varepsilon \rightarrow 0.$ \end{proof} \section{Embeddings into Calder\'on spaces} \label{sect-2} \subsection{Generalized Banach function spaces} \label{sect-2-1} We need some generalization of the notion of Banach function spaces (BFS), see Definition~\ref{bfs-ris}. Let $\mu$ denote the Lebesgue measure on $(0,T)$, $T\in (0,\infty]$. \begin{definition}\label{gbfs-def} A linear space $X=X(0,T)$ of measurable functions on $(0,T)$, equipped with the norm $\|\cdot\|_X$, is called a {\em generalized Banach function space}, shortly: {\em GBFS}, if the following conditions are satisfied: \begin{itemize} \item[{\bfseries\upshape (P1)}] $\|f\|_X=0 \iff f=0\ $ $\mu$-a.e.
on $(0,T)$; \item[{\bfseries\upshape (P2)}] $|f|\leq g,\ g\in X \quad\text{implies}\quad f\in X, \ \|f\|_X \leq \|g\|_X$; \item[{\bfseries\upshape (P3)}] If $f_n\in X$, $f_n\geq 0$, $n\in\ensuremath{\mathbb N}$, increase monotonically to $f$, then $\|f_n\|_X \to \|f\|_X$ for $n\to\infty$. \item[{\bfseries\upshape (P4)}] For every measurable set $B\subset (0,T)$ with $\mu(B)>0$, there exists some $h_B>0$ $\mu$-a.e. in $B$, and some $c_B>0$, such that for all $f\in X$, \[ \int_B h_B |f| \ensuremath{\,\mathrm{d}} \mu \leq c_B \|f\|_X. \] \item[{\bfseries\upshape (P5)}] For every measurable set $B\subset (0,T)$ with $\mu(B)>0$, there exists some $f_B\in X$ such that $f_B>0$ $\mu$-a.e. in $B$. \end{itemize} \end{definition} \begin{remark} Note that in \cite{BS} for a BFS it is required that $h_B=f_B=\chi_B$ in (P4), (P5), respectively, where $\chi_B$ is the characteristic function of $B$. Note that, if $X=X(0,T)$ is a BFS and $\nu$ is a $\mu$-measurable function with $0<\nu<\infty$ $\mu$-a.e. in $(0,T)$, then \[ X_\nu = \left\{f: f \nu\in X, \ \|f\|_{X_\nu} = \|f\nu\|_X\right\} \] is a GBFS. Moreover, a GBFS is in fact a Banach space, and so is its associated space. The duality principle holds for GBFS (the second associated space coincides with the initial space, cf. \cite{BGZ}). \end{remark} Let $K=K(0,T)$ be some cone of non-negative $\mu$-measurable functions on $(0,T)$ equipped with the positively homogeneous functional $\varrho_K$. Recall that for the GBFS $X=X(0,T)$ the embedding $ K \mapsto X$ means that $K\subset X$ and \begin{equation} \label{g2.1} \exists\ c=c_K>0: \quad \|h\|_X \leq c_K\varrho_K(h)\quad\text{for all}\ h\in K. \end{equation} \begin{definition}\label{def-g-2.2} A GBFS $X_0=X_0(0,T)$ is called {\em optimal} for the embedding $K\mapsto X$, if \bli \item[{\bfseries\upshape (i)}] $K\mapsto X_0$, \item[{\bfseries\upshape (ii)}] $K\mapsto Y$ for a GBFS $Y$ implies $X_0\subset Y$.
\end{list} \end{definition} \begin{remark}\label{rem-g-2.3} Let $K$ and $M$ be some cones of non-negative $\mu$-measurable functions on $(0,T)$ equipped with the functionals $\varrho_K$ and $\varrho_M$. If $K \approx M$, then \bli \item[{\upshape (1)}] for every GBFS $X=X(0,T)$ we have $K\mapsto X$ if, and only if, $M\mapsto X$, and the ratio of the constants $c_K/c_M$ in \eqref{g2.1} depends on the constants of the mutual coverings of the cones only, see \eqref{g1.13}, \eqref{g1.14}, \item[{\upshape (2)}] a GBFS $X_0=X_0(0,T)$ is optimal for both embeddings $K\mapsto X$ and $M\mapsto X$. \end{list} This can be seen as follows. Let us show that $M\prec K$ and $K\mapsto X$ imply $M\mapsto X$. For every $h_1\in M$ we can find $h_2\in K$ such that \[h_1\leq h_2\quad\text{on}\quad (0,T)\quad\text{and}\quad \varrho_K(h_2)\leq c_0\varrho_M(h_1).\] Now $K\mapsto X$ implies $h_2\in X$ and, by property (P2) of Definition~\ref{gbfs-def}, $h_1\in X$ with $\|h_1\|_X\leq \|h_2\|_X\leq c_K\varrho_K(h_2)$. This finally leads to \[ \|h_1\|_X\leq c_K c_0\varrho_M(h_1)\quad\text{for all}\quad h_1\in M. \] But this is exactly $M\mapsto X$. Consequently, the equivalence $M \approx K$ implies the equivalence \[ K\mapsto X \iff M\mapsto X. \] Thus the same GBFS $X_0=X_0(0,T)$ is optimal for both embeddings $K\mapsto X$ and $M\mapsto X$. \end{remark} Let $X=X(0,T)$ be a GBFS and $k\in\ensuremath{\mathbb N}$. We introduce the Calder\'on space $\Lambda^k(C,X)$ (see for example \cite{goldman-8}; a more special version was considered in \cite{GNO-7}) as follows: \begin{align} \Lambda^k(C;X) = & \left\{u\in C(\ensuremath{\mathbb R}^{n}): \ \omega_k(u; t^{1/n}) \in X(0,T)\right\}, \label{g2.2}\\ & \|u\|_{\Lambda^k(C,X)} = \|u\|_C + \|\omega_k(u; t^{1/n})\|_{X(0,T)}.
\label{g2.3} \end{align} The following non-trivial conditions hold: \begin{align} \label{g2.4} \Lambda^k(C;X) \neq \{0\} & \iff \left\|t^{k/n}\right\|_{X(0,T)} < \infty, \\ \label{g2.5} \Lambda^k(C;X) \neq C(\ensuremath{\mathbb R}^{n}) & \iff \left\| 1 \right\|_{X(0,T)} = \infty. \end{align} Moreover, it is obvious that \begin{equation} \label{g2.6} X_0(0,T)\hookrightarrow X(0,T) \quad\text{implies}\quad \Lambda^k(C;X_0) \hookrightarrow \Lambda^k(C;X). \end{equation} Now we are able to formulate a criterion for the embedding $H^G_E(\ensuremath{\mathbb R}^{n}) \hookrightarrow \Lambda^k(C;X)$. \begin{theorem} \label{theo-g-2.4} Let the conditions of Theorem~\ref{theo-g-1.1} be satisfied, and let $K$ be the cone given by \eqref{g1.10} with \eqref{g1.12}. Then \begin{equation} \label{g2.7} H^G_E(\ensuremath{\mathbb R}^{n}) \hookrightarrow \Lambda^k(C;X), \end{equation} if, and only if, \begin{equation} \label{g2.8} K\mapsto X. \end{equation} The norm of the embedding operator in \eqref{g2.7} depends only on $k$, $n$, $T$, $a_1$, $a_2$, $z_1$, $\delta_1$ and on the norms of the embedding operators in \eqref{g1.7} and \eqref{g2.1}. \end{theorem} \begin{proof} First we show that under the condition \eqref{g1.5} we have the equivalence \begin{equation} \eqref{g2.7} \iff M\mapsto X(0,T), \label{g6.1} \end{equation} where $M$ is the cone in \eqref{g1.8}. Indeed, in this case \[ \|u\|_C\leq c_1 \|u\|_{H^G_E}\quad\text{for all}\quad u\in H^G_E(\ensuremath{\mathbb R}^{n}), \] and \eqref{g2.7} is equivalent to \[ \|\omega_k(u;t^{1/n})\|_{X(0,T)} \leq c_2\ \|u\|_{H^G_E}, \quad u\in H^G_E(\ensuremath{\mathbb R}^{n}). \] This means that for $h\in M$, \[ \|h\|_{X(0,T)} \leq c_2\|u\|_{H^G_E}, \] for every $u\in H^G_E(\ensuremath{\mathbb R}^{n})$ such that $\omega_k(u;t^{1/n}) = h(t)$, $t\in (0,T)$. 
Therefore, for $h\in M$, \[ \|h\|_{X(0,T)}\leq c_2\inf\left\{\|u\|_{H^G_E}: u\in H^G_E(\ensuremath{\mathbb R}^{n}), \ \omega_k(u;t^{1/n})=h(t)\right\}, \] that is, \[ \|h\|_{X(0,T)} \leq c_2 \varrho_M(h),\quad h\in M. \] This is equivalent to the embedding $M\mapsto X(0,T)$. Now the equivalence of \eqref{g2.7} and \eqref{g2.8} follows from \eqref{g6.1}, \eqref{g1.14} due to Remark~\ref{rem-g-2.3}. \end{proof} \begin{corollary}\label{cor-g-2.5} Let $X_0=X_0(0,T)$ be an optimal GBFS for the embedding \eqref{g2.8}, where $K$ is again the cone given by \eqref{g1.10}--\eqref{g1.12}. Then $\Lambda^k(C;X_0)$ is an optimal Calder\'on space for the embedding \eqref{g2.7}, that is, \begin{equation} \begin{split} H^G_E(\ensuremath{\mathbb R}^{n}) \hookrightarrow \Lambda^k(C;X_0),\quad \text{and}\\ \eqref{g2.7} \quad\text{implies}\quad \Lambda^k(C;X_0)\hookrightarrow \Lambda^k(C;X). \end{split} \label{g2.9} \end{equation} \end{corollary} \begin{proof} For the GBFS $X_0=X_0(0,T)$ we have \begin{equation} \label{g6.2} K\mapsto X_0 \implies H^G_E(\ensuremath{\mathbb R}^{n}) \hookrightarrow \Lambda^k(C; X_0), \end{equation} by Theorem~\ref{theo-g-2.4}. Assume that the embedding \eqref{g2.7} is true. Then $K\mapsto X$ implies $X_0\hookrightarrow X$ by \eqref{g2.8}, and by the definition of the optimal GBFS for the embedding $K\mapsto X$. Now we apply \eqref{g2.6} and obtain both the assertions in \eqref{g2.9}. \end{proof} \section{The description of the optimal Calder\'on space, I} \label{sect-3} We shall exemplify the results of Sections~\ref{sect-1} and \ref{sect-2} for the case when the basic RIS $E(\ensuremath{\mathbb R}^{n})$ coincides with a weighted Lorentz space, $E(\ensuremath{\mathbb R}^{n})=\Lambda_q(v)$, $1\leq q<\infty$, where the weight $v>0$ is a measurable function, recall Example~\ref{ex-Lorentz}, in particular \eqref{1.5}. 
Then $\Lambda_q(v)$, $1\leq q<\infty$, is equipped with the functional \begin{equation} \|f\|_{\Lambda_q(v)} =\left(\int _0^\infty f^\ast\left(t\right)^{q}v\left(t\right) \ensuremath{\,\mathrm{d}} t\right)^{1/q},\quad 1\le q<\infty. \label{g3.1} \end{equation} General properties of Lorentz spaces can be found, for instance, in \cite{C-S,C-P-S-S}. Recall that \begin{equation} \label{g3.2} \Lambda_q(v)\neq \{0\} \iff V(t)=\int_0^t v(\tau)\ensuremath{\,\mathrm{d}}\tau <\infty,\quad t>0. \end{equation} Expression \eqref{g3.1} is equivalent to some norm if $q=1$ and $t^{-1}V(t)$ almost decreases, or, in case $q>1$, if there exists some $c>0$, such that \begin{equation} \label{g3.3} t^q \int_t^\infty \tau^{-q}v(\tau)\ensuremath{\,\mathrm{d}} \tau \leq c V(t),\quad t>0. \end{equation} We shall assume in the sequel that these conditions are satisfied. We further need the following notation, where $T>0$ and $t\in (0,T]$, \begin{align} W(t) = & V(t)^{-1}\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau, \label{g3.4} \\ \Psi_q(t) = & \begin{cases} \sup_{\tau\in (0,t]} W(\tau), & q=1, \\ \left(\int_0^t W^{q'}(\tau) v(\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'}, & 1<q<\infty, \end{cases} \label{g3.5} \end{align} where $q'$ is defined as usual, $\frac1q+\frac{1}{q'}=1$, $1<q<\infty$. \begin{lemma} \label{lemma-g-3.1} Let $T>0$, $1\leq q<\infty$. Using the above notation, \begin{equation} \eqref{g1.5} \quad\text{is true\quad if, and only if,}\quad \Psi_q(T)<\infty. \label{g3.7} \end{equation} \end{lemma} \begin{proof} Let us consider the case of basic RIS \begin{equation} \label{g7.1} E(\ensuremath{\mathbb R}^{n})=\Lambda_q(v),\quad 1\leq q<\infty, \end{equation} see \eqref{g3.1}, \eqref{g3.2}. We define for $t>0$, $1<q<\infty$, \begin{equation} w(t)=V(t)^{-q'} v(t). \label{g7.2} \end{equation} Note that \begin{equation} \label{g7.3} \int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t = \frac{1}{q'-1}\left(V(T)^{1-q'}-\lim_{t\to\infty}V(t)^{1-q'}\right). 
\end{equation} We use the well-known description of the associated RIS for Lorentz spaces \eqref{g7.1} in the Luxemburg representation: \[ \|\varphi_0\|_{\widetilde{E}'(\ensuremath{\mathbb R}_+)} = \begin{cases} \sup_{\tau\in\ensuremath{\mathbb R}_+} \frac{1}{V(\tau)} \int_0^\tau \varphi_0^*(s)\ensuremath{\,\mathrm{d}} s,& q=1,\\ \left(\int_0^\infty \left(\int_0^\tau \varphi_0^*(s)\ensuremath{\,\mathrm{d}} s\right)^{q'} w(\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'},& q>1,\end{cases} \] where $\varphi_0^\ast$ is the decreasing rearrangement of the function $\varphi_0:\ensuremath{\mathbb R}_+\to[0,\infty]$. For $\varphi$ given by \eqref{g1.4} we set \[ \varphi_0(\tau)=\begin{cases} \varphi(\tau), &\tau\in (0,T),\\ 0,& \tau\geq T.\end{cases} \] Since $\varphi\geq 0$ is monotonically decreasing and right-continuous, we conclude \[\varphi_0^\ast(s) = \varphi(s)\chi_{(0,T)}(s), \] and thus \[ \int_0^\tau \varphi_0^\ast (s)\ensuremath{\,\mathrm{d}} s = \left(\int_0^\tau\varphi(s)\ensuremath{\,\mathrm{d}} s\right)\chi_{(0,T)}(\tau) + \left(\int_0^T\varphi(s)\ensuremath{\,\mathrm{d}} s\right)\chi_{[T,\infty)}(\tau). \] In view of $\ \|\varphi\|_{\wt{E}'(0,T)} = \|\varphi_0\|_{\wt{E}'(\ensuremath{\mathbb R}_+)}\ $ this leads to \[ \|\varphi\|_{\wt{E}'(0,T)} = \max\left\{\sup_{\tau\in (0,T)} W(\tau), \left(\int_0^T \varphi(s)\ensuremath{\,\mathrm{d}} s\right) \sup_{\tau\geq T} V(\tau)^{-1}\right\} = \Psi_1(T) \] in case of $q=1$, see \eqref{g3.4} and \eqref{g3.5}. 
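The rearrangement step used above, namely that extending a nonnegative decreasing $\varphi$ by zero yields $\varphi_0^\ast=\varphi\chi_{(0,T)}$, can be illustrated by a discrete analogue; the grid size and the sample $\varphi(s)=s^{-1/2}$ below are illustrative choices, not taken from the text.

```python
# Discrete analogue of the rearrangement step: sampling a nonnegative
# decreasing phi on (0, T) and extending by zero beyond T, the decreasing
# rearrangement returns the original samples followed by the zeros.
# T, N and phi(s) = s^{-1/2} are illustrative choices.
import numpy as np

T, N = 1.0, 1000
s = (np.arange(N) + 0.5) * (T / N)          # midpoint grid on (0, T)
phi = s**(-0.5)                              # decreasing, integrable sample
phi0 = np.concatenate([phi, np.zeros(N)])    # extension by zero beyond T

rearranged = np.sort(phi0)[::-1]             # discrete decreasing rearrangement

assert np.allclose(rearranged[:N], phi)      # phi_0^* = phi on (0, T)
assert np.allclose(rearranged[N:], 0.0)      # and vanishes beyond T
```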
In case of $q>1$, using that $q'=\frac{q}{q-1}$, we conclude from \eqref{g3.4}, \eqref{g3.5}, \eqref{g7.2} and \eqref{g7.3} that \begin{align*} \|\varphi\|_{\wt{E}'(0,T)} \simeq & \left( \int_0^T W(\tau)^{q'} v(\tau)\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'} + \left(\int_0^T \varphi(s)\ensuremath{\,\mathrm{d}} s\right) \left(\int_T^\infty w(\tau)\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'} \\ = & \ \Psi_q(T) + \left(\int_0^T \varphi(s)\ensuremath{\,\mathrm{d}} s\right) \frac{1}{(q'-1)^{1/q'}}\left(V(T)^{1-q'}-\lim_{t\to\infty}V(t)^{1-q'}\right)^{1/q'}. \end{align*} Thus the equivalence \eqref{g3.7} is shown, as the second term is finite in view of \eqref{g1.3}, \eqref{g1.4}. \end{proof} In view of the description of the optimal Calder\'on spaces we introduce two alternative collections of the conditions for $\varphi$ and $v$. For that reason we complement our above notation \eqref{g3.4}, \eqref{g3.5} as follows: \begin{align} \widetilde{W}(t) = & V(t)^{-1} t^{1-\frac{k}{n}} \varphi(t), \label{g3.12}\\ U_q(t) = & \begin{cases} \sup_{\tau\in [t,T]} \widetilde{W}(\tau), & q=1, \\ \left(\int_t^T \widetilde{W}^{q'}(\tau) v(\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'}, & 1<q<\infty, \end{cases} \label{g3.13} \end{align} where again $t\in (0,T]$ is assumed. Note that $\widetilde{W}$ is a continuous bounded function on $[t,T]$ for any $t\in (0,T]$, such that the expressions in \eqref{g3.13} are well-defined. Now we can formulate the alternative assumptions. \bli \item[{\bfseries\upshape (A)}] There exists a constant $d_1>0$ such that for every $t\in (0,T ]$, \begin{equation} \label{g3.8} \int_t^T \tau^{-\frac{k}{n}} \varphi(\tau)\ensuremath{\,\mathrm{d}} \tau \leq d_1 t^{-\frac{k}{n}} \int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau, \end{equation} and, in addition, \begin{equation} \label{g3.9} \exists\ \varepsilon>0: \ t^\varepsilon V(t)^{-1}\quad \text{is monotonically decreasing for}\quad t\in (0,T]. 
\end{equation} \item[{\bfseries\upshape (B)}] There exists a constant $d_2>0$ such that for every $t\in (0,T]$ \begin{equation} \label{g3.10} \int_0^t \tau^{-\frac{k}{n}} \varphi(\tau)\ensuremath{\,\mathrm{d}} \tau \leq d_2 t^{1-\frac{k}{n}} \varphi(t), \end{equation} and, in addition, \begin{equation} \label{g3.11} \exists\ \varepsilon>0: \ t^\varepsilon U_q(t)\quad \text{is monotonically decreasing for}\quad t\in (0,T]. \end{equation} \end{list} \begin{remark}\label{rem-g-3.2} Let $\lambda>0$ be slowly varying on $(0,T]$, \begin{equation} \label{g3.15} 0<\alpha<n,\quad \varphi(t)=t^{\frac{\alpha}{n}-1} \lambda(t),\quad t\in (0,T], \end{equation} similar to \eqref{g1.24}. Recall that $\lambda\equiv 1$ corresponds to the classical Bessel potentials. Then \eqref{g3.8} holds if, and only if, $\alpha<k$, whereas \eqref{g3.10} holds, if, and only if, $\alpha>k$. \\ This can be seen as follows. Let $\varphi$ be given by \eqref{g3.15} and denote \begin{align} \label{g7.4} A(t)= & \int_t^T \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = \int_t^T \tau^{\frac{\alpha-k}{n}-1} \lambda(\tau)\ensuremath{\,\mathrm{d}}\tau,\\ B(t)= & \ t^{-k/n} \int_0^t\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = t^{-k/n} \int_0^t \tau^{\alpha/n-1} \lambda(\tau)\ensuremath{\,\mathrm{d}}\tau. \label{g7.5} \end{align} According to \eqref{g5.9} for $\alpha>0$, \begin{equation} \label{g7.6} B(t)\simeq t^{\frac{\alpha-k}{n}} \lambda(t),\quad t\in (0,T), \end{equation} and by \eqref{g5.10} for $0<\alpha<k$, \[ A(t)\leq c\ t^{\frac{\alpha-k}{n}}\lambda(t) \simeq B(t),\quad t\in (0,T), \] such that \eqref{g3.8} follows. If $\alpha=k$, then \eqref{g7.4}, \eqref{g7.6} and \eqref{g5.11} show that \[ B(t) \simeq \lambda(t) \simeq {\mathbf o}\left(A(t)\right),\quad t\to 0+, \] such that \eqref{g3.8} fails. 
If $\alpha>k$, then \eqref{g7.6} and \eqref{g5.9} show that \[ B(0+)=0,\quad A(0+) = \int_0^T \tau^{\frac{\alpha-k}{n}-1}\lambda(\tau)\ensuremath{\,\mathrm{d}}\tau \simeq T^{\frac{\alpha-k}{n}}\lambda(T)>0, \] so that \eqref{g3.8} fails. Therefore \eqref{g3.8} holds if, and only if, $\alpha<k$. Next we show that \eqref{g3.10} holds if, and only if, $\alpha>k$. For a function $\varphi$ given by \eqref{g3.15} we denote by \[ C(t)=\int_0^t \tau^{-\frac{k}{n}}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = \int_0^t \tau^{\frac{\alpha-k}{n}-1} \lambda(\tau)\ensuremath{\,\mathrm{d}}\tau. \] If $\alpha<k$, then $C(t)=\infty$, $t\in (0,T)$, for every slowly varying function $\lambda>0$. Thus \eqref{g3.10} fails. If $\alpha=k$, then \[ t^{1-\frac{k}{n}} \varphi(t)= \lambda(t)={\mathbf o}\left(\int_0^t \lambda(\tau) \frac{\ensuremath{\,\mathrm{d}}\tau}{\tau}\right) \quad \text{for}\quad t\to 0+ \] for every slowly varying function $\lambda>0$. Hence \eqref{g3.10} fails as well. Finally, in the remaining case $\alpha>k$, we have by \eqref{g5.9} that \[ C(t)\simeq t^{\frac{\alpha-k}{n}} \lambda(t) = t^{1-\frac{k}{n}} \varphi(t), \quad t\in (0,T), \] and \eqref{g3.10} holds. \end{remark} Now we present one of our main results. Its proof however has to be postponed to Section~\ref{sect-3b} as we shall need some detailed preparation. But here we want to formulate the corresponding result first and collect some further consequences and examples below. We introduce the following notation, \begin{equation} \label{g3.16} \left\|f\right\|_{\mathring{X}_0} = \left(\int_0^T \left(\frac{\|f\|_{L_\infty(0,t)}}{\Psi_q(t)}\right)^q \frac{\ensuremath{\,\mathrm{d}} \Psi_q(t)}{\Psi_q(t)}\right)^{1/q}, \end{equation} with $\Psi_q$ defined by \eqref{g3.5}. Let $T_1\in (0,T)$ be such that $\Psi_q(T_1)=\frac12 \Psi_q(T)$. \begin{theorem} \label{theo-g-3.3} Let $1\leq q<\infty$, $T>0$, and assume that $\Psi_q(T)<\infty$, where $\Psi_q$ is given by \eqref{g3.5}. 
Let the cone $K$ be given by \eqref{g1.10} with \eqref{g1.12}. Assume that at least one of the above conditions {\upshape\bfseries (A)} or {\upshape\bfseries (B)}, given by \eqref{g3.8}--\eqref{g3.11}, is satisfied. Then the optimal GBFS $X_0=X_0(0,T)$ for the embedding $K\mapsto X$ has the following norm: \bli \item[{\upshape\bfseries (i)}] if $q=1$ and $\Psi_1(0+)=\lim_{t\downarrow 0} \Psi_1(t) >0$, then \begin{equation} \label{g3.17} \|f\|_{X_0} = \|f\|_{L_\infty(0,T)}, \end{equation} \item[{\upshape\bfseries (ii)}] if $q=1$ and $\Psi_1(0+)=\lim_{t\downarrow 0} \Psi_1(t) =0$, or $1<q<\infty$, then \begin{equation} \label{g3.18} \|f\|_{X_0} = \|f\|_{\mathring{X}_0} + \Psi_q(T)^{-1} \| f\|_{L_\infty(T_1,T)}. \end{equation} \end{list} \end{theorem} \section{Construction of the associated norm to the optimal norm}\label{sect-3a} \subsection{Some preparation: General description}\label{sect-g-7} Let the assumptions of Theorem~\ref{theo-g-1.1} be satisfied, and $K$ the cone given by \eqref{g1.10} with \eqref{g1.12}. We will show that the results of \cite[Thm.~3.1, Rem.~3.2, Ex.~3.3]{BGZ} are applicable here. Note that here $A=D=(0,T)$, $\mu=\nu$ are Lebesgue measures, $\Omega_\varphi(t,\tau)$ is determined by \eqref{g1.12}, so that for $t,\tau\in (0,T)$, \begin{equation} \label{g7.7} \Omega_\varphi(t,\tau) \simeq\begin{cases}\varphi(\tau), & \tau\in (0,t], \\ t^{k/n} \tau^{-k/n}\varphi(\tau), & \tau>t. \end{cases} \end{equation} To apply \cite[Thm.~3.1]{BGZ} it is sufficient to verify that \begin{equation} \label{g7.8} c_0=\left\|\int_0^T \Omega_\varphi(t,\cdot)\ensuremath{\,\mathrm{d}} t\right\|_{\widetilde{E}'(0,T)} < \infty, \end{equation} and the existence of some $\sigma_0\in \widetilde{E}_0(0,T)$ such that \begin{equation} \label{g7.9} \int_0^T \Omega_\varphi(t,\tau)\sigma_0(\tau)\ensuremath{\,\mathrm{d}}\tau>0, \quad t\in (0,T), \end{equation} as these conditions imply \cite[(3.7), (3.8)]{BGZ}. 
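As an optional numerical sanity check of the two-sided behaviour encoded in \eqref{g7.7}, one can integrate the model kernel in $t$ over $(0,T)$ and compare with $T\varphi(\tau)$. The parameters and the choice $\varphi(\tau)=\tau^{\alpha/n-1}$ below are illustrative, not fixed by the text.

```python
# Numerical check: with the model kernel from (g7.7),
#   Omega(t, tau) = phi(tau)                 for tau <= t,
#   Omega(t, tau) = (t/tau)^{k/n} phi(tau)   for tau >  t,
# the integral over t in (0, T) is comparable to T * phi(tau).
# T, k/n, alpha/n and phi are illustrative choices, not fixed by the text.
import numpy as np
from scipy.integrate import quad

T, k_over_n, alpha_over_n = 1.0, 2.0, 0.5
phi = lambda tau: tau**(alpha_over_n - 1.0)

def omega(t, tau):
    return phi(tau) if tau <= t else (t / tau)**k_over_n * phi(tau)

for tau in [0.01, 0.1, 0.5, 0.9]:
    integral, _ = quad(lambda t: omega(t, tau), 0.0, T, points=[tau])
    ratio = integral / (T * phi(tau))
    # exact value of the ratio: (tau/(k/n + 1) + T - tau)/T, so it stays in
    # [1/(k/n + 1), 1] -- i.e. the integral is comparable to T*phi(tau)
    assert 1.0 / (k_over_n + 1.0) - 1e-6 <= ratio <= 1.0 + 1e-6
```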
According to \eqref{g7.7}, for every $g\in M^+(0,T)$, \begin{equation} \label{g7.10} \int_0^T \Omega_\varphi(\xi,\tau) g(\xi)\ensuremath{\,\mathrm{d}}\xi \simeq \tau^{-k/n} \varphi(\tau) \int_0^\tau \xi^{k/n}g(\xi)\ensuremath{\,\mathrm{d}}\xi + \varphi(\tau)\int_{\tau}^T g(\xi)\ensuremath{\,\mathrm{d}}\xi. \end{equation} Let $g(t)\equiv 1$, $t\in (0,T)$, then for $\tau\in (0,T)$, \[ \int_0^T \Omega_\varphi(t,\tau) \ensuremath{\,\mathrm{d}} t \simeq \tau\varphi(\tau) + (T-\tau)\varphi(\tau)=T\varphi(\tau), \] so that \[ c_0=\left\|\int_0^T\Omega_\varphi(t,\cdot)\ensuremath{\,\mathrm{d}} t\right\|_{\widetilde{E}'(0,T)} \simeq \left\|\varphi\right\|_{\widetilde{E}'(0,T)} < \infty, \] because of assumption \eqref{g1.5}. Furthermore, with \[ \sigma_0 = \chi_{(0,T)}\in\widetilde{E}_0(0,T), \] we obtain in view of \eqref{g7.7} for every $t\in (0,T)$, \[ \int_0^T \Omega_\varphi(t,\tau)\sigma_0(\tau)\ensuremath{\,\mathrm{d}}\tau = \int_0^T \Omega_\varphi(t,\tau)\ensuremath{\,\mathrm{d}}\tau \geq c\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau>0, \] which implies \eqref{g7.9}. Finally, an application of \cite[Thm.~3.1, Rem.~3.2, Ex.~3.3]{BGZ} shows that the associated norm to the optimal one for the embedding $K\mapsto X$ coincides with \begin{equation} \label{g7.11} \varrho_0(g)=\left\|\int_0^T\Omega_\varphi(t,\cdot)g(t)\ensuremath{\,\mathrm{d}} t\right\|_{\widetilde{E}'(0,T)} ,\quad g\in M^+(0,T). \end{equation} \subsection{The case $E=\Lambda_1(v)$}\label{sect-g-8} \begin{lemma} Let the assumptions \eqref{g1.3}, \eqref{g1.4} and \eqref{g3.1}-\eqref{g3.3} be satisfied with $q=1$. 
Then the following estimate holds for the norm \eqref{g7.11}: \begin{equation} \varrho_0(g)\simeq \wt{\varrho}_0(g)+\varrho_1(g), \quad g\in M^+(0,T), \label{g8.1} \end{equation} where \begin{align} \wt{\varrho}_0(g) & = \sup_{t\in (0,T)} \left\{ V(t)^{-1} \left(\int_0^t\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\right\}, \label{g8.2}\\ \varrho_1(g)& = \sup_{t\in (0,T)} \left\{V(t)^{-1} \left(\int_0^t \Phi_k(\xi,t)g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\right\}, \label{g8.3} \end{align} with \begin{equation} \label{g8.4} \Phi_k(\xi,t)=\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau + \xi^{k/n} \int_\xi^t \tau^{-k/n} \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau. \end{equation} \label{lemma-g-8.1} \end{lemma} \begin{proof} For $g\in M^+(0,T)$ we define \begin{equation} \Psi_0(g,\tau) = \begin{cases} \int_0^T \Omega_\varphi(\xi,\tau) g(\xi) \ensuremath{\,\mathrm{d}}\xi, & \tau\in (0,T), \\ 0, & \tau\geq T. \end{cases} \label{g8.5} \end{equation} Then, according to \eqref{g7.11}, \begin{equation} \varrho_0(g)=\left\|\Psi_0(g)\right\|_{\wt{E}'(\ensuremath{\mathbb R}_+)}. \label{g8.6} \end{equation} In our setting, $\wt{E}(\ensuremath{\mathbb R}_+)=\Lambda_1(v)$ implies \begin{equation} \wt{E}'(\ensuremath{\mathbb R}_+) = M_V(\ensuremath{\mathbb R}_+), \label{g8.7} \end{equation} where $M_V$ is the Marcinkiewicz space normed by \[ \|f\|_{M_V} = \sup_{t>0} V(t)^{-1} \int_0^t f^\ast(\tau)\ensuremath{\,\mathrm{d}}\tau, \] recall \eqref{1.6} in Example~\ref{ex-Lorentz}, and $f^\ast$ denotes the decreasing rearrangement of $f$, as usual. Since $f(\tau)=\Psi_0(g,\tau)\geq 0$ is decreasing and right-continuous, we have $f^\ast(\tau)=\Psi_0(g,\tau)$.
Thus \eqref{g8.6} implies \begin{equation} \varrho_0(g)=\sup_{t>0} V(t)^{-1} \int_0^t \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau, \label{g8.8} \end{equation} which in view of \eqref{g8.5} leads to \begin{align*} \varrho_0(g) &= \max\left\{\sup_{t\in (0,T)} \left(V(t)^{-1} \int_0^t \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau\right); \left(\int_0^T \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau\right) \sup_{t\geq T} V(t)^{-1}\right\}\\ &= \sup_{t\in (0,T]} \left(V(t)^{-1} \int_0^t \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau\right) \\ & \simeq \sup_{t\in (0,T]} \left(V(t)^{-1} \int_0^t \int_0^T \Omega_\varphi(\xi,\tau) g(\xi) \ensuremath{\,\mathrm{d}}\xi\ensuremath{\,\mathrm{d}}\tau\right). \end{align*} We substitute \eqref{g7.10} into the last formula and obtain \[ \varrho_0(g) \simeq \wt{\varrho}_0(g) + \varrho_1(g), \] where $\wt{\varrho}_0(g)$ is determined by \eqref{g8.2} and \begin{equation} \varrho_1(g)= \sup_{t\in (0,T]} \left(V(t)^{-1} \int_0^t \left(\int_\tau^t g(\xi)\ensuremath{\,\mathrm{d}}\xi + \tau^{-k/n} \int_0^\tau \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right) \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right). \label{g8.9} \end{equation} Changing the order of integration in \eqref{g8.9} gives \eqref{g8.3}. \end{proof} \begin{remark} \label{rem-g-8.2} Let the conditions of Lemma~\ref{lemma-g-8.1} be satisfied and \[ \sup_{t\in (0,T)} V(t)^{-1} \int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = \infty. \] Then $g\in M^+(0,T)$ with $\varrho_0(g)<\infty$ implies $g=0$ a.e. on $(0,T)$. This is a consequence of \eqref{g8.2}. \end{remark} \begin{corollary} \label{cor-g-8.3} Let the conditions of Lemma~\ref{lemma-g-8.1} be satisfied and the following estimate be valid for $\xi\in (0,T)$, \begin{equation} \xi^{k/n}\int_\xi^T \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \leq d_1\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau, \label{g8.10} \end{equation} for some $d_1\in\ensuremath{\mathbb R}_+$ not depending on $\xi$.
Then \begin{equation} \varrho_0(g)\simeq \widehat{\varrho}_0(g)=\sup_{t\in(0,T]} \left\{V(t)^{-1} \left(\int_0^t \left(\int_\tau^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\right\}. \label{g8.11} \end{equation} \end{corollary} \begin{proof} Note that in this case \eqref{g8.3} and \eqref{g8.10} yield \begin{align} \varrho_1(g)&\simeq \sup_{t\in (0,T]} \left\{ V(t)^{-1} \int_0^t \left(\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)g(\xi)\ensuremath{\,\mathrm{d}}\xi\right\} \nonumber\\ &= \sup_{t\in(0,T]} \left\{ V(t)^{-1} \int_0^t \left(\int_\tau^t g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right\}. \label{g8.12} \end{align} We insert this identity in \eqref{g8.1}, take into account \eqref{g8.2} and arrive at \eqref{g8.11}. \end{proof} \begin{remark} \label{rem-g-8.4} If $k>n$, then the estimate \eqref{g8.10} is valid for every positive function $\varphi$ which decreases on $(0,T)$. Indeed, in this case \begin{align*} \xi^{k/n} \int_\xi^T \tau^{-k/n} \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau &\leq \xi^{k/n} \varphi(\xi) \int_\xi^\infty \tau^{-k/n}\ensuremath{\,\mathrm{d}}\tau = \xi\varphi(\xi) \left(\frac{k}{n}-1\right)^{-1} \\ &\leq \left(\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\frac{k}{n}-1\right)^{-1}.
\end{align*} \end{remark} \begin{corollary} \label{cor-g-8.5} If the conditions of Lemma~\ref{lemma-g-8.1} are satisfied, and the following estimate holds for $\xi\in (0,T)$, \begin{equation} \int_0^\xi \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\leq d_2 \xi^{1-k/n} \varphi(\xi), \label{g8.13} \end{equation} with $d_2\in\ensuremath{\mathbb R}_+$ not depending on $\xi$, then \begin{equation} \varrho_0(g) \simeq \wt{\varrho}_0(g) + \widehat{\varrho}_1(g),\quad g\in M^+(0,T), \label{g8.14} \end{equation} where $\wt{\varrho}_0$ is given by \eqref{g8.2} and \begin{align} \widehat{\varrho}_1(g) & =\sup_{t\in (0,T]} \left(V(t)^{-1} t^{1-k/n} \varphi(t) \int_0^t \xi^{k/n} g(\xi) \ensuremath{\,\mathrm{d}}\xi\right) \nonumber\\ &= \sup_{t\in (0,T]} \left( U_1(t) \int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right), \label{g8.15} \end{align} and $U_1$ is defined by \eqref{g3.13}. \end{corollary} \begin{proof} In this case, by \eqref{g8.13}, \[ \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \leq \xi^{k/n}\int_0^\xi \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \simeq \xi\varphi(\xi)\leq \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau, \] so that \begin{equation} \xi\varphi(\xi) \simeq \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \simeq \xi^{k/n}\int_0^\xi \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau, \label{g8.17} \end{equation} and \begin{align} \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau + \xi^{k/n} \int_\xi^t \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau & \simeq \xi^{k/n}\int_0^t \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \nonumber\\ &\simeq \xi^{k/n} t^{1-k/n} \varphi(t).
\label{g8.18} \end{align} From here, and from \eqref{g8.3}-\eqref{g8.4} it follows that \begin{equation} \varrho_1(g)\simeq \sup_{t\in (0,T]} \left(V(t)^{-1} t^{1-k/n} \varphi(t) \left(\int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\right) = \widehat{\varrho}_1(g). \label{g8.19} \end{equation} Therefore, \eqref{g8.1} implies \eqref{g8.14}. The second equality in \eqref{g8.15} with $U_1$ from \eqref{g3.13} is a well-known consequence of the fact that \begin{equation} 0\leq \sigma_k(t)=\int_0^t \xi^{k/n}g(\xi)\ensuremath{\,\mathrm{d}}\xi \label{g8.20} \end{equation} increases in $t\in (0,T]$. \end{proof} \begin{remark} \label{rem-g-8.6} For a positive decreasing function $\varphi$ the estimate \eqref{g8.13} is possible only when $k<n$. Otherwise the integral diverges. For such a function $\varphi$ the inverse estimate is evident, such that \eqref{g8.13} implies \begin{equation} \int_0^\xi \tau^{-k/n} \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \simeq \xi^{1-k/n} \varphi(\xi). \label{g8.16} \end{equation} \end{remark} \begin{lemma} \label{lemma-g-8.7} Let the assumptions of Lemma~\ref{lemma-g-8.1} be satisfied and \begin{equation} \label{g8.21} \int_0^t V(\xi)\xi^{-1}\ensuremath{\,\mathrm{d}}\xi \leq c V(t),\quad t\in (0,T), \end{equation} for some $c>0$ independent of $t$. Then \begin{equation} \widehat{\varrho}_0(g) \simeq \wt{\varrho}_0(g), \label{g8.22} \end{equation} using the notation \eqref{g8.11} and \eqref{g8.2}. \end{lemma} \begin{proof} For every $t\in (0,T]$ we have \begin{align*} \sup_{\xi\in (0,t]} \Big(V(\xi)^{-1} \int_\xi^t g(s)\ensuremath{\,\mathrm{d}} s \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\Big) & \leq \sup_{\xi\in (0,T]} \Big(V(\xi)^{-1} \int_\xi^Tg(s)\ensuremath{\,\mathrm{d}} s\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\Big) \\ &= \wt{\varrho}_0(g). 
\end{align*} Therefore, \[ \int_\xi^t g(s)\ensuremath{\,\mathrm{d}} s \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\leq \wt{\varrho}_0(g)V(\xi), \quad \xi\in (0,t], \ t\in (0,T]. \] Thus \begin{align*} \int_0^t \varphi(\xi)\left(\int_\xi^t g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}}\xi &\leq \int_0^t\left(\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_\xi^t g(s)\ensuremath{\,\mathrm{d}} s\right)\xi^{-1} \ensuremath{\,\mathrm{d}}\xi \\ &\leq \wt{\varrho}_0(g) \int_0^t V(\xi)\xi^{-1}\ensuremath{\,\mathrm{d}}\xi, \end{align*} and, according to \eqref{g8.21}, \[ \sup_{t\in (0,T]} V(t)^{-1} \int_0^t \varphi(\xi) \left(\int_\xi^t g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}}\xi \leq c \wt{\varrho}_0(g). \] Consequently, by \eqref{g8.11}, \begin{align*} \widehat{\varrho}_0(g) = & \sup_{t\in (0,T]} V(t)^{-1} \int_0^t \varphi(\xi)\left(\int_\xi^T g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}}\xi \\ \leq & \sup_{t\in (0,T]} V(t)^{-1} \int_0^t \varphi(\xi)\left(\int_\xi^t g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}}\xi\\ & ~\qquad +\sup_{t\in (0,T]} V(t)^{-1} \left( \int_0^t \varphi(\xi)\ensuremath{\,\mathrm{d}}\xi\right) \left(\int_t^T g(s)\ensuremath{\,\mathrm{d}} s\right)\\ \leq & (c+1) \wt{\varrho}_0(g). \end{align*} Since $\wt{\varrho}_0(g) \leq \widehat{\varrho}_0(g)$, we obtain \eqref{g8.22}. \end{proof} \begin{corollary} Let the assumptions of Lemma~\ref{lemma-g-8.1} be satisfied and the estimates \eqref{g8.10} and \eqref{g8.21} be valid. Then \begin{equation} \varrho_0(g) \simeq \wt{\varrho}_0(g),\quad g\in M^+(0,T), \label{g8.23} \end{equation} with constants not depending on $g$, recall \eqref{g7.11} and \eqref{g8.2}. \label{cor-g-8.8} \end{corollary} \begin{proof} Plainly \eqref{g8.11} and \eqref{g8.22}\ imply \eqref{g8.23}. 
\end{proof} \begin{lemma} \label{lemma-g-8.9} Let the assumptions of Lemma~\ref{lemma-g-8.1} be satisfied, the estimate \eqref{g8.13} be valid, and assume that for some $\varepsilon>0$ the function $U_1(t)t^\varepsilon $ is decreasing on $(0,T]$, recall \eqref{g3.12}-\eqref{g3.13}. Then the estimate \eqref{g8.23} holds for the norms \eqref{g7.11} and \eqref{g8.2}. \end{lemma} \begin{proof} Without loss of generality we assume that $U_1(T)=1$ and introduce a discretizing sequence $\left\lbrace \nu_m\right\rbrace _{m\in\ensuremath{{\mathbb N}_0}}$ by \begin{equation} \nu_m=\sup\left\{t\in (0,T]: U_1(t)=2^m\right\}, \quad m\in\ensuremath{{\mathbb N}_0}. \label{g8.24} \end{equation} Note that $U_1$ is a positive and decreasing function on $(0,T]$, with $U_1(0+)=\infty$, such that $\nu_m$ is well-defined, $m\in\ensuremath{{\mathbb N}_0}$, and \begin{equation} \nu_0=T, \quad 0<\nu_{m+1}<\nu_m, \quad m\in\ensuremath{{\mathbb N}_0},\quad \lim_{m\to\infty} \nu_m=0. \label{g8.25} \end{equation} By assumption, $U_1(t) t^\varepsilon$ decreases which leads to \begin{equation} \nu_{m+1}<\nu_m\leq2^{1/\varepsilon} \nu_{m+1}\quad\text{such that}\quad \nu_{m+1}\simeq\nu_m, \quad m\in\ensuremath{{\mathbb N}_0}, \label{g8.26} \end{equation} for fixed $\varepsilon$. For convenience we use the notation \begin{equation} \Delta_m=(\nu_{m+1},\nu_m],\quad m\in\ensuremath{{\mathbb N}_0}. 
\label{g8.27} \end{equation} The discretized version of \eqref{g8.15} then yields \begin{align} \widehat{\varrho}_1(g) & = \sup_{m\in\ensuremath{{\mathbb N}_0}}\sup_{t\in\Delta_m} U_1(t) \int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi \nonumber\\ & \simeq \sup_{m\in\ensuremath{{\mathbb N}_0}} 2^m \sup_{t\in\Delta_m} \int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi \nonumber\\ &= \sup_{m\in\ensuremath{{\mathbb N}_0}} 2^m\int_0^{\nu_m} \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi \nonumber\\ &= \sup_{m\in\ensuremath{{\mathbb N}_0}} 2^m \sum_{j\geq m} \int_{\Delta_j} \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\label{g8.28}. \end{align} Here we used the assertion \begin{equation} U_1(t)\simeq 2^m, \quad t\in\Delta_m,\quad m\in\ensuremath{{\mathbb N}_0}. \label{g8.29} \end{equation} Now we apply some well-known estimate for non-negative sequences $\{\alpha_m\}_{m\in\ensuremath{{\mathbb N}_0}}$ and positive sequences $\{\beta_m\}_{m\in\ensuremath{{\mathbb N}_0}}$ which satisfy, in addition, $\beta_{m+1}/\beta_m\geq B>1$ for some number $B$. Then for $0<p\leq\infty$ and $1\leq r\leq\infty$, \begin{equation} \left(\sum_{m\in\ensuremath{{\mathbb N}_0}} \left(\beta_m \left(\sum_{j\geq m} \alpha_j^r\right)^{\frac1r} \right)^p \right)^{\frac1p} \leq c(B,p)\left(\sum_{m\in\ensuremath{{\mathbb N}_0}} \left(\beta_m \alpha_m\right)^p\right)^{\frac1p}, \label{g8.30} \end{equation} where $c(B,p)$ is a positive constant (and the usual modification for $p=\infty$ or $r=\infty$). Since the inequality inverse to \eqref{g8.30} is valid (with $c=1$), the estimate \eqref{g8.30} is in fact an equivalence.
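For instance, in the case $r=1$ and $p=\infty$, which is the one applied below, the reverse inequality with constant $1$ is immediate, since each non-negative term $\alpha_m$ is one of the summands on the right-hand side:
\[
\sup_{m\in\ensuremath{{\mathbb N}_0}} \beta_m\,\alpha_m \leq \sup_{m\in\ensuremath{{\mathbb N}_0}} \beta_m \sum_{j\geq m} \alpha_j.
\]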
Now we use \eqref{g8.30} with \[\beta_m=2^m, \quad r=1, \quad p=\infty, \quad \alpha_j=\int_{\Delta_j} \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi, \] insert it in \eqref{g8.28} and conclude \begin{equation} \widehat{\varrho}_1(g) \leq \ c_1\ \sup_{m\in\ensuremath{{\mathbb N}_0}} \ 2^m \int_{\Delta_m} \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi \leq c_2\ \sup_{m\in\ensuremath{{\mathbb N}_0}} \ 2^m \nu_m^{k/n} \int_{\Delta_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi, \label{g8.31} \end{equation} where we used in the latter estimate that $\xi \simeq \nu_m$ for $\xi\in\Delta_m$, $m\in\ensuremath{{\mathbb N}_0}$. It remains to estimate $\wt{\varrho}_0(g)$ given by \eqref{g8.2} from below. By similar discretization arguments as above we observe that \begin{align*} \wt{\varrho}_0(g) & \geq \sup_{m\in\ensuremath{{\mathbb N}_0}}\sup_{t\in\Delta_m} V(t)^{-1} t \varphi(t) \int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi \\ & \geq \ \sup_{m\in\ensuremath{\mathbb N}} \left(\int_{\Delta_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right) \sup_{t\in\Delta_m} V(t)^{-1} t \varphi(t)\\ &\simeq \sup_{m\in \ensuremath{\mathbb N}} \nu_{m-1}^{k/n} \int_{\Delta_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi \sup_{t\in\Delta_m} V(t)^{-1} t^{1-k/n}\varphi(t), \end{align*} where we used that $\varphi$ decreases and the obvious estimate \[ \int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi \geq \int_{\nu_m}^T g(\xi)\ensuremath{\,\mathrm{d}}\xi \geq \int_{\Delta_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi,\quad t\in\Delta_m,\quad m\in\ensuremath{\mathbb N}. \] Thus \eqref{g8.24} implies \begin{align*} 2^{m+1} = & \ U_1(\nu_{m+1}) = \max\left\{ U_1(\nu_m), \sup_{t\in\Delta_m} V(t)^{-1} t^{1-k/n}\varphi(t)\right\} \\ = & \sup_{t\in\Delta_m} V(t)^{-1} t^{1-k/n}\varphi(t). 
\end{align*} Finally this leads to \[ \wt{\varrho}_0(g) \geq c_3 \sup_{m\in \ensuremath{\mathbb N}} \nu_{m-1}^{k/n}2^{m+1}\int_{\Delta_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi = 4 c_3 \sup_{m\in \ensuremath{{\mathbb N}_0}} \nu_{m}^{k/n}2^{m}\int_{\Delta_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi. \] Together with \eqref{g8.31} this results in \[ \wt{\varrho}_0(g) \geq c_4 \widehat{\varrho}_1(g),\quad g\in M^+(0,T), \] where $c_4>0$ does not depend on $g$. In view of \eqref{g8.14} this yields \eqref{g8.23} as desired. \end{proof} \begin{remark} \label{rem-g-8.10} In Lemma~\ref{lemma-g-8.9} we considered the alternative situation to Corollary~\ref{cor-g-8.8}, where the estimate \eqref{g8.10} is replaced by \eqref{g8.13}. In this more delicate case a more flexible method of discretization was needed for the proof. \end{remark} \subsection{The case $E=\Lambda_q(v)$, $1<q<\infty$}\label{sect-g-9} Now we deal with Lorentz spaces $\Lambda_q(v)$ for $1<q<\infty$, recall Example~\ref{ex-Lorentz} and \eqref{g3.1}. Our main aim is to obtain some counterparts of Corollary~\ref{cor-g-8.8} and Lemma~\ref{lemma-g-8.9} from the preceding section, but now corresponding to the case $q>1$. For this we need some auxiliary lemmas first. \begin{lemma} \label{lemma-g-9.1} Let the assumptions \eqref{g1.3}, \eqref{g1.4}, and \eqref{g3.1}-\eqref{g3.3} be satisfied with $1<q<\infty$. 
Then the following estimate holds for the norm \eqref{g7.11}, \begin{equation} \varrho_0(g)\simeq \wt{\varrho}_0(g) + \varrho_1(g)+\varrho_2(g), \quad g\in M^+(0,T), \label{g9.1} \end{equation} where \begin{align} \wt{\varrho}_0(g)& =\left(\int_0^T \left(\left(\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_t^T g(s)\ensuremath{\,\mathrm{d}} s\right)\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \label{g9.2}\\ \varrho_1(g)&= \left(\int_0^T \left(\int_0^t \Phi_k(\xi,t)g(\xi) \ensuremath{\,\mathrm{d}}\xi\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \label{g9.3}\\ \varrho_2(g)&= \left(\int_0^T \Phi_k(\xi,T)g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\left(\int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \label{g9.4} \end{align} where $\Phi_k$ is defined by \eqref{g8.4} and $w$ by \eqref{g7.2}. \end{lemma} \begin{proof} Note that the line of arguments is similar to that in the proof of Lemma~\ref{lemma-g-8.1}. Recall that for an RIS $\wt{E}=\Lambda_q(v)$ the associated RIS is the space $\wt{E}'=\Gamma_{q'}(w)$ with the norm \[ \left\|f\right\|_{\wt{E}'(\ensuremath{\mathbb R}_+)} = \left(\int_0^\infty \left(\int_0^t f^\ast(\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \] Since in our case $f(\tau)=\Psi_0(g,\tau)=f^\ast(\tau)$, see \eqref{g8.5}, we obtain \begin{equation} \varrho_0(g)=\left\|\Psi_0(g)\right\|_{\wt{E}'(\ensuremath{\mathbb R}_+)} = \left(\int_0^\infty \left(\int_0^t \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}.
\label{g9.5} \end{equation} Substitution of \eqref{g8.5} into \eqref{g9.5} gives \begin{equation} \varrho_0(g)\simeq \widehat{\varrho}_1(g)+\varrho_2(g), \label{g9.6} \end{equation} where \begin{align} \widehat{\varrho}_1(g)&= \left(\int_0^T \left(\int_0^t \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \label{g9.7}\\ \varrho_2(g)&= \left(\int_0^T \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \label{g9.8} \end{align} Now we introduce the function $G_k$ for $t\in (0,T]$ and $\tau\in (0,t]$, \begin{equation} G_k(t,\tau)=\int_\tau^t g(\xi)\ensuremath{\,\mathrm{d}}\xi + \tau^{-k/n} \int_0^\tau \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi. \label{g9.9} \end{equation} Then \eqref{g7.10} and \eqref{g8.5} imply that \begin{align} \int_0^t \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau = & \int_0^t G_k(T,\tau)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\nonumber\\ = & \int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi + \int_0^t G_k(t,\tau)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau.\label{g9.10} \end{align} After a change of the order of integration we obtain \begin{equation} \int_0^t G_k(t,\tau)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = \int_0^t \Phi_k(\xi,t)g(\xi)\ensuremath{\,\mathrm{d}}\xi, \label{g9.11} \end{equation} recall \eqref{g9.9} and \eqref{g8.4}. As a special case we get \begin{equation} \int_0^T \Psi_0(g,\tau)\ensuremath{\,\mathrm{d}}\tau = \int_0^T G_k(T,\tau)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = \int_0^T \Phi_k(\xi,T)g(\xi)\ensuremath{\,\mathrm{d}}\xi, \label{g9.12} \end{equation} which means the coincidence of \eqref{g9.8} and \eqref{g9.4}.
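For the reader's convenience we sketch the change of the order of integration behind \eqref{g9.11}. By Fubini's theorem, and in view of \eqref{g9.9},
\begin{align*}
\int_0^t G_k(t,\tau)\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau
&= \int_0^t \varphi(\tau)\left(\int_\tau^t g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\ensuremath{\,\mathrm{d}}\tau
+ \int_0^t \tau^{-k/n}\varphi(\tau)\left(\int_0^\tau \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\ensuremath{\,\mathrm{d}}\tau\\
&= \int_0^t g(\xi)\left(\int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau
+ \xi^{k/n}\int_\xi^t \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\ensuremath{\,\mathrm{d}}\xi,
\end{align*}
and the inner expression is exactly $\Phi_k(\xi,t)$ from \eqref{g8.4}, compare also the left-hand side of \eqref{g8.18}.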
Moreover, substituting \eqref{g9.10} into \eqref{g9.7}, we arrive at \begin{equation} \widehat{\varrho}_1(g) \simeq \wt{\varrho}_0(g)+\varrho_1(g),\quad g\in M^+(0,T), \label{g9.13} \end{equation} where $\wt{\varrho}_0(g)$ is given by \eqref{g9.2} and \begin{equation} \varrho_1(g)= \left(\int_0^T \left(\int_0^t G_k(t,\tau)\varphi(\tau) \ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \label{g9.14} \end{equation} Now in view of \eqref{g9.11}, equality \eqref{g9.3} coincides with \eqref{g9.14}, and \eqref{g9.6} and \eqref{g9.13} imply \eqref{g9.1}. \end{proof} \begin{lemma} \label{lemma-g-9.2} Let $1<q<\infty$, $v>0$ be a measurable function on $(0,T_0)$, where $T_0\in (0,\infty]$, and $V$ and $w$ be given on $(0,T_0)$ by \eqref{g3.2} and \eqref{g7.2}, respectively. Let $\ensuremath\varepsilon>0$, $\delta\in [0, \ensuremath\varepsilon/q)$, and \begin{equation} V(t) t^{-\ensuremath\varepsilon}\quad \text{increasing on}\quad (0,T_0). \label{g9.15} \end{equation} Then, for $t\in (0,T_0)$, \begin{align} \left(\int_t^{T_0} w(\tau)\tau^{\delta q'}\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'} & \leq \left(\frac{\ensuremath\varepsilon q}{q'(\ensuremath\varepsilon-\delta q)}\right)^{1/q'} t^\delta V(t)^{-1/q}\label{g9.16},\\ \left(\int_0^t w(\tau)^{-q/q'}\tau^{-(\delta+1)q}\ensuremath{\,\mathrm{d}}\tau\right)^{1/q} & \leq \frac{t^{-\delta} V(t)^{1/q}}{\ensuremath\varepsilon^{1/q'}(\ensuremath\varepsilon-\delta q)^{1/q}} \ .\label{g9.17} \end{align} \end{lemma} \begin{proof} We start proving \eqref{g9.16}. 
Here we apply \eqref{g7.2} and \eqref{g9.15} to conclude \begin{align*} \int_t^{T_0} w(\tau)\tau^{\delta q'}\ensuremath{\,\mathrm{d}} \tau \leq & \left(t^\ensuremath\varepsilon V(t)^{-1}\right)^{\delta q'/\ensuremath\varepsilon} \int_t^{T_0} V(\tau)^{q'(\delta/\ensuremath\varepsilon-1)} v(\tau)\ensuremath{\,\mathrm{d}}\tau \\ = &\ t^{\delta q'} V(t)^{-\delta q'/\ensuremath\varepsilon} (q' (\delta/\ensuremath\varepsilon-1)+1)^{-1} \left(V(\tau)^{q'(\delta/\ensuremath\varepsilon-1)+1}\right) \Big|^{T_0}_{\tau=t}\\ \leq & t^{\delta q'} \left(q'\left(\frac1q-\frac{\delta}{\ensuremath\varepsilon}\right)\right)^{-1} V(t)^{1-q'}, \end{align*} which implies \eqref{g9.16}. It remains to verify \eqref{g9.17}. Property \eqref{g9.15} yields that \[ v(\tau)=V'(\tau) \geq \ensuremath\varepsilon \tau^{-1} V(\tau),\quad \tau\in (0,T_0). \] Therefore, \begin{align*} \int_0^t w(\tau)^{-q/q'} \tau^{-(\delta+1)q} \ensuremath{\,\mathrm{d}} \tau & = \int_0^t V(\tau)^q v(\tau)^{-q/q'} \tau^{-(\delta+1)q} \ensuremath{\,\mathrm{d}}\tau\\ & \leq \ensuremath\varepsilon^{-q/q'} \int_0^t V(\tau)\tau^{-\delta q-1} \ensuremath{\,\mathrm{d}}\tau. \end{align*} Now \eqref{g9.15} implies that \[ \int_0^t V(\tau)\tau^{-\delta q-1} \ensuremath{\,\mathrm{d}}\tau \leq V(t)t^{-\ensuremath\varepsilon} \int_0^t \tau^{\ensuremath\varepsilon-\delta q-1} \ensuremath{\,\mathrm{d}}\tau = V(t) t^{-\delta q} (\ensuremath\varepsilon-\delta q)^{-1}. \] These estimates conclude the proof. \end{proof} We formulate an immediate consequence of the estimates \eqref{g9.16} and \eqref{g9.17}. \begin{corollary} \label{cor-g-9.3} Let the assumptions of Lemma~\ref{lemma-g-9.2} be satisfied. Then \begin{equation} \left(\int_0^t w(\tau)^{-q/q'}\tau^{-(\delta+1)q}\ensuremath{\,\mathrm{d}}\tau\right)^{1/q} \left(\int_t^{T_0} w(\tau)\tau^{\delta q'}\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'} \leq \frac{(q/q')^{1/q'}}{\ensuremath\varepsilon-\delta q}. 
\label{g9.18} \end{equation} \end{corollary} \begin{lemma} \label{lemma-g-9.4} Let the assumptions of Lemma~\ref{lemma-g-9.1} be satisfied, and assume the conditions \eqref{g8.10} and \eqref{g9.15} to hold. Then there is the equivalence \begin{equation} \varrho_0(g) \simeq \wt{\varrho}_0(g),\quad g\in M^+(0,T), \label{g9.19} \end{equation} where the norm $\varrho_0$ is given by \eqref{g7.11} and $\wt{\varrho}_0(g)$ by \eqref{g9.2}. \end{lemma} \begin{proof} \underline{\em Step 1}.~ Lemma~\ref{lemma-g-9.1} implies the estimates \eqref{g9.1}--\eqref{g9.4}. Next we apply \eqref{g8.10} to $\Phi_k(\xi,t)$, given by \eqref{g8.4}, and obtain \[ \Phi_k(\xi,t)\simeq \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau,\quad t\in (0,T], \] and thus \begin{align*} \varrho_1(g) = & \left(\int_0^T \left(\int_0^t \left(\int_0^\xi\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}\\ = & \left(\int_0^T \left(\int_0^t\varphi(\tau) \left(\int_\tau^t g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \end{align*} Hence, \begin{equation} \varrho_1(g) \leq c \left(\int_0^T \left(\int_0^t\varphi(\tau) \left(\int_\tau^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \label{g9.20} \end{equation} Similarly, \begin{equation} \varrho_2(g) \simeq \left(\int_0^T\varphi(\tau)\left(\int_\tau^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\ensuremath{\,\mathrm{d}}\tau\right) \left(\int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \label{g9.21} \end{equation} Therefore it is enough to prove that \begin{align} \varrho_1(g) & \leq \ c_1\ \wt{\varrho}_0(g), \label{g9.22}\\ \varrho_2(g) & \leq \ c_2 \ \wt{\varrho}_0(g), \label{g9.23} \end{align} where $c_1,c_2$ are positive constants independent of $g\in M^+(0,T)$.
\\ \underline{\em Step 2}.~ We verify \eqref{g9.22}. We apply Hardy's inequality \cite[Thm.~2, p.~41]{mazya} in adapted notation, that is, \begin{align} \Big(\int_0^T &\left(\int_0^t\varphi(\tau) \left(\int_\tau^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\ensuremath{\,\mathrm{d}}\tau\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\Big)^{1/q'}\nonumber\\ & \leq c_3 \left(\int_0^T \left(t \varphi(t) \int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'} \label{g9.24} \end{align} if, and only if, \begin{equation} B_0=\sup_{t\in (0,T)} \left(\int_t^T w(\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'} \left(\int_0^t w(\tau)^{-q/q'} \tau^{-q} \ensuremath{\,\mathrm{d}}\tau\right)^{1/q}<\infty. \label{g9.25} \end{equation} Moreover, for the best possible constant $c_3$ in \eqref{g9.24} we have \begin{equation} B_0\leq c_3\leq B_0\left(\frac{q'}{q'-1}\right)^{\frac{q'-1}{q'}} (q')^\frac{1}{q'} = B_0 q^\frac1q (q')^\frac{1}{q'}. \label{g9.26} \end{equation} Corollary~\ref{cor-g-9.3} with $\delta=0$, $T_0=T$, implies that \begin{equation} B_0\leq \ensuremath\varepsilon^{-1} (q'-1)^{-1/q'}, \label{g9.27} \end{equation} and consequently \begin{equation} c_3\leq \frac{q}{\ensuremath\varepsilon}. \label{g9.28} \end{equation} Now \eqref{g9.22} follows from \eqref{g9.20}, \eqref{g9.24}, \eqref{g9.2} and from the obvious estimate $t\varphi(t) \leq \int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau$ due to the monotonicity of $\varphi$. \\ \underline{\em Step 3}.~ It remains to verify \eqref{g9.23} which is much simpler.
By H\"older's inequality we get from \eqref{g9.21} that \begin{align*} \varrho_2(g) \leq & c \left(\int_0^T\tau \varphi(\tau)\left(\int_\tau^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} w(\tau)\ensuremath{\,\mathrm{d}}\tau\right)^{1/q'} \times \\ &\quad \times \left(\int_0^T \tau^{-q} w(\tau)^{-q/q'} \ensuremath{\,\mathrm{d}}\tau\right)^{1/q} \left(\int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}\\ \leq & c\ \wt{\varrho}_0(g) \left(\int_0^T \tau^{-q} w(\tau)^{-q/q'} \ensuremath{\,\mathrm{d}}\tau\right)^{1/q} \left(\int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \end{align*} Now application of \eqref{g9.18} with $T_0=\infty$, $\delta=0$, $t=T$ gives \[ \varrho_2(g)\leq c_4 \frac1\ensuremath\varepsilon \left(\frac {q}{q'}\right)^{1/q'} \wt{\varrho}_0(g).\] Thus we have finally shown \eqref{g9.22} and \eqref{g9.23} which together with \eqref{g9.1} imply \eqref{g9.19}. \end{proof} The last preparatory lemma we need is the following. \begin{lemma} \label{lemma-g-9.5} Let the assumption of Lemma~\ref{lemma-g-9.1} be satisfied. If the estimate \eqref{g8.13} holds, and for the function $U_q$, given by \eqref{g3.13} for $q>1$, there is some $\ensuremath\varepsilon>0$ such that \begin{equation} t^\ensuremath\varepsilon U_q(t)\quad \text{decreases on}\quad (0,T), \label{g9.29} \end{equation} then the assertion \eqref{g9.19} holds with $\wt{\varrho}_0(g)$ given by \eqref{g9.2}. All the constants appearing in \eqref{g9.19} are positive, finite, and independent of $g\in M^+(0,T)$. \end{lemma} \begin{proof} We strengthen a similar line of arguments like in the proof of Lemma~\ref{lemma-g-8.9} and use the method of discretization again.\\ \underline{\em Step 1.}~ According to \eqref{g8.4} and \eqref{g8.17}-\eqref{g8.18} we have \begin{equation} \Phi_k(\xi,t)\simeq \xi^{k/n} t^{1-k/n}\varphi(t). 
\label{g9.30} \end{equation} Hence \eqref{g9.1}-\eqref{g9.4} read as \begin{align} \varrho_1(g)\simeq & \left(\int_0^T \left(\left(\int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right) t^{1-k/n} \varphi(t)\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'} \label{g9.31}\\ \varrho_2(g)\simeq & \wt{\varrho}_2(g) = \left(\int_0^T \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right) T^{1-k/n} \varphi(T) \left(\int_T^\infty w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \label{g9.32} \end{align} \underline{\em Step 2.}~ First we estimate $\varrho_1(g)$. We substitute formulas \eqref{g7.2} and \eqref{g3.12} into \eqref{g9.31} and obtain \begin{equation} \varrho_1(g)\simeq \wt{\varrho}_1(g) = \left(\int_0^T \left(\int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} \wt{W}(t)^{q'} v(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \label{g9.33} \end{equation} Note that $U_q(T)=0$, $U_q(0+)=\infty$, and $U_q(t) t^\ensuremath\varepsilon $ is monotonically decreasing. We introduce the discretizing sequence $\{\delta_m\}_{m\in\ensuremath{\mathbb Z}}$ by \begin{equation} \delta_m=\sup\{\tau\in (0,T): U_q(\tau)=2^m\},\quad m\in\ensuremath{\mathbb Z}. \label{g9.34} \end{equation} Thus we observe $\delta_0\in (0,T)$, $\delta_m\to 0$ for $m\to +\infty$, $\delta_m\to T$ for $m\to-\infty$, and \begin{equation} \delta_{m+1}<\delta_m\leq \delta_{m+1} 2^{1/\ensuremath\varepsilon},\quad m\in\ensuremath{\mathbb Z}. \label{g9.35} \end{equation} We use the notation \begin{equation} \wt{\Delta}_m = (\delta_{m+1},\delta_m], \quad m\in\ensuremath{\mathbb Z}. \label{g9.36} \end{equation} Therefore, \[ \wt{\varrho}_1(g)^{q'} = \sum_ {m\in\ensuremath{\mathbb Z}} \int_{\wt{\Delta}_m} \left(\int_0^t \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} \wt{W}(t)^{q'} v(t)\ensuremath{\,\mathrm{d}} t. 
\] In view of \eqref{g9.34} this can be continued by \begin{align*} \wt{\varrho}_1(g)^{q'} \leq & \sum_ {m\in\ensuremath{\mathbb Z}}\left( \int_0^{\delta_m}\xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} \int_{\wt{\Delta}_m} \wt{W}(t)^{q'} v(t)\ensuremath{\,\mathrm{d}} t\\ = & \sum_ {m\in\ensuremath{\mathbb Z}}\left( \int_0^{\delta_m}\xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} \left(U_q(\delta_{m+1})^{q'} - U_q(\delta_m)^{q'}\right)\\ = & \ (2^{q'}-1) \sum_{m\in\ensuremath{\mathbb Z}} 2^{mq'} \left(\sum_{j\geq m}\int_{\wt{\Delta}_j }\xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} . \end{align*} Now we apply in appropriately adapted notation estimate \eqref{g8.30} again and obtain \[ \wt{\varrho}_1(g)\leq c(q') \left(\sum_{m\in\ensuremath{\mathbb Z}} 2^{mq'} \left(\int_{\wt{\Delta}_m }\xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'}\right)^{1/q'}. \] Using \eqref{g9.35} and \eqref{g9.36} we observe $ \xi \simeq \delta_m$, $\xi\in \wt{\Delta}_m$, $m\in\ensuremath{\mathbb Z}$, such that finally \begin{equation} \wt{\varrho}_1(g)\leq c_1(q,\ensuremath\varepsilon) \left(\sum_{m\in\ensuremath{\mathbb Z}} 2^{mq'} \delta_m^{kq'/n} \left(\int_{\wt{\Delta}_m } g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'}\right)^{1/q'}. \label{g9.37} \end{equation} \underline{\em Step 3.}~ We deal with $\varrho_2(g)$ and \eqref{g9.32}. We apply \eqref{g9.16} with $\delta=0$ and obtain \begin{equation} \wt{\varrho}_2(g)\leq c_2(q)\left(\int_0^T \xi^{k/n} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right) T^{1-k/n} \varphi(T) V(T)^{-1/q}. 
\label{g9.38} \end{equation} Moreover, \[ \int_0^{\delta_0} \xi^{k/n}g(\xi)\ensuremath{\,\mathrm{d}}\xi = \sum_{m\in\ensuremath{{\mathbb N}_0}} \int_{\wt{\Delta}_m} \xi^{k/n}g(\xi)\ensuremath{\,\mathrm{d}}\xi \simeq \sum_{m\in\ensuremath{{\mathbb N}_0}}\delta_m^{k/n}\int_{\wt{\Delta}_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi, \] such that H\"older's inequality leads to \[ \int_0^{\delta_0} \xi^{k/n}g(\xi)\ensuremath{\,\mathrm{d}}\xi\leq c\left( \sum_{m\in\ensuremath{{\mathbb N}_0}} 2^{-mq}\right)^{\frac1q}\left(\sum_{m\in\ensuremath{{\mathbb N}_0}}2^{mq'} \delta_m^{q'k/n}\left(\int_{\wt{\Delta}_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'}\right)^{\frac{1}{q'}}, \] such that \[ \int_0^T \xi^{k/n}g(\xi)\ensuremath{\,\mathrm{d}}\xi\leq c_3(q,\ensuremath\varepsilon)\left(\left(\sum_{m\in\ensuremath{{\mathbb N}_0}}2^{mq'} \delta_m^{q'k/n}\left(\int_{\wt{\Delta}_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'}\right)^{\frac{1}{q'}}+T^{k/n}\int_{\delta_0}^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right). \] Together with \eqref{g9.37}, \eqref{g9.38} this leads to \begin{equation} \wt{\varrho}_1(g)+\wt{\varrho}_2(g)\leq c_4(q,\ensuremath\varepsilon,T)\left(\left(\sum_{m\in\ensuremath{{\mathbb N}_0}}2^{mq'} \delta_m^{q'k/n}\left(\int_{\wt{\Delta}_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'}\right)^{\frac{1}{q'}}+\int_{\delta_0}^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right). \label{g9.39} \end{equation} \underline{\em Step 4.}~ We estimate $\wt{\varrho}_0(g)$ in \eqref{g9.2} from below. 
First of all, \begin{align*} \wt{\varrho}_0(g)\geq & \left(\int_0^{\delta_0}\left(\left(\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{\frac{1}{q'}} \\ \geq &\left(\int_{\delta_0}^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\left(\int_0^{\delta_0} \left(t\varphi(t)\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\right)^{\frac{1}{q'}}, \end{align*} hence \begin{equation} \int_{\delta_0}^T g(\xi)\ensuremath{\,\mathrm{d}}\xi \leq c(\delta_0,q) \wt{\varrho}_0(g),\quad g\in M^+(0,T). \label{g9.40} \end{equation} Furthermore, according to \eqref{g9.34}-\eqref{g9.35}, \begin{align} \wt{\varrho}_0(g)^{q'} &= \sum_{m\in\ensuremath{\mathbb Z}} \int_{\wt{\Delta}_m} \left(\left(\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\nonumber\\ &\geq \sum_{m\in\ensuremath{\mathbb Z}} \left(\int_{\wt{\Delta}_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} \int_{\wt{\Delta}_m} \left(t\varphi(t)\right)^{q'} w(t)\ensuremath{\,\mathrm{d}} t\nonumber\\ & \simeq \sum_{m\in \ensuremath{\mathbb Z}} \left(\int_{\wt{\Delta}_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'} \delta_{m-1}^{kq'/n} \int_{\wt{\Delta}_m} \wt{W}(t)^{q'} v(t)\ensuremath{\,\mathrm{d}} t. \label{g9.41} \end{align} For $t\in\wt{\Delta}_m$ we have $t\leq\delta_m<\delta_{m-1}<T$, such that $\int_t^T g(\xi)\ensuremath{\,\mathrm{d}}\xi\geq \int_{\wt{\Delta}_{m-1}} g(\xi)\ensuremath{\,\mathrm{d}}\xi$. In addition, we know that $t\simeq t^{1-k/n} \delta_{m-1}^{k/n}$ and $\int_0^t\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\geq t\varphi(t)$, recall also notation \eqref{g3.13} and \eqref{g7.2}.
Now, by arguments similar to those presented in the proof above following \eqref{g9.36}, we obtain \[ \int_{\wt{\Delta}_m} \wt{W}(t)^{q'} v(t)\ensuremath{\,\mathrm{d}} t = (2^{q'}-1) 2^{mq'} = 2^{q'}(2^{q'}-1) 2^{(m-1)q'}. \] Substituting this into \eqref{g9.41} and shifting the index lead to \begin{equation} \wt{\varrho}_0(g)\geq c(\ensuremath\varepsilon,q)\left(\sum_{m\in\ensuremath{\mathbb Z}} 2^{mq'} \delta_m^{kq'/n} \left(\int_{\wt{\Delta}_m} g(\xi)\ensuremath{\,\mathrm{d}}\xi\right)^{q'}\right)^{1/q'}. \label{g9.42} \end{equation} Thus the estimates \eqref{g9.39}, \eqref{g9.40} and \eqref{g9.42} result in \[ \wt{\varrho}_1(g) + \wt{\varrho}_2(g) \leq c_5(\ensuremath\varepsilon,\delta_0,q,T)\wt{\varrho}_0(g),\quad g\in M^+(0,T). \] Then the last estimate, together with \eqref{g9.33}, \eqref{g9.32} and \eqref{g9.1}, yields \eqref{g9.19}. \end{proof} Recall that our aim is to prove Theorem~\ref{theo-g-3.3} above. For that reason we summarize our preceding results in the following theorem. \begin{theorem} \label{theo-g-9.6} Let the conditions of Theorem~\ref{theo-g-3.3} be satisfied. Then the associated GBFS $X_0'=X_0'(0,T)$ to the optimal space $X_0=X_0(0,T)$ is generated by the function norm $\varrho_0$ such that \begin{equation} \varrho_0(g)\simeq \wt{\varrho}_0(g),\quad g\in M^+(0,T), \label{g9.43} \end{equation} where for $q=1$, \begin{equation} \wt{\varrho}_0(g)=\sup_{t\in (0,T]} \left(V(t)^{-1} \left(\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_t^T g(s)\ensuremath{\,\mathrm{d}} s\right)\right), \label{g9.44} \end{equation} and for $1<q<\infty,$ \begin{equation} \wt{\varrho}_0(g)=\left(\int_0^T\left(\left(\int_0^t \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\left(\int_t^T g(s)\ensuremath{\,\mathrm{d}} s\right)\right)^{q'}w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \label{g9.45} \end{equation} where $\frac{1}{q}+\frac{1}{q'}=1$, as usual.
Here again \begin{equation} V(t)=\int_0^t v(\tau)\ensuremath{\,\mathrm{d}}\tau,\qquad w(t)=V(t)^{-q'} v(t). \label{g9.46} \end{equation} \end{theorem} \begin{proof} First we assume that condition {\bfseries (A)} (see \eqref{g3.8} and \eqref{g3.9}) is satisfied. Then for $q=1$ we can apply Corollary~\ref{cor-g-8.8}, and find that \eqref{g8.10} coincides with \eqref{g3.8}, and \eqref{g3.9} implies \eqref{g8.21} (as in the last estimate before Lemma~\ref{lemma-g-9.4} with $\delta=0$). Thus \eqref{g9.43} is just \eqref{g8.23} in this case. If $1<q<\infty$, we obtain \eqref{g9.19} by Lemma~\ref{lemma-g-9.4}.\\ Secondly we consider the situation when {\bfseries (B)} holds, that is, \eqref{g3.10} and \eqref{g3.11}. For $q=1$ we can apply Lemma~\ref{lemma-g-8.9}, since \eqref{g8.13} coincides with \eqref{g3.10} and \eqref{g3.11} provides the required property of $U_1$. This yields \eqref{g8.23} which coincides with \eqref{g9.43}. Finally, if $1<q<\infty$, then \eqref{g9.43} is a consequence of Lemma~\ref{lemma-g-9.5}. \end{proof} \section{The description of the optimal Calder\'on space, II} \label{sect-3b} Recall that we already stated in Section~\ref{sect-3} above one of our main results, Theorem~\ref{theo-g-3.3}. Now we are ready to present its proof. \begin{proof}[Proof of Theorem~\ref{theo-g-3.3}] Theorem~\ref{theo-g-9.6} above shows that (under the given assumptions) the associated norm is optimal. So what is left to verify are the explicit representations for the optimal norm $\|\cdot\|_{X_0}$ in \eqref{g3.17} and \eqref{g3.18}, respectively, with \eqref{g3.16}. This norm is associated to the norm $\wt{\varrho}_0$ presented in \eqref{g9.44}, \eqref{g9.45}. We benefit from the paper \cite{BaGo}, and an application of \cite[Thm.~1.2]{BaGo} (in appropriately adapted notation) concludes the argument. \end{proof} The combination of Theorems~\ref{theo-g-2.4} and \ref{theo-g-3.3} now yields the following result.
\begin{theorem}\label{theo-g-3.4} Let the assumptions of Theorems~\ref{theo-g-1.1} and \ref{theo-g-3.3} be satisfied. Let $q=1$ and $\Psi_1(0+)=0$ or $1<q<\infty$. Then the optimal Calder\'on space for the embedding \eqref{g2.7} has the following norm \begin{equation} \label{g3.19} \|u\|_{\Lambda^k(C;X_0)} = \|u\|_C + \left(\int_0^T \left(\frac{\omega_k(u;t^{1/n})}{\Psi_q(t)}\right)^q \frac{\ensuremath{\,\mathrm{d}} \Psi_q(t)}{\Psi_q(t)}\right)^{1/q}. \end{equation} \end{theorem} \begin{proof} Theorem~\ref{theo-g-3.3} states that the GBFS $X_0=X_0(0,T)$ is optimal for the embedding $K\mapsto X$, where $K$ is the cone described by \eqref{g1.10} with \eqref{g1.12}. Then Corollary~\ref{cor-g-2.5} shows that the corresponding Calder\'on space $\Lambda^k(C;X_0)$ is optimal for the embedding \eqref{g2.7}. Thus \begin{equation} \|u\|_{\Lambda^k(C;X_0)} = \|u\|_C + \left\|\omega_k(u;\tau^{1/n})\right\|_{X_0(0,T)}. \label{g10.1} \end{equation} We substitute \eqref{g3.18} into \eqref{g10.1} and arrive at \begin{align} \|u\|_{\Lambda^k(C;X_0)} \simeq & \|u\|_C + \Psi_q(T)^{-1} \left\|\omega_k(u;\tau^{1/n})\right\|_{L_\infty(T_1,T)}\nonumber\\ &+ \left(\int_0^T \left(\frac{\left\|\omega_k(u;\tau^{1/n})\right\|_{L_\infty(0,t)}}{\Psi_q(t)}\right)^q \frac{\ensuremath{\,\mathrm{d}} \Psi_q(t)}{\Psi_q(t)}\right)^{\frac1q}. \label{g10.2} \end{align} But $\omega_k(u; \tau^{1/n})$ increases with respect to $\tau$, hence \begin{align*} \left\|\omega_k(u;\tau^{1/n})\right\|_{L_\infty(T_1,T)} & \leq \omega_k(u;T^{1/n}) \leq 2^k \|u\|_C, \\ \left\|\omega_k(u;\tau^{1/n})\right\|_{L_\infty(0,t)} & \leq \omega_k(u;t^{1/n}), \quad t\in (0,T). \end{align*} Together with \eqref{g10.2} this implies \eqref{g3.19}. \end{proof} \begin{remark}\label{rem-g-3.5} In the case $q=1$ and $\Psi_1(0+)>0$, the embedding \eqref{g2.7} takes place `on the limit of the smoothness', and we obtain $\Lambda^k(C;X_0)=C(\ensuremath{\mathbb R}^{n})$.
According to the results in \cite{GoHa}, in this case there exist functions $u\in H^G_E(\ensuremath{\mathbb R}^{n})$ such that $\omega_k(u;t^{1/n})\to 0$ for $t\to 0+$ arbitrarily slowly. Note that in this case $X_0(0,T)=L_\infty(0,T)$ by \eqref{g3.17}, so that the above norm \[ \|u\|_{\Lambda^k(C;X_0)} = \|u\|_C + \left\|\omega_k(u;\tau^{1/n})\right\|_{L_\infty(0,T)} \simeq \|u\|_C. \] In that case we cannot conclude more than $u\in C(\ensuremath{\mathbb R}^{n})$. \end{remark} \begin{remark}\label{rem-g-3.6} We return to Examples~\ref{exm-g-1.3}, \ref{exm-g-1.4}. If $\alpha\neq k$, then Theorems~\ref{theo-g-3.3}, \ref{theo-g-3.4} can be applied. In the limiting case $\alpha=k$ some special care is needed. This follows from estimate \eqref{g1.20} in Example~\ref{exm-g-1.3}. In Example~\ref{exm-g-1.4} we have the equality \eqref{g1.24} when $\lambda$ is slowly varying on $(0,T]$. So in some sense \eqref{g1.12} can be understood as a special case of \eqref{g1.24} with $\lambda\equiv 1$. Thus Remark~\ref{rem-g-3.2} applies and implies that we can use Theorems~\ref{theo-g-3.3}, \ref{theo-g-3.4} in case of $\alpha\neq k$. \end{remark} Before we can state our next main result, Theorem~\ref{theo-g-3.7} below, which also covers the delicate limiting case $\alpha=k$, we need some further preparation. \begin{lemma} \label{lemma-g-10.1} Let $\lambda>0$ be a continuous function on $(0,T]$ such that for some $\delta\in (0,1)$ the function $\lambda(t)t^{-\delta}$ decreases. Then \[ \lambda(s) + \int_s^t \lambda(\tau)\tau^{-1}\ensuremath{\,\mathrm{d}} \tau \leq \frac{\lambda(s)}{\delta} \left(\frac{t}{s}\right)^\delta,\quad s\in (0,t], \quad t\in (0,T]. \] \end{lemma} \begin{proof} We use the assumed monotonicity and argue as follows, \begin{align*} \int_s^t\lambda(\tau)\tau^{-1}\ensuremath{\,\mathrm{d}}\tau \leq \lambda(s)s^{-\delta}\int_s^t \tau^{\delta-1}\ensuremath{\,\mathrm{d}}\tau = \frac{\lambda(s) s^{-\delta}}{\delta}\left(t^\delta-s^\delta\right).
\end{align*} Hence, since $\delta<1$, \[ \lambda(s)+\int_s^t\lambda(\tau)\tau^{-1}\ensuremath{\,\mathrm{d}}\tau \leq \frac{\lambda(s)}{\delta}\left(\frac{t}{s}\right)^\delta.\] \end{proof} \begin{corollary} Let $\varphi$ be given by \eqref{g3.15} with $\alpha=k$, and $\lambda>0$ be slowly varying on $(0,T]$. Then for every $\delta\in (0,1)$ the function $\Phi_k$ given by \eqref{g8.4} can be estimated by \begin{equation} 0<\Phi_k(\xi,t)\leq c\ t^\delta \xi^{k/n-\delta}\lambda(\xi),\quad \xi\in (0,t],\quad t\in (0,T], \label{g10.3} \end{equation} where $c=c(\delta,k,n)>0$. \label{cor-g-10.2} \end{corollary} \begin{proof} If $\varphi$ is given by \eqref{g3.15} with $\alpha=k$, then \begin{equation} \int_0^\xi \varphi(\tau)\ensuremath{\,\mathrm{d}}\tau = \int_0^\xi \tau^{k/n-1}\lambda(\tau) \ensuremath{\,\mathrm{d}}\tau \simeq \xi^{k/n}\lambda(\xi), \label{g10.4} \end{equation} recall \eqref{g5.8}. Hence \[ \Phi_k(\xi,t)=\int_0^\xi\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau + \xi^{k/n}\int_\xi^t \tau^{-k/n}\varphi(\tau)\ensuremath{\,\mathrm{d}}\tau \simeq \xi^{k/n}\left(\lambda(\xi)+\int_\xi^t \lambda(\tau)\tau^{-1}\ensuremath{\,\mathrm{d}}\tau\right). \] Application of Lemma~\ref{lemma-g-10.1} leads to \eqref{g10.3}. \end{proof} \begin{corollary} Let the assumptions of Corollary~\ref{cor-g-10.2} be satisfied, and let $\varrho_1$ be given by \eqref{g8.3}-\eqref{g8.4}. Then for any $\delta\in(0,1)$ there is some positive constant $c_1=c_1(\delta,k,n)$, such that \begin{equation} \varrho_1(g)\leq c_1 \sup_{t\in (0,T]} \left(V(t)^{-1} t^\delta\int_0^t s^{\frac{k}{n}-\delta} \lambda(s)g(s)\ensuremath{\,\mathrm{d}} s\right),\quad g\in M^+(0,T). \label{g10.5} \end{equation} \label{cor-g-10.3} \end{corollary} \begin{proof} This follows immediately by substituting \eqref{g10.3} into \eqref{g8.4}. \end{proof} \begin{corollary} Let the assumptions of Corollary~\ref{cor-g-10.2} be satisfied, and let $\varrho_1$ be given by \eqref{g8.3}-\eqref{g8.4}.
Then for any $\delta\in(0, \min\{1,k/n\})$ there is some positive constant $c_2=c_2(\delta,k,n)$, such that \begin{equation} \varrho_1(g)\leq c_2 \sup_{t\in (0,T]} \left(V(t)^{-1} t^\delta\int_0^t \tau^{\frac{k}{n}-\delta-1}\lambda(\tau) \left(\int_\tau^t g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}}\tau\right),\quad g\in M^+(0,T). \label{g10.6} \end{equation} \label{cor-g-10.4} \end{corollary} \begin{proof} Recall that when $\lambda>0$ is slowly varying, we have for $0<\delta<k/n$ that \begin{equation} s^{\frac{k}{n}-\delta} \lambda(s)\simeq \int_0^s \tau^{\frac{k}{n}-\delta-1}\lambda(\tau)\ensuremath{\,\mathrm{d}}\tau,\quad s\in (0,T]. \label{g10.7} \end{equation} Substituting this into \eqref{g10.5} and changing the order of integration we arrive at \eqref{g10.6}. \end{proof} Now we come to our next essential result. \begin{theorem}\label{theo-g-3.7} Let $\varphi$ be determined by \eqref{g3.15}. Assume that $\alpha\leq k$ and \eqref{g3.9} is satisfied, or $\alpha>k$ and \eqref{g3.11} is satisfied with \[ \widetilde{W}(t)=V(t)^{-1} t^{\frac{\alpha-k}{n} } \lambda(t). \] Then the formulas \eqref{g3.16}-\eqref{g3.18}, \eqref{g3.19} hold with $\Psi_q$, given by \eqref{g3.5}, where \begin{equation} \label{g3.20} W(t)=V(t)^{-1} t^{\alpha/n} \lambda(t),\quad t\in (0,T]. \end{equation} \end{theorem} \begin{proof} Note first that only the case $\alpha=k$ remains to be considered, since the other cases are already covered by Theorem~\ref{theo-g-3.3}, in particular using condition {\bfseries (A)} if $\alpha<k$, and condition {\bfseries (B)} if $\alpha>k$. So we may assume in the following that $\alpha=k$.\\ \underline{\em Step 1}.\quad First we deal with the case $q=1$. For $\varphi$ given by \eqref{g3.15} we have \eqref{g10.4}.
Thus $\wt{\varrho}_0$ given by \eqref{g8.2} satisfies \[\wt{\varrho}_0(g)\simeq \sup_{\tau\in (0,T]} \left(V(\tau)^{-1} \tau^{k/n}\lambda(\tau)\left(\int_\tau^T g(s)\ensuremath{\,\mathrm{d}} s\right)\right),\quad g\in M^+(0,T), \] so that \[ \tau^{k/n} \lambda(\tau)\int_\tau^t g(s)\ensuremath{\,\mathrm{d}} s \leq c\wt{\varrho}_0(g) V(\tau),\quad \tau\in (0,t],\quad t\in (0,T]. \] In view of this we can continue the estimate \eqref{g10.6} by \[ \varrho_1(g)\leq c_3\sup_{t\in (0,T]} \left( V(t)^{-1} t^\delta\left(\int_0^t \tau^{-\delta-1}V(\tau)\ensuremath{\,\mathrm{d}}\tau\right)\right)\wt{\varrho}_0(g), \] for any $\delta\in (0,\min\{1,k/n\})$. Recall that $V(\tau)\tau^{-\ensuremath\varepsilon}$ is monotonically increasing by \eqref{g3.9}. Assume that $\delta<\ensuremath\varepsilon$. Then \[ \int_0^t \tau^{-\delta-1}V(\tau)\ensuremath{\,\mathrm{d}}\tau\leq V(t) t^{-\ensuremath\varepsilon} \int_0^t \tau^{\ensuremath\varepsilon-\delta-1}\ensuremath{\,\mathrm{d}}\tau = V(t) t^{-\delta}(\ensuremath\varepsilon-\delta)^{-1}. \] Consequently, \[ \varrho_1(g)\leq c_4 \wt{\varrho}_0(g),\quad g\in M^+(0,T), \] with $c_4=c_4(\delta,\ensuremath\varepsilon,k,n)>0$. Together with \eqref{g8.1} this shows that \eqref{g9.19} is valid for the norm $\varrho_0$ given by \eqref{g7.11} which is associated to the optimal one. In other words, we can apply the results given in \eqref{g3.16}-\eqref{g3.18} as explained in the proof of Theorem~\ref{theo-g-3.3}.\\ \underline{\em Step 2}.\quad Assume now $1<q<\infty$. First we show that assertion \eqref{g9.43} is valid with $\wt{\varrho}_0(g)$ from \eqref{g9.45}, where $\varphi$ is given by \eqref{g3.15} with $\alpha=k.$ Applying \eqref{g10.4} yields \begin{equation} \label{g10.8} \wt{\varrho}_0(g)\simeq \left(\int_0^T \left( t^{k/n}\lambda(t)\left(\int_t^T g(s)\ensuremath{\,\mathrm{d}} s\right)\right)^{q'}w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}.
\end{equation} Further, \eqref{g10.3}, \eqref{g9.3}, and \eqref{g9.4} yield \[ {\varrho}_1(g)\leq c_{1}\left(\int_0^T \left( \int_0^t s^{k/n-\delta}\lambda(s)g(s)\ensuremath{\,\mathrm{d}} s\right)^{q'}t^{\delta q'}w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \] \[ {\varrho}_2(g)\leq c_{2}\left(\int_0^T s^{k/n-\delta}\lambda(s)g(s)\ensuremath{\,\mathrm{d}} s\right)T^{\delta } \left( \int_T^{\infty} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}. \] Now, substituting \eqref{g10.7} into these formulas and changing the order of integration, we arrive at \begin{equation} \label{g10.9} {\varrho}_1(g)\leq \tilde{c}_{1}\left(\int_0^T \left( \int_0^t \tau^{k/n-\delta-1}\lambda(\tau)\left(\int_{\tau}^T g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}} \tau\right)^{q'}t^{\delta q'}w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \end{equation} \begin{equation} \label{g10.10} {\varrho}_2(g)\leq \tilde{c}_{2}\left(\int_0^T \tau^{k/n-\delta-1}\lambda(\tau)\left(\int_{\tau}^T g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}} \tau\right)T^{\delta } \left( \int_T^{\infty} w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'}, \end{equation} in view of the obvious estimate $\int_{\tau}^t g(s)\ensuremath{\,\mathrm{d}} s\leq \int_{\tau}^T g(s)\ensuremath{\,\mathrm{d}} s, \,\tau< t\leq T.
$ Now we apply Hardy's inequality (see \cite[Thm.~2, Ch.~1]{mazya}), in appropriately adapted notation, to estimate the right-hand side of \eqref{g10.9}, and obtain that \begin{align} \left(\int_0^T \left( \int_0^t \tau^{k/n-\delta-1}\lambda(\tau)\left(\int_{\tau}^T g(s)\ensuremath{\,\mathrm{d}} s\right)\ensuremath{\,\mathrm{d}} \tau\right)^{q'}t^{\delta q'}w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'} \nonumber\\ \leq c_{3} \left(\int_0^T \left( t^{k/n}\lambda(t)\int_{t}^T g(s)\ensuremath{\,\mathrm{d}} s\right)^{q'}w(t)\ensuremath{\,\mathrm{d}} t\right)^{1/q'} \label{g10.11} \end{align} holds if, and only if, \[ B_{\delta}:=\sup\limits_{t \in (0, T)} \left(\, \left(\int_t^T \tau^{\delta q'}w(\tau)\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'} \left(\int_0^t \tau^{-(\delta+1)q}w(\tau)^{-q/q'}\ensuremath{\,\mathrm{d}} \tau\right)^{1/q} \right) <\infty. \] This condition is satisfied in view of Corollary \ref{cor-g-9.3}: if $0\leq\delta < \varepsilon /q,$ then \begin{equation} \label{g10.12} B_{\delta}\leq \frac{\left( q/q'\right)^{1/q'}}{ \varepsilon -\delta q}. \end{equation} Moreover, for the best possible constant $c_3$ in \eqref{g10.11} we have \begin{equation} B_{\delta} \leq c_3\leq B_{\delta}q^{1/q}\left({q'}\right)^{1/q'}\leq \frac{q}{ \varepsilon -\delta q}. \label{g10.13} \end{equation} Estimates~\eqref{g10.9}, \eqref{g10.11}, and \eqref{g10.8} imply that \begin{equation} \varrho_{1}(g) \leq c_{4} \wt{\varrho}_0(g),\quad g \in M^{+}(0, T), \label{g10.14} \end{equation} where $c_{4}>0$ is independent of $g$. Similarly, by H\"older's inequality we get from \eqref{g10.10}, in view of \eqref{g10.8}, \[ {\varrho}_2(g)\leq {c}_{5}\wt{\varrho}_0(g)\left(\int_0^T \tau^{-(\delta+1)q}w(\tau)^{-q/q'}\ensuremath{\,\mathrm{d}} \tau\right)^{1/q}T^{\delta }\left( \int_T^{\infty} w(\tau)\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'}.
\] But \[ T^{\delta }\left( \int_T^{\infty} w(\tau)\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'}\leq \left( \int_T^{\infty} \tau^{\delta q'}w(\tau)\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'}, \] and applying \eqref{g9.18} with $t=T$, $T_0=\infty$, yields \begin{equation} \varrho_{2}(g) \leq c_{6} \wt{\varrho}_0(g),\quad g \in M^{+}(0, T), \label{g10.15} \end{equation} where $c_{6}>0$ is independent of $g$. The assertions \eqref{g9.1}, \eqref{g10.14}, and \eqref{g10.15} imply \eqref{g9.43} with $\wt{\varrho}_0(g)$ from \eqref{g9.45}. Recall that here $\varphi$ is given by \eqref{g3.15}, where $\alpha=k$, so that \eqref{g9.45} coincides with \eqref{g10.8}. It remains to describe the optimal norm $\|\cdot\|_{X_0}$ which is associated to the norm $\wt{\varrho}_0(g)$ in \eqref{g10.8}. This description is given by the formulas \eqref{g3.16}-\eqref{g3.18}, where we have to take $\varphi$ given by \eqref{g3.15} with $\alpha=k.$ Thus formula \eqref{g3.5} is valid, where for the given $\varphi$ the function $W$ from \eqref{g3.4} has the equivalent form \eqref{g3.20}. \end{proof} \section{Some explicit descriptions of the optimal Calder\'on space} \label{sect-4} Here we present a more detailed consideration in the case of classical Bessel potentials, see Example~\ref{exm-g-1.3}. Note that in Example~\ref{exm-g-4.2} below we extend some preceding results in \cite{GNO-7}. The results presented here were announced in our paper \cite{GoHa-3}. We start with a preparatory lemma which we shall need in the arguments below. \begin{lemma}\label{lemma-g-11.1} Let $0<T\leq\infty$, $\beta>0$, $\lambda$ be a positive slowly varying function on $(0,T)$, and \[ A(t)=\int_t^T \tau^{-\beta-1}\lambda(\tau)\ensuremath{\,\mathrm{d}}\tau,\quad t\in (0,T). \] Then there exists some $\ensuremath\varepsilon>0$ such that $t^\ensuremath\varepsilon A(t)$ is monotonically decreasing.
\end{lemma} \begin{proof} We have the equality \[ \left[ t^{\varepsilon}A(t)\right]'=t^{\varepsilon-1}\left[ \varepsilon \int_t^T \tau^{-\beta-1}\lambda(\tau)\ensuremath{\,\mathrm{d}} \tau -t^{-\beta}\lambda(t)\right]. \] Applying estimate \eqref{g5.10} yields \[ \left[ t^{\varepsilon}A(t)\right]'\leq t^{\varepsilon-1}\left[ t^{-\beta}\lambda(t)(\varepsilon c_{\beta}-1)\right]<0, \] if we choose $\varepsilon \in (0, c_{\beta}^{-1}).$ \end{proof} \begin{example}\label{exm-g-4.1} Let $0<\alpha<n$, $1\leq q<\infty$, $v=1$ in \eqref{g3.1}, so that $E=L_q(\ensuremath{\mathbb R}^{n})$ in Example~\ref{exm-g-1.3}. For $0<\alpha<n$, $q=1$, the space $H^G_E(\ensuremath{\mathbb R}^{n})$ is not embedded into $C(\ensuremath{\mathbb R}^{n})$. If $q>1$, the criterion for the embedding into $C(\ensuremath{\mathbb R}^{n})$ reads as \[ \eqref{g1.6} \quad\text{is true}\iff \alpha>\frac{n}{q}. \] If $\frac{n}{q}<\alpha<\min\left(n,k+\frac{n}{q}\right)$, then the optimal Calder\'on space for the embedding \eqref{g2.7} has the norm \begin{equation} \label{g4.1} \|u\|_{\Lambda^k(C;X_0)} = \|u\|_C + \left(\int_0^T \left(\frac{\omega_k(u;t^{1/n})}{t^{\alpha/n-1/q}}\right)^q \frac{\ensuremath{\,\mathrm{d}} t}{t}\right)^{1/q}. \end{equation} In particular, this means that $\Lambda^k(C;X_0)$ coincides with the classical Besov space $B^{\alpha-n/q}_{\infty,q}(\ensuremath{\mathbb R}^{n})$, cf. \cite{nik,T-F1} for further details on Besov spaces. \end{example} \begin{proof} \underline{\em Step 1}.\quad In the case considered here we have the equivalence \eqref{g1.20}. Without loss of generality we can assume that \begin{equation} \label{g11.1} 0<\alpha<n;\quad \varphi(t)=t^{\alpha/n-1},\quad t \in (0,T].
\end{equation} For the basic RIS $E(\ensuremath{\mathbb R}^{n})=L_q(\ensuremath{\mathbb R}^{n}),\, 1\leq q< \infty, $ the criterion of the embedding \eqref{g1.6} has the form \eqref{g1.5}, so that \begin{equation} \label{g11.2} \eqref{g1.6} \Longleftrightarrow \tau^{\alpha/n-1} \in L_{q'}(0, T) \Longleftrightarrow \alpha>n/q. \end{equation} This means that for $q=1$ the embedding \eqref{g1.6} is impossible in this example. \\ \underline{\em Step 2}.\quad Now let $1<q<\infty$, $n/q <\alpha <n$, and $\alpha \leq k$. We have here $v(t)=1,\, V(t)=t;$ condition \eqref{g3.9} is satisfied. So we may apply the corresponding results of Theorem \ref{theo-g-3.7}. According to \eqref{g3.4}, and \eqref{g3.5} with $\varphi$ given by \eqref{g11.1}, and $n/q <\alpha <n,$ we get \begin{equation} \label{g11.3} W(t)=\frac{n}{\alpha}t^{\alpha/n-1},\quad \Psi_q(t)=ct^{\alpha/n-1/q},\quad t \in (0, T], \end{equation} where $c=c(\alpha,n,q)>0.$ Therefore, formula \eqref{g3.19} leads to \eqref{g4.1}.\\ \underline{\em Step 3}.\quad Now we consider the case $1<q<\infty;\, \max (n/q, k) <\alpha <\min ( n, k+n/q).$ Condition \eqref{g3.10} is satisfied, so we may apply Theorem \ref{theo-g-3.3} (case {\bf (B)}). Here, according to \eqref{g3.12}, and \eqref{g3.13} with $\varphi$ given by \eqref{g11.1} we get \begin{align} \widetilde{W}(t) & =t^{\frac{\alpha-k}{n}-1}, \label{g11.4}\\ U_{q}(t) & =\left( \int_t^T \tau^{(\frac{\alpha-k}{n}-1)q'} \ensuremath{\,\mathrm{d}} \tau\right)^{1/q'}. \label{g11.5} \end{align} We apply Lemma \ref{lemma-g-11.1} to the function $A(t)=U_q(t)^{q'}$ which corresponds to the special case $\lambda(t)=1;\quad \beta=\left[ 1-\frac{\alpha-k}{n}\right]q'-1.$ For $\alpha<k+n/q$ we have $\beta>0,$ so that, in view of Lemma \ref{lemma-g-11.1}, there exists some $\ensuremath\varepsilon>0$ with the property that \[ t^{\varepsilon q'}A(t)\quad\text{decreases monotonically} \iff t^{\varepsilon}U_q(t)\quad\text{decreases monotonically}.
\] This shows that \eqref{g3.11} is valid. Therefore, we may apply Theorem \ref{theo-g-3.3} (case {\bf (B)}) here, as well as Theorem \ref{theo-g-3.4}. We arrive at the descriptions \eqref{g3.18}, \eqref{g3.19} for $1<q<\infty,$ and $\Psi_q$ given by \eqref{g11.3}. This leads to \eqref{g4.1}. \end{proof} \begin{example}\label{exm-g-4.2} Let $0<\alpha<n$, $1\leq q<\infty$, $1<p<\infty$, $E=\Lambda_q(v)$, recall \eqref{g3.1}, where $v$ is given by \begin{equation} \label{g4.2} v(t)=t^{q/p-1} b^q(t),\quad t\in (0,T), \end{equation} with $b$ a positive slowly varying continuous function on $(0,T)$. In other words, $E(\ensuremath{\mathbb R}^{n})$ is a so-called Lorentz-Karamata space, cf. \cite{GNO-7}. Now we explicate Example~\ref{exm-g-1.3}. Note that in this case \begin{equation}\label{g4.2a} \Psi_q(t)= \begin{cases} \sup\limits_{\tau\in (0,t]} \tau^{\frac{\alpha}{n}-\frac1p} b(\tau)^{-1}, & q=1, \\ \left(\int_0^t \tau^{q'\left(\frac{\alpha}{n}-\frac1p\right)} b(\tau)^{-q'} \frac{\ensuremath{\,\mathrm{d}}\tau}{\tau}\right)^{1/q'}, & q>1,\end{cases} \end{equation} for $t\in (0,T)$. \bli \item[{\upshape\bfseries (i)}] If $0<\alpha<\frac{n}{p}$, then $H^G_E(\ensuremath{\mathbb R}^{n})$ is not embedded into $C(\ensuremath{\mathbb R}^{n})$. \item[{\upshape\bfseries (ii)}] If $\alpha=\frac{n}{p}$, then we have to distinguish further between $q=1$ and $q>1$: in case of $q=1$, we require also $\Psi_1(0+)=0$, and \[ \eqref{g1.6} \quad\text{holds}\iff \Psi_1(t)=\sup_{\tau\in (0,t]} b(\tau)^{-1}<\infty, \quad t\in (0,T]. \] In case of $q>1$, we arrive at \[ \eqref{g1.6}\quad\text{holds}\iff \Psi_q(t)=\left(\int_0^t b(\tau)^{-q'} \frac{\ensuremath{\,\mathrm{d}}\tau}{\tau}\right)^{1/q'}<\infty, \quad t\in (0,T].
\] In that case the optimal Calder\'on space for the embedding \eqref{g2.7} has the norm \eqref{g3.16}, where in case of $q>1$ we have \[ \frac{\ensuremath{\,\mathrm{d}} \Psi_q(t)}{\Psi_q(t)} \simeq \frac{b(t)^{-q'}}{\int_0^t b(\tau)^{-q'} \frac{\ensuremath{\,\mathrm{d}} \tau}{\tau}} \ \frac{\ensuremath{\,\mathrm{d}} t}{t}. \] \item[{\upshape\bfseries (iii)}] In case of $\frac{n}{p}<\alpha<\min\left(n,k+\frac{n}{p}\right)$, the optimal Calder\'on space for the embedding \eqref{g2.7} has the norm \eqref{g3.16}, where we conclude from \eqref{g4.2a} that \[ \Psi_q(t) \simeq t^{\frac{\alpha}{n}-\frac1p} b(t)^{-1},\quad 1\leq q<\infty, \] hence $\frac{\ensuremath{\,\mathrm{d}}\Psi_q(t)}{\Psi_q(t)} \simeq \frac{\ensuremath{\,\mathrm{d}} t}{t}$. \end{list} \end{example} \begin{proof} \underline{\em Step 1}.\quad Here we consider the case of $\varphi$ given by \eqref{g11.1}. The basic space is the Lorentz-Karamata space $E=\Lambda_{q}(v)$ with $1\leq q <\infty$, $1<p<\infty$, and $v$ given by \eqref{g4.2}. Applying \eqref{g5.9} in appropriately adapted notation leads to \begin{equation} \label{g11.6} V(t)=\int_0^t v(\tau) \ensuremath{\,\mathrm{d}} \tau \simeq t^{q/p}b(t)^q, \quad t \in (0, T), \end{equation} so that, in view of \eqref{g3.4}, \[ W(t) \simeq t^{\frac{\alpha}{n}-\frac{q}{p}}b(t)^{-q}, \quad t \in (0, T). \] Thus, for $q=1$ we have by \eqref{g3.5} that \begin{equation} \label{g11.7} \Psi_1(t) \simeq \begin{cases} t^{\frac{\alpha}{n}-\frac1p} b(t)^{-1}, & \alpha >n/p,\\ B_{1}(t), & \alpha =n/p,\\ \infty, & \alpha< n/p, \end{cases} \end{equation} where \begin{equation} \label{g11.8} B_{1}(t)=\sup \left\lbrace b(\tau)^{-1}: \tau \in (0, t]\right\rbrace.
\end{equation} For $1<q<\infty$ we get by \eqref{g3.5} \begin{equation} \label{g11.9} \Psi_q(t) \simeq \left( \int_0^t \tau^{q'(\frac{\alpha}{n}-\frac1p)} b(\tau)^{-q'}\tau^{-1}\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'}, \end{equation} so that, in view of \eqref{g5.9}, for $t \in (0, T)$, \begin{equation} \label{g11.10} \Psi_q(t) \simeq \begin{cases} t^{\frac{\alpha}{n}-\frac1p} b(t)^{-1},& \alpha>n/p,\\ B_{q}(t), & \alpha =n/p,\\ \infty, & \alpha< n/p, \end{cases} \end{equation} where \begin{equation} \label{g11.11} B_{q}(t)=\left\lbrace \int_0^t b(\tau)^{-q'}\tau^{-1}\ensuremath{\,\mathrm{d}} \tau \right\rbrace^{1/q'}. \end{equation} According to Lemma \ref{lemma-g-3.1}, \[ \eqref{g1.6}\Longleftrightarrow \eqref{g1.5}\Longleftrightarrow \Psi_q(T)<\infty. \] We see that \eqref{g1.6} is impossible for $\alpha<n/p;$ \eqref{g1.6} is valid for $\alpha>n/p;$ and the validity of \eqref{g1.6} for $\alpha=n/p$ depends on $B_{q}:\, \eqref{g1.6} \Longleftrightarrow B_{q}(T)<\infty,$ where $B_{q}$ is given by \eqref{g11.8}, \eqref{g11.11}.\\ \underline{\em Step 2}.\quad Assume that the conditions for the embedding \eqref{g1.6} are satisfied; in particular, $\alpha\geq n/p.$ If $\alpha\leq k$ we may apply Theorem \ref{theo-g-3.7} with $\lambda(t)=1.$ Condition \eqref{g3.9} is valid for the function $V$ given by \eqref{g11.6} for any $\varepsilon \in (0, q/p).$ Indeed, in this case $V(t)t^{-\varepsilon}\simeq t^{q/p-\varepsilon}b(t)^q$, and the latter function increases monotonically, because $b(t)^q$ is slowly varying. Then, by Theorem \ref{theo-g-3.7} we get the descriptions \eqref{g3.16}-\eqref{g3.18} with $\Psi_q$ from \eqref{g11.7} in case of $q=1$, or from \eqref{g11.10} in case of $1<q<\infty$.\\ \underline{\em Step 3}.\quad Finally, let $\alpha\geq n/p$, $k< \alpha< \min ( n, k+n/p)$. We need to verify condition \eqref{g3.11}.
According to \eqref{g3.12} with $\varphi$ given by \eqref{g11.1}, and $V$ given by \eqref{g11.6}, we obtain \[ \widetilde{W}(t)=t^{\frac{\alpha-k}{n}-\frac{q}{p}}b(t)^{-q},\quad t \in (0,T). \] Here, $\frac{\alpha-k}{n}-\frac{1}{p}<0,$ so that for $q=1$ the function $ \widetilde{W}(t)$ decreases monotonically, and \begin{equation} \label{g11.12} U_1(t)=\widetilde{W}(t)=t^{\frac{\alpha-k}{n}-\frac{1}{p}}b(t)^{-1}, \end{equation} see \eqref{g3.13}. For $1<q<\infty$ we obtain by \eqref{g3.13} and \eqref{g4.2} that \begin{equation} \label{g11.13} U_q(t)=\left( \int_t^T \tau^{q'\left( \frac{\alpha-k}{n}-\frac{1}{p}\right)}b(\tau)^{-q'}\tau^{-1}\ensuremath{\,\mathrm{d}} \tau\right)^{1/q'}. \end{equation} We see that for $\alpha<k+n/p$ condition \eqref{g3.11} is valid. For $U_{1}$ given by \eqref{g11.12} this is obvious because $b(t)^{-1}$ is slowly varying. For $U_{q}$ given by \eqref{g11.13} we define \[ \beta=\left[\frac{1}{p}-\frac{\alpha-k}{n}\right] q'>0;\quad \lambda(t)=b(t)^{-q'};\quad A(t)=U_{q}(t)^{q'}. \] Then, by Lemma \ref{lemma-g-11.1} there exists some $\ensuremath\varepsilon>0$ such that \begin{equation} \label{g11.14} A(t)t^{\varepsilon q'}\quad\text{decreases monotonically} \iff U_{q}(t)t^{\varepsilon}\quad\text{decreases monotonically}. \end{equation} Consequently, we may apply Theorem \ref{theo-g-3.3} (case {\bf (B)}) and obtain the descriptions \eqref{g3.16}-\eqref{g3.18}, \eqref{g3.19} with $\Psi_q$ determined by \eqref{g11.7} for $q=1,$ or by \eqref{g11.10} for $1<q<\infty.$ \end{proof} \begin{remark} \label{rem-g-11.2} Note that for $1<q<\infty$ and the function $\Psi_q$ defined by the integral \eqref{g11.9} we have in view of estimate \eqref{g5.9} that \begin{equation} \label{g11.15} \frac{\ensuremath{\,\mathrm{d}} \Psi_q(t)}{\Psi_q(t)}\simeq \frac{\ensuremath{\,\mathrm{d}} \left( \Psi_q^{q'}(t)\right) }{\left( \Psi_q^{q'}(t)\right)}\simeq \frac{\ensuremath{\,\mathrm{d}} t}{t}.
\end{equation} For the function $\Psi_q=B_q$ given by the integral \eqref{g11.11} we get \begin{equation} \label{g11.16} \frac{\ensuremath{\,\mathrm{d}} \Psi_q(t)}{\Psi_q(t)}\simeq \frac{ b(t)^{-q'}t^{-1}\ensuremath{\,\mathrm{d}} t}{\int_0^t b(\tau)^{-q'}\tau^{-1}\ensuremath{\,\mathrm{d}} \tau}. \end{equation} These assertions are useful when we apply formulas \eqref{g3.16}, \eqref{g3.19}. \end{remark} \end{document}
\begin{document} \title{Can relativistic bit commitment lead to secure quantum oblivious transfer?} \author{Guang Ping He} \email{[email protected]} \affiliation{School of Physics and Engineering, Sun Yat-sen University, Guangzhou 510275, P. R. China} \begin{abstract} While unconditionally secure bit commitment (BC) is considered impossible within the quantum framework, it can be obtained under relativistic or experimental constraints. Here we study whether such BC can lead to secure quantum oblivious transfer (QOT). The answer is not completely negative. On one hand, we provide a detailed cheating strategy, showing that the \textquotedblleft honest-but-curious\textquotedblright\ attacks in some of the existing no-go proofs on QOT still apply even if secure BC is used, enabling the receiver to increase the average reliability of the decoded value of the transferred bit. On the other hand, it is also found that some other no-go proofs claiming that a dishonest receiver can always decode all transferred bits simultaneously with reliability $100\%$ become invalid in this scenario, because their models of cryptographic protocols are too ideal to cover such a BC-based QOT. \end{abstract} \pacs{03.67.Dd, 03.67.Ac, 42.50.Dv, 03.65.Ud, 03.30.+p} \maketitle \section{Introduction} Besides the well-known quantum key distribution (QKD) \cite{qi365}, bit commitment (BC) and oblivious transfer (OT) are also essential cryptographic primitives. It was shown that OT is the building block of multi-party secure computations and more complicated \textquotedblleft post-cold-war era\textquotedblright\ multi-party cryptographic protocols \cite{qi139}, and quantum OT (QOT) can be obtained based on quantum BC (QBC) \cite{qi75}. But it is widely accepted that unconditionally secure QBC is impossible within the quantum framework \cite{qi74}-\cite{qbc31}. This result, known as the Mayers-Lo-Chau (MLC) no-go theorem, is considered a serious drawback for quantum cryptography.
Obviously, it indicates that QOT built upon QBC cannot be secure either. This stimulated the emergence of many other no-go proofs on quantum two-party secure computations including QOT \cite{qi149,qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40,qbc61}. Nevertheless, Kent showed that BC can be unconditionally secure under relativistic settings \cite{qi44,qi582,qbc24,qbc51}. Recently, these relativistic BC protocols were implemented experimentally \cite{qbc82,qbc83}. Also, there were many proposals on \textquotedblleft practical\textquotedblright\ QBC, which are secure if the participants are limited by some experimental constraints, such as individual measurements or limited coherent measurements, misaligned reference frames, instability of particles, Gaussian operations with non-Gaussian states, etc. (see the introduction of Ref. \cite{HeJPA} for a detailed list). Therefore, it is natural to ask whether these BC protocols can lead to secure QOT. That is, suppose that any setting or constraint required to guarantee the security of the above BC protocols is satisfied, so that the participants can use them as a secure \textquotedblleft black box\textquotedblright\ without caring about the internal details of these protocols. Then we put no constraint (except those forbidden by fundamental physical laws) on the participants' behaviors in the other steps of the BC-based QOT. Note that in this scenario, it may not be straightforward to apply some common methods adopted for proving the impossibility of QBC and related no-go theorems, e.g., replacing any protocol with an ideal model which contains quantum communications and unitary transformations only. This is because now the secure \textquotedblleft black box\textquotedblright\ QBC stands in the middle of the QOT process, so that some cheating strategies may be interrupted at this stage. Thus it is important to reexamine whether the no-go proofs of QOT still apply, and if so, how the cheating is performed.
In this paper, the answer is twofold. On one hand, we will provide a cheating strategy in detail, showing that some of the no-go proofs \cite{qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40} remain valid even if QOT is based on secure BC. On the other hand, we found that some other no-go proofs \cite{qi149,qbc61} no longer work in such a QOT protocol, revealing that these proofs are not sufficiently general. \section{Definitions} BC is a cryptographic task between two remote parties Charlie and Diana (generally called Alice and Bob in the literature; to avoid confusion with the roles in OT, we name them differently here). It generally includes two phases. In the commit phase, Charlie decides the value of the bit $x$ ($x=0$ or $1$) which he wants to commit, and sends Diana a piece of evidence. Later, in the unveil phase, Charlie announces the value of $x$, and Diana checks it with the evidence. An unconditionally secure BC protocol needs to be both binding (i.e., Charlie cannot change the value of $x$ after the commit phase) and concealing (Diana cannot know $x$ before the unveil phase) without relying on any computational assumption. In the quantum case, Charlie's input can be more complicated. Besides the two classical values $0$ and $1$, he can commit a quantum superposition or mixture of the states corresponding to $x=0$\ and $x=1$, so that $x$ can be unveiled as either $0$ or $1$\ with the probabilities $p_{0}$\ and $p_{1}$, respectively. More specifically, suppose that a QBC protocol requires Charlie to send Diana a quantum system $\Psi $ as the evidence in the commit phase, whose state is expected to be $\left\vert \psi _{0}\right\rangle _{\Psi }$ (if $x=0$) or $\left\vert \psi _{1}\right\rangle _{\Psi }$ (if $ x=1 $).
Then Charlie can introduce another system $C$, and prepare $C\otimes \Psi $\ in the entangled state \begin{equation} \left\vert C\otimes \Psi \right\rangle =p_{0}^{1/2}\left\vert c_{0}\right\rangle _{C}\otimes \left\vert \psi _{0}\right\rangle _{\Psi }+p_{1}^{1/2}\left\vert c_{1}\right\rangle _{C}\otimes \left\vert \psi _{1}\right\rangle _{\Psi }, \label{eqcheating} \end{equation} where $\left\vert c_{0}\right\rangle _{C}$\ and $\left\vert c_{1}\right\rangle _{C}$\ are orthogonal. He sends $\Psi $ to Diana and keeps $C$ to himself. When it is time to unveil, Charlie measures $C$ in the basis $\{\left\vert c_{0}\right\rangle _{C},\left\vert c_{1}\right\rangle _{C}\}$, and unveils the committed $x$ as $0$ (or $1$) if the result is $ \left\vert c_{0}\right\rangle _{C}$ (or $\left\vert c_{1}\right\rangle _{C}$ ). With this strategy, his commitment is kept at the quantum level until the unveil phase, instead of taking a fixed classical value. According to Kent \cite{qi581}, this \textquotedblleft is not considered a security failure of a quantum BC protocol \textit{per se}\textquotedblright . As long as a BC protocol can force Charlie to commit to a probability distribution $(p_{0},p_{1})$\ which cannot be changed after the commit phase, and $(p_{0}+p_{1})-1$\ can be made arbitrarily close to $0$ by increasing some security parameters of the protocol, then it is still considered unconditionally secure. On the other hand, if a protocol can further force Charlie to commit to a particular classical $x$, i.e., besides $p_{0}+p_{1}\rightarrow 1$, both $p_{0}$ and $p_{1}$ can only take the values $0$ or $1$ instead of any value in between, then it is called a bit commitment with a certificate of classicality (BCCC). All the above-mentioned BC protocols \cite{qi44,qi582,qbc24,qbc51,HeJPA} are not BCCC, and unconditionally secure BCCC seems impossible \cite{qi581}.
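As an illustration (ours, not part of the original analysis), the superposed commitment of Eq.~(\ref{eqcheating}) can be simulated numerically. The two-dimensional evidence states $\left\vert \psi _{0}\right\rangle$, $\left\vert \psi _{1}\right\rangle$ below are arbitrary placeholders; the point is only that projecting the ancilla $C$ onto $\left\vert c_{0}\right\rangle _{C}$ reproduces the chosen unveiling probability $p_{0}$:

```python
import numpy as np

# Placeholder evidence states; |psi_0> and |psi_1> need not be orthogonal
# (here |psi_1> = |+> purely for illustration).
psi0 = np.array([1.0, 0.0])
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)

p0, p1 = 0.3, 0.7  # Charlie's chosen unveiling distribution

c0 = np.array([1.0, 0.0])  # ancilla basis states |c_0>, |c_1>
c1 = np.array([0.0, 1.0])

# |C (x) Psi> = sqrt(p0) |c_0>|psi_0> + sqrt(p1) |c_1>|psi_1>
state = np.sqrt(p0) * np.kron(c0, psi0) + np.sqrt(p1) * np.kron(c1, psi1)

# Probability of unveiling x = 0: project the ancilla onto |c_0>
P0 = np.kron(np.outer(c0, c0), np.eye(2))
prob_unveil_0 = np.vdot(state, P0 @ state).real
print(prob_unveil_0)
```

Measuring $C$ thus yields $x=0$ with probability exactly $p_{0}$ (and the evidence collapses to $\left\vert \psi _{0}\right\rangle$), which is the sense in which the commitment stays quantum until the unveil phase.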
Therefore, in the following, when speaking of secure BC we refer to the non-BCCC ones only, except where noted.

OT is also a two-party cryptographic task. There are two major types of OT in the literature. Using Cr\'{e}peau's description \cite{qi140}, they are defined as follows.

\textit{Definition A: All-or-nothing OT (AoN OT)}

(A-i) Alice knows one bit $b$.

(A-ii) Bob gets bit $b$ from Alice with the probability $1/2$.

(A-iii) Bob knows whether he got $b$ or not.

(A-iv) Alice does not know whether Bob got $b$ or not.

\textit{Definition B: One-out-of-two OT (1-2 OT)}

(B-i) Alice knows two bits $b_{0}$ and $b_{1}$.

(B-ii) Bob gets bit $b_{j}$ and not $b_{\bar{j}}$ with $Pr(j=0)=Pr(j=1)=1/2$.

(B-iii) Bob knows which of $b_{0}$ or $b_{1}$ he got.

(B-iv) Alice does not know which $b_{j}$ Bob got.

We will study BC-based AoN OT first, and come back to 1-2 OT later.

\section{Insecurity}

According to Yao \cite{qi75}, AoN QOT can be built upon BC as follows.

\textit{The BC-based AoN QOT protocol:}

(I) Let $\left\vert 0,0\right\rangle $ and $\left\vert 0,1\right\rangle $ be two orthogonal states of a qubit, and define $\left\vert 1,0\right\rangle \equiv (\left\vert 0,0\right\rangle +\left\vert 0,1\right\rangle )/\sqrt{2}$, $\left\vert 1,1\right\rangle \equiv (\left\vert 0,0\right\rangle -\left\vert 0,1\right\rangle )/\sqrt{2}$. That is, the state of a qubit is denoted as $\left\vert a_{i},g_{i}\right\rangle $, where $a_{i}$ represents the basis and $g_{i}$ distinguishes the two states in the same basis. For $i=1,...,n$, Alice randomly picks $a_{i},g_{i}\in \{0,1\}$ and sends Bob a qubit $\phi _{i}$ in the state $\left\vert a_{i},g_{i}\right\rangle $.

(II) For $i=1,...,n$, Bob randomly picks a basis $b_{i}\in \{0,1\}$ to measure $\phi_{i}$ and records the result as $\left\vert b_{i},h_{i}\right\rangle $. Then he commits $(b_{i},h_{i})$ to Alice using the BC protocol.

(III) Alice randomly picks a subset $R\subseteq \{1,...,n\}$ and tests Bob's commitment at positions in $R$.
If any $i\in R$ reveals $a_{i}=b_{i}$ and $g_{i}\neq h_{i}$, then Alice stops the protocol; otherwise, the test result is \textit{accepted}.

(IV) Alice announces the bases $a_{i}$ ($i=1,...,n$). Let $T_{0}$ be the set of all $1\leq i\leq n$ with $a_{i}=b_{i}$, and $T_{1}$ be the set of all $1\leq i\leq n$ with $a_{i}\neq b_{i}$. Bob chooses $I_{0}\subseteq T_{0}-R$, $I_{1}\subseteq T_{1}-R$ with $\left\vert I_{0}\right\vert =\left\vert I_{1}\right\vert =0.24n$, and sets $\{J_{0},J_{1}\}=\{I_{0},I_{1}\}$ or $\{J_{0},J_{1}\}=\{I_{1},I_{0}\}$ at random, then sends $\{J_{0},J_{1}\}$ to Alice.

(V) Alice picks a random $s\in \{0,1\}$, and sends $s$, $\beta _{s}=b\bigoplus\limits_{i\in J_{s}}g_{i}$ to Bob. Bob computes $b=\beta _{s}\bigoplus\limits_{i\in J_{s}}h_{i}$ if $J_{s}=I_{0}$; otherwise he does nothing.

Now suppose that the BC protocol used in this QOT is secure. That is, whether we use relativistic BC \cite{qi44,qi582,qbc24,qbc51} or the \textquotedblleft practical\textquotedblright\ QBC protocols listed in the introduction of Ref. \cite{HeJPA}, we assume that all the security requirements (e.g., relativistic settings or experimental limitations) are already met, so that Bob does not have unlimited computational power to cheat within the BC stage. In this case, the validity of the no-go proofs of QOT \cite{qi149,qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40,qbc61} cannot be taken for granted, because all these proofs were derived without imposing any limitation on the computational power of the cheater. Intriguingly, the conclusions of some of the no-go proofs \cite{qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40} remain valid: unconditionally secure QOT is still impossible in this case. The key reason is that secure BC, not being a BCCC, cannot prevent the participant from keeping the commitment at the quantum level instead of taking a fixed classical value.
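For orientation, an honest run of steps (I)-(V) can be simulated classically, treating each measurement as sampling: a matching basis returns $g_{i}$, a mismatched one a uniform bit. This is only a toy sketch; the BC subprotocol and the test step (III) are omitted, and the parameters $n$, $m$ are illustrative stand-ins for the protocol's $0.24n$ set sizes.

```python
import random

def aon_qot_round(b, n=40, m=4):
    """One honest round of the BC-based AoN QOT, with measurements
    modeled classically (matching basis returns g_i, mismatched basis
    a uniform bit). The BC commitment and test subset R are omitted."""
    a = [random.randint(0, 1) for _ in range(n)]
    g = [random.randint(0, 1) for _ in range(n)]
    bb = [random.randint(0, 1) for _ in range(n)]          # Bob's bases
    h = [g[i] if bb[i] == a[i] else random.randint(0, 1) for i in range(n)]
    T0 = [i for i in range(n) if a[i] == bb[i]]
    T1 = [i for i in range(n) if a[i] != bb[i]]
    if len(T0) < m or len(T1) < m:
        return None                                        # abort this round
    I0, I1 = T0[:m], T1[:m]
    J = [I0, I1] if random.random() < 0.5 else [I1, I0]
    s = random.randint(0, 1)                               # Alice's choice
    beta_s = b
    for i in J[s]:
        beta_s ^= g[i]                                     # beta_s = b xor g's
    if J[s] == I0:                                         # Bob can decode
        dec = beta_s
        for i in J[s]:
            dec ^= h[i]                                    # h_i = g_i on I0
        return dec
    return None                                            # Bob got nothing

random.seed(1)
results = [aon_qot_round(1) for _ in range(2000)]
got = [r for r in results if r is not None]
print(len(got) / len(results))   # close to 1/2, property (A-ii)
print(all(r == 1 for r in got))  # True: when Bob decodes, he decodes correctly
```

Since every $i\in I_{0}$ satisfies $b_{i}=a_{i}$ and hence $h_{i}=g_{i}$, the XORs cancel and the decoded bit always equals $b$ whenever $J_{s}=I_{0}$.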
Kent \cite{qi44} briefly mentioned that this would allow more general coherent quantum attacks on schemes of which BC is a subprotocol, but no details of the cheating strategy were given. Here we will elaborate how Bob can make use of this feature to break the BC-based QOT protocol.

For each $\phi _{i}$ ($i=1,...,n$), a dishonest Bob does not pick a classical $b_{i}$ and measure $\phi _{i}$ in step (II). Instead, he introduces two ancillary qubit systems $B_{i}$ and $H_{i}$ as the registers for the bits $b_{i}$ and $h_{i}$, and prepares their initial states as $\left\vert B_{i}\right\rangle =(\left\vert 0\right\rangle _{B}+\left\vert 1\right\rangle _{B})/\sqrt{2}$ and $\left\vert H_{i}\right\rangle =\left\vert 0\right\rangle _{H}$, respectively. Here $\left\vert 0\right\rangle $ and $\left\vert 1\right\rangle $ are orthogonal. Then he applies the unitary transformation
\begin{eqnarray}
U_{1} &\equiv &\left\vert 0\right\rangle _{B}\left\langle 0\right\vert \otimes \left\vert 0,0\right\rangle _{\phi }\left\langle 0,0\right\vert \otimes I_{H}  \nonumber \\
&&+\left\vert 0\right\rangle _{B}\left\langle 0\right\vert \otimes \left\vert 0,1\right\rangle _{\phi }\left\langle 0,1\right\vert \otimes \sigma _{H}^{(x)}  \nonumber \\
&&+\left\vert 1\right\rangle _{B}\left\langle 1\right\vert \otimes \left\vert 1,0\right\rangle _{\phi }\left\langle 1,0\right\vert \otimes I_{H}  \nonumber \\
&&+\left\vert 1\right\rangle _{B}\left\langle 1\right\vert \otimes \left\vert 1,1\right\rangle _{\phi }\left\langle 1,1\right\vert \otimes \sigma _{H}^{(x)}
\end{eqnarray}
on the system $B_{i}\otimes \phi _{i}\otimes H_{i}$. Here $I_{H}$ and $\sigma _{H}^{(x)}$ are the identity operator and Pauli matrix of system $H_{i}$ that satisfy $I_{H}\left\vert 0\right\rangle _{H}=\left\vert 0\right\rangle _{H}$ and $\sigma _{H}^{(x)}\left\vert 0\right\rangle _{H}=\left\vert 1\right\rangle _{H}$, respectively.
The effect of $U_{1}$ is like running a quantum computer program: if $\left\vert B_{i}\right\rangle =\left\vert 0\right\rangle _{B}$ ($\left\vert B_{i}\right\rangle =\left\vert 1\right\rangle _{B}$), then measure qubit $\phi _{i}$ in the basis $b_{i}=0$ ($b_{i}=1$) and store the result $h_{i}$ in system $H_{i}$. It differs from a classical program with the same function in that no destructive measurement is actually performed, since $U_{1}$ is not a projective operator. Consequently, the bits $b_{i}$ and $h_{i}$ are kept at the quantum level instead of being collapsed to classical values.

Bob then commits $(b_{i},h_{i})$ to Alice at the quantum level. This can always be done in a BC protocol which does not satisfy the definition of BCCC. For example, to commit $b_{i}$, Bob further introduces two ancillary systems $E$ and $\Psi $ and prepares the initial state as
\begin{equation}
\left\vert E\otimes \Psi \right\rangle _{0}=\left\vert e_{0}\right\rangle _{E}\otimes \left\vert \psi _{0}\right\rangle _{\Psi }.  \label{BC1}
\end{equation}
Let $U_{E\otimes \Psi }$ be a unitary transformation on $E\otimes \Psi $ satisfying $U_{E\otimes \Psi }\left\vert e_{0}\right\rangle _{E}\otimes \left\vert \psi _{0}\right\rangle _{\Psi }=\left\vert e_{1}\right\rangle _{E}\otimes \left\vert \psi _{1}\right\rangle _{\Psi }$. Here $\left\vert \psi _{0}\right\rangle _{\Psi }$, $\left\vert \psi _{1}\right\rangle _{\Psi }$ have the same meanings as those in Eq. (\ref{eqcheating}), and $\left\vert e_{0}\right\rangle _{E}$, $\left\vert e_{1}\right\rangle _{E}$ are orthogonal. Bob applies the unitary transformation
\begin{equation}
U_{2}\equiv \left\vert 0\right\rangle _{B}\left\langle 0\right\vert \otimes I_{E\otimes \Psi }+\left\vert 1\right\rangle _{B}\left\langle 1\right\vert \otimes U_{E\otimes \Psi }  \label{BC2}
\end{equation}
on system $B_{i}\otimes E\otimes \Psi $, where $I_{E\otimes \Psi }$ is the identity operator of system $E\otimes \Psi $.
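The claimed properties of $U_{1}$ (unitary but not a projector, so the "measurement" stays coherent) can be verified directly. The sketch below builds $U_{1}$ as an $8\times 8$ matrix on $B_{i}\otimes \phi _{i}\otimes H_{i}$ in the register ordering $B$, $\phi $, $H$; this ordering is an implementation choice, not fixed by the text:

```python
import numpy as np

# States |a,g> of qubit phi: a=0 is the computational basis,
# a=1 the Hadamard basis, as defined in step (I).
k00 = np.array([1.0, 0.0]);  k01 = np.array([0.0, 1.0])
k10 = (k00 + k01) / np.sqrt(2);  k11 = (k00 - k01) / np.sqrt(2)
phi = {(0, 0): k00, (0, 1): k01, (1, 0): k10, (1, 1): k11}

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])          # sigma_x on H

# U1 = sum_{b,h} |b><b|_B (x) |b,h><b,h|_phi (x) X^h_H,
# matching the four terms of the eqnarray above.
U1 = sum(np.kron(np.kron(np.outer(I2[b], I2[b]),
                         np.outer(phi[(b, h)], phi[(b, h)])),
                 np.linalg.matrix_power(X, h))
         for b in (0, 1) for h in (0, 1))

print(np.allclose(U1 @ U1.conj().T, np.eye(8)))  # True: U1 is unitary
print(np.allclose(U1 @ U1, U1))                  # False: not a projector

# Coherent "measurement": B in (|0>+|1>)/sqrt(2), phi in |1,0>, H in |0>.
inp = np.kron(np.kron((I2[0] + I2[1]) / np.sqrt(2), phi[(1, 0)]), I2[0])
out = U1 @ inp
# In the b=1 branch phi's basis matches, so h stays 0; in the b=0 branch
# phi spreads over h=0 and h=1. Everything remains in superposition.
print(np.round(out, 3))
```

The output state is entangled across $B_{i}$, $\phi _{i}$, $H_{i}$, which is exactly the sense in which $b_{i}$ and $h_{i}$ are "kept at the quantum level".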
As a result, the final state of $B_{i}\otimes \phi _{i}\otimes H_{i}\otimes E\otimes \Psi $ will be very similar to Eq. (\ref{eqcheating}) if we view $B_{i}\otimes \phi _{i}\otimes H_{i}\otimes E$ as system $C$. Then Bob can follow the process after Eq. (\ref{eqcheating}) (note that now Bob plays the role of Charlie) to complete the commitment of $b_{i}$ without collapsing it to a classical value. He can do the same for $h_{i}$.

Back in step (III) of the QOT protocol, whenever $(b_{i},h_{i})$ ($i\in R$) are picked to test the commitment, Bob simply unveils them honestly. Since these $(b_{i},h_{i})$ will no longer be useful in the remaining steps of the protocol, this does not hurt Bob's cheating. Note that the remaining $(b_{i},h_{i})$ ($i\notin R$) are still kept at the quantum level.

After Alice has announced all bases $a_{i}$ ($i=1,...,n$) in step (IV), Bob introduces a single global control qubit $S^{\prime }$ for all $i$, initialized in the state $\left\vert s^{\prime }\right\rangle =(\left\vert 0\right\rangle _{S^{\prime }}+\left\vert 1\right\rangle _{S^{\prime }})/\sqrt{2}$, and yet another ancillary system $\Gamma _{i}$ for each $i\in T_{0}\cup T_{1}-R$, initialized in the state $\left\vert \Gamma _{i}\right\rangle =\left\vert 0\right\rangle _{\Gamma }$.
Then he applies the unitary transformation
\begin{eqnarray}
U_{3} &\equiv &\left\vert 0\right\rangle _{S^{\prime }}\left\langle 0\right\vert \otimes \left\vert a_{i}\right\rangle _{B}\left\langle a_{i}\right\vert \otimes I_{\Gamma }  \nonumber \\
&&+\left\vert 0\right\rangle _{S^{\prime }}\left\langle 0\right\vert \otimes \left\vert \lnot a_{i}\right\rangle _{B}\left\langle \lnot a_{i}\right\vert \otimes \sigma _{\Gamma }^{(x)}  \nonumber \\
&&+\left\vert 1\right\rangle _{S^{\prime }}\left\langle 1\right\vert \otimes \left\vert a_{i}\right\rangle _{B}\left\langle a_{i}\right\vert \otimes \sigma _{\Gamma }^{(x)}  \nonumber \\
&&+\left\vert 1\right\rangle _{S^{\prime }}\left\langle 1\right\vert \otimes \left\vert \lnot a_{i}\right\rangle _{B}\left\langle \lnot a_{i}\right\vert \otimes I_{\Gamma }
\end{eqnarray}
on the enlarged system $S^{\prime }\otimes B_{i}\otimes \Gamma _{i}$. Here $I_{\Gamma }$ and $\sigma _{\Gamma }^{(x)}$ are the identity operator and Pauli matrix of system $\Gamma _{i}$ that satisfy $I_{\Gamma }\left\vert 0\right\rangle _{\Gamma }=\left\vert 0\right\rangle _{\Gamma }$ and $\sigma _{\Gamma }^{(x)}\left\vert 0\right\rangle _{\Gamma }=\left\vert 1\right\rangle _{\Gamma }$, respectively. The effect of $U_{3}$ is to compare $a_{i}$ with $b_{i}$ and store the result $(a_{i}\neq b_{i})\oplus s^{\prime }$ in $\Gamma _{i}$. Bob then measures all $\Gamma _{i}$ ($i\in T_{0}\cup T_{1}-R$) in the basis $\{\left\vert 0\right\rangle _{\Gamma },\left\vert 1\right\rangle _{\Gamma }\}$, takes $T_{0}$ ($T_{1}$) as the set of all $1\leq i\leq n$ with $\left\vert \Gamma _{i}\right\rangle =\left\vert 0\right\rangle _{\Gamma }$ ($\left\vert \Gamma _{i}\right\rangle =\left\vert 1\right\rangle _{\Gamma }$) instead of how they were defined in step (IV), and always sets $J_{0}\subseteq T_{0}-R$, $J_{1}\subseteq T_{1}-R$ to finish the remaining steps of the QOT protocol.
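Like $U_{1}$, the action of $U_{3}$ can be checked on computational-basis inputs; since $U_{3}$ is diagonal in the $S^{\prime }$ and $B_{i}$ registers, linearity extends the check to the superposed states Bob actually holds. A sketch, with the announced basis $a_{i}$ as a classical parameter and the same register-ordering convention as before:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def make_U3(a):
    # U3 = sum_{s',b} |s'><s'|_S' (x) |b><b|_B (x) X^((a != b) xor s')_Gamma,
    # matching the four terms in the eqnarray above for announced basis a.
    return sum(np.kron(np.kron(np.outer(I2[s], I2[s]),
                               np.outer(I2[b], I2[b])),
                       np.linalg.matrix_power(X, (a != b) ^ s))
               for s in (0, 1) for b in (0, 1))

for a in (0, 1):
    U3 = make_U3(a)
    assert np.allclose(U3 @ U3.conj().T, np.eye(8))      # unitary
    for s in (0, 1):
        for b in (0, 1):
            # Gamma starts in |0>; U3 should write (a != b) xor s' into it.
            inp = np.kron(np.kron(I2[s], I2[b]), I2[0])
            expect = np.kron(np.kron(I2[s], I2[b]), I2[(a != b) ^ s])
            assert np.allclose(U3 @ inp, expect)
print("U3 coherently stores (a_i != b_i) xor s' in Gamma_i")
```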
With this method, the relationship between $J_{0}$, $J_{1}$ and $I_{0}$, $I_{1}$ is kept at the quantum level. Since $I_{0}$ ($I_{1}$) denotes the set corresponding to $a_{i}=b_{i}$ ($a_{i}\neq b_{i}$), we can see that $U_{3}$ makes $J_{0}=I_{0}$, $J_{1}=I_{1}$ when $s^{\prime }=0$, while $J_{0}=I_{1}$, $J_{1}=I_{0}$ when $s^{\prime }=1$. As $S^{\prime }$ was initialized as $\left\vert s^{\prime }\right\rangle =(\left\vert 0\right\rangle _{S^{\prime }}+\left\vert 1\right\rangle _{S^{\prime }})/\sqrt{2}$, the actual result of step (IV) can be described by the entangled state
\begin{eqnarray}
&&\left\vert S^{\prime }\otimes (\bigotimes\limits_{i}B_{i}\otimes \phi _{i}\otimes H_{i}\otimes E_{i}^{\prime })\right\rangle   \nonumber \\
&\rightarrow &\left\vert \Phi _{b}\right\rangle =(\left\vert 0\right\rangle _{S^{\prime }}\otimes \left\vert J_{0}=I_{0}\vee J_{1}=I_{1}\right\rangle   \nonumber \\
&&+\left\vert 1\right\rangle _{S^{\prime }}\otimes \left\vert J_{0}=I_{1}\vee J_{1}=I_{0}\right\rangle )/\sqrt{2}.  \label{eqstepiv}
\end{eqnarray}
Here $E_{i}^{\prime }$ stands for all the ancillary systems Bob introduced in the process of committing $(b_{i},h_{i})$. $\left\vert J_{0}=I_{0}\vee J_{1}=I_{1}\right\rangle $ denotes the state of system $\bigotimes\limits_{i}B_{i}\otimes \phi _{i}\otimes H_{i}\otimes E_{i}^{\prime }$, in which the subsystems $B_{i}$ and $H_{i}$ contain the correct $b_{i}$ and $h_{i}$ corresponding to $J_{0}=I_{0}\vee J_{1}=I_{1}$. The meaning of $\left\vert J_{0}=I_{1}\vee J_{1}=I_{0}\right\rangle $ is similar. After Alice has announced $s$ and $\beta _{s}$ in step (V), the systems in Bob's possession can be viewed as
\begin{equation}
\left\vert \Phi _{b}\right\rangle =(\left\vert s\right\rangle _{S^{\prime }}\otimes \left\vert J_{s}=I_{0}\right\rangle +\left\vert \lnot s\right\rangle _{S^{\prime }}\otimes \left\vert fail\right\rangle )/\sqrt{2}.
\label{eqqot}
\end{equation}
It means that if Bob measures system $S^{\prime }$ in the basis $\{\left\vert 0\right\rangle _{S^{\prime }},\left\vert 1\right\rangle _{S^{\prime }}\}$ and the result $\left\vert s^{\prime }\right\rangle _{S^{\prime }}$ satisfies $s^{\prime }=s$, then he is able to measure the remaining systems and obtain all the correct $h_{i}$ to decode the secret bit $b$ unambiguously; else if the result satisfies $s^{\prime }\neq s$, then he knows that he has failed to decode $b$.

Now comes the trickiest part: since the value of $s^{\prime }$ was kept at the quantum level before system $S^{\prime }$ is measured, at this stage a dishonest Bob can choose not to measure $S^{\prime }$ in the basis $\{\left\vert 0\right\rangle _{S^{\prime }},\left\vert 1\right\rangle _{S^{\prime }}\}$. Instead, by denoting $\left\vert b\right\rangle \equiv \left\vert s\right\rangle _{S^{\prime }}\otimes \left\vert J_{s}=I_{0}\right\rangle $ and $\left\vert ?\right\rangle \equiv \left\vert \lnot s\right\rangle _{S^{\prime }}\otimes \left\vert fail\right\rangle $, Eq. (\ref{eqqot}) can be treated as $\left\vert \Phi _{b}\right\rangle =(\left\vert b\right\rangle +\left\vert ?\right\rangle )/\sqrt{2}$, where $\left\vert b=0\right\rangle \equiv (
\begin{array}{ccc}
1 & 0 & 0
\end{array}
)^{T}$, $\left\vert b=1\right\rangle \equiv (
\begin{array}{ccc}
0 & 1 & 0
\end{array}
)^{T}$, and $\left\vert ?\right\rangle \equiv (
\begin{array}{ccc}
0 & 0 & 1
\end{array}
)^{T}$ are mutually orthogonal. Then according to Eq. (33) of Ref. \cite{qi499}, Bob can distinguish them using the positive operator-valued measure (POVM) $(E_{0},I-E_{0})$, where
\begin{equation}
E_{0}=\frac{1}{6}\left[
\begin{array}{ccc}
2+\sqrt{3} & -1 & 1+\sqrt{3} \\
-1 & 2-\sqrt{3} & 1-\sqrt{3} \\
1+\sqrt{3} & 1-\sqrt{3} & 2
\end{array}
\right] .  \label{eqpovm}
\end{equation}
This allows Bob's decoded $b$ to match Alice's actual input with reliability $(1+\sqrt{3}/2)/2$ \cite{qi499}.
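Both claims about the POVM, namely that $(E_{0},I-E_{0})$ is a valid measurement and that it attains reliability $(1+\sqrt{3}/2)/2\approx 0.933$ on the pair of states $\left\vert \Phi _{b}\right\rangle =(\left\vert b\right\rangle +\left\vert ?\right\rangle )/\sqrt{2}$, can be verified numerically:

```python
import numpy as np

s3 = np.sqrt(3)
# E0 from Eq. (eqpovm); Bob guesses b=0 on outcome E0, b=1 on I - E0.
E0 = np.array([[2 + s3, -1,     1 + s3],
               [-1,     2 - s3, 1 - s3],
               [1 + s3, 1 - s3, 2     ]]) / 6
E1 = np.eye(3) - E0

# Both POVM elements must be positive semidefinite.
print(np.linalg.eigvalsh(E0).min() >= -1e-12)   # True
print(np.linalg.eigvalsh(E1).min() >= -1e-12)   # True

# |Phi_b> = (|b> + |?>)/sqrt(2) in the basis {|b=0>, |b=1>, |?>}.
phi0 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
phi1 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)

# Average probability that Bob's guess matches Alice's input b.
reliability = (phi0 @ E0 @ phi0 + phi1 @ E1 @ phi1) / 2
print(reliability)            # (1 + sqrt(3)/2)/2, about 0.933
print(reliability > 0.75)     # True
```

A short calculation confirms the number analytically: $\left\langle \Phi _{0}\right\vert E_{0}\left\vert \Phi _{0}\right\rangle =\left\langle \Phi _{1}\right\vert (I-E_{0})\left\vert \Phi _{1}\right\rangle =(2+\sqrt{3})/4=(1+\sqrt{3}/2)/2$.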
By contrast, when Bob executes the QOT protocol honestly, in $1/2$ of the cases he can decode $b$ with reliability $100\%$; in the other $1/2$ of the cases he fails to decode $b$ and can only guess the value randomly, which results in a reliability of $50\%$. Thus the average reliability in the honest case is $100\%/2+50\%/2=75\%<(1+\sqrt{3}/2)/2$.

Note that with the above dishonest strategy, in no case can Bob decode $b$ with reliability $100\%$. Therefore it is debatable whether this can be considered a successful cheating, as the strategy does not even accomplish what an honest Bob can do. That is why such a cheater is called an \textit{honest}-but-curious adversary \cite{qi677,qi725}, i.e., in some sense it may still be regarded as honest behavior instead of full cheating. Nevertheless, it provides Bob with the freedom to choose between accomplishing the original goal of QOT and achieving a higher average reliability, which could leave room for potential problems when building even more complicated cryptographic protocols upon such a BC-based QOT.

The above cheating strategy is basically the same as the one we proposed in section 5 of Ref. \cite{HeJPA}, which was applied to show why the specific QBC protocol in the same reference cannot lead to secure QOT. But here we can see that its power is not limited to the QBC protocol in Ref. \cite{HeJPA}. In particular, Bob's steps related to Eqs. (\ref{BC1}) and (\ref{BC2}) will always be valid as long as the BC protocol used in QOT is not a BCCC, as they do not involve the details of the BC process. Thus we reach a much more general result: any BC (except BCCC) cannot lead to unconditionally secure AoN QOT using Yao's method \cite{qi75}. This covers not only unconditionally secure QBC (regardless of whether it exists or not), but also relativistic BC (both classical \cite{qi44,qi582} and quantum ones \cite{qbc24,qbc51}) and practically secure QBC (e.g., those listed in the introduction of Ref.
\cite{HeJPA}), even if all the requirements for them to be secure are already met. In this sense, QOT is more difficult than QBC, in contrast to the classical relationship that OT and BC are equivalent.

This result shows that the original security proof of BC-based QOT \cite{qi75} is not general. The proof claimed that as long as the BC protocol is unconditionally secure, the QOT protocol built upon it will be unconditionally secure too. But now we can see that while it may still be valid for BCCC-based QOT, it fails to cover all unconditionally secure BC.

Now consider 1-2 OT. It can be built upon BC in much the same way as the above BC-based AoN QOT protocol, except that step (V) should be modified into:

(V') Alice sends $\beta _{0}=b_{0}\bigoplus\limits_{i\in J_{0}}g_{i}$ and $\beta _{1}=b_{1}\bigoplus\limits_{i\in J_{1}}g_{i}$ to Bob. Bob computes $b_{0}=\beta _{0}\bigoplus\limits_{i\in J_{0}}h_{i}$ if $J_{0}=I_{0}$, or $b_{1}=\beta _{1}\bigoplus\limits_{i\in J_{1}}h_{i}$ if $J_{1}=I_{0}$.

Bob can also apply the above cheating strategy, so that the result of step (IV) is still described by Eq. (\ref{eqstepiv}). After Alice has announced $\beta _{0}$ and $\beta _{1}$ in step (V'), if Bob wants to decode $b_{0}$, he can treat the right-hand side of Eq. (\ref{eqstepiv}) as
\begin{equation}
\left\vert \Phi _{b}\right\rangle =(\left\vert 0\right\rangle _{S^{\prime }}\otimes \left\vert J_{0}=I_{0}\right\rangle +\left\vert 1\right\rangle _{S^{\prime }}\otimes \left\vert fail\right\rangle )/\sqrt{2},  \label{eqqot1}
\end{equation}
else if he wants to decode $b_{1}$, he can treat it as
\begin{equation}
\left\vert \Phi _{b}\right\rangle =(\left\vert 0\right\rangle _{S^{\prime }}\otimes \left\vert fail\right\rangle +\left\vert 1\right\rangle _{S^{\prime }}\otimes \left\vert J_{1}=I_{0}\right\rangle )/\sqrt{2}.  \label{eqqot2}
\end{equation}
Comparing these two equations with Eq.
(\ref{eqqot}), we can see that they both have the form $\left\vert \Phi _{b}\right\rangle =(\left\vert b\right\rangle +\left\vert ?\right\rangle )/\sqrt{2}$. Thus Bob can still apply the POVM described by Eq. (\ref{eqpovm}) to decode the bit he wants. Consequently, he can decode one of $b_{0}$ and $b_{1}$, at his choice, with reliability $(1+\sqrt{3}/2)/2$. Again, although this value is higher than the average reliability of honest behavior, in the current case Bob can never decode the bit with reliability $100\%$. Thus this attack still belongs to the honest-but-curious adversaries. Also, it is important to note that the POVM $(E_{0},I-E_{0})$ is a two-value measurement that can obtain one bit of information only, and the POVMs corresponding to Eq. (\ref{eqqot1}) and Eq. (\ref{eqqot2}) are not the same. Therefore Bob can pick only one of them, to increase the average reliability of one of $b_{0}$ and $b_{1}$, instead of decoding both bits simultaneously.

From the above cheating strategies, we can see that Bob's key idea is to keep introducing quantum entanglement into the system, which enables him to keep more and more data at the quantum level, so that he has the freedom of choosing different measurements at a later time. This gives yet another example of the power of entanglement in quantum cryptography.

\section{Security}

The above honest-but-curious adversaries indicate that the BC-based QOT protocol is not unconditionally secure, which is in agreement with the conclusion of the no-go proofs of QOT \cite{qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40}. Nevertheless, we will show below that this protocol is secure against the cheating strategy in other no-go proofs \cite{qi149,qbc61}.

In Lo's no-go proof \cite{qi149}, the following definition of 1-2 OT was proposed.

\textit{Definition C: Lo's 1-2 OT}

(C-i) Alice inputs $i$, which is a pair of messages $(m_{0},m_{1})$.

(C-ii) Bob inputs $j=0$ or $1$.
(C-iii) At the end of the protocol, Bob learns about the message $m_{j}$, but not the other message $m_{\bar{j}}$, i.e., the protocol is an ideal one-sided two-party secure computation $f(m_{0},m_{1},j=0)=m_{0}$\ and $ f(m_{0},m_{1},j=1)=m_{1}$. (C-iv) Alice does not know which $m_{j}$ Bob got. It was introduced as a special case of the ideal one-sided two-party quantum secure computations, defined in Lo's proof as follows. \textit{Definition D: ideal one-sided two-party secure computation} Suppose Alice has a private (i.e. secret) input $i\in \{1,2,...,n\}$ and Bob has a private input $j\in \{1,2,...,m\}$. Alice helps Bob to compute a prescribed function $f(i,j)\in \{1,2,...,p\}$ in such a way that, at the end of the protocol: (a) Bob learns $f(i,j)$ unambiguously; (b) Alice learns nothing [about $j$\ or $f(i,j)$]; (c) Bob knows nothing about $i$ more than what logically follows from the values of $j$ and $f(i,j)$. Lo's proof \cite{qi149} showed that any protocol satisfying Definition D is insecure, because Bob can always obtain all $f(i,j)$ ($j\in \{1,2,...,m\}$). As a corollary, secure 1-2 OT satisfying Definition C is impossible, as Bob can always learn both $m_{0}$ and $m_{1}$. This result is surprising. As shown in the previous section, other no-go proofs \cite{qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40} claimed that QOT is insecure, merely because Bob can increase the average reliability of the decoded value of one of $m_{0}$ and $m_{1}$. It is never indicated in Refs. \cite{qi500,qi797,qi499,qi677,qi725,qbc14,*qbc40} that he can decode both of them simultaneously. Thus the cheating strategy in Lo's proof \cite{qi149} seems more powerful. However, it will be shown below that Lo's proof is not sufficiently general to cover all kinds of QOT. We must notice that Definition C is not rigorously equivalent to Definition B. An important feature of Definition C is that all Alice's (Bob's) input to the entire protocol is merely $ i=\{m_{0},m_{1}\}$ ($j=\{0,1\}$). 
Furthermore, as can be seen from (C-i) and (C-iii), the inputs $i$ and $j$ are independent of each other. But in general, hardly any protocol satisfies these requirements. That is, let us denote all of Alice's (Bob's) input to a protocol as $I$ ($J$). In Definition C there is $I=i$, $J=j$, and $I$, $J$ are independent. But most existing quantum cryptographic protocols have $I\supset i$, $J\supset j$, with $I$ and $J$ dependent on each other.

For example, in the well-known Bennett-Brassard 1984 (BB84) QKD protocol \cite{qi365}, though the aim of Alice and Bob is to share a secret key $k$, the protocol cannot be modeled as a simple box to which Alice inputs $k$, then Bob gets the output $k$. Instead, more inputs of both participants have to be involved. Alice should first input some quantum states (denoted as input $i_{1}$), and Bob inputs and announces his measurement bases (input $j_{1}$). Then Alice tells Bob which bases are correct (input $i_{2}$), followed by a security check in which Bob reveals some measurement results (input $j_{2}$), and Alice verifies whether these results are correct or not (input $i_{3}$). Alice also reveals some results for Bob to verify \ldots\ Finally they obtain $k$ from the remaining unannounced measurement results. Obviously Alice cannot determine $i_{2}$ without knowing $j_{1}$, Bob's $j_{2}$ will be affected by Alice's $i_{1}$, \ldots , and the final key $k$ is also affected by the $i$'s and $j$'s. Thus we see that in the BB84 protocol, the inputs $I=\{i_{1},i_{2},\ldots \}$ and $J=\{j_{1},j_{2},\ldots \}$ are dependent on each other. For an eavesdropper, even though parts of $I$ and $J$ are revealed, it is still insufficient to decode $k$.

This is also the case for OT. Alice and Bob generally need to send quantum states, perform operations and exchange lots of information throughout the entire protocol.
All these (e.g., Alice's $\{a_{i},g_{i}\}$, $R$, $\beta _{0}$, $\beta _{1}$ and Bob's $\{b_{i},h_{i}\}$, $\{J_{0},J_{1}\}$ in the protocol in section III) should be treated as parts of their inputs. Consequently, there is $I\supset i$ and $J\supset j$. Definition B requires that Alice has zero knowledge about $j$. But it does not necessarily imply that she has zero knowledge about $J$. Therefore $I$ and $J$ can be dependent on each other. Indeed, step (V') of the BC-based 1-2 QOT protocol in section III clearly shows that $I$ includes not only the secret bits $b_{0}$ and $b_{1}$, but also depends on how Bob selects $J_{0}$ and $J_{1}$ in step (IV). Meanwhile, Bob's announcing $J_{0}$ and $J_{1}$ does not necessarily reveal his choice of $j$. Therefore, compared with Definitions C and D, the BC-based 1-2 QOT protocol cannot be viewed as an ideal function $f(i(m_{0},m_{1}),j)$, where $i$ and $j$ are merely the private inputs of Alice and Bob, respectively. Instead, it has the form $f(I(m_{0},m_{1},J),J)$, where Alice's input $I$ varies according to Bob's input $J$, and its value is not determined until Bob's input has been completed. That is, BC-based 1-2 QOT does not satisfy Definition C. With this feature, the cheating strategy in Lo's proof can be defeated, as was pointed out in Ref. \cite{2OT} and will be reviewed below.

According to Lo's strategy, Bob can cheat in 1-2 OT satisfying Definition C, because he can change the value of $j$ from $j_{1}$ to $j_{2}$ by applying a unitary transformation to his own quantum machine alone. This enables him to learn $f(i(m_{0},m_{1}),j_{1})$ and $f(i(m_{0},m_{1}),j_{2})$ simultaneously without being detected by Alice. However, in a protocol described by the function $f(I(m_{0},m_{1},J),J)$, a value of the form $f(I(m_{0},m_{1},J_{(1)}),J_{(2)})$ (with $J_{(k)}$ denoting Bob's input corresponding to $j_{k}$) is meaningless.
Without the help of Alice, Bob cannot change $I$ from $I(m_{0},m_{1},J_{(1)})$\ to $ I(m_{0},m_{1},J_{(2)})$. Hence he cannot learn $ f(I(m_{0},m_{1},J_{(1)}),J_{(1)})$\ and $f(I(m_{0},m_{1},J_{(2)}),J_{(2)})$\ simultaneously by\ himself. Thus the BC-based 1-2 QOT protocol is immune to this cheating. Now we prove it in a more rigorous mathematical form, following the procedure in the appendix of Ref. \cite{2OT}. According to the cheating strategy in Lo's proof as shown in section III of Ref. \cite{qi149}, in any protocol satisfying Definition D, Alice and Bob's actions on their quantum machines can be summarized as an overall unitary transformation $U$ applied to the initial state $\left\vert u\right\rangle _{in}\in H_{A}\otimes H_{B}$ , i.e. \begin{equation} \left\vert u\right\rangle _{fin}=U\left\vert u\right\rangle _{in}. \label{e1} \end{equation} When both parties are honest, $\left\vert u^{h}\right\rangle _{in}=\left\vert i\right\rangle _{A}\otimes \left\vert j\right\rangle _{B}$ and \begin{equation} \left\vert u^{h}\right\rangle _{fin}=\left\vert v_{ij}\right\rangle \equiv U(\left\vert i\right\rangle _{A}\otimes \left\vert j\right\rangle _{B}). \label{e2} \end{equation} Thus the density matrix that Bob has at the end of protocol is \begin{equation} \rho ^{i,j}=Tr_{A}\left\vert v_{ij}\right\rangle \left\langle v_{ij}\right\vert . \label{e3} \end{equation} Bob can cheat in this protocol, because given $j_{1},j_{2}\in \{1,2,...,m\}$ , there exists a unitary transformation $U^{j_{1},j_{2}}$ such that \begin{equation} U^{j_{1},j_{2}}\rho ^{i,j_{1}}(U^{j_{1},j_{2}})^{-1}=\rho ^{i,j_{2}} \label{e4} \end{equation} for all $i$. It means that Bob can change the value of $j$ from $j_{1}$ to $ j_{2}$ by applying a unitary transformation independent of $i$\ to the state of his quantum machine. This equation is derived as follows \cite{qi149}. 
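Before the derivation, the linear-algebra fact behind Eq. (\ref{e4}), namely that two joint pure states with identical reduced density matrices on Alice's side are connected by a unitary acting on Bob's system alone, can be illustrated numerically. The sketch below uses random example states (the dimensions and seed are arbitrary assumptions, not from the paper) and reconstructs the rotation from the two Schmidt decompositions:

```python
import numpy as np

rng = np.random.default_rng(0)

dA, dB = 2, 2
# A random joint state |v_j1> of Alice's side (A, including the dice D)
# and Bob's side B, plus |v_j2> = (I_A (x) W)|v_j1> for a unitary W on B.
v1 = rng.normal(size=dA * dB)
v1 /= np.linalg.norm(v1)
W, _ = np.linalg.qr(rng.normal(size=(dB, dB)))      # random orthogonal W
v2 = np.kron(np.eye(dA), W) @ v1

M1 = v1.reshape(dA, dB)       # amplitude matrix M[i, j] = <i|_A <j|_B |v>
M2 = v2.reshape(dA, dB)

# Requirement (b): Alice's reduced density matrices agree.
print(np.allclose(M1 @ M1.T, M2 @ M2.T))             # True

# Reconstruct U^{j1,j2} from the two Schmidt decompositions (via SVD):
A1, s1, B1 = np.linalg.svd(M1)
A2, s2, B2 = np.linalg.svd(M2)
U = np.zeros((dB, dB))
for k in range(dB):
    sign = np.sign(A1[:, k] @ A2[:, k])              # fix SVD sign ambiguity
    U += np.outer(sign * B2[k, :], B1[k, :])         # |beta'_k><beta_k|

# U acts on B alone, yet rotates |v_j1> to |v_j2>.
print(np.allclose(U @ U.T, np.eye(dB)))              # True: unitary
print(np.allclose(np.kron(np.eye(dA), U) @ v1, v2))  # True
```

The reconstruction assumes non-degenerate Schmidt coefficients, which holds for generic random states; the derivation below makes the same construction explicit.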
Alice may entangle the state of her quantum machine $A$ with her quantum dice $D$ and prepare the initial state
\begin{equation}
\frac{1}{\sqrt{n}}\sum\limits_{i}\left\vert i\right\rangle _{D}\otimes \left\vert i\right\rangle _{A}.  \label{e5}
\end{equation}
She keeps $D$ for herself and uses the second register $A$ to execute the protocol. Supposing that Bob's input is $j_{1}$, the initial state is
\begin{equation}
\left\vert u^{\prime }\right\rangle _{in}=\frac{1}{\sqrt{n}}\sum\limits_{i}\left\vert i\right\rangle _{D}\otimes \left\vert i\right\rangle _{A}\otimes \left\vert j_{1}\right\rangle _{B}.  \label{e6}
\end{equation}
At the end of the protocol, it follows from Eqs. (\ref{e1}) and (\ref{e6}) that the total wave function of the combined system $D$, $A$, and $B$ is
\begin{equation}
\left\vert v_{j_{1}}\right\rangle =\frac{1}{\sqrt{n}}\sum\limits_{i}\left\vert i\right\rangle _{D}\otimes U(\left\vert i\right\rangle _{A}\otimes \left\vert j_{1}\right\rangle _{B}).  \label{e7}
\end{equation}
Similarly, if Bob's input is $j_{2}$, the total wave function at the end will be
\begin{equation}
\left\vert v_{j_{2}}\right\rangle =\frac{1}{\sqrt{n}}\sum\limits_{i}\left\vert i\right\rangle _{D}\otimes U(\left\vert i\right\rangle _{A}\otimes \left\vert j_{2}\right\rangle _{B}).  \label{e8}
\end{equation}
Due to the requirement (b) in Definition D, the reduced density matrices in Alice's hands for the two cases $j=j_{1}$ and $j=j_{2}$ must be the same, i.e.
\begin{equation}
\rho _{j_{1}}^{Alice}=Tr_{B}\left\vert v_{j_{1}}\right\rangle \left\langle v_{j_{1}}\right\vert =Tr_{B}\left\vert v_{j_{2}}\right\rangle \left\langle v_{j_{2}}\right\vert =\rho _{j_{2}}^{Alice}.
\label{e9}
\end{equation}
Equivalently, $\left\vert v_{j_{1}}\right\rangle $ and $\left\vert v_{j_{2}}\right\rangle $ admit Schmidt decompositions with the same coefficients and the same vectors on Alice's side,
\begin{equation}
\left\vert v_{j_{1}}\right\rangle =\sum\limits_{k}a_{k}\left\vert \alpha _{k}\right\rangle _{AD}\otimes \left\vert \beta _{k}\right\rangle _{B}  \label{e10}
\end{equation}
and
\begin{equation}
\left\vert v_{j_{2}}\right\rangle =\sum\limits_{k}a_{k}\left\vert \alpha _{k}\right\rangle _{AD}\otimes \left\vert \beta _{k}^{\prime }\right\rangle _{B}.  \label{e11}
\end{equation}
Now consider the unitary transformation $U^{j_{1},j_{2}}$ that rotates $\left\vert \beta _{k}\right\rangle _{B}$ to $\left\vert \beta _{k}^{\prime }\right\rangle _{B}$. Notice that it acts on $H_{B}$ alone and yet, as can be seen from Eqs. (\ref{e10}) and (\ref{e11}), it rotates $\left\vert v_{j_{1}}\right\rangle $ to $\left\vert v_{j_{2}}\right\rangle $, i.e.
\begin{equation}
\left\vert v_{j_{2}}\right\rangle =U^{j_{1},j_{2}}\left\vert v_{j_{1}}\right\rangle .  \label{e12}
\end{equation}
Since
\begin{equation}
_{D}\left\langle i\right. \left\vert v_{j}\right\rangle =\frac{1}{\sqrt{n}}\left\vert v_{ij}\right\rangle   \label{e13}
\end{equation}
[see Eqs. (\ref{e2}), (\ref{e7}), and (\ref{e8})], by multiplying Eq. (\ref{e12}) by $_{D}\left\langle i\right\vert $ on the left, one finds that
\begin{equation}
\left\vert v_{ij_{2}}\right\rangle =U^{j_{1},j_{2}}\left\vert v_{ij_{1}}\right\rangle .  \label{e14}
\end{equation}
Taking the trace of $\left\vert v_{ij_{2}}\right\rangle \left\langle v_{ij_{2}}\right\vert $ over $H_{A}$ and using Eq. (\ref{e14}), Eq. (\ref{e4}) can be obtained.

Eqs. (\ref{e1})--(\ref{e14}) are exactly those presented in Lo's proof \cite{qi149}. We now consider the BC-based 1-2 QOT protocol. Since it has the feature that Alice's input $I$ is dependent on Bob's input $J$, in the above proof all $i$ in the equations should be replaced by $I(J)$ from the very beginning. Consequently, Eq.
(\ref{e13}) becomes
\begin{equation}
_{D}\left\langle I(J)\right\vert \left. v_{J}\right\rangle =\frac{1}{\sqrt{n}}\left\vert v_{I(J)J}\right\rangle .  \label{e15}
\end{equation}
In this case, multiplying Eq. (\ref{e12}) by $_{D}\left\langle I_{(2)}\right\vert $ ($I_{(2)}\equiv I(J_{(2)})$ for short) on the left can no longer give Eq. (\ref{e14}). Instead, the result is
\begin{equation}
\left\vert v_{I_{(2)}J_{(2)}}\right\rangle =U^{J_{(1)},J_{(2)}}U^{I_{(1)},I_{(2)}}\left\vert v_{I_{(1)}J_{(1)}}\right\rangle ,  \label{e16}
\end{equation}
where $U^{I_{(1)},I_{(2)}}\equiv \left\vert I_{(2)}\right\rangle _{D}\left\langle I_{(1)}\right\vert _{D}$. Then Eq. (\ref{e4}) is replaced by
\begin{equation}
U^{J_{(1)},J_{(2)}}U^{I_{(1)},I_{(2)}}\rho ^{I_{(1)},J_{(1)}}(U^{J_{(1)},J_{(2)}}U^{I_{(1)},I_{(2)}})^{-1}=\rho ^{I_{(2)},J_{(2)}}.  \label{e17}
\end{equation}
Note that $U^{I_{(1)},I_{(2)}}$ is a unitary operation on Alice's side. This implies that without Alice's help, Bob cannot change the density matrix he has from $\rho ^{I_{(1)},J_{(1)}}$ to $\rho ^{I_{(2)},J_{(2)}}$. That is why Bob's cheating strategy fails.

In brief, Lo's no-go proof on ideal one-sided two-party secure computations \cite{qi149} cannot cover the above BC-based 1-2 QOT, because the proof considered only protocols in which the inputs of the participants are independent. As we mentioned, even the BB84 protocol does not satisfy this requirement, while it can still be used as a black box to build more sophisticated protocols, e.g., quantum secret sharing. Thus we see that black-box protocols do not necessarily require independent inputs of the participants. The model used in Lo's proof is too idealized, so that many useful protocols in quantum cryptography are not covered.

Similarly, a recent no-go proof on two-sided two-party secure computations \cite{qbc61} is also based on a model of protocols with independent inputs.
Moreover, the proof contains a logical loophole in its use of the security definition \cite{CommentQBC61}. Therefore its conclusion is not sufficiently general either. \section{Summary and discussions} We elaborated how Bob can make use of quantum entanglement to break the above BC-based QOT and achieve a higher average reliability of the decoded value of the transferred bit, even under certain practical settings in which the no-go proofs for secure QBC become invalid. Meanwhile, we also showed that BC-based QOT, though not unconditionally secure, can defeat certain kinds of cheating which attempt to decode all transferred bits simultaneously with reliability $100\%$. Thus it is still valuable for building some \textquotedblleft post-cold-war era\textquotedblright\ quantum cryptographic protocols. This insecurity proof is valid as long as the secure BC used in the QOT protocol is not BCCC. It covers relativistic BC \cite{qi44,qi582,qbc24,qbc51}, as well as many practically secure QBC \cite{qi295,qi669,qbc43}, conditionally secure QBC \cite{qi63}, computationally secure QBC \cite{qbc42,qbc21,qbc34}, cheat-sensitive QBC \cite{qi150,qbc50,qbc78,qi197,qbc52,qbc89}, and some other types of protocols \cite{HeJPA,HeQIP,HePRA,HeProof}. Nevertheless, when Bob is limited to bounded or noisy quantum storage, secure QOT can be made possible in practice with two approaches. On one hand, with this technological constraint BCCC can be obtained \cite{qbc26,qi137,qi243,qbc41,qbc39,qi796,qbc59,qbc65,qbc98}. This is because Bob can no longer keep system $C$ in Eq. (\ref{eqcheating}) (which represents the systems $B_{i}\otimes \phi _{i}\otimes H_{i}\otimes E$ mentioned below Eq. (\ref{BC2}) or $B_{i}\otimes \phi _{i}\otimes H_{i}\otimes E_{i}^{\prime }$ in Eq. (\ref{eqstepiv})) perfectly in the entangled form shown by these equations once the protocol lasts too long or requires too much quantum storage.
Thus the BC-based QOT protocol in Section III can be secure at least against the insecurity proof in this paper. On the other hand, bounded or noisy storage can also force Bob to measure the quantum states he receives, as long as the QOT protocol involves too many qubits or the time interval between each step is sufficiently long. BC is no longer needed to convince Alice that Bob has already completed the measurements. Then there can be QOT protocols not based on BC, which are proven to be practically secure \cite{qi243,qbc41,qbc39,qi796,qi795,qbc6,qbc86}. We should also note that, even without the assumption of such technological limitations, our insecurity proof does not mean that no QOT can be unconditionally secure in principle. This is because the existing method \cite{qi75} is not necessarily the only way to build OT from BC. Furthermore, there is no evidence indicating that OT has to be built upon BC. Therefore, it is still worth asking whether other kinds of unconditionally secure OT exist, especially relativistic OT. \newline The work was supported in part by the NSF of Guangdong province. \end{document}
\begin{document} \begin{titlepage} \title{\bf On the propagation of regularity and decay of solutions to the Benjamin equation } \author{Boling Guo$^{~a}$, \quad Guoquan Qin$~^{b,*}$ \\[10pt] \small {$^a $ Institute of Applied Physics and Computational Mathematics, China Academy of Engineering Physics,}\\ \small { Beijing, 100088, P. R. China}\\[5pt] \small {$^b $ Graduate School of China Academy of Engineering Physics,}\\ \small { Beijing, 100088, P. R. China}\\[5pt] } \footnotetext {*~Corresponding author. ~~~~~E-mail addresses: [email protected](B. Guo), [email protected](G. Qin).} \date{} \end{titlepage} \maketitle \begin{abstract} In this paper, we investigate some special regularities and decay properties of solutions to the initial value problem (IVP) of the Benjamin equation. The main result shows that for an initial datum $u_{0}\in H^{s}(\mathbb{R})$ with $s>3/4,$ if the restriction of $u_{0}$ belongs to $H^{l}((x_{0}, \infty))$ for some $l\in \mathbb{Z}^{+}$ and $x_{0}\in \mathbb{R},$ then the restriction of the corresponding solution $u(\cdot, t)$ belongs to $H^{l}((\alpha, \infty))$ for any $\alpha\in \mathbb{R}$ and any $t\in(0, T)$. Consequently, this type of regularity travels with infinite speed to its left as time evolves. \vskip0.1in \noindent{\bf MSC:} primary 35Q53, secondary 35B05.
\end{abstract} ~~\noindent{ \textbf{Key words}: Benjamin equation; Propagation of regularity; Decay} \section{Introduction} \setcounter{equation}{0} In this paper, we are concerned with the IVP of the following Benjamin equation \begin{eqnarray}\begin{cases}\label{benjamin} u_{t}+\partial_{x}^{3}u-H\partial_{x}^{2}u+u\partial_{x}u=0,\quad x, t\in \mathbb{R},\\ u(x, 0)=u_{0}(x), \end{cases} \end{eqnarray} where $H$ is the one-dimensional Hilbert transform \begin{eqnarray*}\label{hilbert} Hf(x)&=&\frac{1}{\pi}\mbox{p.v.}\left(\frac{1}{x}*f\right)(x)\nonumber\\ &=&\frac{1}{\pi}\lim_{\varepsilon\rightarrow 0}\int_{|y|\geq \varepsilon} \frac{f(x-y)}{y}\mbox{d}y\nonumber\\ &=&(-i\,\mbox{sgn}(\xi)\hat{f}(\xi))^{\vee}(x) \end{eqnarray*} and $u=u(x, t)$ is a real valued function. We will derive some special properties, including the propagation of regularity and decay, of solutions to equation (\ref{benjamin}). The integro-differential equation (\ref{benjamin}) models the unidirectional propagation of long waves in a two-fluid system, where the lower fluid with greater density is infinitely deep and the interface is subject to capillarity. It was derived by Benjamin \cite{be} to study gravity-capillary surface waves of solitary type on deep water.
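The multiplier form $Hf=(-i\,\mbox{sgn}(\xi)\hat{f}(\xi))^{\vee}$ is convenient for quick numerical checks; with this sign convention $H(\cos)=\sin$ and $H^{2}=-\mathrm{Id}$ on mean-zero functions. A sketch on a periodic grid using the discrete Fourier transform (the grid size and test function are illustrative choices):

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N      # periodic grid on [0, 2*pi)
xi = np.fft.fftfreq(N, d=1.0 / N)     # integer frequencies 0,1,...,-1

def hilbert(f):
    # H f = (-i sgn(xi) \hat f(xi))^v, the Fourier-multiplier form above
    return np.real(np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(f)))

f = np.cos(x)
# H(cos) = sin with this convention
assert np.allclose(hilbert(f), np.sin(x), atol=1e-10)
# H(Hf) = -f on mean-zero data (sgn(0) = 0 kills the mean)
assert np.allclose(hilbert(hilbert(f)), -f, atol=1e-10)
```

The same multiplier viewpoint is what makes identities such as the skew-symmetry of $H$ transparent in the energy estimates below.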
He also showed that the solutions of the Benjamin equation (\ref{benjamin}) satisfy the following conservation laws \begin{eqnarray*} &&I_{1}(u)=\int_{-\infty}^{+\infty}u(x, t)\mbox{d}x,\\ &&I_{2}(u)=\int_{-\infty}^{+\infty}u^{2}(x, t)\mbox{d}x,\\ &&I_{3}(u)=\int_{-\infty}^{+\infty} [\frac{1}{2}(\partial_{x}u)^{2}(x, t) -\frac{1}{2}u(x, t)H\partial_{x}u(x, t) +\frac{1}{3}u^{3}(x, t)]\mbox{d}x. \end{eqnarray*} Notice that the conservation law for solutions of (\ref{benjamin}) \begin{equation*} I_{1}(u_{0})=\int_{-\infty}^{+\infty}u(x, t)\mbox{d}x =\int_{-\infty}^{+\infty}u_{0}(x)\mbox{d}x \end{equation*} guarantees that the property $\hat{u}(0)=0$ is preserved by the solution flow. Following the definition of T. Kato \cite{ka}, the IVP (\ref{benjamin}) is said to be locally well-posed (LWP) in the Banach space $X$ if given any datum $u_{0}\in X$ there exist $T>0$ and a unique solution \begin{eqnarray}\label{class1} u\in C([-T, T];X)\cap Y(T) \end{eqnarray} where $Y(T)$ is an auxiliary function space. Furthermore, the solution map $u_{0}\mapsto u$ is continuous from $X$ into the class (\ref{class1}). This notion of LWP, which includes the ``persistence" property, i.e., the solution describes a continuous curve in $X$, implies that the solution of (\ref{benjamin}) defines a dynamical system on $X$. If $T$ can be taken arbitrarily large, the IVP (\ref{benjamin}) is said to be globally well-posed (GWP). The problem of finding the minimal regularity, measured in the classical Sobolev space \begin{equation*} H^{s}(\mathbb{R})=(1-\partial_{x}^{2})^{-s/2}L^{2}(\mathbb{R}),\quad s\in\mathbb{R}, \end{equation*} required to guarantee that the IVP (\ref{benjamin}) is locally or globally well-posed in $H^{s}(\mathbb{R})$ has been extensively studied. We list some of the main results here.
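Regarding the conservation laws above: on the Fourier side the linear part of (\ref{benjamin}) acts by the purely imaginary symbol $i(\xi^{3}+\xi|\xi|)$, so the linear flow already preserves $I_{1}$ (the $\xi=0$ mode) and $I_{2}$ (by Plancherel). A discrete illustration (the grid, time, and datum are arbitrary choices):

```python
import numpy as np

N, t = 256, 0.7
x = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)     # integer frequencies

# Linear Benjamin flow u_t + u_xxx - H u_xx = 0:
#   \hat u(xi, t) = exp(i t (xi^3 + xi|xi|)) \hat u_0(xi)
g = np.exp(np.cos(x))
u0 = g - np.mean(g)                   # a smooth mean-zero datum
u = np.real(np.fft.ifft(np.exp(1j * t * (xi**3 + xi * np.abs(xi)))
                        * np.fft.fft(u0)))

# The unimodular multiplier preserves the xi = 0 mode (I_1) ...
assert np.isclose(np.sum(u), np.sum(u0))
# ... and the discrete L^2 norm (I_2), by Parseval
assert np.isclose(np.sum(u**2), np.sum(u0**2))
```

For the full nonlinear flow these quantities are of course conserved as well, which is exactly what Benjamin's identities $I_{1}$, $I_{2}$ express.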
Employing the Fourier restriction method introduced by Bourgain \cite{bo}, Linares \cite{li} established the LWP result for (\ref{benjamin}) in $H^{s}(\mathbb{R})$ with $s\geq 0,$ which, combined with the conservation law $I_{2}$, leads to the GWP for (\ref{benjamin}) in $L^{2}.$ Guo and Huo \cite{gh2} obtained the LWP result in $H^{s}(\mathbb{R})$ for $s> -3/4.$ The best LWP results were established by Li and Wu \cite{lw} and Chen, Guo and Xiao \cite{cgx}. They also asserted the GWP for (\ref{benjamin}) in $H^{s}(\mathbb{R})$ for $s\geq -3/4.$ On the other hand, for the study of existence, stability and asymptotics of solitary wave solutions of equation (\ref{benjamin}), we refer to \cite{be, abr, pa, cb, sl}. The well-posedness problem has also been studied in the following weighted Sobolev spaces, which capture both regularity and decay properties, \begin{equation*}\label{weighted} Z_{s, r}=H^{s}(\mathbb{R})\cap L^{2}(|x|^{r}\mbox{d}x),\quad s, r\in \mathbb{R} \end{equation*} and \begin{equation*}\label{weighted1} \dot{Z}_{s, r}=\{f\in Z_{s, r}: \hat{f}(0)=0\}. \end{equation*} In this respect we refer, for instance, to the articles \cite{fp1, flp1, flp2} for the Benjamin-Ono and the dispersion generalized Benjamin-Ono equations, and the paper of Nahas and Ponce \cite{np} for the nonlinear Schr\"{o}dinger equation. For the Benjamin equation (\ref{benjamin}), Urrea \cite{u} established the LWP in weighted Sobolev spaces $Z_{s, r}$ with $s\geq 1,$ $r\in[0, s/2]$ and $r<5/2$, the GWP in $Z_{s, r}$ with $s\geq 1,$ $r\in[0, s/2]$ and $3/2<r<5/2$, and the GWP in $\dot{Z}_{s, r}$ with $r\in[0, s/2]$ and $5/2\leq r<7/2$. In particular, this implies the well-posedness of the IVP (\ref{benjamin}) in the Schwartz space. He also established a unique continuation property for solutions of (\ref{benjamin}).
More precisely, he showed that if $u\in C([0, T]; Z_{7, 7/2^{-}})$ is a solution of the IVP (\ref{benjamin}) and there exist three different times $t_{1}, t_{2}, t_{3}\in[0, T]$ such that $u(\cdot, t_{j})\in \dot{Z}_{7, 7/2}$ for $j=1, 2, 3,$ then $u(x, t)\equiv 0.$ There are also works concerning special regularities and decay properties of some dispersive models. Isaza, Linares and Ponce \cite{ilp1} considered these problems for the k-generalized KdV equations \begin{eqnarray}\begin{cases}\label{kdv} u_{t}+\partial_{x}^{3}u+u^{k}\partial_{x}u=0,\quad x, t\in \mathbb{R},\\ u(x, 0)=u_{0}(x). \end{cases} \end{eqnarray} They mainly established two results. The first one describes the propagation, for positive times, of the regularity that the initial value possesses on the right hand side of the real line. It asserts that this regularity travels with infinite speed to its left as time goes by. Note that in \cite{ilp2}, they proved a similar result for the following Benjamin-Ono equation with negative dispersion \begin{eqnarray}\begin{cases}\label{bo} u_{t}-H\partial_{x}^{2}u+u\partial_{x}u=0,\quad x, t\in \mathbb{R},\\ u(x, 0)=u_{0}(x). \end{cases} \end{eqnarray} The difference between \cite{ilp1} and \cite{ilp2} lies in the regularity of the initial data. For the k-generalized KdV equations, the initial value $u_{0}$ belongs to $H^{3/4^{+}}(\mathbb{R})$, while $u_{0}$ lies in $H^{3/2}(\mathbb{R})$ for the Benjamin-Ono equation. The second conclusion in \cite{ilp1} is that if the initial value $u_{0}\in H^{3/4^{+}}(\mathbb{R})$ of the k-generalized KdV equations has polynomial decay on the positive real line, then the corresponding solution possesses some persistence properties and regularity effects for positive times.
Segata and Smith \cite{ss} extended the results of \cite{ilp1} to the following fifth order dispersive equation, where $a_{1}, a_{2}, a_{3}$ are constants, \begin{eqnarray}\begin{cases}\label{fifth} u_{t}-\partial_{x}^{5}u+a_{1}u^{2}\partial_{x}u +a_{2}\partial_{x}u\partial_{x}^{2}u +a_{3}u\partial_{x}^{3}u=0,\quad x, t\in \mathbb{R},\\ u(x, 0)=u_{0}(x). \end{cases} \end{eqnarray} However, the regularity of the initial data needs to be $5/2^{+}$ for equation (\ref{fifth}). Motivated by the above works, the objective of this paper is to extend the results of \cite{ilp1} to the IVP (\ref{benjamin}). Before stating our results we record the following theorem, which provides the space of solutions in which we shall work. \begin{thmA} Let $u_{0}\in H^{3/4^{+}}(\mathbb{R}).$ Then there exist a constant $T=T(\|u_{0}\|_{H^{3/4^{+}}})$ and a unique local solution of the IVP (\ref{benjamin}) such that \begin{eqnarray}\label{class} &&(i)\quad u\in C([-T, T]; H^{3/4^{+}}(\mathbb{R})),\nonumber\\ &&(ii)\quad \partial_{x}u\in L^{4}([-T, T]; L^{\infty}(\mathbb{R})),\\ &&(iii)\quad \sup_{x}\int_{-T}^{T}|J^{r}\partial_{x}u(x, t)|^{2}\mbox{d}t< \infty, \quad for \quad r\in[0,3/4^{+}],\nonumber\\ &&(iv)\quad \int_{-\infty}^{\infty}\sup_{-T\leq t \leq T}|u(x, t)|^{2}\mbox{d}x< \infty,\nonumber \end{eqnarray} where $J=(1-\partial_{x}^{2})^{\frac{1}{2}}$ denotes the Bessel potential. Moreover, the data-to-solution map $u_{0}\mapsto u(x, t)$ is locally continuous (smooth) from $H^{3/4^{+}}(\mathbb{R})$ into the class defined by (\ref{class}). \end{thmA} \begin{rem} The above well-posedness theorem can be obtained by combining the properties of the unitary group associated to the linear part of equation (\ref{benjamin}) with the commutator estimate established by Kato and Ponce \cite{kp}. For the method of its proof, we refer the reader to \cite{kpv} and \cite{la}, and we omit the details here.
\end{rem} We first describe the propagation of one-sided regularity displayed by solutions to the IVP (\ref{benjamin}) provided by Theorem A. \begin{thm}\label{regularity} Assume $u_{0}\in H^{3/4^{+}}(\mathbb{R})$ and that for some $l\in \mathbb{Z}^{+},$ $l\geq 1$ and $x_{0}\in\mathbb{R}$ there holds \begin{eqnarray} \|\partial_{x}^{l}u_{0}\|_{L^{2}((x_{0}, \infty))}^{2} =\int_{x_{0}}^{\infty}|\partial_{x}^{l}u_{0}(x)|^{2}\mbox{d}x<\infty. \label{113} \end{eqnarray} Then the solution of the IVP (\ref{benjamin}) provided by Theorem A satisfies, for any $v>0$ and $\varepsilon>0,$ \begin{eqnarray}\label{114} \sup_{0\leq t\leq T}\int_{x_{0}+\varepsilon-vt}^{\infty}(\partial_{x}^{j}u)^{2}(x, t)\mbox{d}x \leq c_{0}, \end{eqnarray} for $j=0, 1, 2, ..., l$ with $c_{0}=c_{0}(\|u_{0}\|_{H^{3/4^{+}}}; \|\partial_{x}^{l}u_{0}\|_{L^{2}((x_{0}, \infty))}; l; v; \varepsilon; T)$. In particular, for all $t\in(0, T],$ the restriction of $u(\cdot, t)$ to any interval $(x_{1}, \infty)$ belongs to $H^{l}((x_{1}, \infty))$. Moreover, for any $v\geq0$, $\varepsilon>0$ and $R>0$ \begin{eqnarray} \int_{0}^{T}\int[D_{x}^{\frac{1}{2}}(\partial_{x}^{l}u(x, t)\eta(x+vt; \varepsilon, b))]^{2}\mbox{d}x\mbox{d}t \leq c_{0},\label{115}\\ \int_{0}^{T}\int_{x_{0}+\varepsilon-vt}^{x_{0}+R-vt} (\partial_{x}^{l+1}u)^{2}(x, t)\mbox{d}x\mbox{d}t \leq c_{1},\label{116} \end{eqnarray} where $c_{1}=c_{1}(l; \|u_{0}\|_{H^{3/4^{+}}}; \|\partial_{x}^{l}u_{0}\|_{L^{2}((x_{0}, \infty))}; v; \varepsilon; T; R)$. \end{thm} \begin{rem} The functions $\eta(x; \varepsilon, b)$ mentioned in Theorem \ref{regularity} and $\eta_{j}(x; \varepsilon, b)$ in Theorem \ref{persistence} will be defined in Section 2. In addition, without loss of generality, we shall assume from now on that $x_{0}=0$ in Theorem \ref{regularity}.
\end{rem} The persistence of decay and the regularity effects established in \cite{ilp1} can also be extended to the IVP (\ref{benjamin}). In fact, we have \begin{thm}\label{persistence} Assume $u_{0}\in H^{3/4^{+}}(\mathbb{R})$ and that for some $n\in \mathbb{Z}^{+},$ $n\geq 1,$ there holds \begin{eqnarray} \|x^{n/2}u_{0}\|_{L^{2}((0, \infty))}^{2} =\int_{0}^{\infty}|x^{n}||u_{0}(x)|^{2}\mbox{d}x<\infty. \label{117} \end{eqnarray} Then the solution of the IVP (\ref{benjamin}) provided by Theorem A satisfies \begin{eqnarray}\label{118} \sup_{0\leq t\leq T}\int_{0}^{\infty}|x^{n}||u(x, t)|^{2}\mbox{d}x \leq c_{2} \end{eqnarray} with $c_{2}=c_{2}(\|u_{0}\|_{H^{3/4^{+}}}; \|x^{n/2}u_{0}\|_{L^{2}((0, \infty))}; T; n)$. Furthermore, for any $v\geq0$, $\varepsilon, \delta>0,$ $m, j\in\mathbb{Z}^{+},$ $m+j\leq n$ and $m\geq 1,$ \begin{eqnarray} &&\sup_{\delta\leq t\leq T}\int_{\varepsilon-vt}^{\infty}(\partial_{x}^{m}u)^{2}(x, t)x_{+}^{j}\mbox{d}x +\int_{\delta}^{T}\int_{\varepsilon-vt}^{\infty} (\partial_{x}^{m+1}u)^{2}(x, t)x_{+}^{j-1}\mbox{d}x\mbox{d}t\nonumber\\ &&+\int_{\delta}^{T}\int[D_{x}^{\frac{1}{2}}(\partial_{x}^{m}u(x, t)\eta_{j}(x+vt; \varepsilon, b))]^{2}\mbox{d}x\mbox{d}t \leq c_{3},\label{119} \end{eqnarray} where $c_{3}=c_{3}(l; \|u_{0}\|_{H^{3/4^{+}}}; \|x^{n/2}u_{0}\|_{L^{2}((x_{0}, \infty))}; v; \varepsilon; T; \delta; n),$ $x_{+}=\max\{x, 0\}.$ \end{thm} A simple analysis of the proofs of Theorems \ref{regularity} and \ref{persistence} yields their validity for the ``defocusing" Benjamin equation \begin{eqnarray}\begin{cases}\label{dbenjamin} u_{t}+\partial_{x}^{3}u-H\partial_{x}^{2}u-u\partial_{x}u=0,\quad x, t\in \mathbb{R},\\ u(x, 0)=u_{0}(x). \end{cases} \end{eqnarray} Consequently, our results still hold for $u(-x, -t)$, where $u(x, t)$ is the solution of (\ref{benjamin}).
Put another way, for data satisfying the assumptions (\ref{113}) and (\ref{117}) on the left hand side of the real line, respectively, Theorems \ref{regularity} and \ref{persistence} remain true backward in time. On the other hand, equation (\ref{benjamin}) is time reversible. In fact, let $v(x, t)=u(-x, -t)$, where $u(x, t)$ is the solution of equation (\ref{benjamin}). Using the relation $(Hv)(x, t)=-(Hu)(-x, -t)$, one has \begin{eqnarray}\begin{cases}\label{benjamin2} v_{t}+\partial_{x}^{3}v-H\partial_{x}^{2}v+v\partial_{x}v=0,\quad x, t\in \mathbb{R},\\ v(x, 0)=u_{0}(-x). \end{cases} \end{eqnarray} Theorems \ref{regularity} and \ref{persistence}, combined with the above two observations, indicate \begin{cor}\label{cor1} Let $u\in C([-T, T]; H^{3/4^{+}}(\mathbb{R}))$ be a solution of the equation (\ref{benjamin}) provided by Theorem A such that \begin{eqnarray*} \partial_{x}^{m}u(\cdot, \hat{t})\notin L^{2}((a, \infty))\quad for \quad some \quad \hat{t}\in (-T, T),\quad a\in \mathbb{R}\quad and \quad m\in \mathbb{Z}^{+}. \end{eqnarray*} Then for any $t\in [-T, \hat{t})$ and any $\beta\in \mathbb{R}$ \begin{equation*} \partial_{x}^{m}u(\cdot, t)\notin L^{2}((\beta, \infty)), \quad and \quad x^{m/2}u(\cdot, t)\notin L^{2}((0, \infty)). \end{equation*} \end{cor} Next, Theorems \ref{regularity} and \ref{persistence} yield that the singularity of the solution corresponding to an appropriate class of initial data propagates with infinite speed to the left as time goes by. Also, since equation (\ref{benjamin}) is time reversible, the solution cannot have had more regularity in the past. More precisely, we have \begin{cor}\label{cor2} Let $u\in C([-T, T]; H^{3/4^{+}}(\mathbb{R}))$ be a solution of the equation (\ref{benjamin}) provided by Theorem A.
Suppose there exist $n, m\in \mathbb{Z}^{+}$ with $m\leq n$ such that for some $a, b\in \mathbb{R}$ with $a< b$ \begin{eqnarray}\label{cor119} \int_{b}^{\infty}|\partial_{x}^{n}u_{0}(x)|^{2}\mbox{d}x<\infty \quad but \quad \partial_{x}^{m}u_{0}\notin L^{2}((a, \infty)). \end{eqnarray} Then for any $t\in (0, T)$ and any $v, \varepsilon> 0$ \begin{equation*} \int_{b+\varepsilon-vt}^{\infty}|\partial_{x}^{n}u(x, t)|^{2}\mbox{d}x<\infty \end{equation*} and for any $t\in (-T, 0)$ and any $\alpha\in \mathbb{R}$ \begin{equation*} \int_{\alpha}^{\infty}|\partial_{x}^{m}u(x, t)|^{2}\mbox{d}x=\infty. \end{equation*} \end{cor} We now discuss some of the ingredients in the proofs of Theorems \ref{regularity} and \ref{persistence}. The first concerns the proof of Theorem \ref{regularity}. As in \cite{ilp1}, we mainly use induction. To treat the Benjamin-Ono term $-H\partial_{x}^{2}u,$ we follow the idea in \cite{ilp2}, where the commutator estimate for the Hilbert transform (\ref{commutator}) plays a vital role. In spite of this, there is a slight difference between \cite{ilp2} and this paper when handling the following two terms (see (\ref{2A422})) \begin{eqnarray} \int_{0}^{T}\int(\partial_{x}^{2}u)^{2} (\eta^{'})^{2}\mbox{d}x\mbox{d}t +\int_{0}^{T}\int(\partial_{x}^{2}u \eta)^{2}\mbox{d}x\mbox{d}t.\label{q1} \end{eqnarray} In \cite{ilp2}, for the Benjamin-Ono equation (\ref{bo}), these two terms can be controlled by a sufficiently strong local smoothing effect. More precisely, condition (1.6)(iii) in \cite{ilp2} reads \begin{eqnarray*} \int_{-T}^{T}\int_{-R}^{R} (|\partial_{x}D_{x}u|^{2}+|\partial_{x}^{2}u|^{2})\mbox{d}x\mbox{d}t\leq c_{0}, \end{eqnarray*} where $R$ is arbitrary and finite. This, combined with the boundedness of $\eta^{'}$ and $\eta$ on the support of $\eta$, immediately yields the finiteness of (\ref{q1}).
However, (\ref{class})(iii) in this paper provides at most a $7/4^{+}$-order local smoothing effect, which is not enough to bound (\ref{q1}). Fortunately, in the first step (the case $l=1$ in the proof of Theorem \ref{regularity}) of our induction process, the KdV term provides us with the finiteness of (see (\ref{conclusion1})) \begin{eqnarray*} \int_{0}^{T}\int (\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mbox{d}x\mbox{d}t. \end{eqnarray*} This permits us to use the properties of $\eta^{'}$ and $\eta$, i.e. (\ref{prp3}) and (\ref{prp4}), to control (\ref{q1}). The second ingredient relates to the proof of Theorem \ref{persistence}. The difficulty still comes from the Benjamin-Ono term. For the term $A_{422}$ in (\ref{4A42}), note that because of the factor $x^{n}$ in the definitions of $\eta_{n}^{'}$ and $\eta_{n}$, the support of $\eta_{n}$ is in general no longer contained in $[\varepsilon, b]$. As a consequence, $\eta_{n}$ and $\eta_{n}^{'}$ may be unbounded. However, we notice that (\ref{prp10}) and (\ref{prp11}) provide a relation between $\chi_{n}$ and $\chi_{n-1}$; therefore, we can use induction to treat this term. (\ref{prp10}) and (\ref{prp11}) are also used to bound the term in (\ref{6A422}). The rest of this paper is organized as follows: in Section 2 we construct our cutoff functions and state a lemma to be used in the proofs of Theorems \ref{regularity} and \ref{persistence}. The proofs of Theorem \ref{regularity} and Theorem \ref{persistence} will be given in Section 3 and Section 4, respectively. \section{Preliminaries} Let us first construct our cutoff functions; the construction of this family is motivated by Segata and Smith \cite{ss}.
Let $p$ be large enough and let $\rho(x)$ be defined as follows \begin{eqnarray*} \rho(x)=a\int_{0}^{x}y^{p}(1-y)^{p}\mbox{d}y \end{eqnarray*} where the constant $a=a(p)$ is chosen so that $\rho(1)=1.$ \begin{rem} According to Lemma \ref{lem1} below, when estimating the $L^{p}$ norm of the commutator involving the Hilbert transform, we want to put all the derivatives on the smooth function $\psi$; this is the reason why $p$ in the definition of $\rho(x)$ is taken large enough. \end{rem} With the above definition, we have \begin{eqnarray*} \rho(0)=0,\quad \rho(1)=1,\\ \rho^{'}(0)=\rho^{''}(0)=\cdot\cdot\cdot=\rho^{(p)}(0)=0,\\ \rho^{'}(1)=\rho^{''}(1)=\cdot\cdot\cdot=\rho^{(p)}(1)=0 \end{eqnarray*} with $\rho, \rho^{'}>0$ for $0<x<1.$ Next, for parameters $\varepsilon, b>0,$ define $\chi\in C^{p}(\mathbb{R})$ by \begin{eqnarray*} \chi(x; \varepsilon, b)= \begin{cases} 0,\quad x\leq \varepsilon,\\ \rho((x-\varepsilon)/b),\quad \varepsilon<x<b+\varepsilon,\\ 1,\quad b+\varepsilon\leq x. \end{cases} \end{eqnarray*} In addition, we define $\chi_{n}=x^{n}\chi\in C^{p}(\mathbb{R}).$ By their definitions, $\chi$ and $\chi_{n}$ are both positive for $x\in (\varepsilon, \infty)$.
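As a quick sanity check of the normalization $a=a(p)$ and of the flatness of $\rho$ at the endpoints, the integral defining $\rho$ can be expanded by the binomial theorem, so that $a=1/B(p+1,p+1)=(2p+1)!/(p!)^{2}$. A sketch (the value $p=10$ is an arbitrary illustrative choice):

```python
from math import comb, factorial

p = 10  # "large enough" smoothness parameter (illustrative choice)

# a(p) = 1/B(p+1, p+1) = (2p+1)!/(p!)^2 normalizes rho(1) = 1
a = factorial(2 * p + 1) / (factorial(p) ** 2)

def rho(x):
    # rho(x) = a * \int_0^x y^p (1-y)^p dy, expanded via the binomial theorem:
    # \int_0^x y^p (1-y)^p dy = sum_j C(p,j) (-1)^j x^{p+j+1}/(p+j+1)
    return a * sum(comb(p, j) * (-1) ** j * x ** (p + j + 1) / (p + j + 1)
                   for j in range(p + 1))

assert abs(rho(0.0)) < 1e-12
assert abs(rho(1.0) - 1.0) < 1e-6     # normalization rho(1) = 1
assert abs(rho(0.5) - 0.5) < 1e-9     # symmetry about x = 1/2
assert rho(1e-3) < 1e-20              # flat to order p at x = 0
```

The vanishing of $\rho^{(j)}(0)$ and $\rho^{(j)}(1)$ for $j\leq p$ is what makes the glued function $\chi$ belong to $C^{p}(\mathbb{R})$.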
Computing as in Section 2 of \cite{ss}, we can derive the following properties of $\chi$ and $\chi_{n}$: \begin{eqnarray} &&(1)\quad \chi(x; \varepsilon/10, \varepsilon/2 )=1\quad on \quad\mbox{supp}\,\chi(x; \varepsilon, b )=[\varepsilon, \infty);\label{id}\\ &&(2)\quad |\chi(x, \varepsilon, b)|\leq \chi_{1}^{'}(x, \varepsilon, b);\label{prp2}\\ &&(3)\quad \left|\frac{[\chi^{''}(x; \varepsilon, b)]^{2}}{\chi^{'}(x; \varepsilon, b)}\right| \leq c(\varepsilon, b)\chi^{'}(x; \varepsilon/3, b+\varepsilon) \quad on \quad support \quad of \quad\chi^{'};\label{prp3}\\ &&(4)\quad |\chi^{(j)}(x; \varepsilon, b)| \leq c(j, \varepsilon, b)\chi^{'}(x; \varepsilon/3, b+\varepsilon)\quad on \quad[\varepsilon, b+\varepsilon]\quad for \quad j=1, 2,..., p;\label{prp4}\\ &&(5)\quad |\chi_{n-l}^{''}(x, \varepsilon, b)|\leq c(n, l)\chi_{n-l-2}(x, \varepsilon, b)\nonumber\\ &&\quad \quad+c(b, v, \varepsilon, T)\chi^{'}(x, \varepsilon/3, b+\varepsilon) \quad for \quad l\leq n-2;\label{prp5}\\ &&(6)\quad |\chi_{n-l}^{'''}(x, \varepsilon, b)| \leq c(n, l)\chi_{n-l-3}(x, \varepsilon, b)\nonumber\\ &&\quad \quad+c(n, l, b)\chi(x, \varepsilon/10,\varepsilon/2)\quad for \quad l\leq n-3;\label{prp6}\\ &&(7)\quad n\chi_{n-1}(x, \varepsilon, b)\leq\chi_{n}^{'}(x, \varepsilon, b);\label{prp7}\\ &&(8)\quad |\chi_{n}^{(j)}(x; \varepsilon, b)| \leq c(j, n, b)[1+\chi_{n}(x; \varepsilon, b)]\quad for \quad j=1, 2, 3, ..., p;\label{prp8}\\ &&(9)\quad |\chi_{n}^{(j)}(x; \varepsilon, b)| \leq c(\varepsilon, n, b)\chi_{n-1}(x; \varepsilon/3, b+\varepsilon)\quad for \quad j=1, 2, 3, ..., p;\label{prp10}\\ &&(10)\left|\frac{[\chi_{n}^{''}(x; \varepsilon, b)]^{2}}{\chi_{n}^{'}(x; \varepsilon, b)}\right| \leq c(\varepsilon, b, n)\chi_{n-1}(x; \varepsilon/3, b+\varepsilon)\quad on \quad support \quad of \quad\chi_{n}^{'}.\label{prp11} \end{eqnarray} Moreover, we define \begin{eqnarray}\label{eta} \eta(x; \varepsilon, b)=\sqrt{\chi^{'}(x; \varepsilon, b )},\nonumber\\ \eta_{n}(x; \varepsilon, b)=\sqrt{\chi_{n}^{'}(x; \varepsilon, b )}. \end{eqnarray} Then, reasoning as in Section 2 of \cite{ilp2}, we deduce that $\eta(x; \varepsilon, b)$ and $\eta_{n}(x; \varepsilon, b)$ are both in $C^{p}(\mathbb{R}).$ The following commutator estimate is an extension of the Calder\'{o}n theorem \cite{ca}; it was proved by Dawson, McGahagan and Ponce \cite{dmp}. \begin{lem}\label{lem1} For any $p\in (1, \infty)$ and $l, m\in\mathbb{Z}^{+}\cup \{0\},$ $l+m\geq 1,$ there exists a constant $C=C(p, l, m)>0$ such that \begin{equation}\label{commutator} \|\partial_{x}^{l}[H; \psi]\partial_{x}^{m}f\|_{L^{p}} \leq C\|\partial_{x}^{l+m}\psi\|_{L^{\infty}}\|f\|_{L^{p}} \end{equation} where $H$ is the Hilbert transform. \end{lem} \section{Proof of Theorem \ref{regularity} } To prove Theorem \ref{regularity}, we follow the idea in \cite{ilp1} and use an induction argument. To illustrate our method, we first prove (\ref{114}) for $l=1$ and $l=2$.
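Two Fourier-side facts are used repeatedly in the estimates below: the skew-symmetry of $H$ (in the treatment of $A_{41}$) and the Plancherel identity $\int H\partial_{x}(g)\,g\,\mbox{d}x=\int (D_{x}^{1/2}g)^{2}\mbox{d}x$ (in $A_{421}$), both immediate from the symbols $-i\,\mbox{sgn}(\xi)$ and $|\xi|=(|\xi|^{1/2})^{2}$. A discrete check on a periodic grid (the grid size and test functions are illustrative):

```python
import numpy as np

N = 512
x = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies

def mult(symbol, f):
    """Apply a Fourier multiplier: (symbol(xi) \hat f)^v."""
    return np.real(np.fft.ifft(symbol * np.fft.fft(f)))

H = lambda f: mult(-1j * np.sign(xi), f)   # Hilbert transform

f = np.sin(2 * x)
g = np.exp(np.cos(x))                      # smooth periodic test functions

# Skew-symmetry: <Hf, g> = -<f, Hg>
assert np.isclose(np.sum(H(f) * g), -np.sum(f * H(g)))

# H d/dx has symbol (-i sgn(xi))(i xi) = |xi|, so
# sum H(g')*g equals the squared L^2 norm of D^{1/2} g
lhs = np.sum(mult(-1j * np.sign(xi) * (1j * xi), g) * g)
rhs = np.sum(mult(np.sqrt(np.abs(xi)), g) ** 2)
assert np.isclose(lhs, rhs)
```

The positivity of the right hand side is exactly why the $A_{421}$ contribution can be kept on the left hand side of the energy identity.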
Let us first prove the case $l=1.$ Formally, applying $\partial_{x}$ to equation (\ref{benjamin}), multiplying the result by $\partial_{x}u(x, t)\chi(x+vt; \varepsilon, b)$ and integrating by parts, one deduces \begin{eqnarray}\label{first} &&\frac{1}{2}\frac{\mbox{d}}{\mbox{d}t}\int(\partial_{x}u)^{2}(x, t)\chi(x+vt)\mbox{d}x \underbrace{-\frac{1}{2}v\int(\partial_{x}u)^{2}(x, t)\chi^{'}(x+vt)\mbox{d}x}_{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt)\mbox{d}x \underbrace{-\frac{1}{2}\int(\partial_{x}u)^{2}(x, t)\chi^{'''}(x+vt)\mbox{d}x}_{\text{$A_{2}$}}\nonumber\\ &&\underbrace{+\int\partial_{x}(u\partial_{x}u)\partial_{x}u(x, t)\chi(x+vt)\mbox{d}x}_{\text{$A_{3}$}} \underbrace{-\int H\partial_{x}^{3}u\partial_{x}u(x, t)\chi(x+vt)\mbox{d}x}_{\text{$A_{4}$}}=0, \end{eqnarray} where in $\chi$ we omit the parameters $\varepsilon$ and $b$. We estimate the integrals in (\ref{first}) term by term. Using (\ref{class})(iii) with $r=0$ and the support property of $\chi^{'}(x)$, it holds that \begin{eqnarray*} \int_{0}^{T}|A_{1}(t)|\mbox{d}t \leq \frac{v}{2}\int_{0}^{T}\int(\partial_{x}u)^{2}(x, t)\chi^{'}(x+vt)\mbox{d}x\mbox{d}t\leq c_{0} \end{eqnarray*} and similarly \begin{eqnarray*} \int_{0}^{T}|A_{2}(t)|\mbox{d}t \leq c_{0}.
\end{eqnarray*} For the term $A_{3}$, a direct computation yields \begin{eqnarray*} A_{3}(t) &=&\int(\partial_{x}u)^{3}\chi(x+vt)\mbox{d}x +\int u \partial_{x}^{2}u\partial_{x}u\chi(x+vt)\mbox{d}x\nonumber\\ &=&\frac{1}{2}\int(\partial_{x}u)^{3}\chi(x+vt)\mbox{d}x -\frac{1}{2}\int u \partial_{x}u\partial_{x}u\chi^{'}(x+vt)\mbox{d}x\nonumber \end{eqnarray*} \begin{eqnarray*} \quad\quad\quad\quad\quad\quad&\leq& \|\partial_{x}u\|_{L^{\infty}}\int(\partial_{x}u)^{2}\chi(x+vt)\mbox{d}x +\|u\|_{L^{\infty}}\int(\partial_{x}u)^{2}\chi^{'}(x+vt)\mbox{d}x\nonumber\\ &=&A_{31}+A_{32}. \end{eqnarray*} By the Sobolev embedding theorem, one obtains \begin{eqnarray*} \int_{0}^{T}|A_{32}(t)|\mbox{d}t \leq\sup_{[0, T]}\|u\|_{H^{3/4^{+}}} \int_{0}^{T}\int(\partial_{x}u)^{2}\chi^{'}(x+vt)\mbox{d}x\mbox{d}t. \end{eqnarray*} The term $A_{31}$ will be controlled by using (\ref{class})(ii) and the Gronwall inequality. Finally, to estimate $A_{4}$, we follow the idea described in \cite{ilp2}. Integration by parts yields \begin{eqnarray*} A_{4} &=&-\int H\partial_{x}^{3}u\partial_{x}u\chi(x+vt)\mbox{d}x\nonumber\\ &=&\int H\partial_{x}^{2}u\partial_{x}^{2}u\chi(x+vt)\mbox{d}x +\int H\partial_{x}^{2}u\partial_{x}u\chi^{'}(x+vt)\mbox{d}x\nonumber\\ &=&A_{41}+A_{42}. \end{eqnarray*} Since the Hilbert transform is skew-symmetric, we have \begin{eqnarray*} A_{41} &=&\int H\partial_{x}^{2}u\partial_{x}^{2}u\chi(x+vt)\mbox{d}x\nonumber\\ &=&-\int \partial_{x}^{2}u H(\partial_{x}^{2}u\chi(x+vt))\mbox{d}x\nonumber\\ &=&-\int \partial_{x}^{2}u H\partial_{x}^{2}u\chi(x+vt)\mbox{d}x -\int \partial_{x}^{2}u [H; \chi]\partial_{x}^{2}u\mbox{d}x\nonumber\\ &=&-A_{41}-\int \partial_{x}^{2}u [H; \chi]\partial_{x}^{2}u\mbox{d}x.
\end{eqnarray*} Therefore, (\ref{commutator}) leads to \begin{eqnarray*} A_{41} &=&-\frac{1}{2}\int \partial_{x}^{2}u [H; \chi]\partial_{x}^{2}u\mbox{d}x\nonumber\\ &=&-\frac{1}{2}\int u \partial_{x}^{2}[H; \chi]\partial_{x}^{2}u\mbox{d}x\nonumber\\ &\leq& c\|u\|_{L^{2}}\|\partial_{x}^{2}[H; \chi]\partial_{x}^{2}u\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}^{2}=c\|u_{0}\|_{L^{2}}^{2}. \end{eqnarray*} Concerning the term $A_{42}$, recalling the definition of $\eta(x; \varepsilon, b)$ in (\ref{eta}), we can write $A_{42}$ as \begin{eqnarray*} A_{42} &=&\int H\partial_{x}^{2}u\partial_{x}u\chi^{'}(x+vt)\mbox{d}x\nonumber\\ &=&\int H\partial_{x}^{2}u \eta \partial_{x}u\eta\mbox{d}x\nonumber\\ &=&\int H(\partial_{x}^{2}u \eta) \partial_{x}u\eta\mbox{d}x -\int [H; \eta]\partial_{x}^{2}u\partial_{x}u\eta\mbox{d}x\nonumber\\ &=&\int H\partial_{x}(\partial_{x}u \eta) \partial_{x}u\eta\mbox{d}x -\int H(\partial_{x}u \eta^{'}) \partial_{x}u\eta\mbox{d}x -\int [H; \eta]\partial_{x}^{2}u\partial_{x}u\eta\mbox{d}x\nonumber\\ &=&A_{421}+A_{422}+A_{423}. \end{eqnarray*} Plancherel's identity yields \begin{eqnarray}\label{1A421} A_{421}=\int H\partial_{x}(\partial_{x}u \eta) \partial_{x}u\eta\mbox{d}x =\int[D_{x}^{\frac{1}{2}}(\partial_{x}u \eta)]^{2}\mbox{d}x, \end{eqnarray} which is positive and will stay on the left hand side of (\ref{first}). The boundedness of the Hilbert transform in $L^{2}$ and the Young inequality produce \begin{eqnarray}\label{A422} \int_{0}^{T}|A_{422}|\mbox{d}t &=&\int_{0}^{T}\left|\int H(\partial_{x}u \eta^{'}) \partial_{x}u\eta\mbox{d}x\right|\mbox{d}t\nonumber\\ &\leq& c\int_{0}^{T}\int(\partial_{x}u)^{2} (\eta^{'})^{2}\mbox{d}x\mbox{d}t +c\int_{0}^{T}\int(\partial_{x}u)^{2}\eta^{2}\mbox{d}x\mbox{d}t.
\end{eqnarray} Employing the boundedness of $\eta$ and $\eta^{'}$ on the support of $\eta$ and using (\ref{class})(iii) with $r=0$, we obtain \begin{eqnarray*} \int_{0}^{T}|A_{422}|\mathrm{d}t\leq c_{0}. \end{eqnarray*} Invoking the commutator estimate (\ref{commutator}), we derive \begin{eqnarray*} |A_{423}| &=&\left|\int [H; \eta]\partial_{x}^{2}u\partial_{x}u\eta\mathrm{d}x\right|\nonumber\\ &\leq& \|[H; \eta]\partial_{x}^{2}u\|_{L^{2}}\|\partial_{x}u\eta\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}^{2}+c\|\partial_{x}u\eta\|_{L^{2}}^{2}\nonumber\\ &\leq& c\|u_{0}\|_{L^{2}}^{2}+c\|\partial_{x}u\eta\|_{L^{2}}^{2}. \end{eqnarray*} After integration in time, the term $\|\partial_{x}u\eta\|_{L^{2}}^{2}$ can be controlled as in (\ref{A422}). Substituting the above information in $(\ref{first})$, using the Gronwall inequality and (\ref{class})(ii), one obtains \begin{eqnarray}\label{conclusion1} &&\sup_{t\in[0, T]}\int (\partial_{x}u)^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x +\int_{0}^{T}\int (\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}u(x, t) \eta(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{0}.
\end{eqnarray} This completes the proof of the case $l=1.$ Next, we prove (\ref{114}) for the case $l=2.$ Applying $\partial_{x}^{2}$ to equation (\ref{benjamin}), multiplying by $\partial_{x}^{2}u(x, t)\chi(x+vt; \varepsilon, b)$ and integrating, we find \begin{eqnarray}\label{second} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int(\partial_{x}^{2}u)^{2}(x, t)\chi(x+vt)\mathrm{d}x \underbrace{-\frac{1}{2}v\int(\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt)\mathrm{d}x} _{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}^{3}u)^{2}(x, t)\chi^{'}(x+vt)\mathrm{d}x \underbrace{-\frac{1}{2}\int(\partial_{x}^{2}u)^{2}(x, t)\chi^{'''}(x+vt)\mathrm{d}x} _{\text{$A_{2}$}}\nonumber\\ &&\underbrace{+\int\partial_{x}^{2}(u\partial_{x}u)\partial_{x}^{2}u(x, t)\chi(x+vt)\mathrm{d}x} _{\text{$A_{3}$}} \underbrace{-\int H\partial_{x}^{4}u\partial_{x}^{2}u(x, t)\chi(x+vt)\mathrm{d}x} _{\text{$A_{4}$}}=0. \end{eqnarray} Invoking (\ref{conclusion1}), one has \begin{eqnarray*} \int_{0}^{T}|A_{1}(t)|\mathrm{d}t \leq |v|\int_{0}^{T}\int(\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t \leq c_{0}. \end{eqnarray*} Employing (\ref{prp4}) with $j=3$ and using (\ref{conclusion1}) with $(\varepsilon/3, b+\varepsilon)$ instead of $(\varepsilon, b)$, it holds that \begin{eqnarray}\label{2A2} \int_{0}^{T}|A_{2}(t)|\mathrm{d}t &\leq&\int_{0}^{T}\int(\partial_{x}^{2}u)^{2}(x, t)|\chi^{'''}(x+vt; \varepsilon, b)|\mathrm{d}x\mathrm{d}t\nonumber\\ &\leq& \int_{0}^{T}\int(\partial_{x}^{2}u)^{2}(x, t)|\chi^{'}(x+vt; \varepsilon/3, b+\varepsilon)|\mathrm{d}x\mathrm{d}t \leq c_{0}.
\end{eqnarray} Integration by parts yields \begin{eqnarray*} A_{3}(t) &=&3\int\partial_{x}u(\partial_{x}^{2}u)^{2}\chi(x+vt)\mathrm{d}x +\int u \partial_{x}^{3}u\partial_{x}^{2}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&\frac{5}{2}\int\partial_{x}u(\partial_{x}^{2}u)^{2}\chi(x+vt)\mathrm{d}x -\frac{1}{2}\int u (\partial_{x}^{2}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &\leq& \|\partial_{x}u\|_{L^{\infty}}\int(\partial_{x}^{2}u)^{2}\chi(x+vt)\mathrm{d}x +\|u\|_{L^{\infty}}\int(\partial_{x}^{2}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&A_{31}+A_{32}. \end{eqnarray*} Again, using the Sobolev embedding theorem and (\ref{conclusion1}), one obtains \begin{eqnarray*} \int_{0}^{T}|A_{32}(t)|\mathrm{d}t \leq\sup_{[0, T]}\|u\|_{H^{3/4^{+}}} \int_{0}^{T}\int(\partial_{x}^{2}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t\leq c_{0}. \end{eqnarray*} The term $A_{31}$ will be controlled by using (\ref{class})(ii) and the Gronwall inequality. We now estimate $A_{4}$. Integration by parts leads to \begin{eqnarray*} A_{4} &=&-\int H\partial_{x}^{4}u\partial_{x}^{2}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{3}u\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x +\int H\partial_{x}^{3}u\partial_{x}^{2}u\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&A_{41}+A_{42}. \end{eqnarray*} Invoking again the fact that the Hilbert transform is skew symmetric, we have \begin{eqnarray}\label{2A41} A_{41} &=&\int H\partial_{x}^{3}u\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&-\int \partial_{x}^{3}u H(\partial_{x}^{3}u\chi(x+vt))\mathrm{d}x\nonumber\\ &=&-\int \partial_{x}^{3}u H\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x -\int \partial_{x}^{3}u [H; \chi]\partial_{x}^{3}u\mathrm{d}x\nonumber\\ &=&-A_{41}-\int \partial_{x}^{3}u [H; \chi]\partial_{x}^{3}u\mathrm{d}x.
\end{eqnarray} Consequently, (\ref{commutator}) produces \begin{eqnarray*} A_{41} &=&-\frac{1}{2}\int \partial_{x}^{3}u [H; \chi]\partial_{x}^{3}u\mathrm{d}x\nonumber\\ &=&\frac{1}{2}\int u \partial_{x}^{3}[H; \chi]\partial_{x}^{3}u\mathrm{d}x\nonumber\\ &\leq& c\|u\|_{L^{2}}\|\partial_{x}^{3}[H; \chi]\partial_{x}^{3}u\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}^{2}=c\|u_{0}\|_{L^{2}}^{2}. \end{eqnarray*} Applying (\ref{eta}) yields \begin{eqnarray*} A_{42} &=&\int H\partial_{x}^{3}u\partial_{x}^{2}u\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{3}u \eta \partial_{x}^{2}u\eta\mathrm{d}x\nonumber\\ &=&\int H(\partial_{x}^{3}u \eta) \partial_{x}^{2}u\eta\mathrm{d}x -\int [H; \eta]\partial_{x}^{3}u\partial_{x}^{2}u\eta\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}(\partial_{x}^{2}u \eta) \partial_{x}^{2}u\eta\mathrm{d}x -\int H(\partial_{x}^{2}u \eta^{'}) \partial_{x}^{2}u\eta\mathrm{d}x -\int [H; \eta]\partial_{x}^{3}u\partial_{x}^{2}u\eta\mathrm{d}x\nonumber\\ &=&A_{421}+A_{422}+A_{423}. \end{eqnarray*} Similar to the treatment of (\ref{1A421}), we write $A_{421}$ as \begin{eqnarray*} A_{421} =\int H\partial_{x}(\partial_{x}^{2}u \eta) \partial_{x}^{2}u\eta\mathrm{d}x =\int[D_{x}^{\frac{1}{2}}(\partial_{x}^{2}u \eta)]^{2}\mathrm{d}x. \end{eqnarray*} The Young inequality leads to \begin{eqnarray}\label{2A422} \int_{0}^{T}|A_{422}|\mathrm{d}t &\leq&\int_{0}^{T}\left|\int H(\partial_{x}^{2}u \eta^{'}) \partial_{x}^{2}u\eta\mathrm{d}x\right|\mathrm{d}t\nonumber\\ &\leq& c_{0}\int_{0}^{T}\int(\partial_{x}^{2}u)^{2} (\eta^{'})^{2}\mathrm{d}x\mathrm{d}t +c_{0}\int_{0}^{T}\int(\partial_{x}^{2}u \eta)^{2}\mathrm{d}x\mathrm{d}t.
\end{eqnarray} Invoking (\ref{prp3}) and using (\ref{conclusion1}) with $(\varepsilon/3, b+\varepsilon)$ instead of $(\varepsilon, b)$ yield \begin{eqnarray*} &&\int_{0}^{T}\int(\partial_{x}^{2}u \eta)^{2}\mathrm{d}x\mathrm{d}t +\int_{0}^{T}\int(\partial_{x}^{2}u)^{2} (\eta^{'})^{2}\mathrm{d}x\mathrm{d}t\nonumber\\ &&\leq \int_{0}^{T}\int(\partial_{x}^{2}u )^{2}\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t +\int_{0}^{T}\int(\partial_{x}^{2}u)^{2}\chi^{'}(x+vt; \varepsilon/3, b+\varepsilon)\mathrm{d}x\mathrm{d}t\nonumber\\ &&\leq c_{0}. \end{eqnarray*} For the term $A_{423}$, (\ref{commutator}) leads to \begin{eqnarray*} A_{423} &=&-\int [H; \eta]\partial_{x}^{3}u\partial_{x}^{2}u\eta\mathrm{d}x\nonumber\\ &=&\int \partial_{x}[H; \eta]\partial_{x}^{3}u\partial_{x}u\eta\mathrm{d}x +\int [H; \eta]\partial_{x}^{3}u\partial_{x}u\eta^{'}\mathrm{d}x\nonumber\\ &\leq& \|\partial_{x}[H; \eta]\partial_{x}^{3}u\|_{L^{2}}\|\partial_{x}u\eta\|_{L^{2}} +\|[H; \eta]\partial_{x}^{3}u\|_{L^{2}}\|\partial_{x}u\eta^{'}\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}\|\partial_{x}u\eta\|_{L^{2}} +c\|u\|_{L^{2}}\|\partial_{x}u\eta^{'}\|_{L^{2}}\nonumber\\ &\leq& c\|u_{0}\|_{L^{2}}^{2}+c\|\partial_{x}u\eta\|_{L^{2}}^{2}+c\|\partial_{x}u\eta^{'}\|_{L^{2}}^{2}, \end{eqnarray*} which, after integration in time, can be controlled by a method similar to that in (\ref{2A422}). Accordingly, gathering the above information in (\ref{second}) and invoking the Gronwall inequality, one derives \begin{eqnarray}\label{conclusion2} &&\sup_{t\in[0, T]}\int (\partial_{x}^{2}u)^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x +\int_{0}^{T}\int (\partial_{x}^{3}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}^{2}u(x, t) \eta(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{0}.
\end{eqnarray} We prove the general case $l \geq 2$ by induction. In detail, we assume that if $u_{0}$ satisfies (\ref{113}) then (\ref{114}) holds, that is, \begin{eqnarray}\label{l} &&\sup_{t\in[0, T]}\int (\partial_{x}^{j}u)^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x +\int_{0}^{T}\int (\partial_{x}^{j+1}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}^{j}u(x, t) \eta(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{0} \end{eqnarray} for $j=1, 2, ..., l, $ $l\geq 2,$ and for any $\varepsilon, b, v>0.$ Now we have \begin{eqnarray*} u_{0}|_{(0, \infty)}\in H^{l+1}((0, \infty)). \end{eqnarray*} Thus, by the previous step, $(\ref{l})$ holds. Formally, we have, for $\varepsilon, b, v>0$, the following identity: \begin{eqnarray}\label{third} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi(x+vt)\mathrm{d}x \underbrace{-\frac{1}{2}v\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi^{'}(x+vt)\mathrm{d}x} _{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}^{l+2}u)^{2}(x, t)\chi^{'}(x+vt)\mathrm{d}x \underbrace{-\frac{1}{2}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi^{'''}(x+vt)\mathrm{d}x} _{\text{$A_{2}$}}\nonumber\\ &&\underbrace{+\int\partial_{x}^{l+1}(u\partial_{x}u)\partial_{x}^{l+1}u(x, t)\chi(x+vt)\mathrm{d}x} _{\text{$A_{3}$}} \underbrace{-\int H\partial_{x}^{l+3}u\partial_{x}^{l+1}u(x, t)\chi(x+vt)\mathrm{d}x} _{\text{$A_{4}$}}=0. \end{eqnarray} Invoking (\ref{l}) with $j=l$, it holds that \begin{eqnarray*} \int_{0}^{T}|A_{1}(t)|\mathrm{d}t \leq |v|\int_{0}^{T}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t \leq c_{0}.
\end{eqnarray*} Using a method similar to that for (\ref{2A2}), we find \begin{eqnarray*} \int_{0}^{T}|A_{2}(t)|\mathrm{d}t &\leq&\int_{0}^{T}\int(\partial_{x}^{l+1}u)^{2}(x, t)|\chi^{'''}(x+vt; \varepsilon, b)| \mathrm{d}x\mathrm{d}t\nonumber\\ &\leq& \int_{0}^{T}\int(\partial_{x}^{l+1}u)^{2}(x, t)|\chi^{'}(x+vt; \varepsilon/3, b+\varepsilon)|\mathrm{d}x\mathrm{d}t\nonumber\\ &\leq& c_{0}. \end{eqnarray*} We estimate $A_{3}$ by considering two cases: the first is $l+1=3$ and the second is $l+1\geq 4.$ When $l+1=3$, we have after integration by parts \begin{eqnarray*} A_{3}(t) &=&4\int\partial_{x}u(\partial_{x}^{3}u)^{2}\chi(x+vt)\mathrm{d}x +\int u \partial_{x}^{4}u\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x\nonumber\\ &+&3\int(\partial_{x}^{2}u)^{2}\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&\frac{7}{2}\int\partial_{x}u(\partial_{x}^{3}u)^{2}\chi(x+vt)\mathrm{d}x -\frac{1}{2}\int u (\partial_{x}^{3}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &+&3\int(\partial_{x}^{2}u)^{2}\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&A_{31}+A_{32}+A_{33}. \end{eqnarray*} A simple computation leads to \begin{eqnarray*} |A_{31}(t)| \leq\|\partial_{x}u\|_{L^{\infty}} \int(\partial_{x}^{3}u)^{2}\chi(x+vt)\mathrm{d}x, \end{eqnarray*} where the integral is the quantity to be estimated. Employing (\ref{l}) with $j=l=2$, one deduces \begin{eqnarray*} \int_{0}^{T}|A_{32}(t)|\mathrm{d}t &\leq&\sup_{t\in[0, T]}\|u\|_{L^{\infty}} \int_{0}^{T}\int(\partial_{x}^{3}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t\nonumber\\ &\leq&\sup_{t\in[0, T]}\|u\|_{H^{3/4^{+}}} \int_{0}^{T}\int(\partial_{x}^{3}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t\nonumber\\ &\leq& c_{0}.
\end{eqnarray*} Integration by parts leads to \begin{eqnarray*} A_{33}= 3\int(\partial_{x}^{2}u)^{2}\partial_{x}^{3}u\chi(x+vt)\mathrm{d}x =-\int(\partial_{x}^{2}u)^{3}\chi^{'}(x+vt)\mathrm{d}x. \end{eqnarray*} Using (\ref{id}), we have \begin{eqnarray}\label{1A331} |A_{33}|\leq \|\partial_{x}^{2}u\chi^{'}(\cdot+vt; \varepsilon, b )\|_{L^{\infty}} \int(\partial_{x}^{2}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2 )\mathrm{d}x, \end{eqnarray} where the integral is bounded for $t\in (0, T]$ by a constant $c_{0}(\varepsilon, b, v)$, by (\ref{l}) with $j=2$. Therefore, from the boundedness of $\chi^{'}$ and the Sobolev inequality $\|f\|_{L^{\infty}}\leq \|f\|_{H^{1, 1}}$, one has \begin{eqnarray}\label{1A332} |A_{33}| &\leq& c\|\partial_{x}^{2}u\chi^{'}(\cdot+vt; \varepsilon, b )\|_{L^{\infty}}^{2}+c\nonumber\\ &\leq& c\|(\partial_{x}^{2}u)^{2}\chi^{'}(\cdot+vt; \varepsilon, b )\|_{L^{\infty}}+c\nonumber\\ &\leq& c\int|\partial_{x}[(\partial_{x}^{2}u)^{2}\chi^{'}(x+vt; \varepsilon, b )]|\mathrm{d}x +c\nonumber\\ &\leq& c\int|\partial_{x}^{2}u\partial_{x}^{3}u\chi^{'}(x+vt; \varepsilon, b )|\mathrm{d}x +c\int|\partial_{x}^{2}u\partial_{x}^{2}u\chi^{''}(x+vt; \varepsilon, b )|\mathrm{d}x +c\nonumber\\ &\leq& c\int(\partial_{x}^{2}u)^{2}\chi^{'}(x+vt; \varepsilon, b )\mathrm{d}x +c\int(\partial_{x}^{3}u)^{2}\chi^{'}(x+vt; \varepsilon, b )\mathrm{d}x\nonumber\\ &+&c\int|\partial_{x}^{2}u\partial_{x}^{2}u\chi^{'}(x+vt; \varepsilon/3, b+\varepsilon )|\mathrm{d}x +c, \end{eqnarray} where we have used (\ref{prp4}) with $j=2$. Employing (\ref{l}) with $j=1, 2$ and integrating in time, we obtain \begin{eqnarray}\label{1A333} \int_{0}^{T}|A_{33}|\mathrm{d}t \leq c_{0}. \end{eqnarray} We turn our attention to the second case, $l+1\geq 4$, in $A_{3}$.
By integration by parts, one derives \begin{eqnarray*} A_{3} &=&d_{0}\int u(\partial_{x}^{l+1}u)^{2}\chi^{'}(x+vt)\mathrm{d}x +d_{1}\int \partial_{x}u(\partial_{x}^{l+1}u)^{2}\chi(x+vt)\mathrm{d}x\nonumber\\ &+&d_{2}\int\partial_{x}^{2}u\partial_{x}^{l}u\partial_{x}^{l+1}u\chi(x+vt)\mathrm{d}x +\sum_{j=3}^{l-1}\int\partial_{x}^{j}u\partial_{x}^{l+2-j}u\partial_{x}^{l+1}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&A_{3,0}+A_{3,1}+A_{3,2}+\sum_{j=3}^{l-1}A_{3, j}. \end{eqnarray*} Using (\ref{l}) with $j=l$ and the Sobolev embedding, one obtains \begin{eqnarray*} \int_{0}^{T}|A_{3, 0}|\mathrm{d}t &\leq& \int_{0}^{T}\|u\|_{L^{\infty}}\int(\partial_{x}^{l+1}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t\nonumber\\ &\leq& \sup_{0\leq t \leq T }\|u\|_{H^{3/4^{+}}}\int_{0}^{T}\int(\partial_{x}^{l+1}u)^{2}\chi^{'}(x+vt)\mathrm{d}x\mathrm{d}t \leq c_{0}. \end{eqnarray*} Direct computation leads to \begin{eqnarray*} |A_{3, 1}| \leq \|\partial_{x}u\|_{L^{\infty}}\int(\partial_{x}^{l+1}u)^{2}\chi(x+vt)\mathrm{d}x, \end{eqnarray*} which can be handled by the Gronwall inequality and (\ref{class})(ii). To estimate $A_{3, 2}$ we follow the argument of the previous case. Accordingly, it remains to estimate $\sum_{j=3}^{l-1}A_{3, j}$, which only appears when $l-1\geq 3.$ The Young inequality leads to \begin{eqnarray*} |A_{3, j}| &\leq& \frac{1}{2}\int(\partial_{x}^{j}u\partial_{x}^{l+2-j}u)^{2}\chi(x+vt)\mathrm{d}x +\frac{1}{2}\int(\partial_{x}^{l+1}u)^{2}\chi(x+vt)\mathrm{d}x\nonumber\\ &=&A_{3, j, 1}+\frac{1}{2}\int(\partial_{x}^{l+1}u)^{2}\chi(x+vt)\mathrm{d}x, \end{eqnarray*} where the last integral is the quantity to be estimated.
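Since terms like $A_{3, 1}$ are absorbed by the Gronwall inequality throughout this section, we record, for the reader's convenience, the standard differential form of that inequality in our setting (a reminder, not a new estimate):

```latex
% Differential form of the Gronwall inequality:
% if y >= 0 is absolutely continuous on [0, T] and
%   y'(t) <= a(t) y(t) + b(t),  with a, b >= 0 integrable on [0, T],
% then
\[
y(t)\;\leq\; e^{\int_{0}^{t}a(s)\,\mathrm{d}s}
\Big(y(0)+\int_{0}^{t}b(s)\,\mathrm{d}s\Big),
\qquad 0\leq t\leq T.
\]
% Here y(t) plays the role of the weighted energy
% \int (\partial_x^{l+1}u)^2 \chi(x+vt) dx, with a(t) = c\,\|\partial_x u(t)\|_{L^\infty}.
```

Applicability of this form relies on $\sup_{[0,T]}\|\partial_{x}u\|_{L^{\infty}}$ being finite, which is exactly what (\ref{class})(ii) provides.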
To handle $A_{3, j, 1}$, one observes that $j, l+2-j\leq l-1$ and accordingly \begin{eqnarray*} |A_{3, j, 1}| \leq \|(\partial_{x}^{j}u)^{2}\chi(\cdot+vt; \varepsilon/10, \varepsilon/2)\|_{L^{\infty}} \int(\partial_{x}^{l+2-j}u)^{2}\chi(x+vt; \varepsilon, b)\mathrm{d}x, \end{eqnarray*} where the last integral is bounded by (\ref{l}). Moreover, the Sobolev embedding yields \begin{eqnarray*} &&\|(\partial_{x}^{j}u)^{2}\chi(\cdot+vt; \varepsilon/10, \varepsilon/2)\|_{L^{\infty}}\nonumber\\ &&\leq \|\partial_{x}[(\partial_{x}^{j}u)^{2}\chi(\cdot+vt; \varepsilon/10, \varepsilon/2)]\|_{L^{1}}\nonumber\\ &&\leq \|\partial_{x}^{j}u\partial_{x}^{j+1}u\chi(\cdot+vt; \varepsilon/10, \varepsilon/2)\|_{L^{1}} +\|\partial_{x}^{j}u\partial_{x}^{j}u\chi^{'}(\cdot+vt; \varepsilon/10, \varepsilon/2)\|_{L^{1}}\nonumber\\ &&\leq c\int(\partial_{x}^{j}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x +c\int(\partial_{x}^{j+1}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x\nonumber\\ &&+c\int(\partial_{x}^{j}u)^{2}\chi^{'}(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x, \end{eqnarray*} which can be treated after integration in time by invoking (\ref{l}). Finally, we estimate $A_{4}$. After integration by parts, we find \begin{eqnarray*} A_{4} &=&-\int H\partial_{x}^{l+3}u\partial_{x}^{l+1}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{l+2}u\partial_{x}^{l+2}u\chi(x+vt)\mathrm{d}x +\int H\partial_{x}^{l+2}u\partial_{x}^{l+1}u\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&A_{41}+A_{42}.
\end{eqnarray*} Similar to (\ref{2A41}), we write $A_{41}$ as \begin{eqnarray*} A_{41} &=&\int H\partial_{x}^{l+2}u\partial_{x}^{l+2}u\chi(x+vt)\mathrm{d}x\nonumber\\ &=&-\int \partial_{x}^{l+2}u H(\partial_{x}^{l+2}u\chi(x+vt))\mathrm{d}x\nonumber\\ &=&-\int \partial_{x}^{l+2}u H\partial_{x}^{l+2}u\chi(x+vt)\mathrm{d}x -\int \partial_{x}^{l+2}u [H; \chi]\partial_{x}^{l+2}u\mathrm{d}x\nonumber\\ &=&-A_{41}-\int \partial_{x}^{l+2}u [H; \chi]\partial_{x}^{l+2}u\mathrm{d}x. \end{eqnarray*} Consequently, there holds \begin{eqnarray*} A_{41} &=&-\frac{1}{2}\int \partial_{x}^{l+2}u [H; \chi]\partial_{x}^{l+2}u\mathrm{d}x\nonumber\\ &=&-\frac{1}{2}(-1)^{l+2}\int u \partial_{x}^{l+2}[H; \chi]\partial_{x}^{l+2}u\mathrm{d}x\nonumber\\ &\leq& c\|u\|_{L^{2}}\|\partial_{x}^{l+2}[H; \chi]\partial_{x}^{l+2}u\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}^{2}=c\|u_{0}\|_{L^{2}}^{2}. \end{eqnarray*} Recalling $\eta=\sqrt{\chi^{'}}$, we therefore have \begin{eqnarray*} A_{42} &=&\int H\partial_{x}^{l+2}u\partial_{x}^{l+1}u\chi^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{l+2}u \eta \partial_{x}^{l+1}u\eta\mathrm{d}x\nonumber\\ &=&\int H(\partial_{x}^{l+2}u \eta) \partial_{x}^{l+1}u\eta\mathrm{d}x -\int [H; \eta]\partial_{x}^{l+2}u\partial_{x}^{l+1}u\eta\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}(\partial_{x}^{l+1}u \eta) \partial_{x}^{l+1}u\eta\mathrm{d}x -\int H(\partial_{x}^{l+1}u \eta^{'}) \partial_{x}^{l+1}u\eta\mathrm{d}x\nonumber\\ &-&\int [H; \eta]\partial_{x}^{l+2}u\partial_{x}^{l+1}u\eta\mathrm{d}x\nonumber\\ &=&A_{421}+A_{422}+A_{423}. \end{eqnarray*} For the term $A_{421}$, one has \begin{eqnarray*} A_{421}=\int H\partial_{x}(\partial_{x}^{l+1}u \eta) \partial_{x}^{l+1}u\eta\mathrm{d}x =\int[D_{x}^{\frac{1}{2}}(\partial_{x}^{l+1}u \eta)]^{2}\mathrm{d}x.
\end{eqnarray*} The H\"{o}lder and Young inequality yield \begin{eqnarray}gin{eqnarray}\lambdabel{3A422} \int_{0}^{T}|A_{422}|\mathbbox{d}t &\leq&\int_{0}^{T}\left|\int H(\partial_{x}^{l+1}u \eta^{'}) \partial_{x}^{l+1}u\eta\right|\mathbbox{d}x\mathbbox{d}t\nonumber\\ &\leq& c\int_{0}^{T}\int(\partial_{x}^{l+1}u)^{2} (\eta^{'})^{2}\mathbbox{d}x\mathbbox{d}t +c\int_{0}^{T}\int(\partial_{x}^{l+1}u \eta)^{2}\mathbbox{d}x\mathbbox{d}t. \end{eqnarray} Thus, we can handle this term by using a similar method as that in (\ref{2A422}). Invoking (\ref{commutator}), one finds \begin{eqnarray}gin{eqnarray*} A_{423} &=&-\int [H; \eta]\partial_{x}^{l+2}u\partial_{x}^{l+1}u\eta\mathbbox{d}x\nonumber\\ &=&\int \partial_{x}[H; \eta]\partial_{x}^{l+2}u\partial_{x}^{l}u\eta\mathbbox{d}x +\int [H; \eta]\partial_{x}^{l+2}u\partial_{x}^{l}u\eta^{'}\mathbbox{d}x\nonumber\\ &\leq& \|\partial_{x}[H; \eta]\partial_{x}^{l+2}u\|_{L^{2}}\|\partial_{x}^{l}u\eta\|_{L^{2}} +\|[H; \eta]\partial_{x}^{l+2}u\|_{L^{2}}\|\partial_{x}^{l}u\eta^{'}\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}\|\partial_{x}^{l}u\eta\|_{L^{2}} +c\|u\|_{L^{2}}\|\partial_{x}^{l}u\eta^{'}\|_{L^{2}}\nonumber\\ &\leq& c\|u_{0}\|_{L^{2}}^{2}+c\|\partial_{x}^{l}u\eta\|_{L^{2}}^{2}+c\|\partial_{x}^{l}u\eta^{'}\|_{L^{2}}^{2}, \end{eqnarray*} which can also be controlled by using a similar way as that in (\ref{2A422}). As a consequence, substituting the above information into (\ref{third}) and employing the Gronwall inequality, one deduces \begin{eqnarray}gin{eqnarray}\lambdabel{conclusion3} &&\sup_{t\in[0, T]}\int (\partial_{x}^{l+1}u)^{2}\chi(x+vt; \varepsilonsilon, b)\mathbbox{d}x +\int_{0}^{T}\int (\partial_{x}^{l+2}u)^{2}\chi^{'}(x+vt; \varepsilonsilon, b)\mathbbox{d}x\mathbbox{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}^{l+1}u \eta)]^{2}\mathbbox{d}x\mathbbox{d}t \leq c_{0}. \end{eqnarray} This close our induction. 
To justify the previous formal computations, we refer the reader to \cite{ilp1} and omit the details here. \section{Proof of Theorem \ref{persistence}} In this section, we prove Theorem \ref{persistence}. We first prove (\ref{118}) for any $n\in\mathbb{Z}^{+}.$ Note that $x_{+}^{n}u_{0}\in L^{2}(\mathbb{R})$ implies $\chi_{n}(x; \varepsilon, b)u_{0} \in L^{2}(\mathbb{R}).$ Multiplying equation (\ref{benjamin}) by $u(x, t)\chi_{n}(x+vt; \varepsilon, b)$ and integrating, we obtain \begin{eqnarray}\label{zero} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int u^{2}(x, t)\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}v\int u^{2}(x, t)\chi^{'}_{n}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}u)^{2}(x, t)\chi^{'}_{n}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}\int u^{2}(x, t)\chi^{'''}_{n}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{2}$}}\nonumber\\ &&\underbrace{+\int u\partial_{x}u\,u(x, t)\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{3}$}} \underbrace{-\int H\partial_{x}^{2}u\,u(x, t)\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{4}$}}=0. \end{eqnarray} Employing (\ref{prp8}) with $j=1$, one easily deduces \begin{eqnarray*}\label{46} |A_{1}(t)| &\leq& |v|\int u^{2}\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x +|v|c(n, b)\int u^{2}\mathrm{d}x\nonumber\\ &\leq& |v|\int u^{2}\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x +|v|c(n, b)\|u_{0}\|_{L^{2}}^{2}. \end{eqnarray*} Again, invoking (\ref{prp8}) with $j=3,$ we derive \begin{eqnarray*}\label{45} |A_{2}(t)| &\leq& c(n, b)\int u^{2}\mathrm{d}x +\int u^{2}\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &\leq& c(n, b)\|u_{0}\|_{L^{2}}^{2} +\int u^{2}\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x.
\end{eqnarray*} We estimate $A_{3}$ as \begin{eqnarray*}\label{47} |A_{3}(t)| \leq \|\partial_{x}u\|_{L^{\infty}}\int u^{2}\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x. \end{eqnarray*} For $A_{4}$, integration by parts produces \begin{eqnarray*}\label{48} A_{4} &=&-\int H\partial_{x}^{2}u\,u\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}u\partial_{x}u\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x +\int H\partial_{x}u\,u\chi_{n}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{41}+A_{42}. \end{eqnarray*} Reasoning as in (\ref{2A41}), it holds that \begin{eqnarray*} A_{41} &=&\int H\partial_{x}u\partial_{x}u\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&-\int \partial_{x}u H(\partial_{x}u\chi_{n}(x+vt; \varepsilon, b))\mathrm{d}x\nonumber\\ &=&-\int \partial_{x}u H\partial_{x}u\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x -\int \partial_{x}u [H; \chi_{n}]\partial_{x}u\mathrm{d}x\nonumber\\ &=&-A_{41}-\int \partial_{x}u [H; \chi_{n}]\partial_{x}u\mathrm{d}x. \end{eqnarray*} As a result, we find \begin{eqnarray*} A_{41} &=&-\frac{1}{2}\int \partial_{x}u [H; \chi_{n}]\partial_{x}u\mathrm{d}x\nonumber\\ &=&\frac{1}{2}\int u \partial_{x}[H; \chi_{n}]\partial_{x}u\mathrm{d}x\nonumber\\ &\leq& c\|u\|_{L^{2}}\|\partial_{x}[H; \chi_{n}]\partial_{x}u\|_{L^{2}}\nonumber\\ &\leq& c\|u\|_{L^{2}}^{2}=c\|u_{0}\|_{L^{2}}^{2}.
\end{eqnarray*} Integration by parts and (\ref{eta}) lead to \begin{eqnarray}\label{4A42} A_{42} &=&\int H\partial_{x}u\,u\chi_{n}^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}u \eta_{n}u\eta_{n}\mathrm{d}x\nonumber\\ &=&\int H(\partial_{x}u \eta_{n}) u\eta_{n}\mathrm{d}x -\int [H; \eta_{n}]\partial_{x}u\,u\eta_{n}\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}(u \eta_{n}) u\eta_{n}\mathrm{d}x -\int H(u \eta_{n}^{'}) u\eta_{n}\mathrm{d}x -\int [H; \eta_{n}]\partial_{x}u\,u\eta_{n}\mathrm{d}x\nonumber\\ &=&A_{421}+A_{422}+A_{423}. \end{eqnarray} Again, using the Plancherel theorem, we write $A_{421}$ as \begin{eqnarray*} A_{421}=\int H\partial_{x}(u \eta_{n}) u\eta_{n}\mathrm{d}x =\int[D_{x}^{\frac{1}{2}}(u \eta_{n})]^{2}\mathrm{d}x. \end{eqnarray*} For the term $A_{422}$, note that, because of the factor $x^{n}$, the support of $\chi_{n}^{'}$ is in general no longer contained in $[\varepsilon, b]$. As a consequence, $\eta_{n}$ and $\eta_{n}^{'}$ may be unbounded. However, (\ref{prp10}) and (\ref{prp11}) provide a relation between $\chi_{n}$ and $\chi_{n-1}$, so we can use induction to close the proof. Let us first consider the case $n=0$, so that $\eta_{n}=\eta$. At this point, we derive \begin{eqnarray*}\label{4A422} A_{422}= -\int H(u \eta^{'}) u\eta\mathrm{d}x \leq c\|u \eta^{'}\|_{L^{2}}\|u \eta\|_{L^{2}} \leq c\|u \|_{L^{2}}^{2} \leq c\|u_{0} \|_{L^{2}}^{2}. \end{eqnarray*} Invoking (\ref{commutator}), we find \begin{eqnarray*} A_{423} =-\int [H; \eta]\partial_{x}u\,u\eta\mathrm{d}x \leq \|[H; \eta]\partial_{x}u\|_{L^{2}}\|u\eta\|_{L^{2}} \leq c\|u \|_{L^{2}}^{2} \leq c\|u_{0} \|_{L^{2}}^{2}.
\end{eqnarray*} Hence, we obtain the following inequality when $n=0:$ \begin{eqnarray*}\label{n0} &&\sup_{t\in[0, T]}\int u^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x +\int_{0}^{T}\int (\partial_{x}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(u(x, t) \eta(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{2}. \end{eqnarray*} Let us assume the case $n\geq 0$ holds, i.e., \begin{eqnarray}\label{nn} &&\sup_{t\in[0, T]}\int u^{2}(x, t)\chi_{n}(x+vt; \varepsilon, b)\mathrm{d}x +\int_{0}^{T}\int (\partial_{x}u)^{2}(x, t)\chi_{n}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(u(x, t) \eta_{n}(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{2}. \end{eqnarray} We shall prove the case $n+1$. We only need to treat the terms $A_{422}$ and $A_{423}$ with $n+1$ in place of $n$. Employing (\ref{prp10}) and (\ref{prp11}), we find \begin{eqnarray*}\label{4A4222} A_{422} &=& -\int H(u \eta_{n+1}^{'}) u\eta_{n+1}\mathrm{d}x\nonumber\\ &\leq& c_{2}\|u \eta_{n+1}^{'}\|_{L^{2}}\|u \eta_{n+1}\|_{L^{2}}\nonumber\\ &\leq& c_{2}\int u^{2} (\eta_{n+1}^{'})^{2}\mathrm{d}x +c_{2}\int u^{2} (\eta_{n+1})^{2}\mathrm{d}x\nonumber\\ &\leq& c_{2}\int u^{2} \chi_{n}(x+vt; \varepsilon/3, b+\varepsilon)\mathrm{d}x, \end{eqnarray*} which can be handled by using (\ref{nn}) with $(\varepsilon/3, b+\varepsilon)$ instead of $(\varepsilon, b).$ The term $A_{423}$ can be controlled similarly. This completes the proof of (\ref{118}). For convenience, we will invoke (\ref{nn}) as an established conclusion in the remainder of this paper. Next, we prove (\ref{119}).
We first prove the case $n=1.$ From (\ref{nn}) with $n=1$ and (\ref{prp2}), it follows that for any $\delta> 0$ there exists $\hat{t}\in (0, \delta)$ such that \begin{eqnarray*}\label{410} \int(\partial_{x}u)^{2}(x, \hat{t})\chi(x; \varepsilon, b)\mathrm{d}x <\infty. \end{eqnarray*} A smooth solution $u$ to the IVP (\ref{benjamin}) satisfies the following identity: \begin{eqnarray}\label{411} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int(\partial_{x}u)^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}v\int(\partial_{x}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}\int(\partial_{x}u)^{2}(x, t)\chi^{'''}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{2}$}}\nonumber\\ &&\underbrace{+\int\partial_{x}(u\partial_{x}u)\partial_{x}u(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{3}$}} \underbrace{-\int H\partial_{x}^{3}u(x, t)\partial_{x}u\chi(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{4}$}}=0. \end{eqnarray} Using (\ref{nn}) with $n=0$, one obtains \begin{eqnarray*}\label{412} \int_{\hat{t}}^{T}|A_{1}(t)|\mathrm{d}t \leq c_{3}. \end{eqnarray*} Again, using (\ref{nn}) with $n=0$ and $(\varepsilon/3, b+\varepsilon)$ instead of $(\varepsilon, b)$, there holds \begin{eqnarray*}\label{414} \int_{\hat{t}}^{T}|A_{2}(t)|\mathrm{d}t \leq c_{3}.
\end{eqnarray*} For the term $A_{3}$, integration by parts leads to \begin{eqnarray*}\label{415} A_{3}(t) &=&\frac{1}{2}\int(\partial_{x}u)^{3}\chi(x+vt; \varepsilon, b)\mathrm{d}x -\frac{1}{2}\int u \partial_{x}u\partial_{x}u\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &\leq& \|\partial_{x}u\|_{L^{\infty}}\int(\partial_{x}u)^{2}\chi(x+vt; \varepsilon, b)\mathrm{d}x +\|u\|_{L^{\infty}}\int(\partial_{x}u)^{2}\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{31}+A_{32}. \end{eqnarray*} By the Sobolev embedding theorem, one obtains \begin{eqnarray*} \int_{\hat{t}}^{T}|A_{32}(t)|\mathrm{d}t \leq\sup_{t\in[\hat{t}, T]}\|u\|_{H^{3/4^{+}}} \int_{\hat{t}}^{T}\int(\partial_{x}u)^{2}\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t. \end{eqnarray*} The term $A_{31}$ will be controlled by using (\ref{class})(ii) and the Gronwall inequality. The term $A_{4}$ can be estimated as in the proof of Theorem \ref{regularity}; we omit the details. Substituting the above information in $(\ref{411})$, using the Gronwall inequality and (\ref{class})(ii), one obtains \begin{eqnarray}\label{conclusion5} &&\sup_{t\in[\hat{t}, T]}\int (\partial_{x}u)^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x +\int_{\hat{t}}^{T}\int (\partial_{x}^{2}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{\hat{t}}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}u(x, t) \eta(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray} Next, we turn to the case $n=2$ in the proof of (\ref{119}).
Since $x_{+}u_{0}\in L^{2}(\mathbb{R})$, using (\ref{nn}) with $n=2$, one finds \begin{eqnarray}\label{419} &&\sup_{t\in[0, T]}\int u^{2}(x, t)\chi_{2}(x+vt; \varepsilon, b)\mathrm{d}x +\int_{0}^{T}\int (\partial_{x}u)^{2}(x, t)\chi_{2}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{0}^{T}\int [D_{x}^{\frac{1}{2}}(u(x, t) \eta_{2}(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{2}. \end{eqnarray} Using (\ref{419}) and (\ref{prp7}), we derive that for any $\delta> 0$ there exists $\hat{t}\in (0, \delta)$ such that \begin{eqnarray*}\label{421} \int (\partial_{x}u)^{2}(x, \hat{t})\chi_{1}(x; \varepsilon, b)\mathrm{d}x <\infty. \end{eqnarray*} Consider the following identity: \begin{eqnarray}\label{z411} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int(\partial_{x}u)^{2}\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}v\int(\partial_{x}u)^{2}(x, t)\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}^{2}u)^{2}(x, t)\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}\int(\partial_{x}u)^{2}(x, t)\chi_{1}^{'''}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{2}$}}\\ &&\underbrace{+\int\partial_{x}(u\partial_{x}u)(x, t)\partial_{x}u\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{3}$}} \underbrace{-\int H\partial_{x}^{3}u\partial_{x}u(x, t)\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{4}$}}=0.\nonumber \end{eqnarray} Invoking (\ref{nn}) with $n=1$, it holds that \begin{eqnarray*}\label{423} \int_{\hat{t}}^{T}|A_{1}(t)|\mathrm{d}t \leq |v|\int_{\hat{t}}^{T}\int(\partial_{x}u)^{2}(x, t)\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t \leq c_{3}.
\end{eqnarray*} For the term $A_{2}$, employing the fact that $\chi_{1}^{'''}$ is supported in $[\varepsilon, b]$, we deduce \begin{eqnarray*}\label{426} \int_{\hat{t}}^{T}|A_{2}(t)|\mathrm{d}t \leq c_{3}, \end{eqnarray*} where we have used (\ref{class})(iii) with $r=0$. Concerning the term $A_{3}$, integration by parts leads to \begin{eqnarray*}\label{w415} A_{3}(t) &=&\frac{1}{2}\int(\partial_{x}u)^{3}\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x -\frac{1}{2}\int u \partial_{x}u\partial_{x}u\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &\leq& \|\partial_{x}u\|_{L^{\infty}}\int(\partial_{x}u)^{2}\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x +\|u\|_{L^{\infty}}\int(\partial_{x}u)^{2}\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{31}+A_{32}, \end{eqnarray*} which can be treated as in the proof of Theorem \ref{regularity}. Finally, we control $A_{4}$. Integration by parts yields \begin{eqnarray*} A_{4} &=&-\int H\partial_{x}^{3}u\partial_{x}u\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{2}u\partial_{x}^{2}u\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x +\int H\partial_{x}^{2}u\partial_{x}u\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{41}+A_{42}. \end{eqnarray*} The term $A_{41}$ can be handled by using integration by parts, the skew symmetry of the Hilbert transform and the commutator estimate (\ref{commutator}); we omit the details.
Now, we focus on the term $A_{42}.$ Recalling that $(\eta_{1})^{2}=\chi_{1}^{'},$ one has \begin{eqnarray*} A_{42} &=&\int H\partial_{x}^{2}u\partial_{x}u\chi_{1}^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{2}u \eta_{1} \partial_{x}u\eta_{1}\mathrm{d}x\nonumber \end{eqnarray*} \begin{eqnarray*} &=&\int H(\partial_{x}^{2}u \eta_{1}) \partial_{x}u\eta_{1}\mathrm{d}x -\int [H; \eta_{1}]\partial_{x}^{2}u\partial_{x}u\eta_{1}\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}(\partial_{x}u \eta_{1}) \partial_{x}u\eta_{1}\mathrm{d}x -\int H(\partial_{x}u \eta_{1}^{'}) \partial_{x}u\eta_{1}\mathrm{d}x -\int [H; \eta_{1}]\partial_{x}^{2}u\partial_{x}u\eta_{1}\mathrm{d}x\nonumber\\ &=&A_{421}+A_{422}+A_{423}. \end{eqnarray*} For the term $A_{421}$, we have \begin{eqnarray*} A_{421}=\int H\partial_{x}(\partial_{x}u \eta_{1}) \partial_{x}u\eta_{1}\mathrm{d}x =\int[D_{x}^{\frac{1}{2}}(\partial_{x}u \eta_{1})]^{2}\mathrm{d}x. \end{eqnarray*} The Young inequality leads to \begin{eqnarray*}\label{5A422} \int_{\hat{t}}^{T}|A_{422}|\mathrm{d}t &=&\int_{\hat{t}}^{T}\left|\int H(\partial_{x}u \eta_{1}^{'}) \partial_{x}u\eta_{1}\mathrm{d}x\right|\mathrm{d}t\nonumber\\ &\leq& c_{3}\int_{\hat{t}}^{T}\int(\partial_{x}u)^{2} (\eta_{1}^{'})^{2}\mathrm{d}x\mathrm{d}t +c_{3}\int_{\hat{t}}^{T}\int(\partial_{x}u)^{2} \eta_{1}^{2}\mathrm{d}x\mathrm{d}t. \end{eqnarray*} Note that $\eta_{1}$ is unbounded on the support of $\chi_{1}^{'}$. However, invoking (\ref{nn}) with $n=1,$ we find \begin{eqnarray*} \int_{\hat{t}}^{T}\int(\partial_{x}u)^{2} \eta_{1}^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray*} Now, a simple computation yields $\chi(x; \varepsilon, b)\leq \chi_{1}^{'}(x; \varepsilon, b)$.
This fact, combined with (\ref{prp11}) and (\ref{nn}) with $(\varepsilon/3, b+\varepsilon)$ in place of $(\varepsilon, b)$, permits us to conclude \begin{eqnarray*} \int_{\hat{t}}^{T}\int(\partial_{x}u)^{2} (\eta_{1}^{'})^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray*} Thus we have controlled $A_{422}$ after integration in time. The term $A_{423}$ can be handled by using the above method and (\ref{commutator}); we omit the details. As a result, we conclude after invoking the Gronwall inequality that \begin{eqnarray}\label{conclusion6} &&\sup_{t\in[\hat{t}, T]}\int (\partial_{x}u)^{2}(x, t)\chi_{1}(x+vt; \varepsilon, b)\mathrm{d}x +\int_{\hat{t}}^{T}\int (\partial_{x}^{2}u)^{2}(x, t)\chi_{1}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{\hat{t}}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}u(x, t) \eta_{1}(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray} By (\ref{conclusion6}), for any $\delta> 0$ there exists $\hat{\hat{t}}\in (\hat{t}, \delta)$ such that \begin{eqnarray*} \int (\partial_{x}^{2}u)^{2}(x, \hat{\hat{t}})\chi_{1}^{'}(x; \varepsilon, b)\mathrm{d}x< \infty, \end{eqnarray*} which produces \begin{eqnarray*} \int (\partial_{x}^{2}u)^{2}(x, \hat{\hat{t}})\chi(x; \varepsilon, b)\mathrm{d}x< \infty. \end{eqnarray*} Hence, the result on propagation of regularity (\ref{conclusion2}) yields: \begin{eqnarray*}\label{431} &&\sup_{t\in[\delta, T]}\int (\partial_{x}^{2}u)^{2}(x, t)\chi(x+vt; \varepsilon, b)\mathrm{d}x +\int_{\delta}^{T}\int (\partial_{x}^{3}u)^{2}(x, t)\chi^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{\delta}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}^{2}u(x, t) \eta(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray*} This completes the proof of the case $n=2$. For the general case, we use induction.
Given $(m, l)\in \mathbb{Z}^{+}\times \mathbb{Z}^{+}$, we say that \begin{eqnarray}\label{432} (m, l)>(\hat{m}, \hat{l})\Leftrightarrow \begin{cases} (i)\quad m>\hat{m},\\ \qquad\text{or}\\ (ii) \quad m=\hat{m}\quad \text{and}\quad l>\hat{l}. \end{cases} \end{eqnarray} Similarly, we say that $(m, l)\geq(\hat{m}, \hat{l})$ if $(ii)$ on the right-hand side of (\ref{432}) holds with $\geq$ instead of $>$. The general case $(m, l)$ reads: for any $\varepsilon, b, v>0$, \begin{eqnarray}\label{433} &&\sup_{t \in [\delta, T]}\int (\partial_{x}^{l}u)^{2}(x, t)\chi_{m}(x+vt; \varepsilon, b)\mathrm{d}x +\int_{\delta}^{T}\int (\partial_{x}^{l+1}u)^{2}(x, t)\chi_{m}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{\delta}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}^{l}u(x, t) \eta_{m}(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray} Notice that we have already proved the following cases: \begin{enumerate} \item $(0,1)$ and $(1, 0)$; \item $(0, 2)$, $(1, 1)$ and $(2, 0)$; \item under the hypothesis $x_{+}^{n/2}u_{0}\in L^{2}(\mathbb{R})$, we proved (\ref{nn}), i.e. $(n, 0)$ for all $n\in \mathbb{Z}^{+}$; \item by Theorem \ref{regularity} (propagation of regularity): if (\ref{433}) holds with $(m, l)=(1, l)$ ($\delta/2$ instead of $\delta$), then there exists $\hat{t}\in (\delta/2, \delta)$ such that \begin{eqnarray*} \int (\partial_{x}^{l+1}u)^{2}\chi_{1}^{'}(x+v\hat{t}; \varepsilon, b)\mathrm{d}x < \infty, \end{eqnarray*} which implies that \begin{eqnarray*} \int (\partial_{x}^{l+1}u)^{2}\chi(x+v\hat{t}; \varepsilon, b)\mathrm{d}x < \infty. \end{eqnarray*} \end{enumerate} By the propagation of regularity (Theorem \ref{regularity}), one then has (\ref{433}) with $(m, l)=(0, l+1)$; that is, $(1, l)$ implies $(0, l+1)$ for any $l\in \mathbb{Z}^{+}$.
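As a small aside (not part of the original argument), the ordering just defined is the lexicographic order on $\mathbb{Z}^{+}\times \mathbb{Z}^{+}$; two concrete instances make the induction sweep explicit:

```latex
% Clause (i) compares the first entries; clause (ii) breaks ties by the second:
(2,0) > (1,7) \quad \text{by (i), since } 2>1, \qquad
(1,3) > (1,2) \quad \text{by (ii), since } 1=1 \text{ and } 3>2.
% In particular, along each diagonal m+l=n the cases are exhausted in the order
%   (n,0) > (n-1,1) > \cdots > (1,n-1) > (0,n),
% which is the order in which the induction below visits them.
```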
Now we assume (\ref{433}) holds for $(m, k)$ such that \begin{eqnarray*}\label{434} \begin{cases} (a) \quad (m, k) \leq (n-j, j)\quad \text{for some}\quad j=0, 1, 2,\ldots, n,\\ \qquad\text{and}\\ (b)\quad (m, k) = (n+1, 0), (n, 1), \ldots, (n+1-l, l)\quad \text{for some}\quad l\leq n. \end{cases} \end{eqnarray*} We need to prove the case $(n+1-(l+1), l+1)=(n-l, l+1)$. From (4) above, since $(1, l)$ implies $(0, l+1)$, this case is already true for $l=n.$ Thus it suffices to consider $l\leq n-1.$ From the previous step $(n-l+1, l)$, we have that for any $\delta^{'}, v, \varepsilon> 0,$ \begin{eqnarray}\label{435} &&\sup_{t\in[\delta^{'}, T]}\int (\partial_{x}^{l}u)^{2}(x, t)\chi_{n+1-l}(x+vt; \varepsilon, b)\mathrm{d}x +\int_{\delta^{'}}^{T}\int (\partial_{x}^{l+1}u)^{2}(x, t)\chi_{n+1-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &&+\int_{\delta^{'}}^{T}\int [D_{x}^{\frac{1}{2}}(\partial_{x}^{l}u(x, t) \eta_{n+1-l}(x+vt; \varepsilon, b))]^{2}\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray} A simple computation yields \begin{eqnarray*}\label{436} \chi_{n+1-l}^{'}(x; \varepsilon, b)\geq c\chi_{n-l}(x; \varepsilon, b). \end{eqnarray*} According to (\ref{435}), there exists $\hat{t}\in (\delta^{'}, 2\delta^{'})$ such that \begin{eqnarray*}\label{437} \int (\partial_{x}^{l+1}u)^{2}(x, \hat{t})\chi_{n-l}(x+v\hat{t}; \varepsilon, b)\mathrm{d}x < \infty.
\end{eqnarray*} For smooth solutions of equation (\ref{benjamin}), consider \begin{eqnarray}\label{438} &&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int(\partial_{x}^{l+1}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}v\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{1}$}}\nonumber\\ &&+\frac{3}{2}\int(\partial_{x}^{l+2}u)^{2}(x, t)\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x \underbrace{-\frac{1}{2}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi_{n-l}^{'''}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{2}$}}\nonumber\\ &&\underbrace{+\int\partial_{x}^{l+1}(u\partial_{x}u)\partial_{x}^{l+1}u(x, t)\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{3}$}}\nonumber\\ &&\underbrace{-\int H\partial_{x}^{l+3}u\partial_{x}^{l+1}u(x, t)\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x} _{\text{$A_{4}$}}=0. \end{eqnarray} From the previous step $(n-l, l)$, we derive \begin{eqnarray*}\label{429} \int_{\hat{t}}^{T}|A_{1}(t)|\mathrm{d}t \leq |v|\int_{\hat{t}}^{T}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t \leq c_{3}. \end{eqnarray*} Invoking (\ref{prp6}), one obtains \begin{eqnarray}\label{441} |A_{2}(t)| &\leq& c_{3}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi_{n-l-3}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &+&c_{3}\int(\partial_{x}^{l+1}u)^{2}(x, t)\chi(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x. \end{eqnarray} According to the previous steps $(n-l-3, l+1)$ and $(0, l+1)$, we know that (\ref{441}) is bounded. Notice that the step $(0, l+1)$ is implied by the step $(1, l)=(l+1-l, l)\leq (n-l, l)$.
For the term $A_{3}$, the Leibniz formula leads to \begin{eqnarray} A_{3} &=&d_{0}\int u\partial_{x}^{l+2}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x +d_{1}\int \partial_{x}u(\partial_{x}^{l+1}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber \end{eqnarray} \begin{eqnarray}\label{5A3} &+&d_{2}\int\partial_{x}^{2}u\partial_{x}^{l}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x +\sum_{j=3}^{l-1}\int\partial_{x}^{j}u\partial_{x}^{l+2-j}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{3,0}+A_{3,1}+A_{3,2}+\sum_{j=3}^{l-1}A_{3, j}. \end{eqnarray} After integration by parts, we deduce \begin{eqnarray*} A_{3,0} &=&-\frac{d_{0}}{2}\int\partial_{x}u(\partial_{x}^{l+1}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &-&\frac{d_{0}}{2}\int u(\partial_{x}^{l+1}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{3,01}+A_{3,02}, \end{eqnarray*} with $A_{3,01}$ being similar to $A_{3,1}$. Sobolev embedding yields \begin{eqnarray*} \int_{\hat{t}}^{T}|A_{3, 02}(t)|\mathrm{d}t &\leq&\sup_{t\in[\hat{t}, T]}\|u\|_{L^{\infty}} \int_{\hat{t}}^{T}\int(\partial_{x}^{l+1}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t\nonumber\\ &\leq&\sup_{t\in[\hat{t}, T]}\|u\|_{H^{3/4^{+}}} \int_{\hat{t}}^{T}\int(\partial_{x}^{l+1}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\mathrm{d}t, \end{eqnarray*} where the last integral corresponds to the case $(n-l, l)$, which is part of our induction hypothesis. For the term $A_{3, 1}$, we have \begin{eqnarray*} |A_{3, 1}| \leq c_{3}\|\partial_{x}u\|_{L^{\infty}}\int(\partial_{x}^{l+1}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x, \end{eqnarray*} which can be handled by the Gronwall inequality.
Consider now $A_{3, 2}$, which appears only if $l\geq 2$ (recall that $n\geq 3$, since the case to be proved is $(n-l, l+1)$): \begin{eqnarray}\label{5A32} A_{3, 2} =d_{2}\int\partial_{x}^{2}u\partial_{x}^{l}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x. \end{eqnarray} Following the idea in \cite{ilp1}, we study two cases: $l=2$ and $l\geq 3$. We first consider $l=2$. Similarly to the estimates of (\ref{1A331})-(\ref{1A333}) in the previous section, one derives \begin{eqnarray}\label{448} |A_{3, 2}| &=&\left|-\frac{d_{2}}{3}\int\partial_{x}^{2}u\partial_{x}^{2}u\partial_{x}^{2}u\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\right|\nonumber\\ &\leq& c\|(\partial_{x}^{2}u)\chi_{n-l}^{'}(\cdot+vt; \varepsilon, b)\|_{L^{\infty}} \int(\partial_{x}^{2}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x\nonumber\\ &\leq& c_{3}\|(\partial_{x}^{2}u)\chi_{n-l}^{'}(\cdot+vt; \varepsilon, b)\|_{L^{\infty}}^{2}+c_{3}\nonumber\\ &\leq& c_{3}\|(\partial_{x}^{2}u)^{2}\chi_{n-l}^{'}(\cdot+vt; \varepsilon, b)\|_{L^{\infty}}+c_{3}\nonumber\\ &\leq& c_{3}\int|\partial_{x}[(\partial_{x}^{2}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)]|\mathrm{d}x +c_{3}\nonumber\\ &\leq& c_{3}\int|(\partial_{x}^{2}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)|\mathrm{d}x +c_{3}\int|(\partial_{x}^{3}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)|\mathrm{d}x\nonumber\\ &+&c_{3}\int|(\partial_{x}^{2}u)^{2}\chi_{n-l}^{''}(x+vt; \varepsilon, b)|\mathrm{d}x\nonumber\\ &=&A_{3,21}+A_{3,22}+A_{3,23}+c_{3}. \end{eqnarray} From our induction hypothesis we know that $A_{3,21}$ and $A_{3,22}$ are bounded after integration in time, since $A_{3,21}$ corresponds to the case $(n-l, 1)=(n-2, 1)$ and $A_{3,22}$ corresponds to the case $(n-l, 2)=(n-2, 2)$.
Moreover, invoking (\ref{prp5}), one derives \begin{eqnarray}\label{5A323} |A_{3,23}| &\leq& c_{3}\int(\partial_{x}^{2}u)^{2}\chi_{n-l-2}(x+vt; \varepsilon, b)\mathrm{d}x +c_{3}\int(\partial_{x}^{2}u)^{2}\chi^{'}(x+vt; \varepsilon/3, b+\varepsilon)\mathrm{d}x\nonumber\\ &=&A_{3,231}+A_{3,232}. \end{eqnarray} From the induction cases $(n-l-2, 2)$ and $(0, 1)$, we deduce that $A_{3,231}$ is bounded in time $t\in [\hat{t}, T]$ and that $A_{3,232}$ can be controlled after integration in time. This completes the proof of (\ref{5A32}) in the case $l=2$. Next, we turn to the case $l\geq3.$ Integration by parts leads to \begin{eqnarray}\label{452} A_{3,2} &=& d_{2}\int\partial_{x}^{2}u\partial_{x}^{l}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&-\frac{d_{2}}{2}\int\partial_{x}^{3}u(\partial_{x}^{l}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &+&\frac{d_{2}}{2}\int\partial_{x}^{2}u(\partial_{x}^{l}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x. \end{eqnarray} For the integrals on the right-hand side of (\ref{452}), using (\ref{433}) and reasoning as in (\ref{448}) produces \begin{eqnarray*}\label{w1} |A_{3,2}| & \leq& c_{3}\int|(\partial_{x}^{2}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)|\mathrm{d}x +c_{3}\int|(\partial_{x}^{3}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)|\mathrm{d}x\nonumber\\ &+&c_{3}\int|(\partial_{x}^{2}u)^{2}\chi_{n-l}^{''}(x+vt; \varepsilon, b)|\mathrm{d}x +c_{3}\int|(\partial_{x}^{4}u)^{2}\chi_{n-l}^{'}(x+vt; \varepsilon, b)|\mathrm{d}x\nonumber\\ &+&c_{3}\int|(\partial_{x}^{3}u)^{2}\chi_{n-l}^{''}(x+vt; \varepsilon, b)|\mathrm{d}x. \end{eqnarray*} Since $l\geq 3,$ after integration in time, the first two and the fourth integrals correspond to the previous cases $(n-l, 1)$, $(n-l, 2)$ and $(n-l, 3)$, respectively, which are all implied by the case $(n-l, l)$.
The third and fifth integrals can be treated in a similar way as (\ref{5A323}), where the fifth integral corresponds to the cases $(n-l-2, 3)$ and $(0, 2)$ after using (\ref{prp5}). Note that the case $(0, 2)$ is implied by the case $(1, 1)$, which can be deduced from the previous case $l=1.$ Therefore, we only need to consider the remaining terms in (\ref{5A3}), i.e., \begin{eqnarray*}\label{453} A_{3,j} = c_{j}\int\partial_{x}^{j}u\partial_{x}^{l+2-j}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x. \end{eqnarray*} Without loss of generality, we can assume $3\leq j \leq l/2+1.$ Consequently, one finds \begin{eqnarray*}\label{454} |A_{3, j}| \leq c_{j}\int(\partial_{x}^{j}u\partial_{x}^{l+2-j}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x +c_{j}\int(\partial_{x}^{l+1}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x, \end{eqnarray*} with the second integral being the quantity to be estimated. For the first integral, we have \begin{eqnarray}\label{455} &&c_{j}\int(\partial_{x}^{j}u\partial_{x}^{l+2-j}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &&\leq \|(\partial_{x}^{j}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\|_{L^{\infty}} \int(\partial_{x}^{l+2-j}u)^{2}\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x. \end{eqnarray} From the induction hypothesis $(n-l, l+2-j)$ with $j\geq3$, we deduce that the integral in (\ref{455}) is bounded. Thus it remains to control the $L^{\infty}$ norm.
For this purpose, we employ the Sobolev inequality $\|f\|_{L^{\infty}}\leq \|f\|_{H^{1, 1}}$ to obtain \begin{eqnarray*}\label{456} &&\|(\partial_{x}^{j}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\|_{L^{\infty}}\nonumber\\ &&\leq \int|\partial_{x}[(\partial_{x}^{j}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)]|\mathrm{d}x\nonumber\\ &&\leq c\int|\partial_{x}^{j}u\partial_{x}^{j+1}u\chi(x+vt; \varepsilon/10, \varepsilon/2)|\mathrm{d}x +c\int|(\partial_{x}^{j}u)^{2}\chi^{'}(x+vt; \varepsilon/10, \varepsilon/2)|\mathrm{d}x\nonumber\\ &&\leq c\int(\partial_{x}^{j}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x +c\int(\partial_{x}^{j+1}u)^{2}\chi(x+vt; \varepsilon/10, \varepsilon/2)\mathrm{d}x\nonumber\\ &&+c\int|(\partial_{x}^{j}u)^{2}\chi^{'}(x+vt; \varepsilon/10, \varepsilon/2)|\mathrm{d}x. \end{eqnarray*} Since $j\leq l-1$, we have $j+1\leq l\leq n.$ Thus, the previous cases $(0, j)$ and $(0, j+1)$ imply the boundedness of the first two integrals, respectively. The third integral corresponds to the case $(0, j-1)$ after integration in time. Finally, we estimate $A_{4}.$ As before, we write \begin{eqnarray*} A_{4} &=&-\int H\partial_{x}^{l+3}u\partial_{x}^{l+1}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{l+2}u\partial_{x}^{l+2}u\chi_{n-l}(x+vt; \varepsilon, b)\mathrm{d}x +\int H\partial_{x}^{l+2}u\partial_{x}^{l+1}u\chi_{n-l}^{'}(x+vt; \varepsilon, b)\mathrm{d}x\nonumber\\ &=&A_{41}+A_{42}. \end{eqnarray*} The term $A_{41}$ can be treated easily; we omit the details.
For the term $A_{42},$ one has \begin{eqnarray*} A_{42} &=&\int H\partial_{x}^{l+2}u\partial_{x}^{l+1}u\chi_{n-l}^{'}(x+vt)\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}^{l+2}u \eta_{n-l} \partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x\nonumber\\ &=&\int H(\partial_{x}^{l+2}u \eta_{n-l}) \partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x -\int [H; \eta_{n-l}]\partial_{x}^{l+2}u\partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x\nonumber\\ &=&\int H\partial_{x}(\partial_{x}^{l+1}u \eta_{n-l}) \partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x -\int H(\partial_{x}^{l+1}u \eta_{n-l}^{'}) \partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x\nonumber\\ &-&\int [H; \eta_{n-l}]\partial_{x}^{l+2}u\partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x\nonumber\\ &=&A_{421}+A_{422}+A_{423}, \end{eqnarray*} where $A_{421}$ is positive and will stay on the left-hand side of (\ref{438}). Estimates (\ref{prp10}) and (\ref{prp11}) lead to \begin{eqnarray}\label{6A422} |A_{422}| &\leq&\left|\int H(\partial_{x}^{l+1}u \eta_{n-l}^{'}) \partial_{x}^{l+1}u\eta_{n-l}\mathrm{d}x\right|\nonumber\\ &\leq& c\int(\partial_{x}^{l+1}u)^{2} (\eta_{n-l}^{'})^{2}\mathrm{d}x +c\int(\partial_{x}^{l+1}u \eta_{n-l})^{2}\mathrm{d}x\nonumber\\ &\leq& c\int(\partial_{x}^{l+1}u)^{2} \chi_{n-l-1}(x; \varepsilon/3, b+\varepsilon)\mathrm{d}x, \end{eqnarray} which can be handled by the previous step $(n-l-1, l+1)$, since $l+1\leq n.$ The term $A_{423}$ can be handled similarly; we omit the details. This essentially completes the proof of Theorem \ref{persistence}. To justify the previous formal computations, we approximate the initial datum $u_{0}$ by Schwartz functions $u_{0}^{\mu}$, $\mu> 0,$ obtained by convolving $u_{0}$ with a family of mollifiers. Using the well-posedness in the class of Schwartz functions, we obtain a family of solutions $u^{\mu}(\cdot, t)$ for which each step of the above argument can be justified.
From our construction these estimates are uniform in the parameter $\mu> 0$, which yields the desired estimate by passing to the limit. \textbf{Acknowledgement} The research of B. Guo is partially supported by the National Natural Science Foundation of China, grant 11731014. \begin{thebibliography}{99} \bibitem{abr} J.P. Albert, J.L. Bona, J.M. Restrepo, Solitary wave solutions of the Benjamin equation, \textit{SIAM J. Appl. Math.} 59 (1999) 2139-2161. \bibitem{cb} H. Chen, J.L. Bona, Existence and asymptotic properties of solitary-wave solutions of Benjamin-type equations, \textit{Adv. Differential Equations} 3 (1998) 51-84. \bibitem{be} T.B. Benjamin, A new kind of solitary wave, \textit{J. Fluid Mech.} 245 (1992) 401-411. \bibitem{bo} J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations, \textit{Geom. Funct. Anal.} 3 (1993) 107-156. \bibitem{ca} A.P. Calder\'{o}n, Commutators of singular integral operators, \textit{Proc. Natl. Acad. Sci. USA} 53 (1965) 1092-1099. \bibitem{cgx} W. Chen, Z. Guo, J. Xiao, Sharp well-posedness for the Benjamin equation, \textit{Nonlinear Anal.} 74 (2011) 6209-6230. \bibitem{dmp} L. Dawson, H. McGahagan, G. Ponce, On the decay properties of solutions to a class of Schr\"{o}dinger equations, \textit{Proc. Amer. Math. Soc.} 136 (2008) 2081-2090. \bibitem{fp1} G. Fonseca, G. Ponce, The IVP for the Benjamin-Ono equation in weighted Sobolev spaces, \textit{J. Funct. Anal.} 260 (2011) 436-459. \bibitem{flp1} G. Fonseca, F. Linares, G. Ponce, The IVP for the Benjamin-Ono equation in weighted Sobolev spaces II, \textit{J. Funct. Anal.} 262 (2012) 2031-2049. \bibitem{flp2} G. Fonseca, F. Linares, G. Ponce, The IVP for the dispersion generalized Benjamin-Ono equation in weighted Sobolev spaces, \textit{Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire} 30 (2013) 763-790. \bibitem{gh2} B. Guo, Z.
Huo, The Cauchy problem for the generalized Korteweg-de Vries-Benjamin-Ono equation with low regularity data, \textit{Acta Math. Sin. (Engl. Ser.)} 21 (2005) 1191-1196. \bibitem{ilp1} P. Isaza, F. Linares, G. Ponce, On the propagation of regularity and decay of solutions to the k-generalized Korteweg-de Vries equation, \textit{Comm. Partial Differential Equations} 40 (2015) 1336-1364. \bibitem{ilp2} P. Isaza, F. Linares, G. Ponce, On the propagation of regularities in solutions of the Benjamin-Ono equation, \textit{J. Funct. Anal.} 270 (2016) 976-1000. \bibitem{ka} T. Kato, On the Cauchy problem for the (generalized) Korteweg-de Vries equation, in: \textit{Adv. Math. Suppl. Stud., Stud. Appl. Math.} 8 (1983) 93-128. \bibitem{kp} T. Kato, G. Ponce, Commutator estimates and the Euler and Navier-Stokes equations, \textit{Comm. Pure Appl. Math.} 41 (1988) 891-907. \bibitem{kpv} C. Kenig, G. Ponce, L. Vega, A bilinear estimate with applications to the KdV equation, \textit{J. Amer. Math. Soc.} 9 (1996) 573-603. \bibitem{la} C. Laurey, The Cauchy problem for a third order nonlinear Schr\"{o}dinger equation, \textit{Nonlinear Anal.} 29 (1997) 121-158. \bibitem{li} F. Linares, $L^{2}$ global well-posedness of the initial value problem associated to the Benjamin equation, \textit{J. Differential Equations} 152 (1999) 377-393. \bibitem{lw} Y. Li, Y. Wu, Global well-posedness for the Benjamin equation in low regularity, \textit{Nonlinear Anal.} 73 (2010) 1610-1625. \bibitem{np} J. Nahas, G. Ponce, On the persistent properties of solutions to semi-linear Schr\"{o}dinger equation, \textit{Comm. Partial Differential Equations} 34 (2009) 1-20. \bibitem{pa} J.A. Pava, Existence and stability of solitary wave solutions of the Benjamin equation, \textit{J. Differential Equations} 152 (1999) 136-159. \bibitem{sl} M. Scialom, F. Linares, On generalized Benjamin type equations, \textit{Discrete Contin. Dyn. Syst.} 12 (2005) 161-174. \bibitem{ss} J.I. Segata, D.L.
Smith, Propagation of regularity and persistence of decay for fifth order dispersive models, \textit{J. Dynam. Differential Equations} (2015) 1-36. \bibitem{u} J.J. Urrea, The Cauchy problem associated to the Benjamin equation in weighted Sobolev spaces, \textit{J. Differential Equations} 254 (2013) 1863-1892. \end{thebibliography} \end{document}
\begin{document} \title{Vanishing viscosity limit for homogeneous axisymmetric no-swirl solutions of stationary Navier-Stokes equations} \author{Li Li\footnote{Department of Mathematics, Harbin Institute of Technology, Harbin 150080, China. Email: [email protected]}, YanYan Li\footnote{Department of Mathematics, Rutgers University, 110 Frelinghuysen Road, Piscataway, NJ 08854, USA. Email: [email protected]}, Xukai Yan\footnote{School of Mathematics, Georgia Institute of Technology, 686 Cherry St NW, Atlanta, GA 30313, USA. Email: [email protected]}} \date{} \maketitle \abstract{$(-1)$-homogeneous axisymmetric no-swirl solutions of the three dimensional incompressible stationary Navier-Stokes equations which are smooth on the unit sphere minus the north and south poles have been classified. In this paper we study the vanishing viscosity limit of sequences of these solutions. As the viscosity tends to zero, some sequences of solutions converge in $C^m_{loc}$ to solutions of the Euler equations on the sphere minus the poles, while for other sequences of solutions transition layer behaviors occur. For every latitude circle, there are sequences which converge in $C^m_{loc}$, respectively, to different solutions of the Euler equations on the spherical caps above and below the latitude circle. We give a detailed analysis of this convergence and of the transition layer behaviors. } \section{Introduction}\label{sec:intro} We consider $(-1)$-homogeneous solutions of the incompressible stationary Navier-Stokes equations in $\mathbb{R}^3$: \begin{equation}\label{NS} \left\{ \begin{split} & -\nu \triangle u + u\cdot \nabla u +\nabla p = 0, \\ & \mbox{div}\, u=0. \end{split} \right. \end{equation} The incompressible stationary Euler equations in $\mathbb{R}^3$ are given by: \begin{equation}\label{Euler} \left\{ \begin{split} & v\cdot \nabla v +\nabla q = 0, \\ & \mbox{div}\, v=0. \end{split} \right.
\end{equation} Equations (\ref{NS}) and (\ref{Euler}) are invariant under the scaling $u(x)\to \lambda u(\lambda x)$ and $p(x)\to \lambda^2 p(\lambda x)$, $\lambda>0$. We study solutions which are invariant under this scaling. For such solutions $u$ is $(-1)$-homogeneous and $p$ is $(-2)$-homogeneous; we call them $(-1)$-homogeneous solutions according to the homogeneity of $u$. Landau discovered in \cite{Landau} a three-parameter family of explicit $(-1)$-homogeneous solutions of (\ref{NS}), which are axisymmetric with no swirl. Tian and Xin proved in \cite{TianXin} that all $(-1)$-homogeneous, axisymmetric nonzero solutions of (\ref{NS}) which are smooth on the unit sphere $\mathbb{S}^2$ are Landau solutions. They also gave in that paper explicit expressions of all $(-1)$-homogeneous axisymmetric solutions of (\ref{Euler}). \v{S}ver\'{a}k proved in \cite{Sverak} that all $(-1)$-homogeneous nonzero solutions which are smooth on $\mathbb{S}^2$ are Landau solutions. We studied in \cite{LLY1} and \cite{LLY2} $(-1)$-homogeneous axisymmetric solutions of (\ref{NS}) which are smooth on $\mathbb{S}^2$ minus the north and south poles; in particular, we classified in \cite{LLY2} all such solutions with no swirl. $(-1)$-homogeneous solutions of (\ref{NS}) and (\ref{Euler}) have been studied in \cite{CK}, \cite{G}, \cite{KP}, \cite{KPS}, \cite{KS}, \cite{Luo-Shvydkoy}, \cite{MT}, \cite{PP1}, \cite{PP2}, \cite{PP3}, \cite{Serrin}, \cite{Shvydkoy}, \cite{SL}, \cite{SQ}, \cite{W} and \cite{Y}.
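For completeness, here is a sketch of the routine verification of this scaling invariance; the rescaled pair $(u_\lambda, p_\lambda)$ is our notation for the aside, not taken from the paper:

```latex
% With u_\lambda(x) := \lambda u(\lambda x) and p_\lambda(x) := \lambda^2 p(\lambda x),
% the chain rule gives
\nabla u_\lambda(x) = \lambda^2 (\nabla u)(\lambda x), \qquad
\triangle u_\lambda(x) = \lambda^3 (\triangle u)(\lambda x), \qquad
\nabla p_\lambda(x) = \lambda^3 (\nabla p)(\lambda x),
% so every term in the momentum equation scales by the same factor \lambda^3:
-\nu \triangle u_\lambda + u_\lambda\cdot \nabla u_\lambda + \nabla p_\lambda
 = \lambda^3\big[-\nu \triangle u + u\cdot \nabla u + \nabla p\big](\lambda x) = 0,
\qquad
\mbox{div}\, u_\lambda(x) = \lambda^2 (\mbox{div}\, u)(\lambda x) = 0.
```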
In spherical coordinates $(r,\theta, \phi)$, where $r$ is the radial distance from the origin, $\theta$ is the angle between the radius vector and the positive $x_3$-axis, and $\phi$ is the meridian angle about the $x_3$-axis, a vector field $u$ is written as \[ u = u_r \vec{e}_r+ u_\theta \vec{e}_{\theta} + u_\phi \vec{e}_{\phi}, \] where \[ \vec{e}_r = \left( \begin{matrix} \sin\theta\cos\phi \\ \sin\theta\sin\phi \\ \cos\theta \end{matrix} \right), \hspace{0.5cm} \vec{e}_{\theta} = \left( \begin{matrix} \cos\theta\cos\phi \\ \cos\theta\sin\phi \\ -\sin\theta \end{matrix} \right), \hspace{0.5cm} \vec{e}_{\phi} = \left( \begin{matrix} -\sin\phi \\ \cos\phi \\ 0 \end{matrix} \right). \] We use $N$ and $S$ to denote respectively the north and south poles of $\mathbb{S}^2$. A vector field $u$ is called axisymmetric if $u_r$, $u_{\theta}$ and $u_{\phi}$ depend only on $r$ and $\theta$, and is called {\it no-swirl} if $u_{\phi}=0$. For any $(-1)$-homogeneous axisymmetric no-swirl solution $(u,p)$ of (\ref{NS}), $u_r$ and $p$ (modulo a constant) can be expressed by $u_{\theta}$ and its derivatives as follows: \begin{equation}\label{eqNS_1} \begin{split} & u_{r} =- \frac{d u_{\theta}}{d \theta} -\cot\theta\, u_{\theta}, \\ & 2p=-\frac{d^2 u_{r}}{d\theta^2} - (\cot\theta - u_{\theta}) \frac{d u_{r}}{d\theta} - u_{r}^2 -u_{\theta}^2. \end{split} \end{equation} Similarly, for any $(-1)$-homogeneous axisymmetric no-swirl solution $(v,q)$ of (\ref{Euler}), $v_r$ and $q$ can be expressed by $v_{\theta}$ and its derivatives as follows: \begin{equation}\label{eqEuler_1} v_r=- \frac{d v_{\theta}}{d \theta} -\cot\theta\, v_{\theta}, \quad 2q=v_{\theta}\frac{d v_r}{d\theta}-v_r^2-v_{\theta}^2.
\end{equation} In this paper, we analyze the behavior of any sequence of $(-1)$-homogeneous axisymmetric no-swirl solutions $\{(u_{\nu_k}, p_{\nu_k})\}$ of (\ref{NS}) with vanishing viscosity $\nu_k\to 0$. We will show that in some cases there are subsequences converging to solutions of (\ref{Euler}) on $\mathbb{S}^2$, and in some other cases there are transition layer behaviors. There has been a large amount of research on the vanishing viscosity limit for the incompressible Navier-Stokes equations; see for instance \cite{C}, \cite{CC}, \cite{Constantin-Vicol}, \cite{DM}, \cite{DN}, \cite{F}, \cite{Guo-Nguyen}, \cite{Iyer1}, \cite{Iyer2}, \cite{Maekawa}, \cite{Masmoudi}, \cite{SC1}, \cite{SC2}, \cite{Temam-Wang} and \cite{Wang}. Based on our results in \cite{LLY2}, we have the following theorem. \begin{thm}\label{thm1_0_0} (i) Let $0<\nu<1$ and let $(u_{\nu}, p_{\nu})$ be $(-1)$-homogeneous axisymmetric no-swirl solutions of (\ref{NS}) which are smooth on $\mathbb{S}^2\setminus\{S,N\}$. Then for any $0<\theta_1<\theta_2<\theta_3<\theta_4<\pi$, there exists some positive constant $C$, depending only on the $\{\theta_i\}$, such that \[ \int_{\mathbb{S}^2\cap\{\theta_1<\theta<\theta_4\}}|u_{\nu, \theta}|^2\le C\left(\int_{\mathbb{S}^2\cap\{\theta_2<\theta<\theta_3\}}|u_{\nu, \theta}|^2+\nu^2\right). \] (ii) Let $\nu_k\to 0^+$, and let $(u_{\nu_k}, p_{\nu_k})$ be $(-1)$-homogeneous axisymmetric no-swirl solutions of (\ref{NS}) which are smooth on $\mathbb{S}^2\setminus\{S,N\}$.
If $\displaystyle \sup_{k}\nu_k^{-2}\int_{\mathbb{S}^2\cap\{a<\theta<b\}}|u_{\nu_k, \theta}|^2<\infty$ for some $-1<a<b<1$, then there exists some solution $(\tilde{u}, \tilde{p})$ of (\ref{NS}) with $\nu=1$ which is smooth on $\mathbb{S}^2\setminus\{S,N\}$, such that, after passing to a subsequence, for any $\epsilon>0$ and any integer $m$, \[ \lim_{k\to \infty}\Big\|\Big(\frac{u_{\nu_k}}{\nu_k}, \frac{p_{\nu_k}}{\nu_k^2}\Big)-(\tilde{u},\tilde{p})\Big\|_{C^m(\mathbb{S}^2\cap\{\epsilon<\theta<\pi-\epsilon\})}=0. \] \end{thm} As in \cite{LLY1} and \cite{LLY2}, we work with the variable $x:=\cos\theta$ and the vector $U:=u\sin\theta$. We use $'$ to denote the derivative with respect to $x$. For $\nu\ge 0$, let \begin{equation}\label{eqdef_1} \bar{c}_3(c_1,c_2; \nu)=-\frac{1}{2}(\sqrt{\nu^2+c_1}+\sqrt{\nu^2+c_2})(\sqrt{\nu^2+c_1}+\sqrt{\nu^2+c_2}+2\nu), \end{equation} and introduce \begin{equation*} J_\nu:=\{c\in \mathbb{R}^3\mid c=(c_1,c_2,c_3),\ c_1\ge -\nu^2,\ c_2\ge -\nu^2,\ c_3\ge \bar{c}_3(c_1,c_2;\nu)\}. \end{equation*} It is easy to see that $J_{\nu'}\subset J_{\nu}$ for any $0\le \nu'\le \nu$. We use $\mathring{J}_{\nu}$ to denote the interior of $J_{\nu}$. For $\nu>0$, it is known from Theorem 1.2 in \cite{LLY1} that all $(-1)$-homogeneous axisymmetric no-swirl solutions of (\ref{NS}) which are smooth in $\mathbb{S}^2\setminus\{S,N\}$ are given by $u=U'_{\theta}(x)\vec{e}_r+\frac{U_{\theta}(x)}{\sin\theta}\vec{e}_{\theta}$, where $U_{\theta}$ satisfies \begin{equation} \label{eq:NSE} \nu (1-x^2)U_\theta' + 2\nu x U_\theta + \frac{1}{2}U_\theta^2 = P_c(x):=c_1(1-x)+c_2(1+x)+c_3(1-x^2), \quad -1<x<1, \end{equation} for some $c=(c_1,c_2,c_3)\in J_{\nu}$.
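Since $U:=u\sin\theta$ and $x=\cos\theta$, the radial component $u_r=U'_{\theta}(x)$ is forced by the first relation in (\ref{eqNS_1}). This can be checked symbolically; the sketch below (illustrative only, not part of the argument) verifies it with SymPy for a few sample profiles $U_{\theta}$.

```python
import sympy as sp

# Check that u_r = U'(x) and u_theta = U(x)/sin(theta), with x = cos(theta),
# satisfy the first relation in (eqNS_1):
#   u_r = -d u_theta / d theta - cot(theta) * u_theta.
theta, X = sp.symbols('theta X')
for prof in (X**2 + 1, sp.exp(X), 3 + sp.sin(X)):   # sample profiles U(x), chosen for illustration
    x = sp.cos(theta)
    u_theta = prof.subs(X, x) / sp.sin(theta)
    u_r = sp.diff(prof, X).subs(X, x)
    res = u_r + sp.diff(u_theta, theta) + sp.cos(theta)/sp.sin(theta)*u_theta
    assert sp.simplify(res) == 0
print("first relation of (eqNS_1) verified")
```

The cancellation is independent of the profile, which is why the representation involves a single scalar function $U_{\theta}$.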
Let $\tilde{U}_{\theta}:=\frac{U_{\theta}}{\nu}$; then $U_{\theta}$ is a solution of (\ref{eq:NSE}) if and only if $\tilde{U}_{\theta}$ is a solution of \begin{equation}\label{eqNSE_1} (1-x^2)\tilde{U}_\theta' + 2 x \tilde{U}_\theta + \frac{1}{2}\tilde{U}_\theta^2 = P_{\frac{c}{\nu^2}}(x), \quad -1<x<1. \end{equation} Similarly, let $V=v\sin\theta$; then all $(-1)$-homogeneous axisymmetric solutions of the Euler equations (\ref{Euler}) are given by $v=V'_{\theta}\vec{e}_r+\frac{V_{\theta}}{\sin\theta}\vec{e}_{\theta}+a\vec{e}_{\phi}$, where $a$ is a constant and $V_{\theta}$ satisfies, for some $c$, \begin{equation} \label{eq:EE} \frac{1}{2}V_\theta^2 = P_c(x), \end{equation} where $P_c(x)$ is the second order polynomial given in (\ref{eq:NSE}). Introduce a subset of $\partial J_0$: \[ \partial'J_0:=\{(0,0,c_3)\mid c_3>0\}\cup \{(c_1,0,c_3)\mid c_1>0, c_3\ge -\frac{1}{2}c_1\}\cup\{(0,c_2,c_3)\mid c_2>0,c_3\ge -\frac{1}{2}c_2\}. \] By Lemma \ref{lem7_1} in the Appendix, $P_c\ge 0$ on $[-1,1]$ if and only if $c\in J_0$; $P_c>0$ on $[-1,1]$ if and only if $c\in \mathring J_0$; and $P_c>0$ on $(-1,1)$ if and only if $c\in \mathring J_0\cup \partial' J_0$. For $c\in J_{0}$, let $v^{\pm}_c=v^{\pm}_{c, r}\vec{e}_r + v^{\pm}_{c, \theta} \vec{e}_\theta$, where $$ v^{\pm}_{c, \theta} (r, \theta, \varphi)=\pm \frac{\sqrt{2P_c(\cos\theta)}}{r \sin\theta}, \quad v^{\pm}_{c, r}(r, \theta, \varphi)=\pm \frac{P_c'(\cos\theta)}{r\sqrt{2P_c(\cos\theta)}}, $$ and $$ q_c(r, \theta, \varphi)=-\frac{1}{2r^2}\Big(P_c''(\cos\theta)+\frac{2P_c(\cos\theta)}{\sin^2\theta}\Big). $$ It is easy to see from the above (see also \cite{TianXin}) that $\{(v^{\pm}_c, q_c)\mid c\in \mathring J_0\cup \partial' J_0\}$ is the set of $(-1)$-homogeneous axisymmetric no-swirl solutions of (\ref{Euler}) which are smooth in $\mathbb S^2 \setminus\{S, N\}$.
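That the explicit pairs $(v^{\pm}_c, q_c)$ indeed satisfy both relations in (\ref{eqEuler_1}) can be confirmed by direct computation. The following SymPy sketch (illustrative only, with the sample choice $c=(1,1,1)\in\mathring{J}_0$) evaluates the residuals of the two relations at a few angles on the unit sphere $r=1$.

```python
import sympy as sp

# Check the relations (eqEuler_1) for (v^+_c, q_c) at r = 1,
# with the sample parameters c = (1, 1, 1), chosen only for illustration.
theta, X = sp.symbols('theta X')
c1, c2, c3 = 1, 1, 1
P = c1*(1 - X) + c2*(1 + X) + c3*(1 - X**2)          # P_c(x)
x = sp.cos(theta)
Pt = P.subs(X, x)
v_theta = sp.sqrt(2*Pt) / sp.sin(theta)               # v^+_{c,theta}
v_r = sp.diff(P, X).subs(X, x) / sp.sqrt(2*Pt)        # v^+_{c,r}
q = -(sp.diff(P, X, 2).subs(X, x) + 2*Pt/sp.sin(theta)**2) / 2   # q_c

# residuals of the two relations in (eqEuler_1)
res1 = v_r + sp.diff(v_theta, theta) + sp.cos(theta)/sp.sin(theta)*v_theta
res2 = v_theta*sp.diff(v_r, theta) - v_r**2 - v_theta**2 - 2*q

for th in (0.4, 1.1, 2.3):                            # sample angles in (0, pi)
    assert abs(float(res1.subs(theta, th))) < 1e-10
    assert abs(float(res2.subs(theta, th))) < 1e-10
print("relations (eqEuler_1) verified for (v^+_c, q_c)")
```

The computation for $(v^-_c, q_c)$ is identical up to the overall sign of $v^-_c$.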
Next, we prove that if a sequence of $(-1)$-homogeneous axisymmetric no-swirl solutions $\{(u_{\nu_k}, p_{\nu_k})\}$ of (\ref{NS}) converges weakly in $L^2(\mathbb{S}^2\cap \{\theta_1<\theta<\theta_2\})$ to $(v^{+}_{c},q_c)$ or $(v^{-}_{c},q_c)$ for some $c\in \mathring{J}_0$, then the convergence is in $C^m_{loc}$ for every positive integer $m$. More precisely, we have the following theorem. \begin{thm}\label{thm1_0_1} For $0<\theta_1<\theta_2<\pi$ and $\nu_k\to 0^+$, let $\{(u_{\nu_k}, p_{\nu_k})\}$ be smooth $(-1)$-homogeneous, axisymmetric, no-swirl solutions of (\ref{NS}) in the open cone in $\mathbb{R}^3$ generated by $\mathbb{S}^2\cap \{\theta_1<\theta<\theta_2\}$. Assume that $\{u_{\nu_k, \theta}\}$ converges weakly to $v=v^+_c$ or $v^-_c$ in $L^2(\mathbb{S}^2\cap \{\theta_1<\theta<\theta_2\})$ for some $c\in \mathring{J}_0$. Then for any $\epsilon>0$ and any positive integer $m$, there exists some constant $C$, depending only on $\theta_1,\theta_2, \epsilon, m$ and $\sup_{\nu_k}||u_{\nu_k, \theta}||_{L^2(\mathbb{S}^2\cap \{\theta_1<\theta<\theta_2\})}$, such that \[ ||(u_{\nu_k}, p_{\nu_k})-(v, q_c)||_{C^m(\mathbb{S}^2\cap \{\theta_1+\epsilon<\theta<\theta_2-\epsilon\})}\le C\nu_k. \] \end{thm} In the above theorem we have only analyzed axisymmetric no-swirl solutions $\{(u_\nu, p_\nu)\}$. Concerning general solutions, we raise the following question. \noindent {\bf Question 1.}\ Let $\Omega\subset \mathbb{S}^2$ be an open set, and let $\{(u_{\nu_k}, p_{\nu_k})\}$, $\nu_k\to 0^+$, and $(v, q)$ be smooth $(-1)$-homogeneous solutions of (\ref{NS}) and (\ref{Euler}) respectively in the open cone in $\mathbb{R}^3$ generated by $\Omega$. Assume that $u_{\nu_k}$ converges weakly to $v$ in $L^2(\Omega)$ as ${\nu_k} \to 0^+$. Is it true that for every non-negative integer $m$, $\{(u_{\nu_k}, p_{\nu_k})\}$ converges to $(v,q)$ in $C_{loc}^m(\Omega)$?
Given part (ii) of Theorem \ref{thm1_0_0}, we will only consider below the behavior of $(u_{\nu_k},p_{\nu_k})$ when $\nu_k^{-2}\int_{\mathbb{S}^2\cap\{\frac{\pi}{4}<\theta<\frac{\pi}{2}\}}|u_{\nu_k, \theta}|^2\to \infty$ as $k\to \infty$. For instance, Theorem \ref{thm1_0} below gives asymptotic profiles of $\{(u_{\nu_k},p_{\nu_k})\}$ under this condition. \begin{thm}\label{thm1_0} (i) There exist $(-1)$-homogeneous axisymmetric no-swirl solutions \newline $\{(u^{\pm}_{\nu}(c), p^{\pm}_{\nu}(c))\}_{0< \nu\le 1}$ of (\ref{NS}), belonging to $C^{0}(\mathring{J}_{\nu}\times (0,1], C^m(\mathbb{S}^2\setminus(B_{\epsilon}(S)\cup B_{\epsilon}(N))))$ for every integer $m\ge 0$, such that for every compact subset $K\subset \mathring{J}_{0}$ and every $\epsilon>0$, there exists some constant $C$, depending only on $\epsilon, K$ and $m$, such that \[ ||(u^{\pm}_{\nu}(c), p^{\pm}_{\nu}(c))-(v^{\pm}_c, q_c)||_{C^m(\mathbb{S}^2\setminus(B_{\epsilon}(S)\cup B_{\epsilon}(N)))}\le C\nu, \quad c\in K. \] (ii) For every $0<\theta_0<\pi$, there exist $(-1)$-homogeneous axisymmetric no-swirl solutions $\{(u_{\nu}(c, \theta_0), p_{\nu}(c, \theta_0))\}_{0<\nu\le 1}$ of (\ref{NS}), belonging to $C^{0}(\mathring{J}_{\nu}\times(0,1]\times(0,\pi), C^m(\mathbb{S}^2\setminus(B_{\epsilon}(S)\cup B_{\epsilon}(N))))$ for every integer $m\ge 0$, such that for every compact subset $K\subset \mathring{J}_0$ and every $\epsilon>0$, there exists some constant $C$, depending on $\epsilon, K$ and $m$, such that \[ \begin{split} & ||(u_{\nu}(c, \theta_0), p_{\nu}(c, \theta_0))-(v^{+}_c, q_c)||_{C^m(\mathbb{S}^2\cap \{\theta_0+\epsilon<\theta<\pi-\epsilon\})}\\ & +||(u_{\nu}(c, \theta_0), p_{\nu}(c, \theta_0))-(v^{-}_c, q_c)||_{C^m(\mathbb{S}^2\cap\{\epsilon<\theta<\theta_0-\epsilon\})}\le C\nu, \quad c\in K.
\end{split} \] \end{thm} Notice that for every $c\in \mathring{J}_0$, $P_c>0$ on $[-1,1]$ and $v^{+}_c \ne v^{-}_c$ on $\mathbb{S}^2\cap\{\theta=\theta_0\}$. The limit functions in Theorem \ref{thm1_0} (ii) therefore have jump discontinuities across the circle $\{\theta=\theta_0\}$. In the following we give a more detailed study of the behaviors of $\{(u_{\nu_k}, p_{\nu_k})\}$, including behaviors in regions where the limit functions are not smooth and where transition layers occur. Define, for $\nu>0$ and $c\in J_{\nu}$, \begin{equation}\label{eq_2} \begin{array}{ll} \tau_{1}(\nu,c_1):=2\nu-2\sqrt{\nu^2+c_1}, \quad \quad &\tau_{2}(\nu,c_1):=2\nu+2\sqrt{\nu^2+c_1}, \\ \tau_{1}'(\nu,c_2):=-2\nu-2\sqrt{\nu^2+c_2}, \quad \quad &\tau_{2}'(\nu,c_2):=-2\nu+2\sqrt{\nu^2+c_2}. \end{array} \end{equation} By Theorem 1.1 and Theorem 1.3 in \cite{LLY2}, using the scaling in (\ref{eqNSE_1}), we have the following theorem.\\ \textbf{Theorem A} (\cite{LLY2}) \emph{For each $\nu>0$, there exist $U^{+}_{\nu, \theta}(c)(x)\in C^0(J_{\nu}\times [-1,1))$ and $U^{-}_{\nu, \theta}(c)(x)\in C^0(J_{\nu}\times (-1,1])$ such that for every $c\in J_{\nu}$, $U^{\pm}_{\nu, \theta}(c)\in C^{\infty}(-1,1)$ satisfy (\ref{eq:NSE}) in $(-1,1)$, and $U^{-}_{\nu, \theta}(c)\le U_{\nu, \theta}\le U^+_{\nu, \theta}(c)$ for any solution $U_{\nu, \theta}$ of (\ref{eq:NSE}) in $(-1,1)$. If $c_3>\bar{c}_3(c_1,c_2;\nu)$, then $U^{-}_{\nu, \theta}(c)<U^{+}_{\nu, \theta}(c)$ in $(-1,1)$, and the graphs of all solutions of (\ref{eq:NSE}) foliate the region $\{(x,y)\in \mathbb{R}^2 \mid -1\le x\le 1,\ U^{-}_{\nu, \theta}(c)\le y \le U^{+}_{\nu, \theta}(c)\}$.
Moreover, \begin{equation*} \begin{split} & U_{\nu, \theta}^+(-1)=\tau_2(\nu,c_1), \quad U_{\nu, \theta}^+(1)=\tau_2'(\nu,c_2), \\ & U_{\nu, \theta}^-(-1)=\tau_1(\nu,c_1), \quad U_{\nu, \theta}^-(1)=\tau_1'(\nu,c_2), \end{split} \end{equation*} and if $U_{\nu, \theta}$ is a solution other than $U_{\nu, \theta}^\pm$, then \begin{equation*} U_{\nu, \theta}(-1)=\tau_1(\nu,c_1), \quad U_{\nu, \theta}(1)=\tau_2'(\nu,c_2). \end{equation*} If $c_3=\bar{c}_3(c_1,c_2;\nu)$, then \begin{equation*} U_{\nu, \theta}^+(c)\equiv U^-_{\nu, \theta}(c)\equiv U^*_{\nu, \theta}(c_1,c_2):= (\nu+\sqrt{\nu^2+c_1})(1-x)+(-\nu-\sqrt{\nu^2+c_2})(1+x). \end{equation*} In particular, $U^*_{\nu, \theta}(c_1,c_2)(-1)=\tau_2(\nu,c_1)$ and $U^*_{\nu, \theta}(c_1,c_2)(1)=\tau_1'(\nu,c_2)$.} For $c_1,c_2\ge 0$, $c_1+c_2>0$, denote \begin{equation*} c_3^*(c_1,c_2)=\bar{c}_3(c_1,c_2; 0)=-\frac{1}{2}(c_1+2\sqrt{c_1c_2}+c_2)< 0, \end{equation*} \begin{equation}\label{eqP_1} P_{(c_1,c_2)}^*(x):=P_{(c_1,c_2,c_3^*(c_1,c_2))}(x)= -c_3^*(c_1,c_2)\left(x-\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}\right)^2. \end{equation} Then \begin{equation}\label{eqP_2} P_c(x)=P_{(c_1,c_2)}^*(x)+(c_3-c_3^*(c_1,c_2))(1-x^2). \end{equation} Clearly, $c_3^*(c_1,c_2)=\min\{c_3\in \mathbb{R}\mid P_c(x)\ge 0 \textrm{ on }[-1,1]\}$. In this paper we call $U_{\nu, \theta}^+(c)$ and $U_{\nu, \theta}^-(c)$ the upper and lower solutions of (\ref{eq:NSE}), respectively. Consider sequences $\{(u_{\nu_k},p_{\nu_k})\}$ satisfying (\ref{NS}) with $\nu_k\to 0^+$. Then $U_{\nu_k, \theta}=u_{\nu_k, \theta}\sin\theta$ satisfies (\ref{eq:NSE}) for some $P_{c_k}$, $c_k\in J_{\nu_k}$. As mentioned earlier, we only consider below the case when $\nu_k^{-2}\int_{\mathbb{S}^2\cap\{a<\theta<b\}}|u_{\nu_k, \theta}|^2\to \infty$ for some $a,b\in (-1,1)$.
By Lemma \ref{lem2_0}, this is equivalent to the condition that $\nu_k^{-2}|c_k|\to \infty$. If $\lim_{k\to \infty}\nu_k^{-2}|c_k|<\infty$, then $c_k\to 0$, and by Theorem \ref{thm1_0_0} (ii), $u_{\nu_k}\to 0$ in $C_{loc}^m(\mathbb{S}^2\setminus\{S,N\})$ for every $m$. The behaviors of $\{U_{\nu_k, \theta}^{\pm}\}$ are different from those of other solutions. In most cases, $U_{\nu_k, \theta}^{\pm}$ converge to solutions of the Euler equation (\ref{eq:EE}) on all of $[-1,1]$, while for other solutions boundary layer behavior occurs. We first present the convergence results for $\{U_{\nu_k, \theta}^{\pm}\}$ on $[-1,1]$. If $\min_{[-1,1]}P_c>0$, we have, after passing to a subsequence, the convergence of $\{U_{\nu_k, \theta}^{\pm}(c_k)\}$, $c_k\to c$, to the Euler equation solutions $\pm \sqrt{2P_c}$ on $[-1,1]$. \begin{thm}\label{thm1_1} Let $\nu_k\to 0^+$, $c_k\in J_{\nu_k}$, $\nu_k^{-2}|c_k|\to \infty$. Assume $\hat{c}_{k}:=|c_k|^{-1}c_k\to \hat{c}\in \mathring{J}_0$. Then for any $\epsilon>0$ and any positive integer $m$, there exists some constant $C$, depending only on $\epsilon, m$ and $\hat{c}$, such that for large $k$, \begin{equation}\label{eq1_1} \begin{split} & ||U^{+}_{\nu_k, \theta}(c_k)-\sqrt{2P_{c_k}}||_{L^{\infty}(-1,1)}+||U^-_{\nu_k, \theta}(c_k)+\sqrt{2P_{c_k}}||_{L^{\infty}(-1,1)}\le C\nu_k,\\ & ||U^+_{\nu_k, \theta}(c_k)-\sqrt{2P_{c_k}}||_{C^m[-1,1-\epsilon]}+||U^-_{\nu_k, \theta}(c_k)+\sqrt{2P_{c_k}}||_{C^m[-1+\epsilon,1]}\le C\nu_k. \end{split} \end{equation} \end{thm} \begin{rmk} The constant $C$ in Theorem \ref{thm1_1} depends only on $\epsilon, m$, and a positive lower bound of $dist(\hat{c}, \partial J_0)$. Similar statements can be made for Theorems \ref{thm1_2_1}, \ref{thm1_2_2}, \ref{thm1_3} and \ref{thm:BL:1}. \end{rmk} \begin{rmk} In the second estimate in (\ref{eq1_1}), $\epsilon$ cannot be taken to be $0$ in general.
\end{rmk} In Theorem \ref{thm1_1}, $\hat{c}\in \mathring{J}_0$, which is equivalent to $\min_{[-1,1]}P_{\hat{c}}>0$. If $\min_{[-1,1]}P_{\hat{c}}=0$, i.e. $\hat{c}\in \partial J_0$, things are more delicate. As pointed out later in Section 3, we only need to consider in Theorem \ref{thm1_1} the special case when $\nu_k\to 0$, $c_k\to c\in \mathring{J}_0$. In the following, we will only state the results in the case $c_k\to c\ne 0$. The next two theorems concern the case $c_3=c_3^*(c_1,c_2)$, i.e. $P_c=P^*_{(c_1,c_2)}$. The following results are proved, among other things. If $c_k\in J_0$, then $\{U^{\pm}_{\nu_k, \theta}(c_k)\}$ converge to the Euler equation solutions $\pm \sqrt{2P_c}$ in $L^{\infty}(-1,1)$. If $\bar{x}:=\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}=1$, i.e. $c_2=0$, then $\{U^{+}_{\nu_k, \theta}(c_k)\}$ converges to the Euler equation solution $\sqrt{2P_c}$ in $L^{\infty}(-1,1)$. On the other hand, if $\bar{x}\in [-1,1)$, i.e. $c_2>0$, then there exist examples of $\{U^{+}_{\nu_k, \theta}(c_k)\}$ having no convergent subsequence in $L^{\infty}(1-\delta,1)$ for any $\delta>0$; in particular, no subsequence converges to a solution of the Euler equation in $L^{\infty}(-1,1)$. Similar results are proved for $\{U^{-}_{\nu_k, \theta}(c_k)\}$. If $\bar{x}=-1$, i.e. $c_1=0$, then $\{U^{-}_{\nu_k, \theta}(c_k)\}$ converges to the Euler equation solution $-\sqrt{2P_c}$ in $L^{\infty}(-1,1)$. On the other hand, if $\bar{x}\in (-1,1]$, i.e. $c_1>0$, then there exist examples of $\{U^{-}_{\nu_k, \theta}(c_k)\}$ having no convergent subsequence in $L^{\infty}(-1, -1+\delta)$ for any $\delta>0$; in particular, no subsequence converges to a solution of the Euler equation in $L^{\infty}(-1,1)$.
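Two algebraic identities used here can be confirmed directly: the completed square (\ref{eqP_1}) defining $\bar{x}$, and the explicit solution $U^*_{\nu,\theta}$ of Theorem A in the degenerate case $c_3=\bar{c}_3(c_1,c_2;\nu)$. The SymPy sketch below (illustrative only; the values $\nu=1/10$, $c_1=1$, $c_2=4$ in the second check are sample choices) verifies both.

```python
import sympy as sp

x = sp.symbols('x')

# (1) Completed-square identity (eqP_1). Write u = sqrt(c1), v = sqrt(c2);
# then P_{(c1,c2,c3*)}(x) = -c3* (x - xbar)^2 with xbar = (u - v)/(u + v).
u, v = sp.symbols('u v', positive=True)
c1, c2 = u**2, v**2
c3s = -(c1 + 2*u*v + c2)/2                       # c3*(c1, c2)
P = c1*(1 - x) + c2*(1 + x) + c3s*(1 - x**2)
xbar = (u - v)/(u + v)
assert sp.simplify(P - (-c3s)*(x - xbar)**2) == 0

# (2) Theorem A: for sample values nu = 1/10, c1 = 1, c2 = 4, the linear
# function U* solves (eq:NSE) when c3 = bar{c}_3(c1, c2; nu), and its
# boundary values are tau_2(nu, c1) and tau_1'(nu, c2).
nu = sp.Rational(1, 10)
s1, s2 = sp.sqrt(nu**2 + 1), sp.sqrt(nu**2 + 4)
c3 = -sp.Rational(1, 2)*(s1 + s2)*(s1 + s2 + 2*nu)
Pc = (1 - x) + 4*(1 + x) + c3*(1 - x**2)
Ustar = (nu + s1)*(1 - x) + (-nu - s2)*(1 + x)
res = nu*(1 - x**2)*sp.diff(Ustar, x) + 2*nu*x*Ustar + sp.Rational(1, 2)*Ustar**2 - Pc
assert sp.simplify(sp.expand(res)) == 0
assert sp.simplify(Ustar.subs(x, -1) - (2*nu + 2*s1)) == 0
assert sp.simplify(Ustar.subs(x, 1) - (-2*nu - 2*s2)) == 0
print("identities (eqP_1) and U* verified")
```

Sending $\nu\to 0$ in the second check recovers the limiting profile $\sqrt{c_1}(1-x)-\sqrt{c_2}(1+x)$, whose unique zero is $\bar{x}$.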
\begin{thm}\label{thm1_2} For any $c\in \partial J_0$ with $c_3=c_3^*(c_1,c_2)$ and $c_2>0$, there exist sequences $c_k\in J_{\nu_k}$, $c_k\to c$, $\nu_k\to 0^+$, such that for any $\epsilon>0$, $\displaystyle \inf_{k}||\frac{1}{2}(U^+_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{L^{\infty}(1-\epsilon,1)}>0$. Similarly, for any $c\in \partial J_0$ with $c_3=c_3^*(c_1,c_2)$ and $c_1>0$, there exist sequences $c_k\in J_{\nu_k}$, $c_k\to c$, and $\nu_k\to 0^+$, such that for any $\epsilon>0$, $\displaystyle \inf_{k}||\frac{1}{2}(U^-_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{L^{\infty}(-1,-1+\epsilon)}>0$. \end{thm} \begin{rmk} It is easy to see that the $\{U^+_{\nu_k, \theta}(c_k)\}$ constructed in Theorem \ref{thm1_2} satisfy \newline $\inf_{k}||U^+_{\nu_k, \theta}(c_k)-\sqrt{2P_{c_k}}||_{L^{\infty}(1-\epsilon,1)}>0$, and $\{U^+_{\nu_k, \theta}(c_k)\}$ has no convergent subsequence in $L^{\infty}(1-\epsilon, 1)$ for any $\epsilon>0$. Similar statements apply to $\{U^-_{\nu_k, \theta}(c_k)\}$. \end{rmk} \begin{thm}\label{thm1_2_1} Let $\nu_k\to 0^+$, $c_k\in J_{\nu_k}$, $c_k\to c\ne 0$ and $c_3=c_3^*(c_1,c_2)$. (i) If $c_k\in J_0$, then $\lim_{k\to \infty}||U^{\pm}_{\nu_k, \theta}(c_k) \mp \sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$, and for any $0<\beta<2/3$, there exists some constant $C$, depending only on $c$, $\epsilon$ and $\beta$, such that for large $k$, \begin{equation*} ||\frac{1}{2}(U^{\pm}_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{L^{\infty}(-1,1)}+\nu_k^{\beta} ||\frac{1}{2}(U^{\pm}_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{C^{\beta}(-1+\epsilon,1-\epsilon)}\le C\nu_k^{2/3}.
\end{equation*} Moreover, $\bar{x}:=\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}\in [-1,1]$, and for any $\epsilon>0$ and integer $m\ge 0$, there exists some constant $C$, depending only on $c$, $m$ and $\epsilon$, such that for large $k$, \[ ||U^{+}_{\nu_k, \theta}(c_k)-\sqrt{2P_{c_k}}||_{C^{m}([-1,1-\epsilon]\setminus[\bar{x}-\epsilon, \bar{x}+\epsilon])}+ ||U^{-}_{\nu_k, \theta}(c_k)+\sqrt{2P_{c_k}}||_{C^{m}([-1+\epsilon,1]\setminus[\bar{x}-\epsilon, \bar{x}+\epsilon])}\le C\nu_k. \] (ii) If $c_2=0$, then $\lim_{k\to \infty}||U^+_{\nu_k, \theta}(c_k)-\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$, and there exists some constant $C$, depending only on $c$, such that for large $k$, \begin{equation}\label{eqthm1_2_1_1} ||\frac{1}{2}(U^+_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{L^{\infty}(-1,1)}\le C(|c_{k2}|+|2c_{k3}+c_{k1}|^2+\nu_k^{2/3})=o(1). \end{equation} (iii) If $c_1=0$, then $\lim_{k\to \infty}||U^-_{\nu_k, \theta}(c_k)+\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$, and there exists some constant $C$, depending only on $c$, such that for large $k$, \begin{equation*} ||\frac{1}{2}(U^-_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{L^{\infty}(-1,1)}\le C(|c_{k1}|+|2c_{k3}+c_{k2}|^2+\nu_k^{2/3})=o(1). \end{equation*} \end{thm} Next, we discuss the remaining cases, namely $\min_{[-1,1]}P_c=0=P_c(-1)$ with $P_c'(-1)> 0$, or $\min_{[-1,1]}P_c=0=P_c(1)$ with $P_c'(1)<0$. This is equivalent to $c_1=0$ and $c_3>c_3^*(c_1,c_2)$, or $c_2=0$ and $c_3>c_3^*(c_1,c_2)$. In this case, $U_{\nu_k, \theta}^{\pm}(c_k)$ converge respectively to the Euler equation solutions $\pm \sqrt{2P_c}$ in $L^{\infty}(-1,1)$. \begin{thm}\label{thm1_2_2} Let $\nu_k\to 0^+$, $c_k\in J_{\nu_k}$, $c_k\to c\ne 0$, $c_1c_2=0$ and $c_3>c_3^*(c_1,c_2)$. Then \[ \lim_{k\to \infty}||U^{\pm}_{\nu_k, \theta}(c_k) \mp \sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0.
\] Moreover, for any $\epsilon>0$ and integer $m\ge 0$, there exists some constant $C>0$, depending only on $\epsilon$, $m$ and $c$, such that for large $k$, \begin{equation*} \nu_k^{1/2}||\frac{1}{2}(U^{\pm}_{\nu_k, \theta}(c_k))^2-P_{c_k}||_{L^{\infty}(-1,1)}+ ||U^{\pm}_{\nu_k, \theta}(c_k) \mp \sqrt{2P_{c}}||_{C^{m}(-1+\epsilon,1-\epsilon)}\le C\nu_k. \end{equation*} \end{thm} Theorems \ref{thm1_1}--\ref{thm1_2_2} present convergence results for $U_{\nu_k,\theta}^{\pm}$. To help the reader form a general picture of these results, we summarize the $L^\infty$ convergence results for $U_{\nu_k,\theta}^{\pm}$ in Table \ref{tab:1}, where we always assume that $\nu_k\to 0^+$, $c_k\in J_{\nu_k}$, $c_k\to c\ne 0$. \begin{table} \caption{Summary of convergence results for $U_{\nu_k,\theta}^{\pm}$} \label{tab:1} \renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{|c|c|p{6.3cm}|p{6.3cm}|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{Conditions}} & \multicolumn{2}{|c|}{Conclusions (True/False)} \\ \cline{3-4} \multicolumn{2}{|c|}{} & $\displaystyle \lim_{k\to \infty}\|U^{+}_{\nu_k, \theta}(c_k) - \sqrt{2P_{c}}\|_{L^{\infty}(-1,1)}=0$ & $\displaystyle \lim_{k\to \infty}\|U^{-}_{\nu_k, \theta}(c_k) + \sqrt{2P_{c}}\|_{L^{\infty}(-1,1)}=0$ \\ \hline\hline \multicolumn{2}{|c|}{$c\in \mathring{J}_0$} & True & True \\ \hline \multirow{5}{*}{$c_3=c_3^*$} & $c_2>0$ & {False: $\forall \epsilon>0$, $\exists$ non-convergent \newline sequence $\{U_{\nu_k,\theta}^+\}$ in $(1-\epsilon,1)$.} & \\ \cline{2-4} & $c_1>0$ & & False: $\forall \epsilon>0$, $\exists$ non-convergent \newline sequence $\{U_{\nu_k,\theta}^-\}$ in $(-1,-1+\epsilon)$.
\\ \cline{2-4} & $c_k\in J_0$ & True & True \\ \cline{2-4} & $c_2=0$ & True & \\ \cline{2-4} & $c_1=0$ & & True \\ \hline \multicolumn{2}{|c|}{$c_3>c_3^*$, $c_1c_2=0$} & True & True \\ \hline \end{tabular} \end{center} \end{table} We now present results for solutions $U_{\nu_k, \theta}$ of (\ref{eq:NSE}) other than $U^{\pm}_{\nu_k, \theta}(c_k)$. For $c\in J_0\setminus\{0\}$, define \begin{equation}\label{eq_alpha} \alpha(c)=\left\{ \begin{array}{ll} 1, & \textrm{ if }c_1,c_2>0,\ c_3>c_3^*(c_1,c_2),\\ \frac{2}{3}, & \textrm{ if }c_3=c_3^*(c_1,c_2)<0,\\ \frac{1}{2}, & \textrm{ if }c_1c_2=0,\ c_3>c_3^*(c_1,c_2). \end{array} \right. \end{equation} \begin{thm}\label{thm1_3} Let $\nu_k\to 0^+$, $c_k\in J_0$, $c_k\to c\ne 0$. Assume $U_{\nu_k, \theta}(c_k)\in C^1(-1,1)$ is a solution of (\ref{eq:NSE}) with $\nu_k$ and $c_k$, other than $U_{\nu_k, \theta}^\pm(c_k)$. Then there exists at most one $-1<x_k<1$ such that $U_{\nu_k, \theta}(x_k)=0$, and such $x_k$ must exist if $c_1,c_2>0$. (i) If $U_{\nu_k, \theta}(x_k)=0$ for some $x_k\in (-1,1)$, then for any $\epsilon>0$, \begin{equation}\label{eq1_7_0} \lim_{k\to \infty}\left(||U_{\nu_k, \theta}+\sqrt{2P_{c}}||_{L^{\infty}(-1, x_k-\epsilon)}+||U_{\nu_k, \theta}-\sqrt{2P_{c}}||_{L^{\infty}(x_k+\epsilon, 1)}\right)=0, \end{equation} and for any $0<\beta<\alpha(c)$, there exists some constant $C>0$, depending only on $c$, $\epsilon$ and $\beta$, such that for large $k$, \begin{equation*} ||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{L^{\infty}((-1, x_k-\epsilon)\cup (x_k+\epsilon, 1))}+ \nu_k^{\beta} ||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{C^{\beta}((-1+\epsilon, x_k-\epsilon)\cup (x_k+\epsilon, 1-\epsilon))}\le C\nu_k^{\alpha(c)}.
\end{equation*} (ii) If $U_{\nu_k, \theta}(x_k)=0$ for some $x_k\in (-1,1)$ satisfying $x_k\to -1$ and $c_1=0$, or $U_{\nu_k, \theta}\ne 0$ on $(-1,1)$ and $c_2>0=c_1$, then \begin{equation}\label{eq1_7_2} \lim_{k\to \infty}||U_{\nu_k, \theta}-\sqrt{2P_{c}}||_{L^{\infty}(-1, 1)}=0. \end{equation} If $U_{\nu_k, \theta}(x_k)=0$ for some $x_k\in (-1,1)$ satisfying $x_k\to 1$ and $c_2=0$, or $U_{\nu_k, \theta}\ne 0$ on $(-1,1)$ and $c_1>0=c_2$, then \begin{equation}\label{eq1_7_3} \lim_{k\to \infty}||U_{\nu_k, \theta}+\sqrt{2P_{c}}||_{L^{\infty}(-1, 1)}=0. \end{equation} If $U_{\nu_k, \theta}\ne 0$ on $(-1,1)$ and $c_1=c_2=0$, then, after passing to a subsequence, either (\ref{eq1_7_2}) or (\ref{eq1_7_3}) holds. (iii) If $c\in \mathring{J}_0$, then $U_{\nu_k, \theta}(x_k)=0$ for some $x_k\in (-1,1)$, and for any $\epsilon>0$ and any positive integer $m$, there exists some constant $C>0$, depending only on $\epsilon$, $m$ and $c$, such that for large $k$, \begin{equation}\label{eq1_2_1} ||U_{\nu_k, \theta}+\sqrt{2P_{c_k}}||_{L^{\infty}(-1, x_k-\epsilon)}+||U_{\nu_k, \theta}-\sqrt{2P_{c_k}}||_{L^{\infty}(x_k+\epsilon, 1)}\le C\nu_k, \end{equation} \begin{equation}\label{eq1_2_2} ||U_{\nu_k, \theta}+\sqrt{2P_{c_k}}||_{C^m(-1+\epsilon, x_k-\epsilon)}+||U_{\nu_k, \theta}-\sqrt{2P_{c_k}}||_{C^m(x_k+\epsilon, 1-\epsilon)}\le C\nu_k. \end{equation} \end{thm} \begin{rmk} For any $c\in J_0$ with $c_1,c_2>0$, $0<\nu<1$, and $-1\le \hat{x}\le 1$, there exists some solution $U_{\theta}$ of (\ref{eq:NSE}), other than $U_{\nu, \theta}^{\pm}(c)$, such that $U_{\theta}(\hat{x})=0$. This can be seen from Theorem A, which asserts that the graphs of all solutions of (\ref{eq:NSE}) foliate the region $\{(x,y)\in \mathbb{R}^2 \mid -1\le x\le 1,\ U^{-}_{\nu, \theta}(c)\le y \le U^{+}_{\nu, \theta}(c)\}$ in $\mathbb{R}^2$.
\end{rmk} The above theorem indicates the formation of boundary layers (if we view $x=\pm 1$ as boundaries) and interior layers. We describe these boundary and interior layers in the following theorem. For $c\in J_0$, define \begin{equation*} \kappa(c):=\left\{ \begin{array}{ll} 1,& \textrm{ if }c_1,c_2>0,\ c_3>c^*_3(c_1,c_2),\\ 0, & \textrm{ otherwise. } \end{array} \right. \end{equation*} For $-1<x_k<1$ and $K>0$, define \begin{equation*} \label{eq:BL:mid} \widetilde{U}_{\theta,x_k}(x) = \begin{cases} -\sqrt{2P_{c_k}(x)}, \qquad & -1\le x < x_k - K\nu_k|\ln \nu_k|(1-x_k^2), \\ \sqrt{2P_{c_k}(x_k)} \tanh \Big( \frac{\sqrt{2P_{c_k}(x_k)} \, (x-x_k)}{2 (1-x_k^2)\nu_k} \Big), \quad & |x-x_k|\le K\nu_k|\ln \nu_k|(1-x_k^2), \\ \sqrt{2P_{c_k}(x)}, \qquad & x_k + K\nu_k|\ln \nu_k|(1-x_k^2) < x \le 1. \\ \end{cases} \end{equation*} \begin{thm}\label{thm:BL:1} Let $\nu_k\to 0^+$, $c_k\in J_0$, $c_k\to c\ne 0$. Assume $U_{\nu_k, \theta}\in C^1(-1,1)$ is a solution of (\ref{eq:NSE}) with $\nu_k$ and $c_k$, other than $U_{\nu_k, \theta}^\pm$. In addition, assume that there exists $x_k\in(-1,1)$ such that $U_{\nu_k, \theta}(x_k) = 0$ and $x_k\to \hat{x}\in[-1,1]$ with $P_c(\hat{x})\not=0$. Then $\{U_{\nu_k, \theta}\}$ develops a layer near $x_k$. Moreover, there exist some positive constants $K$ and $C$, depending only on $c$, such that for large $k$, \begin{equation}\label{eq:BL:rate} \|U_{\nu_k, \theta} - \widetilde{U}_{\theta,x_k}\|_{L^\infty(-1,1)} \le C\nu_k^{\alpha(c)}|\ln \nu_k|^{2\kappa(c)}. \end{equation} \end{thm} \begin{rmk} The solutions $U_{\nu_k, \theta}$ of (\ref{eq:NSE}) with viscosity $\nu_k$ asymptotically behave like $\widetilde{U}_{\theta,x_k}$ as $\nu_k\to 0^+$.
Hence an interior layer appears when $\hat{x}\in(-1,1)$, and a boundary layer appears when $\hat{x}=\pm 1$, if we view $x=\pm 1$ as boundaries. \end{rmk} \begin{rmk}\label{rmk:scale} The length scale of the transition layers is $\nu_k$ for interior layers, and is $o(\nu_k)$ for boundary layers. Moreover, for any $\epsilon_k=o(\nu_k)$, there exists $\{U_{\nu_k, \theta}\}$ whose boundary layer has length scale $\epsilon_k$. \end{rmk} The organization of the paper is as follows. Theorem \ref{thm1_0_0} is proved at the beginning of Section \ref{sec2}. In the remaining part of Section \ref{sec2} we present some preliminary results and prove Theorem \ref{thm1_0_1} at the end of the section. In Section \ref{sec3} we prove Theorem \ref{thm1_1} and Theorem \ref{thm1_3} (iii). In Section \ref{sec4} we prove Theorem \ref{thm1_2} and Theorem \ref{thm1_2_1}. In Section \ref{sec5} we prove Theorem \ref{thm1_2_2}. Theorem \ref{thm1_3} (i) and (ii) are proved at the ends of Sections \ref{sec4} and \ref{sec5}, respectively. Theorem \ref{thm1_0} is proved at the end of Section \ref{sec5}. In Section \ref{sec6} we prove Theorem \ref{thm:BL:1}. In Section \ref{sec:illu} we give illustrations of transition layer behaviors. In the Appendix, we present some elementary properties of second order polynomials which we have used. \noindent {\bf Acknowledgment}. The work of the first named author is partially supported by NSFC grant No. 11871177. The work of the second named author is partially supported by NSF grant DMS-1501004. The work of the third named author is partially supported by an AMS-Simons Travel Grant and an AWM-NSF Travel Grant. \section{Preliminary}\label{sec2} We first prove Theorem \ref{thm1_0_0}. \begin{lem}\label{lem2_0} For $0<\nu\le 1$, let $U_{\nu, \theta}$ satisfy (\ref{eq:NSE}) in $(-1,1)$ for some $c\in J_{\nu}$.
Then there exists some universal constant $C>0$ such that for any $-1<r<s<1$, \begin{equation}\label{eq2_0_0} ((s-r)^4|c|-(s-r)\nu^2)/C\le \int_{r}^{s}U^2_{\nu, \theta}\le C(|c|+\nu^2/\min\{1-s,1+r\}). \end{equation} \end{lem} \begin{proof} Throughout the proof, $C$ denotes some universal constant which may change value from line to line. For all $0<r<s<1$, there exist $a\in [-s,-r]$ and $b\in[r,s]$ such that \[ |U_{\nu, \theta}(a)|\le \frac{1}{\sqrt{s-r}}\left(\int_{-s}^{-r}U_{\nu, \theta}^2\right)^{1/2}, \quad |U_{\nu, \theta}(b)|\le \frac{1}{\sqrt{s-r}}\left(\int_{r}^{s}U_{\nu, \theta}^2\right)^{1/2}. \] By (\ref{eq:NSE}) and the above, \[ \begin{split} \int_{-r}^{r}U_{\nu, \theta}^2 & \le \int_{a}^{b}U_{\nu, \theta}^2 =2\int_{a}^{b}\left(P_c-\nu(1-x^2)U'_{\nu, \theta}-2\nu xU_{\nu, \theta}\right)dx\\ & \le C|c|+C\nu(|U_{\nu, \theta}(a)|+|U_{\nu, \theta}(b)|)+C\nu^2+\frac{1}{4}\int_{a}^{b} U^2_{\nu, \theta}(x)dx\\ & \le C(|c|+\nu^2/(s-r))+\frac{1}{2}\int_{-s}^{s}U_{\nu, \theta}^2. \end{split} \] By Lemma 1 in \cite{Giaquinta}, \begin{equation*} \int_{-r}^{r}U_{\nu, \theta}^2\le C(|c|+\nu^2/(s-r)), \quad \forall\, 0<r<s<1. \end{equation*} The second inequality in (\ref{eq2_0_0}) follows from the above. Next, we prove the first inequality in (\ref{eq2_0_0}). Rewrite $P_c=\hat{c}_1+\hat{c}_2x+\hat{c}_3x^2$. Then $|c|\le C|\hat{c}|$, where $\hat{c}=(\hat{c}_1,\hat{c}_2,\hat{c}_3)$. For $-1<r<s< 1$, let $\delta =(s-r)/9$. Then there exist $a\in [r, r+\delta]$ and $b_i\in [r+2i\delta, r+(2i+1)\delta]$, $i=1,2,3$, such that \begin{equation}\label{eq2_0_1} |U_{\nu, \theta}(a)|\le \frac{1}{\sqrt{\delta}}\left(\int_{r}^{r+\delta}U_{\nu, \theta}^2\right)^{1/2}, \quad |U_{\nu, \theta}(b_i)|\le \frac{1}{\sqrt{\delta}}\left(\int_{r+2i\delta}^{r+(2i+1)\delta}U_{\nu, \theta}^2\right)^{1/2}.
\end{equation} For each $i=1,2,3$, we have \[ \int_{a}^{b_i}P_c(x)dx =\int_{a}^{b_i}\left(\nu(1-x^2)U'_{\theta}+2\nu xU_{\theta}+\frac{1}{2}U^2_{\theta}\right)=:\beta_i. \] Let $\beta=(\beta_1,\beta_2,\beta_3)$, and write the above as $A\hat{c}^t=\beta^t$, where $\hat{c}^t$ and $\beta^t$ denote the transposes of $\hat{c}$ and $\beta$ respectively, and \[ A= \left( \begin{matrix} b_1-a & (b_1^2-a^2)/2 & (b_1^3-a^3)/3\\ b_2-a & (b_2^2-a^2)/2 & (b_2^3-a^3)/3\\ b_3-a & (b_3^2-a^2)/2 & (b_3^3-a^3)/3 \end{matrix} \right). \] By (\ref{eq2_0_1}), we have, after an integration by parts, \begin{equation}\label{eq2_0_3} |\beta_i| \le C\nu \left( |U_{\nu, \theta}(a)|+ |U_{\nu, \theta}(b_i)|\right)+C\nu^2+C\int_{a}^{b_i}U_{\nu, \theta}^2dx \le C\nu^2+\frac{C}{\delta}\int_{r}^{s}U_{\nu, \theta}^2. \end{equation} By computation, $A$ is invertible and \[ A^{-1}=\left( \begin{matrix} -\frac{2a^2+ab_3+ab_2-b_2b_3}{(b_1-a)(b_2-b_1)(b_3-b_1)} & \frac{2a^2+ab_1+ab_3-b_1b_3}{(b_2-a)(b_2-b_1)(b_3-b_2)} & -\frac{2a^2+ab_1+ab_2-b_1b_2}{(b_3-a)(b_3-b_1)(b_3-b_2)}\\ -\frac{2(b_2+b_3+a)}{(b_1-a)(b_2-b_1)(b_3-b_1)} & \frac{2(b_1+b_3+a)}{(b_2-a)(b_2-b_1)(b_3-b_2)} & -\frac{2(b_1+b_2+a)}{(b_3-a)(b_3-b_1)(b_3-b_2)}\\ \frac{3}{(b_1-a)(b_2-b_1)(b_3-b_1)} & -\frac{3}{(b_2-a)(b_2-b_1)(b_3-b_2)} & \frac{3}{(b_3-a)(b_3-b_1)(b_3-b_2)} \end{matrix} \right). \] Clearly, $\delta\le b_i-a,\ b_j-b_i\le 9\delta$ for every $i<j$. So $||A^{-1}||\le C\delta^{-3}$. Then, using (\ref{eq2_0_3}), we have \[ |c|\le C|\hat{c}|=C|A^{-1}\beta|\le ||A^{-1}||\,|\beta|\le C\delta^{-3}\left(\nu^2+\frac{1}{\delta}\int_{r}^{s}U_{\nu, \theta}^2\right). \] The first inequality of (\ref{eq2_0_0}) follows from the above. The lemma is proved.
\end{proof} \noindent{\emph{Proof of Theorem \ref{thm1_0_0}}}: (i) We use $C$ to denote a positive constant depending only on $\{\theta_i\}$, which may vary from line to line. Let $r_i=\cos\theta_i$, $1\le i\le 4$, $x=\cos\theta$, and $U_{\nu, \theta}=u_{\nu, \theta}\sin\theta$. Then $U_{\nu, \theta}$ satisfies (\ref{eq:NSE}) on $(r_4,r_1)$ for some $c\in J_{\nu}$. By Lemma \ref{lem2_0}, we have \begin{equation}\label{eq1_0_0_5} \begin{split} & \int_{\mathbb{S}^2\cap \{\theta_1<\theta<\theta_4\}}|u_{\nu, \theta}|^2\le C\int_{r_4}^{r_1}U^2_{\nu, \theta}(x)dx \le C(|c|+\nu^2)\\ & \le C\left(\int_{r_3}^{r_2}U^2_{\nu, \theta}(x)dx+\nu^2\right)\le C\left(\int_{\mathbb{S}^2\cap\{\theta_2<\theta<\theta_3\}}|u_{\nu, \theta}|^2+\nu^2\right). \end{split} \end{equation} Part (i) is proved. (ii) Let $U_{\nu_k, \theta}=u_{\nu_k, \theta}\sin\theta$, $r=\cos(\pi-\epsilon)$, $s=\cos\epsilon$. Since $(u_{\nu_k}, p_{\nu_k})$ are $(-1)$-homogeneous axisymmetric no-swirl solutions of (\ref{NS}) on $\mathbb{S}^2\setminus\{S,N\}$, there exists $c_k\in J_{\nu_k}$ such that $U_{\nu_k, \theta}$ satisfies (\ref{eq:NSE}) with right-hand side $P_{c_k}$. By Lemma \ref{lem2_0} and the boundedness of $\nu_k^{-2}\int_{\mathbb{S}^2\cap\{a<\theta<b\}}|u_{\nu_k, \theta}|^2$ for some $a,b\in (-1,1)$, the sequence $\{\nu_k^{-2}|c_k|\}$ is bounded. Notice that $\tilde{U}_{\theta, k}:=U_{\nu_k, \theta}(c_k)/\nu_k$ is a solution of (\ref{eqNSE_1}) with $P_{c_k\nu_k^{-2}}$, and after passing to a subsequence, $\tilde{c}_k:=c_k\nu_k^{-2}\to \tilde{c}$ for some $\tilde{c}$. By Lemma 2.2 in \cite{LLY2}, $\{||\tilde{U}_{\theta, k}||_{L^{\infty}(-1,1)}\}$ is bounded.
It follows from standard ODE theory that there exists some smooth solution $\tilde{U}_{\theta}$ of (\ref{eqNSE_1}) with $c\nu^{-2}=\tilde{c}$ such that $\tilde{U}_{\theta, k}\to \tilde{U}_{\theta}$ in $C^m([-1+\epsilon, 1-\epsilon])$ for any $\epsilon>0$ and any positive integer $m$. Part (ii) is proved with $\tilde{u}_{\theta}=\tilde{U}_{\theta}/\sin\theta$ together with (\ref{eqNS_1}). \qed Let $\nu>0$, $c\in \mathbb{R}^3$, and $f_{\nu}$ be a solution of the equation \begin{equation}\label{eq_1} \nu (1-x^2)f'_{\nu}+2\nu x f_{\nu}+\frac{1}{2}f^2_{\nu}=P_c(x). \end{equation} \begin{lem}\label{lem2_1} For $0<\nu\le 1$ and $c\in \mathbb{R}^3$, let $f_{\nu}$ be a solution of (\ref{eq_1}) in $C^1(-1,1)$. Then $|f_{\nu}|\le 5\sqrt{1+|c|}$ in $(-1,1)$. \end{lem} \begin{proof} By Theorem 1.2 and Theorem 1.3 in \cite{LLY2}, we have $c\in J_{\nu}$, $f_{\nu}(-1)=\tau_1$ or $\tau_2$, and $f_{\nu}(1)=\tau_1'$ or $\tau_2'$, where $\tau_1,\tau_2,\tau_1'$ and $\tau_2'$ are defined as in (\ref{eq_2}). Thus $|f_{\nu}(\pm 1)|<5\sqrt{1+|c|}$. Suppose there exists a point $x_0\in(-1,1)$ such that $f_\nu(x_0)>5\sqrt{1+|c|}$. Then by (\ref{eq_1}), \[ \nu (1-x_0^2) f_{\nu}'(x_0) \le 6|c|-\frac{1}{2}f_{\nu}^2(x_0)+2\nu f_{\nu}(x_0)\le 6|c|+4\nu^2-\frac{1}{4}f^2_{\nu}(x_0)<0. \] So $f_{\nu}'(x_0)<0$. It follows that $f_{\nu}(x)>5\sqrt{1+|c|}$ for any $-1<x<x_0$. This contradicts the fact that $f_{\nu}(-1)<5\sqrt{1+|c|}$. We have proved that $f_{\nu}\le 5\sqrt{1+|c|}$ on $(-1,1)$. Similarly, we can prove that $f_{\nu}\ge -5\sqrt{1+|c|}$ on $(-1,1)$. \end{proof} \begin{lem}\label{lem2_5} Let $0< \nu \le 1$, $-1\le a<b\le 1$, $f_{\nu}\in C^1(a,b)$ be a solution of (\ref{eq_1}) in $(a,b)$ with $|f_{\nu}| \le M$ on $(a,b)$ for some constant $M>0$. Then \begin{equation}\label{eq2_5_0} \inf_{(a,b)}\left|\frac{1}{2}f^2_{\nu}-P_{c}\right|<10M\nu/(b-a).
\end{equation} Moreover, if $0 \le a <b\le 1$, we have \begin{equation}\label{eq:lem2_5:1} \inf_{(a,b)}\left|\frac{1}{2}f^2_{\nu}-P_{c}\right|\le 8M\nu(1-a)/(b-a), \end{equation} and if $-1 \le a <b\le 0$, we have \begin{equation}\label{eq:lem2_5:2} \inf_{(a,b)}\left|\frac{1}{2}f^2_{\nu}-P_{c}\right|\le 8M\nu(b+1)/(b-a). \end{equation} \end{lem} \begin{proof} Shrinking $(a,b)$ slightly, we may assume without loss of generality that $f_{\nu}$ is also in $C^0[a,b]$. For convenience we write $h_{\nu}=\frac{1}{2}f^2_{\nu}-P_{c}$; we only need to consider the case where $h_{\nu}$ does not change sign on $(a,b)$, since otherwise the infimum is zero. Integrating (\ref{eq_1}) over $(a,b)$, we have \[ \begin{split} \int_{a}^{b}|h_{\nu}(x)|dx & =\left|\int_{a}^{b}(\nu(1-x^2)f'_{\nu}(x)+2\nu xf_{\nu}(x))dx\right|\\ & =\nu\left|(1-b^2)f_{\nu}(b)-(1-a^2)f_{\nu}(a)+\int_{a}^{b}4x f_{\nu}\right|\le 10M\nu. \end{split} \] This implies (\ref{eq2_5_0}). If $0 \le a <b\le 1$, we have \[ \begin{split} \int_{a}^{b}|h_{\nu}(x)|dx & =\nu\left|(1-b^2)f_{\nu}(b)-(1-a^2)f_{\nu}(a)+\int_{a}^{b}4x f_{\nu}\right|\\ & \le M \nu \big( 2(1-b)+2(1-a)+4(1-a) \big) \le 8M\nu(1-a). \end{split} \] This gives (\ref{eq:lem2_5:1}). Estimate (\ref{eq:lem2_5:2}) can be proved similarly. \end{proof} \begin{lem}\label{lem2_2} For $0< \nu \le 1$, $c\in \mathbb{R}^3$, $-1\le a< b \le 1$, let $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ be a solution of (\ref{eq_1}) in $(a,b)$ satisfying, for some positive constants $\mu$ and $\delta$, that $f_{\nu}(a)\ge \mu$ and $P_c\ge \delta$ in $(a,b)$. Then for all $0<\nu\le 1$, \[ f_{\nu}(x)\ge \min\{\mu, \sqrt{\delta}, \delta/(4\nu)\}, \quad a\le x\le b. \] \end{lem} \begin{proof} If for some $0<\lambda<\mu$ there exists some $x\in (a,b]$ such that $f_{\nu}(x)\le \lambda$, then let $x_{\nu}$ be the first point greater than $a$ such that $f_{\nu}(x_{\nu})=\lambda$. Then $f'_{\nu}(x_{\nu})\le 0$.
By equation (\ref{eq_1}) we have that \[ 2\nu\lambda+\lambda^2/2\ge 2\nu x_{\nu}\lambda+\lambda^2/2\ge P_c(x_\nu)\ge \delta. \] So either $4\nu\lambda\ge \delta$ or $\lambda^2\ge \delta$. \end{proof} \addtocounter{lem}{-1} \renewcommand{\thetaelem}{\thetaesection.\arabic{lem}'} \begin{equation}gin{lem}\label{lem2_2'} For $0< \nu \le 1$, $c\in\mathbb{R}^3$, $-1\le a< b \le 1$, let $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ be a solution of (\ref{eq_1}) in $(a,b)$, satisfying, for some positive constants $\mu$ and $\delta$, that $f_{\nu}(b)\le -\mu$ and $P_c\ge \delta$ in $(a,b)$. Then for all $0<\nu\le 1$, \[ f_{\nu}(x)\le -\min\{\mu, \sqrt{\delta}, \delta/(4\nu)\}, \quad a\le x\le b. \] \end{lem} \renewcommand{\thetaelem}{\thetaesection.\arabic{lem}} \begin{equation}gin{proof} Let $g_{\nu}(x):=-f_{\nu}(a+b-x)$ for $x\in [a,b]$. Then $g_{\nu}$ is a solution of (\ref{eq_1}) with the same $P_c$ and $g_{\nu}(a)\ge \mu$. The lemma follows from Lemma \ref{lem2_2}, applied to $g_{\nu}$. \end{proof} \begin{equation}gin{cor}\label{cor2_3} Let $0< \nu \le 1$, $c\in \mathbb{R}^3$, $-1\le a<b\le 1$, $f_{\nu}\in C^1(a,b)$ be a solution of (\ref{eq_1}) in $(a,b)$ satisfying $0\le f_{\nu} \le M$ on the interval for some positive constant $M$. If $P_c\ge \delta>0$ in $(a,b)$ for some constant $\delta$, then \begin{equation}gin{equation}\label{eq:lem2_3} f_{\nu}(x)\ge \min\{\sqrt{\delta},\delta/(4\nu)\}, \qquad x\in(a+\epsilon, b) \end{equation} holds for any $\epsilon$ satisfying $20M\nu/\delta<\epsilon<b-a$. If we further assume that $-1<a<-1/2$, then (\ref{eq:lem2_3}) holds for any $32M\nu(a+1)/\delta<\epsilon<\min\{a+1,b-a\}$. \end{cor} \begin{equation}gin{proof} Shrinking $(a,b)$ slightly we may assume without loss of generality that $f_{\nu}$ is also in $C^0[a,b]$. By Lemma \ref{lem2_5}, there is some $x_{\nu}\in [a,a+\epsilon]$ such that \[ \left|\frac{1}{2}f^2_\nu(x_{\nu})-P_c(x_{\nu})\right|\le 10M\nu/\epsilon. 
\] For $20M\nu/\delta<\epsilon<b-a$, we have $f^2_\nu(x_{\nu})\ge 2P_c(x_{\nu})-20M\nu/\epsilon \ge \delta$. So $f_{\nu}(x_{\nu})\ge \sqrt{\delta}$. Then applying Lemma \ref{lem2_2}, we have (\ref{eq:lem2_3}) for any $20M\nu/\delta<\epsilon<b-a$, $0<\nu \le 1$. If $-1<a<-1/2$, then $2a+1<0$. By Lemma \ref{lem2_5} and (\ref{eq:lem2_5:2}), for any $0<\epsilon<\min(a+1,b-a)$, there exists $x_\nu\in (a,a+\epsilon)$ such that \[ \left| \frac{1}{2} f^2_\nu (x_{\nu}) - P_c(x_{\nu}) \right| \le 8M\nu (a+\epsilon+1)/\epsilon < 16M\nu(a+1)/\epsilon. \] For $32M\nu(a+1)/\delta<\epsilon<\min(a+1,b-a)$, we have $f^2_\nu(x_{\nu})\ge 2P_c(x_{\nu})-32M\nu(a+1)/\epsilon\ge \delta$. So $f_{\nu}(x_{\nu})\ge \sqrt{\delta}$. By Lemma \ref{lem2_2}, (\ref{eq:lem2_3}) holds for any $32M\nu(a+1)/\delta<\epsilon<\min(a+1,b-a)$, $0<\nu \le 1$. \end{proof} \addtocounter{cor}{-1} \renewcommand{\thecor}{\thesection.\arabic{cor}'} \begin{cor}\label{cor2_3'} Let $0< \nu \le 1$, $c\in \mathbb{R}^3$, $-1\le a<b\le 1$, $f_{\nu}\in C^1(a,b)$ be a solution of (\ref{eq_1}) in $(a,b)$ with $-M\le f_{\nu} \le 0$ on $(a,b)$ for some positive constant $M$. If $P_c\ge \delta>0$ in $(a,b)$, then \begin{equation}\label{eq:lem2_3'} f_{\nu}(x)\le -\min\{\sqrt{\delta},\delta/(4\nu)\}, \qquad x\in(a,b-\epsilon) \end{equation} holds for any $20M\nu/\delta<\epsilon<b-a$. If we further assume that $1/2<b<1$, then (\ref{eq:lem2_3'}) holds for any $32M\nu(1-b)/\delta<\epsilon<\min(1-b,b-a)$. \end{cor} \renewcommand{\thecor}{\thesection.\arabic{cor}} \begin{rmk} Under the conditions of Corollary \ref{cor2_3} (or Corollary \ref{cor2_3'}), for any small $\epsilon>0$ fixed, there exists $\nu_0>0$, depending only on $\epsilon, M$ and $\delta$, such that (\ref{eq:lem2_3}) (or (\ref{eq:lem2_3'})) holds for all $0<\nu<\nu_0$.
\end{rmk} \begin{lem}\label{lem2_4} Let $0< \nu \le 1$, $-1\le a< b \le 1$, $c\in \mathbb{R}^3\setminus\{0\}$, suppose $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ is a solution of (\ref{eq_1}) in $(a,b)$, and $P_c\ge 0$ in $(a,b)$. Then there exists at most one $x_{\nu}\in (a,b)$ such that $f_{\nu}(x_{\nu})=0$. Moreover, if $f_{\nu}(a)>0$, then $f_{\nu}>0$ on $(a,b)$, and if $f_{\nu}(b)<0$, then $f_{\nu}<0$ on $(a,b)$. \end{lem} \begin{proof} We first prove that there do not exist $\bar{x}\in (a,b)$ and $\epsilon>0$ such that $f_{\nu}(\bar{x})=0$, $f'_{\nu}(\bar{x})\le 0$ and $f_{\nu}>0$ in $(\bar{x}-\epsilon, \bar{x})$. If such $\bar{x}$ and $\epsilon$ exist, then \[ 0\ge \nu(1-\bar{x}^2)f'_{\nu}(\bar{x})+2\nu\bar{x}f_{\nu}(\bar{x})+\frac{1}{2}f^2_{\nu}(\bar{x})=P_c(\bar{x})\ge 0. \] So $P_c(\bar{x})=0$, and $f'_{\nu}(\bar{x})=0$. Since $c\ne 0$ and $P_c\ge 0$ in $(a,b)$, $P_c\equiv\lambda(x-\bar{x})^2$ for some $\lambda>0$. So $P'_c(\bar{x})=0$ and $P''_c(\bar{x})>0$. It is easy to see that $f_{\nu}\in C^3(a,b)$. Taking a derivative of equation (\ref{eq_1}) at $\bar{x}$ and using the fact $f_{\nu}(\bar{x})=f'_{\nu}(\bar{x})=0$, we have $\nu(1-\bar{x}^2)f''_{\nu}(\bar{x})=P'_c(\bar{x})=0$. So $f''_{\nu}(\bar{x})=0$. Differentiating (\ref{eq_1}) twice at $\bar{x}$ likewise gives $\nu(1-\bar{x}^2)f'''_{\nu}(\bar{x})=P''_c(\bar{x})>0$. Now we have $f_{\nu}(\bar{x})=f'_{\nu}(\bar{x})=f''_{\nu}(\bar{x})=0$ and $f'''_{\nu}(\bar{x})>0$, which imply that $f_{\nu}(x)<0$ for $x<\bar{x}$ and close to $\bar{x}$, violating $f_{\nu}>0$ in $(\bar{x}-\epsilon, \bar{x})$, a contradiction. Similarly, there do not exist $\bar{x}\in (a,b)$ and $\epsilon>0$ such that $f_{\nu}(\bar{x})=0$, $f'_{\nu}(\bar{x})\le 0$ and $f_{\nu}<0$ in $(\bar{x}, \bar{x}+\epsilon)$. Now we prove that there exists at most one $x_{\nu}\in (a,b)$ such that $f_{\nu}(x_{\nu})=0$. Clearly $f_{\nu}$ is not identically equal to zero on $(a,b)$.
If $f_{\nu}$ has more than one zero in $(a,b)$, then there exist some $x_\nu<y_{\nu}$ in $(a,b)$ such that $f_{\nu}(x_{\nu})=f_\nu(y_\nu)=0$, and either $f_\nu<0$ in $(x_\nu,y_\nu)$ or $f_\nu>0$ in $(x_\nu,y_\nu)$. If $f_\nu<0$ in $(x_\nu,y_\nu)$, then $f_{\nu}(x_{\nu})=0$ and $f'_{\nu}(x_{\nu})\le 0$. If $f_\nu>0$ in $(x_\nu,y_\nu)$, then $f_{\nu}(y_{\nu})=0$ and $f'_{\nu}(y_{\nu})\le 0$. We have proved above that neither can occur, a contradiction. Next, we prove that if $f_\nu(a)>0$, then $f_{\nu}>0$ on $(a,b)$. If $f_{\nu}$ is not positive on the whole interval $(a,b)$, then let $\bar{x}\in (a,b)$ be the first point greater than $a$ such that $f_{\nu}(\bar{x})=0$. We have $f'_{\nu}(\bar{x})\le 0$ and $f_{\nu}>0$ in $(a,\bar{x})$, a contradiction. Similarly, if $f_{\nu}(b)<0$, then $f_{\nu}<0$ on $(a,b)$. \end{proof} \begin{cor}\label{cor2_1} Let $0< \nu \le 1$, $c\in\mathbb{R}^3\setminus\{0\}$. Assume that $P_c\ge 0$ in $(-1,1)$, then \[ U_{\nu, \theta}^+(x)>0, \quad U_{\nu, \theta}^-(x)<0, \quad -1< x< 1. \] \end{cor} \begin{proof} Since $U_{\nu, \theta}^+(-1)=2\nu+2\sqrt{\nu^2+c_1}>0$ and $U_{\nu, \theta}^-(1)=-2\nu-2\sqrt{\nu^2+c_2}<0$, the corollary follows from Lemma \ref{lem2_4}. \end{proof} \begin{lem}\label{lem2_6} Let $0< \nu \le 1$, $c\in \mathbb{R}^3$, $-1\le a<b\le 1$, $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ be a solution of (\ref{eq_1}) in $(a,b)$. If there exist some $m, M>0$ such that \begin{equation*} m\le |f_{\nu}(x)|\le M, \quad \forall a\le x\le b, \end{equation*} then \begin{equation}\label{eq2_6_2} ||\frac{1}{2}f^2_{\nu}-P_c||_{L^{\infty}(a, b)}\le \max\left\{\left(2M+\sqrt{6}|c|/m\right)\nu, \left|\frac{1}{2}f^2_{\nu}(a)-P_c(a)\right|, \left|\frac{1}{2}f^2_{\nu}(b)-P_c(b)\right| \right\}.
\end{equation} Moreover, for any $0<\epsilon<(b-a)/2$, \begin{equation}\label{eq2_6_3} ||\frac{1}{2}f^2_{\nu}-P_c||_{L^{\infty}(a+\epsilon,b-\epsilon)}\le \nu \cdot \max\left\{2M+\sqrt{6}|c|/m, 10M/\epsilon\right\}. \end{equation} \end{lem} \begin{proof} We first prove (\ref{eq2_6_2}). Since $\frac{1}{2}f^2_{\nu}-P_c$ is continuous on $[a,b]$, there exists some $z_{\nu}\in [a,b]$ such that \[ \left|\frac{1}{2}f^2_{\nu}(z_\nu)-P_c(z_\nu)\right|=\max_{[a,b]}\left|\frac{1}{2}f^2_{\nu}-P_c\right|. \] If $z_{\nu}=a$ or $b$, then (\ref{eq2_6_2}) is proved. Otherwise we have \[ f_\nu(z_\nu)f'_\nu(z_\nu)-P'_c(z_\nu)=0. \] Since $|f_{\nu}(z_{\nu})|\ge m$, we have \[ |f'_\nu(z_\nu)|=|P'_c(z_\nu)|/|f_\nu(z_\nu)|\le (|c_1|+|c_2|+2|c_3|)/m\le \sqrt{6}|c|/m. \] Then by (\ref{eq_1}), we have \[ \left|\frac{1}{2}f^2_{\nu}(z_\nu)-P_c(z_\nu)\right|=\nu|(1-z^2_\nu)f'_\nu(z_\nu)+2z_\nu f_\nu(z_\nu)|\le \left(2M+\sqrt{6}|c|/m\right)\nu. \] So (\ref{eq2_6_2}) is proved. Next, we prove (\ref{eq2_6_3}). By Lemma \ref{lem2_5}, for any $0<\epsilon<(b-a)/2$, there exist some $x_\nu \in [a, a+\epsilon]$ and $y_{\nu}\in [b-\epsilon, b]$ satisfying \[ \left|\frac{1}{2}f^2_{\nu}(x_\nu)-P_c(x_\nu)\right|\le 10 M\nu/\epsilon, \quad \left|\frac{1}{2}f^2_{\nu}(y_\nu)-P_c(y_\nu)\right|\le 10M\nu/\epsilon. \] Applying (\ref{eq2_6_2}) on $(x_{\nu}, y_{\nu})$, we have \[ ||\frac{1}{2}f^2_{\nu}-P_c||_{L^{\infty}(x_{\nu},y_{\nu})}\le \max\left\{\left(2M+\sqrt{6}|c|/m\right)\nu, 10M\nu/\epsilon\right\}. \] Since $(a+\epsilon,b-\epsilon)\subset (x_{\nu},y_{\nu})$, the lemma is proved. \end{proof} \begin{cor}\label{cor2_2} Let $0< \nu \le 1$, $c\in \mathbb{R}^3$, $-1\le a<b\le 1$, $P_c\ge 0$ in $(a,b)$, $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ be a solution of (\ref{eq_1}) in $(a,b)$.
If there exist some $m, M>0$ such that \begin{equation*} m\le f_{\nu}(x)\le M, \quad \forall a<x<b, \quad 0<\nu\le 1, \end{equation*} then there exists a universal constant $C>0$ such that \begin{equation*} \begin{split} ||f_{\nu}-\sqrt{2P_c}||_{L^{\infty}(a, b)}\le & \frac{C}{m}\max\left\{\left(M+|c|/m\right)\nu,(M+\sqrt{|c|})|f_{\nu}(a)-\sqrt{2P_c(a)}|,\right. \\ & \left.(M+\sqrt{|c|})|f_{\nu}(b)-\sqrt{2P_c(b)}| \right\}. \end{split} \end{equation*} Moreover, for any $0<\epsilon<(b-a)/2$, \begin{equation*} ||f_{\nu}-\sqrt{2P_c}||_{L^{\infty}(a+\epsilon,b-\epsilon)}\le \frac{C\nu}{m}\max\left\{M+|c|/m,M/\epsilon\right\}. \end{equation*} \end{cor} \begin{proof} Since $f_{\nu}\ge m>0$ and $P_c\ge 0$ in $(a,b)$, we have $m\le f_\nu+\sqrt{2P_c}\le M+\sqrt{10|c|}$ in $(a,b)$. So the corollary follows from Lemma \ref{lem2_6}. \end{proof} \addtocounter{cor}{-1} \renewcommand{\thecor}{\thesection.\arabic{cor}'} \begin{cor}\label{cor2_2'} Let $0< \nu \le 1$, $-1\le a<b\le 1$, $P_c\ge 0$ in $(a,b)$, $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ be a solution of (\ref{eq_1}) in $(a,b)$. If there exist some $m, M>0$ such that \begin{equation*} -M\le f_{\nu}(x)\le -m, \quad \forall a<x<b, \quad 0<\nu\le 1, \end{equation*} then there exists a universal constant $C>0$ such that \begin{equation*} \begin{split} ||f_{\nu}+\sqrt{2P_c}||_{L^{\infty}(a, b)} \le & \frac{C}{m}\max\left\{\left(M+|c|/m\right)\nu, (M+\sqrt{|c|})|f_{\nu}(a)+\sqrt{2P_c(a)}|,\right.\\ & \left.(M+\sqrt{|c|})|f_{\nu}(b)+\sqrt{2P_c(b)}| \right\}. \end{split} \end{equation*} Moreover, for any $0<\epsilon<(b-a)/2$, \begin{equation*} ||f_{\nu}+\sqrt{2P_c}||_{L^{\infty}(a+\epsilon,b-\epsilon)}\le \frac{C\nu}{m}\max\left\{M+|c|/m, M/\epsilon\right\}.
\end{equation*} \end{cor} \renewcommand{\thecor}{\thesection.\arabic{cor}} \begin{lem}\label{lem2_10} Let $0< \nu \le 1$, $c\in \mathbb{R}^3$, $-1\le a<b\le 1$, $\alpha\ge 0$, $f_{\nu}\in C^1(a,b)\cap C^0[a,b]$ be a solution of (\ref{eq_1}) in $(a,b)$, satisfying $f_{\nu}(a)>0$ or $f_{\nu}(b)<0$. Suppose there exists some $\bar{x}\in \mathbb{R}$ such that $P_c(\bar{x})=\min_{[a,b]}P_c\ge -C_1(b-a)^{\alpha}$, $\mathrm{dist}(\bar{x}, [a,b])\le C_1(b-a)$, and $P_c(x)\le P_c(\bar{x})+C_1|x-\bar{x}|^{\alpha}$ for $a\le x\le b$, and \[ |\frac{1}{2}f^2_{\nu}(a)-P_c(a)|+|\frac{1}{2}f^2_{\nu}(b)-P_c(b)|\le C_1(b-a)^{\alpha}, \] for some positive constant $C_1$. Then there exists some constant $C$, depending only on $C_1$ and an upper bound of $|c|$, such that for $\nu<\frac{\sqrt{2C_1}}{4}(b-a)^{\alpha/2}$, \[ ||\frac{1}{2}f^2_{\nu}-P_c||_{L^{\infty}(a,b)}\le C(b-a)^{\alpha}+C\nu(b-a)^{-\alpha/2}. \] \end{lem} \begin{proof} We only prove the case $f_{\nu}(a)>0$; the case $f_{\nu}(b)<0$ can be proved similarly. For convenience denote $h:=\frac{1}{2}f_{\nu}^2-P_c$ and $\delta:=b-a$. Let $C$ be a positive constant, depending only on $C_1$ and an upper bound of $|c|$, which may vary from line to line. Suppose $\max_{[a,b]}|h|=|h(\tilde{z})|$ for some $\tilde{z}\in [a,b]$. If $\tilde{z}=a$ or $b$, then we are done. Suppose $\tilde{z}\in (a,b)$. Then $0=h'(\tilde{z})=f_{\nu}(\tilde{z})f'_{\nu}(\tilde{z})-P'_c(\tilde{z})$. So \begin{equation}\label{eq2_10_1} |f'_{\nu}(\tilde{z})|=|P_c'(\tilde{z})|/|f_{\nu}(\tilde{z})|. \end{equation} If $P_{c}(\bar{x})> 2C_1\delta^{\alpha}$, then since $|h(a)|\le C_1\delta^{\alpha}$, we have \[ f_{\nu}^2(a)\ge 2P_{c}(a)-2C_1\delta^{\alpha}\ge 2P_c(\bar{x})-2C_1\delta^{\alpha}\ge 2C_1\delta^{\alpha}. \] Since $f_{\nu}(a)>0$, we have $f_{\nu}(a)\ge \sqrt{2C_1}\delta^{\alpha/2}$.
Then by Lemma \ref{lem2_2}, we have that for $\nu<\frac{\sqrt{2C_1}}{4}\delta^{\alpha/2}$, \[ f_{\nu}(x)\ge \min\{\sqrt{2C_1}\delta^{\alpha/2},C_1\delta^{\alpha}/(2\nu)\}\ge \sqrt{2C_1}\delta^{\alpha/2}, \quad a<x<b. \] With this, we deduce from (\ref{eq2_10_1}) that \begin{equation}\label{eq2_10_2} |f'_{\nu}(\tilde{z})|\le C\delta^{-\alpha/2}. \end{equation} By (\ref{eq_1}) and Lemma \ref{lem2_1} we have the desired estimate \begin{equation}\label{eq2_10_3} |h(\tilde{z})|\le C\nu |f'_{\nu}(\tilde{z})|+C\nu|f_{\nu}(\tilde{z})|\le C\nu\delta^{-\alpha/2}. \end{equation} If $P_{c}(\bar{x})\le 2C_1\delta^{\alpha}$, then using the hypotheses $P_c(\bar{x})\ge -C_1(b-a)^{\alpha}$, $\mathrm{dist}(\bar{x}, [a,b])\le C_1(b-a)$, and $P_c(x)\le P_c(\bar{x})+C_1|x-\bar{x}|^{\alpha}$, we have \[ -C_1\delta^{\alpha}\le P_c(\bar{x})\le P_c(\tilde{z})\le P_c(\bar{x})+C|\tilde{z}-\bar{x}|^{\alpha}\le C\delta^{\alpha}. \] So \[ \frac{1}{2}f^2_{\nu}(\tilde{z})\ge |h(\tilde{z})|-|P_{c}(\tilde{z})| \ge |h(\tilde{z})|-C\delta^{\alpha}. \] If $|h(\tilde{z})|\le 2C\delta^{\alpha}$, then we are done. Otherwise we have $|f_{\nu}(\tilde{z})|\ge \sqrt{2C}\delta^{\alpha/2}$. With this, we deduce (\ref{eq2_10_2}) using (\ref{eq2_10_1}), and obtain (\ref{eq2_10_3}) as above. The lemma is proved. \end{proof} \begin{lem}\label{lem2_12} Let $0< \nu \le 1$, $-1\le a<b\le 1$, $c\in \mathbb{R}^3$, $k\ge 0$ be an integer, assume $P_c\ge \delta>0$ on $(a,b)$, and $f_\nu\in C^k[a,b]$ is a solution to (\ref{eq_1}). Suppose there exists some $M>0$ such that $f_\nu\le M$ on $(a,b)$, and \begin{equation}\label{eq2_12_2} \left|\frac{d^i}{dx^i}(f_{\nu}-\sqrt{2P_c})(a)\right|+ \left|\frac{d^i}{dx^i}(f_{\nu}-\sqrt{2P_c})(b)\right|\le M\nu, \quad \forall 0\le i\le k. \end{equation} Then there exists some $C>0$, depending only on $\delta$, $k$, $M$ and an upper bound of $|c|$, such that \begin{equation}\label{eq2_12_1} ||f_{\nu}-\sqrt{2P_c}||_{C^k(a,b)}\le C\nu.
\end{equation} \end{lem} \begin{proof} Throughout the proof, $C$ and $\nu_0$ denote various positive constants, depending only on $\delta$, $k$, $M$ and an upper bound of $|c|$. $C$ will be chosen first and will be large, and $\nu_0$ will be small, and its choice may depend on the largeness of $C$. We only need to prove (\ref{eq2_12_1}) for $\nu<\nu_0$, since it is obvious for $\nu\ge \nu_0$. For convenience, write $Q=\sqrt{2P_c}$. Denote \begin{equation}\label{eq2_12_9} h_0(x):=\frac{1}{\nu}(f_{\nu}(x)-Q(x)),\quad h_i(x):=\frac{d^i}{d x^i}h_0(x),\quad \forall i\ge 1. \end{equation} Rewrite (\ref{eq_1}) as \begin{equation}\label{eq2_12_7} \nu h'_0(x)=\frac{1}{1-x^2}F(x,h_0(x)), \end{equation} where \[ F(x,h_0):=-\{2x\nu h_0+\frac{1}{2}\nu h^2_0+(1-x^2)Q'(x)+2xQ(x)+h_0 Q(x)\}. \] \noindent\textbf{Claim}: For all $n\ge 2$ and for $x\in [a,b]$, \begin{equation}\label{eq2_12_8} \nu h_n(x)=\frac{1}{1-x^2}[2(n-1)\nu x+F_{h_0}(x,h_0)]h_{n-1}+\frac{1}{1-x^2}F_n(x,h_0,...,h_{n-2}), \end{equation} where $F_n(x,h_0, h_1,...,h_{n-2})$ satisfies that for any compact subset $K\subset [a,b]\times \mathbb{R}^{n-1}$ and for any integer $m\ge 0$ \[ ||F_n||_{C^m(K)}\le C', \] for some $C'$ depending only on $n,m$ and $K$. \noindent\emph{Proof of the Claim}: We prove it by induction. Differentiating (\ref{eq2_12_7}) leads to (\ref{eq2_12_8}) for $n=2$, with $F_2(x,h_0)=F_x(x,h_0)$. Now suppose that (\ref{eq2_12_8}) is true for some $n\ge 2$, and we will prove (\ref{eq2_12_8}) for $n+1$. Differentiating (\ref{eq2_12_8}), we have \[ \nu h_{n+1}=\frac{1}{1-x^2}(2n\nu x+F_{h_0})h_{n}+\frac{1}{1-x^2}F_{n+1}(x,h_0,...,h_{n-1}), \] where \[ \begin{split} F_{n+1}(x,h_0,...,h_{n-1})& :=2(n-1)\nu+F_{h_0h_0}(x,h_0)+F_{xh_0}(x,h_0)+\partial_xF_n(x,h_0,...,h_{n-2})\\ & +\sum_{i=0}^{n-2}\partial_{h_i}F_n(x,h_0,...,h_{n-2})h_{i+1}. \end{split} \] The claim is proved. We prove the lemma by induction on $k$.
By (\ref{eq2_12_2}) with $i=0$, $f_{\nu}(a)\ge \sqrt{2P_c(a)}-M\nu\ge \sqrt{2\delta}-M\nu\ge \sqrt{\delta}$ for $\nu\le \nu_0$. By Lemma \ref{lem2_2}, $f_{\nu} \ge \sqrt{\delta}$ on $[a,b]$ for $\nu\le \nu_0$. Then by Corollary \ref{cor2_2}, we have $|h_0|\le C$ in $[a,b]$. Let $z_\nu \in [a,b]$ be such that $|h_1(z_{\nu})|=\max_{[a,b]}|h_1|$. By (\ref{eq2_12_2}), $|h_1(a)|, |h_1(b)|\le M$. If $z_{\nu}=a$ or $b$, the lemma holds for $k=1$. If $z_{\nu}\in (a,b)$, then by (\ref{eq2_12_8}), \[ 0=\nu h'_1(z_\nu)=\nu h_2(z_{\nu})=\frac{1}{1-z_{\nu}^2}\left\{[2\nu z_{\nu}+F_{h_0}(z_\nu, h_0(z_\nu)) ] h_1(z_\nu)+F_2(z_\nu,h_0(z_\nu))\right\}, \] and, by the boundedness of $h_0$ and the property of $F_2$, $|F_2(x,h_0(x))|\le C$ on $[a,b]$. Since $|h_0|\le C$ and $Q=\sqrt{2P_c}\ge \sqrt{2\delta}$, we have, for $\nu \le \nu_0$, that \begin{equation*} |2\nu x+F_{h_0}(x,h_0(x))|=|\nu h_0(x)+Q(x)|\ge Q(x)-C\nu\ge \sqrt{\delta}>0, \quad a<x<b. \end{equation*} So we have \[ \max_{[a,b]}|h_1|=|h_1(z_\nu)|=\frac{|F_2(z_\nu, h_0(z_\nu))|}{|2\nu z_{\nu}+F_{h_0}(z_\nu, h_0(z_\nu))|}\le C, \] and the lemma holds for $k=1$. Next, assume the lemma holds for all $1\le k \le n$ for some $n$, and we prove it for $k=n+1$. By the induction hypothesis, $|h_k|\le C$ in $[a,b]$ for all $1\le k \le n$. Let $z_{\nu}\in [a,b]$ be such that $|h_{n+1}(z_{\nu})|=\max_{[a,b]}|h_{n+1}|$. By (\ref{eq2_12_2}), $|h_{n+1}(a)|, |h_{n+1}(b)|\le M$. If $z_{\nu}=a$ or $b$, the lemma holds for $k=n+1$. Otherwise by (\ref{eq2_12_8}), \begin{equation*} \begin{split} & 0=\nu h'_{n+1}(z_\nu) =\nu h_{n+2}(z_{\nu})\\ & =\frac{1}{1-z_{\nu}^2}\{[2(n+1)\nu z_{\nu}+F_{h_0}(z_{\nu},h_0(z_{\nu}))]h_{n+1}(z_{\nu})+F_{n+2}(z_{\nu},h_0(z_{\nu}),...,h_{n}(z_{\nu}))\}, \end{split} \end{equation*} and, by the induction hypothesis and the property of $F_{n+2}$, $|F_{n+2}(x,h_0,...,h_{n})|\le C$ on $[a,b]$.
As above, for $\nu\le \nu_0$, \[ |2(n+1)\nu x+F_{h_0}(x,h_0(x))|>\sqrt{\delta}, \quad a<x<b, \] and therefore \[ \max_{[a,b]}|h_{n+1}|=|h_{n+1}(z_{\nu})|=\frac{|F_{n+2}(z_{\nu},h_0(z_{\nu}),...,h_{n}(z_{\nu}))|}{|2(n+1)\nu z_{\nu}+F_{h_0}(z_{\nu},h_0(z_{\nu}))|}\le C. \] So the lemma holds for $k=n+1$. The lemma is proved. \end{proof} \addtocounter{lem}{-1} \renewcommand{\thelem}{\thesection.\arabic{lem}'} \begin{lem}\label{lem2_12'} Let $0< \nu \le 1$, $-1\le a<b\le 1$, $c\in \mathbb{R}^3$, $k\ge 0$ be an integer, assume $P_c\ge \delta>0$ on $(a,b)$, and $f_\nu\in C^k[a,b]$ is a solution to (\ref{eq_1}). Suppose there exists some $M>0$ such that $f_\nu>- M$ on $(a,b)$, and \begin{equation*} \left|\frac{d^i}{dx^i}(f_{\nu}+\sqrt{2P_c})(a)\right|+ \left|\frac{d^i}{dx^i}(f_{\nu}+\sqrt{2P_c})(b)\right|\le M\nu, \quad \forall 0\le i \le k. \end{equation*} Then there exist some $C>0$ and $\nu_0>0$, depending only on $\delta$, $k$, $M$ and an upper bound of $|c|$, such that \begin{equation*} ||f_{\nu}+\sqrt{2P_c}||_{C^k(a,b)}\le C\nu, \quad \forall 0<\nu<\nu_0. \end{equation*} \end{lem} \renewcommand{\thelem}{\thesection.\arabic{lem}} \begin{lem}\label{lem2_7} Let $0< \nu \le 1$, $-1\le a<b\le 1$, $c\in \mathbb{R}^3$, $k\ge 0$ be an integer, assume $P_c\ge \delta>0$ on $[a,b]$, $f_\nu\in C^k[a,b]$ is a solution to (\ref{eq_1}), $f_\nu(a)>0$, and there exists some $M>0$ such that $f_{\nu}\le M$ in $[a,b]$. Then $f_{\nu}>0$ in $[a,b]$. Moreover, for any $0<\epsilon<(b-a)/2$, there exists some $C>0$, depending only on $\epsilon$, $\delta$, $M$, $k$ and an upper bound of $|c|$, such that \begin{equation}\label{eq2_7_1} ||f_{\nu}-\sqrt{2P_c}||_{C^k(a+\epsilon,b-\epsilon)}\le C\nu. \end{equation} \end{lem} \begin{proof} Let $C$ be a constant, depending only on $\epsilon$, $\delta$, $k$, $M$ and an upper bound of $|c|$, which may vary from line to line.
Since $P_c\ge \delta>0$ on $[a,b]$ and $f_{\nu}(a)>0$, we have $f_{\nu}(x)\ge \min\{f_{\nu}(a), \sqrt{\delta}, \delta/(4\nu)\}>0$ in $[a,b]$ by Lemma \ref{lem2_2}. The positivity of $f_{\nu}$ on $[a,b]$ can also be deduced from Lemma \ref{lem2_4}. Applying Lemma \ref{lem2_5} on $[a,a+\epsilon/2]$ and $[b-\epsilon/2,b]$ respectively, there exist some $x_{\nu}\in [a,a+\epsilon/2]$ and $y_{\nu}\in [b-\epsilon/2,b]$ such that $\left|\frac{1}{2}f^2_{\nu}(x_{\nu})-P_c(x_{\nu})\right|+\left|\frac{1}{2}f^2_{\nu}(y_{\nu})-P_c(y_{\nu})\right|\le C\nu$. Using $f_{\nu}>0$ and $P_c\ge \delta$, we have \begin{equation*} \left|f_{\nu}(x_{\nu})-\sqrt{2P_c(x_{\nu})}\right|+\left|f_{\nu}(y_{\nu})-\sqrt{2P_c(y_{\nu})}\right|\le C\nu. \end{equation*} Since $P_c\ge \delta$, there exists $\nu_0>0$, depending only on $\epsilon, \delta, M$ and an upper bound of $|c|$, such that $f_{\nu}(x_{\nu})\ge \sqrt{2P_c(x_{\nu})}-C\nu\ge \sqrt{2\delta}-C\nu\ge \sqrt{\delta}$ for $\nu\le \nu_0$. Note that for $\nu\ge \nu_0$, (\ref{eq2_7_1}) is obvious. So we only need to consider $\nu\le \nu_0$. Applying Lemma \ref{lem2_2} on $[x_{\nu}, y_{\nu}]$, we have $f_{\nu}\ge 1/C$ on $[x_{\nu}, y_{\nu}]$. Since we also have $f_{\nu}\le M$ on $[x_{\nu}, y_{\nu}]$, by Corollary \ref{cor2_2} we have that \[ ||f_{\nu}-\sqrt{2P_c}||_{L^{\infty}(a+\epsilon/2, b-\epsilon/2)}\le ||f_{\nu}-\sqrt{2P_c}||_{L^{\infty}(x_{\nu},y_{\nu})}\le C\nu. \] For convenience, denote $h_i$, $i\ge 0$, as in (\ref{eq2_12_9}). We have proved that $|h_0|\le C$ in $[a+\epsilon/2, b-\epsilon/2]$. So for any $0<\epsilon<(b-a)/2$, \begin{equation*} \big|\int_{a+\epsilon/2}^{a+\epsilon}h_1(x) dx\big|\le C, \quad \big|\int_{b-\epsilon}^{b-\epsilon/2}h_1(x) dx\big|\le C. \end{equation*} Thus there exist some $x_\nu\in [a+\epsilon/2,a+\epsilon]$ and $y_\nu\in [b-\epsilon, b-\epsilon/2]$ such that $|h_1(x_\nu)|\le C$, $|h_1(y_{\nu})|\le C$.
Applying Lemma \ref{lem2_12} on $[x_\nu, y_{\nu}]$, we have $|h_1(x)|\le C$ for $x_{\nu}<x<y_{\nu}$. So the lemma holds for $k=1$. Next, assume the lemma holds for all $1\le k \le n$ for some $n$, and we prove it for $k=n+1$. By the induction hypothesis, for any $\epsilon>0$, $|h_k|\le C$ in $(a+\epsilon/2, b-\epsilon/2)$ for all $1\le k \le n$. It follows that \[ \big|\int_{a+\epsilon/2}^{a+\epsilon}h_{n+1}(x) dx\big|\le C, \quad \big|\int_{b-\epsilon}^{b-\epsilon/2}h_{n+1}(x) dx\big|\le C. \] So there exist some $x_\nu\in [a+\epsilon/2,a+\epsilon]$ and $y_\nu\in [b-\epsilon, b-\epsilon/2]$ such that $|h_{n+1}(x_\nu)|\le C$, $|h_{n+1}(y_{\nu})|\le C$. Applying Lemma \ref{lem2_12} on $[x_\nu, y_{\nu}]$, we have $|h_{n+1}(x)|\le C$ for $x_{\nu}<x<y_{\nu}$. The lemma holds for $k=n+1$. The lemma is proved. \end{proof} \addtocounter{lem}{-1} \renewcommand{\thelem}{\thesection.\arabic{lem}'} \begin{lem}\label{lem2_7'} Let $0< \nu \le 1$, $-1\le a<b\le 1$, $c\in \mathbb{R}^3$, $k\ge 0$ be an integer, assume $P_c\ge \delta>0$ on $[a,b]$, $f_\nu\in C^k[a,b]$ is a solution to (\ref{eq_1}), $f_\nu(b)<0$, and there exists some $M>0$ such that $f_{\nu}\ge -M$ in $[a,b]$. Then $f_{\nu}<0$ in $[a,b]$. Moreover, for any $0<\epsilon<\frac{b-a}{2}$, there exists some $C>0$, depending only on $\epsilon$, $\delta$, $M$, $k$ and an upper bound of $|c|$, such that \begin{equation*} ||f_{\nu}+\sqrt{2P_c}||_{C^k(a+\epsilon,b-\epsilon)}\le C\nu. \end{equation*} \end{lem} \renewcommand{\thelem}{\thesection.\arabic{lem}} \noindent\emph{Proof of Theorem \ref{thm1_0_1}}: Write $U_{\nu_k, \theta}=u_{\nu_k, \theta}\sin\theta$, $V_{\theta}=v_{\theta}\sin\theta$ and $x=\cos\theta$ as usual.
Then $U_{\nu_k, \theta}$ satisfies the equation \begin{equation}\label{eq1_0_1_0} \nu_k (1-x^2)U_{\nu_k, \theta}'+2\nu_k x U_{\nu_k, \theta}+\frac{1}{2}U_{\nu_k, \theta}^2=P_{c_k}(x), \textrm{ in }(r,s), \end{equation} and $U_{\nu_k, \theta} \rightharpoonup V_{\theta}$ in $L^2(r,s)$, where $r=\cos\theta_2$ and $s=\cos\theta_1$. We know that $-1<r<s<1$. For any $\epsilon>0$ and any positive integer $m$, let $C$ denote some positive constant depending only on $\theta_1,\theta_2, \epsilon$, $m$ and $\sup_{\nu_k}||u_{\nu_k, \theta}||_{L^2(\mathbb{S}^2\cap\{\theta_1<\theta<\theta_2\})}$, whose value may vary from line to line. As in the proof of Lemma \ref{lem2_0}, with $\delta:=(s-r)/9$, for each $k$ there exist $a_k\in [r,r+\delta]$ and $b_{ki}\in [r+2i\delta, r+(2i+1)\delta]$, $i=1,2,3$, such that \[ |U_{\nu_k, \theta}(a_k)|+\sum_{i=1}^3|U_{\nu_k, \theta}(b_{ki})|\le C. \] Arguing as in the proof of Lemma \ref{lem2_0}, we have, for $i=1,2,3$, that $\left|\int_{a_k}^{b_{ki}}P_{c_k}\right|\le C$, and in turn \begin{equation}\label{eq1_0_1_1} |c_k|\le C. \end{equation} After passing to a subsequence, $c_k\to c$, and we have $\frac{1}{2}V_{\theta}^2=P_c \textrm{ on }(r,s)$. Since $c\in \mathring{J}_0$, we have $P_c>0$ on $(r,s)$. So, for large $k$, \begin{equation}\label{eq1_0_1_2} P_{c_k}\ge 1/C \textrm{ on }[r+\epsilon/16, s-\epsilon/16]. \end{equation} Next, since $\int_{r}^{s}|U_{\nu_k, \theta}|^2\le C$, there exist some $a_k\in [r+\epsilon/16, r+\epsilon/4]$ and $b_k\in [s-\epsilon/4, s-\epsilon/8]$ such that $|U_{\nu_k, \theta}(a_k)|+|U_{\nu_k, \theta}(b_k)|\le C$.
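Such points exist by a Chebyshev-type argument: for instance, since the interval $[r+\epsilon/16, r+\epsilon/4]$ has length $3\epsilon/16$,
\[
\inf_{[r+\epsilon/16,\, r+\epsilon/4]}|U_{\nu_k, \theta}|^2\le \frac{16}{3\epsilon}\int_{r}^{s}|U_{\nu_k, \theta}|^2\le C,
\]
and similarly on $[s-\epsilon/4, s-\epsilon/8]$.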
If $|U_{\nu_k, \theta}(\alpha_k)|=\max_{[a_k,b_k]}|U_{\nu_k, \theta}|>\max\{|U_{\nu_k, \theta}(a_k)|, |U_{\nu_k, \theta}(b_k)|\}$ for some $\alpha_k\in (a_k,b_k)$, then $U_{\nu_k, \theta}'(\alpha_k)=0$ and, by (\ref{eq1_0_1_0}) and (\ref{eq1_0_1_1}), we have \[ \frac{1}{2}U^2_{\nu_k, \theta}(\alpha_k)\le |P_{c_k}(\alpha_k)|+|2\nu_k\alpha_k U_{\nu_k, \theta}(\alpha_k)|\le C+C|U_{\nu_k, \theta}(\alpha_k)|. \] It follows that $|U_{\nu_k, \theta}(\alpha_k)|\le C$. Hence \begin{equation}\label{eq1_0_1_3} |U_{\nu_k, \theta}|\le C \textrm{ on }[r+\epsilon/4, s-\epsilon/4]. \end{equation} We know that either $V_{\theta}=\sqrt{2P_c}$ on $(r,s)$ or $V_{\theta}=-\sqrt{2P_c}$ on $(r,s)$. \noindent \textbf{Claim}: If $V_{\theta}=\sqrt{2P_c}$, then $U_{\nu_k, \theta}\ge 1/C$ on $[r+\epsilon/4, s-\epsilon/4]$. If $V_{\theta}=-\sqrt{2P_c}$, then $U_{\nu_k, \theta}\le -1/C$ on $[r+\epsilon/4, s-\epsilon/4]$. \noindent\emph{Proof of the Claim}: We only treat the case $V_{\theta}=\sqrt{2P_c}$; the other case can be treated similarly, using Lemma \ref{lem2_2'} instead of Lemma \ref{lem2_2}. By (\ref{eq1_0_1_1}), (\ref{eq1_0_1_2}), and the weak convergence of $U_{\nu_k, \theta}$ to $V_{\theta}$, we have \[ \int_{r+\epsilon/8}^{r+\epsilon/4} U_{\nu_k, \theta}V_{\theta}\to \int_{r+\epsilon/8}^{r+\epsilon/4}V_{\theta}^2=\int_{r+\epsilon/8}^{r+\epsilon/4}2P_c\ge 1/C. \] So there exists some $a_k\in [r+\epsilon/8, r+\epsilon/4]$ such that $U_{\nu_k, \theta}(a_k)\ge 1/C$. Applying Lemma \ref{lem2_2} on $[a_k, s-\epsilon/8]$, we have $U_{\nu_k, \theta}\ge 1/C$ on $[a_k, s-\epsilon/8]$. Thus $U_{\nu_k, \theta}\ge 1/C$ on $[r+\epsilon/4, s-\epsilon/4]$. The claim is proved.
By the claim and (\ref{eq1_0_1_3}), we either have $1/C\le U_{\nu_k, \theta}\le C$ on $[r+\epsilon/4, s-\epsilon/4]$, or $-C\le U_{\nu_k, \theta}\le -1/C$ on $[r+\epsilon/4, s-\epsilon/4]$. We can then, in view of (\ref{eq1_0_1_2}), apply Lemma \ref{lem2_7} or Lemma \ref{lem2_7'} to obtain \[ || U_{\nu_k, \theta}-V_{\theta}||_{C^m([r+\epsilon, s-\epsilon])}\le C\nu_k. \] Recalling that $x=\cos\theta$, $u_{\nu_k, \theta}=U_{\nu_k, \theta}/\sin\theta$ and $v_{\theta}=V_{\theta}/\sin\theta$, we have proved that \[ || u_{\nu_k, \theta}-v_{\theta}||_{C^m(\mathbb{S}^2\cap \{\theta_1+\epsilon<\theta<\theta_2-\epsilon\})}\le C\nu_k. \] The conclusion of the theorem then follows from the above, in view of formulas (\ref{eqNS_1}) and (\ref{eqEuler_1}). The theorem is proved. \qed \section{$c\in \mathring{J}_0$}\label{sec3} \noindent{\textit{Proof of Theorem \ref{thm1_1}}}: We only need to prove the theorem in the special case that $c_k\to c\ne 0$ and $\nu_k\to 0$, where $c$ satisfies $c_1,c_2>0$ and $c_3>c_3^*(c_1,c_2)$, which is equivalent to $\min_{[-1,1]}P_c>0$. Indeed, let $\hat{\nu}_k=\nu_k/\sqrt{|c_k|}$ and $\hat{c}_k=c_k/|c_k|$. By the assumption, $\hat{\nu}_k\to 0$ and $\hat{c}_k\to \hat{c}\ne 0$. It is easy to see that $U^{+}_{\theta, \hat{\nu}_k}(\hat{c}_k)=U^{+}_{\nu_k, \theta}(c_k)/\sqrt{|c_k|}$. The desired estimate (\ref{eq1_1}) for $U^{+}_{\nu_k, \theta}(c_k)$ can then be easily deduced from the estimate for $U^{+}_{\theta, \hat{\nu}_k}(\hat{c}_k)$. We prove the estimates in (\ref{eq1_1}) for $\{U_{\nu_k, \theta}^+\}$; the proof for $\{U_{\nu_k, \theta}^-\}$ is similar. In the following, $C$ denotes various constants depending only on $c$. By Lemma \ref{lem2_1}, $||U_{\nu_k, \theta}^+||_{L^{\infty}(-1,1)}\le C$ for all $k$.
By Theorem A, the convergence of $\{c_k\}$ to $c$ and the fact that $\min_{[-1,1]}P_c>0$ and $c_1,c_2>0$, we have, for large $k$, $\min_{[-1,1]}P_{c_k}\ge \frac{1}{2}\min_{[-1,1]}P_c>0$, $U^+_{\nu_k, \theta}(-1)=\tau_2(\nu_k, (c_k)_1)\ge \sqrt{2(c_k)_1}\ge \sqrt{c_1}>0$, and $\left|U^+_{\nu_k, \theta}(\pm 1)-\sqrt{2P_{c_k}(\pm 1)}\right|\le C\nu_k$. An application of Lemma \ref{lem2_2} gives, for large $k$, that $U^+_{\nu_k, \theta}\ge 1/C$ on $[-1,1]$. The first estimate in (\ref{eq1_1}) then follows from Corollary \ref{cor2_2}. To prove the second estimate in (\ref{eq1_1}), we first prove the following lemma.

\begin{lem}\label{lem3_1}
Let $0<\nu\le 1$, $c\in \mathbb{R}^3$, and let $U_{\nu, \theta}^+$ be the upper solution of (\ref{eq_1}). If $P_c(-1)=2c_1\ge \delta>0$, then for each non-negative integer $m$, there exists some constant $C$, depending only on $\delta$, $m$ and an upper bound of $|c|$, such that
\[
\left|\frac{d^m}{dx^m}(U^+_{\nu, \theta}-\sqrt{2P_c})(-1)\right|\le C\nu.
\]
\end{lem}

\begin{proof}
Let $C$ denote a constant, depending only on $\delta$, $m$ and an upper bound of $|c|$, which may vary from line to line. We first prove that for every $m\ge 0$,
\begin{equation}\label{eq3_1_1}
\left|\frac{d^m}{dx^m}U^+_{\nu, \theta}(-1)\right|\le C.
\end{equation}
It can be checked that $U^+_{\nu, \theta}$ is a solution of (\ref{eq_1}) if and only if $\nu U^+_{\nu, \theta}$ is a solution of (\ref{eqNSE_1}). Then by Lemma 2.3 in \cite{LLY2}, we have that $U^+_{\nu, \theta}\in C^{\infty}[-1,0]$.
For $m\ge 0$, differentiating (\ref{eq_1}) $(m+1)$ times and sending $x$ to $-1$ lead to
\[
\begin{split}
& \nu\sum_{i=1}^{m+1}{m+1 \choose i}\frac{d^i}{dx^i}(1-x^2)\frac{d^{m+2-i}}{dx^{m+2-i}}U^+_{\nu, \theta}(x) +2\nu \sum_{i=0}^{1}{m+1 \choose i}\frac{d^i}{dx^i}(x)\frac{d^{m+1-i}}{dx^{m+1-i}}U^+_{\nu, \theta}(x)\\
& +\frac{1}{2} \sum_{i=0}^{m+1}{m+1 \choose i}\frac{d^i}{dx^i}U^+_{\nu, \theta}(x)\frac{d^{m+1-i}}{dx^{m+1-i}}U^+_{\nu, \theta}(x)=\frac{d^{m+1}}{dx^{m+1}}P_c(x),\quad \textrm{ at }x=-1.
\end{split}
\]
Noticing that the $i=1$ term in the first sum and the $i=0$ term in the second sum cancel out, we rewrite the above equation as
\[
\begin{split}
& U^+_{\nu, \theta}(x)\frac{d^{m+1}}{dx^{m+1}}U^+_{\nu, \theta}(x) =\frac{d^{m+1}}{dx^{m+1}}P_c(x)-\frac{1}{2} \sum_{i=1}^{m}{m+1 \choose i}\frac{d^i}{dx^i}U^+_{\nu, \theta}(x)\frac{d^{m+1-i}}{dx^{m+1-i}}U^+_{\nu, \theta}(x)\\
& -\nu\sum_{i=2}^{m+1}{m+1 \choose i}\frac{d^i}{dx^i}(1-x^2)\frac{d^{m+2-i}}{dx^{m+2-i}}U^+_{\nu, \theta}(x) -2\nu (m+1)\frac{d^{m}}{dx^{m}}U^+_{\nu, \theta}(x), \quad \textrm{ at }x=-1.
\end{split}
\]
Since $2c_1=P_c(-1)\ge \delta>0$ and $U^+_{\nu, \theta}(-1)=2\nu+2\sqrt{\nu^2+c_1}$, we have $1/C\le U^+_{\nu, \theta}(-1)\le C$. Using this and the fact that the right hand side of the above equation involves only $\left\{\frac{d^i}{dx^i}U^+_{\nu, \theta}(-1)\right\}_{0\le i \le m}$, we can easily prove (\ref{eq3_1_1}) by induction.

By (\ref{eq3_1_1}) and the fact that $1/C\le U^+_{\nu, \theta}(-1)\le C$, taking the $m$-th derivative of (\ref{eq_1}), we have that
\begin{equation}\label{eq3_1_2}
\left|\frac{d^{m}}{dx^{m}}\Big(\frac{1}{2}(U^+_{\nu, \theta})^2-P_c\Big)(-1)\right|=\nu\left|\frac{d^{m}}{dx^{m}}\big((1-x^2)(U^+_{\nu, \theta})'+2xU^+_{\nu, \theta}\big)\right|\Big|_{x=-1}\le C\nu.
\end{equation}
Since $U^+_{\nu, \theta}-\sqrt{2P_c}=\frac{2}{U^+_{\nu, \theta}+\sqrt{2P_c}}\left[\frac{1}{2}(U^+_{\nu, \theta})^2-P_c\right]$, $1/C\le U^+_{\nu, \theta}(-1)\le C$, and $P_c(-1)\ge 1/C$, the lemma follows from (\ref{eq3_1_1}) and (\ref{eq3_1_2}).
\end{proof}

Now we continue the proof of Theorem \ref{thm1_1}. Applying Lemma \ref{lem2_7} with $a=-3/4$ and $b=1$, we have, for all $m\ge 0$,
\[
||\frac{d^m}{dx^m}(U_{\nu_k, \theta}^+-\sqrt{2P_{c_k}})||_{L^{\infty}(-\frac{1}{2},1-\epsilon)}\le C\nu_k.
\]
By Lemma \ref{lem3_1} and Lemma \ref{lem2_12} with $a=-1$, $b=-1/2$, we have
\[
||\frac{d^m}{dx^m}(U_{\nu_k, \theta}^+-\sqrt{2P_{c_k}})||_{L^{\infty}(-1,-\frac{1}{2})}\le C\nu_k,
\]
for some $C$ depending only on $\delta$, $m$ and an upper bound of $|c|$. Theorem \ref{thm1_1} is proved. \qed

\noindent{\textit{Proof of Theorem \ref{thm1_3} Started}}: In this part, we prove the first paragraph of Theorem \ref{thm1_3} and part (iii). Let $C$ denote a positive constant, having the same dependence as specified in the theorem, which may vary from line to line. By Lemma \ref{lem2_1},
\begin{equation}\label{eqthm3_3_1}
|U_{\nu_k, \theta}|\le C.
\end{equation}
Since $U_{\nu_k, \theta}$ is not $U_{\nu_k, \theta}^\pm$, we know from Theorem A that $U_{\nu_k, \theta}(-1)=\tau_1(\nu_k, c_{k1})$ and $U_{\nu_k, \theta}(1)=\tau_2'(\nu_k, c_{k2})$. Since $c_k\in J_0\setminus\{0\}$, we have $P_{c_k}\ge 0$ on $[-1,1]$. By Lemma \ref{lem2_4}, there exists at most one $x_k$ in $(-1,1)$ such that $U_{\nu_k, \theta}(x_k)=0$.

Now we prove part (iii). Since $c\in \mathring{J}_0$, we have $c_1,c_2>0$ and $\min_{[-1,1]}P_c>0$.
By the convergence of $\{c_k\}$ to $c$, we deduce, using (\ref{eq_2}), that
\begin{equation}\label{eqthm3_3_2}
U_{\nu_k, \theta}(-1)\le -1/C, \quad U_{\nu_k, \theta}(1)\ge 1/C, \quad \min_{[-1,1]}P_{c_k}\ge 1/C,
\end{equation}
and
\begin{equation}\label{eqthm3_3_3}
|U_{\nu_k, \theta}(-1)+\sqrt{2P_{c_k}(-1)}|+|U_{\nu_k, \theta}(1)-\sqrt{2P_{c_k}(1)}|\le C\nu_k.
\end{equation}
Clearly there exists $x_k\in (-1,1)$ such that $U_{\nu_k, \theta}(x_k)=0$. By Lemma \ref{lem2_4} and (\ref{eqthm3_3_2}),
\begin{equation}\label{eqthm3_3_4}
U_{\nu_k, \theta}(x)<0, \ -1\le x< x_k, \textrm{ and }U_{\nu_k, \theta}(x)>0, \ x_k<x\le 1.
\end{equation}
By Corollary \ref{cor2_3} and Corollary \ref{cor2_3'}, using (\ref{eqthm3_3_1}) and (\ref{eqthm3_3_4}), we have that
\begin{equation}\label{eqthm3_3_5}
-C\le U_{\nu_k, \theta}(x)\le -1/C, \ x\in (-1,x_k-\epsilon/2), \textrm{ and }1/C\le U_{\nu_k, \theta}(x)\le C, \ x\in (x_k+\epsilon/2, 1).
\end{equation}
With (\ref{eqthm3_3_5}) we deduce (\ref{eq1_2_2}) by applying Lemma \ref{lem2_7} and Lemma \ref{lem2_7'}. With (\ref{eqthm3_3_3}), (\ref{eqthm3_3_5}) and (\ref{eq1_2_2}), we deduce (\ref{eq1_2_1}) by applying Corollary \ref{cor2_2} and Corollary \ref{cor2_2'}. \qed

\section{$c\in \partial J_0\setminus\{0\}$ and $c_3=c_3^*(c_1,c_2)$}\label{sec4}

In this section, we study a sequence of solutions $U_{\nu_k, \theta}$ of (\ref{eq:NSE}) with $c_k\to c$ and $\nu_k\to 0$, where $c\in \partial J_0\setminus\{0\}$ and $c_3=c_3^*(c_1,c_2)$. We first study the behavior of $U_{\nu_k, \theta}^{\pm}$.

\noindent{\emph{Proof of Theorem \ref{thm1_2}}}: Let $C$ denote a constant depending only on $c$ which may vary from line to line. We only prove the result for the case $c_3=c_3^*(c_1,c_2)$ and $c_2>0$. The result for the case $c_3=c_3^*(c_1,c_2)$ and $c_1>0$ can be proved similarly.
Since $c_3=c_3^*(c_1,c_2)$ and $c_2>0$, we have $c_3<0$ and $P_c=-c_3(x-\bar{x})^2$ with $\bar{x}:=\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}\in [-1,1)$. Then for any $\epsilon<(1-\bar{x})/8$, we have
\begin{equation}\label{eqlem4_1_1}
P_{c}(x)\ge 2P_c(1)/3=4c_2/3>0, \quad 1-2\epsilon\le x\le 1.
\end{equation}
Choose sequences $c_{k1}\to c_1$, $c_{k2}\to c_2$, and let, as in (\ref{eqdef_1}),
\[
\bar{c}_{k3}:=\bar{c}_{3}(c_{k1},c_{k2};\nu_k)=-\frac{1}{2}(\sqrt{\nu_k^2+c_{k1}}+\sqrt{\nu_k^2+c_{k2}})(\sqrt{\nu_k^2+c_{k1}}+\sqrt{\nu_k^2+c_{k2}}+2\nu_k).
\]
Let $c_k=(c_{k1},c_{k2},c_{k3})$, where $c_{k3}>\bar{c}_{k3}$ will be chosen later. It is easy to see that $c_k\in J_{\nu_k}$. Let $U_{\nu_k, \theta}^+$ be the solution of (\ref{eq_1}) with right hand side $P_{c_k}$. For convenience, write $f_k=U_{\nu_k, \theta}^+$, $P_k=P_{c_k}$, and $h_k:=\frac{1}{2}f^2_k-P_k$. Let $\bar{P}_{k}:=P_{(c_{k1},c_{k2},\bar{c}_{k3})}$. By Theorem A, there exists a unique solution $\bar{f}_k$ of (\ref{eq_1}) with right hand side $\bar{P}_k$, and
\begin{equation}\label{eqlem4_1_2}
\bar{f}_k(x)=(\nu_k+\sqrt{\nu_k^2+c_{k1}})(1-x)-(\nu_k+\sqrt{\nu_k^2+c_{k2}})(1+x).
\end{equation}
By Theorem A again, for any integer $i>0$, there exists $\delta_{ik}>0$, $\delta_{ik}\to 0$, such that for $|c_{k3}-\bar{c}_{k3}|\le \delta_{ik}$, we have $||f_k-\bar{f}_k||_{L^{\infty}(-1,1-1/i)}<1/i$. Choose $c_{k3}=\bar{c}_{k3}+\delta_{kk}$. Then $||f_k-\bar{f}_k||_{L^{\infty}(-1,1-1/k)}<1/k$. By computation, for any $\epsilon>0$, $\bar{f}_k(1-\epsilon)=\epsilon \sqrt{c_1}-(2-\epsilon)\sqrt{c_2}+o(1)$, where $o(1)\to 0$ as $k\to \infty$. Since $c_2>0$, we have that for $k>1/\epsilon$, $f_k(1-\epsilon)<0$. On the other hand, by Theorem A, using $c_{k3}>\bar{c}_{k3}$, we have $f_k(1)=-2\nu_k+2\sqrt{\nu_k^2+c_{k2}}\ge \sqrt{c_2}>0$ for sufficiently large $k$. So there is some $x_k\in (1-\epsilon, 1)$ such that $f_k(x_k)=0$.
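The factorization $P_c=-c_3(x-\bar{x})^2$ at the critical value $c_3=c_3^*(c_1,c_2)=-\frac{1}{2}(\sqrt{c_1}+\sqrt{c_2})^2$ can also be verified numerically. The snippet below is a small sanity check of this algebraic identity (the helper names `P` and `check_factorization` are ours, not from the text):

```python
import math

def P(c1, c2, c3, x):
    # P_c(x) = c1*(1 - x) + c2*(1 + x) + c3*(1 - x^2), as in the text
    return c1 * (1 - x) + c2 * (1 + x) + c3 * (1 - x * x)

def check_factorization(c1, c2):
    """Max deviation of P_c from -c3*(x - xbar)^2 on a grid, at c3 = c3*(c1, c2)."""
    s = math.sqrt(c1) + math.sqrt(c2)
    c3 = -0.5 * s * s                              # critical value c3*(c1, c2)
    xbar = (math.sqrt(c1) - math.sqrt(c2)) / s     # vertex of the parabola
    grid = [k / 50.0 - 1.0 for k in range(101)]    # 101 points in [-1, 1]
    return max(abs(P(c1, c2, c3, x) + c3 * (x - xbar) ** 2) for x in grid)

print(check_factorization(1.0, 4.0) < 1e-12)  # True: the factorization is exact
```

In particular, for $c_1=0$ the check reproduces $P_c(x)=\frac{1}{2}c_2(x+1)^2$, which is used in Case 2 below.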
By (\ref{eqlem4_1_1}), $P_k(x_k)\ge P_c(x_k)+o(1)\ge c_2>0$ for large $k$. So we have $|\frac{1}{2}f^2_k(x_k)-P_k(x_k)|\ge c_2$ for large $k$. Theorem \ref{thm1_2} is proved. \qed

\begin{rmk}
If $c_k\in J_0$, i.e. $P_{c_k}\ge 0$ on $[-1,1]$, then we have, by Theorem \ref{thm1_2_1}, that $||\frac{1}{2}(U_{\nu_k, \theta}^{\pm}(c_k))^2-P_{c_k}||_{L^{\infty}[-1,1]}\to 0$ as $k\to \infty$. So the $\{P_{c_k}\}$ constructed in Theorem \ref{thm1_2} has the property that $\min_{[-1,1]}P_{c_k}<0$ for large $k$.
\end{rmk}

\noindent{\textit{Proof of Theorem \ref{thm1_2_1}}}: Let $C$ denote a positive constant depending only on $c$ which may vary from line to line. For convenience, write $f_k=U_{\nu_k, \theta}^+(c_k)$, $P_k=P_{c_k}$, and $h_k:=\frac{1}{2}f^2_k-P_k$. In the following we always assume that $k$ is large.

(i) We only prove the results for $U_{\nu_k,\theta}^+$; the proof for $U_{\nu_k,\theta}^-$ is similar. Since $P_k\ge 0$ on $[-1,1]$ and $f_k(-1)=\tau_2(\nu_k,c_{k1})>0$, we have, by Lemma \ref{lem2_1} and Lemma \ref{lem2_4},
\begin{equation}\label{eq4_2_9}
0<f_k(x)\le C, \quad -1\le x<1.
\end{equation}
By (\ref{eqP_1}), $P_c(x)=-c_3(x-\bar{x})^2$ with $\bar{x}=\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}\in [-1,1]$. Since $c_k\to c$ and $c_3<0$, we know $c_{k3}<\frac{1}{2}c_3<0$ for large $k$. Let $\bar{x}_k$ be the unique minimum point of $P_k$; then $\bar{x}_k\to \bar{x}$,
\begin{equation*}
P_k(x)=P_k(\bar{x}_k)-c_{k3}(x-\bar{x}_k)^2,
\end{equation*}
and, for large $k$,
\begin{equation}\label{eq4_2_3}
\frac{|c_3|}{2}(x-\bar{x}_k)^2\le P_k(x)-P_k(\bar{x}_k)\le 2|c_3|(x-\bar{x}_k)^2, \quad -1\le x\le 1.
\end{equation}
We first prove
\begin{equation}\label{eq4_1_1}
||\frac{1}{2}(U^{+}_{\nu_k, \theta})^2-P_{c_k}||_{L^{\infty}(-1,1)}\le C\nu_k^{2/3}.
\end{equation}

\textbf{Case 1}: $c_1,c_2>0$. In this case $\bar{x}\in (-1,1)$.
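Before carrying out the case analysis, the choice of the scale $a_k=\nu_k^{1/3}/\alpha$ made below can be motivated as follows (this motivating computation is ours): Lemma \ref{lem2_5} produces points where $|h_k|\le C\nu_k/a_k$, while near the minimum point $\bar{x}_k$ one only has $P_k\gtrsim a_k^2$, so the best one can do is balance the two error sources,

```latex
\frac{C\nu_k}{a_k}\;\sim\;a_k^{2}
\quad\Longleftrightarrow\quad
a_k\;\sim\;\nu_k^{1/3},
\qquad\text{giving}\qquad
\max\Big\{\frac{C\nu_k}{a_k},\,C a_k^{2}\Big\}\;\sim\;\nu_k^{2/3},
```

which is exactly the rate appearing in (\ref{eq4_1_1}).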
Let $a_k=\nu_k^{1/3}/\alpha$ for some positive $k$-independent constant $\alpha$ to be determined. By Lemma \ref{lem2_5}, there exists some $x_{k}\in (\bar{x}_k+a_k,\bar{x}_k+2a_k)$ such that $|h_k(x_{k})|\le C\nu_k/a_k=C\alpha^3 a_k^2$. It follows from (\ref{eq4_2_3}), using the fact that $P_k(\bar{x}_k)\ge 0$, that
\begin{equation}\label{eq4_2_4}
P_k(x)\ge |c_3|a_k^2/2, \quad \forall\, \bar{x}_k+a_k\le x\le 1.
\end{equation}
Thus $f^2_k(x_k)/2\ge P_{k}(x_k)-|h_k(x_k)|\ge \left(|c_3|/2-C\alpha^3\right)a_k^2$. Fix $\alpha^3=|c_3|/(4C)$. By (\ref{eq4_2_9}) we have $f_{k}(x_{k})\ge a_k/C$. Applying Lemma \ref{lem2_2} on $[x_k,1]$, using (\ref{eq4_2_4}), we have $f_k\ge a_k/C$ on $[x_{k}, 1]$. Since $|h_{k}(x_{k})|\le C\nu_k^{2/3}$ and $|h_k(1)|=|\frac{1}{2}f^2_k(1)-P_k(1)|=|\frac{1}{2}(\tau_2'(\nu_k,c_{k2}))^2-2c_{k2}|\le C\nu_k$, we have, by applying Lemma \ref{lem2_6} on $[x_k,1]$, that
\[
\max_{[\bar{x}_k+2a_k,1]}|h_k|\le \max_{[x_k,1]}|h_k|\le C\nu_k^{2/3}.
\]
Similarly, by Lemma \ref{lem2_5}, there exists some $x'_k\in [\bar{x}_k-2a_k, \bar{x}_k-a_k]$ such that $|h_k(x'_k)|\le C\nu_k^{2/3}$. We also have $|h_k(-1)|=|\frac{1}{2}f^2_k(-1)-P_k(-1)|=|\frac{1}{2}(\tau_2(\nu_k,c_{k1}))^2-2c_{k1}|\le C\nu_k$. Similarly to (\ref{eq4_2_4}), we have $P_k(x)\ge a_k^2/C$ for $x\in [-1,x_k']$. Recall that $f_k(-1)\ge \sqrt{c_1}$. Using Lemma \ref{lem2_2} we have $f_k\ge a_k/C$ on $[-1,x_k']$. Then by similar arguments as on $[\bar{x}_k+2a_k, 1]$, we have
\[
\max_{[-1, \bar{x}_k-2a_k]}|h_k|\le C\nu_k^{2/3}.
\]
Now we have that $|h_k(\bar{x}_k-2a_k)|\le Ca_k^2$ and $|h_k(\bar{x}_k+2a_k)|\le Ca_k^2$. Notice that $f_k>0$ on $(-1,1)$, $P_k\ge 0$ on $(-1,1)$, and, using (\ref{eq4_2_3}), $P_k(x)\le P_k(\bar{x}_k)+ 2|c_3|(x-\bar{x}_k)^2$ on $[-1,1]$. Applying Lemma \ref{lem2_10} on $[\bar{x}_k-2a_k, \bar{x}_k+2a_k]$ with $\alpha=2$ and $\bar{x}=\bar{x}_k$, we have that
\[
\max_{[\bar{x}_k-2a_k,\bar{x}_k+2a_k]}|h_k|\le Ca_k^2+C\nu_k/a_k\le C\nu_k^{2/3}.
\]
Estimate (\ref{eq4_1_1}) is proved in this case.

\textbf{Case 2}: $c_1=0$ and $c_2>0$. In this case $P_c(x)=\frac{1}{2}c_2(x+1)^2$ and $\bar{x}_k\to -1$. Let $a_k=\nu_k^{1/3}/\alpha$ for some $\alpha>0$ to be determined. Let $b_k=\max\{-1,\bar{x}_k-2a_k\}$ and $d_k=\max\{-1+2a_k, \bar{x}_k+2a_k\}$. It is clear that $-1\le b_k<d_k<1$. We prove estimate (\ref{eq4_1_1}) separately on $[d_k,1]$, $[-1,b_k]$ and $[b_k,d_k]$.

We first prove the estimate on $[d_k,1]$. Since $d_k-2a_k\ge \bar{x}_k$, we have $x-\bar{x}_k\ge a_k$ for $x$ in $[d_k-a_k, d_k]$. By (\ref{eq4_2_3}), we have $P_k\ge \frac{1}{2}|c_3|a_k^2$ on $[d_k-a_k,d_k]$. We also have $[d_k-a_k,d_k]\subset [-1,1]$. Applying Lemma \ref{lem2_5} on $[d_k-a_k,d_k]$, using (\ref{eq4_2_9}), there exists some $x_k\in [d_k-a_k,d_k]$ such that $|h_k(x_{k})|\le C\nu_k/a_k=C\alpha^3 a_k^2$. Thus $\frac{1}{2}f^2_k(x_k)\ge P_{k}(x_k)-|h_k(x_k)|\ge \left(\frac{1}{2}|c_3|-C\alpha^3\right)a_k^2$. Fix $\alpha^3=|c_3|/(4C)$. By (\ref{eq4_2_9}) we have $f_{k}(x_{k})\ge a_k/C$. Applying Lemma \ref{lem2_2} on $[x_{k}, 1]$, and using $P_k\ge \frac{1}{2}|c_3|a_k^2$ on the interval, we have $f_k(x)\ge a_k/C$ on $[x_{k}, 1]$. Since $|h_{k}(x_{k})|\le C a_k^2$ and $|h_k(1)|=|\frac{1}{2}f^2_k(1)-P_k(1)|=|\frac{1}{2}(\tau_2'(\nu_k,c_{k2}))^2-2c_{k2}|\le C\nu_k$, and noticing $x_k\le d_k$, we have, by applying Lemma \ref{lem2_6} on $[x_{k}, 1]$, that
\begin{equation}\label{eq4_2_5}
\max_{[d_k, 1]}|h_k|\le \max_{[x_k,1]}|h_k|\le C\nu_k^{2/3}.
\end{equation}

Next, we prove the estimate on $[-1,b_k]$. If $\bar{x}_k-2a_k>-1$, by (\ref{eq4_2_3}) we have $P_k\ge \frac{1}{2}|c_3|a_k^2$ on $[-1,\bar{x}_k-a_k]$. In particular, $2c_{k1}=P_k(-1)\ge \frac{1}{2}|c_3|a_k^2$. So $c_{k1}\ge \frac{1}{4}|c_3|a_k^2$, and consequently $f_k(-1)=\tau_2(\nu_k,c_{k1})\ge \sqrt{c_{k1}}\ge \frac{1}{2}\sqrt{|c_3|}a_k$.
Applying Lemma \ref{lem2_2} on $[-1,\bar{x}_k-a_k]$, and using $P_k\ge \frac{1}{2}|c_3|a_k^2$ on the interval, we have $f_k(x)\ge a_k/C$ on $[-1,\bar{x}_k-a_k]$. Applying Lemma \ref{lem2_5} on $[\bar{x}_k-2a_k, \bar{x}_k-a_k]$, using (\ref{eq4_2_9}), there exists some $y_k\in [\bar{x}_k-2a_k, \bar{x}_k-a_k]$ such that $|h_k(y_{k})|\le C\nu_k/a_k=C\alpha^3 a_k^2$. We also have $|h_k(-1)|\le C\nu_k$. Noticing $b_k=\bar{x}_k-2a_k\le y_k\le \bar{x}_k-a_k$, and applying Lemma \ref{lem2_6} on $[-1,y_k]$, we have that
\begin{equation}\label{eq4_2_7}
\max_{[-1,b_k]}|h_k|\le \max_{[-1,y_k]}|h_k|\le C\nu_k^{2/3}.
\end{equation}
If $\bar{x}_k-2a_k\le -1$, then $b_k=-1$ and $\max_{[-1,b_k]}|h_k|=|h_k(-1)|\le C\nu_k\le C\nu_k^{\frac{2}{3}}$.

Now we prove the estimate on $[b_k,d_k]$. We have proved in the above that $|h_{k}(b_k)|\le Ca_k^2$ and $|h_{k}(d_k)|\le Ca_k^2$. If $\bar{x}_k<-1-a_k$, then $[b_k,d_k]=[-1,-1+2a_k]$, and for any $x\in [-1,-1+2a_k]$, $x-\bar{x}_k\ge -1-\bar{x}_k>a_k$. By (\ref{eq4_2_3}), we have $P_k\ge a^2_k/C$ on $[-1,-1+2a_k]$. In particular, $2c_{k1}=P_k(-1)\ge \frac{1}{2}|c_3|a_k^2$. So $c_{k1}\ge \frac{1}{4}|c_3|a_k^2$, and consequently $f_k(-1)=\tau_2(\nu_k,c_{k1})\ge \sqrt{c_{k1}}\ge \frac{1}{2}\sqrt{|c_3|}a_k$. Applying Lemma \ref{lem2_2} on $[-1,-1+2a_k]$, we have $f_k\ge a_k/C$ on $[-1,-1+2a_k]$. Notice we also know $|h_{k}(-1)|\le Ca_k^2$ and $|h_{k}(-1+2a_k)|\le Ca_k^2$. Applying Lemma \ref{lem2_6} on $[b_k,d_k]=[-1,-1+2a_k]$, we have that in this case
\begin{equation}\label{eq4_2_12}
\max_{[b_k,d_k]}|h_k|\le Ca_k^2.
\end{equation}
Next, we consider the case $\bar{x}_k\ge -1-a_k$. If $\bar{x}_k-2a_k\ge -1$, then $[b_k,d_k]=[\bar{x}_k-2a_k,\bar{x}_k+2a_k]$. If $\bar{x}_k-2a_k<-1$, then $[b_k,d_k]=[-1,-1+2a_k]$ when $\bar{x}_k<-1$, and $[b_k,d_k]=[-1,\bar{x}_k+2a_k]$ when $\bar{x}_k\ge -1$. So we have $\mathrm{dist}(\bar{x}_k, [b_k,d_k])\le Ca_k$, and $2a_k\le d_k-b_k\le 4a_k$.
Notice that $|h_{k}(b_k)|\le Ca_k^2$, $|h_{k}(d_k)|\le Ca_k^2$, $f_k>0$ on $(-1,1)$, $P_k\ge 0$ on $(-1,1)$, and, using (\ref{eq4_2_3}), $P_k(x)\le P_k(\bar{x}_k)+ 2|c_3|(x-\bar{x}_k)^2$ on $[-1,1]$. Applying Lemma \ref{lem2_10} on $[b_k,d_k]$ with $\alpha=2$, we have that
\begin{equation}\label{eq4_2_8}
\max_{[b_k,d_k]}|h_k|\le Ca_k^2+C\nu_k/a_k\le C\nu_k^{2/3}.
\end{equation}
By (\ref{eq4_2_5}), (\ref{eq4_2_7}), (\ref{eq4_2_12}) and (\ref{eq4_2_8}), we have $\max_{[-1,1]}|h_k|\le C\nu_k^{2/3}$. So estimate (\ref{eq4_1_1}) is proved in Case 2.

\textbf{Case 3}: $c_2=0$ and $c_1>0$. The proof of (\ref{eq4_1_1}) is similar to that of Case 2. We have by now proved (\ref{eq4_1_1}).

By (\ref{eq4_1_1}), we have $\lim_{k\to \infty}|||f_k|-\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$. Using this and (\ref{eq4_2_9}), we have $\lim_{k\to \infty}||U^{+}_{\nu_k, \theta}-\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$.

Next, we prove
\begin{equation}\label{eq4_1_3}
||U^{+}_{\nu_k, \theta}-\sqrt{2P_{c_k}}||_{C^{m}([-1,1-\epsilon]\setminus[\bar{x}-\epsilon, \bar{x}+\epsilon])}\le C\nu_k.
\end{equation}
If $\bar{x}=-1$, then by (\ref{eq4_2_3}) and the fact that $\bar{x}_k\to \bar{x}$, we have $P_k\ge \epsilon^2/C$ on $[-1+\epsilon/2, 1-\epsilon/2]$ for large $k$. Applying Lemma \ref{lem2_7} on $[-1+\epsilon/2, 1-\epsilon/2]$, using (\ref{eq4_2_9}), we have (\ref{eq4_1_3}) in this case. If $\bar{x}>-1$, without loss of generality we assume $\epsilon$ is small enough that $\bar{x}-2\epsilon>-1$. In this case, by (\ref{eq4_2_3}) and the fact that $\bar{x}_k\to \bar{x}$, we have $P_k\ge \epsilon^2/C$ on $[-1, 1-\epsilon/2]\setminus [\bar{x}-\epsilon/2, \bar{x}+\epsilon/2]$ for large $k$.
Applying Lemma \ref{lem2_7} on $[-1+\epsilon/2, \bar{x}-\epsilon/2]$ and $[\bar{x}+\epsilon/2, 1-\epsilon/2]$ separately, we have
\begin{equation}\label{eq4_2_10}
||f_k-\sqrt{2P_k}||_{C^{m}([-1+\epsilon,\bar{x}-\epsilon]\cup [\bar{x}+\epsilon, 1-\epsilon])}\le C\nu_k,
\end{equation}
for some constant $C$ depending only on $\epsilon$, $m$ and an upper bound of $|c|$. By Lemma \ref{lem3_1} and (\ref{eq4_2_10}), we have $\left|\frac{d^i}{dx^i}(f_k-\sqrt{2P_k})(-1)\right|\le C\nu_k$ and $\left|\frac{d^i}{dx^i}(f_k-\sqrt{2P_k})(\bar{x}-\epsilon)\right|\le C\nu_k$, $0\le i\le m$, where $C$ depends only on $m$ and an upper bound of $|c|$. Applying Lemma \ref{lem2_12} on $[-1,\bar{x}-\epsilon]$, using (\ref{eq4_2_9}), we have
\begin{equation}\label{eq4_2_11}
||f_k-\sqrt{2P_k}||_{C^{m}([-1,\bar{x}-\epsilon])}\le C\nu_k.
\end{equation}
Estimate (\ref{eq4_1_3}) in this case follows from (\ref{eq4_2_10}) and (\ref{eq4_2_11}).

Next, by (\ref{eq4_1_1}) and (\ref{eq_1}), for any $\epsilon>0$, there exists some constant $C>0$, depending only on $\epsilon$ and an upper bound of $|c|$, such that $|f'_k|\le C\nu_k^{-1/3}$ on $[-1+\epsilon, 1-\epsilon]$, so $|h'_k|=|f_kf'_k-P'_k|\le C\nu_k^{-1/3}$. By interpolation, for any $x, y\in (-1+\epsilon, 1-\epsilon)$ and $0<\beta<1$,
\[
\frac{|h_k(x)-h_k(y)|}{|x-y|^{\beta}}\le 2||h_k||^{1-\beta}_{L^{\infty}(-1+\epsilon, 1-\epsilon)}||h_k'||^{\beta}_{L^{\infty}(-1+\epsilon, 1-\epsilon)}\le C\nu_k^{2(1-\beta)/3}\nu_k^{-\beta/3}\le C\nu_k^{\frac{2}{3}-\beta}.
\]
So we have
\begin{equation}\label{eq4_1_2}
||\frac{1}{2}(U^{+}_{\nu_k, \theta})^2-P_{c_k}||_{C^{\beta}(-1+\epsilon,1-\epsilon)}\le C\nu_k^{\frac{2}{3}-\beta}.
\end{equation}
Part (i) follows from (\ref{eq4_1_1}), (\ref{eq4_1_3}) and (\ref{eq4_1_2}).

(ii) If $P_k\ge 0$ on $[-1,1]$, then the conclusion follows from part (i).
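The interpolation inequality used above is elementary; for completeness (this one-line derivation is ours), for $x\ne y$,

```latex
|h_k(x)-h_k(y)|
\le \min\big\{2\|h_k\|_{L^\infty},\ \|h_k'\|_{L^\infty}|x-y|\big\}
\le \big(2\|h_k\|_{L^\infty}\big)^{1-\beta}\big(\|h_k'\|_{L^\infty}|x-y|\big)^{\beta},
```

using $\min\{A,B\}\le A^{1-\beta}B^{\beta}$ for $A,B>0$ and $0<\beta<1$; dividing by $|x-y|^{\beta}$ and using $2^{1-\beta}\le 2$ gives the stated bound with the factor $2$.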
So below we assume that $\min_{[-1,1]}P_k<0$. Let $\min_{[-1,1]}P_k=P_k(\bar{x}_k)$. Since $P_k(x)=c_{k1}(1-x)+c_{k2}(1+x)+c_{k3}(1-x^2)$, we have $\bar{x}_k=\frac{c_{k2}-c_{k1}}{2c_{k3}}$ and
\begin{equation}\label{eqlem4_2_5}
P_k(x)=P_k(\bar{x}_k)-c_{k3}(x-\bar{x}_k)^2.
\end{equation}
Then
\begin{equation}\label{eq5_2_2}
1-\bar{x}_k=\frac{-c_{k2}+2c_{k3}+c_{k1}}{2c_{k3}}\le C(|c_{k2}|+|2c_{k3}+c_{k1}|).
\end{equation}
By Lemma \ref{lem2_9} and the assumption that $\min_{[-1,1]}P_k<0$, we have
\begin{equation}\label{eqlem4_2_4}
-C\nu_k\le P_k(\bar{x}_k)<0.
\end{equation}
Let $\bar{P}_{k}:=P_{(c_{k1},c_{k2},\bar{c}_{k3})}$ and let $\bar{f}_k$ be as in (\ref{eqlem4_1_2}). Denote
\[
\tilde{x}_k=\frac{\sqrt{\nu_k^2+c_{k1}}-\sqrt{\nu_k^2+c_{k2}}}{2\nu_k+\sqrt{\nu_k^2+c_{k1}}+\sqrt{\nu_k^2+c_{k2}}}\in (-1,1).
\]
By (\ref{eqlem4_1_2}) we have that
\begin{equation}\label{eqlem4_2_2}
\bar{f}_k(\tilde{x}_k)=0, \quad \bar{f}_k>0, \ -1 \le x< \tilde{x}_k, \textrm{ and } \bar{f}_k<0, \ \tilde{x}_k<x\le 1.
\end{equation}
By computation,
\begin{equation}\label{eq5_2_1}
1-\tilde{x}_k=\frac{2\nu_k+2\sqrt{\nu_k^2+c_{k2}}}{2\nu_k+\sqrt{\nu_k^2+c_{k1}}+\sqrt{\nu_k^2+c_{k2}}}\le C(\nu_k+\sqrt{|c_{k2}|}).
\end{equation}
Since $2c_3=-c_1$, by (\ref{eq5_2_2}) and (\ref{eq5_2_1}) we see that $\bar{x}_k\to 1$ and $\tilde{x}_k\to 1$. Notice that $P_k\ge \bar{P}_k$ and $f_k(-1)=\bar{f}_k(-1)>2\nu_k$. By Lemma 2.4 in \cite{LLY2}, we have
\begin{equation}\label{eqlem4_2_3}
f_k\ge \bar{f}_k, \quad -1<x<1.
\end{equation}
Let $y_k=\min\{\bar{x}_k, \tilde{x}_k\}\to 1$. By (\ref{eqlem4_2_2}) and (\ref{eqlem4_2_3}) we have $f_k>0$ for $-1\le x<y_k$.
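The formula for $\tilde{x}_k$ and (\ref{eqlem4_2_2}) follow directly from (\ref{eqlem4_1_2}); writing $A_k=\sqrt{\nu_k^2+c_{k1}}$ and $B_k=\sqrt{\nu_k^2+c_{k2}}$,

```latex
\bar{f}_k(x)=0
\iff (\nu_k+A_k)(1-x)=(\nu_k+B_k)(1+x)
\iff x=\frac{A_k-B_k}{2\nu_k+A_k+B_k}=\tilde{x}_k,
```

and since $\bar{f}_k$ is affine with slope $-(2\nu_k+A_k+B_k)<0$, it is positive to the left of $\tilde{x}_k$ and negative to the right, which is (\ref{eqlem4_2_2}).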
As in Case 3 in the proof of part (i), we have
\begin{equation}\label{eq5_2_5}
\lim_{k\to \infty}||f_k-\sqrt{2P_c}||_{L^{\infty}(-1,y_k-2a_k)}=0,
\end{equation}
and there is some $a_k=\nu_k^{1/3}/\alpha$, with $\alpha>0$ fixed, such that
\begin{equation}\label{eq5_2_6}
\max_{[-1,y_k-2a_k]}|h_k|\le C\nu_k^{2/3}.
\end{equation}
By (\ref{eq5_2_1}), (\ref{eq5_2_2}) and the fact that $a_k=\nu_k^{1/3}/\alpha$, we have
\begin{equation}\label{eq5_2_2_1}
|y_k-2a_k-1|\le C(\sqrt{|c_{k2}|}+|2c_{k3}+c_{k1}|+\nu_k^{1/3}).
\end{equation}
On the interval $[y_k-2a_k, 1]$, by (\ref{eq5_2_2_1}), (\ref{eqlem4_2_5}), (\ref{eqlem4_2_4}) and the fact that $\bar{x}_k\in [y_k-2a_k,1]$, we have, for large $k$,
\[
|P_k(x)|\le C(|c_{k2}|+|2c_{k3}+c_{k1}|^2+\nu_k^{2/3}), \quad y_k-2a_k\le x\le 1,
\]
and
\[
P_k(x)\ge -C\nu_k+|c_3|a_k^2/2>0, \quad -1\le x<y_k-2a_k.
\]
Let $\hat{P}_k(x)=P_{\hat{c}_k}(x):=P_k(x)+C\nu_k(1+x)$. It can be seen that the corresponding $\hat{c}_k$ belongs to $J_{\nu_k}$. We have $\hat{P}_k\ge P_k>0$ for $-1\le x<y_k-2a_k$. By (\ref{eqlem4_2_4}), $\hat{P}_k>0$ for $y_k-2a_k\le x\le 1$. So $\hat{P}_k>0$ on $[-1,1]$. Let $\hat{f}_k$ be the upper solution of (\ref{eq_1}) with right hand side $\hat{P}_k$. Then by part (i), we have $||\frac{1}{2}\hat{f}_k^2-\hat{P}_k||_{L^{\infty}(-1,1)}\le C\nu_k^{2/3}$. Notice that
\[
|\hat{P}_k|\le C(|c_{k2}|+|2c_{k3}+c_{k1}|^2+\nu_k^{2/3}), \quad y_k-2a_k<x< 1.
\]
So
\begin{equation}\label{eq5_2_3}
\hat{f}_k^2<C(|c_{k2}|+|2c_{k3}+c_{k1}|^2+\nu_k^{2/3}), \quad y_k-2a_k<x< 1.
\end{equation}
Since $\hat{c}_{k1}=c_{k1}$, we have $f_{k}(-1)=\hat{f}_k(-1)>2\nu_k$. Using this and the fact that $\hat{P}_k\ge P_k$, by Lemma 2.4 in \cite{LLY2}, we have $f_k\le \hat{f}_k$ on $(-1,1)$. So on the interval $[y_k-2a_k, 1]$, we have $\bar{f}_k\le f_k\le \hat{f}_k$.
Using the expression (\ref{eqlem4_1_2}) of $\bar{f}_k$ and (\ref{eq5_2_2_1}), we have
\[
|\bar{f}_k|\le C(\sqrt{|c_{k2}|}+|2c_{k3}+c_{k1}|+a_k), \quad y_k-2a_k<x< 1.
\]
By this estimate and (\ref{eq5_2_3}), we have
\begin{equation}\label{eq5_2_4}
|f_k|\le C(\sqrt{|c_{k2}|}+|2c_{k3}+c_{k1}|+\nu_k^{1/3}), \quad y_k-2a_k<x< 1.
\end{equation}
So we have
\[
|\frac{1}{2}f_k^2-P_k|\le C(|c_{k2}|+|2c_{k3}+c_{k1}|^2+\nu_k^{2/3}), \quad y_k-2a_k<x< 1.
\]
By this and (\ref{eq5_2_6}) we have (\ref{eqthm1_2_1_1}). Moreover, by (\ref{eq5_2_2_1}) and (\ref{eq5_2_4}), we have
\[
|f_k(x)-\sqrt{2P_c(x)}|\le |f_k|+|c_3||y_k-2a_k-1|\le C(\sqrt{|c_{k2}|}+|2c_{k3}+c_{k1}|+\nu_k^{1/3}), \quad y_k-2a_k<x< 1.
\]
By this and (\ref{eq5_2_5}), we have $\lim_{k\to \infty}||U^{+}_{\nu_k, \theta}-\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$. Part (ii) is proved.

(iii) The proof is similar to that of part (ii). Theorem \ref{thm1_2_1} is proved. \qed

Now we study sequences of solutions $U_{\nu_k, \theta}$ of (\ref{eq:NSE}), with the above $\nu_k$ and $c_k$, other than $U_{\nu_k, \theta}^\pm$.

\noindent{\emph{Proof of Theorem \ref{thm1_3} continued}}: We will prove Theorem \ref{thm1_3} (i) and (ii) in the case $c_3=c_3^*(c_1,c_2)$. Let $C$ denote a positive constant, having the same dependence as specified in the theorem, which may vary from line to line. For convenience, write $f_k=U_{\nu_k, \theta}$, $P_k=P_{c_k}$ and $h_k=\frac{1}{2}f_k^2-P_k$. Throughout the proof, $k$ is large. Let $\bar{x}=\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}$. By assumption, $c_1,c_2\ge 0$, $c_3=c_3^*(c_1,c_2)=-\frac{1}{2}(c_1+2\sqrt{c_1c_2}+c_2)<0$, $-1\le \bar{x}\le 1$, and $P_c(x)=-c_3(x-\bar{x})^2$. Since $c_k\in J_0$, we have $P_k\ge 0$ on $[-1,1]$. By Lemma \ref{lem2_4}, there exists at most one $x_k\in (-1,1)$ such that $f_k(x_k)=0$, and if such an $x_k$ exists we have
\begin{equation}\label{eq4_3_8}
f_k(x)<0, \ -1< x<x_k, \textrm{ and }f_k(x)>0, \ x_k<x< 1.
\end{equation}
Since $c_k\to c$ and $c_3<0$, we know $c_{k3}<\frac{1}{2}c_3<0$ for large $k$. Let $\bar{x}_k$ be the unique minimum point of $P_k$; then $\bar{x}_k\to \bar{x}$, $P_k(x)=P_k(\bar{x}_k)-c_{k3}(x-\bar{x}_k)^2$, and, for large $k$,
\begin{equation}\label{eq4_3_0}
\frac{1}{2}|c_3|(x-\bar{x}_k)^2\le P_k(x)-P_k(\bar{x}_k)\le 2|c_3|(x-\bar{x}_k)^2, \quad -1\le x\le 1.
\end{equation}
By Lemma \ref{lem2_1},
\begin{equation}\label{eq4_3_5}
|f_k|\le C.
\end{equation}
Since $P_c(x)=-c_3(x-\bar{x})^2$, we have, for every $\epsilon>0$, $\min_{[-1,1]\setminus (\bar{x}-\epsilon/2, \bar{x}+\epsilon/2)}P_c>0$. By the convergence of $\{c_k\}$ to $c$, we deduce that
\begin{equation}\label{eq4_3_6}
\min_{[-1,1]\setminus (\bar{x}-\epsilon/2, \bar{x}+\epsilon/2)}P_k\ge 1/C.
\end{equation}
Using (\ref{eq4_3_8}) and (\ref{eq4_3_6}), and applying Lemma \ref{lem2_7} and Lemma \ref{lem2_7'} on each interval of $[-1, x_k-\epsilon/2]\setminus(\bar{x}-\epsilon/2, \bar{x}+\epsilon/2)$ and $[x_k+\epsilon/2, 1]\setminus(\bar{x}-\epsilon/2, \bar{x}+\epsilon/2)$ separately, we have
\begin{equation*}
||U_{\nu_k, \theta}+\sqrt{2P_{c_k}}||_{C^m([-1+\epsilon, x_k-\epsilon]\setminus[\bar{x}-\epsilon, \bar{x}+\epsilon])}+||U_{\nu_k, \theta}-\sqrt{2P_{c_k}}||_{C^m([x_k+\epsilon, 1-\epsilon]\setminus[\bar{x}-\epsilon, \bar{x}+\epsilon])}\le C\nu_k.
\end{equation*}
Next, we prove
\begin{equation}\label{eq4_3_1}
||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{L^{\infty}((-1, x_k-\epsilon)\cup (x_k+\epsilon, 1))}\le C\nu_k^{2/3}.
\end{equation}
Suppose $x_k\to \hat{x}\in [-1,1]$ as $k\to \infty$. Since $f_k$ is not $U_{\nu_k, \theta}^\pm$, we know from Theorem A that $f_k(-1)=\tau_1(\nu_k, c_{k1})$ and $f_k(1)=\tau_2'(\nu_k, c_{k2})$, and therefore, in view of (\ref{eq_2}), we have
\begin{equation}\label{eq4_3_7}
|f_{k}(-1)+\sqrt{2P_k(-1)}|+|f_{k}(1)-\sqrt{2P_k(1)}|\le C\nu_k.
\end{equation}
We have $\min_{[-1,1]}P_c=P_c(\bar{x})$ and $\min_{[-1,1]}P_k=P_k(\bar{x}_k)$, with $\bar{x}\in [-1,1]$ and $\bar{x}_k\to \bar{x}$. We also have that $P_k$ satisfies (\ref{eq4_2_3}) for large $k$.

\textbf{Case 1:} $c_1>0$, $c_2>0$, $c_3=c_3^*(c_1,c_2)<0$. In this case $\bar{x}\in (-1,1)$. We discuss the cases $|x_k-\bar{x}_k|\ge \epsilon/4$ and $|x_k-\bar{x}_k|<\epsilon/4$ separately.

If $|x_k-\bar{x}_k|\ge \epsilon/4$, we treat the case $x_k\ge \bar{x}_k+\epsilon/4$; the other case can be proved similarly. In view of (\ref{eq_2}), we have $f_k(-1)\le -1/C$ and $f_k(1)\ge 1/C$. We first estimate $|h_k|$ on $[x_k+\epsilon,1]$. We have $P_k\ge 1/C$ on $[x_k+\epsilon/2,1]$ for large $k$. By Corollary \ref{cor2_3} and Corollary \ref{cor2_3'}, using (\ref{eq4_3_5}) and (\ref{eq4_3_8}), we have that
\begin{equation}\label{eq4_3_9}
1/C\le f_k\le C, \ x\in \left(x_k+\epsilon/2, 1\right).
\end{equation}
Using (\ref{eq4_3_7}) and (\ref{eq4_3_9}), and applying Lemma \ref{lem2_6} on $(x_k+\epsilon/2, 1)$, we have
\begin{equation}\label{eq4_3_10}
\max_{[x_k+\epsilon, 1]}|h_k|\le C\nu_k.
\end{equation}
Next, we prove estimate (\ref{eq4_3_1}) on $[-1,x_k-\epsilon]$. The proof is similar to Case 1 in the proof of Theorem \ref{thm1_2_1} (i). Let $a_k=\nu_k^{1/3}/\alpha$ for some positive constant $\alpha$ to be determined. Since $P_k(\bar{x}_k)\ge 0$, it follows from (\ref{eq4_3_0}) that
\begin{equation}\label{eq4_3_14}
P_k(x)\ge |c_3|a_k^2/2, \quad \forall\, x\in [-1,\bar{x}_k-a_k]\cup [\bar{x}_k+a_k,1].
\end{equation}
By Lemma \ref{lem2_5}, there exist $y_{k}\in (\bar{x}_k-2a_k,\bar{x}_k-a_k)$, $s_k\in (\bar{x}_k+a_k, \bar{x}_k+2a_k)$ and $t_k\in (x_k-\epsilon/8, x_k-\epsilon/16)$ such that $|h_k(y_{k})|+|h_k(s_k)|\le C\nu_k/a_k=C\alpha^3 a_k^2$ and $|h_k(t_k)|\le C\nu_k$. It follows from (\ref{eq4_3_14}) that
\[
f^2_k(y_k)/2\ge P_{c_k}(y_k)-|h_k(y_k)|\ge \left(|c_3|/2-C\alpha^3\right)a_k^2.
\]
Fix $\alpha^3=\frac{|c_3|}{4C}$. We have $f_{k}(y_{k})<-a_k/\sqrt{C}$. Similarly, we have $f_k(t_k)<-a_k/\sqrt{C}$. Using (\ref{eq4_3_8}), and applying Lemma \ref{lem2_2'} on $[-1,y_k]$ and $[s_k,t_k]$ separately, we have $f_k(x)\le -a_k/C$ on $[-1, y_{k}]$ and on $[s_k,t_k]$. Since $|h_{k}(y_{k})|\le C\nu_k^{2/3}$, $|h_k(-1)|\le C\nu_k$, $|h_k(s_k)|\le C\nu_k^{2/3}$ and $|h_k(t_k)|\le C\nu_k$, applying Lemma \ref{lem2_6} on $[-1, y_k]$ and $[s_k, t_k]$ separately, we have
\begin{equation}\label{eq4_3_11}
\max_{[-1, \bar{x}_k-2a_k]}|h_k|\le \max_{[-1, y_k]}|h_k|\le C\nu_k^{2/3},
\end{equation}
and
\begin{equation}\label{eq4_3_12}
\max_{[\bar{x}_k+2a_k, x_k-\epsilon]}|h_k|\le \max_{[s_k,t_k]}|h_k| \le C\nu_k^{2/3}.
\end{equation}
Now we have that $|h_k(\bar{x}_k-2a_k)|\le Ca_k^2$ and $|h_k(\bar{x}_k+2a_k)|\le Ca_k^2$. Notice that $f_k<0$ on $[\bar{x}_k-2a_k,\bar{x}_k+2a_k]$, $P_k(\bar{x}_k)\ge 0$ and $P_k(x)=P_k(\bar{x}_k)-c_{k3}(x-\bar{x}_k)^2$. Applying Lemma \ref{lem2_10} on $[\bar{x}_k-2a_k,\bar{x}_k+2a_k]$ with $\alpha=2$, we have that
\begin{equation}\label{eq4_3_13}
\max_{[\bar{x}_k-2a_k,\bar{x}_k+2a_k]}|h_k|\le Ca_k^2+C\nu_k/a_k\le C\nu_k^{2/3}.
\end{equation}
By (\ref{eq4_3_10}), (\ref{eq4_3_11}), (\ref{eq4_3_12}) and (\ref{eq4_3_13}), we have proved (\ref{eq4_3_1}) when $x_k\ge \bar{x}_k+\epsilon/4$.

Next, if $|x_k-\bar{x}_k|<\epsilon/4$, then similarly to (\ref{eq4_3_9}) we have
\[
-C\le f_k\le -1/C, \ x\in \left(-1,x_k-\epsilon/2\right), \textrm{ and } 1/C\le f_k\le C, \ x\in \left(x_k+\epsilon/2, 1\right).
\]
Using this and (\ref{eq4_3_7}), and applying Lemma \ref{lem2_6} on $(-1,x_k-\epsilon/2)$ and $(x_k+\epsilon/2, 1)$, (\ref{eq4_3_1}) is proved.

\textbf{Case 2}: $c_1=0$, $c_2>0$, $c_3=c_3^*(c_1,c_2)=-c_2/2<0$. In this case $P_c(x)=\frac{1}{2}c_2(x+1)^2$ and $\bar{x}_k\to -1$, and we have the estimate (\ref{eq4_3_14}). We discuss the cases $x_k-\bar{x}_k\ge \epsilon/4$ and $x_k-\bar{x}_k<\epsilon/4$ separately.
If $x_k-\bar{x}_k\ge \epsilon/4$, then, in view of (\ref{eq_2}), we have $f_k(1)\ge 1/C$. We first estimate $|h_k|$ on $[x_k+\epsilon,1]$. We have $P_k\ge 1/C$ on $[x_k+\epsilon/2,1]$ for large $k$. By Corollary \ref{cor2_3} and Corollary \ref{cor2_3'}, using (\ref{eq4_3_5}) and (\ref{eq4_3_8}), we have (\ref{eq4_3_9}). Using (\ref{eq4_3_7}) and (\ref{eq4_3_9}), and applying Lemma \ref{lem2_6} on $(x_k+\epsilon/2, 1)$, we have
\begin{equation*}
\max_{[x_k+\epsilon, 1]}|h_k|\le C\nu_k.
\end{equation*}
Next, we prove the estimate (\ref{eq4_3_1}) on $[-1,x_k-\epsilon]$. The proof is similar to Case 2 in the proof of Theorem \ref{thm1_2_1} (i). Let $a_k=\nu_k^{1/3}/\alpha$ for some $\alpha>0$ to be determined, $b_k=\max\{-1,\bar{x}_k-2a_k\}$ and $d_k=\max\{-1+2a_k, \bar{x}_k+2a_k\}$. It is clear that $-1\le b_k<d_k<1$. We prove estimate (\ref{eq4_3_1}) separately on $[d_k,x_k-\epsilon]$, $[-1,b_k]$ and $[b_k,d_k]$.

We first prove the estimate on $[d_k,x_k-\epsilon]$. Since $d_k-2a_k\ge \bar{x}_k$, we have $x-\bar{x}_k\ge a_k$ for $x$ in $[d_k-a_k, d_k]$. Applying Lemma \ref{lem2_5} on $[d_k-a_k,d_k]$, using (\ref{eq4_3_5}), there exist $s_k\in (d_k-a_k,d_k)$ and $t_k\in (x_k-\epsilon/8, x_k-\epsilon/16)$ such that $|h_k(s_{k})|\le C\nu_k/a_k=C\alpha^3 a_k^2$ and $|h_k(t_k)|\le C\nu_k$. Thus
\[
\frac{1}{2}f^2_k(t_k)\ge P_{k}(t_k)-|h_k(t_k)|\ge \frac{1}{2}|c_3|a_k^2-C\nu_k.
\]
By (\ref{eq4_3_8}) and (\ref{eq4_3_14}), we have $f_{k}(t_{k})\le -a_k/C$. Applying Lemma \ref{lem2_2'} on $[s_{k}, t_k]$, and using $P_k\ge \frac{1}{2}|c_3|a_k^2$ on the interval, we have
\begin{equation}\label{eq4_3_17}
f_k(x)\le -a_k/C, \quad s_k\le x\le t_k.
\end{equation}
Noticing $s_k\le d_k$ and $|h_k(s_{k})|+|h_k(t_k)|\le C\alpha^3 a_k^2$, and applying Lemma \ref{lem2_6} on $[s_{k}, t_k]$, we have
\begin{equation}\label{eq4_3_16}
\max_{[d_k, x_k-\epsilon]}|h_k|\le \max_{[s_k,t_k]}|h_k|\le C\nu_k^{2/3}.
\end{equation}

Next, we prove the estimate on $[-1,b_k]$.
If $\bar{x}_k-2a_k>-1$, applying Lemma \ref{lem2_5} on $[b_k,b_k+a_k]$, using (\ref{eq4_3_5}), there exists some $y_k\in (b_k, b_k+a_k)$, such that $|h_k(y_{k})|\le C\nu_k/a_k=C\alpha^3 a_k^2$. Thus
\[
\frac{1}{2}f^2_k(y_k)\ge P_{k}(y_k)-|h_k(y_k)|\ge\left(\frac{1}{2}|c_3|-C\alpha^3\right)a_k^2.
\]
Fix $\alpha^3=|c_3|/(4C)$. By (\ref{eq4_3_8}) and (\ref{eq4_3_14}), we have $f_{k}(y_{k})\le -a_k/C$. Applying Lemma \ref{lem2_2'} on $[-1,y_k]$, we have $f_k(x)\le -a_k/C$ on $(-1, y_{k})$. Using $|h_{k}(y_{k})|\le C\nu_k^{2/3}$ and $|h_k(-1)|\le C\nu_k$, applying Lemma \ref{lem2_6} on $[-1, y_k]$, we have
\begin{equation}\label{eq4_3_20}
\max_{[-1, b_k]}|h_k|\le \max_{[-1, y_k]}|h_k|\le C\nu_k^{2/3}.
\end{equation}
If $\bar{x}_k-2a_k\le -1$, then $b_k=-1$ and $\max_{[-1,b_k]}|h_k|=|h_k(-1)|\le C\nu_k\le C\nu_k^{2/3}$. Now we prove the estimate on $[b_k,d_k]$. We have proved in the above that $|h_{k}(b_k)|\le Ca_k^2$ and $|h_{k}(d_k)|\le Ca_k^2$. If $\bar{x}_k<-1-a_k$, then $[b_k,d_k]=[-1,-1+2a_k]$, and for any $x\in [-1,-1+2a_k]$, $x-\bar{x}_k\ge -1-\bar{x}_k>a_k$. By (\ref{eq4_3_14}), we have $P_k\ge a_k^2/C$ on $[-1,-1+2a_k]$. By (\ref{eq4_3_17}), $f_k(d_k)\le -a_k/C$. Applying Lemma \ref{lem2_2'} on $[-1,-1+2a_k]$, we have $f_k\le -a_k/C$ on $[-1,-1+2a_k]$. Notice we also know $|h_{k}(-1)|\le Ca_k^2$ and $|h_{k}(-1+2a_k)|\le Ca_k^2$. Applying Lemma \ref{lem2_6} on $[b_k,d_k]=[-1,-1+2a_k]$, we have that in this case
\begin{equation}\label{eq4_3_18}
\max_{[b_k,d_k]}|h_k|\le Ca_k^2.
\end{equation}
Next, we consider the case $\bar{x}_k\ge -1-a_k$. If $\bar{x}_k-2a_k\ge -1$, then $[b_k,d_k]=[\bar{x}_k-2a_k,\bar{x}_k+2a_k]$. If $\bar{x}_k-2a_k<-1$, then $[b_k,d_k]=[-1,-1+2a_k]$ when $\bar{x}_k<-1$, and $[b_k,d_k]=[-1,\bar{x}_k+2a_k]$ when $\bar{x}_k\ge -1$. So we have $\mathrm{dist}(\bar{x}_k, [b_k,d_k])\le Ca_k$, and $2a_k\le d_k-b_k\le 4a_k$.
Notice that $|h_{k}(b_k)|\le Ca_k^2$, $|h_{k}(d_k)|\le Ca_k^2$, $f_k<0$ on $[b_k,d_k]$ and $P_k\ge 0$ in $(-1,1)$, and using (\ref{eq4_3_0}), $P_k(x)\le P_k(\bar{x}_k)+ 2|c_3|(x-\bar{x}_k)^2$ on $[-1,1]$. Applying Lemma \ref{lem2_10} on $[b_k,d_k]$ with $\alpha=2$, we have that
\begin{equation}\label{eq4_3_19}
\max_{[b_k,d_k]}|h_k|\le Ca_k^2+C\nu_k/a_k\le C\nu_k^{2/3}.
\end{equation}
By (\ref{eq4_3_16}), (\ref{eq4_3_20}), (\ref{eq4_3_18}) and (\ref{eq4_3_19}), we have $\max_{[-1,1]}|h_k|\le C\nu_k^{2/3}$. So estimate (\ref{eq4_3_1}) is proved when $x_k-\bar{x}_k\ge \epsilon/4$. Next, suppose $x_k-\bar{x}_k<\epsilon/4$. Since $x_k>-1$ and $\bar{x}_k\to -1$, we have $x_k+\epsilon/2>\bar{x}_k+\epsilon/4$, and therefore we have $P_k\ge 1/C$ on $[x_k+\epsilon/2, 1]$. Similarly to (\ref{eq4_3_9}), we have
\[
1/C\le f_k\le C \textrm{ on }\left(x_k+\epsilon/2, 1\right).
\]
Using this and (\ref{eq4_3_7}), applying Lemma \ref{lem2_6} on $(x_k+\epsilon/2, 1)$, (\ref{eq4_3_1}) is proved. \textbf{Case 3}: $c_2=0,c_1>0, c_3=c_3^*(c_1,c_2)$. The proof of (\ref{eq4_3_1}) is similar to that of Case 2. We have by now proved (\ref{eq4_3_1}). By (\ref{eq4_3_1}) and (\ref{eq_1}), for any $\epsilon>0$, there exists some constant $C>0$, depending only on $\epsilon$ and an upper bound of $|c|$, such that $|f'_k|\le C\nu_k^{-1/3}$ on $[-1+\epsilon, x_k-\epsilon]\cup [x_k+\epsilon, 1-\epsilon]$, so $|h'_k|=|f_kf'_k-P'_k|\le C\nu_k^{-1/3}$.
So we have
\begin{equation*}
||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{C^1([-1+\epsilon, x_k-\epsilon]\cup [x_k+\epsilon, 1-\epsilon])}\le C\nu_k^{-1/3}.
\end{equation*}
By interpolation, for any $x, y\in (-1+\epsilon, 1-\epsilon)$ and $0<\beta<1$,
\[
\frac{|h_k(x)-h_k(y)|}{|x-y|^{\beta}}\le 2||h_k||^{1-\beta}_{L^{\infty}(-1+\epsilon, 1-\epsilon)}||h_k'||^{\beta}_{L^{\infty}(-1+\epsilon, 1-\epsilon)}\le C\nu_k^{2(1-\beta)/3}\nu_k^{-\beta/3}\le C\nu_k^{2/3-\beta}.
\]
We have
\begin{equation}\label{eq4_3_4}
||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{C^{\beta}([-1+\epsilon, x_k-\epsilon]\cup [x_k+\epsilon, 1-\epsilon])}\le C\nu_k^{2/3-\beta}.
\end{equation}
Using (\ref{eq4_3_8}), (\ref{eq4_3_6}) and (\ref{eq4_3_1}), we have (\ref{eq1_7_0}) in this case. Part (i) in this case follows from (\ref{eq1_7_0}), (\ref{eq4_3_1}) and (\ref{eq4_3_4}). Next, we prove part (ii) in this case. Notice that in this case $c_3=c_3^*(c_1,c_2)$ and $c_1,c_2$ cannot both be zero. We first prove that if such $x_k$ exists and $x_k\to -1$ with $c_1=0$, or such $x_k$ does not exist with $c_2>0=c_1$, then
\begin{equation}\label{eq4_3_1_1}
\lim_{k\to \infty}||f_k-\sqrt{2P_{k}}||_{L^{\infty}(-1, 1)}=0.
\end{equation}
In this case $P_c(x)=\frac{1}{2}c_2(x+1)^2$ where $c_2>0$. By Theorem \ref{thm1_2_1}(i), we have
\[
\limsup_{k\to \infty}||U^{\pm}_{\nu_k, \theta}\mp \sqrt{2P_{k}}||_{L^{\infty}(-1, 1)}=0.
\]
So for any $\epsilon_0>0$, there exists some $\epsilon>0$, such that $||P_k||_{L^{\infty}(-1,-1+2\epsilon)}<\epsilon_0$ and $||U^{\pm}_{\nu_k, \theta}||_{L^{\infty}(-1,-1+2\epsilon)}<\epsilon_0$ for large $k$. Notice $U_{\nu_k, \theta}^-\le f_k\le U^+_{\nu_k, \theta}$; we then have $||f_k-\sqrt{2P_{k}}||_{L^{\infty}(-1,-1+2\epsilon)}<2\epsilon_0$.
Since $P_c(x)=\frac{1}{2}c_2(x+1)^2$, we also have that $P_c\ge 1/C$ on $[-1+\epsilon, 1]$. Moreover, if such $x_k$ does not exist, then since $f_k(1)=\tau'_2(\nu_k, c_{k2})>0$, we have $f_k>0$ on $(-1,1]$. If such $x_k$ exists and $x_k\to -1$, then for $k$ large we have $-1<x_k<-1+\epsilon$. By (\ref{eq4_3_8}), we also have $f_k>0$ on $[-1+\epsilon, 1]$. Then by Corollary \ref{cor2_3}, we have $f_k\ge 1/C$ on $[-1+2\epsilon, 1]$. Notice $|f_k(1)-\sqrt{2P_c(1)}|\le C\nu_k$ and $|f_k(-1+2\epsilon)-\sqrt{2P_c(-1+2\epsilon)}|\le 2\epsilon_0$; by Corollary \ref{cor2_2} we have $||f_k-\sqrt{2P_c}||_{L^{\infty}(-1+2\epsilon,1)}<C\epsilon_0$. So (\ref{eq4_3_1_1}) is proved. Similarly, if such $x_k$ exists and $x_k\to 1$ with $c_2=0$, or such $x_k$ does not exist with $c_1>0=c_2$, we have
\[
\lim_{k\to \infty}||U_{\nu_k, \theta}+\sqrt{2P_{c_k}}||_{L^{\infty}(-1, 1)}=0.
\]
\qed

\section{$c\in \partial J_0$ and $c_3>c_3^*(c_1,c_2)$}\label{sec5}

\begin{lem}\label{lem5_1}
Let $-1\le a<b\le 1$, $\nu_k\to 0^+$, $c_k\in J_{\nu_k}$, $c_1c_2=0, c_3>c_3^*(c_1,c_2)$, and $c_k\to c$ as $k\to \infty$. Then
\[
\min_{[a,b]}P_{c_k}=\min\{P_{c_k}(a), P_{c_k}(b)\},
\]
for sufficiently large $k$.
\end{lem}
\begin{proof}
When $c_3>0$, we have $P_{c_k}''(x)=-2c_{k3}<0$ for large $k$. Thus $P_{c_k}$ is concave, and $\min_{[a,b]}P_{c_k}=\min\{P_{c_k}(a), P_{c_k}(b)\}$. When $c_3\le 0$, we distinguish two cases. Case $1$ is $c_1=0$ and Case $2$ is $c_2=0$. If $c_1=0$, then $P_c'(x)=c_2-2c_3x\ge c_2+2c_3>0$ in $[-1,1]$. So $P_{c_k}'>0$ in $[-1,1]$ for large $k$. Thus $\min_{[a,b]}P_{c_k}=P_{c_k}(a)$. If $c_2=0$, then $P_c'(x)=-c_1-2c_3x\le -c_1-2c_3<0$ in $[-1,1]$. So $P_{c_k}'<0$ in $[-1,1]$ for large $k$. Thus $\min_{[a,b]}P_{c_k}=P_{c_k}(b)$.
\end{proof}
\begin{lem}\label{lem5_3}
Let $\nu_k\to 0^+$, $c_k\in J_{\nu_k}$, $-1<b\le 1$, $c_1c_2=0, c_3>c_3^*(c_1,c_2)$, and $c_k\to c$ as $k\to \infty$.
If $P_c(b)> 0$, then $U^+_{\nu_k, \theta}>0$ on $[-1,b]$ for sufficiently large $k$.
\end{lem}
\begin{proof}
For convenience denote $f_k=U_{\nu_k, \theta}^+$ and $P_k=P_{c_k}$. If $c_{k1}\ge 0$, we have $P_k(-1)=2c_{k1}\ge 0$ for large $k$. Since $P_c(b)>0$, we also have $P_k(b)>0$. By Lemma \ref{lem5_1} we have $P_k(x)\ge \min\{P_k(-1), P_k(b)\}>0$ on $[-1,b]$. Using this and the fact that $f_k(-1)=\tau_2(c_{k1}, \nu_k)>0$, by Lemma \ref{lem2_4} we have $f_k>0$ on $[-1,b]$. If $c_{k1}<0$, since $c_{k1}\to c_1\ge 0$, we must have $c_1=0$, and then $c_3>-c_2/2$ and $P_c(x)=c_2(1+x)+c_3(1-x^2)$. So $P_c(-1)=0$ and $P_c'(-1)=c_2+2c_3>0$. Since $c_k\to c$ as $k\to \infty$, there exist some $C_0>0$ and $\delta>0$, such that
\begin{equation}\label{eq5_3_1}
C_0(1+x)\le P_k(x)-P_k(-1)\le 2C_0(1+x), \quad -1<x<-1+\delta
\end{equation}
for $k$ sufficiently large. Since $c_{k1}<0$ and $P_k(-1)=2c_{k1}\ge -2\nu_k^2$, by (\ref{eq5_3_1}) we have $P_k(-1+2\nu^2_k/C_0)\ge 0$. So by Lemma \ref{lem5_1}, we have $P_k\ge \min\{P_k(-1+2\nu^2_k/C_0), P_k(b)\}\ge 0$ on $[-1+2\nu^2_k/C_0, b]$. Next, let
\[
g_k(x):=f_k(-1)-\frac{C_0}{8\nu_k}(1+x).
\]
Since $f_k(-1)\ge 2\nu_k$, it can be checked that $g_k>0$ on $[-1,-1+2\nu^2_k/C_0]$. By computation, using the facts that $\frac{1}{2}f^2_k(-1)-2\nu_kf_k(-1)=2c_{k1}$ and $f_k(-1)\ge 2\nu_k$, we have that for $-1\le x\le -1+2\nu^2_k/C_0$ and $k$ sufficiently large,
\[
\begin{split}
& Q_k:=\nu_k(1-x^2)g_k'+2\nu_k xg_k+\frac{1}{2}g_k^2 \\
& =\left([\frac{C^2_0}{128\nu^2_k}-\frac{C_0}{8}](1+x)+(2\nu_k-\frac{C_0}{8\nu_k})f_k(-1)\right)(1+x)+\frac{1}{2}f^2_k(-1)-2\nu_kf_k(-1)\\
& \le \left([\frac{C^2_0}{128\nu^2_k}-\frac{C_0}{8}]\frac{2\nu_k^2}{C_0}+4\nu_k^2-\frac{C_0}{4}\right)(1+x)+2c_{k1}\\
& \le 2c_{k1}<P_k(x).
\end{split}
\]
By Lemma 2.3 in \cite{LLY2}, we have $\limsup_{x\to -1^+}|x+1|^{-1}|f_k(x)-f_k(-1)|<\infty$.
So we have $g_k(-1)=f_k(-1)> 2\nu_k$, or $f_k(-1)=g_k(-1)=2\nu_k$ with $\limsup_{x\to -1^+}\int_{-1+2\nu_k^2/C_0}^{x}(1-s^2)^{-1}(-2\nu_k+f_k(s))ds<\infty$. It can be checked that $f_k$ is a solution of (\ref{eq_1}) if and only if $\nu_k f_k$ is a solution of (\ref{eqNSE_1}). Similarly, $\nu_k g_k$ is a solution of (\ref{eqNSE_1}) with right-hand side $Q_k/\nu_k^2$. Notice that $Q_k< P_k$; applying Lemma 2.4 in \cite{LLY2}, we have $f_k\ge g_k>0$ on $(-1,-1+2\nu_k^2/C_0]$. Since $P_k\ge 0$ on $[-1+2\nu_k^2/C_0, b)$ and $f_k(-1+2\nu_k^2/C_0)>0$, we have, by Lemma \ref{lem2_4}, that $f_k>0$ on $[-1+2\nu_k^2/C_0, b]$. So $f_k>0$ on $[-1,b]$, and the lemma is proved.
\end{proof}

\noindent{\textit{Proof of Theorem \ref{thm1_2_2}}}: We only prove the results for $U_{\nu_k, \theta}^+$; the proof of the results for $U_{\nu_k, \theta}^-$ is similar. Let $C$ be a positive constant depending only on $c$ which may vary from line to line. For convenience, write $f_k=U_{\nu_k, \theta}^+$, $P_k=P_{c_k}$, and let $h_k:=\frac{1}{2}f^2_k-P_k$. In the following we always assume that $k$ is large. Since $c_k\to c$ as $k\to \infty$, by Lemma \ref{lem2_1}, we have $f_k\le C$ in $[-1,1]$. We first prove
\begin{equation}\label{eq5_1_1}
||\frac{1}{2}(U^{+}_{\nu_k, \theta})^2-P_{c_k}||_{L^{\infty}(-1,1)}\le C\nu_k^{1/2}.
\end{equation}
\textbf{Case} 1: $c_1=0$, $c_2>0$, $c_3>-c_2/2$. In this case, $P_c(x)=c_2(1+x)+c_3(1-x^2)$ in $(-1,1)$. So $P_c(-1)=2c_1=0$, and $P_c'(-1)=c_2+2c_3>0$. Since $c_k\to c$ as $k\to \infty$, there exists some $\delta>0$, such that for large $k$,
\begin{equation}\label{eq5_3_5}
\frac{1}{2}P_{c}'(-1)(1+x)\le P_{k}(x)-P_{k}(-1)\le 2P_{c}'(-1)(1+x), \quad -1<x<-1+\delta.
\end{equation}
Let $a_k=\nu_k^{1/2}/\alpha$ for some positive constant $\alpha$ to be determined. Then by Lemma \ref{lem2_5}, there exists some $x_{k}\in (-1+a_k,-1+2a_k)$, such that $|h_k(x_{k})|\le \frac{C\nu_k}{a_k}=C\alpha^2 a_k$.
It follows from (\ref{eq5_3_5}) and the facts that $P_k(-1)=2c_{k1}\ge -2\nu_k^2$ and $P_c'(-1)>0$, that
\[
\begin{split}
\frac{1}{2}f^2_k(x_k) & \ge P_k(x_k)-|h_k(x_k)|\ge P_k(-1)+\frac{1}{2}P_c'(-1)(x_k+1)-|h_k(x_k)|\\
& \ge -2\nu_k^2+\left(\frac{1}{2}P'_c(-1)-C\alpha^2\right)a_k \ge \left(\frac{1}{4}P'_c(-1)-C\alpha^2\right)a_k.
\end{split}
\]
Fix $\alpha^2=P'_c(-1)/(8C)$. By (\ref{eq5_3_5}), $P_k(x_k)\ge -2\nu_k^2+\frac{1}{2}P'_c(-1)a_k>0$, so by Lemma \ref{lem5_3} we have $f_k>0$ on $[-1,x_k]$. So $f_{k}(x_{k})\ge \sqrt{a_k/C}$. Since $P_k(1)>\frac{1}{2}P_c(1)>0$ for large $k$, by (\ref{eq5_3_5}) we have
\[
P_k(-1+a_k)\ge 2c_{k1}+a_k/C\ge -2\nu_k^2+a_k/C\ge a_k/C.
\]
Then by Lemma \ref{lem5_1}, we have $P_k(x)\ge a_k/C$ in $[-1+a_k, 1]$ for $k$ large. Applying Lemma \ref{lem2_2} on $[x_k,1]$, we have $f_k(x)\ge \sqrt{a_k/C}$ on $[x_{k}, 1]$. We also have $|h_{k}(x_{k})|\le C\nu_k^{1/2}$ and $|h_k(1)|=|\frac{1}{2}f^2_k(1)-P_k(1)|=|\frac{1}{2}(\tau_2'(\nu_k,c_{k2}))^2-2c_{k2}|\le C\nu_k$. So by applying Lemma \ref{lem2_6} on $[x_k,1]$,
\[
\max_{[-1+2a_k,1]}|h_k|\le \max_{[x_k,1]}|h_k|\le C\nu_k^{1/2}.
\]
Now we have $|h_k(-1+2a_k)|\le Ca_k$ and $|h_k(-1)|=|\frac{1}{2}f^2_k(-1)-P_k(-1)|=|\frac{1}{2}(\tau_2(\nu_k,c_{k1}))^2-2c_{k1}|\le C\nu_k$. By (\ref{eq5_3_5}) we have $P_k(x)\ge P_k(-1)=2c_{k1}\ge -2\nu_k^2\ge -Ca_k$ in $[-1,-1+2a_k]$ for $k$ large. By (\ref{eq5_3_5}) we also have that $P_k(x)\le P_k(-1)+C(1+x)$. Notice $f_k(-1)=2\nu_k+2\sqrt{\nu_k^2+c_{k1}}>0$; applying Lemma \ref{lem2_10} on $[-1,-1+2a_k]$ with $\alpha=1$ and $\bar{x}_k=-1$ there, we have that
\[
\max_{(-1,-1+2a_k)}|h_k|\le Ca_k+C\nu_k/\sqrt{a_k}\le C\nu_k^{1/2}.
\]
Estimate (\ref{eq5_1_1}) is proved in this case. \textbf{Case} 2: $c_1>0$, $c_2=0$, $c_3>-\frac{1}{2}c_1$. In this case, $P_c(x)=c_1(1-x)+c_3(1-x^2)$. So $P_c(1)=0$ and $P_c'(1)=-c_1-2c_3<0$.
Since $c_k\to c$ as $k\to \infty$, there exists some $\delta>0$, such that
\begin{equation}\label{eq5_1_2}
-\frac{1}{2}P'_c(1)(1-x)<P_k(x)-P_k(1)<-2P'_c(1)(1-x), \quad 1-\delta<x<1,
\end{equation}
for large $k$. Let $a_k=\nu_k^{1/2}$. By Lemma \ref{lem2_5}, there exists some $x_{k}\in (1-2a_k, 1-a_k)$, such that $|h_k(x_{k})|\le C\nu_k/a_k=C a_k$. Since $P_c(-1)=2c_1>0$, we have $P_k(-1)>\frac{1}{2}P_c(-1)>0$ for large $k$. By (\ref{eq5_1_2}) we have
\[
P_k(1-a_k)\ge 2c_{k2}+a_k/C\ge -2\nu_k^2+a_k/C\ge a_k/C.
\]
Then by Lemma \ref{lem5_1}, we have $P_k(x)\ge a_k/C$ in $[-1, 1-a_k]$ for $k$ large. Notice $f_k(-1)=\tau_2(\nu_k,c_{k1})\ge \sqrt{c_1}>0$ for large $k$. Applying Lemma \ref{lem2_2} on $[-1, 1-a_k]$, we have $f_k(x)\ge \sqrt{a_k/C}$ on $[-1, 1-a_k]$. We also have $|h_{k}(x_{k})|\le C\nu_k^{1/2}$ and $|h_k(-1)|=|\frac{1}{2}f^2_k(-1)-P_k(-1)|=|\frac{1}{2}(\tau_2(\nu_k,c_{k1}))^2-2c_{k1}|\le C\nu_k$. So by applying Lemma \ref{lem2_6} on $[-1,x_k]$,
\[
\max_{[-1, 1-2a_k]}|h_k|\le \max_{[-1, x_k]}|h_k|\le C\nu_k^{1/2}.
\]
Now we have $|h_k(1-2a_k)|\le Ca_k$ and $|h_k(1)|=|\frac{1}{2}f^2_k(1)-P_k(1)|=|\frac{1}{2}(\tau'_2(\nu_k,c_{k2}))^2-2c_{k2}|\le C\nu_k$. By (\ref{eq5_1_2}) we have $P_k(x)\ge P_k(1)=2c_{k2}\ge -2\nu_k^2\ge -Ca_k$ in $[1-2a_k, 1]$ for $k$ large. By (\ref{eq5_1_2}) we also have that $P_k(x)\le P_k(1)+C(1-x)$. Notice $f_k(1-2a_k)\ge \sqrt{a_k/C}>0$; applying Lemma \ref{lem2_10} on $[1-2a_k, 1]$ with $\alpha=1$ and $\bar{x}_k=1$ there, we have that
\[
\max_{[1-2a_k, 1]}|h_k|\le Ca_k+C\nu_k/\sqrt{a_k}\le C\nu_k^{1/2}.
\]
Estimate (\ref{eq5_1_1}) is proved in this case. \textbf{Case} 3: $c_1=c_2=0$, $c_3>0$. In this case, $P_c(x)=c_3(1-x^2)$ in $(-1,1)$. So $P_c(\pm 1)=0$, $P_c'(-1)=2c_3>0$ and $P_c'(1)=-2c_3<0$. Since $c_k\to c$ as $k\to \infty$, there exists some $\delta>0$, such that for large $k$, (\ref{eq5_3_5}) and (\ref{eq5_1_2}) are true. Let $a_k=\nu_k^{1/2}/\alpha$ for some positive constant $\alpha$ to be determined.
Then by Lemma \ref{lem2_5}, there exist some $x_{k}\in (-1+a_k,-1+2a_k)$ and $y_k\in (1-2a_k, 1-a_k)$, such that $|h_k(x_{k})|+|h_k(y_{k})|\le C\nu_k/a_k=C\alpha^2 a_k$. Similarly to Case 1, we have $f_{k}(x_{k})\ge \sqrt{a_k/C}$. By (\ref{eq5_3_5}), (\ref{eq5_1_2}), and Lemma \ref{lem5_1}, we have $P_k(x)\ge a_k/C$ in $[x_k, y_k]$ for $k$ large. Applying Lemma \ref{lem2_2} on $[x_k,y_k]$, we have $f_k(x)\ge \sqrt{a_k/C}$ on $[x_{k}, y_k]$. We also have $|h_{k}(x_{k})|\le C\nu_k^{1/2}$ and $|h_k(y_k)|\le C\nu_k^{1/2}$. So by applying Lemma \ref{lem2_6} on $[x_k,y_k]$,
\begin{equation*}
\max_{[-1+2a_k,1-2a_k]}|h_k|\le \max_{[x_k,y_k]}|h_k|\le C\nu_k^{1/2}.
\end{equation*}
As in Case 1 and Case 2, we have
\begin{equation*}
\max_{[-1,-1+2a_k]}|h_k|+\max_{[1-2a_k, 1]}|h_k| \le C\nu_k^{1/2}.
\end{equation*}
By the above, estimate (\ref{eq5_1_1}) is proved in this case. From (\ref{eq5_1_1}) we have $\lim_{k\to \infty}|| |f_k|-\sqrt{2P_c} ||_{L^{\infty}(-1,1)}=0$. By Lemma \ref{lem5_3} we have $f_k>0$ on $[-1,1-2a_k]$. Using this and the fact that $\max_{[1-2a_k,1]}|P_k|\le Ca_k$, we have $ \lim_{k\to \infty}||U^{+}_{\nu_k, \theta}-\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}=0$. Next, let $\epsilon>0$ be any fixed small positive constant. If $c_1=0$, by (\ref{eq5_3_5}) we have that $P_k(-1+\epsilon/2)\ge \epsilon/C$. If $c_1>0$, $P_k(-1)\ge c_1$. Similarly, if $c_2=0$, by (\ref{eq5_1_2}), $P_k(1-\epsilon/2)\ge \epsilon/C$. If $c_2>0$, $P_k(1)\ge c_2>0$. By Lemma \ref{lem5_1} we have $P_k\ge \epsilon/C$ on $[-1+\epsilon/2, 1-\epsilon/2]$. As proved above, we also have $f_k>0$ on $[-1+\epsilon/2, 1-\epsilon/2]$ for large $k$. Applying Lemma \ref{lem2_10} on $[-1+\epsilon/2, 1-\epsilon/2]$, we obtain
\begin{equation*}
||U^{+}_{\nu_k, \theta}-\sqrt{2P_{c_k}}||_{C^{m}([-1+\epsilon,1-\epsilon])}\le C\nu_k.
\end{equation*}
The proof is finished.
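The interior closeness of $U^+_{\nu, \theta}$ to $\sqrt{2P_c}$ can also be observed numerically. The following is a minimal sketch, not part of the proof, with our own illustrative choice $c_1=0$, $c_2=1$, $c_3=0$ (so $P_c(x)=1+x$, the Case 1 setting) using \texttt{scipy}; the starting point slightly inside $x=-1$, taken on the outer branch, is our choice to avoid the singular endpoints of (\ref{eq_1}):

```python
# Numerical check that 1/2*U^2 - P_c = O(nu) away from the endpoints for
#   nu*(1-x^2)*U' + 2*nu*x*U + U^2/2 = P_c(x),  P_c(x) = 1 + x
# (illustrative Case 1 data: c_1 = 0, c_2 = 1, c_3 = 0 > -c_2/2).
import numpy as np
from scipy.integrate import solve_ivp

def max_interior_deviation(nu):
    Pc = lambda x: 1.0 + x
    rhs = lambda x, U: (Pc(x) - 2.0*nu*x*U[0] - 0.5*U[0]**2) / (nu*(1.0 - x**2))
    x_start = -0.999
    # Start on the outer branch sqrt(2 P_c); the positive branch is
    # forward-stable, so small errors in the initial value decay quickly.
    sol = solve_ivp(rhs, (x_start, 0.95), [np.sqrt(2.0*Pc(x_start))],
                    method='LSODA', dense_output=True, rtol=1e-9, atol=1e-11)
    xs = np.linspace(-0.9, 0.9, 200)
    return np.max(np.abs(0.5*sol.sol(xs)[0]**2 - Pc(xs)))

for nu in (1e-2, 1e-3):
    print(nu, max_interior_deviation(nu))
```

The printed deviations shrink roughly linearly in $\nu$, consistent with the interior $C^m$ estimate above; the global $O(\nu^{1/2})$ rate of (\ref{eq5_1_1}) is only felt near the endpoint where $P_c$ vanishes.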
\qed
\begin{rmk}
The assumption on $c$ in Theorem \ref{thm1_2_2} is equivalent to $P_{c}(1)P_c(-1)=0$, $P^2_{c}(1)+(P'_c(1))^2\ne 0$ and $P^2_{c}(-1)+(P'_c(-1))^2\ne 0$.
\end{rmk}
Next, we study solutions of (\ref{eq:NSE}) which are not $U_{\nu_k, \theta}^\pm$.

\noindent{\emph{Proof of Theorem \ref{thm1_3} completed}}: We will prove Theorem \ref{thm1_3} (i) and (ii) in the case $c_1c_2=0$ and $c_3>c_3^*(c_1,c_2)$. Let $C$ be a positive constant, having the same dependence as specified in the theorem, which may vary from line to line. For convenience write $f_k=U_{\nu_k, \theta}$, $P_k=P_{c_k}$ and $h_k=\frac{1}{2}f_k^2-P_k$. Throughout the proof $k$ is large. We first prove part (i) in this case. Since $P_k\in J_0$, we have $P_k\ge 0$ on $[-1,1]$. By Lemma \ref{lem2_4}, there exists at most one $x_k\in (-1,1)$ such that $f_k(x_k)=0$. Moreover, if $x_k$ exists, then we have
\begin{equation}\label{eq5_4_3}
f_k(x)<0 \textrm{ for }-1< x<x_k, \quad \textrm{ and }f_k(x)>0 \textrm{ for }x_k<x< 1.
\end{equation}
By Lemma \ref{lem2_1},
\begin{equation}\label{eq5_4_4}
|f_k|\le C.
\end{equation}
Since $c_1,c_2\ge 0$, $c_1c_2=0$ and $c_3>c_3^*(c_1,c_2)$, we have $\min_{[-1+\epsilon/2,1-\epsilon/2]}P_c>0$. By the convergence of $\{c_k\}$ to $c$,
\begin{equation}\label{eq5_4_5}
\min_{[-1+\epsilon/2,1-\epsilon/2]}P_k\ge 1/C.
\end{equation}
Using (\ref{eq5_4_3}) and (\ref{eq5_4_5}), by applying Lemma \ref{lem2_7} and Lemma \ref{lem2_7'} on the intervals $[-1+\epsilon/2, x_k-\epsilon/2]$ and $[x_k+\epsilon/2, 1-\epsilon/2]$ separately, we deduce
\begin{equation*}
||U_{\nu_k, \theta}+\sqrt{2P_{c_k}}||_{C^m([-1+\epsilon, x_k-\epsilon])}+||U_{\nu_k, \theta}-\sqrt{2P_{c_k}}||_{C^m([x_k+\epsilon, 1-\epsilon])}\le C\nu_k.
\end{equation*}
Next, we prove
\begin{equation}\label{eq5_4_1}
||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{L^{\infty}((-1, x_k-\epsilon)\cup (x_k+\epsilon, 1))}\le C\nu_k^{1/2}.
\end{equation}
Since $f_k$ is not $U_{\nu_k, \theta}^\pm$, we know from Theorem A that $f_k(-1)=\tau_1(\nu_k, c_{k1})$ and $f_k(1)=\tau_2'(\nu_k, c_{k2})$. In view of (\ref{eq_2}), we have
\begin{equation}\label{eq5_4_6}
|f_{k}(-1)+\sqrt{2P_k(-1)}|+|f_{k}(1)-\sqrt{2P_k(1)}|\le C\nu_k.
\end{equation}
\textbf{Case} 1: $c_1=0$, $c_2>0$, $c_3>-\frac{1}{2}c_2$. In this case $P_c(x)=c_2(1+x)+c_3(1-x^2)$ in $(-1,1)$. So $P_c(-1)=0$, $P_c(1)=2c_2>0$, and $P_c'(-1)=c_2+2c_3>0$. Since $c_k\to c$, there exists some $\delta>0$, such that
\begin{equation}\label{eq5_4_10}
\frac{1}{2}P_{c}'(-1)(1+x)\le P_{k}(x)-P_{k}(-1)\le 2P_{c}'(-1)(1+x), \quad -1<x<-1+\delta.
\end{equation}
So $P_k(-1+\epsilon/2)>1/C$ and $P_k(1)>1/C$. By Lemma \ref{lem5_1}, we have
\begin{equation}\label{eq5_4_7}
P_k(x)\ge 1/C, \quad -1+\epsilon/4\le x\le 1.
\end{equation}
We discuss the cases when $x_k+1\ge \epsilon/4$ and $x_k+1<\epsilon/4$ separately. We first discuss the case when $x_k+1\ge \epsilon/4$. We have $P_k\ge 1/C$ on $[x_k+\epsilon/4,1]$ for large $k$. Applying Corollary \ref{cor2_3} on $[x_k+\epsilon/4, 1]$, using (\ref{eq5_4_3}), (\ref{eq5_4_4}) and (\ref{eq5_4_7}), we have that
\begin{equation}\label{eq5_4_8}
1/C\le f_k\le C \textrm{ on }\left(x_k+\epsilon/2, 1\right).
\end{equation}
Using (\ref{eq5_4_8}), applying Lemma \ref{lem2_6} on $(x_k+\epsilon/2, 1)$, we have
\begin{equation}\label{eq5_4_9}
\max_{[x_k+\epsilon, 1]}|h_k|\le C\nu_k.
\end{equation}
Next, let $a_k=\nu_k^{1/2}/\alpha$ for some positive constant $\alpha$ to be determined. Since $P_k(-1)\ge 0$, it follows from (\ref{eq5_4_10}) and (\ref{eq5_4_7}) that $P_k(x)\ge a_k/C$ for $x$ in $[-1+a_k,1]$. By Lemma \ref{lem2_5}, there exist some $s_k\in (-1+a_k, -1+2a_k)$ and $t_k\in (x_k-\epsilon, x_k-\epsilon/2)$, such that $|h_k(s_k)|\le C\nu_k/a_k=C\alpha^2 a_k$ and $|h_k(t_k)|\le C\nu_k$.
It follows from the bound $P_k\ge a_k/C$ on $[-1+a_k,1]$ that
\[
\frac{1}{2}f^2_k(t_k)\ge P_{c_k}(t_k)-|h_k(t_k)|\ge a_k/C-C\nu_k.
\]
By (\ref{eq5_4_3}), we have $f_{k}(t_{k})<-\sqrt{a_k}/C$. Using (\ref{eq5_4_4}), applying Lemma \ref{lem2_2'} on $[s_k,t_k]$, we have $f_k(x)\le -\sqrt{a_k}/C$ on $[s_k,t_k]$. Using $|h_k(s_k)|\le C\nu_k^{1/2}$ and $|h_k(t_k)|\le C\nu_k$, applying Lemma \ref{lem2_6} on $[s_k, t_k]$, we have
\begin{equation}\label{eq5_4_12}
\max_{[-1+2a_k, x_k-\epsilon]}|h_k|\le \max_{[s_k,t_k]}|h_k| \le C\nu_k^{1/2}.
\end{equation}
Now we have that $|h_k(-1+2a_k)|\le Ca_k$ and $|h_k(-1)|=|\frac{1}{2}\tau^2_1(\nu_k,c_{k1})-2c_{k1}|\le Ca_k$. Notice that $f_k<0$ on $[-1,-1+2a_k]$ and $P_k(-1)\ge 0$. Using (\ref{eq5_4_10}), applying Lemma \ref{lem2_10} on $[-1,-1+2a_k]$ with $\alpha=1$, we have that
\begin{equation}\label{eq5_4_13}
\max_{[-1,-1+2a_k]}|h_k|\le Ca_k+C\nu_k/a_k\le C\nu_k^{1/2}.
\end{equation}
By (\ref{eq5_4_9}), (\ref{eq5_4_12}) and (\ref{eq5_4_13}), we have proved (\ref{eq5_4_1}) when $x_k+1\ge \epsilon/4$. Next, if $x_k+1<\epsilon/4$, similarly to (\ref{eq5_4_8}) we have $1/C\le f_k\le C$ on $\left(x_k+\epsilon/2, 1\right)$. Using this and (\ref{eq5_4_6}), applying Lemma \ref{lem2_6} on $(x_k+\epsilon/2, 1)$, (\ref{eq5_4_1}) is proved. \textbf{Case} 2: $c_1>0$, $c_2=0$, $c_3>-\frac{1}{2}c_1$. The proof is similar to that of Case 1. \textbf{Case} 3: $c_1=c_2=0$, $c_3>0$. Similarly to Case 1 we have $|h_k|\le C\nu_k^{1/2}$ on $[-1,0]\setminus[x_k-\epsilon, x_k+\epsilon]$, and similarly to Case 2 we have $|h_k|\le C\nu_k^{1/2}$ on $[0,1]\setminus[x_k-\epsilon, x_k+\epsilon]$. We have by now proved (\ref{eq5_4_1}). By (\ref{eq5_4_1}) and (\ref{eq_1}), for any $\epsilon>0$, there exists some constant $C>0$, depending only on $\epsilon$ and an upper bound of $|c|$, such that $|f'_k|\le C\nu_k^{-\frac{1}{2}}$ on $[-1+\epsilon, x_k-\epsilon]\cup [x_k+\epsilon, 1-\epsilon]$, so $|h'_k|=|f_kf'_k-P'_k|\le C\nu_k^{-\frac{1}{2}}$.
So we have
\begin{equation*}
||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{C^1([-1+\epsilon, x_k-\epsilon]\cup [x_k+\epsilon, 1-\epsilon])}\le C\nu_k^{-\frac{1}{2}}.
\end{equation*}
By interpolation, for any $x, y\in (-1+\epsilon, 1-\epsilon)$ and $0<\beta<1$,
\[
\frac{|h_k(x)-h_k(y)|}{|x-y|^{\beta}}\le 2||h_k||^{1-\beta}_{L^{\infty}(-1+\epsilon, 1-\epsilon)}||h_k'||^{\beta}_{L^{\infty}(-1+\epsilon, 1-\epsilon)}\le C\nu_k^{\frac{1}{2}(1-\beta)}\nu_k^{-\frac{1}{2}\beta}\le C\nu_k^{\frac{1}{2}-\beta}.
\]
We have
\begin{equation}\label{eq5_4_4_1}
||\frac{1}{2}U^2_{\nu_k, \theta}-P_{c_k}||_{C^{\beta}([-1+\epsilon, x_k-\epsilon]\cup [x_k+\epsilon, 1-\epsilon])}\le C\nu_k^{\frac{1}{2}-\beta}.
\end{equation}
Next, using (\ref{eq5_4_3}), (\ref{eq5_4_5}) and (\ref{eq5_4_1}), we then have (\ref{eq1_7_0}). Part (i) in this case follows in view of (\ref{eq5_4_1}) and (\ref{eq5_4_4_1}). Now we prove part (ii) in this case. If such $x_k$ exists and $x_k\to -1$ with $c_1=0$, or such $x_k$ does not exist with $c_2>0=c_1$, we can prove (\ref{eq1_7_2}) using similar arguments as those for part (ii) in \emph{``Proof of Theorem \ref{thm1_3} continued''} in Section 4. If such $x_k$ exists and $x_k\to 1$ with $c_2=0$, or such $x_k$ does not exist with $c_1>0=c_2$, we can similarly prove (\ref{eq1_7_3}). If such $x_k$ does not exist with $c_1=c_2=0$, we prove either (\ref{eq1_7_2}) or (\ref{eq1_7_3}). In this case, $f_k$ does not change sign on $(-1,1)$ and $P_c(-1)=P_c(1)=0$. If $f_k>0$ on $(-1,1)$ after passing to a subsequence, we have, by Theorem \ref{thm1_2_2}, $\limsup_{k\to \infty}||\frac{1}{2}(U^{\pm}_{\nu_k, \theta})^2-P_{k}||_{L^{\infty}(-1, 1)}=0$.
So for any $\epsilon_0>0$, there exists some $\epsilon>0$, such that $||P_k||_{L^{\infty}(-1,-1+2\epsilon)}+||P_k||_{L^{\infty}[1-2\epsilon, 1]}<\epsilon_0$, and $||(U^{\pm}_{\nu_k, \theta})^2||_{L^{\infty}(-1,-1+2\epsilon)}+||(U^{\pm}_{\nu_k, \theta})^2||_{L^{\infty}[1-2\epsilon, 1]}<\epsilon_0$. Notice $U_{\nu_k, \theta}^-\le f_k\le U^+_{\nu_k, \theta}$; we then have $||f_k-\sqrt{2P_k}||_{L^{\infty}(-1,-1+2\epsilon)}+||f_k-\sqrt{2P_k}||_{L^{\infty}[1-2\epsilon, 1]}<2\epsilon_0$. We also have $P_c\ge 1/C$ on $[-1+\epsilon, 1-\epsilon]$ and $f_k>0$ on $[-1+\epsilon, 1-\epsilon]$. By Corollary \ref{cor2_3}, we have $f_k\ge 1/C$ on $[-1+2\epsilon, 1-2\epsilon]$. Notice $|f_k(-1+2\epsilon)-\sqrt{2P_k(-1+2\epsilon)}|+|f_k(1-2\epsilon)-\sqrt{2P_k(1-2\epsilon)}|\le 2\epsilon_0$; by Corollary \ref{cor2_2} we have $||f_k-\sqrt{2P_k}||_{L^{\infty}[-1+2\epsilon, 1-2\epsilon]}<C\epsilon_0$. If $f_k<0$ on $(-1,1)$ after passing to a subsequence, we similarly obtain (\ref{eq1_7_3}). Part (ii) in this case is proved. The proof of Theorem \ref{thm1_3} is now completed: part (iii) is proved in Section 3, and parts (i) and (ii) follow from (iii), \emph{``Proof of Theorem \ref{thm1_3} continued''} in Section 4, and the above. \qed

\noindent{\emph{Proof of Theorem \ref{thm1_0}}}: For $0<\nu\le 1$ and $c\in \mathring{J}_{\nu}$, let
\[
u^{\pm}_{\nu,\theta}(c)=\frac{1}{\sin\theta}U_{\nu,\theta}^{\pm}(c),\quad u^{\pm}_{\nu,r}(c) =-\frac{d u_{\nu,\theta}^{\pm}}{d \theta} - u_{\nu,\theta}^{\pm} \cot\theta,
\]
and
\[
p^{\pm}_{\nu}(c)=-\frac{1}{2}\left(\frac{d^2 u^{\pm}_{\nu,r}(c)}{d\theta^2} + (\cot\theta - u^{\pm}_{\nu,\theta}(c)) \frac{d u^{\pm}_{\nu,r}(c)}{d\theta} + (u^{\pm}_{\nu,r}(c))^2 +(u^{\pm}_{\nu,\theta}(c))^2\right).
\]
By Theorem 1.1 of \cite{LLY2}, $\{(u_{\nu}^{\pm}(c), p_{\nu}^{\pm}(c))\}_{0<\nu\le 1}$ belong to $C^{0}(\mathring{J}_{\nu}\times (0,1], C^m(\mathbb{S}^2\setminus(B_{\epsilon}(S)\cup B_{\epsilon}(N))))$ for every integer $m\ge 0$. By Theorem \ref{thm1_1}, there exists some constant $C$, which depends only on $K, \epsilon$ and $m$, such that
\[
||U^{+}_{\nu, \theta}-\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}+||U^-_{\nu, \theta}+\sqrt{2P_{c}}||_{L^{\infty}(-1,1)}\le C\nu,
\]
and
\[
||U^+_{\nu, \theta}-\sqrt{2P_{c}}||_{C^m(-1,1-\epsilon)}+||U^-_{\nu, \theta}+\sqrt{2P_{c}}||_{C^m(-1+\epsilon,1)}\le C\nu.
\]
Theorem \ref{thm1_0}(i) follows from the above. Now we prove part (ii). By Theorem A, there exists a unique solution $U_{\theta}:=U_{\nu, \theta}(c,\theta_0)$ of (\ref{eq:NSE}) satisfying, with $x_0=\cos \theta_0$, that
\[
U_{\theta}(-1)=\tau_1(\nu,c_1)<0, \quad U_{\theta}(1)=\tau_2(\nu,c_2)>0, \quad U_{\theta}(x_0)=0.
\]
For every $\epsilon>0$, we have, by Theorem \ref{thm1_3}, that
\[
||U_{\nu, \theta}-\sqrt{2P_{c}}||_{C^m(x_0+\epsilon,1-\epsilon)}+||U_{\nu, \theta}+\sqrt{2P_{c}}||_{C^m(-1+\epsilon,x_0-\epsilon)}\le C\nu.
\]
The estimate in part (ii) follows from the above. \qed

\section{Proof of Theorem \ref{thm:BL:1}}\label{sec6}
In this section, we give the

\noindent{\emph{Proof of Theorem \ref{thm:BL:1}}}: Define
\begin{equation}\label{eq6_0}
w_k(x):=\sqrt{2P_{c_k}(x_k)} \tanh \Big( \frac{\sqrt{2P_{c_k}(x_k)} \cdot (x-x_k)}{2 (1-x_k^2)\nu_k} \Big).
\end{equation}
By computation, we know that $w_k(x_k)=0$ and
\begin{equation}\label{eq6_0_0}
\nu_k(1-x_k^2)w_k'+\frac{1}{2}w_k^2=P_k(x_k).
\end{equation}
{\it Step 1.} We prove
\begin{equation}\label{eq6_2_1}
|U_{\nu_k, \theta} - w_k| \le C\nu_k |\ln \nu_k|^{2}, \quad x_k-K\nu_k|\ln\nu_k|(1-x_k^2)<x< x_k+K\nu_k|\ln\nu_k|(1-x_k^2).
\end{equation}
Let $C$ denote a constant depending only on $c$, $K$ and $\hat{x}$ which may vary from line to line.
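The identity (\ref{eq6_0_0}) is a one-line consequence of $\tanh'=1-\tanh^2$; as a sanity check it can also be verified symbolically (a minimal sketch with \texttt{sympy}; the symbol names are ours):

```python
# Symbolic check of  nu*(1 - xk^2)*w' + w^2/2 = P  for the tanh profile
#   w(x) = sqrt(2P) * tanh( sqrt(2P)*(x - xk) / (2*(1 - xk^2)*nu) ),
# where P = P_{c_k}(x_k) > 0 is a constant, as in the definition of w_k above.
import sympy as sp

x, xk = sp.symbols('x x_k', real=True)
nu, P = sp.symbols('nu P', positive=True)

w = sp.sqrt(2*P) * sp.tanh(sp.sqrt(2*P)*(x - xk) / (2*(1 - xk**2)*nu))
residual = nu*(1 - xk**2)*sp.diff(w, x) + sp.Rational(1, 2)*w**2 - P
print(sp.simplify(residual))  # -> 0
```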
For convenience denote $f_k:=U_{\nu_k, \theta}$ and $P_k:=P_{c_k}$. By Lemma \ref{lem2_1} and Lemma \ref{lem2_4}, we have that
\begin{equation}\label{eq6_2_2}
0<f_k<C, \textrm{ in }(x_k, 1), \textrm{ and } -C<f_k<0, \textrm{ in }(-1,x_k).
\end{equation}
Let
\begin{equation}\label{eq6_2_4}
y := \frac{x-x_k}{\nu_k(1-x_k^2)}, \qquad \tilde{f}_k(y):=f_k(x), \quad \tilde{w}_k(y):=w_k(x).
\end{equation}
Then for $x_k-K\nu_k|\ln\nu_k|(1-x_k^2)\le x\le x_k+K\nu_k|\ln\nu_k|(1-x_k^2)$, we have $-K|\ln\nu_k|\le y\le K|\ln\nu_k|$. By $f_k(x_k)=0$ and (\ref{eq_1}), we know that $\tilde{f}_k(0)=0$ and
\begin{equation}\label{eq6_2_5}
\begin{aligned}
& (1-2x_k\nu_ky-\nu_k^2y^2(1-x_k^2)) \tilde{f}_k'(y) + 2 \nu_k (x_k + y \nu_k(1-x_k^2)) \tilde{f}_k(y) + \frac{1}{2} \tilde{f}_k^2(y) \\
= & P_{c_k}(x) = P_{c_k}(x_k) + P_{c_k}'(x_k)\nu_k y(1-x_k^2)+ \frac{1}{2} P_{c_k}''(x_k) \nu_k^2 y^2(1-x_k^2)^2.
\end{aligned}
\end{equation}
By (\ref{eq6_0}) and (\ref{eq6_0_0}), we have $\tilde{w}_k(0)=0$ and
\begin{equation}\label{eq6_2_6}
\tilde{w}_k'(y) + \frac{1}{2} \tilde{w}_k^2(y) = P_{c_k}(x_k).
\end{equation}
Set $g_k(y) := \tilde{f}_k(y) - \tilde{w}_k(y)$; then by (\ref{eq6_2_5}) and (\ref{eq6_2_6}), we have $g_k(0)=0$ and
\begin{equation}\label{eq6_2_7}
g_k'(y) + h_k(y) g_k(y) = H_k(y),
\end{equation}
where $h_k(y) = \frac{1}{2} (\tilde{f}_k(y) + \tilde{w}_k(y))$ and
\[
\begin{aligned}
H_k(y) = & P_{c_k}'(x_k) \nu_ky(1-x_k^2) + \frac{1}{2} P_{c_k}''(x_k) \nu^2_k y^2(1-x_k^2)^2 - 2\nu_k (x_k + y \nu_k(1-x_k^2))\tilde{f}_k(y) \\
& + \tilde{f}'_k(y)(2x_k\nu_ky + \nu^2_k y^2(1-x_k^2)).
\end{aligned}
\]
By (\ref{eq6_0}) and (\ref{eq6_2_2}), we have $|h_k(y)|\le C$ for $|y|\le K|\ln\nu_k|$. By (\ref{eq6_2_5}) and (\ref{eq6_2_2}), we have $|\tilde{f}_k'(y)|\le C$ for $|y|\le K|\ln\nu_k|$ and $k\gg 1$. So $|H_k|\le C \nu_k|\ln \nu_k|$ for $|y|\le K|\ln \nu_k|$ and $k\gg 1$.
Hence, from the estimates of $h_k$, $H_k$, (\ref{eq6_2_7}) and the fact that $g_k(0)=0$, we have
\[
|g_k(y)| = e^{- \int_{0}^{y} h_k(s)ds} \bigg| \int_0^y e^{\int_{0}^{s} h_k(t)dt} H_k(s) ds \bigg| \le C \nu_k |\ln \nu_k|^2, \qquad |y|\le K|\ln \nu_k|.
\]
Therefore, the estimate (\ref{eq6_2_1}) is proved.

{\it Step 2. } We prove that there exist some $K>0$ and small $\epsilon>0$, independent of $k$, such that
\begin{equation}\label{eq6_1_01}
|U_{\nu_k, \theta} + \sqrt{2P_{c_k}} | \le C\nu_k^{\alpha(c)} |\ln \nu_k|^{2\kappa(c)}, \quad b_k\le x\le x_k-K\nu_k|\ln\nu_k|(1-x_k^2),
\end{equation}
\begin{equation}\label{eq6_1_02}
|U_{\nu_k, \theta} - \sqrt{2P_{c_k}} | \le C\nu_k^{\alpha(c)} |\ln \nu_k|^{2\kappa(c)}, \quad x_k+K\nu_k|\ln\nu_k|(1-x_k^2)\le x\le d_k,
\end{equation}
where $b_k=\max\{-1,x_k-\epsilon\}$ and $d_k=\min\{1,x_k+\epsilon\}$. It is sufficient to prove (\ref{eq6_1_01}), since the other estimate can be obtained similarly. We first prove that (\ref{eq6_1_01}) holds at the endpoints $x=b_k$ and $x=b_k':=x_k-K\nu_k|\ln\nu_k|(1-x_k^2)$. For convenience denote $f_k:=U_{\nu_k, \theta}$, $P_k=P_{c_k}$ and $h_k=\frac{1}{2}f_k^2-P_k$. Since $x_k\to\hat{x}$ and $P_c(\hat{x})>0$, we can choose $\epsilon>0$ small, such that
\begin{equation}\label{eq6_1_2}
P_k\ge 1/C, \qquad x\in (x_k-2\epsilon, x_k+2\epsilon).
\end{equation}
By Theorem \ref{thm1_3} (i), we have $|h_k| \le C \nu_k^{\alpha(c)}$ for $-1\le x\le x_k-\epsilon$, where $\alpha(c)$ is given by (\ref{eq_alpha}). Using this, (\ref{eq6_2_2}) and (\ref{eq6_1_2}), we have that
\begin{equation}\label{eq6_1_3_2}
\Big| f_k (b_k)+ \sqrt{2P_k(b_k)} \Big| \le C \nu_k^{\alpha(c)}.
\end{equation}
Let $K$ be a positive constant to be determined later.
It is easy to see that
\[
\begin{split}
|w_k(b_k') + \sqrt{2P_{k}(x_k)}| \le & C e^{ -K |\ln \nu_k| \frac{\sqrt{2P_{k}(x_k)} }{2}}=C \nu_k^{\frac{K}{2}\sqrt{2P_{k}(x_k)} } \le C\nu_k,
\end{split}
\]
provided $K\sqrt{2P_{k}(x)} \ge 2$ for all $x\in[b_k,d_k]$ and $k$ is sufficiently large. Thus by Step 1, we have
\begin{equation}\label{eq6_1_15}
\begin{split}
& \Big| f_k (b_k') + \sqrt{2P_{k}(b_k')} \Big| \\
\le & |f_k (b_k')- w_k(b_k')| + |w_k(b_k')+ \sqrt{2P_{k}(x_k)}| + \Big| \sqrt{2P_{k}(x_k)} - \sqrt{2P_{k}(b_k')} \Big|\\
\le & C\nu_k|\ln\nu_k|^2.
\end{split}
\end{equation}
By (\ref{eq6_1_2}) and (\ref{eq6_1_15}), $f_k (b_k') \le -1/C$. Using this, (\ref{eq6_1_2}) and (\ref{eq6_2_2}), and applying Lemma \ref{lem2_2'} on $[b_k, b_k']$, we have
\begin{equation}\label{eq6_1_16}
-C\le f_k\le -1/C, \quad b_k \le x\le b_k'.
\end{equation}
By (\ref{eq6_1_3_2}), (\ref{eq6_1_15}) and (\ref{eq6_1_16}), applying Corollary \ref{cor2_2} on $[b_k,b_k']$, we know that (\ref{eq6_1_01}) holds on $[b_k,b_k']$. A similar argument implies (\ref{eq6_1_02}). Theorem \ref{thm:BL:1} follows from the above two steps and Theorem \ref{thm1_3}. \qed

\FloatBarrier

\section{Illustrations on the interior transition layer}\label{sec:illu}

In the Jeffery-Hamel flows, i.e. two-dimensional plane flows between two non-parallel walls, a boundary layer occurs if we impose no-slip boundary conditions in a nozzle. For axisymmetric $(-1)$-homogeneous solutions in the three-dimensional case, we consider a cone region $\Omega = \{(r,\theta,\phi)\in \mathbb{R}^+\times[0,\pi]\times[0,2\pi): 0\le\theta<\arccos x_0<\pi\}$ for some $x_0\in(-1,1)$; then $\partial \Omega$ corresponds to $x=x_0$. We consider solutions in $\Omega$ and impose no-slip boundary conditions on $\partial \Omega$. Since we are considering a first order differential equation of $U_\theta$, only one no-slip boundary condition can be imposed: $U_\theta|_{\partial \Omega}=0$.
As shown in \cite{LLY2}, there exist solutions $U=(U_{\theta,i},0)$ of the Navier-Stokes equations with viscosity $\nu_i\to 0$ satisfying
\begin{equation}\label{eq:ex:w:bc}
\left\{
\begin{aligned}
\nu_i (1-x^2) U_\theta' + 2 \nu_i x U_\theta + \frac{1}{2} U_\theta^2 & = P_c(x) = \mu (x-\xi)^2, \\
U_\theta (x_0) & = 0,
\end{aligned}
\right.
\end{equation}
where $P_c(x)$ is a quadratic polynomial with $P_c(\xi)=P_c'(\xi)=0$, and $\xi, x_0\in(-1,1)$ are given constants with $\xi\not=x_0$. To illustrate the behavior of homogeneous solutions, we take, for example, $P_c(x)=2(x-2/3)^2$, $x_0=0$, $\xi=2/3$, and a sequence $\nu_i\to 0$. The solutions $U_{\theta,i}$ are illustrated in Figure \ref{fig:1} for $\nu=1,1/8,1/20,1/50$, respectively. The boundary condition in (\ref{eq:ex:w:bc}) and Theorem \ref{thm1_3} imply that an interior layer occurs in a neighborhood of $x_0$. We can see in the figure that
\begin{equation}\label{eq:limit}
U_\theta\to -\sqrt{2P_c(x)}, \; x\in (-1,x_0-\epsilon),\qquad U_\theta\to +\sqrt{2P_c(x)}, \; x\in (x_0+\epsilon,1).
\end{equation}
Since $c_3=c_3^*(c_1,c_2)$ in this example, we know by Theorem 1.8 that in a neighborhood of $x=\xi$, where $P_c$ vanishes, the convergence rate is $|U_{\theta,i}-\sqrt{2P_c}|\sim \nu_i^{2/3}$. The streamlines of $U_{\theta,i}$ are illustrated in Figure \ref{fig:2} for $\nu=1,1/8,1/20,1/50$, respectively. By (\ref{eq:limit}), $U_{\theta,i}\to \pm 2|x-2/3|=V_\theta$, whose streamlines are illustrated in Figure \ref{fig:3}. Notice that $\pm\sqrt{2P_c(x)}$ are not smooth solutions of the Euler equations, since they fail to be differentiable at $x=\xi$. On the other hand, $\pm 2(x-2/3)\in C^\infty(-1,1)$ are smooth solutions of the Euler equations, whose streamlines are illustrated in Figure \ref{fig:4}.
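The transition-layer behavior described above can also be reproduced numerically. The following sketch is our own illustration (not the script used to generate the figures): it integrates the first equation of (\ref{eq:ex:w:bc}) with $P_c(x)=2(x-2/3)^2$ and $x_0=0$ by an explicit Euler scheme, marching away from $x_0$ in each direction, so that the outer branch $\mathrm{sign}(x-x_0)\sqrt{2P_c}$ is dynamically stable along the integration.

```python
import math

# Hypothetical illustration (not the paper's script): explicit Euler
# integration of
#     nu*(1-x^2)*U' + 2*nu*x*U + 0.5*U^2 = P_c(x),   U(x_0) = 0,
# with P_c(x) = 2*(x - 2/3)^2 and x_0 = 0, as in the example above.

def P_c(x):
    return 2.0 * (x - 2.0 / 3.0) ** 2

def solve(nu, x_end, x0=0.0, h=1e-4):
    """March from x0 towards x_end; marching away from the layer keeps
    the outer branch sign(x - x0)*sqrt(2 P_c) dynamically stable."""
    sign = 1.0 if x_end > x0 else -1.0
    x, U = x0, 0.0
    while sign * (x_end - x) > 0.0:
        dU = (P_c(x) - 0.5 * U * U - 2.0 * nu * x * U) / (nu * (1.0 - x * x))
        U += sign * h * dU
        x += sign * h
    return U

nu = 1.0 / 50.0
U_right = solve(nu, 0.5)    # near +sqrt(2*P_c(0.5)) = 1/3, up to O(nu)
U_left = solve(nu, -0.5)    # near -sqrt(2*P_c(-0.5)) = -7/3, up to O(nu)
```

Decreasing $\nu$ sharpens the jump near $x_0$ and drives $U$ closer to $\pm\sqrt{2P_c}$ on either side, consistent with (\ref{eq:limit}).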
\begin{figure}
\begin{center}
\captionsetup{width=.87\linewidth}
\includegraphics[width=12cm]{inviscid_limit.eps}
\caption{The graph of $U_\theta$ for $\nu=1, 1/8, 1/20, 1/50$, respectively, when $P_c=2(x-2/3)^2$ and $U_\theta(0)=0$. }
\label{fig:1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{nueq1.eps}
\caption{$\nu=1$}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{nueq1over8.eps}
\caption{$\nu=1/8$}
\end{subfigure}
\newline
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{nueq1over20.eps}
\caption{$\nu=1/20$}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{nueq1over50.eps}
\caption{$\nu=1/50$}
\end{subfigure}
\captionsetup{width=.87\linewidth}
\caption{The streamlines of solutions of the Navier-Stokes equations for $\nu=1, 1/8, 1/20, 1/50$, respectively. }
\label{fig:2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{posiSqrtPc.eps}
\caption{$V_\theta=\sqrt{2P_c}$}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{negaSqrtPc.eps}
\caption{$V_\theta=-\sqrt{2P_c}$}
\end{subfigure}
\caption{The streamlines of $\pm\sqrt{2P_c}$, the limit of $U_{\theta,i}$ as $\nu_i\to 0$.
}
\label{fig:3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{Vpositive.eps}
\caption{$V_\theta=2(x-2/3)$}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=7cm]{Vnegative.eps}
\caption{$V_\theta=-2(x-2/3)$}
\end{subfigure}
\caption{The streamlines of the smooth solutions of the Euler equations. }
\label{fig:4}
\end{center}
\end{figure}

\FloatBarrier

\section{Appendix}\label{sec7}

\begin{lem}\label{lem7_1}
Let $c\in \mathbb{R}^3$. Then
(i) $P_c\ge 0$ on $[-1,1]$ if and only if $c\in J_0$.
(ii) $P_c>0$ on $[-1,1]$ if and only if $c\in \mathring{J}_0$.
(iii) $\min_{[-1,1]}P_c=0$ if and only if $c\in \partial J_0$.
(iv) $P_c>0$ in $(-1,1)$ if and only if $c\in \mathring{J}_0\cup \partial'J_0$.
\end{lem}
\begin{proof}
(i) For $c\in J_0$, we have $c_1,c_2\ge 0$ and $c_3\ge c_3^*(c_1,c_2)$, and therefore, using (\ref{eqP_1}) and (\ref{eqP_2}), $P_c\ge P_{(c_1,c_2)}^*\ge 0$ on $[-1,1]$. On the other hand, if $P_c\ge 0$ on $[-1,1]$, we have $c_1=\frac{1}{2}P_c(-1)\ge 0$ and $c_2=\frac{1}{2}P_c(1)\ge 0$. If $c_1,c_2>0$, then $\bar{x}:=\frac{\sqrt{c_1}-\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}} \in (-1,1)$ and, using (\ref{eqP_1}), $0\le P_c(\bar{x})=(c_3-c_3^*(c_1,c_2))(1-\bar{x}^2)$. Thus $c_3\ge c_3^*(c_1,c_2)$ and $c\in J_0$. If $c_1=0$, then $P_c(-1)=0$ and therefore $c_2+2c_3=P'_c(-1)\ge 0$. So $c\in J_0$. If $c_2=0$, then $P_c(1)=0$ and therefore $-c_1-2c_3=P'_c(1)\le 0$. So $c\in J_0$. Part (i) is proved.

(ii) If $c\in \mathring{J}_0$, then $c_1,c_2>0$, $c_3>c_3^*(c_1,c_2)$, and $\bar{x} \in (-1,1)$. The positivity of $P_c$ on $[-1,1]$ then follows from the expression (\ref{eqP_2}). On the other hand, if $P_c>0$ on $[-1,1]$, then $c_1=\frac{1}{2}P_c(-1)> 0$, $c_2=\frac{1}{2}P_c(1)> 0$, and $\bar{x}\in (-1,1)$.
It follows, using (\ref{eqP_2}), that $0< P_c(\bar{x})=(c_3-c_3^*(c_1,c_2))(1-\bar{x}^2)$, and therefore $c_3-c_3^*(c_1,c_2)>0$. We have proved that $c\in \mathring{J}_0$. Part (ii) is proved.

Part (iii) is a consequence of (i) and (ii).

(iv) If $c\in \mathring{J}_0$, then we know from (ii) that $P_c>0$ on $[-1,1]$. If $c_1=c_2=0$ and $c_3>0$, then $P_c(x)=c_3(1-x^2)>0$ in $(-1,1)$. If $c_1>0$, $c_2=0$, $c_3\ge -c_1/2$, then $P_c(x)\ge c_1(1-x)-\frac{1}{2}c_1(1-x^2)=\frac{c_1}{2}(1-x)^2>0$ in $(-1,1)$. If $c_1=0$, $c_2>0$, $c_3\ge -c_2/2$, then $P_c(x)\ge c_2(1+x)-\frac{1}{2}c_2(1-x^2)=\frac{c_2}{2}(1+x)^2>0$ in $(-1,1)$. On the other hand, if $P_c>0$ in $(-1,1)$, then $c\in J_0$ by part (i). We only need to prove that $c$ does not belong to $\partial J_0\setminus\partial'J_0$. Indeed, if $c\in \partial J_0\setminus\partial'J_0$, then either $c=0$, or $c_1,c_2>0$ and $c_3=c_3^*(c_1,c_2)$. Clearly $c$ cannot be $0$. In the latter case, we know from (\ref{eqP_1}) that $P_c=P_{(c_1,c_2)}^*$ has a zero at $\bar{x} \in (-1,1)$. We have proved (iv).
\end{proof}

We then have

\begin{lem}\label{lem2_9}
For any $0<\nu\le 1$ and $c\in J_{\nu}$, there exists some constant $C$, depending only on an upper bound of $|c|$, such that
\begin{equation*}
P_c(x)\ge -C\nu, \quad \forall\, -1\le x \le 1.
\end{equation*}
\end{lem}
\begin{proof}
For $c\in J_{\nu}$, we have $c_1\ge -\nu^2$, $c_2\ge -\nu^2$ and $c_3\ge \bar{c}_3:=\bar{c}_3(c_1,c_2;\nu)$. Let $\tilde{c}_1=c_1+\nu^2$, $\tilde{c}_2=c_2+\nu^2$, and $\tilde{c}_3^*=-\frac{1}{2}(\tilde{c}_1+2\sqrt{\tilde{c}_1\tilde{c}_2}+\tilde{c}_2)$. By (\ref{eqP_1}), $P_{(\tilde{c}_1, \tilde{c}_2, \tilde{c}_3^*)}\ge 0$ in $[-1,1]$. Since $c_3\ge \bar{c}_3\ge \tilde{c}_3^*-C\nu$, we have
\[
P_c \ge P_{(\tilde{c}_1,\tilde{c}_2,c_3)}-C\nu^2 \ge P_{(\tilde{c}_1,\tilde{c}_2, \tilde{c}_3^*)}-C\nu \ge -C\nu, \quad \textrm{ in }[-1,1].
\]
\end{proof}

\FloatBarrier

\begin{thebibliography}{99}
\bibitem{C}S. I.
Chernyshenko, Asymptotics of a steady separated flow around a body with large Reynolds numbers. (Russian) Prikl. Mat. Mekh. 52 (1988), no. 6, 958-966; translation in J. Appl. Math. Mech. 52 (1988), no. 6, 746-753 (1990).
\bibitem{CC}S. I. Chernyshenko and Ian P. Castro, High-Reynolds-number weakly stratified flow past an obstacle, J. Fluid Mech. 317 (1996), 155-178.
\bibitem{CK} M. Cannone and G. Karch, Smooth or singular solutions to the Navier-Stokes system, J. Differential Equations 197 (2004), 247-274.
\bibitem{Constantin-Vicol} P. Constantin and V. Vicol, Remarks on high Reynolds numbers hydrodynamics and the inviscid limit, J. Nonlinear Sci. 28 (2018), 711-724.
\bibitem{DM} R. DiPerna and A. J. Majda, Oscillations and concentrations in weak solutions of the incompressible fluid equations, Comm. Math. Phys. 108 (1987), 667-689.
\bibitem{DN}T. Drivas and H. Nguyen, Remarks on the emergence of weak Euler solutions in the vanishing viscosity limit, arXiv:1808.01014 [math.AP], 12 Oct 2018.
\bibitem{F}B. F\"{o}rnberg, A numerical study of steady viscous flow past a circular cylinder, J. Fluid Mech. 98 (1980), no. 4.
\bibitem{Giaquinta} M. Giaquinta and E. Giusti, On the regularity of the minima of variational integrals, Acta Math. 148 (1982), 285-298.
\bibitem{G} M. A. Goldshtik, A paradoxical solution of the Navier-Stokes equations, Prikl. Mat. Mekh. 24 (1960), 610-621; translation in J. Appl. Math. Mech. (USSR) 24 (1960), 913-929.
\bibitem{Guo-Nguyen} Y. Guo and T. Nguyen, Prandtl boundary layer expansions of steady Navier-Stokes flows over a moving plate, Ann. PDE 3 (2017), no. 1, Art. 10, 58 pp.
\bibitem{Iyer1} S. Iyer, Steady Prandtl boundary layer expansions over a rotating disk, Arch. Ration. Mech. Anal. 224 (2017), 421-469.
\bibitem{Iyer2} S. Iyer, Global steady Prandtl expansion over a moving boundary, arXiv:1609.05397v1 [math.AP], 17 Sep 2016.
\bibitem{KP} G. Karch and D. Pilarczyk, Asymptotic stability of Landau solutions to Navier-Stokes system, Arch. Ration.
Mech. Anal. 202 (2011), 115-131.
\bibitem{KPS}G. Karch, D. Pilarczyk and M. E. Schonbek, $L^2$-asymptotic stability of singular solutions to the Navier-Stokes system of equations in $\mathbb{R}^3$, J. Math. Pures Appl. 108 (2017), 14-40.
\bibitem{KS} A. Korolev and V. \v{S}ver\'{a}k, On the large-distance asymptotics of steady state solutions of the Navier-Stokes equations in 3D exterior domains, Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire 28 (2011), 303-313.
\bibitem{Landau} L. Landau, A new exact solution of Navier-Stokes equations, Dokl. Akad. Nauk SSSR 43 (1944), 299-301.
\bibitem{LLY1} L. Li, Y.Y. Li and X. Yan, Homogeneous solutions of stationary Navier-Stokes equations with isolated singularities on the unit sphere. I. One singularity, Arch. Ration. Mech. Anal. 227 (2018), 1091-1163.
\bibitem{LLY2} L. Li, Y.Y. Li and X. Yan, Homogeneous solutions of stationary Navier-Stokes equations with isolated singularities on the unit sphere. II. Classification of axisymmetric no-swirl solutions, J. Differential Equations 264 (2018), 6082-6108.
\bibitem{Luo-Shvydkoy} X. Luo and R. Shvydkoy, 2D homogeneous solutions to the Euler equation, Comm. Partial Differential Equations 40 (2015), 1666-1687.
\bibitem{Maekawa} Y. Maekawa, On the inviscid limit problem of the vorticity equations for viscous incompressible flows in the half-plane, Comm. Pure Appl. Math. 67 (2014), 1045-1128.
\bibitem{Masmoudi} N. Masmoudi, Remarks about the inviscid limit of the Navier-Stokes system, Comm. Math. Phys. 270 (2007), 777-788.
\bibitem{MT} H. Miura and T.-P. Tsai, Point singularities of 3D stationary Navier-Stokes flows, J. Math. Fluid Mech. 14 (2012), 33-41.
\bibitem{PP1}A. F. Pillow and R. Paull, Conically similar viscous flows. Part 1. Basic conservation principles and characterization of axial causes in swirl-free flow, Journal of Fluid Mechanics 155 (1985), 327-341.
\bibitem{PP2}A. F. Pillow and R. Paull, Conically similar viscous flows. Part 2.
One-parameter swirl-free flows, Journal of Fluid Mechanics 155 (1985), 343-358.
\bibitem{PP3}A. F. Pillow and R. Paull, Conically similar viscous flows. Part 3. Characterization of axial causes in swirling flow and the one-parameter flow generated by a uniform half-line source of kinematic swirl angular momentum, Journal of Fluid Mechanics 155 (1985), 359-379.
\bibitem{SC1} M. Sammartino and R. Caflisch, Zero viscosity limit for analytic solutions of the Navier-Stokes equation on a half-space. I. Existence for Euler and Prandtl equations, Comm. Math. Phys. 192 (1998), 433-461.
\bibitem{SC2} M. Sammartino and R. Caflisch, Zero viscosity limit for analytic solutions of the Navier-Stokes equation on a half-space. II. Construction of the Navier-Stokes solution, Comm. Math. Phys. 192 (1998), 463-491.
\bibitem{Serrin} J. Serrin, The swirling vortex, Philos. Trans. R. Soc. Lond. Ser. A, Math. Phys. Sci. 271 (1972), 325-360.
\bibitem{Shvydkoy} R. Shvydkoy, Homogeneous solutions to the 3D Euler system, Trans. Amer. Math. Soc. 370 (2018), 2517-2535.
\bibitem{SL} N. A. Slezkin, On an exact solution of the equations of viscous flow, Uch. zap. MGU, no. 2, 89-90, 1934.
\bibitem{SQ}H. B. Squire, The round laminar jet, Quart. J. Mech. Appl. Math. 4 (1951), 321-329.
\bibitem{Sverak}V. \v{S}ver\'{a}k, On Landau's solutions of the Navier-Stokes equations, Problems in mathematical analysis. No. 61, J. Math. Sci. 179 (2011), 208-228; arXiv:math/0604550.
\bibitem{Temam-Wang} R. Temam and X. Wang, On the behavior of the solutions of the Navier-Stokes equations at vanishing viscosity, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 25 (1997), no. 3-4, 807-828 (1998).
\bibitem{TianXin} G. Tian and Z.P. Xin, One-point singular solutions to the Navier-Stokes equations, Topol. Methods Nonlinear Anal. 11 (1998), 135-145.
\bibitem{W}C. Y. Wang, Exact solutions of the steady state Navier-Stokes equation, Annu. Rev. Fluid Mech. 23 (1991), 159-177.
\bibitem{Wang} X.
Wang, A Kato type theorem on zero viscosity limit of Navier-Stokes flows, Indiana Univ. Math. J. 50 (2001), Special Issue, 223-241. \bibitem{Y}V. I. Yatseyev, On a class of exact solutions of the equations of motion of a viscous fluid, 1950. \end{thebibliography} \end{document}
\begin{document} \title{ \rule{\linewidth}{2pt}\\ {\textbf{ Unveiling the Sampling Density \\ in Non-Uniform Geometric Graphs }} \\ \rule{\linewidth}{.5pt} } \author[1, 2]{Raffaele~Paolino \thanks{Correspondence to Raffaele Paolino at \href{mailto:[email protected]}{[email protected]}}} \author[3]{Aleksandar~Bojchevski} \author[2, 4]{Stephan~Günnemann} \author[1, 2, 5]{Gitta~Kutyniok} \author[6]{Ron~Levie} \affil[1]{Department of Mathematics, Ludwig Maximilian University of Munich} \affil[2]{Munich Center for Machine Learning (MCML)} \affil[3]{CISPA Helmholtz Center for Information Security} \affil[4]{Department of Computer Science \& Munich Data Science Institute, Technical University of Munich} \affil[5]{Department of Physics and Technology, University of Tromsø} \affil[6]{Faculty of Mathematics, Technion – Israel Institute of Technology} \renewcommand\Authands{ and } \maketitle \begin{abstract} A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius. Currently, the literature mostly focuses on uniform sampling and constant neighborhood radius. However, real-world graphs are likely to be better represented by a model in which the sampling density and the neighborhood radius can both vary over the latent space. For instance, in a social network communities can be modeled as densely sampled areas, and hubs as nodes with larger neighborhood radius. In this work, we first perform a rigorous mathematical analysis of this (more general) class of models, including derivations of the resulting graph shift operators. The key insight is that graph shift operators should be corrected in order to avoid potential distortions introduced by the non-uniform sampling. Then, we develop methods to estimate the unknown sampling density in a self-supervised fashion. 
Finally, we present exemplary applications in which the learnt density is used to 1) correct the graph shift operator and improve performance on a variety of tasks, 2) improve pooling, and 3) extract knowledge from networks. Our experimental findings support our theory and provide strong evidence for our model.
\end{abstract}

\section{Introduction}

Graphs are mathematical objects used to represent relationships among entities. Their use is ubiquitous, ranging from social networks to recommender systems, from protein-protein interactions to functional brain networks. Despite their versatility, their non-Euclidean nature makes graphs hard to analyze. For instance, the indexing of the nodes is arbitrary, there is no natural definition of orientation, and neighborhoods can vary in size and topology. Moreover, it is not clear how to compare a general pair of graphs since they can have a different number of nodes. Therefore, new ways of thinking about graphs were developed by the community. One approach is proposed in graphon theory \citep{lovaszLargeNetworksGraph2012}: graphs are sampled from continuous graph models called \emph{graphons}, and any two graphs of any size and topology can be compared using certain metrics defined in the space of graphons. A \emph{geometric graph} is an important case of a graph sampled from a graphon. In a geometric graph, a set of points is uniformly sampled from a metric-measure space, and every pair of points is linked if their distance is less than a specified neighborhood radius. Therefore, a geometric graph inherits a geometric structure from its latent space that can be leveraged to perform rigorous mathematical analysis and to derive computational methods. Geometric graphs have a long history, dating back to the 60s \citep{gilbertRandomPlaneNetworks1961}. They have been extensively used to model complex spatial networks \citep{barthelemySpatialNetworks2011}.
One of the first models of geometric graphs is the \emph{random geometric graph} \citep{penroseRandomGeometricGraphs2003b}, where the latent space is a Euclidean unit square. Various generalizations and modifications of this model have been proposed in the literature, such as \emph{random rectangular graphs} \citep{estradaRandomRectangularGraphs2015}, \emph{random spherical graphs} \citep{allen-perkinsRandomSphericalGraphs2018}, and \emph{random hyperbolic graphs} \citep{krioukovHyperbolicGeometryComplex2010}. Geometric graphs are particularly useful since they share properties with real-world networks. For instance, random hyperbolic graphs are \emph{small-world} and \emph{scale-free}, with \emph{high clustering} \citep{papadopoulosGreedyForwardingDynamic2010, gugelmannRandomHyperbolicGraphs2012}. The small-world property asserts that the distance between any two nodes is small, even if the graph is large. The scale-free property is the description of the degree sequence as a heavy-tailed distribution: a small number of nodes have many connections, while the rest have small neighborhoods. These two properties are intimately related to the presence of \emph{hubs} -- nodes with a considerable number of neighbors -- while the high clustering is related to the network's community structure. However, standard geometric graph models focus mainly on uniform sampling, which does not describe real-world networks well. For instance, in location-based social networks, the spatial distribution of nodes is rarely uniform because people tend to congregate around the city centers \citep{choFriendshipMobilityUser2011,wangUnderstandingSpatialConnectivity2009}. In online communities such as the LiveJournal social network, non-uniformity arises since the probability of befriending a particular person is inversely proportional to the number of closer people \citep{huNavigationNonuniformDensity2011,liben-nowellGeographicRoutingSocial2005}.
In a WWW network, there are more pages (and thus nodes) for popular topics than obscure ones. In social networks, different demographics (age, gender, ethnicity, etc.) may join a social media platform at different rates. For surface meshes, specific locations may be sampled more finely, depending on the required level of detail. The imbalance caused by non-uniform sampling could affect the analysis and lead to biased results. For instance, \citet{janssenNonuniformDistributionNodes2016} show that incorrectly assuming uniform density consistently overestimates the node distances while using the (estimated) density gives more accurate results. Therefore, it is essential to assess the sampling density, which is one of the main goals of this paper. Barring a few exceptions, non-uniformity is rarely considered in geometric graphs. \citet{iyerNonuniformRandomGeometric2012} study a class of non-uniform random geometric graphs where the radii depend on the location. \citet{martinez-martinezNonuniformRandomGraphs2022} study non-uniform graphs on the plane with the density functions specified in polar coordinates. \citet{prattTemporalConnectivityFinite2018} consider temporal connectivity in finite networks with non-uniform measures. In all of these works, the focus is on (asymptotic) statistical properties of the graphs, such as the average degree and the number of isolated nodes. \begin{comment} Graphs are mathematical objects used to represent relationships among entities. Their use is ubiquitous, ranging from social networks to recommender systems, from protein-to-protein interactions to functional brain networks. Despite their versatility, some challenges arise when signal analysis is performed on them, primarily caused by their non-euclidean nature: the lack of a well-posed definition of direction, the arbitrariness in the order of nodes, the variability of neighborhoods in terms of size and shape. 
Those challenges are especially cumbersome when dealing with a set of different graphs since the possibly different number of nodes could prevent us from using their algebraic characterization for comparisons. Therefore, new ways of thinking about graphs were developed by the community. A possible solution is represented by emph{geometric graphs} (or emph{spatial networks}). In a geometric graph, a set of points is uniformly sampled from a region of a metric space, and any two points are linked if their distance is less than a specified neighborhood radius. Therefore, a geometric graph inherits from its latent space an important geometric structure that can be leveraged in order to address the aforementioned problems. Geometric graphs have a long history, dating back to the 60s \citep{gilbertRandomPlaneNetworks1961}, and they have been extensively used to model complex spatial networks \citep{barthelemySpatialNetworks2011}. One of the first models is the emph{random geometric graph} \citep{penroseRandomGeometricGraphs2003b}, where the latent space is a Euclidean unit square. Various generalizations and modifications of the previous model have been proposed, such as emph{random rectangular graphs} \citep{estradaRandomRectangularGraphs2015}, emph{random spherical graphs} \citep{allen-perkinsRandomSphericalGraphs2018}, and emph{random hyperbolic graphs} \citep{krioukovHyperbolicGeometryComplex2010} for which the underlying spaces are respectively a unit rectangle, a hypersphere and a hyperbolic disk. Geometric graphs are particularly useful since they can be easily generalized to model properties shared by real-world networks. Real-world networks are thought to be emph{small world} and emph{scale free}. The former is the property for which, despite a large number of nodes, any pair of nodes are just a few edges distant. The latter is the empirical observation that the degree follows a power law distribution. 
Intuitively, just a little number of nodes has many connections, while the majority of them have small neighborhoods. These two properties are intimately related to the presence of emph{hubs} which are nodes with a considerable amount of connections. Some random generative models specifically aim to replicate such features: the Watts-Strogatz model \citep{wattsCollectiveDynamicsSmallworld1998a} creates small-world networks adding long-range connections, but it fails to exhibit a plausible degree distribution; the Barabási-Albert model \citep{albertStatisticalMechanicsComplex2002} can reproduce both features using preferential attachment. In a geometric graph, a hub can be modeled as a node with a larger neighborhood radius. Real-world networks also show a community structure (emph{high clustering}) and a tendency for similar nodes to be connected (emph{assortativity}). The stochastic block model \citep{hollandStochasticBlockmodelsFirst1983} can model the above-mentioned properties partitioning the nodes in communities and specifying the inter- and intra-community linking probabilities. A geometric graph reproduces such properties in a different fashion: assortativity is intrinsically modeled by the fact that near (hence, similar) points are connected, while communities are modeled as densely sampled areas. However, the standard geometric graph model focuses mainly on uniform sampling, which is very unlikely to be the case for real-world networks. For instance, in location-based social networks, the spatial distribution of nodes is rarely uniform because people tend to congregate around the city centers \citep{choFriendshipMobilityUser2011,wangUnderstandingSpatialConnectivity2009}. In online communities such as the LiveJournal social network, non-uniformity arises since the probability of befriending a particular person is inversely proportional to the number of closer people \citep{huNavigationNonuniformDensity2011,liben-nowellGeographicRoutingSocial2005}. 
In a WWW network, there are more pages (and thus nodes) for popular topics compared to obscure topics. In social networks, different demographics (age, gender, ethnicity, \dots) may join at different rates. For surface meshes, specific locations may be sampled more finely, depending on the required level of detail. The imbalance caused by non-uniform sampling could affect the analysis and lead to biased results. For instance, \citet{janssenNonuniformDistributionNodes2016} shows that incorrectly assuming uniform density consistently overestimates the node distances while using the (estimated) density gives more accurate results. Therefore, it is essential to assess the sampling density, which is one of the main goals of this paper. Barring a few exceptions, non-uniformity is rarely considered. \citet{iyerNonuniformRandomGeometric2012} study a class of non-uniform random geometric graphs where the radii depend on the location. \citet{martinez-martinezNonuniformRandomGraphs2022} study non-uniform graphs on the plane with the density functions specified in polar coordinates. \citet{prattTemporalConnectivityFinite2018} consider temporal connectivity in finite networks with non-uniform measures. \citet{janssenNonuniformDistributionNodes2016} study the behavior of the Spatial Preferential Attachment (SPA) model with non-uniform distribution of nodes. In all these works, the focus is on simple (asymptotic) topological properties of the graphs, such as the average degree and the number of isolated nodes. In our work, we focus on deriving the corresponding GSO, and we explicitly correct for the non-uniformity.
\end{comment}

\subsection{Our Contribution}

While traditional Laplacian approximation approaches solve the direct problem -- approximating a known continuous Laplacian with a graph Laplacian -- in this paper we solve the inverse problem -- constructing a graph Laplacian from an observed graph that is guaranteed to approximate an unknown continuous Laplacian.
We believe that our approach has high practical significance, as in practical data science on graphs, the graph is typically given, but the underlying continuous model is unknown. To be able to solve this inverse problem, we introduce the non-uniform geometric graph (NuG) model. Unlike the standard geometric graph model, a NuG is generated by a non-uniform sampling density and a non-constant neighborhood radius. In this setting, we propose a class of graph shift operators (GSOs), called \emph{non-uniform geometric GSOs}, that are computed solely from the topology of the graph and the node/edge features, while guaranteeing that these GSOs approximate corresponding latent continuous operators defined on the underlying geometric spaces. Together with \citet{dasoulasLearningParametrisedGraph2021} and \citet{sahbiLearningLaplaciansChebyshev2021}, our work can be seen as a theoretically grounded way to learn the GSO. Justified by formulas grounded in Monte-Carlo analysis, we show how to compensate for the non-uniformity in the sampling when computing non-uniform geometric GSOs. This requires having estimates of both the sampling density and the neighborhood radii. Estimating these by only observing the graph is a hard task. For example, graph quantities like the node degrees are affected both by the density and the radius, and hence, it is hard to decouple the density from the radius by only observing the graph. We hence propose methods for estimating the density (and radius) using a self-supervision approach. The idea is to train, against some arbitrary task, a spectral graph neural network, where the GSOs underlying the convolution operators are taken as a non-uniform geometric GSO with learnable density. For the model to perform well, it learns to estimate the underlying sampling density, even though it is not directly supervised to do so.
We explain heuristically the feasibility of the self-supervision approach on a sub-class of non-uniform geometric graphs that we call \emph{geometric graphs with hubs}. This is a class of geometric graphs, motivated by properties of real-world networks, where the radius is roughly piecewise constant and the sampling density is smooth. We show experimentally that the NuG model can effectively model real-world graphs by training a graph autoencoder, where the encoder embeds the nodes in an underlying geometric space, and the decoder produces edges according to the NuG model. Moreover, we show that using our non-uniform geometric GSOs with learned sampling density in spectral graph neural networks improves downstream tasks. Finally, we present proof-of-concept applications in which we use the learned density to improve pooling and extract knowledge from graphs.

\section{Non-Uniform Geometric Models}

In this section, we define non-uniform geometric GSOs, and a subclass of non-uniform geometric graphs called geometric graphs with hubs. To compute such GSOs from the data, we show how to estimate the sampling density from a given graph using self-supervision.

\subsection{Graph Shift Operators and Kernel Operators}

We denote graphs by $\mathcal{G}=(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes, $\abs{\mathcal{V}}$ is the number of nodes, and $\mathcal{E}$ is the set of edges. A one-dimensional graph signal is a mapping $\mathbf{u}:\mathcal{V} \rightarrow\mathbb{R}$. For a higher feature dimension $F\in\mathbb{N}$, a signal is a mapping $\mathbf{u}:\mathcal{V} \rightarrow\mathbb{R}^F$. In graph data science, typically, the data comprises only the graph structure $\mathcal{G}$ and node/edge features $\mathbf{u}$, and the practitioner has the freedom to design a graph shift operator (GSO).
Loosely speaking, given a graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$, a GSO is any matrix $\mathbf{L}\in\mathbb{R}^{\lvert \mathcal{V} \rvert \times \lvert \mathcal{V} \rvert}$ that respects the connectivity of the graph, i.e., $L_{i, j}=0$ whenever $(i, j)\notin \mathcal{E}$, $i\neq j$ \citep{mateosConnectingDotsIdentifying2019}. GSOs are used in graph signal processing to define filters, as functions of the GSO of the form $f(\mathbf{L})$, where $f:\mathbb{R}\rightarrow\mathbb{R}$ is, e.g., a polynomial \citep{defferrardConvolutionalNeuralNetworks2017} or a rational function \citep{levieCayleyNetsGraphConvolutional2019}. The filters operate on graph signals $\mathbf{u}$ by $f(\mathbf{L})\mathbf{u}$. Spectral graph convolutional networks are the class of graph neural networks that implement convolutions as filters. When a spectral graph convolutional network is trained, only the filters $f:\mathbb{R}\rightarrow\mathbb{R}$ are learned. One significant advantage of the spectral approach is that the convolution network is not tied to a specific graph, but can rather be transferred between different graphs of different sizes and topologies. In this work, we see GSOs as randomly sampled from kernel operators defined on underlying geometric spaces. The underlying spaces are modeled as metric spaces. To allow modeling the random sampling of points, each metric space is also assumed to be a probability space. \begin{definition} \label{def:mpl} Let $(\mathcal{S}, d, \mu)$ be a metric-probability space\footnote{A metric-probability space is a triple $(\mathcal{S}, d, \mu)$, where $\mathcal{S}$ is a set of points, $d$ is a metric on $\mathcal{S}$, and $\mu$ is a Borel probability measure with respect to the topology induced by $d$.} with probability measure $\mu$ and metric $d$; let ${m\in\lfun^{\infty}(\mathcal{S})}$\footnote{A function $g:\mathcal{S}\rightarrow\mathbb{R}$ is an element of $\lfun^{\infty}(\mathcal{S})$ if and only if $\exists M<\infty:{\mu(\{x\in\mathcal{S}:\lvert g(x)\rvert >M\})=0}$.
The norm in $\lfun^{\infty}(\mathcal{S})$ is the essential supremum, i.e., $\inf\{M\geq 0: \lvert g(x) \rvert \leq M \text{ for almost every } x\in\mathcal{S}\}$.}; let ${K\in\lfun^{\infty}(\mathcal{S}\times\mathcal{S})}$. The \emph{metric-probability Laplacian} $\mathcal{L}=\mathcal{L}_{K, m}$ is defined as \begin{equation} \label{eq:mml} \mathcal{L}: \lfun^{\infty}(\mathcal{S})\rightarrow \lfun^{\infty}(\mathcal{S})\, , \; (\mathcal{L}u)(x) = \int\limits_{\mathcal{S}}K(x, y)\, u(y)\dif\mu(y)-m(x)\, u(x)\, . \end{equation} \end{definition} For example, let $\mathcal{S}$ be a Riemannian manifold, and take $K(x, y)=\mathbbm{1}_{\ball_\alpha(x)}(y)/\mu(\ball_\alpha(x))$ and $m(x)=1$, where $\ball_\alpha(x)$ is the ball of radius $\alpha$ about $x$. In this case, the operator $\mathcal{L}_{K, m}$ approximates the Laplace-Beltrami operator when $\alpha$ is small \citep{buragoSpectralStabilityMetricmeasure2019}. A random graph is generated by randomly sampling points from the metric-probability space $(\mathcal{S}, d, \mu)$. As a modeling assumption, we suppose the sampling is performed according to a measure $\nu$. We assume $\nu$ is a weighted measure with respect to $\mu$, i.e., there exists a density function $\rho:\mathcal{S}\rightarrow (0,\infty)$ such that $\dif \nu(y)= \rho(y)\dif \mu(y)$\footnote{Formally, $\nu$ is absolutely continuous with respect to $\mu$, with Radon-Nikodym derivative $\rho$.}. We assume that $\rho$ is bounded away from zero and infinity. Using a change of variables, it is easy to see that \begin{equation*} (\mathcal{L}u)(x) =\int\limits_{\mathcal{S}}K(x, y)\,\rho(y)^{-1}\, u(y)\dif\nu(y)-m(x)\, u(x)\, . \end{equation*} Let $\mathbf{x}=\{x_i\}_{i=1}^N$ be a random independent sample from $\mathcal{S}$ according to the distribution $\nu$. The corresponding sampled GSO $\mathbf{L}$ is defined by \begin{equation} \label{eq:sampled_mpl} L_{i,j}=N^{-1} K(x_i,x_j)\rho(x_j)^{-1}-\delta_{i, j}\, m(x_i)\, .
\end{equation} Given a signal $u\in \lfun^{\infty}(\mathcal{S})$, and its sampled version $\mathbf{u}=\{u(x_i)\}_{i=1}^N$, it is well known that $(\mathbf{L}\mathbf{u})_i$ approximates $(\mathcal{L}u)(x_i)$ for every ${i\in\{1,\dots, N\}}$ \citep{heinGraphLaplaciansTheir2007, vonluxburgConsistencySpectralClustering2008}. \subsection{Non-Uniform Geometric GSOs} According to \cref{eq:sampled_mpl}, a GSO $\mathbf{L}$ can be directly sampled from the metric-probability Laplacian $\mathcal{L}$. However, such an approach would violate our motivating guidelines, since we are interested in GSOs that can be computed directly from the graph structure, without explicitly knowing the underlying continuous kernel and density. In this subsection, we define a class of metric-probability Laplacians that allow such direct sampling. For that, we first define a model of adjacency in the metric space. \begin{definition} \label{def:neighborhood_model} Let ${(\mathcal{S}, d, \mu)}$ be a metric-probability space. Let ${\alpha:\mathcal{S}\rightarrow(0, +\infty)}$ be a positive measurable function, referred to as the \emph{neighborhood radius}. The \emph{neighborhood model} $\mathcal{N}$ is defined as the set-valued function that assigns to each ${x\in\mathcal{S}}$ the set \begin{equation*} \mathcal{N}(x)\coloneqq\{y\in\mathcal{S}\, : \, d(x, y)\leq \max\left(\alpha(x), \alpha(y)\right)\}\, . \end{equation*} \end{definition} Since $y\in\mathcal{N}(x)$ implies $x\in\mathcal{N}(y)$ for all $x$, $y\in\mathcal{S}$, \cref{def:neighborhood_model} models only undirected graphs. Next, we define a class of continuous Laplacians based on neighborhood models. \begin{definition} \label{def:mplm} Let ${(\mathcal{S}, d, \mu)}$ be a metric-probability space, and $\mathcal{N}$ a neighborhood model as in \cref{def:neighborhood_model}. Let $m^{(i)}:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous function for every $i\in\{1, \dots, 4\}$.
The \emph{metric-probability Laplacian model} is the kernel operator $\mathcal{L}_{\mathcal{N}}$ that operates on signals $u:\mathcal{S}\rightarrow\mathbb{R}$ by \begin{equation} \label{eq:mplm} \begin{aligned} \left(\mathcal{L}_{\mathcal{N}}u\right)(x) &\coloneqq \int\limits_{\mathcal{N}(x)} m^{(1)}\left(\mu\left(\mathcal{N}(x)\right)\right)\, m^{(2)}\left(\mu\left(\mathcal{N}(y)\right)\right) u(y) \dif \mu(y)\\ &- \int\limits_{\mathcal{N}(x)} m^{(3)}\left(\mu\left(\mathcal{N}(x)\right)\right) \, m^{(4)}\left(\mu\left(\mathcal{N}(y)\right)\right) \dif \mu(y) \, u(x)\, . \end{aligned} \end{equation} \end{definition} To give a concrete example, suppose the neighborhood radius $\alpha(x)=\alpha$ is a constant, $m^{(1)}(x)=m^{(3)}(x)=x^{-1}$, and $m^{(2)}(x)=m^{(4)}(x)=1$; then \cref{eq:mplm} gives \begin{equation*} (\mathcal{L}_\mathcal{N}u)(x) = \dfrac{1}{\mu\left(\ball_\alpha(x)\right)}\int\limits_{\ball_\alpha(x)} u(y)\dif\mu(y) - u(x)\, , \end{equation*} which is an approximation of the Laplace-Beltrami operator. Since the neighborhood model of $\mathcal{S}$ represents adjacency in the metric space, we make the modeling assumption that graphs are sampled from neighborhood models, as follows. First, random independent points $\mathbf{x}=\{x_i\}_{i=1}^N$ are sampled from $\mathcal{S}$ according to the ``non-uniform'' distribution $\nu$ as before. Then, an edge is created between each pair $x_i$ and $x_j$ if $x_j\in\mathcal{N}(x_i)$, to form the graph $\mathcal{G}$. Now, a GSO can be sampled from a metric-probability Laplacian model $\mathcal{L}_{\mathcal{N}}$ by \cref{eq:sampled_mpl}, if the underlying continuous model is known. However, such knowledge is not required, since the special structure of the metric-probability Laplacian model allows deriving the GSO directly from the sampled graph $\mathcal{G}$ and the sampled density $\{\rho(x_i)\}_{i=1}^N$. \Cref{def:nug_gso} below gives such a construction of GSO.
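The sampling procedure just described can be made concrete. The following numpy sketch (our own illustration; the unit-circle geometry, the constant radius, and the piecewise density are assumptions chosen for the example) draws points according to $\nu$ and creates an edge whenever $d(x_i, x_j)\leq\max(\alpha(x_i), \alpha(x_j))$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nug_circle(N, alpha, density_sampler):
    """Sample a graph from a neighborhood model on the unit circle S^1.

    alpha: function mapping angles to neighborhood radii.
    density_sampler: draws N angles in [0, 2*pi) according to nu.
    """
    theta = density_sampler(N)
    # Geodesic distance on the circle.
    diff = np.abs(theta[:, None] - theta[None, :])
    dist = np.minimum(diff, 2 * np.pi - diff)
    r = alpha(theta)
    # Symmetric rule: connect if d(x_i, x_j) <= max(alpha(x_i), alpha(x_j)).
    A = (dist <= np.maximum(r[:, None], r[None, :])).astype(float)
    np.fill_diagonal(A, 0.0)
    return theta, A

# Non-uniform density: twice as likely to land in [0, pi) as in [pi, 2*pi).
def density_sampler(N):
    upper = rng.random(N) < 2 / 3
    return np.where(upper, rng.random(N) * np.pi, np.pi + rng.random(N) * np.pi)

theta, A = sample_nug_circle(500, lambda t: np.full_like(t, 0.1), density_sampler)
```

Nodes falling in the denser half-circle end up with roughly twice the degree of the rest, previewing the coupling between sampling density and degree discussed below.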
In the following, given a vector $\mathbf{u}\in \mathbb{R}^N$ and a function $m:\mathbb{R}\rightarrow\mathbb{R}$, we denote by $m(\mathbf{u})$ the vector $\{m(u_i)\}_{i=1}^N$, and by $\diag(\mathbf{u})\in\mathbb{R}^{N\times N}$ the diagonal matrix with diagonal $\mathbf{u}$. \begin{definition} \label{def:nug_gso} Let $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ be a graph with adjacency matrix $\mathbf{A}$; let $\pmb{\rho}:\mathcal{V}\rightarrow (0,\infty)$ be a graph signal, referred to as the \emph{graph density}. The \emph{non-uniform geometric GSO} is defined to be \begin{equation} \label{eq:nug_gso} \begin{aligned} \mathbf{L}_{\mathcal{G},\pmb{\rho}} & \coloneqq N^{-1}\mathbf{D}_{\pmb{\rho}}^{(1)} \mathbf{A}_{\pmb{\rho}} \mathbf{D}_{\pmb{\rho}}^{(2)} - N^{-1}\diag\left(\mathbf{D}_{\pmb{\rho}}^{(3)}\mathbf{A}_{\pmb{\rho}} \mathbf{D}_{\pmb{\rho}}^{(4)} \mathbf{1}\right)\, , \end{aligned} \end{equation} where $\mathbf{A}_{\pmb{\rho}} = \mathbf{A}\diag(\pmb{\rho})^{-1}$ and $\mathbf{D}_{\pmb{\rho}}^{(i)} =\diag\left(m^{(i)}\left(N^{-1}\, \mathbf{A}_{\pmb{\rho}}\, \pmb{1}\right)\right)$. \end{definition} \Cref{def:nug_gso} recovers the usual GSOs as particular cases, as shown in \cref{tab:usual_GSO} in \cref{sec:R&B}. For example, in the case of $m^{(1)}(x)=m^{(3)}(x)=x^{-1}$, $m^{(2)}(x)=m^{(4)}(x)=1$, and uniform sampling $\rho=1$, \cref{eq:nug_gso} leads to the random-walk Laplacian $\mathbf{L}_{\mathcal{G},\pmb{1}} = \mathbf{D}^{-1} \mathbf{A} - \mathbf{I}$. The non-uniform geometric GSO in \cref{def:nug_gso} is the Monte-Carlo approximation of the metric-probability Laplacian in \cref{def:mplm}. This is shown in the following proposition, whose proof can be found in \cref{sec:proofs}. \begin{proposition} \label{thm:convergence} Let $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ be a random graph with i.i.d. sample $\mathbf{x}=\{x_i\}_{i=1}^N$ from the metric-probability space $(\mathcal{S},d,\mu)$ with neighborhood structure $\mathcal{N}$.
Let $\mathbf{L}_{\mathcal{G}, \pmb{\rho}}$ be the non-uniform geometric GSO as in \cref{def:nug_gso}. Let $u\in \lfun^{\infty}(\mathcal{S})$ and $\mathbf{u}=\{u(x_i)\}_{i=1}^N$. Then, for every $i=1,\ldots,N$, \begin{equation} \label{eq:MC2} \mathbb{E}\left((\mathbf{L}_{\mathcal{G}, \pmb{\rho}}\mathbf{u})_i - (\mathcal{L}_{\mathcal N} u)(x_i)\right)^2 = \mathcal{O}(N^{-1}). \end{equation} \end{proposition} In \cref{sec:proofs} we also show that, with probability at least $1-p$, \begin{equation} \label{eq:Hoef} \forall\; i\in\{1, \dots, N\}\, , \; \lvert(\mathbf{L}_{\mathcal{G}, \pmb{\rho}}\mathbf{u})_i - (\mathcal{L}_{\mathcal N} u)(x_i)\rvert = \mathcal{O}\left(N^{-\frac{1}{2}}\sqrt{\log(1/p)+\log(N)}\right). \end{equation} \Cref{thm:convergence} means that if we are given a graph that was sampled from a neighborhood model, and we know (or have an estimate of) the sampling density at every node of the graph, then we can compute a GSO according to \cref{eq:nug_gso} that is guaranteed to approximate a corresponding unknown metric-probability Laplacian. The next goal is hence to estimate the sampling density from a given graph. \subsection{Inferring the Sampling Density} In real-world scenarios, the true value of the sampling density is not known. The following result gives a first rough estimate of the sampling density in a special case. \begin{lemma} \label{thm:integral_mean_value} Let $(\mathcal{S}, d, \mu)$ be a metric-probability space; let $\mathcal{N}$ be a neighborhood model; let $\nu$ be a weighted measure with respect to $\mu$ with continuous density $\rho$ bounded away from zero and infinity. There exists a function $c:\mathcal{S}\rightarrow\mathcal{S}$ such that $c(x) \in \mathcal{N}(x)$ and $(\rho\circ c)(x)= \nu\big(\mathcal{N}(x)\big)/ \mu\big(\mathcal{N}(x)\big)$. \end{lemma} The proof can be found in \cref{sec:proofs}.
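As an aside, the construction of \cref{eq:nug_gso} is direct to implement. The following numpy sketch (helper names are ours) builds the non-uniform geometric GSO from an adjacency matrix and a node density, and checks the special case $m^{(1)}(x)=m^{(3)}(x)=x^{-1}$, $m^{(2)}(x)=m^{(4)}(x)=1$, $\pmb{\rho}=\pmb{1}$, which should reduce to the random-walk Laplacian:

```python
import numpy as np

def nug_gso(A, rho, m1, m2, m3, m4):
    """Non-uniform geometric GSO of Definition 'nug_gso'.

    A: (N, N) adjacency matrix; rho: (N,) sampling density at the nodes;
    m1..m4: scalar functions applied entrywise.
    """
    N = len(A)
    A_rho = A / rho[None, :]                    # A @ diag(rho)^{-1}
    d = A_rho @ np.ones(N) / N                  # N^{-1} A_rho 1
    D1, D2, D3, D4 = (np.diag(m(d)) for m in (m1, m2, m3, m4))
    first = D1 @ A_rho @ D2 / N
    second = np.diag(D3 @ A_rho @ D4 @ np.ones(N)) / N
    return first - second

# Sanity check: uniform density recovers the random-walk Laplacian.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
rho = np.ones(3)
L = nug_gso(A, rho,
            lambda x: 1 / x, lambda x: np.ones_like(x),
            lambda x: 1 / x, lambda x: np.ones_like(x))
deg = A.sum(axis=1)
L_rw = A / deg[:, None] - np.eye(3)             # D^{-1} A - I
```

Note that only the graph and the (estimated) density enter the computation; the underlying kernel never appears.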
In light of \cref{thm:integral_mean_value}, if the neighborhood radius of $x$ is small enough, if the volumes $\mu(\mathcal{N}(x))$ are approximately constant, and if $\rho$ does not vary too fast, then the sampling density at $x$ is roughly proportional to $\nu(\mathcal{N}(x))$, that is, to the probability that a sampled point falls in $\mathcal{N}(x)$. Therefore, in this situation, the sampling density $\rho(x)$ can be approximated (up to normalization) by the degree of the node $x$. In practice, we are interested in graphs where the volumes of the neighborhoods $\mu(\mathcal{N}(x))$ are not constant. Still, a normalization of the GSO by the degree can soften the distortion introduced by non-uniform sampling, at least locally in areas where $\mu(\mathcal{N}(x))$ is slowly varying. This suggests that the degree of a node is a good input feature for a method that learns the sampling density from the graph structure and the node features. Such a method is developed next. \subsection{Geometric Graphs with Hubs} \label{sec:method} When designing a method to estimate the sampling density from the graph, the degree is not a sufficient input parameter. The reason is that the degree of a node has two main contributions: the sampling density and the neighborhood radius. The problem of decoupling the two contributions is difficult in the general case. However, if the sampling density is slowly varying, and if the neighborhood radius is piecewise constant, the problem becomes easier. Intuitively, a slowly varying sampling density causes only a slight change in the degree between adjacent nodes. In contrast, a sudden change in the degree is caused by a radius jump. In time-frequency analysis and compressed sensing, various results guarantee the ability to separate a signal into its different components, e.g., piecewise constant and smooth components \citep{doAnalysisSimultaneousInpainting2022, donohoMicrolocalAnalysisGeometric2013, gribonvalHarmonicDecompositionAudio2003}.
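The difficulty of decoupling can be made concrete with a small numerical experiment (our own illustration): on the circle, a constant-radius geometric graph with radius $0.2$ and $N$ uniform samples has essentially the same mean degree as one with radius $0.1$ and $2N$ samples, so the degree alone cannot tell a denser sampling from a larger radius.

```python
import numpy as np

rng = np.random.default_rng(1)

def circle_degrees(theta, radius):
    """Degrees of a constant-radius geometric graph on the unit circle."""
    diff = np.abs(theta[:, None] - theta[None, :])
    dist = np.minimum(diff, 2 * np.pi - diff)
    A = (dist <= radius).astype(float)
    np.fill_diagonal(A, 0.0)
    return A.sum(axis=1)

N = 2000
# Graph 1: N uniform points, radius 0.2.
deg1 = circle_degrees(rng.random(N) * 2 * np.pi, 0.2)
# Graph 2: twice the sampling rate, half the radius.  The expected degree,
# (#points - 1) * radius / pi, is the same in both cases.
deg2 = circle_degrees(rng.random(2 * N) * 2 * np.pi, 0.1)
```

Both mean degrees concentrate around $(M-1)\,r/\pi\approx 127$, even though density and radius differ by a factor of two between the graphs.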
This motivates our model of \emph{geometric graphs with hubs}. \begin{definition} \label{def:gwh} A \emph{geometric graph with hubs} is a random graph with non-uniform geometric GSO, sampled from a metric-probability space $(\mathcal{S}, d, \mu)$ with neighborhood model $\mathcal{N}$, where the sampling density $\rho$ is Lipschitz continuous in $\mathcal{S}$ and $\mu(\mathcal{N}(x))$ is piecewise constant. \end{definition} We call this model a geometric graph with hubs since we typically assume that $\mu(\mathcal{N}(x))$ has a low value for most points $x\in\mathcal{S}$, while only a few small regions, called \emph{hubs}, have large neighborhoods. In \cref{sec:link_prediction}, we show that geometric graphs with hubs can model real-world graphs. To validate this, we train a graph auto-encoder on real-world networks, where the decoder is restricted to be a geometric graph with hubs. The fact that such a decoder can achieve low error rates suggests that real-world graphs can often be modeled as geometric graphs with hubs. Geometric graphs with hubs are also reasonable from a modeling point of view. For example, it is reasonable to assume that different demographics join a social media platform at different rates. Since the demographic is directly related to the node features, and the graph roughly exhibits homophily, the features are slowly varying over the graph, and hence, so is the sampling density. On the other hand, hubs in social networks are associated with influencers. The conditions that make a certain user an influencer are not directly related to the features. Indeed, if the node features in a social network are user interests, users that follow an influencer tend to share their features with the influencer, so the features themselves are not enough to determine whether a node is the center of a hub or not.
Hence, the radius does not tend to be continuous over the graph; instead, it is roughly constant and small over most of the graph (non-influencers), except for a number of narrow and sharp peaks (influencers). \subsection{Learning the Sampling Density} \label{Learning the Sampling Density} In this subsection, we propose a strategy to estimate the sampling density $\pmb{\rho}$. As suggested by the above discussion, the local changes in the degree of the graph give us a lot of information about the local changes in the sampling density and neighborhood radius of geometric graphs with hubs. Hence, we implement the density estimator as a message-passing graph neural network (MPNN) $\Theta$, because it performs local computations and is equivariant to node indexing, properties that both the density and the degree satisfy. Since we are mainly interested in estimating the inverse of the sampling density, $\Theta$ takes as input two channels for each node in the graph: the inverse of the degree, and the inverse of the mean degree of the one-hop neighborhood. However, it is not yet clear how to train $\Theta$. Since in real-world scenarios the ground-truth density is not known, we train $\Theta$ in a self-supervised manner. In this context, we choose a task (link prediction, node or graph classification, etc.) on a real-world graph $\mathcal{G}$ and solve it by means of a graph neural network $\Psi$, referred to as the \emph{task network}. Since we want $\Psi$ to depend on the sampling density estimator $\Theta$, we define $\Psi$ as a spectral graph convolution network based on the non-uniform geometric GSO $\mathbf{L}_{\mathcal{G},\Theta(\mathcal{G})}$, e.g., GCN \citep{kipfSemiSupervisedClassificationGraph2017}, ChebNet \citep{defferrardConvolutionalNeuralNetworks2017} or CayleyNet \citep{levieCayleyNetsGraphConvolutional2019}. We therefore train $\Psi$ end-to-end on the given task.
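The two input channels of $\Theta$ are cheap to compute from the adjacency matrix alone; a numpy sketch (the function name is ours; the learned MPNN layers themselves are omitted):

```python
import numpy as np

def density_input_features(A):
    """Two input channels for the density estimator: the inverse degree,
    and the inverse of the mean degree over the one-hop neighborhood."""
    deg = A.sum(axis=1)
    inv_deg = 1.0 / deg
    # Mean degree over each node's neighbors: (A deg)_i / deg_i.
    mean_nbr_deg = (A @ deg) / deg
    return np.stack([inv_deg, 1.0 / mean_nbr_deg], axis=1)

# Path graph on 3 nodes, with degrees (1, 2, 1).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
feats = density_input_features(A)
```

On this path graph, the end nodes get features $(1, 0.5)$ and the middle node $(0.5, 1)$, so the second channel exposes the degree profile of the neighborhood rather than of the node itself.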
The idea behind the proposed method is that the task depends mostly on the underlying continuous model. For example, in shape classification, the label of each graph depends on the surface from which the graph is sampled, rather than on the specific intricate structure of the discretization. Therefore, the task network $\Psi$ can perform well if it learns to ignore the particular fine details of the discretization and to focus on the underlying space. The correction of the GSO via the estimated sampling density (\cref{eq:nug_gso}) gives the network exactly such power. Therefore, we conjecture that $\Theta$ will indeed learn how to estimate the sampling density for graphs that exhibit homophily. To verify this claim, and to validate our model, we focus on link prediction on synthetic datasets (see \cref{sec:NuG_examples}), for which the ground-truth sampling density is known. As shown in \cref{fig:synthetic}, the MPNN $\Theta$ is able to correctly identify hubs, and to correctly predict the ground-truth density in a self-supervised manner. \section{Experiments} \label{sec:experiments} In the following, we validate the NuG model experimentally. Moreover, we verify the validity of our method first on synthetic datasets, and then on real-world graphs in transductive (node classification) and inductive (graph classification) settings. Finally, we propose proof-of-concept applications in explainability, learning GSOs, and differentiable pooling. \subsection{Link Prediction} \label{sec:link_prediction} The method proposed in \cref{Learning the Sampling Density} is applied to synthetic datasets of geometric graphs with hubs (see \crefrange{sec:implementation_synthetic}{sec:implementation_link_prediction} for details). \Cref{fig:synthetic} shows that $\Theta$ is able to correctly predict the value of the sampling density. The left plots of \cref{fig:S1_synthetic,fig:D_synthetic} show that the density is well approximated both at hubs and non-hubs.
Looking at the right plots, it is evident that the density cannot be predicted solely from the degree. \begin{figure} \centering \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{img/S1_learnt_pdf_H.pdf} \caption{$\mathbb{S}^1$ geometric graph with hubs.} \label{fig:S1_synthetic} \end{subfigure} \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{img/R2_learnt_pdf_H.pdf} \caption{$\mathbb{D}$ geometric graph with hubs.} \label{fig:D_synthetic} \end{subfigure} \caption{Example of the learned probability density function in link prediction, where the decoder implements the usual inner product, and the underlying metric space is (a) the unit circle, and (b) the unit disk. (Left) Ground-truth sampling density vs. learned sampling density at the nodes. (Right) Degree vs. learned sampling density. } \label{fig:synthetic} \end{figure} \Cref{fig:link_prediction_AUC} and \cref{fig:link_prediction_AP} in \cref{sec:implementation_link_prediction_citation_networks} show that the NuG model is able to effectively represent real-world graphs, outperforming other graph auto-encoder methods (see \cref{tab:link_prediction_dataset} for the number of parameters of each method). Here, we learn an auto-encoder with four types of decoders: inner product, MLP, constant neighborhood radius, and piecewise constant neighborhood radius corresponding to a geometric graph with hubs (see \cref{sec:implementation_link_prediction_citation_networks} for more details). Better performance is reached when the graph is allowed to be a geometric graph with hubs as in \cref{def:gwh}. Moreover, the performance of the distance and distance+hubs decoders is consistent across different datasets, unlike that of the inner product and MLP decoders. This corroborates the claim that real-world graphs can be better modeled as geometric graphs with non-constant neighborhood radius.
\Cref{fig:Pubmed_test_alpha_beta} in \cref{sec:implementation_link_prediction_citation_networks} shows the learned probabilities of being a hub, and the learned values of $\alpha$ and $\beta$ for the Pubmed graph. \begin{figure} \centering \begin{subfigure}{.3\linewidth} \includegraphics[width=\linewidth]{img/Citeseer_AUC_d_dh_ip_mlp.pdf} \caption{Citeseer} \end{subfigure} \begin{subfigure}{.3\linewidth} \includegraphics[width=\linewidth]{img/Cora_AUC_d_dh_ip_mlp.pdf} \caption{Cora} \end{subfigure} \begin{subfigure}{.3\linewidth} \includegraphics[width=\linewidth]{img/Pubmed_AUC_d_dh_ip_mlp.pdf} \caption{Pubmed} \end{subfigure} \begin{subfigure}{.3\linewidth} \includegraphics[width=\linewidth]{img/AmazonComputers_AUC_d_dh_ip_mlp.pdf} \caption{AmazonComputers} \end{subfigure} \begin{subfigure}{.3\linewidth} \includegraphics[width=\linewidth]{img/AmazonPhoto_AUC_d_dh_ip_mlp.pdf} \caption{AmazonPhoto} \end{subfigure} \begin{subfigure}{.3\linewidth} \includegraphics[width=\linewidth]{img/FacebookPagePage_AUC_d_dh_ip_mlp.pdf} \caption{FacebookPagePage} \end{subfigure} \caption{Test AUC for the link prediction task as a function of the dimension of the latent space. Performance is averaged across $10$ runs for each value of the latent dimension. } \label{fig:link_prediction_AUC} \end{figure} \subsection{Node Classification} \label{sec:node_classification} Another exemplary application is to use a non-uniform geometric GSO $\mathbf{L}_{\mathcal{G},\pmb{\rho}}$ (\cref{def:nug_gso}) in a spectral graph convolution network for node classification tasks, where the density $\rho_i$ at each node $i$ is computed by a separate graph neural network, and the whole model is trained end-to-end on the task. The details of the implementation are reported in \cref{sec:implementation_node_classification}.
In \cref{fig:CitationNetworks_node_classification}, we show the accuracy of the best-scoring GSO among the ones reported in \cref{tab:usual_GSO} when the density is ignored, against the accuracy of the best-scoring GSO when the sampling density is learned. For Citeseer and FacebookPagePage, the best GSO is the symmetric normalized adjacency matrix. For Cora and Pubmed, the best density-ignored GSO is the symmetric normalized adjacency matrix, while the best density-normalized GSO is the adjacency matrix. For AmazonComputers and AmazonPhoto, the best-scoring GSO is the symmetric normalized Laplacian. This validates our analysis: if the sampling density is ignored, the best choice is to normalize the Laplacian by the degree to soften the distortion of non-uniform sampling. \begin{figure} \centering \begin{subfigure}{.25\linewidth} \includegraphics[width=\linewidth]{img/Citeseer_classification-1.pdf} \caption{Citeseer} \end{subfigure} \begin{subfigure}{.25\linewidth} \includegraphics[width=\linewidth]{img/Cora_classification-1.pdf} \caption{Cora} \end{subfigure} \begin{subfigure}{.25\linewidth} \includegraphics[width=\linewidth]{img/Pubmed_classification-1.pdf} \caption{Pubmed} \end{subfigure} \begin{subfigure}{.25\linewidth} \includegraphics[width=\linewidth]{img/AmazonComputers_classification-1.pdf} \caption{AmazonComputers} \end{subfigure} \begin{subfigure}{.25\linewidth} \includegraphics[width=\linewidth]{img/AmazonPhoto_classification-1.pdf} \caption{AmazonPhoto} \end{subfigure} \begin{subfigure}{.25\linewidth} \includegraphics[width=\linewidth]{img/FacebookPagePage_classification-1.pdf} \caption{FacebookPagePage} \end{subfigure} \caption{Test accuracy on the node classification task. Comparison between the best-scoring graph shift operator when the density is ignored (I) or learned (L). Results averaged across $10$ runs: each point represents the performance at one run.
} \label{fig:CitationNetworks_node_classification} \end{figure} \subsection{Graph Classification \& Differentiable Pooling} \label{sec:pooling_classification} In this experiment, we perform graph classification on the AIDS dataset \citep{riesenIAMGraphDatabase2008}, as explained in \cref{sec:implementation_graph_classification}. \Cref{fig:metrics_AIDS} shows that the classification performance of a spectral graph neural network is better if a portion of the parameters is used to learn $\pmb{\rho}$, which is then used in a non-uniform geometric GSO (\cref{def:nug_gso}). The learnable $\pmb{\rho}$ on the AIDS dataset can be used not only to correct the Laplacian but also to perform a better pooling (\cref{sec:implementation_graph_classification} explains how this is implemented). Usually, a graph convolutional neural network is followed by a global pooling layer in order to extract a representation of the whole graph. A vanilla pooling layer aggregates the contributions of all nodes uniformly. We implemented a weighted pooling layer that takes into account the importance of each node. As shown in \cref{fig:metrics_AIDS}, the weighted pooling layer can indeed improve performance on the graph classification task. \Cref{fig:deg_rhoL_rhoP_AIDS} in \cref{sec:implementation_graph_classification} shows a comparison between the degree, the density learned to correct the GSO, and the density learned for pooling. From the plot, it is clear that the degree cannot predict the density. Indeed, the sampling density at nodes with the same degree can have different values.
\begin{figure} \centering \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{img/AIDS_c_ILP.pdf} \caption{Combinatorial Laplacian} \end{subfigure} \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{img/AIDS_gcn_ILP.pdf} \caption{Symmetric normalized adjacency} \end{subfigure} \caption{Test metrics of the graph classification task on the AIDS dataset, using the combinatorial Laplacian (a) and the symmetric normalized adjacency (b), averaged over $10$ runs. Comparison when the importance $\pmb{\rho}^{-1}$ is ignored (I), used to correct the Laplacian (L), or used for pooling (P). Each point represents the performance at one run. In (a) the best performance is reached when $\pmb{\rho}^{-1}$ is used to correct the Laplacian, and in (b) when $\pmb{\rho}^{-1}$ is used for pooling. } \label{fig:metrics_AIDS} \end{figure} \subsection{Explainability in Graph Classification} \label{sec:graph_classification} In this experiment, we show how to use the density estimator for explainability. The inverse density vector $\pmb{\rho}^{-1}$ can be interpreted as a measure of the importance of each node, relative to the task at hand, instead of seeing it as the sampling density. Thinking about $\pmb{\rho}^{-1}$ as importance is useful when the graph is not naturally seen as randomly generated from a graphon model. We applied this paradigm to the AIDS dataset, as explained in the previous subsection. The fact that the classification performance is better when $\pmb{\rho}$ is learned demonstrates that $\pmb{\rho}$ is an important feature for the classification task, and hence, it can be exploited to extract knowledge from the graph. We define the mean importance of each chemical element $e$ as the sum of all values of $\pmb{\rho}^{-1}$ corresponding to nodes labeled as $e$, divided by the number of nodes labeled $e$.
\Cref{fig:importance_AIDS} shows the mean importance of each element, when $\pmb{\rho}^{-1}$ is estimated as a module in the task network in two ways. (1) The importance $\pmb{\rho}^{-1}$ is used to correct the GSO. (2) The importance $\pmb{\rho}^{-1}$ is used in a pooling layer that maps the output of the graph neural network $\Psi$ to one feature of the form $\sum_{j=1}^{\lvert \mathcal{V}\rvert}{\rho_j}^{-1}\Psi(\mathbf{X})_j$, where $\mathbf{X}$ denotes the node features. In both cases, the most important elements are the same; therefore, the two methods seem to be consistent. \begin{figure} \centering \includegraphics[width=.75\linewidth]{img/AIDS_importance_comparison.pdf} \caption{(Left) Distribution of chemical elements per class (active, inactive, respectively in blue, red), computed as the number of compounds labeled as active (respectively, inactive) containing that particular element, divided by the number of active (respectively, inactive) compounds. This is a measure of rarity. For example, potassium is present in $5$ out of $400$ active compounds, and in $1$ out of $1600$ inactive compounds. Hence, it is rarer to find potassium in an inactive compound. (Right) The mean importance of each element when $\pmb{\rho}^{-1}$ is used to correct the GSO (L, orange) and when it is used for weighted pooling (P, green). Carbon, oxygen, and nitrogen have low mean importance, which makes sense as they are present in almost every compound, as shown in the left plot. The chemical elements are sorted according to their mean importance when $\pmb{\rho}^{-1}$ is used to correct the GSO (orange bars). } \label{fig:importance_AIDS} \end{figure} \section*{Conclusions} In this paper, we addressed the problem of learning the latent sampling density by which graphs are sampled from their underlying continuous models. We developed formulas for representing graphs, given their connectivity structure and sampling density, using non-uniform geometric GSOs.
We then showcased how the density of geometric graphs with hubs can be estimated using self-supervision, and validated our approach experimentally. Last, we showed how knowing the sampling density can help with various tasks, e.g., improving spectral methods, improving pooling, and gaining knowledge from graphs. One limitation of our methodology is the difficulty of validating that real-world graphs are indeed sampled from latent geometric spaces. While we reported experiments that support this modeling assumption, an important future direction is to develop further experiments and tools to support our model. For instance, can we learn a density estimator on one class of graphs and transfer it to another? Can we use ground-truth demographic data to validate the estimated density in social networks? We believe future research will shed light on those questions and find new ways to exploit the sampling density for various applications. \printbibliography \appendix \section{Implementation Details} \subsection{Synthetic Dataset Generation} \label{sec:implementation_synthetic} This section explains how to generate a synthetic dataset of geometric graphs with hubs. We first consider a metric space. For our experiments, we mainly focused on the unit circle $\mathbb{S}^1$ and on the unit disk $\mathbb{D}$ (see \cref{sec:NuG_examples} for more details). Each graph is generated as follows. First, a non-uniform distribution is randomly generated. We considered an angular non-uniformity as described in \cref{def:sbrv}, where the number of oscillating terms, as well as the parameters $\pmb{c}$, $\pmb{n}$, $\pmb{\mu}$, are chosen randomly. In the case of $2$-dimensional spaces, the radial distribution is the one shown in \cref{tab:euclidean_spherical_hyperbolic}. According to each generated probability density function, $N$ points $\{x_i\}_{i=1}^N$ are drawn independently.
Among them, $m<N$ are chosen randomly to be hubs, and any other node whose distance from a hub is less than some $\varepsilon>0$ is also marked as a hub. We consider two parameters $\alpha$, $\beta>0$. The neighborhood radius about non-hub (respectively, hub) nodes is taken to be $\alpha$ (respectively, $\alpha+\beta$). Any two points are then connected if \begin{equation*} d(x_i, x_j)\leq \max\{r(x_i), r(x_j)\}\, ,\; r(x)=\begin{dcases} \alpha & x \text{ is non-hub}\\ \alpha+\beta & x \text{ is hub} \end{dcases}\, . \end{equation*} In practice, $\alpha$ is computed such that the resulting graph is strongly connected; hence, it differs from graph to graph; $\beta$ is set to be $3\, \alpha$ and $\varepsilon$ to be $\alpha/10$. \subsection{Density Estimation with Self-Supervision} \paragraph{Density Estimation Network} In our experiments, the inverse of the sampling density, $1/\rho$, is learned by means of an EdgeConv neural network $\Theta$ \citep{wangDynamicGraphCNN2019a}, which is referred to as PNet in the following, where the message function is a multi-layer perceptron (MLP), and the aggregation function is $\max(\cdot)$, followed by a $\absv(\cdot)$ non-linearity. The number of hidden layers, hidden channels, and output channels is $3$, $32$, and $1$, respectively. Since the degree is an approximation of the sampling density, as stated in \cref{thm:integral_mean_value}, and since we are interested in computing its inverse to correct the GSO, the input of PNet is the inverse of the degree and the inverse of the mean degree of the one-hop neighborhood. Justified by the Monte-Carlo approximation \begin{equation*} 1 = \int\limits_{\mathcal{S}} \dif \mu(y) =\int\limits_{\mathcal{S}} \rho(y)^{-1}\dif \nu(y) \approx N^{-1}\sum\limits_{i=1}^{N} \rho(x_i)^{-1}\, , \; x_i \sim \rho\; \forall i=1, \dots, N\, , \end{equation*} the output of PNet is normalized by its mean.
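For concreteness, PNet's two input features and the mean normalization of its output can be sketched in a few lines of numpy; the helper names are ours, and the adjacency matrix is assumed dense, binary, and without isolated nodes.

```python
import numpy as np

def pnet_inputs(adj):
    """Input features of PNet: inverse degree and inverse of the
    mean degree over the one-hop neighborhood (one column each)."""
    deg = adj.sum(axis=1)
    mean_nbr_deg = (adj @ deg) / deg  # mean degree of each node's neighbors
    return np.stack([1.0 / deg, 1.0 / mean_nbr_deg], axis=1)

def normalize_inverse_density(rho_inv):
    """Normalize the estimated 1/rho by its mean, so that the Monte-Carlo
    identity 1 ~ N^{-1} sum_i rho(x_i)^{-1} holds exactly on the sample."""
    return rho_inv / rho_inv.mean()
```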
\paragraph{Self-Supervision of PNet via Link Prediction on Synthetic Dataset} \label{sec:implementation_link_prediction} To train the PNet $\Theta$, for each graph $\mathcal{G}$, we use $\Theta(\mathcal{G})$ to define a GSO $\mathbf{L}_{\mathcal{G},\Theta(\mathcal{G})}$. Then, we define a graph auto-encoder, where the encoder is implemented as a spectral graph convolution network with GSO $\mathbf{L}_{\mathcal{G},\Theta(\mathcal{G})}$. The decoder is the usual inner-product decoder. The graph signal is a slice of $20$ random columns of the adjacency matrix. The number of hidden channels, hidden layers, and output channels is respectively $32$, $2$, and $2$. For each node $j$, the network outputs a feature $\Theta(\mathcal{G})_j$ in $\mathbb{R}^n$. Here, $\mathbb{R}^n$ is seen as the metric space underlying the NuG. In our experiments (\cref{sec:link_prediction}), we choose $n=2$. Some results are shown in \cref{fig:synthetic}. \paragraph{Node Classification} \label{sec:implementation_node_classification} Let $\mathcal{G}$ be the real-world graph. In \cref{sec:node_classification}, we considered $\mathcal{G}$ to be one of the graphs reported in \cref{tab:link_prediction_dataset}. The task network $\Psi$ is a polynomial convolutional neural network implementing a GSO $\mathbf{L}_{\mathcal{G}, \Theta(\mathcal{G})}$, where $\Theta$ is the PNet; the order of the polynomial spectral filters is $1$, the number of hidden channels $32$, and the number of hidden layers $2$; the GSOs used are the ones in \cref{tab:usual_GSO}. The optimizer is ADAM \citep{kingmaAdamMethodStochastic2017} with learning rate $10^{-2}$. We split the nodes into training ($85\%$), validation ($5\%$), and test ($10\%$) sets in a stratified fashion, and apply early stopping. The performance of the method is shown in \cref{fig:CitationNetworks_node_classification}. \paragraph{Graph Classification} \label{sec:implementation_graph_classification} Let $\mathcal{G}$ be the real-world graph.
In \cref{sec:graph_classification}, $\mathcal{G}$ is any compound in the AIDS dataset. The task network $\Psi$ is a polynomial convolutional neural network implementing a GSO $\mathbf{L}_{\mathcal{G}, \Theta(\mathcal{G})}$, where $\Theta$ is the PNet; the order of the spectral polynomial filters is $1$, the number of hidden channels $128$, and the number of hidden layers $2$. The optimizer is ADAM with learning rate $10^{-2}$. We perform a stratified splitting of the graphs into training ($85\%$), validation ($5\%$), and test ($10\%$) sets, and apply early stopping. The chosen batch size is $64$. The pooling layer is a global add layer. In the case of weighted pooling as in \cref{sec:pooling_classification}, the task network $\Psi$ implements the GSO $\mathbf{L}_{\mathcal{G}, \pmb{1}}$, while $\Theta$ is used to output the weights of the pooling layer. The performance metrics of both approaches are shown in \cref{fig:metrics_AIDS}. \begin{figure} \centering \begin{subfigure}{.4\linewidth} \includegraphics[width=.9\linewidth]{img/AIDS_deg_rhoL.pdf} \end{subfigure} \begin{subfigure}{.4\linewidth} \includegraphics[width=.9\linewidth]{img/AIDS_deg_rhoP.pdf} \end{subfigure} \caption{Comparison between the degree, the density learnt to correct the GSO ($\rho_L$), and the density learnt to perform weighted pooling ($\rho_P$), on the AIDS dataset. } \label{fig:deg_rhoL_rhoP_AIDS} \end{figure} \subsection{Geometric Graphs with Hubs Auto-Encoder} \label{sec:implementation_link_prediction_citation_networks} Here, we validate that real-world graphs can be modeled approximately as geometric graphs with hubs, as claimed in \cref{sec:link_prediction}. We consider the datasets listed in \cref{tab:link_prediction_dataset}. The auto-encoder is defined as follows. Let $\mathcal{G}$ be the real-world graph with $N$ nodes and $F$ node features; let $\mathbf{X}\in\mathbb{R}^{N\times F}$ be the feature matrix. Let $n$ be the dimension of the metric space in which nodes are embedded.
Let $\Psi$ be a spectral graph convolutional network, referred to as \emph{encoder}. Let $\Psi(\mathbf{X})_i$ and $\Psi(\mathbf{X})_j\in\mathbb{R}^n$ be the embeddings of nodes $i$ and $j$, respectively. A decoder is a mapping $\mathbb{R}^n\times\mathbb{R}^n\rightarrow [0, 1]$ that takes as input the embeddings of two nodes $i$, $j$ and returns the probability that the edge $(i, j)$ exists. We use four types of decoders. (1) The \emph{inner product decoder} from \citet{kipfVariationalGraphAutoEncoders2016} is defined as $\sigmoid\left(\langle \Psi(\mathbf{X})_i, \Psi(\mathbf{X})_j\rangle\right)$, where $\sigma(\cdot)$ is the logistic sigmoid function. (2) The \emph{MLP decoder} is defined as $\sigma\left(\mathrm{MLP}([\Psi(\mathbf{X})_i,\Psi(\mathbf{X})_j])\right)$, where $[\Psi(\mathbf{X})_i,\Psi(\mathbf{X})_j]\in\mathbb{R}^{2\, n}$ denotes the concatenation of $\Psi(\mathbf{X})_i$ and $\Psi(\mathbf{X})_j$, and $\mathrm{MLP}$ denotes a multi-layer perceptron. (3) The \emph{distance decoder} corresponds to geometric graphs. It is defined as ${\sigmoid(\alpha- \lVert \Psi(\mathbf{X})_i-\Psi(\mathbf{X})_j \rVert_2) }$, where $\alpha$ is the trainable neighborhood radius. (4) The \emph{distance+hubs decoder} corresponds to geometric graphs with hubs. It is defined as $\sigmoid(\alpha+\max\{\Upsilon(\tilde{\mathbf{D}})_i, \Upsilon(\tilde{\mathbf{D}})_j\} \beta- \lVert \Psi(\mathbf{X})_i-\Psi(\mathbf{X})_j \rVert_2)$, where $\alpha$, $\beta$ are trainable parameters that describe the radii of hubs and non-hubs. $\Upsilon$ is a message-passing graph neural network (with the same architecture as PNet) that takes as input a signal $\tilde{\mathbf{D}}$ computed from the node degrees (i.e., the inverse of the degree and the inverse of the mean degree of the one-hop neighborhood), and outputs the probability that each node is a hub. $\Upsilon$ is learned end-to-end together with the rest of the auto-encoder.
In order to guarantee that $0\leq \Upsilon(\mathcal{G})_j\leq 1$, the network is followed by a min-max normalization. The distance decoder is justified by the fact that the condition ${\lVert \Psi(\mathbf{X})_i-\Psi(\mathbf{X})_j \rVert_2\leq\alpha}$ can be rewritten as $\heaviside(\alpha-\lVert \Psi(\mathbf{X})_i-\Psi(\mathbf{X})_j \rVert_2)$, where $\heaviside(\cdot)$ is the Heaviside function. The Heaviside function is relaxed to the logistic sigmoid for differentiability. Similar reasoning lies behind the formula of the distance+hubs decoder. The encoder $\Psi$ is a polynomial spectral graph convolutional neural network implementing as GSO the symmetric normalized adjacency matrix; the order of the polynomial filters is $1$, the number of hidden channels $32$, and the number of hidden layers $2$. In the case of the inner-product, MLP, and distance decoders, the loss is the cross entropy of existing and non-existing edges. In the case of the distance+hubs decoder, we also add $\lVert\Upsilon(\mathcal{G}) \rVert_1/N$ to the loss as a regularization term, since in our model we suppose the number of hubs is low. The optimizer is ADAM with learning rate $10^{-2}$. We split the edges into training ($85\%$), validation ($5\%$), and test ($10\%$) sets, and apply early stopping. The performances of the decoders are shown in \cref{fig:link_prediction_AUC,fig:link_prediction_AP}, while examples of embeddings and learned $\alpha$ and $\beta$ are shown in \cref{fig:Pubmed_test_alpha_beta}. The distance decoder has one learnable parameter more than the inner-product decoder. Since PNet has a fixed number of input channels, the distance+hubs decoder has $2,535$ learnable parameters more than the inner-product one. In contrast, the MLP decoder has a number of input channels that depends on the latent dimension; therefore, its number of hidden channels is chosen to guarantee that the number of learnable parameters of the MLP decoder is approximately $2,535$.
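As an illustration, the distance+hubs decoder can be written as a short numpy function; here the embeddings `z`, the hub probabilities `p`, and the fixed scalars `alpha`, `beta` stand in for the learned quantities ($\Psi(\mathbf{X})$, $\Upsilon(\tilde{\mathbf{D}})$, and the trainable radii), so this is a sketch of the formula rather than the trained module.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def distance_hubs_decoder(z, p, alpha, beta):
    """Edge probabilities of the distance+hubs decoder:
    sigmoid(alpha + max(p_i, p_j) * beta - ||z_i - z_j||_2).

    z : (N, n) node embeddings, p : (N,) hub probabilities in [0, 1].
    Returns the full (N, N) matrix of edge probabilities."""
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    hubness = np.maximum(p[:, None], p[None, :])
    return sigmoid(alpha + hubness * beta - dist)
```

Setting `p` to zero recovers the plain distance decoder, which is the design intent: hubs only enlarge the effective radius.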
\begin{table} \centering \caption{Real-world networks used for the link prediction task: graph statistics and number of parameters of the auto-encoder for each of the four decoder types: inner product, distance, distance+hubs, and MLP. Since the number of input channels of the MLP decoder depends on the latent dimension $n$, we report the number of parameters for $n=3$. } \label{tab:link_prediction_dataset} \resizebox{\linewidth}{!}{ \begin{tabular}{cccccccccc} \toprule & \multicolumn{4}{c}{Statistics} && \multicolumn{4}{c}{Decoder} \\ \cmidrule{2-5}\cmidrule{7-10} Dataset & N. nodes & N. edges & N. features & N. classes && Inner product & Distance & Distance+Hubs & MLP\\ \midrule \makecell{Citeseer \\ \citep{yangRevisitingSemiSupervisedLearning2016}} & 3,327 & 9,104 & 3,703 & 6 & & 237,154 & 237,155 & 239,689 & 239,750\\ \makecell{Cora \\ \citep{yangRevisitingSemiSupervisedLearning2016}}& 2,708 & 10,556 & 1,433 & 7 && 91,874 & 91,875 & 94,409 & 94,470\\ \makecell{Pubmed \\ \citep{yangRevisitingSemiSupervisedLearning2016}}& 19,717 & 88,648 & 500 & 3 && 32,162 & 32,163 & 34,697 & 34,758\\ \makecell{Amazon Computers \\ \citep{shchurPitfallsGraphNeural2019}} & 13,752 & 491,722 & 767 & 10 && 49,250 & 49,251 & 51,785 & 51,846\\ \makecell{Amazon Photo \\ \citep{shchurPitfallsGraphNeural2019}}& 7,650 & 238,162 & 745 & 8 && 47,842 & 47,843 & 50,377 & 50,438\\ \makecell{FacebookPagePage \\ \citep{rozemberczkiMultiScaleAttributedNode2021}}& 22,470 & 342,004 & 128 & 4 && 8,354 & 8,355 & 10,889 & 10,950\\ \bottomrule \end{tabular} } \end{table} \begin{figure} \centering \begin{subfigure}{.32\linewidth} \includegraphics[width=\linewidth]{img/Citeseer_AP_d_dh_ip_mlp.pdf} \caption{Citeseer} \end{subfigure} \begin{subfigure}{.32\linewidth} \includegraphics[width=\linewidth]{img/Cora_AP_d_dh_ip_mlp.pdf} \caption{Cora} \end{subfigure} \begin{subfigure}{.32\linewidth} \includegraphics[width=\linewidth]{img/Pubmed_AP_d_dh_ip_mlp.pdf} \caption{Pubmed} \end{subfigure}
\begin{subfigure}{.32\linewidth} \includegraphics[width=\linewidth]{img/AmazonComputers_AP_d_dh_ip_mlp.pdf} \caption{AmazonComputers} \end{subfigure} \begin{subfigure}{.32\linewidth} \includegraphics[width=\linewidth]{img/AmazonPhoto_AP_d_dh_ip_mlp.pdf} \caption{AmazonPhoto} \end{subfigure} \begin{subfigure}{.32\linewidth} \includegraphics[width=\linewidth]{img/FacebookPagePage_AP_d_dh_ip_mlp.pdf} \caption{FacebookPagePage} \end{subfigure} \caption{Test AP for the link prediction task as a function of the dimension of the latent space. Performances averaged across $10$ runs for each value of the latent dimension. } \label{fig:link_prediction_AP} \end{figure} \begin{figure} \centering \includegraphics[height=.4\linewidth]{img/Pubmed_dGae_L.pdf} \includegraphics[height=.4\linewidth]{img/Pubmed_test_dims_alpha_beta.pdf} \caption{(Top left) Embedding of Pubmed in $2$ dimensions using a distance+hubs decoder. The intensity of the color for each node $i$ is proportional to the probability $p_i=\Upsilon(\mathcal{G})_i$ of being a hub. The three colours (red, green and blue) correspond to the three different classes to which a node can belong, as reported in \cref{tab:link_prediction_dataset}. (Bottom left) Histogram of the probabilities $\mathbf{p}=\Upsilon(\mathcal{G})$ of being a hub, divided per class. (Right) Learned values of the radius parameters $\alpha$ (top) and $\beta$ (bottom) of the geometric graph with hubs auto-encoder on Pubmed, as a function of the latent dimension. Results averaged across $10$ runs for each value of the latent dimension. The average probability of being a hub is $19.06\%$, and the fraction of nodes with a probability of being a hub greater than $0.99$ is $10.10\%$. } \label{fig:Pubmed_test_alpha_beta} \end{figure} \section{Synthetic Datasets - a Blueprint} \label{sec:NuG_examples} In the following, we consider some simple latent metric spaces and construct methods for randomly generating non-uniform samples.
For each space, structural properties of the corresponding NuG are studied, such as the expected degree of a node and the expected average degree, in the case where the radius is fixed and the sampling is non-uniform. All proofs can be found in \cref{sec:proofs}, if not otherwise stated. Three natural metric measure spaces are the euclidean, spherical, and hyperbolic spaces. If we restrict attention to $2$-dimensional spaces, a way to uniformly sample is summarized in \cref{tab:euclidean_spherical_hyperbolic}. \begin{table} \centering \caption{Properties of euclidean, spherical and hyperbolic spaces of dimension $2$. In the case of euclidean and hyperbolic spaces, the uniform distribution refers to a disk of radius $R$.} \label{tab:euclidean_spherical_hyperbolic} \begin{tabular}{ccl} \toprule Property & Geometry & \\ \midrule \multirow{3}{*}{\makecell{Measure of a ball\\ of radius $\alpha$}} & euclidean & $\pi\, \alpha^2$ \\ & spherical & $2\, \pi \,(1-\cos(\alpha))$ \\ & hyperbolic & $2\, \pi \,(\cosh(\alpha)-1)$\\ \midrule \multirow{3}{*}{\makecell{Uniform p.d.f.}} & euclidean & $(2\, \pi)^{-1}\mathbbm{1}_{[-\pi, \pi)}(\theta)2R^{-2} r \mathbbm{1}_{[0, R)}(r)$ \\ & spherical & $(2\, \pi)^{-1}\mathbbm{1}_{[-\pi, \pi)}(\theta)2^{-1} \sin(\varphi)\mathbbm{1}_{[0, \pi)}(\varphi)$ \\ & hyperbolic & $(2\, \pi)^{-1}\mathbbm{1}_{[-\pi, \pi)}(\theta)(\cosh(R)-1)^{-1}\sinh(r)\mathbbm{1}_{[0, R)}(r)$\\ \midrule \multirow{3}{*}{\makecell{Distance in polar\\ coordinates}} & euclidean & $\sqrt{r_1^2+r_2^2-2 r_1 r_2 \cos(\theta_1-\theta_2)}$ \\ & spherical & $\arccos\left( \cos(\phi_1)\, \cos(\phi_2)+\sin(\phi_1)\, \sin(\phi_2)\, \cos(\theta_1-\theta_2)\right)$ \\ & hyperbolic & $ \arccosh(\cosh(r_1)\cosh(r_2)-\sinh(r_1)\sinh(r_2)\cos(\theta_1-\theta_2))$\\ \bottomrule \end{tabular} \end{table} In all three cases, the \emph{radial} component arises naturally from the measure of the space. A possible way to introduce non-uniformity is changing the \emph{angular} distribution.
In this way, preferential directions will be identified, leading to an anisotropic model. \begin{definition} \label{def:sbrv} Given a natural number $C\in\mathbb{N}$, and vectors $\pmb{c}\in\mathbb{R}^{C}$, $\pmb{n}\in\mathbb{N}^{C}$, $\pmb{\mu}\in\mathbb{R}^{C}$, the function \begin{equation*} \sbrv(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu}) =\dfrac{1}{B} \sum\limits_{i=1}^{C}c_i\, \cos(n_i\, (\theta-\mu_i))+\dfrac{A}{B}\, ,\; A = \sum\limits_{i=1}^{C}\lvert c_i\rvert\, ,\; B = 2\, \pi\, \left(\sum\limits_{i=1}^{C}\lvert c_i\rvert +\sum\limits_{i:n_i = 0} c_i\right)\, , \end{equation*} is a continuous, $2\pi$-periodic probability density function. It will be referred to as \emph{spectrally bounded}. \end{definition} The cosine can be replaced by a generic $2\pi$-periodic function; the only changes in the construction will be the offset and the normalization constant. \begin{definition} \label{thm:gvM} Given a natural number $C\in\mathbb{N}$, and the vectors $\pmb{c}\in\mathbb{R}^{C}$, $\pmb{n}\in\mathbb{N}^{C}$, $\pmb{\mu}\in\mathbb{R}^C$, $\pmb{\kappa}\in\mathbb{R}_{\geq 0}^C$, the function \begin{equation*} \mvM(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu}, \pmb{\kappa}) =\dfrac{1}{B} \sum\limits_{i=1}^{C}c_i\, \dfrac{\exp(\kappa_i\cos(n_i\, (\theta-\mu_i)))}{2\, \pi\, \besselI_0(\kappa_i)}+\dfrac{A}{B}\, , \end{equation*} where \begin{equation*} A = \sum\limits_{i:c_i<0} c_i \, \dfrac{\exp(\kappa_i)}{2\, \pi\, \besselI_0(\kappa_i)}\, , \; B = \sum\limits_{i: n_i\geq 1} c_i + \sum\limits_{i: n_i = 0} c_i \dfrac{\exp(\kappa_i)}{\besselI_0(\kappa_i)}+ \sum\limits_{i:c_i<0} c_i \, \dfrac{\exp(\kappa_i)}{ \besselI_0(\kappa_i)} \, , \end{equation*} is a continuous, $2\pi$-periodic probability density function. It will be referred to as \emph{multimodal von Mises}. \end{definition} Both densities introduced previously can be thought of as functions over the unit circle.
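A quick numerical check of \cref{def:sbrv}: the sketch below (numpy, with illustrative parameters) evaluates a spectrally bounded density and verifies that it is non-negative and integrates to one over a period.

```python
import numpy as np

def sbrv(theta, c, n, mu):
    """Evaluate the spectrally bounded density: (sum_i c_i cos(n_i (theta - mu_i)) + A) / B,
    with A = sum |c_i| and B = 2*pi*(sum |c_i| + sum_{n_i=0} c_i)."""
    c, n, mu = np.asarray(c, float), np.asarray(n), np.asarray(mu, float)
    A = np.abs(c).sum()
    B = 2 * np.pi * (np.abs(c).sum() + c[n == 0].sum())
    osc = (c * np.cos(n * (np.asarray(theta, float)[..., None] - mu))).sum(-1)
    return (osc + A) / B
```

The offset $A/B$ guarantees non-negativity, and the constant $B$ is exactly the normalizer, since the oscillating terms with $n_i\neq 0$ integrate to zero over a full period.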
Hence, the very first space to be studied is $\mathbb{S}^1= \{\mathbf{x}\in\mathbb{R}^2:\lVert \mathbf{x} \rVert =1\}$ equipped with the geodesic distance. As shown in the next proposition, the geodesic distance can be computed in closed form. \begin{proposition} \label{thm:geodesic_distance_S1} Given two points $\pmb{x}, \pmb{y}\in\mathbb{S}^1$ corresponding to the angles $x, y\in[-\pi, \pi)$, their geodesic distance is equal to \begin{equation*} d(\pmb{x}, \pmb{y}) = \pi-\lvert \pi- \lvert x-y \rvert\rvert\, . \end{equation*} \end{proposition} The next proposition computes the degree of a node in a non-uniform unit circle graph. \begin{proposition} \label{thm:prob_balls_sbrv} Given a spectrally bounded probability density function as in \cref{def:sbrv}, the expected degree of a node $\theta$ in a unit circle geometric graph with neighborhood radius $\alpha$ is \begin{equation*} \deg(\theta) = \dfrac{2\, N}{B}\left(\sum\limits_{i: n_i\neq0}\dfrac{c_i}{n_i}\, \cos(n_i\, (\theta-\mu_i))\, \sin(n_i\, \alpha) + \left(\sum\limits_{i: n_i=0}c_i+A\right)\, \alpha\right)\, , \end{equation*} and the expected average degree of the whole graph is \begin{equation*} \mathbb{E}[\deg(\theta)] =\dfrac{2\, \pi \, N\, \alpha}{B^2}\left(\sum\limits_{i: n_i\neq0} \sum\limits_{j: n_i=n_j}c_ic_j\, \, \cos(n_i\, (\mu_i-\mu_j))\dfrac{\sin(n_i\, \alpha)}{n_i\, \alpha} +2\, \left(\sum\limits_{i: n_i=0}c_i+A\right)^2 \right)\, .
\end{equation*} \end{proposition} As a direct consequence, in the limit of $\alpha$ going to zero, \begin{align*} \lim_{\alpha\rightarrow 0^+}\dfrac{\mathbb{P}[\ball_\alpha(\theta)]}{2\, \alpha} &= \dfrac{1}{B}\left(\sum\limits_{i: n_i\neq0}c_i\, \cos(n_i\, (\theta-\mu_i))\, \left(\lim_{\alpha\rightarrow0^+}\dfrac{\sin(n_i\, \alpha)}{n_i\, \alpha}\right) + \left(\sum\limits_{i: n_i=0}c_i+A\right)\right)\\ &=\sbrv(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu})\, ; \end{align*} thus, for sufficiently small $\alpha$, the probability of a ball centered at $\theta$ is proportional to the density computed in $\theta$. Moreover, the error can be computed as \begin{equation*} \left\lvert \sbrv(\theta;\pmb{c}, \pmb{n}, \pmb{\mu}) - \dfrac{\mathbb{P}[\ball_\alpha(\theta)]}{2\, \alpha}\right\rvert \leq\dfrac{1}{6\, B}\left(\sum\limits_{i=1}^C n_i^2\, \lvert c_i\rvert \right) \alpha^2\, , \end{equation*} which shows that the approximation worsens the more oscillatory terms there are. In the case of the multimodal von Mises distribution, a closed formula for the probability of balls does not exist. The following proposition introduces an approximation based solely on cosine functions. \begin{proposition} \label{thm:gvM_as_sbrv} A multimodal von Mises probability density function can be approximated by a spectrally bounded one. \end{proposition} The previous result, combined with \cref{thm:prob_balls_sbrv}, gives a way to approximate the expected degree of spatial networks sampled according to a multimodal von Mises angular distribution.
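The closed-form ball probability behind \cref{thm:prob_balls_sbrv} (the expected degree divided by $N$) can be checked against numerical quadrature; the parameters below are illustrative, and the helper names are ours.

```python
import numpy as np

# illustrative parameters of a spectrally bounded angular density
c = np.array([1.0, 0.5])
n = np.array([2, 3])
mu = np.array([0.3, -1.0])
A = np.abs(c).sum()
B = 2 * np.pi * (np.abs(c).sum() + c[n == 0].sum())

def sbrv(theta):
    """Evaluate the spectrally bounded density at theta (scalar or array)."""
    th = np.asarray(theta, float)[..., None]
    return ((c * np.cos(n * (th - mu))).sum(-1) + A) / B

def ball_probability(theta_c, alpha):
    """Closed-form P[B_alpha(theta_c)], i.e. the expected degree divided by N."""
    nz = n != 0
    osc = (c[nz] / n[nz] * np.cos(n[nz] * (theta_c - mu[nz]))
           * np.sin(n[nz] * alpha)).sum()
    return (2 / B) * (osc + (c[~nz].sum() + A) * alpha)
```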
However, the computation is straightforward when $\pmb{n}$ is the constant vector $n\,\pmb{1}$, since the product of two von Mises pdfs is the kernel of a von Mises pdf \begin{align*} &\dfrac{\exp(\pmb{\kappa}_1 \cos(n(\theta-\pmb{\mu}_1)))}{2\, \pi\, \besselI_0(\pmb{\kappa}_1)} \dfrac{\exp(\pmb{\kappa}_2 \cos(n(\theta-\pmb{\mu}_2)))}{2\, \pi\, \besselI_0(\pmb{\kappa}_2)}\\ &= \dfrac{\exp\left(\sqrt{\pmb{\kappa}_1^2 +\pmb{\kappa}_2^2+2\pmb{\kappa}_1\pmb{\kappa}_2\cos(n(\pmb{\mu}_1-\pmb{\mu}_2))} \cos\left(n(\theta-\varphi)\right)\right)}{4\, \pi^2\, \besselI_0(\pmb{\kappa}_1) \besselI_0(\pmb{\kappa}_2)}\, , \end{align*} where \begin{equation*} \varphi = n^{-1}\, \arctan\left(\dfrac{\pmb{\kappa}_1\sin(n\, \pmb{\mu}_1)+\pmb{\kappa}_2\sin( n\, \pmb{\mu}_2)}{\pmb{\kappa}_1\cos(n\, \pmb{\mu}_1)+\pmb{\kappa}_2\cos(n\, \pmb{\mu}_2)}\right)\, . \end{equation*} The unit circle model is preparatory to the study of more complex spaces, for instance, the unit disk $\mathbb{D}=\{\mathbf{x}\in\mathbb{R}^2:\lVert \mathbf{x} \rVert \leq 1\}$ equipped with geodesic distance, as in \cref{tab:euclidean_spherical_hyperbolic}. \begin{proposition} Given a spectrally bounded angular distribution as in \cref{def:sbrv}, the degree of a node $(r, \theta)$ in a unit disk geometric graph with neighborhood radius $\alpha$ is \begin{align*} \deg(r, \theta) \approx 2\, \pi \, \alpha^2\, N \sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu})\, , \end{align*} and the average degree of the whole network is \begin{align*} \mathbb{E}[\deg(r, \theta)] \approx \dfrac{2\, \pi^2\, \alpha^2\, N}{B^2}\left(\sum\limits_{i: n_i\neq0} \sum\limits_{j: n_i=n_j}c_ic_j\, \, \cos(n_i\, (\mu_i-\mu_j)) +2\, \left(\sum\limits_{i: n_i=0}c_i+A\right)^2\right)\, . \end{align*} \label{thm:prob_balls_unit_disk} \end{proposition} \Cref{fig:NuE} shows some examples of non-uniform sampling of the unit disk.
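The merging identity above is easy to verify numerically; the sketch below uses arbitrary parameters, omits the Bessel normalizations (they cancel on both sides), and uses the quadrant-aware `arctan2` to select the correct branch of the arctangent.

```python
import numpy as np

# two von Mises kernels with common frequency n (parameters are illustrative)
k1, k2, m1, m2, n = 1.3, 0.7, 0.4, -1.1, 3

# merged concentration and phase, as in the displayed identity
k = np.sqrt(k1**2 + k2**2 + 2 * k1 * k2 * np.cos(n * (m1 - m2)))
phi = np.arctan2(k1 * np.sin(n * m1) + k2 * np.sin(n * m2),
                 k1 * np.cos(n * m1) + k2 * np.cos(n * m2)) / n

def lhs(theta):
    """Product of the two unnormalized von Mises kernels."""
    return np.exp(k1 * np.cos(n * (theta - m1)) + k2 * np.cos(n * (theta - m2)))

def rhs(theta):
    """Single unnormalized von Mises kernel with merged parameters."""
    return np.exp(k * np.cos(n * (theta - phi)))
```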
The last example will be the hyperbolic disk with radius $R\gg 1$, equipped with geodesic distance as in \cref{tab:euclidean_spherical_hyperbolic}. \begin{proposition} \label{thm:prob_balls_hyperbolic} Given a spectrally bounded angular distribution as in \cref{def:sbrv}, the degree of a node $(r, \theta)$ in a hyperbolic geometric graph with neighborhood radius $\alpha$ is \begin{equation*} \deg(r, \theta) \approx 8 \, N\, e^{\frac{\alpha-R-r}{2}}\, \sbrv(\theta;\pmb{c}, \pmb{n}, \pmb{\mu})\, , \end{equation*} and the average degree of the whole network is $\mathcal{O}(N\, e^{\frac{\alpha-2 R}{2}})$. \end{proposition} The proof can be found in \cref{sec:proofs}. The computed approximation is in line with the findings of \citet{krioukovHyperbolicGeometryComplex2010}, where a closed formula for the uniform case is provided when $\alpha=R$. To the best of our knowledge, this is the first work that considers $\alpha\neq R$. Examples of non-uniform sampling of the hyperbolic disk are shown in \cref{fig:NuH}. \begin{figure} \centering \begin{subfigure}{.9\linewidth} \includegraphics[width=.3\linewidth]{img/R2_data1.pdf} \includegraphics[width=.3\linewidth]{img/R2_data7.pdf} \includegraphics[width=.3\linewidth]{img/R2_data9.pdf} \caption{Non-uniform sampling of the euclidean disk.} \label{fig:NuE} \end{subfigure} \begin{subfigure}{.9\linewidth} \includegraphics[width=.3\linewidth]{img/H2_data1.pdf} \includegraphics[width=.3\linewidth]{img/H2_data7.pdf} \includegraphics[width=.3\linewidth]{img/H2_data9.pdf} \caption{Non-uniform sampling of the hyperbolic disk.} \label{fig:NuH} \end{subfigure} \caption{Examples of non-uniform (a) euclidean and (b) hyperbolic sampling.
The orange curve represents the angular probability density function, conveniently rescaled for visibility purposes.} \end{figure} \section{Retrieving and Building GSOs} \label{sec:R&B} In this section, we first show how to retrieve the usual definitions of graph shift operators from \cref{def:nug_gso}, and then how \cref{def:nug_gso} can be used to create novel GSOs. For simplicity, for both goals we suppose uniform sampling $\pmb{\rho}=\mathbf{1}$; \cref{eq:nug_gso} can then be rewritten as \begin{equation} \label{eq:nug_gso_uniform_sampling} \begin{aligned} \mathbf{L}_{\mathcal{G},\pmb{1}} &= N^{-1}\diag\left(m^{(1)}\left(N^{-1}\mathbf{d}\right)\right) \mathbf{A} \diag\left(m^{(2)}\left(N^{-1}\mathbf{d}\right)\right)\\ &- N^{-1}\diag\left(\diag\left(m^{(3)}\left(N^{-1}\mathbf{d}\right)\right) \mathbf{A} \diag\left(m^{(4)}\left(N^{-1}\mathbf{d}\right)\right) \mathbf{1} \right) \end{aligned} \end{equation} where $\mathbf{A}$ is the adjacency matrix and $\mathbf{d}$ is the degree vector. \Cref{tab:usual_GSO} shows which choices of $\{m^{(i)}\}_{i=1}^4$ correspond to which graph Laplacian. A question that may arise is whether the innermost $\diag(\cdot)$ in \cref{eq:nug_gso_uniform_sampling} can be factored out of the outermost one. As shown in the next proposition, this is not possible in general. \begin{proposition} \label{thm:diag_commuting} Let $\mathbf{A}\in\R^{N\times N}$, $\mathbf{A}=\mathbf{A}^{\transpose}$; let $\mathbf{v}\in\mathbb{R}^N_{\geq 0}$ and $\mathbf{V}=\diag(\mathbf{v})$. Then \begin{equation*} \diag(\mathbf{V}\mathbf{A}\mathbf{1}) = \mathbf{V}\, \diag(\mathbf{A}\mathbf{1}) =\diag(\mathbf{A}\mathbf{1})\,\mathbf{V}\, . \end{equation*} Moreover, \begin{equation*} \diag(\mathbf{A}\mathbf{V}\mathbf{1}) = \diag(\mathbf{V}\mathbf{A}\mathbf{1}) \iff A_{i, j}=0\; \forall\; i, j = 1, \dots, N \, : \;v_i\neq v_j\, . \end{equation*} \end{proposition} The proof of the statement can be found in \cref{sec:proofs}.
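The uniform-sampling template of \cref{eq:nug_gso_uniform_sampling} can be coded directly; the sketch below (numpy; function names are ours) recovers, for instance, the combinatorial and random walk Laplacians of \cref{tab:usual_GSO}, up to the $N^{-1}$ scaling and the kernel-minus-diagonal sign convention of the template.

```python
import numpy as np

def nug_gso_uniform(A, m1, m2, m3, m4):
    """Uniform-sampling GSO template: kernel part minus diagonal part,
    each built from the degree vector d via the maps m^(1..4) and scaled by 1/N."""
    N = A.shape[0]
    d = A.sum(axis=1)
    D1, D2, D3, D4 = (np.diag(m(d / N)) for m in (m1, m2, m3, m4))
    return (D1 @ A @ D2 - np.diag(D3 @ A @ D4 @ np.ones(N))) / N
```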
An important consequence of \cref{thm:diag_commuting} is that the graph Laplacian \begin{equation} \label{eq:new_symmetric_normalized_graph_laplacian} \mathbf{L} = \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}-\diag\left(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right)\, , \; \mathbf{D} = \diag(\mathbf{d})\, , \end{equation} obtained with $m^{(i)}(x)=x^{-\frac{1}{2}}$ for every $i\in\{1, \dots, 4\}$, is in general different from the symmetric normalized Laplacian, since \begin{equation*} \mathbf{L} = \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}-\diag\left(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right) \neq \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}-\mathbf{D}^{-\frac{1}{2}}\diag\left(\mathbf{A}\mathbf{1}\right)\mathbf{D}^{-\frac{1}{2}} =\mathbf{L}_{sn}\, . \end{equation*} In light of \cref{thm:diag_commuting}, the two Laplacians are equivalent if every node is connected only to nodes with the same degree, e.g., if the graph is $k$-regular. The difference between the two Laplacians can be better seen by studying their spectra. The next proposition introduces an upper bound on the eigenvalues of the Laplacian in \cref{eq:new_symmetric_normalized_graph_laplacian}. \begin{proposition} \label{thm:eigenvalues_symmetric_normalized_k_laplacian} Let $\mathcal{G} =(\mathcal{V}, \mathcal{E})$ be an undirected graph with adjacency matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$ and degree matrix $\mathbf{D}=\diag(\mathbf{A}\mathbf{1})$. Let $\lambda$ be an eigenvalue of the graph Laplacian \begin{equation*} \mathbf{L} = \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}-\diag\left(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right)\, . \end{equation*} Then $\lvert \lambda \rvert\leq 2\sqrt{N}$. \end{proposition} The proof of the proposition can be found in \cref{sec:proofs}.
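Both the bound of \cref{thm:eigenvalues_symmetric_normalized_k_laplacian} and the failure of the classical bound $2$ can be observed numerically on a star graph (a minimal numpy check; the function name is ours).

```python
import numpy as np

def new_laplacian(A):
    """Laplacian D^{-1/2} A D^{-1/2} - diag(D^{-1/2} A D^{-1/2} 1),
    i.e. the GSO obtained with m^(i)(x) = x^{-1/2} for all i."""
    d = A.sum(axis=1)
    Dm = np.diag(d ** -0.5)
    S = Dm @ A @ Dm
    return S - np.diag(S @ np.ones(len(A)))
```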
It is well known that the spectral radius of the symmetric normalized Laplacian is less than or equal to $2$ \citep{chungSpectralGraphTheory1997b}, with equality holding for bipartite graphs. However, this is not the case for the Laplacian in \cref{eq:new_symmetric_normalized_graph_laplacian}, as shown in the next example. \begin{example}[Complete Bipartite Graph] Consider the complete bipartite graph with $n$ nodes in the first part and $m\geq n$ nodes in the second part. Its adjacency and degree matrices are \begin{equation*} \mathbf{A} = \begin{pmatrix} \mathbf{0}_{n\times n} & \mathbf{1}_{n\times m} \\ \mathbf{1}_{m\times n} & \mathbf{0}_{m\times m} \end{pmatrix}\, , \; \mathbf{D} = \begin{pmatrix} m \mathbf{I}_{n\times n} & \phantom{n}\mathbf{0}_{n\times m} \\ \phantom{m}\mathbf{0}_{m\times n} & n\mathbf{I}_{m\times m} \end{pmatrix}\, . \end{equation*} A simple computation leads to \begin{equation*} \mathbf{L}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}-\diag\left(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right) =\begin{pmatrix} -m^{\frac{1}{2}}{n}^{-\frac{1}{2}}\mathbf{I}_{n\times n} & (n m)^{-\frac{1}{2}} \mathbf{1}_{n\times m}\\ (n m)^{-\frac{1}{2}} \mathbf{1}_{m\times n}& -n^{\frac{1}{2}}{m}^{-\frac{1}{2}}\mathbf{I}_{m\times m} \end{pmatrix}\, . \end{equation*} It can be noted that $\mathbf{L}$ has the null eigenvalue $\lambda_1 = 0$, corresponding to the constant eigenvector $\mathbf{1}_{n+m}$. The vector $\mathbf{v}_i = -\mathbf{e}_1+\mathbf{e}_i$, $i\in\{2, \dots, n\}$, is an eigenvector with eigenvalue $\lambda_2 = -\sqrt{m/n}$, whose multiplicity is $n-1$. Analogously, $\mathbf{v}_i = -\mathbf{e}_{n+1}+\mathbf{e}_{i+1}$, $i\in\{n+1, \dots, n+m-1\}$, is an eigenvector with eigenvalue $\lambda_3 = -\sqrt{n/m}$, whose multiplicity is $m-1$. Finally, the vector $\mathbf{v}_{n+m} = [-m/n\mathbf{1}_{n}^{\transpose}, \mathbf{1}_{m}^{\transpose}]^{\transpose}$ is an eigenvector with eigenvalue $\lambda_4 = \lambda_2+\lambda_3$.
Therefore, the spectral radius of $\mathbf{L}$ is \begin{equation*} \lvert \lambda_4 \rvert = \dfrac{m+n}{\sqrt{m\, n}}\, . \end{equation*} In the case of a balanced graph, $n=m$ implies that the spectral radius is $2$. In the case of a star graph, $n = 1$ and $\lvert \lambda_4 \rvert = \mathcal{O}(\sqrt{m})$ as $m\rightarrow \infty$; therefore, the bound in \cref{thm:eigenvalues_symmetric_normalized_k_laplacian} is asymptotically tight. \end{example} \begin{table} \centering \caption{Usual graph shift operators as metric-probability Laplacians.} \label{tab:usual_GSO} \begin{tabular}{ccccc} \toprule Graph Shift Operator & $m^{(1)}(x)$ & $m^{(2)}(x)$ & $m^{(3)}(x)$ & $m^{(4)}(x)$ \\ \midrule Adjacency & $1(x)$ & $1(x)$ & $0(x)$ & $0(x)$ \\ Combinatorial Laplacian & $1(x)$ & $1(x)$ & $1(x)$ & $1(x)$ \\ \makecell{Signless Laplacian \\ \citep{cvetkovicSpectralTheoryGraphs2009a}} & $1(x)$ & $1(x)$ & $-1(x)$ & $1(x)$ \\ Random walk Laplacian& $x^{-1}$ & $1(x)$ & $x^{-1}$ & $1(x)$\\ Right normalized Laplacian & $1(x)$ & $x^{-1}$ & $x^{-1}$ & $1(x)$\\ \makecell{Symmetric normalized adjacency \\ \citep{kipfSemiSupervisedClassificationGraph2017}} & $x^{-\frac{1}{2}}$ & $x^{-\frac{1}{2}}$ & $0(x)$ & $0(x)$\\ Symmetric normalized Laplacian & $x^{-\frac{1}{2}}$ & $x^{-\frac{1}{2}}$ & $x^{-1}$ & $1(x)$\\ \Cref{eq:new_symmetric_normalized_graph_laplacian} & $x^{-\frac{1}{2}}$ & $x^{-\frac{1}{2}}$ & $x^{-\frac{1}{2}}$ & $x^{-\frac{1}{2}}$\\ \bottomrule \end{tabular} \end{table} \section{Proofs} \label{sec:proofs} \begin{proof}[Proof of \cref{thm:convergence} and concentration of error] Let $\mathbf{x} = \{x_i\}_{i=1}^N$ be an i.i.d. random sample from $\rho$. Let $K$ and $m$ be the kernel and diagonal parts corresponding to the metric-probability Laplacian $\mathcal{L}_{\mathcal{N}}$. Let $\mathbf{L}$ and $\mathbf{u}$ be given by \begin{equation*} L_{i, j} = N^{-1} K(x_i, x_j)\rho(x_j)^{-1}-m(x_i)\, \delta_{i, j}\, , \; u_i = u(x_i)\, .
\end{equation*} Note that the non-uniform geometric GSO $\mathbf{L}_{\mathcal{G},\pmb{\rho}}$ based on the graph $\mathcal{G}$, which is randomly sampled from $\mathcal{S}$ with neighborhood model $\mathcal{N}$ via the sample points $\mathbf{x}$, is exactly equal to $\mathbf{L}$. Conditioned on $x_i = x$, the expected value is \begin{align*} \mathbb{E}\left(\mathbf{L}\, \mathbf{u}\right)_i &=N^{-1}\sum\limits_{j=1}^N \mathbb{E}\left( K(x, x_j)\, \rho(x_j)^{-1}u(x_j)\right) - m(x)u(x) = \mathcal{L}_{\mathcal{N}} u(x)\, . \end{align*} Since the random variables $\{x_j\}_{j=1}^N$ are i.i.d. copies of $y$, the random variables \begin{equation*} \left\{K(x, x_j)\rho(x_j)^{-1}\, u(x_j)\right\}_{j=1}^N \end{equation*} are also i.i.d.; hence, \begin{align*} \var\left(\mathbf{L}\mathbf{u}\right)_i &=\var\left(N^{-1}\sum\limits_{j=1}^N K(x, x_j)\rho(x_j)^{-1} u(x_j) - m(x) \, u(x)\right) \\ &= N^{-1}\var\left(K(x, y)\rho(y)^{-1} u(y)\right)\\ &\leq N^{-1} \mathbb{E}\left(\left\lvert K(x, y)\rho(y)^{-1} u(y) \right\rvert^2\right)\\ &= N^{-1}\int\limits_{\mathcal{S}} \left\lvert K(x, y)\rho(y)^{-1}u(y)\right\rvert^2\rho(y)\dif\mu(y)\\ &\leq N^{-1}\, \left\lVert K(x, \cdot)^2\rho(\cdot)^{-1} \right\rVert_{\lfun^{\infty}(\mathcal{S})}\lVert u \rVert_{\lfun^{2}(\mathcal{S})}^2\, , \end{align*} which proves \cref{eq:MC2}. Next, we prove the concentration of error result. We know that there exist $a, b\in\mathbb{R}$, $a<b$, such that almost everywhere $K(x, x_j)\rho(x_j)^{-1}u(x_j)\in[a, b]$, since $K$, $1/\rho$, and $u$ are essentially bounded. By Hoeffding's inequality, for $t>0$, \begin{equation*} \prob\left[\lvert (\mathbf{L}\mathbf{u})_i- \mathcal{L}_{\mathcal{N}} u(x)\rvert \geq t\right] \leq 2\, \exp\left(-\dfrac{2\, N\, t^2}{(b-a)^2} \right)\, .
\end{equation*} Setting \begin{equation*} \dfrac{p}{N}=2\, \exp\left(-\dfrac{2\, N\, t^2}{(b-a)^2} \right)\, , \end{equation*} and solving for $t$, we obtain that for every node there is an event with probability at least $1-p/N$ such that \begin{equation*} \lvert (\mathbf{L}\mathbf{u})_i- \mathcal{L}_{\mathcal{N}} u(x)\rvert\leq 2^{-\frac{1}{2}} (b-a)N^{-\frac{1}{2}}\sqrt{\log(2\, N\, p^{-1})}\, . \end{equation*} We then intersect all of these events to obtain an event of probability at least $1-p$ that satisfies \cref{eq:Hoef}. \end{proof} \begin{proof}[Proof of \cref{thm:integral_mean_value}] By hypothesis, there exist $m_x$, $M_x>0$ such that $m_x\leq\rho(y)\leq M_x$ for all $y\in\mathcal{N}(x)$. Therefore, \begin{align*} M_x^{-1} \int\limits_{\mathcal{N}(x)}\dif \nu(y) \leq \int\limits_{\mathcal{N}(x)} \dif\mu(y) = \int\limits_{\mathcal{N}(x)} \rho(y)^{-1} \dif \nu(y) \leq m_x^{-1} \int\limits_{\mathcal{N}(x)}\dif \nu(y)\, , \end{align*} from which \begin{align*} m_x \leq \dfrac{\int_{\mathcal{N}(x)} \dif\nu(y)}{\int_{\mathcal{N}(x)}\dif \mu(y)} \leq M_x\, . \end{align*} By the Intermediate Value Theorem, there exists $c_x \in \mathcal{N}(x)$ such that \begin{equation*} \rho(c_x) = \dfrac{\int_{\mathcal{N}(x)} \dif\nu(y)}{\int_{\mathcal{N}(x)}\dif \mu(y)}\, , \end{equation*} from which the thesis follows.
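The squeeze used above can be checked numerically on a toy example. The following Python sketch (ours, with an arbitrary positive density; it is not part of the paper) takes $\mathcal{N}(x)=[0,1]$ with $\mu$ the Lebesgue measure and $\dif\nu = \rho\,\dif\mu$, and verifies that the ratio of the two integrals is a $\mu$-weighted mean of $\rho$, hence lies between its extrema, so the Intermediate Value Theorem applies.

```python
import math

# Toy check of the mean-value squeeze: on N(x) = [0, 1] with mu = Lebesgue
# and d(nu) = rho d(mu), the ratio  int d(nu) / int d(mu)  must lie between
# min rho and max rho.  The density below is an arbitrary illustrative choice.
def rho(y):
    return 2.0 + math.sin(3.0 * y)  # any continuous positive density

n = 100_000
ys = [(i + 0.5) / n for i in range(n)]   # midpoint quadrature nodes on [0, 1]
int_mu = 1.0                             # int_0^1 d(mu) = 1
int_nu = sum(rho(y) for y in ys) / n     # int_0^1 rho d(mu)
ratio = int_nu / int_mu

m_x = min(rho(y) for y in ys)
M_x = max(rho(y) for y in ys)
assert m_x <= ratio <= M_x               # the squeeze in the proof
```

The exact value of the ratio here is $2 + (1-\cos 3)/3$, and the midpoint rule recovers it to high accuracy.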
\end{proof} \begin{proof}[Proof of \cref{thm:geodesic_distance_S1}] Consider the map \begin{align*} \varphi : [-\pi, \pi) \rightarrow\mathbb{S}^1 \, ,\; \theta \mapsto\begin{pmatrix} \cos(\theta),& \sin(\theta) \end{pmatrix}^{\transpose} \, , \end{align*} and the angles $x, y \in [-\pi, \pi)$ such that $\varphi(x) = \pmb{x}$ and $\varphi(y) = \pmb{y}$; it holds that \begin{align*} \mathring{d}(\pmb{x}, \pmb{y}) &= \arccos(\pmb{x}^{\transpose}\pmb{y}) = \arccos(\cos(x)\cos(y)+\sin(x)\sin(y)) \\ &= \arccos(\cos(x-y))\\ &= x-y+2\, k\, \pi\, , \quad k\in\mathbb{Z} \text{ chosen so that } x-y+2\, k\, \pi\in[0, \pi]\, ,\\ &=\begin{dcases} 2\, \pi+x-y\, , & x-y\in[-2\, \pi, -\pi) \\ y-x\, , & x-y\in[-\pi, 0) \\ x-y\, , & x-y\in[0, \pi) \\ 2\, \pi+y-x\, , & x-y\in[\pi, 2\, \pi) \end{dcases}\\ &=\begin{dcases} 2\, \pi-\lvert x-y \rvert\, , & \lvert x-y \rvert \geq\pi\\ \lvert x-y\rvert\, , & \lvert x-y \rvert <\pi \end{dcases}\\ &= \pi-\lvert \pi- \lvert x-y \rvert \rvert\, . \end{align*} \end{proof} \begin{proof}[Proof of \cref{thm:prob_balls_sbrv}] The expected degree of a node $\theta$ is the probability of the ball centered at $\theta$ times the size $N$ of the sample. The probability of a ball can be computed by noting that \begin{equation*} \int\limits_{\theta_c-\alpha}^{\theta_c+\alpha}\cos(n_i (\theta-\mu_i))\dif \theta = \begin{dcases} 2\, \alpha \, ,& n_i = 0\\ \dfrac{2\, \cos(n_i (\theta_c-\mu_i))\sin(n_i\, \alpha)}{n_i}\, , &\text{otherwise} \end{dcases}\, . \end{equation*} Therefore, the average degree can be computed as \begin{equation*} \overline{d} = N\, \int\limits_{-\pi}^\pi \mathbb{P}[\ball_\alpha(\theta)] \sbrv(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu})\dif \theta\, .
\end{equation*} The inspection of $\sbrv(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu})$ and $\mathbb{P}[\ball_\alpha(\theta)]$ shows that the only terms surviving integration are the constant term and the products of cosines with the same frequency, \begin{equation*} \int\limits_{-\pi}^{\pi} \cos(n_i(\theta-\mu_i)) \cos(n_j(\theta-\mu_j)) \dif \theta = \begin{dcases} \pi \cos(n_i(\mu_j-\mu_i)), & n_i=n_j\\ 0, & n_i\neq n_j \end{dcases} \end{equation*} from which the thesis follows. \end{proof} \begin{proof}[Proof of \cref{thm:gvM_as_sbrv}] Using the Taylor expansion, it holds that \begin{align*} \exp(\kappa_i \cos(n_i(\theta-\mu_i))) &= \sum\limits_{m=0}^\infty \dfrac{\kappa_i^m}{m!} \cos(n_i(\theta-\mu_i))^m \\ &= 1 + \sum\limits_{m=1}^\infty \dfrac{\kappa_i^{2m}}{(2m)!} \cos(n_i(\theta-\mu_i))^{2m} \\ &+ \sum\limits_{m=1}^\infty \dfrac{\kappa_i^{2m-1}}{(2m-1)!} \cos(n_i(\theta-\mu_i))^{2m-1}\, . \end{align*} A first approximation can be made by noting that $\cos(x)^{2m}\leq \cos(x)^2$ and $\cos(x)^{2m-1}\approx \cos(x)$ for all $m\geq1$, obtaining \begin{align*} \exp(\kappa_i \cos(n_i(\theta-\mu_i))) \approx 1 + (\cosh(\kappa_i)-1) \cos(n_i(\theta-\mu_i))^2 + \sinh(\kappa_i) \cos(n_i(\theta-\mu_i))\, . \end{align*} This approximation deteriorates quickly as $\kappa_i$ increases. A more refined approximation is obtained by considering the powers of cosine with the largest coefficients in the Taylor expansion. Using Stirling's approximation of the factorial, it can be shown that \begin{align*} \dfrac{\kappa_i^m}{m!} \approx \dfrac{1}{\sqrt{2\, \pi\, m}}\left(\dfrac{\kappa_i\, e}{m}\right)^m\, .
\end{align*} In order to make the computation easier, suppose $\kappa_i$ is an integer. When $m=\kappa_i+1$, it holds that \begin{align*} \dfrac{1}{\sqrt{2\, \pi\, (\kappa_i+1)}}\left(\dfrac{\kappa_i\, e}{\kappa_i+1}\right)^{\kappa_i+1} = \dfrac{e^{\kappa_i+1}}{\sqrt{2\, \pi\, (\kappa_i+1)}}\left(\dfrac{\kappa_i}{\kappa_i+1}\right)^{\kappa_i+1} < \dfrac{e^{\kappa_i}}{\sqrt{2\, \pi\, (\kappa_i+1)}} < \dfrac{e^{\kappa_i}}{\sqrt{2\, \pi\, \kappa_i}}\, , \end{align*} where the first inequality is justified by the fact that $(\kappa_i/(\kappa_i+1))^{\kappa_i+1}$ is an increasing sequence that tends to $1/e$. The previous formula shows that the coefficient with $m=\kappa_i+1$ is always smaller than the coefficient with $m=\kappa_i$. The same reasoning can be applied to all the coefficients with $m>\kappa_i$. Suppose now $\kappa_i\geq3$; if $m\leq \kappa_i-2$ the previous reasoning holds. A peculiarity occurs when $m=\kappa_i-1$: \begin{align*} \dfrac{1}{\sqrt{2\, \pi\, (\kappa_i-1)}}\left(\dfrac{\kappa_i\, e}{\kappa_i-1}\right)^{\kappa_i-1} = \dfrac{e^{\kappa_i-1}}{\sqrt{2\, \pi\, \kappa_i}}\left(\dfrac{\kappa_i}{\kappa_i-1}\right)^{\kappa_i-\frac{1}{2}} > \dfrac{e^{\kappa_i}}{\sqrt{2\, \pi\, \kappa_i}}\, , \end{align*} because the sequence $(\kappa_i/(\kappa_i-1))^{\kappa_i-\frac{1}{2}}$ is decreasing; therefore $m=\kappa_i-1$ is the point of maximum, and $m=\kappa_i$ gives the second largest value.
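The peak location of the Taylor coefficients can also be verified in exact arithmetic. The following Python sketch (ours, under the standing assumption that $\kappa_i$ is a positive integer) confirms that the coefficients $\kappa_i^m/m!$ at $m=\kappa_i-1$ and $m=\kappa_i$ dominate all others; in exact arithmetic these two values actually coincide, consistent with the approximate Stirling comparison above.

```python
from fractions import Fraction
from math import factorial

# Exact check of the claim that the Taylor coefficients kappa^m / m! peak
# around m = kappa - 1: for integer kappa, the coefficients at m = kappa - 1
# and m = kappa coincide and dominate all others.
def coeff(kappa, m):
    return Fraction(kappa) ** m / factorial(m)   # exact rational value

for kappa in range(3, 12):
    cs = [coeff(kappa, m) for m in range(4 * kappa)]
    peak = max(cs)
    # the two largest coefficients are tied at m = kappa - 1 and m = kappa
    assert cs[kappa - 1] == cs[kappa] == peak
```

The tie is immediate from $\kappa^{\kappa}/\kappa! = \kappa^{\kappa-1}/(\kappa-1)!$; the check above also confirms that no other index attains the maximum.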
Therefore, the following approximation for $\exp(\kappa_i \cos(n_i(\theta-\mu_i)))$ holds: \begin{align*} \begin{cases} 1 + (\cosh(\kappa_i)-1) \cos(n_i(\theta-\mu_i))^{2} + \sinh(\kappa_i) \cos(n_i(\theta-\mu_i))\, , & \kappa_i \leq 1\\ 1 + (\cosh(\kappa_i)-1) \cos(n_i(\theta-\mu_i))^{\kappa_i} + \sinh(\kappa_i) \cos(n_i(\theta-\mu_i))^{\kappa_i-1}\, , & \kappa_i\geq 1, \text{ even}\\ 1 + (\cosh(\kappa_i)-1) \cos(n_i(\theta-\mu_i))^{\kappa_i-1} + \sinh(\kappa_i) \cos(n_i(\theta-\mu_i))^{\kappa_i}\, , & \kappa_i\geq 1, \text{ odd} \end{cases} \end{align*} The thesis follows from the equality \begin{equation*} \cos(n_i(\theta-\mu_i))^{\kappa_i} = \dfrac{1}{2^{\kappa_i}}\sum\limits_{k=0}^{\kappa_i} \binom{\kappa_i}{k}\cos((2\, k-\kappa_i)n_i(\theta-\mu_i))\, . \end{equation*} \end{proof} \begin{proof}[Proof of \cref{thm:prob_balls_unit_disk}] The domain of integration can be parametrized as $d_\mathbb{D}((r, \theta), (r_c, \theta_c))\leq\alpha$, leading to \begin{equation} \theta\in\left(\theta_c-\arccos\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right), \theta_c+\arccos\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right)\right). \label{eq:disk_parametrization} \end{equation} Three cases must be discussed: (1) $0\leq r_c-\alpha\leq r_c+\alpha\leq1$, (2) $r_c-\alpha<0$, (3) $r_c+\alpha>1$. In scenario (1), the ball $\ball_\alpha(r_c, \theta_c)$ is contained in $\mathbb{D}$.
The probability of the ball can be computed as \begin{equation} \begin{aligned} \prob\left[\ball_\alpha(r_c, \theta_c)\right] &=2\, \int\limits_{r_c-\alpha}^{r_c+\alpha} r\, \int\limits_{\theta_c-\theta_r}^{\theta_c+\theta_r}\sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu}) \, \dif \theta\, \dif r \\ & =\dfrac{4}{B}\sum\limits_{i: n_i\neq0}\dfrac{c_i}{n_i}\, \cos(n_i\, (\theta_c-\mu_i))\, \int\limits_{r_c-\alpha}^{r_c+\alpha} r \, \sin\left(n_i\, \arccos\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right)\right)\dif r\\ &+\dfrac{4}{B} \left(\sum\limits_{i: n_i=0}c_i+A\right)\int\limits_{r_c-\alpha}^{r_c+\alpha} r\arccos\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right) \dif r\, , \end{aligned} \label{eq:prob_ball_scenario1} \end{equation} where the last equality comes from \cref{thm:prob_balls_sbrv}. For simplicity, define \begin{equation*} \begin{aligned} f_{n_i-1}(r) &= r\, \sqrt{1-\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right)^2} \chebU_{n_i-1}\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right)\, , \\ g(r) &= r\, \arccos\left(\dfrac{r^2+r_c^2-\alpha^2}{2\, r\, r_c}\right)\, , \end{aligned} \end{equation*} where $\chebU_{k}$ is the $k$-th Chebyshev polynomial of the second kind. It is worth noting that $f_{n_i-1}(r_c+\alpha) = 0$, $f_{n_i-1}(r_c-\alpha) = 0$, $f_{n_i-1}(\alpha-r_c) = 0$ and \begin{align*} f_{n_i-1}(r_c) = \alpha\, \sqrt{1-\left(\dfrac{\alpha}{2\, r_c}\right)^2} \, \chebU_{n_i-1}\left(1-\dfrac{\alpha^2}{2\, r_c^2}\right)\, ,\;f_{n_i-1}(\alpha) = \alpha\, \sqrt{1-\left(\dfrac{r_c}{2\, \alpha}\right)^2} \, \chebU_{n_i-1}\left(\dfrac{r_c}{2\, \alpha}\right)\, , \end{align*} while $g(r_c+\alpha) = 0$, $g(r_c-\alpha) = 0$, $g(\alpha-r_c) = (\alpha-r_c)\pi$ and \begin{align*} g(r_c) = r_c \arccos\left(1-\dfrac{\alpha^2}{2\, r_c^2}\right)\, ,\; g(\alpha) = \alpha\, \arccos\left(\dfrac{r_c}{2\, \alpha}\right)\, .
\end{align*} The integrals in \cref{eq:prob_ball_scenario1} can be approximated by the semi-area of an ellipse having $\alpha$ and $f_{n_i-1}(r_c)$ (respectively $g(r_c)$) as semi-axes, \begin{equation} \int\limits_{r_c-\alpha}^{r_c+\alpha} f_{n_i-1}(r) \dif r \approx \dfrac{\pi}{2}\alpha f_{n_i-1}(r_c)\, , \quad \int\limits_{r_c-\alpha}^{r_c+\alpha} g(r) \dif r \approx \dfrac{\pi}{2}\alpha g(r_c)\, , \label{eq:integral_approx_ellipse} \end{equation} which can be seen as a modified version of Simpson's rule, since the latter would lead to a coefficient of $4/3$ instead of $\pi/2$. A comparison between the two methods is shown in \cref{fig:simpson_elliptic}. In scenario (2) the domain of integration contains the origin, and the argument of $\arccos$ in \cref{eq:disk_parametrization} may not be well defined. The singularity can be removed by decomposing the domain of integration as the union of a disk of radius $\alpha-r_c$ around the origin and the remaining annulus. Hence \begin{equation*} \begin{aligned} \prob\left[\ball_\alpha(r_c, \theta_c)\right] &= 2\, \int\limits_{0}^{\alpha-r_c} r\, \int\limits_{-\pi}^{\pi}\sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu}) \, \dif \theta\, \dif r +2\, \int\limits_{\alpha-r_c}^{\alpha+r_c} r\, \int\limits_{\theta_c-\theta_r}^{\theta_c+\theta_r}\sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu}) \, \dif \theta\, \dif r \\ & = (\alpha-r_c)^2+2\, \int\limits_{\alpha-r_c}^{\alpha+r_c} r\, \int\limits_{\theta_c-\theta_r}^{\theta_c+\theta_r}\sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu}) \, \dif \theta\, \dif r\, . \end{aligned} \end{equation*} The same reasoning as before leads to the approximations \begin{equation*} \int\limits_{\alpha-r_c}^{\alpha+r_c} f_{n_i-1}(r) \dif r \approx \dfrac{\pi}{2}r_c\, f_{n_i-1}(\alpha)\, , \; \int\limits_{\alpha-r_c}^{\alpha+r_c} g(r) \dif r \approx \dfrac{\pi}{2}r_c \, g(\alpha)\, . \end{equation*} In scenario (3) the domain of integration partially lies outside $\mathbb{D}$.
Hence \begin{equation*} \prob\left[\ball_\alpha(r_c, \theta_c)\right] = 2\, \int\limits_{r_c-\alpha}^{1} r\, \int\limits_{\theta_c-\theta_r}^{\theta_c+\theta_r}\sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu}) \, \dif \theta\, \dif r\, , \end{equation*} which can be approximated as \begin{equation*} \begin{aligned} \int\limits_{r_c-\alpha}^{1} f_{n_i-1}(r) \dif r \approx \dfrac{f_{n_i-1}(r_c)}{2}\left(\dfrac{1-r_c}{\alpha}\sqrt{\alpha^2-(1-r_c)^2}+\alpha\, \arcsin\left(\dfrac{1-r_c}{\alpha}\right)\right)\, , \\ \int\limits_{r_c-\alpha}^{1} g(r) \dif r \approx \dfrac{g(r_c)}{2}\left(\dfrac{1-r_c}{\alpha}\sqrt{\alpha^2-(1-r_c)^2}+\alpha\, \arcsin\left(\dfrac{1-r_c}{\alpha}\right)\right)\, . \end{aligned} \end{equation*} The three scenarios can be summarized in a single formula. For simplicity, define the operator \begin{equation*} \begin{aligned} \mathcal{I}[f](r_c) &= \dfrac{\pi}{2} \dfrac{\alpha+r_c+\min\{\alpha-r_c, r_c-\alpha\}}{2}f\left(\dfrac{\alpha+r_c+\max\{\alpha-r_c, r_c-\alpha\}}{2}\right)\\ &-\dfrac{\max\{r_c+\alpha-1, 0\}}{r_c+\alpha-1}\dfrac{f(r_c)}{2}\left(\alpha\, \arccos\left(\dfrac{1-r_c}{\alpha}\right)-\dfrac{1-r_c}{\alpha}\sqrt{\alpha^2-(1-r_c)^2}\right)\, , \end{aligned} \end{equation*} which, given a function $f$, returns the ellipse approximation of the integral over balls. It holds that \begin{equation*} \begin{aligned} \prob\left[\ball_\alpha(r_c, \theta_c)\right] &= \dfrac{4}{B}\sum\limits_{i: n_i\neq0}\dfrac{c_i}{n_i}\, \cos(n_i\, (\theta_c-\mu_i))\, \mathcal{I}[f_{n_i-1}](r_c)+\dfrac{4}{B} \left(\sum\limits_{i: n_i=0}c_i+A\right) \mathcal{I}[g](r_c)\\ &+\max\{0, \alpha-r_c\}^2\, , \end{aligned} \end{equation*} from which the thesis follows.
To compute the average degree of a spatial network from the unit disk, the quantity \begin{equation*} \overline{d}\coloneqq \int\limits_{0}^{1}2\, r\, \int\limits_{-\pi}^{\pi}\prob\left[\ball_\alpha(r, \theta)\right] \, \sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu}) \dif\theta \dif r\, , \end{equation*} must be computed. Using \cref{thm:prob_balls_sbrv}, the integral can be written in the form \begin{align*} \overline{d}=&\dfrac{8\, \pi}{B^2}\sum\limits_{i: n_i\neq0} \sum\limits_{j: n_i=n_j}\dfrac{c_ic_j}{n_i}\, \, \cos(n_i\, (\mu_i-\mu_j))\, \int\limits_{0}^{1}r\, \mathcal{I}[f_{n_i-1}](r)\dif r\\ &+\dfrac{16\, \pi}{B^2}\left(\sum\limits_{i: n_i=0}c_i+A\right)^2\,\int\limits_{0}^1 r\, \mathcal{I}[g](r) \dif r\, . \end{align*} Since $\chebU_{k}(1)=k+1$, so that $\chebU_{n_i-1}(1)=n_i$, the following approximation can be derived: \begin{equation*} \mathcal{I}[f_{n_i-1}](r) \approx \dfrac{\pi}{2}\, n_i\, \alpha^2 \sqrt{1-\left(\dfrac{\alpha}{2\, r}\right)^2}\sim \dfrac{\pi}{2}\, n_i\, \alpha^2 \left(1-\dfrac{\alpha^2}{8\, r^2}\right)\, , \end{equation*} hence the integral boils down to \begin{equation*} \int\limits_{0}^1 r\, \mathcal{I}[f_{n_i-1}](r) \dif r \approx \dfrac{\pi}{4} n_i \alpha^2\,. \end{equation*} The remaining term evaluates to \begin{equation*} \begin{aligned} \int\limits_{0}^1 r\, \mathcal{I}[g](r) \dif r &=\dfrac{\pi \, \alpha}{24}\left( \alpha \, \sqrt{4-\alpha^2}+4\, \arccos\left(1-\dfrac{\alpha^2}{2}\right)\right)\\ &+\dfrac{\pi \, \alpha^4}{48} \left(\log\left(2+\sqrt{4-\alpha^2}\right)-\log(\alpha)\right)\\ &\sim \dfrac{\pi \, \alpha^2}{24}\left(4+ \sqrt{4-\alpha^2}\right) \sim \dfrac{\pi \, \alpha^2}{24}\left(6-\dfrac{\alpha^2}{4}\right)\sim \dfrac{\pi \, \alpha^2}{4}\, , \end{aligned} \end{equation*} from which the thesis follows.
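As a sanity check on the endpoint identities behind the ellipse approximation, the following Python sketch (ours; the parameter values $r_c$, $\alpha$, $n$ are illustrative) verifies numerically that $f_{n_i-1}$ and $g$ vanish at $r_c\pm\alpha$ and match the closed forms stated above at $r=r_c$.

```python
import math

# Numerical sanity check of the endpoint identities for f_{n-1} and g:
# both vanish at r = r_c +/- alpha (where the arccos argument equals 1),
# and f_{n-1}(r_c), g(r_c) match the closed forms used in the proof.
def cheb_U(n, x):
    # Chebyshev polynomial of the second kind via the three-term recurrence
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

r_c, alpha, n = 0.5, 0.05, 3   # illustrative values (scenario 1)

def c(r):  # argument of arccos in the parametrization
    return (r * r + r_c * r_c - alpha * alpha) / (2.0 * r * r_c)

def f(r):
    return r * math.sqrt(max(0.0, 1.0 - c(r) ** 2)) * cheb_U(n - 1, c(r))

def g(r):
    return r * math.acos(max(-1.0, min(1.0, c(r))))

assert abs(f(r_c + alpha)) < 1e-9 and abs(f(r_c - alpha)) < 1e-9
assert abs(g(r_c + alpha)) < 1e-9 and abs(g(r_c - alpha)) < 1e-9
closed_f = alpha * math.sqrt(1.0 - (alpha / (2.0 * r_c)) ** 2) \
    * cheb_U(n - 1, 1.0 - alpha ** 2 / (2.0 * r_c ** 2))
assert abs(f(r_c) - closed_f) < 1e-12
closed_g = r_c * math.acos(1.0 - alpha ** 2 / (2.0 * r_c ** 2))
assert abs(g(r_c) - closed_g) < 1e-12
```

The clamping of the $\arccos$ argument guards against round-off pushing it marginally above $1$ at the endpoints.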
\end{proof} \begin{figure} \centering \begin{tabular}{c} \includegraphics[width=.5\linewidth]{img/g.pdf}\\ \includegraphics[width=.5\linewidth]{img/f0.pdf} \end{tabular} \caption{Approximation of $\int\limits_{a}^{b}g(r)\dif r$ (top) and $\int\limits_{a}^{b}f_0(r)\dif r$ (bottom) when $a=\alpha-r_c$ and $b=\alpha+r_c$ (left), $a=r_c-\alpha$ and $b=r_c+\alpha$ (center), and $a=\alpha+r_c$ and $b=1$ (right), as a function of $r_c$, with $\alpha=0.05$.} \label{fig:simpson_elliptic} \end{figure} \begin{proof}[Proof of \cref{thm:prob_balls_hyperbolic}] As in the proof of \cref{thm:prob_balls_unit_disk}, the domain of integration can be parametrized as \begin{equation*} \theta\in(\theta_c-\theta_r, \theta_c+\theta_r)\, , \; \theta_r = \arccos\left(d_r\right)\, , \, d_r=\dfrac{\cosh(r)\cosh(r_c)-\cosh(\alpha)}{\sinh(r_c)\sinh(r)}\, . \end{equation*} In order to remove the singularity of the argument of $\arccos$, the domain of integration can be decomposed as a ball containing the origin and the remaining annulus, leading to \begin{align*} \mathbb{P}[\ball_\alpha(r_c, \theta_c)] &= \dfrac{1}{\cosh(R)-1}\int\limits_{l_1}^{u_1}\int\limits_{-\pi}^{\pi} \sinh(r) \sbrv(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu})\dif\theta\dif r\\ &+\dfrac{1}{\cosh(R)-1}\int\limits_{l_2}^{u_2}\int\limits_{\theta_c-\theta_r}^{\theta_c+\theta_r} \sinh(r) \sbrv(\theta ; \pmb{c}, \pmb{n}, \pmb{\mu})\dif\theta\dif r\, ,\\ \intertext{where $l_1=0$, $u_1=\max\{\alpha-r_c, 0\}$, $l_2 = \lvert \alpha-r_c \rvert$ and $u_2=\min\{r_c+\alpha, R\}$, so that} &=\dfrac{\cosh(u_1)-1}{\cosh(R)-1} +\dfrac{1}{\cosh(R)-1}\int\limits_{l_2}^{u_2} \sinh(r)\prob\left[\ball_{\theta_r}(\theta_c)\right]\dif r\\ &=\dfrac{\cosh(u_1)-1}{\cosh(R)-1}+ \dfrac{2\, \left(\sum\limits_{i:n_i =0}c_i + A\right)}{B\, \left(\cosh(R)-1\right)}\int\limits_{l_2}^{u_2}\sinh(r)\theta_r\dif r\\ &+\dfrac{2}{B}\sum\limits_{i:n_i\neq 0}\dfrac{c_i}{n_i}\dfrac{\cos(n_i(\theta_c-\mu_i))}{\cosh(R)-1}\int\limits_{l_2}^{u_2}\sinh(r)\sqrt{1-d_r^2}\, \chebU_{n_i-1}(d_r)\dif
r\, . \end{align*} The approximations $\theta_r \approx \sqrt{2-2\, d_r}$ and $d_r\approx 1+2\,(e^{-2\, r}+e^{-2\, r_c}-e^{\alpha-r_c-r}-e^{-\alpha-r_c-r}) $ as in \cite{gugelmannRandomHyperbolicGraphs2012} can be used to analyze the behavior of both integrals. For large $R$, it holds that \begin{equation*} \begin{aligned} \int\limits_{l_2}^{u_2}\dfrac{\sinh(r)}{\cosh(R)-1}\theta_r\dif r &\approx \int\limits_{l_2}^{u_2} e^{r-R} \sqrt{1-d_r}\dif r \\ &\approx 2\int\limits_{l_2}^{u_2} e^{r-R} \sqrt{e^{\alpha-r_c-r}+e^{-\alpha-r_c-r}-e^{-2 r}-e^{-2\, r_c}}\dif r\\ &= 2\int\limits_{l_2}^{u_2} e^{\frac{r-2\,R +\alpha-r_c}{2}} \sqrt{1+e^{-2\alpha}-e^{-r-\alpha+ r_c}-e^{r-\alpha- r_c}}\dif r\\ &\approx 4 e^{\frac{\alpha-R-r_c}{2}}\, , \end{aligned} \end{equation*} where the last approximation is justified by $\sqrt{1+x} = 1 + \mathcal{O}(x)$ when $\lvert x\rvert \leq 1$. Noting that $-1\leq d_r\leq 1$, one can get rid of the polynomial contribution: \begin{equation*} \begin{aligned} \int\limits_{l_2}^{R}\dfrac{\sinh(r)}{\cosh(R)-1}\sqrt{1-d_r^2}\, \chebU_{n_i-1}(d_r)\dif r\, &\approx n_i\int\limits_{l_2}^{R}e^{r-R}\sqrt{1-d_r^2}\dif r \\ & = n_i\int\limits_{l_2}^{R}e^{r-R}\sqrt{1-d_r}\sqrt{1+d_r}\dif r\\ &\approx \sqrt{2}\, n_i\int\limits_{l_2}^{R}e^{\frac{r-2R+\alpha-r_c}{2}}\sqrt{1+d_r}\dif r\\ &\approx 4\, n_i e^{\frac{\alpha-R-r_c}{2}}\, .
\end{aligned} \end{equation*} Therefore, the probability of balls is approximately \begin{align*} \mathbb{P}[\ball_\alpha(r_c, \theta_c)] &\approx \dfrac{8\, e^{\frac{\alpha-R-r_c}{2}}}{B} \left(\sum\limits_{i:n_i\neq 0}c_i\cos(n_i\,( \theta_c-\mu_i)) + \left(\sum\limits_{i:n_i =0}c_i + A\right)\right)\, , \end{align*} and the expected average degree of the network is \begin{equation*} \begin{aligned} \overline{d} &= N\int\limits_{0}^{R}\int\limits_{-\pi}^{\pi}\mathbb{P}[\ball_\alpha(r, \theta)]\sbrv(\theta; \pmb{c}, \pmb{n}, \pmb{\mu})\dfrac{\sinh(r)}{\cosh(R)-1}\dif \theta \dif r\\ &\approx \dfrac{16\,\pi\, N\, e^{\frac{\alpha-2\, R}{2}}}{B^2}\left( \sum\limits_{i:n_i\neq 0}\sum\limits_{j:n_i=n_j}c_i c_j\cos(n_i\,( \mu_i-\mu_j)) + 2\,\left(\sum\limits_{i:n_i =0}c_i + A\right)^2\right)\, . \end{aligned} \end{equation*} \end{proof} \begin{proof}[Proof of \cref{thm:diag_commuting}] Equality $(2)$ is trivial, since diagonal matrices commute; equality $(1)$ follows from \begin{align*} \diag(\mathbf{V}\mathbf{A}\mathbf{1})_{i, i} &= (\mathbf{V}\mathbf{A}\mathbf{1})_i = \sum\limits_{j=1}^nV_{i, j}(\mathbf{A}\mathbf{1})_j = \sum\limits_{j=1}^n\sum\limits_{k=1}^n V_{i, j}A_{j, k}\\ &=\sum\limits_{k=1}^n V_{i, i}A_{i, k} =V_{i, i}\sum\limits_{k=1}^nA_{i, k} =(\mathbf{V}\,\diag(\mathbf{A}\mathbf{1}))_{i, i}\, , \end{align*} where the second line uses the fact that $\mathbf{V}$ is diagonal. In order to prove $(3)$, we note that $\mathbf{V}$ can be decomposed as $\mathbf{V}=\sum_{i=1}^n v_{i} \, \mathbf{e}^{(i)}{\mathbf{e}^{(i)}}^{\transpose}$.
Therefore, \begin{align*} 0 &= (\diag(\mathbf{A}\mathbf{V}\mathbf{1})-\diag(\mathbf{V}\mathbf{A}\mathbf{1}))_{k, k} = \diag\left(\sum_{i=1}^n v_i \left(\mathbf{A}\mathbf{e}^{(i)}{\mathbf{e}^{(i)}}^{\transpose}\mathbf{1}-\mathbf{e}^{(i)}{\mathbf{e}^{(i)}}^{\transpose}\mathbf{A}\mathbf{1}\right)\right)_{k, k}\\ &= \left(\sum_{i=1}^n v_i \left(\mathbf{A}\mathbf{e}^{(i)}-\sum_{j=1}^n A_{i, j} \mathbf{e}^{(i)}\right)\right)_k = \sum_{i=1}^n v_i A_{k, i}-v_{k}\sum_{j=1}^n A_{k, j} \\ &= \sum_{i=1}^n (v_i-v_{k}) A_{k, i}\, , \end{align*} must hold for all values of $k$. Consider the indices $k_1, k_2, \dots, k_n$ corresponding to the sorted values $v_{k_1}\leq v_{k_2}\leq\dots\leq v_{k_n}$; then \begin{align*} 0 = \sum_{i=1}^n \underbrace{(v_i-v_{k_1})}_{\geq 0} A_{k_1, i}\, , \end{align*} hence $A_{k_1, i} = 0$ for each $i$ such that $v_i > v_{k_1}$. Take the index $k_2$ and consider \begin{align*} 0 &= \sum_{i=1}^n \underbrace{(v_i-v_{k_2})}_{\geq 0} A_{k_2, i} = \sum_{\substack{i=1\\i\neq k_1}}^n \underbrace{(v_i-v_{k_2})}_{\geq 0} A_{k_2, i}+\underbrace{(v_{k_1}-v_{k_2}) A_{k_2, k_1}}_{=0}\, . \end{align*} The second addend is $0$ because $v_{k_2}$ can be either equal to $v_{k_1}$, in which case the difference is null, or $v_{k_2}>v_{k_1}$, in which case $A_{k_2, k_1}= 0$ from the previous step. Therefore $A_{k_2, i} = 0$ for each $i$ such that $v_i > v_{k_2}$. By finite induction, $\mathbf{A}$ has null entries in position $(i, j)$ whenever $v_i\neq v_j$, from which the thesis follows. \end{proof} \begin{proof}[Proof of \cref{thm:eigenvalues_symmetric_normalized_k_laplacian}] The eigenvalues can be characterized via the Rayleigh quotient \begin{align*} \dfrac{\left\langle \mathbf{u}, \left(\diag\left(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right)-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\right)\mathbf{u}\right\rangle}{\left\langle \mathbf{u}, \mathbf{u}\right\rangle}\, .
\end{align*} Using \cref{thm:diag_commuting}, and considering $\mathbf{u} = \mathbf{D}^{\frac{1}{2}}\mathbf{v}$, the previous formula can be rewritten as \begin{align*} \dfrac{\left\langle \mathbf{D}^{\frac{1}{2}}\mathbf{v}, \left(\diag\left(\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right)-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\right)\mathbf{v}\right\rangle}{\left\langle \mathbf{D}^{\frac{1}{2}}\mathbf{v}, \mathbf{D}^{\frac{1}{2}}\mathbf{v}\right\rangle} &= \dfrac{\mathbf{v}^{\transpose} \left(\diag\left(\mathbf{D}^{\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{1}\right)-\mathbf{A}\right)\mathbf{v}}{\mathbf{v}^{\transpose}\mathbf{D}\mathbf{v}}\, . \end{align*} Let $d_i = \mathbf{D}_{i, i}$ denote the degree of the $i$-th node; using the symmetry of $\mathbf{A}$, the numerator can be rewritten as \begin{align*} &\sum\limits_{i, j}v_i^2 \sqrt{\dfrac{d_i}{d_j}}A_{i, j}- \sum\limits_{i, j}v_iA_{i, j}v_j\\ &= \dfrac{1}{2}\sum\limits_{i, j}v_i^2 \sqrt{\dfrac{d_i}{d_j}}A_{i, j}+\dfrac{1}{2}\sum\limits_{i, j}v_j^2 \sqrt{\dfrac{d_j}{d_i}}A_{i, j} - \sum\limits_{i, j}v_iA_{i, j}v_j\\ &=\dfrac{1}{2}\left(\sum\limits_{i, j} v_i A_{i, j}\left(\sqrt{\dfrac{d_i}{d_j}}v_i-v_j\right) - \sum\limits_{i, j} v_j A_{i, j}\left(v_i- \sqrt{\dfrac{d_j}{d_i}}v_j\right)\right)\\ &=\dfrac{1}{2}\sum\limits_{i, j} \left(\dfrac{v_i}{\sqrt{d_j}}-\dfrac{v_j}{\sqrt{d_i}}\right) A_{i, j}\left(\sqrt{d_i}v_i- \sqrt{d_j}v_j\right)\\ &=\dfrac{1}{2}\sum\limits_{i, j} \dfrac{A_{i, j}}{\sqrt{d_id_j}}\left(\sqrt{d_i}v_i- \sqrt{d_j}v_j\right)^2\, .\\ \intertext{From the last equality it follows that the eigenvalues are all nonnegative. From $(a-b)^2\leq 2(a^2+b^2)$ it follows that the numerator is} &\leq\sum\limits_{i, j} \dfrac{A_{i, j}}{\sqrt{d_id_j}}\left(d_iv_i^2+ d_jv_j^2\right)\\ &= 2\, \sum\limits_{i, j} A_{i, j}\sqrt{\dfrac{d_i}{d_j}}v_i^2\\ &\leq 2\, \sqrt{N} \sum\limits_i d_i v_i^2\\ &= 2\, \sqrt{N} \,\mathbf{v}^{\transpose}\mathbf{D}\mathbf{v}\, , \end{align*} from which the thesis follows. \end{proof} \end{document}
\begin{document} \begin{abstract} We introduce specific solutions to the linear harmonic oscillator, named bubbles. They form resonant families of invariant tori of the linear dynamics, with arbitrarily large Sobolev norms. We use these modulated bubbles of energy to construct a class of potentials which are real, smooth, time dependent and uniformly decaying to zero with respect to time, such that the corresponding perturbed quantum harmonic oscillator admits solutions which exhibit a logarithmic growth of Sobolev norms. The resonance mechanism is explicit in space variables and produces highly oscillatory solutions. We then give several recipes to construct similar examples using more specific tools based on the continuous resonant (CR) equation in dimension two. \end{abstract} \subjclass{ } \keywords{} \thanks{ } \maketitle \section{Introduction} \subsection{Setting of the problem} We consider the linear operator associated with the two dimensional quantum harmonic oscillator \begin{equation} \label{defH} H = - \Delta + |x|^2, \end{equation} where for $x = (x_1,x_2) \in \mathbb{R}^2$, $|x|^2 = x_1^2 + x_2^2$ and $\Delta$ is the Laplace operator. Consider the Sobolev spaces associated with the function space defining the domain of $H$, $$ \mathcal{H}^r = \{ u \in L^2 \, | \,H^{r/2} u \in L^2\}, \quad r \geq 0; $$ then the solution to the linear Schr\"odinger equation \begin{equation} \label{J} \left|\begin{array}{l} i \partial_t u = Hu, \qquad u(t,x) \in \mathbb{C}\\ u_{|t=0}=u_0(x) \end{array}\right.
\quad \Leftrightarrow \quad u(t,x) = e^{-i t H} u_0 \end{equation} preserves all the $\mathcal{H}^r$ norms, $$\forall t\in \Bbb R, \ \ \| e^{- i t H} u_0 \|_{\mathcal{H}^r} = \| u_0 \|_{\mathcal{H}^r},$$ and no {\em weakly turbulent} effect can be observed, {\em i.e.}\ energy transfer between low and high frequencies generating growth of Sobolev norms.\\ A long standing open problem is the possibility of finding perturbations of \eqref{J} of the Hamiltonian form \begin{equation} \label{cenioneivbneivoeb} i \partial_t u = Hu + V(t,x,u) u \end{equation} producing such weakly turbulent effects, while preserving energies ($L^2$ norm and/or Hamiltonian energy in the time independent case), and to classify possible mechanisms of energy transfer, as well as their genericity. We propose in this paper a step forward in this direction by considering the linear case where the real potential $V(t,x)$ is independent of $u$ and chosen as smooth and small as possible. We focus on purpose on the simplest possible linear case, but stress that the method of proof and the nature of the resonance mechanism also apply to non linear problems $V(t,x,u) = W(t,x)+f(|u|^2)$. \subsection{Previous results} The study of the linear Schr\"odinger equation \eqref{J} perturbed by a general time dependent linear operator $ P(t ) u$ (not necessarily the multiplication by a function) has a long history with important recent developments.\\ The first class of results exhibits situations where solutions do not have any turbulent behavior and remain bounded for all times. The perturbed flow is essentially similar to the unperturbed one and the dynamics can be conjugated to a dynamics with a constant linear operator close to $H$. These are reducibility results generalizing Floquet theory, and the perturbation operator is typically periodic or quasi-periodic in time. In \cite{Com87} such results were given for regularizing perturbations.
Using KAM techniques for PDEs, more recent results have been obtained, see for instance \cite{BG01, GT11, Bam17a, Bam17b, BGMR18,GP19}.\\ A second class of results concerns a priori bounds on the possible growth of Sobolev norms. In \cite{MR17}, general bounds in time were given for the case where the perturbation is the multiplication by a real potential $V(t,x)$. When the potential is regular, the growth can be at most of order $t^{\varepsilon}$ where $\varepsilon$ depends on the regularity of the potential, a bound that can be refined to $(\log t)^{\alpha}$ for analytic potentials. More general results are also given in \cite{BGMR17}. \\ Concerning the possibility of growth and the existence of weakly turbulent mechanisms, very few results are available. In \cite{GY00}, explicit examples are given with solutions exhibiting Sobolev norm growth, and an explicit multiplication operator $V(t,x) = a \sin(t) x$. Note however that this operator is of order $1$, and in particular not decaying at infinity in $x$, and thus not defining an element of $\mathcal{H}^r$. In \cite{Del14}, J.-M. Delort constructed order zero pseudo differential operators $P(t)$, periodic in time, such that the solution of the harmonic oscillator perturbed by this operator grows like $t^{r/2}$ in $\mathcal{H}^r$ norm. Similar examples were given by A. Maspero \cite{Mas18}. During the preparation of this work, L. Thomann \cite{Tho20} also proposed an example of such operators based on a linearized version of the lowest Landau level equation, constructed as explicit travelling waves.
All these examples provide continuous operators $P(t)$ of order $0$ with periodic or growing behavior with respect to $t$, when $P(t)$ and its time derivatives are estimated in $\mathcal{L}(\mathcal{H}^r,\mathcal{H}^r)$, but so far no result has been given for the multiplication by a smooth potential belonging to $\mathcal{H}^r$.\\ Spectacular results have also been obtained in the case of the torus regarding a priori bounds, reducibility, and the construction of unbounded trajectories \cite{Bou99a,Bou99b}, see also \cite{Wan08b, Del10, MR17, EK09}. Specifically, in the non linear setting of the (NLS) equation on the torus, the seminal work \cite{CKSTT10} provides the first explicit construction of a growth mechanism for the limiting completely resonant equation. This analysis was refined in \cite{GK13} with optimized constants, and used in \cite{HPTV15} to show the relevance of the mechanism for the small data scattering problem.\\ More growth mechanisms have also been explored for other non linear dispersive problems, in particular in \cite{GG10, Po11, GLPR18}, which are deeply connected to our approach. \subsection{Statement of the result} We propose a new and elementary space based approach to construct classes of smooth, asymptotically vanishing in time potentials $V(t,x)$ for which a weakly turbulent mechanism for solutions to \eqref{cenioneivbneivoeb} occurs. Our construction comes with a complete description of the associated drift to high frequencies. Our main result is the following.
\begin{theorem}[Existence of smooth vanishing potentials exhibiting weakly turbulent growth] \label{Th1} There exist potential functions $V(t,x) \in \mathcal{C}^\infty(\mathbb{R}\times \mathbb{R}^2; \mathbb{R})$ and functions $u(t,x) \in \mathcal{C}^\infty(\mathbb{R}\times \mathbb{R}^2; \mathbb{C})$ such that \begin{equation} \label{LHO1} \forall\, (t,x) \in \mathbb{R} \times \mathbb{R}^2\qquad i \partial_t u(t,x) = H u (t,x) + V(t,x) u(t,x) \end{equation} and such that for all $r \geq 0$ and all $k \in \mathbb{N}$ \begin{equation} \label{decay} \lim_{t \to +\infty} \| \partial_t^{k} V(t,x) \|_{\mathcal{H}^r} = 0 \end{equation} and \begin{equation} \label{growth} \| u(t,x) \|_{\mathcal{H}^1} \sim c (\log t)^\alpha, \quad t \to +\infty, \quad c,\alpha >0. \end{equation} \end{theorem} \noindent{\em Comments on the result.}\\ \noindent{\em 1. The bubble approach}. The main ingredient used to prove this result is the study of specific solutions to the unperturbed equation \eqref{J} that we call {\em bubbles}. They are explicit solutions whose trajectories form families of invariant tori of dimension one, parametrized by {\em actions} piloting the $\mathcal{H}^1$ norm of the solution, and with {\em angles all oscillating at the same frequency}, corresponding to the frequency gap of the operator $H$. They thus form a resonant family of invariant tori of the linear dynamics, with arbitrarily high Sobolev norms. We then construct the perturbation as a superposition of time oscillations, decaying for large times, which resonate with the bubbles to produce a growth of the $\mathcal{H}^1$ norm corresponding to a growth of the actions in the family of bubbles. A fundamental feature is that the bubbles are completely explicit and generated by the {\em pseudo conformal symmetry group} associated with the unperturbed flow \eqref{J}, and the leading order growth mechanism corresponds to a suitable resonant mechanism created by a fine tuning of the potential $V(t,x)$.
In other words, {\em after renormalization}, the growth of Sobolev norms is generated by a small deformation of a solitary wave (here just a harmonic function). This is the heart of the analysis of blow up bubbles for (NLS) models in \cite{MeRa05, MaRa18} and of the study of growth mechanisms in \cite{GLPR18}. Let us stress that the mechanism is completely explicit, and \eqref{vbeivbibeveievb} gives an example of such an admissible potential in closed form.\\ \noindent{\em 2. Modulation equations and Arnold diffusion.} Resonance will be described through the study of modulation equations which are a perturbation of the trajectory associated with the pseudo-conformal symmetry of \eqref{defH}, see section \ref{sectionmodualtib}. The obtained growth mechanism is deeply connected to the original example of {\em Arnold diffusion} given in \cite{Arn64} (see \cite{DGLS08} for a review on the subject). Indeed the modulation equations describing the evolution of the bubble in interaction with the complete system are a perturbation of a completely integrable system (see \eqref{arnold} below) containing resonant oscillations as in \cite{Arn64}, but of size $\varepsilon$ decaying in time in a non integrable way. Compared with the classical result in Arnold diffusion, this class of perturbations allows a complete growth in infinite time of the actions at a logarithmic scale. Moreover, as these bubbles are embedded into an infinite dimensional dynamical system, we construct the solution by superposing this new Arnold diffusion example with the backward integration method for PDEs introduced in \cite{Me90}. \\ \noindent{\em 3. Oscillations}. An essential difference with the blow up analysis in \cite{MeRa05, MaRa18,GLPR18} is the {\em oscillatory nature} of the corresponding solutions, which is a consequence of the discrete spectrum of the operator.
For example, for the solution constructed in Theorem \ref{Th1}, there exist $t_n^{(1)},t_n^{(2)}\to +\infty$ such that $$\left|\begin{array}{l} \lim_{n\to +\infty}\|u(t^{(1)}_n,\cdot)\|_{L^\infty}=0\\ \lim_{n\to +\infty}\|u(t^{(2)}_n,\cdot)\|_{L^\infty}=+\infty. \end{array}\right. $$ Monotonic growth of the energy is however achieved at the level of the action-angle variables, which is the core of the resonant mechanism. We refer to \cite{MRRS19} for more highly oscillatory blow up mechanisms for (NLS) like models.\\ \noindent{\em 4. The growth rate}. Interestingly enough, the logarithmic growth rate \eqref{growth} saturates the general bound for smooth potentials proved in \cite{MR17}. Note that, typically, in all the examples we construct we will be able to estimate the growth of the higher Sobolev norms of $u$, which will be of order $(\log t)^{r\alpha}$ for the $\mathcal{H}^r$ norm. Moreover, by tuning the potential differently, we can also produce bounds of order $t^\varepsilon$, but the estimate \eqref{decay} will then be valid only up to some $k$ depending on $\varepsilon^{-1}$. These types of refinements and discussions about the optimality of the result, as well as a complete classification of the examples leading to Theorem \ref{Th1}, are beyond the scope of this paper.\\ \noindent{\em 5. More growth mechanisms}. In section \ref{sectioncr} we also give general recipes to construct examples realizing Theorem \ref{Th1} for the pseudo-differential linearized CR equation introduced in \cite{FGH16}, which is the first normal form operator of the cubic nonlinear harmonic oscillator as shown in \cite{GHT16}. The strategy here is in some sense closer to \cite{Tho20}, which considers the specific case of the Bargmann-Fock space, but it turns out to be in fact very general. \\ \noindent {\bf Acknowledgments}. This work was completed during the participation of E.F.
in the semester {\em Geometry, compatibility and structure preservation in computational differential equations} held at the Isaac Newton Institute, Cambridge, in Fall 2019. This visit was partially supported by a grant from the Simons Foundation. P.R. is supported by the ERC-2014-CoG 646650 SingWave. P.R. would like to thank P. Gerard, Z. Hani and Y. Martel for stimulating discussions at very early stages of this work at the 2015 MSRI program "New challenges in PDE". The authors would also like to thank L. Thomann for his careful reading of a preliminary version of the manuscript and his fruitful comments. \section{Bestiary} We recall in this section basic facts about the harmonic oscillator and Hermite functions which will be used in the proof of the main Theorem. We work throughout the paper in dimension 2. \subsection{Notations} We set $$ ( f, g )_{L^2} = \int_{\mathbb{R}^2} f(x) \overline{g(x)} \mathrm{d} x. $$ We define the Fourier transform $$ \widehat f (\xi) = (\mathcal{F} f)(\xi) := \frac{1}{2\pi} \int_{\mathbb{R}^2} f(x) e^{- i x \cdot \xi } \, \mathrm{d} x, $$ with $x \cdot y = x_1 y_1 + x_2 y_2$ for $x = (x_1,x_2) \in \mathbb{R}^2$ and $y = (y_1,y_2) \in \mathbb{R}^2$. With this normalization we have $$ (\mathcal{F}^{-1} f) (x) = \frac{1}{2\pi} \int_{\mathbb{R}^2} f(\xi) e^{i x \cdot \xi } \, \mathrm{d} \xi. $$ We define $L^2(\mathbb{R}^2)$ as the Hilbert space associated with the scalar product $(\, \cdot \, , \, \cdot \, )_{L^2}$. For all $r \geq 0$, we define $\langle \nabla \rangle^r f$ as the inverse Fourier transform of the function $\langle \xi \rangle^r \widehat f(\xi)$, where for any complex number $z$, $\langle z \rangle = \sqrt{1 + |z|^2}$, and we equip the Sobolev space $H^r$ with the norm \begin{equation} \label{Sob} \|f\|_{H^r} := \|\langle \nabla \rangle^r f\|_{L^2} = \|\langle \xi \rangle^r \widehat f\|_{L^2}.
\end{equation} Finally, we will use the following notation: \begin{equation} \label{eqLambda} \Lambda = x \cdot \nabla_x = x_1 \partial_{x_1} + x_2 \partial_{x_2}. \end{equation} \subsection{Harmonic oscillator and eigenfunctions} Following \cite[Section 6.6]{GHT16}, inspired by \cite[Chapter 1 and Corollary 3.4.1]{Tha93}, we consider the radial functions \begin{equation} \label{basishn} h_{k} = \frac{1}{\sqrt{\pi}} L_k^{(0)}(|x|^2) e^{- \frac{|x|^2}{2}}, \qquad L_k^{(0)} (x) = \frac{e^x}{k! } \frac{\mathrm{d} ^k}{\mathrm{d} x^k} (e^{-x} x^k). \end{equation} The $L_k^{(0)}$ are the standard Laguerre polynomials on $[0,+\infty)$. Then we have \begin{equation} \label{lambdan} H h_k = \lambda_k h_k = ( 4 k +2) h_k \quad \mbox{and}\quad \int_{\mathbb{R}^2} h_k(x) h_n(x) \mathrm{d} x = \delta_{nk}, \end{equation} for all $n,k \in \mathbb{N}$, where $\delta_{nk} = 0$ for $n \neq k$ and $\delta_{nn} = 1$. The family $\{h_k\}_{k \geq 0}$ forms an orthonormal basis of the space of radial functions in $L^2(\mathbb{R}^2)$. Note that we have $L_0^{(0)}(x) = 1$ and $L_1^{(0)}(x) = 1 - x$. The general expression of the $h_k$ can be computed using generating functions. For any complex number $t$ with $|t| < 1$, the generating function of the Laguerre polynomials is given by $$ \sum_{n = 0}^{\infty} t^n L_n^{(0)}(z) = \frac{1}{1 - t} e^{- \frac{t z}{1 - t}}, $$ which is valid for $z \in \mathbb{R}$. Hence \begin{equation} \label{generhn} \sum_{n = 0}^{\infty} t^n h_n(x) =\frac{1}{\sqrt{\pi}}\frac{1}{1 - t} e^{- \frac{t |x|^2}{1 - t}}e^{- \frac{|x|^2}{2}} = \frac{1}{\sqrt{\pi}}\frac{1}{1 - t} e^{- \frac{(1+ t)}{2(1 - t)} |x|^2}.
\end{equation} We recall the recurrence formula for generalized Laguerre polynomials: for $\alpha \in \mathbb{R}$, \begin{equation} \label{temple} (k+1) L_{k+1}^{(\alpha)} (x) = (2 k +1 + \alpha - x) L_k^{(\alpha)}(x) - (k + \alpha) L_{k-1}^{(\alpha)}(x), \quad k \geq 1, \end{equation} a formula which also holds for $k = 0$ (with, for instance, the convention $L_{-1}^{(\alpha)} = 0$). We also need the formulas $$ \frac{\mathrm{d}^k}{\mathrm{d} x^k} L_n^{(\alpha)}(x) = \left\{ \begin{array}{ll} (-1)^k L_{n-k}^{(\alpha + k)}(x) & \mbox{if}\quad k \leq n,\\ 0 & \mbox{otherwise} \end{array} \right. $$ and $$ x L_{n}^{(\alpha +1)}(x) = (n + \alpha) L_{n-1}^{(\alpha)}(x) - (n-x) L_{n}^{(\alpha)}(x). $$ From these relations, we obtain \begin{equation} \label{temple2} x \frac{\mathrm{d}}{\mathrm{d} x}L_n^{(\alpha)}(x) = - x L_{n-1}^{(\alpha +1)}(x) = n L_{n}^{(\alpha)}(x) - (n + \alpha) L_{n-1}^{(\alpha)}(x). \end{equation} From Equation \eqref{temple} with $\alpha = 0$, we infer for $k \geq 0$ \begin{equation} \label{eqy2} |x|^2 h_k(x) = -(k+1) h_{k+1}(x) + (2k +1) h_k(x) - k h_{k-1}(x). \end{equation} This implies that \begin{equation} \label{eqLap} \Delta h_k(x) = -(k+1) h_{k+1}(x) - (2k +1) h_k(x) - k h_{k-1}(x). \end{equation} Moreover, with $\Lambda$ given by \eqref{eqLambda} and using \eqref{temple2}, we have \begin{eqnarray} \Lambda h_{k} &=& \frac{2}{\sqrt{\pi}}|x|^2 \big(L_k^{(0)}\big)'(|x|^2) e^{- \frac{|x|^2}{2}} - \frac{1}{\sqrt{\pi}} |x|^2 L_k^{(0)}(|x|^2) e^{- \frac{|x|^2}{2}}\nonumber\\ &=& 2k h_k - 2k h_{k-1} + (k+1) h_{k+1} - (2k +1) h_k + k h_{k-1}\nonumber\\ &=& (k+1) h_{k+1} - h_k - k h_{k-1}.
\label{bemol} \end{eqnarray} We will also need the following formulae, whose proof is postponed to Section \ref{hermint}: \begin{proposition}[Inner products of Hermite functions] \label{prop21} We have \begin{equation} \label{h100} (h_1,h_0 h_0)_{L^2} = \int_{\mathbb{R}^2} h_1(y) h_0(y) h_0(y) \mathrm{d} y = \frac{2}{9 \sqrt{\pi}}, \end{equation} and \begin{equation} \label{1100} \|h_1 h_0\|_{L^2}^2 = \int_{\mathbb{R}^2} h_1(y) h_1(y) h_0(y) h_0(y) \mathrm{d} y = \frac{1 }{4 \pi}. \end{equation} \end{proposition} \subsection{Function spaces} We define the space associated with the harmonic oscillator $$ \mathcal{H}^r = \{ f \in L^2 \, | \,H^{r/2} f \in L^2\}, \quad r \geq 0, $$ equipped with the norm $$ \|f\|_{\mathcal{H}^r} := \|H^{r/2}f\|_{L^2}. $$ We know (see for instance \cite[Proposition 1.6.6]{Hel84} or \cite[Lemma 2.4]{YZ04}) that on this space the following norms are equivalent: for all $r$ there exist positive constants $c_r$ and $C_r$ such that \begin{equation} \label{normeHr} c_r \|f\|_{\mathcal{H}^r} \leq \|f\|_{H^r} + \|\langle x \rangle^{r} f\|_{L^2} \leq C_r \|f\|_{\mathcal{H}^r}. \end{equation} Moreover, for $r > 1$ in dimension 2, $\mathcal{H}^r$ is an algebra: there exists $C_r$ such that \begin{equation} \label{algebra} \forall\, f,g \in \mathcal{H}^r \quad \|fg\|_{\mathcal{H}^r} \leq C_r \|f\|_{\mathcal{H}^r} \|g\|_{\mathcal{H}^r}. \end{equation} The space $\mathcal{H}^r$ can be described in terms of the coefficients of $f$ in the basis of special Hermite functions $\{\varphi_{n,m},\ n \geq 0,\ -n \leq m \leq n ,\ n + m \mbox{ even }\}$, which is an orthonormal basis of $L^2(\mathbb{R}^2)$ satisfying (see \cite[Proposition 4.1]{GHT16}) $$ H \varphi_{n,m} = 2 (n +1) \varphi_{n,m} ,\quad L \varphi_{n,m} = m \varphi_{n,m}, $$ where $L = i \,x \times \nabla$ is the angular momentum operator.
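As a sanity check (purely illustrative, and not used in the proofs), the orthonormality relation \eqref{lambdan}, the values of Proposition \ref{prop21} and the three-term relation \eqref{eqy2} can all be verified numerically from the definition \eqref{basishn}; the short Python snippet below does so by radial quadrature.

```python
import numpy as np

def laguerre(k, x):
    # standard Laguerre polynomials L_k^{(0)} through the three-term recurrence
    Lm1, L = np.zeros_like(x), np.ones_like(x)
    for j in range(k):
        Lm1, L = L, ((2*j + 1 - x)*L - j*Lm1)/(j + 1)
    return L

def h(k, r):
    # radial Hermite functions h_k(x) = L_k^{(0)}(|x|^2) e^{-|x|^2/2} / sqrt(pi)
    return laguerre(k, r**2)*np.exp(-r**2/2)/np.sqrt(np.pi)

r = np.linspace(0.0, 12.0, 200001)
dr = r[1] - r[0]

def integrate(f):
    # integral over R^2 of a radial function: dx = 2 pi r dr (trapezoidal rule)
    g = 2*np.pi*r*f
    return dr*(np.sum(g) - 0.5*(g[0] + g[-1]))

# orthonormality: int h_k h_n dx = delta_{kn}
assert abs(integrate(h(0, r)**2) - 1.0) < 1e-8
assert abs(integrate(h(3, r)*h(1, r))) < 1e-8

# inner products of the Proposition above
assert abs(integrate(h(1, r)*h(0, r)**2) - 2/(9*np.sqrt(np.pi))) < 1e-8
assert abs(integrate(h(1, r)**2*h(0, r)**2) - 1/(4*np.pi)) < 1e-8

# three-term relation: |x|^2 h_k = -(k+1) h_{k+1} + (2k+1) h_k - k h_{k-1}
for k in (1, 2, 5):
    lhs = r**2*h(k, r)
    rhs = -(k + 1)*h(k + 1, r) + (2*k + 1)*h(k, r) - k*h(k - 1, r)
    assert np.max(np.abs(lhs - rhs)) < 1e-10
print("radial Hermite identities verified")
```

The grid and tolerances are chosen loosely; the Gaussian decay makes the truncated trapezoidal rule accurate far beyond the required precision here.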
Then every function of $\mathcal{H}^r$ expands into $$ f = \sum_{n = 0}^{+\infty} \sum_{m = -n}^n c_{n,m} \varphi_{n,m}\quad\boldsymbol{m}ox{ with } \quad \mathbb{N}orm{f}{\mathcal{H}^r}^2 = \sum_{n = 0}^{+\infty} \sum_{m = -n}^n (2n + 2)^r |c_{n,m}|^2. $$ Now {\em radial functions} are $f$ such that $c_{n,m} = ( f, \varphi_{n,m})_{L^2} = 0$ when $m \neq 0$. Then we have $\varphi_{2k,0} = (-1)^k h_k$ defined in \eqref{basishn}. We thus define $$ \mathcal{H}^r_{\mathrm{rad}} = \{ \exists \, c_n \in \mathbb{C}^\mathbb{N}\, |\, f = \sum_{n = 0}^\infty c_n h_n \in \mathcal{H}^r \}, $$ and for $f = \sum_{n = 0}^\infty c_n h_n \in \mathcal{H}^r_{\mathrm{rad}} $, we have $$ \mathbb{N}orm{f}{\mathcal{H}^r}^2 = \sum_{n = 0}^{+\infty} (4n + 2)^r |c_{n}|^2. $$ Moreover, for some constant $c_r$ and $C_r$ we have $$ c_r \mathbb{N}orm{f}{\mathcal{H}^r}^2 \leq \sum_{n = 0}^\infty \langle n \rangle^{r} |c_n|^2 \leq C_r \mathbb{N}orm{f}{\mathcal{H}^r}^2. $$ For all $t\in \mathbb{R}$, we can define action of the semi-group $e^{i t H}$ by the formula $$ e^{i t H } f = \sum_{n = 0}^\infty e^{i t \lambda_n }c_n h_n, $$ where the $\lambda_n = 4 n + 2$ are the eigenvalues of $H$ on radial functions (see \eqref{lambdan}). Note that we have \begin{equation} \label{eqflow} \forall\, t \in \mathbb{R},\quad \mathbb{N}orm{e^{it H} f}{\mathcal{H}^r} = \mathbb{N}orm{f}{\mathcal{H}^r}. \end{equation} The operator $f \mapsto |x|^2 f$ acts on functions in $\mathcal{H}^r$, and estimate \eqref{eqy2} implies the following estimate: \begin{lemma} For $r \geq 0$, there exists $C_r$ such that for all $f \in \mathcal{H}_{{\mathrm{rad}}}^{r+1}$, we have \begin{equation} \label{y2op} \mathbb{N}orm{|x|^2 f}{\mathcal{H}^r} \leq C_r \mathbb{N}orm{f}{\mathcal{H}^{r+1}}. \end{equation} \end{lemma} \begin{proof} Let $f = \sum_{n\geq 0} c_h h_n \in \mathcal{H}_{{\mathrm{rad}}}^{r+1}$. 
We have by using \eqref{eqy2} $$ |x|^2 f = \sum_{n \geq 0 } c_n (-(n+1) h_{n+1} + (2n +1) h_n - n h_{n-1}) = \sum_{n \geq 0} d_n h_n $$ with (defining $c_{n} = 0$ for $n < 0$) $$ d_n = - n c_{n-1} + (2 n +1) c_n - (n+1) c_{n+1}. $$ Hence we have $$ \| |x|^2 f\|_{\mathcal{H}^r}^2 = \sum_{n \geq 0} \langle n \rangle^r |d_n|^2 \leq C \sum_{n \geq 0} \langle n \rangle^r ( n+1 )^2 ( |c_{n-1}|^2 + |c_n|^2 + |c_{n+1}|^2) $$ for some numerical constant $C$, and we easily deduce the result. \end{proof} For two operators $A$ and $B$ acting on functions $f \in \mathcal{H}^r$, we set $[A,B]f = (AB - BA)f$. \begin{lemma} \label{commutator} For $r \geq 0$ there exists a constant $C_r$ such that for all $f = \sum_{n \geq 0}c_n h_n \in \mathcal{H}^{r}_{{\mathrm{rad}}}$, \begin{equation} \label{eqcom} \|[H^{r/2},|x|^2] f\|_{L^2} \leq C_r \|f\|_{\mathcal{H}^{r}}. \end{equation} \end{lemma} \begin{proof} From \eqref{eqy2} and \eqref{lambdan} we have \begin{eqnarray*} |x|^2 H^{r/2} h_k(x) &=& (4k +2)^{r/2} |x|^2 h_k(x) \\ &=& (4k +2)^{r/2} (-(k+1) h_{k+1}(x) + (2k +1) h_k(x) - k h_{k-1}(x)) , \end{eqnarray*} and $$ H^{r/2} |x|^2 h_k(x) = -(k+1)(4k +6)^{r/2} h_{k+1}(x) + (2k +1)(4k +2)^{r/2} h_k(x) - (4 k - 2)^{r/2} k h_{k-1}(x). $$ Hence, since for all $k,\ell \geq 1$, $r \geq 0$ and some constant $C_r$ independent of $k$ and $\ell$ we have \begin{equation} \label{eouais} |k^{r/2} - \ell^{r/2} | \leq C_r |k - \ell| ( k^{r/2 - 1} + \ell^{r/2 - 1}), \end{equation} we deduce that $$ (H^{r/2} |x|^2 - |x|^2 H^{r/2}) h_k = \alpha_{k} h_{k+1} + \mu_k h_k + \beta_k h_{k-1} $$ with $|\mu_k| + |\alpha_k| + |\beta_k| \leq C_r \langle k \rangle^{r/2}$ for some constant $C_r$ independent of $k$.
Hence if $v = \sum_{n \geq 0} c_n h_n \in \mathcal{H}^r_{{\mathrm{rad}}}$, we have $[H^{r/2},|x|^2] v = \sum_{n} d_n h_n$ with $$ d_n = c_{n-1}\alpha_{n-1} + c_n \mu_n + c_{n+1} \beta_{n+1} $$ and hence $$ |d_n|^2 \leq C_r \langle n \rangle^{r} \big( |c_{n-1}|^2 + |c_{n}|^2 + |c_{n+1}|^2\big) $$ for some constant $C_r$ independent of $n$, from which we easily deduce \eqref{eqcom}. \end{proof} \subsection{CR operator} The CR trilinear operator is given by $$ (f_1,f_2,f_3) \mapsto \mathcal{T}(f_1,f_2,f_3)(z) = \int_{\mathbb{R}^2} \int_{\mathbb{R}} f_1(x + z) \overline{f_2(x + \lambda x^{\perp} + z )} f_3(\lambda x^{\perp} + z) \, \mathrm{d} \lambda \mathrm{d} x, $$ where for $x = (x_1,x_2)\in\mathbb{R}^2$, we set $x^\perp = (-x_2,x_1)$. With this trilinear operator is associated the energy \begin{equation} \label{energyCR} \mathcal{E}(f_1,f_2,f_3,f_4) = \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \int_{\mathbb{R}} f_1(x + z) f_2(\lambda x^{\perp} + z) \overline{f_3(x + \lambda x^{\perp} + z )} \, \overline{f_4(z)} \, \mathrm{d} \lambda \mathrm{d} x \mathrm{d} z \end{equation} and the (CR) equation introduced in \cite{FGH16} \begin{equation} \tag{CR} i \partial_t f = \mathcal{T}(f,f,f). \end{equation} We recall some properties of the CR operator that can be found in \cite{FGH16} and \cite{GHT16}. First it is invariant by Fourier transform: $$ \mathcal{F} ( \mathcal{T}(f_1,f_2,f_3) ) = \mathcal{T}( \widehat f_1,\widehat f_2,\widehat f_3) \quad \mbox{and} \quad \mathcal{E}(f_1,f_2,f_3,f_4) = \mathcal{E}(\widehat f_1,\widehat f_2,\widehat f_3, \widehat{f_4}). $$ Moreover, this operator has many symmetries, which are summarized in Table \ref{table1}. In this table, $Q$ denotes a self-adjoint operator {\em commuting} with $\mathcal{T}$ in the sense of Lemma 2.4 in \cite{GHT16}, {\em i.e.} $$ Q( \mathcal{T}(f_1,f_2,f_3) ) = \mathcal{T}( Q f_1,f_2,f_3) - \mathcal{T}( f_1, Q f_2, f_3) + \mathcal{T}(f_1,f_2,Q f_3), $$ as soon as $f_1,f_2$ and $f_3$ are in the domain of $Q$.
Then for all $\lambda \in \mathbb{R}$, we have $$ e^{i \lambda Q}\mathcal{T}(f_1,f_2,f_3) = \mathcal{T}( e^{i \lambda Q}f_1,e^{i \lambda Q}f_2,e^{i \lambda Q}f_3). $$ With such an operator is associated an invariant $\int_{\mathbb{R}^2} (Qu)(x) \overline{u(x)} \mathrm{d} x $ of the (CR) equation. We also use the notation $R_\theta(x_1,x_2) = (x_1 \cos\theta - x_2 \sin \theta, x_1 \sin\theta + x_2 \cos \theta)$ for the rotation of angle $\theta$. Finally, the trilinear operator $\mathcal{T}$ is continuous on $\mathcal{H}^r$, as can be immediately seen from formula (2.4) in \cite{GHT16} as well as \cite[Proposition 7.1]{FGH16}: we have for $r \geq 0$ \begin{equation} \label{contiCR} \|\mathcal{T}(f_1,f_2,f_3)\|_{\mathcal{H}^r} \leq C_r \|f_1\|_{\mathcal{H}^r} \|f_2\|_{\mathcal{H}^r} \|f_3\|_{\mathcal{H}^r} \end{equation} for some constant $C_r$ independent of $f_1$, $f_2$ and $f_3$. \renewcommand{\arraystretch}{1.2} \begin{table} \label{table1} \begin{center} \begin{tabular}{|c|c|c|} \hline Operator $Q$ & Conserved quantity & Corresponding symmetry \\ commuting with $\mathcal{T}$ & $\int (Q u) \bar u $ & $u \mapsto e^{i \lambda Q} u$ \\ \hline $1$ & $\int |u|^2$ & $u \mapsto e^{i \lambda } u$\\ \hline $x_1$ & $\int x_1 |u|^2$ & $u \mapsto e^{i \lambda x_1} u$\\ \hline $x_2$ & $\int x_2 |u|^2$ & $u \mapsto e^{i \lambda x_2} u$\\ \hline $|x|^2$ & $\int |x|^2 |u|^2$ & $u \mapsto e^{i \lambda |x|^2} u$\\ \hline $i \partial_{x_1}$ & $\int ( i \partial_{x_1} u) \bar u $ & $u \mapsto u(\, \cdot \, + \lambda e_1) $\\ \hline $i \partial_{x_2}$ & $\int ( i \partial_{x_2} u) \bar u$ & $u \mapsto u(\, \cdot \, + \lambda e_2)$\\ \hline $\Delta$ & $\int |\nabla u|^2$ & $u \mapsto e^{i \lambda \Delta} u$\\ \hline $H$ & $\int (Hu) \bar u$ & $u \mapsto e^{i \lambda H} u$\\ \hline $L = i ( x \times \nabla) $ & $\int (Lu) \bar u$ & $u \mapsto u \circ R_\lambda$\\ \hline $i ( x \cdot \nabla + 1)$ & $\int i (x \cdot \nabla + 1) u \bar u$ & $u \mapsto \lambda u (\lambda \, \cdot \,)$\\
\hline \end{tabular} \end{center} \caption{Symmetries of the CR equation} \end{table} Finally, for a function $f \in \mathcal{H}^r$ with $r\geq 0$, we define the operator \begin{equation} \label{Tfu} \mathcal{T}[f] u := \mathcal{T}(f,f,u). \end{equation} \subsection{CR operator on radial functions} Following again \cite[Section 6.6]{GHT16}, if the $h_n$ denote the Laguerre-Hermite functions \eqref{basishn}, then we have for all $n_1,n_2$ and $n_3 \in \mathbb{N}$, \begin{equation} \label{CRhn} \mathcal{T}( h_{n_1}, h_{n_2}, h_{n_3}) = \chi_{n_1 n_2 n_3 n_4} h_{n_4}, \quad n_4 = n_1 - n_2 + n_3, \end{equation} where \begin{equation} \label{piaz} \chi_{n_1 n_2 n_3 n_4} = \pi^2 \int_{\mathbb{R}^2} h_{n_1}(x)h_{n_2}(x)h_{n_3}(x)h_{n_4}(x) \mathrm{d} x. \end{equation} Using \eqref{1100} we obtain $\chi_{1100} = \pi^2 \|h_1 h_0\|_{L^2}^2 = \frac{\pi}{4}$. \section{Modulation and the pseudo conformal symmetry} We set up in this section the basic algebraic facts and energy estimates associated with the modulation of the {\em unperturbed} linear flow \eqref{defH}. The essential algebraic fact is the existence of an explicit pseudo-conformal symmetry, which generates the modulated bubbles and, more importantly, the leading order finite dimensional dynamical system to be perturbed in a {\em resonant way}. \subsection{Commutators formulae} First, for $u \in \mathcal{H}^r$ and $N > 0$, we define the operator \begin{equation} \label{ScN} (\mathcal{S}_{N} u)(x) = \frac{1}{N} u\Big( \frac{x}{N}\Big). \end{equation} Note that in dimension 2, we have \begin{equation} \|\mathcal{S}_N u \|_{L^2} = \|u\|_{L^2} \quad \mbox{and}\quad \widehat{\mathcal{S}_N u}= \mathcal{S}_{\frac1N} \widehat u. \end{equation} We will also modulate by using the operators $e^{i m |x|^2}$ and $e^{i c \Delta}$ for real numbers $c$ and $m$. Note that all these transformations preserve the radial symmetry.
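Both identities can be confirmed numerically. The sketch below (an illustrative check only, assuming SciPy is available for the Bessel function $J_0$) uses the fact that, for the normalization of $\mathcal{F}$ fixed above, the Fourier transform of a radial function is the Hankel transform $\widehat u(\rho) = \int_0^\infty u(r) J_0(r\rho)\, r \,\mathrm{d} r$, and tests the scaling identity on $h_0$, which is its own Fourier transform:

```python
import numpy as np
from scipy.special import j0

def h0(r):
    # ground state h_0(x) = e^{-|x|^2/2}/sqrt(pi), equal to its own Fourier transform
    return np.exp(-r**2/2)/np.sqrt(np.pi)

def SN(u, N):
    # scaling operator (S_N u)(x) = u(x/N)/N acting on radial profiles
    return lambda r: u(r/N)/N

r = np.linspace(0.0, 40.0, 400001)
dr = r[1] - r[0]

def fourier_radial(u, rho):
    # 2D Fourier transform (2 pi)^{-1} int u e^{-i x.xi} dx of a radial u,
    # written as a Hankel transform of order zero
    g = u(r)*j0(r*rho)*r
    return dr*(np.sum(g) - 0.5*(g[0] + g[-1]))

def norm2(u):
    # squared L^2 norm of a radial function: 2 pi int |u|^2 r dr
    return 2*np.pi*dr*np.sum(u(r)**2*r)

N = 2.5
u = SN(h0, N)

# L^2 isometry of S_N
assert abs(norm2(u) - norm2(h0)) < 1e-6

# hat(S_N u) = S_{1/N} hat(u), tested on u = h_0
for rho in (0.0, 0.7, 1.9):
    assert abs(fourier_radial(u, rho) - SN(h0, 1.0/N)(rho)) < 1e-6
print("scaling identities verified")
```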
We now collect here the following commutator relations: \begin{lemma}[Commutators] For $c$ and $m$ real numbers, we have the relations \begin{align} &e^{ - i m |x|^2 } \Delta e^{ i m |x|^2} = \Delta - 4 m^2 |x|^2 + 4 i m ( 1 + \Lambda), \label{C1}\\ &e^{- i c \Delta} |x |^2 e^{i c \Delta} = |x|^2 - 4 c^2 \Delta - 4 i c ( 1 + \Lambda), \label{C2} \\ & e^{ - i m |x|^2} ( 1 + \Lambda) e^{i m |x|^2} = (1 + \Lambda) + 2 i m |x|^2,\label{C3}\\ &e^{- i c \Delta} ( 1 + \Lambda) e^{i c \Delta} = (1 + \Lambda) - 2 i c \Delta. \label{C4} \end{align} \end{lemma} \begin{proof} We have \begin{equation} \label{nab} \nabla_x (e^{ i m |x|^2 } u) = e^{ i m |x|^2 } (\nabla u + 2i m x u), \end{equation} and hence \begin{eqnarray*} \nabla_x \cdot \nabla_x (e^{ i m |x|^2 } u) &=& e^{ i m |x|^2} (2 i m x) \cdot (\nabla u +2i m x u) \\ &&+ e^{ i m |x|^2 } ( \Delta u + 2i m \nabla \cdot ( x u))\\ &=&e^{ i m |x|^2 } \big( 2 i m x \cdot \nabla - 4 m^2 |x|^2 + \Delta + 4im + 2 i m x \cdot \nabla \big) u, \end{eqnarray*} which yields the first relation. In terms of the Fourier transform, we have $$ (\widehat{(1 + \Lambda) u })(\xi) = \widehat{u}(\xi) - \nabla_\xi \cdot ( \xi \widehat u (\xi) ) = - ((1 + \Lambda) \widehat u)(\xi), $$ which we can write $\mathcal{F}^{-1} (1+ \Lambda) \mathcal{F} = - (1 + \Lambda)$. Now using that $\mathcal{F}^{-1} |\xi|^2 \mathcal{F} = - \Delta$ and $\mathcal{F}^{-1} e^{ i m \Delta } \mathcal{F} = e^{ -i m |\xi|^2 }$, the first relation implies $$ - e^{ i m \Delta } |\xi|^2 e^{ - i m \Delta } = - |\xi|^2 + 4 m^2 \Delta - 4 i m ( 1 + \Lambda), $$ which yields the second relation after taking $m = - c$. Now from \eqref{nab}, we obtain $$ \Lambda (e^{ i m |x|^2} u) = e^{ i m |x|^2} (\Lambda u + 2i m |x|^2 u), $$ and hence the third relation. The fourth is obtained by Fourier transform. \end{proof}
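The identities \eqref{C1} and \eqref{C3} are exact algebraic relations and can be double-checked symbolically; the following sketch (a verification aid only, assuming SymPy) applies both sides to a generic smooth function of $(x_1,x_2)$, the relations \eqref{C2} and \eqref{C4} then following by the Fourier conjugation argument of the proof:

```python
import sympy as sp

x1, x2, m = sp.symbols('x1 x2 m', real=True)
u = sp.Function('u')(x1, x2)
r2 = x1**2 + x2**2
phase = sp.exp(sp.I*m*r2)

lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)   # Laplacian
Lam = lambda f: x1*sp.diff(f, x1) + x2*sp.diff(f, x2)   # Lambda = x . nabla

# (C1): e^{-im|x|^2} Delta (e^{im|x|^2} u) = (Delta - 4 m^2 |x|^2 + 4 i m (1 + Lambda)) u
lhsC1 = sp.expand(lap(phase*u)/phase)
rhsC1 = sp.expand(lap(u) - 4*m**2*r2*u + 4*sp.I*m*(u + Lam(u)))
assert sp.simplify(lhsC1 - rhsC1) == 0

# (C3): e^{-im|x|^2} (1 + Lambda)(e^{im|x|^2} u) = ((1 + Lambda) + 2 i m |x|^2) u
lhsC3 = sp.expand((phase*u + Lam(phase*u))/phase)
rhsC3 = sp.expand(u + Lam(u) + 2*sp.I*m*r2*u)
assert sp.simplify(lhsC3 - rhsC3) == 0
print("(C1) and (C3) verified symbolically")
```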
\subsection{Energy estimates through modulation} We take $c$, $m$ and $N$ as in the previous section and we are interested in estimating the Sobolev norms of $u = \mathcal{S}_N e^{i m |x|^2} e^{i c \Delta } v$ in terms of the norms of $v$. We will need the following lemma, whose proof can be found for instance in \cite{DR}. \begin{lemma}[Fourier transform of Gaussians] Let $u = u_1 + i u_2 \in \mathbb{C}$ with $\mathrm{Re}\, u > 0$. Then we have \begin{equation} \label{foufoug} \mathcal{F}\Big( e^{- u| \, \cdot \, |^2} \Big) (\xi) = \frac{1}{2 u} e^{- \frac{|\xi|^2}{4u}}. \end{equation} \end{lemma} \begin{proposition}[Energy estimates through modulations] \label{lemnorms} Let $r \in \mathbb{N}$. Then there exists $C_r$ such that for all $v \in \mathcal{H}^r$, all real numbers $m$, $c$ and all $N > 0$, we have \begin{equation} \label{boun} \|\mathcal{S}_N e^{i m |x|^2} e^{i c \Delta} v \|_{\mathcal{H}^r} \leq C_r ( \langle m \rangle^{r} + \langle c \rangle^r) \max\big(N,\frac{1}{N}\big)^r \|v\|_{\mathcal{H}^r}. \end{equation} If moreover $v = h_0$, then with $$ \eta = 1 + 2ic \quad \mbox{and} \quad z = 1 + 4cm - 2 i m, $$ we have \begin{eqnarray*} \| |x|^{r} \mathcal{S}_N e^{i m |x|^2} e^{i c \Delta} h_0\|_{L^2}^2 &=& N^{2r } |\eta|^{2r} \int_{\mathbb{R}^2} |x|^{2r} |h_0(x)|^{2}\mathrm{d} x \\ \| |\nabla|^{r} \mathcal{S}_N e^{i m |x|^2} e^{i c \Delta} h_0\|_{L^2}^2 &=& \frac{1}{N^{2r}} |z|^{2r} \int_{\mathbb{R}^2} |x|^{2r} |h_0(x)|^{2}\mathrm{d} x \end{eqnarray*} and in particular \begin{equation} \label{normegaussienne} \|\mathcal{S}_N e^{i m |x|^2} e^{i c \Delta} h_0\|_{\mathcal{H}^1}^2 = N^{2 }\big( 1 + 4 c^2\big) + \frac{1}{N^{2}} \big((1 + 4cm)^2 + 4m^2\big).
\end{equation} \end{proposition} \begin{proof} By homogeneity, $$ \|\mathcal{S}_N v\|_{H^r}^2 \leq C_r \max\Big(1,\frac{1}{N^2}\Big)^r \|v\|_{H^r}^2 \quad\mbox{and} \quad \|\langle x \rangle^r \mathcal{S}_N v \|_{L^2}^2 \leq C_r \max(1,N^2)^r \|\langle x \rangle^r v\|_{L^2}^2, $$ and hence using \eqref{normeHr}, $$ \|\mathcal{S}_N v\|_{\mathcal{H}^r}^2 \leq C_r \max\Big(N^2,\frac{1}{N^2}\Big)^r \|v\|_{\mathcal{H}^r}^2. $$ Moreover, we have $\|\langle x \rangle^r e^{i m |x|^2} v\|_{L^2} = \|\langle x \rangle^r v\|_{L^2}$ and, as for $i = 1,2$, $$ \partial_{x_i} e^{i m |x|^2} v = e^{ i m |x|^2}( 2 i m x_i v + \partial_{x_i} v), $$ we have the estimate $$ \|\langle\nabla\rangle e^{i m |x|^2} v \|_{L^2} \leq C \langle m \rangle \| \langle x \rangle v \|_{L^2} + \|\langle \nabla\rangle v\|_{L^2}. $$ By iterating this estimate, we obtain $$ \|e^{i m |x|^2} v \|_{\mathcal{H}^r} \leq C \langle m \rangle^r \|v\|_{\mathcal{H}^r}, $$ and hence after Fourier transform $\|e^{i c \Delta} v \|_{\mathcal{H}^r} \leq C \langle c \rangle^r \|v\|_{\mathcal{H}^r}$, which yields \eqref{boun}. To compute the norm of the modulated Gaussian, using \eqref{foufoug} we calculate that \begin{eqnarray*} e^{i m |x|^2 } e^{i c \Delta} h_0 &=& \frac{1}{\sqrt{\pi} } e^{i m |x|^2 } \mathcal{F} ( e^{ - (\frac{1}{2}+ i c) |\, \cdot\, |^2 }) = \frac{1}{\eta \sqrt{\pi}} e^{i m |x|^2 } e^{- \frac{1}{2 \eta} |x|^2 } = \frac{1}{\eta \sqrt{\pi}} e^{- \frac{z}{2 \eta} |x|^2 }. \end{eqnarray*} We thus have $$ \mathcal{S}_N e^{i m |x|^2 } e^{i c \Delta} h_0 = \frac{1}{N \eta \sqrt{\pi}} e^{- \frac{z}{2 \eta N^2} |x|^2 } \quad \mbox{and} \quad \mathcal{F} \mathcal{S}_N e^{i m |x|^2 } e^{i c \Delta} h_0 = \frac{N}{z \sqrt{\pi}} e^{- \frac{N^2\eta}{2 z} |\xi|^2 }. $$ Note that we have $\mathrm{Re}(z \overline{\eta}) = 1$.
Hence \begin{eqnarray*} \int_{\mathbb{R}^2} |x|^{2r} |\mathcal{S}_N e^{i m |x|^2 } e^{i c \Delta} h_0|^2 \mathrm{d} x &=& \frac{1}{\pi} \int_{\mathbb{R}^2}\frac{|x|^{2r}}{N^2 |\eta|^2} e^{- \frac{1}{|\eta|^2 N^2} |x|^2 } \mathrm{d} x \\&=& N^{2r} |\eta|^{2r} \frac{1}{\pi} \int_{\mathbb{R}^2}|x|^{2r} e^{- |x|^2 } \mathrm{d} x, \end{eqnarray*} which yields the first identity. The second one is obtained by Fourier transform. \end{proof} \subsection{Modulation equation} We consider the equation \begin{equation} \label{eqH} i \partial_t u = - \Delta u + |x|^2 u. \end{equation} \begin{proposition}[Modulated pseudo conformal symmetry] Let $L(t) > 0$, $\gamma(t)$ and $b(t)$ be real functions defined on $\mathbb{R}_+$. We set \begin{equation} \label{prince} u = e^{i \gamma} \mathcal{S}_{L} w, \quad w(t,y) = e^{- i \frac{b|y|^2}{4}} v(t,y) \qquad \mbox{and}\qquad \displaystyle \frac{\mathrm{d} s}{\mathrm{d} t} = \displaystyle\frac{1}{L^2}. \end{equation} Assume that the function $t \mapsto s(t)$ is invertible from $\mathbb{R}$ to itself. Then $u(t,x)$ solves \eqref{eqH} if and only if $v(s,y)$ solves the equation \begin{equation} \label{eqH2} i \partial_s v + \Delta v - \gamma_s v + \Big( - L^4+ \frac{b_s}{4} - \frac{b^2}{4} - \frac{L_s}{L}\frac{b}{2} \Big) |y|^2 v - i \Big( \frac{L_s}{L} + b \Big)\Big(1 + \Lambda \Big) v = 0, \end{equation} where $L_s = \frac{\mathrm{d}}{\mathrm{d} s}( L(t(s)) )$, with similar definitions for $b_s$ and $\gamma_s$.
\end{proposition} \begin{proof} We compute \begin{eqnarray*} i \partial_t u = i \partial_t e^{i \gamma} \mathcal{S}_L w &=& i \partial_t \Big( \frac{1}{L} e^{i \gamma} w(t,\frac{x}{L}) \Big) = e^{i \gamma}\mathcal{S}_L \Big( - i \frac{L_t}{L} ( 1+ \Lambda ) w + i \partial_t w - \gamma_t w \Big) \\ &=& \frac{e^{i \gamma}}{L^{2}} \mathcal{S}_L \Big( - i \frac{L_s}{L} ( 1+ \Lambda ) w + i \partial_s w - \gamma_s w \Big), \end{eqnarray*} with the notation $L_t = \partial_t L = \frac{1}{L^2} L_s$ and similar notations for the derivatives of $\gamma$ and $b$. Hence $u = e^{i \gamma} \mathcal{S}_L w$ is a solution of \eqref{eqH} if and only if $$ \frac{1}{L^{2}}\Big( - i \frac{L_s}{L} ( 1+ \Lambda ) w + i \partial_s w - \gamma_s w \Big) = - \mathcal{S}_L^{-1} \Delta \mathcal{S}_L w + \mathcal{S}_L^{-1} |x|^2 \mathcal{S}_L w, $$ and we obtain the equation $$ - i \frac{L_s}{L} \Big(1+ \Lambda \Big)w + i \partial_s w - \gamma_s w + \Delta_y w (s,y) - L^4 |y|^2 w = 0. $$ Now as $w (s,y) = e^{- i \frac{b |y|^2}{4}} v(s,y)$, we have $$ i \partial_s w = e^{- i \frac{b|y|^2}{4}} \Big( \frac{b_s}{4} |y|^2 v + i \partial_s v\Big). $$ Hence we obtain the equation $$ i \partial_s v + \frac{b_s}{4} |y|^2 v - \gamma_s v - i\frac{L_s}{L} e^{i\frac{b|y|^2}{4} }( 1 + \Lambda) \Big(e^{- i \frac{b|y|^2}{4}} v\Big) + e^{i\frac{b|y|^2}{4} } \Delta \Big(e^{ - i\frac{b|y|^2}{4} } v\Big) - L^4 |y|^2 v = 0, $$ and we obtain the result from \eqref{C1} and \eqref{C3} with $m = - \frac{b}{4}$. \end{proof} \begin{remark} In the following, we will always be in situations where $s \mapsto t(s)$ is invertible. We will thus write, by a slight abuse of notation, $(b(s),L(s),\gamma(s))$ for the functions $b(t(s))$, $L(t(s))$, and $\gamma(t(s))$.
\end{remark} \section{Hamiltonian structures of the modulation equations} From \eqref{eqH2}, the explicit choice \begin{equation} \label{crux} \left| \begin{array}{l} \displaystyle - L^4 + \frac{b_s}{4} - \frac{b^2}{4} - \frac{L_s}{L}\frac{b}{2} = - 1 \\[2ex] \displaystyle \frac{L_s}{L} + b = 0 \end{array} \right. \end{equation} maps \eqref{eqH} onto $$ i \partial_s v = H v + \gamma_s v, $$ for which $v = h_k$ and $\gamma_s = - \lambda_k$ provide stationary solutions. The dynamical system \eqref{crux} can be integrated explicitly, and the obtained transformation \eqref{prince} is nothing but the classical pseudo conformal symmetry (or lens transform) of \eqref{defH}. Our aim in this section is to recall the classical Hamiltonian setting to integrate \eqref{crux}, which prepares for the perturbative analysis performed in section \ref{sectionmodualtib}. \subsection{Darboux-Lie transform} The dynamical system \eqref{crux} can be written $$ \frac{\mathrm{d}}{\mathrm{d} s} \begin{pmatrix} L \\ b \end{pmatrix} = \begin{pmatrix} - b L \\ -b^2 - 4 + 4 L^4 \end{pmatrix} = 2L^3 \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \partial_L E \\ \partial_b E \end{pmatrix}, $$ where $$ E = \frac{1}{L^2}\Big(\frac{b^2}{4} + 1\Big) + L^2. $$ We want to write the previous system in canonical Hamiltonian form (such a change of coordinates is called a Darboux-Lie transformation). \begin{lemma} \label{lem30} Let $(L,b) \in (0,+\infty) \times \mathbb{R}$, let $H(L,b)$ be a given function, and let the non canonical Hamiltonian system $$ L_s = - 2L^3 \partial_b H(L,b) \quad \mbox{and} \quad b_s = 2L^3 \partial_L H(L,b) $$ be given.
Then the change of variables $(L,b) \mapsto (\ell,b)$, where $$ \ell = \frac{1}{4L^2}, $$ transforms the system into a canonical Hamiltonian system of the form $$ \frac{\mathrm{d}}{\mathrm{d} s} \begin{pmatrix} b \\ \ell \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \partial_b K \\ \partial_\ell K \end{pmatrix} \quad \mbox{where} \quad K(b,\ell) = H(L,b). $$ \end{lemma} \begin{proof} With $K(\ell,b) = H(\frac{1}{2 \sqrt{\ell}},b)$ we calculate that $$ \ell_s = - \frac{L_s}{2L^3} = \partial_b H(L,b) = \partial_{b} K(\ell,b), $$ and moreover $$ \partial_\ell K(\ell,b) = - \frac{1}{4 \ell^{3/2}} \partial_L H\Big(\frac{1}{2 \sqrt{\ell}},b\Big) = - 2L^3 \partial_L H(L,b) = - b_s, $$ which shows the result. \end{proof} \subsection{Action-angle variables} In the canonical variables $(b,\ell) \in \mathbb{R} \times (0,+\infty) $, the Hamiltonian associated with the system \eqref{crux} is given by $$ E(b,\ell) = \ell b^2 + 4 \ell + \frac{1}{4\ell} \geq 2, $$ with equality only at $(b,\ell) = (0,\frac{1}{4})$, and where, with a slight abuse of notation, we denote by $E$ the Hamiltonian in the variables $(\ell,b)$ as well as in the variables $(L,b)$. The system \eqref{crux} is thus equivalent to the system \begin{equation} \label{cruxell} \left| \begin{array}{rcll} b_s &=& - \partial_{\ell} E & = - b^2 - 4 + \frac{1}{4 \ell^2}\\[2ex] \ell_s &=& \partial_{b}E &= 2 \ell b \end{array} \right. \end{equation} \begin{proposition} \label{propAV} There exists a symplectic change of variables $(b,\ell) \mapsto (a,\theta)$ from the set $ \mathbb{R}\times (0,+\infty)$ to $(\frac12,+\infty) \times \mathbb{T}$ such that $$ E (b,\ell) = 4a, \quad \mbox{ so that } \quad \theta_s = 4, \quad a_s = 0, $$ and the flow in the variables $(\theta,a)$ is given by $a(s) = a(0)$ and $\theta(s) = \theta(0) + 4s$.
Moreover, we have the explicit formulae \begin{equation} \label{changeco} \begin{array}{rcl} \ell &=& \displaystyle \frac{1}{4} \left( 2a - \sqrt{4a^2 - 1}\cos(\theta)\right) = \frac{1}{4L^2} \quad \mbox{and}\\[2ex] b \ell &=& \displaystyle\frac{1}{2}\sqrt{4a^2 - 1} \sin(\theta) = \displaystyle \frac{b}{4L^2}. \end{array} \end{equation} Moreover, we can expand $L^2(a,\theta)$ as follows: \begin{equation} \label{fish} L^2 (\theta,a) = 1 + 2 \sum_{n > 0} \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \cos(n \theta). \end{equation} \end{proposition} \begin{proof} We use the method of generating functions, with $b$ as the momentum variable. We write, on the set $\{ b > 0\}$ describing half a period, \begin{equation} \label{bell} b = \sqrt{- 4 + \frac{E}{\ell} - \frac{1}{4\ell^2}} = \partial_\ell S(E,\ell), \end{equation} where for $E > 2$ and $\ell >0$, $$ S(E,\ell) = \int_{\ell_0}^\ell \sqrt{- 4 + \frac{E}{z} - \frac{1}{4z^2}} \, \mathrm{d} z = \int_{\ell_0}^\ell \frac{1}{2z} \sqrt{- 16z^2 + 4Ez - 1} \, \mathrm{d} z. $$ Note that here, $$ \ell \in \Big[\frac{1}{8}(E - \sqrt{E^2 - 4}), \frac{1}{8}(E + \sqrt{E^2 - 4}) \Big]. $$ Now by construction, the change of variables $(b,\ell) \mapsto (E,\psi)$ is symplectic, with $$ \psi = \partial_E S(E,\ell) =\int_{\ell_0}^\ell \frac{1}{\sqrt{- 16z^2 + 4Ez - 1}} \mathrm{d} z = \int_{\ell_0}^\ell \frac{1}{\sqrt{ \frac{E^2}{4} - 1 - ( 4 z - \frac{E}{2})^2 }} \mathrm{d} z. $$ Moreover, we have in view of \eqref{cruxell} and \eqref{bell} $$ \frac{\mathrm{d}}{\mathrm{d} s} \psi(s) = \frac{\ell_s}{\sqrt{- 16\ell^2 + 4E\ell - 1}} = \frac{ b \ell }{\sqrt{- 4\ell^2 + E\ell - \frac{1}{4}}} = 1.
$$ Now we have $$ \psi = \frac{1}{\sqrt{\frac{E^2}{4} - 1}} \int_{\ell_0}^\ell \frac{1}{\sqrt{ 1 - ( \frac{4 z - \frac{E}{2}}{\sqrt{\frac{E^2}{4} - 1}})^2 }} \mathrm{d} z = \frac{1}{\sqrt{\frac{E^2}{4} - 1}} \int_{\ell_0 - \frac{E}{8}}^{\ell - \frac{E}{8}} \frac{1}{\sqrt{ 1 - ( \frac{4 z }{\sqrt{\frac{E^2}{4} - 1}})^2 }} \mathrm{d} z, $$ or $$ \psi = \frac{1}{4} \int_{\frac{4(\ell_0 - \frac{E}{8})}{\sqrt{\frac{E^2}{4} - 1}}}^{\frac{4(\ell - \frac{E}{8})}{\sqrt{\frac{E^2}{4} - 1}}} \frac{1}{\sqrt{ 1 - z^2 }} \mathrm{d} z = \frac{1}{4}\arcsin \frac{4(\ell - \frac{E}{8})}{\sqrt{\frac{E^2}{4} - 1}} + \frac{\pi}{8} \in \Big[0, \frac{\pi}{4}\Big], $$ where we take $\ell_0 = \frac{1}{8}(E - \sqrt{E^2 - 4})$ so that $\frac{4(\ell_0 - \frac{E}{8})}{\sqrt{\frac{E^2}{4} - 1}} = -1$. This change of variable describes half a period. In order to obtain action-angle variables we set $(a,\theta) = ( E/4,4\psi) \in (\frac{1}{2},+\infty) \times \mathbb{T}$, so that $\theta \in [0,2\pi]$ describes a full period, and the Hamiltonian is $E(a,\theta) = 4a $. We thus have $\theta_s = \partial_a E(a,\theta) = 4$, $a_s = - \partial_{\theta} E(a,\theta) = 0$ and $$ \theta = \frac{\pi}{2} + \arcsin \frac{4(\ell - \frac{a}{2})}{\sqrt{4 a^2 - 1}}, $$ and hence $$ \frac{4(\ell - \frac{a}{2})}{\sqrt{4a^2 - 1}} = \sin( \theta - \frac{\pi}{2} ) = - \cos(\theta), $$ which yields $$ \ell = \frac{1}{4} \left( 2a - \sqrt{4a^2 - 1}\cos(\theta)\right) $$ and $$ b = \frac{1}{2\ell} \sqrt{- 16 \ell^2 + 4 E \ell - 1} = \frac{1}{2\ell} \sqrt{ \frac{E^2}{4} - 1 - ( 4 \ell - \frac{E}{2})^2 } = \frac{1}{2\ell}\sqrt{4a^2 - 1} \,\sin(\theta), $$ which is positive for $\theta \in [0,\pi]$. In particular, we have $$ b \ell = \frac{1}{2}\sqrt{4a^2 - 1} \sin(\theta), $$ which shows \eqref{changeco}. To prove \eqref{fish} we expand in Fourier series.
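As a sanity check of this computation (not part of the proof), the identities can be tested numerically: with $\ell = \frac{1}{4}(2a - \sqrt{4a^2-1}\cos\theta)$ and $b = \frac{1}{2\ell}\sqrt{4a^2-1}\sin\theta$, one verifies in floating-point arithmetic that $E(b,\ell) = \ell b^2 + 4\ell + \frac{1}{4\ell} = 4a$ and that Hamilton's equation $\ell_s = 2\ell b$ holds along $\theta = 4s$. The following Python sketch is illustrative only:

```python
import math

def ell_of(a, th):
    return 0.25 * (2*a - math.sqrt(4*a*a - 1) * math.cos(th))

def b_of(a, th):
    return math.sqrt(4*a*a - 1) * math.sin(th) / (2 * ell_of(a, th))

def energy(b, ell):
    return ell*b*b + 4*ell + 1/(4*ell)

for a in (0.75, 2.0, 10.0):
    for k in range(12):
        th = 0.1 + k * math.pi / 6
        ell, b = ell_of(a, th), b_of(a, th)
        # the energy is exactly 4a on the image of the change of variables
        assert abs(energy(b, ell) - 4*a) < 1e-9
        # Hamilton's equation ell_s = 2 ell b along theta = 4 s (finite differences)
        h = 1e-6
        dlds = (ell_of(a, th + 4*h) - ell_of(a, th - 4*h)) / (2*h)
        assert abs(dlds - 2*ell*b) < 1e-6
```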
We have $$ L^2 = \frac{1}{2a - \sqrt{4a^2 - 1}\cos(\theta)} = \frac{1}{2 a}\left( \frac{1 }{1 - \sqrt{1 - \frac{1}{4a^2}} \cos(\theta)}\right). $$ We recall the formula for the Poisson kernel: for $r \in (0,1)$, \begin{eqnarray*} \sum_{n \in \mathbb{Z} } r^{|n|} e^{i n\theta} &=& \frac{1 - r^2}{1 + r^2 -2 r \cos(\theta)} \\ &=& \frac{1 - r^2}{1 + r^2} \left(\frac{1}{ 1- 2 \frac{r}{1+r^2} \cos(\theta)}\right). \end{eqnarray*} To apply the formula, we need to take $$ 2 \frac{r}{1+r^2} = \sqrt{1 - \frac{1}{4a^2}}. $$ Setting $\alpha = 2 \frac{r}{1+r^2} $, or $r^2 + 1 - 2 \frac{r}{\alpha} = 0$, this yields \begin{multline*} r = \frac{1}{\alpha} - \sqrt{\frac{1}{\alpha^2} - 1} = \frac{1}{\alpha}( 1 - \sqrt{1 - \alpha^2})\\ = \frac{1}{\sqrt{1 - \frac{1}{4a^2}}} ( 1 - \sqrt{1 - 1 + \frac{1}{4a^2}}) = \frac{2a} {\sqrt{4a^2 - 1}}\left( 1 - \frac{1}{2a}\right), \end{multline*} and hence $$ r = \frac{2 a - 1 }{\sqrt{4 a^2 - 1}} = \sqrt{\frac{2a - 1}{2a + 1}}, $$ which is indeed in $(0,1)$. Note that we have $$ \frac{1 + r^2}{1 - r^2} = \frac{2a + 1 + 2a - 1}{2a + 1 - 2a + 1} = 2a. $$ This shows \eqref{fish}. \end{proof} \subsection{Resonant bubbles} With the action-angle variables in hand, we are able to completely solve the system \eqref{crux} and provide solutions to the system \eqref{prince}. Taking $\theta = 4s$, we can indeed solve $t$ in terms of $s$ as follows. We have $$ \frac{\mathrm{d} t}{\mathrm{d} s } = L^2 = \frac{1}{4 \ell} = \frac{1}{2a - \sqrt{4a^2 - 1}\cos(4s)}. $$ Using \eqref{fish}, we can integrate in $s$ and obtain $$ t(s) = \displaystyle s + \sum_{n >0} \frac{1}{2n} \left( \frac{E - 2}{E+ 2} \right)^{\frac{n}{2}} \sin(4n s).
$$ We summarize the formulae for the free flow in terms of $E$: $$ \left| \begin{array}{rcl} L^2(s) = \frac{1}{4\ell (s)}&=& \displaystyle\frac{2}{ E - \cos(4s)\sqrt{E^2 - 4}} \\[2ex] b(s) &=& \displaystyle\frac{\sin(4s)}{4 \ell(s)}\sqrt{E^2 - 4} = \displaystyle\frac{2\sin(4s)\sqrt{E^2 - 4}}{ E - \cos(4s)\sqrt{E^2 - 4}} \\[2ex] t(s) &=& \displaystyle s + \sum_{n >0} \frac{1}{2n} \left( \frac{E - 2}{E+ 2} \right)^{\frac{n}{2}} \sin(4n s) \end{array} \right. $$ Note that these formulae, together with the fact that $\frac{\mathrm{d} t}{\mathrm{d} s} > 0$, show that $t(s)$ is invertible, and combined with the change of variable \eqref{prince} they provide solutions to the free flow which all oscillate at the same frequency, whatever the value of $E$. \section{The resonant trajectory} \label{sectionmodualtib} We are now in a position to study small perturbations of \eqref{crux} and to prove the existence of resonant trajectories for a suitable choice of perturbations. Let us stress that we need global-in-time bounds in the presence of highly oscillatory solutions; these will be provided by the systematic use of action-angle variables and the backwards-in-time integration method. \subsection{Perturbed Hamiltonian} Let us consider the action-angle variables defined in Proposition \ref{propAV}. The unperturbed system \eqref{crux} is associated with the Hamiltonian $E(a,\theta) = 4a$. Let us consider a time-dependent perturbation of this Hamiltonian of the form \begin{equation} \label{arnold} H(s,a,\theta) = 4a + P(s,a,\theta). \end{equation} Then the system is given by $$ a_s = - \partial_\theta P(s,a,\theta),\quad \mbox{and}\quad \theta_s = 4 + \partial_{a} P(s,a,\theta).
$$ Now as the change of variable $(b,\ell) \mapsto (a,\theta)$ is symplectic, and in view of the definition of $\ell$, this dynamical system is equivalent to the following system in the coordinates $(L,b)$: $$ \left| \begin{array}{l} - L^4 + \frac{b_s}{4} - \frac{b^2}{4} - \frac{L_s}{L}\frac{b}{2} = - 1 + 2 L^3 \partial_L P(s,b,L)\\ \frac{L_s}{L} + b = - 2 L^2 \partial_b P(s,b,L), \end{array} \right. $$ where $P(s,b,L) = P(s, a,\theta)$ (see Lemma \ref{lem30}). Let $\beta(s)$ be a given function. The solution of the equation \begin{equation} \label{perturb0} \left| \begin{array}{l} - L^4 + \frac{b_s}{4} - \frac{b^2}{4} - \frac{L_s}{L}\frac{b}{2} = - 1 - \beta(s)\\ \frac{L_s}{L} + b = 0, \end{array} \right. \end{equation} is thus a solution of the Hamiltonian system associated with the Hamiltonian $E(b,L) + \frac{\beta(s)}{L^2}$. In the variables $(a,\theta)$ this Hamiltonian is given by \begin{equation} \label{Hams} H(s,a,\theta) = 4a + \beta(s) \left( 2a - \sqrt{4a^2 - 1}\cos(\theta)\right). \end{equation} The dynamical system associated with this Hamiltonian is given by \begin{equation} \label{modbeta} \left| \begin{array}{rcll} \theta_s &=& \displaystyle 4 + 2 \beta(s) - \beta(s) \frac{4a \cos(\theta)}{\sqrt{4a^2 - 1}} &= \partial_a H(s,a,\theta)\\[2ex] a_s &=& - \beta(s) \sqrt{4a^2 - 1}\sin(\theta) &= - \partial_{\theta}H(s,a,\theta). \end{array} \right. \end{equation} \subsection{Construction of the resonant trajectory} We now produce an example of perturbation $\beta(s)$ for which we can construct a resonant solution to \eqref{modbeta}. \begin{proposition}[resonant trajectory] \label{laprop} Let $\beta(s)$ be defined as the function \begin{equation} \label{beta} \beta(s) = - \frac{\sin(4s)}{ s \log (s)}\quad \mbox{for} \quad s > 0.
\end{equation} There exist $s_0 >0$, $(a_0,\theta_0)$ and, for all $k \in \mathbb{N}$, constants $B_k$ such that the solution of \eqref{modbeta} with initial data $(a(s_0),\theta(s_0)) = (a_0,\theta_0)$ exists for all $s \in [s_0,+\infty)$ and satisfies $a(s) \geq 2$ and \begin{equation} \label{hash} \left| \begin{array}{rcl} a(s) &=& \frac{1}{4} \log s + c(s)\\[2ex] \theta(s) &=& 4 s + \psi(s) \end{array} \right. \quad\mbox{with} \quad \forall\, k \in \mathbb{N}, \quad \Big|\frac{\mathrm{d}^k c }{\mathrm{d} s^k} (s)\Big| \leq B_k \frac{\log s}{s} \quad \mbox{and} \quad \Big|\frac{\mathrm{d}^k\psi}{\mathrm{d} s^k} (s)\Big| \leq \frac{B_k}{s}. \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{laprop}] We use the classical method of backwards-in-time integration of the flow to construct the solution with the suitable behaviour at $+\infty$.\\ \noindent{\bf step 1} Change of variables. Let us set $2a(s) = \cosh(r(s)) \geq 1$. As long as $r(s) > 0$, we have $\sqrt{4a^2 - 1} = \sinh(r)$ and the system \eqref{modbeta} can be written $$ \left| \begin{array}{rcl} \theta_s &=& \displaystyle 4 + 2 \beta(s) - 2\beta(s) \frac{\cosh(r)}{\sinh(r)} \cos(\theta)\\[2ex] r_s &=& - 2\beta(s)\sin(\theta). \end{array} \right. $$ Let $\psi = \theta - 4s$; we have \begin{equation} \label{plok} \left| \begin{array}{rcl} \psi_s &=& 2 \beta(s) - 2 \beta(s) \frac{1 + e^{-2r}}{1 - e^{-2r}} ( \cos(\psi ) \cos(4s) - \sin(\psi)\sin(4s)) \\[2ex] r_s &=& - 2\beta(s)( \sin(\psi) \cos(4s) + \cos(\psi) \sin(4s) ). \end{array} \right. \end{equation} Setting $\rho(s) = r(s) - \log\log s$, we have \begin{equation} \label{syslap} \left| \begin{array}{rcl} \psi_s &=& 2 \beta(s) - 2 \beta(s) (1 + f(s,\rho)) ( \cos(\psi ) \cos(4s) - \sin(\psi)\sin(4s)) \\[2ex] \rho_s &=& \displaystyle - \frac{1}{s \log s} - 2\beta(s)( \sin(\psi) \cos(4s) + \cos(\psi) \sin(4s) ). \end{array} \right.
\end{equation} with $$ f(s,\rho) = \frac{1 + e^{-2r}}{1 - e^{-2r}} - 1 = \frac{2 e^{-2r}}{1 - e^{-2r}} = \frac{2 e^{-2\rho}}{(\log s)^2 - e^{-2\rho}}. $$ \noindent{\bf step 2} Backward bounds. We now derive uniform backward bounds, which are the heart of the argument. \begin{lemma}[Uniform backward bounds] \label{cvneoineoneonvi} For all $M > s_0$, let $(\rho^M(s), \psi^M(s))$ be the solution of the system \eqref{syslap} such that $(\rho^M(M), \psi^M(M)) = (0,0)$. There exist a constant $B$ and $s_0$ sufficiently large such that for all $M > s_0$, $(\rho^M, \psi^M)$ exists on $[s_0,M]$, and moreover, \begin{equation} \label{samedi} \forall\, M > s_0, \quad \forall\, s \in [s_0,M] \qquad |\rho^M(s) |\leq \frac{B}{s} \quad\mbox{and}\quad |\psi^M(s)| \leq \frac{B}{s}. \end{equation} Moreover, for all $k$ and $n$ in $\mathbb{N}$ there exists a constant $B_{k,n}$ such that for all $M$ and all $s \in [s_0,M]$, \begin{equation} \label{lolo} \left|\frac{\partial^{k+n} f}{\partial s^k \partial \rho^n} (s, \rho^M(s))\right| \leq \frac{B_{k,n}}{s^k}. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{cvneoineoneonvi}] Note first that if $|\rho(s)| \leq \frac{B}{s}$ for $s \geq s_0$, we have $(\log s)^2 - e^{-2 \rho(s)} > (\log s_0)^2 - e^{\frac{2B}{s_0}} \geq 1$ if for instance \begin{equation} \label{cond1} s_0 \geq 2B \geq \exp(\sqrt{ e + 1}). \end{equation} Hence under these conditions, $f(s,\rho) \in [0,2 e]$ and all its derivatives with respect to $\rho$ and $s$ satisfy bounds of the form \eqref{lolo}. We deduce that under the condition \eqref{cond1}, when $|\rho(s) |\leq \frac{B}{s}$ and $|\psi(s)| \leq \frac{B}{s}$, then we have \begin{equation} \label{estderiv} |\rho_s(s)| + |\psi_s(s)| \leq \frac{30}{s \log s}. \end{equation} For all $M > s_0$, define $$ T_M(s_0,B) = \inf\{ s \in [s_0,M] \quad\mbox{s.
t.}\quad \forall \sigma \in [s,M]\, \quad |\rho^M(\sigma) |\leq \frac{B}{\sigma} \quad\mbox{and}\quad |\psi^M(\sigma)| \leq \frac{B}{\sigma} \}. $$ As $\rho^M(M) = \psi^M(M) = 0$, the previous estimates show that under the condition \eqref{cond1} the flow exists locally for such data, and we have $T_M(s_0,B) < M$. We will show that there is a choice of $B$ and $s_0$ such that for all $M$, $T_M(s_0,B) = s_0$. \\ Let $M > s_0$ and assume that $s$ is such that $\rho^M(\sigma)$ and $\psi^M(\sigma)$ satisfy the bounds \eqref{samedi} for all $\sigma \in [s,M]$. As $\psi^M(M) = 0$, we thus have \begin{multline*} \psi^M(s) = \int_M^{s} 2 \beta(\sigma) \mathrm{d} \sigma \\ -\int_M^s 2 \beta(\sigma) (1 + f(\sigma,\rho^M)) \cos(\psi^M ) \cos(4\sigma)\mathrm{d} \sigma + \int_M^s 2 \beta(\sigma) (1 + f(\sigma,\rho^M)) \sin(\psi^M)\sin(4\sigma) \mathrm{d} \sigma. \end{multline*} Let us calculate the three contributions to the right-hand side: $$ \int_M^{s} 2 \beta(\sigma) \mathrm{d} \sigma = - \int_M^{s} \frac{2 \sin(4\sigma)}{\sigma \log \sigma}\mathrm{d} \sigma = \left[ \frac{ \cos(4\sigma)}{ 2 \sigma \log \sigma}\right]_{M}^s + \int_M^{s} \frac{\cos(4\sigma)( \log \sigma + 1)}{2 \sigma^2 (\log \sigma)^2}\mathrm{d} \sigma. $$ Thus, there exists $B_0$ independent of $M$ such that for all $s < M$ this term is bounded in absolute value by $\frac{ B_0}{s}$. The second term can be written \begin{multline*} \int_M^s \frac{(1 + f(\sigma,\rho^M))}{\sigma\log \sigma} \cos(\psi^M ) \sin(8\sigma)\mathrm{d} \sigma = - \left[ \frac{(1 + f(\sigma,\rho^M))}{ 8 \sigma\log \sigma} \cos(\psi^M ) \cos(8\sigma) \right]_M^s \\ + \int_M^s \cos(8\sigma) \frac{\mathrm{d}}{\mathrm{d} \sigma}\left(\frac{(1 + f(\sigma,\rho^M))}{ 8 \sigma\log \sigma} \cos(\psi^M )\right) \mathrm{d} \sigma. \end{multline*} Using \eqref{estderiv} and \eqref{lolo}, we see that this term can be bounded by $\frac{B_0}{s}$ up to an increase of the constant $B_0$.
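The oscillatory estimate just obtained can be tested numerically: after integration by parts, $s\,\big|\int_s^M 2\beta(\sigma)\,\mathrm{d}\sigma\big|$ should remain bounded uniformly in $s$ and $M$. The Python sketch below uses illustrative values of $s$, $M$ and of the tolerance:

```python
import math

def osc_integral(s, M, n=200000):
    """Composite Simpson approximation of int_s^M 2*beta(sigma) d sigma,
    with beta(sigma) = -sin(4 sigma)/(sigma log sigma)."""
    g = lambda x: -2.0 * math.sin(4*x) / (x * math.log(x))
    h = (M - s) / n
    acc = g(s) + g(M)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * g(s + i*h)
    return acc * h / 3.0

M = 500.0
for s in (10.0, 30.0, 100.0):
    # integration by parts predicts |int_s^M 2 beta| <= B0/s with a uniform B0
    assert abs(osc_integral(s, M)) * s < 2.0
```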
The last term can be bounded by $$ 2\left| \int_M^s \frac{\sin^2(4\sigma) }{\sigma \log \sigma } (1 + f(\sigma,\rho^M)) \sin(\psi^M) \mathrm{d} \sigma \right| \leq 6 \int_s^M \frac{|\psi^M(\sigma)|}{\sigma \log \sigma} \mathrm{d} \sigma \leq \frac{6 B}{\log s_0} \int_s^M \frac{1}{\sigma^2} \mathrm{d} \sigma. $$ So far we have proved that for all $s < M$, \begin{equation} \label{estpsi} |\psi^M(s)| \leq \frac{2 B_0}{s} + \frac{6B }{s \log s_0 } \leq \frac{B}{2s}, \end{equation} provided we take $B > 8 B_0$ and $\log s_0 > 24$. Now we have $$ \rho^M(s) - \rho^M(M) = \int_M^s \left( - \frac{1}{\sigma \log \sigma} + \frac{2\sin(4\sigma)}{\sigma \log \sigma}( \sin(\psi^M) \cos(4\sigma) + \cos(\psi^M) \sin(4\sigma) )\right) \mathrm{d} \sigma. $$ Using $2\sin^2(4\sigma) = 1 - \cos(8\sigma)$ and $2\sin(4 \sigma) \cos(4 \sigma) = \sin(8 \sigma)$ and the fact that $\rho^M(M) = 0$, we obtain $$ \rho^M(s) = - \int_{s}^M \frac{\sin(\psi^M) \sin(8\sigma) }{\sigma \log(\sigma)} \mathrm{d} \sigma - \int_{s}^M \frac{\cos(\psi^M) - 1 }{\sigma \log(\sigma)} \mathrm{d} \sigma + \int_{s}^M \frac{\cos(\psi^M) \cos(8\sigma) }{\sigma \log(\sigma)} \mathrm{d} \sigma. $$ The first and third terms can be treated by integration by parts as before, and we can show that they can be bounded by $\frac{B_0}{s}$ after a possible increase of $B_0$, which is a constant independent of $M$, $s$ and $B$. Then using $|\cos(\psi) - 1 |\leq |\psi|^2 \leq \frac{B^2}{\sigma^2}$ we thus see that we have \begin{equation} \label{estrho} |\rho^M(s)| \leq \frac{2 B_0}{s} + \frac{B^2}{s (s_0 \log s_0)} \leq \frac{B}{2s}, \end{equation} provided that $B > 8 B_0$ and $s_0 \log s_0 > 4B$. Hence if $B$ and $s_0$ are large enough to satisfy condition \eqref{cond1} and the other conditions above, then \eqref{estpsi} and \eqref{estrho} are satisfied for all $s \geq T_M(s_0,B)$, which shows that $T_M(s_0,B) = s_0$. \\ The last estimate is then easily proved. \end{proof} \noindent{\bf step 3} Conclusion.
Let us take $N$ and $M$ such that $s_0 < N \leq M$. Using \eqref{lolo}, we have that $$ |\rho^M_s - \rho^N_s| + |\psi^M_s - \psi^N_s| \leq \frac{C}{s \log s} ( |\rho^M(s) - \rho^N(s)| + |\psi^M(s)- \psi^N(s)|), $$ for some constant $C$ independent of $M$ and $N$. Hence for all $s \in [s_0,N]$, by integrating between $s$ and $N$, using the condition $\rho^N(N) = \psi^N(N) = 0$ and the bound \eqref{samedi}, we have \begin{multline*} |\rho^M(s) - \rho^N(s)| + |\psi^M(s) - \psi^N(s)| \leq \\ \frac{2B}{N} + \int_{s}^N \frac{C}{\sigma \log \sigma} ( |\rho^M(\sigma) - \rho^N(\sigma)| + |\psi^M(\sigma)- \psi^N(\sigma)|) \mathrm{d} \sigma. \end{multline*} By using Gr\"onwall's lemma (see Lemma \ref{gr} in the Appendix) we obtain \begin{eqnarray*} |\rho^M(s) - \rho^N(s)| + |\psi^M(s) - \psi^N(s)| &\leq& \frac{2B}{N} + \frac{2B}{N}\int_s^N \frac{C}{\sigma \log \sigma} \exp \left(\int_{s}^{\sigma} \frac{C}{\tau \log \tau} \mathrm{d} \tau\right) \mathrm{d} \sigma \\ &\leq& \frac{2B}{N} + \frac{2B}{N}\int_s^N \frac{C(\log \sigma)^{C-1}}{\sigma} \mathrm{d} \sigma \\ &\leq& \frac{2B}{N} + \frac{2BC}{N} (\log N)^{C}. \end{eqnarray*} We deduce that the sequence of functions $(\rho^M,\psi^M)_{M \in \mathbb{N}}$ is Cauchy, and thus converges on every interval $[s_0,T]$ for any fixed $T > s_0$. The limit solves the system \eqref{syslap} on this interval and does not depend on $T$, as it coincides with the unique solution of \eqref{syslap} with initial value $(\rho(s_0), \psi(s_0)) = \lim_{M \to \infty} (\rho^M(s_0),\psi^M(s_0))$. Hence this solution exists globally and satisfies the bound \eqref{samedi}. Moreover, by using \eqref{syslap}, we easily see that for all $k \geq 1$, there exists $B_k$ such that $$ \forall\, s \in [s_0,+\infty), \quad \Big|\frac{\mathrm{d}^k\rho }{\mathrm{d} s^k} (s)\Big| + \Big|\frac{\mathrm{d}^k\psi}{\mathrm{d} s^k} (s)\Big| \leq \frac{B_k}{s \log s}. $$ We deduce that $\theta(s) = \psi(s) + 4s$ satisfies the estimates of the proposition.
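The backward-integration argument above can be illustrated numerically: integrating \eqref{syslap} backwards from $(\psi,\rho)(M) = (0,0)$ for two different values of $M$ produces solutions that remain small and essentially agree at a fixed $s_0$. The values of $s_0$ and $M$ in the Python sketch below are illustrative and far smaller than those required by the estimates of the proof:

```python
import math

def beta(s):
    return -math.sin(4*s) / (s * math.log(s))

def f_corr(s, rho):
    """f(s, rho) = 2 e^{-2 rho} / ((log s)^2 - e^{-2 rho})."""
    e = math.exp(-2*rho)
    return 2*e / (math.log(s)**2 - e)

def field(s, psi, rho):
    c4, s4 = math.cos(4*s), math.sin(4*s)
    psi_s = 2*beta(s) - 2*beta(s)*(1 + f_corr(s, rho))*(math.cos(psi)*c4 - math.sin(psi)*s4)
    rho_s = -1/(s*math.log(s)) - 2*beta(s)*(math.sin(psi)*c4 + math.cos(psi)*s4)
    return psi_s, rho_s

def backward(M, s0, h=5e-3):
    """RK4 integration of the system from s = M down to s0, data (0, 0) at M."""
    psi, rho, s = 0.0, 0.0, M
    n = int(round((M - s0) / h))
    h = (M - s0) / n
    for _ in range(n):
        k1 = field(s, psi, rho)
        k2 = field(s - h/2, psi - h/2*k1[0], rho - h/2*k1[1])
        k3 = field(s - h/2, psi - h/2*k2[0], rho - h/2*k2[1])
        k4 = field(s - h, psi - h*k3[0], rho - h*k3[1])
        psi -= h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        rho -= h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        s -= h
    return psi, rho

p1, r1 = backward(200.0, 50.0)
p2, r2 = backward(400.0, 50.0)
assert abs(p1) < 0.2 and abs(r1) < 0.2            # solutions stay small at s0
assert abs(p1 - p2) < 0.1 and abs(r1 - r2) < 0.1  # nearly independent of M
```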
Moreover, we have $2a(s) = \cosh(r(s)) = \cosh( \log \log s + \rho(s))$. Hence $$ 4a(s) = 2\cosh(r(s)) = e^{ \log \log s + \rho(s)} + e^{- \log \log s - \rho(s)} = (\log s) e^{\rho(s)} + \frac{e^{-\rho(s)}}{\log s}, $$ from which we easily deduce the result with $4 c(s) = \log (s) ( e^{\rho(s)} - 1) + \frac{e^{-\rho(s)}}{\log s}$. Finally, we check that, as $|\rho(s)| \leq \frac{B}{s}$, we have $$ a(s) \geq \frac{1}{4}\log(s_0) e^{-\frac{B}{s_0}} \geq 2, $$ provided $s_0$ is large enough. \end{proof} \subsection{Energy drift} The solution constructed in Proposition \ref{laprop} exhibits a logarithmic growth of the energy. \begin{corollary}[Logarithmic growth of the energy] \label{coro1} With $\beta$ given by \eqref{beta}, there exist $s_0$, $L(s_0) >0$ and $b(s_0)$ and positive constants $(B_k)_{k\in \mathbb{N}}$ and $(\alpha_k)_{k \in \mathbb{N}}$ such that the system \eqref{perturb0} admits global solutions $L(s)$ and $b(s)$ on $[s_0,+\infty)$ such that \begin{equation} \label{boundL2} \forall\,s \in [s_0,+\infty), \quad \frac{1}{B_0 \log s} \leq L^2 \leq B_0 \log s \quad \mbox{and} \quad |b(s) | \leq B_0 (\log s)^3, \end{equation} and $$ E(b,L) = \frac{1}{L^2}(\frac{b^2}{4} + 1) + L^2 = 4 a = \log s + \mathcal{O}(\frac{\log s}{s}), \quad \mbox{when} \quad s \to +\infty, $$ and such that we have the bounds, for $k \geq 1$, \begin{equation} \label{bounderivs} \left|\frac{\mathrm{d}^k L}{\mathrm{d} s^k} (s)\right| + \Big|\frac{\mathrm{d}^k b}{\mathrm{d} s^k} (s)\Big| + \Big|\frac{\mathrm{d}^k }{\mathrm{d} s^k}\Big(\frac{1}{L}\Big) (s)\Big| \leq B_k (\log s)^{\alpha_k}. \end{equation} Moreover the time $t(s)$ satisfying $\frac{\mathrm{d} t}{\mathrm{d} s} = L^2 > 0$ and $t(s_0) = s_0$ satisfies \begin{equation} \label{boundtime} | t(s) - s| \leq B_0 (\log s)^2, \quad \mbox{for} \quad s > s_0. \end{equation} Hence, as $t(s)$ is increasing, it is globally invertible.
Moreover, by denoting by $L(t)$ and $b(t)$ the quantities $L$ and $b$ viewed as depending on the time $t$, we have the bounds \begin{equation} \label{bounderivt} \left|\frac{\mathrm{d}^k L}{\mathrm{d} t^k} (t)\right| + \Big|\frac{\mathrm{d}^k b}{\mathrm{d} t^k} (t)\Big| + \Big|\frac{\mathrm{d}^k }{\mathrm{d} t^k}\Big(\frac{1}{L}\Big) (t)\Big| \leq B_k (\log t)^{\alpha_k}, \end{equation} for $t \geq t_0 := s_0$. \end{corollary} \begin{proof}[Proof of Corollary \ref{coro1}] From \eqref{changeco} we have the explicit formulae $$ \frac{1}{L^2} = 2a - \sqrt{4a^2 - 1}\cos(\theta) \quad \mbox{and}\quad b = 2L^2 \sqrt{4a^2 - 1} \sin(\theta), $$ from which we easily deduce the existence of $L(s) >0$ and $b(s)$ as $a(s) > 2$. We have $$ \frac{1}{4a } \leq 2a - \sqrt{4a^2 - 1} \leq \frac{1}{L^2} \leq 2a + \sqrt{4a^2 - 1} \leq 4a \quad \mbox{and}\quad |b| \leq 4 a L^2 \leq 16 a^2, $$ and \eqref{boundL2} can be obtained using \eqref{hash}. Moreover, we can assume that $s_0$ is large enough to ensure that $a(s) \geq 1 + \frac{1}{8}\log(s)$ and hence $\sqrt{4a^2 - 1} \geq \frac{1}{4}\log s$ for $s \geq s_0$. Using \eqref{hash}, we have that $\frac{\mathrm{d}^k }{\mathrm{d} s^k}a(s) = \mathcal{O}( \frac{\log s}{s})$ and $\frac{\mathrm{d}^k}{\mathrm{d} s^k}\theta(s) = \mathcal{O}(1)$ for $k \geq 1$. Using the Fa\`a di Bruno formula, we deduce that for some constants $B_k$, we have for all $k \geq 0$ and all $s \geq s_0$, $$ \left|\frac{\mathrm{d}^k }{\mathrm{d} s^k} \sqrt{4a^2 - 1}\right|\leq B_k \log s \quad\mbox{and} \quad \left|\frac{\mathrm{d}^k }{\mathrm{d} s^k} \cos (\theta)\right | + \left|\frac{\mathrm{d}^k }{\mathrm{d} s^k} \sin (\theta)\right | \leq B_k, $$ and then \eqref{bounderivs} by using again the Fa\`a di Bruno formula and the bound $L^{\pm 2} \leq C (\log s)$ for some constant $C$ depending only on $s_0$. We can check that $\alpha_k = \mathcal{O}(k)$.
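As a numerical cross-check of the expansion \eqref{fish} and of the resulting series for $t(s)$ obtained in the study of the free flow, one can compare, for an arbitrary value of $E > 2$, the truncated series with a direct quadrature; a Python sketch (illustrative only):

```python
import math

E = 10.0                                   # arbitrary energy level E > 2
a = E / 4.0
S = math.sqrt(E*E - 4.0)
q = math.sqrt((2*a - 1.0) / (2*a + 1.0))   # = sqrt((E-2)/(E+2))

def L2(s):
    return 2.0 / (E - math.cos(4*s) * S)

s = 0.7
# expansion (fish): L^2 = 1 + 2 sum_{n>0} q^n cos(4 n s)
series = 1 + 2 * sum(q**n * math.cos(4*n*s) for n in range(1, 400))
assert abs(series - L2(s)) < 1e-10

# term-by-term integration: t(s) = s + sum_{n>0} q^n sin(4 n s)/(2 n)
t_series = s + sum(q**n * math.sin(4*n*s) / (2*n) for n in range(1, 400))

# direct composite Simpson quadrature of int_0^s L^2
n = 20000
h = s / n
acc = L2(0.0) + L2(s)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * L2(i*h)
t_quad = acc * h / 3.0
assert abs(t_quad - t_series) < 1e-8
```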
From Equation \eqref{fish}, we have $$ \frac{\mathrm{d} t}{ \mathrm{d} s} = L^2 (\theta,a) = \left( 1 + 2 \sum_{n > 0} \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \cos(n \theta) \right). $$ Assuming $t(s_0) = s_0$, and using $(2a - 1) < (2 a + 1)$, which justifies the convergence of the infinite sums in $n$, we have \begin{eqnarray*} t(s) - s &=& 2 \sum_{n > 0} \int_{s_0} ^s \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \cos(4 \sigma n + n \psi) \mathrm{d} \sigma \\ &=& 2 \sum_{n > 0} \int_{s_0}^s \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \cos(4 \sigma n)\cos(n \psi) \mathrm{d} \sigma\\ && - 2 \sum_{n > 0} \int_{s_0}^s \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \sin(4 \sigma n) \sin(n \psi) \mathrm{d} \sigma. \end{eqnarray*} We now integrate by parts the terms in the right-hand side. For the first term we use \begin{multline*} \int_{s_0}^s \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \cos(4 \sigma n)\cos(n \psi) \mathrm{d} \sigma = \\ \frac{1}{4n}\left[ \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \sin(4 \sigma n)\cos(n \psi) \right]_{s_0}^s + \frac{1}{4} \int_{s_0}^s \psi_s \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \sin(4 \sigma n)\sin(n \psi) \mathrm{d} \sigma \\ -\frac{1}{2}\int_{s_0}^s \frac{a_s}{(2 a + 1)^2} \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2} - 1} \sin(4 \sigma n)\cos(n \psi) \mathrm{d} \sigma. \end{multline*} As $\psi_s = \mathcal{O}(\frac{1}{s})$ (see \eqref{hash}), the global contribution of the first term is bounded by \begin{multline*} C \int_{s_0}^{s} \sum_{n \geq 1} \frac{1}{\sigma } \left( \frac{2a - 1}{2a + 1} \right)^{\frac{n}{2}} \mathrm{d} \sigma \leq C \int_{s_0}^{s} \frac{1}{\sigma}\frac{1}{2a - \sqrt{4a^2 - 1}} \mathrm{d} \sigma \\ \leq C \int_{s_0}^{s} \frac{a(\sigma)}{\sigma}\mathrm{d} \sigma \leq C \int_{s_0}^s \frac{\log(\sigma)}{\sigma} \mathrm{d} \sigma \leq C (\log s)^2
\end{multline*} where we used the formula for the Poisson kernel, the constant $C$ being modified in each inequality and depending only on $s_0$ and on numerical constants. As $a_s = \mathcal{O}(\frac{\log (s) }{s})$ and $\log \sigma / (2 a(\sigma) + 1)$ is bounded, we see that the third term similarly yields a contribution of order $\mathcal{O}((\log s)^2)$. We then deduce that $$ | t(s) -s | \leq B_0 (\log s)^2, $$ for some constant $B_0$. The bounds \eqref{bounderivt} then follow easily from the bounds \eqref{bounderivs} and \eqref{boundL2} and the relation $\frac{\mathrm{d}}{\mathrm{d} s} = L^2 \frac{\mathrm{d}}{\mathrm{d} t}$. \end{proof} \section{Main result for the harmonic oscillator} We are now in a position to construct the unbounded trajectory of Theorem \ref{Th1}. The construction relies on the existence of the resonant finite-dimensional trajectory of Proposition \ref{laprop}, and on the backwards integration method for the full PDE as introduced in \cite{Me90}. \subsection{Construction of the resonant trajectory} We consider the equation \eqref{LHO1}. By classical arguments, the a priori bound, for all $t$ and all $k \in \mathbb{N}$, $$ \Norm{\partial_t^{k} V(t,x)}{\mathcal{H}^s} < +\infty, $$ ensures the existence and uniqueness of global solutions to \eqref{LHO1} in $\mathcal{H}^r$ for $r > 1$ satisfying $u(t_0,x) = u_0(x) \in \mathcal{H}^r$, with a given $t_0 \in \mathbb{R}$. They are solutions of the equation $$ \forall\, t \in \mathbb{R} \quad u(t,x) = e^{- i (t - t_0) H }u_0(x) - i \int_{t_0}^t e^{- i (t - s)H } V(s,x) u(s,x) \mathrm{d} s. $$ We make the change of unknown \eqref{prince} with the modulation system \eqref{perturb0} associated with the function $\beta(s)$ defined in \eqref{beta}. By using \eqref{eqH2} we obtain the system \begin{equation} \label{mend} i \partial_s v = H v + \gamma_s(s) v + \beta(s) |y|^2 v + W(s,y) v, \quad y \in \mathbb{R}^d.
\end{equation} where $$ W(s,y) = L^2 V(t,x), $$ with $s$ and $y$ satisfying \eqref{prince}. The heart of the proof of Theorem \ref{Th1} is the following statement. \begin{proposition}[resonant trajectory in renormalized variables] \label{prop1} Let $\beta(s)$ be given by the formula \eqref{beta} and \begin{equation} \label{eqW} W(s,y) = - \alpha \beta(s) h_0(y) \quad \mbox{with} \quad \alpha = \frac{ ( h_1 , |y|^2 h_0)_{L^2}}{( h_1 , h_0^2)_{L^2}} = - \frac{9 \sqrt{\pi}}{2}. \end{equation} Let $s_0$ be as in Proposition \ref{laprop} and $r > 1$. There exist a constant $B$, a phase $\gamma(s) = - \lambda_0 s = - 2s$ and a solution $v(s) \in \mathcal{H}^r_{{\mathrm{rad}}}$ of the equation \eqref{mend} such that $$ v(s,y) = h_0(y) + w(s,y),\quad \mbox{with}\quad \forall\, s \in (s_0,+\infty),\quad \Norm{w(s)}{\mathcal{H}^r} \leq \frac{B}{\sqrt{s}}. $$ \end{proposition} \begin{proof}[Proof of Proposition \ref{prop1}] Let us take $\gamma_s = - \lambda_0 = - 2$. Let $ w = v - h_0$. As $H h_0 = \lambda_0 h_0$, we have that \begin{equation} \label{cador} i\partial_s w = (H - \lambda_0) w + \beta(s) |y|^2 w + W(s,y) w + R(s) \end{equation} with $$ R(s) = \beta(s) |y|^2 h_0 + W(s,y) h_0 = \beta(s) ( |y|^2 h_0 - \alpha h_0^2). $$ By definition of $W$, we have $(R(s),h_1)_{L^2} = 0$. Let $g = e^{i s (H - \lambda_0)} w$. We have \begin{equation} \label{eqg} i \partial_s g = K(s) g + e^{i s (H - \lambda_0)} R(s), \end{equation} where \begin{eqnarray} K(s) &=& \beta(s) e^{i s H}|y|^2 e^{- is H} + e^{i s H} W(s,y) e^{- i s H} \nonumber \\[2ex] &=& \beta(s) e^{i s H}(|y|^2 - \alpha h_0) e^{- is H}. \label{Ks} \end{eqnarray} Note that by using \eqref{eqflow}, \eqref{algebra} and \eqref{y2op} we have that for all $r > 1$ and all $f \in \mathcal{H}^{r+1}_{{\mathrm{rad}}}$, \begin{equation} \label{estKs} \Norm{K(s) f}{\mathcal{H}^r} \leq \frac{C_r}{s \log s} \Norm{f}{\mathcal{H}^{r+1}}, \end{equation} for some constant $C_r$ independent of $f$.
Moreover, for all $r >1$, we have $$ [H^{r/2},K(s)] = \beta(s) e^{i s H} ( [H^{r/2},|y|^2] - \alpha [H^{r/2},h_0 ]) e^{-i sH}, $$ hence, by using \eqref{eqcom}, \eqref{algebra} and the fact that $[H^{r/2},h_0]f = H^{r/2}(h_0 f) - h_0 ( H^{r/2} f)$, we have \begin{equation} \label{estcommKs} \Norm{[H^{r/2},K(s)] f}{L^2} \leq \frac{C_r}{s \log s} \Norm{f}{\mathcal{H}^{r}}, \end{equation} for $r > 1$ and a constant $C_r$ depending only on $r$. Note that for all $s$, $K$ is Hermitian, {\em i.e.} for all $f,g \in \mathcal{H}^{r+2}$, $$ (f, K(s) g)_{L^2} = (K(s) f , g)_{L^2}. $$ For $M \in (s_0,+\infty)$, let $w^M$ be the solution of \eqref{cador} in $\mathcal{H}_{{\mathrm{rad}}}^r$ such that $w^M(M) = 0$, $g^M = e^{i s (H - \lambda_0)} w^M$, and $$ f^M = g^M - i \int_s^M e^{i \sigma (H - \lambda_0)} R(\sigma,y) \mathrm{d} \sigma =: g^M - r^M. $$ We have that $f^M(M) = 0$, and \begin{equation} \label{eqfM} i \partial_s f^M = K(s) g^M = K(s) f^M + R^M(s) \end{equation} with \begin{equation} \label{eqRM} R^M(s) = K(s) r^M. \end{equation} Now we have $$ r^M = i \int_s^M e^{i \sigma (H - \lambda_0)} R(\sigma,y) \mathrm{d} \sigma = i \int_s^M \beta(\sigma) e^{i \sigma (H - \lambda_0)} (|y|^2 h_0 - \alpha h_0^2) \mathrm{d} \sigma. $$ Let us calculate the terms in the right-hand side. We have $$ e^{i \sigma (H - \lambda_0)}(|y|^2 h_0 - \alpha h_0^2) = \sum_{n \neq 1} e^{i 4 n \sigma } ( ( h_n, |y|^2 h_0)_{L^2} - \alpha (h_n,h_0^2)_{L^2}) h_n, $$ as the term for $n = 1$ vanishes by definition of $\alpha$. This shows that $$ r^M(s) = - i \sum_{n \neq 1} \left( \int_s^M \frac{\sin(4\sigma)e^{4i n \sigma}}{\sigma \log \sigma} \mathrm{d} \sigma\right) ( ( h_n, |y|^2 h_0)_{L^2} - \alpha (h_n,h_0^2)_{L^2}) h_n.
$$ But as $n \neq 1$, after integration by parts we have, for some constant $B$ independent of $M$, $n$ and $s$ and for $s \leq M$, $$ \left| \int_s^M \frac{\sin(4\sigma)e^{4i n \sigma}}{\sigma \log \sigma} \mathrm{d} \sigma\right| \leq \frac{B}{s}. $$ This shows that \begin{multline} \label{raslbol} \Norm{r^M(s)}{\mathcal{H}^r}^2 \leq \frac{2 B^2}{s^2} \left( \sum_{n \in \mathbb{N}} \langle n \rangle^r ( h_n, |y|^2 h_0)_{L^2}^2 + \alpha^2 \sum_{n\in \mathbb{N}} \langle n \rangle^r (h_n,h_0^2)_{L^2}^2\right) \\ \leq \frac{2 B^2}{s^2} (\Norm{|y|^2 h_0}{\mathcal{H}^r}^2 + \alpha^2 \Norm{h_0^2}{\mathcal{H}^r}^2) \leq \frac{C^2_r}{s^2}, \end{multline} for some constant $C_r$ independent of $M$ and $s$ but depending on $r> 1$. In view of the expression \eqref{eqRM} and the estimate \eqref{estKs}, we have $$ \Norm{R^M(s)}{\mathcal{H}^r} \leq \frac{C_{r}}{s \log s} \Norm{r^M(s)}{\mathcal{H}^{r +1}} \leq \frac{C_r}{s^2 \log s}, $$ up to a modification of the constant $C_r$ in the last inequality. Now using the equation \eqref{eqfM} for $f^M$, we have \begin{eqnarray*} \partial_s \Norm{f^M}{\mathcal{H}^r}^2 &=& 2\, \mathrm{Im} \left( (H^{r/2} f^M, H^{r/2} K(s) f^M )_{L^2} + (H^{r/2}f^M, H^{r/2} R^M)_{L^2}\right) \\ &=& 2\, \mathrm{Im} \left( (H^{r/2} f^M, [H^{r/2}, K(s)] f^M )_{L^2} + (H^{r/2}f^M, H^{r/2} R^M)_{L^2}\right), \end{eqnarray*} since $K(s)$ is a symmetric operator. Hence we have, by using \eqref{estcommKs}, $$ \partial_s \Norm{f^M}{\mathcal{H}^r}^2 \leq \frac{C}{s \log s}\Norm{f^M}{\mathcal{H}^r}^2 + \frac{C}{s^2 \log s } \Norm{f^M}{\mathcal{H}^{r}}, $$ for some constant $C$ depending only on $r$. For $\varepsilon > 0$, let $y^M_{\varepsilon}(s) = \sqrt{\Norm{f^M}{\mathcal{H}^r}^2 + \varepsilon^2}$.
The previous inequality implies that for all $\varepsilon$, we have $$ \partial_{s} (y^M_{\varepsilon})^2 = 2 y^M_{\varepsilon} \partial_s y^M_{\varepsilon} \leq \frac{C}{s \log s}(y^M_{\varepsilon})^2 + \frac{C}{s^2 \log s } y^M_{\varepsilon}, $$ and hence, as $y^M_{\varepsilon} > 0$ for all $s$, $$ \partial_s y^M_{\varepsilon} \leq \frac{C}{2 s \log s}y^M_{\varepsilon} + \frac{C}{2 s^2 \log s }. $$ By using Gr\"onwall's lemma (see Lemma \ref{gr} below), we obtain, as $y^M_{\varepsilon}(M) = \varepsilon$, \begin{eqnarray} y^M_{\varepsilon}(s) &\leq& \varepsilon + \frac{C}{s} + \int_s^M \Big(\varepsilon + \frac{C}{\sigma}\Big) \frac{C}{2 \sigma \log \sigma} \exp \left( \int_s^\sigma \frac{C}{2 \tau \log \tau} \mathrm{d} \tau\right) \mathrm{d} \sigma \nonumber \\ &\leq & \varepsilon + \frac{C}{s} + \int_s^M \Big(\varepsilon + \frac{C}{\sigma}\Big) \frac{C (\log \sigma)^{C - 1}}{2 \sigma} \mathrm{d} \sigma. \label{top} \end{eqnarray} By letting $\varepsilon \to 0$, we deduce that for all $\alpha >0$, there exists a constant $\kappa_\alpha$ such that for all $M$ and all $s \in (s_0,M)$, \begin{equation} \label{boundfM} \Norm{f^M(s)}{\mathcal{H}^r} \leq \frac{\kappa_\alpha}{s^{1 - \alpha}}. \end{equation} Now if we take $f^M$ and $f^N$ for $M > N$, we have $$ i \partial_s (f^M - f^N) = K(s) (f^M - f^N) + K(s) (r^M - r^N). $$ But we have $$ r^M - r^N = i \int_N^M e^{i \sigma (H - \lambda_0)} R(\sigma,y) \mathrm{d} \sigma = r^M(N). $$ In particular, using \eqref{estKs} we have $$ \int_s^N \Norm{K(\sigma) (r^M - r^N)}{\mathcal{H}^r} \mathrm{d} \sigma \leq C_r \int_{s}^{N} \frac{1}{\sigma \log \sigma} \Norm{r^M(N)}{\mathcal{H}^{r+1}}\mathrm{d} \sigma.
$$ Hence we have \begin{multline*} \Norm{ f^M(s) - f^N(s)}{\mathcal{H}^r} \leq \Norm{f^M(N)}{\mathcal{H}^r} + C_r \log \log N \, \Norm{r^M(N)}{\mathcal{H}^{r+1}} \\ + \int_s^N \frac{C}{\sigma \log \sigma} \Norm{f^M(\sigma) - f^N(\sigma)}{\mathcal{H}^r}\mathrm{d} \sigma, \end{multline*} and by Gr\"onwall's estimate, \eqref{raslbol} and \eqref{boundfM} with $\alpha = \frac{1}{4}$, $$ \Norm{ f^M(s) - f^N(s)}{\mathcal{H}^r} \leq \frac{\kappa_{1/4}}{N^{3/4}}\left( 1 + \int_s^N \frac{( \log \sigma)^{C-1}}{\sigma } \mathrm{d} \sigma \right) \leq \frac{C}{\sqrt{N}}, $$ for some constant $C$ independent of $N$ large enough. Hence the sequence of functions $(f^M(s))_{M \in \mathbb{N}}$ is Cauchy and converges uniformly in $\mathcal{C}((s_0,T),\mathcal{H}^r)$ for all $T$. Moreover, the functions $r^M$ also converge to a function $r(s)$ on $(s_0,+\infty)$ in $\mathcal{H}^r$, and the limit satisfies $\Norm{r(s)}{\mathcal{H}^r} \leq C_r/s$ (see \eqref{raslbol}). We deduce that $g^M = f^M + r^M$ converges towards the unique solution of \eqref{eqg}. Moreover, by using \eqref{boundfM} with $\alpha = \frac{1}{2}$, we have $\Norm{g(s)}{\mathcal{H}^r} \leq \frac{B}{\sqrt{s}}$ for all $s \in (s_0,+\infty)$ and some constant $B$ depending on $r$. We obtain the result by noticing that $v = h_0 + e^{- i s (H - \lambda_0) } g$. \end{proof} \subsection{Proof of Theorem \ref{Th1}} It follows directly from the following quantitative version. \begin{proposition}[Existence of the resonant trajectory] \label{vnbeibvebveiuve} Let $s_0$, $(b(t),L(t))$ and $s(t)$ be as in Corollary \ref{coro1}, and let $V(t,x)$ be the function defined as the time dependent Gaussian \begin{equation} \label{vbeivbibeveievb} V(t,x) = - \frac{9 \sqrt{\pi}}{2 L(t)^2} \frac{\sin(4s(t))}{ s(t) \log s(t) }h_0(\frac{x}{L(t)}). \end{equation} Then we have, for all $k$ and all $r$, $$ \lim_{t \to \infty} \Norm{\partial_t^{k} V(t,x)}{\mathcal{H}^r} = 0.
$$ Moreover, if $v(s,y)$ denotes the function constructed in Proposition \ref{prop1}, then $$ u(t,x) = \frac{1}{L(t)} e^{- 2is(t) - i \frac{b}{4}L^{-2}(t) |x|^2} v(s(t), \frac{x}{L(t)}) $$ is a solution in $\mathcal{H}^r_{\mathrm{rad}}$, $r > 1$, of the equation $$ i \partial_{t} u = - \Delta u + |x|^2 u + V(t,x) u $$ on $(s_0,+\infty)$. Moreover, we have \begin{equation} \label{decomp} u(t,x) = u_0(t,x) + u_1(t,x) \end{equation} with $$ \mathbb{N}orm{u_0(t,x)}{\mathcal{H}^1}^2 \sim \log t \quad \mbox{when}\quad t \to \infty $$ and such that for all $r > 1$, there exist $C_r$ and $\alpha_r$ such that $$ \mathbb{N}orm{u_1(t,x)}{\mathcal{H}^r} \leq C_r \frac{(\log t)^{\alpha_r}}{\sqrt{t}}. $$ In particular, we have \begin{equation} \label{growth1} \mathbb{N}orm{u(t,x)}{\mathcal{H}^1}^2 \sim \log t \quad \mbox{when}\quad t \to \infty. \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{vnbeibvebveiuve}] The bounds on the potential are consequences of the estimates in Corollary \ref{coro1}. To prove \eqref{growth1}, we observe that $v(s,y) = h_0(y) + w(s,y)$ with $\mathbb{N}orm{w(s)}{\mathcal{H}^1} \leq \frac{B}{\sqrt{s}}$. Hence by using Lemma \ref{lemnorms}, we see that the contribution of $w(s,y)$ converges to $0$ in $\mathcal{H}^1$ norm when $s$ goes to $\infty$. The bounds on $u_1$ are then easily proved by using the relation \eqref{boundtime} between $t(s)$ and $s$, and using the bounds on $v$ and on $L$ and $b$. By using \eqref{normegaussienne}, we finally have $$ \mathbb{N}orm{u(t)}{\mathcal{H}^1}^2 = E(b(t),L(t)) + o(1) = 4a(t) + o(1) = \log(s(t)) + o(1) \sim \log(t). $$ Proposition \ref{vnbeibvebveiuve} and Theorem \ref{Th1} are thus proved. \end{proof} \section{The linear CR equation} \label{sectioncr} In this section we consider the linearized CR equation and propose a different approach to producing growth and realizing Theorem \ref{Th1}, based on some specific properties of the CR equation.
\subsection{Existence of resonant trajectories} For CR, the existence of resonant trajectories for the perturbed problem can be reduced to the existence of suitable trajectories of the unperturbed flow. \begin{proposition}[resonant trajectory near suitable trajectories] \label{popop} Assume that there exist $s_0$, $F(s,x)$ and $f(s,x)$ such that $$ i \partial_s f = \mathbb{T}c[F] f, \quad s \in [s_0,+\infty), $$ and such that for all $r$ and $k$, there exist $\kappa = \kappa(r,k)$ and $C = C(r,k)$ such that \begin{equation} \label{Ft} \mathbb{N}orm{\partial_s^k F(s,x)}{\mathcal{H}^r} + \mathbb{N}orm{\partial_s^k f(s,x)}{\mathcal{H}^r} \leq C e^{\kappa s}, \end{equation} and there exist $c$ and $\alpha$ such that \begin{equation} \label{ft} \mathbb{N}orm{f(s,x)}{\mathcal{H}^1} \sim c e^{\alpha s}, \quad s \to +\infty, \quad c,\alpha > 0. \end{equation} Then there exist $V(t,x)$ and $u(t,x) = e^{-i t H}f(\log \log t ,x) + \mathcal{O}(1) $ realizing Theorem \ref{Th1}. \end{proposition} \begin{proof}[Proof of Proposition \ref{popop}] We set $$ V (t,x) = \frac{1}{t \log t}| e^{-i t H} F(\log \log t,x) |^2. $$ By using the exponential bounds for $F$, we easily verify that $V$ satisfies the decay hypothesis in time \eqref{decay} in $\mathcal{H}^r$ for $r > 1$ (to ensure the algebra property of $\mathcal{H}^r$). Let us set $$ u= e^{-i t H} v(t ,x). $$ Then $u$ solves \eqref{LHO1} if $v$ is a solution of $$ i \partial_t v =\frac{1}{t \log t} e^{i t H }| e^{-i t H} F(s,x) |^2 e^{ - i tH } v, \quad s = \log \log t. $$ Let us decompose $F (s, x) = \sum_{k \in \mathbb{N}} F_k(s) h_k(x)$ and $v (t, x) = \sum_{k \in \mathbb{N}} v_k(t) h_k(x)$.
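For later use, let us recall where the phase factors come from. With the radial Hermite normalization assumed here, $H h_k = (4k+2) h_k$ (a convention consistent with the factors $e^{4 i t (k - m + n - p)}$ appearing below), so that $$ e^{-i t H} v(t,x) = \sum_{k \in \mathbb{N}} e^{- i t (4k + 2)} v_k(t) h_k(x). $$ Expanding $| e^{-itH} F|^2 e^{-itH} v$ on the basis $(h_k)$ and applying $e^{itH}$, each product $F_m \overline{F}_n v_p$ is multiplied by the phase $e^{it(4k+2)} e^{-it(4m+2)} e^{it(4n+2)} e^{-it(4p+2)} = e^{4 i t (k - m + n - p)}$.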
The previous equation is equivalent to the collection of equations \begin{eqnarray*} \forall\, k \in \mathbb{N}, \quad i \partial_t v_k(t) &=& \frac{1}{t \log t} \sum_{m,n,p \in \mathbb{N}} \chi_{k m n p} e^{4 i t (k - m + n - p) } F_{m}(s) \overline{F}_n(s) v_p(t) \\ &=& \frac{1}{t \log t} (\mathbb{T}c[F] v)_k + \frac{1}{t \log t} (R(s,t) v)_k, \end{eqnarray*} where the coefficients $\chi_{k m n p}$ are given by the formula \eqref{piaz} and $$ (R(s,\theta) v)_k = \sum_{ k \neq m - n + p } \chi_{k m n p} e^{4 i \theta (k - m + n - p) } F_{m}(s) \overline{F}_n(s) v_p $$ defines an operator $R(s,\theta)$ acting on $\mathcal{H}^r$ for $r > 1$ (see for instance \cite[Proposition 2.13]{GIP09}) which is oscillatory in $\theta$. We now set $$ w(t,x) = v(t,x) - f(\log \log t, x). $$ Then by assumption on $f$, $w$ satisfies $$ i \partial_t w = \frac{1}{t \log t} \mathbb{T}c[F] w + \frac{1}{t \log t} R(s,t) w + \frac{1}{t \log t} R(s,t) f(s). $$ For $M$ large enough, we define $w^M(t)$ as the solution of this equation such that $w^M(M) = 0$, and we set $$ g^M(t,x) = w^M - i \int_{t}^{M} \frac{1}{\sigma \log \sigma} R(\log \log \sigma,\sigma) f (\log \log \sigma) \mathrm{d} \sigma =: w^M - r^M. $$ We have \begin{eqnarray*} i \partial_t g^M &=& \frac{1}{t \log t} \mathbb{T}c[F] g^M + \frac{1}{t \log t} R(s,t) g^M + \frac{1}{t \log t} \mathbb{T}c[F] r^M + \frac{1}{t \log t} R(s,t) r^M \\ &=& K(t) g^M + K(t) r^M, \quad \mbox{with}\quad K(t) = e^{i tH} V(t,x) e^{-i tH}, \end{eqnarray*} a formula that can be compared with \eqref{eqfM}-\eqref{eqRM}. As $V$ is smooth, the operator $K(t)$ possesses the same properties as the operator defined in the proof of Proposition \ref{prop21}, in particular the commutator estimate \eqref{estcommKs}. To conclude by using the same arguments as in that proof, we thus just need to control $\mathbb{N}orm{r^M}{\mathcal{H}^r}$ for $r$ large enough (see estimate \eqref{raslbol}).
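The control of $r^M$ relies on a non-stationary phase argument: on the sum defining $R$ we have $|k - m + n - p| \geq 1$, so each oscillatory factor can be integrated exactly, $$ e^{4 i \sigma (k - m + n - p)} = \frac{1}{4 i (k - m + n - p)} \partial_\sigma e^{4 i \sigma (k - m + n - p)}, $$ and one integration by parts in $\sigma$ trades the oscillation for boundary terms and integrands carrying an extra factor $1/\sigma$.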
Now we have $$ r_k^M(t) = i \sum_{ k \neq m - n + p } \chi_{k m n p} \int_{t}^{M} \frac{1}{\sigma \log \sigma} e^{4 i \sigma (k - m + n - p) } F_{m}(s(\sigma)) \overline{F}_n(s(\sigma)) f_p(s(\sigma)) \mathrm{d} \sigma. $$ We integrate the oscillatory term by parts and use the fact that $|k - m + n - p|\geq 1$. By applying the bounds on the coefficients $\chi_{kmnp}$ and the estimates for polynomials acting on $\mathcal{H}^r$ of \cite[Proposition 3.3]{GIP09}, we obtain a bound of the form (for $r > 1$) \begin{eqnarray*} \mathbb{N}orm{r^M(t)}{\mathcal{H}^r} &\leq& \frac{C}{t \log t} \mathbb{N}orm{F(\log \log t)}{\mathcal{H}^r}^{2} \mathbb{N}orm{f(\log \log t)}{\mathcal{H}^r} \\ &&+ \frac{C}{M \log M} \mathbb{N}orm{F(\log \log M)}{\mathcal{H}^r}^{2} \mathbb{N}orm{f(\log \log M)}{\mathcal{H}^r} \\ &&+ C \int_{t}^M\frac{1}{\sigma^2 (\log \sigma)^2 } \mathbb{N}orm{(\partial_s F)(\log \log \sigma)}{\mathcal{H}^r} \mathbb{N}orm{F(\log \log \sigma)}{\mathcal{H}^r} \mathbb{N}orm{f(\log \log \sigma)}{\mathcal{H}^r} \mathrm{d} \sigma\\ &&+ C \int_{t}^M\frac{1}{\sigma^2 (\log \sigma)^2 } \mathbb{N}orm{F(\log \log \sigma)}{\mathcal{H}^r}^2 \mathbb{N}orm{(\partial_s f)(\log \log \sigma)}{\mathcal{H}^r} \mathrm{d} \sigma\\ &&+ C \int_{t}^M\frac{1}{\sigma^2 (\log \sigma)^2 } \mathbb{N}orm{F(\log \log \sigma)}{\mathcal{H}^r}^2 \mathbb{N}orm{f(\log \log \sigma)}{\mathcal{H}^r} \mathrm{d} \sigma. \end{eqnarray*} Using the bounds on $f$ and $F$, we conclude that for some constants $C_r$ and $\beta_r$, we have for $t < M$ and uniformly in $M$, $$ \mathbb{N}orm{r^M(t)}{\mathcal{H}^r} \leq C_r \frac{(\log t)^{\beta_r}}{t}. $$ If we compare with \eqref{raslbol}, we see that we lose a factor $(\log t)^{\beta_r}$ compared with the estimates in the proof of Proposition \ref{prop21}, but this does not affect the result, and the conclusion is the same; see in particular \eqref{top} for the same kind of estimate.
We conclude that $w^M$ converges towards a solution of \eqref{LHO1} satisfying, in $\mathcal{H}^r$ for $r$ large enough, $$ u = e^{- i tH} (f(\log \log t) + \mathcal{O}(1)), $$ from which we obtain the result by using \eqref{ft}. \end{proof} \subsection{Existence of suitable trajectories and conclusion} Hence we are reduced to the problem of finding functions $F$ and $f$ satisfying \eqref{Ft} and \eqref{ft}. Due to the numerous invariances of the CR equation, there are many ways to construct such examples. Up to a change of time, one such example is given in \cite{Tho20} by using the analysis in \cite{ST20} for the lowest Landau level equation, which coincides with the CR equation on the Bargmann-Fock space, see \cite{GHT16}. Here we give a general recipe to build simple examples based on the following fact: \begin{lemma} Let $\kappa$, $\nu$, $\mu$ be real numbers. There exist $\beta \in \mathbb{C}$ and $\lambda \in \mathbb{R}$ such that \begin{equation} \label{eqmode} (\nu \Delta + i \mu ( 1+ \Lambda) + \kappa |x|^2 )h_0 +\mathbb{T}c[ h_0 + \beta h_1] h_0 = \lambda h_0. \end{equation} \end{lemma} \begin{proof} Using \eqref{eqy2}, \eqref{eqLap} and \eqref{bemol}, we have $$ \kappa |x|^2 h_0 = \kappa ( h_0 - h_1), \quad \nu \Delta h_0 = \nu( - h_0 - h_1)\quad \mbox{and} \quad i \mu ( 1 + \Lambda) h_0 = i \mu h_1. $$ On the other hand, we have using \eqref{CRhn} $$ \mathbb{T}c[ h_0 + \beta h_1]h_0 = \chi_{0000} h_0 + |\beta|^2 \chi_{1100} h_0 + \beta \chi_{0110} h_1. $$ Hence the equation \eqref{eqmode} is satisfied if we have $$ \left| \begin{array}{l} \chi_{0000} + |\beta|^2 \chi_{1100} = \nu - \kappa + \lambda, \\[1ex] \beta \chi_{0110} = \nu + \kappa - i \mu, \end{array} \right. $$ which is a solvable equation in $\beta$ and $\lambda$ as $\chi_{0110}$ is nonzero.
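Explicitly, the system is solved by $$ \beta = \frac{\nu + \kappa - i \mu}{\chi_{0110}}, \qquad \lambda = \chi_{0000} + |\beta|^2 \chi_{1100} - \nu + \kappa, $$ and $\lambda$ is real since $\chi_{0000}$, $\chi_{1100}$ and $|\beta|^2$ are. In particular, for $\nu = \kappa = 0$ we get $\beta = - i \mu/\chi_{0110} \in i \mathbb{R}$.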
\end{proof} To make the equation \eqref{eqmode} appear, we proceed as follows: we consider the equation $$ i \partial_s f = \mathbb{T}c[F]f. $$ For $N$, $m$, $\gamma$ and $c$ depending on the time, we make the change of variable $$ f = e^{ i \gamma}\mathcal{S}_N e^{i m |\xi|^2} e^{i c \Delta } g, $$ where $\mathcal{S}_N$ is given by \eqref{ScN}. \begin{proposition} Let $\kappa$, $\nu$, $N$, $c$, $m$, $\lambda$ and $\gamma$ be given functions of $s$. Let $f$ and $g$, $F$ and $G$ be linked by the relations $$ f = e^{ i \gamma} \mathcal{S}_N e^{i m |\xi|^2} e^{i c \Delta} g \quad \mbox{and} \quad F = e^{ i \gamma} \mathcal{S}_N e^{i m |\xi|^2} e^{i c \Delta} G. $$ Then $$ i \partial_s f = \mathbb{T}c[F]f \quad \Longleftrightarrow\quad i \partial_s g = \nu \Delta g + i \mu ( 1 + \Lambda)g + \kappa |\xi|^2 g + \mathbb{T}c[G]g - \lambda g $$ if and only if \begin{equation} \label{newmodul} \left| \begin{array}{rcl} \frac{N_s}{N} &=& 4 c \kappa + \mu, \\ m_s &=& (1 + 4 c m) \kappa,\\ c_s &=& \nu - 4 \kappa c^2, \\ \gamma_s &=& -\lambda. \end{array} \right. \end{equation} \end{proposition} \begin{proof} To obtain the equation for $g$, we proceed step by step. Let us first assume $$ f = \mathcal{S}_N v \quad \mbox{and} \quad F = \mathcal{S}_N V. $$ Then we have $$ i \partial_s f = i \partial_s \mathcal{S}_N v = \mathcal{S}_N i \partial_s v - i\frac{N_s}{N} \mathcal{S}_N ( 1 + \Lambda) v $$ and $$ \mathbb{T}c[F]f = \mathcal{S}_N \mathbb{T}c[V]v. $$ We thus find the equation $$ i \partial_s v = i \frac{N_s}{N} ( 1 + \Lambda) v + \mathbb{T}c[V]v. $$ Now assume $$ v = e^{i m |\xi|^2} w \quad \mbox{and} \quad V = e^{i m |\xi|^2} W. $$ We have $$ i \partial_s w - m_s |\xi|^2 w = i \frac{N_s}{N} e^{- i m |\xi|^2}( 1 + \Lambda)e^{i m |\xi|^2} w + \mathbb{T}c [W]w. $$ Hence using \eqref{C3}, $$ i \partial_s w - m_s |\xi|^2 w = i \frac{N_s}{N} ( 1 + \Lambda)w + \Big( m_s - 2 m \frac{N_s}{N}\Big) |\xi|^2 w + \mathbb{T}c [W]w.
$$ Let us set $$ \kappa = m_s - 2 m \frac{N_s}{N} $$ and $w = e^{i c \Delta + i \gamma} g$, $W = e^{i c \Delta} G$. We find using \eqref{C2} and \eqref{C4} \begin{eqnarray*} i \partial_s g - \gamma_s g &=& c_s \Delta g + i \frac{N_s}{N} e^{- i c \Delta }( 1 + \Lambda) e^{i c \Delta} g + \kappa e^{- i c \Delta } |\xi|^2e^{i c \Delta} g + \mathbb{T}c[G]g\\ &=& c_s \Delta g + i \frac{N_s}{N} ( 1 + \Lambda) g + 2 c \frac{N_s}{N} \Delta g + \kappa |\xi|^2 g - 4 \kappa c^2 \Delta g - 4 i \kappa c ( 1 + \Lambda)g + \mathbb{T}c[G]g\\ &=& (c_s + 2 c \frac{N_s}{N} - 4 \kappa c^2) \Delta g + i ( \frac{N_s}{N} - 4 \kappa c) ( 1 + \Lambda) g + \kappa |\xi|^2 g + \mathbb{T}c[G]g, \end{eqnarray*} and we obtain the result by identifying the coefficients. \end{proof} Now by using Lemma \ref{lemnorms} we have thus built solutions to the linear CR equation with $\mathcal{H}^1$ norm growing like $$ \mathbb{N}orm{\mathcal{S}_N e^{i m |x|^2} e^{i c \Delta} h_0}{\mathcal{H}^1}^2 = N^{2}\big( 1 + 4 c^2\big) + \frac{1}{N^{2}} \big((1 + 4cm)^2 + 4m^2\big), $$ where $N$, $m$ and $c$ solve \eqref{newmodul}. With the simplest example $c = m = 0$, $N = e^{\mu s}$, we obtain the following result: \begin{proposition} Let $\mu > 0$ and $N = e^{\mu s}$. Then there exist $\beta \in i \mathbb{R}$ and $\lambda \in \mathbb{R}$ such that $f(s,x) = e^{- is \lambda}\mathcal{S}_N h_0$ and $F(s,x) = e^{- is \lambda} \mathcal{S}_N (h_0 + \beta h_1)$ satisfy \eqref{ft} and \eqref{Ft} and provide a solution to the linear CR equation. \end{proposition} Many solutions of the previous form can be constructed, as well as solutions obtained by modulating parameters with the other invariance laws of CR (see Table 1). In each case, this provides examples of weakly turbulent solutions of linear time-dependent equations with a pseudo-differential perturbation of order $0$ and, by using Proposition \ref{popop}, examples of smooth potentials producing growth of Sobolev norms.
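The $c = m = 0$ case of this growth formula can be checked numerically. The sketch below rests on assumptions about the normalizations behind \eqref{ScN} and the $\mathcal{H}^1$ norm, namely $h_0(x) = \pi^{-1/2} e^{-|x|^2/2}$ on $\mathbb{R}^2$, the $L^2$-normalized dilation $(\mathcal{S}_N u)(x) = N u(Nx)$, and $\mathbb{N}orm{u}{\mathcal{H}^1}^2 = \|\nabla u\|_{L^2}^2 + \| |x| u \|_{L^2}^2$; under these conventions the energy is exactly $N^2 + N^{-2}$.

```python
import math

def h1_energy_squared(N, rmax=20.0, steps=200_000):
    """Midpoint-rule value of ||grad(S_N h0)||^2 + || |x| S_N h0 ||^2 on R^2
    for h0(x) = pi^{-1/2} exp(-|x|^2/2) and (S_N u)(x) = N u(N x)
    (assumed normalizations; both integrals reduce to radial ones)."""
    dr = rmax / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        u = N / math.sqrt(math.pi) * math.exp(-((N * r) ** 2) / 2.0)
        du = -(N * N) * r * u  # d/dr of the radial profile of S_N h0
        total += (du * du + r * r * u * u) * 2.0 * math.pi * r * dr
    return total

for N in (1.0, 2.0, 5.0):
    expected = N**2 + N**-2  # the c = m = 0 case of the displayed formula
    assert abs(h1_energy_squared(N) - expected) < 1e-4 * expected
```

With $N = e^{\mu s}$ this gives $e^{2\mu s} + e^{-2\mu s}$, the exponential growth required by \eqref{ft}.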
The complete classification of all these solutions as well as their genericity is clearly beyond the scope of this paper. \begin{appendix} \section{Integral and norms of radial Hermite functions\label{hermint}} \begin{proposition} For $n,k \geq 0$, we have \begin{equation} \label{positivity} (h_n, h_0 h_k)_{L^2} = \frac{2}{\sqrt{\pi}}\left( \frac{1}{3}\right)^{n + 1} \sum_{p + q= k}\left( \frac{1}{3}\right)^{q} \frac{1}{p! q!} \frac{(n + q)!}{(n-p)!} >0, \end{equation} and \begin{equation} \label{positivity2} (h_n, h_0^2 h_k)_{L^2} = \frac{1}{ \pi} { n + k \choose k} \left(\frac{1}{2}\right)^{n+1 + k} >0. \end{equation} \end{proposition} \begin{proof} Using \eqref{generhn}, and as $h_0(x) = \frac{1}{\sqrt{\pi}} e^{- \frac{|x|^2}{2}}$, $$ \sum_{k = 0}^{\infty} t^k h_0(x) h_k(x) =\frac{1}{\pi}\frac{1}{1 - t} e^{- \frac{ t |x|^2 }{1 - t}}e^{- |x|^2} = \frac{1}{\pi}\frac{1}{1 - t} e^{- \frac{|x|^2}{1 - t}}. $$ We thus have for real numbers $r$ and $t$ such that $|r| < 1$ and $|t| < 1$, \begin{eqnarray*} \sum_{n,k = 0}^\infty r^n t^k ( h_n,h_0 h_k)_{L^2} &=& \frac{1}{\pi^{\frac{3}{2}}}\frac{1}{(1 - t)(1 - r)} \int_{\mathbb{R}^2} e^{- \frac{|x|^2}{1 - t}} e^{- \frac{(1+ r)}{2(1 - r)}|x|^2} \mathrm{d} x\\ &=& \frac{1}{\pi^{\frac{3}{2}}}\frac{1}{(1 - t)(1 - r)} \int_{\mathbb{R}^2} e^{- \frac{(3 - r - t - tr)}{2(1 - r)(1 - t)}|x|^2} \mathrm{d} x\\ &=& \frac{1}{\pi^{\frac{3}{2}}}\frac{2}{3 - r -t - tr} \int_{\mathbb{R}^2} e^{- |x|^2} \mathrm{d} x = \frac{2}{\sqrt{\pi}} \left( \frac{1}{3 - t - r- tr}\right). \end{eqnarray*} Hence we have $$ \frac{\sqrt{\pi}}{2}\sum_{n,k = 0}^\infty r^n t^k ( h_n,h_0 h_k)_{L^2} = \frac{1}{(3 - t)} \left( \frac{1}{1 - r\frac{1 + t}{3 - t}}\right) = \frac{1}{3 - t} \sum_{n = 0}^{\infty} r^n \left(\frac{1 + t}{3 - t}\right)^n. $$ By letting $t$ be fixed such that $|t| < 1$ and considering $r$ small enough, we deduce that $$ F_n(t) := \sum_{k = 0}^\infty t^k ( h_n,h_0 h_k)_{L^2} = \frac{2}{\sqrt{\pi}}\frac{(1+ t)^n}{(3 - t)^{n+1}}.
$$ This shows that all the coefficients of the expansion are positive, and we have explicitly for $n \geq k$: \begin{eqnarray*} ( h_n,h_0 h_k)_{L^2} &=& \frac{2}{\sqrt{\pi}k!}\left.\frac{\mathrm{d}^k}{\mathrm{d} t^k} F_n(t) \right|_{t = 0} \\ &=&\frac{2}{\sqrt{\pi}}\left.\frac{1}{k!}\sum_{p + q= k} {k \choose p} \frac{n!}{(n-p)!} (1 + t)^{n - p} \frac{(n + q)!}{n!}( 3 - t)^{- n - 1 - q} \right|_{t = 0}\\ &=& \frac{2}{\sqrt{\pi}}\left( \frac{1}{3}\right)^{n + 1} \sum_{p + q= k}\left( \frac{1}{3}\right)^{q} \frac{1}{p! q!} \frac{(n + q)!}{(n-p)!} > 0, \end{eqnarray*} and we deduce the case $n < k$ by the symmetry $( h_n,h_0 h_k)_{L^2} = ( h_k,h_0 h_n)_{L^2}$. This proves \eqref{positivity}. To prove \eqref{positivity2} we proceed in a similar way: we have $$ \sum_{k = 0}^{\infty} t^k h_0(x)^2 h_k(x) =\frac{1}{\pi^{\frac{3}{2}}}\frac{1}{1 - t} e^{- \frac{ t |x|^2 }{1 - t}}e^{- \frac{3}{2}|x|^2} = \frac{1}{\pi^{\frac{3}{2}}}\frac{1}{1 - t} e^{- \frac{(3 - t)|x|^2}{2(1 - t)}}. $$ We thus have for $|r| < 1$ and $|t| < 1$, using \eqref{generhn}, \begin{eqnarray} \sum_{n,k = 0}^\infty r^n t^k ( h_n,h_0^2 h_k)_{L^2} &=& \frac{1}{\pi^{2}}\frac{1}{(1 - t)(1 - r)} \int_{\mathbb{R}^2} e^{- \frac{(3 - t) |x|^2}{2(1 - t)}} e^{- \frac{(1+ r)}{2(1 - r)}|x|^2} \mathrm{d} x\nonumber \\ &=& \frac{1}{\pi^{2}}\frac{1}{(1 - t)(1 - r)} \int_{\mathbb{R}^2} e^{- \frac{(2 - r - t)}{(1 - r)(1 - t)}|x|^2} \mathrm{d} x \nonumber \\ &=& \frac{1}{\pi^{2}}\frac{1}{2 - r -t} \int_{\mathbb{R}^2} e^{- |x|^2} \mathrm{d} x = \frac{1}{\pi} \left( \frac{1}{2 - t - r}\right). \label{deme} \end{eqnarray} Hence we have $$ \pi\sum_{n,k = 0}^\infty r^n t^k ( h_n,h_0^2 h_k)_{L^2} = \frac{1}{(2 - t)} \left( \frac{1}{1 - r\frac{1}{2 - t}}\right) = \frac{1}{2 - t} \sum_{n = 0}^{\infty} r^n \left(\frac{1}{2 - t}\right)^n. $$ By letting $t$ be fixed such that $|t| < 1$ and considering $r$ small enough, we deduce that $$ F_n(t) := \sum_{k = 0}^\infty t^k ( h_n,h_0^2 h_k)_{L^2} = \frac{1}{\pi}\frac{1}{(2 - t)^{n+1}}.
$$ This shows that all the coefficients of the expansion are positive, and we have explicitly: $$ ( h_n,h_0^2 h_k)_{L^2} =\frac{1}{ \pi k!}\left.\frac{\mathrm{d}^k}{\mathrm{d} t^k} F_n(t) \right|_{t = 0} = \frac{1}{ \pi k!}\frac{(n + k)!}{n!} \left(\frac{1}{2}\right)^{n+1 + k}, $$ which shows the result. \end{proof} \section{A backward Gr\"onwall inequality} \begin{lemma} \label{gr} Let $s_0 >0$ and $M >0$. Assume that $\beta(s) > 0$ and $\alpha(s)$ are functions defined on $(s_0,M)$, and that $u(s)$ satisfies $$ u(s) \leq \alpha(s) + \int_s^M \beta(\sigma) u(\sigma) \mathrm{d} \sigma. $$ Then we have $$ u(s) \leq \alpha(s) + \int_{s}^M \alpha(\sigma) \beta(\sigma) \exp \left( \int^{\sigma}_{s} \beta(\tau) \mathrm{d} \tau\right) \mathrm{d} \sigma. $$ \end{lemma} \begin{proof} Let $v(s) = u(M-s + s_0)$, $\tilde \alpha(s) = \alpha( M - s + s_0)$ and $\tilde \beta(s) = \beta( M - s + s_0)$, which are defined on $(s_0,M)$. We have $$ v(s) \leq \tilde \alpha(s) + \int_{M - s + s_0}^M \beta(\sigma) u(\sigma) \mathrm{d} \sigma = \tilde \alpha(s) + \int_{s_0}^s \tilde \beta(\sigma) v(\sigma) \mathrm{d} \sigma. $$ By the classical Gr\"onwall inequality, we have \begin{eqnarray*} v(s) &\leq& \tilde \alpha(s) + \int_{s_0}^s \tilde \alpha(\sigma) \tilde \beta(\sigma) \exp \left( \int_{\sigma}^s \tilde \beta(\tau) \mathrm{d} \tau\right) \mathrm{d} \sigma \\ &=& \tilde \alpha(s) + \int_{M - s +s_0}^M \alpha(\sigma) \beta(\sigma) \exp \left( \int_{M - \sigma + s_0}^s \beta(M - \tau + s_0) \mathrm{d} \tau\right) \mathrm{d} \sigma \\ &=& \tilde \alpha(s) + \int_{M - s +s_0}^M \alpha(\sigma) \beta(\sigma) \exp \left( \int^{\sigma}_{M - s + s_0} \beta(\tau) \mathrm{d} \tau\right) \mathrm{d} \sigma, \end{eqnarray*} from which we deduce the result. \end{proof} \end{appendix} \end{document}
\begin{document} \title{Optimal two-stage testing of multiple mediators} \begin{abstract} Mediation analysis in high-dimensional settings often involves identifying potential mediators among a large number of measured variables. For this purpose, a two-step familywise error rate (FWER) procedure called ScreenMin has recently been proposed (Djordjilovi\'c et al. 2019). In ScreenMin, variables are first screened and only those that pass the screening are tested. The proposed threshold for selection has been shown to guarantee asymptotic FWER control. In this work, we investigate the impact of the selection threshold on the finite sample FWER. We derive the power-maximizing selection threshold and show that it is well approximated by an adaptive threshold of Wang et al. (2016). We study the performance of the proposed procedures in a simulation study, and apply them to a case-control study examining the effect of fish intake on the risk of colorectal adenoma. \end{abstract} \keywords{high-dimensional mediation, familywise error rate, union hypothesis, partial conjunction hypothesis, intersection-union test, multiple testing, screening.} \section{Introduction} Mediation analysis is an important tool for investigating the role of intermediate variables lying on the path between an exposure or treatment ($X$) and an outcome variable ($Y$) \citep{vanderweele2015explanation}. Recently, mediation analysis has been of interest in emerging fields characterized by an abundance of experimental data. In genomics and epigenomics, researchers search for potential mediators of lifestyle and environmental exposures on disease susceptibility \citep{richardson2019integrative}; examples include mediation by DNA methylation of the effect of smoking on lung cancer risk \citep{fasanelli2015hypomethylation} and of the protective effect of breastfeeding against childhood obesity \citep{sherwood2019duration}.
In neuroscience, researchers search for the parts of the brain that mediate the effect of an external stimulus on the perceived sensation \citep{woo2015distinct,chen2017high}. In these and other problems of this kind, researchers wish to investigate a large number of putative mediators, with the aim of identifying a subset of relevant variables to be studied further. This issue has been recognized as transcending the traditional (confirmatory) causal mediation analysis and has been termed {\it exploratory mediation analysis} \citep{serang2017exploratory}. Within the hypothesis testing framework, the problem of identifying potential mediators among $m$ variables $M_i$, $i=1,\ldots,m$, can be formulated as the problem of testing a collection of $m$ union hypotheses of the form $$ H_i= H_{i1}\cup H_{i2}, \quad H_{i1}: M_i \perp \! \! \! \! \perp X, \quad H_{i2}: M_i \perp \! \! \! \! \perp Y \mid (X, \boldsymbol{M}_{-i})^\top, $$ where $\boldsymbol{M}_{-i}= (M_1,\ldots,M_{i-1}, M_{i+1}, \ldots, M_m)$. Since $m$ is typically large with respect to the study sample size, it might be challenging to make inference on the conditional independence of $M_i$ and $Y$ given $X$ and the entire $(m-1)$-dimensional vector $\boldsymbol {M}_{-i}$. Instead, one can consider each putative mediator marginally in this exploratory stage and replace $H_{i2}$ with $H_{i2}^*: M_i \perp \! \! \! \! \perp Y \mid X$ \citep{sampson2018fwer}. For us, a variable $M_i$ is a potential mediator if $H_i$ is false, i.e., if both $H_{i1}$ and $H_{i2} $ $(H_{i2}^*)$ are false. Our goal is to identify as many potential mediators as possible while keeping familywise error rate (FWER) below a prescribed level $\alpha\in (0,1)$. Assume we have valid $p$-values, $p_{ij}$, for testing hypotheses $H_{ij}$. 
They would typically be obtained from two parametric models: a {\it mediator model} that models the relationship between $X$ and $\boldsymbol{M}$, and an {\it outcome model} that models the relationship between $Y$ and $X$ and $\boldsymbol{M}$. Then, according to the intersection-union principle, $\overline{p}_i \coloneqq \max\left\{p_{i1}, p_{i2}\right\}$ is a valid $p$-value for $H_i$ \citep{gleser1973}. A simple solution to the considered problem consists of applying a standard multiple testing procedure, such as Bonferroni or \cite{holm1979simple}, to a collection of $m$ maximum $p$-values $\left\{\overline{p}_i, \,i=1,\ldots,m\right\}$. Unfortunately, due to the composite nature of the considered null hypotheses, $\overline{p}_i$ will be a conservative $p$-value for some points of the null hypothesis $H_i$. For instance, when both $H_{i1}$ and $H_{i2}$ are true, $\overline{p}_i$ will be distributed as the maximum of two independent standard uniform random variables, and thus stochastically larger than the standard uniform. This implies that the direct approach tends to be very conservative in most practical situations. Indeed, when only a small fraction of the hypotheses $H_{ij}$ is false -- a plausible assumption in most applications considered above -- the actual FWER can be shown to be well below $\alpha$ \citep{wang2016detecting}, resulting in a low-powered procedure. To mitigate this issue, we have recently proposed a two-step procedure, ScreenMin, in which hypotheses are first screened on the basis of the minimum, $\underline{p}_i \coloneqq\min\left\{p_{i1}, p_{i2}\right\}$, and only hypotheses that pass the screening are tested: \begin{procedure}[ScreenMin \citep{djordjilovic2019global}] For a given $c\in (0,1)$, select $H_i$ if $\underline{p}_i \leq c$, and let $S=\left\{i: \underline{p}_i \leq c\right\}$ denote the selected set.
The ScreenMin adjusted $p$-values are $$ p_i^* = \begin{cases} \min \left\{\abs S \, \overline{p}_i, 1 \right\} \quad \mbox{ if } i\in S,\\ 1 \quad \mbox{otherwise,} \end{cases} $$ where $\abs{S}$ is the size of the selected set. \end{procedure} It has been proved that, under the assumption of independence of all $p$-values, the ScreenMin procedure maintains asymptotic FWER control. Independence of $p_{i1}$ and $p_{i2}$ follows from the correct specification of the outcome and the mediator model, while independence between rows of the $m\times 2$ $p$-value matrix, i.e. within the sets $\left\{p_{11},\ldots,p_{m1}\right\}$ and $\left\{p_{12},\ldots,p_{m2}\right\}$, is a common, although often unrealistic, assumption in the multiple testing framework that we discuss in Section \ref{discussion}. With regard to power, by reducing the number of hypotheses that are tested, the proposed procedure can significantly increase the power to reject false union hypotheses. In this work, we look more closely at the role of the threshold for selection $c$. We show that the ScreenMin procedure does not guarantee non-asymptotic FWER control for arbitrary thresholds, neither conditionally on $\abs{S}$, nor unconditionally. We derive an upper bound on the finite sample FWER, and then investigate the optimal threshold, where optimality is defined in terms of maximizing the power while guaranteeing finite sample FWER control. We formulate this problem as a constrained optimization problem, and solve it under the assumption that the proportion of false hypotheses and the distribution of the non-null $p$-values are known. We show that the solution is the smallest threshold that satisfies the FWER constraint, and that the data-dependent version of this oracle threshold leads to a special case of an adaptive threshold proposed recently in the context of testing general partial conjunction hypotheses by \cite{wang2016detecting}.
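The two steps above are simple to implement. The following sketch (stdlib Python) simulates ScreenMin with the default threshold $c = \alpha/m$ on synthetic one-sided normal test statistics; the mean shift, the mix of null and non-null pairs, and the Monte Carlo layout are illustrative choices, not taken from the text.

```python
import math
import random

def p_value(z):
    """One-sided p-value of a N(0,1) test statistic."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def screenmin(pairs, alpha=0.05):
    """ScreenMin: screen on the minimum p-value at c = alpha/m, then reject
    H_i for selected i whose maximum p-value is below alpha/|S|
    (equivalently, whose adjusted p-value min(|S| max_p, 1) is below alpha).
    `pairs` is a list of (p_i1, p_i2); returns the set of rejected indices."""
    m = len(pairs)
    c = alpha / m  # default ScreenMin selection threshold
    selected = [i for i, (p1, p2) in enumerate(pairs) if min(p1, p2) <= c]
    if not selected:
        return set()
    return {i for i in selected if max(pairs[i]) <= alpha / len(selected)}

# Toy Monte Carlo: 90 (0,0) pairs and 10 (1,1) pairs with mean shift 3.
random.seed(0)
m0, m1, shift, reps = 90, 10, 3.0, 500
fwer = power = 0.0
for _ in range(reps):
    pairs = [(p_value(random.gauss(0.0, 1.0)), p_value(random.gauss(0.0, 1.0)))
             for _ in range(m0)]
    pairs += [(p_value(random.gauss(shift, 1.0)), p_value(random.gauss(shift, 1.0)))
              for _ in range(m1)]
    rejected = screenmin(pairs)
    fwer += any(i < m0 for i in rejected)
    power += sum(i >= m0 for i in rejected) / m1
print("estimated FWER:", fwer / reps, "average power:", power / reps)
```

Here $c = \alpha/m$ is the default threshold discussed below; the oracle threshold of the later sections would replace it.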
In their work, \cite{wang2016detecting} show that the proposed adaptive threshold maintains FWER control; our results further show that it is also (nearly) optimal in terms of power. Recently, methodological issues pertaining to high-dimensional mediation analysis have been receiving increasing attention in the literature. Most proposed approaches focus on dimension reduction \citep{huang2016hypothesis, chen2017high} or penalization techniques \citep{zhao2016pathway, zhang2016estimating,song2018bayesian}, or a combination of the two \citep{ZHAO2020106835}. An approach most similar to ours is a multiple testing procedure proposed by \cite{sampson2018fwer}. The authors adapt the procedures proposed by \cite{bogomolov2018assessing} in the context of replicability analysis to the mediation setting. Indeed, both the problem of identifying potential mediators and the problem of identifying replicable findings across two studies can be seen as special cases of testing multiple partial conjunction hypotheses \citep{benjamini2008screening}. \section{Notation and setup} As already stated, we consider a collection $\mathcal{H}$ of $m$ null hypotheses of the form $H_i=H_{i1}\cup H_{i2}$. For each hypothesis pair $(H_{i1}, H_{i2})$ there are four possible states, $\left\{(0,0), (0,1), (1,0), (1,1)\right\}$, indicating whether the respective hypotheses are true (0) or false (1). Let $\pi_0$ denote the proportion of $(0,0)$ hypothesis pairs, i.e. pairs in which both component hypotheses are true; $\pi_1$ the proportion of $(0,1)$ and $(1,0)$ pairs, in which exactly one hypothesis is true; and $\pi_2$ the proportion of $(1,1)$ pairs, in which both hypotheses are false. In mediation, the $(1,1)$ hypotheses are of interest, and our goal is to reject as many such hypotheses as possible, while controlling FWER for $\mathcal{H}$. We denote by $p_{ij}$ the $p$-value for $H_{ij}$ (whether we refer to a random variable or its realization will be clear from the context).
We assume that the $p_{ij}$ are continuous and independent random variables. We further assume that the distribution of the null $p$-values is standard uniform and that the density of the non-null $p$-values is strictly decreasing; we denote by $F$ the cumulative distribution function of the non-null $p$-values. This will hold, for example, when the test statistics are normally distributed with a mean shift under the alternative; we will use this setting for illustration purposes throughout. We further let $\overline{p}_i$ ($\underline{p}_i$) denote the maximum (the minimum) of $p_{i1}$ and $p_{i2}$. For a given threshold $c\in (0,1)$, let the selection event be represented by a vector $G=(G_1,\ldots,G_m) \in \left\{0,1\right\}^m$, so that $G_i=1$ if $\underline{p}_i \leq c$ and $G_i=0$ otherwise. The size of the selected set is then $\abs S=\sum_{j=1}^m G_j$. \section{Finite sample FWER} Validity of the ScreenMin procedure relies on the maximum $p$-value, $\overline{p}_i$, remaining an asymptotically valid $p$-value after selection. We are thus interested in the distribution of $\overline{p}_i$ conditional on the selection $G$. We first look at the distribution of $\overline{p}_i$ conditional on the event that the $i$-th hypothesis has been selected. \begin{lemma}\label{condpvallemma} If $(H_{i1}, H_{i2})$ is a $(0,1)$ or a $(1,0)$ pair, then the distribution of $\overline{p}_i$ conditional on hypothesis $H_i$ being selected is \begin{equation} \mathrm {Pr}(\overline{p}_i \leq u \mid \underline{p}_i \leq c) =\begin{dcases} \frac{uF(u)}{F(c) + c - cF(c)}, \quad \mbox{ for } 0<u\leq c\leq 1\\ \frac{cF(u)+ uF(c)-cF(c)}{F(c) + c - cF(c)}, \quad \mbox{ for } 0<c\leq u\leq 1. \end{dcases} \label{condpval} \end{equation} If $(H_{i1}, H_{i2})$ is a $(0,0)$ pair, then \begin{equation*} \mathrm {Pr}(\overline{p}_i \leq u \mid \underline{p}_i \leq c) =\begin{dcases} \frac{u^2}{c(2-c)}, \quad \mbox{ for } 0<u\leq c\leq 1\\ \frac{2u-c}{2-c}, \quad \mbox{ for } 0<c\leq u\leq 1.
\end{dcases} \end{equation*} \end{lemma} The proof is in Section \ref{a1}. The conditional $p$-value in \eqref{condpval} will play an important role in the following considerations. Since it is a function of both the selection threshold $c$ and the testing threshold $u$, we will denote it by $P_0(u, c)$. Consider now the distribution of $\overline{p}_i$ conditional on the entire selection event $G$ (where we are only interested in selections for which $G_i=1$). Given the independence of all $p$-values, $$ \mathrm{Pr}\left(\overline{p}_i\leq u \mid G\right)= \mathrm{Pr}\left(\overline{p}_i\leq u \mid G_i\right) = P_0(u,c) $$ for any fixed $u \in (0,1)$. However, in the ScreenMin procedure we are not interested in all $u$; we are interested in the data-dependent threshold $\alpha/\abs{S}$. Nevertheless, we can still use expression \eqref{condpval}, since \begin{equation}\label{condselection} \mathrm{Pr}\left(\overline{p}_i\leq \frac{\alpha}{\abs{S}} \mathrel{\Big|} G\right) = \mathrm{Pr}\left(\overline{p}_i\leq \frac{\alpha}{1+\sum_{j\neq i} G_j} \mathrel{\Big|} I[\underline{p}_i \leq c], \sum_{j\neq i}G_j\right) = P_0\left(\frac{\alpha}{\abs{S}},\, c\right), \end{equation} where the first equality follows from observing that when the $i$-th hypothesis is selected we can write $\abs{S}= 1+\sum_{j\neq i} G_j$; and the second from the independence of $\overline{p}_i$ and $\sum_{j\neq i} G_j$. Screening on the basis of the minimum $\underline{p}_i$ would ideally leave $\overline{p}_i$ a valid $p$-value. Recall that a random variable is a valid $p$-value if its distribution under the null hypothesis is either standard uniform or stochastically greater than the standard uniform. For a given $c$, the conditional $p$-value in \eqref{condpval} should thus satisfy $P_0(u,c)\leq u$ for $u \in (0,1)$. Although this has been shown to hold asymptotically \citep{djordjilovic2019global}, the following analytical counterexample shows this is not the case in finite samples.
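Before turning to it, note that the closed form \eqref{condpval} is easy to check by simulation. The sketch below (stdlib Python) takes the non-null coordinate to be a one-sided normal test with an illustrative mean shift, so that $F(u) = \Phi(\mathrm{snr} - z_{1-u})$; the helper names are ours, not the paper's.

```python
import math
import random

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def F_cdf(t, snr):
    """CDF of a one-sided p-value whose test statistic is N(snr, 1):
    F(t) = Phi(snr - z_{1-t}), with the quantile z_{1-t} found by bisection."""
    lo, hi = -12.0, 12.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < 1.0 - t:
            lo = mid
        else:
            hi = mid
    return Phi(snr - 0.5 * (lo + hi))

def p0_formula(u, c, snr):
    """P_0(u, c) from the lemma for a (0,1) pair."""
    Fc, Fu = F_cdf(c, snr), F_cdf(u, snr)
    denom = Fc + c - c * Fc
    return u * Fu / denom if u <= c else (c * Fu + u * Fc - c * Fc) / denom

def p0_monte_carlo(u, c, snr, reps=200_000, seed=1):
    """Empirical Pr(max p <= u | min p <= c) for one null (uniform)
    and one non-null coordinate."""
    rng = random.Random(seed)
    hits = sel = 0
    for _ in range(reps):
        p1 = rng.random()                    # null coordinate
        p2 = 1.0 - Phi(rng.gauss(snr, 1.0))  # non-null, one-sided
        if min(p1, p2) <= c:
            sel += 1
            hits += max(p1, p2) <= u
    return hits / sel
```

For instance, `p0_formula(0.2, 0.1, 2.0)` and `p0_monte_carlo(0.2, 0.1, 2.0)` agree to Monte Carlo accuracy.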
\paragraph{Example 1.} Let $H_i$ be true, and let the test statistics for testing $H_{i1}$ and $H_{i2}$ be normal with a zero mean and with a mean in the interval $\left[0,5\right]$, respectively. We refer to the mean shift associated with $H_{i2}$ as the signal-to-noise ratio (SNR). Figure \ref{condpvalF} plots the conditional probability $P_0(0.05, c)$ as a function of the SNR associated with $H_{i2}$. Although with increasing SNR this probability converges to $0.05$ (in line with the asymptotic validity of ScreenMin), for small values of SNR and low selection thresholds it surpasses $0.05$. \qed \begin{figure} \caption{The conditional probability $P_0(0.05, c)$ as a function of the signal-to-noise ratio associated with $H_{i2}$, for different selection thresholds $c$.} \label{condpvalF} \end{figure} According to Example 1 and expression \eqref{condselection}, there are realizations of $\abs{S}$ for which $P_0(\alpha /\abs{S},c)$ is not bounded by $\alpha/\abs{S}$. This implies that ScreenMin does not provide finite sample FWER control {\it conditional} on $\abs{S}$; however, it could still guarantee FWER control {\it on average} across all $\abs{S}$. To investigate this possibility, we first derive an upper bound for the unconditional FWER for a given $c$. \begin{proposition}\label{ffwer} Let $V$ denote the number of true union hypotheses rejected by the ScreenMin procedure. For the familywise error rate, we then have \begin{equation}\label{exactfwer} \mathrm {Pr}(V\geq 1) \leq \mathrm{E}\left(\left[1- \left\{1-P_0\left(\frac{\alpha}{\abs{S}}, c\right)\right\}^{\abs S}\right]I\left[\abs S >0 \right]\right), \end{equation} with equality holding if and only if $\pi_1=1$. \end{proposition} The proof is in Section \ref{proofffwer}. We use this result in the following example to show that ScreenMin does not guarantee finite sample FWER control for arbitrary thresholds. \paragraph{Example 2.} Let $m=10$, and let all pairs $(H_{i1}, H_{i2})$ be of $(0,1)$ or $(1,0)$ type, so that $\pi_0=\pi_2=0$ and $\pi_1=1$.
Let the test statistics of all false $H_{ij}$ be normal with mean 2 and variance 1, and consider one-sided $p$-values. If the level at which FWER is to be controlled is $\alpha=0.05$, the default ScreenMin threshold for selection is $c=\alpha/m = 5\times10^{-3}$. The probability of selecting $H_i$ is then $P_{sel}=F(c)+c-cF(c)\approx 0.29$. In this case, the size of the selected set is a binomial random variable $\mathrm{Bi}(m, P_{sel})$. The conditional probability of rejecting $H_i$ when $\abs S >0$, i.e. $P_{0}(\alpha/\abs S, c)= \mathrm{Pr}\left(\overline{p}_i \leq \alpha/\abs S \mathrel{\Big|} I[\underline{p}_i \leq c], \abs{S}\right)$, can be evaluated for each value of $\abs S$ according to \eqref{condpval}. The conditional distribution of the number of false rejections $V$ given $\abs S$ is also binomial, with parameters $\abs S$ and $P_{0}(\alpha/\abs S, c)$. In this case, the exact FWER, obtained from \eqref{exactfwer}, is $\mathrm {Pr}(V\geq 1) =0.055 > \alpha$, so that the actual FWER of the ScreenMin procedure exceeds the nominal level $\alpha$. \qed \section{Oracle threshold for selection} As the previous section shows, not all thresholds for selection lead to finite sample FWER control. In this section, we investigate the threshold that maximizes the power to reject false union hypotheses while ensuring finite sample FWER control. \begin{proposition} \label{prejection}The probability of rejecting a false union hypothesis conditional on the size of the selected set $\abs S$ is \begin{equation} \mathrm {Pr}\left(\overline{p}_i \leq \frac{\alpha}{\abs S}, \underline{p}_i \leq c\right) = \left\{\begin{array}{cc} 2 F(c)F\left(\frac{\alpha}{\abs{S}}\right) - F^2(c) & \mbox{for } c\,\abs{S}\leq \alpha;\\ F^2\left(\frac{\alpha}{\abs{S}}\right) & \mbox{for } c\,\abs{S}>\alpha \end{array} \right. \label{condrejection} \end{equation} for $ \abs S >0 $, and 0 otherwise.
The unconditional probability of rejecting a false hypothesis is then obtained by taking the expectation over $\abs S$. \end{proposition} See Section \ref{a2} for the proof. Note that the distributions of $S$ and of $V$ depend on $c$, and in the following we emphasize this by writing $S(c)$ and $V(c)$. The threshold that maximizes the power while controlling FWER at $\alpha$ can then be found through the following constrained optimization problem: \begin{equation} \label{optthreshold} \max_{0< c \leq \alpha} \mathrm{E} \left[\mathrm {Pr}\left(\overline{p}_i \leq \frac{\alpha}{\abs {S(c)}},\, \underline{p}_i \leq c\right)I[\abs {S(c)} >0]\right] \mbox{ subject to } \mathrm{Pr}(V(c) \geq 1) \leq \alpha. \end{equation} In the above problem, both the objective function (the power) and the constraint (the FWER) are expected values of non-linear functions of the size of the selected set $\abs S$, whose distribution is itself non-trivial. To circumvent this issue, instead of \eqref{optthreshold} we consider an approximation obtained by exchanging the order of the function and the expected value: \begin{equation} \label{optthresholdapp} \max_{0< c \leq \alpha} \mathrm {Pr}\left(\overline{p}_i \leq \frac{\alpha}{\mathrm{E}\abs {S(c)}}, \,\underline{p}_i \leq c\right) \mbox{ subject to } \widehat{\mathrm{Pr}}(V(c) \geq 1) \leq \alpha, \end{equation} where $$ \widehat{\mathrm{Pr}}(V(c) \geq 1) = 1- \left\{1- P_{0}\left(\frac{\alpha}{\mathrm{E}\abs{S(c)}}, c\right)\right\}^{\mathrm{E}\abs {S(c)}}. $$ When $\pi_0, \pi_1, \pi_2$ and $F$ are known, \eqref{optthresholdapp} can be solved numerically. We denote its solution by $c^*$, and refer to it as the \textit{oracle} threshold in what follows.
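When $F$ and the proportions are known, quantities of this kind can indeed be evaluated numerically. As an illustration, the exact FWER of Example 2 can be reproduced directly from \eqref{exactfwer}: with $\pi_1=1$ the bound holds with equality, $\abs S$ is binomial, and the expectation is a finite sum. A Python sketch (the one-sided normal alternative with SNR $2$ is the setting of Example 2):

```python
from math import comb
from statistics import NormalDist

_nd = NormalDist()

def F(x, snr=2.0):
    # CDF of a one-sided p-value whose test statistic is N(snr, 1)
    return 1.0 - _nd.cdf(_nd.inv_cdf(1.0 - x) - snr)

def P0(u, c):
    # Conditional null p-value distribution of Lemma 1, (0,1)/(1,0) case
    denom = F(c) + c - c * F(c)
    if u <= c:
        return u * F(u) / denom
    return (c * F(u) + u * F(c) - c * F(c)) / denom

m, alpha = 10, 0.05
c = alpha / m                    # default ScreenMin selection threshold
p_sel = F(c) + c - c * F(c)      # probability of selecting a pair, approx 0.29

# |S| ~ Bi(m, p_sel); FWER = E[ (1 - (1 - P0(alpha/|S|, c))^|S|) I(|S| > 0) ]
fwer = sum(
    comb(m, s) * p_sel**s * (1 - p_sel)**(m - s)
    * (1 - (1 - P0(alpha / s, c))**s)
    for s in range(1, m + 1)
)
```

The sum evaluates to approximately $0.055$, exceeding $\alpha=0.05$ and matching the value reported in Example 2.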
\begin{figure} \caption{Power and FWER of the ScreenMin procedure as functions of the selection threshold $c$, for three values of the signal-to-noise ratio.} \label{powerfwervsthreshold} \end{figure} \paragraph{Example 3.} Consider an example featuring $m=100$ union hypotheses with proportions of the different hypothesis types given by $\pi_0=0.75$, $\pi_1= 0.25$ and $\pi_2=0.05$. Let the test statistics be normal with a zero mean for true null hypotheses and a mean shift (SNR) of $1.5, 2,$ or $3$ for false null hypotheses (variance equal to 1 in both cases). As before, we consider one-sided $p$-values. The plots in Figure \ref{powerfwervsthreshold} show power and FWER as functions of the selection threshold for three different values of SNR. We first note that for very small values of $c$, the actual FWER is above $\alpha$; for FWER to be controlled, $c$ needs to be large enough. In all three cases, the value of the threshold that maximizes the (unconstrained) power to reject a false union hypothesis is low and does not satisfy the FWER constraint (the dashed line is above the nominal FWER level set to $0.05$). The solution to problem \eqref{optthresholdapp} is then the smallest $c$ that satisfies the FWER constraint. \qed In the above example the power-maximizing selection threshold is the smallest threshold that guarantees FWER control. This can be shown to hold in general under mild conditions (see Section \ref{a3} for details). For a threshold to satisfy the FWER constraint in \eqref{optthresholdapp}, it needs to be at least as large as the solution to \begin{equation*} 1- \left\{1- P_0\left(\frac{\alpha}{\mathrm{E}\abs{S(c)}}, c\right)\right\}^{\mathrm{E}\abs {S(c)}} = \alpha. \end{equation*} If $m$ is large, we can consider a first order approximation of the left-hand side, leading to \begin{equation} \label{fwerroot} P_{0}\left(\frac{\alpha}{\mathrm{E}\abs{S(c)}},c\right) \approx \frac{\alpha}{\mathrm{E}\abs{S(c)}}.
\end{equation} The intuition behind \eqref{fwerroot} is straightforward: for a given $c$, the probability that a conditional null $p$-value is less than or equal to the ``average'' testing threshold, i.e. $\alpha/\mathrm{E}\abs{S(c)}$, should be exactly $\alpha/\mathrm{E}\abs{S(c)}$. Finally, when $m$ is large, the solution to \eqref{fwerroot} can be closely approximated by the solution to \begin{equation} \label{af1} c \,\mathrm{E}\abs{S(c)} = \alpha \end{equation} (see Section \ref{a3}), so that the constrained optimization problem in \eqref{optthresholdapp} can be replaced with the simpler problem of solving equation \eqref{af1}. \section{Adaptive threshold for selection} Solving equation \eqref{af1} is easier than solving the constrained optimization problem \eqref{optthresholdapp}; however, it still requires knowing $F$, $\pi_0$ and $\pi_1$. To overcome this issue, one can try to estimate these quantities from data, in an approach similar to that of \cite{lei2018adapt}, who employ an expectation-maximization algorithm. Another possibility is the following strategy. Instead of searching for a threshold that is optimal \textit{on average}, we can adopt a {\it conditional} approach and replace $\mathrm{E}\abs{S(c)}$ in \eqref{af1} with its observed value $\abs{S(c)}$. Since $\abs{S(c)}$ takes on integer values, $c \,\abs{S(c)}$ has jumps at $\underline{p}_1, \ldots, \underline{p}_m$ and might be different from $\alpha$ for all $c$. We therefore search for the largest $c \in (0,1)$ such that \begin{equation} \label{af} c \,\abs{S(c)}\leq \alpha. \end{equation} Let $c_{a}$ be the solution to \eqref{af}.
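In practice, $c_a$ can be found by a simple scan: since $\abs{S(c)}$ changes only at the observed minima, it suffices to examine candidate thresholds of the form $\alpha/k$. A Python sketch of this scan (the candidate grid and the function name are implementation choices of this sketch):

```python
def adaptive_threshold(min_pvalues, alpha):
    """Largest candidate c in {alpha, alpha/2, ..., alpha/m} satisfying
    c * |S(c)| <= alpha, where |S(c)| counts observed minima at most c."""
    m = len(min_pvalues)
    for k in range(1, m + 1):        # candidates alpha/1 > alpha/2 > ...
        c = alpha / k
        size = sum(p <= c for p in min_pvalues)
        if c * size <= alpha:        # equivalently: |S(alpha/k)| <= k
            return c                 # first feasible candidate is the largest
    return alpha / m                 # k = m is always feasible: |S| <= m
```

With observed minima $(0.001, 0.002, 0.04, 0.2)$ and $\alpha = 0.05$, the scan returns $c = 0.025$, selecting two hypotheses.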
This solution has been proposed in \cite{wang2016detecting} in the following form: $$ \gamma =\max\left\{c\in \left\{\frac{\alpha}{m}, \ldots,\frac{\alpha}{2}, \alpha \right\}: c\,\abs{S(c)} \leq \alpha\right\}. $$ Due to the finite grid, $\gamma$ need not coincide with $c_a$; however, the two lead to the same selected set $S$ and thus to equivalent procedures. Interestingly, \cite{wang2016detecting} search for a single threshold that is used for both selection and testing, and define it heuristically as the solution to the above maximization problem. When the two thresholds coincide, $P_0(c,c)$ is bounded by $c$ for all $c\in(0,1)$ (from \eqref{condpval}), and it is straightforward to show that FWER control is maintained also for the data-dependent threshold $c=\gamma$. Our results show that, in addition to providing non-asymptotic FWER control, this threshold is also nearly optimal in terms of power. \section{Simulations} We used simulations to assess the performance of different selection thresholds. Our data generating mechanism is as follows. We considered a small, $m=200$, and a large, $m=10000$, study. The proportion of false union hypotheses, $\pi_2$, was set to $0.05$ throughout. The proportion of $(0,1)$ or $(1,0)$ pairs, i.e. pairs with exactly one true hypothesis, $\pi_1$, varied over $\{0.1, 0.2, 0.3, 0.4\}$. Independent test statistics for false $H_{ij}$ were generated from ${\sf N}(\sqrt{n}\mu_j, 1)$, where $n$ is the sample size of the study, and $\mu_j>0$, $j=1,2$, is the effect size associated with false component hypotheses. Test statistics for true component hypotheses were standard normal. For $m=200$, the SNR, $\sqrt{n}\mu_j$, was either the same for $j=1,2$ and equal to 3, or different and equal to 3 and 6, respectively. For $m=10000$, the signal-to-noise ratio was set to 4, and in the case of unequal SNR it was set to 4 and 8. $P$-values were one-sided. FWER was controlled at $\alpha=0.05$.
We also considered settings under positive dependence: in that case the test statistics were generated from a multivariate normal distribution with a compound symmetry covariance matrix (correlation coefficient $\rho\in \{0.3,0.8\}$). The FWER procedures considered were 1) Oracle SM: the ScreenMin procedure with the oracle threshold $c^*$ (assuming $F,\pi_1,\pi_2$ to be known); 2) AdaFilter: the ScreenMin procedure with the adaptive threshold $\gamma$; 3) Default SM: the ScreenMin procedure with the default threshold $c=\alpha/m$; 4) $MCP_S$: the FWER procedure proposed in \cite{sampson2018fwer}; and 5) Bonf.: the standard Bonferroni procedure. When applying the $MCP_S$ procedure, we used the implementation in the \texttt{MultiMed} R package \citep{multimed} with the default threshold $\alpha_1=\alpha_2=\alpha/2$. We note that the threshold for this procedure can also be improved in an adaptive fashion by incorporating plug-in estimates of the proportions of true hypotheses among $H_{i1}$ and $H_{i2}$, $i=1,\ldots,m$, as presented in \cite{bogomolov2018assessing}. Implementation of the remaining procedures, along with the reproducible simulation setup, is available at \href{http://github.com/veradjordjilovic/screenMin}{github.com/veradjordjilovic/screenMin}. For each setting, we estimated FWER as the proportion of generated datasets in which at least one true union hypothesis was rejected. We estimated power as the proportion of rejected false union hypotheses among all false union hypotheses, averaged across 1000 generated datasets. Results under independence are shown in Figure \ref{nf}. All considered procedures successfully control FWER. When most hypothesis pairs are $(0,0)$ pairs and $\pi_1$ is low, all procedures are conservative, but with increasing $\pi_1$ their actual FWER approaches $\alpha$. The opposite trend is seen for the power: it is maximum for $\pi_1=0$ and decreases with increasing $\pi_1$.
When the signal-to-noise ratio is equal (columns 1 and 3), Oracle SM and AdaFilter outperform the rest in terms of power. Interestingly, the adaptive threshold (AdaFilter) performs as well as the oracle threshold, which uses the knowledge of $F,\pi_0$ and $\pi_1$. Under unequal signal-to-noise ratio, the oracle threshold is computed under a misspecified model (assuming the signal-to-noise ratio is equal for all false hypotheses), and in this case Default SM outperforms the other approaches. $MCP_S$ performs well in this setting and its power remains constant with increasing $\pi_1$. Results under positive dependence are shown in Figure \ref{ss2}. FWER control is maintained for all procedures. All procedures are more conservative in this setting than under independence, especially when the correlation is high, i.e. when $\rho=0.8$. With regard to power, most conclusions from the independence setting apply here as well. When the signal-to-noise ratio is equal, Oracle SM and AdaFilter outperform competing procedures. Under unequal signal-to-noise ratio, Default SM performs best, and $MCP_S$ performs well, with power constant with increasing $\pi_1$. In the high-dimensional setting ($m=10000$), the power is higher than under independence for $\pi_1=0$, but it decreases rapidly with increasing $\pi_1$ and drops to zero when $\pi_1=0.4$. \begin{figure} \caption{Estimated FWER and power of the considered procedures under independence.} \label{nf} \end{figure} \begin{figure} \caption{Estimated FWER and power of the considered procedures under positive dependence.} \label{ss2} \end{figure} \section{Application: Navy Colorectal Adenoma study} The Navy Colorectal Adenoma case-control study \citep{sinha1999well} investigated dietary risk factors of colorectal adenoma, a known precursor of colon cancer. A follow-up study investigated the role of metabolites as potential mediators of an established association between red meat consumption and colorectal adenoma. While red meat consumption has been shown to increase the risk of adenoma, it has been suggested that fish consumption might have a protective effect.
In this case, the exposure of interest is daily fish intake estimated from dietary questionnaires; the potential mediators are 149 circulating metabolites; and the outcome is case-control status. Data for 129 cases and 129 controls, including information on age, gender, smoking status, and body mass index, are available in the \texttt{MultiMed} R package \citep{multimed}. For each metabolite, we estimated a mediator and an outcome model. The mediator model is a normal linear model with the metabolite level as the outcome and daily fish intake as the predictor. The outcome model is logistic, with case-control status as the outcome and fish intake and metabolite level as predictors. Age, gender, smoking status, and body mass index were included as predictors in both models. To adjust for the case-control design, the mediator model was weighted (the prevalence of colorectal adenoma in the considered age group was taken to be $0.228$, as suggested in \cite{boca2013testing}). Screening with the default ScreenMin threshold $0.05/149= 3.3\times 10^{-4}$ leads to 13 hypotheses passing the selection. The adaptive threshold (AdaFilter) is higher ($2.2\times 10^{-3}$) and results in selecting 22 hypotheses. The testing threshold for the default ScreenMin is then $0.05/13=3.8\times 10^{-3}$. With the adaptive procedure, the testing threshold coincides with the screening threshold and is slightly lower ($2.2\times 10^{-3}$). Unadjusted $p$-values for the selected metabolites are shown in Table \ref{tab1}. The lowest maximum $p$-value among the selected hypotheses is $8.3\times10^{-3}$ (for DHA and 2-aminobutyrate), which is higher than both considered thresholds, meaning that we are unable to reject any hypothesis at the $\alpha=0.05$ level. Our results are in line with those reported in \cite{boca2013testing}, where DHA was found to be the most likely mediator, although not statistically significant (FWER adjusted $p$-value 0.06).
One potential explanation for the negative findings is illustrated in Figure \ref{ncs}, which shows a scatterplot of the $p$-values for the association of metabolites with fish intake ($p_1$) against the $p$-values for the association of metabolites with colorectal adenoma ($p_2$). While a considerable number of metabolites show evidence of association with adenoma (the cloud of points along the $y=0$ line), there seems to be little evidence of association with fish intake. In addition, the data provide limited evidence of a total effect of fish intake on the risk of adenoma (the $p$-value in the logistic regression model adjusted for age, gender, smoking status and body mass index is $0.07$). \begin{table}[ht] \centering \caption{\label{tab1}\small $P$-values of the 22 metabolites that passed the screening with the adaptive threshold. Metabolites are sorted in increasing order of $\underline{p}$. The top 13 metabolites passed the screening with the default ScreenMin threshold. The last column (Min.Ind) indicates whether the minimum, $\underline{p}$, is the $p$-value for the association of a metabolite with fish intake (1) or with colorectal adenoma (2).
} \begin{tabular}{rlrrc} \hline & Name & \multicolumn{1}{c}{$\underline{p}$} & \multicolumn{1}{c}{$\overline{p}$} & Min.Ind \\ \hline 1 & 2-hydroxybutyrate (AHB) & $1.2 \times 10^{-6}$ & $1.5 \times 10^{-2}$ & 2 \\ 2 & docosahexaenoate (DHA; 22:6n3) & $1.9 \times 10^{-6}$ & $8.3 \times 10^{-3}$ & 1 \\ 3 & 3-hydroxybutyrate (BHBA) & $7.8 \times 10^{-6}$ & $2.2 \times 10^{-1}$ & 2 \\ 4 & oleate (18:1n9) & $2.5 \times 10^{-5}$ & $7.3 \times 10^{-1}$ & 2 \\ 5 & glycerol & $3.9 \times 10^{-5}$ & $8.4 \times 10^{-1}$ & 2 \\ 6 & eicosenoate (20:1n9 or 11) & $5.9 \times 10^{-5}$ & $4.1 \times 10^{-1}$ & 2 \\ 7 & dihomo-linoleate (20:2n6) & $9.0 \times 10^{-5}$ & $2.6 \times 10^{-1}$ & 2 \\ 8 & 10-nonadecenoate (19:1n9) & $9.4 \times 10^{-5}$ & $5.4 \times 10^{-1}$ & 2 \\ 9 & creatine & $1.7 \times 10^{-4}$ & $9.2 \times 10^{-1}$ & 1 \\ 10 & palmitoleate (16:1n7) & $1.7 \times 10^{-4}$ & $6.3 \times 10^{-1}$ & 2 \\ 11 & 10-heptadecenoate (17:1n7) & $2.8 \times 10^{-4}$ & $7.1 \times 10^{-1}$ & 2 \\ 12 & myristoleate (14:1n5) & $2.9 \times 10^{-4}$ & $8.2 \times 10^{-1}$ & 2 \\ 13 & docosapentaenoate (n3 DPA; 22:5n3) & $3.0 \times 10^{-4}$ & $2.9 \times 10^{-1}$ & 2 \\ 14 & methyl palmitate (15 or 2) & $5.4 \times 10^{-4}$ & $1.8 \times 10^{-1}$ & 2 \\ 15 & N-acetyl-beta-alanine & $5.9 \times 10^{-4}$ & $1.3 \times 10^{-1}$ & 1 \\ 16 & linoleate (18:2n6) & $8.8 \times 10^{-4}$ & $6.7 \times 10^{-1}$ & 2 \\ 17 & 3-methyl-2-oxobutyrate & $8.9 \times 10^{-4}$ & $2.0 \times 10^{-1}$ & 2 \\ 18 & palmitate (16:0) & $9.9 \times 10^{-4}$ & $5.6 \times 10^{-1}$ & 2 \\ 19 & fumarate & $1.4 \times 10^{-3}$ & $5.0 \times 10^{-1}$ & 2 \\ 20 & 2-aminobutyrate & $1.4 \times 10^{-3}$ & $8.3 \times 10^{-3}$ & 2 \\ 21 & linolenate [alpha or gamma; (18:3n3 or 6)] & $1.6 \times 10^{-3}$ & $5.4 \times 10^{-1}$ & 2 \\ 22 & 10-undecenoate (11:1n1) & $1.8 \times 10^{-3}$ & $3.2 \times 10^{-1}$ & 2 \\ \hline \end{tabular} \end{table} \begin{figure}
\caption{Unadjusted $p$-values for the association of the metabolites with fish intake ($p_1$) plotted against the $p$-values for the association with colorectal adenoma ($p_2$).} \label{ncs} \end{figure} \section{Discussion}\label{discussion} In this article we have investigated the power and the non-asymptotic FWER of the ScreenMin procedure as functions of the selection threshold. We have derived an upper bound for the finite sample FWER that is exact when $\pi_1=1$. We have posed the problem of finding an optimal selection threshold as a constrained optimization problem in which the power to reject a false union hypothesis is maximized under a condition guaranteeing FWER control. We have called this threshold the oracle threshold, since it is derived under the assumption that the mechanism generating the $p$-values is fully known. We have shown that the solution to this optimization problem is the smallest threshold that satisfies the FWER condition, and that it is well approximated by the solution to the equation $c\,\mathrm{E}\abs{S(c)}=\alpha$. A data-dependent version of the oracle threshold is a special case of the AdaFilter threshold proposed by \cite{wang2016detecting} for $n=r=2$. Our simulation results suggest that the performance of this adaptive threshold is almost indistinguishable from that of the oracle threshold, and we recommend its use in practice. The ScreenMin procedure relies on the independence of $p$-values. While independence between columns of the $p$-value matrix is satisfied in the context of mediation analysis (under correct specification of the mediator and the outcome model), independence within columns of the $p$-value matrix is likely to be unrealistic in a number of practical contexts. Our simulation results show that FWER control is maintained under mild and strong positive dependence within columns, but we do not have theoretical guarantees. The challenge in relaxing the independence assumption lies in the fact that when $\overline{p}_i$ is not independent of $\sum_{j\neq i} G_j$, the equality regarding conditional $p$-values in \eqref{condselection} no longer necessarily holds.
Finding sufficient conditions that relax the assumption of independence while keeping the conditional distribution of $p$-values tractable is an open question. In this work we have focused on FWER, but it is tempting to consider combining screening based on $\underline{p}_i$ with an FDR procedure such as that of \cite{benjamini1995controlling}. Unfortunately, analyzing the non-asymptotic FDR of such two-step procedures is significantly more involved, since their adaptive testing threshold is a function of $\overline{p}_1, \ldots, \overline{p}_{m}$, as opposed to $\alpha/\abs{S}$ in the two-stage Bonferroni procedure presented here. To the best of our knowledge, the only method with provable finite sample FDR control in this context is the one proposed by \cite{bogomolov2018assessing}, and further investigation into the problem of optimizing the threshold for selection in this setting is warranted. \appendix \section{Technical details} \subsection{Proof of Lemma \ref{condpvallemma}}\label{a1} Consider first the distribution of the minimum $\underline{p}_i$ (to simplify notation, we omit the index $i$ in what follows): \begin{equation} \mathrm{Pr}(\underline{p} \leq c) = 1-\mathrm{Pr}(\underline{p}>c)= 1-\mathrm{Pr}(p_1>c, p_2>c)= 1-\prod_{j=1}^2\mathrm{Pr}(p_j>c). \label{minimum} \end{equation} The joint distribution of $\overline{p}$ and $\underline{p}$ is \begin{equation}\label{jointd} \mathrm{Pr}(\overline{p}\leq u, \underline{p}\leq c) = \mathrm{Pr}(\overline{p}\leq u)= \prod_{j=1}^2\mathrm{Pr}(p_j\leq u), \end{equation} for $0<u\leq c\leq1$, and \begin{eqnarray} \label{jointd2} \mathrm{Pr}(\overline{p}\leq u, \underline{p}\leq c) &=& \mathrm{Pr}(\overline{p}\leq c) + \mathrm{Pr}(\underline{p}\leq c, c<\overline{p}\leq u)\\ \nonumber & =& \prod_{j=1}^2\mathrm{Pr}(p_j\leq c) + \sum_{j=1}^2 \mathrm{Pr}(p_j \leq c)\left\{\mathrm{Pr}(p_{-j} \leq u) - \mathrm{Pr}(p_{-j} \leq c)\right\}, \end{eqnarray} for $0<c<u\leq1$, where $p_{-j}$ denotes $p_2$ for $j=1$, and $p_1$ for $j=2$.
The distribution of $\overline{p}$ conditional on the hypothesis $H_i$ being selected is $\mathrm{Pr}(\overline{p} \leq u \mid \underline{p} \leq c)$. If the hypothesis $H_i$ is true, then at least one of the $p$-values $p_1$ and $p_2$ is null and thus uniformly distributed. Without loss of generality, let $H_{i1}$ be true, so that $\mathrm{Pr}(p_1 \leq x) = x$. Let $F$ be the distribution function of $p_2$, so that $\mathrm{Pr}(p_2 \leq x) = F(x)$. Then from \eqref{minimum} \begin{equation*} \mathrm{Pr}(\underline{p} \leq c)= 1-(1-c)\left\{1-F(c)\right\}= c+F(c)-cF(c), \end{equation*} and similarly for the joint distribution from \eqref{jointd} and \eqref{jointd2} $$ \mathrm{Pr}(\overline{p}\leq u, \underline{p}\leq c)=\begin{cases} uF(u), \quad \mbox{for } 0<u\leq c\leq1, \\ uF(c)+cF(u)-cF(c), \quad \mbox{for } 0<c< u\leq1. \end{cases} $$ From this expression, \eqref{condpval} follows. To obtain the result for the $(0,0)$ pair, it is sufficient to replace $F(x)$ with $x$ in the above expressions. \subsection{Proof of Proposition \ref{ffwer}}\label{proofffwer} Let $I_0$ denote the index set of true union hypotheses, i.e. the index set of (0,0), (0,1) and (1,0) pairs. Consider the probability of making no false rejections conditional on the selection $G$. It is 1 if no hypothesis passes the selection, i.e.
if $\sum_{j=1}^m G_j=0$, and otherwise \begin{eqnarray} \mathrm{Pr}(V=0 \mid G) &=&\mathrm{Pr}\left(\bigcap\limits_{i: G_i=1 \land i \in I_0}I\left[\overline{p}_i \geq \frac{\alpha}{\sum_{j=1}^m G_j}\right] \mathrel{\Big|} G\right) \nonumber \label{eq1} \\ & \geq& \mathrm{Pr}\left(\bigcap\limits_{i: G_i=1}I\left[\overline{p}_i \geq \frac{\alpha}{\sum_{j=1}^m G_j}\right] \mathrel{\Big|}G\right)\\ & = & \prod_{i: G_i=1} \mathrm{Pr}\left(\overline{p}_i\geq \frac{\alpha}{\sum_{j=1}^m G_j} \mathrel{\Big|}G\right) \nonumber \\ & =& \prod_{i: G_i=1} \mathrm{Pr}\left(\overline{p}_i\geq \frac{\alpha}{1+\sum_{j\neq i} G_j} \mathrel{\Big|} G\right)\nonumber \\ & = & \prod_{i: G_i=1} \mathrm{Pr}\left(\overline{p}_i\geq \frac{\alpha}{1+\sum_{j\neq i} G_j} \mathrel{\Big|} I[\underline{p}_i \leq c], \sum_{j\neq i}G_j\right) \nonumber\\ &= & \prod_{i: G_i=1} \left\{1 - \mathrm{Pr}\left(\overline{p}_i \leq \frac{\alpha}{\abs S}\mathrel{\Big|} I[\underline{p}_i \leq c], \abs{S}\right) \right\} \nonumber\\ &\geq& \left\{1 - P_0\left(\frac{\alpha}{\abs S}, \, c\right) \right\}^{\abs S}. \label{eq2} \end{eqnarray} In \eqref{eq1}, equality holds when, for a given $G$, all selected hypotheses are true. This is true for all $G$ if and only if $I_0 = \left\{1,\ldots,m\right\}$. In \eqref{eq2}, equality holds if, in addition, all hypotheses are of $(0,1)$ or $(1,0)$ type. The conditional FWER can be found as $\mathrm {Pr}(V\geq 1 \mid G) = 1- \mathrm {Pr}(V = 0 \mid G)$. The expression \eqref{exactfwer} for the unconditional FWER is obtained by taking the expectation over $\abs S$. \subsection{Probability of rejecting a false union hypothesis}\label{a2} To reject $H_i$, two events need to occur: $\underline{p}_i$ needs to be below the selection threshold $c$, and $\overline{p}_i$ needs to be below the testing threshold $\alpha/\abs{S}$.
The probability of rejecting $H_i$ conditional on $\abs{S}$ is then \begin{eqnarray*} \mathrm{Pr}\left(\underline{p}_i \leq c,\,\,\, \overline{p}_i \leq \frac{\alpha}{\abs{S}}\right)&=& \mathrm{Pr} (\overline{p}_i \leq c) \,\,+\,\, \mathrm{Pr} \left(\underline{p}_i \leq c,\,\,\, c< \overline{p}_i \leq \frac{\alpha}{\abs{S}}\right)\\ &=& F^2(c) + 2F(c) \left[F\left(\frac{\alpha}{\abs{S}}\right) - F(c)\right], \end{eqnarray*} if $\alpha/\abs{S} \geq c$, and $$ \mathrm {Pr}\left(\underline{p}_i \leq c,\,\,\, \overline{p}_i \leq \frac{\alpha}{\abs{S}}\right)=\mathrm{ Pr}\left(\overline{p}_i \leq \frac{\alpha}{\abs{S}} \right) = F^2\left(\frac{\alpha}{\abs{S}}\right), $$ if $\alpha/\abs{S} < c$. \subsection{Oracle threshold and FWER constraint}\label{a3} We show that the threshold that maximizes the power under the FWER constraint is the smallest threshold that satisfies the FWER constraint. First, we show that $c$ satisfies the FWER constraint if it belongs to an interval $(c^*,1)$, where $c^*$ is defined below. We then show that $c^*$ is well approximated by $\bar{c}$, where $\bar{c}$ solves the equation $c= \alpha/\mathrm{E}\abs{S(c)}$. But, according to \eqref{condrejection}, the power to reject a false union hypothesis is decreasing in $c$ for $c > \bar{c}$, so that the threshold that maximizes the constrained power is approximately $c^*\approx\bar{c}$. A first order approximation of the FWER constraint in \eqref{optthresholdapp} gives \begin{equation} \mathrm{E}\abs{S(c)}\,P_0\left(\frac{\alpha}{\mathrm{E}\abs{S(c)}}, c\right) \leq \alpha. \label{foa} \end{equation} It is straightforward to check that when $c$ is close to zero, \eqref{foa} does not hold, while for $c=\bar{c}$, where $\bar{c}$ solves $c= \alpha/\mathrm{E}\abs{S(c)}$, the constraint is satisfied.
Namely, for $\bar{c}$ the selection threshold and the testing threshold coincide, and according to \eqref{condpval} we have \begin{equation} \label{equalthreshold} P_{0}\left(c,c\right) = c\,\, \frac{F(c)}{F(c)+c\left\{1-F(c)\right\}}\leq c \end{equation} for all $c\in (0,1)$, with equality holding if and only if $F(c)=1$. Given the continuity of $P_0$, this implies that there is a value $c^*$ in $(0,\bar{c}) $ such that the constraint holds with equality. We now show that $c^*$ will be very close to $\bar{c}$. Denote $u_c= \alpha/\mathrm{E}\abs{S(c)}$. The equation $P_0(u_c, c) = u_c$ simplifies, according to \eqref{condpval}, to $F(u_c)-F(c)= u_c\left\{1-F(c)\right\}$, since $c<u_c$. When $m$ is large, the interval $(0,\bar{c})$ will be very small, and if we assume that $F$ is locally linear in the neighbourhood of $c$, we can substitute $F(u_c)\approx F(c) +f(c)(u_c-c)$, where $f(\cdot)$ is the density associated with $F$, to obtain $$ u_c \approx c\,\,\frac{f(c)}{f(c)+F(c)-1}. $$ Since the density is strictly decreasing, for small values of $c$ we have $\abs{f(c)} \gg \abs{F(c)-1}$, so that the above equation becomes $$ u_c \approx c \quad \mbox{i.e.} \quad \alpha/\mathrm{E}\abs{S(c)} \approx c. $$ Therefore, the smallest threshold that satisfies the FWER constraint can be approximated by $\bar{c}$. Among all $c \in (\bar{c}, 1)$, this threshold maximizes the power, since for $c\geq\bar{c}$ the power to reject a false union hypothesis is decreasing according to \eqref{condrejection}. \end{document}
\begin{document} \newcommand{\ignore}[1]{} \newcommand{\ignorex}[1]{#1} \newcommand{\wtilde}[1]{\widetilde{#1}} \newcommand{\mq}[1]{\mbox{#1}\quad} \newcommand{\bs}[1]{\boldsymbol{#1}} \newcommand{\qmq}[1]{\quad\mbox{#1}\quad} \newcommand{\qm}[1]{\quad\mbox{#1}} \newcommand{\Bvert}{\left\vert\vphantom{\frac{1}{1}}\right.} \newtheorem{theorem}{Theorem}[section] \newtheorem{problem}[theorem]{Problem} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{case}{Case}[theorem] \newtheorem{condition}[theorem]{Condition} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{note}[theorem]{Note} \newtheorem{notes}[theorem]{Notes} \newtheorem{observation}[theorem]{Observation} \newtheorem{readingex}[theorem]{Reading exercise}
\frenchspacing \tikzstyle{level 1}=[level distance=2.75cm, sibling distance=5.65cm] \tikzstyle{level 2}=[level distance=3cm, sibling distance=2.75cm] \tikzstyle{level 3}=[level distance=3.9cm, sibling distance=1.5cm] \tikzstyle{bag} = [text width=10em, text centered] \tikzstyle{end} = [circle, minimum width=3pt,fill, inner sep=0pt] \title{Hamilton Cycles on Dense Regular Digraphs and Oriented Graphs} \author{Allan Lo\footnote{School of Mathematics, University of Birmingham, United Kingdom, Email: [email protected]. A.~Lo was partially supported by EPSRC, grant no. EP/V002279/1 and EP/V048287/1.} \hspace{0.2in} Viresh Patel\footnote{Korteweg de Vries Instituut voor Wiskunde, Universiteit van Amsterdam, The Netherlands, Email: [email protected]. V.~Patel was supported by the Netherlands Organisation for Scientific Research (NWO) through the Gravitation Programme Networks (024.002.003).} \hspace{0.2in} Mehmet Akif Yıldız\footnote{Korteweg de Vries Instituut voor Wiskunde, Universiteit van Amsterdam, The Netherlands, Email: [email protected]. M.A.~Yıldız was supported by a Marie Skłodowska-Curie Action from the EC (COFUND grant no. 945045) and by the NWO Gravitation project NETWORKS (grant no. 024.002.003).}} \vspace{0.2in} \maketitle \begin{abstract} \noindent We prove that for every $\varepsilon > 0$ there exists $n_0=n_0(\varepsilon)$ such that every regular oriented graph on $n > n_0$ vertices and degree at least $(1/4 + \varepsilon)n$ has a Hamilton cycle. This establishes an approximate version of a conjecture of Jackson from 1981. We also establish a result related to a conjecture of K{\"u}hn and Osthus about the Hamiltonicity of regular directed graphs with suitable degree and connectivity conditions. \textbf{Keywords:} Hamilton cycle, robust expander, regular, digraph, oriented graph. \end{abstract} \section{Introduction} \noindent A Hamilton cycle in a (directed) graph is a (directed) cycle that visits every vertex of the (directed) graph.
Hamilton cycles are one of the most intensely studied structures in graph theory. There is a rich body of results that establish (best-possible) conditions guaranteeing the existence of Hamilton cycles in (directed) graphs. Degree conditions that guarantee Hamiltonicity have been of particular interest, as well as the trade-off between degree conditions and other conditions (e.g. various types of connectivity conditions). \\ \noindent In this paper, we are concerned with directed graphs (or digraphs for short) and oriented graphs. Recall that a digraph can have up to two directed edges between any pair of vertices (one in each direction), while an oriented graph can have only one.\\ \noindent The seminal result in the area is Dirac's theorem~\cite{Dirac}, which states that every graph on $n\geq 3$ vertices with minimum degree at least $n/2$ contains a Hamilton cycle. The disjoint union of two cliques of equal size or the slightly imbalanced complete bipartite graph shows that the bound is best possible. Ghouila-Houri~\cite{GhouilaHouri} proved the corresponding result for directed graphs, which states that every digraph on $n$ vertices with minimum semi-degree at least $n/2$ contains a Hamilton cycle. The bound here is again tight, as can be seen by doubling the edges in the extremal examples for the graph setting. The proofs of both of these results are relatively short, while the corresponding result for oriented graphs, due to Keevash, K{\"u}hn, and Osthus~\cite{OrientedExactMinDegree} and given below, is more difficult and uses the Regularity Lemma together with a stability method. Again the degree threshold is tight as demonstrated by examples given in~\cite{OrientedExactMinDegree}. \begin{theorem} \label{thm:KKO} There exists an integer $n_0$ such that any oriented graph $G$ on $n\geq n_0$ vertices with minimum semi-degree $\delta^0(G) \geq \lceil (3n - 4)/8 \rceil$ contains a Hamilton cycle.
\end{theorem} \noindent Here we consider the question of minimum degree thresholds for Hamiltonicity in \emph{regular} (di)graphs possibly with some mild connectivity constraints. In this direction, for the undirected setting, Bollob\'as and H{\"a}ggkvist (see \cite{JacksonDirected}) independently conjectured that a $t$-connected regular graph on $n$ vertices with degree at least $n/(t+1)$ is Hamiltonian. That is, the threshold for Hamiltonicity is significantly reduced compared to Dirac's Theorem if we consider regular graphs (with some relatively mild connectivity conditions). Note that connectivity conditions alone, without regularity, are not enough to guarantee Hamiltonicity due to the slightly imbalanced complete bipartite graph. Jackson~\cite{JacksonDirected} proved the conjecture for $t=2$, while Jung~\cite{Jung} and Jackson, Li, and Zhu~\cite{JacksonLiZhu} gave examples showing the conjecture does not hold for $t \geq 4$. Finally, K\"uhn, Lo, Osthus, and Staden~\cite{IntoTwoBipartiteExpander, kuhn2016solution} resolved the conjecture by proving the remaining case $t=3$ asymptotically. Results in \cite{PatStr}, which use ideas from~\cite{IntoTwoBipartiteExpander, kuhn2016solution}, also show that the algorithmic Hamilton cycle problem behaves quite differently for dense regular graphs compared to dense graphs. \\ \noindent Jackson conjectured in 1981 that, for oriented graphs, regularity alone is enough to reduce the semi-degree threshold for Hamiltonicity from $\lceil (3n-4)/8 \rceil$ in Theorem~\ref{thm:KKO} to $n/4$. \begin{conjecture}[\cite{JacksonConjecture}]\label{conj:JacksonMain} For each $d>2$, every $d$-regular oriented graph on $n\leq 4d+1$ vertices has a Hamilton cycle. \end{conjecture} \noindent Note that the disjoint union of two regular tournaments shows that the degree bound above cannot be improved. Furthermore, one cannot reduce the degree bound even if the oriented graph is strongly $2$-connected; see Proposition~\ref{thm:counterexample2connected}.
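The extremal example just mentioned can be made concrete. The following sketch (an illustration, not from the paper) builds two disjoint rotational regular tournaments on $2d+1$ vertices each and checks that their union is a $d$-regular oriented graph on $4d+2$ vertices; being disconnected, it has no Hamilton cycle.

```python
# Illustration (not part of the paper): the disjoint union of two regular
# tournaments on 2d+1 vertices is a d-regular oriented graph on 4d+2
# vertices with no Hamilton cycle, since it is disconnected.

def rotational_tournament(d):
    """Regular tournament on 2d+1 vertices: i -> i+1, ..., i+d (mod 2d+1)."""
    n = 2 * d + 1
    return {(i, (i + j) % n) for i in range(n) for j in range(1, d + 1)}

def disjoint_union(d):
    n = 2 * d + 1
    t1 = rotational_tournament(d)
    t2 = {(a + n, b + n) for (a, b) in rotational_tournament(d)}
    return t1 | t2

d = 5
edges = disjoint_union(d)
vertices = range(4 * d + 2)
outdeg = {v: sum(1 for (a, _) in edges if a == v) for v in vertices}
indeg = {v: sum(1 for (_, b) in edges if b == v) for v in vertices}
assert all(outdeg[v] == d and indeg[v] == d for v in vertices)   # d-regular
# oriented graph: no pair of antiparallel edges
assert all((b, a) not in edges for (a, b) in edges)
# no edge crosses between the two halves, hence no Hamilton cycle
assert not any(a < 2 * d + 1 <= b or b < 2 * d + 1 <= a for (a, b) in edges)
```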
Our main result is an approximate version of Jackson's conjecture. \begin{theorem}\label{thm:MAIN-1-4-CASE} For every $\varepsilon>0$, there exists an integer $n_0(\varepsilon)$ such that every $d$-regular oriented graph on $n\geq n_0(\varepsilon)$ vertices with $d\geq (1/4+\varepsilon)n$ is Hamiltonian. \end{theorem} \noindent Recall that Jackson~\cite{JacksonDirected} proved the $t=2$ case of the Bollob{\'a}s--H{\"a}ggkvist conjecture, namely that every $2$-connected regular graph of degree at least $n/3$ has a Hamilton cycle. K{\"u}hn and Osthus gave a corresponding conjecture for digraphs. \begin{conjecture}[\cite{SurveyDirected}]\label{conj:directedregularHamilton} Every strongly $2$-connected $d$-regular digraph on $n$ vertices with $d\geq n/3$ contains a Hamilton cycle. \end{conjecture} \noindent We give a counterexample to this conjecture (see Proposition~\ref{thm:counterexample2connected}), but we show that a slight modification of the conjecture is true. In particular, $2$-connectivity is replaced with the following slightly different condition. We call a digraph $G$ on at least four vertices \emph{strongly well-connected} if for any partition $(A,B)$ of $V(G)$ with $|A|,|B|\geq 2$, there exist two non-incident edges $ab$ and $cd$ such that $a,d\in A$ and $b,c\in B$. Note that the property of being strongly well-connected and that of being strongly $2$-connected are incomparable\footnote{A directed cycle (on at least $4$ vertices) is strongly well-connected but not strongly $2$-connected; see Proposition~\ref{thm:counterexample2connected} for the converse example}; on the other hand being strongly well-connected is stronger than being strongly connected but weaker than being strongly $3$-connected. Our second result is an approximate version of a slightly modified statement of Conjecture~\ref{conj:directedregularHamilton}. 
\begin{theorem}\label{thm:MAIN-1-3-CASE} For every $\varepsilon > 0$, there exists an integer $n_0(\varepsilon)$ such that every strongly well-connected $d$-regular digraph on $n\geq n_0(\varepsilon)$ vertices with $d\geq (1/3+\varepsilon)n$ is Hamiltonian. \end{theorem} \noindent Note that K{\"u}hn and Osthus~\cite{SurveyDirected} give an example that shows the degree bound in Conjecture~\ref{conj:directedregularHamilton} cannot be reduced, i.e.\ an example of a non-Hamiltonian strongly 2-connected regular digraph on $n$ vertices with degree close to $n/3$. The same example is easily seen to be strongly well-connected, showing that we cannot take the degree to be smaller than $n/3$ in Theorem~\ref{thm:MAIN-1-3-CASE}. \\ \noindent Our methods are based on the robust expanders technique of K{\"u}hn and Osthus, which has been used to resolve a number of conjectures (see \cite{On-Kelly-Conjecture}, \cite{NEWW}). Any dense digraph that is a robust expander automatically contains a Hamilton cycle. An important part of this paper is to gain an understanding of dense directed graphs that are not robust expanders. In particular, we are able to construct vertex partitions of such digraphs with useful expansion properties. Although we do not show it directly, such partitions almost immediately allow us to construct very long cycles in the required settings (that is, cycles that pass through all but a small proportion of the vertices). The remainder of the paper is devoted to giving delicate balancing arguments to obtain full Hamilton cycles. \\ \noindent The paper is organised as follows. In the next subsection, we give the counterexample to Conjecture~\ref{conj:directedregularHamilton}. In Section~\ref{ch:notations} we give notation, preliminaries and a sketch proof. In Section~\ref{ch:partition} we develop the necessary language of partitions and establish some of their basic properties.
Section~\ref{ch:balance} is devoted mainly to giving the balancing arguments that will allow us to construct full Hamilton cycles. Section~\ref{ch:partition-to-hamilton} shows how to combine earlier results to show that dense directed and oriented graphs with certain vertex partitions contain Hamilton cycles. In Section~\ref{ch:mainproofs} we prove Theorems~\ref{thm:MAIN-1-3-CASE} and~\ref{thm:MAIN-1-4-CASE}. We pose some open problems in Section~\ref{ch:conclusion}. \subsection{Counterexample to Conjecture~\ref{conj:directedregularHamilton}}\label{ch:counter-example} \begin{proposition}\label{thm:counterexample2connected} For $n\geq 3$, there exists a strongly $2$-connected $(n-1)$-regular digraph on $2n$ vertices with no Hamilton cycle. For $n \ge 3$, there exists a strongly $2$-connected $(n-1)$-regular oriented graph on $4n+2$ vertices with no Hamilton cycle. \end{proposition} \begin{figure} \caption{A strongly 2-connected $(n-1)$-regular digraph $G$ on $2n$ vertices} \label{fig:counter-example} \end{figure} \begin{proof} Let $G'$ be the digraph that is the disjoint union of two complete digraphs $G_1$ and $G_2$ each of size $n$. Let $a,b \in V(G_1)$ and $c,d \in V(G_2)$ with $a \neq b$ and $c \neq d$. Let $G$ be the digraph obtained from $G'$ by deleting the edges $ab$, $ba$, $cd$, and $dc$, and adding the edges $ac$, $cb$, $bd$, $da$ (see Figure~\ref{fig:counter-example}). It is clear that $G$ is a strongly $2$-connected $(n-1)$-regular digraph on $2n$ vertices. \\ \noindent It is easy to see that $G$ has no Hamilton cycle. Indeed, any Hamilton cycle $H$ of $G$ must use at least one edge inside one of the cliques (since $n\geq 3$). Let $P$ be a maximal path of $H$ inside one of the cliques (say $G_1$) with at least one edge. Let $e$ and $e'$ be the edges of $H$ that extend $P$ into $G_2$. Then $e$ and $e'$ must be non-incident edges that cross between $G_1$ and $G_2$ in opposite directions. But $G$ does not have such a pair of edges.
\\ \noindent The oriented graph is constructed similarly. It is easy to construct a regular tournament that contains two cycles that together span the tournament and which have exactly two vertices in common. Let $G'$ be the disjoint union of two such regular tournaments $G_1$ and $G_2$ each of size $2n+1$. Let $C_1$ and $C_1'$ be the two directed cycles in $G_1$ such that $V(C_1) \cup V(C_1') = V(G_1)$ and $V(C_1) \cap V(C_1') = \{a,b\}$. Similarly, let $C_2$ and $C_2'$ be two directed cycles in $G_2$ such that $V(C_2) \cup V(C_2') = V(G_2)$ and $V(C_2) \cap V(C_2') = \{c,d\}$. Let $G$ be obtained from $G'$ by deleting the edges of $C_1 \cup C_1'\cup C_2 \cup C_2'$, and adding the edges $ac$, $cb$, $bd$, $da$. It is easy to check that $G$ is a strongly $2$-connected $(n-1)$-regular oriented graph on $4n+2$ vertices. Note that $G$ is not Hamiltonian by an argument similar to the one above. \end{proof} \section{Notation and preliminaries}\label{ch:notations} \noindent Throughout the paper, we use standard graph theory notation and terminology. For a digraph $G$, we denote its vertex set by $V(G)$ and its edge set by $E(G)$. For $a,b \in V(G)$, we write $ab$ for the directed edge from $a$ to $b$. We sometimes write $|G|$ for the number of vertices in $G$ and $e(G)$ for the number of edges in $G$. We write $H \subseteq G$ to mean $H$ is a subdigraph of $G$, i.e.\ $V(H) \subseteq V(G)$ and $E(H) \subseteq E(G)$. We sometimes think of $F \subseteq E(G)$ as a subdigraph of $G$ with vertex set consisting of those vertices incident to edges in $F$ and edge set $F$. For $S \subseteq V(G)$, we write $G[S]$ for the subdigraph of $G$ induced by $S$ and $G-S$ for the digraph $G[V(G) \setminus S]$. For $A,B \subseteq V(G)$ not necessarily disjoint, we define $E_G(A,B):= \{ab \in E(G): a \in A, \; b \in B \}$ and we write $G[A,B]$ for the graph with vertex set $A \cup B$ and edge set $E_G(A,B)$. We write $e_G(A,B) := |E_G(A,B)|$. We often drop subscripts if these are clear from context.
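Returning briefly to Proposition~\ref{thm:counterexample2connected}, the digraph construction in its proof can be verified exhaustively for small $n$. The sketch below (an illustration, not part of the proof) rebuilds $G$ and brute-forces all candidate Hamilton cycles.

```python
# Illustration (not part of the proof): verify by brute force that the
# digraph G from Proposition "counterexample2connected" has no Hamilton
# cycle for small n, and that it is (n-1)-regular.
from itertools import permutations

def build_G(n):
    """Two complete digraphs on {0..n-1} and {n..2n-1}; delete ab, ba, cd,
    dc; add ac, cb, bd, da, where a,b = 0,1 and c,d = n, n+1."""
    V1, V2 = range(n), range(n, 2 * n)
    E = {(x, y) for x in V1 for y in V1 if x != y}
    E |= {(x, y) for x in V2 for y in V2 if x != y}
    a, b, c, d = 0, 1, n, n + 1
    E -= {(a, b), (b, a), (c, d), (d, c)}
    E |= {(a, c), (c, b), (b, d), (d, a)}
    return E

def has_hamilton_cycle(n_vertices, E):
    """Exhaustively test all cyclic vertex orderings starting at vertex 0."""
    for perm in permutations(range(1, n_vertices)):
        cycle = (0,) + perm
        if all((cycle[i], cycle[(i + 1) % n_vertices]) in E
               for i in range(n_vertices)):
            return True
    return False

for n in (3, 4):
    E = build_G(n)
    # (n-1)-regularity of G
    assert all(sum(1 for e in E if e[0] == v) == n - 1 for v in range(2 * n))
    assert all(sum(1 for e in E if e[1] == v) == n - 1 for v in range(2 * n))
    assert not has_hamilton_cycle(2 * n, E)
```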
For two digraphs $H_1$ and $H_2$, the union $H_1 \cup H_2$ is the digraph with vertex set $V(H_1) \cup V(H_2)$ and edge set $E(H_1) \cup E(H_2)$. We say an undirected graph $G$ is bipartite with bipartition $(A,B)$ if $V(G)=A\cup B$ and $E(G) \subseteq \{ab: a \in A, \; b \in B \}$.\\ \noindent For a digraph $G$ and $v\in V(G)$, we denote the set of outneighbours and inneighbours of $v$ by $N_G^{+}(v)$ and $N_G^{-}(v)$ respectively, and we write $d_G^{+}(v)=|N_G^{+}(v)|$ and $d_G^{-}(v)=|N_G^{-}(v)|$ for the out- and indegree of $v$ respectively. For $S\subseteq V(G)$ we write $d_S^{-}(v):= |N_G^-(v) \cap S|$ and $d_S^{+}(v):= |N_G^+(v) \cap S|$. We write $\delta^+(G)$ and $\delta^-(G)$ respectively for the minimum out- and indegree of $G$, and $\delta^{0}(G):=\min\{\delta^{+}(G),\delta^{-}(G)\}$ for the minimum semi-degree. Similarly, the maximum semi-degree $\Delta^{0}(G)$ of $G$ is defined by $\Delta^{0}(G):=\max\{\Delta^{+}(G),\Delta^{-}(G)\}$ where $\Delta^{+}(G)$ and $\Delta^{-}(G)$ denote the maximum out- and maximum indegree of $G$ respectively. A digraph is called $d$-regular if each vertex has exactly $d$ outneighbours and $d$ inneighbours. For undirected graphs $G$, we write $\Delta(G)$ and $\delta(G)$ respectively for the maximum degree and the minimum degree. A graph is called $d$-regular if each vertex has exactly $d$ neighbours. \\ \noindent A directed path $Q$ in a digraph $G$ is a subdigraph of $G$ where $V(Q) = \{v_1, \ldots, v_k \}$ for some $k \in \mathbb{N}$ and where $E(Q) = \{v_1v_2, v_2v_3, \ldots, v_{k-1}v_k \}$. A directed cycle in $G$ is exactly the same except that it also includes the edge $v_kv_1$. A set of vertex-disjoint directed paths $\mathcal{Q}=\{Q_1,Q_2,\ldots\}$ in $G$ is called a \emph{path system} in $G$. We interchangeably think of $\mathcal{Q}$ as a set of vertex-disjoint directed paths in $G$ and as a subgraph of $G$ with vertex set $V(\mathcal{Q}) = \cup_{i} V(Q_i)$ and edge set $E(\mathcal{Q}) = \cup_{i} E(Q_i)$. 
We sometimes call this subgraph the graph induced by $\mathcal{Q}$. A matching $M$ in a digraph (or undirected graph) $G$ is a set of edges $M \subseteq E(G)$ such that every vertex of $G$ is incident to at most one edge in $M$. We say a matching $M$ \emph{covers} $S \subseteq V(G)$ if every vertex in $S$ is incident to some edge in $M$. \\ \noindent For two sets $A$ and $B$, the \textit{symmetric difference} of $A$ and $B$ is the set $A\triangle B := (A\setminus B) \cup (B\setminus A)$. For $k\in\mathbb{N}$, we sometimes denote the set $\{1,2,\ldots,k\}$ by $[k]$. For $x, y \in (0,1]$, we often use the notation $x \ll y$ to mean that $x$ is sufficiently small as a function of $y$ i.e.\ $x \leq f(y)$ for some implicitly given non-decreasing function $f:(0,1] \rightarrow (0,1]$. \\ \subsection{Tools} \noindent We will require Vizing's theorem for multigraphs in the proof of Lemma~\ref{lem:MATHCING_LEMMA_I}. Let $H$ be an (undirected) multigraph (without loops). The multiplicity $\mu(H)$ of $H$ is the maximum number of edges between any two vertices of $H$, and, as usual, $\Delta(H)$ is the maximum degree of $H$. A proper $k$-edge-colouring of $H$ is an assignment of $k$ colours to the edges of $H$ such that incident edges receive different colours. \begin{theorem}[\cite{Vizing}; see e.g.\ \cite{Diestel}] \label{thm:Vizing} Any multigraph $H$ has a proper $k$-edge colouring with $k = \Delta(H) + \mu(H)$ colours. In particular, by taking the largest colour class, there is a matching in $H$ of size at least $e(H)/(\Delta(H)+\mu(H))$. \end{theorem} \noindent In Lemma~\ref{lem:MATCHING_LEMMA_II}, we will require a Chernoff inequality for bounding the tail probabilities of binomial random variables. For a random variable $X$, write $\mathbb{E}[X]$ for the expectation of $X$.
We write $X \sim {\rm Bin}(n,p)$ to mean that $X$ is distributed as a binomial random variable with parameters $n$ and $p$, that is, a random variable that counts the number of heads in $n$ independent coin flips where the probability of heads is $p$. In that case we have $\mathbb{E}[X]=np$ and the following bound. \begin{theorem}[see~\cite{Chernoff}] \label{thm:Chernoff} Suppose $X_1,X_2,\ldots,X_n$ are independent random variables taking values in $\{0,1\}$ and $X = X_1 + \cdots + X_n$. Then, for all $0\leq \delta\leq 1$, we have \begin{equation*} \mathbb{P}(X\leq (1-\delta)\mathbb{E}(X))\leq \exp{\left(-\delta^2 \mathbb{E}(X)/2\right)}. \end{equation*} In particular, this holds for $X \sim {\rm Bin}(n,p)$. \end{theorem} \subsection{Robust expanders} \noindent In this subsection we define robust expanders and discuss some of their useful properties. \begin{definition}\label{def:RobustExpander} Fix a digraph $G$ on $n$ vertices and parameters $0<\nu < \tau <1$. For $S\subseteq V(G)$, the \emph{robust $\nu$-outneighbourhood of $S$} is the set \emph{$\text{RN}_{\nu}^{+}(S) := \{v \in V(G): |N^-_G(v) \cap S| \geq \nu n \}$}. We say $G$ is a \emph{robust $(\nu,\tau)$-outexpander} if \emph{$|\text{RN}_{\nu}^{+}(S)|\geq |S|+\nu n$} for all subsets $S\subseteq V(G)$ satisfying $\tau n\leq |S|\leq (1-\tau)n$. \end{definition} \noindent If the constant $\nu$ used is clear from context, we write $\text{RN}^{+}(S)$. The notion of robust expansion has been key to proving numerous conjectures about Hamilton cycles. One of the starting points is the following seminal result, which states that robust expanders satisfying a certain minimum degree condition are Hamiltonian. \begin{theorem}[\cite{RobustExpanderHamilton}; see also \cite{LoPatel}] \label{thm:RobustExpanderImpliesHamilton} Let $1/n \ll \nu\leq \tau\ll\gamma<1$. If $G$ is an $n$-vertex digraph with $\delta^{0}(G)\geq \gamma n$ such that $G$ is a robust $(\nu,\tau)$-outexpander, then $G$ contains a Hamilton cycle.
\end{theorem} \noindent The following straightforward lemma shows that robust expansion is a ``robust'' property, i.e.\ if $G$ is a robust $(\nu,\tau)$-outexpander, then adding or deleting a small number of vertices results in another robust outexpander with slightly worse parameters. \begin{lemma}[\cite{IntoTwoBipartiteExpander}] \label{lem:SYMMETRIC-DIFFERENCE} Let $0<\nu\ll\tau\ll 1$. Suppose that $G$ is a digraph and $U,U'\subseteq V(G)$ are such that $G[U]$ is a robust $(\nu,\tau)$-outexpander and $|U\triangle U'|\leq \nu |U|/2$. Then, $G[U']$ is a robust $(\nu/2,2\tau)$-outexpander. \end{lemma} \noindent By taking $(U,U')=(V(G)-S,V(G))$, Lemma~\ref{lem:SYMMETRIC-DIFFERENCE} has the following corollary. \begin{corollary}\label{cor:AddingSmallPartIntoRobustExpander} Let $1/n \ll \nu \ll \tau \ll 1$. If $G$ is an $n$-vertex digraph and $S\subset V(G)$ such that $|S|\leq \nu |G|/2$ and $G-S$ is a robust $(\nu,\tau)$-outexpander, then $G$ is a robust $(\nu/2,2\tau)$-outexpander. \end{corollary} \noindent The next lemma shows that any digraph $G$ with minimum semi-degree slightly higher than $|G|/2$ is a robust outexpander. \begin{lemma}[\cite{On-Kelly-Conjecture}]\label{lem:HigherThanHalfGivesRobustExpander} Let $0<\nu\leq \tau\leq \varepsilon<1$ be constants such that $\varepsilon\geq 2\nu/\tau$. Let $G$ be a digraph on $n$ vertices with $\delta^{0}(G)\geq (1/2+\varepsilon)n$. Then, $G$ is a robust $(\nu,\tau)$-outexpander. \end{lemma} \noindent In fact we can relax the degree condition in Lemma~\ref{lem:HigherThanHalfGivesRobustExpander} and allow a small number of vertices to violate the minimum degree condition. \begin{corollary}\label{cor:Higher-Than-Half-For-All-But-Few} Let $1/n<\nu,\rho\ll\tau\ll\varepsilon\ll\alpha <1$ be constants. If $G$ is an $n$-vertex digraph such that $d^{+}(v), d^{-}(v)\geq (1/2+\varepsilon)n$ for all but at most $\rho n$ vertices $v \in V(G)$, then $G$ is a robust $(\nu,\tau)$-outexpander.
In particular, if $\delta^{0}(G)\geq \alpha n$, then $G$ contains a Hamilton cycle. \end{corollary} \begin{proof} Fix $\nu'$ and $\tau'$ such that $\nu,\rho\ll \nu'\ll \tau'\ll\tau$. Let $W$ be the set of vertices $v$ in $G$ such that $\min\{d^{+}(v), d^{-}(v) \} < (1/2+\varepsilon)n$. Then, observe that $G'=G-W$ satisfies \[ d_{G'}^{+}(v), d_{G'}^{-}(v)\geq (1/2+\varepsilon-\rho)n\geq (1/2+\varepsilon-\rho)|G'| \] for all $v\in V(G')$. By our choice of parameters, we can conclude that $G'$ is a robust $(\nu',\tau')$-outexpander by Lemma~\ref{lem:HigherThanHalfGivesRobustExpander} since $\tau'\leq\varepsilon-\rho$ and $2\nu'/\tau'\leq \varepsilon-\rho$. Moreover, we have $|W|\leq\rho n\leq \nu' n/2$. Therefore, $G$ is a robust $(\nu'/2,2\tau')$-outexpander by Corollary~\ref{cor:AddingSmallPartIntoRobustExpander}, hence a robust $(\nu,\tau)$-outexpander since $\nu\leq\nu'/2$ and $\tau\geq 2\tau'$, and the result follows by Theorem~\ref{thm:RobustExpanderImpliesHamilton}. \end{proof} \subsection{Sketch proof} Note that the sketch proof we give below only makes reference to Definition~\ref{def:RobustExpander}, Theorem~\ref{thm:RobustExpanderImpliesHamilton}, and Lemma~\ref{lem:HigherThanHalfGivesRobustExpander}. We will sketch the proof of Theorem~\ref{thm:MAIN-1-3-CASE} and then explain how these ideas are generalised and refined to prove Theorem~\ref{thm:MAIN-1-4-CASE}. \\ \noindent Let $G=(V,E)$ be an $n$-vertex, $d$-regular digraph with $d \geq (1/3+\varepsilon)n$. If $G$ is a robust $(\nu, \tau)$-outexpander (for suitable parameters $\nu$ and $\tau$), then by Theorem~\ref{thm:RobustExpanderImpliesHamilton}, we know $G$ has a Hamilton cycle. So assume $G$ is not a robust $(\nu, \tau)$-outexpander. We describe a useful vertex partition of $G$. \\ \noindent {\bf Partitioning non-robust expanders} - Since $G$ is not a robust $(\nu, \tau)$-outexpander we know by Definition~\ref{def:RobustExpander} that there exists $S \subseteq V(G)$ such that $\tau n \leq |S| \leq (1- \tau)n$ and $|\text{RN}_{\nu}^{+}(S)| \leq |S| + \nu n$.
This immediately gives us a partition of $V(G)$ into four parts given by \begin{align*} V_{11} = S \cap \text{RN}^{+}(S), \:\:\:\:\: V_{12} = S \setminus \text{RN}^{+}(S), \:\:\:\:\: V_{21} = \text{RN}^{+}(S) \setminus S, \:\:\:\:\: V_{22} = V \setminus (S \cup \text{RN}^{+}(S)). \end{align*} We see that most outedges from vertices in $S$ go to $\text{RN}^{+}(S)$ by the definition of $\text{RN}^{+}(S)$. Moreover, $S$ and $\text{RN}^{+}(S)$ must be of similar size; indeed we already know $\text{RN}^{+}(S)$ is not significantly bigger than $S$, and it cannot be significantly smaller because otherwise the degrees in $\text{RN}^{+}(S)$ would be larger than degrees in $S$ violating that $G$ is regular. Also most outedges of vertices in $V \setminus S$ go to $V \setminus \text{RN}^{+}(S)$ because if many of these edges went to $\text{RN}^{+}(S)$, the degrees in $\text{RN}^{+}(S)$ would again be too large violating that $G$ is regular. All of this is straightforward to show and captured in Lemma~\ref{lem:structure_not_expander}. The structure we obtain is depicted in Figure~\ref{fig:partition-expander}. To summarise, we have that \begin{itemize} \item[(a)] $|S| \approx |\text{RN}^{+}(S)|$ so $|V_{12}| \approx |V_{21}|$, \item[(b)] most edges of $G$ are from $S$ to $\text{RN}^{+}(S)$ and from $V \setminus S$ to $V \setminus \text{RN}^{+}(S)$. We call these the good edges of $G$, and \item[(c)] (b) implies that we must have $|S|, |V \setminus S| \gtrapprox d$ so that in particular $n/3 \lessapprox |S|, |V \setminus S| \lessapprox 2n/3$. \end{itemize} \begin{figure} \caption{The 4-partition of $V(G)$ with $|V_{12}|\approx|V_{21}|$} \label{fig:partition-expander} \end{figure} \noindent Next we describe the strategy to construct a Hamilton cycle in $G$ using this partition. \\ \noindent {\bf Constructing the Hamilton cycle for balanced partitions} - We first describe how to construct the Hamilton cycle in the special case $|V_{12}| = |V_{21}|>0$.
In that case, let $V_{12} = \{ x_1, \ldots, x_k \}$ and $V_{21}= \{y_1, \ldots, y_k \}$. Consider the two edge-disjoint subgraphs $G_1$ and $G_2$ of $G$ given by (see Figure~\ref{fig:graphs-G1-G2}) \begin{align*} G_1 &= \left( S \cup \text{RN}^{+}(S), \; E_G(S, \text{RN}^{+}(S)) \right) \\ &= (V_{11} \cup V_{12} \cup V_{21}, \; E(V_{12}, V_{11}) \cup E(V_{11}, V_{11}) \cup E(V_{11}, V_{21}) \cup E(V_{12}, V_{21}) ), \end{align*} and \begin{align*} G_2 &= \left( (V \setminus S) \cup (V \setminus \text{RN}^{+}(S)), \; E_G(V \setminus S, V \setminus \text{RN}^{+}(S)) \right) \\ &= (V_{22} \cup V_{12} \cup V_{21}, \; E(V_{21}, V_{22}) \cup E(V_{22}, V_{22}) \cup E(V_{22}, V_{12}) \cup E(V_{21}, V_{12}) ). \end{align*} \begin{figure} \caption{The edge-disjoint subgraphs $G_1$ and $G_2$ of $G$.} \label{fig:graphs-G1-G2} \end{figure} \noindent Suppose we can find \begin{itemize} \item[(i)] vertex-disjoint paths $Q^1_1, \ldots, Q^1_k$ in $G_1$ that together span $V(G_1)$ and where $Q^1_i$ is from $x_i$ to $y_{\sigma(i)}$ for some permutation $\sigma$ on $[k]$, \item[(ii)] vertex-disjoint paths $Q^2_1, \ldots, Q^2_k$ in $G_2$ that together span $V(G_2)$ and where $Q^2_i$ is from $y_i$ to $x_{\pi(i)}$ for some permutation $\pi$ on $[k]$, \item[(iii)] and where the permutation $\pi \sigma$ is a cyclic permutation. \end{itemize} Then it is easy to see that the union of these paths forms a Hamilton cycle. We find these paths as follows. \\ \noindent Consider $G_1$ first. We construct the graph $J_1$ from $G_1$ by identifying $x_i$ with $y_i$ for every $i \in [k]$ and keeping all edges (except any self loops). The vertex which replaces $x_i$ and $y_i$ is called $i$. From the structure of $G_1$, it is not hard to see that most vertices in $J_1$ have degree roughly $d \geq (1/3 + \varepsilon)n$, while $|J_1| = |S| \lessapprox 2n/3$ by (c).
So most vertices in $J_1$ have in- and outdegree at least $(1/2+\varepsilon/2)|J_1|$, which implies $J_1$ is a robust expander by Lemma~\ref{lem:HigherThanHalfGivesRobustExpander}.\footnote{Any enumeration of the vertices in $V_{12}$ and $V_{21}$ would lead to $J_1$ being a robust expander.} Therefore $J_1$ has a Hamilton cycle $H_1$ by Theorem~\ref{thm:RobustExpanderImpliesHamilton}. \\ \noindent Let $\sigma$ be the permutation on $[k]$ where $\sigma(i)$ is the next vertex in $[k]$ after $i$ visited by $H_1$. Then $H_1$ is the union of paths $R_1, \ldots, R_k$ where $R_i$ is from $i$ to $\sigma(i)$, which corresponds in $G_1$ to the path $Q_i^1$ from $x_i$ to $y_{\sigma(i)}$; these paths can easily be seen to satisfy (i) (see Figure~\ref{fig:hamilton-to-paths}). Next, we obtain $J_2$ from $G_2$ by identifying the vertex $x_i$ with $y_{\sigma(i)}$, and labelling the resulting vertex $i$, for every $i \in [k]$ similarly as for $J_1$. Again, we find that $J_2$ is a robust expander and so has a Hamilton cycle $H_2$. Let $\pi$ be the permutation on $[k]$ such that $\pi(i)$ is the next vertex in $[k]$ after $i$ visited by $H_2$. Using the same argument as before, we obtain paths $Q^2_1, \ldots, Q^2_k$ satisfying (ii). By our choice of identification in $J_2$, and since $H_2$ is a Hamilton cycle, it is easy to see that $\pi$ and $\sigma$ satisfy~(iii). \\ \begin{figure} \caption{An example illustration of (A) $G_1$, (B) the corresponding graph $J_1$ with a Hamilton cycle $H_1$, and (C) the vertex-disjoint paths $Q_1^1, \ldots, Q_k^1$ spanning $G_1$ (with $k=3$ in this case) corresponding to $H_1$. In this case $\sigma=(231)$, i.e. the cyclic permutation that sends 1 to 2, 2 to 3, and 3 to 1.} \label{fig:hamilton-to-paths} \end{figure} \noindent {\bf Constructing the Hamilton cycle for unbalanced partitions} - We have seen how to find the Hamilton cycle when $|V_{12}| = |V_{21}|$.
If instead we only have (by (a)) that $|V_{12}| \approx |V_{21}|$, then we will find vertex-disjoint paths $S_1, \ldots, S_{\ell}$ that use only bad edges (and only a relatively small number of bad edges) such that ``contracting'' these paths results in a slightly modified graph $G'$ with a slightly modified vertex partition $V'_{11}, V'_{12}, V'_{21}, V'_{22}$, which has essentially the same properties as before but also that $|V_{12}'| = |V_{21}'|$. Here $G'$ is not regular, but almost regular; this however is enough for us. So we can find a Hamilton cycle in $G'$ using the previous argument, and ``uncontracting'' the paths $S_1, \ldots, S_{\ell}$ gives a Hamilton cycle in~$G$.\footnote{For Theorem~\ref{thm:MAIN-1-3-CASE}, these paths are constructed directly in the proof of the theorem in Section~\ref{ch:mainproofs}, but in the more complicated case of Theorem~\ref{thm:MAIN-1-4-CASE}, they are constructed in Lemma~\ref{lem:9-partition-path-selection}.} \\ \noindent {\bf The case of regular oriented graphs} - For Theorem~\ref{thm:MAIN-1-4-CASE}, i.e. when $G$ is an $n$-vertex regular oriented graph with degree $d \geq (1/4 + \varepsilon)n$, we start by applying the same argument as before. Recall that we construct digraphs $J_1$ and $J_2$ and wish to find Hamilton cycles in these digraphs. However, whereas before, we could guarantee that both $J_1$ and $J_2$ would be robust expanders, this time we find that (at most) one of them, say $J_2$, might not be. This is because $G$ and the $J_i$ have lower degrees, and so we cannot necessarily apply Lemma~\ref{lem:HigherThanHalfGivesRobustExpander}. It is not too hard to see that the $J_i$ are almost regular and so we can iterate our partition argument on $J_2$. In particular we can partition $V(J_2)$ into four parts $Z_{11}, Z_{12}, Z_{21}, Z_{22}$ that satisfy slightly modified forms of (a) and (b).
Again if $|Z_{12}| = |Z_{21}|$, then we can create digraphs $K_1$ and $K_2$ such that Hamilton cycles in $K_1$ and $K_2$ lift to a Hamilton cycle in $J_2$ (just as Hamilton cycles in $J_1$ and $J_2$ lift to a Hamilton cycle in $G$). This time the increase in density is enough to guarantee that both $K_1$ and $K_2$ are robust expanders, which gives the desired Hamilton cycle by Theorem~\ref{thm:RobustExpanderImpliesHamilton}. If $|Z_{12}| \neq |Z_{21}|$, then, as before, we need to construct paths whose contraction results in a modified graph with a modified partition that is balanced. In fact, we need to be able to find and contract paths in such a way that we simultaneously have $|V'_{12}| = |V'_{21}|$ and $|Z'_{12}| = |Z'_{21}|$. For this purpose, and generally for a cleaner and more transparent argument, rather than working with two iterations of the $4$-partition described earlier, we work equivalently with a $9$-partition of $V(G)$. The required paths are constructed in Lemma~\ref{lem:9-partition-path-selection}. \section{Partitions of regular digraphs and oriented graphs} \label{ch:partition} We have seen that (essentially) any dense digraph that is a robust expander is Hamiltonian. If the digraph is not a robust expander, then we will see (Lemma~\ref{lem:structure_not_expander}) that the witness sets to this non-expansion naturally form a partition of the vertices into $4$ parts. Throughout the paper we will be working with such partitions and their iterations. In this section, we introduce the language of partitions and establish some of their basic properties. \begin{definition}\label{def:good-edges} For a given digraph $G$ and $k\in \mathbb{N}$, a partition $\mathcal{P}_k=\{V_{ij}:i,j\in[k] \}$ of $V(G)$ is called a $k^2$-partition of $V(G)$ (we allow the sets $V_{ij}$ to be empty).
The set of \emph{good edges with respect to $\mathcal{P}_k$} is defined as $$\mathcal{G}_k(\mathcal{P}_k,G):=\displaystyle\bigcup_{i}E(V_{i*},V_{*i}),$$ where $V_{i*}:=\bigcup_{j}V_{ij}$ and $V_{*j}:=\bigcup_{i}V_{ij}$. The set of \emph{bad edges with respect to $\mathcal{P}_k$} is defined as $$\mathcal{B}_k(\mathcal{P}_k,G):=E(G)-\mathcal{G}_k(\mathcal{P}_k,G)=\displaystyle\bigcup_{i\neq j}E(V_{i*},V_{*j}).$$ We write $G_{ij}:=G[V_{i*}, V_{*j}]$. \end{definition} \noindent Note that while we define $k^2$-partitions and prove properties for general $k$, in fact we only require the cases $k=2,3$. For regular digraphs, we have a useful equality relating the sizes of different parts in a $k^2$-partition and the number of bad edges. \begin{proposition}\label{prop:partition-regular-graphs} Let $G$ be a $d$-regular digraph, $k\in \mathbb{N}$, and $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $k^2$-partition of $V(G)$. Then, for all $i \in [k]$, we have $$d(|V_{i*}|-|V_{*i}|)= \displaystyle\sum_{j\neq i} \left( e(G_{ij})-e(G_{ji})\right).$$ \end{proposition} \begin{proof} By considering outneighbours of the vertices in $V_{i*}$, we can write $$d|V_{i*}|=e(V_{i*},V_{*i})+\displaystyle\sum_{j\neq i}e(V_{i*},V_{*j}).$$ Similarly, by considering the inneighbours of the vertices in $V_{*i}$, we have $$d|V_{*i}|=e(V_{i*},V_{*i})+\displaystyle\sum_{j\neq i}e(V_{j*},V_{*i}).$$ By subtracting the second equality from the first one, the result follows. \end{proof} \noindent If the number of bad edges is small compared to $E(G)$, then Proposition~\ref{prop:partition-regular-graphs} implies that $V_{i*}$ and $V_{*i}$ are similar in size. \begin{corollary}\label{cor:Y-and-Z-equal-size} Let $k \in \mathbb{N}$ and $\gamma$ be a constant. Let $G$ be a $d$-regular digraph on $n$ vertices, and $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $k^2$-partition of $V(G)$. 
If $|\mathcal{B}_k(\mathcal{P}_k,G)|\leq\gamma n^2$, then we have $\big| |V_{i*}|-|V_{*i}| \big|\leq \gamma n^2/d$ for all $i \in [k]$. \end{corollary} \begin{proof} Fix $i \in [k]$. We have $$\Big|\displaystyle\sum_{j\neq i}\left(e(V_{i*},V_{*j})-e(V_{j*},V_{*i})\right)\Big|\leq \displaystyle\sum_{j\neq i}\left(e(V_{i*},V_{*j})+e(V_{j*},V_{*i})\right)\leq|\mathcal{B}_k(\mathcal{P}_k,G)|\leq \gamma n^2.$$ Hence, by Proposition~\ref{prop:partition-regular-graphs}, we know $d\big||V_{i*}|-|V_{*i}|\big| \leq \gamma n^2$, so the result follows. \end{proof} \noindent We will be especially interested in partitions with a small number of bad edges and where certain parts are not too small. \begin{definition}\label{def:name-good-partition} For a given digraph $G$ on $n$ vertices and constants $\gamma$, $\tau$, and $k\in \mathbb{N}$, we say a $k^2$-partition $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ of $V(G)$ is a \emph{$(k^2,\tau,\gamma)$-partition} if the following hold: \begin{center} $|\mathcal{B}_k(\mathcal{P}_k,G)|\leq \gamma n^2$ and $|V_{i*}|, |V_{*j}|\geq\tau n$ for all $i,j \in [k]$. \end{center} \end{definition} \begin{remark}\label{rem:Y-and-Z-equal-size} In general, the constants $\gamma$ and $\tau$ are taken to satisfy $1/n\ll \gamma\ll \tau\ll 1$. When working with regular graphs, we sometimes implicitly take the conclusion of Corollary~$\ref{cor:Y-and-Z-equal-size}$ as a property of a $(k^2,\tau,\gamma)$-partition. \end{remark} \noindent Next, we show that every almost regular digraph which is dense and not a robust $(\nu,\tau)$-outexpander admits a $(4,\tau,4\nu)$-partition. \begin{lemma}\label{lem:structure_not_expander} Let $1/n\ll\nu\ll\tau\ll\alpha\ll1$, and $G$ be a digraph on $n$ vertices such that $e(G) \ge (\alpha -\nu) n^2 $ and $\Delta^0(G) \le \alpha n$. If $G$ is not a robust $(\nu,\tau)$-outexpander, then $G$ admits a $(4,\tau,4\nu)$-partition. \end{lemma} \begin{proof} Assume $G$ is not a robust $(\nu,\tau)$-outexpander.
Then we can find a subset $S\subseteq V(G)$ such that $\tau n\leq |S|\leq (1-\tau)n$ and $|\text{RN}_{\nu}^{+}(S)|<|S|+\nu n$. Let us define $V_{11}=S\cap \text{RN}_{\nu}^{+}(S)$, $V_{12}=S-\text{RN}_{\nu}^{+}(S)$, $V_{21}=\text{RN}_{\nu}^{+}(S)-S$, and $V_{22}=V(G)-(S\cup \text{RN}_{\nu}^{+}(S))$. Therefore $V_{1*}=S$ and $V_{*1}=\text{RN}_{\nu}^{+}(S)$. Note that $\mathcal{P}_2=\{V_{ij}:i,j \in [2]\}$ is a $4$-partition of $V(G)$. Moreover, since $\tau n\leq |S|\leq (1-\tau)n$, we have $|V_{1*}|,|V_{2*}|\geq \tau n$. \\ \noindent We first show that $|\mathcal{B}_2(\mathcal{P}_2,G)| \leq 4\nu n^2$. By the definition of $\text{RN}_{\nu}^{+}(S)$, we know that every vertex in $V_{*2}$ has fewer than $\nu n$ inneighbours from $V_{1*}$. Thus, we have \begin{align} e(V_{1*},V_{*2}) \le \nu n^2 \label{eq:first} \end{align} and \begin{align} e(V_{1*},V_{*1}) & = e(V_{1*},V(G)) - e(V_{1*},V_{*2}) \ge e(V_{1*},V(G)) - \nu n^2 = e(G) - e(V_{2*}, V(G)) - \nu n^2 \nonumber \\ &\ge (\alpha -\nu) n^2 - \alpha n |V_{2*}| - \nu n^2 = \alpha n |V_{1*}| - 2 \nu n^2 \label{eq:third} . \end{align} Since $|V_{*1}| = |\text{RN}_{\nu}^{+}(S)|<|S|+\nu n = |V_{1*}|+\nu n$, we have \begin{align*} e(V(G), V_{*1}) \le \alpha n |V_{*1}| < (|V_{1*}|+\nu n) \alpha n \le \alpha n |V_{1*}| + \nu n^2. \end{align*} Thus, together with~\eqref{eq:third}, we have \begin{align*} e(V_{2*}, V_{*1}) = e(V(G), V_{*1}) - e(V_{1*}, V_{*1}) \le 3 \nu n^2. \end{align*} Therefore \eqref{eq:first} implies that $ |\mathcal{B}_2(\mathcal{P}_2,G)| = e(V_{1*},V_{*2}) + e(V_{2*},V_{*1}) \leq 4\nu n^2$.\\ \noindent We now bound $|V_{*1}|$ and $|V_{*2}|$ from below. Let $T$ be the set of vertices with outdegree at most $(\alpha - \sqrt{\nu})n$. Then, as $\Delta^0(G) \le \alpha n$, we have \begin{align*} (\alpha - \nu )n^2 \le e(G) \le (\alpha - \sqrt{\nu})n|T| + \alpha n (n-|T|) = \alpha n^2 - \sqrt{\nu}n |T|, \end{align*} which implies that $|T| \le \sqrt{\nu}n$.
For $\{i,j\} = [2]$, recall that $|V_{i*}| \ge \tau n$ and so we have \begin{align*} ( |V_{*i}| + 4 \nu n/\tau) |V_{i*}| &\ge |V_{i*}||V_{*i}| + 4 \nu n^2 \ge e(V_{i*} ,V_{*i}) + |\mathcal{B}_2(\mathcal{P}_2,G)| \ge e(V_{i*} ,V_{*i}) + e(V_{i*},V_{*j}) \\ &= e(V_{i*},V(G) ) \ge ( \alpha-\sqrt{\nu} ) n | V_{i*} \setminus T | \ge ( \alpha-\sqrt{\nu} ) n | V_{i*} | /2. \end{align*} As a result, we obtain $|V_{*i}| \geq ( \alpha-\sqrt{\nu})n/2 - 4 \nu n/\tau \geq \tau n$, so the result follows. \end{proof} \noindent One can construct an $(\ell^2,\tau,\gamma)$-partition of $G$ from a $(k^2,\tau,\gamma)$-partition of $G$ for $\ell\leq k$. \begin{proposition}\label{prop:partition-9-to-4} Let $G$ be a digraph and $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $(k^2,\tau,\gamma)$-partition of $G$. Let $\{I_1,I_2,\ldots,I_\ell\}$ be a partition of $[k]$ with $I_t\neq \emptyset$ for all $t\in[\ell]$. For $i',j'\in[\ell]$, let $W_{i'j'}=\bigcup_{i\in I_{i'},\,j\in I_{j'}}V_{ij}$. Then, $\mathcal{P}_\ell=\{W_{i'j'}:i',j'\in[\ell]\}$ is an $(\ell^2,\tau,\gamma)$-partition of $G$. \end{proposition} \begin{proof} Let $n = |G|$. For $i'\in [\ell]$, note that $$W_{i'*}=\displaystyle\bigcup_{j'\in[\ell]}W_{i'j'}=\displaystyle\bigcup_{i\in I_{i'},\, j\in[k]}V_{ij}=\displaystyle\bigcup_{i\in I_{i'}}V_{i*}$$ and so $|W_{i'*}|\geq \tau n$. Similarly, we have $|W_{*j'}|\geq \tau n$ for all $j'\in[\ell]$. Moreover, note that \begin{align*} |\mathcal{B}_\ell(\mathcal{P}_\ell,G)|&=\sum_{i',j'\in[\ell],\, i'\neq j'}e(W_{i'*},W_{*j'})= \sum_{i',j'\in[\ell],\, i'\neq j' }\,\,\sum_{i\in I_{i'},\,j\in I_{j'}}e(V_{i*},V_{*j})\\ &\leq \sum_{i,j\in[k],\, i\neq j}e(V_{i*},V_{*j})=|\mathcal{B}_k(\mathcal{P}_k,G)|, \end{align*} so the result follows. \end{proof} \noindent Next, we show that if a regular digraph is dense and admits a $(k^2,\tau,\gamma)$-partition, then certain unions of parts have size at least roughly the degree of the digraph.
\begin{proposition}\label{prop:dense-regular-size} Let $1/n\ll\gamma\ll\tau\ll\varepsilon\ll\alpha\ll1$, $k\in\mathbb{N}$, and $G$ be a $d$-regular digraph on $n$ vertices where $d\geq (\alpha+\varepsilon) n$. If $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ is a $(k^2,\tau,\gamma)$-partition for $G$, then we have $|V_{i*}|,|V_{*i}|\geq d-\varepsilon n/2$ for all $i \in [k]$. In particular, $\mathcal{P}_k$ is a $(k^2,\alpha+\varepsilon/2,\gamma)$-partition for $G$. \end{proposition} \begin{proof} Let $i\in[k]$. By looking at the outneighbours of $V_{i*}$, we have $$|V_{*i}|\geq \dfrac{e(V_{i*},V_{*i})}{|V_{i*}|}\geq d-\dfrac{\gamma n^2}{|V_{i*}|}\geq d-\dfrac{\gamma n}{\tau}\geq d-\varepsilon n/2$$ since $|V_{i*}|\geq \tau n$ and $\gamma \ll \tau \ll \varepsilon$. Similarly, we have $|V_{i*}|\geq d-\varepsilon n/2$. \end{proof} \noindent If a $(k^2,\tau,\gamma)$-partition has the minimum possible number of bad edges among all $(k^2,\tau,\gamma)$-partitions of a digraph, then we give it a special name. \begin{definition}\label{def:extremal-4-partition} For a given digraph $G$ on $n$ vertices, constants $1/n\ll \gamma\ll \tau\ll 1$, and $k\in \mathbb{N}$, a $(k^2,\tau,\gamma)$-partition $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ of $V(G)$ is called an \emph{extremal $(k^2,\tau,\gamma)$-partition} if $\mathcal{B}_k(\mathcal{P}_k,G)\leq \mathcal{B}_k(\mathcal{P}_k',G)$ for all $(k^2,\tau,\gamma)$-partitions $\mathcal{P}_k'$ of $V(G)$. \end{definition} \noindent We establish some useful degree conditions for extremal $(k^2,\tau,\gamma)$-partitions of dense regular digraphs. \begin{proposition}\label{prop:degree-property-extremal} Let $1/n\ll\gamma\ll\tau\ll\alpha\ll1$, $k\in \mathbb{N}$, and $G$ be a $d$-regular digraph on $n$ vertices where $d\geq \alpha n$. Suppose $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ is an extremal $(k^2,\tau,\gamma)$-partition of~$G$. 
Then, for all $i,j \in [k]$ and $w\in V_{ij}$, we have $d_{V_{*i'}}^{+}(w)\leq d_{V_{*i}}^{+}(w)$ and $d_{V_{j'*}}^{-}(w)\leq d_{V_{j*}}^{-}(w)$ for all $ i',j'\in[k]$. In particular, we have $d_{V_{j'*}}^{-}(w), d_{V_{*i'}}^{+}(w)\leq d/2$ for all $i'\neq i$ and $j'\neq j$, and $d_{\mathcal{G}_k(\mathcal{P}_k,G)}^{+}(v),d_{\mathcal{G}_k(\mathcal{P}_k,G)}^{-}(v)\geq d/k$ for all $v\in V(G)$. \end{proposition} \begin{proof} Let $\varepsilon$ be a constant such that $\tau \ll \varepsilon \ll \alpha$. Suppose to the contrary, and without loss of generality, that there exists $w\in V_{ij}$ and $a\in[k]$ such that $d_{V_{*a}}^{+}(w)>d_{V_{*i}}^{+}(w)$. Let $V_{ij}'=V_{ij}\backslash\{w\}$, $V_{aj}'=V_{aj}\cup \{w\}$, and $V_{i'j'}'=V_{i'j'}$ for all $(i',j')\in[k]\times[k]\backslash\{(i,j),(a,j)\}$. Let $\mathcal{P}_{k}'=\{V_{i'j'}':i',j'\in[k]\}$. By Proposition~\ref{prop:dense-regular-size}, $$|V_{i*}'|=|V_{i*}|-1\geq d-\varepsilon n/2-1\geq \tau n$$ since $\tau \ll \varepsilon$. Similarly, we have $|V_{*j}'|\geq \tau n$. Moreover, for all $i'\neq i$ and $j'\neq j$, we know $|V_{i'*}'|\geq |V_{i'*}|\geq \tau n$ and $|V_{*j'}'|\geq |V_{*j'}|\geq \tau n$. On the other hand, we obtain $$|\mathcal{B}_k(\mathcal{P}_k',G)|=|\mathcal{B}_k(\mathcal{P}_k,G)|-d_{V_{*a}}^{+}(w)+d_{V_{*i}}^{+}(w)<|\mathcal{B}_k(\mathcal{P}_k,G)|.$$ Hence $\mathcal{P}_{k}'$ is a $(k^2,\tau,\gamma)$-partition of $G$ having fewer bad edges than the extremal $(k^2,\tau,\gamma)$-partition $\mathcal{P}_k$, which is a contradiction. As a result, for all $i',j'\in[k]$, we have $d_{V_{*i'}}^{+}(w)\leq d_{V_{*i}}^{+}(w)$ and $d_{V_{j'*}}^{-}(w)\leq d_{V_{j*}}^{-}(w)$. The rest of the proof is immediate. \end{proof} \noindent For any dense regular oriented graph, we show that certain unions of sets in a $(k^2,\tau,\gamma)$-partition have strictly positive size.
\begin{proposition}\label{prop:quarterleadsnonempty} Let $1/n\ll\gamma\ll\tau\ll\varepsilon <1$ be constants, $k\in \mathbb{N}$, and $G$ be a $d$-regular oriented graph on $n$ vertices with $d \geq (1/4+\varepsilon)n$. Let $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $(k^2,\tau,\gamma)$-partition of $G$. Then, for $i \in [k]$, we have $$\displaystyle\big|\bigcup_{j\neq i}V_{ij}\big|,\displaystyle\big|\bigcup_{j\neq i}V_{ji}\big|\geq \tau n.$$ \end{proposition} \begin{proof} First suppose that $k=2$. Without loss of generality, assume $|V_{11}|\leq |V_{22}|$, which gives $d-{|V_{11}|}/{2}\geq d-{n}/{4}\geq \varepsilon n$. By Corollary~\ref{cor:Y-and-Z-equal-size}, we know that $\big||V_{12}|-|V_{21}|\big|\leq \gamma n^2/d\leq \tau n$. Hence, it suffices to show that $|V_{12}|\geq 2\tau n$. By Proposition~\ref{prop:dense-regular-size}, we have $|V_{11}|+|V_{12}|=|V_{1*}|\geq (1/4+{\varepsilon}/{2})n$. Since $\tau\ll \varepsilon$, we may assume that $|V_{11}|\geq n/4$. Then, since $G$ is oriented and $d$-regular, we can write \begin{align*} d|V_{11}|& =e(V(G),V_{11})=e(V_{11},V_{11})+e(V_{12},V_{11})+e(V_{2*},V_{11})\leq {|V_{11}|^2}/{2}+|V_{12}||V_{11}|+\gamma n^2\\ &\leq |V_{11}|\cdot ( {|V_{11}|}/2+|V_{12}|+4\gamma n). \end{align*} This implies $|V_{12}|\geq d-{|V_{11}|}/{2}-4\gamma n\geq (\varepsilon-4\gamma)n\geq 2\tau n$ as required. \\ \noindent Now, fix $k\geq 3$ and define $\mathcal{W}_i=\{W^{i}_{ab}:a,b\in[2]\}$ for all $i \in [k]$ where $$W^{i}_{11}=V_{ii},\,\, W^{i}_{12}=\displaystyle\bigcup_{j\neq i}V_{ij},\,\, W^{i}_{21}=\displaystyle\bigcup_{j\neq i}V_{ji},\,\, W^{i}_{22}=\displaystyle\bigcup_{a,b\neq i}V_{ab}.$$ Notice that $\mathcal{W}_i$ is a $(4,\tau,\gamma)$-partition by Proposition~\ref{prop:partition-9-to-4}, so we get $|W^{i}_{12}|,|W^{i}_{21}|\geq\tau n$ from the case $k=2$.
Then, we obtain $$\displaystyle\Big|\bigcup_{j\neq i}V_{ij}\Big|=|W^{i}_{12}|\geq\tau n\,\text{ and }\,\displaystyle\Big|\bigcup_{j\neq i}V_{ji}\Big|=|W^{i}_{21}|\geq\tau n,$$ so the result follows for any $k$. \end{proof} \section{Balancing partitions} \label{ch:balance} Let $G$ be a regular digraph or oriented graph and suppose $\mathcal{P}_k$ is a $(k^2,\tau,\gamma)$-partition of $G$ that is ``not balanced'', in the sense that $|V_{i*}| \neq |V_{*i}|$ for some $i \in [k] $. Then, Proposition~\ref{prop:partition-regular-graphs} implies that any Hamilton cycle $C$ must contain a number of bad edges (i.e.\ edges from $\mathcal{B}_k(\mathcal{P}_k,G)$) that depends on the extent of the ``imbalance'' of $\mathcal{P}_k$. Since $\mathcal{B}_k(\mathcal{P}_k,G)$ is small (at most $\gamma n^2$ edges), when constructing a Hamilton cycle of~$G$, it is necessary to first pick the edges of $\mathcal{B}_k(\mathcal{P}_k,G)$ that will be in~$C$. Let us write $\mathcal{Q}$ for the bad edges in our target Hamilton cycle, and note that $\mathcal{Q}$ is a path system. \\ \noindent By Proposition~\ref{prop:partition-regular-graphs} (applied with $d=1$ and $G=\mathcal{Q}$), we must ensure that~$\mathcal{Q}$ satisfies, for all $i \in [k]$, \begin{align*} \sum_{j\neq i}| E(\mathcal{Q})\cap E(V_{i*},V_{*j}) | - \sum_{j\neq i}| E(\mathcal{Q})\cap E(V_{j*},V_{*i}) |=|V_{i*}|-|V_{*i}|. \end{align*} A naive approach to construct $\mathcal{Q}$ is to take a matching of suitable size in each of~$G_{ij}$ for $i \ne j$, where as before~$G_{ij}=G[V_{i*},V_{*j}]$. However, the union of these matchings may not be a path system since it might contain cycles or might satisfy $\Delta^0(\mathcal{Q}) \ge 2$. The main purpose of this section is to adapt the naive approach to construct $\mathcal{Q}$; see Lemma~\ref{lem:BALANCING-PARTITION}.
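To make the balance condition concrete, here is a small self-contained script (our own illustration, not part of the proofs) that checks the identity of Proposition~\ref{prop:partition-regular-graphs} on a toy example; with $d=1$ it is exactly the requirement on the path system $\mathcal{Q}$ described above:

```python
import itertools

def check_identity(edges, parts, k, d):
    """Check d(|V_i*| - |V_*i|) = sum_{j != i} (e(G_ij) - e(G_ji)) for all i,
    where parts[(i, j)] is the vertex set V_ij and e(G_ij) counts the edges
    from V_i* to V_*j."""
    row = {i: set().union(*(parts[(i, j)] for j in range(k))) for i in range(k)}  # V_i*
    col = {j: set().union(*(parts[(i, j)] for i in range(k))) for j in range(k)}  # V_*j
    e = {(i, j): sum(1 for (u, v) in edges if u in row[i] and v in col[j])
         for i in range(k) for j in range(k)}
    return all(d * (len(row[i]) - len(col[i]))
               == sum(e[(i, j)] - e[(j, i)] for j in range(k) if j != i)
               for i in range(k))

# The complete digraph on 7 vertices is 6-regular; the identity holds for
# any 4-partition of its vertex set.
K7 = {(u, v) for u, v in itertools.product(range(7), repeat=2) if u != v}
parts = {(0, 0): {0, 1}, (0, 1): {2}, (1, 0): {3, 4}, (1, 1): {5, 6}}
# check_identity(K7, parts, 2, 6) evaluates to True

# A Hamilton cycle is 1-regular, so a candidate Hamilton cycle must satisfy
# the same identity with d = 1.
ham = {(i, (i + 1) % 7) for i in range(7)}
# check_identity(ham, parts, 2, 1) evaluates to True
```

Of course, in the proofs $\mathcal{Q}$ must additionally avoid cycles and large degrees; this is what the remainder of the section arranges.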
\\ \noindent Our first goal is to show that given several edge-disjoint subdigraphs of some given digraph, we are able to pick a relatively large path system from each subdigraph such that the union of these path systems does not contain a directed cycle; this is Lemma~\ref{lem:CYCLE_FREE}. The first two lemmas below are technical results needed to prove this. \begin{lemma}\label{lem:MATHCING_LEMMA_I} Let $G$ be a digraph with $\Delta^{0}(G)\leq d$. Let $0<\theta<1$, and define the sets $W^{+}=\{w\in G:d^{+}(w)\geq \theta d\}$ and $W^{-}=\{w\in G:d^{-}(w)\geq \theta d\}$. Then, there exists a matching $M$ satisfying \begin{itemize} \item[(i)] $4\theta e(M)+|W^{+}|+|W^{-}|\geq e(G)/d $, \item[(ii)] $x\notin W^{+}$ and $y\notin W^{-}$ for all $xy\in E(M)$, \item[(iii)] $e(M)\leq e(G)/\theta d$. \end{itemize} \end{lemma} \begin{proof} If $\theta d<1$, we obtain $W^{+}=\{w\in G:d^{+}(w)\geq 1\}$. Then, we have $x\in W^{+}$ for any $xy\in E(G)$, which, in particular, implies $d|W^{+}|\geq e(G)$. Therefore, we can set $M$ to be empty in that case. Hence, we may assume $\theta d\geq 1$. Let $H$ be the multigraph obtained from $G$ by deleting all the edges $ab$ with either $a\in W^{+}$ or $b\in W^{-}$, and by making all the edges undirected. Note that we have $\Delta(H)+\mu(H) \leq 2\theta d+2$ and \begin{align}\label{eqn:deleting-some-edges} e(H)\geq e(G)-d(|W^{+}|+|W^{-}|). \end{align} Then, by Theorem \ref{thm:Vizing} (Vizing's theorem for multigraphs), there exists a matching $M_1$ in $H$ of size at least $e(H)/(2\theta d+2)$. Moreover, we can assume that $e(M_1)\leq e(H)/\theta d$ because otherwise we can remove some edges from $M_1$. Let $M$ be the corresponding matching in $G$. Clearly (ii) holds. By using $\theta d\geq 1$, we obtain $$\dfrac{e(G)}{\theta d}\geq \dfrac{e(H)}{\theta d}\geq e(M)\geq \dfrac{e(H)}{2\theta d+2}\geq \dfrac{e(H)}{4\theta d},$$ so (iii) holds.
Hence, together with \eqref{eqn:deleting-some-edges}, we have $4\theta e(M)+|W^{+}|+|W^{-}|\geq e(G)/d$, proving~(i). \end{proof} \noindent Now, given some matchings in a graph, we show that one can pick a significant number of edges from each matching such that all the chosen edges form a matching. \begin{lemma}\label{lem:MATCHING_LEMMA_II} Let $k,r\in\mathbb{N}$ and $M_1,M_2,\ldots,M_k$ be matchings with $\Delta\left(\bigcup_{i \in [k]} M_i\right)\leq r$. Suppose $e(M_i)>2(r^3+r)^2\ln k$ for all $i \in [k]$. Then, there exists a matching $H\subseteq \bigcup_{i \in [k]}M_i$ with $|E(H)\cap M_i|\geq e(M_i)/(r^2+1)$ for all $i \in [k]$. \end{lemma} \begin{proof} Letting $G=\bigcup_{i \in [k]}M_i$, we have $\Delta(G)\leq r$. We mark edges of $G$ randomly as follows. For each vertex $v\in G$, pick an edge incident to $v$ uniformly at random and mark all other edges incident to $v$. Do this independently for every vertex $v$ (so some edges may be marked twice). Then, let $H$ be the graph obtained from $G$ by deleting all the marked edges. Note that $H$ is a matching. We now show that $H$ satisfies the desired property with positive probability. Notice that each edge of $G$ lies in $H$ with probability at least $1/r^2$, and these probabilities are independent for vertex-disjoint edges. Now, for any $i \in [k]$, let $X_i\sim{\rm Bin}(e(M_i),1/r^2)$. Since $M_i$ is a matching, we have \begin{align*} \mathbb{P}\left(|E(H)\cap M_i|\leq \dfrac{e(M_i)}{r^2+1}\right)\leq \mathbb{P}\left(X_i\leq \dfrac{e(M_i)}{r^2+1}\right). \end{align*} Note that $\mathbb{E}[X_i]={e(M_i)}/{r^2}$. Hence, by Theorem \ref{thm:Chernoff} (Chernoff bound), we obtain \begin{align*} \mathbb{P}\left(X_i\leq \dfrac{e(M_i)}{r^2+1}\right)= \mathbb{P}\left(X_i\leq \dfrac{r^2}{r^2+1}\cdot \mathbb{E}[X_i]\right)\leq \exp{\left(-\frac{\mathbb{E}[X_i]}{2(r^2+1)^2}\right)}= \exp{\left(\frac{-e(M_i)}{2(r^3+r)^2}\right)}.
\end{align*} \noindent Then, by using $e(M_i)> 2(r^3+r)^2\ln k$, we obtain $\mathbb{P}\left(|E(H)\cap M_i|\leq \dfrac{e(M_i)}{r^2+1}\right)< \dfrac{1}{k}$ for each $i \in [k]$. Hence, by the union bound, we have \begin{align*} \mathbb{P}\left(|E(H)\cap M_i|\geq \dfrac{e(M_i)}{r^2+1}\text{ for all }i \in [k]\right)>0. \end{align*} \noindent Therefore, there exists a matching $H\subseteq \bigcup_{i \in [k]}M_i$ with $|E(H)\cap M_i|\geq \dfrac{e(M_i)}{r^2+1}$ for all $i \in [k]$. \end{proof} \noindent By using Lemma~\ref{lem:MATHCING_LEMMA_I} and Lemma~\ref{lem:MATCHING_LEMMA_II}, we will prove an edge selection lemma which will be used in the proof of Lemma~\ref{lem:9-partition-path-selection}. \begin{lemma}\label{lem:CYCLE_FREE} Let $k\in\mathbb{N}$, let $0<\gamma\ll\alpha<1$ be constants, and let $G$ be a digraph on $n$ vertices. Let $G_1,G_2,\ldots,G_k$ be pairwise edge-disjoint subgraphs of $G$ with $\sum_{i \in [k]}e(G_i)\leq \gamma n^2$ and $\Delta^{0}(G_i)\leq \alpha n$ for each $i\in[k]$. Then, each $G_i$ contains a path system $\mathcal{Q}_i$ such that $\bigcup_{i \in [k]}\mathcal{Q}_i$ is cycle-free and $e(\mathcal{Q}_i)\geq \left\lfloor e(G_i) / \alpha n \right\rfloor$ for all $i \in [k]$. \end{lemma} \begin{proof} Since $\gamma\ll \alpha$, we can choose a constant $\theta$ with $\sqrt{8\gamma}/\alpha <\theta<1/\left(8k(k^3+k)^2\right)$. Then, let us define the sets $$W_i^{+}=\{w\in V(G_i):d_{G_i}^{+}(w)\geq\alpha\theta n\}\text{ and }W_i^{-}=\{w\in V(G_i):d_{G_i}^{-}(w)\geq\alpha\theta n\}.$$ By Lemma~\ref{lem:MATHCING_LEMMA_I}, for each $i \in [k]$, we can find a matching $M_i$ in $G_i$ with \begin{align*} 4\theta e(M_i)+|W_i^{+}|+|W_i^{-}|\geq e(G_i)/\alpha n,\,\, M_i\subseteq G_i[V-W_i^{+},V-W_i^{-}],\,\, e(M_i)\leq e(G_i)/\alpha \theta n. \end{align*} For each $i$, we have either $|W_i^{+}|+|W_i^{-}|>(e(G_i)/\alpha n)-1$ or $4\theta e(M_i)\geq 1$. In the latter case, we have $e(M_i)\geq 2(k^3+k)^2\ln k$ due to the definition of $\theta$.
Let $R$ be the set of indices $i \in [k]$ satisfying $4\theta e(M_i)\geq 1$. By applying Lemma~\ref{lem:MATCHING_LEMMA_II} for the matchings $M_i$ with $i\in R$, we find a matching $M\subseteq\bigcup_{i\in R}M_i$ such that $e(M\cap M_i)\geq e(M_i)/(k^2+1)$ for all $i\in R$. Therefore, we have \begin{align*} e(M\cap M_i)+|W_i^{+}|+|W_i^{-}|&\geq e(M_i)/(k^2+1) +|W_i^{+}|+|W_i^{-}|\\ &\geq 4\theta e(M_i)+|W_i^{+}|+|W_i^{-}|\geq e(G_i)/\alpha n \end{align*} for all $i\in R$. On the other hand, if $i\notin R$, we know $|W_i^{+}|+|W_i^{-}|>(e(G_i)/\alpha n)-1$, which, in particular, implies $|W_i^{+}|+|W_i^{-}|\geq \lfloor e(G_i)/\alpha n\rfloor$. Write $N_i=M\cap M_i$ if $i\in R$, write $N=\bigcup_{i\in R}N_i$, and set $N_i=\emptyset$ if $i\notin R$. Thus, we obtain $e(N_i)+|W_i^{+}|+|W_i^{-}|\geq \lfloor e(G_i)/\alpha n\rfloor$ for all $i \in [k] $. By deleting edges in $N_i$ or removing vertices from $W_i^{+}\cup W_i^{-}$, we may assume \begin{align*} e(N_i)+|W_i^{+}|+|W_i^{-}|= \lfloor e(G_i)/\alpha n\rfloor \text{ for all $i \in [k] $.} \end{align*} Let us write $W=\bigcup_{i \in [k] }(W_i^{+}\cup W_i^{-})$. Note that \begin{align} |V(N)\cup W| & \le \sum_{i \in [k]}\left(2 e (N_i) + |W_i^{+}|+|W_i^{-}|\right) \leq 2\cdot \sum_{i \in [k]} (e(G_i)/\alpha\theta n ) \leq 2\gamma n/\alpha \theta. \label{eqn:bound-non-selected} \end{align} We now construct the desired path systems $\mathcal{Q}_1, \ldots, \mathcal{Q}_k$ by induction. Suppose we have found path systems $\mathcal{Q}_1,\ldots,\mathcal{Q}_j$ for some $0\leq j\leq k$ such that $N\cup \left(\bigcup_{i \in [j]}\mathcal{Q}_i\right)$ is cycle-free, and the following hold for all $i \in [j]$: \begin{itemize} \item[(i)] $N_i\subseteq \mathcal{Q}_i\subseteq G_i$, \item[(ii)] $e(\mathcal{Q}_i)= \lfloor e(G_i)/\alpha n \rfloor $. \end{itemize} \noindent If $j=k$, then we are done. If $j<k$, then we construct $\mathcal{Q}_{j+1}$ as follows.
First, we define $U=\left(V(N)\cup W\right)\cup V\left(\bigcup_{i \in [j]}\mathcal{Q}_i\right)$. By using \eqref{eqn:bound-non-selected}, we have $$|U|\leq 2\gamma n/\alpha\theta+2\sum_{i \in[j] }e(\mathcal{Q}_i)\leq (2\gamma n/\alpha\theta) +(2\gamma n/\alpha).$$ \noindent We construct the bipartite graph $\mathcal{B}$ with bipartition $(A,B)$ as follows. Let $B=V(G)-U$, and let $A$ be the disjoint union of $W_{j+1}^{+}$ and $W_{j+1}^{-}$. We add the edge $ab$ for each $a\in W_{j+1}^{+}$ and $b\in B$ if $b\in N_{G_{j+1}}^{+}(a)$, and add the edge $ab$ for each $a\in W_{j+1}^{-}$ and $b\in B$ if $b\in N_{G_{j+1}}^{-}(a)$. Due to the choice of $\theta$, we have \begin{align*} d_{\mathcal{B}}(a) \geq \alpha\theta n- (2\gamma n/\alpha\theta)-(2\gamma n/\alpha)\geq \gamma n/\alpha \geq e(G_{j+1})/\alpha n\geq |W_{j+1}^{+}|+|W_{j+1}^{-}|\geq |A|. \end{align*} Therefore, we can greedily pick a matching in $\mathcal{B}$ that covers $A$. Note that the corresponding edges in $G$ with respect to this matching give a path system $\mathcal{Q}_{j+1}'$ in $G_{j+1}$ containing paths of length one or two with $e(\mathcal{Q}_{j+1}')=|W_{j+1}^{+}|+|W_{j+1}^{-}|$ and $E(\mathcal{Q}_{j+1}')\cap E\left(N \cup \bigcup_{i \in [j]}\mathcal{Q}_i\right)=\emptyset$. Moreover, each edge in $\mathcal{Q}_{j+1}'$ will contain a vertex in $W_{j+1}^{+}\cup W_{j+1}^{-}$ and one unique vertex not in~$U$. Since $x\notin W_{j+1}^{+}$ and $y\notin W_{j+1}^{-}$ for all $xy\in N_{j+1}$, we can add $N_{j+1}$ into $\mathcal{Q}_{j+1}'$ to obtain another path system $\mathcal{Q}_{j+1}$ in $G_{j+1}$ with $e(\mathcal{Q}_{j+1})=\lfloor e(G_{j+1})/\alpha n \rfloor$.\\ \noindent Finally, suppose $N\cup \left(\bigcup_{i \in [j+1]}\mathcal{Q}_i\right)=N\cup \left(\bigcup_{i \in [j]}\mathcal{Q}_i\right)\cup \mathcal{Q}_{j+1}$ has a cycle $C$. Since $N\cup \left(\bigcup_{i \in [j]}\mathcal{Q}_i\right)$ has no cycle, $C$ contains an edge~$e$ in $\mathcal{Q}_{j+1}'$.
However, $e$ contains a vertex~$x$ not in~$U$, and $x$ has (total) degree~$1$ in~$N\cup \left(\bigcup_{i \in [j+1]}\mathcal{Q}_i\right)$, a contradiction. Hence $N\cup \left(\bigcup_{i \in [j+1]}\mathcal{Q}_i\right)$ is cycle-free, which completes the inductive construction of the $\mathcal{Q}_i$ and the proof of the lemma. \end{proof} \noindent Suppose $G$ is an oriented graph and consider a $9$-partition $\{V_{ij}:i,j \in [3]\}$ of $V(G)$. For $i,j \in [3]$, $i\neq j$, we say a path system $\mathcal{Q}$ is \emph{type-$ij$} if $E(\mathcal{Q})\subseteq E(V_{i*},V_{*j})$. Our next lemma describes the structure of the graph which is the union of several path systems that are of different types. First, some further notation. \\ \noindent We denote the set of all type-$ij$ path systems by $\mathcal{Q}(i,j)$. Let $\mathscr{S}\subset \bigcup_{i\neq j}\mathcal{Q}(i,j)$ be a set consisting of three path systems of different types. We say $\mathscr{S}$ is a \emph{symmetric 3-set} if either $|\mathscr{S}\cap \mathcal{Q}(1,2)|=|\mathscr{S}\cap \mathcal{Q}(2,3)|=|\mathscr{S}\cap \mathcal{Q}(3,1)|=1$ or $|\mathscr{S}\cap \mathcal{Q}(2,1)|=|\mathscr{S}\cap \mathcal{Q}(3,2)|=|\mathscr{S}\cap \mathcal{Q}(1,3)|=1$. Otherwise, we say $\mathscr{S}$ is an \emph{anti-symmetric 3-set}. For an anti-symmetric 3-set $\mathscr{S}$, if $|\mathscr{S}\cap (\mathcal{Q}(1,2)\cup \mathcal{Q}(2,3)\cup \mathcal{Q}(3,1))|=2$ and $|\mathscr{S}\cap (\mathcal{Q}(2,1)\cup \mathcal{Q}(3,2)\cup \mathcal{Q}(1,3))|=1$, we call the unique path system in $\mathscr{S}\cap (\mathcal{Q}(2,1)\cup \mathcal{Q}(3,2)\cup \mathcal{Q}(1,3))$ a \emph{special element} of $\mathscr{S}$.
Similarly, if $|\mathscr{S}\cap (\mathcal{Q}(1,2)\cup \mathcal{Q}(2,3)\cup \mathcal{Q}(3,1))|=1$ and $|\mathscr{S}\cap (\mathcal{Q}(2,1)\cup \mathcal{Q}(3,2)\cup \mathcal{Q}(1,3))|=2$, then we call the unique path system in $\mathscr{S}\cap (\mathcal{Q}(1,2)\cup \mathcal{Q}(2,3)\cup \mathcal{Q}(3,1))$ a \emph{special element} of $\mathscr{S}$. We will show that the graph induced by $\mathscr{S}$ has some structural properties if $\mathscr{S}$ is a symmetric or anti-symmetric 3-set. First, we need the definition of an anti-directed path. \\ \noindent Let $G$ be a digraph. A subgraph $P$ of $G$ is called an \emph{anti-directed path} in $G$ if its edges can be ordered as $E(P)=\{e_1,e_2,\ldots,e_k\}$ for some $k \in \mathbb{N}$ such that \begin{itemize} \item[(i)] $(e_1,e_2,\ldots,e_k)$ induces an (undirected) path when we forget the directions of the edges, and \item[(ii)] $(e_1,e_2,\ldots,e_k)$ does not contain a directed path of length at least two. \end{itemize} An anti-directed path $P$ in $G$ is said to be \emph{maximal} if it is not entirely contained in any other anti-directed path. \begin{lemma}\label{lem:ADE-BCF-Structure} Let $\mathcal{P}_3=\{V_{ij}:i,j \in [3]\}$ be a $9$-partition of an oriented graph $G$. Let $\mathscr{S}$ be a set of path systems in $G$, and let $H$ be the graph induced by all the paths in $\mathscr{S}$. If $\mathscr{S}$ is a symmetric $3$-set, then $H$ is the disjoint union of paths and cycles. If $\mathscr{S}$ is an anti-symmetric $3$-set with special element $S$, then $E(H)$ can be partitioned into maximal anti-directed paths $Q$ of length at most three with the following properties: \begin{itemize} \item[(i)] If $Q$ is a maximal anti-directed path of length two, then $Q$ has a unique edge belonging to~$S$. \item[(ii)] If $Q$ is a maximal anti-directed path of length three, then each edge of $Q$ belongs to a distinct path system in $\mathscr{S}$ where the middle edge belongs to~$S$.
\end{itemize} \end{lemma} \begin{proof} If $\mathscr{S}$ is a symmetric $3$-set, without loss of generality, assume $\mathscr{S}=\{\mathcal{Q}_{23},\mathcal{Q}_{31},\mathcal{Q}_{12}\}$ where the path system $\mathcal{Q}_{ij}$ is type-$ij$. Then, for any $x_{23}y_{23}\in E(\mathcal{Q}_{23})$, $x_{31}y_{31}\in E(\mathcal{Q}_{31})$, $x_{12}y_{12}\in E(\mathcal{Q}_{12})$, the tails $x_{23}$, $x_{31}$, $x_{12}$ are pairwise distinct since $x_{23}\in V_{2*}$, $x_{31}\in V_{3*}$, and $x_{12}\in V_{1*}$. Similarly, the heads $y_{23}$, $y_{31}$, $y_{12}$ are pairwise distinct. Therefore, for any vertex $v\in H$, we have $d_{H}^{+}(v), d_{H}^{-}(v)\leq 1$, which implies that $H$ is the disjoint union of paths and cycles.\\ \noindent Let $\mathscr{S}$ be an anti-symmetric $3$-set. Without loss of generality, it is enough to examine the cases $\mathscr{S}=\{\mathcal{Q}_{23},\mathcal{Q}_{12},\mathcal{Q}_{13}\}$ and $\mathscr{S}=\{\mathcal{Q}_{23},\mathcal{Q}_{12},\mathcal{Q}_{21}\}$ where $\mathcal{Q}_{ij}$ is type-$ij$. Let us first examine the case $\mathscr{S}=\{\mathcal{Q}_{23},\mathcal{Q}_{12},\mathcal{Q}_{13}\}$. Note that $\mathcal{Q}_{13}$ is the special element of $\mathscr{S}$. For any $x_{23}y_{23}\in E(\mathcal{Q}_{23})$, $x_{13}y_{13}\in E(\mathcal{Q}_{13})$, $x_{12}y_{12}\in E(\mathcal{Q}_{12})$, we have $x_{23}\in V_{2*}$, $x_{13},x_{12}\in V_{1*}$, $y_{12}\in V_{*2}$, $y_{23},y_{13}\in V_{*3}$, which shows that $\Delta^{0}(H)\leq 2$. Thus, we can conclude that two different maximal anti-directed paths in $H$ are edge-disjoint, which implies every edge of $H$ lies in a unique maximal anti-directed path. Let $Q$ be a maximal anti-directed path in~$H$ of length at least two. Let $e_i,e_{i+1}$ be two consecutive edges in~$Q$.
It is easy to check that \begin{align*} (e_i,e_{i+1}) \in &\left( E (\mathcal{Q}_{13}) \times E (\mathcal{Q}_{12}) \right) \cup \left( E (\mathcal{Q}_{12}) \times E(\mathcal{Q}_{13}) \right) \\ &\cup \left( E(\mathcal{Q}_{13}) \times E (\mathcal{Q}_{23}) \right) \cup \left( E(\mathcal{Q}_{23}) \times E (\mathcal{Q}_{13}) \right). \end{align*} \noindent Therefore, if $Q$ has two edges, property (i) follows. If $Q$ has three consecutive edges $e_i,e_{i+1},e_{i+2}$, then \begin{align*} (e_i,e_{i+1},e_{i+2}) \in E(\mathcal{Q}_{12}) \times E(\mathcal{Q}_{13}) \times E(\mathcal{Q}_{23}) \text{ or } (e_i,e_{i+1},e_{i+2}) \in E(\mathcal{Q}_{23} ) \times E(\mathcal{Q}_{13}) \times E(\mathcal{Q}_{12}). \end{align*} This shows property (ii), and in particular that the middle of the three edges is in the special element~$\mathcal{Q}_{13}$. Finally, if $Q$ has at least four edges, take any four consecutive edges. These four edges contain two anti-directed paths of length three and the middle edge of each of these paths lies in $\mathcal{Q}_{13}$ from the argument above. Therefore we obtain two consecutive edges in $Q$ both in $\mathcal{Q}_{13}$, which is impossible since $\mathcal{Q}_{13}$ is a path system. \\ \noindent If $\mathscr{S}=\{\mathcal{Q}_{23},\mathcal{Q}_{12},\mathcal{Q}_{21}\}$, it is easy to check that we have $d_{H}^{+}(v)\leq 2$ and $d_{H}^{-}(v)\leq 1$ for all $v\in V(H)$. Note that $\mathcal{Q}_{21}$ is the special element of $\mathscr{S}$. As before, we see that $E(H)$ can be partitioned into maximal anti-directed paths since $\Delta^{0}(H)\leq 2$. Also, since each anti-directed path of length at least three has at least one vertex of indegree two, we have that all the maximal anti-directed paths in $H$ have at most two edges. 
Moreover, if $Q$ is a maximal anti-directed path of length two with edges $e$ and $f$, then we have either $(e,f)\in E(\mathcal{Q}_{23})\times E(\mathcal{Q}_{21})$ or $(e,f)\in E(\mathcal{Q}_{21})\times E(\mathcal{Q}_{23})$, which completes the proof.\end{proof} \noindent We need one more technical proposition before we prove the lemma that shows how to select the bad edges that will be part of our final Hamilton cycle. \begin{proposition}\label{prop:selection-indicators} Let $t,x_1,x_2,x_3,x_4,x_5\in\{0,1\}$ be such that \begin{align}\label{eqn:modulo-2-congruence} x_1+x_2+x_3\equiv t\equiv x_1+x_4+x_5\pmod{2}. \end{align} Then, one can find $m_i\in\{-1,1\}$ for $i \in[5]$ with $$m_1x_1+m_2x_2+m_3x_3=t=m_1x_1+m_4x_4+m_5x_5. $$ \end{proposition} \begin{proof} Without loss of generality, we can assume $x_2\leq x_3$, $x_4\leq x_5$, and $x_2+x_3\leq x_4+x_5$. By \eqref{eqn:modulo-2-congruence}, we must have $(x_2+x_3,x_4+x_5)\in\{(0,0),(1,1),(2,2),(0,2)\}$. \begin{enumerate} \item If $x_2+x_3=0=x_4+x_5$, then we have $t=x_1$ and $x_2=x_3=x_4=x_5=0$. Hence, we only need $m_1x_1=x_1$, which can be done by choosing $m_1=1$. \item If $x_2+x_3=1=x_4+x_5$, then we have $t=1-x_1$, $x_2=x_4=0$ and $x_3=x_5=1$. Hence, we need $m_1x_1+m_3=1-x_1=m_1x_1+m_5$, which can be done by choosing $m_3=m_5=1$, and $m_1=-1$. \item If $x_2+x_3=2=x_4+x_5$, then we have $t=x_1$ and $x_2=x_3=x_4=x_5=1$. Hence, we need $m_1x_1+m_2+m_3=x_1=m_1x_1+m_4+m_5$, which can be done by choosing $m_1=1$, $m_2=m_4=1$, and $m_3=m_5=-1$. \item If $x_2+x_3=0$ and $x_4+x_5=2$, then we have $t=x_1$, $x_2=x_3=0$ and $x_4=x_5=1$. Hence, we need $m_1x_1=x_1=m_1x_1+m_4+m_5$, which can be done by choosing $m_4=1$, $m_5=-1$, and $m_1=1$.\end{enumerate}\end{proof} \noindent We are now ready to prove the main result of this section. \begin{lemma}\label{lem:9-partition-path-selection} Let $1/n\ll\gamma\ll\tau\ll\alpha\ll1$ be some constants, and let $G$ be a $d$-regular oriented graph on $n$ vertices with $d\geq \alpha n$.
Let $\mathcal{P}_3=\{V_{ij}:i,j \in [3]\}$ be an extremal $(9,\tau,\gamma)$-partition of $G$. Then, there exists a path system $\mathcal{Q}$ in $\mathcal{B}_3(\mathcal{P}_3,G)$ such that, writing $a_{ij}= |E(\mathcal{Q})\cap E(V_{i*},V_{*j})|$ for all $i\neq j$, we have \begin{itemize} \item[(i)] $e\left(\mathcal{Q}\right)\leq 2\gamma n/\alpha$, and \item[(ii)] $a_{i*}-a_{*i}=|V_{i*}|-|V_{*i}|$ for all $i \in [3]$, where $a_{i*}=\sum_{j\neq i}a_{ij}$ and $a_{*i}=\sum_{j\neq i}a_{ji}$. \end{itemize} \end{lemma} \begin{proof} Let us write $n_i=|V_{i*}|-|V_{*i}|$ for $i \in [3]$. Since $n_1+n_2+n_3=0$, without loss of generality, we can assume $n_1,n_2\geq 0$. Recall $G_{ij} = G[V_{i*},V_{*j}]$, and write $m_{ij}=e(G_{ij})-e(G_{ji})$ for $i,j \in [3]$, $i\neq j$. Note that $m_{ij} = - m_{ji}$. By Proposition~\ref{prop:partition-regular-graphs}, we have \begin{align} d n_1&=m_{12}+m_{13}=m_{12}-m_{31},\label{eqn:theta-1-equality}\\ d n_2&=m_{21}+m_{23}=m_{23}-m_{12}.\label{eqn:theta-2-equality} \end{align} Since $n_1,n_2\geq 0$, \eqref{eqn:theta-1-equality} and \eqref{eqn:theta-2-equality} imply that $m_{23}\geq m_{12}\geq m_{31}$; moreover, swapping the roles of the indices $1$ and $2$ if necessary (which preserves $n_1,n_2\geq 0$ and the above ordering), we may assume $m_{12} \ge 0$. So, it suffices to consider the cases $$m_{23}\geq m_{12}\geq m_{31}\geq 0\,\text{ and }\,m_{23}\geq m_{12}\geq 0\geq m_{31}.$$ Let $m_{12}=dx$ for some $x\geq 0$, and write $x=s+t$ where $s=\lfloor x\rfloor$ and $0\leq t<1$.\\ \noindent \underline{\textit{Case 1:}} Suppose we have $m_{23}\geq m_{12}\geq m_{31}\geq 0$. Then, we can write $m_{31}=d(x-n_1)$ and $m_{23}=d(x+n_2)$ by using \eqref{eqn:theta-1-equality} and~\eqref{eqn:theta-2-equality}. Let $H = G_{23} \cup G_{31} \cup G_{12} $. Notice that $e(H)\leq |\mathcal{B}_3(\mathcal{P}_3,G)| \leq\gamma n^2$. Also, by Proposition~\ref{prop:degree-property-extremal}, we know $\Delta^{0}(G_{23}),\Delta^{0}(G_{31}),\Delta^{0}(G_{12})\leq d/2$.
By Lemma~\ref{lem:CYCLE_FREE}, we can find path systems $\mathcal{Q}_{23} \subseteq G_{23}$, $\mathcal{Q}_{31} \subseteq G_{31}$, $\mathcal{Q}_{12} \subseteq G_{12}$ such that $\mathcal{Q}_{23}\cup \mathcal{Q}_{31}\cup \mathcal{Q}_{12}$ has no cycle and \begin{align*} e(\mathcal{Q}_{23}) &= \left\lfloor \dfrac{e(G_{23})}{d/2} \right\rfloor \geq \left\lfloor \dfrac{m_{23}}{d/2} \right\rfloor = \left\lfloor 2x+2n_2 \right\rfloor \geq s+n_2,\\ e(\mathcal{Q}_{31}) &= \left\lfloor \dfrac{e(G_{31})}{d/2} \right\rfloor \geq \left\lfloor \dfrac{m_{31}}{d/2} \right\rfloor = \left\lfloor 2x-2n_1 \right\rfloor \geq s-n_1,\\ e(\mathcal{Q}_{12}) &= \left\lfloor \dfrac{e(G_{12})}{d/2} \right\rfloor \geq \left\lfloor \dfrac{m_{12}}{d/2} \right\rfloor = \left\lfloor 2x \right\rfloor \geq s. \end{align*} \noindent Moreover, by Lemma~\ref{lem:ADE-BCF-Structure}, we have that $\mathcal{Q}_{23}\cup \mathcal{Q}_{31}\cup \mathcal{Q}_{12}$ is a disjoint union of paths and cycles since $\{\mathcal{Q}_{23},\mathcal{Q}_{31},\mathcal{Q}_{12}\}$ is a symmetric $3$-set. However, we know it is cycle-free, which implies it is a path system. We now define $\mathcal{Q}$ by choosing $s+n_2$ edges from~$\mathcal{Q}_{23}$, $s-n_1$ edges from~$\mathcal{Q}_{31}$, and $s$ edges from~$\mathcal{Q}_{12}$. Note that $$ e\left( \mathcal{Q}\right)\leq \dfrac{e(G_{12})+e(G_{23})+e(G_{31})}{d/2}\leq \dfrac{|\mathcal{B}_3(\mathcal{P}_3,G)|}{d/2}\leq \dfrac{2\gamma n}{\alpha }.$$ Since $a_{23}=s+n_2$, $a_{31}=s-n_1$, $a_{12}=s$ and $a_{21}=a_{32}=a_{13}=0$, the result follows. \\ \noindent \underline{\textit{Case 2:}} Suppose we have $m_{23} \ge m_{12}\geq 0\geq m_{31}$. Recall $m_{12}=dx$. Then, we can write $m_{13}=d(n_1-x)$ and $m_{23}=d(n_2+x)$ by using~\eqref{eqn:theta-1-equality} and~\eqref{eqn:theta-2-equality}.
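\noindent Spelling out the previous step (this is just a rearrangement of the two displayed equalities, included for transparency): by \eqref{eqn:theta-1-equality} and \eqref{eqn:theta-2-equality},
\begin{align*}
m_{13}=dn_1-m_{12}=d(n_1-x) \quad\text{and}\quad m_{23}=dn_2+m_{12}=d(n_2+x).
\end{align*}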
As with the previous case, we can find path systems $\mathcal{Q}_{23}\subseteq G_{23}$, $\mathcal{Q}_{13}\subseteq G_{13}$, $\mathcal{Q}_{12}\subseteq G_{12}$ such that $\mathcal{Q}_{23}\cup \mathcal{Q}_{13}\cup \mathcal{Q}_{12}$ is cycle-free and \begin{align} \label{eqn:size-path-systems} e(\mathcal{Q}_{13})= 2n_1 + \lfloor-2x\rfloor,\, e(\mathcal{Q}_{23})&= \lfloor2x\rfloor +2n_2 ,\, e(\mathcal{Q}_{12})= \lfloor2x\rfloor,\\ e(\mathcal{Q}_{13} \cup \mathcal{Q}_{23} \cup \mathcal{Q}_{12}) &\le 2 \gamma n/ \alpha. \label{eqn:size-path-systems2} \end{align} Let $H$ be the graph induced by $\mathcal{Q}_{23}\cup \mathcal{Q}_{13}\cup \mathcal{Q}_{12}$. Note that $\mathcal{Q}_{13}$ is the special element of the anti-symmetric $3$-set $\{\mathcal{Q}_{23},\mathcal{Q}_{13},\mathcal{Q}_{12}\}$. For simplicity, we write $A = 13$, $B = 23$ and $C = 12$ (so e.g.\ $m_A = m_{13}$ and $G_A = G_{13}$). By Lemma~\ref{lem:ADE-BCF-Structure}, we can decompose $E(H)$ into six sets~$\mathscr{S}_{T}$ with $T\in\{ ABC,AB,AC,A,B,C\}$ such that $\mathscr{S}_T$ is the set of maximal anti-directed paths of length~$|T|$ containing an edge in each $\mathcal{Q}_S$ for $S \in T$, e.g.\ $\mathscr{S}_{ABC}$ is the set of anti-directed paths of length three with one edge in each of $\mathcal{Q}_{13},\mathcal{Q}_{23},\mathcal{Q}_{12}$.\\ \noindent From the definition of the decomposition, clearly we have \begin{align*} e(\mathcal{Q}_{13}) & = e(\mathcal{Q}_A) = |\mathscr{S}_{ABC}|+|\mathscr{S}_{AB}|+|\mathscr{S}_{AC}|+|\mathscr{S}_{A}|,\\ e(\mathcal{Q}_{23}) & = e(\mathcal{Q}_{B}) = |\mathscr{S}_{ABC}|+|\mathscr{S}_{AB}|+|\mathscr{S}_{B}|,\\ e(\mathcal{Q}_{12}) & = e(\mathcal{Q}_{C}) = |\mathscr{S}_{ABC}|+|\mathscr{S}_{AC}|+|\mathscr{S}_{C}|. 
\end{align*} By \eqref{eqn:size-path-systems}, we obtain \begin{align} 2(|\mathscr{S}_{ABC}|+|\mathscr{S}_{AC}|)+|\mathscr{S}_{AB}|+|\mathscr{S}_A|+|\mathscr{S}_C|& = e(\mathcal{Q}_{A})+e(\mathcal{Q}_{C}) =2n_1+\lfloor-2t\rfloor+\lfloor2t\rfloor\label{eqn:theta-1-related}\\ |\mathscr{S}_{AB}|+|\mathscr{S}_B|-|\mathscr{S}_{AC}|-|\mathscr{S}_C| & = e(\mathcal{Q}_{B})-e(\mathcal{Q}_{C}) = 2n_2\label{eqn:theta-2-related}. \end{align} Hence, letting $|\mathscr{S}_{T}|\equiv r_T\pmod{2}$ for $T\in\{ABC,AB,AC,A,B,C\}$ where $r_T\in\{0,1\}$, we have the following equivalence by using \eqref{eqn:theta-1-related} and \eqref{eqn:theta-2-related}: $$r_{AB}+r_A+r_C\equiv -\lfloor2t\rfloor-\lfloor-2t\rfloor\equiv r_{AC}+r_A+r_B\pmod{2}.$$ Since $0\leq t<1$, we have $-\lfloor2t\rfloor-\lfloor-2t\rfloor\in\{0,1\}$. Then, by Proposition~\ref{prop:selection-indicators}, we can find $i_{AB},i_{AC},i_A,i_B,i_C\in\{-1,1\}$ such that \begin{align} i_{AB}r_{AB}+i_Ar_A+i_Cr_C= -\lfloor2t\rfloor-\lfloor-2t\rfloor= i_{AC}r_{AC}+i_Ar_A+i_Br_B. \label{eqn:indicator-selection} \end{align} \noindent We now construct $\mathcal{Q} \subseteq H$ as follows. Initializing $\mathcal{Q}=\emptyset$, we will add some edges into $\mathcal{Q}$ as follows: \begin{enumerate} \item Choose $ \left(|\mathscr{S}_{ABC}|+r_{ABC}\right)/2 $ many paths from $\mathscr{S}_{ABC}$, $\left(|\mathscr{S}_{AB}|+i_{AB}r_{AB}\right)/2$ many paths from $\mathscr{S}_{AB}$, and $\left(|\mathscr{S}_{AC}|+i_{AC}r_{AC}\right)/2$ many paths from $\mathscr{S}_{AC}$. For each such path, we add the unique edge from $\mathcal{Q}_{A} \subseteq G_{13}$ to~$\mathcal{Q}$. \item Take the remaining $\left(|\mathscr{S}_{ABC}|-r_{ABC}\right)/2$ many paths from $\mathscr{S}_{ABC}$. For each such path, we add the unique edge from $\mathcal{Q}_{B} \subseteq G_{23}$ and the unique edge from $\mathcal{Q}_{C} \subseteq G_{12}$ to~$\mathcal{Q}$. 
\item Take the remaining $\left(|\mathscr{S}_{AB}|-i_{AB}r_{AB}\right)/2$ many paths from $\mathscr{S}_{AB}$. For each such path, we add the unique edge from $\mathcal{Q}_{B} \subseteq G_{23}$ to~$\mathcal{Q}$. \item Take the remaining $\left(|\mathscr{S}_{AC}|-i_{AC}r_{AC}\right)/2$ many paths from $\mathscr{S}_{AC}$. For each such path, we add the unique edge from $\mathcal{Q}_{C} \subseteq G_{12}$ to~$\mathcal{Q}$. \item For each $T\in\{A,B,C\}$, take $\left(|\mathscr{S}_{T}|+i_{T}r_{T}\right)/2$ many paths from $\mathscr{S}_T$. Add them to~$\mathcal{Q}$. \end{enumerate} If $\Delta^0(\mathcal{Q}) \ge 2$, then there exists an anti-directed path~$Q'$ of length~2 in~$\mathcal{Q}$. This path~$Q'$ must be contained in some maximal anti-directed path~$Q^*$ in $\mathscr{S}_{ABC} \cup \mathscr{S}_{AB} \cup \mathscr{S}_{AC}$. Only in Step~2 do we add more than one edge from a maximal anti-directed path to~$\mathcal{Q}$. However, the two edges added in that case are not incident by~Lemma~\ref{lem:ADE-BCF-Structure}(ii) as $\mathcal{Q}_{13} = \mathcal{Q}_A$ is the special element. Therefore no such $Q'$ exists, and so $\Delta^0(\mathcal{Q}) \leq 1$. Recall that $\mathcal{Q} \subseteq H$ and $H$ is cycle-free, and so $\mathcal{Q}$ is a path system. By~\eqref{eqn:size-path-systems2} \begin{align*} e\left(\mathcal{Q}\right) \le e(H) \le 2 \gamma n /\alpha. \end{align*} Note that \begin{align*} 2(a_{1*} - a_{*1}) & = 2\left(e\left( \mathcal{Q} \cap G_{12}\right) + e\left( \mathcal{Q} \cap G_{13}\right)\right) = 2\left(e\left( \mathcal{Q} \cap \mathcal{Q}_{A}\right) + e\left( \mathcal{Q} \cap \mathcal{Q}_{C}\right)\right) \\ & = 2(|\mathscr{S}_{ABC}|+|\mathscr{S}_{AC}|)+|\mathscr{S}_{AB}|+|\mathscr{S}_A|+|\mathscr{S}_C| +{i_{AB}r_{AB}+i_Ar_A+i_Cr_C} = 2n_1, \end{align*} where the last equality is due to~\eqref{eqn:theta-1-related} and~\eqref{eqn:indicator-selection}.
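\noindent In more detail (unwinding the two cited equations), \eqref{eqn:theta-1-related} gives
\begin{align*}
2(|\mathscr{S}_{ABC}|+|\mathscr{S}_{AC}|)+|\mathscr{S}_{AB}|+|\mathscr{S}_A|+|\mathscr{S}_C|=2n_1+\lfloor-2t\rfloor+\lfloor2t\rfloor,
\end{align*}
while \eqref{eqn:indicator-selection} gives $i_{AB}r_{AB}+i_Ar_A+i_Cr_C=-\lfloor2t\rfloor-\lfloor-2t\rfloor$, and the two right-hand sides sum to $2n_1$.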
Similarly, \begin{align*} 2(a_{2*} - a_{*2}) & = 2\left(e( \mathcal{Q} \cap G_{23}) - e(\mathcal{Q} \cap G_{12})\right) = 2\left(e( \mathcal{Q} \cap \mathcal{Q}_{B}) - e( \mathcal{Q} \cap \mathcal{Q}_{C})\right) \\ &= |\mathscr{S}_{AB}|+|\mathscr{S}_B|-|\mathscr{S}_{AC}|-|\mathscr{S}_C|+({i_{AC}r_{AC}+i_Br_B-i_{AB}r_{AB}-i_Cr_C}) = 2 n_2, \end{align*} where the last equality is due to~\eqref{eqn:theta-2-related} and~\eqref{eqn:indicator-selection}. So we have $a_{1*} - a_{*1} = n_1$ and $a_{2*} - a_{*2} = n_2$. Since $n_1 + n_2 +n_3 = 0$, we deduce that $a_{3*} - a_{*3} = n_3$ as required. \end{proof} \noindent The previous lemma shows how to obtain the (path system of) bad edges that will be part of our final Hamilton cycle. It will be convenient to suitably contract this path system because the resulting contracted graph will have a ``balanced'' partition, and finding a Hamilton cycle in the contracted graph will give us a Hamilton cycle in the original graph by ``uncontracting'' the path system. We now define the right notion of contraction and establish some of its properties. \begin{definition}\label{def:path-contraction} Let $G$ be a digraph, $k\in \mathbb{N}$, and $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $k^2$-partition of $V(G)$. Let $\mathcal{Q}$ be a path system in $G$. We define the \emph{contraction of $\mathcal{Q}$ in $G$ with respect to $\mathcal{P}_k$} as follows: for each $Q\in\mathcal{Q}$, create a new vertex $x$ associated to $Q$ such that $N^{-}(x)=N_{G}^{-}(u)$ and $N^{+}(x)=N_{G}^{+}(v)$, where $Q$ goes from $u$ to $v$. If $u\in V_{ij}$ and $v\in V_{i'j'}$, put $x$ into $V_{i'j}$. Then, we delete all the vertices of the paths in $\mathcal{Q}$. We call $\mathcal{P}_k'=\{V_{ij}':i,j \in [k]\}$ the \emph{resulting partition}, where $V_{ij}'$ is the updated version of $V_{ij}$ for all $i,j \in [k]$, and we denote the \emph{resulting} graph by $G'$. \end{definition} \noindent Since we often use the following fact, we state it as a proposition.
\begin{proposition} \label{prop:contract-hamilton} Let $G$ be a digraph, $\mathcal{Q}$ be a path system in $G$, and $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $k^2$-partition of $V(G)$. If $G'$ is the graph obtained from $G$ by contracting $\mathcal{Q}$ with respect to $\mathcal{P}_k$, and $G'$ is Hamiltonian, then so is $G$. \end{proposition} \noindent Next we see that the number of bad edges cannot increase from contracting a path system with respect to the given partition. \begin{proposition}\label{prop:after-contraction} Let $1/n\ll \theta,\gamma\ll \tau \le 1$ and $k\in \mathbb{N}$ be constants. Let $G$ be a digraph on $n$ vertices, $\mathcal{Q}$ be a path system in $G$, and $\mathcal{P}_k=\{V_{ij}:i,j\in [k]\}$ be a $k^2$-partition of $V(G)$. Let us contract $\mathcal{Q}$ with respect to the partition $\mathcal{P}_k$. Then, we have $|\mathcal{B}_k(\mathcal{P}_k',G')|\leq |\mathcal{B}_k(\mathcal{P}_k,G)|$. Moreover, if $\mathcal{P}_k$ is a $(k^2,\tau,\gamma)$-partition of $G$ and $e(\mathcal{Q}) \le \theta n$, then $\mathcal{P}_{k}'$ is a $(k^2,\tau/2,2\gamma)$-partition for $G'$. \end{proposition} \begin{proof} Consider a path $P\in\mathcal{Q}$ that goes from $u$ to $v$, and let $x$ be the created vertex corresponding to $P$ in the contraction process, with $x\in V_{cb}'$. In particular, we have $v \in V_{c*}$. If $xy\in \mathcal{B}_k(\mathcal{P}_{k}',G')$, then $y\notin V_{*c}'$ and $y\in N^{+}(v)$, which shows $vy\in \mathcal{B}_k(\mathcal{P}_k,G)$. Similarly, for any bad edge in $G'$ with respect to $\mathcal{P}_{k}'$, we can find a different bad edge in $G$ with respect to $\mathcal{P}_k$, which shows $|\mathcal{B}_k(\mathcal{P}_{k}',G')|\leq |\mathcal{B}_k(\mathcal{P}_k,G)|$.\\ \noindent Notice that we have $|G'|\geq(1-\theta)n$. Also, since we deleted at most $2\theta n$ vertices and $\theta\ll \tau$, we have $|V_{i*}'|\geq \tau n-2\theta n\geq \tau |G'|/2$ for all $i \in [k]$.
Moreover, we get $2(1-\theta)^2>1$ since $\theta\ll1$, which implies $|\mathcal{B}_k(\mathcal{P}_{k}',G')|\leq\gamma n^2\leq 2\gamma (1-\theta)^2n^2\leq 2\gamma|G'|^2$. \end{proof} \noindent We end this section with a lemma which states that if a path system $\mathcal{Q}$ in $\mathcal{B}_k(\mathcal{P}_k,G)$ satisfies condition (ii) of Lemma~\ref{lem:9-partition-path-selection}, then the contraction of $\mathcal{Q}$ with respect to $\mathcal{P}_k$ balances the partition. \begin{lemma}\label{lem:BALANCING-PARTITION} Let $k \in \mathbb{N}$, and let $\mathcal{P}_{k}=\{V_{ij}:i,j\in [k]\}$ be a $k^2$-partition for a digraph $G$. Let $\mathcal{Q}$ be a path system in $\mathcal{B}_{k}(\mathcal{P}_{k},G)$ such that, for all $i \in [k]$, $$\sum_{j\neq i}a_{ij}-\sum_{j\neq i}a_{ji}=|V_{i*}|-|V_{*i}|,$$ where $a_{ij}$ denotes the number of edges in $E(\mathcal{Q})\cap E(V_{i*},V_{*j})$ for all $i\neq j$. Then, the contraction of $\mathcal{Q}$ with respect to $\mathcal{P}_{k}$ results in a digraph $G'$ with a $k^2$-partition $\mathcal{P}'_{k}=\{V'_{ij}:i,j \in [k]\}$ such that $|V_{i*}'|=|V_{*i}'|$ for all $i \in [k]$. \end{lemma} \begin{proof} Let $\mathcal{Q}=\{Q_1,Q_2,\ldots,Q_t\}$. Let $a_{ij}^{p}$ denote the number of edges in $E(Q_p)\cap E(V_{i*},V_{*j})$ for all $1\leq p \leq t$ and $i\neq j$. It can be easily checked that the contraction of the path $Q_p$ with respect to $\mathcal{P}_{k}$ decreases $|V_{i*}|$ by $\sum_{j\neq i}a_{ij}^{p}$ and decreases $|V_{*i}|$ by $\sum_{j\neq i}a_{ji}^{p}$. Therefore, the contraction of the path $Q_p$ decreases $|V_{i*}|-|V_{*i}|$ by $\sum_{j\neq i}a_{ij}^{p}-\sum_{j\neq i}a_{ji}^{p}$. Since all the paths in $\mathcal{Q}$ can be contracted independently, we have \begin{align*} |V_{i*}'|-|V_{*i}'|=\left(|V_{i*}|-|V_{*i}|\right)-\sum_{p \in [t]}\left( \sum_{j\neq i}a_{ij}^{p}-\sum_{j\neq i}a_{ji}^{p} \right)=\left(|V_{i*}|-|V_{*i}|\right)-\left(\sum_{j\neq i}a_{ij}-\sum_{j\neq i}a_{ji}\right).
\end{align*} Since we have $\sum_{j\neq i}a_{ij}-\sum_{j\neq i}a_{ji}=|V_{i*}|-|V_{*i}|$, the result follows. \end{proof} \section{Hamilton cycles from partitions}\label{ch:partition-to-hamilton} The main goal of this section is to prove that regular directed or oriented graphs of suitably high degree that admit a $(k^2, \tau, \gamma)$-partition for suitable $k, \tau, \gamma$ have a Hamilton cycle. We begin by formally defining certain contracted graphs associated with $4$-partitions (i.e.\ the graphs $J_i$ discussed in the sketch proof). These will be used in this and the next section. \\ \noindent Let $H$ be an (undirected) bipartite graph with bipartition $(A,B)$ and $|A|=|B|=n$. Given a set $K$ of size $n$ and bijections $\phi_A:K\to A$ and $\phi_B:K\to B$, the \emph{identification of $H$ with respect to $(K,\phi_A,\phi_B)$} is defined to be the digraph $G$, where $V(G)=K$ and for each $a,b\in K$, we have $ab\in E(G)$ if and only if $\phi_A(a)\phi_B(b)\in E(H)$.\footnote{If $\phi_A(a)\phi_B(a) \in E(H)$ for some $a \in K$, then we will have a loop $aa \in E(G)$. The small number of loops in $G$ plays no role in our arguments, but we keep them for convenience so that $H$ and $G$ have the same number of edges.} \\ \noindent Let $G$ be a digraph and $\mathcal{P}=\{V_{ij}:i,j \in [2]\}$ be a $4$-partition of $V(G)$. For each $i \in [2]$, we define $\mathcal{B}^{i}(\mathcal{P},G)$ to be the (undirected) bipartite graph with bipartition $(V_{i*},V_{*i})$, where, for each $a\in V_{i*}$ and $b\in V_{*i}$, we have $ab\in E(\mathcal{B}^{i}(\mathcal{P},G))$ if and only if $ab\in E(G)$. (Although $V_{i*}$ and $V_{*i}$ are not disjoint as subsets of $V(G)$, namely $V_{i*} \cap V_{*i} = V_{ii}$, we duplicate any vertices in~$V_{ii}$, so $\mathcal{B}^{i}(\mathcal{P},G)$ has $|V_{i*}| + |V_{*i}|$ vertices.) \\ \noindent Let $G$ be a digraph and $\mathcal{P}=\{V_{ij}:i,j \in [2]\}$ be a $4$-partition of $V(G)$ such that $|V_{12}|=|V_{21}|=t>0$.
For $i \in [2]$, we call $\phi^i=(\phi_{i*},\phi_{*i})$ a \emph{proper $i$-pair with respect to $\mathcal{P}$} if $\phi_{i*}:[t]\cup V_{ii}\to V_{i*}$ and $\phi_{*i}:[t]\cup V_{ii}\to V_{*i}$ are bijections satisfying $\phi_{i*}(x)=\phi_{*i}(x)=x$ for all $x\in V_{ii}$. In this case we define $\mathcal{J}^{i}(\mathcal{P},G,\phi^i)$ to be the identification of $\mathcal{B}^{i}(\mathcal{P},G)$ with respect to $([t]\cup V_{ii},\phi_{i*},\phi_{*i})$. One can think of $\mathcal{J}^{i}(\mathcal{P},G,\phi^i)$ as the digraph obtained from $G[V_{i*} \cup V_{*i}]$ by pairing vertices in $V_{i*} \setminus V_{ii}$ with vertices in $V_{*i} \setminus V_{ii}$ and identifying them, where the pairing is determined by $\phi_{i*}$ and $\phi_{*i}$; if we pair $x \in V_{i*} \setminus V_{ii}$ with $y \in V_{*i} \setminus V_{ii}$, the identified vertex has the same outneighbours as $x$ and the same inneighbours as $y$. Note that there is a one-to-one correspondence between the edges in $\mathcal{J}^{i}(\mathcal{P},G,\phi^i)$ and those in $G[V_{i*}, V_{*i}]$. \\ \noindent The first proposition shows how Hamiltonicity of $\mathcal{J}^i$ translates into Hamiltonicity for $G$. \begin{proposition}\label{prop:identification-Hamilton-implies-Hamilton} Let $G$ be a digraph on $n$ vertices, and let $\mathcal{P}=\{V_{ij}:i,j \in [2]\}$ be a $4$-partition of $V(G)$ with $|V_{12}|=|V_{21}|>0$. Suppose that for every $i \in [2]$ and every proper $i$-pair $\phi^i$ with respect to $\mathcal{P}$, we have that $\mathcal{J}^{i}(\mathcal{P},G,\phi^i)$ is Hamiltonian. Then, $G$ is Hamiltonian. \end{proposition} \begin{proof} Let $|V_{12}|=|V_{21}|=t$ and $\phi^1$ be a proper 1-pair with respect to $\mathcal{P}$. Consider a Hamilton cycle $C$ in $\mathcal{J}^{1}(\mathcal{P},G,\phi^1)$. Recall that the vertex set of $\mathcal{J}^{1}(\mathcal{P},G,\phi^1)$ is $[t] \cup V_{11}$.
Let $p_1,\ldots,p_t$ be the order in which the vertices in $[t]$ are visited by $C$ so that $C$ can be partitioned into paths $P_1, \ldots, P_t$ where $P_r$ is a path from $p_r$ to $p_{r+1}$ (with the convention that $p_{t+1}=p_1$). Each $P_r$ corresponds to a path $P_r^1$ in $G[V_{11} \cup V_{12} \cup V_{21}]$ from $\phi_{1*}(p_r) \in V_{12}$ to $\phi_{*1}(p_{r+1}) \in V_{21}$, and moreover the paths $P_1^1, \ldots, P_t^1$ are vertex-disjoint and span $V_{11} \cup V_{12} \cup V_{21}$.\\ \noindent Let $\phi^2$ be the proper $2$-pair with respect to $\mathcal{P}$ satisfying $\phi_{2*}(p_r) = \phi_{*1}(p_{r+1}) \in V_{21}$ and $\phi_{*2}(p_r)=\phi_{1*}(p_r) \in V_{12}$ for all $r \in [t]$. Note that $\mathcal{J}^{2}(\mathcal{P},G,\phi^2)$ can be obtained from $G[V_{2*} \cup V_{*2}]$ by identifying the start and end points of $P_r^1$ for each $r$ and calling the resulting vertex $p_r$ (here we keep only the inedges of the start point $\phi_{1*}(p_r)$ and the outedges of the end point $\phi_{*1}(p_{r+1})$). Since $\mathcal{J}^{2}(\mathcal{P},G,\phi^2)$ has some Hamilton cycle $H$, we see that $G$ also has a Hamilton cycle, obtained by replacing each vertex $p_r$ in $H$ with the path $P^1_r$. \end{proof} \noindent Next, we will prove that digraphs admitting a $(4,1/3,\gamma)$-partition with additional degree conditions are Hamiltonian. \begin{lemma}\label{lem:digraph-hamilton} Let $1/n \ll \gamma,\rho\ll \varepsilon\ll1$ be constants. Let $G$ be a digraph on $n$ vertices with a $(4,1/3,\gamma)$-partition $\mathcal{P}=\{V_{ij}:i,j \in [2]\}$. Suppose that \begin{itemize} \item[(i)] $d^{+}_{\mathcal{G}_2(\mathcal{P},G)}(v),d^{-}_{\mathcal{G}_2(\mathcal{P},G)}(v)\geq (1/3+\varepsilon)n$ holds for all but at most $\rho n$ vertices $v \in V(G)$, \item[(ii)]$\delta^0 ( \mathcal{G}_2(\mathcal{P},G) ) \geq n/20$, \item[(iii)] $|V_{12}|=|V_{21}|>0$. \end{itemize} \noindent Then $G$ is Hamiltonian. \end{lemma} \begin{proof} Let $|V_{12}|=|V_{21}|=t$.
For $i \in [2]$, let $\phi^i$ be a proper $i$-pair with respect to $\mathcal{P}$. Let $J_i:=\mathcal{J}^{i}(\mathcal{P},G,\phi^i)$. Since $|V(J_i)|=|V_{i*}|\geq n/3$ (the inequality holds because $\mathcal{P}$ is a $(4, 1/3, \gamma)$-partition), we obtain $n/3\leq |J_i|\leq 2n/3$. On the other hand, for any $v\in V_{ii}$, we have $d_{J_i}^{+}(v)=d_{\mathcal{G}_2(\mathcal{P},G)}^{+}(v)$ and $d_{J_i}^{-}(v)=d_{\mathcal{G}_2(\mathcal{P},G)}^{-}(v)$. Similarly, for any $r\in[t]$, we have $d_{J_i}^{+}(r)=d_{\mathcal{G}_2(\mathcal{P},G)}^{+}(\phi_{i*}(r))$ and $d_{J_i}^{-}(r)=d_{\mathcal{G}_2(\mathcal{P},G)}^{-}(\phi_{*i}(r))$. Then, $d_{J_i}^{+}(x),d_{J_i}^{-}(x)\geq (1/2+\varepsilon)|J_i|$ holds for all but at most $3\rho|J_i|$ vertices $x$ in $J_i$ by (i). Moreover, (ii) implies $\delta^{0}(J_i)\geq |J_i|/20$. Therefore, $J_i$ is Hamiltonian for $i \in [2]$ by Corollary~\ref{cor:Higher-Than-Half-For-All-But-Few}. Hence, the result follows from Proposition~\ref{prop:identification-Hamilton-implies-Hamilton}. \end{proof} \noindent We end this section by showing that every regular oriented graph of sufficiently high degree that admits a $(9,\tau,\gamma)$-partition is Hamiltonian. \begin{lemma}\label{lem:9partitionHamiltonian} Let $1/n \ll\gamma\ll \tau\ll\varepsilon<1$ be constants. Then every $d$-regular oriented graph~$G$ on $n$ vertices with $d\geq (1/4+\varepsilon)n$ that admits a $(9,\tau,\gamma)$-partition is Hamiltonian. \end{lemma} \begin{proof} Let $\mathcal{P}=\{V_{ij}:i,j \in [3]\}$ be an extremal $(9,\tau,\gamma)$-partition of $G$. Firstly, we claim that at least two of the following are true: \begin{itemize} \item[(a)] $\tau n+|V_{11}|\leq |V_{22}|+|V_{33}|+|V_{23}|+|V_{32}|$, \item[(b)] $\tau n+|V_{22}|\leq |V_{33}|+|V_{11}|+|V_{31}|+|V_{13}|$, \item[(c)] $\tau n+|V_{33}|\leq |V_{11}|+|V_{22}|+|V_{12}|+|V_{21}|$. \end{itemize} \noindent If not, then without loss of generality, say (a) and (b) are false.
By adding up those inequalities, we obtain $2\tau n > 2|V_{33}|+|V_{31}\cup V_{32}|+|V_{13}\cup V_{23}|$. However, by Proposition~\ref{prop:quarterleadsnonempty}, we know $|V_{31}\cup V_{32}|,|V_{13}\cup V_{23}|\geq \tau n$, so we have a contradiction. Similarly, it can be easily shown that at least two of the following are true: \begin{align*} \text{(a$'$) } \tau n &\leq |V_{12}|+|V_{21}|,& \text{(b$'$) } \tau n & \leq |V_{13}|+|V_{31}|,& \text{(c$'$) } \tau n &\leq |V_{23}|+|V_{32}|. \end{align*} Thus, without loss of generality, we can assume that (c) and (a$'$) hold, that is, \begin{align} \label{eq:WLOG} \tau n +|V_{33}|\leq |V_{11}|+|V_{22}|+|V_{12}|+|V_{21}|\text{ and } \tau n \leq |V_{12}|+|V_{21}|. \end{align} \noindent By Lemma~\ref{lem:9-partition-path-selection}, there exists a path system $\mathcal{Q}$ in $\mathcal{B}_3(\mathcal{P},G)$ containing at most $8\gamma n$ edges such that $\sum_{j\neq i}a_{ij} - \sum_{j\neq i}a_{ji}=|V_{i*}|-|V_{*i}|$ for all $i \in [3]$, where $a_{ij} = |E(V_{i*}, V_{*j}) \cap \mathcal{Q}|$. We contract $\mathcal{Q}$ with respect to $\mathcal{P}$ and write $G'$ for the resulting graph and $\mathcal{P}'=\{V_{ij}':i,j \in [3]\}$ for the resulting partition. By Proposition~\ref{prop:dense-regular-size}, $\mathcal{P}$ is actually a $(9, 1/4+\varepsilon/2 ,\gamma)$-partition of~$G$, so Proposition~\ref{prop:after-contraction} implies that $\mathcal{P}'$ is a $(9,1/8,2\gamma)$-partition for~$G'$. By Lemma~\ref{lem:BALANCING-PARTITION}, we have \begin{align} \label{eqn:balpart} |V_{i*}'|=|V_{*i}'| \ge |V_{*i}| - |V(\mathcal{Q})| \ge n/4 \text{ for all }i \in [3]. \end{align} Moreover, by using Proposition~\ref{prop:quarterleadsnonempty}, we have \begin{align} \label{eq:newpartition1} \sum_{j\neq i}|V_{ij}'|=\sum_{j\neq i}|V_{ji}'|\geq \tau n \,\text{ for all }i \in [3]. 
\end{align} Also, using~\eqref{eq:WLOG} and the facts that $\gamma \ll \tau$ and $ e(\mathcal{Q}) \le 8\gamma n$, we have \begin{align} \label{eq:newpartition2} |V_{33}'|\leq |V_{11}'|+|V_{22}'|+|V_{12}'|+|V_{21}'|\,\text{ and }\, |V_{12}'|+|V_{21}'| \geq \tau n /2. \end{align} Since $| V ( \mathcal{Q} ) |\le 16\gamma n$, we have \begin{align} \label{eq:degreeG'} \delta^0(G')\geq d-16\gamma n \geq \left( 1/4 + {\varepsilon}/{2} \right)n. \end{align} Similarly, by Proposition~\ref{prop:degree-property-extremal}, \begin{align} \label{eq:degreee} \text{for any } v\in V(G')\text{, if } v \in V_{ab}' \text{ for some } a,b \in [3], \text{ then } d_{V_{*a}'}^{+}(v), d_{V_{b*}'}^{-}(v)\geq d/3-16\gamma n. \footnote{It is clear that all the vertices of $G'$ inherited from $G$ satisfy these degree conditions; for the new vertices in $G'$ (created from contracting paths), one can easily check in the definition of contraction that the vertices are placed in such a way that the degree conditions hold.} \end{align} In other words, we have \begin{align} d_{\mathcal{G}_3(\mathcal{P}',G')}^{+}(v),d_{\mathcal{G}_3(\mathcal{P}',G')}^{-}(v)\geq d/3-16\gamma n. \label{eq:G'-min-good-degree} \end{align} \noindent Let \begin{align*} W_{11}&=V_{33}', & W_{12}&=V_{32}'\cup V_{31}', & W_{21}&=V_{23}'\cup V_{13}', & W_{22}&=V_{11}'\cup V_{22}'\cup V_{12}'\cup V_{21}'. \end{align*} By Proposition~\ref{prop:partition-9-to-4}, we have $\mathcal{W}=\{W_{ij}:i,j \in [2]\}$ is a $(4,1/8,2\gamma)$-partition for $G'$. Furthermore, \eqref{eq:newpartition2} and \eqref{eq:newpartition1} imply that \begin{align*} |W_{11}|\leq |W_{22}| \text{ and }|W_{12}|=|W_{21}|\geq \tau n/2. \end{align*} \noindent By Proposition~\ref{prop:contract-hamilton}, if $G'$ is Hamiltonian then so is $G$. For $i \in [2]$, let $\phi^i$ be a proper $i$-pair with respect to $\mathcal{W}$. 
In order to prove the lemma, it is enough to show that $J_i:=\mathcal{J}^{i}(\mathcal{W},G',\phi^i)$ is Hamiltonian for $i \in [2]$ by Proposition~\ref{prop:identification-Hamilton-implies-Hamilton}. \\ \noindent First, for $J_1$, \eqref{eqn:balpart} and the fact that $|W_{11}|\leq |W_{22}|$ imply that \begin{align} \label{eqn:|V_{3*}|} n/4 \le |V_{3*}'|= |J_1| \leq |G'|/2\leq n/2. \end{align} Let $B^{+}(J_1)$ be the set of vertices in $J_1$ satisfying $d_{J_1}^{+}(x)<(1/2+\varepsilon/2)|J_1|$. Similarly, define $B^{-}(J_1)$. For any vertex $x\in V(J_1)$, we have $\phi_{1*}(x)\in V_{3*}'$ and $d_{J_1}^{+}(x)=d_{V_{*3}'}^{+}(\phi_{1*}(x))$. Together with~\eqref{eq:degreeG'} and \eqref{eqn:|V_{3*}|}, we deduce that \begin{align*} 2 \gamma n^2 &\ge |\mathcal{B}_2(\mathcal{W},G')| \ge e( \phi_{1*}(B^{+}(J_1)) , V(G') \setminus V_{*3}') \ge \sum_{x \in B^{+}(J_1)} \left( d_{G'}^{+}(\phi_{1*}(x)) - d_{V_{*3}'}^{+}(\phi_{1*}(x)) \right) \\ & \ge \sum_{x \in B^{+}(J_1)} \left( \delta^0(G') - d_{J_1}^{+}(x) \right) \ge \sum_{x \in B^+(J_1)} \left( (1/4 + \varepsilon / 2)n - (1/2 + \varepsilon/2)|J_1| \right) \\ &\ge |B^{+}(J_1)| \varepsilon n/4. \end{align*} So $|B^{+}(J_1)| \le 8 \gamma n/\varepsilon$ and, similarly, $|B^{-}(J_1)| \le 8 \gamma n/\varepsilon$. By~\eqref{eqn:|V_{3*}|}, \begin{align*} |B^{+}(J_1)|+|B^{-}(J_1)| & \leq 16\gamma n/\varepsilon \le 64 \gamma |J_1| / \varepsilon \le \sqrt{\gamma}|J_1|. \end{align*} Thus $d_{J_1}^{+}(x),d_{J_1}^{-}(x)\geq(1/2+\varepsilon/2)|J_1|$ holds for all but at most $\sqrt{\gamma}|J_1|$ vertices. Also, by \eqref{eq:degreee}, we have $\delta^0 (J_1) \geq d/3-16\gamma n \geq |J_1|/10$. Therefore, by Corollary~\ref{cor:Higher-Than-Half-For-All-But-Few}, $J_1$ is Hamiltonian.\\ \noindent For $J_2$, we first show that $J_2$ has a $(4,1/3,8\gamma)$-partition. 
By~\eqref{eqn:|V_{3*}|} \begin{align} \label{eqn:|J_2|} n/2 \le |J_2|=|G'|-|V_{3*}'| \le 3n/4, \end{align} so \eqref{eqn:balpart} implies that \begin{align*} |V'_{*i}| =|V_{i*}'| \ge n/4 \ge |J_2|/3 \text{ for } i \in [2]. \end{align*} Then, we partition $[t]$ into parts $\{T_{ij}:i,j \in [2]\}$ as follows: \begin{align*} T_{11}&=\{q \in [t]:\phi_{2*}(q)\in V_{13}', \;\phi_{*2}(q)\in V_{31}'\}, & T_{12}&=\{q \in [t]:\phi_{2*}(q)\in V_{13}', \; \phi_{*2}(q)\in V_{32}'\},\\ T_{21}&=\{q \in [t]:\phi_{2*}(q)\in V_{23}', \; \phi_{*2}(q)\in V_{31}'\}, & T_{22}&=\{q\in [t]:\phi_{2*}(q)\in V_{23}', \; \phi_{*2}(q)\in V_{32}'\}. \end{align*} Then, let us write $Z_{ij}=V_{ij}'\cup T_{ij}$ for $i,j \in [2]$, and $\mathcal{Z}=\{Z_{ij}:i,j \in [2]\}$. Notice that $\mathcal{Z}$ is a partition of $V(J_2)$. By using $|T_{11}|+|T_{12}|=|V_{13}'|$, we deduce that $|Z_{1*}|=|V_{1*}'|\geq |J_2|/3$. More generally, for $i \in \{1,2\}$ \begin{align*} |Z_{i*}| = |Z_{*i}| = |V'_{i*}| \geq |J_2|/3. \end{align*} Note that $ Z_{12}\cup Z_{21} \supseteq V_{12}'\cup V_{21}' \ne \emptyset$ by~\eqref{eq:newpartition2}. We deduce that $|Z_{12}|=|Z_{21}|>0$. On the other hand, one can verify that \begin{align} \label{eq:GJ-correspondence} xy\in E(Z_{i*},Z_{*j}) \:\:\text{ if and only if }\:\: \phi_{2*}(x)\phi_{*2}(y)\in E(V_{i*}',V_{*j}') \:\:\text{ for all } i,j \in [2]. \end{align} Then, we have \begin{align} \label{eqn:e(Z_i*,Z*j)} e(Z_{i*},Z_{*j})=e(V_{i*}',V_{*j}') \text{ for }i,j \in [2]. \end{align} Hence, we obtain $$|\mathcal{B}_2(\mathcal{Z},J_2)|=e(V_{1*}',V_{*2}')+e(V_{2*}',V_{*1}')\leq |\mathcal{B}_3(\mathcal{P}',G')| \leq 2\gamma |G'|^2\leq 8\gamma |J_2|^2.$$ As a result, $\mathcal{Z}$ is a $(4,1/3,8\gamma)$-partition for $J_2$ with $|Z_{12}|=|Z_{21}|>0$.\\ \noindent Let $B^{+}(J_2)$ be the set of vertices in $J_2$ satisfying $d_{\mathcal{G}_2(\mathcal{Z},J_2)}^{+}(x)<(1/3+\varepsilon/3)|J_2|$. Similarly, define $B^{-}(J_2)$.
Note that $d_{\mathcal{G}_2(\mathcal{Z},J_2)}^{+}(x)= d_{\mathcal{G}_3(\mathcal{P}',G')}^{+}(\phi_{2*}(x))$ for any vertex $x\in V(J_2)$ by \eqref{eq:GJ-correspondence}. Moreover, by~\eqref{eq:degreeG'} and \eqref{eqn:|J_2|}, we have $\delta^{0}(G')\geq (1/3+\varepsilon/2)|J_2|$. Hence, by \eqref{eqn:|J_2|}, for any vertex $x\in B^{+}(J_2)$, we obtain \begin{align*} d_{\mathcal{B}_3(\mathcal{P}',G')}^{+}(\phi_{2*}(x))> (1/3+\varepsilon/2)|J_2|- (1/3+\varepsilon/3)|J_2|\geq \varepsilon|J_2|/6\geq \varepsilon n/12. \end{align*} \noindent Since $|\mathcal{B}_3(\mathcal{P}',G')|\leq |\mathcal{B}_3(\mathcal{P},G)| \leq \gamma n^2$, we find $\gamma n^2\geq |B^{+}(J_2)| \varepsilon n/12$. So $|B^{+}(J_2)|\leq 12\gamma n /\varepsilon$ and, similarly, $|B^{-}(J_2)|\leq 12\gamma n/\varepsilon $. As a result, by \eqref{eqn:|J_2|}, we have \begin{align*} |B^{+}(J_2)|+|B^{-}(J_2)|\leq 24\gamma n/\varepsilon\leq 48\gamma |J_2|/\varepsilon \leq \sqrt{\gamma}|J_2|. \end{align*} \noindent On the other hand, by \eqref{eq:G'-min-good-degree} and \eqref{eqn:|J_2|}, for any vertex $x\in V(J_2)$, we obtain \begin{align*} d_{\mathcal{G}_2(\mathcal{Z},J_2)}^{+}(x)=d_{\mathcal{G}_3(\mathcal{P}',G')}^{+}(\phi_{2*}(x))\geq d/3-16\gamma n\geq |J_2|/20. \end{align*} Similarly, we have $d_{\mathcal{G}_2(\mathcal{Z},J_2)}^{-}(x)\geq |J_2|/20$. As a result, the partition $\mathcal{Z}$ for the digraph $J_2$ satisfies all the conditions of Lemma~\ref{lem:digraph-hamilton}, so we are done.\end{proof} \section{Proofs of main results}\label{ch:mainproofs} In this section, we give the proofs of Theorem~\ref{thm:MAIN-1-3-CASE} and Theorem~\ref{thm:MAIN-1-4-CASE}. \begin{proof}[Proof of Theorem~\ref{thm:MAIN-1-3-CASE}] Let $\nu$ and $\tau$ be constants satisfying $1/n \ll \nu\ll\tau\ll \varepsilon$. If $G$ is a robust $(\nu, \tau)$-outexpander, then we are done by Theorem~\ref{thm:RobustExpanderImpliesHamilton}. So assume that $G$ is not a robust $(\nu,\tau)$-outexpander.
Then, $G$ admits a $(4,\tau,4\nu)$-partition by Lemma~\ref{lem:structure_not_expander}. Let $\mathcal{P}=\{V_{ij}:i,j \in [2]\}$ be an extremal $(4,\tau,4\nu)$-partition for $G$. Notice that $|V_{1*}|,|V_{2*}|\geq (1/3+\varepsilon/2)n$ by Proposition~\ref{prop:dense-regular-size}. Without loss of generality, assume $|V_{12}|\geq |V_{21}|$. We will choose a path system $\mathcal{Q}$ in $\mathcal{B}_2(\mathcal{P},G)$ satisfying \[ |E(\mathcal{Q})\cap E(V_{1*},V_{*2})|-|E(\mathcal{Q})\cap E(V_{2*},V_{*1})|=|V_{12}|-|V_{21}| \] as follows. \begin{itemize} \item[(i)] If $V_{12}=V_{21}=\emptyset$, then $|V_{11}|,|V_{22}|\geq (1/3+\varepsilon/2)n$. Since $G$ is strongly well-connected, we can find disjoint edges $ab\in E(V_{11},V_{22})$ and $cd\in E(V_{22},V_{11})$. Then, set $\mathcal{Q}=\{ab,cd\}$. \item[(ii)] If $|V_{12}|\geq |V_{21}|>0$, then $d(|V_{12}|-|V_{21}|)=e(V_{1*},V_{*2})-e(V_{2*},V_{*1})$ by Proposition~\ref{prop:partition-regular-graphs}. Hence, we have $e(V_{1*},V_{*2})\geq d(|V_{12}|-|V_{21}|)$. By Proposition~\ref{prop:degree-property-extremal}, $E(V_{1*},V_{*2})$ induces a subgraph $H$ in $G$ with $\Delta^{0}(H)\leq d/2$. Since $e(V_{1*},V_{*2})\leq 4\nu n^2$, by Lemma~\ref{lem:CYCLE_FREE}, we can find a path system $\mathcal{Q}'$ in $H$ with $e(\mathcal{Q}')\geq 2e(V_{1*},V_{*2})/d\geq 2(|V_{12}|-|V_{21}|)$. Then, we remove all but exactly $|V_{12}|-|V_{21}|$ edges in $\mathcal{Q}'$ to obtain $\mathcal{Q}$. \item[(iii)] If $|V_{12}|\geq 2$ and $|V_{21}|=0$, then, as in the previous case, $E(V_{1*},V_{*2})$ has a path system $\mathcal{Q}'$ containing at least $2|V_{12}|$ edges. We claim $\mathcal{Q}'$ has at least one path that starts in $V_{11}$ and ends in $V_{22}$. If not, then any path in $\mathcal{Q}'$, with $s$ edges say, is incident to at least $s$ vertices in $V_{12}$, but since $\mathcal{Q}'$ contains more than $|V_{12}|$ edges, we have a contradiction.
Next we claim that any path in $\mathcal{Q}'$ from $V_{11}$ to $V_{22}$ has at most $|V_{12}|$ edges. Indeed, if not, then $\mathcal{Q}'$ has a unique path which has $|V_{12}| + 1$ edges. But then $\mathcal{Q}'$ has $2|V_{12}| = |V_{12}| + 1$ edges, contradicting $|V_{12}| \geq 2$. Using the claims, we can remove all but exactly $|V_{12}|$ edges in $\mathcal{Q}'$ to obtain a path system $\mathcal{Q}$ with exactly $|V_{12}| = |V_{12}| - |V_{21}|$ edges and where at least one path starts in $V_{11}$ and ends in $V_{22}$. \item[(iv)] If $|V_{12}|=1$ and $|V_{21}|=0$, let $x$ be the unique vertex in $V_{12}$. By Proposition~\ref{prop:partition-regular-graphs}, we have $d=d_{V_{11}}^{-}(x)+d_{V_{22}}^{+}(x)+e(V_{11},V_{22})-e(V_{22},V_{11})$. Note that, by Proposition~\ref{prop:degree-property-extremal}, we know $d_{V_{11}}^{-}(x),d_{V_{22}}^{+}(x)\leq d/2$. If $d_{V_{11}}^{-}(x)=d_{V_{22}}^{+}(x)=d/2$, then we obtain another extremal $(4,\tau,4\nu)$-partition by moving $x$ into $V_{11}$, which results in case (i). If we have either $d_{V_{11}}^{-}(x)<d/2$ or $d_{V_{22}}^{+}(x)<d/2$, then we have $e(V_{11},V_{22})\geq 1$. We can take an arbitrary edge $ab\in E(V_{11},V_{22})$, and set $\mathcal{Q}=\{ab\}$. \end{itemize} \noindent Now we contract this path system $\mathcal{Q}$ in $G$ with respect to the partition $\mathcal{P}$ to obtain a graph $G'$ with resulting partition $\mathcal{P}'=\{V_{ij}':i,j \in [2]\}$. By Lemma~\ref{lem:BALANCING-PARTITION}, we have $|V_{12}'|=|V_{21}'|$. Moreover, the choice of $\mathcal{Q}$ ensures that both $V_{12}'$ and $V_{21}'$ are nonempty. \footnote{In cases (i), (iii), and (iv), $V_{21}$ is empty so we include paths from $V_{11}$ to $V_{22}$ so that the vertex created when contracting this path is placed in $V'_{21}$; see Definition~\ref{def:path-contraction}.
Similarly for $V_{12}$.} We note that $\mathcal{Q}$ has at most $12\nu n$ edges since, by construction, $\mathcal{Q}$ has at most $|V_{12}|-|V_{21}|$ edges (except in case (i) where $\mathcal{Q}$ has two edges) and $|V_{12}|-|V_{21}|\leq 12\nu n$ by Corollary~\ref{cor:Y-and-Z-equal-size}. Therefore, we delete at most $24\nu n$ vertices, which implies $\delta^{0}(G')\geq d-24\nu n$. On the other hand, by Proposition~\ref{prop:after-contraction}, $\mathcal{P}'$ is a $(4,\tau/2,8\nu)$-partition. Also, by Proposition~\ref{prop:dense-regular-size}, we have \begin{align*} |V_{i*}'|\geq |V_{i*}|-24\nu n\geq (1/3+\varepsilon/2-24\nu)n\geq |G'|/3 \end{align*} for $i \in [2]$. Similarly, we obtain $|V_{*i}'|\geq |G'|/3$, so $\mathcal{P}'$ is a $(4,1/3,8\nu)$-partition. Let $B^{+}(G')$ be the set of vertices in $G'$ satisfying $d_{\mathcal{G}_2(\mathcal{P}',G')}^{+}(x)<(1/3+\varepsilon/3)|G'|$. Similarly, define $B^{-}(G')$. Note that $\delta^{0}(G')\geq d-24\nu n\geq (1/3+\varepsilon/2)|G'|$. Hence, for any vertex $x\in B^{+}(G')$, we obtain \begin{align*} d_{\mathcal{B}_2(\mathcal{P}',G')}^{+}(x)>(1/3+\varepsilon/2)|G'|-(1/3+\varepsilon/3)|G'|\geq \varepsilon |G'|/6. \end{align*} Since $|\mathcal{B}_2(\mathcal{P}',G')|\leq 8\nu |G'|^2$, we have $8\nu |G'|^2\geq |B^{+}(G')|\cdot \varepsilon |G'|/6$. So, $|B^{+}(G')|\leq 48\nu |G'|/\varepsilon$ and, similarly, $|B^{-}(G')|\leq 48\nu |G'|/\varepsilon$. As a result, we obtain \begin{align*} |B^{+}(G')|+|B^{-}(G')|\leq 96\nu |G'|/\varepsilon\leq \sqrt{\nu}|G'|. \end{align*} \noindent Moreover, since $\mathcal{P}$ is an extremal $(4,\tau,4\nu)$-partition and we deleted at most $24\nu n$ vertices, Proposition~\ref{prop:degree-property-extremal} implies that $d_{\mathcal{G}_2(\mathcal{P}',G')}^{+}(v),d_{\mathcal{G}_2(\mathcal{P}',G')}^{-}(v)\geq d/2-24\nu n\geq |G'|/20$ for all $v\in V(G')$. As a result, $\mathcal{P}'$ satisfies the properties in Lemma~\ref{lem:digraph-hamilton}, so $G'$ is Hamiltonian.
Hence, the result follows by Proposition~\ref{prop:contract-hamilton}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:MAIN-1-4-CASE}] Fix constants $\nu$ and $\tau$ satisfying $1/n \ll \nu\ll\tau\ll \varepsilon$. By Theorem~\ref{thm:RobustExpanderImpliesHamilton}, we are done if $G$ is a robust $(\nu, \tau)$-outexpander. Assume not. Then, by Lemma~\ref{lem:structure_not_expander}, $G$ admits an extremal $(4,\tau,4\nu)$-partition~$\mathcal{P}=\{V_{ij}:i,j \in [2]\}$. Then, for each $i \in [2]$, \begin{align} |V_{i*}|,|V_{*i}|\geq (1/4+\varepsilon/2)n \label{eqn:123} \end{align} by Proposition~\ref{prop:dense-regular-size}. Also, we have $|V_{12}|,|V_{21}|\geq \tau n$ by Proposition~\ref{prop:quarterleadsnonempty}. Without loss of generality, assume $|V_{11}|\leq |V_{22}|$. Furthermore, by reversing the edges if necessary, we may assume that $|V_{12}| \ge |V_{21}|$. Let $r= |V_{12}|-|V_{21}|$. By Corollary~\ref{cor:Y-and-Z-equal-size} and the fact that $|\mathcal{B}_2(\mathcal{P},G)|\leq 4\nu n^2$, we obtain \begin{align} \label{eqn:r} r\leq 4\nu n^2/d\leq 16\nu n. \end{align} Fix a subset~$R$ of~$V_{12}$ of size~$r$. Let $\mathcal{W}=\{W_{ij}:i,j \in [2]\}$ where $W_{ij}=V_{ij} \setminus R$ for $i,j \in [2]$. Note that \begin{align} \label{eqn:|W_2*|} |W_{2*}|&=|W_{*2}|\geq (n-r)/2 \\ \label{eqn:|W_1*|} \text{ and } |W_{1*}| &= |W_{*1}| \geq |V_{21}| \geq \tau n. \end{align} We now split into cases depending on whether, for all proper $2$-pairs~$\phi^2$ with respect to~$\mathcal{W}$, the digraph $\mathcal{J}^{2}(\mathcal{W},G-R,\phi^2)$ is a robust $(\nu^{1/2}, \tau)$-outexpander or not. \\ \noindent\underline{Case 1:} Suppose that, for all proper $2$-pairs~$\phi^2$ with respect to~$\mathcal{W}$, $\mathcal{J}^{2}(\mathcal{W},G-R,\phi^2)$ is a robust $(\nu^{1/2}, \tau)$-outexpander. Recall that $G_{ij} = G[V_{i*},V_{*j}]$. By~Proposition~\ref{prop:degree-property-extremal}, $\Delta^{0}(G_{12}),\Delta^{0}(G_{21})\leq d/2$.
Moreover, since $e(G_{12}),e(G_{21}) \le |\mathcal{B}_2(\mathcal{P},G)|\leq 4\nu n^2$, we have by Proposition~\ref{prop:partition-regular-graphs} that \begin{align*} e(G_{12}) \ge e(G_{12})-e(G_{21}) =d(|V_{12}|-|V_{21}|)= d r. \end{align*} Hence, by Lemma~\ref{lem:CYCLE_FREE}, $G_{12}$ has a path system~$\mathcal{Q}$ with $r$ edges. \\ \noindent We contract $\mathcal{Q}$ in~$G$ with respect to $\mathcal{P}$ to obtain $G'$ with resulting partition $\mathcal{P}'=\{V_{ij}':i,j \in [2]\}$. Since $|E(\mathcal{Q})\cap E(G_{12})|-|E(\mathcal{Q})\cap E(G_{21})|=|V_{12}|-|V_{21}|$, Lemma~\ref{lem:BALANCING-PARTITION} implies that $|V_{12}'|=|V_{21}'|$. Moreover, by Proposition~\ref{prop:quarterleadsnonempty}, we have $|V_{12}|,|V_{21}|\geq \tau n$. Since $r\leq 16\nu n\ll \tau n$, we conclude that $|V_{12}'|=|V_{21}'|>0$. By Proposition~\ref{prop:contract-hamilton}, it is enough to show that $G'$ is Hamiltonian. \\ \noindent Consider a proper $i$-pair~$\psi^i$ with respect to $\mathcal{P}'$ for $i \in [2]$. Let $\mathcal{J}_i = \mathcal{J}^{i}(\mathcal{P}',G',\psi^i)$. To show that $G'$ is Hamiltonian, by Proposition~\ref{prop:identification-Hamilton-implies-Hamilton}, it suffices to show that $\mathcal{J}_1$ and $\mathcal{J}_2$ are Hamiltonian. \\ \noindent We first show that $\mathcal{J}_2$ is Hamiltonian as follows. Let $\phi^2$ be a proper $2$-pair with respect to~$\mathcal{W}$, and let $\mathcal{J}^2 = \mathcal{J}^{2}(\mathcal{W},G-R,\phi^2)$. Note that \begin{align*} |\mathcal{J}^2|=|W_{*2}|\geq (n-r)/2. \end{align*} Let $t = |W_{12}|$ and $t' = |V_{12}'|$. Recall that $\phi^2$ is a function from $[t] \cup W_{22}$ to $W_{2*} \times W_{*2}$, and $\psi^2$ is a function from $[t'] \cup V_{22}'$ to $V_{2*}' \times V_{*2}'$. Let $X$ be the subset of $([t] \cup W_{22}) \cap ([t'] \cup V_{22}')$ such that $\phi^2 (x) = \psi^2(x)$ for all~$x \in X$. We now pick $\phi^2$ among all $2$-pairs with respect to $\mathcal{W}$ such that $|X|$ is as large as possible.
Since \begin{align*} |W_{2*} \triangle V'_{2*} |, |W_{*2} \triangle V'_{*2} | \le 3e(\mathcal{Q}) + |R| = 4r , \end{align*} we deduce that \begin{align*} \left| ([t] \cup W_{22}) \setminus X \right|, \left| ([t'] \cup V'_{22}) \setminus X \right| \le 16r. \end{align*} Therefore \begin{align*} | V(\mathcal{J}_2) \triangle V(\mathcal{J}^2) | \le | ([t] \cup W_{22}) \setminus X | + | ([t'] \cup V'_{22}) \setminus X | \le 32r \le 512 \nu n \le \nu^{1/2} |\mathcal{J}^2|/2. \end{align*} Since $\mathcal{J}^2$ is a robust $(\nu^{1/2} , \tau)$-outexpander by assumption, we conclude that $\mathcal{J}_2$ is a robust $(\nu^{1/2}/2,2\tau)$-outexpander by Lemma~\ref{lem:SYMMETRIC-DIFFERENCE}. Also, by Proposition~\ref{prop:degree-property-extremal}, we know $\delta^{0}(\mathcal{J}_2)\geq d/2-2r\geq |\mathcal{J}_2|/10$. Hence, $\mathcal{J}_2$ is Hamiltonian by Theorem~\ref{thm:RobustExpanderImpliesHamilton}.\\ \noindent We now show that $\mathcal{J}_1$ is Hamiltonian. Since $|V_{11}|\leq |V_{22}|$, together with~\eqref{eqn:123}, we have \begin{align*} n/4 \le |V_{1*}|-2r \le |\mathcal{J}_1|=|V_{1*}'|\leq |V_{1*}|+r\leq n/2+r\leq (1/2+\tau)n. \end{align*} By Proposition~\ref{prop:degree-property-extremal}, we have $\delta^{0}(\mathcal{J}_1)\geq d/2-2r\geq |\mathcal{J}_1|/10$. By Proposition~\ref{prop:after-contraction}, $\mathcal{P}'$ is a $(4,\tau/2,8\nu)$-partition of~$G'$. Also \begin{align*} e(\mathcal{J}_1)&= e(V_{1*}',V_{*1}') \geq \delta^{0}(G')|V_{1*}'|- |\mathcal{B}_2(\mathcal{P}',G')| \\ &\geq (d-2r)|V_{1*}'|- 8\nu n^2 \ge (d -64\nu n)|\mathcal{J}_1| \end{align*} as $|V_{1*}'|=|\mathcal{J}_1|\geq n/4$. Let $B^{+}(\mathcal{J}_1)$ be the set of vertices in $\mathcal{J}_1$ satisfying $d_{\mathcal{J}_1}^{+}(v)<d -\varepsilon n/4$. Similarly define $B^{-}(\mathcal{J}_1)$. 
Since $\Delta^0(\mathcal{J}_1) \le d$, we obtain \begin{align*} \left(|\mathcal{J}_1|-|B^{+}(\mathcal{J}_1)|\right)d+|B^{+}(\mathcal{J}_1)|\left(d-\varepsilon n/4\right)\geq e(\mathcal{J}_1)\geq (d -64\nu n)|\mathcal{J}_1|, \end{align*} which implies $64\nu n|\mathcal{J}_1|\geq |B^{+}(\mathcal{J}_1)|\varepsilon n/4$. Hence, we have $|B^{+}(\mathcal{J}_1)|\leq 256\nu |\mathcal{J}_1|/\varepsilon$, and similarly, $|B^{-}(\mathcal{J}_1)|\leq 256\nu |\mathcal{J}_1|/\varepsilon$. As a result, we obtain \begin{align*} |B^{+}(\mathcal{J}_1)|+ |B^{-}(\mathcal{J}_1)| \leq 512\nu |\mathcal{J}_1|/\varepsilon \leq \sqrt{\nu}|\mathcal{J}_1| \end{align*} as $\nu \ll \varepsilon$. Hence, for all but at most $\sqrt{\nu} |\mathcal{J}_1|$ vertices $v \in V(\mathcal{J}_1)$, we have \begin{align*} d_{\mathcal{J}_1}^{+}(v),d_{\mathcal{J}_1}^{-}(v)\geq d -\varepsilon n/4 \ge (1/2+\varepsilon/3)|\mathcal{J}_1|. \end{align*} \noindent Therefore, $\mathcal{J}_1$ satisfies the conditions of Corollary~\ref{cor:Higher-Than-Half-For-All-But-Few}, so it is Hamiltonian.\\ \noindent\underline{Case 2:} Suppose that there exists a proper $2$-pair~$\phi^2 = (\phi_{2*},\phi_{*2})$ with respect to~$\mathcal{W}$ such that $\mathcal{J}^{2}(\mathcal{W},G-R,\phi^2)$ is not a robust $(\nu^{1/2}, \tau)$-outexpander. Let $\mathcal{J}^2 = \mathcal{J}^{2}(\mathcal{W},G-R,\phi^2)$. We now show that there is a $(9, \tau/6, 20 \nu^{1/2})$-partition for~$G$ (so that we can apply Lemma~\ref{lem:9partitionHamiltonian}). \\ \noindent Let $\theta=d/|W_{2*}|$. Remove any loops in $\mathcal{J}^2$. Notice that we have $|\mathcal{J}^2| = |W_{2*}|$ and $\Delta^{0}(\mathcal{J}^2)\leq d=\theta |W_{2*}|$. Since $W_{2*} = V_{2*}$ and $W_{*2} = V_{*2}-R$, we have by~\eqref{eqn:|W_2*|} \begin{align*} e(\mathcal{J}^2) & \ge e(W_{2*}, W_{*2}) - n = e(V_{2*}, W_{*2}) -n \geq d|W_{*2}| - e(\mathcal{B}_2(\mathcal{P}, G)) -n \\ &\ge d|W_{2*}|-4\nu n^2 -n \geq (\theta-\nu^{1/2})|W_{2*}|^2.
\end{align*} By Lemma~\ref{lem:structure_not_expander}, $\mathcal{J}^2$ admits a $(4, \tau, 4 \nu^{1/2})$-partition $\mathcal{P}^*_2 = \{U_{i,j}:i,j \in [2]\}$. Let $X_i = \phi_{2*}(U_{i*})$ and $Y_i = \phi_{*2}(U_{*i})$ for $i \in [2]$. Hence we have \begin{align} \label{eqn:X_i,Y_i} |X_1|,|X_2|, |Y_1|, |Y_2| & \geq \tau |W_{2*}| \ge \tau(n-r)/2\geq \tau n/3, \\ \label{eqn:e(XY)} e_{G}(X_1,Y_2)+e_{G}(X_2,Y_1) & \leq |\mathcal{B}_2(\mathcal{P}^*_2,\mathcal{J}^2)|+|W_{2*}| \le 5\nu^{1/2} |W_{2*}|^2, \end{align} where we have used~\eqref{eqn:|W_2*|} and \eqref{eqn:r} for the first line. Then, let us define the partition $\mathcal{Z}=\{Z_{ij}:i,j \in [3]\}$ for $G-R$ as follows: \[ \begin{array}{lll} Z_{11}=W_{22}\cap X_1\cap Y_1, &Z_{12}=W_{22}\cap X_1\cap Y_2, &Z_{13}=W_{21}\cap X_1, \\ Z_{21}=W_{22}\cap X_2\cap Y_1, &Z_{22}=W_{22}\cap X_2\cap Y_2, &Z_{23}=W_{21}\cap X_2, \\ Z_{31}=W_{12}\cap Y_1, &Z_{32}=W_{12}\cap Y_2, &Z_{33}=W_{11}. \end{array} \] Notice that, for $i \in [2]$, \begin{align*} |Z_{i*}|=|X_i| \geq \tau n/3 \text{ and } |Z_{*i}|=|Y_i| \ge \tau n/3 \end{align*} by~\eqref{eqn:X_i,Y_i}. Also, by \eqref{eqn:|W_1*|}, we have $|Z_{3*}|=|W_{1*}|\geq \tau n/3$ and $|Z_{*3}|=|W_{*1}|\geq \tau n/3$. Note that \begin{align*} \mathcal{B}_3(\mathcal{Z},G-R) & \subseteq E_{G}(X_1,Y_2) \cup E_{G}(X_2,Y_1) \cup \bigcup_{i,j\neq 3} \left( E_G(Z_{i*},Z_{*3}) \cup E_G(Z_{3*},Z_{*j})\right) \\ & = E_{G}(X_1,Y_2) \cup E_{G}(X_2,Y_1) \cup \mathcal{B}_2(\mathcal{W},G-R) \\ &\subseteq E_{G}(X_1,Y_2) \cup E_{G}(X_2,Y_1) \cup \mathcal{B}_2(\mathcal{P},G), \end{align*} so \eqref{eqn:e(XY)} implies that $|\mathcal{B}_3(\mathcal{Z},G-R)| \le 5 \nu^{1/2} |W_{2*}|^2 + 4 \nu n^2 \le 10 \nu^{1/2} |G-R|^2$. Therefore, $\mathcal{Z}$ is a $(9,\tau/3,10\nu^{1/2})$-partition for~$G-R$. Let us distribute the vertices of~$R$ into elements of~$\mathcal{Z}$ arbitrarily. Since $r\leq 16\nu n\ll \tau n$, the modified version of $\mathcal{Z}$ becomes a $(9,\tau/6, 20\nu^{1/2})$-partition for~$G$.
Hence, by Lemma~\ref{lem:9partitionHamiltonian}, $G$ is Hamiltonian, as required.\end{proof} \section{Conclusion} \label{ch:conclusion} \noindent The main result of this paper is a proof of the approximate version of Jackson's conjecture, namely Conjecture~\ref{conj:JacksonMain}. It remains an open problem to prove this conjecture exactly. Similarly, it would be interesting (and probably easier) to obtain an exact version of Theorem~\ref{thm:MAIN-1-3-CASE}, namely to show that every strongly well-connected $n$-vertex $d$-regular digraph with $d \geq n/3$ is Hamiltonian. \\ \noindent Another natural question is to ask for the analogue of Theorem~\ref{thm:MAIN-1-3-CASE} for oriented graphs. By suitably orienting the edges of a non-Hamiltonian $2$-connected regular graph on $n$ vertices with degree close to $n/3$ (see e.g. \cite{kuhn2016solution}), one obtains non-Hamiltonian strongly well-connected regular oriented graphs on $n$ vertices with $d$ close to~$n/6$. \begin{proposition} \label{prop:wellconncetedexample} For $n \in \mathbb{N}$, there exists a strongly well-connected $3n$-regular oriented graph on $18n+5$ vertices with no Hamilton cycle. \end{proposition} \begin{proof} Let $G_1$, $G_2$ and $G_3$ be vertex-disjoint regular tournaments each on $(6n+1)$ vertices. For $i \in [3]$, let $M_i = \{ x^{i}_j y^{i}_j : j \in [2n] \}$ be a matching of size~$2n$ in~$G_i$. Define $G$ to be the oriented graph obtained from $\bigcup_{i \in [3]} (G_i - M_i)$ by adding two new vertices $z$ and $z'$ and the edge set $\{ x^{i}_j z, z y^{i}_j, x^{i}_{j+n} z', z' y^{i}_{j+n} : i \in [3], j \in [n] \}$. Note that $G$ is a strongly well-connected $3n$-regular oriented graph on $18 n +5$ vertices. However $G$ is not Hamiltonian because deleting the two vertices $z$ and $z'$ from $G$ disconnects it into $3$ components (whereas deleting any $2$ vertices from a Hamilton cycle disconnects it into at most $2$ components).
\end{proof} \noindent Are all strongly well-connected $d$-regular oriented graphs on $n$ vertices with $d \ge n/6$ Hamiltonian?\\ \noindent Another interesting direction is to obtain an analogue of the Bollob{\'a}s--H{\"a}ggkvist Conjecture (which is discussed in the introduction) for oriented graphs. That is, given $t \ge 3$, determine the minimum value for $d$ such that any strongly $t$-connected $d$-regular $n$-vertex oriented graph is Hamiltonian. For any choice of $t$, we must have $d \ge n/8$ as shown below. \begin{proposition} \label{prop:3-conncetedexample} For $n \in \mathbb{N}$, there exists a strongly $n$-connected $2n$-regular oriented graph on $16n+1$ vertices with no Hamilton cycle. \end{proposition} \begin{proof} Consider a $2n$-regular oriented bipartite graph~$H$ with vertex classes $A$ and $B$ each of size~$4n$. Fix $b \in B$ and let $N^+_H(b) = \{a^+_1, \dots, a^+_{2n}\}$ and $N^-_H(b) = \{a^-_1, \dots, a^-_{2n}\}$. Let $G_1$ and $G_2$ be regular tournaments each on $(4n+1)$ vertices. Suppose that $V(H)$, $V(G_1)$ and $V(G_2)$ are pairwise disjoint. For $i \in [2]$, let $M_i = \{ x^{i}_j y^{i}_j : j \in [n] \}$ be a matching of size~$n$ in~$G_i$. Define $G$ to be the oriented graph obtained from $(H - \{b\}) \cup G_1 \cup G_2$ by removing the edges from $M_1 \cup M_2$ and adding the edges $\{ x^1_j a^+_j, a^-_j y^1_j , x^2_j a^+_{j+n}, a^-_{j+n} y^2_j : j \in [n]\}$. Note that $G$ is a strongly $n$-connected $2n$-regular oriented graph on $16n+1$ vertices. However $G$ is not Hamiltonian as removing $A$ creates $2+(|B|-1) > |A|$ components. \end{proof} \noindent Are all strongly $3$-connected $d$-regular oriented graphs on $n$ vertices with $d \ge n/8$ Hamiltonian?\\ \noindent For digraphs, one can similarly ask whether all strongly well-connected (or strongly $3$-connected) $d$-regular digraphs on $n$ vertices with $d \ge n/3$ (or $d \ge n/4$, respectively) are Hamiltonian.
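The constructions above are concrete enough to generate and check by computer. The sketch below builds the graph from Proposition~\ref{prop:wellconncetedexample}, taking rotational tournaments as one particular (assumed) choice of regular tournament and the pairs $x_j=2(j-1)$, $y_j=2(j-1)+1$ as the matchings; the function names are ours, not from the text.

```python
def rotational_tournament(m):
    """Regular tournament on an odd number m of vertices: u -> u+k (mod m), k = 1..(m-1)/2."""
    return {u: {(u + k) % m for k in range(1, (m - 1) // 2 + 1)} for u in range(m)}

def well_connected_example(n):
    """The 3n-regular oriented graph on 18n+5 vertices from the proposition above."""
    out = {'z': set(), "z'": set()}
    for i in range(3):
        # G_i: a regular tournament on 6n+1 vertices, with vertices labelled (i, v)
        for u, nbrs in rotational_tournament(6 * n + 1).items():
            out[(i, u)] = {(i, v) for v in nbrs}
        # matching M_i: x_j = 2(j-1) -> y_j = 2(j-1)+1 (an edge of the rotational tournament)
        for j in range(1, 2 * n + 1):
            x, y = (i, 2 * (j - 1)), (i, 2 * (j - 1) + 1)
            out[x].remove(y)                 # delete the matching edge x_j y_j
            hub = 'z' if j <= n else "z'"    # first n pairs rerouted via z, the rest via z'
            out[x].add(hub)
            out[hub].add(y)
    return out

def weak_components(out, removed=()):
    """Number of weakly connected components after deleting the vertices in `removed`."""
    verts = [v for v in out if v not in removed]
    adj = {v: set() for v in verts}
    for u in verts:
        for v in out[u]:
            if v not in removed:
                adj[u].add(v)
                adj[v].add(u)
    seen, comps = set(), 0
    for s in verts:
        if s not in seen:
            comps += 1
            stack = [s]
            seen.add(s)
            while stack:
                for v in adj[stack.pop()] - seen:
                    seen.add(v)
                    stack.append(v)
    return comps
```

This confirms the degree bookkeeping in the proof and the fact that deleting $z$ and $z'$ leaves three components; strong well-connectivity itself is not checked here.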
If these questions have positive answers, then the values of $d$ would be best possible, as shown by the digraph analogues of the examples given in Propositions~\ref{prop:wellconncetedexample} and~\ref{prop:3-conncetedexample}. \end{document}
\begin{document} \title{Chromatic Derivatives, Chromatic Expansions and Associated Spaces} \thanks{Some of the results from this paper were presented at \emph{UNSW Research Workshop on Function Spaces and Applications}, Sydney, December 2-6, 2008, and at \emph{SAMPTA 09}, Marseille, May 18-22, 2009.} \author{Aleksandar Ignjatovi\'{c}} \address{School of Computer Science and Engineering, University of New South Wales {\rm and} National ICT Australia (NICTA), Sydney, NSW 2052, Australia} \email{[email protected]} \urladdr{www.cse.unsw.edu.au/\~{}{ignjat}} \keywords{chromatic derivatives, chromatic expansions, orthogonal systems, orthogonal polynomials, special functions, signal representation} \subjclass[2000]{41A58, 42C15, 94A12, 94A20} \begin{abstract} This paper presents the basic properties of chromatic derivatives and chromatic expansions and provides an appropriate motivation for introducing these notions. Chromatic derivatives are special, numerically robust linear differential operators which correspond to certain families of orthogonal polynomials. Chromatic expansions are series of the corresponding special functions, which possess the best features of both the Taylor and the Shannon expansions. This makes chromatic derivatives and chromatic expansions applicable in fields involving empirically sampled data, such as digital signal and image processing. \end{abstract} \maketitle \section{Extended Abstract} Let $\BL{\pi}$ be the space of continuous $L^2$ functions with the Fourier transform supported within $[-\pi,\pi]$ (i.e., the space of $\pi$ band limited signals of finite energy), and let $P^{\scriptscriptstyle{L}}_n(\omega)$ be obtained by normalizing and scaling the Legendre polynomials, so that \[ \frac{1}{2\pi}\int_{-\pi}^{\pi} P^{\scriptscriptstyle{L}}_{\scriptstyle{n}} (\omega)\;P^{\scriptscriptstyle{L}}_{\scriptstyle{m}} (\omega){\rm d}\omega=\delta(m-n).
\] We consider linear differential operators $\K{n}=(-\mathrm{i})^{n} P^{\scriptscriptstyle{L}}_{\scriptstyle{n}} \left(\mathrm{i}\;\frac{{\rm d}}{{\rm d} t}\right)$; for such operators and every $f\in\BL{\pi}$, \[ \K{n}[f](t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{i}^n\; P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega) \widehat{f}(\omega)\,\mathrm{e}^{\mathrm{i} \omega t} {\rm d}\omega. \] We show that for $f\in\BL{\pi}$ the values of $\K{n}[f](t)$ can be obtained in a numerically accurate and noise robust way from samples of $f(t)$, even for differential operators $\K{n}$ of high order. Operators $\K{n}$ have the following remarkable properties, relevant for applications in digital signal processing. \begin{proposition} Let $f:\mathds{R}\rightarrow\mathds{R}$ be a restriction of any entire function; then the following are equivalent: \begin{enumerate}[(a)] \item $\sum_{n=0}^{\infty}\K{n}[f](0)^2 < \infty$; \item for all $t\in\mathds{R}$ the sum $\sum_{n=0}^{\infty}\K{n}[f](t)^2$ converges, and its values are independent of $t\in\mathds{R}$; \item $f\in\BL{\pi}$. \end{enumerate} \end{proposition} Moreover, the following Proposition provides local representation of the usual norm, the scalar product and the convolution in $\BL{\pi}$.
\begin{proposition} For all $f, g\in\BL{\pi}$ the following sums do not depend on $t\in\mathds{R}$, and \begin{eqnarray} \sum_{n=0}^{\infty}\K{n}[f](t)^2\hspace*{-2mm}&=&\hspace*{-2mm} \int_{-\infty}^{\infty} f(x)^2 dx;\nonumber\\ \sum_{n=0}^{\infty}\K{n}[f](t)\K{n}[g](t) \hspace*{-2mm}&=&\hspace*{-2mm} \int_{-\infty}^{\infty} f(x)g(x) dx;\nonumber\\ \sum_{n=0}^{\infty}\K{n}[f](t) \K{n}_t[g(u-t)] \hspace*{-2mm}&=&\hspace*{-2mm} \int_{-\infty}^{\infty}f(x)g(u-x) dx.\nonumber \end{eqnarray} \end{proposition} The following proposition provides a form of Taylor's theorem, with the differential operators $\K{n}$ replacing the derivatives and the spherical Bessel functions replacing the monomials. \begin{proposition} Let $j_n$ be the spherical Bessel functions of the first kind; then: \begin{enumerate} \item for every entire function $f$ and for all $z\in\mathds{C}$, \[ f(z) =\sum_{n=0}^{\infty}(-1)^n\K{n}[f](0)\K{n}[j_0(\pi z)] =\sum_{n=0}^{\infty}\K{n}[f](0)\;\sqrt{2n+1}\;j_n(\pi z); \] \item if $f\in\BL{\pi}$, then the series converges uniformly on $\mathds{R}$ and in $L^2$. \end{enumerate} \end{proposition} We give analogues of the above theorems for very general families of orthogonal polynomials. We also introduce some nonseparable inner product spaces. In one of them, related to the Hermite polynomials, functions $f_{\omega}(t)=\sin \omega t$ for all $\omega>0$ have finite positive norms and for \textbf{every} two distinct values $\omega_1\neq\omega_2$ the corresponding functions $f_{\omega_1}(t)=\sin \omega_1 t$ and $f_{\omega_2}(t)=\sin \omega_2 t$ are mutually orthogonal. Related to the properties of such spaces, we also make the following conjecture for families of orthonormal polynomials.
\begin{conjecture} Let $P_n(\omega)$ be a family of symmetric positive definite orthonormal polynomials corresponding to a moment distribution function $a(\omega)$, \vspace*{-3mm} \[ ~\hspace*{20mm}\int_{-\infty}^{\infty}P_n(\omega)\;P_m(\omega)\;{\rm d}a(\omega)=\delta(m-n), \] and let $\gamma_n>0$ be the recursion coefficients in the corresponding three term recurrence relation for such ortho\underline{\textbf{normal}} polynomials, i.e., such that \[ P_{n+1}(\omega)=\frac{\omega}{\gamma_{n}}\, P_{n}(\omega)-\frac{{\gamma_{n-1}}}{{\gamma_{n}}}\, P_{n-1}(\omega). \] If $\gamma_n$ satisfy \ \ $\displaystyle{0<\lim_{n\rightarrow\infty}\frac{\gamma_n}{n^p}<\infty}$\ \ for some $0\leq p <1$, then \[\displaystyle{0<\lim_{n\rightarrow\infty}\frac{1}{n^{1-p}} \sum_{k=0}^{n-1}P_k(\omega)^2<\infty} \] for all $\omega$ in the support $sp(a)$ of $a(\omega)$. \end{conjecture} Numerical tests with $\gamma_n=n^p$ for many $p\in[0,1)$ indicate that the conjecture is true. \section{Motivation} Signal processing mostly deals with the signals which can be represented by continuous $L^2$ functions whose Fourier transform is supported within $[-\pi,\pi]$; these functions form \textit{the space $\BL{\pi}$ of $\pi$ band limited signals of finite energy}. Foundations of classical digital signal processing rest on the Whittaker--Kotel'nikov--Nyquist--Shannon Sampling Theorem (for brevity the Shannon Theorem): every signal $f\in\BL{\pi}$ can be represented using its samples at integers and \textit{the cardinal sine function} $\sinc t =\sin \pi t/\pi t$, as \begin{equation}\label{nexp} f(t)=\sum_{n=-\infty}^{\infty}f(n)\; \sinc (t-n). \end{equation} Such signal representation is of \textit{global nature}, because it involves samples of the signal at integers of arbitrarily large absolute value.
In fact, since for a fixed $t$ the values of $\sinc (t-n)$ decrease slowly as $|n|$ grows, the truncations of the above series do not provide satisfactory local signal approximations. On the other hand, since every signal $f\in\BL{\pi}$ is a restriction to $\mathds{R}$ of an entire function, it can also be represented by the Taylor series, \begin{equation}\label{taylor} f(t)=\sum_{n=0}^{\infty}f^{(n)}(0)\;\frac{t^n}{n!}. \end{equation} Such a series converges uniformly on every finite interval, and its truncations provide good local signal approximations. Since the values of the derivatives $f^{(n)}(0)$ are determined by the values of the signal in an arbitrarily small neighborhood of zero, the Taylor expansion is of \textit{local nature}. In this sense, the Shannon and the Taylor expansions are complementary. However, unlike the Shannon expansion, the Taylor expansion has found very limited use in signal processing, due to several problems associated with its application to empirically sampled signals. \begin{enumerate}[(I)] \item Numerical evaluation of higher order derivatives of a function given by its samples is very noise sensitive. In general, one is cautioned against numerical differentiation: \begin{quote}``\ldots numerical differentiation should be avoided whenever possible, particularly when the data are empirical and subject to appreciable errors of observation''\cite{Hil}.\end{quote} \item The Taylor expansion of a signal $f\in\BL{\pi}$ converges non-uniformly on $\mathds{R}$; its truncations have rapid error accumulation when moving away from the center of expansion and are unbounded.
\item Since the Shannon expansion of a signal $f\in\BL{\pi}$ converges to $f$ in $\BL{\pi}$, the action of a continuous linear shift invariant operator (in signal processing terminology, a \textit{filter}) $A$ can be expressed using samples of $f$ and the \textit{impulse response\/} $A[\sinc\!]$ of $A$: \begin{equation}\label{filter-ny} A[f](t)=\sum_{n=-\infty}^{\infty}f(n)\;A[\sinc\!](t-n). \end{equation} In contrast, the polynomials obtained by truncating the Taylor series do not belong to $\BL{\pi}$ and nothing similar to \eqref{filter-ny} is true of the Taylor expansion. \end{enumerate} Chromatic derivatives were introduced in \cite{IG0} to overcome problem (I) above; the chromatic approximations were introduced in \cite{IG00} to obtain local approximations of band-limited signals which do not suffer from problems (II) and (III). \subsection{Numerical differentiation of band limited signals} To understand the problem of numerical differentiation of band-limited signals, we consider an arbitrary $f\in \BL{\pi}$ and its Fourier transform $\widehat{f}(\omega)$; then \[f^{(n)}(t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}(\mathrm{i}\omega)^n \widehat{f}(\omega)\,\mathrm{e}^{\mathrm{i}\omega t}d\omega.\] Figure~\ref{derivatives} (left) shows, for $n= 15$ to $n=18$, the plots of $(\omega/\pi)^{n}$, which are, save a factor of $\mathrm{i}^n$, the symbols, or, in signal processing terminology, the \textit{transfer functions} of the normalized derivatives $1/\pi^n \; {\mathrm d}^n/{\mathrm d} t^n$. These plots reveal why there can be no practical method for any reasonable approximation of derivatives of higher orders. Multiplication of the Fourier transform of a signal by the transfer function of a normalized derivative of higher order obliterates the Fourier transform of the signal, leaving only its edges, which in practice contain mostly noise.
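Both effects are easy to quantify. The following small illustration (our construction, not from the paper) uses the inner product $\langle f,g\rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi}f g\,{\rm d}\omega$ with the substitution $x=\omega/\pi$: the $L^2$-normalized transfer function of the $15$-th derivative carries over $95\%$ of its energy in the outer $10\%$ of the band, the normalized monomials of orders $15$ and $17$ are nearly collinear, and the correspondingly normalized Legendre polynomials are orthonormal.

```python
import numpy as np
from scipy.special import eval_legendre

# <f,g> = (1/2π) ∫_{-π}^{π} f(ω) g(ω) dω, computed in x = ω/π with Gauss–Legendre
# quadrature (exact here, since every integrand below is a polynomial).
x, w = np.polynomial.legendre.leggauss(64)
ip = lambda f, g: 0.5 * np.sum(w * f * g)

# 1) The normalized transfer function √31·(ω/π)^15 keeps almost all of its
#    energy in the outer 10% of the band:
t, wt = np.polynomial.legendre.leggauss(32)
xs = 0.95 + 0.05 * t                            # Gauss nodes mapped to [0.9, 1]
edge_energy = np.sum(0.05 * wt * 31 * xs**30)   # energy weight of 0.9π ≤ |ω| ≤ π
assert edge_energy > 0.95                       # ≈ 1 − 0.9^31 ≈ 0.96

# 2) Normalized monomials of orders 15 and 17 are nearly collinear in L² ...
m15 = np.sqrt(31) * x**15
m17 = np.sqrt(35) * x**17
assert abs(ip(m15, m15) - 1) < 1e-12 and abs(ip(m17, m17) - 1) < 1e-12
assert ip(m15, m17) > 0.99                      # cosine similarity ≈ 0.998

# ... while the correspondingly normalized Legendre polynomials are orthonormal:
p15 = np.sqrt(31) * eval_legendre(15, x)
p17 = np.sqrt(35) * eval_legendre(17, x)
assert abs(ip(p15, p17)) < 1e-12 and abs(ip(p15, p15) - 1) < 1e-12
```

In other words, in the $L^2$ geometry of the band the monomials $\omega^{15}$ and $\omega^{17}$ point in almost the same direction, which is the precise sense in which the derivatives form a poor base.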
Moreover, the graphs of the transfer functions of the normalized derivatives of high orders and of the same parity cluster so tightly together that they are essentially indistinguishable; see Figure~\ref{derivatives} (left).\footnote{If the derivatives are not normalized, their values can be very large and are again determined essentially by the noise present at the edge of the bandwidth of the signal.} However, contrary to a common belief, these facts \textit{do not} preclude numerical evaluation of all differential operators of higher orders, but only indicate that, from a numerical perspective, the set of the derivatives $\{f, f^\prime, f^{\prime\prime},\ldots\}$ is a very poor base of the vector space of linear differential operators with real coefficients. We now show how to obtain a base for this space consisting of numerically robust linear differential operators. \subsection{Chromatic derivatives} Let polynomials $P^{\scriptscriptstyle{L}}_n(\omega)$ be obtained by normalizing and scaling the Legendre polynomials, so that \begin{equation*} \frac{1}{2\pi}\int_{-\pi}^{\pi}P^{\scriptscriptstyle{L}}_{\scriptstyle{n}} (\omega)\;P^{\scriptscriptstyle{L}}_{\scriptstyle{m}}(\omega){\rm d}\omega=\delta(m-n). \end{equation*} We define operator polynomials\footnote{Thus, obtaining $\K{n}_t$ involves replacing $\omega^k$ in $P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega)$ with $\mathrm{i}^k\;{\rm d}^k/{\rm d} t^k$ for all $k\leq n$. If $\K{n}_t$ is applied to a function of a single variable, we drop index $t$ in $\K{n}_t$.} \begin{equation*}\label{dop} \K{n}_t=(-\mathrm{i})^{n} P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}\left(\mathrm{i}\;\frac{{\rm d}}{{\rm d} t}\right).
\end{equation*} Since polynomials $P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega)$ contain only powers of the same parity as $n$, operators $\K{n}$ have real coefficients, and it is easy to verify that \begin{equation*} \K{n}_t[\mathrm{e}^{\mathrm{i}\omega t}]= \mathrm{i}^nP^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega)\,\mathrm{e}^{\mathrm{i}\omega t}. \end{equation*} Consequently, for $f\in\BL{\pi}$, \[ \K{n}[f](t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{i}^n P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega)\widehat{f}(\omega)\; \mathrm{e}^{\mathrm{i}\omega t}{\rm d}\omega. \] In particular, one can show that \begin{equation}\label{cdersinc} \K{n}[\sinc](t)=(-1)^n\;\sqrt{2n+1}\;{\mathrm j}_{n}(\pi t), \end{equation} where ${\mathrm j}_n(x)$ is the spherical Bessel function of the first kind of order $n$. Figure~\ref{derivatives} (right) shows the plots of $P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega)$, for $n= 15$ to $n=18$, which are the transfer functions (again save a factor of $\mathrm{i}^n$) of the corresponding operators $\K{n}$. Unlike the transfer functions of the (normalized) derivatives $1/\pi^n\;{\rm d}^n/{\rm d} t^n$, the transfer functions of the chromatic derivatives $\K{n}$ form a family of well separated, interleaved and increasingly refined comb filters. Instead of obliterating, such operators encode the features of the Fourier transform of the signal (in signal processing terminology, the \textit{spectral features} of the signal). For this reason, we call operators $\K{n}$ the {\em chromatic derivatives\/} associated with the Legendre polynomials.
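Identity \eqref{cdersinc} can be checked numerically by evaluating the Fourier-side integral with Gauss--Legendre quadrature; the same computation also illustrates the $t$-independence of $\sum_n \K{n}[f](t)^2$ for $f=\sinc$, whose $L^2$ norm is $1$. A sketch (our code, with $P^{\scriptscriptstyle{L}}_n(\omega)=\sqrt{2n+1}\,L_n(\omega/\pi)$ for the standard Legendre polynomials $L_n$):

```python
import numpy as np
from scipy.special import eval_legendre, spherical_jn

def K_sinc_numeric(n, t, nodes=100):
    """K^n[sinc](t) = (1/2pi) int_{-pi}^{pi} i^n P_n^L(w) e^{iwt} dw, via w = pi*x
    and Gauss-Legendre quadrature, with P_n^L(w) = sqrt(2n+1)*Legendre_n(w/pi)."""
    x, w = np.polynomial.legendre.leggauss(nodes)
    vals = 1j**n * np.sqrt(2 * n + 1) * eval_legendre(n, x) * np.exp(1j * np.pi * x * t)
    return 0.5 * float(np.real(np.sum(w * vals)))

def K_sinc_closed(n, t):
    """Closed form (-1)^n * sqrt(2n+1) * j_n(pi*t)."""
    return (-1)**n * np.sqrt(2 * n + 1) * spherical_jn(n, np.pi * t)

# the quadrature evaluation matches the closed form
for n in (0, 1, 5, 12):
    for t in (0.0, 0.35, 1.7):
        assert abs(K_sinc_numeric(n, t) - K_sinc_closed(n, t)) < 1e-10

# the sum of squares is independent of t and equals ||sinc||^2 = 1
for t in (0.0, 0.4, 1.3, 2.7):
    assert abs(sum(K_sinc_closed(n, t) ** 2 for n in range(80)) - 1.0) < 1e-8
```

Since every integrand is a polynomial times $\mathrm{e}^{\mathrm{i}\pi x t}$, a modest number of quadrature nodes already gives essentially machine-precision agreement.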
\begin{figure} \begin{center} \includegraphics[width=4.7in]{Plain_Legendre.eps} \caption{Graphs of $\left(\frac{\omega}{\pi}\right)^n$ (left) and of $P^{\scriptscriptstyle{L}}_{\scriptstyle{n}}(\omega)$ (right) for $n=15,\ldots,18$.} \label{derivatives} \mathop{{\mathrm{e}}}nd{center} \mathop{{\mathrm{e}}}nd{figure} Chromatic derivatives can be accurately and robustly evaluated from samples of the signal taken at a small multiple of the usual Nyquist rate. Figure~\ref{remezPlainLegendre15} (left) shows the plots of the transfer function of a \textit{transversal filter} given by $\mathcal{T}_{15}[f](t)=\sum_{k=-64}^{64}c_{k}\,f(t+k/2)$ (gray), used to approximate the chromatic derivative $\K{15}[f](t)$, and the transfer function of $\K{15}$ (black). The coefficients $c_k$ of the filter were obtained using the Remez exchange method \cite{Opp}, and satisfy $|c_k|< 0.2$ $(-64\leq k\leq 64)$. The filter has 129 taps, spaced two taps per Nyquist rate interval, i.e., at a distance of $1/2$. Thus, the transfer function of the corresponding ideal filter $\K{15}$ is $P^{\scriptscriptstyle{L}}_{\scriptstyle{15}}(2\omega)$ for $|\omega|\leq \pi/2$, and zero outside this interval. The pass-band of the actual transversal filter is 90\% of the bandwidth $[-\pi/2,\pi/2]$. Outside the transition regions $[-11\pi/20,-9\pi/20]$ and $[9\pi/20,11\pi/20]$ the error of approximation is less than $1.3\times 10^{-4}$. Implementations of filters for operators $\K{n}$ of orders $0\leq n\leq 24$ have been tested in practice and proved to be both accurate and noise robust, as expected from the above considerations.
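A filter of this kind can be prototyped in a few lines. The sketch below is a hypothetical reconstruction, not the authors' implementation: it keeps the tap count (129), the tap spacing ($1/2$), and the band edges ($9\pi/20$, $11\pi/20$) quoted above, but fits the taps by ordinary least squares on a frequency grid instead of the Remez exchange method, so its error figures differ from those reported.

```python
# Hypothetical reconstruction of the transversal filter for K^15: least-squares
# fit of H(w) = sum_k c_k e^{i w k/2} to the ideal response i^15 P_15^L(2w)
# on the pass-band, 0 on the stop-band; the transition band is "don't care".
import numpy as np
from numpy.polynomial.legendre import Legendre

def P15(om):
    # normalized, rescaled Legendre polynomial of order 15
    return np.sqrt(31.0) * Legendre.basis(15)(om / np.pi)

taps = np.arange(-64, 65)                                    # 129 taps, spacing 1/2
wp = np.linspace(-9 * np.pi / 20, 9 * np.pi / 20, 600)       # pass-band grid
ws = np.hstack([np.linspace(-np.pi, -11 * np.pi / 20, 150),
                np.linspace(11 * np.pi / 20, np.pi, 150)])   # stop-band grid
w = np.hstack([wp, ws])
target = np.hstack([(1j ** 15) * P15(2 * wp), np.zeros(ws.size)])
A = np.exp(0.5j * np.outer(w, taps))    # column k is the response of tap k
c = np.linalg.lstsq(A, target, rcond=None)[0].real   # symmetry makes c real
passband_err = np.abs(A[:wp.size] @ c - target[:wp.size]).max()
assert passband_err < 0.05
```

With these constraints the least-squares fit should already achieve a small pass-band error; the Remez method additionally equalizes the worst-case error, which is how the minimax figure of $1.3\times 10^{-4}$ quoted above is obtained.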
\begin{figure} \begin{center} \includegraphics[width=5in]{remez.eps} \caption{Transfer functions of $\K{15}$ (left, black) and $d^{15}/dt^{15}$ (right, black) and of their transversal filter approximations (gray).} \label{remezPlainLegendre15} \mathop{{\mathrm{e}}}nd{center} \mathop{{\mathrm{e}}}nd{figure} For comparison, Figure~\ref{remezPlainLegendre15} (right) shows the transfer function of a transversal filter obtained by the same procedure and with the same bandwidth constraints, which approximates the (normalized) ``standard" derivative $(2/\pi)^{15}\; {\rm d}^{15}/{\rm d} t^{15}$ (gray) and the transfer function of the ideal filter (black). The figure clearly indicates that such a transversal filter is of no practical use. Note that \mathop{{\mathrm{e}}}qref{nexp} and \mathop{{\mathrm{e}}}qref{cdersinc} imply that \begin{equation}\label{no} \K{k}[f](t)=\sum_{n=-\infty}^{\infty}f(n)\;\K{k}[\sinc\!](t-n) =\sum_{n=-\infty}^{\infty}f(n)\;(-1)^k\;\sqrt{2k+1}\;{\mathrm j}_k(\pi(t-n)). \mathop{{\mathrm{e}}}nd{equation} However, in practice, the values of $\K{k}[f](t)$, especially for larger values of $k$, \textit{cannot} be obtained from the Nyquist rate samples using truncations of \mathop{{\mathrm{e}}}qref{no}. This is due to the fact that functions $\K{k}[\sinc\!](t-n)$ decay very slowly as $|n|$ grows; see Figure~\ref{i15} (left). Thus, to achieve any accuracy, such a truncation would need to contain an extremely large number of terms. 
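The decay rate is easy to make concrete: for odd $k$, $|\K{k}[\sinc\!](n)|=\sqrt{2k+1}\,|{\mathrm j}_k(\pi n)|$ behaves like $\sqrt{2k+1}/(\pi n)$ for large $n$, so enlarging the truncation window tenfold shrinks the omitted terms only tenfold. A short sketch (an illustration assuming SciPy):

```python
# Illustration of the slow (~1/n) decay of the terms in the truncated series
# for K^15[f]: even 1000 samples away, the basis functions are not negligible.
import numpy as np
from scipy.special import spherical_jn

def term_magnitude(k, n):
    # |K^k[sinc](n)| = sqrt(2k+1) |j_k(pi n)|
    return np.sqrt(2 * k + 1) * abs(spherical_jn(k, np.pi * n))

t100 = term_magnitude(15, 100)       # roughly sqrt(31)/(100 pi), about 2e-2
t1000 = term_magnitude(15, 1000)     # roughly sqrt(31)/(1000 pi), about 2e-3
assert t100 > 1e-2 and t1000 > 1e-3
assert 5 < t100 / t1000 < 20         # ten times more terms, only ~tenfold smaller
```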
\begin{figure} \begin{center} \includegraphics[width=5in]{approx.eps} \caption{\textsc{left}: Oscillatory behavior of $\sinc(t)$ (black), and $\K{15}[\sinc](t)$ (gray); \textsc{right}: A signal $f\in\BL{\pi}$ (gray) and its chromatic and Taylor approximations (black, dashed)} \label{i15} \mathop{{\mathrm{e}}}nd{center} \mathop{{\mathrm{e}}}nd{figure} On the other hand, this also means that signal information present in the values of the chromatic derivatives of a signal obtained by sampling an appropriate filterbank at an instant $t$ is \textit{not redundant} with information present in the Nyquist rate samples of the signal in any reasonably sized window around $t$, which is a fact suggesting that chromatic derivatives could enhance standard signal processing methods operating on Nyquist rate samples. \subsection{Chromatic expansions} The above shows that numerical evaluation of the chromatic derivatives associated with the Legendre polynomials does not suffer problems which precludes numerical evaluation of the ``standard'' derivatives of higher orders. 
On the other hand, the chromatic expansions, defined in Proposition~\ref{app} below, were conceived as a solution to problems associated with the use of the Taylor expansion.\footnote{Propositions stated in this Introduction are special cases of general propositions proved in subsequent sections.} \begin{proposition}\label{app} Let $\K{n}$ be the chromatic derivatives associated with the Legendre polynomials, let ${\mathrm j}_n$ be the spherical Bessel function of the first kind of order $n$, and let $f$ be an arbitrary entire function; then for all $z,u\in\mathop{\mathds{C}}$, \begin{eqnarray} f(z)&=&\sum_{n=0}^{\infty}\;\K{n}[f](u)\; \K{n}_u[\sinc (z-u)]\label{CEL}\\ &=&\sum_{n=0}^{\infty}(-1)^n\;\K{n}[f](u)\; \K{n}[\sinc ](z-u)\label{CEL1}\\ &=&\sum_{n=0}^{\infty}\K{n}[f](u)\;\sqrt{2n+1}\; {\mathrm j}_n (\pi(z-u)). \mathop{{\mathrm{e}}}nd{eqnarray} If $f\in\BL{\pi}$, then the series converges uniformly on $\mathop{\mathds{R}}$ and in the space $\BL{\pi}$. \mathop{{\mathrm{e}}}nd{proposition} The series in \mathop{{\mathrm{e}}}qref{CEL} is called \textit{the chromatic expansion of $f$ associated with the Legendre polynomials\/}; a truncation of this series is called a \textit{chromatic approximation\/} of $f$. Like the Taylor approximation, a chromatic approximation is a local approximation; its coefficients are the values $\K{m}[f](u)$ of differential operators at a single instant $u$, and for all $k\leq n$, \begin{equation*} f^{(k)}(u)=\frac{{\rm d}^k}{{\rm d}t^k} \left[\sum_{m=0}^{n}\K{m}[f](u) \;\K{m}_u[\sinc(t-u)]\right]_{t=u}. \mathop{{\mathrm{e}}}nd{equation*} Figure~\ref{i15} (right) compares the behavior of the chromatic approximation (black) of a signal $f\in\BL{\pi}$ (gray) with the behavior of its Taylor approximation (dashed). Both approximations are of order 16. The signal $f(t)$ is defined using the Shannon expansion, with randomly generated samples $\{f(n)\ : \ |f(n)|<1,\ -32\leq n\leq 32\}$.
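The expansion in Proposition~\ref{app} can also be exercised on a concrete signal. The sketch below uses an assumed test signal $f(t)=\sinc(t-0.3)$ rather than the randomly generated signal of Figure~\ref{i15}; because $\K{n}$ has constant coefficients it commutes with translations, so $\K{n}[f](u)=\K{n}[\sinc](u-0.3)$ is available in closed form from \mathop{{\mathrm{e}}}qref{cdersinc}.

```python
# Chromatic approximation of the (assumed) test signal f(t) = sinc(t - 0.3)
# around u = 0, using f(t) = sum_n K^n[f](u) sqrt(2n+1) j_n(pi (t - u)).
import numpy as np
from scipy.special import spherical_jn

def jn(n, x):
    # spherical Bessel j_n extended to x < 0 by the parity j_n(-x) = (-1)^n j_n(x)
    return ((-1.0) ** n if x < 0 else 1.0) * spherical_jn(n, abs(x))

def K_sinc(n, t):
    # K^n[sinc](t) = (-1)^n sqrt(2n+1) j_n(pi t)
    return (-1) ** n * np.sqrt(2 * n + 1) * jn(n, np.pi * t)

a, u, N = 0.3, 0.0, 40   # assumed shift, centre of expansion, truncation order

def chromatic_approx(t):
    # K^n[f](u) = K^n[sinc](u - a), since K^n commutes with shifts
    return sum(K_sinc(n, u - a) * np.sqrt(2 * n + 1) * jn(n, np.pi * (t - u))
               for n in range(N + 1))

for t in (-1.2, 0.5, 2.0):
    # np.sinc(x) = sin(pi x)/(pi x)
    assert abs(chromatic_approx(t) - np.sinc(t - a)) < 1e-8
```

For $t$ within a few units of $u$ the order-40 truncation reproduces $f$ to high accuracy, and the error grows gracefully as $t$ moves away from $u$, in line with the behavior seen in Figure~\ref{i15} (right).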
The plot reveals that, when approximating a signal $f\in\BL{\pi}$, a chromatic approximation accumulates error much more gently when moving away from the point of expansion than the Taylor approximation of the same order. Unlike the monomials which appear in the Taylor formula, the functions $\K{n}[\sinc\!](t)=(-1)^n\;\sqrt{2n+1}\;{\mathrm j}_n(\pi t)$ belong to $\BL{\pi}$ and satisfy $|\K{n}[\sinc\!](t)|\leq 1$ for all $t\in\mathop{\mathds{R}}$. Consequently, the chromatic approximations also belong to $\BL{\pi}$ and are bounded on $\mathop{\mathds{R}}$. Since by Proposition~\ref{app} the chromatic approximations of a signal $f\in\BL{\pi}$ converge to $f$ in $\BL{\pi}$, and since every filter $A$ is continuous on $\BL{\pi}$ and commutes with the differential operators $\K{n}$, for every $f\in \BL{\pi}$, \begin{equation}\label{filter-ce} A[f](t)=\sum_{n=0}^{\infty}(-1)^n\;\K{n}[f](u)\; \K{n}[A[\,\sinc ]](t-u). \mathop{{\mathrm{e}}}nd{equation} A comparison of \mathop{{\mathrm{e}}}qref{filter-ce} with \mathop{{\mathrm{e}}}qref{filter-ny} provides further evidence that, while local just like the Taylor expansion, the chromatic expansion associated with the Legendre polynomials possesses the features that make the Shannon expansion so useful in signal processing. This, together with the numerical robustness of chromatic derivatives, makes chromatic approximations applicable in fields involving empirically sampled data, such as digital signal and image processing. \subsection{A local definition of the scalar product in \BL{\pi}} Proposition~\ref{fun} below demonstrates another remarkable property of the chromatic derivatives associated with the Legendre polynomials.
\begin{proposition}\label{fun} Let $f:\mathop{\mathds{R}}\rightarrow\mathop{\mathds{R}}$ be a restriction of an arbitrary entire function; then the following are equivalent: \begin{enumerate}[(a)] \item $\sum_{n=0}^{\infty}\K{n}[f](0)^2 < \infty$; \item for all $t\in\mathop{\mathds{R}}$ the sum $\sum_{n=0}^{\infty}\K{n}[f](t)^2$ converges, and its value does not depend on $t\in\mathop{\mathds{R}}$; \item $f\in\BL{\pi}$. \mathop{{\mathrm{e}}}nd{enumerate} \mathop{{\mathrm{e}}}nd{proposition} The next proposition is relevant for signal processing because it provides \textit{local representations} of the usual norm, the scalar product and the convolution in $\BL{\pi}$, respectively, which are defined globally, as improper integrals. \begin{proposition}\label{local-space} Let $\K{n}$ be the chromatic derivatives associated with the (rescaled and normalized) Legendre polynomials, and $f, g\in\BL{\pi}$. Then the following sums do not depend on $t\in\mathop{\mathds{R}}$ and satisfy \begin{eqnarray} \sum_{n=0}^{\infty}\K{n}[f](t)^2\hspace*{-2mm}&=&\hspace*{-2mm} \int_{-\infty}^{\infty} f(x)^2 dx;\\ \sum_{n=0}^{\infty}\K{n}[f](t)\K{n}[g](t) \hspace*{-2mm}&=&\hspace*{-2mm} \int_{-\infty}^{\infty} f(x)g(x) dx;\\ \sum_{n=0}^{\infty}\K{n}[f](t) \K{n}_t[g(u-t)] \hspace*{-2mm}&=&\hspace*{-2mm} \int_{-\infty}^{\infty}f(x)g(u-x) dx. \mathop{{\mathrm{e}}}nd{eqnarray} \mathop{{\mathrm{e}}}nd{proposition} \subsection{} We finish this introduction by pointing to a close relationship between the Shannon expansion and the chromatic expansion associated with the Legendre polynomials. Firstly, by \mathop{{\mathrm{e}}}qref{CEL1}, \begin{equation}\label{fn} f(n)=\sum_{k=0}^{\infty} \;\K{k}[f](0)\;(-1)^k \K{k}[\sinc\!](n). \mathop{{\mathrm{e}}}nd{equation} Since $\K{n}[\sinc\!](t)$ is an even function for even $n$ and odd for odd $n$, \mathop{{\mathrm{e}}}qref{no} implies \begin{equation}\label{kn} \K{k}[f](0)=\sum_{n=-\infty}^{\infty}\;f(n)\; (-1)^k \K{k}[\sinc\!](n).
\mathop{{\mathrm{e}}}nd{equation} Equations \mathop{{\mathrm{e}}}qref{fn} and \mathop{{\mathrm{e}}}qref{kn} show that the coefficients of the Shannon expansion of a signal (the samples $f(n)$) and the coefficients of the chromatic expansion of the signal (the simultaneous samples of the chromatic derivatives $\K{n}[f](0)$) are related by an orthonormal operator defined by the infinite matrix \[ \left[(-1)^k \K{k}\left[ \sinc\right](n)\ :\ k\in\mathop{\mathds{N}}, n\in\mathop{\mathds{Z}}\right]=\left[\sqrt{2k+1}\;{\mathrm j}_{k}(\pi n)\ :\ k\in\mathop{\mathds{N}}, n\in\mathop{\mathds{Z}}\right]. \] Secondly, let $\mathcal{S}_u[f(u)]=f(u+1)$ be the unit shift operator in the variable $u$ ($f$ might have other parameters). The Shannon expansion for the set of sampling points $\{u+n\ :\ {n\in \mathop{\mathds{Z}}}\}$ can be written in a form analogous to the chromatic expansion, using operator polynomials $\mathcal{S}_u^n=\mathcal{S}_u\circ\ldots\circ\mathcal{S}_u$, as \begin{eqnarray} f(t)&=&\sum_{n=-\infty}^{\infty}f(u+n)\;\sinc (t-(u+n))\\ &=&\sum_{n=-\infty}^{\infty} \mathcal{S}^{n}_u[f](u)\; \mathcal{S}^{n}_u[\sinc (t-u)];\label{fancyN} \mathop{{\mathrm{e}}}nd{eqnarray} compare now \mathop{{\mathrm{e}}}qref{fancyN} with \mathop{{\mathrm{e}}}qref{CEL}. Note that the family of operator polynomials $\{\mathcal{S}^{n}_u\}_{n\in \mathop{\mathds{Z}}}$ is also an orthonormal system, in the sense that their corresponding transfer functions $\{\mathop{{\mathrm{e}}}^{\mathop{{\mathrm{i}}} n\; \omega}\}_{n\in\mathop{\mathds{Z}}}$ form an orthonormal system in $L^2[-\pi,\pi]$. Moreover, the transfer functions of the families of operators $\{\K{n}\}_{n\in \mathop{\mathds{N}}}$ and $\{\mathcal{S}^{n}\}_{n\in \mathop{\mathds{Z}}}$, where $\K{n}$ are the chromatic derivatives associated with the Legendre polynomials, are orthogonal on $[-\pi,\pi]$ with respect to the same constant weight $\mathrm{w}(\omega)= 1/(2\pi)$.
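Row orthonormality of this matrix can be checked numerically on a finite window (a sketch assuming SciPy; since the squared entries of each row decay at least like $1/n^2$, a window $|n|\leq 5000$ already determines the Gram matrix to about three decimal places):

```python
# Truncated Gram matrix of the rows [ sqrt(2k+1) j_k(pi n) : n in Z ],
# which should approach the identity matrix as the window |n| <= N grows.
import numpy as np
from scipy.special import spherical_jn

def jn(k, x):
    # spherical Bessel j_k extended to negative arguments by parity
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, (-1.0) ** k, 1.0) * spherical_jn(k, np.abs(x))

N = 5000
n = np.arange(-N, N + 1)
rows = np.array([np.sqrt(2 * k + 1) * jn(k, np.pi * n) for k in range(6)])
G = rows @ rows.T                 # truncated Gram matrix of the first 6 rows
assert np.abs(G - np.eye(6)).max() < 2e-3
```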
In this paper we consider chromatic derivatives and chromatic expansions which correspond to some very general families of orthogonal polynomials, and prove generalizations of the above propositions, extending our previous work \cite{IG5}.\footnote{ Chromatic expansions corresponding to general families of orthogonal polynomials were first considered in \cite{CH}. However, Proposition 1 there is false; its attempted proof relies on an incorrect use of the Paley-Wiener Theorem. In fact, function $F_g$ defined there need not be extendable to an entire function, as it can be shown using Example 4 in Section~\ref{examples} of the present paper.} However, having in mind the form of expansions \mathop{{\mathrm{e}}}qref{CEL} and \mathop{{\mathrm{e}}}qref{fancyN}, one can ask a more general (and somewhat vague) question. \begin{question}\label{Q2} What are the operators $A$ for which there exists a family of operator polynomials $\{P_n(A)\}$, orthogonal under a suitably defined notion of orthogonality, such that for an associated function $\mathbf{m}_{\scriptscriptstyle{A}}(t)$, \begin{equation*} f(t)=\sum_{n}P_n(A)[f](u)\;P_n(A) [\mathbf{m}_{\scriptscriptstyle{A}}(t-u)] \mathop{{\mathrm{e}}}nd{equation*} for all functions from a corresponding (and significant) class $\mathcal{C}_{\scriptscriptstyle{A}}$? \mathop{{\mathrm{e}}}nd{question} \section{Basic Notions} \subsection{Families of orthogonal polynomials} Let $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}:\mathcal{P}_{\omega}\rightarrow \mathop{\mathds{R}}$ be a linear functional on the vector space $\mathcal{P}_{\omega}$ of real polynomials in the variable $\omega$ and let ${\mu}_{n} = \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}(\omega^{n})$. 
Such $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}$ is a \textit{moment functional}, ${\mu}_{n}$ is the \textit{moment} of $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}$ of order $n$ and the \textit{Hankel determinant} of order $n$ is given by \begin{equation*}\nonumber \Delta_n=\left|\begin{array}{ccc} \mu_0&\ldots &\mu_n\\ \mu_1&\ldots &\mu_{n+1}\\ \ldots&\ldots&\ldots\\ \mu_{n}&\ldots&\mu_{2n} \mathop{{\mathrm{e}}}nd{array}\right|. \mathop{{\mathrm{e}}}nd{equation*} The moment functionals $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}$ which we consider are assumed to be: \begin{enumerate}[(i)] \item\label{c1} positive definite, i.e., $\Delta_{n}>0$ for all $n$; such functionals also satisfy $\mu_{2n}> 0$; \item\label{c2} symmetric, i.e., $\mu_{2n+1}=0$ for all $n$; \item\label{c3} normalized, so that $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}(1)=\mu_0=1$. \mathop{{\mathrm{e}}}nd{enumerate} For functionals $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}$ which satisfy the above three conditions there exists a family \PPP\ of polynomials with real coefficients, such that \begin{enumerate}[(a)] \item \PPP\ is an orthonormal system with respect to \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}, i.e., for all $m,n$, \[\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}(\PP{m}{\omega}\,\PP{n}{\omega})=\delta(m-n);\] \item each polynomial $\PP{n}{\omega}$ contains only powers of $\omega$ of the same parity as $n$; \item $\PP{0}{\omega}=1$. \mathop{{\mathrm{e}}}nd{enumerate} A family of polynomials is the family of orthonormal polynomials corresponding to a symmetric positive definite moment functional \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ just in case there exists a sequence of reals $\gamma_n>0$ such that for all $n>0$, \begin{equation}\label{poly} \PP{n+1}{\omega}=\frac{\omega}{\gamma_{n}}\, \PP{n}{\omega}-\frac{\gamma_{n-1}}{\gamma_{n}}\, \PP{n-1}{\omega}. 
\mathop{{\mathrm{e}}}nd{equation} If we set $\gamma_{-1}=1$ and $\PP{-1}{\omega}=0$, then \mathop{{\mathrm{e}}}qref{poly} holds for $n=0$ as well. We will make use of the Christoffel-Darboux equality for orthogonal polynomials, \begin{equation}\label{CDP} (\omega-\sigma)\sum_{k=0}^{n} \PP{{k}}{\omega}\PP{{k}}{\sigma} = \gamma_n (\PP{{n+1}}{\omega}\PP{{n}}{\sigma}-\PP{{n+1}}{\sigma} \PP{{n}}{\omega}), \mathop{{\mathrm{e}}}nd{equation} and of its consequences, obtained by setting $\sigma=-\omega$ in \mathop{{\mathrm{e}}}qref{CDP} to get \begin{equation}\label{equal} \omega\left(\sum_{k=0}^{n}\PP{{2k+1}}{\omega}^2- \sum_{k=0}^{n}\PP{{2k}}{\omega}^2\right)= \gamma_{2n+1}\,\PP{{2n+2}}{\omega}\, \PP{{2n+1}}{\omega}, \mathop{{\mathrm{e}}}nd{equation} and by letting $\sigma\rightarrow\omega$ in \mathop{{\mathrm{e}}}qref{CDP} to get \begin{equation}\label{sumsquares} \sum_{k=0}^{n}\PP{{k}}{\omega}^2 = \gamma_n \, (\PP{{n+1}}{\omega}^\prime \PP{{n}}{\omega}-\PP{{n+1}}{\omega} \PP{{n}}{\omega}^\prime). \mathop{{\mathrm{e}}}nd{equation} For every positive definite moment functional \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ there exists a non-decreasing bounded function $a(\omega)$, called a \textit{moment distribution function}, such that for the associated Stieltjes integral we have \begin{equation}\label{l3} \int_{-\infty}^{\infty} \omega^{n}\,{\mathop{{\rm d}a(\omega)}}=\mu_n \mathop{{\mathrm{e}}}nd{equation} and such that for the corresponding family of polynomials $\PPP$ \begin{equation}\label{polyortho} \int_{-\infty}^{\infty} \PP{n}{\omega}\,\PP{m}{\omega} \,{\mathop{{\rm d}a(\omega)}}=\delta(m-n).
\mathop{{\mathrm{e}}}nd{equation} We denote by $\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$ the Hilbert space of functions $\varphi:\mathop{\mathds{R}}\rightarrow \mathop{\mathds{C}}$ for which the Lebesgue-Stieltjes integral $\int_{-\infty}^{\infty}|\varphi(\omega)|^2\, {\mathop{{\rm d}a(\omega)}}$ is finite, with the scalar product defined by $\doti{\alpha}{\beta}=\int_{-\infty}^{\infty}\alpha(\omega) \,\overline{\beta(\omega)}\, {\mathop{{\rm d}a(\omega)}}$, and with the corresponding norm denoted by $\noi{\varphi}$. We define a function $\mathbi{m}:\mathop{\mathds{R}}\rightarrow\mathop{\mathds{R}}$ as \begin{equation}\label{mm} \mathbi{m}(t)=\int_{-\infty}^{\infty}{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega t }{\mathop{{\rm d}a(\omega)}}. \mathop{{\mathrm{e}}}nd{equation} Since \begin{eqnarray*} \int_{-\infty}^{\infty}|(\mathop{{\mathrm{i}}}\omega)^{n} \,{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\,\omega t} | {\mathop{{\rm d}a(\omega)}} \leq\left(\int_{-\infty}^{\infty}\!\!\omega^{2n} {\mathop{{\rm d}a(\omega)}} \int_{-\infty}^{\infty}\!\!{\mathop{{\rm d}a(\omega)}}\right)^{1/2} \!\!\!=\sqrt{\mu_{2n}}<\infty, \mathop{{\mathrm{e}}}nd{eqnarray*} we can differentiate \mathop{{\mathrm{e}}}qref{mm} under the integral sign any number of times, and obtain that for all $n$, \begin{eqnarray}\label{dm} &\mathbi{m}^{(2n)}(0)=(-1)^n\mu_{2n};& \\ &\mathbi{m}^{(2n+1)}(0)=0.& \mathop{{\mathrm{e}}}nd{eqnarray} \subsection{The chromatic derivatives} Given a moment functional $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}$ satisfying conditions \mathop{{\mathrm{e}}}qref{c1} -- \mathop{{\mathrm{e}}}qref{c3} above, we associate with \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ a family of linear differential operators $\KK$ defined by the operator polynomial\footnote{Thus, to obtain $\K{n}$, one replaces $\omega^k$ in $\PP{n}{\omega}$ by $\mathop{{\mathrm{i}}}^k D^k_t$, where ${\mathop{{D}}}^k_t[f]=\frac{\mathrm{d}^k}{\mathrm{d}t^k}f(t)$.
We use the square brackets to indicate the arguments of operators acting on various function spaces. If $A$ is a linear differential operator, and if a function $f(t,\vec{w})$ has parameters $\vec{w}$, we write $A_{t}[f]$ to distinguish the variable $t$ of differentiation; if $f(t)$ contains only variable $t$, we write $A[f(t)]$ for $A_{t}[f(t)]$ and ${\mathop{{D}}}^k[f(t)]$ for ${\mathop{{D}}}^k_t [f(t)]$. } \begin{eqnarray*} \K{n}_t=\frac{1}{\mathop{{\mathrm{i}}}^n}\; P_{n}^{\scriptscriptstyle{\mathcal{M}}} \left(\mathop{{\mathrm{i}}} {\mathop{{D}}}_t\right), \mathop{{\mathrm{e}}}nd{eqnarray*} and call them \textit{the chromatic derivatives associated with} \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}. Since \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ is symmetric, such operators have real coefficients and satisfy the recurrence \begin{equation}\label{three-term} \K{n+1}=\frac{1}{{\gamma_{n}}}\,({\mathop{{D}}}\circ \K{n})+\frac{{\gamma_{n-1}}}{{\gamma_{n}}}\, \K{n-1}, \mathop{{\mathrm{e}}}nd{equation} with the same coefficients $\gamma_n>0$ as in \mathop{{\mathrm{e}}}qref{poly}. Thus, \begin{equation}\label{iwt} \K{n}_t[e^{\mathop{{\mathrm{i}}}\omega t}]= {\mathop{{\mathrm{i}}}}^n\PP{n}{\omega}\,e^{\mathop{{\mathrm{i}}}\omega t} \mathop{{\mathrm{e}}}nd{equation} and \begin{equation} \K{n}[\mathbi{m}](t)=\int_{-\infty}^{\infty} {\mathop{{\mathrm{i}}}}^{n}\,\PP{n}{\omega}\, {\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega t} {\mathop{{\rm d}a(\omega)}}. \mathop{{\mathrm{e}}}nd{equation} The basic properties of orthogonal polynomials imply that for all $m,n,$ \begin{equation}\label{orthonorm} (\K{n}\circ \K{m})[\mathbi{m}](0) =(-1)^{n}\delta(m-n), \mathop{{\mathrm{e}}}nd{equation} and, if $m < n$ or if $m-n$ is odd, then \begin{equation}\label{bkn} ({\mathop{{D}}}^m\circ\K{n})[\mathbi{m}](0)=0. 
\mathop{{\mathrm{e}}}nd{equation} The following Lemma corresponds to the Christoffel-Darboux equality for orthogonal polynomials and has a similar proof which uses \mathop{{\mathrm{e}}}qref{three-term} to represent the left hand side of \mathop{{\mathrm{e}}}qref{C-D} as a telescoping sum. \begin{lemma}[\!\!\cite{IG5}]\label{CD} Let $\KK$ be the family of chromatic derivatives associated with a moment functional $\mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}$, and let $f,g\in C^\infty$; then \begin{equation}\label{C-D} {\mathop{{D}}}\left[\sum_{m=0}^{n} \K{m}[f]\,\K{m}[g]\right]= {\gamma_{n}}\, (\K{n+1}[f]\,\K{n}[g]+\K{n}[f]\,\K{n+1}[g]). \mathop{{\mathrm{e}}}nd{equation} \mathop{{\mathrm{e}}}nd{lemma} \subsection{Chromatic expansions} Let $f$ be infinitely differentiable at a real or complex $u$; the formal series \begin{eqnarray}\label{cex} \mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](t)&=&\sum_{k=0}^{\infty}\K{k}[f](u)\;\K{k}_u[\mathbi{m}(t-u)]\\ &=&\sum_{k=0}^{\infty}(-1)^k\K{k}[f](u)\;\K{k}[\mathbi{m}](t-u)\nonumber \mathop{{\mathrm{e}}}nd{eqnarray} is called the \textit{chromatic expansion\/} of $f$ associated with \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}, centered at $u$, and \begin{equation*} \mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)=\sum_{k=0}^{n}(-1)^k\K{k}[f](u)\K{k}[\mathbi{m}](t-u) \mathop{{\mathrm{e}}}nd{equation*} is the \textit{chromatic approximation\/} of $f$ of order $n$. From \mathop{{\mathrm{e}}}qref{orthonorm} it follows that the chromatic approximation $\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)$ of order $n$ of $f(t)$ for all $m\leq n$ satisfies \begin{eqnarray*} \K{m}_{t} [\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)]\big |_{t=u}&=&\sum_{k=0}^{n} \,(-1)^k\K{k} [f](u)\, (\K{m}\circ\K{k})[\mathbi{m}](0)\ =\ \K{m}[f](u). 
\mathop{{\mathrm{e}}}nd{eqnarray*} Since $\K{m}$ is a linear combination of derivatives ${\mathop{{D}}}^k$ for $k\leq m$, also $f^{(m)}(u) = {\mathop{{D}}}^m_t[\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)]\big |_{t=u}$ for all $m\leq n$. In this sense, just like the Taylor approximation, a chromatic approximation is a local approximation. Thus, for all $m\leq n$, \begin{equation}\label{k-to-d} f^{(m)}(u) = {\mathop{{D}}}^m_t[\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)]\big |_{t=u} \!\!= \sum_{k=0}^{n}(-1)^k \, \K{k}[f](u)\,({\mathop{{D}}}^{m}\circ \K{k})[\mathbi{m}](0). \mathop{{\mathrm{e}}}nd{equation} Similarly, since ${{\mathop{{D}}}^m_t}\!\left[\sum_{k=0}^{n}f^{(k)}(u){(t-u)^k}/{k!}\right]\big |_{t=u}\!\!=f^{(m)}(u)$ for $m\leq n$, we also have \begin{equation}\label{d-to-k} \K{m}[f](u)= \K{m}_{t} \left[\sum_{k=0}^{n}f^{(k)}(u){(t-u)^k}/{k!}\right]_{t=u}\!\!=\! \sum_{k=0}^{n}f^{(k)}(u)\, \K{m}\left[t^k/k!\right](0). \mathop{{\mathrm{e}}}nd{equation} Equations \mathop{{\mathrm{e}}}qref{k-to-d} and \mathop{{\mathrm{e}}}qref{d-to-k} for $m=n$ relate the standard and the chromatic bases of the vector space of linear differential operators, \begin{eqnarray} {\mathop{{D}}}^{n} &=& \sum_{k=0}^{n}(-1)^k \, ({\mathop{{D}}}^{n} \circ \K{k})[\mathbi{m}](0)\; \K{k}; \label{inverse}\\ \K{n}&=&\sum_{k=0}^{n} \K{n}\left[t^k/k!\right](0)\; {\mathop{{D}}}^k.\label{direct} \mathop{{\mathrm{e}}}nd{eqnarray} Note that, since for $j>k$ all powers of $t$ in $\K{k}\left[{t^j}/{j!}\right]$ are positive, we have \begin{equation}\label{zero-mon} j>k\ \ \Rightarrow \ \ \K{k}\left[{t^j}/{j!}\right](0)=0. \mathop{{\mathrm{e}}}nd{equation} \section{Chromatic Moment Functionals}\label{sec3} \subsection{} We now introduce the broadest class of moment functionals which we study. \begin{definition} Chromatic moment functionals are symmetric positive definite moment functionals for which the sequence $\{\mu_n^{1/n}/n\}_{n\in\mathop{\mathds{N}}}$ is bounded.
\mathop{{\mathrm{e}}}nd{definition} If \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ is chromatic, we set \begin{equation}\label{limsup} \rho=\limsup_{n\rightarrow\infty} \left(\frac{\mu_n}{n!}\right)^{1/n}=\mathop{{\mathrm{e}}}\; \limsup_{n\rightarrow\infty} \frac{\mu_n^{1/n}}{n}<\infty. \mathop{{\mathrm{e}}}nd{equation} \begin{lemma}\label{meuw} Let \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ be a chromatic moment functional and $\rho$ such that \mathop{{\mathrm{e}}}qref{limsup} holds. Then for every $\alpha$ such that $0\leq \alpha < 1/\rho$ the corresponding moment distribution $a(\omega)$ satisfies \begin{equation}\label{weight} \int_{-\infty}^{\infty}{\mathop{{\mathrm{e}}}}^{\alpha|\omega|}{\mathop{{\rm d}a(\omega)}} <\infty. \mathop{{\mathrm{e}}}nd{equation} \mathop{{\mathrm{e}}}nd{lemma} \begin{proof} For all $b>0$, $\int_{-b}^{b}{\mathop{{\mathrm{e}}}}^{\alpha|\omega|}{\mathop{{\rm d}a(\omega)}} = \sum_{n=0}^{\infty}{\alpha^n}/{n!}\int_{-b}^{b}|\omega|^n{\mathop{{\rm d}a(\omega)}}.$ For even $n$ we have $\int_{-b}^{b}\omega^n{\mathop{{\rm d}a(\omega)}}\leq\mu_n.$ For odd $n$ we have $|\omega|^n<1+\omega^{n+1}$ for all $\omega$, and thus $\int_{-b}^{b}|\omega|^n{\mathop{{\rm d}a(\omega)}} <\int_{-b}^{b}{\mathop{{\rm d}a(\omega)}} + \int_{-b}^{b}\omega^{n+1}{\mathop{{\rm d}a(\omega)}}\leq 1 + \mu_{n+1}.$ Let $\beta_n=\mu_n$ if $n$ is even, and $\beta_n=1+\mu_{n+1}$ if $n$ is odd. Then also $\int_{-\infty}^{\infty}{\mathop{{\mathrm{e}}}}^{\alpha|\omega|}{\mathop{{\rm d}a(\omega)}} \leq \sum_{n=0}^{\infty}{\alpha^n}\beta_n/{n!}$. Since $\limsup_{n\rightarrow\infty}(\beta_n/n!)^{1/n}=\rho$ and $0\leq \alpha<1/\rho$, the last sum converges to a finite limit. \mathop{{\mathrm{e}}}nd{proof} On the other hand, the proof of Theorem 5.2 in \S II.5 of \cite{Fr} shows that if \mathop{{\mathrm{e}}}qref{weight} holds for some $\alpha> 0$, then \mathop{{\mathrm{e}}}qref{limsup} also holds for some $\rho\leq 1/\alpha$. Thus, we get the following Corollary. 
\begin{corollary} A symmetric positive definite moment functional is chromatic just in case for some $\alpha > 0$ the corresponding moment distribution function $a(\omega)$ satisfies \mathop{{\mathrm{e}}}qref{weight}. \mathop{{\mathrm{e}}}nd{corollary} \noindent \textbf{Note:} \textit{For the remaining part of this section we assume that \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ is a chromatic moment functional.}\\ For every $a>0$, we let $S(a)=\{z\in \mathop{\mathds{C}}\;:\;|\mathrm{Im}(z)|<a\}$. The following Corollary directly follows from Lemma \ref{meuw}. \begin{corollary}\label{euw} If $u\in \st{2}$, then ${\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}} u\; \omega}\in\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$. \mathop{{\mathrm{e}}}nd{corollary} We now extend function $\mathbi{m}(t)$ given by \mathop{{\mathrm{e}}}qref{mm} from $\mathop{\mathds{R}}$ to the complex strip $\st{}$. \begin{proposition}\label{anaz} Let for $z\in \st{}$, \begin{equation}\label{mmz} \mathbi{m}(z)=\int_{-\infty}^{\infty}{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega z }{\mathop{{\rm d}a(\omega)}}. \mathop{{\mathrm{e}}}nd{equation} Then $\mathbi{m}(z)$ is analytic on the strip $\st{}$. \mathop{{\mathrm{e}}}nd{proposition} \begin{proof} Fix $n$ and let $z=x+\mathop{{\mathrm{i}}} y$ with $|y|<1/\rho$; then for every $b>0$, \begin{equation*} \int_{-b}^{b}| (\mathop{{\mathrm{i}}} \omega)^n{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega z }|{\mathop{{\rm d}a(\omega)}} \leq\int_{-b}^{b}|\omega|^n {\mathop{{\mathrm{e}}}}^{|\omega y|}{\mathop{{\rm d}a(\omega)}} =\sum_{k=0}^{\infty}\frac{|y|^k}{k!}\int_{-b}^{b} |\omega|^{n+k}{\mathop{{\rm d}a(\omega)}}. 
\mathop{{\mathrm{e}}}nd{equation*} As in the proof of Lemma \ref{meuw}, we let $\beta_k=\mu_{n+k}$ for even values of $n+k$, and $\beta_k=1+\mu_{n+k+1}$ for odd values of $n+k$; then the above inequality implies \[\int_{-\infty}^{\infty}| (\mathop{{\mathrm{i}}} \omega)^n{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega z }|{\mathop{{\rm d}a(\omega)}} \leq\sum_{k=0}^{\infty}\frac{|y|^k\beta_k}{k!},\] and it is easy to see that for every fixed $n$, $\limsup_{k\rightarrow\infty} (\beta_k/k!)^{1/k}=\rho$. \mathop{{\mathrm{e}}}nd{proof} \begin{proposition}\label{f-phi} Let $\varphi(\omega)\in \mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$; we can define a corresponding function $f_{\varphi}:\st{2}\rightarrow \mathop{\mathds{C}}$ by \begin{equation}\label{f-int} f_{\varphi}(z)=\int_{-\infty}^{\infty}\varphi(\omega) e^{{\mathop{{\mathrm{i}}}}\omega z }\,{\mathop{{\rm d}a(\omega)}}. \mathop{{\mathrm{e}}}nd{equation} Such $f_{\varphi}(z)$ is analytic on $\st{2}$ and for all $n$ and $z\in \st{2}$, \begin{equation}\label{fourier-int-der} \K{n}[f_{\varphi}](z)=\int_{-\infty}^{\infty} {\mathop{{\mathrm{i}}}}^{n}\,\PP{n}{\omega}\, \varphi(\omega)\, {\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega z} {\mathop{{\rm d}a(\omega)}}. \mathop{{\mathrm{e}}}nd{equation} \mathop{{\mathrm{e}}}nd{proposition} \begin{proof} Let $z = x +\mathop{{\mathrm{i}}} y$, with $|y|<1/(2\rho)$. 
For every $n$ and $b>0$ we have \begin{eqnarray*} &&\hspace*{-20mm}\int_{-b}^{b}| (\mathop{{\mathrm{i}}} \omega)^n \varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega z }|{\mathop{{\rm d}a(\omega)}}\\ &\leq&\int_{-b}^{b}|\omega|^n |\varphi(\omega) |{\mathop{{\mathrm{e}}}}^{|\omega y|}{\mathop{{\rm d}a(\omega)}}\\ &\leq&\sum_{k=0}^{\infty}\frac{|y|^k}{k!}\int_{-b}^{b} |\omega|^{n+k}|\varphi(\omega)| {\mathop{{\rm d}a(\omega)}}\\ &\leq&\sum_{k=0}^{\infty}\frac{|y|^k}{k!} \left(\int_{-b}^{b} \omega^{2n+2k}{\mathop{{\rm d}a(\omega)}}\int_{-b}^{b} |\varphi(\omega)|^2 {\mathop{{\rm d}a(\omega)}}\right)^{1/2}. \mathop{{\mathrm{e}}}nd{eqnarray*} Thus, also \[\int_{-\infty}^{\infty}| (\mathop{{\mathrm{i}}} \omega)^n \varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega z }|{\mathop{{\rm d}a(\omega)}}\leq \noi{\varphi(\omega)} \sum_{k=0}^{\infty}\frac{|y|^k}{k!}\sqrt{\mu_{2n+2k}}.\] The claim now follows from the fact that $\limsup_{k\rightarrow\infty} {\sqrt[2k]{\mu_{2n+2k}}}/{\sqrt[k]{k!}}=2\rho$ for every fixed $n$. \mathop{{\mathrm{e}}}nd{proof} \begin{lemma}\label{complete} If \mathop{{\mathrm{e}}}nsuremath{{\mathcal{M}}}\ is chromatic, then $\PPP$ is a complete system in $\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$. 
\mathop{{\mathrm{e}}}nd{lemma} \begin{proof} Follows from a theorem of Riesz (see, for example, Theorem 5.1 in \S II.5 of \cite{Fr}) which asserts that if $\liminf_{n\rightarrow\infty} \left({\mu_n}/{n!}\right)^{1/n}<\infty$, then $\PPP$ is a complete system in $\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$.\footnote{Note that we need the stronger condition $\limsup_{n\rightarrow\infty} \left({\mu_n}/{n!}\right)^{1/n}<\infty$ to insure that function $\mathbi{m}(z)$ defined by \mathop{{\mathrm{e}}}qref{mmz} is analytic on a strip (Proposition~\ref{anaz}).} \mathop{{\mathrm{e}}}nd{proof} \begin{proposition}\label{some} Let $\varphi(\omega)\in\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$; if for some fixed $u\in\st{2}$ the function $\varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}$ also belongs to $\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$, then in $\mathop{{\mathrm{e}}}nsuremath{L^2_{a(\omega)} }$ we have \begin{equation}\label{ft-expand} \varphi(\omega)e^{{\mathop{{\mathrm{i}}}}\omega u }=\sum_{n=0}^{\infty}(-{\mathop{{\mathrm{i}}}})^{n} \K{n}[f_{\varphi}](u)\,\PP{n}{\omega}, \mathop{{\mathrm{e}}}nd{equation} and for $f_{\varphi}(z)$ given by \mathop{{\mathrm{e}}}qref{f-int} we have \begin{equation}\label{inde} \sum_{n=0}^{\infty} |\K{n}[f_{\varphi}](u)|^2= \noi{\varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}}^{2} <\infty. \mathop{{\mathrm{e}}}nd{equation} \mathop{{\mathrm{e}}}nd{proposition} \begin{proof} By Proposition \ref{f-phi}, if $u\in\st{2}$, then equation \mathop{{\mathrm{e}}}qref{fourier-int-der} holds for the corresponding $f_{\varphi}$ given by \mathop{{\mathrm{e}}}qref{f-int}. 
If also $\varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}\in\ensuremath{L^2_{a(\omega)} }$, then \eqref{fourier-int-der} implies that \begin{equation}\label{proj} \doti{\varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}}{\PP{n}{\omega}}= (-{\mathop{{\mathrm{i}}}})^n\K{n}[f_{\varphi}](u). \end{equation} Since $\{\PP{n}{\omega}\}_{n\in\mathop{\mathds{N}}}$ is a complete orthonormal system in $\ensuremath{L^2_{a(\omega)} }$, \eqref{proj} implies \eqref{ft-expand}, and Parseval's Theorem implies \eqref{inde}. \end{proof} \begin{corollary}\label{constant} For every $\varphi(\omega)\in\ensuremath{L^2_{a(\omega)} }$ and every $u\in\mathop{\mathds{R}}$, equality \eqref{ft-expand} holds and \begin{equation}\sum_{n=0}^{\infty} |\K{n}[f_{\varphi}](u)|^2 =\noi{\varphi(\omega)}^{2}. \end{equation} Thus, the sum $\sum_{n=0}^{\infty} |\K{n}[f_{\varphi}](u)|^2$ is independent of $u\in \mathop{\mathds{R}}$. \end{corollary} \begin{proof} If $u\in\mathop{\mathds{R}}$, then $\varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u }\in\ensuremath{L^2_{a(\omega)} }$ and $\noi{\varphi(\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u }}^{2}=\noi{\varphi(\omega)}^{2}$. \end{proof} \begin{corollary}\label{sum-squares} Let $\varepsilon>0$; then for all $u\in S(\frac{1}{2\rho}-\varepsilon)$ \begin{equation} \sum_{n=0}^{\infty} |\K{n}[\mathbi{m}](u)|^2 < \noi{{\mathop{{\mathrm{e}}}}^{(\frac{1}{2\rho}-\varepsilon) |\omega| }}^2<\infty.
\end{equation} \end{corollary} \begin{proof} Corollary \ref{euw} implies that we can apply Proposition \ref{some} with $\varphi(\omega)=1$, in which case $f_{\varphi}(z)=\mathbi{m}(z)$, and, using Lemma \ref{meuw}, obtain \begin{equation*} \sum_{n=0}^{\infty} |\K{n}[\mathbi{m}](u)|^2= \noi{{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}}^{2}\leq \noi{{\mathop{{\mathrm{e}}}}^{|\mathrm{Im}(u)\,\omega|}}^{2}< \noi{{\mathop{{\mathrm{e}}}}^{(\frac{1}{2\rho}-\varepsilon) |\omega| }}^2<\infty. \end{equation*} \end{proof} \begin{definition} $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ is the vector space of functions $f:\st{2} \rightarrow \mathop{\mathds{C}}$ which are analytic on $\st{2}$ and satisfy $\sum_{n=0}^{\infty}|\K{n}[f](0)|^2<\infty$. \end{definition} \begin{proposition}\label{bijection} The mapping \begin{equation}\label{ft0} f(z)\mapsto \varphi_f(\omega)=\sum_{n=0}^{\infty} (-{\mathop{{\mathrm{i}}}})^{n}\K{n}[f](0)\,\PP{n}{\omega} \end{equation} is an isomorphism between the vector spaces $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ and $\ensuremath{L^2_{a(\omega)} }$, and its inverse is given by \eqref{f-int}. \end{proposition} \begin{proof} Let $f\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$; since $\sum_{n=0}^{\infty} |\K{n}[f](0)|^2<\infty$, the function $\varphi_f(\omega)$ defined by \eqref{ft0} belongs to $\ensuremath{L^2_{a(\omega)} }$.
By Proposition~\ref{f-phi}, $f_{\varphi_f}$ defined from $\varphi_f$ by \eqref{f-int} is analytic on $\st{2}$ and by Proposition~\ref{some} it satisfies $\varphi_f(\omega)=\sum_{n=0}^{\infty}(-{\mathop{{\mathrm{i}}}})^{n} \K{n}[f_{\varphi_f}](0)\,\PP{n}{\omega}.$ By the uniqueness of the Fourier expansion of $\varphi_f(\omega)$ with respect to the system \PPP\ we have $\K{n}[f](0)=\K{n}[f_{\varphi_f}](0)$ for all $n$. Since each $\K{n}$ is a polynomial of degree $n$ in the differentiation operator, the values $\K{n}[f](0)$, $n\in\mathop{\mathds{N}}$, determine all Taylor coefficients of $f$ at $0$; thus, $f(z)=f_{\varphi_f}(z)$ for all $z\in \st{2}$. \end{proof} Proposition \ref{bijection} and Corollary \ref{constant} imply the following Corollary. \begin{corollary}\label{independent} For all $f\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ and all $t\in\mathop{\mathds{R}}$ the sum $\sum_{n=0}^{\infty}|\K{n}[f](t)|^2$ converges and is independent of $t$. \end{corollary} \begin{definition} For every $f(z)\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ we call the corresponding $\varphi_f(\omega)\in \ensuremath{L^2_{a(\omega)} }$ given by equation \eqref{ft0} the \ensuremath{{\mathcal{M}}}-Fourier-Stieltjes transform of $f(z)$ and denote it by $\mathcal{F}^{\Mi}[f](\omega)$. \end{definition} Assume that $a(\omega)$ is absolutely continuous, so that $a^\prime(\omega)= \mathop{\mathrm{w}}(\omega)$ almost everywhere for some non-negative weight function $\mathop{\mathrm{w}}(\omega)$. Then \eqref{f-int} implies \begin{equation*} f(z)=\int_{-\infty}^{\infty}\mathcal{F}^{\Mi}[f](\omega) \;{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}}\omega z}\mathop{\mathrm{w}}(\omega)\;{\rm d}\omega. \end{equation*} This implies the following Proposition.
\begin{proposition}[\!\!\cite{IG5}]\label{weight-space} Assume that a function $f(z)$ is analytic on the strip $\st{2}$ and that it has a Fourier transform $\widehat{f}(\omega)=\int_{-\infty}^{\infty}f(t)\, {\mathop{{\mathrm{e}}}}^{-\mathop{{\mathrm{i}}}\omega t} {\rm d} t$ such that $f(z)=\frac{1}{2\pi}\int_{-\infty}^{\infty} \widehat{f}(\omega)\;{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}}\omega z}\;{\rm d}\omega$ for all $z\in\st{2}$; then $f(z)\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ if and only if $\int_{-\infty}^{\infty}|\widehat{f}(\omega)|^2\; \mathop{\mathrm{w}}(\omega)^{-1}\;{\rm d}\omega<\infty,$ in which case $\widehat{f}(\omega)=2\pi\; \mathcal{F}^{\Mi}[f](\omega)\mathop{\mathrm{w}}(\omega)$. \end{proposition} \subsection{Uniform convergence of chromatic expansions} The Shannon expansion of an $f\in\BL{\pi}$ is obtained by representing its Fourier transform $\widehat{f}(\omega)$ as a series in the trigonometric system; similarly, the chromatic expansion of an $f\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ is obtained by representing $\mathcal{F}^{\Mi}[f](\omega)$ as a series in the orthogonal polynomials \PPP. \begin{proposition}\label{unif-con} Assume $f\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$; then for all $u\in\mathop{\mathds{R}}$ and $\varepsilon>0$, the chromatic series $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ of $f(z)$ converges to $f(z)$ uniformly on the strip $S(\frac{1}{2\rho}-\varepsilon)$.
\end{proposition} \begin{proof} Assume $u\in\mathop{\mathds{R}}$; by applying \eqref{fourier-int-der} to $\mathbi{m}(z-u)$ we get that for all $z\in\st{2}$, \begin{equation}\label{exp-CA}\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](z)= \int_{-\infty}^{\infty}\sum_{k=0}^{n}(-{\mathop{{\mathrm{i}}}})^{k} \K{k}[f](u)\, \PP{k}{\omega}{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}} \omega (z-u)}{\mathop{{\rm d}a(\omega)}}. \end{equation} Since $f\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$, Proposition \ref{bijection} implies $\mathcal{F}^{\Mi}[f](\omega)\in\ensuremath{L^2_{a(\omega)} }$. Corollary~\ref{constant} and equation \eqref{ft-expand} imply that in $\ensuremath{L^2_{a(\omega)} }$ \[ \mathcal{F}^{\Mi}[f](\omega)\;{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}}\omega u}= \sum_{k=0}^{\infty}(-{\mathop{{\mathrm{i}}}})^{k} \K{k}[f](u)\,\PP{k}{\omega}. \] Thus, \begin{eqnarray}\label{f-ex} f(z)=\int_{-\infty}^{\infty}\sum_{k=0}^{\infty}(-{\mathop{{\mathrm{i}}}})^{k} \K{k}[f](u)\,\PP{k}{\omega}{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}} \omega (z-u)}{\mathop{{\rm d}a(\omega)}}. \end{eqnarray} Consequently, from \eqref{exp-CA} and \eqref{f-ex}, \begin{eqnarray*} |f(z)-\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](z)|\\ &&\hspace*{-50mm}\leq\int_{-\infty}^{\infty}\left| \sum_{k=n+1}^{\infty}(-{\mathop{{\mathrm{i}}}})^{k} \K{k}[f](u)\,\PP{k}{\omega}{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}} \omega (z-u)} \right|{\mathop{{\rm d}a(\omega)}}\nonumber\\ &&\hspace*{-50mm}\leq\left(\!\int_{-\infty}^{\infty}\!
\left|\sum_{k=n+1}^{\infty}(-{\mathop{{\mathrm{i}}}})^{k} \K{k}[f](u)\,\PP{k}{\omega}\right|^2\!{\mathop{{\rm d}a(\omega)}} \int_{-\infty}^{\infty}|{\mathop{{\mathrm{e}}}}^{\mathop{{\mathrm{i}}}\omega(z-u)}|^2{\mathop{{\rm d}a(\omega)}}\!\right)^{1/2}. \end{eqnarray*} For $z\in S(\frac{1}{2\rho}-\varepsilon)$ we have \begin{equation}\label{convergence} |f(z)-\mathrm{CA}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](z)|\leq\!\left(\sum_{k=n+1}^{\infty}| \K{k}[f](u)|^2 \!\!\int_{-\infty}^{\infty} {\mathop{{\mathrm{e}}}}^{(\frac{1}{\rho}-2\varepsilon)|\omega|}{\mathop{{\rm d}a(\omega)}}\right)^{1/2}. \end{equation} Consequently, Lemma \ref{meuw} and Corollary \ref{independent} imply that $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ converges to $f(z)$ uniformly on $S(\frac{1}{2\rho}-\varepsilon)$. \end{proof} \begin{proposition}\label{rep} The space $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ consists precisely of the functions of the form $f(z)=\sum_{n=0}^{\infty}a_{n} \K{n}[\mathbi{m}](z)$, where $a=\langle\!\langle a_n\rangle\!\rangle_{n\in\mathop{\mathds{N}}}$ is a complex sequence in $l^2$. \end{proposition} \begin{proof} Assume $a\in l^2$; by Corollary \ref{sum-squares}, for every $\varepsilon>0$, if $z\in S(\frac{1}{2\rho}-\varepsilon)$ then \begin{eqnarray*} \sum_{n=k}^{\infty}|a_{n}\K{n}[\mathbi{m}](z)|&\leq& \left(\sum_{n=k}^{\infty}|a_{n}|^2\sum_{n=k}^{\infty} |\K{n}[\mathbi{m}](z)|^2\right)^{1/2}\\ &\leq &\left(\sum_{n=k}^{\infty}|a_{n}|^2\right)^{1/2} \noi{{\mathop{{\mathrm{e}}}}^{(\frac{1}{2\rho}-\varepsilon) |\omega| }}, \end{eqnarray*} which implies that the series converges absolutely and uniformly on $S(\frac{1}{2\rho}-\varepsilon)$.
Consequently, $f(z)=\sum_{n=0}^{\infty}a_{n} \K{n}[\mathbi{m}](z)$ is analytic on $\st{2}$, and \begin{eqnarray*} \K{m}[f](0)= \sum_{n=0}^{\infty}a_n(\K{m}\circ\K{n})[\mathbi{m}](0)=(-1)^ma_m. \end{eqnarray*} Thus, $\sum_{n=0}^{\infty} |\K{n}[f](0)|^2=\sum_{n=0}^{\infty} |a_n|^2<\infty$ and so $f(z)\in \ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$. Proposition \ref{unif-con} provides the opposite direction. \end{proof} Note that Proposition \ref{rep} implies that for every $\varepsilon>0$ every function $f(z)\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ is bounded on the strip $S(\frac{1}{2\rho}-\varepsilon)$ because \begin{equation*} |f(z)|\leq \left(\sum_{n=0}^{\infty}|\K{n}[f](0)|^2\right)^{1/2} \noi{{\mathop{{\mathrm{e}}}}^{(\frac{1}{2\rho}-\varepsilon) |\omega|}}. \end{equation*} \subsection{A function space with a locally defined scalar product} \begin{definition} $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ is the space of functions $f:\mathop{\mathds{R}} \rightarrow \mathop{\mathds{C}}$ obtained from functions in $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ by restricting their domain to $\mathop{\mathds{R}}$. \end{definition} Assume that $f,g\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$; then \eqref{proj} implies that for all $u\in\mathop{\mathds{R}}$, \begin{eqnarray*} \sum_{n=0}^{\infty} \K{n}[f](u)\overline{\K{n}[g](u)}&=& \doti{\mathcal{F}^{\Mi}[f](\omega){\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}}{\mathcal{F}^{\Mi}[g](\omega) {\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}}\\ &=&\doti{\mathcal{F}^{\Mi}[f](\omega)}{\mathcal{F}^{\Mi}[g](\omega)}.
\end{eqnarray*} Note that for all $t,u\in \mathop{\mathds{R}}$, \begin{eqnarray*} &&\sum_{k=0}^{n}\K{k}[f](u)\,\K{k}_u[g(t-u)]\\ &&\hspace*{10mm}=\int_{-\infty}^{\infty}\sum_{k=0}^{n} \K{k}[f](u)\,({-\mathop{{\mathrm{i}}}})^{k}\, \PP{k}{\omega}\ \mathcal{F}^{\Mi}[g](\omega)\, {\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega (t-u)}{\mathop{{\rm d}a(\omega)}}. \end{eqnarray*} By \eqref{ft-expand}, the sum $\sum_{k=0}^{n}({-\mathop{{\mathrm{i}}}})^{k} \K{k}[f](u)\, \PP{k}{\omega}$ converges in $\ensuremath{L^2_{a(\omega)} }$ to $\mathcal{F}^{\Mi}[f](\omega)\;{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega u}$. Since $\mathcal{F}^{\Mi}[g](\omega)\, {\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega (t-u)}\in\ensuremath{L^2_{a(\omega)} }$, and since \[\left|\int_{-\infty}^{\infty}\mathcal{F}^{\Mi}[g](\omega)\, \mathcal{F}^{\Mi}[f](\omega)\,{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega t}{\mathop{{\rm d}a(\omega)}}\right|\leq\noi{\mathcal{F}^{\Mi}[f](\omega)}\noi{\mathcal{F}^{\Mi}[g](\omega)},\] we have that for all $t,u\in \mathop{\mathds{R}}$, \begin{equation*} \sum_{k=0}^{\infty}\K{k}[f](u)\, \K{k}_u[g(t-u)] =\int_{-\infty}^{\infty}\mathcal{F}^{\Mi}[g](\omega)\, \mathcal{F}^{\Mi}[f](\omega)\,{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega t}{\mathop{{\rm d}a(\omega)}}<\infty.
\end{equation*} \begin{proposition}[\!\!\cite{IG5}]\label{local-space-gen} We can introduce a locally defined scalar product, an associated norm and a convolution of functions in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}\ by the following sums, which are independent of $u\in\mathop{\mathds{R}}:$ \begin{eqnarray} \norm{f}^2&=&\sum_{n=0}^{\infty}|K^n[f](u)|^2= \noi{\mathcal{F}^{\Mi}[f](\omega)}^2;\label{nor}\\ \Mscal{f}{g}&=&\sum_{n=0}^{\infty}K^n[f](u)\overline{K^n[g](u)}\\ &=&\doti{\mathcal{F}^{\Mi}[f](\omega)}{\mathcal{F}^{\Mi}[g](\omega)};\label{scl}\nonumber\\ (f \ast_{~_{\!\!\!{\mathcal{M}}}} g)(t)&=&\sum_{n=0}^{\infty}K^n[f](u) K^n_u[g(t-u)]\label{convolution}\\ &=&\int_{-\infty}^{\infty} {\mathcal{F}^{\Mi}[f](\omega)}\;{\mathcal{F}^{\Mi}[g](\omega)}{\mathop{{\mathrm{e}}}}^{{\mathop{{\mathrm{i}}}}\omega t} {\mathop{{\rm d}a(\omega)}}.\nonumber \end{eqnarray} \end{proposition} Letting $g(t)\equiv\mathbi{m}(t)$ in \eqref{convolution}, we get $(f\ast_{~_{\!\!\!{\mathcal{M}}}}\mathbi{m})(t)=\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](t)=f(t)$ for all $f(t)\in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, while by setting $u=0$, $u=t$ and $u=t/2$ in \eqref{convolution}, we get the following lemma.
\begin{lemma}[\!\cite{IG5}]\label{con} For every $f,g\in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ and for every $t\in\mathop{\mathds{R}},$ \begin{eqnarray}\label{symm} \sum_{k=0}^{\infty} (-1)^k\,\K{k}[f](t)\,\K{k}[g](0) &=&\sum_{k=0}^{\infty}(-1)^k\,\K{k}[f](0)\, \K{k}[g](t)\nonumber\\ &=&\sum_{k=0}^{\infty}(-1)^k\,\K{k}[f](t/2)\, \K{k}[g](t/2).\nonumber \end{eqnarray} \end{lemma} Since $\mathbi{m}(z)$ is analytic on $\st{}$, so are $\K{n}[\mathbi{m}](z)$ for all $n$; thus, since by \eqref{orthonorm} $\sum_{m=0}^{\infty}(\K{n}\circ\K{m})[\mathbi{m}](0)^2=1$, we have $\K{n}[\mathbi{m}](t)\in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ for all $n$. Let $u$ be a fixed real parameter; consider the functions $B^{n}_{u}(t)=\K{n}_u[\mathbi{m}(t-u)]= (-1)^n\K{n}[\mathbi{m}](t-u)\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$. Since \begin{eqnarray}\label{orthob} &&\Mscal{B^{n}_{u}(t)}{B^{m}_{u}(t)}\\ &&\hspace*{5mm}=\sum_{k=0}^{\infty}(\K{k}_t\circ\K{n}_u)[\mathbi{m}(t-u)] (\K{k}_t\circ\K{m}_u)[\mathbi{m}(t-u)]\nonumber\\ &&\hspace*{5mm}=\sum_{k=0}^{\infty}(-1)^{m+n}(\K{k}\circ\K{n})[\mathbi{m}](t-u) (\K{k}\circ\K{m})[\mathbi{m}](t-u)\Big|_{t=u}\nonumber\\ &&\hspace*{5mm}=\delta(m-n),\nonumber \end{eqnarray} the family $\{B^{n}_{u}(t)\}_{n\in\mathop{\mathds{N}}}$ is orthonormal in $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ and for all $n\in\mathop{\mathds{N}}$ and all $t\in\mathop{\mathds{R}}$, \begin{equation}\label{sumsq} \sum_{k=0}^{\infty}(\K{k}\circ\K{n})[\mathbi{m}](t)^2=1.
\end{equation} By \eqref{orthonorm}, for $f\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, \begin{eqnarray}\label{projf} \Mscal{f}{\K{n}_u[\mathbi{m}(t-u)]}&=& \sum_{k=0}^{\infty} \K{k}[f](t)(\K{k}_t\circ\K{n}_u)[\mathbi{m}(t-u)]\Big|_{t=u}\nonumber\\ &=& \sum_{k=0}^{\infty}(-1)^n \K{k}[f](u)(\K{k}\circ\K{n})[\mathbi{m}](0)\nonumber\\ &=&\K{n}[f](u). \end{eqnarray} \begin{proposition}[\!\!\cite{IG5}]\label{in-LL} The chromatic expansion $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](t)$ of $f(t)\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ is the Fourier series of $f(t)$ with respect to the orthonormal system $\{\K{n}_u[\mathbi{m}(t-u)]\}_{n\in\mathop{\mathds{N}}}$. The chromatic expansion converges to $f(t)$ in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}; thus, $\{\K{n}_u[\mathbi{m}(t-u)]\}_{n\in\mathop{\mathds{N}}}$ is a complete orthonormal base of \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}. \end{proposition} \begin{proof} Since $\K{k}_t[f(t)-\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)]|_{t=u}$ equals $0$ for $k\leq n$ and equals $\K{k}[f](u)$ for $k>n$, $\norm{f-\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u]}^2= \sum_{k=n+1}^{\infty}\K{k}[f](u)^2\rightarrow 0$.
\end{proof} Note that using \eqref{sumsq} with $n=0$ we get \begin{eqnarray} |f(t)-\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,n,u](t)|&\leq&\sum_{k=n+1}^{\infty} |\K{k}[f](u)\K{k}[\mathbi{m}](t-u)|\nonumber\\ &&\hspace*{-25mm}\leq\left(\sum_{k=n+1}^{\infty}\K{k}[f](u)^2 \sum_{k=n+1}^{\infty}\K{k}[\mathbi{m}](t-u)^2\right)^{1/2}\nonumber\\ &&\hspace*{-25mm}=\left(\sum_{k=n+1}^{\infty}\K{k}[f](u)^2\right)^{1/2} \left(1-\sum_{k=0}^{n}\K{k}[\mathbi{m}](t-u)^2\right)^{1/2}.\label{error} \end{eqnarray} Let \begin{eqnarray*} E_n(t)=\left(1-\sum_{k=0}^{n}\K{k}[\mathbi{m}](t)^2\right)^{1/2}; \end{eqnarray*} then, using Lemma \ref{CD}, we have \begin{eqnarray*} E_n^\prime(t)=-\gamma_n\;\K{n+1}[\mathbi{m}](t)\;\K{n}[\mathbi{m}](t) \left(1-\sum_{k=0}^{n}\K{k}[\mathbi{m}](t)^2\right)^{-1/2}. \end{eqnarray*} Since $(D^{k}\circ\K{n})[\mathbi{m}](0)=0$ for all $0\leq k\leq n-1$, we get that $E_n^{(k)}(0)=0$ for all $k\leq 2n+1$. Thus, $E_{n}(0)=0$ and $E_{n}(t)$ is very flat around $t=0$, as the following graph of $E_{15}(t)$ shows for the particular case of the chromatic derivatives associated with the Legendre polynomials. This explains why chromatic expansions provide excellent local approximations of signals $f\in\BL{\pi}$. \begin{figure}[h] \begin{center} \includegraphics[width = 2.5in]{error.eps} \caption{Error bound $E_{15}(t)$ for the chromatic approximation of order $15$ associated with the Legendre polynomials.} \end{center} \end{figure} \subsection{Chromatic expansions and linear operators} Let $A$ be a linear operator on $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ which is continuous with respect to the norm $\norm{f}$.
If $A$ is shift invariant, i.e., if for every fixed $h$, $A[f(t+h)]=A[f](t+h)$ for all $f\in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, then $A$ commutes with differentiation on \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}\ and \begin{equation*} A[f](t)=\sum_{n=0}^{\infty} \,(-1)^{n}\K{n} [f](u)\, \K{n}[A[\mathbi{m}]](t-u) =(f\ast_{~_{\!\!\!{\mathcal{M}}}}A[\mathbi{m}])(t). \end{equation*} Consequently, the action of such an $A$ on any function in $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ is uniquely determined by $A[\mathbi{m}]$, which plays the role of the \emph{impulse response} $A[\sinc]$ of a \emph{continuous time-invariant linear system} in the standard signal processing paradigm based on Shannon's expansion. Note that if $A$ is a continuous linear operator on \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}\ such that $(A\circ D^n)[\mathbi{m}](t)=(D^n\circ A)[\mathbi{m}](t)$ for all $n$, then Lemma~\ref{con} implies that for every $f\in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, \begin{eqnarray*} A[f](t)&=& \sum_{n=0}^{\infty} \,(-1)^{n}\K{n} [f](0)\, \K{n}[A[\mathbi{m}]](t) \\ &=&\sum_{n=0}^{\infty} \,(-1)^{n} \K{n}[A[\mathbi{m}]](0)\, \K{n} [f](t). \end{eqnarray*} Since the operators $\K{n}$ are shift invariant, such an $A$ must also be shift invariant.
\subsection{A geometric interpretation} For every particular value of $t\in \mathop{\mathds{R}}$ the mapping of \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}\ into $l^2$ given by $f\mapsto f_t=\langle\!\langle \K{n}[f](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}$ is a unitary isomorphism which maps the base of \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}, consisting of the vectors $B^{k}(t)=(-1)^k\K{k}[\mathbi{m}](t)$, into the vectors $B^{k}_{t}= \langle\!\langle(-1)^k(\K{n}\circ\K{k})[\mathbi{m}](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}$. Since the first sum in \eqref{orthob} is independent of $t$, we have $\langle B^{k}_{t},B^{m}_{t}\rangle=\delta(m-k)$, and \eqref{projf} implies $\langle f_t,B^{k}_{t}\rangle=\K{k}[f](0)$. Thus, since $\sum_{k=0}^{\infty}\K{k}[f](0)^2< \infty$, we have $\sum_{k=0}^{\infty}\K{k}[f](0)\,B^{k}_{t}\in l^2$ and \begin{eqnarray*} \sum_{k=0}^{\infty}\langle f_t,B^{k}_{t}\rangle\; B^{k}_{t}&=& \sum_{k=0}^{\infty}\K{k}[f](0)\,B^{k}_{t}\\&=& \Big\langle\!\!\Big\langle\sum_{k=0}^\infty\K{k}[f](0) (-1)^k(\K{n}\circ\K{k})[\mathbi{m}](t)\Big\rangle\!\! \Big\rangle_{n\in\mathop{\mathds{N}}}. \end{eqnarray*} Since for $f\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ the chromatic series of $f$ converges uniformly on $\mathop{\mathds{R}}$, we have $K^{n}[f](t)= \sum_{k=0}^{\infty}K^{k}[f](0) (-1)^k(\K{n}\circ\K{k})[\mathbi{m}](t).$ Thus, \begin{equation*} \sum_{k=0}^{\infty}\langle f_t,B^{k}_{t}\rangle\; B^{k}_{t}= \sum_{k=0}^{\infty}K^{k}[f](0)\; B^{k}_{t}= \langle\!\langle \K{n}[f](t)\rangle\!\rangle_{n\in \mathop{\mathds{N}}}=f_t.
\end{equation*} Thus, while the coordinates of $f_t=\langle\!\langle \K{n}[f](t)\rangle\!\rangle_{n\in \mathop{\mathds{N}}}$ in the usual base of $l^2$ vary with $t$, the coordinates of $f_t$ in the bases $\{B^{k}_{t}\}_{k\in \mathop{\mathds{N}}}$ remain the same as $t$ varies. We now show that $\{B^{n}_{t}\}_{n\in\mathop{\mathds{N}}}$ is the moving frame of a helix $H:\mathop{\mathds{R}}\rightarrow l_2$. \begin{lemma}[\!\!\cite{IG5}]\label{continuous} Let $f\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ and let $t\in\mathop{\mathds{R}}$ vary; then $\vec{f}(t)= \langle\!\langle\K{n}[f](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}$ is a continuous curve in $l_2$. \end{lemma} \begin{proof} Let $f\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$; then, since $\sum_{n=0}^{\infty}\K{n}[f](t)^2$ converges to a continuous (constant) function, by Dini's theorem, it converges uniformly on every finite interval $I$. Thus, the last two sums on the right-hand side of the inequality $\norm{f(t)-f(t+h)}^2\leq\sum_{n=0}^{N} (\K{n}[f](t)-\K{n}[f](t+h))^2+ 2\sum_{n=N+1}^{\infty} \K{n}[f](t)^2+2\sum_{n=N+1}^{\infty}\K{n}[f](t+h)^2$ can be made arbitrarily small on $I$ if $N$ is sufficiently large. Since the functions $\K{n}[f](t)$ have continuous derivatives, they are uniformly continuous on $I$. Thus, $\sum_{n=0}^{N}(\K{n}[f](t)-\K{n}[f](t+h))^2$ can also be made arbitrarily small on $I$ by taking $|h|$ sufficiently small.
\end{proof} \begin{lemma}[\!\!\cite{IG5}]\label{differentiable} If $g^\prime\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, then $\displaystyle{\lim_{|h|\rightarrow 0} \norm{\frac{g(t)-g(t+h)}{h}-g^\prime(t)} = 0}$; thus, the curve $\vec{g}(t)=\langle\!\langle \K{n}[g](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}$ is differentiable, and $(\vec{g})^{\prime}(t)=\langle\!\langle\K{n}[g^{\prime}](t) \rangle\!\rangle_{n\in\mathop{\mathds{N}}}$. \end{lemma} \begin{proof} Let $I$ be any finite interval; since $g^{\prime}\in \ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, for every $\varepsilon >0$ there exists $N$ such that $\sum_{n=N+1}^{\infty}\K{n} \left[g^\prime\right](u)^2<\varepsilon /8$ for all $u\in I$. Since the functions $\K{n} \left[g^\prime\right](u)$ are uniformly continuous on $I$, there exists a $\delta > 0$ such that for all $t_1,t_2\in I$, if $|t_1-t_2|<\delta$ then $\sum_{n=0}^{N}\left(\K{n} [g^\prime](t_1)-\K{n}[g^\prime](t_2)\right)^2<\varepsilon/2$. Let $h$ be an arbitrary number such that $|h|<\delta$; then for every $t$ there exists a sequence of numbers $\xi_n^t$ that lie between $t$ and $t-h$ and are such that $(\K{n}[g](t)-\K{n}[g](t-h))/h =\K{n}[g^\prime](\xi_n^t)$.
Thus, for all $t\in I$, \begin{eqnarray*} &&\sum_{n=0}^{\infty}\K{n} \left[\frac{g(t)-g(t-h)}{h}-g^\prime(t)\right]^2 =\sum_{n=0}^{\infty}\left(\K{n} [g^\prime](\xi_n^t)-\K{n}[g^\prime](t)\right)^2\\ &&\ \ \ \ <\sum_{n=0}^{N}\left(\K{n} [g^\prime](\xi_n^t)-\K{n}[g^\prime](t)\right)^2+ 2\sum_{n=N+1}^{\infty}\K{n} [g^\prime](\xi_n^t)^2\nonumber \\ &&\hspace*{10mm}+ 2\sum_{n=N+1}^{\infty}\K{n} [g^\prime](t)^2 <\varepsilon/2+4\varepsilon/8=\varepsilon.\nonumber \end{eqnarray*} \end{proof} \noindent Since $\K{n}[\mathbi{m}](t)\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ for all $n$, if we let $\vec{e}_{k+1}(t)=\langle\!\langle(\K{k}\circ\K{n})[\mathbi{m}](t) \rangle\!\rangle_{n\in\mathop{\mathds{N}}}$ for $k\geq 0$, then by Lemma~\ref{differentiable} the curves $\vec{e}_k(t)$ are differentiable for all $k$. Since $l_2$ is complete and $\vec{e}_1(t)$ is continuous, $\vec{e}_1(t)$ has an antiderivative $\vec{H}(t)$. Using \eqref{three-term}, we have \begin{eqnarray*} \vec{e}_1(t)&=& \vec{H}^{\!\!\ \prime}(t);\\ \vec{e}_1^{\ \prime}(t)&=&\langle\!\langle({\mathop{{D}}}\circ\K{0}\circ\K{n}) [\mathbi{m}](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}=\gamma_0\, \langle\!\langle(\K{1}\circ\K{n}) [\mathbi{m}](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}\\ &=&\gamma_0\, \vec{e}_2(t);\nonumber \\ \vec{e}_k^{\ \prime}(t)&=&-\gamma_{k-2} \langle\!\langle(\K{k-2}\circ\K{n}) [\mathbi{m}](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}+\gamma_{k-1}\langle\!\langle(\K{k}\circ\K{n}) [\mathbi{m}](t)\rangle\!\rangle_{n\in\mathop{\mathds{N}}}\\ &=& - \gamma_{k-2}\,\vec{e}_{k-1}(t) + \gamma_{k-1}\,\vec{e}_{k+1}(t),\;\;\; \mbox{for $k\geq 2$}.
\end{eqnarray*} This means that the curve $\vec{H}(t)$ is a helix in $l_2$ because it has constant curvatures $\kappa_k=\gamma_{k-1}$ for all $k\geq 1$; the above equations are the corresponding Frenet--Serret formulas, and the vectors $\vec{e}_{k+1}(t)=\langle\!\langle(\K{k}\circ\K{n})[\mathbi{m}](t)\rangle\!\rangle_{n\in \mathop{\mathds{N}}}$, $k\geq 0$, form the orthonormal moving frame of the helix $\vec{H}(t)$. \section{Examples}\label{examples} We now present a few examples of chromatic derivatives and chromatic expansions, associated with several classical families of orthogonal polynomials. More details and more examples can be found in \cite{HB}. \subsection{Example 1: Legendre polynomials/Spherical Bessel functions} Let $L_n(\omega)$ be the Legendre polynomials; if we set $P_{n}^{\scriptscriptstyle{L}}(\omega)= \sqrt{2n+1}\,L_n(\omega/\pi)$ then \begin{equation*} \int_{-\pi}^{\pi} P_{n}^{\scriptscriptstyle{L}}(\omega) P_{m}^{\scriptscriptstyle{L}}(\omega)\;\frac{\rm{d} \omega}{2\pi}=\delta(m-n). \end{equation*} The corresponding recursion coefficients in equation \eqref{poly} are given by the formula $\gamma_n=\pi (n+1)/\sqrt{4(n+1)^2-1}$; the corresponding space $\ensuremath{L^2_{a(\omega)} }$ is $L^2[-\pi,\pi]$. The space $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ for this particular example consists of all entire functions whose restrictions to $\mathop{\mathds{R}}$ belong to $L^2$ and which have a Fourier transform supported in $[-\pi,\pi]$. Proposition~\ref{local-space-gen} implies that in this case our locally defined scalar product $\Mscal{f}{g}$, norm $\norm{f}$ and convolution $(f \ast_{~_{\!\!\!{\mathcal{M}}}} g)(t)$ coincide with the usual scalar product, norm and convolution on $L_2$.
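The orthonormality relation and the stated recursion coefficients of Example 1 can be checked numerically. The following Python sketch (an illustration outside the paper's formalism; the quadrature resolution and sample points are ad hoc choices) builds the rescaled Legendre polynomials from the standard three-term recurrence and verifies both $\int_{-\pi}^{\pi}P_n^L P_m^L\,\mathrm{d}\omega/2\pi=\delta(m-n)$ and $\omega P_n^L=\gamma_n P_{n+1}^L+\gamma_{n-1}P_{n-1}^L$ with $\gamma_n=\pi(n+1)/\sqrt{4(n+1)^2-1}$:

```python
import math

def legendre(n, x):
    # standard Legendre polynomials L_n via the recurrence
    # (k+1) L_{k+1}(x) = (2k+1) x L_k(x) - k L_{k-1}(x)
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*x*p1 - k*p0)/(k + 1)
    return p1

def P(n, w):
    # rescaled orthonormal polynomials P_n^L(w) = sqrt(2n+1) L_n(w/pi)
    return math.sqrt(2*n + 1)*legendre(n, w/math.pi)

def gamma(n):
    # recursion coefficients of the Legendre family
    return math.pi*(n + 1)/math.sqrt(4*(n + 1)**2 - 1)

def simpson(f, a, b, m=2000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a)/m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2)*f(a + i*h)
    return s*h/3

# orthonormality: (1/2pi) int_{-pi}^{pi} P_n P_m dw = delta(m-n)
for n in range(5):
    for m_ in range(5):
        val = simpson(lambda w: P(n, w)*P(m_, w), -math.pi, math.pi)/(2*math.pi)
        assert abs(val - (1.0 if n == m_ else 0.0)) < 1e-6

# three-term recurrence: w P_n = gamma_n P_{n+1} + gamma_{n-1} P_{n-1}
for n in range(1, 6):
    for w in [-2.0, -0.3, 0.7, 3.0]:
        assert abs(w*P(n, w) - gamma(n)*P(n + 1, w) - gamma(n - 1)*P(n - 1, w)) < 1e-9

print("Legendre checks passed")
```

The recurrence check is an exact polynomial identity, so it holds for arbitrary real $\omega$, not only on $[-\pi,\pi]$.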
\subsection{Example 2: Chebyshev polynomials of the first kind/Bessel functions} Let $P_{n}^{\scriptscriptstyle{T}}(\omega)$ be the family of orthonormal polynomials obtained by normalizing and rescaling the Chebyshev polynomials of the first kind, $T_n(\omega)$, by setting $P_{0}^{\scriptscriptstyle{T}}(\omega)= 1$ and $P_{n}^{\scriptscriptstyle{T}}(\omega)=\sqrt{2}\;T_n(\omega/\pi)$ for $n>0$. In this case \begin{equation*} \int_{-\pi}^{\pi}P_{n}^{\scriptscriptstyle{T}}(\omega) P_{m}^{\scriptscriptstyle{T}}(\omega) \frac{\rm{d}\omega}{ \pi\sqrt{\pi^2-\omega^2}}=\delta(n-m). \end{equation*} By Proposition \ref{weight-space}, the corresponding space $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ contains all entire functions $f(t)$ which have a Fourier transform $\widehat{f}(\omega)$ supported in $[-\pi,\pi]$ that also satisfies $\int_{-\pi}^{\pi}\sqrt{\pi^2-\omega^2}\; |\widehat{f}(\omega)|^2\rm{d} \omega<\infty$. In this case the corresponding space $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ contains functions which do not belong to $L^2$; the function defined by \eqref{mmz} is $\mathbi{m}(z)={\mathrm J}_0(\pi z)$ and, for $n>0$, $\K{n}[\mathbi{m}](z)=(-1)^{n}\sqrt{2}\,{\mathrm J}_n(\pi z)$, where ${\mathrm J}_n(z)$ is the Bessel function of the first kind of order $n$. In the recurrence relation \eqref{three-term} the coefficients are given by $\gamma_0=\pi/\sqrt{2}$ and $\gamma_n=\pi/2$ for $n>0$. The chromatic expansion of a function $f(z)$ is the Neumann series of $f(z)$ (see \cite{WAT}), \begin{equation*} f(z)=f(u){\mathrm J}_0 (\pi( z-u))+\sqrt{2}\;\sum_{n=1}^{\infty}\K{n}[f](u){\mathrm J}_n(\pi (z-u)).
\end{equation*} Thus, the chromatic expansions corresponding to various families of orthogonal polynomials can be seen as generalizations of the Neumann series, while the families of corresponding functions $\{\K{n}[\mathbi{m}](z)\}_{n\in\mathop{\mathds{N}}}$ can be seen as generalizations and a uniform representation of some familiar families of special functions. \subsection{Example 3: Hermite polynomials/Gaussian monomial functions} Let $H_n(\omega)$ be the Hermite polynomials; then the polynomials $P_{n}^{\scriptscriptstyle{H}}(\omega)= (2^{n}n!)^{-1/2}H_n(\omega)$ satisfy \begin{equation*}\int_{-\infty}^{\infty} P_{n}^{\scriptscriptstyle{H}}(\omega) P_{m}^{\scriptscriptstyle{H}}(\omega) \;{\mathop{{\mathrm{e}}}}^{-\omega^2}\;\frac{\rm{d}\omega}{\sqrt{\pi}} =\delta(n-m). \end{equation*} The corresponding space $\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ contains entire functions whose Fourier transform $\widehat{f}(\omega)$ satisfies $\int_{-\infty}^{\infty} |\widehat{f}(\omega)|^2 \;{\mathop{{\mathrm{e}}}}^{\omega^2} \rm{d} \omega<\infty$. In this case the space $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ contains non-bandlimited signals; the corresponding function defined by \eqref{mmz} is $\mathbi{m}(z)={\mathop{{\mathrm{e}}}}^{-z^2/4}$ and $\K{n}[\mathbi{m}](z)= (-1)^{n}(2^{n}\,n!)^{-1/2}\,z^{n} {\mathop{{\mathrm{e}}}}^{-z^2/4}$. The corresponding recursion coefficients are given by $\gamma_n=\sqrt{(n+1)/2}$. The chromatic expansion of $f(z)$ is just the Taylor expansion of $f(z){\mathop{{\mathrm{e}}}}^{z^2/4}$, multiplied by ${\mathop{{\mathrm{e}}}}^{-z^2/4}$. \subsection{Example 4: Herron family} This example is a slight modification of an example from \cite{HB}.
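The closed forms of Example 3 can be checked against the derivative recurrence $({\mathop{{D}}}\circ\K{n})[\mathbi{m}]=\gamma_n\K{n+1}[\mathbi{m}]-\gamma_{n-1}\K{n-1}[\mathbi{m}]$ that underlies the Frenet--Serret computation above. The following Python sketch (an illustration outside the paper's formalism; the step size and sample points are ad hoc choices) verifies this with a central-difference derivative:

```python
import math

def K(n, t):
    # closed form of the chromatic derivatives of m(t) = exp(-t^2/4)
    # for the Hermite family: K^n[m](t) = (-1)^n (2^n n!)^{-1/2} t^n e^{-t^2/4}
    return (-1)**n * t**n * math.exp(-t*t/4)/math.sqrt(2**n * math.factorial(n))

def gamma(n):
    # recursion coefficients of the Hermite family
    return math.sqrt((n + 1)/2)

def dK(n, t, h=1e-6):
    # central-difference approximation of (D o K^n)[m](t)
    return (K(n, t + h) - K(n, t - h))/(2*h)

# check (D o K^n)[m] = gamma_n K^{n+1}[m] - gamma_{n-1} K^{n-1}[m]
for n in range(1, 8):
    for t in [-1.5, 0.4, 2.0]:
        lhs = dK(n, t)
        rhs = gamma(n)*K(n + 1, t) - gamma(n - 1)*K(n - 1, t)
        assert abs(lhs - rhs) < 1e-7

print("Hermite recurrence check passed")
```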
Let the family of orthonormal polynomials be given by the recursion $L_0(\omega)= 1$, $L_1(\omega)=\omega$, and $L_{n+1}(\omega)=\frac{\omega}{n+1}L_n(\omega)- \frac{n}{n+1}L_{n-1}(\omega).$ Then \begin{equation*} \frac{1}{2}\int_{-\infty}^{\infty}L_m(\omega)\;L_n(\omega)\; \mathop{\mathrm{sech}}\left(\frac{\pi\omega}{2}\right) \;{\mathrm d}\omega= \delta(m-n). \end{equation*} In this case $\mathbi{m}(z)=\mathop{\mathrm{sech}} z$ and $\K{n}[\mathbi{m}](z)= (-1)^{n}\mathop{\mathrm{sech}} z\, \tanh^{n} z$. The recursion coefficients are given by $\gamma_n=n+1$ for all $n\geq 0$. If $E_n$ are the Euler numbers, then $\mathop{\mathrm{sech}} z=\sum_{n=0}^\infty E_{2n}\,z^{2n}/(2n)!$, with the series converging only in the disc of radius $\pi/2$. Thus, in this case $\mathbi{m}(z)$ is not an entire function. \section{Weakly bounded moment functionals}\label{sec5} \subsection{} To study local convergence of chromatic expansions of functions which are not in $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ we found it necessary to restrict the class of chromatic moment functionals. The restricted class, introduced in \cite{IG5}, is still very broad and contains functionals that correspond to many classical families of orthogonal polynomials. It consists of functionals whose recursion coefficients $\gamma_n>0$ in \eqref{three-term} are such that the sequences $\{\gamma_n\}_{n\in \mathds{N}}$ and $\{\gamma_{n+1}/\gamma_{n}\}_{n\in\mathds{N}}$ are bounded from below by a positive constant, and such that the growth rate of the sequence $\{\gamma_n\}_{n\in \mathds{N}}$ is sub-linear in $n$. For technical simplicity, in the definition below these conditions are formulated using a single constant $M$ in all of the bounds.
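The boundedness conditions formalized in the definition below can be checked directly on the recursion coefficients of Examples 2--4. The following sketch (Python; the particular constants $M=4$, $r=1$ and the finite horizon are illustrative choices, not part of the definition) verifies the inequalities numerically:

```python
import math

# Recursion coefficients gamma_n of Examples 2-4 (values as stated in the text).
def gamma_chebyshev(n):          # Example 2: bounded, p = 0
    return math.pi / math.sqrt(2) if n == 0 else math.pi / 2

def gamma_hermite(n):            # Example 3: weakly bounded, p = 1/2
    return math.sqrt((n + 1) / 2)

def gamma_herron(n):             # Example 4: p = 1, not weakly bounded
    return n + 1

def satisfies_bounds(gamma, p, M=4.0, r=1, N=2000):
    """Check 1/M <= gamma_n <= M(n+r)^p and gamma_n/gamma_{n+1} <= M^2
    for n = 0..N-1.  A finite-horizon sanity check, not a proof."""
    return all(1 / M <= gamma(n) <= M * (n + r) ** p
               and gamma(n) / gamma(n + 1) <= M ** 2
               for n in range(N))
```

With these constants, the Chebyshev and Hermite coefficients pass with $p=0$ and $p=1/2$ respectively, while the linearly growing coefficients of Example 4 already violate the sub-linear upper bound for $p=1/2$.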
\begin{definition}[\!\!\cite{IG5}]\label{def-weak} Let \ensuremath{{\mathcal{M}}}\ be a moment functional such that \eqref{three-term} holds for some $\gamma_n>0$. \begin{enumerate} \item \ensuremath{{\mathcal{M}}}\ is \textit{weakly bounded} if there exist some $M\geq 1$, some $0\leq p<1$ and some integer $r\geq 0$, such that for all $n\geq 0$, \begin{equation} \frac{1}{M}\leq \gamma_n\leq M (n + r)^p, \label{one-weak} \end{equation} \begin{equation}\frac{{\gamma_{n}}}{{\gamma_{n+1}}}\leq M^2.\label{two-weak} \end{equation} \item \ensuremath{{\mathcal{M}}}\ is \textit{bounded} if there exists some $M\geq 1$ such that for all $n\geq 0$, \begin{equation} \frac{1}{M}\leq{\gamma_n}\leq M. \end{equation} \end{enumerate} \end{definition} Since \eqref{three-term} is assumed to hold for some $\gamma_n>0$, weakly bounded moment functionals are positive definite and symmetric. Every bounded functional \ensuremath{{\mathcal{M}}}\ is also weakly bounded, with $p=0$. The functionals in Examples 1 and 2 are bounded. For a bounded moment functional \ensuremath{{\mathcal{M}}}\ the corresponding moment distribution $a(\omega)$ has finite support \cite{Chih}, and consequently $\mathbi{m}(t)$ is a band-limited signal. However, $\mathbi{m}(t)$ can be of infinite energy (i.e., not in $L^2$), as is the case in our Example 2. The moment functional in Example 3 is weakly bounded but not bounded ($p=1/2$); the moment functional in Example 4 is not weakly bounded ($p=1$). We note that important examples of classical orthogonal polynomials which correspond to weakly bounded moment functionals in fact satisfy the stronger condition of the following simple lemma.
\begin{lemma} Let \ensuremath{{\mathcal{M}}}\ be such that \eqref{three-term} holds for some $\gamma_n>0$. If for some $0\leq p<1$ the sequence ${\gamma_n}/{n^p}$ converges to a finite positive limit, then \ensuremath{{\mathcal{M}}}\ is weakly bounded. \end{lemma} Weakly bounded moment functionals allow a useful estimation of the coefficients in the corresponding equations \eqref{inverse} and \eqref{direct} relating the chromatic and the ``standard'' derivatives. \begin{lemma}[\!\!\cite{IG5}]\label{bounds} Assume that \ensuremath{{\mathcal{M}}}\ is such that for some $M\geq 1$, $r\geq 0$ and $p\geq 0$ the corresponding recursion coefficients $\gamma_n$ satisfy inequalities \eqref{one-weak} for all $n$. Then the following inequalities hold for all $k$ and $n$: \begin{eqnarray} |(\K{n}\circ {\mathop{{D}}}^k)[\mathbi{m}](0)|&\leq&(2M)^k(k+r)!^p;\label{b-bound}\\ \left|\K{n} \left[\frac{t^k}{k!}\right](0)\right| &\leq&(2M)^{n}.\label{mono-bound} \end{eqnarray} \end{lemma} \begin{proof} By \eqref{bkn}, it is enough to prove \eqref{b-bound} for all $n,k$ such that $n\leq k$. We proceed by induction on $k$, assuming the statement holds for all $n\leq k$. Applying \eqref{three-term} to ${\mathop{{D}}}^{k}[\mathbi{m}](t)$ we get \begin{equation*}|(\K{n}\circ {\mathop{{D}}}^{k+1})[\mathbi{m}](t)|\leq\! {\gamma_{n}}|(\K{n+1}\circ {\mathop{{D}}}^{k})[\mathbi{m}](t)|+ {\gamma_{n-1}}\,|(\K{n-1}\circ {\mathop{{D}}}^{k})[\mathbi{m}](t)|. \end{equation*} Using the induction hypothesis and \eqref{bkn} again, we get for all $n\leq k+1$, \begin{eqnarray*}|(\K{n}\circ {\mathop{{D}}}^{k+1}) \left[\mathbi{m}\right](0)| &< & (M(k+1+r)^p+M(k+r)^p) (2M)^{k} (k+r)!^p\nonumber\\ &<&(2M)^{k+1} (k+1+r)!^p.
\end{eqnarray*} Similarly, by \eqref{zero-mon}, it is enough to prove \eqref{mono-bound} for all $k\leq n$. This time we proceed by induction on $n$ and use \eqref{three-term}, \eqref{one-weak} and \eqref{two-weak} to get \begin{equation*} \left|\K{n+1}\left[\frac{t^k}{k!}\right](0)\right| \leq M \left|\K{n}\left[\frac{t^{k-1}}{(k-1)!}\right](0)\right|+ M^2\, \left|\K{n-1}\left[\frac{t^k}{k!}\right](0)\right|. \end{equation*} By the induction hypothesis and using \eqref{zero-mon} again, we get that for all $k\leq n+1$, $\left|\K{n+1}\left[\frac{t^k}{k!}\right](0)\right|< M\, (2M)^{n}+ M^2 (2M)^{n-1}<(2M)^{n+1}$. \end{proof} \begin{corollary}[\!\!\cite{IG5}]\label{poly-bdd} Let \ensuremath{{\mathcal{M}}}\ be weakly bounded; then for every fixed $n$ \[ \LIM{k}\left|(\K{n}\circ {\mathop{{D}}}^{k}) \left[\mathbi{m}\right](0)/k!\right|^{1/k}=0, \] and the convergence is uniform in $n$. \end{corollary} \begin{proof} Let $R(k)=(k+r)!/k!$; then $R(k)$ is a polynomial of degree $r$, and, by \eqref{b-bound}, \begin{equation}\label{poly-bound-eq} \left|\frac{(\K{n}\circ {\mathop{{D}}}^k)\left[\mathbi{m}\right](0)}{k!}\right|^{1/k}\leq \frac{2M R(k)^{p/k}}{k!^{(1-p)/k}} <\frac{2M{\mathrm e}^{1-p}\; R(k)^{p/k}}{k^{1-p}}. \end{equation} Since $p<1$ and $R(k)^{p/k}\rightarrow 1$, the right-hand side converges to $0$ and does not depend on $n$. \end{proof} \begin{corollary}[\!\!\cite{IG5}]\label{moments-asymptotics} Let $\mathbi{m}(z)$ correspond to a weakly bounded moment functional \ensuremath{{\mathcal{M}}}; then \begin{equation}\label{entire} \LIM{k}\left(\frac{\mu_k}{k!}\right)^{1/k}= \LIM{k}\left|\frac{\mathbi{m}^{(k)}(0)}{k!}\right|^{1/k} =0.\end{equation} Thus, since \eqref{limsup} is satisfied with $\rho=0$, every weakly bounded moment functional is chromatic.
\end{corollary} Note that this and Proposition~\ref{anaz} imply that $\mathbi{m}(z)$ is an entire function. If \eqref{one-weak} holds only with $p=1$, then Lemma~\ref{bounds} implies just \begin{equation}\label{p=1} \limsup_{k\rightarrow\infty}\left|\frac{(\K{n}\circ {\mathop{{D}}}^{k})\left[\mathbi{m}\right](0)}{k!}\right|^{1/k}\leq 2M. \end{equation} Example 4 shows that in this case the corresponding function $\mathbi{m}(z)$ need not be entire. Thus, if we are interested in chromatic expansions of entire functions, the upper bound in \eqref{one-weak} of the definition of a weakly bounded moment functional is sharp. Lemma~\ref{bounds} and Proposition~\ref{complete} imply the following corollary. \begin{corollary}[\!\!\cite{IG5}] If \ensuremath{{\mathcal{M}}}\ is weakly bounded, then the corresponding family of polynomials $\PPP$ is a complete system in $\ensuremath{L^2_{a(\omega)}}$. \end{corollary} Thus, we get that the Chebyshev, Legendre, Hermite and similar classical families of orthogonal polynomials are complete in their corresponding spaces $\ensuremath{L^2_{a(\omega)}}$. To simplify our estimates, we choose $K\geq 1$ such that, for $p$, $M$ and $r$ as in Definition~\ref{def-weak}, for all $k>0$ we have \begin{equation}\label{K} \frac{(2M)^{k} (k + r)!^{p}}{k!^p} <K^{k}. \end{equation} The following lemma slightly improves a result from \cite{IG5}. \begin{lemma}\label{bounds1} Let \ensuremath{{\mathcal{M}}}\ be weakly bounded, and let $p<1$ and $K\geq 1$ be such that \eqref{one-weak} and \eqref{K} hold. Let also $k$ be an integer such that $k\geq 1/(1-p)$.
Then there exists a polynomial $P(x)$ of degree $k-1$ such that for every $n$ and every $z\in \mathds{C}$, \begin{equation}\label{bound-Knb} |\K{n}[\mathbi{m}](z)|< \frac{|K z|^{n}}{n!^{1-p}}P(|z|)\,{\mathrm e}^{|K z|^k}. \end{equation} \end{lemma} \begin{proof} Using the Taylor series for $\K{n}[\mathbi{m}](z)$, \eqref{bkn}, \eqref{b-bound} and \eqref{K}, we get that for $z$ such that $|Kz|\geq1$, \begin{eqnarray*} |\K{n}[\mathbi{m}](z)|&<&\sum_{m=0}^{\infty}\frac{|K z|^{n+m}}{(n+m)!^{1-p}}\ \leq\ \frac{|K z|^{n}}{n!^{1-p}} \sum_{m=0}^{\infty}\frac{|K z|^{m}}{m!^{1/k}} \\ &<& \frac{|K z|^{n}}{n!^{1-p}} \sum_{m=0}^{\infty}\frac{|K z|^{k\lfloor m/k\rfloor+k-1}}{\lfloor m/k\rfloor !}\\ &=& k\;\frac{|K z|^{n+k-1}}{n!^{1-p}} \sum_{j=0}^{\infty}\frac{|K z|^{kj}}{j!}\\ & = & k\;\frac{|K z|^{n+k-1}}{n!^{1-p}}\, {\mathrm e}^{|Kz|^k}. \end{eqnarray*} If $|K z|<1$, then a similar calculation shows that for such $z$ we have $|\K{n}[\mathbi{m}](z)|<k\,{\mathrm e}\,{|K z|^{n}}/{n!^{1-p}}$. The claim now follows with $P(|z|)=k(|K z|^{k-1}+{\mathrm e})$. \end{proof} \subsection{Local convergence of chromatic expansions} \begin{proposition}\label{expansionK} Let \ensuremath{{\mathcal{M}}}\ be weakly bounded, $p<1$ as in \eqref{one-weak}, $f(z)$ a function analytic on a domain $G\subseteq\mathds{C}$ and $u\in G$. \begin{enumerate} \item If the sequence $|\K{n}[f](u)/n!^{1-p}|^{1/n}$ is bounded, then the chromatic expansion $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ of $f(z)$ converges uniformly to $f(z)$ on a disc $D\subseteq G$, centered at $u$.
\item In particular, if $|\K{n}[f](u)/n!^{1-p}|^{1/n}$ converges to zero, then the chromatic expansion $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ of $f(z)$ converges for every $z\in G$ and the convergence is uniform on every finite closed disc around $u$ contained in $G$. \end{enumerate} \end{proposition} \begin{proof} Assume that $R$ is such that $\limsup_{n\rightarrow \infty}|\K{n}[f](u)/n!^{1-p}|^{1/n}<R$. Then $|\K{n}[f](u)|<R^nn!^{1-p}$ for all sufficiently large $n$. Let $K$ and $k$ be such that \eqref{bound-Knb} holds; then Lemma \ref{bounds1} implies that for all sufficiently large $n$, \[ |\K{n}[f](u)\K{n}[\mathbi{m}](z-u)|^{1/n}<RK(P(|z-u|){\mathrm e}^{|K(z-u)|^k})^{1/n}|z-u|. \] Thus, the chromatic series converges uniformly inside every disc $D\subseteq G$, centered at $u$, of radius less than $1/(RK)$. Since \begin{equation*} \K{j}[\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)]\big|_{z=u}=\sum_{n=0}^{\infty} (-1)^n\K{n}[f](u)(\K{j}\circ\K{n})[\mathbi{m}](0)=\K{j}[f](u), \end{equation*} $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ converges to $f(z)$ on $D$. \end{proof} \begin{lemma} Let $M$ be as in Definition \ref{def-weak}. Then \[ \limsup_{n\rightarrow \infty}\left|\frac{\K{n}[f](u)}{n!^{1-p}} \right|^{1/n} \leq 2M\limsup_{n\rightarrow \infty} \left|\frac{f^{(n)}(u)}{n!^{1-p}}\right|^{1/n}. \] \end{lemma} \begin{proof} Let $\beta>0$ be any number such that $\limsup_{n\rightarrow \infty}|{f^{(n)}(u)}/{n!^{1-p}}|^{1/n}<\beta$; then there exists $B_{\beta}\geq 1$ such that $|f^{(k)}(u)|\leq B_{\beta}\;k!^{1-p}\beta^k$ for all $k$.
Using \eqref{direct} and \eqref{mono-bound} we get that for all sufficiently large $n$, \begin{eqnarray*} |\K{n}[f](u)| &\leq& \sum_{k=0}^n\left|\K{n} \left[\frac{t^k}{k!}\right](0)\right||f^{(k)}(u)| \leq (2M)^nB_{\beta}\sum_{k=0}^n\;k!^{1-p}\beta^k\\ &<& (2M)^nB_{\beta}\sum_{k=0}^n\;\left(\frac{n}{\mathrm e}\right)^{(k+1)(1-p)}\beta^k. \end{eqnarray*} Summation of the last series shows that $|\K{n}[f](u)|<2B_{\beta}(2M\beta)^nn!^{1-p}$ for sufficiently large $n$. \end{proof} \begin{corollary}\label{expansionD} Let \ensuremath{{\mathcal{M}}}\ be weakly bounded, $p<1$ as in \eqref{one-weak}, $f(z)$ a function analytic on a domain $G\subseteq\mathds{C}$ and $u\in G$. \begin{enumerate} \item If the sequence $|f^{(n)}(u)/n!^{1-p}|^{1/n}$ is bounded, then the chromatic expansion $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ of $f(z)$ converges uniformly to $f(z)$ on a disc $D\subseteq G$ centered at $u$. \item In particular, if $|f^{(n)}(u)/n!^{1-p}|^{1/n}$ converges to zero, then the chromatic expansion $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ of $f(z)$ converges for all $z\in G$ and the convergence is uniform on every closed disc around $u$ contained in $G$. \end{enumerate} \end{corollary} \begin{corollary}[\!\!\cite{IG5}]\label{bounded-exp} If \ensuremath{{\mathcal{M}}}\ is bounded, then for every entire function $f$ and all $u,z\in \mathds{C}$, the chromatic expansion $\mathrm{CE}^{\!\scriptscriptstyle{\mathcal{M}}}[f,u](z)$ converges to $f(z)$, and the convergence is uniform on every disc around $u$ of finite radius. \end{corollary} \begin{proof} If $f(z)$ is entire, then for every $u$, $\LIM{n} |f^{(n)}(u)/n!|^{1/n}=0$. The corollary now follows from Corollary~\ref{expansionD} with $p=0$.
\end{proof} \begin{proposition}\label{exp-type} Assume \ensuremath{{\mathcal{M}}}\ is weakly bounded, let $0\leq p <1$ be such that \eqref{one-weak} holds, and let $k$ be such that $k\geq 1/(1-p)$. Then there exist $C,L>0$ such that $|f(z)|\leq C\norm{f}{\mathrm e}^{L|z|^k}$ for all $f(z)\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$. \end{proposition} \begin{proof} Since $f(z)\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$, the chromatic expansion of $f(z)$ and \eqref{bound-Knb} yield \begin{eqnarray*} |f(z)|&\leq&\left(\sum_{n=0}^{\infty}|\K{n}[f](0)|^2 \sum_{n=0}^{\infty}|\K{n}[\mathbi{m}](z)|^2\right)^{1/2}\\ &\leq& \norm{f}P(|z|){\mathrm e}^{|Kz|^k} \left(\sum_{n=0}^{\infty}\frac{|K z|^{2n}}{n!^{2(1-p)}}\right)^{1/2}, \end{eqnarray*} which, using the method from the proof of Lemma~\ref{bounds1}, can easily be shown to imply our claim. \end{proof} Note that for bounded moment functionals, such as those corresponding to the Legendre or the Chebyshev polynomials, we have $p=0$; thus, Proposition \ref{exp-type} implies that functions which are in \ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}\ are of exponential type. For \ensuremath{{\mathcal{M}}}\ corresponding to the Hermite polynomials, $p=1/2$ (see Example 3); thus, we get that there exist $C,L>0$ such that $|f(z)|\leq C\norm{f}{\mathrm e}^{L|z|^2}$ for all $f\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$. It would be interesting to establish when the reverse implication is true and thus obtain a generalization of the Paley--Wiener Theorem for functions satisfying $|f(z)|<C{\mathrm e}^{L|z|^k}$ for $k>1$.
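As a concrete illustration of Corollary~\ref{bounded-exp} in the bounded setting of Example 2, the following numerical sketch (Python with NumPy; the quadrature size, truncation level and tolerances are illustrative choices) confirms two classical Bessel identities which arise as chromatic expansions:

```python
import numpy as np

def bessel_j(n, z, m=1024):
    """J_n(z) for integer n via the integral representation
    J_n(z) = (1/2 pi) * int_0^{2 pi} cos(n*tau - z*sin(tau)) d tau,
    computed with the trapezoidal rule (spectrally accurate for
    periodic integrands, so m = 1024 nodes suffice here)."""
    tau = 2 * np.pi * np.arange(m) / m
    return np.mean(np.cos(n * tau - z * np.sin(tau)))

z, u, N = 0.7, 1.3, 40   # truncate at N terms; J_n(z) decays rapidly in n

# Addition theorem: J_0(z+u) = J_0(u)J_0(z) + 2 sum_{n>=1} (-1)^n J_n(u)J_n(z);
# this is the chromatic expansion of m(z+u) in the (rescaled) Example 2 setting.
lhs = bessel_j(0, z + u)
rhs = bessel_j(0, u) * bessel_j(0, z) + \
      2 * sum((-1) ** n * bessel_j(n, u) * bessel_j(n, z) for n in range(1, N))
assert abs(lhs - rhs) < 1e-10

# e^{i w z} = J_0(z) + 2 sum_{n>=1} i^n T_n(w) J_n(z), T_n(w) = cos(n arccos w).
w = 0.4
series = bessel_j(0, z) + \
         2 * sum(1j ** n * np.cos(n * np.arccos(w)) * bessel_j(n, z)
                 for n in range(1, N))
assert abs(series - np.exp(1j * w * z)) < 1e-10
```

Both series converge far faster than geometrically, in line with the $1/(RK)$ radius estimate being only a lower bound on the region of convergence.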
\subsection{Generalizations of some classical equalities for the Bessel functions} Corollaries \ref{bounded-exp} and \ref{expansionD} generalize the classic result that every entire function can be expressed as a Neumann series of Bessel functions \cite{WAT}, by replacing the Neumann series with a chromatic expansion that corresponds to any (weakly) bounded moment functional. Thus, many classical results on Bessel functions from \cite{WAT} immediately follow from Corollary \ref{bounded-exp}, and, using Corollary \ref{expansionD}, generalize to functions $\K{n}[\mathbi{m}](z)$ corresponding to any weakly bounded moment functional \ensuremath{{\mathcal{M}}}. Below we give a few illustrative examples. \begin{corollary}\label{eiw} Let $\PP{n}{\omega}$ be the orthonormal polynomials associated with a weakly bounded moment functional \ensuremath{{\mathcal{M}}}; then for every $z\in\mathds{C}$, \begin{equation} {\mathrm e}^{{\mathrm i}\omega z}=\sum_{n=0}^{\infty}{\mathrm i}^{n} \PP{n}{\omega}\,\K{n}[\mathbi{m}](z). \end{equation} \end{corollary} \begin{proof} If $p<1$ then \begin{equation*} \LIM{n}\frac{\sqrt[n]{\left|\frac{{\mathrm d}^{n}}{{\mathrm d} z^{n}}{\mathrm e}^{{\mathrm i}\omega z}\big|_{z=0}\right|}}{n^{1-p}}=\LIM{n} \frac{|\omega|}{n^{1-p}}=0, \end{equation*} and the claim follows from Corollary \ref{expansionD} and \eqref{iwt}. \end{proof} Corollary \ref{eiw} generalizes the well-known equality for the Chebyshev polynomials $T_{n}(\omega)$ and the Bessel functions ${\mathrm J}_{n}(z)$, i.e., \[ {\mathrm e}^{{\mathrm i} \omega z}={\mathrm J}_0(z) + 2\sum_{n=1}^{\infty} {\mathrm i}^{n}T_{n}(\omega){\mathrm J}_{n}(z).
\] In Example 3, Corollary \ref{eiw} becomes the equality for the Hermite polynomials $H_n(\omega)$: \[ {\mathrm e}^{{\mathrm i} \omega z}=\sum_{n=0}^{\infty} \frac{H_{n}(\omega)}{n!}\left(\frac{{\mathrm i} z}{2}\right)^{n} {\mathrm e}^{-\frac{z^2}{4}}. \] Applying Corollary \ref{expansionD} to the constant function $f(t)\equiv 1$, we get that for all $z\in\mathds{C}$ \begin{equation*} \mathbi{m}(z)+\sum_{n=1}^{\infty}\left(\prod_{k=1}^{n} \frac{\gamma_{2k-2}}{\gamma_{2k-1}}\right)\K{2n}[\mathbi{m}](z)=1, \end{equation*} with $\gamma_n$ the recursion coefficients from \eqref{poly}. This generalizes the equality \[ {\mathrm J}_{0}(z)+2\sum_{n=1}^{\infty} {\mathrm J}_{2n}(z)=1. \] Using Proposition \ref{unif-con} to expand $\mathbi{m}(z+u)\in\ensuremath{{\mathbf L}_{{\!{\scriptscriptstyle{\mathcal{M}}}}}^2}$ into a chromatic series around $z=0$, we get that for all $z,u\in\mathds{C}$ \begin{equation*}\mathbi{m}(z+u)=\sum_{n=0}^{\infty}(-1)^n \K{n}[\mathbi{m}](u)\K{n}[\mathbi{m}](z), \end{equation*} which generalizes the equality \[{\mathrm J}_0(z+u)={\mathrm J}_0(u){\mathrm J}_0(z)+2 \sum_{n=1}^{\infty}(-1)^n{\mathrm J}_n(u){\mathrm J}_n(z). \] \section{Some Non-Separable Spaces}\label{nonsep} \subsection{} Let \ensuremath{{\mathcal{M}}}\ be weakly bounded; then periodic functions do not belong to $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, because $\sum_{n=0}^{\infty} \K{n}[f](t)^2$ diverges.
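Both this divergence and the finiteness of the normalized averages studied in this section can be observed numerically in the Chebyshev case of Example 2. The following sketch (Python with NumPy) assumes the eigen-relation $\K{n}[{\mathrm e}^{{\mathrm i}\omega t}]={\mathrm i}^{n}\,\PP{n}{\omega}\,{\mathrm e}^{{\mathrm i}\omega t}$ implicit in \eqref{iwt}; the parameter values and tolerances are illustrative:

```python
import numpy as np

# Chebyshev case: for f_w(t) = sqrt(2) sin(w t), 0 < w < pi, the eigen-relation
# gives K_{2m}[f_w](t) = +/- P_{2m}(w) f_w(t) and
# K_{2m+1}[f_w](t) = +/- P_{2m+1}(w) sqrt(2) cos(w t),
# with P_0 = 1 and P_n(w) = sqrt(2) T_n(w/pi); the signs cancel when squaring.
def P(n, w):
    return 1.0 if n == 0 else np.sqrt(2.0) * np.cos(n * np.arccos(w / np.pi))

def K_sq(k, w, t):
    # square of K_k applied to sqrt(2) sin(w t), evaluated at t
    trig = np.sin(w * t) if k % 2 == 0 else np.cos(w * t)
    return (P(k, w) * np.sqrt(2.0) * trig) ** 2

w, t, N = 1.1, 0.3, 20000
S = np.cumsum([K_sq(k, w, t) for k in range(N)])

# sum_k K_k[f_w](t)^2 grows linearly in the number of terms (so it diverges),
# while the normalized averages converge to the squared norm 1:
assert abs(S[N - 1] / N - 1.0) < 0.01
assert abs(S[N // 2 - 1] / (N // 2) - 1.0) < 0.01
```

The same computation with the normalization removed makes the divergence explicit: the partial sums pass any prescribed bound once enough terms are taken.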
We now introduce some nonseparable inner product spaces in which pure harmonic oscillations have finite norm and are pairwise orthogonal.\\ \noindent\textbf{Note:} \textit{In the remainder of this paper we consider only weakly bounded moment functionals and real functions which are restrictions of entire functions.}\\ \begin{definition} Let $0\leq p<1$ be as in \eqref{one-weak}. We denote by $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}$ the vector space of functions such that the sequence \begin{equation}\label{nu} \nu_{n}^{f}(t)=\frac{1}{(n+1)^{1-p}}\sum_{k=0}^{n} \K{k}[f](t)^2 \end{equation} converges uniformly on every finite interval $I\subset\mathds{R}$. \end{definition} \begin{proposition} Let $f,g\in\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}$ and \begin{equation}\label{sigma} \sigma_{n}^{fg}(t)=\frac{1}{(n+1)^{1-p}}\sum_{k=0}^{n} \K{k}[f](t)\K{k}[g](t); \end{equation} then the sequence $\{\sigma_{n}^{fg}(t)\}_{n\in\mathds{N}}$ converges to a constant function. In particular, $\{\nu_{n}^{f}(t)\}_{n\in\mathds{N}}$ also converges to a constant function. \end{proposition} \begin{proof} Since $\nu_{n}^{f}(t)$ and $\nu_{n}^{g}(t)$ given by \eqref{nu} converge uniformly on every finite interval, the same holds for the sequence $\sigma_{n}^{fg}(t)$. Consequently, it is enough to show that for all $t$, the derivative $\sigma_{n}^{fg}(t)^{\prime}$ of $\sigma_{n}^{fg}(t)$ satisfies $\LIM{n}{ \sigma_{n}^{fg}(t)^{\prime}} = 0$. Let \[ S_k(t)=\K{k}[f](t)^2+ \K{k+1}[f](t)^2+ \K{k}[g](t)^2+\K{k+1}[g](t)^2; \] then, since $f,g\in\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}$, the sequence ${1}/{(n+1)^{1-p}}\sum_{k=0}^{n}S_k(t)$ converges everywhere to some $\alpha(t)$.
We now show that if $t$ is such that $\alpha(t)>0$, then there are infinitely many $k$ such that $S_k(t)<2\alpha(t)k^{-p}$. Assume the opposite, and let $K$ be such that $S_k(t)\geq 2\alpha(t)k^{-p}$ for all $k\geq K$. Then, since \[ \sum_{k=K}^{n}k^{-p}>\int_{K}^{n+1}x^{-p}{\mathrm d} x=\frac{(n+1)^{1-p}-K^{1-p}}{1-p}, \] we would have that for all $n>K$, \begin{eqnarray*} \frac{\sum_{k=K}^{n}S_k(t)}{(n+1)^{1-p}}\geq \frac{2\alpha(t)\sum_{k=K}^{n}k^{-p}}{(n+1)^{1-p}} > \frac{2\alpha(t)((n+1)^{1-p}-K^{1-p})}{(n+1)^{1-p}(1-p)}. \end{eqnarray*} However, since $0\leq p<1$, the right-hand side converges to $2\alpha(t)/(1-p)\geq 2\alpha(t)$, which would imply ${\sum_{k=0}^{n}S_k(t)}/{(n+1)^{1-p}}>\alpha(t)$ for all sufficiently large $n$, contradicting the definition of $\alpha(t)$. Consequently, for infinitely many $n$ all four summands in $S_n(t)$ must be smaller than $2\alpha(t)\,n^{-p}$. For those values of $n$ we have \[ |\K{n+1}[f](t)\,\K{n}[g](t)|+|\K{n}[f](t)\,\K{n+1}[g](t)| <4\alpha(t)n^{-p}. \] Since \ensuremath{{\mathcal{M}}}\ is weakly bounded, \eqref{C-D} and \eqref{one-weak} imply that for some $M\geq 1$ and an integer $r$, \begin{equation*} \left|\sigma_{n}^{fg}(t)^{\prime}\right| < \frac{M (n+r)^p}{(n+1)^{1-p}} (|\K{n+1}[f](t)\,\K{n}[g](t)|+|\K{n}[f](t)\,\K{n+1}[g](t)|). \end{equation*} Thus, for infinitely many $n$ we have \begin{equation*} \left|\sigma_{n}^{fg}(t)^{\prime}\right|< \frac{4M\, (n+r)^p\; n^{-p}\,\alpha(t)}{(n+1)^{1-p}}. \end{equation*} Consequently, $\liminf_{n\rightarrow\infty}\left| \sigma_{n}^{fg}(t)^{\prime}\right|=0$ and, since $\LIM{n} \sigma_{n}^{fg}(t)^{\prime}$ exists, it must be equal to zero.
\end{proof} \begin{corollary} Let $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_0$ be the vector space consisting of functions $f(t)$ such that $\LIM{n}\nu_{n}^{f}(t)=0$; then on the quotient space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2=\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}/\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_0$ we can introduce a scalar product by the following formula, whose right-hand side is independent of $t$: \begin{eqnarray}\label{scal-cesaro} \Scal{f}{g}=\LIM{n}\frac{1}{(n+1)^{1-p}} \sum_{k=0}^{n} \K{k}[f](t)\, \K{k}[g](t). \end{eqnarray} \end{corollary} The corresponding norm on $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ is denoted by $\Norm{\,\cdot\,}$. Clearly, all real-valued functions from $\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$ belong to $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_0$. \begin{proposition} If $f\in\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$, then the chromatic expansion of $f(t)$ converges to $f(t)$ for every $t$ and the convergence is uniform on every finite interval. \end{proposition} \begin{proof} Since ${1}/{(n+1)^{1-p}}\sum_{k=0}^{n} \K{k}[f](t)^2$ converges to $(\Norm{f})^2$, with $0<(\Norm{f})^2<\infty$, for all sufficiently large $n$, \begin{eqnarray*} |\K{n}[f](t)|^{1/n}&\leq& \left(\sum_{k=0}^{n} \K{k}[f](t)^2\right)^{1/(2n)}\\ &\leq& (2 \Norm{f})^{1/n}(n+1)^{(1-p)/(2n)}. \end{eqnarray*} Thus, $|\K{n}[f](t)/n!^{1-p}|^{1/n}\rightarrow 0$, and the claim follows from Proposition \ref{expansionK}. \end{proof} Since $\K{n}[\mathbi{m}](t)\in\ensuremath{L_{\!{\scriptscriptstyle{\mathcal{M}}}}^2}$, we have $\Norm{\sum_{j=0}^{n} (-1)^j \K{j}[f](0)\,\K{j}[\mathbi{m}](t)}=0$ for all $n$.
Thus, the chromatic expansion of $f\in\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ does not converge to $f(t)$ in $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$. Moreover, there can be no such series representation of functions $f\in\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ converging in $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$, because the space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ is in general nonseparable, as the remaining part of this paper shows. \subsection{Space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ associated with the Chebyshev polynomials (Example 2)}\label{cheb} In this case the corresponding space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ will be denoted by $\ensuremath{\mathcal{C}_2^{\scriptscriptstyle{T}}}$, and in \eqref{one-weak} we have $p=0$. Thus, the scalar product on $\ensuremath{\mathcal{C}_2^{\scriptscriptstyle{T}}}$ is defined by \[\ScalT{f}{g}=\LIM{n}\frac{1}{n+1} \sum_{k=0}^{n}\K{k}[f](t)\K{k}[g](t).\] \begin{proposition}\label{CE} The functions $f_{\omega}(t)=\sqrt{2}\,\sin\omega t$ and $g_{\omega}(t)=\sqrt{2}\,\cos\omega t$ for $0<\omega <\pi$ form an orthonormal system of vectors in $\ensuremath{\mathcal{C}_2^{\scriptscriptstyle{T}}}$. \end{proposition} \begin{proof} From \eqref{iwt} we get \begin{eqnarray}\label{sin} \Scal{f_{\omega} }{f_{\sigma}}\hspace*{-3mm}&=& \LIM{n}\frac{\sum_{k=0}^{n} \PT{2k}{\omega}\PT{2k}{\sigma}\sin\omega t\sin\sigma t}{2n+1} \nonumber\\ &&\hspace*{10mm} +\LIM{n}\frac{\sum_{k=0}^{n-1} \PT{2k+1}{\omega}\PT{2k+1}{\sigma} \cos\omega t\cos\sigma t}{2n+1}.
\end{eqnarray} Since $|\PT{n}{\omega}|\leq\sqrt{2}$ on $(0,\pi)$, \eqref{CDP} implies that for $\omega,\sigma\in (0,\pi)$ and $\omega\neq\sigma$, \begin{equation*} \LIM{n}\frac{\sum_{k=0}^{n} \PT{k}{\omega}\PT{k}{\sigma}}{n+1} =\LIM{n}\frac{\PT{{n+1}}{\omega} \PT{{n}}{\sigma}-\PT{{n+1}}{\sigma}\PT{{n}}{\omega}}{2(n+1)}=0. \end{equation*} Since the $\PT{2n}{\omega}$ are even functions and the $\PT{2n+1}{\omega}$ odd, this also implies that \begin{equation*}\LIM{n}\frac{\sum_{k=0}^{n} \PT{2k}{\omega}\PT{2k}{\sigma}}{2n+1}= \LIM{n}\frac{\sum_{k=0}^{n-1} \PT{2k+1}{\omega}\PT{2k+1}{\sigma}}{2n+1}=0. \end{equation*} Thus, by \eqref{sin}, $\Scal{f_{\omega} }{f_{\sigma}}=0$. Using \eqref{sumsquares}, one can verify that for $0<\omega<\pi$ \begin{equation*} \frac{1}{n+1}\sum_{k=0}^{n}\PT{{k}}{\omega}^2 = \frac{2n+1}{2n+2}+\frac{\sin\left((2n+1) \arccos(\omega/\pi)\right)} {(2n+2)\sqrt{1-(\omega/\pi)^2}}\rightarrow 1. \end{equation*} Thus, \eqref{equal} implies that for $0<\omega<\pi$ \begin{equation*} \LIM{n}\frac{1}{2n+1}\sum_{k=0}^{n}\PT{{2k}}{\omega}^2 = \LIM{n}\frac{1}{2n+1}\sum_{k=0}^{n}\PT{{2k+1}}{\omega}^2 =\frac{1}{2}. \end{equation*} Consequently, $\Norm{\sqrt{2}\sin\omega t}=1$. \end{proof} \subsection{Space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ associated with the Hermite polynomials (Example 3)}\label{herm} The corresponding space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ in this case is denoted by $\ensuremath{\mathcal{C}_2^{\!\scriptscriptstyle{H}}}$, and in \eqref{one-weak} we have $p=1/2$.
Thus, the scalar product in $\ensuremath{\mathcal{C}_2^{\!\scriptscriptstyle{H}}}$ is defined by \[\ScalH{f}{g}=\LIM{n}\frac{1}{\sqrt{n+1}}\; \sum_{k=0}^{n}\K{k}[f](t)\K{k}[g](t).\] \begin{proposition}\label{HE} For all $\omega >0$ the functions $f_{\omega}(t)=\sin\omega t$ and $g_{\omega}(t)=\cos\omega t$ form an orthogonal system in $\ensuremath{\mathcal{C}_2^{\!\scriptscriptstyle{H}}}$, and $\Norm{f_\omega}=\Norm{g_\omega}={\mathrm e}^{\omega^2/2}/ {\sqrt[4]{2\pi}}$. \end{proposition} \begin{proof} For all $\omega$ and for $n\rightarrow \infty$, \[ \PH{n}{\omega}- \frac{{\Gamma(n+1)}^{\frac{1}{2}}}{2^{\frac{n}{2}} \Gamma(\frac{n}{2}+1)}\, {\mathrm e}^{\frac{\omega^2}{2}}\cos\left(\sqrt{2n+1}\;\omega- \frac{n\pi}{2}\right)\rightarrow 0; \] see, for example, 8.22.8 in \cite{SZG}. Using the Stirling formula we get \begin{equation}\label{asym} \PH{n}{\omega}-\left(\frac{2}{\pi}\right)^ {\frac{1}{4}}\;n^{-\frac{1}{4}}\; {\mathrm e}^{\frac{\omega^2}{2}}\cos\left(\sqrt{2n+1}\;\omega- \frac{n\pi}{2}\right)\rightarrow 0. \end{equation} This fact and \eqref{CDP} are easily seen to imply that $\Scal{f_{\omega} }{f_{\sigma}}=0$ for all distinct $\omega,\sigma>0$, while \eqref{asym} and \eqref{equal} imply that \begin{equation}\label{eq} \LIM{n}\left(\frac{\sum_{k=0}^{n} \PH{{2k}}{\omega}^2}{\sqrt{2n+1}}- \frac{\sum_{k=0}^{n-1}\PH{{2k+1}}{\omega}^2}{\sqrt{2n+1}}\right) =0. \end{equation} Since $H_n^\prime(\omega)=2\,n\,H_{n-1}(\omega)$, we have $\PH{n}{\omega}^\prime=\sqrt{2\,n}\;\PH{n-1}{\omega}$.
Using this, \eqref{asym} and \eqref{sumsquares}, one can verify that \begin{equation*} \LIM{n}\frac{\sum_{k=0}^{n}\PH{{k}}{\omega}^2}{\sqrt{n+1}}= \sqrt{\frac{2}{\pi}}\;{\mathrm e}^{\omega^2}, \end{equation*} which, together with \eqref{eq}, implies that $\Norm{f_\omega}={\mathrm e}^{\omega^2/2}/ {\sqrt[4]{2\pi}}$. \end{proof} Note that in this case, unlike the case of the family associated with the Chebyshev polynomials, the norm of a pure harmonic oscillation of unit amplitude depends on its frequency. One can verify that propositions similar to Proposition~\ref{CE} and Proposition~\ref{HE} hold for other classical families of orthogonal polynomials, such as the Legendre polynomials. Our numerical tests indicate that the following conjecture is true.\footnote{We have tested this conjecture numerically, by setting $\gamma_n=n^p$ for several values of $p<1$, and in all cases a finite limit appeared to exist. Paul Nevai has informed us that the special case of this conjecture for $p=0$ is known as the Nevai--Totik Conjecture, and is still open.} \begin{conjecture}\label{conj1} Assume that for some $p<1$ the recursion coefficients $\gamma_n$ in \eqref{poly} are such that $\gamma_n/n^p$ converges to a finite positive limit. Then, for the corresponding family of orthogonal polynomials we have \begin{equation}\label{hyp} 0< \LIM{n}\frac{1}{(n+1)^{1-p}}\sum_{k=0}^{n}\PP{k}{\omega}^2 <\infty \end{equation} for all $\omega$ in the support $\mathrm{sp}(a)$ of the corresponding moment distribution function $a(\omega)$. Thus, in the corresponding space $\ensuremath{\mathcal{C}^{\scriptscriptstyle{\mathcal{M}}}}_2$ all pure harmonic oscillations with positive frequencies $\omega$ belonging to the support of the moment distribution $a(\omega)$ have finite positive norm and are mutually orthogonal.
\end{conjecture} Note that \eqref{sumsquares} implies that \eqref{hyp} is equivalent to \[ 0< \LIM{n}\frac{\PP{{n+1}}{\omega}^\prime \PP{{n}}{\omega}-\PP{{n+1}}{\omega} \PP{{n}}{\omega}^\prime}{(n+1)^{1-2p}}<\infty. \] \section{Remarks} The special case of the chromatic derivatives presented in Example 2 was first introduced in \cite{IG0}; the corresponding chromatic expansions were subsequently introduced in \cite{IG00}. These concepts emerged in the course of the author's design of a pulse width modulation power amplifier. The research team of the author's startup, \emph{Kromos Technology Inc.,} extended these notions to various systems corresponding to several classical families of orthogonal polynomials \cite{CH,HB}. We also designed and implemented a channel equalizer \cite{H} and a digital transceiver (unpublished), based on chromatic expansions. A novel image compression method motivated by chromatic expansions was developed in \cite{C0,C1}. In \cite{CNV} chromatic expansions were related to the work of Papoulis \cite{Pap} and Vaidyanathan \cite{VA}. In \cite{NIV} and \cite{VIN} the theory was cast in the framework commonly used in signal processing. Chromatic expansions were also studied in \cite{CH}, \cite{B} and \cite{WS}. Local convergence of chromatic expansions was studied in \cite{IG5}; local approximations based on trigonometric functions were introduced in \cite{IG6}. A generalization of chromatic derivatives, with the prolate spheroidal wave functions replacing orthogonal polynomials, was introduced in \cite{Gil}; the theory was also extended to the prolate spheroidal wavelet series, which combine chromatic series with sampling series.\\ \noindent\textbf{Note:} Some Kromos technical reports and manuscripts can be found at the author's web site \texttt{http://www.cse.unsw.edu.au/\~{}ignjat/diff/}. \begin{thebibliography}{10} \bibitem{B} J.~Byrnes.
\newblock Local signal reconstruction via chromatic differentiation filter banks. \newblock In {\em Proc. 35th Asilomar Conference on Signals, Systems, and Computers}, Monterey, California, November 2001. \bibitem{Chih} T.~S. Chihara. \newblock {\em An Introduction to Orthogonal Polynomials}. \newblock Gordon and Breach, 1978. \bibitem{C0} M.~Cushman. \newblock Image compression. \newblock Technical report, Kromos Technology, Los Altos, California, 2001. \bibitem{C1} M.~Cushman. \newblock A method for approximate reconstruction from filterbanks. \newblock In {\em Proc. SIAM Conference on Linear Algebra in Signals, Systems and Control}, Boston, August 2001. \bibitem{CH} M.~Cushman and T.~Herron. \newblock The general theory of chromatic derivatives. \newblock Technical report, Kromos Technology, Los Altos, California, 2001. \bibitem{CNV} M.~Cushman, M.~J. Narasimha, and P.~P. Vaidyanathan. \newblock Finite-channel chromatic derivative filter banks. \newblock {\em IEEE Signal Processing Letters}, 10(1), 2003. \bibitem{Fr} G.~Freud. \newblock {\em Orthogonal Polynomials}. \newblock Pergamon Press, 1971. \bibitem{H} T.~Herron. \newblock Towards a new transform domain adaptive filtering process using differential operators and their associated splines. \newblock In {\em Proc. ISPACS}, Nashville, November 2001. \bibitem{HB} T.~Herron and J.~Byrnes. \newblock Families of orthogonal differential operators for signal processing. \newblock Technical report, Kromos Technology, Los Altos, California, 2001. \bibitem{Hil} F.~B. Hildebrand. \newblock {\em Introduction to Numerical Analysis}. \newblock Dover Publications, 1974. \bibitem{IG0} A.~Ignjatovic. \newblock Signal processor with local signal behavior, 2000. \newblock US Patent 6115726. The provisional patent disclosure for this patent was filed October 3, 1997.
The patent application was filed May 28, 1998, and the patent was issued September 5, 2000. \bibitem{IG5} A.~Ignjatovic. \newblock Local approximations based on orthogonal differential operators. \newblock {\em Journal of Fourier Analysis and Applications}, 13(3), 2007. \bibitem{IG6} A.~Ignjatovic. \newblock Chromatic derivatives and local approximations. \newblock {\em IEEE Transactions on Signal Processing}, to appear. \bibitem{IG00} A.~Ignjatovic and N.~Carlin. \newblock Method and a system of acquiring local signal behavior parameters for representing and processing a signal, 2001. \newblock US Patent 6313778. The provisional patent disclosure for this patent was filed July 9, 1999; the patent application was filed July 9, 2000. The patent was issued November 6, 2001. \bibitem{NIV} M.~J. Narasimha, A.~Ignjatovic, and P.~P. Vaidyanathan. \newblock Chromatic derivative filter banks. \newblock {\em IEEE Signal Processing Letters}, 9(7), 2002. \bibitem{Opp} A.~Oppenheim and A.~Schafer. \newblock {\em Discrete-Time Signal Processing}. \newblock Prentice Hall, 1999. \bibitem{Pap} A.~Papoulis. \newblock Generalized sampling expansion. \newblock {\em IEEE Transactions on Circuits and Systems}, 24(11), 1977. \bibitem{SZG} G.~Szego. \newblock {\em Orthogonal Polynomials}. \newblock American Mathematical Society, 1939. \bibitem{VA} P.~P. Vaidyanathan. \newblock {\em Multirate Systems and Filter Banks}. \newblock Prentice-Hall, 1993. \bibitem{VIN} P.~P. Vaidyanathan, A.~Ignjatovic, and S.~Narasimha. \newblock New sampling expansions of band limited signals based on chromatic derivatives. \newblock In {\em Proc. 35th Asilomar Conference on Signals, Systems, and Computers}, Monterey, California, November 2001. \bibitem{Gil} G.~Walter. \newblock Chromatic series and prolate spheroidal wave functions.
\newblock {\em Journal of Integral Equations and Applications}, 20(2), 2006. \bibitem{WS} G.~Walter and X.~Shen. \newblock A sampling expansion for non-bandlimited signals in chromatic derivatives. \newblock {\em IEEE Transactions on Signal Processing}, 53, 2005. \bibitem{WAT} G.~N. Watson. \newblock {\em A Treatise on the Theory of Bessel Functions}. \newblock Cambridge University Press, 1966. \end{thebibliography} \end{document}
\begin{document} \newcommand{\E}{{\Bbb R}^3} \newcounter{numb} \title[Formulae for order one invariants] {Formulae for order one invariants \\ of immersions and embeddings of surfaces} \author{Tahl Nowik} \address{Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel.} \email{tahl@math.biu.ac.il} \date{May 4, 2003} \thanks{Partially supported by the Minerva Foundation} \begin{abstract} The universal order 1 invariant $f^U$ of immersions of a closed orientable surface into ${{\Bbb R}^3}$, whose existence has been established in \cite{o}, takes values in the group ${\Bbb G}_U = K \oplus {\Bbb Z}/2 \oplus {\Bbb Z}/2$ where $K$ is a countably generated free Abelian group. The projections of $f^U$ to $K$ and to the first and second ${\Bbb Z}/2$ factors are denoted $f^K, M, Q$ respectively. An explicit formula for the value of $Q$ on any embedding has been given in \cite{a}. In the present work we give an explicit formula for the value of $f^K$ on any immersion, and for the value of $M$ on any embedding.
\end{abstract} \maketitle \section{Introduction}\label{intro} Finite order invariants of stable immersions of a closed orientable surface into ${{\Bbb R}^3}$ have been defined in \cite{o}, where all order 1 invariants have been classified. In \cite{h} all higher order invariants have been classified, and it has been shown that they are all functions of order 1 invariants. This brings the attention back to order 1 invariants, and to the problem of finding explicit formulae for them. In \cite{o}, the existence of a universal order 1 invariant $f^U$ has been established, which takes values in a group ${\Bbb G}_U = K \oplus {\Bbb Z}/2 \oplus {\Bbb Z}/2$ where $K$ is a countably generated free Abelian group. The existence proof, however, gave no clue for computing the invariant. We will denote the projections of $f^U$ to $K$ and to the first and second ${\Bbb Z}/2$ factors of ${\Bbb G}_U$ by $f^K, M, Q$ respectively. (The geometric meaning of $M$ and $Q$ will be explained in Section \ref{inv}.) In \cite{a}, an explicit formula has been given for $Q(i\circ h) - Q(i)$ where $h:F \to F$ is a diffeomorphism such that $i$ and $i\circ h$ are regularly homotopic, and for $Q(e')-Q(e)$ where $e,e'$ are any two regularly homotopic embeddings. In the present work we give an explicit formula for: \begin{enumerate} \item The value of $f^K$ on all immersions. \item $M(i\circ h) - M(i)$ where $h:F \to F$ is a diffeomorphism such that $i$ and $i\circ h$ are regularly homotopic. \item $M(e')-M(e)$ for any two regularly homotopic embeddings. \end{enumerate} Note that the invariant $f^U$ is specified only up to an order 0 invariant, i.e. up to an additive constant in each regular homotopy class, and so the same is true for $f^K,M,Q$. For $M$ and $Q$ we will not have a specific choice of constants, and so, as in (2),(3) above, we will speak only of the difference of the values of $M$ and $Q$ on regularly homotopic immersions.
The structure of the paper is as follows: In Section \ref{back} we give the necessary background. Note that in the present work we deviate from \cite{o},\cite{h} in our procedure for defining order one invariants, and accordingly we deviate in our choice of generators for ${\Bbb G}_U$. This is of no consequence in the abstract setting of \cite{o},\cite{h}, but will greatly affect the simplicity of the explicit formula for $f^K$ that we will find in the present work. In Section \ref{inv} we explain the geometric meaning of the invariants $M$ and $Q$. In Section \ref{st} we present the formulae that will be proved in this paper. In Section \ref{fk} we prove the formula for $f^K$. In Section \ref{ap} we give two applications. In Section \ref{m} we prove the formula for $M$. \section{Background}\label{back} In this section we summarize the background needed for this work. Given a closed oriented surface $F$, ${Imm(F,\E)}$ denotes the space of all immersions of $F$ into ${{\Bbb R}^3}$, with the $C^1$ topology. A CE point of an immersion $i:F\to {{\Bbb R}^3}$ is a point of self intersection of $i$ for which the local stratum in ${Imm(F,\E)}$ corresponding to the self intersection has codimension one. We distinguish twelve types of CEs, which we name $E^0, E^1, E^2, H^1, H^2, T^0, T^1, T^2, T^3, Q^2, Q^3, Q^4$. Their precise description appears in the proof of Proposition \ref{p1} below. This set of twelve symbols is denoted ${\mathcal C}$. A co-orientation for a CE is a choice of one of the two sides of the local stratum corresponding to the CE. All but two of the above CE types are non-symmetric, in the sense that the two sides of the local stratum may be distinguished via the local configuration of the CE; for those ten CE types, permanent co-orientations for the corresponding strata are chosen once and for all.
The two exceptions are $H^1$ and $Q^2$, which are completely symmetric. In fact, there does not exist a consistent choice of co-orientation for $H^1$ and $Q^2$ CEs, since the global strata corresponding to these CE types are one sided in ${Imm(F,\E)}$ (see \cite{o}). We fix a closed oriented surface $F$ and a regular homotopy class ${\mathcal A}$ of immersions of $F$ into ${{\Bbb R}^3}$ (that is, ${\mathcal A}$ is a connected component of ${Imm(F,\E)}$). We denote by $I_n\subseteq {\mathcal A}$ ($n\geq 0$) the space of all immersions in ${\mathcal A}$ which have precisely $n$ CE points (the self intersection being elsewhere stable). In particular, $I_0$ is the space of all stable immersions in ${\mathcal A}$. Given an immersion $i\in I_n$, a \emph{temporary co-orientation} for $i$ is a choice of co-orientation at each of the $n$ CE points $p_1, \dots , p_n$ of $i$. Given a temporary co-orientation ${\mathfrak{T}}$ for $i$ and a subset $A\subseteq \{p_1,\dots,p_n\}$, we define $i_{{\mathfrak{T}},A} \in I_0$ to be the immersion obtained from $i$ by resolving all CEs of $i$ at points of $A$ into the positive side with respect to ${\mathfrak{T}}$, and all CEs not in $A$ into the negative side. Now let ${\Bbb G}$ be any Abelian group and let $f:I_0\to{\Bbb G}$ be an invariant, i.e. a function which is constant on each connected component of $I_0$. Given $i\in I_n$ and a temporary co-orientation ${\mathfrak{T}}$ for $i$, $f^{\mathfrak{T}}(i)$ is defined as follows: $$f^{\mathfrak{T}}(i)=\sum_{ A \subseteq \{p_1,\dots,p_n\} } (-1)^{n-|A|} f(i_{{\mathfrak{T}},A})$$ where $|A|$ is the number of elements in $A$. The statement $f^{\mathfrak{T}}(i)=0$ is independent of the temporary co-orientation ${\mathfrak{T}}$, so we simply write $f(i)=0$. An invariant $f:I_0\to{\Bbb G}$ is called \emph{of finite order} if there is an $n$ such that $f(i)=0$ for all $i\in I_{n+1}$. The minimal such $n$ is called the \emph{order} of $f$.
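The alternating sum above behaves like an $n$-fold finite difference. To make this concrete, here is a small illustrative model (ours, not the paper's): resolutions are encoded as tuples of signs $\pm 1$, and $f$ is an arbitrary function on them; an invariant is of order at most $n$ exactly when every $(n+1)$-fold alternating sum vanishes.

```python
# Toy model (our illustration) of the alternating sum defining f^T(i):
# a "resolution" is a tuple of +/-1 signs, one per CE point, and f is any
# function on resolutions.
from itertools import product

def f_T(f, n):
    """sum over subsets A of n CE points of (-1)^(n-|A|) f(resolution),
    where the resolution has +1 at the points of A and -1 elsewhere."""
    total = 0
    for signs in product((1, -1), repeat=n):
        k = sum(1 for s in signs if s == 1)   # |A|
        total += (-1) ** (n - k) * f(signs)
    return total

# A toy "order 1" invariant: f depends on each resolution only through the
# individual signs, so its double alternating sum vanishes while the single
# one does not -- in analogy with first differences of a linear function.
f = lambda signs: sum(signs)
print(f_T(f, 1))  # 2 (nonzero: f is not of order 0)
print(f_T(f, 2))  # 0 (f is of order at most 1)
```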
The group of all invariants on $I_0$ of order at most $n$ is denoted $V_n$. From now on our discussion will reduce to order 1 invariants only. The more general setting may be found in \cite{o},\cite{h}. For an immersion $i:F\to{{\Bbb R}^3}$ and any $p\in{{\Bbb R}^3}$, we define the degree $d_p(i) \in {\Bbb Z}$ of $i$ at $p$ as follows: If $p \not\in i(F)$ then $d_p(i)$ is the (usual) degree of the map obtained from $i$ by composing it with the projection onto a small sphere centered at $p$. If on the other hand $p \in i(F)$ then we first push each sheet of $F$ which passes through $p$, a bit into its preferred side determined by the orientation of $F$, obtaining a new immersion $i'$ which misses $p$, and we define $d_p(i)=d_p(i')$. If $i\in I_1$ and the unique CE of $i$ is located at $p \in {{\Bbb R}^3}$, then we define $C(i)$ to be the expression $R^a_m$ where $R^a\in{\mathcal C}$ is the symbol describing the configuration of the CE of $i$ at $p$ (one of the twelve symbols above) and $m=d_p(i)$. We denote by ${\mathcal C}_1$ the set of all expressions $R^a_m$ with $R^a\in{\mathcal C}, m\in{\Bbb Z}$. The map $C:I_1 \to {\mathcal C}_1$ is surjective. Let $f \in V_1$. For $i\in I_1$, if the CE of $i$ is of type $H^1$ or $Q^2$ and ${\mathfrak{T}}$ is a temporary co-orientation for $i$, then $2f^{\mathfrak{T}}(i)=0$ (\cite{o} Proposition 3.5), and so in this case $f^{\mathfrak{T}}(i)$ is independent of ${\mathfrak{T}}$. This fact is used to extend any $f\in V_1$ to $I_1$ by setting for any $i\in I_1$, $f(i) = f^{\mathfrak{T}}(i)$, where if the CE of $i$ is of type $H^1$ or $Q^2$ then ${\mathfrak{T}}$ is arbitrary, and if it is not of type $H^1$ or $Q^2$ then the permanent co-orientation is used for the CE of $i$. We will always assume without mention that any $f \in V_1$ is extended to $I_1$ in this way. 
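The degree $d_p(i)$ for $p \not\in i(F)$ is in practice a winding-number computation. As an illustration (our sketch; the paper of course works with smooth immersions, not meshes), if the immersed surface is presented as an oriented closed triangle mesh, then $d_p$ is the total signed solid angle subtended at $p$ divided by $4\pi$, computed per triangle by the van Oosterom--Strackee formula:

```python
# Sketch (ours): the degree d_p of an oriented closed surface at a point p
# off the surface, as total signed solid angle / (4*pi).
import math

def solid_angle(p, a, b, c):
    """Signed solid angle of triangle (a, b, c) seen from p
    (van Oosterom-Strackee formula)."""
    r = [[x - y for x, y in zip(v, p)] for v in (a, b, c)]
    la, lb, lc = (math.sqrt(sum(x * x for x in v)) for v in r)
    det = (r[0][0] * (r[1][1] * r[2][2] - r[1][2] * r[2][1])
         - r[0][1] * (r[1][0] * r[2][2] - r[1][2] * r[2][0])
         + r[0][2] * (r[1][0] * r[2][1] - r[1][1] * r[2][0]))
    dot_ab = sum(x * y for x, y in zip(r[0], r[1]))
    dot_bc = sum(x * y for x, y in zip(r[1], r[2]))
    dot_ac = sum(x * y for x, y in zip(r[0], r[2]))
    denom = la * lb * lc + dot_ab * lc + dot_ac * lb + dot_bc * la
    return 2.0 * math.atan2(det, denom)

def degree(p, triangles):
    """d_p for an oriented closed triangle mesh not passing through p."""
    return round(sum(solid_angle(p, *t) for t in triangles) / (4.0 * math.pi))

# Octahedron with outward orientation: degree 1 inside, 0 outside.
V = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
top = [(0,2,4), (2,1,4), (1,3,4), (3,0,4)]
bot = [(2,0,5), (1,2,5), (3,1,5), (0,3,5)]
mesh = [(V[i], V[j], V[k]) for (i, j, k) in top + bot]
print(degree((0.0, 0.0, 0.0), mesh), degree((3.0, 0.0, 0.0), mesh))  # 1 0
```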
For $f\in V_1$ and $i,j\in I_1$, if $C(i)=C(j)$ then $f(i)=f(j)$ (\cite{o} Proposition 3.7), so any $f\in V_1$ induces a well defined function $u(f):{\mathcal C}_1\to{\Bbb G}$. The map $f\mapsto u(f)$ induces an injection $u:V_1 / V_0 \to {\mathcal C}_1^*$ where ${\mathcal C}_1^*$ is the group of all functions from ${\mathcal C}_1$ to ${\Bbb G}$. The main result of \cite{o} is that the image of $u:V_1 / V_0 \to {\mathcal C}_1^*$ is the subgroup $\Delta_1 = \Delta_1({\Bbb G}) \subseteq {\mathcal C}_1^*$ which is defined as the set of functions in ${\mathcal C}_1^*$ satisfying relations which we write as relations on the symbols $R^a_m$, e.g. $T^0_m = T^3_m$ will stand for $g(T^0_m) = g(T^3_m)$. The relations defining $\Delta_1$ are: \begin{itemize} \item $E^2_m = - E^0_m = H^2_m$, \ \ $E^1_m = H^1_m$. \item $T^0_m = T^3_m$, \ \ $T^1_m = T^2_m$. \item $2H^1_m =0 $, \ \ $H^1_m = H^1_{m-1}$. \item $2Q^2_m =0 $, \ \ $Q^2_m = Q^2_{m-1}$. \item $H^2_m - H^2_{m-1} = T^3_m - T^2_m$. \item $Q^4_m - Q^3_m = T^3_m - T^3_{m-1}$, \ \ $Q^3_m - Q^2_m = T^2_m - T^2_{m-1}$. \end{itemize} Let ${\Bbb B}\subseteq{\Bbb G}$ be the subgroup defined by ${\Bbb B}=\{ x\in {\Bbb G} : 2x=0\}$. To obtain a function $g\in\Delta_1$ one may assign arbitrary values in ${\Bbb G}$ for the symbols $\{T^2_m\}_{m\in{\Bbb Z}}$, $\{H^2_m\}_{m\in{\Bbb Z}}$ (here is where we deviate from \cite{o},\cite{h}) and arbitrary values in ${\Bbb B}$ for the two symbols $H^1_0 , Q^2_0$. Once this is done, the value of $g$ on all other symbols is uniquely determined, namely: \begin{enumerate} \item $E^1_m = H^1_m = H^1_0$ for all $m$. \item $E^2_m = -E^0_m = H^2_m$ for all $m$. \item $T^3_m = T^2_m + H^2_m - H^2_{m-1}$ for all $m$. \item $T^0_m = T^3_m$, \ \ $T^1_m = T^2_m$ for all $m$. \item $Q^2_m = Q^2_0$ for all $m$. \item $Q^3_m (= Q^2_m + T^2_m - T^2_{m-1}) = Q^2_0 + T^2_m - T^2_{m-1}$ for all $m$. \item $Q^4_m (= Q^3_m + T^3_m - T^3_{m-1}) = Q^2_0 + 2T^2_m - 2T^2_{m-1} + H^2_m - 2H^2_{m-1} + H^2_{m-2}$ for all $m$.
\end{enumerate} In the sequel we will refer to this procedure as the ``7-step procedure''. The Abelian group ${\Bbb G}_U$ is defined as follows (again note the difference from \cite{o},\cite{h}): $${\Bbb G}_U = \left< \{t^2_m\}_{m\in{\Bbb Z}}, \{h^2_m\}_{m\in{\Bbb Z}}, h^1_0, q^2_0 \ | \ 2h^1_0 = 2q^2_0 = 0 \right>.$$ The universal element $g^U\in\Delta_1({\Bbb G}_U)$ is defined by $g^U(T^2_m) = t^2_m$, $g^U(H^2_m)=h^2_m$, $g^U(H^1_0)=h^1_0$, $g^U(Q^2_0)=q^2_0$, and the value of $g^U$ on all other symbols of ${\mathcal C}_1$ is determined by the 7-step procedure. In \cite{o} the existence of an order 1 invariant $f^U:I_0\to{\Bbb G}_U$ with $u(f^U)=g^U$ is proven. (Note that this is the same $g^U$ as in \cite{o}, only presented via different generators.) The invariant $f^U$ is a \emph{universal} order 1 invariant, meaning the following: \begin{dfn}\label{uni} A pair $({\Bbb G},f)$ where ${\Bbb G}$ is an Abelian group and $f:I_0 \to {\Bbb G}$ is an order $n$ invariant will be called a \emph{universal order $n$ invariant} if for any Abelian group ${\Bbb G}'$ and any order $n$ invariant $f':I_0 \to {\Bbb G}'$ there exists a unique homomorphism $\varphi:{\Bbb G} \to {\Bbb G}'$ such that $f' - \varphi \circ f$ is an invariant of order at most $n-1$. \end{dfn} In \cite{h} all higher order invariants are classified, and for every $n$ a universal order $n$ invariant is constructed as ${\mathcal F}_n \circ f^U$, where ${\mathcal F}_n:{\Bbb G}_U \to M_n$ is an explicit function (not a homomorphism) into a certain Abelian group $M_n$. \section{The invariants}\label{inv} In this section we introduce the three invariants $f^K,M,Q$ that interest us. We define $K \subseteq {\Bbb G}_U$ to be the subgroup generated by $\{t^2_m\}_{m\in{\Bbb Z}} \cup \{h^2_m\}_{m\in{\Bbb Z}}$ (this is the same as the subgroup $K_1$ in \cite{h}) and define $f^K:I_0 \to K$ to be the projection of $f^U$ to $K$.
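The 7-step procedure is purely mechanical, and can be sketched in code. The following is our illustration, not the paper's: values are taken in ${\Bbb Z}$, with the ${\Bbb Z}/2$ generators $H^1_0, Q^2_0$ modeled as the integers $0,1$ (an honest simplification, adequate for relations in which they cancel).

```python
# Sketch (ours) of the 7-step procedure: extend arbitrary values on the
# generators T^2_m, H^2_m (here integers) and h1, q2 in {0, 1} (standing in
# for Z/2) to a function g on all symbols R^a_m.
def seven_step(T2, H2, h1, q2):
    """T2, H2: dicts m -> value over a range of consecutive integers m.
    Returns g as a dict keyed by (letter, superscript, m)."""
    g = {}
    ms = sorted(set(T2) & set(H2))
    for m in ms[2:]:                                 # m-1, m-2 must exist
        g[('E', 1, m)] = g[('H', 1, m)] = h1                    # step 1
        g[('E', 2, m)] = g[('H', 2, m)] = H2[m]                 # step 2
        g[('E', 0, m)] = -H2[m]
        g[('T', 3, m)] = T2[m] + H2[m] - H2[m - 1]              # step 3
        g[('T', 0, m)] = g[('T', 3, m)]                         # step 4
        g[('T', 1, m)] = g[('T', 2, m)] = T2[m]
        g[('Q', 2, m)] = q2                                     # step 5
        g[('Q', 3, m)] = q2 + T2[m] - T2[m - 1]                 # step 6
        g[('Q', 4, m)] = (q2 + 2 * T2[m] - 2 * T2[m - 1]        # step 7
                          + H2[m] - 2 * H2[m - 1] + H2[m - 2])
    return g

# Check one of the defining relations of Delta_1 on sample generator values:
# Q^4_m - Q^3_m = T^3_m - T^3_{m-1}.
T2 = {m: m * m for m in range(-3, 6)}
H2 = {m: 2 * m + 1 for m in range(-3, 6)}
g = seven_step(T2, H2, 1, 0)
m = 3
assert g[('Q', 4, m)] - g[('Q', 3, m)] == g[('T', 3, m)] - g[('T', 3, m - 1)]
```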
Similarly we define $M : I_0 \to {\Bbb Z}/2$ (respectively $Q: I_0 \to {\Bbb Z}/2$) to be the projection of $f^U$ to ${\Bbb Z}/2$ sending all generators of ${\Bbb G}_U$ to 0 except $h^1_0$ (respectively except $q^2_0$). Then $f^U = f^K \oplus M \oplus Q$. Note that $f^U$ is defined only up to an additive constant in each regular homotopy class, and so the same is true for $f^K,M,Q$. In more detail, the invariants $Q$ and $M$ are defined as follows: $M:I_0 \to {\Bbb Z}/2$ is the order 1 invariant defined by $u(M)(H^1_0)=1$, $u(M)(Q^2_0)=0$ and $u(M)(T^2_m)=u(M)(H^2_m)=0$ for all $m$. By the 7-step procedure, this extends to $u(M)(H^1_m)=u(M)(E^1_m)=1$ for all $m$, $u(M)(H^a_m)=u(M)(E^a_m)=0$ for $a\neq 1$ and any $m$, and $u(M)(T^a_m)=u(M)(Q^a_m)=0$ for all $a,m$. That is, if $i_+,i_- \in I_0$ are the two immersions obtained from $i \in I_1$ by resolving its CE, then $M(i_+)-M(i_-) = 1 \in {\Bbb Z}/2$ iff the CE of $i$ is a ``matching tangency'', i.e. a tangency of two sheets of the surface where the orientations of the two sheets match at the time of tangency. (Thus the name $M$ for this invariant.) And so for any $i,j \in I_0$, $M(j)-M(i) \in {\Bbb Z}/2$ is the number mod 2 of matching tangencies occurring in any regular homotopy between $i$ and $j$. Similarly, $Q:I_0 \to {\Bbb Z}/2$ is the ${\Bbb Z}/2$ valued order 1 invariant satisfying $u(Q)(Q^2_0) = 1$, $u(Q)(H^1_0)=0$ and $u(Q)(T^2_m)=u(Q)(H^2_m)=0$ for all $m$. By the 7-step procedure, we have $u(Q)(Q^a_m)=1$ for all $a,m$ and $u(Q)(T^a_m)=u(Q)(E^a_m)=u(Q)(H^a_m)=0$ for all $a,m$. That is, $Q$ is the invariant such that for any $i,j \in I_0$, $Q(j)-Q(i)\in{\Bbb Z}/2$ is the number mod 2 of quadruple points occurring in any regular homotopy between $i$ and $j$. This invariant has been studied in \cite{q} and \cite{a}.
In \cite{a} an explicit formula has been given for $Q(i \circ h) - Q(i)$ for any diffeomorphism $h:F \to F$ such that $i$ and $i \circ h$ are regularly homotopic, and for $Q(e')-Q(e)$ for any two regularly homotopic embeddings. In the present work we will do the same for $M$, leaving open the interesting problem of finding an explicit formula for $Q$ and $M$ on \emph{all} immersions. For $f^K$, however, we will indeed give a formula for all immersions. \section{Statement of results}\label{st} Let $i \in I_0$. For every $m\in{\Bbb Z}$ let $U_m = U_m(i) = \{ p \in {{\Bbb R}^3}-i(F) \ : \ d_p(i)=m \}$. This is an open set in ${{\Bbb R}^3}$ which may be empty, and may be non-connected or unbounded, but in any case, the Euler characteristic $\chi(U_m)$ is defined. Denote by $N_m = N_m(i)$ the number of triple points $p\in{{\Bbb R}^3}$ of $i$ having $d_p(i)=m$. The following formula for $f^K : I_0 \to K \subseteq {\Bbb G}_U$ will be proved in Section \ref{fk}: $$f^K(i)= \sum_{m\in{\Bbb Z}} \chi(U_m) \bigg(\sum_{ -{1 \over 2} < k < \lfloor{m \over 2}\rfloor + {1 \over 2}} h^2_{m-2k}\bigg) + \sum_{m\in{\Bbb Z}} {1 \over 2} N_m \bigg( t^2_m - \sum_{ -{1 \over 2} < k < m-{1 \over 2} } h^2_k \bigg) $$ where for $a \in{\Bbb R}$, $\lfloor a \rfloor$ denotes the greatest integer $\leq a$, and for $a,b \in {\Bbb R}$ the sum $\sum_{a<k<b}$ means the following: If $a<b$ then it is the sum over all integers $a<k<b$, if $a=b$ then the sum is 0, and if $a>b$ then $\sum_{a<k<b} = - \sum_{b<k<a}$. For $i,j \in I_0$ let $M(i,j)=M(j)-M(i)$. The following two formulae for $M$ will be proved in Section \ref{m}: For any diffeomorphism $h:F\to F$ such that $i$ and $i \circ h$ are regularly homotopic: $$M(i,i\circ h)=\bigg(\mathrm{rank}(h_*-Id)\bigg)\bmod{2}$$ where $h_*$ is the map induced by $h$ on $H_1(F,{\Bbb Z}/2)$.
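The right-hand side of the formula for $M(i,i\circ h)$ is elementary linear algebra over ${\Bbb Z}/2$. A minimal sketch (ours, with a hypothetical sample $h_*$ on a genus 1 surface, where $H_1(F,{\Bbb Z}/2)$ has rank 2):

```python
# Sketch (ours): rank(h_* - Id) mod 2 over Z/2, with matrices over GF(2)
# represented as lists of 0/1 rows.
def rank_gf2(rows):
    """Rank of a matrix over GF(2), by Gaussian elimination; over GF(2)
    row subtraction is entrywise XOR."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def M_difference(h_star):
    """rank(h_* - Id) mod 2; over Z/2, subtracting Id flips the diagonal."""
    n = len(h_star)
    diff = [[h_star[i][j] ^ (1 if i == j else 0) for j in range(n)]
            for i in range(n)]
    return rank_gf2(diff) % 2

# Hypothetical example: if h_* swaps the two generators of H_1 of the torus,
# then h_* - Id = [[1,1],[1,1]] has rank 1, so M(i, i∘h) = 1.
print(M_difference([[0, 1], [1, 0]]))  # 1
```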
If $e:F\to{{\Bbb R}^3}$ is an embedding then $e(F)$ splits ${{\Bbb R}^3}$ into two pieces, one compact and one non-compact, which we denote $D^0(e)$ and $D^1(e)$ respectively. By restriction of range, $e$ induces maps $e^k : F \to D^k(e)$, $k=0,1$. Let $e^k_* : H_1(F,{\Bbb Z}/2) \to H_1(D^k(e),{\Bbb Z}/2)$ be the map induced by $e^k$. Then for two regularly homotopic embeddings $e,e':F \to {{\Bbb R}^3}$, $M(e,e')$ is computed as follows: \begin{enumerate} \item Find a basis $a_1,\dots,a_n,b_1,\dots,b_n$ for $H_1(F,{\Bbb Z}/2)$ such that $e^0_*(a_i)=0$, $e^1_*(b_i)=0$ and $a_i \cdot b_j = \delta_{ij}$ (where $a \cdot b$ denotes the intersection form in $H_1(F,{\Bbb Z}/2)$). \item Find a similar basis $a'_1,\dots,a'_n,b'_1,\dots,b'_n$ using $e'$ in place of $e$. \item Let $m$ be the dimension of the subspace of $H_1(F,{\Bbb Z}/2)$ spanned by: $$a'_1 - a_1 \ , \ \dots \ , \ a'_n - a_n \ , \ b'_1 - b_1 \ , \ \dots \ , \ b'_n - b_n.$$ \end{enumerate} Then $M(e,e') = m\bmod{2} \in {\Bbb Z}/2$. \section{Proof of formula for $f^K$}\label{fk} We define the group ${\Bbb O}$ to be the free Abelian group with generators $\{x_n\}_{n\in {\Bbb Z}} \cup \{y_n\}_{n\in{\Bbb Z}}$. For $i\in I_0$ we define $k(i) \in {\Bbb O}$ as follows (the terms are defined in Section \ref{st} and the sums are always finite): $$k(i)= \sum_{m\in {\Bbb Z}} \chi(U_m) x_m + \sum_{m \in {\Bbb Z}} {1 \over 2} N_m y_m .$$ Indeed this is an element of ${\Bbb O}$ since, as we shall see below, $N_m$ is always even. In the meantime, say $k$ attains values in the ${\Bbb Q}$ vector space with the same basis.
\begin{prop}\label{p1} The invariant $k$ is an order 1 invariant, with $u(k)$ given by: \begin{itemize} \item $u(k)(E^a_m) = u(k)(H^a_m) = x_{m+a-2} - x_{m-a}$ \item $u(k)(T^a_m) = x_{m+a-3} + x_{m-a} + y_m$ \item $u(k)(Q^a_m) = x_{m+a-4} - x_{m-a} + (a-2)y_m + (2-a)y_{m-1}$ \end{itemize} \end{prop} \begin{pf} We use the explicit description of the CE types, as appearing in \cite{o}, where more details may be found. A model in 3-space for the different sheets involved in the self intersection near the CE is given. The CE is obtained at the origin when setting ${\lambda}=0$. We will show that for any $i \in I_1$, if $i_+ \in I_0$ is the immersion on the positive side of $i$ with respect to the permanent co-orientation for the CE of $i$ (if such exists, otherwise an arbitrary side is chosen) and $i_- \in I_0$ is the immersion on the other side, then indeed $k(i_+) - k(i_-)$ depends on $C(i)$ as in the statement of this proposition. By showing in particular that this change depends \emph{only} on $C(i)$, we show that $k$ is indeed an invariant of order 1. Model for $E^a_m$: \ $z=0$, \ $z=x^2+y^2+{\lambda}$. The positive side is that where ${\lambda} < 0$, where there is a new 2-sphere in the image of the immersion, which is made of two 2-cells, and bounds a 3-cell in ${{\Bbb R}^3}$. The superscript $a$ is then the number of 2-cells (0, 1 or 2) whose preferred side, determined by the orientation of the surface, is facing away from the 3-cell (and $m$ is the degree at the CE at time ${\lambda}=0$). The degree of points in the new 3-cell is seen to be $m+a-2$, and its $\chi$ is 1, and so the term $x_{m+a-2}$. The second change occurring is that the region just above the plane $z=0$ has a 2-handle removed from it, so its $\chi$ is reduced by 1, and the degree in this region is seen to be $m-a$, and so the term $- x_{m-a}$. Model for $H^a_m$: \ $z=0$, \ $z=x^2-y^2+{\lambda}$.
The positive side for $H^2$ is that where both sheets have their preferred side facing toward the region that is between them near the origin. For $H^1$ a positive side is chosen arbitrarily. By rotating the configuration if necessary, say the positive side is where ${\lambda} <0$. The superscript $a$ then denotes the number of sheets (1 or 2) whose preferred side is facing toward the region that is between the two sheets near the origin, when ${\lambda} <0$. The changes occurring in the neighboring regions when passing from ${\lambda} > 0$ to ${\lambda} < 0$ are that a 1-handle is removed from the region $X$ just above the $x$ axis, and a 1-handle is added to the region $Y$ just below the $y$ axis. The degree of $X$ is seen to be $m+a-2$ and since a 1-handle is removed, $\chi(X)$ increases by 1, and thus the term $x_{m+a-2}$. The degree of $Y$ is seen to be $m-a$, and since a 1-handle is added, $\chi(Y)$ decreases by 1, and thus the term $-x_{m-a}$. Model for $T^a_m$: \ $z=0$, \ $y=0$, \ $z=y+x^2+{\lambda}$. The positive side for this configuration is where ${\lambda} < 0$, where there is a new 2-sphere in the image of the immersion, which is made of three 2-cells, and bounds a 3-cell in ${{\Bbb R}^3}$. The superscript $a$ is the number of 2-cells (0, 1, 2 or 3) whose preferred side is facing away from the 3-cell. The degree in the new 3-cell is $m+a-3$ and its $\chi$ is 1, and so the term $x_{m+a-3}$. The second change occurring is that a 1-handle is removed from the region near the $x$ axis having negative $y$ values and positive $z$ values. The degree of this region is $m-a$ and since a 1-handle is removed, $\chi$ is increased by 1, and so the term $x_{m-a}$. The last change that affects the value of $k$ is that two triple points are added, each of degree $m$. This increases ${1 \over 2} N_m$ by 1, and so the term $y_m$. Model for $Q^a_m$: \ $z=0$, \ $y=0$, \ $x=0$, \ $z=x+y+{\lambda}$.
On both the positive and negative side there is a simplex created near the origin, and the positive side is that where the majority of the four sheets are facing away from the simplex (and for $Q^2$ a positive side is chosen arbitrarily). The superscript $a$ denotes the number of sheets (2, 3 or 4) facing away from the simplex created on the positive side, its degree thus seen to be $m+a-4$. The simplex on the negative side has $4-a$ sheets facing away from it and so its degree is $m-a$. So when moving from the negative to the positive side, a 3-cell ($\chi=1$) of degree $m-a$ is removed and a 3-cell of degree $m+a-4$ is added, and so the terms $x_{m+a-4} - x_{m-a}$. In addition to that, the degree of the four triple points of the simplex changes. On the positive side there are $a$ triple points with degree $m$ (namely, the triple points which are opposite the faces which are facing away from the simplex), and $4-a$ triple points with degree $m-1$. On the negative side the situation is reversed, i.e. there are $4-a$ triple points with degree $m$ and $a$ triple points with degree $m-1$. So the total change in $N_m$ is $a-(4-a) = 2a-4$ and the total change in $N_{m-1}$ is $(4-a)-a = 4-2a$, and so the terms $(a-2)y_m + (2-a)y_{m-1}$. \end{pf} We can now verify that indeed the values of $k$ are in ${\Bbb O}$, i.e. no half integer coefficients appear (which means $N_m$ is always even). From Proposition \ref{p1} we see that the change in the value of $k$ is in ${\Bbb O}$ along any regular homotopy, and so it is enough to show that the value is in ${\Bbb O}$ for one immersion in any given regular homotopy class. Indeed, we show a bit more: \begin{lemma}\label{l1} Let $g$ be the genus of $F$. Any immersion $i:F\to{{\Bbb R}^3}$ is regularly homotopic to an immersion $j$ with $k(j) = (2-g)x_0 + (1-g)x_{-1}$.
\end{lemma} \begin{pf} By \cite{p}, any immersion $i:F\to {{\Bbb R}^3}$ is regularly homotopic to an immersion whose image is of one of two standard forms, either a standard embedding, or an immersion obtained from a standard embedding by adding a ring to it. (For what we mean by a ``ring'' see \cite{a}.) For an embedding $e$, $k(e)$ is either $(2-g)x_0 + (1-g)x_{-1}$ or $(2-g)x_0 + (1-g)x_1$, depending on whether the preferred side of $e(F)$, determined by the orientation of $F$, is facing the compact or the non-compact side of $e(F)$ in ${{\Bbb R}^3}$, respectively. Now take an orientation reversing diffeomorphism $h:F\to F$ such that $e\circ h$ is regularly homotopic to $e$, to see that both values are attained. (Such $h$ exists by \cite{p}; take e.g. an $h$ that induces the identity on $H_1(F, {\Bbb Z}/2)$.) Now, a ring added to such an embedding bounds a solid torus, whose $\chi$ is 0, and the topological type and degree of the other two components remain the same, and so by the same argument as for an embedding, the two values are attained in this case too. \end{pf} We define a homomorphism $\varphi:{\Bbb G}_U \to {\Bbb O}$ on generators as follows: \begin{itemize} \item $\varphi(h^2_m) = x_m - x_{m-2}$ \item $\varphi(t^2_m) = x_{m-1} + x_{m-2} + y_m$ \item $\varphi(h^1_0) = \varphi(q^2_0) = 0$ \end{itemize} By Proposition \ref{p1}, $u(k) = u(\varphi \circ f^U)$ and so $k=\varphi \circ f^U + c$ where $c \in {\Bbb O}$ is a constant. We now define a homomorphism $F:{\Bbb O} \to K$ such that $F \circ \varphi$ is the projection of ${\Bbb G}_U$ onto $K$, and so $F \circ k = F \circ \varphi \circ f^U + F(c) = f^K + F(c)$. By redefining $f^U$ as $f^U + F(c)$ we have $F \circ k = f^K$.
We define $F$ on generators of ${\Bbb O}$ as follows (the notation involved is defined in Section \ref{st}): $$ F(x_m) = \sum_{ -{1 \over 2} < k < \lfloor{m \over 2}\rfloor + {1 \over 2}} h^2_{m-2k} \qquad\qquad F(y_m) = t^2_m - \sum_{ -{1 \over 2} < k < m-{1 \over 2} } h^2_k $$ One checks directly that indeed $F \circ \varphi$ maps each generator of $K$ to itself. Since $\varphi$ is not surjective, there was a certain choice in the construction of $F$. Indeed the image of $\varphi$ is the subgroup of ${\Bbb O}$ of all elements $\sum A_m x_m + \sum B_m y_m$ with $A_m,B_m \in {\Bbb Z}$ satisfying $\sum_m A_{2m} = \sum_m A_{2m+1} = \sum_m B_m$. And so any two generators $x_i , x_j$ with $i$ even and $j$ odd generate a subgroup in ${\Bbb O}$ which is a direct summand of the image of $\varphi$. Our choice for $F$ was that $F(x_{-2}) = F(x_{-1}) = 0$. Note that by Lemma \ref{l1}, the image of $k:I_0 \to {\Bbb O}$ is contained in a non-trivial coset of the image of $\varphi$ in ${\Bbb O}$ (and so the constant $c$ appearing above is non-zero, regardless of the additive constant chosen for $f^U$). Composing the formula for $F$ with the formula for $k$ we obtain our formula for $f^K$: $$f^K(i)= \sum_{m\in{\Bbb Z}} \chi(U_m) \bigg(\sum_{ -{1 \over 2} < k < \lfloor{m \over 2}\rfloor + {1 \over 2}} h^2_{m-2k}\bigg) + \sum_{m\in{\Bbb Z}} {1 \over 2} N_m \bigg( t^2_m - \sum_{ -{1 \over 2} < k < m-{1 \over 2} } h^2_k \bigg). $$ The choice of constants for $f^K$ here may be characterized by saying that in each regular homotopy class, $ f^K(j) = (2-g)h^2_0 $ for $j$ of Lemma \ref{l1}.
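The verification that $F\circ\varphi$ fixes each generator of $K$ can be mechanized. The following sketch is not from the paper: the coefficient-dictionary encoding and all names are ours, and it checks the identity only for nonnegative degrees, using the convention that the empty sums give $F(x_m)=0$ for $m<0$.

```python
def add(a, b):
    """Formal sum of two elements of a free abelian group,
    encoded as {generator label: integer coefficient} with zeros dropped."""
    out = dict(a)
    for g, n in b.items():
        out[g] = out.get(g, 0) + n
    return {g: n for g, n in out.items() if n != 0}

def phi(gen):
    """phi on the generators h^2_m, t^2_m of G_U (the remaining generators map to 0)."""
    kind, m = gen
    if kind == 'h2':
        return {('x', m): 1, ('x', m - 2): -1}
    if kind == 't2':
        return {('x', m - 1): 1, ('x', m - 2): 1, ('y', m): 1}
    return {}

def F_gen(gen):
    """F on the generators x_m, y_m of O:
    F(x_m) = sum over 0 <= k <= floor(m/2) of h^2_{m-2k}  (empty for m < 0),
    F(y_m) = t^2_m - sum over 0 <= k <= m-1 of h^2_k      (empty for m <= 0)."""
    kind, m = gen
    if kind == 'x':
        return {('h2', m - 2 * k): 1 for k in range(m // 2 + 1)} if m >= 0 else {}
    return add({('t2', m): 1}, {('h2', k): -1 for k in range(max(m, 0))})

def F(elt):
    """Extend F linearly from generators to arbitrary elements."""
    out = {}
    for g, n in elt.items():
        out = add(out, {h: n * c for h, c in F_gen(g).items()})
    return out

# F o phi fixes each generator h^2_m, t^2_m of K:
for m in range(9):
    assert F(phi(('h2', m))) == {('h2', m): 1}
    assert F(phi(('t2', m))) == {('t2', m): 1}
```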
Since $f^U$ is universal, the image of $f^K:I_0\to K$ is not contained in any coset of a proper subgroup of $K$, yet the image of $f^K$ is far from being the whole group $K$, since as we see from the formula, the coefficients of all generators $t^2_m$ are always non-negative. It would be interesting to determine the precise image of $f^U:I_0 \to {\Bbb G}_U$. \section{Applications}\label{ap} We give the following two applications. The first will be used in the second and the second will be used in Section \ref{m}. We will use the fact that $\varphi:{\Bbb G}_U \to {\Bbb O}$ is not surjective to obtain identities on immersions: Let $\theta_0,\theta_1 : {\Bbb O} \to {\Bbb Z}$ be the homomorphisms defined by: $\theta_0(x_{2m})=1, \theta_0(x_{2m+1})=0, \theta_0(y_m)=-1$ for all $m$ and $\theta_1(x_{2m})=0, \theta_1(x_{2m+1})=1, \theta_1(y_m)=-1$ for all $m$, and so $\theta_0 \circ \varphi = \theta_1 \circ \varphi = 0$. It follows that $\theta_0 \circ k$ and $\theta_1 \circ k$ are constant invariants, which are given explicitly by $\theta_0 \circ k(i) = \sum \chi(U_{2m}) - {1 \over 2}N$ and $\theta_1 \circ k(i) = \sum \chi(U_{2m+1}) - {1 \over 2}N$, where $N=N(i)=\sum N_m(i)$ is the total number of triple points of $i$. To find the value of these constants we need to evaluate them on a single immersion in every regular homotopy class. For the immersion $j$ of Lemma \ref{l1}, $\theta_0 \circ k(j)=2-g$ and $\theta_1 \circ k(j)=1-g$, so we get the following two identities: For any $i \in I_0$, $$\sum_m \chi(U_{2m}) - {1 \over 2}N = 2-g \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \sum_m \chi(U_{2m+1}) - {1 \over 2}N = 1-g.$$ For our second application, let $U:I_0 \to {\Bbb Z}$ be the order-one invariant defined by $u(U)(H^2_m)=1$, $u(U)(T^2_m)=0$ for all $m$ and $u(U)(H^1_0)=u(U)(Q^2_0)=0$.
By the 7-step procedure we will have $u(U)(H^2_m)=u(U)(E^2_m)=-u(U)(E^0_m)=1$ for all $m$, $u(U)(H^1_m)=u(U)(E^1_m)=0$ for all $m$, and $u(U)(T^a_m)=u(U)(Q^a_m)=0$ for all $a,m$. That is, for any $i,j \in I_0$, $U(j)-U(i) \in {\Bbb Z}$ is the signed number of \emph{un}-matching tangencies occurring in any regular homotopy from $i$ to $j$ (thus the name $U$ for this invariant), where each such tangency is counted as $\pm 1$ according to its permanent co-orientation and the prescription $u(U)(H^2_m)=u(U)(E^2_m)=-u(U)(E^0_m)=1$. Following the definition of $U$ we define $\eta : K \to {\Bbb Z}$ on generators as follows: $\eta(h^2_m)=1$ and $\eta(t^2_m)=0$ for all $m$. Then $u(U) = u(\eta \circ f^K)$ and so (up to choice of constants) $U=\eta \circ f^K$. So from our formula for $f^K$ we get an explicit formula for $U$: $$U(i)=\sum_{m\in{\Bbb Z}} \chi(U_m) \lfloor {m+2 \over 2} \rfloor - \sum_{m \in{\Bbb Z}} {1\over 2}m N_m .$$ Again we may characterize the choice of constants by saying that $U(j)=2-g$ for $j$ of Lemma \ref{l1}. We denote $U(i,j)=U(j)-U(i)$. For two regularly homotopic embeddings $e,e':F \to {{\Bbb R}^3}$ we would like to compute $U(e,e')$. For an embedding $e:F \to {{\Bbb R}^3}$, let $c(e)\in {\Bbb Z}$ be the degree of the points in the compact side of $e(F)$ in ${{\Bbb R}^3}$, so $c(e)=\pm 1$. We have $U(e) = (2-g) + (1-g) \lfloor {c(e) + 2 \over 2} \rfloor$ and so $$U(e,e') = U(e')-U(e) = (1-g)\bigg(\lfloor {c(e') + 2 \over 2} \rfloor - \lfloor {c(e) + 2 \over 2} \rfloor\bigg) = (1-g){\epsilon}(e,e')$$ where ${\epsilon}(e,e')$ is $0$ if $c(e)=c(e')$, is $1$ if $c(e)=-1,c(e')=1$, and is $-1$ if $c(e)=1,c(e')=-1$. Now for $i \in I_0$ and $h:F \to F$ a diffeomorphism such that $i$ and $i \circ h$ are regularly homotopic, we would like to compute $U(i,i\circ h)$.
If $h$ is orientation preserving then from our formula for $U(i)$ it is clear that $U(i)=U(i \circ h)$, and so $U(i,i \circ h)=0$. Now let $h:F \to F$ be orientation reversing. If $p \in {{\Bbb R}^3} - i(F)$ then $d_p(i \circ h) = -d_p(i)$, and if $p\in{{\Bbb R}^3}$ is a triple point of $i$ then $d_p(i \circ h) = 3-d_p(i)$, and so we get: $$U(i \circ h) - U(i)= \sum_m \chi(U_m(i)) \bigg(\lfloor {-m+2 \over 2} \rfloor - \lfloor {m+2 \over 2} \rfloor\bigg) - \sum_m {1\over 2}\big((3-m) - m\big) N_m(i) .$$ Using the two identities from the beginning of this section and the fact that $\lfloor {-m+2 \over 2} \rfloor - \lfloor {m+2 \over 2} \rfloor = -2\lfloor {m+2 \over 2} \rfloor + k(m)$, where $k(m)$ is 2 for $m$ even and 1 for $m$ odd, we get: $$U(i, i\circ h) = (1-g) + 2\bigg(2-g-U(i)\bigg).$$ Note that the $U(i)$ appearing here on the right stands for our specific formula for the invariant $U$, and not for the abstract invariant, which is defined only up to a constant. This equality for $h$ orientation reversing can be interpreted as $U(i,i\circ h) = U(j,j\circ h) + 2U(i,j)$ for $j$ of Lemma \ref{l1}, offering another way of proving the equality. Let $\widehat{U}:I_0\to {\Bbb Z}/2$ be the mod 2 reduction of $U$. The reduction mod 2 of the above results reads as follows: For embeddings $e,e':F \to {{\Bbb R}^3}$, $\widehat{U}(e,e')=(1-g)\widehat{{\epsilon}}(e,e')$, where $\widehat{{\epsilon}}(e,e')\in{\Bbb Z}/2$ is 0 if $c(e)=c(e')$ and is 1 if $c(e) \neq c(e')$. For $h:F\to F$ a diffeomorphism such that $i$ and $i\circ h$ are regularly homotopic, $\widehat{U}(i,i\circ h)=(1-g){\epsilon}(h)$, where ${\epsilon}(h) \in {\Bbb Z}/2$ is 0 if $h$ is orientation preserving and is 1 if $h$ is orientation reversing.
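The floor identity used in this computation is elementary but easy to get wrong by a parity case; a throwaway sketch (our notation) checks it over a range of integers, using Python's floor division:

```python
def k_par(m):
    """k(m) = 2 for m even, 1 for m odd."""
    return 2 if m % 2 == 0 else 1

for m in range(-50, 51):
    lhs = (-m + 2) // 2 - (m + 2) // 2       # floor((-m+2)/2) - floor((m+2)/2)
    rhs = -2 * ((m + 2) // 2) + k_par(m)
    assert lhs == rhs == -m                  # both sides also simplify to -m
```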
\section{Proof of formula for $M$}\label{m} For $i \in I_0$ and $h:F \to F$ a diffeomorphism such that $i$ and $i \circ h$ are regularly homotopic, let $M'(i,i\circ h)$ denote our proposed formula for $M(i,i\circ h)$ presented in Section \ref{st}. So we must show that indeed $M(i,i\circ h)=M'(i,i\circ h)$. Similarly, for regularly homotopic embeddings $e,e':F \to {{\Bbb R}^3}$, let $M'(e,e')$ denote the proposed value for $M(e,e')$ presented in Section \ref{st}, so we must show $M(e,e')=M'(e,e')$. In \cite{a} it is shown that $Q(i,i\circ h) = M'(i,i\circ h) + (1-g){\epsilon}(h)$ and $Q(e,e')= M'(e,e') + (1-g)\widehat{{\epsilon}}(e,e')$. In view of the concluding paragraph of Section \ref{ap}, this means $Q(i,i\circ h) = M'(i,i\circ h) + \widehat{U}(i,i\circ h)$ and $Q(e,e')= M'(e,e') + \widehat{U}(e,e')$. So showing $M=M'$ in these two settings is equivalent to showing $Q=M+\widehat{U}$ in these settings, which means that the number mod 2 of quadruple points occurring in any regular homotopy between two such immersions or embeddings is equal to the number mod 2 of all tangencies occurring (matching and un-matching). So, it remains to prove the following: \begin{prop}\label{p2} Let $i,j \in I_0$ be such that either there is a diffeomorphism $h:F\to F$ with $j= i \circ h$, or $i,j$ are both embeddings. Then in any regular homotopy between $i$ and $j$, the number mod 2 of quadruple points occurring is equal to the number mod 2 of tangencies occurring. \end{prop} \begin{pf} For a closed 3-manifold $N$ and a stable immersion $f:N \to {\Bbb R}^4$, there is defined a closed surface $S_f$ and an immersion $g:S_f \to {\Bbb R}^4$ such that the image $g(S_f) \subseteq {\Bbb R}^4$ is precisely the multiple set of $f$. It is shown in \cite{ec} that the number mod 2 of quadruple points of $f$ is equal to $\chi(S_f) \bmod 2$.
Now let $i,j:F \to {{\Bbb R}^3}$ be as in the assumption of this proposition and let $H_t:F \to {{\Bbb R}^3}$, $0 \leq t \leq 1$, be a regular homotopy with $H_0=i$, $H_1=j$. We define an immersion $f:F \times [0,1] \to {{\Bbb R}^3} \times [0,1]$ by $f(x,t) = (H_t(x) , t)$. In case $i,j$ are embeddings we extend $f$ into ${\Bbb R}^4 = {{\Bbb R}^3} \times {\Bbb R}$ and construct a closed 3-manifold $N$ by attaching two handlebodies to $F \times [0,1]$, glued so that $f$ can be extended to embeddings of these handlebodies into ${{\Bbb R}^3} \times (-\infty ,0]$ and ${{\Bbb R}^3} \times [1,\infty)$. We thus obtain an immersion $\bar{f}:N\to{\Bbb R}^4$ whose self intersection is precisely the original self intersection of $F \times [0,1]$. The projection ${{\Bbb R}^3} \times [0,1] \to [0,1]$ induces a Morse function on $S_{\bar{f}}$ with singularities precisely wherever a tangency occurs in the regular homotopy $H_t$, and so by Morse theory $\chi(S_{\bar{f}})$ is equal mod 2 to the number of tangencies. By \cite{ec} then, the number mod 2 of quadruple points of $H_t$, which is the number mod 2 of quadruple points of $\bar{f}$, is equal to the number mod 2 of tangencies. In case $j = i \circ h$, let $N$ be the 3-manifold obtained from $F \times [0,1]$ by gluing its two boundary components to each other via $h$, so that there is an induced immersion $\bar{f}:N \to {{\Bbb R}^3} \times S^1$. Composing $\bar{f}$ with an embedding of ${{\Bbb R}^3} \times S^1$ in ${\Bbb R}^4$, we see again that the number of quadruple points of $H_t$ is equal mod 2 to $\chi(S_{\bar{f}})$, which is equal mod 2 to the number of tangencies of $H_t$. \end{pf} We remark that one can prove the formulae for $M$ presented in Section \ref{st} directly, without resorting to the result of \cite{ec}, by going along the lines of \cite{a}. Proposition \ref{p2} would then be obtained as a corollary. \end{document}
\begin{document} \title{Power domination throttling} \author{Boris Brimkov\thanks{Department of Computational and Applied Mathematics, Rice University, Houston, TX, 77005, USA ([email protected], [email protected], [email protected], [email protected])} \hskip 1.5em Joshua Carlson\thanks{Department of Mathematics, Iowa State University, Ames, IA, 50011, USA ([email protected])} \hskip 1.5em Illya V. Hicks$^*$\\ Rutvik Patel$^*$ \hskip 1.5em Logan Smith$^*$} \date{} \maketitle \begin{abstract} A power dominating set of a graph $G=(V,E)$ is a set $S\subset V$ that colors every vertex of $G$ according to the following rules: in the first timestep, every vertex in $N[S]$ becomes colored; in each subsequent timestep, every vertex which is the only non-colored neighbor of some colored vertex becomes colored. The power domination throttling number of $G$ is the minimum sum of the size of a power dominating set $S$ and the number of timesteps it takes $S$ to color the graph. In this paper, we determine the complexity of power domination throttling and give some tools for computing and bounding the power domination throttling number. Some of our results apply to very general variants of throttling and to other aspects of power domination. \noindent {\bf Keywords:} Power domination throttling, power domination, power propagation time, zero forcing \end{abstract} \section{Introduction} A \emph{power dominating set} of a graph $G=(V,E)$ is a set $S\subset V$ that colors every vertex of $G$ according to the following rules: in the first timestep, every vertex in $N[S]$ becomes colored; in each subsequent timestep, every vertex which is the only non-colored neighbor of some colored vertex becomes colored. The first timestep is called the \emph{domination step} and each subsequent timestep is called a \emph{forcing step}. The {\em power domination number} of $G$, denoted $\gamma_P(G)$, is the cardinality of a minimum power dominating set. 
The \emph{power propagation time of $G$ using $S$}, denoted $\operatorname{ppt}(G;S)$, is the number of timesteps it takes for a power dominating set $S$ to color all of $G$. The \emph{power propagation time} of $G$ is defined as \[\operatorname{ppt}(G)=\min\{\operatorname{ppt}(G;S):S \text{ is a minimum power dominating set}\}.\] It is well-known that larger power dominating sets do not necessarily yield smaller power propagation times. The \emph{power domination throttling number} of $G$ is defined as \[\operatorname{th}_{\gamma_P}(G)=\min\{|S|+\operatorname{ppt}(G;S):S \text{ is a power dominating set}\}.\] $S$ is a \emph{power throttling set} of $G$ if $S$ is a power dominating set of $G$ and $|S|+\operatorname{ppt}(G;S)=\operatorname{th}_{\gamma_P}(G)$. Power domination arises from a graph theoretic model of the Phase Measurement Unit (PMU) placement problem from electrical engineering. Electrical power companies place PMUs at select locations in a power network in order to monitor its performance; the physical laws by which PMUs observe the network give rise to the color change rules described above (cf. \cite{BH05,powerdom3}). This PMU placement problem has been explored extensively in the electrical engineering literature; see \cite{EEprobabilistic,Baldwin93,Brunei93,EEinformationTheoretic,EEtaxonomy,Mili91,EEtabuSearch,EEmultiStage}, and the bibliographies therein for various placement strategies and computational results. The PMU placement literature also considers various other properties of power dominating sets, such as redundancy, controlled islanding, and connectedness, and optimizes over them in addition to the cardinality of the set (see, e.g., \cite{akhlaghi,connected_pd,mahari,xia}). Power domination has also been widely studied from a purely graph theoretic perspective. See, e.g., \cite{benson,bozeman,connected_pd,dorbec2,dorfling,kneis,xu,powerdom2} for various structural and computational results about power domination and related variants. 
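To make the definitions above concrete, here is a small brute-force sketch (illustrative only; the adjacency-dictionary encoding and function names are ours, and the subset search is exponential in $n$) that simulates the power domination process and computes $\operatorname{th}_{\gamma_P}(G)$ by trying all vertex subsets:

```python
from itertools import combinations

def ppt(adj, S):
    """Power propagation time ppt(G;S) for a graph given as {v: set of neighbors};
    returns None if S is not a power dominating set."""
    colored = set(S)
    for v in S:
        colored |= adj[v]              # domination step (timestep 1)
    t = 1
    while len(colored) < len(adj):     # forcing steps
        forced = {next(iter(adj[v] - colored))
                  for v in colored if len(adj[v] - colored) == 1}
        if not forced:
            return None                # the process stalls
        colored |= forced
        t += 1
    return t

def pd_throttling(adj):
    """Brute-force th_{gamma_P}(G): minimum of |S| + ppt(G;S) over power dominating sets S."""
    best = None
    for k in range(1, len(adj) + 1):
        for S in combinations(adj, k):
            t = ppt(adj, S)
            if t is not None and (best is None or k + t < best):
                best = k + t
    return best

P5 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
K7 = {i: set(range(7)) - {i} for i in range(7)}
assert ppt(P5, {2}) == 2               # color N[2], then force both ends at once
assert pd_throttling(P5) == 3          # attained by S = {2}: 1 + 2
assert pd_throttling(K7) == 2          # any single vertex dominates K_7 in one step
```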
The power propagation time of a graph has previously been studied in \cite{aazami,dorbec,ferrero,liao}. Other variants of propagation time arising from similar dynamic graph coloring processes have also been studied; these include zero forcing propagation time \cite{berliner,fast_hicks,proptime1,kenter} and positive semidefinite propagation time \cite{warnberg}. Throttling for other problems such as zero forcing \cite{butler_young}, positive semidefinite zero forcing \cite{carlson2}, minor monotone floor of zero forcing \cite{carlson}, and the game of Cops and Robbers \cite{breen} has been studied as well. Notably missing from the literature on throttling (for power domination as well as other variants) is the computational complexity of the problems. In this paper, we determine the complexity of a large, abstract class of throttling problems, including power domination throttling. We also give explicit formulas and tight bounds for the power domination throttling numbers of certain graphs, and characterizations of graphs with extremal power domination throttling numbers. \section{Preliminaries} A graph $G=(V,E)$ consists of a vertex set $V=V(G)$ and an edge set $E=E(G)$ of two-element subsets of $V$. The \emph{order} of $G$ is denoted by $n(G)=|V|$. We will assume that the order of $G$ is nonzero, and when there is no scope for confusion, dependence on $G$ will be omitted. Two vertices $v,w\in V$ are \emph{adjacent}, or \emph{neighbors}, if $\{v,w\}\in E$; we will sometimes write $vw$ to denote an edge $\{v,w\}$. The \emph{neighborhood} of $v\in V$ is the set of all vertices which are adjacent to $v$, denoted $N(v)$; the \emph{degree} of $v\in V$ is defined as $d(v)=|N(v)|$. The \emph{maximum degree} of $G$ is defined as $\Delta(G)=\max_{v\in V}d(v)$. The \emph{closed neighborhood} of $v$ is the set $N[v]=N(v)\cup \{v\}$.
\emph{Contracting} an edge $e$ of a graph $G$, denoted $G/e$, is the operation of removing $e$ from $G$ and identifying the endpoints of $e$ into a single vertex. A graph $H$ is a \emph{subgraph} of a graph $G$, denoted $H\leq G$, if $H$ can be obtained from $G$ by deleting vertices and deleting edges of $G$; $H$ is a \emph{minor} of $G$, denoted $H\preceq G$, if $H$ can be obtained from $G$ by deleting vertices, deleting edges, and contracting edges of $G$. Given $S \subset V$, $N[S]=\bigcup_{v\in S}N[v]$, and the \emph{induced subgraph} $G[S]$ is the subgraph of $G$ whose vertex set is $S$ and whose edge set consists of all edges of $G$ which have both endpoints in $S$. An isomorphism between graphs $G_1$ and $G_2$ will be denoted by $G_1\simeq G_2$. Given a graph $G=(V,E)$, and sets $V'\subset V$ and $E'\subset E$, we say the vertices in $V'$ are \emph{saturated} by the edges in $E'$ if every vertex of $V'$ is incident to some edge in $E'$. An \emph{isolated vertex}, or \emph{isolate}, is a vertex of degree 0. A \emph{dominating vertex} is a vertex which is adjacent to all other vertices. The path, cycle, complete graph, and empty graph on $n$ vertices will respectively be denoted $P_n$, $C_n$, $K_n$, $\overline{K}_n$. Given two graphs $G_1$ and $G_2$, the \emph{disjoint union} $G_1\dot\cup G_2$ is the graph with vertex set $V(G_1)\dot\cup V(G_2)$ and edge set $E(G_1)\dot\cup E(G_2)$. With a slight abuse of notation, given a set $S\subset V(G_1\dot\cup G_2)$, we will use, e.g., $S\cap V(G_1)$ to denote the set of vertices in $G_1\dot\cup G_2$ originating from $G_1$ (instead of specifying the unique index created by the disjoint union operation). The \emph{intersection} of $G_1$ and $G_2$, denoted $G_1\cap G_2$, is the graph with vertex set $V(G_1)\cap V(G_2)$ and edge set $E(G_1)\cap E(G_2)$.
The \emph{Cartesian product} of $G_1$ and $G_2$, denoted $G_1\square G_2$, is the graph with vertex set $V(G_1)\times V(G_2)$, where vertices $(u,u')$ and $(v,v')$ are adjacent in $G_1\square G_2$ if and only if either $u = v$ and $u'$ is adjacent to $v'$ in $G_2$, or $u' = v'$ and $u$ is adjacent to $v$ in $G_1$. The \emph{join} of $G_1$ and $G_2$, denoted $G_1\lor G_2$, is the graph obtained from $G_1\dot\cup G_2$ by adding an edge between each vertex of $G_1$ and each vertex of $G_2$. The \emph{complete bipartite graph} with parts of size $a$ and $b$, denoted $K_{a,b}$, is the graph $\overline{K}_a\lor\overline{K}_b$. The graph $K_{n-1,1}$, $n\geq 3$, will be called a \emph{star}. For other graph theoretic terminology and definitions, we refer the reader to \cite{bondy}. A \emph{zero forcing} set of a graph $G=(V,E)$ is a set $S\subset V$ that colors every vertex of $G$ according to the following color change rule: initially, every vertex in $S$ is colored; then, in each timestep, every vertex which is the only non-colored neighbor of some colored vertex becomes colored. Note that in a given forcing step, it may happen that a vertex $v$ is the only non-colored neighbor of several colored vertices. In this case, we may arbitrarily choose one of those colored vertices $u$, and say that $u$ is the one which forces $v$; making such choices in every forcing step will be called ``fixing a chronological list of forces''. The notions of \emph{zero forcing number} of $G$, denoted $Z(G)$, \emph{zero forcing propagation time of $G$ using $S$}, denoted $\operatorname{pt}(G;S)$, \emph{zero forcing propagation time} of $G$, denoted $\operatorname{pt}(G)$, and \emph{zero forcing throttling number}, denoted $\operatorname{th}(G)$, are defined analogously to $\gamma_P(G)$, $\operatorname{ppt}(G;S)$, $\operatorname{ppt}(G)$, and $\operatorname{th}_{\gamma_P}(G)$.
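The zero forcing process differs from power domination only in lacking the free domination step: every vertex of $S$ starts colored, and the single-uncolored-neighbor rule is applied from timestep 1 on. A minimal sketch (the adjacency-dictionary encoding and names are ours):

```python
def zf_pt(adj, S):
    """Zero forcing propagation time pt(G;S) for a graph {v: set of neighbors};
    S is fully colored at timestep 0.  Returns None if S is not a zero forcing set."""
    colored, t = set(S), 0
    while len(colored) < len(adj):
        forced = {next(iter(adj[v] - colored))
                  for v in colored if len(adj[v] - colored) == 1}
        if not forced:
            return None                # no vertex has a unique non-colored neighbor
        colored |= forced
        t += 1
    return t

P5 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
assert zf_pt(P5, {0}) == 4             # an endpoint forces along the path, one vertex per step
assert zf_pt(P5, {2}) is None          # the middle vertex alone stalls immediately
```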
A \emph{positive semidefinite (PSD) zero forcing set} of $G$ is a set $S\subset V$ which colors every vertex of $G$ according to the following color change rule: initially, in timestep 0, every vertex in $S_0:=S$ is colored; then, in each timestep $i\geq 1$, if $S_{i-1}$ is the set of colored vertices in timestep $i-1$, and $W_1,\ldots,W_k$ are the vertex sets of the components of $G-S_{i-1}$, then every vertex which is the only non-colored neighbor of some colored vertex in $G[W_j\cup S_{i-1}]$, $1\leq j\leq k$, becomes colored. As with zero forcing, the PSD zero forcing notation $Z_+(G)$, $\operatorname{pt}_+(G;S)$, $\operatorname{pt}_+(G)$, and $\operatorname{th}_+(G)$ is analogous to $\gamma_P(G)$, $\operatorname{ppt}(G;S)$, $\operatorname{ppt}(G)$, and $\operatorname{th}_{\gamma_P}(G)$, respectively. For every graph $G$, $\gamma_P(G)\le\operatorname{th}_{\gamma_P}(G)\le\operatorname{th}(G)$. Moreover, in general, $\operatorname{th}_{\gamma_P}(G)$ and $\operatorname{th}_+(G)$ are not comparable; for example, $\operatorname{th}_{\gamma_P}(K_7)<\operatorname{th}_+(K_7)$, while $\operatorname{th}_{\gamma_P}(G)>\operatorname{th}_+(G)$ for $G=(\{1,2,3,4,5,6,7\},\{\{1,2\},\{1,3\},\{1,4\},\{2,5\},\{2,6\},\{3,7\}\})$. \section{Complexity Results} A number of NP-Completeness results have been presented for power domination, zero forcing, and positive semidefinite zero forcing. For example, power domination was shown to be NP-Complete for general graphs \cite{powerdom3}, planar graphs \cite{guo}, chordal graphs \cite{powerdom3}, bipartite graphs \cite{powerdom3}, split graphs \cite{guo,liao2}, and circle graphs \cite{guo}; zero forcing was shown to be NP-Complete for general graphs \cite{aazami2,fallat} and planar graphs \cite{aazami2}; PSD zero forcing was shown to be NP-Complete for general graphs \cite{yang} and line graphs \cite{wang}. However, despite recent interest in the corresponding throttling problems, to our knowledge there are no complexity results for any of those problems.
In this section, we provide sufficient conditions which ensure that given an NP-Complete vertex minimization problem, the corresponding throttling problem is also NP-Complete. To facilitate the upcoming discussion, we recall three categories of graph parameters introduced by Lov\'{a}sz \cite{lovasz}. Let $\phi$ be a graph parameter and $G_1$ and $G_2$ be two graphs on which $\phi$ is defined. Then, $\phi$ is called {\em maxing} if $\phi(G_1 \dot\cup G_2) = \max\{\phi(G_1), \phi(G_2)\}$, {\em additive} if $\phi(G_1 \dot\cup G_2) = \phi(G_1) + \phi(G_2)$, and {\em multiplicative} if $\phi(G_1 \dot\cup G_2) = \phi(G_1)\phi(G_2)$. For example, $\gamma_P(G)$ is an additive parameter, $\operatorname{ppt}(G)$ is a maxing parameter, and the number of distinct power dominating sets admitted by $G$ is a multiplicative parameter. We will show that with only minor additional assumptions, a minimization problem defined as the sum of a maxing parameter and an additive parameter inherits the NP-Completeness of the additive parameter for any family of graphs. \begin{definition} \label{def1} Given a graph $G=(V,E)$, let $X(G)$ be a set of subsets of $V$ and let $p(G;\,\cdot\,)$ be a function which maps a member of $X(G)$ to a nonnegative integer. Define the parameters $x(G):=\min_{S \in X(G)} |S|$ and $p(G):=\min_{\substack{S \in X(G)\\ |S| = x(G)}} p(G;S)$, and define $\arg p(G):=\arg\min_{\substack{S \in X(G)\\ |S| = x(G)}} p(G;S)$. \end{definition} \noindent Note that the function $p$ and the parameter $p$ are differentiated by their inputs. Table \ref{table_exp} shows the power domination notation corresponding to the abstract notation of Definition~\ref{def1}. 
\begin{table}[H] \renewcommand{\arraystretch}{1.5} \centering \begin{tabularx}{\textwidth}{|X|X|} \hline \textbf{Abstract notation} &\textbf{Power domination notation}\\ \hline $X(G)$ & Set of power dominating sets of $G$\\ $x(G)$ & $\gamma_P(G)$\\ $p(G;S)$ &$\operatorname{ppt}(G;S)$\\ $p(G)$ &$\operatorname{ppt}(G)$\\ $\min_{S\in X(G)} \{|S|+p(G;S)\}$ &$\operatorname{th}_{\gamma_P}(G)$\\ \hline \end{tabularx} \caption{Notation for abstract problems and corresponding notation for power domination.} \label{table_exp} \end{table} \noindent Table \ref{fig:probs} gives a pair of abstract decision problems that can be defined for $X$, $x$, and $p$, as well as three instances which have been studied in the literature. \begin{table}[H] \newcolumntype{S}{>{\hsize=.9\hsize}X} \newcolumntype{D}{>{\hsize=1.1\hsize}X} \centering \renewcommand{\arraystretch}{1.5} \begin{tabularx}{\textwidth}{|S|D|} \hline \textbf{Set minimization problem} & \textbf{Throttling problem} \\ \hline \textsc{Minimum $X$ Set} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $x(G)<k$? & \textsc{$(X,p)$-Throttling} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $\min_{S\in X(G)}\{|S|+p(G;S)\}<k$? \\ \hline \hline \textsc{Power Domination} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $\gamma_P(G)<k$? & \textsc{Power Domination Throttling} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $\operatorname{th}_{\gamma_P}(G)<k$? \\ \hline \textsc{Zero Forcing} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $Z(G)<k$? & \textsc{Zero Forcing Throttling} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $\operatorname{th}(G)<k$? \\ \hline \textsc{PSD Zero Forcing} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $Z_+(G)<k$?
& \textsc{PSD Zero Forcing Throttling} \newline \textbf{Instance:} Graph $G$, integer $k$ \newline \textbf{Question:} Is $\operatorname{th}_{+}(G)<k$? \\ \hline \end{tabularx} \caption{NP-Complete set minimization problems and corresponding throttling problems.} \label{fig:probs} \end{table} \noindent We now give sufficient conditions to relate the complexity of these problems. \begin{theorem} \label{thm:comp} Let $X$ and $p$ (as in Definition \ref{def1}) satisfy the following: \begin{enumerate} \item[1)] For any graph $G$, there exist constants $b$, $c$ such that for any set $S \in X(G)$, $p(G;S) < b = O(|V(G)|^{c})$, and $p(G;S)$ and $b$ can be computed in $O(|V(G)|^{c})$ time. \item[2)] For any graphs $G_1$ and $G_2$, $X(G_1 \dot\cup G_2) = \{S_1 \dot\cup S_2 : S_1 \in X(G_1), S_2 \in X(G_2)\}$. \item[3)] For any graphs $G_1$ and $G_2$, and for any $S_1 \in X(G_1)$ and $S_2 \in X(G_2)$, $p(G_1 \dot\cup G_2; S_1 \dot\cup S_2) = \max\{ p(G_1;S_1), p(G_2;S_2) \}$. \item[4)] \textsc{Minimum $X$ Set} is NP-Complete. \end{enumerate} Then, \textsc{$(X,p)$-Throttling} is NP-Complete. \end{theorem} \begin{proof} We will first show that $x$ is an additive parameter and $p$ is a maxing parameter. Let $G_1$ and $G_2$ be graphs. By 2), \begin{eqnarray*} x(G_1 \dot\cup G_2)&=& \min \{|S'|:S' \in X(G_1 \dot\cup G_2)\}\\ &=&\min \{|S'|:S' \in \{S_1 \dot\cup S_2 : S_1 \in X(G_1), S_2 \in X(G_2)\}\}\\ &=&\min \{|S_1|+|S_2|:S_1 \in X(G_1), S_2\in X(G_2)\}\\ &=&\min\{|S_1|:S_1\in X(G_1)\}+\min\{|S_2|:S_2\in X(G_2)\}= x(G_1)+x(G_2). \end{eqnarray*} Thus, $x$ is additive by definition. Now let $S^*$ be a set in $\arg p(G_1 \dot\cup G_2)$. By 2), there exist sets $S_1\in X(G_1)$ and $S_2\in X(G_2)$ such that $S^*=S_1\dot\cup S_2$. By definition, $|S_1|\geq x(G_1)$ and $|S_2|\geq x(G_2)$, and since $x$ is additive, $|S^*|=x(G_1\dot\cup G_2)=x(G_1)+x(G_2)$. Thus, $|S_1|=x(G_1)$ and $|S_2|=x(G_2)$. 
Then, \begin{eqnarray*} p(G_1\dot\cup G_2)&=&p(G_1 \dot\cup G_2;S^*) = p(G_1 \dot\cup G_2; S_1 \dot\cup S_2) = \max \{p(G_1;S_1), p(G_2;S_2)\} \\ &\geq& \max \left\{\min_{\substack{S\in X(G_1) \\ |S| = x(G_1)}}p(G_1;S), \min_{\substack{S\in X(G_2) \\ |S| = x(G_2)}}p(G_2;S)\right\} =\max \{p(G_1), p(G_2) \}, \end{eqnarray*} where the third equality follows from 3), and the inequality follows from the fact that $|S_1|=x(G_1)$ and $|S_2|=x(G_2)$. Now, let $S_1^*\in \arg p(G_1)$ and $S_2^*\in\arg p(G_2)$. Then, \begin{eqnarray*} p(G_1 \dot\cup G_2) &=& \min_{\substack{S' \in X(G_1 \dot\cup G_2) \\ |S'| = x(G_1 \dot\cup G_2)}} p(G_1 \dot\cup G_2;S')\leq p(G_1\dot\cup G_2;S_1^* \dot\cup S_2^*)\\ &=&\max \{p(G_1;S_1^*), p(G_2;S_2^*)\} = \max \{p(G_1), p(G_2) \}, \end{eqnarray*} where the inequality follows from 2) and the fact that $x$ is additive, and the second equality follows from 3). Thus, $p(G_1 \dot\cup G_2) = \max \{p(G_1), p(G_2)\}$, so $p$ is maxing by definition.\newline Next we will show that \textsc{$(X,p)$-Throttling} is in NP. By 1), for any $S \in X(G)$, $p(G; S)$ can be computed in polynomial time. By 4), \textsc{Minimum $X$ Set} is in NP, so there exists a polynomial time algorithm to verify that $S$ is in $X(G)$. Thus, for any $S \subset V(G)$, $|S|+p(G;S)$ can be computed or found to be undefined in polynomial time. Therefore, \textsc{$(X,p)$-Throttling} is in NP. We will now show that \textsc{$(X,p)$-Throttling} is NP-Hard, by providing a polynomial reduction from \textsc{Minimum $X$ Set}. Let $\langle G,k\rangle$ be an instance of \textsc{Minimum $X$ Set}. Let $B = b + 1$, where $b$ is the bound on $p(G;S)$ in 1). Let $G_1,\ldots,G_B$ be disjoint copies of $G$, and let $G' = \dot\cup_{i=1}^B G_i$. We will show $\langle G,k\rangle$ is a `yes'-instance of \textsc{Minimum $X$ Set} if and only if $\langle G',Bk + b \rangle$ is a `yes'-instance of \textsc{$(X,p)$-Throttling}. 
Note that by 1), $\langle G',Bk + b \rangle$ can be constructed in a number of steps that is polynomial in $n$. Since $x$ is an additive parameter, $x(G')=x(\dot\cup_{i=1}^B G_i)=\sum_{i=1}^B x(G_i)=Bx(G)$. Thus, \begin{eqnarray*} \min_{S' \in X(G')} \{|S'| + p(G';S')\} &\leq& \min_{\substack{S' \in X(G') \\ |S'| = x(G')}} \{|S'| + p(G';S')\}\\ &=& \min_{\substack{S' \in X(G') \\ |S'| = x(G')}} \{Bx(G) + p(G';S')\}\\ &=& Bx(G) + p(G') = Bx(G) + p(G), \end{eqnarray*} where the last equality follows from the fact that $p$ is maxing, and $p(G') = p(\dot\cup_{i=1}^B G_i) = \max\{p(G_1),\ldots,p(G_B)\} = p(G)$. Now consider any $S' \in X(G')$. Clearly $|S'| \geq x(G') = Bx(G)$. Suppose first that $|S'| \geq B(x(G)+1)$; then, \[ |S'| + p(G';S') \geq B(x(G)+1) + p(G';S') \geq Bx(G) + B > Bx(G) + p(G).\] \noindent Now suppose that $|S'| < B(x(G)+1)$. Since $S' \in X(G') = \{\dot\cup_{i=1}^B S_i : S_i \in X(G_i) \}$, $|S' \cap V(G_i)| \geq x(G)$ for all $i\in\{1,\ldots,B\}$. By the pigeonhole principle, $|S' \cap V(G_j)|=|S_j| = x(G)$ for some $j\in\{1,\ldots,B\}$. By 3), $$ p(G';S') = \max \{p(G_j; S_j), p(G'-G_j; S' \backslash S_j) \} \geq p(G_j; S_j) \geq p(G). $$ Thus in all cases, $|S'| + p(G';S') \geq Bx(G) + p(G)$. Hence, it follows that \begin{equation} \label{eq_comp} \min_{S' \in X(G')}\{|S'| + p(G';S')\} = Bx(G) + p(G). \end{equation} We will now show that $x(G) < k$ if and only if $\min_{S' \in X(G')}\{|S'| + p(G';S')\} < Bk + b$. First, suppose that $x(G) < k$. Then by \eqref{eq_comp}, $\min_{S' \in X(G')}\{|S'| + p(G';S')\} = Bx(G) + p(G) < Bk + b$. Now suppose that $\min_{S' \in X(G')}\{|S'| + p(G';S')\} < Bk + b$. Then, by \eqref{eq_comp}, $Bx(G) + p(G) < Bk + b$. 
Rearranging, dividing by $B$, and taking the floor yields \[x(G)=\lfloor x(G)\rfloor< \left\lfloor k+\frac{b - p(G)}{B} \right\rfloor=k+\left\lfloor\frac{B-1 - p(G)}{B} \right\rfloor = k,\] where the last equality holds since $0\le p(G)<b=B-1$, so that $0<\frac{B-1-p(G)}{B}<1$. Thus, $\langle G,k\rangle$ is a `yes'-instance of \textsc{Minimum $X$ Set} if and only if $\langle G',Bk + b \rangle$ is a `yes'-instance of \textsc{$(X,p)$-Throttling}. \end{proof} \noindent We now show that Theorem \ref{thm:comp} can be applied to the specific throttling problems posed for power domination, zero forcing, and positive semidefinite zero forcing. \begin{corollary} \label{cor_comp} \textsc{Power Domination Throttling}, \textsc{Zero Forcing Throttling}, and \textsc{PSD Zero Forcing Throttling} are NP-Complete. \end{corollary} \begin{proof} Given a graph $G$, let $X(G)$ denote the set of power dominating sets of $G$ and for $S\in X(G)$, let $p(G;S)$ denote the power propagation time of $G$ using $S$. Clearly, for any power dominating set $S$, $\operatorname{ppt}(G;S)$ is bounded above by $|V(G)|$, and can be computed in polynomial time. Thus, assumption 1) of Theorem \ref{thm:comp} is satisfied. For any graphs $G_1$ and $G_2$, it is easy to see that $S$ is a power dominating set of $G_1 \dot\cup G_2$ if and only if $S \cap V(G_1)$ is a power dominating set of $G_1$ and $S \cap V(G_2)$ is a power dominating set of $G_2$. Thus, assumption 2) of Theorem \ref{thm:comp} is satisfied. Let $G_1$ and $G_2$ be graphs, and let $S_1$ be a power dominating set of $G_1$ and $S_2$ be a power dominating set of $G_2$. Then, the same vertices which are dominated in $G_1$ by $S_1$ and in $G_2$ by $S_2$ can be dominated in $G_1\dot\cup G_2$ by $S_1\dot\cup S_2$, and all forces that occur in timestep $i\geq 2$ in $G_1$ and $G_2$ will occur in $G_1\dot\cup G_2$ at the same timestep. Thus, $\operatorname{ppt}(G_1 \dot\cup G_2; S_1 \dot\cup S_2) = \max\{ \operatorname{ppt}(G_1;S_1), \operatorname{ppt}(G_2;S_2) \}$, so assumption 3) of Theorem \ref{thm:comp} is satisfied.
Finally, since \textsc{Power Domination} is NP-Complete (cf. \cite{powerdom3}), assumption 4) of Theorem \ref{thm:comp} is satisfied. Thus, \textsc{Power Domination Throttling} is NP-Complete. By a similar reasoning, it can be shown that the assumptions of Theorem \ref{thm:comp} also hold for zero forcing and positive semidefinite zero forcing; thus, \textsc{Zero Forcing Throttling} and \textsc{PSD Zero Forcing Throttling} are also NP-Complete. \end{proof} Some graph properties are preserved under disjoint unions; we will call a graph property $P$ {\em additive} if for any two graphs $G_1$, $G_2$ with property $P$, $G_1 \dot\cup G_2$ also has property $P$. Let $\left<G, k\right>$ be an instance of \textsc{Minimum $X$ Set} in the special case that $G$ has property $P$. In the proof of Theorem \ref{thm:comp}, a polynomial reduction from $\left<G, k\right>$ to an instance $\left<G', Bk + b\right>$ of \textsc{$(X, p)$-Throttling} is given, where $G'$ is the disjoint union of copies of $G$. If property $P$ is additive, then $G'$ also has property $P$. Thus, special cases of \textsc{$(X, p)$-Throttling} in graphs with property $P$ reduce from instances of \textsc{Minimum $X$ Set} with property $P$, by the proof of Theorem \ref{thm:comp}. It is easy to see that planarity, chordality, and bipartiteness are additive properties. As noted at the beginning of this section, \textsc{Power Domination} is NP-Complete for graphs with these properties. Thus, these NP-Completeness results can be extended to the corresponding throttling problem. \begin{corollary} \textsc{Power Domination Throttling} is NP-Complete even for planar graphs, chordal graphs, and bipartite graphs. \end{corollary} \section{Bounds and exact results for $\operatorname{th}_{\gamma_P}(G)$} In this section, we derive several tight bounds and exact results for the power domination throttling number of a graph. 
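The exhaustive search underlying such computations is simple to describe. The following Python sketch computes $\operatorname{th}_{\gamma_P}(G)$ by minimizing $|S|+\operatorname{ppt}(G;S)$ over all nonempty $S\subseteq V(G)$; the function names and the timestep convention (the initial set colored at timestep $0$, the domination step at timestep $1$, each parallel forcing round adding one timestep) are our own illustrative choices, not taken from any particular implementation:

```python
from itertools import combinations

def ppt(adj, S):
    """Power propagation time of S on a graph given as {vertex: set of neighbors}.

    Convention: S is colored at timestep 0, the domination step (coloring N[S])
    is timestep 1, and each parallel zero-forcing round adds one timestep.
    Returns None if S is not a power dominating set.
    """
    n = len(adj)
    colored = set(S)
    t = 0
    if len(colored) < n:
        for v in S:
            colored |= adj[v]      # domination step: color N[S]
        t = 1
    while len(colored) < n:
        # a colored vertex with exactly one non-colored neighbor forces it;
        # all such forces are performed simultaneously
        forced = {next(iter(adj[v] - colored))
                  for v in colored if len(adj[v] - colored) == 1}
        if not forced:
            return None            # the process stalls before coloring V(G)
        colored |= forced
        t += 1
    return t

def power_throttling(adj):
    """Exhaustive computation of th_{gamma_P}(G) = min_S (|S| + ppt(G;S))."""
    V = list(adj)
    best = None
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            p = ppt(adj, S)
            if p is not None and (best is None or k + p < best):
                best = k + p
    return best

def path(n):
    """Adjacency structure of the path P_n on vertices 0, ..., n-1."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
```

On small paths and cycles, the values returned by `power_throttling` agree with the closed form $\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$ of Proposition \ref{pth_path}.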
We have also implemented a brute force algorithm for computing the power domination throttling number of arbitrary graphs (cf. \url{https://github.com/rsp7/Power-Domination-Throttling}), and used it to compute the power domination throttling numbers of all graphs on fewer than 10 vertices. Recall the following well-known bound on the power propagation time. \begin{lemma}[\cite{ferrero,proptime1}]\label{pth_bound_lemma} Let $G$ be a graph and $S$ be a power dominating set of $G$. Then $\operatorname{ppt}(G;S)\geq \frac{1}{\Delta}\left(\frac{n}{|S|}-1\right)$. \end{lemma} \begin{theorem}\label{pth_bound} Let $G$ be a nonempty graph. Then, $\operatorname{th}_{\gamma_P}(G)\ge\big\lceil2\sqrt{\frac{n}{\Delta}}-\frac{1}{\Delta}\big\rceil$, and this bound is tight. \end{theorem} \proof Since $G$ is nonempty, we have $\Delta>0$. Let $\mathcal{P}(G)$ denote the set of all power dominating sets of $G$. By Lemma \ref{pth_bound_lemma}, \[\operatorname{th}_{\gamma_P}(G)=\min_{S\in \mathcal{P}(G)}\{|S|+\operatorname{ppt}(G;S)\}\ge\min_{S\in \mathcal{P}(G)}\left\{|S|+\frac{1}{\Delta}\left(\frac{n}{|S|}-1\right)\right\}\ge\min_{s>0}\left\{s+\frac{1}{\Delta}\left(\frac{n}{s}-1\right)\right\}.\] To compute the last minimum, let us minimize $t(s)\vcentcolon=s+\frac{1}{\Delta}(\frac{n}{s}-1),s>0$. Since $t'(s)=1-\frac{n}{\Delta s^2}$, $s=\sqrt{\frac{n}{\Delta}}$ is the only critical point of $t(s)$. Since $t''(s)=\frac{2n}{\Delta s^3}>0$ for $s>0$, we have that $t(\sqrt{\frac{n}{\Delta}})=\sqrt{\frac{n}{\Delta}}+\frac{1}{\Delta}(n/\sqrt{\frac{n}{\Delta}}-1)=2\sqrt{\frac{n}{\Delta}}-\frac{1}{\Delta}$ is the global minimum of $t(s)$. Thus, $\operatorname{th}_{\gamma_P}(G)=\lceil\operatorname{th}_{\gamma_P}(G)\rceil\geq \left\lceil 2\sqrt{\frac{n}{\Delta}}-\frac{1}{\Delta}\right\rceil$. The bound is tight, e.g., for paths and cycles; see Proposition \ref{pth_path}. 
\qed \begin{theorem}[\cite{carlson2}]\label{thm_zth_path_cycle} $\operatorname{th}_{+}(P_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$ for $n\ge1$ and $\operatorname{th}_{+}(C_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$ for $n\ge4$. \end{theorem} \begin{proposition}\label{pth_path} $\operatorname{th}_{\gamma_P}(P_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$ for $n\geq 1$ and $\operatorname{th}_{\gamma_P}(C_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$ for $n\geq 3$. \end{proposition} \proof Let $S$ be an arbitrary nonempty subset of $V(P_n)$. If any vertex in $S$ has two neighbors which are not in $S$, then both of these neighbors are in different components of $P_n-S$. Moreover, each vertex in $N[S]$ has at most one neighbor which is not in $N[S]$. Thus, the PSD zero forcing color change rules and the power domination color change rules both dictate that at each timestep, the non-colored neighbors of every colored vertex of $P_n$ will be colored. Hence, since any nonempty subset $S$ of $V(P_n)$ is both a power dominating set and a PSD zero forcing set, $\operatorname{ppt}(P_n;S)=\operatorname{pt}_+(P_n;S)$. Thus, $\operatorname{th}_{\gamma_P}(P_n)=\min\{|S|+\operatorname{ppt}(P_n;S)\colon S\subset V(P_n),|S|\ge1\}=\min\{|S|+\operatorname{pt}_+(P_n;S)\colon S\subset V(P_n),|S|\ge1\}=\operatorname{th}_+(P_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$, where the last equality follows from Theorem \ref{thm_zth_path_cycle}. Clearly $\operatorname{th}_{\gamma_P}(C_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$ for $n=3$, so suppose that $n\ge4$. By similar reasoning, and since any set $S\subset V(C_n)$ of size at least 2 is both a power dominating set and a PSD zero forcing set, it follows that $\operatorname{ppt}(C_n;S)=\operatorname{pt}_+(C_n;S)$.
If $\{v\}\subset V(C_n)$ is a power throttling set of $C_n$ and $u$ is a vertex of $C_n$ at maximum distance from $v$, then $\{u,v\}$ is also a power throttling set, since $\operatorname{ppt}(C_n;\{u,v\})\le\operatorname{ppt}(C_n;\{v\})-1$ for $n\ge4$. Thus, $\operatorname{th}_{\gamma_P}(C_n)=\min\{|S|+\operatorname{ppt}(C_n;S)\colon S\subset V(C_n),|S|\ge1\}=\min\{|S|+\operatorname{ppt}(C_n;S)\colon S\subset V(C_n),|S|\ge2\}=\min\{|S|+\operatorname{pt}_+(C_n;S)\colon S\subset V(C_n),|S|\ge2\}=\operatorname{th}_+(C_n)=\big\lceil\sqrt{2n}-\frac{1}{2}\big\rceil$, where the last equality follows from Theorem \ref{thm_zth_path_cycle}. \qed \begin{proposition}\label{pth_disjoint_union} Let $G_1,G_2$ be graphs and $G=G_1\dot\cup G_2$. Then, \begin{eqnarray*} \operatorname{th}_{\gamma_P}(G)&\geq& \max\{\gamma_P(G_1)+\operatorname{th}_{\gamma_P}(G_2),\gamma_P(G_2)+\operatorname{th}_{\gamma_P}(G_1)\},\\ \operatorname{th}_{\gamma_P}(G)&\le&\gamma_P(G_1)+\gamma_P(G_2)+\max\{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\}, \end{eqnarray*} and these bounds are tight. \end{proposition} \proof We first establish the lower bound. Suppose for contradiction that $\operatorname{th}_{\gamma_P}(G)<\gamma_P(G_1)+\operatorname{th}_{\gamma_P}(G_2)$, and let $S$ be a power throttling set of $G$. Thus, $|S|+\operatorname{ppt}(G;S)<\gamma_P(G_1)+\operatorname{th}_{\gamma_P}(G_2)$. Note that $|S\cap V(G_2)|\le|S|-\gamma_P(G_1)$, since $S\cap V(G_1)$ must be a power dominating set of $G_1$. Moreover, $\operatorname{ppt}(G_2;S\cap V(G_2))\le\operatorname{ppt}(G;S)$. Thus, \begin{eqnarray*} \operatorname{th}_{\gamma_P}(G_2)&\le&|S\cap V(G_2)|+\operatorname{ppt}(G_2;S\cap V(G_2))\\ &\le&|S|-\gamma_P(G_1)+\operatorname{ppt}(G;S)\\ &<&\operatorname{th}_{\gamma_P}(G_2), \end{eqnarray*} a contradiction. Thus, $\operatorname{th}_{\gamma_P}(G)\geq \gamma_P(G_1)+\operatorname{th}_{\gamma_P}(G_2)$. Similarly, $\operatorname{th}_{\gamma_P}(G)\geq \gamma_P(G_2)+\operatorname{th}_{\gamma_P}(G_1)$. 
We now establish the upper bound. Let $S_1\subset V(G_1)$ and $S_2\subset V(G_2)$ be minimum power dominating sets such that $\operatorname{ppt}(G_1;S_1)=\operatorname{ppt}(G_1)$ and $\operatorname{ppt}(G_2;S_2)=\operatorname{ppt}(G_2)$. Let $S=S_1\cup S_2$. Then $\operatorname{th}_{\gamma_P}(G)\le|S|+\operatorname{ppt}(G;S)=|S_1|+|S_2|+\max\{\operatorname{ppt}(G_1;S_1),\operatorname{ppt}(G_2;S_2)\}=\gamma_P(G_1)+\gamma_P(G_2)+\max\{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\}$. Both bounds are tight, e.g., when $G$ is the disjoint union of two stars.\qed \begin{theorem} \label{thm:th-ub} Let $G_1$ and $G_2$ be graphs such that $G_1 \cap G_2\simeq K_k$. Then \[\max\{\operatorname{th}_{\gamma_P}(G_1),\operatorname{th}_{\gamma_P}(G_2)\}\leq \operatorname{th}_{\gamma_P}(G_1 \cup G_2) \leq \gamma_P(G_1) + \gamma_P(G_2) + k + \max\{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\},\] and these bounds are tight. \end{theorem} \proof Let $K=V(G_1 \cap G_2)$. We will first establish the upper bound. Let $S_1\subset V(G_1)$ and $S_2\subset V(G_2)$ be minimum power dominating sets such that $\operatorname{ppt}(G_1;S_1)=\operatorname{ppt}(G_1)$ and $\operatorname{ppt}(G_2;S_2)=\operatorname{ppt}(G_2)$. Let $S = S_1 \cup S_2 \cup K$. Then $S$ is a power dominating set of $G_1\cup G_2$, since all vertices which are dominated in $G_1$ by $S_1$ and in $G_2$ by $S_2$ are dominated in $G_1\cup G_2$ by $S_1\cup S_2$, and all forces which occur in $G_1$ and in $G_2$ can also occur in $G_1\cup G_2$ (or are not necessary); this is because $N[K]$ is colored after the domination step, and the non-colored neighbors of any vertex $v \in V(G_1\cup G_2)$ at any forcing step are a subset of the non-colored neighbors of $v$ at the same timestep in $G_1$ or $G_2$. For the same reason, a force which occurs in timestep $i\geq 2$ in $G_1$ or $G_2$ occurs in a timestep $j\leq i$ in $G_1\cup G_2$ (or is not necessary).
Therefore, $\operatorname{ppt}(G_1 \cup G_2; S) \leq \max \{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\}$, and $|S|\leq \gamma_P(G_1)+\gamma_P(G_2)+k$. Thus, $\operatorname{th}_{\gamma_P}(G_1 \cup G_2) \leq |S|+\operatorname{ppt}(G_1\cup G_2;S)\leq \gamma_P(G_1) + \gamma_P(G_2) + k + \max \{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\}$. We will now establish the lower bound. Let $S$ be a power throttling set of $G_1 \cup G_2$ and let $w$ be any vertex in $K$. We will consider four cases. \emph{Case 1:} $S\cap K\neq \emptyset$. In this case, let $S_1=S\cap V(G_1)$ and $S_2=S\cap V(G_2)$. \emph{Case 2:} $S\cap K = \emptyset$ but $S\cap V(G_1)\neq \emptyset$ and $S\cap V(G_2)\neq \emptyset$. In this case, let $S_1=(S\cap V(G_1))\cup \{w\}$ and $S_2=(S\cap V(G_2))\cup\{w\}$. \emph{Case 3:} $S\subset V(G_1)\backslash V(G_2)$. In this case, let $S_1=S$ and $S_2=\{w\}$. \emph{Case 4:} $S\subset V(G_2)\backslash V(G_1)$. In this case, let $S_1=\{w\}$ and $S_2=S$. \noindent Note that in all cases, $S_1 \subset V(G_1)$, $S_2 \subset V(G_2)$, $|S_1|\leq |S|$, and $|S_2|\leq |S|$. In Cases 1 and 2, $K$ is dominated by $S_1$ in $G_1$ and by $S_2$ in $G_2$. Subsequently, at any forcing step, the non-colored neighbors of any vertex $v$ in $G_1$ or $G_2$ are a subset of the non-colored neighbors of $v$ at the same timestep in $G_1\cup G_2$. Thus, $S_1$ is a power dominating set of $G_1$ and $S_2$ is a power dominating set of $G_2$. Moreover, a force which occurs in timestep $i\geq 2$ in $G_1\cup G_2$ occurs in a timestep $j\leq i$ in $G_1$ or $G_2$. Therefore, $\operatorname{ppt}(G_1; S_1) \leq \operatorname{ppt}(G_1 \cup G_2; S)$, and $\operatorname{ppt}(G_2; S_2) \leq \operatorname{ppt}(G_1 \cup G_2; S)$. In Case 3, since no vertex of $K$ is in $S$, no vertex of $K$ colors another vertex of $G_1\cup G_2$ in the domination step. 
Thus, in $G_1\cup G_2$, no vertex in $V(G_2)\backslash K$ can force a vertex of $K$, since this would mean a vertex in $K$ forced some vertex in $V(G_2)\backslash K$ in a previous timestep, which would require all vertices of $K$ to already be colored. Moreover, in $G_1\cup G_2$, all vertices in $V(G_2)\backslash K$ can be forced after the vertices in $K$ get colored. Thus, $S_1$ is a power dominating set of $G_1$ and $S_2$ is a power dominating set of $G_2$. Furthermore, since $S_1$ and $S_2$ can color $G_1$ and $G_2$ using a subset of the forces that are used by $S$ to color $G_1\cup G_2$, it follows that $\operatorname{ppt}(G_1; S_1) \leq \operatorname{ppt}(G_1 \cup G_2; S)$ and $\operatorname{ppt}(G_2; S_2) \leq \operatorname{ppt}(G_1 \cup G_2; S)$. Case 4 is symmetric to Case 3. Thus, in all cases, $\operatorname{th}_{\gamma_P}(G_1)\leq |S_1| + \operatorname{ppt}(G_1; S_1) \leq |S| + \operatorname{ppt}(G_1 \cup G_2; S)=\operatorname{th}_{\gamma_P}(G_1\cup G_2)$ and $\operatorname{th}_{\gamma_P}(G_2)\leq |S_2| + \operatorname{ppt}(G_2; S_2) \leq |S| + \operatorname{ppt}(G_1 \cup G_2; S)=\operatorname{th}_{\gamma_P}(G_1\cup G_2)$, so $\max\{\operatorname{th}_{\gamma_P}(G_1),\operatorname{th}_{\gamma_P}(G_2)\}\leq \operatorname{th}_{\gamma_P}(G_1 \cup G_2)$. To see that the upper bound is tight, let $K$ be a complete graph with vertex set $\{v_1,\ldots,v_k\}$, let $G_1$ be the graph obtained by appending two leaves, $u_i$ and $w_i$, to each vertex $v_i$ of $K$, $1\leq i\leq k$, and then appending three paths of length 1 to each $w_i$, $1\leq i\leq k$. Let $G_2$ be a copy of $G_1$ labeled so that $G_1\cap G_2=K$ and the vertex in $G_2$ corresponding to $w_i$ in $G_1$ is $w_i'$, $1\leq i\leq k$; see Figure \ref{fig:th-upperbounds} for an illustration. Let $S = \{w_1,\ldots,w_k\}$. Since every minimum power dominating set of $G_1$ must contain $S$, and $S$ is itself a power dominating set of $G_1$, $\gamma_P(G_1) = \gamma_P(G_2) = |S| = k$. 
Furthermore, $\max\{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\}=2$, so $\gamma_P(G_1) + \gamma_P(G_2) + k + \max\{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\} = 3k + 2$. In $G_1 \cup G_2$, for $1\leq i\leq k$, $v_i$ has two leaves appended to it; thus, either $v_i$ or one of these two leaves must be contained in any power dominating set of $G_1 \cup G_2$. Likewise, since each vertex $w_i$ has three paths appended to it, either $w_i$ or at least one vertex in those paths must be contained in any power dominating set. Similarly, either $w_i'$ or at least one vertex in the paths appended to $w_i'$ must be contained in any power dominating set. Thus, $\gamma_P(G_1 \cup G_2) \geq 3k$. If $\operatorname{th}_{\gamma_P}(G_1 \cup G_2) \leq 3k+1$, then there must exist a power dominating set $S'$ such that $\operatorname{ppt}(G_1 \cup G_2;S') = 1$, and $|S'|=3k$. However, if $\operatorname{ppt}(G_1 \cup G_2;S') = 1$, then $S'$ must be a dominating set, and it is easy to see that $G_1\cup G_2$ does not have a dominating set of size $3k$. Therefore $\operatorname{th}_{\gamma_P}(G_1 \cup G_2) = 3k+2=\gamma_P(G_1) + \gamma_P(G_2) + k + \max\{\operatorname{ppt}(G_1),\operatorname{ppt}(G_2)\}$. \begin{figure} \caption{Graphs $G_1$ and $G_1 \cup G_2$ for which the upper bound in Theorem \ref{thm:th-ub} is tight.} \label{fig:th-upperbounds} \end{figure} To see that the lower bound is tight, let $K$ be a complete graph on $k$ vertices, let $G_1$ be the graph obtained by appending three leaves to each vertex of $K$, and let $G_2$ be a copy of $G_1$ labeled so that $G_1\cap G_2=K$. Then, $V(K)$ is a power throttling set of $G_1$, $G_2$ and $G_1 \cup G_2$, since $V(K)$ is a minimum power dominating set in all three graphs, and the power propagation time in all three graphs using $V(K)$ is 1. Thus, $\operatorname{th}_{\gamma_P}(G_1 \cup G_2)=k+1=\max\{\operatorname{th}_{\gamma_P}(G_1),\operatorname{th}_{\gamma_P}(G_2)\}$.
\qed \noindent We conclude this section by deriving tight bounds on the power domination throttling numbers of trees; some ideas in the following results are adapted from \cite{carlson2}. \begin{lemma}\label{lemma_no_leaves} Let $G$ be a connected graph on at least $3$ vertices. Then there exists a power throttling set of $G$ that contains no leaves. \end{lemma} \proof Let $S'$ be a power throttling set of $G$, and suppose that $v\in S'$ is a leaf with neighbor $u$ (which cannot be a leaf since $G$ is connected and $n(G)\geq 3$). If $u\in S'$, then $S\vcentcolon=S'\setminus\{v\}$ is also a power throttling set of $G$, since $|S|=|S'|-1$ and $\operatorname{ppt}(G;S)\le\operatorname{ppt}(G;S')+1$. Otherwise, if $u\notin S'$, then let $S=(S'\setminus\{v\})\cup\{u\}$. Note that $N[S']\subset N[S]$, and so $\operatorname{pt}(G;N[S])\le\operatorname{pt}(G;N[S'])$. Since $\operatorname{ppt}(G;S),\operatorname{ppt}(G;S')\ge1$, this implies that $\operatorname{ppt}(G;S)\le\operatorname{ppt}(G;S')$. Since $|S|=|S'|$, $S$ must also achieve throttling. This process of replacing leaves with non-leaf vertices in power throttling sets of $G$ can be repeated until a power throttling set is obtained which has no leaves. \qed \begin{proposition}\label{prop_subtree_monotone} If $T$ is a tree with subtree $T'$, then $\operatorname{th}_{\gamma_P}(T')\le\operatorname{th}_{\gamma_P}(T)$. That is, power domination throttling is subtree monotone for trees. \end{proposition} \proof Clearly the claim is true for trees with at most $2$ vertices, so suppose that $T$ is a tree with at least $3$ vertices. By Lemma \ref{lemma_no_leaves}, $T$ has a power throttling set $S$ which does not contain leaves. Let $v$ be a leaf of $T$; then, $S\subset V(T-v)$, so $\operatorname{ppt}(T-v;S)\le\operatorname{ppt}(T;S)$. Thus, $\operatorname{th}_{\gamma_P}(T-v)\leq |S|+\operatorname{ppt}(T-v;S)\leq |S|+\operatorname{ppt}(T;S)=\operatorname{th}_{\gamma_P}(T)$. 
Since any subtree $T'$ of $T$ can be attained by repeated removal of leaves, and since each removal of a leaf does not increase the power domination throttling number, it follows that $\operatorname{th}_{\gamma_P}(T')\le\operatorname{th}_{\gamma_P}(T)$. \qed \begin{theorem}\label{cor_tree_diam_bound} Let $T$ be a tree on at least $3$ vertices. Then, \[\big\lceil\sqrt{2(\operatorname{diam}(T)+1)}-1/2\big\rceil\le\operatorname{th}_{\gamma_P}(T)\le \operatorname{diam}(T)-1+\gamma_P(T),\] and these bounds are tight. \end{theorem} \proof Since $T$ has diameter $d\vcentcolon=\operatorname{diam}(T)$ and at least 3 vertices, $T$ contains a path of length $d\geq 2$. Thus $P_{d+1}$ is a subtree of $T$, and $\Delta(P_{d+1})=2$. Then, the lower bound follows from Theorem~\ref{pth_bound} and Proposition \ref{prop_subtree_monotone}. In Theorem 2.5 of \cite{ferrero}, it is shown that for every tree with at least 3 vertices, $\operatorname{ppt}(T)\le d-1$. Let $S^*$ be a power throttling set of $T$ and $S$ be a minimum power dominating set of $T$ such that $\operatorname{ppt}(T;S)=\operatorname{ppt}(T)$. Then, $\operatorname{th}_{\gamma_P}(T)=|S^*|+\operatorname{ppt}(T;S^*)\leq |S|+\operatorname{ppt}(T;S)=\gamma_P(T)+\operatorname{ppt}(T)\leq \gamma_P(T)+d-1$. Both bounds are tight, e.g., for stars: a star has diameter $2$ and power domination number $1$, so both bounds equal $\big\lceil\sqrt{2(2+1)}-1/2\big\rceil=2=2-1+1$. \qed \section{Extremal power domination throttling numbers} In this section, we give a characterization of graphs whose power domination throttling number is at least $n-1$ or at most $t$, for any constant $t$. We begin by showing that graphs with $\operatorname{th}_{\gamma_P}(G)\leq t$ are minors of the graph in the following definition. \begin{definition} \label{def_gsab} Let $a\geq 0$, $b\geq 0$, and $s\geq 1$ be integers and let $\mathcal{G}(s,a,b)$ be the graph obtained from $K_s \dot{\cup} (K_a \square P_b)$ by adding every possible edge between the disjoint copy of $K_s$ and a copy of $K_a$ in $K_a \square P_b$ whose vertices have minimum degree.
If either $a=0$ or $b=0$, then $\mathcal{G}(s,a,b)\simeq K_s$. A \emph{path edge} of $\mathcal{G}(s,a,b)$ is an edge that belongs to one of the copies of $P_b$; a \emph{complete edge} is an edge that belongs to one of the copies of $K_a$, or to $K_s$; a \emph{cross edge} is an edge between $K_s$ and $K_a \square P_b$. The vertices in $K_s$ and $K_a$ that are incident to cross edges are called \emph{$s$-vertices} and \emph{$a$-vertices}, respectively. See Figure \ref{PDgraph} for an illustration. \end{definition} \begin{figure} \caption{The graph $\mathcal{G}(s,a,b)$.} \label{PDgraph} \end{figure} \begin{theorem}\label{pdChar} Let $G$ be a graph and $t$ be a positive integer. Then, $\operatorname{th}_{\gamma_P}(G) \leq t$ if and only if there exist integers $a\geq 0$, $b\geq 0$, and $s\geq 1$ such that $s + b = t$, and $G$ can be obtained from $\mathcal{G}(s, a, b)$ by \begin{enumerate} \item contracting path edges, \item deleting complete edges, and/or \item deleting cross edges so that the remaining cross edges saturate the $a$-vertices. \end{enumerate} Moreover, for a fixed $t$, these conditions can be verified in polynomial time. \end{theorem} \proof Suppose first that $\operatorname{th}_{\gamma_P}(G) \leq t$. Let $S$ be a power throttling set of $G$, and fix some chronological list of forces by which $N[S]$ colors $G$. Let $s = |S|$, let $b' = \operatorname{ppt}(G; S) = \operatorname{th}_{\gamma_P}(G) -s$, and let $b = t-s$; note that $b' \leq b$. Let $A = N[S] \setminus S=\{v_{1,1}, v_{2,1},\ldots,v_{a,1}\}$, where $a=|A|$. Clearly, $a \leq s\Delta(G)$. We will show that $G$ can be obtained from $\mathcal{G}(s,a,b)$ by contracting path edges, deleting complete edges, and/or deleting cross edges so that the remaining cross edges saturate the $a$-vertices. First, note that $\mathcal{G}(s,a,b')$ can be obtained from $\mathcal{G}(s,a,b)$ by contracting path edges. Thus, it suffices to show that $G$ can be obtained from $\mathcal{G}(s,a,b')$ by the above operations.
Label the $s$-vertices of $\mathcal{G}(s,a,b')$ with the elements of $S$, and label the $a$-vertices of $\mathcal{G}(s,a,b')$ with the elements of $\{v_{1,1}^1, v_{2,1}^1,\ldots,v_{a,1}^1\}$. For each $s$-vertex $u$ and $a$-vertex $v_{i,1}^1$, delete the edge $uv_{i,1}^1$ unless $uv_{i,1} \in E(G)$. Note that all edges deleted this way are cross edges, and that after these deletions, the remaining cross edges must saturate the $a$-vertices, since by definition the vertices in $S$ dominate the vertices in $A$. Also, for each pair of $s$-vertices $u_1$, $u_2$, delete the edge $u_1u_2$ unless $u_1u_2\in E(G)$; note that all edges deleted this way are complete edges. For $1\leq i\leq a$, let $v_{i,1},\ldots,v_{i,{p_i}}$ be a maximal sequence of vertices of $G$ such that $v_{i,j}$ forces $v_{i,j+1}$ for $1 \leq j < p_i$ (after the domination step using $S$). Note that since $A=N[S]\backslash S$, $A$ is a zero forcing set of $G-S$, and hence each vertex of $G-S$ belongs to exactly one such sequence. For $1 \leq i \leq a$ and $1 \leq j \leq p_i$, if $v_{i,j}$ performs a force, let $\tau_{i,j}$ be the timestep at which $v_{i,j}$ performs a force minus the timestep at which $v_{i,j}$ gets colored; if $v_{i,j}$ does not perform a force, let $\tau_{i,j}$ be $b'+1$ minus the timestep at which $v_{i,j}$ gets colored. Note that for each $i\in \{1,\ldots,a\}$, $\sum_{j=1}^{p_i}\tau_{i,j}=b'$. 
Thus, if $P_1,\ldots,P_a$ are the paths used in the construction of $\mathcal{G}(s,a,b')$, we can label the vertices of $P_i$, $1\leq i\leq a$, in order starting from the endpoint which is an $a$-vertex toward the other endpoint, as \[v_{i,1}^1,\ldots,v_{i,1}^{\tau_{i,1}},v_{i,2}^1,\ldots,v_{i,2}^{\tau_{i,2}},v_{i,3}^1,\ldots,v_{i,3}^{\tau_{i,3}},\ldots,v_{i,p_i}^1,\ldots,v_{i,p_i}^{\tau_{i,p_i}}.\] Let $K_1,\ldots,K_{b'}$ be the cliques of size $a$ used in the construction of $\mathcal{G}(s,a,b')$, where $V(K_1)=\{v_{1,1}^1,\ldots,v_{a,1}^1\}$, and the vertices of $K_{\ell}$ are adjacent to the vertices of $K_{\ell+1}$ for $1\leq \ell<b'$. Thus, each such clique corresponds to a timestep in the forcing process of $G-S$ using $A$. Let $e=\{v_{i_1,j_1},v_{i_2,j_2}\}$ be an arbitrary edge of $G-S$ with $i_1 \neq i_2$. There is an earliest timestep $\ell^*$ at which both $v_{i_1,j_1}$ and $v_{i_2,j_2}$ are colored. Therefore, the clique $K_{\ell^*}$ contains $v_{i_1,j_1}^{\alpha}$ and $v_{i_2,j_2}^{\beta}$, for some $\alpha\in \{1,\ldots,\tau_{i_1,j_1}\}$ and $\beta\in \{1,\ldots,\tau_{i_2,j_2}\}$. Denote the edge $\{v_{i_1,j_1}^{\alpha}$,$v_{i_2,j_2}^{\beta}\}$ by $\phi(e)$, and note that $\phi(e)$ is uniquely determined for $e$. Delete all edges in $K_1,\ldots,K_{b'}$ from $\mathcal{G}(s,a,b')$ except the edges $\{\phi(e):e=\{v_{i_1,j_1},v_{i_2,j_2}\}\in E(G-S), \text{ with }i_1 \neq i_2\}$. Next, for $1 \leq i \leq a$ and $1 \leq j \leq p_i$, contract the edges $\{v_{i,j}^1,v_{i,j}^2\},\{v_{i,j}^2,v_{i,j}^3\},\ldots,\{v_{i,j}^{\tau_{i,j}-1},v_{i,j}^{\tau_{i,j}}\}$ in $\mathcal{G}(s,a,b')$ and let $\psi(v_{i,j})$ be the vertex corresponding to $\{v_{i,j}^1,\ldots,v_{i,j}^{\tau_{i,j}}\}$ obtained from the contraction of these edges. See Figure \ref{PDextExample} for an illustration. Note that these operations delete complete edges and contract path edges. 
Moreover, note that there is a bijection between edges of $G-S$ of the form $e=\{v_{i_1,j_1},v_{i_2,j_2}\}$ with $i_1 \neq i_2$ and the edges $\phi(e)$ of $\mathcal{G}(s,a,b')$, as well as between edges of the form $\{v_{i,j},v_{i,{j+1}}\}$ of $G-S$ and the edges $\{\psi(v_{i,j}),\psi(v_{i,{j+1}})\}$ of $\mathcal{G}(s,a,b')$. Thus, the obtained graph is isomorphic to $G$, so $G$ can be obtained from $\mathcal{G}(s,a,b')$ by contracting path edges, deleting complete edges, and/or deleting cross edges so that the remaining cross edges saturate the $a$-vertices. \begin{figure} \caption{Illustration of the construction of $G$ from $\mathcal{G}(s,a,b')$ in the proof of Theorem \ref{pdChar}.} \label{PDextExample} \end{figure} Conversely, suppose there exist integers $a\geq 0$, $b\geq 0$, and $s\geq 1$ such that $s + b = t$, and $G$ can be obtained from $\mathcal{G}(s,a,b)$ by contracting path edges, deleting complete edges, and deleting cross edges so that the remaining cross edges saturate the $a$-vertices. Let $S$ be the set of $s$-vertices in $\mathcal{G}(s,a,b)$ and $A$ be the set of $a$-vertices. Clearly $S$ is a power dominating set of $\mathcal{G}(s,a,b)$, and $\operatorname{ppt}(\mathcal{G}(s,a,b);S)=b$. In the power domination process of $\mathcal{G}(s,a,b)$ using $S$, complete edges are not used in the domination step and are not used in any forcing step, since any vertex which is adjacent to a non-colored vertex via a complete edge is also adjacent to a non-colored vertex via a path edge. Therefore, $S$ remains a power dominating set after any number of complete edges are deleted from $\mathcal{G}(s,a,b)$; moreover, deleting complete edges from $\mathcal{G}(s,a,b)$ cannot increase the power propagation time using $S$, since all the forces can occur in the same order as in the original graph, via the path edges. It is also easy to see that if any path edges of $\mathcal{G}(s,a,b)$ are contracted, $S$ remains a power dominating set of the resulting graph, since all the forces can occur in the same relative order along the new paths.
Moreover, note that $\mathcal{G}(s,a,b)-S\simeq K_a \square P_b$, and that $A$ is a zero forcing set of $K_a \square P_b$. Thus, the power domination process of $\mathcal{G}(s,a,b)$ using $S$ after the domination step is identical to the zero forcing process of $K_a \square P_b$ using $A$. It follows from Lemma 3.15 of \cite{carlson} that contracting path edges of $K_a \square P_b$ does not increase the zero forcing propagation time using $A$. Thus, contracting path edges of $\mathcal{G}(s,a,b)$ does not increase the power propagation time using $S$. Finally, deleting cross edges so that the remaining cross edges saturate the $a$-vertices ensures that every $a$-vertex will still be dominated by an $s$-vertex in the first timestep. Thus, since $S$ remains a power dominating set of $G$, and since $G$ is obtained from $\mathcal{G}(s,a,b)$ by operations that do not increase the power propagation time using $S$, it follows that $\operatorname{th}_{\gamma_P}(G) \leq |S|+\operatorname{ppt}(G;S)\leq |S|+\operatorname{ppt}(\mathcal{G}(s,a,b);S)=s+b=t$. To see that it can be verified in polynomial time whether a graph $G=(V,E)$ satisfies the conditions of the theorem, note that for a fixed constant $t$, there are $O(n^t)$ subsets of $V$ of size at most $t$. Given a set $S\subset V$, it can be verified in $O(n^2)$ time whether $S$ is a power dominating set of $G$, and if so, $\operatorname{ppt}(G;S)$ can be computed in $O(n^2)$ time. Thus, it can be verified in $O(n^{t+2})$ time whether there exists a power dominating set $S$ with $|S|\leq t-\operatorname{ppt}(G;S)$, and hence whether $\operatorname{th}_{\gamma_P}(G)\leq t$. \qed \noindent We can use Theorem \ref{pdChar} to quickly characterize graphs with low power domination throttling numbers. \begin{corollary} Let $G$ be a graph. Then $\operatorname{th}_{\gamma_P}(G) = 1$ if and only if $G \simeq K_1$. \end{corollary} \begin{corollary} Let $G$ be a graph. 
Then $\operatorname{th}_{\gamma_P}(G) = 2$ if and only if $G \simeq \overline{K}_2$ or $G$ has a dominating vertex and $G\not\simeq K_1$. \end{corollary} \noindent We conclude this section by characterizing graphs whose power domination throttling numbers are large. \begin{proposition}\label{pth=n} Let $G$ be a graph. Then $\operatorname{th}_{\gamma_P}(G)=n$ if and only if $G\simeq\overline{K}_n$ or $G\simeq K_2\dot\cup\overline{K}_{n-2}$. \end{proposition} \proof If $G\simeq\overline{K}_n$ or $G\simeq K_2\dot\cup\overline{K}_{n-2}$, it is easy to see that $\operatorname{th}_{\gamma_P}(G)=n$. Conversely, let $G$ be a graph with $\operatorname{th}_{\gamma_P}(G)=n$. If $|E(G)|=0$, then $G\simeq\overline{K}_n$. If $|E(G)|=1$, then $G\simeq K_2\dot\cup\overline{K}_{n-2}$. If $|E(G)|\geq 2$, then let $e_1$ and $e_2$ be distinct edges of $G$; if $e_1$ and $e_2$ share a vertex, let $u$ and $v$ be their non-shared endpoints, and otherwise let $u$ be an endpoint of $e_1$ and $v$ be an endpoint of $e_2$. Let $S=V\setminus\{u,v\}$; since each of $u$ and $v$ has a neighbor in $S$, we have $|S|=n-2$ and $\operatorname{ppt}(G;S)=1$. This implies that $\operatorname{th}_{\gamma_P}(G)\le n-1$, a contradiction.\qed \begin{theorem}\label{pth=n-1} Let $G$ be a graph. Then $\operatorname{th}_{\gamma_P}(G)=n-1$ if and only if $G\simeq P_3\dot\cup\overline{K}_{n-3}$ or $G\simeq C_3\dot\cup\overline{K}_{n-3}$ or $G\simeq P_4\dot\cup\overline{K}_{n-4}$ or $G\simeq C_4\dot\cup\overline{K}_{n-4}$ or $G\simeq K_2\dot\cup K_2\dot\cup\overline{K}_{n-4}$. \end{theorem} \proof If $G$ is any of the graphs in the statement of the theorem, then it is easy to see that $\operatorname{th}_{\gamma_P}(G)=n-1$. Conversely, let $G$ be a graph with $\operatorname{th}_{\gamma_P}(G)=n-1$ and suppose $G$ has connected components $G_1,\ldots,G_k$. By Proposition \ref{pth_disjoint_union}, $n(G)-1=\operatorname{th}_{\gamma_P}(G)\leq \operatorname{th}_{\gamma_P}(G_1)+\ldots+\operatorname{th}_{\gamma_P}(G_k)$, so $\operatorname{th}_{\gamma_P}(G_i)\geq n(G_i)-1$ for $1\leq i\leq k$. Let $G_i$ be an arbitrary component of $G$.
We will show that $\operatorname{th}_{\gamma_P}(G_i)=n(G_i)-1$ if and only if $G_i\in \{P_3,C_3,P_4,C_4\}$. If $G_i\in \{P_3,C_3,P_4,C_4\}$, then it is easy to see that $\operatorname{th}_{\gamma_P}(G_i)=n(G_i)-1$. Now suppose $\operatorname{th}_{\gamma_P}(G_i)=n(G_i)-1$. Since $G_i$ is connected and $G_i\not\simeq K_1$, $\Delta(G_i)\ge 1$. If $\Delta(G_i)=1$, then connectedness implies that $G_i\simeq K_2$, but then $\operatorname{th}_{\gamma_P}(G_i)=2=n(G_i)$, a contradiction. If $\Delta(G_i)=2$, then connectedness implies that $n(G_i)\ge3$ and $G_i\simeq P_{n(G_i)}$ or $G_i\simeq C_{n(G_i)}$. However, if $n(G_i)\ge 5$, and if we label the vertices of $G_i$ $v_1,\ldots,v_5,\ldots, v_{n(G_i)}$ in order along the path or cycle, then taking $S=V(G_i)\setminus\{v_1,v_3,v_4\}$ yields $\operatorname{th}_{\gamma_P}(G_i)\leq |S|+\operatorname{ppt}(G_i;S)=n(G_i)-3+1$, a contradiction. Finally, if $\Delta(G_i)\ge3$ and $v$ is a vertex with $d(v)=\Delta(G_i)$, then taking $S=V(G_i)\setminus N(v)$ yields $\operatorname{th}_{\gamma_P}(G_i)\leq|S|+\operatorname{ppt}(G_i;S)\le n(G_i)-2$, a contradiction. Moreover, by Proposition \ref{pth=n}, $\operatorname{th}_{\gamma_P}(G_i)=n(G_i)$ if and only if $G_i\in \{K_1, K_2\}$. Thus, each component of $G$ is one of the following: $K_1, K_2, P_3, C_3, P_4, C_4$. If one of the components of $G$, say $G_1$, is $P_3$, $C_3$, $P_4$, or $C_4$, then all other components of $G$ must be $K_1$. To see why, let $v$ be a degree 2 vertex in $G_1$, and let $w$ be a non-isolate vertex in another component; then, taking $S=V(G)\setminus(N(v)\cup\{w\})$ yields $\operatorname{th}_{\gamma_P}(G)\leq |S|+\operatorname{ppt}(G;S)=n(G)-3+1$, a contradiction. If one of the components of $G$, say $G_1$, is $K_2$, then exactly one other component must be $K_2$, and all other components must be $K_1$. 
To see why, note that by the argument above, no other component can be $P_3$, $C_3$, $P_4$, or $C_4$, and by Proposition \ref{pth=n}, there must be a component different from $K_1$. Thus, this component must also be $K_2$. If there are at least three $K_2$ components, then let $v_1,v_2,v_3$ be degree 1 vertices, each belonging to a distinct $K_2$ component; taking $S=V(G)\setminus\{v_1,v_2,v_3\}$ yields $\operatorname{th}_{\gamma_P}(G)\leq |S|+\operatorname{ppt}(G;S)=n(G)-3+1$, a contradiction. Thus, there are exactly two $K_2$ components. \qed \section{Conclusion} In this paper, we presented complexity results, tight bounds, and extremal characterizations for the power domination throttling number. Our complexity results apply not only to power domination throttling, but also to a general class of minimization problems defined as the sum of two graph parameters. One direction for future work is to determine the largest value of $\operatorname{th}_{\gamma_P}(G)$ for a connected graph $G$. For example, $\operatorname{th}_{\gamma_P}(G)\geq \gamma_P(G)$, and there are graphs for which $\gamma_P(G)=\frac{n}{3}$. Is there an infinite family of connected graphs for which $\operatorname{th}_{\gamma_P}(G)=\frac{n}{2}$? It would also be interesting to find operations which affect the power domination throttling number monotonically, or conditions which guarantee that the power domination throttling number of a graph is no less than or no greater than the power domination throttling number of an induced subgraph. We partially answered this question by showing that power domination throttling is subtree monotone for trees. Finding an exact polynomial time algorithm for the power domination throttling number of trees would also be of interest. \end{document}
\begin{document} \title{Numerical invariants of identities of unital algebras} \author[D. Repov\v s and M. Zaicev] {Du\v san Repov\v s and Mikhail Zaicev} \address{Du\v{s}an Repov\v{s} \\Faculty of Education, and Faculty of Mathematics and Physics, University of Ljubljana, Kardeljeva plo\v s\v cad 16, Ljubljana, 1000, Slovenia} \email{[email protected]} \address{Mikhail Zaicev \\Department of Algebra\\ Faculty of Mathematics and Mechanics\\ Moscow State University \\ Moscow, 119992, Russia} \email{[email protected]} \thanks{The first author was supported by SRA grants P1-0292-0101, J1-4144-0101 and J1-5435-0101. The second author was supported by RFBR grant No 13-01-00234a} \keywords{Polynomial identity, non-associative unital algebra, codimension, exponential growth, fractional PI-exponent} \subjclass[2010]{Primary 16R10; Secondary 16P90} \begin{abstract} We study polynomial identities of algebras with an adjoined external unit. For a wide class of algebras we prove that adjoining an external unit element increases the PI-exponent by exactly 1. We also show that any real number from the interval $[2,3]$ can be realized as the PI-exponent of some unital algebra. \end{abstract} \date{\today} \maketitle \section{Introduction} We study numerical characteristics of polynomial identities of algebras over a field $F$ of characteristic zero. Given an algebra $A$ over $F$, one can associate to it the sequence $\{c_n(A)\}$ of non-negative integers called the {\it sequence of codimensions}. If the growth of $\{c_n(A)\}$ is exponential then the limit of the sequence $\sqrt[n]{c_n(A)}$ is called the PI-{\it exponent} of $A$ and written $exp(A)$. In the present paper we are mostly interested in what happens with the PI-exponent if we adjoin to $A$ an external unit element. The first results in this area were proved for associative algebras. It is known that $exp(A)$ is an integer in the associative case \cite{GZ98}, \cite{GZ99}.
It was shown in \cite{GZJA} that it follows from the proofs in \cite{GZ98}, \cite{GZ99} that either $exp(A^\sharp) =exp(A)$ or $exp(A^\sharp)=exp(A)+1$ and both options can be realized. Here $A^\sharp$ is the algebra $A$ with an adjoined external unit. The next result was published in \cite{ZAL}, following the example of a 5-dimensional algebra $A$ with $exp(A)<2$ constructed in \cite{GMZCA2009}. The point is that in the associative or Lie case the PI-exponent cannot be less than 2 (\cite{K}, \cite{M}). For finite dimensional Lie superalgebras, Jordan algebras and alternative algebras the PI-exponent is also at least 2. Starting from the example $A$ from \cite{GMZCA2009} it was shown in \cite{ZAL} that $exp(A^\sharp)=exp(A)+1$. The following problem was also stated in \cite{ZAL}: is it true that always either $exp(A^\sharp)=exp(A)$ or $exp(A^\sharp)=exp(A)+1$? An example of a 4-dimensional simple algebra $A$ with a fractional PI-exponent was constructed in \cite{BBZF}. It was also shown that $exp(A^\sharp)=exp(A)+1$. This result was announced in \cite{BBZU}. It was also shown in \cite{BBZU} that if $A$ is itself a unital algebra then $exp(A^\sharp)=exp(A)$. In the present paper (see Theorem \ref{t1}) we shall prove that for a previously known series of algebras $A_\alpha$ with $exp(A_\alpha)=\alpha$, $\alpha\in \mathbb R, 1<\alpha<2$ (see \cite{GMZAdvM}), the extended algebra $A^\sharp_\alpha$ has PI-exponent $\alpha+1$. That is, we shall show that there exist infinitely many algebras $A$ such that $exp(A^\sharp)=exp(A)+1$. Another important question is the following: which real numbers can be realized as PI-exponents of some algebra? For example, if $A$ is any associative PI-algebra or a finite dimensional Lie or Jordan algebra then $exp(A)$ is an integer (see \cite{GZ98}, \cite{GZ99}, \cite{Z}, \cite{GSZ}). For unital algebras it is only known that if $\dim A<\infty$ then $exp(A)$ cannot be less than 2.
As a consequence of the main result of our paper (see Corollary \ref{c1}) we shall obtain that for any real $\alpha\in [2,3]$ there exists a unital algebra $B_\alpha$ such that $exp(B_\alpha)=\alpha$. \section{Preliminaries} Let $A$ be an algebra over a field $F$ of characteristic zero and let $F\{X\}$ be the absolutely free algebra over $F$ with a countable set of generators $X=\{x_1,x_2,\ldots\}$. Recall that a polynomial $f=f(x_1,\ldots,x_n)$ is said to be an {\it identity} of $A$ if $f(a_1,\ldots, a_n)=0$ for all $a_1,\ldots,a_n\in A$. The set $Id(A)$ of all polynomial identities of $A$ forms an ideal of $F\{X\}$. Denote by $P_n$ the subspace of all multilinear polynomials in $F\{X\}$ in the variables $x_1,\ldots, x_n$. Then the intersection $Id(A)\cap P_n$ is the space of all multilinear identities of $A$ of degree $n$. Denote $$ P_n(A)=\frac{P_n}{Id(A)\cap P_n}. $$ The non-negative integer $$ c_n(A)=\dim P_n(A) $$is called the $n$th {\it codimension} of $A$. Asymptotic behavior of the sequence $\{c_n(A)\}, n=1,2,\ldots,$ is an important numerical invariant of identities of $A$. We refer the reader to \cite{GZbook} for an account of basic notions of the theory of codimensions of PI-algebras. If the sequence $\{c_n(A)\}$ is exponentially bounded, i.e. $c_n(A)\le a^n$ for all $n$ and for some number $a$ (for example in case $\dim A<\infty$ and in many other cases), we can define the lower and the upper PI-exponents of $A$ by $$ \underline{exp}(A)=\liminf_{n\to\infty}\sqrt[n]{c_n(A)},\quad \overline{exp}(A)=\limsup_{n\to\infty}\sqrt[n]{c_n(A)}, $$ and (the ordinary) PI-exponent $$ exp(A)=\lim_{n\to\infty}\sqrt[n]{c_n(A)} $$ provided that $\underline{exp}(A)=\overline{exp}(A)$. In order to compute the values of codimensions we can consider the symmetric group action on $P_n$ defined by $$ \sigma f(x_1,\ldots,x_n)=f(x_{\sigma(1)},\ldots,x_{\sigma(n)})\quad \forall \sigma\in S_n.
$$ The subspace $P_n\cap Id(A)$ is invariant under this action and we can study the structure of $P_n(A)$ as an $S_n$-module. Denote by $\chi_n(A)$ the $S_n$-character of $P_n(A)$ called the $n$th {\it cocharacter} of $A$. Since ${\rm char}~ F=0$ and any $S_n$-representation is completely reducible, the $n$th cocharacter has the decomposition \begin{equation}\label{e1} \chi_n(A)=\sum_{\lambda\vdash n}m_\lambda \chi_\lambda \end{equation} where $\chi_\lambda$ is the irreducible $S_n$-character corresponding to the partition $\lambda\vdash n$ and the non-negative integer $m_\lambda$ is the multiplicity of $\chi_\lambda$ in $\chi_n(A)$. Obviously, it follows from (\ref{e1}) that $$ c_n(A)=\sum_{\lambda\vdash n}m_\lambda \deg\chi_\lambda. $$ Another important numerical characteristic is the $n$th {\it colength} of $A$ defined by $$ l_n(A)=\sum_{\lambda\vdash n}m_\lambda $$ with $m_\lambda$ taken from (\ref{e1}). In particular, if the sequence $\{l_n(A)\}$ is polynomially bounded as a function of $n$ while some of the $\deg\chi_\lambda$ with $m_\lambda\ne 0$ are exponentially large, the principal part of the asymptotics of $\{c_n(A)\}$ is determined by the largest value of $\deg\chi_\lambda$ with non-zero multiplicity. For studying the asymptotics of codimensions it is convenient to use the following functions. Let $0\le x_1,\ldots,x_d\le 1$ be real numbers such that $x_1+\cdots+x_d=1$. Denote $$ \Phi(x_1,\ldots,x_d)=\frac{1}{x_1^{x_1}\cdots x_d^{x_d}}. $$ If $d=2$ then instead of $\Phi(x_1,x_2)$ we will write $$ \Phi_0(x)=\frac{1}{x^x(1-x)^{1-x}}. $$ We assume that some of $x_1,\ldots,x_d$ can have zero values. In this case we assume that $0^0=1$. Given $\lambda=(\lambda_1,\ldots,\lambda_d)\vdash n$ we define \begin{equation}\label{e3} \Phi(\lambda)=\frac{1}{(\frac{\lambda_1}{n})^{\frac{\lambda_1}{n}}\cdots (\frac{\lambda_d}{n})^{\frac{\lambda_d}{n}}}.
\end{equation} For partitions $\lambda=(\lambda_1,\ldots,\lambda_k)\vdash n$ with $k<d$ we also consider $\Phi(\lambda)$ as in (\ref{e3}), assuming $\lambda_{k+1}=\cdots=\lambda_d=0$. The relationship between $\deg\chi_\lambda$ and $\Phi(\lambda)$ is given by the following lemma. \begin{lemma}\label{l1}{\rm (see \cite[Lemma 1]{GZJLMS})} Let $\lambda=(\lambda_1,\ldots,\lambda_k)\vdash n$ be a partition of $n$. If $k\le d$ and $n\ge 100$ then $$ \frac{\Phi(\lambda)^n}{n^{d^2+d}} \le \deg \chi_\lambda \le n\Phi(\lambda)^n. $$ \end{lemma} $\Box$ Now we investigate how the value of $\Phi(x_1,\ldots, x_d)$ increases after adding one extra variable. \begin{lemma}\label{l3} Let $$ \Phi(x_1,\ldots,x_d)=\frac{1}{x_1^{x_1}\cdots x_d^{x_d}}, \quad 0\le x_1,\ldots,x_d,\quad x_1+\cdots+x_d=1, $$ and let $\Phi(z_1,\ldots,z_d)=a$ for some fixed $z_1,\ldots,z_d$. Then $$ \max_{{{0\le t\le 1}}}\{\Phi(y_1,\ldots, y_d,1-t)\vert y_1=tz_1,\ldots, y_d=tz_d\}=a+1. $$ Moreover, the maximal value is achieved if $t=\frac{a}{a+1}$. \end{lemma} {\em Proof.} Consider $$ g(t)=\ln \Phi^{-1}(tz_1,\ldots, tz_d, 1-t). $$ Then $$ g(t)=t\ln t+(1-t)\ln(1-t)-t\ln a. $$ Hence its derivative is equal to $$ g'(t)=\ln\frac{t}{(1-t)a} $$ and $g'(t)=0$ if and only if $t=(1-t)a$, that is $t=\frac{a}{a+1}$. It is not difficult to check that $g$ has a minimum at this point. Now we compute the value of $g$: $$ g(\frac{a}{a+1})=\frac{a}{a+1}\ln \frac{a}{a+1}+\frac{1}{a+1}\ln \frac{1}{a+1}-\frac{a}{a+1}\ln a=\ln B, $$ where $$ B=(\frac{a}{a+1})^{\frac{a}{a+1}}(\frac{1}{a+1})^{\frac{1}{a+1}} a^{-\frac{a}{a+1}}=\frac{1}{a+1}. $$ Hence $\Phi_{max}=B^{-1}=a+1$, and we have completed the proof. $\Box$ The next lemma shows what happens with $\Phi(\lambda)$ when we insert an extra row into the Young diagram $D_\lambda$. \begin{lemma}\label{l4} Let $\gamma$ be a positive real number and let $\lambda=(\lambda_1,\ldots,\lambda_d)$ be a partition of $n$ such that $\frac{\lambda_1}{n},\ldots, \frac{\lambda_d}{n}\ge\gamma$.
Then for any $\varepsilon>0$ there exist $n'=kn$ and a partition $\mu\vdash n'$, $\mu=(\mu_1,\ldots,\mu_{d+1})$ such that for some integers $1\le i\le d+1$ and $q\ge 1$ the following conditions hold: \begin{itemize} \item[1)] $\mu_j=q\lambda_j$ for all $j\le i-1$; \item[2)] $\mu_{j+1}=q\lambda_j$ for all $j\ge i$; and \item[3)] $\vert \Phi(\lambda)-\Phi(\mu)+1\vert <\varepsilon$. \end{itemize} Moreover, $k$ does not depend on $\lambda$ and $n$. \end{lemma} {\em Proof.} Denote $$ z_1=\frac{\lambda_1}{n},\ldots,z_d=\frac{\lambda_d}{n} $$ and $a=\Phi(z_1,\ldots,z_d)=\Phi(\lambda)$. By Lemma \ref{l3} \begin{equation}\label{e3a} \Phi(tz_1,\ldots,tz_d, 1-t)=a+1 \end{equation} if $t=\frac{a}{a+1}$. It is not difficult to check that $1\le \Phi(x_1,\ldots,x_d)\le d$, hence $\frac{1}{d+1}\le 1-t\le\frac{1}{2}$. Note that $\Phi=\Phi(x_1,\ldots,x_{d+1})$ can be viewed as a function of $d$ independent indeterminates $x_1,\ldots,x_d$. Conditions $0<\gamma\le x_1,\ldots, x_d$ and $\frac{1}{d+1}\le x_{d+1} \le \frac{1}{2}$ define a compact domain $Q$ in $\mathbb{R}^d$ since $x_{d+1}=1-x_1-\cdots- x_d$. Since $\Phi$ is continuous on $Q$ there exists an integer $k$ such that $$ \vert \Phi(x_1,\ldots,x_d,x_{d+1}) - \Phi(x_1',\ldots,x_d',x_{d+1}')\vert < \varepsilon $$ as soon as $\vert x_i-x_i'\vert < \frac{1}{k}$ for all $i=1,\ldots,d$. Clearly, $k$ does not depend on $n$ and $\lambda$. Then there exists a rational number $t_0=\frac{q}{k}<1$ such that $\vert t-t_0\vert < \frac{1}{k}$ and \begin{equation}\label{e4} \vert \Phi(t_0z_1,\ldots, t_0z_d, 1-t_0)-a-1\vert <\varepsilon. \end{equation} Denote $y_0=1-t_0$. Then $t_0z_i\le 1-t_0=y_0 \le t_0z_{i-1}$ for some $i$ (or $y_0>t_0z_1$, or $y_0< t_0 z_d$). Now we set $n'=kn$, $$ \mu_1=q\lambda_1,\ldots,\mu_{i-1}=q\lambda_{i-1}, $$ $$ \mu_{i+1}=q\lambda_i,\ldots, \mu_{d+1}=q\lambda_d $$ and $\mu_i=n(k-q)$. Then $\mu=(\mu_1,\ldots,\mu_{d+1})$ is a partition of $n'$ and $$ \Phi(\mu)=\Phi(t_0z_1,\ldots,t_0z_d,1-t_0).
$$ In particular, $\vert \Phi(\lambda)-\Phi(\mu)+1\vert <\varepsilon$ by (\ref{e3a}) and (\ref{e4}), and we have completed the proof of our lemma. $\Box$ \section{Algebras of infinite words} In this section we recall some constructions and algebras from \cite{GMZAdvM} and their properties. These algebras will be used for constructing unital algebras. Let $K=(k_1,k_2,\ldots)$ be an infinite sequence of integers $k_i\ge 2$. Then the algebra $A(K)$ is defined by its basis \begin{equation}\label{e01} \{a,b\}\cup Z_1\cup Z_2\cup\ldots \end{equation} where \begin{equation}\label{e02} Z_i=\{z^{(i)}_j\vert 1\le j\le k_i, \,i=1,2,\ldots\} \end{equation} with the multiplication table \begin{equation}\label{e03} z^{(i)}_1a=z^{(i)}_2,\ldots, z^{(i)}_{k_i-1}a=z^{(i)}_{k_i}, z^{(i)}_{k_i}b=z^{(i+1)}_1 \end{equation} for all $i=1,2,\ldots$. All remaining products are assumed to be zero. It is easy to verify (see also \cite{GMZAdvM}) that $A(K)$ satisfies the identity $x_1(x_2x_3)=0$ and if $m_\lambda\ne 0$ in (\ref{e1}) then $\lambda=(\lambda_1)$ or $\lambda=(\lambda_1,\lambda_2)$ or $\lambda=(\lambda_1,\lambda_2,1)$. Denote by $W_n^{(d)}$, $d\le n$, the subspace of the free algebra $F\{X\}$ of all homogeneous polynomials of degree $n$ in $x_1,\ldots, x_d$. Given a PI-algebra $A$, we define $$ W_n^{(d)}(A)=\frac{W_n^{(d)}}{W_n^{(d)}\cap Id(A)}. $$ Recall that the height $h(\lambda)$ of a partition $\lambda=(\lambda_1,\ldots,\lambda_d)$ is equal to $d$. We will use the following result from \cite{GMZAdvM}. \begin{lemma}\label{l5} {\rm (\cite[Lemma 4.1]{GMZAdvM})} Let $A$ be a PI-algebra with $n$th cocharacter $\chi_n(A)=\sum_{\lambda\vdash n} m_\lambda\chi_\lambda$. Then for every $\lambda\vdash n$ with $h(\lambda)\le d$ we have that $m_\lambda\le \dim W_n^{(d)}(A)$. \end{lemma} $\Box$ Now let $w=w_1w_2\ldots$ be an infinite word in the alphabet $\{0,1\}$.
Given an integer $m\ge 2$, let $K_{m,w}=\{k_i\}, i=1,2,\ldots$, be the sequence defined by \begin{equation}\label{e04} k_i=\left\{ \begin{array}{ll} m & {\rm if}~ w_i=0,\\ m+1 & {\rm if}~ w_i=1, \end{array} \right. \end{equation} and write $A(m,w)=A(K_{m,w})$. Recall that the complexity $Comp_w(n)$ of an infinite word $w$ is the number of distinct subwords of $w$ of length $n$ (see \cite{Lotair}, Chapter 1). Slightly modifying the proof of Lemma 4.2 from \cite{GMZAdvM} we obtain: \begin{lemma}\label{l6} For any $m\ge 2$ and for any infinite word $w$ the following inequalities hold $$ \dim W_n^{(d)}(A(m,w))\le d(m+1)n Comp_w(n) $$ and $$ l_n(A(m,w))\le n^3\dim W_n^{(3)}(A(m,w)). $$ \end{lemma} $\Box$ Now we fix the algebra $A(m,w)$ by choosing the word $w$. Obviously, $Comp_w(n)\le T$ for any infinite periodic word with period $T$. It is known (see \cite{Lotair}) that $Comp_w(n)\ge n+1$ for any aperiodic word $w$. In case $Comp_w(n)=n+1$ for all $n\ge 1$ the word $w$ is said to be {\it Sturmian}. It is also known that for any Sturmian or periodic word the limit $$ \pi(w)=\lim_{n\to\infty}\frac{w_1+\cdots+w_n}{n}>0 $$ always exists (we always assume that a periodic word is non-zero). This limit $\pi(w)$ is called the {\it slope} of $w$. For any real number $\alpha\in(0,1)$ there exists a word $w$ with $\pi(w)=\alpha$ and $w$ is Sturmian or periodic depending on whether $\alpha$ is irrational or rational, respectively. Moreover, $$ exp(A(m,w))=\Phi_0(\beta)=\frac{1}{\beta^\beta(1-\beta)^{1-\beta}} $$ for any Sturmian or periodic word $w$, where $\beta=\frac{1}{m+\alpha}, \alpha=\pi(w)$ (\cite{GMZAdvM}, Theorem 5.1). As a consequence, for any real $1\le\alpha \le 2$ there exists an algebra $A$ such that $exp(A)=\alpha$. Finally, for any periodic word $w$ and for any $m\ge 2$ there exists a finite dimensional algebra $B(m,w)$ satisfying the same identities as $A(m,w)$.
In particular, for any rational $0<\beta\le\frac{1}{2}$ there exists a finite dimensional algebra $B$ with $$ exp(B)=\Phi_0(\beta)=\frac{1}{\beta^\beta(1-\beta)^{1-\beta}}. $$ \section{Algebra with adjoined unit} We fix a Sturmian or periodic word $w$ and an integer $m\ge 2$ and consider the algebra $A=A(m,w)$. Denote by $A^\sharp$ the algebra obtained from $A$ by adjoining an external unit element $1$. Our main goal is to prove that $exp(A^\sharp)$ exists and that $$ exp(A^\sharp)=exp(A)+1. $$ First we find a polynomial upper bound for the colength of $A^\sharp$. We start with a remark concerning an arbitrary algebra $B$. Recall that, given an algebra $B$, $W_n^{(d)}(B)$ is the space of homogeneous polynomials in $x_1,\ldots,x_d$ of total degree $n$ modulo the ideal $Id(B)$. \begin{lemma}\label{l7} Let $B$ be an arbitrary algebra. Suppose that $\dim W_n^{(d)}(B)\le\alpha n^T$ for some natural $T$, $\alpha\in\mathbb{R}$ and for all $n\ge 1$. Then $$ \dim W_n^{(d)}(B^\sharp)\le\alpha (n+1)^{T+d+1}. $$ \end{lemma} {\em Proof}. Denote by $F\{X\}^\sharp$ the absolutely free algebra generated by $X$ with an adjoined unit element. First note that a multihomogeneous polynomial $f(x_1,\ldots,x_d)$ is an identity of $B^\sharp$ if all components of $f(1+x_1,\ldots,1+x_d)$ which are multihomogeneous in $x_1,\ldots,x_d$ are identities of $B$. Clearly, the number of multihomogeneous polynomials in $x_1,\ldots,x_d$ of total degree $k$, linearly independent modulo $Id(B)$, does not exceed $\dim W_k^{(d)}(B)$. On the other hand, the number of multihomogeneous components of total degree $k$ in a free algebra $F\{x_1,\ldots,x_d\}$ does not exceed $(k+1)^d$. Take now $$ N=\sum_{k=0}^n (k+1)^d\dim W_k^{(d)}(B) +1 $$ assuming that $\dim W_0^{(d)}(B)=1$. Clearly, $$ N\le 1+(n+1)^d\alpha\sum_{k=0}^n k^T <\alpha(n+1)^{T+d+1}.
$$ Given homogeneous polynomials $f_1,\ldots, f_{N+1}$ in $x_1,\ldots,x_d$ of degree $n$, consider their linear combination $f=\lambda_1 f_1+ \cdots+ \lambda_{N+1} f_{N+1}$ with unknown coefficients $\lambda_1,\cdots, \lambda_{N+1}$. The condition that a fixed multihomogeneous component of $f(1+x_1,\ldots, 1+x_d)$ of total degree $k$ is an identity of $B$ amounts to at most $\dim W_k^{(d)}(B)$ linear equations on $\lambda_1,\cdots, \lambda_{N+1}$. Hence the condition that all multihomogeneous components of $f(1+x_1,\ldots, 1+x_d)$ are identities of $B$ leads to at most $N$ linear equations on $\lambda_1,\cdots, \lambda_{N+1}$. It follows that $f_1,\ldots,f_{N+1}$ are linearly dependent modulo $Id(B^\sharp)$ and we have completed the proof. $\Box$ \begin{lemma}\label{l8}Let $A=A(m,w)$ where $m\ge 2$ and $w$ is a periodic or Sturmian word. Then $$ l_n(A^\sharp) \le 4(m+1)(n+1)^{12} $$ for all sufficiently large $n$. \end{lemma} {\em Proof}. First note that the cocharacter of $A^\sharp$ lies in the strip of width $4$, that is, if $m_\lambda\ne 0$ in the decomposition \begin{equation}\label{e5} \chi_n(A^\sharp)=\sum_{\lambda\vdash n} m_\lambda\chi_\lambda \end{equation} then $h(\lambda)\le 4$. The number of partitions of $n$ of type $\lambda=(\lambda_1,\ldots,\lambda_k)$ with $1\le k\le 4$ is less than $n^4$. By Lemma \ref{l6} \begin{equation}\label{e6} \dim W_n^{(4)}(A) \le 4(m+1)n Comp_w(n). \end{equation} If $w$ is a Sturmian word then $Comp_w(n)=n+1$. If $w$ is periodic then its complexity is finite and hence $Comp_w(n)\le n+1$ for all sufficiently large $n$ in (\ref{e6}). In particular, $$ \dim W_n^{(4)}(A) \le 4(m+1)n(n+1)\le 4(m+1)n^3 $$ for all sufficiently large $n$. Applying Lemmas \ref{l5}, \ref{l6} and \ref{l7} we obtain $$ m_\lambda \le\dim W_n^{(4)}(A^\sharp) \le 4(m+1)(n+1)^8 $$ for all $m_\lambda\ne 0$ in (\ref{e5}) and then $$ l_n(A^\sharp) = \sum_{\lambda\vdash n}m_\lambda \le 4(m+1)n^4(n+1)^8 \le 4(m+1)(n+1)^{12}.
$$ $\Box$ In the next step we shall find an upper bound for $\Phi(\lambda)$ provided that $m_\lambda\ne 0$ in the $n$th cocharacter of $A^\sharp$. \begin{lemma}\label{l9} For any $\varepsilon>0$ there exists $n_0$ such that $m_\lambda=0$ in (\ref{e5}) if $n>n_0$ and $$ \frac{\lambda_3}{\lambda_1}\ge \frac{\beta}{1-\beta} +\varepsilon $$ where $\beta=\frac{1}{m+\alpha}$ and $\alpha$ is the slope of $w$. \end{lemma} {\em Proof}. First let $\lambda=(\lambda_1,\lambda_2,\lambda_3,1)\vdash n$. The condition $m_\lambda\ne 0$ means that there exists a multilinear polynomial $g$ of degree $n$ depending on one alternating set of four variables, $\lambda_3-1$ alternating sets of three variables and some extra variables such that $g$ is not an identity of $A^\sharp$. That is, there exists an evaluation $\varphi:F\{X\}^\sharp\to A^\sharp$ such that $\varphi(g)\ne 0$ and the set $\{\varphi(x_1),\ldots,\varphi(x_n)\}$ contains at least $\lambda_3$ basis elements $b\in A$ and at most $\lambda_1$ elements $a\in A$. Obviously, $\varphi(g)=0$ unless the set $\{\varphi(x_1),\ldots,\varphi(x_n)\}$ contains exactly one element of the form $z_j^{(i)}\in A$. Any non-zero product of basis elements of $A$ is a left-normed product of the type $$ z_j^{(i)}a^{k_1} b^{l_1}\cdots a^{k_t}b^{l_t} $$ where $k_1,\ldots,k_t,l_1,\ldots, l_t$ are equal to $0$ or $1$. More precisely, this product can be written in the form \begin{equation}\label{e7} z_j^{(i)} f(a,b) \end{equation} where $$ f(a,b)=a^{t_0}ba^{t_1}b\cdots b a^{t_k}ba^{t_{k+1}} $$ is an associative monomial in $a$ and $b$ and $$ t_0=m+w_i-j, t_1=m+w_{i+1}-1,\ldots, t_k=m+w_{i+k}-1, t_{k+1} \le m+w_{i+k+1}-1. $$ In particular, $\deg_bf=k+1$ and $$ \deg_af=t_0+t_1+\cdots+t_{k+1}\ge t_1+\cdots+t_k=(m-1)k+ w_{i+1}+\cdots+w_{i+k}. $$ The total degree of the monomial (\ref{e7}) (i.e. the number of factors) is $$ n=(m+1)k+ w_{i}+\cdots+w_{i+k}+t_{k+1}-j+1. $$ Hence, $(m+1)k\ge n-(1+k)-m-1$ and $k\ge \frac{n-m-2}{m+2}$. In particular, $k$ grows with increasing $n$.
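The behavior of the ratio $\deg_b f/\deg_a f$ can be illustrated numerically. The following sketch (an illustration only, not part of the proof; it takes $m=2$ and the periodic word $w=0101\ldots$ of slope $\alpha=1/2$, and ignores the boundary exponents $t_0$ and $t_{k+1}$) confirms that the ratio of the interior degrees tends to $\frac{1}{m-1+\alpha}$:

```python
# Illustrative numerical check (a sketch, not part of the proof): for the
# algebra A(m, w) with m = 2 and the periodic word w = 0101... of slope
# alpha = 1/2, the ratio deg_b f / deg_a f of the interior degrees of a
# nonzero left-normed product tends to 1/(m - 1 + alpha).
# The boundary exponents t_0 and t_{k+1} are ignored here.

m = 2
alpha = 0.5
w = [0, 1] * 5000                      # 10000 letters of the periodic word

r = len(w)                             # one b-factor per interior block
deg_b = r
deg_a = sum(m - 1 + ws for ws in w)    # interior exponents t_s = m - 1 + w_s

ratio = deg_b / deg_a
limit = 1 / (m - 1 + alpha)
print(ratio, limit)
```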
It is known that $$ \frac{w_{i+1}+\cdots+w_{i+k}}{k}\ge \alpha - \frac{C}{k} $$ for some constant $C$ (see \cite{GMZAdvM}, Proposition 5.1 or \cite{Lotair}, Section 2.2). This implies that $$ \deg_af>(m-1)k+k(\alpha-\delta) $$ where $\delta=\frac{C}{k}$ and $$ \frac{\deg_bf}{\deg_af} <\frac{1+\frac{1}{k}}{m-1+\alpha-\delta}. $$ Since $\varphi(g)\ne 0$, at least one monomial of the type (\ref{e7}) in $\varphi(g)$ is non-zero. Therefore \begin{equation}\label{e8} \frac{\lambda_3}{\lambda_1}\le \frac{\deg_bf}{\deg_af}<\frac{1+\frac{1}{k}}{m-1+\alpha-\delta}. \end{equation} Since $\delta$ is an arbitrarily small positive real number, one can choose $n_0$ such that \begin{equation}\label{e9} \frac{1+\frac{1}{k}}{m-1+\alpha-\delta}<\frac{1}{m-1+\alpha}+\frac{\varepsilon}{2} \end{equation} for all $n\ge n_0$. Combining (\ref{e8}) and (\ref{e9}) we conclude that \begin{equation}\label{e10} \frac{\lambda_3}{\lambda_1}<\frac{1}{m-1+\alpha}+\frac{\varepsilon}{2} \end{equation} provided that $m_\lambda\ne 0$ in (\ref{e5}) and $n\ge n_0$. Note that $\frac{\beta}{1-\beta}=\frac{1}{m-1+\alpha}$, hence we have completed the proof of our lemma in case $\lambda=(\lambda_1,\lambda_2,\lambda_3,1)$. Slightly modifying the previous arguments we get the proof of the inequality (\ref{e10}) for a partition $\lambda=(\lambda_1,\lambda_2,\lambda_3)$ with three parts. The only difference is that the polynomial $g$, which is not an identity, depends on at least $\lambda_3$ skew-symmetric sets of variables of order 3 but after the evaluation, one of these variables can be replaced by $z_j^{(i)}$ and we get the inequality $$ \frac{\lambda_3-1}{\lambda_1}\le \frac{\deg_bf}{\deg_af} $$ instead of (\ref{e8}). Taking into account that $\lambda_1\to\infty$ as $n\to\infty$ we get the same conclusion and thus complete the proof. $\Box$ For the lower bound of codimensions of $A^\sharp$ we need the following results.
Let $A=A(m,w)$ be an algebra defined by an integer $m\ge 2$ and by an infinite word $w=w_1w_2\ldots$ in the alphabet $\{0,1\}$. Then \begin{equation}\label{e11} z_1^{(1)}a^{i_1}ba^{i_2}b\cdots a^{i_r}b=z_1^{(r+1)} \end{equation} if $i_1=m-1+w_1, i_2=m-1+w_2,\ldots, i_r=m-1+w_r$. Otherwise the left hand side of (\ref{e11}) is zero. \begin{lemma}\label{l11} Let $\lambda=(j,\lambda_2,\lambda_3,1)$ be a partition of $n=j+mr+w_1+\cdots+w_r+1$ with $j\ge\lambda_2=(m-1)r+w_1+\cdots+w_r$, $\lambda_3=r$ or let $\lambda=(\lambda_1,j,\lambda_3,1)$ be a partition of the same $n$ with $\lambda_1=(m-1)r+w_1+\cdots+w_r>j\ge \lambda_3=r$. Then $m_\lambda\ne 0$ in (\ref{e5}). \end{lemma} {\em Proof}. Recall that, given an $S_n$-module $M$, the multiplicity of $\chi_\lambda$ in the character $\chi(M)$ is non-zero if $e_{T_\lambda}M\ne 0$ for some Young tableau $T_\lambda$ of shape $D_\lambda$. The essential idempotent $e_{T_\lambda}\in FS_n$ is equal to $$ e_{T_\lambda}=(\sum_{\sigma\in R_{T_\lambda}}\sigma)(\sum_{\tau\in C_{T_\lambda}}(sgn~\tau)\tau). $$ Here $R_{T_\lambda}$ is the row stabilizer of $T_\lambda$, i.e. the subgroup of all $\sigma\in S_n$ permuting indices only inside rows of $T_\lambda$ and $C_{T_\lambda}$ is the column stabilizer of $T_\lambda$. First let $\lambda_1=j\ge \lambda_2$. Denote $n_0=mr+w_1+\cdots+w_r+1$ and consider the Young tableau $T_\lambda$ of the following type. Into the boxes of the 1st row of $D_\lambda$ we place $n_0+1,\ldots,n_0+j$ from left to right. Into the third row we insert $j_1=i_1+2,\ldots, j_r=i_1+\cdots+i_r+r+1$. (In fact, $j_1,\ldots, j_r$ are the positions of $b$ in the product (\ref{e11})). Into the second row we insert from left to right $j_1-1,\ldots,j_r-1, i_{r+1},\ldots, i_{\lambda_2}$ where $\{i_{r+1},\ldots, i_{\lambda_2}\}= \{2,\ldots,n_0\}\setminus\{j_1-1,j_1,\ldots,j_r-1,j_r\}$ and into the unique box of the 4th row we put $1$.
Then $$ e_{T_\lambda}(x_1,\ldots,x_n)=Sym_1Sym_2Sym_3Alt_1\cdots Alt_{\lambda_2}(x_1,\ldots,x_n) $$ where \begin{itemize} \item[-] $Alt_1$ is the alternation on $\{1,j_1-1,j_1,n_0+1\}$; \item[-] $Alt_k$ is the alternation on $\{j_k-1,j_k,n_0+k\}$ if $2\le k \le r$; \item[-] $Alt_k$ is the alternation on $\{i_k,n_0+k\}$ if $r<k\le \lambda_2$; \item[-] $Sym_1$ is the symmetrization on $\{n_0+1,\ldots, n_0+j\}$; \item[-] $Sym_2$ is the symmetrization on $\{j_1,\ldots,j_r\}$; \item[-] $Sym_3$ is the symmetrization on $\{2,\ldots,n_0\}\setminus \{j_1,\ldots,j_r\}$. \end{itemize} After an evaluation $$ \varphi(x_1)=z_1^{(1)}, \varphi(x_{n_0+1})=\cdots=\varphi(x_{n_0+j})=1\in A^\sharp, \varphi(x_{j_1})=\cdots=\varphi(x_{j_r})=b $$ and $$ \varphi(x_i)=a\quad {\rm if}~ i\ne 1,j_1,\ldots,j_r,n_0+1,\ldots,n_0+j $$ we have $$ \varphi(e_{T_\lambda}(x_1\cdots x_n))= j! r! (n_0-r-1)! z_1^{(r+1)}\ne 0, $$ hence $m_\lambda\ne 0$ in (\ref{e5}). Similarly, filling up the second row of $T_\lambda$ by $n_0+1,\ldots, n_0+j$ in case $\lambda_1=(m-1)r+w_1+\cdots+w_r> j\ge \lambda_3=r$ we prove that $e_{T_\lambda}(x_1\cdots x_n)$ is not an identity of $A^\sharp$. $\Box$ Recall that, given $0\le\beta\le 1$, $$ \Phi_0(\beta)=\Phi(\beta,1-\beta)=\frac{1}{\beta^\beta(1-\beta)^{1-\beta}}. $$ \begin{lemma}\label{l12} Let $A=A(m,w)$ be an algebra defined by an integer $m\ge 2$ and a Sturmian or periodic word $w$ with slope $\alpha$. Let also $\beta=\frac{1}{m+\alpha}$. Then for any $\varepsilon>0$ there exist a constant $C$, positive integers $n_1< n_2<\ldots$ and partitions $\lambda^{(i)}\vdash n_i$ such that for some large enough $i_0$ the following properties hold: \begin{itemize} \item[1)] $\vert\Phi(\lambda^{(i)})-\Phi_0(\beta)-1\vert < \varepsilon\quad {\rm for~ all}\quad i\ge i_0$; \item[2)] $n_{i+1}-n_i < C$ for all $i\ge i_0$; \item[3)] $m_{\lambda^{(i)}}\ne 0$ in $\chi _{n_i}(A^\sharp)$ for all $i\ge i_0$. \end{itemize} \end{lemma} {\em Proof}. Note that $\beta<\frac{1}{2}$ since $\alpha >0$.
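The construction below relies repeatedly on Lemma \ref{l3}. As a quick numerical illustration (a sketch only, with a sample point $(z_1,z_2)=(0.6,0.4)$ chosen arbitrarily; it is not part of the argument), one can check that $\max_t\Phi(tz_1,tz_2,1-t)=\Phi(z_1,z_2)+1$, attained at $t=\frac{a}{a+1}$:

```python
import math

# Illustrative numerical check of Lemma 3 (a sketch, not part of the
# argument): for z = (z_1, z_2) with z_1 + z_2 = 1 and a = Phi(z_1, z_2),
# the maximum over 0 < t < 1 of Phi(t z_1, t z_2, 1 - t) equals a + 1,
# attained at t = a / (a + 1).

def phi(xs):
    # Phi(x_1, ..., x_d) = 1 / (x_1^{x_1} ... x_d^{x_d}), with 0^0 = 1
    return math.exp(-sum(x * math.log(x) for x in xs if x > 0))

z = (0.6, 0.4)                 # arbitrary sample point, z_1 + z_2 = 1
a = phi(z)

# maximum of Phi(t z_1, t z_2, 1 - t) over a fine grid of t-values
grid_max = max(phi((t * z[0], t * z[1], 1 - t))
               for t in (i / 10000 for i in range(1, 10000)))

t_star = a / (a + 1)           # the optimal t from Lemma 3
exact = phi((t_star * z[0], t_star * z[1], 1 - t_star))
print(grid_max, exact, a + 1)
```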
First take an arbitrary $r\ge 1$, $n=mr+w_1+\cdots+w_r$ and $\lambda=(\lambda_1,\lambda_2)$, where $\lambda_1=(m-1)r+w_1+\cdots+ w_r, \lambda_2=r$. We set $$ x_1=\frac{\lambda_1}{n}=\frac{m-1+\frac{w_1+\cdots+ w_r}{r}}{m+\frac{w_1+\cdots+ w_r}{r}}, $$ $$ x_2=\frac{\lambda_2}{n}=\frac{1}{m+\frac{w_1+\cdots+ w_r}{r}}. $$ As was mentioned in the proof of Lemma \ref{l9} (see also \cite{GMZAdvM}, Proposition 5.1 or \cite{Lotair}, Section 2.2) there exists a constant $C_1$ such that \begin{equation}\label{e11a} \vert\frac{w_1+\cdots+ w_r}{r}-\alpha\vert <\frac{C_1}{r}. \end{equation} Hence for any $\varepsilon_1>0$ we can find $r_0$ such that \begin{equation}\label{e12} \vert\Phi(\lambda)-\Phi_0(\beta)\vert < \varepsilon_1 \end{equation} for all $r\ge r_0$ since $\Phi(x_1,x_2)$ is a continuous function and $(x_1,x_2)\to (1-\beta,\beta)$ when $r\to\infty$. Now using Lemma \ref{l3} and Lemma \ref{l4}, given $\varepsilon_2>0$, we insert one extra row into $D_\lambda$, that is, we construct a partition $\mu=(\mu_1,\mu_2,\mu_3)$ of $n_0=nk$ such that \begin{equation}\label{e14} \vert\Phi(\lambda)-\Phi(\mu)+1\vert < \varepsilon_2. \end{equation} We have three options. Either $\mu_1$ is a new row (that is, $(\mu_2,\mu_3)= (q\lambda_1,q\lambda_2)$) or $\mu_2$ is a new row (that is, $(\mu_1,\mu_3)= (q\lambda_1,q\lambda_2)$) or $\mu_3$ is a new row (that is, $(\mu_1,\mu_2)= (q\lambda_1,q\lambda_2)$). First we exclude the third case. Suppose that $(\mu_1,\mu_2)=(q\lambda_1,q\lambda_2)$. Recall that by Lemma \ref{l3} the maximal value of $\Phi(tz_1,tz_2,1-t)$ is achieved if $$ t=\frac{\Phi(z_1,z_2)}{1+\Phi(z_1,z_2)}. $$ Since $\Phi(z_1,z_2)<2$ if $\beta<\frac{1}{2}$ we obtain that $1-t> \frac{1}{3}$. In the setting of Lemma \ref{l4} this means that the new row of $D_\mu$ cannot be the third row, that is, the case $(\mu_1,\mu_2)=(q\lambda_1,q\lambda_2)$ is impossible. Now let $(\mu_2,\mu_3)=(q\lambda_1,q\lambda_2)$. We replace $\mu$ by $\mu'$ in the following way.
We set $\mu_2'=qr(m-1)+w_1+\cdots+w_{qr}$ and take $\mu'=(\mu_1,\mu_2',\mu_3)$. Then $\mu'\vdash n'$ where $$ n'-n_0=\mu_2'-\mu_2=w_1+\cdots+w_{qr}-q(w_1+\cdots+w_{r}). $$ Using again inequality (\ref{e11a}) we get \begin{equation}\label{e15} |n'-n_0|< C_1(q+1). \end{equation} Inequality (\ref{e15}) also shows that $\mu_1\ge\mu_2'\ge\mu_3$ if $n$ is sufficiently large and our construction of the partition $\mu'$ is correct. Clearly, $\vert \Phi(\mu)-\Phi(\mu') \vert\to 0$ if $n\to\infty$ and \begin{equation}\label{e16} \vert \Phi(\mu)-\Phi(\mu') \vert <\varepsilon_3 \end{equation} for any fixed $\varepsilon_3 >0$ for all sufficiently large $r$ (and $n$). Starting from this sufficiently large $r$ we denote $n_r=n'+1$ and take $\lambda^{(r)}\vdash n_r$, $\lambda^{(r)}=(\mu_1,\mu_2',\mu_3,1)$. The preceding $n_1,\ldots,n_{r-1}$ and $\lambda^{(1)},\ldots,\lambda^{(r-1)}$ can be chosen in an arbitrary way. Since $\mu_3=qr$, by Lemma \ref{l11} the multiplicity $m_{\lambda^{(r)}}$ of the irreducible character $\chi_{\lambda^{(r)}}$ in $\chi_{n_r}(A^\sharp)$ is not equal to zero and $|n_r-kn|< C_2=C_1(q+1)+1$ by (\ref{e15}) since $n_0=nk$. It is not difficult to see that in this case \begin{equation}\label{e17} \vert \Phi(\mu')-\Phi(\lambda^{(r)}) \vert <\varepsilon_4 \end{equation} for any fixed $\varepsilon_4 >0$ if $r$ (and the corresponding $n$) is sufficiently large. Combining (\ref{e12}), (\ref{e14}), (\ref{e16}) and (\ref{e17}) we see that $\lambda^{(r)}$ satisfies conditions 1) and 3) of our lemma. Finally, consider the difference between $n_r$ and $n_{r+1}$ provided that all $n_{r+1}, n_{r+2},\ldots$ are constructed by the same procedure. That is, we take $$ \bar n = m(r+1)+w_1+\cdots+w_{r+1} $$ and obtain $n_{r+1}$ satisfying the same condition $$ \vert n_{r+1}- k\bar n \vert < C_2. $$ On the other hand, $k\bar n -kn=k(m+w_{r+1}) \le k(m+1)$ and $\vert kn-n_r\vert < C_2$. Hence we have $$ \vert n_{r+1} - n_r \vert < C=2C_2+k(m+1).
$$ This latter inequality completes the proof of our lemma if $(\mu_2,\mu_3)=(q\lambda_1,q\lambda_2)$. Arguments in the case $(\mu_1,\mu_3)=(q\lambda_1,q\lambda_2)$ are the same. $\Box$ \section{The main result} Now we are ready to prove the main result of the paper. \begin{theorem}\label{t1} Let $w=w_1w_2\ldots$ be a Sturmian or periodic word and let $A=A(m,w)$, $m\ge 2,$ be the algebra defined by $m$ and $w$ in (\ref{e01})--(\ref{e04}). If $A^\sharp$ is the algebra obtained from $A$ by adjoining an external unit then the PI-exponent of $A^\sharp$ exists and $$ exp(A^\sharp)=1+exp(A). $$ \end{theorem} {\em Proof.} Let $\alpha=\pi(w)$ be the slope of $w$ and let $\beta=\frac{1}{m+\alpha}$. Recall that $exp(A)=\Phi_0(\beta)$ where $$ \Phi_0(\beta)=\frac{1}{\beta^\beta(1-\beta)^{1-\beta}} $$ (\cite{GMZAdvM}). First we prove that for any $\delta>0$ there exists $N$ such that \begin{equation}\label{e18} \Phi(\lambda)<\Phi_0(\beta)+1+\delta \end{equation} as soon as $\lambda$ is a partition of $n\ge N$ with $m_\lambda\ne 0$ in $\chi_n(A^\sharp)$. By Lemma \ref{l9}, for any $\varepsilon>0$ there exists $n_0$ such that \begin{equation}\label{e23a} \frac{\lambda_3}{\lambda_1}<\frac{\beta}{1-\beta}+\varepsilon \end{equation} if $n\ge n_0, \lambda\vdash n$ and $m_\lambda\ne 0$. If $\lambda=(n)$ or $\lambda=(\lambda_1,\lambda_2)$ then by the hook formula for dimensions of irreducible $S_n$-representations it follows that $\deg \chi_\lambda\le 2^n$. Then by Lemma \ref{l1} $$ \Phi(\lambda)\le 2\sqrt[n]{n^6} $$ and (\ref{e18}) holds for all sufficiently large $n$ since $1\le\Phi_0(\beta)\le 2$. Let $\lambda=(\lambda_1,\lambda_2,\lambda_3)$. Denote $\mu=(\lambda_1,\lambda_3)\vdash n'$, where $n'=n-\lambda_2$. If $x_1=\frac{\lambda_1}{n'}, x_2=\frac{\lambda_3}{n'}$ then $$ \Phi(\mu)=\Phi(x_1,x_2)=\Phi_0(x_2) $$ and $$ x_2\le \varphi(\varepsilon)=\frac{\beta+(1-\beta)\varepsilon}{1+(1-\beta)\varepsilon} $$ as follows from (\ref{e23a}). 
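For the reader's convenience we make the last step explicit. Since $x_1+x_2=1$, inequality (\ref{e23a}) gives $$ \frac{x_2}{1-x_2}=\frac{\lambda_3}{\lambda_1}<\frac{\beta}{1-\beta}+\varepsilon, $$ and, since the function $t\mapsto\frac{t}{1+t}$ is increasing, $$ x_2<\frac{\frac{\beta}{1-\beta}+\varepsilon}{1+\frac{\beta}{1-\beta}+\varepsilon}= \frac{\beta+(1-\beta)\varepsilon}{1+(1-\beta)\varepsilon}=\varphi(\varepsilon). $$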
Since $$ \lim_{\varepsilon\to 0}\varphi(\varepsilon)=\beta $$ and $\Phi_0$ is continuous, there exist $N$ and $\varepsilon$ such that $\Phi(\mu)<\Phi_0(\beta)+\delta$ for all $n>N$. Then by Lemma \ref{l3} $$ \Phi(\lambda)\le \Phi(\mu)+1< \Phi_0(\beta)+1+\delta. $$ Now consider the case $\lambda=(\lambda_1,\lambda_2,\lambda_3,1)$. Excluding the second row of diagram $D_\lambda$ we get a partition $\mu=(\mu_1,\mu_2,1)=(\lambda_1,\lambda_3,1)$ of $n'=n-\lambda_2$ with $$ \frac{\mu_2}{\mu_1}<\frac{\beta}{1-\beta}+\varepsilon. $$ Consider also the partition $\mu'=(\mu_1,\mu_2)$ of $n'-1$. As before, given $\delta>0$, one can find $n_0$ such that $$ \Phi(\mu')<\Phi_0(\beta)+\frac{\delta}{2} $$ provided that $n\ge n_0$. Since $\Phi$ is continuous, for all sufficiently large $n$ (and $n'$) we have $$ \Phi(\mu) < \Phi_0(\beta)+\delta. $$ Applying again Lemma \ref{l3} we get (\ref{e18}). It now follows from (\ref{e18}), Lemmas \ref{l1} and \ref{l8} that $$ c_n(A^\sharp)=\sum_{\lambda\vdash n}m_\lambda\deg\chi_\lambda \le(\Phi_0(\beta)+1+\delta)^n l_n(A^\sharp) \le 4(m+1)(n+1)^{13} (\Phi_0(\beta)+1+\delta)^n. $$ Hence $$ \overline{exp}(A^\sharp)=\limsup_{n\to\infty}\sqrt[n]{c_n(A^\sharp)}\le\Phi_0(\beta)+\delta+1 $$ for any $\delta>0$; that is, \begin{equation}\label{e19} \overline{exp}(A^\sharp)\le \Phi_0(\beta)+1=exp(A)+1. \end{equation} Now we find a lower bound for codimensions. Since $$ c_n(A^\sharp)\ge\deg\chi_\lambda\ge \frac{\Phi(\lambda)^{n}}{n^{20}} $$ by Lemma \ref{l1} if $m_\lambda\ne 0$ in $\chi_n(A^\sharp)$, then by Lemma \ref{l12} for any $\varepsilon>0$ there exists a sequence $n_1<n_2<\ldots$ such that $$ c_{n_i}(A^\sharp)\ge \frac{1}{n_i^{20}}(\Phi_0(\beta)+1-\varepsilon)^{n_i},~i=1,2,\ldots $$ and $n_{i+1}-n_i<C=const$ for all $i\ge 1$. Note that the sequence $\{c_n(R)\}$ is non-decreasing for any unital algebra $R$. Since the gaps $n_{i+1}-n_i$ are bounded by $C$ and $\varepsilon>0$ is arbitrary, it follows that \begin{equation}\label{e20} \underline{exp}(A^\sharp)=\liminf_{n\to\infty}\sqrt[n]{c_n(A^\sharp)}\ge \Phi_0(\beta)+1. 
\end{equation} Now (\ref{e19}) and (\ref{e20}) complete the proof of the theorem. $\Box$ \begin{corollary}\label{c1} For any real number $\gamma\in [2,3]$ there exists an algebra $A$ with 1 such that $exp(A)=\gamma$. \end{corollary} $\Box$ As was mentioned in the preliminaries, PI-exponents of finite dimensional algebras form a dense subset in the interval $[1,2]$. Hence we get the following \begin{corollary}\label{c2} For any real numbers $\beta<\gamma$ in $[2,3]$ there exists a finite dimensional algebra $B$ with 1 such that $\beta\le exp(B)\le\gamma$. In particular, PI-exponents of finite dimensional unital algebras form a dense subset in the interval $[2,3]$. \end{corollary} $\Box$ \end{document}
\begin{document} \title{Non-symplectic automorphisms of order 3\\ on K3 surfaces} \author{Michela Artebani \and Alessandra Sarti} \date{\today} \begin{abstract} In this paper we study K3 surfaces with a non-symplectic automorphism of order $3$. In particular, we classify the topological structure of the fixed locus of such automorphisms and we show that it determines the action on cohomology. This allows us to describe the structure of the moduli space and to show that it has three irreducible components. \end{abstract} \subjclass[2000]{Primary 14J28; Secondary 14J50, 14J10} \keywords{non-symplectic automorphism, K3 surface, Eisenstein lattice} \maketitle \pagestyle{myheadings} \markboth{Michela Artebani and Alessandra Sarti}{K3 surfaces with a non-symplectic automorphism of order three } \setcounter{tocdepth}{1} \section*{Introduction} An automorphism on a K3 surface is called \emph{non-symplectic} if its natural representation on the vector space of holomorphic two-forms is not trivial. In \cite{N1} and \cite{S} it was proved that a purely non-symplectic group of automorphisms is finite and cyclic. All the orders of such groups have been determined in \cite{MO} by Machida and Oguiso. In particular, the maximal order is known to be $66$, while the maximal prime order is $19$. Non-symplectic involutions have been studied in \cite{N3} and \cite{Z2}. In \cite{OZ2} Oguiso and Zhang showed that the moduli space of K3 surfaces with a non-symplectic automorphism of order $13, 17$ or $ 19$ is zero dimensional. The case of order $11$ has been studied by the same authors in \cite{OZ1}. The fixed locus of a non-symplectic involution is known to be either empty or the disjoint union of smooth curves and it is completely described. Moreover, the action of the involution on the K3 lattice is well understood and only depends on the topology of the fixed locus. In this paper we intend to give similar results for a non-symplectic automorphism of order $3$ on a K3 surface $X$ i.e. 
$\sigma\in Aut(X)$ such that $$\sigma^3=id \hspace{0.5cm} \mbox{and}\hspace{0.5cm} \sigma^*(\omega_X)=\zeta \omega_X,$$ where $\omega_X$ is a generator for $H^{2,0}(X)$ and $\zeta$ is a primitive $3$-rd root of unity. We prove that the fixed locus of $\sigma$ is not empty and it is the union of $n\leq 9$ points and $k\leq 6$ disjoint smooth curves, where all possible values of $(n,k)$ are given in Table \ref{fixtable}. The automorphism $\sigma$ induces a non-trivial isometry $\sigma^*$ on the K3 lattice. The fixed sublattice for $\sigma^*$ is contained in the Picard lattice and its orthogonal complement is known to be a lattice over the Eisenstein integers. Here we prove that the isometry $\sigma^*$ only depends on the pair $(n,k)$. The fixed lattice and its orthogonal complement have been computed for any $(n,k)$ and are listed in Table \ref{lat}. The type $(n,k)$ gives a natural stratification of the moduli space of K3 surfaces with a non-symplectic automorphism of order $3$. As a consequence of the previous results, we prove that each stratum is birational to the quotient of a complex ball by the action of an arithmetic group. In particular, we prove that there are $3$ maximal strata of dimensions $9, 9, 6$, corresponding to $(n,k)=(0,1), (0,2), (3,0)$. The first two components also appear in \cite{K}, in particular the first one is known to be birational to the moduli space of curves of genus $4$.\\ \begin{figure} \caption{Fixed locus and fixed lattice} \label{fig} \end{figure} In the first section we recall some basic properties of non-symplectic automorphisms of order three and their relation with lattices over the Eisenstein integers. In section 2 we determine the structure of the fixed locus by generalizing a method in \cite{N3} and \cite{Ka}. More precisely, we determine algebraic relations between $n,k$ and two integers $m,a$ which identify the fixed lattice of $\sigma^*$. 
All possible values of these two pairs of invariants are given in Table \ref{fixtable} and represented in Figure \ref{fig} (the analogous diagram for non-symplectic involutions can be found in \S 2.3, \cite{AN}). In section 3 we prove that each such configuration occurs i.e. it can be realized by an order $3$ non-symplectic automorphism on a K3 surface. This is done by means of lattice theory, the Torelli theorem and the surjectivity theorem for the period map of K3 surfaces. The list of the corresponding invariant lattices and of their orthogonal complements is given in Table \ref{lat}. In section 4 we describe projective models for K3 surfaces with non-symplectic automorphisms of order $3$. In particular, we show that for $k>1$ all of them have a jacobian elliptic fibration such that the automorphism acts as the identity on the base. Other models are given as complete intersections in $\mathbb P^4$, quartic surfaces and double sextics. In the last section we describe the structure of the moduli space. The projective models given in section 4 show that the moduli space is irreducible for given $n,k$ or, equivalently, that the action of the automorphism on cohomology is determined by $n,k$. This result and lattice theory allow us to identify the irreducible components of the moduli space. \\ \emph{Acknowledgements.} We would like to thank Igor Dolgachev, Shigeyuki Kond\=o, Bert van Geemen and Jin-Gen Yang for several stimulating discussions. \section{Automorphisms and lattices} Let $X$ be a K3 surface i.e. a simply connected smooth complex projective surface with a nowhere vanishing holomorphic two-form $\omega_X$. An automorphism $\sigma$ of $X$ is called \emph{non-symplectic} if its action on the vector space $H^{2,0}(X)=\mathbb{C}\omega_X$ is not trivial. In this paper we are interested in non-symplectic automorphisms of order three i.e. 
$$ \sigma^3=id \hspace{0.5cm}\mbox{ and }\hspace{0.5cm} \sigma^*(\omega_X)=\zeta \omega_X,$$ where $\zeta$ is a primitive $3$-rd root of unity. The automorphism $\sigma$ induces an isometry $\sigma^*$ on $H^2(X,\mathbb{Z})$ which preserves the Picard lattice $N_X$ and the transcendental lattice $T_X$ of $X$ $$N_X=\{x\in H^2(X,\mathbb{Z}): (x,\omega_X)=0\}\hspace{1cm} T_X=N^{\perp}_X .$$ We will denote by $N(\sigma)$ the invariant lattice $$N(\sigma)=\{x\in H^2(X,\mathbb{Z}): \sigma^*(x)=x\}.$$ In order to describe the action of the automorphism on cohomology, we first need some preliminaries of lattice theory (see \cite{N2}). Recall that the \emph{discriminant group} of a lattice $L$ is the finite abelian group $A_L=L^*/L$, where $L^*=Hom(L,\mathbb{Z})$; for a primitive nondegenerate sublattice $L$ of a unimodular lattice one has $A_L\cong A_{L^{\perp}}$. \begin{defin} Let $\mathcal E=\mathbb{Z}[\zeta]$ be the ring of Eisenstein integers. A \emph{$\mathcal E$-lattice} is a pair $(L,\rho)$ where $L$ is an even lattice and $\rho$ an order three fixed point free isometry on $L$. If $\rho$ acts identically on $A_L$ then $L$ will be called \emph{$\mathcal E^*$-lattice}. \end{defin} \begin{rem}\label{herm} Any $\mathcal E$-lattice $L$ is clearly a module over $\mathcal E=\mathbb{Z}[\zeta]$ via the action $$(a+\zeta b)\cdot x=ax+b\rho(x),\hspace{0.7cm} a,b\in\mathbb{Z}, \ x\in L.$$ Since $\rho$ is fixed point free and $\mathcal E$ is a principal ideal domain, we have that $L$ is a free module over $\mathcal E$. Moreover, $L$ is equipped with a hermitian form $H:L\times L\rightarrow \mathcal E$ such that $2 Re\,H(x,y)=(x,y)$. This is defined as $$H(x,y)=\frac{1}{2}[(x,y)-\frac{\theta}{3}(x,\rho^2(y)-\rho(y))]$$ where $\theta=\zeta-\zeta^2$. \end{rem} \begin{lemma}\label{inv} \begin{enumerate}[$\bullet$]\ \\ \item Any $\mathcal E$-lattice has even rank. \item Any $\mathcal E^*$-lattice is {\it 3-elementary} i.e. its discriminant group is isomorphic to $ (\mathbb{Z}/3\mathbb{Z})^{\oplus a}$ for some $a$. 
\end{enumerate} \end{lemma} \proof The rank of a $\mathcal E$-lattice is equal to $2m$, where $m$ is its rank over $\mathcal E$. Since $\rho$ is fixed point free, $\rho^2+\rho+id=0$ on $L$. If moreover $L$ is a $\mathcal E^*$-lattice and $x\in A_{L}$, then $\rho(x)=x$, hence $\rho^2(x)+\rho(x)+x=3x=0$. Thus $A_{L}$ is a direct sum of copies of $\mathbb{Z}/3\mathbb{Z}$. \qed\\ Note that, according to Lemma \ref{inv}, to any $\mathcal E^*$-lattice $L$ we can associate the pair $m(L),a(L)$ where $2m(L)=\mathrm{rk}(L)$ and $a(L)$ is the minimal number of generators of $A_L$.\\ The following is a reformulation of \cite[Theorem 0.1]{N1} and \cite[Lemma 1.1]{MO} for order three automorphisms. \begin{theorem}\label{ns} Let $X$ be a K3 surface and $\sigma$ be a non-symplectic automorphism of order $3$ on $X$. Then: \begin{enumerate}[$\bullet$] \item $N(\sigma)$ is a primitive $3$-elementary sublattice of $N_X$, \item $(N(\sigma)^{\perp},\sigma^*)$ is a $\mathcal E^*$-lattice, \item $(T_X,\sigma^*)$ is a $\mathcal E$-sublattice of $N(\sigma)^{\perp}$. \end{enumerate} \end{theorem} \begin{exa}\label{ex} In what follows we will adopt the standard notation for lattices: $U$ is the hyperbolic lattice and $A_n, D_n, E_n$ are the negative definite lattices of rank $n$ associated to the corresponding root systems. The lattice $L(\alpha)$ is obtained by multiplying the form on $L$ by $\alpha$. \begin{enumerate}[$\bullet$] \item The lattices $U$, $E_8$, $A_2$, $E_6$, $U(3)$ are $3$-elementary lattices with $a=0,0,1,1,2$ respectively. \item If $L$ is a $3$-elementary lattice of rank $n$ with $a(L)=a$, then it can be proved that the scaled dual $L^*(3)$ is again a $3$-elementary lattice with $a(L^*(3))=n-a$. For example, the lattice $E_6^*(3)$ is $3$-elementary with $a=5$. \item The lattice $A_2$ is a $\mathcal E^*$-lattice with order $3$ isometry: $$e\mapsto f,\ f\mapsto -e-f,$$ where $e^2=f^2=-2$, $(e,f)=1$. 
\item The lattice $U\oplus U(3)$ is a $\mathcal E^*$-lattice with order three isometry: $$e_1\mapsto e_1-f_1,\ e_2\mapsto -2e_2-f_2,$$ $$f_1\mapsto -2f_1+3e_1,\ f_2\mapsto f_2+3e_2,$$ where $e_i^2={f_i}^2=0$, $(e_1,e_2)=1$, $(f_1,f_2)=3$. \item The lattice $U\oplus U$ has a structure of $\mathcal E^*$-lattice induced by the natural embedding $U\oplus U(3)\subset U\oplus U$. \item The lattices $E_6$, $E_8$ are $\mathcal E^*$-lattices, see \cite[\S 2.6, Ch.2]{CS} for a description of two order three isometries on them. \item The \emph{Coxeter-Todd lattice} $K_{12}$ is a negative definite $\mathcal E^*$-lattice of rank $12$ with $a=6$ (see \cite{CS1}). In fact this is a \emph{unimodular lattice} over $\mathcal E$ i.e. $Hom(K_{12},\mathcal E)=K_{12}$. In \cite[Theorem 3]{F} W. Feit proved that this is the only unimodular $\mathcal E$-lattice of dimension $<12$ containing no vectors of norm $1$. \end{enumerate} \end{exa} Hyperbolic $3$-elementary lattices have been classified by Nikulin \cite{N2} and Rudakov-Shafarevich \cite{RS} (note that in this paper there is a misprint in the last condition of the theorem). \begin{theorem}\label{RS} An even hyperbolic $3$-elementary lattice $L$ of rank $r>2$ is uniquely determined by the integer $a=a(L)$. Moreover, given $r$ and $a\leq r$ such a lattice exists if and only if the following conditions are satisfied $$ \begin{array}{ll} r\equiv 0\ (mod\, 2)&\\ r\equiv 2\ (mod\, 4)& \mbox{ for } a\equiv 0\ (mod\, 2)\\ (-1)^{r/2-1}\equiv 3\ (mod\, 4)& \mbox{ for } a\equiv 1\ (mod\, 2)\\ r>a>0& \mbox{ for } r\not \equiv 2\ (mod\, 8)\\ \end{array} $$ \end{theorem} For $r=2$ binary forms are classified (see \cite{Bu}, \cite{CS}); in this case the unique definite even $3$-elementary lattices are $A_2$ and $A_2(-1)$ (with $a=1$) and the only indefinite ones are $U$ and $U(3)$ (with $a=0$ and $a=2$). \section{The fixed locus} Let $\sigma$ be an order three non-symplectic automorphism of a K3 surface and $X^{\sigma}$ be its fixed locus. 
By \cite{Ca} the action of $\sigma$ can be locally linearized and diagonalized at $p\in X^{\sigma}$. Since $\sigma$ acts on $\omega_X$ as the multiplication by $\zeta$, there are two possible local actions for $\sigma$: $$ \left(\begin{array}{cc} \zeta^2&0\\ 0&\zeta^2 \end{array} \right)~~~\mbox{or}~~~\left(\begin{array}{cc} \zeta&0\\ 0&1 \end{array} \right) .$$ In the first case $p$ is an isolated fixed point and $\sigma$ acts as the identity on $\mathbb{P} T_p(X)$. In the second case $p$ belongs to a curve in the fixed locus (the line $x=0$). Hence \begin{lemma} The fixed locus of $\sigma$ is either empty or the disjoint union of $k$ smooth curves and $n$ points. \end{lemma} In this section we will relate the topological invariants $n, k$ with the lattice invariants $m,a$ defined in the previous section. Our approach generalizes a technique in \cite{Ka}. \begin{theorem}\label{fix} The fixed locus of $\sigma$ is not empty and it is the disjoint union of $n\leq 9$ points and $k\leq 6$ smooth curves with: \begin{enumerate}[a)] \item one curve of genus $g\geq 0$ and $k-1$ rational curves or \item $k=0$ and $n=3$. \end{enumerate} Moreover, if $\mathrm{rk}\, N(\sigma)^{\perp}=2m$, then $$m+n=10\hspace{0.5cm} \mbox{ and }\hspace{0.5cm} g=3+k-n\ \mbox{ (in\ case a))}.$$ \end{theorem} \proof Let $\sigma$, $X$, $X^{\sigma}$ and $N(\sigma)$ be as before. By the topological Lefschetz fixed point formula $$ \chi(X^{\sigma})=2+\mbox{Tr}(\sigma^*_{|H^2(X,\mathbb{Z})})=2+\mathrm{rk}(N(\sigma))+m(\zeta+\zeta^2) .$$ Since $\zeta+\zeta^2+1=0$ and $\mathrm{rk}(N(\sigma))=22-2m$ this gives \begin{equation}\label{eq1} \chi(X^{\sigma})=\sum_{i=1}^k \chi(C_i)+n= 3(8-m) ,\end{equation} where $C_i$ are the smooth curves in the fixed locus. Besides, the holomorphic Lefschetz formula (\cite[Theorem 4.6]{AS}) gives the equality \begin{equation}\label{eq2} 1+\mbox{Tr}(\sigma^*_{|H^{2,0}(X) })= \frac{1}{6\zeta}(\sum_{i=1}^k\chi(C_i)-2n)\Longleftrightarrow 2n-\sum_{i=1}^k \chi(C_i)=6. 
\end{equation} The equations \eqref{eq1} and \eqref{eq2} imply $m+n=10$, hence also $n\leq 9$. Case $b)$ immediately follows from (\ref{eq2}). By the Hodge index theorem the Picard lattice $N_X$ is hyperbolic. This implies that the fixed locus contains at most one curve $C$ of genus $g>1$ and that in this case the other curves in the fixed locus are rational (since they belong to the orthogonal complement of $C$). If the fixed locus of $\sigma$ contains at least two elliptic curves $C_1, C_2$, then they are linearly equivalent and $|C_1|:X\longrightarrow\mathbb P^ 1$ gives an elliptic fibration on $X$. The induced action of $\sigma$ on $\mathbb P^ 1$ is not trivial since otherwise $\sigma$ would act as the identity on the tangent space at a point in $C_1$. Hence $\sigma$ has exactly $2$ fixed points in $\mathbb P^ 1$, corresponding to the fibers $C_1,C_2$ and there are no other fixed curves or points in the fixed locus. This contradicts equality (\ref{eq2}), hence the fixed locus contains at most one elliptic curve. This completes the proof of $a)$. As a consequence, if $g$ is the biggest genus of a curve in the fixed locus, (\ref{eq2}) can be written as $g=3+k-n$. The inequality $k\leq 6$ follows from (\ref{eq2}) and the Smith inequality (see \cite{Br}) $$\sum_{i\geq 0} \dim H^i(X^{\sigma},\mathbb{Z}_3)\leq \sum_{i\geq 0} \dim H^i(X,\mathbb{Z}_3).$$\qed \begin{cor} An order $3$ automorphism on a K3 surface is symplectic if and only if its fixed locus is given by $6$ points. \end{cor} \proof An order $3$ symplectic automorphism on a K3 surface has exactly 6 fixed points by \cite[\S 5]{N1}. The converse follows from Theorem \ref{fix}. \qed\\ Let $a=a(N(\sigma))$, then a more refined application of Smith exact sequences gives \begin{pro}\label{a} $\dim H_{*}(X)-\dim H_{*}(X^{\sigma})=2a+m.$ \end{pro} \proof In what follows we will write $\sigma$ for $\sigma^*$. Let $g=id+\sigma+\sigma^{2}$ and $h=id-\sigma$ on $H^2(X,\mathbb{Z})$. 
If $L_+=ker(h)$ and $L_-=ker(g)$, then $$3^a=o(H^2(X,\mathbb{Z})/(L_+\oplus L_-)).$$ Since $h^2=id-2\sigma+\sigma^2=g$ over $\mathbb{Z}_3$, the image of $L_+\oplus L_-$ under the homomorphism $$c:H^2(X,\mathbb{Z})\longrightarrow H^2(X,\mathbb{Z}_3)$$ coincides with $c(L_-)$. Hence $$a=\dim H^2(X,\mathbb{Z}_3)-\dim c(L_-).$$ We now express this number in a different way by applying Smith exact sequences over $\mathbb{Z}_3$ (see Ch.III, \cite{Br}). In the rest of the proof the coefficients will be in $\mathbb{Z}_3$. Let $C(X)$ be the chain complex of $X$ with coefficients in $\mathbb{Z}_3$. The operators $g,h$ act on $C(X)$ and give chain subcomplexes $g C(X)$ and $h C(X)$. We denote by $H_i^{g}(X),\ H_i^{h}(X)$ the associated homology groups with coefficients in $\mathbb{Z}_3$ as in \cite[Definition 3.2]{Br} and by $\mathcal X^g(X), \mathcal X^h(X)$ the corresponding Euler characteristics. By \cite[Proposition 3.4]{Br} we have $$H_i^g(X)\cong H_i(S,X^{\sigma}), $$ where the second term is the homology of the pair $(S,X^{\sigma})$, where $S=X/\sigma$ and $X^{\sigma}$ is identified with its image in $S$. Let $\rho=h^i$ and $\bar \rho=h^{3-i},$ with $i=1,2$. Then we have the exact triangles (\cite[Theorem 3.3 and (3.8)]{Br}) $$\xymatrix{ (3)\label{t1}\ \ & &H(X) \ar[dl]_{\rho_*} & \\ &H^{\rho}(X)\ar[rr]_{}& &H^{\bar\rho}(X)\oplus H(X^{\sigma})\ar[ul]_{i_*}\\ (4)\label{t2}\ \ & &H^{h}(X) \ar[dl]_{h_*} & \\ &H^{g}(X)\ar[rr]_{}& &H^{g}(X) \ar[ul]_{i_*}} $$ where $h_*, i_*, \rho_*$ have degree $0$ and the horizontal arrows have degree $-1$. 
The two triangles $(3)$ and $(4)$ are called Smith sequences; in particular they induce two exact sequences $$0 \rightarrow H_3^{g}(X)\stackrel{\gamma_3}{\rightarrow} H_2^{h}(X)\oplus H_2(X^{\sigma})\stackrel{\alpha_2}{\rightarrow} H_2(X)\stackrel{\beta_2}{\rightarrow} H^{g}_2(X)\stackrel{\gamma_2}{\rightarrow} H^{h}_1(X)\oplus H_1(X^{\sigma})\rightarrow 0 $$ $$0\rightarrow H_3^{h}(X)\stackrel{\gamma'_3}{\rightarrow} H_2^{g}(X)\oplus H_2(X^{\sigma})\stackrel{\alpha'_2}{\rightarrow} H_2(X)\stackrel{\beta'_2}{\rightarrow} H^{h}_2(X)\stackrel{\gamma'_2}{\rightarrow} H^{g}_1(X)\oplus H_1(X^{\sigma})\rightarrow 0.$$ From exact sequence $(3)$ with $\rho=\sigma$ and sequence $(4)$ the equalities of Euler characteristics follow (\cite[Theorem 4.3]{Br}): $$\mathcal X(X)-\mathcal X(X^{\sigma})=\mathcal X^g(X)+\mathcal X^h(X)=3\mathcal X^g(X).$$ Then, from the exactness of Smith sequences, Lemma \ref{chi} and Lemma \ref{im} below we have $$\begin{array}{ccl}\dim H_{*}(X)-\dim H_{*}(X^{\sigma})&=&\mathcal X^g(X)+\mathcal X^h(X)-2\dim H_1(X^{\sigma})\\ &=&\dim Im(\beta_2)+\dim Im(\beta'_2)\\ &=&2a+ m.\end{array}$$ \qed \begin{lemma}\label{chi} $\mathcal X^g(X)+\mathcal X^h(X)=\sum_{i=1}^2 (-1)^i (\dim H^g_i(X)+\dim H^h_i(X)).$ \end{lemma} \proof From the exact sequence for the pair $(S,X^{\sigma})$ and sequences $(3)$, $(4)$ it follows $$\dim H^g_0(X)=\dim H^h_0(X)=0,$$ $$ \dim H^g_3(X)=\dim H^h_3(X)=\dim H^g_4(X)=\dim H^h_4(X).$$ This immediately implies the statement. \qed \begin{lemma}\label{im} $Im(\alpha_2)=c(L_-)$, $\dim Im(\alpha_2)-\dim Im(\alpha'_2)=m$. \end{lemma} \proof By definition of the Smith exact sequence $Im(\alpha_2)\subset c(L_-)$. Conversely, if $x\in c(L_-)$ then $\alpha'_2(\beta_2(x)\oplus 0)=g(x)=0$, hence $(\beta_2(x)\oplus 0)\in ker(\alpha'_2)=Im(\gamma'_3)$. By definition the projection of $\gamma'_3$ on the second factor is the boundary homomorphism of the sequence of the pair $(S,X^{\sigma})$ and this is injective since $H_3(S)=0$. 
It follows that $\beta_2(x)=0$ i.e. $x\in ker(\beta_2)=Im(\alpha_2)$. By the two exact sequences above and the homology vanishing in Lemma \ref{chi} it follows that $$\dim Im(\alpha_2)-\dim Im(\alpha'_2)=\dim H_2^h(X)-\dim H_2^g(X)$$ (in fact $\dim H_1^g(X)=\dim H_1^h(X)$). Because of sequence $(4)$ and Theorem \ref{fix} this also equals $\mathcal X^g(X)=m$. \qed \begin{cor}\label{amg} If $k=0$ then $m=a$, otherwise $2g=m-a$. \end{cor} \proof It follows from equation (\ref{eq1}) in the proof of Theorem \ref{fix} and from Proposition \ref{a} since $$\dim H_{*}(X)=\mathcal X(X)=24,$$ $$\dim H_{*}(X^{\sigma})-\mathcal X(X^{\sigma})=2h^1(X^{\sigma})=4g.$$ \qed\\ Note that $g$, $m$ and $a$ are functions of $n,k$: $$\begin{array}{l} g(n,k)=3+k-n\\ m(n,k)=10-n\\ a(n,k)=n+4-2k. \end{array}$$ \begin{theorem}\label{fixt} The fixed locus of $\sigma$ contains $n$ points and $k$ curves where $n,k$ are in the same row of Table \ref{fixtable}. \begin{table} \begin{tabular}{c|c|c|c|c} $n$& $k$& $g(n,k)$ & $m(n,k)$ & $a(n,k)$\\ \hline $0$ &$1,2$& $4,5$ & $10$ & $2,0 $\\ \hline $1$ & $1,2$& $3,4$ & $9$ & $3,1 $\\ \hline $2$& $1,2$& $2,3$& $8$ & $4,2$\\ \hline $3$& $0,1,2,3$& $ \emptyset,1,2,3$& $7$ & $7,5,3,1$\\ \hline $4$& $1,2,3,4$& $0,1,2,3$& $6$ & $6,4,2,0$\\ \hline $5$& $2,3,4$& $0,1,2 $& $5$ & $5,3,1$\\ \hline $6$& $3,4$& $0,1$ & $4$ & $4,2 $\\ \hline $7$& $4,5$& $0,1$ & $3$ & $3,1$\\ \hline $8$ & $5,6$& $0,1$ & $2$ & $2,0$\\ \hline $9$ & $6$& $0$ & $1$ & $1$ \end{tabular}\\ \ \\ \ \\ \caption{Fixed locus and fixed lattice}\label{fixtable} \end{table} \end{theorem} \proof By Theorem \ref{RS} and Corollary \ref{amg} we get $$a\leq \min (m, 22-2m),\ \ a=0 \mbox{ or }a=22-2m \Longrightarrow m\equiv 2\ (mod\, 4).$$ Then the result follows from Theorem \ref{fix} and Corollary \ref{amg}. 
\qed \section{Existence} Let $(T,\rho)$ be an $\mathcal E^*$-lattice of signature $(2,n-2).$ Assume that $T$ has a primitive embedding in the K3 lattice $L_{K3}$ and let $N$ be its orthogonal complement in $L_{K3}$. In $L_{K3}\otimes \mathbb{C}$ consider the period domain $$B_{\rho}=\{\omega\in \mathbb P(T\otimes \mathbb{C}): (\omega,\bar \omega)>0,\rho(\omega)=\zeta \omega\}.$$ If $\omega\in B_{\rho}$ is a generic point, then by the surjectivity of the period map (\cite{Ku}, \cite{PP}) there exists a K3 surface $X=X_{\omega}$ with a marking $\phi: H^2(X,\mathbb{Z})\rightarrow L_{K3}$ such that $\mathbb P(\phi_{\mathbb{C}}(\omega_X))=\omega$. \begin{pro}\label{existence} \begin{table} \begin{tabular}{c|c|c|c} $n$ & $k$& $T(n,k)$&$N(n,k)$\\ \hline $0$ &$1$& $U\oplus U(3)\oplus E_8\oplus E_8$& $U(3)$\\ & $2$& $U\oplus U\oplus E_8\oplus E_8$& U\\ \hline $1$ &$1$& $U\oplus U(3)\oplus E_6\oplus E_8$& $U(3)\oplus A_2$\\ & $2$& $U\oplus U\oplus E_6\oplus E_8$& $U\oplus A_2$\\ \hline $2$ &$1$& $U\oplus U(3)\oplus E_6\oplus E_6$ &$U(3)\oplus A_2^2$\\ &$2$& $U\oplus U\oplus E_6\oplus E_6$&$U\oplus A_2^2$\\ \hline $3$ &$0$& $U\oplus U(3)\oplus A_2^5$& $U(3)\oplus E_6^*(3)$\\ & $1$& $U\oplus U\oplus A_2^5$ & $U(3)\oplus A_2^3$\\ &$ 2$& $U\oplus U(3)\oplus A_2\oplus E_8$& $U\oplus A_2^3$\\ &$3$& $U\oplus U\oplus A_2\oplus E_8$& $U\oplus E_6$\\ \hline $4$ &$1$& $U\oplus U(3)\oplus A_2^4$& $U(3)\oplus A_2^4$\\ &$2$& $U\oplus U\oplus A_2^4$&$U\oplus A_2^4$\\ &$3$& $U\oplus U(3)\oplus E_8$& $U\oplus E_6\oplus A_2$\\ &$4$& $U\oplus U\oplus E_8$&$U\oplus E_8$\\ \hline $5$ &$2$& $U\oplus U(3)\oplus A_2^3$&$U\oplus A_2^5$\\ &$3$& $U\oplus U(3)\oplus E_6$&$U\oplus A_2^2\oplus E_6$\\ &$4$& $U\oplus U\oplus E_6$&$U\oplus E_8\oplus A_2$\\ \hline $6$ &$3$& $U\oplus U(3)\oplus A_2^2$&$U\oplus E_6\oplus A_2^3$\\ &$4$& $U\oplus U\oplus A_2^2$&$U\oplus E_6^2$\\ \hline $7$ &$4$& $U\oplus U(3)\oplus A_2$&$U\oplus E_6\oplus E_6\oplus A_2$\\ &$5$& $U\oplus U\oplus A_2$&$U\oplus E_6\oplus E_8$\\ 
\hline $8$ &$5$& $U\oplus U(3)$&$U\oplus E_6\oplus E_8\oplus A_2$\\ &$6$& $U\oplus U$&$U\oplus E_8\oplus E_8$\\ \hline $9$ &$6$& $A_2(-1)$&$U\oplus E_8\oplus E_8\oplus A_2$\\ \end{tabular}\\ \ \\ \ \\ \caption{The lattices $T(n,k),\ N(n,k)$.}\label{lat} \end{table} The K3 surface $X$ admits an order three non-symplectic automorphism $\sigma$ such that $\sigma^*=\rho$ on $T$ up to conjugacy and $$N(\sigma)=N_X\cong N,\ N(\sigma)^{\perp}=T_X\cong T.$$ \end{pro} \proof We consider the isometry on $N\oplus T$ defined by $(x,y)\mapsto(x,\rho(y))$. Since $\rho$ acts as the identity on $A_T$, this isometry extends to an isometry $\bar\rho$ of $L_{K3}$. If $\omega\in B_{\rho}$, then $\bar\rho(\omega)=\zeta \omega$ and, if $\omega$ is generic, we can assume that there are no roots in $T\cap \omega^{\perp}$. Then the statement follows from \cite[Theorem 3.10]{Na}. \qed \begin{pro}\label{transc} For any $n,k$ in the same row of Table \ref{fixtable} there exists a unique $3$-elementary lattice $T(n,k)$ of signature $(2,2m(n,k)-2)$ with $a=a(n,k)$. This lattice is a $\mathcal E^*$-lattice and has a unique primitive embedding in $L_{K3}$.\\ The lattice $T(n,k)$ and its orthogonal complement $N(n,k)$ in $L_{K3}$ are given in Table \ref{lat}. \end{pro} \proof It can be easily checked that any pair $(m,a)$ in Table \ref{fixtable} is realized by one of the lattices in Table \ref{lat}. Any lattice $T$ in Table \ref{lat} is a direct sum of $\mathcal E^*$-lattices in Example \ref{ex}, hence it is also a $\mathcal E^*$-lattice (by taking the direct sum of the isometries on each factor). By \cite[Theorem 1.12.2]{N2} there exists a primitive embedding of $T$ in $L_{K3}$ if and only if there exists a hyperbolic $3$-elementary lattice $N$ of rank $22-2m(n,k)$ with $a=a(n,k)$. This follows from Theorem \ref{RS}. Moreover, by \cite[Corollary 1.12.3]{N2}, the embedding of $N$ in $L_{K3}$ is unique, hence the embedding of $T$ in $L_{K3}$ is also unique. 
\qed \begin{theorem}\label{clas} For any $n,k$ in the same row of Table \ref{fixtable} there exists a K3 surface $X$ with a non-symplectic automorphism $\sigma$ of order three such that \begin{enumerate}[$\bullet$] \item $X^{\sigma}$ contains $n$ points, $k$ curves and the biggest genus of a fixed curve is $g(n,k)$, \item $N_X=N(\sigma)\cong N(n,k)$ and $T_X\cong T(n,k)$. \end{enumerate} \end{theorem} \proof It follows from Proposition \ref{existence}, Proposition \ref{transc} and Theorem \ref{fix}. \qed\\ \noindent In what follows we will call $X_{n,k}$ and $\sigma_{n,k}$ a K3 surface and an automorphism as in Theorem \ref{clas}. \section{Examples and projective models} We start by studying elliptic fibrations (see \cite{M} for basic definitions and properties) on K3 surfaces with a non-symplectic automorphism of order $3$. The possible Kodaira types of singular fibers and the action of the automorphism on them are described by the following result in \cite{Z3}. \begin{lemma}\label{ell} Let $X$ be a K3 surface with a non-symplectic automorphism $\sigma$ of order three and $f:X\rightarrow \mathbb P^ 1$ be an elliptic fibration. If $F$ is a singular fiber of $f$ containing at least one $\sigma$-fixed curve, then it is one of the following Kodaira types: \begin{enumerate}[$\bullet$] \item \textbf{$IV$:} $F=F_1+F_2+F_3$ and $F_1$ is the only fixed curve in $F$. \item \textbf{$I_n$} with \textbf{$n=3,6,9,12,15,18$:} $F=F_1+\cdots +F_n$ where $F_i\cdot F_{i+1}=F_n\cdot F_1=1$ and the $\sigma$-fixed curves in $F$ are $F_1,F_4,\dots,F_{n-2}$. \item \textbf{$I^*_{n-5}$} with \textbf{$n=5,8,11,14,17$:} $$F=F_1+F_2+2(F_3+\cdots + F_{n-2})+F_{n-1}+F_n$$ where $F_1\cdot F_3=F_i\cdot F_{i+1}=F_{n-2}\cdot F_n=1$, $2\leq i\leq n-2$ and the $\sigma$-fixed curves are $F_3, F_6,F_9, \dots, F_{n-2}$. \item \textbf{$IV^*$:} $F=3F_1+2F_2+F_3+2F_4+F_5+2F_6+F_7$ and $F_1$ is the only $\sigma$-fixed curve in $F$. 
\item \textbf{$III^*$:} $F=4F_1+2F_2+3F_3+2F_4+F_5+3F_6+2F_7+F_8$ and $F_1, F_5, F_8$ are the only $\sigma$-fixed curves in $F$. \item \textbf{$II^*$:} $F=6F_1+3F_2+4F_3+2F_4+5F_5+4F_6+3F_7+2F_8+F_9$ where $F_1, F_7$ are the only $\sigma$-fixed curves in $F$. \end{enumerate} \end{lemma} We now show that any $X_{n,k}$ admits an invariant elliptic fibration if the fixed locus contains more than one curve. \begin{pro}\label{k>1} If $k>1$ then the K3 surface $X_{n,k}$ admits a jacobian elliptic fibration with Weierstrass model $$y^2=x^3+ p_{12}(t)$$ where $p_{12}(t)$ has exactly the following multiple roots: \begin{enumerate}[$\bullet$] \item $n$ double roots if $k=2$, \item one $4$-uple root and $n-3$ double roots if $k=3$, \item one $5$-uple root and $n-4$ double roots if $k=4$, \item one $5$-uple root, one $4$-uple root and $n-7$ double roots for $k=5$, \item two $5$-uple roots and $n-8$ double roots for $k=6$. \end{enumerate} In these coordinates the non-symplectic automorphism $\sigma_{n,k}$ is $$(x,y,t)\mapsto (\zeta x,y,t).$$ Conversely, for any $k>1$, a jacobian fibration with the above properties is a K3 surface and the automorphism $\sigma_{n,k}$ has fixed locus of type $(n,k)$. \end{pro} \proof If $k=2$ then by Proposition \ref{transc} the Picard lattice of $X=X_{n,k}$ is isomorphic to $U\oplus A_2^{n}$. Hence $X$ has a jacobian elliptic fibration $f:X\rightarrow \mathbb P^1$ with $n$ reducible fibers with dual graph $\tilde A_2$. The isometry $\sigma^*$ acts as the identity on $N_X$, thus it preserves $f$ and any of its reducible fibers. Since either there are $n>2$ reducible fibers or $g(n,k)>1$, $\sigma$ acts as the identity on $\mathbb P^1$ i.e. any fiber of $f$ is $\sigma$-invariant and the section is in the fixed locus of $\sigma$. In particular, any fiber of $f$ has an automorphism of order $3$ with a fixed point. 
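Recall that an elliptic curve admitting an automorphism of order $3$ with a fixed point has $j$-invariant $0$: taking the fixed point as origin, one obtains an order $3$ element of the automorphism group of the pointed curve, and such an element exists only for the curve $y^2=x^3+b$, on which it acts by $(x,y)\mapsto (\zeta x,y)$.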
The Weierstrass model of $f$ can be given by an equation of type $$y^2=x^3+a(t)x+b(t).$$ By the previous remark the functional invariant $j(t)=a(t)^3/\Delta(t)$ is equal to zero, i.e. $a(t)\equiv 0$, and the action of $\sigma$ is $(x,y,t)\mapsto (\zeta x,y,t)$. Moreover, the reducible fibers of $f$ are of Kodaira type $IV$ and there are $12-2n$ other singular fibers of type $II$. Up to a change of coordinates in $\mathbb P^ 1$, we can assume that all singular fibers are in $\mathbb P^ 1\setminus\{\infty\}$, i.e. that $\deg(b(t))=12$. Note that simple roots of $b(t)$ give type $II$ fibers and double roots give type $IV$ fibers (see \cite{M} or \cite{Ko}). This gives the result. For $k>2$ we can still see from $N(n,k)$ in Table \ref{fixtable} that $X$ has a jacobian elliptic fibration with fibers of type $\tilde A_2$, $IV^*$ and $II^*$. By the previous arguments and Lemma \ref{ell} we find that their j-invariants are zero and we can write the Weierstrass equations as before, recalling that $4$-uple roots of $b(t)$ give type $IV^*$ fibers and $5$-uple roots give type $II^*$ fibers. The last statement follows easily from Lemma \ref{ell}. \qed\\ \noindent If $g(n,k)>0$, we will denote by $C_{n,k}$ the curve in the fixed locus of $\sigma_{n,k}$ with such genus. \begin{cor}\label{hyp} If $k>1$ then $C_{n,k}$ is hyperelliptic. \end{cor} \proof If $k>1$ then, by Proposition \ref{k>1}, $X_{n,k}$ has a jacobian elliptic fibration $f$ and the section is in the fixed locus. Since $\sigma_{n,k}$ fixes $3$ points on each fiber of the elliptic fibration, $C_{n,k}$ is a double section and $f_{|C_{n,k}}$ is a degree two cover of $\mathbb P^ 1$. \qed\\ \noindent Let $\phi_{n,k}: X_{n,k}\longrightarrow \mathbb P^ {g(n,k)} \cong |C_{n,k}|^{\vee} $ be the morphism associated to $|C_{n,k}|$.
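The multiplicity patterns in Proposition \ref{k>1} can be sanity-checked against two standard facts: the Euler numbers of the fibers of $y^2=x^3+b(t)$ must add up to $e(\mathrm{K3})=24$, and for $k=2$ the zero section and fiber components must span a lattice of the same rank as $U\oplus A_2^n$. The following sketch does this in Python; the sample values of $n$ are hypothetical, chosen only so that the multiplicities respect $\deg b(t)=12$, and are not taken from Table \ref{fixtable}.

```python
# Kodaira data for the fiber over a root of b(t) of multiplicity m in
# y^2 = x^3 + b(t): (fiber type, Euler number, number of components).
FIBER = {1: ("II", 2, 1), 2: ("IV", 4, 3), 4: ("IV*", 8, 7), 5: ("II*", 10, 9)}

def euler_number(mults):
    """Sum of fiber Euler numbers for deg b = 12 with the given multiple roots."""
    simple = 12 - sum(mults)
    assert simple >= 0, "multiplicities exceed deg b(t) = 12"
    return sum(FIBER[m][1] for m in mults) + 2 * simple

def trivial_lattice_rank(mults):
    """Rank of the lattice spanned by the zero section, a fiber and the
    fiber components not meeting the section."""
    return 2 + sum(FIBER[m][2] - 1 for m in mults)

# One sample multiplicity pattern per value of k (n values hypothetical):
patterns = {2: [2, 2, 2], 3: [4, 2], 4: [5, 2], 5: [5, 4, 2], 6: [5, 5, 2]}
for k, mults in patterns.items():
    assert euler_number(mults) == 24   # Euler characteristic of a K3 surface

# For k = 2 with n double roots the trivial lattice already has rank 2 + 2n,
# the rank of the Picard lattice U + A_2^n used in the proof:
n = 3
assert trivial_lattice_rank([2] * n) == 2 + 2 * n
```

The Euler count also explains why the remaining $12-\sum m_i$ simple roots of $b(t)$ must all give type $II$ fibers.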
\begin{lemma}\label{proj} There is a projective transformation $\tilde \sigma_{n,k}$ of $\mathbb P^ g$ which preserves $Im(\phi_{n,k})$ and such that $\phi_{n,k} \circ \sigma_{n,k}=\tilde \sigma_{n,k} \circ \phi_{n,k}$. For a suitable choice of coordinates $$\tilde \sigma_{n,k}(x_0,\dots,x_{g-1},x_g)=(x_0,\dots,x_{g-1},\zeta x_g).$$ \end{lemma} \proof Since $\sigma_{n,k}$ fixes $C_{n,k}$, it preserves $|C_{n,k}|$, hence it induces a projectivity $\tilde \sigma_{n,k}$ of $|C_{n,k}|^{\vee}$ which fixes pointwise the hyperplane $H$ such that $\phi_{n,k}^{-1}(H)=C_{n,k}$. If we choose coordinates such that $H=\{x_g=0\}$, then $\tilde \sigma_{n,k}$ is of the above form. \qed \begin{rem} If $g(n,k)=1$ then $\phi_{n,k}$ is an elliptic fibration. The automorphism $\tilde\sigma_{n,k}$ has exactly two fixed points $p,q\in \mathbb P^ 1$ such that $\phi_{n,k}^{-1}(p)=C_{n,k}$ and $\phi_{n,k}^{-1}(q)$ is a reducible fiber. By \cite{D} this is of Kodaira type $I_0$ if $n=3$ and $I^*_{3(n-4)}$ for $n=4,\dots,8.$ \end{rem} \begin{rem} The curve $C_{0,2}$ is a hyperelliptic curve of genus $5$ and by \cite{SD} $\phi_{0,2}:X_{0,2}\rightarrow \mathbb P^ 5$ is a degree two morphism onto a cone over a rational normal quartic, branched along a cubic section $B$ and the vertex of the cone. Since the branch curve is invariant under the action of $\tilde \sigma_{0,2}$, $B$ has an equation of the form $$F_3(x_0,\dots,x_4)+bx_5^3$$ where $b$ is non-zero and $(0,\dots,0,1)$ is the vertex of the cone. Conversely, the double cover of the cone branched along a generic section with an equation of this form and the vertex is a K3 surface with an automorphism $\sigma_{0,2}$. The surface $X_{0,2}$ can also be obtained as the triple cover of the Hirzebruch surface $\mathbb F_6$ branched along the disjoint union of the exceptional curve $e$ with $e^2=-6$ and a curve in $\mid 12f+2e \mid$, where $f$ is the class of a fiber.
\end{rem} In what follows $F_i, G_i$ will denote homogeneous polynomials of degree $i$ and $b,c,d$ non-zero complex numbers. \begin{pro}\label{ic} If $(n,k)=(0,1), (3,0), (3,1)$ then $X_{n,k}$ is isomorphic to the complete intersection of a quadric and a cubic in $\mathbb P^ 4$ with equations of the form $$ \begin{array}{l} \bullet \hspace{0.7cm} X_{0,1}:\ \left\{ \begin{array}{ll} F_2(x_0,\dots,x_3)=0\\ F_3(x_0,\dots,x_3)+bx_4^3=0 \end{array}\right. \\ \\ \hspace{0.9cm} \sigma_{0,1}(x_0,\dots,x_3,x_4)=(x_0,\dots,x_3,\zeta x_4) \\ \ \\ \bullet \hspace{0.7cm} X_{3,0}:\ \left\{ \begin{array}{ll} F_2(x_0,x_1)+b x_2 x_3+c x_2 x_4=0 \\ F_3(x_0,x_1)+d x_2^3+G_3(x_3,x_4)+x_2F_1(x_0,x_1)G_1(x_3,x_4)=0 \end{array}\right. \\ \ \\ \hspace{0.9cm} \sigma_{3,0}(x_0,x_1,x_2,x_3,x_4)=(x_0,x_1,\zeta^2x_2,\zeta x_3, \zeta x_4).\\ \ \\ \bullet \hspace{0.7cm} X_{3,1}:\ \left\{\begin{array}{ll} x_3F_1(x_0,x_1,x_2)+x_4G_1(x_0,x_1,x_2)=0\\ F_3(x_0,x_1,x_2)+G_3(x_3,x_4)=0 \end{array}\right.\\ \ \\ \hspace{0.9cm} \sigma_{3,1}(x_0,x_1,x_2,x_3,x_4)=(x_0,x_1,x_2,\zeta x_3,\zeta x_4). \end{array}$$ Conversely, for generic $F_i, G_j, b, c, d$, the above equations define K3 surfaces and the fixed locus of $\sigma_{n,k}$ is of type $(n,k)$. \end{pro} \proof The fixed locus of $\sigma_{0,1}$ only contains the curve $C_{0,1}$ of genus $4$. Assume that $E$ is an elliptic curve on $X_{0,1}$ intersecting $C_{0,1}$; then $\sigma_{0,1}$ preserves $E$ and has exactly $3$ fixed points on it by the Riemann-Hurwitz formula, thus $(C_{0,1},E)=3$. By \cite[Theorem 5.2]{SD} this implies that the morphism $\phi_{0,1}$ is an embedding of $X_{0,1}$ in $\mathbb P^ 4$ as the complete intersection of a quadric and a cubic hypersurface. Since both hypersurfaces are $\tilde \sigma_{0,1}$-invariant, they have equations of the above form and $\sigma_{0,1}=\tilde \sigma_{0,1}$. According to Proposition \ref{transc}, the Picard lattice of $X_{3,0}$ is isomorphic to $U(3)\oplus E_6^*(3)$.
Let $e,f$ be the standard basis of $U(3)$ and $h=e+f$. The morphism associated to $h$ is an embedding as the complete intersection of a quadric and a cubic hypersurface in $\mathbb P^ 4$. Moreover, $\sigma_{3,0}$ induces a projectivity $\tilde \sigma$ of $\mathbb P^ 4$. Since $\sigma_{3,0}$ has only isolated fixed points, $\tilde\sigma$ has no pointwise fixed hyperplanes or planes. Hence, up to a choice of coordinates, we can assume that $\sigma_{3,0}$ and the two hypersurfaces have equations of the above form. According to Proposition \ref{transc}, the Picard lattice of $X_{3,1}$ is isomorphic to $U(3)\oplus A_2^3$. Let $e,f$ be the basis of $U(3)$ as before and $e_i, e'_i$, $i=1,2,3$ be the standard basis of $A_2^3$. The class $h=2e+f-\sum_{i=1}^3 (e_i+e'_i)$ gives an embedding as the complete intersection of a quadric and a cubic hypersurface. The equations of $X_{3,1}$ and $\sigma_{3,1}$ can be determined as before. The last statement follows from easy computations. \qed \begin{rem}\label{g4} It is clear from the equations in Proposition \ref{ic} that the surface $X_{0,1}$ is the triple cover of a quadric branched along a curve of genus $4$. The converse is also true, as proved by Kond\=o in \cite{K}. In fact \cite[Theorem 1]{K} states that the moduli space of K3 surfaces $X_{0,1}$ is birational to the coarse moduli space of curves of genus $4$.
\end{rem} \begin{pro}\label{qu} If $(n,k)=(1,1), (4,1)$ then $X_{n,k}$ is isomorphic to a smooth quartic in $\mathbb P^ 3$ with equations of the form $$ \begin{array}{l} \bullet \hspace{0.7cm} X_{1,1}:\ F_4(x_0,x_1,x_2)+F_1(x_0,x_1,x_2)x_3^3=0\\ \ \\ \hspace{0.9cm} \sigma_{1,1}(x_0,\dots,x_3)=(x_0,x_1,x_2,\zeta x_3) \\ \ \\ \bullet \hspace{0.7cm} X_{4,1}:\ F_4(x_0,x_1)+F_3(x_2,x_3)F_1(x_0,x_1)=0 \\ \ \\ \hspace{0.9cm} \sigma_{4,1}(x_0,x_1,x_2,x_3)=(x_0,x_1,\zeta x_2,\zeta x_3).\\ \end{array}$$ Conversely, for generic $F_i$'s, the above equations define K3 surfaces and $\sigma_{1,1}$, $\sigma_{4,1}$ have fixed locus of type $(1,1)$ and $(4,1)$ respectively. \end{pro} \proof The fixed locus of $\sigma_{1,1}$ is the union of the curve $C_{1,1}$ of genus $3$ and an isolated point. By the Riemann-Hurwitz formula the intersection of $C_{1,1}$ with any elliptic curve on $X_{1,1}$ is either zero or three. Hence, by \cite[Theorem 5.2]{SD}, $\phi_{1,1}$ is an embedding of $X_{1,1}$ in $\mathbb P^ 3$. Since $Im(\phi_{1,1})$ is invariant under the action of $\tilde \sigma_{1,1}$, we find that its equation is of the above form. According to Proposition \ref{transc}, the Picard lattice of $X_{4,1}$ is isomorphic to $U(3)\oplus A_2^4$. Let $e,f$ be the basis of $U(3)$ as before and $e_i, e'_i$, $i=1,2,3,4$ the standard basis of $A_2^4$. The class $h=2e+f-\sum_{i=1}^4 (e_i+e'_i)$ gives an embedding as a smooth quartic surface in $\mathbb P^ 3$. The equations of $X_{4,1}$ and $\sigma_{4,1}$ can be determined as before. The last statement follows from easy computations. \qed \begin{rem} If $g(n,k)=3$ and $n\geq 2$ then $C_{n,k}$ is hyperelliptic by Corollary \ref{hyp}, hence by \cite{SD} $\phi_{n,k}$ is a rational map of degree $2$ onto a quadric $Q$. Since $Q$ is invariant under the projectivity $\tilde\sigma_{n,k}$ of Lemma \ref{proj}, $Q$ is a cone and $\phi_{n,k}$ is branched along a quartic section $B$ and the vertex.
The rational curves orthogonal to $C_{n,k}$ are contracted to the vertex of the cone and give a singular point of $B$ of type $A_1$ for $n=2$, $A_4$ for $n=3$ and $A_7$ for $n=4$. \end{rem} \noindent The following was proved in \cite{D}. \begin{pro}\label{2,1} The surface $X_{2,1}$ is isomorphic to the double cover of $\mathbb P^ 2$ branched along a smooth plane sextic with equation $$ F_6(x_0,x_1)+F_3(x_0,x_1)x_2^3+bx_2^6=0$$ and $$\sigma_{2,1}(x_0,x_1,x_2)=(x_0,x_1,\zeta x_2).$$ Conversely, for generic $F_i$'s, the above equation defines a K3 surface and $\sigma_{2,1}$ has fixed locus of type $(2,1)$. \end{pro} \proof The morphism $\phi_{2,1}$ is a double cover of $\mathbb P^ 2$ branched along a smooth plane sextic by \cite{SD}. Note that $\tilde \sigma_{2,1}$ (Lemma \ref{proj}) fixes the line $x_2=0$ pointwise and the point $p=(0,0,1)$. The last statement follows from easy computations. \qed \begin{rem} If $g(n,k)=2$ and $n>2$ then it can be proved that $\phi_{n,k}$ is a degree two morphism to $\mathbb P^ 2$ branched along a plane sextic as in Proposition \ref{2,1} with $b=0$, i.e. $p=(0,0,1)$ is singular. One can show that if $F_3$ has no multiple roots, then $p$ is of type $D_4$. If $F_3$ has exactly one double root $q$, then $p$ is of type $D_7$ if $F_6(q)\not=0$ and of type $D_{10}$ if $q$ is a simple root of $F_6$. This completes \cite[Proposition 3.3.18]{D}. \end{rem} \section{The moduli space} We denote by $\mathcal M_{n,k}$ the moduli space of K3 surfaces with an order three non-symplectic automorphism with $n$ fixed points and $k$ fixed curves, i.e. the space of all pairs $(X_{n,k},\sigma_{n,k})$. \begin{pro}\label{uniq} $\mathcal M_{n,k}$ is irreducible. \end{pro} \proof In Propositions \ref{k>1}, \ref{ic}, \ref{qu} and \ref{2,1} we proved that the surface $X_{n,k}$ is isomorphic to the general element of an irreducible family of surfaces and that the automorphism $\sigma_{n,k}$ is uniquely determined.
This gives the result.\qed\\ Let $\rho$ be an order $3$ isometry such that $(T,\rho)$ is an $\mathcal E^*$-lattice and consider the period domain $B_{\rho}$ as in section 3. Since $T$ has signature $(2,2m-2)$, it is easy to see that $B_{\rho}$ is isomorphic to an $(m-1)$-dimensional complex ball. Let $$\Gamma_{\rho}=\{\gamma \in O(T): \gamma\circ \rho=\rho \circ \gamma\}.$$ \begin{theorem}[\S 11, \cite{DK}] \label{mod} The generic point of $\mathcal M_{\rho}=B_{\rho}/\Gamma_{\rho}$ parametrizes pairs $(X,\sigma)$ where $X$ is a K3 surface and $\sigma$ an order $3$ non-symplectic automorphism on $X$ with $\sigma^*=\rho$ up to isometries. \end{theorem} \begin{cor} For any $n,k$ there is a unique isometry $\rho_{n,k}$ such that $T(n,k)$ is an $\mathcal E^*$-lattice and $\mathcal M_{n,k}$ is birational to $\mathcal M_{\rho_{n,k}}$. \end{cor} \proof Observe that $\mathcal M_{\rho}$ is irreducible. Then the result follows from Proposition \ref{uniq} and Theorem \ref{mod}. \qed\\ According to Theorem \ref{clas}, $T(3,0)=U\oplus U(3)\oplus A_2^{5}$. By Theorem \ref{RS} we also have an isomorphism $T(3,0)\cong A_2(-1)\oplus K_{12}$, where $K_{12}$ is the Coxeter-Todd lattice. \begin{pro}\label{cox} If $K_{12}$ is a primitive $\mathcal E$-sublattice of $T(n,k)$ then $(n,k)=(3,0)$. \end{pro} \proof Let $B$ be the orthogonal complement of $K_{12}$ in $T=T(n,k)$. Since $K_{12}$ is a unimodular $\mathcal E$-lattice, $K_{12}\oplus B=T$. In fact, assume on the contrary that there exist $a\in K_{12}$, $b\in B$ and an integer $n>1$ such that $(a+b)/n\in T$ gives a non-trivial element of $T/(K_{12}\oplus B)$. Let $H$ be the hermitian form associated to an $\mathcal E$-lattice as in Remark \ref{herm}; then the homomorphism $$\phi:K_{12}\longrightarrow \mathcal E,\ x\mapsto H(x,(a+b)/n)=H(x,a/n)$$ belongs to $Hom(K_{12},\mathcal E)$ and not to $K_{12}$, contradicting the unimodularity of $K_{12}$. Since the lattice $K_{12}$ is negative definite and $T$ has signature $(2,20-s)$, we have $\mathrm{rk}\, B\geq 2$.
Moreover $a(T)\geq 6$, hence $\mathrm{rk}\, T^{\perp}\geq 6$ and $\mathrm{rk}\, B\leq 4$. If $\mathrm{rk}\, B=2$ then $B$ is a rank one positive definite $\mathcal E^*$-lattice. A direct computation shows that $B\cong A_2(-1)$, hence $T\cong T(3,0)$. If $\mathrm{rk}\, B=4$ then $a(T)=6$ (since $\mathrm{rk}\, T^{\perp}=6$), hence $B$ is a unimodular lattice of signature $(2,2)$. By \cite[Corollary 1.13.3]{N2} $B\cong U\oplus U$. Assume that $K_{12}\oplus U\oplus U$ admits a primitive embedding in $L_{K3}$. Then its orthogonal complement is a hyperbolic $3$-elementary lattice of rank $6$ with $a=6$. This lattice does not exist by Theorem \ref{RS}, hence this case cannot occur. \qed \begin{rem} The Coxeter-Todd lattice also appears in connection with symplectic automorphisms of order $3$ on K3 surfaces. In \cite[Theorem 5.1]{GS} it is proved that the orthogonal complement of the fixed lattice of such automorphisms is isomorphic to $K_{12}$. \end{rem} \begin{theorem}\label{moduli} The moduli space of K3 surfaces with a non-symplectic automorphism of order $3$ has three irreducible components, which are the closures of $$\mathcal M_{0,1},\ \mathcal M_{0,2},\ \mathcal M_{3,0}.$$ \end{theorem} \proof By Proposition \ref{k>1} it is clear that all moduli spaces $\mathcal M_{n,k}$ with $k>1$ are in the closure of $\mathcal M_{0,2}$. By Remark \ref{g4} a K3 surface in $\mathcal M_{0,1}$ is the triple cover of a quadric surface $Q$ branched along a genus $4$ curve $C$. It is easy to check that, if $C$ has singular points of type $A_1$, then the triple cover of $Q$ branched along $C$ has rational double points of type $A_2$ and its minimal resolution is still a K3 surface with an order $3$ non-symplectic automorphism. For example, if $C$ is generic with a node, then the associated K3 surface has an order three automorphism which fixes the proper transform of $C$ and one point (coming from the contraction of a component of the exceptional divisor over the node).
In fact the Picard lattice is isomorphic to $U(3)\oplus A_2$, where $A_2$ is generated by the components of the exceptional divisor. Hence we are in case $n=1$, $k=1$. Similarly, by taking $C$ with $n=1,\dots,4$ singularities of type $A_1$, we obtain the cases $(n,1)$. As a consequence of Proposition \ref{cox}, $\mathcal M_{3,0}$ is not contained in any other $\mathcal M_{n,k}$, hence it gives a maximal irreducible component of the moduli space. \qed \begin{rem} In \cite[\S 4]{K}, S.~Kond\=o describes the relation between the two components $\mathcal M_{0,1}$ and $\mathcal M_{0,2}$ (in fact the second component contains the jacobians of the first one) and relates the groups $\Gamma_{0,1}, \Gamma_{0,2}$ to those appearing in the Deligne-Mostow list \cite{DM}. \end{rem} \end{document}
\begin{document} \title{The stable 4--genus of knots} \author{Charles Livingston} \thanks{This work was supported by a grant from the NSF} \thanks{\today} \address{Department of Mathematics, Indiana University, Bloomington, IN 47405} \email{[email protected]} \keywords{} \maketitle \begin{abstract} We define the stable 4--genus of a knot $K \subset S^3$, $g_{st}(K)$, to be the limiting value of $g_4(nK)/n$, where $g_4$ denotes the 4--genus and $n$ goes to infinity. This induces a seminorm on the rationalized knot concordance group, $\mathcal{C}_{\bf Q} = \mathcal{C} \otimes {\bf Q}$. Basic properties of $g_{st}$ are developed, as are examples focused on understanding the unit ball for $g_{st}$ on specified subspaces of $\mathcal{C}_{\bf Q}$. Subspaces spanned by torus knots are used to illustrate the distinction between the smooth and topological categories. A final example is given in which Casson-Gordon invariants are used to demonstrate that $g_{st}(K)$ can be a noninteger.\end{abstract} \section{Summary.} In order to better understand the smooth 4--genus of knots $K \subset S^3$, denoted $g_4(K)$, we introduce and study here the {\it stable 4--genus}, $$\displaystyle{g_{st}(K) = \lim_{n\to \infty} g_4(nK)/n}.$$ As will be seen in Section~\ref{algprelims}, the existence of the limit and its basic properties follow from the subadditivity of $g_4$ as a function on the classical knot concordance group $\mathcal{C}$; that is, $g_4(K \mathop{\#} J) \le g_4(K) + g_4(J)$ for all $K$ and $J$. Neither classical knot invariants nor the invariants that arise from Heegaard-Floer theory~\cite{os} or Khovanov homology~\cite{ra} can be used to demonstrate that $g_{st}(K) \notin {\bf Z}$ for some $K$. One result of this paper is the construction of knots $K$ for which $g_{st}(K)$ is close to $\frac{1}{2}$. Perhaps of greater interest is the exploration of the new perspective on the 4--genus and knot concordance offered from the stable viewpoint.
In particular, a number of interesting and challenging new questions arise naturally. For example, we note that finding a knot $K$ with $0 < g_{st}(K) <\frac{1}{2}$ is closely related to the existence of torsion in $\mathcal{C}$ of order greater than 2. We will also consider the distinction between the smooth and topological categories from the perspective of the stable genus. \vskip.1in \noindent{\it Acknowledgements} Thanks are due to Pat Gilmer for conversations related to his results on 4--genus, which play a key role in Section~\ref{seconehalf}. Thanks are also due to Ian Agol and Danny Calegari for discussing with me the analogy between the stable genus and the stable commutator length, described in Section~\ref{secquestions}. \section{Algebraic preliminaries.}\label{algprelims} The existence of the limiting value and its basic properties are summarized in the following general theorem. \begin{theorem}\label{limthm} Let $\nu \colon\thinspace G \to {\bf R}_{\ge 0}$ be a subadditive function on an abelian group $G$. Then: \begin{enumerate} \item The limit $\nu_{st}(g) = \lim_{n \to \infty} \nu(ng)/n$ exists for all $g \in G$. \item The function $\nu_{st}\colon\thinspace G \to {\bf R}_{\ge 0}$ is subadditive and multiplicative: $\nu_{st}(ng) = n \nu_{st}(g)$ for $n \in {\bf Z}_{\ge 0}$. If $\nu(g) = \nu(-g)$ for all $g$, then $\nu_{st}(g) = \nu_{st}(-g)$ for all $g$. \item There is a factorization of $\nu_{st}$ through $G_{\bf Q} = G \otimes {\bf Q}$. That is, there is a multiplicative, subadditive function $\overline{\nu}_{st} \colon\thinspace G_{\bf Q} \to {\bf R}_{\ge 0}$ such that $\nu_{st} = \overline{\nu}_{st} \circ i$ where $i\colon\thinspace G \to G_{{\bf Q}}$ is the map $g \to g \otimes 1$. \end{enumerate} \end{theorem} \begin{proof} The proof of (1) is a standard elementary exercise using the consequence of subadditivity, $\nu(ng) \le n\nu(g)$ for all $g$. In the appendix to this paper we summarize a proof. The rest of the theorem follows easily. 
\end{proof} A {\it seminorm} on a vector space is a nonnegative, multiplicative and subadditive function. Thus, $\overline{{\nu}}_{st}$ is a seminorm on $G_{\bf Q}$.\vskip.1in \noindent{\bf Notation} We will usually drop the overbar notation; that is, we will denote both the functions $\nu_{st}$ on $G$ and $\overline{\nu}_{st}$ on $G_{\bf Q}$ by $\nu_{st}$ and be clear as to what domain we are using. \vskip.1in In our applications we will want to bound $g_{st}$ using homomorphisms on the concordance group, in particular signatures, the Ozsv\'ath-Szab\'o invariant $\tau$ and the Khovanov-Rasmussen invariant $s$. The needed algebraic observation is the following, the proof of which the reader can readily provide. \begin{theorem}\label{bdthm} If $\sigma \colon\thinspace G \to {\bf R}$ is a homomorphism and $\nu(g) \ge |\sigma(g)|$ for all $g \in G$, then: \begin{enumerate} \item $|\sigma| \colon\thinspace G \to {\bf R}_{\ge 0}$ is subadditive. \item The stable function ${| \sigma |}_{st} $ satisfies ${| \sigma |}_{st}= | \sigma |$ and is a seminorm on $G_{\bf Q}$. \item ${\nu}_{st}(x) \ge |\sigma(x)|$ for all $x \in G_{{\bf Q}}$. \end{enumerate} \end{theorem} A seminorm can be completely understood via its unit ball. \begin{definition} If ${\nu}$ is a seminorm on a vector space $V$, then $B_{{\nu}} = \{x \in V\ |\ {\nu}(x) \le 1\}$. \end{definition} \begin{theorem} Let $\nu$ be a subadditive nonnegative function on an abelian group $G$ and let $\sigma$ be a real-valued homomorphism on $G$. \begin{enumerate} \item $B_{{\nu}_{st}}$ and $B_{{|\sigma|}}$ are convex subsets of $G_{\bf Q}$. \item If $\nu(g) \ge |\sigma(g)|$ for all $g \in G$, then $B_{{\nu}_{st}} \subset B_{{|\sigma|}}$. \end{enumerate} \end{theorem} \section{Elementary examples} We begin exploring the stable genus by computing its value for a few simple examples.
\subsection{$g_{st}(4_1) = 0$} The first example of a nonslice knot is the figure eight knot, $4_1$, as originally proved by Fox and Milnor~\cite{fm}. Since $4_1$ is amphicheiral, $2(4_1)$ is slice, meaning that $g_4(2(4_1)) = 0$. It follows immediately that in taking limits, $g_{st}(4_1) = 0$. \subsection{$g_{st}(3_1) = 1$} The first knot of infinite order in $\mathcal{C}$ is the trefoil, $3_1$, as originally proved by Murasugi~\cite{mu}. Let $\sigma(K)$ denote the classical signature of $K$: the signature of $V + V^{\text{\sc T}} $ where $V$ is a Seifert matrix for $K$ and $V^{\text{\sc T}}$ its transpose. Then we have the Murasugi bound, $g_4(K) \ge \frac{1}{2}| \sigma(K)|$. Hence Theorem~\ref{bdthm} applies to show that $g_{st}(3_1) \ge \frac{1}{2} | \sigma(3_1) |= 1$. On the other hand, $g_4(3_1) = 1$, so $g_{st}(3_1) \le 1$. \subsection{$g_{st}(3T_{2,7} - 2T_{2,11}) = 2$} As a final example that illustrates a simple application of Tristram-Levine signatures~\cite{le2, tr}, we consider the knot $3T_{2,7} - 2T_{2,11}$, where $T_{p,q}$ denotes the $(p,q)$--torus knot. We will now apply Theorem~\ref{bdthm} to $\sigma_t$ for appropriate $t$, where $\sigma_t$ is the Tristram-Levine signature~\cite{le2, tr}, defined by: $$\sigma_t(K) = \text{signature}((1 - e^{2 i\pi t})V +(1 - e^{-2i\pi t})V^{\text{\sc T}}).$$ (Formally, to achieve a concordance invariant one forms the average of the one-sided limits, $\sigma'_t(K) = \lim_{\epsilon \to 0} \frac{1}{2} ( \sigma_{t - \epsilon}(K) + \sigma_{t + \epsilon}(K))$; then $\sigma'_t$ is a homomorphism on the concordance group for any specific value of $t$.) For the knot $3T_{2,7} - 2T_{2,11}$ this signature function is graphed in Figure~\ref{figsiggraph}. Since the function is symmetric about $\frac{1}{2}$, we have graphed the portion of the function on the interval $[0, \frac{1}{2}]$.
\begin{figure} \caption{Signature function for $3T_{2,7} - 2T_{2,11}$.} \label{figsiggraph} \end{figure} If we let $x$ be any number between $\frac{3}{14}$ and $\frac{5}{22}$, then the Tristram-Levine bound $g_{4}(K) \ge \frac{1}{2} |\sigma_x(K)|$ implies $g_{st}(K) \ge \frac{1}{2} |\sigma_x(K)|$. Thus we have that $g_{st}(3T_{2,7} - 2T_{2,11}) \ge 2$. On the other hand, the reader should have no trouble finding four band moves in the schematic diagram of $3T_{2,7} - 2T_{2,11}$ (Figure~\ref{diagramt23}) that convert it into the torus knot $T_{2,1}$, which is the unknot and in particular bounds a disk. The corresponding surface in the 4--ball constructed by performing these band moves and capping off with the disk is of genus 2. Thus $g_{st}(3T_{2,7} - 2T_{2,11}) \le 2$. \begin{figure} \caption{Schematic diagram for $3T_{2,7} - 2T_{2,11}$.} \label{diagramt23} \end{figure} \section{Families of knots: $xT_{2,7} + yT_{2,11}$.} A nice illustrative example is given by restricting to the subspace $S$ of $\mathcal{C}_{\bf Q}$ spanned by the torus knots $T_{2,7}$ and $T_{2,11}$. We want to understand the unit ball of $g_{st}$ on $S$ in terms of the unit ball associated to the function Max$_{0\le t \le 1} \{\sigma_t\}$; for any particular example it is more straightforward to directly analyze the signature function. In the present case, the signature functions for $T_{2,7}$ and $T_{2,11}$ are zero near $t = 0$ and increase by two at each of the jumps at the points $\{1/14, 3/14, 5/14\}$ and $\{1/22, 3/22, 5/22, 7/22, 9/22\}$, respectively.
For the reader's convenience, we order the union of these two sets: $$\{1/22, 1/14, 3/22, 3/14, 5/22, 7/22, 5/14, 9/22\}.$$ Evaluating the signature functions $\sigma_t(xT_{2,7} + yT_{2,11})$ at values between each of these numbers and for some $t$ close to $\frac{1}{2}$ yields the following set of inequalities: \begin{eqnarray*} g_{st}(xT_{2,7} + yT_{2,11}) & \ge & |y|\\ g_{st}(xT_{2,7} + yT_{2,11})& \ge & | x + y|\\ g_{st}(xT_{2,7} + yT_{2,11}) & \ge & |x +2y|\\ g_{st}(xT_{2,7} + yT_{2,11})& \ge & |2x + 2y|\\ g_{st}(xT_{2,7} + yT_{2,11}) & \ge & |2x + 3y|\\ g_{st}(xT_{2,7} + yT_{2,11}) & \ge & |2x + 4y|\\ g_{st}(xT_{2,7} + yT_{2,11})& \ge & |3x + 4y|\\ g_{st}(xT_{2,7} + yT_{2,11})& \ge & |3x + 5y|.\\ \end{eqnarray*} Based on these, we find that the unit ball $B_{{g}_{st}}$ restricted to the span of $T_{2,7}$ and $T_{2,11}$ is contained in the set illustrated in Figure~\ref{ballgraph}. \begin{figure} \caption{The unit ball for $g_{st}$ restricted to the span of $T_{2,7}$ and $T_{2,11}$.} \label{ballgraph} \end{figure} By convexity, to show that this set is actually the unit ball for $g_{st}$, we need to check only the vertices. For instance, we want to see that $g_{st}(\frac{3}{2}T_{2,7} - T_{2,11}) = 1$. That is, we need to show $g_{st}(3T_{2,7} - 2T_{2,11}) = 2$. That calculation was done in the previous section. The other vertices are handled similarly. (That is, one shows that $g_4(T_{2,7}) = 3$, $g_4(T_{2,7} - T_{2,11}) = 2$, $g_4(3T_{2,7} - 2T_{2,11}) = 2$ and $g_4(2T_{2,7} - T_{2,11}) = 2$. The point $(0, \frac{1}{5})$ is not a vertex so need not be considered.) \vskip.1in \noindent{\bf Note.} Rick Litherland~\cite{lith} has proved that for any pair of two-stranded torus knots, the 4--genus of a linear combination $xT_{2,k} + yT_{2,j}$ is determined by its signature function.
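The evaluations behind these inequalities are mechanical, and a short script can reproduce them; the sketch below adopts the sign convention of this section (each signature function of $T_{2,k}$ jumps up by two at the points $j/2k$, $j$ odd) and re-derives the bound $g_{st}(3T_{2,7} - 2T_{2,11}) \ge 2$ together with the coefficients of the last inequality.

```python
from fractions import Fraction

def sig(k, t):
    # Signature function of the torus knot T_{2,k} (k odd), for 0 <= t <= 1/2,
    # in the convention above: it increases by 2 at each jump j/(2k), j odd.
    return 2 * sum(1 for j in range(1, k, 2) if Fraction(j, 2 * k) < t)

def half_sig(x, y, t):
    # sigma_t is additive under connected sum, so on xT_{2,7} + yT_{2,11}
    # the Tristram-Levine bound reads g_st >= |x*sig(7,t) + y*sig(11,t)| / 2.
    return Fraction(x * sig(7, t) + y * sig(11, t), 2)

# For t just below 1/2 the bound is |3x + 5y|, the last inequality above:
t = Fraction(49, 100)
assert half_sig(1, 0, t) == 3 and half_sig(0, 1, t) == 5

# Re-derive g_st(3T_{2,7} - 2T_{2,11}) >= 2, with t between 3/14 and 5/22:
t = (Fraction(3, 14) + Fraction(5, 22)) / 2
assert abs(half_sig(3, -2, t)) == 2
```

Evaluating `half_sig` between consecutive points of the ordered set above recovers each of the eight coefficient pairs in turn.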
The added complexity occurs because the signature function of $T_{3,7}$ does not determine its smooth 4--genus; this signature function has positive jumps at $1/21, 2/21, 4/21, 5/21$ and $ 8/21$ but a negative jump at $ 10/21$. Thus the resulting bound $\frac{1}{2}|\sigma_t|$ has maximum value 5, and its value at $t=\frac{1}{2}$ is 4. On the other hand, the Ozsv\'ath-Szab\'o invariant $\tau$ and the Khovanov-Rasmussen invariant $s$ both yield the bound 6, and thus determine the smooth 4--genus of $T_{3,7}$ to be 6. (See~\cite{os, ra} for details.) Considering only the signature function, we can show that the unit $g_{st}$ ball is contained within the entire shaded region. Using either $\tau$ or $s$ places additional bounds which eliminate the two thin darker triangles. The innermost parallelogram represents points that we know are in the unit ball. \begin{figure} \caption{Bounds on the topological and smooth unit ball for $g_{st}$.} \label{smooth-top} \end{figure} \vskip.1in \noindent{\bf Note.} Recent work has slightly enlarged the region which we know lies in the unit $g_{st}$ ball for the span of these two knots, but most of the region remains unknown. We know of no knots in this span for which the topological and smooth 4--genus differ. \section{A 4--dimensional example.} As our final example related to finding a $g_{st}$ unit ball, we consider the span of the first four knots that are of infinite order in $\mathcal{C}$: $3_1$, $5_1$, $5_2$ and $6_2$. If we identify the span of these with ${\bf Q}^4$ via the coordinates $x_1(3_1) + x_2(5_1) + x_3(5_2) +x_4(6_2)$, then the unit ball determined by the maximum of the signature function turns out to be a polyhedron formed as the convex hull of 24 points that come in antipodal pairs. We list one from each pair: \begin{enumerate} \item $ (2, -1,0,0), (0,1,-2,0), (0,1,0,-1), (2,-1,0,-1), (0,0,1,0), (2,0,-1,0) $ \vskip.05in \item $ (0,1,0,-2)$ \vskip.05in \item $ (2,1,-2,-2), (2,1,-2,-1), (0,1,-2,1) , (0,0,1,-2), (2,0,-1,-2) $.
\vskip.05in \end{enumerate} Those in the first set have all been shown to have $g_4 = 1$. For those in the last set we have been unable to compute the genus or stable genus. For the second set, $(0,1,0,-2) $, we have been unable to compute the 4--genus, but we know that twice this knot has 4--genus 2, and hence its stable 4--genus is 1. \section{A knot with $g_{st}(K)$ near $ \frac{1}{2}$. Gilmer, Casson-Gordon bounds.}\label{seconehalf} We begin by presenting Gilmer's result~\cite{gi} bounding the 4--genus of a knot $K$ in terms of Casson-Gordon signature invariants~\cite{cg1}. Let $K$ be a knot and let $M_d(K)$ denote its $d$--fold branched cover, with $d$ a prime power. To each prime $p$ and character $\chi \colon\thinspace H_1(M_d(K), {\bf Z}) \to {\bf Z}_p$, there is the Casson-Gordon invariant $\sigma(K,\chi) \in {\bf Q}$. By~\cite{gi}, this invariant is additive under connected sum of knots and direct sums of characters. A special case of the main theorem of~\cite{gi3} states the following: \begin{theorem} If $K$ is an algebraically slice knot for which $H_1(M_d(K), {\bf Z}) \cong {\bf Z}_p^{2n}$ and $g_4(K) = g$, then there is a subspace $H \subset \text{Hom}(H_1(M_d(K), {\bf Z}), {\bf Z}_p) \cong H^1(M_d(K), {\bf Z}_p)$ of dimension $\frac{1}{2} (2n-2(d-1)g) $ such that for all $\chi \in H$, $|\sigma(K, \chi) |\le 2dg$. \end{theorem} It was observed in~\cite{gl} that $H$ can be assumed to be invariant under the deck transformation. Applying this and specializing to the case of $d = 3$, we have: \begin{corollary}\label{corgilmer} If $K$ is an algebraically slice knot for which $H_1(M_3(K), {\bf Z}) \cong {\bf Z}_p^{2n}$ and $g_4(K) = g$, then there is a ${\bf Z}_3$--invariant subspace $H \subset \text{Hom}(H_1(M_3(K), {\bf Z}), {\bf Z}_p) \cong H^1(M_3(K), {\bf Z}_p)$ of dimension $ n-2g $ such that for all $\chi \in H$, $|\sigma(K, \chi) |\le 6g$.
\end{corollary} \noindent{\bf Example} Consider the knot illustrated in Figure~\ref{genknot}, which we denote $K(J_1, J_2)$. This family of knots has been used throughout the study of knot concordance; a detailed description can be found, for instance, in~\cite{gl}, which also contains the details of the results we now summarize. First, the homology of the 3--fold branched cover is the direct sum of cyclic groups of order seven: $H_1(M_3(K(J_1, J_2))) \cong {\bf Z}_7 \oplus {\bf Z}_7$. Furthermore, the homology splits as the direct sum of $E_2 \cong {\bf Z}_7$ and $E_4 \cong {\bf Z}_7$, the $2$--eigenspace and $4$--eigenspace of the deck transformation. (Note that $2^3 \equiv 4^3 \equiv 1 \pmod 7$.) Similarly, $H^*_1(M_3(K(J_1,J_2))) = \text{Hom}( H_1(M_3(K(J_1,J_2))) , {\bf Z}_p)$ splits as a direct sum of eigenspaces, which we denote $E^*_2$ and $E^*_4$. Using two eigenvectors as a basis for $H^*_1(M_3(K(J_1,J_2)))$ and letting $\chi_{a,b}$ be the character corresponding to $(a,b)$ via this identification, as proved in~\cite{gl} we have: \begin{theorem} \label{thmcg} $\sigma(K(J_1, J_2), \chi_{a,0}) = \sigma_{a/7}(J_1) + \sigma_{2a/7}(J_1)+ \sigma_{4a/7}(J_1)$; similarly, $\sigma(K(J_1, J_2), \chi_{0,b}) = \sigma_{b/7}(J_2) + \sigma_{2b/7}(J_2)+ \sigma_{4b/7}(J_2)$. In particular, it follows that $\sigma(K(J_1, J_2), \chi_{0,0}) = 0$. \end{theorem} \begin{figure} \caption{The knot $K(J_1,J_2)$.} \label{genknot} \end{figure} We can now demonstrate that particular knots in this family have $g_{st}(K(J_1, J_2))$ near $ \frac{1}{2}$. \begin{theorem} For any $\epsilon >0$, there is a knot $J$ so that $\frac{1}{2} (1-\epsilon)\le g_{st}(K(J,-J)) \le\frac{1}{2}$. \end{theorem} \begin{proof} By the additivity of the $3$--genus, for any knot $J$ we have $g_3(2K(J,-J)) = 2$. On the evident Seifert surface for $2K(J,-J) $ there is a curve on the surface with framing 0 representing the knot $J \mathop{\#} -J$, which is slice.
Thus, the Seifert surface can be surgered in the 4--ball to give a surface of genus one bounded by $2K(J,-J)$. Therefore, $g_4(2K(J, -J)) \le 1$ and $g_{st}(K(J,-J)) \le \frac{1}{2}$. We now proceed to show that for each $\epsilon$ there is some $J$ for which $g_{st}(K(J,-J)) \ge \frac{1}{2}(1 - \epsilon)$. For a given $J$, if this inequality is false, then for some $n >0$, $g_4(nK(J,-J)) < \frac{1}{2}(1-\epsilon)n$. (Since this holds for some $n$, it holds for all $n$ sufficiently large.) For this $n$, we have $H_1(M_3(nK(J,-J))) = {\bf Z}_7^{2n}$. Applying Corollary~\ref{corgilmer} we find the relevant subgroup $H$ has dimension $\dim (H) > n - 2(\frac{1}{2}(1-\epsilon)n).$ Simplifying, we have $\dim (H) > \epsilon n$. Since $H_1( M_3(K(J,-J)))$ splits as the direct sum of a $2$--eigenspace and a $4$--eigenspace, the same is true for $ H_1(M_3(n K(J,-J)))$. Thus, we also have an eigenspace splitting of $H^*_1(M_3(n K(J,-J)))$. The subspace $H$ given by Corollary~\ref{corgilmer} is invariant under the deck transformation, so it too must split as the sum of eigenspaces, $ H = H_2 \oplus H_4$. Given that $\dim (H) > \epsilon n$, one of these must have dimension at least $\frac{1}{2} \epsilon n$. We will assume $\dim (H_2) \ge \frac{1}{2} \epsilon n$; the case $\dim (H_4) \ge \frac{1}{2} \epsilon n$ is similar. We next use the fact, easily established using the Gauss-Jordan algorithm, that a subspace of dimension $a$ in ${\bf Z}_p^b$ contains some vector with at least $a$ nonzero coordinates. Thus, $H_2$ contains a vector $h$ with at least $\frac{1}{2} \epsilon n$ nonzero coordinates. For the character $\chi_h$ given by $h$, by the additivity of Casson-Gordon invariants and Theorem~\ref{thmcg}, $$\sigma(K, \chi_h) =\sum \sigma(K(J, -J), \chi_{a_i,0}) = \sum \sigma_{\frac{a_i}{7}}(J)$$ where the sum has at least $\frac{1}{2} \epsilon n$ terms and each $a_i = 1, 2,$ or $3$.
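The linear-algebra fact invoked above can be checked by brute force in small cases. The following sketch (illustrative only, not part of the paper) draws random full-rank bases of $3$-dimensional subspaces of ${\bf Z}_7^5$ and verifies that each span contains a vector with at least $3$ nonzero coordinates; the `rank` helper is the Gauss-Jordan elimination mentioned in the text.

```python
import itertools
import random

def span_vectors(basis, p):
    """All vectors in the Z_p-span of the given basis rows."""
    dim, b = len(basis), len(basis[0])
    for coeffs in itertools.product(range(p), repeat=dim):
        yield tuple(sum(c * v[j] for c, v in zip(coeffs, basis)) % p
                    for j in range(b))

def rank(rows, p):
    """Row rank over Z_p via Gauss-Jordan elimination (p prime)."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], p - 2, p)       # Fermat inverse mod prime p
        rows[r] = [x * inv % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

random.seed(0)
p, b, a = 7, 5, 3
for _ in range(50):
    basis = [[random.randrange(p) for _ in range(b)] for _ in range(a)]
    if rank(basis, p) < a:
        continue                                 # skip degenerate samples
    best = max(sum(x != 0 for x in v) for v in span_vectors(basis, p))
    assert best >= a                             # the claimed fact
```

(A quick proof sketch: in reduced row-echelon form the sum of the $a$ basis rows has entry $1$ in each of the $a$ distinct pivot columns.)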
Now, letting $M > 0$ be a fixed constant, assume that $\sigma_{\frac{a_i}{7}}(J) > M$ for $a_i = 1, 2, 3$. Such a $J$ is easily constructed using connected sums of $(2,k)$--torus knots. Then $\sigma(nK(J,-J), \chi_h) > \frac{1}{2} \epsilon n M$. Thus, we will have a contradiction to Corollary~\ref{corgilmer} if $\frac{1}{2} \epsilon n M \ge 6( \frac{1}{2}(1-\epsilon)n)$. Simplifying, we find that there is a contradiction if $M \ge 6 (\frac{1-\epsilon}{\epsilon})$. In conclusion, if $\sigma_{\frac{a}{7}}(J) \ge 6 (\frac{1-\epsilon}{\epsilon})$ for $a = 1, 2, 3$, then $g_{st}(K) \ge (1-\epsilon)\frac{1}{2}$. In the case that we are working with the $4$--eigenspace instead of the $2$--eigenspace, the same condition appears, since Corollary~\ref{corgilmer} concerns the absolute value of the Casson-Gordon invariant, and switching eigenspaces simply interchanges $\sigma_\frac{a}{7}(J)$ with $\sigma_\frac{a}{7}(-J)$. \end{proof} \subsection{Other non-integer examples.} The knot $K(J,-nJ)$ illustrated in Figure~\ref{genknot2} can be shown to satisfy $g_4((n+1) K(J,-nJ)) \le n$, in much the same way as the special case of $n=1$, $K(J,-J)$. Thus, $g_{st}(K(J,-nJ)) \le \frac{n}{n+1}$. The argument used above, based on the 3--fold cover, cannot be successfully applied to find a lower bound. However, using the 2--fold cover we have been able to prove a weaker result. Given $n$, there is a $J$ so that $\frac{n-1}{n} \le g_{st}(K(J,-nJ)) \le \frac{n}{n+1}$. \begin{figure} \caption{The knot $K(J,-nJ)$.} \label{genknot2} \end{figure} \section{Questions.}\label{secquestions} \begin{enumerate} \item Is $g_{st}$ a norm on $\mathcal{C}_{\bf Q}$? That is, if $g_{st}(K) = 0$, does $K$ represent torsion in $\mathcal{C}$?\vskip.1in \item Is there a knot $K$ such that $0 < g_{st}(K) < \frac{1}{2}$? This question relates to that of finding torsion of order greater than 2 in $\mathcal{C}$. For instance, if there is a knot $K$ of order three, then $g_4(3K) =0$.
A simpler question than that of finding such a knot is to find a knot satisfying $g_4(3K) = 1$ but $g_4(2K) \ge 2$. \vskip.1in \item Is $g_{st}(K) \in {\bf Q}$ for all $K$? Presumably the examples constructed in the previous section satisfy $g_{st} = \frac{n}{n+1}$ for some $n$, though this seems difficult to prove.\vskip.1in \item Related to the previous question, is there a knot for which $g_{st}(K) \ne g_4(nK)/n$ for any $n$? \vskip.1in \item Let $\{K_i\}$ be a finite set of knots and let $S$ be the span of these knots in $\mathcal{C}_{\bf Q}$. Is the $g_{st}$ ball in $S$ a finite sided polyhedron? \vskip.1in \item For some pair of distinct nontrivial positive torus knots, $T_{p,q}$ and $T_{p', q'}$, with $p, q, p', q' >2$, determine the unit $g_{st}$ ball on their span in $\mathcal{C}_{\bf Q}$, in either the smooth or topological category. \end{enumerate} \subsection{Stable commutator length} If $g \in [G,G]$ is an element in the commutator subgroup of a group $G$, it can be expressed as a product of commutators. The length of the shortest such expression for $g$ is called the commutator length, $cl(g)$. The limit $\lim_{n \to \infty} {cl(g^n)}/{n}$ is called the stable commutator length. The notion was first studied in~\cite{ba}. Although no formal connections between this and the stable 4--genus are known at this time, the possibility of such connections is provocative. We note that Calegari's work~\cite{cal} has revealed much of the behavior of the stable commutator length for free groups. In particular, the stable commutator length is always rational for free groups, though this is not true for all groups~\cite{zhu}. Further details can be found in~\cite{cal2}. \appendix \section{Limits}\label{appendix} We sketch the proof of Theorem~\ref{limthm}, restated as follows. \begin{prop} Let $f \colon\thinspace {\bf Z}_+ \to {\bf R}_{\ge 0}$ satisfy $f(n+m) \le f(n) + f(m)$ for all $n$ and $m$. Then $\lim_{n \to \infty} f(n)/n$ exists.
\end{prop} \begin{proof} Let $L$ be the greatest lower bound of $\{f(n)/n\}_{n \in {\bf Z}_+}$. For any $\epsilon > 0$ there is an $N$ such that $f(N)/N \le L + \frac{\epsilon}{2}$. Any $n$ can be written as $n = aN + b$ where $0\le b <N$. Also, $f(b) \le B = \max\{f(b)\}_{0 \le b < N}$. By subadditivity we have $f(n) \le af(N) + f(b)$. Dividing by $n$ we have $$ \frac{f(n)}{n} \le \frac{af(N)}{aN + b} + \frac{f(b)}{aN + b} \le \frac{ f(N)}{ N } + \frac{B} { aN }.$$ Thus, if $n$ is chosen large enough that $\frac{B}{aN} \le \frac{\epsilon}{2}$ (for instance, choose $n \ge \frac{2B}{\epsilon} + N$), we have $f(n)/n \le L + \epsilon$. \end{proof} \end{document}
\begin{document} \title{Quantum-type Coherence as a Combination of Symmetry and Semantics} \docident{\hspace{\fill}\makebox[0pt][r]{\sf CLNS 97/1476}} \center{Floyd R. Newman Laboratory of Nuclear Studies\\ Cornell University, Ithaca, New York 14853 USA} \abstract{It is shown that quantum-type coherence, leading to indeterminism and interference of probabilities, may in principle exist in the absence of the Planck constant and a Hamiltonian. Such coherence is a combined effect of a symmetry (not necessarily physical) and semantics. The crucial condition is that symmetries should apply to logical statements about observables. A theoretical example of a non-quantum system with quantum-type properties is analysed.} \indent{Coherence, a cornerstone of quantum mechanics, is considered to be a result of the quantization of action. However, we will show that ``quantum-type coherence,'' as we will call it, does not depend on the existence of the Planck constant (although its concrete manifestations do). In our two examples, $E_{1}$ and $E_{2}$, analysed below, such coherence appears in the presence of: (I) An ordered set of mutually exclusive objects, with numerical values $\xi$, $\xi\in A$, where $A$ is an affine space (a space with no fixed origin); $\xi$ is a particle coordinate in $E_{1}$, and an interpretation of a given situation in $E_{2}$. \linebreak (II) Another ordered set of mutually exclusive objects, with numerical values $\chi$ defined on set (I) {\it as a whole}; $\chi$ is a value of the particle momentum in $E_{1}$, and in $E_{2}$ the ordinal number of a logical statement in a set of mutually exclusive statements describing the situation. Conditions (I) and (II) imply the existence of a symmetry.
It is shown that when symmetries apply to logical statements about objects instead of objects ``per se,'' the following semantic problems arise: in $E_{1}$, the problem of expressing the truth values of logical statements about objects in the second set, in terms of the truth values of logical statements about objects in the first set; and in $E_{2}$, the problem of expressing the truth values of statements in one interpretation when the truth values of the same statements in another interpretation are given. Inexpressibility is therefore a combined effect of symmetry and semantics---both irrelevant to $\hbar$. This effect, fundamental for quantum-type coherence, leads to indeterminism and interference of probabilities.} \indent{We will first delineate the border between the symmetry-semantic part (without $\hbar$) and the quantum part (with $\hbar$) of the quantum mechanical formalism, using an example of a single, zero-spin particle. Then we will analyse a non-quantum system having all typical features of quantum-type coherence. Its analysis provides a theoretical basis for searching for systems, neither classical nor quantum, in which quantum-type interference can be observed. Observation of such systems, interesting in itself, may indirectly clarify our understanding of the structure of quantum mechanics and the origins of quantization.} \indent{$E_{1}$. {\it A quantum example}. We will construct the quantum mechanics formalism of a single, zero-spin particle, introducing assumptions step-by-step so as to make clear exactly at which point in our construction $\hbar$ is needed. We will tag our assumptions with Greek letters.} \indent{Our fundamental assumption about what makes mechanics ``quantum'' is that $(\alpha)$ the statement $\Lambda_{p_0}$: ``$p=p_{0}$'' about a particle momentum $p$ (a translational invariant in the coordinate $q$-space) should itself be an invariant of the translational symmetry in the same space.
Such a requirement makes sense only if $\Lambda_{p_{0}}$, a logical statement, is simultaneously a function of coordinates, such that it does not depend on transformations $q\to q+\delta q$. This is possible only if $\Lambda_{p_{0}}$, as a function of coordinates, either does not depend on $q$ at all, or depends only on differences between coordinates. We suppose the latter, $(\beta)\;\Lambda_{p_{0}}(q,q')=\Lambda_{p_{0}}(q'-q)$. In our next step, $\Lambda_{p_{0}}(q,q')$ is assumed to be a matrix in $q$-space; and since $\Lambda_{p_{0}}$ is a logical statement, we can use logic to calculate it. Consider the logical equivalence, $\Lambda_{p_{0}}\wedge\Lambda_{p_{0}}\sim\Lambda_{p_{0}}$. The natural assumption is that $(\gamma)$: the logical conjunction in this equivalence should be represented by the matrix product, and the logical equivalence by the equation: \begin{equation} \int\Lambda_{p_{0}}(q'-s)\Lambda_{p_{0}}(s-q)\,ds=\Lambda_{p_{0}}(q'-q). \end{equation} The solution of this equation is $\Lambda_{p_{0}}=L^{-1}\exp 2\pi i(q'-q)\lambda^{-1}(p_{0})$, where the wavelength $\lambda(p_{0})$ is an unknown function of $p_{0}$, and $L\equiv\int dq$. The eigenvectors of this matrix (up to normalization constants) are $\psi_{p}(q)=\exp 2\pi iq/\lambda(p)$, with $\lambda(p_{1})\neq\lambda(p_{2})$ for $p_{1}\neq p_{2}$. Thus, we already have---without $\hbar$---the correct wave functions and density matrices, though the dependence of $\lambda$ on $p$ remains unknown. To be consistent, we now assume that a statement ``$q=q_{0}$'' about the particle coordinate $q$ should also be represented in $q$-space by a matrix. Obviously this matrix must be $(\delta)\colon\Lambda_{q_{0}}(q,q')=K\delta(q'-q_{0})\delta(q-q_{0})$, with $K$ constant. The eigenvectors of $\Lambda_{q_{0}}$ are $\psi_{q_{i}}(q)=\delta(q-q_{i}).$} Matrices $\Lambda_{q_{0}}(q,q^{\prime})$ and $\Lambda_{p_{0}}(q,q^{\prime})$ do not commute; their commutator is not proportional to $\hbar$.
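In a discretized $q$-space the idempotency equation (1) and its plane-wave eigenvectors can be verified directly. The following numerical sketch is illustrative only (the grid size and mode number are arbitrary choices, not from the text); the integral over $s$ becomes a matrix product on $L$ lattice points:

```python
import numpy as np

L, k = 64, 5                       # lattice size and mode number (assumed)
q = np.arange(L)
# Discrete analogue of Lambda_{p_0}(q'-q) = L^{-1} exp(2 pi i (q'-q)/lambda),
# with 1/lambda corresponding to the integer mode k/L on the lattice.
Lam = np.exp(2j * np.pi * k * (q[:, None] - q[None, :]) / L) / L

# Eq. (1): the kernel is idempotent under the matrix product.
assert np.allclose(Lam @ Lam, Lam)

# Its eigenvectors are the plane waves psi_p(q) = exp(2 pi i k q / L).
psi = np.exp(2j * np.pi * k * q / L)
assert np.allclose(Lam @ psi, psi)
```

The check works for any mode number, mirroring the fact that the functional form of the kernel, not the value of $\lambda(p_0)$, is fixed by equation (1).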
While $\Lambda_{q_{0}}$ is a statement about the exact location of the particle, statement $\Lambda_{p_{0}}$ expresses {\it by its own symmetric structure} the uncertainty of that location, and is defined by the momentum value, $p_{0}$, which, according to the meaning of translational invariance, relates to $q$-space as a whole. If, now, $\Lambda_{p_{0}}$ is true, i.e., is the correct description of the state of the particle, and the question is whether $\Lambda_{q_{0}}$ is true, the answer can be at best probabilistic, since the truth of $\Lambda_{p_{0}}$ is inexpressible in terms of the truth of any $\Lambda_{q_{i}}$. Here we have the probability that the conjunction $\Lambda_{p_{0}}\wedge\Lambda_{q_{0}}$ is true. Since, according to $(\gamma)$, conjunctions are represented by matrix products, and since the probability should be a translational invariant and be independent of the order of matrices, the only correct formula is: \begin{equation} w(p_{0}\vert q_{0})=\int\Lambda_{p_{0}}(q,q')\Lambda_{q_{0}}(q',q)dqdq'=K/L. \end{equation} Only after this step in constructing the quantum formalism need we introduce $\hbar$. The physical part of the quantum formalism is then defined by the introduction of Hamiltonian, canonical transformations, etc. {\it E$_{2}$. A non-quantum example.} Consider a situation open to interpretation and describable by different logical connections among {\it n} independent logical statements, $\lambda_{i}, i=1,2,...,n$. According to the classical logic of propositions, every such description can be represented by a disjunction of mutually exclusive conjunctions. There are $N=2^{n}$ such conjunctions, which can be enumerated as $\Lambda_{k}, k=1,2...,N; N\geq 2$. One, and only one of them, can be true. 
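Equation (2) can likewise be checked in a discretized $q$-space. In the sketch below (illustrative; grid size, mode number, and the constants $K$, $q_0$ are assumed values) the double integral becomes the trace of a matrix product:

```python
import numpy as np

L, k, K, q0 = 64, 5, 3.0, 17       # assumed grid size, mode, and constants
q = np.arange(L)
Lam_p = np.exp(2j * np.pi * k * (q[:, None] - q[None, :]) / L) / L
Lam_q = np.zeros((L, L))
Lam_q[q0, q0] = K                  # discrete K * delta(q'-q0) * delta(q-q0)

# Eq. (2): w(p0|q0) is the trace of the product of the two kernels.
w = np.trace(Lam_p @ Lam_q)
assert np.isclose(w, K / L)
```

The result $K/L$ is independent of both $q_0$ and the mode number, reflecting the translational invariance required of the probability.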
By definition, a {\it certain} (i.e., not uncertain) interpretation $I^{(s)}$, $I^{(s)}=I^{(s)}(k)$, is the following function of the integer $k$, $k=1,2,...,N$: if some $\Lambda_{l}$ is defined as a true statement-conjunction, then $I^{(s)}(k)=\delta_{kl}$. Inversely, if $I^{(s)}(k)=\delta_{kl}$, then in this interpretation $\Lambda_{l}$ is true. Two interpretations are identical if the corresponding functions are identical. There are only {\it N} non-identical certain interpretations, each defining which {\it one} of the {\it N} conjunctions is true. The {\it N} non-identical certain interpretations can be transformed into each other by permutations, described by a finite table. That table can be considered as an algorithm defining the truth values of the conjunctions in all certain interpretations, when the truth values of the conjunctions in one of them are defined. Now we will extend our concept of interpretation beyond classical logic by introducing the following two conditions: (a) (symmetry) There is no correct interpretation a priori: any statement-conjunction may be considered true. Such a choice defines a {\it correct} interpretation. Once an interpretation is considered correct, {\it N} different certain interpretations can be generated by permutations of truth values. By definition, these certain interpretations define the meaning of the truth values of the {\it N} conjunctions. (b) If there are two interpretations, $I^{(s)}(k)$ and $I^{(s')}(k)$, then the difference between them is measured by a real number, $\theta$; all values of $\theta$ inside an interval $\theta_{min}\leq\theta\leq\theta_{max}$ are permitted; and if $I^{(s)}(k)$ is considered correct, then $\theta =s'-s$. More formally, all interpretations are now points in an affine space $A^{1}$, $s\in A^{1}$, and $\vec\theta$ is a vector in the vector space $R^{1}$ of real numbers, $\theta \in R^{1}$. The group $R^{1}$ acts on $A^{1}$ as the continuous group of parallel displacements.
Points of our affine space are {\it functions} defined in their own discrete spaces, each containing {\it N} points, $N\geq 2$. We will show that such a system possesses the properties of quantum-type coherence: indeterminism, interference of probabilities, and the possibility of introducing wave functions, though none of our assumptions depends on $\hbar$. The reason, qualitatively, is that now we have a continuum of interpretations; but when we define a meaning of the truth values of {\it N} conjunctions that is the same for all interpretations, only {\it N} interpretations (which, according to (a), can be chosen arbitrarily) can be certain. In all other interpretations, truth values of conjunctions are not certain. {\it Theorem 1. On the existence of inexpressibility.} {\it If conditions (a) and (b) are met, then either all interpretations are identical or there does not exist any algorithm, defined for $\theta$ in the interval $[\theta_{min},\theta_{max}]$, to calculate the truth values of statements in an interpretation $I^{(s')}(k)$ when some other interpretation, $I^{(s)}(k)$, is considered correct.} {\it Proof.} Let statements $\Lambda_{l}, l=1,2,...,N,$ in some interpretation, $I^{(0)}(l)$, that is considered correct, possess given truth values, and let not all interpretations be identical. Then there exists some $\theta$ that defines an interpretation, $I^{(\theta)}(k)$, different from $I^{(0)}(k)$. This means that at least one of the statements in $I^{(\theta)}$, let it be $\Lambda_{i}$, does not possess the same truth value as $\Lambda_{i}$ in $I^{(0)}$. Assume that there exists an algorithm mapping the truth values of statements in $I^{(0)}$ onto the truth values of statements in $I^{(\theta)}$. Since two different distributions of truth values among {\it N} statements are permutations of each other, any assumed algorithm should define the operation of a permutation, which should depend on $\theta$. Consider the parameter $\delta\theta=\theta/N!$
connecting any two interpretations with $s'-s=\delta\theta$. According to our assumption, this parameter defines some permutation of truth values. Consider {\it N!} such consecutive permutations, beginning from the given distribution of truth values in the initial interpretation, $I^{(0)}$. On the one hand, {\it N!} identical permutations give us the same final distribution of truth values as the initial one. But, on the other hand, since $\delta\theta\cdot N!=\theta$, we will arrive at the interpretation $I^{(\theta)}$, which is not identical to the initial interpretation. The contradiction means that our assumption about the existence of a mapping algorithm dependent on $\theta$ was wrong. It also means that not all interpretations can be certain. The truth values of {\it N} conjunctions in uncertain interpretations cannot be expressed in terms of the truth values of these conjunctions in certain interpretations. $\Box$ Thus, under the conditions satisfying Theorem 1, the problem of expressing the truth values of statements in an arbitrary interpretation, when the truth values of statements in another, certain interpretation are given, is insoluble. Therefore, in the general case, if a question arises whether a particular statement in an interpretation is true when the truth values of statements in another interpretation are given, the answer is unpredictable. If the concept of probability applies to such a system, then the probability of a certain answer can depend only on the parameter defining the difference between interpretations, $\theta$. This leads to {\it Theorem 2. Under the conditions of Theorem 1, the probabilities p$(\theta)$ of complex events do not obey classical rules, and have interference terms.} {\it Proof.} It is sufficient to prove the theorem for the simplest case {\it N}=2, in which there are only two mutually exclusive statements in every interpretation, $\Lambda$ and $\overline{\Lambda}$; the latter is the negation of the first.
Consider three pairs of interpretations, $\left( I^{(0)},I^{(\theta)}\right)$, $\left(I^{(\theta)},I^{(\theta + \vartheta)}\right)$, and $\left( I^{(0)},I^{(\theta + \vartheta)}\right)$. We will label by $s$ the statement, either $\Lambda$ or $\bar{\Lambda}$, whose truth values are interpreted in the (maybe uncertain) interpretation $I^{(s)}(k), k=1,2;\;\Lambda_{1}\equiv\Lambda, \Lambda_{2}\equiv\overline{\Lambda}$. Probabilities of the answer ``yes'' to the questions ``Is $\Lambda^{(r)}$ (or $\bar{\Lambda}^{(r)}$) true, if $\Lambda^{(s)}$ is true?'' will be denoted as $p(s,r)\equiv p(r-s)$, and $p(s,\bar{r})=1-p(s,r)$; and the probabilities of the answer ``yes'' to the same questions but with $\bar{\Lambda}^{(s)}$ instead of $\Lambda^{(s)}$ will be denoted as $p(\bar{s},r)=1 - p(s,r)$, and $p(\bar{s},\bar{r})=p(s,r)$. The equalities follow from the fact that in any interpretation, certain or not, one of the statements is true and the others are false, because the disjunction \begin{equation} \Lambda_{1}\vee\Lambda_{2}\vee ...\vee\Lambda_{N}\equiv T \end{equation} is an invariant (a tautology) independent of the choice of interpretation. Given questions corresponding to the aforementioned three pairs of interpretations, such that either $\Lambda^{(s)}$ or $\bar{\Lambda}^{(s)}$ is true in interpretation $I^{(s)}$ of every pair, the classical probability formula for the answers should be: $p(0,\theta + \vartheta)=p(0,\theta)p(\theta,\theta + \vartheta)+p(0,\bar{\theta})p(\bar{\theta},\theta + \vartheta),$ which can be rewritten as \begin{equation} p(\theta + \vartheta)=p(\theta)p(\vartheta)+(1 - p(\theta))(1 - p(\vartheta)),\;\; {\rm classical}. \end{equation} But this equation is violated, for example, when $p(\theta + \vartheta)=0$, i.e., when $(\theta + \vartheta)$ is such that in interpretation $I^{(\theta + \vartheta)}$ ``yes'' (``no'') means the same as the ``no'' (``yes'') of interpretation $I^{(0)}$.
Choosing $\theta = \vartheta$, then, gives us $0 = p^{2} + (1 - p)^{2}$, and this is impossible. Therefore, there must be an additional term in (4). Moreover, in our simple case we can calculate this term under the assumption \begin{equation} p(0,\theta)=p(\theta,0). \end{equation} Choosing $\vartheta =-\theta$ gives us another classical equation: $1=p^{2}+(1-p)^{2}$, which can be rewritten as \begin{equation} 1=\cos^{4}f(\theta)+\sin^{4}f(\theta),\;\; {\rm classical}, \end{equation} where the function $f(\theta)$ needs to be found. In this case, the needed additional term should be $2\sin^{2}f(\theta)\cos^{2}f(\theta)$. From this we can find \begin{equation} f(\theta)=a\theta, \end{equation} where $a$ is an arbitrary real number. Indeed, from (7) it follows that \begin{equation} p(\theta +\vartheta)\equiv \cos^{2}a(\theta + \vartheta)=p(\theta)p(\vartheta)+(1-p(\theta))(1-p(\vartheta))+ {\rm interference\;\;term}, \end{equation} \begin{equation} {\rm interference\;\;term}=-2\sin a\theta\, \sin a\vartheta\, \cos a\theta\, \cos a \vartheta. \end{equation} This formula gives correct results in the cases $\vartheta =-\theta$ and $a\vartheta = a\theta =\pi/4$, while deviations from (7) do not. $\Box$ When $a = 1/2$, formulae (8) and (9) coincide with the quantum formulae for a spin 1/2 placed on a plane, with rotational symmetry around the axis perpendicular to this plane; $\theta$ is the angle between two axes, $z$ and $z'$, placed on this plane; $\Lambda^{(z)}$ is the statement ``$s_{z}=1/2$''; and $\bar{\Lambda}^{(z)}$ the statement ``$s_{z}=-1/2$.'' From this analogy it is clear that we can introduce formal wave functions as superpositions of certain interpretations, as defined above, thus introducing a Hilbert space. (We will not do this here.) There is an essential difference, however, between our non-physical system and a quantum mechanical one.
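The trigonometric identity behind formulae (8) and (9) holds for any real $a$, not only the spin-1/2 value $a=1/2$. A quick numerical check (illustrative only):

```python
import numpy as np

a = 0.5                            # the spin-1/2 case; any real a works
p = lambda x: np.cos(a * x) ** 2   # probability law p(theta) from Eq. (7)

rng = np.random.default_rng(0)
for theta, var in rng.uniform(-np.pi, np.pi, size=(100, 2)):
    classical = p(theta) * p(var) + (1 - p(theta)) * (1 - p(var))
    interference = (-2 * np.sin(a * theta) * np.sin(a * var)
                    * np.cos(a * theta) * np.cos(a * var))
    # Eq. (8): the exact probability equals the classical rule plus Eq. (9).
    assert np.isclose(p(theta + var), classical + interference)
```

The identity is simply $\cos^2(A+B) = (\cos A\cos B - \sin A\sin B)^2$ expanded, which is why the interference term takes the stated product form.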
In quantum mechanics, the discreteness of spin $z$-projections is a direct result of $SU(2)$ symmetry in a three-dimensional space; such discreteness would not exist in a system with a rotational symmetry only on a plane. In our logical system, the discreteness---which is simply the discreteness of logical truth values---exists independently of symmetries. \end{document}
\begin{document} \sloppy \title{Transient Hemodynamics Prediction Using an Efficient Octree-Based Deep Learning Model} \titlerunning{Efficient Transient Hemodynamics Prediction} \author{Noah Maul\inst{1,2} \and Katharina Zinn \inst{1,2} \and Fabian Wagner \inst{1} \and Mareike Thies \inst{1} \and Maximilian Rohleder \inst{1,2} \and Laura Pfaff \inst{1,2} \and Markus Kowarschik \inst{2} \and Annette Birkhold \inst{2}\and Andreas Maier \inst{1} } \authorrunning{N. Maul et al.} \institute{Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany \and Siemens Healthcare GmbH, Forchheim, Germany\\ \email{[email protected]}} \maketitle \begin{abstract} Patient-specific hemodynamics assessment could support diagnosis and treatment of neurovascular diseases. Currently, conventional medical imaging modalities are not able to accurately acquire high-resolution hemodynamic information that would be required to assess complex neurovascular pathologies. Therefore, computational fluid dynamics (CFD) simulations can be applied to tomographic reconstructions to obtain clinically relevant information. However, three-dimensional (3D) CFD simulations require enormous computational resources and simulation-related expert knowledge that are usually not available in clinical environments. Recently, deep-learning-based methods have been proposed as CFD surrogates to improve computational efficiency. Nevertheless, the prediction of high-resolution transient CFD simulations for complex vascular geometries poses a challenge to conventional deep learning models. In this work, we present an architecture that is tailored to predict high-resolution (spatial and temporal) velocity fields for complex synthetic vascular geometries. For this, an octree-based spatial discretization is combined with an implicit neural function representation to efficiently handle the prediction of the 3D velocity field for each time step. 
The presented method is evaluated for the task of cerebral hemodynamics prediction before and during the injection of contrast agent in the internal carotid artery (ICA). Compared to CFD simulations, the velocity field can be estimated with a mean absolute error of \SI{0.024}{\meter\per\second}, whereas the run time reduces from several hours on a high-performance cluster to a few seconds on a consumer graphical processing unit. \keywords{Hemodynamics \and Octree \and Operator learning} \end{abstract} \section{Introduction} Vascular diseases are globally the leading cause of death \cite{who_cvd_2011} and thus optimal medical diagnosis and treatment are desirable. A detailed understanding of vascular abnormalities is essential for treatment planning, which includes hemodynamic information such as blood velocity and pressure. However, especially for neurovascular pathologies, quantitative blood flow information at sufficiently high resolution is difficult to acquire by conventional medical imaging alone. Hence, measurements, e.g., three-dimensional (3D) tomographic reconstructions of the abnormality, are usually coupled with computational methods. These methods include 3D computational fluid dynamics (CFD) simulations, which are based on solving partial differential equations (PDEs) numerically. However, the setup of CFD simulations requires domain-specific knowledge and enormous computational resources, which are usually not available in a clinical environment. Even without the required preprocessing steps, simulations require several hours of runtime on high-performance computing clusters. Recently, deep-learning-based models have been proposed to approximate CFD results with different input data, acting as surrogate models. Once trained, these models substantially reduce the runtime during inference \cite{Feiger2020,yuan_2022,taebi_2022,edward_2020,rutkowski_2021,du_2022}.
However, the presented approaches either regress a desired hemodynamic quantity directly (without estimating the whole 3D velocity or pressure field), work on small volumetric patches (only possible with suitable input data, e.g., magnetic resonance tomography), or are limited to steady-state simulations. To the best of our knowledge, there exists no prior work on high-resolution transient CFD surrogate models for complex vascular geometries. We hypothesize that the lack of suitable deep learning architectures and associated data representations for time-resolved volumetric vascular data has so far prohibited more accurate methods. In particular, uniformly spaced voxel volumes are inherently ill-suited for discretizing neurovascular geometries, as the size of the smallest feature dictates the required resolution and voxel size. Further, the resulting volumes are usually sparse, meaning that a large part of the volume is not covered by (image-able) vascular structures and is, therefore, not contributing to clinically relevant hemodynamics. Convolutional neural networks (CNNs) are in principle well-suited for flow prediction as they exploit local coherence \cite{xie_2018}. However, applying dense CNNs to such sparse data would require huge computational resources. Even more challenging is the application of CNNs to transient simulations, where the time dimension must be considered in the architecture, leading to a further increase in computational complexity. In this work, we present a computationally efficient deep learning architecture that is designed to infer high-resolution velocity fields for transient flow simulations in complex 3D vascular geometries. We utilize octree-based CNNs \cite{wang_2017,wang_2020}, enabling sparse convolutions, to efficiently discretize and process neurovascular geometries with a high spatial resolution. Furthermore, instead of utilizing four-dimensional or auto-regressive architectures, we formulate the regression task as an operator learning problem.
This concept has been introduced to solve PDEs in general \cite{Lu2021,li2020fourier} and has also been applied to medical problems, such as tumor ablation planning \cite{meister_2022}. The presented method is evaluated for the task of cerebral hemodynamics prediction based on a 3D digital subtraction angiography (3D DSA) acquisition, which is usually performed for treatment planning of neurovascular procedures. \section{Methods} \subsection{Problem Description and Method Overview} In this work, the goal is to train a deep-learning-based CFD surrogate model to approximate the velocity field given the vascular geometry, and the associated boundary and initial conditions, which are the commonly used input data for a hemodynamics CFD simulation. We aim to learn the solution operator for the incompressible Navier-Stokes equations \begin{equation} \begin{split} \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} &= - \nabla p + \nu \nabla^2 \mathbf{u} \\ \nabla \cdot \mathbf{u} &= 0 \, , \end{split} \end{equation} where $\mathbf{u}$ denotes the velocity, $p$ the pressure and $\nu$ the kinematic viscosity of a fluid. For this, the following steps are carried out. First, synthetic cerebral vessel trees are generated and preprocessed for simulation. Second, boundary conditions (BCs) are chosen from a representative set and a CFD solver is employed to calculate a set of reference solutions that describe the blood and total flow during injection of a contrast agent (CA). Third, the reference solutions are postprocessed, resulting in a dataset that is used to train a neural network. After training, the surrogate model is applied to unseen data. \subsection{Physics Model of 3D DSA Acquisition} Our physics model is designed to describe the physiological blood flow and the effect of the injection of contrast agent (CA) during a neurovascular 3D DSA acquisition. As CA is commonly injected into the ICA, we focus our analysis on this case. 
Like previous studies \cite{Sun_2012,Waechter_2008}, we assume that the density difference between CA and blood is negligible and model the mixture as a single-phase flow. However, due to resistances downstream, the blood flow rate before injection $Q_\text{B}$ and the injection flow rate $Q_\text{CA}$ do not simply add up, but can be described with a mixing factor $m$ \cite{Sun_2012,Mulder_2011}, $Q_\text{T}(t) = Q_\text{B}(t) + m \cdot Q_\text{CA}(t) \, ,$ where $Q_\text{T}$ denotes the total flow rate. As in previous work \cite{Sun_2012}, a mixing factor of $0.3$ is chosen. Further, we model the compliance and resistance of the contrast flow through the catheter by an analogous electrical network consisting of a resistor and a capacitor \cite{Waechter_2008}. Therefore, the injection flow rate $Q_\text{CA}(t)$ can be described by \begin{equation} Q_\text{CA}(t) = \begin{cases} 0 & t < T_\text{S} \\ Q_\text{CA}^{\text{max}} \cdot \left(1 - \text{e}^{-(t-T_\text{S})/T_\text{L}}\right) & t \geq T_\text{S} \end{cases} \, , \end{equation} where $T_\text{S}$ refers to the injection start time, $T_\text{L}$ to the lag time, and $Q_\text{CA}^{\text{max}}$ to the maximum injection rate, which is set to \SI{2.5}{\milli \liter \per \second}. Moreover, the physiological blood flow rate should reflect real conditions and is therefore derived from values reported in the literature. To generate a set of representative human inflow conditions, we follow the approach of Ford et al. \cite{Ford_2005} and Hoi et al. \cite{Hoi_2010}, who showed how the inflow waveform for the ICA can be modeled by the mean flow rate, cardiac cycle length, and age. We select a set of flow rates~$\in \{3.4,4.4,5.4\}\,\si{\milli\liter\per\second}$ and cardiac cycle lengths~$\in \{785,885, 985\}\,\si{\milli\second}$ that correspond approximately to the mean $\pm$ one standard deviation.
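The injection model above is simple enough to state as code. The sketch below is illustrative: the start time $T_\text{S}$ and lag time $T_\text{L}$ values are assumed placeholders; only $Q_\text{CA}^{\text{max}} = 2.5$ ml/s and $m = 0.3$ come from the text.

```python
import numpy as np

def q_ca(t, q_max=2.5, t_start=0.0, t_lag=0.1):
    """Injection flow rate Q_CA(t) in ml/s; t_lag models the catheter's
    resistor-capacitor lag (value assumed for illustration)."""
    t = np.asarray(t, dtype=float)
    return np.where(t < t_start, 0.0,
                    q_max * (1.0 - np.exp(-(t - t_start) / t_lag)))

def q_total(t, q_blood, m=0.3):
    """Total flow rate Q_T(t) = Q_B(t) + m * Q_CA(t) with mixing factor m."""
    return q_blood + m * q_ca(t)

t = np.linspace(0.0, 1.0, 11)
assert q_ca(-0.5) == 0.0                 # no flow before injection start
assert np.all(np.diff(q_ca(t)) >= 0)     # monotone rise toward Q_CA^max
assert 0.0 < q_ca(1.0) < 2.5             # approaches but never reaches it
```

With a constant blood flow of, say, 4.4 ml/s, the total flow saturates near $4.4 + 0.3 \cdot 2.5 = 5.15$ ml/s once the injection has ramped up.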
For each combination of the sets, two waveforms (young and elderly) are generated, leading to $18$ different inflow waveforms in total. The flow is modeled as laminar and blood as a Newtonian fluid with a kinematic viscosity $\nu$ of \SI{3.2e-6}{\meter\squared \per \second} and a density of \SI{1.06e3}{\kilogram\per\meter\cubed}. Vessel walls are assumed to be rigid; no-slip and zero-gradient pressure BCs are applied at the walls. The outlet BCs are set according to the flow-splitting method \cite{Chnafa_2018}, which avoids unrealistic zero-pressure outlet BCs. The algorithm starts at the inlet (most distal part of the ICA) and determines the flow split ratio between the branches at each bifurcation. When an outlet is reached, the associated flow rate is assigned as the BC. \subsection{Generation of Synthetic Vascular Geometries} The surrogate model is trained and evaluated with a dataset that comprises synthetic neurovascular geometries in combination with physiological BCs. Surface meshes of vessel trees are automatically generated using the 3D modeling software \textit{Blender} (Blender, version $2.92$, Blender Foundation) and the \textit{Sapling Tree Gen} addon (Sapling Tree Gen, version 0.3.4, Hale et al.) that is based on a sampling algorithm by Weber et al. \cite{sapling_paper}. The root branch is modeled as a synthetic ICA, where the radius is chosen uniformly at random between $1.62$ and \SI{1.98}{\milli\meter} \cite{Chnafa_2017}. To cover a large variety of bifurcation types, bifurcation angles are drawn uniformly at random between $35$ and \SI{135}{\degree} with an additional vertical attraction factor of $2.6$ \cite{sapling_paper}. To ensure a developed flow and avoid backflow at the outlets, flow extensions with an approximate length of five times the respective vessel diameter are added to the inlet and all outlets. Three synthetic trees are depicted in Fig.~\ref{fig:trees}.
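The flow-splitting outlet assignment described above (walk the tree from the inlet, split the flow at each bifurcation, assign the arriving flow rate at each outlet) can be sketched as a tree traversal. The radius-cubed split rule below is an illustrative stand-in only; the actual split law of Chnafa et al. is derived from the branch geometry:

```python
def assign_outlet_flows(node, inflow, outlet_flows=None):
    """Walk the vessel tree from the inlet; at each bifurcation split the
    flow among the children (here: proportional to radius^3, an assumed
    illustrative rule), and record the flow rate when an outlet is reached."""
    if outlet_flows is None:
        outlet_flows = {}
    children = node.get("children", [])
    if not children:                      # outlet reached: assign its BC
        outlet_flows[node["name"]] = inflow
        return outlet_flows
    weights = [c["radius"] ** 3 for c in children]
    total = sum(weights)
    for child, w in zip(children, weights):
        assign_outlet_flows(child, inflow * w / total, outlet_flows)
    return outlet_flows
```

By construction, the assigned outlet flow rates always sum to the prescribed inflow, which is the property that makes the resulting outlet BCs consistent with mass conservation.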
Centerlines are calculated on the resulting surface mesh, and a locally radius-adaptive polyhedral volumetric mesh containing five prismatic boundary layers, which capture steep gradients near the vessel wall, is generated \cite{Vmtk_2018,Weller_1998}. \begin{figure} \caption{Three synthetic vascular trees generated for the dataset.} \label{fig:trees} \end{figure} \subsection{CFD Simulation Setup} \label{sec:simulation} For each vessel tree, the flow rate, cardiac cycle length and age are sampled uniformly at random, and the resulting inflow curve is set as the inflow BC (plug flow velocity profile) for the simulation. The CFD software \textit{OpenFOAM} (OpenFOAM, version 8, The OpenFOAM Foundation) \cite{Weller_1998} is employed, and second order schemes are chosen for space and time discretization. An adaptive implicit time-stepping method with a maximum timestep of \SI{1}{\milli\second} is used. Overall, four cardiac cycles are simulated. The first one is used to wash out initial transient effects, while the second one reflects the hemodynamics before contrast injection. The virtual injection of CA starts with the third cardiac cycle and is continued until the end of the simulation. Each simulation is executed utilizing 40 CPUs on a high-performance cluster, requiring several hours of runtime. \subsection{Deep Learning Architecture} The proposed deep learning model is designed to infer the velocity field from the geometry and BCs. The model consists of three main building blocks, as illustrated in Fig.~\ref{fig:architecture}. In the first block (a), a 3D point cloud representation of the geometry and the BCs (inflow waveform) are processed to compute node features for each point. In the second block (b), an octree \cite{maegher_1982} is calculated from the point cloud and passed to an octree-based neural network.
In the third block (c), the output of the network is regarded as a continuous neural representation of the velocity field function that can be evaluated at a set of spatiotemporal points in parallel. The individual building blocks are described in detail in the following. \begin{figure} \caption{Overview of the proposed architecture that consists of three building blocks: geometry and BC encoding (a), octree construction and octree U-Net inference (b), and neural function evaluation (c).} \label{fig:architecture} \end{figure} \subsubsection{a) Geometry and Boundary Condition Processing} The input to the model consists of a point cloud representation of the geometry and the BCs. In the dataset, the inflow waveform provides sufficient information to solve the Navier-Stokes equations, as the outflow BCs are calculated from the inflow waveform and the geometry (flow-splitting method). Also, the initial conditions are shared across all cases. The distance field (DF) of the point cloud to the tree surface is calculated, and the 1D inflow waveform is supplied to a 1D CNN (BC Net). This network contains four blocks, each consisting of a convolution, pooling, and leaky rectified linear unit (LReLU) layer. As a last operation, the output is pooled to a four-dimensional vector by averaging over the feature maps. The feature vector and the DF are concatenated, repeating the same feature vector for all field points, and together form the point cloud features. \subsubsection{b) Octree Construction and U-Net} The octree data structure is built from the spatial information of the point cloud \cite{wang_2017} with a maximum depth level of ten, leading to an isotropic node resolution of \SI{0.15}{\milli\meter} on the finest level. At each level until the maximum depth, a node is refined if it contains one or more points, such that the adaptive octree resolution can be controlled by the input point cloud.
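A minimal sketch of this point-driven refinement rule, with nested dictionaries standing in for the actual octree implementation of Wang et al. (an assumption of this illustration only):

```python
def build_octree(points, origin, size, depth, max_depth):
    """Subdivide a cubic node whenever it contains at least one point, until
    max_depth; empty octants are never created, so resolution adapts to the
    occupied (vessel) region. Returns a nested dict; leaves keep their points."""
    node = {"origin": origin, "size": size, "depth": depth}
    if depth == max_depth or not points:
        node["points"] = points
        return node
    half = size / 2.0
    node["children"] = []
    for i in range(8):                      # the 8 child octants
        off = (i & 1, (i >> 1) & 1, (i >> 2) & 1)
        o = tuple(origin[k] + off[k] * half for k in range(3))
        inside = [p for p in points
                  if all(o[k] <= p[k] < o[k] + half for k in range(3))]
        if inside:                          # refine only occupied octants
            node["children"].append(
                build_octree(inside, o, half, depth + 1, max_depth))
    return node
```

With a maximum depth of ten over the bounding box, this rule yields exactly the behavior described above: fine cells only where the input point cloud (the vessel interior) lies.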
The point features that lie within a node on the finest level are averaged and considered the node features. To learn the neural representation of the velocity field, a U-Net architecture \cite{ronneberger_2015} is employed. Wang et al. \cite{wang_2017} introduced a convolution layer that acts directly on the octree structure, such that calculations are only performed in spatial locations where necessary (inside vessels). The U-Net consists of three downsampling (strided convolution) and three corresponding upsampling (transposed convolution) steps, starting at the maximum octree depth level down to the seventh level. The four encoder levels consist of two, three, four, and six residual bottleneck blocks \cite{he_2016}, respectively. Each decoder level comprises two bottleneck residual blocks. A bottleneck residual block consists of two $3\times3$~convolutions and one $1\times1$~convolution for the residual connection. The LReLU is used as an activation function. \subsubsection{c) Neural Function Evaluation} The output of the U-Net is regarded as a continuous neural representation of the velocity field that can be evaluated at arbitrary spatiotemporal points. Meister et al. \cite{meister_2022} employ an operator learning approach (mapping between function spaces) to predict a temperature distribution using the DeepONet architecture \cite{Lu2021} by approximating the temporal antiderivative operator applied on each voxel of a 3D grid. This allows a large number of spatiotemporal points to be evaluated efficiently in parallel. We extend this approach by exploiting the high-resolution octree representation. To evaluate the time-resolved velocity field at an arbitrary spatial point $\mathbf{x} \in \mathbb{R}^3$, the corresponding feature vector is calculated by trilinear interpolation between the neighboring octree node features at the finest octree level.
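The trilinear feature lookup can be sketched as follows; for simplicity, a dense regular grid of node features stands in for the finest octree level, which is an assumption of this illustration only:

```python
import numpy as np

def trilinear_features(grid, x, spacing):
    """Blend the feature vectors of the eight grid nodes surrounding x.

    grid: (Nx, Ny, Nz, C) array of per-node features; x: 3D query point;
    spacing: node spacing of the regular grid standing in for the octree."""
    g = np.asarray(x) / spacing
    i0 = np.floor(g).astype(int)
    f = g - i0                                   # fractional position in cell
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```

On the octree, the same weighted blend is computed over the neighboring finest-level node features instead of grid corners.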
The feature vector is transformed by two fully connected layers, and the result is split into three equally sized parts $\mathbf{b}_\mathbf{x} = (\mathbf{b}^{1}_\mathbf{x}, \mathbf{b}^{2}_\mathbf{x},\mathbf{b}^{3}_\mathbf{x})$, $\mathbf{b}^{i}_{\mathbf{x}} \in \mathbb{R}^{d}$, that contain the time dynamics information of the three approximated velocity vector components $\hat{\mathbf{u}}=(\hat{u}_1, \hat{u}_2, \hat{u}_3)$ at $\mathbf{x}$. A series of five fully connected layers and LReLU activation functions (trunk net \cite{Lu2021}) receives the time point $t$ as input and produces a latent vector $\mathbf{r_t} \in \mathbb{R}^d$. To evaluate $\hat{\mathbf{u}}(\mathbf{x}, t)$, three dot products are calculated \begin{equation} \label{eq:dot_prod} \hat{u}_i(\mathbf{x}, t) = \sum^{d}_{k = 1} b^{\,i}_{\mathbf{x}, k}\, r_{t,k} + c_i \, , \end{equation} where $i \in \{1,2,3\}$ and $c_i$ denotes a learnable bias for each velocity component. The algorithmic steps are also illustrated in Fig.~\ref{fig:architecture}. Note that this formulation only requires a single forward pass of the octree U-Net for each full time-resolved velocity field. Further, batches of $\{\mathbf{b}_\mathbf{x_i}\}$ and $\{\mathbf{r_{t_j}}\}$ can be calculated independently and in parallel. \subsection{Training Setup} To train and test the proposed model for the regression task, a suitable dataset is constructed using the synthetic geometries and simulated hemodynamics data. Overall, a virtual CA injection is simulated for 45 different virtual vessel trees, as described in Sec.~\ref{sec:simulation}. The velocity field is saved 30 times per second to be consistent with clinical 3D DSA image data, which is usually acquired at 30 frames per second. Due to differing cardiac cycle lengths, this leads to a varying number of samples per simulation. The samples are then split by geometry into training (35), validation (5) and test (5) sets.
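The evaluation step of Eq.~\ref{eq:dot_prod} above reduces to three dot products plus a bias; a minimal numerical sketch, where the feature dimension $d$ and all values are illustrative:

```python
import numpy as np

def evaluate_velocity(b_x, r_t, c):
    """Eq. for the velocity evaluation: each component u_i is the dot product
    of its branch feature part b^i_x with the trunk output r_t, plus bias c_i.

    b_x: (3, d) split branch features at point x; r_t: (d,); c: (3,)."""
    return b_x @ r_t + c      # -> (3,) predicted velocity components

d = 4
b_x = np.ones((3, d))         # illustrative branch features
r_t = np.full(d, 0.5)         # illustrative trunk output for time t
c = np.array([0.0, 1.0, -1.0])
u_hat = evaluate_velocity(b_x, r_t, c)
```

Because $\mathbf{b}_\mathbf{x}$ depends only on $\mathbf{x}$ and $\mathbf{r_t}$ only on $t$, batches over space and time factorize, which is what makes the parallel evaluation mentioned above cheap.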
For each sample, the cell centers from the volumetric mesh are used as an input point cloud to construct the octree. The same locations are also utilized to evaluate the learned velocity field. As a data augmentation technique during training, the point clouds are randomly rotated and translated, such that a greater variety of octrees can be constructed. To optimize the network parameters, the mean absolute error $\frac{1}{3N} \sum_{j=1}^N \sum_{i \in \{1,2,3\}} |\hat{u}_{ij} - u_{ij}|$ between predicted and actual velocity components is calculated for $N$ spatiotemporal points. Ten time points are randomly chosen per simulation and batched to reduce training time. The ADAM optimizer \cite{Kingma_2015} is employed to train the network until convergence, and the model with the lowest validation loss is selected for evaluation. \section{Results} The method is evaluated on the test set that contains the simulations of five separate vessel trees. As in the training phase, the network is queried at the cell centers of the CFD mesh and at 30 time steps per second until the end of the third cardiac cycle. We quantitatively compare the predictions of our method with the CFD simulations. \subsection{Quantitative Evaluation} The overall mean, standard deviation, and median of the absolute error across the whole test set are $0.024$, $0.043$ and \SI{0.010}{\meter\per\second}, respectively. To avoid time-consuming statistical processing, the time-averaged velocity field per case is computed for the network prediction and the CFD simulation. In Fig.~\ref{fig:statistics}, a regression plot over the time-averaged test set (left), as well as the error distribution for each individual case (right), is depicted. The mean absolute error of the time-averaged velocities is \SI{0.023}{\meter\per\second}, and a coefficient of determination ($R^2$) of $0.97$ is determined for the prediction. For some points, the network tends to underestimate the velocities.
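The training objective stated in the training setup above is a one-line mean absolute error over all three velocity components of the $N$ spatiotemporal points:

```python
import numpy as np

def mae_loss(u_pred, u_true):
    """Mean absolute error over all velocity components, as in the training
    objective: (1 / 3N) * sum_j sum_i |u_hat_ij - u_ij|.

    u_pred, u_true: (N, 3) velocity components at N spatiotemporal points."""
    return np.abs(u_pred - u_true).mean()

# Small illustrative check with two points (values are made up):
u_true = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
u_pred = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.6]])
```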
The quartiles of the error distribution show that \SI{75}{\percent} of all velocities can be estimated with an error smaller than \SI{0.040}{\meter\per\second}. \begin{figure} \caption{Statistical evaluation of the predicted velocity field for the test set cases. Due to the large number of points, only the time-averaged velocities are considered. The left figure depicts the joint histogram of predicted and CFD (time-averaged) velocities (all components) over the entire test set (mean absolute error of \SI{0.023}{\meter\per\second}).} \label{fig:statistics} \end{figure} \subsection{Qualitative Evaluation} \begin{figure} \caption{Evaluation of the (time-averaged) velocity field at three locations for test case five (median of mean absolute error across cases, visualized in Fig.~\ref{fig:statistics}).} \label{fig:qualitative} \end{figure} \begin{figure} \caption{Comparison of volumetric flow rates at two slices for test case five (median of mean absolute error across cases, visualized in Fig.~\ref{fig:statistics}).} \label{fig:qualitative_flowrate} \end{figure} We qualitatively evaluate the time-averaged velocity distributions for test case five (median of mean absolute error across cases), which is depicted in Fig.~\ref{fig:qualitative}. Three cross-sectional slices of the velocity magnitude at different locations and streamlines at the first bifurcation are depicted. The first slice is positioned far downstream after three bifurcations at a position where a less complex flow is expected (small radius and curvature, some distance from the previous bifurcation). To evaluate more complex flow scenarios, an additional slice is placed right before the second bifurcation. The third slice shows the cross section of the first bifurcation. Overall, both slice and streamline plots show reasonable agreement between predicted and actual velocity values as well as distributions.
Additionally, the volumetric flow rate over time is compared for the first two slices, which is displayed in Fig.~\ref{fig:qualitative_flowrate}. The flow rates agree reasonably well, but are slightly underestimated for the three systoles (first and secondary peaks). \subsection{Runtime Evaluation} We evaluate the GPU runtime of the model on an NVIDIA Quadro RTX 3000 ($6$ GB memory) graphics processing unit for one vessel tree from the test set. The overall time can be split into three parts: BC Net and U-Net forward pass $t_\text{net}$, spatial function evaluation $t_\text{spatial}$ and temporal function evaluation $t_\text{temporal}$ (trunk net forward pass and dot products). The spatial function evaluation comprises the octree interpolation and subsequent network transformation, whereas temporal evaluation refers to the execution of the trunk net and the dot products (visualized in Fig.~\ref{fig:architecture}). The spatial evaluation is performed with a batch of $10^6$ coordinates. For the temporal evaluation, the trunk net is executed with a batch of $100$ timesteps and the dot products are calculated between the output and the $10^6$ feature vectors. We measure runtimes of $204.5\pm\SI{2.7}{\milli\second}$, $92.2\pm\SI{4.9}{\milli\second}$ and $23.8\pm\SI{1.5}{\milli\second}$ for $t_\text{net}$, $t_\text{spatial}$ and $t_\text{temporal}$, respectively, where mean and standard deviation are calculated across $100$ runs for each part. Hence, assuming a velocity prediction for $N_\text{s}$ spatial points and $N_\text{t}$ temporal points, the overall runtime is approximately $t_\text{net} + N_\text{s} / 10^6 \cdot t_\text{spatial} +N_\text{t}/10^2 \cdot t_\text{temporal}$. \section{Discussion} Computational methods for 3D hemodynamics assessment of neurovascular pathologies require enormous computational resources that are usually not available in clinical environments.
We presented a method that is tailored to predict the high-resolution (spatial and temporal) velocity field given a (complex) vascular geometry and corresponding boundary and initial conditions. By combining an explicit octree discretization with an implicit neural function representation, our proposed model enables the approximation of transient 3D hemodynamic simulations within seconds. We evaluated the method for the task of hemodynamics prediction during a 3D DSA acquisition for virtual cerebral vessel trees, where CA is injected into the ICA. Once trained, the velocity field can be inferred for unseen vascular geometries with a mean absolute velocity error of $0.024\,\pm\, \SI{0.043}{\meter\per\second}$. Our quantitative and qualitative evaluation showed good agreement between the prediction of our model and the CFD ground truth. Existing approaches for predicting hemodynamics with machine learning surrogate models either regress a derived low-dimensional hemodynamic quantity directly (without outputting 3D velocity or pressure fields) \cite{Feiger2020}, rely on magnetic resonance imaging input \cite{edward_2020,rutkowski_2021}, or predict 3D steady-state simulations \cite{du_2022,Li_2021} with fixed BCs. Compared to this, our method allows the prediction of high-resolution unsteady velocity fields for varying BCs. Raissi et al. \cite{raissi_2020} use physics-informed neural networks (PINNs) to predict high-resolution transient hemodynamics inside an aneurysm, assuming that the concentration field of a transported passive scalar can be measured. In contrast to our work, this makes it possible to infer the underlying velocity field without knowledge of the BCs. However, PINNs usually must be retrained in a self-supervised manner for each inference case, which makes them difficult to apply in a medical setting; avoiding this per-case retraining is a major advantage of our method. One consideration for the application of neural networks is the ability to generalize on unseen data.
We tested our method on synthetic vessel trees that were not included in the training or validation procedure. The availability of sufficient clinical data is a common problem in medical machine learning, so synthetically generated data is often used \cite{Li_2021_aneu}. However, not all flow patterns in anatomical cerebral vessel trees might be covered by synthetic cases. In particular, pathological abnormalities that alter the flow, e.g., stenoses or aneurysms, were not considered in our study. For a clinical application, our method needs to be investigated on real clinical patient data. Our method is not limited to predicting velocity fields for medical applications. It could be applied analogously to predict pressure distributions or other quantities of interest and is suited for any vessel- or tube-shaped geometry, even outside the medical field. \section{Conclusion} The computational complexity of CFD simulations restricts patient-specific hemodynamics assessment in the clinical workflow. We presented a deep-learning-based CFD surrogate model tailored to predict the high-resolution spatial and temporal velocity field given a complex vascular geometry and BCs within seconds. We envision that our approach could form the basis for a clinical hemodynamics assessment tool that supports the diagnosis of vascular diseases and provides online feedback to clinicians during procedures. \subsubsection*{Disclaimer} The concepts and information presented are based on research and are not commercially available. \end{document}
\begin{document} \setcounter{page}{1} \title[A new class of ideal Connes amenability]{A new class of ideal Connes amenability} \author[A. Minapoor and A. Rejali]{$^1$Ahmad Minapoor, $^2$ Ali Rejali $^{*}$} \address{$^1$ Department of Mathematics, Ayatollah Borujerdi University, Borujerd, Iran} \address{$^2$ Department of Pure Mathematics, Faculty of Mathematics and Statistics, University of Isfahan, Isfahan 81746-73441, Iran } \email{$^1$\textcolor[rgb]{0.00,0.00,0.84}{shp\[email protected] }} \email{$^2$ [email protected] } \date{Received: xxxxxx; Revised: yyyyyy; Accepted: zzzzzz. \newline \indent $^{*}$ Corresponding author} \subjclass[2000]{46H20, 46H25.} \keywords{dual Banach algebra; amenability; ideal Connes amenability.} \maketitle \begin{abstract} In this paper, we introduce a new notion of amenability, called $\sigma$-ideal Connes amenability, for a large class of dual Banach algebras. We extend the concept of ideal Connes amenability and study its properties. Let $\sigma$ be a $weak^{*}$-continuous endomorphism with dense range on a dual Banach algebra $\mathcal{A}$. Then the concepts of ideal Connes amenability and $\sigma$-ideal Connes amenability coincide. We give some general results and hereditary properties, together with examples, for this new notion of amenability. \end{abstract} \section{Introduction} \noindent The study of the cohomologies of Banach algebras is an active and important area of research. Several aspects of these cohomologies have been investigated by many researchers, leading to the introduction of various notions of amenability. For example, Johnson in \cite{joh} showed that a locally compact group $G$ is amenable as a group if and only if the group algebra $L^{1}(G)$ is amenable as a Banach algebra. The amenability of $L^{1}(G)$ as a Banach algebra is equivalent to saying that $L^{1}(G)$ has vanishing first order Hochschild cohomology with coefficients in every dual Banach $L^{1}(G)$-bimodule.
For a general Banach algebra $\mathcal{A},$ Johnson in \cite{joh} showed that the amenability of $\mathcal{A}$ is equivalent to $\mathcal{A}$ having a virtual diagonal, which is also equivalent to $\mathcal{A}$ having a bounded approximate diagonal. For details on this, other important results and important references, see \cite{joh,m3,m4,m,m1,MS} and the references contained therein. \noindent The dual space structure of von Neumann algebras and the $w^*$-topology on a dual Banach algebra $\mathcal{A}$ are very important and have to be taken into consideration when studying cohomologies of a dual Banach algebra $\mathcal{A}.$ Thus, whenever an algebra carries a canonical $w^*$-topology, it is natural to consider derivations which are $w^*$-continuous. The notion of amenability that fits dual Banach algebras and von Neumann algebras was introduced by Johnson et al. in \cite{jkr} for von Neumann algebras, and then by Helemskii \cite{hel} and Runde \cite{run1} for the larger class of dual Banach algebras. This notion of amenability is called Connes amenability. A dual Banach algebra $\mathcal{A}$ is Connes-amenable if, for every normal dual Banach $\mathcal{A}$-bimodule $X$, every $weak^{*}$-continuous derivation $D\in \mathcal Z^1(\mathcal{A},X)$ is inner. Clearly, the notion of Connes amenability fully exploits the important dual space structure and the $w^*$-topology on a dual Banach algebra $\mathcal{A}.$ For further details on this, see \cite{hel,jkr,run1,run3}.
\noindent Furthermore, as a continuation of the study of cohomologies of Banach algebras, Gordji and Yazdanpanah in \cite{ey} introduced two notions of amenability for a Banach algebra $A.$ These new notions are the concepts of $I$-weak amenability and ideal amenability for Banach algebras, where $I$ is a closed ideal in $A.$ They gave some examples to show that the notion of ideal amenability is different from and weaker than the notion of weak amenability, and established some general results on the new notions of amenability. Moreover, they obtained some connections between ideal amenability and weak amenability and concluded by posing some open questions. One of these open questions was answered partially by Mewomo in \cite{m2}. An alternative notion which is related to ideal amenability was investigated in \cite{tbe}. \noindent We note that the notion of ideal amenability introduced in \cite{ey} does not fit into the dual Banach algebra setting, due to the dual space structure and the $w^*$-topology on a dual Banach algebra. To circumvent this weakness, and as a further generalization of the notion of ideal amenability, the first author, Bodaghi and Ebrahimi Bagha in \cite{mbe} introduced the notion of ideal Connes amenability for dual Banach algebras, which is weaker than the notion of Connes amenability. For example, they showed that von Neumann algebras are always ideal Connes amenable. \noindent Motivated by the above results and the ongoing research interest in this direction, we introduce $\sigma$-ideal Connes amenability for a dual Banach algebra $\mathcal{A}$, where $\sigma$ is a $weak^{*}$-continuous endomorphism of $\mathcal{A}$; this extends the notion of ideal Connes amenability to a large class of dual Banach algebras. \noindent The organization of the paper is as follows: Section \ref{Se2} contains some basic and useful definitions and the two new notions of amenability that are introduced in this work. These are needed in subsequent sections of this study.
In Section \ref{Se3}, we give some general theory and hereditary properties and establish the condition under which ideal Connes amenability and $\sigma$-ideal Connes amenability are equivalent for a dual Banach algebra $\mathcal{A}.$ Some useful and important examples to illustrate our main results are given in Section \ref{Se4}. We then conclude in Section \ref{Se5}. \section{Preliminaries and definitions}\label{Se2} \noindent In this section, we recall some basic concepts and standard definitions that are needed in subsequent sections. We also introduce the new notions of amenability that are studied in this work. Throughout this work, $\mathcal{A}$ will denote a dual Banach algebra. For more details and other basic definitions, standard notions and relevant results, see \cite{mbe} and the references contained therein. \begin{definition} A Banach algebra $\mathcal{A}$ is dual if there is a closed submodule $\mathcal{A}_{*}$ of $\mathcal{A}^{*}$ such that $(\mathcal{A}_{*})^{*}=\mathcal{A}$. Let $\mathcal{A}$ be a dual Banach algebra, let $\mathcal{I}$, with $(\mathcal{I_{*}})^{*}=\mathcal{I}$, be a $w^{*}$-closed two-sided ideal in $\mathcal{A}$, and let $\sigma$ be a $w^{*}$-continuous endomorphism of $\mathcal{A}$, i.e. a $w^{*}$-continuous algebra homomorphism from $\mathcal{A}$ into $\mathcal{A}$. We say that $\mathcal{I}$ has the $\sigma$-dual trace extension property if for each $\lambda \in \mathcal{I}$ with $\sigma(a) \cdot \lambda = \lambda \cdot \sigma(a)$ $(a \in \mathcal{A}),$ there exists $\tau \in \mathcal{A}$ such that $\tau \vert_{\mathcal{I_{*}}}= \lambda$ and $a\cdot \tau- \tau \cdot a = 0$ for all $a\in \mathcal{A}$. \end{definition} Let $\mathcal{A}$ be a dual Banach algebra.
A $w^{*}$-continuous $\sigma$-derivation from $\mathcal{A}$ into a dual Banach $\mathcal{A}$-bimodule $X$ is a $w^{*}$-continuous map $D:\mathcal{A}\longrightarrow X$ satisfying $$D(ab)=D(a)\cdot \sigma(b)+\sigma(a)\cdot D(b) \qquad (a,b \in \mathcal{A}).$$ For each $x\in X$, the mapping $\delta_{x}^{\sigma}:\mathcal{A}\longrightarrow X $, defined by $\delta_{x}^{\sigma}(a)= \sigma(a) \cdot x-x\cdot \sigma(a)$, for all $a\in \mathcal{A}$, is a $\sigma$-derivation, called an inner $\sigma$-derivation. \begin{definition} A dual Banach algebra $\mathcal{A}$ is said to be $\sigma$-Connes amenable if every $w^{*}$-continuous $\sigma$-derivation from $\mathcal{A}$ into every normal dual $\mathcal{A}$-bimodule is an inner $\sigma$-derivation. Let $\mathcal{A}$ be a dual Banach algebra, $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal in $\mathcal{A}$ and $\sigma$ a $w^{*}$-continuous endomorphism of $\mathcal{A}.$ We say that \begin{itemize} \item $\mathcal{A}$ is $\sigma$-$\mathcal{I}$-Connes amenable if every $w^{*}$-continuous $\sigma$-derivation from $\mathcal{A}$ into $\mathcal{I}$ is an inner $\sigma$-derivation. \item $\mathcal{A}$ is $\sigma$-ideally Connes amenable if it is $\sigma$-$\mathcal{I}$-Connes amenable for every $w^{*}$-closed two-sided ideal $\mathcal{I}$ in $\mathcal{A}$. \end{itemize} \end{definition} \noindent Let $\sigma$ be the identity map on $\mathcal{A}$. Then it is obvious that every ideal Connes amenable dual Banach algebra is $\sigma$-ideally Connes amenable. We denote by $\mathcal Z^1_{\sigma,w^*}(\mathcal{A},X^*)$ the space of $w^*$-continuous $\sigma$-derivations from $\mathcal{A}$ into $X^*$, and by $\mathcal N^1_{\sigma}(\mathcal{A},X^*)$ the space of all inner $\sigma$-derivations from $\mathcal{A}$ into $X^*$. In addition, $\mathcal H^1_{\sigma,w^*}(\mathcal{A},X^*)=\mathcal Z^1_{\sigma,w^*}(\mathcal{A},X^*)/\mathcal N^1_{\sigma}(\mathcal{A},X^*)$. \section{Main results}\label{Se3} \noindent In this section, we present and prove our main results.
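Before doing so, we record for completeness the direct check that each map $\delta_{x}^{\sigma}$ from Section \ref{Se2} is indeed a $\sigma$-derivation. Using that $\sigma$ is multiplicative, for $a,b\in\mathcal{A}$ and $x\in X$ we have

```latex
\begin{align*}
\delta_{x}^{\sigma}(ab) &= \sigma(ab)\cdot x - x\cdot \sigma(ab)
  = \sigma(a)\sigma(b)\cdot x - x\cdot \sigma(a)\sigma(b)\\
 &= \sigma(a)\cdot\bigl(\sigma(b)\cdot x - x\cdot\sigma(b)\bigr)
   + \bigl(\sigma(a)\cdot x - x\cdot\sigma(a)\bigr)\cdot\sigma(b)\\
 &= \sigma(a)\cdot \delta_{x}^{\sigma}(b) + \delta_{x}^{\sigma}(a)\cdot \sigma(b),
\end{align*}
```

which is exactly the $\sigma$-derivation identity; $w^{*}$-continuity of $\delta_{x}^{\sigma}$ follows from the separate $w^{*}$-continuity of the module actions and of $\sigma$.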
\begin{theorem}\label{00} Let $\mathcal{A}$ be a dual Banach algebra and $\sigma$ be a $w^{*}$-continuous endomorphism of $\mathcal{A}$ with dense range. Then ideal Connes amenability and $\sigma$-ideal Connes amenability are equivalent. \end{theorem} \begin{proof} \noindent Suppose that $\mathcal{A}$ is $\sigma$-ideally Connes amenable. Let $\mathcal{I}$ be a $w^{*}$-closed ideal of $\mathcal{A}$ and $D: \mathcal{A} \longrightarrow \mathcal{I}$ be a $w^{*}$-continuous derivation. It is easy to see that $D\circ\sigma: \mathcal{A} \longrightarrow \mathcal{I}$ is a $w^{*}$-continuous $\sigma$-derivation. By the assumption, there exists $x\in \mathcal{I}$ such that $D(\sigma(a))=\sigma(a)x-x\sigma(a)$ for all $a\in \mathcal{A}$. For an arbitrary element $b\in \mathcal{A}$, there exists a net $(a_{i})_{i}\subset \mathcal{A}$ such that $b=\lim_{i}\sigma(a_{i})$. Hence $D(b)=\lim_{i}D(\sigma(a_{i}))= \lim_{i}\bigl(\sigma(a_{i})x-x\sigma(a_{i})\bigr)= bx-xb$. Thus $D=ad_{x}$, as required. The converse is immediate. \end{proof} \begin{theorem} \label{them0} \noindent Let $\mathcal{A}$ be a $\sigma$-ideally Connes amenable dual Banach algebra, where $\sigma$ is a $w^{*}$-continuous endomorphism of $\mathcal{A}$, and let $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal in $\mathcal{A}$ with the $\sigma$-dual trace extension property. Then $\dfrac{\mathcal{A}}{\mathcal{I}}$ is a $\sigma$-ideally Connes amenable dual Banach algebra. \end{theorem} \begin{proof} \noindent Let $\dfrac{\mathcal{J}}{\mathcal{I}}$ be a $w^{*}$-closed two-sided ideal in $\dfrac{\mathcal{A}}{\mathcal{I}}$. Then $\mathcal{J}$ is a $w^{*}$-closed two-sided ideal in $\mathcal{A}$; also $^{\bot}\mathcal{I}$ is a predual of $\dfrac{\mathcal{A}}{\mathcal{I}}$. We know that $^{\bot}\mathcal{I}$ is a closed $\mathcal{A}$-submodule of $\mathcal{J_{*}}$.
Let $\pi_{*}: \mathcal{J_{*}} \longrightarrow $ $^{\bot}\mathcal{I}$ be the natural projection $\mathcal{A}$-bimodule morphism, $q: \mathcal{A} \longrightarrow \dfrac{\mathcal{A}}{\mathcal{I}}$ be the natural quotient map and $(\pi_{*})^{*}$ be the adjoint of $\pi_{*}$. Let $D: \dfrac{\mathcal{A}}{\mathcal{I}}\longrightarrow \dfrac{\mathcal{J}}{\mathcal{I}}$ be a $w^{*}$-continuous $\sigma$-derivation. Then $d=(\pi_{*})^{*}\circ D\circ q: \mathcal{A}\longrightarrow \mathcal{J}$ is a $w^{*}$-continuous $\sigma$-derivation. Indeed, for $j_{*}\in \mathcal{J}_{*}$ we have \begin{align*} \langle j_{*}, d(ab) \rangle &=\langle j_{*},(\pi_{*})^{*}( D\circ q(ab)) \rangle \\ &=\langle j_{*},(\pi_{*})^{*} \bigl( D((a+\mathcal{I})(b+ \mathcal{I}))\bigr) \rangle \\ &=\langle j_{*},(\pi_{*})^{*} \bigl(\sigma(a+\mathcal{I})\cdot D(b+\mathcal{I})+D(a+\mathcal{I})\cdot\sigma(b+\mathcal{I})\bigr) \rangle \\ &=\langle \pi_{*}( j_{*}),(\sigma(a)+\mathcal{I})\cdot D(b+\mathcal{I})+D(a+\mathcal{I})\cdot(\sigma(b)+\mathcal{I}) \rangle \\ &=\langle \pi_{*}( j_{*})\cdot (\sigma(a)+\mathcal{I}),D(b+\mathcal{I}) \rangle+\langle (\sigma(b)+\mathcal{I})\cdot \pi_{*}( j_{*}), D(a+\mathcal{I}) \rangle \\ &=\langle \pi_{*}( j_{*})\cdot \sigma(a),D(b+\mathcal{I}) \rangle+\langle \sigma(b)\cdot \pi_{*}( j_{*}), D(a+\mathcal{I}) \rangle \\ &=\langle \pi_{*}( j_{*}\cdot \sigma(a)), D(b+\mathcal{I}) \rangle+\langle \pi_{*}(\sigma(b)\cdot j_{*}), D(a+\mathcal{I}) \rangle \\ &=\langle j_{*}\cdot \sigma(a),(\pi_{*})^{*}(D(b+\mathcal{I})) \rangle+\langle \sigma(b)\cdot j_{*},(\pi_{*})^{*}( D(a+\mathcal{I})) \rangle \\ &=\langle j_{*},\sigma(a)\cdot (\pi_{*})^{*} (D \circ q(b))+(\pi_{*})^{*} ( D\circ q(a))\cdot \sigma(b) \rangle\\ &=\langle j_{*},\sigma(a)\cdot d(b)+d(a)\cdot \sigma(b) \rangle. \end{align*} Since $\mathcal{A}$ is $\sigma$-ideally Connes amenable, there exists an element $\lambda \in \mathcal{J}$ such that $d(a)=\sigma(a)\cdot \lambda - \lambda \cdot \sigma(a)$ ($a \in \mathcal{A}$).
Let $m$ be the restriction of $\lambda$ to $\mathcal{I_{*}}$; then $m \in \mathcal{I}$ and, for $i_{*}\in \mathcal{I_{*}},$ we have \begin{align*} \langle i_{*},\sigma(a)\cdot m -m\cdot \sigma(a) \rangle &=\langle i_{*}\cdot \sigma(a)-\sigma(a)\cdot i_{*},m \rangle\\ &=\langle i_{*}\cdot \sigma(a)-\sigma(a)\cdot i_{*},\lambda \rangle\\ &=\langle i_{*} , \sigma(a)\cdot \lambda -\lambda \cdot \sigma(a) \rangle\\ &=\langle i_{*}, (\pi_{*})^{*}\circ D\circ q(a) \rangle\\ &=\langle \pi_{*}(i_{*}), D\circ q(a) \rangle\\ &=\langle \pi_{*}(i_{*}), D(a+\mathcal{I}) \rangle=0. \end{align*} The last equality holds because $\pi_{*}$ is the projection onto $^{\bot}\mathcal{I}$: since $\mathcal{I_{*}}= \dfrac{\mathcal{A_{*}}}{^{\bot}\mathcal{I}}$, an element $i_{*} \in \mathcal{I_{*}}$ does not lie in $^{\bot}\mathcal{I}$, so $\pi_{*}(i_{*})=0$. Therefore $\sigma(a)\cdot m=m\cdot \sigma(a)$ for each $a \in \mathcal{A}$. Since $\mathcal{I}$ has the $\sigma$-dual trace extension property, there exists $\kappa\in \mathcal{A}$ such that $\kappa \vert_{\mathcal{I_{*}}}=m$ and $ a\cdot \kappa- \kappa \cdot a=0$ ($a \in \mathcal{A}$). Let $\tau$ be the restriction of $\kappa$ to $\mathcal{J_{*}}$. Then $\tau \in \mathcal{J}$ and $\lambda - \tau=0$ on $\mathcal{I_{*}}$. Therefore $\lambda- \tau \in \dfrac{\mathcal{J}}{\mathcal{I}}$. \noindent Now let $x$ be in $(\dfrac{\mathcal{J}}{\mathcal{I}})_{*}$; then there exists $j_{*}\in \mathcal{J}_{*}$ such that $\pi_{*}(j_{*})=x$, so we have \begin{align*} \langle x , D(a+\mathcal{I})\rangle &=\langle \pi_{*}(j_{*}), D(a+\mathcal{I})\rangle\\ &=\langle j_{*},\sigma(a)\cdot \lambda - (\sigma(a)\cdot \tau-\tau \cdot \sigma(a))-\lambda \cdot \sigma(a)\rangle\\ &=\langle j_{*},\sigma(a)\cdot \lambda-\sigma(a)\cdot \tau+\tau \cdot \sigma(a)- \lambda \cdot\sigma(a)\rangle\\ &=\langle j_{*}, \sigma(a)\cdot (\lambda- \tau)-(\lambda-\tau)\cdot \sigma(a).
\rangle \end{align*} If $j_{*}\in$ $^{\bot}\mathcal{I},$ then by the definition of $\pi_{*}$ we have $\pi_{*}(j_{*})= j_{*}$, while if $j_{*}$ is not in $^{\bot}\mathcal{I},$ then $\pi_{*}(j_{*})= 0$. In the first case, we have \begin{align*} \langle j_{*},\sigma(a)\cdot (\lambda- \tau)-(\lambda-\tau)\cdot \sigma(a) \rangle &=\langle \pi_{*}(j_{*}),\sigma(a)\cdot (\lambda-\tau)-(\lambda-\tau)\cdot \sigma(a)\rangle\\ &=\langle x , \sigma(a)\cdot (\lambda- \tau)-(\lambda-\tau)\cdot \sigma(a) \rangle. \end{align*} Hence \begin{center} $D(a+\mathcal{I})= \sigma(a)\cdot (\lambda- \tau)-(\lambda-\tau)\cdot \sigma(a) .$\end{center} This means that $D$ is an inner $\sigma$-derivation. In the second case, $D$ is likewise an inner $\sigma$-derivation. So we conclude that $\dfrac{\mathcal{A}}{\mathcal{I}}$ is $\sigma$-ideally Connes amenable. \end{proof} \begin{proposition}\label{prop1} \noindent Let $\sigma$ be a $w^{*}$-continuous endomorphism of a dual Banach algebra $\mathcal{A}$. If $H^{1}_{\sigma, w^{*}}(\mathcal{A^{\sharp}},\mathcal{A^{\sharp}})=\lbrace 0 \rbrace, $ then $H^{1}_{\sigma, w^{*}}(\mathcal{A},\mathcal{A})=\lbrace 0 \rbrace $. \end{proposition} \begin{proof} \noindent Let $D: \mathcal{A} \longrightarrow \mathcal{A}$ be a $w^{*}$-continuous $\sigma$-derivation. Note that $\mathcal{A}$ is an $\mathcal{A^{\sharp}}$-bimodule with the following module actions \begin{equation} (a+\alpha )\cdot b = a\cdot b +\alpha b, \quad b\cdot(a+\alpha)= b\cdot a + \alpha b, \nonumber \end{equation} for all $a,b \in \mathcal{A}$ and $\alpha \in \mathbb{C}$. Define $\widehat{D}:\mathcal{A}^{\sharp} \longrightarrow \mathcal{A}^{\sharp} $ by $\widehat{D}(a+\alpha)= D(a).$ Clearly $\widehat{D}$ is a $w^{*}$-continuous $\widehat{\sigma}$-derivation, where $\widehat{\sigma}(a+\alpha)=\sigma(a)$, and we may regard it as a map into $\mathcal{A^{\sharp}}$. Since $H^{1}_{\sigma, w^{*}}(\mathcal{A^{\sharp}},\mathcal{A^{\sharp}})=\lbrace 0 \rbrace $, there exists $t\in \mathcal{A}$ such that $\widehat{D}= \delta_{t}^{\sigma}$.
Hence for each $a\in \mathcal{A},$ we have \begin{eqnarray} D(a)&=& \widehat{D}(a+\alpha)\nonumber\\ &=& \widehat{\sigma}(a+\alpha)\cdot t - t\cdot \widehat{\sigma}(a+\alpha)\nonumber\\ &=& \sigma(a)\cdot t-t\cdot \sigma(a). \nonumber \end{eqnarray} \noindent This shows that $D$ is $\sigma$-inner. Thus $H^{1}_{\sigma, w^{*}}(\mathcal{A},\mathcal{A})=\lbrace 0 \rbrace $. \end{proof} \begin{proposition}\label{prop2} \noindent Let $\mathcal{A}$ be a dual Banach algebra and $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal of $\mathcal{A}$ with a bounded approximate identity. Suppose that $\sigma$ is an idempotent endomorphism of $\mathcal{A}$ such that $\sigma(\mathcal{I})=\mathcal{I}$. Then $H^{1}_{\sigma, w^{*}}(\mathcal{I},\mathcal{I})=\lbrace 0 \rbrace $ if and only if $H^{1}_{\sigma, w^{*}}(\mathcal{A},\mathcal{I})=\lbrace 0 \rbrace $. \end{proposition} \begin{proof} \noindent Suppose that $H^{1}_{\sigma, w^{*}}(\mathcal{I},\mathcal{I})=\lbrace 0 \rbrace. $ Let $D:\mathcal{A}\longrightarrow \mathcal{I}$ be a $w^{*}$-continuous $\sigma$-derivation and $i: \mathcal{I}\longrightarrow \mathcal{A}$ be the embedding map. Then $d= D\vert_{\mathcal{I}}: \mathcal{I}\longrightarrow \mathcal{I}$ is a $\sigma$-derivation. So there exists $t_{0}\in \mathcal{I}$ such that $d= \delta_{t_{0}}^{\sigma}$. Since $\mathcal{I}$ has a bounded approximate identity and $\sigma(\mathcal{I})=\mathcal{I}$, we have $\overline{\sigma(\mathcal{I}^{2})}= \overline{\mathcal{I}^{2}}=\mathcal{I}$. On the other hand, since $\mathcal{I}=\sigma(\mathcal{I})\cdot \mathcal{I}\cdot \sigma(\mathcal{I})$, we have $\mathcal{I_{*}}=\sigma(\mathcal{I})\cdot \mathcal{I_{*}}\cdot \sigma(\mathcal{I})$.
So for $i,j\in \mathcal{I}$ and $i_{*}\in \mathcal{I_{*}},$ we have \begin{align*} \langle \sigma(i) i_{*}\sigma(j),& D(a) \rangle = \langle \sigma(i) i_{*}, \sigma(j) D(a) \rangle\\ &=\langle \sigma(i) i_{*}, D(ja)- D(j)\sigma(a)\rangle\\ &=\langle \sigma(i) i_{*}, \sigma(ja)t_{0}-t_{0}\sigma(ja) \rangle - \langle \sigma(i) i_{*}, (\sigma(j)t_{0}- t_{0}\sigma(j))\sigma(a) \rangle\\ &=\langle \sigma(i) i_{*}\sigma(j), \sigma(a)t_{0} \rangle - \langle \sigma(a) \sigma(i)i_{*}, t_{0}\sigma(j) \rangle\\ &-\langle \sigma(a) \sigma(i)i_{*}, \sigma(j)t_{0} \rangle + \langle \sigma(a) \sigma(i)i_{*}, t_{0}\sigma(j) \rangle\\ &= \langle \sigma(i) i_{*}\sigma(j), \sigma(a) t_{0}-t_{0}\sigma(a) \rangle\\ &=\langle \sigma(i) i_{*}\sigma(j), \delta_{t_{0}}^{\sigma}(a) \rangle. \end{align*} Therefore $D=\delta_{t_{0}}^{\sigma}$, and so $D$ is $\sigma$-inner. Conversely, let $H^{1}_{\sigma, w^{*}}(\mathcal{A},\mathcal{I})=\lbrace 0 \rbrace $, and let $D: \mathcal{I}\longrightarrow \mathcal{I}$ be a $w^{*}$-continuous $\sigma$-derivation. Since $\mathcal{I}$ is a neo-unital Banach $\mathcal{I}$-bimodule, i.e. $\mathcal{I}= \sigma(\mathcal{I})\cdot \mathcal{I}\cdot \sigma(\mathcal{I})$, by [\cite{myr}, Proposition 4.14] $D$ has an extension $\widehat{D}: \mathcal{A}\longrightarrow \mathcal{I}$ which is also a $\sigma$-derivation; by hypothesis, $\widehat{D}$ is $\sigma$-inner, and hence so is $D$. Thus $H^{1}_{\sigma, w^{*}}(\mathcal{I},\mathcal{I})=\lbrace 0 \rbrace $. \end{proof} \begin{proposition}\label{lem9} \noindent Let $\sigma$ be a $w^{*}$-continuous endomorphism of a dual Banach algebra $\mathcal{A}$. If $\mathcal{A^{\sharp}} $ is $\widehat{\sigma}$-ideally Connes amenable, then $\mathcal{A}$ is $\sigma$-ideally Connes amenable, where $\widehat{\sigma}$ is defined by $\widehat{\sigma}(a+\alpha)= \sigma(a), \quad a\in \mathcal{A}, \alpha \in \mathbb{C}$.
\end{proposition} \begin{proof} \noindent Let $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal in $\mathcal{A}$, and let $D: \mathcal{A}\longrightarrow \mathcal{I}$ be a $w^{*}$-continuous $\sigma$-derivation. It is easy to see that $\mathcal{I}$ is a $w^{*}$-closed two-sided ideal of $\mathcal{A^{\sharp}} $, and that $\widehat{D}:\mathcal{A^{\sharp}} \longrightarrow \mathcal{I}$ with $\widehat{D}(a+\alpha)= D(a) $ is a $w^{*}$-continuous $\widehat{\sigma}$-derivation. Hence there exists $t\in \mathcal{I}$ such that $\widehat{D}=\delta_{t}^{\widehat{\sigma}}$. So for each $a\in \mathcal{A},$ we have \begin{eqnarray} D(a) &=& \widehat{D}(a+\alpha)\nonumber\\ &=&\widehat{\sigma}(a+\alpha)\cdot t-t \cdot \widehat{\sigma}(a+\alpha)\nonumber\\ &=& \sigma(a)\cdot t-t\cdot \sigma(a).\nonumber \end{eqnarray} \noindent This shows that $D$ is $\sigma$-inner. Thus $\mathcal{A}$ is $\sigma$-ideally Connes amenable. \end{proof} \begin{proposition}\label{lem10} \noindent Suppose that $\mathcal{A}$ and $\mathcal{B}$ are dual Banach algebras and that $\phi:\mathcal{A}\longrightarrow \mathcal{B}$ is a $w^{*}$-continuous epimorphism. If $\mathcal{A}$ is Connes amenable, then $\mathcal{B}$ is $\sigma$-ideally Connes amenable for every $w^{*}$-continuous endomorphism $\sigma$ of $\mathcal{B}$.
\end{proposition} \begin{proof} \noindent Let $\sigma:\mathcal{B}\longrightarrow \mathcal{B}$ be a $w^{*}$-continuous endomorphism of $\mathcal{B}$ and let $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal of $\mathcal{B}$. Then $\mathcal{I}$ is a normal dual $\mathcal{A}$-bimodule with the following actions given by :\begin{center} $a\cdot i=\sigma(\phi(a))\cdot i ,\hspace{.5cm}i\cdot a= i\cdot \sigma(\phi(a))\hspace{.5cm}(a\in \mathcal{A}, i\in \mathcal{I}).$ \end{center} Now let $D: \mathcal{B}\longrightarrow \mathcal{I}$ be a $w^{*}$-continuous $\sigma$-derivation. It is easy to check that $D\circ\phi: \mathcal{A}\longrightarrow \mathcal{I}$ is a $w^{*}$-continuous derivation. Since $\mathcal{A}$ is Connes amenable and $\mathcal{I}$ is a normal dual $\mathcal{A}$-bimodule, there exists $t\in \mathcal{I}$ such that $D\circ\phi(a)= a\cdot t-t\cdot a=\sigma(\phi(a))\cdot t-t\cdot \sigma(\phi(a))$. Since $\phi$ is an epimorphism, for each $b\in \mathcal{B}$ there exists $a\in \mathcal{A}$ such that $b=\phi(a)$, and we have \begin{equation} D(b)=\sigma(b)\cdot t-t\cdot \sigma(b). \nonumber \end{equation} \noindent This shows that $D$ is a $\sigma$-inner derivation. Thus $\mathcal{B}$ is $\sigma$-ideally Connes amenable. \end{proof} \begin{theorem}\label{thm12} \noindent Suppose that $\mathcal{I}$ is a $w^{*}$-closed two-sided ideal in a dual Banach algebra $\mathcal{A}.$ If $\mathcal{I}$ is $\sigma$-Connes amenable and $\dfrac{\mathcal{A}}{\mathcal{I}}$ is Connes amenable, then $\mathcal{A}$ is $\sigma$-Connes amenable. \end{theorem} \begin{proof} \noindent We follow the method of proof of [\cite{myr}, Proposition 4.18]. Let $X$ be a dual Banach $\mathcal{A}$-bimodule and let $D: \mathcal{A}\longrightarrow X$ be a $w^{*}$-continuous $\sigma$-derivation; note that $X$ is a Banach $\mathcal{I}$-bimodule as well.
Clearly $d=D\vert_{\mathcal{I}}: \mathcal{I}\longrightarrow X$ is a $\sigma$-derivation, and by the $\sigma$-Connes amenability of $\mathcal{I}$ there exists $x_{0}\in X$ such that $d=\delta_{x_{0},w^{*}}^{\sigma}.$ Therefore for each $i\in \mathcal{I},$ we have \begin{equation} d(i)=\sigma(i)\cdot x_{0}-x_{0}\cdot \sigma(i). \nonumber \end{equation} \noindent Set $D_{1}=D- \delta_{x_{0},w^{*}}^{\sigma}$; clearly $D_{1}$ is a $\sigma$-derivation and $D_{1}\vert _{\mathcal{I}}=0$.\\ Now let $X_{0}=\overline{\operatorname{span}}^{w^{*}}\big((X\cdot \sigma(\mathcal{I}))\cup (\sigma(\mathcal{I})\cdot X)\big)$. Then $\dfrac{X}{X_{0}}$ is a dual Banach $\dfrac{\mathcal{A}}{\mathcal{I}}$-bimodule via the following actions: \begin{center} $(a+\mathcal{I})(x+X_{0})=\sigma(a)x+X_{0}, \quad (x+X_{0})(a+\mathcal{I})=x\sigma(a)+X_{0},\hspace{.2cm}(x\in X,a\in \mathcal{A}).$ \end{center} \noindent Now, we define $\widehat{D}: \dfrac{\mathcal{A}}{\mathcal{I}}\longrightarrow \dfrac{X}{X_{0}}$ by $\langle g_{*}, \widehat{D}(a+\mathcal{I}) \rangle= \langle g_{*}, D_{1}(a) \rangle$, for $g_{*}\in (\dfrac{X}{X_{0}})_{*}={}^{\perp}X_{0} $. \noindent Let $a+\mathcal{I}=a^{\prime}+\mathcal{I}$ for some $a,a^{\prime}\in \mathcal{A}$. Then $a-a^{\prime}\in \mathcal{I}$, so $D_{1}(a-a^{\prime})=0$. Thus $D_{1}(a)= D_{1}(a^{\prime})$, and so $\widehat{D}(a+\mathcal{I})=\widehat{D}(a^{\prime}+\mathcal{I})$, which shows that $\widehat{D}$ is well defined. We claim that $\widehat{D}$ is a derivation.
\begin{align*} \langle g_{*}, \widehat{D}\big((a+\mathcal{I})(b+\mathcal{I})\big) \rangle&=\langle g_{*},D_{1}(ab) \rangle = \langle g_{*},\sigma(a)D_{1}(b)+D_{1}(a)\sigma(b) \rangle \\ &= \langle g_{*}\sigma(a), D_{1}(b) \rangle +\langle \sigma(b) g_{*}, D_{1}(a) \rangle \\ &=\langle g_{*}\cdot (a+\mathcal{I}), \widehat{D}(b+\mathcal{I}) \rangle +\langle (b+\mathcal{I})\cdot g_{*} , \widehat{D}(a+\mathcal{I}) \rangle \\ &=\langle g_{*}, (a+\mathcal{I})\cdot \widehat{D}(b+\mathcal{I}) \rangle +\langle g_{*} , \widehat{D}(a+\mathcal{I})\cdot (b+\mathcal{I}) \rangle. \end{align*} \noindent Since $\dfrac{\mathcal{A}}{\mathcal{I}}$ is Connes amenable, there exists $t\in \dfrac{X}{X_{0}}$ such that $\widehat{D}=\delta_{t}$. Thus we have \begin{align*} \langle g_{*}, D_{1}(a) \rangle &=\langle g_{*} , \widehat{D}(a+\mathcal{I}) \rangle \\ &=\langle g_{*},(a+\mathcal{I})\cdot t-t \cdot (a+\mathcal{I}) \rangle \\ &=\langle g_{*}\cdot (a+\mathcal{I}), t \rangle - \langle (a+\mathcal{I}) \cdot g_{*} , t \rangle \\ &=\langle g_{*}\cdot \sigma(a) , t \rangle - \langle \sigma(a) \cdot g_{*}, t \rangle \\ &= \langle g_{*} , \sigma(a)\cdot t- t\cdot \sigma(a) \rangle. \end{align*} \noindent So $D_{1}=\delta_{\tilde{t},w^{*}}^{\sigma}$, where $\tilde{t}\in X$ is a lift of $t$, and therefore $D=\delta_{x_{0}+\tilde{t},w^{*}}^{\sigma}.$ \end{proof} \begin{theorem}\label{pro13} \noindent Let $\mathcal{A}$ be a $\sigma$-Connes amenable dual Banach algebra. Then $\sigma(\mathcal{A})$ has an identity. \end{theorem} \begin{proof} \noindent The proof is standard, see \cite{run1}. We provide the proof for the benefit of readers. Consider $X=\mathcal{A}$ as a dual Banach $\mathcal{A}$-bimodule with the trivial left action, that is: \begin{center} $a\cdot x= 0, \hspace{.5cm} x\cdot a= xa, \hspace{.5cm}(a\in \mathcal{A}, x\in X).$ \end{center} Then $D: \mathcal{A}\longrightarrow X$, defined by $D(a)=\sigma(a)$, is a $w^{*}$-continuous $\sigma$-derivation. Since $\mathcal{A}$ is $\sigma$-Connes amenable, there exists $E\in X$ such that $D=\delta_{E}^{\sigma}$.
Hence \begin{equation} \sigma(a)= \sigma(a)\cdot E- E\cdot \sigma(a)=-E\sigma(a)\quad (a\in \mathcal{A}), \nonumber \end{equation} \noindent since the left action is trivial. Hence $-E$ is a left identity for $\sigma(\mathcal{A})$; the symmetric argument, with the trivial right action, yields a right identity, so $\sigma(\mathcal{A})$ has an identity. \end{proof} \begin{proposition}\label{thm19} \noindent Let $\mathcal{A},\mathcal{B}$ be dual Banach algebras, let $\sigma \in Hom^{w^{*}}(\mathcal{B})$ and let $\phi: \mathcal{A}\longrightarrow \mathcal{B}$ be a $w^{*}$-continuous epimorphism. If $\mathcal{A}$ is weakly amenable and commutative, then $\mathcal{B}$ is $\sigma$-ideally Connes amenable. \end{proposition} \begin{proof} \noindent Let $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal of $\mathcal{B}$ and let $D:\mathcal{B}\longrightarrow \mathcal{I}$ be an arbitrary $w^{*}$-continuous $\sigma$-derivation. Then $\mathcal{I}$ becomes a dual Banach $\mathcal{A}$-bimodule with the following module actions: \begin{center} $a\cdot i=\sigma(\phi(a))\cdot i, \quad i\cdot a=i \cdot \sigma(\phi(a))\hspace{.2cm}(a\in \mathcal{A},i\in \mathcal{I})$. \end{center} The bounded linear mapping $D\circ\phi: \mathcal{A}\longrightarrow \mathcal{I}$ is a derivation. It is easy to see that $\mathcal{B}$ is commutative, and therefore $\mathcal{I}$ is a symmetric Banach $\mathcal{B}$-bimodule; hence $\mathcal{I}$ is a symmetric Banach $\mathcal{A}$-bimodule. Now $\mathcal{A}$ is weakly amenable, thus $H^{1}(\mathcal{A}, \mathcal{I})=\lbrace 0 \rbrace$. So $D\circ \phi=0$. Consequently $D=0$, and $\mathcal{B}$ is $\sigma$-ideally Connes amenable. \end{proof} \section{Examples}\label{Se4} \noindent In this section, we give some examples to illustrate the notion of $\sigma$-ideally Connes amenability introduced in this work. These examples show that $\sigma$-ideal Connes amenability differs from ideal Connes amenability: we exhibit $\sigma$-ideally Connes amenable dual Banach algebras that are not ideally Connes amenable.
\example \noindent Let $\mathcal{A}$ be a dual Banach algebra and let $\phi \in$ Ball$(\mathcal{A^{*}})$ be a $w^{*}$-continuous homomorphism such that $\phi\mid_{\mathcal{I}}\neq 0$ for every $w^{*}$-closed two-sided ideal $\mathcal{I}$ of $\mathcal{A}$. Then $\mathcal{A}$ with the product \begin{equation} a\cdot b=\phi(a)b \quad (a,b \in \mathcal{A}) \nonumber \end{equation} becomes a Banach algebra; since this multiplication is separately $w^{*}$-continuous, $\mathcal{A}$ is a dual Banach algebra. We denote this algebra by $\mathcal{A_{\phi}}$. It is easy to see that $\mathcal{A_{\phi}}$ has a left identity $e$, while it has no right approximate identity. So by [\cite{mbe}, Proposition 2.3], $\mathcal{A_{\phi}}$ is not ideally Connes amenable. Now suppose that $\sigma: \mathcal{A_{\phi}} \longrightarrow \mathcal{A_{\phi}}$ is defined by \begin{equation} \sigma(a)=\phi(a)e. \nonumber \end{equation} \noindent We have \begin{equation} \sigma^{2}(a) = \sigma(\phi(a)e) = \phi(a)\sigma(e) = \phi(a)\phi(e)e = \phi(a)e = \sigma(a). \nonumber \end{equation} \noindent Thus $\sigma$ is idempotent. It is easy to see that $e$ is an identity for $\sigma(\mathcal{A_{\phi}})$. Let $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal of $\mathcal{A_{\phi}}$ and let $d: \mathcal{A_{\phi}}\longrightarrow \mathcal{I}$ be a non-zero $w^{*}$-continuous $\sigma$-derivation. For each $a,b\in \mathcal{A_{\phi}},$ we have \begin{equation} d(a\cdot b)=\sigma(a)\cdot d(b)+d(a)\cdot \sigma(b), \nonumber \end{equation} that is, \begin{equation} d(\phi(a)b)=\phi(a)e\cdot d(b)+d(a)\cdot \phi(b)e. \nonumber \end{equation} So we have \begin{equation} \phi(a)d(b)=\phi(a)d(b)+\phi(b)d(a)\cdot e. \nonumber \end{equation} \noindent Thus $\phi(b)d(a)\cdot e=0.$ Since $\phi\neq 0,$ we have $d(a)\cdot e=0,$ and hence $\phi(d(a)) e=0$. Since $d(a)\in \mathcal{I}$ and $\phi\mid_{\mathcal{I}}\neq 0 $, we conclude that $e=0$, which is a contradiction.
This means that every $w^{*}$-continuous $\sigma$-derivation into such an ideal is zero, hence inner. Thus $\mathcal{A_{\phi}}$ is $\sigma$-ideally Connes amenable. \example\label{ex2} \noindent Let $\mathcal A=l^{1} (\mathbb N).$ This algebra was introduced and studied in [\cite{mbe}, Example 2.8]. It is not ideally Connes amenable, since it has no approximate identity. Let $\sigma$ be a $w^{*}$-continuous idempotent endomorphism on $l^{1} (\mathbb N)$ such that $\sigma(a)(1)=a(1)$ for all $a\in l^{1} (\mathbb N)$. If $\mathcal I$ is a weak$^*$-closed two-sided ideal of $\mathcal A$ with $\mathcal I\neq \mathcal A$, we get $\mathcal I\subseteq \{f\in \mathcal A;\ f(1)=0\}$. For such an ideal $\mathcal I$, let $D: \mathcal A\longrightarrow \mathcal I$ be a weak$^*$-continuous $\sigma$-derivation. For $f \in \mathcal A,$ define the mapping $\widetilde{f}:\mathbb N \longrightarrow \mathbb C$ by $\widetilde{f} (1)=0$ and $\widetilde{f} (n)=f(n)$ for $n\geq 2$. Let $e\in l^{1} (\mathbb N)$ be such that $e(n)=0$ for $n\neq 1$ and $e(1)=1$. Then $f=f\cdot e+\widetilde{f}$, and we conclude that \begin{equation} D(f)=\sigma(f)(1)D(e)+D(\widetilde{f}). \nonumber \end{equation} \noindent For each $g\in \mathcal{A}_*$, we have \begin{equation} \langle D(\widetilde{f})\cdot \sigma(e), g\rangle=\langle D(\widetilde{f}), \sigma(e)\cdot g\rangle=\langle D(\widetilde{f}), g\rangle . \nonumber \end{equation} \noindent Since $D(\widetilde{f})\in \mathcal I$, $D(\widetilde{f})(1)=0$, so \begin{equation} D(\widetilde{f})\cdot \sigma(e)=D(\widetilde{f})(1) \sigma(e)=0,\nonumber \end{equation} and we conclude that $D(\widetilde{f})=0.$ Thus \begin{equation} D(f)=\sigma(f)(1)D(e)=\sigma(f)\cdot D(e), \nonumber \end{equation} and so \begin{equation} D(f)=\sigma(f)\cdot D(e)- D(e) \cdot \sigma(f).\nonumber \end{equation} \noindent The last equality holds since $D(e)\in \mathcal{I}$ and $D(e)(1)=0$, whence $D(e) \cdot \sigma(f)=0$.
Therefore for any weak$^*$-closed two-sided ideal $\mathcal I$ of $\mathcal A$ with $\mathcal I\neq \mathcal A$, we have $H^{1}_{\sigma, w^{*}}(\mathcal{A},\mathcal{I})=\lbrace 0 \rbrace $. \noindent We know that $C_{0} (\mathbb N)$ is $w^{*}$-dense in $l^{1} (\mathbb N)$. Let $a\in l^{1} (\mathbb N)$; then there is a net $\{a_{\alpha}\}\subset C_{0} (\mathbb N)$ such that $a_{\alpha}\longrightarrow a$ in the $w^{*}$-topology. For $f \in C_{0} (\mathbb N)^{*}$, define \begin{equation} \langle a,f \rangle:=\lim_{\alpha}\langle a_{\alpha}, f \rangle. \nonumber \end{equation} \noindent We can define the left module action of $l^{1} (\mathbb N)$ on $C_{0} (\mathbb N)^{*}$ by \begin{equation} a\cdot f= \langle a,f \rangle e. \nonumber \end{equation} \noindent We can see that \begin{equation} a\cdot (b\cdot f)=(ab)\cdot f, \nonumber \end{equation} and \begin{equation} \Vert a\cdot f \Vert \leq \Vert a \Vert \Vert f \Vert. \nonumber \end{equation} \noindent The right module action is defined by $f\cdot a=a(1)f$. It is easily verified that $C_{0} (\mathbb N)^{*}$ is an $l^{1} (\mathbb N)$-bimodule. \noindent Let $D$ be a weak$^*$-continuous $\sigma$-derivation from $l^{1} (\mathbb N)$ to $l^{1} (\mathbb N) \cong C_{0} (\mathbb N)^{*}$. For all $a,b \in l^{1} (\mathbb N),$ we have \begin{equation} D(a\cdot b)=D(a)\cdot \sigma(b)+ \sigma(a)\cdot D(b).\nonumber \end{equation} \noindent Thus \begin{equation} a(1)D(b)=\sigma(b)(1)D(a)+\langle \sigma(a), D(b) \rangle e. \nonumber \end{equation} \noindent Setting $a=b$, we get \begin{equation} \langle \sigma(a), D(a) \rangle=0.
\nonumber \end{equation} \noindent So for all $a,b \in l^{1} (\mathbb N),$ we have \begin{align*} 0=\langle \sigma(a.b), &D(a.b) \rangle=\langle \sigma(a.b) , D(a)\cdot \sigma(b)+ \sigma(a)\cdot D(b) \rangle\\ &=\langle \sigma(a).\sigma(b), \sigma(b)(1)D(a)+ \langle \sigma(a), D(b) \rangle e \rangle\\ &= \langle \sigma(a).\sigma(b), \sigma(b)(1)D(a)\rangle+ \langle \sigma(a), D(b) \rangle \langle \sigma(a)\sigma(b), e \rangle\\ &= \sigma(b)(1) \langle \sigma(a).\sigma(b), D(a) \rangle + \sigma(a)(1)\sigma(b)(1)\langle \sigma(a), D(b) \rangle\\ &= \sigma(b)(1)\sigma(a)(1)\langle \sigma(b), D(a) \rangle + \sigma(a)(1)\sigma(b)(1)\langle \sigma(a), D(b) \rangle. \end{align*} So we conclude that \begin{equation} \langle \sigma(a), D(b) \rangle = -\langle \sigma(b), D(a) \rangle. \nonumber \end{equation} \noindent Let $t\in \sigma(\mathcal{A})$; then there is $b\in l^{1} (\mathbb N) $ such that $t=\sigma(b)=\sigma(\sigma(b)).$ We obtain \begin{align*} \langle t, D(a) \rangle&=\langle t, D(e.a) \rangle\\ &=\langle t, D(e)\cdot \sigma(a)+ \sigma(e)\cdot D(a) \rangle\\ &= \langle t, D(e)\cdot \sigma(a)\rangle +\langle t, \sigma(e)\cdot D(a) \rangle\\ &=\langle \sigma(a).t, D(e) \rangle + \langle t.\sigma(e), D(a) \rangle\\ &=\langle \sigma(a). \sigma(\sigma(b)), D(e) \rangle + \langle \sigma(\sigma(b)).\sigma(e), D(a) \rangle\\ &=\langle \sigma(a). \sigma(b), D(e) \rangle +\langle \sigma(\sigma(b).e), D(a) \rangle\\ &=\langle \sigma(a). \sigma(b), D(e) \rangle - \langle \sigma(a), D(\sigma(b).e) \rangle\\ &= \langle \sigma(a). \sigma(b), D(e) \rangle - \langle \sigma(a), \sigma(b)(1)D(e) \rangle\\ &=\langle \sigma(b), D(e) \cdot \sigma(a)\rangle -\langle \sigma(a), D(e)\cdot \sigma(b) \rangle\\ &= \langle \sigma(b), D(e) \cdot \sigma(a)\rangle -\langle \sigma(b).\sigma(a), D(e) \rangle\\ &= \langle \sigma(b), D(e) \cdot \sigma(a)\rangle -\langle \sigma(b), \sigma(a)\cdot D(e) \rangle\\ &=\langle t, D(e) \cdot \sigma(a)- \sigma(a)\cdot D(e) \rangle.
\end{align*} So \begin{equation} D(a)=D(e) \cdot \sigma(a)- \sigma(a)\cdot D(e)=\delta_{-D(e)}^{\sigma}(a). \nonumber \end{equation} \noindent Thus, we conclude that $ l^{1} (\mathbb N)$ is $\sigma$-ideally Connes amenable. \example\label{ex3} \noindent Let $\mathcal{A}$ be a non-ideally Connes amenable Banach algebra with a right [left] approximate identity. It is known that $\mathcal{A}^{\sharp}$ is not ideally Connes amenable. Consider the map $\sigma:\mathcal{A}^{\sharp} \longrightarrow \mathcal{A}^{\sharp} $ defined by \begin{equation} \sigma(a+\lambda e_{\mathcal{A}^{\sharp}})=\lambda e_{\mathcal{A}^{\sharp}}, \quad (a\in \mathcal{A}, \lambda \in \mathbb{C}). \nonumber \end{equation} Then it is routinely checked that $\sigma$ is a weak$^*$-continuous endomorphism. Let $\mathcal{I}$ be a $w^{*}$-closed two-sided ideal in $\mathcal{A}^{\sharp}$; we show that every $w^{*}$-continuous $\sigma$-derivation $D:\mathcal{A}^{\sharp} \longrightarrow \mathcal{I} $ is zero. Let $(e_{i})_{i}$ be a right [left] approximate identity for $\mathcal{A}$. A simple calculation shows that $D(ae_{i})=0 $ for each $a\in \mathcal{A}$ and each $i$, and consequently, by $w^{*}$-continuity, $D(a)=0$. Moreover, since $\sigma(e_{\mathcal{A}^{\sharp}})=e_{\mathcal{A}^{\sharp}}$, we have $D(e_{\mathcal{A}^{\sharp}})=D(e_{\mathcal{A}^{\sharp}}^{2})=2D(e_{\mathcal{A}^{\sharp}})$, so $D(e_{\mathcal{A}^{\sharp}})=0$. Hence \begin{equation} D(a+\lambda e_{\mathcal{A}^{\sharp}})=D(a)+\lambda D(e_{\mathcal{A}^{\sharp}})=0. \nonumber \end{equation} That is, $D=0$, and so $\mathcal{A}^{\sharp}$ is $\sigma$-ideally Connes amenable. \section{Conclusion}\label{Se5} \noindent In this work, we introduced and studied the notions of $\sigma$-$\mathcal{I}$-Connes amenability and $\sigma$-ideally Connes amenability for a dual Banach algebra $\mathcal{A},$ where $\mathcal{I}$ is a weak$^{*}$-closed two-sided ideal in $\mathcal{A}$ and $\sigma$ is a weak$^{*}$-continuous endomorphism of $\mathcal{A}.$ Under some conditions, we proved that the ideal Connes amenability of a dual Banach algebra $\mathcal{A}$ is equivalent to its $\sigma$-ideal Connes amenability. Some general theory and hereditary properties of the notion of $\sigma$-ideal Connes amenability for dual Banach algebras are established.
Finally, we presented several examples to illustrate our results. The results obtained in this work complement and extend some existing results in the literature.\\\\ \end{document}
\begin{document} \title{The Fujita phenomenon in exterior domains under the Robin boundary conditions} \begin{center} LMPA Joseph Liouville, ULCO, FR 2956 CNRS, \\ Universit\'e Lille Nord de France \\ 50 rue F. Buisson, B.P. 699, F-62228 Calais Cedex (France)\\ \url{[email protected]}\\ \end{center} \begin{abstract} \noindent The Fujita phenomenon for nonlinear parabolic problems $\partial_t u=\Delta u +u^p$ in an exterior domain of $\mathbb{R}^N$ under the Robin boundary conditions is investigated in the superlinear case. As in the case of Dirichlet boundary conditions (see Trans. Amer. Math. Soc. 316 (1989), 595-622 and Isr. J. Math. 98 (1997), 141-156), it turns out that there exists a critical exponent $p=1+2/N$ such that blow-up of positive solutions always occurs for subcritical exponents, whereas in the supercritical case global existence can occur for small non-negative initial data.\\ \noindent \emph{Key words: } Nonlinear parabolic problems; Robin boundary conditions; Global solutions; Blow-up.\\ \end{abstract} \section{Introduction} Let $\Omega$ be an exterior domain of $\mathbb{R}^N$, that is to say a connected open set $\Omega$ such that $\overline{\Omega}^c$ is a bounded domain when $N\geq 2$, and in dimension one, $\Omega$ is the complement of a real closed interval. We always suppose that the boundary $\partial \Omega $ is of class $\mathcal{C}^2$. The outer normal unit vector field is denoted by $\nu: \partial \Omega \rightarrow \mathbb{R}^N $ and the outer normal derivative by $\partial _\nu$. Let $p$ be a real number with $p>1$, $\alpha$ a non-negative continuous function on $\partial \Omega \times \mathbb{R}^+$ and $\varphi$ a continuous function in $\overline{\Omega}$.
Consider the following nonlinear parabolic problem \begin{equation}\label{Ext_Robin} \left\{ \begin{array}{ll} \partial _t u = \Delta u + u^p& \textrm{ in }\overline{\Omega} \times (0,+\infty), \\ \partial _\nu u+\alpha u= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ u(\cdot , 0) = \varphi & \textrm{ in } \overline{\Omega} . \end{array} \right. \end{equation} In this paper, we give a positive answer to Levine \& Zhang's question \cite{LZ}: the Fujita phenomenon, well known in the case $\Omega=\mathbb{R}^N$ (see \cite{Fujita}), remains true for the Robin boundary conditions. The limiting cases $\alpha \equiv 0$ and $\alpha = +\infty$ were treated by Levine \& Zhang in \cite{LZ} and by Bandle \& Levine in \cite{BL1}, respectively. The real number $1+\frac{2}{N}$ is still the critical exponent: we prove that all positive solutions of Problem $(\ref{Ext_Robin})$ blow up for subcritical exponents $p$, whereas in the supercritical case we show the existence of global positive solutions of Problem $(\ref{Ext_Robin})$ for sufficiently small initial data. In the last section, we study the case of a general second order elliptic operator replacing the Laplacian. We also consider a nonlinearity depending on both time and space. Throughout, we shall assume that $\alpha$ is non-negative \begin{equation}\label{H0_Robin} \alpha \geq 0 \textrm{ on } \partial \Omega \times \mathbb{R}^+ , \end{equation} and, in order to deal with classical solutions, we need some regularity on $\alpha$ \begin{equation}\label{H1_Robin} \alpha \in \mathcal{C}(\partial \Omega \times \mathbb{R}^+). \end{equation} To construct solutions with the truncation procedure (see Section \ref{Prelem}), we suppose \begin{equation}\label{i-data} \varphi \in \mathcal{C}(\overline{\Omega}), \ 0<\parallel \varphi \parallel_\infty < \infty, \ \varphi \geq 0,\ \lim_{\parallel x \parallel_2 \rightarrow\infty}\varphi (x)=0 .
\end{equation} In the case $\Omega = \mathbb{R}^N$, the boundary conditions are dropped, and the result is well known from the classical paper of Fujita \cite{Fujita}. Thus we suppose $\Omega \not= \mathbb{R}^N$. \section{Preliminaries}\label{Prelem} First, we give the definition of positive solution used throughout this paper. \begin{defi} A positive solution of Problem $(\ref{Ext_Robin})$ is a positive function $u: (x,t) \mapsto u(x,t)$ of class $\mathcal{C}(\overline{\Omega} \times [0,T)) \cap \mathcal{C}^{2,1}(\overline{\Omega} \times (0,T))$, satisfying \begin{equation*} \left\{ \begin{array}{ll} \partial _t u = \Delta u + u^p& \textrm{ in }\overline{\Omega} \times (0,T), \\ \partial _\nu u+\alpha u= 0 & \textrm{ on } \partial \Omega \times (0,T), \\ u(\cdot , 0) = \varphi & \textrm{ in } \overline{\Omega} , \end{array} \right. \end{equation*} where $\alpha$ and $\varphi$ are given with $(\ref{H0_Robin})$, $(\ref{H1_Robin})$ and $(\ref{i-data})$. The time $T=T(\alpha, \varphi) \in (0,+\infty]$ denotes the maximal existence time of the solution $u$. If $T=+\infty$, the solution is called global. \end{defi} From \cite{BL1}, if $T<+\infty$, then $u$ blows up in finite time, that is to say: \begin{displaymath} \lim_{t\nearrow T} \sup_{x\in \overline{\Omega}} u(x,t)= +\infty. \end{displaymath} Then, let us recall a standard procedure to construct solutions of Problem $(\ref{Ext_Robin})$ in exterior domains for uniformly bounded and continuous initial data $\varphi$. For more details, we refer to \cite{BBR1}, \cite{JFR} and references therein. Let $( D_n)_{n \in \mathbb{N}}$ be a sequence of nested bounded domains such that \begin{displaymath} \overline{\Omega}^c \subseteq D_0 \subseteq D_1 \subseteq \dots \subseteq \bigcup_{n \in \mathbb{N}} D_ n = \mathbb{R}^N .
\end{displaymath} Let $u_n$ be the solution of \begin{equation}\label{truncP} \left\{ \begin{array}{ll} \partial _t u = \Delta u + u^p& \textrm{ in }(\overline{\Omega} \cap D_n) \times (0,+\infty), \\ \partial _\nu u+\alpha u= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ u= 0 & \textrm{ on } \partial D_n \times (0,+\infty), \\ u(\cdot , 0) = \varphi_n & \textrm{ in } \overline{\Omega} \cap D_n , \end{array} \right. \end{equation} where $(\varphi_n)_{n \in \mathbb{N}}$ denotes a sequence of functions in $\mathcal{C}_0(\overline{\Omega} \cap D_n)$ such that \begin{displaymath} 0 \leq \varphi_n \leq \varphi \textrm{ in } \overline{\Omega} \cap D_n \end{displaymath} and $\varphi_n \to \varphi$ uniformly on any compact subset of $\overline{\Omega}$ as $n \to +\infty$. Let $z$ denote the solution of the ODE \begin{equation*} \left\{ \begin{array}{ll} \dot{z} = z^p, \\ z(0)= \parallel \varphi \parallel_\infty, \end{array} \right. \end{equation*} with maximal existence time $S=\frac{1}{(p-1)\parallel \varphi \parallel_\infty^{p-1}}$; indeed, by separation of variables, $z(t)= \big( \parallel \varphi \parallel_\infty^{1-p}-(p-1)t \big)^{-\frac{1}{p-1}}$. By the comparison principle (see \cite{BDC1}), we have \begin{displaymath} 0 \leq u_n(x,t) \leq u_{n+1}(x,t) \leq z(t) \textrm{ in } (\overline{\Omega} \cap D_n) \times [0,S). \end{displaymath} Standard arguments based on a priori estimates for the heat equation imply $u_n \rightarrow u$ in the sense of $\mathcal{C}^{2,1}_{loc}(\overline{\Omega} \times (0,S))$ as $n\rightarrow +\infty$, where $u$ is a positive solution of Problem $(\ref{Ext_Robin})$. Moreover, since $u_n$ vanishes on $\partial D_n$ for each $n\in \mathbb{N}^*$, the solution $u$ vanishes at infinity: \begin{displaymath} \lim_{\parallel x \parallel_2 \rightarrow \infty} u(x,t) = 0 \ , \forall \ t \in (0,T) . \end{displaymath} \section{Blow up case} In this section, we compare the solution of Problem $(\ref{Ext_Robin})$ with an appropriate Dirichlet solution.
We prove the following theorem: \begin{theo}\label{explode} Suppose that conditions $(\ref{H0_Robin})$, $(\ref{H1_Robin})$ and $(\ref{i-data})$ are fulfilled. Then all non-trivial positive solutions of Problem $(\ref{Ext_Robin})$ blow up in finite time for $p \in (1, 1+ 2/N)$. Moreover, if $N \geq 3$, blow up also occurs for $p=1 + 2/N$. \end{theo} \emph{Proof: } Ab absurdo, suppose that there exist $\alpha$ and a non-trivial $\varphi$ satisfying the hypotheses above, and such that the solution $u$ of Problem $(\ref{Ext_Robin})$ with these data is global. Then, consider $u_n$ the solution of the truncated Problem $(\ref{truncP})$. By the comparison principle from \cite{BDC1}, we obtain \begin{displaymath} 0 \leq u_n(x,t) \leq u(x,t) \textrm{ in } \overline{\Omega} \cap D_n \textrm{ for } t >0. \end{displaymath} Thus, $u_n$ cannot blow up in finite time, and $u_n$ must be global. Next, define $v_n$ the solution of the following problem \begin{equation*} \left\{ \begin{array}{ll} \partial _t v_n = \Delta v_n + v_n^p& \textrm{ in }\overline{\Omega} \cap D_n \times (0,+\infty), \\ v_n= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ v_n= 0 & \textrm{ on } \partial D_n \times (0,+\infty), \\ v_n(\cdot , 0) = \varphi_n & \textrm{ in } \overline{\Omega} \cap D_n . \end{array} \right. \end{equation*} Again, the comparison principle from \cite{BDC1} implies $0 \leq v_n(x,t) \leq u_n(x,t)$ in $\overline{\Omega} \cap D_n$ for $t >0$. Then, we consider $v$ the solution of the Dirichlet problem \begin{equation*} \left\{ \begin{array}{ll} \partial _t v = \Delta v + v^p& \textrm{ in }\overline{\Omega} \times (0,+\infty), \\ v= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ v(\cdot , 0) = \varphi & \textrm{ in } \overline{\Omega} , \end{array} \right. \end{equation*} obtained as the limit of the $v_n$ by the truncation procedure described in Section \ref{Prelem}.
Thus, $v \leq u$ in $\overline{\Omega} \times (0,+\infty)$ and $v$ is a global positive solution, which contradicts the results of Bandle \& Levine \cite{BL1} (see \cite{BL2} for the one-dimensional case). If $N \geq 3$ and $p=1 + 2/N$, the contradiction follows from Mochizuki \& Suzuki's results \cite{MS} and \cite{Suzuki}. Hence, our solution $u$ must blow up in finite time. \\ \mbox{}\nolinebreak \rule{2mm}{2mm}\par\medbreak \section{Global existence case} Now, we consider supercritical exponents: \begin{displaymath} p > 1 + \frac{2}{N}. \end{displaymath} We look for a global positive super-solution of Problem $(\ref{Ext_Robin})$, that is, a function $U$ satisfying \begin{equation*} \left\{ \begin{array}{ll} \partial _t U \geq \Delta U + U^p& \textrm{ in }\overline{\Omega} \times (0,+\infty), \\ \partial_\nu U+\alpha U \geq 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ U(\cdot , 0) \geq \varphi & \textrm{ in } \overline{\Omega} . \end{array} \right. \end{equation*} Given such a global super-solution, the comparison principle yields a sequence $(u_n)_{n \in \mathbb{N}}$ of global positive solutions of the truncated Problems $(\ref{truncP})$. Thus, using the truncation procedure of Section \ref{Prelem}, we construct a global positive solution of Problem $(\ref{Ext_Robin})$. We use two different super-solutions, and we obtain two results on global existence, with restrictions either on the dimension $N$ or on the coefficient $\alpha$. First, we only assume that the dimension satisfies \begin{displaymath} N \geq 3. \end{displaymath} \begin{theo} Under hypotheses $(\ref{H0_Robin})$, $(\ref{H1_Robin})$ and $(\ref{i-data})$, for $ N \geq 3$ and \begin{displaymath} p > 1 + \frac{2}{N}, \end{displaymath} Problem $(\ref{Ext_Robin})$ admits global non-trivial positive solutions for sufficiently small initial data $\varphi$.
\end{theo} \emph{Proof: } Consider $\varphi$ satisfying $(\ref{i-data})$ and let $v$ be the non-trivial positive solution of the Neumann problem \begin{equation*} \left\{ \begin{array}{ll} \partial _t v = \Delta v + v^p& \textrm{ in } \overline{\Omega} \times (0,+\infty), \\ \partial _\nu v= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ v(\cdot , 0) = \varphi & \textrm{ in } \overline{\Omega}, \end{array} \right. \end{equation*} where the initial data $\varphi$ is chosen sufficiently small so that the solution $v$ is global. This choice is possible because $ N \geq 3$ and $p > 1 + 2/N$, see Levine \& Zhang \cite{LZ}. Since $\alpha \geq 0$ on $\partial \Omega \times(0,+\infty)$, we obtain \begin{displaymath} \partial _\nu v + \alpha v \geq 0 \textrm{ on } \partial \Omega \times (0,+\infty) . \end{displaymath} Thus, $v$ is a super-solution of Problem $(\ref{Ext_Robin})$, and the statement of the theorem follows. \\ \mbox{}\nolinebreak \rule{2mm}{2mm}\par\medbreak \noindent Now, we suppose that there exists a constant $c>0$ such that \begin{equation}\label{H2_Robin} \alpha \geq c \textrm{ on } \partial \Omega \times \mathbb{R}^+ . \end{equation} We do not impose any condition on the dimension. \begin{theo} Let $\alpha$ be a coefficient satisfying $(\ref{H1_Robin})$ and $(\ref{H2_Robin})$, and let $\varphi$ be initial data satisfying $(\ref{i-data})$. For \begin{displaymath} p > 1 + \frac{2}{N}, \end{displaymath} Problem $(\ref{Ext_Robin})$ admits global positive solutions for sufficiently small initial data $\varphi$. \end{theo} \emph{Proof: } We consider the function $U :\overline{\Omega} \times [0,+\infty) \longrightarrow [0,+\infty)$ defined by \begin{displaymath} U(x,t) = A (t+t_0)^{-\mu} \exp \Big( - \frac{\parallel x \parallel_2 ^2}{4(t+t_0)}\Big) , \end{displaymath} where $\mu =1/(p-1)$, $t_0>0$ and $A>0$ will be chosen below. The detailed computations are given in the proof of the general Theorem \ref{R1_Robin}.
If $A>0$ is small enough, we have \begin{displaymath} \partial_t U \geq \Delta U + U^p \textrm{ in } \overline{\Omega} \times (0,+\infty). \end{displaymath} On the boundary $\partial \Omega$, hypothesis $(\ref{H2_Robin})$ gives \begin{eqnarray*} \partial _\nu U(x,t) + \alpha U(x,t) & \geq & \Big( \frac{ -x\cdot \nu(x)}{2(t+t_0)} + \alpha(x,t) \Big) U (x,t) \\ & \geq & \Big( \frac{ -x\cdot \nu(x)}{2(t+t_0)} + c \Big) U(x,t) . \end{eqnarray*} Since the boundary $\partial \Omega$ is compact, the function $\partial \Omega \ni x \mapsto -x\cdot \nu(x) \in \mathbb{R}$ is bounded. We choose $t_0$ sufficiently large that $ -x\cdot \nu(x)/(2t_0) + c \geq 0$. Then we obtain \begin{displaymath} \partial _\nu U + \alpha U \geq 0 \textrm{ on } \partial \Omega \times (0,+\infty). \end{displaymath} Finally, if we choose $\varphi \leq U(\cdot,0)$ in $\overline{\Omega}$, the function $U$ is a super-solution of Problem $(\ref{Ext_Robin})$. \\ \mbox{}\nolinebreak \rule{2mm}{2mm}\par\medbreak \begin{rema} In the previous proof, one can note that hypothesis $(\ref{H2_Robin})$ can be relaxed to \begin{equation}\label{Rem_Robin} \alpha(x,t) \geq \frac{x\cdot \nu(x)}{2(t+t_0)} \textrm{ for all } (x,t) \in \partial \Omega \times (0,+\infty) . \end{equation} This condition yields an explicit bound on $\alpha$ only if the geometry of the domain $\Omega$ is known. For instance, if \begin{displaymath} \Omega = \{ \parallel x \parallel _2 > R \}, \end{displaymath} we obtain \begin{displaymath} x\cdot \nu(x) = -R \textrm{ for all } x \in \partial \Omega. \end{displaymath} Then, condition $(\ref{Rem_Robin})$ is equivalent to \begin{equation*} \alpha(x,t) \geq \frac{-R}{2(t+t_0)} \textrm{ for all } (x,t) \in \partial \Omega \times (0,+\infty) . \end{equation*} In particular, the previous theorem holds for all non-negative $\alpha$. \end{rema} \noindent In the one-dimensional case, using symmetry and translation, we can suppose that $\Omega = (-\infty,-1) \cup (1,+\infty)$.
Then, without any additional hypothesis on the parameters of Problem $(\ref{Ext_Robin})$, we obtain: \begin{theo} Assume the conditions $(\ref{H0_Robin})$, $(\ref{H1_Robin})$ and $(\ref{i-data})$. For dimension $N=1$ and \begin{displaymath} p > 3, \end{displaymath} Problem $(\ref{Ext_Robin})$ admits global positive solutions for sufficiently small initial data $\varphi$. \end{theo} \section{Generalization} Following Bandle \& Levine \cite{BL2}, we now generalize our results. We consider the following problem \begin{equation}\label{Ext_RobinG} \left\{ \begin{array}{ll} \partial _t u = L u + t^q \parallel x \parallel_2^s u^p& \textrm{ in }\overline{\Omega} \times (0,+\infty), \\ \partial _\nu u+\alpha u= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ u(\cdot , 0) = \varphi & \textrm{ in } \overline{\Omega} , \end{array} \right. \end{equation} where $q$ and $s$ are two positive real numbers, $p>1$ is a real number, and $L$ stands for the second order elliptic operator \begin{displaymath} L= \sum_{i,j=1}^N \partial_{x_i} \Big( a_{ij}(x) \partial_{x_j} \Big ) +\sum_{i=1}^N b_i(x) \partial_{x_i} . \end{displaymath} To deal with classical solutions, the coefficients are assumed to be in $\mathcal{C}^2(\overline{\Omega})$. We keep the hypotheses $(\ref{H0_Robin})$, $(\ref{H1_Robin})$ and $(\ref{i-data})$ on the parameters $\alpha$ and $\varphi$. In order to state our main results, we introduce some notation. Set \begin{displaymath} \rho(x)= \sum_{i,j=1}^N a_{ij}(x) \frac{x_i x_j}{\parallel x \parallel_2^2}. \end{displaymath} Throughout, we assume that the matrix $A=(a_{ij})_{1\leq i,j\leq N}$ is normalized, so that for some $\nu_0 \in (0,1]$ \begin{displaymath} 0< \nu_0 \leq \rho \leq 1 \textrm{ in } \overline{\Omega}.
\end{displaymath} Denote $b=(b_1, \dots , b_N)$ and let \begin{displaymath} l(x)= \sum_{i,j=1}^N \Big(\partial_{x_j} a_{ij}(x) - b_i(x)\Big)x_i , \end{displaymath} \begin{displaymath} l^*(x)= \sum_{i,j=1}^N \Big(\partial_{x_j} a_{ij}(x) + b_i(x)\Big)x_i . \end{displaymath} We can state the following theorem concerning the blow-up case. \begin{theo} Assume that $N \geq 2$, \begin{displaymath} \divergence b(x) \leq 0 \textrm{ in } \overline{\Omega} , \end{displaymath} and \begin{equation}\label{HG_Robin} \rho(x) \leq \frac{\trace A(x) +l(x)}{2} \textrm{ in } \overline{\Omega} . \end{equation} Then, all non-trivial positive solutions of Problem $(\ref{Ext_RobinG})$ blow up in finite time for \begin{displaymath} 1 < p < 1 + \frac{2+2q+s}{N} . \end{displaymath} \end{theo} \emph{Proof: } By contradiction, suppose that there exists a global non-trivial positive solution $v$ of Problem $(\ref{Ext_RobinG})$. As in the proof of Theorem \ref{explode}, we deduce that there exists a non-trivial positive solution $u$ of the following Dirichlet problem \begin{equation*} \left\{ \begin{array}{ll} \partial _t u = L u + t^q \parallel x \parallel_2 ^s u^p& \textrm{ in }\overline{\Omega} \times (0,+\infty), \\ u= 0 & \textrm{ on } \partial \Omega \times (0,+\infty), \\ u(\cdot , 0) = \varphi & \textrm{ in } \overline{\Omega} . \end{array} \right. \end{equation*} By Bandle \& Levine's results \cite{BL2}, the solution $u$ blows up in finite time under the above hypotheses. Thus, $v$ must blow up too. \\ \mbox{}\nolinebreak \rule{2mm}{2mm}\par\medbreak \noindent In the one-dimensional case, Bandle \& Levine weaken hypothesis $(\ref{HG_Robin})$, and we obtain: \begin{theo} Assume that $N = 1$, \begin{displaymath} \divergence b(x) \leq 0 \textrm{ in } \overline{\Omega} , \end{displaymath} and \begin{displaymath} \Big( \frac{2+2q+s}{p-1} -2 \Big) a_{11} +l >0 \textrm{ in } \overline{\Omega} .
\end{displaymath} If $1 < p < 3+2q+s$, then all non-trivial positive solutions of Problem $(\ref{Ext_RobinG})$ blow up in finite time. \end{theo} \noindent Now, we consider the global existence case. \begin{theo}\label{R1_Robin} Assume that condition $(\ref{H2_Robin})$ is satisfied, \begin{displaymath} \rho(x) \leq 1 \textrm{ in } \overline{\Omega} , \end{displaymath} and \begin{displaymath} 2\gamma_0 := \inf_{\overline{\Omega}} \Big( \trace A + l^* \Big) >0. \end{displaymath} Then, for any \begin{displaymath} p>1+\frac{2+2q+s}{2\gamma_0} , \end{displaymath} Problem $(\ref{Ext_RobinG})$ admits global non-trivial positive solutions if the initial data $\varphi$ is sufficiently small. \end{theo} \emph{Proof: } We consider the function $U :\overline{\Omega} \times [0,+\infty) \longrightarrow [0,+\infty)$ defined by \begin{displaymath} U(x,t) = A (t+t_0)^{-\mu} \exp \Big( - \frac{\parallel x \parallel_2 ^2}{4(t+t_0)}\Big) , \end{displaymath} where $\mu =(2+2q+s)/(2p-2)$, $t_0>0$ and $A>0$ will be chosen below. We have \begin{displaymath} \partial_t U(x,t) = \Big( \frac{-\mu}{t+t_0} + \frac{\parallel x \parallel_2 ^2}{4(t+t_0)^2}\Big) U(x,t) , \end{displaymath} \begin{displaymath} L U(x,t) = \Big( \rho(x)\frac{\parallel x \parallel_2 ^2}{4(t+t_0)^2} - \frac{\trace A + l^*}{2(t+t_0)} \Big) U(x,t) , \end{displaymath} and \begin{displaymath} \partial_\nu U(x,t) = \Big( \frac{-x\cdot\nu(x)}{2(t+t_0)}\Big) U(x,t) . \end{displaymath} On the boundary $\partial \Omega$, we obtain: \begin{displaymath} \partial_\nu U(x,t) + \alpha U(x,t) = \Big( \frac{-x\cdot\nu(x)}{2(t+t_0)} +\alpha \Big) U(x,t) . \end{displaymath} Thanks to hypothesis $(\ref{H2_Robin})$, and because the boundary $\partial \Omega$ is compact, we can choose $t_0$ sufficiently large that \begin{displaymath} \frac{-x\cdot\nu(x)}{2t_0} +c \geq 0 \textrm{ on } \partial \Omega. \end{displaymath} Thus, $\partial_\nu U(x,t) + \alpha U(x,t) \geq 0$ holds on $\partial \Omega \times (0,+\infty)$.
Then, in $ \overline{\Omega}$, we have \begin{displaymath} \partial_t U(x,t) - L U(x,t) = \Big( (1-\rho(x)) \frac{\parallel x \parallel_2 ^2}{4(t+t_0)^2} + \frac{\trace A + l^* -2\mu }{2(t+t_0)} \Big) U(x,t). \end{displaymath} Since $\rho \leq 1$, the first term on the right-hand side is nonnegative and may be dropped; by the definition of $\gamma_0$, we obtain \begin{equation}\label{Maj_Robin} \partial_t U(x,t) - L U(x,t) \geq \Big( \frac{ \gamma_0 -\mu }{t+t_0} \Big) U(x,t), \end{equation} with $\gamma_0 -\mu>0$. On the other hand, $t<t+t_0$ implies \begin{displaymath} t^q \parallel x \parallel_2 ^s U^p(x,t) \leq A^{p-1} \parallel x \parallel_2 ^s (t+t_0)^{q-\mu(p-1)}\exp\Big( \frac{-(p-1)\parallel x \parallel_2 ^2}{4(t+t_0)}\Big) U(x,t). \end{displaymath} Using the upper bound \begin{displaymath} \Big( \frac{2s}{p-1} \Big)^\frac{s}{2} \exp \Big( \frac{-s}{2}\Big) (t+t_0)^\frac{s}{2} \geq \parallel x \parallel_2^s \exp \Big( -\frac{\parallel x \parallel_2^2(p-1)}{4(t+t_0)}\Big) , \end{displaymath} we obtain \begin{equation}\label{Min_Robin} t^q \parallel x \parallel_2 ^s U^p(x,t) \leq A^{p-1} \Big( \frac{2s}{p-1} \Big)^\frac{s}{2} \exp \Big( \frac{-s}{2}\Big) (t+t_0)^{\frac{s}{2}+q-\mu(p-1)} U(x,t). \end{equation} By definition of $\mu$, we have $s/2+q-\mu(p-1)=-1$. Thus, it suffices to choose $A$ sufficiently small; then inequalities $(\ref{Maj_Robin})$ and $(\ref{Min_Robin})$ give \begin{displaymath} \partial_t U(x,t) - L U(x,t) \geq t^q \parallel x \parallel_2 ^s U^p(x,t). \end{displaymath} Finally, if the initial data satisfies $\varphi \leq U(\cdot,0)$ in $\overline{\Omega}$, then $U$ is a super-solution of Problem $(\ref{Ext_RobinG})$, and we can deduce the existence of a solution using the truncation procedure of Section \ref{Prelem}. \\ \mbox{}\nolinebreak \rule{2mm}{2mm}\par\medbreak \end{document}
\begin{document} \setlength{\baselineskip}{24pt} \begin{center}{ {\large \bf PHASE SPACE REPRESENTATION \\ FOR OPEN QUANTUM SYSTEMS \\ WITHIN THE LINDBLAD THEORY}\\ \vskip 1truecm A. Isar$^{\dagger\ddagger}$, A. Sandulescu$^\dagger$, and W. Scheid$^\ddagger$\\ $^\dagger${\it Department of Theoretical Physics, Institute of Atomic Physics \\ Bucharest-Magurele, Romania }\\ $^\ddagger${\it Institut f\"ur Theoretische Physik der Justus-Liebig-Universit\"at \\ Giessen, Germany }\\ } \end{center} \vskip 1truecm \begin{abstract} \vskip 0.5truecm The Lindblad master equation for an open quantum system with a Hamiltonian containing an arbitrary potential is written as an equation for the Wigner distribution function in the phase space representation. The time derivative of this function is given by a sum of three parts: the classical one, the quantum corrections and the contribution due to the opening of the system. In the particular case of a harmonic oscillator, the quantum corrections vanish. \end{abstract} \section{Introduction} In the last two decades, increasing interest has arisen in the problem of dissipation in quantum mechanics and in the search for a consistent description of open quantum systems [1-4]. Because dissipative processes imply irreversibility and, therefore, a preferred direction in time, it is generally thought that quantum dynamical semigroups are the basic tools to introduce dissipation in quantum mechanics. A general form of the generators of such semigroups was given by Lindblad [5-7]. This formalism has been studied for the case of damped harmonic oscillators [8,9] and applied to various physical phenomena, for instance, to the damping of collective modes in deep inelastic collisions in nuclear physics [10-12] and to the interaction of a two-level atom with a single mode of the electromagnetic field [13] (for a recent review see Ref. [14]).
The quasiprobability distribution functions have proven to be of great use in the study of quantum mechanical systems. They are useful not only as calculational tools, but can also provide insights into the connection between classical and quantum mechanics. The first of these quasiprobability distributions was introduced by Wigner [15] to study quantum corrections to classical statistical mechanics. The Wigner distribution function has found many applications primarily in statistical mechanics or in purely quantum mechanical problems, but also in areas such as quantum chemistry and quantum optics, collisions [16], quantum chaos [17,18], quantum fluid dynamics and for some aspects of density functional theory [19]. Quantum optics has given rise to a number of quasiprobability distributions, the best known being the Glauber $P$ distribution [20,21]. More recently, extensive use has been made of the generalized $P$ distributions [22,23]. In the previous papers [24,25] the applicability of quasiprobability distributions to the Lindblad theory was explored. In order to have a simple formalism we studied the master equation of the one-dimensional damped harmonic oscillator as an example of an open quantum system. This equation was transformed into Fokker-Planck type equations for $c$-number quasiprobability distributions associated with the density operator and a comparative study of the Glauber, antinormal ordering and Wigner representations was made. We then solved these Fokker-Planck equations, subject to different types of initial conditions. The increased activity in heavy ion physics has led to a greater interest in phase space distributions. Currently, models such as the intranuclear cascade model, the ``hot spot'' model and hydrodynamical models, which emphasize the collective and transport behaviour, are especially suitable for an examination in this framework [16].
We could also mention the nuclear reaction theory formulated in the language of the Wigner distribution function [26], the use of a modified Vlasov equation for the description of intermediate energy or high energy heavy ion collisions [27-29], the quantum molecular dynamics approach to investigate the fragment formation and the nuclear equation of state in heavy ion collisions [30] and the transport phenomena in dissipative heavy ion collisions [31]. The aim of the present paper is to use the quantum mechanical phase space Wigner distribution for a one-dimensional particle in a general potential in the framework of the Lindblad theory for open quantum systems. Since the Wigner distribution is a quantum generalization of the Boltzmann one-particle distribution, we can obtain an analogue of the Boltzmann-Vlasov equation for open quantum systems. The content of this paper is arranged as follows. In Sec. 2 we review very briefly the Lindblad model for the theory of open quantum systems, namely we formulate the basic equation of motion for the density operator. In Sec. 3 we discuss the quantum mechanical phase space Wigner distribution and its properties, which are quite useful in applications. In Sec. 4 we derive a Boltzmann-Vlasov type equation from the Lindblad master equation for the time dependence of the Wigner distribution. Finally, we discuss and summarize our results in Sec. 5. \section{Quantum mechanical Markovian master equation} A possibility for an irreversible behaviour in a finite system is to avoid the unitary time development and to consider non-Hamiltonian systems. If $S$ is a limited set of macroscopic degrees of freedom and $R$ the set of non-explicitly described degrees of freedom of a large system $S+R,$ the simplest dynamics for $S$ which describes an irreversible process is a semigroup of transformations introducing a preferred direction in time. 
In Lindblad's axiomatic formalism of introducing dissipation in quantum mechanics, the usual von Neumann-Liouville equation ruling the time evolution of closed quantum systems is replaced by the following Markovian master equation for the density operator $\hat\rho(t)$ in the Schr\"odinger picture [5]: $${d\Phi_{t}(\hat\rho)\over dt}=L(\Phi_{t}(\hat\rho)). \eqno (2.1)$$ Here, $\Phi_{t}$ denotes the dynamical semigroup describing the irreversible time evolution of the open system in the Schr\"odinger representation and $L$ is the infinitesimal generator of the dynamical semigroup $\Phi_t$. Using the Lindblad theorem, which gives the most general form of a bounded, completely dissipative generator $L$, we obtain the explicit form of the most general quantum mechanical master equation of Markovian type: $${d\hat\rho(t)\over dt}=-{i\over\hbar}[\hat H,\hat\rho(t)]+{1\over 2\hbar} \sum_{j}([\hat V_{j}\hat\rho(t),\hat V_{j}^\dagger ]+[\hat V_{j},\hat\rho(t)\hat V_{j}^\dagger ]).\eqno (2.2)$$ Here $\hat H$ is the Hamiltonian operator of the system and $\hat V_{j},$ $\hat V_{j}^\dagger $ are bounded operators on the Hilbert space $\cal H$ of the Hamiltonian. We should like to mention that the Markovian master equations found in the literature are of this form after some rearrangement of terms, even for unbounded generators. We make the basic assumption that the general form (2.2) of the master equation with a bounded generator is also valid for an unbounded generator. Since we study the one-dimensional problem, we consider that the operators $\hat H,\hat V_{j},\hat V_{j}^\dagger $ are functions of the basic observables $\hat p$ and $\hat q$ of the one-dimensional quantum mechanical system (with $[\hat q,\hat p]=i\hbar\hat I,$ where $\hat I$ is the identity operator on ${\cal H}$). For simplicity we consider that the operators $\hat V_{j},$ $\hat V_j^\dagger $ are first degree polynomials in $\hat p$ and $\hat q$.
Since in the linear space of the first degree polynomials in $\hat p$ and $\hat q$ the operators $\hat p$ and $\hat q$ give a basis, there exist only two ${\bf C}$-linearly independent operators $\hat V_{1},\hat V_{2}$ which can be written in the form $$\hat V_{j}=a_{j}\hat p+b_{j}\hat q,~j=1,2,\eqno (2.3)$$ with $a_{j},b_{j}$ complex numbers [6]. The constant term is omitted because its contribution to the generator $L$ is equivalent to terms in $\hat H$ linear in $\hat p$ and $\hat q$ which for simplicity are chosen to be zero, so that $\hat H$ is taken to be of the form $$\hat H=\hat H_0+{\mu\over 2}(\hat p \hat q+\hat q \hat p), ~~ \hat H_0={1\over 2m}\hat p^2+U(\hat q), \eqno (2.4)$$ where $U(\hat q)$ is the potential energy and $m$ is the mass of the particle. With these choices and with the notations $$D_{qq}={\hbar\over 2}\sum_{j=1,2}{\vert a_{j}\vert}^2, D_{pp}={\hbar\over 2}\sum_{j=1,2}{\vert b_{j}\vert}^2, D_{pq}=D_{qp}=-{\hbar\over 2}{\rm Re}\sum_{j=1,2}a_{j}^*b_{j}, \lambda=-{\rm Im}\sum_{j=1,2}a_{j}^*b_{j},\eqno(2.5)$$ where $a_j^*$ and $b_j^*$ denote the complex conjugates of $a_j$ and $b_j$, respectively, the master equation (2.2) takes the following form [8,14]: $${d\hat\rho \over dt}=-{i\over \hbar}[\hat H_0,\hat\rho]+{i(\lambda-\mu) \over 2\hbar} [\hat p,\hat\rho\hat q+\hat q\hat\rho]-{i(\lambda+\mu)\over 2\hbar }[\hat q,\hat\rho \hat p+\hat p\hat\rho]$$ $$-{D_{pp}\over {\hbar}^2}[\hat q,[\hat q,\hat\rho]]-{D_{qq}\over {\hbar}^2} [\hat p,[\hat p,\hat\rho]]+ {2D_{pq}\over {\hbar}^2}[\hat p,[\hat q,\hat\rho]]. \eqno (2.6)$$ Here the quantum mechanical diffusion coefficients $D_{pp},D_{qq},$ $D_{pq}$ and the friction constant $\lambda$ satisfy the following fundamental constraints [8,14]: $${\rm i})~D_{pp}>0,~{\rm ii})~D_{qq}>0, ~{\rm iii})~D_{pp}D_{qq}-{D_{pq}}^2\ge {\lambda}^2{\hbar}^2/4. \eqno (2.7)$$ The necessary and sufficient condition for $L$ to be translationally invariant is $\mu=\lambda$ [6,8,14].
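As an aside (not stated in the original), constraint ${\rm iii})$ of (2.7) is automatic for coefficients of the form (2.5): it is the Cauchy-Schwarz inequality $\vert\sum_j a_j^* b_j\vert^2 \leq \sum_j \vert a_j\vert^2 \sum_j \vert b_j\vert^2$ applied to the vectors $(a_1,a_2)$ and $(b_1,b_2)$. The following sketch checks this on random coefficients (the distribution and seed are arbitrary choices):

```python
# Random-sample check that the diffusion coefficients and friction
# constant (2.5), built from arbitrary complex numbers a_j, b_j,
# automatically satisfy constraint iii) of (2.7):
#   D_pp * D_qq - D_pq**2 >= lambda**2 * hbar**2 / 4.

import random

random.seed(1)
hbar = 1.0

def coefficients(a, b):
    # a, b: lists of complex numbers (j = 1, 2 in the paper)
    Dqq = hbar / 2 * sum(abs(x) ** 2 for x in a)
    Dpp = hbar / 2 * sum(abs(x) ** 2 for x in b)
    s = sum(x.conjugate() * y for x, y in zip(a, b))
    Dpq = -hbar / 2 * s.real   # = D_qp
    lam = -s.imag              # friction constant lambda
    return Dqq, Dpp, Dpq, lam

for _ in range(1000):
    a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    Dqq, Dpp, Dpq, lam = coefficients(a, b)
    assert Dpp * Dqq - Dpq ** 2 >= lam ** 2 * hbar ** 2 / 4 - 1e-9
```

Equality in the constraint is reached precisely when the two vectors are proportional, e.g. for a single pair $a_1=1$, $b_1=i$ one finds $D_{pp}D_{qq}-D_{pq}^2=\lambda^2\hbar^2/4$ exactly.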
In the following, general values of $\lambda$ and $\mu$ will be considered. \section{Wigner distribution function} In the phase space picture, the quantum corrections become transparent and a smooth transition from quantum to classical physics is encountered. This picture is particularly suitable for obtaining quantum mechanical results in situations where a good initial approximation comes from the classical result and also for deriving classical limits of quantal processes. A quantum mechanical particle is described by a density matrix $\hat\rho$ and the average of a function of the position and momentum operators $\hat A(\hat p,\hat q)$ (we consider the one-dimensional case) is $$<\hat A>={\rm Tr}(\hat A\hat\rho),\eqno(3.1)$$ or, in terms of a quasiprobability distribution $\phi(p,q),$ $$<\hat A>=\int dp \int dq A(p,q)\phi(p,q), \eqno (3.2) $$ where the function $A(p,q)$ can be derived from the operator $\hat A(\hat p, \hat q)$ by a well-defined correspondence rule [32]. This allows one to cast quantum mechanical results into a form in which they resemble classical ones. If $\phi$ in Eq. (3.2) is chosen to be the Wigner distribution, the correspondence between $A(p,q)$ and $\hat A$ is that proposed by Weyl [33], as was first demonstrated by Moyal [34].
The Wigner distribution function, which represents the Weyl transform of the density operator, is defined through the partial Fourier transform of the off-diagonal elements of the density matrix: $$W(p,q)={1\over\pi\hbar}\int dy<q-y\vert\hat\rho\vert q+y> e^{2ipy/\hbar} \eqno(3.3)$$ and satisfies the following properties: ${\rm i})$ $W(p,q)$ is real, but cannot be everywhere positive; ${\rm ii})$ its marginals give the position and momentum distributions and the normalization, $$\int dp W(p,q)=<q |\hat\rho|q>, \eqno(3.4)$$ $$\int dq W(p,q)=<p|\hat\rho|p>, \eqno(3.5)$$ $$\int dp\int dq W(p,q)={\rm Tr}\hat\rho=1; \eqno(3.6)$$ ${\rm iii})$ $W(p,q)$ is Galilei invariant and invariant with respect to space and time reflections; ${\rm iv})$ in the force free case the equation of motion is the classical (Liouville) one $${\partial W\over\partial t}=-{p\over m}{\partial W\over\partial q}. \eqno(3.7)$$ Wigner has shown that any real distribution function satisfying the marginal properties ${\rm ii})$ also assumes negative values for some $p$ and $q.$ The classical phase space function $A(p,q)$ corresponding to the quantum operator $\hat A$ is given by the so-called Wigner-Moyal transform: $$A(p,q)=\int dz e^{ipz/\hbar}<q-{1\over 2}z\vert\hat A\vert q+{1\over 2}z>, \eqno(3.8)$$ so that $\int\int dp dq A(p,q)=2\pi\hbar{\rm Tr}\hat A.$ Clearly, Eq. (3.3) is a special case of Eq. (3.8) for the density operator, i.e.
$\hat A=\hat\rho$ and $W(p,q)$ is the phase space function which corresponds to the operator $\hat\rho/2\pi\hbar.$ The function $A(p,q)$ is known as the Wigner equivalent of the operator $\hat A.$ The relation which expresses the Wigner equivalent $F$ of the product of operators $\hat F=\hat A\hat B$ in terms of the Wigner equivalents of the individual operators $\hat A$ and $\hat B$ is the following: $$F(p,q)=A(p,q)(\exp{\hbar\Lambda\over 2i})B(p,q)=B(p,q)(\exp(-{\hbar\Lambda\over 2i}))A(p,q),\eqno(3.9)$$ where $\Lambda$ (essentially the Poisson bracket operator) is given by $$\Lambda=\overleftarrow{\partial\over\partial p}\overrightarrow{\partial\over \partial q}-\overleftarrow{\partial\over\partial q}\overrightarrow{\partial \over\partial p}\eqno(3.10)$$ and the arrows indicate in which direction the derivatives act. From Eq. (3.9) the Wigner equivalent of the commutator $[\hat A,\hat B]$ follows directly: $$([\hat A,\hat B])(p,q)=-2iA(p,q)(\sin{\hbar\Lambda\over 2})B(p,q). \eqno(3.11)$$ Next we recall the time dependence of the Wigner distribution function $W(p,q,t)$ for a closed system [32]. Instead of the von Neumann-Liouville master equation we have the following quantum Liouville equation which determines the time evolution of the Wigner phase space distribution function (Wigner equivalent of the density operator): $$i\hbar {\partial W\over\partial t}=H(p,q)(\exp{\hbar\Lambda\over 2i})W(p,q)-W(p,q)(\exp{\hbar\Lambda\over 2i})H(p,q),\eqno(3.12)$$ or, cf. Eq. (3.11), $$\hbar {\partial W\over\partial t}=-2H(p,q)(\sin{\hbar\Lambda\over 2})W(p,q),\eqno(3.13)$$ where $H(p,q)$ is the Wigner equivalent of the Hamiltonian operator $\hat H$ of the system. Note that if we take the $\hbar\to 0$ limit of this equation, we obtain the classical Liouville equation (Vlasov equation).
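As an illustration (not part of the original text), definition (3.3) and the marginal properties (3.4)-(3.6) can be checked numerically for a Gaussian wave packet $\psi(x)=(\pi\sigma^2)^{-1/4}e^{-x^2/2\sigma^2}$, whose Wigner function is the Gaussian $W(p,q)=(\pi\hbar)^{-1}\exp(-q^2/\sigma^2-\sigma^2 p^2/\hbar^2)$. The grids and the units $\hbar=\sigma=1$ below are arbitrary choices:

```python
# Numerical check of (3.3) for a Gaussian wave packet: compare the
# computed Wigner function with its known closed form, and verify the
# normalization (3.6) and the position marginal (3.4).

import numpy as np

hbar, sigma = 1.0, 1.0

def psi(x):
    return (np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (2 * sigma**2))

y, dy = np.linspace(-8, 8, 2001, retstep=True)   # integration variable in (3.3)

def wigner(p, q):
    integrand = psi(q - y) * psi(q + y) * np.exp(2j * p * y / hbar)
    return integrand.sum().real * dy / (np.pi * hbar)

ps, dp = np.linspace(-5, 5, 81, retstep=True)
qs, dq = np.linspace(-5, 5, 81, retstep=True)
W = np.array([[wigner(p, q) for q in qs] for p in ps])

# closed form for comparison
P, Q = np.meshgrid(ps, qs, indexing="ij")
W_exact = np.exp(-Q**2 / sigma**2 - sigma**2 * P**2 / hbar**2) / (np.pi * hbar)
assert np.max(np.abs(W - W_exact)) < 1e-5

# property (3.6): total normalization Tr(rho) = 1
norm = W.sum() * dp * dq
assert abs(norm - 1.0) < 1e-3

# property (3.4): integrating over p gives the position density |psi(q)|^2
pos_marginal = W.sum(axis=0) * dp
assert np.max(np.abs(pos_marginal - psi(qs) ** 2)) < 1e-3
```

For this particular (pure Gaussian) state $W$ happens to be everywhere positive; property ${\rm i})$ above forces negative values only for more general states.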
\section{The equation of motion for the Wigner distribution of open systems} In two previous papers [8,24] the evolution equations for the Wigner function were obtained for the damped harmonic oscillator within the Lindblad theory for open quantum systems. In this Section we obtain the equation for the Wigner function of a one-dimensional system with an arbitrary potential. In other words, we obtain, by means of the Wigner function, the phase space representation of the general Lindblad master equation. The time evolution of the Wigner distribution function corresponding to the Lindblad master equation (2.2) can be obtained from Eq. (3.12) by adding to the right-hand side the Wigner equivalent corresponding to the opening part of Eq. (2.2), i.e. the sum of commutators. By using formulas (3.9) and (3.11), we obtain the following evolution equation for the Wigner distribution: $${\partial W\over\partial t}=-{2\over\hbar}H(\sin{\hbar\Lambda\over2})W$$ $$+{1\over 2\hbar}\sum_j\left [2V_j(\exp{\hbar\Lambda\over 2i})W(\exp{\hbar \Lambda\over 2i})V_j^*-V_j^*(\exp{\hbar\Lambda\over 2i})V_j(\exp{\hbar \Lambda\over 2i})W-W(\exp{\hbar\Lambda\over 2i})V_j^*(\exp{\hbar \Lambda\over 2i})V_j\right ], \eqno(4.1)$$ where $V_j$ and $V_j^*$ are the Wigner equivalents of the operators $\hat V_j$ and $\hat V_j^\dagger,$ respectively, and $V_j^*$ is the complex conjugate of $V_j.$ If the operators $\hat V_j$ are taken to be of the form (2.3), then Eq. (4.1) becomes the evolution equation for the Wigner function corresponding to the master equation (2.6): $$ {\partial W\over \partial t}=-{2 \over\hbar}H(\sin{\hbar\Lambda\over 2})W+ \lambda{\partial\over\partial q}(qW)+\lambda{\partial\over\partial p}(pW)$$ $$+ D_{qq}{\partial^2 W\over \partial q^2}+D_{pp}{\partial^2 W \over \partial p^2}+2D_{pq}{\partial^2 W \over \partial p \partial q}.
\eqno (4.2) $$ Here it is easy to see that the first term on the right-hand side generates the phase space evolution of a closed system, giving the Poisson bracket and the higher derivatives which contain the quantum contributions, while the following terms represent the contribution from the opening (interaction with the environment). For a Hamiltonian operator of the form (2.4) and by assuming a Taylor expansion of the potential $U,$ i.e. $U(q)$ is an analytic function, this equation takes the form: $${\partial W\over\partial t}=-{p\over m}{\partial W\over\partial q} +{\partial U\over\partial q}{\partial W\over\partial p}+\sum_{n=1} ^\infty(-1)^n{(\hbar)^{2n}\over2^{2n}(2n+1)!}{\partial^{2n+1}U(q)\over\partial q^{2n+1}}{\partial^{2n+1}W\over\partial p^{2n+1}}+{\cal L}W,\eqno(4.3)$$ where we have introduced the notation $${\cal L}W\equiv (\lambda-\mu){\partial\over\partial q}(qW)+(\lambda+\mu) {\partial\over\partial p}(pW) + D_{qq}{\partial^2 W\over \partial q^2}+D_{pp}{\partial^2 W \over \partial p^2}+2D_{pq}{\partial^2 W \over \partial p \partial q}. \eqno (4.4) $$ If Eq. (4.3) had only the first two terms on the right-hand side, $W$ would evolve along the classical flow in phase space. The terms containing $\lambda$ and $\mu$ are the dissipative terms. They modify the flow of $W$ and cause a contraction of each volume element in phase space. The terms containing $D_{pp}, D_{qq}$ and $D_{pq}$ are the diffusive terms and produce an expansion of the volume elements. The diffusion terms are responsible for noise and also for the destruction of interference, by erasing the structure of the Wigner function on small scales. The sum term (the power series), together with the first two terms, makes up the unitary part of the evolution. Hence, up to corrections of order $\hbar^2,$ unitary evolution corresponds to approximately classical evolution of the Wigner function.
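As a quick symbolic check (not in the original), the coefficient of the $n$-th term of the quantum-correction series in Eq. (4.3) can be evaluated for concrete potentials: it vanishes identically for a quadratic potential, while for a quartic term $Cq^4$ only $n=1$ survives, with coefficient $-C\hbar^2 q$, consistent with the anharmonic example discussed below. A sketch using sympy:

```python
# Symbolic check of the quantum-correction series in Eq. (4.3): the
# n-th term carries the coefficient
#   (-1)**n * hbar**(2n) / (2**(2n) * (2n+1)!) * d^(2n+1)U/dq^(2n+1).

import sympy as sp

q, hbar, m, omega, C = sp.symbols("q hbar m omega C", positive=True)

def correction_coeff(U, n):
    return (-1)**n * hbar**(2*n) / (2**(2*n) * sp.factorial(2*n + 1)) \
        * sp.diff(U, q, 2*n + 1)

U_harm = m * omega**2 * q**2 / 2
U_quartic = C * q**4

# harmonic oscillator: no quantum corrections at any order
assert all(correction_coeff(U_harm, n) == 0 for n in range(1, 5))

# quartic potential: only n = 1 survives and equals -C*hbar**2*q
assert sp.simplify(correction_coeff(U_quartic, 1) + C * hbar**2 * q) == 0
assert all(correction_coeff(U_quartic, n) == 0 for n in range(2, 5))
```

The same routine reproduces the general observation that for any polynomial potential of degree $N$ the series terminates after the term with $2n+1=N$ derivatives.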
It is partly for this reason that the evolution of a quantum system is most conveniently undertaken in the Wigner representation. The higher corrections can often be assumed to be negligible; they give structures on small scales. There are, however, important examples where they cannot be neglected, e.g., in chaotic systems (see the discussion below). From Eq. (4.3) it is clear that, as a consequence of the quantum correction terms with higher derivatives, the Wigner function of a non-linear system does not follow the classical Liouville flow. The higher derivative terms are generated by the nonlinearities in the potential $U(q).$ The question of the classical limit of the Wigner equation is of more than formal interest [35]. There are two well-known limits in which Eq. (4.3) can go over into a classical equation: 1) when $U$ is at most quadratic in $q$ and 2) when $\hbar\to 0.$ Because of the extra diffusion terms, we get yet a third classical limit: in the limit of large $D_{pp},$ the diffusive smoothing becomes so effective that it damps out all the momentum-derivatives in the infinite sum and Eq. (4.3) approaches the Liouville equation with diffusion, an equation of Fokker-Planck type. This is an example of how macroscopic objects start to behave classically (decoherence), since the diffusion coefficients are roughly proportional to the size of these objects. Thus an object will evolve according to classical dynamics if it has a strong interaction with its environment. The connection between decoherence and the transition from quantum to classical in the framework of the Lindblad theory for open quantum systems will form the subject of another paper [36]. In the following we shall consider Eq. (4.3) for some particular cases. 1) In the case of a free particle, i.e. $U(q)=0,$ Eq. (4.3) takes the form: $${\partial W\over\partial t}=-{p\over m}{\partial W\over\partial q} +{\cal L}W.
\eqno(4.5)$$ 2) In the case of a linear potential $U=\gamma q$ (where for example $\gamma=mg$ for the free fall or $\gamma=eE$ for the motion in a uniform electric field), one gets $${\partial W\over\partial t}=-{p\over m}{\partial W\over\partial q} +\gamma{\partial W\over\partial p}+{\cal L}W. \eqno(4.6)$$ 3) In the case of the harmonic oscillator with $U=m\omega^2 q^2/2,$ Eq. (4.3) takes the form: $${\partial W\over \partial t}=-{p \over m}{\partial W \over \partial q}+m \omega^2q{\partial W\over \partial p}+{\cal L}W. \eqno(4.7)$$ An analogous equation can be obtained in the case of the motion on an inverted parabolic potential $U(q)=-m\kappa^2 q^2/2.$ Since the drift coefficients are linear in the variables $p$ and $q$ and the diffusion coefficients are constant with respect to $p$ and $q,$ Eqs. (4.5)-(4.7) describe an Ornstein-Uhlenbeck process [37-39]. Eqs. (4.5)-(4.7) are exactly equations of the Fokker-Planck type. Eq. (4.7) was extensively studied in the literature [8,14,25] and it represents an exactly solvable model. It should be stressed that not every function $W(p,q,0)$ on the phase space is the Wigner transform of a density operator. Hence, quantum mechanics now appears in the restrictions imposed on the initial condition $W(p,q,0)$ for Eq. (4.3). The most frequently used choice for $W(p,q,0)$ is a Gaussian function and Eqs. (4.5)-(4.7) preserve this Gaussian type, i.e., $W(p,q,t)$ remains a Gaussian function at all times, so that the differences between quantum and classical mechanics are completely lost in this representation of the master equation.
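The Ornstein-Uhlenbeck character of these equations is easy to illustrate numerically. The following sketch (with arbitrary illustrative parameters, taking $\mu=0$ and $D_{qq}=D_{pq}=0$, a simplification not made in the text) propagates an ensemble of Langevin trajectories whose phase-space density obeys the Fokker-Planck equation (4.7), and checks that an initially Gaussian ensemble stays Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters; mu = 0 and D_qq = D_pq = 0 here
m, omega, lam, Dpp = 1.0, 1.0, 0.2, 0.1
dt, steps, N = 2e-3, 2000, 10000

# Langevin trajectories whose density obeys Eq. (4.7) with these choices:
#   dq = (p/m - lam*q) dt
#   dp = (-m*omega^2*q - lam*p) dt + sqrt(2*Dpp) dW
q = np.zeros(N)
p = rng.normal(0.0, 0.1, N)          # Gaussian initial ensemble
for _ in range(steps):
    q, p = (q + (p/m - lam*q)*dt,
            p + (-m*omega**2*q - lam*p)*dt
              + np.sqrt(2*Dpp*dt)*rng.normal(size=N))

# Linear drift plus constant diffusion keeps a Gaussian Gaussian:
# the excess kurtosis of the momentum marginal stays near zero
kurt = np.mean((p - p.mean())**4)/np.var(p)**2 - 3.0
assert abs(kurt) < 0.3
```

Because the drift is linear and the diffusion constant, only the first and second moments of the distribution carry dynamical information, which is the sense in which the quantum-classical differences are lost.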
4) In the case of an exponential potential $U(q)=\alpha\exp(-\beta q),$ the Wigner equation is an infinite order partial differential equation $${\partial W\over\partial t}=-{p\over m}{\partial W\over\partial q} -\alpha\beta\exp(-\beta q){\partial W\over\partial p}+\sum_{n=1} ^\infty(-1)^{n+1}{(\hbar)^{2n}\over2^{2n}(2n+1)!}\alpha\beta^{2n+1}\exp(-\beta q){\partial^{2n+1}W\over\partial p^{2n+1}}+{\cal L}W,\eqno(4.8)$$ but in the case of a potential of the finite polynomial form $U(q)=\sum_{n=1}^N a_n q^n,$ the sum retains only a finite number of derivative terms. As an illustration of this remark, we consider an anharmonic oscillator with the potential $U_{anh}(q) =m\omega^2q^2/2 +Cq^4.$ In this case the Wigner equation (4.3) becomes $${\partial W\over \partial t}=-{p \over m}{\partial W \over \partial q}+(m \omega^2q+4Cq^3){\partial W\over \partial p} -C\hbar^2q{\partial^3 W\over\partial p^3}+{\cal L}W. \eqno(4.9)$$ The above equation has a single third-derivative term, associated with the nonlinear potential $U_{anh}.$ In fact, the first three terms on the right-hand side of Eq. (4.9) give the usual Wigner equation of an isolated anharmonic oscillator. The third-derivative term is of order $\hbar^2$ and is the quantum correction. In the classical limit, when this term is neglected, the Wigner equation becomes one of the Fokker-Planck type. 5) For a periodic potential $U(q)=U_0\cos(kq),$ $${\partial^{2n+1}U\over\partial q^{2n+1}}=(-1)^n k^{2n}{\partial U\over\partial q} \eqno(4.10)$$ and we obtain $${\partial W\over \partial t}=-{p \over m}{\partial W \over \partial q}+{\partial U\over\partial q}{\delta W\over \delta p} +{\cal L}W, \eqno(4.11) $$ where $${\delta W\over \delta p}= {W(q,p+\hbar k/2,t)-W(q,p-\hbar k/2,t)\over \hbar k}. \eqno(4.12)$$ Eq. (4.11) takes a simpler form when $\hbar k$ is large compared to the momentum spread $\Delta p$ of the particle being considered, i.e.
when the spatial extension of the wave packet representing the particle is large compared to the spatial period of the potential. Imposing the condition $\hbar k\gg \Delta p$ on Eq. (4.12), one sees that $\delta W/ \delta p$ is small for any $p$ that yields an appreciable value of the Wigner distribution function, i.e. $\delta W/ \delta p\approx 0$ for all practical purposes [40]. Eq. (4.11) then reduces to Eq. (4.5) for a free particle moving in an environment. From the examples 1)--3) we see that for Hamiltonians at most quadratic in $q$ and $p,$ the equation of motion of the Wigner function contains only the classical part and the contributions from the opening of the system, and obeys the classical Fokker-Planck equations of motion (4.5)-(4.7). In general, of course, the potential $U$ has terms of order higher than $q^2$ and one has to deal with a partial differential equation of order higher than two or generally of infinite order. When the potential deviates only slightly from the harmonic potential, one can still take the classical limit $\hbar\to 0$ in Eq. (4.3) as the lowest-order approximation to the quantum motion and construct higher-order approximations that contain quantum corrections to the classical trajectories using the standard perturbation technique [40]. Eq. (4.3) can also be rewritten in the form $${\partial W\over \partial t}=-{p \over m}{\partial W \over \partial q}+{\partial U_{eff}\over\partial q}{\partial W\over \partial p} +{\cal L}W, \eqno(4.13) $$ where the effective potential is defined as [40] $${\partial U_{eff}\over\partial q}{\partial W\over\partial p}= {\partial U\over\partial q}{\partial W\over\partial p}+\sum_{n=1} ^\infty(-1)^n{(\hbar)^{2n}\over2^{2n}(2n+1)!}{\partial^{2n+1}U(q)\over\partial q^{2n+1}}{\partial^{2n+1}W\over\partial p^{2n+1}}. \eqno(4.14)$$ Then the phase-space points move under the influence of the effective potential $U_{eff}.$ We note that Eq.
(4.14) indicates that the effective potential can be determined only when at least an approximate solution for $W$ is known. If $U$ does not deviate much from the harmonic potential, the zeroth-order approximation for $W$ can be taken as that resulting from classical propagation. The main limitation of the effective potential method is that it can be applied only to systems whose behaviour is not much different from that of the harmonic oscillator or the free wave packet, for example a low-energy Morse oscillator and an almost free wave packet slightly perturbed by a potential step or barrier [40]. The effective potential method may possibly be applied to a collision system [40]. The quantum dynamics of atomic and molecular collision processes has been described rather successfully in the past by means of classical trajectories [16,40-42], i.e. the collision system is one for which an approximate solution of $W$ can be considered known and thus the perturbative scheme of the effective potential method can be employed. In Refs. [43-45] similar equations were written for the Wigner function for a class of quantum Brownian motion models consisting of a particle moving in a general potential and coupled to an environment of harmonic oscillators. Phase space provides a natural framework to study the consequences of the chaotic dynamics and its interplay with decoherence [46-50]. Eq. (4.3) could be applied in order to investigate general implications of the process of decoherence for quantum chaos. Since decoherence induces a transition from quantum to classical mechanics, it can be used to find the connection between the classical and quantum chaotic systems. In this case $U(q)$ is the potential of a classically chaotic system, coupled to the external environment. For this purpose, a particular case of Eq.
(4.3) was utilized by Zurek and Paz [51] in the special case of the high temperature limit of the environment: $\mu=\lambda=\gamma,$ where $\gamma$ is the relaxation rate, $D_{qq}=0,D_{pq}=0$ and the diffusion coefficient $D_{pp}\equiv D=2m\gamma k_B T$ ($T$ is the temperature of the environment). In this model the symmetry between $q$ and $p$ is broken, and coupling with the environment through position gives momentum diffusion only [52]. Zurek also used a similar equation in which the diffusion term is symmetric, namely $D(\partial_{pp}^2+\partial_{qq}^2)W$ [51]. It is sometimes argued that the power series involving third and higher derivative terms may be neglected. Paz and Zurek [51] have argued that the diffusive terms may smooth out the Wigner function, suppressing contributions from the higher-order terms. When these terms can be neglected, the Wigner function evolution equation (4.3) then becomes $${\partial W\over \partial t}=-{p \over m}{\partial W \over \partial q}+{\partial U\over\partial q}{\partial W\over \partial p} +{\cal L}W. \eqno(4.15)$$ In the previously considered particular case of a thermal bath and if $\lambda=\mu=\gamma,$ $D_{qq}=D_{pq}=0,$ $D_{pp}\equiv D=2m\gamma k_B T,$ this equation becomes a Kramers equation [45,53]: $${\partial W\over \partial t}=-{p \over m}{\partial W \over \partial q}+{\partial U\over\partial q}{\partial W\over \partial p} + 2\gamma {\partial\over\partial p}(pW) +2m\gamma k_B T{\partial^2 W\over\partial p^2}.\eqno(4.16)$$ It possesses the stationary solution [45] $$W(p,q)=\tilde N\exp\left[-{p^2\over 2mk_B T}-{U(q)\over k_B T}\right], \eqno(4.17)$$ where $\tilde N$ is a normalization factor. As was discussed by Anastopoulos and Halliwell [45], this will be an admissible solution, i.e., it is a Wigner function, only if the potential is such that $\exp[-U(q)/k_B T]$ is normalizable.
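The stationarity of (4.17) can be checked symbolically for an arbitrary potential $U(q)$ (a sympy sketch, not part of the original derivation): substituting (4.17) into the right-hand side of the Kramers equation (4.16) gives identically zero.

```python
import sympy as sp

p, q = sp.symbols('p q', real=True)
m, gamma, kB, T = sp.symbols('m gamma k_B T', positive=True)
U = sp.Function('U')(q)              # arbitrary potential

# Candidate stationary state, Eq. (4.17) (normalization factor omitted)
W = sp.exp(-p**2/(2*m*kB*T) - U/(kB*T))

# Right-hand side of the Kramers equation (4.16)
rhs = (-p/m*sp.diff(W, q) + sp.diff(U, q)*sp.diff(W, p)
       + 2*gamma*sp.diff(p*W, p) + 2*m*gamma*kB*T*sp.diff(W, p, 2))

assert sp.simplify(rhs) == 0
```

The streaming term and the potential term cancel pairwise, as do the friction and momentum-diffusion terms, independently of the form of $U(q)$.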
This requires $U(q)\to\infty$ as $q\to\pm\infty$ faster than $\ln|q|.$ In this case the stationary distribution is the Wigner transform of a thermal state $\rho=Z^{-1}\exp(-H_0/kT)$ with $Z={\rm Tr}\exp(-H_0/kT)$ for large temperatures. The general results of [53] then show that all solutions of Eq. (4.16) approach the stationary solution (4.17) as $t\to\infty.$ Hence, to the extent that Eq. (4.16) is valid, all initial states tend towards the thermal state in the long-time limit [45]. \section{Summary and outlook} The Lindblad theory provides a self-consistent treatment of damping as a possible extension of quantum mechanics to open systems. In the present paper we have shown how the Wigner distribution can be used to solve the problem of dissipation for some simple systems. From the master equation we have derived the corresponding Vlasov or Fokker-Planck equations in the Wigner $W$ representation. The Fokker-Planck equations which we obtained describe an Ornstein-Uhlenbeck process. The Wigner representation discussed provides a classical conceptual framework for studying quantum phenomena and enables one to apply to problems of the quantum domain various methods of approximation and expansion used in the classical case. By means of the Wigner function one can calculate mean values of physical quantities that may be of interest, using a classical-like phase-space integration, where position and momentum are treated as ordinary variables rather than as operators. The phase-space formulation of quantum mechanics represents an alternative to the standard wave mechanics formulation of the Lindblad equation. The main difficulty with the phase-space formulation is that the time development of the phase-space distribution (Wigner) function is given in terms of an infinite-order partial differential equation (see Eq. (4.3)).
What matters, however, is not whether the equation can be solved exactly, but whether a reasonable set of approximations can be introduced that make it possible to obtain an approximate but still accurate solution to the equation. The Wigner phase-space formulation is particularly useful when the classical description of the system under consideration is adequate and quantum corrections to the classical description are only desired for higher accuracy. {\it Acknowledgment.} One of us (A.I.) would like to express his sincere gratitude to Professor W. Scheid for the hospitality at the Institut f\"ur Theoretische Physik in Giessen. {\large {\bf References}} \begin{enumerate} \item R. W. Hasse, J. Math. Phys. {\bf 16} (1975) 2005 \item E. B. Davies, {\it Quantum Theory of Open Systems} (Academic Press, New York, 1976) \item H. Dekker, Phys. Rep. {\bf 80} (1981) 1 \item K. H. Li, Phys. Rep. {\bf 134} (1986) 1 \item G. Lindblad, Commun. Math. Phys. {\bf 48} (1976) 119 \item G. Lindblad, Rep. Math. Phys. {\bf 10} (1976) 393 \item G. Lindblad, {\it Non-Equilibrium Entropy and Irreversibility} (Reidel, Dordrecht, 1983) \item A. Sandulescu and H. Scutaru, Ann. Phys. (N.Y.) {\bf 173} (1987) 277 \item A. Sandulescu, H. Scutaru and W. Scheid, J. Phys. A - Math. Gen. {\bf 20} (1987) 2121 \item A. Pop, A. Sandulescu, H. Scutaru and W. Greiner, Z. Phys. A {\bf 329} (1988) 357 \item A. Isar, A. Sandulescu and W. Scheid, Int. J. Mod. Phys. A {\bf 5} (1990) 1773 \item A. Isar, A. Sandulescu and W. Scheid, J. Phys. G - Nucl. Part. Phys. {\bf 17} (1991) 385 \item A. Sandulescu and E. Stefanescu, Physica A {\bf 161} (1989) 525 \item A. Isar, A. Sandulescu, H. Scutaru, E. Stefanescu and W. Scheid, Int. J. Mod. Phys. E {\bf 3} (1994) 635 \item E. P. Wigner, Phys. Rev. {\bf 40} (1932) 749 \item P. Carruthers and F. Zachariasen, Rev. Mod. Phys. {\bf 55} (1983) 245 \item K. Takahasi and N. Saito, Phys. Rev. Lett. {\bf 55} (1985) 645 \item K. Takahasi, J. Phys. Soc. Jpn.
{\bf 55} (1986) 762 \item {\it The Physics of Phase Space,} ed. by Y. S. Kim and W. W. Zachary (Springer, Berlin, 1986) \item R. J. Glauber, Phys. Rev. Lett. {\bf 10} (1963) 84 \item E. C. G. Sudarshan, Phys. Rev. Lett. {\bf 10} (1963) 277 \item P. D. Drummond and C. W. Gardiner, J. Phys. A - Math. Gen. {\bf 13} (1980) 2353 \item P. D. Drummond, C. W. Gardiner and D. F. Walls, Phys. Rev. A {\bf 24} (1981) 914 \item A. Isar, W. Scheid and A. Sandulescu, J. Math. Phys. {\bf 32} (1991) 2128 \item A. Isar, Helv. Phys. Acta {\bf 67} (1994) 436 \item E. A. Remler, Ann. Phys. (N.Y.) {\bf 136} (1981) 293 \item H. St\"ocker and W. Greiner, Phys. Rep. {\bf 137} (1986) 278 \item G. F. Bertsch and S. Das Gupta, Phys. Rep. {\bf 160} (1988) 191 \item W. Cassing, V. Metag, U. Mosel and K. Niita, Phys. Rep. {\bf 188} (1990) 363 \item J. Aichelin, Phys. Rep. {\bf 202} (1991) 233 \item H. Feldmeier, Rep. Prog. Phys. {\bf 50} (1987) 915 \item M. Hillery, R. F. O'Connell, M. O. Scully and E. P. Wigner, Phys. Rep. {\bf 106} (1984) 121 \item H. Weyl, Z. Phys. {\bf 46} (1927) 1 \item J. E. Moyal, Proc. Cambridge Phil. Soc. {\bf 45} (1949) 99 \item E. Heller, J. Chem. Phys. {\bf 65} (1976) 1289 \item A. Isar and W. Scheid, in preparation \item G. E. Uhlenbeck and L. S. Ornstein, Phys. Rev. {\bf 36} (1930) 823 \item H. Haken, Rev. Mod. Phys. {\bf 47} (1975) 67 \item C. W. Gardiner, {\it Handbook of Stochastic Methods} (Springer, Berlin, 1982) \item H. W. Lee, Phys. Rep. {\bf 259} (1995) 147 \item D. L. Bunker, Method. Comput. Phys. {\bf 10} (1971) 287 \item L. M. Raff and D. L. Thompson, in {\it Theory of Chemical Reaction Dynamics,} ed. by M. Baer (CRC, Boca Raton, 1985) vol. III, p. 1 \item B. L. Hu, J. P. Paz and Y. Zhang, Phys. Rev. D {\bf 47} (1993) 1576 \item A. Anderson and J. J. Halliwell, Phys. Rev. D {\bf 48} (1993) 2753 \item C. Anastopoulos and J. J. Halliwell, Phys. Rev. D {\bf 51} (1995) 6870 \item W. H. Zurek, Phys. Rev.
D {\bf 24} (1981) 1516; {\it ibid.} {\bf 26} (1982) 1862; E. Joos and H. D. Zeh, Zeit. Phys. B {\bf 59} (1985) 229 \item W. H. Zurek, Phys. Today {\bf 44} (Oct. 1991) 36; {\it ibid.} {\bf 46} (Apr. 1993) 81; Prog. Theor. Phys. {\bf 89} (1993) 281 \item J. P. Paz, S. Habib and W. H. Zurek, Phys. Rev. D {\bf 47} (1993) 488 \item W. H. Zurek, S. Habib and J. P. Paz, Phys. Rev. Lett. {\bf 70} (1993) 1187 \item W. H. Zurek, in {\it Frontiers of Nonequilibrium Statistical Mechanics,} ed. by G. T. Moore and M. O. Scully (Plenum, New York, 1986) \item W. H. Zurek and J. P. Paz, Phys. Rev. Lett. {\bf 72} (1994) 2508 \item A. O. Caldeira and A. J. Leggett, Physica A {\bf 121} (1983) 587; W. G. Unruh and W. H. Zurek, Phys. Rev. D {\bf 40} (1989) 1071; B. L. Hu, J. P. Paz and Y. Zhang, Phys. Rev. D {\bf 45} (1992) 2843; {\it ibid.} {\bf 47} (1993) 1576 \item H. Risken, {\it The Fokker-Planck Equation: Methods of Solution and Applications} (Springer-Verlag, Berlin, 1989) \end{enumerate} \end{document}
\begin{document} \title{Turing Incomparability in Scott Sets} \author{Anton\'in Ku\v{c}era} \address{Department of Theoretical Computer Science and Mathematical Logic\\ Faculty of Mathematics and Physics\\ Charles University\\ Malostransk\'{e} n\'{a}m. 25, 118 00 Praha 1\\ Czech Republic} \email{[email protected]} \thanks{Ku\v{c}era was partially supported by the Research project of the Ministry of Education of the Czech Republic MSM0021620838} \author{Theodore A. Slaman} \address{Department of Mathematics\\ The University of California, Berkeley\\ Berkeley, CA 94720-3840 USA} \email{[email protected]} \thanks{Slaman was partially supported by NSF grant DMS-0501167.} \keywords{Scott set, Turing degree, K-trivial, low for random} \subjclass[2000]{03D28} \begin{abstract} For every Scott set $\mathcal F$ and every non-recursive set $X$ in $\mathcal F$, there is a $Y \in \mathcal F$ such that $X$ and $Y$ are Turing incomparable. \end{abstract} \maketitle \section{Introduction} H. Friedman and A. McAllister posed the question whether for every non-recursive set $X$ of a Scott set $\mathcal F$ there is a $Y \in \mathcal F$ such that $X$ and $Y$ are Turing incomparable (see Problem 3.2 and also Problem 3.3 in Cenzer and Jockusch \cite{Cenzer.Jockusch:2000}). We present a positive solution to the question, using recent results in the area of algorithmic randomness and also results on $\Pi^0_1$ classes. \subsection{Background and Notation} We begin by discussing the background of the Friedman-McAllister question. We then review some basic definitions and establish our notational conventions. \subsubsection{Background} We let $2^\omega$ denote the set of infinite binary sequences. One can equivalently think of $2^\omega$ as the Cantor set. A finite binary sequence $\sigma$ determines an open neighborhood in $2^\omega$ by taking the set of all infinite extensions of $\sigma$. 
A binary tree $T$ determines a closed subset of $2^\omega$ by taking the complement of the union of open neighborhoods given by the elements of $T$ which have no extensions in $T$. As is well-known, the Cantor set is a canonical example of a compact set. This fact translates to binary trees in the form of K\"onig's Lemma: every infinite binary tree has an infinite path. However, the proof of K\"onig's Lemma is not computational. Not every infinite recursive binary tree has an infinite recursive path. Thus, the set of recursive reals does not verify the compactness of $2^\omega$ with respect to recursive closed sets (also called $\Pi^0_1$ classes). In order to study the consequences of compactness, we need richer subsets of $2^\omega$. \begin{definition} A Scott set is a nonempty set $\mathcal F \subseteq 2^\omega$ such that if $T \subseteq 2^{<\omega}$ is an infinite tree recursive in a finite join of elements of $\mathcal F$, then there is an infinite path through $T$ in $\mathcal F$. \end{definition} Scott \cite{Scott:1962} proved that the sets representable in a complete extension of Peano Arithmetic form a Scott set. Scott sets occur naturally in the study of models of arithmetic. They are the $\omega$-models of $WKL_0$, an axiomatic treatment of compactness. In other words, they are the models of $WKL_0$ in which the natural numbers are standard. One can test the power of compactness arguments by examining what is true in every Scott set. A natural family of questions comes from considering the Turing degrees represented in an arbitrary Scott set. For example, it is not merely the case that every Scott set has non-recursive elements. Jockusch and Soare~\cite{Jockusch.Soare:1972} showed that for any Scott set $S$ and any finite partial order $P$, there are elements of $S$ whose Turing degrees are ordered isomorphically to $P$. Thus, the existential theory of the Turing degrees of a Scott set is rich and completely determined.
The existential-universal theory of the degrees represented in an arbitrary Scott set is more complex and not at all understood. Groszek and Slaman \cite{Groszek.Slaman:1997} showed that every Scott set has an element of minimal Turing degree, namely a non-recursive degree $m$ such that the only degree strictly below $m$ is the recursive degree. The Friedman-McAllister question is a universal-existential one about an arbitrary Scott set: for every degree $x$, is there a degree $y$ which is Turing incomparable with $x$? In other words, given a Turing degree $x$, can one use a compactness argument to construct a $y$ which is Turing incomparable with $x$? The typical immediate reaction to the question is that there should always be such a $y$ and that it should be routine to exhibit an $x$-recursive tree such that every infinite path in that tree has Turing degree incomparable with $x$. This is not the case. For example, Ku\v{c}era \cite{Kucera:1988} showed that there is a Scott set $S$ and a non-recursive degree $x$ from $S$ such that $x$ is recursive in all complete extensions of Peano Arithmetic that appear in $S$. Building $y$'s incomparable with that $x$ cannot be accomplished using complete extensions of Peano Arithmetic. Similar obstacles appear when one attempts to find $y$'s incomparable to $x$ by means of other familiar $\Pi^0_1$ classes. Even so, for every Scott set $S$ and every non-recursive $x$ represented in $S$, there is a $y$ in $S$ which is Turing incomparable with $x$. Given $x$, our construction of $y$ is not uniform. If possible, we find $y$ by taking a sequence which is 1-random relative to $x$. If that fails (the non-uniformity), then we apply recent results in the theory of algorithmic randomness to build a recursive tree whose infinite paths are Turing incomparable with $x$. \subsubsection{Definitions and Notation} Our computability-theoretic notation generally follows Soare \cite{Soare:1987} and Odifreddi \cite{Odifreddi:1989,Odifreddi:1999}.
An introduction to algorithmic randomness can be found in Li and Vit\'anyi \cite{Li.Vitanyi:1997}. A short survey of it is also given in Ambos-Spies and Ku\v{c}era \cite{Ambos-Spies.Kucera:2000}. Much deeper insight into the subject of algorithmic randomness can be found in a forthcoming book of Downey and Hirschfeldt \cite{Downey.Hirschfeldt:nd}; a good survey is also given in Downey, Hirschfeldt, Nies and Terwijn \cite{Downey.Hirschfeldt.ea:nd}. We refer to the elements of $2^{\omega}$ as sets or infinite binary sequences. We denote the collection of strings, i.e. finite initial segments of sets, by $2^{<\omega}$. The length of a string $\sigma$ is denoted by $|\sigma|$. For a set $X$, we denote the string consisting of the first $n$ bits of $X$ by $X \restr n$ and we use similar notation $\sigma \restr n$ for strings $\sigma$ of length $\geq n$. We let $\sigma * \tau$ denote the concatenation of $\sigma$ and $\tau$ and let $\sigma * Y$ denote the concatenation of $\sigma$ and (infinite binary sequence) $Y$. We write $\sigma \prec X$ to indicate $X\restr|\sigma|=\sigma$. If $\sigma \in 2^{<\omega}$, then $[\sigma]$ denotes $\{X \in 2^\omega : \sigma \prec X \}$. A $\Sigma^0_1$ class is a collection of sets that can be effectively enumerated. Such a class can be represented as $\bigcup_{\sigma \in W}[\sigma]$ for some (prefix-free) recursively enumerable (r.e.) set of strings $W$. The complements of $\Sigma^0_1$ classes are called $\Pi^0_1$ classes. Any $\Pi^0_1$ class can be represented by the class of all infinite paths through some recursive tree. We also use relativized versions, i.e. $\Sigma^{0, X}_1$ classes and $\Pi^{0, X}_1$ classes. $\Pi^0_1$ classes play an important role in logic, in subsystems of second-order arithmetic, and also in algorithmic randomness. By the relativized tree representation of $\Pi^0_1$ classes, if $\mathcal F$ is a Scott set, $X\in\mathcal F$, and $P$ is a nonempty $\Pi^{0,X}_1$ class, then $\mathcal F$ includes an element of $P$.
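As a toy illustration of these notions (a sketch not drawn from the paper; the tree predicate "no two consecutive 1s" is chosen only for concreteness), the following represents a $\Pi^0_1$ class as the set of infinite paths through a recursive binary tree and computes an initial segment of its leftmost path, in the spirit of K\"onig's Lemma:

```python
def in_tree(sigma):
    """Recursive membership test for the tree T (sigma is a 0/1 string)."""
    return '11' not in sigma

def level(n):
    """All strings of length n in T."""
    nodes = ['']
    for _ in range(n):
        nodes = [s + b for s in nodes for b in '01' if in_tree(s + b)]
    return nodes

def extendible(sigma, depth):
    """Does sigma have an extension in T of length depth?  (For this
    simple tree, extendibility to each finite depth is decidable.)"""
    return any(tau.startswith(sigma) for tau in level(depth))

def path_prefix(n):
    """First n bits of the leftmost infinite path, chosen greedily
    through nodes whose subtree reaches depth n."""
    sigma = ''
    for _ in range(n):
        sigma += '0' if extendible(sigma + '0', n) else '1'
    return sigma

assert len(level(3)) == 5          # node counts grow like Fibonacci
assert path_prefix(8) == '0' * 8   # the leftmost path is 000...
```

For a general infinite recursive tree, of course, the extendibility test is not decidable, which is exactly why an infinite recursive tree need not have an infinite recursive path.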
\begin{definition} Let $X$ be a set. A Martin-L\"{o}f test relative to $X$ is a uniformly r.e. in $X$ sequence of $\Sigma^{0, X}_1$ classes $\{U^X_n\}$ such that $\mu(U^X_n) \leq 2^{-n}$, where $\mu$ denotes the standard measure on $2^\omega$. Then any subclass of $\bigcap_{n\in\omega}U^X_n$ is called a Martin-L\"{o}f null set relative to $X$. If $X = \emptyset$, we simply speak of a Martin-L\"{o}f test and a Martin-L\"{o}f null set. A set $R$ is Martin-L\"{o}f random relative to $X$, or $1$-random relative to $X$, if $\{R\}$ is not Martin-L\"{o}f null relative to $X$. If $X = \emptyset,$ we speak of $1$-randomness. \end{definition} Martin-L\"{o}f proved that there is a universal Martin-L\"{o}f test, $\{U_n\}$, such that for all $R$, $R$ is 1-random if and only if $R\not\in\bigcap_{n\in\omega}U_n.$ Similarly, there is a universal Martin-L\"{o}f test relative to $X,$ $\{U^X_n\}$ (uniformly in $X$). We will use $K$ to denote prefix-free Kolmogorov complexity. See Li and Vit\'anyi \cite{Li.Vitanyi:1997} for details. The version of Kolmogorov complexity relativized to a set $X$ is denoted by $K^X$. Schnorr \cite{Schnorr:1971*b} proved that a set $A$ is $1$-random if and only if for all $n$, $K(A \restr n) \geq n - O(1)$. There are several notions of computational weakness related to $1$-randomness. They are summarized in the following definition. \begin{definition} \begin{enumerate} \item $\mathcal L$ denotes the class of sets which are low for $1$-randomness, i.e. sets $A$ such that every $1$-random set is also $1$-random relative to $A$. \item ${\mathcal K}$ denotes the class of $K$-trivial sets, i.e. the class of sets $A$ such that for all $n,$ $K(A \restr n) \leq K(0^n) + O(1)$, where $0^n$ denotes the string of $n$ zeros.
\item $\mathcal M$ denotes the class of sets that are low for $K$, i.e. sets $A$ such that for all $\sigma$, $K(\sigma) \leq K^A(\sigma) + O(1).$ \item A set $A$ is a basis for $1$-randomness if $A \leq_T Z$ for some $Z$ such that $Z$ is $1$-random relative to $A$. The collection of such sets is denoted by $\mathcal B$. \end{enumerate} \end{definition} Nies \cite{Nies:2005} proved that $\mathcal L$ = $\mathcal M$, Hirschfeldt and Nies, see \cite{Nies:2005}, proved that $\mathcal K$ = $\mathcal M$, and Hirschfeldt, Nies and Stephan \cite{Hirschfeldt.Nies.ea:nd} proved that $\mathcal B = \mathcal K$. Thus, all these four classes are equal and we have, remarkably, four different characterizations of the same class. That is, $\mathcal L = \mathcal K = \mathcal M = \mathcal B$. Chaitin \cite{Chaitin:1977} proved that $K$-trivials are $\Delta^0_2$. Further, by a result of Ku\v{c}era \cite{Kucera:1993}, sets low for $1$-randomness are $GL_1$ and, thus, all these sets are, in fact, low. The lowness of these sets also follows from some recent results on this class of sets, see \cite{Nies:2005}. \section{The main result} In this section, we present a solution to the Friedman--McAllister question. \begin{theorem}\label{2.1} For any Scott set $\mathcal S$ and any non-recursive set $X \in \mathcal S$, there is a $Y \in \mathcal S$ such that $X$ and $Y$ are Turing incomparable. \end{theorem} Theorem~\ref{2.1} is a consequence of the stronger Claim~\ref{2.2}. \begin{claim}\label{2.2} For every non-recursive set $X$ there is a nonempty $\Pi^{0, X}_1$ class $\mathcal P \subseteq 2^{\omega}$ such that every element of $\mathcal P$ is Turing incomparable with $X$. \end{claim} As we will see, we can do even better for $K$-trivial sets $X$. Namely, we can replace the $\Pi^{0, X}_1$ class mentioned in the claim by a (non-relativized) $\Pi^0_1$ class. \begin{proof}[Proof of Claim~\ref{2.2}] We split the proof of the claim into two cases.
\begin{case} $X$ is not a basis for $1$-randomness. \end{case} Let $T_1$ be a tree recursive in $X$ such that any infinite path in $T_1$ is $1$-random relative to $X$. We can take e.g. a tree recursive in $X$ for which the collection of all infinite paths is the $\Pi^{0,X}_1$ class which is the complement of $U^X_0$, the first class appearing in a universal Martin-L\"of test relative to $X$. Clearly, any infinite path through $T_1$ is Turing incomparable with $X$. \begin{case} $X$ is a basis for $1$-randomness. \end{case} As we described above, such an $X$ is $K$-trivial, low for $1$-randomness, low for $K$, and $\Delta^0_2$. We use these properties to construct a recursive tree $T_2$ such that any infinite path through $T_2$ is Turing incomparable with $X$. We thereby not only avoid the lower cone of sets recursive in $X$, but avoid the entire class of sets which are bases for $1$-randomness. Since the class of these sets is closed downwards, we obviously get a stronger property. Since the class $\mathcal B$ is equal to $\mathcal L$ and also to $\mathcal M$, we can equivalently work with any of these characterizations. Thus, to handle Case~2, it is sufficient to prove the following lemma. \begin{lemma}\label{2.3} Let $X$ be a non-recursive $\Delta^0_2$ set. Then there is a recursive tree $T_2$ such that for any infinite path $Y$ in $T_2$ we have $Y \not \geq_T X$ and $Y$ is not low for $1$-randomness (and, thus, $Y \not \leq_T X$ if $X \in \mathcal L$). \end{lemma} \begin{proof}[Proof of Lemma~\ref{2.3}] The proof is by a finite injury priority argument. We build the tree $T_2$ by stages. At stage $s+1$, we terminate a string by not extending it to any string of length $s+1$ in $T_2$. We will describe the strategies and leave the rest to the reader. The strategies have the following general pattern.
Each strategy starts to work at a given string $\sigma \in T_2$, it acts only finitely often, and it yields as its outcome a nonempty finite collection $Q$ consisting of strings of the same length. Some strategies (called $\mathcal L$-strategies) and their outcomes depend not only on $\sigma$ itself, but also on how $\sigma$ arises as a concatenation of strings belonging to outcomes of previous strategies. Further, for each $\alpha \in Q$, the string $\sigma*\alpha$ together with a recursive tree of all strings extending $\sigma*\alpha$ are left for the next strategy. By producing its outcome, each strategy satisfies some particular requirement as explained later. \textit{Avoiding an upper cone above $X$.\ \ } Let $\sigma \in T_2$ be given. Suppose that $\Phi$ is a recursive Turing functional. We act to ensure that for every infinite path $A$ in $T_2$, if $A$ extends $\sigma$ then $\Phi(A)\neq X$. We use the fact that $X$ is $\Delta^0_2$ to adapt the Sacks Preservation Strategy \cite{Sacks:1963*b}. We monitor the maximum length of agreement between $\Phi(\tau)$, for $\tau$ extending $\sigma$, and the current approximation to $X$. If at stage $s+1$, we see a string $\eta$ of length $s$ on $T_2$ for which this maximum has gone higher than ever before, we take the least such $\eta$ and we terminate all extensions of $\sigma$ except for $\eta$. If this were to occur infinitely often above $\sigma$, then $X$ would be recursive. Compute $X(n)$ by finding the first stage where the maximum length of agreement between $\Phi(\tau)$, for some $\tau$ extending $\sigma$, and the approximation to $X$ was greater than $n$. Since the length of agreement increases infinitely often, the approximation to $X$ returns to this value infinitely often. But then, since the approximation converges to the value of $X$, the value at the stage we found must be the true value. This contradicts the assumption that $X$ is non-recursive.
So, this strategy acts finitely often and satisfies the requirement. Observe that the strategy yields as its outcome just one string $\alpha$, namely the string at which we last terminate all extensions of $\sigma$ which are incompatible with $\alpha$; if we never do so, $\alpha$ is the empty string. \textit{Avoiding the class of sets low for $1$-randomness.\ \ } We will refer to our strategies to avoid the class of sets low for $1$-randomness as $\mathcal L$-strategies. We will begin by explaining the general idea behind these strategies. Recall that a set $X$ is low for $1$-randomness if and only if every 1-random set $Z$ is also 1-random relative to $X$. In the case that $X$ is low for $1$-randomness, we can ensure that $X$ does not compute any path in $T_2$ by ensuring that each path $Y$ in $T_2$ is not low for 1-randomness, i.e. $Y\not \in\mathcal L$. We will ensure that a path $Y$ in $T_2$ is not in $\mathcal L$ by embedding large intervals of some $1$-random set $Z$ into it. In this way, we can recover the 1-random set $Z$ recursively from $Y$ and $\emptyset'$ and ensure that $Y$ can enumerate a Martin-L\"{o}f test (relative to $Y$) which shows that $Z$ is not $1$-random in $Y$. An infinite path $Y$ in $T_2$ can be viewed as an infinite concatenation of strings $\alpha_0 * \alpha_1 * \alpha_2 * \dots $ , where each $\alpha_i$ is that uniquely determined string compatible with $Y$, which belongs to the outcome of a strategy that started at $\alpha_0 * \dots * \alpha_{i-1}$ (where $\alpha_{-1}$ is the empty string). Let us note that due to a standard finite injury priority argument, such a sequence $\{\alpha_i \}_{i \in \omega}$ can be found recursively in $\emptyset'$ and $Y$.
Then let $Z_Y$ denote the set obtained as the infinite concatenation of strings $\alpha_{i_0} * \alpha_{i_1} * \alpha_{i_2} * \ldots$, where $\{i_j \}_{j \in \omega}$ is a recursive increasing sequence of indices of those strings $\alpha_i$ which belong to outcomes of $\mathcal L$-strategies, i.e. of those $i$ for which the strategy that started at $\alpha_0 * \alpha_1 * \ldots * \alpha_{i-1}$ was an $\mathcal L$-strategy. The general goal of $\mathcal L$-strategies is to ensure that for any infinite path $Y$ in $T_2$, $Z_Y$ is $1$-random but is not $1$-random relative to $Y$. To guarantee that $Z_Y$ is not $1$-random relative to $Y$, we have to satisfy for all $e$ the requirement $Z_Y \in U^Y_e$, where $\{U^Y_e\}_{e \in\omega}$ is a universal Martin-L\"{o}f test relative to $Y$ (uniformly in $Y$). As is standard, we may let $U^Y_e$ be $\bigcup_k V^{k,Y}_{e+k+1}$, where $\{V^{k,Y}_i \}_{i,k \in \omega}$ is a uniformly r.e. in $Y$ sequence of all Martin-L\"{o}f tests relative to $Y$ (uniformly in $Y$), and $\{V^{k,Y}_i \}_{i \in\omega}$ is the $k$-th test. To guarantee that $Z_Y$ is $1$-random, we fix a $\Pi^0_1$ class $\overline{U}_0$ of $1$-random sequences and fix a recursive tree $T^*$ such that the infinite paths in $T^*$ are exactly the members of $\overline{U}_0$. We will ensure that each initial segment of $Z_Y$ extends to an element of $\overline{U}_0$. Suppose now that $\sigma \in T_2$ and $e$ are given, and let $\alpha_0, \ldots , \alpha_k$ be the strings for which $\sigma = \alpha_0 * \alpha_1 * \dots * \alpha _k$, where each $\alpha_i$ belongs to an outcome of the strategy that started at $\alpha_0 * \alpha_1 * \dots * \alpha_{i-1}$. Let $\tau_\sigma$ be the string $\alpha_{i_0} * \alpha_{i_1} * \ldots * \alpha_{i_j}$, where $i_0, i_1, \ldots , i_j$ are the indices (in increasing order) of those $\alpha_i$ which belong to outcomes of $\mathcal L$-strategies, i.e.
of those $i$ such that the strategy that started at $\alpha_0 * \alpha_1 * \ldots * \alpha_{i-1}$ was an $\mathcal L$-strategy. Roughly speaking, $\tau_\sigma$ is the finite sequence already embedded into $\sigma$ which can be extended to an infinite path in $T^*$. Observe that for any set $Z$ the set $\tau_\sigma*Z$ is not $1$-random relative to the set $\sigma*Z$. In fact, $\tau_\sigma*Z$ is recursive in $\sigma*Z$, as it is obtained by appending all but finitely much of $\sigma*Z$ to $\tau_\sigma$. Thus, $\tau_\sigma*Z\in U^{\sigma*Z}_e$. If we had no other requirements to satisfy, we could restrict the infinite extensions of $\sigma$ in $T_2$ to those of the form $\sigma*Z$ for which $\tau_\sigma*Z$ is $1$-random (using the $\Pi^0_1$ class $\overline{U}_0$). But, of course, we have to satisfy our requirement in a finitary way to leave space for the cone-avoiding strategies. The idea here is to make any infinite path through $T_2$ extending $\sigma$ locally $1$-random. Thus, we design our tree so that enough of such a $Z$ is embedded in the extensions of $\sigma$ to ensure that $[\tau_\sigma*(Z \restr i)] \subseteq U^{\sigma*Z}_e $ for some $i$. The crucial thing here is that we can accomplish this objective in a finite way. That is, we can effectively compute (from $\sigma$, $\tau_\sigma$ and $e$) an $i$ such that $[\tau_\sigma*(Z \restr i)] \subseteq U^{\sigma*Z}_e$ for all sets $Z$. Intuitively, for any $Z$, $\tau_\sigma*Z$ is not $1$-random relative to $\sigma*Z$, and we can calculate how long it takes for $\sigma*Z$ to recognize the failure of relative $1$-randomness. We give this calculation in detail. Given $\sigma$ and $\tau_\sigma$, find a Martin-L\"{o}f test relative to $X$ (uniformly in $X$) $\{B^X_j \}_{j \in \omega}$ with index $b$ such that $B^X_j = [(\tau_\sigma*X^*) \restr j]$, where $X^*$ is the set for which $X = (X \restr |\sigma|) *X^*$. Then we obviously have $B^{\sigma*Z}_j = [(\tau_\sigma*Z) \restr j]$ for any set $Z$.
By the construction of $\{U^X_e\}_{e \in\omega}$, the universal Martin-L\"{o}f test (relative to $X$), and since $b$ is an index of the test $\{B^X_j \}_{j \in \omega}$, we have $B^{\sigma*Z}_{e+b+1} \subseteq U^{\sigma*Z}_e$ for all sets $Z$. It follows that $[\tau_\sigma*(Z \restr i)] \subseteq U^{\sigma*Z}_e$ for all sets $Z$ and all $i$ such that $|\tau_\sigma| + i \geq e+b+1$. Our calculation chooses the least such $i$. It only remains to put a restriction on $T_2$ to ensure that $\sigma * \alpha$ is extendable to an infinite path in $T_2$ for strings $\alpha$ of length $i$ if and only if $\tau_\sigma * \alpha$ is extendable to an infinite path in $T^*$, the recursive tree whose infinite paths are exactly the elements of $\overline{U}_0$ and hence are $1$-random. The strategy, given $\sigma \in T_2$ and $e$, where $\sigma = \alpha_0 * \alpha_1 * \ldots * \alpha_k$ with the properties of the $\alpha_i$ described above, is precisely as follows. Find the corresponding string $\tau_\sigma$, then compute an $i$ such that $[\tau_\sigma*(Z \restr i)] \subseteq U^{\sigma*Z}_e$ for all sets $Z$. Now for each $\beta$ such that $|\beta| \geq i$, we terminate the string $\sigma*\beta$ in $T_2$ if $\tau_\sigma * (\beta \restr i)$ is not extendable to a string of length $|\sigma * \beta|$ in the recursive tree $T^*$ which represents the $\Pi^0_1$ class $\overline{U}_0$. This strategy acts only finitely often and eventually reaches its goal. Observe that the strategy yields a finite collection $Q$ of strings of the same length, for which all requirements on $Q$ are satisfied. \end{proof} With Lemma~\ref{2.3}, we have completed the proof of Claim~\ref{2.2}.\end{proof} \subsection{An $\mathcal M$ variation} Since $\mathcal L$ and $\mathcal M$ are equal, we can equivalently use the characterization of $\mathcal M$ to design our strategy to handle Case~2, namely, our strategy for avoiding the class of sets which are bases for $1$-randomness.
For the convenience of the reader, we also present a variant of the strategy expressed in terms of $\mathcal M$. Given a $\sigma$, we want to ensure that each infinite path $Y$ extending $\sigma$ in $T_2$ can give a shorter description of some string $\tau$ than any description possible without $Y$. Let $c$ be the amount by which we want to shorten the description. We will compute an $m$ (see below) and we want to ensure \[ K(Y \restr m) - c \geq K^Y(Y\restr m). \] We choose $m$ much larger than $K(\sigma)$ and $c$. The maximum of $10$ and $2^{|\sigma|+K(\sigma)+c+d}$ is big enough (where the constant $d$ is explained below). For each string $\tau$ extending $\sigma$ of length $m$, let $\tau^*$ denote the string of length $m- |\sigma|$ for which $\tau = \sigma * \tau^*$. It is easy to see that $K^Y(Y \restr m)$ is less than or equal to $2 \log(m)$, since $Y$ can describe its first $m$ values using the description of $m$. (As a caveat, this bound may only apply to sufficiently large $m$ because of the fixed cost of interpreting binary representations. This is fixed data, and we can assume that $m$ is large enough for the upper bound to apply.) On the other hand, $K(\tau^*) \leq K(\tau) + K(\sigma) + d$ for some constant $d$ independent of the choice of $\tau$. By terminating strings with shorter descriptions, we can ensure that for each $Y$ extending $\sigma$ in $T_2$, the $\tau^*$ of $\tau = Y \restr m$ satisfies $K(\tau^*) \geq m - |\sigma|$. This is similar to making a recursive tree of $1$-random sets, but here we are making any path in $T_2$ (merely) locally $1$-random to ensure that infinite paths in $T_2$ are not low for $K$. We can now calculate: \[ K(\tau) \geq K(\tau^*) - K(\sigma) - d, \] and substituting for $K(\tau^*)$, \[ K(\tau) \geq (m - |\sigma|) - K(\sigma) - d. \] Since $K^Y(Y \restr m) \leq 2\log(m)$, it is sufficient to ensure that \[ m - |\sigma| -K(\sigma) -d - c \geq 2\log(m), \] or, equivalently, \[ m \geq 2\log(m) + |\sigma| + K(\sigma) + c + d.
\] If $m \geq 2^{|\sigma| + K(\sigma)+c+d}$, then it is sufficient to ensure $m \geq 3\log(m)$. This holds if $m$ is greater than $10$. So, the strategy working above $\sigma$ to ensure that $Y$ is not low for $K$ reserves the collection of extensions of $\sigma$ of length $m$, and at most half of them eventually cease to be extendable to an infinite path in $T_2$. So, it satisfies its requirement and acts only finitely often. \begin{remark} We have not addressed the question whether it is provable in the subsystem of second order arithmetic $WKL_0$ that for every non-recursive set $X$ there is a non-recursive set $Y$ which is Turing incomparable with $X$ (see Problem 3.2, part 1 in Cenzer and Jockusch \cite{Cenzer.Jockusch:2000}). \end{remark} \section{An open problem} Suppose that ${\mathcal F}$ is a Scott set and let ${\mathcal D}^{\mathcal F}$ denote the partial order of the Turing degrees which are represented by elements of ${\mathcal F}$. According to Theorem~\ref{2.1}, \[ {\mathcal D}^{\mathcal F}\models \forall d>0\exists x(d\not\geq_T x \text{ and }x\not\geq_T d). \] The dual theorem of Groszek and Slaman \cite{Groszek.Slaman:1997} states that every Scott set has an element of minimal Turing degree: \begin{equation} \label{eq:minimal} {\mathcal D}^{\mathcal F}\models \exists d>0\forall x \neg(d>_T x \text{ and }x>_T 0). \end{equation} Together, these results are sufficient to determine, for any sentence in the language of partial orders of the form $\forall d\exists x{\varphi}(d,x)$ with ${\varphi}$ quantifier-free, whether that sentence holds in ${\mathcal D}^{\mathcal F}$. Further, such sentences hold in ${\mathcal D}^{\mathcal F}$ if and only if they hold in ${\mathcal D}$, the Turing degrees of all sets. By Lerman~\cite{Lerman:1983} and Shore~\cite{Shore:1978}, the general $\forall\exists$-theory of ${\mathcal D}$ is decidable. The proof of decidability rests on two technical results.
The first is a general extension theorem due to Sacks~\cite{Sacks:1961*b} which, like Theorem~\ref{2.1}, constructs degrees $x$ incomparable to or above given ones $d$. The second is Lerman's~\cite{Lerman:1971} theorem that every finite lattice is isomorphic to an initial segment of the Turing degrees which, like Formula~(\ref{eq:minimal}), produces degrees $d$ which limit the possible types of $x$'s which are below $d$. Superficially, Theorem~\ref{2.1} and the existence of minimal degrees suggest that the $\forall\exists$-theory of ${\mathcal D}^{\mathcal F}$ resembles that of ${\mathcal D}$. However, the actual proofs are quite different, and we are left with the following question. \begin{question} Suppose that ${\mathcal F}$ is a Scott set. Is ${\mathcal D}^{\mathcal F}$ $\forall\exists$-elementarily equivalent to ${\mathcal D}$? \end{question} \end{document}
\begin{document} \title{Energy cost for target control of complex networks} \author{Gaopeng Duan$^{1^*}$, Aming Li$^{2,3^*}$, Tao Meng$^{1}$, and Long Wang$^{1^\dag}$} \date{\today} \maketitle \begin{enumerate} \item Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, China \item Department of Zoology, University of Oxford, Oxford OX1 3PS, UK \item Department of Biochemistry, University of Oxford, Oxford OX1 3QU, UK \item[$*$] These authors contributed equally to this work \item[$\dag$] Correspondence to: [email protected] \end{enumerate} \begin{abstract} To promote the implementation of realistic control over various complex networks, recent work has focused on analyzing energy cost. Indeed, the energy cost quantifies how much effort is required to drive the system from one state to another when it is fully controllable. A fully controllable system is one that can be driven by external inputs from any initial state to any final state in finite time. However, it is prohibitively expensive and unnecessary to require that the system be fully controllable when we merely need to accomplish so-called target control---controlling a subset of nodes chosen from the entire network. Yet, when the system is only partially controllable, the associated energy cost remains elusive. Here we present the minimum energy cost for controlling an arbitrary subset of nodes of a network. Moreover, we systematically show the scaling behavior of the precise upper and lower bounds of the minimum energy in terms of the time given to accomplish control. For controlling a given number of target nodes, we further demonstrate that the associated energy over different configurations can differ by several orders of magnitude. When the adjacency matrix of the network is nonsingular, we can simplify the framework by considering only the induced subgraph spanned by the target nodes instead of the entire network.
Importantly, we find that energy cost can be saved by orders of magnitude when we only require partial controllability of the entire network. Our theoretical results are all corroborated by numerical calculations, and pave the way for estimating the energy cost to implement realistic target control in various applications. \end{abstract} \section{Introduction} Network control has received much attention in the past decade \cite{barabasi2016network,Havlin2004book,duan2017asynchronous,Liu2016Rev,liu2013observability,wang2007new,wang2007finite}. The practical requirement of controlling complex networks from arbitrary initial to final states in finite time by appropriate external inputs motivates various explorations of the essential attribute of complex networked systems---network controllability \cite{chen2017pinning,Duan2019PRE,guan2017controllability,Li2017ConEng,Li2017,Liu2011,lu2016controllability,tian2018controllability,wang2009controllability,Yan2012PRL}. By detecting the controllability of the underlying networks, one can implement various control tasks to alter systems' states accordingly. To do this, the associated energy cost, serving as a common metric, has to be estimated in advance. In many large-scale practical dynamical networks, ensuring full controllability is a strong constraint. Moreover, in practical control tasks, it is prohibitively expensive and unnecessary to steer all network nodes towards the desired state. In Ref.~\cite{Gao2014}, the authors approximated the minimum number of driver nodes for the target control of complex networks, showing that traditional full control overestimates the number of driver nodes. That is, choosing a subset of network nodes as target nodes and only controlling these nodes to achieve expected tasks efficiently reduces the number of control inputs. For the energy cost of target control, Ref.~\cite{Klickstein2017} showed that it increases exponentially with the number of target nodes.
Therefore, it requires much more energy to control the entire network. However, previous analysis depends on the full controllability of the entire network; namely, although we only need to calculate the energy cost to control some of the nodes of the network, the other nodes have to be controllable as well \cite{Klickstein2017}. A systematic analysis of the energy cost for achieving sole target control in the presence of uncontrollable nodes remains elusive. Here, we consider the energy cost for target control. We present the scaling behavior of both upper and lower bounds of the minimum energy in terms of the given control time. Furthermore, we reveal that for a given number of target nodes, different choices of targets can result in hugely different energy costs. In particular, for the case of a nonsingular adjacency matrix, the corresponding results are more intuitive in terms of the network topology, where we just need to examine the induced subgraph spanned by the target nodes. \section{Results} We consider the canonical linear discrete time-invariant dynamics \begin{equation}\label{sys1} \mathbf{x}(\tau+1)=\mathbf{A}\mathbf{x}(\tau)+\mathbf{B}\mathbf{u}(\tau), \quad \tau=0, 1, 2,... \end{equation} where $\mathbf{x}(\tau)=(x_1(\tau), x_2(\tau), \dots, x_n(\tau))^{\text{T}}\in\mathbb{R}^{n}$ denotes the state of the entire network with $n$ nodes and $\mathbf{u}(\tau)=(u_1(\tau), u_2(\tau),$ $\dots, u_m(\tau))^{\text{T}} $ $\in\mathbb{R}^{m}$ captures $m$ external input signals at time $\tau$. $\mathbf{A}\in\mathbb{R}^{n\times n}$ is the adjacency matrix of the network, which represents interactions between system components. Here, we consider undirected networks, i.e., $\mathbf{A}=\mathbf{A}^{\text{T}}$. $\mathbf{B}=(b_{ij})\in\mathbb{R}^{n\times m}$ is the input matrix, in which $b_{ij}=1$ means node $i$ is directly affected by input $j$, otherwise $b_{ij}=0$.
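The recursion in Eq.~(\ref{sys1}) can be rolled out directly and checked against its closed-form solution $\mathbf{x}(\tau_f)=\mathbf{A}^{\tau_f}\mathbf{x}_0+\sum_{\tau=0}^{\tau_f-1}\mathbf{A}^{\tau_f-\tau-1}\mathbf{B}\mathbf{u}(\tau)$. A minimal sketch, using an illustrative $3$-node chain whose weights are our own assumption (not a network taken from the paper):

```python
import numpy as np

def simulate(A, B, u_seq, x0):
    """Iterate x(tau+1) = A x(tau) + B u(tau) over a finite input sequence."""
    x = np.array(x0, dtype=float)
    for u in u_seq:
        x = A @ x + B @ u
    return x

# Illustrative undirected 3-node chain driven through node 1 (assumed weights).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])
u_seq = [np.array([1.0]), np.array([-0.5]), np.array([0.25])]
x_tf = simulate(A, B, u_seq, np.zeros(3))

# Closed form x(tf) = sum_tau A^(tf-tau-1) B u(tau) for x0 = 0.
tf = len(u_seq)
x_closed = sum(np.linalg.matrix_power(A, tf - t - 1) @ B @ u_seq[t]
               for t in range(tf))
assert np.allclose(x_tf, x_closed)
```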
Nodes that directly receive independent external input signals are called driver nodes, and each external input is allowed to directly control one driver node (Fig.~\ref{figure2}(a)). System (\ref{sys1}) is controllable in $\tau_f\in \mathbb{N}$ steps if there exists an input $\mathbf{u}(\tau)$ driving the system from any initial state $\mathbf{x}_0=\mathbf{x}(0)$ to the desired state $\mathbf{x}_f=\mathbf{x}(\tau_f)$ in $\tau_f$ steps. According to Kalman's controllability criterion \cite{Kalman63}, the rank $r=\text{rank}(\mathcal{C})$ of the controllability matrix \begin{equation*}\label{contrllability matrix} \mathcal{C}(\mathbf{A}, \mathbf{B})=[\mathbf{B}, \mathbf{A}\mathbf{B}, \mathbf{A}^2\mathbf{B}, \cdots,\mathbf{A}^{n-1}\mathbf{B}] \end{equation*} represents the dimension of the controllable space. In other words, for networked system (\ref{sys1}), $r=\text{rank}(\mathcal{C})$ tells us that there are $r$ nodes that can be controlled towards any desired state in finite time. For example, for the not fully controllable network with $4$ nodes shown in fig.~\ref{figure2}(a), the controllable space is three-dimensional. From fig.~\ref{figure2}(c), (d), we can see that the states of nodes $3$ and $4$ are always the same. A method for determining the controllable subspace was proposed in \cite{hosoe1980determination}. In what follows, we aim to analyse the corresponding energy cost required to control these controllable nodes. \begin{figure}[!ht] \centering \begin{minipage}[t]{1\textwidth} \centering \includegraphics[width=15cm]{f258.pdf} \end{minipage} \caption{Illustration of the controllable space as we choose node $1$ as the driver node. (a) A network with $4$ nodes, which is not fully controllable. We randomly generate $5,000$ normalized states to which the network can be driven from the origin. In (b), we depict the locations of nodes $1, 2, 3$ in three-dimensional space.
The corresponding locations of nodes $2, 3, 4$ are presented in (c), from which we can see that these points are located in the plane $x_3=x_4$. Panel (d) is the view of (c) from the angle parallel to the plane $x_3=x_4$. In other words, (d) presents the projection of (c) onto the plane $x_2=0$, which indicates that the states of nodes $3$ and $4$ are exactly the same. }\label{figure2} \end{figure} First, we consider the special case where $\mathbf{A}$ is nonsingular. We analyse the target control of this case from the viewpoint of structural controllability \cite{Lin1970,Liu2011}. For a network in which each node has self dynamics (i.e., $a_{ii}\neq 0$), the corresponding adjacency matrix is nonsingular. For a sampled-data system, there are two types of signals: continuous-time and discrete-time. In order to study and design this kind of system, it is necessary to translate the continuous-time state space model into an equivalent discrete-time state space model \cite{Zhenglinear}. Specifically, for continuous-time systems \begin{equation*}\label{cts} \dot{\mathbf{x}}(t)=\mathcal{A} \mathbf{x}(t)+\mathcal{B} \mathbf{u}(t), \end{equation*} where $\mathcal{A}$ and $\mathcal{B}$ are the corresponding system matrix and input matrix, the equivalent discretized linear system is the system (\ref{sys1}), where $\mathbf{x}(\tau)=[\mathbf{x}(t)]_{t=\tau\eta}$, $\mathbf{u}(\tau)=[\mathbf{u}(t)]_{t=\tau\eta}$, $\mathbf{A}=\mathrm{e}^{\mathcal{A} \eta}$ and $\mathbf{B}=(\int^{\eta}_0\mathrm{e}^{\mathcal{A} t}\,\text{d}t)\mathcal{B}$, with $\eta$ being the sampling period satisfying the Shannon sampling theorem \cite{shannon1998communication}. Apparently, in the time-discretized system (\ref{sys1}), the system matrix $\mathbf{A}=\mathrm{e}^{\mathcal{A} \eta}$ is nonsingular.
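Kalman's rank condition above is straightforward to evaluate numerically. The sketch below uses a hypothetical $4$-node network of our own choosing (node $2$ coupled identically to nodes $3$ and $4$, mimicking the qualitative behavior seen in fig.~\ref{figure2}; the weights behind the actual figure are not specified here):

```python
import numpy as np

# Hypothetical weights: node 2 couples identically to nodes 3 and 4,
# so their states coincide along every reachable trajectory.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 1.],
              [0., 1., 0., 0.],
              [0., 1., 0., 0.]])
B = np.array([[1.], [0.], [0.], [0.]])   # node 1 is the driver node
n = A.shape[0]

# Kalman controllability matrix [B, AB, A^2 B, ..., A^(n-1) B].
Ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
r = np.linalg.matrix_rank(Ctrb)
assert r == 3                          # controllable space is 3-dimensional
assert np.allclose(Ctrb[2], Ctrb[3])   # reachable states satisfy x_3 = x_4
```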
When system (\ref{sys1}) is not fully controllable, i.e., $\text{rank}(\mathcal{C})=r<n$, we denote the controllable node set by $\{{\text{c}_1},$ ${\text{c}_2}, \cdots, {\text{c}_r}\}$ and uncontrollable node set by $\{{\bar{\text{c}}_1},$ ${\bar{\text{c}}_2}, \cdots, {\bar{\text{c}}_{n-r}}\}$. Next, we permute the order of the nodes such that the first $r$ nodes are controllable, and leave the rest $n-r$ nodes uncontrollable. Consequently, the corresponding state variable transformation comes with \begin{equation*}\label{pt} \bar{\mathbf{x}}(\tau)=\mathbf{\Theta}\mathbf{x}(\tau), \end{equation*} where $\mathbf{\Theta}$ is a permutation matrix with $\mathbf{\Theta}\mathbf{\Theta}^{\text{T}}=\mathbf{\Theta}^{\text{T}}\mathbf{\Theta}=\mathbf{I}$. Then system (\ref{sys1}) is equivalent to \begin{equation*} \bar{\mathbf{x}}(\tau+1)=\mathbf{\Theta}\mathbf{A}\mathbf{\Theta}^{\text{T}}\bar{\mathbf{x}}(\tau)+\mathbf{\Theta} \mathbf{B}\mathbf{u}(\tau), \end{equation*} which can be further decomposed with the following expression \begin{equation}\label{eq3} \begin{split} \begin{bmatrix} \bar{\mathbf{x}}_{\text{c}}(\tau+1)\\ \bar{\mathbf{x}}_{\bar{{\text{c}}}}(\tau+1) \end{bmatrix} = \begin{bmatrix} \mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\text{c}}&\,\,\mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\bar{{\text{c}}}}\\ \mathbf{\Theta}_{\bar{{\text{c}}}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\text{c}}&\,\,\mathbf{\Theta}_{\bar{{\text{c}}}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\bar{{\text{c}}}} \end{bmatrix} \begin{bmatrix} \bar{\mathbf{x}}_{\text{c}}(\tau)\\ \bar{\mathbf{x}}_{\bar{{\text{c}}}}(\tau) \end{bmatrix} + \begin{bmatrix} \mathbf{B}_{\text{c}}\\ \mathbf{0} \end{bmatrix} \mathbf{u}(\tau).
\end{split} \end{equation} Therein, $\bar{\mathbf{x}}_{\text{c}}=[x_{{\text{c}}_1}\,\,x_{{\text{c}}_2}\,\, \dots\,\, x_{{\text{c}}_r}]^{\text{T}}$, $\bar{\mathbf{x}}_{\bar{{\text{c}}}}=[x_{\bar{{\text{c}}}_1}\,\, x_{\bar{{\text{c}}}_2}\,\, \dots\,\, x_{\bar{{\text{c}}}_{n-r}}]^{\text{T}}$, $\mathbf{\Theta}_{\text{c}}$ is the first $r$ rows of $\mathbf{\Theta}$, and $\mathbf{\Theta}_{\bar{{\text{c}}}}$ is the remaining $n-r$ rows. For $\mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\bar{{\text{c}}}}$, the element in the $i$th row and $j$th column is $\mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\bar{{\text{c}}}}(i, j)=a_{{\text{c}}_i, \bar{{\text{c}}}_j}.$ In other words, we need to judge whether there exists a link between the controllable node ${\text{c}}_i$ and the uncontrollable node $\bar{{\text{c}}}_j$. A network is structurally controllable if the following conditions are satisfied \cite{Lin1970,Liu2011}: 1) each node is accessible from external inputs; 2) there is no dilation. In addition, if the matrix $(\mathbf{A}~ \mathbf{B})$ has full rank, then the network $(\mathbf{A}, \mathbf{B})$ has no dilation. Here $\mathbf{A}$ is nonsingular, which leads to the absence of dilation in the network $(\mathbf{A}, \mathbf{B})$. If there existed a direct link between nodes ${\text{c}}_i$ and $\bar{{\text{c}}}_j$, then the node $\bar{{\text{c}}}_j$ would be controllable, since $\bar{{\text{c}}}_j$ could receive a control signal from the controllable node ${\text{c}}_i$. Therefore, there is no direct link between ${\text{c}}_i$ and $\bar{{\text{c}}}_j$, i.e., $a_{{\text{c}}_i, \bar{{\text{c}}}_j}=0$.
Hence, system (\ref{eq3}) becomes \begin{equation*}\label{eq030704} \begin{bmatrix} \bar{\mathbf{x}}_{\text{c}}(\tau+1)\\ \bar{\mathbf{x}}_{\bar{{\text{c}}}}(\tau+1) \end{bmatrix} = \begin{bmatrix} \mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\text{c}}&\,\,\mathbf{0}\\ \mathbf{0}&\,\,\mathbf{\Theta}_{\bar{{\text{c}}}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\bar{{\text{c}}}} \end{bmatrix} \begin{bmatrix} \bar{\mathbf{x}}_{\text{c}}(\tau)\\ \bar{\mathbf{x}}_{\bar{{\text{c}}}}(\tau) \end{bmatrix} + \begin{bmatrix} \mathbf{B}_{\text{c}}\\ \mathbf{0} \end{bmatrix} \mathbf{u}(\tau). \end{equation*} The controllable subsystem is \begin{equation*}\label{eq4} \bar{\mathbf{x}}_{\text{c}}(\tau+1)=\mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\text{c}}\bar{\mathbf{x}}_{\text{c}}(\tau)+\mathbf{B}_{\text{c}}\mathbf{u}(\tau). \end{equation*} By letting $\bar{\mathbf{A}}_{\text{c}}=\mathbf{\Theta}_{\text{c}}\mathbf{A}\mathbf{\Theta}^{\text{T}}_{\text{c}}$, it is clear that all eigenvalues of $\bar{\mathbf{A}}_{\text{c}}$ belong to the set of eigenvalues of $\mathbf{A}$, and the graph of the adjacency matrix $\bar{\mathbf{A}}_{\text{c}}$ is the subgraph of the original graph spanned by nodes ${{\text{c}}_1}, {{\text{c}}_2}, \dots, {{\text{c}}_r}$. Note that structural controllability is a precondition for state controllability, and for a structurally controllable network almost all choices of weights guarantee state controllability \cite{Lin1970}. In this part, we do not consider the rare scenarios where the network is structurally controllable but not state controllable. In this case, we obtain the energy cost for controlling a controllable network in table \ref{table2} of Section~\ref{Energycom}. For the general case, we can equivalently regard target control as the output controllability of a system.
Namely, consider a part of the system (\ref{sys1}): \begin{equation}\label{sys2} \begin{cases} \mathbf{x}(\tau+1)=\mathbf{A}\mathbf{x}(\tau)+\mathbf{B}\mathbf{u}(\tau)\\ \mathbf{y}(\tau)=\mathbf{C}\mathbf{x}(\tau) \end{cases} \end{equation} where $\mathbf{C}=[\mathbf{I}_{\text{c}_1}^{\text{T}}~\mathbf{I}_{\text{c}_2}^{\text{T}}~\cdots~\mathbf{I}_{\text{c}_r}^{\text{T}}]^{\text{T}}\in \mathbb{R}^{r\times n}$ is the output matrix with $\mathbf{I}_{\text{c}_i}$ being the ${\text{c}_i}$th row of the identity matrix, and $\mathbf{y}(\tau) =[x_{{\text{c}}_1}(\tau)\,\,x_{{\text{c}}_2}(\tau)\,\, \dots\,\, x_{{\text{c}}_r}(\tau)]^{\text{T}}$ collects the states of the target nodes. Fig.~\ref{figure2} gives a brief illustration of target control. System (\ref{sys2}) is called output controllable if and only if the output controllability matrix satisfies \begin{equation} \text{rank}~ \mathcal{C}(\mathbf{A}, \mathbf{B}, \mathbf{C})=\text{rank}~[\mathbf{C}\mathbf{B}, \mathbf{C}\mathbf{A}\mathbf{B}, \mathbf{C}\mathbf{A}^2\mathbf{B}, \cdots,\mathbf{C}\mathbf{A}^{n-1}\mathbf{B}]=r. \end{equation} Note that partial controllability of the system (\ref{sys1}) is equivalent to output controllability of the system (\ref{sys2}) \cite{lin1991output}. Without loss of generality, we denote the controllable nodes by $1, 2, \dots, r$, i.e., $\mathbf{C}$ is chosen as the first $r$ rows of the identity matrix of size $n$. To analyse the energy cost for target control, we employ the conventional definition of the input control energy \begin{equation}\label{ET} E(\tau_f)=\frac{1}{2}\sum^{\tau_f-1}_{\tau=0}\mathbf{u}^{\text{T}}(\tau)\mathbf{u}(\tau).
\end{equation} By minimizing the energy cost $E(\tau_f)$, one can employ optimal control theory \cite{OptimalBooLewis} to derive the optimal control input (see section \ref{outcon}) \begin{equation}\label{opU} \mathbf{u}^*(\tau)=\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-\tau-1}\mathbf{C}^{\text{T}}(\mathbf{C}\mathbf{W}\mathbf{C}^\text{T})^{-1}(\mathbf{y}_{f}-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0), \end{equation} where $\mathbf{y}_f=\mathbf{y}(\tau_f)$ and $\mathbf{W}$ is the Gramian matrix of the system (\ref{sys1}) with $ \mathbf{W}=\sum^{\tau_f-1}_{i=0}\mathbf{A}^{\tau_f-i-1}\mathbf{B}$ $\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-i-1}. $ Accordingly, the minimum energy cost is \begin{equation}\label{minE} E(\tau_f)=(\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0)^{\text{T}}(\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}})^{-1}(\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0). \end{equation} Assuming $\mathbf{x}_0=\mathbf{0}$ and denoting $\mathbf{W}_{\text{C}}=\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}}$, we further have \begin{equation} E(\tau_f)=\mathbf{y}_f^{\text{T}}\mathbf{W}_{\text{C}}^{-1}\mathbf{y}_f. \end{equation} Intuitively, $\mathbf{W}_{\text{C}}$ consists of the first $r$ rows and $r$ columns of the matrix $\mathbf{W}$. Note that if system (\ref{sys2}) is output controllable, the matrix $\mathbf{W}_{\text{C}}$ is invertible. By normalizing the control distance $\|\mathbf{y}_f\|=1$, we have \begin{equation}\label{ineqe} \frac{1}{\lambda_{\max}(\mathbf{W}_{\text{C}})} \leq E(\tau_f)\leq \frac{1}{\lambda_{\min}(\mathbf{W}_{\text{C}})}, \end{equation} where $\lambda_{\max}$ ($\lambda_{\min}$) is the maximum (minimum) eigenvalue of $\mathbf{W}_{\text{C}}$. From Eq.~(\ref{ineqe}), the kernel problem is to obtain the minimum and maximum eigenvalues of $\mathbf{W}_{\text{C}}$.
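Eqs.~(\ref{opU}), (\ref{minE}) and (\ref{ineqe}) can be checked numerically. The sketch below assumes a hypothetical $4$-node network with nodes $1$--$3$ as targets and $\mathbf{x}_0=\mathbf{0}$ (our own toy example, not one from the paper); it verifies that the optimal input steers the targets exactly to $\mathbf{y}_f$ and that the energy quadratic form of Eq.~(\ref{minE}) obeys the eigenvalue bounds:

```python
import numpy as np

A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 1.],
              [0., 1., 0., 0.],
              [0., 1., 0., 0.]])
B = np.array([[1.], [0.], [0.], [0.]])
C = np.eye(4)[:3]                 # target nodes 1, 2, 3
tf = 4

# Gramian W and its target block W_C = C W C^T.
W = sum(np.linalg.matrix_power(A, tf - i - 1) @ B @ B.T
        @ np.linalg.matrix_power(A.T, tf - i - 1) for i in range(tf))
WC = C @ W @ C.T

yf = np.array([0.6, 0.8, 0.0])    # desired target state, ||yf|| = 1
g = np.linalg.solve(WC, yf)       # (C W C^T)^{-1} yf  (for x0 = 0)

# Apply u*(tau) = B^T (A^T)^(tf-tau-1) C^T (C W C^T)^{-1} yf and simulate.
x = np.zeros(4)
for tau in range(tf):
    u = B.T @ np.linalg.matrix_power(A.T, tf - tau - 1) @ C.T @ g
    x = A @ x + B @ u
assert np.allclose(C @ x, yf)     # target nodes reach yf exactly

# Quadratic form E = yf^T W_C^{-1} yf and the eigenvalue bounds.
E = float(yf @ g)
lams = np.linalg.eigvalsh(WC)     # ascending eigenvalues of W_C
assert 1 / lams[-1] - 1e-9 <= E <= 1 / lams[0] + 1e-9
```

Note that the sketch evaluates the quadratic form directly; the overall $\tfrac12$ prefactor of Eq.~(\ref{ET}) is immaterial for the bound check.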
In previous studies, researchers focused on the lower bound of the energy cost, where the trace of the corresponding Gramian matrix was used to approximate the maximum eigenvalue \cite{Li2017,Pasqualetti2014Controllability,Yan2012PRL}. Indeed, given that the trace of a matrix equals the sum of its eigenvalues, it is frequently employed to approximate the maximum eigenvalue of the controllability Gramian matrix, whose eigenvalues are mostly relatively small. This further reflects that the upper bound of the energy cost dominates. Therefore, it is meaningful to acquire the corresponding upper bound of the energy for achieving control goals. We take a three-dimensional fully controllable network as an example. As shown in Eq.~(\ref{eq5}), the matrix $\mathbf{W}^{-1}$ of $\alpha^{\text{T}}\mathbf{W}^{-1}\alpha$ is invertible with three positive eigenvalues $\mu_1, \mu_2, \mu_3$ and three corresponding linearly independent, orthogonal normalized eigenvectors $\alpha_1, \alpha_2, \alpha_3$. Here, we assume $\|\alpha\|=1$ and write $\alpha$ as the linear combination $\alpha=a_1\alpha_1+a_2\alpha_2+a_3\alpha_3$. Therefore, we have $E=\mu_1a_1^2+\mu_2a_2^2+\mu_3a_3^2$ with $a_1^2+a_2^2+a_3^2=1$. After introducing new variables $x=\sqrt{\mu_1}a_1$, $y=\sqrt{\mu_2}a_2$ and $z=\sqrt{\mu_3}a_3$ with $a_1=\sin\theta\cos\phi$, $a_2=\sin\theta\sin\phi$ and $a_3=\cos\theta$, we have $x^2+y^2+z^2=E(\alpha)=d^2$, where $d$ is the distance between the origin and the point $(x, y, z)$. At a given control distance, we show the energy required to reach final states on the unit sphere in fig.~\ref{figure1}(c). From fig.~\ref{figure1}, it is clear that a large proportion ($\sim 65\%$) of the final states requires the largest amount ($>75\%$) of energy to accomplish control.
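This concentration of costly directions can be reproduced in a small numerical experiment. The sketch below assumes a fully controllable $3$-node chain of our own choosing (so the percentages will differ from fig.~\ref{figure1}), samples unit-norm final states and evaluates $E(\alpha)=\alpha^{\text{T}}\mathbf{W}^{-1}\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully controllable 3-node chain driven through node 1.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])
tf = 3
W = sum(np.linalg.matrix_power(A, tf - i - 1) @ B @ B.T
        @ np.linalg.matrix_power(A.T, tf - i - 1) for i in range(tf))
Winv = np.linalg.inv(W)

# E(alpha) = alpha^T W^{-1} alpha over uniformly sampled unit vectors.
alphas = rng.normal(size=(20000, 3))
alphas /= np.linalg.norm(alphas, axis=1, keepdims=True)
E = np.einsum('ij,jk,ik->i', alphas, Winv, alphas)

mu = np.linalg.eigvalsh(Winv)             # mu_1 <= mu_2 <= mu_3
assert np.all(E >= mu[0] - 1e-9) and np.all(E <= mu[-1] + 1e-9)
frac_costly = np.mean(E > 0.75 * mu[-1])  # share of near-worst-case targets
```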
\begin{figure}[ht] \centering \begin{minipage}[t]{1\textwidth} \centering \includegraphics[width=15cm]{figure72.pdf} \end{minipage} \caption{Energy distribution for a three-dimensional fully controllable network. (a) For the fully connected three-dimensional controllable network, the interaction strengths are given alongside each link, and node $1$ is chosen as the driver node. In (b), we present statistics of the energy distribution for driving the fully controllable network from the origin to the unit sphere, obtained by uniformly sampling about $90,000$ points from the surface. For these $90,000$ data points, we find the maximum and minimum values and divide the range into $10$ equal intervals. The bar diagram counts the number and percentage of the data in each subinterval. In addition, we calculate the ratio of each data point to the maximum value, denoted by $\rho$, and count the probability of the ratio falling into the intervals $(0, 0.25], (0.25, 0.5], (0.5, 0.75]$ and $(0.75, 1]$, as shown in the pie chart. Accordingly, in (c) we depict these $90,000$ points in three-dimensional coordinates, taking the logarithm $\ln(E)$ of the energy as the distance from the origin to each point.}\label{figure1} \end{figure} In the sequel, we analyse the problem from the point of view of system decomposition. Since system (\ref{sys1}) is not fully controllable, the system can be decomposed into two parts, a controllable part and an uncontrollable part, by introducing the variable transformation \begin{equation} \overline{\mathbf{x}}(\tau)=\mathbf{R}\mathbf{x}(\tau), \end{equation} where $\mathbf{R}$ is an orthogonal matrix. The first $r$ columns of $\mathbf{R}^{\text{T}}$ are constructed from an orthonormal basis of the column space of $\mathcal{C}$ (via Gram-Schmidt orthogonalization), and the remaining columns are constructed from $n-r$ column vectors orthogonal to the existing $r$ columns.
Therefore, system (\ref{sys1}) translates into \begin{equation}\label{eq1} \overline{\mathbf{x}}(\tau+1)=\overline{\mathbf{A}}\overline{\mathbf{x}}(\tau)+\overline{\mathbf{B}}\mathbf{u}(\tau), \end{equation} where $\overline{\mathbf{A}}=\mathbf{R}\mathbf{A}\mathbf{R}^{\text{T}}$ and $\overline{\mathbf{B}}=\mathbf{R}\mathbf{B}$. According to controllability decomposition theory, the specific form of (\ref{eq1}) is \begin{equation}\notag \begin{bmatrix} \overline{\mathbf{x}}_{\text{c}}(\tau+1)\\ \overline{\mathbf{x}}_{\text{nc}}(\tau+1) \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{\text{c}}&\mathbf{0}\\ \mathbf{0}&\mathbf{A}_{\text{nc}} \end{bmatrix} \begin{bmatrix} \overline{\mathbf{x}}_{\text{c}}(\tau)\\ \overline{\mathbf{x}}_{\text{nc}}(\tau) \end{bmatrix} + \begin{bmatrix} \mathbf{B}_{\text{c}}\\ \mathbf{0} \end{bmatrix} \mathbf{u}(\tau), \end{equation} where $\overline{\mathbf{x}}_{\text{c}}(\tau)\in \mathbb{R}^{r}$, $\mathbf{A}_{\text{c}}\in\mathbb{R}^{r\times r}$, and $\mathbf{B}_{\text{c}}\in \mathbb{R}^{r\times m}$. Therein, the dynamics of the controllable part is \begin{equation}\label{eq2} \overline{\mathbf{x}}_{\text{c}}(\tau+1)=\mathbf{A}_{\text{c}}\overline{\mathbf{x}}_{\text{c}}(\tau)+\mathbf{B}_{\text{c}}\mathbf{u}(\tau). \end{equation} For example, for the not fully controllable network shown in fig.~\ref{figure2}, one can perform the controllable decomposition. First, one obtains a maximal linearly independent group of column vectors of $\mathcal{C}$, namely $ \begin{bmatrix} 1\\ 0\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 1\\ 1 \end{bmatrix}, \begin{bmatrix} 3\\ 1\\ 0\\ 0 \end{bmatrix}.
$ By performing Gram--Schmidt orthogonalization on the above vectors and extending to the whole $4$-dimensional linear space, we have $\mathbf{R}^{\text{T}}=\begin{bmatrix} 1&0&0&0\\ 0&\frac{1}{\sqrt{3}}&\frac{2}{\sqrt{6}}&0\\ 0&\frac{1}{\sqrt{3}}&\frac{-1}{\sqrt{6}}&\frac{1}{\sqrt{2}}\\ 0&\frac{1}{\sqrt{3}}&\frac{-1}{\sqrt{6}}&\frac{-1}{\sqrt{2}} \end{bmatrix}.$ Furthermore, $$\overline{\mathbf{A}}=\mathbf{R}\mathbf{A}\mathbf{R}^{\text{T}}= \begin{bmatrix} 0&\sqrt{3}&0&0\\ \sqrt{3}&\frac{1}{3}&\frac{2}{\sqrt{18}}&0\\ 0&\frac{2}{\sqrt{18}}&\frac{2}{3}&0\\ 0&0&0&0 \end{bmatrix} \quad \text{with} \quad \mathbf{A}_{\text{c}}=\begin{bmatrix} 0&\sqrt{3}&0\\ \sqrt{3}&\frac{1}{3}&\frac{2}{\sqrt{18}}\\ 0&\frac{2}{\sqrt{18}}&\frac{2}{3} \end{bmatrix}$$ and $\mathbf{B}_{\text{c}}=[1 ~ 0~0]^{\text{T}}.$ For system (\ref{eq2}), the corresponding Gramian matrix is \begin{equation} \mathcal{W}=\sum^{\tau_f-1}_{\tau=0}\mathbf{A}^\tau_{\text{c}}\mathbf{B}_{\text{c}}\mathbf{B}_{\text{c}}^{\text{T}}\mathbf{A}_{\text{c}}^\tau, \end{equation} which is invertible. Substituting $\overline{\mathbf{A}}=\mathbf{R}\mathbf{A}\mathbf{R}^{\text{T}}$ and $\overline{\mathbf{B}}=\mathbf{R}\mathbf{B}$ into $\mathbf{W}_{\text{C}}$, we have \begin{equation}\label{WCT} \mathbf{W}_{\text{C}}=\mathbf{R}_1^{\text{T}}\mathcal{W}\mathbf{R}_1, \end{equation} where $\mathbf{R}= \begin{bmatrix} \mathbf{R}_1& \mathbf{R}_3\\ \mathbf{R}_2& \mathbf{R}_4 \end{bmatrix}$ and $\mathbf{R}_1\in \mathbb{R}^{r\times r}$. It needs to be emphasized that the ultimate goal is to derive the minimum and maximum eigenvalues of $\mathbf{W}_{\text{C}}$.
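The trace-based extreme-eigenvalue estimates used for this purpose (a Wolkowicz--Styan-type bound, following \cite{lam2011estimates}) can be sketched numerically; the matrix below is an arbitrary SPD example, not $\mathbf{W}_{\text{C}}$ itself:

```python
import numpy as np

def f(alpha, beta, n):
    """f(alpha, beta) = sqrt(alpha/n + sqrt((n-1)/n * (beta - alpha^2/n)))."""
    return np.sqrt(alpha / n + np.sqrt((n - 1) / n * (beta - alpha**2 / n)))

def eig_estimates(M):
    """Upper estimate of lambda_max(M) and lower estimate of lambda_min(M)
    from traces of powers of M and M^{-1}."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    lam_max_est = f(np.trace(M @ M), np.trace(np.linalg.matrix_power(M, 4)), n)
    lam_min_est = 1.0 / f(np.trace(Minv @ Minv),
                          np.trace(np.linalg.matrix_power(Minv, 4)), n)
    return lam_max_est, lam_min_est

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 6))
M = X @ X.T + 6 * np.eye(6)          # arbitrary SPD test matrix
lam = np.linalg.eigvalsh(M)
hi, lo = eig_estimates(M)
assert lam[-1] <= hi + 1e-9 and lo <= lam[0] + 1e-9
```

Here $f(\mathrm{tr}(\mathbf{M}^2), \mathrm{tr}(\mathbf{M}^4))$ bounds $\lambda_{\max}(\mathbf{M})$ from above, so applying the same estimate to $\mathbf{M}^{-1}$ bounds $\lambda_{\min}(\mathbf{M})$ from below, which is how the bounds on $E$ are assembled next.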
For that, based on an effective approach to approximating the minimum and maximum eigenvalues of a positive definite matrix $\mathbf{M}$ \cite{lam2011estimates}, we can derive the corresponding upper and lower bounds as \begin{equation}\label{upE} \overline{E}\approx f(\underline{\alpha}, \underline{\beta}) \end{equation} \begin{equation}\label{lowE} \underline{E}\approx \frac{1}{f(\overline{\alpha}, \overline{\beta})} \end{equation} where $f(\alpha, \beta)=\sqrt{\frac{\alpha}{n}+\sqrt{\frac{n-1}{n}\left(\beta-\frac{\alpha^2}{n}\right)}}$, $\overline{\alpha}=\text{trace}(\mathbf{M}^2)$, $\overline{\beta}=\text{trace}(\mathbf{M}^4)$, $\underline{\alpha}=\text{trace}((\mathbf{M}^{-1})^2)$, and $\underline{\beta}=\text{trace}((\mathbf{M}^{-1})^4)$. In Sections \ref{Energycom} and \ref{energyuncom}, we give a detailed analysis of calculating the parameters $\overline{\alpha}$, $\overline{\beta}$, $\underline{\alpha}$ and $\underline{\beta}$ of $\mathbf{W}_{\text{C}}$ to obtain approximations of the corresponding maximum and minimum eigenvalues. Table~\ref{table1} presents the upper and lower bounds of the minimum energy cost, and the corresponding numerical verification is shown in fig.~\ref{figure7}. \begin{table}[!htbp] \centering\caption{Lower and upper bounds of the minimum energy. Here, $|\lambda_{\text{c}1}|$ and $|\lambda_{\text{c}r}|$ are the minimum and maximum absolute values of the eigenvalues of $\mathbf{A}_{\text{c}}$, respectively, i.e., $|\lambda_{\text{c}1}|=\min\{|\lambda_{\text{c}i}|~|~\lambda_{\text{c}i}\in \Lambda(\mathbf{A}_{\text{c}})\}$ and $|\lambda_{\text{c}r}|=\max\{|\lambda_{\text{c}i}|~|~\lambda_{\text{c}i}\in \Lambda(\mathbf{A}_{\text{c}})\}$.
When $|\lambda_{\text{c}r}|>1$, $\underline{E}$ decreases with time $\tau_f$ as $\lambda_{\text{c}r}^{2-2\tau_f}$. When $|\lambda_{\text{c}r}|=1$, $\underline{E}\sim \tau_f^{-1}$ holds for any number of driver nodes. When $|\lambda_{\text{c}r}|<1$, $\underline{E}$ approaches a constant irrespective of $\tau_f$. For one driver node, the constant is given by Eq.~(\ref{lowE}) with (\ref{Aeq11})(\ref{Aeq12}), and for any number of driver nodes, the constant is given by Eq.~(\ref{lowE}) with (\ref{Aeq15})(\ref{Aeq16}). Analogously, for the upper bound, when $|\lambda_{\text{c}1}|>1$, $\overline{E}\sim \lambda_{\text{c}1}^{2-2\tau_f}$ holds, and when $|\lambda_{\text{c}1}|=1$, $\overline{E}\sim \tau_f^{-1}$ holds. When $|\lambda_{\text{c}1}|<1$, $\overline{E}$ approaches a constant irrespective of $\tau_f$, given by Eq.~(\ref{upE}) with (\ref{Aeq13})(\ref{Aeq14}) for one driver node. } \fontsize{8}{15}\selectfont \begin{tabular}{cc|c|c} \toprule[2pt] \multicolumn{2}{c|}{Number of driver nodes}&$1$&$m~(m\leq r)$\\ \toprule[1pt] \multirow{3}{*}{Lower bound $\underline{E}$}&$|\lambda_{\text{c}r}|<1$&Eq.~(\ref{lowE}) with (\ref{Aeq11})(\ref{Aeq12})&Eq.~(\ref{lowE}) with (\ref{Aeq15})(\ref{Aeq16})\\ &$|\lambda_{\text{c}r}|=1$&$\sim \tau_f^{-1}$&$\sim \tau_f^{-1}$\\ &$|\lambda_{\text{c}r}|>1$&$\sim \lambda_{\text{c}r}^{2-2\tau_f}$&$\sim \lambda_{\text{c}r}^{2-2\tau_f}$\\ \toprule[1pt] \multirow{3}{*}{Upper bound $\overline{E}$}&$|\lambda_{\text{c}1}|<1$&Eq.~(\ref{upE}) with (\ref{Aeq13})(\ref{Aeq14})&$C_1$\\ &$|\lambda_{\text{c}1}|=1$&$\sim \tau_f^{-1}$&$\sim \tau_f^{-1}$\\ &$|\lambda_{\text{c}1}|>1$&$\sim \lambda_{\text{c}1}^{2-2\tau_f}$&$\sim \lambda_{\text{c}1}^{2-2\tau_f}$\\ \toprule[2pt] \end{tabular} \label{table1}
\end{table} \begin{figure}[htbp] \centering \includegraphics[width=15cm]{f293.pdf} \caption{The upper and lower bounds of the control energy for target control. In order to illustrate the theoretical results in table~\ref{table1}, we generate random networks with different types of $\mathbf{A}_{\text{c}}$. For each pair of nodes, we add an edge with probability $0.1$ \cite{erdHos1960evolution}. To systematically cover all cases given in table~\ref{table1}, in (a) and (d) we draw the link weights $a_{ij}$ uniformly from the interval $[0, 0.05]$. Analogously, in (b), (c) and (f) the interval is $[-1, 0]$, and in (e) the interval is $[0, 0.1]$. To conveniently locate the eigenvalues of $\mathbf{A}_{\text{c}}$, we add self-loops and set the corresponding weight to $a+s_i$ with $s_i=-\sum^n_{j=1}a_{ij}$. In (a) and (d), by setting $a=0.8$, we can derive that all eigenvalues of $\mathbf{A}_{\text{c}}$ are located in $[0.6478, 0.8]$, which leads to $|\lambda_{\text{c}1}|<1$ and $|\lambda_{\text{c}r}|<1$. In (b), (e) and (f), we set $a=1$ such that all eigenvalues of $\mathbf{A}_{\text{c}}$ in (b) and (f) are located in $[1, 4.4048]$ with $|\lambda_{\text{c}1}|=1$ and $|\lambda_{\text{c}r}|>1$, and all eigenvalues of $\mathbf{A}_{\text{c}}$ in (e) are located in $[0.5805, 1]$ with $|\lambda_{\text{c}r}|=1$. In (c), all eigenvalues of $\mathbf{A}_{\text{c}}$ are located in $[2, 6.5169]$ by selecting $a=2$, which leads to $|\lambda_{\text{c}1}|>1$. Here, we adopt random networks of size $n=20$, and the dimension of the controllable space of the network is $14$ when selecting one driver node. In each panel, the pattern generated by the analytical derivation almost overlaps that of the numerical calculations.
}\label{figure7} \end{figure} From the energy scaling in terms of the control time for target control, we find that controlling different target nodes corresponds to different energy scalings. Therefore, in a given control task of controlling a given number of nodes, one can achieve minimum-energy control by choosing appropriate driver nodes. In fig.~\ref{figure4}(a), it is clear that the network is not controllable due to dilation. When nodes $1$ and $5$ are assigned as the driver node separately, the sets of controllable nodes are different. When node $1$ is the driver node, nodes $1, 2$ are controllable, and the minimum absolute value of the eigenvalues of the matrix $\mathbf{A}_{\text{c}}$ of the controllable part is $0$. When node $5$ is the driver node, nodes $1, 2, 5$ are controllable, and the minimum absolute value of the eigenvalues of the matrix $\mathbf{A}_{\text{c}}$ of the controllable part is $1.07$. Therefore, the upper bounds of the energy cost for achieving target control are different, and the corresponding energy scaling behaviors differ, as depicted in fig.~\ref{figure4}(d), (e). For a given set of target nodes, one can employ the greedy algorithm proposed in \cite{Gao2014} to find an approximately minimum set of driver nodes for target control. \begin{figure}[ht] \centering \includegraphics[width=14cm]{f261.pdf} \caption{Target control under different choices of driver nodes. (a) A network with $5$ nodes. In (b) and (c), we assign node $1$ and node $5$ as the driver node, respectively. When node $1$ is the driver node, only nodes $1$ and $2$ are controllable; when node $5$ is the driver node, nodes $1, 2, 5$ are controllable. For (b), the minimum eigenvalue $\lambda_{\text{c}1}$ of $\mathbf{A}_{\text{c}}$ satisfies $|\lambda_{\text{c}1}|<1$. Therefore, the corresponding upper bound of the energy approaches a constant, as depicted in (d) (see the theory shown in table~\ref{table1}).
Analogously, for (c), the minimum eigenvalue $\lambda_{\text{c}1}$ of $\mathbf{A}_{\text{c}}$ satisfies $|\lambda_{\text{c}1}|>1$. Therefore, the corresponding upper bound of the energy decreases exponentially with $\tau_f$, as depicted in (e). }\label{figure4} \end{figure} In addition, we find that achieving target control can save a huge amount of energy, even multiple orders of magnitude, compared with traditional full controllability. For example, in fig.~\ref{figure6} we compare the energy cost of controlling part of the network with that of controlling the entire network. When achieving target control by one external input, there are $9$ controllable nodes, as shown in fig.~\ref{figure6}(a) (node $2$ is uncontrollable). When applying two external inputs to the network, as shown in fig.~\ref{figure6}(b), the network is fully controllable. From fig.~\ref{figure6}(c), we find that it costs less energy to achieve target control of $90\%$ of the nodes than to control the entire network when $\tau_f$ is large. When $\tau_f$ is small, it costs less energy to achieve full control than target control, which is reasonable owing to the effect of the additional input signal for full control and is consistent with the result in Ref.~\cite{Pasqualetti2014Controllability}. \begin{figure}[ht] \centering \begin{minipage}[t]{1\textwidth} \centering \includegraphics[width=15cm]{f292.pdf} \end{minipage} \caption{Comparison of the energy cost for achieving target control and full control. In (a), we control the network by adding an input directly to node $9$ (blue), under which $9$ nodes are controllable (green), except node $2$ (white). In (b), we directly control nodes $2$ and $9$, by which the network is fully controllable.
In (c), for $\mathbf{x}_f=[x_1(\tau_f)~ x_2(\tau_f)~\cdots~ x_{10}(\tau_f)]^{\text{T}}$ in networks (a) and (b), by letting $x_i(\tau_f)=1, i=1, 3, 4,\dots,10$ and $x_2(\tau_f)=2, 3, 4$, we calculate the corresponding energy cost. The upper bounds of the corresponding minimum energy cost are presented in (d). It is clear that the energy cost of achieving target control is much less than that of achieving full control when $\tau_f$ is large. }\label{figure6} \end{figure} \section*{Discussion} In this paper, we investigate the energy cost for achieving target control, i.e., controlling a part of a complex network towards a desired state. In practical control tasks, it is sometimes unnecessary and inadvisable to control the entire network \cite{galbiati2013power,Gao2014}. Target control relaxes the requirement of full controllability and avoids an excessive waste of energy \cite{Klickstein2017}. With respect to the issue of the energy cost for controlling complex networks, we give a framework for estimating the exact upper and lower bounds of the control energy. The method we apply is effective and can be widely used to analyse traditional full controllability \cite{Li2018The} as well. Nonlinear dynamics is generally required to properly describe practical complex systems \cite{Cornelius2013NatCommun,Khalil2002a}. However, identifying the corresponding empirical parameterizations is challenging. A common alternative for handling the nonlinearity is to investigate the linearized version. Indeed, in many respects, linear dynamics is adequate to approximate and explore nonlinear dynamics in permissible local regions. Specifically, the controllability of the linearized dynamics has been shown to guarantee the controllability of the original nonlinear systems along the trajectory in the corresponding regions \cite{LinearizationBook}.
Nevertheless, further analysis of nonlinear dynamics remains a promising direction for general networked systems in future work. In real complex systems, temporal networks are universal, that is, the interactions between nodes are dynamic \cite{Holme2012,Li2017,NaokiBook,Posfai2014NJP}. As a preliminary exploration, we consider the framework for static networks, which can be extended to explore the corresponding energy cost for controlling temporal networks. For controlling a part of a temporal network, one can utilize output controllability to formulate the specific problem. Furthermore, one can take advantage of the effective Gramian matrix given in Ref.~\cite{Li2017} and adopt the controllable decomposition to solve the problem. In order to achieve target control of temporal networks with less energy cost, one can apply the independent path theorem proposed in \cite{Posfai2014NJP} to determine the driver nodes according to the framework for temporal networks. \begin{thebibliography}{10} \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem{barabasi2016network} \bibinfo{author}{Barab\'{a}si, A.-L.} \newblock \emph{\bibinfo{title}{Network Science}} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{address}{Cambridge}, \bibinfo{year}{2016}). \bibitem{Havlin2004book} \bibinfo{author}{Cohen, R.} \& \bibinfo{author}{Havlin, S.} \newblock \emph{\bibinfo{title}{Complex Networks: Structure, Robustness and Function}} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{address}{Cambridge}, \bibinfo{year}{2010}).
\bibitem{duan2017asynchronous} \bibinfo{author}{Duan, G.}, \bibinfo{author}{Xiao, F.} \& \bibinfo{author}{Wang, L.} \newblock \bibinfo{title}{Asynchronous periodic edge-event triggered control for double-integrator networks with communication time delays}. \newblock \emph{\bibinfo{journal}{IEEE Transactions on Cybernetics}} \textbf{\bibinfo{volume}{48}}, \bibinfo{pages}{675--688} (\bibinfo{year}{2017}). \bibitem{Liu2016Rev} \bibinfo{author}{Liu, Y.-Y.} \& \bibinfo{author}{Barab\'{a}si, A.-L.} \newblock \bibinfo{title}{Control principles of complex systems}. \newblock \emph{\bibinfo{journal}{Reviews of Modern Physics}} \textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{035006} (\bibinfo{year}{2016}). \bibitem{liu2013observability} \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{Slotine, J.-J.} \& \bibinfo{author}{Barab{\'a}si, A.-L.} \newblock \bibinfo{title}{Observability of complex systems}. \newblock \emph{\bibinfo{journal}{Proceedings of the National Academy of Sciences}} \textbf{\bibinfo{volume}{110}}, \bibinfo{pages}{2460--2465} (\bibinfo{year}{2013}). \bibitem{wang2007new} \bibinfo{author}{Wang, L.} \& \bibinfo{author}{Xiao, F.} \newblock \bibinfo{title}{A new approach to consensus problems in discrete-time multiagent systems with time-delays}. \newblock \emph{\bibinfo{journal}{Science in China Series F: Information Sciences}} \textbf{\bibinfo{volume}{50}}, \bibinfo{pages}{625--635} (\bibinfo{year}{2007}). \bibitem{wang2007finite} \bibinfo{author}{Wang, L.} \& \bibinfo{author}{Xiao, F.} \newblock \bibinfo{title}{Finite-time consensus problems for networks of dynamic agents}. \newblock \emph{\bibinfo{journal}{IEEE Transactions on Automatic Control}} \textbf{\bibinfo{volume}{55}}, \bibinfo{pages}{950--955} (\bibinfo{year}{2010}). \bibitem{chen2017pinning} \bibinfo{author}{Chen, G.} \newblock \bibinfo{title}{Pinning control and controllability of complex dynamical networks}.
\newblock \emph{\bibinfo{journal}{International Journal of Automation and Computing}} \textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{1--9} (\bibinfo{year}{2017}). \bibitem{Duan2019PRE} \bibinfo{author}{Duan, G.}, \bibinfo{author}{Li, A.}, \bibinfo{author}{Meng, T.}, \bibinfo{author}{Zhang, G.} \& \bibinfo{author}{Wang, L.} \newblock \bibinfo{title}{Energy cost for controlling complex networks with linear dynamics}. \newblock \emph{\bibinfo{journal}{Physical Review E}} \textbf{\bibinfo{volume}{99}}, \bibinfo{pages}{052305} (\bibinfo{year}{2019}). \bibitem{guan2017controllability} \bibinfo{author}{Guan, Y.}, \bibinfo{author}{Ji, Z.}, \bibinfo{author}{Zhang, L.} \& \bibinfo{author}{Wang, L.} \newblock \bibinfo{title}{Controllability of multi-agent systems under directed topology}. \newblock \emph{\bibinfo{journal}{International Journal of Robust and Nonlinear Control}} \textbf{\bibinfo{volume}{27}}, \bibinfo{pages}{4333--4347} (\bibinfo{year}{2017}). \bibitem{Li2017ConEng} \bibinfo{author}{Li, A.}, \bibinfo{author}{Cornelius, S.~P.}, \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{Wang, L.} \& \bibinfo{author}{Barab\'{a}si, A.-L.} \newblock \bibinfo{title}{Control energy scaling in temporal networks}. \newblock \emph{\bibinfo{journal}{arXiv:}} \bibinfo{pages}{1712.06434v1} (\bibinfo{year}{2017}). \bibitem{Li2017} \bibinfo{author}{Li, A.}, \bibinfo{author}{Cornelius, S.~P.}, \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{Wang, L.} \& \bibinfo{author}{Barab\'{a}si, A.-L.} \newblock \bibinfo{title}{The fundamental advantages of temporal networks}. \newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{358}}, \bibinfo{pages}{1042--1046} (\bibinfo{year}{2017}).
\bibitem{Liu2011} \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{Slotine, J.-J.} \& \bibinfo{author}{Barab\'{a}si, A.-L.} \newblock \bibinfo{title}{{Controllability of complex networks.}} \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{473}}, \bibinfo{pages}{167--173} (\bibinfo{year}{2011}). \bibitem{lu2016controllability} \bibinfo{author}{Lu, Z.}, \bibinfo{author}{Zhang, L.}, \bibinfo{author}{Ji, Z.} \& \bibinfo{author}{Wang, L.} \newblock \bibinfo{title}{Controllability of discrete-time multi-agent systems with directed topology and input delay}. \newblock \emph{\bibinfo{journal}{International Journal of Control}} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{179--192} (\bibinfo{year}{2016}). \bibitem{tian2018controllability} \bibinfo{author}{Tian, L.}, \bibinfo{author}{Guan, Y.} \& \bibinfo{author}{Wang, L.} \newblock \bibinfo{title}{Controllability and observability of multi-agent systems with heterogeneous and switching topologies}. \newblock \emph{\bibinfo{journal}{International Journal of Control}} \bibinfo{pages}{1--12} (\bibinfo{year}{2018}). \bibitem{wang2009controllability} \bibinfo{author}{Wang, L.}, \bibinfo{author}{Jiang, F.}, \bibinfo{author}{Xie, G.} \& \bibinfo{author}{Ji, Z.} \newblock \bibinfo{title}{Controllability of multi-agent systems based on agreement protocols}. \newblock \emph{\bibinfo{journal}{Science in China Series F: Information Sciences}} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{2074} (\bibinfo{year}{2009}). \bibitem{Yan2012PRL} \bibinfo{author}{Yan, G.}, \bibinfo{author}{Ren, J.}, \bibinfo{author}{Lai, Y.-C.}, \bibinfo{author}{Lai, C.-H.} \& \bibinfo{author}{Li, B.} \newblock \bibinfo{title}{Controlling complex networks: How much energy is needed?} \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{108}}, \bibinfo{pages}{218703} (\bibinfo{year}{2012}).
\bibitem{Gao2014} \bibinfo{author}{Gao, J.}, \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{D'Souza, R.~M.} \& \bibinfo{author}{Barab\'{a}si, A.-L.} \newblock \bibinfo{title}{{Target control of complex networks}}. \newblock \emph{\bibinfo{journal}{Nature Communications}} \textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{5415} (\bibinfo{year}{2014}). \bibitem{Klickstein2017} \bibinfo{author}{Klickstein, I.}, \bibinfo{author}{Shirin, A.} \& \bibinfo{author}{Sorrentino, F.} \newblock \bibinfo{title}{{Energy scaling of targeted optimal control of complex networks}}. \newblock \emph{\bibinfo{journal}{Nature Communications}} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{15145} (\bibinfo{year}{2017}). \bibitem{Kalman63} \bibinfo{author}{Kalman, R.~E.} \newblock \bibinfo{title}{Mathematical description of linear dynamical systems}. \newblock \emph{\bibinfo{journal}{Journal of the Society for Industrial and Applied Mathematics Series A Control}} \textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{152--192} (\bibinfo{year}{1963}). \bibitem{hosoe1980determination} \bibinfo{author}{Hosoe, S.} \newblock \bibinfo{title}{Determination of generic dimensions of controllable subspaces and its application}. \newblock \emph{\bibinfo{journal}{IEEE Transactions on Automatic Control}} \textbf{\bibinfo{volume}{25}}, \bibinfo{pages}{1192--1196} (\bibinfo{year}{1980}). \bibitem{Lin1970} \bibinfo{author}{Lin, C.-T.} \newblock \bibinfo{title}{Structural controllability}. \newblock \emph{\bibinfo{journal}{IEEE Transactions on Automatic Control}} \textbf{\bibinfo{volume}{19}}, \bibinfo{pages}{201--208} (\bibinfo{year}{1974}). \bibitem{Zhenglinear} \bibinfo{author}{Zheng, D.} \newblock \emph{\bibinfo{title}{Linear System Theory (Second Edition)}} (\bibinfo{publisher}{Tsinghua University Press}, \bibinfo{address}{Beijing}, \bibinfo{year}{2005}). \bibitem{shannon1998communication} \bibinfo{author}{Shannon, C.~E.} \newblock \bibinfo{title}{Communication in the presence of noise}.
\newblock \emph{\bibinfo{journal}{Proceedings of the IEEE}} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{447--457} (\bibinfo{year}{1998}). \bibitem{lin1991output} \bibinfo{author}{Lin, X.}, \bibinfo{author}{Tade, M.~O.} \& \bibinfo{author}{Newell, R.~B.} \newblock \bibinfo{title}{Output structural controllability condition for the synthesis of control systems for chemical processes}. \newblock \emph{\bibinfo{journal}{International Journal of Systems Science}} \textbf{\bibinfo{volume}{22}}, \bibinfo{pages}{107--132} (\bibinfo{year}{1991}). \bibitem{OptimalBooLewis} \bibinfo{author}{Lewis, F.~L.} \& \bibinfo{author}{Syrmos, V.~L.} \newblock \emph{\bibinfo{title}{Optimal Control (Second Edition)}} (\bibinfo{publisher}{Wiley}, \bibinfo{address}{New York}, \bibinfo{year}{1995}). \bibitem{Pasqualetti2014Controllability} \bibinfo{author}{Pasqualetti, F.}, \bibinfo{author}{Zampieri, S.} \& \bibinfo{author}{Bullo, F.} \newblock \bibinfo{title}{Controllability metrics and algorithms for complex networks}. \newblock \emph{\bibinfo{journal}{IEEE Transactions on Control of Network Systems}} \textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{40--52} (\bibinfo{year}{2014}). \bibitem{lam2011estimates} \bibinfo{author}{Lam, J.}, \bibinfo{author}{Li, Z.}, \bibinfo{author}{Wei, Y.}, \bibinfo{author}{Feng, J.} \& \bibinfo{author}{Chung, K.~W.} \newblock \bibinfo{title}{Estimates of the spectral condition number}. \newblock \emph{\bibinfo{journal}{Linear and Multilinear Algebra}} \textbf{\bibinfo{volume}{59}}, \bibinfo{pages}{249--260} (\bibinfo{year}{2011}). \bibitem{erdHos1960evolution} \bibinfo{author}{Erd{\H{o}}s, P.} \& \bibinfo{author}{R{\'e}nyi, A.} \newblock \bibinfo{title}{On the evolution of random graphs}. \newblock \emph{\bibinfo{journal}{Publications of the Mathematical Institute of the Hungarian Academy of Sciences}} \textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{17--60} (\bibinfo{year}{1960}).
\bibitem{galbiati2013power} \bibinfo{author}{Galbiati, M.}, \bibinfo{author}{Delpini, D.} \& \bibinfo{author}{Battiston, S.} \newblock \bibinfo{title}{The power to control}. \newblock \emph{\bibinfo{journal}{Nature Physics}} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{126} (\bibinfo{year}{2013}). \bibitem{Li2018The} \bibinfo{author}{Li, A.}, \bibinfo{author}{Wang, L.} \& \bibinfo{author}{Schweitzer, F.} \newblock \bibinfo{title}{The optimal trajectory to control complex networks}. \newblock \emph{\bibinfo{journal}{arXiv:}} \bibinfo{pages}{1806.04229v1} (\bibinfo{year}{2018}). \bibitem{Cornelius2013NatCommun} \bibinfo{author}{Cornelius, S.~P.}, \bibinfo{author}{Kath, W.~L.} \& \bibinfo{author}{Motter, A.~E.} \newblock \bibinfo{title}{Realistic control of network dynamics}. \newblock \emph{\bibinfo{journal}{Nature Communications}} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{1942} (\bibinfo{year}{2013}). \bibitem{Khalil2002a} \bibinfo{author}{Khalil, H.} \newblock \emph{\bibinfo{title}{Nonlinear Systems}} (\bibinfo{publisher}{Prentice Hall}, \bibinfo{address}{New Jersey}, \bibinfo{year}{2002}). \bibitem{LinearizationBook} \bibinfo{author}{Coron, J.-M.} \newblock \emph{\bibinfo{title}{Control and Nonlinearity}} (\bibinfo{publisher}{American Mathematical Society}, \bibinfo{year}{2009}). \bibitem{Holme2012} \bibinfo{author}{Holme, P.} \& \bibinfo{author}{Saram\"{a}ki, J.} \newblock \bibinfo{title}{{Temporal networks}}. \newblock \emph{\bibinfo{journal}{Physics Reports}} \textbf{\bibinfo{volume}{519}}, \bibinfo{pages}{97--125} (\bibinfo{year}{2012}). \bibitem{NaokiBook} \bibinfo{author}{Masuda, N.} \& \bibinfo{author}{Lambiotte, R.} \newblock \emph{\bibinfo{title}{A Guide to Temporal Networks}} (\bibinfo{publisher}{World Scientific}, \bibinfo{address}{Singapore}, \bibinfo{year}{2016}).
\bibitem{Posfai2014NJP} \bibinfo{author}{P\'{o}sfai, M.} \& \bibinfo{author}{H\"{o}vel, P.} \newblock \bibinfo{title}{{Structural controllability of temporal networks}}. \newblock \emph{\bibinfo{journal}{New Journal of Physics}} \textbf{\bibinfo{volume}{16}}, \bibinfo{pages}{123055} (\bibinfo{year}{2014}). \end{thebibliography} \appendix \section{Optimal Control Energy Theory} \subsection{Fully controllable systems} In this subsection, we assume system (\ref{sys1}) is fully controllable. According to optimal control theory, we have the Hamiltonian \begin{equation}\label{hm} H(\tau)=\frac{1}{2}\mathbf{u}(\tau)^{\text{T}}\mathbf{u}(\tau)+\lambda(\tau+1)^{\text{T}}(\mathbf{A}\mathbf{x}(\tau)+\mathbf{B}\mathbf{u}(\tau)). \end{equation} The state variable $\mathbf{x}(\tau)$ and the costate $\lambda(\tau)$ satisfy \begin{equation*}\label{ze1} \mathbf{x}(\tau+1)=\frac{\partial H(\tau)}{\partial\lambda(\tau+1)}=\mathbf{A}\mathbf{x}(\tau)+\mathbf{B}\mathbf{u}(\tau), \end{equation*} and \begin{equation}\label{ze2} \lambda(\tau)=\frac{\partial H(\tau)}{\partial\mathbf{x}(\tau)}=\mathbf{A}^{\text{T}}\lambda(\tau+1). \end{equation} The optimal input satisfies \begin{equation}\label{fopu} 0=\frac{\partial H(\tau)}{\partial\mathbf{u}(\tau)}. \end{equation} By solving Eq.~(\ref{fopu}), the optimal input is \begin{equation}\label{opu1} \mathbf{u}(\tau)^*=-\mathbf{B}^{\text{T}}\lambda(\tau+1).
\end{equation} Furthermore, from the iterative equation~(\ref{ze2}), we can derive \begin{equation}\label{Aeq1} \lambda(0)=(\mathbf{A}^{\text{T}})^\tau\lambda(\tau) \quad \rightarrow\quad \lambda(0)=(\mathbf{A}^{\text{T}})^{\tau_f}\lambda(\tau_f) \quad \rightarrow\quad \lambda(\tau)=(\mathbf{A}^{\text{T}})^{\tau_f-\tau}\lambda(\tau_f). \end{equation} In addition, the solution of Eq.~(\ref{sys1}) is \begin{equation}\label{Aeq2} \mathbf{x}(\tau)=\mathbf{A}^\tau\mathbf{x}_0+\sum^{\tau-1}_{i=0}\mathbf{A}^{\tau-i-1}\mathbf{B}\mathbf{u}(i). \end{equation} Substituting Eqs.~(\ref{opu1}) and (\ref{Aeq1}) into (\ref{Aeq2}), we have \begin{align} \mathbf{x}(\tau) &=\mathbf{A}^\tau\mathbf{x}_0-\sum^{\tau-1}_{i=0}\mathbf{A}^{\tau-i-1}\mathbf{B}\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-i-1}\lambda(\tau_f).\label{Aeq3} \end{align} By letting $\tau=\tau_f$ in Eq.~(\ref{Aeq3}), we have $ \mathbf{x}(\tau_f)=\mathbf{A}^{\tau_f}\mathbf{x}_0-\mathbf{W}\lambda(\tau_f) $ and \begin{equation}\label{Aeq4} \lambda(\tau_f)=-\mathbf{W}^{-1}(\mathbf{x}(\tau_f)-\mathbf{A}^{\tau_f}\mathbf{x}_0) \end{equation} where $ \mathbf{W}=\sum^{\tau_f-1}_{i=0}\mathbf{A}^{\tau_f-i-1}\mathbf{B}\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-i-1} $ is the controllability Gramian matrix of system (\ref{sys1}).
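The Gramian-based minimum-energy control of this appendix can be checked numerically: substituting (\ref{Aeq1}) and (\ref{Aeq4}) into (\ref{opu1}) gives the input $\mathbf{u}^*(\tau)=\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-\tau-1}\mathbf{W}^{-1}\alpha$, which should steer the state exactly to $\mathbf{x}(\tau_f)$ at cost $\frac{1}{2}\alpha^{\text{T}}\mathbf{W}^{-1}\alpha$. A minimal sketch with a hypothetical two-node controllable pair:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.5]])   # hypothetical controllable pair
B = np.array([[1.0], [0.0]])
x0 = np.zeros(2)
xf = np.array([1.0, -2.0])
tf = 4

# Controllability Gramian W = sum_i A^{tf-i-1} B B^T (A^T)^{tf-i-1}.
W = sum(np.linalg.matrix_power(A, tf - i - 1) @ B @ B.T
        @ np.linalg.matrix_power(A.T, tf - i - 1) for i in range(tf))
alpha = xf - np.linalg.matrix_power(A, tf) @ x0
lam = np.linalg.solve(W, alpha)           # W^{-1} alpha

# Apply u*(tau) = B^T (A^T)^{tf-tau-1} W^{-1} alpha and accumulate energy.
x, energy = x0.copy(), 0.0
for tau in range(tf):
    u = B.T @ np.linalg.matrix_power(A.T, tf - tau - 1) @ lam
    energy += 0.5 * float(u @ u)
    x = A @ x + (B @ u).ravel()

assert np.allclose(x, xf)                      # target state reached exactly
assert np.isclose(energy, 0.5 * alpha @ lam)   # E_min = (1/2) a^T W^{-1} a
```

The two assertions reflect the identities $\sum_i \mathbf{A}^{\tau_f-i-1}\mathbf{B}\mathbf{u}^*(i)=\mathbf{W}\mathbf{W}^{-1}\alpha=\alpha$ and $\sum_\tau \mathbf{u}^{*\text{T}}\mathbf{u}^*=\alpha^{\text{T}}\mathbf{W}^{-1}\mathbf{W}\mathbf{W}^{-1}\alpha=\alpha^{\text{T}}\mathbf{W}^{-1}\alpha$.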
Substituting Eqs.~(\ref{Aeq1}) and (\ref{Aeq4}) into (\ref{opu1}), we derive the optimal input \begin{equation*}\label{opu} \mathbf{u}^*(\tau)=\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-\tau-1}\mathbf{W}^{-1}(\mathbf{x}(\tau_f)-\mathbf{A}^{\tau_f}\mathbf{x}_0), \end{equation*} and the minimum energy cost \begin{align} E_{\min}&=\frac{1}{2}\sum^{\tau_f-1}_{\tau=0}\mathbf{u}^*(\tau)^{\text{T}}\mathbf{u}^*(\tau)=\frac{1}{2}\alpha^{\text{T}}\mathbf{W}^{-1}\alpha\label{eq5} \end{align} with $\alpha=\mathbf{x}(\tau_f)-\mathbf{A}^{\tau_f}\mathbf{x}_0$. \subsection{Output controllable systems}\label{outcon} Consider the output controllable system (\ref{sys2}) with $\mathbf{x}_0=\mathbf{x}(0)$ and $\mathbf{y}_f=\mathbf{y}(\tau_f)$. Let $\mathbf{C}=[\mathbf{I}_r, \mathbf{0}]\in \mathbb{R}^{r\times n}$ with $\mathbf{I}_r$ being the $r$-order identity matrix, and rank~$\mathcal{C}(\mathbf{A}, \mathbf{B}, \mathbf{C})=r$ holds. Analogously, construct the Hamiltonian as in Eq.~(\ref{hm}). Then Eqs.~(\ref{ze2}), (\ref{fopu}) and (\ref{opu1}) hold. From the iterative equation (\ref{ze2}), we have $ (\mathbf{A}^{\text{T}})^\tau\lambda(\tau)=\lambda(0), $ which further leads to $ \lambda(0)=(\mathbf{A}^{\text{T}})^{\tau_f}\lambda(\tau_f). $ Moreover, let $\lambda(\tau_f)=\mathbf{C}^{\text{T}}\hat{\lambda}_f$, and we have \begin{equation}\label{Aeq9} \lambda(\tau)=(\mathbf{A}^{\text{T}})^{\tau_f-\tau}\lambda(\tau_f)=(\mathbf{A}^{\text{T}})^{\tau_f-\tau}\mathbf{C}^{\text{T}}\hat{\lambda}_f.
\mathbf{e}nd{equation} And then, substituting Eqs.~(\ref{fopu}) and (\ref{opu1}) into (\ref{Aeq2}), we have \begin{equation}\notag \mathbf{x}(\tau)=\mathbf{A}^\tau\mathbf{x}_0-\sum^{\tau-1}_{i=0}\mathbf{A}^{\tau-i-1}\mathbf{B}\mathbf{B}^{\text{T}}\mathbf{\Lambda}mbda(i+1) =\mathbf{A}^\tau\mathbf{x}_0-\sum^{\tau-1}_{i=0}\mathbf{A}^{\tau-i-1}\mathbf{B}\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau-i-1}\mathbf{C}^{\text{T}}\mathbf{H}at{\mathbf{\Lambda}mbda}_f. \mathbf{e}nd{equation} Since the controllability Gramian matrix of system (\ref{sys1}) is $\mathbf{W}$, we obtain \begin{equation*} \begin{cases} \mathbf{x}(\tau_f)=\mathbf{A}^{\tau_f}\mathbf{x}_0-\mathbf{W}\mathbf{C}^{\text{T}}\mathbf{H}at{\mathbf{\Lambda}mbda}_f,\\ \mathbf{y}(\tau_f)=\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0-\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}}\mathbf{H}at{\mathbf{\Lambda}mbda}_f, \mathbf{e}nd{cases} \mathbf{e}nd{equation*} and the second equality is equivalent to \begin{equation}\mathbf{\Lambda}bel{Aeq10} \mathbf{H}at{\mathbf{\Lambda}mbda}_f=-(\mathbf{C}\mathbf{W}\mathbf{C}^\text{T})^{-1}(\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0). \mathbf{e}nd{equation} Then the optimal input (\ref{opu1}) with (\ref{Aeq9}) and (\ref{Aeq10}) is \begin{equation*}\mathbf{\Lambda}bel{opu2} \mathbf{u}^*(\tau)=\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-\tau-1}\mathbf{C}^{\text{T}}(\mathbf{C}\mathbf{W}\mathbf{C}^\text{T})^{-1}(\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0). 
\mathbf{e}nd{equation*} Denoting $\beta=\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0$, the minimum energy cost is \begin{align*} E&=\sum^{\tau_f-1}_{k=0}\mathbf{u}^{\text{T}}(k)\mathbf{u}(k)\notag\\ &=\beta^{\text{T}}[(\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}})^{-1}]^{\text{T}}\mathbf{C}\underbrace{\sum^{\tau_f-1}_{k=0}\mathbf{A}^{\tau_f-k-1}\mathbf{B}\mathbf{B}^{\text{T}} (\mathbf{A}^{\text{T}})^{\tau_f-k-1}}_{\mathbf{W}}\mathbf{C}^{\text{T}} (\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}})^{-1}\beta\notag\\ &=(\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0)^{\text{T}}(\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}})^{-1}(\mathbf{y}_f-\mathbf{C}\mathbf{A}^{\tau_f}\mathbf{x}_0).\mathbf{\Lambda}bel{minEout} \mathbf{e}nd{align*} \section{Energy scaling} \subsection{Energy scaling for full controllability}\mathbf{\Lambda}bel{Energycom} In this subsection, we assume system (\ref{sys1}) is fully controllable, which implies the Gramian matrix $\mathbf{W}$ is invertible. Next, we give detailed analysis on approximations of the minimum and maximum eigenvalues of $\mathbf{W}$. For $\mathbf{A}=\mathbf{A}^{\text{T}}$, we assume that its eigenvalues satisfy $|\mathbf{\Lambda}mbda_1|\leq|\mathbf{\Lambda}mbda_2|\leq\cdots\leq|\mathbf{\Lambda}mbda_n|$. Via the orthogonal decomposition $\mathbf{A}=\mathbf{P}\mathbf{\Lambda}\mathbf{P}^{\text{T}}$ with $\mathbf{P}=(p_{ij})_{nn}$, then Gramian matrix $\mathbf{W}$ is equivalent to \begin{equation*}\mathbf{\Lambda}bel{WT} \mathbf{W}=\sum^{\tau_f-1}_{\tau=0}(\mathbf{P}\mathbf{\Lambda}\mathbf{P}^{\text{T}})^\tau\mathbf{B}\mathbf{B}^{\text{T}}(\mathbf{P}\mathbf{\Lambda}\mathbf{P}^{\text{T}})^\tau =\mathbf{P}\sum^{\tau_f-1}_{\tau=0}\mathbf{\Lambda}^\tau\mathbf{P}^{\text{T}}\mathbf{B}\mathbf{B}^{\text{T}}\mathbf{\Lambda}^\tau\mathbf{P}^{\text{T}}. 
\mathbf{e}nd{equation*} By letting $\mathbf{M}=\sum^{\tau_f-1}_{\tau=0}\mathbf{\Lambda}^\tau\mathbf{P}^{\text{T}}\mathbf{B}\mathbf{B}^{\text{T}}\mathbf{P}\mathbf{\Lambda}^\tau=\sum^{\tau_f-1}_{\tau=0}\mathbf{M}_\tau,$ we have $\mathbf{W}=\mathbf{P}\mathbf{M}\mathbf{P}^{\text{T}}$, which indicates all eigenvalues of $\mathbf{M}$ and $\mathbf{W}$ are the same. Furthermore, $\mathbf{M}_{\tau}$ has the following form \begin{equation}\notag \begin{split} \mathbf{M}_{\tau}&= \begin{bmatrix} \mathbf{\Lambda}mbda^{\tau}_1&&&\\ &\mathbf{\Lambda}mbda^{\tau}_2&&\\ &&\ddots&\\ &&&\mathbf{\Lambda}mbda^{\tau}_n \mathbf{e}nd{bmatrix} \begin{bmatrix} q_{11}&q_{12}&\cdots&q_{1n}\\ q_{21}&q_{22}&\cdots&q_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ q_{n1}&q_{n2}&\cdots&q_{nn} \mathbf{e}nd{bmatrix} \begin{bmatrix} \mathbf{\Lambda}mbda^{\tau}_1&&&\\ &\mathbf{\Lambda}mbda^{\tau}_2&&\\ &&\ddots&\\ &&&\mathbf{\Lambda}mbda^{\tau}_n \mathbf{e}nd{bmatrix}\\ &\mathop{=}\limits^{(i,j)}q_{ij}\mathbf{\Lambda}mbda^{\tau}_i\mathbf{\Lambda}mbda^{\tau}_j, \mathbf{e}nd{split} \mathbf{e}nd{equation} where $\mathbf{Q}=\mathbf{P}^{\text{T}}\mathbf{B}\mathbf{B}^{\text{T}}\mathbf{P}=(q_{ij})_{nn}$. Thus, we have \begin{equation}\mathbf{\Lambda}bel{Mij} \mathbf{M}(i, j)=\sum^{\tau_f-1}_{t=0}q_{ij}\mathbf{\Lambda}mbda^t_i\mathbf{\Lambda}mbda^t_j=q_{ij}\sum^{\tau_f-1}_{t=0}\mathbf{\Lambda}mbda^t_i\mathbf{\Lambda}mbda^t_j=q_{ij} \frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j}. \mathbf{e}nd{equation} Note that, we have $ \mathbf{M}(i, j)=q_{ij}\tau_f~\text{if}~ \mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j=1,$ since $ \lim\limits_{\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j\rightarrow1}\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j}=\frac{-\tau_f(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f-1}}{-1}=\tau_f. 
$ Therefore, in following analysis, we consider the form of $\mathbf{M}$ as Eq.~(\ref{Mij}). \subsubsection{$n$ driver nodes} In the case of $n$ driver nodes, each node receives an independent input signal separately. Then the corresponding matrix $\mathbf{Q}=\mathbf{I}_n$ causes $\mathbf{M}$ to be a diagonal matrix $$ \mathbf{M}= \begin{bmatrix} \frac{1-\mathbf{\Lambda}mbda^{2\tau_f}_1}{1-\mathbf{\Lambda}mbda^2_1}&&\\ &\ddots&\\ &&\frac{1-\mathbf{\Lambda}mbda^{2\tau_f}_n}{1-\mathbf{\Lambda}mbda^2_n} \mathbf{e}nd{bmatrix}. $$ Apparently, the function $$ y=f(x)=\frac{1-x^{2\tau_f}}{1-x^2} $$ is the sum of geometric sequences $$ y=f(x)=\sum^{\tau_f-1}_{t=0}x^t=1+x^2+x^4+x^6+\cdots+x^{2(\tau_f-1)}, $$ which is an even function with $\dot{f}(x)>0 ~\text{for}~ x>0$, and $ \dot{f}(x)=0$ at $x=0$. Therefore, the function $f(x)$ increases as the variable $|x|$ increases. And then, the minimum and maximum eigenvalues of $\mathbf{M}$ is $\frac{1-\mathbf{\Lambda}mbda^{2\tau_f}_1}{1-\mathbf{\Lambda}mbda^2_1}$ and $\frac{1-\mathbf{\Lambda}mbda^{2\tau_f}_n}{1-\mathbf{\Lambda}mbda^2_n}$, respectively. In addition, when time scale $\tau_f$ is large, for $\frac{1-\mathbf{\Lambda}mbda^{2\tau_f}_1}{1-\mathbf{\Lambda}mbda^2_1}$ and $\frac{1-\mathbf{\Lambda}mbda^{2\tau_f}_n}{1-\mathbf{\Lambda}mbda^2_n}$, we have \begin{equation*} \mathbf{\Lambda}mbda_{\max}(\mathbf{M}) \begin{cases} \approx \frac{1}{1-\mathbf{\Lambda}mbda^2_n},~&\text{if}~ |\mathbf{\Lambda}mbda_n|<1;\\ \approx \tau_f,~&\text{if}~ |\mathbf{\Lambda}mbda_n|=1;\\ \thicksim \mathbf{\Lambda}mbda_n^{2\tau_f-2}, ~&\text{if}~ |\mathbf{\Lambda}mbda_n|>1, \mathbf{e}nd{cases} \mathbf{e}nd{equation*} and \begin{equation*} \mathbf{\Lambda}mbda_{\min}(\mathbf{M}) \begin{cases} \approx \frac{1}{1-\mathbf{\Lambda}mbda^2_1},~&\text{if}~ |\mathbf{\Lambda}mbda_1|<1;\\ \approx \tau_f,~&\text{if}~ |\mathbf{\Lambda}mbda_1|=1;\\ \thicksim \mathbf{\Lambda}mbda_1^{2\tau_f-2}, ~&\text{if}~ |\mathbf{\Lambda}mbda_1|>1. 
\mathbf{e}nd{cases} \mathbf{e}nd{equation*} \subsubsection{$1$ driver node}\mathbf{\Lambda}bel{1drivecom} In the case of $1$ driver node, we assume the sole node $h$ as the driver node. Then, we have the specific form of $\mathbf{M}$ as $\mathbf{M}(i, j)=p_{hi}p_{hj}\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j}$. For the specific form of $\mathbf{M}^2$, we get $$ (\mathbf{M}^2)_{i, j}=\sum^n_{k=1}p_{hi}p_{hj}p^2_{hk}\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}~\frac{1-(\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j}. $$ According to the definitions of $\overline{\alpha}$ and $\overline{\beta}$, we have \begin{equation*}\mathbf{\Lambda}bel{alphaov} \overline{\alpha}=\sum_{i=1}^n\sum^n_{k=1}p_{hi}^2p^2_{hk}\left(\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}\right)^2 \mathbf{e}nd{equation*} and \begin{equation*}\mathbf{\Lambda}bel{betaov} \overline{\beta}=\sum^n_{j=1}\sum^n_{i=1}\left(\sum^n_{k=1}p_{hi}p_{hj}p^2_{hk}\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1- \mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}~\frac{1-(\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j}\right)^2. 
\mathbf{e}nd{equation*} More specifically, when the maximum eigenvalue of $\mathbf{A}$ satisfies $|\mathbf{\Lambda}mbda_n|< 1$, with the approximation of $1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f}\approx 1$ for $i, j=1, 2, \dots, n$, we have \begin{equation}\mathbf{\Lambda}bel{alphaup1} \overline{\alpha}\approx\sum_{i=1}^n\sum^n_{k=1}p_{hi}^2p^2_{hk}\left(\frac{1}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}\right)^2 \mathbf{e}nd{equation} and \begin{equation}\mathbf{\Lambda}bel{betaup1} \overline{\beta}\approx\sum^n_{j=1}\sum^n_{i=1}\left(\sum^n_{k=1}p_{hi}p_{hj}p^2_{hk}\frac{1}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}~\frac{1}{1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j}\right)^2. \mathbf{e}nd{equation} When the maximum eigenvalue of $\mathbf{A}$ satisfies $|\mathbf{\Lambda}mbda_n| = 1$, with the approximation of $\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)}\approx \tau_f$ for $i, j=1, 2, \dots, n$, we have \begin{equation*}\mathbf{\Lambda}bel{alphaup2} \overline{\alpha}=\sum_{i=1}^n\sum^n_{k=1}p_{hi}^2p^2_{hk}\left(\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}\right)^2\thicksim {\tau_f}^2 \mathbf{e}nd{equation*} and \begin{equation*}\mathbf{\Lambda}bel{betaup2} \overline{\beta}=\sum^n_{j=1}\sum^n_{i=1}\left(\sum^n_{k=1}p_{hi}p_{hj}p^2_{hk}\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1- \mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}~\frac{1-(\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j}\right)^2\thicksim \tau_f^4. 
\mathbf{e}nd{equation*} When the maximum eigenvalue of $\mathbf{A}$ satisfies $|\mathbf{\Lambda}mbda_n| > 1$, we have \begin{equation*}\mathbf{\Lambda}bel{alphaup3} \overline{\alpha}=\sum_{i=1}^n\sum^n_{k=1}p_{hi}^2p^2_{hk}\left(\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}\right)^2\thicksim \mathbf{\Lambda}mbda_n^{4\tau_f-4} \mathbf{e}nd{equation*} and \begin{equation*}\mathbf{\Lambda}bel{betaup3} \overline{\beta}=\sum^n_{j=1}\sum^n_{i=1}\left(\sum^n_{k=1}p_{hi}p_{hj}p^2_{hk}\frac{1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^{\tau_f}}{1- \mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k}~\frac{1-(\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j)^{\tau_f}}{1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_j}\right)^2\thicksim \mathbf{\Lambda}mbda_n^{8\tau_f-8}. \mathbf{e}nd{equation*} In order to get values of $\underline{\alpha}$ and $\underline{\beta}$, the key precondition relies on $\mathbf{M}^{-1}$. When all eigenvalues of $\mathbf{A}$ satisfy $|\mathbf{\Lambda}mbda_i|<1$, we have $\mathbf{M}(i, j)=\frac{p_{hi}p_{hj}}{1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j}$ with the approximation of $1-(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{\tau_f}\approx 1$. Furthermore, the corresponding inverse matrix $\mathbf{M}^{-1}$ is \begin{equation}\notag \mathbf{M}^{-1}(i, j)= \begin{cases} \frac{(1-\mathbf{\Lambda}mbda_i^2)\mathbf{P}rod\limits_{k\neq i}(1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^2}{\mathbf{P}rod\limits_{k\neq i}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)^2p^2_{hi}}, &i=j;\\ \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{k< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_l)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{k\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_k)p_{hi}p_{hj}}, &i\neq j. 
\mathbf{e}nd{cases} \mathbf{e}nd{equation} Hence, in this case, matrix $\mathbf{M}^2$ has the following form \begin{equation}\notag \begin{split} (\mathbf{M}^2)_{i, j}=&\sum^n \limits_{k=1, k\neq i, j}\mathbf{M}(i, k)\mathbf{M}(k, j)+\mathbf{M}(i, i)\mathbf{M}(i, j)+\mathbf{M}(i, j)\mathbf{M}(j, j)\\ =&\sum^n_{k=1, k\neq i, j}\frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)(1-\mathbf{\Lambda}mbda_k^2)^2\mathbf{P}rod\limits_{r< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_l)^2}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)^2(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_k)^2\mathbf{P}rod\limits_{r\neq i,k}\mathbf{P}rod \limits_{l\neq k, j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_k-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_k-\mathbf{\Lambda}mbda_l)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_l)p_{hi}p_{hj}p^2_{hk}}\\ &+\frac{(1-\mathbf{\Lambda}mbda_i^2)\mathbf{P}rod\limits_{r\neq i}(1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_r)^2}{\mathbf{P}rod\limits_{r\neq i}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)^2p^2_{hi}} \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_l)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{r\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)p_{hi}p_{hj}}\\ &+\frac{(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r\neq j}(1-\mathbf{\Lambda}mbda_j\mathbf{\Lambda}mbda_r)^2}{\mathbf{P}rod\limits_{r\neq j}(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)^2p^2_{hj}} \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_l)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{r\neq 
i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)p_{hi}p_{hj}}. \mathbf{e}nd{split} \mathbf{e}nd{equation} From that, we can derive $\underline{\alpha}$ and $\underline{\beta}$ as \begin{equation*}\mathbf{\Lambda}bel{alphalow1} \underline{\alpha}=\sum^n_{i=1}\left(\frac{(1-\mathbf{\Lambda}mbda_i^2)\mathbf{P}rod\limits_{k\neq i}(1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^2}{\mathbf{P}rod\limits_{k\neq i}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)^2p^2_{hi}}\right)^2+ \sum^n_{j=1}\sum^n_{i=1, i\neq j}\left(\frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{k< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_l)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{k\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_k)p_{hi}p_{hj}}\right)^2 \mathbf{e}nd{equation*} and \begin{align*} \underline{\beta}=&\sum^n_{j=1}\sum^n_{i=1}\left(\sum^n_{k=1, k\neq i, j}\frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)(1-\mathbf{\Lambda}mbda_k^2)^2\mathbf{P}rod\limits_{r< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_l)^2}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)^2(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_k)^2\mathbf{P}rod \limits_{r\neq i,k; l\neq k, j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_k-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_k-\mathbf{\Lambda}mbda_l)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_l)p_{hi}p_{hj}p^2_{hk}}\right.\notag\\ &\left.+\frac{(1-\mathbf{\Lambda}mbda_i^2)\mathbf{P}rod\limits_{r\neq i}(1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_r)^2}{\mathbf{P}rod\limits_{r\neq i}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)^2p^2_{hi}} \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r< 
l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_l)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{r\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)p_{hi}p_{hj}}\right.\notag\\ &\left.+\frac{(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r\neq j}(1-\mathbf{\Lambda}mbda_j\mathbf{\Lambda}mbda_r)^2}{\mathbf{P}rod\limits_{r\neq j}(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)^2p^2_{hj}} \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r< l}\mathbf{P}rod\limits_{l=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_l)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{r\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)p_{hi}p_{hj}}\right)^2.\mathbf{\Lambda}bel{betalow1} \mathbf{e}nd{align*} When $|\mathbf{\Lambda}mbda_1|<1$ and other eigenvalues satisfy $|\mathbf{\Lambda}mbda_1|\leq\cdots\leq|\mathbf{\Lambda}mbda_l|<1$, $|\mathbf{\Lambda}mbda_{l+1}|=\cdots=|\mathbf{\Lambda}mbda_{l+r}|=1$, $1<|\mathbf{\Lambda}mbda_{l+r+1}|\leq\cdots\leq|\mathbf{\Lambda}mbda_n|$, the corresponding $\mathbf{M}$ is \begin{equation*}\mathbf{\Lambda}bel{MTP} \mathbf{M}= \begin{bmatrix} \mathbf{M}_{1}&\mathbf{M}_{2}&\mathbf{M}_{3}\\ \mathbf{M}^{\text{T}}_{2}&\mathbf{M}_{4}&\mathbf{M}_{5}\\ \mathbf{M}^{\text{T}}_{3}&\mathbf{M}^{\text{T}}_{5}&\mathbf{M}_{6} \mathbf{e}nd{bmatrix} \mathbf{e}nd{equation*} where $$ \mathbf{M}_{1}= \begin{bmatrix} \frac{p^2_{h1}}{1-\mathbf{\Lambda}mbda_1^2}&\cdots&\frac{p_{h1}p_{hl}}{1-\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_l}\\ \vdots&\ddots&\vdots\\ \frac{p_{h1}p_{hl}}{1-\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_l}&\cdots&\frac{p^2_{hl}}{1-\mathbf{\Lambda}mbda_l^2} \mathbf{e}nd{bmatrix},\quad \mathbf{M}_{2}= \begin{bmatrix} \frac{p_{h1}p_{l+1} 
(1-(\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{l+1})^{\tau_f})}{1-\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{l+1}}&\cdots&\frac{p_{h1}p_{l+r} (1-(\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{l+r})^{\tau_f})}{1-\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{l+r}}\\ \vdots&\ddots&\vdots\\ \frac{p_{hl}p_{l+1} (1-(\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{l+1})^{\tau_f})}{1-\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{l+1}}&\cdots&\frac{p_{hl}p_{l+r} (1-(\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{l+r})^{\tau_f})}{1-\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{l+r}} \mathbf{e}nd{bmatrix}, $$ $$ \mathbf{M}_{3}= \begin{bmatrix} \frac{p_{h1}p_{l+r+1} (1-(\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{l+r+1})^{\tau_f})}{1-\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{l+r+1}}&\cdots&\frac{p_{h1}p_{n} (1-(\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{n})^{\tau_f})}{1-\mathbf{\Lambda}mbda_1\mathbf{\Lambda}mbda_{n}}\\ \vdots&\ddots&\vdots\\ \frac{p_{hl}p_{l+r+1} (1-(\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{l+r+1})^{\tau_f})}{1-\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{l+r+1}}&\cdots&\frac{p_{hl}p_{n} (1-(\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{n})^{\tau_f})}{1-\mathbf{\Lambda}mbda_l\mathbf{\Lambda}mbda_{n}} \mathbf{e}nd{bmatrix},$$$$ \mathbf{M}_{4}= \begin{bmatrix} p_{h\,l+1}^2{\tau_f}&\cdots&p_{h\,l+1}p_{h\,l+r}{\tau_f}\\ \vdots&\ddots&\vdots\\ p_{h\,l+1}p_{h\,l+r}{\tau_f}&\cdots &p_{h\,l+r}^2{\tau_f} \mathbf{e}nd{bmatrix}, $$ $$ \mathbf{M}_{5}= \begin{bmatrix} \frac{p_{h\,l+1}p_{h\,l+r+1} (1-(\mathbf{\Lambda}mbda_{h\, l+1}\mathbf{\Lambda}mbda_{l+r+1})^{\tau_f})}{1-\mathbf{\Lambda}mbda_{l+1}\mathbf{\Lambda}mbda_{l+r+1}}&\cdots&\frac{p_{h\, l+1}p_{h\,n} (1-(\mathbf{\Lambda}mbda_{l+1}\mathbf{\Lambda}mbda_{n})^{\tau_f})}{1-\mathbf{\Lambda}mbda_{l+1}\mathbf{\Lambda}mbda_{n}}\\ \vdots&\ddots&\vdots\\ \frac{p_{h\,l+r}p_{l+r+1} 
(1-(\mathbf{\Lambda}mbda_{l+r}\mathbf{\Lambda}mbda_{l+r+1})^{\tau_f})}{1-\mathbf{\Lambda}mbda_{l+r}\mathbf{\Lambda}mbda_{l+r+1}}&\cdots&\frac{p_{h\,l+r}p_{h\,n} (1-(\mathbf{\Lambda}mbda_{l+r}\mathbf{\Lambda}mbda_{n})^{\tau_f})}{1-\mathbf{\Lambda}mbda_{l+r}\mathbf{\Lambda}mbda_{n}} \mathbf{e}nd{bmatrix}, $$ $$ \mathbf{M}_{6}= \begin{bmatrix} \frac{p_{h\,l+r+1}^2 (1-\mathbf{\Lambda}mbda_{l+r+1}^{2{\tau_f}})}{1-\mathbf{\Lambda}mbda_{l+r+1}^2}&\cdots&\frac{p_{h\, l+r+1}p_{h\,n} (1-(\mathbf{\Lambda}mbda_{l+r+1}\mathbf{\Lambda}mbda_{n})^{\tau_f})}{1-\mathbf{\Lambda}mbda_{l+r+1}\mathbf{\Lambda}mbda_{n}}\\ \vdots&\ddots&\vdots\\ \frac{p_{h\, l+r+1}p_{h\,n} (1-(\mathbf{\Lambda}mbda_{l+r+1}\mathbf{\Lambda}mbda_{n})^{\tau_f})}{1-\mathbf{\Lambda}mbda_{l+r+1}\mathbf{\Lambda}mbda_{n}}&\cdots&\frac{p_{hn}^2 (1-\mathbf{\Lambda}mbda_{n}^{2{\tau_f}})}{1-\mathbf{\Lambda}mbda_{n}^2} \mathbf{e}nd{bmatrix}. $$ In this case, it is difficult to calculate $\mathbf{M}^{-1}$ directly. For that, we utilize $\mathbf{M}^{-1}=\frac{\mathbf{M}^*}{|\mathbf{M}|}$ with $\mathbf{M}^*$ being adjoint matrix of $\mathbf{M}$. Intuitively, it is easy to get $ |\mathbf{M}|\thicksim {\tau_f}^r~\mathbf{P}rod^n_{i={l+r+1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}. $ For adjoint matrix $\mathbf{M}^*$, we have $$ \mathbf{M}^*(i, j)\thicksim \begin{cases} {\tau_f}^r\cdot \mathbf{P}rod^n_{i={l+r+1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}, &i, j\leq l;\\ {\tau_f}^a\cdot \mathbf{P}rod^n_{i={l+r+1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}/k, &\text{otherwise}, \mathbf{e}nd{cases} $$ where $a=r-1 ~\text{or}~ k\neq 1 ~\text{with}~ k=(\mathbf{\Lambda}mbda_{l_1}\mathbf{\Lambda}mbda_{l_2})^{{\tau_f}}$, $l_1, l_2\geq l+r+1$. Therefore, we get $\mathbf{M}^{-1}$ as $$ \mathbf{M}^{-1}(i,j)\approx \begin{cases} c_{ij}\neq 0, &i, j\leq l;\\ 0, &\text{otherwise}. 
\mathbf{e}nd{cases} $$ Then in the process of calculating $\underline{\alpha}$ and $\underline{\beta}$, we find that elements of the first $l$ rows and $l$ columns of $\mathbf{M}^{-1}$ dominate, i.e., $c_{ij}, i, j\leq l$ are adequate. Thus, in order to get the specific forms of $\underline{\alpha}$ and $\underline{\beta}$, we employ the inverse matrix of $\mathbf{M}_{1}$ to replace the inverse matrix of $\mathbf{M}$. And then, the corresponding $\underline{\alpha}$ and $\underline{\beta}$ are \begin{equation}\mathbf{\Lambda}bel{alphalow1} \underline{\alpha}=\sum^l_{i=1}\left(\frac{(1-\mathbf{\Lambda}mbda_i^2)\mathbf{P}rod\limits_{k\neq i}(1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_k)^2}{\mathbf{P}rod\limits_{k\neq i}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)^2p^2_{hi}}\right)^2+ \sum^l_{j=1}\sum^l_{i=1, i\neq j}\left(\frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{k< d}\mathbf{P}rod\limits_{d=2}^{n}(1-\mathbf{\Lambda}mbda_k\mathbf{\Lambda}mbda_d)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{k\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_k)p_{hi}p_{hj}}\right)^2 \mathbf{e}nd{equation} and \begin{equation}\mathbf{\Lambda}bel{betalow1} \begin{split} \underline{\beta}=&\sum^l_{j=1}\sum^l_{i=1}\left(\sum^l_{k=1, k\neq i, j}\frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)(1-\mathbf{\Lambda}mbda_k^2)^2\mathbf{P}rod\limits_{r< d}\mathbf{P}rod\limits_{d=2}^{l}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_d)^2}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_k)^2(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_k)^2\mathbf{P}rod \limits_{r\neq i,k; d\neq k, j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_k-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_k-\mathbf{\Lambda}mbda_d)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_d)p_{hi}p_{hj}p^2_{hk}}\right.\\ &+\frac{(1-\mathbf{\Lambda}mbda_i^2)\mathbf{P}rod\limits_{r\neq 
i}(1-\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_r)^2}{\mathbf{P}rod\limits_{r\neq i}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)^2p^2_{hi}} \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r< d}\mathbf{P}rod\limits_{d=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_d)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{r\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)p_{hi}p_{hj}}\\ &\left.+\frac{(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r\neq j}(1-\mathbf{\Lambda}mbda_j\mathbf{\Lambda}mbda_r)^2}{\mathbf{P}rod\limits_{r\neq j}(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)^2p^2_{hj}} \frac{(1-\mathbf{\Lambda}mbda_i^2)(1-\mathbf{\Lambda}mbda_j^2)\mathbf{P}rod\limits_{r< d}\mathbf{P}rod\limits_{d=2}^{n}(1-\mathbf{\Lambda}mbda_r\mathbf{\Lambda}mbda_d)}{(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_j)^2\mathbf{P}rod\limits_{r\neq i,j}(\mathbf{\Lambda}mbda_i-\mathbf{\Lambda}mbda_r)(\mathbf{\Lambda}mbda_j-\mathbf{\Lambda}mbda_r)p_{hi}p_{hj}}\right)^2. 
\mathbf{e}nd{split} \mathbf{e}nd{equation} When $|\mathbf{\Lambda}mbda_1|=1$, i.e., $ \mathbf{M}= \begin{bmatrix} \mathbf{M}_{4}&\mathbf{M}_{5}\\ \mathbf{M}^{\text{T}}_{5}&\mathbf{M}_{6} \mathbf{e}nd{bmatrix} $ with $l=0$ in (\ref{MTP}), for $\mathbf{M}^{-1}=\frac{\mathbf{M}^*}{|\mathbf{M}|}$, we have $ |\mathbf{M}|\thicksim {\tau_f}^r\cdot \mathbf{P}rod^n_{i={r+1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}} $ and $$ \mathbf{M}^*(i, j)\thicksim \begin{cases} {\tau_f}^{r-1}\cdot \mathbf{P}rod^n_{i={r+1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}, &i, j\leq r;\\ {\tau_f}^{a}\cdot \mathbf{P}rod^n_{i={r+1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}/k, &\text{otherwise}, \mathbf{e}nd{cases} $$ $\text{where}~ a=r-1 ~\text{or}~a=r~\&~ k=(\mathbf{\Lambda}mbda_{l_1}\mathbf{\Lambda}mbda_{l_2})^{{\tau_f}},$ which lead to $$ \mathbf{M}^{-1}(i,j)\thicksim \begin{cases} {\tau_f}^{-1}, &i, j\leq r;\\ {\tau_f}^{b}(\mathbf{\Lambda}mbda_{l_1}\mathbf{\Lambda}mbda_{l_2})^{-{\tau_f}},&\text{otherwise} (\text{with}~b=0~\text{or}-1). \mathbf{e}nd{cases} $$ Due to that $ \lim_{{\tau_f}\rightarrow \infty}\frac{{\tau_f}^{b}(\mathbf{\Lambda}mbda_{l_1}\mathbf{\Lambda}mbda_{l_2})^{-{\tau_f}}}{{\tau_f}^{-1}}=0 $, we have $$\underline{\alpha}\thicksim {\tau_f}^{-2}\quad \text{and} \quad \underline{\beta}\thicksim {\tau_f}^{-4}.$$ When $|\mathbf{\Lambda}mbda_1|>1$, i.e., $ \mathbf{M}=\mathbf{M}_{6} $ with $l=r=0$ in (\ref{MTP}), for $\mathbf{M}^{-1}=\frac{\mathbf{M}^*}{|\mathbf{M}|}$, we have $ |\mathbf{M}|\thicksim \mathbf{P}rod^n_{i={1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}=|\mathbf{A}|^{2{\tau_f}} $ and $ \mathbf{M}^*(i,j)\thicksim\frac{\mathbf{P}rod^n_{i={1}}\mathbf{\Lambda}mbda_i^{2{\tau_f}}}{\mathbf{\Lambda}mbda_i^{\tau_f}\mathbf{\Lambda}mbda^{\tau_f}_j}, $ which lead to $\mathbf{M}^{-1}(i, j)\thicksim(\mathbf{\Lambda}mbda_i\mathbf{\Lambda}mbda_j)^{-{\tau_f}}.$ Moreover, in calculating $\underline{\alpha}$ and $\underline{\beta}$, $\mathbf{\Lambda}mbda_1^{-2{\tau_f}}$ dominates. 
Therefore, we have $$\underline{\alpha}\thicksim \lambda_1^{-4{\tau_f}}\quad\text{and} \quad \underline{\beta}\thicksim \lambda_1^{-8{\tau_f}}.$$

\subsubsection{$m$ driver nodes}
In the case of $m$ driver nodes, the indexes of the driver nodes are denoted by $d_1, d_2, \cdots, d_m$. Then the corresponding input matrix is $\mathbf{B}=[e_{d_1}, e_{d_2}, \cdots, e_{d_m}]\in \mathbb{R}^{n\times m}$, with $e_i$ being the $i$th column of the identity matrix. Accordingly, we have $\mathbf{M}({i, j})=q_{ij}\frac{1-(\lambda_i\lambda_j)^{\tau_f}}{1-\lambda_i\lambda_j}$ with $q_{ij}=\sum^m_{k=1}p_{d_k i}p_{d_k j}$. From the analysis of the $1$ driver node case, the specific form of $q_{ij}$ has no essential effect on the main analysis. For example, when $|\lambda_n|<1$, we have $\mathbf{M}({i, j})\approx \sum^m_{k=1}p_{d_k i}p_{d_k j}\frac{1}{1-\lambda_i\lambda_j}$ and $\mathbf{M}^2(i, j)=\sum_{l=1}^n\left(\sum^m_{k=1}p_{d_k i}p_{d_k l}\frac{1}{1-\lambda_i\lambda_l}\right)\left(\sum^m_{k=1}p_{d_k l}p_{d_k j}\frac{1}{1-\lambda_l\lambda_j}\right)$. Furthermore, we have the corresponding $\overline{\alpha}$ and $\overline{\beta}$: \begin{equation}\label{alphaup4} \overline{\alpha}=\sum_{j=1}^n\sum_{i=1}^n\left(\frac{\sum^m_{k=1}p_{d_k i}p_{d_k j}}{1-\lambda_i\lambda_j}\right)^2 \end{equation} and \begin{equation}\label{betaup4} \overline{\beta}=\sum_{j=1}^n\sum_{i=1}^n\left( \sum_{l=1}^n\left(\sum^m_{k=1}p_{d_k i}p_{d_k l}\frac{1}{1-\lambda_i\lambda_l}\right)\left(\sum^m_{k=1}p_{d_k l}p_{d_k j}\frac{1}{1-\lambda_l\lambda_j}\right) \right)^2. \end{equation} Analogously, the other cases can be derived in the same way and are thus omitted here.
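The key identity behind this analysis, $\mathbf{W}=\mathbf{P}\mathbf{M}\mathbf{P}^{\text{T}}$ with the entrywise formula (\ref{Mij}), can be checked numerically. The sketch below uses an invented symmetric $\mathbf{A}$ (its spectrum chosen so that no product $\lambda_i\lambda_j$ equals $1$) and two arbitrary driver nodes; it confirms that the entrywise construction of $\mathbf{M}$ reproduces the directly summed Gramian and its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
n, tf = 5, 10
lam = np.array([-1.5, -0.4, 0.1, 0.6, 1.2])   # illustrative spectrum, no lam_i*lam_j = 1
Qo, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qo @ np.diag(lam) @ Qo.T                  # symmetric A = A^T with that spectrum

drivers = [0, 2]                              # illustrative driver-node indices (m = 2)
B = np.eye(n)[:, drivers]

lam_e, P = np.linalg.eigh(A)                  # A = P Lam P^T
Q = P.T @ B @ B.T @ P                         # Q = (q_ij) = P^T B B^T P

# Entrywise formula M(i, j) = q_ij (1 - (lam_i lam_j)^tf) / (1 - lam_i lam_j)
L = np.outer(lam_e, lam_e)
M = Q * (1 - L**tf) / (1 - L)

# Direct Gramian W = sum_t A^t B B^T A^t (A symmetric)
W = sum(np.linalg.matrix_power(A, t) @ B @ B.T @ np.linalg.matrix_power(A, t)
        for t in range(tf))

assert np.allclose(P @ M @ P.T, W)            # W = P M P^T
assert np.allclose(np.linalg.eigvalsh(M), np.linalg.eigvalsh(W))
```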
In summary, for the different cases of $\mathbf{A}$, the parameters $\overline{\alpha}, \overline{\beta}, \underline{\alpha}$ and $\underline{\beta}$ are obtained, and the corresponding bounds are acquired accordingly, as shown in Table~\ref{table2}. \begin{table}[!htp] \centering\caption{Lower and upper bounds of the minimum energy for a fully controllable network.} \fontsize{8}{15}\selectfont \begin{tabular}{cc|c|c|c} \toprule[2pt] \multicolumn{2}{c|}{Number of driver nodes}&$1$&$m~(m<n)$&$n$\\ \toprule[1pt] \multirow{3}{*}{Lower bound $\underline{E}$}&$|\lambda_{n}|<1$&Eq.~(\ref{lowE}) with (\ref{alphaup1}), (\ref{betaup1})&Eq.~(\ref{lowE}) with (\ref{alphaup4}), (\ref{betaup4})&$1-\lambda_n^2$\\ &$|\lambda_{n}|=1$&$ \sim \tau_f^{-1}$& $\sim \tau_f^{-1}$& $\sim \tau_f^{-1}$\\ &$|\lambda_{n}|>1$&$\sim \lambda_{n}^{2-2\tau_f}$&$\sim \lambda_{n}^{2-2\tau_f}$&$\sim \lambda_{n}^{2-2\tau_f}$\\ \toprule[1pt] \multirow{3}{*}{Upper bound $\overline{E}$}&$|\lambda_{1}|<1$&Eq.~(\ref{upE}) with (\ref{alphalow1}), (\ref{betalow1})&constant&$1-\lambda_1^2$\\ &$|\lambda_1|=1$&$ \sim \tau_f^{-1}$& $\sim \tau_f^{-1}$& $\sim \tau_f^{-1}$\\ &$|\lambda_1|>1$&$\sim \lambda_{1}^{-2\tau_f}$&$\sim \lambda_{1}^{-2\tau_f}$&$\sim \lambda_{1}^{-2\tau_f}$\\ \toprule[2pt] \end{tabular} \label{table2} \end{table}

\subsection{Energy scaling for target control}\label{energyuncom}
The essential procedure is to obtain the minimum and maximum eigenvalues of $\mathbf{W}_{\text{C}}$ in Eq.~(\ref{WCT}). Furthermore, we employ $\lambda_{\max}(\mathbf{W})\approx f(\overline{\alpha}, \overline{\beta})$ and $\lambda_{\min}(\mathbf{W})\approx \frac{1}{f(\underline{\alpha}, \underline{\beta})}$ to approximate the corresponding eigenvalues.
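Before moving to target control, the fully controllable bounds can be grounded numerically. The bounds in the table rest on two facts: for $n$ drivers with $|\lambda_n|<1$ the eigenvalues of $\mathbf{W}$ approach $1/(1-\lambda_i^2)$, and the energy $\alpha^{\text{T}}\mathbf{W}^{-1}\alpha$ is pinched between $\|\alpha\|^2/\lambda_{\max}(\mathbf{W})$ and $\|\alpha\|^2/\lambda_{\min}(\mathbf{W})$. A sketch with an invented symmetric $\mathbf{A}$ of chosen stable spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)
n, tf = 4, 40
lam = np.array([-0.9, -0.3, 0.2, 0.7])        # illustrative spectrum with |lam_i| < 1
Qo, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qo @ np.diag(lam) @ Qo.T                  # symmetric A with that spectrum

# n driver nodes: B = I_n, so W = sum_t A^{2t}
W = sum(np.linalg.matrix_power(A, 2 * t) for t in range(tf))
ev = np.linalg.eigvalsh(W)                    # ascending eigenvalues of W

# For large tf the eigenvalues of W approach 1/(1 - lam_i^2)
assert np.allclose(ev, np.sort(1 / (1 - lam**2)), rtol=1e-3)

# Rayleigh-quotient bounds tying the spectrum of W to the energy alpha^T W^{-1} alpha
alpha = rng.standard_normal(n)                # alpha = x(tau_f) - A^{tau_f} x0
E2 = alpha @ np.linalg.solve(W, alpha)        # equals 2 E_min
assert alpha @ alpha / ev[-1] - 1e-12 <= E2 <= alpha @ alpha / ev[0] + 1e-12
```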
Note that subsystem (\ref{eq2}) is controllable. Analogously to the fully controllable case, we perform the following analysis. For system (\ref{eq2}), we have $\mathbf{A}_{\text{c}}=\mathbf{P}_{\text{c}}\mathbf{\Lambda}_{\text{c}}\mathbf{P}_{\text{c}}^{\text{T}}$ and $\mathbf{B}_{\text{c}}=\mathbf{R}_1\mathbf{B}_r$, with $\mathbf{\Lambda}_{\text{c}}=\text{diag}(\lambda_{{\text{c}}1}, \lambda_{{\text{c}}2}, \dots, \lambda_{{\text{c}}r})$ and $\mathbf{B}_r$ being the first $r$ rows of $\mathbf{B}$. It is obvious that $\lambda_{{\text{c}}i}\in \{\lambda_1, \lambda_2, \dots, \lambda_n\}$. Moreover, the corresponding $\mathbf{W}_{\text{C}}$ is $\mathbf{P}_{\text{c}}\mathbf{M}_{\text{C}}\mathbf{P}_{\text{c}}^{\text{T}}$, where $\mathbf{M}_{\text{C}}=\sum_{{\tau}=0}^{{\tau_f}-1}\mathbf{\Lambda}_{\text{c}}^{\tau}\mathbf{Q}_{\text{c}}\mathbf{\Lambda}_{\text{c}}^{\tau}$ with $\mathbf{Q}_{\text{c}}=\mathbf{P}_{\text{c}}^{\text{T}}\mathbf{B}_{\text{c}}\mathbf{B}_{\text{c}}^{\text{T}}\mathbf{P}_{\text{c}}$. Denoting $\mathbf{P}_R=\mathbf{R}_1^{\text{T}}\mathbf{P}_{\text{c}}$, we have $\mathbf{Q}_{\text{c}}=\mathbf{P}_R^{\text{T}}\mathbf{B}_r\mathbf{B}_r^{\text{T}}\mathbf{P}_R$, with $\mathbf{Q}_{\text{c}}=(q_{ij}^C)_{r\times r}$ and $\mathbf{P}_R=(p_{ij}^R)_{r\times r}$.
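Since target control with $\mathbf{C}=[\mathbf{I}_r, \mathbf{0}]$ is output control, the optimal input $\mathbf{u}^*(\tau)=\mathbf{B}^{\text{T}}(\mathbf{A}^{\text{T}})^{\tau_f-\tau-1}\mathbf{C}^{\text{T}}(\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}})^{-1}\beta$ from Section~\ref{outcon} can be simulated directly; a sketch with invented dimensions and random matrices confirms that it reaches the output target and attains $E=\beta^{\text{T}}(\mathbf{C}\mathbf{W}\mathbf{C}^{\text{T}})^{-1}\beta$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r, tf = 5, 2, 3, 8
A = 0.4 * rng.standard_normal((n, n))              # illustrative dynamics
B = rng.standard_normal((n, m))
C = np.hstack([np.eye(r), np.zeros((r, n - r))])   # C = [I_r, 0]
x0 = rng.standard_normal(n)
yf = rng.standard_normal(r)                        # desired output y(tau_f)

W = sum(np.linalg.matrix_power(A, tf - i - 1) @ B @ B.T
        @ np.linalg.matrix_power(A.T, tf - i - 1) for i in range(tf))
G = C @ W @ C.T                                    # C W C^T, invertible under the rank condition
beta = yf - C @ np.linalg.matrix_power(A, tf) @ x0

# Apply u*(t) = B^T (A^T)^{tf-t-1} C^T (C W C^T)^{-1} beta and simulate
x, E = x0.copy(), 0.0
for t in range(tf):
    u = B.T @ np.linalg.matrix_power(A.T, tf - t - 1) @ C.T @ np.linalg.solve(G, beta)
    E += u @ u
    x = A @ x + B @ u

assert np.allclose(C @ x, yf)                            # output target reached
assert np.isclose(E, beta @ np.linalg.solve(G, beta))    # E = beta^T (C W C^T)^{-1} beta
```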
In the case of $1$ driver node, the elements of $\mathbf{M}_{\text{C}}$ are $\mathbf{M}_{\text{C}}(i, j)=q_{ij}^C\frac{1-(\lambda_{\text{c}i}\lambda_{\text{c}j})^{\tau_f}}{1-\lambda_{\text{c}i}\lambda_{\text{c}j}}$ with $q_{ij}^C=p_{hi}^Rp_{hj}^R$, $i, j=1, 2, \dots, r$. The specific forms of $\mathbf{W}_{\text{C}}$ and $\mathbf{W}_{\text{C}}^2$ are
\begin{equation*}
\mathbf{W}_{\text{C}}(i, j)=\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{jl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}
\end{equation*}
and
\begin{equation*}
\mathbf{W}_{\text{C}}^2(i, j)=\sum_{s=1}^r\left(\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{sl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right) \left( \sum_{k=1}^r\sum_{l=1}^r p_{sk}^Rp_{jl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}} \right).
\end{equation*}
According to the definitions of $\overline{\alpha}$ and $\overline{\beta}$, we have
\begin{equation*}
\overline{\alpha}=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{jl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right)^2
\end{equation*}
and
\begin{equation*}
\begin{split}
\overline{\beta} &=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{s=1}^r\left(\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{sl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right) \left( \sum_{k=1}^r\sum_{l=1}^r p_{sk}^Rp_{jl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}} \right) \right)^2.
\end{split}
\end{equation*}
Similarly to Section~\ref{1drivecom}, based on the approximation $1-(\lambda_{\text{c}i}\lambda_{\text{c}j})^{\tau_f}\approx 1$ for $|\lambda_{\text{c}r}|<1$, we have
\begin{equation}\label{Aeq11}
\overline{\alpha}=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{jl}^Rq_{kl}^C\frac{1}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right)^2
\end{equation}
and
\begin{equation}\label{Aeq12}
\begin{split}
\overline{\beta} &=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{s=1}^r\left(\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{sl}^Rq_{kl}^C\frac{1}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right) \left( \sum_{k=1}^r\sum_{l=1}^r p_{sk}^Rp_{jl}^Rq_{kl}^C\frac{1}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}} \right) \right)^2.
\end{split}
\end{equation}
In the case of $|\lambda_{\text{c}r}|=1$, by utilizing $\frac{1-(\lambda_{\text{c}i}\lambda_{\text{c}j})^{\tau_f}}{1-\lambda_{\text{c}i}\lambda_{\text{c}j}}\approx \tau_f$, we have
\begin{equation*}
\overline{\alpha}\approx\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{jl}^Rq_{kl}^C\tau_f\right)^2\sim \tau_f^2
\end{equation*}
and
\begin{equation*}
\overline{\beta} =\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{s=1}^r\left(\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{sl}^Rq_{kl}^C\tau_f\right) \left( \sum_{k=1}^r\sum_{l=1}^r p_{sk}^Rp_{jl}^Rq_{kl}^C\tau_f \right) \right)^2\sim \tau_f^4.
\end{equation*}
In the case of $|\lambda_{\text{c}r}|>1$, we have
\begin{equation*}
\overline{\alpha}=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{jl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right)^2\sim \lambda_{\text{c}r}^{4\tau_f}
\end{equation*}
and
\begin{equation*}
\begin{split}
\overline{\beta} &=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{s=1}^r\left(\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{sl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right) \left( \sum_{k=1}^r\sum_{l=1}^r p_{sk}^Rp_{jl}^Rq_{kl}^C\frac{1-(\lambda_{\text{c}k}\lambda_{\text{c}l})^{\tau_f}}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}} \right) \right)^2\\
&\sim \lambda_{\text{c}r}^{8\tau_f}.
\end{split}
\end{equation*}
To calculate $\underline{\alpha}$ and $\underline{\beta}$, the pivotal step is to obtain $\mathbf{W}_{\text{C}}^{-1}$. It is clear that $\mathbf{W}_{\text{C}}^{-1}=\mathbf{R}_1^{-1}\mathcal{W}^{-1}(\mathbf{R}_{1}^{\text{T}})^{-1}$ with $\mathcal{W}^{-1}=\mathbf{P}_{\text{c}}\mathbf{M}_{\text{C}}^{-1}\mathbf{P}_{\text{c}}^{\text{T}}$. Moreover, when $|\lambda_{\text{c}i}|<1$, $\mathbf{M}_{\text{C}}(i, j)\approx \frac{p_{hi}^Rp_{hj}^R}{1-\lambda_{\text{c}i}\lambda_{\text{c}j}}$. The corresponding elements of $\mathbf{M}_{\text{C}}^{-1}$ are
\begin{equation}\notag
\mathbf{M}_{\text{C}}^{-1}(i, j)=
\begin{cases}
\dfrac{(1-\lambda_{\text{c}i}^2)\prod\limits_{k\neq i}(1-\lambda_{\text{c}i}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq i}(\lambda_{\text{c}i}-\lambda_{\text{c}k})^2(p^{R}_{hi})^2}, &i=j;\\
\dfrac{(1-\lambda_{\text{c}i}^2)(1-\lambda_{\text{c}j}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}i}-\lambda_{\text{c}j})^2\prod\limits_{k\neq i,j}(\lambda_{\text{c}i}-\lambda_{\text{c}k})(\lambda_{\text{c}j}-\lambda_{\text{c}k})p_{hi}^Rp_{hj}^R}, &i\neq j.
\end{cases}
\end{equation}
For simplicity, denoting $\mathbf{P}_r=(\mathbf{P}_R^{-1})^{\text{T}}=(p_{ij}^r)_{r\times r}$, we have $\mathbf{W}_{\text{C}}^{-1}=\mathbf{P}_r\mathbf{M}_{\text{C}}^{-1}\mathbf{P}_r^{\text{T}}$, whose elements are
\begin{equation*}
\begin{split}
\mathbf{W}_{\text{C}}^{-1}(i, j)&=\sum_{l=1}^r\sum_{k=1}^rp_{ik}^r\mathbf{M}_{\text{C}}^{-1}(k, l)p_{jl}^r\\
&=\sum_{l=1}^rp_{il}^r\mathbf{M}_{\text{C}}^{-1}(l, l)p_{jl}^r+\sum_{v=1}^r\sum_{b=1,b\neq v}^rp_{iv}^r\mathbf{M}_{\text{C}}^{-1}(v, b)p_{jb}^r\\
&=\sum_{l=1}^rp_{il}^rp_{jl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\\
&\quad +\sum_{v=1}^r\sum_{b=1,b\neq v}^rp_{iv}^rp_{jb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}.
\end{split}
\end{equation*}
Based on this, we can derive $\underline{\alpha}$ and $\underline{\beta}$ as
\begin{equation*}
\begin{split}
\underline{\alpha}=\sum_{i=1}^r\sum_{j=1}^r&\left(\sum_{l=1}^rp_{il}^rp_{jl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\right.\\
&\quad \left.+\sum_{v=1}^r\sum_{b=1,b\neq v}^rp_{iv}^rp_{jb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}\right)^2
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\underline{\beta}&=\sum_{i=1}^r\sum_{j=1}^r \left( \sum_{s=1}^r\left(\sum_{l=1}^rp_{il}^rp_{sl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\right.\right.\\
&\quad\left. +\sum_{v=1}^r\sum_{b=1,b\neq v}^rp_{iv}^rp_{sb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}\right)\\
&\left(\sum_{l=1}^rp_{sl}^rp_{jl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\right.\\
&\quad\left.\left. +\sum_{v=1}^r\sum_{b=1,b\neq v}^rp_{sv}^rp_{jb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}\right) \right)^2.
\end{split}
\end{equation*}
When $|\lambda_{\text{c}1}|<1$ and the other eigenvalues satisfy $|\lambda_{\text{c}1}|\leq\cdots\leq|\lambda_{\text{c}\mu}|<1$, $|\lambda_{\text{c},\mu+1}|=\cdots=|\lambda_{\text{c},\mu+k}|=1$, and $1<|\lambda_{\text{c},\mu+k+1}|\leq\cdots\leq|\lambda_{\text{c}r}|$, then, similarly to Section~\ref{1drivecom}, the elements in the first $\mu$ rows and the first $\mu$ columns dominate.
Therefore, for $\mathbf{W}_{\text{C}}^{-1}(i, j)=\sum_{l=1}^r\sum_{k=1}^rp_{ik}^r\mathbf{M}_{\text{C}}^{-1}(k, l)p_{jl}^r$, we employ $\sum_{l=1}^\mu\sum_{k=1}^\mu p_{ik}^r\mathbf{M}_{\text{C}}^{-1}(k, l)p_{jl}^r$ as an approximation. Accordingly, $\underline{\alpha}$ and $\underline{\beta}$ are
\begin{equation}\label{Aeq13}
\begin{split}
\underline{\alpha}=\sum_{i=1}^r\sum_{j=1}^r&\left(\sum_{l=1}^\mu p_{il}^rp_{jl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\right.\\
&\quad \left.+\sum_{v=1}^\mu\sum_{b=1,b\neq v}^\mu p_{iv}^rp_{jb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}\right)^2
\end{split}
\end{equation}
and
\begin{equation}\label{Aeq14}
\begin{split}
\underline{\beta}&=\sum_{i=1}^r\sum_{j=1}^r \left( \sum_{s=1}^r\left(\sum_{l=1}^\mu p_{il}^rp_{sl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\right.\right.\\
&\quad\left. +\sum_{v=1}^\mu\sum_{b=1,b\neq v}^\mu p_{iv}^rp_{sb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}\right)\\
&\left(\sum_{l=1}^\mu p_{sl}^rp_{jl}^r\frac{(1-\lambda_{\text{c}l}^2)\prod\limits_{k\neq l}(1-\lambda_{\text{c}l}\lambda_{\text{c}k})^2}{\prod\limits_{k\neq l}(\lambda_{\text{c}l}-\lambda_{\text{c}k})^2(p^{R}_{hl})^2}\right.\\
&\quad\left.\left. +\sum_{v=1}^\mu\sum_{b=1,b\neq v}^\mu p_{sv}^rp_{jb}^r\frac{(1-\lambda_{\text{c}v}^2)(1-\lambda_{\text{c}b}^2)\prod\limits_{l=2}^{r}\prod\limits_{k<l}(1-\lambda_{\text{c}k}\lambda_{\text{c}l})}{(\lambda_{\text{c}v}-\lambda_{\text{c}b})^2\prod\limits_{k\neq v,b}(\lambda_{\text{c}v}-\lambda_{\text{c}k})(\lambda_{\text{c}b}-\lambda_{\text{c}k})p_{hv}^Rp_{hb}^R}\right) \right)^2.
\end{split}
\end{equation}
When $|\lambda_{\text{c}1}|=1$, we have $\underline{\alpha}\sim \tau_f^{-2}$ and $\underline{\beta}\sim \tau_f^{-4}$, and when $|\lambda_{\text{c}1}|>1$, we have $\underline{\alpha}\sim \lambda_{\text{c}1}^{-4\tau_f}$ and $\underline{\beta}\sim \lambda_{\text{c}1}^{-8\tau_f}$. In the case of $m$ driver nodes with $m\leq r$, when $|\lambda_{\text{c}r}|<1$, compared with the case of $1$ driver node, only $q_{ij}^C$, $i, j=1, 2, \dots, r$, differs. Denoting the indices of the driver nodes by $d_1, d_2, \dots, d_m$, the corresponding input matrix is $\mathbf{B}_r=[\mathbf{e}_{d_1}, \mathbf{e}_{d_2}, \dots, \mathbf{e}_{d_m}]\in \mathbb{R}^{r\times m}$. Therefore, we have $\overline{\alpha}$ and $\overline{\beta}$ as
\begin{equation}\label{Aeq15}
\overline{\alpha}=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{jl}^Rq_{kl}^C\frac{1}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right)^2
\end{equation}
and
\begin{equation}\label{Aeq16}
\begin{split}
\overline{\beta}&=\sum_{i=1}^r\sum_{j=1}^r\left( \sum_{s=1}^r\left(\sum_{k=1}^r\sum_{l=1}^r p_{ik}^Rp_{sl}^Rq_{kl}^C\frac{1}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}}\right) \left( \sum_{k=1}^r\sum_{l=1}^r p_{sk}^Rp_{jl}^Rq_{kl}^C\frac{1}{1-\lambda_{\text{c}k}\lambda_{\text{c}l}} \right) \right)^2
\end{split}
\end{equation}
with $q_{ij}^C=\sum_{k=1}^m p_{d_k i}^Rp_{d_k j}^R$. The other cases are similar to Section~\ref{1drivecom} and are thus omitted here.
\end{document}
\begin{document}
\title{Optimal Control in Fluid Models of $n \times n$ Input-Queued Switches\\ under Linear Fluid-Flow Costs}
\author{Yingdong Lu} \email{[email protected]} \affiliation{IBM Research}
\author{Mark S. Squillante} \email{[email protected]} \affiliation{IBM Research}
\author{Tonghoon Suk} \email{[email protected]} \affiliation{IBM Research}
\begin{abstract}
Most of the early input-queued switch research focused on establishing throughput optimality of the max-weight scheduling policy, with some recent research showing that max-weight scheduling is asymptotically optimal with respect to total expected delay in the heavy-traffic regime. However, the question of delay-optimal scheduling in input-queued switches remains open in general, as does the question of delay-optimal scheduling under more general objective functions. To gain fundamental insights into these very difficult problems, we consider a fluid model of $n \times n$ input-queued switches with associated fluid-flow costs, and we derive an optimal scheduling control policy for an infinite-horizon discounted control problem with a general linear objective function of the fluid costs. Our optimal policy coincides with the $c\mu$-rule in certain parameter domains. More generally, due to the input-queued switch constraints, the optimal policy takes the form of the solution to a flow maximization problem, after we identify the Lagrange multipliers of some key constraints through carefully designed algorithms. Computational experiments demonstrate the benefits of our optimal scheduling policy over variants of max-weight scheduling within fluid models of input-queued switches.
\end{abstract}
\keywords{Optimal scheduling control, Linear cost functions, Fluid models, Input-queued switch networks, $c\mu$-policy.}
\settopmatter{printacmref=false}
\maketitle

\section{Introduction}
Input-queued switch architectures are widely used in modern computer and communication networks. The optimal scheduling control of these high-speed, low-latency switch networks is critical for our understanding of fundamental design and performance issues related to internet routers, cloud computing data centers, and high-performance computing. A large and rich literature exists on optimal scheduling in these computer and communication systems. This includes the extensive study of input-queued switches as an important mathematical model for a general class of optimal control problems of broad interest in both theory and practice.

Most of the previous research related to scheduling control in input-queued switches has focused on throughput optimality. In particular, the max-weight scheduling policy, first introduced in \cite{TasEph92} for wireless networks and subsequently in \cite{MCKEOWN96} specifically for input-queued switches, is well known to be throughput optimal. The question of delay-optimal scheduling control in such switch networks, however, is far less clear, with much more limited results. This is due in large part to the inherent difficulty of establishing delay (or, equivalently via Little's Law, queue-length) optimality for these types of stochastic systems in general. Hence, previous research on delay-optimal scheduling control in input-queued switches has focused on heavy-traffic and related asymptotic regimes; see, e.g., \cite{Stolyar_cone_SSC, ShaWis_11, kang2009diffusion,shah2012optimal,zhong2014scaling}.
Such previous research includes showing that the max-weight scheduling policy is asymptotically optimal in heavy traffic for an objective function of the summation of the squares of the queue lengths with the assumption of complete resource pooling~\cite{stolyar2004}. Max-weight scheduling was then shown to be optimal in heavy traffic for an objective function of the summation of the queue lengths under the assumption that all the ports are saturated~\cite{maguluri2016}. This was subsequently extended to the case of incompletely saturated ports under the same objective function~\cite{maguluri2016b} and then to the case of general linear objective functions~\cite{LuMaSq+18}. Nevertheless, beyond these and related recent results limited to the heavy-traffic regime, the question of delay-optimal scheduling control in input-queued switches remains open in general, as does the question of delay-optimal scheduling under more general objective functions. In this paper, we seek to gain fundamental insights on optimal delay-cost scheduling in these stochastic systems by studying a fluid model of general $n \times n$ input-queued switches where each fluid flow has an associated cost. The objective of the corresponding optimal control problem is to determine the scheduling policy that minimizes the discounted summation over an infinite horizon of general linear cost functions of the fluid levels associated with each queue. Related research has been conducted in the queueing network literature; see, e.g., \cite{10.2307/171604,AvramBertsimasRicard95,maglaras2000,Bauerle2000}. In particular, similar problems have been studied within the context of fluid models of multiclass queueing networks~\cite{AvramBertsimasRicard95,Bauerle2000}. These previous studies take a classical optimal control approach based on exploiting Pontryagin's Maximum Principle~\cite{PoBoGa+62}, which itself only provides necessary conditions for optimality, to identify optimal policies. 
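For intuition about this objective, consider a hypothetical single fluid queue (not a switch; all numbers below are illustrative) drained at a constant net rate $d=\mu-\lambda>0$, so that $q_t=\max(q_0-dt,0)$. The infinite-horizon discounted linear cost $J=\int_0^\infty e^{-\gamma t}\,c\,q_t\,dt$ then has a simple closed form, which the sketch below checks against numerical quadrature:

```python
import numpy as np

# Discounted linear fluid cost of a single queue drained at net rate d:
# q_t = max(q0 - d*t, 0), J = int_0^inf e^{-gamma t} c q_t dt.
c, q0, d, gamma = 2.0, 5.0, 1.0, 0.1       # illustrative parameters
T = q0 / d                                  # time at which the fluid empties

# Closed form from integrating c e^{-gamma t} (q0 - d t) over [0, T].
J_closed = (c * q0 * (1 - np.exp(-gamma * T)) / gamma
            - c * d * (1 - np.exp(-gamma * T) * (1 + gamma * T)) / gamma**2)

# Trapezoidal quadrature over [0, T]; the integrand vanishes afterwards.
t = np.linspace(0.0, T, 200001)
f = c * np.exp(-gamma * t) * (q0 - d * t)
J_numeric = float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(t)))

assert abs(J_closed - J_numeric) < 1e-6
```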
However, while this framework enables, with relative ease, the derivation of optimal policies for fluid models of basic queueing networks, the situation for input-queued switches is quite different and much more difficult. Specifically, the highly constrained structure of input-queued switch networks requires us to pay special attention to the feasibility of the optimal control problem. To address these issues, we implicitly move the capacity constraint into the objective and identify the appropriate Lagrange multiplier through carefully designed search algorithms. Then, at any fluid level, we establish that the optimal scheduling policy is obtained through a solution to a flow maximization problem, which is also shown to be throughput optimal. Our optimal policy coincides with the $c\mu$-rule in certain parameter domains. These theoretical results reflect the highly complex nature of input-queued switches, and are expected to be of interest beyond input-queued switch networks and, more broadly, for related classes of fluid models of stochastic networks with constraints. We observe important differences in the decisions made under our optimal scheduling control policy in comparison with those made under a cost-weighted variant of the max-weight scheduling policy and the $c\mu$-rule within the fluid model of general $n \times n$ input-queued switches. It is important to emphasize that our goal is to determine the optimal solution of the corresponding fluid control problem, which is at the core of the important scheduling-decision differences between our optimal policy and those of the other scheduling policies. Although we show that our flow maximization solution coincides with the $c\mu$-rule in some regions of the decision space, we also show that the $c\mu$-rule is not stable under certain arrival rates and thus cannot in general be the optimal scheduling policy.
In contrast to the max-weight scheduling policy which does not use any arrival rate information, we show that the optimal policy from our flow maximization solution for the $n \times n$ input-queued switch fluid control problem can depend in general on the arrival rates, which is consistent with known results established for the original (non-fluid limit) $2\times 2$ input-queued switch where the optimal policy takes into account the arrival processes in some regions of the decision space~\cite{LuMaSq+16}. The cost-weighted max-weight scheduling policy has been shown to exhibit optimal queue-length scaling in the heavy traffic regime~\cite{LuMaSq+18}, suggesting that the importance of arrival-process information on the queue-length scaling of the optimal scheduling control policy tends to diminish asymptotically as the traffic intensity increases. To further investigate these important differences, we conduct fluid-model computational experiments with our optimal scheduling policy, the max-weight scheduling policy, and the $c\mu$-rule to gain additional fundamental insights on various important theoretical issues with respect to optimal scheduling control in input-queued switch networks. In comparisons with the max-weight scheduling policy, we find that our optimal scheduling control policy provides improvements of at least $10\%$ in most of the experiments, sometimes rendering improvements of more than $50\%$. Moreover, the improvements of our optimal policy over max-weight scheduling grow as the throughput increases. With respect to the $c\mu$-rule, we find that the comparisons with our optimal scheduling control policy fall into three different cases: (1) The $c\mu$-rule coincides with the optimal policy, and thus is fluid-cost optimal; (2) The $c\mu$-rule is unstable (not throughput optimal), and obviously not fluid-cost optimal; (3) The $c\mu$-rule is stable, but not fluid-cost optimal. 
Moreover, the greatest improvements observed for our optimal policy over stable $c\mu$-rule instances represent relative performance gaps of more than $70\%$. The remainder of this paper is organized as follows. Section~\ref{sec:model} presents our mathematical models, for both stochastic processes of input-queued switch networks and their mean-field limits, together with our formulation of the optimal scheduling control problems of interest. Section~\ref{sec:control} then provides our analysis and results for optimal scheduling control and related theoretical properties, deferring our proofs until Section~\ref{sec:proofs}. The results of computational experiments are presented in Section~\ref{sec:experiments}, followed by concluding remarks. \section{Mathematical Models}\label{sec:model} In this section, we first provide some technical preliminaries especially with respect to the notation used in the paper. We then present a stochastic process model of general $n \times n$ input-queued switches, including the dynamics of queue lengths in discrete time. Next, we introduce a sequence of such stochastic processes under an appropriate scaling and prove that every sample path of the sequence has a convergent subsequence to deterministic processes in continuous time, i.e., our fluid models for general $n \times n$ input-queued switches; this includes a characterization of admissible scheduling control policies for the fluid models. Lastly, we present a formulation of the optimal scheduling control problems with the objective of finding an admissible policy that minimizes the infinite-horizon discounted total linear cost of queue lengths in the fluid models. \subsection{Technical Preliminaries} Let $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{R}^+$, $\mathbb{Z}$, $\mathbb{Z}_+$, and $\mathbb{Z}^+$ respectively denote the sets of real numbers, non-negative real numbers, positive real numbers, integers, non-negative integers, and positive integers. 
For a positive integer $n\in\mathbb{Z}^+$, we define $[n]:=\{1,2,\dots,n\}$ to be the set of all positive integers less than or equal to $n$. Blackboard bold typefaces are used for general sets, e.g., $\mathbb{I}$ and $\mathbb{J}$. When the set $\mathbb{I}$ is finite, we represent its cardinality by $|\mathbb{I}|$; e.g., we have $|[n]|=n$ for $n\in\mathbb{Z}^+$. We use the bold font to represent vectors, matrices, and real-valued functions on a finite set. The function $\boldsymbol{\mu}:\mathbb{I}\to\mathbb{R}$, defined on the finite set $\mathbb{I}$, can be considered as an $|\mathbb{I}|$-dimensional vector $\boldsymbol{\mu}=[\mu(\bm{s}):\bm{s}\in\mathbb{I}]$, where $\mu(\bm{s})$ is the value of $\boldsymbol{\mu}$ at $\bm{s}$. We denote by $\mathbb{R}^\mathbb{I}$ the set of all real-valued functions on $\mathbb{I}$. For finite sets $\mathbb{I}$ and $\mathbb{J}$, $\mathbb{R}^{\mathbb{I}\times\mathbb{J}}$ is the set of all real-valued functions on $\mathbb{I}\times\mathbb{J}$, in which an element $\bm{A}$ can also be represented by the matrix $\bm{A}=[A(\bm{s},\bm{\rho}):\bm{s}\in\mathbb{I}, \bm{\rho}\in\mathbb{J}]$, where $A(\bm{s},\bm{\rho})$ is the value of the function $\bm{A}$ at $(\bm{s},\bm{\rho})\in\mathbb{I}\times\mathbb{J}$. For $\bm{A}\in\mathbb{R}^{\mathbb{I}\times\mathbb{J}}$, $\bm{\eta}\in\mathbb{R}^\mathbb{J}$, and $\boldsymbol{\mu}\in\mathbb{R}^\mathbb{I}$, we respectively define $\boldsymbol{\mu}\bm{A}\in\mathbb{R}^\mathbb{J}$, $\bm{A}\bm{\eta} \in\mathbb{R}^\mathbb{I}$, and $\boldsymbol{\mu}\bm{A}\bm{\eta}\in \mathbb{R}$ by
\begin{align*}
(\boldsymbol{\mu}\bm{A})(\bm{\rho})&:=\sum_{\bm{s}\in\mathbb{I}}\mu(\bm{s})A(\bm{s},\bm{\rho}),\quad (\bm{A}\bm{\eta})(\bm{s}):=\sum_{\bm{\rho}\in\mathbb{J}} A(\bm{s},\bm{\rho})\eta(\bm{\rho}),\\
\boldsymbol{\mu}\bm{A}\bm{\eta}&:=\sum_{\bm{s}\in\mathbb{I}}\sum_{\bm{\rho}\in\mathbb{J}} \mu(\bm{s})A(\bm{s},\bm{\rho})\eta(\bm{\rho}),
\end{align*}
in analogy with matrix--vector multiplication.
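The conventions above amount to ordinary matrix--vector algebra once the finite sets are enumerated; a brief sketch with made-up dimensions:

```python
import numpy as np

# Functions on finite sets are vectors; mu*A, A*eta, mu*A*eta are ordinary
# matrix-vector products. Sizes |I| and |J| below are illustrative.
rng = np.random.default_rng(0)
I, J = 4, 3                      # |I| schedules, |J| queues (toy sizes)
mu = rng.random(I)               # mu in R^I
A = rng.integers(0, 2, (I, J))   # A in R^{I x J}
eta = rng.random(J)              # eta in R^J

muA = mu @ A                     # (mu A)(rho) = sum_s mu(s) A(s, rho)
Aeta = A @ eta                   # (A eta)(s)  = sum_rho A(s, rho) eta(rho)
muAeta = mu @ A @ eta            # scalar mu A eta

assert muA.shape == (J,) and Aeta.shape == (I,)
assert np.isclose(muAeta, np.dot(mu, Aeta))
```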
For $\bm{w},\boldsymbol{\mu}\in\mathbb{R}^{\mathbb{I}}$, we also define $\bm{w}\cdot\boldsymbol{\mu}\in\mathbb{R}$ by $\bm{w}\cdot\boldsymbol{\mu}:=\sum_{\bm{s}\in\mathbb{I}}w(\bm{s})\mu(\bm{s})$, which is the inner product of the two vectors. We denote the $1$-norm of a vector by $\|\cdot\|_1$; namely, for $\boldsymbol{\mu}\in\mathbb{R}^{\mathbb{I}}$, $\|\boldsymbol{\mu}\|_1:=\sum_{\bm{s}\in\mathbb{I}}|\mu(\bm{s})|$. Finally, we use the calligraphic font for random variables and the bold calligraphic font for random vectors, e.g., $\mathcal{Q}$ and $\boldsymbol{\mathcal{Q}}$, respectively.

\subsection{Stochastic Models}\label{sec:stochastic model}
The input-queued switch of interest consists of $n$ input ports and $n$ output ports. For each pair $(i,j)\in\mathbb{J}:=[n]\times[n]$, packets that need to be transmitted from the $i$-th input port to the $j$-th output port are stored in a queue indexed by $(i,j)$. We describe below how the number of packets in a queue (the queue length) evolves over time. Time is slotted by nonnegative integers, and the length of queue $\bm{\rho}\in\mathbb{J}$ at the beginning of the $t$-th time slot is denoted by $\mathcal{Q}_t(\bm{\rho})$. External packets arrive at each queue according to an exogenous stochastic process. Let $\mathcal{A}_t(\bm{\rho}) \in \mathbb{Z}_+$ represent the number of arrivals to queue $\bm{\rho}\in\mathbb{J}$ up to time $t$. Assume that $\{ \mathcal{A}_{t+1}(\bm{\rho})-\mathcal{A}_{t}(\bm{\rho}) : t \in \mathbb{Z}_+, \, \bm{\rho}\in \mathbb{J} \}$ are independent random variables and that, for fixed $\bm{\rho}\in\mathbb{J}$, $\{ \mathcal{A}_{t+1}(\bm{\rho})-\mathcal{A}_t(\bm{\rho}) : t \in \mathbb{Z}_+ \}$ are identically distributed with $\mathbb{E}[\mathcal{A}_{t+1}(\bm{\rho})-\mathcal{A}_t(\bm{\rho})] =: \lambda({\bm{\rho}})$. We refer to the $|\mathbb{J}|$-dimensional vector $\bm{\lambda}\in[0,1]^{\mathbb{J}}$ as the arrival rate vector.
Furthermore, $\bm{\lambda}$ is assumed to lie in the interior of the \emph{capacity region} $\{\bm{\lambda}\in[0,1]^{\mathbb{J}} : \sum_{i\in[n]} \lambda(i,j)\leq 1, \ \sum_{j\in[n]} \lambda(i,j)\leq 1, \ \forall i,j\in[n]\}$. During each time slot, packets in the queues can be simultaneously transmitted (i.e., depart from the queues) subject to the following constraints:
\begin{enumerate}
\item[(1)] At most one packet can be transmitted from an input port;
\item[(2)] At most one packet can be transmitted to an output port.
\end{enumerate}
Hence, we denote the departure of packets from the queues during a time slot by an $n^2$-dimensional binary vector $\bm{s}=[s(\bm{\rho}):\bm{\rho}\in\mathbb{J}]$ such that $s(\bm{\rho})=1$ if a packet in queue $\bm{\rho}$ departs from the queue, and $s(\bm{\rho})=0$ otherwise. We refer to such an $\bm{s}$ as a \emph{basic schedule}, and let $\mathbb{I}$ denote the set of all basic schedules:
\begin{align}\label{eq:all basic schedules}
\mathbb{I} = \left\{ \bm{s}\in\{0,1\}^{\mathbb{J}} : \sum_{i\in[n]} s(i,j)\leq 1, \sum_{j\in[n]} s(i,j)\leq 1, \forall i,j\in [n] \right\}.
\end{align}
Note that the empty basic schedule $\bm{s}$, such that $s(i,j)=0$ for all $(i,j)\in\mathbb{J}$, is indeed a member of $\mathbb{I}$. For $\bm{s}\in\mathbb{I}$, let $\mathcal{D}_t(\bm{s})$ denote the cumulative number of time slots devoted to basic schedule $\bm{s}$ up to time $t$. We therefore have
\begin{align}\label{eq:departure process}
\|\boldsymbol{\mathcal{D}}_t\|_1=\sum_{\bm{s}\in\mathbb{I}}\mathcal{D}_t(\bm{s})=t\quad\mbox{and}\quad\|\boldsymbol{\mathcal{D}}_{t+1}\|_1-\|\boldsymbol{\mathcal{D}}_t\|_1=1
\end{align}
for every $t\in\mathbb{Z}_+$.
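The set $\mathbb{I}$ consists of the partial permutation matrices of size $n\times n$, so $|\mathbb{I}|=\sum_{k=0}^{n}\binom{n}{k}^2 k!$ (choose $k$ input ports, $k$ output ports, and a matching between them). A short enumeration sketch with toy switch sizes confirms the count:

```python
from itertools import product
from math import comb, factorial

# Enumerate all 0/1 matrices with row and column sums at most 1
# (the basic schedules / partial permutation matrices).
def basic_schedules(n):
    out = []
    for bits in product((0, 1), repeat=n * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(n)]
        if all(sum(r) <= 1 for r in rows) and \
           all(sum(c) <= 1 for c in zip(*rows)):
            out.append(rows)
    return out

# |I| = sum_k C(n,k)^2 k!; e.g. 7 schedules for n = 2 (including the empty one).
for n in (1, 2, 3):
    expected = sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1))
    assert len(basic_schedules(n)) == expected
```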
From the description of arrivals and departures, we can see that $\boldsymbol{\mathcal{Q}}_t$ evolves according to the following dynamics \begin{align}\label{eq:dynamics_of_stochastic} \boldsymbol{\mathcal{Q}}_{t} \; = \; \boldsymbol{\mathcal{Q}}_0 + \boldsymbol{\mathcal{A}}_t - \boldsymbol{\mathcal{D}}_t\bm{A}, \end{align} where $\boldsymbol{\mathcal{Q}}_0=[\mathcal{Q}_0(\bm{\rho}):\bm{\rho}\in\mathbb{J}]$ is the vector of initial queue lengths and $\bm{A}\in\{0,1\}^{\mathbb{I}\times\mathbb{J}}$ is the schedule-queue adjacency matrix such that $A(\bm{s},\bm{\rho})=s(\bm{\rho})$ for $\bm{s}\in\mathbb{I}$ and $\bm{\rho}\in\mathbb{J}$. We refer to a stochastic process $\{(\boldsymbol{\mathcal{Q}}_t,\boldsymbol{\mathcal{A}}_t,\boldsymbol{\mathcal{D}}_t)\in\mathbb{Z}_+^{\mathbb{J}}\times\mathbb{Z}_+^{\mathbb{J}}\times\mathbb{Z}_+^{\mathbb{I}}:t\in\mathbb{Z}_+\}$ that satisfies \eqref{eq:dynamics_of_stochastic} as a \emph{discrete-time stochastic model for input-queued switches} with the (random) initial state $\boldsymbol{\mathcal{Q}}_0\in\mathbb{Z}_+^{\mathbb{J}}$. \subsection{Fluid Models}\label{sec:convergence_FL} This section introduces a deterministic process that represents our fluid models for input-queued switches, describes the scaled versions of the original stochastic process, and relates them to these fluid models. The basic setup and ideas can be found in the research literature on fluid limit models, especially the papers of Dai~\cite{dai1995} and Dai and Prabhakar~\cite{DaiPra00}. The key concept is the tightness of the laws of the scaled processes, which yields convergence along subsequences. We introduce a continuous-time deterministic process related to an input-queued switch through the following definition.
\begin{definition}\label{def:fluid_model} An absolutely continuous deterministic process $\{(\bm{q}_t,\boldsymbol{\delta}_t)\in\mathbb{R}^{\mathbb{J}}\times\mathbb{R}^{\mathbb{I}}:t\in\mathbb{R}_+\}$ is called an \emph{(input-queued switch) fluid model} with initial state $\bm{q}_0\in\mathbb{R}_+^{\mathbb{J}}$ and arrival rates $\bm{\lambda}\in [0,1]^{\mathbb{J}}$ if the following conditions hold: \begin{enumerate}[label=\textbf{(FM\arabic*)}] \item\label{fluid:dynamics} $\bm{q}_t= \bm{q}_0 + \bm{\lambda} t -\boldsymbol{\delta}_t\bm{A}$ for $t\in\mathbb{R}_+$; \item\label{fluid:positivity} $\bm{q}_t\geq 0$ for $t\in\mathbb{R}_+$; \item\label{fluid:time} $\sum_{\bm{s}\in\mathbb{I}} {\delta}_t(\bm{s}) = t$ (i.e., $\|\boldsymbol{\delta}_t\|_1=t$) and $\boldsymbol{\delta}_t\geq{\mathbf{0}}$ for $t\in\mathbb{R}_+$; \item\label{fluid:increase} For any $\bm{s}\in\mathbb{I}$, ${\delta}_t(\bm{s})$ is non-decreasing with respect to $t$. \end{enumerate} Furthermore, a deterministic process $\{\boldsymbol{\mu}_t\in\mathbb{R}_+^{\mathbb{I}}\,:\,t\in\mathbb{R}_+\}$ is called a \emph{(fluid-level) admissible policy} for the input-queued switch if and only if there exists a fluid model $(\bm{q}_t,\boldsymbol{\delta}_t)$ such that $\boldsymbol{\mu}_t=\dot{\boldsymbol{\delta}}_t$ for all $t\in\mathbb{R}_+$ at which $\dot{\boldsymbol{\delta}}_t$ exists. \end{definition} Note that, since $(\bm{q}_t,\boldsymbol{\delta}_t)$ is absolutely continuous, $\dot{\bm{q}}_t$ and $\dot{\boldsymbol{\delta}}_t$ exist at almost every $t\in\mathbb{R}_+$. The following proposition introduces convenient alternative criteria for a fluid-level admissible policy. \begin{proposition} \label{prop:admissible policy} Fix $\bm{q}_0\in\mathbb{R}_+^{\mathbb{J}}$ and $\bm{\lambda}\in[0,1]^{\mathbb{J}}$.
Let $\{\boldsymbol{\mu}_t\in\mathbb{R}_+^{\mathbb{I}}\,:\,t\in\mathbb{R}_+\}$ be an integrable deterministic process and $\{\bm{q}_t\in\mathbb{R}^{\mathbb{J}}\,:\,t\in\mathbb{R}_+\}$ a process satisfying $\dot{\bm{q}}_t~=~\bm{\lambda} -\boldsymbol{\mu}_{t}\bm{A}$ with initial state $\bm{q}_0$. Then, the following statements are equivalent: \begin{enumerate}[label=\textbf{(AP\arabic*)}] \item\label{ap:definition} $\boldsymbol{\mu}_t$ is a fluid-level admissible policy; \item\label{ap:positivity} $\|\boldsymbol{\mu}_t\|_1=1$ and $\bm{q}_t\geq 0$ for all $t\in\mathbb{R}_+$; \item\label{ap:region} $\|\boldsymbol{\mu}_t\|_1=1$ and $\boldsymbol{\mu}_t\in{\mathbb{U}}(\bm{q}_t)$ for all $t\in\mathbb{R}_+$, where \begin{align}\label{eq:admissible policy region} {\mathbb{U}}(\bm{q}):=\left\{\boldsymbol{\mu}\in [0,1]^{\mathbb{I}}\,:\, (\boldsymbol{\mu}\bm{A})(\bm{\rho})\leq \lambda(\bm{\rho}) \textrm{ if $q(\bm{\rho})=0$} \right\}. \end{align} \end{enumerate} In this case, $(\bm{q}_t,\boldsymbol{\delta}_t:=\int_0^t \boldsymbol{\mu}_{t'}dt' )$ is the fluid model associated with the fluid-level admissible policy $\boldsymbol{\mu}_t$. \end{proposition} We next introduce a family of scaled processes, based on the original models indexed by positive integers, and demonstrate that converging subsequences have fluid models as their limits, which motivates our fluid optimal control problems in Section~\ref{sec:fluidopt}. \subsubsection{Scaled Queueing Processes} Fix an index $r\in\mathbb{Z}^{+}$ and let $\{(\boldsymbol{\mathcal{Q}}^r_t,\boldsymbol{\mathcal{A}}^r_t,\boldsymbol{\mathcal{D}}^r_t)\,:\,t\in\mathbb{Z}_+\}$ be a discrete-time stochastic model with initial state $\boldsymbol{\mathcal{Q}}^r_0$ as described in Section~\ref{sec:stochastic model}.
We extend this discrete-time process to a continuous-time process by defining \begin{align}\label{eq:extention} \begin{split} \boldsymbol{\mathcal{A}}^r_t&:=(t-\lfloor t\rfloor)\left(\boldsymbol{\mathcal{A}}^r_{\lfloor t \rfloor+1}-\boldsymbol{\mathcal{A}}^r_{\lfloor t\rfloor}\right)+\boldsymbol{\mathcal{A}}^r_{\lfloor t\rfloor},\\ \boldsymbol{\mathcal{D}}^r_t&:=(t-\lfloor t\rfloor)\left(\boldsymbol{\mathcal{D}}^r_{\lfloor t\rfloor+1}-\boldsymbol{\mathcal{D}}^r_{\lfloor t\rfloor}\right)+\boldsymbol{\mathcal{D}}^r_{\lfloor t\rfloor},\\ \boldsymbol{\mathcal{Q}}^r_t&:=(t-\lfloor t\rfloor)\left(\boldsymbol{\mathcal{Q}}^r_{\lfloor t\rfloor+1}-\boldsymbol{\mathcal{Q}}^r_{\lfloor t\rfloor}\right)+\boldsymbol{\mathcal{Q}}^r_{\lfloor t\rfloor}\\ &=\boldsymbol{\mathcal{Q}}^r_0+\boldsymbol{\mathcal{A}}^r_t-\boldsymbol{\mathcal{D}}^r_t\bm{A}, \end{split} \end{align} where $\lfloor t\rfloor$ is the largest integer less than or equal to $t$. \begin{remark} Processes $\mathcal{Q}^r_t(\bm{\rho})$, $\mathcal{A}^r_t(\bm{\rho})$ and $\mathcal{D}^r_t(\bm{s})$ are random functions, and every sample path for $(\boldsymbol{\mathcal{Q}}^r_t,\boldsymbol{\mathcal{A}}^r_t,\boldsymbol{\mathcal{D}}^r_t)$ is continuous. We use the notation $\omega^r$ to explicitly denote the dependency on the randomness in the $r$-th system and the notation $\bm{\omega}=[\omega^r\,:\,r\in\mathbb{Z}^{+}]$ to denote the overall randomness. For example, $\mathcal{Q}^r_t(\bm{\rho};\bm{\omega})=\mathcal{Q}^r_t(\bm{\rho};\omega^r)$ and $\boldsymbol{\mathcal{Q}}^r_t(\bm{\omega})=\boldsymbol{\mathcal{Q}}^r_t(\omega^r)$.
\end{remark} For randomness $\bm{\omega}$, the scaled $r$-th system is defined by \begin{align}\label{eq:scaled} \begin{split} \lefteqn{(\hat{\boldsymbol{\mathcal{Q}}}^r_t(\bm{\omega}),\;\hat{\boldsymbol{\mathcal{A}}}^r_t(\bm{\omega}),\;\hat{\boldsymbol{\mathcal{D}}}^r_t(\bm{\omega}))}\\ &\qquad\qquad:= \left(r^{-1}\boldsymbol{\mathcal{Q}}^r_{rt}(\bm{\omega}),\; r^{-1}\boldsymbol{\mathcal{A}}^r_{rt}(\bm{\omega}),\; r^{-1}\boldsymbol{\mathcal{D}}^r_{rt}(\bm{\omega}) \right). \end{split} \end{align} We assume that the initial state of the $r$-th system satisfies \begin{align*} r^{-1}\boldsymbol{\mathcal{Q}}^{r}_0\Rightarrow {\bm{q}}_0, \quad \hbox{as } r\to \infty, \end{align*} for a (deterministic) point $\bm{q}_0\in\mathbb{R}_+^{\mathbb{J}}$, where the convergence is understood to be convergence in distribution. \subsubsection{Tightness and Convergence} For a fixed sample path $\bm{\omega}$, from~\eqref{eq:departure process} and \eqref{eq:extention}, we have $\hat{\mathcal{D}}^r_0(\bm{s};\bm{\omega})=0$ and $\hat{\mathcal{D}}^{r}_{t}(\bm{s};\bm{\omega})\leq\|\hat{\boldsymbol{\mathcal{D}}}^r_t(\bm{\omega})\|_1=t$, so that $ \hat{\mathcal{D}}^{r}_{t}(\bm{s};\bm{\omega})-\hat{\mathcal{D}}^{r}_{t'}(\bm{s};\bm{\omega})~\leq~(t-t')$ for any $r>0$ and $t\geq t'\geq 0$. This implies the tightness of the processes $\hat{\boldsymbol{\mathcal{D}}}^{r}_{t}$; see, e.g., \cite{billingsley2013convergence}. Meanwhile, from the functional strong law of large numbers (see, e.g., \cite{yao2001fundamentals}), we have \begin{align*} \lim_{r\to\infty} \sup_{0\leq t\leq T} | \hat{\mathcal{A}}_t^{r}(\bm{\rho};\bm{\omega}) - \lambda(\bm{\rho})t|=0 \end{align*} almost surely.
We therefore have that, almost surely, for each sample path $\bm{\omega}$ and any sequence $\{r_k\}$ such that $\lim_{k\to\infty} r_k=\infty$, there exist a subsequence $\{r_{k_l}\}$ and an absolutely continuous deterministic process $(\bm{q}_t,\boldsymbol{\delta}_t)$, which is a fluid model in the sense of Definition~\ref{def:fluid_model}, such that \begin{align*} (\hat{\boldsymbol{\mathcal{Q}}}^{r_{k_l}}_t(\bm{\omega}),\hat{\boldsymbol{\mathcal{D}}}^{r_{k_l}}_t(\bm{\omega}))\to(\bm{q}_t,\boldsymbol{\delta}_t) \end{align*} uniformly on all compact sets as $l\to\infty$. \begin{remark} The conditions~\ref{fluid:dynamics} to~\ref{fluid:increase} are necessary conditions for all the fluid limits, and they do not uniquely determine a fluid limit, even under a fixed admissible scheduling policy. Such a lack of uniqueness for the fluid limits and its implications for queueing networks are discussed at length in \cite{Bramson1998}. For certain special cases, with extra conditions on the policies, fluid limits can be shown to be unique; see, e.g., \cite{shah2012} for input-queued switches. Our interest, however, is in solving optimal scheduling control problems within the context of the fluid models. With conditions such as~\ref{fluid:dynamics} and~\ref{fluid:increase}, fluid limit results are generally established for converging subsequences; similar results can be found in \cite{dai1995} for queueing networks. \end{remark} \subsection{Fluid Model Optimal Control Problems} \label{sec:fluidopt} We now formulate the optimal scheduling control problem of interest within the context of the fluid models of input-queued switches.
To this end, we define as follows the total discounted delay cost over the entire time horizon under a fluid-level admissible policy $\{\boldsymbol{\mu}_t\,:\,t\in\mathbb{R}_+\}$ with initial state $\bm{q}_0$: \begin{align*} c(\boldsymbol{\mu}_t;\bm{q}_0)~:=~\int_0^\infty e^{-\beta t}\bm{c}\cdot\bm{q}_t dt, \end{align*} where $\bm{q}_t$ is the deterministic function defined in \ref{fluid:dynamics} with $\boldsymbol{\delta}_t:=\int_0^t \boldsymbol{\mu}_s ds$ and initial state $\bm{q}_0$, $\beta>0$ is the discount factor, and $\bm{c}\in(\mathbb{R}^+)^{\mathbb{J}}$ is the vector of cost coefficients. Specifically, we seek to find a fluid-level admissible scheduling policy with the following objective: \begin{align*} \textrm{Minimize $c(\boldsymbol{\mu}_t;\bm{q}_0)$ over all fluid-level admissible policies $\{\boldsymbol{\mu}_t\,:\,t\in\mathbb{R}_+\}$}. \end{align*} From \ref{ap:positivity} in Proposition~\ref{prop:admissible policy}, this control problem can be formulated as \begin{align}\label{eq:optimal control problem} \begin{split} \textrm{minimize} & \qquad \int_0^\infty e^{-\beta t}\bm{c}\cdot\bm{q}_t dt\\ \textrm{subject to} & \qquad \dot{\bm{q}}_t=\bm{\lambda}-\boldsymbol{\mu}_t\bm{A},\quad\forall t\in\mathbb{R}_+,\\ & \qquad \bm{q}_t\geq {\mathbf{0}},\quad\forall t\in\mathbb{R}_+,\\ & \qquad \boldsymbol{\mu}_t\in{\mathbb{U}},\quad\forall t\in\mathbb{R}_+, \end{split} \end{align} where ${\mathbb{U}}=\{\boldsymbol{\mu}\in [0,1]^{\mathbb{I}}\,:\,\|\boldsymbol{\mu}\|_1=1\}$ and the initial state of $\bm{q}_t$ is $\bm{q}_0$. In the remainder of this section, we exploit results in optimal control theory and derive necessary and sufficient conditions for the optimality of Problem~\eqref{eq:optimal control problem}. As previously noted, the Pontryagin Maximum Principle~\cite{PoBoGa+62} typically only provides necessary conditions for optimality, but these necessary conditions become sufficient under certain conditions that we show to be the case for our optimal control problem.
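To make the objective concrete, once a trajectory $\bm{q}_t$ is fixed the discounted cost can be evaluated numerically. The sketch below is illustrative only (not part of the model): it approximates $\int_0^\infty e^{-\beta t}\,c\,q_t\,dt$ for a single queue drained at a constant rate until it empties.

```python
import math

def discounted_cost(q0, drift, c, beta, horizon=50.0, dt=1e-3):
    """Midpoint-rule approximation of the discounted cost of a single queue
    with q_t = max(q0 + drift * t, 0), truncated at `horizon`."""
    total, t = 0.0, 0.0
    while t < horizon:
        tm = t + 0.5 * dt                      # midpoint of the step
        q = max(q0 + drift * tm, 0.0)          # queue length at the midpoint
        total += math.exp(-beta * tm) * c * q * dt
        t += dt
    return total
```

With zero drift the integral is $c\,q_0/\beta$, and with unit drain from $q_0=1$, $c=\beta=1$ it equals $e^{-1}$, which the quadrature reproduces up to truncation error.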
The Hamiltonian function $H$ and Lagrangian function $L$ corresponding to \eqref{eq:optimal control problem} are respectively defined by \begin{align*} H(\bm{q},\boldsymbol{\mu},\tilde{\bm{p}};t)&:=-e^{-\beta t}\bm{c}\cdot\bm{q}+(\bm{\lambda}-\boldsymbol{\mu}\bm{A})\cdot\tilde{\bm{p}},\\ L(\bm{q},\boldsymbol{\mu},\tilde{\bm{p}},\tilde{\bm{\eta}};t)&:=-e^{-\beta t}\bm{c}\cdot\bm{q}+(\bm{\lambda}-\boldsymbol{\mu}\bm{A})\cdot\tilde{\bm{p}}+\bm{q}\cdot\tilde{\bm{\eta}}, \end{align*} where $\bm{q},\tilde{\bm{p}},\tilde{\bm{\eta}}\in\mathbb{R}^{\mathbb{J}}$ and $\boldsymbol{\mu}\in\mathbb{R}^{\mathbb{I}}$. We also define \begin{align*} H^*(\bm{q},\tilde{\bm{p}};t):=\max\left\{H(\bm{q},\boldsymbol{\mu},\tilde{\bm{p}};t)\,:\,\boldsymbol{\mu}\in{\mathbb{U}}\right\}. \end{align*} Then, from Pontryagin's maximum principle~\cite{PoBoGa+62} under appropriate conditions, we have the following sufficient conditions for an optimal solution of the optimal control problem. \begin{lemma}[\protect{\cite[Theorems 8 and 11]{SeSy77}}]\label{lemma:maximum_principle} Let $\bm{q}_0$ be the initial condition of a fluid model. Let $\{\boldsymbol{\mu}^*_t\in\mathbb{R}_+^{\mathbb{I}}:t\in\mathbb{R}_+\}$ be a fluid-level admissible policy, and let $\bm{q}^*_t=\bm{q}_0+\bm{\lambda} t - \int_0^t\boldsymbol{\mu}^*_{t'} \bm{A}\, dt'$ be the associated queue length process.
Assume there exist a process $\{\tilde{\bm{p}}_t\in\mathbb{R}^{\mathbb{J}}:t\in\mathbb{R}_+\}$ with piecewise continuous $\dot{\tilde{\bm{p}}}_t$ and a process $\{\tilde{\bm{\eta}}_t\in\mathbb{R}^{\mathbb{J}}:t\in\mathbb{R}_+\}$ such that the following conditions are satisfied: \begin{enumerate} \itemsep0.3em \item[(i)] $H^*(\bm{q}^*_t,\tilde{\bm{p}}_t;t)=H(\bm{q}^*_t,\boldsymbol{\mu}^*_t,\tilde{\bm{p}}_t;t)$; \item[(ii)] $\dot{\tilde{\bm{p}}}_t=-L'_{\bm{q}}(\bm{q}^*_t,\boldsymbol{\mu}^*_t,\tilde{\bm{p}}_t,\tilde{\bm{\eta}}_t;t)=-e^{-\beta t}\bm{c}+\tilde{\bm{\eta}}_t$; \item[(iii)] $\bm{q}^*_t\cdot\tilde{\bm{\eta}}_t=0$, $\tilde{\bm{\eta}}_t\geq{\mathbf{0}}$; \item[(iv)] $\liminf_{t\to\infty} \tilde{\bm{p}}_t\cdot(\bm{q}^*_t-\bm{q}_t)\leq 0$ for any fluid model $(\bm{q}_t,\boldsymbol{\delta}_t)$ with initial condition $\bm{q}_0$; \item[(v)] $H^*(\bm{q}, \tilde{\bm{p}}_t;t)$ is concave in $\bm{q}$; \item[(vi)] $\bm{g}(\bm{q}):=\bm{q}$ is quasiconcave in $\bm{q}$ and differentiable in $\bm{q}$ at $\bm{q}^*_t$. \end{enumerate} Then, $\{\boldsymbol{\mu}^*_t:t\in\mathbb{R}_+\}$ is an optimal solution to problem~\eqref{eq:optimal control problem}. \end{lemma} Observe, however, that by the definition of $H$ and $H^*$, we obtain \begin{align*} H^*(\bm{q}, \tilde{\bm{p}}_t;t)&=\max\left\{H(\bm{q},\boldsymbol{\mu},\tilde{\bm{p}};t)\,:\,\boldsymbol{\mu}\in{\mathbb{U}}\right\}\\ &= -e^{-\beta t}\bm{c}\cdot\bm{q}+\max\left\{(\bm{\lambda}-\boldsymbol{\mu}\bm{A})\cdot\tilde{\bm{p}}\,:\,\boldsymbol{\mu}\in{\mathbb{U}}\right\}, \end{align*} which is linear in $\bm{q}$. Further observe that $\bm{g}(\bm{q})=\bm{q}$ is linear in $\bm{q}$. Therefore, conditions (v) and (vi) are satisfied regardless of the choice of $\bm{q}^*_t$, $\boldsymbol{\mu}^*_t$, $\tilde{\bm{p}}_t$, and $\tilde{\bm{\eta}}_t$. Hence, we need only check conditions (i)--(iv) to prove the optimality of $\{\boldsymbol{\mu}^*_t:t\in\mathbb{R}_+\}$.
The following proposition provides an alternative set of sufficient conditions for an optimal solution of the optimal control problem. \begin{proposition}\label{prop:max_principle} Let $\bm{q}_0$ be the initial condition of a fluid model. Let $\{\boldsymbol{\mu}^*_t\in\mathbb{R}_+^{\mathbb{I}}:t\in\mathbb{R}_+\}$ be a fluid-level admissible policy, and let $\bm{q}^*_t=\bm{q}_0+\bm{\lambda} t - \int_0^t\boldsymbol{\mu}^*_{t'} \bm{A}\, dt'$ be the associated queue length process. Assume there exist a continuous process $\{\bm{p}_t\in\mathbb{R}^{\mathbb{J}}:t\in\mathbb{R}_+\}$ with piecewise continuous $\dot{\bm{p}}_t$ and a process $\{\bm{\eta}_t\in\mathbb{R}_+^{\mathbb{J}}:t\in\mathbb{R}_+\}$ such that the following conditions are satisfied: \begin{enumerate}[label=\textbf{(C\arabic*)}] \item\label{cond:optimality} $\boldsymbol{\mu}^*_t\in\arg\max\left\{\boldsymbol{\mu}\bm{A}\cdot\bm{p}_t\,:\,\boldsymbol{\mu}\in{\mathbb{U}}\right\}$; \item\label{cond:diff} $\dot{\bm{p}}_t-\beta\bm{p}_t=\bm{c}-\bm{\eta}_t$; \item\label{cond:slackness} $\bm{q}^*_t\cdot\bm{\eta}_t=0$, $\bm{q}^*_t\geq 0$, $\bm{\eta}_t\geq 0$; \item\label{cond:endpoint} $\liminf_{t\to\infty} \bm{p}_t\cdot(\bm{q}^*_t-\bm{q}_t)\geq 0$ for any fluid model $(\bm{q}_t,\boldsymbol{\delta}_t)$ with initial condition $\bm{q}_0$. \end{enumerate} Then, $\{\boldsymbol{\mu}^*_t:t\in\mathbb{R}_+\}$ is an optimal solution to the optimal control problem~\eqref{eq:optimal control problem}. \end{proposition} \section{Optimal Control} \label{sec:control} In this section, we present and analyze algorithms that render the optimal fluid-cost scheduling policy, namely the optimal solution to the control problem~\eqref{eq:optimal control problem} of Section~\ref{sec:fluidopt}. We first provide some technical preliminaries, including additional notation. Then we present a critical threshold result for a family of linear programs, followed by the optimal control algorithm that exploits a critical threshold at each state of the system.
\subsection{Technical Preliminaries} \label{sub:terminologies} We refer to the stochastic model in Section~\ref{sec:stochastic model} as the pre-limit model and refer to the fluid model in Section~\ref{sec:convergence_FL} as the limit system. For the pre-limit model, recall that a basic schedule is a collection of queues from each of which a packet can depart simultaneously, where $\mathbb{J}:=[n]\times[n]$ denotes the set of queues. A basic schedule is represented by a $|\mathbb{J}|$-dimensional binary vector $\bm{s}=[s(\bm{\rho})\in\{0,1\}:\bm{\rho}\in{\mathbb{J}}]$, where $s(\bm{\rho})=1$ if and only if $\bm{\rho}$ is in the collection composing the basic schedule. For $\bm{\rho}\in\mathbb{J}$ and $\bm{s}\in\mathbb{I}$, we write $\bm{\rho}\in\bm{s}$ to mean $s(\bm{\rho})=1$. For a basic schedule $\bm{s}\in\mathbb{I}$, with $\mathbb{I}$ the set of all basic schedules given in~\eqref{eq:all basic schedules}, we define the \emph{weight} of $\bm{s}$ by \begin{align*} w(\bm{s})~:=~\sum_{\bm{\rho}\in\bm{s}} c(\bm{\rho}), \end{align*} where $\bm{c}\in(\mathbb{R}^+)^{\mathbb{J}}$ is the cost coefficient vector introduced in~\eqref{eq:optimal control problem}. While time in the pre-limit system is discrete with queue-length vector $\boldsymbol{\mathcal{Q}}_t\in\mathbb{Z}_+^{\mathbb{J}}$ at time $t\in\mathbb{Z}_+$, time in the limit system is continuous with the state space of (fluid) queue-length vectors $\bm{q}_t$ given by $\mathbb{R}_+^{\mathbb{J}}$. From Proposition~\ref{prop:admissible policy}, we define a \emph{(fluid-level) schedule} by a convex combination of basic schedules and represent it as an $|\mathbb{I}|$-dimensional vector $\boldsymbol{\mu}=[\mu(\bm{s})\in[0,1]:\bm{s}\in\mathbb{I}]$ with $\|\boldsymbol{\mu}\|_1=1$, where $\mu(\bm{s})$ is the coefficient of basic schedule $\bm{s}$.
Furthermore, schedule $\boldsymbol{\mu}$ is \emph{admissible} at state $\bm{q}\in\mathbb{R}_+^{\mathbb{J}}$ if and only if $\boldsymbol{\mu}\in{\mathbb{U}}(\bm{q})$, as defined in~\eqref{eq:admissible policy region}. \subsection{Critical Thresholds} We now introduce, for each state $\bm{q}\in\mathbb{R}_+^{\mathbb{J}}$, a family of linear programming problems, indexed by non-negative real numbers, from whose solutions we construct an (admissible) schedule associated with the linear program. These schedules are instrumental to the development of the optimal control algorithms in Section~\ref{sec:algorithms}. For a given state $\bm{q}$ and a real value $\tau\in\mathbb{R}_+$, define sets $\mathbb{I}_{\tau}\subset\mathbb{I}$ and $\mathbb{J}_{\bm{q}}\subset\mathbb{J}$ by $\mathbb{I}_{\tau}:=\left\{\bm{s}\in\mathbb{I}\;:\;w(\bm{s})\geq \tau\right\}$, $\mathbb{J}_{\bm{q}}:=\{\bm{\rho}\in\mathbb{J}\;:\;q(\bm{\rho})=0\}$, respectively, and define an $|\mathbb{I}_{\tau}|$-dimensional vector \begin{align*} \bm{w}_{\tau}:=[w(\bm{s})-\tau:\bm{s}\in\mathbb{I}_{\tau}]\in\mathbb{R}_+^{\mathbb{I}_{\tau}}. \end{align*} Then, for $\tau$ with $\mathbb{I}_{\tau}\neq\emptyset$, we formulate the following linear programming problem: \begin{align}\label{eq:primal}\tag{$P_{\bm{q},\tau}$} \begin{split} \textrm{max} \quad \bm{w}_{\tau}\cdot \bm{\nu}, \quad \textrm{s.t.} & \quad \bm{\nu} \bm{A}_{{\tau},{\bm{q}}} \leq \bm{\lambda}_{{\bm{q}}}, \quad \bm{\nu} \geq {\mathbf{0}}, \end{split} \end{align} where \begin{align*} \bm{A}_{{\tau},{\bm{q}}}~&:=~ [ A(\bm{s},\bm{\rho})\,:\,\bm{s}\in\mathbb{I}_{\tau},\,\bm{\rho}\in\mathbb{J}_{\bm{q}}]~\in~\{0,1\}^{\mathbb{I}_{\tau}\times\mathbb{J}_{\bm{q}}},\\ \bm{\lambda}_{{\bm{q}}}~&:=~ [\lambda(\bm{\rho})\,:\,\bm{\rho}\in\mathbb{J}_{\bm{q}}]\,\in\,[0,1]^{\mathbb{J}_{\bm{q}}}, \end{align*} and $\bm{\nu}\in\mathbb{R}^{\mathbb{I}_{\tau}}$ is the vector of decision variables. Note that, if $\tau=0$, then $\mathbb{I}_{0}=\mathbb{I}$ and $\bm{w}_0=\bm{A} \bm{c}$.
\begin{remark} The feasible region for Problem~\eqref{eq:primal} is nonempty because $\bm{\nu}={\mathbf{0}}$ obviously satisfies all constraints. From any feasible vector $\bm{\nu}$ for Problem~\eqref{eq:primal}, if we define $\boldsymbol{\mu}\in\mathbb{R}^{\mathbb{I}}$ by \begin{align*} \mu(\bm{s})~=~ \begin{cases} \nu(\bm{s}) & \textrm{if $\bm{s}\in\mathbb{I}_{\tau}$}\\ 0 & \textrm{otherwise} \end{cases}, \end{align*} then we have $\boldsymbol{\mu}\in{\mathbb{U}}(\bm{q})$ due to the constraints in Problem~\eqref{eq:primal}. Thus, when $\|\boldsymbol{\mu}\|_1=\|\bm{\nu}\|_1=1$, $\boldsymbol{\mu}$ is an admissible schedule at state $\bm{q}$. \end{remark} The next theorem shows the existence of a specific $\tau\in\mathbb{R}_+$ for each state $\bm{q}$, from which we can construct an admissible schedule associated with an optimal solution to Problem~\eqref{eq:primal}. \begin{theorem}\label{thm:critical threshold} For any state $\bm{q}$, there exists a $\tau=\tau(\bm{q})\in\mathbb{R}_+$ such that Problem~\eqref{eq:primal} has an optimal solution $\bm{\nu}$ that can be extended to an admissible schedule at state $\bm{q}$; namely, $\|\bm{\nu}\|_1=1$. We call such $\tau$ a \emph{critical threshold} of state $\bm{q}$. \end{theorem} In the remainder of this section, we provide the basic arguments for establishing Theorem~\ref{thm:critical threshold} by devising a search algorithm for critical thresholds that will terminate in a finite number of iterations. First, letting $\gamma$ denote the optimal value of Problem~\eqref{eq:primal}, it is obvious that $\tau$ is a critical threshold at state $\bm{q}$ if and only if the following set is nonempty: \begin{align}\label{eq:check critical threshold} {\mathbb{Q}}(\bm{q},\tau,\gamma) :=\left\{\bm{\nu}\in\mathbb{R}_+^{\mathbb{I}_{\tau}}\,:\, \bm{w}_{\tau}\cdot\bm{\nu}=\gamma,\ \|\bm{\nu}\|_1=1,\ \bm{\nu} \bm{A}_{\tau,{\bm{q}}} \leq \bm{\lambda}_{\bm{q}}\right\}.
\end{align} Note that all constraints in~\eqref{eq:check critical threshold} are linear and ${\mathbb{Q}}({\bm{q},\tau,\gamma})$ is a polyhedron, which implies that the emptiness of the set ${\mathbb{Q}}(\bm{q},\tau,\gamma)$ can be checked quickly by solving a linear program. Define ${\mathbb{W}}:=\{w(\bm{s})\,:\,\bm{s}\in\mathbb{I}\}=\{\tau_1,\tau_2,\dots\}$ to be the ordered set of all (distinct) weights of basic schedules in $\mathbb{I}$ with $\tau_i>\tau_{i+1}$ for $i=1,2,\dots$. Algorithm~\ref{alg:critical threshold in W} then checks if ${\mathbb{W}}$ contains a critical threshold and finds one if it exists. {\small \begin{algorithm} \caption{Algorithm to find a critical threshold at state $\bm{q}$ in ${\mathbb{W}}$} \label{alg:critical threshold in W} \textbf{Input:} State $\bm{q}$, \qquad \textbf{Output:} An integer \begin{algorithmic}[1] \State\label{alg:check critical threshold:def of h} {Set $l=1$ and \begin{align*} h=\min\{k:\exists \bm{s}\in\mathbb{I} \textrm{ such that } w(\bm{s})=\tau_k,\ q(\bm{\rho})\neq 0\ \forall\bm{\rho}\in\bm{s} \} \end{align*} } \State{Solve Problem~\eqref{eq:primal} with $\tau=\tau_l$, obtain an optimal value $\gamma_l$ and an optimal solution $\bm{\nu}^*$}\label{alg:check critical threshold:initial check start} \If{${\mathbb{Q}}(\bm{q},\tau_l,\gamma_l)\neq\emptyset$} \State\Return{$l$} \EndIf \State{Solve Problem~\eqref{eq:primal} with $\tau=\tau_h$, obtain an optimal value $\gamma_h$ and an optimal solution $\bm{\nu}^*$} \If{${\mathbb{Q}}(\bm{q},\tau_h,\gamma_h)\neq\emptyset$} \State\Return{$h$} \EndIf\label{alg:check critical threshold:initial check end} \While{$l<h-1$} \State{Set $m=\lfloor\frac{l+h}{2}\rfloor$ and $\tau=\tau_{m}$} \State{Solve Problem~\eqref{eq:primal} with $\tau=\tau_m$, obtain an optimal value $\gamma_m$ and an optimal solution $\bm{\nu}^*$} \If{${\mathbb{Q}}(\bm{q},\tau_m,\gamma_m)\neq\emptyset$} \State\Return{$m$}\label{alg:check critical threshold:return m} \Else
\If{$\|\bm{\nu}^*\|_1>1$}\label{alg:check critical threshold:start update l h} \State{Set $h=m$} \Else \State{Set $l=m$} \EndIf\label{alg:check critical threshold:end update l h} \EndIf \EndWhile \State\Return{$-l$}\label{alg:check critical threshold:return -l} \end{algorithmic} \end{algorithm}} The next proposition shows that, if the algorithm returns a positive integer $m$, then $\tau_m$ is a critical threshold of state $\bm{q}$. \begin{proposition}\label{prop:critical threshold in W} If there exists a critical threshold in ${\mathbb{W}}$, Algorithm~\ref{alg:critical threshold in W} returns a positive integer $m$ such that $\tau_m\in{\mathbb{W}}$ is a critical threshold. Otherwise, it returns $-l$ for some $l\in\mathbb{Z}^{+}$ such that\\ \indent the $1$-norm of any optimal solution to~\eqref{eq:primal} with $\tau=\tau_l$ is less than $1$;\\ \indent the $1$-norm of any optimal solution to~\eqref{eq:primal} with $\tau=\tau_{l+1}$ is greater than $1$. \end{proposition} \begin{remark} Algorithm~\ref{alg:critical threshold in W} terminates after $O(\log|{\mathbb{W}}|)$ iterations because $(h-l)$ is roughly halved at each iteration. \end{remark} When Algorithm~\ref{alg:critical threshold in W} returns a critical threshold $\tau_m$ of state $\bm{q}$, for positive integer $m$, we have the key element needed for our optimal control policy in this case, as we will see in Algorithm~\ref{alg:optimal_control}. Otherwise, we exploit the results from Algorithm~\ref{alg:critical threshold in W} to obtain the desired critical threshold for state $\bm{q}$. Henceforth, assume that ${\mathbb{W}}$ does not contain any critical threshold. From the above results, in this case, Algorithm~\ref{alg:critical threshold in W} returns $-l$ for some $l\in\mathbb{Z}^{+}$; and if a critical threshold exists in $\mathbb{R}_+$ (but not in ${\mathbb{W}}$), then it is between $\tau_{l+1}$ and $\tau_{l}$.
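Stripped of the LP details, the search above is an ordinary bisection on the index of ${\mathbb{W}}$, driven by the monotonicity of the optimal $1$-norm in $\tau$. In the stand-in sketch below (not the paper's implementation), the LP oracle is replaced by a precomputed list `norms` of optimal $1$-norms at $\tau_1>\tau_2>\cdots$, and the criticality test is simplified to the norm being exactly $1$:

```python
def find_threshold_index(norms):
    """Bisection stand-in for the search over W.  norms[k-1] is the 1-norm of
    an optimal LP solution at tau_k, assumed non-decreasing in k.  Returns a
    positive m if norms[m-1] == 1 (critical threshold found in W); otherwise
    returns -l with norms[l-1] < 1 < norms[l]."""
    l, h = 1, len(norms)
    if norms[l - 1] == 1:
        return l
    if norms[h - 1] == 1:
        return h
    while l < h - 1:
        m = (l + h) // 2
        if norms[m - 1] == 1:
            return m
        if norms[m - 1] > 1:
            h = m          # overshoot: move the upper index down
        else:
            l = m          # undershoot: move the lower index up
    return -l
```

The returned $-l$ corresponds to the bracketing pair $(\tau_l,\tau_{l+1})$ described in the proposition above.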
We define $\bar{\bm{w}}:=[w(\bm{s}):\bm{s}\in\mathbb{I}_{\tau_l}]$ and formulate another linear optimization problem for $\tau\in(\tau_{l+1},\tau_l)$: \begin{align}\label{eq:primal sub}\tag{$P'_{\bm{q},\tau}$} \begin{split} \textrm{max} & \quad \bar{\bm{w}}\cdot \bm{\nu}-\tau\|\bm{\nu}\|_1, \quad \textrm{s.t.} \quad \bm{\nu} \bm{A}_{\tau_l,\bm{q}} \leq \bm{\lambda}_{\bm{q}}, \quad \bm{\nu} \geq {\mathbf{0}}, \end{split} \end{align} where $\bm{\nu}\in\mathbb{R}^{\mathbb{I}_{\tau_l}}$ is a vector of decision variables. The following proposition then allows us to find a critical threshold of state $\bm{q}$ in $(\tau_{l+1},\tau_{l})$ based on the solution to the linear program~\eqref{eq:primal sub}. \begin{proposition}\label{prop:critical threshold not in W 1} Assume that ${\mathbb{W}}$ does not contain any critical threshold and let $-l$ be the output of Algorithm~\ref{alg:critical threshold in W} for some positive integer $l\in\mathbb{Z}^{+}$. Then, \begin{enumerate} \item[(i)] For $\tau\in(\tau_{l+1},\tau_{l})$, Problem~\eqref{eq:primal sub} is equivalent to Problem~\eqref{eq:primal}; \item[(ii)] The feasible region of Problem~\eqref{eq:primal sub} is a polytope (bounded polyhedron); \item[(iii)] All optimal solutions to Problem~\eqref{eq:primal sub} with $\tau=\tau_{l+1}$ have $1$-norm greater than $1$. \end{enumerate} \end{proposition} \begin{remark} Note that in Problem~\eqref{eq:primal sub}, only the objective function depends on $\tau$; the feasible region does not depend on $\tau$.
Since Problem~\eqref{eq:primal} is equivalent to Problem~\eqref{eq:primal sub} for $\tau\in(\tau_{l+1},\tau_{l})$, we can verify if $\tau$ is a critical threshold by checking the emptiness of the set \begin{align}\label{eq:check critical threshold sub} \begin{split} \lefteqn{{\mathbb{Q}}'(\bm{q},\tau,\gamma)}\\ &:=\left\{\bm{\nu}\in\mathbb{R}^{\mathbb{I}_{\tau_l}}:\bar{\bm{w}}\cdot\bm{\nu}-\tau\|\bm{\nu}\|_1=\gamma,\ \|\bm{\nu}\|_1=1,\ \bm{\nu} \bm{A}_{\tau_l,\bm{q}} \leq \bm{\lambda}_{\bm{q}},\ \bm{\nu} \geq {\mathbf{0}}\right\}, \end{split} \end{align} where $\gamma$ is the optimal value of Problem~\eqref{eq:primal sub}. \end{remark} Now, we present an algorithm that obtains a critical threshold of state $\bm{q}$ in $(\tau_{l+1},\tau_{l})$. {\small \begin{algorithm} \caption{Algorithm to find a critical threshold at state $\bm{q}$ in $(\tau_{l+1},\tau_{l})$} \label{alg:critical threshold not in W} \textbf{Input:} integer $l$ such that\\ \hspace*{0.5in} the $1$-norm of any optimal solution to Problem \eqref{eq:primal sub} with $\tau=\tau_l$ is less than $1$, \\ \hspace*{0.5in} the $1$-norm of any optimal solution to Problem \eqref{eq:primal sub} with $\tau=\tau_{l+1}$ is greater than $1$ \\ \textbf{Output:} a critical threshold $\tau\in(\tau_{l+1},\tau_{l})$ \begin{algorithmic}[1] \State{Set $\bar{\bm{w}}=[w(\bm{s}):\bm{s}\in\mathbb{I}_{\tau_l}]$ and $k=0$} \State{Set $\tau^L_0=\tau_l$ and obtain a basic optimal solution $\bm{\nu}^L_0$ to Problem~\eqref{eq:primal sub} with $\tau=\tau^L_0$} \State{Set $\tau^S_0=\tau_{l+1}$ and obtain a basic optimal solution $\bm{\nu}^S_0$ to Problem~\eqref{eq:primal sub} with $\tau=\tau^S_0$} \While{True} \State{Set $$\tau^M_k:=\frac{\bar{\bm{w}}\cdot(\bm{\nu}^S_k-\bm{\nu}^L_k)}{\|\bm{\nu}^S_k\|_1-\|\bm{\nu}^L_k\|_1}$$} \State{Solve Problem \eqref{eq:primal sub} with $\tau=\tau^M_k$, obtain optimal value $\gamma^*$ and a basic optimal solution $\bm{\nu}^M_k$} \If{${\mathbb{Q}}'(\bm{q},\tau^M_k,\gamma^*)\neq\emptyset$}\label{line:condition}
\State\Return{$\tau^M_k$} \Else \If{$\|\bm{\nu}^M_k\|_1>1$}\label{alg:critical threshold not in W:start update l h} \State{Set $(\tau^S_{k+1},\bm{\nu}^S_{k+1})=(\tau^M_k,\bm{\nu}^M_k)$} \State{\quad and $(\tau^L_{k+1},\bm{\nu}^L_{k+1})=(\tau^L_k,\bm{\nu}^L_k)$} \Else \State{Set $(\tau^L_{k+1},\bm{\nu}^L_{k+1})=(\tau^M_k,\bm{\nu}^M_k)$} \State{\quad and $(\tau^S_{k+1},\bm{\nu}^S_{k+1})=(\tau^S_k,\bm{\nu}^S_k)$} \EndIf \label{alg:critical threshold not in W:end update l h} \State{Set $k=k+1$} \EndIf \EndWhile \end{algorithmic} \end{algorithm}} The next proposition establishes that this algorithm provides a critical threshold of state $\bm{q}$. \begin{proposition}\label{prop:critical threshold not in W} Assume that ${\mathbb{W}}$ does not contain any critical threshold and $-l$ is the output of Algorithm~\ref{alg:critical threshold in W} for some $l\in\mathbb{Z}^{+}$. Then, Algorithm~\ref{alg:critical threshold not in W} with input $l$ returns a critical threshold in a finite amount of time. \end{proposition} To summarize, the following algorithm combines Algorithm~\ref{alg:critical threshold in W} and Algorithm~\ref{alg:critical threshold not in W} to produce a critical threshold for any state $\bm{q}$.
{\small \begin{algorithm} \caption{Algorithm to find a critical threshold at state $\bm{q}$}\label{alg:critical threshold} \textbf{Input:} State $\bm{q}$, \qquad \textbf{Output:} a critical threshold $\tau=\tau(\bm{q})$ \begin{algorithmic}[1] \State{Let $m$ be the output of Algorithm~\ref{alg:critical threshold in W} with input $\bm{q}$} \If{$m>0$} \State\Return{$\tau_m$} \Else \State\Return{the output of Algorithm~\ref{alg:critical threshold not in W} with input $l=-m$} \EndIf \end{algorithmic} \end{algorithm}} \subsection{Optimal Control Algorithm} \label{sec:algorithms} By exploiting the critical threshold for any state $\bm{q}$ from the previous section, we now introduce an optimal control algorithm and show that it renders an optimal solution to the optimal control problem~\eqref{eq:optimal control problem}. {\small \begin{algorithm} \caption{Optimal Control Algorithm for initial state $\bm{q}_{t=0}$}\label{alg:optimal_control} \begin{algorithmic}[1] \State{Set $k=0$, $t_0=0$, and $\bm{q}^*_{0}=\bm{q}_{t=0}$} \While{$t_k<\infty$} \State{Let $\tau_k$ be the output of Algorithm~\ref{alg:critical threshold} with input $\bm{q}=\bm{q}^*_{t_k}$} \State{Let $\gamma_k$ be the optimal value of Problem~\eqref{eq:primal} with $\bm{q}=\bm{q}^*_{t_k}$ and $\tau=\tau_k$} \State{Find a point $\bm{\nu}_k\in{\mathbb{Q}}(\bm{q}^*_{t_k},\tau_k,\gamma_k)$ in \eqref{eq:check critical threshold}}\label{alg:line:find bnu_k} \State{Define $\boldsymbol{\mu}^*\in\mathbb{R}^{\mathbb{I}}$ by \begin{align*} \mu^*(\bm{s})=\begin{cases} \nu_k(\bm{s}) & \textrm{if $\bm{s}\in\mathbb{I}_{\tau_k}$}\\ 0 & \textrm{otherwise} \end{cases} \end{align*}}\label{alg:line:bmu} \State{Set \begin{align*} \lefteqn{t_{k+1}=t_k}\\ &+\min\left\{ \frac{q^*_{t_{k}}(\bm{\rho})}{(\boldsymbol{\mu}^*\bm{A})(\bm{\rho})-\lambda(\bm{\rho})}\,:\,\bm{\rho}\in\mathbb{J}\setminus\mathbb{J}_{\bm{q}_{t_k}^*},\ (\boldsymbol{\mu}^*\bm{A})(\bm{\rho})-\lambda(\bm{\rho})>0\right\} \end{align*} }\label{alg:definition of t_{k+1}} \State{Set
$\bm{\mu}^*_t=\bm{\mu}^*$ for $t\in[t_k,t_{k+1})$ and $\bm{q}^*_{t}=\bm{q}^*_{t_k}+(t-t_k)\bm{\lambda}-(t-t_k)\bm{\mu}^* \bm{A}$ for $t\in[t_k,t_{k+1}]$}\label{alg:line:definition of q_t} \State{Set $k=k+1$} \EndWhile \end{algorithmic} \end{algorithm}} The next proposition shows that the above algorithm produces a fluid-level admissible policy. \begin{proposition} \label{prop:well-definedness of t_{k+1}} In Algorithm~\ref{alg:optimal_control}, $\bm{\mu}^*_t$ is a fluid-level admissible policy and $\bm{q}^*_t$ is the continuous process satisfying $\dot{\bm{q}}^*_t=\bm{\lambda}-\bm{\mu}^*_t\,\bm{A}$ with initial state $\bm{q}_{t=0}$. \end{proposition} Now, we prove the stability of the system under the scheduling policy $\bm{\mu}^*_t$ in Algorithm~\ref{alg:optimal_control}. \begin{theorem} \label{thm:throughput_optimal} Assume that the arrival rate vector $\bm{\lambda}$ is inside the capacity region. Then, the schedule produced by Algorithm~\ref{alg:optimal_control} empties the system in finite time. Moreover, if $\bm{q}^*_{T}={\mathbf{0}}$ for some $T\geq 0$, then $\bm{q}^*_t={\mathbf{0}}$ for all $t\geq T$. \end{theorem} The second result in the above theorem asserts that Algorithm~\ref{alg:optimal_control} is weakly stable, in the following sense. \begin{definition}[\protect{\cite[Definition 6]{DaiPra00}}]\label{def:weak stability} A fluid-level admissible policy $\bm{\mu}_t$ is \emph{weakly stable} if the corresponding fluid queue length process $\{\bm{q}_t\,:\,t\in\mathbb{R}_+\}$ with initial state $\bm{q}_0={\mathbf{0}}$ satisfies $\bm{q}_t={\mathbf{0}}$ for all $t\geq 0$. \end{definition} We next establish that Algorithm~\ref{alg:optimal_control} is an optimal policy by showing that it satisfies the conditions of Proposition~\ref{prop:max_principle}. \begin{theorem}\label{thm:optimal_result} Assume that the arrival rate vector $\bm{\lambda}$ is in the capacity region.
Then, $(\bm{q}^*_t, \bm{\mu}^*_t)$ is an optimal solution to problem~\eqref{eq:optimal control problem}. \end{theorem} \subsection{Relationship with c$\boldsymbol{\mu}$ Policy} Given an arrival rate vector $\bm{\lambda}$ and initial queue length $\bm{q}_0$ such that $\lambda(i,j)=q_0(i,j)=0$ for all $i\in[n]$ and $j\in[n]\setminus\{1\}$, the $n\times n$ input-queued switch is equivalent to $n$ parallel queues with one server. For this case, the c$\mu$-policy is well known to be an optimal policy that minimizes the discounted total cost over an infinite horizon in both the stochastic and fluid models (see~\cite{Cox61} and~\cite{Bauerle2000}); and, in this case, Algorithm~\ref{alg:optimal_control} follows the c$\mu$-policy in the fluid model. However, the c$\mu$-policy is not optimal for the $n\times n$ input-queued switch in general. Consider a $3\times 3$ input-queued switch fluid model such that $\lambda(i,j)=0.45$ if $ (i,j)=(1,1),(1,2),(2,1),(2,3)$, and zero otherwise; $c(i,j)=1$ if $(i,j)=(1,2),(2,3)$, $c(i,j)=0.5$ if $(i,j)=(2,1)$, $c(i,j)=0.1$ if $(i,j)=(1,1)$, and zero otherwise; and $\bm{q}_0={\mathbf{0}}$. Then, according to the c$\mu$-policy, the admissible schedule at $\bm{q}$ with $q(1,2)=q(2,3)=q(2,1)=0$ becomes \begin{align*} \mu(\bm{s})~=~ \begin{cases} 0.45 & \textrm{for $\bm{s}$ such that $s(1,2)=s(2,3)=1$}\\ 0.45 & \textrm{for $\bm{s}$ such that $s(2,1)=1$}\\ 0.10 & \textrm{for $\bm{s}$ such that $s(1,1)=1$}\\ 0 & \textrm{otherwise} \end{cases}. \end{align*} Hence, the queue lengths for $(1,2)$, $(2,3)$ and $(2,1)$ are maintained at zero, but the queue length for $(1,1)$ increases at rate $0.45-0.10=0.35$, which shows that the c$\mu$-policy is not weakly stable. On the other hand, according to Theorem~\ref{thm:throughput_optimal}, Algorithm~\ref{alg:optimal_control} is weakly stable.
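The drift computation in this counterexample can be checked numerically. The sketch below (hypothetical encoding: each schedule is listed with its rate and the queues it serves) recovers the $0.35$ buildup rate of queue $(1,1)$ under the c$\mu$-policy, and the zero drift under the two matchings that Algorithm~\ref{alg:optimal_control} uses in this example:

```python
# Drift check for the 3x3 counterexample.
lam = {(1, 1): 0.45, (1, 2): 0.45, (2, 1): 0.45, (2, 3): 0.45}

def drift(schedules):
    """lambda(rho) - (mu A)(rho) for every queue rho with positive arrivals."""
    serve = {q: 0.0 for q in lam}
    for rate, queues in schedules:
        for q in queues:
            if q in serve:
                serve[q] += rate
    return {q: lam[q] - serve[q] for q in lam}

# The c-mu schedule from the text.
cmu = [(0.45, [(1, 2), (2, 3)]), (0.45, [(2, 1)]), (0.10, [(1, 1)])]
# The two matchings used by the optimal control algorithm at q = 0.
alg = [(0.45, [(1, 2), (2, 1)]), (0.45, [(1, 1), (2, 3)])]

d_cmu, d_alg = drift(cmu), drift(alg)
# Queue (1,1) builds up at rate 0.45 - 0.10 = 0.35 under c-mu,
# while the algorithm's schedule gives zero drift to every queue.
```
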
In this example, the critical threshold at $\bm{q}_0={\mathbf{0}}$ is $\tau=0$ and the admissible schedule is \begin{align*} \mu^*(\bm{s})~=~ \begin{cases} 0.45 & \textrm{for $\bm{s}$ such that $s(1,2)=s(2,1)=1$} \\ 0.45 & \textrm{for $\bm{s}$ such that $s(1,1)=s(2,3)=1$}\\ 0 & \textrm{otherwise} \end{cases}, \end{align*} which keeps the system empty. \section{Proofs of Main Results}\label{sec:proofs} In this section, we present the proofs of our main results. \subsection{Proof of Proposition~\ref{prop:admissible policy}} From the differential equation and the initial state of $\bm{q}_t$, we have \begin{align} \label{eq:integral for q_t} \bm{q}_t~=~\bm{q}_0+\bm{\lambda} t-\int_0^t \bm{\mu}_{t'}\bm{A} dt'~=~\bm{q}_0+\bm{\lambda} t-\left(\int_0^t \bm{\mu}_{t'} dt'\right)\bm{A}. \end{align} Therefore, $\{\bm{q}_t:t\in\mathbb{R}_+\}$ is well-defined and differentiable everywhere. Now, we show that \ref{ap:region} $\Rightarrow$ \ref{ap:positivity} $\Rightarrow$ \ref{ap:definition} $\Rightarrow$ \ref{ap:region}. Assume that $\bm{\mu}_t$ satisfies $\|\bm{\mu}_t\|_1=1$ and $\bm{\mu}_t\in{\mathbb{U}}(\bm{q}_t)$ for all $t\in\mathbb{R}_+$. We claim that $\bm{q}_t\geq {\mathbf{0}}$ for all $t\in\mathbb{R}_+$. If this is not true, i.e., $q_{t'}(\bm{\rho})<0$ for some $\bm{\rho}\in\mathbb{J}$ at some time $t'$, then let $t''=\sup\{t<t'\,:\,q_t(\bm{\rho})=0\}$, which is well-defined because $q_t(\bm{\rho})$ is continuous and $q_0(\bm{\rho})\geq 0$. By the continuity of $q_t(\bm{\rho})$, we have that $q_{t''}(\bm{\rho})=0$ and $q_t(\bm{\rho})<0$ for all $t\in(t'',t')$. Hence, $\dot{q}_{t''}(\bm{\rho})<0$, which contradicts the fact that $(\mu_{t''}A)(\bm{\rho})\leq \lambda(\bm{\rho})$, which holds because $\bm{\mu}_{t''}\in{\mathbb{U}}(\bm{q}_{t''})$ and $q_{t''}(\bm{\rho})=0$. Thus $\bm{q}_t\geq {\mathbf{0}}$ for all $t\in\mathbb{R}_+$, which proves that \ref{ap:region} implies \ref{ap:positivity}. Suppose $\|\bm{\mu}_t\|_1=1$ and $\bm{q}_t\geq {\mathbf{0}}$ for $t\in\mathbb{R}_+$.
We show that $(\bm{q}_t,\bm{\delta}_t)$ is a fluid model with $\bm{\delta}_t:=\int_0^t \bm{\mu}_{t'} dt'$. Conditions~\ref{fluid:dynamics} and \ref{fluid:positivity} immediately follow from \eqref{eq:integral for q_t} and the assumption in \ref{ap:positivity}, respectively. Further note that \begin{align*} \|\bm{\delta}_t\|_1 =\sum_{\bm{s}\in\mathbb{I}} \int_0^t \mu_{t'}(\bm{s})dt'=\int_0^t \sum_{\bm{s}\in\mathbb{I}}\mu_{t'}(\bm{s})dt'= \int_0^t \|\bm{\mu}_{t'}\|_1 dt'=t, \end{align*} which implies condition~\ref{fluid:time}. Since $\dot{\bm{\delta}}_t=\bm{\mu}_t\geq {\mathbf{0}}$ for all $t\in\mathbb{R}_+$, condition~\ref{fluid:increase} also holds, and therefore \ref{ap:positivity} implies \ref{ap:definition}. Lastly, assume that $\{\bm{\mu}_t\,:\,t\in\mathbb{R}_+\}$ is a fluid-level admissible policy and let $(\bm{q}_t,\bm{\delta}_t)$ be a fluid model with $\dot{\bm{\delta}}_t=\bm{\mu}_t$, which implies $\bm{\delta}_t=\int_0^t \bm{\mu}_{t'} dt'$. From conditions~\ref{fluid:time} and \ref{fluid:increase}, we have \begin{align*} \|\bm{\mu}_t\|_1=\|\dot{\bm{\delta}}_t\|_1=\sum_{\bm{s}\in\mathbb{I}} \dot{{\delta}}_t(\bm{s})=\frac{d}{dt}{\left(\sum_{\bm{s}\in\mathbb{I}} {\delta}_t(\bm{s})\right)}=\frac{d}{dt}\|\bm{\delta}_t\|_1=1. \end{align*} Moreover, from condition~\ref{fluid:dynamics}, $\bm{q}_t$ is the process such that $\bm{q}_t=\bm{q}_0+\bm{\lambda} t- \int_0^t\bm{\mu}_{t'}\bm{A} dt'$. If $q_t(\bm{\rho})=0$ but $\lambda(\bm{\rho})<(\mu_t A)(\bm{\rho})$ for some $t\in\mathbb{R}_+$ and $\bm{\rho}\in\mathbb{J}$, then $\dot{q}_t(\bm{\rho})<0$. Therefore, we have $q_{t'}(\bm{\rho})<0$ for $t'\in(t,t+\varepsilon]$ and some $\varepsilon>0$, which contradicts condition~\ref{fluid:positivity}.
Hence, we obtain $\bm{\mu}_t\in{\mathbb{U}}(\bm{q}_t)$ for $t\in\mathbb{R}_+$, and thus \ref{ap:definition} implies \ref{ap:region}. \subsection{Proof of Proposition~\ref{prop:max_principle}} Define $\tilde{\bm{p}}_t:=-e^{-\beta t}\bm{p}_t$ and $\tilde{\bm{\eta}}_t:=e^{-\beta t}\bm{\eta}_t$. We then prove that $\tilde{\bm{p}}_t$ and $\tilde{\bm{\eta}}_t$ satisfy the conditions in Lemma~\ref{lemma:maximum_principle}. From \ref{cond:optimality}, we have \begin{align*} H^*(\bm{q}^*_t,\tilde{\bm{p}}_t;t)&~=~\max\left\{H(\bm{q}^*_t,\bm{\mu},\tilde{\bm{p}}_t;t)\,:\,\bm{\mu}\in{\mathbb{U}}\right\}\\ &~=~\max\left\{-e^{-\beta t}\bm{c}\cdot\bm{q}^*_t+\left(\bm{\lambda}-\bm{\mu}\bm{A}\right)\cdot\tilde{\bm{p}}_t\,:\,\bm{\mu}\in{\mathbb{U}}\right\}\\ &~=~-e^{-\beta t}\bm{c}\cdot\bm{q}^*_t+\bm{\lambda}\cdot\tilde{\bm{p}}_t+e^{-\beta t}\;\max\left\{\bm{\mu}\bm{A}\bm{p}_t\,:\,\bm{\mu}\in{\mathbb{U}}\right\}\\ &~=~-e^{-\beta t}\bm{c}\cdot\bm{q}^*_t+\bm{\lambda}\cdot\tilde{\bm{p}}_t+e^{-\beta t}\bm{\mu}^*_t\bm{A}\bm{p}_t\\ &~=~-e^{-\beta t}\bm{c}\cdot\bm{q}^*_t+\left(\bm{\lambda}-\bm{\mu}^*_t\bm{A}\right)\cdot\tilde{\bm{p}}_t\\ &~=~H(\bm{q}^*_t,\bm{\mu}^*_t,\tilde{\bm{p}}_t;t), \end{align*} which implies condition (i) of Lemma~\ref{lemma:maximum_principle}. Condition~\ref{cond:diff} implies \begin{align*} \dot{\tilde{\bm{p}}}_t&=-e^{-\beta t}\dot{\bm{p}}_t+\beta e^{-\beta t}\bm{p}_t=-e^{-\beta t}\left(\dot{\bm{p}}_t-\beta\bm{p}_t\right)\\ &=-e^{-\beta t}\left(\bm{c}-\bm{\eta}_t\right)=-e^{-\beta t}\bm{c}+\tilde{\bm{\eta}}_t, \end{align*} which proves condition (ii) of Lemma~\ref{lemma:maximum_principle}. Since $\bm{\eta}_t$ is a positive multiple of $\tilde{\bm{\eta}}_t$ and $\bm{p}_t$ is a negative multiple of $\tilde{\bm{p}}_t$, conditions (iii) and (iv) of Lemma~\ref{lemma:maximum_principle} then follow from conditions~\ref{cond:slackness} and~\ref{cond:endpoint}, respectively.
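The key step above, that maximizing $H$ over ${\mathbb{U}}$ reduces to maximizing $\bm{\mu}\bm{A}\bm{p}_t$, holds because the two objectives differ by a constant and the positive factor $e^{-\beta t}$. This can be sanity-checked numerically (a minimal sketch with hypothetical random data; the unit vectors stand in for a toy finite subset of ${\mathbb{U}}$):

```python
# Check: argmax of (lambda - mu A) . p_tilde equals argmax of mu A p,
# where p_tilde = -e^{-beta t} p.
import math
import random

random.seed(0)
beta, t = 0.5, 1.3
n_s, n_q = 3, 4                      # schedules and queues of a toy instance
A = [[random.randint(0, 1) for _ in range(n_q)] for _ in range(n_s)]
lam = [random.random() for _ in range(n_q)]
p = [random.uniform(-1.0, 1.0) for _ in range(n_q)]
p_tilde = [-math.exp(-beta * t) * x for x in p]

U = [[1.0 if i == k else 0.0 for i in range(n_s)] for k in range(n_s)]

def Amul(mu):                         # the row vector mu A
    return [sum(mu[s] * A[s][q] for s in range(n_s)) for q in range(n_q)]

def H_term(mu):                       # (lambda - mu A) . p_tilde
    v = Amul(mu)
    return sum((lam[q] - v[q]) * p_tilde[q] for q in range(n_q))

def MW(mu):                           # mu A p
    v = Amul(mu)
    return sum(v[q] * p[q] for q in range(n_q))

best_H = max(range(n_s), key=lambda k: H_term(U[k]))
best_MW = max(range(n_s), key=lambda k: MW(U[k]))
# Both criteria select the same schedule index.
```
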
\subsection{Proof of Proposition~\ref{prop:critical threshold in W}} We first introduce a key lemma that relates the norms of optimal solutions to Problem~\eqref{eq:primal} for different values of $\tau$. \begin{lemma}\label{lemma:monotonicity} Fix $\tau',\tau''\in\mathbb{R}_+$ with $\tau'>\tau''$. Let $\bm{\nu}'\in\mathbb{R}_+^{\mathbb{I}_{\tau'}}$ and $\bm{\nu}''\in\mathbb{R}_+^{\mathbb{I}_{\tau''}}$ be optimal solutions to Problem~\eqref{eq:primal} with $\tau=\tau'$ and $\tau=\tau''$, respectively. Then, we have $\|\bm{\nu}'\|_1 \leq \|\bm{\nu}''\|_1$. \end{lemma} \begin{proof} Note that $\mathbb{I}_{\tau'}\subset\mathbb{I}_{\tau''}$. We denote by $\bm{\nu}''_1\in\mathbb{R}_+^{\mathbb{I}_{\tau'}}$ and $\bm{\nu}''_2\in\mathbb{R}_+^{\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}}$ the restrictions of $\bm{\nu}''$ to $\mathbb{I}_{\tau'}$ and $\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}$; i.e., $\nu''_1(\bm{s})=\nu''(\bm{s})$ for all $\bm{s}\in\mathbb{I}_{\tau'}$ and $\nu''_2(\bm{s})=\nu''(\bm{s})$ for all $\bm{s}\in\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}$. Naturally, we have \begin{align*} \bm{\lambda}_{{\bm{q}}}~\geq~\bm{\nu}''\bm{A}_{\tau'',\bm{q}} ~\geq~ \bm{\nu}''_1\bm{A}_{\tau',\bm{q}}, \end{align*} which implies that $\bm{\nu}''_1$ is a feasible solution of Problem~\eqref{eq:primal} with $\tau=\tau'$. Hence, we obtain \begin{align}\label{eq:lemma_monotonicity} \bm{w}_{\tau'}\cdot\bm{\nu}''_1~\leq~\bm{w}_{\tau'}\cdot\bm{\nu}' \end{align} because $\bm{\nu}'$ is an optimal solution to Problem~\eqref{eq:primal} with $\tau=\tau'$.
On the other hand, we have \begin{align}\label{eq:lemma_monotonicity:2} \begin{split} \lefteqn{\bm{w}_{\tau''}\cdot\bm{\nu}''}\\ =&\sum_{\bm{s}\in\mathbb{I}_{\tau''}}\left(w(\bm{s})-\tau''\right)\nu''(\bm{s})\\ =&\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau''\right)\nu''(\bm{s}) +\sum_{\bm{s}\in\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau''\right)\nu''(\bm{s})\\ =&\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau'\right)\nu''_1(\bm{s})+(\tau'-\tau'')\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\nu''(\bm{s})\\ &+\sum_{\bm{s}\in\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau''\right)\nu''(\bm{s})\\ \leq&\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau'\right)\nu''_1(\bm{s})+(\tau'-\tau'')\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\nu''(\bm{s})\\ &+\sum_{\bm{s}\in\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}}\left(\tau'-\tau''\right)\nu''(\bm{s})\\ =&\bm{w}_{\tau'}\cdot\bm{\nu}''_1+(\tau'-\tau'')\|\bm{\nu}''\|_1, \end{split} \end{align} where the inequality follows from $w(\bm{s})<\tau'$ for all $\bm{s}\in\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}$.
Now, if we extend $\bm{\nu}'$ to $\tilde{\bm{\nu}'}\in\mathbb{R}_+^{\mathbb{I}_{\tau''}}$ by \begin{align*} \tilde{\nu'}(\bm{s})~=~\begin{cases} \nu'(\bm{s}) & \textrm{if $\bm{s}\in\mathbb{I}_{\tau'}$} \\ 0 & \textrm{if $\bm{s}\in\mathbb{I}_{\tau''}\backslash\mathbb{I}_{\tau'}$} \end{cases}, \end{align*} then $\tilde{\bm{\nu}'}$ is a feasible solution of Problem~\eqref{eq:primal} with $\tau=\tau''$ because $\tilde{\bm{\nu}'}\bm{A}_{\tau'',\bm{q}}=\bm{\nu}'\bm{A}_{\tau',\bm{q}}\leq\bm{\lambda}_{{\bm{q}}}$, and \begin{align}\label{eq:lemma_monotonicity:3} \begin{split} \bm{w}_{\tau''}\cdot\tilde{\bm{\nu}'} &=\sum_{\bm{s}\in\mathbb{I}_{\tau''}}\left(w(\bm{s})-\tau''\right)\tilde{\nu'}(\bm{s}) =\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau''\right)\tilde{\nu'}(\bm{s})\\ &=\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\left(w(\bm{s})-\tau'\right)\tilde{\nu'}(\bm{s})+(\tau'-\tau'')\sum_{\bm{s}\in\mathbb{I}_{\tau'}}\tilde{\nu'}(\bm{s})\\ &=\bm{w}_{\tau'}\cdot\bm{\nu}'+(\tau'-\tau'')\|\bm{\nu}'\|_1. \end{split} \end{align} Since $\bm{\nu}''$ is an optimal solution to Problem~\eqref{eq:primal} with $\tau=\tau''$, from~\eqref{eq:lemma_monotonicity:2} and~\eqref{eq:lemma_monotonicity:3} we obtain \begin{align*} \bm{w}_{\tau'}\cdot\bm{\nu}'+(\tau'-\tau'')\|\bm{\nu}'\|_1 &=\bm{w}_{\tau''}\cdot\tilde{\bm{\nu}'}\\ &\leq \bm{w}_{\tau''}\cdot\bm{\nu}'' \leq \bm{w}_{\tau'}\cdot\bm{\nu}''_1+(\tau'-\tau'')\|\bm{\nu}''\|_1, \end{align*} so that \begin{align} \bm{w}_{\tau'}\cdot\bm{\nu}'+(\tau'-\tau'')\|\bm{\nu}'\|_1 &\leq \bm{w}_{\tau'}\cdot\bm{\nu}''_1+(\tau'-\tau'')\|\bm{\nu}''\|_1.\label{eq:lemma_monotonicity:4} \end{align} Then, \eqref{eq:lemma_monotonicity} and~\eqref{eq:lemma_monotonicity:4} imply $\|\bm{\nu}'\|_1\leq\|\bm{\nu}''\|_1$ because $\tau'>\tau''$. \end{proof} Now, we prove Proposition~\ref{prop:critical threshold in W}.
We claim that any critical threshold is less than or equal to $\tau_1$ and greater than or equal to $\tau_h$, where \begin{align*} h=\min\{k:\exists \bm{s}\in\mathbb{I} \textrm{ such that } w(\bm{s})=\tau_k,\ q(\bm{\rho})\neq 0\ \forall\bm{\rho}\in\bm{s} \} \end{align*} is defined in Line~\ref{alg:check critical threshold:def of h} of Algorithm~\ref{alg:critical threshold in W}. Since $\tau_1$ is the largest number in ${\mathbb{W}}$, we have $w(\bm{s})\leq\tau_1$ for all $\bm{s}\in\mathbb{I}$, and thus $\bm{w}_{\tau_1}={\mathbf{0}}$. Hence, any feasible solution of Problem~\eqref{eq:primal} with $\tau=\tau_1$ is an optimal solution to the problem. If Problem~\eqref{eq:primal} with $\tau=\tau_1$ has an optimal solution $\bm{\nu}$ with $\|\bm{\nu}\|_1\geq 1$, then $\bm{\nu}/\|\bm{\nu}\|_1$ is also an optimal solution because \begin{align*} \frac{1}{\|\bm{\nu}\|_1}\bm{\nu}~=~\frac{1}{\|\bm{\nu}\|_1}\bm{\nu}+\left(1-\frac{1}{\|\bm{\nu}\|_1}\right){\mathbf{0}} \end{align*} is a convex combination of $\bm{\nu}$ and ${\mathbf{0}}\in\mathbb{R}^{\mathbb{I}_{\tau_1}}$, both of which are optimal solutions. Hence, $\tau_1$ is a critical threshold. Otherwise, all optimal solutions to the problem have $1$-norm less than $1$, and therefore, by Lemma~\ref{lemma:monotonicity}, any critical threshold must be less than $\tau_1$. Let $\bm{\nu}_h\in\mathbb{R}^{\mathbb{I}_{\tau_h}}$ be an optimal solution to Problem~\eqref{eq:primal} with $\tau=\tau_h$ and let $\bm{s}_h\in\mathbb{I}$ be such that $w(\bm{s}_h)=\tau_h$ and $q(\bm{\rho})\neq 0$ for all $\bm{\rho}\in\bm{s}_h$. We denote by $\bm{e}\in\mathbb{R}^{\mathbb{I}_{\tau_h}}$ the vector with $e(\bm{s}_{h})=1$ and $e(\bm{s})=0$ for any $\bm{s}\in\mathbb{I}_{\tau_h}\backslash\{\bm{s}_h\}$. Then, for any $\alpha\in\mathbb{R}_+$, we have $\bm{\nu}_h+\alpha\bm{e}\geq {\mathbf{0}}$. Moreover, for all $\bm{\rho}\in\mathbb{J}_{\bm{q}}$, we obtain $A(\bm{s}_h,\bm{\rho})=0$, and thus $A_{\tau_h,\bm{q}}(\bm{s}_h,\bm{\rho})=0$.
Therefore, we have $\bm{e}\bm{A}_{\tau_h,\bm{q}}={\mathbf{0}}$ so that \begin{align*} \left(\bm{\nu}_h+\alpha\bm{e}\right)\bm{A}_{\tau_h,\bm{q}} ~=~\bm{\nu}_h\bm{A}_{\tau_h,\bm{q}}+\alpha\bm{e}\bm{A}_{\tau_h,\bm{q}} ~=~\bm{\nu}_h\bm{A}_{\tau_h,\bm{q}} ~\leq~\bm{\lambda}_{{\bm{q}}}, \end{align*} which implies that $\bm{\nu}_h+\alpha\bm{e}$ is in the feasible set of Problem~\eqref{eq:primal} with $\tau=\tau_h$. Furthermore, we obtain \begin{align*} \bm{w}_{\tau_h}\cdot(\bm{\nu}_h+\alpha\bm{e}) &=\bm{w}_{\tau_h}\cdot\bm{\nu}_h+\alpha\bm{w}_{\tau_h}\cdot\bm{e}\\ &=\bm{w}_{\tau_h}\cdot\bm{\nu}_h+\alpha\,w_{\tau_h}\!(\bm{s}_h)\,e(\bm{s}_h) =\bm{w}_{\tau_h}\cdot\bm{\nu}_h \end{align*} because $w_{\tau_h}(\bm{s}_h)=w(\bm{s}_h)-\tau_h=0$. Hence, $\bm{\nu}_h+\alpha\bm{e}$ is an optimal solution to Problem~\eqref{eq:primal} with $\tau=\tau_h$. However, we also have \begin{align*} \|\bm{\nu}_h+\alpha\bm{e}\|_1=\|\bm{\nu}_h\|_1+\alpha. \end{align*} Since $\alpha\geq 0$ is arbitrary, Problem~\eqref{eq:primal} with $\tau=\tau_h$ has an optimal solution with $1$-norm greater than $1$. Therefore, by Lemma~\ref{lemma:monotonicity}, any critical threshold at state $\bm{q}$ is greater than or equal to $\tau_h$. Next, note that Lines~\ref{alg:check critical threshold:start update l h}--\ref{alg:check critical threshold:end update l h} in Algorithm~\ref{alg:critical threshold in W} update $l$ and $h$ so that Problems~\eqref{eq:primal} with $\tau=\tau_l$ and $\tau=\tau_h$ have an optimal solution with $1$-norm that is less than and greater than $1$, respectively. Hence, a critical threshold is found between $\tau_l$ and $\tau_h$ during the algorithm. Now, assume that ${\mathbb{W}}$ has a critical threshold. If $\tau_1$ or $\tau_h$ is a critical threshold, Algorithm~\ref{alg:critical threshold in W} returns $1$ or $h$ as in Lines~\ref{alg:check critical threshold:initial check start}--\ref{alg:check critical threshold:initial check end}.
In the {\bf While} loop, $m$ is the midpoint between $l$ and $h$, and if $\tau_m$ is a critical threshold, then it is returned in Line~\ref{alg:check critical threshold:return m}. If not, $l$ or $h$ is updated and, at each iteration, the gap between $l$ and $h$ is halved, as in binary search. Therefore, if ${\mathbb{W}}$ contains a critical threshold, Algorithm~\ref{alg:critical threshold in W} returns an index $m$ such that $\tau_m$ is a critical threshold within a finite number of iterations. Otherwise, the {\bf While} loop ends after a finite number of iterations and, in Line~\ref{alg:check critical threshold:return -l}, the algorithm returns the negative integer $-l$, in which case any optimal solution to Problem~\eqref{eq:primal} with $\tau=\tau_l$ has $1$-norm less than $1$. Moreover, since $h=l+1$ (from the condition in the {\bf While} loop), all optimal solutions to Problem~\eqref{eq:primal} with $\tau=\tau_h=\tau_{l+1}$ have $1$-norm greater than $1$. \subsection{Proof of Proposition~\ref{prop:critical threshold not in W 1}} (i) For any $\tau\in(\tau_{l+1},\tau_l)$, since there is no $\bm{s}\in\mathbb{I}$ such that $w(\bm{s})\in(\tau_{l+1},\tau_{l})$, we have \begin{align*} \mathbb{I}_{\tau}=\{\bm{s}\in\mathbb{I}:w(\bm{s})\geq \tau\}=\{\bm{s}\in\mathbb{I}:w(\bm{s})\geq \tau_l\}=\mathbb{I}_{\tau_l}, \end{align*} and \begin{align*} \bm{w}_{\tau}\cdot\bm{\nu} &=\sum_{\bm{s}\in\mathbb{I}_{\tau_l}}\left( w(\bm{s})-\tau\right)\nu(\bm{s})\\ &=\sum_{\bm{s}\in\mathbb{I}_{\tau_l}}w(\bm{s})\nu(\bm{s})-\tau\sum_{\bm{s}\in\mathbb{I}_{\tau_l}}\nu(\bm{s})\\ &=\bar{\bm{w}}\cdot\bm{\nu}-\tau\|\bm{\nu}\|_1,\quad \forall\bm{\nu}\in\mathbb{R}_+^{\mathbb{I}_{\tau}}=\mathbb{R}_+^{\mathbb{I}_{\tau_l}}. \end{align*} Then, Problems~\eqref{eq:primal sub} and~\eqref{eq:primal} are equivalent: they have the same feasible set, and their objective functions coincide.
(ii) From Algorithm~\ref{alg:critical threshold in W}, we have $\tau_l>\tau_h$, where \begin{align*} h=\min\{k:\exists \bm{s}\in\mathbb{I} \textrm{ such that } w(\bm{s})=\tau_k,\ q(\bm{\rho})\neq 0\ \forall\bm{\rho}\in\bm{s} \}. \end{align*} Therefore, for any $\bm{s}\in\mathbb{I}_{\tau_l}$, there exists a queue $\bm{\rho}\in\mathbb{J}$ such that $\bm{\rho}\in\bm{s}$ and $q(\bm{\rho})=0$. If $\bm{\nu}$ is a feasible solution of Problem~\eqref{eq:primal sub}, the constraints in Problem~\eqref{eq:primal sub} give, for every $\bm{s}\in\mathbb{I}_{\tau_l}$, that $0\leq \nu(\bm{s})\leq \lambda(\bm{\rho})$, where $\bm{\rho}\in\mathbb{J}$ is a queue such that $\bm{\rho}\in\bm{s}$ and $q(\bm{\rho})=0$. In other words, the feasible region of Problem~\eqref{eq:primal sub} is bounded and hence a polytope. (iii) We prove the claim by contradiction. Suppose that $\bm{\nu}^*$ is an optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau_{l+1}$ such that $\|\bm{\nu}^*\|_1<1$. Define $\tilde{\bm{\nu}}^*\in\mathbb{R}^{\mathbb{I}_{\tau_{l+1}}}$ by $\tilde{\nu}^*(\bm{s})~=~\nu^*(\bm{s})$ if $\bm{s}\in\mathbb{I}_{\tau_l}$ and zero otherwise (i.e., for $\bm{s}\in\mathbb{I}_{\tau_{l+1}}\backslash\mathbb{I}_{\tau_l}$). Then, $\tilde{\bm{\nu}}^*\bm{A}_{\tau_{l+1},\bm{q}}~=~ \bm{\nu}^*\bm{A}_{\tau_l,\bm{q}}~\leq~\bm{\lambda}_{{\bm{q}}}$, which implies that $\tilde{\bm{\nu}}^*$ is feasible for~\eqref{eq:primal} with $\tau=\tau_{l+1}$.
On the other hand, for every feasible solution $\tilde{\bm{\nu}}$ of~\eqref{eq:primal} with $\tau=\tau_{l+1}$, if we define $\bm{\nu}\in\mathbb{R}^{{\mathbb{I}}_{\tau_l}}$ by $\nu(\bm{s})=\tilde{\nu}(\bm{s})$ for $\bm{s}\in\mathbb{I}_{\tau_l}$, we obtain \begin{align*} \bm{w}_{\tau_{l+1}}\cdot\tilde{\bm{\nu}} &=\sum_{\bm{s}\in\mathbb{I}_{\tau_{l+1}}}\left(w(\bm{s})-\tau_{l+1}\right)\tilde{\nu}(\bm{s}) =\sum_{\bm{s}\in\mathbb{I}_{\tau_l}}\left(w(\bm{s})-\tau_{l+1}\right)\tilde{\nu}(\bm{s}) \\ &=\sum_{\bm{s}\in\mathbb{I}_{\tau_l}}\left(w(\bm{s})-\tau_{l+1}\right)\nu(\bm{s}) =\bar{\bm{w}}\cdot\bm{\nu}-\tau_{l+1}\|\bm{\nu}\|_1\\ &\leq\bar{\bm{w}}\cdot\bm{\nu}^*-\tau_{l+1}\|\bm{\nu}^*\|_1 =\bm{w}_{\tau_{l+1}}\cdot\tilde{\bm{\nu}}^*, \end{align*} where the second equality holds because $w(\bm{s})=\tau_{l+1}$ for all $\bm{s}\in\mathbb{I}_{\tau_{l+1}}\backslash\mathbb{I}_{\tau_l}$. Therefore, $\tilde{\bm{\nu}}^*$ is an optimal solution to Problem~\eqref{eq:primal} with $\tau=\tau_{l+1}$ satisfying $\|\tilde{\bm{\nu}}^*\|_1=\|\bm{\nu}^*\|_1<1$. However, by Proposition~\ref{prop:critical threshold in W}, all optimal solutions to Problem~\eqref{eq:primal} with $\tau=\tau_{l+1}$ have $1$-norm greater than $1$, a contradiction. \subsection{Proof of Proposition~\ref{prop:critical threshold not in W}} The next sequence of lemmas establishes Proposition~\ref{prop:critical threshold not in W}. \begin{lemma}\label{lemma:critical threshold not in W} In Algorithm~\ref{alg:critical threshold not in W}, any optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^L_k$ has $1$-norm less than $1$ and any optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^S_k$ has $1$-norm greater than $1$, for any $k\in\mathbb{Z}_{+}$. We also have, for $k\in\mathbb{Z}_{+}$, $\tau^M_{k+1}~\in~(\tau^S_{k+1},\tau^L_{k+1})~\subset~(\tau^S_k,\tau^L_k)$. \end{lemma} \begin{proof} We prove the claims by induction on $k$. For $k=0$, both claims hold by the assumption on the input $l$. Now, assume that the claims hold up to some $k\geq 0$.
Then, if the condition in Line~\ref{line:condition} of Algorithm~\ref{alg:critical threshold not in W} is true, the algorithm terminates and there is nothing to prove. When this condition is false, suppose first that $\|\bm{\nu}^M_k\|_1>1$. Since $\tau^L_{k+1}=\tau^L_k$, the induction hypothesis implies that any optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^L_{k+1}$ has $1$-norm less than $1$. For Problem~\eqref{eq:primal sub} with $\tau=\tau^S_{k+1}=\tau^M_k$, if it has an optimal solution~$\bm{\nu}^*$ with $\|\bm{\nu}^*\|_1<1$, we have another optimal solution \begin{align*} \left(1-\frac{1-\|\bm{\nu}^*\|_1}{\|\bm{\nu}^M_k\|_1-\|\bm{\nu}^*\|_1}\right)\bm{\nu}^*+\frac{1-\|\bm{\nu}^*\|_1}{\|\bm{\nu}^M_k\|_1-\|\bm{\nu}^*\|_1}\bm{\nu}^M_k, \end{align*} which is a convex combination of two optimal solutions to the problem. Moreover, the $1$-norm of this optimal solution is \begin{align*} \left(1-\frac{1-\|\bm{\nu}^*\|_1}{\|\bm{\nu}^M_k\|_1-\|\bm{\nu}^*\|_1}\right)\|\bm{\nu}^*\|_1+\frac{1-\|\bm{\nu}^*\|_1}{\|\bm{\nu}^M_k\|_1-\|\bm{\nu}^*\|_1}\|\bm{\nu}^M_k\|_1=1, \end{align*} which implies that $\tau^M_k$ is a critical threshold at state $\bm{q}$, contradicting the assumption that the condition in Line~\ref{line:condition} is false. Hence, any optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^S_{k+1}$ has $1$-norm greater than $1$. By similar arguments, the claims hold for $k+1$ when $\|\bm{\nu}^M_k\|_1<1$. Next, we show that $\tau^M_k\in(\tau^S_k,\tau^L_k)$ for $k\in\mathbb{Z}^{+}$. For Problem~\eqref{eq:primal sub} with $\tau=\tau^L_k$, we have \begin{enumerate}[leftmargin=0.7in] \item[(i)] $\bm{\nu}^L_k$ is an optimal solution; \item[(ii)] $\bm{\nu}^S_k$ is a feasible solution with $\|\bm{\nu}^S_k\|_1>1$; \item[(iii)] no feasible solution with $1$-norm greater than $1$ is optimal; \end{enumerate} where the last statement follows from the previous argument.
Therefore, \begin{align*} &\bar{\bm{w}}\cdot\bm{\nu}^S_k-\tau^L_k\|\bm{\nu}^S_k\|_1~<~\bar{\bm{w}}\cdot\bm{\nu}^L_k-\tau^L_k\|\bm{\nu}^L_k\|_1 \Rightarrow\quad\frac{\bar{\bm{w}}\cdot(\bm{\nu}^S_k-\bm{\nu}^L_k)}{\|\bm{\nu}^S_k\|_1-\|\bm{\nu}^L_k\|_1}~<~\tau^L_k. \end{align*} By similar arguments for Problem~\eqref{eq:primal sub} with $\tau=\tau^S_k$, we have \begin{align*} \tau^S_k~<~\bar{\bm{w}}\cdot(\bm{\nu}^S_k-\bm{\nu}^L_k)/(\|\bm{\nu}^S_k\|_1-\|\bm{\nu}^L_k\|_1). \end{align*} Combining the last two inequalities, we conclude \begin{align*} \tau^S_k~<~\tau^M_k=\bar{\bm{w}}\cdot(\bm{\nu}^S_k-\bm{\nu}^L_k)/(\|\bm{\nu}^S_k\|_1-\|\bm{\nu}^L_k\|_1)~<~\tau^L_k. \end{align*} Lastly, we show that $(\tau^S_{k+1},\tau^L_{k+1})~\subset~(\tau^S_k,\tau^L_k)$. If the condition in Line~\ref{line:condition} is true for $k$, then the algorithm stops and there is nothing to prove. Otherwise, either $(\tau^L_{k+1},\tau^S_{k+1})=(\tau^M_k,\tau^S_k)$ or $(\tau^L_{k+1},\tau^S_{k+1})=(\tau^L_k,\tau^M_k)$, both of which satisfy $(\tau^S_{k+1},\tau^L_{k+1})~\subset~(\tau^S_k,\tau^L_k)$. \end{proof} \begin{lemma}\label{lemma:critical threshold not in W:2} In Algorithm~\ref{alg:critical threshold not in W}, if $\bm{\nu}^L_k\neq\bm{\nu}^L_{k+1}$, then $\bm{\nu}^M_{k'}\neq\bm{\nu}^L_k$ for any $k'>k$; if $\bm{\nu}^S_k\neq\bm{\nu}^S_{k+1}$, then $\bm{\nu}^M_{k'}\neq\bm{\nu}^S_k$ for any $k'>k$. \end{lemma} \begin{proof} By symmetry, we only need to prove the first statement. Assume that $\bm{\nu}^L_k\neq\bm{\nu}^L_{k+1}$. Then, $\bm{\nu}^L_{k+1}=\bm{\nu}^M_k$ with $\|\bm{\nu}^M_k\|_1<1$, and $\tau^M_k$ is not a critical threshold. We also claim that $\bm{\nu}^L_k$ is not an optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^M_k$. Suppose for contradiction that it is.
From the definition of $\tau^M_k$, we obtain \begin{align*} \bar{\bm{w}}\cdot\bm{\nu}^L_k-\tau^M_k\|\bm{\nu}^L_k\|_1~=~\bar{\bm{w}}\cdot\bm{\nu}^S_k-\tau^M_k\|\bm{\nu}^S_k\|_1, \end{align*} which implies that $\bm{\nu}^S_k$ is also an optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^M_k$. Hence, for $\alpha=\frac{1-\|\bm{\nu}^L_k\|_1}{\|\bm{\nu}^S_k\|_1-\|\bm{\nu}^L_k\|_1}\in(0,1)$, we have that $(1-\alpha)\bm{\nu}^L_k+\alpha\bm{\nu}^S_k$ is an optimal solution satisfying \begin{align*} \|(1-\alpha)\bm{\nu}^L_k+\alpha\bm{\nu}^S_k\|_1=(1-\alpha)\|\bm{\nu}^L_k\|_1+\alpha\|\bm{\nu}^S_k\|_1=1, \end{align*} which implies that $\tau^M_k$ is a critical threshold, a contradiction. This proves the claim, and therefore we obtain \begin{align}\label{eq:lemma:critical threshold not in W:1} \bar{\bm{w}}\cdot\bm{\nu}^L_k-\tau^M_k\|\bm{\nu}^L_k\|_1~<~\bar{\bm{w}}\cdot\bm{\nu}^M_k-\tau^M_k\|\bm{\nu}^M_k\|_1. \end{align} Moreover, by Lemma~\ref{lemma:critical threshold not in W}, we have $\tau^L_k>\tau^M_k$, and thus by Lemma~\ref{lemma:monotonicity} we obtain $\|\bm{\nu}^L_k\|_1\leq\|\bm{\nu}^M_k\|_1$. If $\|\bm{\nu}^L_k\|_1=\|\bm{\nu}^M_k\|_1$, then \eqref{eq:lemma:critical threshold not in W:1} gives $\bar{\bm{w}}\cdot\bm{\nu}^L_k<\bar{\bm{w}}\cdot\bm{\nu}^M_k$, which implies $\bar{\bm{w}}\cdot\bm{\nu}^L_k-\tau\|\bm{\nu}^L_k\|_1~<~\bar{\bm{w}}\cdot\bm{\nu}^M_k-\tau\|\bm{\nu}^M_k\|_1$ for any $\tau\in(\tau_{l+1},\tau_{l})$, contradicting the fact that $\bm{\nu}^L_k$ is an optimal solution to~\eqref{eq:primal sub} with $\tau=\tau^L_k$. Hence, $\|\bm{\nu}^L_k\|_1<\|\bm{\nu}^M_k\|_1$.
Lemma~\ref{lemma:critical threshold not in W} implies $\tau^M_{k'}<\tau^L_{k+1}=\tau^M_k$ for $k'>k$; thus, from \eqref{eq:lemma:critical threshold not in W:1}, \begin{align*} \tau^M_{k'}~<~\tau^M_k~&<~(\bar{\bm{w}}\cdot\bm{\nu}^M_k-\bar{\bm{w}}\cdot\bm{\nu}^L_k)/(\|\bm{\nu}^M_k\|_1-\|\bm{\nu}^L_k\|_1)\nonumber\\ \Rightarrow\qquad \bar{\bm{w}}\cdot\bm{\nu}^L_k-\tau^M_{k'}\|\bm{\nu}^L_k\|_1~&<~\bar{\bm{w}}\cdot\bm{\nu}^M_k-\tau^M_{k'}\|\bm{\nu}^M_k\|_1,\qquad\forall k'>k. \end{align*} In other words, $\bm{\nu}^L_k$ is not an optimal solution to Problem~\eqref{eq:primal sub} with $\tau=\tau^M_{k'}$ for $k'>k$. Therefore, $\bm{\nu}^M_{k'}\neq \bm{\nu}^L_k$ for any $k'>k$. \end{proof} Now, we prove Proposition~\ref{prop:critical threshold not in W}. Assume that the opposite is true: the condition in Line~\ref{line:condition} is always false, so that the algorithm does not terminate. By Lemma~\ref{lemma:critical threshold not in W:2}, after $k$ iterations there are $k$ basic feasible solutions (vertices) of Problem~\eqref{eq:primal sub} that cannot be $\bm{\nu}^M_{k'}$ at any later iteration $k'$. Since the number of vertices of the polytope is finite, say $K$, Problem~\eqref{eq:primal sub} with $\tau=\tau^M_{K}$ does not have a basic optimal solution, which contradicts the Fundamental Theorem of Linear Programming. \subsection{Proof of Proposition~\ref{prop:well-definedness of t_{k+1}}} For $t\in(t_k,t_{k+1})$, from the definition of $\bm{q}^*_t$ in Line~\ref{alg:line:definition of q_t} of Algorithm~\ref{alg:optimal_control}, we obtain $\dot{\bm{q}}^*_t~=~\bm{\lambda}-\bm{\mu}_k\,\bm{A} =\bm{\lambda}-\bm{\mu}^*_t\,\bm{A}$. Moreover, $\bm{q}^*_t$ is continuous because $\bm{q}^*_t$ is continuous at $t_k$ for every $k$ such that $t_{k}<\infty$.
Now, since $\bm{\nu}_k$ is a feasible solution to Problem~\eqref{eq:primal} with $\tau=\tau_k$ and $\bm{q}=\bm{q}^*_{t_k}$, we have, for $\bm{\rho}\in\mathbb{J}_{\bm{q}^*_{t_k}}$ and $t\in[t_k,t_{k+1})$, \begin{align*} (\mu^*_t\,A)(\bm{\rho})~=~(\mu_k\,A)(\bm{\rho})~=~(\nu_k\,A_{\tau_k,\bm{q}^*_{t_k}})(\bm{\rho})~\leq~\lambda(\bm{\rho}). \end{align*} For $\bm{\rho}\in\mathbb{J}\backslash\mathbb{J}_{\bm{q}_{t_k}^*}$, if $(\mu_k\,A)(\bm{\rho})-\lambda(\bm{\rho})> 0$, then, because Line~\ref{alg:definition of t_{k+1}} implies that $\left((\mu_k\,A)(\bm{\rho})-\lambda(\bm{\rho})\right)(t_{k+1}-t_{k})\leq q^*_{t_k}(\bm{\rho})$, we obtain \begin{align*} q^*_t(\bm{\rho})=q^*_{t_k}(\bm{\rho})+(t-t_{k})\lambda(\bm{\rho})-(t-t_{k})(\mu_k\,A)(\bm{\rho})>0 \end{align*} for $t\in[t_k,t_{k+1})$. Therefore, we have $(\mu^*_t\,A)(\bm{\rho})\leq \lambda(\bm{\rho})$ for all $\bm{\rho}\in\mathbb{J}_{\bm{q}^*_t}$ when $t\in[t_k,t_{k+1})$. In other words, $\bm{\mu}^*_t\in{\mathbb{U}}(\bm{q}^*_t)$ for all $t\in\mathbb{R}_+$, and thus $\bm{\mu}^*_t$ is a fluid-level admissible policy. \subsection{Proof of Theorem~\ref{thm:throughput_optimal}} Let $k\in\mathbb{Z}_+$ be such that $t_k$ is a moment at which Algorithm~\ref{alg:optimal_control} updates $\bm{\mu}_t^*$ and $\bm{q}_k:=\bm{q}^*_{t_k}\neq {\mathbf{0}}$. Then, for $t\in[t_k,t_{k+1}]$, $\bm{\mu}^*_{t}=(\bm{\nu}_k,{\mathbf{0}})\in\mathbb{R}_+^{\mathbb{I}}$, where $\bm{\nu}_k\in\mathbb{R}_+^{\mathbb{I}_{\tau_k}}$ is an optimal solution to \begin{align*} \begin{split} \textrm{max} \quad \bm{w}_{\tau_k}\cdot \bm{\nu}, \quad \textrm{s.t.} & \quad \bm{\nu} \bm{A}_{{\tau_k},{\bm{q}_k}} \leq \bm{\lambda}_{{\bm{q}_k}}, \quad \|\bm{\nu}\|_1=1, \quad \bm{\nu} \geq {\mathbf{0}} \end{split} \end{align*} because $\tau_k$ is a critical threshold at $\bm{q}_k$.
Since $w(\bm{s})<\tau_k$ for any $\bm{s}\in\mathbb{I}\backslash\mathbb{I}_{\tau_k}$, $\bm{\mu}^*_{t}\in\mathbb{R}_+^{\mathbb{I}}$ is an optimal solution to \begin{align}\label{eq:primal_wo_multiplier}\tag{$P_{\bm{q}}$} \begin{split} \textrm{max} \quad \bm{w}\cdot \bm{\mu}, \quad \textrm{s.t.} & \quad \bm{\mu} \bm{A}_{0,{\bm{q}_k}} \leq \bm{\lambda}_{{\bm{q}_k}}, \quad \|\bm{\mu}\|_1=1, \quad \bm{\mu} \geq {\mathbf{0}}. \end{split} \end{align} If the arrival rate vector $\bm{\lambda}$ lies in the interior of the stability region, then by well-known results (see, e.g., \cite{ziegler2012lectures}) it lies inside the polytope generated by the permutation matrices. Hence, $\bm{\lambda}$ admits a representation as a convex combination of vertices of this polytope, and these vertices correspond to the schedules of the switch together with the zero vector. Denote the resulting sub-convex combination of schedules by $\bm{\mu}'$, so that $\bm{\mu}' \bm{A}=\bm{\lambda}$. Note that $\bm{\lambda}$ being an interior point also implies $\|\bm{\mu}' \|_1<1$, and thus we can augment $\bm{\mu}'$ to $\bm{\mu}''$ with the extra capacity assigned to queues with positive surplus. Hence, there exists a feasible solution to \eqref{eq:primal_wo_multiplier} such that $\bm{\mu}''\bm{A}\bm{c}>\bm{c}\cdot\bm{\lambda}$ and, more precisely, $\bm{\mu}''\bm{A}\bm{c}-\bm{c}\cdot\bm{\lambda}>c\,\varepsilon$ where $\varepsilon=1-\|\bm{\mu}'\|_1$ and $c=\min_{\bm{\rho}} c_{\bm{\rho}}$. Since $\bm{\mu}^*_t$ is an optimal solution to \eqref{eq:primal_wo_multiplier}, \begin{align*} \bm{c}\cdot\dot{\bm{q}^*_t}=\bm{c}\cdot\bm{\lambda}-\bm{\mu}^*_t\bm{A}\bm{c}=\bm{c}\cdot\bm{\lambda}-\bm{\mu}^*_t\cdot\bm{w}\leq \bm{c}\cdot\bm{\lambda}-\bm{\mu}''\bm{A}\bm{c}<-c\,\varepsilon, \end{align*} which implies that the weighted queue length $\bm{c}\cdot\bm{q}^*_t$ decreases at a rate bounded away from zero until it reaches zero.
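The decomposition of an interior arrival-rate matrix into a sub-convex combination of permutation (schedule) matrices can be sketched with a greedy Birkhoff--von Neumann peeling. The instance below is hypothetical and chosen so the greedy variant succeeds; a fully general implementation would first pad $\bm{\lambda}$ to a doubly stochastic matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_bvn(lam, tol=1e-12):
    """Greedily peel permutation matrices off lam (Birkhoff-von Neumann sketch)."""
    R = np.array(lam, float)
    parts = []
    while R.max() > tol:
        rows, cols = linear_sum_assignment(-R)  # max-weight perfect matching
        alpha = R[rows, cols].min()             # largest peelable coefficient
        if alpha <= tol:
            break
        P = np.zeros_like(R)
        P[rows, cols] = 1.0
        parts.append((alpha, P))
        R -= alpha * P
    return parts

lam = np.array([[0.4, 0.3], [0.3, 0.4]])  # interior: row/col sums 0.7 < 1
parts = greedy_bvn(lam)
total = sum(a for a, _ in parts)          # plays the role of ||mu'||_1 < 1
recon = sum(a * P for a, P in parts)
```

Here `total` $<1$ is the slack $\varepsilon$ exploited in the proof to build the strictly draining policy $\bm{\mu}''$.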
Next, assume that $\bm{q}_{T}={\mathbf{0}}$. Then the critical threshold is $\tau=0$ and $\mathbb{I}_{\tau}=\mathbb{I}$. Hence, the first part of the constraints in Problem~\eqref{eq:dual} is given by $ \bm{A}\bm{\zeta}~\geq~\bm{w}_0~=~\bm{A}\bm{c}$. For every $\bm{\rho}\in\mathbb{J}$, define $\bm{e}_{\bm{\rho}}\in\mathbb{R}^{\mathbb{I}}_+$ by $e_{\bm{\rho}}(\bm{\rho})=1$ and $e_{\bm{\rho}}(\bm{\rho}')=0$ if $\bm{\rho}'\neq \bm{\rho}$. Then, upon multiplying $\bm{A}\bm{\zeta}~\geq~\bm{w}_0~=~\bm{A}\bm{c}$ by $\bm{e}_{\bm{\rho}}$, we have \begin{align*} \zeta(\bm{\rho})~=~\bm{e}_{\bm{\rho}}\bm{A}\bm{\zeta}~\geq~\bm{e}_{\bm{\rho}}\bm{A}\bm{c}~=~c(\bm{\rho}),\quad\forall\bm{\rho}\in\mathbb{J}. \end{align*} Therefore, the optimal solution to Problem~\eqref{eq:dual} with $\tau=0$ and $\bm{q}={\mathbf{0}}$ is $\bm{\zeta}^*=\bm{c}$. Complementary slackness then implies $\bm{\mu}^*_t\bm{A}=\bm{\lambda}$ and $\bm{q}_t={\mathbf{0}}$ for all $t\geq T$. \subsection{Proof of Theorem~\ref{thm:optimal_result}} We prove Theorem~\ref{thm:optimal_result} by constructing functions $\bm{p}_t,\bm{\eta}_t:\mathbb{R}_+\to\mathbb{R}^{\mathbb{J}}$ and showing that, together with $(\bm{q}^*_t, \bm{\mu}^*_t)$, they satisfy the conditions in Proposition~\ref{prop:max_principle}. Define ${\mathbb{T}}:=\{t_0=0,\;t_1,\;\dots,t_K\}$ to be the set of moments at which Algorithm~\ref{alg:optimal_control} updates $\bm{\mu}^*_{t}$. Then, from Theorem~\ref{thm:throughput_optimal}, we have $K<\infty$ and $\bm{q}^*_t={\mathbf{0}}$ for $t\geq t_K$. Let $t_{K+1}=\infty$. Define Problem~\eqref{eq:dual} to be the dual of Problem~\eqref{eq:primal}, given as \begin{align}\label{eq:dual}\tag{$D_{\bm{q},\tau}$} \min \quad \bm{\lambda}_{\bm{q}} \cdot \bm{\zeta}, \quad \textrm{s.t.} \quad \bm{A}_{\tau,\bm{q}}\bm{\zeta} \geq \bm{w}_{\tau},\quad \bm{\zeta} \geq {\mathbf{0}}, \end{align} where $\bm{\zeta}\in\mathbb{R}^{\mathbb{J}_{\bm{q}}}$ is the vector of decision variables.
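The claim that $\bm{\zeta}^*=\bm{c}$ solves the dual at $\tau=0$ can be checked numerically on a toy instance whose schedule set contains the single-queue schedules $\bm{e}_{\bm{\rho}}$ (hypothetical data; scipy assumed available):

```python
import numpy as np
from scipy.optimize import linprog

# Rows of A are schedules; the first two are single-queue schedules e_rho.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
c_cost = np.array([2.0, 1.0])   # per-queue holding costs c
lam = np.array([0.3, 0.4])      # arrival rates (dual objective coefficients)
w0 = A @ c_cost                 # w_0 = A c

# Dual at tau = 0:  min lam.zeta  s.t.  A zeta >= w0,  zeta >= 0.
res = linprog(lam, A_ub=-A, b_ub=-w0, method="highs")
zeta_star = res.x               # expect zeta* = c_cost
```

Because the constraints include $\zeta(\bm{\rho})\ge c(\bm{\rho})$ componentwise and $\bm{\lambda}>0$, the solver lands exactly on $\bm{c}$.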
For each $k$, we fix an optimal solution $\bm{\zeta}_k\in\mathbb{R}^{\mathbb{J}_{\bm{q}_{t_k}}}$ of Problem~\eqref{eq:dual} with $\tau=\tau_k$ and $\bm{q}=\bm{q}^*_{t_k}$, and define $\bm{\eta}_t$ for $t\in[t_k,t_{k+1})$ by \begin{align*} \eta_t(\bm{\rho})=\begin{cases} \zeta_k(\bm{\rho}) & \textrm{if $\bm{\rho}\in\mathbb{J}_{\bm{q}^*_{t_k}}$}\\ 0 & \textrm{otherwise} \end{cases}. \end{align*} Then, from the complementary slackness of the primal/dual pair of linear programming problems, we obtain the following two lemmas. \begin{lemma}\label{lemma:slackness} We have $\bm{\eta}_t\geq{\mathbf{0}}$ and $\bm{q}^*_t\geq{\mathbf{0}}$ for $t\in\mathbb{R}_+$. Furthermore, $\eta_t(\bm{\rho})>0$ only if $q^*_t(\bm{\rho})=0$ for $t\in\mathbb{R}_+$ and $\bm{\rho}\in\mathbb{J}$, which implies Condition~\ref{cond:slackness} in Proposition~\ref{prop:max_principle}: $\bm{q}^*_t\cdot\bm{\eta}_t=0$. \end{lemma} \begin{lemma}\label{lemma:optimality} For $\bm{s}\in\mathbb{I}$ and $t\in[t_k, t_{k+1})$, we have $\left(A(c-\eta_t)\right)(\bm{s})\leq \tau_k$. If $\mu^*_t(\bm{s})>0$, then $\left(A(c-\eta_t)\right)(\bm{s})=\tau_k$. In other words, we have \begin{align} \bm{\mu}\bm{A}(\bm{c}-\bm{\eta}_t)~&\leq~\tau_k,\quad\forall \bm{\mu}\in{\mathbb{U}}, \label{eq:upperbound for muAc-eta} \\ \bm{\mu}_t^*\bm{A}(\bm{c}-\bm{\eta}_t)~&=~\tau_k, \label{eq:optimum for muAc-eta} \end{align} for $t\in[t_k,t_{k+1})$. \end{lemma} With $\bm{p}_t$ defined for $t\in[t_k,t_{k+1})$ by \begin{align}\label{eq:def_p} \bm{p}_t~:=~\int_t^{t_{k+1}} e^{\beta(t_{k+1}-t')}\left(\bm{c}-\bm{\eta}_{t'}\right) dt', \end{align} Condition~\ref{cond:diff} of Proposition~\ref{prop:max_principle} is satisfied.
From \eqref{eq:upperbound for muAc-eta}, for any $\bm{\mu}\in{\mathbb{U}}$ (i.e., $\bm{\mu}\geq 0$ and $\|\bm{\mu}\|_1=1$), we obtain {\small \begin{align*} \bm{\mu}\bm{A}\bm{p}_t &=\int_t^{t_{k+1}} e^{\beta(t_{k+1}-t')}\bm{\mu}\bm{A}\left(\bm{c}-\bm{\eta}_{t'}\right) dt' \leq\tau_k\int_t^{t_{k+1}} e^{\beta(t_{k+1}-t')} dt'. \end{align*}} Moreover, from \eqref{eq:optimum for muAc-eta}, we have {\small \begin{align*} \bm{\mu}^*_t\bm{A}\bm{p}_t &=\int_t^{t_{k+1}} e^{\beta(t_{k+1}-t')}\bm{\mu}_t^*\bm{A}\left(\bm{c}-\bm{\eta}_{t'}\right) dt' =\tau_k\int_t^{t_{k+1}} e^{\beta(t_{k+1}-t')} dt', \end{align*}} by the second part of Lemma~\ref{lemma:optimality}. Therefore, we obtain \begin{align*} \bm{\mu}_t^*~\in~\arg\max\left\{ \bm{\mu}\bm{A}\bm{p}_t\,:\,\bm{\mu}\in{\mathbb{U}} \right\} \end{align*} and Condition~\ref{cond:optimality} holds. When $t\in[t_K,t_{K+1})$, i.e., $t\geq t_K$, we have $\bm{q}^*_{t}={\mathbf{0}}$, $\tau_K=0$, and $\mathbb{I}_{\tau_K}=\mathbb{I}$. Hence, the first constraint in Problem~\eqref{eq:dual} with $\bm{q}=\bm{q}_{t_K}$ and $\tau=\tau_K=0$ becomes \begin{align}\label{eq:constraint when tau is 0} \bm{A}\bm{\zeta}~\geq~\bm{w}_0~=~\bm{A}\bm{c}. \end{align} For every $\bm{\rho}\in\mathbb{J}$, define $\bm{e}_{\bm{\rho}}\in\mathbb{R}^{\mathbb{I}}_+$ by $e_{\bm{\rho}}(\bm{\rho})=1$ and $e_{\bm{\rho}}(\bm{\rho}')=0$ if $\bm{\rho}'\neq \bm{\rho}$. Then, upon multiplying \eqref{eq:constraint when tau is 0} by $\bm{e}_{\bm{\rho}}$, we obtain \begin{align*} \zeta(\bm{\rho})~=~\bm{e}_{\bm{\rho}}\bm{A}\bm{\zeta}~\geq~\bm{e}_{\bm{\rho}}\bm{A}\bm{c}~=~c(\bm{\rho}),\quad\forall\bm{\rho}\in\mathbb{J}. \end{align*} Thus, the optimal solution to Problem~\eqref{eq:dual} with $\tau=\tau_K$ and $\bm{q}=\bm{q}_{t_K}$ is $\bm{\zeta}_K=\bm{c}$.
Since $\bm{\eta}_{t'}=\bm{\zeta}_K=\bm{c}$ for all $t'\geq t_K$, we have \begin{align*} \bm{p}_t~=~\int_t^\infty e^{\beta(t_{K+1}-t')}\left(\bm{c}-\bm{\eta}_{t'}\right)dt'~=~{\mathbf{0}}, \end{align*} which implies that $\lim_{t\to\infty} \bm{p}_t\cdot(\bm{q}^*_t-\bm{q}_t)=0$ and Condition~\ref{cond:endpoint} holds. \subsubsection{Proof of Lemma~\ref{lemma:slackness}} Assume that $t\in[t_k,t_{k+1})$ for some $k=0,1,\dots,K$. Since $\bm{\zeta}_k$ is a feasible solution to~\eqref{eq:dual} with $\tau=\tau_k$ and $\bm{q}=\bm{q}^*_{t_k}$, we have $\bm{\zeta}_k\geq{\mathbf{0}}$ and $\bm{\eta}_t\geq{\mathbf{0}}$. Moreover, from Proposition~\ref{prop:well-definedness of t_{k+1}}, we have $\bm{q}^*_t\geq{\mathbf{0}}$. Now, assume that $\eta_{t}(\bm{\rho})>0$. Then, we have $\zeta_k(\bm{\rho})>0$ and $\bm{\rho}\in\mathbb{J}_{\bm{q}^*_{t_k}}$, which implies $ q^*_{t_k}(\bm{\rho})~=~0$. On the other hand, by complementary slackness for \eqref{eq:primal} and \eqref{eq:dual}, we obtain \begin{align*} \zeta_k(\bm{\rho})\left(\lambda(\bm{\rho})-\left(\nu_k A_{\tau_k,\bm{q}^*_{t_k}}\right)(\bm{\rho})\right)=0, \end{align*} where $\bm{\nu}_k$ is an optimal solution to \eqref{eq:primal} used in Line~\ref{alg:line:bmu} of Algorithm~\ref{alg:optimal_control}. Since $\zeta_k(\bm{\rho})>0$, we have $\lambda(\bm{\rho})-\left(\nu_k A_{\tau_k,\bm{q}^*_{t_k}}\right)(\bm{\rho})=0$ so that, for $t'\in[t_k,t_{k+1})$, \begin{align}\label{eq:slackness derivative} \dot{q}^*_{t'}(\bm{\rho})=\lambda(\bm{\rho})-\left(\mu^*_{t'} A\right)(\bm{\rho})= \lambda(\bm{\rho})-\left( \nu_k A_{\tau_k,\bm{q}_{t_k}} \right)(\bm{\rho})=0. \end{align} From the fact $ q^*_{t_k}(\bm{\rho})~=~0$ and \eqref{eq:slackness derivative}, we conclude $q^*_{t'}(\bm{\rho})=0$ for $t'\in[t_k,t_{k+1})$. Therefore, $q^*_t(\bm{\rho})=0$. \subsubsection{Proof of Lemma~\ref{lemma:optimality}} Consider $t\in[t_k,t_{k+1})$ and $\bm{s}\in\mathbb{I}$, and assume that $\mu^*_t(\bm{s})=\mu_k(\bm{s})>0$. Then, we have $\nu_k(\bm{s})>0$.
By complementary slackness for \eqref{eq:primal} and \eqref{eq:dual} with $\tau=\tau_k$ and $\bm{q}=\bm{q}_{t_k}$, we obtain \begin{align*} \left(A_{\tau_k,\bm{q}_{t_k}}\zeta_k\right) (\bm{s})= w_{\tau_k}(\bm{s})=w(\bm{s})-\tau_k. \end{align*} Hence, we conclude \begin{align*} \left(A\left(c-\eta_t\right)\right)(\bm{s})=(Ac)(\bm{s})-(A\eta_t)(\bm{s})=w(\bm{s})-\left(A_{\tau_k,\bm{q}_{t_k}}\zeta_k\right)(\bm{s})=\tau_k, \end{align*} which implies the second part of the lemma. On the other hand, assume that $\mu^*_t(\bm{s})=0$. If $\bm{s}\in\mathbb{I}_{\tau_k}$, we have \begin{align*} \left(A\left(c-\eta_t\right)\right)(\bm{s})=(Ac)(\bm{s})-(A\eta_t)(\bm{s})=w(\bm{s})-\left(A_{\tau_k,\bm{q}_{t_k}}\zeta_k\right)(\bm{s})\leq\tau_k, \end{align*} where the last inequality follows from the constraints in~\eqref{eq:dual}. If $\bm{s}\not\in\mathbb{I}_{\tau_k}$, we then obtain \begin{align*} \left(A\left(c-\eta_t\right)\right)(\bm{s})\leq (Ac)(\bm{s})=w(\bm{s})\leq\tau_k, \end{align*} and thus the lemma is proved.
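The complementary-slackness relations used throughout these proofs can be verified numerically by solving a primal/dual pair explicitly (toy data; the norm constraint of \eqref{eq:primal} is dropped here for brevity):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # schedules x queues (toy instance)
w = np.array([2.0, 3.0])     # schedule weights
lam = np.array([1.0, 1.0])   # arrival rates

# Primal: max w.nu  s.t.  nu A <= lam, nu >= 0  (norm constraint omitted).
p = linprog(-w, A_ub=A.T, b_ub=lam, method="highs")
nu = p.x
# Dual:   min lam.zeta  s.t.  A zeta >= w, zeta >= 0.
d = linprog(lam, A_ub=-A, b_ub=-w, method="highs")
zeta = d.x

gap = (w @ nu) - (lam @ zeta)          # strong duality: should be zero
slack_primal = zeta * (lam - nu @ A)   # zeta_j (lam_j - (nu A)_j) = 0
slack_dual = nu * (A @ zeta - w)       # nu_i ((A zeta)_i - w_i) = 0
```

Both slackness products vanish at optimality, which is precisely the mechanism behind Lemmas~\ref{lemma:slackness} and~\ref{lemma:optimality}.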
\section{Computational Experiments}\label{sec:experiments} \begin{figure*} \caption{$\kappa=0.70$, Relative Gap is $19\%$} \caption{$\kappa=0.90$, Relative Gap is $35\%$} \caption{$\kappa=0.95$, Relative Gap is $50\%$} \caption{Performance Comparisons of Total Costs under the Optimal Policy (Algorithm~\ref{alg:optimal_control}) and Max-Weight Scheduling}\label{fig:comparison} \end{figure*} \begin{figure} \caption{Histogram of Relative Gaps for $\kappa=0.9$} \label{fig:histogram} \end{figure} \begin{figure*} \caption{Algorithm~\ref{alg:optimal_control} coincides with the $c\mu$-rule} \label{fig:optimal_cmu_all} \caption{Unstable $c\mu$-rule} \label{fig:unstable_cmu_all} \caption{Stable but not optimal $c\mu$-rule} \label{fig:cmu_stable_cmu_all} \caption{Performance Comparisons of Total Costs under the Optimal Policy (Algorithm~\ref{alg:optimal_control}) and the $c\mu$-Rule}\label{fig:cmu_comparison} \end{figure*} In this section, we present computational experiments that compare the performance of our optimal control algorithm with that of the max-weight scheduling algorithm and the $c\mu$-rule in the fluid model context. We fix the number of input and output ports to be $n\in\mathbb{Z}_+$ and fix the throughput $\kappa\in(0,1)$. For $1\leq i,j\leq n$, we randomly generate the costs $c(i,j)\in(0,1)$ and the arrival rates $\lambda(i,j)\in(0,1)$ such that \begin{align}\label{eq:throughput in simulation} \max\left\{\sum_{k=1}^n \lambda(i,k),\sum_{k=1}^n \lambda(k,j)\,:\,i,j\in[n]\right\}~=~\kappa. \end{align} We also choose the initial length of each queue $(i,j)\in[n]\times[n]$ to be an integer between $1$ and $100$ uniformly at random. With these parameters, we apply Algorithm~\ref{alg:optimal_control} until we reach the time $T$ at which the length of every queue becomes $0$. During our experiments, we let $t_0,t_1,\dots,t_K$ denote the epochs at which Algorithm~\ref{alg:optimal_control} updates the admissible schedule, with $t_0=0$ and $t_K=T$.
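The random instances described above can be generated by drawing positive rates and rescaling them so that the maximum row/column sum equals the target throughput $\kappa$ of Eq.~\eqref{eq:throughput in simulation}. A sketch (RNG seed and helper name are arbitrary choices, not from the paper):

```python
import numpy as np

def random_instance(n, kappa, rng):
    """Random costs, arrivals (scaled to throughput kappa), initial queues."""
    costs = rng.uniform(0.0, 1.0, size=(n, n))
    lam = rng.uniform(0.0, 1.0, size=(n, n))
    load = max(lam.sum(axis=0).max(), lam.sum(axis=1).max())
    lam *= kappa / load  # enforce max row/col sum = kappa
    q0 = rng.integers(1, 101, size=(n, n)).astype(float)  # uniform on 1..100
    return costs, lam, q0

rng = np.random.default_rng(0)
costs, lam, q0 = random_instance(3, 0.9, rng)
load = max(lam.sum(axis=0).max(), lam.sum(axis=1).max())
```

Rescaling is exact because the maximum row/column sum is linear in a uniform scaling of $\bm{\lambda}$.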
Then, the total cost $ \int_0^{\infty} \bm{c}\cdot \bm{q}_t\,dt$ is given by \begin{align}\label{eq:total_cost} \begin{split} \sum_{k=0}^{K-1}\int_{t_k}^{t_{k+1}}\bm{c}\cdot\bm{q}_t\, dt =\sum_{k=0}^{K-1} \bm{c}\cdot\left(\frac{\bm{q}_{t_{k+1}}+\bm{q}_{t_{k}}}{2}\right)(t_{k+1}-t_k) \end{split} \end{align} because on each interval $[t_k,t_{k+1}]$ the admissible schedule does not change and $\bm{q}_t$ is a linear function. Note that, even though the objective function in the optimal control problem~\eqref{eq:optimal control problem} has a discount factor $\beta\in(0,1)$, we report undiscounted total costs in our computational experiments herein because Algorithm~\ref{alg:optimal_control} does not depend on $\beta$. While the existence and uniqueness of the fluid limit under the max-weight scheduling algorithm has been proven (see~\cite{DaiPra00} and~\cite{shah2012}), an explicit formula is not known. Hence, to numerically compute the max-weight scheduling algorithm in the fluid model, we partition the interval $[0,T]$ into slots of size $\Delta t$; then, for time slot $[t'_k, t'_k+\Delta t]$, we find a basic schedule of the max-weight algorithm with respect to $\bm{q}_{t'_k}$, say $\bm{s}\in\mathbb{I}$, and use this schedule during that time slot. In other words, we set $$q_{t'_{k+1}}(i,j)~=~\max\left\{ q_{t'_k}(i,j)+\left(\lambda(i,j)-s(i,j)\right)\Delta t,0\right\}$$ for $(i,j)\in[n]\times[n]$ and approximately measure the total cost on the interval $[0,T]$ by (assuming that $t'_{K'}=T$) $\int_0^T \bm{c}\cdot \bm{q}_t\,dt\approx\sum_{k=0}^{K'-1} \bm{c}\cdot \bm{q}_{t'_k}\,\Delta t$, which converges to the actual total cost under the max-weight scheduling algorithm as $\Delta t\to 0$; we select $\Delta t$ accordingly. Figure~\ref{fig:comparison} illustrates a representative sample of the total cost over time on $[0,T]$ for the $3\times 3$ input-queued switch fluid model under our optimal control policy and the max-weight scheduling policy.
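The discretized max-weight dynamics and the piecewise-linear total cost of Eq.~\eqref{eq:total_cost} can be sketched as follows (`linear_sum_assignment` supplies the max-weight matching; the data are hypothetical):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_step(q, lam, dt):
    """One Euler step of the discretized fluid max-weight dynamics."""
    rows, cols = linear_sum_assignment(-q)  # matching maximizing sum of q(i,j)
    s = np.zeros_like(q)
    s[rows, cols] = 1.0
    return np.maximum(q + (lam - s) * dt, 0.0)

def total_cost_piecewise_linear(costs, times, queues):
    """Exact trapezoid rule for a piecewise-linear q_t (eq. total_cost)."""
    return sum((costs * (queues[k + 1] + queues[k]) / 2.0).sum()
               * (times[k + 1] - times[k]) for k in range(len(times) - 1))

# Single-queue sanity check: q_t = 4 - 2t on [0, 2] integrates to 4.
cost_val = total_cost_piecewise_linear(np.array([[1.0]]),
                                       [0.0, 2.0],
                                       [np.array([[4.0]]), np.array([[0.0]])])
q_new = max_weight_step(np.array([[3.0, 0.0], [0.0, 1.0]]),
                        np.zeros((2, 2)), 1.0)
```

The trapezoid rule is exact here, not an approximation, because $\bm{q}_t$ is linear between schedule updates.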
The cost coefficients and the initial queue lengths are set to be the same in each of these three experiments. We vary the throughput $\kappa$, defined in~\eqref{eq:throughput in simulation}, across the three experiments (i.e., $\kappa = 0.7, 0.9, 0.95$) while fixing the ratios among the arrival rates. As observed in the figure, the performance of our optimal policy (Algorithm~\ref{alg:optimal_control}) improves relative to that of the max-weight scheduling algorithm as the throughput $\kappa$ increases. To quantify this comparison, we calculate the \emph{relative gap}, defined as the difference between the total costs at time $T$ under the two algorithms divided by the total cost at time $T$ under the optimal algorithm. This relative gap grows with the throughput: $19\%$ for $\kappa=0.7$, $35\%$ for $\kappa=0.9$, and $50\%$ for $\kappa=0.95$. Figure~\ref{fig:histogram} illustrates a representative sample of the corresponding relative gaps for various combinations of costs, initial states, and arrival rates under a fixed throughput of $\kappa=0.9$. We observe that the distribution of the relative gap demonstrates, in most cases, an improvement of at least $10\%$ under Algorithm~\ref{alg:optimal_control} in comparison with max-weight scheduling. The sample average of the relative gap is around $20\%$. We also compare the total cost under our optimal policy (Algorithm~\ref{alg:optimal_control}) and the $c\mu$-rule. Figure~\ref{fig:cmu_comparison} illustrates a representative sample of the total cost over time on $[0,T]$ for the $3\times 3$ input-queued switch fluid model, demonstrating three different types of behavior. In Figure~\ref{fig:optimal_cmu_all}, the $c\mu$-rule and the optimal algorithm are identical and provide the same performance. We observe in Figure~\ref{fig:unstable_cmu_all}, however, that the $c\mu$-rule is unstable and clearly not optimal.
Moreover, even when the $c\mu$-rule is stable, it may not be optimal, as shown in Figure~\ref{fig:cmu_stable_cmu_all}. The largest relative improvement of our optimal policy over stable instances of the $c\mu$-rule exceeds $70\%$. \section{Conclusions}\label{sec:conclusions} We studied a fluid model of general $n \times n$ input-queued switches in which each fluid flow has an associated cost, and derived an optimal scheduling control policy under a general linear objective function based on minimizing the discounted fluid cost over an infinite horizon. We demonstrated that, while in certain parameter domains the optimal policy coincides with the $c\mu$-rule, in general the optimal policy is determined algorithmically through a constrained flow maximization problem whose parameters, essentially Lagrange multipliers of some key network constraints, are in turn identified by another set of carefully designed algorithms. Computational experiments within fluid models of input-queued switches demonstrated the significant benefits of our optimal scheduling policy over variants of max-weight scheduling. \end{document}
\begin{document} \pagenumbering{arabic} \newcommand\be{\begin{equation}} \newcommand\ee{\end{equation}} \newcommand\bea{\begin{eqnarray}} \newcommand\eea{\end{eqnarray}} \newcommand\ket[1]{|#1\rangle} \newcommand\bra[1]{\langle #1|} \newcommand\braket[2]{\langle #1|#2\rangle} \newtheorem{thm}{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \pagestyle{plain} \title{\bf A Quantum Otto Engine with Shortcuts to Thermalization and Adiabaticity} \author{ A. Pedram} \email{[email protected]} \affiliation{Department of Physics, Koç University, Istanbul, Sarıyer 34450, Türkiye} \author{ S. C. Kadıoğlu} \email{[email protected]} \affiliation{Department of Physics, Koç University, Istanbul, Sarıyer 34450, Türkiye} \author{A. Kabakçıoğlu} \email{[email protected]} \affiliation{Department of Physics, Koç University, Istanbul, Sarıyer 34450, Türkiye} \author{Ö. E. Müstecaplıoğlu} \email{[email protected]} \affiliation{Department of Physics, Koç University, Istanbul, Sarıyer 34450, Türkiye} \affiliation{TÜBİTAK Research Institute for Fundamental Sciences, 41470 Gebze, Türkiye} \pagebreak \begin{abstract} We investigate the energetic advantage of accelerating a quantum harmonic oscillator Otto engine by use of shortcuts to adiabaticity (for the power and compression strokes) and to equilibrium (for the hot isochore), by means of counter-diabatic (CD) driving. By comparing various protocols with and without CD driving, we find that applying both types of shortcuts leads to enhanced power and efficiency even after the driving costs are taken into account. The hybrid protocol not only retains its advantage in the limit cycle, but also recovers engine functionality (i.e., a positive power output) in parameter regimes where an uncontrolled, finite-time Otto cycle fails.
We show that controlling three strokes of the cycle leads to an overall improvement of the performance metrics compared with controlling only the two adiabatic strokes. Moreover, we numerically calculate the limit cycle behavior of the engine and show that the engines with accelerated isochoric and adiabatic strokes display a superior power output in this mode of operation. \end{abstract} \keywords{open quantum systems; quantum thermodynamics; quantum heat engines;} \maketitle \section{Introduction} Enhancing the performance of heat engines has been a long-standing objective in the study of thermodynamic cycles. The pinnacle of these endeavors is the Carnot bound implied by the second law of thermodynamics, which establishes an upper limit on the efficiency of heat engines~\cite{bhlitem57177}. This limit can, in principle, be attained with a Carnot cycle, which comprises only reversible processes. However, while the Carnot engine achieves this limit, it is unable to generate any power due to the quasi-static nature of its strokes. Practical heat engine cycles such as Otto, Diesel, or Stirling face the challenge of balancing efficiency and power, necessitating real-world engines to complete a cycle within a finite time, albeit with lower efficiency. Of particular significance is the efficiency at maximum power, which was studied by Curzon and Ahlborn in their seminal paper~\cite{doi:10.1119/1.10023}.\\ Currently, there is ongoing research in quantum heat engines and refrigerators~\cite{doi:10.1146/annurev-physchem-040513-103724,Kosloff2017TheQH,tuncer2020,kurizki_kofman_2022,10.1088/2053-2571/ab21c6}.
Since the pioneering work by Scovil and Schulz-DuBois~\cite{PhysRevLett.2.262}, quantum heat engines have been extensively studied theoretically~\cite{1979,doi:10.1063/1.446862,doi:10.1063/1.461951,10.1063/1.471453,10.1119/1.18197,PhysRevLett.88.050602,PhysRevLett.89.116801,doi:10.1126/science.1078955,PhysRevE.68.016101,PhysRevLett.93.140403,PhysRevE.70.046110,PhysRevE.72.056110,Rezek_2006,PhysRevE.76.031105,PhysRevE.77.041118,PhysRevLett.109.203006,PhysRevE.90.022102,Hardal2015,PhysRevE.95.032139,Turkpence_2017,PhysRevB.96.104304,PhysRevE.96.062120,Niedenzu2018,PhysRevE.97.062153,Naseem:19,PhysRevE.100.012109,PhysRevE.102.062123,Hamedani_Raja_2021}, and successful experimental demonstrations have been recently reported~\cite{PhysRevLett.119.050602,PhysRevLett.122.110601,PhysRevLett.123.240601,Bouton2021,doi:10.1126/sciadv.abl7740,PhysRevA.106.022436,Zhang2022,e25020311}. Despite these theoretical and experimental developments, it is still an open question whether quantum advantages can enable operation at Carnot efficiency with finite power~\cite{PhysRevLett.112.030602,Campisi2016,Kim2022}. It is, therefore, crucial to study the potential and limitations of quantum thermal devices and to investigate methods to boost their finite-time performance.\\ A specific class of control techniques, collectively known as shortcuts to adiabaticity (STA), has gained particular interest in the study of quantum heat engines. The central idea of STA is to design protocols that drive the system by emulating its adiabatic dynamics in finite time~\cite{TORRONTEGUI2013117,RevModPhys.91.045001}.
Since its inception~\cite{doi:10.1021/jp030708a,doi:10.1021/jp040647w,Berry_2009,PhysRevLett.104.063002}, different approaches have been developed to engineer STA, such as the use of dynamical invariants~\cite{PhysRevLett.104.063002,PhysRevA.83.062116}, inversion of scaling laws~\cite{PhysRevA.84.031606,Campo2012}, and the fast-forward technique~\cite{doi:10.1098/rspa.2009.0446,PhysRevA.84.043434,PhysRevA.86.013601}. In recent years, successful experimental realizations of STA have been reported in the literature~\cite{doi:10.1126/sciadv.aau5999,https://doi.org/10.1002/qute.201900121,PhysRevApplied.13.044059,Yin2022}.\\ STA has been extensively explored as a means to enhance the performance of quantum refrigerators~\cite{PhysRevB.100.035407,PhysRevResearch.2.023120} and to boost the power output of quantum heat engines based on the quantum harmonic oscillator~\cite{Campo2014,PhysRevE.98.032121,PhysRevE.99.022110,e18050168} and on spin chains~\cite{PhysRevE.99.032108,PhysRevResearch.2.023145,_AKMAK_2021} by emulating the adiabatic strokes within finite time frames. Previous studies in these domains have commonly assumed a rapid thermalization step. However, with the growing interest in the application of STA to open quantum systems, recent research has increasingly focused on the possibility of achieving swift thermalization using various techniques~\cite{PhysRevA.98.052129,PhysRevLett.122.250402,PhysRevA.100.012126,Alipour2020shortcutsto,PhysRevResearch.2.033178,PhysRevX.10.031015,Suri2018,Dann_2020}. These approaches are often referred to as fast thermalization, shortcuts to thermalization (STT), or shortcuts to equilibrium (STE).
Since quantum heat engines contain thermalization branches, the study of these techniques is of paramount importance for optimizing the power and efficiency of such devices~\cite{PhysRevA.100.012126,PhysRevX.10.031015,Suri2018,Dann_2020}.\\ Motivated by these advancements, we analyze the energy dynamics, power output, and efficiency characteristics of a finite-time Otto cycle that uses a quantum harmonic oscillator as its working medium. To achieve this, we employ shortcuts to adiabaticity (STA) during the expansion and compression strokes, while implementing STE during the hot isochoric stroke, resulting in what we refer to as an ST{\AE} engine. In our investigation, we compare the thermodynamic performance metrics of this engine with an uncontrolled non-adiabatic quantum Otto engine (UNA engine) and a quantum Otto engine in which only the adiabatic strokes are accelerated using CD driving (STA engine).\\ For a comprehensive evaluation of the thermodynamic performance under controlled dynamics, it is essential to meticulously account for the energetic expenses associated with implementing the control protocols~\cite{PhysRevE.99.032108,PhysRevE.98.032121,PhysRevE.99.022110,PhysRevResearch.2.023145}. In this regard, we adopt a widely employed cost measure to quantify the energetic requirements of the STA protocol during the adiabatic strokes. Additionally, we introduce a novel cost measure, based on the dissipative work performed by the environment~\cite{PhysRevA.105.L040201}, specifically for the STE protocol. This approach allows for an accurate evaluation of the energetics when both STA and STE protocols are implemented.\\ Initially, we conduct a thermodynamic analysis of the three engines (UNA, STA, and ST{\AE}) by considering a single cycle following the preparation phase. Subsequently, we extend our examination to their respective limit cycles.
Our findings clearly demonstrate that the ST{\AE} engine, incorporating both shortcuts to adiabaticity (STA) and shortcuts to equilibrium (STE) protocols, attains superior power output and efficiency when compared to the STA-only engine and the UNA engine.\\ This manuscript is organised as follows. In~\cref{sec:otto} we briefly introduce the quantum Otto cycle. In~\cref{sec:sta} the scheme for STA using counterdiabatic driving is introduced. In~\cref{sec:stt} the protocol for fast driving of the Otto cycle towards thermalization is introduced. In~\cref{sec:otto_sta_ste} we describe the quantum Otto engine for which both STA and STE are used. In~\cref{sec:result} we present our results, and finally in~\cref{sec:conclusion} we draw our conclusions. \section{Quantum Harmonic Otto Cycle} \label{sec:otto} We study a finite-time quantum Otto cycle whose working medium is a harmonic oscillator with a time-dependent frequency. The Hamiltonian is \begin{equation}\label{qhohamiltonian} \hat{H}_0=\frac{\hat{p}^2}{2m} +\frac{1}{2}m\omega_t^2\hat{x}^2 \end{equation} in which $\hat{x}$ and $\hat{p}$ are the position and momentum operators, respectively, $\omega_t$ is the time-dependent frequency, and $m$ is the mass of the oscillator. As shown in \cref{otto.schm}, the cycle consists of the following strokes. \begin{itemize} \item[(1)] Adiabatic compression ($1\rightarrow 2$): Unitary evolution for a duration of $\tau_{12}$ while the frequency increases from $\omega_c$ to $\omega_h$. \item[(2)] Hot isochore ($2\rightarrow 3$): Heat transfer from the hot bath to the system for a duration of $\tau_{23}$ as the system equilibrates with the bath at temperature $T_h$ at fixed frequency. \item[(3)] Adiabatic expansion ($3\rightarrow 4$): Unitary evolution for a duration of $\tau_{34}$ while the frequency decreases from $\omega_h$ to $\omega_c$.
\item[(4)] Cold isochore ($4\rightarrow 1$): Heat transfer from the system to the cold bath for a duration of $\tau_{41}$ as the system equilibrates with the bath at temperature $T_c$ at fixed frequency. \end{itemize} \begin{figure} \caption{Schematic representation of a quantum Otto cycle.}\label{otto.schm} \end{figure} During the adiabatic strokes the system is decoupled from the heat baths, and the frequency is varied over time at a rate bounded by the adiabatic theorem~\cite{Berry_2009}, so that the populations of the drifting energy levels remain unchanged. The resulting isentropic evolution is more restrictive than the classical adiabaticity condition of constant entropy. Mathematical formulations of the adiabatic time evolution of an isolated quantum system employ a time-scale separation between fast and slow degrees of freedom, yielding product states as solutions. For the compression and expansion strokes of the QHO Otto engine, this condition implies that the system always remains in an instantaneous eigenstate of the time-dependent Hamiltonian. The unitary time evolution of such a process, which ideally takes infinite time, can be solved exactly and has zero entropy production.
The average work and heat exchange values during the cycle are given by~\cite{PhysRevE.77.021128,DEFFNER2010200} \begin{eqnarray}\label{adiabatic.cycle} \langle W_{12}\rangle &=& \frac{\hbar}{2}(\omega_h Q_{12}^{*} - \omega_c) \coth\left(\frac{\beta_c \hbar \omega_c}{2}\right) \\ \langle Q_{23}\rangle &=& \frac{\hbar \omega_h}{2} \left[\coth\left(\frac{\beta_h \hbar \omega_h}{2}\right) - Q_{12}^{*}\coth\left(\frac{\beta_c \hbar \omega_c}{2}\right) \right] \\ \langle W_{34}\rangle &=& \frac{\hbar}{2}(\omega_c Q_{34}^{*} - \omega_h) \coth\left(\frac{\beta_h \hbar \omega_h}{2}\right) \\ \langle Q_{41}\rangle &=& \frac{\hbar \omega_c}{2} \left[\coth\left(\frac{\beta_c \hbar \omega_c}{2}\right) - Q_{34}^{*}\coth\left(\frac{\beta_h \hbar \omega_h}{2}\right) \right] \end{eqnarray} where $W$ refers to the work input and $Q$ to the heat input to the engine during the indicated strokes, and $\beta_c$ and $\beta_h$ are the inverse temperatures of the cold and hot baths, respectively. The terms $Q^*_{12}$ and $Q^*_{34}$ are the adiabaticity parameters, whose values depend on the driving scheme~\cite{PhysRevE.98.032121}. We employ a sign convention in which all incoming fluxes (heat and work) are taken to be positive~\cite{PhysRevE.106.024137,PhysRevE.102.062123}.\\ In order to study the finite-time Otto cycle, we need to consider the time evolution along each stroke separately. The unitary evolution of the adiabatic strokes is governed by \begin{equation}\label{unitary.dynamics} \partial_t\hat{\rho} = -\frac{i}{\hbar}[\hat{H}_0,\hat{\rho}], \end{equation} where $\hat{\rho}$ is the system's density matrix.
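In the ideal adiabatic limit, $Q^*_{12}=Q^*_{34}=1$ (our simplifying assumption here, not a result of the paper), the expressions above reduce to the textbook Otto efficiency $1-\omega_c/\omega_h$. A short numerical check ($\hbar=1$; parameter values arbitrary):

```python
import numpy as np

hbar = 1.0
wc, wh = 1.0, 2.0   # cold / hot stroke frequencies
bc, bh = 1.0, 0.2   # inverse bath temperatures (beta_c, beta_h)
coth = lambda x: 1.0 / np.tanh(x)
Q12 = Q34 = 1.0     # adiabaticity parameters in the ideal limit (assumption)

W12 = 0.5 * hbar * (wh * Q12 - wc) * coth(bc * hbar * wc / 2)
Q23 = 0.5 * hbar * wh * (coth(bh * hbar * wh / 2) - Q12 * coth(bc * hbar * wc / 2))
W34 = 0.5 * hbar * (wc * Q34 - wh) * coth(bh * hbar * wh / 2)
Q41 = 0.5 * hbar * wc * (coth(bc * hbar * wc / 2) - Q34 * coth(bh * hbar * wh / 2))

net_work_out = -(W12 + W34)   # positive when the cycle operates as an engine
eta = net_work_out / Q23      # equals 1 - wc/wh when Q* = 1
first_law = W12 + Q23 + W34 + Q41  # energy balance over a closed cycle
```

The energy balance over the closed cycle vanishes, consistent with the sign convention that all incoming fluxes are positive.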
For the duration of the isochoric strokes, a Markovian open-system dynamics given by a Gorini–Kossakowski–Sudarshan–Lindblad (GKLS) master equation governs the system, \begin{equation}\label{lindblad.dynamics} \partial_t\hat{\rho} = -\frac{i}{\hbar}[\hat{H}_0,\hat{\rho}] + \mathcal{D}(\hat{\rho}), \end{equation} in which $\mathcal{D}(\hat{\rho})$ is the dissipation superoperator, which must conform to Lindblad's form for a Markovian evolution \begin{equation}\label{lindblad.diss} \mathcal{D}(\hat{\rho}) = k_{\uparrow } (\hat{a}^\dagger \hat{\rho} \hat{a} - \frac{1}{2}\{\hat{a} \hat{a}^\dagger, \hat{\rho}\})+ k_{\downarrow } (\hat{a} \hat{\rho} \hat{a}^\dagger - \frac{1}{2}\{\hat{a}^\dagger \hat{a}, \hat{\rho}\}). \end{equation} The rates $ k_{\uparrow }$ and $ k_{\downarrow }$ are called heat conductance rates, and the heat conductivity (heat transport rate) is defined as $\Gamma \equiv k_{\downarrow }-k_{\uparrow }$~\cite{Kosloff2017TheQH}. Note that heat transport is not the only source of irreversibility in a quantum heat engine: non-commutativity of the Hamiltonians at different times results in an additional dissipation channel, dubbed ``quantum friction''~\cite{PhysRevE.65.055102}.\\ Thermalization times are often neglected in analyses of quantum heat engines, since they are assumed to be much shorter than the compression and expansion times~\cite{PhysRevLett.112.030602,PhysRevE.98.032121,PhysRevE.99.022110,PhysRevE.99.032108,e18050168}. However, this assumption requires the heat transport rates to be large, a circumstance for which it has been shown that optimal power production occurs when the cycle times are vanishingly small~\cite{10.1119/1.18197,Rezek_2006,Kosloff2017TheQH}.\\ Another line of reasoning used to justify small thermalization times in the isochoric process is that, for the frequency to stay constant (hence, the Hamiltonian to be static), the duration of the system-bath interaction must be shorter than the duration of the isentropic process.
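For the dissipator in Eq.~\eqref{lindblad.diss}, the mean excitation number obeys the closed equation $\frac{d}{dt}\langle\hat{n}\rangle = k_\uparrow - \Gamma\langle\hat{n}\rangle$, relaxing exponentially at rate $\Gamma$ toward $\bar{n} = k_\uparrow/\Gamma$. A quick Euler-integration check of this implied relaxation (rates and initial value arbitrary):

```python
import numpy as np

k_up, k_down = 0.5, 1.5   # heat conductance rates (hypothetical values)
Gamma = k_down - k_up     # heat transport rate
n_bar = k_up / Gamma      # thermal fixed point of <n>

dt, T = 1e-4, 10.0        # Euler step and integration horizon
n0, n = 4.0, 4.0          # initial mean excitation number
for _ in range(int(T / dt)):   # forward Euler for d<n>/dt = k_up - Gamma*n
    n += dt * (k_up - Gamma * n)

n_exact = n_bar + (n0 - n_bar) * np.exp(-Gamma * T)  # closed-form solution
```

The numerical trajectory matches the closed-form exponential relaxation to the thermal value $\bar{n}$, illustrating why long isochores drive the working medium to equilibrium with the bath.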
Meanwhile, assuming that there exists a considerable energy exchange between the system and the bath or that the system equilibrates to the temperature of the bath, controlling the interaction time is equivalent to adjusting the system-bath coupling. However, a large system-bath coupling also implies a large heat transport rate. Additionally, in a thermalization map which follows a GKLS-type master equation, there is an implicit assumption that the thermalization time (which scales as the inverse of the system-bath interaction energy) is longer than the relaxation timescale of the bath and that of the unperturbed dynamics of the system~\cite{1979}. Therefore, for a realistic and consistent thermodynamic analysis of a quantum heat engine in its full generality, it is important to take into account the effect of thermalization times without imposing additional assumptions.\\ In order to see the effect of the duration of isochoric strokes, we take the total cycle time to be $\tau_{tot} = \tau_{12} + \tau_{23} + \tau_{34} + \tau_{41}$. We assume that $\tau_{adi}=\tau_{12} = \tau_{34}$ and $\tau_{iso}=\tau_{23} = \tau_{41}$. The engine power can be expressed as, \begin{equation}\label{engine.power} P = -\frac{\left\langle W_{12}\right\rangle + \left\langle W_{34}\right\rangle}{\tau_{\text{tot}}} \end{equation} in which $\tau_{tot}=2\tau_{adi}+2\tau_{iso}$. For an ideal engine, the efficiency becomes \begin{equation}\label{engine.efficiency} \eta = -\frac{\left\langle W_{12}\right\rangle + \left\langle W_{34}\right\rangle}{\left\langle Q_{23}\right\rangle}. \end{equation} Since the ideal adiabatic and isochoric strokes are quasi-static, upon implementing a full cycle in finite time the state of the working medium doesn't precisely return to the initial state from which the cycle started.
Therefore, for a complete study of a finite-time quantum heat engine, a limit-cycle analysis has to be carried out and the thermodynamic behavior of the engine in this mode of operation needs to be addressed~\cite{Kosloff2017TheQH,PhysRevE.97.062153,PhysRevE.70.046110}.\\ The existence of a limit cycle for a quantum heat engine can be argued in very general terms. A general quantum process is described by a completely positive trace preserving (CPTP) map. The quantum channel for the entire cycle is a composition of four CPTP maps, \begin{equation}\label{cyclemap} \varepsilon(\hat{\rho})= \varepsilon_{4\rightarrow1}\circ\varepsilon_{3\rightarrow4}\circ\varepsilon_{2\rightarrow3}\circ\varepsilon_{1\rightarrow2}(\hat{\rho}) \end{equation} and is therefore itself CPTP. By the Brouwer fixed-point theorem, every such channel has at least one density-operator fixed point $\hat{\rho}^*$ such that~\cite{watrous_2018} \begin{equation}\label{limitcyc} \varepsilon(\hat{\rho}^*) = \hat{\rho}^*. \end{equation} Lindblad showed that the relative entropy (KL-divergence) between a state and a reference state is contractive under CPTP maps~\cite{Lindblad1975}. Later, Petz proved that a variety of metrics (including the Bures metric) are monotone (contractive) under CPTP maps~\cite{PETZ199681}. These theorems can be expressed mathematically as \begin{eqnarray}\label{monotone} D_{\text{KL}}(\varepsilon(\hat{\rho})||\varepsilon(\hat{\rho}_{\text{ref}})) \leq D_{\text{KL}}(\hat{\rho}||\hat{\rho}_{\text{ref}}),\\ D(\varepsilon(\hat{\rho}),\varepsilon(\hat{\rho}_{\text{ref}})) \leq D(\hat{\rho},\hat{\rho}_{\text{ref}}). \end{eqnarray} Here, $D_{\text{KL}}$ is the relative entropy (which is not a metric but a divergence function) and $D$ is a metric on the statistical manifold of the density operators. According to these theorems, the two states become less and less distinguishable upon successive application of the map $\varepsilon$.
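The contractivity expressed in \cref{monotone} can be illustrated on a single qubit. The sketch below (pure Python, with a hypothetical amplitude-damping channel as the CPTP map) builds the channel from its Kraus operators and verifies that the trace distance between two states does not increase:

```python
from math import sqrt

# Amplitude-damping channel with decay probability p, given by the Kraus
# operators K0 = [[1,0],[0,sqrt(1-p)]] and K1 = [[0,sqrt(p)],[0,0]].
p = 0.3
K0 = [[1.0, 0.0], [0.0, sqrt(1 - p)]]
K1 = [[0.0, sqrt(p)], [0.0, 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # Conjugate transpose; all entries here are real, so a transpose suffices.
    return [[A[j][i] for j in range(2)] for i in range(2)]

def channel(rho):
    out = [[0.0, 0.0], [0.0, 0.0]]
    for K in (K0, K1):
        KrK = mat_mul(mat_mul(K, rho), dagger(K))
        out = [[out[i][j] + KrK[i][j] for j in range(2)] for i in range(2)]
    return out

def trace_distance(r1, r2):
    # For a 2x2 Hermitian traceless difference [[a,b],[b,-a]] the
    # eigenvalues are +/- sqrt(a^2 + b^2), so D = sqrt(a^2 + b^2).
    a = r1[0][0] - r2[0][0]
    b = r1[0][1] - r2[0][1]
    return sqrt(a * a + b * b)

rho_plus = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|
rho_zero = [[1.0, 0.0], [0.0, 0.0]]   # |0><0|

D_before = trace_distance(rho_plus, rho_zero)
D_after = trace_distance(channel(rho_plus), channel(rho_zero))
assert D_after <= D_before + 1e-12    # contractivity under the CPTP map
```

Iterating `channel` plays the role of repeating the cycle map $\varepsilon$: the distance to the fixed point can only shrink.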
If the fixed point of the CPTP map is unique, the density operator will monotonically approach the steady-state via the dynamical map~\cite{Frigerio1977}. In this case, if we take $\hat{\rho}_{\text{ref}}=\hat{\rho}^*$,~\cref{limitcyc} and~\cref{monotone} imply that upon successive application of the quantum channel, the system monotonically approaches the fixed point. \section{STA by CD-driving} \label{sec:sta} In strokes $1\rightarrow 2$ and $3\rightarrow 4$, the quantum system is isolated and work is performed by changing the frequency between $\omega_c$ and $\omega_h$, which are fixed as design parameters. We now consider a counter-diabatic drive on the system by means of a control Hamiltonian added to $\hat{H}_0$, i.e., \begin{equation}\label{cd.ham.sta} \hat{H}_{CD}(t) = \hat{H}_0(t) + \hat{H}_1(t) \end{equation} such that the resulting dynamics given by \begin{equation}\label{eq.mo.sta} \partial_t{\hat{\rho}} = -\frac{i}{\hbar}[\hat{H}_{CD},\hat{\rho}] \end{equation} reproduces the adiabatic evolution under $\hat{H}_0$ alone. For a closed system, the general form of $\hat{H}_1$ reads~\cite{Berry_2009} \begin{equation}\label{cd.drive.sta} \hat{H}_1(t) = i\hbar \sum_n (\ket{\partial_t n}\bra{n} - \braket{n}{\partial_t n}\ket{n}\bra{n}) \end{equation} where $\ket{n}$ is the $n$-th eigenstate of $\hat{H}_0(t)$. This STA protocol requires that $\langle \hat{H}_1(0)\rangle = \langle \hat{H}_1(\tau)\rangle = 0$. Hence, the frequency must satisfy the following initial and final conditions. \begin{align}\label{bc.sta} \omega_t(0)=\omega_i,\quad \dot{\omega}_t(0)=0, \quad \ddot{\omega}_t(0)=0,\nonumber \\ \omega_t(\tau)=\omega_f,\quad \dot{\omega}_t(\tau)=0, \quad \ddot{\omega}_t(\tau)=0. \end{align} An interpolating ansatz that satisfies these boundary conditions is~\cite{PhysRevX.4.021013}, \begin{equation}\label{freq.ansazt.sta} \omega_t = \omega_i + (\omega_f - \omega_i)[10s^3-15s^4+6s^5] \end{equation} in which $s=t/\tau_{adi}$.
For a time-dependent QHO, $\hat{H}_1$ in~\cref{cd.drive.sta} can be expressed as~\cite{Muga_2010} \begin{equation}\label{qho.h1.sta} \hat{H}_1 = -\frac{\dot{\omega}_t}{4\omega_t}(\hat{x}\hat{p}+\hat{p}\hat{x}). \end{equation} The total Hamiltonian governing the time evolution of the STA-controlled QHO becomes \begin{equation}\label{qho.hcd.sta} \hat{H}_{CD} = \frac{\hat{p}^2}{2m}+\frac{m\omega_t^2\hat{x}^2}{2} -\frac{\dot{\omega}_t}{4\omega_t} (\hat{x}\hat{p}+\hat{p}\hat{x}). \end{equation} Hence, for a time-dependent QHO, the total Hamiltonian is quadratic in $\hat{x}$ and $\hat{p}$ and can be considered a generalized QHO~\cite{PhysRevE.96.012133}, \begin{equation}\label{eff.hamiltonian.cd} \hat{H}_{CD} = \hbar \Omega_t(\hat{b}_t^{\dagger}\hat{b}_t + 1/2), \end{equation} with an effective frequency $\Omega_t$ and the ladder operator $\hat{b}_t$ given by \begin{eqnarray}\label{eff.freq.lad} \Omega_t &=& \omega_t\sqrt{1-\dot{\omega}_t^2/4\omega_t^4}; \\ \hat{b}_t &=& \sqrt{\frac{m\Omega_t}{2\hbar}}(\zeta_t \hat{x} + \frac{i\hat{p}}{m\Omega_t}), \end{eqnarray} where the parameter $\zeta_t$ is specified in the next section. The work done during this process is given by \begin{equation}\label{work.adi} W = \int_{0}^{\tau_{adi}}\text{Tr}(\dot{\hat{H}}_0(t) \hat{\rho}(t))\, dt. \end{equation} The expectation value of the CD-driving term is~\cite{PhysRevE.98.032121}, \begin{multline}\label{exp.val.cd} \left\langle \hat{H}_1(t) \right\rangle = \left\langle \hat{H}_{CD}(t) \right\rangle - \left\langle \hat{H}_0(t) \right\rangle= \\ \frac{\omega_t}{\omega_i} \left\langle \hat{H}_0(t) \right\rangle \left(\frac{\omega_t}{\Omega_t}-1\right). \end{multline} In order to obtain an accurate thermodynamic description of this controlled process, one should also keep track of the energy cost of CD-driving, which can be quantified as the time average of $\left\langle \hat{H}_1(t) \right\rangle$~\cite{PhysRevE.98.032121}.
\begin{equation}\label{sta.cost} C_{\text{STA}} = \frac{1}{\tau_{adi}}\int_{0}^{\tau_{\text{adi}}} \left\langle \hat{H}_1(t) \right\rangle dt. \end{equation} \section{STE by CD-driving} \label{sec:stt} For open quantum systems, the eigenvalues of the density matrix are time-dependent. The time evolution of the density matrix can be written in terms of changes in its eigenprojectors and changes in its eigenvalues. Such a trajectory map can be written as \begin{equation}\label{eq.mo.ste} \dot{\hat{\rho}} = -\frac{i}{\hbar}[\hat{H}_{CD},\hat{\rho}] + \sum_{n}\partial_t \lambda_n(t)\ket{n_t}\bra{n_t}. \end{equation} It is known that (after a modification to make it norm preserving) we can cast \cref{eq.mo.ste} into a Lindblad-like form~\cite{Alipour2020shortcutsto}, \begin{equation}\label{lind.like.ste} \partial_t \tilde{\rho} = \sum_{mn}\gamma_{mn}(\tilde{L}_{mn} \tilde{\rho} \tilde{L}_{mn}^{\dagger} - \frac{1}{2}\{\tilde{L}_{mn}^{\dagger} \tilde{L}_{mn}, \tilde{\rho}\}), \end{equation} in which $\tilde{L}_{mn}$ and $\gamma_{mn}$ are the jump operators and the dissipation rates in the Lindblad-like master equation for a unitarily equivalent trajectory $\tilde{\rho}$. In~\cite{Alipour2020shortcutsto} it is shown that, using the CD-driven STE framework, one can achieve fast thermalization of a time-dependent QHO from an initial to a final thermal state. For an open and time-dependent QHO, we can write the trajectory map for the unitarily transformed density matrix $\tilde{\rho}=\hat{U}_x \hat{\rho} \hat{U}_x^{\dagger}$ with $\hat{U}_x = \exp(im\alpha_t \hat{x}^2/2\hbar)$ as~\cite{Alipour2020shortcutsto}, \begin{equation}\label{qho.ste.master} \partial_t \tilde{\rho} = \frac{-i}{\hbar} \left[\frac{\hat{p}^2}{2m} + \frac{1}{2} m\tilde{\omega}_{CD}^2 \hat{x}^2, \tilde{\rho}\right] - \gamma_t \left[\hat{x}, \left[\hat{x}, \tilde{\rho}\right]\right], \end{equation} in which $\tilde{\omega}_{CD}$ is the effective frequency and $\gamma_t$ is the effective dissipation rate.
Such a procedure for shortcut to thermalization can be implemented by modulation of the driving frequency and the dephasing strength~\cite{Alipour2020shortcutsto,PhysRevResearch.2.033178}. For~\cref{qho.ste.master} we have~\cite{Alipour2020shortcutsto}, \begin{eqnarray}\label{qho.ste.param} \tilde{\omega}_{CD}^2 &=& \omega_t^2 - \alpha_t^2 - \dot{\alpha}_t; \\ \gamma_t &=& \frac{m\omega_t}{\hbar}\frac{\dot{u}_t}{(1-u_t)^2}; \\ u_t &=& e^{-\beta_t \hbar \omega_t}; \\ \alpha_t &=& \zeta_t - \dot{\omega}_t/2\omega_t; \\ \zeta_t &=& -\frac{\dot{\omega}_t}{2\omega_t} + \frac{\dot{u}_t}{1-u_t^2}. \end{eqnarray} Here, $\beta_t$ is the inverse temperature at time $t$. Similar to the STA case, shortcut to equilibrium is achieved for a general setup by requiring $\omega_t$ to follow \cref{freq.ansazt.sta} (with $s=t/\tau_{iso}$) and $\beta_t$ to conform to the ansatz~\cite{Alipour2020shortcutsto}, \begin{equation}\label{beta.ansazt.ste} \beta_t = \beta_i + (\beta_f - \beta_i)[10s^3-15s^4+6s^5], \end{equation} in which $s=t/\tau_{iso}$. For this protocol, entropy-based definitions of the infinitesimal heat and work are given in~\cite{PhysRevA.105.L040201}: \begin{align} {\mathchar'26\mkern-12mu d}\mathcal{Q} &= {\mathchar'26\mkern-12mu d} Q - {\mathchar'26\mkern-12mu d} W_{CD} = \text{Tr}\left[\tilde{\mathcal{D}}_{CD}(\tilde{\rho}) \hat{H}_0 \right]dt;\label{heat.ste}\\ {\mathchar'26\mkern-12mu d}\mathcal{W} &= {\mathchar'26\mkern-12mu d} W + {\mathchar'26\mkern-12mu d} W_{CD} = \text{Tr}\left[\tilde{\rho}\dot{\hat{H}}_0-i\left[\hat{H}_0,\tilde{H}_{CD} \right] \right]dt\label{work.ste} \end{align} in which $\tilde{\mathcal{D}}_{CD}=- \gamma_t \left[\hat{x}, \left[\hat{x}, \tilde{\rho}\right]\right]$ is the transformed dissipator contributing to the incoherent evolution in~\cref{qho.ste.master}.
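The behavior of the effective dissipation rate in \cref{qho.ste.param} can be sketched for a constant-frequency isochore, as used below for the hot stroke. In the sketch the frequency is held at $\omega_h$ and $\beta_t$ is ramped between example endpoint temperatures (assumed values; $\hbar=m=1$); during heating $\beta_t$ decreases, so $u_t$ grows and $\gamma_t$ remains non-negative, vanishing at the endpoints where the ramp is flat:

```python
from math import exp

hbar = m = 1.0
w = 2.5                        # frequency held fixed at w_h during the hot isochore
beta_i, beta_f = 0.4, 0.1      # example: T ramps from (w_h/w_c)*T_c = 2.5 to T_h = 10
tau_iso = 2.0

def ramp(s):                   # same 10 s^3 - 15 s^4 + 6 s^5 interpolation
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def beta(t):
    return beta_i + (beta_f - beta_i) * ramp(t / tau_iso)

def gamma(t, dt=1e-6):
    # gamma_t = (m w / hbar) * du/dt / (1 - u)^2 with u = exp(-beta hbar w)
    u = exp(-beta(t) * hbar * w)
    du = (exp(-beta(t + dt) * hbar * w) - exp(-beta(t - dt) * hbar * w)) / (2 * dt)
    return (m * w / hbar) * du / (1 - u) ** 2

# During heating, beta decreases, so u grows and gamma_t stays non-negative.
ts = [tau_iso * k / 200 for k in range(1, 200)]
assert all(gamma(t) > -1e-9 for t in ts)
# The rate vanishes at the endpoints, where the ramp is flat.
assert abs(gamma(0.0)) < 1e-6 and abs(gamma(tau_iso)) < 1e-6
```

A cooling ramp ($\beta_f > \beta_i$) would flip the sign of $\dot{u}_t$, and with it the direction of the incoherent energy flow.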
${\mathchar'26\mkern-12mu d} W_{CD}=-i\text{Tr}[[\hat{H}_0,\tilde{H}_{CD}] \tilde{\rho}]dt$ is the environment-induced dissipative work, due to the CD evolution along the trajectory. ${\mathchar'26\mkern-12mu d}\mathcal{Q}$ and ${\mathchar'26\mkern-12mu d}\mathcal{W}$ are heat and work calculated in the instantaneous eigenbasis of $\tilde{\rho}$. In~\cref{heat.ste} and~\cref{work.ste}, ${\mathchar'26\mkern-12mu d} Q$ and ${\mathchar'26\mkern-12mu d} W$ signify the infinitesimal changes in the conventional definitions of heat $Q=\text{Tr}[\dot{\hat{\rho}}\hat{H}_0]$ and work $W=\text{Tr}[\hat{\rho}\dot{\hat{H}}_0]$, resulting from Spohn separation~\cite{10.1063/1.523789}, in which the coherent (dissipative) changes of the internal energy due to the master equation are phenomenologically labelled as work (heat).\\ However, it is argued in~\cite{PhysRevA.105.L040201} that for a general evolution of an open quantum system, the contribution of changes in the internal energy which one can label as heat must also coincide with the contribution associated with changes in the entropy. In general, Spohn separation doesn't necessarily respect this condition. As a resolution to this issue, the contribution to the changes in the internal energy associated with changes in the eigenvalues of the density matrix is called heat, and the contribution associated with changes in the eigenprojectors is called work. For the CD-driven harmonic oscillator driven to thermalization, these contributions take the form given in~\cref{heat.ste} and~\cref{work.ste} as discussed in~\cite{PhysRevA.105.L040201}. Therefore, for the driven system described in this section, the actual values for heat ($\mathcal{Q}$) and work ($\mathcal{W}$) during the process can be found by integrating~\cref{heat.ste} and~\cref{work.ste} along the trajectory.
Since during an isochoric process the Hamiltonian is static ($\dot{\hat{H}}_0=0$), we can define the integral of ${\mathchar'26\mkern-12mu d} W_{CD}$ along the trajectory $\mathcal{C}$ during this process as a cost function, \begin{equation}\label{ste.cost} C_{\text{STE}} = \int_{\mathcal{C}} {\mathchar'26\mkern-12mu d} W_{CD} = -i\int_{0}^{\tau_{\text{iso}}} \text{Tr}[[\hat{H}_0,\tilde{H}_{CD}] \tilde{\rho}]dt. \end{equation} This cost is not defined operationally; it quantifies the total environment-induced dissipative work throughout the isochoric process. However, any driving protocol must expend at least the amount of energy given in~\cref{ste.cost} to drive the system for the duration of the isochore. Therefore, it can be regarded as a lower bound on any operational cost that one can define for such a process. \section{Otto engine with CD-driven STA and STE} \label{sec:otto_sta_ste} Using the frameworks explained in the previous sections, we propose a CD-driven Otto engine with a QHO as a working medium. As depicted in \cref{otto.schm}, we boost the engine using CD-driven STA protocols for the expansion and compression strokes and a CD-driven STE protocol for the hot isochore. The cold isochore is modeled by a Markovian master equation and no control Hamiltonian is applied during this stroke.\\ Adiabatic evolution implies $\beta_f \omega_f = \beta_i \omega_i$. Hence, after an adiabatic driving for the compression (expansion) stroke, the evolution of the system leads to a density matrix that is close, in statistical distance, to the thermal density matrix at $\omega_2 = \omega_h$ and $T_2 = (\omega_h/\omega_c)\times T_c$ ($\omega_4 = \omega_c$ and $T_4 = (\omega_c/\omega_h)\times T_h$). We can quantify this statistical distance using fidelity. However, due to the non-commutativity of the control and system Hamiltonians, there will be quantum friction, which will manifest itself as coherence terms in the final density matrix.
Also, since for an isochoric process the frequency must be kept constant, we use $\omega_t=\omega_h$ throughout the CD-driven hot isochoric stroke, which means that in order to achieve STE we only consider modulation of the dephasing strength.\\ We distinguish between the thermodynamic efficiency ($\eta^{th}$) of the controlled engine, which doesn't take into account the control costs and is given by an expression similar to~\cref{engine.efficiency}, and the operational efficiency ($\eta^{op}$), which considers the energetic costs. The operational efficiency for the CD-driven Otto engine can be calculated by taking into account the costs for the STA and STE protocols. \begin{equation}\label{op.efficiecy} \eta^{op} = -\frac{W_{12} + W_{34}}{\mathcal{Q}_{23}+C_{adi}^{1\rightarrow 2} + C_{iso}^{2\rightarrow 3} + C_{adi}^{3\rightarrow 4}}. \end{equation} \begin{figure*} \caption{Power and efficiency of the Otto engine as a function of the duration of the isochoric and adiabatic branches. Top row: power output of the (a) UNA, (b) STA, and (c) ST{\AE} engines vs $\tau_{adi}$ and $\tau_{iso}$. Bottom row: (d)--(f) the corresponding efficiencies.} \label{pow_eff_all} \end{figure*} In~\cref{op.efficiecy}, $W_{12}$ and $W_{34}$ are calculated using~\cref{work.adi} and $\mathcal{Q}_{23}$ is calculated by integrating~\cref{heat.ste}. $C_{adi}^{1\rightarrow 2}$, $C_{iso}^{2\rightarrow 3}$ and $C_{adi}^{3\rightarrow 4}$ are the driving costs for the compression, hot isochore and expansion strokes, as defined in the previous sections. The power output is calculated using the expression in~\cref{engine.power} with $W_{12}$ and $W_{34}$ substituted into the equation. \section{Results} \label{sec:result} In this section, we compare the UNA (uncontrolled, with no CD-driving), STA, and ST{\AE} Otto engines in terms of power and efficiency. To this end, we choose $\omega_c = \omega_1 = \omega_4 = 1$ and $\omega_h = \omega_2 = \omega_3 = 2.5$. We assume the durations of the two adiabatic strokes to be identical ($\tau_{adi}$), and similarly for the two isochores ($\tau_{iso}$).
We vary $\tau_{adi}$ and $\tau_{iso}$ independently between 0.7 and 7.5. The temperatures of the cold and hot baths are taken to be $T_c=T_1=1$ and $T_h=T_3=10$, respectively. We further assume that the heat conductivities for the hot and cold isochores are equal, i.e., $\Gamma_h=\Gamma_c=0.22$. All numerical values are given in appropriate units (energy in units of $\hbar\omega_c$, temperature in units of $\hbar\omega_c/k_B$, etc.). The parameter ranges were chosen such that all relevant operational regimes were accessible in the analysis. Throughout this section, all the calculated efficiencies are operational except when directly stated otherwise. For convenience we drop the ``op'' superscript for the operational efficiency. We have used the open source package QuTiP for our calculations~\cite{qutip}.\\ \cref{pow_eff_all} compares both the power output of the three engines (top panel) and their efficiencies (bottom panel) as a function of the stroke durations, for a single cycle of operation starting from the thermal state ($\omega_c,T_c$), which is indicated as ``1'' in~\cref{otto.schm}. It is evident from~\cref{pow_eff_all}(a) that for small values of $\tau_{adi}$ and $\tau_{iso}$ the UNA engine results in a negative power output and hence does not behave as a proper heat engine. However, the STA (\cref{pow_eff_all}(b)) and ST{\AE} (\cref{pow_eff_all}(c)) engines produce a positive power output even for small stroke times. It should be noted that we have not considered stroke times smaller than 0.7 because, for the parameter regime that we chose, this results in trap inversion (with an infinite STA driving cost). The maximum power output of the ST{\AE} and STA engines occurs at the smallest stroke times; for the uncontrolled engine, however, the maximum power occurs at an intermediate time $\tau_{adi}=\tau_{iso}\approx 2.3$, after which the power output declines.
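The trap-inversion cutoff quoted above can be estimated from the effective frequency of \cref{eff.freq.lad}: $\Omega_t$ is real only while $\dot{\omega}_t^2 < 4\omega_t^4$ along the ramp. A sketch (example frequencies $\omega_c=1$, $\omega_h=2.5$ as above) scans this factor and places the inversion threshold for the adiabatic stroke slightly below $\tau_{adi}=0.7$, consistent with the cutoff used here:

```python
# Scan the positivity of the factor under the square root in
# Omega_t = omega_t * sqrt(1 - (d omega/dt)^2 / (4 omega_t^4));
# a negative factor signals trap inversion.
w_i, w_f = 1.0, 2.5

def omega(s):
    return w_i + (w_f - w_i) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def omega_dot(s, tau):
    # d omega / dt along the ramp, with s = t / tau
    return (w_f - w_i) * (30 * s**2 - 60 * s**3 + 30 * s**4) / tau

def min_factor(tau, n=2000):
    return min(1 - omega_dot(k / n, tau) ** 2 / (4 * omega(k / n) ** 4)
               for k in range(n + 1))

assert min_factor(0.7) > 0   # Omega_t stays real at the shortest stroke time used
assert min_factor(0.5) < 0   # a faster ramp inverts the trap
```

As $\tau_{adi}$ approaches the threshold from above, $\Omega_t \to 0$ somewhere along the ramp and the CD cost $\langle \hat{H}_1 \rangle \propto (\omega_t/\Omega_t - 1)$ diverges, which is the infinite STA driving cost mentioned in the text.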
The figures demonstrate that the maximum power is larger for the ST{\AE} engine than for the STA-only engine, which itself yields higher power than the finite-time engine with no CD-driving. When the cycle time is dominated by either the isochoric or the adiabatic processes, the power output of the uncontrolled engine is low; as expected, the STA engine yields a higher power output when the cycle time is dominated by the isochoric processes. The ST{\AE} engine, however, delivers relatively large power whether the cycle time is dominated by the isochoric or by the adiabatic processes. For large values of the cycle time, the difference between the power outputs of the three engines becomes negligible, as expected. \cref{pow_eff_all}(d) shows that in the parameter regime that we have considered, the UNA engine yields a negative value for the efficiency as the cycle time becomes very small. Hence, it doesn't act as a proper heat engine. Comparing~\cref{pow_eff_all}(d),~\cref{pow_eff_all}(e) and~\cref{pow_eff_all}(f) we see that for small values of $\tau_{adi}$, the efficiency is comparatively small for all three engines. However, the STA and ST{\AE} engines give larger values for the efficiency, with the latter displaying superior performance. As the cycle time becomes larger, the difference between the efficiencies of the three engines diminishes. But it is clear that the engines with CD-driving, specifically the ST{\AE} engine, have a larger efficiency over a larger subset of the cycle times considered in our calculations.
\begin{figure} \caption{Power and efficiency characteristics of the controlled and uncontrolled finite-time Otto engines for equal times allocated to the isochoric and adiabatic branches, $\tau_{iso}=\tau_{adi}=\tau$.} \label{power_efficiency_eqst} \end{figure} In order to see finer details of the performance metrics of the three engines for a subset of the cycle times considered in~\cref{pow_eff_all}, in~\cref{power_efficiency_eqst} the efficiency and power of the engines are presented for $\tau_{adi}=\tau_{iso}=\tau$. In~\cref{power_efficiency_eqst}(a) we see a dramatic difference between the power output of the ST{\AE} engine and that of the other two engines considered in this study, especially for short cycle times. This figure shows a clear advantage of using CD-driving for thermalization and adiabaticity to enhance the power output. From~\cref{power_efficiency_eqst}(b) it is clear that for equal times allocated to the isochoric and adiabatic branches, the largest to smallest efficiencies are obtained for the ST{\AE} engine, the STA engine and the UNA engine, respectively, for smaller cycle times. This demonstrates that even considering the energetic costs spent on driving the working medium to adiabaticity and thermalization, the ST{\AE} engine is superior in terms of efficiency to both the UNA and STA engines. For larger cycle times the efficiencies of the three engines approach the ideal value of $1-\omega_c/\omega_h=0.6$. Also, for the entirety of the cycle times considered in this study, the efficiency of all three engines is below the Carnot efficiency $1-T_c/T_h=0.9$ and the Curzon-Ahlborn (CA) efficiency $1-\sqrt{T_c/T_h}\approx0.68$.
It is also worth pointing out that without considering the costs, the thermodynamic efficiency of the ST{\AE} engine equals the ideal value throughout the cycle times considered in our calculations.\\ \begin{figure} \caption{Cost of the STA and STE driving for one cycle of the ST{\AE} engine.} \label{costs_alltime} \end{figure} In~\cref{costs_alltime} we can see the control costs for the ST{\AE} engine. As expected,~\cref{costs_alltime}(a) and~\cref{costs_alltime}(c) show that the energetic cost of CD-driving for adiabaticity decreases for longer $\tau_{adi}$, whereas in~\cref{costs_alltime}(b) we can see that the cost of driving the system to thermalization shrinks as we increase $\tau_{iso}$. This fact again verifies that the control cost given in~\cref{ste.cost} is indeed a suitable cost function for our purposes. Comparing the maxima of the costs of the isochoric and adiabatic strokes reveals that this value is lower for thermalization than for expansion, which in turn is lower than for compression.\\ \begin{figure*} \caption{Power and efficiency of the Otto engine at its limit cycle as a function of the duration of the isochoric and adiabatic branches. Top row: limit-cycle power output of the (a) UNA, (b) STA, and (c) ST{\AE} engines vs $\tau_{adi}$ and $\tau_{iso}$. Bottom row: (d)--(f) the corresponding efficiencies.} \label{lc_pow_eff_all} \end{figure*} Next, we move on to the performance metrics of the engines in their respective limit cycles. Our numerical calculations show that all three engines reach a limit cycle throughout the cycle times that we have considered in this study. One has to note that in the limit cycle the state of the working medium at the end of each stroke does not necessarily coincide with the states of the ideal Otto cycle represented in~\cref{otto.schm}, unless the times allocated to each stroke are very large. Calculating the fidelity between the states of the working medium after successive repetitions of the cycle, we observed that 7 cycles are enough to converge to the limit cycle of the engines for all the cycle times.
In~\cref{lc_pow_eff_all} the results for the power and efficiency of the engines are shown.\\ From~\cref{lc_pow_eff_all}(a) we can observe that the UNA engine yields a negative power output for a considerably large subset of the allocated times for the adiabatic and isochoric strokes. Comparing~\cref{pow_eff_all}(a) and~\cref{lc_pow_eff_all}(a), we see that the uncontrolled engine is much more unreliable in its limit cycle than when we only consider one cycle of operation. In contrast,~\cref{lc_pow_eff_all}(b) and~\cref{lc_pow_eff_all}(c) show that the STA and ST{\AE} engines give a positive power output in their limit cycles for all $\tau_{adi}$ and $\tau_{iso}$. Moreover, in~\cref{lc_pow_eff_all}(a) we see that in its limit cycle, the UNA engine yields vanishingly small power output for cycle times dominated by isochoric or adiabatic processes. On the other hand,~\cref{lc_pow_eff_all}(b) shows that the power output of the STA engine gets vanishingly small only for cycle times dominated by adiabatic processes. However,~\cref{lc_pow_eff_all}(c) shows a clear advantage of using the ST{\AE} engine, whose power output is much larger compared to the UNA and STA engines, especially when we consider cycle times dominated by the isochoric or adiabatic processes. The figure shows that, similar to the single-cycle case, for the ST{\AE} engine in its limit cycle the power output is higher for smaller values of $\tau_{adi}$ and $\tau_{iso}$.\\ \begin{figure} \caption{Limit-cycle power and efficiency characteristics of the controlled and uncontrolled finite-time Otto engines for equal times allocated to the isochoric and adiabatic branches, $\tau_{iso}=\tau_{adi}=\tau$.} \label{lc_power_efficiency_eqst} \end{figure} The grey areas in~\cref{lc_pow_eff_all}(d) show that for a considerable portion of the considered times allocated to the isochoric and adiabatic strokes, the uncontrolled engine doesn't act as a proper heat engine.
In its limit cycle, the efficiency of the UNA engine is low compared to the ST{\AE} and STA engines except when $\tau_{iso}$ and $\tau_{adi}$ are both large. In this case the expansion and compression strokes approach their adiabatic evolution, and at the end of the isochoric strokes the fidelity between the density matrix of the working medium and the ideal thermal state at the temperature of the respective bath is almost unity. Comparing~\cref{lc_pow_eff_all}(e) and~\cref{lc_pow_eff_all}(f) we see that, surprisingly, for cycles with small $\tau_{iso}$, the efficiency of the ST{\AE} engine is smaller than that of the STA engine except when $\tau_{adi}$ is also small. This does not mean, however, that the STA-only engine is superior to the ST{\AE} engine for the mentioned cycle-time allocations, because in this case, as seen in~\cref{lc_pow_eff_all}(b), the power output of the STA-only engine becomes much smaller than that of the ST{\AE} engine. We can also clearly see that the maximum value attained for the efficiency is larger for the ST{\AE} engine than for the STA engine when we consider the entirety of the parameter regime. An interesting aspect of the ST{\AE} protocol limit cycles is that the performance is non-monotonic in $\tau_{adi}$. As a result, high-power/high-efficiency islands appear, such as the one observed around $\tau_{adi}=\tau_{iso}\approx 1.6$ in \cref{lc_pow_eff_all}(f).\\ \begin{figure} \caption{Cost of the STA and STE driving for the limit cycle of the ST{\AE} engine.} \label{lc_costs_alltime} \end{figure} In order to make a more detailed comparison of the performance metrics of the engines for a subset of the stroke times considered in this study, in~\cref{lc_power_efficiency_eqst} we present the efficiency and power of the heat engines for $\tau_{adi}=\tau_{iso}=\tau$. In~\cref{lc_power_efficiency_eqst}(a) we can see that the ST{\AE} engine yields a much higher power output than the STA and UNA engines.
Comparing~\cref{power_efficiency_eqst}(a) and~\cref{lc_power_efficiency_eqst}(a) we see that in their limit cycles, the power of the STA and UNA engines is reduced, compared with the single-cycle case, throughout the cycle times. However, for the ST{\AE} engine the power output in the limit cycle is higher than that of the single cycle for the entirety of the cycle times.\\ As shown in~\cref{lc_power_efficiency_eqst}(b), in its limit cycle, the efficiency of the ST{\AE} engine is no longer monotonic in the cycle time. The difference between the efficiencies of the uncontrolled engine and the controlled engines is more dramatic in their limit cycles, and their efficiencies do not approach the ideal value of $1-\omega_c/\omega_h=0.6$ unless the cycle time is very large. The results show that the ST{\AE} engine has higher efficiency than the STA engine, especially for smaller cycle times. In~\cref{lc_costs_alltime} the costs for CD-driving at the limit cycle of the ST{\AE} engine are presented. Similar to the single-cycle case, we can see that the costs of the hot isochoric and adiabatic strokes are highest for short $\tau_{iso}$ and $\tau_{adi}$, respectively. However, for short cycle times in the limit cycle, the share of the thermalization cost in the total control cost is the highest, as opposed to the single-cycle case. Again, as $\tau_{adi}$ and $\tau_{iso}$ are increased, the control costs of the adiabatic and hot isochoric strokes decrease. However, for small $\tau_{adi}$ in the limit cycle, the cost of CD-driving during the expansion and compression strokes becomes relatively small for larger $\tau_{iso}$, as opposed to what we see for a single cycle.
\section{Conclusion} \label{sec:conclusion} In conclusion, we studied the thermodynamic performance of a quantum harmonic Otto engine in finite time and compared three cases: an Otto engine with no CD-driving (UNA), an Otto engine with STA on the adiabatic strokes, and an Otto engine with STA on the adiabatic strokes and STE on the hot isochoric branch (ST{\AE}). We have included the costs of driving the engine in the calculation of the thermodynamic figures of merit. In our calculations we have considered two modes of operation: first we studied the thermodynamic performance criteria for a single cycle, and then we calculated the same figures of merit for the engines running in their respective limit cycles.\\ Our results for a single cycle indicate that controlling the hot thermalization process in the Otto engine greatly increases the power output of the engine, especially for short cycle times. Also, even taking the control costs into account, the ST{\AE} engine yields a higher efficiency than the STA and UNA engines. For a single cycle starting from the thermal state of the cold bath, the control cost of thermalization is lower than that of the compression and expansion strokes. In total, the control costs decline as we increase the time allocated to the isochoric and adiabatic branches. Our calculations show that for the UNA engine, the parameter regime of stroke durations for which the power output and efficiency are negative greatly expands in its limit cycle as compared to a single cycle of operation. For the CD-driven engines (both STA and ST{\AE}), the power output remains positive throughout the stroke times considered in this study.\\ Our numerical calculations show that in the limit cycle, the efficiency of the ST{\AE} engine is not necessarily monotonic in the cycle time, and that, unlike for the STA and UNA engines, its power output increases in the limit cycle compared to the single-cycle case.
In its limit cycle, the thermalization cost dominates the control costs of the ST{\AE} engine, especially for small times allocated to the isochoric branches. However, similar to the single-cycle case, the costs decline as the cycle time is increased. In addition to illuminating the fundamental potential and energetic cost limitations of finite-time quantum heat engines with shortcuts to adiabaticity and equilibration, our results can be significant for their practical implementations with optimal power and efficiency. \acknowledgements We gratefully acknowledge M.~T.~Naseem, B.~Çakmak, and O.~Pusuluk for fruitful discussions. \nocite{*} \input{STA_STE_Otto.bbl} \end{document}
\begin{document} \title{A Single Approach to Decide Chase Termination on Linear Existential Rules} \begin{abstract} Existential rules, long known as tuple-generating dependencies in database theory, have been intensively studied in the last decade as a powerful formalism to represent ontological knowledge in the context of ontology-based query answering. A knowledge base is then composed of an instance that contains incomplete data and a set of existential rules, and answers to queries are logically entailed from the knowledge base. This has brought back to light the fundamental chase tool, together with the different variants of it that have been proposed in the literature. It is well-known that the problem of determining, given a chase variant and a set of existential rules, whether the chase will halt on every instance, is undecidable. Hence, a crucial issue is whether it becomes decidable for known subclasses of existential rules. In this work, we consider linear existential rules, a simple yet important subclass of existential rules that generalizes inclusion dependencies. We show the decidability of the \emph{all instance} chase termination problem on linear rules for three main chase variants, namely the \emph{semi-oblivious}, \emph{restricted} and \emph{core} chase. To obtain these results, we introduce a novel approach based on so-called derivation trees and a single notion of forbidden pattern. Besides the theoretical interest of a unified approach and new proofs, we provide the first positive decidability results concerning the termination of the restricted chase, proving that chase termination on linear existential rules is decidable for both versions of the problem: Does \emph{every} fair chase sequence terminate? Does \emph{some} fair chase sequence terminate?
\end{abstract} \section{Introduction} The chase procedure is a fundamental tool for solving many issues involving tuple-generating dependencies, such as data integration \cite{DBLP:conf/pods/Lenzerini02}, data-exchange \cite{DBLP:journals/tcs/FaginKMP05}, query answering using views \cite{DBLP:journals/vldb/Halevy01} or query answering on probabilistic databases \cite{DBLP:conf/icde/OlteanuHK09}. In the last decade, tuple-generating dependencies raised a renewed interest under the name of \emph{existential rules} for the problem known as ontology-based query answering. In this context, the aim is to query a knowledge base $(\ensuremath{I}, \ensuremath{\Sigma})$, where $\ensuremath{I}$ is an instance and $\ensuremath{\Sigma}$ is a set of existential rules (see e.g. the survey chapters \cite{DBLP:books/sp/virgilio09/CaliGL09,DBLP:conf/rweb/MugnierT14}). In more classical database terms, this problem can be recast as querying an instance $\ensuremath{I}$ under incomplete data assumption, provided with a set of constraints $\ensuremath{\Sigma}$, which are tuple-generating dependencies. The chase is a fundamental tool to solve dependency-related problems as it allows one to compute a (possibly infinite) \emph{universal model} of $(\ensuremath{I}, \ensuremath{\Sigma})$, \emph{i.e.}, a model that can be homomorphically mapped to any other model of $(\ensuremath{I}, \ensuremath{\Sigma})$. Hence, the answers to a conjunctive query (and more generally to any kind of query closed by homomorphism) over $(\ensuremath{I}, \ensuremath{\Sigma})$ can be defined by considering solely this universal model. Several variants of the chase have been introduced, and we focus in this paper on the main ones: semi-oblivious \cite{DBLP:conf/pods/Marnette09} (aka skolem \cite{DBLP:conf/pods/Marnette09}), restricted \cite{DBLP:conf/icalp/BeeriV81,DBLP:journals/tcs/FaginKMP05} (aka standard \cite{phd/Onet12}) and core \cite{DBLP:conf/pods/DeutschNR08}. 
It is well known that all of these produce homomorphically equivalent results but terminate for increasingly larger subclasses of existential rules. Any chase variant starts from an instance and exhaustively performs a sequence of rule applications according to a redundancy criterion which characterizes the variant itself. The question of whether a chase variant terminates on \emph{all instances} for a given set of existential rules is known to be undecidable when there is no restriction on the kind of rules \cite{DBLP:journals/ai/BagetLMS11,DBLP:conf/icalp/GogaczM14}. A number of \emph{sufficient} syntactic conditions for termination have been proposed in the literature for the semi-oblivious chase (see e.g. \cite{phd/Onet12,DBLP:journals/jair/GrauHKKMMW13,DBLP:phd/hal/Rocher16} for syntheses), as well as for the restricted chase \cite{DBLP:conf/ijcai/CarralDK17} (note that the latter paper also defines a sufficient condition for non-termination). However, only a few positive results exist regarding the termination of the chase on specific classes of rules. Decidability was shown for the semi-oblivious chase on guarded-based rules (linear rules, and their extension to (weakly-)guarded rules) \cite{DBLP:conf/pods/CalauttiGP15}. Decidability of the core chase termination on guarded rules for a fixed instance was shown in \cite{DBLP:conf/icdt/Hernich12}. In this work, we provide new insights on the chase termination problem for \emph{linear} existential rules, a simple yet important subclass of guarded existential rules, which generalizes inclusion dependencies \cite{DBLP:journals/tods/Fagin81} and practical ontological languages \cite{DBLP:journals/ws/CaliGL12}. Precisely, the question of whether a chase variant terminates on all instances for a set of linear existential rules is studied in two forms: \begin{itemize} \item does \emph{every} (fair) chase sequence terminate? \item does \emph{some} (fair) chase sequence terminate?
\end{itemize} It is well-known that these two questions have the same answer for the semi-oblivious and the core chase variants, but not for the restricted chase. Indeed, the latter may admit both terminating and non-terminating sequences over the same knowledge base. We show that the termination problem is decidable for linear existential rules, for both versions of the problem and all three chase variants. We study chase termination by exploiting in a novel way a graph structure, namely the \emph{derivation tree}, which was originally introduced to solve the ontology-based (conjunctive) query answering problem for the family of greedy bounded-treewidth sets of existential rules \cite{DBLP:conf/ijcai/BagetMRT11,DBLP:phd/hal/Thomazo13}, a class that generalizes guarded-based rules and in particular linear rules. We first use derivation trees to show the decidability of the termination problem for the semi-oblivious and restricted chase variants, and then generalize them to \emph{entailment trees} to show the decidability of termination for the core chase. For any chase variant we consider, we adopt the same high-level procedure: starting from a finite set of canonical instances (representative of all possible instances), we build a (set of) tree structures for each canonical instance, while forbidding the occurrence of a specific pattern, which we call an \emph{unbounded-path witness}. The built structures are finite thanks to this forbidden pattern, which allows us to decide whether the chase terminates on the associated canonical instance. By doing so, we obtain a uniform approach to the study of the termination of several chase variants, which we believe to be of theoretical interest per se. The derivation tree is moreover a simple structure, and the algorithms built on it are likely to lead to an effective implementation.
Let us also point out that our approach is constructive: if the chase terminates on a given instance, the algorithm that decides termination actually computes the result of the chase (or a superset of it in the case of the core chase); otherwise, it pinpoints a forbidden pattern responsible for non-termination. Besides providing new theoretical tools to study chase termination, we obtain the following results for linear existential rules: \begin{itemize} \item a new proof of the decidability of the semi-oblivious chase termination, building on different objects than the previous proof provided in \cite{DBLP:conf/pods/CalauttiGP15}; we show that our algorithm provides the same complexity upper bound; \item the decidability of the restricted chase termination, for both versions of the problem, i.e., termination of all (fair) chase sequences and termination of some (fair) chase sequence; to the best of our knowledge, these are the first positive results on the decidability of the restricted chase termination; \item a new proof of the decidability of the core chase termination, based on different objects than the previous work reported in \cite{DBLP:conf/icdt/Hernich12}; although this latter paper solves the question of the core chase termination for a \emph{single} instance, its results actually allow one to infer the decidability of the \emph{all instance} version of the problem, by noticing that only a finite number of instances need to be considered (see the next section). \end{itemize} The paper is organized as follows. After introducing some preliminary notions (Section 2), we define the main components of our framework, namely derivation trees and unbounded-path witnesses (Section 3). We build on these objects to prove the decidability of the semi-oblivious and restricted chase termination (Section 4). Finally, we generalize derivation trees to entailment trees and use them to prove the decidability of the core chase termination (Section 5).
Detailed proofs are provided in the appendix. \section{Preliminaries} \label{section-preliminaries} We consider a logical \emph{vocabulary} composed of a finite set of predicates and an infinite set of constants. An \emph{atom} $\alpha$ has the form $\ensuremath{r}(t_1,\ldots, t_n)$ where $\ensuremath{r}$ is a predicate of arity $n$ and the $t_i$ are terms (i.e., variables or constants). We denote by $\ensuremath{t}s{\alpha}$ (resp. $\vars{\alpha}$) the set of terms (resp. variables) in $\alpha$ and extend these notations to sets of atoms. A \emph{ground} atom does not contain any variable. It is convenient to identify the existential closure of a conjunction of atoms with the set of these atoms. An \emph{instance} is a set of (not necessarily ground) atoms, which is finite unless otherwise specified. Abusing terminology, we will often see an instance as its isomorphic model. Given two sets of atoms $\ensuremath{S}$ and $\ensuremath{S}'$, a \emph{homomorphism} from $\ensuremath{S}'$ to $\ensuremath{S}$ is a substitution $\pi$ of $\vars{\ensuremath{S}'}$ by $\ensuremath{t}s{\ensuremath{S}}$ such that $\pi(\ensuremath{S}') \subseteq \ensuremath{S}$. It holds that $\ensuremath{S} \models \ensuremath{S}'$ (where $\models$ denotes classical logical entailment) iff there is a homomorphism from $\ensuremath{S}'$ to $\ensuremath{S}$. An endomorphism of $\ensuremath{S}$ is a homomorphism from $\ensuremath{S}$ to itself. A set of atoms is a \emph{core} if it admits only injective endomorphisms. Any finite set of atoms is logically equivalent to one of its subsets that is a core, and this core is unique up to isomorphism (i.e., bijective variable renaming).
Given sets of atoms $\ensuremath{S}$ and $\ensuremath{S}'$ such that $\ensuremath{S} \cap \ensuremath{S}' \neq \emptyset$, we say that $\ensuremath{S}$ \emph{folds} onto $\ensuremath{S}'$ if there is a homomorphism $\pi$ from $\ensuremath{S}$ to $\ensuremath{S}'$ such that $\pi$ is the identity on $\ensuremath{S} \cap \ensuremath{S}'$. The homomorphism $\pi$ is called a \emph{folding}. In particular, it is well-known that any set of atoms \emph{folds} onto its core. An existential rule (or simply \emph{rule}) is of the form $\ensuremath{\sigma} = \forall\vect{x}\forall\vect{y}.[\body{\vect{x},\vect{y}} \rightarrow \exists \vect{z}.\head{\vect{x},\vect{z}}]$ where $\body{\vect{x},\vect{y}}$ and $\head{\vect{x},\vect{z}}$ are non-empty conjunctions of atoms on variables, respectively called the \emph{body} and the \emph{head} of the rule, also denoted by $\body{\ensuremath{\sigma}}$ and $\head{\ensuremath{\sigma}}$, and $\vect{x}, \vect{y}$ and $\vect{z}$ are pairwise disjoint tuples of variables. The variables of $\vect z$ are called \emph{existential variables}. The variables of $\vect x$ form the \emph{frontier} of $\ensuremath{\sigma}$, which is also denoted by $\fr{\ensuremath{\sigma}}$. For brevity, we will omit universal quantifiers in the examples. A \emph{knowledge base} (KB) is of the form $\ensuremath{\mathcal{K}} = \ensuremath{\mathcal{K}}long$, where $\ensuremath{I}$ is an instance and $\ensuremath{\Sigma}$ is a finite set of existential rules. A rule $\ensuremath{\sigma} = \body{\ensuremath{\sigma}} \rightarrow \head{\ensuremath{\sigma}}$ is \emph{applicable} to an instance $\ensuremath{I}$ if there is a homomorphism $\pi$ from $ \body{\ensuremath{\sigma}} $ to $\ensuremath{I}$. The pair $(\ensuremath{\sigma}, \pi)$ is called a \emph{trigger} for $\ensuremath{I}$. 
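The applicability test above boils down to a homomorphism check. The following sketch is our illustration, not part of the paper: it encodes atoms as tuples $(\mathit{pred}, t_1, \ldots, t_n)$, with the (hypothetical) convention that string terms starting with \texttt{?} are variables, and enumerates homomorphisms by brute force.

```python
from itertools import product

def is_var(t):
    # convention for this sketch only: variables are strings starting with '?'
    return isinstance(t, str) and t.startswith('?')

def homomorphisms(src, dst):
    """Enumerate substitutions pi of vars(src) by terms(dst) with pi(src) a
    subset of dst. Atoms are tuples (predicate, t1, ..., tn); brute force,
    exponential, meant only to mirror the definition of a homomorphism."""
    variables = sorted({t for atom in src for t in atom[1:] if is_var(t)})
    terms = sorted({t for atom in dst for t in atom[1:]}, key=str)
    for combo in product(terms, repeat=len(variables)):
        pi = dict(zip(variables, combo))
        image = {(a[0],) + tuple(pi.get(t, t) for t in a[1:]) for a in src}
        if image <= set(dst):
            yield pi

def applicable(body_atom, instance):
    """A linear rule with body atom b is applicable to I iff some homomorphism
    maps b into I; each such pi yields a trigger (sigma, pi)."""
    return next(homomorphisms([body_atom], instance), None)
```

For instance, `applicable(('p', '?x', '?y'), [('p', 'a', 'b')])` returns the substitution `{'?x': 'a', '?y': 'b'}`, i.e., the homomorphism part of a trigger, while `applicable(('p', '?x', '?x'), [('p', 'a', 'b')])` returns `None`.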
The result of the application of $\ensuremath{\sigma}$ according to $\pi$ on $\ensuremath{I}$ is the instance $\ensuremath{I}' = \ensuremath{I} \cup \ensuremath{\pi^s}(\head{\ensuremath{\sigma}})$, where $\ensuremath{\pi^s}$ (here $s$ stands for \emph{safe}) extends $\pi$ by assigning a distinct fresh variable (also called a \emph{null}) to each existential variable. We also say that $\ensuremath{I}'$ is obtained by \emph{firing} the trigger $(\ensuremath{\sigma}, \pi)$ on $\ensuremath{I}$. By $\pi_{\mid\fr{\ensuremath{\sigma}}}$ we denote the restriction of $\pi$ to the domain $\fr{\ensuremath{\sigma}}$. \sloppy \begin{definition}[Derivation] A \emph{$\ensuremath{\Sigma}$-derivation} (or simply \emph{derivation} when $\ensuremath{\Sigma}$ is clear from the context) from an instance $\ensuremath{I} = I_0$ to an instance $I_n$ is a sequence $I_0, (\ensuremath{\sigma}_1,\pi_1), I_1 \ldots, I_{n-1}, (\ensuremath{\sigma}_n,\pi_n), I_n$, such that for all $1 \leq i \leq n$: $\ensuremath{\sigma}_i \in \ensuremath{\Sigma}$, $(\ensuremath{\sigma}_i,\pi_i)$ is a trigger for $I_{i-1}$, $I_i$ is obtained by firing $(\ensuremath{\sigma}_i,\pi_i)$ on $I_{i-1}$, and $I_i \neq I_{i-1}$. We may also denote this derivation by the associated sequence of instances $(I_0, \ldots, I_n)$ when the triggers are not needed. The notion of derivation can be naturally extended to an \emph{infinite} sequence. \end{definition} \fussy We briefly introduce below the main chase variants and refer to \cite{phd/Onet12} for a detailed presentation. The \emph{semi-oblivious} chase prevents several applications of the same rule through the same mapping of its frontier. 
Given a derivation from $I_0$ to $I_{i}$, a trigger $(\ensuremath{\sigma},\pi)$ for $I_i$ is said to be \emph{active according to the semi-oblivious criterion}, if there is no trigger $(\ensuremath{\sigma}_j,\pi_j)$ in the derivation with $\ensuremath{\sigma} = \ensuremath{\sigma}_j$ and $\pi_{\mid{\fr{\ensuremath{\sigma}}}} = \pi_{j_\mid{\fr{\ensuremath{\sigma}_j}}}$. The \emph{restricted} chase performs a rule application only if the added set of atoms is not redundant with respect to the current instance. Given a derivation from $I_0$ to $I_{i}$, a trigger $(\ensuremath{\sigma},\pi)$ for $I_i$ is said to be \emph{active according to the restricted criterion} if $\pi$ cannot be extended to a homomorphism from $(\body{\ensuremath{\sigma}}\cup\head{\ensuremath{\sigma}})$ to $\ensuremath{I}_{i}$ (equivalently, $ \pi^s(\head{\ensuremath{\sigma}})$ does not fold onto $\ensuremath{I}_{i}$). A \emph{semi-oblivious (resp. restricted) chase sequence} of $\ensuremath{I}$ with $\ensuremath{\Sigma}$ is a possibly infinite $\ensuremath{\Sigma}$-derivation from $\ensuremath{I}$ such that each trigger $(\ensuremath{\sigma}_i,\pi_i)$ in the derivation is active according to the semi-oblivious (resp. restricted) criterion. Furthermore, a (possibly infinite) chase sequence is required to be \emph{fair}, which means that a possible rule application is not indefinitely delayed. Formally, if some $I_i$ in the derivation admits an active trigger $(\ensuremath{\sigma},\pi)$, then there is $j > i$ such that, either $I_j$ is obtained by firing $(\ensuremath{\sigma},\pi)$ on $I_{j-1}$, or $(\ensuremath{\sigma},\pi)$ is not an active trigger anymore on $I_j$. A \emph{terminating} chase sequence is a finite fair sequence. In its original definition {\cite{DBLP:conf/pods/DeutschNR08}, the \emph{core} chase proceeds in a breadth-first manner, and, at each step, first fires in parallel all active triggers according to the restricted chase criterion, then computes the core of the result. 
Alternatively, to bring the definition of the core chase closer to the above definitions of the semi-oblivious and restricted chases,} one can define a \emph{core chase sequence} as a possibly infinite sequence $I_0, (\ensuremath{\sigma}_1,\pi_1), I_1, \ldots$, alternating instances and triggers, such that each instance $I_i$ is obtained from $I_{i-1}$ by first firing the active trigger $(\ensuremath{\sigma}_i, \pi_i)$ according to the restricted criterion, then computing the core of the result. An instance admits a terminating core chase sequence in that sense if and only if the core chase as originally defined terminates on that instance. For the three chase variants, fair chase sequences compute a (possibly infinite) \emph{universal model} of the KB, but only the core chase stops if and only if the KB has a \emph{finite} universal model. It is well-known that, for the semi-oblivious and the core chase, if there is a terminating chase sequence from an instance $I$ then all fair sequences from $I$ are terminating. This is not the case for the restricted chase, since the order in which rules are applied has an impact on termination, as illustrated by Example \ref{ex-intro}. \begin{example}\label{ex-intro} Let $\ensuremath{\Sigma} = \{\ensuremath{\sigma}_1, \ensuremath{\sigma}_2\}$, with $\ensuremath{\sigma}_1 = p(x,y) \rightarrow \exists z ~ p(y,z)$ and $\ensuremath{\sigma}_2 = p(x,y) \rightarrow p(y,y)$. Let $\ensuremath{I} = p(a,b)$. The KB $(\ensuremath{I}, \ensuremath{\Sigma})$ has a finite universal model, for example, $I^* = \{p(a,b), p(b,b)\}$. The semi-oblivious chase does not terminate on $\ensuremath{I}$ as $\ensuremath{\sigma}_1$ is applied indefinitely, while the core chase terminates after one breadth-first step and returns $I^*$. 
The restricted chase has a terminating sequence, for example, $(\ensuremath{\sigma}_2, \{x \mapsto a, y \mapsto b\})$, which yields $I^*$ as well, but it also has infinite fair sequences, for example, the breadth-first sequence that applies $\ensuremath{\sigma}_1$ before $\ensuremath{\sigma}_2$ at each step. \end{example} \pagebreak We study the following problems for the semi-oblivious, restricted and core chase variants: \begin{itemize} \item \emph{(All instance) all sequence termination:} Given a set of rules $\ensuremath{\Sigma}$, is it true that, for any instance, all fair sequences are terminating? \item \emph{(All instance) one sequence termination:} Given a set of rules $\ensuremath{\Sigma}$, is it true that, for any instance, there is a terminating sequence? \end{itemize} Note that, according to the terminology of \cite{DBLP:journals/fuin/GrahneO18}, these problems can be recast as deciding whether, for a chase variant, a given set of rules belongs to the class CT$_{\forall\forall}$ or CT$_{\forall\exists}$, respectively. An existential rule is called \emph{linear} if its body and its head are both composed of a single atom (e.g., \cite{DBLP:journals/ws/CaliGL12}). Linear rules generalize \emph{inclusion dependencies} \cite{DBLP:journals/tods/Fagin81} by allowing several occurrences of the same variable in an atom. They also generalize positive inclusions in the description logic DL-Lite$_\mathcal R$ (the formal basis of the OWL 2 QL web ontology language) \cite{DBLP:journals/jar/CalvaneseGLLR07}, which can be seen as inclusion dependencies restricted to unary and binary predicates. Note that the restriction of existential rules to rules with a single-atom head is often made in the literature, considering that any existential rule with a complex head can be decomposed into several rules with a single-atom head, by introducing a fresh predicate for each rule.
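As a side illustration (ours, not the paper's), the order-dependence of the restricted chase on Example~\ref{ex-intro} can be replayed in a few lines of code. The sketch below hard-codes the two rules $\ensuremath{\sigma}_1$ and $\ensuremath{\sigma}_2$ and the restricted activeness criterion specialized to them, encoding $p$-atoms as pairs; it is not an implementation of the general chase.

```python
import itertools

_fresh = itertools.count()  # supplies distinct fresh variables (nulls)

def restricted_fire(instance, atom, rule):
    """Fire sigma_1 or sigma_2 of Example 1 on the p-atom (x, y), but only if
    the trigger is active under the restricted criterion; returns the new
    atom, or None when the head already folds onto the instance."""
    x, y = atom
    if rule == 's1':  # sigma_1: p(x,y) -> exists z. p(y,z)
        # inactive iff some p(y, t) already exists (z can be mapped to t)
        if any(u == y for (u, v) in instance):
            return None
        return (y, f'z{next(_fresh)}')  # fresh null for z
    else:             # sigma_2: p(x,y) -> p(y,y)
        return None if (y, y) in instance else (y, y)

# terminating order: fire sigma_2 first; afterwards no trigger stays active
I = {('a', 'b')}
I.add(restricted_fire(I, ('a', 'b'), 's2'))  # adds p(b,b)
print(restricted_fire(I, ('a', 'b'), 's1'))  # prints None: p(b,z) folds onto p(b,b)

# eager order: keep firing sigma_1 on the newest atom; it stays active forever
J, atom = {('a', 'b')}, ('a', 'b')
for _ in range(5):
    atom = restricted_fire(J, atom, 's1')    # a fresh null each time, never None
    J.add(atom)
```

The two runs reflect the example: applying $\ensuremath{\sigma}_2$ first yields the finite universal model $I^*$, while greedily applying $\ensuremath{\sigma}_1$ produces a fresh null at every step.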
However, while this translation preserves the termination of the semi-oblivious chase, this is not the case for the restricted and the core chases. Hence, considering linear rules with a complex head would require extending the techniques developed in this paper. To simplify the presentation, we assume in the following that each rule frontier is of size at least one. This assumption is made without loss of generality.\footnote{For instance, it can always be ensured by adding a position to all predicates, which is filled by the same fresh constant in the initial instance, and by a new frontier variable in each rule.} We first point out that the termination problem on linear rules can be recast by considering solely instances that contain a single atom (as already remarked in several contexts). \begin{propositionrep} \label{prop-atomic-instance} Let $\ensuremath{\Sigma}$ be a set of linear rules. The semi-oblivious (resp. restricted, core) chase terminates on all instances if and only if it terminates on all singleton instances. \end{propositionrep} \begin{proof} Obviously, if a chase variant does not halt on some atomic instance, then it does not terminate on all instances. For the other direction, we show that if the chase does not halt on an instance, then it does not halt on one of its atoms. If a chase variant does not terminate, there exists an infinite derivation whose associated chase graph is also infinite. As the branching of the nodes in the chase graph is bounded by the size of the rule set, by K\"onig's lemma the chase graph must contain an infinite path starting from a node of the initial instance. Because the chase graph for linear rules forms a forest whose roots are the atoms of the initial instance, it follows that this infinite path is created by a single atom of the initial instance. \end{proof} We will furthermore rely on the following notion of the type of an atom.
\begin{definition}[Type of an atom] \label{definition-type} The \emph{type of an atom} $\alpha = r(t_1,\ldots, t_n)$, denoted by $\type{\alpha}$, is the pair $(r,\mathcal{P})$ where $\mathcal{P}$ is the partition of $\{1,\ldots,n\}$ induced by term equality (i.e., $i$ and $j$ are in the same class of $\mathcal{P}$ iff $\ensuremath{t}_i = \ensuremath{t}_j$). \end{definition} Note that there are finitely (more specifically, exponentially) many types for a given vocabulary. If two atoms $\alpha$ and $\alpha'$ have the same type, then there is a \emph{natural mapping} from $\alpha$ to $\alpha'$, denoted by $\varphi_{\alpha\rightarrow \alpha'}$, and defined as follows: it is a bijective mapping from $\ensuremath{t}s{\alpha}$ to $\ensuremath{t}s{\alpha'}$, that maps the $i$-th term of $\alpha$ to the $i$-th term of $\alpha'$. Note that $\varphi_{\alpha\rightarrow \alpha'}$ may not be an isomorphism, as constants from $\alpha$ may not be mapped to themselves. However, if $(\ensuremath{\sigma},\pi)$ is a trigger for $\{\alpha\}$, then $(\ensuremath{\sigma},\varphi_{\alpha\rightarrow \alpha'}\circ\pi)$ is a trigger for $\{\alpha'\}$, as there are no constants in the considered rules. Together with Proposition \ref{prop-atomic-instance}, this implies that one can check all instance all sequence termination by checking all sequence termination on a finite set of instances, called \emph{canonical instances}: for each type, there is exactly one canonical instance that has this type. We will consider different kinds of tree structures, which have in common to be \emph{trees of bags}: these are rooted trees, whose nodes, called \emph{bags}, are labeled by an atom.\footnote{Furthermore the trees we will consider are decomposition trees of the associated set of atoms. 
That is why we use the classical term of \emph{bag} to denote a node.} We define the following notations for any node $\ensuremath{B}$ of a tree of bags $\ensuremath{S}tree$: \begin{itemize} \item $\alphas{\ensuremath{B}}$ is the label of $\ensuremath{B}$; \item $\ensuremath{t}s{\ensuremath{B}} = \ensuremath{t}s{\alphas{\ensuremath{B}}}$ is the set of terms of $B$; \item $\ensuremath{t}s{\ensuremath{B}}$ is divided into two sets of terms, those \emph{generated} in $\ensuremath{B}$, denoted by $\generated{\ensuremath{B}}$, and those shared with its parent, denoted by $\shared{\ensuremath{B}}$; precisely, $\ensuremath{t}s{\ensuremath{B}} = \shared{\ensuremath{B}} \cup \generated{\ensuremath{B}}$, $\shared{\ensuremath{B}} \cap \generated{\ensuremath{B}} = \emptyset$, and if $\ensuremath{B}$ is the root of $\ensuremath{S}tree$, then $\generated{\ensuremath{B}} = \ensuremath{t}s{\ensuremath{B}}$ (hence $\shared{\ensuremath{B}} = \emptyset$), otherwise $\ensuremath{B}$ has a parent $\ensuremath{B}_p$ and $\generated{\ensuremath{B}} = \ensuremath{t}s{\ensuremath{B}} \ensuremath{S}minus \ensuremath{t}s{\ensuremath{B}_p}$ (hence, $\shared{\ensuremath{B}} = \ensuremath{t}s{\ensuremath{B}_p} \cap \ensuremath{t}s{\ensuremath{B}}$). \end{itemize} We denote by $\treeatoms{\ensuremath{S}tree}$ the set of atoms that label the bags in $\ensuremath{S}tree$. Finally, we recall some classical mathematical notions. A \emph{subsequence} $S'$ of a sequence $S$ is a sequence that can be obtained from $S$ by deleting some (or no) elements without changing the order of the remaining elements. The \emph{arity} of a tree is the maximal number of children for a node. A \emph{prefix} $T'$ of a tree $T$ is a tree that can be obtained from $T$ by repeatedly deleting some (or no) leaves of $T$. 
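The type of an atom (Definition~\ref{definition-type}) and the associated natural mapping can be computed directly. Below is a small illustrative sketch, with atoms encoded as tuples $(\mathit{pred}, t_1, \ldots, t_n)$; the encoding and function names are ours, not the paper's.

```python
def atom_type(atom):
    """Type of an atom r(t1,...,tn): the pair (r, P) where P is the partition
    of positions induced by term equality (positions are 0-based here)."""
    pred, *terms = atom
    classes = {}
    for i, t in enumerate(terms):
        classes.setdefault(t, []).append(i)  # group positions holding equal terms
    return (pred, frozenset(frozenset(c) for c in classes.values()))

def natural_mapping(alpha, beta):
    """If alpha and beta have the same type, return the bijection sending the
    i-th term of alpha to the i-th term of beta; otherwise None."""
    if atom_type(alpha) != atom_type(beta):
        return None
    return dict(zip(alpha[1:], beta[1:]))
```

For instance, `atom_type(('p', 'a', 'b', 'a'))` and `atom_type(('p', '?x', '?y', '?x'))` coincide, since both partitions group positions 0 and 2; the natural mapping between two such atoms then transports any trigger from one to the other, as noted above.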
\section{Derivation Trees} A classical tool to reason about the chase is the so-called \emph{chase graph} (see e.g., \cite{DBLP:journals/ws/CaliGL12}), which is the directed graph consisting of all atoms that appear in the considered derivation, with an arrow from a node $n_1$ to a node $n_2$ iff $n_2$ is created by a rule application on $n_1$ and possibly other atoms.\footnote{Note that the chase graph in \cite{DBLP:conf/pods/DeutschNR08} is a different notion.} In the specific case of KBs of the form $(\{\alpha\}, \ensuremath{\Sigma})$, where $\alpha$ is an atom and $\ensuremath{\Sigma}$ is a set of linear rules, the chase graph is a tree. We recall below its definition in this specific case, in order to emphasize its differences with another tree, called \emph{derivation tree}, on which we will actually rely. \begin{definition}[Chase Graph for Linear Rules] \label{definition-chase-graph} Let $\ensuremath{I}$ be a singleton instance, $\ensuremath{\Sigma}$ be a set of linear rules and $S = I_0, (\ensuremath{\sigma}_1,\pi_1), I_1, \ldots, I_{n-1}, (\ensuremath{\sigma}_n,\pi_n), I_n$, with $I_0 = \ensuremath{I}$, be a semi-oblivious $\ensuremath{\Sigma}$-derivation from $\ensuremath{I}$. The \emph{chase graph} (also called \emph{chase tree}) assigned to $S$ is a tree of bags built as follows: \begin{itemize} \item the set of bags is in bijection with $\ensuremath{I}_n$ via the labeling function $\alphas{}$; \item the set of edges is in bijection with the set of triggers in $S$ and is built as follows: for each trigger $(\ensuremath{\sigma}_i,\pi_i)$ in $S$, there is an edge $(\ensuremath{B},\ensuremath{B}')$ with $\alphas{\ensuremath{B}} = {\pi_i(\body{\ensuremath{\sigma}_i})}$ and $\alphas{\ensuremath{B}'} = \pi_i^s(\head{\ensuremath{\sigma}_i})$.
\end{itemize} \end{definition} \begin{example} \label{example-chase-graph-not-enough} Let $\ensuremath{I} = q(a)$ and $\ensuremath{\Sigma}=\{\ensuremath{\sigma}_1,\ensuremath{\sigma}_2\}$ where $\ensuremath{\sigma}_1 = q(x) \rightarrow \exists y \exists z \exists t ~p(x,y,z,t)$ and $\ensuremath{\sigma}_2 = p(x,y,z,t) \rightarrow p(x,z,t,y)$. Let $S = \ensuremath{I},(\ensuremath{\sigma}_1,\pi_1),\ensuremath{I}_1,(\ensuremath{\sigma}_2,\pi_2),\ensuremath{I}_2,(\ensuremath{\sigma}_2,\pi_3),\ensuremath{I}_3$ with $\pi_1 = \{ x \mapsto a\}$, $\pi_1^s(\head{\ensuremath{\sigma}_1}) = p(a,y_0,z_0,t_0)$, $\pi_2 = \{ x \mapsto a, y \mapsto y_0, z \mapsto z_0, t \mapsto t_0\}$ and $\pi_3 = \{ x \mapsto a, y \mapsto z_0, z \mapsto t_0, t \mapsto y_0\}$. The chase graph associated with $S$ is a path of four nodes, as represented in Figure~\ref{figure-chase-graph-not-enough}. \end{example} \begin{figure} \caption{Chase Graph and Derivation Tree of Example \ref{example-chase-graph-not-enough}} \label{figure-chase-graph-not-enough} \end{figure} To check termination of a chase variant on a given KB $(\{\alpha\}, \ensuremath{\Sigma})$, the general idea is to build a tree of bags associated with the chase on this KB in such a way that the occurrence of some forbidden pattern indicates that a path of unbounded length can be developed, hence that the chase does not terminate. The forbidden pattern is composed of two distinct nodes such that one is an ancestor of the other and, intuitively speaking, these nodes ``can be extended in similar ways'', which leads to an arbitrarily long path that repeats the pattern. Two atoms with the same type admit the same rule triggers; however, within a derivation, the same rule applications cannot necessarily be performed on both of them because of the presence of other atoms (this is true already for datalog rules, since the same atom is never produced twice).
Hence, on the one hand we will specialize the notion of type, into that of a \emph{sharing type}, and, on the other hand, adopt another tree structure, called a \emph{derivation tree}, in which two nodes with the same sharing type have the required similar behavior. \begin{definition}[Sharing type and Twins] \label{definition-sharing-type} Given a tree of bags, the \emph{sharing type} of a bag $\ensuremath{B}$ is a pair $ (\type{\alphas{\ensuremath{B}}},P)$ where $P$ is the set of positions in $\alphas{\ensuremath{B}}$ in which a term of $\shared{\ensuremath{B}}$ occurs. We denote the fact that two bags $\ensuremath{B}$ and $\ensuremath{B}'$ have the same sharing type by $\ensuremath{B} \ensuremath{\equiv_{st}} \ensuremath{B}'$. Furthermore, we say that two bags $\ensuremath{B}$ and $\ensuremath{B}'$ are \emph{twins} if they have the same sharing type, the same parent $\ensuremath{B}_p$ and if the natural mapping $\varphi_{\alphas{B}\rightarrow\alphas{B'}}$ is the identity on the terms of $\alphas{\ensuremath{B}_p}$. \end{definition} We can now specify the forbidden pattern that we will consider: it is a pair of two distinct nodes with the same sharing type, such that one is an ancestor of the other. \begin{definition}[Unbounded-Path Witness] An \emph{unbounded-path witness} (UPW) in a derivation tree is a pair of distinct bags $(\ensuremath{B},\ensuremath{B}')$ such that $\ensuremath{B}$ and $\ensuremath{B}'$ have the same sharing type and $\ensuremath{B}$ is an ancestor of $\ensuremath{B}'$. \end{definition} As explained below on Example \ref{example-chase-graph-not-enough}, the chase graph is not the appropriate tool to use this forbidden pattern as a witness of chase non-termination. \noindent \emph{Example \ref{example-chase-graph-not-enough} (cont'd).} $B_1$, $B_2$ and $B_3$ have the same classical type,\\ $t = (p, \{\{1\}, \{2\},\{3\},\{4\}\})$. The sharing type of $B_1$ is $(t,\{1\})$, while $B_2$ and $B_3$ have the same sharing type $(t,\{1,2,3,4\})$. 
$B_2$ and $B_3$ fulfill the condition of the forbidden pattern, however it is easily checked that any derivation that extends this derivation is finite. Derivation trees were introduced as a tool to define the \emph{greedy bounded treewidth set (gbts)} family of existential rules \cite{DBLP:conf/ijcai/BagetMRT11,DBLP:phd/hal/Thomazo13}. A derivation tree is associated with a derivation, however it does not have the same structure as the chase graph. The fundamental reason is that, when a rule $\ensuremath{\sigma}$ is applied to an atom $\alpha$ via a homomorphism $\pi$, the newly created bag is not necessarily attached in the tree as a child of the bag labeled by $\alpha$. Instead, it is attached as a child of the \emph{highest} bag in the tree labeled by an atom that contains $\pi(\fr{\ensuremath{\sigma}})$, the image by $\pi$ of the frontier of $\ensuremath{\sigma}$ (note that $\pi(\fr{\ensuremath{\sigma}})$ remains the set of terms shared between the new bag and its parent). In the following definition, a derivation tree is not associated with \emph{any} derivation, but with a semi-oblivious derivation, which has the advantage of yielding trees with bounded arity (Proposition \ref{prop-bounded-arity} in the Appendix). This is appropriate to study the termination of the semi-oblivious chase, and later the restricted chase, as a restricted chase sequence is a specific semi-oblivious chase sequence. \begin{definition}[Derivation Tree] Let $\ensuremath{I} = \{\alpha\}$ be a singleton instance, $\ensuremath{\Sigma}$ be a set of linear rules, and $\ensuremath{S} = \ensuremath{I}_0,(\ensuremath{\sigma}_1,\pi_1),\ensuremath{I}_1, \ldots, (\ensuremath{\sigma}_n,\pi_n),\ensuremath{I}_n$ be a semi-oblivious $\ensuremath{\Sigma}$-derivation. 
The \emph{derivation tree} assigned to $\ensuremath{S}$ is a tree of bags $\ensuremath{S}tree$ built as follows: \begin{itemize} \item the root of the tree, $\ensuremath{B}_0$, is such that $\alphas{\ensuremath{B}_0} = \alpha$; \item for each trigger $(\ensuremath{\sigma}_i,\pi_i)$, $0 < i \leq n$, let $\ensuremath{B}_i $ be the bag such that $\alphas{\ensuremath{B}_i} = \ensuremath{\pi^s}_{i}(\head{\ensuremath{\sigma}_{i}})$. Let $j$ be the smallest integer such that $\pi_{i}(\fr{\ensuremath{\sigma}_{i}}) \subseteq \ensuremath{t}s{\ensuremath{B}_j}$: $\ensuremath{B}_i$ is added as a child of $\ensuremath{B}_j$. \end{itemize} By extension, we say that a derivation tree $ \ensuremath{S}tree$ is \emph{associated with $\alpha$ and $\ensuremath{\Sigma}$} if there exists a semi-oblivious $\ensuremath{\Sigma}$-derivation $\ensuremath{S}$ from $\alpha$ such that $\ensuremath{S}tree$ is assigned to $\ensuremath{S}$. \end{definition} \noindent \emph{Example \ref{example-chase-graph-not-enough} (cont'd).} The derivation tree associated with $S$ is represented in Figure \ref{figure-chase-graph-not-enough}. Bags have the same sharing types in the chase tree and in the derivation tree. However, we can see here that they are not linked in the same way: $\ensuremath{B}_3$ was a child of $\ensuremath{B}_2$ in the chase tree; it becomes a child of $\ensuremath{B}_1$ in the derivation tree. Hence, the forbidden pattern cannot be found anymore in the tree. Note that every non-root bag $\ensuremath{B}$ shares at least one term with its parent (since the rule frontiers are not empty); furthermore, this term is generated in its parent (otherwise $\ensuremath{B}$ would have been added at a higher level in the tree). \begin{toappendix} \begin{proposition}\label{prop-bounded-arity} The arity of a derivation tree is bounded. \end{proposition} \begin{proof} We first point out that a bag has a bounded number of twin children.
Since we consider semi-oblivious derivations, a bag $\ensuremath{B}_p$ cannot have two twin children $\ensuremath{B}_{c_1}$ and $\ensuremath{B}_{c_2}$ created by applications of the same rule $\ensuremath{\sigma}$. Indeed, although these rule applications may map $\body{\ensuremath{\sigma}}$ to distinct atoms, the associated homomorphisms, say $\pi_1$ and $\pi_2$, would have the same restriction to the rule frontier, i.e., $\pi_1{_{\mid\fr{\ensuremath{\sigma}}}} = \pi_2{_{\mid\fr{\ensuremath{\sigma}}}}$. Hence, all twin children of a bag come from applications of distinct rules. It follows that the arity of a node is bounded by the number of atom types $\times$ the cardinality of the rule set. \end{proof} \end{toappendix} \section{Semi-Oblivious and Restricted Chase Termination} We now use derivation trees and sharing types to characterize the termination of the semi-oblivious chase. The fundamental property of derivation trees that we exploit is that, when two nodes have the same sharing type, the considered (semi-oblivious) derivation can always be extended so that these nodes have the same number of children, and in turn these children have the same sharing type. We first specify the notion of \emph{bag copy}. \begin{definition}[Bag Copy] Let $\ensuremath{\mathcal{T}},\ensuremath{\mathcal{T}}'$ be two (possibly equal) trees of bags. Let $\ensuremath{B}$ be a bag of $\ensuremath{\mathcal{T}}$ and $\ensuremath{B}'$ be a bag of $\ensuremath{\mathcal{T}}'$ such that $\ensuremath{B} \ensuremath{\equiv_{st}} \ensuremath{B}'$. Let $\ensuremath{B}_c$ be a child of $\ensuremath{B}$.
A \emph{copy} of $\ensuremath{B}_c$ \emph{under} $\ensuremath{B}'$ is a bag $\ensuremath{B}'_c$ such that $\alphas{\ensuremath{B}'_c} = \varphi^s(\alphas{\ensuremath{B}_c})$, where $\varphi^s$ is a substitution of $\ensuremath{t}s{\ensuremath{B}_c}$ defined as follows: \begin{itemize} \item if $\ensuremath{t} \in \shared{\ensuremath{B}_c}$, then $\varphi^s(\ensuremath{t}) = \varphi_{\alphas{\ensuremath{B}} \rightarrow \alphas{\ensuremath{B}'}}(\ensuremath{t})$, where $\varphi_{\alphas{\ensuremath{B}} \rightarrow \alphas{\ensuremath{B}'}}$ is the natural mapping from $\alphas{\ensuremath{B}}$ to $\alphas{\ensuremath{B}'}$; \item if $\ensuremath{t} \in \generated{\ensuremath{B}_c}$, then $\varphi^s(\ensuremath{t})$ is a fresh {variable}. \end{itemize} \end{definition} Let $\ensuremath{S}tree_e$ be obtained from a derivation tree $\ensuremath{S}tree$ by adding a copy of a bag: strictly speaking, $\ensuremath{S}tree_e$ may not be a derivation tree, in the sense that there may be no derivation to which it can be assigned (intuitively, some rule applications that would allow one to produce the copy may be missing). Rather, there is some derivation tree of which $\ensuremath{S}tree_e$ is a \emph{prefix} (intuitively, one can add bags to $\ensuremath{S}tree_e$ to obtain a derivation tree). That is why the following proposition considers, more generally, prefixes of derivation trees. \begin{propositionrep} \label{proposition-sharing-type-children-derivation-tree} Let $\ensuremath{S}tree$ be a prefix of a derivation tree, $\ensuremath{B}$ and $\ensuremath{B}'$ be two bags of $\ensuremath{S}tree$ such that $\ensuremath{B} \ensuremath{\equiv_{st}} \ensuremath{B}'$, and $\ensuremath{B}_c$ be a child of $\ensuremath{B}$.
Then: \emph{(a)} the tree obtained from $\ensuremath{S}tree$ by adding the copy $\ensuremath{B}'_c$ of $\ensuremath{B}_c$ under $\ensuremath{B}'$ is a prefix of a derivation tree, and \emph{(b)} it holds that $\ensuremath{B}_c \ensuremath{\equiv_{st}} \ensuremath{B}'_c$. \end{propositionrep} \begin{proof} Let $\ensuremath{B}$ and $\ensuremath{B}'$ be two bags of $\ensuremath{S}tree$ having the same sharing type. Let $\ensuremath{B}_c$ be a child of $\ensuremath{B}$ created by a trigger $(\ensuremath{\sigma},\pi)$. By definition of the derivation tree, $\pi$ maps the rule frontier $\fr{\ensuremath{\sigma}}$ to $\ensuremath{t}s{\ensuremath{B}}$, without this being possible for the parent of $\ensuremath{B}$. Furthermore, we know that $\pi$ maps $\body{\ensuremath{\sigma}}$ to a (possibly strict) descendant of $\ensuremath{B}$. We assume that $\ensuremath{S}tree$ does not already contain the image of $\head{\ensuremath{\sigma}}$ via $\pi$, otherwise the claim trivially holds. Let $\ensuremath{S}$ be the derivation associated with $\ensuremath{S}tree$ and $\alpha_0,\dots,\alpha_{k}$ be the path of the \emph{chase-graph} associated with $\ensuremath{S}$ such that $\alpha_0=\alphas{\ensuremath{B}}$ and $\alpha_{k}=\alphas{\ensuremath{B}_{c}}$, whose sequence of associated rule applications is $(\ensuremath{\sigma}_1,\pi_1),\dots, (\ensuremath{\sigma}_{k},\pi_{k})=(\ensuremath{\sigma},\pi)$. We define $\hat\pi_i^{\mathrm{safe}}(t)=\varphi_{\alphas{\ensuremath{B}}\rightarrow\alphas{\ensuremath{B}'}}\circ\pi_i(t)$ whenever $\pi_i(t)\in\ensuremath{t}s{\ensuremath{B}}$ and otherwise $\hat\pi_i^{\mathrm{safe}}(t)$ to be a fresh variable consistently used over the rule applications, that is, such that $\pi_i(t)=\pi_j(t)$ if and only if $\hat\pi_i^{\mathrm{safe}}(t)=\hat\pi_j^{\mathrm{safe}}(t)$.
Then, for all $1\leq i \leq k$, we extend $\ensuremath{S}$ by adding a trigger $(\ensuremath{\sigma}_i,\hat\pi_i)$\footnote{$\hat\pi_i$ is the restriction of $\hat\pi_i^{\mathrm{safe}}$ to the variables of the body of $\ensuremath{\sigma}_i$.} whenever $\hat\pi_i^{\mathrm{safe}}(\head{\ensuremath{\sigma}_i})$ is not an atom already produced by $\ensuremath{S}$, thereby obtaining a new derivation $\ensuremath{S}'$. Let $\ensuremath{S}tree'$ be an extension of $\ensuremath{S}tree$ where a bag labeled with the atom $\hat\pi_i^{\mathrm{safe}}(\head{\ensuremath{\sigma}_i})$ is added for each new trigger in $\ensuremath{S}'$ and attached to the highest descendant of $\ensuremath{B}'$ whose set of terms contains $\hat\pi_i(\fr{\ensuremath{\sigma}_i})$. Clearly, $\ensuremath{S}tree'$ is a derivation tree associated with $\ensuremath{S}'$. We now show that $\ensuremath{S}tree'$ contains a node $\ensuremath{B}_c'$ which is a copy of $\ensuremath{B}_c$ under $\ensuremath{B}'$. As $\ensuremath{B}$ is the parent of $\ensuremath{B}_c$, the image of $\fr{\ensuremath{\sigma}}$ via $\pi$ contains at least one term which is generated in $\ensuremath{B}$ (and, in general, only terms generated by the ancestors of $\ensuremath{B}$). Therefore, because $\ensuremath{B}$ and $\ensuremath{B}'$ have the same sharing type, the image of $\fr{\ensuremath{\sigma}}$ via $\varphi_{\alphas{\ensuremath{B}}\rightarrow \alphas{\ensuremath{B}'}}\circ\pi$ contains at least one term generated in $\ensuremath{B}'$ (and, in general, only terms generated by the ancestors of $\ensuremath{B}'$). So, $\ensuremath{B}'$ is the only possible parent of $\ensuremath{B}'_c$ in $\ensuremath{S}tree'$. Moreover, it is easy to see that $\ensuremath{B}_c \ensuremath{\equiv_{st}} \ensuremath{B}'_c$. Let $\ensuremath{S}tree''$ be the extension of $\ensuremath{S}tree$ with $\ensuremath{B}_c'$ under $\ensuremath{B}'$.
It can be easily verified that $\ensuremath{S}tree''$ is a prefix of the derivation tree $\ensuremath{S}tree'$, in the sense that it is a tree of bags which can be obtained by recursively removing some of the leaves of $\ensuremath{S}tree'$, i.e., those corresponding to the triggers in $\ensuremath{S}' \setminus \ensuremath{S}$ which are different from $(\ensuremath{\sigma},\pi)$. \end{proof} The size of a derivation tree without UPW is bounded, since its arity is bounded (Proposition \ref{prop-bounded-arity} in the Appendix) and its depth is bounded by the number of sharing types. It remains to show that a derivation tree that contains a UPW can be extended to an arbitrarily large derivation tree. We recall that a similar property would not hold for the chase tree, as witnessed by Example~\ref{example-chase-graph-not-enough}. \begin{propositionrep} \label{proposition-finiteness-derivation-tree} There exist arbitrarily large derivation trees associated with $\alpha$ and $\ensuremath{\Sigma}$ if and only if there exists a derivation tree associated with $\alpha$ and $\ensuremath{\Sigma}$ that contains an unbounded-path witness. \end{propositionrep} \begin{proof} If there is no derivation tree having an unbounded-path witness, then the depth of all derivation trees is upper bounded by the number of sharing types. As derivation trees are of bounded arity, all derivation trees must be of bounded size. If there is a derivation tree $\ensuremath{S}tree$ having an unbounded-path witness $(\ensuremath{B},\ensuremath{B}')$, we show that there are arbitrarily large derivation trees. We do so by contradiction. Let $(\ensuremath{B},\ensuremath{B}')$ be such a UPW with $\ensuremath{B}'$ of maximal depth among all such pairs and among all trees, which by hypothesis are of bounded size.
Let $\ensuremath{B}_c$ be the child of $\ensuremath{B}$ that is on the shortest path from $\ensuremath{B}$ to $\ensuremath{B}'$ (possibly $\ensuremath{B}_c = \ensuremath{B}'$). By Proposition \ref{proposition-sharing-type-children-derivation-tree}, $\ensuremath{B}'$ has a child $\ensuremath{B}'_c$ that has the same sharing type as $\ensuremath{B}_c$, either in the same tree or in an extension of this tree, which is in contradiction with the fact that $\ensuremath{B}'$ was of maximal depth. Hence, there are arbitrarily large derivation trees. \end{proof} The previous proposition yields a characterization of the existence of an infinite semi-oblivious derivation. At this point, one may notice that an infinite semi-oblivious derivation is not necessarily fair. However, from this infinite derivation one can always build a fair derivation by inserting missing triggers. Obviously, this operation has no effect on the termination of the semi-oblivious chase. More care will be required for the restricted chase. One obtains an algorithm to decide termination of the semi-oblivious chase for a given set of rules: for each canonical instance, build a semi-oblivious derivation and the associated derivation tree by applying rules until a UPW is created (in which case the answer is no) or all possible rule applications have been performed; if no instance has returned a negative answer, the answer is yes. \begin{corollary} \label{corollary-semi-oblivious-finiteness-decidability} The all-sequence termination problem for the semi-oblivious chase on linear rules is decidable. \end{corollary} \fussy This algorithm can be modified to run in polynomial space (which is optimal \cite{DBLP:conf/pods/CalauttiGP15}), by guessing a canonical instance and a UPW of its derivation tree.
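The core test performed by this decision procedure is UPW detection: a bag whose sharing type already occurs on its path to the root. As an illustration only (this sketch is ours, not part of the paper's formalism), the check can be modelled in a few lines of Python, where sharing types are abstracted as opaque hashable values and the names \texttt{Bag} and \texttt{has\_upw} are hypothetical:

```python
# Illustrative sketch (ours, not from the paper): detecting an
# unbounded-path witness (UPW) in a derivation tree.  A bag is modelled
# only by its precomputed sharing type and its children.

from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

# A sharing type abstracted as (atom type, set of shared positions).
SharingType = Tuple[str, FrozenSet[int]]

@dataclass
class Bag:
    sharing_type: SharingType
    children: List["Bag"] = field(default_factory=list)

def has_upw(root: Bag) -> bool:
    """True iff some bag has a strict ancestor with the same sharing
    type, i.e. the tree contains an unbounded-path witness."""
    def dfs(bag: Bag, on_path: set) -> bool:
        if bag.sharing_type in on_path:   # same sharing type as an ancestor
            return True
        on_path.add(bag.sharing_type)
        found = any(dfs(child, on_path) for child in bag.children)
        on_path.discard(bag.sharing_type)  # backtrack: leave the root path
        return found
    return dfs(root, set())
```

Note that two bags with the same sharing type that are siblings, as $B_2$ and $B_3$ become in the derivation tree of the running example, do not form a UPW; only an ancestor/descendant pair does.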
\sloppy \begin{propositionrep} The all-sequence termination problem for the semi-oblivious chase on linear rules is in \textsc{PSpace}. \end{propositionrep} \begin{proof} Let $\ensuremath{\mathcal{T}}$ be a derivation tree whose root is the canonical instance $\{\alpha\}$ and that contains a UPW $(\ensuremath{B},\ensuremath{B}')$, where the sharing type of both bags is $ST$. We show that there exists a semi-oblivious derivation of length at most exponential whose derivation tree has root $\{\alpha\}$ and contains a UPW $(\ensuremath{B}_s,\ensuremath{B}'_s)$ where the sharing type of both bags is $ST$. First, by Proposition \ref{proposition-sharing-type-children-derivation-tree}, we conclude that it is not necessary to have the same sharing type twice on the path from the root to $\ensuremath{B}'$ in the derivation tree. It is thus enough to show that, to generate a child $\ensuremath{B}_c$ from its parent $\ensuremath{B}_p$, a derivation of length at most exponential is sufficient. Let us consider the chase graph of the derivation generating $\alphas{\ensuremath{B}_c}$ from $\alphas{\ensuremath{B}_p}$. This chase graph can be assumed w.l.o.g. to be a path. If there are no pairs of atoms having the same sharing type on this path, then the derivation is of length at most exponential. Otherwise, we show that we can build a shorter semi-oblivious derivation that generates $\alphas{\ensuremath{B}_c}$. Let us thus assume that there are $\ensuremath{B}$ and $\ensuremath{B}'$ such that both have the same sharing type, the terms of $\ensuremath{B}_p$ that appear in $\ensuremath{B}$ appear in the same positions in $\ensuremath{B}'$, and $\ensuremath{B}'$ is on the path from $\ensuremath{B}$ to $\ensuremath{B}_c$ in the chase graph. A derivation similar to that applicable after $\ensuremath{B}'$ is actually applicable to $\ensuremath{B}$, by Proposition \ref{proposition-sharing-type-children-derivation-tree}.
A copy of $\ensuremath{B}_c$ under $\ensuremath{B}_p$ is thus generated by this derivation, which proves our claim. We now describe the algorithm. We guess the canonical instance and the sharing type $ST$ of the UPW. We then check that there is a descendant (not necessarily a child) of that canonical instance that has sharing type $ST$. This can be done by guessing the shortest derivation creating a bag of sharing type $ST$. It is only necessary to remember the sharing type of the ``current'' bag, as we know that any bag created during a derivation is added as a descendant of the root. We then want to prove that a bag of sharing type $ST$ can have a (strict) descendant of sharing type $ST$. In contrast with the case of the root, a trigger applied below a bag $\ensuremath{B}$ does not necessarily create a bag that is also below $\ensuremath{B}$ -- it could be added higher in the tree. We thus have to remember the shared variables of $\ensuremath{B}$, and verify at each step that the shared variables of the currently considered bag are not a subset of them. This leads to a \textsc{PSpace} procedure. \end{proof} We now consider the restricted chase. To this aim, we call \emph{restricted derivation tree} associated with $\alpha$ and $\ensuremath{\Sigma}$ a derivation tree associated with a restricted $\ensuremath{\Sigma}$-derivation from $\alpha$. We first point out that Proposition \ref{proposition-sharing-type-children-derivation-tree} is not true anymore for a restricted derivation tree, as the order in which rules are applied matters. \begin{example} Consider a restricted derivation tree that contains bags $B$ and $B'$ with the same sharing type, labeled by atoms $q(t,u)$ and $q(v,w)$ respectively, where the second term is generated.
Consider the following rules (the same as in Example \ref{ex-intro}):\\ $\ensuremath{\sigma}_1: q(x,y) \rightarrow \exists z ~q(y,z)$\\ $\ensuremath{\sigma}_2: q(x,y) \rightarrow q(y,y)$\\ Assume $\ensuremath{B}$ has a child $\ensuremath{B}_c$ labeled by $q(u,z_0)$ obtained by an application of $\ensuremath{\sigma}_1$, and $\ensuremath{B}'$ has a child $\ensuremath{B}'_1$ labeled by $q(w,w)$ obtained by an application of $\ensuremath{\sigma}_2$. It is not possible to extend this tree by copying $\ensuremath{B}_c$ under $\ensuremath{B}'$. Indeed, the corresponding application of $\ensuremath{\sigma}_1$ does not comply with the restricted chase criterion: it would produce an atom of the form $q(w,z_1)$ that folds into $q(w,w)$. \end{example} We thus prove a weaker proposition by requiring that $\ensuremath{B}'$ be a leaf in the restricted derivation tree. \begin{proposition} \label{proposition-sharing-type-restricted-derivation-tree} Let $\ensuremath{S}tree$ be a prefix of a restricted derivation tree, $\ensuremath{B}$ and $\ensuremath{B}'$ be two bags of $\ensuremath{S}tree$ such that $\ensuremath{B} \ensuremath{\equiv_{st}} \ensuremath{B}'$ and \emph{$\ensuremath{B}'$ is a leaf}. Let $\ensuremath{B}_c$ be a child of $\ensuremath{B}$. Then: \emph{(a)} the tree obtained from $\ensuremath{S}tree$ by adding the copy $\ensuremath{B}'_c$ of $\ensuremath{B}_c$ under $\ensuremath{B}'$ is a prefix of a restricted derivation tree, and \emph{(b)} it holds that $\ensuremath{B}_c \ensuremath{\equiv_{st}} \ensuremath{B}'_c$. \end{proposition} \begin{proof} Let $S$ be the restricted derivation associated with $\ensuremath{S}tree$. Let $S_c$ be the subsequence of $S$ that starts from $\ensuremath{B}$ and produces the strict descendants of $\ensuremath{B}$. Obviously, any rule application in $S_c$ is performed on a descendant of $\ensuremath{B}$, hence we can ignore rule applications that produce bags that are not descendants of $\ensuremath{B}$.
We prove the property by induction on the length of $S_c$. If $S_c$ is empty, the property holds with $\ensuremath{S}tree_e = \ensuremath{S}tree$. Assume the property is true for $0 \leq |S_c| \leq k$. Let $|S_c| = k + 1$. By induction hypothesis, there is an extension $\ensuremath{S}tree' $ of $\ensuremath{S}tree$ such that the subtree of $\ensuremath{B}$ restricted to the first $k$ elements of $S_c$ is `quasi-isomorphic' to the subtree rooted in $\ensuremath{B}'$ (via a bijective substitution defined by the natural mappings between sharing types, say $\phi$). Let $(\ensuremath{\sigma},\pi)$ be the last trigger of $S_c$, and assume it applies to a bag $\ensuremath{B}_d$. In $\ensuremath{S}tree'$, there is a bag $\ensuremath{B}'_d = \phi(\ensuremath{B}_d)$. Hence, $\ensuremath{\sigma}$ can be applied to $\ensuremath{B}'_d$ with the homomorphism $\phi \circ \pi$. Any folding of the produced bag $\ensuremath{B}''$ to a bag in $\ensuremath{S}tree'$ necessarily maps $\ensuremath{B}''$ to a bag in the subtree rooted in $\ensuremath{B}'_d$ (because $\ensuremath{B}'_d$ and $\ensuremath{B}''$ share a term generated in $\ensuremath{B}'_d$, which only occurs in the subtree rooted in $\ensuremath{B}'_d$ and remains invariant by the folding). Since $\ensuremath{B}_d$ and $\ensuremath{B}'_d$ have quasi-isomorphic subtrees, and $(\ensuremath{\sigma},\pi)$ satisfies the restricted chase criterion, so does $(\ensuremath{\sigma},\phi \circ \pi)$. Furthermore, the quasi-isomorphism $\phi$ preserves the sharing types. Hence, $\ensuremath{B}''$ is added exactly like the bag produced by $(\ensuremath{\sigma},\pi)$. We conclude that the property holds true at rank $k+1$.
\end{proof} The previous proposition allows us to obtain a variant of Proposition \ref{proposition-finiteness-derivation-tree} adapted to the restricted chase: \begin{propositionrep} \label{proposition-finiteness-restricted-derivation-tree} There exist arbitrarily large restricted derivation trees associated with $\alpha$ and $\ensuremath{\Sigma}$ if and only if there exists a restricted derivation tree associated with $\alpha$ and $\ensuremath{\Sigma}$ that contains an unbounded-path witness. \end{propositionrep} \begin{proof} If there is no restricted derivation tree with a UPW, then the size of any restricted derivation tree is bounded, since a restricted derivation tree is a derivation tree. We prove the other direction by contradiction. Assume that the size of restricted derivation trees is bounded whereas the forbidden pattern occurs in some of them. Consider a restricted chase sequence $S$ with associated restricted derivation tree $\ensuremath{S}tree$ that contains a UPW $(\ensuremath{B},\ensuremath{B}')$ of maximal depth among all such pairs and all trees, and such that $\ensuremath{B}'$ is a leaf (we can make the latter assumption since the prefix of any restricted derivation is a restricted derivation). Let $\ensuremath{B}_c$ be the child of $\ensuremath{B}$ that is on the shortest path from $\ensuremath{B}$ to $\ensuremath{B}'$ (possibly $\ensuremath{B}_c = \ensuremath{B}'$). By Proposition \ref{proposition-sharing-type-restricted-derivation-tree}, there is a restricted derivation tree that extends $\ensuremath{S}tree$ such that $\ensuremath{B}'$ has a child $\ensuremath{B}'_c$ of the same sharing type as $\ensuremath{B}_c$, hence $(\ensuremath{B}_c, \ensuremath{B}'_c)$ is a UPW of depth strictly greater than that of $(\ensuremath{B},\ensuremath{B}')$, which contradicts the hypothesis.
\end{proof} It is less obvious than in the case of the semi-oblivious chase that the existence of an infinite derivation entails the existence of an infinite \emph{fair} derivation. However, this property still holds: \begin{propositionrep} For linear rules, every (infinite) non-terminating restricted derivation is a subsequence of a fair restricted derivation. \end{propositionrep} \begin{proof} Let $\ensuremath{S}$ be a non-terminating restricted derivation. In particular, there exists at least one infinite branch in the associated derivation tree. Let us consider the following derivation: when the node $\ensuremath{B}_k$ of depth $k$ on this branch has been generated, complete the corresponding subsequence by trying to apply (i.e., while respecting the restricted criterion) all currently applicable triggers that add a bag at depth at most $k-1$. These additional rule applications cannot prevent the creation of any bag that is below $\ensuremath{B}_k$ in the derivation tree. Indeed, let $\alpha_c$ be an atom possibly created by a rule application, whose bag would be attached as a child of a bag $\ensuremath{B}$; since $\alpha_c$ shares a variable with $\alphas{\ensuremath{B}}$ that is generated in $B$, which thus only occurs in the subtree of $B$, the only possibility for $\alpha_c$ to fold into the current instance is to be mapped to an atom in the subtree of $\ensuremath{B}$. By construction, any possible rule application will be performed or inhibited at some point, which implies that the derivation that we build in this fashion is fair. \end{proof} Similarly to Proposition \ref{proposition-finiteness-derivation-tree} for the semi-oblivious chase, Proposition \ref{proposition-finiteness-restricted-derivation-tree} provides an algorithm to decide termination of the restricted chase.
The difference is that it is not sufficient to build a single derivation for a given canonical instance; instead, all possible restricted derivations from this instance have to be built (note that the associated restricted derivation trees are finite for the same reasons as before, and there is obviously a finite number of them). Hence, we obtain: \begin{corollary} \label{corollary-restricted-finiteness-decidability} The all-sequence termination problem for the restricted chase on linear rules is decidable. \end{corollary} A rough analysis of the proposed algorithm provides a \textsc{co-N2ExpTime} upper bound for the complexity of the problem, by guessing a derivation that is of length at most double exponential, and checking whether there is a UPW in the corresponding derivation tree. \begin{figure} \caption{Finite versus Infinite Derivation Tree for Example \ref{ex-bfs-stop}} \label{figure-infinite-bf-derivation} \end{figure} Importantly, the previous algorithm is naturally able to consider only certain types of restricted derivations, i.e., build only derivation trees associated with such derivations, which is of both theoretical and practical interest. Indeed, implementations of the restricted chase often proceed by building \emph{breadth-first} sequences (which are intrinsically fair), or variants of these. As witnessed by the next example, the termination of all breadth-first sequences is a strictly weaker requirement than the termination of all fair sequences, in the sense that the restricted chase terminates on more sets of rules. \begin{example} \label{ex-bfs-stop}Consider the following set of rules:\\ $\ensuremath{\sigma}_1 = p(x,y) \rightarrow q(y) \quad \quad \quad ~\ensuremath{\sigma}_2 = p(x,y) \rightarrow r(y,x)$\\ $ \ensuremath{\sigma}_3 = q(y) \rightarrow \exists z ~r(y,z) \quad \quad \ensuremath{\sigma}_4 = r(x,y) \rightarrow \exists z ~p(y,z)$\\ All breadth-first restricted derivations terminate, whatever the initial instance is.
Remark that every application of $\ensuremath{\sigma}_1$ is followed by an application of $\ensuremath{\sigma}_2$ in the same breadth-first step, which prevents the application of $\ensuremath{\sigma}_3$. However, there is a fair restricted derivation that does not terminate (and this is even true for any instance). Indeed, an application of $\ensuremath{\sigma}_2$ can always be delayed, so that it comes too late to prevent the application of $\ensuremath{\sigma}_3$. See Figure \ref{figure-infinite-bf-derivation}: on the left, a finite derivation tree associated with a breadth-first derivation from instance $p(x,y)$; on the right, an infinite derivation tree associated with a (non breadth-first) fair infinite derivation from the same instance. The numbers on edges give the order in which bags are created. \end{example} We now prove the decidability of the one-sequence termination problem, building on the same objects as before, but in a different way. Indeed, a (restricted) derivation tree $\ensuremath{S}tree$ that contains a UPW $(B,B')$ is a witness of the existence of an infinite (restricted fair) derivation, but does not prove that \emph{every} (restricted fair) derivation that extends $\ensuremath{S}tree$ is infinite. To decide the problem, we will consider trees associated with a \emph{sharing type} instead of a type. A derivation tree associated with a sharing type $T$ has a root bag whose sharing type is $T$, and is built as for usual root bags, except that shared terms are taken into account, i.e., triggers $(\ensuremath{\sigma}, \pi)$ such that $\pi(\fr{\ensuremath{\sigma}}) \subseteq \shared{T}$ are simply ignored. The algorithm proceeds as follows: \begin{enumerate} \item For each sharing type $T$, generate all restricted derivation trees associated with $T$, stopping the construction of a tree when, for each leaf $B_L$, either there is no active trigger on $\alphas{B_L}$ or $B_L$ forms a UPW with one of its ancestors.
\item Mark all the sharing types that have at least one associated tree without UPW. \item Propagate the marks until stability: if a sharing type $T$ has at least one tree for which all UPWs $(B,B')$ are such that the sharing type of $B$ is marked, then mark $T$. \item If all sharing types that correspond to instances (i.e., without shared terms) are marked, return \emph{yes}, otherwise return \emph{no}. \end{enumerate} \begin{proposition} The previous algorithm terminates and returns yes if and only if there is a terminating restricted sequence. \end{proposition} \begin{proof}(Sketch) Termination follows from the finiteness of the set of sharing types and the bound on the size of a tree. Concerning the correctness of the algorithm, we show that a terminating restricted derivation cannot have a derivation tree that contains an unmarked UPW, i.e., whose associated sharing type is not marked. By contradiction: assume there is a terminating restricted derivation whose derivation tree contains an unmarked UPW; consider such an unmarked UPW $(B,B')$ such that $B'$ is of maximal depth in the tree. The subtree of $B'$ necessarily admits as prefix one of the restricted derivation trees associated with the sharing type of $B'$ built by the algorithm, otherwise the derivation would not be fair. Moreover, since the sharing type of $B'$ is not marked, this prefix contains an unmarked UPW. Hence, the tree contains an unmarked UPW $(B'',B''')$ with $B'''$ of depth strictly greater than the depth of $B'$, which contradicts the hypothesis. \end{proof} \begin{corollary} \label{corollary-one-sequence-finiteness-decidability} The one-sequence termination problem for the restricted chase on linear rules is decidable. 
\end{corollary} By guessing a terminating restricted derivation, which must be of size at most double exponential, and checking that the obtained instance is indeed a universal model, we obtain an \textsc{N2ExpTime} upper bound for the complexity of the one-sequence termination problem. We conclude this section by noting that the previous Example \ref{ex-bfs-stop} may give the (wrong) intuition that, given a set of rules, it is sufficient to consider breadth-first sequences to decide if there exists a terminating sequence. The following example shows that this is not the case: here, no breadth-first sequence is terminating, while there exists a terminating sequence for the given instance. \begin{example} \label{ex-bfs-non-stop-bis} Let $\ensuremath{\Sigma}=\{\ensuremath{\sigma}_1,\ensuremath{\sigma}_2,\ensuremath{\sigma}_3\}$ with $\ensuremath{\sigma}_1 = p(x,y) \rightarrow \exists z ~p(y,z) $, $\ensuremath{\sigma}_2 = p(x,y) \rightarrow h(y)$, and $\ensuremath{\sigma}_3= h(x) \rightarrow ~p(x,x)$. In this case, for every instance, there is a terminating restricted chase sequence, where the application of $\ensuremath{\sigma}_2$ and $\ensuremath{\sigma}_3$ prevents the indefinite application of $\ensuremath{\sigma}_1$. However, starting from $\ensuremath{I} = \{p(a,b)\}$, by applying rules in a breadth-first fashion one obtains a non-terminating restricted chase sequence, since $\ensuremath{\sigma}_1$ and $\ensuremath{\sigma}_2$ are always applied in parallel from the same atom, before applying $ \ensuremath{\sigma}_3$. \end{example} As for the all-sequence termination problem, the algorithm can be adapted to consider only specific kinds of derivations. \section{Core Chase Termination} We now consider the termination of the core chase of linear rules. Keeping the same approach, we prove that the finiteness of the core chase is equivalent to the existence of a finite tree of bags whose set of atoms is a minimal universal model.
We call this a \emph{(finite) complete core}. To bound the size of a complete core, we show that it cannot contain an unbounded-path witness. Note that in the binary case, it would be possible to work again on derivation trees, but this is not true anymore for arbitrary arity. Indeed, as shown in Example \ref{example-mlm}, there are linear sets of rules for which no derivation tree forms a complete core (while this holds for binary rules). We thus introduce a more general tree structure, namely \emph{entailment trees}. \begin{example} \label{example-mlm} Let us consider the following rules: \begin{tabular}{ll} $s(x) \rightarrow \exists y \exists z ~p(y,z,x)$ & \hspace{1cm} $p(y,z,x) \rightarrow \exists v ~q(y,v,x)$ \\ $q(y,v,x) \rightarrow p(y,v,x)$ & ~ \\ \end{tabular} Let $\ensuremath{I} = \{s(a)\}$. The first rule applications yield a derivation tree $\ensuremath{S}tree$ which is a path of bags $B_0, B_1,B_2,B_3$ respectively labeled by the following atoms:\\ $s(a), p(y_0,z_0,a), q(y_0,v_0,a)$ and $p(y_0, v_0,a)$. $\ensuremath{S}tree$ is represented on the left of Figure \ref{figure-example-mlm}. Let $A$ be this set of atoms. First, note that $A$ is not a core: indeed, it is equivalent to its strict subset $A'$ defined by $\{B_0, B_2, B_3\}$ with a homomorphism $\pi$ that maps $\alphas{B_1}$ to $\alphas{B_3}$. Trivially, $A'$ is a core since it does not contain two atoms with the same predicate. Second, note that any further rule application on $\ensuremath{S}tree$ is redundant, i.e., generates a set of atoms equivalent to $A$ (and $A'$). Hence, $A'$ is a complete core; however, there is no derivation tree that corresponds to it. There is even no \emph{prefix} of a derivation tree that corresponds to it (which ruins the alternative idea of building a prefix of a derivation tree that would be associated with a complete core). In particular, note that $\{B_0, B_1, B_2\}$ is indeed a core, but it is not complete.
\end{example} \begin{figure} \caption{Derivation tree and entailment tree for Example \ref{example-mlm}} \label{figure-example-mlm} \end{figure} In the following definition of entailment tree, we use the notation $\alpha_1 \rightarrow \alpha_2$, where $\alpha_i$ is an atom, to denote the rule $\forall X (\alpha_1 \rightarrow \exists Y ~\alpha_2)$ with $X = \vars{\alpha_1}$ and $Y = \vars{\alpha_2}\ensuremath{S}minus X$. \begin{definition}[Entailment Tree] \label{definition-entailment-tree} An \emph{entailment tree} associated with $\alpha$ and $\ensuremath{\Sigma}$ is a tree of bags $\ensuremath{\mathcal{T}}$ such that: \begin{enumerate} \item $\ensuremath{B}_r$, the root of $\ensuremath{\mathcal{T}}$, is such that $\ensuremath{\Sigma} \models \alpha \rightarrow \alphas{\ensuremath{B}_r}$ and $\ensuremath{\Sigma} \models \alphas{\ensuremath{B}_r} \rightarrow \alpha$; \item \sloppypar{For any bag $\ensuremath{B}_c$ child of a node $\ensuremath{B}$, the following holds: \emph{(i)} $\ensuremath{t}s{\ensuremath{B}_c}\cap \generated{\ensuremath{B}} \neq \emptyset$ \emph{(ii)} The terms in $\generated{\ensuremath{B}_c}$ are variables that do not occur outside the subtree of $\ensuremath{\mathcal{T}}$ rooted in $\ensuremath{B}_c$ \emph{(iii)} $\ensuremath{\Sigma} \models \alphas{\ensuremath{B}} \rightarrow \alphas{\ensuremath{B}_c}$.} \item There is no pair of twins. \end{enumerate} \end{definition} Note that $\alpha$ is not necessarily the root of the entailment tree, as it may not belong to the result of the core chase on $\alpha$ (hence Point 1). Note also that an entailment tree is independent of any derivation. The main difference with a derivation tree is that it employs a more general parent-child relationship, which relies on entailment rather than on rule application, hence the name entailment tree.
Intuitively, with respect to a derivation tree, one is allowed to move a bag $B$ higher in the tree, provided that it contains at least one term generated in its new parent $B_p$; then, the terms of $B$ that are not shared with $B_p$ are freshly renamed. Finally, since the problem of whether an atom is entailed by a linear existential rule knowledge base is decidable (precisely \textsc{PSpace}-complete \cite{DBLP:books/sp/virgilio09/CaliGL09}), one can actually generate all non-twin children of a bag and keep a tree with bounded arity. Derivation trees are entailment trees, but not necessarily conversely. A crucial distinction between these two structures is the following statement, which does not hold for derivation trees, as illustrated by Example \ref{example-mlm}. \begin{propositionrep} \label{proposition-entailment-core} If the core chase associated with $\alpha$ and $\ensuremath{\Sigma}$ is finite, then there exists an entailment tree $\ensuremath{\mathcal{T}}$ such that the set of atoms associated with $\ensuremath{\mathcal{T}}$ is a complete core. \end{propositionrep} \noindent \emph{Example \ref{example-mlm} (cont'd).} The tree defined by the path of bags $B_0$, $B_2$, $B_3$ is an entailment tree, represented on the right of Figure \ref{figure-example-mlm}, which defines a complete core. \begin{proof} Let $\ensuremath{S}tree$ be the derivation tree associated with a derivation containing a core $C$ of $\chase{\alpha}{\ensuremath{\Sigma}}$. Let $\varphi$ be an idempotent homomorphism from the atoms of $\ensuremath{S}tree$ to $C$. 
We assign to each bag $\ensuremath{B}$ of $\ensuremath{S}tree$ a set of trees $\{T_1,\ldots,T_{n_B}\}$ such that: \begin{enumerate} \item each tree contains only elements of $C$; \item the forest assigned to $\ensuremath{B}$ contains exactly once each element of $C$ appearing in the subtree rooted in $\ensuremath{B}$; \item for each pair $(\ensuremath{B}_p,\ensuremath{B}_c)$ of bags in some $T_i$ such that $\ensuremath{B}_p$ is the parent of $\ensuremath{B}_c$, $\ensuremath{\Sigma} \models \alphas{\ensuremath{B}_p} \rightarrow \alphas{\ensuremath{B}_c}$; \item each $T_i$ is a decomposition tree; \item for each $T_i$, the root of $T_i$ contains all the terms that belong both to $T_i$ and to $C \ensuremath{S}minus T_i$; \item each term $\ensuremath{t}$ belonging to distinct $T_i$ and $T_j$ of the forest assigned to a bag $\ensuremath{B}$ also belongs to the parent of $\ensuremath{B}$. \end{enumerate} Moreover, we will show that if $\varphi(\ensuremath{B})$ is a descendant of $\ensuremath{B}$ (including $\ensuremath{B}$) in $\ensuremath{S}tree$, then its associated forest is a tree. \begin{itemize} \item if $\ensuremath{B}$ is a leaf, we consider two cases: \begin{itemize} \item $\ensuremath{B}$ belongs to the core: we assign it a single tree, consisting only of $\ensuremath{B}$ itself as root. All conditions are trivial. \item $\ensuremath{B}$ does not belong to the core: we assign it an empty forest, and all conditions are trivial. \end{itemize} \item if $\ensuremath{B}$ is an internal node, let $\{T_1,\ldots,T_n\}$ be the union of the forests assigned to the children of $\ensuremath{B}$. We distinguish three cases: \begin{itemize} \item $\ensuremath{B}$ is in the core: we assign to $\ensuremath{B}$ the tree $T$ containing $\ensuremath{B}$ as root, and having as children the roots of $\{T_1,\ldots,T_n\}$. \begin{itemize} \item 1.
2.: holds by induction assumption, the fact that different $T_i$'s cover disjoint subtrees of $\ensuremath{S}tree$, and the fact that $\ensuremath{B}$ belongs to the core \item 3.: it is enough to check this for the pairs (root of $T$, root of $T_i$). The root of $T$ is an ancestor of the root of $T_i$ in $\ensuremath{S}tree$, hence $\ensuremath{\Sigma} \models \alphas{\mathrm{root}(T)}\rightarrow \alphas{\ensuremath{B}_i}$, where $\ensuremath{B}_i$ is the root of $T_i$ \item 4. If $\ensuremath{t}$ appears in $T$ but in no $T_i$, it appears only in $\ensuremath{B}$ and the connectivity of the substructure containing $\ensuremath{t}$ holds. If it belongs to some $T_i$ and to $C \ensuremath{S}minus T_i$, it must belong to the root of $T_i$ by assumption 5. If $\ensuremath{t}$ belongs to $C \ensuremath{S}minus T$, it belongs to $\ensuremath{B}$ by connectivity of $\ensuremath{S}tree$. If $\ensuremath{t}$ belongs to another $T_j$, we distinguish two cases: either $T_j$ is in the same forest as $T_i$, and then, by induction assumption 6 on the child of $\ensuremath{B}$ to which this forest is associated, $\ensuremath{t}$ belongs to $\ensuremath{B}$; or $T_j$ is in the forest of another child of $\ensuremath{B}$, and then, by the connectivity property for $\ensuremath{t}$, it belongs to $\ensuremath{B}$. Hence the connectivity property for $\ensuremath{t}$ in $T$ is fulfilled. \item 5. By connectivity of $\ensuremath{S}tree$, as $\ensuremath{B}$ is the root of $T$. \item 6. True, as there is only one tree. \end{itemize} \item $\varphi(\ensuremath{B}) \not = \ensuremath{B}$ but is a descendant of $\ensuremath{B}$. By induction assumption 2, there exists exactly one tree among the trees associated with the children of $\ensuremath{B}$ containing $\varphi(\ensuremath{B})$. Let us assume w.l.o.g. that it is $T_1$, with root $\ensuremath{B}_1$. We build the following tree $T$: for all $T_i \not = T_1$, we add to $\ensuremath{B}_1$ a subtree by putting the root of $T_i$ under $\ensuremath{B}_1$.
\begin{itemize} \item 1. No added elements, hence by induction assumption 1. \item 2. No added elements, hence by induction assumption 2. \item 3. To check for pairs ($\ensuremath{B}_1$,$\ensuremath{B}_i$), where $\ensuremath{B}_i$ is the root of $T_i$. $\ensuremath{\Sigma} \models \alphas{\ensuremath{B}_1} \rightarrow \alphas{\varphi(\ensuremath{B})}$, as $\varphi(\ensuremath{B})$ is a descendant of $\ensuremath{B}_1$ in $T_1$. Moreover, $\ensuremath{\Sigma} \models \alphas{\varphi(\ensuremath{B})} \rightarrow \alphas{\ensuremath{B}_i}$, as $\varphi(\ensuremath{B})$ is more specific than $\ensuremath{B}$, and $\varphi$ is the identity on shared terms. \item 4. For every term $\ensuremath{t}$ appearing in a single tree, the connectivity property holds by induction assumption 4. Let $\ensuremath{t}$ be a term appearing in two trees. Then $\ensuremath{t}$ appears in the roots of both trees by 5, and must appear in $\ensuremath{B}$ by connectivity of $\ensuremath{S}tree$, hence in $\varphi(\ensuremath{B})$, and hence in $\ensuremath{B}_1$ (by 5). As $\ensuremath{B}_1$ and the roots of both trees are neighbors, this proves the result. \item 5. Let $\ensuremath{t}$ belong to $T$ and to $C \ensuremath{S}minus T$. By connectivity of $\ensuremath{S}tree$, $\ensuremath{t}$ belongs to $\ensuremath{B}$, hence to $\varphi(\ensuremath{B})$ (because $\varphi(t) =t$). As $\ensuremath{t}$ belongs both to $T_1$ and to $C \ensuremath{S}minus T_1$, $\ensuremath{t}$ belongs to $\ensuremath{B}_1$, and hence to the root of the assigned tree. \item 6. True, as there is only one tree. \end{itemize} \item $\varphi(\ensuremath{B})$ is not a descendant of $\ensuremath{B}$. We assign to $\ensuremath{B}$ the union of the forests associated with its children. \begin{itemize} \item 1.-5. By induction assumption. \item 6. Let $\ensuremath{t}$ belong to two trees $T_1$ and $T_2$.
If $T_1$ and $T_2$ come from forests associated with two different children, $\ensuremath{t}$ belongs to $\ensuremath{B}$ by connectivity of $\ensuremath{S}tree$. If $T_1$ and $T_2$ come from the same forest, $\ensuremath{t}$ belongs to $\ensuremath{B}$ by induction assumption 6. In both cases, $\ensuremath{t}$ belongs to $\ensuremath{B}$. As $\ensuremath{t}$ is in $C$, $\ensuremath{t}$ belongs to $\varphi(\ensuremath{B})$. By connectivity of $\ensuremath{S}tree$, it belongs to the parent of $\ensuremath{B}$, because that parent is on the path from $\ensuremath{B}$ to $\varphi(\ensuremath{B})$, which proves 6. \end{itemize} \end{itemize} \end{itemize} Finally, we check that the following property is satisfied: for any bag $\ensuremath{B}$, if $\ensuremath{B}$ is in the core, then a single tree with root $\ensuremath{B}$ is assigned to it. If $\alpha$ is in the core, we have built such a tree. It remains to obtain an entailment tree: for that, we bring up nodes to the highest possible level with respect to shared terms. \end{proof} Unlike in the semi-oblivious case, we cannot conclude that the chase does not terminate as soon as a UPW is built, because the associated atoms may later be mapped to other atoms, which would remove the UPW. Instead, starting from the initial bag, we recursively add bags that do not generate a UPW (for instance, we can recursively add all such non-twin children to a leaf). Once the process terminates (the non-twin condition and the absence of UPW ensure that it does), we check that the obtained set of atoms $C$ is complete (i.e., is a model of the KB): for that, it suffices to perform each possible rule application on $C$ and check if the resulting set of atoms is equivalent to $C$. See Algorithm \ref{algorithm-core-chase}. The set $C$ may not be a core, but it is complete iff it contains a complete core.
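The equivalence test used in this completeness check amounts to finding homomorphisms in both directions, which can be sketched by brute force over the possible images of nulls. A minimal illustration (a hypothetical Python encoding, not part of the formal development; atoms are tuples and nulls are the terms prefixed by `_`), run on the sets $A$ and $A'$ of Example \ref{example-mlm}:

```python
from itertools import product

def hom_exists(src, dst):
    """Brute-force search for a homomorphism from atom set `src` to `dst`:
    nulls (terms prefixed by '_') may be remapped; constants stay fixed."""
    nulls = sorted({t for a in src for t in a[1:] if t.startswith('_')})
    targets = sorted({t for a in dst for t in a[1:]})
    for imgs in product(targets, repeat=len(nulls)):
        m = dict(zip(nulls, imgs))
        if all((a[0], *(m.get(t, t) for t in a[1:])) in dst for a in src):
            return True
    return False

def equivalent(x, y):
    return hom_exists(x, y) and hom_exists(y, x)

# Atoms of Example (example-mlm): A = atoms of bags B0..B3; A' drops B1's atom.
A = {('s', 'a'), ('p', '_y0', '_z0', 'a'),
     ('q', '_y0', '_v0', 'a'), ('p', '_y0', '_v0', 'a')}
A_prime = A - {('p', '_y0', '_z0', 'a')}

print(equivalent(A, A_prime))   # True: A retracts onto its strict subset A'
print(any(equivalent(A_prime, A_prime - {a})
          for a in A_prime))    # False: no strict subset works, A' is a core
```

The exhaustive enumeration is of course only workable on toy instances; it merely makes the mutual-homomorphism criterion concrete.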
We now focus on the key properties of entailment trees associated with complete cores. We first introduce the notion of \emph{redundant bags}, which captures some cases of bags that cannot appear in a finite core. As witnessed by Example \ref{example-mlm}, this is not a characterization: $B_1$ is not redundant (according to Definition \ref{definition-redundancy} below), but cannot belong to a complete core. \begin{definition}[Redundancy] \label{definition-redundancy} Given an entailment tree, a bag $\ensuremath{B}_c$ child of $\ensuremath{B}$ is redundant if there exists an atom $\beta$ (that may not belong to the tree) with \emph{(i)} $\ensuremath{\Sigma} \models \alphas{\ensuremath{B}} \rightarrow \beta$; \emph{(ii)} there is a homomorphism from $\alphas{\ensuremath{B}_c}$ to $\beta$ that is the identity on $\shared{\ensuremath{B}_c}$ \emph{(iii)} $|\ensuremath{t}s{\beta} \ensuremath{S}minus \ensuremath{t}s{\ensuremath{B}}| < |\ensuremath{t}s{\ensuremath{B}_c} \ensuremath{S}minus \ensuremath{t}s{\ensuremath{B}}|$. \end{definition} Note that $\ensuremath{B}_c$ may be redundant even if the ``cause'' for redundancy, i.e., $\beta$, is not in the tree yet. The role of this notion in the proofs is as follows: we show that if a complete entailment tree contains a UPW then it contains a redundant bag, and that a complete core cannot contain a redundant bag, hence cannot contain a UPW either. To prove this, we rely on Proposition \ref{proposition-swissknife-bag-copy} below, which is the counterpart for entailment trees of Proposition \ref{proposition-sharing-type-children-derivation-tree}: performing a bag copy from an entailment tree results in an entailment tree (the notion of prefix is not needed, since a prefix of an entailment tree is an entailment tree) and keeps the properties of the copied bag.
\begin{propositionrep} \label{proposition-swissknife-bag-copy} Let $\ensuremath{B}$ be a bag of an entailment tree $\ensuremath{\mathcal{T}}$, and let $\ensuremath{B}'$ be a bag of an entailment tree $\ensuremath{\mathcal{T}}'$ such that $\ensuremath{B} \ensuremath{\equiv_{st}} \ensuremath{B}'$. Let $\ensuremath{B}_c$ be a child of $\ensuremath{B}$ and $\ensuremath{B}_c'$ be a copy of $\ensuremath{B}_c$ under $\ensuremath{B}'$. Let $\ensuremath{\mathcal{T}}''$ be the extension of $\ensuremath{\mathcal{T}}'$ where $\ensuremath{B}_c'$ is added as a child of $\ensuremath{B}'$. Then \emph{(i)} $\ensuremath{\mathcal{T}}''$ is an entailment tree; \emph{(ii)} $\ensuremath{B}_c$ and $\ensuremath{B}_c'$ have the same sharing type; \emph{(iii)} $\ensuremath{B}_c'$ is redundant if and only if $\ensuremath{B}_c$ is redundant. \end{propositionrep} In light of this, the copy of a bag can be naturally extended to the copy of the whole subtree rooted in a bag, which is a crucial element in the proof of Proposition~\ref{proposition-uc-excluded-main-text} below: \begin{toappendix} Another important property of entailment trees (which is also satisfied by derivation trees) is that their structure provides information on where a bag may be mapped by $\varphi$ if its parent is left invariant by $\varphi$. \begin{lemma} \label{lemma-locality} Let $\ensuremath{\mathcal{T}}$ be an entailment tree. Let $\varphi$ be a homomorphism from the atoms of $\ensuremath{\mathcal{T}}$ to themselves. Let $\ensuremath{B}_p$ be such that $\varphi_{\mid \ensuremath{t}s{\ensuremath{B}_p}}$ is the identity. Let $\ensuremath{B}_c$ be a child of $\ensuremath{B}_p$. Then $\varphi(\ensuremath{B}_c)$ is in the subtree rooted in $\varphi(\ensuremath{B}_p) = \ensuremath{B}_p$. \end{lemma} \begin{proof} $\ensuremath{B}_c$ is a child of $\ensuremath{B}_p$, thus there exists at least one term generated in $\ensuremath{B}_p$ that is a term of $\ensuremath{B}_c$.
As $\varphi$ is the identity on $\ensuremath{B}_p$, this term also belongs to $\varphi(\alphas{\ensuremath{B}_c})$. Thus $\varphi(\alphas{\ensuremath{B}_c})$ must be in a bag of the subtree rooted in $\ensuremath{B}_p$. \end{proof} \end{toappendix} \begin{proposition} \label{proposition-uc-excluded-main-text} A complete core cannot contain \emph{(i)} a redundant bag \emph{(ii)} an unbounded-path witness. \end{proposition} \begin{toappendix} \begin{proposition} \label{proposition-strong-redundancy} A complete core cannot contain a redundant bag. \end{proposition} \begin{proof} Let $\ensuremath{\mathcal{T}}$ be a complete entailment tree, and let $\hat{\ensuremath{B}}$ be a redundant bag. We prove that there exists a non-injective endomorphism of $\ensuremath{\mathcal{T}}$, showing that $\ensuremath{\mathcal{T}}$ cannot be a core. For any entailment tree $\ensuremath{\mathcal{T}}_p$ that is a prefix of $\ensuremath{\mathcal{T}}$, we build $\ensuremath{\mathcal{T}}'_{p}$ and a mapping $\varphi$ from the terms of $\ensuremath{\mathcal{T}}_p$ to the terms of $\ensuremath{\mathcal{T}}'_{p}$ as follows: \begin{itemize} \item if $\ensuremath{\mathcal{T}}_p$ does not contain $\hat{\ensuremath{B}}$, we define $\ensuremath{\mathcal{T}}_{p}' = \ensuremath{\mathcal{T}}_p$ and $\varphi$ as the identity \item for the prefix that contains all nodes of $\ensuremath{\mathcal{T}}$, including $\hat{\ensuremath{B}}$, except the descendants of $\hat{\ensuremath{B}}$, we define $\ensuremath{\mathcal{T}}'_p$ as $\ensuremath{\mathcal{T}}_p$ to which we add, if necessary, a leaf under the parent of $\hat{\ensuremath{B}}$ in $\ensuremath{\mathcal{T}}$, namely the atom witnessing the redundancy of $\hat{\ensuremath{B}}$. We define $\varphi$ as the identity on any term that is not generated in $\hat{\ensuremath{B}}$, and, on the terms generated in $\hat{\ensuremath{B}}$, as the homomorphism witnessing the redundancy of $\hat{\ensuremath{B}}$.
\item if we have defined $\ensuremath{\mathcal{T}}'_{p}$ for $\ensuremath{\mathcal{T}}_p$, and $\ensuremath{\mathcal{T}}_n$ is $\ensuremath{\mathcal{T}}_p$ to which a leaf $\ensuremath{B}_d$ has been added, we define $\ensuremath{\mathcal{T}}'_n$ by adding the bag $\varphi(\ensuremath{B}_d)$ where it belongs, extending $\varphi$ to the terms generated in $\ensuremath{B}_d$ by choosing fresh images. \end{itemize} By construction, $\ensuremath{\mathcal{T}}'$ is an entailment tree, and $\varphi$ is a homomorphism from $\ensuremath{\mathcal{T}}$ to $\ensuremath{\mathcal{T}}'$. Moreover, $\varphi$ is not injective: indeed, as $\hat{\ensuremath{B}}$ is redundant, $\varphi$ is not injective on the terms of $\hat{\ensuremath{B}}$. As $\ensuremath{\mathcal{T}}$ is complete, there exists a homomorphism from $\ensuremath{\mathcal{T}}'$ to $\ensuremath{\mathcal{T}}$. Hence the composition of the two homomorphisms is a homomorphism from $\ensuremath{\mathcal{T}}$ to itself, which is not injective, as $\varphi$ is not. Hence $\ensuremath{\mathcal{T}}$ is not a core. \end{proof} \end{toappendix} \begin{toappendix} \begin{proposition} \label{proposition-uc-excluded} A complete core cannot contain any unbounded-path witness. \end{proposition} \begin{proof} We prove the result by contradiction. Let us assume that $\ensuremath{\mathcal{T}}$ is a complete core containing an unbounded-path witness $(\ensuremath{B},\ensuremath{B}')$. Let us choose $(\ensuremath{B},\ensuremath{B}')$ such that $\ensuremath{B}'$ is of maximal depth with respect to its branch, that is, there is no unbounded-path witness $(\ensuremath{B}''',\ensuremath{B}'')$ with $\ensuremath{B}''$ a strict descendant of $\ensuremath{B}'$. Let $\ensuremath{B}_c$ be the child of $\ensuremath{B}$ on the path from $\ensuremath{B}$ to $\ensuremath{B}'$.
Let us denote by $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c}$ the subtree of $\ensuremath{\mathcal{T}}$ which is rooted at $\ensuremath{B}_c$ and by $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c'}$ a copy of $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c}$ under $B'$ whose root is $\ensuremath{B}_c'$. Then, let $\ensuremath{\mathcal{T}}'$ be the extension of $\ensuremath{\mathcal{T}}$ where $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c'}$ is added as a child of $\ensuremath{B}'$. We want to show that there exists a bag $\ensuremath{B}_r'$ child of $\ensuremath{B}'$ and a mapping from $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c'}$ into $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$, which is the identity on the terms of $\ensuremath{\mathcal{T}}$. More precisely, we want to show that for each $\ensuremath{B}_d'$ descendant of $ \ensuremath{B}_c'$ the following properties hold. \begin{enumerate} \item the image of $\ensuremath{B}_d'$ belongs to $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$ \item the image of a term generated in $\ensuremath{B}_d'$ is a term generated in a bag of $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$ \end{enumerate} We do so by induction on $k$ the distance between $\ensuremath{B}_d'$ and $\ensuremath{B}_c'$ in $\ensuremath{\mathcal{T}}$. \begin{itemize} \item If $k=0$ then $\ensuremath{B}_d'=\ensuremath{B}_c'$. Because $\ensuremath{\mathcal{T}}$ is a complete core, there exists a homomorphism from the atoms of $\mathcal{T'} $ to those of $ \mathcal{T}$ which is the identity on the terms of $\ensuremath{\mathcal{T}}$. We show that the image of $\ensuremath{B}'_c$ is a strict descendant of a child of $\ensuremath{B}'$. Note first that no child of $\ensuremath{B}'$ in $\ensuremath{\mathcal{T}}$ can be a safe renaming of $\ensuremath{B}'_c$. 
Indeed, by Proposition \ref{proposition-swissknife-bag-copy}, $\ensuremath{B}_c$ and $\ensuremath{B}_c'$ have the same sharing type, and therefore $\ensuremath{B}_c'$ (as well as any safe renaming of its generated terms) cannot be a child of $\ensuremath{B}'$, because the pair $(\ensuremath{B}_c,\ensuremath{B}_c')$ would form an unbounded-path witness with $\ensuremath{B}_c'$ strictly deeper than $\ensuremath{B}'$. Then, if $\ensuremath{B}_c'$ maps to $\ensuremath{B}'$, the pair $(\ensuremath{B}',\ensuremath{B}_c')$ is redundant and therefore $(\ensuremath{B},\ensuremath{B}_c)$ is also redundant, by Proposition \ref{proposition-swissknife-bag-copy}, which in turn implies that $\ensuremath{\mathcal{T}}$ is not a core, by Proposition \ref{proposition-strong-redundancy}. Finally, if $\ensuremath{B}_c'$ maps to any child of $\ensuremath{B}'$ then it does so by specializing the sharing type of $\ensuremath{B}_c'$ (as we showed that no safe renaming of $\ensuremath{B}_c'$ can be a child of $\ensuremath{B}'$), which means that $\ensuremath{B}_c'$ is redundant. Therefore, by Proposition \ref{proposition-swissknife-bag-copy}, $\ensuremath{B}_c$ is also redundant and hence, by Proposition \ref{proposition-strong-redundancy}, $\ensuremath{\mathcal{T}}$ is not a core. This proves that the image of $\ensuremath{B}_c'$ is a strict descendant of some $\ensuremath{B}_r'$ child of $\ensuremath{B}'$. Now, to prove the second point, let $t$ be a term generated in $\ensuremath{B}'_c$ and $t'$ its image. It is easy to see that, for entailment trees, any term that belongs to two bags in ancestor-descendant relationship also belongs to the bags on the shortest path between them. Therefore, if $t'$ is generated by a strict ancestor of $\ensuremath{B}_r'$ then $t'$ belongs to the terms of $\ensuremath{B}'$.
This means that, starting from the sharing type of $\ensuremath{B}_c'$, one can build a strictly more specific sharing type where the position corresponding to the generated term $t$ becomes shared with $\ensuremath{B}'$. From this one can find a node $\ensuremath{B}_c''$ which is strictly more specific than $\ensuremath{B}_c'$ and that can be added as a child of $\ensuremath{B}'$. This means that $\ensuremath{B}_c'$ is redundant and, by Proposition \ref{proposition-swissknife-bag-copy}, $\ensuremath{B}_c$ is also redundant, so $\ensuremath{\mathcal{T}}$ is not a core. \item Let us assume that both properties hold for all bags at distance $k$ from $\ensuremath{B}_c'$. We want to show that they still hold for the bags at distance $k+1$. Let $\ensuremath{B}_d'$ be a node at distance $k+1$ from $\ensuremath{B}_c'$ whose parent is $\ensuremath{B}'_\delta$. By definition, $\ensuremath{B}_d'$ contains a term generated by $B'_\delta$ and, by induction, we know that the image of this term is generated in a bag of $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$. Thus, it follows that the image of $\ensuremath{B}_d'$ belongs to $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$, as required by the first point. For the second point, we reason by contradiction and show that when the property does not hold, $\ensuremath{\mathcal{T}}$ admits a non-injective endomorphism and thus is not a core. We proceed with the following construction. Let $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r}$ be a copy of $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$ under $\ensuremath{B}$ and $\ensuremath{\mathcal{T}}''$ the extension of $\ensuremath{\mathcal{T}}$ where $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r}$ is added as a child of $\ensuremath{B}$. We know by induction that there exists a homomorphism from $\ensuremath{\mathcal{T}}'$ to $\ensuremath{\mathcal{T}}$ mapping all nodes at distance ${k+1}$ from $\ensuremath{B}'_c$ to the subtree rooted at $\ensuremath{B}'_r$.
From this, we can conclude that there exists a homomorphism from $\ensuremath{\mathcal{T}}_{(k+1)}$ to $\ensuremath{\mathcal{T}}''$, where $\ensuremath{\mathcal{T}}_{(k+1)}$ is the prefix of $\ensuremath{\mathcal{T}}$ which includes all nodes of $\ensuremath{\mathcal{T}}$ except for the descendants of $\ensuremath{B}_c$ that are at distance strictly greater than $k+1$ from it. Now, we further extend $\ensuremath{\mathcal{T}}''$ by adding an image for all nodes which are at distance strictly greater than $k+1$ from $\ensuremath{B}_c$, thereby obtaining a new entailment tree $\ensuremath{\mathcal{T}}'''$. It follows that $\ensuremath{\mathcal{T}}$ can be mapped to $\ensuremath{\mathcal{T}}'''$. Besides, since $\ensuremath{\mathcal{T}}$ is complete, there exists a homomorphism from $\ensuremath{\mathcal{T}}'''$ to $\ensuremath{\mathcal{T}}$. So, by composing these two homomorphisms, we get a homomorphism from $\ensuremath{\mathcal{T}}$ to itself. We show that the homomorphism from $\ensuremath{\mathcal{T}}$ to $\ensuremath{\mathcal{T}}'''$ is non-injective. Recall that to construct $\ensuremath{\mathcal{T}}'$ the whole subtree rooted at $\ensuremath{B}_c'$ has been copied from the subtree rooted at $\ensuremath{B}_c$. Let us denote by $\ensuremath{B}_d$ the node at distance $k+1$ from $\ensuremath{B}_c$ from which $\ensuremath{B}_d'$ has been copied under $\ensuremath{B}_\delta'$. Let $t$ be a term generated at position $i$ in $\ensuremath{B}_d'$. If its image was generated by a strict ancestor of $\ensuremath{B}_r'$, then it would also belong to the terms of $\ensuremath{B}'$.
By Proposition \ref{proposition-swissknife-bag-copy}, $\ensuremath{B}_d$ and $\ensuremath{B}_d'$ have the same sharing type, hence the mapping from $\ensuremath{\mathcal{T}}_{(k+1)}$ to $\ensuremath{\mathcal{T}}''$ (and thus that from $\ensuremath{\mathcal{T}}$ to $\ensuremath{\mathcal{T}}'''$) maps the generated term at position $i$ of $\ensuremath{B}_d$, which we call $s$, to a distinct term in $\ensuremath{B}$, which we call $s'$. Moreover, the homomorphism is the identity on $s'$. Therefore, the homomorphism from $\ensuremath{\mathcal{T}}$ to $\ensuremath{\mathcal{T}}'''$ is non-injective, as both $s'$ and $s$ have the same image. \end{itemize} To finish the proof, we proceed with the following construction. Let $\ensuremath{\mathcal{T}}^*$ be the entailment tree derived from $\ensuremath{\mathcal{T}}$ where \emph{(i)} the whole subtree rooted at $\ensuremath{B}_r'$ has been copied under $\ensuremath{B}$ and \emph{(ii)} the subtree rooted at $\ensuremath{B}_c$ has been removed. Note that $\ensuremath{\mathcal{T}}^*$ is strictly smaller than $\ensuremath{\mathcal{T}}$: we added one bag for each node of the subtree rooted at $\ensuremath{B}'_r$, which is a strict subtree of the subtree rooted at $\ensuremath{B}_c$, and the latter has been removed. Now, because $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c'}$ maps to $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r'}$, it follows that $\ensuremath{\mathcal{T}}_{\ensuremath{B}_c}$ maps to $\ensuremath{\mathcal{T}}_{\ensuremath{B}_r}$, and by extending this homomorphism with the identity on all other terms we get that $\ensuremath{\mathcal{T}}$ can be mapped to $\ensuremath{\mathcal{T}}^*$. Hence, $\ensuremath{\mathcal{T}}$ is not a core. \end{proof} \end{toappendix} \begin{corollary} \label{corollary-core-finiteness-decidability} The all-sequence termination problem for the core chase on linear rules is decidable.
\end{corollary} \begin{algorithm} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A set of linear rules} \Output{\texttt{true} if and only if the core chase terminates on all instances} \For{each canonical atom $\alpha$} { Let $\ensuremath{S}tree$ be the entailment tree restricted to $\alpha$;\\ \While{a bag $\ensuremath{B}$ can be added to $\ensuremath{S}tree$ respecting the twin-free entailment-tree conditions and without creating a UPW}{ add $\ensuremath{B}$ to $\ensuremath{S}tree$ } \If{there is a rule $\ensuremath{\sigma}$ applicable to $\treeatoms{\ensuremath{S}tree}$ through $\pi$ s.t. $\treeatoms{\ensuremath{S}tree} \not \models \treeatoms{\ensuremath{S}tree} \cup \ensuremath{\pi^s}(\head{\ensuremath{\sigma}})$} {\Return \texttt{false}} } \Return \texttt{true} \caption{Deciding core chase termination} \label{algorithm-core-chase} \end{algorithm} A rough complexity analysis of this algorithm yields a \textsc{2ExpTime} upper bound for the termination problem. Indeed, the exponential number of (sharing) types bounds the number of canonical instances to be checked, the arity of the tree, and the length of a path without UPW in the tree, and each edge can be generated with a call to a \textsc{PSpace} oracle. \section{Concluding remarks} We have shown the decidability of chase termination over linear rules for three main chase variants (semi-oblivious, restricted, core), following a novel approach based on derivation trees, their generalization to entailment trees, and a single notion of forbidden pattern. As far as we know, these are the first decidability results for the restricted chase, for both versions of the termination problem (i.e., \emph{all-sequence} and \emph{one-sequence} termination). The simplicity of the structures and algorithms makes them amenable to implementation. We leave for future work the study of the precise complexity of the termination problems.
A straightforward analysis of the complexity of the algorithms that decide the termination of the restricted and core chases yields upper bounds; however, we believe that a finer analysis of the properties of sharing types would provide tighter upper bounds. Future work also includes the extension of the results to more complex classes of existential rules: linear rules with a complex head, which is relevant for the termination of the restricted and core chases, and more expressive classes from the guarded family. Derivation trees were precisely defined to represent derivations with guarded rules and their extensions (i.e., greedy bounded-treewidth sets), hence they seem to be a promising tool to study chase termination on that family. \end{document}
\begin{document} \begin{abstract} We consider fibrewise singly generated Fell-bundles over \'etale groupoids. Given a continuous real-valued 1-cocycle on the groupoid, there is a natural dynamics on the cross-sectional algebra of the Fell bundle. We study the Kubo--Martin--Schwinger equilibrium states for this dynamics. Following work of Neshveyev on equilibrium states on groupoid $C^*$-algebras, we describe the equilibrium states of the cross-sectional algebra in terms of measurable fields of traces on the $C^*$-algebras of the restrictions of the Fell bundle to the isotropy subgroups of the groupoid. As a special case, we obtain a description of the trace space of the cross-sectional algebra. We apply our result to generalise Neshveyev's main theorem to twisted groupoid $C^*$-algebras, and then apply this to twisted $C^*$-algebras of strongly connected finite $k$-graphs. \end{abstract} \keywords{Groupoid; Fell bundle; $k$-graph; cocycle; KMS state.} \subjclass[2010]{46L35} \maketitle \section{Introduction} The study of KMS states of $C^*$-algebras was originally motivated by applications of $C^*$-dynamical systems to the study of quantum statistical mechanics \cite{BR}. However, KMS states make sense for any $C^*$-dynamical system, even if it does not model a physical system, and there is significant evidence that the KMS data is a useful invariant of a dynamical system. For example, the results of Enomoto, Fujii and Watatani \cite{EFW} show that the KMS data for a Cuntz--Krieger algebra encodes the topological entropy of the associated shift space. And Bost and Connes showed that the Riemann zeta function can be recovered from the KMS states of an appropriate $C^*$-dynamical system \cite{BC}. As a result there has recently been significant interest in the study of KMS states of $C^*$-dynamical systems arising from combinatorial and algebraic data \cite{BC, CarlsenLarsen, N, Thomsen, Kakariadis}. 
In particular, there are indications of a close relationship between the KMS structure of such systems and the ideal structure of the $C^*$-algebra \cite{aHLRS, LLNSW, Yang}. Our original motivation in this paper was to investigate whether the relationship, discovered in \cite{aHLRS}, between simplicity and the presence of a unique KMS state for the $C^*$-algebra of a strongly connected $k$-graph persists in the situation of twisted higher-rank graph $C^*$-algebras. The methods used to establish this in \cite{aHLRS} exploit direct calculations with the generators of the $C^*$-algebra. Unfortunately, a similar approach seems to be more or less impossible in the situation of twisted $k$-graph $C^*$-algebras, because the twisting data quickly renders the required calculations unmanageable. Instead we base our approach on groupoid models for $k$-graph $C^*$-algebras and their analogues. Building on ideas from \cite{KR}, Neshveyev proved in \cite{N} that the KMS states of a groupoid $C^*$-algebra for a dynamics induced by a continuous real-valued cocycle on the groupoid are parameterised by pairs consisting of a suitably invariant measure $\mu$ on the unit space and an equivalence class of $\mu$-measurable fields of traces on the $C^*$-algebras of the fibres of the isotropy bundle that are equivariant for the natural action of the groupoid by conjugation. Though Neshveyev's results are not used directly to compute the KMS states of $k$-graph algebras in \cite{aHLRS}, it is demonstrated in \cite[Section~12]{aHLRS} that the main results of that paper could be recovered using Neshveyev's theorems. Every twisted $k$-graph algebra can be realised as a twisted groupoid $C^*$-algebra \cite{KPS1}, and simplicity of twisted $k$-graph algebras can be characterised using this description \cite{KPS2}. Twisted $k$-graph $C^*$-algebras are in turn a special case of cross-sectional algebras of Fell bundles over groupoids.
Since the latter constitute a very flexible and widely applicable model for $C^*$-algebraic representations of dynamical systems, we begin by generalising Neshveyev's theorems to this setting; since this simplifies our results and covers our key example of twisted groupoid $C^*$-algebras, we restrict attention to Fell bundles whose fibres are all singly generated. Neshveyev's approach relies heavily on Renault's Disintegration Theorem \cite{Re}, and we likewise rely heavily on the generalisation of the Disintegration Theorem to Fell-bundle $C^*$-algebras established by Muhly and Williams \cite{MW}. Our first main theorem, Theorem~\ref{thm2}, is a direct analogue in the situation of Fell bundles of Neshveyev's result. It shows that the KMS states on the cross-sectional algebra of a Fell bundle $\mathcal{B}$ with singly generated fibres over an \'etale groupoid $\mathcal{G}$ are parameterised by pairs consisting of a suitably invariant measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-measurable field of traces on the $C^*$-algebras $C^*(\mathcal{G}^x_x, \mathcal{B})$ of the restrictions of $\mathcal{B}$ to the isotropy groups of $\mathcal{G}$ that satisfies a suitable $\mathcal{G}$-invariance condition. By applying this result with inverse temperature equal to zero, we obtain a description of the trace space of $C^*(\mathcal{G}, \mathcal{B})$. Given a continuous $\mathbb{T}$-valued $2$-cocycle $\sigma$ on $\mathcal{G}$, or more generally a twist over $\mathcal{G}$ in the sense of Kumjian \cite{Kum}, there is a Fell line-bundle over $\mathcal{G}$ whose cross-sectional algebra coincides with the twisted $C^*$-algebra $C^*(\mathcal{G},\sigma)$ (see Lemma~\ref{Fell-groupoid}). We apply Theorem~\ref{thm2} to such bundles to obtain a generalisation of Neshveyev's results \cite[Theorem~1.2 and Theorem~1.3]{N} to twisted groupoid $C^*$-algebras (see Corollary~\ref{kms states for groupoid}).
We next consider a strongly connected $k$-graph $\Lambda$ in the sense of \cite{KP}. There is only one probability measure $M$ on the unit space $\mathcal{G}_\Lambda^{(0)} = \Lambda^\infty$ that is invariant in the sense described above \cite[Lemma~12.1]{aHLRS}. Given a cocycle $c$ on $\Lambda$, Kumjian, Pask and the second author introduced a twisted $C^*$-algebra $C^*(\Lambda,c)$ and showed that the cocycle $c$ induces a cocycle $\sigma_c$ on the associated path groupoid $\mathcal{G}_\Lambda$ such that the $C^*$-algebras $C^*(\Lambda,c)$ and $C^*(\mathcal{G}_\Lambda,\sigma_c)$ are isomorphic \cite[Corollary~7.9]{KPS1}. The cocycle $\sigma_c$ determines an antisymmetric bicharacter $\omega_c$ on $\operatorname{Per}\Lambda$ (see \cite{OPT} or \cite[Proposition~3.1]{KPS2}). The trace simplex of $C^*(\operatorname{Per}\Lambda, \sigma_c)$ is canonically isomorphic to the state space of the commutative $C^*$-algebra $C^*(Z_{\omega_c})$ of the centre $Z_{\omega_c}$ of the bicharacter $\omega_c$ (see Lemma~\ref{lemma:traces}). Conjugation in the line-bundle associated to $\sigma_c$ determines an action of the quotient $\mathcal{H}_\Lambda$ of $\mathcal{G}_\Lambda$ by the interior of its isotropy on $\Lambda^\infty \times Z_{\omega_c}$. Kumjian, Pask and the second author showed that $C^*(\Lambda, c)$ is simple if and only if this action is minimal. Here we prove that the KMS states of $C^*(\Lambda, c)$ are parameterised by $M$-measurable fields of traces on $C^*(Z_{\omega_c})$ that are invariant for the same action of $\mathcal{H}_\Lambda$. Unfortunately, however, we have been unable to prove that minimality of the action implies that it admits a unique invariant field of traces. We begin with a section on preliminaries. There we show that if a $\mathbb{T}$-valued $2$-cocycle $\sigma$ on a finitely generated free abelian group $F$ is cohomologous to an antisymmetric bicharacter $\omega$, then the trace spaces of $C^*(F,\sigma)$ and $C^*(Z_\omega)$ are isomorphic.
In Section~\ref{sec:KMS-Fell}, we prove our main theorems about the KMS states on the cross-sectional algebra of a Fell bundle. In Section~\ref{sec:KMS twisted G}, we construct a Fell bundle from a cocycle on a groupoid, and use our results in Section~\ref{sec:KMS-Fell} to obtain a twisted version of Neshveyev's results in \cite{N}. Section~\ref{sec:KMS k-graph} contains our results about the preferred dynamics on the twisted $C^*$-algebras of $k$-graphs. We finish off by posing the question of whether simplicity of $C^*(\Lambda, c)$ implies that it admits a unique KMS state. \section{Preliminaries} Throughout this paper $\mathbb{T}$ is regarded as a multiplicative group with identity $1$. \subsection{Groupoids} Let $\mathcal{G}$ be a locally compact second countable Hausdorff groupoid (see \cite{Re}). For each $x\in \mathcal{G}^{(0)}$, we write $\mathcal{G}^x=r^{-1}(x)$, $\mathcal{G}_x=s^{-1}(x)$ and $\mathcal{G}^x_x=\mathcal{G}_x\cap\mathcal{G}^x$. The set $\mathcal{I}so(\mathcal{G}):=\bigcup_{x\in \mathcal{G}^{(0)}}\mathcal{G}^x_x$ is called the \textit{isotropy} of $\mathcal{G}$. We say $\mathcal{G}$ is \'{e}tale if $r$ and $s$ are local homeomorphisms. A bisection of $\mathcal{G}$ is an open subset $U$ of $\mathcal{G}$ such that $r|_{U}$ and $s|_{U}$ are homeomorphisms onto open subsets of $\mathcal{G}^{(0)}$. A \textit{continuous $\mathbb{T}$-valued 2-cocycle} $\sigma$ on $\mathcal{G}$ is a continuous function $\sigma:\mathcal{G}^2\rightarrow \mathbb{T}$ such that $\sigma(r(\gamma),\gamma)=\sigma(\gamma,s(\gamma))=1$ for all $\gamma\in \mathcal{G}$ and $\sigma(\alpha,\beta)\sigma(\alpha\beta,\gamma)=\sigma(\beta,\gamma)\sigma(\alpha,\beta\gamma)$ for all composable triples $(\alpha,\beta,\gamma)$. We write $Z^2(\mathcal{G},\mathbb{T})$ for the group of all continuous $\mathbb{T}$-valued 2-cocycles on $\mathcal{G}$. Let $b:\mathcal{G}\rightarrow \mathbb{T}$ be a continuous function such that $b(x)=1$ for all $x\in \mathcal{G}^{(0)}$.
The function $\delta^1 b: \mathcal{G}^2\rightarrow \mathbb{T}$ given by $\delta^1 b(\gamma,\alpha)=b(\gamma)b(\alpha)\overline{b(\gamma\alpha)}$ is a continuous $\mathbb{T}$-valued 2-cocycle on $\mathcal{G}$, called the \textit{$2$-coboundary} associated to $b$. Two continuous $\mathbb{T}$-valued 2-cocycles $\sigma,\sigma'$ are \textit{cohomologous} if $\sigma'\overline{\sigma}=\delta^1b$ for some continuous $b$. A \textit{continuous $\mathbb{R}$-valued 1-cocycle} $D$ on $\mathcal{G}$ is a continuous homomorphism from $\mathcal{G}$ to $\mathbb{R}$. Given $\sigma\in Z^2(\mathcal{G},\mathbb{T})$, the space $C_c(\mathcal{G})$ is a $*$-algebra with the involution and multiplication defined by \[ f^*(\gamma):=\overline{\sigma(\gamma,\gamma^{-1})f(\gamma^{-1})} \text{ and } \] \[ (fg)(\gamma):=\sum_{\alpha\beta=\gamma}\sigma(\alpha,\beta)f(\alpha)g(\beta)\quad \text{for } f,g\in C_c(\mathcal{G}). \] We denote this $*$-algebra by $C_c(\mathcal{G},\sigma)$. The formula \[\|f\|_I=\max\Big(\sup_{x\in \mathcal{G}^{(0)}}\sum_{\lambda\in \mathcal{G}^x}|f(\lambda)|,\sup_{x\in \mathcal{G}^{(0)}}\sum_{\lambda\in \mathcal{G}_x}|f(\lambda)| \Big)\] determines a norm on $C_c(\mathcal{G},\sigma)$. By a $*$-representation of $C_c(\mathcal{G},\sigma)$, we mean a $*$-homomorphism from $C_c(\mathcal{G},\sigma)$ to the bounded operators on a Hilbert space. The \textit{twisted groupoid} $C^*$-algebra $C^*(\mathcal{G},\sigma)$ is the completion of $C_c(\mathcal{G},\sigma)$ in the universal norm \[\|f\|:=\sup\{\|L(f)\|: L \text{ is a $*$-representation of } C_c(\mathcal{G},\sigma)\}. \] A measure $\mu$ on $\mathcal{G}^{(0)}$ is called \textit{quasi-invariant} if the measures \[\nu(f):=\int_{\mathcal{G}^{(0)}}\sum_{\gamma\in\mathcal{G}^x}f(\gamma)\,d\mu(x) \quad\text{ and }\quad \nu^{-1}(f):=\int_{\mathcal{G}^{(0)}}\sum_{\gamma\in\mathcal{G}_x}f(\gamma)\,d\mu(x)\] are equivalent.
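For example, if $r=s$ on $\mathcal{G}$, so that $\mathcal{G}$ is a bundle of discrete groups, then $\mathcal{G}^x=\mathcal{G}_x$ for every $x\in\mathcal{G}^{(0)}$, and so \[\nu(f)=\int_{\mathcal{G}^{(0)}}\sum_{\gamma\in\mathcal{G}^x}f(\gamma)\,d\mu(x)=\int_{\mathcal{G}^{(0)}}\sum_{\gamma\in\mathcal{G}_x}f(\gamma)\,d\mu(x)=\nu^{-1}(f);\] hence every measure on $\mathcal{G}^{(0)}$ is quasi-invariant. This elementary example is included only for orientation.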
We write $\Delta_\mu=\frac{d\nu}{d\nu^{-1}}$ for a Radon--Nikodym derivative of $\nu$ with respect to $\nu^{-1}$. We will call $\Delta_\mu$ the \textit{Radon--Nikodym cocycle} of $\mu$. Given a bisection $U$ and $x\in r(U)$, let $U^x:=U\cap r^{-1}(x)$; for $x\in s(U)$, write $U_x$ for the unique element of $U\cap s^{-1}(x)$. Define $T_U:r(U)\rightarrow s(U)$ by $T_U(x)=s(U^x)$. To see that a measure $\mu$ is quasi-invariant it suffices to show that \[ \int_{r(U)} f(T_U(x))\, d\mu(x)=\int_{s(U)} f(x)\Delta_\mu(U_x)\, d\mu(x) \] for all bisections $U$ and all $f:s(U)\rightarrow \mathbb{R}$. \subsection{Fell bundles} Let $C,D$ be $C^*$-algebras. A $C$--$D$ bimodule $Y$ is said to be a $C$--$D$-imprimitivity bimodule if it is a full left Hilbert $C$-module and a full right Hilbert $D$-module; and for all $y,y',y''\in Y$, $c\in C$ and $d\in D$, we have \begin{align}\label{imprimitivity} {}_C\langle y\cdot d,y'\rangle={}_C\langle y,y'\cdot d^*\rangle,\quad\notag& \langle c\cdot y,y'\rangle_D=\langle y,c^*\cdot y'\rangle_D\text{ and }\\ {}_C\langle y,y'\rangle\cdot y''&=y\cdot \langle y',y''\rangle_D. \end{align} Let $\mathcal{G}$ be a locally compact second countable \'{e}tale groupoid. Suppose that $p:\mathcal{B}\rightarrow \mathcal{G}$ is a separable upper semicontinuous Banach bundle over $\mathcal{G}$ (see \cite[Definition~A.1]{MW}).
Let \[\mathcal{B}^2:=\{(a,b)\in \mathcal{B}\times\mathcal{B}:(p(a),p(b))\in \mathcal{G}^2\}.\] Following \cite{MW}, we say $\mathcal{B}$ is a \textit{Fell bundle} over $\mathcal{G}$ if there is a continuous involution $a\mapsto a^*:\mathcal{B}\rightarrow \mathcal{B}$ and a continuous bilinear associative multiplication $(a,b)\mapsto ab:\mathcal{B}^2\rightarrow \mathcal{B}$ such that \begin{itemize} \item [(F1)] $p(ab)=p(a)p(b)$, \item [(F2)]$p(a^*)=p(a)^{-1}$, \item [(F3)]$(ab)^*=b^*a^*$, \item [(F4)]for each $x\in \mathcal{G}^{(0)}$, the fibre $B(x)$ is a $C^*$-algebra with respect to the $*$-algebra structure given by the above involution and multiplication, and \item [(F5)] for each $\gamma\in \mathcal{G}$, $B(\gamma)$ is a $B(r(\gamma))$--$B(s(\gamma))$-imprimitivity bimodule with actions induced by the multiplication and the inner products \begin{equation}\label{f5} {}_{B(r(\gamma))}\langle a,b\rangle=ab^* \text { and } \langle a,b\rangle_{B(s(\gamma))}=a^*b. \end{equation} \end{itemize} For $x\in \mathcal{G}^{(0)},$ we sometimes write $A(x)$ for the fibre $B(x)$ to emphasise its $C^*$-algebraic structure. Given a Fell bundle $\mathcal{B}$ over $\mathcal{G}$, we say the fibre $B(\gamma)$ is \textit{singly generated} if there exists an element $\ensuremath{\mathds{1}}_\gamma\in B(\gamma)$ such that \[{}_{A(r(\gamma))}\langle \ensuremath{\mathds{1}}_\gamma,\ensuremath{\mathds{1}}_\gamma\rangle=\ensuremath{\mathds{1}}_\gamma\ensuremath{\mathds{1}}_\gamma^*=1_{A(r(\gamma))}, \quad \langle \ensuremath{\mathds{1}}_\gamma,\ensuremath{\mathds{1}}_\gamma\rangle_{A(s(\gamma))}=\ensuremath{\mathds{1}}_\gamma^*\ensuremath{\mathds{1}}_\gamma=1_{A(s(\gamma))} \text { and}\] \[ B(\gamma)=A(r(\gamma)) \ensuremath{\mathds{1}}_\gamma=\ensuremath{\mathds{1}}_\gamma A(s(\gamma)). \] In particular, for $x\in \mathcal{G}^{(0)}$, the fibre $A(x)$ is singly generated if and only if it is a unital $C^*$-algebra, and we can then take $\ensuremath{\mathds{1}}_x=1_{A(x)}$.
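The motivating example, constructed in detail in Section~\ref{sec:KMS twisted G}, can be sketched as follows: given $\sigma\in Z^2(\mathcal{G},\mathbb{T})$, take $\mathcal{B}=\mathcal{G}\times\mathbb{C}$ with $p(\gamma,z)=\gamma$ and with multiplication and involution \[(\gamma,z)(\eta,w):=(\gamma\eta,\sigma(\gamma,\eta)zw)\quad\text{for }(\gamma,\eta)\in\mathcal{G}^2,\qquad (\gamma,z)^*:=\big(\gamma^{-1},\overline{\sigma(\gamma,\gamma^{-1})}\,\overline{z}\big).\] One can check that this is a Fell bundle whose fibres $B(\gamma)=\{\gamma\}\times\mathbb{C}$ are all singly generated by $\ensuremath{\mathds{1}}_\gamma:=(\gamma,1)$; for instance, \[\ensuremath{\mathds{1}}_\gamma\ensuremath{\mathds{1}}_\gamma^*=(\gamma,1)\big(\gamma^{-1},\overline{\sigma(\gamma,\gamma^{-1})}\big)=\big(r(\gamma),\sigma(\gamma,\gamma^{-1})\overline{\sigma(\gamma,\gamma^{-1})}\big)=1_{A(r(\gamma))}.\]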
A continuous function $f:\mathcal{G}\rightarrow \mathcal{B}$ is a \textit{section} if $p\circ f$ is the identity map on $\mathcal{G}$. A section $f$ \textit{vanishes at infinity} if the set $\{\gamma\in \mathcal{G}:\|f(\gamma)\|\geq \epsilon\}$ is compact for all $\epsilon>0$. We write $\Gamma_0(\mathcal{G};\mathcal{B})$ for the completion of the set of sections which vanish at infinity with respect to the norm $\|f\|:=\sup_{\gamma\in \mathcal{G}}\|f(\gamma)\|$. The space $\Gamma_0(\mathcal{G};\mathcal{B})$ is a Banach space, see for example \cite[Proposition~C.23]{W}. A Fell bundle $\mathcal{B}$ over $\mathcal{G}$ has \textit{enough sections} if for every $\gamma\in \mathcal{G}$ and $a\in B(\gamma)$, there is a section $f$ such that $f(\gamma)=a$. If $\mathcal{G}$ is a locally compact Hausdorff space, then $p:\mathcal{B}\rightarrow \mathcal{G}$ has enough sections, see \cite[Appendix~C]{FD}. The space $\Gamma_c(\mathcal{G};\BB)$ of compactly supported continuous sections is a $*$-algebra with involution and multiplication given by \begin{gather} f^*(\gamma):=f(\gamma^{-1})^* \text{ and }\label{invo-formula}\\ f*g(\gamma):=\sum_{\alpha\beta=\gamma}f(\alpha)g(\beta)\quad \text{for } f,g\in \Gamma_c(\mathcal{G};\BB).\label{multi-formula} \end{gather} The $I$-norm on $\Gamma_c(\mathcal{G};\BB)$ is given by \[\|f\|_I=\max\Big(\sup_{x\in \mathcal{G}^{(0)}}\sum_{\lambda\in \mathcal{G}^x}\|f(\lambda)\|,\sup_{x\in \mathcal{G}^{(0)}}\sum_{\lambda\in \mathcal{G}_x}\|f(\lambda)\| \Big).\] A $*$-homomorphism $L:\Gamma_c(\mathcal{G};\BB)\rightarrow B(\mathcal{H}_L)$ is an \textit{$I$-norm decreasing representation} if $\overline{\operatorname{span}}\{L(f)\xi:f\in \Gamma_c(\mathcal{G};\BB),\xi\in \mathcal{H}_L\}=\mathcal{H}_L$ and if $\|L(f)\|\leq \|f\|_I$ for all $f\in \Gamma_c(\mathcal{G};\BB)$.
The \textit{universal $C^*$-norm} on $\Gamma_c(\mathcal{G};\BB)$ is \[\|f\|:=\sup\{\|L(f)\|: L \text{ is an $I$-norm decreasing representation}\}, \] and the cross-sectional algebra $C^*(\mathcal{G},\mathcal{B})$ is the completion of $\Gamma_c(\mathcal{G};\BB)$ with respect to the universal norm. Let $\mathcal{F}$ be a closed subgroupoid of $\mathcal{G}$. Then $\mathcal{B}|_\mathcal{F}$ is a Fell bundle over $\mathcal{F}$. We write $\Gamma_c(\mathcal{F};\mathcal{B})$ in place of $\Gamma_c(\mathcal{F};\mathcal{B}|_\mathcal{F})$ and we denote the completion of $\Gamma_c(\mathcal{F};\mathcal{B})$ in the universal norm by $C^*(\mathcal{F},\mathcal{B})$. Suppose that each fibre in $\mathcal{B}$ is singly generated. Fix $x\in \mathcal{G}^{(0)}$. For $u\in \mathcal{G}^x_x$ and $a\in B(u)$, let $a\cdot \delta_u\in \Gamma_c(\mathcal{G}^x_x;\mathcal{B})$ be the section given by \begin{align*} a\cdot\delta_u(v)=\begin{cases} a&\text{if $u=v$}\\ 0&\text{otherwise.}\\ \end{cases} \end{align*} Then \[C^*(\mathcal{G}^x_x,\mathcal{B})=\overline{\operatorname{span}}\{a\cdot\delta_u:u\in \mathcal{G}^x_x, a\in B(u)\}.\] In particular $C^*(\mathcal{G}^x_x,\mathcal{B})$ is a unital $C^*$-algebra with $1_{C^*(\mathcal{G}^x_x,\mathcal{B})}=\ensuremath{\mathds{1}}_x\cdot \delta_x.$ \subsection{Representations of Fell bundles and the Disintegration Theorem} Let $p:\mathcal{B}\rightarrow \mathcal{G}$ be a Fell bundle over a locally compact second countable \'{e}tale groupoid $\mathcal{G}$. Suppose that $\mathcal{G}^{(0)}*\mathcal{H}$ is a Borel Hilbert bundle over $\mathcal{G}^{(0)}$ as in \cite[Definition~F.1]{W}.
Let \[\operatorname{End}(\mathcal{G}^{(0)}*\mathcal{H}):=\{(x,T,y): x,y\in \mathcal{G}^{(0)}, T\in B\big( \mathcal{H}(y),\mathcal{H}(x)\big)\}.\] Following \cite[Definition~4.5]{MW}, we say a map $\hat{\pi}:\mathcal{B}\rightarrow \operatorname{End}(\mathcal{G}^{(0)}*\mathcal{H})$ is a \textit{$*$-functor} if each $\hat{\pi}(a)$ has the form $\hat{\pi}(a)=(r(p(a)),\pi(a),s(p(a)))$ for some $\pi(a):\mathcal{H}(s(p(a)))\rightarrow \mathcal{H}(r(p(a)))$ such that the maps $\pi(a)$ collectively satisfy \begin{itemize} \item [(S1)] $\pi(\lambda a+b)=\lambda\pi(a)+\pi(b)$ if $p(a)=p(b)$, \item [(S2)] $\pi(ab)=\pi(a)\pi(b)$ whenever $(a,b)\in \mathcal{B}^2$, and \item [(S3)]$\pi(a^*)=\pi(a)^*$. \end{itemize} A \textit{strict representation} of $\mathcal{B}$ is a triple $(\mu, \mathcal{G}^{(0)}*\mathcal{H},\hat{\pi})$ consisting of a quasi-invariant measure $\mu$ on $\mathcal{G}^{(0)}$, a Borel Hilbert bundle $\mathcal{G}^{(0)}*\mathcal{H}$ and a $*$-functor $\hat{\pi}$. For such a triple, we write $L^2(\mathcal{G}^{(0)}*\mathcal{H},\mu)$ for the completion of the set of all Borel sections $f:\mathcal{G}^{(0)}\rightarrow \mathcal{G}^{(0)}*\mathcal{H}$ satisfying $\int_{\mathcal{G}^{(0)}}\langle f(x), f(x)\rangle_{\mathcal{H}(x)}\,d\mu(x)<\infty$ with respect to the inner product \[\langle f, g\rangle_{L^2(\mathcal{G}^{(0)}*\mathcal{H},\mu)}=\int_{\mathcal{G}^{(0)}}\langle f(x), g(x)\rangle_{\mathcal{H}(x)}\,d\mu(x).\] Let $\Delta_\mu$ be the Radon--Nikodym cocycle of $\mu$. Given a strict representation $(\mu, \mathcal{G}^{(0)}*\mathcal{H},\hat{\pi})$, \cite[Proposition~4.10]{MW} gives an $I$-norm bounded $*$-homomorphism $L$ on $L^2(\mathcal{G}^{(0)}*\mathcal{H},\mu)$ such that \begin{equation}\label{integratedform} \big( L(f)\xi\big|\eta\big)=\int_{\mathcal{G}^{(0)}}\sum_{u\in\mathcal{G}^x}\big(\pi(f(u))\xi(s(u))\,\big|\,\eta(r(u))\big) \Delta_\mu(u)^{-\frac{1}{2}}\, d\mu(x). \end{equation} We call $L$ \textit{the integrated form} of $\pi$.
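To see what \eqref{integratedform} says in a familiar special case: if $\mathcal{B}=\mathcal{G}\times\mathbb{C}$ is the trivial line bundle, each $\mathcal{H}(x)=\mathbb{C}$, and each $\pi(\gamma,z)$ acts as multiplication by $z$, then identifying sections of $\mathcal{B}$ with functions on $\mathcal{G}$ gives \[\big( L(f)\xi\,\big|\,\eta\big)=\int_{\mathcal{G}^{(0)}}\sum_{u\in\mathcal{G}^x}f(u)\xi(s(u))\overline{\eta(x)}\, \Delta_\mu(u)^{-\frac{1}{2}}\, d\mu(x),\] which is the usual formula for the representation of $C_c(\mathcal{G})$ on $L^2(\mathcal{G}^{(0)},\mu)$ determined by a quasi-invariant measure $\mu$ (see \cite{Re}).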
The Disintegration Theorem \cite[Theorem~4.13]{MW} shows that every nondegenerate representation of $C^*(\mathcal{G},\mathcal{B})$ is equivalent to the integrated form of a strict representation. \subsection{Cocycles and bicharacters on groups}\label{sec:bicharacters} Let $F$ be a group. Viewing $F$ as a groupoid with the discrete topology, we write $Z^2(F,\mathbb{T})$ for the set of (continuous) $\mathbb{T}$-valued 2-cocycles on $F$. Given $\sigma\in Z^2(F,\mathbb{T})$, define $\sigma^*(p,q)=\overline{\sigma(q,p)}$. Proposition~3.2 of \cite{OPT} implies that $\sigma,\sigma'\in Z^2(F,\mathbb{T})$ are cohomologous if and only if $\sigma\sigma^*=\sigma'\sigma'^*$. Given $\sigma\in Z^2(F,\mathbb{T})$, the $C^*$-algebra $C^*(F,\sigma)$ is the universal $C^*$-algebra generated by unitaries $\{W_p :p\in F\}$ satisfying $W_pW_q=\sigma(p,q)W_{pq}$ for all $p,q\in F$. A standard argument shows that if $\sigma$ and $\sigma'$ are cohomologous in $Z^2(F,\mathbb{T})$, say $\sigma=(\delta^1b)\sigma'$, then the map $W_p\mapsto b(p)W_p$ descends to an isomorphism from $C^*(F,\sigma)$ onto $C^*(F,\sigma')$, see for example \cite[Proposition~3.5]{SWW}. A \textit{bicharacter} on $F$ is a function $\omega:F\times F\rightarrow \mathbb{T}$ such that the functions $\omega(\cdot,p)$ and $\omega(q,\cdot)$ are homomorphisms. A bicharacter $\omega$ is \textit{antisymmetric} if $\omega(p,q)=\overline{\omega(q,p)}$. Each bicharacter is a $\mathbb{T}$-valued 2-cocycle. If $F$ is a finitely generated free abelian group, then \cite[Proposition~3.2]{OPT} shows that every $\mathbb{T}$-valued 2-cocycle $\sigma$ on $F$ is cohomologous to a bicharacter: Let $q_1,\dots ,q_t$ be generators of $F$.
Define a bicharacter $\omega:F\times F\rightarrow \mathbb{T}$ on generators by \begin{equation}\label{omega-formula} \omega(q_i,q_j)=\begin{cases} \sigma(q_i,q_j)\overline{\sigma(q_j,q_i)}&\text{if $i>j$}\\ 1&\text{if $i\leq j$.}\\ \end{cases} \end{equation} Then $\omega\omega^*=\sigma\sigma^*$ and by \cite[Proposition~3.2]{OPT}, $\omega$ is cohomologous to $\sigma$. Given $\sigma\in Z^2(F,\mathbb{T})$, the map $p\mapsto (\sigma\sigma^*)(p,\cdot)$ is a homomorphism from $F$ into the character space of $F$. Let \[ Z_\sigma:=\{p\in F: \sigma\sigma^*(p,q)=1 \text{ for all } q\in F\} \] be the kernel of this homomorphism. In particular, $Z_\sigma$ is a subgroup of $F$. If $\omega$ is a bicharacter cohomologous to $\sigma$, then $Z_\omega=Z_\sigma$. \begin{lemma}\label{lemma:traces} Suppose that $F$ is a finitely generated free abelian group. Let $\sigma\in Z^2(F,\mathbb{T})$ and let $\omega$ be the bicharacter defined in \eqref{omega-formula}. Then \[\operatorname{Tr}(C^*(F,\sigma))\cong\operatorname{Tr}(C^*(F,\omega))\cong \operatorname{Tr}(C^*(Z_\omega))\cong \operatorname{Tr}(C^*(Z_\sigma)).\] \end{lemma} \begin{proof} The first and third isomorphisms are clear. So we prove the second isomorphism. We first claim that for every $\psi\in \operatorname{Tr}(C^*(F,\omega))$, we have \[ \psi(W_p)=0 \text{ for all } p\notin Z_{\omega}. \] To see this, fix $p\notin Z_{\omega}$. There exists at least one generator $q_i\in F$ such that $(\omega\omega^*)(p,q_i)\neq 1$. Since $\psi$ is a trace and $\omega$ is a bicharacter, we have \begin{align*} \psi(W_p)=\psi(W_{q_i}^*W_pW_{q_i})&=\omega(p,q_i)\omega({q_i}^{-1},pq_i)\psi(W_p)\\ &=\omega(p,q_i)\omega(q_i^{-1},p)\omega(q_i^{-1},q_i)\psi(W_p)\\ &=\omega(p,q_i)\overline{\omega(q_i,p)}\omega(q_i^{-1},q_i)\psi(W_p)\\ &=(\omega\omega^*)(p,q_i)\,\omega(q_i^{-1},q_i)\psi(W_p). \end{align*} The formula \eqref{omega-formula} for $\omega$ says that $\omega(q_i^{-1},q_i)=1$. Since $(\omega\omega^*)(p,q_i)\neq 1$, the above computation shows that $\psi(W_p)=0$.
Next define $\Upsilon:C^*(F,\omega)\rightarrow C^*(Z_{\omega})$ on generators by \begin{align*} \Upsilon(W_p)=\begin{cases} W_p&\text{if $p\in Z_{\omega}$}\\ 0&\text{if $p\notin Z_{\omega}$.}\\ \end{cases} \end{align*} This induces a map $\Phi:\operatorname{Tr}( C^*(Z_{\omega}))\rightarrow \operatorname{Tr}(C^*(F,\omega))$ by $\Phi(\psi)= \psi\circ\Upsilon$. The map $\Phi$ is clearly continuous and affine. The embedding $\iota:C^*(Z_{\omega})\rightarrow C^*(F,\omega)$ induces a map $\tilde{\iota}:\operatorname{Tr}( C^*(F,\omega))\rightarrow \operatorname{Tr}(C^*(Z_{\omega}))$ with $\tilde{\iota}(\psi)= \psi\circ\iota$. A quick computation shows that $\tilde{\iota}$ and $\Phi$ are inverses of each other and therefore $\Phi$ is an isomorphism. \end{proof} \subsection{KMS states} Let $\tau$ be an action of $\mathbb{R}$ by automorphisms of a $C^*$-algebra $A$. We say an element $a\in A$ is \emph{analytic} if the map $t\mapsto \tau_t(a)$ is the restriction of an analytic function $z\mapsto\tau_z(a)$ on $\mathbb{C}$. Following \cite{BR,aHLRS,N}, for $\beta\in \mathbb{R}$, we say that a state $\psi$ of $A$ is a \emph{KMS$_\beta$} state (or KMS state at inverse temperature $\beta$) if $\psi(ab)=\psi(b\tau_{i\beta}(a))$ for all analytic elements $a,b$. It suffices to check this condition (the KMS condition) on a set of analytic elements that span a dense subalgebra of $A$. By \cite[Proposition~5.3.3]{BR}, all KMS$_\beta$ states for $\beta \not= 0$ are $\tau$-invariant in the sense that $\psi(\tau_t(a))=\psi(a)$ for all $t\in \mathbb{R}$ and $a\in A$. \section{KMS states on the $C^*$-algebras of Fell bundles}\label{sec:KMS-Fell} In \cite[Theorem~1.1 and Theorem~1.3]{N}, Neshveyev described the KMS states of $C^*$-algebras of locally compact second countable \'{e}tale groupoids. Here, we generalise his results to the $C^*$-algebras of Fell bundles over groupoids. Our proof follows Neshveyev's closely. Let $\mu$ be a probability measure on $\mathcal{G}^{(0)}$.
A \textit{$\mu$-measurable field of states} is a collection $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ of states $\psi_x$ on $C^*(\mathcal{G}^x_x,\mathcal{B})$ such that for every $f\in \Gamma_c(\mathcal{G};\BB)$ the function $x\mapsto \sum_{u\in \mathcal{G}^x_x}\psi_x(f(u)\cdot\delta_u):\mathcal{G}^{(0)}\rightarrow \mathbb{C}$ is $\mu$-measurable. Given a $\mu$-measurable field $\Psi:=\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ of states, we define \begin{align*} [\Psi]_\mu=\big\{\varphi: \varphi \text{ is a } \mu\text{-measurable field of states and } \varphi_x=\psi_x \text{ for }\mu\text{-a.e. }x\in \mathcal{G}^{(0)}\big\}. \end{align*} Given a state $\psi$ on a $C^*$-algebra $A$, the \textit{centraliser} of $\psi$ is the set of all elements $a\in A$ such that \[\psi(ab)=\psi(ba) \text { for all } b\in A.\] \begin{thm}\label{thm1} Let $p:\mathcal{B}\rightarrow \mathcal{G}$ be a Fell bundle with singly generated fibres over a locally compact second countable \'{e}tale groupoid $\mathcal{G}$. Let $\mu$ be a probability measure on $\mathcal{G}^{(0)}$ and let $\Psi := \{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ be a $\mu$-measurable field of tracial states. There is a state $\Theta(\mu, \Psi)$ of $C^*(\mathcal{G},\mathcal{B})$ with centraliser containing $\Gamma_0(\mathcal{G}^{(0)};\BB)$ such that, for $f \in \Gamma_c(\mathcal{G};\BB)$, we have \begin{equation}\label{Nesh-formula} \Theta(\mu,\Psi)(f)= \int_{\mathcal{G}^{(0)}}\psi_x\big(f|_{\mathcal{G}^x_x}\big)\,d\mu(x)=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}\psi_x\big(f(u)\cdot\delta_u\big)\, d\mu(x). \end{equation} We have $\Theta(\mu,\Psi) = \Theta(\nu, \Phi)$ if and only if $\mu = \nu$ and $[\Psi]_\mu = [\Phi]_\mu$. \end{thm} We start with injectivity of the map induced by $\Theta$. \begin{lemma}\label{injectivity} Let $p:\mathcal{B}\rightarrow \mathcal{G}$ be a Fell bundle with singly generated fibres over a locally compact second countable \'{e}tale groupoid $\mathcal{G}$.
If $\mu$ is a probability measure on $\mathcal{G}^{(0)}$ and $\Psi := \{\psi_x\}_{x \in \mathcal{G}^{(0)}}$ and $\Psi' := \{\psi'_x\}_{x \in \mathcal{G}^{(0)}}$ are $\mu$-measurable fields of tracial states such that $\psi_x = \psi'_x$ for $\mu$-almost every $x$, then the states $\Theta(\mu,\Psi)$ and $\Theta(\mu,\Psi')$ given by~\eqref{Nesh-formula} agree. If $\psi$ is a state of $C^*(\mathcal{G},\mathcal{B})$ with centraliser containing $\Gamma_0(\mathcal{G}^{(0)};\BB)$, then there is at most one pair $\big(\mu,[\Psi]_{\mu}\big)$ consisting of a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-equivalence class $[\Psi]_\mu$ of $\mu$-measurable fields of tracial states on the algebras $C^*(\mathcal{G}^x_x,\mathcal{B})$ such that $\Theta(\mu,\Psi) = \psi$. \end{lemma} \begin{proof} The first statement is immediate from the definition of $\mu$-equivalence. Now fix a state $\psi$ of $C^*(\mathcal{G},\mathcal{B})$ with centraliser containing $\Gamma_0(\mathcal{G}^{(0)};\BB)$. Suppose that $\mu,\mu'$ are probability measures on $\mathcal{G}^{(0)}$ and that $\Psi = \{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ is a $\mu$-measurable and $\Psi' = \{\psi'_x\}_{x\in \mathcal{G}^{(0)}}$ a $\mu'$-measurable field of states satisfying $\Theta(\mu,\Psi) = \psi = \Theta(\mu', \Psi')$. For each $f\in C_0(\mathcal{G}^{(0)})$, there is a section $\tilde{f}\in \Gamma_c(\mathcal{G};\mathcal{B})\subseteq C^*(\mathcal{G},\mathcal{B})$ such that \begin{align*} \tilde{f}(\gamma)= \begin{cases} f(x)\ensuremath{\mathds{1}}_x&\text{if } \gamma=x\in \mathcal{G}^{(0)}\\ 0&\text{if } \gamma\notin \mathcal{G}^{(0)}.\\ \end{cases} \end{align*} So \eqref{Nesh-formula} shows that \[ \int_{\mathcal{G}^{(0)}}\psi_x(\tilde{f}(x))\,d\mu(x) = \psi(\tilde{f}) = \int_{\mathcal{G}^{(0)}}\psi'_x(\tilde{f}(x))\,d\mu'(x).
\] Since each $\tilde{f}(x) = f(x)\ensuremath{\mathds{1}}_x$, and since each $\psi_x$ and each $\psi'_x$ is a tracial state, we have $\psi_x(\tilde{f}(x)) = f(x) = \psi'_x(\tilde{f}(x))$ for all $x$, and so $\int_{\mathcal{G}^{(0)}} f \,d\mu = \psi(\tilde{f}) = \int_{\mathcal{G}^{(0)}} f \,d\mu'$. So the Riesz Representation Theorem shows that $\mu = \mu'$. To see that $\Psi$ and $\Psi'$ agree $\mu$-almost everywhere, we suppose to the contrary that $\psi_x\neq \psi'_x$ for some set $V\subseteq \mathcal{G}^{(0)}$ with $\mu(V)\neq 0$ and derive a contradiction. Since $\mathcal{B}$ has enough sections, there is a countable family $\mathcal{F}\subseteq \Gamma_c\big(\mathcal{I}so(\mathcal{G});\mathcal{B}\big)$ such that for each $\gamma\in \mathcal{I}so(\mathcal{G})$, we have $\overline{\operatorname{span}}\{f(\gamma):f\in \mathcal{F}\}=B(\gamma)$. So there exist $f \in \mathcal{F}$ and a subset $V' \subseteq V$ of nonzero measure such that \[ \psi_x\big(f|_{\mathcal{G}^x_x}\big)=\sum_{u \in \mathcal{G}^x_x} \psi_x(f(u)\cdot\delta_u) \not= \sum_{u \in \mathcal{G}^x_x} \psi'_x(f(u)\cdot\delta_u)=\psi'_x\big(f|_{\mathcal{G}^x_x}\big)\quad \text{for all }x\in V'. \] For each $l\in \mathbb{N},$ let $V'_l:=\big\{x\in V':\big|\psi_x\big(f|_{\mathcal{G}^x_x}\big)-\psi'_x\big(f|_{\mathcal{G}^x_x}\big)\big|> \frac{1}{l}\big\}$. So there is $l\in \mathbb{N}$ such that $\mu( V'_l)>0$. Now for $0\leq j\leq 3$, let \[V'_{l,j}:=\Big\{x\in V'_l: \operatorname{Arg}\big(\psi_x\big(f|_{\mathcal{G}^x_x}\big)-\psi'_x\big(f|_{\mathcal{G}^x_x}\big)\big)\in\Big[j\frac{\pi}{2},(j+1) \frac{\pi}{2}\Big]\Big\}.\] Therefore there is $j$ such that $\mu(V'_{l,j})>0$. Then \[\operatorname{Re} \bigg(e^{-i(\frac{\pi}{4}+j\frac{\pi}{2})}\int_{V'_{l,j}} \Big(\psi_x\big(f|_{\mathcal{G}^x_x}\big)-\psi'_x\big(f|_{\mathcal{G}^x_x}\big)\Big)\,d\mu(x)\bigg)\geq \mu(V'_{l,j})\frac{1}{l\sqrt{2}}>0,\] which is a contradiction.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm1}] Fix a state $\psi$ of $C^*(\mathcal{G},\mathcal{B})$ whose centraliser contains $\Gamma_0(\mathcal{G}^{(0)};\BB)$. Let $(H,L,\xi)$ be the corresponding GNS-triple. Applying the Disintegration Theorem (see \cite[Theorem~4.13]{MW}) gives a strict representation $(\lambda,\mathcal{G}^{(0)}*\mathcal{H},\hat{\pi})$ of $\mathcal{B}$ such that $L$ is the integrated form of $\pi$ on $L^2(\mathcal{G}^{(0)}*\mathcal{H},\lambda)$. By \cite[Lemma~5.22]{MW}, there is a unitary isomorphism from $H$ onto $L^2(\mathcal{G}^{(0)}*\mathcal{H},\lambda)$. We identify $H$ with $L^2(\mathcal{G}^{(0)}*\mathcal{H},\lambda)$ and view $\xi$ as a section of the bundle $\mathcal{G}^{(0)}*\mathcal{H}$. Let $\mu$ be the measure on $\mathcal{G}^{(0)}$ given by $d\mu(x):=\|\xi(x)\|^2d\lambda(x)$. For each $x\in \mathcal{G}^{(0)}$, define $\psi_x:C^*(\mathcal{G}^x_x,\mathcal{B})\rightarrow \mathbb{C}$ by \begin{equation}\label{smaller-trace} \psi_x(a\cdot\delta_u)=\|\xi(x)\|^{-2}\big(\pi(a)\xi(x)\,\big|\,\xi(x)\big) \end{equation} where $u\in \mathcal{G}^x_x$ and $a\in B(u)$. We first show that $\psi_x$ is a state on $C^*(\mathcal{G}^x_x,\mathcal{B})$: Fix $u\in \mathcal{G}^x_x$ and $a\in B(u)$. A computation using the multiplication and the involution formulas \eqref{multi-formula} and \eqref{invo-formula} shows that for $v\in \mathcal{G}^x_x$ and $b\in B(v)$ we have \begin{equation}\label{multi-algebra-isotropy} (a\cdot\delta_u)*(b\cdot\delta_v)=ab\cdot \delta_{uv} \text{ and } (a\cdot \delta_u)^*=a^*\cdot \delta_{u^{-1}}. \end{equation} Therefore using (S2) and (S3) at the final line we see that \begin{align*} \psi_x\big((a\cdot\delta_u)*(a\cdot\delta_u)^*\big)&=\psi_x\big(aa^*\cdot\delta_{uu^{-1}}\big)\\ &=\|\xi(x)\|^{-2}\big(\pi(aa^*)\xi(x)\big|\xi(x)\big)\\ &=\|\xi(x)\|^{-2}\big(\pi(a^*)\xi(x)\big|\pi(a^*)\xi(x)\big)\\ &\geq 0. \end{align*} Since $\hat{\pi}$ is a $*$-functor, (S1)--(S3) imply that $\pi(\ensuremath{\mathds{1}}_x)=1_{B(\mathcal{H}(x))}$.
Now the computation \[\psi_x(\ensuremath{\mathds{1}}_x\cdot\delta_x)=\|\xi(x)\|^{-2}\big(\pi(\ensuremath{\mathds{1}}_x)\xi(x)\big|\xi(x)\big)=1\] implies that $\psi_x$ is a state on $C^*(\mathcal{G}^x_x,\mathcal{B})$. We claim that the pair $\big(\mu,\{\psi_x\}_{x\in\mathcal{G}^{(0)}}\big)$ satisfies the equation \eqref{Nesh-formula}. By \eqref{integratedform}, for all $f\in\Gamma_c(\mathcal{G};\BB)$ we have \begin{equation}\label{integ-form-psi} \psi(f)=\big(L(f)\xi\,\big|\,\xi\big)=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x}\big(\pi(f(u))\xi(s(u)) \,\big| \,\xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}\,d\lambda(x). \end{equation} To prove \eqref{Nesh-formula}, it suffices to show that for $\lambda$-almost every $x\in \mathcal{G}^{(0)}$ we have \[\sum_{u\in \mathcal{G}^x\setminus\mathcal{G}^x_x}\big(\pi(f(u))\xi(s(u)) \,\big| \,\xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}=0.\] Equivalently, it suffices to show that for $\lambda$-almost every $x\in \mathcal{G}^{(0)}$, for each bisection $U\subseteq \mathcal{G}\setminus \bigcup_{y\in \mathcal{G}^{(0)}}\mathcal{G}^y_y$, each $u\in \mathcal{G}^x\cap U$ and each $a\in B(u)$, we have \begin{equation}\label{equivalent-formula} \big(\pi(a)\xi(s(u)) \,\big| \,\xi(x)\big)=0. \end{equation} Fix a bisection $U\subseteq \mathcal{G}\setminus \bigcup_{y\in \mathcal{G}^{(0)}}\mathcal{G}^y_y$ and $g\in \Gamma_c(\mathcal{G};\mathcal{B})$ with $\operatorname{supp} g\subseteq U$. Since $r(\operatorname{supp} g)$ is a compact subset of the open set $r(U)$, there is a positive function $p\in C_c(r(U))\subseteq C_0(\mathcal{G}^{(0)})$ such that $p\equiv 1$ on $r(\operatorname{supp} g)$. Fix an approximate identity $(e_\kappa)$ for $\Gamma_0(\mathcal{G}^{(0)};\BB)$ and set $h_\kappa:=p e_\kappa$ for all $\kappa$.
Since each $h_\kappa$ is in the centraliser of $\psi$ and $h_\kappa* g\rightarrow g$, we have \[\psi(g)=\lim_{\kappa}\psi(g*h_\kappa)=\lim_{\kappa}\psi(h_\kappa*g)=0.\] Define $q:\mathcal{G}^{(0)}\rightarrow \mathbb{C}$ by \[q(x)=\sum_{u\in \mathcal{G}^x}\overline{\big(\pi(g(u))\xi(s(u)) \,\big| \,\xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}}.\] Since $g$ is supported in the bisection $U$, at most one summand is nonzero, so $q(x)\in \mathbb{C}$. Since $\psi(g)=0$ and each $q(x)$ is a scalar, $\psi(q(x)g)=0$ for all $x\in \mathcal{G}^{(0)}$. Applying \eqref{integ-form-psi} for $\psi$ together with (S1) for the $*$-functor $\hat{\pi}$ gives \begin{align}\label{formula-thm1-1} \notag0&=\psi(q(x)g)=\int_{\mathcal{G}^{(0)}}q(x)\sum_{u\in \mathcal{G}^x}\big(\pi(g(u))\xi(s(u)) \,\big| \,\xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}\,d\lambda(x)\\ &=\int_{\mathcal{G}^{(0)}}\Big|\sum_{u\in \mathcal{G}^x}\big(\pi(g(u))\xi(s(u)) \,\big| \,\xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}\Big|^2\,d\lambda(x). \end{align} Thus $\sum_{u\in \mathcal{G}^x}\big(\pi(g(u))\xi(s(u)) \,\big| \,\xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}=0$ for $\lambda$-almost every $x\in \mathcal{G}^{(0)}$. Since $\mathcal{B}$ has enough sections, we can fix a countable set $\{g_n\}$ of elements of $\Gamma_c(U;\mathcal{B})$ such that for each $u\in U$, the set $\{g_n(u):n\in \mathbb{N}\}$ is a dense subset of $B(u)$. Notice that for any $x\in r(U)$ the set $U\cap \mathcal{G}^x$ is a singleton, say $U\cap \mathcal{G}^x=\{u^x\}$. For each $n\in \mathbb{N}$, let \[X_n:=\Big\{x\in r(U):\sum_{u\in \mathcal{G}^x\cap U}\big(\pi(g_n(u))\xi(s(u)) \,\big| \,\xi(x)\big)\neq 0\Big\} \,\text { and }\, X=\bigcup_{n\in \mathbb{N}} X_n.\] Equation \eqref{formula-thm1-1} implies that $\lambda(X)=0$. For $x\in r(U)\setminus X$ and $n\in \mathbb{N}$, \[\big(\pi(g_n(u^x))\xi(s(u^x)) \,\big| \,\xi(x)\big)=\sum_{u\in \mathcal{G}^x\cap U}\big(\pi(g_n(u))\xi(s(u)) \,\big| \,\xi(x)\big)=0\] since $x\notin X_n$. By choice of $g_n$, the set $\{g_n(u^x):n\in \mathbb{N}\}$ is a dense subset of $B(u^x)$.
It follows that $\big(\pi(a)\xi(s(u^x)) \,\big| \,\xi(x)\big)=0$ for all $a\in B(u^x)$, giving \eqref{equivalent-formula}. So $\psi$ is given by \eqref{Nesh-formula}. To see that each $\psi_x$ is a trace, note that since $\Gamma_0(\mathcal{G}^{(0)};\BB)$ is contained in the centraliser of $\psi$, the formula \eqref{Nesh-formula} implies that \begin{align*} \int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}&\big(\pi\big((f*g)(u)\big)\xi(x) \,\big|\, \xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}\,d\lambda(x)\\ &=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}\big(\pi\big((g*f)(u)\big)\xi(x) \,\big|\, \xi(x)\big)\Delta_\lambda(u)^{-\frac{1}{2}}\,d\lambda(x) \end{align*} for all $f,g\in \Gamma_0(\mathcal{G}^{(0)};\BB)$. Therefore for $\lambda$-almost every $x\in \mathcal{G}^{(0)}$, we have \begin{equation}\label{vir12} \sum_{u\in \mathcal{G}^x_x}\big(\pi\big((f*g)(u)\big)\xi(x)\, \big|\, \xi(x)\big) =\sum_{u\in \mathcal{G}^x_x}\big(\pi\big((g*f)(u)\big)\xi(x)\, \big|\, \xi(x)\big). \end{equation} Fix $a,b\in B(x)$ and $u,v\in \mathcal{G}^x_x$ so that $a\cdot \delta_u,\, b\cdot \delta_v$ are typical spanning elements of $\mathbb{C}Gx$. Choose $f,g\in \Gamma_0(\mathcal{G}^{(0)};\BB)$ such that $f(x)=a$ and $g(x)=b$. Then the sums on both sides of \eqref{vir12} collapse and we get \[\big(\pi(ab)\xi(x) \big| \xi(x)\big) =\big(\pi(ba)\xi(x) \big| \xi(x)\big) \text { for } \lambda\text{-a.e. } x\in \mathcal{G}^{(0)}.\] Since $(a\cdot \delta_u) * (b\cdot \delta_v)=ab\cdot \delta_{uv},$ the formula \eqref{smaller-trace} for $\psi_x$ implies that \[\psi_x\big((a\cdot \delta_u) * (b\cdot \delta_v)\big)=\psi_x\big((b\cdot \delta_v) * (a\cdot \delta_u)\big).\] Thus $\psi_x$ is a trace on $\mathbb{C}Gx$. We have now proved that every state of $\mathbb{C}G$ whose centraliser contains $\Gamma_0(\mathcal{G}^{(0)};\BB)$ is given by a probability measure $\mu$ and some $\mu$-measurable field $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$.
By Lemma~\ref{injectivity}, we see that each state of $C^*(\mathcal{G},\mathcal{B})$ is given by \eqref{Nesh-formula} for some $\big(\mu,[\Psi]_\mu\big)$. We must show that every $\big(\mu,[\Psi]_\mu\big)$ gives a state. It suffices to prove this for a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a representative $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}\in [\Psi]_\mu$. For each $x\in \mathcal{G}^{(0)}$, define $\varphi_x:\Gamma_c(\mathcal{G};\BB)\rightarrow \mathbb{C}$ by \[\varphi_x(f)=\sum_{u\in \mathcal{G}^x_x}\psi_x\big(f(u)\cdot\delta_u\big) \text{ for } f\in \Gamma_c(\mathcal{G};\BB).\] Since $\psi_x$ is a state, $\varphi_x$ extends to $\mathbb{C}G$. Now if we prove that each $\varphi_x$ is a well-defined state on $\mathbb{C}G$, then since $x\mapsto \varphi_x(b):\mathcal{G}^{(0)}\rightarrow \mathbb{C}$ is $\mu$-measurable for all $b\in \mathbb{C}G$, we can define a functional $\psi$ on $\mathbb{C}G$ by $\psi(b)=\int_{\mathcal{G}^{(0)}}\varphi_x(b)\,d\mu(x)$. Since $\mu$ is a probability measure on $\mathcal{G}^{(0)}$, $\psi$ is a state on $\mathbb{C}G$ which satisfies \eqref{Nesh-formula}. So we fix $x\in \mathcal{G}^{(0)}$ and show that $\varphi_x$ is a well-defined state on $\mathbb{C}G$: Let $(H_x,\pi_x,\zeta_x)$ be the GNS-triple corresponding to $\psi_x$. Let $Y(x)$ be the closure of $\Gamma_c(\mathcal{G}_x;\mathcal{B})$ under the $\mathbb{C}Gx$-valued pre-inner product \[\langle f,g\rangle=f^**g.\] Then $Y(x)$ is a right Hilbert $\mathbb{C}Gx$-module with the right action determined by the multiplication, see \cite[Lemma~2.16]{tfb}. Also $\mathbb{C}G$ acts by adjointable operators on $Y(x)$ by multiplication. By \cite[Proposition~2.66]{tfb} there is a representation $Y(x)$-$\operatorname{Ind}(\pi_x):\mathbb{C}G\rightarrow \mathcal{L}\big(Y(x)\otimes_{\mathbb{C}Gx}H_x\big)$ such that \[Y(x)\text{-}\operatorname{Ind}(\pi_x)(f)(g\otimes k)=f*g\otimes k.\] For convenience, we write $\theta_x:=Y(x)$-$\operatorname{Ind}(\pi_x)$.
Take $h_x\in \Gamma_c(\mathcal{G}_x;\BB)$ such that $\operatorname{supp} h_x \subseteq \{x\}$ and $h_x(x)=\ensuremath{\mathds{1}}_x$. We take $f\in \Gamma_c(\mathcal{G};\BB)$ and compute $\theta_x(f)(h_x\otimes \zeta_x)$: \begin{align}\label{computation-of-theta} \big(\theta_x(f)(h_x\otimes \zeta_x) \,\big|\,(h_x\otimes \zeta_x)\big)\notag&=\big(f*h_x\otimes \zeta_x \,\big|\,h_x\otimes \zeta_x\big)\\ \notag&=\big(\pi_x(\langle h_x,f*h_x\rangle) \zeta_x \,\big|\,\zeta_x\big)\\ &=\psi_x\big(\langle h_x,f*h_x\rangle\big). \end{align} For each $u\in \mathcal{G}^x_x$, we have \[\langle h_x,f*h_x\rangle(u)=(h_x^**f*h_x)(u)=\sum_{\alpha\beta\gamma=u}h_x(\alpha^{-1})^*f(\beta)h_x(\gamma).\] Each summand vanishes unless $\alpha^{-1}=\gamma=x$ and $\beta=u$. Therefore \[\langle h_x,f*h_x\rangle(u)=\ensuremath{\mathds{1}}_x^*f(u)\ensuremath{\mathds{1}}_x=f(u),\] and hence $\langle h_x,f*h_x\rangle=f|_{\mathcal{G}^x_x}$. Putting this in \eqref{computation-of-theta}, we get \[\big(\theta_x(f)(h_x\otimes \zeta_x) \,\big|\,(h_x\otimes \zeta_x)\big)=\psi_x\big(f|_{\mathcal{G}^x_x}\big)=\sum_{u\in \mathcal{G}^x_x}\psi_x\big(f(u)\cdot \delta_u\big)=\varphi_x(f).\] Also since $\langle h_x,h_x\rangle=\ensuremath{\mathds{1}}_x\cdot\delta_x$, \[\|h_x\otimes\zeta_x\|^2=\big(h_x\otimes\zeta_x\big|h_x\otimes\zeta_x\big)=\big( \pi_x\langle h_x,h_x\rangle\zeta_x\big|\zeta_x\big)=\psi_x\big(\langle h_x,h_x\rangle\big)=\psi_x(\ensuremath{\mathds{1}}_x\cdot\delta_x)=1.\] Now since $f\mapsto \big(\theta_x(f)(h_x\otimes \zeta_x) \,\big|\,(h_x\otimes \zeta_x)\big)$ is a state, $\varphi_x$ is a state as well. Thus there is a surjection from the pairs $\big(\mu,[\Psi]_{\mu}\big)$ onto the simplex of states on $\mathbb{C}G$ whose centraliser contains $\Gamma_0(\mathcal{G}^{(0)};\BB)$. Lemma~\ref{injectivity} gives injectivity, and we have now completed the proof.
\end{proof} \begin{definition} Given a pair $(\mu, C)$ consisting of a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-equivalence class $C$ of $\mu$-measurable fields of tracial states, we write $\widetilde{\Theta}(\mu, C)$ for the state $\Theta(\mu, \Psi)$ for any representative $\Psi$ of $C$. \end{definition} \begin{thm}\label{thm2} Let $p:\mathcal{B}\rightarrow \mathcal{G}$ be a Fell bundle with singly generated fibres over a locally compact second countable \'{e}tale groupoid $\mathcal{G}$. Suppose that $\gamma\mapsto \ensuremath{\mathds{1}}_\gamma:\mathcal{G}\rightarrow \mathcal{B}$ is continuous. Let $D$ be a continuous $\mathbb{R}$-valued $1$-cocycle on $\mathcal{G}$ and let $\tau$ be the dynamics on $\mathbb{C}G$ given by $\tau_t(f)(\gamma)=e^{itD(\gamma)}f(\gamma)$. Let $\beta\in \mathbb{R}$. Then $\widetilde{\Theta}$ restricts to a bijection between the simplex of KMS$_\beta$ states of $(\mathbb{C}G,\tau)$ and the pairs $\big(\mu,[\Psi]_{\mu}\big)$ such that \begin{itemize} \item [(I)] $\mu$ is a quasi-invariant measure with Radon--Nikodym cocycle $e^{-\beta D}$, and \item [(II)] for $\mu$-almost every $x\in \mathcal{G}^{(0)}$, we have \[ \psi_{s(\eta)}(a\cdot \delta_u)=\psi_{r(\eta)}\big((\ensuremath{\mathds{1}}_\eta a \ensuremath{\mathds{1}}_\eta^*)\cdot \delta_{\eta u\eta^{-1}}\big) \quad\text{for }u\in \mathcal{G}^x_x, a\in B(u)\text{ and }\eta\in \mathcal{G}_x. \] \end{itemize} \end{thm} \begin{remark} In principle, the condition in Theorem~\ref{thm2}(II) depends on the particular representative $\Psi = \{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ of the $\mu$-equivalence class $[\Psi]_\mu$. But if $\Psi = \{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ and $\Psi' = \{\psi'_x\}_{x\in \mathcal{G}^{(0)}}$ represent the same equivalence class, then $\psi_x = \psi'_x$ for $\mu$-almost every $x$, and so $\Psi$ satisfies~(II) if and only if $\Psi'$ does. \end{remark} Before starting the proof, we establish some notation. Let $U$ be a bisection.
For each $x\in \mathcal{G}^{(0)}$, we write $U^x:=r^{-1}(x)\cap U$ and $U_x:=s^{-1}(x)\cap U$. The maps $x\mapsto U^x:r(U)\rightarrow U$ and $x\mapsto U_x:s(U)\rightarrow U$ are homeomorphisms and we can view them as inverses of $r|_U$ and $s|_U$ respectively. We also write $T_U:r(U)\rightarrow s(U)$ for the homeomorphism given by $T_U(x)=s(U^x)$. \begin{proof} Suppose that $\psi$ is a KMS$_\beta$ state on $(\mathbb{C}G,\tau)$. Since $D|_{\mathcal{G}^{(0)}}=0$, the KMS condition implies that $\Gamma_0(\mathcal{G}^{(0)};\BB)$ is contained in the centraliser of $\psi$. By Theorem~\ref{thm1} there is a pair $\big(\mu,[\Psi]_\mu\big)$ consisting of a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-equivalence class $[\Psi]_\mu$ of $\mu$-measurable fields of tracial states on $\mathbb{C}Gx$ that satisfies \eqref{Nesh-formula}. Fix a representative $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}\in[\Psi]_\mu$. We claim that $\mu$ and $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ satisfy (I) and (II). First note that for a bisection $U$, $f\in \Gamma_c(U;\mathcal{B})$ and $g\in \Gamma_c(\mathcal{G};\BB)$, the multiplication formula in $\Gamma_c(\mathcal{G};\BB)$ implies that \begin{align*} (f*g)(\gamma)=\sum_{\alpha\beta=\gamma}f(\alpha)g(\beta)= \begin{cases} f(U^x)g((U^x)^{-1}\gamma)&\text{if } x=r(\gamma)\in r(U)\\ 0&\text{if } r(\gamma)\notin r(U).\\ \end{cases} \end{align*} Similarly \begin{align*} \big(g*\tau_{i\beta}(f)\big)(\gamma)&=\begin{cases} e^{-\beta D(U_x)}g(\gamma(U_x)^{-1})f(U_x)&\text{if }x=s(\gamma)\in s(U)\\ 0&\text{if }s(\gamma)\notin s(U).\\ \end{cases} \end{align*} Since $\psi$ is a KMS$_\beta$ state, we have $\psi(f*g)=\psi(g*\tau_{i\beta}(f))$. Now applying formula \eqref{Nesh-formula} for $\psi$ gives us \begin{align}\label{formula-thm2-1} \int_{r(U)} \sum_{u\in \mathcal{G}^x_x}\psi_x\notag&\big(f(U^x)g((U^x)^{-1}u)\cdot\delta_u\big)\, d\mu(x)\\ &=\int_{s(U)} e^{-\beta D(U_x)}\sum_{u\in \mathcal{G}^x_x}\psi_x\big(g(u(U_x)^{-1})f(U_x)\cdot\delta_u\big)\, d\mu(x).
\end{align} To see (I), fix a bisection $U$ and let $q\in C_c(s(U))$. Define $h:U\rightarrow \mathcal{B}$ by $h(\gamma)=q(s(\gamma))\ensuremath{\mathds{1}}_\gamma$. Since $\gamma\mapsto \ensuremath{\mathds{1}}_\gamma$ is continuous, $h$ extends by zero to a continuous section $\tilde{h}\in \Gamma_c(\mathcal{G};\BB)$. Now we apply \eqref{formula-thm2-1} with $f:=\tilde{h}$ and $g:=\tilde{h}^*$. The sums on both sides collapse to the single term $u=x$. Since $U^x=U_{T_U(x)}$, we have \begin{align*}\int_{r(U)} \psi_x\Big(\big(|q(T_U(x))|^2\ensuremath{\mathds{1}}_x\ensuremath{\mathds{1}}_x^*\big)\cdot\delta_x\Big)\, d\mu(x) =\int_{s(U)} e^{-\beta D(U_x)}\psi_x\big((|q(x)|^2\ensuremath{\mathds{1}}_x\ensuremath{\mathds{1}}_x^*)\cdot\delta_x\big)\, d\mu(x).\end{align*} Note that $(\lambda a)\cdot \delta=\lambda(a\cdot \delta)$ for all $\lambda\in \mathbb{C}$ and $\ensuremath{\mathds{1}}_x\ensuremath{\mathds{1}}_x^*=1_{A(x)}=\ensuremath{\mathds{1}}_x$. Since $\ensuremath{\mathds{1}}_x\cdot \delta_x=1_{\mathbb{C}Gx}$ and $\psi_x$ is a state on $\mathbb{C}Gx$, we have \[\int_{r(U)} \big|q(T_U(x))\big|^2\, d\mu(x) =\int_{s(U)} e^{-\beta D(U_x)}|q(x)|^2\, d\mu(x).\] Thus $\mu$ is a quasi-invariant measure with Radon--Nikodym cocycle $e^{-\beta D}$. For (II), let $x\in \mathcal{G}^{(0)}$, $u\in \mathcal{G}^x_x, a\in B(u)$ and $\eta\in \mathcal{G}_x$. Choose $\tilde{a}\in \Gamma_c(\mathcal{G};\BB)$ such that $\tilde{a}$ is supported in a bisection $U$ containing $u$ and $\tilde{a}(u)=a$. Since $U$ is a bisection, it follows that $\tilde{a}(v)=0$ for all $v\in \mathcal{G}_x\setminus \{u\}$. Fix a bisection $V$ containing $\eta$ such that $s(V)\subseteq s(U)$. Fix $q\in C_c(\mathcal{G}^{(0)})$ such that $q\equiv1$ on a neighborhood of $x$ and $\operatorname{supp} (q)\subseteq s(V)$.
Define $h\in \Gamma_c(\mathcal{G};\BB)$ by \begin{align*} h(\gamma)=\begin{cases} q(s(\gamma))\ensuremath{\mathds{1}}_\gamma&\text{if $\gamma\in V$}\\ 0&\text{otherwise.}\\ \end{cases} \end{align*} Since $\psi$ is a KMS$_\beta$ state, we have \begin{equation} \label{kmsforii} \psi(( \tilde{a}*h^*)*h)=\psi(h*\tau_{i\beta}( \tilde{a}*h^*)). \end{equation} We compute both sides of \eqref{kmsforii}. For the left-hand side, we first apply the formula \eqref{Nesh-formula} for $\psi$ to get \begin{equation} \label{simplifying-left-kmsforii} \psi(( \tilde{a}*h^*)*h) = \int_{\mathcal{G}^{(0)}} \sum_{v\in \mathcal{G}^y_y}\psi_y\big(( \tilde{a}*h^**h)(v)\cdot\delta_v\big)\,d\mu(y). \end{equation} Since $h$ is supported on the bisection $V$, $h^**h$ is supported on $s(V)$ and we have \[( \tilde{a}*h^**h)(v)=\sum_{\alpha\beta=v} \tilde{a}(\alpha)(h^**h)(\beta)= \tilde{a}(v)(h^**h)(s(v)).\] Since $\tilde{a}$ is supported in $U$, \begin{align*} \sum_{v\in \mathcal{G}^y_y}\psi_y\big(( \tilde{a}*h^**h)(v)\cdot\delta_v\big)&=\sum_{v\in \mathcal{G}^y_y\cap U}\psi_y\big( \big(\tilde{a}(v)(h^**h)(s(v))\big)\cdot\delta_v\big)\\ &=\psi_y\big( \big(\tilde{a}(U_y)(h^**h)(y)\big)\cdot\delta_{U_y}\big). \end{align*} Putting this in \eqref{simplifying-left-kmsforii} and applying the definition of $h$, we get \begin{align} \label{kmsforiil} \psi(( \tilde{a}*h^*)*h)\notag&=\int_{s(V)} \psi_y\big(\big( \tilde{a}(U_y)(h^**h)(y)\big)\cdot\delta_{U_y}\big)\,d\mu(y)\\ &=\int_{s(V)}|q(y)|^2\psi_y\big( \tilde{a}(U_y)\cdot\delta_{U_y}\big)\,d\mu(y). \end{align} For the right-hand side, we start by applying the formula \eqref{Nesh-formula} for $\psi$: \begin{align*} \psi\big(h*\tau_{i\beta}( \tilde{a}*h^*)\big)\notag&=\int_{\mathcal{G}^{(0)}} \sum_{w\in \mathcal{G}^z_z}\psi_z\big((h*\tau_{i\beta}( \tilde{a}*h^*))(w)\cdot\delta_w\big)\,d\mu(z). 
\end{align*} Two applications of the multiplication formula in $\Gamma_c(\mathcal{G};\BB)$ give \begin{align*} \psi&\big(h*\tau_{i\beta}( \tilde{a}*h^*)\big)=\int_{r(V)} \sum_{w\in \mathcal{G}^z_z} \psi_z\Big(\big(h(V^z)\tau_{i\beta}( \tilde{a}*h^*)((V^z)^{-1}w)\big)\cdot\delta_w\Big)\,d\mu(z)\\ \!&=\!\int_{r(V)} \!\!\! e^{-\beta D\big(U_{T_V(z)}(V^z)^{-1}\big)}\psi_z\Big(\big(h(V^z) \tilde{a}\big(U_{T_V(z)}\big)h(V^z)^*\big)\cdot\delta_{V^zU_{T_V(z)}(V^z)^{-1}}\Big)\,d\mu(z)\\ \!&=\!\int_{r(V)} \!\!\!e^{-\beta D\big( U_{T_V(z)}(V^z)^{-1}\big)}\big|q(T_V(z))\big|^2\psi_z\Big(\big(\ensuremath{\mathds{1}}_{V^z} \tilde{a} (U_{T_V(z)})\ensuremath{\mathds{1}}_{V^z}^*\big)\cdot\delta_{V^zU_{T_V(z)}(V^z)^{-1}}\Big)\,d\mu(z). \end{align*} Since for each $z\in r(V)$, we have $V^z=V_{T_V(z)}$ and $z=r\big(V_{T_V(z)}\big)$, the variable substitution $y=T_V(z)$ gives \begin{equation} \label{kmsforiir} \psi(h*\tau_{i\beta}( \tilde{a}*h^*)) = \int_{s(V)} |q(y)|^2\psi_{r(V_y)}\big((\ensuremath{\mathds{1}}_{V_y} \tilde{a}(U_{y})\ensuremath{\mathds{1}}_{V_y}^*)\cdot\delta_{V_yU_{y}(V_y)^{-1}}\big)\,d\mu(y). \end{equation} Putting $y=x$ in \eqref{kmsforiir}, we have $U_y=u$ and $V_y=\eta$. Since $|q(x)|^2=1$, part (II) follows from \eqref{kmsforiil} and \eqref{kmsforiir}. For the other direction, suppose that $\big(\mu,[\Psi]_{\mu}\big)$ satisfies (I) and (II). The formula \eqref{Nesh-formula} in Theorem~\ref{thm1} gives a state $\psi:=\Theta(\mu,\Psi)$ on $\mathbb{C}G$. We aim to show that $\psi$ is a KMS$_\beta$ state. It suffices to show that for each bisection $U$, each $f\in \Gamma_c(U;\mathcal{B})$, and each $g\in \Gamma_c(\mathcal{G};\BB)$ we have \begin{equation} \label{kms-check} \psi(f*g)=\psi(g*\tau_{i\beta}(f)). \end{equation} Fix a representative $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}\in[\Psi]_\mu$.
The left-hand side of \eqref{kms-check} is \begin{equation}\label{lhand-kms-check} \psi(f*g)=\int_{r(U)}\sum_{u\in \mathcal{G}^x_x}\psi_x\big(\big(f(U^x)g((U^x)^{-1}u)\big)\cdot\delta_u\big)\,d\mu(x). \end{equation} We compute the right-hand side in terms of the representative $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}\in[\Psi]_\mu$. We start with the multiplication formula in $\Gamma_c(\mathcal{G};\BB)$ and the formula \eqref{Nesh-formula} for $\psi$: \begin{align*} \psi(g*\tau_{i\beta}(f))&=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}\psi_x\big((g*\tau_{i\beta}(f))(u)\cdot\delta_u\big)\,d\mu(x)\\ &=\int_{s(U)}\sum_{u\in \mathcal{G}^x_x}e^{-\beta D(U_x)}\psi_x\big(\big(g(u(U_x)^{-1})f(U_x)\big)\cdot\delta_u\big)\,d\mu(x). \end{align*} Since $\mu$ is quasi-invariant with Radon--Nikodym cocycle $e^{-\beta D}$, the substitution $x=T_U(y)$ gives \[\psi(g*\tau_{i\beta}(f))=\int_{r(U)}\sum_{u\in \mathcal{G}^{T_U(y)}_{T_U(y)}}\psi_{{T_U(y)}}\big(\big(g(u(U_{{T_U(y)}})^{-1})f(U_{{T_U(y)}})\big)\cdot\delta_u\big)\,d\mu(y).\] Since $U_{{T_U(y)}}=U^y$, we obtain \[\psi(g*\tau_{i\beta}(f))=\int_{r(U)}\sum_{u\in \mathcal{G}^{T_U(y)}_{T_U(y)}}\psi_{{T_U(y)}}\big(\big(g(u(U^y)^{-1})f(U^y)\big)\cdot\delta_u\big)\,d\mu(y).\] Applying the identity $\mathcal{G}^{T_U(y)}_{T_U(y)}(U^y)^{-1}=(U^y)^{-1}\mathcal{G}^y_y$, we can rewrite the sum as \begin{equation} \label{rhand-kms-check} \psi(g*\tau_{i\beta}(f))=\int_{r(U)}\sum_{v\in \mathcal{G}^{y}_{y}}\psi_{{T_U(y)}}\Big(\big(g((U^y)^{-1}v)f(U^y)\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big)\,d\mu(y). \end{equation} To simplify further, fix $v\in \mathcal{G}^{y}_{y}$.
Using that $\ensuremath{\mathds{1}}_{U^y}\ensuremath{\mathds{1}}_{U^y}^*=\ensuremath{\mathds{1}}_y$ then applying \eqref{multi-algebra-isotropy} give \begin{align*} \psi_{{T_U(y)}}&\Big(\big(g((U^y)^{-1}v)f(U^y)\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big)\\ &=\psi_{{T_U(y)}}\Big(\big(g((U^y)^{-1}v)\ensuremath{\mathds{1}}_{U^y}\ensuremath{\mathds{1}}_{U^y}^*f(U^y)\big)\cdot\delta_{(U^y)^{-1}vU^y(U^y)^{-1}U^y}\Big)\\ &=\psi_{{T_U(y)}}\bigg(\Big(\big(g((U^y)^{-1}v)\ensuremath{\mathds{1}}_{U^y}\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big)\Big(\big(\ensuremath{\mathds{1}}_{U^y}^*f(U^y)\big)\cdot\delta_{(U^y)^{-1}U^y}\Big)\bigg). \end{align*} Since $g((U^y)^{-1}v)\ensuremath{\mathds{1}}_{U^y}\in B((U^y)^{-1}vU^y)$ and $(U^y)^{-1}vU^y\in \mathcal{G}^{T_U(y)}_{T_U(y)}$, the trace property of $\psi_{T_U(y)}$ implies that \begin{align*} \psi_{{T_U(y)}}&\Big(\big(g((U^y)^{-1}v)f(U^y)\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big)\\ &=\psi_{{T_U(y)}}\bigg(\Big(\big(\ensuremath{\mathds{1}}_{U^y}^*f(U^y)\big)\cdot\delta_{(U^y)^{-1}U^y}\Big)\Big(\big(g((U^y)^{-1}v)\ensuremath{\mathds{1}}_{U^y}\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big) \bigg)\\ &=\psi_{{T_U(y)}}\Big(\big(\ensuremath{\mathds{1}}_{U^y}^*f(U^y)g((U^y)^{-1}v)\ensuremath{\mathds{1}}_{U^y}\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big) \quad \text {by \eqref{multi-algebra-isotropy}}. \end{align*} We apply (II) with $\eta=U^y$. Recall that $T_U(y)=s(U^y)$ and so $r(\eta)=y$ and we have \begin{align*} \psi_{T_U(y)}\Big(\big(g((U^y)^{-1}v)&f(U^y)\big)\cdot\delta_{(U^y)^{-1}vU^y}\Big)\\ &=\psi_{y}\Big(\big(\ensuremath{\mathds{1}}_{U^y}\ensuremath{\mathds{1}}_{U^y}^*f(U^y)g((U^y)^{-1}v)\ensuremath{\mathds{1}}_{U^y}\ensuremath{\mathds{1}}_{U^y}^*\big)\cdot\delta_{v}\Big)\\ &=\psi_{y}\big(\big(f(U^y)g((U^y)^{-1}v)\big)\cdot\delta_v\big). 
\end{align*} Substituting this in each term of \eqref{rhand-kms-check} gives \begin{align*} \psi(g*\tau_{i\beta}(f))=\int_{r(U)}\sum_{v\in \mathcal{G}^{y}_{y}}\psi_{y}\big(\big(f(U^y)g((U^y)^{-1}v)\big)\cdot\delta_v\big)\,d\mu(y), \end{align*} which is precisely \eqref{lhand-kms-check}. So \eqref{kms-check} holds, and $\psi$ is a KMS$_\beta$ state for $\tau$. \end{proof} \begin{lemma} With the hypotheses of Theorem~\ref{thm2}, suppose that $\psi$ is a KMS$_\beta$ state on $(\mathbb{C}G,\tau)$ and that $\big(\mu, C\big)$ is the associated pair given by Theorem~\ref{thm2}. For any $\mu$-measurable field of states $\Psi = \{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ such that $[\Psi]_\mu = C$, we have \[ \psi_{x}(a\cdot \delta_u)=0 \text{ for } \mu\text{-a.e. }x\in \mathcal{G}^{(0)} \text{, }u\in \mathcal{G}^x_x\setminus D^{-1}(0), \text{ and }a\in B(u). \] \end{lemma} \begin{proof} Fix $x\in \mathcal{G}^{(0)}$, $u\in \mathcal{G}^x_x\setminus D^{-1}(0)$ and $a\in B(u)$. Let $\varepsilon:=\frac{|D(u)|}{2}$. Since $\mathcal{B}$ has enough sections, there exists $f\in \Gamma_c(\mathcal{G};\BB)$ such that $f(u)=a$ and $f$ is supported in a bisection $U$ such that $D(U)\subseteq \big(D(u)-\varepsilon, D(u)+\varepsilon\big)$. In particular, if $D(u)<0$, then $D(v)<-\varepsilon$ for all $v\in U$, and if $D(u)>0$, then $D(v)>\varepsilon$ for all $v\in U$. Recall that $U_x$ is the unique element of $s^{-1}(x)\cap U$. Since $\psi$ is a KMS$_\beta$ state, it is $\tau$-invariant, and since $f$ is analytic for $\tau$, we have $\psi(\tau_i(f))=\psi(f)$. Applying the formula \eqref{Nesh-formula} for $\psi$ gives \[\int_{s(U)}\psi_x\big(f(U_x)\cdot\delta_{U_x}\big)\,d\mu(x)=\int_{s(U)}e^{-D(U_x)}\psi_x\big(f(U_x)\cdot\delta_{U_x}\big)\,d\mu(x).\] Replacing $f$ with the section $\gamma\mapsto q(s(\gamma))f(\gamma)$ for arbitrary $q\in C_c(s(U))$ shows that $\big(1-e^{-D(U_x)}\big)\psi_x\big(f(U_x)\cdot\delta_{U_x}\big)=0$ for $\mu$-almost every $x\in s(U)$. Since $|D(U_x)|>\varepsilon$ for all $x\in s(U)$, the factor $1-e^{-D(U_x)}$ never vanishes, so $\psi_x\big(f(U_x)\cdot\delta_{U_x}\big)=0$ for $\mu$-almost every $x\in \mathcal{G}^{(0)}$. In particular $\psi_x(a\cdot\delta_{u})=0$ for $\mu$-almost every $x\in \mathcal{G}^{(0)}$.
\end{proof} By specialising to $\beta = 0$, we can use our results to describe the trace space of the cross-sectional algebra of a Fell bundle with singly generated fibres. This is particularly important given the role of the trace simplex of a simple $C^*$-algebra in Elliott's classification program. \begin{cor} Let $p:\mathcal{B}\rightarrow \mathcal{G}$ be a Fell bundle with singly generated fibres over a locally compact second countable \'{e}tale groupoid $\mathcal{G}$, and suppose that $\gamma\mapsto \ensuremath{\mathds{1}}_\gamma:\mathcal{G}\rightarrow \mathcal{B}$ is continuous. Then $\widetilde{\Theta}$ restricts to a bijection between the trace space of $\mathbb{C}G$ and the pairs $\big(\mu,[\Psi]_{\mu}\big)$ consisting of a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-equivalence class $[\Psi]_\mu$ of $\mu$-measurable fields of tracial states on $\mathbb{C}Gx$ such that \begin{itemize} \item [(I)] $\mu$ is a quasi-invariant measure with Radon--Nikodym cocycle $1$. \item [(II)] For $\mu$-almost every $x\in \mathcal{G}^{(0)}$, we have \[\psi_{s(\eta)}(a\cdot \delta_u)=\psi_{r(\eta)}\big((\ensuremath{\mathds{1}}_\eta a \ensuremath{\mathds{1}}_\eta^*)\cdot \delta_{\eta u\eta^{-1}}\big) \quad\text{for }u\in \mathcal{G}^x_x, a\in B(u)\text{ and }\eta\in \mathcal{G}_x.\] \end{itemize} \end{cor} \begin{proof} The KMS condition at inverse temperature $0$ reduces to the trace property. So we just need to observe that the proof of Theorem~\ref{thm2} does not use the automatic $\tau$-invariance of KMS$_\beta$ states. \end{proof} \section{KMS states on twisted groupoid $C^*$-algebras}\label{sec:KMS twisted G} To apply our results to twisted groupoid $C^*$-algebras, we recall how to regard a twisted groupoid $C^*$-algebra as the cross-sectional algebra of a Fell bundle with one-dimensional fibres. This is standard; we include it for completeness. \begin{lemma}\label{Fell-groupoid} Let $\mathcal{G}$ be a locally compact second countable \'{e}tale groupoid, and let $\sigma\in Z^2(\mathcal{G},\mathbb{T})$.
Let $\mathcal{B}:=\mathcal{G}\times \mathbb{C}$ and equip $\mathcal{B}$ with the product topology. Define $p:\mathcal{B}\rightarrow \mathcal{G}$ by $p(\gamma,z)=\gamma$. Then \begin{itemize} \item[(I)] $p:\mathcal{B}\rightarrow \mathcal{G}$ is a Fell bundle with respect to the multiplication and involution given by \begin{equation}\label{multiplicationFG} (\gamma,z)(\eta,w)=(\gamma\eta,\sigma(\gamma,\eta)zw)\text{ and } (\gamma,z)^*=(\gamma^{-1},\overline{\sigma(\gamma,\gamma^{-1})}\overline{z}). \end{equation} \item[(II)] For each $\gamma\in \mathcal{G}$, the fibre $\mathcal{B}(\gamma)$ is singly generated with $\ensuremath{\mathds{1}}_\gamma:=(\gamma,1)$. The map $\gamma\mapsto \ensuremath{\mathds{1}}_\gamma:\mathcal{G}\rightarrow \mathcal{B}$ is continuous. \item[(III)] There is an injective $*$-homomorphism $\Phi$ from $C_c(\mathcal{G},\sigma)$ onto $\Gamma_c(\mathcal{G};\BB)$ such that \[ \Phi(f)(\gamma)=(\gamma,f(\gamma)) \text { for all } f\in C_c(\mathcal{G},\sigma) \text{ and }\gamma\in \mathcal{G}. \] This homomorphism extends to an isomorphism $\Phi:C^*(\mathcal{G},\sigma)\rightarrow C^*(\mathcal{G},\mathcal{B})$. \item[(IV)] There is an isomorphism $\Upsilon:C^*(\mathcal{G}_x^x,\sigma)\rightarrow \mathbb{C}Gx$ such that \[\Upsilon(W_u)= (u,1)\cdot \delta_{u} \text{ for all } u\in \mathcal{G}^x_x. \] \end{itemize} \end{lemma} \begin{proof} For (I), since $\mathbb{C}$ is a Banach space, $\mathcal{B}$ is the trivial upper semicontinuous Banach bundle. We check (F1)--(F5): The conditions (F1) and (F2) follow from \eqref{multiplicationFG} easily. To see (F3), let $a:=(\gamma,z)$ and $b:=(\eta,w)$. An easy computation using \eqref{multiplicationFG} shows that \[(ab)^*=\big((\gamma\eta)^{-1},\overline{\sigma(\gamma\eta,\eta^{-1}\gamma^{-1})\sigma(\gamma,\eta)}\overline{zw}\big), \text{ and}\] \[b^*a^*=\big((\gamma\eta)^{-1},\sigma(\eta^{-1},\gamma^{-1}) \overline{\sigma(\eta,\eta^{-1})\sigma(\gamma,\gamma^{-1})}\overline{zw}\big).
\] Two applications of the cocycle relation give us \begin{align*} \sigma(\eta^{-1},\gamma^{-1}) \sigma(\gamma\eta,\eta^{-1}\gamma^{-1})\sigma(\gamma,\eta)&=\sigma(\gamma\eta,\eta^{-1})\sigma(\gamma,\gamma^{-1})\sigma(\gamma,\eta)\\&=\sigma(\eta,\eta^{-1})\sigma(\gamma,\eta\eta^{-1})\sigma(\gamma,\gamma^{-1})\\ &=\sigma(\eta,\eta^{-1})\sigma(\gamma,\gamma^{-1}), \end{align*} where the last equality uses $\sigma(\gamma,\eta\eta^{-1})=\sigma(\gamma,s(\gamma))=1$. Therefore $(ab)^*=b^*a^*$. For (F4), let $x\in\mathcal{G}^{(0)}$. Since $x^{-1}=x=x^{-1}x$, the operations \eqref{multiplicationFG} make sense in the fibre $B(x)$ and turn it into a $*$-algebra. Also for $a=(x,z)\in B(x)$, we have $aa^*=(x,z\overline{z})$, so $\|aa^*\|=|z|^2=\|a\|^2$. Thus $B(x)$ is a $C^*$-algebra. For (F5), note that each fibre $B(\gamma)$ is a full left Hilbert $A(r(\gamma))$-module and a full right Hilbert $A(s(\gamma))$-module. Equations \eqref{imprimitivity} and \eqref{f5} follow from \eqref{multiplicationFG}. (II) is clear. To see (III), note that the multiplication and involution formulas in $C_c(\mathcal{G},\sigma)$ and $\Gamma_c(\mathcal{G};\BB)$ show that $\Phi$ is a $*$-homomorphism. Since each section $g\in \Gamma_c(\mathcal{G};\BB)$ has the form $g(\gamma)=(\gamma, z_{g,\gamma})$ for some $z_{g,\gamma}\in \mathbb{C}$, we can define $\tilde{\Phi}:\Gamma_c(\mathcal{G};\BB)\rightarrow C_c(\mathcal{G},\sigma)$ by $\tilde{\Phi}(g)(\gamma)=z_{g,\gamma}$. An easy computation shows that $\tilde{\Phi}$ is the inverse of $\Phi$ and therefore $\Phi$ is a bijection. For each $I$-norm decreasing representation $L$ of $\Gamma_c(\mathcal{G};\BB)$, the map $L\circ \Phi$ is a $*$-representation of $C_c(\mathcal{G},\sigma)$. Therefore \begin{align*} \|\Phi&(f)\|_{\Gamma_c(\mathcal{G};\BB)} \\ &=\sup \{\|L(\Phi(f))\|:L \text{ is an $I$-norm decreasing representation of } \Gamma_c(\mathcal{G};\BB)\}\\ &\leq \sup \{\|L'(f)\|:L' \text{ is a $*$-representation of } C_c(\mathcal{G},\sigma)\}\\ &=\|f\|_{ C_c(\mathcal{G},\sigma)}.
\end{align*} Thus $\Phi$ is norm decreasing; the same argument shows that $\tilde{\Phi}$ is norm decreasing, and therefore $\Phi$ extends to an isomorphism of $C^*$-algebras. For (IV), take $u,v\in \mathcal{G}^x_x$. We have \[\Upsilon(W_uW_v)=\sigma(u,v)\Upsilon(W_{uv})=\sigma(u,v)((uv,1)\cdot \delta_{uv}).\] To compare this with $\Upsilon(W_u)\Upsilon(W_v)$, we calculate, applying \eqref{multi-algebra-isotropy} in the second equality: \[\Upsilon(W_u)\Upsilon(W_v)=\big((u,1)\cdot \delta_{u}\big)*\big((v,1)\cdot \delta_{v}\big)=(u,1)(v,1)\cdot \delta_{uv}=\sigma(u,v)((uv,1)\cdot \delta_{uv}).\] Thus $\Upsilon$ is a $*$-homomorphism. The map $\tilde{\Upsilon}:\mathbb{C}Gx\rightarrow C^*(\mathcal{G}_x^x,\sigma)$ given by $\tilde{\Upsilon}((u,z)\cdot\delta_u)=zW_u$ is an inverse for $\Upsilon$, so $\Upsilon$ extends to an isomorphism of $C^*$-algebras. \end{proof} In parallel with Section~\ref{sec:KMS-Fell}, we say that a collection $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ of states $\psi_x$ on $C^*(\mathcal{G}^x_x,\sigma)$ is a \textit{$\mu$-measurable field of states} if for every $f\in C_c(\mathcal{G},\sigma)$, the function $x\mapsto\sum_{u\in \mathcal{G}^x_x}f(u)\psi_x(W_u)$ is $\mu$-measurable. \begin{cor}\label{kms states for groupoid} Let $\mathcal{G}$ be a locally compact second countable \'{e}tale groupoid, and let $\sigma\in Z^2(\mathcal{G},\mathbb{T})$. Let $D$ be a continuous $\mathbb{R}$-valued $1$-cocycle on $\mathcal{G}$ and let $\tilde{\tau}$ be the dynamics on $C^*(\mathcal{G},\sigma)$ given by $\tilde{\tau}_t(f)(\gamma)=e^{itD(\gamma)}f(\gamma)$. Take $\beta\in \mathbb{R}$.
There is a bijection between the simplex of the KMS$_\beta$ states of $\big(C^*(\mathcal{G},\sigma),\tilde{\tau}\big)$ and the pairs $\big(\mu,[\Psi]_{\mu}\big)$ consisting of a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-equivalence class $[\Psi]_\mu$ of $\mu$-measurable fields of tracial states on $C^*(\mathcal{G}^x_x,\sigma)$ such that \begin{itemize} \item [(I)] $\mu$ is a quasi-invariant measure with Radon--Nikodym cocycle $e^{-\beta D}$. \item [(II)] For each representative $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}\in[\Psi]_\mu$ and for $\mu$-almost every $x\in \mathcal{G}^{(0)}$, we have \[\psi_{x}(W_u)=\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\psi_{r(\eta)}\big(W_{\eta u\eta^{-1}}\big)\quad \text{for } u\in \mathcal{G}^x_x \text{ and } \eta\in \mathcal{G}_x.\] \end{itemize} The state corresponding to the pair $\big(\mu,[\Psi]_{\mu}\big)$ is given by \begin{equation}\label{Nesh-formula-groupoid} \psi(f)=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}f(u)\psi_x(W_u)\, d\mu(x)\quad\text{for all }f\in C_c(\mathcal{G},\sigma). \end{equation} \end{cor} \begin{proof} Lemma~\ref{Fell-groupoid} yields a Fell bundle $\mathcal{B}$ over $\mathcal{G}$, an isomorphism $\Phi:C^*(\mathcal{G},\sigma)\rightarrow C^*(\mathcal{G},\mathcal{B})$, and an isomorphism $\Upsilon:C^*(\mathcal{G}_x^x,\sigma)\rightarrow \mathbb{C}Gx$ for each $x\in \mathcal{G}^{(0)}$. The isomorphism $\Phi$ intertwines the dynamics $\tilde{\tau}$ and $\tau$ induced by $D$ on $C^*(\mathcal{G},\sigma)$ and $C^*(\mathcal{G},\mathcal{B})$. We aim to apply Theorem~\ref{thm2}. Let $\psi$ be a KMS$_\beta$ state of $\big(C^*(\mathcal{G},\sigma),\tilde{\tau}\big)$.
Then $\varphi:=\psi\circ \Phi^{-1}$ is a KMS$_\beta$ state on $\big(C^*(\mathcal{G},\mathcal{B}),\tau\big)$ and Theorem~\ref{thm2} gives a pair $\big(\mu,\{\varphi_x\}_{x\in\mathcal{G}^{(0)}}\big)$ consisting of a probability measure $\mu$ on $\mathcal{G}^{(0)}$ and a $\mu$-measurable field of tracial states on $\mathbb{C}Gx$ satisfying (I) and (II) of Theorem~\ref{thm2}. Let $\psi_x:=\varphi_x\circ \Upsilon$. For each $f\in C_c(\mathcal{G},\sigma)$, the function $x\mapsto\sum_{u\in \mathcal{G}^x_x}f(u)\psi_x(W_u)=\sum_{u\in \mathcal{G}^x_x}\varphi_x ((u,f(u))\cdot \delta_{u})$ is $\mu$-measurable. Therefore $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ is a $\mu$-measurable field of states on $C^*(\mathcal{G}^x_x,\sigma)$. To see that $\{\psi_x\}_{x\in \mathcal{G}^{(0)}}$ satisfies (II), let $u\in \mathcal{G}_x^x$ and $\eta\in\mathcal{G}_x$. A computation in $\mathcal{G}\times \mathbb{C}$ shows that \[ \ensuremath{\mathds{1}}_\eta (u,z) \ensuremath{\mathds{1}}_\eta^*=(\eta,1) (u,z) (\eta,1)^* = \big(\eta u \eta^{-1}, z\,\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\big). \] Now applying part~(II) of Theorem~\ref{thm2} to $\{\varphi_x\}_{x\in \mathcal{G}^{(0)}}$ with $\eta$ and $a=(u,1)$ we get \begin{align*} \psi_{x}(W_u)&=\varphi_{x}\big((u,1)\cdot\delta_{u}\big)\\ &=\varphi_{r(\eta)}\Big(\big(\eta u \eta^{-1},\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\big) \cdot \delta_{\eta u \eta^{-1}}\Big)\\ &=\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\varphi_{r(\eta)}\big((\eta u \eta^{-1},1) \cdot \delta_{\eta u \eta^{-1}}\big)\\ &=\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\psi_{r(\eta)}\big(W_{\eta u \eta^{-1}}\big). \end{align*} To see \eqref{Nesh-formula-groupoid}, fix $f\in C_c(\mathcal{G},\sigma)$.
Applying the formula \eqref{Nesh-formula} to $\varphi$ we have \begin{align}\label{state-groupois-bundle} \psi(f)\notag&=\varphi(\Phi(f))=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}\varphi_x(\Phi(f)(u)\cdot \delta_u)\, d\mu(x)\\ &=\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}f(u)\varphi_x((u,1)\cdot \delta_u)\, d\mu(x) =\int_{\mathcal{G}^{(0)}}\sum_{u\in \mathcal{G}^x_x}f(u)\psi_x(W_u)\, d\mu(x). \end{align} So the KMS$_\beta$ state $\psi$ yields a pair $\big(\mu,[\Psi]_{\mu}\big)$ satisfying (I) and (II), and $\psi$ is then given by \eqref{Nesh-formula-groupoid}. For the converse, fix $\big(\mu,\{\psi_x\}_{x\in \mathcal{G}^{(0)}}\big)$ satisfying (I) and (II). Let $\varphi_x=\psi_x\circ \Upsilon^{-1}$. For $g\in \Gamma_c(\mathcal{G};\BB)$ and $u\in \mathcal{G}$, let $z_{g,u}\in \mathbb{C}$ be the element such that $g(u)=(u, z_{g,u})$. The function $x\mapsto\sum_{u\in \mathcal{G}^x_x}\varphi_x(g(u)\cdot \delta_u)=\sum_{u\in \mathcal{G}^x_x}z_{g,u}\psi_x(W_u)$ is $\mu$-measurable. Therefore $\{\varphi_x\}_{x\in \mathcal{G}^{(0)}}$ is a $\mu$-measurable field of states on $C^*(\mathcal{G}^x_x,\mathcal{B})$. For $u\in\mathcal{G}^x_x$ and $\eta\in\mathcal{G}_x$, (II) gives \begin{align*} \varphi_{x}((u,z)\cdot\delta_u)&=(\psi_{x}\circ\Upsilon^{-1})\big((u,z)\cdot\delta_{u}\big)=\psi_x(zW_u)\\ &=z\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\psi_{r(\eta)}\big(W_{\eta u\eta^{-1}}\big)\\ &=\psi_{r(\eta)}\big(z\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}W_{\eta u\eta^{-1}}\big)\\ &=\varphi_{r(\eta)}\big(\big(\eta u \eta^{-1},z\sigma(\eta u,\eta^{-1})\sigma(\eta,u)\overline{\sigma(\eta^{-1},\eta)}\big)\cdot \delta_{\eta u\eta^{-1}}\big)\\ &=\varphi_{r(\eta)}\big((\ensuremath{\mathds{1}}_\eta (u,z) \ensuremath{\mathds{1}}_\eta^*)\cdot \delta_{\eta u\eta^{-1}}\big). \end{align*} Thus $\big(\mu,\{\varphi_x\}_{x\in\mathcal{G}^{(0)}}\big)$ is a pair as in Theorem~\ref{thm2}.
Therefore there is a KMS$_\beta$ state $\varphi:=\Theta\big(\mu,\{\varphi_x\}_{x\in\mathcal{G}^{(0)}}\big)$ on $C^*(\mathcal{G},\mathcal{B})$ satisfying \eqref{Nesh-formula}. Now $\psi=\varphi\circ \Phi$ is a KMS$_\beta$ state on $C^*(\mathcal{G},\sigma)$ and by \eqref{state-groupois-bundle} $\psi$ satisfies \eqref{Nesh-formula-groupoid}. \end{proof} \begin{remark} Corollary~\ref{kms states for groupoid} applied to the trivial cocycle $\sigma\equiv 1$ recovers the results of Neshveyev in \cite[Theorem~1.3]{N}. \end{remark} \section{KMS states on the $C^*$-algebras of twisted higher-rank graphs}\label{sec:KMS k-graph} \subsection{Higher-rank graphs} Let $\Lambda$ be a $k$-graph with vertex set $\Lambda^0$ and degree map $d:\Lambda\rightarrow\mathbb{N}^k$ in the sense of \cite{KP}. For any $n\in \mathbb{N}^k$, we write $\Lambda^n:=\{\lambda\in \Lambda: d(\lambda)=n\}$. A $k$-graph $\Lambda$ is finite if $\Lambda^n$ is finite for all $n\in \mathbb{N}^k$. Given $u,v\in \Lambda^0$, $u\Lambda v$ denotes $\{\lambda\in \Lambda: r(\lambda)=u \text { and } s(\lambda)=v\}$. We say $\Lambda$ is \textit{strongly connected} if $u\Lambda v\neq\emptyset$ for every $u,v\in \Lambda^0$. A $k$-graph $\Lambda$ has no sources if $u\Lambda^n\neq\emptyset$ for every $u\in \Lambda^0$ and $n\in \mathbb{N}^k$, and it is row finite if $u\Lambda^n$ is finite for all $u\in \Lambda^0$ and $n\in \mathbb{N}^k$. A $\mathbb{T}$-valued 2-cocycle $c$ on $\Lambda$ is a map $c$ from the composable pairs $\{(\lambda,\mu)\in\Lambda\times\Lambda:s(\lambda)=r(\mu)\}$ to $\mathbb{T}$ such that $c(r(\lambda),\lambda)=c(\lambda,s(\lambda))=1$ for all $\lambda\in \Lambda$ and $c(\lambda,\mu)c(\lambda\mu,\nu)=c(\mu,\nu)c(\lambda,\mu\nu)$ for all composable $\lambda,\mu,\nu$. We write $Z^2(\Lambda,\mathbb{T})$ for the group of all $\mathbb{T}$-valued 2-cocycles on $\Lambda$. Let $\Omega_k:=\{(m,n)\in \mathbb{N}^k\times \mathbb{N}^k:m\leq n\}$. One can verify that $\Omega_k$ is a $k$-graph with $r(m,n)=(m,m)$, $s(m,n)=(n,n)$, $(m,n)(n,p)=(m,p)$ and $d(m,n)=n-m$.
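As a concrete check of these rules, the following minimal sketch (the tuple encoding of morphisms is ours, purely illustrative) verifies for $k=2$ that composition in $\Omega_k$ is well defined and that the degree map is additive on composable pairs.

```python
# A minimal sketch (our own encoding, not from the paper): morphisms of the
# k-graph Omega_k are pairs (m, n) of k-tuples with m <= n componentwise,
# with r(m,n) = (m,m), s(m,n) = (n,n), (m,n)(n,p) = (m,p) and d(m,n) = n - m.

def leq(m, n):
    """Componentwise order on N^k."""
    return all(mi <= ni for mi, ni in zip(m, n))

def d(alpha):
    """Degree map d(m, n) = n - m."""
    m, n = alpha
    return tuple(ni - mi for mi, ni in zip(m, n))

def compose(alpha, beta):
    """Composition (m,n)(n,p) = (m,p); defined when s(alpha) = r(beta)."""
    (m, n), (n2, p) = alpha, beta
    if n != n2:
        raise ValueError("not composable: s(alpha) != r(beta)")
    return (m, p)

# Two composable morphisms in Omega_2 (k = 2).
alpha = ((0, 0), (1, 2))
beta = ((1, 2), (3, 2))
assert leq(*alpha) and leq(*beta)  # both are valid morphisms
ab = compose(alpha, beta)

# The degree map is a functor: d(alpha beta) = d(alpha) + d(beta).
assert d(ab) == tuple(a + b for a, b in zip(d(alpha), d(beta)))
```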
We identify $\Omega_k^0$ with $\mathbb{N}^k$ by $(m,m)\mapsto m$. The set \[\Lambda^\infty:=\{x:\Omega_k\rightarrow \Lambda: \text{ $x$ is a functor that intertwines the degree maps}\}\] is called the \textit{infinite-path space} of $\Lambda$. For $l\in \mathbb{N}^k$, the shift map $\rho^l:\Lambda^\infty\rightarrow \Lambda^\infty $ is given by $\rho^l(x)(m,n)= x(m+l,n+l)$ for all $x\in \Lambda^\infty$ and $(m,n)\in \Omega_k$. Let $\Lambda$ be a strongly connected finite $k$-graph. The set \[\operatorname{Per\, \Lambda}:=\{m-n: m,n\in \mathbb{N}^k, \rho^m(x)=\rho^n(x) \text{ for all } x\in \Lambda^\infty\}\subseteq \mathbb{Z}^k\] is a subgroup of $\mathbb{Z}^k$, called the \textit{periodicity group} of $\Lambda$ (see \cite[Proposition~5.2]{aHLRS}). \subsection{The path groupoid} Suppose that $\Lambda$ is a row-finite $k$-graph with no sources. The set \[\mathcal{G}_\Lambda:=\{(x,l,y)\in \Lambda^\infty\times \mathbb{Z}^k\times\Lambda^\infty:l=m-n \text{ for some } m,n\in \mathbb{N}^k \text{ with } \rho^m(x)=\rho^{n}(y)\}\] is a groupoid with $(\mathcal{G}_\Lambda)^{(0)}=\{(x,0,x):x\in \Lambda^\infty\}$ identified with $\Lambda^\infty$, and structure maps $r(x,l,y)=x$, $s(x,l,y)=y$, $(x,l,y)(y,l',z)=(x,l+l',z)$ and $(x,l,y)^{-1}=(y,-l,x)$. This groupoid is called the \textit{infinite-path groupoid}. For $\lambda,\mu\in \Lambda$ with $s(\lambda)=s(\mu)$ let \[Z(\lambda,\mu):=\{ (\lambda x,d(\lambda)-d(\mu),\mu x)\in \mathcal{G}_\Lambda: x\in \Lambda^\infty \text{ and } r(x)=s(\lambda) \}.\] The sets $\{Z(\lambda,\mu): \lambda,\mu\in \Lambda\}$ form a basis for a locally compact Hausdorff topology on $\mathcal{G}_\Lambda$ under which it is an \'{e}tale groupoid (see \cite[Proposition~2.8]{KP}). Let $\Lambda\mathbin{_s*_s}\Lambda:=\{(\mu,\nu)\in \Lambda\times\Lambda: s(\mu)=s(\nu)\}$.
Let $\mathcal{P}$ be a subset of $\Lambda\mathbin{_s*_s}\Lambda$ such that \begin{equation}\label{partition-form} (\mu,s(\mu))\in \mathcal{P} \text{ for all }\mu\in \Lambda\text{ and }\mathcal{G}_\Lambda=\bigsqcup_{(\mu,\nu)\in \mathcal{P}}Z(\mu,\nu). \end{equation} Such a $\mathcal{P}$ always exists; see \cite[Lemma~6.6]{KPS1}. For each $\alpha\in \mathcal{G}_\Lambda$, we write $(\mu_\alpha,\nu_\alpha)$ for the element of $\mathcal{P}$ such that $\alpha\in Z(\mu_\alpha,\nu_\alpha)$. Let $\hat{d}:\mathcal{G}_\Lambda\rightarrow \mathbb{Z}^k$ be the function defined by $\hat{d}(x,n,y)=n$. Given a 2-cocycle $c$ on $\Lambda$, \cite[Lemma~6.3]{KPS1} says that for every composable pair $\alpha,\beta\in \mathcal{G}_\Lambda$ there are $\lambda,\iota,\kappa\in \Lambda$ and $y\in \Lambda^\infty$ such that \[\nu_\alpha\lambda=\mu_\beta\iota,\quad \mu_\alpha\lambda=\mu_{\alpha\beta}\kappa,\quad \nu_\beta\iota=\nu_{\alpha\beta}\kappa, \quad \text{ and}\] \[ \alpha=(\mu_\alpha\lambda y,\hat{d}(\alpha),\nu_\alpha\lambda y),\quad \beta=(\mu_\beta\iota y,\hat{d}(\beta),\nu_\beta\iota y)\quad \text{and } \alpha\beta=(\mu_{\alpha\beta}\kappa y,\hat{d}(\alpha\beta),\nu_{\alpha\beta}\kappa y). \] Furthermore, the formula \[ \sigma_c(\alpha,\beta)=c(\mu_\alpha,\lambda)\overline{c(\nu_\alpha,\lambda)}c(\mu_\beta,\iota)\overline{c(\nu_\beta,\iota)}\overline{c(\mu_{\alpha\beta},\kappa)}c(\nu_{\alpha\beta},\kappa) \] defines a continuous 2-cocycle on $\mathcal{G}_\Lambda$ that does not depend on the choice of $\lambda,\iota,\kappa$. Theorem~6.5 of \cite{KPS1} shows that the continuous 2-cocycles on $\mathcal{G}_\Lambda$ obtained from different partitions $\mathcal{P},\mathcal{P}'$ are cohomologous. Let $\Lambda$ be a strongly connected finite $k$-graph and take $c\in Z^2(\Lambda,\mathbb{T})$. Let $\mathcal{P}\subseteq\Lambda\mathbin{_s*_s}\Lambda$ be as in \eqref{partition-form}.
For each $x\in \Lambda^\infty$, define $\sigma_c^x:\operatorname{Per\, \Lambda}\times\operatorname{Per\, \Lambda}\rightarrow \mathbb{T}$ by $\sigma_c^x(p,q):=\sigma_c((x,p,x),(x,q,x))$. Clearly $\sigma_c^x\in Z^2(\operatorname{Per\, \Lambda},\mathbb{T})$. By \cite[Lemma~3.3]{KPS2} the cohomology class of $\sigma_c^x$ is independent of $x$. So by the argument of Section~\ref{sec:bicharacters} there is a bicharacter $\omega_c$ on $\operatorname{Per\, \Lambda}$ that is cohomologous to $\sigma_c^x$ for all $x\in \Lambda^\infty$. \subsection{KMS states of the preferred dynamics} Given a finite $k$-graph $\Lambda$, for $1\leq i\leq k$ let $A_i\in M_{\Lambda^{0}}$ be the matrix with entries $A_i(u,v):=|u\Lambda^{e_i}v|$. Writing $\rho(A_i)$ for the spectral radius of $A_i$, define $D:\mathcal{G}_\Lambda\rightarrow \mathbb{R}$ by $D(x,n,y)=\sum_{i=1}^k n_i\ln \rho(A_i)$. The function $D$ is locally constant and therefore a continuous $\mathbb{R}$-valued $1$-cocycle on $\mathcal{G}_\Lambda$. Lemma~12.1 of \cite{aHLRS} shows that there is a unique probability measure $M$ on $\Lambda^\infty$ with Radon--Nikodym cocycle $e^{D}$. This measure is a Borel measure and satisfies \begin{equation}\label{k-graph-measure} M\big(\{x\in \Lambda^\infty:\{x\}\times \operatorname{Per\, \Lambda} \times \{x\}\neq \mathcal{G}^x_x\}\big)=0. \end{equation} Given $\sigma\in Z^2(\mathcal{G}_\Lambda,\mathbb{T})$, $D$ induces a dynamics $\tau$ on $C^*(\mathcal{G}_\Lambda,\sigma)$ such that $\tau_t(f)(x,m,y)=e^{it D(x,m,y)}f(x,m,y)$. Following \cite{aHLRS} we call this the \textit{preferred dynamics}. \begin{cor}\label{kms states for graph} Suppose that $\Lambda$ is a strongly connected finite $k$-graph. Let $c\in Z^2(\Lambda,\mathbb{T})$ and let $\mathcal{P}$ be as in \eqref{partition-form}. Suppose that $\omega_c\in Z^2(\operatorname{Per\, \Lambda},\mathbb{T})$ is a bicharacter cohomologous to $\sigma_c^x(p,q)=\sigma_c((x,p,x),(x,q,x))$ for all $x\in \Lambda^\infty$.
Let $\tau$ be the preferred dynamics on $C^*(\mathcal{G}_\Lambda,\sigma_c)$ and let $M$ be the measure described in \eqref{k-graph-measure}. There is a bijection between the simplex of KMS$_1$ states of $\big(C^*(\mathcal{G}_\Lambda,\sigma_c),\tau\big)$ and the set of $M$-equivalence classes $[\psi]_M$ of fields of tracial states $\{\psi_x\}_{x\in \Lambda^\infty}$ on $C^*(\operatorname{Per\, \Lambda},\omega_c)$ such that for all $p\in \operatorname{Per\, \Lambda}$ and $\eta:=(y,m,x)\in(\mathcal{G}_\Lambda)_x$, we have \begin{equation}\label{property II of twisted} \psi_{x}(W_p)=\sigma_c\big(\eta,(x,p,x)\big)\sigma_c\big((y,m+p,x),\eta^{-1}\big)\overline{\sigma_c(\eta^{-1},\eta)}\psi_{y}(W_p). \end{equation} The state corresponding to the class $[\psi]_{M}$ satisfies \[ \psi(f)=\int_{\Lambda^\infty}\sum_{p\in \operatorname{Per\, \Lambda}}f(x,p,x)\psi_x(W_p)\, dM(x)\quad\text{for all }f\in C_c(\mathcal{G}_\Lambda,\sigma_c). \] \end{cor} \begin{proof} Fix $x\in \Lambda^\infty$ such that $\{x\}\times \operatorname{Per\, \Lambda} \times \{x\}=\mathcal{G}^x_x$. Let $\delta^1 b$ be the $2$-coboundary such that $\omega_c=(\delta^1 b)\, \sigma_c^x$. Composing the isomorphism $W_p\mapsto b(p)W_p$ of $C^*(\operatorname{Per\, \Lambda},\omega_c)$ onto $C^*(\operatorname{Per\, \Lambda},\sigma_c^x)$ with the isomorphism $W_p\mapsto W_{(x,p,x)}:C^*(\operatorname{Per\, \Lambda},\sigma_c^x)\rightarrow C^*(\mathcal{G}^x_x,\sigma_c)$, we obtain an isomorphism $\Phi:C^*(\operatorname{Per\, \Lambda},\omega_c)\rightarrow C^*(\mathcal{G}^x_x,\sigma_c)$ such that \[\Phi(W_p)= b(p) W_{(x,p,x)} \text{ for all } p\in \operatorname{Per\, \Lambda}.
\] Since $M$ is the only probability measure on $\Lambda^\infty$ with Radon--Nikodym cocycle $e^{D}$, by Corollary~\ref{kms states for groupoid} it suffices to show that there is a bijection between the fields of tracial states on $C^*(\operatorname{Per\, \Lambda},\omega_c)$ satisfying \eqref{property II of twisted} and the $M$-measurable fields of tracial states on $C^*(\mathcal{G}^x_x,\sigma_c)$ satisfying Corollary~\ref{kms states for groupoid}~(II). Let $\{\varphi_x\}_{x\in \Lambda^\infty}$ be an $M$-measurable field of tracial states on $C^*(\mathcal{G}^x_x,\sigma_c)$ satisfying Corollary~\ref{kms states for groupoid}~(II). Then clearly $\{ \varphi_x\circ\Phi\}_{x\in \Lambda^\infty}$ is a field of tracial states on $C^*(\operatorname{Per\, \Lambda},\omega_c)$. Applying part~(II) of Corollary~\ref{kms states for groupoid} with $\eta$ and $u=(x,p,x)$ we get \begin{align*} (\varphi_x\circ \Phi)(W_p)&=\varphi_x \big(b(p)W_{(x,p,x)}\big)\\ &=b(p)\sigma_c\big((y,m+p,x),\eta^{-1}\big)\sigma_c(\eta,(x,p,x))\overline{\sigma_c(\eta^{-1},\eta)}\varphi_y(W_{(y,p,y)})\\ &=\sigma_c\big((y,m+p,x),\eta^{-1}\big)\sigma_c\big(\eta,(x,p,x)\big)\overline{\sigma_c(\eta^{-1},\eta)}(\varphi_y\circ \Phi)(W_p), \end{align*} so $\{\varphi_x\circ\Phi\}_{x\in\Lambda^\infty}$ satisfies \eqref{property II of twisted}. Conversely, let $\{\psi_x\}_{x\in \Lambda^\infty}$ be a field of tracial states on $C^*(\operatorname{Per\, \Lambda},\omega_c)$ satisfying \eqref{property II of twisted}. Since $M$ is a Borel measure on $\Lambda^\infty$, for all $f\in C_c(\mathcal{G}_\Lambda,\sigma_c)$ the function \[x\mapsto\sum_{u\in \mathcal{G}^x_x}f(u)(\psi_x\circ \Phi^{-1})(W_u)=\sum_{p\in \operatorname{Per\, \Lambda}}f(x,p,x)\overline{b(p)}\psi_x(W_p)\] is continuous and hence $M$-measurable. Therefore $\{\psi_x\circ \Phi^{-1}\}_{x\in \Lambda^\infty}$ is an $M$-measurable field of tracial states on $C^*(\mathcal{G}^x_x,\sigma_c)$.
Now applying \eqref{property II of twisted} to $\{\psi_x\}_{x\in \Lambda^\infty}$ with $\eta$ and $p$, for $u=(x,p,x)$ we have \begin{align*} (\psi_x\circ \Phi^{-1})(W_u)&=\psi_x \big(\overline{b(p)}W_{p}\big)\\ &=\overline{b(p)}\sigma_c\big((y,m+p,x),\eta^{-1}\big)\sigma_c(\eta,(x,p,x))\overline{\sigma_c(\eta^{-1},\eta)}\psi_y(W_{p})\\ &=\sigma_c\big((y,m+p,x),\eta^{-1}\big)\sigma_c\big(\eta,(x,p,x)\big)\overline{\sigma_c(\eta^{-1},\eta)}(\psi_y\circ \Phi^{-1})\big(W_{\eta u\eta^{-1}}\big), \end{align*} so $\{\psi_x\circ\Phi^{-1}\}_{x\in\Lambda^\infty}$ satisfies part~(II) of Corollary~\ref{kms states for groupoid}. \end{proof} \subsection{KMS states and invariance} Given a strongly connected finite $k$-graph $\Lambda$, let $\mathcal{I}_\Lambda$ be the interior of the isotropy $\operatorname{Iso}(\mathcal{G}_\Lambda)$ in $\mathcal{G}_\Lambda$. Define $\mathcal{H}_\Lambda:=\mathcal{G}_\Lambda/\mathcal{I}_\Lambda$ and let $\pi:\mathcal{G}_\Lambda\rightarrow \mathcal{H}_\Lambda$ be the quotient map. Let $c\in Z^2(\Lambda,\mathbb{T})$ and let $\mathcal{P}$ be as in \eqref{partition-form}. Suppose that $\omega_c\in Z^2(\operatorname{Per\, \Lambda},\mathbb{T})$ is a bicharacter cohomologous to $\sigma_c^x(p,q)=\sigma_c((x,p,x),(x,q,x))$ for all $x\in \Lambda^\infty$. By \cite[Lemma~3.6]{KPS2} there is a continuous $\widehat{Z}_{\omega_c}$-valued 1-cocycle $\tilde{r}^\sigma$ on $\mathcal{H}_\Lambda$ such that \[\tilde{r}^\sigma_{\pi(\gamma)}(p)=\sigma\big(\gamma,(y,p,y)\big)\sigma\big((x,m+p,y),\gamma^{-1}\big)\overline{\sigma(\gamma^{-1},\gamma)}\] for all $\gamma=(x,m,y)\in \mathcal{G}_\Lambda$ and $p\in Z_{\omega_c}$. This induces an action $B$ of $\mathcal{H}_\Lambda$ on $\Lambda^\infty\times \widehat{Z}_{\omega_c}$ such that \[B_{\pi(\gamma)}(s(\gamma),\chi)=\big(r(\gamma),\tilde{r}^\sigma_{\pi(\gamma)}\cdot\chi\big)\quad\text{for all } \gamma\in \mathcal{G}_\Lambda\text{ and } \chi\in \widehat{Z}_{\omega_c}.\] \begin{cor}\label{cor:invariance} Suppose that $\Lambda$ is a strongly connected finite $k$-graph. Let $c\in Z^2(\Lambda,\mathbb{T})$ and let $\mathcal{P}$ be as in \eqref{partition-form}.
Let $\omega_c\in Z^2(\operatorname{Per\, \Lambda},\mathbb{T})$ be a bicharacter cohomologous to $\sigma_c^x(p,q)=\sigma_c((x,p,x),(x,q,x))$ for all $x\in \Lambda^\infty$. Let $\tau$ be the preferred dynamics on $C^*(\mathcal{G}_\Lambda,\sigma_c)$ and let $M$ be the measure of \eqref{k-graph-measure}. Then there is a bijection between the simplex of the KMS$_1$ states of $(C^*(\mathcal{G}_\Lambda,\sigma_c),\tau)$ and the set of $M$-equivalence classes $[\psi]_M$ of fields of tracial states $\{\psi_x\}_{x\in \Lambda^\infty}$ on $C^*(Z_{\omega_c})\cong C(\widehat{Z}_{\omega_c})$ that are invariant under the action $B$, in the sense that \[ B_{\pi(\gamma)}\big(s(\gamma),\psi_{s(\gamma)}\big)=\big(r(\gamma),\psi_{r(\gamma)}\big)\text{ for all } \gamma\in \mathcal{G}_\Lambda. \] \end{cor} \begin{proof} This follows from Corollary~\ref{kms states for graph} and Lemma~\ref{lemma:traces}. \end{proof} \subsection{A question of uniqueness for KMS$_1$ states} If $c=1$, the results of \cite{aHLRS} show that $C^*(\mathcal{G}_\Lambda,\sigma_1)$ has a unique KMS$_1$ state if and only if it is simple (see Theorem~11.1 and Section~12 in \cite{aHLRS}). Corollary~4.8 of \cite{KPS2} shows that $C^*(\mathcal{G}_\Lambda,\sigma_c)$ is simple if and only if the action $B$ of $\mathcal{H}_\Lambda$ on $\Lambda^\infty \times \widehat{Z}_{\omega_c}$ is minimal. So it is natural to ask whether minimality of the action $B$ characterises the presence of a unique KMS$_1$ state for the preferred dynamics. We have not been able to answer this question; the following brief comments describe the difficulty. The key point in \cite{aHLRS}, where KMS states are parameterised by measures on the dual of the periodicity group of the graph, is that in the absence of a twist the copy of $C^*(\operatorname{Per\, \Lambda})$ in $C^*(\Lambda)$ is central, and this centrality can be used to show that KMS states are completely determined by their values on this subalgebra.
This, combined with Neshveyev's theorems, shows that the field of states $\{\psi_x\}_{x\in \Lambda^\infty}$ corresponding to a KMS state $\psi$ is, up to measure zero, a constant field (see \cite[pages 27--28]{aHLRS}). The corresponding calculation fails in the twisted setting. However, we are able to show that, whether or not $\mathcal{H}_\Lambda$ acts minimally on $\Lambda^\infty \times \widehat{Z}_{\omega_c}$, there is an injective map from the states of $C^*(Z_{\omega_c})$ that are invariant for the action of $\mathcal{H}_\Lambda$ on $\widehat{Z}_{\omega_c}$ induced by the cocycle $\tilde{r}^\sigma$ to the KMS states of the $C^*$-algebra. It follows in particular that the Haar state on $C^*(Z_{\omega_c})$ induces a KMS state, as expected. \begin{cor}\label{cor aHLSR} Let $\phi$ be a state on $C^*(Z_{\omega_c})$ such that $\tilde{r}^\sigma_{\pi(\gamma)}\cdot \phi=\phi$ for all $\gamma\in \mathcal{G}_\Lambda$. Then there is a KMS$_1$ state $\psi_\phi$ of $(C^*(\mathcal{G}_\Lambda,\sigma_c),\tau)$ such that \[ \psi_\phi(f)=\int_{\Lambda^\infty}\sum_{p\in \operatorname{Per\, \Lambda}}f(x,p,x)\phi(W_p)\, dM(x)\quad\text{for all }f\in C_c(\mathcal{G}_\Lambda,\sigma_c). \] The map $\phi\mapsto \psi_\phi$ is injective. In particular, there is a KMS$_1$ state $\psi_{\operatorname{Tr}}$ of $(C^*(\mathcal{G}_\Lambda, \sigma_c), \tau)$ such that \[ \psi_{\operatorname{Tr}}(f) = \int_{\Lambda^\infty} f(x,0,x)\, dM(x)\quad\text{for all }f\in C_c(\mathcal{G}_\Lambda,\sigma_c). \] \end{cor} \begin{proof} For each $x\in \Lambda^\infty$ define \begin{align*} \psi_x=\begin{cases} \phi&\text{if $\{x\}\times \operatorname{Per\, \Lambda}\times \{x\} = \mathcal{G}^x_x$}\\ 0&\text{if $\{x\}\times \operatorname{Per\, \Lambda}\times \{x\} \neq \mathcal{G}^x_x$.}\\ \end{cases} \end{align*} Then $\psi_\phi := \Theta(M, \{\psi_x\}_{x\in \Lambda^\infty})$ satisfies the desired formula. The first statement and the injectivity of $\phi \mapsto \psi_\phi$ follow from Corollary~\ref{cor:invariance}.
The final statement follows from the first statement applied to the Haar trace $\operatorname{Tr}$ on $C^*(Z_{\omega_c})$. \end{proof} \begin{remark} Suppose that $\mathcal{H}_\Lambda$ acts minimally on $\Lambda^\infty \times \widehat{Z}_{\omega_c}$. Then in particular the induced action $\tilde{B}$ of $\mathcal{H}_\Lambda$ on $\widehat{Z}_{\omega_c}$ is minimal. So if $\phi$ is a state of $C^*(Z_{\omega_c})$ that is invariant for $\tilde{B}$ as in Corollary~\ref{cor aHLSR}, then continuity ensures that the associated measure on $\widehat{Z}_{\omega_c}$ is invariant under translation, and so must be equal to the Haar measure. So to prove that $\psi_{\operatorname{Tr}}$ is the unique KMS$_1$ state when $C^*(\Lambda, c)$ is simple, it would suffice to show that the map $\phi \mapsto \psi_\phi$ of Corollary~\ref{cor aHLSR} is surjective. One approach would be to establish that if $\{\psi_x\}_{x\in \Lambda^\infty}$ is an $M$-measurable field of tracial states on $C^*(Z_{\omega_c})$, then the state $\phi$ given by $\phi := \int_{\Lambda^\infty} \psi_x \,dM(x)$ is $\tilde{B}$-invariant and satisfies $\psi_\phi = \Theta(M, \{\psi_x\}_{x\in\Lambda^\infty})$, but we have not been able to establish either claim. \end{remark} \end{document}
\begin{document} \title{Complex Laplacians and Applications in Multi-Agent Systems \thanks{This work is supported by the National Natural Science Foundation of China under grant~11301114 and Hong Kong Research Grants Council under grant~618511.}} \author{Jiu-Gang~Dong and~Li~Qiu,~\IEEEmembership{Fellow,~IEEE} \thanks{J.-G. Dong is with the Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China and is also with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China (e-mail: [email protected]).} \thanks{L. Qiu is with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China (e-mail: [email protected]).} } \maketitle \begin{abstract} Complex-valued Laplacians have been shown to be powerful tools in the study of distributed coordination of multi-agent systems in the plane, including formation shape control and set surrounding control problems. In this paper, we first provide some characterizations of complex Laplacians. As an application, we then establish necessary and sufficient conditions ensuring that agents interacting over complex-weighted networks converge to consensus in a suitable sense. These general consensus results are used to discuss several multi-agent coordination problems in the plane. \end{abstract} \begin{IEEEkeywords} Complex Laplacians, multi-agent systems, complex consensus. \end{IEEEkeywords} \section{Introduction} In the past decade there has been increasing interest in the distributed coordination and control of multi-agent systems, which arise in diverse settings including consensus problems, flocking and formation control~\cite{JLM03,OSM04,OSFM07,OT11}.
As a natural tool, Laplacian matrices of weighted graphs (modeling the interaction among agents) are extensively used in the study of distributed coordination problems for multi-agent systems. Most results are based on real Laplacians; see, e.g., agreement~\cite{BS03,OSM04,OSFM07,RB05}, generalized consensus~\cite{CLHY11,Morbidi13} and bipartite consensus on signed graphs~\cite{Altafini13,MSJCH14}. Very recently, complex Laplacians have been applied to multi-agent systems~\cite{LDYYG13,LWHF14,LH14}. In particular, formation shape control problems in the plane were treated via complex Laplacians in~\cite{LDYYG13,LWHF14}, while new methods based on complex Laplacians were developed in~\cite{LH14} for distributed set surrounding design, which contains consensus on complex-valued networks as a special case. It has been shown that complex Laplacians are powerful tools for multi-agent systems and can significantly simplify the analysis when the state space is the plane. From this point of view, it is worth investigating complex Laplacians in their own right. The main goal of this paper is to study the properties of complex Laplacians. More precisely, for a complex-weighted graph, we provide a necessary and sufficient condition ensuring that the complex Laplacian has a simple eigenvalue at zero with a specified eigenvector. The condition is stated in terms of the connectivity of the graph and properties of the weights. It is shown that the notion of {\em structural balance} for complex-weighted graphs plays a critical role in establishing the condition. To demonstrate the importance of the obtained condition, we apply it to consensus problems on complex-weighted graphs. A general notion of consensus, called {\em complex consensus}, is introduced, meaning that all limiting values of the agents have the same modulus. Some necessary and sufficient conditions for complex consensus are obtained.
These complex consensus results extend and complement existing ones, including the standard consensus results of~\cite{RB05} and the bipartite consensus results of~\cite{Altafini13}. This paper makes the following contributions. 1) We extend the known results on complex Laplacians (see~\cite{Reff12}) to a general setting. 2) We establish general consensus results, which are shown to be useful in the study of distributed coordination of multi-agent systems in the plane, such as circular formation and set surrounding control. In particular, our results supplement the bipartite consensus results in~\cite{Altafini13}. The remainder of this paper is organized as follows. Section~\ref{section: complex Laplacians} discusses the properties of the complex Laplacian. Some multi-agent coordination control problems based on the complex Laplacian are investigated in Section~\ref{section: application}. Section~\ref{section: examples} presents some examples to illustrate our results. Concluding remarks are given in Section~\ref{section: conclusion}. The notation used in the paper is quite standard. Let $\mathbb R$ be the field of real numbers and $\mathbb C$ the field of complex numbers. For a complex matrix $A\in\mathbb C^{n\times n}$, $A^*$ denotes the conjugate transpose of $A$. We use $\bar{z}$ to denote the complex conjugate of a complex number $z$, and $|z|$ its modulus. Let $\mathbf{1}\in\mathbb R^n$ be the $n$-dimensional column vector of ones. For $x=[x_1,\ldots,x_n]^T\in\mathbb C^n$, let $\|x\|_1$ be its $1$-norm, i.e., $\|x\|_1=\sum_{i=1}^n|x_i|$. Denote by $\mathbb T$ the unit circle, i.e., $\mathbb T=\{z\in\mathbb C:\ |z|=1\}$, which is an abelian group under multiplication. For $\zeta=[\zeta_1,\ldots,\zeta_n]^T\in\mathbb T^n$, let $D_\zeta:=\mathrm{diag}(\zeta)$ denote the diagonal matrix with $i$th diagonal entry $\zeta_i$. Finally, ${\rm j}=\sqrt{-1}$ denotes the imaginary unit.
\section{Complex-weighted graphs}\label{section: complex Laplacians} In this section we present some results on complex-weighted graphs, which we believe are also of interest from the graph-theoretic point of view. Before proceeding, we introduce some basic concepts. \subsection{Preliminaries} The digraph associated with a complex matrix $A=[a_{ij}]_{n\times n}$ is denoted by $\mathcal G(A)=(\mathcal V,\mathcal E)$, where $\mathcal V=\{1,\ldots,n\}$ is the vertex set and $\mathcal E\subset\mathcal V\times \mathcal V$ is the edge set. There is an edge $(j,i)\in\mathcal E$, i.e., an edge from $j$ to $i$, if and only if $a_{ij}\neq0$. The matrix $A$ is usually called the adjacency matrix of the digraph $\mathcal G(A)$. Moreover, we assume that $a_{ii}=0$ for $i=1,\ldots,n$, i.e., $\mathcal G(A)$ has no self-loops. For easy reference, we say $\mathcal G(A)$ is complex, real or nonnegative if $A$ is complex, real or (real) nonnegative, respectively. Let $\mathcal N_i$ be the neighbor set of agent $i$, defined as $\mathcal N_i=\{j:\ a_{ij}\neq0\}$. A directed path in $\mathcal G(A)$ from $i_1$ to $i_k$ is a sequence of distinct vertices $i_1,\ldots,i_k$ such that $(i_l,i_{l+1})\in\mathcal E$ for $l=1,\ldots,k-1$. A cycle is a directed path whose origin and terminus coincide. The {\em weight} of a cycle is the product of the weights on all its edges, and a cycle is said to be {\em positive} if its weight is positive. The following definitions are used throughout this paper. \begin{itemize} \item[$\cdot$] A digraph is said to be {\em (structurally) balanced} if all its cycles are positive. \item[$\cdot$] A digraph has a directed spanning tree if there exists at least one vertex (called a root) which has a directed path to all other vertices. \item[$\cdot$] A digraph is strongly connected if for any two distinct vertices $i$ and $j$, there exists a directed path from $i$ to $j$.
\end{itemize} For a strongly connected graph, every vertex can serve as a root. Being strongly connected is stronger than having a directed spanning tree; the two are equivalent when $A$ is Hermitian. For a complex digraph $\mathcal G(A)$, the complex Laplacian matrix $L=[l_{ij}]_{n\times n}$ of $\mathcal G(A)$ is defined by $L=D-A$, where $D=\mathrm{diag}(d_1,\ldots,d_n)$ is the modulus degree matrix of $\mathcal G(A)$ with $d_i=\sum_{j\in\mathcal N_i}|a_{ij}|$. This definition appears in the literature on gain graphs (see, e.g., \cite{Reff12}), and can be thought of as a generalization of the standard Laplacian matrix of nonnegative graphs. We need the following definition of {\em switching equivalence} \cite{Reff12, Z89}. \begin{defn}\label{defn: switching equivalent} {\rm Two graphs $\mathcal G(A_1)$ and $\mathcal G(A_2)$ are said to be {\em switching equivalent}, written as $\mathcal G(A_1)\sim\mathcal G(A_2)$, if there exists a vector $\zeta=[\zeta_1,\ldots,\zeta_n]^T\in\mathbb T^n$ such that $A_2=D_\zeta^{-1} A_1D_\zeta$.} \end{defn} It is not difficult to see that switching equivalence is an equivalence relation, and that it preserves connectivity and balancedness. We next investigate the properties of the eigenvalues of the complex Laplacian $L$. \subsection{Properties of the complex Laplacian} For brevity, we say $A$ is {\em essentially nonnegative} if $\mathcal G(A)$ is switching equivalent to a graph with a nonnegative adjacency matrix. By definition, $A$ is essentially nonnegative if and only if there exists a diagonal matrix $D_\zeta$ such that $D_\zeta^{-1} AD_\zeta$ is nonnegative. By the Ger\v{s}gorin disk theorem \cite[Theorem 6.1.1]{HJ87}, all the eigenvalues of the Laplacian matrix $L$ of $A$ have nonnegative real parts, and zero is the only possible eigenvalue with zero real part.
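The last observation is easy to check numerically. The sketch below (the adjacency matrix is our own arbitrary example) forms $L=D-A$ and confirms that every eigenvalue lies in the closed right half-plane, since each Ger\v{s}gorin disk is centred at $d_i\geq0$ with radius $d_i$.

```python
import numpy as np

# Illustrative complex adjacency matrix (our own choice, zero diagonal).
A = np.array([[0, 1 + 1j, 0],
              [2, 0, 1j],
              [0, -1, 0]], dtype=complex)

D = np.diag(np.abs(A).sum(axis=1))  # modulus degree matrix, d_i = sum_j |a_ij|
L = D - A                           # complex Laplacian

eigs = np.linalg.eigvals(L)
# Each Gersgorin disk of L is centred at d_i >= 0 with radius d_i,
# so every eigenvalue has nonnegative real part.
assert np.all(eigs.real >= -1e-10)
```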
We next further discuss the properties of the eigenvalues of $L$ in terms of $\mathcal G(A)$. \begin{lem}\label{lem: 1} Zero is an eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$ if and only if $A$ is essentially nonnegative. \end{lem} \begin{proof} (Sufficiency) Assume that $A$ is essentially nonnegative. That is, there exists a diagonal matrix $D_\zeta$ such that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative. Let $L_1$ be the Laplacian matrix of the nonnegative matrix $A_1$, so that $L_1\mathbf{1}=0$. A simple observation shows that the two Laplacian matrices are similar, i.e., $L_1=D_\zeta^{-1}LD_\zeta$. Therefore, $L\zeta=0$. (Necessity) Let $L\zeta=0$ with $\zeta\in\mathbb T^n$. Then we have $LD_\zeta\mathbf{1}=0$ and so $D_\zeta^{-1}LD_\zeta\mathbf{1}=0$. Expanding the $i$th component of $D_\zeta^{-1}LD_\zeta\mathbf{1}=0$ gives $d_i=\sum_{j\in\mathcal N_i}\zeta_i^{-1}a_{ij}\zeta_j$; since $d_i=\sum_{j\in\mathcal N_i}|a_{ij}|$ and $|\zeta_i^{-1}a_{ij}\zeta_j|=|a_{ij}|$, the equality case of the triangle inequality forces $\zeta_i^{-1}a_{ij}\zeta_j=|a_{ij}|\geq0$. This implies that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and thus $A$ is essentially nonnegative. \end{proof} If we take connectedness into account, then we can derive a stronger result. \begin{prop}\label{prop: 2} Zero is a simple eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$ if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a directed spanning tree. \end{prop} \begin{proof} The proof follows from a sequence of equivalences: $$ (1)\Leftrightarrow(2)\Leftrightarrow(3)\Leftrightarrow(4). $$ Conditions (1)-(4) are given in the following. \begin{itemize} \item[$(1)$] $A$ is essentially nonnegative and $\mathcal G(A)$ has a directed spanning tree. \item[$(2)$] There exists a diagonal matrix $D_\zeta$ such that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $\mathcal G(A_1)$ has a directed spanning tree. \item[$(3)$] There exists a diagonal matrix $D_\zeta$ such that $L_1=D_\zeta^{-1}LD_\zeta$ has a simple zero eigenvalue with an eigenvector being $\mathbf{1}$.
\item[$(4)$] $L$ has a simple zero eigenvalue with an eigenvector $\zeta\in\mathbb T^n$. \end{itemize} Here $(1)\Leftrightarrow(2)$ holds by definition, $(2)\Leftrightarrow(3)$ is \cite[Lemma 3.1]{Ren07}, and $(3)\Leftrightarrow(4)$ follows from similarity. \end{proof} A key issue here is how to verify the essential nonnegativity of $A$. Thanks to the concept of balancedness of digraphs, we can derive a necessary and sufficient condition for $A$ to be essentially nonnegative. To this end, for a complex matrix $A$, we denote by $A_H=(A+A^*)/2$ the Hermitian part of $A$. Clearly, $A=A_H$ when $A$ is Hermitian. \begin{prop}\label{prop: 1} The complex matrix $A=[a_{ij}]_{n\times n}$ is essentially nonnegative if and only if $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. \end{prop} \begin{proof} Since $A_H$ is Hermitian, it follows from~\cite{Z89} that $\mathcal G(A_H)$ is balanced if and only if $A_H$ is essentially nonnegative. Therefore, to complete the proof, we show that $A$ is essentially nonnegative if and only if $A_H$ is essentially nonnegative and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. {\em Sufficiency:} Since $a_{ij}a_{ji}\geq0$, the product equals both its modulus and its conjugate, i.e., $|a_{ij}a_{ji}|=a_{ij}a_{ji}=\bar{a}_{ij} \bar{a}_{ji}$. Multiplying both sides by $a_{ij}$ and dividing by $|a_{ij}|$ (when $a_{ij}\neq0$), we obtain $|a_{ji}|a_{ij}=|a_{ij}|\bar{a}_{ji}$. Consequently, for a diagonal matrix $D_\zeta$ with $\zeta=[\zeta_1,\ldots,\zeta_n]^T\in\mathbb T^n$, we have for $a_{ij}\neq0$ \begin{equation}\label{eq: relation} \zeta_i^{-1}\frac{a_{ij}+\bar{a}_{ji}}{2}\zeta_j =\frac{1+\frac{|a_{ji}|}{|a_{ij}|}}{2}\zeta_i^{-1}a_{ij}\zeta_j. \end{equation} It thus follows that if $D_\zeta^{-1}A_HD_\zeta$ is nonnegative then $D_\zeta^{-1}AD_\zeta$ is nonnegative, which proves the sufficiency. {\em Necessity:} Now assume that $A$ is essentially nonnegative. That is, there exists a diagonal matrix $D_\zeta$ such that $D_\zeta^{-1}AD_\zeta$ is nonnegative.
Then we have $$ a_{ij}a_{ji}=(\zeta_i^{-1}a_{ij}\zeta_j)(\zeta_j^{-1}a_{ji}\zeta_i)\geq0, $$ so relation~\eqref{eq: relation} holds, and it follows that $D_\zeta^{-1}A_HD_\zeta$ is nonnegative. This concludes the proof. \end{proof} The above proposition deals with the balancedness of $\mathcal G(A_H)$, instead of $\mathcal G(A)$ itself. The reason is that balancedness of $\mathcal G(A)$ is not a sufficient condition for $A$ to be essentially nonnegative, as shown in the following example. \begin{exa} {\rm Consider the complex matrix $A$ given by $$ A=\begin{bmatrix} 0 & 2 & 0 \\ 1 & 0 & 0 \\ -\mathrm{j} & \mathrm{j} & 0 \\ \end{bmatrix}. $$ The only cycle of $\mathcal G(A)$ is a positive cycle of length two, so $\mathcal G(A)$ is balanced. However, $A$ is not essentially nonnegative: any $\zeta\in\mathbb T^3$ making the two-cycle weights nonnegative must satisfy $\zeta_1=\zeta_2$, and then $\zeta_3^{-1}(-\mathrm{j})\zeta_1$ and $\zeta_3^{-1}\mathrm{j}\zeta_2$ cannot both be nonnegative.} \end{exa} The following theorem is a combination of Propositions~\ref{prop: 2} and \ref{prop: 1}. \begin{thm} Zero is a simple eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$ if and only if $\mathcal G(A)$ has a spanning tree, $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. \end{thm} We next turn our attention to the case that $A$ is not essentially nonnegative. When $\mathcal G(A)$ has a spanning tree and $A$ is not essentially nonnegative, all that Proposition~\ref{prop: 2} yields is that either zero is not an eigenvalue of $L$, or zero is an eigenvalue of $L$ with no associated eigenvector in $\mathbb T^n$. To provide further understanding, we here consider the special case that $A$ is Hermitian. In this case $L$ is also Hermitian, so all eigenvalues of $L$ are real. Let $\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n$ be the eigenvalues of $L$. The positive semidefiniteness of $L$, i.e., the fact that $\lambda_1\geq0$, can be obtained from the following observation.
For $z=[z_1,\ldots,z_n]^T\in\mathbb C^n$, we have \begin{equation}\label{eq: positivity of L} \begin{split} z^*Lz&=\sum_{i=1}^n\bar{z}_i\left(\sum_{j\in\mathcal N_i}|a_{ij}|z_i-\sum_{j\in\mathcal N_i}a_{ij}z_j\right)\\ &=\frac{1}{2}\sum_{(j,i)\in\mathcal E}\left(|a_{ij}||z_i|^2+|a_{ij}||z_j|^2-2a_{ij}\bar{z}_iz_j\right)\\ &=\frac{1}{2}\sum_{(j,i)\in\mathcal E}|a_{ij}|\left|z_i-\varphi(a_{ij})z_j\right|^2 \end{split} \end{equation} where $\varphi:\ \mathbb C\backslash\{0\}\rightarrow\mathbb T$ is defined by $\varphi(a_{ij})=\frac{a_{ij}}{|a_{ij}|}$; here we use that $A$ is Hermitian, so $(i,j)\in\mathcal E$ whenever $(j,i)\in\mathcal E$ and the cross terms pair up into real quantities. Based on \eqref{eq: positivity of L}, we have the following lemma. \begin{lem}\label{lem: 2} Let $A$ be Hermitian. Assume that $\mathcal G(A)$ has a spanning tree. Then $L$ is positive definite, i.e., $\lambda_1>0$, if and only if $A$ is not essentially nonnegative. \end{lem} \begin{proof} We only show the sufficiency, since the necessity follows directly from Proposition~\ref{prop: 2}. Assume the contrary; since $\lambda_1\geq0$ by \eqref{eq: positivity of L}, this means $\lambda_1=0$, so there exists a nonzero vector $y=[y_1,\ldots,y_n]^T\in\mathbb C^n$ such that $Ly=0$. By \eqref{eq: positivity of L}, \begin{equation*} \begin{split} y^*Ly=\frac{1}{2}\sum_{(j,i)\in\mathcal E}|a_{ij}|\left|y_i-\frac{a_{ij}}{|a_{ij}|}y_j\right|^2=0. \end{split} \end{equation*} This implies that $y_i=\frac{a_{ij}}{|a_{ij}|}y_j$, and hence $|y_i|=|y_j|$, for every $(j,i)\in\mathcal E$. Note that for $\mathcal G(A)$ with $A$ Hermitian, having a spanning tree is equivalent to strong connectivity. We then conclude that $|y_i|=|y_j|$ for all $i,j=1,\ldots,n$; since $y\neq0$, this common modulus is positive, and after rescaling we may assume that $y\in\mathbb T^n$. It follows from Lemma~\ref{lem: 1} that $A$ is essentially nonnegative, a contradiction. \end{proof} On the other hand, for the general case that $A$ is not Hermitian, we cannot conclude that $L$ has no zero eigenvalue when $\mathcal G(A)$ has a spanning tree and $A$ is not essentially nonnegative. Example~\ref{exa: 1} in Section~\ref{section: examples} provides such an example.
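The two lemmas above admit a quick numerical sanity check. The sketch below (NumPy; both small graphs are illustrative choices of ours, not taken from the paper) first gauges a nonnegative $3$-cycle into an essentially nonnegative complex $A$ and verifies $L\zeta=0$ as in Lemma~\ref{lem: 1}, and then verifies Lemma~\ref{lem: 2} on a Hermitian matrix whose triangle has negative sign (an unbalanced, hence not essentially nonnegative, graph):

```python
import numpy as np

def complex_laplacian(A):
    """L = D - A, with d_i = sum_j |a_ij| (moduli of the complex weights)."""
    return np.diag(np.abs(A).sum(axis=1)) - A

# Lemma 1: gauge a nonnegative 3-cycle by unimodular phases (illustrative
# choice), so A = D_zeta A_1 D_zeta^{-1} is essentially nonnegative.
A1 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]], dtype=complex)
zeta = np.exp(1j * np.array([0.0, 0.7, -1.3]))   # arbitrary phases
Dz = np.diag(zeta)
A = Dz @ A1 @ np.linalg.inv(Dz)
L = complex_laplacian(A)
assert np.allclose(L @ zeta, 0)   # zero eigenvalue, eigenvector in T^n

# Lemma 2: Hermitian A with a negative triangle, so A is not essentially
# nonnegative and L must be positive definite.
A_neg = np.array([[0, 1, -1],
                  [1, 0, 1],
                  [-1, 1, 0]], dtype=complex)
L_neg = complex_laplacian(A_neg)
lam = np.linalg.eigvalsh(L_neg)   # L_neg is Hermitian, eigenvalues are real
assert lam.min() > 1e-9           # lambda_1 > 0: positive definite
```

For the Hermitian example the spectrum of $L$ works out to $\{1,1,4\}$, consistent with Lemma~\ref{lem: 2}.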
\section{Applications}\label{section: application} In this section, we study distributed coordination problems using the results established in Section~\ref{section: complex Laplacians}. We first consider consensus problems on complex-weighted digraphs. \subsection{Complex consensus} For a group of $n$ agents, we consider the continuous-time (CT) consensus protocol over the complex field \begin{equation}\label{eq: Algorithm} \dot{z}_i(t)=u_i(t),\ t\geq0 \end{equation} where $z_i(t)\in\mathbb C$ and $u_i(t)\in\mathbb C$ are the state and input of agent $i$, respectively. We also consider the corresponding discrete-time (DT) protocol over the complex field \begin{equation}\label{eq: discrete algorithm} z_i(k+1)=z_i(k)+u_i(k),\ k=0,1,\ldots. \end{equation} The communication between agents is modeled by a complex graph $\mathcal G(A)$. The control input $u_i$ is designed, in a distributed way, as $$ u_i=-\kappa\sum_{j\in\mathcal N_i}(|a_{ij}|z_i-a_{ij}z_j), $$ where $\kappa>0$ is a fixed control gain. Then we have the following two closed-loop systems: \begin{equation*} \dot{z}_i(t)=-\kappa\sum_{j\in\mathcal N_i}(|a_{ij}|z_i-a_{ij}z_j) \end{equation*} and \begin{equation*} z_i(k+1)=z_i(k)-\kappa\sum_{j\in\mathcal N_i}(|a_{ij}|z_i-a_{ij}z_j). \end{equation*} Denote by $z=[z_1,\ldots,z_n]^T\in\mathbb C^n$ the aggregate state vector of the $n$ agents. With the Laplacian matrix $L$ of $\mathcal G(A)$, these two systems can be rewritten in the more compact forms \begin{equation}\label{eq: Algorithm with complex L} \dot{z}(t)=-\kappa Lz(t) \end{equation} in the CT case and \begin{equation}\label{eq: discrete algorithm with complex L} z(k+1)=z(k)-\kappa Lz(k) \end{equation} in the DT case. Inspired by consensus in real-weighted networks \cite{Altafini13, OSM04, OSFM07}, we introduce the following definition.
\begin{defn}\label{defn: modulus consensus} {\rm We say that the CT system \eqref{eq: Algorithm with complex L} (or the DT system~\eqref{eq: discrete algorithm with complex L}) reaches {\em complex consensus} if there exists a constant $a>0$ such that $\lim_{t\rightarrow\infty}|z_i(t)|=a$ (or $\lim_{k\rightarrow\infty}|z_i(k)|=a$) for all $i=1,\ldots,n$.} \end{defn} The following observation simplifies the statement of the complex consensus results. Let $A$ be an essentially nonnegative complex matrix. If $\mathcal G(A)$ has a spanning tree, then it follows from Proposition~\ref{prop: 2} that $L$ has a simple eigenvalue at zero with an associated eigenvector $\zeta\in\mathbb T^n$. Thus $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $D_\zeta^{-1}LD_\zeta$ has a simple eigenvalue at zero with an associated eigenvector $\mathbf{1}$. From standard consensus theory~\cite{RBA07}, it is well known that $D_\zeta^{-1}LD_\zeta$ has a nonnegative left eigenvector $\nu=[\nu_1,\ldots,\nu_n]^T$ corresponding to the eigenvalue zero, i.e., $\nu^T(D_\zeta^{-1}LD_\zeta)=0$ and $\nu_i\geq0$ for $i=1,\ldots,n$; we normalize it so that $\|\nu\|_1=1$. Letting $\eta=D_\zeta^{-1}\nu=[\eta_1,\ldots,\eta_n]^T$, we have $\|\eta\|_1=1$ and $\eta^TL=0$. We first state a necessary and sufficient condition for complex consensus of the CT system~\eqref{eq: Algorithm with complex L}. \begin{thm}\label{thm: 2} The CT system \eqref{eq: Algorithm with complex L} reaches complex consensus if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, we have $$ \lim_{t\rightarrow\infty}z(t)=(\eta^Tz(0))\zeta. $$ \end{thm} \begin{proof} Assume that $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. By Proposition~\ref{prop: 2}, $L$ has a simple eigenvalue at zero with an associated eigenvector $\zeta\in\mathbb T^n$. Thus $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $D_\zeta^{-1}LD_\zeta$ has a simple eigenvalue at zero with an eigenvector $\mathbf{1}$.
Let $z=D_\zeta x$. By system \eqref{eq: Algorithm with complex L}, $x$ satisfies \begin{equation*} \dot{x}=-\kappa D_\zeta^{-1}LD_\zeta x. \end{equation*} Note that this is the standard consensus problem. From~\cite{RBA07}, it follows that $$ \lim_{t\rightarrow\infty}x(t)=\nu^Tx(0)\mathbf{1}=\nu^TD_\zeta^{-1}z(0)\mathbf{1}, $$ which is equivalent to $$ \lim_{t\rightarrow\infty}z(t)=(\nu^TD_\zeta^{-1}z(0))D_\zeta\mathbf{1}= (\eta^Tz(0))\zeta. $$ To show the other direction, assume now that the system \eqref{eq: Algorithm with complex L} reaches complex consensus but $\mathcal G(A)$ does not have a spanning tree. Let $T_1$ be a maximal subtree of $\mathcal G(A)$; then $T_1$ is a spanning tree of a subgraph $\mathcal G_1$ of $\mathcal G(A)$. Denote by $\mathcal G_2$ the subgraph induced by the vertices not belonging to $\mathcal G_1$. There is no edge from $\mathcal G_1$ to $\mathcal G_2$, since otherwise $T_1$ would not be a maximal subtree. All possible edges between $\mathcal G_1$ and $\mathcal G_2$ are therefore from $\mathcal G_2$ to $\mathcal G_1$, and, again because $T_1$ is a maximal subtree, there is no directed path from any vertex in $\mathcal G_2$ to the root of $T_1$. Therefore it is impossible to reach complex consensus between the root of $T_1$ and the vertices of $\mathcal G_2$, so the system \eqref{eq: Algorithm with complex L} cannot reach complex consensus. This is a contradiction, and hence $\mathcal G(A)$ has a spanning tree. On the other hand, since the system \eqref{eq: Algorithm with complex L} reaches complex consensus, every solution $y=[y_1,\ldots,y_n]^T$ of the equation $Ly=0$ is a constant trajectory of the system and therefore satisfies $|y_i|=|y_j|$ for all $i,j=1,\ldots,n$. Namely, zero is an eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$. It thus follows from Lemma~\ref{lem: 1} that $A$ is essentially nonnegative. This completes the proof of Theorem \ref{thm: 2}.
\end{proof} For $\mathcal G(A)$, define the maximum modulus degree $\Delta=\max_{1\leq i\leq n}d_i$. We are now in a position to state the complex consensus result for the DT system~\eqref{eq: discrete algorithm with complex L}. \begin{thm}\label{thm: discrete version of 2} Assume that the control gain $\kappa$ satisfies $0<\kappa<1/\Delta$. Then the DT system \eqref{eq: discrete algorithm with complex L} reaches complex consensus if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, we have $$ \lim_{k\rightarrow\infty}z(k)=(\eta^Tz(0))\zeta. $$ \end{thm} \begin{proof} Assume that $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. By Proposition~\ref{prop: 2}, $L$ has a simple eigenvalue at zero with an associated eigenvector $\zeta\in\mathbb T^n$. Thus $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $D_\zeta^{-1}LD_\zeta$ has a simple eigenvalue at zero with an associated eigenvector $\mathbf{1}$. Let $z=D_\zeta x$. By system \eqref{eq: discrete algorithm with complex L}, $x$ satisfies \begin{equation*} x(k+1)=(I-\kappa D_\zeta^{-1}LD_\zeta)x(k). \end{equation*} Note that this is the standard consensus problem. From~\cite{RBA07}, it follows that $$ \lim_{k\rightarrow\infty}x(k)=\nu^Tx(0)\mathbf{1}=\nu^TD_\zeta^{-1}z(0)\mathbf{1}, $$ which is equivalent to $$ \lim_{k\rightarrow\infty}z(k)=(\nu^TD_\zeta^{-1}z(0))D_\zeta\mathbf{1}=(\eta^Tz(0))\zeta. $$ To show the other direction, assume now that the system \eqref{eq: discrete algorithm with complex L} reaches complex consensus. Using the same arguments as for the CT system \eqref{eq: Algorithm with complex L} above, we can see that $\mathcal G(A)$ has a spanning tree.
On the other hand, based on the Ger\v{s}gorin disk theorem \cite[Theorem 6.1.1]{HJ87}, all the eigenvalues of $-\kappa L$ are located in the union of the following $n$ disks: $$ \left\{z\in\mathbb C: \left|z+\kappa\sum_{j\in\mathcal N_i}|a_{ij}|\right|\leq\kappa\sum_{j\in\mathcal N_i}|a_{ij}|\right\}, \ i=1,\ldots,n. $$ Clearly, all these $n$ disks are contained in the largest disk, defined by $$ \left\{z\in\mathbb C: \left|z+\kappa\Delta\right|\leq\kappa\Delta\right\}. $$ Since $0<\kappa<1/\Delta$, this largest disk is contained in the region $\left\{z\in\mathbb C: \left|z+1\right|<1\right\}\cup\{0\}$. By translation, all the eigenvalues of $I-\kappa L$ are located in the region $$ \left\{z\in\mathbb C: |z|<1\right\}\cup\{1\}. $$ Since the system \eqref{eq: discrete algorithm with complex L} reaches complex consensus, $1$ must be an eigenvalue of $I-\kappa L$, and all other eigenvalues of $I-\kappa L$ have modulus strictly smaller than $1$. Moreover, if $y=[y_1,\ldots,y_n]^T$ is an eigenvector of $I-\kappa L$ corresponding to the eigenvalue $1$, then $z(k)\equiv y$ is a constant trajectory of the system, so $|y_i|=|y_j|>0$ for $i,j=1,\ldots,n$. That is, zero is an eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$. It thus follows from Lemma~\ref{lem: 1} that $A$ is essentially nonnegative. This completes the proof of Theorem \ref{thm: discrete version of 2}. \end{proof} \begin{rem}\label{rem: Hermitian} {\rm \noindent \begin{itemize} \item[1)] In Theorems \ref{thm: 2} and \ref{thm: discrete version of 2}, the key point is to check the condition that $A$ is essentially nonnegative, which, by Proposition~\ref{prop: 1}, can be done by examining whether $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. \item[2)] For the special case when $A$ is Hermitian, Theorems \ref{thm: 2} and \ref{thm: discrete version of 2} take a simpler form.
As an example, we consider the CT system \eqref{eq: Algorithm with complex L} with $A$ Hermitian. In this case, it follows from Proposition~\ref{prop: 1} that $A$ is essentially nonnegative if and only if $\mathcal G(A)$ is balanced. Hence the CT system \eqref{eq: Algorithm with complex L} reaches complex consensus if and only if $\mathcal G(A)$ has a spanning tree and is balanced, in which case $$ \lim_{t\rightarrow\infty}z(t)=\frac{1}{n}(\zeta^*z(0))\zeta. $$ In addition, in view of Lemma~\ref{lem: 2}, it follows that $\lim_{t\rightarrow\infty}z(t)=0$ when $\mathcal G(A)$ has a spanning tree and is unbalanced. \item[3)] By the standard consensus results in \cite{RB05}, Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2} can be generalized to the case of switching topology. We omit the details to avoid repetition. \end{itemize} } \end{rem} \begin{rem} {\rm \noindent \begin{itemize} \item[1)] Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2} actually give an equivalent condition ensuring that all the agents converge to a common circle centered at the origin. Motivated by this observation, we can modify the two systems~\eqref{eq: Algorithm with complex L} and \eqref{eq: discrete algorithm with complex L} accordingly to study circular formation problems. Similarly to Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2}, we can establish a necessary and sufficient condition ensuring that all the agents converge to a common circle centered at a given point and are distributed along the circle in a desired pattern, specified by prescribed angle separations and an ordering among the agents. We omit the details due to space limitations. \item[2)] Part of Theorem~\ref{thm: 2} has been obtained in the literature; see~\cite[Theorems III.5 and III.6]{LH14}. As potential applications, the results in Section~\ref{section: complex Laplacians} can be used to study the set surrounding control problems~\cite{LH14}.
A detailed analysis of this is beyond the scope of this paper. \end{itemize} } \end{rem} \subsection{Bipartite consensus revisited} As an application, we now revisit some bipartite consensus results in light of Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2}. We will see that these bipartite consensus results improve the existing results in the literature. Let $\mathcal G(A)$ be a signed graph, i.e., $A=[a_{ij}]_{n\times n}\in\mathbb R^{n\times n}$ and the entries $a_{ij}$ may be negative. By bipartite consensus we mean that, on a signed graph, all agents converge to values that agree in absolute value but possibly differ in sign. The state is now real-valued, and we write $x$ in place of $z$. Then the two systems \eqref{eq: Algorithm with complex L} and \eqref{eq: discrete algorithm with complex L} reduce to the standard consensus systems \begin{equation}\label{eq: standard CT system} \dot{x}(t)=-\kappa Lx(t) \end{equation} and \begin{equation}\label{eq: standard DT system} x(k+1)=x(k)-\kappa Lx(k). \end{equation} With the above two systems and based on Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2}, we can derive the following bipartite consensus results on signed graphs. \begin{cor}\label{thm: signed 2} Let $\mathcal G(A)$ be a signed digraph. Then the CT system \eqref{eq: standard CT system} achieves bipartite consensus asymptotically if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, for any initial state $x(0)\in\mathbb R^n$, we have $$ \lim_{t\rightarrow\infty}x(t)=(\eta^Tx(0))\sigma $$ where $\sigma=[\sigma_1,\ldots,\sigma_n]^T\in\{\pm1\}^n$ is such that $D_{\sigma}AD_{\sigma}$ is a nonnegative matrix, and $\eta=[\eta_1,\ldots,\eta_n]^T\in\mathbb R^n$ satisfies $\eta^TL=0$ and $\|\eta\|_1=1$. \end{cor} \begin{cor}\label{thm: signed D version of 2} Let $\mathcal G(A)$ be a signed digraph.
Then the DT system \eqref{eq: standard DT system} with $0<\kappa<1/\Delta$ achieves bipartite consensus asymptotically if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, for any initial state $x(0)\in\mathbb R^n$, we have $$ \lim_{k\rightarrow\infty}x(k)=(\eta^Tx(0))\sigma $$ where $\eta$ and $\sigma$ are defined as in Corollary~\ref{thm: signed 2}. \end{cor} \begin{rem} {\rm Corollary \ref{thm: signed 2} indicates that bipartite consensus can be achieved under a condition weaker than that given in Theorem~2 of~\cite{Altafini13}. In addition, we also obtain a similar necessary and sufficient condition for bipartite consensus of the DT system~\eqref{eq: standard DT system}.} \end{rem} \section{Examples}\label{section: examples} In this section we present some examples illustrating our results. \begin{exa} {\rm Consider the complex graph $\mathcal G(A)$ illustrated in Figure \ref{fig: balanced} with adjacency matrix $$ A=\begin{bmatrix} 0 & 0 & -\mathrm{j} & 0 \\ 1 & 0 & 0 & 0 \\ 0 & \mathrm{j} & 0 & 0 \\ 0 & 1+\mathrm{j} & 0 & 0 \\ \end{bmatrix}. $$ It is easy to see that $\mathcal G(A)$ has a spanning tree. Since $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}=0\geq0$ for all $i\neq j$, Proposition~\ref{prop: 1} implies that $A$ is essentially nonnegative. Furthermore, defining $\zeta=[1,1,\mathrm{j},e^{\mathrm{j}\frac{\pi}{4}}]^T\in\mathbb T^4$, we have $$ A_1=D_\zeta^{-1}AD_\zeta=\begin{bmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & \sqrt{2} & 0 & 0 \\ \end{bmatrix}. $$ The set of eigenvalues of the complex Laplacian $L$ is $\{0, \sqrt{2}, 3/2+\sqrt{3}\mathrm{j}/2, 3/2-\sqrt{3}\mathrm{j}/2\}$, and $\zeta$ is an eigenvector associated with the eigenvalue zero. A simulation of system \eqref{eq: Algorithm with complex L} is given in Figure \ref{fig: Modolus consensus}, which shows that complex consensus is reached asymptotically. This confirms the analytical result of Theorem \ref{thm: 2}.
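The numbers stated in this example, together with the convergence predicted by Theorem~\ref{thm: discrete version of 2}, can be confirmed with a short NumPy computation (a sketch; the initial state and the gain $\kappa$ are arbitrary illustrative choices):

```python
import numpy as np

j = 1j
A = np.array([[0, 0,    -j, 0],
              [1, 0,     0, 0],
              [0, j,     0, 0],
              [0, 1 + j, 0, 0]], dtype=complex)
# Complex Laplacian: L = D - A with d_i = sum_j |a_ij|.
L = np.diag(np.abs(A).sum(axis=1)) - A

zeta = np.array([1, 1, j, np.exp(j * np.pi / 4)])
A1 = np.diag(1 / zeta) @ A @ np.diag(zeta)      # gauged adjacency matrix
assert np.allclose(A1.imag, 0) and np.all(A1.real >= -1e-12)  # nonnegative
assert np.isclose(A1[3, 1].real, np.sqrt(2))    # the sqrt(2) entry
assert np.allclose(L @ zeta, 0)                 # zeta spans ker(L)

# Spectrum {0, sqrt(2), 3/2 +- sqrt(3)j/2}, as stated in the example.
eigs = np.linalg.eigvals(L)
for w in [0, np.sqrt(2), 1.5 + np.sqrt(3) / 2 * j, 1.5 - np.sqrt(3) / 2 * j]:
    assert np.min(np.abs(eigs - w)) < 1e-9

# DT complex consensus with 0 < kappa < 1/Delta, where Delta = sqrt(2):
kappa = 0.5 / np.sqrt(2)
z = np.array([1.0, -2.0, 0.5j, 3.0], dtype=complex)  # illustrative z(0)
for _ in range(500):
    z = z - kappa * (L @ z)
assert np.allclose(np.abs(z), np.abs(z[0]))     # moduli equalize
assert np.abs(z[0]) > 1e-3                      # to a common positive value
```

The iteration converges to $(\eta^Tz(0))\zeta$, so all agents end up on a common circle centered at the origin.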
\begin{figure} \caption{Balanced graph.} \label{fig: balanced} \end{figure} \begin{figure} \caption{Complex consensus process of the agents.} \label{fig: Modolus consensus} \end{figure}} \end{exa} \begin{exa}\label{exa: 1} {\rm Consider the complex graph $\mathcal G(A)$ illustrated in Figure \ref{fig: unbalanced1} with adjacency matrix $$ A=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1-\mathrm{j} & 0 & 0\\ 0 & \mathrm{j} & 0 & 0 & 0 & 0\\ \mathrm{j}& 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -\mathrm{j} & 0\\ \end{bmatrix}. $$ We can see that $\mathcal G(A)$ has a spanning tree, while $A$ is not essentially nonnegative since $\mathcal G(A_H)$ is unbalanced. One can verify that zero is nevertheless an eigenvalue of $L$; by Lemma~\ref{lem: 1}, it has no associated eigenvector in $\mathbb T^6$. The simulation in Figure \ref{fig: no MC} shows that complex consensus cannot be reached. \begin{figure} \caption{Unbalanced graph.} \label{fig: unbalanced1} \end{figure} \begin{figure} \caption{Trajectories of the agents, showing that complex consensus cannot be reached.} \label{fig: no MC} \end{figure}} \end{exa} \section{Conclusion}\label{section: conclusion} Motivated by the study of bipartite consensus problems, we have discussed consensus problems on complex-weighted digraphs. To this end, we first established some key properties of the complex Laplacian; we emphasize that these properties can be verified by checking properties of the corresponding digraph. We then gave necessary and sufficient conditions for complex consensus and showed that these general consensus results can be used to study distributed coordination control problems for multi-agent systems in a plane. In particular, they cover the bipartite consensus results on signed digraphs. We believe that the properties of the complex Laplacian obtained in this paper will be useful in other planar multi-agent coordination problems. \end{document}
\begin{document} \pagestyle{empty} \title{DISCRETE PHASE SPACE AND MINIMUM-UNCERTAINTY STATES} \author{William K.~Wootters and Daniel M.~Sussman} \affiliation{Department of Physics, Williams College\\ Williamstown, MA 01267, USA} \begin{abstract} The quantum state of a system of $n$ qubits can be represented by a Wigner function on a discrete phase space, each axis of the phase space taking values in the finite field ${\mathbb F}_{2^n}$. Within this framework, we show that one can make sense of the notion of a ``rotationally invariant state'' of any collection of qubits, and that any such state is, in a well defined sense, a state of minimum uncertainty. \end{abstract} \setcounter{section}{0} \section{Introduction} A quantum state cannot be squeezed down to a point in phase space. But there are quantum states that closely approximate classical states, such as the coherent states of a harmonic oscillator. One characterization of the coherent states is based on the Wigner function: they are the only states for which the Wigner function is both strictly positive and rotationally symmetric around its center (here we assume a specific scaling of the axes appropriate for the given oscillator). One can also express the quantum mechanics of {\em discrete} systems in terms of phase space. In this paper we consider a system of $n$ qubits described in the framework of Ref.~\cite{GHW}, in which the discrete phase space can be pictured as a $2^n \times 2^n$ array of points. In this framework, the discrete Wigner function preserves the tomographic feature of the usual Wigner function, but the points of the discrete phase space are defined abstractly and do not come with an immediate physical interpretation. As in the continuous case, a point in discrete phase space is {\em illegal} as a quantum state: it holds too much information. But one can ask whether there are quantum states that, like coherent states, approximate a phase-space point as closely as possible. 
We would like to identify such states and thereby to give more physical meaning to the discrete phase space. In this paper we focus primarily on the second of the two properties mentioned above: invariance under rotations. We will see that one can make sense of this notion in the discrete space and that rotationally invariant states exist for any number of qubits. The most interesting property of these states is that they minimize uncertainty in a well defined sense. The product $\Delta q\Delta p$, where $q$ and $p$ are position and momentum, has no meaning in our setting because our variables have no natural ordering. We therefore express uncertainty in information-theoretic terms, specifically in terms of the R\'enyi entropy of order 2 (which we call simply ``R\'enyi entropy'' for short). Moreover we consider not just the ``axis variables,'' but also variables associated with all the other directions in the discrete phase space. (In the continuous case these other directions would be associated with linear combinations of $q$ and $p$.) We will find that {\em each} rotationally invariant state minimizes the R\'enyi entropy, averaged over all these variables. This will leave us with the question of picking out a ``most pointlike'' state among the rotationally invariant states, if such a notion can be made meaningful; we address this question briefly in the conclusion. \section{DISCRETE PHASE SPACE } \label{sec-2} Over the years there have been many proposals for generalizing the Wigner function to discrete systems. (See, for example, Refs.~\cite{BH,Marmo} and papers cited in Ref.~\cite{GHW}.) Here we adopt the discrete Wigner function proposed by Gibbons {\em et al.} \cite{GHW}, which is well suited to a system of qubits. The basic idea is to use, instead of the field of real numbers in which position and momentum normally take their values, a {\em finite} field with a number of elements equal to the dimension $d$ of the state space.
There exists a field with $d$ elements if and only if $d$ is a power of a prime; so this approach applies directly only to quantum systems, such as a collection of qubits, whose state-space dimension is such a number. The two-element field ${\mathbb F}_2$ is simply the set $\{0,1\}$ with addition and multiplication mod 2, but the field of order $2^n$ with $n$ larger than 1 is different from arithmetic mod $2^n$. For example, ${\mathbb F}_4$ consists of the elements $\{0,1,\omega,\omega+1\}$, in which $0$ and $1$ act as in ${\mathbb F}_2$ and arithmetic involving the abstract symbol $\omega$ is determined by the equation $\omega^2 = \omega + 1$. The discrete phase space for a system of $n$ qubits is a two-dimensional vector space over ${\mathbb F}_{2^n}$; that is, a point in the phase space can be expressed as $(q,p)$, where $q$ and $p$, the discrete analogues of position and momentum, take values in ${\mathbb F}_{2^n}$. In this phase space it makes perfect sense to speak of lines and parallel lines; a line, for example, is the solution to a linear equation. The key idea in constructing a Wigner function is to assign a pure quantum state, represented by a one-dimensional projection operator $Q(\lambda)$, to each line $\lambda$ in phase space. The only requirement imposed on the function $Q(\lambda)$ is that it be ``translationally covariant." This means that if we translate the line $\lambda$ in phase space by adding a fixed vector $(q,p)$ to each point, the associated quantum state changes by a unitary operator $T_{(q,p)}$ associated with $(q,p)$. 
The unitary translation operator $T_{(q,p)}$ is defined to be \begin{equation} T_{(q,p)} = X^{q_1}Z^{p_1} \otimes \cdots \otimes X^{q_n}Z^{p_n}, \end{equation} where $X$ and $Z$ are Pauli operators and $q_i$ and $p_i$, which are elements of ${\mathbb F}_2$, are components of $q$ and $p$ when they are expanded in particular ``bases'' for the field: e.g., $q = q_1b_1 + \cdots + q_nb_n$, where $(b_1,\ldots,b_n)$ is the basis chosen for the coordinate $q$.\footnote{The bases for $q$ and $p$ cannot be chosen independently: each must be proportional to the {\em dual} of the other\cite{GHW}.} One finds that the requirement of translational covariance severely constrains the construction: \begin{enumerate} \item States assigned to parallel lines must be orthogonal. A complete set of parallel lines, or ``striation,'' consists of exactly $d$ lines; so the states associated with a given striation constitute a complete orthogonal basis for the state space. In other words, each striation is associated with a complete orthogonal measurement on the system. \item The bases associated with different striations must be {\em mutually unbiased}. That is, each element of one basis is an equal-magnitude superposition of the elements of any of the other bases. There are exactly $d+1$ striations, so this construction generates a set of $d+1$ mutually unbiased bases. (Such a set is just sufficient for the complete tomographic reconstruction of an unknown quantum state.) \end{enumerate} Despite these constraints, there are many allowed functions $Q(\lambda)$. This implies that there are many possible definitions of the Wigner function for a system of qubits, because once we have chosen a particular assignment of quantum states to phase-space lines, the Wigner function of any quantum state is uniquely fixed by the requirement that the sums over the lines of any striation be equal to the probabilities of the outcomes of the corresponding measurement. 
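For a single qubit ($d=2$) constraint 2 can be checked concretely: the $d+1=3$ bases generated by the construction are the eigenbases of the Pauli operators (as discussed in the one-qubit example below), and every pair of them is mutually unbiased, with all cross-basis overlaps equal to $1/d$. A small NumPy sketch of this check (ours, not part of the original construction):

```python
import numpy as np

# Pauli operators for one qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z   # equals the usual Pauli Y

def eigenbasis(M):
    """Columns are the orthonormal eigenvectors of the Hermitian matrix M."""
    return np.linalg.eigh(M)[1]

bases = [eigenbasis(M) for M in (X, Y, Z)]
d = 2

# Mutual unbiasedness: |<e_i|f_j>|^2 = 1/d for vectors from different bases.
for a in range(3):
    for b in range(3):
        if a != b:
            overlaps = np.abs(bases[a].conj().T @ bases[b]) ** 2
            assert np.allclose(overlaps, 1.0 / d)
```

Each basis corresponds to one of the three striations of the $2\times2$ phase space over ${\mathbb F}_2$.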
\section{ROTATIONALLY INVARIANT STATES} In the finite field, consider a quadratic polynomial $x^2 + ax + b$ that has no roots in ${\mathbb F}_{2^n}$. Then the equation \begin{equation} q^2 + aqp + bp^2 = c, \end{equation} with $c$ taking all nonzero values in ${\mathbb F}_{2^n}$, defines what we will call a set of ``circles'' centered at the origin. Fixing the values of $a$ and $b$---this is somewhat analogous to fixing the scales of the axes in the continuous case---we define a {\em rotation} to be any linear transformation of the phase space that leaves each circle invariant.\footnote{A different notion of rotation has been used in Ref.~\cite{Klimov}.} (We consider only rotations around the origin. A state centered at the origin can always be translated to another point by $T_{(q,p)}$.) For example, in the two-qubit phase space, our circles can be defined by the equation \begin{equation} q^2 + q p + \omega p^2 = c, \end{equation} and an example of a rotation is the transformation $R$ defined by \begin{equation} \mtx{c}{q' \\ p'} = R\mtx{c}{q \\ p} = \mtx{cc}{1 & 1 \\ \omega+1 & \omega}\mtx{c}{q \\ p}. \end{equation} One can check that this particular rotation has the property that if we apply it repeatedly, starting with any nonzero vector, it generates the entire circle on which that vector lies. In this sense $R$ is a {\em primitive} rotation. With every unit-determinant linear transformation $L$ on the phase space, one can associate (though not uniquely) a unitary transformation $U$ on the state space whose action by conjugation on the translation operators $T_{(q,p)}$ mimics the action of $L$ on the corresponding points of phase space \cite{Chau, GHW}.\footnote{The argument in Appendix B.3 of Ref.~\cite{GHW} contains an error: Eqs.~(B24) and (B25) implicitly assume that the chosen field basis is self-dual, which is not in fact the case.
However, the proof can be repaired by starting with a self-dual basis to get those equations, and then changing to the actual basis via the argument of Appendix C.1. That there exists a self-dual basis for ${\mathbb F}_{2^n}$ is proved in Ref.~\cite{selfdual}.} One can show that every rotation has unit determinant and must therefore have an associated unitary transformation. For example, for the rotation $R$ given above, if we expand both $q$ and $p$ in the field basis $(b_1,b_2) = (\omega, \omega+1)$, the following unitary transformation acts in the desired way on the translation operators: \begin{equation} U = \frac{1}{2}\mtx{cccc}{1 & i & i & -1 \\ i & 1 & -1 & i \\ 1 & i & -i& 1 \\ -i & -1 & -1 & i}. \end{equation} Thus just as \begin{equation} R\mtx{c}{1 \\ 0} = \mtx{cc}{1 & 1 \\ \omega+1 & \omega}\mtx{c}{1 \\ 0} = \mtx{c}{1 \\ \omega + 1}, \end{equation} we have that \begin{equation} UT_{(1, 0)}U^\dag = U(X\otimes X)U^\dag = iX\otimes (XZ) \, \propto T_{(1,\omega+1)}. \end{equation} For any number $n$ of qubits, let $R$ be a primitive rotation, and let $U$ be a unitary transformation associated with $R$ in the above sense. (Techniques for finding $U$ can be found in Refs.~\cite{GHW,Chau}.) Then from the action of $U$ on the translation operators, it follows that $U$ acts in a particularly simple way on the mutually unbiased bases associated with the striations of phase space: starting with any one of these bases, repeated applications of $U$ generate all the other bases cyclically. That there always exists a unitary $U$ generating a complete set of mutually unbiased bases for $n$ qubits has been shown by Chau \cite{Chau}. In our present context, we will reach the same conclusion by showing, in the following paragraph, that there always exists a primitive rotation. 
The existence of such a unitary matrix $U$ leads naturally to a simple prescription for choosing the function $Q(\lambda)$: (i) Use the translation operators to assign computational basis states to the vertical lines. (ii) Apply $U$ repeatedly to these states, and $R$ repeatedly to the lines, in order to complete the correspondence. This prescription results in a definition of the Wigner function that is ``rotationally covariant,'' in the sense that when one transforms the density matrix by $U$, the values of the Wigner function are permuted among the phase-space points according to~$R$. How does one find a primitive rotation $R$? First, for any number of qubits, there always exists a {\em primitive} polynomial of the form $x^2+x+b$ \cite{poly}, which one can use to define circles by the equation $q^2 + qp + b p^2 = c$. Then the linear transformation \begin{equation} L = \mtx{cc}{1 & b \\ 1 & 0} \label{L} \end{equation} is guaranteed to cycle through all the nonzero points of phase space \cite{Lidl}, and it always takes circles to other circles. Raising $L$ to the power $d-1$ gives us a unit-determinant transformation that preserves circles and is indeed a primitive rotation. Moreover, one can write $R$ explicitly in terms of $b$: \begin{equation} R = L^{d-1} = \mtx{cc}{1 & 1 \\ b^{-1} & b^{-1}+1}. \end{equation} With $Q(\lambda)$ chosen in the way we have prescribed, the eigenstates of $U$ are our rotationally invariant states. When we apply $U$ to {\em any} state, the Wigner function simply flows along the circles in accordance with the rotation $R$. But an eigenstate of $U$ does not change under this action, so its Wigner function must be constant on each circle. \section{MINIMIZING ENTROPY} Consider again our complete set of $d+1$ mutually unbiased bases, and let $|ij\rangle$ be the $j$th vector in the $i$th basis. 
These vectors together have the following remarkable property: for any pure state $|\psi\rangle$, the probabilities $p_{ij} = |\langle \psi|ij\rangle|^2$ satisfy \cite{2design, 2design2} \begin{equation} \sum_{ij} p_{ij}^2 = 2. \end{equation} Now consider the R\'enyi entropy $H_R = -\log_2\left(\sum_j p_{ij}^2\right)$ of the outcome-probabilities of the $i$th measurement when applied to the state $|\psi\rangle$. This entropy is a measure of our inability to predict the outcome of the measurement. The {\em average} of $H_R$ over all the mutually unbiased measurements can be bounded from below \cite{Ballester}: \begin{equation} \langle H_R \rangle = \left(\frac{1}{d+1}\right)\sum_i\left[-\log_2\left(\sum_j p_{ij}^2\right)\right] \geq -\log_2\left[\left(\frac{1}{d+1}\right)\sum_{ij} p_{ij}^2\right] = \log_2(d+1) - 1, \label{ineq} \end{equation} with equality holding only if the R\'enyi entropy is {\em constant} over all the mutually unbiased measurements.\footnote{The analogous inequality in terms of Shannon entropy was proved in Refs.~\cite{uncertainty, uncertainty2}.} Now, for any of the rotationally invariant states defined in the last section, the R\'enyi entropies associated with the $d+1$ mutually unbiased measurements are indeed equal. By the inequality (\ref{ineq}), such states therefore minimize the average R\'enyi entropy over all these measurements, that is, over all the directions in phase space. \section{EXAMPLES} The one-qubit case is very simple. The three mutually unbiased bases generated in our construction are the eigenstates of the Pauli operators $X$, $Y$, and $Z$. It is not hard to find a unitary transformation that cycles through these three bases. Such a transformation rotates the Bloch sphere by $120^\circ$ around the axis $(x,y,z) = (1,1,1)$. 
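The eigenstates of this $120^\circ$ rotation are the eigenstates of $X+Y+Z$, and for them the bound (\ref{ineq}) and the identity $\sum_{ij} p_{ij}^2 = 2$ can be verified numerically. The sketch below (our own check, using NumPy; $d = 2$) measures such an eigenstate in the three Pauli bases and confirms that each R\'enyi entropy equals the minimum value $\log_2 3 - 1$:

```python
# Numerical check (our own illustration): a one-qubit eigenstate of
# X + Y + Z has Renyi entropy log2(3) - 1 in all three Pauli bases,
# saturating the bound <H_R> >= log2(d+1) - 1 with d = 2.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# |psi>: eigenstate of X + Y + Z, Bloch vector (1,1,1)/sqrt(3)
_, vecs = np.linalg.eigh(X + Y + Z)
psi = vecs[:, 1]                       # eigenvalue +sqrt(3)

total = 0.0
for M in (X, Y, Z):                    # the three mutually unbiased bases
    _, basis = np.linalg.eigh(M)
    probs = np.abs(basis.conj().T @ psi) ** 2
    total += np.sum(probs ** 2)
    H_R = -np.log2(np.sum(probs ** 2))
    assert np.isclose(H_R, np.log2(3) - 1)   # equals log2(d+1) - 1

assert np.isclose(total, 2.0)          # the 2-design identity
```

Each measurement gives outcome probabilities $(1 \pm 1/\sqrt{3})/2$, so $\sum_j p_{ij}^2 = 2/3$ for every basis, and the three contributions sum to $2$ as required.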
The two eigenstates of this unitary transformation, which are the eigenstates of $X+Y+Z$, are rotationally invariant: each of their Wigner functions is constant on the only circle in the $2\times 2$ phase space. And each of these states minimizes the average R\'enyi entropy for the measurements $X$, $Y$, and $Z$. It is interesting to note that one of these two states has a positive Wigner function. Clearly there is nothing intrinsically special about these two states. They are special only in relation to the three measurements $X$, $Y$, and $Z$, which are associated with the three striations of the phase space. But in the context of quantum cryptography, the entropy-minimization property is quite relevant. In the six-state scheme (in which the signal states are the eigenstates of $X$, $Y$, and $Z$), if Eve chooses to eavesdrop by making a complete measurement on certain photons, her best choice is to make a measurement whose outcome-states are entropy-minimizing in our sense: it turns out that such a choice minimizes Eve's own R\'enyi entropy about Alice's bit. An interesting example comes from the 3-qubit case. The relevant field is ${\mathbb F}_8$, which can be constructed from ${\mathbb F}_2$ by introducing an element $b$ that is defined to satisfy the equation $b^3 + b^2 + 1 = 0$. In our $8 \times 8$ discrete phase space, we can define circles via the equation \begin{equation} q^2 + qp + p^2 = c, \label{3circ} \end{equation} where $c$ can take any nonzero value. A primitive rotation preserving these circles is\footnote{Even though Eq.~(\ref{3circ}) is not of the form we used in reaching Eq.~(\ref{L}), in that it is not based on a primitive polynomial, the matrix $R$ is nevertheless a primitive rotation.} \begin{equation} R = \mtx{cc}{b^3 & b^6 \\ b^6 & b^5}. 
\end{equation} One finds that of the eight eigenvectors of any unitary $U$ corresponding to $R$, all of which are rotationally invariant, exactly one has a positive Wigner function for a specific, fixed function $Q(\lambda)$ associated with $U$. This state is also easy to describe physically. For a particular choice of $U$, it is of the form \begin{equation} |\psi\rangle = \sqrt{1/3}|+++\rangle + \sqrt{2/3}|---\rangle, \end{equation} where $|+\rangle$ and $|-\rangle$ are the two eigenstates (with a specific relative phase) of the operator $X+Y+Z$. If we regard $|\psi\rangle$ as analogous to a coherent state at the origin, then the coherent-like states at the 63 other phase-space points can be obtained from $|\psi\rangle$ by applying Pauli rotations to the individual qubits. The Wigner function of each of these states has the value 0.319 at its center, the largest value possible for any three-qubit state. \section{CONCLUSION} We have found that one can make sense of the notion of rotational invariance in a discrete phase space for a system of $n$ qubits. The rotationally invariant states are in this respect analogous to the energy eigenstates of a harmonic oscillator, but the analogy is not perfect. Our rotationally invariant states are all states of minimum uncertainty with respect to the various directions in phase space, whereas except for the ground state, the harmonic oscillator eigenstates do not have this property (the uncertainty, even in our R\'enyi sense, increases with increasing energy). We have considered the further restriction to positive Wigner functions but so far have found examples of such states only for a single qubit and for three qubits. However, for any number of qubits, one can show that at least one of our rotationally invariant states takes a value at its center equal to the maximum value attainable by the Wigner function of {\em any} state. 
Perhaps this latter property, rather than positivity, should be taken as the defining feature of a ``most pointlike" state. \end{document}
\begin{document} \title{The Power of the Weak} \begin{abstract} A landmark result in the study of logics for formal verification is Janin \& Walukiewicz's theorem, stating that the modal $\mu$-calculus ($\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$) is equivalent modulo bisimilarity to standard monadic second-order logic (here abbreviated as $\ensuremath{\mathrm{SMSO}}\xspace$), over the class of labelled transition systems (LTSs for short). Our work proves two results of the same kind, one for the alternation-free fragment of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ ($\ensuremath{\mu_{D}\ML}\xspace$) and one for weak $\ensuremath{\mathrm{MSO}}\xspace$ ($\ensuremath{\mathrm{WMSO}}\xspace$). Whereas it was known that $\ensuremath{\mu_{D}\ML}\xspace$ and $\ensuremath{\mathrm{WMSO}}\xspace$ are equivalent modulo bisimilarity on binary trees, our analysis shows that the picture radically changes once we reason over arbitrary LTSs. The first theorem that we prove is that, over LTSs, $\ensuremath{\mu_{D}\ML}\xspace$ is equivalent modulo bisimilarity to \emph{noetherian} $\ensuremath{\mathrm{MSO}}\xspace$ ($\ensuremath{\mathrm{NMSO}}\xspace$), a newly introduced variant of $\ensuremath{\mathrm{SMSO}}\xspace$ where second-order quantification ranges over ``well-founded'' subsets only. Our second theorem starts from $\ensuremath{\mathrm{WMSO}}\xspace$, and proves it equivalent modulo bisimilarity to a fragment of $\ensuremath{\mu_{D}\ML}\xspace$ defined by a notion of continuity. Analogously to Janin \& Walukiewicz's result, our proofs are automata-theoretic in nature: as another contribution, we introduce classes of parity automata characterising the expressiveness of $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$ (on tree models) and of $\ensuremath{\mu_{C}\ML}\xspace$ and $\ensuremath{\mu_{D}\ML}\xspace$ (for all transition systems). 
\end{abstract} \textit{Keywords} Modal $\mu$-Calculus, Weak Monadic Second Order Logic, Tree Automata, Bisimulation. \section{Introduction} \label{sec:intro} \subsection{Expressiveness modulo bisimilarity} A seminal result in the theory of modal logic is van Benthem's Characterisation Theorem~\cite{vanBenthemPhD}, stating that, over the class of all labelled transition systems (LTSs for short), every bisimulation-invariant first-order formula is equivalent to (the standard translation of) a modal formula: \begin{equation} \label{eq:vB} \ensuremath{\mathrm{ML}}\xspace \equiv \ensuremath{\mathrm{FO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over the class of all LTSs)}. \end{equation} Over the years, a wealth of variants of the Characterisation Theorem have been obtained. For instance, van Benthem's theorem is one of the few preservation results that transfers to the setting of finite models~\cite{rose:moda97}; for a recent, rich source of van Benthem-style characterisation results, see~\cite{DawarO09}. The general pattern of these results takes the shape \begin{equation} \label{eq:vBGeneral} M \equiv L/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over a class of models $\mathsf{C}$)}. \end{equation} Apart from their obvious relevance to model theory, the interest in these results increases if $\mathsf{C}$ consists of transition structures that represent certain computational processes, as in the theory of the formal specification and verification of properties of software. In this context, one often takes the point of view that bisimilar models represent \emph{the same} process. For this reason, only bisimulation-invariant properties are relevant. Seen in this light, \eqref{eq:vBGeneral} is an \emph{expressive completeness} result: all the relevant properties expressible in $L$ (which is generally some rich yardstick formalism), can already be expressed in a (usually computationally more feasible) fragment $M$. 
Of special interest to us is the work~\cite{Jan96}, which extends van Benthem's result to the setting of \emph{second-order} logic, by proving that the bisimulation-invariant fragment of standard monadic second-order logic ($\ensuremath{\mathrm{SMSO}}\xspace$) is the \emph{modal $\mu$-calculus} ($\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$), viz., the extension of basic modal logic with least- and greatest fixpoint operators: \begin{equation} \label{eq:JW} \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace \equiv \ensuremath{\mathrm{SMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over the class of all LTSs)}. \end{equation} The aim of this paper is to study the fine structure of such connections between second-order logics and modal $\mu$-calculi, obtaining variations of the expressive completeness results \eqref{eq:vB} and \eqref{eq:JW}. Our departure point is the following result, from \cite{ArnNiw92}. \begin{equation} \label{eq:weakfinite} \ensuremath{\mu_{D}\ML}\xspace \equiv \ensuremath{\mathrm{WMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over the class of all binary trees)}. \end{equation} It relates two variants of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ and standard $\ensuremath{\mathrm{MSO}}\xspace$ respectively. The first, $\ensuremath{\mu_{D}\ML}\xspace$, is the \emph{alternation-free} fragment of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$: the constraint means that variables bound by least fixpoint operators cannot occur in subformulas ``controlled'' by a greatest fixpoint operator, and vice versa. The fact that our best model-checking algorithms for $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ are exponential in the alternation depth of fixpoint operators \cite{EmersonL86,DBLP:conf/cav/LongBCJM94} makes $\ensuremath{\mu_{D}\ML}\xspace$ of special interest for applications.
The second-order formalism appearing in \eqref{eq:weakfinite} is \emph{weak} monadic second-order logic (see e.g. \cite[Ch. 3]{ALG02}), a variant of standard $\ensuremath{\mathrm{MSO}}\xspace$ where second-order quantification ranges over finite sets only. The interest in $\ensuremath{\mathrm{WMSO}}\xspace$ is also justified by applications in software verification, in which it often makes sense to only consider finite portions of the domain of interest. Equation \eqref{eq:weakfinite} offers only a very narrow comparison of the expressiveness of these two logics. The equivalence is stated on binary trees, whereas \eqref{eq:vB} and \eqref{eq:JW} work at the level of arbitrary LTSs. In fact, it turns out that the picture in the more general setting is far more subtle. First of all, we know that $\ensuremath{\mu_{D}\ML}\xspace \not\equiv \ensuremath{\mathrm{WMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}}$ already on (arbitrary) trees, because the class of well-founded trees, definable by the formula $\mu x.\square x$ of $\ensuremath{\mu_{D}\ML}\xspace$, is not $\ensuremath{\mathrm{WMSO}}\xspace$-definable. Moreover, whereas $\ensuremath{\mathrm{WMSO}}\xspace$ is a fragment of $\ensuremath{\mathrm{SMSO}}\xspace$ on binary trees ---in fact, on all finitely branching trees--- as soon as we allow for infinite branching the two logics turn out to have \emph{incomparable} expressive power, see~\cite{CateF11,Zanasi:Thesis:2012}. For instance, the property ``each node has finitely many successors'' is expressible in $\ensuremath{\mathrm{WMSO}}\xspace$ but not in $\ensuremath{\mathrm{SMSO}}\xspace$. It is thus the main question of this work to clarify the status of $\ensuremath{\mathrm{WMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}}$ and $\ensuremath{\mu_{D}\ML}\xspace$ on arbitrary LTSs. 
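The formula $\mu x.\square x$ used above to define well-foundedness can be evaluated by the standard least-fixpoint iteration. As a minimal illustration (ours, not part of the paper's development), the following computes, on a finite LTS given by a successor map, the set of states satisfying $\mu x.\square x$, i.e.\ those from which every outgoing path is finite:

```python
# Least fixpoint of F(P) = { s : every successor of s lies in P },
# i.e. the states of a finite LTS satisfying mu x. box x
# (the states from which every outgoing path terminates).

def mu_box(states, succ):
    P = set()
    while True:                   # Knaster-Tarski iteration from the bottom
        Q = {s for s in states if succ[s] <= P}
        if Q == P:
            return P
        P = Q

# A small LTS: 1 is a deadlock, 2 loops forever,
# 0 only reaches 1, while 3 can fall into the loop at 2.
succ = {0: {1}, 1: set(), 2: {2}, 3: {1, 2}}
print(sorted(mu_box(succ.keys(), succ)))   # [0, 1]
```

Since $F$ is monotone, iterating from the empty set converges to the least fixpoint; on an infinitely branching LTS the iteration would in general need transfinitely many steps, which is where the finite/noetherian distinction discussed in the text becomes visible.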
We shall prove that, in this more general setting, \eqref{eq:weakfinite} ``splits'' into the following two results, introducing in the picture a new modal logic $\ensuremath{\mu_{C}\ML}\xspace$ and a new second-order logic $\ensuremath{\mathrm{NMSO}}\xspace$. \begin{theorem}~ \label{t:11} \begin{eqnarray} \ensuremath{\mu_{C}\ML}\xspace \equiv \ensuremath{\mathrm{WMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over the class of all LTSs)}. \label{eq:mucML=wmso} \\ \ensuremath{\mu_{D}\ML}\xspace \equiv \ensuremath{\mathrm{NMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over the class of all LTSs)}. \label{eq:afmc=nmso} \end{eqnarray} \end{theorem} For the first result \eqref{eq:mucML=wmso}, our strategy is to start from $\ensuremath{\mathrm{WMSO}}\xspace$ and seek a suitable modal fixpoint logic characterising its bisimulation-invariant fragment. Second-order quantification $\exists p.\varphi$ in $\ensuremath{\mathrm{WMSO}}\xspace$ requires $p$ to be interpreted over a finite subset of an LTS. We identify a notion of \emph{continuity} as the modal counterpart of this constraint, and call the resulting logic $\ensuremath{\mu_{C}\ML}\xspace$, the \emph{continuous} $\mu$-calculus. This is a fragment of $\ensuremath{\mu_{D}\ML}\xspace$, defined by the same grammar as the full $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$, \begin{equation*} \varphi\ ::= q \mid \neg\varphi \mid \varphi \lor \varphi \mid \Diamond \varphi \mid \mu p.\varphi' \end{equation*} with the difference that $\varphi'$ must be not only positive in $p$, but also continuous in $p$.
This terminology refers to the fact that $\varphi'$ is interpreted by a function that is continuous with respect to the Scott topology; as we shall see, $p$-continuity can be given a \emph{syntactic} characterisation, as a certain fragment $\cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{p}$ of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ ---see also~\cite{Fontaine08,FV12} for more details. For our second result \eqref{eq:afmc=nmso}, we move in the opposite direction. That is, we look for a natural second-order logic of which $\ensuremath{\mu_{D}\ML}\xspace$ is the bisimulation-invariant fragment. Symmetrically to the case \eqref{eq:mucML=wmso} of $\ensuremath{\mathrm{WMSO}}\xspace$ and continuity, a crucial aspect is to identify which constraint on second-order quantification corresponds to the constraint on fixpoint alternation expressed by $\ensuremath{\mu_{D}\ML}\xspace$. Our analysis stems from the observation that, when a formula $\mu p.\varphi$ of $\ensuremath{\mu_{D}\ML}\xspace$ is satisfied in a tree model $\mathstr{T}$, the interpretation of $p$ must be a subset of a \emph{well-founded} subtree of $\mathstr{T}$, because alternation-freedom prevents $p$ from occurring in a $\nu$-subformula of $\varphi$. We introduce the concept of \emph{noetherian} subset as a generalisation of this property from trees to arbitrary LTSs: intuitively, a subset of an LTS $\mathstr{S}$ is called noetherian if it is a subset of a bundle of paths that does not contain any infinite ascending chain. The logic $\ensuremath{\mathrm{NMSO}}\xspace$ appearing in \eqref{eq:afmc=nmso}, which we call \emph{noetherian} second-order logic, is the variant of $\ensuremath{\mathrm{MSO}}\xspace$ restricting second-order quantification to noetherian subsets.
A unifying perspective over these results can be given through the lens of K\"onig's lemma, which says that a subset of a tree $\mathstr{T}$ is finite precisely when it is included in a subtree of $\mathstr{T}$ which is both finitely branching and well-founded. In other words, finiteness on trees has two components, a \emph{horizontal} (finite branching) and a \emph{vertical} (well-foundedness) dimension. The bound imposed by $\ensuremath{\mathrm{NMSO}}\xspace$-quantification acts only on the \emph{vertical} dimension, whereas $\ensuremath{\mathrm{WMSO}}\xspace$-quantification acts on both. It then comes as no surprise that \eqref{eq:mucML=wmso}-\eqref{eq:afmc=nmso} collapse to \eqref{eq:weakfinite} on binary trees. The restriction to finitely branching models nullifies the difference between noetherian and finite, equating $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$ (and thus also $\ensuremath{\mu_{D}\ML}\xspace$ and $\ensuremath{\mu_{C}\ML}\xspace$). Another interesting observation concerns the relative expressive power of $\ensuremath{\mathrm{WMSO}}\xspace$ with respect to standard $\ensuremath{\mathrm{MSO}}\xspace$. As mentioned above, $\ensuremath{\mathrm{WMSO}}\xspace$ is \emph{not} strictly weaker than $\ensuremath{\mathrm{SMSO}}\xspace$ on arbitrary LTSs. Nonetheless, putting together \eqref{eq:JW} and \eqref{eq:mucML=wmso} reveals that $\ensuremath{\mathrm{WMSO}}\xspace$ collapses within the boundaries of $\ensuremath{\mathrm{SMSO}}\xspace$-expressiveness when it comes to bisimulation-invariant formulas, because $\ensuremath{\mu_{C}\ML}\xspace$ is strictly weaker than $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$. In fact, modulo bisimilarity, $\ensuremath{\mathrm{WMSO}}\xspace$ turns out to be even weaker than $\ensuremath{\mathrm{NMSO}}\xspace$, as $\ensuremath{\mu_{C}\ML}\xspace$ is also a fragment of $\ensuremath{\mu_{D}\ML}\xspace$.
In a sense, this new landscape of results tells us that the feature distinguishing $\ensuremath{\mathrm{WMSO}}\xspace$ from $\ensuremath{\mathrm{SMSO}}\xspace$/$\ensuremath{\mathrm{NMSO}}\xspace$, \emph{viz.} the ability to express cardinality properties of the horizontal dimension of models, disappears once we focus on the bisimulation-invariant part, and thus is not computationally relevant. \subsection{Automata-theoretic characterisations} Janin \& Walukiewicz's proof of \eqref{eq:JW} passes through a characterisation of the two logics involved in terms of \emph{parity automata}. In a nutshell, a parity automaton $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ processes LTSs as inputs, according to a transition function $\Delta$ defined in terms of a so-called \emph{one-step logic} $L_{1}(A)$, where the states $A$ of $\mathstr{A}$ may occur as unary predicates. The map $\Omega \colon A \to \mathbb{N}$ assigns to each state a \emph{priority}; if the least priority value occurring infinitely often during the computation is even, the input is accepted. Both $\ensuremath{\mathrm{SMSO}}\xspace$ and $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ are characterised by classes of parity automata: what changes is just the one-step logic, which is, respectively, first-order logic with ($\ensuremath{Fo_1}\xspacee$) and without ($\ensuremath{Fo_1}\xspace$) equality. \begin{eqnarray} \ensuremath{\mathrm{SMSO}}\xspace & \equiv & \mathit{Aut}(\ensuremath{Fo_1}\xspacee) \qquad \text{ (over the class of all trees)}, \label{eq:AutCharMSO} \\ \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace & \equiv & \mathit{Aut}(\ensuremath{Fo_1}\xspace) \qquad \text{ (over the class of all LTSs)}. \label{eq:AutCharMuML} \end{eqnarray} This kind of automata-theoretic characterisation, which we believe is of independent interest, also underpins our two correspondence results.
As the second main contribution of this paper, we introduce new classes of parity automata that exactly capture the expressive power of the second-order languages $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$ (over tree models), and of the modal languages $\ensuremath{\mu_{C}\ML}\xspace$ and $\ensuremath{\mu_{D}\ML}\xspace$ (over arbitrary models). Let us start from the simpler case, that is $\ensuremath{\mathrm{NMSO}}\xspace$ and $\ensuremath{\mu_{D}\ML}\xspace$. As mentioned above, the leading intuition for these logics is that they are constrained in what can be expressed about the \emph{vertical} dimension of models. In automata-theoretic terms, we translate this constraint into the requirement that runs of an automaton can see at most one parity infinitely often: this yields the class of so-called \emph{weak} parity automata \cite{MullerSaoudiSchupp92}, which we write $\mathit{Aut}W(L_{1})$ for a given one-step logic $L_{1}$.\footnote{ Interestingly, \cite{MullerSaoudiSchupp92} introduces the class $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$ in order to show that it characterises $\ensuremath{\mathrm{WMSO}}\xspace$ on binary trees, whence the name of \emph{weak} automata. As discussed above, this correspondence is an ``optical illusion'', due to the restricted class of models that are considered, on which $\ensuremath{\mathrm{NMSO}}\xspace = \ensuremath{\mathrm{WMSO}}\xspace$. } We shall show: \begin{theorem} \begin{eqnarray} \ensuremath{\mathrm{NMSO}}\xspace & \equiv & \mathit{Aut}W(\ensuremath{Fo_1}\xspacee) \qquad \text{ (over the class of all trees)}, \label{eq:AutCharNmso} \\ \ensuremath{\mu_{D}\ML}\xspace & \equiv & \mathit{Aut}W(\ensuremath{Fo_1}\xspace)\label{eq:AutCharMudML} \qquad \text{ (over the class of all LTSs)}. \end{eqnarray} \end{theorem} It is worth zooming in on our main point of departure from Janin \& Walukiewicz's proofs of \eqref{eq:AutCharMSO}-\eqref{eq:AutCharMuML}.
In the characterisation \eqref{eq:AutCharMSO}, due to \cite{Walukiewicz96}, a key step is to show that each automaton in $\mathit{Aut}(\ensuremath{Fo_1}\xspacee)$ can be simulated by an equivalent \emph{non-deterministic} automaton of the same class. This is instrumental in the projection construction, allowing one to build an automaton equivalent to $\exists p.\varphi \in \ensuremath{\mathrm{SMSO}}\xspace$ starting from an automaton for $\varphi$. Our counterpart \eqref{eq:AutCharNmso} is also based on a simulation theorem. However, we cannot proceed in the same manner, as the class $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$, unlike $\mathit{Aut}(\ensuremath{Fo_1}\xspacee)$, is \emph{not} closed under non-deterministic simulation. Thus we devise a different construction, which, starting from a weak automaton $\mathstr{A}$, creates an equivalent automaton $\mathstr{A}'$ which acts non-deterministically only on a \emph{well-founded} portion of each accepted tree. It turns out that the class $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$ is closed under this variation of the simulation theorem; moreover, the property of $\mathstr{A}'$ is precisely what is needed to make a projection construction that mirrors $\ensuremath{\mathrm{NMSO}}\xspace$-quantification. We now consider the automata-theoretic characterisation of $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mu_{C}\ML}\xspace$. Whereas in \eqref{eq:AutCharNmso}-\eqref{eq:AutCharMudML} the focus was on the vertical dimension of a given model, the constraint that we now need to translate into automata-theoretic terms concerns both the \emph{vertical} and the \emph{horizontal} dimension. Our revision of \eqref{eq:AutCharMSO}-\eqref{eq:AutCharMuML} thus moves on two different axes. The constraint on the vertical dimension is handled analogously to the cases \eqref{eq:AutCharNmso}-\eqref{eq:AutCharMudML}, by switching from standard to \emph{weak} parity automata. The constraint on the horizontal dimension requires more work.
The first problem lies in finding the right one-step logic, which should be able to express cardinality properties as $\ensuremath{\mathrm{WMSO}}\xspace$ is able to do. An obvious candidate would be weak monadic second-order logic itself, or more precisely, its variant $\ensuremath{\mathrm{WMSO}_1}\xspace$ over the signature of unary predicates (corresponding to the automata states). A very helpful observation from~\cite{vaananen77} is that we can actually work with an equivalent formalism which is better tailored to our aims. Indeed, $\ensuremath{\mathrm{WMSO}_1}\xspace \equiv \ensuremath{Fo_1}\xspaceei$, where $\ensuremath{Fo_1}\xspaceei$ is the extension of $\ensuremath{Fo_1}\xspacee$ with the generalised quantifier $\ensuremath{\exists^\infty}\xspace$, with $\ensuremath{\exists^\infty}\xspace x. \varphi$ stating the existence of \emph{infinitely} many objects satisfying $\varphi$. At this stage, our candidate automata class for $\ensuremath{\mathrm{WMSO}}\xspace$ could be $\mathit{Aut}W(\ensuremath{Fo_1}\xspaceei)$. However, this fails because $\ensuremath{Fo_1}\xspaceei$ bears too much expressive power: since it extends $\ensuremath{Fo_1}\xspacee$, we would find that, over tree models, $\mathit{Aut}W(\ensuremath{Fo_1}\xspaceei)$ extends $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$, whereas we already saw that $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee) \equiv \ensuremath{\mathrm{NMSO}}\xspace$ is incomparable to $\ensuremath{\mathrm{WMSO}}\xspace$. It is here that we crucially involve the notion of \emph{continuity}. 
For a class $\mathit{Aut}W(L_{1})$ of weak parity automata, we call \emph{continuous-weak} parity automata, forming a class $\mathit{Aut}WC(L_{1})$, those satisfying the following additional constraint: \begin{itemize} \item for every state $a$ with even priority $\Omega(a)$, every one-step formula $\varphi \in L_{1}(A)$ defining the transitions from $a$ has to be continuous in all states $a'$ lying in a cycle with $a$; dually, if $\Omega(a)$ is odd, every such $\varphi$ has to be $a'$-cocontinuous.\footnote{ It is important to stress that, even though continuity is a semantic condition, we have a \emph{syntactic} characterisation of $\ensuremath{Fo_1}\xspaceei$-formulas satisfying it (see \cite{carr:mode18}), meaning that $\mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)$ is definable independently of the structures that it takes as input. } \end{itemize} We can now formulate our characterisation result as follows. \begin{theorem} \begin{eqnarray} \ensuremath{\mathrm{WMSO}}\xspace & \equiv & \mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei) \qquad \text{ (over the class of trees)}, \label{eq:AutCharWmso} \\ \ensuremath{\mu_{C}\ML}\xspace & \equiv & \mathit{Aut}WC(\ensuremath{Fo_1}\xspace)\label{eq:AutCharMucML} \qquad \text{ (over the class of all LTSs)}. \end{eqnarray} \end{theorem} Thus automata for $\ensuremath{\mathrm{WMSO}}\xspace$ deviate from $\ensuremath{\mathrm{SMSO}}\xspace$-automata $\mathit{Aut}(\ensuremath{Fo_1}\xspacee)$ on two different levels: at the global level of the automaton run, because of the weakness and continuity constraints, and at the level of the one-step logic defining a single transition step.
Another interesting point stems from pairing \eqref{eq:AutCharWmso}-\eqref{eq:AutCharMucML} with the expressive completeness result \eqref{eq:mucML=wmso}: although automata for $\ensuremath{\mathrm{WMSO}}\xspace$ are based on a more powerful one-step logic ($\ensuremath{Fo_1}\xspaceei$) than those for $\ensuremath{\mu_{C}\ML}\xspace$ ($\ensuremath{Fo_1}\xspace$), modulo bisimilarity they characterise the same expressiveness. This connects back to our previous observation, that the ability of $\ensuremath{\mathrm{WMSO}}\xspace$ to express cardinality properties on the horizontal dimension vanishes in a bisimulation-invariant context. \subsection{Outline} It is useful to conclude this introduction with a roadmap of how the various results are achieved. In a nutshell, the two expressive completeness theorems \eqref{eq:mucML=wmso} and \eqref{eq:afmc=nmso} will be based respectively on the following two chains of equivalences: \begin{eqnarray} \ensuremath{\mu_{D}\ML}\xspace \equiv \mu_{D}\ensuremath{Fo_1}\xspace \equiv \mathit{Aut}W(\ensuremath{Fo_1}\xspace) \equiv \mathit{Aut}W(\ensuremath{Fo_1}\xspacee)/{\mathrel{\underline{\leftrightarrow}}} \equiv \ensuremath{\mathrm{NMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \ \text{(over LTSs)}.\label{eq:chain-afmc=nmso} \\ \ensuremath{\mu_{C}\ML}\xspace \equiv \mu_{C}\ensuremath{Fo_1}\xspace \equiv \mathit{Aut}WC(\ensuremath{Fo_1}\xspace) \equiv \mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)/{\mathrel{\underline{\leftrightarrow}}} \equiv \ensuremath{\mathrm{WMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \ \text{ (over LTSs)}. \label{eq:chain-mucML=wmso} \end{eqnarray} After giving a precise definition of the necessary preliminaries in Section \ref{sec:prel}, we proceed as follows. First, Section \ref{sec:parityaut} introduces parity automata parametrised over a one-step language $L_{1}$, both in the standard ($\mathit{Aut}(L_{1})$), weak ($\mathit{Aut}W(L_{1})$) and continuous-weak ($\mathit{Aut}WC(L_{1})$) form. 
With Theorems \ref{t:autofor} and \ref{t:fortoaut}, we show that \begin{equation}\label{sec:outlineFix=Aut} \mu_{D}L_{1}\equiv \mathit{Aut}W(L_{1}) \qquad \qquad \mu_{C}L_{1} \equiv \mathit{Aut}WC(L_{1}) \qquad\text{ (over LTSs)} \end{equation} where $\mu_{D}L_{1}$ and $\mu_{C}L_{1}$ are extensions of $L_{1}$ with fixpoint operators subject to a ``noetherianness'' and a ``continuity'' constraint respectively. Instantiating \eqref{sec:outlineFix=Aut} yields the second equivalence in both \eqref{eq:chain-afmc=nmso} and \eqref{eq:chain-mucML=wmso}: \begin{equation*} \mu_{D}\ensuremath{Fo_1}\xspace \equiv \mathit{Aut}W(\ensuremath{Fo_1}\xspace) \qquad \qquad \mu_{C}\ensuremath{Fo_1}\xspace \equiv \mathit{Aut}WC(\ensuremath{Fo_1}\xspace) \qquad \text{ (over LTSs)}. \end{equation*} Next, in Section \ref{sec:autwmso}, Theorem~\ref{t:wmsoauto}, we show how to construct from a $\ensuremath{\mathrm{WMSO}}\xspace$-formula an equivalent automaton of the class $\mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)$. In Section \ref{sec:autnmso}, Theorem~\ref{t:nmsoauto}, we show the analogous characterisation for $\ensuremath{\mathrm{NMSO}}\xspace$ and $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$. These two sections yield part of the last equivalence in \eqref{eq:chain-mucML=wmso} and in \eqref{eq:chain-afmc=nmso} respectively. \begin{equation} \label{eq:outlineForToAut} \mathit{Aut}W(\ensuremath{Fo_1}\xspacee) \geq \ensuremath{\mathrm{NMSO}}\xspace \qquad \qquad {\mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)} \geq \ensuremath{\mathrm{WMSO}}\xspace \qquad \text{ (over trees)}. \end{equation} Notice that, in contrast to all the other pieces of the proof, \eqref{eq:outlineForToAut} only holds on trees, because the projection construction for automata relies on the input LTSs being tree-shaped. Section \ref{sec:fixpointToSO} yields the remaining bit of the automata characterisations.
Theorem~\ref{t:mfl2mso} shows \[ \mu_{D}\ensuremath{\mathrm{FOE}_1}\xspace \leq \ensuremath{\mathrm{NMSO}}\xspace \qquad \qquad \mu_{C}\ensuremath{\mathrm{FOE}_1^\infty}\xspace \leq \ensuremath{\mathrm{WMSO}}\xspace \qquad \text{ (over LTSs)},\] which, paired with \eqref{sec:outlineFix=Aut}, yields \begin{equation*} \mathit{Aut}W(\ensuremath{\mathrm{FOE}_1}\xspace) \equiv \mu_{D}\ensuremath{\mathrm{FOE}_1}\xspace \leq \ensuremath{\mathrm{NMSO}}\xspace \qquad \mathit{Aut}WC(\ensuremath{\mathrm{FOE}_1^\infty}\xspace) \equiv \mu_{C}\ensuremath{\mathrm{FOE}_1^\infty}\xspace \leq \ensuremath{\mathrm{WMSO}}\xspace \ \text{ (over LTSs)}. \end{equation*} Putting the last equation and \eqref{eq:outlineForToAut} together we have our automata characterisations \begin{equation*} \mathit{Aut}W(\ensuremath{\mathrm{FOE}_1}\xspace) \equiv \ensuremath{\mathrm{NMSO}}\xspace \qquad \qquad {\mathit{Aut}WC(\ensuremath{\mathrm{FOE}_1^\infty}\xspace)} \equiv \ensuremath{\mathrm{WMSO}}\xspace \qquad \text{ (over trees)}, \end{equation*} which also yield the rightmost equivalence in \eqref{eq:chain-mucML=wmso} and in \eqref{eq:chain-afmc=nmso}, because any LTS is bisimilar to its tree unravelling: \begin{equation*} \mathit{Aut}W(\ensuremath{\mathrm{FOE}_1}\xspace)/{\mathrel{\underline{\leftrightarrow}}} \equiv \ensuremath{\mathrm{NMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \qquad {\mathit{Aut}WC(\ensuremath{\mathrm{FOE}_1^\infty}\xspace)}/{\mathrel{\underline{\leftrightarrow}}} \equiv \ensuremath{\mathrm{WMSO}}\xspace/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over LTSs)}. \end{equation*} Finally, Section \ref{sec:expresso} is split into two parts. First, Theorem \ref{t:mlaut} extends the results in Section \ref{sec:parityaut} to complete the following chains of equivalences, yielding the first block in \eqref{eq:chain-afmc=nmso} and in \eqref{eq:chain-mucML=wmso}.
\[ \ensuremath{\mu_{D}\ML}\xspace \equiv \mu_{D}\ensuremath{\mathrm{FO}_1}\xspace \equiv \mathit{Aut}W(\ensuremath{\mathrm{FO}_1}\xspace) \qquad \qquad \ensuremath{\mu_{C}\ML}\xspace \equiv \mu_{C}\ensuremath{\mathrm{FO}_1}\xspace \equiv \mathit{Aut}WC(\ensuremath{\mathrm{FO}_1}\xspace) \qquad \text{ (over LTSs)}. \] As a final step, Subsection \ref{ss:bisinv} fills the last gap in \eqref{eq:chain-afmc=nmso}-\eqref{eq:chain-mucML=wmso} by showing \[ \mathit{Aut}W(\ensuremath{\mathrm{FO}_1}\xspace) \equiv \mathit{Aut}W(\ensuremath{\mathrm{FOE}_1}\xspace)/{\mathrel{\underline{\leftrightarrow}}} \qquad \qquad \mathit{Aut}WC(\ensuremath{\mathrm{FO}_1}\xspace) \equiv \mathit{Aut}WC(\ensuremath{\mathrm{FOE}_1^\infty}\xspace)/{\mathrel{\underline{\leftrightarrow}}} \qquad \text{ (over LTSs)}. \] \subsection{Conference versions and companion paper} This journal article is based on two conference papers \cite{DBLP:conf/lics/FacchiniVZ13,DBLP:conf/csl/CarreiroFVZ14}, which in turn were based on a Master thesis \cite{Zanasi:Thesis:2012} and a PhD dissertation \cite{carr:frag2015}. Each of the two conference papers focussed on a single expressive completeness theorem between \eqref{eq:mucML=wmso} and \eqref{eq:afmc=nmso}: presenting both results in a mostly uniform way has required an extensive overhaul, involving the development of new pieces of theory, such as, in particular, the entirety of Sections~\ref{sec:parity-to-mc}, \ref{sec:mc-to-parity} and \ref{sec:fixpointToSO}. All missing proofs of the conference papers are included, and the simulation theorem for $\ensuremath{\mathrm{NMSO}}\xspace$- and $\ensuremath{\mathrm{WMSO}}\xspace$-automata is simplified, as it is now based on macro-states that are sets instead of relations. Moreover, we amended two technical issues with the characterisation $\ensuremath{\mu_{D}\ML}\xspace \equiv \ensuremath{\mathrm{NMSO}}\xspace /{\mathrel{\underline{\leftrightarrow}}}$ presented in \cite{DBLP:conf/lics/FacchiniVZ13}.
First, the definition of noetherian subset in $\ensuremath{\mathrm{NMSO}}\xspace$ has been made more precise, in order to prevent potential misunderstandings arising from the formulation in \cite{DBLP:conf/lics/FacchiniVZ13}. Second, as stated in \cite{DBLP:conf/lics/FacchiniVZ13}, the expressive completeness result was only valid on trees. In this version, we extend it to arbitrary LTSs, thanks to the new material in Section \ref{sec:fixpointToSO}. Finally, our approach depends on model-theoretic results on the three main one-step logics featuring in this paper: $\ensuremath{\mathrm{FO}_1}\xspace$, $\ensuremath{\mathrm{FOE}_1}\xspace$ and $\ensuremath{\mathrm{FOE}_1^\infty}\xspace$. We believe these results to be of independent interest, and in order to save some space here, we restrict our discussion of the model theory of these monadic predicate logics to a summary. Full details can be found in the companion paper \cite{carr:mode18}. \section{Preliminaries} \label{sec:prel} We assume the reader to be familiar with the syntax and (game-theoretic) semantics of the modal $\mu$-calculus and with the automata-theoretic perspective on this logic. For background reading we refer to~\cite{ALG02,Ven08}; the purpose of this section is to fix some notation and terminology. \subsection{Transition systems and trees} \label{ssec:prelim_trees} Throughout this article we fix a set $\mathsf{Prop}$ of elements that will be called \emph{proposition letters} and denoted with small Latin letters $p, q, \ldots$ . We will often focus on a finite subset $\mathsf{P} \subseteq_{\omega} \mathsf{Prop}$, and denote with $C$ the set $\wp (\mathsf{P})$ of \emph{labels} on $\mathsf{P}$; it will be convenient to think of $C$ as an \emph{alphabet}.
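To illustrate, if $\mathsf{P} = \{p,q\}$ then the alphabet is
\[ C = \wp(\mathsf{P}) = \bigl\{ \varnothing, \{p\}, \{q\}, \{p,q\} \bigr\}, \]
and the label of a node records exactly which of the letters $p$ and $q$ hold at that node.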
Given a binary relation $R \subseteq X \times Y$, for any element $x \in X$, we indicate with $R[x]$ the set $\{ y \in Y \mid (x,y) \in R \}$, while $R^+$ and $R^{*}$ are defined respectively as the transitive closure of~$R$ and the reflexive and transitive closure of~$R$. The set $\mathsf{Ran}(R)$ is defined as $\bigcup_{x\in X}R[x]$. A \emph{$\mathsf{P}$-labeled transition system} (LTS) is a tuple $\mathstr{S} = \tup{T,R,\kappa,s_I}$ where $T$ is the universe or domain of $\mathstr{S}$, $R\subseteq T^2$ is the accessibility relation, $\kappa:T\to\wp(\mathsf{P})$ is a colouring (or marking), and $s_I \in T$ is a distinguished node. We call $\kappa(s)$ the colour, or type, of node $s \in T$. Observe that the colouring ${\kappa:T\to\wp(\mathsf{P})}$ can be seen as a valuation $\tscolors^\natural:\mathsf{P}\to\wp (T)$ given by $\tscolors^\natural(p) \mathrel{:=} \{s \in T \mid p\in \kappa(s)\}$. A \emph{$\mathsf{P}$-tree} is a $\mathsf{P}$-labeled LTS in which every node can be reached from $s_I$, and every node except $s_I$ has a unique predecessor; the distinguished node $s_I$ is called the \emph{root} of $\mathstr{S}$. Each node $s \in T$ uniquely defines a subtree of $\mathstr{S}$ with carrier $R^{*}[s]$ and root $s$. We denote this subtree by ${\mathstr{S}.s}$. The \emph{tree unravelling} of an LTS $\mathstr{S}$ is given by $\unravel{\mathstr{S}} \mathrel{:=} \tup{T_P,R_P,\kappa',s_I}$ where $T_P$ is the set of finite paths in $\mathstr{S}$ stemming from $s_I$, $R_P(t,t')$ iff $t'$ is a one-step extension of $t$, and the colour of a path $t\in T_P$ is given by the colour of its last node in $T$. The \emph{$\omega$-unravelling} $\mathstr{S}^{\omega}$ of $\mathstr{S}$ is an unravelling which has $\omega$-many copies of each node different from the root.
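To illustrate the unravelling construction, consider the one-point LTS $\mathstr{S} = \tup{\{s\}, \{(s,s)\}, \kappa, s}$ consisting of a single reflexive node. Its finite paths from $s$ are $s, ss, sss, \ldots$, so the tree unravelling is the infinite path
\[ \unravel{\mathstr{S}} \;=\; s \longrightarrow ss \longrightarrow sss \longrightarrow \cdots \]
in which every node has exactly one successor and carries the colour $\kappa(s)$. Although $\mathstr{S}$ is finite, $\unravel{\mathstr{S}}$ is an infinite tree; as recorded in Fact~\ref{prop:tree_unrav} below, the two systems are nonetheless bisimilar.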
A \emph{$p$-variant} of a transition system $\mathstr{S} = \tup{T,R,\kappa,s_I}$ is a $\mathsf{P}\cup\{p\}$-transition system $\tup{T,R,\kappa',s_I}$ such that $\kappa'(s)\setminus\{p\} = \kappa(s) \setminus \{p \}$ for all $s \in T$. Given a set $S \subseteq T$, we let $\mathstr{S}[p\mapsto S]$ denote the $p$-variant where $p \in \kappa'(s)$ iff $s \in S$. Given a formula $\varphi \in L$ of some logic $L$, we use $\ensuremath{\mathsf{Mod}}_{L}(\varphi) = \{\mathstr{S} \mid \mathstr{S} \models \varphi\}$ to denote the class of transition systems that make $\varphi$ true. The subscript $L$ will be omitted when $L$ is clear from context. A class $\mathsf{C}$ of transition systems is said to be \emph{$L$-definable} if there is a formula $\varphi \in L$ such that $\ensuremath{\mathsf{Mod}}_{L}(\varphi) = \mathsf{C}$. We use the notation $\varphi \equiv \psi$ to mean that $\ensuremath{\mathsf{Mod}}_{L}(\varphi) = \ensuremath{\mathsf{Mod}}_{L}(\psi)$, and given two logics $L, L'$ we use $L \equiv L'$ when the $L$-definable and $L'$-definable classes of models coincide. \subsection{Games} We introduce some terminology and background on infinite games. All the games that we consider involve two players called \emph{\'Eloise} ($\exists$) and \emph{Abelard} ($\forall$). In some contexts we refer to a player $\Pi$ to specify a generic player in $\{\exists,\forall\}$. Given a set $A$, by $A^*$ and $A^\omega$ we denote respectively the set of words (finite sequences) and streams (or infinite words) over $A$.
A \emph{board game} $\mathcal{G}$ is a tuple $(G_{\exists},G_{\forall},E,\text{\sl Win})$, where $G_{\exists}$ and $G_{\forall}$ are disjoint sets whose union $G=G_{\exists}\cup G_{\forall}$ is called the \emph{board} of $\mathcal{G}$, $E\subseteq G \times G$ is a binary relation encoding the \emph{admissible moves}, and $\text{\sl Win} \subseteq G^{\omega}$ is a \emph{winning condition}. An \emph{initialized board game} $\mathcal{G}@u_I$ is a tuple $(G_{\exists},G_{\forall},u_I, E,\text{\sl Win})$ where $u_I \in G$ is the \emph{initial position} of the game. In a \emph{parity game}, the set $\text{\sl Win}$ is given by a \emph{parity function}, that is, a map $\Omega: G \to \omega$ of finite range, in the sense that a sequence $(a_{i})_{i<\omega}$ belongs to $\text{\sl Win}$ iff the maximal value $n$ such that $n = \Omega(a_{i})$ for infinitely many $i$ is even. Given a board game $\mathcal{G}$, a \emph{match} of $\mathcal{G}$ is simply a path through the graph $(G,E)$; that is, a sequence $\pi = (u_i)_{i< \alpha}$ of elements of $G$, where $\alpha$ is either $\omega$ or a natural number, and $(u_i,u_{i+1}) \in E$ for all $i$ with $i+1 < \alpha$. A match of $\mathcal{G}@u_{I}$ is supposed to start at $u_{I}$. Given a finite match $\pi = (u_i)_{i< k}$ for some $k<\omega$, we call $\mathit{last}(\pi) \mathrel{:=} u_{k-1}$ the \emph{last position} of the match; the player $\Pi$ such that $\mathit{last}(\pi) \in G_{\Pi}$ is supposed to move at this position, and if $E[\mathit{last}(\pi)] = \emptyset$, we say that $\Pi$ \emph{got stuck} in $\pi$. A match $\pi$ is called \emph{total} if it is either finite, with one of the two players getting stuck, or infinite. Matches that are not total are called \emph{partial}.
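As a concrete illustration of the parity condition, suppose that the positions visited in an infinite match carry the priority stream
\[ \Omega(u_0)\,\Omega(u_1)\,\Omega(u_2)\cdots \;=\; 3\,1\,2\,1\,2\,1\,2\cdots, \]
where $1$ and $2$ occur infinitely often but $3$ occurs only once. The maximal priority occurring infinitely often is $2$, which is even, so the match belongs to $\text{\sl Win}$ and is won by $\exists$.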
Any total match $\pi$ is \emph{won} by one of the players: if $\pi$ is finite, then it is won by the opponent of the player who gets stuck. Otherwise, if $\pi$ is infinite, the winner is $\exists$ if $\pi \in \text{\sl Win}$, and $\forall$ if $\pi \not\in \text{\sl Win}$. Given a board game $\mathcal{G}$ and a player $\Pi$, let $\pmatches{G}{\Pi}$ denote the set of partial matches of $\mathcal{G}$ whose last position belongs to player $\Pi$. A \emph{strategy for $\Pi$} is a function $f:\pmatches{G}{\Pi}\to G$. A match $\pi = (u_i)_{i< \alpha}$ of $\mathcal{G}$ is \emph{$f$-guided} if for each $i < \alpha$ such that $u_i \in G_{\Pi}$ we have that $u_{i+1} = f(u_0,\dots,u_i)$. Let $u \in G$ and let $f$ be a strategy for $\Pi$. We say that $f$ is a \emph{surviving strategy} for $\Pi$ in $\mathcal{G}@u$ if for each $f$-guided partial match $\pi$ of $\mathcal{G}@u$, if $\mathit{last}(\pi)$ is in $G_{\Pi}$ then $f(\pi)$ is legitimate, that is, $(\mathit{last}(\pi), f(\pi)) \in E$. We say that $f$ is a \emph{winning strategy} for $\Pi$ in $\mathcal{G}@u$ if, additionally, $\Pi$ wins each $f$-guided total match of $\mathcal{G}@u$. If $\Pi$ has a winning strategy for $\mathcal{G}@u$ then $u$ is called a \emph{winning position} for $\Pi$ in $\mathcal{G}$. The set of positions of $\mathcal{G}$ that are winning for $\Pi$ is denoted by $\text{\sl Win}_{\Pi}(\mathcal{G})$. A strategy $f$ is called \emph{positional} if $f(\pi) = f(\pi')$ for each $\pi,\pi'\in \mathsf{Dom}(f)$ with $\mathit{last}(\pi) = \mathit{last}(\pi')$. A board game $\mathcal{G}$ with board $G$ is \emph{determined} if $G = \text{\sl Win}_{\exists}(\mathcal{G}) \cup \text{\sl Win}_{\forall}(\mathcal{G})$, that is, each $u \in G$ is a winning position for one of the two players. The next result states that parity games are positionally determined.
\begin{fact}[\cite{EmersonJ91,Mostowski91Games}] \label{THM_posDet_ParityGames} For each parity game $\mathcal{G}$, there are positional strategies $f_{\exists}$ and $f_{\forall}$ respectively for player $\exists$ and $\forall$, such that for every position $u \in G$ there is a player $\Pi$ such that $f_{\Pi}$ is a winning strategy for $\Pi$ in $\mathcal{G}@u$. \end{fact} In the sequel we will often assume, without further notice, that strategies in parity games are positional. Moreover, we think of a positional strategy $f_\Pi$ for player $\Pi$ as a function $f_\Pi:G_\Pi\to G$. \subsection{The Modal $\mu$-Calculus and some of its fragments} \label{subsec:mu} The language of the modal $\mu$-calculus ($\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$) is given by the following grammar: \begin{equation*} \varphi\ \mathrel{::=} q \mid \neg q \mid \varphi \land \varphi \mid \varphi \lor \varphi \mid \Diamond \varphi \mid \Box \varphi \mid \mu p.\varphi \mid \nu p.\varphi \end{equation*} where $p,q \in \mathsf{Prop}$, and in the formation of $\mu p.\varphi$ and $\nu p.\varphi$ it is required that $p$ is positive in $\varphi$ (i.e., $p$ is not negated). We will freely use standard syntactic concepts and notations related to this language, such as the sets $\mathit{FV}(\varphi)$ and $\mathit{BV}(\varphi)$ of \emph{free} and \emph{bound} variables of $\varphi$, and the collection $\mathit{Sfor}(\varphi)$ of subformulas of $\varphi$. We use the standard convention that no variable is both free and bound in a formula and that every bound variable is fresh. We let $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace(\mathsf{P})$ denote the collection of formulas $\varphi$ with $\mathit{FV}(\varphi) \subseteq \mathsf{P}$. Sometimes we write $\psi \trianglelefteq \varphi$ to denote that $\psi$ is a subformula of $\varphi$.
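For example, for the formula $\varphi = \mu p. (q \lor \Diamond p)$ we have
\[ \mathit{FV}(\varphi) = \{q\} \qquad \text{and} \qquad \mathit{BV}(\varphi) = \{p\}, \]
so that $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace(\{q\})$; note that $p$ indeed occurs only positively in the formula $q \lor \Diamond p$.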
For a bound variable $p$ occurring in some formula $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$, we use $\delta_p$ to denote the binding definition of $p$, that is, the unique formula such that either $\mu p.\delta_p$ or $\nu p.\delta_p$ is a subformula of $\varphi$. We need some notation for the notion of \emph{substitution}. Let $\varphi$ and $\{ \psi_{z} \mid z \in Z \}$ be modal fixpoint formulas, where $Z \cap \mathit{BV}{(\varphi)} = \varnothing$. Then we let $\varphi[\psi_{z}/z \mid z \in Z]$ denote the formula obtained from $\varphi$ by simultaneously substituting each formula $\psi_{z}$ for $z$ in $\varphi$ (with the usual understanding that no free variable in any of the $\psi_{z}$ will get bound by doing so). In case $Z$ is a singleton $\{z\}$, we will simply write $\varphi[\psi_{z}/z]$, or $\varphi[\psi]$ if $z$ is clear from context. The semantics of this language is completely standard. Let $\mathstr{S} = \tup{T,R,\kappa, s_I}$ be a transition system and $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$. We inductively define the \emph{meaning} $\ext{\varphi}^{\mathstr{S}}$; the definition includes the following clauses for the least ($\mu$) and greatest ($\nu$) fixpoint operators: \begin{align*} \ext{\mu p.\psi}^{\mathstr{S}} & \mathrel{:=} \bigcap \{S \subseteq T \mid S \supseteq \ext{\psi}^{\mathstr{S}[p\mapsto S]} \} \\ \ext{\nu p.\psi}^{\mathstr{S}} & \mathrel{:=} \bigcup \{S \subseteq T \mid S \subseteq \ext{\psi}^{\mathstr{S}[p\mapsto S]} \} \end{align*} We say that $\varphi$ is \emph{true} in $\mathstr{S}$ (notation $\mathstr{S} \tscolorsdash \varphi$) iff $s_I \in \ext{\varphi}^{\mathstr{S}}$. We will now describe the semantics defined above in game-theoretic terms. That is, we will define the evaluation game $\mathcal{E}(\varphi,\mathstr{S})$ associated with a formula $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ and a transition system $\mathstr{S}$.
This game is played by two players (\ensuremath{\exists}\xspace and $\forall$) moving through positions $(\xi,s)$ where $\xi \trianglelefteq \varphi$ and $s \in T$. In an arbitrary position $(\xi,s)$ it is useful to think of \ensuremath{\exists}\xspace trying to show that $\xi$ is true at $s$, and of $\forall$ as trying to convince her that $\xi$ is false at $s$. The rules of the evaluation game are given in the following table. \begin{center} \begin{tabular}{|l|c|l|} \hline Position & Player & Admissible moves \\ \hline $(\psi_1 \vee \psi_2,s)$ & $\exists$ & $\{(\psi_1,s),(\psi_2,s) \}$ \\ $(\psi_1 \wedge \psi_2,s)$ & $\forall$ & $\{(\psi_1,s),(\psi_2,s) \}$ \\ $(\Diamond\varphi,s)$ & $\exists$ & $\{(\varphi,t)\ |\ t \in R[s] \}$ \\ $(\Box\varphi,s)$ & $\forall$ & $\{(\varphi,t)\ |\ t \in R[s] \}$ \\ $(\mu p.\varphi,s)$ & $-$ & $\{(\varphi,s) \}$ \\ $(\nu p.\varphi,s)$ & $-$ & $\{(\varphi,s) \}$ \\ $(p,s)$ with $p \in \mathit{BV}(\varphi)$ & $-$ & $\{(\delta_p,s) \}$ \\ $(\lnot q,s)$ with $q \in \mathit{FV}(\varphi)$ and $q \notin \kappa(s)$ & $\forall$ & $\emptyset$ \\ $(\lnot q,s)$ with $q \in \mathit{FV}(\varphi)$ and $q \in \kappa(s)$ & $\exists$ & $\emptyset$ \\ $(q,s)$ with $q \in \mathit{FV}(\varphi)$ and $q \in \kappa(s)$ & $\forall$ & $\emptyset$ \\ $(q,s)$ with $q \in \mathit{FV}(\varphi)$ and $q \notin \kappa(s)$ & $\exists$ & $\emptyset$ \\ \hline \end{tabular} \end{center} Every finite match of this game is lost by the player that got stuck. For the winning condition of an infinite match, consider the bound variables of $\varphi$ that get unravelled infinitely often, and let $p$ be the one among them such that $\delta_{p}$ is the highest subformula in the syntactic tree of $\varphi$. The winner of the match is $\forall$ if $p$ is a $\mu$-variable and \ensuremath{\exists}\xspace if $p$ is a $\nu$-variable.
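To see the game and the algebraic semantics at work, consider the formula $\mu p.(q \lor \Diamond p)$, which expresses that some state satisfying $q$ can be reached in finitely many steps. On the LTS $\mathstr{S}$ with states
\[ s_0 \longrightarrow s_1 \longrightarrow s_2, \qquad \text{where $q$ holds only at $s_2$,} \]
player $\exists$ wins the evaluation game from the position $(\mu p.(q \lor \Diamond p), s_0)$: at every disjunction she chooses the right disjunct $\Diamond p$ and moves one step closer to $s_2$, until at $s_2$ she picks the left disjunct $q$, where $\forall$ immediately gets stuck. A match in which $p$ were unravelled infinitely often would instead be lost by $\exists$, since $p$ is a $\mu$-variable. This matches the algebraic computation $\ext{\mu p.(q \lor \Diamond p)}^{\mathstr{S}} = \{s_0,s_1,s_2\}$.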
We say that $\varphi$ is true in $\mathstr{S}$ iff \ensuremath{\exists}\xspace has a winning strategy in $\mathcal{E}(\varphi,\mathstr{S})$. \begin{proposition}[Adequacy Theorem]\label{p:unfold=evalgame} Let $\varphi = \varphi(p)$ be a formula of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ in which all occurrences of $p$ are positive, let $\mathstr{S}$ be an LTS and $s \in T$. Then: \begin{equation} \label{eq:adeq3} s \in \ext{\mu p.\varphi}^{\mathstr{S}} \iff (\mu p.\varphi,s) \in \text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{E}(\mu p.\varphi,\mathstr{S})). \end{equation} \end{proposition} Formulas of the modal $\mu$-calculus may be classified according to their \emph{alternation depth}, which roughly is given as the maximal length of a chain of nested alternating least and greatest fixpoint operators~\cite{Niwinski86}. The \emph{alternation-free fragment} of the modal $\mu$-calculus~($\ensuremath{\mu_{D}\ML}\xspace$) is usually defined as the collection of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$-formulas without nesting of least and greatest fixpoint operators. It can also be given a more standard grammatical definition as follows. \begin{definition} Given a set $\mathsf{Q}$ of propositional variables, we define the fragment $\noe{\mu\ensuremath{\mathrm{ML}}\xspace}{\mathsf{Q}}$ of \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace-formulas that are (syntactically) \emph{noetherian} in $\mathsf{Q}$, by the following grammar: \begin{equation*} \varphi \mathrel{::=} q \mid \psi \mid \varphi \lor \varphi \mid \varphi \land \varphi \mid \Diamond \varphi \mid \Box \varphi \mid \mu p.\varphi' \end{equation*} where $q \in \mathsf{Q}$, $\psi$ is a $\mathsf{Q}$-free $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$-formula, and $\varphi' \in \noe{\mu\ensuremath{\mathrm{ML}}\xspace}{\mathsf{Q}\cup\{p\}}$. The \emph{co-noetherian} fragment $\conoe{\mu\ensuremath{\mathrm{ML}}\xspace}{\mathsf{Q}}$ is defined dually.
\end{definition} The alternation-free $\mu$-calculus can be defined as the fragment of the full language where we restrict the application of the least fixpoint operator $\mu p$ to formulas that are noetherian in $p$ (and apply a dual condition to the greatest fixpoint operator). \begin{definition} The formulas of the \emph{alternation-free} $\mu$-calculus $\ensuremath{\mu_{D}\ML}\xspace$ are defined by the following grammar: \begin{equation*} \varphi \mathrel{::=} q \mid \neg q \mid \varphi\lor\varphi \mid \varphi\land\varphi \mid \Diamond \varphi \mid \Box \varphi \mid \mu p. \varphi' \mid \nu p. \varphi'', \end{equation*} where $p,q \in \mathsf{Prop}$, $\varphi' \in \ensuremath{\mu_{D}\ML}\xspace \cap \noe{\mu\ensuremath{\mathrm{ML}}\xspace}{p}$ and dually $\varphi'' \in \ensuremath{\mu_{D}\ML}\xspace \cap \conoe{\mu\ensuremath{\mathrm{ML}}\xspace}{p}$. \end{definition} It is then immediate to verify that the above definition indeed captures exactly all formulas without alternation of least and greatest fixpoints. One may prove that a formula $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ belongs to the fragment $\ensuremath{\mu_{D}\ML}\xspace$ iff for all subformulas $\mu p.\psi_1$ and $\nu q.\psi_2$ it holds that $p$ is not free in $\psi_2$ and $q$ is not free in $\psi_1$. Over arbitrary transition systems, this fragment is less expressive than the whole of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$~\cite{Park79}. In order to define the fragment $\ensuremath{\mu_{C}\ML}\xspace \subseteq \ensuremath{\mu_{D}\ML}\xspace$, which is of central importance in this article, we first introduce the \emph{continuous} fragment of the modal $\mu$-calculus. As observed in Section~\ref{sec:intro}, the abstract notion of continuity can be given a concrete interpretation in the context of the $\mu$-calculus.
\begin{definition} Let $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$, and $q$ be a propositional variable. We say that \emph{$\varphi$ is continuous in $q$} iff for every transition system $\mathstr{S}$ it holds that $$ \mathstr{S} \tscolorsdash \varphi \qquad\text{iff}\qquad \mathstr{S}[q \mapsto S] \tscolorsdash \varphi \text{ for some finite } S \subseteq_\omega \tscolors^\natural(q). $$ \end{definition} For instance, $\Diamond q$ is continuous in $q$, whereas $\Box q$ is not: the truth of $\Box q$ at a state with infinitely many successors cannot be witnessed by a finite part of $\tscolors^\natural(q)$. We can give a syntactic characterisation of the fragment of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ that captures this property. \begin{definition} Given a set $\mathsf{Q}$ of propositional variables, we define the fragment $\cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{\mathsf{Q}}$ of \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace-formulas that are \emph{continuous} in $\mathsf{Q}$, by the following grammar: \begin{equation*} \varphi \mathrel{::=} q \mid \psi \mid \varphi \lor \varphi \mid \varphi \land \varphi \mid \Diamond \varphi \mid \mu p.\varphi' \end{equation*} where $q \in \mathsf{Q}$, $\psi$ is a $\mathsf{Q}$-free $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$-formula and $\varphi' \in \cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{\mathsf{Q}\cup\{p\}}$. The \emph{co-continuous} fragment $\cocont{\mu\ensuremath{\mathrm{ML}}\xspace}{\mathsf{Q}}$ is defined dually. \end{definition} \begin{proposition}[\cite{Fontaine08,FV12}]\label{prop:FVcont} A $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$-formula is continuous in $q$ iff it is equivalent to a formula in the fragment $\cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{q}$. \end{proposition} Finally, we define $\ensuremath{\mu_{C}\ML}\xspace$ to be the fragment of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ where the use of the least fixpoint operator is restricted to the continuous fragment.
\begin{definition} Formulas of the fragment $\ensuremath{\mu_{C}\ML}\xspace$ are given by: \begin{equation*} \varphi \mathrel{::=} q \mid \lnot q \mid \varphi \lor \varphi \mid \varphi \land \varphi \mid \Diamond \varphi \mid \Box \varphi \mid \mu p.\varphi' \mid \nu p.\varphi'' \end{equation*} where $p,q \in \mathsf{Prop}$, $\varphi' \in \cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{p} \cap \ensuremath{\mu_{C}\ML}\xspace$, and dually $\varphi'' \in \cocont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{p} \cap \ensuremath{\mu_{C}\ML}\xspace$. \end{definition} It is easy to verify that $\ensuremath{\mu_{C}\ML}\xspace \subseteq \ensuremath{\mu_{D}\ML}\xspace$. A characteristic feature of $\ensuremath{\mu_{C}\ML}\xspace$ is that in a formula $\mu p. \varphi \in \ensuremath{\mu_{C}\ML}\xspace$, all occurrences of $p$ are \emph{existential} in the sense that they may be in the scope of a diamond but not of a box. Furthermore, as an immediate consequence of Proposition \ref{prop:FVcont} we may make the following observation. \begin{corollary}\label{cor:cont} For every $\ensuremath{\mu_{C}\ML}\xspace$-formula $\mu p. \varphi$, $\varphi$ is continuous in $p$. \end{corollary} \subsection{Bisimulation} Bisimulation is a notion of behavioral equivalence between processes. For the case of transition systems, it is formally defined as follows. \begin{definition} Let $\mathstr{S} = \tup{T, R, \kappa, s_I}$ and $\mathstr{S}' = \tup{T', R', \kappa', s'_I}$ be $\mathsf{P}$-labeled transition systems. A \emph{bisimulation} is a relation $Z \subseteq T \times T'$ such that for all $(t,t') \in Z$ the following holds: \begin{description} \itemsep 0 pt \item[(atom)] $\kappa(t) = \kappa'(t')$; \item[(forth)] for all $s \in R[t]$ there is $s'\in R'[t']$ such that $(s,s') \in Z$; \item[(back)] for all $s'\in R'[t']$ there is $s \in R[t]$ such that $(s,s') \in Z$.
\end{description} Two pointed transition systems $\mathstr{S}$ and $\mathstr{S}'$ are \emph{bisimilar} (denoted $\mathstr{S} \mathrel{\underline{\leftrightarrow}} \mathstr{S}'$) if there is a bisimulation $Z \subseteq T \times T'$ containing $(s_I,s'_I)$. \end{definition} The following observation about tree unravellings is key to understanding the importance of tree models in the setting of invariance-modulo-bisimilarity results. \begin{fact} \label{prop:tree_unrav} $\mathstr{S}$, $\unravel{\mathstr{S}}$ and $\mathstr{S}^{\omega}$ are bisimilar, for every transition system $\mathstr{S}$. \end{fact} A class $\mathsf{C}$ of transition systems is \emph{bisimulation closed} if $\mathstr{S} \mathrel{\underline{\leftrightarrow}} \mathstr{S}'$ implies that $\mathstr{S} \in \mathsf{C}$ iff $\mathstr{S}' \in \mathsf{C}$, for all $\mathstr{S}$ and $\mathstr{S}'$. A formula $\varphi \in L$ is \emph{bisimulation-invariant} if $\mathstr{S} \mathrel{\underline{\leftrightarrow}} \mathstr{S}'$ implies that $\mathstr{S} \tscolorsdash \varphi$ iff $\mathstr{S}' \tscolorsdash \varphi$, for all $\mathstr{S}$ and $\mathstr{S}'$. \begin{fact} Each $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$-definable class of transition systems is bisimulation closed. \end{fact} \subsection{Monadic second-order logics} \label{sec:prel-so} Three variants of monadic second-order logic feature in our work: \emph{standard}, \emph{weak}, and \emph{noetherian} monadic second-order logic, and for each of these three variants, we consider a one-sorted and a two-sorted version. As we will see later, the one-sorted version fits better in the automata-theoretic framework, whereas it is more convenient to use the two-sorted approach when translating $\mu$-calculi into second-order languages.
In both the one-sorted and the two-sorted version, the syntax of the three languages is the same, the difference lying in the semantics, more specifically, in the type of subsets over which the second-order quantifiers range. In the case of standard and weak monadic second-order logic, these quantifiers range over all subsets, respectively over all finite subsets, of the model. In the case of \ensuremath{\mathrm{NMSO}}\xspace we need the concept of a \emph{noetherian} subset of an LTS. \begin{definition} \label{d:bundle1} Let $\mathstr{S} = \tup{T,R,\kappa, s_I}$ be an LTS, and let $B$ be a non-empty set of finite paths that all share the same starting point $s$; we call $B$ a \emph{bundle rooted at} $s$, or simply an $s$-\emph{bundle}, if $B$ does not contain an infinite ascending chain $\pi_{0} \sqsubset \pi_{1} \sqsubset \cdots$, where $\sqsubset$ denotes the (strict) initial-segment relation on paths. A \emph{bundle} is simply an $s$-bundle for some $s \in T$. A subset $X$ of $T$ is called \emph{noetherian} if there is a bundle $B$ such that each $t \in X$ lies on some path in $B$. \end{definition} Notice that in a tree model, the noetherian subsets coincide with those that are included in a well-founded subtree. For instance, in a tree with an infinitely branching root, the (possibly infinite) set of children of the root is noetherian, whereas the set of nodes on an infinite branch is not. \subsubsection*{One-sorted monadic second-order logics} \begin{definition}\label{def:mso} The formulas of the \emph{(one-sorted) monadic second-order language} are defined by the following grammar: \begin{eqnarray*}\label{EQ_mso} \varphi \mathrel{::=} \here{p} \mid p \sqsubseteq q \mid R(p,q) \mid \lnot\varphi \mid \varphi\lor\varphi \mid \exists p.\varphi, \end{eqnarray*} where $p$ and $q$ are letters from $\mathsf{Prop}$. We adopt the standard convention that no proposition letter is both free and bound in $\varphi$. \end{definition} As mentioned, the three logics $\ensuremath{\mathrm{SMSO}}\xspace$, $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$ are distinguished by their semantics. Let $\mathstr{S} = \tup{T,R,\kappa, s_I}$ be an LTS.
The interpretation of the atomic formulas is fixed: \begin{align*} \mathstr{S} \models \here{p} & \qquad\text{ iff }\qquad \tscolors^\natural(p) = \{s_I\} \\ \mathstr{S} \models p \sqsubseteq q & \qquad\text{ iff }\qquad \tscolors^\natural(p) \subseteq \tscolors^\natural(q) \\ \mathstr{S} \models R(p,q) & \qquad\text{ iff }\qquad \text{for every $s\in \tscolors^\natural(p)$ there exists $t\in \tscolors^\natural(q)$ such that $sRt$} \end{align*} Furthermore, the interpretation of the boolean connectives is standard. The interpretation of the existential quantifier is where the logics diverge: \begin{align*} \mathstr{S} \models \exists p. \varphi & \qquad\text{ iff }\qquad \mathstr{S}[p \mapsto X] \models \varphi \, \left.\begin{cases} \text{for some } & (\ensuremath{\mathrm{SMSO}}\xspace) \\ \text{for some \emph{finite} } & (\ensuremath{\mathrm{WMSO}}\xspace) \\ \text{for some \emph{noetherian} } & (\ensuremath{\mathrm{NMSO}}\xspace) \end{cases}\right\}\, X \subseteq T. \end{align*} Observe that for a given monadic second-order formula $\varphi$, the classes $\ensuremath{\mathsf{Mod}}_{\ensuremath{\mathrm{SMSO}}\xspace}(\varphi)$, $\ensuremath{\mathsf{Mod}}_{\ensuremath{\mathrm{WMSO}}\xspace}(\varphi)$ and $\ensuremath{\mathsf{Mod}}_{\ensuremath{\mathrm{NMSO}}\xspace}(\varphi)$ will generally be different. For instance, the formula $\exists p. \forall q.\, q \sqsubseteq p$ holds in every LTS under $\ensuremath{\mathrm{SMSO}}\xspace$ semantics (take $p$ to denote the whole of $T$), but fails under $\ensuremath{\mathrm{WMSO}}\xspace$ semantics on every infinite LTS. \subsubsection*{Two-sorted monadic second-order logics} The reader may have expected to see the following more standard language for second-order logic.
\begin{definition} \label{def:2mso} Given a set $\mathsf{iVar}$ of individual (first-order) variables, we define the formulas of the \emph{two-sorted monadic second-order language} by the following grammar: \[ \varphi \mathrel{::=} p(x) \mid R(x,y) \mid x \approx y \mid \neg \varphi \mid \varphi \lor \varphi \mid \exists x.\varphi \mid \exists p.\varphi \] where $p \in \mathsf{Prop}$, $x,y \in \mathsf{iVar}$ and $\approx$ is the symbol for equality. \end{definition} Formulas are interpreted over an LTS $\mathstr{S} = \tup{T,R,\kappa, s_I}$ with a variable assignment $g: \mathsf{iVar} \to T$, and the semantics of the language is completely standard. Depending on whether second-order quantification ranges over all subsets, over finite subsets or over noetherian subsets, we obtain the three two-sorted variants denoted respectively as $2\ensuremath{\mathrm{SMSO}}\xspace$, $2\ensuremath{\mathrm{WMSO}}\xspace$ and $2\ensuremath{\mathrm{NMSO}}\xspace$. \subsubsection*{Equivalence of the two versions} In each variant, the one-sorted and the two-sorted versions can be proved to be equivalent, but there is a subtlety due to the fact that our models have a distinguished state. In the one-sorted language, we use the downarrow $\here$ to access this distinguished state; in the two-sorted approach, we will use a \emph{fixed} variable $v$ to refer to the distinguished state, and given a formula $\varphi(v)$ of which $v$ is the only free individual variable, we write $\mathstr{S} \models \varphi[s_{I}]$ rather than $\mathstr{S}[v \mapsto s_{I}] \models \varphi$. As a consequence, the proper counterpart of the one-sorted language $\ensuremath{\mathrm{SMSO}}\xspace$ is the set $2\ensuremath{\mathrm{SMSO}}\xspace(v)$ of those $2\ensuremath{\mathrm{SMSO}}\xspace$-formulas that have precisely $v$ as their unique free variable.
More specifically, with $L \in \{\ensuremath{\mathrm{SMSO}}\xspace, \ensuremath{\mathrm{WMSO}}\xspace, \ensuremath{\mathrm{NMSO}}\xspace\}$, we say that $\varphi \in L$ is \emph{equivalent to} $\psi(v) \in L(v)$ if \[ \mathstr{S} \models \varphi \text{ iff } \mathstr{S} \models \psi[s_{I}] \] for every model $\mathstr{S} = \tup{T,R,\kappa, s_I}$. We can now state the equivalence between the two approaches to monadic second-order logic as follows. \begin{fact} \label{fact:msovs2mso} Let $L \in \{\ensuremath{\mathrm{SMSO}}\xspace, \ensuremath{\mathrm{WMSO}}\xspace, \ensuremath{\mathrm{NMSO}}\xspace\}$ be a monadic second-order logic. \begin{enumerate} \item There is an effective construction transforming a formula $\varphi \in L$ into an equivalent formula $\varphi^{t} \in 2L(v)$. \item There is an effective construction transforming a formula $\psi \in 2L(v)$ into an equivalent formula $\psi^{o} \in L$. \end{enumerate} \end{fact} Since it is completely straightforward to define a translation $(\cdot)^{t}$ as required for part (1) of Fact~\ref{fact:msovs2mso}, we only discuss the proof of part (2). The key observation here is that a single-sorted language can interpret the corresponding two-sorted language by encoding every individual variable $x \in \mathsf{iVar}$ as a set variable $p_x$ denoting a singleton, and that it is easy to write down a formula stating that such a variable is indeed interpreted by a singleton.
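One standard way to express this in the one-sorted language, sketched here with $\forall q$ abbreviating $\neg \exists q. \neg$ (the precise shape of the formula is a matter of choice), is the following pair of abbreviations: \[ \mathit{empty}(p) \mathrel{:=} \forall q.\, p \sqsubseteq q, \qquad \mathit{sing}(p) \mathrel{:=} \neg\mathit{empty}(p) \land \forall q.\big( (q \sqsubseteq p \land \neg\mathit{empty}(q)) \to p \sqsubseteq q \big). \] Indeed, $\mathit{empty}(p)$ holds precisely if $\tscolors^\natural(p) = \varnothing$, and $\mathit{sing}(p)$ holds precisely if $\tscolors^\natural(p)$ is a singleton. Note that these abbreviations are sound under all three quantifier semantics, since the empty set and all singletons are finite, and hence noetherian.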
As a consequence, where $2L(\mathsf{P},\mathsf{X})$ denotes the set of $2L$-formulas with free second-order variables in $\mathsf{P}$ and free first-order variables in $\mathsf{X}$, it is not hard to formulate a translation $(\cdot)^{m} : 2L(\mathsf{P},\mathsf{X}) \to L(\mathsf{P} \uplus \{ p_{x} \mid x \in \mathsf{X} \})$ such that, for every model $\mathstr{S}$, every variable assignment $g$ and every formula $\psi \in 2L(\mathsf{P},\mathsf{X})$: \[ \mathstr{S},g \models \psi \quad\text{iff}\quad \mathstr{S}[p_{x} \mapsto \{g(x)\} \mid x \in \mathsf{X}] \models \psi^m. \] From this it is immediate that any $\psi \in 2L(v)$ satisfies \[ \mathstr{S} \models \psi[s_{I}] \quad\text{iff}\quad \mathstr{S} \models \exists p_{v} (\here{p_{v}} \land \psi^{m}), \] so that we may take $\psi^{o} \mathrel{:=} \exists p_{v} (\here{p_{v}} \land \psi^{m})$. \section{One-step logics, parity automata and $\mu$-calculi} \label{sec:parityaut} This section introduces and studies the type of parity automata that will be used in the characterisation of $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$ on tree models. In order to define these automata in a uniform way, we introduce, at a slightly higher level of abstraction, the notion of a \emph{one-step logic}, a concept from coalgebraic modal logic~\cite{cirs:modu04} which provides a nice framework for a general approach towards the theory of automata operating on infinite objects. As salient specimens of such one-step logics we will discuss monadic first-order logic with equality ($\ensuremath{\mathrm{FOE}}\xspace$) and its extension with the infinity quantifier ($\ensuremath{\mathrm{FOE}^{\infty}}\xspace$).
We then define, parametric in the language $L_{1}$ of such a one-step logic, the notions of an $L_{1}$-automaton and of a $\mu$-calculus $\muL_{1}$, and we show how various classes of $L_{1}$-automata effectively correspond to fragments of $\muL_{1}$. \input{neutral/ssec-3-onestep} \input{neutral/ssec-3-parityaut} \input{neutral/ssec-3-mucalculi} \input{neutral/ssec-3-aut-to-fma} \input{neutral/ssec-3-fma-to-aut} \section{Automata for $\ensuremath{\mathrm{WMSO}}\xspace$} \label{sec:autwmso} In this section we start looking at the automata-theoretic characterisation of $\ensuremath{\mathrm{WMSO}}\xspace$. That is, we introduce the automata corresponding to this version of monadic second-order logic; these \emph{$\ensuremath{\mathrm{WMSO}}\xspace$-automata} are the continuous-weak automata for the one-step language $\ensuremath{\mathrm{FOE}^{\infty}}\xspace$, cf.~Definition~\ref{d:ctwk}. \begin{definition} A \emph{$\ensuremath{\mathrm{WMSO}}\xspace$-automaton} is a continuous-weak automaton for the one-step language $\ensuremath{\mathrm{FOE}^{\infty}}\xspace$. \end{definition} Recall that our definition of continuous-weak automata is syntactic in nature, i.e., if $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ is a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton, then for any pair of states $a,b$ with $a \prec b$ and $b \prec a$, and any $c\in C$, we have $\Delta(a,c) \in \cont{{\ensuremath{\mathrm{FOE}^{\infty}}\xspace(A)}^+}{b}$ if $\Omega(a)$ is odd and $\Delta(a,c) \in \cocont{{\ensuremath{\mathrm{FOE}^{\infty}}\xspace(A)}^+}{b}$ if $\Omega(a)$ is even. The main result of this section states one direction of the automata-theoretic characterisation of $\ensuremath{\mathrm{WMSO}}\xspace$. \begin{theorem} \label{t:wmsoauto} There is an effective construction transforming a $\ensuremath{\mathrm{WMSO}}\xspace$-formula $\varphi$ into a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}_{\varphi}$ that is equivalent to $\varphi$ on the class of trees.
\end{theorem} The proof proceeds by induction on the complexity of $\varphi$. For the inductive steps, we will need to verify that the class of $\ensuremath{\mathrm{WMSO}}\xspace$-automata is closed under the boolean operations and finite projection. The latter closure property requires most of the work: we devote Section \ref{sec:simulationwmso} to a simulation theorem that puts $\ensuremath{\mathrm{WMSO}}\xspace$-automata in a suitable shape for the projection construction. To this end, it is convenient to define a closure operation on classes of tree models corresponding to the semantics of $\ensuremath{\mathrm{WMSO}}\xspace$ quantification. The inductive step of the proof of Theorem \ref{t:wmsoauto} will show that the classes that are recognizable by $\ensuremath{\mathrm{WMSO}}\xspace$-automata are closed under this operation. \begin{definition}\label{def:tree_finproj-w} Fix a set $\mathsf{P}$ of proposition letters, a proposition letter $p \not\in \mathsf{P}$ and a language $\mathsf{C}$ of $\mathsf{P}\cup\{p\}$-labeled trees. The \emph{finitary projection} of $\mathsf{C}$ over $p$ is the language of $\mathsf{P}$-labeled trees defined as \[ \exists^{\mathit{fin}} p.\mathsf{C} \mathrel{:=} \{\mathstr{T} \mid \text{ there is a finite $p$-variant } \mathstr{T}' \text{ of } \mathstr{T} \text{ with } \mathstr{T}' \in \mathsf{C}\}. \] A collection of classes of tree models is \emph{closed under finitary projection over $p$} if it contains the class $\exists^{\mathit{fin}} p.\mathsf{C}$ whenever it contains the class $\mathsf{C}$ itself. \end{definition} \subsection{Simulation theorem for $\ensuremath{\mathrm{WMSO}}\xspace$-automata} \label{sec:simulationwmso} \noindent Our next goal is a \emph{projection construction} that, given a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}$, provides one recognizing $\exists^{\mathit{fin}} p.\ensuremath{\mathsf{TMod}}(\mathstr{A})$.
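As a simple illustration of the finitary projection operation, writing $\exists^{\mathit{fin}} p$ for the finitary projection over $p$: let $\mathsf{C}_{r}$ be the class of $\{p\}$-labeled trees whose root is labeled with $p$, and let $\mathsf{C}_{\infty}$ be the class of $\{p\}$-labeled trees in which infinitely many nodes are labeled with $p$. Then \[ \exists^{\mathit{fin}} p.\mathsf{C}_{r} = \text{the class of all trees}, \qquad \exists^{\mathit{fin}} p.\mathsf{C}_{\infty} = \varnothing, \] since every tree has a finite $p$-variant in which exactly the root is labeled with $p$, whereas no finite $p$-variant can label infinitely many nodes with $p$.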
For $\ensuremath{\mathrm{SMSO}}\xspace$-automata, the analogous construction crucially uses the following \emph{simulation theorem}: every $\ensuremath{\mathrm{SMSO}}\xspace$-automaton $\mathstr{A}$ is equivalent to a \emph{non-deterministic} automaton $\mathstr{A}'$ \cite{Walukiewicz96}. Semantically, non-determinism yields the appealing property that every node of the input model $\mathstr{T}$ is associated with at most one state of $\mathstr{A}'$ during the acceptance game; that is, we may assume $\ensuremath{\exists}\xspace$'s strategy $f$ in $\mathcal{A}(\mathstr{A}',\mathstr{T})$ to be \emph{functional} (\emph{cf.} Definition \ref{def:StratfunctionalFinitary} below). This is particularly helpful when we want to define a $p$-variant of $\mathstr{T}$ that is accepted by the projection construct on $\mathstr{A}'$: our decision whether to label a node $s$ with $p$ or not will crucially depend on the value $f(a,s)$, where $a$ is the unique state of $\mathstr{A}'$ that is associated with $s$. Now, in the case of $\ensuremath{\mathrm{WMSO}}\xspace$-automata we are interested in guessing \emph{finitary} $p$-variants, which requires $f$ to be functional only on a \emph{finite} set of nodes. Thus the idea of our simulation theorem is to turn a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}$ into an equivalent one $\mathstr{A}^{Fin}$ that behaves non-deterministically on a \emph{finite} portion of any accepted tree. For $\ensuremath{\mathrm{SMSO}}\xspace$-automata, the simulation theorem is based on a powerset construction: if the starting automaton has carrier $A$, the resulting non-deterministic automaton is based on ``macro-states'' from the set $\pow A$. Analogously, for $\ensuremath{\mathrm{WMSO}}\xspace$-automata we will associate the non-deterministic behaviour with macro-states.
However, as explained above, the automaton $\mathstr{A}^{Fin}$ that we construct has to be non-deterministic just on finitely many nodes of the input and may behave as $\mathstr{A}$ (i.e. in ``alternating mode'') on the others. To this end, $\mathstr{A}^{Fin}$ will be ``two-sorted'', roughly consisting of a copy of $\mathstr{A}$ (with carrier $A$) together with a variant of its powerset construction, based both on $A$ and $\pow A$. For any accepted $\mathstr{T}$, the idea is to make any match $\pi$ of $\mathcal{A}(\mathstr{A}^{Fin},\mathstr{T})$ consist of two parts: \begin{description} \item[(\textit{Non-deterministic mode})] For finitely many rounds $\pi$ is played on macro-states, i.e. positions belong to the set $\pow A \times T$. In her strategy player $\exists$ assigns macro-states (from $\pow A$) only to \emph{finitely many} nodes, and states (from $A$) to the rest. Also, her strategy is functional in $\pow A$, i.e. it assigns \emph{at most one macro-state} to each node. \item[(\textit{Alternating mode})] At a certain round, $\pi$ abandons macro-states and turns into a match of the game $\mathcal{A}(\mathstr{A},\mathstr{T})$, i.e. all subsequent positions are from $A \times T$ (and are played according to a not necessarily functional strategy). \end{description} Therefore successful runs of $\mathstr{A}^{Fin}$ will have the property of processing only a \emph{finite} amount of the input with $\mathstr{A}^{Fin}$ being in a macro-state and all the rest with $\mathstr{A}^{Fin}$ behaving exactly as $\mathstr{A}$. We now proceed in steps towards the construction of $\mathstr{A}^{Fin}$. First, recall from Definition \ref{def:basicform-ofoe} that an \emph{$A$-type} is just a subset of $A$. We now define a notion of liftings for sets of types, which is instrumental in translating the transition function from states to macro-states.
\begin{definition} The \emph{lifting} of a type $S \in \wp A$ is defined as the following $\wp A$-type: \[ \lift{S} \mathrel{:=} \begin{cases} \{ S \} & \text{ if } S \neq \varnothing \\ \varnothing & \text{ if } S = \varnothing. \end{cases} \] This definition is extended to sets of $A$-types by putting $\lift{\Sigma} \mathrel{:=} \{ \lift{S} \mid S \in \Sigma \}$. \end{definition} The distinction between empty and non-empty elements of $\Sigma$ is to ensure that the empty type on $A$ is lifted to the empty type on $\wp A$. Notice that each lifted type $\lift{S}$ is either empty or contains exactly one element of $\pow A$. This property is important for functionality, see below. Next we define a translation on the sentences associated with the transition function of the original $\ensuremath{\mathrm{WMSO}}\xspace$-automaton. Following the intuition given above, we want to work with sentences that can be made true by assigning macro-states (from $\pow A$) to finitely many nodes in the model, and ordinary states (from $A$) to all the other nodes. Moreover, each node should be associated with \emph{at most one} macro-state, because of functionality. These desiderata are expressed for one-step formulas as \emph{$\pow A$-continuity} and \emph{$\pow A$-separability}, see Definitions~\ref{def:semnotions} and~\ref{d:sep}. For the language $\ensuremath{\mathrm{FOE}^{\infty}}\xspace$, Theorem \ref{t:osnf} and Proposition~\ref{p:sep} guarantee these properties when formulas are in a certain syntactic shape. The next definition will provide formulas that conform to this particular shape. \begin{definition}\label{DEF_finitary_lifting} Let $\varphi \in {\ensuremath{\mathrm{FOE}^{\infty}}\xspace}^+(A)$ be a formula of shape $\posdbnfofoei{\vlist{T}}{\Pi}{\Sigma}$ for some $\Pi,\Sigma \subseteq \pow A$ and $\vlist{T} = \{T_1,\dots,T_k\} \subseteq \pow A$.
We define $\varphi^{Fin} \in {\ensuremath{\mathrm{FOE}^{\infty}}\xspace}^+(A \cup \pow A)$ as the formula $\posdbnfofoei{\lift{\vlist{T}}}{\lift{\Pi} \cup \lift{\Sigma}}{\Sigma}$, that is, \begin{equation}\label{eq:unfoldingNablaolque} \begin{aligned} \varphi^{Fin} \mathrel{:=}\ & \exists \vlist{x}.\Big(\arediff{\vlist{x}} \land \bigwedge_{1 \leq i \leq k} \tau^+_{\lift{T}_i}(x_i) \land \forall z.(\arediff{\vlist{x},z} \to \bigvee_{S\in \lift{\Pi} \cup \lift{\Sigma} \cup \Sigma} \tau^+_S(z))\Big) \\ & \land \bigwedge_{P\in\Sigma} \ensuremath{\exists^\infty}\xspace y.{\tau}^{+}_P(y) \land \ensuremath{\forall^\infty}\xspace y.\bigvee_{P\in\Sigma} {\tau}^{+}_P(y) \end{aligned} \end{equation} \end{definition} We combine the previous definitions to form the transition function for macro-states. \begin{definition}\label{PROP_DeltaPowerset} Let $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ be a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton. Fix $c \in C$ and $Q \in \pow A$. By Theorem \ref{t:osnf}, for some $\Pi,\Sigma \subseteq \pow A$ and $T_i \subseteq A$, there is a sentence $\Psi_{Q,c} \in {\ensuremath{\mathrm{FOE}^{\infty}}\xspace}^+(A)$ in the basic form $\bigvee \posdbnfofoei{\vlist{T}}{\Pi}{\Sigma}$ such that $\bigwedge_{a \in Q} \Delta(a,c) \equiv \Psi_{Q,c}$. By definition $\Psi_{Q,c}$ is of the form $\bigvee_{i}\varphi_i$, with each $\varphi_{i}$ of shape $\posdbnfofoei{\vlist{T}}{\Pi}{\Sigma}$. We put $\Delta^{\sharp}(Q,c) \mathrel{:=} \bigvee_{i}\varphi_i^{Fin}$, where the translation $(-)^{Fin}$ is given as in Definition~\ref{DEF_finitary_lifting}. Observe that $\Delta^{\sharp}(Q,c)$ is of type ${\ensuremath{\mathrm{FOE}^{\infty}}\xspace}^+(A \cup \pow A)$. \end{definition} We have now all the ingredients to define our two-sorted automaton. \begin{definition}\label{def:finitaryconstruct} Let $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ be a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton.
We define the \emph{finitary construct over $\mathstr{A}$} as the automaton $\mathstr{A}^{Fin} = \tup{A^{Fin},\Delta^{Fin},\Omega^{Fin},a_I^{Fin}}$ given by \[ \begin{array}{lll} A^{Fin} &\mathrel{:=}& A \cup \pow A \\ a_I^{Fin} &\mathrel{:=}& \{a_I\} \end{array} \hspace*{5mm} \begin{array}{lll} \Omega^{Fin}(a) &\mathrel{:=}& \Omega(a) \\ \Omega^{Fin}(Q) &\mathrel{:=}& 1 \end{array} \hspace*{5mm} \begin{array}{lll} \Delta^{Fin}(a,c) &\mathrel{:=}& \Delta(a,c) \\ \Delta^{Fin}(Q,c) &\mathrel{:=}& \Delta^{\sharp}(Q,c) \vee \bigwedge_{a \in Q} \! \! \Delta(a,c). \end{array} \] \end{definition} \begin{remark} In the standard powerset construction of non-deterministic parity automata (\cite{Walukiewicz02}, see also \cite{Ven08,ArnoldN01}) macro-states are required to be \emph{relations} rather than sets in order to determine whether a run through macro-states is accepting. This is not needed in our construction: macro-states will never be visited infinitely often in accepting runs, thus they may simply be assigned the priority $1$. \end{remark} The idea behind this definition is that $\mathstr{A}^{Fin}$ is enforced to process only a finite portion of any accepted tree while in the non-deterministic mode. This is encoded in game-theoretic terms through the notions of functional and finitary strategies. \begin{definition}\label{def:StratfunctionalFinitary} Given a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ and a transition system $\mathstr{T}$, a strategy $f$ for \ensuremath{\exists}\xspace in $\mathcal{A}(\mathstr{A},\mathstr{T})$ is \emph{functional in $B \subseteq A$} (or simply functional, if $B=A$) if for each node $s$ in $\mathstr{T}$ there is at most one $b \in B$ such that $(b,s)$ is a reachable position in an $f$-guided match. Also $f$ is \emph{finitary} in $B$ if there are only finitely many nodes $s$ in $\mathstr{T}$ for which a position $(b,s)$ with $b \in B$ is reachable in an $f$-guided match.
\end{definition} The next theorem establishes the desired properties of the finitary construct. \begin{theorem}[Simulation Theorem for $\ensuremath{\mathrm{WMSO}}\xspace$-automata] \label{PROP_facts_finConstrwmso} Let $\mathstr{A}$ be a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton and $\mathstr{A}^{Fin}$ its finitary construct. \begin{enumerate}[(1)] \itemsep 0 pt \item $\mathstr{A}^{Fin}$ is a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton.\label{point:finConstrAut-w} \item For any tree model $\mathstr{T}$, if $(a_I^{Fin},s_I)$ is a winning position for $\ensuremath{\exists}\xspace$ in $\mathcal{A}(\mathstr{A}^{Fin},\mathstr{T})$, then she has a winning strategy that is both functional and finitary in $\pow A$. \label{point:finConstrStrategy} \item $\mathstr{A} \equiv \mathstr{A}^{Fin}$. \label{point:finConstrEquiv} \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[(1)] \item Observe that any cluster of $\mathstr{A}^{Fin}$ involves states of exactly one sort, either $A$ or $\pow A$. For clusters on sort $A$, weakness and continuity of $\mathstr{A}^{Fin}$ follow from the corresponding properties of $\mathstr{A}$. For clusters on sort $\pow A$, weakness follows by observing that all macro-states in $\mathstr{A}^{Fin}$ have the same priority. Concerning continuity, by definition of $\Delta^{Fin}$ any macro-state can only appear inside a formula of the form $\varphi^{Fin} = \posdbnfofoei{\lift{\vlist{T}}}{\lift{\Pi} \cup \lift{\Sigma}}{\Sigma}$ as in \eqref{eq:unfoldingNablaolque}. Because $\pow A \cap \bigcup\Sigma = \varnothing$, by Theorem \ref{t:osnf-cont} $\varphi^{Fin}$ is continuous in each $Q \in \pow A$. \item Let $f$ be a (positional) winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{A} (\mathstr{A}^{Fin},\mathstr{T})@(a_I^{Fin},s_I)$.
We define a strategy $f'$ for $\ensuremath{\exists}\xspace$ in the same game as follows: \begin{enumerate}[label=(\alph*),ref=\alph*] \item \label{point:stat2point1} On basic positions of the form $(a,s) \in A\times T$, let $V: A^{Fin} \to \wp R[s]$ be the valuation suggested by $f$. We let the valuation suggested by $f'$ be the restriction $V'$ of $V$ to $A$. Observe that, as no predicate from $A^{Fin}\setminus A = \pow A$ occurs in $\Delta^{Fin}(a,\tscolors(s)) = \Delta(a,\tscolors(s))$, $V'$ also makes that sentence true in $\R{s}$. \item For winning positions of the form $(R,s) \in \pow A \times T$, let $V_{R,s}: (\wp A \cup A) \to \wp R[s]$ be the valuation suggested by $f$. As $f$ is winning, $\Delta^{Fin}(R,\tscolors(s))$ is true in the model $V_{R,s}$. If this is because the disjunct $\bigwedge_{a \in R} \Delta(a,\tscolors(s))$ is made true, then we can let $f'$ suggest the restriction to $A$ of $V_{R,s}$, for the same reason as in \eqref{point:stat2point1}. Otherwise, the disjunct $\Delta^{\sharp}(R,\tscolors(s)) = \bigvee_{i}\varphi_i^{Fin}$ is made true. This means that, for some $i$, $(R[s], V_{R,s}) \models \varphi_i^{Fin}$. Now, by construction of $\varphi_i^{Fin}$ as in \eqref{eq:unfoldingNablaolque}, we have $\pow A \cap \bigcup\Sigma = \varnothing$. By Theorem \ref{t:osnf-cont}, this implies that $\varphi_i^{Fin}$ is continuous in $\pow A$. Thus we have a restriction $V_{R,s}'$ of $V_{R,s}$ that verifies $\varphi_i^{Fin}$ and assigns only finitely many nodes to predicates from $\pow A$. Moreover, by construction of $\varphi_i^{Fin}$, for each $S \in \{\lift{T}_1,\dots,\lift{T}_k\} \cup \lift{\Pi} \cup \lift{\Sigma}$, $S$ contains at most one element from $\pow A$. Thus, by Proposition~\ref{p:sep}, $\varphi_i^{Fin}$ is $\pow A$-separable. But then we may find a separating valuation $V_{R,s}'' \leq_{\pow A} V_{R,s}'$ such that $V_{R,s}''$ verifies $\varphi_i^{Fin}$.
Separation means that $V_{R,s}''$ associates with each node at most one predicate from $\pow A$, and the fact that $V_{R,s}'' \leq_{\pow A} V_{R,s}'$, combined with the $\pow A$-continuity of $V_{R,s}'$, ensures $\pow A$-continuity of $V_{R,s}''$. In this case we let $f'$ suggest $V_{R,s}''$ at position $(R,s)$. \end{enumerate} The strategy $f'$ defined as above is immediately seen to be surviving for $\ensuremath{\exists}\xspace$. It is also winning, since at every basic winning position for $\ensuremath{\exists}\xspace$, the set of possible next basic positions offered by $f'$ is a subset of those offered by $f$. By this observation it also follows that any $f'$-guided match visits basic positions of the form $(R,s) \in \pow A \times T$ only finitely many times, as those have odd parity. By definition, the valuation suggested by $f'$ only assigns finitely many nodes to predicates in $\pow A$ from positions of that shape, and no nodes from other positions. It follows that $f'$ is finitary in $\pow A$. Functionality in $\pow A$ also follows immediately from the definition of $f'$. \item For the direction from left to right, it is immediate by definition of $\mathstr{A}^{Fin}$ that a winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{G} = \mathcal{A}(\mathstr{A},\mathstr{T})@(a_I,s_I)$ is also winning for $\ensuremath{\exists}\xspace$ in $\mathcal{G}^{Fin} = \mathcal{A}(\mathstr{A}^{Fin},\mathstr{T})@(a_I^{Fin},s_I)$. For the direction from right to left, let $f$ be a winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{G}^{Fin}$. The idea is to define a strategy $f'$ for $\ensuremath{\exists}\xspace$ in stages, while playing a match $\pi'$ in $\mathcal{G}$. In parallel to $\pi'$, a shadow match $\pi$ in $\mathcal{G}^{Fin}$ is maintained, where $\ensuremath{\exists}\xspace$ plays according to the strategy $f$.
For each round $z_i$, we want to keep the following relation between the two matches: \begin{center} \fbox{\parbox{12cm}{ Either \begin{enumerate}[label=(\arabic*),ref=\arabic*] \item positions of the form $(Q,s) \in \pow A \times T$ and $(a,s) \in A \times T$ occur respectively in $\pi$ and $\pi'$, with $a \in Q$, \end{enumerate} or \begin{enumerate}[label=(\arabic*),ref=\arabic*] \item[(2)] the same position of the form $(a,s) \in A \times T$ occurs in both matches. \end{enumerate} }}\hspace*{0.3cm}($\ddag$) \end{center} The key observation is that, because $f$ is winning, a basic position of the form $(Q,s) \in \pow A \times T$ can occur only for finitely many initial rounds $z_0,\dots,z_n$ that are played in $\pi$, whereas for all successive rounds $z_{n+1},z_{n+2},\dots$ only basic positions of the form $(a,s) \in A \times T$ are encountered. Indeed, if this were not the case then either $\ensuremath{\exists}\xspace$ would get stuck or the highest priority occurring infinitely often would be odd, since states from $\pow A$ all have priority $1$. It follows that enforcing a relation between the two matches as in ($\ddag$) suffices to prove that the defined strategy $f'$ is winning for $\ensuremath{\exists}\xspace$ in $\pi'$. For this purpose, first observe that $(\ddag.1)$ holds at the initial round, where the positions visited in $\pi'$ and $\pi$ are respectively $(a_I,s_I) \in A \times T$ and $(\{a_I\},s_I) \in A^{Fin} \times T$. Inductively, consider any round $z_i$ that is played in $\pi'$ and $\pi$, respectively with basic positions $(a,s) \in A \times T$ and $(q,s) \in A^{Fin} \times T$. To define the suggestion of $f'$ in $\pi'$, we distinguish two cases. \begin{itemize} \item First suppose that $(q,s)$ is of the form $(Q,s) \in \pow A\times T$. By ($\ddag$) we can assume that $a$ is in $Q$. Let $V_{Q,s}: A^{Fin} \rightarrow \wp(\R{s})$ be the valuation suggested by $f$, verifying the sentence $\Delta^{Fin}(Q,\tscolors(s))$.
We distinguish two further cases, depending on which disjunct of $\Delta^{Fin}(Q,\tscolors(s))$ is made true by $V_{Q,s}$. \begin{enumerate}[label=(\roman*), ref=\roman*] \item \label{point:valuation1} If $(\R{s},V_{Q,s})\models \bigwedge_{b \in Q} \Delta(b,\tscolors(s))$, then we let $\ensuremath{\exists}\xspace$ pick the restriction to $A$ of the valuation $V_{Q,s}$. \item \label{point:valuation2} If $(\R{s},V_{Q,s})\models \Delta^{\sharp}(Q,\tscolors(s))$, we let $\ensuremath{\exists}\xspace$ pick a valuation $V_{a,s}:A \rightarrow \p (\R{s})$ defined by putting, for each $b \in A$: \begin{align*} V_{a,s}(b) \mathrel{:=} \bigcup_{\substack{Q' \in \pow A \\ b \in Q'}} V_{Q,s}(Q') \ \cup\ V_{Q,s}(b). \end{align*} \end{enumerate} It can be readily checked that the suggested move is legitimate for $\ensuremath{\exists}\xspace$ in $\pi'$, i.e. it makes $\Delta(a,\tscolors(s))$ true in $\R{s}$. For case \eqref{point:valuation2}, observe that the nodes assigned to $b$ by $V_{Q,s}$ have to be assigned to $b$ also by $V_{a,s}$, as they may be necessary to fulfill the condition, expressed with $\ensuremath{\exists^\infty}\xspace$ and $\ensuremath{\forall^\infty}\xspace$ in $\Delta^{\sharp}$, that infinitely many nodes witness (or that finitely many nodes do not witness) some type. We now show that $(\ddag)$ holds at round $z_{i+1}$. If \eqref{point:valuation1} is the case, any next position $(b,t)\in A \times T$ picked by player $\ensuremath{\forall}\xspace$ in $\pi'$ is also available for $\ensuremath{\forall}\xspace$ in $\pi$, and we end up in case $(\ddag.2)$. Suppose instead that \eqref{point:valuation2} is the case. Given a move $(b,t) \in A \times T$ by $\ensuremath{\forall}\xspace$, by definition of $V_{a,s}$ there are two possibilities. First, $(b,t)$ is also an available choice for $\ensuremath{\forall}\xspace$ in $\pi$, and we end up in case $(\ddag.2)$ as before.
Otherwise, there is some $Q' \in \pow A$ such that $b$ is in $Q'$ and $\ensuremath{\forall}\xspace$ can choose $(Q',t)$ in the shadow match $\pi$. By letting $\pi$ advance at round $z_{i+1}$ with such a move, we are able to maintain $(\ddag.1)$ also in $z_{i+1}$. \item In the remaining case, inductively we are given the same basic position $(a,s) \in A\times T$ both in $\pi$ and in $\pi'$. The valuation $V$ suggested by $f$ in $\pi$ verifies $\Delta^{Fin}(a,\tscolors(s)) = \Delta(a,\tscolors(s))$, thus we can let the restriction of $V$ to $A$ be the valuation chosen by $\ensuremath{\exists}\xspace$ in the match $\pi'$. It is immediate that any next move of $\ensuremath{\forall}\xspace$ in $\pi'$ can be mirrored by the same move in $\pi$, meaning that we are able to maintain the same position, and hence the relation $(\ddag.2)$, also in the next round. \end{itemize} In both cases, the suggestion of strategy $f'$ was a legitimate move for $\ensuremath{\exists}\xspace$ maintaining the relation $(\ddag)$ between the two matches for any next round $z_{i+1}$. It follows that $f'$ is a winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{G}$. \end{enumerate} \end{proof} \subsection{From formulas to automata} In this subsection we conclude the proof of Theorem~\ref{t:wmsoauto}. We first focus on the case of projection with respect to finite sets, which exploits our simulation result, Theorem~\ref{PROP_facts_finConstrwmso}. The definition of the projection construction is formulated more generally for parity automata, as it will later be applied to classes other than $\mathit{Aut}WC(\ensuremath{\mathrm{FOE}^{\infty}}\xspace)$. It clearly preserves the weakness and continuity conditions. \begin{definition}\label{DEF_fin_projection} Let $\mathstr{A} = \tup{A, \Delta, \Omega, a_I}$ be a parity automaton on alphabet $\p(\mathsf{P} \cup \{p\})$.
We define the automaton ${{\exists} p}.\mathstr{A} = \tup{A, \DeltaProj, \Omega, a_I}$ on alphabet $\p\mathsf{P}$ by putting \begin{equation*} \DeltaProj(a,c) \ \mathrel{:=} \ \Delta(a,c) \qquad \qquad \DeltaProj(Q,c) \ \mathrel{:=} \ \Delta(Q,c) \vee \Delta(Q,c\cup\{p\}). \end{equation*} The automaton ${{\exists} p}.\mathstr{A}$ is called the \emph{finitary projection construct of $\mathstr{A}$ over $p$}. \end{definition} \begin{lemma}\label{PROP_fin_projection} Let $\mathstr{A}$ be a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton on alphabet $\p (\mathsf{P} \cup \{p\})$. Then ${{\exists} p}.\mathstr{A}^{Fin}$ is a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton on alphabet $\p\mathsf{P}$ which satisfies $$\ensuremath{\mathsf{TMod}}({{\exists} p}.\mathstr{A}^{Fin}) = \exists^{\mathit{fin}} p.\ensuremath{\mathsf{TMod}}(\mathstr{A}).$$ \end{lemma} \begin{proof} Unraveling definitions, we need to show that for any tree $\mathstr{T} = \tup{T,R,\tscolors\colon \mathsf{P} \to \wp T,s_I}$: $${{\exists} p}.\mathstr{A}^{Fin} \text{ accepts } \mathstr{T} \text{ iff there is a finite $p$-variant $\mathstr{T}'$ of $\mathstr{T}$ such that $\mathstr{A}$ accepts $\mathstr{T}'$.} $$ For the direction from left to right, by the equivalence between $\mathstr{A}$ and $\mathstr{A}^{Fin}$ it suffices to show that if ${{\exists} p}.\mathstr{A}^{Fin}$ accepts $\mathstr{T}$ then there is a finite $p$-variant $\mathstr{T}'$ of $\mathstr{T}$ such that $\mathstr{A}^{Fin}$ accepts $\mathstr{T}'$. First, observe that the properties stated by Theorem~\ref{PROP_facts_finConstrwmso}, which hold for $\mathstr{A}^{Fin}$ by assumption, by construction hold for ${{\exists} p}.\mathstr{A}^{Fin}$ as well. Thus we can assume that the given winning strategy $f$ for $\ensuremath{\exists}\xspace$ in $\mathcal{G_{\exists}} = \mathcal{A}({{\exists} p}.\mathstr{A}^{Fin},\mathstr{T})@(a_I^{Fin},s_I)$ is functional and finitary in $\pow A$.
Functionality allows us to associate with each node $s$ either none or a unique state $Q_s \in \pow A$ such that $(Q_s,s)$ is winning for $\ensuremath{\exists}\xspace$. We now want to isolate the nodes that $f$ treats ``as if they were labeled with $p$''. For this purpose, let $V_{s}$ be the valuation suggested by $f$ from a position $(Q_s,s) \in \pow A \times T$. As $f$ is winning, $V_{s}$ makes $\DeltaProj(Q,\kappa(s))$ true in $\R{s}$. We define a $p$-variant $\mathstr{T}' = \tup{T,R,\tscolors'\colon \mathsf{P}\cup\{p\} \to \wp T,s_I}$ of $\mathstr{T}$ by defining $\kappa' \mathrel{:=} \kappa[p \mapsto X_{p}]$, that is, by colouring with $p$ all nodes in the following set: \begin{equation}\label{eq:X_p} X_p \mathrel{:=} \{s \in T\mid (\R{s},V_{s}) \models \Delta^{Fin}(Q_s,\kappa(s)\cup\{p\})\}. \end{equation} The fact that $f$ is finitary in $\pow A$ guarantees that $X_p$ is finite, whence $\mathstr{T}'$ is a finite $p$-variant. It remains to show that $\mathstr{A}^{Fin}$ accepts $\mathstr{T}'$: we claim that $f$ itself is winning for $\ensuremath{\exists}\xspace$ in $\mathcal{G} = \mathcal{A}(\mathstr{A}^{Fin},\mathstr{T}')@(a_I^{Fin},s_I)$. In order to see that, let us construct in stages an $f$-guided match $\pi$ of $\mathcal{G}$ and an $f$-guided shadow match $\tilde{\pi}$ of $\mathcal{G_{\exists}}$. The inductive hypothesis we want to bring from one round to the next is that the same basic position occurs in both matches, as this suffices to prove that $f$ is winning for $\ensuremath{\exists}\xspace$ in $\mathcal{G}$. First we consider the case of a basic position $(Q,s) \in A^{Fin} \times T$ where $Q \in \pow A$. By assumption $f$ provides a valuation $V_s$ that makes $\DeltaProj(Q,\tscolors(s))$ true in $\R{s}$. Thus $V_s$ verifies either $\Delta^{Fin}(Q,\tscolors(s))$ or $\Delta^{Fin}(Q,\tscolors(s)\cup \{p\})$. Now, the match $\pi$ is played on the $p$-variant $\mathstr{T}'$, where the labeling $\tscolors'(s)$ is decided by the membership of $s$ in $X_p$.
According to \eqref{eq:X_p}, if $V_s$ verifies $\Delta^{Fin}(Q,\tscolors(s)\cup \{p\})$ then $s$ is in $X_p$, meaning that it is labeled with $p$ in $\mathstr{T}'$, i.e. $\tscolors'(s) = \tscolors(s)\cup \{p\}$. Therefore $V_s$ also verifies $\Delta^{Fin}(Q,\tscolors'(s))$ and it is a legitimate move for $\ensuremath{\exists}\xspace$ in match $\pi$. In the remaining case, $V_s$ verifies $\Delta^{Fin}(Q,\tscolors(s))$ but falsifies $\Delta^{Fin}(Q,\tscolors(s)\cup \{p\})$, implying by definition that $s$ is not in $X_p$. This means that $s$ is not labeled with $p$ in $\mathstr{T}'$, i.e. $\tscolors'(s) = \tscolors(s)$. Thus again $V_s$ verifies $\Delta^{Fin}(Q,\tscolors'(s))$ and it is a legitimate move for $\ensuremath{\exists}\xspace$ in match $\pi$. It remains to consider the case of a basic position $(a,s) \in A^{Fin} \times T$ with $a \in A$ a state. By definition $\DeltaProj(a,\tscolors(s))$ is just $\Delta^{Fin}(a,\tscolors(s))$. As $(a,s)$ is winning, we can assume that no position $(Q,s)$ with $Q$ a macro-state is winning according to the same $f$, as making $\DeltaProj$-sentences true never forces $\ensuremath{\exists}\xspace$ to mark a node both with a state and a macro-state. Therefore, $s$ is not in $X_p$ either, meaning that it is not labeled with $p$ in the $p$-variant $\mathstr{T}'$ and thus $\tscolors'(s) = \tscolors(s)$. This implies that $f$ makes $\Delta^{Fin}(a,\tscolors'(s)) = \Delta^{Fin}(a,\tscolors(s))$ true in $\R{s}$ and its suggestion is a legitimate move for $\ensuremath{\exists}\xspace$ in match $\pi$. In order to conclude the proof, observe that for all positions that we consider the same valuation is suggested to $\ensuremath{\exists}\xspace$ in both games: this means that any next position that is picked by player $\ensuremath{\forall}\xspace$ in $\pi$ is also available for $\ensuremath{\forall}\xspace$ in the shadow match $\tilde{\pi}$. We now show the direction from right to left of the statement.
Let $\mathstr{T}'$ be a finite $p$-variant of $\mathstr{T}$, with labeling function $\kappa'$, and $g$ a winning strategy for $\exists$ in $\mathcal{G} = \mathcal{A}(\mathstr{A},\mathstr{T}')@(a_I,s_I)$. Our goal is to define a strategy $g'$ for $\exists$ in $\mathcal{G_{\exists}}$. As usual, $g'$ will be constructed in stages, while playing a match $\pi'$ in $\mathcal{G_{\exists}}$. In parallel to $\pi'$, a \emph{bundle} $\mathcal{B}$ of $g$-guided shadow matches in $\mathcal{G}$ is maintained, with the following condition enforced for each round $z_i$:
\begin{center}
\fbox{\parbox{13.5cm}{
\begin{enumerate}
\item If the current basic position in $\pi'$ is of the form $(Q,s) \in \pow A \times T$, then for each $a \in Q$ there is a $g$-guided (partial) shadow match $\pi_a$ at basic position $(a,s) \in A\times T$ in the current bundle $\mathcal{B}_i$. Also, either $\mathstr{T}'_s$ is not $p$-free (i.e., it does contain a node $s'$ with $p \in \kappa'(s')$) or $s$ has some sibling $t$ such that $\mathstr{T}'_t$ is not $p$-free.
\item Otherwise, the current basic position in $\pi'$ is of the form $(a,s) \in A \times T$ and $\mathstr{T}'_s$ is $p$-free. Also, the bundle $\mathcal{B}_i$ only consists of a single $g$-guided match $\pi_a$ whose current basic position is also $(a,s)$.
\end{enumerate}
}}\hspace*{0.3cm}($\ddag$)
\end{center}
We recall the idea behind ($\ddag$). Point ($\ddag.1$) describes the part of match $\pi'$ where it is still possible to encounter nodes which are labeled with $p$ in $\mathstr{T}'$. As $\DeltaProj$ only takes the letter $p$ into account when defined on macro-states in $\pow A$, we want $\pi'$ to visit only positions of the form $(Q,s) \in \pow A \times T$ in that situation. Anytime we visit such a position $(Q,s)$ in $\pi'$, the role of the bundle is to provide one $g$-guided shadow match at position $(a,s)$ for each $a \in Q$. Then $g'$ is defined in terms of what $g$ suggests from those positions.
Point ($\ddag.2$) describes how we want the match $\pi'$ to be played on a $p$-free subtree: as any node that one might encounter has the same label in $\mathstr{T}$ and $\mathstr{T}'$, it is safe to let ${Finexists p}.\mathstr{A}^{Fin}$ behave as $\mathstr{A}$ in such a situation. Provided that the two matches visit the same basic positions, of the form $(a,s)\in A \times T$, we can let $g'$ just copy $g$. The key observation is that, as $\mathstr{T}'$ is a \emph{finite} $p$-variant of $\mathstr{T}$, nodes labeled with $p$ are reachable only for finitely many rounds of $\pi'$. This means that, provided that ($\ddag$) holds at each round, ($\ddag.1$) will describe an initial segment of $\pi'$, whereas ($\ddag.2$) will describe the remaining part. Thus our proof that $g'$ is a winning strategy for $\exists$ in $\mathcal{G}_{\exists}$ is concluded by showing that ($\ddag$) holds at each stage of the construction of $\pi'$ and $\mathcal{B}$. For this purpose, we initialize $\pi'$ from position $(a_{I}^{\sharp},s_I) \in \pow A\times T$ and the bundle $\mathcal{B}$ as $\mathcal{B}_0 = \{\pi_{a_I}\}$, with $\pi_{a_I}$ the partial $g$-guided match consisting only of the position $(a_I,s_I)\in A\times T$. The situation described by ($\ddag .1$) holds at the initial stage of the construction. Inductively, suppose that at round $z_i$ we are given a position $(q,s) \in A^{F} \times T$ in $\pi'$ and a bundle $\mathcal{B}_i$ as in ($\ddag$). To show that ($\ddag$) can be maintained at round $z_{i+1}$, we distinguish two cases, corresponding respectively to situations ($\ddag.1$) and ($\ddag.2$) holding at round $z_i$.
\begin{enumerate}[label = (\Alph*), ref = \Alph*]
\item If $(q,s)$ is of the form $(Q,s) \in \pow A \times T$, by inductive hypothesis we are given $g$-guided shadow matches $\{\pi_a\}_{a \in Q}$ in $\mathcal{B}_i$. For each match $\pi_a$ in the bundle, we are provided with a valuation $V_{a,s}: A \rightarrow \p (\R{s})$ making $\Delta(a,\kappa'(s))$ true.
Then we further distinguish the following two cases.
\begin{enumerate}[label = (\roman*), ref = \roman*]
\item \label{point:TsNotPFree} Suppose first that $\mathstr{T}'_s$ is not $p$-free. We let the suggestion $V' \: A^{F} \to \p (\R{s})$ of $g'$ from position $(Q,s)$ be defined as follows:
\begin{align*}
V'(q') \mathrel{:=}
\begin{cases}
\bigcap\limits_{\substack{(a,b) \in q',\\ a \in Q}}\{t \in \R{s} \mid t \in V_{a,s}(b)\} & q' \in \pow A \\[2em]
\bigcup\limits_{a \in Q} \{t \in \R{s} \mid t \in V_{a,s}(q') \text{ and }\mathstr{T}'_t\text{ is $p$-free}\} & q' \in A.
\end{cases}
\end{align*}
The definition of $V'$ on $q' \in \pow A$ is standard (\emph{cf.}~\cite[Prop. 2.21]{Zanasi:Thesis:2012}) and guarantees a correspondence between the states assigned by the valuations $\{V_{a,s}\}_{a \in Q}$ and the macro-states assigned by $V'$. The definition of $V'$ on $q' \in A$ aims at fulfilling the conditions, expressed via $\ensuremath{\exists^\infty}\xspace$ and $\ensuremath{\ensuremath{\mathrm{FO}}\xspacerall^\infty}\xspace$, on the number of nodes in $\R{s}$ witnessing (or not) some $A$-types. Those conditions are the ones that $\Delta^{\sharp}(Q,\kappa'(s))$ --and thus also $\Delta^{F}(Q,\kappa'(s))$-- ``inherits'' from $\bigwedge_{a \in Q} \Delta(a,\kappa'(s))$, by definition of $\Delta^{\sharp}$. Notice that we restrict $V'(q')$ to the nodes $t \in V_{a,s}(q')$ such that $\mathstr{T}'_t$ is $p$-free. As $\mathstr{T}'$ is a \emph{finite} $p$-variant, only \emph{finitely many} nodes in $V_{a,s}(q')$ will not have this property. Therefore their exclusion, which is crucial for maintaining condition ($\ddag$) (\emph{cf.}~case \eqref{point:ddag2CardfromMacro} below), does not affect the fulfilment of the cardinality conditions expressed via $\ensuremath{\exists^\infty}\xspace$ and $\ensuremath{\ensuremath{\mathrm{FO}}\xspacerall^\infty}\xspace$ in $\Delta^{\sharp}(Q,\kappa'(s))$.
On the basis of these observations, one can check that $V'$ makes $\Delta^{\sharp}(Q,\kappa'(s))$ --and thus also $\Delta^{F}(Q,\kappa'(s))$-- true in $\R{s}$. In fact, to be a legitimate move for $\exists$ in $\pi'$, $V'$ should make $\DeltaProj(Q,\kappa(s))$ true: this is the case, for $\Delta^{F}(Q,\kappa'(s))$ is either equal to $\Delta^{F}(Q,\kappa(s))$, if $p \not\in \kappa'(s)$, or to $\Delta^{F}(Q,\kappa(s)\cup\{p\})$ otherwise. In order to check that we can maintain $(\ddag)$, let $(q',t) \in A^{F} \times T$ be any next position picked by $\ensuremath{\mathrm{FO}}\xspacerall$ in $\pi'$ at round $z_{i+1}$. As before, we distinguish two cases:
\begin{enumerate}[label = (\alph*), ref = \alph*]
\item If $q'$ is in $A$, then, by definition of $V'$, $\ensuremath{\mathrm{FO}}\xspacerall$ can choose $(q',t)$ in some shadow match $\pi_a$ in the bundle $\mathcal{B}_i$. We dismiss the bundle --i.e. make it a singleton-- and bring only $\pi_a$ to the next round in the same position $(q',t)$. Observe that, by definition of $V'$, $\mathstr{T}'_t$ is $p$-free and thus ($\ddag.2$) holds at round $z_{i+1}$. \label{point:ddag2CardfromMacro}
\item Otherwise, $q'$ is in $\pow A$. The new bundle $\mathcal{B}_{i+1}$ is given in terms of the bundle $\mathcal{B}_i$: for each $\pi_a \in \mathcal{B}_i$ with $a\in Q$, we check whether for some $b \in q'$ the position $(b,t)$ is a legitimate move for $\ensuremath{\mathrm{FO}}\xspacerall$ at round $z_{i+1}$; if so, then we bring $\pi_a$ to round $z_{i+1}$ at position $(b,t)$ and put the resulting (partial) shadow match $\pi_b$ in $\mathcal{B}_{i+1}$. Observe that, if $\ensuremath{\mathrm{FO}}\xspacerall$ is able to pick such a position $(q',t)$ in $\pi'$, then by definition of $V'$ the new bundle $\mathcal{B}_{i+1}$ is non-empty and consists of a $g$-guided (partial) shadow match $\pi_b$ for each $b \in q'$. In this way we are able to maintain condition ($\ddag.1$) at round $z_{i+1}$.
\end{enumerate}
\item Let us now consider the case in which $\mathstr{T}'_s$ is $p$-free. We let $g'$ suggest the valuation $V'$ that assigns to each node $t \in \R{s}$ all states in $\bigcup_{a \in Q}\{b \in A\ |\ t \in V_{a,s}(b)\}$. It can be checked that $V'$ makes $\bigwedge_{a \in Q} \Delta(a,\kappa'(s))$ -- and thus also $\Delta^{F}(Q,\kappa'(s))$ -- true in $\R{s}$. As $p \not\in \kappa(s)=\kappa'(s)$, it follows that $V'$ also makes $\DeltaProj(Q,\kappa(s))$ true, whence it is a legitimate choice for $\exists$ in $\pi'$. Any next basic position picked by $\ensuremath{\mathrm{FO}}\xspacerall$ in $\pi'$ is of the form $(b,t) \in A \times T$, and thus condition ($\ddag.2$) holds at round $z_{i+1}$ as shown in (i.a).
\end{enumerate}
\item In the remaining case, $(q,s)$ is of the form $(a,s) \in A \times T$ and by inductive hypothesis we are given a bundle $\mathcal{B}_i$ consisting of a single $g$-guided (partial) shadow match $\pi_a$ at the same position $(a,s)$. Let $V_{a,s}$ be the suggestion of $g$ from position $(a,s)$ in $\pi_a$. Since by assumption $\mathstr{T}'_s$ is $p$-free, we have that $\kappa'(s) = \kappa(s)$, meaning that $\DeltaProj(a,\kappa(s))$ is just $\Delta(a,\kappa(s)) = \Delta(a,\kappa'(s))$. Thus $V_{a,s}$ makes $\Delta(a,\kappa'(s))$ true and we let it be the choice of $g'$ for $\exists$ in $\pi'$. It follows that any next move made by $\ensuremath{\mathrm{FO}}\xspacerall$ in $\pi'$ can be mirrored by $\ensuremath{\mathrm{FO}}\xspacerall$ in the shadow match $\pi_a$.
\end{enumerate}
\end{proof}

\subsubsection{Closure under Boolean operations}
Here we show that the collection of $\mathit{Aut}(\ensuremath{\mathrm{WMSO}}\xspace)$-recognizable classes of tree models is closed under the Boolean operations. For union, we use the following result, leaving the straightforward proof as an exercise to the reader.
\begin{lemma}
\label{t:cl-dis}
Let $\mathstr{A}_{0}$ and $\mathstr{A}_{1}$ be $\ensuremath{\mathrm{WMSO}}\xspace$-automata. Then there is a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}$ such that $\ensuremath{\mathsf{TMod}}(\mathstr{A})$ is the union of $\ensuremath{\mathsf{TMod}}(\mathstr{A}_{0})$ and $\ensuremath{\mathsf{TMod}}(\mathstr{A}_{1})$.
\end{lemma}

For closure under complementation we reuse the general results established in Section \ref{sec:parityaut} for parity automata.

\begin{lemma}
\label{t:cl-cmp}
Let $\mathstr{A}$ be a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton. Then the automaton $\overline{\mathstr{A}}$ defined in Definition~\ref{d:caut} is a $\ensuremath{\mathrm{WMSO}}\xspace$-automaton recognizing the complement of $\ensuremath{\mathsf{TMod}}(\mathstr{A})$.
\end{lemma}

\begin{proof}
It suffices to check that Proposition \ref{prop:autcomplementation} restricts to the class $\mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)$ of $\ensuremath{\mathrm{WMSO}}\xspace$-automata. First, the fact that $\ensuremath{Fo_1}\xspaceei$ is closed under Boolean duals (Definition~\ref{def:concreteduals}) implies that the proposition holds for the class $\mathit{Aut}(\ensuremath{Fo_1}\xspaceei)$. It then remains to check that the dual automata construction $\overline{(\cdot)}$ preserves weakness and continuity. But this is straightforward, given the self-dual nature of these properties.
\end{proof}

We are now finally able to conclude the direction from formulas to automata of the characterisation theorem.

\begin{proof}[of Theorem \ref{t:wmsoauto}]
The proof is by induction on $\varphi$.
\begin{itemize}
\item For the base case, we consider the atomic formulas $\here{p}$, $p \sqsubseteq q$ and $R(p,q)$.
\begin{itemize}
\item The $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}_{\here{p}} = \tup{A,\Delta,\Omega,a_I}$ is given by putting
\begin{eqnarray*}
A \ \mathrel{: =} \ \{a_0,a_1\} \qquad \qquad a_I \ \mathrel{: =} \ a_0 \qquad \qquad \Omega(a_0) \ \mathrel{: =} \ 0 \qquad \qquad \Omega(a_1) \ \mathrel{: =} \ 0
\\
\Delta(a_0,c) \ \mathrel{: =} \ \left\{
\begin{array}{ll}
\ensuremath{\mathrm{FO}}\xspacerall x. a_1(x) & \mbox{if } p \in c \\
\bot & \mbox{otherwise.}
\end{array} \right.
\qquad
\Delta(a_1,c) \ \mathrel{: =} \ \left\{
\begin{array}{ll}
\ensuremath{\mathrm{FO}}\xspacerall x. a_1(x) & \mbox{if } p \not\in c \\
\bot & \mbox{otherwise.}
\end{array} \right.
\end{eqnarray*}
\item The $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}_{p\sqsubseteq q} = \tup{A,\Delta,\Omega,a_I}$ is given by $A \mathrel{:=} \{ a \}$, $a_{I} \mathrel{:=} a$, $\Omega(a) \mathrel{:=} 0$ and $\Delta(a,c) \mathrel{:=} \ensuremath{\mathrm{FO}}\xspacerall x\, a(x)$ if $p \not\in c$ or $q\in c$, and $\Delta(a,c) \mathrel{:=} \bot$ otherwise.
\item The $\ensuremath{\mathrm{WMSO}}\xspace$-automaton $\mathstr{A}_{R(p,q)} = \tup{A,\Delta,\Omega,a_I}$ is given below:
\begin{eqnarray*}
A \ \mathrel{: =} \ \{a_0,a_1\} \qquad \qquad a_I \ \mathrel{: =} \ a_0 \qquad \qquad \Omega(a_0) \ \mathrel{: =} \ 0 \qquad \qquad \Omega(a_1) \ \mathrel{: =} \ 1
\\
\Delta(a_0,c) \ \mathrel{: =} \ \left\{
\begin{array}{ll}
\exists x. a_1(x) \wedge \ensuremath{\mathrm{FO}}\xspacerall y. a_0(y) & \mbox{if }p \in c \\
\ensuremath{\mathrm{FO}}\xspacerall x. a_0(x) & \mbox{otherwise.}
\end{array} \right.
\qquad
\Delta(a_1,c) \ \mathrel{: =} \ \left\{
\begin{array}{ll}
\top & \mbox{if }q \in c \\
\bot & \mbox{otherwise}
\end{array} \right.
\end{eqnarray*}
\end{itemize}
\item For the Boolean cases, where $\varphi = \psi_1 \vee \psi_2$ or $\varphi = \neg\psi$, we refer to the Boolean closure properties that we just established in Lemmas~\ref{t:cl-dis} and~\ref{t:cl-cmp}, respectively.
\item The case $\varphi = \exists p. \psi$ follows from the following chain of equivalences, where $\mathstr{A}_{\psi}$ is given by the inductive hypothesis and ${Finexists p}.\mathstr{A}_{\psi}$ is constructed according to Definition~\ref{DEF_fin_projection}:
\begin{alignat*}{2}
{Finexists p}.\mathstr{A}_{\psi} \text{ accepts }\mathbb{T} & \text{ iff } \mathstr{A}_{\psi} \text{ accepts } \mathbb{T}[p \mapsto X], \text{ for some } X \subseteq_{\omega} T & \quad\text{(Lemma~\ref{PROP_fin_projection})} \\
& \text{ iff } \mathbb{T}[p \mapsto X] \models \psi, \text{ for some } X \subseteq_{\omega} T & \quad\text{(induction hyp.)} \\
& \text{ iff } \mathbb{T} \models \exists p. \psi & \quad\text{(semantics $\ensuremath{\mathrm{WMSO}}\xspace$)}
\end{alignat*}
\end{itemize}
\end{proof}

\section{Automata for $\ensuremath{\mathrm{NMSO}}\xspace$}
\label{sec:autnmso}
In this section we introduce the automata that capture $\ensuremath{\mathrm{NMSO}}\xspace$.
\begin{definition}
An \emph{$\ensuremath{\mathrm{NMSO}}\xspace$-automaton} is a weak automaton for the one-step language $\ensuremath{Fo_1}\xspacee$.
\end{definition}
Analogously to the previous section, our main goal here is to construct an equivalent $\ensuremath{\mathrm{NMSO}}\xspace$-automaton for every $\ensuremath{\mathrm{NMSO}}\xspace$-formula.
\begin{theorem}
\label{t:nmsoauto}
There is an effective construction transforming an $\ensuremath{\mathrm{NMSO}}\xspace$-formula $\varphi$ into an $\ensuremath{\mathrm{NMSO}}\xspace$-automaton $\mathstr{A}_{\varphi}$ that is equivalent to $\varphi$ on the class of trees.
\end{theorem}
The proof for Theorem \ref{t:nmsoauto} will closely follow the steps for proving the analogous result for $\ensuremath{\mathrm{WMSO}}\xspace$ (Theorem \ref{t:wmsoauto}). Again, the crux of the matter is to show that the collection of classes of tree models that are recognisable by some $\ensuremath{\mathrm{NMSO}}\xspace$-automaton is closed under the relevant notion of projection. Where this was finitary projection for $\ensuremath{\mathrm{WMSO}}\xspace$ (Def. \ref{def:tree_finproj-w}), the notion mimicking $\ensuremath{\mathrm{NMSO}}\xspace$-quantification is \emph{noetherian} projection.
\begin{definition}\label{def:tree_finproj-n}
Given a set $\mathsf{P}$ of proposition letters, $p \not\in \mathsf{P}$ and a class $\mathsf{C}$ of $\mathsf{P}\cup\{p\}$-labeled trees, we define the \emph{noetherian projection} of $\mathsf{C}$ over $p$ as the language of $\mathsf{P}$-labeled trees given as
$$ {\scriptscriptstyle N}exists p.\mathsf{C} \mathrel{: =} \{\mathstr{T} \mid \text{ there is a noetherian $p$-variant } \mathstr{T}' \text{ of } \mathstr{T} \text{ with } \mathstr{T}' \in \mathsf{C}\}. $$
A collection of classes of tree models is \emph{closed under noetherian projection over $p$} if it contains the class ${{{\scriptscriptstyle N}exists} p}.\mathsf{C}$ whenever it contains the class $\mathsf{C}$ itself.
\end{definition}

\subsection{Simulation theorem for $\ensuremath{\mathrm{NMSO}}\xspace$-automata}\label{sec:simulation_nmso}
Just as for $\ensuremath{\mathrm{WMSO}}\xspace$-automata, also for $\ensuremath{\mathrm{NMSO}}\xspace$-automata the projection construction will rely on a simulation theorem, constructing a two-sorted automaton $\mathstr{A}^{{\scriptscriptstyle N}}$ consisting of a copy of the original automaton, based on states $A$, and a variation of its powerset construction, based on macro-states $\pow A$.
For any accepted $\mathstr{T}$, we want any match $\pi$ of $\mathcal{A}(\mathstr{A}^{{\scriptscriptstyle N}},\mathstr{T})$ to split in two parts:
\begin{description}
\item[(\textit{Non-deterministic mode})] For finitely many rounds $\pi$ is played on macro-states, i.e. positions are of the form $\pow A \times T$. The strategy of player $\exists$ is functional in $\pow A$, i.e. it assigns \emph{at most one macro-state} to each node.
\item[(\textit{Alternating mode})] At a certain round, $\pi$ abandons macro-states and turns into a match of the game $\mathcal{A}(\mathstr{A},\mathstr{T})$, i.e. all next positions are from $A \times T$ (and are played according to a not necessarily functional strategy).
\end{description}
The only difference with the two-sorted construction for $\ensuremath{\mathrm{WMSO}}\xspace$-automata is that, in the non-deterministic mode, the cardinality of the set of nodes to which $\exists$'s strategy assigns macro-states is irrelevant. Indeed, the finiteness in $\ensuremath{\mathrm{NMSO}}\xspace$ concerns only the vertical dimension: assigning an odd priority to macro-states will suffice to guarantee that the non-deterministic mode processes just a well-founded portion of any accepted tree. We now proceed in steps towards the construction of $\mathstr{A}^{{\scriptscriptstyle N}}$. First, the following lifting from states to macro-states parallels Definition \ref{DEF_finitary_lifting}, but for the one-step language $\ensuremath{Fo_1}\xspacee$ of $\ensuremath{\mathrm{NMSO}}\xspace$-automata. It is based on the basic form for $\ensuremath{Fo_1}\xspacee$-formulas, see Definition \ref{def:basicform-ofoe}.
\begin{definition}\label{DEF_noetherian_lifting}
Let $\varphi \in {\ensuremath{Fo_1}\xspacee}^+(A)$ be of shape $\posdbnfofoe{\vlist{T}}{\Pi}$ for some $\Pi \subseteq \pow A$ and $\vlist{T} = \{T_1,\dots,T_k\} \subseteq \pow A$.
We define $\varphi^{{\scriptscriptstyle N}}$ as $\posdbnfofoe{\lift{\vlist{T}}}{\lift{\Pi}} \in {\ensuremath{Fo_1}\xspacee}^+(\pow A )$, that is,
\begin{equation}\label{eq:unfoldingNablaofoe}
\varphi^{{\scriptscriptstyle N}} \ \mathrel{: =} \ \exists \vlist{x}.\big(\arediff{\vlist{x}} \land \bigwedge_{1 \leq i \leq k} \tau^+_{\lift{T}_i}(x_i) \land \ensuremath{\mathrm{FO}}\xspacerall z.(\arediff{\vlist{x},z} \to \bigvee_{S\in \lift{\Pi} } \tau^+_S(z))\big)
\end{equation}
\end{definition}
It is instructive to compare \eqref{eq:unfoldingNablaofoe} with its $\ensuremath{\mathrm{WMSO}}\xspace$-counterpart \eqref{eq:unfoldingNablaolque}: the difference is that, because the quantifiers $\ensuremath{\exists^\infty}\xspace$ and $\ensuremath{\ensuremath{\mathrm{FO}}\xspacerall^\infty}\xspace$ are missing, the sentence does not impose any cardinality requirement, but only enforces $\pow A$-separability --- \emph{cf.} Section \ref{sec:onestep-short}.
\begin{lemma}\label{lemma:automatafunctionalsentence}
Let $\varphi \in {\ensuremath{Fo_1}\xspacee}^+(A)$ and $\varphi^{{\scriptscriptstyle N}}\in {\ensuremath{Fo_1}\xspacee}^+(\pow A )$ be as in Definition~\ref{DEF_noetherian_lifting}. Then $\varphi^{{\scriptscriptstyle N}}$ is separating in $\pow A$.
\end{lemma}
\begin{proof}
Each element of $\lift{\vlist{T}}$ and $\lift{\Pi}$ is by definition either the empty set or a singleton $\{Q\}$ for some $Q \in \pow A$. Then the statement follows from Proposition~\ref{p:sep}.
\end{proof}
We are now ready to define the transition function for macro-states. The following adapts Definition \ref{PROP_DeltaPowerset} to the one-step language $\ensuremath{Fo_1}\xspacee$ of $\ensuremath{\mathrm{NMSO}}\xspace$-automata, and its normal form result, Theorem~\ref{t:osnf}.
\begin{definition}\label{PROP_DeltaPowerset_noet}
Let $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ be an $\ensuremath{\mathrm{NMSO}}\xspace$-automaton. Fix any $c \in C$ and $Q \in \pow A$.
By Theorem~\ref{t:osnf} there is a sentence $\Psi_{Q,c} \in {\ensuremath{Fo_1}\xspacee}^+(A)$ in the basic form $\bigvee \nablaofoe{\vlist{T}}{\Pi}$, for some $\Pi \subseteq \pow A$ and $T_i \subseteq A$, such that $\bigwedge_{a \in Q} \Delta(a,c) \equiv \Psi_{Q,c}$. By definition, $\Psi_{Q,c} = \bigvee_{n}\varphi_n$, with each $\varphi_{n}$ of shape $\nablaofoe{\vlist{T}}{\Pi}$. We put $\Delta^{\flat}(Q,c) \mathrel{:=} \bigvee_{n}\varphi_n^{{\scriptscriptstyle N}} \in {\ensuremath{Fo_1}\xspacee}^+(\pow A)$, where the translation $(\cdot)^{{\scriptscriptstyle N}}$ is as in Definition \ref{DEF_noetherian_lifting}.
\end{definition}
\noindent We now have all the ingredients for the two-sorted construction over $\ensuremath{\mathrm{NMSO}}\xspace$-automata.
\begin{definition}\label{def:noetherianconstruct}
Let $\mathstr{A} = \tup{A,\Delta,\Omega,a_I}$ be an {\ensuremath{\mathrm{NMSO}}\xspace-automaton}. We define the \emph{noetherian construct over $\mathstr{A}$} as the automaton $\mathstr{A}^{{\scriptscriptstyle N}} = \tup{A^{{\scriptscriptstyle N}},\Delta^{{\scriptscriptstyle N}},\Omega^{{\scriptscriptstyle N}},a_I^{{\scriptscriptstyle N}}}$ given by
\[
\begin{array}{lll}
A^{{\scriptscriptstyle N}} &\mathrel{: =}& A \cup \pow A \\
a_I^{{\scriptscriptstyle N}} &\mathrel{: =}& \{a_I\}
\end{array}
\hspace*{5mm}
\begin{array}{lll}
\Omega^{{\scriptscriptstyle N}}(a) &\mathrel{: =}& \Omega(a) \\
\Omega^{{\scriptscriptstyle N}}(Q) &\mathrel{: =}& 1
\end{array}
\hspace*{5mm}
\begin{array}{lll}
\Delta^{{\scriptscriptstyle N}}(a,c) &\mathrel{: =}& \Delta(a,c) \\
\Delta^{{\scriptscriptstyle N}}(Q,c) &\mathrel{: =}& \Delta^{\flat}(Q,c) \vee \bigwedge_{a \in Q} \! \! \Delta(a,c).
\end{array}
\]
\end{definition}
The construction is the same as the one for $\ensuremath{\mathrm{WMSO}}\xspace$-automata (Definition \ref{def:finitaryconstruct}) except for the definition of the transition function for macro-states, which is now free of any cardinality requirement.
\begin{definition}\label{def:noetherianstrategy}
We say that a strategy $f$ in an acceptance game $\mathcal{A}(\mathstr{A},\mathstr{T})$ is \emph{noetherian} in $B \subseteq A$ when in any $f$-guided match there can be only finitely many rounds played at a position of shape $(q,s)$ with $q \in B$.
\end{definition}
\begin{theorem}[Simulation Theorem for $\ensuremath{\mathrm{NMSO}}\xspace$-automata]
\label{PROP_facts_noetConstr}
Let $\mathstr{A}$ be an $\ensuremath{\mathrm{NMSO}}\xspace$-automaton and $\mathstr{A}^{{\scriptscriptstyle N}}$ its noetherian construct.
\begin{enumerate}[(1)]
\itemsep 0 pt
\item \label{point:finConstrAut-n} $\mathstr{A}^{{\scriptscriptstyle N}}$ is an $\ensuremath{\mathrm{NMSO}}\xspace$-automaton.
\item \label{point:finConstrStrategy-n} For any $\mathstr{T}$, if $\ensuremath{\exists}\xspace$ has a winning strategy in $\mathcal{A}(\mathstr{A}^{{\scriptscriptstyle N}}, \mathstr{T})$ from position $(a_I^{{\scriptscriptstyle N}},s_I)$ then she has one that is functional in $\pow A$ and noetherian in $\pow A$.
\item $\mathstr{A} \equiv \mathstr{A}^{{\scriptscriptstyle N}}$. \label{point:finConstrEquiv-n}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof follows the same steps as the one of Proposition \ref{PROP_facts_finConstrwmso}, minus all the concerns about continuity of the constructed automaton and about any associated winning strategy $f$ being finitary. One still has to show that $f$ is noetherian in $\pow A$ (``vertically finitary''), but this is enforced by macro-states having an odd parity: visiting one of them infinitely often would mean $\exists$'s loss.
\end{proof}
\begin{remark}
As mentioned, the class $\mathit{Aut}(\ensuremath{Fo_1}\xspacee)$ of automata characterising $\ensuremath{\mathrm{SMSO}}\xspace$ \cite{Jan96} also enjoys a simulation theorem \cite{Walukiewicz96}, turning any automaton into an equivalent non-deterministic one. Given that the class $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$ differs only in the weakness constraint, one may wonder whether the simulation result for $\mathit{Aut}(\ensuremath{Fo_1}\xspacee)$ could not actually be restricted to $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$, making our two-sorted construction redundant. This is actually not the case: not only does Walukiewicz's simulation theorem \cite{Walukiewicz96} fail to preserve the weakness constraint, but even without this failure our purposes would not be served: a fully non-deterministic automaton is instrumental in guessing a $p$-variant of any accepted tree, but it does not guarantee that the $p$-variant is also noetherian, as the two-sorted construct does.
\end{remark}

\subsection{From formulas to automata}
We can now conclude one direction of the automata characterisation of $\ensuremath{\mathrm{NMSO}}\xspace$.
\begin{lemma}\label{PROP_noet_projection}
For each $\ensuremath{\mathrm{NMSO}}\xspace$-automaton $\mathstr{A}$ on alphabet $\p (\mathsf{P} \cup \{p\})$, let $\mathstr{A}^{{\scriptscriptstyle N}}$ be its noetherian construct. We have that
$$\ensuremath{\mathsf{TMod}}({{\exists} p}.\mathstr{A}^{{\scriptscriptstyle N}}) \ \equiv\ {{{\scriptscriptstyle N}exists} p}.\ensuremath{\mathsf{TMod}}(\mathstr{A}). $$
\end{lemma}
\begin{proof}
The argument is the same as for $\ensuremath{\mathrm{WMSO}}\xspace$-automata (Lemma \ref{PROP_fin_projection}).
As in that proof, the inclusion from left to right relies on the simulation result (Theorem \ref{PROP_facts_noetConstr}): ${{\exists} p}.\mathstr{A}^{{\scriptscriptstyle N}}$ is two-sorted and its non-deterministic mode can be used to guess a noetherian $p$-variant of any accepted tree.
\end{proof}
\begin{proof}[of Theorem \ref{t:nmsoauto}]
As for its $\ensuremath{\mathrm{WMSO}}\xspace$-counterpart Theorem \ref{t:wmsoauto}, the proof is by induction on $\varphi \in \ensuremath{\mathrm{NMSO}}\xspace$. The Boolean inductive cases are handled by the $\ensuremath{\mathrm{NMSO}}\xspace$-versions of Lemmas \ref{t:cl-dis} and \ref{t:cl-cmp}. The projection case follows from Lemma~\ref{PROP_noet_projection}.
\end{proof}

\section{Fixpoint operators and second-order quantifiers}
\label{sec:fixpointToSO}
In this section we will show how to translate some of the $\mu$-calculi encountered so far into the appropriate second-order logics. Given the equivalence between automata and fixpoint logics that we established in Section~\ref{sec:parityaut}, and the embeddings of $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$ into, respectively, the automata classes $\mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)$ and $\mathit{Aut}W(\ensuremath{Fo_1}\xspacee)$ that we provided in Sections~\ref{sec:autwmso} and~\ref{sec:autnmso} for the class of tree models, the results here provide the missing link in the automata-theoretic characterizations of the monadic second-order logics $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{NMSO}}\xspace$:
\begin{eqnarray*}
\mu_{C}(\ensuremath{Fo_1}\xspaceei) \equiv \ensuremath{\mathrm{WMSO}}\xspace && \qquad \text{ (over the class of all tree models)} \\
\mu_{D}(\ensuremath{Fo_1}\xspacee) \equiv \ensuremath{\mathrm{NMSO}}\xspace && \qquad \text{ (over the class of all tree models)}.
\end{eqnarray*}

\subsection{Translating $\mu$-calculi into second-order logics}
More specifically, our aim in this section is to prove the following result.
\begin{theorem}
\label{t:mfl2mso}
\begin{enumerate}[(1)]
\item There is an effective translation $(\cdot)^{*}: \mu_{D}\ensuremath{Fo_1}\xspacee \to \ensuremath{\mathrm{NMSO}}\xspace$ such that $\varphi \equiv \varphi^{*}$ for every $\varphi \in \mu_{D}\ensuremath{Fo_1}\xspacee$; that is:
\[ \mu_{D}\ensuremath{Fo_1}\xspacee \leq \ensuremath{\mathrm{NMSO}}\xspace. \]
\item There is an effective translation $(\cdot)^{*}: \mu_{C}\ensuremath{Fo_1}\xspaceei \to \ensuremath{\mathrm{WMSO}}\xspace$ such that $\varphi \equiv \varphi^{*}$ for every $\varphi \in \mu_{C}\ensuremath{Fo_1}\xspaceei$; that is:
\[ \mu_{C}\ensuremath{Fo_1}\xspaceei \leq \ensuremath{\mathrm{WMSO}}\xspace. \]
\end{enumerate}
\end{theorem}
Two immediate observations on this theorem are in order. First, note that we use the same notation $(\cdot)^{*}$ for both translations; this should not cause any confusion since the maps agree on formulas belonging to their common domain. Consequently, in the remainder we will speak of a single translation $(\cdot)^{*}$. Second, as the target language of the translation $(\cdot)^{*}$ we will take the \emph{two-sorted} version of second-order logic, as discussed in Section~\ref{sec:prel-so}, and thus we will need Fact~\ref{fact:msovs2mso} to obtain the result as formulated in Theorem~\ref{t:mfl2mso}, that is, for the one-sorted versions of \ensuremath{\mathrm{MSO}}\xspace. We reserve a fixed individual variable $v$ for this target language, i.e., every formula of the form $\varphi^{*}$ will have this $v$ as its unique free variable; the equivalence $\varphi \equiv \varphi^{*}$ is to be understood accordingly. The translation $(\cdot)^{*}$ will be defined by a straightforward induction on the complexity of fixpoint formulas.
The two clauses of this definition that deserve some special attention are the ones related to the fixpoint operators and the modalities.

\paragraph{Fixpoint operators}
It is important to realise that our clause for the fixpoint operators \emph{differs} from the one used in the standard inductive translation $(\cdot)^{s}$ of $\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ into standard $\ensuremath{\mathrm{MSO}}\xspace$; following that clause, we would inductively translate $(\mu p. \varphi)^{*}$ as
\begin{equation} \label{eq:st}
\ensuremath{\mathrm{FO}}\xspacerall p\, \big( \ensuremath{\mathrm{FO}}\xspacerall w\, (\varphi^{*}[w/v] \to p(w)) \to p(v) \big),
\end{equation}
which states that $v$ belongs to every prefixpoint of $\varphi$ with respect to $p$. To understand the problem with this translation in the current context, suppose, for instance, that we want to translate some continuous $\mu$-calculus into $\ensuremath{\mathrm{WMSO}}\xspace$. Then the formula in \eqref{eq:st} expresses that $v$ belongs to the intersection of all \emph{finite} prefixpoints of $\varphi$, whereas the least fixpoint is identical to the intersection of \emph{all} prefixpoints. As a result, \eqref{eq:st} does not give the right translation of the formula $\mu p.\varphi$ into \ensuremath{\mathrm{WMSO}}\xspace. To overcome this problem, we will prove that least fixpoints in restricted calculi like $\mu_{D}\ensuremath{Fo_1}\xspacee$, $\mu_{C}\ensuremath{Fo_1}\xspaceei$ and many others in fact satisfy a rather special property, which enables an alternative translation. We need the following definition to formulate this property.
\begin{definition}
\label{d:rst}
Let $F: \wp(S)\to \wp(S)$ be a functional; for a given $X \subseteq S$ we define the \emph{restricted} map $F_{\rst{X}}: \wp(S)\to \wp(S)$ by putting $F_{\rst{X}}(Y) \mathrel{:=} FY \cap X$.
\end{definition}
The observations formulated in the proposition below provide the crucial insight underlying our embedding of various alternation-free and continuous $\mu$-calculi into, respectively, $\ensuremath{\mathrm{NMSO}}\xspace$ and $\ensuremath{\mathrm{WMSO}}\xspace$.
\begin{proposition}
\label{p:afmc-rstGen} \label{p:keyfix}
Let $\mathstr{S}$ be an LTS, and let $r$ be a point in $\mathstr{S}$.
\begin{enumerate}[(1)]
\item For any formula $\varphi$ with $\mu p. \varphi \in \mu_{D}\ensuremath{Fo_1}\xspacee$ we have
\begin{equation} \label{eq:foe-d}
r \in \ext{\mu p.\varphi}^{\mathstr{S}} \text{ iff there is a noetherian set $X$ such that } r \in \mathit{LFP}. (\varphi^{\mathstr{S}}_{p})_{\rst{X}}.
\end{equation}
\item For any formula $\varphi$ with $\mu p. \varphi \in \mu_{C}\ensuremath{Fo_1}\xspaceei$ we have
\begin{equation} \label{eq:foei-c}
r \in \ext{\mu p.\varphi}^{\mathstr{S}} \text{ iff there is a finite set $X$ such that } r \in \mathit{LFP}. (\varphi^{\mathstr{S}}_{p})_{\rst{X}}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{remark}
In fact, the statements in Proposition~\ref{p:keyfix} can be generalised to the setting of a fixpoint logic $\muL_{1}$ associated with an arbitrary one-step language $L_{1}$.
\end{remark}
The right-to-left direction of both \eqref{eq:foe-d} and \eqref{eq:foei-c} follows from the following, more general, statement, which can be proved by a routine transfinite induction argument.
\begin{proposition}
\label{p:rstfix}
Let $F: \wp(S)\to \wp(S)$ be monotone. Then for every subset $X \subseteq S$ it holds that $\mathit{LFP}. F_{\rst{X}}\subseteq \mathit{LFP}.F$.
\end{proposition}
The left-to-right direction of \eqref{eq:foe-d} and \eqref{eq:foei-c} will be proved in the next two sections. Note that in the continuous case we will in fact prove a slightly stronger result, which applies to \emph{arbitrary} continuous functionals.
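On finite carriers these propositions can be checked directly by Kleene iteration. The following small executable sketch is not from the paper: the helper names \texttt{lfp} and \texttt{restrict} are ours, and the carrier is a finite initial segment of $\omega$ standing in for an infinite one, so that the illustration of the ``finite $X$'' clause of \eqref{eq:foei-c} is only suggestive.

```python
def lfp(F):
    """Least fixpoint of a monotone operator F on finite sets, by Kleene iteration."""
    Y = frozenset()
    while True:
        Z = frozenset(F(Y))
        if Z == Y:
            return Y
        Y = Z

def restrict(F, X):
    """The restricted map F|X of Definition d:rst: (F|X)(Y) = F(Y) ∩ X."""
    return lambda Y: frozenset(F(Y)) & frozenset(X)

# Toy monotone operator on the carrier {0,...,19} (a finite stand-in for ω):
# F(Y) = {0} ∪ {n+1 | n ∈ Y}, whose least fixpoint is the whole carrier.
N = 20
F = lambda Y: {0} | {n + 1 for n in Y if n + 1 < N}

full = lfp(F)
assert full == frozenset(range(N))

# Proposition p:rstfix: the LFP of the restricted map is contained in LFP.F.
assert lfp(restrict(F, frozenset(range(5)))) <= full

# Flavour of the continuous case of Proposition p:keyfix: each point r of
# LFP.F already lies in LFP.(F|X) for the finite set X = {0,...,r}.
r = 7
assert r in lfp(restrict(F, frozenset(range(r + 1))))
```

Since $F$ here is continuous, every point of its least fixpoint is reached after finitely many iteration steps, which is exactly why a finite restriction set suffices.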
The point of Proposition~\ref{p:keyfix} is that it naturally suggests the following translation for the least fixpoint operator, as a subtle but important variation of \eqref{eq:st}: \begin{equation} \label{eq:trlmu} (\mu p. \varphi)^{*} \mathrel{: =} \exists q\,\Big(\forall p \subseteq q.\, \big(p \in \mathit{PRE}((\varphi^{\mathstr{S}}_{p})_{\rst{q}}) \to p(v)\big)\Big), \end{equation} where $p \in \mathit{PRE}((\varphi^{\mathstr{S}}_{p})_{\rst{q}})$ expresses that $p \subseteq q$ is a prefixpoint of the map $(\varphi^{\mathstr{S}}_{p})_{\rst{q}}$, that is: \[ p \in \mathit{PRE}((\varphi^{\mathstr{S}}_{p})_{\rst{q}}) \mathrel{: =} \forall w\, \Big( ( q(w) \land \varphi^{*}[w/v]) \to p(w) \Big). \] \paragraph{Modalities} Finally, before we can give the definition of the translation $(\cdot)^{*}$, we briefly discuss the clause involving the modalities. Here we need to understand the role of the \emph{one-step formulas} in the translation. First an auxiliary definition. \begin{definition}\label{def:exp} Let $\mathstr{S} = \tup{T,R,\kappa, s_I}$ be a $\mathsf{P}$-LTS, let $A$ be a set of new variables, and let $V: A \to \wp(X)$ be a valuation on a subset $X\subseteq T$. The $\mathsf{P}\cup A$-LTS $\mathstr{S}^{V}\mathrel{:=} \tup{T,R,\kappa^V, s_I}$, where the marking $\kappa^V: T \to \wp{(\mathsf{P} \cup A)}$ is given by \[\kappa^V(s)\mathrel{:=} \begin{cases} \kappa(s) & \text{ if } s \notin X \\ \kappa(s) \cup \{ a \in A \mid s \in V(a)\}& \text{ else,} \end{cases}\] is called the $V$-expansion of $\mathstr{S}$. \end{definition} The following proposition states that at the one-step level, the formulas that provide the semantics of the modalities of $\mu\ensuremath{Fo_1}\xspacee$ and $\mu\ensuremath{Fo_1}\xspaceei$ can indeed be translated into, respectively, $\ensuremath{\mathrm{NMSO}}\xspace$ and $\ensuremath{\mathrm{WMSO}}\xspace$.
\begin{proposition} \label{p:1trl} There is a translation $(\cdot)^{\dag}: \ensuremath{Fo_1}\xspaceei(A) \to \ensuremath{\mathrm{WMSO}}\xspace$ such that for every model $\mathstr{S}$ and every valuation $V: A \to \wp(R[s_{I}])$: \[ (R[s_{I}],V) \models \alpha \text{ iff } \mathstr{S}^{V} \models \alpha^{\dag}[s_{I}]. \] Moreover, $(\cdot)^{\dag}$ restricts to first-order logic, i.e., $\alpha^{\dag}$ is a first-order formula if $\alpha \in \ensuremath{Fo_1}\xspacee$. \end{proposition} \begin{proof} Basically, the translation $(\cdot)^{\dag}$ \emph{restricts} all quantifiers to the collection of successors of $v$. In other words, $(\cdot)^{\dag}$ is the identity on basic formulas, it commutes with the propositional connectives, and for the quantifiers $\exists$ and $\ensuremath{\exists^\infty}\xspace$ we define: \[\begin{array}{lll} (\exists x\, \alpha)^{\dag} &\mathrel{: =}& \exists x\, (Rvx \land \alpha^{\dag}) \\ (\ensuremath{\exists^\infty}\xspace x\, \alpha)^{\dag} &\mathrel{: =}& \forall p\, \exists x\, (Rvx \land \neg p(x) \land \alpha^{\dag}) \end{array}\] We leave it to the reader to verify the correctness of this definition --- observe that the clause for the infinity quantifier $\ensuremath{\exists^\infty}\xspace$ is based on the equivalence between $\ensuremath{\mathrm{WMSO}}\xspace$ and $\ensuremath{\mathrm{FO}}\xspaceei$, established by V\"a\"an\"anen~\cite{vaananen77}. \end{proof} \noindent We are now ready to define the translation used in the main result of this section.
\begin{definition} By an induction on the complexity of formulas we define the following translation $(\cdot)^{*}$ from $\mu\ensuremath{\mathrm{FO}}\xspaceei$-formulas to formulas of monadic second-order logic: \[\begin{array}{lll} p^{*} &\mathrel{: =}& p(v) \\ (\neg\varphi)^{*} &\mathrel{: =}& \neg \varphi^{*} \\ (\varphi\lor\psi)^{*} &\mathrel{: =}& \varphi^{*} \lor \psi^{*} \\ (\nxt{\alpha}(\ol{\varphi}))^{*} &\mathrel{: =}& \alpha^{\dag}[\varphi_{i}^{*}/a_{i} \mid i \in I], \end{array}\] where $\alpha^{\dag}$ is as in Proposition~\ref{p:1trl}, and $[\varphi_{i}^{*}/a_{i} \mid i \in I]$ is the substitution that replaces every occurrence of an atomic formula of the form $a_{i}(x)$ with the formula $\varphi_{i}^{*}(x)$ (i.e. the formula $\varphi_{i}^{*}$, but with the free variable $v$ substituted by $x$). Finally, the inductive clause for a formula of the form $\mu p.\varphi$ is given as in \eqref{eq:trlmu}. \end{definition} \begin{proofof}{Theorem~\ref{t:mfl2mso}} First of all, it is clear that in both cases the translation $(\cdot)^{*}$ lands in the correct language. For both parts of the theorem, we then prove that $(\cdot)^{*}$ is truth preserving by a straightforward formula induction. E.g., for part (2) we need to show that, for an arbitrary formula $\varphi\in \mu_{C}\ensuremath{Fo_1}\xspaceei$ and an arbitrary model $\mathstr{S}$: \begin{equation} \label{eq:xxxx1} \mathstr{S} \tscolorsdash \varphi \text{ iff } \mathstr{S} \models \varphi^{*}[s_{I}]. \end{equation} As discussed in the main text, the two critical cases concern the inductive steps for the modalities and the least fixpoint operators. Let $ L_{1}^{+} \in \{\ensuremath{Fo_1}\xspacee,\ensuremath{Fo_1}\xspaceei\}$. We start by verifying the case of the modalities. Consider the formula $\nxt{\alpha}(\varphi_{1},\ldots,\varphi_{n})$ with $\alpha(a_{1},\ldots,a_{n}) \in L_{1}^{+}$. By the induction hypothesis, $\varphi_\ell \equiv \varphi^{*}_\ell$, for $\ell=1,\dots, n$.
Now, let $\mathstr{S}$ be a transition system. We have that \begin{align*} \mathstr{S} \tscolorsdash \nxt{\alpha}(\varphi_{1},\ldots,\varphi_{n}) \text{ iff } & (R[s_{I}],V_{\overline{\varphi}}) \models \alpha(a_{1},\ldots,a_{n}) & \text{(by \eqref{eq:mumod})} \\ \text{ iff } & \mathstr{S}^{V_{\overline{\varphi}}} \models \alpha^{\dag}[s_{I}] & \text{(by Prop. \ref{p:1trl})} \\ \text{ iff } & \mathstr{S} \models \alpha^{\dag}[\varphi_{i}^{*}/a_{i} \mid i \in I][s_{I}] & \text{(by \eqref{eq:valmod}, Def. \ref{def:exp} and IH)} \end{align*} The inductive step for the least fixpoint operator is justified by Proposition~\ref{p:keyfix}. In more detail, given a formula of the form $\mu x. \psi \in\mu_{Y} L_{1}^{+}$, with $Y=D$ for $L_{1}^{+} =\ensuremath{Fo_1}\xspacee$, and $Y=C$ for $L_{1}^{+} =\ensuremath{Fo_1}\xspaceei$, consider the following chain of equivalences: \begin{align*} & s_{I} \in \ext{\mu p.\psi}^{\mathstr{S}} \\ \text{ iff } & s_{I} \in \mathit{LFP}. (\psi^{\mathstr{S}}_{p})_{\rst{Q}} \text{ for some } \begin{cases} \text{ noetherian} & \\ \text{ finite} & \end{cases} \text{set } Q & \text{(by \eqref{eq:foe-d}/\eqref{eq:foei-c})} \\ \text{ iff } & s_{I} \in \bigcap \Big\{ P \subseteq Q \mid P \in \mathit{PRE}((\psi^{\mathstr{S}}_{p})_{\rst{Q}})\Big\} \text{ for some } \begin{cases} \text{ noetherian} & \\ \text{ finite} & \end{cases} \text{set } Q \\ \text{ iff } & \mathstr{S} \models \exists q.\, \Big(\forall p \subseteq q.\, \big(p \in \mathit{PRE}((\psi^{\mathstr{S}}_{p})_{\rst{q}}) \to p(s_{I})\big) \Big) \\ \text{ iff } & \mathstr{S} \models (\mu p. \psi)^{*}[s_{I}]. & (\text{IH}) \end{align*} This concludes the proof of \eqref{eq:xxxx1}. \end{proofof} \subsection{Fixpoints of continuous maps} It is well-known that continuous functionals are \emph{constructive}.
That is, if we construct the least fixpoint of a continuous functional $F: \wp(S) \to \wp(S)$ using the ordinal approximation $\varnothing, F\varnothing, F^2\varnothing, \ldots, F^{\alpha}\varnothing, \ldots$, then we reach convergence after at most $\omega$ many steps, implying that $\mathit{LFP}. F = F^{\omega}\varnothing$. We will now see that this fact can be strengthened to the following observation, which is the crucial result needed in the proof of Proposition~\ref{p:keyfix}. \begin{theorem} \label{t:fixcont} Let $F: \wp(S)\to \wp(S)$ be a continuous functional. Then for any $s \in S$: \begin{equation} \label{eq:fixcont} s \in \mathit{LFP}. F \text{ iff } s \in \mathit{LFP}.F\rst{X}, \text{ for some finite } X \subseteq S. \end{equation} \end{theorem} \begin{proof} The direction from right to left of \eqref{eq:fixcont} is a special case of Proposition~\ref{p:rstfix}. For the opposite direction of \eqref{eq:fixcont} a bit more work is needed. Assume that $s \in \mathit{LFP}. F$; we claim that there are sets $U_{1},\ldots,U_{n}$, for some $n \in \omega$, such that $s \in U_{n}$, $U_{1} \subseteq_{\omega} F(\varnothing)$, and $U_{i+1} \subseteq_{\omega} F(U_{i})$, for all $i$ with $1 \leq i < n$. To see this, first observe that since $F$ is continuous, we have $\mathit{LFP}. F = F^{\omega}(\varnothing) = \bigcup_{n\in\omega}F^{n}(\varnothing)$, and so we may take $n$ to be the least natural number such that $s \in F^{n}(\varnothing)$. By a downward induction we now define sets $U_{n},\ldots,U_{1}$, with $U_{i} \subseteq F^{i}(\varnothing)$ for each $i$. We set up the induction by putting $U_{n} \mathrel{:=} \{ s \}$; then $U_{n} \subseteq F^{n}(\varnothing)$ by our assumption on $n$. For $i<n$, we define $U_{i}$ as follows.
Using the inductive fact that $U_{i+1} \subseteq_{\omega} F^{i+1}(\varnothing) = F(F^{i}(\varnothing))$, it follows by continuity of $F$ that for each $u \in U_{i+1}$ there is a set $V_{u} \subseteq_{\omega} F^{i}(\varnothing)$ such that $u \in F(V_{u})$. We then define $U_{i} \mathrel{:=} \bigcup \{ V_{u} \mid u \in U_{i+1} \}$, so that clearly $U_{i+1} \subseteq_{\omega} F(U_{i})$ and $U_{i} \subseteq_{\omega} F^{i}(\varnothing)$. Continuing like this, ultimately we arrive at stage $i=1$, where we find $U_{1} \subseteq_{\omega} F(\varnothing)$ as required. Finally, given the sequence $U_{n},\ldots,U_{1}$, we define \[ X \mathrel{:=} \bigcup_{0<i\leq n} U_{i}. \] It is then straightforward to prove that $U_{i} \subseteq \mathit{LFP}. F\rst{X}$, for each $i$ with $0<i\leq n$, and so in particular we find that $s \in U_{n} \subseteq \mathit{LFP}. F\rst{X}$. This finishes the proof of the implication from left to right in \eqref{eq:fixcont}. \end{proof} \noindent As an almost immediate corollary of this result we obtain the second part of Proposition~\ref{p:keyfix}. \begin{proofof}{Proposition~\ref{p:keyfix}(2)} Take an arbitrary formula $\mu p. \varphi \in \mu_{C}\ensuremath{Fo_1}\xspaceei$; then by definition we have $\varphi \in \mu_{C}\ensuremath{Fo_1}\xspaceei \cap \cont{\mu\ensuremath{Fo_1}\xspaceei}{p}$. But it follows from a routine inductive proof that every formula $\psi \in \mu_{C}\ensuremath{Fo_1}\xspaceei \cap \cont{\mu\ensuremath{Fo_1}\xspaceei}{\mathsf{Q}}$ is continuous in each variable in $\mathsf{Q}$. Thus $\varphi$ is continuous in $p$, and so the result is immediate by Theorem~\ref{t:fixcont}. \end{proofof} \subsection{Fixpoints of noetherian maps} We will now see how to prove Proposition~\ref{p:keyfix}(1), which is the key result that we need to embed alternation-free $\mu$-calculi such as $\mu_{D}\ensuremath{Fo_1}\xspacee$ and $\mu_{D}\ensuremath{\mathrm{ML}}\xspace$ into noetherian second-order logic.
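Before developing the noetherian case, we note that the finite-restriction construction in the proof of Theorem~\ref{t:fixcont} can be illustrated on a concrete continuous functional. The Python sketch below is purely illustrative: it uses the toy reachability functional $F(Y) = \{0\} \cup \{s+1 \mid s \in Y\}$ over the natural numbers (the map behind a formula of the shape $\mu p.(\mathit{base} \lor \Diamond p)$), whose least fixpoint is infinite, while the witness set $X$ peeled off from the approximants $F^{n}(\varnothing)$ stays finite.

```python
# Illustrative sketch of Theorem t:fixcont (toy functional, hypothetical names).
# F(Y) = {0} ∪ {s+1 : s ∈ Y} is continuous; LFP.F is all of ℕ (infinite),
# yet each s lies in LFP.(F|_X) for the finite X built from the sets U_n,...,U_1.

def f(y):
    return frozenset({0}) | frozenset(s + 1 for s in y)

def finite_witness(s):
    """Peel off witness sets U_n = {s}, U_i = ∪{V_u : u ∈ U_{i+1}} as in the
    proof; here u ∈ F(V_u) holds with V_u = {u-1} for u > 0 and V_u = ∅ for u = 0."""
    u = frozenset({s})
    x = set(u)
    while u - {0}:                      # stop once only the base point is left
        u = frozenset(v - 1 for v in u if v > 0)
        x |= u
    return frozenset(x)

def lfp_restricted(f, x):
    """Least fixpoint of the restricted map F|_X(Y) = F(Y) ∩ X; terminates
    because the approximants form an increasing chain inside the finite X."""
    current = frozenset()
    while True:
        nxt = f(current) & x
        if nxt == current:
            return current
        current = nxt

s = 5
x = finite_witness(s)        # {0, 1, 2, 3, 4, 5}
assert s in lfp_restricted(f, x)
```

Note that iterating the unrestricted $F$ would never converge here; it is precisely the restriction to the finite witness $X$ that makes the least fixpoint computable while still containing the chosen point.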
Perhaps surprisingly, this case is slightly more subtle than the characterisation of fixpoints of continuous maps. We begin with some auxiliary definitions and results on monotone functionals, starting with a game-theoretic characterisation of their least fixpoints~\cite{Ven08}. \begin{definition} \label{d:unfgame} Given a monotone functional $F: \wp(S)\to \wp(S)$ we define the \emph{unfolding game} $\mathcal{U}_{F}$ as follows: \begin{itemize} \item at any position $s \in S$, $\ensuremath{\exists}\xspace$ needs to pick a set $X$ such that $s \in FX$; \item at any position $X \in \wp(S)$, $\ensuremath{\forall}\xspace$ needs to pick an element of $X$; \item all infinite matches are won by $\ensuremath{\forall}\xspace$. \end{itemize} A positional strategy $\strat: S \to \wp(S)$ for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_{F}$ is \emph{descending} if, for all ordinals $\alpha$, \begin{equation} \label{eq:dec} s \in F^{\alpha+1}(\varnothing) \text{ implies } \strat(s) \subseteq F^{\alpha}(\varnothing). \end{equation} \end{definition} It is not the case that \emph{all} positional winning strategies for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_{F}$ are descending, but the next result shows that there always is one. \begin{proposition} \label{p:unfg} Let $F: \wp(S)\to \wp(S)$ be a monotone functional. \begin{enumerate}[(1)] \item For all $s \in S$, $s \in \text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{U}_{F})$ iff $s \in \mathit{LFP}. F$; \item If $s \in \mathit{LFP}. F$, then \ensuremath{\exists}\xspace has a descending winning strategy in $\mathcal{U}_{F}@s$. \end{enumerate} \end{proposition} \begin{proof} Point (1) corresponds to \cite[Theorem 3.14(2)]{Ven08}. For part (2) one can simply take the following strategy. Given $s \in \mathit{LFP}.F$, let $\alpha$ be the least ordinal such that $s \in F^{\alpha}(\varnothing)$; it is easy to see that $\alpha$ must be a successor ordinal, say $\alpha = \beta + 1$.
Now simply put $\strat(s) \mathrel{:=} F^{\beta}(\varnothing)$. \end{proof} \begin{definition} \label{d:str-tree} Let $F: \wp(S)\to \wp(S)$ be a monotone functional, let $\strat$ be a positional winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_{F}$, and let $r \in S$. Define $T_{\strat,r} \subseteq S$ to be the set of states in $S$ that are $\strat$-reachable in $\mathcal{U}_{F}@r$. This set has a tree structure induced by the map $\strat$ itself, where the children of $s \in T_{\strat,r}$ are given by the set $\strat(s)$; we will refer to $T_{\strat,r}$ as the \emph{strategy tree} of $\strat$. \end{definition} Note that a strategy tree $T_{\strat,r}$ will have no infinite paths, since we define the notion only for a \emph{winning} strategy $\strat$. \begin{proposition} \label{p:afmc-2} Let $F: \wp(S)\to \wp(S)$ be a monotone functional, let $r \in S$, and let $\strat$ be a descending winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_{F}$. Then \begin{equation} \label{eq:afmc3} r \in \mathit{LFP}. F \text{ implies } r \in \mathit{LFP}. F\rst{T_{\strat,r}}. \end{equation} \end{proposition} \begin{proof} Let $F$, $r$ and $\strat$ be as in the formulation of the proposition. Assume that $r \in \mathit{LFP}. F$; then clearly $r \in F^{\alpha}(\varnothing)$ for some ordinal $\alpha$. Furthermore, $T_{\strat,r}$ is defined and clearly we have $r \in T_{\strat,r}$. Abbreviate $T \mathrel{:=} T_{\strat,r}$. It then suffices to show that for all ordinals $\alpha$ we have \begin{equation} \label{eq:unf1} F^{\alpha}(\varnothing) \cap T \subseteq (F\rst{T})^{\alpha}(\varnothing). \end{equation} We will prove \eqref{eq:unf1} by transfinite induction. The base case, where $\alpha = 0$, and the inductive case where $\alpha$ is a limit ordinal are straightforward, so we focus on the case where $\alpha$ is a successor ordinal, say $\alpha = \beta +1$.
Take an arbitrary state $u \in F^{\beta+1}(\varnothing) \cap T$, then we find $\strat(u) \subseteq F^{\beta}(\varnothing)$ by our assumption \eqref{eq:dec}, and $\strat(u) \subseteq T$ by definition of $T$. Then the induction hypothesis yields that $\strat(u) \subseteq (F\rst{T})^{\beta}(\varnothing)$, and so we have $\strat(u) \subseteq (F\rst{T})^{\beta}(\varnothing) \cap T$. But since $\strat$ is a winning strategy, and $u$ is a winning position for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_{F}$ by Proposition~\ref{p:unfg}(1), $\strat(u)$ is a legitimate move for $\ensuremath{\exists}\xspace$, and so we have $u \in F(\strat(u))$. Thus by monotonicity of $F$ we obtain $u \in F((F\rst{T})^{\beta}(\varnothing) \cap T)$, and since $u \in T$ by assumption, this means that $u \in (F\rst{T})^{\beta+1}(\varnothing)$ as required. \end{proof} We now turn to the specific case where we consider the least fixed point of a functional $F$ which is induced by some formula $\varphi(p) \in \mu_{D}L_{1}$ on some LTS $\mathstr{S}$. By Proposition \ref{p:unfg} and Fact~\ref{f:adeqmu}, $\ensuremath{\exists}\xspace$ has a winning strategy in $\mathcal{E}(\mu p.\varphi(p),\mathstr{S})@(\mu p.\varphi(p),s)$ if and only if she has a winning strategy in $\mathcal{U}_{F}@s$, where $F \mathrel{:=} \varphi_{p}^{\mathstr{S}}$ is the monotone functional defined by $\varphi(p)$. The next proposition makes this correspondence explicit when $L_{1} = \ensuremath{\mathrm{FO}}\xspacee$. First, we need to introduce some auxiliary concepts and notation. Given a winning strategy $\strat$ for $\ensuremath{\exists}\xspace$ in $\mathcal{E}(\mu p. \varphi, \mathstr{S})@(\mu p. \varphi,s)$, we denote by $B(\strat)$ the set of all finite $\strat$-guided, possibly partial, matches in $\mathcal{E}(\mu p. \varphi,\mathstr{S})@(\mu p. \varphi,s)$ in which no position of the form $(\nu q. \psi, r)$ is visited.
Let $\strat$ be a positional winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_F@s$ and $\strat'$ a winning strategy for her in $\mathcal{E}(\mu p.\varphi,\mathstr{S})@(\mu p.\varphi, s)$. We call $\strat$ and $\strat'$ \emph{compatible} if each point in $T_{\strat,s}$ occurs on some path belonging to $B(\strat')$. \begin{proposition}\label{p:unfold=evalgame2} Let $\varphi(p) \in \mu_{D}\ensuremath{\mathrm{FO}}\xspacee{p}$ and $s \in \ext{\mu p.\varphi}^{\mathstr{S}}$. Then there is a descending winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_F@s$ that is compatible with a winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{E}(\mu p.\varphi,\mathstr{S})@(\mu p.\varphi,s)$. \end{proposition} \begin{proof} Let $F \mathrel{:=} \varphi_{p}^{\mathstr{S}}$ be the monotone functional defined by $\varphi(p)$. From $s \in \ext{\mu p.\varphi}^{\mathstr{S}}$, we get that $s \in \mathit{LFP}. F$. Applying Proposition~\ref{p:unfg} to the fact that $s \in \mathit{LFP}. F$ yields that $\ensuremath{\exists}\xspace$ has a descending winning strategy $\strat: S \to \wp{(S)}$ in $\mathcal{U}_{F}@s$. We define $\ensuremath{\exists}\xspace$'s strategies $\strat'$ in $\mathcal{E}(\mu p.\varphi,\mathstr{S})@(\mu p.\varphi,s)$, and $\strat^*$ in $\mathcal{U}_{F}@s$ as follows: \begin{enumerate} \item In the evaluation game $\mathcal{E}$, after the initial automatic move, the position of the match is $(\varphi,s)$; there $\ensuremath{\exists}\xspace$ first plays her positional winning strategy $\strat_s$ from $\mathcal{E}(\varphi(p),\mathstr{S}[p \mapsto \strat(s)])@(\varphi(p), s)$, and we define her move $\strat^*(s)$ in the unfolding game $\mathcal{U}$ as the set of all nodes $t \in \strat(s)$ such that there is a $\strat_s$-guided match in $B(\strat_s)$ whose last position is $(p,t)$.
\item Each time a position $(p,t)$ is reached in the evaluation game $\mathcal{E}$, we distinguish two cases: \begin{enumerate} \item if $t \in \text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{U}_{F})$, then $\ensuremath{\exists}\xspace$ continues with the positional winning strategy $\strat_t$ from $\mathcal{E}(\varphi(p),\mathstr{S}[p \mapsto \strat(t)])@(\varphi(p),t)$, and we define her move $\strat^*(t)$ in $\mathcal{U}$ as the set of all nodes $w \in \strat(t)$ such that there is a $\strat_t$-guided match in $B(\strat_t)$ whose last position is $(p,w)$; \item if $t \notin \text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{U}_{F})$, then $\ensuremath{\exists}\xspace$ continues with a random positional strategy and we define $\strat^*(t)\mathrel{:=} \varnothing$. \end{enumerate} \item For any position $(p,t)$ that was not reached in the previous steps, $\ensuremath{\exists}\xspace$ sets $\strat^*(t)\mathrel{:=} \varnothing$. \end{enumerate} By construction, $\strat'$ and $\strat^*$ are compatible. Moreover, $\strat^*(t) \subseteq \strat(t)$, for $t \in S$, meaning that $\strat^*$ is descending. We verify that both $\strat'$ and $\strat^*$ are actually winning strategies for $\ensuremath{\exists}\xspace$ in the respective games. First of all, observe that for every position of the form $(p,t)$ reached during a $\strat'$-guided match, we have $t \in \text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{U}_{F})$. This can be proved by induction on the number of positions of the form $(p,t)$ visited during a $\strat'$-guided match. For the inductive step, assume $w \in \text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{U}_{F})$. Hence $\strat_w$ is winning for $\ensuremath{\exists}\xspace$ in $\mathcal{E}(\varphi,\mathstr{S}[p \mapsto \strat(w)])@(\varphi,w)$. This means that if a position of the form $(p, t)$ is reached, the variable $p$ must be true at $t$ in the model $\mathstr{S}[p \mapsto \strat(w)]$, meaning that $t$ belongs to the set $\strat(w)$.
By assumption $\strat$ is a winning strategy for $\ensuremath{\exists}\xspace$ in $\mathcal{U}_F$, and therefore any element of $\strat(w)$ is again a member of the set $\text{\sl Win}_{\ensuremath{\exists}\xspace}(\mathcal{U}_{F})$. Finally, let $\pi$ be an arbitrary $\strat'$-guided match of $\mathcal{E}(\varphi, \mathstr{S}[p \mapsto \strat(w)])@(\varphi,w)$. We verify that $\pi$ is winning for $\ensuremath{\exists}\xspace$. First observe that since $\strat$ is winning for her in $\mathcal{U}_F@s$, the fixpoint variable $p$ is unfolded only finitely many times during $\pi$. Let $(p,t)$ be the last basic position in $\pi$ where $p$ occurs. From then on $\strat'$ and $\strat_t$ coincide, yielding that the match is winning for $\ensuremath{\exists}\xspace$. We finally verify that $\strat^*$ is winning for $\ensuremath{\exists}\xspace$ in the unfolding game $\mathcal{U}_F@s$. First of all, since $\strat'$ is winning, $B(\strat')$ does not contain an infinite ascending chain of $\strat'$-guided matches, and hence any $\strat^*$-guided match in $\mathcal{U}_{F}@s$ is finite. It therefore remains to verify that for every $\strat^*$-guided match $\pi$ in $\mathcal{U}_{F}@s$ such that $\textsf{last}(\pi)$ is an $\ensuremath{\exists}\xspace$ position, she can always move. We proceed by induction on the length of a $\strat^*$-guided match. At each step, we use compatibility and thus keep track of the corresponding position in the evaluation game $\mathcal{E}(\mu p.\varphi,\mathstr{S})@(\mu p.\varphi,s)$. The initial position for her is $s \in S$. Notice that $\strat^*(s) = \strat(s) \cap B(\strat')$ and therefore $\strat'$ corresponds to $\strat_s$ on $\mathcal{E}(\varphi(p),\mathstr{S}[p \mapsto \strat^*(s)])@(\varphi(p),s)$ and is therefore winning for $\ensuremath{\exists}\xspace$. In particular, this means that $s \in F(\strat^*(s))$. Hence, as initial move, $\ensuremath{\exists}\xspace$ is allowed to play $\strat^*(s)$.
Moreover any subsequent choice for $\ensuremath{\forall}\xspace$ is such that there is a winning match $\pi \in B(\strat_s)$ for $\ensuremath{\exists}\xspace$ such that $\textsf{last}(\pi)=(p,w)$. For the induction step, assume $\ensuremath{\forall}\xspace$ has chosen $ t \in \strat^*(w)$, where $\strat^*(w) = \strat(w) \cap B(\strat')$, $\strat'$ corresponds to the winning strategy $\strat_w$ on $\mathcal{E}(\varphi(p),\mathstr{S}[p \mapsto \strat^*(w)])@(\varphi(p),w)$, and there is a winning match $\pi \in B(\strat_w)$ for $\ensuremath{\exists}\xspace$ such that $\textsf{last}(\pi)=(p,t)$. By construction, $\strat'$ corresponds to the winning strategy $\strat_t$ for $\ensuremath{\exists}\xspace$ on $\mathcal{E}(\varphi(p),\mathstr{S}[p \mapsto \strat(t)])@(\varphi(p),t)$. Because $\strat^*(t)= \strat(t) \cap B(\strat')$, $\strat_t$ is also winning for her in $\mathcal{E}(\varphi(p),\mathstr{S}[p \mapsto \strat^*(t)])@(\varphi(p),t)$, meaning that $t \in F(\strat^*(t))$. The move $\strat^*(t)$ is therefore admissible, and any subsequent choice for $\ensuremath{\forall}\xspace$ is such that there is a winning match $\pi \in B(\strat_t)$ for $\ensuremath{\exists}\xspace$ with $\textsf{last}(\pi)=(p,w)$. \end{proof} \begin{proofof}{Proposition~\ref{p:keyfix}(1)} Let $\mathstr{S}$ be an LTS and $\varphi(p) \in \mu_{D}\ensuremath{Fo_1}\xspacee{p}$. Since the right-to-left direction of \eqref{eq:foe-d} was proved in Proposition \ref{p:rstfix}, we check the left-to-right direction. We first verify that winning strategies in evaluation games for noetherian fixpoint formulas naturally induce bundles. More precisely: \textsc{Claim.} Let $B^\mathstr{S}(\strat)$ be the projection of $B(\strat)$ on $S$, that is, the set of all paths in $\mathstr{S}$ that are a projection on $S$ of a $\strat$-guided (partial) match in $B(\strat)$. Then $B^\mathstr{S}(\strat)$ is a bundle.
\begin{pfclaim} Assume towards a contradiction that $B^\mathstr{S}(\strat)$ contains an infinite ascending chain $\pi_{0} \sqsubset \pi_{1} \sqsubset \cdots$. Let $\pi$ be the limit of this chain and consider the set of elements in $B(\strat)$ that, projected on $S$, are prefixes of $\pi$. By K\"{o}nig's Lemma, this set contains an infinite ascending chain whose limit is an infinite $\strat$-guided match $\rho$ in $\mathcal{E}(\mu p. \varphi,\mathstr{S})$ which starts at $(\mu p. \varphi,s)$, and of which $\pi$ is the projection on $S$. By definition of $B(\strat)$, the highest bound variable of $\mu p. \varphi$ that gets unravelled infinitely often in $\rho$ is a $\mu$-variable, meaning that the match is winning for $\ensuremath{\forall}\xspace$, a contradiction. \end{pfclaim} Assume that $s \in \ext{\mu p.\varphi}^{\mathstr{S}}$, and let $F \mathrel{:=} \varphi_{p}^{\mathstr{S}}$ be the monotone functional defined by $\varphi(p)$. By Proposition \ref{p:unfold=evalgame2}, $\ensuremath{\exists}\xspace$ has a winning strategy $\strat'$ in $\mathcal{E}(\mu p.\varphi,\mathstr{S})@(\mu p.\varphi,s)$ compatible with a descending winning strategy $\strat$ in $\mathcal{U}_{F}@s$. By Proposition~\ref{p:afmc-2}, we obtain that $s \in \mathit{LFP}. F\rst{T_{\strat,s}}$. Because of compatibility, every node in $T_{\strat, s}$ occurs on some path of $B(\strat')$. From the Claim we know that $B^\mathstr{S}(\strat')$ is a bundle, meaning that $T_{\strat, s}$ is noetherian as required.
\end{proofof} \section{Expressiveness modulo bisimilarity} \label{sec:expresso} In this section we use the tools developed in the previous parts to prove the main results of the paper on expressiveness modulo bisimilarity, viz., Theorem~\ref{t:11}, stating \begin{eqnarray} \ensuremath{\mu_{D}\ML}\xspace &\equiv& \ensuremath{\mathrm{NMSO}}\xspace / {\mathrel{\underline{\leftrightarrow}}} \label{eq:z11} \\[1mm] \ensuremath{\mu_{C}\ML}\xspace &\equiv& \ensuremath{\mathrm{WMSO}}\xspace / {\mathrel{\underline{\leftrightarrow}}} \label{eq:z12} \end{eqnarray} \begin{proofof}{Theorem~\ref{t:11}} The structure of the proof is the same for the statements \eqref{eq:z11} and \eqref{eq:z12}. In both cases, we will need three steps to link the modal language on the left-hand side of the equation to the bisimulation-invariant fragment of the second-order logic on the right-hand side. The first step is to connect the fragments $\ensuremath{\mu_{D}\ML}\xspace$ and $\ensuremath{\mu_{C}\ML}\xspace$ of the modal $\mu$-calculus to, respectively, the weak and the continuous-weak automata for first-order logic without equality.
That is, in Theorem~\ref{t:mlaut} below we prove the following: \begin{eqnarray} \ensuremath{\mu_{D}\ML}\xspace &\equiv& \mathit{Aut}W(\ensuremath{Fo_1}\xspace) \label{eq:z21} \\[1mm] \ensuremath{\mu_{C}\ML}\xspace &\equiv& \mathit{Aut}WC(\ensuremath{Fo_1}\xspace) \label{eq:z22} \end{eqnarray} Second, the main observation that we shall make in this section is that \begin{eqnarray} \mathit{Aut}W(\ensuremath{Fo_1}\xspace) &\equiv& \mathit{Aut}W(\ensuremath{Fo_1}\xspacee)/{\mathrel{\underline{\leftrightarrow}}} \label{eq:z31} \\[1mm] \mathit{Aut}WC(\ensuremath{Fo_1}\xspace) &\equiv& \mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)/ {\mathrel{\underline{\leftrightarrow}}} \label{eq:z32} \end{eqnarray} That is, for \eqref{eq:z31} we shall see in Theorem~\ref{t:bi-aut} below that a weak $\ensuremath{Fo_1}\xspacee$-automaton $\mathstr{A}$ is bisimulation invariant iff it is equivalent to a weak $\ensuremath{Fo_1}\xspace$-automaton $\mathstr{A}^{\Diamond}$ (effectively obtained from $\mathstr{A}$); and similarly for \eqref{eq:z32}. Finally, we use the automata-theoretic characterisations of $\ensuremath{\mathrm{NMSO}}\xspace$ and $\ensuremath{\mathrm{WMSO}}\xspace$ that we obtained in earlier sections: \begin{eqnarray} \mathit{Aut}W(\ensuremath{Fo_1}\xspacee) &\equiv& \ensuremath{\mathrm{NMSO}}\xspace \label{eq:z41} \\[1mm] \mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei) &\equiv& \ensuremath{\mathrm{WMSO}}\xspace \label{eq:z42} \end{eqnarray} Then it is obvious that the equation \eqref{eq:z11} follows from \eqref{eq:z21}, \eqref{eq:z31} and \eqref{eq:z41}, while similarly \eqref{eq:z12} follows from \eqref{eq:z22}, \eqref{eq:z32} and \eqref{eq:z42}. \end{proofof} It remains to prove the equations \eqref{eq:z21} and \eqref{eq:z22}, and \eqref{eq:z31} and \eqref{eq:z32}; this we will take care of in the two subsections below.
\subsection{Automata for $\ensuremath{\mu_{D}\ML}\xspace$ and $\ensuremath{\mu_{C}\ML}\xspace$} In this subsection we consider the automata corresponding to the continuous and the alternation-free $\mu$-calculus. That is, we verify the equations \eqref{eq:z21} and \eqref{eq:z22}. \begin{theorem} \label{t:mlaut} \begin{enumerate} \item There is an effective construction transforming a formula $\varphi \in \ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace$ into an equivalent automaton in $\mathit{Aut}(\ensuremath{Fo_1}\xspace)$, and vice versa. \item There is an effective construction transforming a formula $\varphi \in \ensuremath{\mu_{D}\ML}\xspace$ into an equivalent automaton in $\mathit{Aut}W(\ensuremath{Fo_1}\xspace)$, and vice versa. \item There is an effective construction transforming a formula $\varphi \in \ensuremath{\mu_{C}\ML}\xspace$ into an equivalent automaton in $\mathit{Aut}WC(\ensuremath{Fo_1}\xspace)$, and vice versa. \end{enumerate} \end{theorem} \begin{proof} In each of these cases the direction from left to right is easy to verify, so we omit details. For the opposite direction, we focus on the hardest case, that is, we will only prove that $\mathit{Aut}WC(\ensuremath{Fo_1}\xspace) \leq \ensuremath{\mu_{C}\ML}\xspace$. By Theorem~\ref{t:autofor} it suffices to show that $\mu_{C}\ensuremath{Fo_1}\xspace \leq \ensuremath{\mu_{C}\ML}\xspace$, and we will in fact provide a direct, inductively defined, truth-preserving translation $(\cdot)^{t}$ from $\mu_{C}\ensuremath{Fo_1}\xspace(\mathsf{P})$ to $\ensuremath{\mu_{C}\ML}\xspace(\mathsf{P})$. Inductively we will ensure that, for every set $\mathsf{Q} \subseteq \mathsf{P}$: \begin{equation} \label{eq:zz1} \varphi \in \cont{\mu\ensuremath{Fo_1}\xspace}{\mathsf{Q}} \text{ implies } \varphi^{t} \in \cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{\mathsf{Q}} \end{equation} and that the dual property holds for cocontinuity. 
Most of the clauses of the definition of the translation $(\cdot)^{t}$ are completely standard: for the atomic clause we take $p^{t} \mathrel{:=} p$ and $(\neg p)^{t} \mathrel{:=} \neg p$, for the boolean connectives we define $(\varphi_{0}\lor\varphi_{1})^{t} \mathrel{:=} \varphi_{0}^{t} \lor \varphi_{1}^{t}$ and $(\varphi_{0}\land\varphi_{1})^{t} \mathrel{:=} \varphi_{0}^{t} \land \varphi_{1}^{t}$, and for the fixpoint operators we take $(\mu p. \varphi)^{t} \mathrel{:=} \mu p. \varphi^{t}$ and $(\nu p. \varphi)^{t} \mathrel{:=} \nu p. \varphi^{t}$ --- to see that the latter clauses indeed provide formulas in $\ensuremath{\mu_{C}\ML}\xspace$ we use \eqref{eq:zz1} and its dual. In all of these cases it is easy to show that \eqref{eq:zz1} holds (or remains true, in the inductive cases). The only interesting case is where $\varphi$ is of the form $\nxt{\alpha}(\varphi_{1},\ldots,\varphi_{n})$. By definition of the language $\mu_{C}\ensuremath{Fo_1}\xspace$ we may assume that $\alpha(a_{1},\ldots,a_{n}) \in \cont{\ensuremath{Fo_1}\xspace(A)}{B}$, where $A = \{ a_{1},\ldots,a_{n} \}$ and $B = \{ a_{1}, \ldots, a_{k} \}$, that for each $1 \leq i \leq k$ the formula $\varphi_{i}$ belongs to the set $\cont{\mu_{C}\ensuremath{Fo_1}\xspace}{\mathsf{Q}}$ and that for each $k+1\leq j \leq n$ the formula $\varphi_{j}$ is $\mathsf{Q}$-free. It follows by the induction hypothesis that $\varphi_{l} \equiv \varphi_{l}^{t} \in \ensuremath{\mu_{C}\ML}\xspace$ for each $l$, that $\varphi_{i}^{t} \in \cont{\ensuremath{\mu\ensuremath{\mathrm{ML}}\xspace}\xspace}{\mathsf{Q}}$ for each $1 \leq i \leq k$, and that the formula $\varphi_{j}^{t}$ is $\mathsf{Q}$-free for each $k+1\leq j \leq n$.
The key observation is now that by Theorem~\ref{t:osnf-cont} we may without loss of generality assume that $\alpha$ is in \emph{normal form}; that is, a disjunction of formulas of the form $\alpha_{\Sigma} = \posdgbnfofo{\Sigma}{\Pi}$, where $\Sigma$ and $\Pi$ are subsets of $\wp (A)$ with $B \cap \bigcup\Pi = \varnothing$ in each disjunct, and \[ \posdgbnfofo{\Sigma}{\Pi} \mathrel{:=} \bigwedge_{S\in\Sigma} \exists x \bigwedge_{a \in S} a(x) \;\land\; \forall x \bigvee_{S\in\Pi} \bigwedge_{a \in S} a(x) \] We now define \[\begin{array}{rll} \big(\nxt{\alpha_{\Sigma}}(\ol{\varphi})\big)^{t} & \mathrel{:=} & \bigwedge_{S\in\Sigma} \Diamond \bigwedge_{a_{l} \in S} \varphi_{l}^{t} \;\land\; \Box \bigvee_{S\in\Pi} \bigwedge_{a_{j} \in S} \varphi_{j}^{t} \\ \varphi^{t} & \mathrel{:=} & {\bigvee}_{\Sigma} \big(\nxt{\alpha_{\Sigma}}(\ol{\varphi})\big)^{t} \end{array}\] It is then obvious that $\varphi$ and $\varphi^{t}$ are equivalent, so it remains to verify \eqref{eq:zz1}. But this is immediate by the observation that all formulas $\varphi_{j}^{t}$ in the scope of the $\Box$ are associated with an $a_{j}$ belonging to a set $S \subseteq A$ that has an empty intersection with the set $B$; that is, each $a_{j}$ belongs to the set $\{ a_{k+1}, \ldots, a_{n}\}$ and so $\varphi_{j}^{t}$ is $\mathsf{Q}$-free. \end{proof} \subsection{Bisimulation invariance, one step at a time} \label{ss:bisinv} In this subsection we will show how the bisimulation invariance results in this paper can be proved by automata-theoretic means. Following Janin \& Walukiewicz~\cite{Jan96}, we will define a construction that, for $L_{1} \in \{{\ensuremath{Fo_1}\xspacee},{\ensuremath{Fo_1}\xspaceei}\}$, transforms an arbitrary $L_{1}$-automaton $\mathstr{A}$ into an $\ensuremath{Fo_1}\xspace$-automaton $\mathstr{A}^{\Diamond}$ such that $\mathstr{A}$ is bisimulation invariant iff it is equivalent to $\mathstr{A}^{\Diamond}$. 
In addition, we will make sure that this transformation preserves both the weakness and the continuity condition. The operation $(\cdot)^{\Diamond}$ is completely determined by the following translation at the one-step level. \begin{definition} Recall from Theorem~\ref{t:osnf} that any formula in ${\ensuremath{Fo_1}\xspacee}^+(A)$ is equivalent to a disjunction of formulas of the form $\posdbnfofoe{\vlist{T}}{\Sigma}$, whereas any formula in ${\ensuremath{Fo_1}\xspaceei}^+(A)$ is equivalent to a disjunction of formulas of the form $\posdbnfolque{\vlist{T}}{\Pi}{\Sigma}$. Based on these normal forms, for both one-step languages $L_{1}={\ensuremath{Fo_1}\xspacee}$ and $L_{1}={\ensuremath{Fo_1}\xspaceei}$, we define the translation $(\cdot)^{\Diamond} : {L_{1}}^+(A) \to \ensuremath{Fo_1}\xspace^+(A)$ by setting \[ \left.\begin{array}{l} \Big( \posdbnfofoe{\vlist{T}}{\Sigma} \Big)^{\Diamond} \\ \Big( \posdbnfolque{\vlist{T}}{\Pi}{\Sigma} \Big)^{\Diamond} \end{array}\right\} \mathrel{:=} \bigwedge_{i} \exists x_i. \tau^+_{T_i}(x_i) \land \forall x. \bigvee_{S\in\Sigma} \tau^+_S(x), \] and for $\alpha = \bigvee_{i} \alpha_{i}$ we define $\alpha^{\Diamond} \mathrel{:=} \bigvee_{i} \alpha_{i}^{\Diamond}$. \end{definition} \noindent This definition propagates to the level of automata in the obvious way. \begin{definition} Let $L_{1}\in \{{\ensuremath{Fo_1}\xspacee},{\ensuremath{Fo_1}\xspaceei}\}$ be a one-step language. Given an automaton $\mathstr{A} = \tup{A,\Delta,\Omega,a_{I}}$ in $\mathit{Aut}(L_{1})$, define the automaton $\mathstr{A}^{\Diamond} \mathrel{:=} \tup{A,\Delta^{\Diamond},\Omega,a_{I}}$ in $\mathit{Aut}(\ensuremath{Fo_1}\xspace)$ by putting, for each $(a,c) \in A \times C$: \[ \Delta^{\Diamond}(a,c) \mathrel{:=} (\Delta(a,c))^{\Diamond}. \] \end{definition} The main result of this section is the theorem below. 
For its formulation, recall that $\mathstr{S}^{\omega}$ is the $\omega$-unravelling of the model $\mathstr{S}$ (as defined in the preliminaries). As an immediate corollary of this result, we see that \eqref{eq:z31} and \eqref{eq:z32} hold indeed. \begin{theorem} \label{t:bi-aut} Let $L_{1}\in \{{\ensuremath{Fo_1}\xspacee},{\ensuremath{Fo_1}\xspaceei}\}$ be a one-step language and let $\mathstr{A}$ be an $L_{1}$-automaton. \begin{enumerate} \item The automata $\mathstr{A}$ and $\mathstr{A}^{\Diamond}$ are related as follows, for every model $\mathstr{S}$: \begin{equation} \label{eq:crux} \mathstr{A}^{\Diamond} \text{ accepts } \mathstr{S} \text{ iff } \mathstr{A} \text{ accepts } \mathstr{S}^{\omega}. \end{equation} \item The automaton $\mathstr{A}$ is bisimulation invariant iff $\mathstr{A} \equiv \mathstr{A}^{\Diamond}$. \item If $\mathstr{A}\in \mathit{Aut}W(L_{1})$ then $\mathstr{A}^{\Diamond}\in \mathit{Aut}W(\ensuremath{Fo_1}\xspace)$, and if $\mathstr{A}\in \mathit{Aut}WC(\ensuremath{Fo_1}\xspaceei)$ then $\mathstr{A}^{\Diamond}\in \mathit{Aut}WC(\ensuremath{Fo_1}\xspace)$. \end{enumerate} \end{theorem} The remainder of this section is devoted to the proof of Theorem~\ref{t:bi-aut}. The key proposition is the following observation on the one-step translation, which we take from the companion paper~\cite{carr:mode18}. \begin{proposition} \label{p-1P} Let $L_{1}\in \{{\ensuremath{Fo_1}\xspacee},{\ensuremath{Fo_1}\xspaceei}\}$. For every one-step model $(D,V)$ and every $\alpha \in L_{1}^+(A)$ we have \begin{equation} \label{eq-1cr} (D,V) \models \alpha^{\Diamond} \text{ iff } (D\times \omega,V_\pi) \models \alpha, \end{equation} where $V_{\pi}$ is the induced valuation given by $V_{\pi}(a) \mathrel{:=} \{ (d,k) \mid d \in V(a), k\in\omega\}$. 
\end{proposition} \begin{proofof}{Theorem~\ref{t:bi-aut}} The proof of the first part is based on a fairly routine comparison, based on Proposition~\ref{p-1P}, of the acceptance games $\mathcal{A}(\mathstr{A}^{\Diamond},\mathstr{S})$ and $\mathcal{A}(\mathstr{A},\mathstr{S}^{\omega})$. (In a slightly more general setting, the details of this proof can be found in~\cite{Venxx}.) For part~2, the direction from right to left is immediate by Theorem \ref{t:mlaut}. The opposite direction follows from the following equivalences, where we use the bisimilarity of $\mathstr{S}$ and $\mathstr{S}^{\omega}$ (Fact~\ref{prop:tree_unrav}): \begin{align*} \mathstr{A} \text{ accepts } \mathstr{S} & \text{ iff } \mathstr{A} \text{ accepts } \mathstr{S}^{\omega} & \tag{$\mathstr{A}$ bisimulation invariant} \\ & \text{ iff } \mathstr{A}^{\Diamond} \text{ accepts } \mathstr{S} & \tag{equivalence~\eqref{eq:crux}} \end{align*} It remains to be checked that the construction $(\cdot )^{\Diamond}$, which has been defined for arbitrary automata in $\mathit{Aut}(L_{1})$, transforms both $\ensuremath{\mathrm{WMSO}}\xspace$-automata and $\ensuremath{\mathrm{NMSO}}\xspace$-automata into automata of the right kind. This can be verified by a straightforward inspection at the one-step level. \end{proofof} \begin{remark}{\rm In fact, we are dealing here with an instantiation of a more general phenomenon that is essentially coalgebraic in nature. In~\cite{Venxx} it is proved that if $L_{1}$ and $L_{1}'$ are two one-step languages that are connected by a translation $(\cdot )^{\Diamond}: L_{1}' \to L_{1}$ satisfying a condition similar to \eqref{eq-1cr}, then we find that $\mathit{Aut}(L_{1})$ corresponds to the bisimulation-invariant fragment of $\mathit{Aut}(L_{1}')$: $\mathit{Aut}(L_{1}) \equiv \mathit{Aut}(L_{1}')/{\mathrel{\underline{\leftrightarrow}}}$. 
The constructions in this subsection can be generalized to prove similar results relating $\mathit{Aut}W(L_{1})$ to $\mathit{Aut}W(L_{1}')$, and $\mathit{Aut}WC(L_{1})$ to $\mathit{Aut}WC(L_{1}')$. }\end{remark} \end{document}
\begin{document} \title[Isometric actions on pseudo-Riemannian nilmanifolds]{Isometric actions on pseudo-Riemannian nilmanifolds} \author{Viviana del Barco} \email{[email protected]} \address{Universidad Nacional de Rosario, ECEN-FCEIA, Depto de Matem\'a\-tica, Pellegrini 250, 2000 Rosario, Santa Fe, Argentina.} \author{Gabriela P. Ovando} \email{[email protected]} \address{CONICET - Universidad Nacional de Rosario, ECEN-FCEIA, Depto de Matem\'a\-tica, Pellegrini 250, 2000 Rosario, Santa Fe, Argentina.} \date{\today} \begin{abstract} This work deals with the structure of the isometry group of pseudo-Rieman\-nian 2-step nilmanifolds. We study the action by isometries of several groups and we construct examples showing substantial differences with the Riemannian situation; for instance, the action of the nilradical of the isometry group need not be transitive. For a nilpotent Lie group endowed with a left-invariant pseudo-Riemannian metric we study conditions under which the subgroup of isometries fixing the identity element equals the subgroup of isometric automorphisms. This equality holds for pseudo-$H$-type Lie groups. \end{abstract} \thanks{{\it (2010) Mathematics Subject Classification}: 53C50, 53C30, 22E25, 53B30.\\ {\it Keywords:} pseudo-Riemannian nilmanifolds, nilpotent Lie groups, isometry groups, bi-invariant metrics. } \thanks{ } \maketitle \section{Introduction} A pseudo-Riemannian nilmanifold is a pseudo-Riemannian manifold $M$ which admits a transitive action by isometries of a nilpotent Lie group. Interest in these manifolds has been renewed in recent years, motivated by their applications not only in mathematics but also in physics (see for instance \cite{Du,Fi} and references therein). A typical question is whether properties known in the Riemannian situation extend to the pseudo-Riemannian case, a question which can be quite delicate. 
Flat pseudo-Riemannian nilmanifolds were investigated in \cite{DI}; for non-flat nilmanifolds many problems remain open today. Let $M$ denote a Riemannian manifold with isometry group $\Iso(M)$. Wolf proved in \cite{Wo} that {\em if a connected nilpotent Lie group $N\subseteq \Iso(M)$ acts transitively on $M$ then $N$ is unique, it is the nilradical of the isometry group, and the transitive action of $N$ is also simple. Thus $M$ can be identified with the nilpotent Lie group $N$ equipped with a left-invariant metric. Furthermore the subgroup $H$ of isometries fixing the identity element coincides with the group $H^{aut_N}$ of isometric automorphisms of $N$ and therefore the isometry group is the semidirect product $\Iso(M)=N\rtimes H$ (*)} (see \cite{Go, Wi}). Later Kaplan \cite{Ka1} studied other isometric actions on a family of 2-step nilmanifolds, namely on $H$-type Lie groups. It was shown that the group of isometric automorphisms coincides with the group of isometries of $N$ fixing its identity element and preserving the distribution \begin{equation}\label{dist} TN = \mv N \oplus \zz N. \end{equation} These subbundles are obtained by left-translation of the splitting at the Lie algebra level \begin{equation}\label{split} \mathfrak n = \mv \oplus \zz, \end{equation} where $\mathfrak n $ is the Lie algebra of $N$, $\zz$ denotes its center and $\mv$ the orthogonal complement of $\zz$. Our goal here is to investigate some Lie groups acting by isometries on a fixed pseudo-Riemannian 2-step nilpotent Lie group: the group of isometries preserving the splitting (\ref{dist}), the group of isometric automorphisms and the full isometry group. We obtain several results on this topic, improving the results in \cite{CP}, and we construct examples exhibiting the difficulties involved. As noticed recently by Wolf et al. \cite{WZ}, the question of the structure of the nilradical of the isometry group of a pseudo-Riemannian nilmanifold is subtle. 
We exhibit a 2-step nilpotent Lie group $N$ equipped with a left-invariant Lorentzian metric such that: \begin{itemize} \item the group of isometric automorphisms is smaller than the subgroup of isometries fixing the identity element; \item the Lie group $N$ is not normal in $\Iso(N)$, hence the algebraic structure (*) does not hold; \item the action of the nilradical of $\Iso(N)$ is not transitive on $N$. \end{itemize} These facts reveal remarkable differences with the Riemannian situation. For Riemannian 2-step nilmanifolds it is known that every isometry is ``compatible'' with the splitting (\ref{split}). Geometrically the subspace $\mv$ corresponds to the negative eigenvalues of the Ricci operator, while the subspace $\mathfrak z $ is described by the non-negative eigenvalues. We shall see that a similar characterization cannot be achieved for a metric with signature. Let $N$ denote a 2-step nilpotent Lie group equipped with a pseudo-Riemannian left-invariant metric. If the center is non-degenerate one gets a decomposition of the Lie algebra $\mathfrak n $ as in (\ref{split}). Under this hypothesis \begin{enumerate} \item[{\it a.}] the group of isometric automorphisms coincides with the group of isometries fixing the identity element and preserving the splitting (\ref{dist}); \item[{\it b.}] we give conditions ensuring the equality $H=H^{aut_N}$, thus obtaining the structure (*) for the full isometry group. \end{enumerate} In particular, the family of pseudo-$H$-type Lie groups satisfies the conditions in {\it b.}, although this does not characterize the family. In the case of degenerate center the situation is more delicate. We mainly work with bi-invariant metrics on 2-step nilpotent Lie groups, showing that {\it a.} above does not hold: there are isometric automorphisms not preserving any splitting of the form (\ref{dist}). Even more, we present an example where there is no relationship between those groups. 
\section{Basic facts and notations} \label{sec1} Let $(G, \la\,,\,\ra)$ denote a Lie group equipped with a left-invariant pseudo-Rieman\-nian metric. In this section we recall some definitions and properties of groups acting by isometries. Let $\Iso(G)$ denote the (full) isometry group of $(G, \la\,,\,\ra)$. This is a Lie group when equipped with the compact-open topology. Since left-translations are isometries, it is easy to verify that every $f\in \Iso(G)$ can be written as a product \begin{equation} f= L_g\circ \varphi \label{comp}\end{equation} where $L_g$ denotes the translation by the element $g\in G$ and $\varphi$ is an isometry fixing the identity element. The subgroup of left-translations by elements of $G$ is closed in $\Iso(G)$ and is isomorphic to $G$. The subgroup of isometries fixing the identity element, denoted by $H$, is also a closed subgroup of $\Iso(G)$ and due to (\ref{comp}) one has $$\Iso(G)=G\cdot H.$$ Let $\Aut(G)$ denote the group of automorphisms of $G$ and set $\Iso^{aut}(G)=G\cdot H^{aut_G}$, where $H^{aut_G}$ denotes the group of isometric automorphisms of $G$, that is, $H^{aut_G}=\Aut(G)\cap \Iso(G)$. Since every automorphism $\phi\in \Aut(G)$ satisfies $\phi \circ L_x = L_{\phi(x)} \circ \phi$, the subgroup of left-translations is a normal subgroup of the group $\Iso^{aut}(G)$, thus one gets $$\Iso^{aut}(G)= G \rtimes H^{aut_G}.$$ A pseudo-Riemannian manifold is called {\em locally symmetric} if $\nabla R \equiv 0$, where $\nabla$ denotes the covariant derivative with respect to the Levi-Civita connection and $R$ denotes the curvature tensor. The Ambrose-Hicks-Cartan theorem (see for example \cite[Thm. 17, Ch. 
8]{ON}) states that given a complete locally symmetric pseudo-Riemannian manifold $M$, a linear isomorphism $A : T_p M \to T_p M$ is the differential of some isometry of $M$ fixing the point $p\in M$ if and only if it preserves the symmetric bilinear form induced by the metric on the tangent space and for every $u,v, w \in T_p M$ the following equation holds: \begin{equation} \label{ACH} R(Au, Av)Aw = A R(u, v)w. \end{equation} Let $\ggo$ denote the Lie algebra of $G$, which is identified with the Lie algebra of left-invariant vector fields on $G$. Then for $G$ connected the following statements are equivalent (see \cite[Ch. 11]{ON}): \begin{enumerate} \item $\la\, ,\,\ra$ is right-invariant, hence bi-invariant; \item $\la\,,\,\ra$ is $\Ad(G)$-invariant; \item the inversion map $g \to g^{-1}$ is an isometry of $G$; \item $\la [u, v], w\ra + \la v, [u, w] \ra= 0$ for all $u,v,w \in \ggo$; \item $\nabla_u w = \frac12 [u, w]$ for all $u,w \in \ggo$, where $\nabla$ denotes the Levi-Civita connection; \item the geodesics of $G$ starting at the identity element $e$ are the one-parameter subgroups of $G$. \end{enumerate} By 3. the pair $(G, \la\,,\,\ra)$ is a pseudo-Riemannian symmetric space, that is, geodesic symmetries are isometries. Standard computations show that the curvature tensor is \begin{equation} R(u, w) = - \frac14 \ad([u,w]) \qquad \quad \mbox{ for } u,w \in \ggo. \label{curvatura} \end{equation} The following lemma is obtained by applying the Ambrose-Hicks-Cartan theorem to a Lie group $G$ equipped with a bi-invariant metric, whose curvature tensor is given by (\ref{curvatura}) (see \cite{Mu}). \begin{lem} \label{iso} Let $G$ be a simply connected Lie group with a bi-invariant pseudo-Riemannian metric. 
Then a linear isomorphism $A : \ggo \to \ggo$ is the differential of some isometry in $H$ if and only if for all $u,v,w\in \ggo$, the linear map $A$ satisfies the following two conditions: \begin{enumerate} \item $\la A u, A w \ra = \la u, w\ra $; \item $ A[[u, v], w] = [[Au, Av], Aw]$. \end{enumerate} \end{lem} Note. A symmetric bilinear form on a Lie algebra $\ggo$ satisfying 4. above is said to be ad-invariant. If it is non-degenerate we simply call it a {\em metric}. \section{A homogeneous Lorentzian manifold of dimension four} \label{sec3} In this section we study geometrical features of a Lorentzian manifold of dimension four and we show that it admits a transitive and simple action by isometries of both a solvable and a nilpotent Lie group. Let $M$ be the pseudo-Riemannian manifold $\R^4$ endowed with the Lorentzian metric \begin{equation}\label{metric} g= dt (dz + \frac12 y dx - \frac12 x dy) + dx^2 + dy^2 \end{equation} where $(t,x,y,z)$ are the usual coordinates on $\RR^4$. Denote $v=(x, y)$ and for each $(t_1, v_1, z_1)\in \RR^4$ consider the following smooth maps of $M$: \begin{equation}\label{actionN} L^N_{(t_1, v_1, z_1)}(t_2, v_2, z_2)=(t_1+t_2, v_1+v_2, z_1+z_2+\frac12 v_1^t J v_2) \end{equation} \begin{equation}\label{actionG} L^G_{(t_1, v_1, z_1)}(t_2, v_2, z_2)=(t_1+t_2, v_1+R(t_1)v_2, z_1+z_2+\frac12 v_1^t JR(t_1) v_2) \end{equation} where $J$ and $R(t)$ are the linear maps of $\RR^2$ given by \begin{equation}\label{mat} J=\left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right), \qquad \qquad R(t)=\left( \begin{matrix} \cos t & -\sin t \\ \sin t & \cos t \end{matrix} \right) \qquad t\in \RR. 
\end{equation} Both maps $L^N_{(t_1, v_1, z_1)}$ and $L^G_{(t_1, v_1, z_1)}$ are isometries of $(M, g)$: in fact, on the basis $\{\partial_t, \partial_x, \partial_y, \partial_z\}$ of $T \RR^4$ one has the following differentials $$ L^N_{(t_1, x_1,y_1, z_1) *} = \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & -\frac12 y_1 & \frac12 x_1 & 1 \end{matrix} \right)$$ $$ L^G_{(t_1, x_1, y_1, z_1) *} = \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & \cos t_1 & -\sin t_1 & 0 \\ 0 & \sin t_1 & \cos t_1 & 0\\ 0 & \mu & \nu & 1 \end{matrix} \right) \mbox{ with } \begin{array}l \mu= \frac12( x_1 \sin t_1 - y_1 \cos t_1), \\ \\ \nu = \frac12(x_1 \cos t_1+ y_1 \sin t_1). \end{array}$$ Thus the maps above define isometric left-actions of a solvable Lie group $G$ and a nilpotent Lie group $N$ on $(M,g)$. Both Lie groups $G$ and $N$ are modelled on $\RR^4$ with its canonical differentiable structure, with the multiplication maps for which the left-translations are precisely $L^G$ on $G$ and $L^N$ on $N$. It is not hard to see that the actions of both groups $G$ and $N$ on $M$ are simple and transitive, so that the homogeneous Lorentzian manifold $M$ can be represented as $(G, g_G)$ and $(N, g_N)$ where $g_G$ and $g_N$ are both given by the same formula (\ref{metric}). Straightforward computations show that the metric $g_N$ is left-invariant on $N$ and the metric $g_G$ is bi-invariant on $G$, that is, translations on the left and on the right of $G$ are isometries for $g_G$ (see the previous section). \begin{rem} The exponential map from $\ggo$ to $G$ is not surjective, as one verifies with the formula given in Section 4.2 of \cite{BOV1}, a fact previously proved in \cite{St}. Thus $G$ is a solvable Lie group belonging to the class of examples given in Example 3.4 of \cite{WZ}, for which there exists a pair of points that cannot be joined by an unbroken geodesic. 
The Lie group $G$, known as the oscillator group \cite{St}, underlies the Nappi-Witten space \cite{NW}. Since the Lorentzian metric $g_G$ is bi-invariant on $G$, the homogeneous manifold $M$ is symmetric. \end{rem} \subsection{The solvmanifold model} Making use of the model $(G, g_G)$ one obtains the isometry group of $(M,g)$. Indeed, as an application of Lemma \ref{iso}, every element of the group $H$ of isometries of $G$ - and hence of $M$ - fixing the element $e=(0,0,0)$ (the identity element of $G$) has differential at $e$ with matrix of the form $$\left( \begin{matrix} \varepsilon & 0 & 0 \\ Jv & A & 0 \\ -\frac12 ||v||^2 & -(Jv)^t A & \varepsilon \end{matrix} \right) \qquad \mbox{ with }\quad \varepsilon=\pm 1,\,A\in\Or(2),\;v\in\R^2$$ where $\Or(2)$ denotes the orthogonal group of $\RR^2$. Thus $$H\simeq (\{1,-1\}\times \Or(2))\ltimes \RR^2$$ (see \cite{BOV1} for more details). Notice that $H$ has four connected components and the connected component of the identity coincides with the group of inner automorphisms of $G$ $$H_0=\{\chi_g:G\lra G,\; \chi_g(x)=gxg^{-1}:\;g\in G\}\simeq \SO(2)\ltimes \R^2.$$ Explicitly, for $(t,v,z)\in\R^4$ and $g=(t_0, v_0, z_0)$ \begin{eqnarray}\label{eq:ig} \chi_g(t,v,z) & = & (t,v_0+R(t_0)v-R(t)v_0,\\ & & \hspace{1cm}z+\frac12 v_0^tJR(t_0)v-\frac12 v_0^tJR(t)v_0-\frac12(R(t_0)v)^tJR(t)v_0). \nonumber \end{eqnarray} The following diffeomorphisms \begin{eqnarray}\label{eq:is} \psi_1(t,v,z)&=&(-t,Sv,-z),\ \ \text{ where } S(x,y)=(-x,y)\nonumber \\ \psi_2(t,v,z)&=&(-t,R(-t)v,-z),\ \ \label{eq:fi}\\ \psi_3(t,v,z)&=&\psi_1\circ \psi_2(t,v,z)=(t,SR(-t)v,z), \nonumber \end{eqnarray} are isometries of $M$ fixing the element $e$, and they belong to pairwise different connected components of $H$. 
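The last claim can be made explicit by a direct computation from (\ref{eq:is}): the differentials at $e$, written in the block form above, are
$$\psi_{1\,*|_e}=\mathrm{diag}(-1,S,-1),\qquad \psi_{2\,*|_e}=\mathrm{diag}(-1,I,-1),\qquad \psi_{3\,*|_e}=\mathrm{diag}(1,S,1),$$
so the corresponding pairs $(\varepsilon,\det A)$ are $(-1,-1)$, $(-1,1)$ and $(1,-1)$ respectively, while the identity realizes $(1,1)$. Since the four connected components of $H\simeq (\{1,-1\}\times \Or(2))\ltimes \RR^2$ are distinguished precisely by the values of $\varepsilon$ and $\det A$, the isometries $\psi_1$, $\psi_2$ and $\psi_3$ lie in the three components of $H$ other than $H_0$.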
The other three connected components of $H$ are $$ H_0\cdot \psi_1,\qquad H_0\cdot \psi_2 \qquad\text{and } \quad H_0\cdot \psi_3.$$ Besides, the group $H^{aut_G}$ of isometric automorphisms of $G$ is not connected, since it consists of the connected components $H_0$ and $H_0\cdot \psi_1$, where $\psi_1$ is as in (\ref{eq:is}). Notice that $\chi_{(-\pi,0,0)}\circ\psi_2 $ is the group inversion of $G$. \begin{rem} \label{normal} Since $\psi_2 \circ L^G_{(t_1, v_1, z_1)} \circ \psi_2^{-1}$ is not a translation on the left, the subgroup of left-translations $G$ is not normal in $\Iso(M)$. \end{rem} Note that $G$ is normal in $\Iso_0(M)$, the connected component of the identity of $\Iso(M)$, hence $$\Iso_0(M)= G\rtimes H_0.$$ The Lie algebra of $\Iso(G)$, denoted by $\iso$, is the vector space spanned by $\{f_0, f_1, f_2, e_0, e_1, e_2, e_3\}$ with the non-trivial Lie bracket relations: $$ \begin{array}{ccc} {\begin{array}{rcl} {[f_0, f_1]} & = & f_2 \\ {[f_0, e_2]} & = & - e_1 \\ {[e_0, e_1]} & = & e_2 \end{array}} & { \begin{array}{rcl} {[f_0, f_2]} & = & -f_1 \\ {[f_1, e_2]} & = & e_3 \\ {[e_0, e_2]} & = & -e_1 \end{array}} & {\begin{array}{rcl} {[f_0, e_1]} & = & e_2 \\ {[f_2, e_1]} & = & - e_3 \\ {[e_1, e_2]} & = & e_3. \end{array}} \end{array} $$ \subsection{The nilmanifold model} We study the structure of the isometry group of the nilmanifold $M$ with respect to the nilpotent Lie group $N$. The nilpotent Lie group $N$ corresponds to the Lie group $\RR \times \Hh_3$, where $\Hh_3$ denotes the Heisenberg Lie group of dimension three. 
Notice that for any $(t_1,v_1,z_1)\in\R^4$ it holds \begin{equation}L^G_{(t_1, v_1, z_1)}=L^N_{(t_1, v_1, z_1)} \circ \chi_{(t_1,0,0)}=\chi_{(t_1,0,0)} \circ L^N_{(t_1, R(-t_1) v_1, z_1)},\label{lgln}\end{equation} so that the Lie algebra $\mathfrak n $ viewed inside $\iso$ is spanned by the vectors \linebreak $f_0-e_0, e_1, e_2, e_3$ obeying the non-zero Lie bracket relation $$[e_1, e_2]=e_3.$$ Since $\chi_{(t_1,0,0)}\in H^{aut_N} \subseteq H$ we have $$\Iso(M)= N\cdot H \quad \mbox{with }H\simeq(\{1,-1\} \times \Or(2))\ltimes \RR^2,$$ but {\em $N$ is not a normal subgroup of $\Iso_0(M)$}. For the left-invariant metric given in (\ref{metric}) a skew-symmetric derivation $D$ of $(\nn, \la\,,\,\ra)$ must preserve both subspaces $\zz$ and $\mv$, and a straightforward computation gives the non-trivial equalities $$De_1=\eta e_2 \qquad D e_2= -\eta e_1, \quad \eta \in \RR.$$ So the connected component of the identity in the subgroup of isometric automorphisms of $N$ is given by $$H^{aut_N}_0=\{\chi_{(s,0,0)}: s\in\R\}. $$ Recall from (\ref{eq:ig}) that $ \chi_{(s,0,0)}(t,v,z)=(t, R(s)v, z)$ with $ R(s)$ as in (\ref{mat}). Furthermore $\Iso^{aut}(N)$ is not connected. In fact the map $\psi_1$ defined in (\ref{eq:is}) is also an isometric automorphism of $N$. We have thus proved that $$\Iso^{aut}(N) \subsetneq\Iso(M).$$ Compare with \cite{CP}. The nilradical of $\iso$ is the ideal of dimension five spanned by $\{f_1, f_2,$ $e_1, e_2, e_3\}$, which does not contain the subalgebra $\nn$. In fact, $\mathfrak n $ is not contained in the commutator $[\iso, \iso]$. 
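This last assertion can be checked directly from the bracket relations of $\iso$ given above: every bracket of basis elements lies in $\mathrm{span}\{f_1, f_2, e_1, e_2, e_3\}$, so
$$[\iso,\iso]\subseteq \mathrm{span}\{f_1, f_2, e_1, e_2, e_3\},$$
while the generator $f_0-e_0$ of $\nn$ has non-zero components along $f_0$ and $e_0$. Hence $f_0-e_0\notin [\iso,\iso]$ and $\nn \not\subseteq [\iso,\iso]$.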
In terms of the isometry functions, the maximal connected normal nilpotent subgroup of $\Iso_0(M)$ corresponds to $$ \tilde{N}=\{ L^G_{(0,w, z)} \circ \chi_{(0,v,0)}: \quad v,w \in \RR^2, z \in \RR \}$$ and usual computations show that the orbit of a point $(t_0,v_0,z_0)$ is given by $$\mathcal O(t_0,v_0,z_0)=\{(t,v,z)\in M : \quad t=t_0\}.$$ This proves that the action of $\tilde{N}$ {\em is not transitive on the nilmanifold} $M$. \noindent We summarize the results for $(M, g)$. \begin{itemize} \item $N$ acts transitively on $M$ but it is not a normal subgroup of $\Iso_0(M)$. \item The maximal connected normal nilpotent Lie subgroup of $\Iso(M)$, namely the nilradical, does not contain $N$ and its action on $M$ is not transitive. \item $\Iso^{aut}(N)\subsetneq \Iso(M)$, where $\Iso^{aut}(N)=N\rtimes H^{aut_N}$, $\Iso(M)= N \cdot H$. \item $H^{aut_N}$ is not connected. \end{itemize} Compare this with Section 4.2 in \cite{Wo}, and results in \cite{CP}. \section{Isometric actions for non-degenerate center} This section concerns the study of isometric actions. Let $(N, \la\,,\,\ra)$ denote a simply connected 2-step nilpotent Lie group equipped with a left-invariant pseudo-Riemannian metric and let $\la\,,\,\ra$ also denote the metric on its corresponding Lie algebra $\mathfrak n $. Let $\mathfrak z $ be the center of $\mathfrak n $ and assume the restriction of $\la\,,\,\ra$ to $\mathfrak z $ is non-degenerate. Hence there exists an orthogonal decomposition of $\mathfrak n $ \begin{equation} \label{ortdec} \mathfrak n =\mv\oplus \mathfrak z \end{equation} so that $\mv$ is also non-degenerate. This induces on the Lie group $N$ left-invariant orthogonal distributions $\mv N$ and $\mathfrak z N$ such that $TN=\mv N\oplus\mathfrak z N$. 
Denote by $\Iso^{spl}(N)$ the group of isometries of $N$ that preserve the splitting $TN=\mv N \oplus \mathfrak z N$ \cite{CP,Ka1}. Notice that left-translations by elements of the group $N$ preserve this splitting. Thus $$\Iso^{spl}(N) = N\cdot H^{spl}$$ where $H^{spl}$ is the subgroup of isometries which preserve the splitting and fix the identity element of $N$. When the metric is positive definite one has \cite{Eb,Ka1}: \begin{equation} \label{isomgroup} \Iso(N)=\Iso^{aut}(N)=\Iso^{spl}(N). \end{equation} Our purpose here is to analyze the group equalities above in the pseudo-Riemannian case and, where possible, to establish new relationships between these three groups. Since the pseudo-Riemannian metric on $N$ is invariant by left-translations, we study the geometry of $N$ in terms of $(\nn, \la\,,\,\ra)$. Given $u\in\mathfrak n $, denote by $\ad_u^*$ the adjoint transformation of $\ad_u$ with respect to $\la\,,\,\ra$. One verifies that when $u\in \mv$ and $w\in \mathfrak z $ it holds $\ad_u^*w\in\mv,$ while $\ad_u^*w=0$ if $u\in\mathfrak z $ or $u,w\in\mv$. Furthermore each $w\in\mathfrak z $ defines a linear transformation $j(w):\mv\lra\mv$ by $$j(w)u=\ad_u^*w \mbox{ for all } u\in\mv,$$ so that \begin{equation}\label{eq:jota} \left \langle j(w)u,u'\right \rangle =\left \langle w,[u,u']\right \rangle \quad \mbox{ for all } u,u'\in\mv. \end{equation} Thus for $w\in \zz$, the map $j(w)$ belongs to $\sso(\mv)$, the Lie algebra of skew-symmetric maps of $\mv$ with respect to $\la\,,\,\ra$, and one gets that $j: \mathfrak z \to \sso(\mv)$ is a linear homomorphism. As in the Riemannian case, the maps $j(w)$ capture important geometric information about the pseudo-Riemannian space $(N,\la\,,\,\ra)$. 
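By way of illustration (on an example of our choosing), consider the Heisenberg Lie algebra $\nn=\mathrm{span}\{e_1,e_2,e_3\}$ with $[e_1,e_2]=e_3$, endowed with the Lorentzian metric making this basis orthogonal with $\la e_1,e_1\ra=\la e_2,e_2\ra=1$ and $\la e_3,e_3\ra=-1$; here $\mathfrak z =\mathrm{span}\{e_3\}$ is non-degenerate and $\mv=\mathrm{span}\{e_1,e_2\}$. Equation (\ref{eq:jota}) gives $\la j(e_3)e_1,e_2\ra=\la e_3,[e_1,e_2]\ra=\la e_3,e_3\ra=-1$ and $\la j(e_3)e_1,e_1\ra=0$, hence
$$j(e_3)e_1=-e_2, \qquad j(e_3)e_2=e_1,$$
a rotation generator, which is indeed skew-symmetric with respect to the (positive definite) restriction of $\la\,,\,\ra$ to $\mv$.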
The covariant derivative $\nabla$ relative to the Levi-Civita connection of ($N$, $\la\,,\,\ra$) evaluated on left-invariant vector fields is $$2\,\nabla_uw=[u,w]-\ad_u^*w-\ad^*_wu,\quad u,w\in\mathfrak n , $$ which together with the formula for $j:\mathfrak z \lra \mathfrak{so}(\mv)$ in (\ref{eq:jota}) gives \begin{equation}\label{eq:nabla}\left\{ \begin{array}{ll} \nabla_u w=\frac12 \,[u,w] & \mbox{ if } u,w\in\mv,\\ \nabla_u w=\nabla_wu=-\frac12 j(w)u & \mbox{ if } u\in\mv,\,w\in\mathfrak z ,\\ \nabla_u u'=0& \mbox{ if } u, u'\in\mathfrak z . \end{array}\right. \end{equation} Since for simply connected nilpotent Lie groups the exponential map $\exp:\mathfrak n \lra N$ is a diffeomorphism, it is possible to define smooth maps $b:N\lra \mv$ and $a:N\lra \mathfrak z $ such that for a given $n\in N$ one writes \begin{equation} n=\exp(b(n)+a(n))\label{exp}.\end{equation} Let $\{b_1,\ldots,b_m\}$ be a basis of $\mv$ and $\{a_1,\ldots,a_p\}$ be a basis of $\mathfrak z $; then there are functions $\{\beta_i,\alpha_j:N\lra \R:\,i=1,\ldots,m,\,j=1,\ldots,p\}$ for which $$b(n)=\sum_{i=1}^m\beta_i(n)b_i, \qquad a(n)=\sum_{j=1}^p \alpha_j(n)a_j . $$ Thus $\varphi=(\beta_1,\ldots,\beta_m,\alpha_1,\ldots,\alpha_p)$ is a global coordinate system for $N$, where at $n\in N$ it holds \begin{equation}\label{vectores} \begin{array}{rcl} \frac{\partial}{\partial \beta_i}_{|_n}&=&L_{n_*}{|_e}(b_i+\frac12 \sum_{k=1}^m\,[b_i,\beta_k(n)b_k]),\\ \frac{\partial}{\partial \alpha_j}_{|_n}&=&L_{n_*}{|_e}(a_j). \end{array} \end{equation} To verify these equalities see the formulas for the exponential map in \cite{Eb}. Let $\gamma:I\lra N$ be a curve on $N$ and write $b$ and $a$ for the vector-valued maps such that $\gamma(t)=\exp(b(t)+a(t))$. 
Making use of the equalities in (\ref{vectores}) one gets \begin{eqnarray} \frac{d}{dt}\gamma(t)&=&\sum_{i=1}^m \betaidot \frac{\partial}{\partial \beta_i}_{|{\gamma(t)}} +\sum_{j=1}^p \alphajdot \frac{\partial}{\partial \alpha_j}_{|{\gamma(t)}}\nonumber\\ &=&L_{\gamma(t)_*}{|_e}\left( \sum_{i=1}^m \betaidot b_i+ \sum_{j=1}^p \Big(\frac12 \sum_{i,k=1}^m c^j_{ik}\betaidot \beta_k+\alphajdot\Big)a_j\right)\nonumber\\ &=&L_{\gamma(t)_*}{|_e}\left(\stackrel{\cdot}{b} +\stackrel{\cdot}{a}+\frac12\, [\stackrel{\cdot}{b},b]\right),\nonumber \end{eqnarray} where the $c_{ik}^j$ denote the structure constants of the Lie algebra, $[b_i,b_k]=\sum_{j=1}^p c^j_{ik}a_j$. Notice that $L_{\gamma(t)_*}{|_e}(\stackrel{\cdot}{b})$ and $L_{\gamma(t)_*}{|_e}(\stackrel{\cdot}{a}+\frac12\, [\stackrel{\cdot}{b},b])$ are the components of $d\gamma/dt$ in the bundles $\mv N $ and $ \mathfrak z N$ respectively. So the covariant derivative of $\stackrel{\cdot}{\gamma}$ along $\gamma$ is given by \begin{eqnarray}\nonumber \nabla_{d\gamma/dt}d\gamma/dt&=&L_{\gamma(t)_*}{|_e}\left( \nabla_{\sigma(t)}\stackrel{\cdot}{b} +\nabla_{\sigma(t)}\big(\stackrel{\cdot}{a}+\frac12 [\stackrel{\cdot}{b},b]\big)\right), \end{eqnarray} where we denote by $\sigma=\,\stackrel{\cdot}{b} +\stackrel{\cdot}{a}+\frac12 [\stackrel{\cdot}{b},b]$ the corresponding curve in $\mathfrak n $. 
Then \begin{equation}\nabla_{\sigma(t)}\stackrel{\cdot}{b}\,= \nabla_{\sigma(t)}\Big(\sum_{i=1}^m\dot\beta_i b_i\Big)=\sum_{i=1}^m\ddot\beta_i b_i+\sum_{i=1}^m\dot\beta_i \nabla_{\sigma(t)}b_i\nonumber \end{equation} which after (\ref{eq:nabla}) equals \begin{equation} \nabla_{\sigma(t)}\stackrel{\cdot}{b}\,=\,\stackrel{\cdot\cdot}{b}+\sum_{i=1}^m\dot\beta_i \left( \frac12 [\stackrel{\cdot}{b},b_i]-\frac12 j(\stackrel{\cdot}{a}+\frac12[\stackrel{\cdot}{b},b])b_i \right)\,=\,\stackrel{\cdot\cdot}{b}-\frac12 j(\stackrel{\cdot}{a}+\frac12[\stackrel{\cdot}{b},b])\stackrel{\cdot}{b}.\label{nablaxpunto}\end{equation} Similar computations give \begin{equation} \nabla_{\sigma(t)}\left(\stackrel{\cdot}{a}+\frac12 [\stackrel{\cdot}{b},b]\right)\,=\, \,\stackrel{\cdot\cdot}{a}+\frac12 [\stackrel{\cdot\cdot}{b},b]-\frac12 j(\stackrel{\cdot}{a}+\frac12[\stackrel{\cdot}{b},b])\stackrel{\cdot}{b}.\nonumber \end{equation} Therefore a curve $\gamma:I \to N$ is a geodesic if and only if the curve $\sigma:I \to \mathfrak n$ satisfies $\nabla_{\sigma(t)}\sigma(t)=0$, that is, if and only if the following system of equations is satisfied: \begin{equation}\left\{\begin{array}{l} \stackrel{\cdot\cdot}{b}-j(\stackrel{\cdot}{a}_0 ){\stackrel{\cdot}{b}}=0 \\ \stackrel{\cdot}{a}+\frac12 [\stackrel{\cdot}{b},b]=\stackrel{\cdot}{a}_0 \end{array}\right.\quad \,\mbox{ where } \stackrel{\cdot}{a}_0 \mbox{ denotes }\stackrel{\cdot}{a}(0). \label{geodesic} \end{equation} Let $f$ be an automorphism of $N$. Its differential at the identity $e$ satisfies $f_*(\mathfrak z) \subseteq \mathfrak z$, and if moreover $f$ is an isometry its differential also preserves the orthogonal complement $\mathfrak z^\bot=\mv$: $f_*( \mv )\subseteq \mv$. In view of $f \circ L_n = L_{f(n)} \circ f$ for every $n\in N$, $f$ preserves the invariant distributions $\mathfrak z N$ and $\mv N$ whenever it is an isometric automorphism.
Therefore \begin{equation}\label{autis} \Iso^{aut}(N) \subseteq \Iso^{spl}(N). \end{equation} \begin{pro} \label{autspl} Let $N$ be a simply connected 2-step nilpotent Lie group with a left-invariant pseudo-Riemannian metric such that its center is non-degenerate. Then $$\Iso^{spl}(N)= \Iso^{aut}(N).$$ \end{pro} \begin{proof} In view of (\ref{autis}) we must prove that every isometry $f\in H^{spl}$ is an automorphism of $N$. It suffices to establish the following equality for $f$: $$f_*(j(u)w)=j(f_*u)f_*w \qquad \mbox{ for all }w\in \mv, u\in \mathfrak z. $$ Let $\gamma=\exp(b(t)+a(t))$ denote the geodesic through $e$ such that \linebreak $\stackrel{\cdot}{\gamma}(0)=w+u$, that is, $\stackrel{\cdot}{b}(0)=w$ and $\stackrel{\cdot}{a}(0)=u$. Since $f_*$ is an isometry preserving the splitting, the geodesic $\tilde{\gamma}=f\circ \gamma$ can be written as $\tilde{\gamma}(t)=\exp (f_*b(t)+f_*a(t))$. The Equations (\ref{nablaxpunto}) and (\ref{geodesic}) at $t=0$ for both geodesics $\gamma$ and $\tilde{\gamma}$ give \begin{eqnarray} j(f_*u)f_*w&=&\left( j(f_*\stackrel{\cdot}{a}_0) f_*\stackrel{\cdot}{b}\right)_{t=0}\nonumber \\ &=&2\left(\nabla_{f_*(\stackrel{\cdot}{b} +(\stackrel{\cdot}{a}+\frac12 [\stackrel{\cdot}{b},b]))}f_*\stackrel{\cdot}{b}\right)_{t=0}\nonumber\\ &=&2\,f_*\left(\nabla_{\stackrel{\cdot}{b} +(\stackrel{\cdot}{a}+\frac12 [\stackrel{\cdot}{b},b])}\stackrel{\cdot}{b}\right)_{t=0}\nonumber\\ &=&f_*(j(u)w), \nonumber \end{eqnarray} as claimed. Now consider $w_1,w_2\in\mv$.
For any $u\in\mathfrak z$ it holds \begin{eqnarray} \left \langle f_*[w_1,w_2],u\right \rangle &=&\left \langle [w_1,w_2],f^{-1}_*u\right \rangle\;=\;\left \langle w_2\;,\;j(f^{-1}_*u)w_1\right \rangle\;=\nonumber\\ &=&\left \langle w_2\;,\;j(f^{-1}_*u)(f^{-1}_*(f_*w_1))\right \rangle\;=\;\left \langle w_2\;,\;f^{-1}_*(j(u)f_*w_1)\right \rangle\;=\nonumber\\ &=&\left \langle f_*w_2\;,\;j(u)f_*w_1\right \rangle\;=\;\left \langle [f_*w_1,f_*w_2]\,,\,u\right \rangle ,\nonumber \end{eqnarray} which together with the non-degeneracy of $\mathfrak z$ implies that $f_*$ is a Lie algebra automorphism. Consequently $f\in H^{aut_N}$. \end{proof} The proof above extends to the pseudo-Riemannian setting the one given by Kaplan in \cite{Ka1} for the Riemannian case. Below we investigate geometric properties of pseudo-Riemannian 2-step nilpotent Lie groups in order to find conditions under which the group of isometries of $N$ preserving the splitting coincides with the full isometry group of $N$. The Ricci tensor of $(N, \langle\,,\,\rangle)$ can be seen at the Lie algebra level as the bilinear form on $\mathfrak n$ defined through the curvature tensor $R$ by $$\Ric(u,w)=\operatorname{trace}(\xi\lra R(\xi,u)w)\quad\mbox{ for }\quad u,w\in \mathfrak n.$$ Since $\Ric$ is a symmetric form on $\mathfrak n$ there exists a linear operator $\Rc:\mathfrak n\lra \mathfrak n$ such that $$\left \langle \Rc u,w\right \rangle=\Ric(u,w),$$ which is called the {\em Ricci operator}. The {\em scalar curvature} $s$ of $N$ is the trace of the Ricci operator $\Rc$. In the pseudo-Riemannian case the formulas for the Ricci operator are slightly different from those in the Riemannian case (see \cite{Eb}). Recall that a basis $\{w_1,\ldots,w_n\}$ of $\mathfrak n$ is said to be {\em orthonormal} if $\left \langle w_i,w_j\right \rangle =\pm \delta_{ij}$.
The proof of the next proposition follows from standard computations and can be found in \cite{Ov}. \begin{pro}\label{pro1} Let $(N, \langle\,,\,\rangle)$ denote a 2-step nilpotent Lie group equipped with a left-invariant pseudo-Riemannian metric such that the center is non-degenerate. \begin{enumerate} \item The Ricci operator leaves $\mv$ and $\mathfrak z$ invariant. \item If $\{a_1,\ldots,a_p\}$ is an orthonormal basis of $\mathfrak z$ then \begin{equation}\label{eq:tv} \Rc|_{\mv}=\frac12 \sum_{k=1}^p\varepsilon_k j(a_k)^2 \qquad \mbox{ with } \varepsilon_k=\left \langle a_k,a_k\right \rangle. \end{equation} \item $\Ric(u,u')=-\frac{1}{4} \operatorname{trace}(j(u)j(u'))$, for all $u, u'\in \mathfrak z$. \end{enumerate} \end{pro} We proceed with the study of the eigenvalues of the Ricci operator $\Rc$. Recall that if $U$ is a real vector space, its complexification is the vector space $$U^\CC=U\otimes_\RR \CC,$$ which satisfies $\dim_\RR U=\dim_\CC U^\CC$. A real linear transformation $T$ of $U$ defines a $\CC$-linear operator on $U^\CC$ by $T(u\otimes z)=T(u)\otimes z$ for all $u\in U$. In addition, if $U=U_1\oplus U_2$ then $U^\CC= U_1^\CC\oplus U_2^\CC$, and $U_i$ is invariant under $T$ if and only if $U_i^\CC$ is invariant under the complexified transformation $T$. Denote by $\lambda_1,\lambda_2,\ldots,\lambda_s$ the distinct eigenvalues of the (complexified) Ricci operator $\Rc$. Since the metric is left-invariant, the eigenvalues of $\Rc$ are constant on $N$. The subspace of $\mathfrak n^\CC$ associated to the eigenvalue $\lambda_i$ is \begin{equation}\label{Vlambda} V_{\lambda_i}=\ker (\Rc-\lambda_i I)^{r_i},\end{equation} where $r_i$ is the multiplicity of $\lambda_i$ in the minimal polynomial of $\Rc$.
The Jordan decomposition theorem states that \begin{equation}\label{decn} \mathfrak n^\CC =V_{\lambda_1}\oplus V_{\lambda_2}\oplus\cdots \oplus V_{\lambda_s}.\end{equation} Translating the spaces above on the left, at any point $n\in N$ one obtains \begin{equation} \label{tnc}T_nN^\CC= L_{n\,*}|_e V_{\lambda_1}\oplus L_{n\,*}|_e V_{\lambda_2}\oplus\cdots \oplus L_{n\,*}|_e V_{\lambda_s}. \end{equation} The subspace $L_{n\,*}|_e V_{\lambda_i}$ of $T_nN^\CC$ is the one corresponding to the eigenvalue $\lambda_i$ of the Ricci operator at $n$, $\Rc_n$; that is, $$L_{n\,*}|_e V_{\lambda_i}=\ker(\Rc_n-\lambda_i I)^{r_i}.$$ Given an isometry $f$ of $N$, it holds \begin{equation}f_*|_n \,\Rc_n=\Rc_{f(n)}\,f_*|_n \quad\mbox{for all} \quad n\in N,\label{difatp}\end{equation} and this formula is also valid for the corresponding complexified linear transformations. The last two equations yield \begin{eqnarray}\label{a} u\in L_{n\,*}|_e V_{\lambda_i} &\Leftrightarrow& (\Rc_n-\lambda_{i} I)^{r_{i}} u=0\nonumber\\ &\Leftrightarrow& f_*|_n((\Rc_n-\lambda_{i} I)^{r_{i}} u)=0\nonumber\\ &\Leftrightarrow& (\Rc_{f(n)}-\lambda_{i} I)^{r_{i}} (f_*|_n u)=0\nonumber\\ &\Leftrightarrow& f_*|_n( u) \in L_{f(n)\,*}|_e V_{\lambda_i} .\end{eqnarray} Therefore the direct sum of vector spaces in (\ref{tnc}) is preserved by isometries. \begin{lem} \label{lm2} Let $(N,\langle\,,\,\rangle)$ be a 2-step nilpotent Lie group such that $\langle\,,\,\rangle$ is a pseudo-Riemannian left-invariant metric for which the center is non-degenerate. Assume \begin{eqnarray}\label{condic} \mv^\CC&=&V_{\lambda_1}\oplus\cdots\oplus V_{\lambda_j},\nonumber\\ \mathfrak z^\CC&=&V_{\lambda_{j+1}}\oplus\cdots\oplus V_{\lambda_s}, \end{eqnarray} for the distinct eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_s$ of the Ricci operator $\Rc$, with $V_{\lambda_i}$ the generalized eigenspace corresponding to $\lambda_i$.
Then every isometry of $N$ preserves the splitting $TN=\mv N\oplus \mathfrak z N$, that is, $$\Iso(N)=\Iso^{spl}(N).$$ \end{lem} \begin{proof} The hypothesis in (\ref{condic}) implies that for any $n\in N$, \begin{eqnarray} \mv N_n^\CC&=& L_{n\,*}|_eV_{\lambda_1}\oplus\cdots\oplus L_{n\,*}|_e V_{\lambda_j}, \nonumber\\ \mathfrak z N_n^\CC&=&L_{n\,*}|_e V_{\lambda_{j+1}}\oplus\cdots\oplus L_{n\,*}|_e V_{\lambda_s}.\nonumber\end{eqnarray} Let $f$ be an isometry of $N$; then $f_*|_n L_{n\,*}|_e V_{\lambda_i}= L_{f (n)\,*}|_e V_{\lambda_i}$ as a consequence of the conditions in (\ref{a}). Therefore $$f_*|_n(\mv N_n^\CC)=\mv N_{f(n)}^\CC \quad \mbox{ and }\quad f_*|_n(\mathfrak z N_n^\CC)=\mathfrak z N_{f(n)}^\CC, $$ from which we conclude $f_*|_n (\mv N_n)=\mv N_{f(n)}$, $ f_*|_n (\mathfrak z N_n)=\mathfrak z N_{f(n)}$ and so $f\in \Iso^{spl}(N)$. \end{proof} A large family of nilpotent Lie algebras satisfying the hypothesis of the lemma above is the family of pseudo-$H$-type Lie algebras.\footnote{We find this name for the first time in \cite{Ci}; see also \cite{JPP}.} A nilpotent Lie algebra $\mathfrak n$ (or its corresponding simply connected Lie group) equipped with a metric $\langle\,,\,\rangle$ for which the center is non-degenerate is said to be {\em of pseudo-$H$-type} whenever it satisfies \begin{equation}\label{tipoH} j(u)^2=-\left \langle u,u\right \rangle I\qquad \mbox{ for all }u\in\mathfrak z. \end{equation} Positive definite metric Lie algebras satisfying (\ref{tipoH}) are known as $H$-type Lie algebras, introduced by Kaplan in \cite{Ka1}. $H$-type Lie groups are 2-step nilpotent and are natural generalizations of the Iwasawa $N$-groups associated to semisimple Lie groups of real rank one. Notice that pseudo-$H$-type Lie algebras are not necessarily non-singular, since vectors of zero norm may satisfy (\ref{tipoH}) trivially.
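Condition (\ref{tipoH}) is sensitive to the signature of the metric, and this can be checked numerically. The following sketch (ours, not part of the original text; all names are ad hoc) computes $j(e_3)$ for the three-dimensional Heisenberg algebra with $[e_1,e_2]=e_3$ from the defining relation $\langle j(e_3)w,w'\rangle=\langle e_3,[w,w']\rangle$: flipping the sign of $\langle e_1,e_1\rangle$ switches $j(e_3)^2=-I$ (the classical $H$-type case) to $j(e_3)^2=I$, as asserted for the Lorentzian metric in Example \ref{exa1}.

```python
import numpy as np

def j_e3(eps1):
    """j(e3) on v = span{e1, e2} for h3 with [e1, e2] = e3 and metric
    <e1,e1> = eps1, <e2,e2> = <e3,e3> = 1 (orthogonal basis)."""
    Gv = np.diag([eps1, 1.0])                 # Gram matrix of v
    B = np.array([[0.0, 1.0], [-1.0, 0.0]])  # B[i, k] = <e3, [e_i, e_k]>
    # <j(e3) e_i, e_k> = B[i, k]  <=>  Gv @ J = B^T  (columns of J = images)
    return np.linalg.solve(Gv, B.T)

I2 = np.eye(2)
assert np.allclose(j_e3(+1.0) @ j_e3(+1.0), -I2)  # Riemannian h3: H-type
assert np.allclose(j_e3(-1.0) @ j_e3(-1.0), +I2)  # Lorentzian h3: j(e3)^2 = I
```

In the Lorentzian case $j(e_3)^2=+I=\langle e_3,e_3\rangle I$, so (\ref{tipoH}) fails even though the center is non-degenerate.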
For any nilpotent Lie group of pseudo-$H$-type the three groups in (\ref{isomgroup}) coincide. \begin{thm} \label{teo1} Let $(N,\langle\,,\,\rangle)$ denote a pseudo-$H$-type Lie group. Then \begin{enumerate} \item $\Iso^{aut}(N)=\Iso^{spl}(N)=\Iso(N).$ \item the scalar curvature of $(N,\langle\,,\,\rangle)$ is negative. \end{enumerate} \end{thm} \begin{proof} The first equality in 1. holds by Proposition \ref{autspl}. We use the previous lemma to prove the second one. Indeed, we show that for pseudo-$H$-type Lie algebras the Ricci operator is diagonalizable, negative definite on $\mv$ and positive definite on $\mathfrak z$. Let $\{a_k\}_{k=1}^p$ be an orthonormal basis of $\mathfrak z$. Since $N$ is of pseudo-$H$-type, $j(a_k)^2=-\left \langle a_k,a_k\right \rangle I_m=-\varepsilon_k I_m$ with $\varepsilon_k=\pm 1$ for $k=1,\ldots, p$ and $m=\dim \mv$. Then, according to (\ref{eq:tv}), the Ricci operator satisfies \begin{equation} \Rc|_{\mv}=\frac12\sum_{k=1}^p\varepsilon_k j(a_k)^2= -\frac12 \sum_{k=1}^p\varepsilon_k^2 I_m=-\frac{p}{2} I_m, \end{equation} so $\Rc$ is negative definite on $\mv$. On the other hand, for $u\in \mathfrak z$, $$\Ric(u,u)=-\frac14 \operatorname{trace} (j(u)^2)=\frac14 \operatorname{trace} (\left \langle u,u\right \rangle I_m)=\frac{\left \langle u,u \right \rangle}{4}\,m.$$ Hence $\left \langle \Rc u,u\right \rangle =\Ric(u,u)=\left \langle \frac{m}{4} u,u \right \rangle$ for all $u\in\mathfrak z$. Polarizing this identity one gets $\left \langle \Rc u,u'\right \rangle=\left \langle \frac{m}{4} u,u' \right \rangle$ for any $u,u'\in\mathfrak z$ and therefore $\Rc=\frac{m}4\,I_p$ on $\mathfrak z$. In particular $\Rc$ is positive definite on $\mathfrak z$.
Clearly, the eigenvalues of $\Rc$ are $\lambda_1=-p/2$ and $\lambda_2=m/4$, and the subspace $V_{\lambda_i}$ in (\ref{Vlambda}) is the eigenspace corresponding to $\lambda_i$ for each $i=1,2$. Moreover $\mv=V_{\lambda_1}$ and $\mathfrak z=V_{\lambda_2}$, so the requirements (\ref{condic}) are satisfied and the first assertion of the theorem follows. From the proof above, the trace of the Ricci operator is $s=-pm/4$ where $p=\dim \mathfrak z$ and $m=\dim \mv$, hence negative, and 2. follows. \end{proof} A natural question at this point is whether the converse of the previous result holds. The next example gives a negative answer; that is, pseudo-$H$-type Lie groups are not the only ones for which $\Iso^{aut}(N)=\Iso(N)$. Recall that solvable Lie groups endowed with a Riemannian left-invariant metric have non-positive scalar curvature (see Theorem 6 in \cite{Je}). As the next example shows, this assertion does not hold in the indefinite case. \begin{exa}\label{exa1} Let $\Hh_3$ be the Heisenberg Lie group and denote by $\hh_3$ its Lie algebra, which is spanned by vectors $e_1,e_2,e_3$ satisfying the non-zero Lie bracket relation $[e_1,e_2]=e_3$. Consider the left-invariant metric on $\Hh_3$ induced by the metric on $\hh_3$ given by $$-\left \langle e_1,e_1\right \rangle=\left \langle e_2,e_2\right \rangle=\left \langle e_3,e_3\right \rangle=1; $$ in particular the center of $\hh_3$ is non-degenerate. A computation of $j(e_3)$ shows that $j(e_3)^2= I$, implying that $\hh_3$ is not a pseudo-$H$-type Lie algebra. By Proposition \ref{pro1}, the Ricci operator satisfies $\Rc|_\mv=\frac12 I$ and $\Rc(e_3)=-\frac12 e_3$. So $\Rc$ is diagonalizable and the scalar curvature of $(\Hh_3,\langle\,,\,\rangle)$ is $s=1/2$. Also $\mv=V_{1/2}$ and $\mathfrak z=V_{-1/2}$, thus Lemma \ref{lm2} leads us to $$\Iso(\Hh_3)=\Iso^{aut}(\Hh_3).
$$ In \cite{BOV1} it is shown that this non-flat pseudo-Riemannian nilpotent Lie group satisfies $\Iso^{aut}(\Hh_3)=\Hh_3\rtimes\Or(1,1)$. \end{exa} \begin{rem} The Lorentzian nilpotent Lie group $(N, g_N)$ of the previous section shows that in the pseudo-Riemannian case the isometries need not preserve the splitting $\mv N \oplus \mathfrak z N$, even when the center is non-degenerate (compare with \cite{CP}). Using Proposition \ref{pro1} one gets that the Ricci tensor of this manifold is non-zero but nilpotent. Then the Lie algebra $\mathfrak n$ of $N$ is the subspace defined in (\ref{Vlambda}) corresponding to the zero eigenvalue, that is, $V_0=\mathfrak n$. Since $V_0$ has non-trivial intersections with $\mv$ and $\mathfrak z$, a decomposition as in (\ref{condic}) is not possible. The fact that the Ricci operator is nilpotent implies that the scalar curvatures of the Lie groups $(G,g_G)$ and $(N,g_N)$ are zero, but they are not Ricci flat. Note that $G$ is 3-step solvable (see Theorem 7 in \cite{Je}). \end{rem} Once one knows that for $(N, \langle\,,\,\rangle)$ it holds that $H^{aut_N}=H$, the isometry group can be computed as follows. Recall that since $N$ is simply connected, we do not distinguish between the group of automorphisms of $N$ and that of $\mathfrak n$. The isotropy group $H$ is given by \begin{equation}\label{auton} H = \{(\phi, T) \in \Or(\mathfrak z, \langle\,,\,\rangle_{\mathfrak z}) \times \Or(\mv, \langle\,,\,\rangle_{\mv}) : Tj(w)T^{-1} = j(\phi w), \quad w \in \mathfrak z\} \end{equation} where $\Or(V, \langle\,,\,\rangle)$ denotes the group of isometric linear maps of $(V, \langle\,,\,\rangle)$.
The Lie algebra of $G$ is given by \begin{equation}\label{deron} \hh = \{(A,B) \in \sso(\mathfrak z, \langle\,,\,\rangle_{\mathfrak z}) \times \sso(\mv, \langle\,,\,\rangle_{\mv}) : [B, j(w)] = j(Aw), \quad w \in \mathfrak z\} \end{equation} where $\sso(V, \langle\,,\,\rangle)$ denotes the set of skew-symmetric linear maps of $(V, \langle\,,\,\rangle)$. In fact, let $\psi$ denote an orthogonal automorphism of $(\mathfrak n, \langle\,,\,\rangle)$. Then $\psi(\mathfrak z)\subseteq \mathfrak z$, and since $\mv = \mathfrak z^{\perp}$, also $\psi(\mv)\subseteq \mv$. Set $\phi := \psi_{|_{\mathfrak z}}$ and $T := \psi_{|_{\mv}}$, so that $(\phi, T) \in \Or(\mathfrak z, \langle\,,\,\rangle_{\mathfrak z}) \times \Or(\mv, \langle\,,\,\rangle_{\mv})$ satisfies, for $u,v\in\mv$ and $x\in\mathfrak z$, $$\begin{array}{rcl} \left \langle \phi[u, v], x\right \rangle & = &\left \langle [Tu, Tv], x\right \rangle \quad \mbox{ if and only if }\\ \left \langle j(\phi^{-1}x)u, v \right \rangle & = & \left \langle j(x)Tu, Tv\right \rangle, \end{array}$$ which implies (\ref{auton}). Differentiating (\ref{auton}) one gets (\ref{deron}). The following proposition is the correct version of that in \cite{Ov}: \begin{pro} Let $N$ denote a simply connected 2-step nilpotent Lie group endowed with a left-invariant pseudo-Riemannian metric, with respect to which the center is non-degenerate and such that $H=H^{aut_N}$. Then the group of isometries is $$\Iso(N)=N \rtimes H^{aut_N} $$ where $N$ acts as the group of left-translations by elements of $N$ and the isotropy subgroup $H$ is given by (\ref{auton}) with Lie algebra as in (\ref{deron}).
\end{pro} \section{Isometric actions for degenerate center} Let $N$ denote a simply connected nilpotent Lie group with Lie algebra $\mathfrak n$ and fix a direct sum of vector spaces $$\mathfrak n = \mv \oplus \mathfrak z.$$ Note that $\mv$ does not necessarily coincide with $\mathfrak z^{\perp}$. By translations on the left, set $TN=\mv N \oplus \mathfrak z N$, a decomposition into left-invariant distributions, and define $\Iso^{spl}(N)$ as the group of isometries preserving this splitting. Assume $(N, \langle\,,\,\rangle)$ is endowed with a bi-invariant metric (in this case $\mathfrak z^{\perp} \subseteq \mathfrak z$). Here we show that the three groups $\Iso(N)$, $\Iso^{aut}(N)$ and $\Iso^{spl}(N)$ can be very different; moreover, there may be no containment relation between $\Iso^{aut}(N)$ and $\Iso^{spl}(N)$. \begin{lem}\label{lm1} Let $(N, \langle\,,\,\rangle)$ be a 2-step nilpotent Lie group equipped with a bi-invariant metric and let $\mathfrak n$ denote its Lie algebra. Then for any direct sum decomposition $\mathfrak n=\mathfrak z \oplus \mv$ the automorphism $Ad(n)$ does not preserve $\mv$ whenever $n$ is non-central. \end{lem} \begin{proof} Since the Lie group is 2-step nilpotent, $\ad_w^2=0$ for every $w\in\mathfrak n$, so for any $n\in N$ there exists $w\in \mathfrak n$ such that $n=\exp\,(w)$ and $Ad(n)=e^{\ad_w}=I+ \ad_w$. Suppose $n$ is non-central; then $n=\exp (w)$ with $w\notin \mathfrak z$, and let $u\in \mathfrak n$ be such that $[w,u]\neq 0$. Write $u=b+ a$ with $a\in \mathfrak z$ and $b\in\mv-\{0\}$. Then $Ad(n)(b)=b+ [w,b]=b+ [w,u]$ has a non-zero component in $\mathfrak z$. \end{proof} \begin{rem} Let $N$ denote a pseudo-Riemannian nilpotent Lie group admitting a lattice $\Gamma$ such that $\Gamma \backslash N$ is compact.
Let $\Iso_0^{spl}(\Gamma \backslash N)$ denote the connected component of the subgroup of isometries preserving a fixed splitting of $T(\Gamma \backslash N)$. In \cite{CP2} the authors conclude that $\Iso_0^{spl}(\Gamma \backslash N)\simeq T^m$, an $m$-dimensional torus. Their proof makes use of the fact that there are no non-trivial inner automorphisms in this group (Lemma \ref{lm1}). \end{rem} The fact that $N$ is endowed with a bi-invariant metric implies that the conjugation map $\chi_n$, given by $\chi_n(g)=n g n^{-1}$, is an isometry for all $n\in N$. The lemma above says that none of these automorphisms preserves any possible splitting $TN= \mv\,N \oplus \mathfrak z\,N$, with the exception of the trivial ones; thus $\Iso^{spl}(N)\neq \Iso^{aut}(N)$ and $\Iso^{spl}(N)\neq \Iso (N)$ since $N$ is non-abelian. As already mentioned, for bi-invariant metrics the map $g \to g^{-1}$ is an isometry which is not an automorphism unless the group is abelian. Thus $\Iso^{aut}(N) \neq \Iso(N)$. As stated above, the group $\Iso^{aut}(N)$ is not contained in the group $\Iso^{spl}(N)$. The next example shows that $\Iso^{spl}(N)$ need not be contained in $\Iso^{aut}(N)$. \begin{exa} Consider $\R^6$ with the canonical differentiable structure and let $g$ denote the following pseudo-Riemannian metric on $\RR^6$: \begin{equation}\label{metric2} g=d x_1dx_6+dx_3dx_4-dx_2dx_5. \end{equation} This manifold admits a simply transitive isometric action of the 2-step nilpotent Lie group $N$ which is modelled on $\R^6$ with multiplication such that for $p=(x_1,x_2,x_3,x_4,x_5,x_6)$ and $q=(y_1,y_2,$ $y_3,y_4,y_5,y_6)$ it holds \begin{eqnarray} p\cdot q&=&\left( x_1+y_1,x_2+y_2,x_3+y_3,x_4+y_4+\frac12 (x_1y_2-x_2y_1),\right.\nonumber\\ &&\;\quad\left. x_5+y_5+\frac12 (x_1y_3-x_3y_1), x_6+y_6+\frac12 (x_2y_3-x_3y_2)\right).
\end{eqnarray} The corresponding metric on $N$ induced by (\ref{metric2}) is invariant under translations on the left and also on the right; hence $g$ is a bi-invariant (pseudo-Riemannian) metric on $N$. Its Lie algebra $\mathfrak n$ admits a basis $\{e_i\}_{i=1}^6$ obeying the non-zero Lie bracket relations $$[e_1,e_2]=e_4,\qquad [e_1,e_3]=e_5,\qquad [e_2,e_3]=e_6. $$ The Lie algebra $\mathfrak n$ is the free 2-step nilpotent Lie algebra on three generators and $N$ is its corresponding simply connected Lie group. At the Lie algebra level, the bi-invariant metric (\ref{metric2}) induces the ad-invariant metric \cite{dBO} satisfying $$\left \langle e_1,e_6\right \rangle=\left \langle e_3,e_4\right \rangle =-\left \langle e_2,e_5\right \rangle=1.$$ The center $\mathfrak z$ of $\mathfrak n$ is spanned by $e_4,e_5,e_6$ and it is totally isotropic ($\mathfrak z^\bot=\mathfrak z$\,). Moreover \begin{equation}\mathfrak n=\mv\oplus \mathfrak z\label{spl} \end{equation} where $\mv=span \{e_1,e_2,e_3\}$ is also a totally isotropic subspace. The splitting of $TN$ by left-invariant distributions associated to (\ref{spl}) at each point $p$ of $N$ is given by $$\mv N_p=span\{X_1|_p,X_2|_p,X_3|_p\} \quad\mbox{ and }\quad\mathfrak z{N}_p=span\{X_4|_p,X_5|_p,X_6|_p\}$$ where $X_i$ is the left-invariant vector field of $N$ such that ${X_i}_{|_0}=e_i$. Standard computations show that $\left \langle \partial_i,\partial_j\,\right \rangle=\left \langle X_i,X_j\,\right \rangle$ for all $i,j$, where $\partial_i=\frac{\partial}{\partial x_i}$.
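The ad-invariance of this metric, $\langle[x,y],z\rangle+\langle y,[x,z]\rangle=0$, can be verified directly from the structure constants. The following numerical sketch (ours, not from the original text; names are ad hoc) checks it on the basis $\{e_i\}_{i=1}^6$, together with the total isotropy of $\mv$ and $\mathfrak z$:

```python
import numpy as np

n = 6
# Structure constants of the free 2-step nilpotent Lie algebra on three
# generators (0-indexed): [e1,e2] = e4, [e1,e3] = e5, [e2,e3] = e6.
C = np.zeros((n, n, n))                 # [e_i, e_j] = sum_k C[i, j, k] e_k
for i, j, k in [(0, 1, 3), (0, 2, 4), (1, 2, 5)]:
    C[i, j, k], C[j, i, k] = 1.0, -1.0

# Gram matrix of the metric <e1,e6> = <e3,e4> = -<e2,e5> = 1.
G = np.zeros((n, n))
G[0, 5] = G[5, 0] = 1.0
G[2, 3] = G[3, 2] = 1.0
G[1, 4] = G[4, 1] = -1.0

# ad-invariance: <[e_i,e_j], e_k> + <e_j, [e_i,e_k]> = 0 for all i, j, k.
lhs = np.einsum('ijm,mk->ijk', C, G)    # <[e_i, e_j], e_k>
rhs = np.einsum('ikm,jm->ijk', C, G)    # <e_j, [e_i, e_k]>
assert np.allclose(lhs + rhs, 0.0)

# v = span{e1,e2,e3} and the center span{e4,e5,e6} are totally isotropic.
assert np.allclose(G[:3, :3], 0.0) and np.allclose(G[3:, 3:], 0.0)
```

The same tensor contraction also confirms, for instance, that $\langle[e_1,e_2],e_3\rangle=\langle e_4,e_3\rangle=1$ while $\langle e_2,[e_1,e_3]\rangle=\langle e_2,e_5\rangle=-1$.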
For each $\tau\in \RR$ consider the diffeomorphism $F^\tau$ defined by $$\begin{array}{l} F^{\tau}(x_1,x_2,x_3,x_4,x_5,x_6)=(\cosh \tau\, x_1+\sinh \tau\,x_3,x_2,\sinh \tau\, x_1+\cosh \tau\, x_3,\\\hspace{4.2cm}\cosh \tau\, x_4-\sinh \tau\, x_6,x_5, -\sinh \tau\, x_4+\cosh \tau\, x_6),\end{array}$$ which is an isometry of $N$. Indeed, its differential at $p$, in the ordered basis $\{\partial_1,\,\partial_3,\,\partial_4,\,\partial_6,\,\partial_2,\,\partial_5\}$ of $T_pN$, is $$dF^{\tau}_p= \left( \begin{array}{ccc} A&0&0\\ 0&A^{-1}&0\\ 0&0&I_{2\times 2}\end{array}\right)\quad\mbox{ where }\quad A=\left( \begin{array}{cc} \cosh \tau &\sinh \tau\\ \sinh \tau &\cosh \tau\end{array}\right)\in \Or(1,1).$$ The map $F^{\tau}$ preserves the metric in (\ref{metric2}) for all $\tau \in \RR$. Moreover, it preserves the splitting $TN=\mv N\oplus \mathfrak z N $. In fact $dF^{\tau}_p (\mathfrak z{N}_p)=\mathfrak z{N}_{F^\tau (p)}$ and \begin{eqnarray} dF^{\tau}_pX_1|_p &=&\cosh \tau\,X_1|_{F^{\tau}(p)}+\sinh \tau\, X_3|_{F^{\tau}(p)}\in\mv N_{F^\tau(p)},\nonumber\\ dF^{\tau}_pX_2|_p&=&X_2|_{F^{\tau}(p)}\in \mv N_{F^{\tau}(p)},\nonumber\\ dF^{\tau}_pX_3|_p &=&\sinh \tau\,X_1|_{F^{\tau}(p)}+\cosh \tau\, X_3|_{F^{\tau}(p)}\in\mv {N}_{F^{\tau}(p)}.\nonumber \end{eqnarray} Therefore $dF^{\tau}_p(\mv {N}_p) =\mv {N}_{F^{\tau}(p)}$ and $F^{\tau}\in \Iso^{spl}(N)$. Finally, notice that $dF^{\tau}_0$ is not a Lie algebra isomorphism if $\tau\neq 0$; therefore $F^{\tau}\notin \Iso^{aut}(N)$ for $\tau\neq 0$. Furthermore, $\Iso(N)=N\cdot \Or(3,3)$ by Lemma \ref{iso}. \end{exa} We have thus exhibited, on the Lie group $N$ of the previous example, isometries preserving a fixed splitting which are not automorphisms. \begin{pro}Let $(N, \langle\,,\,\rangle)$ denote a 2-step nilpotent Lie group endowed with a bi-invariant metric.
Then the center is degenerate \cite{dBO} and \begin{itemize} \item $\Iso^{spl}(N)\neq\Iso^{aut}(N)$; \item $\Iso^{spl}(N)\subsetneq\Iso(N)$ and $\Iso^{aut}(N)\subsetneq\Iso(N)$. \end{itemize} \end{pro} \end{document}
\begin{document} \newcounter{save}\setcounter{save}{\value{section}} \title{Profit Maximizing Prior-free Multi-unit \\Procurement Auctions with Capacitated Sellers} \author{Arupratan Ray\inst{1}, Debmalya Mandal\inst{2}, \and Y. Narahari\inst{1}} \institute{Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India, \email{[email protected], [email protected]} \and Harvard School of Engineering and Applied Sciences, Cambridge, MA, \email{[email protected]}} \maketitle \begin{abstract} In this paper, we derive bounds for profit maximizing prior-free procurement auctions where a buyer wishes to procure multiple units of a homogeneous item from $n$ sellers who are strategic about their per unit valuation. The buyer earns a profit by reselling these units in an external consumer market. The paper looks at three scenarios of increasing complexity: (1) sellers with unit capacities, (2) sellers with non-unit capacities which are common knowledge, and (3) sellers with non-unit capacities which are private to the sellers. First, we look at unit capacity sellers where the per unit valuation is private information of each seller and the revenue curve is concave. For this setting, we define two benchmarks. We show that no randomized prior-free auction can be constant competitive against either of these two benchmarks. However, for a lightly constrained benchmark we design a prior-free auction PEPA (Profit Extracting Procurement Auction) which is 4-competitive, and we show that this bound is tight. Second, we study a setting where the sellers have non-unit capacities that are common knowledge and derive similar results. In particular, we propose a prior-free auction PEPAC (Profit Extracting Procurement Auction with Capacity) which is truthful for any concave revenue curve. Third, we obtain results in the inherently harder bi-dimensional case where the per unit valuation as well as the capacities are private information of the sellers.
We show that PEPAC is truthful and constant competitive for the specific case of linear revenue curves. We believe that this paper represents the first set of results on single-dimensional and bi-dimensional profit maximizing prior-free multi-unit procurement auctions. \end{abstract} \section{Introduction} Procurement auctions for awarding contracts to supply goods or services are prevalent in many modern resource allocation situations. In several of these scenarios, the buyer plays the role of an intermediary who purchases goods or services from the suppliers and resells them in the consumer market. For example, in the retail sector, an intermediary procures products from different vendors (perhaps through an auction) and resells them in consumer markets for a profit. In a cloud computing setting \cite{abhinandan2013}, an intermediary buys cloud resources from different service providers (again through an auction) and resells these resources to requesters of cloud services. The objective of the intermediary in each of these cases is to maximize the profit earned in the process of reselling. Solving such problems via an optimal auction of the kind discussed in the auction literature \cite{myerson1981optimal} inevitably requires the assumption of a \emph{prior} distribution on the sellers' valuations. The requirement of a known prior distribution often places severe practical limitations. One has to be extremely careful in using a prior distribution estimated from the past transactions of the bidders, since the bidders may strategically deviate from their past behavior. Moreover, estimating the prior distribution ideally requires a large number of samples, whereas in reality only a finite number of samples is available. Also, prior-dependent auctions are non-robust: if the prior distribution changes, the entire computation must be repeated for the new prior, which is often computationally hard.
This motivates us to study \emph{prior-free} auctions. In particular, in this paper, we study profit maximizing prior-free procurement auctions with one buyer and $n$ sellers. \section{Prior Art and Contributions} The problem of designing a revenue-optimal auction was first studied by Myerson \cite{myerson1981optimal}. Myerson considers the setting of a seller trying to sell a single object to one of several possible buyers and characterizes all revenue-optimal auctions that are BIC (Bayesian Incentive Compatible) and IIR (Interim Individually Rational). Dasgupta and Spulber \cite{dasgupta1990managing} consider the problem of designing an optimal procurement auction where suppliers have unlimited capacity. Iyengar and Kumar \cite{iyengar2008optimal} consider the setting where the buyer purchases multiple units of a single item from the suppliers and resells them in the consumer market to earn a profit. We consider the same setting here; however, we focus on the design of \emph{prior-free} auctions, unlike the \emph{prior-dependent} optimal auction designed in \cite{iyengar2008optimal}. \subsection{Related Work and Research Gaps} Goldberg et al. \cite{goldberg2001competitive} initiated work on the design of prior-free auctions and studied a class of single-round sealed-bid auctions for an item in unlimited supply, such as digital goods, where each bidder requires at most one unit. They introduced the notion of \emph{competitive} auctions and proposed prior-free randomized competitive auctions based on random sampling. In \cite{goldberg2006competitive}, the authors consider the non-asymptotic behavior of the random sampling optimal price auction (RSOP) and show that its performance is within a large constant factor of a prior-free benchmark. Alaei et al. \cite{alaei2009random} provide a nearly tight analysis of RSOP which shows that it is 4-competitive for a large class of instances and 4.68-competitive for the much smaller class of remaining instances.
The competitive ratio has further been improved to 3.25 by Hartline and McGrew \cite{hartline2005optimal} and to 3.12 by Ichiba and Iwama \cite{ichiba2010averaging}. Recently, Chen et al. \cite{chen2014optimal} designed an optimal competitive auction whose competitive ratio matches the lower bound of 2.42 derived in \cite{goldberg2006competitive} and therefore settles an important open problem in the design of digital goods auctions. Beyond the digital goods setting, Devanur et al. \cite{devanur2012envy} have studied prior-free auctions under several settings such as multi-unit, position auctions, downward-closed, and matroid environments. They design a prior-free auction with a competitive ratio of 6.24 for the multi-unit and position auctions using the 3.12-competitive auction for digital goods given in \cite{ichiba2010averaging}. They also design an auction with a competitive ratio of 12.5 for multi-unit settings by directly generalizing the random sampling auction given in \cite{goldberg2006competitive}. Our setting is different from the above works on forward auctions, as we consider a procurement auction with capacitated sellers. Conceptually closer to our work is budget feasible mechanism design (\cite{singer2010budget}, \cite{bei2012budget}, \cite{chen2011approximability}, \cite{dobzinski2011mechanisms}), which models a simple procurement auction. Singer \cite{singer2010budget} considers single-dimensional mechanisms that maximize the buyer's valuation function on subsets of items, under the constraint that the sum of the payments made by the mechanism does not exceed a given budget. There the objective is maximizing the social welfare derived from the subset of items procured under a budget, and the benchmark considered is welfare optimal. On the other hand, our work considers maximizing the profit or revenue of the buyer, which is fundamentally different from the previous objective, and our benchmark is revenue optimal.
A simple example can be constructed to show that Singer's benchmark is not revenue-optimal, as follows. Suppose there is a buyer with budget \$100 and valuation function $V(k) = \$25k$ ($k$ is the number of items procured) and $5$ sellers with costs \$10, \$20, \$50, \$60, and \$70. Then Singer's benchmark will procure 3 items (with costs \$10, \$20, and \$50), earning the buyer a negative utility. This is welfare-optimal but not revenue-optimal, since an omniscient revenue-maximizing allocation will procure 2 items (with costs \$10 and \$20), yielding a revenue or utility of \$20 to the buyer. Although the design of prior-free auctions has generated wide interest in the research community (\cite{alaei2009random}, \cite{chen2014optimal}, \cite{devanur2013prior}, \cite{devanur2012envy}, \cite{goldberg2006competitive}, \cite{goldberg2001competitive}, \cite{hartline2005optimal}, \cite{ichiba2010averaging}), most of the works have considered the forward setting. The reverse auction setting is subtly different from forward auctions, especially if the sellers are capacitated, and the techniques used for forward auctions cannot be trivially extended to the case of procurement auctions. To the best of our knowledge, the design of profit-maximizing prior-free multi-unit procurement auctions is as yet unexplored. Moreover, the existing literature on prior-free auctions is limited to the single-dimensional setting where each bidder has only one private type, namely the valuation per unit of an item. However, in a procurement auction, the sellers are often capacitated and strategically report their capacities to increase their utilities. Therefore, the design of bi-dimensional prior-free procurement auctions is extremely relevant in practice, and in this paper we derive what we believe is the first set of results in this direction.
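The arithmetic of the example above can be checked with a short script (a minimal sketch; the helper `buyer_utility` and the brute-force search over $k$ are our own illustration, not Singer's mechanism):

```python
# Sketch of the example above: welfare-style vs revenue-optimal procurement.
# The helper name `buyer_utility` and the loop over k are ours, not Singer's.

def buyer_utility(value_fn, costs, k):
    """Buyer's utility from procuring the k cheapest items at their costs."""
    return value_fn(k) - sum(sorted(costs)[:k])

costs = [10, 20, 50, 60, 70]
V = lambda k: 25 * k   # buyer's valuation function V(k) = $25k

# Benchmark from the example: procure 3 items (costs $10, $20, $50).
print(buyer_utility(V, costs, 3))   # 75 - 80 = -5 (negative utility)

# Omniscient revenue-maximizing allocation: pick k maximizing the utility.
best_k = max(range(len(costs) + 1), key=lambda k: buyer_utility(V, costs, k))
print(best_k, buyer_utility(V, costs, best_k))  # 2 20
```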
\subsection{Contributions} In this paper, we design profit-maximizing prior-free procurement auctions where a buyer procures multiple units of an item from $n$ sellers and subsequently resells the units to earn revenue. Our contributions are three-fold. First, we look at unit capacity sellers and define two benchmarks for analyzing the performance of any prior-free auction -- (1) an optimal single price auction $(\mathcal{F})$ and (2) an optimal multi-price auction $(\mathcal{T})$. We show that no prior-free auction can be constant-competitive against either of the two benchmarks. We then consider a slightly relaxed benchmark $(\mathcal{F}^{(2)})$, which is constrained to procure at least two units, and design a prior-free auction PEPA (Profit Extracting Procurement Auction) which is 4-competitive against $\mathcal{F}^{(2)}$ for any concave revenue curve. Second, we study a setting where the sellers have non-unit capacities that are common knowledge and derive similar results. In particular, we propose a prior-free auction PEPAC (Profit Extracting Procurement Auction with Capacity) which is truthful for any concave revenue curve. Third, we obtain results in the inherently harder bi-dimensional case where the per unit valuation as well as the capacity are private information of the sellers. We show that PEPAC is truthful and constant-competitive for the specific case of linear revenue curves. We believe that the proposed auctions represent the first effort in single-dimensional and bi-dimensional prior-free multi-unit procurement auctions. Further, these auctions can be easily adapted to real-life procurement situations due to their simple rules and prior-independence. \section{Sellers with Unit Capacities} \label{model1} We consider a single-round procurement auction setting with one buyer (retailer, intermediary etc.) and $n$ sellers where each seller has a single unit of a homogeneous item.
The buyer procures multiple units of the item from the sellers and subsequently resells them in an outside consumer market, earning a revenue of $\mathcal{R}(q)$ from selling $q$ units of the item. We assume that the revenue curve of the outside market $\mathcal{R}(q)$ is concave with $\mathcal{R}(0) = 0$. This is motivated by the following standard argument from economics. According to the \emph{law of demand}, the quantity demanded decreases as the price per unit increases. It can be easily shown that the marginal revenue falls with an increase in the number of units sold, and so the revenue curve is concave. \subsection{Procurement Auction} We assume that the buyer (auctioneer) has \emph{unlimited demand}, but as each seller (bidder) has unit capacity, the number of units the buyer can procure and resell is limited by the total number of sellers. We make the following assumptions about the bidders: \begin{enumerate} \item Each bidder has a private valuation $v_i$ which represents the true minimum amount he is willing to receive to sell a single unit of the item. \item Bidders' valuations are independently and identically distributed. \item The utility of a bidder is given as payment minus valuation. \end{enumerate} By the \emph{revelation principle} \cite{myerson1981optimal}, we will restrict our attention to single-round, sealed-bid, truthful auctions. Now we define the notions of single-round sealed-bid auction, bid-independent auction, and competitive auction for our setting. \subsubsection*{Single-round Sealed-bid Auction ($\mathcal{A}$).} \begin{enumerate} \item The bid submitted by bidder $i$ is $b_i$. The vector of all the submitted bids is denoted by $\textbf{b}$. Let \textbf{b}$_{-i}$ denote the masked vector of bids where $b_i$ is removed from \textbf{b}. \item Given the bid vector $\textbf{b}$ and the revenue curve $\mathcal{R}$, the auctioneer computes an allocation $\textbf{x} = (x_1, \ldots, x_n)$ and payments $\textbf{p} = (p_1, \ldots, p_n)$.
If bidder $i$ sells the item, $x_i = 1$ and we say bidder $i$ wins. Otherwise, bidder $i$ loses and $x_i = 0$. The auctioneer pays an amount $p_i$ to bidder $i$. We assume that $p_i \geq b_i$ for all winning bidders and $p_i = 0$ for all losing bidders. \item The auctioneer resells the units bought from the sellers in the outside consumer market. The profit of the auction (or auctioneer) is given by \begin{center} $\mathcal{A}(\textbf{b}, \mathcal{R}) = \mathcal{R}(\sum_{i=1}^{n}{x_i(\textbf{b}, \mathcal{R})}) - \sum_{i=1}^{n}{p_i(\textbf{b}, \mathcal{R})}$. \end{center} \end{enumerate} The auctioneer wishes to maximize her profit satisfying IR (Individual Rationality) and DSIC (Dominant Strategy Incentive Compatibility). As bidding $v_i$ is a dominant strategy for bidder $i$ in a truthful auction, in the remainder of this paper, we assume that $b_i = v_i$. \subsubsection*{Bid-independent Auction.} An auction is \emph{bid-independent} if the offered payment to a bidder is independent of the bidder's own bid. It can certainly depend on the bids of the other bidders and the revenue curve. Such an auction is determined by a function $f$ (possibly randomized) which takes a masked bid vector and the revenue curve as input and maps them to a payment, which is a non-negative real number. Let $\mathcal{A}_f($\textbf{b}$, \mathcal{R})$ denote the bid-independent auction defined by $f$. For each bidder $i$ the allocation and payments are determined in two phases as follows: \begin{enumerate} \item Phase I: \begin{enumerate} \item $t_i \leftarrow f(\textbf{b}_{-i}, \mathcal{R})$. \item If $t_i < b_i$, set $x_i \leftarrow 0$, $p_i \leftarrow 0$, and remove bidder $i$. \end{enumerate} Suppose $n^\prime$ is the number of bidders left. Let $t_{[i]}$ denote the $i^{th}$ lowest value of $t_j$ among the remaining $n^\prime$ bidders. Let $x_{[i]}$ and $p_{[i]}$ be the corresponding allocation and payment. Now we choose the allocation that maximizes the profit of the buyer.
\item Phase II: \begin{enumerate} \item $k \leftarrow \underset{0 \leq i \leq n^\prime}{\operatorname{argmax}}\ (\mathcal{R}(i) - \sum_{j=1}^i{t_{[j]}})$. \item Set $x_{[i]} \leftarrow 1$ and $p_{[i]} \leftarrow t_{[i]}$ for $i \in \{1, \ldots, k\}$. \item For $i > k$, set $x_{[i]} \leftarrow 0, p_{[i]} \leftarrow 0$. \end{enumerate} \end{enumerate} For any bid-independent auction, the allocation of bidder $i$ is non-increasing in his valuation $v_i$ and his payment is independent of his bid. It follows from Myerson's characterization \cite{myerson1981optimal} of truthful auctions that any bid-independent auction is truthful. \subsubsection*{Competitive Auction.} In the absence of any prior distribution over the valuations of the bidders, we cannot compare the profit of a procurement auction with the average profit of the optimal auction. Rather, we measure the performance of a truthful auction on any bid vector by comparing it with the profit that would have been achieved by an omniscient optimal auction ($OPT$), which knows all the true valuations in advance without needing to elicit them from the bidders. \begin{definition} \emph{$\beta$-competitive auction ($\beta > 1$)}: An auction $\mathcal{A}$ is \emph{$\beta$-competitive} against $OPT$ if for all bid vectors \textbf{b}, the expected profit of $\mathcal{A}$ on $\textbf{b}$ satisfies \begin{center} $\mathbb{E}[\mathcal{A}(\textbf{b}, \mathcal{R})] \geq \displaystyle \frac{OPT(\textbf{b}, \mathcal{R})}{\beta}$. \end{center} We refer to $\beta$ as the \emph{competitive ratio} of $\mathcal{A}$. Auction $\mathcal{A}$ is \emph{competitive} if its competitive ratio $\beta$ is a constant. \end{definition} \subsection{Prior-Free Benchmarks} \label{bench} As a first step in comparing the performance of any prior-free procurement auction, we need to come up with the right metric for comparison, that is, a benchmark.
It is important that we choose such a benchmark carefully for such a comparison to be meaningful. Here, we start with the strongest possible benchmark for comparison: the profit of an auctioneer who knows the bidders' true valuations. This leads us to consider the two most natural metrics for comparison -- the optimal multiple price and single price auctions. We compare the performances of truthful auctions to those of the optimal multiple price and single price auctions. Let $v_{[i]}$ denote the $i$-th lowest valuation. \subsubsection*{Optimal Single Price Auction ($\mathcal{F}$).} Let $\textbf{b}$ be a bid vector. Auction $\mathcal{F}$ on input $\textbf{b}$ determines the value $k$ such that $\mathcal{R}(k) - kv_{[k]}$ is maximized. All bidders with bid $b_i \leq v_{[k]}$ win at price $v_{[k]}$; all remaining bidders lose. We denote the optimal procurement price for \textbf{b} that gives the optimal profit by $OPP(\textbf{b}, \mathcal{R})$. The profit of $\mathcal{F}$ on input $\textbf{b}$ is denoted by $\mathcal{F}(\textbf{b}, \mathcal{R})$. So we have \begin{center} $\mathcal{F}(\textbf{b}, \mathcal{R}) = \max \limits_{0 \leq i \leq n} (\mathcal{R}(i) - iv_{[i]})$.\\ $OPP(\textbf{b}, \mathcal{R}) = \underset{v_{[i]}}{\operatorname{argmax}}\ (\mathcal{R}(i) - iv_{[i]})$. \end{center} \subsubsection*{Optimal Multiple Price Auction ($\mathcal{T}$).} Auction $\mathcal{T}$ buys from each bidder at his bid value. So auction $\mathcal{T}$ on input $\textbf{b}$ determines the value $l$ such that $\mathcal{R}(l) - \sum_{i=1}^{l}{v_{[i]}}$ is maximized. The first $l$ bidders win at their bid values; all remaining bidders lose. The profit of $\mathcal{T}$ on input $\textbf{b}$ is given by \begin{center} $\mathcal{T}(\textbf{b}, \mathcal{R}) = \max \limits_{0 \leq i \leq n} (\mathcal{R}(i) - \sum_{j=1}^{i}{v_{[j]}})$.
\end{center} It is clear that $\mathcal{T}(\textbf{b}, \mathcal{R}) \geq \mathcal{F}(\textbf{b}, \mathcal{R})$ for any bid vector $\textbf{b}$ and any revenue curve $\mathcal{R}$. However, $\mathcal{F}$ does not perform too poorly compared to $\mathcal{T}$: we prove a bound relating the performances of $\mathcal{F}$ and $\mathcal{T}$. Specifically, we observe that in the worst case, the maximum ratio of $\mathcal{T}$ to $\mathcal{F}$ is logarithmic in the number $n$ of bidders. \begin{lemma} \label{benchmarkrelation} For any \normalfont \textbf{b} \textit{and any concave revenue curve $\mathcal{R}$}, \begin{center} $\mathcal{F}(\textbf{b}, \mathcal{R}) \geq \displaystyle \frac{\mathcal{T}(\textbf{b}, \mathcal{R})}{\mathrm{ln} \ n}$. \end{center} \end{lemma} \begin{pfof} We use the following property of a concave function with $\mathcal{R}(0) = 0$: $\displaystyle \frac{\mathcal{R}(i)}{i} \geq \frac{\mathcal{R}(j)}{j}$ $\ \ \forall i \leq j$. Suppose $\mathcal{T}$ buys $k$ units and $\mathcal{F}$ buys $l$ units from the sellers. \begin{align*} \mathcal{T}(\textbf{b}, \mathcal{R}) &= \mathcal{R}(k) - \sum_{i=1}^{k}{v_{[i]}} = \sum_{i=1}^{k}\left({\frac{\mathcal{R}(k)}{k} - v_{[i]}}\right) \\ &\leq \sum_{i=1}^{k}\left({\frac{\mathcal{R}(i)}{i} - v_{[i]}}\right) \leq \sum_{i=1}^{k}{\frac{\mathcal{R}(l)- lv_{[l]}}{i}} \\ &\leq \mathcal{F}(\textbf{b}, \mathcal{R})(\mathrm{ln}\ n + O(1)). \end{align*} \end{pfof} The result implies that if an auction $\mathcal{A}$ is constant-competitive against $\mathcal{F}$ then it is $\ln n$-competitive against $\mathcal{T}$. Now we show that no truthful auction can be constant-competitive against $\mathcal{F}$, and hence it cannot be competitive against $\mathcal{T}$.
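Both benchmarks can be computed directly from their definitions, which also lets one check the bound of Lemma \ref{benchmarkrelation} numerically. The sketch below uses an illustrative concave revenue curve and bid vector of our own choosing; `benchmark_F` and `benchmark_T` are our names for $\mathcal{F}$ and $\mathcal{T}$:

```python
import math

def benchmark_F(bids, R):
    """Optimal single price profit: max over 0 <= i <= n of R(i) - i * v_[i]."""
    v = sorted(bids)
    return max([0.0] + [R(i) - i * v[i - 1] for i in range(1, len(v) + 1)])

def benchmark_T(bids, R):
    """Optimal multiple price profit: max over i of R(i) minus the i lowest bids."""
    v = sorted(bids)
    best, prefix = 0.0, 0.0
    for i in range(1, len(v) + 1):
        prefix += v[i - 1]
        best = max(best, R(i) - prefix)
    return best

# Illustrative concave revenue curve and bid vector (our choice, not from the paper).
R = lambda q: 10 * math.sqrt(q)
bids = [1.0, 2.0, 3.0, 5.0, 8.0]

F, T = benchmark_F(bids, R), benchmark_T(bids, R)
assert T >= F                        # T always dominates F
assert F >= T / math.log(len(bids))  # the bound of the lemma: F >= T / ln n
```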
\begin{theorem} \label{impossibility} For any truthful auction $\mathcal{A}_f$, any revenue curve $\mathcal{R}$, and any $\beta \geq 1$, there exists a bid vector \normalfont \textbf{b} \textit{such that the expected profit of $\mathcal{A}_f$ on} \normalfont \textbf{b} \textit{is at most} $\mathcal{F}($\textbf{b}$, \mathcal{R})/\beta$. \end{theorem} \begin{pfof} Consider a bid-independent randomized auction $\mathcal{A}_f$ on two bids, $r$ and $L < r$, where $r = \mathcal{R}(1)$. Let $g$ and $G$ denote the probability density function and the cumulative distribution function of the random variable $f(r, \mathcal{R})$. We fix one bid at $r$ and choose $L$ depending on the following two cases. \begin{enumerate} \item Case I: $G(r) \leq 1/\beta$. We choose $L = \displaystyle \frac{r}{\beta}$. \\ Then $\mathcal{F}(\textbf{b}, \mathcal{R}) = r \left (1 - \displaystyle \frac{1}{\beta} \right)$ and $\mathbb{E}[\mathcal{A}_f(\textbf{b}, \mathcal{R})] \leq \displaystyle \frac{1}{\beta}\left( r - \frac{r}{\beta} \right) = \frac{\mathcal{F}(\textbf{b}, \mathcal{R})}{\beta}$. \item Case II: $G(r) > 1/\beta$. We choose $L = \displaystyle r - \epsilon$ such that $G(r) - G(r - \epsilon) < 1/\beta$. As $G$ is a non-decreasing function and $G(r) > 1/\beta$, such a value of $\epsilon$ always exists. Then $\mathcal{F}(\textbf{b}, \mathcal{R}) = r - (r - \epsilon) = \epsilon$ and \begin{align*} \mathbb{E}[\mathcal{A}_f(\textbf{b}, \mathcal{R})] &= \displaystyle \int_{r - \epsilon}^{r} (r - y) g(y) \mathrm{d}y\\ &\leq \displaystyle r \int_{r - \epsilon}^{r} g(y) \mathrm{d}y - \displaystyle (r - \epsilon)\int_{r - \epsilon}^{r}g(y) \mathrm{d}y \\ &= \displaystyle \epsilon \int_{r - \epsilon}^{r} g(y) \mathrm{d}y = \displaystyle \epsilon (G(r) - G(r - \epsilon)) \\ &< \frac{\epsilon}{\beta} = \frac{\mathcal{F}(\textbf{b}, \mathcal{R})}{\beta}.
\end{align*} \end{enumerate} \end{pfof} Theorem \ref{impossibility} shows that we cannot match the performance of the optimal single price auction when the optimal profit is generated from the single lowest bid. Therefore, we present an auction that is \emph{competitive} against $\mathcal{F}^{(2)}$, the optimal single price auction that buys at least two units. Such an auction achieves a constant fraction of the revenue of $\mathcal{F}^{(2)}$ on \emph{all inputs}. \subsubsection*{The Auction $\mathcal{F}$ Constrained to Procure at Least Two Units ($\mathcal{F}^{(2)}$).} Let $\textbf{b}$ be a bid vector. Auction $\mathcal{F}^{(2)}$ on input $\textbf{b}$ determines the value $k \geq 2$ such that $\mathcal{R}(k) - kv_{[k]}$ is maximized. The profit of $\mathcal{F}^{(2)}$ on input $\textbf{b}$ is \begin{center} $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \max \limits_{2 \leq i \leq n} (\mathcal{R}(i) - iv_{[i]})$. \end{center} Note that, though $\mathcal{F}^{(2)}$ is only slightly constrained, its performance can be arbitrarily bad in comparison to $\mathcal{F}$. We demonstrate this with a simple example in which $\mathcal{F}$ procures only one unit. \begin{example} Consider the revenue curve $\mathcal{R}(k) = rk$ ($r>0$). Let $0 < \epsilon \ll r$ and bid vector $\textbf{b} = (\epsilon, r - \epsilon, r, \ldots, r)$. Then, ${\mathcal{F}(\textbf{b}, \mathcal{R})} = r - \epsilon$ and $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = 2r - 2(r - \epsilon) = 2\epsilon$. Hence, $\displaystyle \frac{\mathcal{F}(\textbf{b}, \mathcal{R})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} = \frac{r - \epsilon}{2\epsilon} = \left(\frac{r}{2\epsilon} - \frac{1}{2}\right) \rightarrow \infty$ as $\epsilon \rightarrow 0$. \end{example} But if $\mathcal{F}$ chooses to buy at least two units, we have $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{F}(\textbf{b}, \mathcal{R})$.
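The example above can be verified numerically. In the following sketch, `benchmark_F` and `benchmark_F2` are our names for $\mathcal{F}$ and $\mathcal{F}^{(2)}$, and the specific values of $r$ and $\epsilon$ are illustrative:

```python
def single_price_profits(bids, R):
    """Profit R(i) - i * v_[i] of buying the i lowest bids at the i-th lowest price."""
    v = sorted(bids)
    return [R(i) - i * v[i - 1] for i in range(1, len(v) + 1)]

def benchmark_F(bids, R):
    """Optimal single price profit (may buy zero or one unit)."""
    return max([0.0] + single_price_profits(bids, R))

def benchmark_F2(bids, R):
    """Optimal single price profit constrained to procure at least two units."""
    return max(single_price_profits(bids, R)[1:])

r, eps = 100.0, 0.01
R = lambda k: r * k                  # linear revenue curve from the example
bids = [eps, r - eps] + [r] * 3      # bid vector (eps, r - eps, r, ..., r)

print(benchmark_F(bids, R))                          # r - eps
print(benchmark_F2(bids, R))                         # 2 * eps (up to rounding)
print(benchmark_F(bids, R) / benchmark_F2(bids, R))  # blows up as eps -> 0
```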
Thus, comparing auction performance to $\mathcal{F}^{(2)}$ is identical to comparing it to $\mathcal{F}$ if we exclude the bid vectors where only the lowest bidder wins in the optimal auction. From now on, we say an auction is \emph{$\beta$-competitive} if it is \emph{$\beta$-competitive} against $\mathcal{F}^{(2)}$. \subsection{Profit Extracting Procurement Auction (PEPA)} \label{rspa} We now present a prior-free procurement auction based on random sampling. Our auction takes the bids from the bidders and then partitions them into two sets by flipping a fair coin for each bid to decide to which partition to assign it. Then we use one partition for market analysis and apply what we learn by running a profit-extraction sub-auction on the other partition, and vice versa. We extend the \emph{profit extraction} technique of \cite{goldberg2006competitive}. The goal of the technique is, given \textbf{b}, $\mathcal{R}$, and a target profit $P$, to find a subset of bidders buying from whom generates profit $P$. \subsubsection*{Profit Extraction (PE$_P(\textbf{b}, \mathcal{R})$).} Given a target profit $P$, \begin{enumerate} \item Find the largest value of $k$ for which $v_{[k]}$ is at most $(\mathcal{R}(k) - P)/k$. \item Pay these $k$ bidders $(\mathcal{R}(k) - P)/k$ and reject the others. \end{enumerate} \subsubsection*{Profit Extracting Procurement Auction (PEPA).} \begin{enumerate} \item Partition the bids \textbf{b} uniformly at random into two sets \textbf{b$^{\prime}$} and \textbf{b$^{\prime\prime}$}: for each bid, flip a fair coin, and with probability 1/2 put the bid in \textbf{b$^{\prime}$} and otherwise in \textbf{b$^{\prime\prime}$}. \item Compute $F^\prime = \mathcal{F}(\textbf{b}^\prime, \mathcal{R})$ and $F^{\prime\prime} = \mathcal{F}(\textbf{b}^{\prime\prime}, \mathcal{R})$, which are the optimal single price profits for \textbf{b$^{\prime}$} and \textbf{b$^{\prime\prime}$}, respectively.
\item Compute the auction results of PE$_{F^{\prime\prime}}$(\textbf{b$^{\prime}$}, $\mathcal{R})$ and PE$_{F^\prime}$(\textbf{b$^{\prime\prime}$}, $\mathcal{R})$. \item Run the auction PE$_{F^{\prime\prime}}$(\textbf{b$^{\prime}$}, $\mathcal{R})$ or PE$_{F^\prime}$(\textbf{b$^{\prime\prime}$}, $\mathcal{R})$ that gives the higher profit to the buyer. Ties are broken arbitrarily. \end{enumerate} The following lemmas can be easily derived. \begin{lemma} \label{pepa1} PEPA \textit{is truthful}. \end{lemma} \begin{lemma} \label{pepa2} PEPA \textit{has profit} $F^\prime$ \textit{if} $F^\prime = F^{\prime\prime}$; \textit{otherwise it has profit} $\min(F^\prime, F^{\prime\prime})$. \end{lemma} Now we derive the competitive ratio for PEPA, first for a linear revenue curve and then for an arbitrary concave revenue curve. \begin{theorem} \label{thm1} PEPA is $4$-competitive if the revenue curve is linear, i.e., $\mathcal{R}(k) = rk$ with $r > 0$, and this bound is tight. \end{theorem} \begin{pfof} By definition, $\mathcal{F}^{(2)}$ on \textbf{b} buys from $k \geq 2$ bidders for a profit of $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{R}(k) - kv_{[k]}$. These $k$ bidders are divided uniformly at random between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}. Let $k_1$ be the number of them in \textbf{b$^{\prime}$} \ and $k_2$ the number in \textbf{b$^{\prime\prime}$}. Now we denote the $i^{th}$ lowest bid in \textbf{b$^{\prime}$} \ by $v_{[i]}^\prime$ and in \textbf{b$^{\prime\prime}$} \ by $v_{[i]}^{\prime\prime}$. Clearly, $v_{[k_1]}^\prime \leq v_{[k]}$ and $v_{[k_2]}^{\prime\prime} \leq v_{[k]}$. So we have $F^\prime \geq \mathcal{R}(k_1) - k_1v_{[k_1]}^\prime \geq \mathcal{R}(k_1) - k_1v_{[k]}$ and $F^{\prime\prime} \geq \mathcal{R}(k_2) - k_2v_{[k_2]}^{\prime\prime} \geq \mathcal{R}(k_2) - k_2v_{[k]}$.
\begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(k_1) - k_1v_{[k]}, \mathcal{R}(k_2) - k_2v_{[k]})}{\mathcal{R}(k) - kv_{[k]}} \\ &= \frac{\min(rk_1 - k_1v_{[k]}, rk_2 - k_2v_{[k]})}{rk - kv_{[k]}} \\ &= \frac{\min(k_1, k_2)}{k}. \end{align*} Thus, the competitive ratio satisfies \begin{align} \displaystyle \frac{\mathbb{E}[P]}{\mathcal{F}^{(2)}} &\geq \frac{1}{k} \sum_{i=1}^{k-1}{\min(i, k-i){k \choose i} 2^{-k}} = \displaystyle \frac{1}{2} - {k-1 \choose \lfloor{k/2}\rfloor} 2^{-k}. \end{align} The above expression achieves its minimum of 1/4 for $k = 2$ and $k = 3$. As $k$ increases, it approaches 1/2. \end{pfof} \begin{example} The bound presented on the competitive ratio is tight. Consider the revenue curve $\mathcal{R}(k) = 2lk$ ($l>0$) and bid vector \textbf{b} which consists of two bids $l - \epsilon$ and $l$. All other bids are very high compared to $l$. Then, ${\mathcal{F}(\textbf{b}, \mathcal{R})} = \mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = 2l$. The expected profit of PEPA is $l \cdot \mathbb{P}$ [two low bids are split between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}] = $l/2$ = $\mathcal{F}(\textbf{b}, \mathcal{R})/4$. \end{example} \begin{theorem} For any concave revenue curve, PEPA is $4$-competitive. \end{theorem} \begin{pfof} Using the notation defined above, \begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(k_1) - k_1v_{[k]}, \mathcal{R}(k_2) - k_2v_{[k]})}{\mathcal{R}(k) - kv_{[k]}}\\ &= \frac{\min(k_1(\frac{\mathcal{R}(k_1)}{k_1} - v_{[k]}), k_2(\frac{\mathcal{R}(k_2)}{k_2} - v_{[k]}))}{k(\frac{\mathcal{R}(k)}{k} - v_{[k]})} \\ &\geq \frac{\min(k_1(\frac{\mathcal{R}(k)}{k} - v_{[k]}), k_2(\frac{\mathcal{R}(k)}{k} - v_{[k]}))}{k(\frac{\mathcal{R}(k)}{k} - v_{[k]})} \\ &= \frac{\min(k_1, k_2)}{k} \geq 1/4.
\end{align*} \end{pfof} \section{Sellers with Non-Unit Non-Strategic Capacities} \label{Nonstrategic} \subsection{Setup} Now we consider the setting where sellers can supply more than one unit of an item. Seller $i$ has valuation per unit $v_i$ and a maximum capacity $q_i$, where $v_i$ is a positive real number and $q_i$ is a positive integer. In other words, each seller can supply at most $q_i$ units of a homogeneous item. We assume that the sellers are strategic with respect to their valuation per unit only and always report their capacities truthfully. Let $x_i$ and $p_i$ denote the allocation and per-unit payment to bidder $i$. Then the profit of the auction (or auctioneer) for bid vector $\textbf{b}$ is \begin{center} $\mathcal{A}(\textbf{b}, \mathcal{R}) = \mathcal{R}(\displaystyle \sum_{i=1}^{n}{x_i(\textbf{b}, \mathcal{R})}) - \displaystyle \sum_{i=1}^{n}{p_i(\textbf{b}, \mathcal{R})\cdot x_i(\textbf{b}, \mathcal{R})}$. \end{center} The auctioneer wants to maximize her profit satisfying feasibility, IR, and DSIC. As before, we first define the notion of a bid-independent auction for this setting. \subsubsection*{Bid-independent Auction.} For each bidder $i$, the allocation and payments are determined in two phases as follows. \begin{enumerate} \item Phase I: \begin{enumerate} \item $t_i \leftarrow f(\textbf{b}_{-i}, \mathcal{R})$. \item If $t_i < v_i$, set $x_i \leftarrow 0$, $p_i \leftarrow 0$, and remove bidder $i$. \item Let $n^\prime$ be the number of remaining bidders.
\end{enumerate} \item Phase II: \begin{enumerate} \item $i^\prime \leftarrow \underset{0 \leq i \leq m^\prime}{\operatorname{argmax}}\ \left(\mathcal{R}(i) - \displaystyle\sum_{j=1}^{k-1}{q_{[j]}t_{[j]}} - (i - \displaystyle\sum_{j=1}^{k-1}{q_{[j]}})t_{[k]}\right)$, \\where $m^\prime = \sum_{j=1}^{n^\prime}{q_{[j]}}$ and, for each $i$, the index $k$ is determined by $\sum_{j=1}^{k-1}{q_{[j]}} < i \leq \sum_{j=1}^{k}{q_{[j]}}$. \item Suppose $k^\prime$ satisfies $\sum_{j=1}^{k^\prime-1}{q_{[j]}} < i^\prime \leq \sum_{j=1}^{k^\prime}{q_{[j]}}$. \item Set $x_{[i]} \leftarrow q_{[i]}$ and $p_{[i]} \leftarrow t_{[i]}$ for $i \in \{1, \ldots, k^\prime - 1\}$. \item Set $x_{[k^\prime]} \leftarrow (i^\prime - \sum_{j=1}^{k^\prime-1}{q_{[j]}})$ and $p_{[k^\prime]} \leftarrow t_{[k^\prime]}$. \item For the remaining bidders, set $x_{[i]} \leftarrow 0, p_{[i]} \leftarrow 0$. \end{enumerate} \end{enumerate} As the allocation is monotone in the bids and the payment is bid-independent, any bid-independent auction is truthful. \subsection{Prior-Free Benchmark} We denote the $i$-th lowest valuation by $v_{[i]}$ and the corresponding capacity by $q_{[i]}$. Suppose $m = \sum_{i=1}^{n}{q_i}$. Then we have \begin{center} $\mathcal{F}(\textbf{b}, \mathcal{R}) = \max \limits_{0 \leq i \leq m} (\mathcal{R}(i) - iv_{[j]})$, where $\sum_{k=1}^{j-1}{q_{[k]}} < i \leq \sum_{k=1}^{j}{q_{[k]}}$. \\ $OPP(\textbf{b}, \mathcal{R}) = \underset{v_{[j]}}{\operatorname{argmax}}\left( \underset{\sum_{k=1}^{j-1}{q_{[k]}} < i \leq \sum_{k=1}^{j}{q_{[k]}}}\max (\mathcal{R}(i) - iv_{[j]}) \right)$. \end{center} The first $j$ bidders are the winners and they are allocated at their full capacity, except possibly the last one. As no truthful auction can be constant-competitive against $\mathcal{F}$, we define $\mathcal{F}^{(2)}$ as the optimal single price auction that buys from at least two bidders.
The profit of $\mathcal{F}^{(2)}$ on input vector \textbf{b} is \begin{center} $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \max \limits_{q_{[1]} < i \leq m} (\mathcal{R}(i) - iv_{[j]})$, where $\sum_{k=1}^{j-1}{q_{[k]}} < i \leq \sum_{k=1}^{j}{q_{[k]}}$. \end{center} \subsection{Profit Extracting Procurement Auction with Capacity (PEPAC)} Now we extend the random sampling based procurement auction presented in Section \ref{rspa} to this setting. \subsubsection*{Profit Extraction with Capacity (PEC$_P(\textbf{b}, \mathcal{R})$).} \begin{enumerate} \item Find the largest value of $k^\prime$ for which $v_{[k]}$ is at most $(\mathcal{R}(k^\prime) - P)/k^\prime$, where $\sum_{i=1}^{k-1}{q_{[i]}} < k^\prime \leq \sum_{i=1}^{k}{q_{[i]}}$. \item Pay these $k$ bidders $(\mathcal{R}(k^\prime) - P)/k^\prime$ per unit and reject the others. \end{enumerate} \subsubsection*{Profit Extracting Procurement Auction with Capacity (PEPAC).} PEPAC is the same as PEPA except that it invokes PEC$_P(\textbf{b}, \mathcal{R})$ instead of PE$_P(\textbf{b}, \mathcal{R})$. Next, we derive the performance of PEPAC through the following theorems. \begin{theorem} \label{pepac1} PEPAC is $4$-competitive for any concave revenue curve $\mathcal{R}$ if \\$q_i = q \ \forall \ i \in \{1, \ldots, n\}$. \end{theorem} \begin{pfof} By definition, $\mathcal{F}^{(2)}$ on \textbf{b} buys from $k \geq 2$ bidders for a profit of $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{R}(k^\prime) - k^\prime v_{[k]}$, where $\sum_{i=1}^{k-1}{q} < k^\prime \leq \sum_{i=1}^{k}{q}$. These $k$ bidders are divided uniformly at random between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}. Let $k_1$ be the number of them in \textbf{b$^{\prime}$} \ and $k_2$ the number in \textbf{b$^{\prime\prime}$}. Now we denote the $i^{th}$ lowest bid in \textbf{b$^{\prime}$} \ by $v_{[i]}^\prime$ and in \textbf{b$^{\prime\prime}$} \ by $v_{[i]}^{\prime\prime}$.
Clearly, $v_{[k_1]}^\prime \leq v_{[k]}$ and $v_{[k_2]}^{\prime\prime} \leq v_{[k]}$. As $F^\prime$ and $F^{\prime\prime}$ are optimal in their respective partitions, we have \begin{align*} F^{\prime} \geq \mathcal{R}(k_1q) - k_1qv_{[k_1]}^{\prime} \geq \mathcal{R}(k_1q) - k_1qv_{[k]}.\\ F^{\prime\prime} \geq \mathcal{R}(k_2q) - k_2qv_{[k_2]}^{\prime\prime} \geq \mathcal{R}(k_2q) - k_2qv_{[k]}. \end{align*} The auction profit is $P = \min(F^\prime, F^{\prime\prime})$. Therefore, \begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(k_1q) - k_1qv_{[k]}, \mathcal{R}(k_2q) - k_2qv_{[k]})}{\mathcal{R}(k^\prime) - k^\prime v_{[k]}} \\ &= \frac{\min(k_1q(\frac{\mathcal{R}(k_1q)}{k_1q} - v_{[k]}), k_2q(\frac{\mathcal{R}(k_2q)}{k_2q} - v_{[k]}))}{k^\prime(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]})} \\ &\geq \frac{\min(k_1q(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]}), k_2q(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]}))}{k^\prime(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]})} \\ & \displaystyle[\mbox{as} \ \frac{\mathcal{R}(k_1q)}{k_1q} \geq \frac{\mathcal{R}(k^\prime)}{k^\prime} \ \mbox{and}\ \frac{\mathcal{R}(k_2q)}{k_2q} \geq \frac{\mathcal{R}(k^\prime)}{k^\prime}\displaystyle]\\ &= \frac{\min(k_1q, k_2q)}{k^\prime} \geq \frac{\min(k_1q, k_2q)}{kq} \\ &= \frac{\min(k_1, k_2)}{k}. \end{align*} Thus, the competitive ratio satisfies \begin{align*} \displaystyle \frac{\mathbb{E}[P]}{\mathcal{F}^{(2)}} &\geq \frac{1}{k} \sum_{i=1}^{k-1}{\min(i, k-i){k \choose i} 2^{-k}}\\ &= \displaystyle \frac{1}{2} - {k-1 \choose \lfloor{k/2}\rfloor} 2^{-k}, \end{align*} which is the same bound as in Theorem \ref{thm1}.
\end{pfof} \begin{theorem} \label{pepac2} PEPAC is $4 \cdot \left( \displaystyle \frac{q_{\max}}{q_{\min}}\right)$-competitive if the revenue curve is linear and $q_i \in [q_{\min}, q_{\max}] \ \ \forall \ i \in \{1, \ldots, n\}$; further, this bound is tight. \end{theorem} \begin{pfof} By definition, $\mathcal{F}^{(2)}$ on \textbf{b} buys from $k \geq 2$ bidders for a profit of $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{R}(k^\prime) - k^\prime v_{[k]}$, where $\sum_{i=1}^{k-1}{q_{[i]}} < k^\prime \leq \sum_{i=1}^{k}{q_{[i]}}$. These $k$ bidders are divided uniformly at random between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}. Let $k_1$ be the number of them in \textbf{b$^{\prime}$} \ and $k_2$ the number in \textbf{b$^{\prime\prime}$}. Now we denote the $i^{th}$ lowest bid (according to valuation) in \textbf{b$^{\prime}$} \ by $(v_{[i]}^\prime, q_{[i]}^\prime)$ and in \textbf{b$^{\prime\prime}$} \ by $(v_{[i]}^{\prime\prime}, q_{[i]}^{\prime\prime})$. Clearly, $v_{[k_1]}^\prime \leq v_{[k]}$ and $v_{[k_2]}^{\prime\prime} \leq v_{[k]}$. As $F^\prime$ and $F^{\prime\prime}$ are optimal in their respective partitions, we have \begin{align*} F^\prime \geq \mathcal{R}(\sum_{i=1}^{k_1}{q_{[i]}^\prime}) - (\sum_{i=1}^{k_1}{q_{[i]}^\prime})v_{[k]}, \ \ F^{\prime\prime} \geq \mathcal{R}(\sum_{i=1}^{k_2}{q_{[i]}^{\prime\prime}}) - (\sum_{i=1}^{k_2}{q_{[i]}^{\prime\prime}})v_{[k]}. \end{align*} Suppose $\displaystyle\sum_{i=1}^{k_1}{q_{[i]}^\prime} = q_{x}$, $\displaystyle\sum_{i=1}^{k_2}{q_{[i]}^{\prime\prime}} = q_{y}$, and $\displaystyle\sum_{i=1}^{k}{q_{[i]}} = q_{z}$.
\begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(q_{x}) - q_{x}v_{[k]}, \mathcal{R}(q_{y}) - q_{y}v_{[k]})}{\mathcal{R}(k^\prime) - k^\prime v_{[k]}} \\ &= \frac{\min(r q_{x} - q_{x}v_{[k]}, r q_{y} - q_{y}v_{[k]})}{r k^\prime - k^\prime v_{[k]}} \\ &= \frac{\min(q_x, q_y)}{k^\prime} \geq \frac{\min(q_x, q_y)}{q_z} \\ &\geq \frac{\min(k_1 q_{\min}, k_2 q_{\min})}{k q_{\max}} \\ &= \left(\frac{q_{\min}}{q_{\max}}\right) \frac{\min(k_1, k_2)}{k} \geq \frac{q_{\min}}{4 \cdot q_{\max}}. \end{align*} \end{pfof} \begin{theorem} \label{pepac3} PEPAC is $4 \cdot \left( \displaystyle \frac{q_{\max}}{q_{\min}}\right)$-competitive for any concave revenue curve $\mathcal{R}$ if $q_i \in [q_{\min}, q_{\max}] \ \ \forall \ i \in \{1, \ldots, n\}$. \end{theorem} \section{Sellers with Non-Unit Strategic Capacities} \label{strategic} \subsection{Setup} In this case, seller $i$ can misreport his capacity $q_i$ in addition to misreporting his valuation per unit $v_i$ to maximize his gain from the auction. Here, we assume that sellers are not allowed to overbid their capacities. This can be enforced by declaring, as part of the auction, that if a seller fails to provide the number of units he has bid, he suffers a huge penalty (financial or legal loss). But underbidding may help a seller: depending on the mechanism, it may result in an increase in the payment which can more than compensate for the loss due to a decrease in allocation. Hence, as shown by Iyengar and Kumar \cite{iyengar2008optimal}, even when the bidders can only underbid their capacities, an auction that simply ignores the capacities of the bidders need not be incentive compatible. A small example can be constructed as follows. \begin{example} Suppose the $(v_i, q_i)$ values of the sellers are $(6, 100), (8, 100), (10, 200)$, and $(12, 100)$. Consider an external market with a maximum demand of $200$ units.
The revenue curve is given by $R(j) = 15j$ when $j \leq 200$ and $15 \cdot 200$ when $j > 200$. Suppose the buyer conducts the classic Kth price auction where the payment to a winning seller is equal to the valuation of the first losing seller. Bidding the valuation truthfully is a weakly dominant strategy of the sellers, but it does not deter them from possibly altering their capacities. If they report both $v_i$ and $q_i$ truthfully, the allocation will be $(100, 100, 0, 0)$ and the utility of the second seller will be $(10-8) \cdot 100 = 200$. If the second seller underbids his capacity to $90$, the allocation changes to $(100, 90, 10, 0)$ and the utility of the second seller will be $(12-8) \cdot 90 = 360$. So the Kth price auction is clearly not incentive compatible. \end{example} Note that the actual values of $b_i = (v_i,q_i)$ are known only to seller $i$. From now on, the true type of each bidder is represented by $b_i = (v_i,q_i)$ and each reported bid is represented by $\hat{b}_i = (\hat{v}_i,\hat{q}_i)$. So we denote the true type vector by \textbf{b} and the reported bid vector by $\hat{\textbf{b}}$. We denote the utility or \emph{true} surplus of bidder $i$ by $u_i(\hat{\textbf{b}}, \mathcal{R})$ and the \emph{offered} surplus by $\hat{u}_i(\hat{\textbf{b}}, \mathcal{R})$. The true surplus is the pay-off computed using the true valuation and the offered surplus is the pay-off computed using the reported valuation. \begin{center} $u_i(\hat{\textbf{b}}, \mathcal{R}) = [p_i(\hat{\textbf{b}}, \mathcal{R})x_i(\hat{\textbf{b}}, \mathcal{R}) - v_ix_i(\hat{\textbf{b}}, \mathcal{R})]$. $\hat{u}_i(\hat{\textbf{b}}, \mathcal{R}) = [p_i(\hat{\textbf{b}}, \mathcal{R})x_i(\hat{\textbf{b}}, \mathcal{R}) - \hat{v}_ix_i(\hat{\textbf{b}}, \mathcal{R})]$.
\end{center} \subsection{A Characterization of DSIC and IR Procurement Auctions} Iyengar and Kumar \cite{iyengar2008optimal} have characterized all DSIC and IR procurement auctions and the payment rule that implements a given DSIC allocation rule. \label{ch} \begin{enumerate} \item \label{c1} A feasible allocation rule \textbf{x} is \textbf{DSIC} if, and only if, $x_i(((v_i,q_i),\hat{b}_{-i}), \mathcal{R})$ is non-increasing in $v_i$, $\forall\, q_i$, $\forall\, \hat{b}_{-i}$. \item \label{c2} A mechanism (\textbf{x}, \textbf{p}) is \textbf{DSIC} and \textbf{IR} if, and only if, the allocation rule \textbf{x} satisfies \ref{c1} and the ex-post offered surplus is \begin{center} $\hat{u}_i(\hat{b}_i, \hat{b}_{-i}, \mathcal{R}) = \displaystyle \int_{\hat{v}_i}^{\infty} x_i(u, \hat{q}_i, \hat{b}_{-i}, \mathcal{R})\,\mathrm{d}u$, \end{center} with $\hat{u}_i((\hat{v}_i, \hat{q}_i), \hat{b}_{-i}, \mathcal{R})$ non-negative and non-decreasing in $\hat{q}_i$ for all $\hat{v}_i$ and for all $\hat{b}_{-i}$. \end{enumerate} We use the same prior-free benchmark and random sampling procurement auction as defined in Section \ref{Nonstrategic} and extend our previous results to the strategic capacity case. \begin{lemma} \label{pesc1} PEC$_P$ is truthful if $\mathcal{R}$ is linear. \end{lemma} \begin{pfof} Let $k_1$ and $k_2$ be the number of units PEC$_P$ procures from the bidders to achieve the target profit $P$ when the bidders report their capacities truthfully and when they misreport their capacities, respectively. By our assumption, bidders are only allowed to underbid their capacities, so we have $k_2 \leq k_1$. Truthful reporting is a dominant strategy of the bidders if no bidder is better off underbidding his capacity, i.e., if for every $\hat{q}_i \leq q_i$ we have $u_i(((v_i,\hat{q}_i),\hat{b}_{-i}), \mathcal{R}) \leq u_i(((v_i,q_i),\hat{b}_{-i}), \mathcal{R})$.
Hence we require \begin{center} $\displaystyle -v_i\hat{q}_i + \hat{q}_i\frac{\mathcal{R}(k_2) - P}{k_2} \leq -v_iq_i + q_i\frac{\mathcal{R}(k_1) - P}{k_1}$. \end{center} A sufficient condition for the above inequality to hold is \begin{center} $\displaystyle \frac{\mathcal{R}(k_2) - P}{k_2} \leq \frac{\mathcal{R}(k_1) - P}{k_1} \ \ \forall k_2 \leq k_1$. \end{center} Clearly, a linear revenue curve satisfies this sufficient condition. \end{pfof} The following results immediately follow from the above lemma. \begin{theorem} \label{pepasc1} PEPAC is truthful if $\mathcal{R}$ is linear. \end{theorem} \begin{theorem} PEPAC is $4$-competitive if the revenue curve $\mathcal{R}$ is linear and \\$q_i = q \ \forall \ i \in \{1, \ldots, n\}$. \end{theorem} \begin{theorem} PEPAC is $4 \cdot \left( \displaystyle \frac{q_{\max}}{q_{\min}}\right)$-competitive for any linear revenue curve $\mathcal{R}$ if $q_i \in [q_{\min}, q_{\max}] \ \forall i \in \{1, \ldots, n\}$. \end{theorem} \section{Conclusion and Future Work} \label{future} In this paper, we have considered a model of prior-free profit-maximizing procurement auctions with capacitated sellers and designed prior-free auctions for both single-dimensional and bi-dimensional sellers. We have shown that the optimal single price auction cannot be matched by any truthful auction. Hence, we have considered a lightly constrained single price auction as our benchmark in the analysis. We have presented procurement auctions based on profit extraction, PEPA for sellers with unit capacity and PEPAC for sellers with non-unit capacity, and proved upper bounds on their competitive ratios. For the bi-dimensional case, PEPAC is truthful in the specific case of linear revenue curves. Our major future work is to design a prior-free auction for bi-dimensional sellers which is truthful and competitive for all concave revenue curves.
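As a closing computational aside (not part of the paper's analysis), the linear-revenue sufficient condition used in the proof of Lemma \ref{pesc1} is easy to verify numerically. The sketch below, with arbitrary illustrative values of the slope $r$ and target profit $P$, checks that the per-unit surplus $(\mathcal{R}(k)-P)/k$ is non-decreasing in $k$ when $\mathcal{R}(k)=rk$:

```python
def per_unit_profit(r: float, P: float, k: int) -> float:
    """Per-unit surplus (R(k) - P) / k for a linear revenue curve R(k) = r*k."""
    return (r * k - P) / k

def condition_holds(r: float, P: float, k_max: int) -> bool:
    """Check (R(k2)-P)/k2 <= (R(k1)-P)/k1 for all 1 <= k2 <= k1 <= k_max."""
    return all(
        per_unit_profit(r, P, k2) <= per_unit_profit(r, P, k1)
        for k1 in range(1, k_max + 1)
        for k2 in range(1, k1 + 1)
    )

# For a non-negative target profit P, (r*k - P)/k = r - P/k is
# non-decreasing in k, so the condition holds for every grid of k's.
assert condition_holds(r=15.0, P=100.0, k_max=50)
```

The values $r=15$, $P=100$ are arbitrary; algebraically, $(rk-P)/k = r - P/k$ increases with $k$ for any $P \geq 0$, which is all the proof needs.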
Subsequently, we would like to design prior-free procurement auctions for the more general setting where each seller can announce discounts based on the volume of supply. \end{document}
\begin{document} \date{\today} \subjclass{34C15, 34D06, 92D25} \keywords{Synchronization, region of attraction, transient stability, second-order Kuramoto oscillators, lossless power grids, connected network, inhomogeneous dampings} \begin{abstract} We consider the synchronization problem of swing equations, a second-order Kuramoto-type model, on {\em connected networks} with {\em inhomogeneous dampings}. This is largely motivated by its relevance to the dynamics of power grids. We focus on the estimate of the region of attraction of synchronous states, which is a central problem in the transient stability of power grids. In the recent literature, D\"{o}rfler, Chertkov, and Bullo [{\em Proc. Natl. Acad. Sci. USA}, 110 (2013), pp. 2005-2010] found a condition for the synchronization in smart grids. They pointed out that the region of attraction is an important unsolved problem. In [{\em SIAM J. Control Optim.}, 52 (2014), pp. 2482-2511], only a special case was considered where the oscillators have homogeneous dampings and the underlying graph has a diameter less than or equal to 2. There the analysis heavily relies on these assumptions; however, they are too strict compared to real power networks. In this paper, we continue the study and derive an estimate on the region of attraction of phase-locked states for lossless power grids on connected graphs with inhomogeneous dampings. Our main strategy is based on the gradient-like formulation and energy estimate. We refine the assumptions by constructing a new energy functional which enables us to consider such general settings. \end{abstract} \maketitle \section{Introduction} \textbf{General background.-} The synchronization of large populations of weakly coupled oscillators is very common in nature, and it has been extensively studied in various scientific communities such as physics, biology, sociology, etc.
The scientific interest in the synchronization of coupled oscillators can be traced back to Christiaan Huygens' report on coupled pendulum clocks \cite{H}. However, its rigorous mathematical treatment was carried out by Winfree \cite{W} and Kuramoto \cite{K} only several decades ago. Since then, the Kuramoto model has become a paradigm for synchronization, and various extensions have been extensively explored in scientific communities such as applied mathematics \cite{C-H-J-K,C-H-Y,C-L-H-X-Y}, engineering and control theory \cite{C-S, D-B-1, D-B-0, D-B-2}, physics \cite{A-B, P-R-K, S}, and neuroscience and biology \cite{E, K}. In the present work, we consider the synchronization of a variant of the Kuramoto model which has particular significance in engineering, namely, power grids with general network topology and inhomogeneous dampings. The power grid, as a complex large-scale system, has rich nonlinear dynamics, and its synchronization and transient stability are very important in real applications. The transient stability, roughly speaking, is concerned with the ability of a power network to settle into an acceptable steady-state operating condition following a large disturbance. In recent years, renewable energy has fascinated not only the scientific community but also the industry. It is believed that future power generation will rely increasingly on renewables such as wind and solar power, and the renewable power industry has been growing. These renewable power sources are highly stochastic; thus, an increasing number of transient disturbances will act on increasingly complex power grids. As a consequence, it becomes ever more important to study complex power grids and their transient stability.
\textbf{Literature review.-} The similarity between the power grids and nonuniform second-order (inertial) Kuramoto oscillators \begin{equation*}\label{eq1}m_i\ddot{\theta}_i+d_i\dot{\theta}_{i} = \Omega_{i} + \sum_{j=1}^{N} a_{ij} \sin(\theta_{j} - \theta_{i})\end{equation*} has been reported and explored in many works such as \cite{D-B-2, F-N-P,F-R-C-M-R,S-U-S-P}. If we take $m_i=0$, $d_i=1$ and $a_{ij}=K/N$, then it reduces to the classic Kuramoto model with mean-field coupling strength $K$. The synchronization of the classic model has been studied in many works, such as \cite{C-H-J-K, C-S, J-M-B, L-X-Y, V-M,V-M2}. This problem is to look for conditions on the parameters and/or initial phase configurations leading to the existence or emergence of phase-locked states. The inertial effect was first conceived by Ermentrout \cite{E} to explain the slow synchronization of certain biological systems. Mathematically, incorporating the inertial effect into the Kuramoto model simply amounts to adding the second-order term, resulting in a model with $m_i=m, d_i=1, a_{ij}=K/N$, which causes richer phenomena from the dynamical viewpoint. For mathematical results on the inertial model we refer to \cite{C-H-Y,C-L-H-X-Y,D-B-1,L-X-Y1}. A connection between first and second-order models is the topological conjugacy argument in \cite{D-B}. The power networks with synchronous motors can be described by swing equations, a system of nonuniform second-order Kuramoto oscillators; see Subsection \ref{subsecmodels}. The transient stability, in terms of power grids, is concerned with the system's ability to reach an acceptable synchronism after a major disturbance such as a short circuit caused by lightning. Then the fundamental problem, as pointed out in the survey \cite{V-W-C}, is: {\em whether the post-fault state (when the disturbance is cleared) is located in the region of attraction of synchronous states}.
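To make the second-order model above concrete, the following sketch (illustrative, not from the paper; all parameter values are arbitrary) integrates the nonuniform inertial Kuramoto system with forward Euler and observes frequency synchronization for small natural-frequency spread and strong mean-field coupling:

```python
import math
import random

def simulate(m, d, omega_nat, a, theta0, w0, dt=1e-3, steps=20000):
    """Forward-Euler integration of m_i x'' + d_i x' = Omega_i + sum_j a_ij sin(x_j - x_i)."""
    n = len(theta0)
    theta, w = list(theta0), list(w0)
    for _ in range(steps):
        # acceleration from the swing equation, evaluated at the current state
        acc = [(-d[i] * w[i] + omega_nat[i]
                + sum(a[i][j] * math.sin(theta[j] - theta[i]) for j in range(n)))
               / m[i] for i in range(n)]
        theta = [theta[i] + dt * w[i] for i in range(n)]
        w = [w[i] + dt * acc[i] for i in range(n)]
    return theta, w

random.seed(0)
n, K = 4, 5.0
m = [0.1] * n                          # small, uniform inertia
d = [1.0] * n                          # uniform damping
omega_nat = [0.2, -0.1, -0.05, -0.05]  # natural frequencies, summing to zero
a = [[K / n] * n for _ in range(n)]    # all-to-all mean-field coupling K/N
theta0 = [random.uniform(-0.5, 0.5) for _ in range(n)]
theta, w = simulate(m, d, omega_nat, a, theta0, [0.0] * n)
spread = max(w) - min(w)               # frequency-synchronization indicator
```

With these (hypothetical) values, `spread` tends to zero: the oscillators lock to the common frequency $\sum_i \Omega_i / \sum_i d_i = 0$, in line with the macro-micro decomposition discussed later.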
Thus, a closely related issue is to estimate the region of attraction of synchronous states. In the recent paper \cite{D-C-B}, the authors focused on the network topology, but as the authors mentioned, {\it ``another important question not addressed in the present article concerns the region of attraction of a synchronized solution''}. Therefore, the region of attraction of synchronized states is indeed a central problem for the transient stability. For the power grid, some analysis on transient stability can be found in \cite{C, C-C-C, V-W-C}, where the approach is the so-called direct method based on the energy function. However, this method did not provide explicit formulas to check if the power system synchronizes for given initial data and parameters. Actually, the energy function, containing a pair-wise nonlinear attraction with terms $\cos(\theta_i-\theta_j)$, is difficult to study. Another tool is based on the singular perturbation theory \cite{C-W-V, D-B-1} by which the second-order dynamics can be approximated by the first-order dynamics when the system is sufficiently strongly over-damped, i.e., the ratio of inertia over damping is sufficiently small. For example, in \cite{D-B-1} the authors studied the more sophisticated power networks with energy losses (phase shifts) and derived algebraic conditions that relate the synchronization to the underlying network structure. Unfortunately there is no formula in \cite{C-W-V,D-B-1} to check whether a given system is so strongly over-damped that the result can be applied. In the survey paper \cite{D-B-0}, D\"{o}rfler and Bullo pointed out that the transient dynamics of second-order oscillator networks is a challenging open problem. As far as the authors know, the direct analysis on the region of attraction for second-order Kuramoto oscillators could be found only in \cite{C-H-Y, C-L-H-X-Y, L-X-Y1}. However, in terms of power grids, there are drawbacks in at least two aspects. 
First, in these studies the inertia and damping are assumed to be either uniform \cite{C-H-Y, C-L-H-X-Y} or homogeneous \cite{L-X-Y1}, which is not realistic for power generators. The second drawback lies in the network topology. For example, in \cite{L-X-Y1} the transient stability was considered when the underlying graph has a diameter less than or equal to 2. In \cite{D-B-1}, the underlying network even has to be all-to-all (see \cite[Theorem 2.1]{D-B-1}). In practice, it is not realistic to assume that a power network has such nice connectivity, for example, the Northern European power grid \cite{M-H-K-S}. Thus, the real situation challenges us to consider general systems with inhomogeneous dampings and general networks. \textbf{Contributions.-} The main contribution of this paper is to {\em estimate the region of attraction of synchronous states for lossless power grids on general networks with inhomogeneous dampings}. To the best of the authors' knowledge, this is the first rigorous study of this challenging problem for such a general model of lossless power grids. We use a direct analysis of the dynamics of the second-order Kuramoto-type model and derive an {\em explicit} formula to estimate the region of attraction. {Among the rigorous analyses of Kuramoto oscillators, a typical method is to study the dynamics of phase differences, for example, \cite{C-H-J-K, C-H-Y, C-L-H-X-Y, C-S, D-B, L-X-Y1}. However, such an analysis crucially relies on the homogeneity of the parameters and on the nice connectivity property that the diameter of the graph be less than or equal to 2. Thus, this method fails in the current case. Our strategy is to use the gradient-like formulation and the energy method. Departing from the (physical) energy used for the so-called direct method in \cite{C, C-C-C, V-W-C}, we will construct a virtual energy function which enables us to derive the boundedness of the trajectory.
Then we can use {\L}ojasiewicz's theory to derive the convergence immediately. We also remark that our virtual energy differs from that in \cite{C-L-H-X-Y}, where uniform inertia and damping were considered.} \textbf{Organization of paper.-} In Section 2, we present the model, the main result, and some discussions. In Section 3, we give a proof of the main result. In Section 4, we present some numerical illustrations. Finally, Section 5 is devoted to a conclusion. {\textbf{Notation:}} \noindent $\|\cdot\|$---Euclidean norm in $\mathbb R^N$, \noindent $L^\infty(\mathbb R^+,\mathbb R^N)=\left\{f:\mathbb R^+\rightarrow \mathbb R^N\mid f \,\,\mbox{is bounded}\right\},$ \noindent $W^{1,\infty}(\mathbb R^+,\mathbb R^N)=\left\{f:\mathbb R^+\rightarrow \mathbb R^N\mid f \,\,\mbox{is differentiable}, f, f'\in L^\infty(\mathbb R^+,\mathbb R^N)\right\}.$ \section{Models, main result and discussions}\label{preliminaries} \setcounter{equation}{0} In this section, we present the model of power grids as a second-order Kuramoto-type model, and its gradient-like flow formulation together with a key convergence result for general gradient-like systems with analytic potentials. Some preliminary inequalities are also provided. \subsection{Models}\label{subsecmodels} A mathematical model for a {\em lossless} network-reduced power system \cite{C-C-C,D-C-B} can be defined by the following swing equations: \begin{equation} \label{grid} m_i\ddot{\theta}_i+d_i\dot{\theta}_{i} = P_{m,i} + \sum_{j=1}^{N} |V_i|\cdot|V_j|\cdot\Im(Y_{ij}) \sin(\theta_{j} - \theta_{i}), \quad i=1,2,\cdots,N, \quad t > 0. \end{equation} Here $\theta_i $ and $\dot{\theta}_{i}$ are the rotor angle and frequency of the $i$-th generator, respectively. The parameters $P_{m,i}>0$, $|V_i|>0$, $m_i>0$, and $d_i>0$ are the effective power input, voltage level, generator inertia constant, and damping coefficient of the $i$-th generator, respectively.
For $Y=(Y_{ij})$ we denote the symmetric nodal admittance matrix, and $\Im(Y_{ij})$ represents the susceptance of the transmission line between $i$ and $j$. If the power network is subject to energy loss due to the transfer conductance, then it should be modeled with a phase shift in each coupling term \cite{D-B-1}. We refer to \cite{D-B-1,D-C-B,S-P} for more details on the derivation of \eqref{grid} from physical principles. For mathematical simplicity, let us set $\Omega_{i}=P_{m,i}$ and $a_{ij}=|V_i|\cdot|V_j|\cdot\Im(Y_{ij})$ in \eqref{grid}. Then the system \eqref{grid} becomes a second-order model of coupled oscillators \begin{align}\begin{aligned}\label{Ku-iner-net} m_i\ddot{\theta}_i+d_i\dot{\theta}_{i}& = \Omega_{i} + \sum_{j=1}^{N} a_{ij} \sin(\theta_{j} - \theta_{i}),\quad i=1,2,\dots,N. \end{aligned}\end{align} Here, the coupling between oscillators is symmetric since $Y$ is a symmetric matrix. If $m_i/d_i=m_j/d_j$ for all $i\neq j$, it is said to be a model with {\em homogeneous} dampings. We can define a graph $\mathcal G=(\mathcal V, \mathcal W)$ associated to the system \eqref{Ku-iner-net} such that $\mathcal V=\{1,2,\dots, N\},$ and $\mathcal W=\left\{(i,j): a_{ij}>0\right\}.$ In this setting, we call $\mathcal G$ the {undirected} graph induced by the matrix $A=(a_{ij})$. {We acknowledge that a real power network should contain both generators and loads, while the above model includes only generators. In power flow studies, loads can be modeled in different ways, for example, as a system of first-order Kuramoto oscillators \cite{D-C-B} or as algebraic equations. Another typical way is to use the Kron reduction to obtain the so-called ``network-reduced'' model so that the loads are absorbed into the transfer admittances \cite{D-B-2, Ward}, and the resulting system consists of only generators.
In such a sense, the network-reduced model \eqref{grid} is an often-studied mathematical model for power grids. It is worthwhile to mention that the Northern European power grid in \cite{M-H-K-S} does not have the nice connectivity assumed in the literature \cite{D-B-1,L-X-Y1} after the so-called Kron reduction \cite{D-B-2} (this can be seen by looking into the power flow chart in \cite[Fig.4]{M-H-K-S} together with the topological properties of Kron reduction in \cite[Theorem III.4]{D-B-2}). } Next, we recall some definitions for complete synchronization of coupled oscillators. \begin{definition} Let $\theta(t)=(\theta_1(t), \dots, \theta_N(t))$ be an ensemble of phases of Kuramoto oscillators. \begin{enumerate} \item The Kuramoto ensemble asymptotically exhibits complete frequency synchronization if and only if \[ \displaystyle \lim_{t \to \infty} |\omega_i(t) - \omega_j(t)| = 0, \quad \forall\; i \not = j. \] Here, $\omega_i(t):=\dot\theta_i(t)$ is the frequency of the $i$-th oscillator at time $t$. \item The Kuramoto ensemble asymptotically exhibits a phase-locked state if and only if the relative phase differences converge asymptotically to some constants: \[ \displaystyle \lim_{t \to \infty} ({\theta}_i(t) - {\theta}_j(t) )= \theta_{ij}, \qquad \forall\; i \not = j. \] \end{enumerate} \end{definition} \subsection{A macro-micro decomposition}\label{s3.1} We notice that the system \eqref{Ku-iner-net} can be rewritten as a system of first-order ODEs: \begin{align*} \begin{aligned} {\dot \theta}_i &= \omega_i, \quad i=1, 2, \dots, N, \quad t > 0, \\ {\dot \omega}_i &= \frac{1}{m_i} \left[ -d_i\omega_i + \Omega_i + \sum_{j=1}^{N} a_{ij}\sin(\theta_j - \theta_i) \right].
\end{aligned} \end{align*} Let $\theta:=(\theta_1, \theta_2, \dots, \theta_N)$, \,$\omega:=(\omega_1,\omega_2, \dots,\omega_N)$, \,$M:=diag\{m_1, m_2, \dots, m_N\}$, $D:=diag\{d_1, d_2, \dots, d_N\}$, and $ \Omega:=( \Omega_1, \Omega_2, \dots, \Omega_N)$. Using these newly defined notations, we introduce macro variables as follows: \begin{equation}\label{vs} \Omega_c:= \frac{\sum_{i=1}^N\Omega_i}{tr(D)} = \frac{\sum_{i=1}^N\Omega_i}{\sum_{i=1}^N d_i}, \quad \theta_s:= \sum_{i=1}^Nd_i\theta_i, \quad \omega_s:= \sum_{i=1}^Nm_i\omega_i, \end{equation} where $tr(\cdot)$ denotes the trace of a matrix. We also set the phase fluctuations (micro-variables) as $$\hat\theta_i:=\theta_i- \Omega_c \,t, \quad\,i=1,2,\dots,N,$$ then we get $\ddot{\hat\theta}_i=\ddot{\theta}_i$, $\dot{\hat\theta}_i=\dot{\theta}_i-\Omega_c\,$, and the system \eqref{Ku-iner-net} can be rewritten as \begin{equation} \label{Ku-iner-net1} m_i\ddot{\hat\theta}_i+d_i\dot{\hat\theta}_{i} = \hat\Omega_i + \sum_{j=1}^{N} a_{ij} \sin(\hat\theta_{j} - \hat\theta_{i}) \quad \mbox{with} \quad \hat\Omega_i := \Omega_{i}-d_i \Omega_c, \end{equation} where the ``micro'' natural frequencies $\hat\Omega_i$ sum to zero: \begin{equation*} \sum_{i=1}^N\hat \Omega_i=0. \end{equation*} In particular, if $\Omega_i/d_i= \Omega_j/d_j$ for all $i,j=1,2,\dots,N$, then we have $\hat\Omega_i=0$ for each $i$ and the equation \eqref{Ku-iner-net1} reduces to a system of coupled oscillators with identical natural frequencies: \begin{equation*}\label{Ku-iner-homo} m_i\ddot{\hat\theta}_i+d_i\dot{\hat\theta}_{i} = \sum_{j=1}^{N} a_{ij} \sin(\hat\theta_{j} - \hat\theta_{i}).
\end{equation*} {Note that the ensemble of micro-variables $(\hat\theta_1, \dots, \hat\theta_N)$ is a phase shift of the original ensemble $(\theta_1, \dots, \theta_N)$; thus, they share the same asymptotic properties as far as synchronization or phase-locking behavior is concerned. Moreover, the equations for the variables $\theta_i$ and $\hat\theta_i$, i.e., \eqref{Ku-iner-net} and \eqref{Ku-iner-net1}, have the same form. So, we may consider \eqref{Ku-iner-net1} instead of \eqref{Ku-iner-net} when we study the synchronization problem. These observations enable us to assume, without loss of generality, that the natural frequencies in \eqref{Ku-iner-net} satisfy \begin{equation}\label{eqsum0} \sum_{i=1}^N\Omega_i=0. \end{equation} In the rest of this paper, we consider the system \eqref{Ku-iner-net} with \eqref{eqsum0}.} \subsection{An inequality on connected graphs} Consider a symmetric and connected network, which can be realized as a weighted graph $\mathcal G = (\mathcal V, \mathcal W, A)$. Here, $\mathcal V = \{1, 2, \dots, N\}$ and $\mathcal W \subseteq \mathcal V\times \mathcal V$ are the vertex and edge sets, respectively, and $A = (a_{ij})$ is an $N \times N$ matrix whose element $a_{ij}$ denotes the capacity of the edge (communication weight) flowing from $j$ to $i$. We note that the underlying network of power grids \eqref{Ku-iner-net} is {\em undirected}, i.e., the adjacency matrix $A=\{a_{ij}\}$ is symmetric.
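Since the underlying graph is undirected, the constant $L_* = 1/(1+d(\mathcal G)|\mathcal W^c|)$ appearing in the lemma below can be computed directly from the adjacency matrix. The following sketch (illustrative, not from the paper) does so for a 4-node path graph, using the paper's convention that $d_{ij}$ counts the nodes on a shortest path, and checks the two-sided deviation bound of Lemma \ref{Equivalence.comm} for a random phase vector:

```python
from collections import deque
import random

def bfs_node_counts(adj, src):
    """Number of nodes on a shortest path from src to every vertex (None if unreachable)."""
    n = len(adj)
    dist = [None] * n
    dist[src] = 1                      # a path from src to itself has one node
    q = deque([src])
    while q:
        u = q.popleft()
        for v in range(n):
            if adj[u][v] > 0 and dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

A = [[0, 1, 0, 0],                     # path graph 1-2-3-4, symmetric weights
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
n = len(A)
assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))  # undirected

d_pairs = [bfs_node_counts(A, i) for i in range(n)]
assert all(x is not None for row in d_pairs for x in row)           # connected
diam = max(max(row) for row in d_pairs)                             # d(G)
Wc = sum(1 for i in range(n) for j in range(n) if A[i][j] == 0)     # |W^c|
L_star = 1.0 / (1 + diam * Wc)

random.seed(1)
theta = [random.uniform(-3, 3) for _ in range(n)]
total = sum((theta[l] - theta[k]) ** 2 for l in range(n) for k in range(n))
edges = sum(A[l][k] * (theta[l] - theta[k]) ** 2 for l in range(n) for k in range(n))
assert L_star * total <= edges <= total
```

For this path graph, $d(\mathcal G)=4$ and $|\mathcal W^c|=10$ (diagonal pairs included), so $L_*=1/41$.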
We say the graph $\mathcal G$ is connected if for any pair of nodes $i,j\in \mathcal V$, there exists a shortest path from $i$ to $j$, say \[i=p_1\to p_2\to p_3\to \cdots \to p_{ _{d_{ij}}}=j, \quad (p_k, p_{k+1})\in \mathcal W, \quad k=1,2,\dots,d_{ij}-1.\] For the complete synchronization of \eqref{Ku-iner-net}, we assume in this paper that the induced undirected graph $\mathcal G$ is {\em connected}. The following result, which connects the total deviations and the partial deviations along the edges in a connected graph, will be useful in the energy estimate. For its proof, we refer to \cite{C-L-H-X-Y}. \begin{lemma}\label{Equivalence.comm} Suppose that the graph $\mathcal G= (\mathcal V, \mathcal W, A)$ is connected and let $\theta_i$ be the phase of the Kuramoto oscillator located at the vertex $i$. Then, there exists a positive constant $L_*$ such that \[ \displaystyle L_* \sum_{l,k = 1}^{N} |\theta_l - \theta_k|^2 \leq \sum_{(l, k) \in \mathcal W} |\theta_l - \theta_k|^2 \leq \sum_{l,k =1}^{N} |\theta_l - \theta_k|^2, \] where the positive constant $L_*$ is given by \begin{equation}\label{L} L_* := \frac{1}{1+d(\mathcal G)|\mathcal W^c|} \quad \mbox{with} \quad d(\mathcal G) := \max_{1\leq i,j \leq N}d_{ij}. \end{equation} Here $\mathcal W^c$ is the complement of the edge set $\mathcal W$ in $\mathcal V\times \mathcal V$ and $|\mathcal W^c|$ denotes its cardinality. \end{lemma} \begin{remark} $L_*$ has a strictly positive lower bound as \[ L_* = \frac{1}{1+ d(\mathcal G)|\mathcal W^c|} \geq \frac{1}{1+ d(\mathcal G)N^{2}}.
\] \end{remark} \subsection{Main result} Based on Subsections \ref{subsecmodels} and \ref{s3.1}, our model for network-reduced lossless power grids can be restated as \eqref{Ku-iner-net} together with the restriction \eqref{eqsum0}, i.e., \begin{align}\begin{aligned}\label{Ku-iner-net2} &m_i\ddot{\theta}_i+d_i\dot{\theta}_{i} = \Omega_{i} + \sum_{j=1}^{N} a_{ij} \sin(\theta_{j} - \theta_{i}),\,\, i=1,2,\dots,N, \\ &\sum_{i=1}^N \Omega_{i}=0,\qquad a_{ij}=a_{ji}. \end{aligned}\end{align} In this subsection, we present the main result of this paper. We begin by setting several extremal parameters: \begin{align*}\begin{aligned} a_u :=\max\left\{a_{ij}: (j,i)\in \mathcal W \right\},& \quad a_{\ell}:=\min\left\{a_{ij}: (j,i)\in \mathcal W \right\}, \\ d_u := \max_{1 \leq i\leq N}d_i, \quad d_\ell := \min_{1 \leq i\leq N}d_i,& \quad m_u := \max_{1 \leq i\leq N}m_i, \quad m_\ell := \min_{1 \leq i\leq N}m_i. \end{aligned}\end{align*} We also set fluctuations of parameters: \begin{align*}\begin{aligned} \hat d_i:=d_i-\frac1N\sum_{i=1}^N d_i,& \quad \hat D=diag(\hat d_1, \hat d_2,\dots, \hat d_N),\\ \hat m_i:=m_i-\frac1N\sum_{i=1}^N m_i,& \quad \hat M=diag(\hat m_1, \hat m_2,\dots, \hat m_N). \end{aligned}\end{align*} Using these notations, we introduce our main assumptions on the parameters and initial configurations below. \begin{itemize} \item[${\bf (H1)}$] The underlying graph $\mathcal G$ is connected. \item[${\bf (H2)}$] Let $D_0 \in (0,\pi)$ be given.
The parameters satisfy \begin{equation}\label{assume} a_u^2N^2(2m_u+\lambda)<d_\ell^2(2\mathcal R_0a_\ell L_*N-\lambda), \end{equation} where $\mathcal{R}_0 := \frac{\sin D_0}{D_0}$, $L_*$ is given in Lemma \ref{Equivalence.comm}, and \begin{equation}\label{lambda} \lambda:= \frac{\sqrt{tr({\hat D}^2)}}{\sqrt{N}} + \frac{2\sqrt{tr({\hat M}^2)}}{\sqrt{N}}. \end{equation} \item[${\bf (H3)}$] For some $\displaystyle \varepsilon\in \left(\frac{a_{u}^{2}N^2}{d_\ell\big(2\mathcal{R}_0a_\ell L_*N-\lambda\big) },\,\, \frac{d_\ell}{2m_u+\lambda}\right)$, the parameters and initial data satisfy \begin{equation}\label{assumpB} \max \left\{ \sqrt{\widetilde{\mathcal{E}}(0)}, \frac{2\sqrt{2}C_1\max\{\varepsilon, 1\}\|\Omega\|}{\widetilde C_\ell \sqrt{C_0}}\right\} < \frac{\sqrt{C_0}}{2}D_0, \end{equation} where \begin{align*} C_0 &:= \min \left \{\frac{m_\ell}{2}, \varepsilon d_\ell \left(1 - 2\varepsilon\frac{m_u}{d_\ell} \right) \right \}, \quad C_1 :=\max \left \{\frac{3m_u}{2}, \varepsilon d_u \left(1 + 2\varepsilon\frac{m_u}{d_\ell} \right) \right\}, \cr \widetilde C_\ell &:= \min\left\{d_\ell - 2\varepsilon m_u, 2\varepsilon \mathcal{R}_0 a_\ell L_*N - \frac{a_u^2 N^2}{d_\ell} \right\} - \varepsilon\lambda, \end{align*} and \[ \widetilde{\mathcal{E}}(0) :=\varepsilon\sum_{i=1}^N d_i (\theta_i(0)-\theta_c(0))^2+ 2\varepsilon\sum_{i=1}^N m_i (\theta_i(0) - \theta_c(0))\omega_i(0)+ \sum_{i=1}^N m_i \omega_i^2(0) \] with \[ \theta_c(0) := \frac1N \sum_{i=1}^N \theta_i(0). \] \end{itemize} We are now in a position to state the main theorem of this paper.
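As a quick computational remark (not part of the paper's analysis), condition ${\bf (H2)}$ is exactly the requirement that the interval of admissible $\varepsilon$ in ${\bf (H3)}$ is nonempty, since both endpoints have positive denominators. The sketch below, with arbitrary illustrative parameter values, checks this equivalence:

```python
import math

def eps_interval(a_u, a_l, N, m_u, d_l, R0, L_star, lam):
    """Endpoints of the admissible epsilon interval in (H3)."""
    lower = a_u ** 2 * N ** 2 / (d_l * (2 * R0 * a_l * L_star * N - lam))
    upper = d_l / (2 * m_u + lam)
    return lower, upper

def H2_holds(a_u, a_l, N, m_u, d_l, R0, L_star, lam):
    """The parametric condition (H2): a_u^2 N^2 (2 m_u + lam) < d_l^2 (2 R0 a_l L_* N - lam)."""
    return a_u ** 2 * N ** 2 * (2 * m_u + lam) < d_l ** 2 * (2 * R0 * a_l * L_star * N - lam)

# illustrative values: uniform coupling, small inertia, small parameter spread
params = dict(a_u=1.0, a_l=1.0, N=4, m_u=0.05, d_l=1.0,
              R0=math.sin(1.0) / 1.0, L_star=0.5, lam=0.01)
lower, upper = eps_interval(**params)
assert H2_holds(**params) == (lower < upper)
```

All numbers above (`L_star=0.5`, `lam=0.01`, etc.) are hypothetical placeholders; in an application they would be computed from the actual graph and from \eqref{lambda}.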
\begin{theorem}\label{thm4} Suppose that the hypotheses ${\bf (H1)}$-${\bf (H3)}$ hold. Then the global solution $\theta(t)$ to the system \eqref{Ku-iner-net2} asymptotically exhibits a phase-locked state. \end{theorem} \subsection{Discussions} {We would like to comment on the accessibility of the assumptions ${\bf (H1)}$-${\bf (H3)}$. The assumption ${\bf (H1)}$ guarantees the positivity of the constant $L_*$ appearing in \eqref{L}, and then ${\bf (H2)}$ can hold true, for example, when the inertia is small and the variances of inertia and damping are also small. Next, the assumption ${\bf (H2)}$ ensures that the interval of admissible $\varepsilon$ is nonempty, which further guarantees that $\widetilde C_\ell>0$. Finally, the condition \eqref{assumpB} can be fulfilled when the size of the initial data (in terms of the initial energy $\widetilde{\mathcal{E}}(0)$) and the size of the (micro) natural frequencies $\|\Omega\|$ are small.} {By the definition of the perturbed matrices $\hat D$ and $\hat M$, we find $\lambda = 0$ if $d_i = d$ and $m_i = m$ for all $1 \leq i \leq N$. Moreover, in the case of uniform inertia and damping, our assumptions ${\bf (H1)}$-${\bf (H3)}$ become the ones in \cite{C-L-H-X-Y}. } {We acknowledge that our estimate is conservative in the sense that the conditions are sufficient but not necessary. In spite of that, Theorem \ref{thm4} gives {\em explicit} formulas to guarantee that a given state lies in the region of attraction of synchronous states of a grid system, by verifying that it meets the framework of ${\bf (H1)}$-${\bf (H3)}$; this is easy to carry out since only algebraic operations are involved. We can also observe some interesting points from the statement of Theorem \ref{thm4}.
We notice that the parametric condition \eqref{assume} becomes more flexible when we increase the constant $L_*$ and/or decrease the constant $\lambda$. Recalling \eqref{L}, we see that if one decreases the diameter of the graph or increases the number of edges, then the value of $L_*$ becomes larger and the parametric condition is relaxed. On the other hand, by \eqref{lambda}, the constant $\lambda$ depends on the fluctuations of the nonuniform parameters $d_i$ and $m_i$; thus, the parametric fluctuations hinder the synchronization. These two observations are consistent with our intuition and give some qualitative understanding of the synchronizability versus the system parameters.} {Compared to \cite{D-B-1, L-X-Y1}, the advantage of our main result lies in at least two aspects. First, we study general systems in which the dampings can be inhomogeneous, i.e., the ratio of damping over inertia can differ between generators. In comparison, the analysis in \cite{L-X-Y1} is limited to the case of homogeneous dampings. Second, we extend the network topology to the most general case, i.e., the underlying graph can be arbitrary except for the fundamental restriction that the network should be connected. Here, the connectedness is indeed necessary for synchronization; otherwise, the oscillators in different components cannot be expected to synchronize. In this sense, our assumption on the connectivity is the most general one. In contrast, the main result in \cite{D-B-1} implicitly assumes that the underlying network is all-to-all, i.e., each pair of nodes is directly connected; in \cite{L-X-Y1}, a basic hypothesis is that the underlying graph should have a diameter less than or equal to 2.} \section{Proof of main result: convergence to phase-locked states} \setcounter{equation}{0} In this section, we give the proof of the main result, Theorem \ref{thm4}.
Our main strategy can be summarized as follows. In Subsection \ref{sec_gradform}, we present a gradient formulation of the system \eqref{Ku-iner-net2} and introduce the related theory, which tells us that boundedness of a trajectory implies its convergence. Then, in order to show this boundedness, we construct a virtual energy functional in Subsection \ref{sec_apriori}. The energy functional $\widetilde{\mathcal{E}}(t)$ involves the fluctuation of the phases around their average \begin{equation}\label{thetacc}\theta_c(t) = \frac 1N\sum_{i=1}^N \theta_i(t).\end{equation} In order to illustrate the reason for using such an energy, we begin with the energy functional $\mathcal{E}(t)$ introduced in \cite{C-L-H-X-Y}. In Subsection \ref{sec_main}, we combine the above estimates and theory to derive the convergence to phase-locked states for the power network \eqref{Ku-iner-net2}. \subsection{A gradient-like flow formulation}\label{sec_gradform} In this part we present a new formulation of the system \eqref{Ku-iner-net} as a second-order gradient-like system in the case of symmetric capacities, i.e., $a_{ij} = a_{ji}$ for all $i,j \in \{1, 2, \dots,N\}$. For the classic Kuramoto model, the potential function in the gradient flow was first introduced in \cite{V-W}; it can be extended to the Kuramoto model with symmetric interactions. The following result was presented in \cite{C-L-H-X-Y}; we sketch the proof here for the reader's convenience. \begin{lemma}\label{lemgradform} The system \eqref{Ku-iner-net} is a second-order gradient-like system with a real analytic potential $f$, i.e., \begin{equation}\label{gradsyst} M {\ddot \theta} + D{\dot \theta} = \nabla f(\theta), \end{equation} if and only if the adjacency matrix $A= (a_{ij})$ is symmetric.
\end{lemma} \begin{proof} (i) Suppose that the matrix $A$ is symmetric, i.e., $a_{ij}=a_{ji}.$ We define $f: \mathbb{R}^N\to \mathbb{R}$ by \begin{equation}\label{eqpot} f(\theta) :=\sum_{k=1}^N\Omega_k \theta_k+\frac{1}{2}\sum_{k,l=1}^N a_{kl}\cos(\theta_k-\theta_l). \end{equation} It is clearly analytic in $\theta$, and the system \eqref{Ku-iner-net} is a second-order gradient-like system \eqref{gradsyst} with the potential $f$ defined in \eqref{eqpot}. (ii) We now assume that the system \eqref{Ku-iner-net} is a gradient system with an analytic potential $f$, i.e., \[ \frac{\partial f(\theta)}{\partial {\theta_i}} =\Omega_{i}+ \sum_{j=1}^{N} a_{ij} \sin(\theta_{j} - \theta_{i}),\quad i=1,2,\dots,N. \] Then the potential $f$ must satisfy $\displaystyle \frac{\partial^2 f }{\partial{\theta_k} \partial\theta_l}= \frac{\partial^2 f }{\partial{\theta_l} \partial\theta_k}$ for $l\neq k.$ This yields $a_{lk}=a_{kl}$ for all $l,k \in \{1,2,\dots,N\}$. \end{proof} We next present a convergence result for the second-order gradient-like system on $\mathbb R^N$: \begin{equation}\label{mrgradient} M\ddot\theta+D\dot\theta =\nabla f(\theta), \quad \theta\in \mathbb{R}^N, \quad t\geq 0. \end{equation} Note that the set of equilibria $\mathcal{S}$ coincides with the set of critical points of the potential $f$: \[ {\mathcal S} := \{ \theta \in \mathbb R^N:~\nabla f(\theta) = 0 \}. \] Based on the celebrated theory of {\L}ojasiewicz \cite{Lo1}, a convergence result for the gradient-like system with uniform inertia was established in \cite{H-J}; as a slight extension, the following result was given in \cite{L-X-Y1}. \begin{lemma}\label{convergencethm2} \cite{L-X-Y1} Assume that $f$ is analytic and let $\theta=\theta(t)$ be a global solution of \eqref{mrgradient}.
If $\theta(\cdot)\in W^{1,\infty}(\mathbb R^+,\mathbb R^N)$, i.e., $\theta(\cdot) \in L^\infty(\mathbb R^+,\mathbb R^N)$ and $\dot\theta(\cdot)\in L^\infty(\mathbb R^+,\mathbb R^N)$, then there exists an equilibrium $\theta_e\in \mathcal S$ such that \[\lim_{t\to+\infty}\left\{\|\dot\theta(t)\|+\|\theta(t)-\theta_e\|\right\}=0.\] \end{lemma} {Before we proceed, we first clarify that the Kuramoto oscillators are treated, in this paper, as a dynamical system on the whole space $\mathbb R^N$. Indeed, one can consider them as a system on the $N$-torus $\mathbb S^1\times \dots\times\mathbb S^1$, since the coupling function $\sin(\cdot)$ is $2\pi$-periodic. However, in order to apply {\L}ojasiewicz's theory, we should treat the system \eqref{Ku-iner-net2} as a system on $\mathbb R^N$. For more details on {\L}ojasiewicz's theory and its applications, please refer to \cite{C-L-H-X-Y,H-J,L-X-Y,L-X-Y1}.} Then, as a direct application of Lemma \ref{convergencethm2}, we obtain an {\it a priori} result on the complete frequency synchronization for \eqref{Ku-iner-net2}. \begin{proposition} \label{convergencethm} Let $\theta= \theta(t)$ be a solution to \eqref{Ku-iner-net2} in $W^{1, \infty}(\mathbb R^+, \mathbb R^N)$. Then there exists $\theta^\infty \in {\mathcal S}$ such that $\lim_{t\to \infty}\{\|\dot\theta(t) \|+\|\theta(t)-\theta^\infty\|\}=0.$ \end{proposition} The following lemma shows that $\dot\theta(\cdot)$ belongs to $L^\infty(\mathbb R^+,\mathbb R^N)$ whenever $\theta(t)$ is a solution of the system \eqref{Ku-iner-net2}. \begin{lemma}\label{lemmafreqbdd} Let $\theta = \theta(t)$ be a solution to \eqref{Ku-iner-net2}. Then ${\dot \theta}(\cdot)\in L^\infty(\mathbb R^+,\mathbb R^N)$.
\end{lemma} \begin{proof}It follows from \eqref{Ku-iner-net2} that $\omega_i$ satisfies \[ m_i \dot{\omega}_i + d_i\omega_i = \Omega_i + \sum_{j=1}^N a_{ij} \sin(\theta_j - \theta_i) \leq |\Omega_i| + \sum_{j=1}^N a_{ij}. \] Note that $\omega_i$ is an analytic function of $t$. This implies that, unless $\omega_i$ vanishes identically, the zero set $\{t:\omega_i(t) = 0 \}$ has only finitely many points in any finite time interval; hence $|\omega_i(t)|$ is continuous and piecewise differentiable. We multiply the above relation by $\mbox{sgn}(\omega_i)$ and divide it by $m_i > 0$ to get \[ \frac{d |\omega_i|}{dt} + \frac{d_i}{m_i}|\omega_i| \leq \frac{1}{m_i} \left( |\Omega_i| + \sum_{j=1}^N a_{ij} \right), \quad \mbox{a.e. $t \geq 0$}. \] We now use Gronwall's inequality and the continuity of $|\omega_i|$ to obtain, for all $t>0$, \begin{align*}\begin{aligned} |\omega_i(t)| \leq |\omega_i(0)| e^{-\frac{d_i}{m_i}t} + \frac{1}{d_i}\left( |\Omega_i| + \sum_{j=1}^N a_{ij} \right)\left(1 - e^{-\frac{d_i}{m_i}t}\right) \leq |\omega_i(0)| + \frac{1}{d_i}\left( |\Omega_i| + \sum_{j=1}^N a_{ij} \right), \end{aligned}\end{align*} due to $a_{ij} \geq 0$. This establishes the boundedness of $\omega(t)=\dot\theta(t)$ in time. \end{proof} {\begin{remark}\label{remark1} By Proposition \ref{convergencethm} and Lemma \ref{lemmafreqbdd}, to prove that phase-locked states emerge in the system \eqref{Ku-iner-net2}, it suffices to show $\theta(\cdot)\in L^\infty(\mathbb R^+,\mathbb R^N)$, i.e., that the phase trajectory is bounded.
\end{remark} \begin{remark}\label{remark2} For the system of coupled oscillators \eqref{Ku-iner-net} with general natural frequencies satisfying $\sum_{i=1}^N \Omega_i\neq0$, we cannot expect the trajectory $\theta(t)=(\theta_1(t),\dots,\theta_N(t))$ to be bounded in $\mathbb R^N$, since the right hand side of \eqref{Ku-iner-net} sums to $\sum_{i=1}^N \Omega_i\neq0$. This is why we apply the macro-micro decomposition and define the micro-variables in Section \ref{s3.1}, which allows us to assume without loss of generality that $\sum_{i=1}^N\Omega_i=0$ and reduces the system to the model \eqref{Ku-iner-net2}. In the next subsection, this restriction will be used crucially. \end{remark} } \subsection{Construction of the energy functional $\widetilde{\mathcal{E}}$}\label{sec_apriori} Inspired by \cite{C-L-H-X-Y}, we first introduce a temporal energy functional $\mathcal{E}$: for $\varepsilon > 0$, \begin{align}\begin{aligned} \label{energy-1} \mathcal{E} [\theta, \omega]&:= \varepsilon \langle D\theta, \theta\rangle + 2 \varepsilon\langle M\theta, \omega \rangle + \langle M\omega, \omega\rangle\\& =\varepsilon \sum_{i=1}^N d_i \theta_i^2+ 2\varepsilon \sum_{i=1}^N m_i \theta_i\omega_i+ \sum_{i=1}^N m_i \omega_i^2. \end{aligned}\end{align} Here, the notation $\langle \cdot, \cdot\rangle$ represents the standard inner product in $\mathbb R^N$. Then we easily find the following equivalence relation between $\mathcal{E}[\theta,\omega]$ and $\|\theta\|^2 + \|\omega\|^2$. \begin{lemma}\label{equivalence} Let $\varepsilon\in \left(0,\, \frac{d_\ell}{2m_u} \right)$.
Then we have the following relation: \[ C_0 (\|\theta\|^2+\|\omega\|^2) \leq {\mathcal E}[\theta, \omega] \leq C_1 (\|\theta\|^2+\|\omega\|^2), \quad \forall\,\theta, \omega\in \mathbb R^N, \] where $C_0$ and $C_1$ are positive constants (independent of $(\theta, \omega)$) given by \begin{align*} \begin{aligned} C_0 := \min \left \{\frac{m_\ell}{2}, \varepsilon d_\ell\left(1 - 2\varepsilon\frac{m_u}{d_\ell} \right) \right \} \quad \mbox{and} \quad C_1 :=\max \left \{\frac{3m_u}{2}, \varepsilon d_u \left(1 + 2\varepsilon\frac{m_u}{d_\ell} \right) \right\}. \end{aligned} \end{align*} \end{lemma} \begin{proof} In \eqref{energy-1}, the cross term $ \theta_i \omega_i $ can be estimated by Young's inequality: \[ | \theta_i \omega_i |\leq \varepsilon\theta_i^2 + \frac{\omega_i^2}{4\varepsilon}. \] Then, we have \[ 2\varepsilon m_i |\theta_i \omega_i |\leq 2\varepsilon^2m_i\theta_i^2 + \frac{m_i}{2}\omega_i^2 \leq 2\varepsilon^2 \frac{m_u}{d_\ell} d_i\theta_i^2 + \frac{m_i}{2}\omega_i^2 , \] and hence \[ \sum_{i=1}^N\frac{m_i}{2}\omega_i^2+\varepsilon d_\ell \left(1 - 2\varepsilon\frac{m_u}{d_\ell} \right)\sum_{i=1}^N \theta_i^2\leq{\mathcal E}[\theta, \omega] \leq \sum_{i=1}^N\frac{3m_i}{2}\omega_i^2+\varepsilon d_u\left(1 + 2\varepsilon\frac{m_u}{d_\ell} \right)\sum_{i=1}^N \theta_i^2. \] This gives the desired result. \end{proof} \begin{lemma}\label{lemma-energy}Let $D_0\in (0, \pi)$ and suppose that the phase configuration $\{\theta_i\}_{i=1}^N$ satisfies \[ \max_{1 \leq i, j \leq N} |\theta_i - \theta_j | \leq D_0. \] Then the following estimates hold.
\begin{eqnarray*} &&(i)\,\,\,\,\,\,a_{u}\sum_{(i,j)\in \mathcal W}\Big|\sin(\theta_{j}-\theta_i)(\omega_j-\omega_i)\Big| \leq \frac{a_u^2 N^2}{d_\ell}\|\theta - \theta_c\|^2 + d_\ell\|\omega\|^2. \cr && (ii)\,\, \sum_{(i,j)\in \mathcal W} a_{ij} \sin(\theta_j-\theta_i)(\theta_j-\theta_i) \geq 2\mathcal{R}_0 a_{\ell} L_* N\|\theta-\theta_c\|^2, \end{eqnarray*} where $\mathcal{R}_0$ is given by $\mathcal{R}_0 := \frac{\sin D_0}{D_0}$, and the vector $\theta-\theta_c$ is understood as $\theta-\theta_c:=(\theta_1,\dots,\theta_N)-(\theta_c,\dots,\theta_c)$ with $\theta_c$ given in \eqref{thetacc}. \end{lemma} \begin{proof} (i) We use $|\sin(\theta_j-\theta_i)| \leq |\theta_j-\theta_i|$ and Young's inequality to obtain $$\begin{aligned} a_{u} \sum_{(i,j) \in \mathcal W}\Big|\sin(\theta_j-\theta_i)(\omega_j-\omega_i)\Big| &\leq \frac{a_{u}^2 N}{2d_\ell} \sum_{(i,j)\in \mathcal W} |\theta_j-\theta_i|^2+ \frac{d_\ell}{2N} \sum_{(i,j)\in \mathcal W} |\omega_j-\omega_i|^2\cr &\leq \frac{a_u^2 N}{2d_\ell}\sum_{1 \leq i,j \leq N}|\theta_i - \theta_j|^2 + d_\ell\|\omega\|^2\cr &= \frac{a_u^2 N^2}{d_\ell}\|\theta - \theta_c\|^2 + d_\ell\|\omega\|^2, \end{aligned}$$ where we used the relations \[\sum_{1 \leq i,j \leq N}|\theta_i - \theta_j|^2=2N\|\theta - \theta_c\|^2,\] and \[ \sum_{(i,j)\in \mathcal W} |\omega_j-\omega_i|^2 \leq \sum_{1 \leq i,j \leq N}|\omega_i - \omega_j|^2 =2N\|\omega - \omega_c\|^2 \leq 2N\|\omega\|^2.
\] (ii)~ It follows from the assumption \[ \max_{1 \leq i,j \leq N}|\theta_j-\theta_i|\leq D_{0} < \pi, \] and the simple relation \[ x\sin x\geq \mathcal{R}_0 x^2 \quad \mbox{for} \quad x \in [-D_0,D_0], \] that $$\begin{aligned} \sum_{(i,j) \in \mathcal W}a_{ij}\sin(\theta_j-\theta_i)(\theta_j-\theta_i) &\geq \mathcal{R}_0\sum_{(i,j)\in \mathcal W}a_{ij}|\theta_j-\theta_i|^2\cr &\geq \mathcal{R}_0 a_{\ell} L_* \sum_{1\leq i,j\leq N}|\theta_j-\theta_i|^2\cr &=2\mathcal{R}_0 a_{\ell} L_* N\|\theta-\theta_c\|^2. \end{aligned}$$ Here $L_*$ is the positive constant explicitly defined in Lemma \ref{Equivalence.comm}. \end{proof} Recall that the system \eqref{Ku-iner-net2} can be rewritten as \begin{align} \begin{aligned} \label{first-KMI} {\dot \theta}_i &= \omega_i, \quad i=1, 2, \dots, N, \quad t > 0, \\ {\dot \omega}_i &= \frac{1}{m_i} \left[ -d_i\omega_i + \Omega_i + \sum_{j=1}^{N} a_{ij}\sin(\theta_j - \theta_i) \right],\\ &\sum_{i=1}^N \Omega_{i}=0,\qquad a_{ij}=a_{ji}. \end{aligned} \end{align} Next, we present a quantitative estimate of the energy functional along the flow. For notational simplicity, we denote \[ {\mathcal E}(t) := {\mathcal E}[\theta(t), \omega(t)], \quad t \geq 0, \] where $(\theta(t),\omega(t))$ is the solution to the system \eqref{Ku-iner-net2} or \eqref{first-KMI}. \begin{proposition}\label{EnergyEstimateLemma} Let $D_0\in (0,\pi)$ and $\{\theta_i\}_{i=1}^N$ be any smooth solution to the system \eqref{Ku-iner-net2}. Suppose that \[ a_u^2N m_u < d_\ell^2 \mathcal R_0 a_\ell L_* \quad \mbox{and} \quad \max_{t\in [0,T_0]} \max_{1 \leq i, j \leq N} |\theta_i(t) - \theta_j(t) | \leq D_0 \] for some $T_0 > 0$.
Then, for any $\varepsilon$ satisfying \begin{equation}\label{condi_e} \frac{a_{u}^{2}N}{2d_\ell \mathcal{R}_0 a_\ell L_*}<\varepsilon<\frac{d_\ell}{2m_u}, \end{equation} we have \begin{equation}\label{energy-0} \frac{d}{dt}\mathcal{E}(t) + C_\ell \mathcal{D}(t) \leq 2\max\{\varepsilon, 1\}\| \Omega\|\left( \|\theta - \theta_c\| + \|\omega\|\right), \quad \mbox{for} \,\,\,t\in[0, T_0], \end{equation} where $\mathcal{D}(t) := \mathcal{D}[\theta(t),\omega(t)]$ and $C_\ell$ are defined by \[ \mathcal{D}[\theta,\omega]:= \|\omega\|^2 + \|\theta - \theta_c\|^2 \quad \mbox{and} \quad C_\ell:= \min\left\{d_\ell - 2\varepsilon m_u, 2\varepsilon \mathcal{R}_0 a_\ell L_*N - \frac{a_u^2 N^2}{d_\ell} \right\}. \] \end{proposition} \begin{proof} The proof is divided into three steps. $\bullet$ {\bf Step A.-} We multiply both sides of $\eqref{first-KMI}_2$ by $2m_i\omega_i$, sum over $i$, and then use the symmetry of $a_{ij}$ and Lemma \ref{lemma-energy} to obtain \begin{align*} \begin{aligned} \frac{d}{dt} \sum_{i=1}^N m_i\omega_i^2 &= -2 \sum_{i=1}^N d_i\omega_i^2 + 2 \sum_{i=1}^N \Omega_i \omega_i + 2 \sum_{i,j=1}^{N}a_{ij}\sin (\theta_{j}-\theta_i)\omega_i \\ &=-2 \sum_{i=1}^N d_i\omega_i^2 + 2 \sum_{i=1}^N \Omega_i \omega_i - \sum_{i,j=1}^{N}a_{ij}\sin (\theta_{j}-\theta_i)(\omega_j- \omega_i)\\ &\leq -2 \sum_{i=1}^N d_i\omega_i^2 + 2 \sum_{i=1}^N \Omega_i \omega_i + a_{u}\sum_{(i,j)\in \mathcal W}\Big|\sin(\theta_{j}-\theta_i)(\omega_j-\omega_i)\Big| \\ &\leq -2 \sum_{i=1}^N d_i\omega_i^2 + 2 \|\Omega\| \|\omega\| +\frac{a_u^2 N^2}{d_\ell}\|\theta - \theta_c\|^2 + d_\ell\|\omega\|^2.
\end{aligned} \end{align*} This yields \begin{equation}\label{sumoveri.comm} \frac{d}{dt}\langle M\omega,\omega\rangle \leq-d_\ell \|\omega\|^2 + 2\|\Omega\|\|\omega\| + \frac{a_u^2 N^2}{d_\ell}\|\theta - \theta_c\|^2. \end{equation} $\bullet$ {\bf Step B.-} We now multiply both sides of $\eqref{first-KMI}_2$ by $2m_i\theta_i$ to obtain \begin{align*} \begin{aligned} 2m_i \left( \frac{d \omega_i}{dt} \right) \theta_i = - d_i\frac{d}{dt} \theta_i^2 + 2 \Omega_i \theta_i + 2\sum_{j=1}^{N}a_{ij}\sin(\theta_j-\theta_i)\theta_i. \end{aligned} \end{align*} Summing the above equality over $i$ and using the symmetry of $a_{ij}$ and Lemma \ref{lemma-energy}, we find \begin{align}\label{est_wt} \begin{aligned} 2 \sum_{i=1}^{N} m_i\left( \frac{d \omega_i}{dt}\right) \theta_i &= -\frac{d}{dt}\sum_{i=1}^N d_i\theta_i^2 + 2 \sum_{i=1}^N \Omega_i \theta_i + 2\sum_{(j,i)\in \mathcal W}a_{ij}\sin(\theta_j-\theta_i)\theta_i\\ &= -\frac{d}{dt}\sum_{i=1}^N d_i\theta_i^2+ 2 \sum_{i=1}^N \Omega_i \theta_i - \sum_{(j,i)\in \mathcal W}a_{ij}\sin(\theta_j-\theta_i)(\theta_j-\theta_i) \\&= -\frac{d}{dt}\sum_{i=1}^N d_i\theta_i^2+ 2 \sum_{i = 1}^N\Omega_i (\theta_i - \theta_c) - \sum_{(j,i)\in \mathcal W}a_{ij}\sin(\theta_j-\theta_i)(\theta_j-\theta_i) \\ &\leq -\frac{d}{dt}\sum_{i=1}^N d_i\theta_i^2 + 2 \|\Omega\| \|\theta - \theta_c \| - 2\mathcal{R}_0 a_{\ell} L_* N\|\theta-\theta_c\|^2, \end{aligned} \end{align} where we used the restriction that \[ \sum_{i = 1}^N \Omega_i = 0. \] On the other hand, the term on the left-hand side of \eqref{est_wt} can be rewritten as \begin{equation}\label{est_wt2} m_i\frac{d\omega_i}{dt} \theta_i=m_i \frac{d}{dt} (\omega_i\theta_i)- m_i \omega_i ^2.
\end{equation} Combining \eqref{est_wt} and \eqref{est_wt2}, we obtain \begin{equation*} \frac{d}{dt}\left( 2 \sum_{i=1}^N m_i \omega_i \theta_i +\sum_{i=1}^N d_i\theta_i^2 \right) + 2\mathcal{R}_0 a_{\ell} L_* N\|\theta-\theta_c\|^2 \leq 2\| \Omega\| \|\theta - \theta_c\| + 2 \sum_{i=1}^N m_i\omega_i^2. \end{equation*} Finally, we use the fact \[ \sum_{i=1}^N m_i \omega_i^2 \leq m_u\|\omega\|^2 \] to conclude \begin{align}\label{New-1} \begin{aligned} \frac{d}{dt}\left( \langle D\theta, \theta\rangle + 2 \langle M\theta, \omega \rangle\right) + 2\mathcal{R}_0 a_{\ell} L_* N\|\theta-\theta_c\|^2 \leq 2\| \Omega\| \|\theta - \theta_c\| + 2m_u\|\omega\|^2. \end{aligned} \end{align} $\bullet$ {\bf Step C.-} Taking \eqref{sumoveri.comm} $+\,\varepsilon\,\times$ \eqref{New-1} yields $$\begin{aligned} \frac{d}{dt} \mathcal{E}(t) + (d_\ell - 2\varepsilon m_u)&\|\omega\|^2 + \left( 2\varepsilon \mathcal{R}_0 a_\ell L_*N - \frac{a_u^2 N^2}{d_\ell}\right) \|\theta - \theta_c\|^2 \cr & \leq 2\max\{\varepsilon, 1\}\| \Omega\|\Big( \|\theta - \theta_c\| + \|\omega\|\Big). \end{aligned}$$ Then it follows from the condition on $\varepsilon > 0$ in \eqref{condi_e} that \[ \frac{d}{dt}\mathcal{E}(t) + C_\ell \mathcal{D}(t) \leq 2\max\{\varepsilon, 1\}\| \Omega\|\Big( \|\theta - \theta_c\| + \|\omega\|\Big), \quad \mbox{for}\,\,\,t\in [0,T_0]. \] This is the desired inequality and the proof is complete.
\end{proof} It follows from the definition of $\mathcal{D}[\theta,\omega]$ and Lemma \ref{equivalence} that \[ \mathcal{D}[\theta,\omega] \leq \|\omega\|^2 + \|\theta\|^2 \leq \frac{1}{C_0}\mathcal{E}[\theta,\omega], \] or equivalently, \[ C_0 \mathcal{D}[\theta,\omega] \leq \mathcal{E}[\theta,\omega]. \] However, we can easily see that the functional $\mathcal{E}[\theta,\omega]$ is not bounded from above by the dissipation rate $\mathcal{D}[\theta,\omega]$. In the case of uniform inertia and damping \cite{C-H-Y, C-L-H-X-Y}, applying a macro-micro decomposition if necessary, we can assume $\theta_c(t)=0$ for all $t \geq 0$, which implies that along the flow \eqref{Ku-iner-net2} we have \[ \frac{1}{2N}\sum_{1 \leq i,j \leq N}|\theta_i - \theta_j|^2 = \sum_{i=1}^N \theta_i^2, \quad \forall\, t>0. \] This immediately implies that $\mathcal{E}(t)$ is bounded from above by a constant multiple of $\mathcal{D}(t)$ uniformly in time, and thus the two are equivalent. Then we can derive a nice differential inequality for $\mathcal{E}(t)$ from \eqref{energy-0}, which enables us to obtain the uniform boundedness of the temporal energy functional $\mathcal{E}(t)$ under suitable initial configurations. However, in the current case with non-uniform parameters, the average quantity $\theta_c(t)$ is not conserved. As a consequence, the dissipation $\mathcal{D}(t)$ does not provide a damping effect for the energy functional $\mathcal{E}(t)$. In order to obtain a proper dissipation of energy, we introduce a modified energy functional $\widetilde{\mathcal{E}}$ as \[ \widetilde{\mathcal{E}}[\theta,\omega] :=\varepsilon\sum_{i=1}^N d_i (\theta_i-\theta_c)^2+ 2\varepsilon\sum_{i=1}^N m_i (\theta_i - \theta_c)\omega_i+ \sum_{i=1}^N m_i \omega_i^2\quad \mbox{with} \quad \theta_c = \frac1N \sum_{i=1}^N \theta_i. \] In the lemma below, we provide some relations between $\mathcal{E}$ and $\widetilde{\mathcal{E}}$, and between $\mathcal{D}$ and $\widetilde{\mathcal{E}}$.
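Although not needed for the proofs, the definitions above can be sanity-checked numerically: by construction $\widetilde{\mathcal{E}}[\theta,\omega]=\mathcal{E}[\theta-\theta_c,\omega]$, so replacing $\theta$ by $\theta-\theta_c$ in Lemma \ref{equivalence} bounds $\widetilde{\mathcal{E}}$ above and below by the dissipation rate $\mathcal{D}$. The following is a minimal sketch in Python (not the Matlab code used later); all parameter values, i.e., $N$, $m_i$, $d_i$, the state $(\theta,\omega)$, and the choice of $\varepsilon$, are illustrative assumptions rather than data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Illustrative (hypothetical) parameters: nonuniform inertia m_i > 0,
# damping d_i > 0, and an arbitrary state (theta, omega).
m = rng.uniform(0.5, 2.0, N)
d = rng.uniform(1.0, 3.0, N)
theta = rng.normal(size=N)
omega = rng.normal(size=N)

m_l, m_u, d_l, d_u = m.min(), m.max(), d.min(), d.max()
eps = 0.25 * d_l / (2.0 * m_u)  # any eps in (0, d_l / (2 m_u)) works

def energy(th, om):
    """Temporal energy E[theta, omega] from (energy-1)."""
    return (eps * np.sum(d * th**2)
            + 2.0 * eps * np.sum(m * th * om)
            + np.sum(m * om**2))

theta_c = theta.mean()
# Modified energy: E evaluated at the phase fluctuation theta - theta_c.
E_tilde = energy(theta - theta_c, omega)

# Dissipation rate D[theta, omega] = ||omega||^2 + ||theta - theta_c||^2.
D_rate = np.sum(omega**2) + np.sum((theta - theta_c)**2)

# Constants C_0, C_1 of Lemma `equivalence`; with theta replaced by
# theta - theta_c they give C_0 * D <= E_tilde <= C_1 * D.
C0 = min(m_l / 2.0, eps * d_l * (1.0 - 2.0 * eps * m_u / d_l))
C1 = max(3.0 * m_u / 2.0, eps * d_u * (1.0 + 2.0 * eps * m_u / d_l))

assert C0 > 0.0
assert C0 * D_rate <= E_tilde <= C1 * D_rate
```

Since the bounds of Lemma \ref{equivalence} are uniform in $(\theta,\omega)$, the check should pass for any positive $m_i$, $d_i$ and any admissible $\varepsilon$, not just the sampled values.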
\begin{lemma}\label{lemma_newenergy} (1) The functionals $\mathcal{E}$ and $\widetilde{\mathcal{E}}$ have the following relation: \begin{align}\label{diff_et} \begin{aligned} \widetilde{\mathcal{E}} = \mathcal{E} - 2\varepsilon \theta_s \theta_c + \varepsilon \,tr(D)\theta_c^2 - 2\varepsilon \omega_s \theta_c \,, \end{aligned} \end{align} where $\theta_s$ and $\omega_s$ are given as in \eqref{vs}. \newline (2) The functional $\widetilde{\mathcal{E}}$ and the dissipation rate $\mathcal{D}$ are equivalent: \begin{equation}\label{eqv_ed} C_0 \mathcal{D}[\theta,\omega] \leq \widetilde{\mathcal{E}}[\theta,\omega] \leq C_1 \mathcal{D}[\theta,\omega], \quad \forall\,\theta, \omega\in \mathbb R^N, \end{equation} where $C_0$ and $C_1$ are the positive constants given in Lemma \ref{equivalence}. \end{lemma} \begin{proof} (1) The relation between $\mathcal{E}$ and $\widetilde{\mathcal{E}}$ immediately follows from the definition of $\widetilde{\mathcal{E}}$: \begin{align*} \begin{aligned} \widetilde{\mathcal{E}} &= \mathcal{E} - 2\varepsilon\sum_{i=1}^N d_i \theta_i \theta_c + \varepsilon \sum_{i=1}^N d_i \theta_c^2 - 2\varepsilon \sum_{i=1}^N m_i \omega_i \theta_c\cr &= \mathcal{E} - 2\varepsilon \theta_s \theta_c + \varepsilon \,tr(D)\theta_c^2 - 2\varepsilon \omega_s \theta_c\,. \end{aligned} \end{align*} (2) Replacing $\theta$ by $\theta-\theta_c$ in Lemma \ref{equivalence} yields the desired estimate \begin{equation*}\label{est_equiv} C_0 (\|\theta - \theta_c\|^2+\|\omega\|^2) \leq \widetilde{\mathcal{E}}[\theta, \omega] \leq C_1 (\|\theta - \theta_c\|^2+\|\omega\|^2). \end{equation*} \end{proof} We now present the time-evolution of the modified energy functional \[ \widetilde{\mathcal{E}}(t):=\widetilde{\mathcal{E}}[\theta(t),\omega(t)].
\] Before we proceed, we first mention a conservation property which is important in the upcoming estimate. {\begin{lemma}\label{remarksum0-1} The sum of the weighted averages is conserved in time: \begin{equation}\label{eqderive0} \dot\theta_s+\dot\omega_s=0. \end{equation} \end{lemma}} \begin{proof} This follows immediately from \eqref{Ku-iner-net2}. In particular, here we used the restriction $\sum_{i=1}^N \Omega_{i}=0$. \end{proof} \begin{proposition}\label{prop_final} Let $D_0\in (0,\pi)$ and $\{\theta_i\}_{i=1}^N$ be any smooth solution to the system \eqref{Ku-iner-net2}. Suppose that \begin{equation*} a_u^2N^2(2m_u+\lambda)<d_\ell^2(2\mathcal R_0a_\ell L_*N-\lambda) \quad \mbox{with} \quad \lambda= \frac{\sqrt{tr({\hat D}^2)}}{\sqrt{N}} + \frac{2\sqrt{tr({\hat M}^2)}}{\sqrt{N}}, \end{equation*} and \begin{equation}\label{assumD} \max_{t\in [0,T_0]} \max_{1 \leq i, j \leq N} |\theta_i(t) - \theta_j(t) | \leq D_0, \end{equation} for some $T_0 > 0$. Then, for any $\varepsilon$ satisfying \begin{equation*} \frac{a_{u}^{2}N^2}{d_\ell(2\mathcal{R}_0a_\ell L_*N-\lambda) }<\varepsilon<\frac{d_\ell}{2m_u+\lambda}, \end{equation*} we have \begin{equation}\label{ineq_main} \frac{d}{dt}\widetilde{\mathcal{E}}(t) + \widetilde C_\ell \mathcal{D}(t) \leq \frac{2\sqrt{2}\max\{\varepsilon, 1\}\| \Omega\|}{\sqrt{C_0}} \sqrt{\widetilde{\mathcal{E}}(t)}, \quad \mbox{for}\,\,\,t\in [0, T_0], \end{equation} where $\widetilde C_\ell$ is a positive constant given by $ \widetilde C_\ell := C_\ell - \varepsilon\lambda.
$ Moreover, we have \begin{equation}\label{ineqmain1} \frac{d}{dt} \widetilde{\mathcal{E}}(t) + \frac{\widetilde C_\ell}{C_1} \widetilde{\mathcal{E}}(t) \leq \frac{2\sqrt{2}\max\{\varepsilon, 1\}\| \Omega\|}{\sqrt{C_0}} \sqrt{\widetilde{\mathcal{E}}(t)}, \quad \mbox{for}\,\,\,t\in [0, T_0]. \end{equation} \end{proposition} \begin{proof} It follows from Proposition \ref{EnergyEstimateLemma} and \eqref{diff_et} in Lemma \ref{lemma_newenergy} that $\widetilde{\mathcal{E}}$ satisfies \[ \frac{d}{dt}\widetilde{\mathcal{E}}(t) + C_\ell \mathcal{D}(t) \leq \underbrace{\frac{d}{dt} \left( \varepsilon \,tr(D)\theta_c^2 - 2\varepsilon \theta_s \theta_c - 2\varepsilon \omega_s \theta_c\right)}_{=:I} + \underbrace{2\max\{\varepsilon, 1\}\| \Omega\|\left( \|\theta - \theta_c\| + \|\omega\|\right)}_{=: J}. \] Using \eqref{eqderive0}, we rewrite $I$ as $$\begin{aligned} I &= 2\varepsilon\, tr (D)\theta_c \dot\theta_c - 2\varepsilon \dot\theta_s \theta_c - 2\varepsilon \theta_s \dot\theta_c - 2\varepsilon \dot\omega_s \theta_c - 2\varepsilon \omega_s \dot\theta_c\cr &= 2\varepsilon\, tr (D)\theta_c \dot\theta_c - 2\varepsilon \theta_s \dot\theta_c - 2\varepsilon \omega_s \dot\theta_c \quad \left(\because \dot\theta_s + \dot\omega_s = 0\right)\cr &= -2\varepsilon\dot\theta_c \sum_{i=1}^N d_i(\theta_i - \theta_c) -2\varepsilon\omega_s \dot\theta_c \quad \left(\because \theta_s = \sum_{i=1}^N d_i(\theta_i - \theta_c) + tr(D)\theta_c\right)\cr &= -2\varepsilon\omega_c \sum_{i=1}^N d_i(\theta_i - \theta_c) -2\varepsilon\omega_s \omega_c \quad \left( \because \dot\theta_c = \omega_c := \frac1N\sum_{i=1}^N \omega_i\right).
\end{aligned}$$ Note that \[ \sum_{i=1}^N d_i(\theta_i - \theta_c) = \sum_{i=1}^N \hat d_i(\theta_i - \theta_c) \quad \mbox{and} \quad \omega_s = \sum_{i=1}^N \hat m_i \omega_i + tr(M) \omega_c. \] This yields $$\begin{aligned} I &= -2\varepsilon\omega_c \sum_{i=1}^N \hat d_i(\theta_i - \theta_c) -2\varepsilon\left(\sum_{i=1}^N \hat m_i \omega_i + tr(M) \omega_c\right) \omega_c \cr &\leq -2\varepsilon\omega_c \sum_{i=1}^N \hat d_i(\theta_i - \theta_c) -2\varepsilon \omega_c \sum_{i=1}^N \hat m_i \omega_i. \end{aligned}$$ On the other hand, we find $$\begin{aligned} &\left|2\varepsilon\omega_c \sum_{i=1}^N \hat d_i(\theta_i - \theta_c) +2\varepsilon \omega_c \sum_{i=1}^N \hat m_i \omega_i\right|\cr &\quad =\left| \frac{2\varepsilon}{N}\left(\sum_{i=1}^N \omega_i \right)\left(\sum_{i=1}^N \hat d_i(\theta_i - \theta_c) \right)+ \frac{2\varepsilon}{N}\left(\sum_{i=1}^N \hat m_i \omega_i \right)\left(\sum_{i=1}^N \omega_i\right)\right|\cr &\quad\leq \frac{2\varepsilon}{N}\sqrt{N}\|\omega\| \sqrt{tr({\hat D}^2)}\|\theta - \theta_c\| + \frac{2\varepsilon}{N}\sqrt{tr({\hat M}^2)} \|\omega\| \sqrt{N}\|\omega\|\cr &\quad= \frac{2\varepsilon}{\sqrt{N}}\sqrt{tr ({\hat D}^2)}\|\omega\|\|\theta - \theta_c\| + \frac{2\varepsilon \sqrt{tr({\hat M}^2)}}{\sqrt{N}}\|\omega\|^2\cr &\quad\leq \varepsilon\left(\frac{\sqrt{tr({\hat D}^2)}}{\sqrt{N}}\|\omega\|^2 +\frac{\sqrt{tr({\hat D}^2)}}{\sqrt{N}} \|\theta-\theta_c\|^2\right)+ \frac{2\varepsilon\sqrt{tr({\hat M}^2)}}{\sqrt{N}} \|\omega\|^2 \cr &\quad\leq \varepsilon\left(\frac{\sqrt{tr({\hat D}^2)}}{\sqrt{N}} + \frac{2\sqrt{tr({\hat M}^2)}}{\sqrt{N}}\right) \mathcal{D}[\theta,\omega]. \end{aligned}$$ Thus, we have $$I\leq \varepsilon\left(\frac{\sqrt{tr({\hat D}^2)}}{\sqrt{N}} + \frac{2\sqrt{tr({\hat M}^2)}}{\sqrt{N}}\right) \mathcal{D}[\theta,\omega].
$$ For the estimate of $J$, we obtain $$\begin{aligned} J=2\max\{\varepsilon, 1\}\| \Omega\|\left( \|\theta - \theta_c\| + \|\omega\|\right) &\leq 2\sqrt{2}\max\{\varepsilon, 1\}\| \Omega\|\sqrt{ \|\theta - \theta_c\|^2 + \|\omega\|^2}\cr & \leq \frac{2\sqrt{2}\max\{\varepsilon, 1\}\| \Omega\|}{\sqrt{C_0}} \sqrt{\widetilde{\mathcal{E}}(t)}, \end{aligned}$$ where we used the elementary relation $a + b \leq \sqrt{2}\sqrt{a^2 + b^2}$ for $a,b \geq 0$ and Lemma \ref{lemma_newenergy} (2). We now combine the above estimates for $I$ and $J$ to see that, for $t\in[0, T_0]$, \[ \frac{d}{dt}\widetilde{\mathcal{E}}(t) + \left(C_\ell - \varepsilon\lambda \right)\mathcal{D}(t) \leq \frac{2\sqrt{2}\max\{\varepsilon, 1\}\| \Omega\|}{\sqrt{C_0}} \sqrt{\widetilde{\mathcal{E}}(t)}. \] This is the desired inequality \eqref{ineq_main}. Finally, the last inequality \eqref{ineqmain1} follows immediately from \eqref{eqv_ed} and \eqref{ineq_main}. \end{proof} \subsection{Proof of Theorem \ref{thm4}}\label{sec_main} For the sake of notational simplicity, we set \[ y(t) := \sqrt{\widetilde{\mathcal{E}}(t)}, \quad t \geq 0. \] Define \[ \mathcal{T} := \left\{ T \in \mathbb{R}_{+} : y(t)<\frac{\sqrt{C_0}}{2}D_0, \quad \forall\, t \in [0,T) \right\}, \quad {T}^* := \sup \mathcal{T}. \] Note that by the assumption \eqref{assumpB}, \[ y(0) < \frac{\sqrt{C_0}}{2}D_0. \] By the continuity of $y$, there exists $T>0$ such that $T\in \mathcal T$. We now claim that \begin{equation}\label{claim3}T^*=\infty.\end{equation} Suppose to the contrary that $T^*$ is finite. Then we must have \begin{equation} \label{eitheror}y(T^*)=\frac{\sqrt{C_0}}{2}D_0.
\end{equation} Note that on the interval $[0,T^*)$, we can derive $$\begin{aligned} \max_{1 \leq i,j \leq N}|\theta_i(t)-\theta_j(t)|^2 &\leq 4\max_{1 \leq i \leq N}|\theta_i(t) - \theta_c(t)|^2 \leq 4\sum_{i=1}^N|\theta_i(t) - \theta_c(t)|^2\cr & \leq 4\mathcal{D}(t)\leq \frac{4}{C_0}\widetilde{\mathcal{E}}(t)\cr & \leq \frac{4}{C_0}\left( \frac{\sqrt{C_0}}{2}D_0\right)^2 = D_0^2, \end{aligned}$$ which means that the condition \eqref{assumD} is fulfilled, and hence Proposition \ref{prop_final} can be applied. By \eqref{ineqmain1} we have \begin{equation}\label{EnergyMain} \frac{d y}{dt} \leq \frac{\sqrt{2}\max\{\varepsilon, 1\}\| \Omega\|}{\sqrt{C_0}} - \frac{\widetilde C_\ell}{2C_1} y, \quad \mbox{for} \quad t \in [0,T^*]. \end{equation} Note that the solution $y(t)$ to the differential inequality \eqref{EnergyMain} satisfies \[ y(T^*) \leq \max \left\{ y(0), \frac{2\sqrt{2}C_1\max\{\varepsilon, 1\}\|\Omega\|}{\widetilde C_\ell \sqrt{C_0}}\right\} < \frac{\sqrt{C_0}}{2}D_0, \] where we used the assumption \eqref{assumpB}. This contradicts \eqref{eitheror}, and the claim \eqref{claim3} is proved, i.e., \[ \widetilde{\mathcal{E}}(t) <\frac{C_0}{4}D_0^2, \qquad \forall~ t\geq0. \] This implies that \begin{equation}\label{diffbound} \max_{1 \leq i,j \leq N}|\theta_i(t)-\theta_j(t)|^2\leq 4\mathcal{D} (t) \leq \frac{4}{C_0}\widetilde{\mathcal{E}}(t)<D_0^2, \qquad \forall~ t\geq 0. \end{equation} On the other hand, we recall the relation \eqref{eqderive0} to get \[ \omega_s(t) + \theta_s(t) = \omega_s(0) + \theta_s(0), \qquad \forall~t \geq 0. \] This means that \[ |\theta_s(t)| \leq |\omega_s(t) + \theta_s(t)| + |\omega_s(t)| = |\omega_s(0) + \theta_s(0)| + |\omega_s(t)|,\quad \forall \,t\geq0.
\] We now use the fact that $\omega(\cdot) \in L^\infty(\mathbb R^+,\mathbb R^N)$ in Lemma \ref{lemmafreqbdd} to deduce \begin{equation}\label{weightsumbdd} |\theta_s(t)| \leq K_0, \quad \forall \, t\geq0, \end{equation} for some positive constant $K_0$. Combining the relations \eqref{diffbound} and \eqref{weightsumbdd}, we see that the trajectory $\theta(\cdot)$ is bounded as a function of time $t$. Thus, we obtain $\theta(\cdot)\in W^{1,\infty}(\mathbb R^+, \mathbb R^N)$ (see Remark \ref{remark1}), since $\dot\theta(\cdot)$ is bounded by Lemma \ref{lemmafreqbdd}. Finally, we apply Proposition \ref{convergencethm} to find that the system \eqref{Ku-iner-net2} asymptotically attains the phase-locked states. This completes the proof. \begin{remark} If, in addition, $D_0\leq\pi/2$, then the emergent phase-locked state must be confined in an arc with length less than $\pi/2$. Thus, the result in \cite[Theorem 3.1]{L-X-Y1} holds. Furthermore, by appealing to the approach in \cite{L-X-Y1} (see Step 2 in the proof of Theorem 2.1), we can derive that the convergence to the phase-locked states is exponentially fast. \end{remark} \begin{remark} In our approach, the function $\widetilde{\mathcal{E}}$ is not a physical energy, so it can be regarded as a virtual energy. This virtual energy functional is different from that in \cite{C-L-H-X-Y}, where the case of uniform inertia and damping was considered. Actually, the uniformity implies a nice property which allows the mean value of the phases to be assumed to be zero at all times, and this played an important role in that analysis. In the present work, this property is absent due to the non-uniform parameters; thus, we construct the new energy functional $\widetilde{\mathcal{E}}$ to overcome this difficulty.
\end{remark} \section{Numerical simulations} \setcounter{equation}{0} {In ${\bf (H2)}$ and ${\bf (H3)}$, the parameters $D_0$ and $\epsilon$ are chosen from certain open intervals, so the estimated region of attraction differs with different choices. As we see in \eqref{diffbound}, the constant $D_0$ is actually the range of the phases for the system. In the statement of Theorem \ref{thm4}, it is pre-assigned in $(0,\pi)$ and needs to fit \eqref{assume}. Its value affects the admissible range of $\epsilon$, the other constants and the right hand side of \eqref{assumpB}. On the other hand, the choice of $\epsilon$ affects the energy functional $\widetilde{\mathcal{E}}$ and the other constants. It would therefore be interesting to investigate the region of attraction for different ranges of phases and different energy functionals. In this section, we carry out some simulations to illustrate the influence of $D_0$ and $\epsilon$ on the estimated region, for a special setting. The conservativeness of our estimate is also illustrated.} {Our numerical simulations will be carried out by using Matlab. In order to show the region of attraction intuitively in a plane, we consider the simple case consisting of two oscillators. Then, the dynamics is given by \begin{align*} \begin{aligned} m_1\ddot{\theta}_1+d_1\dot{\theta}_{1}& = \Omega_{1} + a_{12} \sin(\theta_{2} - \theta_{1}),\\ m_2\ddot{\theta}_2+d_2\dot{\theta}_{2}& = \Omega_{2} + a_{21} \sin(\theta_{1} - \theta_{2}). \end{aligned} \end{align*} To reduce the dimension of the variables, we assume that the initial frequencies are determined by the initial phases in the following way: \[ d_1\omega_1(0)=\Omega_1+a_{12}\sin(\theta_2(0)-\theta_1(0)), \quad d_2\omega_2(0)=\Omega_2+a_{21}\sin(\theta_1(0)-\theta_2(0)).
\] Note that the dampings can be inhomogeneous, so the two-oscillator system cannot be written as a single equation for $\theta:=\theta_1-\theta_2$. We set the parameters $m_i$ and $d_i$ by using random data which are uniformly distributed as follows: \[m_i\in (0.10, 0.15), \quad d_i\in(0.30, 0.40),\] and set the symmetric coupling strength as $a_{12}=a_{21}=0.2.$ We have $L_*=1$.} \subsection{Varying $\epsilon$.} {In this part, we set the range of the phases as \[D_0=\pi/4,\] which fits the condition \eqref{assume}. Then we can calculate the parameters $\mathcal R_0, \lambda, D_0, C_0, C_1, C_\ell, \widetilde C_\ell$, and the interval of possible locations of the positive coefficient $\epsilon$ for the energy functional \[ \widetilde{\mathcal{E}}[\theta,\omega] :=\epsilon\sum_{i=1}^2 d_i (\theta_i-\theta_c)^2+ 2\epsilon\sum_{i=1}^2 m_i (\theta_i - \theta_c)\omega_i+ \sum_{i=1}^2 m_i \omega_i^2\quad \mbox{with} \quad \theta_c = \frac12 \sum_{i=1}^2 \theta_i. \] The natural frequencies $\Omega_i$, $i=1,2$, are randomly chosen as sufficiently small data which have mean 0 and satisfy the condition \eqref{assumpB}. Then we can finally illustrate the region of attraction in $[0,\pi] \times [0,\pi]$, which is shown in Fig. 1 (a). The region of attraction is marked by the dark color. For different choices of admissible coefficients $\epsilon$ satisfying {\bf (H3)}, we illustrate the boundary of the region in Fig. 1 (b). The different choices of $\epsilon$ are indicated by different colors. We observe that a smaller choice of $\epsilon$ produces a relatively larger region of attraction.} \begin{figure} \centering \subfigure[]{\includegraphics[scale=0.22]{ee2.png} } \subfigure[]{ \includegraphics[scale=0.22]{ee1.png} } \caption{(a): The region of attraction for a special choice of admissible $\epsilon$.
(b): The boundary of the region of attraction depending on $\epsilon$. } \end{figure} \subsection{Varying $D_0$} {In Fig. 2, we illustrate the estimated region of attraction for different choices of the constant $D_0\in (0,\pi)$, which needs to fit the condition \eqref{assume}. We choose 18 numbers in $(0,\pi)$: \[ \frac{\pi}{19},\, \frac{2\pi}{19},\,\frac{3\pi}{19},\, \dots, \frac{18\pi}{19}, \] and use the restriction \eqref{assume} to find out the admissible ones. A simple computation indicates that all numbers in $[ \frac{\pi}{19}, \frac{9\pi}{19}]$ fit the condition \eqref{assume}. Then we carry out the simulation using the admissible ones. In view of Fig. 1, we choose $\epsilon$ as the smallest one among the admissible choices of $\epsilon$. Fig. 2 shows the result depending on the values of $D_0$, which indicates that a larger choice of $D_0$ produces a larger region.} \begin{figure} \centering \subfigure[]{\includegraphics[scale=0.22]{dd2.png} } \subfigure[]{ \includegraphics[scale=0.3]{dd1.png} } \caption{(a): The region of attraction for a special choice of admissible $D_0$. (b): The boundary of the region of attraction depending on $D_0$. } \end{figure} \subsection{Conservativeness} {We acknowledge that our result is conservative in the sense that the framework is only sufficient for the phase-locking behavior; in particular, so is the presented estimate of the region of attraction. We carry out some simulations, see Fig. 3, to illustrate this. We use the same parameters as in the simulation for Fig. 1. For the initial phases, we chose $(\theta_1,\theta_2) = (3,1)$, which does not fit any region shown in Figs. 1-2. The employed numerical method is a classical fourth order Runge-Kutta one, using the built-in {\it ode45} Matlab command. The simulation in Fig.
3 shows that the frequencies synchronize at an exponential rate, so the phases converge to a phase-locked state. This suggests a future problem of improving the estimate of the region of attraction.} \begin{figure} \centering \subfigure[]{\includegraphics[scale=0.5]{f1.png} } \subfigure[]{ \includegraphics[scale=0.5]{f2.png} } \caption{(a): The evolution of $\omega_i$, $i=1,2$. (b): The evolution of $\log |\omega_1 - \omega_2|$.} \end{figure} \section{Conclusion} \setcounter{equation}{0} In this paper, we studied the synchronization and transient stability of power grids on connected networks with inhomogeneous dampings. As mentioned before, the central problem for transient stability is to identify the region of attraction of the synchronous states, which has rarely been considered. In \cite{L-X-Y1}, a special case of the power network model was considered: the damping is homogeneous and the underlying graph has a diameter less than or equal to 2. These assumptions are very restrictive for real applications to power grids. {Moreover}, the analysis in \cite{L-X-Y1}, based on the phase diameter, heavily relied on these assumptions and cannot be extended to general cases. In the present work, we employed the energy method to overcome this difficulty and obtained the desired estimate for this problem in the general case. {Simulations are provided to compare the different choices of the parameters $D_0$ and $\epsilon$, for a special setting of the simple network with two oscillators. In view of the potential application in engineering, the quantitative improvement of the estimate, including the parametric condition and the region of attraction, would be an interesting future problem. The heterogeneity of the parameters and/or the general connectivity mean that the method of studying the phase difference cannot work well, while our estimate gives a way to overcome these difficulties.
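As a companion to the simulations reported in Figs. 1-3, the two-oscillator experiment can be sketched as follows. This is a minimal Python sketch rather than the Matlab code used for the figures; the specific parameter draws, the fixed-step Runge-Kutta integrator, and the step size and time horizon are our own illustrative choices, while the model, the coupling $a_{12}=a_{21}=0.2$, the parameter ranges, and the initial data $(\theta_1,\theta_2)=(3,1)$ with frequencies determined by $d_i\omega_i(0)=\Omega_i+a\sin(\theta_j(0)-\theta_i(0))$ are taken from the text.

```python
import math

# Two-oscillator inertial Kuramoto model from the simulations section:
#   m_i * theta_i'' + d_i * theta_i' = Omega_i + a * sin(theta_j - theta_i).
# The values of m_i and d_i below are illustrative draws from the stated
# ranges m_i in (0.10, 0.15), d_i in (0.30, 0.40); a = 0.2 as in the paper.
m = [0.12, 0.14]
d = [0.35, 0.32]
a = 0.2
Omega = [0.01, -0.01]          # small natural frequencies with mean zero

def rhs(state):
    """Right-hand side for the first-order system (theta_1, theta_2, omega_1, omega_2)."""
    t1, t2, w1, w2 = state
    dw1 = (Omega[0] + a * math.sin(t2 - t1) - d[0] * w1) / m[0]
    dw2 = (Omega[1] + a * math.sin(t1 - t2) - d[1] * w2) / m[1]
    return [w1, w2, dw1, dw2]

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = rhs([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
            for s, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4)]

# Initial phases (3, 1) as in the conservativeness experiment (Fig. 3);
# initial frequencies chosen via d_i * omega_i(0) = Omega_i + a * sin(...).
theta0 = [3.0, 1.0]
omega0 = [(Omega[0] + a * math.sin(theta0[1] - theta0[0])) / d[0],
          (Omega[1] + a * math.sin(theta0[0] - theta0[1])) / d[1]]
state = theta0 + omega0

h, T = 0.01, 200.0
for _ in range(int(T / h)):
    state = rk4_step(state, h)

freq_gap = abs(state[2] - state[3])   # |omega_1 - omega_2| at the final time
```

Consistent with Fig. 3, the frequency gap decays to numerical zero even though the initial phases lie outside the estimated region of attraction, which illustrates the conservativeness of the sufficient condition.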
It is reasonable to expect that a refined energy functional and a better energy estimate would improve the current result.} \section*{Acknowledgments} Z. Li was supported by 973 Program (2012CB215201), National Nature Science Foundation of China (11401135), and the Fundamental Research Funds for the Central Universities (HIT.BRETIII.201501 and HIT.PIRS.201610). Y.-P. Choi was partially supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A6A3A03039496), Engineering and Physical Sciences Research Council (EP/K00804/1), and ERC-Starting grant HDSPCONTR ``High-Dimensional Sparse Optimal Control''. Y.-P. Choi is also supported by the Alexander von Humboldt Foundation through the Humboldt Research Fellowship for Postdoctoral Researchers. \begin{thebibliography}{10} \bibitem{A-B} {\sc J.~A. Acebron, L.~L. Bonilla, C.~J.~P. P\'{e}rez Vicente, F. Ritort, and R. Spigler}, {\em The Kuramoto model: A simple paradigm for synchronization phenomena}, Rev. Mod. Phys., {77} (2005), pp.~137-185. \bibitem{C} {\sc H.-D. Chiang}, {\em Direct Methods for Stability Analysis of Electric Power Systems}, Wiley, New York, 2011. \bibitem {C-C-C} {\sc H.-D. Chiang, C.~C. Chu, and G. Cauley}, {\em Direct stability analysis of electric power systems using energy functions: Theory, applications, and perspective}, Proc. IEEE, {83} (1995), pp.~1497-1529. \bibitem {C-W-V} {\sc H.-D. Chiang, F.~F. Wu and P.~P. Varaiya}, {\em Foundations of the potential energy boundary surface method for power system transient stability analysis}, IEEE Trans. Circuits Systems, {35} (1988), pp.~712-728. \bibitem {C-H-J-K} {\sc Y.-P. Choi, S.-Y. Ha, S. Jung, and Y. Kim}, {\em Asymptotic formation and orbital stability of phase-locked states for the Kuramoto model}, Physica D, {241} (2012), pp.~735-754. \bibitem{C-H-Y} {\sc Y.-P.
Choi, S.-Y. Ha, and S.-B. Yun}, {\em Complete synchronization of Kuramoto oscillators with finite inertia}, Physica D, {240} (2011), pp.~32-44. \bibitem{C-L-H-X-Y} {\sc Y.-P. Choi, Z. Li, S.-Y. Ha, X. Xue, and S.-B. Yun}, {\em Complete entrainment of Kuramoto oscillators with inertia on networks via gradient-like flow}, J. Diff. Eqs., {257} (2014), pp.~2591-2621. \bibitem{C-S} {\sc N. Chopra and M.~W. Spong}, {\em On exponential synchronization of Kuramoto oscillators}, IEEE Trans. Automatic Control, {54} (2009), pp.~353-357. \bibitem {D-B} {\sc F. D\"{o}rfler and F. Bullo}, {\em On the critical coupling for Kuramoto oscillators}, SIAM J. Appl. Dyn. Syst., {10} (2011), pp.~1070-1099. \bibitem{D-B-1} {\sc F. D\"{o}rfler and F. Bullo}, {\em Synchronization and transient stability in power networks and nonuniform Kuramoto oscillators}, SIAM J. Control Optim., {50} (2012), pp.~1616-1642. \bibitem{D-B-0} {\sc F. D\"{o}rfler and F. Bullo}, {\em Synchronization in complex oscillator networks: A survey}, Automatica, 50 (2014), pp.~1539-1564. \bibitem{D-B-2} {\sc F. D\"{o}rfler and F. Bullo}, {\em Kron reduction of graphs with applications to electrical networks}, IEEE Trans. Circuits Systems I: Regular Papers, {60} (2013), pp.~150-163. \bibitem{D-C-B} {\sc F. D\"{o}rfler, M. Chertkov, and F. Bullo}, {\em Synchronization in complex oscillator networks and smart grids}, Proc. Natl. Acad. Sci., {110} (2013), pp.~2005-2010. \bibitem{E} {\sc G.~B. Ermentrout}, {\em An adaptive model for synchrony in the firefly Pteroptyx malaccae}, J. Math. Biol., {29} (1991), pp.~571-585. \bibitem{F-N-P} {\sc G. Filatrella, A.~H. Nielsen, and N.~F. Pedersen}, {\em Analysis of a power grid using a Kuramoto-like model}, Eur. Phys. J. B, {61} (2008), pp.~485-491. \bibitem{F-R-C-M-R} {\sc V. Fioriti, S. Ruzzante, E. Castorini, E.
Marchei, and V. Rosato}, {\em Stability of a distributed generation network using the Kuramoto models}, in Critical Information Infrastructure Security, Lecture Notes in Comput. Sci., Springer, New York, 2009, pp.~14-23. \bibitem {H-J} {\sc A. Haraux and M.~A. Jendoubi}, {\em Convergence of solutions of second-order gradient-like systems with analytic nonlinearities}, J. Diff. Eqs., {144} (1998), pp.~313-320. \bibitem{H} {\sc C. Huygens}, {\em Horologium Oscillatorium}, Paris, France, 1673. \bibitem{J-M-B} {\sc A. Jadbabaie, N. Motee, and M. Barahona}, {\em On the stability of the Kuramoto model of coupled nonlinear oscillators}, Proceedings of the American Control Conference, Boston, Massachusetts, 2004. \bibitem{K} {\sc Y. Kuramoto}, {\em International symposium on mathematical problems in mathematical physics}, Lecture Notes Phys., {39} (1975), pp.~420. \bibitem{L-X-Y} {\sc Z. Li, X. Xue, and D. Yu}, {\em On the {\L}ojasiewicz exponent of Kuramoto model}, J. Math. Phys., 56 (2015), pp.~0227041:1-20. \bibitem{L-X-Y1} {\sc Z. Li, X. Xue, and D. Yu}, {\em Synchronization and transient stability in power grids based on {\L}ojasiewicz inequalities}, SIAM J. Control Optim., {52} (2014), pp.~2482-2511. \bibitem {Lo1} {\sc S. {\L}ojasiewicz}, {\em Une propri\'{e}t\'{e} topologique des sous-ensembles analytiques r\'{e}els}, in Les \'{E}quations aux D\'{e}riv\'{e}es Partielles, \'{E}ditions du Centre National de la Recherche Scientifique, Paris, 1963, pp.~87-89. \bibitem{M-H-K-S} {\sc P.~J. Menck, J. Heitzig, J. Kurths, and H.~J. Schellnhuber}, {\em How dead ends undermine power grid stability}, Nature Communications, {5} (2014), 3969. \bibitem{P-R-K} {\sc A. Pikovsky, M. Rosenblum, and J. Kurths}, {\em Synchronization: A universal concept in nonlinear sciences}, Cambridge University Press, Cambridge, 2001.
\bibitem{S} {\sc S.~H. Strogatz}, {\em From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators}, Physica D, {143} (2000), pp.~1-20. \bibitem{S-P} {\sc P.~W. Sauer and M.~A. Pai}, {\em Power system dynamics and stability}, Prentice-Hall, Englewood Cliffs, NJ, 1998. \bibitem{S-U-S-P} {\sc D. Subbarao, R. Uma, B. Saha, and M.~V.~R. Phanendra}, {\em Self-organization on a power system}, IEEE Power Engrg. Rev., {21} (2001), pp.~59-61. \bibitem{V-W} {\sc J.~L. van Hemmen and W.~F. Wreszinski}, {\em Lyapunov function for the Kuramoto model of nonlinearly coupled oscillators}, J. Stat. Phys., {72} (1993), pp.~145-166. \bibitem{V-W-C} {\sc P. Varaiya, F.~F. Wu, and R.~L. Chen}, {\em Direct methods for transient stability analysis of power systems: Recent results}, Proc. IEEE, {73} (1985), pp.~1703-1715. \bibitem {V-M} {\sc M. Verwoerd and O. Mason}, {\em A convergence result for the Kuramoto model with all-to-all coupling}, SIAM J. Appl. Dyn. Syst., {10} (2011), pp.~906-920. \bibitem {V-M2} {\sc M. Verwoerd and O. Mason}, {\em Global phase-locking in finite populations of phase-coupled oscillators}, SIAM J. Appl. Dyn. Syst., {7} (2008), pp.~134-160. \bibitem{Ward} {\sc J.~B. Ward}, {\em Equivalent circuits for power-flow studies}, Trans. Am. Inst. Electr. Eng., 68 (1949), pp.~373--382. \bibitem{W} {\sc A.~T. Winfree}, {\em Biological rhythms and the behavior of populations of coupled oscillators}, J. Theor. Biol., 16 (1967), pp.~15--42. \end{thebibliography} \end{document}
\begin{document} \centerline{\Large \bf Sensitivity analysis for HJB } \centerline{\Large \bf equations with an application to} \centerline{\Large \bf coupled backward-forward systems} \centerline{\bf Vassili Kolokoltsov\footnote{Department of Statistics, University of Warwick, Coventry, CV4 7AL, UK, [email protected]}, Wei Yang\footnote{Department of Mathematics and Statistics, University of Strathclyde, Glasgow, G1 1XH, UK, [email protected]}} \begin{abstract} In this paper, we analyse the Lipschitz continuous dependence of the solution to Hamilton-Jacobi-Bellman equations on a functional parameter. This sensitivity analysis is not only of interest in its own right, but is also important for the mean field games methodology, namely for solving a coupled system of backward-forward equations. We show that the unique solution to a Hamilton-Jacobi-Bellman equation and its spatial gradient are Lipschitz continuous uniformly with respect to the functional parameter. In particular, we provide verifiable criteria for the so-called feedback regularity condition. Finally, as an application, we show how the sensitivity results are used to solve the coupled system of backward-forward equations. \end{abstract} { \par\noindent \par\noindent {\bf Key words}: Hamilton-Jacobi-Bellman equation, HJB equation, sensitivity analysis, mean field games, feedback regularity } \section{Introduction and motivation} Partial differential equations are used to model multidimensional dynamical systems and to describe a wide variety of phenomena such as sound, heat, fluid flow and quantum mechanics. Sensitivity analysis for multidimensional dynamical systems governed by partial differential equations has attracted growing interest in recent years, since sensitivity results give us a certain understanding of the stability of dynamical systems and a prediction of their phase transitions, if any.
So far, sensitivity results have had a wide range of applications in science and engineering, including optimization, parameter estimation, model simplification, optimal control and experimental design. Recent progress in sensitivity analysis can be found, e.g., in \cite{S2005} for Burger's equation, \cite{RT2003} for the Navier-Stokes equation, \cite{AGMR2010, M2002, M2011,MT2000} for elliptic and parabolic equations, \cite{ Bai,Ko06a, ManNorrisBai} for nonlinear kinetic equations, and references therein. The work at hand contributes to the presently ongoing investigation of sensitivity analysis for partial differential equations and focuses on Hamilton-Jacobi-Bellman (HJB) equations. The Hamilton-Jacobi-Bellman equation is a central object of stochastic control theory. The solution to the HJB equation is the so-called {\it value function}, which gives the optimal payoff for a given stochastic dynamical system associated with a certain payoff function. In the context of pension savings management, Macov\'a and Sevcovic \cite{MS2010} studied the sensitivity of the solution to a HJB equation with respect to real parameters such as the percentage of salary transferred to a pension fund and the saver's risk aversion. In this paper, we study the sensitivity of the solution to HJB equations with respect to a {\it functional} parameter. This work is motivated by the study of the mean field games methodology, which is a recently developed research area that has attracted considerable attention. The mean field games methodology was developed independently by J.-M. Lasry and P.-L. Lions, see \cite{LL2006, GLL2010} and video lectures \cite{L}, and by M. Huang, R.P. Malham\'e and P. Caines, see \cite{HCM3, HCM07, HCM10,Hu10}.
Mean field games have demonstrated their strong power in solving complex dynamical systems and have been applied in many areas, including, to name a few, growth theory in economics \cite{LLG2010}, limit order book modelling in finance \cite{LLLL2014}, and environmental policy design \cite{LST2010}. See \cite{GS2014} for a survey on mean field games models. The core idea of the mean field games methodology is the characterisation and approximation of interacting $N$-player stochastic games by two coupled forward-backward equations, one of which is a HJB equation. Solving this system of two coupled equations requires a so-called {\it feedback regularity property} of the feedback control associated to the HJB equation, see e.g. condition $(37)$ in the proof of Theorem 10 of \cite{HCM3}. The feedback regularity property of a control strategy means that the control strategy depends Lipschitz continuously on a certain observable parameter; see condition \eqref{FBR} below for its mathematical description in the context of mean field games. However, the fact that in \cite{HCM3} this critical feedback regularity property was directly assumed to hold, without stating verifiable conditions for it, makes it impossible to apply this powerful tool to real life problems. In order to facilitate applications of mean field games, we are motivated to prove the required feedback regularity property and to provide verifiable conditions for it. This in turn motivates us to undertake the sensitivity analysis for HJB equations with respect to a functional parameter. Further, we associate HJB equations to a class of general Markov processes, in order to provide a unified framework where the sensitivity analysis for general HJB equations with respect to functional parameters is studied. This is the main aim of this paper.
This work makes two-fold contributions. First, it presents a unified framework in which the sensitivity analysis for general HJB equations with respect to a functional parameter is studied. For stochastic control theory, these sensitivity results promote the understanding of how an individual's optimal payoff depends on certain external observable functional parameters. These results have a wide range of applications, e.g. in financial markets, economics and marine ecology. Second, for the mean field games theory, we prove the required feedback regularity property and provide verifiable conditions for it. Our sensitivity results enable applications of mean field games to solving real complex systems. The organisation of this paper is as follows. Section \ref{Problem description} gives the detailed description and the derivation of the problem in question. Section \ref{Preliminaries} recalls some basic but heavily used concepts for solving the problem in question. These concepts include operators, propagators and variational derivatives. The main results of this paper are presented in Section \ref{Main results}. Proofs of these results are collected in the appendix section. In Section \ref{Application}, as an application, we apply these sensitivity results to the study of a mean field games model and give verifiable conditions for the feedback regularity property \eqref{FBR}. \section{Problem description} \label{Problem description} In this section, we present the problem we aim to study, starting with the description of a stochastic control problem with a general Markov dynamics. Let $\mathbf{C}:=C_\infty(\mathbf{R}^d)$ be the Banach space of bounded continuous functions $f:\mathbf{R}^d\to \mathbf{R}$ with $\lim_{x\rightarrow\infty}f(x)=0$, equipped with the norm $\|f\|_{\mathbf{C}}:=\sup_x|f(x)|$.
We shall denote by $\mathbf{C}^1:=C^1_\infty(\mathbf{R}^d)$ the Banach space of continuously differentiable and bounded functions $f:\mathbf{R}^d\to\mathbf{R}$ such that the derivative $f'$ belongs to $\mathbf{C}$, equipped with the norm $\|f\|_{\mathbf{C}^1}:=\sup_x|f(x)|+\sup_x |f'(x)|$, and by $\mathbf{C}^2:=C^2_\infty(\mathbf{R}^d)$ the Banach space of twice continuously differentiable and bounded functions $f:\mathbf{R}^d\to \mathbf{R}$ such that the first derivative $f'$ and the second derivative $f''$ belong to $\mathbf{C}$, equipped with the norm $\|f\|_{\mathbf{C}^2}:=\sup_x |f(x)|+\sup_x |f'(x)|+\sup_x |f''(x)|$. Let $\mathbf{C}_{Lip}:=C_{Lip}(\mathbf{R}^d)$ denote the space of Lipschitz continuous functions $f:\mathbf{R}^d\to \mathbf{R}$, equipped with the norm $\|f\|_{\mathbf{C}_{Lip}}:=\sup_x|f(x)|+\sup_{x,y}\frac{|f(x)-f(y)|}{|x-y|}$. \begin{remark} Note that $\mathbf{C}^2=C^2_\infty(\mathbf{R}^d)$ is a Banach space which is densely and continuously embedded in $\mathbf{C}=C_\infty(\mathbf{R}^d)$. Depending on the modelling assumptions, the Banach space $\mathbf{C}^2$ can be replaced by other function spaces, such as the space of H{\"o}lder continuous functions, and our methods can be applied in a similar way. \end{remark} Let $T>0$ be fixed and let $\mathcal{U}$ be a compact subset of a Euclidean space, interpreted as the set of admissible controls, with the Euclidean norm $|\cdot|$. Here (and in similar expressions), $\cdot$ denotes a dynamic variable. Take $\mathcal{M}$ to be a bounded, convex, closed subset of another Banach space $\mathbf{S}$, equipped with the norm $\|\cdot\|_{\mathbf{S}}$. In applications, very often the Banach space $\mathbf{S}$ is taken as the dual space $(\mathbf{C}^2)^*$ of $\mathbf{C}^2$ and the set $\mathcal{M}$ is taken as the set of probability measures on $\mathbf{R}^d$, denoted by $\mathcal{P}(\mathbf{R}^d)$. Now we introduce the stochastic dynamics associated to the stochastic control problem.
Specifically, the dynamics is described by a family of bounded linear operators \begin{equation}\label{A} \{A[t,\mu,u]: t\in[0,T],\mu\in \mathcal{M},u\in \mathcal{U}\} \end{equation} where for all $(t,\mu,u)\in [0,T]\times \mathcal{M}\times \mathcal{U}$, $A[t,\mu,u]: \mathbf{C}^2\to \mathbf{C}$. For each $(t,\mu,u)\in [0,T]\times \mathcal{M}\times \mathcal{U}$, the linear operator $A[t,\mu,u]$ is assumed to generate a Feller process with values in $\mathbf{R}^d$ and to be of the form \begin{equation} \label{fellergeneratorwithdriftcont} A[t,\mu,u]f(z)=(h(t,z,\mu,u), \nabla f(z)) + L[t,\mu] f(z), \end{equation} where the coefficient $h: [0,T]\times\mathbf{R}^d\times\mathcal{M}\times \mathcal{U}\to \mathbf{R}^d$ is a vector-valued function. For each pair $(t,\mu)\in [0,T]\times \mathcal{M}$, the bounded linear operator $L[t,\mu]:\mathbf{C}^2\to \mathbf{C}$ is of L\'evy-Khintchine form with variable coefficients: {\small \begin{equation} \begin{split} &L[t,\mu]f(z)=\frac{1}{2}(G(t,z,\mu)\nabla,\nabla)f(z)+ (b(t,z,\mu),\nabla f(z))\label{L}\\ +&\int_{\mathbf{R}^d} (f(z+y)-f(z)-(\nabla f (z), y){\bf 1}_{B_1}(y))\nu (t,z,\mu,dy) \end{split} \end{equation} } where $\nabla$ denotes the gradient operator and $(G(t,z,\mu)\nabla,\nabla)=\sum _{i,j}G_{i,j} (t,z,\mu)\frac{\partial ^2}{\partial z_i\partial z_j}$; ${\bf 1}_{B_1}$ denotes the indicator function of the unit ball in $\mathbf{R}^d$; for each $(t,z,\mu)\in [0,T]\times\mathbf{R}^d\times\mathcal{M}$, $G(t,z,\mu)$ is a symmetric non-negative matrix, $b(t,z,\mu)$ is a vector, and $\nu(t,z,\mu,\cdot)$ is a L\'evy measure on $\mathbf{R}^d$, i.e. \begin{equation} \label{condlevy0} \int_{\mathbf{R}^d} \min (1,|y|^2)\nu(t, z, \mu, dy) <\infty, \end{equation} with $\nu (t, z, \mu, \{0\})=0$.
We assume that the mappings $(t,z,\mu)\to G(t,z,\mu)$, $(t,z,\mu)\to b(t,z,\mu)$ and $(t,z,\mu)\to \nu(t,z,\mu, \cdot)$ are Borel measurable with respect to the Borel $\sigma$-algebra on $[0,T]\times\mathbf{R}^d\times\mathcal{M}$. \begin{remark} It is worth noting the special structure of the operator $A$ in \eqref{fellergeneratorwithdriftcont}. The operator $A$ depends on three parameters: the time $t$, the control $u$ and, unusually, a parameter $\mu\in\mathcal{M}$. Notice that the control parameter $u$ appears only in the drift coefficient $h$, but not in the operator $L$. In plain words, in the stochastic control problem one is only allowed to control the deterministic component of the dynamics. Here the operator $L$ in \eqref{L} models the noise, i.e. the stochastic component of the dynamics. The noise modeled by the L\'evy-Khintchine form operator $L$ in \eqref{L} includes Gaussian noise (see an example in Remark \ref{example of h and L}) and L\'evy stable noise (see an example in \eqref{eqstable}). \end{remark} Denote by $\{u.\}=\{u_t\in \mathcal{U}: t\in[0,T]\}$ the control process and by $C([0,T], \mathcal{M})$ the space of continuous curves $\{\mu.\}=\{\mu_t\in \mathcal{M}$, $t\in [0,T]\}$. For given $\{u.\}$ and $\{\mu.\}\in C([0,T], \mathcal{M})$, let $(X_t^{\{\mu.\},\{u.\}}: t\in[0,T])$ be a controlled stochastic process on a probability space $(\Omega, \mathcal{F}, \mathcal{P})$ with values in $\mathbf{R}^d$ and generated by the family of operators $\{A[t,\mu,u]: t\in[0,T],\mu\in \mathcal{M},u\in \mathcal{U}\}$ in \eqref{A} of the form \eqref{fellergeneratorwithdriftcont}. For notational brevity, in the following we write $(X_t: t\in[0,T])$ instead of $(X_t^{\{\mu.\},\{u.\}}: t\in[0,T])$. \begin{remark} \label{example of h and L} \begin{enumerate} \item A simple example of the function $h$ in \eqref{fellergeneratorwithdriftcont} is \[ h(t,z,\mu,u)=\int_{\mathbf{R}^d} a(t,z, y,u)\mu(dy)
\] with $a: [0,T]\times\mathbf{R}^d\times\mathbf{R}^d\times \mathcal{U}\to \mathbf{R}^d$. \item In the special case of \eqref{L} where $L[t,\mu]= \frac{1}{2}(G(t,z,\mu)\nabla,\nabla)$, the process associated with the operators \eqref{A} can also be described by the stochastic differential equation \[ dX_t=h(t,X_t, \mu_t, u_t)\, dt+\sigma (t,X_t, \mu_t) dW_t \] with $G(t, z,\mu)=\sigma (t, z,\mu)\sigma^T (t, z,\mu)$, $W_t$ being a standard Brownian motion in $\mathbf{R}^d$. \end{enumerate} \end{remark} Given the observable functional parameter $\{\mu.\}\in C([0,T], \mathcal{M})$, the objective is to maximize the expected total payoff \[ \mathbb{E} \left[ \int_t^T J(s,X_s,\mu_s,u_s) \, ds +V^T (X_T,\mu_T)\right] \] at any time $t\in[0,T]$ over a suitable class of controls $\{u_t\in\mathcal{U}, t\in[0,T]\}$, with a running cost function $J: [0,T] \times \mathbf{R}^d\times \mathcal{M}\times \mathcal{U} \to \mathbf{R}$ and a terminal cost function $V^T: \mathbf{R}^d\times \mathcal{M}\to \mathbf{R}$. Therefore, for a given parameter $\{\mu.\}\in C([0,T],\mathcal{M})$, the value function $V:[0,T]\times \mathbf{R}^d\to \mathbf{R}$ starting at time $t$ and state $x$ is defined by \begin{align}\label{valuefunction} &V(t,x; \{\mu.\}):=\\ &\sup_{\{u.\}} \mathbb{E}_x \left[ \int_t^T J(s,X_s,\mu_s,u_s) \, ds +V^T (X_T,\mu_T)\right].\notag \end{align} \begin{remark} This type of stochastic control problem with a functional parameter not only arises from the study of mean field games (see the more detailed discussion in \cite{KLY2012} and Section \ref{Application} of this paper), but also appears in financial markets. For example, in the context of designing optimal trading strategies in high frequency trading based on limit order book dynamics, the functional parameter $\{\mu.\}$ can be interpreted as the bid/ask price distribution.
\end{remark}
By standard arguments from the dynamic programming principle, and assuming appropriate regularity, the value function $V$ satisfies the Hamilton--Jacobi--Bellman (HJB) equation
\begin{align}\label{HJB_Mu}
\begin{split}
\displaystyle -\frac{\partial{V}}{\partial{t}}(t,x;\{\mu.\}) &= H_{t}(x,\nabla V(t,x;\{\mu.\}), \mu_t) \\
&\hspace{1.5em}+L[t,\mu_t]V(t,x;\{\mu.\})\\
V(T,x;\{\mu.\})&=V^T(x;\mu_T),
\end{split}
\end{align}
where the Hamiltonian $H:[0,T]\times \mathbf{R}^d\times\mathbf{R}^d\times \mathcal{M} \to \mathbf{R}$ is defined by
\begin{equation}\label{HJB H_Mu}
H_{t}(x,p, \mu)=\max_{u\in\mathcal{U}} ( h(t,x,\mu,u)p+J(t,x,\mu,u)).
\end{equation}
The aim of this paper is to investigate the sensitivity of the solution $V(\cdot,\cdot;\{\mu.\})$ of the HJB equation \eqref{HJB_Mu} with respect to the functional parameter $\{\mu.\}\in C([0,T], \mathcal{M})$. In fact, we prove in Section \ref{Main results} that the unique solution $V(\cdot,\cdot;\{\mu.\})$ of the HJB equation \eqref{HJB_Mu} and its spatial gradient $\nabla V(\cdot,\cdot;\{\mu.\})$ are Lipschitz continuous uniformly with respect to $\{\mu.\}$.
\section{Preliminaries}\label{Preliminaries}
In this section, we recall some concepts which are used in the remainder of the paper. Let $\mathbf{B}$ and $\mathbf{D}$ denote Banach spaces. For a function $F:\mathbf{D}\to\mathbf{B}$, its {\it variational derivative} $D_\chi F(\mu)$ at $\mu\in \mathbf{D}$ in the direction $\chi\in \mathbf{D}$ is defined as
\[
D_\chi F (\mu)=\lim_{s\to 0} \frac{F(\mu+s\chi)-F(\mu)}{s}
\]
if the limit exists. $F$ is said to be differentiable at $\mu\in\mathbf{D}$ if the limit exists for all $\chi\in\mathbf{D}$. At each point $\mu\in \mathbf{D}$, the derivative defines a function $D_. F(\mu):\mathbf{D}\to\mathbf{B}$.
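As a quick numerical illustration of the variational derivative (a sketch with hypothetical data, not taken from the paper), one can approximate $D_\chi F(\mu)$ by the difference quotient above for a simple nonlinear functional $F(\mu)=\big(\int a\, d\mu\big)^2$ of a discrete measure, for which $D_\chi F(\mu)=2\big(\int a\, d\mu\big)\big(\int a\, d\chi\big)$ exactly:

```python
import math

# Hypothetical discrete setting: measures are weight vectors over fixed
# grid points x_i, and a(x) is a test function.
xs = [0.0, 0.5, 1.0, 1.5]
a = [math.sin(x) for x in xs]

def integral(weights):          # computes the pairing ∫ a dμ for discrete μ
    return sum(w * ai for w, ai in zip(weights, a))

def F(weights):                 # nonlinear functional F(μ) = (∫ a dμ)^2
    return integral(weights) ** 2

mu  = [0.4, 0.3, 0.2, 0.1]      # a probability weight vector
eta = [0.1, 0.2, 0.3, 0.4]      # another one
chi = [e - m for e, m in zip(eta, mu)]   # direction χ = η - μ

def D_chi_F(weights, chi, s=1e-6):
    # difference quotient (F(μ + sχ) - F(μ)) / s for small s
    shifted = [w + s * c for w, c in zip(weights, chi)]
    return (F(shifted) - F(weights)) / s

# exact variational derivative for this particular F: 2 (∫ a dμ)(∫ a dχ)
exact = 2 * integral(mu) * integral(chi)
print(D_chi_F(mu, chi), exact)
```

The two printed numbers agree up to the $O(s)$ error of the one-sided difference quotient.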
Let ${\mathcal L}(\mathbf{D},\mathbf{B})$ denote the space of bounded linear operators from $\mathbf{D}$ to $\mathbf{B}$, equipped with the usual operator norm $\|\cdot\|_{\mathbf{D}\to\mathbf{B}}$. For the analysis of time non-homogeneous evolutions, we need the notion of a propagator. A family of mappings $\{U^{t,r}\}$ from $\mathbf{B}$ to $\mathbf{B}$, parametrized by the pairs of numbers $r\leq t$ (resp. $t\leq r$), is called a {\it (forward) propagator} (resp. a {\it backward propagator}) in $\mathbf{B}$, if $U^{t,t}$ is the identity operator in $\mathbf{B}$ for all $t\geq 0$ and the following {\it chain rule}, or {\it propagator equation}, holds for $r\leq s\leq t$ (resp. for $t\leq s\leq r$):
$$U^{t,s}U^{s,r}=U^{t,r}.$$
Sometimes the family $\{U^{t,r}, t\leq r\}$ is also called a {\it two-parameter semigroup}. A backward propagator $\{U^{t,r}, t\leq r\}$ of bounded linear operators on the Banach space $\mathbf{B}$ is called {\it strongly continuous} if for every $f\in\mathbf{B}$, the mappings
$$
t\mapsto U^{t,r}f \,\text{for all }t\leq r,\,\text{ and }\, r\mapsto U^{t,r}f \,\text{for all }t\leq r,
$$
are continuous as mappings from $\mathbf{R}$ to $\mathbf{B}$. By the principle of uniform boundedness, if $\{U^{t,r}, t\leq r\}$ is a strongly continuous propagator of bounded linear operators, then the norms of $\{U^{t,r}, t\leq r\}$ are uniformly bounded for $t,r$ in any compact interval. Assume that the Banach space $\mathbf{D}$ is a dense subset of $\mathbf{B}$ and continuously embedded in $\mathbf{B}$. Suppose $\{U^{t,r}, t\leq r\}$ is a strongly continuous backward propagator of bounded linear operators on a Banach space $\mathbf{B}$ with the common invariant domain $\mathbf{D}\subset \mathbf{B}$, i.e. if $f\in\mathbf{D}$ then $U^{t,r}f\in \mathbf{D}$ for all $t\leq r$. Let $\{L_t, t\geq0\}$ be a family of operators $L_t\in {\mathcal L}(\mathbf{D},\mathbf{B})$, depending continuously on $t$.
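The propagator equation can be checked in the simplest scalar setting (a hypothetical example, not from the paper): if $L_s$ is multiplication by a scalar $\lambda(s)$, then $U^{t,r}f=\exp\big(\int_t^r \lambda(s)\,ds\big)f$, and the chain rule follows from the additivity of the integral:

```python
import math

# Hypothetical scalar example: the generator L_s is multiplication by λ(s),
# so the backward propagator is U^{t,r} f = exp(∫_t^r λ(s) ds) · f.
def lam(s):
    return 1.0 + 0.5 * math.cos(s)

def integral_lam(t, r, n=10000):
    # midpoint-rule approximation of ∫_t^r λ(s) ds
    h = (r - t) / n
    return sum(lam(t + (k + 0.5) * h) for k in range(n)) * h

def U(t, r, f):
    return math.exp(integral_lam(t, r)) * f

f = 2.0
t, s, r = 0.0, 0.7, 1.5
lhs = U(t, s, U(s, r, f))   # U^{t,s} U^{s,r} f
rhs = U(t, r, f)            # U^{t,r} f
print(lhs, rhs)             # the chain rule: the two values coincide
```

Here $U^{t,t}f=f$ trivially, and `lhs` equals `rhs` up to the quadrature error of the midpoint rule.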
The family $\{L_t, t\geq0\}$ is said to {\it generate} $\{U^{t,r}, t\leq r\}$ on $\mathbf{D}$ if, for any $f\in \mathbf{D}$ and all $t\leq s\leq r$,
\begin{align} \label{Generates}
\frac{d}{ds}U^{t,s}f = U^{t,s}L_sf, \quad \frac{d}{ds}U^{s,r}f = -L_sU^{s,r}f.
\end{align}
The derivatives exist in the norm topology of $\mathbf{B}$; if $s=t$ (resp. $s=r$) they are understood as right (resp. left) derivatives. One often needs to estimate the difference of two propagators when the difference of their generators is available. To this end, we shall often use the following rather standard trick.
\begin{prop} \label{prop-propergatorProperty}
For $i=1,2$ let $\{L_t^i, t\geq0\}$ be a family of operators $L_t^i\in {\mathcal L}(\mathbf{D}, \mathbf{B})$, depending continuously on $t$, which generates a backward propagator $\{U_i^{t,r}, t\leq r\}$ in $\mathbf{B}$ satisfying
$$
a_1:=\sup_{t\leq r}\max\left\{\|U_1^{t,r}\|_{\mathbf{B}\to \mathbf{B}}, \|U_2^{t,r}\|_{\mathbf{B}\to \mathbf{B}} \right\}<\infty.
$$
If $\mathbf{D}$ is invariant under $\{U_1^{t,r}, t\leq r\}$ and
$$
a_2:=\sup_{t\leq r}\|U_1^{t,r}\|_{\mathbf{D}\to \mathbf{D}}<\infty,
$$
then, for each $t\leq r$,
\begin{equation} \label{PropergatorProperty }
U_2^{t,r}-U_1^{t,r}=\int_t^rU_2^{t,s}(L^2_s-L_s^1)U_1^{s,r}\,ds
\end{equation}
and
\begin{align}
&\|U_2^{t,r}-U_1^{t,r}\|_{\mathbf{D}\to \mathbf{B}} \label{PropergatorPropertya}\\
\le &a_1a_2(r-t) \sup_{t\leq s\le r}\|L^2_s-L_s^1 \|_{\mathbf{D}\to \mathbf{B}}.\notag
\end{align}
\end{prop}
\proof By (\ref{Generates}), we get
\begin{equation*}
\begin{split}
U_2^{t,r}-U_1^{t,r}&=U_2^{t,s}U_1^{s,r}\big|_{s=t}^r=\int_t^r\frac{d}{ds} \left(U_2^{t,s}U_1^{s,r} \right) ds\\
&=\int_t^r \left( U_2^{t,s} L_s^2 U_1^{s,r}-U_2^{t,s} L_s^1 U_1^{s,r} \right) ds\\
&=\int_t^rU_2^{t,s}(L^2_s-L_s^1)U_1^{s,r}\,ds,
\end{split}
\end{equation*}
which implies both \eqref{PropergatorProperty } and \eqref{PropergatorPropertya}.
\qed
\section{Main results}\label{Main results}
In this section, we first list the assumptions needed for the discussion. Based on these assumptions, we then show that for each fixed parameter $\{\mu.\}\in C([0,T], \mathcal{M})$, the HJB equation \eqref{HJB_Mu} is well posed, and we prove the Lipschitz continuous dependence of the solution of \eqref{HJB_Mu} on the functional parameter $\{\mu.\}\in C([0,T], \mathcal{M})$.
\subsection{Main assumptions}\label{assumptions}
For any $\mu\in\mathcal{M}$, define the set $\mathcal{M} - \mu:= \{\eta-\mu: \eta\in\mathcal{M}\}$, which, as a subset of $\mathbf{S}$, is equipped with the norm $\|\cdot\|_\mathbf{S}$. In the analysis below, we need the following assumptions:

{\bf (A1)}: the Hamiltonian $H:[0,T]\times \mathbf{R}^d\times\mathbf{R}^d\times \mathcal{M} \to \mathbf{R}$ is continuous in $t$ and Lipschitz continuous in $x$, uniformly for $p$ in bounded sets. Furthermore, it is Lipschitz continuous in $p$ uniformly in the other arguments, that is, there exists a constant $c_1$ such that for all $x\in \mathbf{R}^d$, $\mu \in \mathcal{M}$ and $t\in[0,T]$ we have for $p,p'\in \mathbf{R}^d$
\begin{equation} \label{eq1thweak_existencem}
|H_{t}(x,p,\mu)-H_{t}(x,p',\mu)|\leq c_1|p-p'|.
\end{equation}
It is bounded at $p=0$, that is, there exists a constant $c_2>0$ such that for all $x\in \mathbf{R}^d,\, \mu \in \mathcal{M},\, t\in[0,T]$
\begin{equation} \label{eq2thweak_existencem}
|H_{t}(x,0, \mu)|\leq c_2.
\end{equation}
For each $t\in [0,T]$ and $x,p\in\mathbf{R}^d$, the function $\mu\mapsto H_{t}(x,p, \mu)$ is differentiable in any direction $\chi\in\mathcal{M} - \mu$, such that $(t,x,p,\mu)\mapsto D_\chi H_{t}(x,p, \mu)$ is continuous and, for each bounded set $B\subset \mathbf{R}^d$, there exists a constant $c_3>0$ such that for all $t\in [0,T]$, $x\in \mathbf{R}^d$, $\mu\in \mathcal{M}$ and $\chi\in\mathcal{M}-\mu$
\begin{equation} \label{eqassumonder1}
\sup_{p\in B}|D_\chi H_{t}(x,p, \mu)|\le c_3\|\chi\|_{\mathbf{S}}.
\end{equation}

{\bf (A2)}: (i) the mapping
\begin{align*}
[0,T]\times \mathcal{M}\to {\mathcal L}(\mathbf{C}^2,\mathbf{C}), \qquad (t,\mu)\mapsto L[t,\mu]
\end{align*}
is continuous. For any $\{\mu.\} \in C([0,T], \mathcal{M})$, the operator curve $\{L[t,\mu_t]: t\in[0,T]\}$ generates a strongly continuous backward propagator $\{U^{t,s}_{\{\mu.\}},t\leq s\}$ of operators $U^{t,s}_{\{\mu.\}}\in {\mathcal L}(\mathbf{C},\mathbf{C})$ with the common invariant domains $\mathbf{C}^2$ and $\mathbf{C}^1$. There exists a constant $c_4>0$ such that for all $0\leq t\leq s\leq T$ we have
\begin{align}\label{BDD 2m}
\max &\left\{\|U^{t,s}_{\{\mu.\}}\|_{\mathbf{C}\to \mathbf{C}}, \|U^{t,s}_{\{\mu.\}}\|_{{\mathbf{C}^1}\to{\mathbf{C}^1}}, \|U^{t,s}_{\{\mu.\}}\|_{\mathbf{C}^2 \to\mathbf{C}^2}\right\}\notag\\
&\leq c_4.
\end{align}
The propagator has a {\it smoothing property}, that is, for each $0\leq t<s\leq T$ we have
\begin{equation}\label{smoothingm}
U^{t,s}_{\{\mu.\}}: \mathbf{C}\to \mathbf{C}^1, \quad U^{t,s}_{\{\mu.\}}: \mathbf{C}_{Lip}\to \mathbf{C}^2,
\end{equation}
and there exist a $\beta \in (0,1)$ and constants $c_5,c_6>0$ such that
\begin{align}
&\|{U}^{t,s}_{\{\mu.\}}\varphi\|_{\mathbf{C}^1} \leq c_5(s-t)^{-\beta}\|\varphi\|_{\mathbf{C}},\label{smooth property 2m-1}\\
&\|{U}^{t,s}_{\{\mu.\}}\psi\|_{\mathbf{C}^2} \leq c_6(s-t)^{-\beta} \|\psi\|_{\mathbf{C}_{Lip}}\label{smooth property 2m}
\end{align}
for all $\varphi\in \mathbf{C}$ and $\psi\in \mathbf{C}_{Lip}$.

(ii) for any $t\in[0,T]$, the mapping $\mu\mapsto L[t,\mu]$ is differentiable in any direction $\chi\in\mathcal{M}-\mu$, such that the mapping $\mu\mapsto D_\chi L[t,\mu]$ is continuous.
There exists a constant $c_7>0$ such that for each $\mu\in \mathcal{M}$ and $\chi\in \mathcal{M}-\mu$ we have, for all $t\in[0,T]$,
\begin{equation} \label{eqassumonder2}
\left\|D_\chi L[t,\mu]\right\|_{\mathbf{C}^2 \to\mathbf{C}}\leq c_7 \|\chi \|_{\mathbf{S}};
\end{equation}

{\bf (A3)}: for any $\mu \in \mathcal{M}$, the mapping $x\mapsto V^T(x; \mu)$ is twice continuously differentiable, and for each $x\in\mathbf{R}^d$ the mapping $\mu\mapsto V^T(x; \mu)$ is differentiable in any direction $\chi\in\mathcal{M}-\mu$ such that the mapping $(x,\mu)\mapsto D_\chi V^T(x;\mu)$ is continuous. There exists a constant $c_{8}>0$ such that
\begin{equation} \label{eqassumonder3}
\|D_\chi V^T(\cdot;\mu)\|_{\mathbf{C}^1}\le c_8 \|\chi\|_{\mathbf{S}}.
\end{equation}
\begin{remark}
\begin{enumerate}
\item If the Banach space $\mathbf{S}$ is the Euclidean space $\mathbf{R}$, then $D_{\chi}$ corresponds to the standard partial derivative and is denoted by $\partial/\partial \alpha$ for $\alpha\in\mathbf{R}$.
\item The assumptions {\bf (A1)}, {\bf (A2)} and {\bf (A3)} are stated in an abstract setting. They are made concrete in Example \ref{example assumptions} in Section \ref{Application}, in an application to mean field games.
\end{enumerate}
\end{remark}
The smoothing conditions \eqref{smoothingm}, \eqref{smooth property 2m-1} and \eqref{smooth property 2m} in assumption {\bf (A2)} are essential for the following analysis. Let us give two basic examples which satisfy assumption {\bf (A2)}. The first is the diffusion operator
{\small
\begin{align}
&L[t,\mu]f(x)\notag\\
=&\frac{1}{2}(\sigma ^2(t,x,\mu) \nabla,\nabla)f(x)+(b(t,x,\mu), \nabla f)(x)\label{eqdifpr}
\end{align}
}
with smooth enough coefficients $b,\sigma$; see e.g. \cite{PorEid84} and references therein.
The operators $\{L[t,\mu],t\in[0,T]\}$ generate the stochastic process $(X_t, t\in[0,T])$ which obeys the stochastic differential equation
\[
dX_t = b(t, X_t, \mu_t)\,dt + \sigma(t, X_t, \mu_t)\,dW_t,
\]
where $W$ is a standard Brownian motion. The second example is given by stable-like processes with the generating family
\begin{align}
&L[t,\mu]f(x)\notag\\
=&a(t,x) |\Delta|^{\alpha (x)/2}f(x)+(b(t,x,\mu), \nabla f)(x)\label{eqstable}
\end{align}
with smooth enough functions $a, \alpha$ such that the range of $a$ is a compact interval of positive numbers and the range of $\alpha$ is a compact subinterval of $(1,2)$. In both cases, each operator $U_\mu^{t,s}$, $t\leq s$, has a kernel, i.e. it is given by
\begin{equation}
U_\mu^{t,s} f(x)= \int G_\mu(t,s,x,y) f(y)\, dy
\end{equation}
with a certain Green's function $G_\mu$, such that for every $x\in\mathbf{R}^d$ and $t\leq s$,
\begin{equation}
\sup_{\mu\in\mathcal{M}}\int_{\mathbf{R}^d} |\nabla_x G_\mu(t,s,x,y) |\, dy\leq c(s-t)^{-\beta}
\end{equation}
for a constant $c>0$. For the case of standard Brownian motion (i.e. $L[t,\mu]f(x)=\frac{1}{2}\Delta f(x)$), the kernel is explicit and the estimate holds with $\beta=\frac{1}{2}$; for general diffusion processes \eqref{eqdifpr}, the function $G$ and its derivative are bounded by a Gaussian kernel and its derivative, again with $\beta=\tfrac{1}{2}$; for the stable-like processes \eqref{eqstable}, the function $G$ and its derivative are bounded by a stable density and its derivative, with $\beta=(\inf_x \alpha(x))^{-1}$. This implies \eqref{smooth property 2m-1}; the estimate \eqref{smooth property 2m} can be obtained in a similar but lengthier manner by differentiation with respect to $x$; see the details in \cite{VK2} and references therein.
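For the one-dimensional heat propagator ($L=\frac12\Delta$) the kernel bound can be made fully explicit and checked numerically: the kernel is the Gaussian $G(t,s,x,y)=e^{-(x-y)^2/2\tau}/\sqrt{2\pi\tau}$ with $\tau=s-t$, and $\int|\partial_x G|\,dy=\sqrt{2/(\pi\tau)}$, i.e. the smoothing estimate with $\beta=\tfrac12$ and $c=\sqrt{2/\pi}$. A sketch (the quadrature parameters are ad hoc choices):

```python
import math

def grad_kernel_l1(tau, n=20000, cutoff=10.0):
    # midpoint quadrature of ∫ |∂_x G(x, y)| dy at x = 0, where G is the
    # heat kernel with variance τ; ∂_x G = ((y - x)/τ) G
    h = 2 * cutoff * math.sqrt(tau) / n
    total = 0.0
    for k in range(n):
        y = -cutoff * math.sqrt(tau) + (k + 0.5) * h
        g = math.exp(-y * y / (2 * tau)) / math.sqrt(2 * math.pi * tau)
        total += abs(y / tau) * g * h
    return total

# τ^β · ∫|∇_x G| dy should be the constant sqrt(2/π) ≈ 0.7979 for β = 1/2
for tau in (0.1, 0.5, 2.0):
    print(tau, grad_kernel_l1(tau) * math.sqrt(tau))
```

The printed values are constant in $\tau$, confirming the $(s-t)^{-1/2}$ blow-up rate.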
\subsection{Well-posedness of HJB equations}
In this subsection, we prove the well-posedness of the HJB equation \eqref{HJB_Mu} for any given $\{\mu_t:t\in [0,T]\}\in C([0,T],\mathcal{M})$. For this purpose, we suppress the explicit dependence of the functions $H, L, V^T$ on the parameter $\mu\in\mathcal{M}$ and encode their dependence on $\mu_t$ through the time $t$. Specifically, we consider the general Cauchy problem
\begin{align}\label{G_HJB}
\begin{split}
-\frac{\partial{V}}{\partial{t}}(t,x) &= H_t(x,\nabla V(t,x)) +L_tV(t,x)\\
V(T,x)&=V^T(x)
\end{split}
\end{align}
with the Hamiltonian $H:[0,T]\times\mathbf{R}^d\times\mathbf{R}^d\to \mathbf{R}$ defined by
\begin{equation}\label{HJB 2b}
H_t(x,p)=\max_{u\in\mathcal{U}} ( h(t,x,u)p+J(t,x,u))
\end{equation}
and the operator $L_t: \mathbf{C}^2\to \mathbf{C}$ for each $t\in [0,T]$. Notice that the well-posedness of \eqref{G_HJB} immediately implies the well-posedness of \eqref{HJB_Mu} for each fixed $\{\mu_t:t\in [0,T]\}\in C([0,T],\mathcal{M})$ under the same conditions. By Duhamel's principle (cf. \cite{E2010}), if $V$ is a classical solution of \eqref{G_HJB}, then $V$ is also a {\it mild solution} of \eqref{G_HJB}, i.e. it satisfies
{\small
\begin{align}
V(t,x) = &(U^{t,T} V^T(\cdot))(x) \notag\\
&+ \int_t^T U^{t,s}H_s(\,\cdot\,,\nabla V(s,\cdot))(x)\, ds\label{mild_value_function}
\end{align}
}
for all $t\in [0,T]$ and $x\in\mathbf{R}^d$. For the sensitivity analysis of this work, it is sufficient to consider only mild solutions. For this reason, we establish the existence of a unique mild solution. In this subsection, we mostly follow Chapter 7 in \cite{VK2}, where one can also find the details for the existence of a classical solution. We present this result for completeness, at the level of generality required by what follows.
\begin{theorem}
\label{weak_existence}
Assume conditions {\bf (A1)} and {\bf (A2)}.
If the terminal data $V^T(\cdot)$ is in $\mathbf{C}^1$, then there exists a unique mild solution $V$ of \eqref{G_HJB}, satisfying $V(t,\cdot)\in \mathbf{C}^1$ for all $t\in[0,T]$.
\end{theorem}
\proof See the proof in Appendix \ref{Proof to the Theorem weak_existence }. \qed

By the well-posedness of equation \eqref{mild_value_function}, its solution defines a propagator in $\mathbf{C}^1$. The regularising term $L_tV(t,x)$ in the HJB equation \eqref{G_HJB} allows us to obtain a solution more regular than a mild one. Standard arguments, see e.g. \cite{FlSo}, show that this mild solution is a viscosity solution of the original equation \eqref{G_HJB} and that it solves the corresponding optimization problem.
\subsection{Sensitivity analysis for HJB equations}\label{Sensitivity analysis of HJB }
In this subsection, we analyse the dependence of the solution of the HJB equation \eqref{HJB_Mu} on the functional parameter $\{\mu.\}\in C([0,T],\mathcal{M})$. Under the conditions {\bf (A1)} and {\bf (A2)}, Theorem \ref{weak_existence} guarantees the existence of a unique mild solution $V(\cdot,\cdot; \{\mu.\})$ of \eqref{HJB_Mu} for each fixed $\{\mu.\}\in C([0,T],\mathcal{M})$. The following observation plays an important role in this work. Let $\{\mu^1_.\},\{\mu^2_.\}$ be in $C([0,T], \mathcal{M})$ and let $\alpha \in [0,1]$. Since $\mathcal{M}$ is convex, the curve
$$\{\mu^1_.\}+\alpha \{(\mu^2-\mu^1)_.\}:=\{\mu^1_t+\alpha (\mu^2_t-\mu^1_t), t\in[0,T]\}$$
belongs to $C([0,T],\mathcal{M})$. Thus, we can define the function $V_\alpha:[0,T]\times\mathbf{R}^d\to \mathbf{R}$ by
\begin{equation}\label{Valpha}
V_\alpha(t,x):=V\big(t,x; \{\mu^1_.\}+\alpha \{(\mu^2-\mu^1)_.\}\big)
\end{equation}
and obtain the relation
\begin{align}
&V(t,x; \{\mu^2_.\})-V(t,x; \{\mu^1_.\})\notag\\
=&V_{1}(t,x)-V_{0}(t,x).\label{estimate}
\end{align}
Furthermore, for $\{\mu^1_.\},\{\mu^2_.\}\in C([0,T], \mathcal{M})$ define
\begin{align}\label{Halpha}
H_{\alpha,t}(x,p) :&=H_{t}(x,p, \mu^1_t+\alpha (\mu^2_t-\mu^1_t)) \\
&=\max_{u\in\mathcal{U}} ( h(t,x,\mu^1_t+\alpha (\mu^2_t-\mu^1_t),u)p\notag\\
&\hspace{1em}+J(t,x,\mu^1_t+\alpha (\mu^2_t-\mu^1_t),u)),\notag
\end{align}
\begin{equation}\label{Lalpha}
L_{\alpha}[t]:=L[t, \mu^1_t+\alpha (\mu^2_t-\mu^1_t)]
\end{equation}
and
\begin{equation}\label{VTalpha}
V_{\alpha}^T(x):=V^T(x;\mu^1_T+\alpha (\mu^2_T-\mu^1_T))
\end{equation}
with $\alpha\in[0,1]$, $t\in[0,T]$ and $(x,p)\in\mathbf{R}^{d}\times\mathbf{R}^d$. Then the sensitivity analysis of the solution of \eqref{HJB_Mu} with respect to the functional parameter $\{\mu.\}\in C([0,T],\mathcal{M})$ can be reduced to that of the solution of the following Cauchy problem with respect to a real parameter $\alpha\in[0,1]$:
\begin{align}
\frac{\partial{V_\alpha}}{\partial{t}}(t,x)& =-H_{\alpha,t}(x, \nabla V_\alpha(t,x))- L_{\alpha}[t]V_\alpha (t,x)\notag\\
V_\alpha(T,x)&=V_\alpha^T(x).\label{V_alpha}
\end{align}
The sensitivity analysis with respect to $\alpha\in [0,1]$ consists of two steps. In the first step, we omit the Hamiltonian term in \eqref{V_alpha} and only consider the sensitivity of the evolution $V_{\alpha}(t,\cdot)=U^{t,T}_{\alpha} V^T_{\alpha}(\cdot)$, where for each $\alpha\in [0,1]$, the propagator $\{U_{\alpha}^{t,s}:t\leq s\}$ is generated by the family of operators $\{L_{\alpha}[t]: t\in[0,T]\}$.
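As a toy illustration of this first step (a hypothetical scalar example, not from the paper: $L_\alpha[t]$ is multiplication by $-(1+\alpha)$, so $U_\alpha^{t,T}\varphi=e^{-(1+\alpha)(T-t)}\varphi$, and $V^T_\alpha=1+\alpha/2$), the map $\alpha\mapsto W_\alpha(t)=U_\alpha^{t,T}V^T_\alpha$ is explicitly differentiable and Lipschitz:

```python
import math

# Hypothetical scalar sketch: U_α^{t,T} φ = exp(-(1+α)(T-t)) φ, V^T_α = 1 + α/2
T = 1.0

def U(alpha, t, phi):
    return math.exp(-(1.0 + alpha) * (T - t)) * phi

def VT(alpha):
    return 1.0 + alpha / 2.0

def W(alpha, t):
    return U(alpha, t, VT(alpha))

# Lipschitz continuity in α: a difference quotient stays bounded
t = 0.25
a1, a2 = 0.2, 0.8
quotient = abs(W(a1, t) - W(a2, t)) / abs(a1 - a2)

# Analytic derivative: ∂W/∂α = e^{-(1+α)(T-t)} (1/2 - (T-t)(1 + α/2));
# the first term comes from ∂V^T/∂α, the second from differentiating U in α
def dW(alpha, t):
    return math.exp(-(1.0 + alpha) * (T - t)) * (0.5 - (T - t) * (1 + alpha / 2))

fd = (W(0.5 + 1e-6, t) - W(0.5, t)) / 1e-6   # finite-difference check
print(quotient, fd, dW(0.5, t))
```

The two derivative terms in `dW` mirror the two contributions (terminal data and propagator) that appear in the general analysis below.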
This step not only serves as an intermediate step towards our full scheme, but also unveils some interesting observations, including the formula \eqref{smooth property 6} for the differentiation of an operator with respect to a parameter. In the second step, we include the Hamiltonian term and complete the analysis.
\begin{theorem}
\label{Sensitivity 1}
Assume conditions {\bf (A2)} and {\bf (A3)}. Define $W:[0,1]\times [0,T]\times \mathbf{R}^d\to\mathbf{R}$ by
\begin{equation}\label{smooth property 3}
W_{\alpha}(t,x)=U^{t,T}_{\alpha} V^T_{\alpha}(x).
\end{equation}
Then for each $t\in [0,T]$ and $x\in\mathbf{R}^d$, the mapping $\alpha\mapsto W_\alpha(t,x)$ is Lipschitz continuous with uniformly bounded Lipschitz constants; more precisely, there exists a constant $c>0$ such that for every $\alpha_1, \alpha_2\in[0,1]$ with $\alpha_1\neq \alpha_2$ and every $t\in[0,T]$
{\small
\begin{align*}
&\frac{\|W_{\alpha_1}(t,\cdot)-W_{\alpha_2}(t,\cdot)\|_{\mathbf{C}^1}}{|\alpha_1-\alpha_2|}\notag\\
\leq& c(T-t)^{1-\beta} \underset{\begin{subarray}{c} s\in[t,T] \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\| \frac{\partial L_\gamma}{\partial \alpha}[s]\right\|_{\mathbf{C}^2\to\mathbf{C}}\|V^T_{\alpha_2}(\cdot)\|_{\mathbf{C}^2}\\
&+ c \sup_{\gamma\in[\alpha_1,\alpha_2]}\left\|\frac{\partial V_\gamma^T}{\partial \alpha}\right\|_{\mathbf{C}^1}.
\end{align*}
}
\end{theorem}
\proof See the proof in Appendix \ref{Proof to the Theorem Sensitivity 1}. \qed
\begin{remark}
To clarify the notations $\alpha$ and $\gamma$ used in the derivative $\partial {V_\gamma^T}/{\partial \alpha}$: the subscript in $V^T_\alpha$ indicates that $\alpha$ is the variable of the function $V^T$, and $\frac{\partial}{\partial \alpha}$ denotes differentiation with respect to the variable $\alpha$.
The value $\gamma$ is the point at which the derivative is evaluated: $\frac{\partial V_\gamma^T}{\partial \alpha}$ denotes the derivative of the function $V^T$ with respect to $\alpha$ evaluated at $\alpha =\gamma$.
\end{remark}
In this work, we are only concerned with the Lipschitz continuity of the solution of the HJB equation with respect to the parameter. However, it is also interesting to know whether the mapping $\alpha\mapsto W_{\alpha}(t,\cdot)$ is differentiable for each $t\in [0,T]$. For completeness, the next proposition shows the existence of the derivative $\frac{\partial W_{\alpha}}{\partial \alpha}(t,\cdot)$ in $\mathbf{C}$ and presents its explicit expression.
\begin{prop}
\label{prop}
Assume conditions {\bf (A2)} and {\bf (A3)}. Then
\begin{enumerate}
\item[{\rm (i)}] for each $0\leq t<s\leq T$, the mapping $\alpha\mapsto U_\alpha^{t,s}$ is differentiable and the derivative $\frac{\partial U_\alpha^{t,s}}{\partial \alpha }$ has the representation
\begin{equation} \label{smooth property 6}
\frac{\partial U_\alpha^{t,s}}{\partial \alpha} =\int_t^s {U}^{t,r}_{\alpha} \frac{\partial{L_\alpha}}{\partial{\alpha}}[r] U_\alpha^{r,s} \, dr.
\end{equation}
\item[{\rm (ii)}] for each $t\in[0,T]$, the mapping $\alpha\mapsto W_\alpha(t,\cdot)$ defined in \eqref{smooth property 3} is differentiable as a function from $[0,1]$ to $\mathbf{C}$ and the partial derivative $\tfrac{\partial W_\alpha}{\partial \alpha}(t,\cdot)$ can be represented as
\begin{equation} \label{smooth property 5}
\frac{\partial W_\alpha}{\partial \alpha} (t,\cdot) = {U}_\alpha^{t,T} \frac{\partial V^T_\alpha}{\partial \alpha} (\cdot) +\frac{\partial U_\alpha^{t,T}}{\partial \alpha}V^T_\alpha(\cdot).
\end{equation}
\end{enumerate}
\end{prop}
\proof See the proof in Appendix \ref{Proof for the Proposition prop}. \qed
\begin{remark}
If, for each $0\leq t<s\leq T$, the mapping $\alpha\mapsto U_{\alpha}^{t,s}$ were continuous from $[0,1]$ to $\mathcal{L}(\mathbf{C}, \mathbf{C}^1)$, so that $\lim_{\alpha_1\to \alpha}U_{\alpha_1}^{t,s}$ exists as an operator from $\mathbf{C}$ to $\mathbf{C}^1$, then the mapping $\alpha\mapsto W_\alpha(t,\cdot)$ would be differentiable as a function from $[0,1]$ to $\mathbf{C}^1$.
\end{remark}
\begin{theorem}
\label{Sensitivity 2}
Assume the conditions {\bf (A1), (A2)} and {\bf (A3)}. Then the following statements hold:
\begin{enumerate}
\item[{\rm (a)}] For any $T>0$, the mild solution $V_\alpha$ of \eqref{V_alpha} is Lipschitz continuous with respect to $\alpha$, i.e. there exists a constant $c=c(T)>0$ such that for each $\alpha_1,\alpha_2\in[0,1]$ with $\alpha_1\neq \alpha_2$,
{\small
\begin{align}
&\sup_{t\in[0,T]}\frac{\| V_{\alpha_1}(t,\cdot)-V_{\alpha_2}(t,\cdot)\|_{\mathbf{C}^1}}{ |\alpha_1-\alpha_2|}\label{T10}\\[.4em]
&\hspace{-1cm} \leq c\Bigg( \sup_{\gamma\in[\alpha_1,\alpha_2]}\left\|\frac{\partial{V^T_\gamma}}{\partial{\alpha}}(\cdot)\right\|_{\mathbf{C}^1} +\underset{\begin{subarray}{c} (t,p)\in\mathcal{O} \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\|\frac{\partial H_{\gamma,t}(\cdot,p)}{\partial \alpha}\right\|_{\mathbf{C}}\notag\\[.4em]
&\hspace{-1cm}\hspace{1em}+ \underset{\begin{subarray}{c} t\in[0,T] \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\|\frac{\partial L_\gamma}{\partial \alpha}[t]\right\|_{\mathbf{C}^2\to \mathbf{C}} \left(\left\|V_{\alpha_2}^T(\cdot)\right\|_{\mathbf{C}^2}+1\right)\Bigg),\notag
\end{align}
}
where $\mathcal{O}=\{(t,p): t\in[0,T],|p|\leq
\sup_{t\in[0,T]}\|V_\alpha(t,\cdot)\|_{\mathbf{C}^1}\}$.
\item[{\rm (b)}] The mild solution $V$ of \eqref{HJB_Mu} and its spatial derivative $\nabla V$ are Lipschitz continuous uniformly with respect to $\{\mu.\}$, that is, for each $\{\mu^1_.\},\{\mu^2_.\}\in C([0,T], \mathcal{M})$, there exists a constant $k>0$ such that
\begin{align}\label{V Lip}
&\sup_{t\in[0,T]}\|V(t,\cdot; \{\mu^1_.\})-V(t,\cdot; \{\mu^2_. \})\|_{\mathbf{C}^1}\notag\\
&\leq k \sup_{t\in [0,T]}\|\mu_t^1-\mu_t^2\|_{\mathbf{S}}
\end{align}
and
\begin{align}\label{V Lip_2}
&\sup_{t\in[0,T]}\|\nabla V(t,\cdot; \{\mu^1_.\})-\nabla V(t,\cdot; \{\mu^2_.\})\|_{\mathbf{C}}\notag\\
&\leq k \sup_{t\in [0,T]}\|\mu_t^1-\mu_t^2\|_{\mathbf{S}}.
\end{align}
\end{enumerate}
\end{theorem}
\proof See the proof in Appendix \ref{Proof for the Theorem Sensitivity 2}. \qed
\section{An application to mean field games}\label{Application}
In this section, we apply the sensitivity results of Theorem \ref{Sensitivity 2} to a mean field game model, providing verifiable conditions for the feedback regularity condition. Consider a continuous time dynamic game with a continuum of players and a terminal time $T>0$. By a continuum of players we mean that all players are identical, so the game is symmetric with respect to permutations of the players. Choose one of the players and call it the {\it reference player}. Take $\mathbf{S}=(\mathbf{C}^2)^*$, the dual Banach space of $\mathbf{C}^2$, and $\mathcal{M}= \mathcal{P}(\mathbf{R}^d)$. Let $\mu_t\in \mathcal{P}(\mathbf{R}^d)$ denote the probability distribution of the continuum of players in the state space $\mathbf{R}^d$ at time $t$, and let $\{\mu_t\in \mathcal{P}(\mathbf{R}^d):t\in[0,T]\}$ denote the (probability distribution) {\it measure flow}.
The controlled state dynamics of the reference player is modelled by a controlled stochastic process $(X_t:t\in[0,T])$ associated to the family of operators $A$ in \eqref{A}. At each time $t\in[0,T]$, the reference player knows only his own position $X_t$ and the distribution of the continuum of players $\mu_t\in \mathcal{P}(\mathbf{R}^d)$.
\begin{remark}
For a better understanding of the stochastic process $(X_t: t\in [0,T])$, one may think of the controlled dynamics as described by a stochastic differential equation. For a very particular case of our model, set $L[t,\mu]= \frac{1}{2}\sigma^2\Delta$ with a constant $\sigma$, i.e. the operator $L[t,\mu]$ generates the Brownian motion $\{\sigma W_t: t\geq 0\}$. Then one can write the stochastic differential equation corresponding to the generator \eqref{fellergeneratorwithdriftcont} as
\[
dX_t=h(t,X_t, \mu_t, u_t)\, dt+\sigma\, dW_t \quad \text{for all}\,\, t\geq 0.
\]
In fact, this is exactly the case which was considered in the initial works on mean field games \cite{HMC05, HCM3, HCM07, LL2007}. In our framework, the controlled dynamics of each player is extended to an arbitrary Markov process with a generator \eqref{fellergeneratorwithdriftcont} depending on a probability measure $\mu$.
\end{remark}
The measure flow of the continuum of players in the state space $\mathbf{R}^d$, $\{\mu_t\in \mathcal{P}(\mathbf{R}^d):t\in[0,T]\}$, is the solution of the evolution equation
\begin{align}\label{limiting KE}
& \frac{d}{dt} \int_{\mathbf{R}^d} g(y)\,\mu_t(dy)\notag\\
& =\int _{\mathbf{R}^d} \left(A[t,\mu_t, u_t]g(y)\right)\, \mu_t(dy)
\end{align}
for every test function $g\in \mathbf{C}^2$, with a given initial value $\mu_0\in\mathcal{P}(\mathbf{R}^d)$.
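A standard way to make this evolution concrete is an interacting particle approximation: $\mu_t$ is replaced by the empirical measure of $N$ particles, each following the SDE with Euler--Maruyama steps. The sketch below uses hypothetical coefficients (drift $h(t,x,\mu,u)=\mathrm{mean}(\mu)-x+u$, a frozen control $u\equiv 0$, constant $\sigma$); none of these choices are prescribed by the paper:

```python
import math, random

# Particle approximation of the measure flow: μ_t ≈ empirical measure of N
# particles, evolved by Euler-Maruyama for dX = (mean(μ_t) - X) dt + σ dW
random.seed(0)
sigma, dt, steps, N = 0.5, 0.01, 200, 500
X = [random.gauss(2.0, 1.0) for _ in range(N)]   # sample from μ_0 = N(2, 1)

for _ in range(steps):
    m = sum(X) / N                               # mean of empirical μ_t
    X = [x + (m - x) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
         for x in X]

# for this mean-reverting drift the empirical mean is (approximately)
# conserved: its drift cancels, leaving only O(1/sqrt(N)) noise
print(sum(X) / N)
```

The printed empirical mean stays close to the initial mean $2.0$, as the drift of the averaged system vanishes.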
The equation \eqref{limiting KE} is a controlled version of a {\it general kinetic equation} in weak form and is very often written in the compact form
\begin{equation}\label{dynamic-abs}
{\frac{d} {dt}} (g, \mu_t) =(A[t,\mu_t, u_t ]g, \mu_t).
\end{equation}
See \cite{KLY2012, KY2012} for more discussion of equation \eqref{dynamic-abs} and its well-posedness with open-loop controls under rather general technical assumptions. Let $C_{\mu_0}([0,T], \mathcal{P}(\mathbf{R}^d))$ be the set of continuous curves $t \mapsto \mu_t$ with $\mu_t\in \mathcal{P}(\mathbf{R}^d)$ for each $t\in[0,T]$ and $\mu|_{t=0}=\mu_0$, equipped with the norm
\begin{align} \label{D*}
\|\mu\|_{(\mathbf{C}^2)^*}:=&\sup_{\|g\|_{\mathbf{C}^2}\leq 1}|(g,\mu)|\notag\\
=&\sup_{\|g\|_{\mathbf{C}^2}\leq 1}\left|\int_{\mathbf{R}^d}g(x)\mu(dx)\right |.
\end{align}
In this game, the reference player faces an optimal control problem described by the HJB equation \eqref{HJB_Mu}. If the maximum is achieved at exactly one point, i.e. if for any $(t,x,\mu, p)\in[0,T]\times\mathbf{R}^d\times\mathcal{P}(\mathbf{R}^d)\times\mathbf{R}^d$
$$\arg\max_{u\in\mathcal{U}} ( h(t,x,\mu,u)p+J(t,x,\mu,u))$$
is a singleton, then one can derive the unique optimal control strategy from the solution of \eqref{HJB_Mu}. For any given measure flow $\{\mu_t: t\in [0,T]\}\in C_{\mu_0}([0,T], \mathcal{P}(\mathbf{R}^d))$, the resulting unique optimal control strategy is denoted by
\begin{equation}\label{feedbacklaw}
\hat u(t,x;\{\mu_s: s\in [t,T]\})
\end{equation}
for all $t\in[0,T]$ and $x\in \mathbf{R}^d$.
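For discrete measures, the dual norm $\|\mu\|_{(\mathbf{C}^2)^*}$ can be lower-bounded by evaluating a few explicit test functions $g$ with $\|g\|_{\mathbf{C}^2}\leq 1$. The sketch below assumes the convention $\|g\|_{\mathbf{C}^2}=\max(\|g\|_\infty,\|g'\|_\infty,\|g''\|_\infty)$ (an assumption, as the paper does not spell out the norm here), and uses hypothetical point masses:

```python
import math

# Two hypothetical discrete measures: location -> weight
mu1 = {0.0: 0.5, 1.0: 0.5}
mu2 = {0.2: 0.5, 0.9: 0.5}

# Test functions with sup norms of g, g', g'' all bounded by 1
tests = [
    math.sin,                      # |sin|, |cos|, |sin| ≤ 1
    math.cos,
    lambda x: math.sin(0.5 * x),   # derivatives only shrink the bound
]

def pairing(g, mu):                # (g, μ) = ∫ g dμ for a discrete measure
    return sum(w * g(x) for x, w in mu.items())

# each |(g, μ¹ - μ²)| is a lower bound for the supremum in the dual norm
lower_bound = max(abs(pairing(g, mu1) - pairing(g, mu2)) for g in tests)
print(lower_bound)
```

A finite family of test functions can only certify a lower bound; the full norm requires the supremum over the whole unit ball of $\mathbf{C}^2$.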
Substituting the optimal feedback control strategy \eqref{feedbacklaw} into \eqref{dynamic-abs} yields the closed-loop evolution equation for the distributions $\mu_t$:
\begin{align}
{\frac{d} {dt}} (g, \mu_t)& =(A[t,\mu_t, \hat u(t,x;\{\mu_s: s\in [t,T]\})]g, \mu_t).\label{Kinetic equation_coupled}
\end{align}
The mean field game methodology amounts to finding a value function $V$ for the reference player and a measure flow $\{\mu_.\}$ such that the two coupled equations \eqref{Kinetic equation_coupled}, with initial data $ \mu|_{t=0}=\mu_0$, and
\begin{equation}
-\frac{\partial{V}}{\partial{t}}(t,x) = H_{t}(x,\nabla V(t,x), \mu_t)+L[t,\mu_t]V(t,x),\label{HJB_coupled}
\end{equation}
with terminal data $V(T,\cdot;\mu_T)=V^T(\cdot;\mu_T)$ and
\begin{equation}
H_{t}(x,p, \mu)=\max_{u\in\mathcal{U}} ( h(t,x,\mu,u)p+J(t,x,\mu,u)),\label{singleton control}
\end{equation}
are satisfied at the same time, with the optimal control function $\hat u$ being the argmax in \eqref{HJB_coupled}. The HJB equation \eqref{HJB_coupled} is exactly the HJB equation \eqref{HJB_Mu}, which is the main object of this paper. Since the controlled kinetic equation \eqref{Kinetic equation_coupled} runs forward in time and the HJB equation \eqref{HJB_coupled} runs backward, this system of coupled equations is referred to as a {\it coupled backward-forward system}. See \cite{KLY2012} for a full discussion of solving the coupled backward-forward system of equations \eqref{Kinetic equation_coupled} and \eqref{HJB_coupled}. To solve this coupled backward-forward system \eqref{Kinetic equation_coupled}-\eqref{HJB_coupled}, it is critical that the resulting control mapping $\hat u$ in \eqref{feedbacklaw} satisfies the so-called {\it feedback regularity} condition (see e.g. \cite{HCM3}), i.e.
for any $\{\eta_t: t\in[0,T]\}$, $\{\xi_t: t\in[0,T]\}\in C_{\mu_0}([0,T], \mathcal{P}(\mathbf{R}^d))$, there exists a constant $k_1>0$ such that
\begin{align}
&\sup_{(t,x)\in [0,T]\times \mathbf{R}^d} \Big | \hat u(t,x ;\{\eta_s: s\in [t,T]\})\notag\\
&\hspace{7em}-\hat u(t,x;\{\xi_s: s\in [t,T]\})\Big|\notag\\
&\leq k_1\sup_{s\in [0,T]}||\eta_s -\xi_s||_{(\mathbf{C}^2)^*}.\label{FBR}
\end{align}
\begin{theorem}
\label{thfeedbackHJB}
Suppose $L$, $H$ and $V^T$ in \eqref{HJB_coupled} satisfy the conditions {\bf (A1)}, {\bf (A2)} and {\bf (A3)}, respectively. Assume additionally that the maximum in \eqref{singleton control} is achieved at exactly one point, i.e. for any $(t,x,\mu, p)\in[0,T]\times\mathbf{R}^d\times\mathcal{P}(\mathbf{R}^d)\times\mathbf{R}^d$
\begin{equation}\label{uni_u}
\arg\max_{u\in\mathcal{U}} ( h(t,x,\mu,u)p+J(t,x,\mu,u))
\end{equation}
is a singleton, and that the resulting control, as a function of $(t,x,\mu, p)$, is continuous in $t\in[0,T]$ and Lipschitz continuous in $(x,\mu, p)\in\mathbf{R}^d\times\mathcal{P}(\mathbf{R}^d)\times\mathbf{R}^d$, uniformly in $t$ and for $p$ in bounded sets. Then the optimal feedback control strategy
$$\hat u (t,x;\{\mu_s: s\in [t,T]\})$$
derived via the HJB equation \eqref{HJB_coupled} has the feedback regularity property defined in \eqref{FBR}.
\end{theorem}
\proof By Theorem \ref{Sensitivity 2}, together with the assumption that the resulting unique control mapping is Lipschitz continuous in $(x,\mu,p)$, we conclude that the unique point of maximum in the expression
\[
\max_{u\in\mathcal{U}} \{ h(t,x,\mu_t,u)\nabla V(t,x;\{\mu.\})+J(t,x,\mu_t,u)\}
\]
has the claimed properties.
\qed To better appreciate the results of Theorem \ref{thfeedbackHJB} and to illustrate that {\bf (A1)}, {\bf (A2)} and {\bf (A3)} are verifiable conditions for the feedback regularity property \eqref{FBR}, we give the following example with concrete conditions.
\begin{example} \label{example assumptions} Set the control set $\mathcal{U}=[-1, 1]$. The controlled dynamics $X_t$ is associated to the family of operators $A$ of the form
\begin{align}
&A[t,\mu,u]f(x)\label{example A}\\
= &(h(t,x,\mu,u), \nabla f(x)) + L[t,\mu]f(x)\notag\\
=&(h(t,x,\mu,u), \nabla f(x)) + \frac{1}{2}(G(t,x,\mu)\nabla,\nabla) f(x)\notag
\end{align}
where the drift coefficient $h$ is linear in $u$ and of the form $$h(t,x,\mu,u)=\int_{\mathbf{R}^d} \beta(t,x, y)\mu(dy) +u,$$ and $$G(t,x,\mu) = \int_{\mathbf{R}^d} g(t,x, y)\mu(dy)$$ with the functions $ \beta, g: [0,T]\times \mathbf{R}^d \times \mathbf{R}^d\to \mathbf{R}^d$ being bounded, continuous in $t$ and Lipschitz continuous uniformly in $x$ and $y$. See Remark \ref{example of h and L} for the corresponding form of stochastic differential equations. Since the function $G$ is linear in $\mu$, it is differentiable in $\mu$. Together with the conditions on $g$, the condition \eqref{eqassumonder2} is satisfied. The operator $L[t,\mu]f(x)=\frac{1}{2}(G(t,x,\mu)\nabla,\nabla) f(x)$ in \eqref{example A} generates a strongly continuous backward propagator which has the smoothing property with $\beta=\frac{1}{2}$; see the discussion in subsection \ref{assumptions}. Hence assumption {\bf (A2)} is satisfied.
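The dynamics just specified is of McKean-Vlasov type and can be approximated by an interacting particle system, each particle feeling the empirical measure of the whole population. The sketch below is purely illustrative: the concrete choices of $\beta$, $g$ and the control $u\equiv 0$ are ours, not the paper's, and are picked only so that the drift averages out against the empirical mean.

```python
import numpy as np

def simulate(n_particles=2000, T=1.0, n_steps=200, m0=1.0, seed=0):
    """Euler-Maruyama particle approximation of dynamics with drift
    h = int beta(t,x,y) mu(dy) + u and constant diffusion.

    Illustrative choices (not from the paper): beta(t,x,y) = a*(y - x),
    g constant (diffusion sigma**2), and control u = 0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    a, sigma = 0.5, 0.2
    x = np.full(n_particles, m0)
    for _ in range(n_steps):
        mean = x.mean()            # the empirical measure enters via its mean
        drift = a * (mean - x)     # int beta(t,x,y) mu(dy) with beta = a*(y-x)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

x = simulate()
```

Because $\beta(t,x,y)=a(y-x)$ integrates to zero against the empirical measure centred at its own mean, the empirical mean is conserved up to noise of order $1/\sqrt{N}$, while the particles spread out around it.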
The running cost function $J$ is quadratic in $u$ and of the form
\begin{align}
&J(t,x,\mu,u)\notag\\
=&\int_{\mathbf{R}^d}\alpha (t,x,y)\mu(dy)-\frac{1}{2}u^2 \int_{\mathbf{R}^d}\theta(t,x,y)\mu(dy)\notag
\end{align}
where the functions $\alpha, \theta: [0,T]\times \mathbf{R}^d \times \mathbf{R}^d\to \mathbf{R}$ are bounded, continuous in $t$ and Lipschitz continuous uniformly in $x$ and $y$, and $\theta (t,x,y)> 0$ for any $(t,x,y)$. The boundedness of $J$ guarantees that condition \eqref{eq2thweak_existencem} is satisfied. The special form of the functions $h$ and $J$ guarantees that conditions \eqref{eq1thweak_existencem} and \eqref{eqassumonder1} are satisfied. Hence assumption {\bf (A1)} is satisfied. Together with a terminal function $V^T\equiv 0$, Theorem \ref{Sensitivity 2} gives the result that the spatial derivative of the solution to \eqref{HJB_coupled} is Lipschitz continuous uniformly with respect to the functional parameter $\{\mu.\}$. Further, under this linear-quadratic setting, the max in \eqref{singleton control} is achieved at exactly one point and one gets an explicit formula for the unique point of maximum, i.e.
{\small
\begin{align}
&u(t,x,\mu,p)\label{formular u}\\
&=\arg \max_u \left\{ h(t,x,\mu,u) p+J(t,x,\mu,u)\right\}\notag\\
=&\begin{cases}
&\frac{p}{\int_{\mathbf{R}^d}\theta(t,x,y)\mu(dy)},\,\quad\quad\text{if }\frac{p}{\int_{\mathbf{R}^d}\theta(t,x,y)\mu(dy)}\in[-1,1]\notag\\[.7em]
& 1, \quad\,\,\,\text{if } \frac{p}{\int_{\mathbf{R}^d}\theta(t,x,y)\mu(dy)} \notin[-1,1] \,\text{and } \,p>0\notag\\[.7em]
& -1,\quad \text{if } \frac{p}{\int_{\mathbf{R}^d}\theta(t,x,y)\mu(dy)} \notin[-1,1]\,\text{and } \,p<0\notag
\end{cases}
\end{align}
}
which is continuous in $t$ and Lipschitz continuous in $(x,\mu, p)$ uniformly with respect to $x$, $\mu$ and bounded $p$.
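The closed form above is simply the unconstrained maximiser $p/\bar\theta$ of $up-\tfrac12 u^2\bar\theta$, where $\bar\theta=\int\theta(t,x,y)\mu(dy)>0$, clipped to the control set $[-1,1]$. A minimal sketch (the function name and signature are ours, not the paper's):

```python
import numpy as np

def optimal_control(p, theta_bar):
    """Maximiser of u*p - 0.5*u**2*theta_bar over the control set [-1, 1].

    theta_bar > 0 plays the role of the averaged weight
    int theta(t,x,y) mu(dy); clipping the unconstrained maximiser
    p/theta_bar realises the three cases of the closed-form argmax.
    """
    return float(np.clip(p / theta_bar, -1.0, 1.0))
```

For $\bar\theta$ bounded away from zero and $p$ in a bounded set, this map is Lipschitz in $(p,\bar\theta)$, which is exactly the regularity of the control required in Theorem \ref{thfeedbackHJB}.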
Now it has been checked that this example satisfies all the conditions of Theorem \ref{thfeedbackHJB}. By setting $p=\nabla V(t,x;\{\mu.\})$, we may conclude that the resulting optimal feedback control strategy has the feedback regularity property. \end{example}
\begin{remark} Let us stress again that in \eqref{V Lip}, \eqref{V Lip_2} the space $\mathbf{S}$ is an abstract Banach space, but in application to control depending on empirical measures, we have in mind the norm of the dual space $(\mathbf{C}^2)^*$, where $\mathbf{C}^2$ is the domain of the generating family $A[t,\mu,u]$. \end{remark}
\section{Appendix} \appendix \section{Proof of Theorem \ref{weak_existence} } \label{Proof to the Theorem weak_existence } Let $C_{V^T}^T([0,T], \mathbf{C}^1)$ be the set of functions $\varphi: [0,T]\times \mathbf{R}^d\to \mathbf{R}$ which satisfy $\varphi (T,x)=V^T(x)$ for all $x\in \mathbf{R}^d$, $\varphi (t,\cdot)\in \mathbf{C}^1$ for each $t\in [0,T]$ with the norms $\|\varphi(t,\cdot)\|_{\mathbf{C}^1}$ uniformly bounded in $t$, and such that the mapping $t\mapsto \varphi(t,\cdot)$ is continuous as a mapping from $[0,T]$ to $\mathbf{C}$. We equip this space with the norm $$\|\varphi\|_{C_{V^T}^T([0,T], \mathbf{C}^1)}:=\sup_{t\in[0,T]}\|\varphi(t,\cdot)\|_{\mathbf{C}^1}.$$ Note that this definition of the set $C_{V^T}^T([0,T], \mathbf{C}^1)$ is not standard in the sense that the continuity is considered from $[0,T]$ to $\mathbf{C}$, not from $[0,T]$ to $\mathbf{C}^1$. Define an operator $\Psi$ acting on $C_{V^T}^T([0,T], \mathbf{C}^1)$ by
\begin{align} \label{mildformpsi1}
\Psi(\varphi)(t,x):=& (U^{t,T} V^T(\cdot))(x) \notag\\
&+ \int_t^T U^{t,s}H_s(\cdot,\nabla \varphi(s,\cdot))(x)ds.
\end{align} Since the propagator $U^{t,T}$ is strongly continuous in $t$ and the integral term is continuous in $t$, the mapping $t\mapsto \Psi(\varphi)(t,\cdot)$ is continuous. Since $V^T(\cdot)\in \mathbf{C}^1$ and the family $\{{U}^{t,T}, 0\leq t\leq T\}$ is bounded as a family of mappings from $\mathbf{C}^1$ to $\mathbf{C}^1$, we have ${U}^{t,T}V^T(\cdot)\in \mathbf{C}^1$, uniformly bounded in $0\leq t\leq T$. By the triangle inequality and \eqref{eq1thweak_existencem}, \eqref{eq2thweak_existencem}, for each $t\in[0,T]$
{\small
\begin{eqnarray} \label{ineq H}
&&\|H_t(\cdot,\nabla \varphi(t,\cdot))\|_{\mathbf{C}}\notag\\
&\leq& \|H_t(\cdot,0)\|_{\mathbf{C}} +\|H_t(\cdot,\nabla \varphi(t,\cdot))-H_t(\cdot,0)\|_{\mathbf{C}} \nonumber \\[.4em]
&\leq& c_2+c_1\|\nabla \varphi(t,\cdot) \|_{\mathbf{C}} \nonumber \\[.4em]
&\leq & c_2+c_1\| \varphi(t,\cdot) \|_{\mathbf{C}^1}.
\end{eqnarray}
}
The last inequality in \eqref{ineq H} holds since by definition $\| \varphi(t,\cdot) \|_{\mathbf{C}^1}= \|\nabla \varphi(t,\cdot) \|_{\mathbf{C}}+ \|\varphi(t,\cdot)\|_{\mathbf{C}}$ and $\|\varphi(t,\cdot)\|_{\mathbf{C}}\geq 0$. The smoothing condition \eqref{smoothingm} guarantees that ${U}^{t,s} H_s(\cdot,\nabla \varphi(s,\cdot)) \in \mathbf{C}^1$ for each $0\le t<s\leq T$.
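The fixed-point scheme behind this proof can be visualised in a scalar caricature of the mild equation $\varphi=\Psi(\varphi)$: we take terminal value $v_T=1$, the singular kernel $(s-t)^{-\beta}$ in place of the smoothing bound on $U^{t,s}$, and $\sin$ in place of the Lipschitz Hamiltonian $H$. All modelling choices are illustrative; on a short horizon the Picard iteration contracts geometrically, with rate roughly $T^{1-\beta}/(1-\beta)$.

```python
import numpy as np

T, beta, n = 0.05, 0.5, 400
dt = T / n
t_grid = np.arange(n + 1) * dt        # grid points for t
s_mid = (np.arange(n) + 0.5) * dt     # midpoints for the integral over s

def Psi(phi_mid):
    """(Psi phi)(t) = 1 + int_t^T (s - t)**(-beta) * sin(phi(s)) ds,
    discretised by the midpoint rule (so s - t never vanishes)."""
    out = np.empty(n + 1)
    for j in range(n + 1):
        mask = s_mid > t_grid[j]
        out[j] = 1.0 + np.sum((s_mid[mask] - t_grid[j]) ** (-beta)
                              * np.sin(phi_mid[mask])) * dt
    return out

phi = np.ones(n + 1)
gaps = []                             # sup-norm distance between iterates
for _ in range(40):
    new = Psi(0.5 * (phi[:-1] + phi[1:]))   # interpolate phi to the midpoints
    gaps.append(np.max(np.abs(new - phi)))
    phi = new
```

The recorded `gaps` decay geometrically, mirroring the contraction estimate that drives the proof for $T$ small.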
The conditions \eqref{BDD 2m} and \eqref{smooth property 2m} and the inequality \eqref{ineq H} imply for each $t\in[0,T]$ that
{\small
\begin{align*}\label{GL}
&\|\Psi(\varphi)(t,\cdot)\|_{\mathbf{C}^1}\notag\\
\leq &\|{U}^{t,T}V^T(\cdot)\|_{{\mathbf{C}^1}}+\int_t^T \| U^{t,s} H_s(\cdot,\nabla \varphi(s,\cdot))\|_{\mathbf{C}^1} \,\,ds \notag\\
\leq &c_4 \|V^T(\cdot) \|_{\mathbf{C}^1} + c_5 \int_t^T (s-t)^{-\beta}\|H_s(\cdot,\nabla \varphi(s,\cdot))\|_{\mathbf{C}}\,\, ds\notag\\
\leq & c_4 \|V^T (\cdot)\|_{\mathbf{C}^1} + c_5 \int_t^T (s-t)^{-\beta}(c_2+c_1\| \varphi(s,\cdot) \|_{\mathbf{C}^1})\,\, ds\notag\\
\leq & c_4 \|V^T (\cdot)\|_{\mathbf{C}^1} + c_5\left (c_2+c_1\sup_{t\leq s\leq T}\|\varphi(s,\cdot)\|_{\mathbf{C}^1}\right)\frac{(T-t)^{1-\beta}}{1-\beta}.
\end{align*}
}
It follows that the operator $\Psi$ maps $C_{V^T}^T([0,T], \mathbf{C}^1)$ to itself, i.e. \[ \Psi: C_{V^T}^T([0,T], \mathbf{C}^1)\to C_{V^T}^T([0,T], \mathbf{C}^1). \] Conditions \eqref{eq1thweak_existencem} and \eqref{smooth property 2m} imply for every $\varphi^1,\varphi^2 \in C_{V^T}^T([0,T],\mathbf{C}^1)$ and $t\in [0,T]$ that
{\small
\begin{align} \label{eq5thweak_existence}
&\|\Psi(\varphi^1)(t,\cdot)-\Psi(\varphi^2)(t,\cdot)\|_{\mathbf{C}^1}\notag\\
\leq &\int_t^T \| U^{t,s}[H_s(\cdot,\nabla \varphi^1(s,\cdot))-H_s(\cdot,\nabla \varphi^2(s,\cdot))]\|_{\mathbf{C}^1}\,\,ds \notag \\
\leq &\int_t^T c_5 (s-t)^{-\beta} c_1 \|\nabla \varphi^1(s,\cdot)-\nabla \varphi^2(s,\cdot) \|_{\mathbf{C}}\,\,ds \notag \\
\leq &c_1 c_5 \frac{(T-t)^{1-\beta}}{1-\beta} \sup_{t\leq s\leq T} \| \varphi^1(s,\cdot)-\varphi^2(s,\cdot) \|_{\mathbf{C}^1}.
\end{align}
} Hence, for $T>0$ small enough, we obtain well-posedness by the contraction principle. Similar arguments yield well-posedness on an interval $[t_0, T]$ with $T-t_0$ small enough, with $\Psi(\varphi) (t_0, \cdot)\in \mathbf{C}^1$. Iterating this procedure over the whole interval $[0,T]$ completes the proof. \qed
\section{Proof of Theorem \ref{Sensitivity 1} } \label{Proof to the Theorem Sensitivity 1} From \eqref{smooth property 3}, for each $\alpha_1, \alpha_2\in[0,1]$, $t\in[0,T]$ and $x\in\mathbf{R}^d$, we have
\begin{align}\label{W-diff}
&W_{\alpha_1}(t,x)-W_{\alpha_2}(t,x)\\[.4em]
&=U_{\alpha_1}^{t,T}\left( V^T_{\alpha_1} - V^T_{\alpha_2}\right)(x)+\left( U_{\alpha_1}^{t,T}-U_{\alpha_2}^{t,T}\right)V^T_{\alpha_2}(x).\notag
\end{align}
By condition {\bf (A3)}, for each $x\in \mathbf{R}^d$ the mapping $\alpha\mapsto V_{\alpha}^T(x)$ is differentiable and the derivative $\frac{\partial V^T_\alpha}{\partial \alpha}(\cdot)$ belongs to $\mathbf{C}^1$. Since $U_{\alpha_1}^{t,T}: \mathbf{C}^1\to \mathbf{C}^1$ for any $0\leq t\leq T$ and $\alpha_1\in[0,1]$, together with \eqref{BDD 2m} we have
\begin{align}\label{ex1}
&\|U_{\alpha_1}^{t,T}\left( V^T_{\alpha_1} - V^T_{\alpha_2}\right)(\cdot)\|_{\mathbf{C}^1}\notag\\
&=\left\|U_{\alpha_1}^{t,T} \int_{\alpha_2}^{\alpha_1}\frac{\partial V_\gamma^T}{\partial \alpha}(\cdot)d\gamma\right\|_{\mathbf{C}^1}\notag\\
&\leq c_4 |\alpha_1-\alpha_2|\sup_{\gamma\in[\alpha_1,\alpha_2]}\left\|\frac{\partial V_\gamma^T}{\partial \alpha}(\cdot)\right\|_{\mathbf{C}^1}.
\end{align}
By \eqref{PropergatorProperty } in Proposition \ref{prop-propergatorProperty} and the smoothing property \eqref{smooth property 2m}, we have $$U_{\alpha_1}^{t,T}-U_{\alpha_2}^{t,T}=\int_t^T U_{\alpha_1} ^{t,s}( L_{\alpha_1}[s]-L_{\alpha_2}[s])U_{\alpha_2} ^{s,T}ds,$$ which is an operator mapping $\mathbf{C}^2$ to $\mathbf{C}^1$.
Together with condition {\bf (A2)} (ii), which states that for each $t\in[0,T]$ the mapping $\alpha\mapsto L_{\alpha}[t]$ is differentiable and $\frac{\partial L_\alpha}{\partial \alpha}[t]: \mathbf{C}^2\to\mathbf{C}$, we have
{\small
\begin{align}\label{ex2}
&\left\|\left( U_{\alpha_1}^{t,T}-U_{\alpha_2}^{t,T}\right)V^T_{\alpha_2}(\cdot)\right\|_{\mathbf{C}^1}\notag\\
&\leq c_4c_5 \int_t^T(s-t)^{-\beta}ds\notag\\
&\hspace{3em}\times\sup_{s\in[t,T]} \| L_{\alpha_1}[s]-L_{\alpha_2}[s]\|_{\mathbf{C}^2\to\mathbf{C}}\|V^T_{\alpha_2}(\cdot)\|_{\mathbf{C}^2}\notag\\
&\leq c_4c_5 \frac{(T-t)^{1-\beta}}{1-\beta}|\alpha_1-\alpha_2|\notag\\
&\hspace{3em}\times\underset{\begin{subarray}{c} s\in[t,T] \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\| \frac{\partial L_\gamma}{\partial \alpha}[s]\right\|_{\mathbf{C}^2\to\mathbf{C}}\|V^T_{\alpha_2}(\cdot)\|_{\mathbf{C}^2}.
\end{align}
}
Therefore, combining \eqref{W-diff} with \eqref{ex1} and \eqref{ex2} completes the proof. \qed
\section{Proof of Proposition \ref{prop}} \label{Proof for the Proposition prop} (i) Since the operator $L_\alpha[t]$ is differentiable in $\alpha$ for each $t\in[0,T]$, together with \eqref{PropergatorProperty } in Proposition \ref{prop-propergatorProperty}, for $\alpha_1,\alpha\in[0,1]$ with $\alpha_1\neq\alpha$ we have
{\small
\begin{align*}\label{U-a}
\frac{U^{t,s}_{\alpha_1}-U^{t,s}_{\alpha}}{\alpha_1-\alpha}&=\frac{1}{\alpha_1-\alpha}\int^s_t U^{t,r}_{\alpha_1}(L_{\alpha_1}[r]- L_{\alpha}[r])U^{r,s}_{\alpha}dr\notag\\
&=\int^s_t U^{t,r}_{\alpha_1}\frac{\partial L_\gamma}{\partial \alpha}[r]U^{r,s}_{\alpha}dr
\end{align*}
}
with some $\gamma \in[\alpha_1,\alpha]$.
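In finite dimensions the difference-quotient identity above can be sanity-checked directly. For an autonomous toy generator $L(\alpha)=L_0+\alpha L_1$ on $\mathbf{R}^2$ the propagator is $U^{t,s}_\alpha=e^{(s-t)L(\alpha)}$, and the resulting formula $\partial_\alpha U^{t,s}_\alpha=\int_t^s U^{t,r}_\alpha L_1 U^{r,s}_\alpha\,dr$ can be compared with a finite difference. The matrices below are arbitrary non-commuting choices, purely for illustration:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via truncated power series (fine for small norms)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Two non-commuting generators: L(alpha) = L0 + alpha * L1 (autonomous case,
# so U^{t,s}_alpha = exp((s - t) * L(alpha))).
L0 = np.array([[0.0, 1.0], [0.0, 0.0]])
L1 = np.array([[0.0, 0.0], [1.0, 0.0]])
s_minus_t, alpha, eps = 1.0, 0.3, 1e-6

def U(a):
    return expm(s_minus_t * (L0 + a * L1))

# Central finite difference of alpha -> U_alpha ...
fd = (U(alpha + eps) - U(alpha - eps)) / (2 * eps)

# ... versus the variation-of-constants formula
# dU/dalpha = int_t^s U^{t,r} (dL/dalpha) U^{r,s} dr, with dL/dalpha = L1.
n = 400
dr = s_minus_t / n
integral = np.zeros((2, 2))
for j in range(n):
    r = (j + 0.5) * dr   # midpoint rule in r
    integral += (expm(r * (L0 + alpha * L1)) @ L1
                 @ expm((s_minus_t - r) * (L0 + alpha * L1))) * dr
```

The two $2\times 2$ matrices agree to the accuracy of the quadrature, illustrating the identity the proof manipulates in operator form.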
By \eqref{PropergatorPropertya} in Proposition \ref{prop-propergatorProperty}, for each $0\leq t<r\leq T$ the mapping $\alpha\mapsto U_{\alpha}^{t,r}$ is continuous as a function from $[0,1]$ to $\mathcal{L}(\mathbf{C}^2, \mathbf{C})$ in the strong operator topology. Approximating a continuous function by a sequence of twice continuously differentiable functions, it follows by standard density arguments that, for each $0\leq t<r\leq T$, the mapping $\alpha\mapsto U_{\alpha}^{t,r}$ is also continuous as a function from $[0,1]$ to $\mathcal{L}(\mathbf{C}, \mathbf{C})$ in the strong operator topology. Together with the smoothing property \eqref{smoothingm} and the condition that for each $r\in[0,T]$ and $\gamma\in[0,1]$, $\frac{\partial L_\gamma}{\partial \alpha}[r]: \mathbf{C}^2\to \mathbf{C}$, we conclude that the derivative $$ \frac{\partial U^{t,s}_\alpha}{\partial \alpha}:=\lim_{\alpha_1\to\alpha}\frac{U^{t,s}_{\alpha_1}-U^{t,s}_{\alpha}}{\alpha_1-\alpha} $$ exists in the strong topology of $\mathcal{L}(\mathbf{C}^2, \mathbf{C})$ and equals $$\frac{\partial U^{t,s}_\alpha}{\partial \alpha}=\int^s_t U^{t,r}_{\alpha}\frac{\partial L_\alpha}{\partial \alpha}[r]U^{r,s}_{\alpha}dr.$$ (ii) By condition {\bf (A3)}, the mapping $\alpha\mapsto V^T_\alpha(\cdot)$ is differentiable and the derivative exists in $\mathbf{C}^1$; namely, $\lim_{\alpha_1\to\alpha}\frac{V^T_{\alpha_1}-V^T_{\alpha}}{\alpha_1-\alpha}$ exists and belongs to $\mathbf{C}^1$.
Since the mapping $\alpha\mapsto U_{\alpha}^{t,T}$ is continuous as a function from $[0,1]$ to $\mathcal{L}(\mathbf{C}, \mathbf{C})$ in the strong operator topology, for any $t\in[0,T]$, $$\lim_{\alpha_1\to\alpha}\left(U^{t,T}_{\alpha_1}\frac{V^T_{\alpha_1}-V^T_{\alpha}}{\alpha_1-\alpha}\right)=U^{t,T}_{\alpha}\frac{\partial V^T_{\alpha}}{\partial \alpha}\in\mathbf{C}.$$ Finally, from \eqref{smooth property 3}, the derivative
\begin{align}
\frac{\partial W_{\alpha}}{\partial \alpha}(t,\cdot)&:=\lim_{\alpha_1\to\alpha}\frac{W_{\alpha_1}(t,\cdot)-W_{\alpha}(t,\cdot)}{\alpha_1-\alpha}\notag\\
&=\lim_{\alpha_1\to\alpha}\left(U^{t,T}_{\alpha_1}\frac{V^T_{\alpha_1}(\cdot)-V^T_{\alpha}(\cdot)}{\alpha_1-\alpha}\right)\notag\\
&\hspace{1em}+ \lim_{\alpha_1\to\alpha}\frac{U^{t,T}_{\alpha_1}-U^{t,T}_{\alpha}}{\alpha_1-\alpha}V^T_{\alpha}(\cdot)\notag\\
&={U}_\alpha^{t,T} \frac{\partial V^T_\alpha}{\partial \alpha} (\cdot) +\frac{\partial U_\alpha^{t,T}}{\partial \alpha}V^T_\alpha(\cdot)\notag
\end{align}
exists in $\mathbf{C}$ for each $t\in[0,T]$ and any $\alpha\in[0,1]$. \qed
\section{Proof of Theorem \ref{Sensitivity 2}} \label{Proof for the Theorem Sensitivity 2} (a) Recall from the proof of Theorem \ref{weak_existence} that, for any $\alpha \in [0,1]$, the unique solution $V_\alpha $ is the unique fixed point of the mapping $$\varphi \mapsto \Psi_{\alpha}(\varphi), \quad C^T_{V_\alpha^T}([0,T], \mathbf{C}^1)\to C^T_{V_\alpha^T}([0,T], \mathbf{C}^1)$$ defined, for each $t\in[0,T]$, by
\begin{equation} \label{mapping Psi}
\Psi_{\alpha}(\varphi)(t,\cdot) = U_\alpha^{t,T} V_\alpha^T(\cdot) + \int_t^T U_\alpha^{t,s}H_{\alpha,s}(\cdot,\nabla \varphi (s,\cdot))ds.
\end{equation}
For any $\alpha_i\in[0,1]$, $i=1,2$, let $V_{\alpha_i}$ be the unique fixed point of the mapping $\Psi_{\alpha_i}$, i.e.
$$V_{\alpha_i}=\Psi_{\alpha_i} (V_{\alpha_i} ),\quad \text{for } i=1,2.$$ Then from \eqref{mapping Psi} we have
{\small
\begin{align}
&V_{\alpha_1}(t, \cdot)-V_{\alpha_2}(t, \cdot)\notag\\
=&\Psi_{\alpha_1}(V_{ \alpha_1}(t, \cdot))-\Psi_{\alpha_2}(V_{\alpha_2}(t, \cdot))\notag\\[.4em]
=&U^{t,T}_{\alpha_1}V^T_{\alpha_1}(\cdot)-U^{t,T}_{\alpha_2}V^T_{\alpha_2}(\cdot)\notag\\[.4em]
+&\int_t^TU^{t,s}_{\alpha_1}H_{\alpha_1,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))ds \notag\\
-&\int_t^TU^{t,s}_{\alpha_2}H_{\alpha_2,s}(\cdot,\nabla V_{\alpha_2}(s,\cdot))ds\notag\\[.4em]
=&U^{t,T}_{\alpha_1}V^T_{\alpha_1}(\cdot)-U^{t,T}_{\alpha_2}V^T_{\alpha_2}(\cdot)\notag\\[.4em]
+&\int_t^T \left(U^{t,s}_{\alpha_1}- U^{t,s}_{\alpha_2}\right)H_{\alpha_1,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))ds \notag\\
+&\int_t^T U^{t,s}_{\alpha_2}\left(H_{\alpha_1,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))-H_{\alpha_2,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))\right)ds \notag\\
+&\int_t^T U^{t,s}_{\alpha_2}\left(H_{\alpha_2,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))-H_{\alpha_2,s}(\cdot,\nabla V_{\alpha_2}(s,\cdot))\right)ds\notag\\
=:&\Lambda_1+\Lambda_2+\Lambda_3+\Lambda_4 \label{5}.
\end{align}
}
By Theorem \ref{Sensitivity 1}, there exists a constant $c>0$ such that
{\small
\begin{align}\label{Lambda_1}
&\|\Lambda_1\|_{\mathbf{C}^1} \leq c |\alpha_1-\alpha_2| \sup_{\gamma\in[\alpha_1,\alpha_2]}\left\|\frac{\partial V^T_{\gamma}}{\partial \alpha}\right\|_{\mathbf{C}^1}\notag\\
&+c |\alpha_1-\alpha_2| (T-t)^{1-\beta}\hspace{-.8em}\underset{\begin{subarray}{c} s\in[t,T] \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\| \frac{\partial L_\gamma}{\partial \alpha}[s]\right\|_{\mathbf{C}^2\to\mathbf{C}}\|V^T_{\alpha_2}(\cdot)\|_{\mathbf{C}^2}.
\end{align}
} By Proposition \ref{prop-propergatorProperty} and the differentiability of $L_\alpha[t]$ in $\alpha$ for each $t\in[0,T]$, we have
{\small
\begin{align}\label{Lambda_2}
&\|\Lambda_2\|_{\mathbf{C}^1}\\
&=\Big\|\int_t^T \left( \int_t^s U^{t,r}_{\alpha_1}(L_{\alpha_1}[r]-L_{\alpha_2}[r])U^{r,s}_{\alpha_2}dr\right)\notag\\
&\hspace{3cm}H_{\alpha_1,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))ds\Big\|_{\mathbf{C}^1} \notag\\[.4em]
&=\Big\| \int_t^T \left( \int_t^s U^{t,r}_{\alpha_1}(\alpha_1-\alpha_2)\frac{\partial L_\theta}{\partial \alpha}[r]U^{r,s}_{\alpha_2}dr\right) \notag\\[.4em]
&\hspace{3cm}H_{\alpha_1,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))ds\Big\|_{\mathbf{C}^1} \notag\\[.4em]
& \leq|\alpha_1-\alpha_2|c_5c_6\int_t^T\int_t^s(r-t)^{-\beta}(s-r)^{-\beta}\left\| \frac{\partial L_\theta}{\partial \alpha}[r] \right\|_{\mathbf{C}^2\to\mathbf{C}}dr\notag\\
&\hspace{3cm} \| H_{\alpha_1,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot)) \|_{\mathbf{C}_{Lip}}ds\notag\\[.4em]
& \leq|\alpha_1-\alpha_2|c_5c_6 m\frac{ (T-t)^{2-2\beta}}{(1-\beta)^2} \sup_{r\in[t,T]}\left\|\frac{\partial L_\theta}{\partial \alpha}[r]\right\|_{\mathbf{C}^2\to \mathbf{C}}\notag
\end{align}
}
with $\theta\in[0,1]$ and $m:=\sup_{\alpha\in[0,1]}\sup_{( s,p)\in \mathcal{O}}\|H_{\alpha,s}(\cdot, p)\|_{\mathbf{C}_{Lip}}<\infty$. The last inequality in \eqref{Lambda_2} is obtained through the calculation
\begin{align*}
&\int_t^T \int_t^s (r-t)^{-\beta}(s-r)^{-\beta}drds\\
=&\int_t^T \int_r^T (r-t)^{-\beta}(s-r)^{-\beta}dsdr\\
=& \int_t^T (r-t)^{-\beta} \frac{(T-r)^{1-\beta}}{1-\beta}dr\\
\leq &\frac{(T-t)^{1-\beta}}{1-\beta}\int _t^T(r-t)^{-\beta}dr=\frac{ (T-t)^{2-2\beta}}{(1-\beta)^2}.
\end{align*}
Since, by condition {\bf (A1)}, for each $t\in[0,T]$ and $x,p\in\mathbf{R}^d$ the mapping $\alpha\mapsto H_{\alpha,s}(x,p)$ is differentiable with continuous derivative, we have
{\small
\begin{align}\label{Lambda_3}
&\|\Lambda_3\|_{\mathbf{C}^1}\notag\\
&=\left\|\int_t^T U^{t,s}_{\alpha_2} (\alpha_1-\alpha_2) \frac{\partial H_{\theta,s}}{\partial \alpha}(\cdot,\nabla V_{\alpha_1}(s,\cdot))ds\right \|_{\mathbf{C}^1}\notag\\
&\leq |\alpha_1-\alpha_2|\int_t^T c_5 (s-t)^{-\beta}\left\|\frac{\partial H_{\theta,s}}{\partial \alpha}(\cdot, \nabla V_{\alpha_1}(s,\cdot))\right\|_{\mathbf{C}} ds\notag\\[.4em]
&\leq |\alpha_1-\alpha_2| c_5 \frac{(T-t)^{1-\beta}}{1-\beta} \, \sup_{(s,p)\in\mathcal{O}}\left \|\frac{\partial H_{\theta,s}}{\partial \alpha}(\cdot, p)\right\|_{\mathbf{C}}.
\end{align}
}
By \eqref{smooth property 2m} and \eqref{eq1thweak_existencem}, we get
{\small
\begin{align}\label{Lambda_4}
&\|\Lambda_4\|_{\mathbf{C}^1}\notag\\
=&\left\| \int_t^T \hspace{-.6em}U^{t,s}_{\alpha_2}\left(H_{\alpha_2,s}(\cdot,\nabla V_{\alpha_1}(s,\cdot))-H_{\alpha_2,s}(\cdot,\nabla V_{\alpha_2}(s,\cdot))\right)ds \right\|_{\mathbf{C}^1}\notag\\
\leq &c_1 c_5\int_t^T (s-t)^{-\beta} \|\nabla V_{\alpha_1}(s,\cdot)-\nabla V_{\alpha_2}(s,\cdot)\|_{\mathbf{C}} \, ds\notag\\
\leq &c_1 c_5\int_t^T (s-t)^{-\beta} \|V_{\alpha_1}(s,\cdot)- V_{\alpha_2}(s,\cdot)\|_{\mathbf{C}^1} \, ds\notag\\
\leq &c_1 c_5\frac{(T-t)^{1-\beta}}{1-\beta} \sup_{s\in[t,T]}\|V_{\alpha_1}(s,\cdot)- V_{\alpha_2}(s,\cdot)\|_{\mathbf{C}^1}.
\end{align}
} From (\ref{5}) together with the estimates \eqref{Lambda_1}, \eqref{Lambda_2}, \eqref{Lambda_3} and \eqref{Lambda_4}, it follows that
{\small
\begin{align}
&\sup_{t\in[0,T]}\| V_{\alpha_1}(t,\cdot)-V_{\alpha_2}(t,\cdot)\|_{\mathbf{C}^1}\\
&\leq c |\alpha_1-\alpha_2| \sup_{\gamma\in[\alpha_1,\alpha_2]}\left\|\frac{\partial V^T_{\gamma}}{\partial \alpha}\right\|_{\mathbf{C}^1}\notag\\
&+c |\alpha_1-\alpha_2| (T-t)^{1-\beta}\|V^T_{\alpha_2}(\cdot)\|_{\mathbf{C}^2} \underset{\begin{subarray}{c} t\in[0,T] \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\| \frac{\partial L_\gamma}{\partial \alpha}[t]\right\|_{\mathbf{C}^2\to\mathbf{C}}\notag\\
&+|\alpha_1-\alpha_2|c_5c_6 m\frac{(T-t)^{2-2\beta} }{(1-\beta)^2} \underset{\begin{subarray}{c} t\in[0,T] \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup}\left\|\frac{\partial L_\gamma}{\partial \alpha}[t]\right\|_{\mathbf{C}^2\to \mathbf{C}}\notag\\
&+ |\alpha_1-\alpha_2| c_5 \frac{(T-t)^{1-\beta}}{1-\beta} \, \underset{\begin{subarray}{c} (t,p)\in\mathcal{O} \\ \gamma\in[\alpha_1,\alpha_2] \end{subarray}}{\sup} \left \|\frac{\partial H_{\gamma,t}}{\partial \alpha}(\cdot, p)\right\|_{\mathbf{C}}\notag\\
&+c_1 c_5\frac{(T-t)^{1-\beta} }{1-\beta}\sup_{t\in[0,T]}\|V_{\alpha_1}(t,\cdot)- V_{\alpha_2}(t,\cdot)\|_{\mathbf{C}^1} .
\end{align}
}
For $t$ close enough to $T$, so that $\frac{(T-t)^{1-\beta}}{(1-\beta)^2} <1$, this yields inequality \eqref{T10}. For arbitrary $t\in[0,T]$, the proof follows by iteration.
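The elementary kernel estimate $\int_t^T\int_t^s(r-t)^{-\beta}(s-r)^{-\beta}\,dr\,ds\le (T-t)^{2-2\beta}/(1-\beta)^2$ used for $\Lambda_2$ can also be checked numerically. A midpoint-rule sketch for $t=0$, $\beta=\tfrac12$, $T=1$, where the inner Beta-type integral equals $\pi$ for every $s$, so the double integral is $\pi$, comfortably below the bound $4$:

```python
import math

def double_integral(beta, T, n=400):
    """Midpoint-rule approximation of
    int_0^T int_0^s r**(-beta) * (s - r)**(-beta) dr ds
    (the t = 0 case; midpoints keep the integrand off its singularities)."""
    ds = T / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        dr = s / n
        inner = sum(((j + 0.5) * dr) ** (-beta)
                    * (s - (j + 0.5) * dr) ** (-beta) for j in range(n)) * dr
        total += inner * ds
    return total

val = double_integral(0.5, 1.0)
bound = 1.0 ** (2 - 2 * 0.5) / (1 - 0.5) ** 2   # (T-t)^{2-2beta}/(1-beta)^2 = 4.0
```

The midpoint rule slightly underestimates the singular integrand near the endpoints, so `val` sits a little below $\pi$ and well below the analytic bound.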
(b) By the definitions of $L_\alpha[t]$, $H_{\alpha,t}(x,p)$ and $V^T_\alpha(x)$ in \eqref{Halpha}, \eqref{Lalpha} and \eqref{VTalpha} respectively, and the assumptions \eqref{eqassumonder1}, \eqref{eqassumonder2}, \eqref{eqassumonder3}, for any $\{\mu^1_.\},\{\mu^2_.\}\in C([0,T], \mathcal{M})$ the statement follows from the equation \eqref{estimate} and the inequality \eqref{T10} by setting $\alpha_1=1$ and $\alpha_2=0$. \qed

{\bf Acknowledgements: } The authors thank the referees for their valuable suggestions.

{\small
\begin{thebibliography}{99}
\bibitem{AGMR2010} W. Alt, R. Griesse, N. Metla, A. R\"osch. Lipschitz stability for elliptic optimal control problems with mixed control-state constraints. {\it Optimization: A Journal of Mathematical Programming and Operations Research}, {\bf 59:6} (2010), 833-849.
\bibitem{Bai} I. F. Bailleul. Sensitivity for the Smoluchowski equation. {\it Journal of Physics A: Mathematical and Theoretical}, {\bf 44:24} (2011), 245004.
\bibitem{E2010} L. C. Evans. Partial Differential Equations (Graduate Studies in Mathematics). American Mathematical Society, 2010.
\bibitem{FlSo} W. Fleming, H. M. Soner. Controlled Markov Processes and Viscosity Solutions. 2nd edition. Stochastic Modelling and Applied Probability, 25. Springer, New York, 2006.
\bibitem{GLL2010} O. Gu\'eant, J.-M. Lasry, P.-L. Lions. Mean field games and applications. Paris-Princeton Lectures on Mathematical Finance 2010, Springer.
\bibitem{G2004} R. Griesse. Parametric sensitivity analysis in optimal control of a reaction-diffusion system. Part I: Solution differentiability. {\it Numerical Functional Analysis and Optimization}, {\bf 25:1-2} (2004), 93-117.
\bibitem{GM2005} E. Gobet, R. Munos. Sensitivity analysis using It\^o-Malliavin calculus and martingales, and application to stochastic optimal control. {\it SIAM Journal on Control and Optimization}, {\bf 43:5} (2005), 1676-1713.
\bibitem{GS2014} D. A. Gomes, J. Sa\'ude. Mean field games models -- a brief survey.
{\it Dyn Games Appl}, {\bf 4} (2014), 110-154.
\bibitem{HMC05} M. Huang, R. P. Malham\'e, P. E. Caines. Nash equilibria for large-population linear stochastic systems with weakly coupled agents. In: E. K. Boukas, R. P. Malham\'e (Eds.), Analysis, Control and Optimization of Complex Dynamic Systems. Springer (2005), 215-252.
\bibitem{HCM3} M. Huang, R. P. Malham\'e, P. E. Caines. Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. {\it Communications in Information and Systems}, {\bf 6:3} (2006), 221-252.
\bibitem{HCM07} M. Huang, P. E. Caines, R. P. Malham\'e. Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized $\epsilon$-Nash equilibria. {\it IEEE Transactions on Automatic Control}, {\bf 52:9} (2007), 1560-1571.
\bibitem{HCM10} M. Huang, P. E. Caines, R. P. Malham\'e. The NCE (mean field) principle with locality dependent cost interactions. {\it IEEE Transactions on Automatic Control}, {\bf 55:12} (2010), 2799-2805.
\bibitem{Hu10} M. Huang. Large-population LQG games involving a major player: the Nash certainty equivalence principle. {\it SIAM Journal on Control and Optimization}, {\bf 48:5} (2010), 3318-3353.
\bibitem{L} P.-L. Lions. Th\'eorie des jeux \`a champs moyen et applications. Cours au Coll\`ege de France, http://www.college-defrance. fr/default/EN/all/equ der/cours et seminaires.htm.
\bibitem{Ko06a} V. N. Kolokoltsov. On the regularity of solutions to the spatially homogeneous Boltzmann equation with polynomially growing collision kernel. {\it Advanced Studies in Contemporary Mathematics}, {\bf 12:1} (2006), 9-38.
\bibitem{VK2} V. N. Kolokoltsov. Markov Processes, Semigroups and Generators. De Gruyter Studies in Mathematics 38, 2011.
\bibitem{KLY2012} V. N. Kolokoltsov, J. J. Li, W. Yang. Mean field games and nonlinear Markov processes. arXiv:1112.3744v2 (2012).
\bibitem{KY2012} V. N. Kolokoltsov, W. Yang.
Existence of solutions to path-dependent kinetic equations and related forward-backward systems. {\it Open Journal of Optimization}, {\bf 2:2}, 39-44.
\bibitem{LLLL2014} A. Lachapelle, J.-M. Lasry, C.-A. Lehalle, P.-L. Lions. Efficiency of the price formation process in presence of high frequency participants: a mean field game analysis. arXiv:1305.6323v3 (2014).
\bibitem{LST2010} A. Lachapelle, J. Salomon, G. Turinici. Computation of mean field equilibria in economics. {\it Math. Models Methods Appl. Sci.}, {\bf 20:4} (2010), 567-588.
\bibitem{LL2006} J.-M. Lasry, P.-L. Lions. Jeux \`a champ moyen. I. Le cas stationnaire. (French) [Mean field games. I. The stationary case.] {\it C. R. Math. Acad. Sci. Paris}, {\bf 343:9} (2006), 619-625.
\bibitem{LLG2010} J.-M. Lasry, P.-L. Lions, O. Gu\'eant. Application of mean field games to growth theory. Preprint (2010).
\bibitem{LL2007} J.-M. Lasry, P.-L. Lions. Mean field games. {\it Japanese Journal of Mathematics}, {\bf 2:1} (2007), 229-260.
\bibitem{MS2010} Z. Macov\'a, D. Sevcovic. Weakly nonlinear analysis of the Hamilton-Jacobi-Bellman equation arising from pension savings management. {\it International Journal of Numerical Analysis and Modeling}, {\bf 7:4} (2010), 619-138.
\bibitem{ManNorrisBai} P. L. W. Man, J. R. Norris, I. Bailleul, M. Kraft. Coupling algorithms for calculating sensitivities of Smoluchowski's coagulation equation. {\it SIAM Journal on Scientific Computing}, {\bf 32:2} (2010), 635-655.
\bibitem{M2002} K. Malanowski. Sensitivity analysis for parametric optimal control of semilinear parabolic equations. {\it Journal of Convex Analysis}, {\bf 9:2} (2002), 543-561.
\bibitem{M2011} K. Malanowski. Sensitivity analysis for state constrained optimal control problems. {\it Control and Cybernetics}, {\bf 40:4} (2011), 1043-1058.
\bibitem{MT2000} K. Malanowski, F. Tr\"oltzsch. Lipschitz stability of solutions to parametric optimal control for elliptic equations.
{\it Control and Cybernetics}, {\bf 29} (2000), 237-256.
\bibitem{McEn} W. McEneaney. Max-Plus Methods for Nonlinear Control and Estimation. Systems and Control: Foundations and Applications. Birkh\"auser Boston, Inc., Boston, MA, 2006.
\bibitem{PorEid84} F. O. Porper, S. D. Eidelman. Two-sided estimates of the fundamental solutions of second-order parabolic equations and some applications of them. (Russian) {\it Uspekhi Mat. Nauk}, {\bf 39:3} (1984), 107-156.
\bibitem{RT2003} T. Roubicek, F. Tr\"oltzsch. Lipschitz stability of optimal controls for the steady-state Navier-Stokes equations. {\it Control and Cybernetics}, {\bf 32:3} (2003), 683-705.
\bibitem{S2005} J. R. Singler. Sensitivity analysis of partial differential equations with applications to fluid flow. PhD thesis, Virginia Polytechnic Institute and State University, 2005.
\bibitem{T2000} F. Tr\"oltzsch. Lipschitz stability of solutions of linear-quadratic parabolic control problems with respect to perturbations. {\it Dynamics of Continuous, Discrete and Impulsive Systems}, {\bf 7:2} (2000), 289-306.
\end{thebibliography}
}
\end{document}
\begin{document} \begin{abstract} We classify constant mean curvature surfaces invariant under a $1$-parameter group of isometries in the Berger spheres and in the special linear group $\mathrm{Sl}(2,\mathbb{R})$. In particular, all constant mean curvature spheres in those spaces are described explicitly, proving that they are not always embedded. Besides, new examples of Delaunay-type surfaces are obtained. Finally, the relation between the area and volume of these spheres in the Berger spheres is studied, showing that, in some cases, they are not solutions to the isoperimetric problem. \end{abstract} \maketitle \section{Introduction} In the last years, constant mean curvature surfaces of the homogeneous Riemannian $3$-manifolds have been deeply studied. The starting point was the work of Abresch and Rosenberg~\cite{ARb}, where they found a holomorphic quadratic differential on any constant mean curvature surface of a homogeneous Riemannian $3$-manifold with isometry group of dimension $4$. Berger spheres, the Heisenberg group, the special linear group $\mathrm{Sl}(2,\mathbb{R})$ and the Riemannian products $\mathbb{S}^2 \times \mathbb{R}$ and $\mathbb{H}^2 \times \mathbb{R}$, where $\mathbb{S}^2$ and $\mathbb{H}^2$ are the $2$-dimensional sphere and hyperbolic plane, are the most relevant examples of such homogeneous $3$-manifolds. Abresch and Rosenberg~\cite{ARa} proved that a complete constant mean curvature surface in $\mathbb{S}^2 \times \mathbb{R}$ or $\mathbb{H}^2 \times \mathbb{R}$ with vanishing Abresch-Rosenberg differential must be \emph{rotationally} invariant (that is, invariant under a $1$-parameter group of isometries acting trivially on the fiber). Moreover, do Carmo and Fernández~\cite[Theorem 2.1]{dCF} showed that, even locally, every constant mean curvature surface in $\mathbb{S}^2 \times \mathbb{R}$ or $\mathbb{H}^2 \times \mathbb{R}$ with vanishing Abresch-Rosenberg differential must be rotationally invariant too.
Finally, Espinar and Rosenberg~\cite{ER} proved that, in every homogeneous Riemannian space with isometry group of dimension $4$, every constant mean curvature surface with vanishing Abresch-Rosenberg differential must be invariant by a $1$-parameter group of isometries. Constant mean curvature surfaces invariant by a $1$-parameter group of isometries were studied in the product spaces $\mathbb{S}^2 \times \mathbb{R}$ and $\mathbb{H}^2 \times \mathbb{R}$ by Hsiang and Hsiang~\cite{HH89} and by Pedrosa and Ritoré~\cite{PR99}. In the Heisenberg group, the study was carried out by Tomter~\cite{To}, Figueroa, Mercuri and Pedrosa~\cite{FMP} and Caddeo, Piu and Ratto~\cite{CPR}. Tomter described explicitly in~\cite{To} the constant mean curvature spheres, computing their volume and area in order to give an upper bound for the isoperimetric profile of the Heisenberg group. The authors in~\cite{FMP} studied not only the \emph{rotationally} invariant case, but also the surfaces invariant by any closed $1$-parameter group of isometries of the Heisenberg group, and organized most of the results that had appeared in the literature. In the special linear group $\mathrm{Sl}(2,\mathbb{R})$ the classification was obtained by Gorodsky~\cite{G} and, very recently, the classification was carried out in the universal cover of $\mathrm{Sl}(2,\mathbb{R})$ by Espinoza~\cite{E}. The aim of this paper is to classify the constant mean curvature surfaces invariant by a $1$-parameter group of isometries that fixes a curve, that is, \emph{rotationally invariant} surfaces, in the Berger spheres (Theorem~\ref{tm:clasificacion-berger}). From this classification it turns out that constant mean curvature spheres are not always embedded (see figure~\ref{fig:immersed-cmc-sphere}), contradicting the result announced by Abresch and Rosenberg in~\cite[Theorem 6]{ARb}. Besides, we obtain some new examples of surfaces similar to the Delaunay constant mean curvature surfaces in $\mathbb{R}^3$.
Moreover, since we obtain an explicit immersion for the constant mean curvature sphere (see Corollary~\ref{cor:sphere-CMC-berger}), we analyse the relation between the area and the volume of the constant mean curvature spheres and show that, for some Berger spheres, they are not the best candidates to solve the isoperimetric problem. Finally, some Delaunay-type surfaces give rise, in some Berger spheres, to embedded minimal tori which are not the Clifford torus, proving that the Lawson conjecture does not hold in some Berger spheres (see Remark~\ref{rm:clasificacion-berger}.(2)). Using the same techniques, and giving a sketch of the proofs, we classify rotationally invariant constant mean curvature surfaces in $\mathrm{Sl}(2,\mathbb{R})$ (see Theorem~\ref{tm:clasificacion-Sl(2,R)}), and we obtain an explicit description of the constant mean curvature spheres, showing that they are not always embedded (see figure~\ref{fig:sl-embebidas}). Although the classification in $\mathrm{Sl}(2,\mathbb{R})$ was already given by Gorodsky in~\cite{G}, there is a mistake in \cite[Theorem 2.(b)]{G}, where he claims that for every $H > 0$ there exists a sphere with constant mean curvature $H$, which is actually false (see Remark~\ref{rm:clasificacion-Sl(2,R)}.(1)). \section[CMC surfaces in homogeneous spaces]{Constant mean curvature surfaces in the homogeneous spaces} Let $N$ be a homogeneous Riemannian $3$-manifold with isometry group of dimension $4$. Then there exists a Riemannian submersion $\Pi:N\rightarrow M^2(\kappa)$, where $M^2(\kappa)$ is a $2$-dimensional simply connected space form of constant curvature $\kappa$, with totally geodesic fibers, and there exists a unit Killing field $\xi$ on $N$ which is vertical with respect to $\Pi$. We will assume that $N$ is oriented, so we can define a cross product $\wedge$ such that, if $\{e_1,e_2\}$ are linearly independent vectors at a point $p$, then $\{e_1,e_2,e_1\wedge e_2\}$ is positively oriented at $p$.
If $\bar{\nabla}$ denotes the Riemannian connection on $N$, the properties of $\xi$ imply (see \cite{D}) that for any vector field $V$ \begin{equation}\label{eq:killing-property} \bar{\nabla}_V\xi=\tau(V\wedge\xi), \end{equation} where the constant $\tau$ is the bundle curvature. As the isometry group of $N$ has dimension $4$, $\kappa-4\tau^2\not=0$. The case $\kappa-4\tau^2=0$ corresponds to $\mathbb{S}^3$ with its standard metric if $\tau\not=0$ and to the Euclidean space $\mathbb{R}^3$ if $\tau=0$, both of which have isometry groups of dimension $6$. In our study we are going to deal mainly with the Berger spheres, which correspond to $\kappa > 0$ and $\tau \neq 0$, and with the special linear group $\mathrm{Sl}(2, \mathbb{R})$, which corresponds to $\kappa < 0$ and $\tau \neq 0$. The fibration in both cases is by circles. \begin{quote} \emph{Throughout the paper $\mathrm{E}(\kappa,\tau)$ will denote an oriented homogeneous Riemannian $3$-manifold with isometry group of dimension $4$, where $\kappa$ is the curvature of the base and $\tau$ the bundle curvature (so that $\kappa-4\tau^2\not=0$).} \end{quote} Now, let $\Phi:\Sigma\rightarrow \mathrm{E}(\kappa, \tau)$ be an immersion of an orientable surface $\Sigma$ and $N$ a unit normal vector field. We define the function $C:\Sigma\rightarrow \mathbb{R}$ by \[ C=\langle N,\xi\rangle, \] where $\langle,\rangle$ denotes the metric in $\mathrm{E}(\kappa, \tau)$, and also the induced metric on $\Sigma$. It is clear that $C^2\leq 1$. Suppose now that the immersion $\Phi$ has {\em constant mean curvature}. Consider on $\Sigma$ the structure of Riemann surface associated to the induced metric and let $z=x+iy$ be a conformal parameter on $\Sigma$. Then the induced metric is written as $e^{2u}|dz|^2$ and we denote by $\partial_z=(\partial_x-i\partial_y)/2$ and $\partial_{\bar{z}}=(\partial_x+i\partial_y)/2$ the usual operators.
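As a quick illustration of these operators (a minimal numeric sketch of our own, not part of the argument; function names and the step size $h$ are illustrative choices), one can check by finite differences that $\partial_{\bar z}$ annihilates holomorphic functions while $\partial_z z^2 = 2z$ and $\partial_{\bar z}|z|^2 = z$:

```python
# Finite-difference check of the Wirtinger operators
# d_z = (d_x - i d_y)/2 and d_zbar = (d_x + i d_y)/2.
def d_z(f, x, y, h=1e-5):
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (fx - 1j * fy) / 2

def d_zbar(f, x, y, h=1e-5):
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (fx + 1j * fy) / 2

f = lambda x, y: (x + 1j * y) ** 2      # holomorphic: f(z) = z^2
g = lambda x, y: x * x + y * y          # g(z) = |z|^2 = z * zbar

x0, y0 = 0.3, 0.4
z0 = x0 + 1j * y0
assert abs(d_z(f, x0, y0) - 2 * z0) < 1e-8      # d_z z^2 = 2z
assert abs(d_zbar(f, x0, y0)) < 1e-8            # holomorphic => d_zbar f = 0
assert abs(d_zbar(g, x0, y0) - z0) < 1e-8       # d_zbar |z|^2 = z
```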
For these surfaces, the Abresch-Rosenberg quadratic differential $\Theta$, defined by \[ \Theta(z)=\left(\langle\sigma(\partial_z,\partial_z),N\rangle- \frac{(\kappa-4\tau^2)}{2(H+i\tau)}\langle\Phi_z,\xi\rangle^2\right)(dz)^2, \] where $\sigma$ is the second fundamental form of the immersion, is holomorphic (see \cite{ARb}). We denote $p(z)=\langle \sigma(\partial_z,\partial_z),N\rangle$ and $A(z)=\langle\Phi_z,\xi\rangle$. \begin{proposition}[\cite{D, FM}] \label{prop:integrability-conditions} The fundamental data $\{u,C,H,p,A\}$ of a constant mean curvature immersion $\Phi:\Sigma\rightarrow \mathrm{E}(\kappa, \tau) $ satisfy the following integrability conditions: \begin{equation}\label{eq:integrability-conditions} \begin{aligned} p_{\bar{z}}&=\frac{e^{2u}}{2}(\kappa - 4\tau^2)CA,& A_{\bar{z}}&=\frac{e^{2u}}{2}(H+i\tau)C,\\ C_z&=-(H-i\tau)A-2e^{-2u}\bar{A}p,& |A|^2&=\frac{e^{2u}}{4}(1-C^2). \end{aligned} \end{equation} Conversely, if $u,C:\Sigma\rightarrow\mathbb{R}$ with $-1\leq C\leq 1$ and $p,A:\Sigma\rightarrow\mathbb{C}$ are functions on a simply connected surface $\Sigma$ satisfying equations \eqref{eq:integrability-conditions}, then there exists a unique, up to congruences, immersion $\Phi:\Sigma\rightarrow\mathrm{E}(\kappa, \tau)$ with constant mean curvature $H$ whose fundamental data are $\{u,C,H,p,A\}$. \end{proposition} Given a constant mean curvature surface $\Sigma$ with vanishing Abresch-Rosenberg differential, we know that it must be invariant by a $1$-parameter group of isometries (see~\cite{ARa, dCF, ER}). Now we will restrict our attention to constant mean curvature spheres $S$, which will be treated in a uniform way for all $\mathrm{E}(\kappa, \tau)$. The advantage of this approach is that we will obtain a global formula for the area of the constant mean curvature spheres in terms of $\kappa$ and $\tau$ (see Proposition~\ref{prop:area-esferas}).
In this case, using~\eqref{eq:integrability-conditions} and taking into account that $\Theta = 0$, we get \[ \begin{split} C_z &= \frac{-(H-i\tau)A}{4(H^2 + \tau^2)}[4(H^2 + \tau^2) + (\kappa - 4\tau^2)(1-C^2)], \\ C_{z\bar{z}} &= \frac{-e^{2u}}{32(H^2 + \tau^2)}C[4(H^2 + \tau^2) + (\kappa - 4\tau^2)(1-C^2)]^2. \end{split} \] Since $4(H^2 + \tau^2) + (\kappa - 4\tau^2)(1-C^2) \geq \min\{4(H^2+\tau^2),\, 4H^2 + \kappa\} > 0$, the only critical points of $C$ appear where $A$ vanishes, i.e., taking into account~\eqref{eq:integrability-conditions}, where $C^2(p) = 1$. But the Hessian of $C$ there is given by $(H^2 + \tau^2)^2 > 0$ (except for minimal spheres in $\mathbb{S}^2 \times \mathbb{R}$, but in that case the sphere is the slice $\mathbb{S}^2 \times \{t_0\} \subset \mathbb{S}^2 \times \mathbb{R}$), so all critical points are non-degenerate. Hence $C$ is a Morse function on $S$ and so it has only two critical points $p$ and $q$, which are the absolute maximum and minimum of $C$. The function $v: S \rightarrow \mathbb{R}$ given by $v = \arctanh C$ is, by~\eqref{eq:integrability-conditions}, a harmonic function with singularities at $p$ and $q$ and without critical points. Hence there exists a global conformal parameter $w = x + i y$ on $S$ such that $v(w) = \mathrm{Re}(w) = x$. In this new global conformal parameter the function $C$ is $C(x) = \tanh(x)$, and so it is not difficult to check that the conformal factor of the metric can be written as \[ e^{2u(x)} = \frac{16(H^2 + \tau^2) \cosh^2 x}{[4(H^2 + \tau^2)\cosh^2 x + (\kappa - 4\tau^2)]^2}, \qquad x \in \mathbb{R}. \] Now, to obtain the area of the constant mean curvature sphere it is sufficient to integrate the above function for $x \in \mathbb{R}$ and $y \in [0, T]$, where $T$ must be $2\pi$ since, by the Gauss-Bonnet theorem, \[ 4\pi = \int_S K \,\mathrm{d} A = T \int_{\mathbb{R}} e^{2u(x)} K \,\mathrm{d} x = -T \int_{\mathbb{R}} u''(x) \,\mathrm{d} x = 2T.
\] Then the area is given by \[ \textrm{Area}(S) = \int_0^{2\pi} \int_\mathbb{R} e^{2u(x)}\,\mathrm{d}x\,\mathrm{d}y = 2\pi\int_\mathbb{R} e^{2u(x)}\,\mathrm{d}x \] and a straightforward computation yields the following result. \begin{proposition}\label{prop:area-esferas} The area of a constant mean curvature sphere $S$ in $\mathrm{E}(\kappa, \tau)$ is given by: \[ \textrm{Area}(S) = \begin{cases} \displaystyle{\frac{8\pi}{4H^2 + \kappa} \left[ 1 + \frac{4(H^2 + \tau^2)}{\sqrt{4H^2 + \kappa}\sqrt{4\tau^2 - \kappa}} \arctan\left(\frac{\sqrt{4\tau^2 - \kappa}}{\sqrt{4H^2 + \kappa}} \right)\right]}, & \text{if } \kappa - 4\tau^2 < 0, \\ \\ \displaystyle{\frac{8\pi}{4H^2 + \kappa} \left[ 1 + \frac{4(H^2 + \tau^2)}{\sqrt{4H^2 + \kappa}\sqrt{\kappa - 4\tau^2}} \arctanh\left(\frac{\sqrt{\kappa - 4\tau^2}}{\sqrt{4H^2 + \kappa}} \right)\right]}, & \text{if } \kappa - 4\tau^2 > 0, \end{cases} \label{eq:area-CMC-sphere} \] where $H$ is the mean curvature of $S$. \end{proposition} \begin{remark} The same formula was already obtained for constant mean curvature spheres in the Heisenberg group, with $\kappa = 0$ and $\tau = 1$, in~\cite[Proposition 5]{To}, and in $M^2(\kappa) \times \mathbb{R}$ by~\cite{Pedrosa04} when $\kappa = 1$ and by~\cite{HH89} when $\kappa = -1$. It is important to remark that in \cite{To, Pedrosa04} the mean curvature is the trace of the second fundamental form, while here the mean curvature is half of it.
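The formula can also be sanity-checked numerically; the following minimal sketch (our own illustration, not part of the original argument) evaluates both branches just on either side of the excluded value $\kappa = 4\tau^2$ and recovers, for $H = 0$ and $\tau = 1$, the area $4\pi$ of a great $2$-sphere in the round $\mathbb{S}^3$:

```python
import math

def area_cmc_sphere(H, kappa, tau):
    """Area of the CMC sphere in E(kappa, tau) (Proposition prop:area-esferas)."""
    a = math.sqrt(4 * H**2 + kappa)
    if kappa - 4 * tau**2 < 0:
        b = math.sqrt(4 * tau**2 - kappa)
        corr = 4 * (H**2 + tau**2) / (a * b) * math.atan(b / a)
    else:
        b = math.sqrt(kappa - 4 * tau**2)
        corr = 4 * (H**2 + tau**2) / (a * b) * math.atanh(b / a)
    return 8 * math.pi / (4 * H**2 + kappa) * (1 + corr)

# both branches tend to 4*pi/(H^2 + tau^2) as kappa -> 4*tau^2; with H = 0,
# tau = 1 this is the area 4*pi of a great 2-sphere in the round S^3
for kappa in (4 - 1e-8, 4 + 1e-8):
    assert abs(area_cmc_sphere(0.0, kappa, 1.0) - 4 * math.pi) < 1e-6
```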
\end{remark} \section{The Berger spheres} A Berger sphere is the usual $3$-sphere $\mathbb{S}^3 = \{(z, w) \in \mathbb{C}^2:\, \abs{z}^2 + \abs{w}^2 = 1\}$ endowed with the metric \[ g(X, Y) = \frac{4}{\kappa}\left[\prodesc{X}{Y} + \left(\frac{4\tau^2}{\kappa} - 1\right)\prodesc{X}{V}\prodesc{Y}{V} \right] \] where $\prodesc{\,}{\,}$ stands for the usual metric on the sphere, $V_{(z, w)} = (iz, iw)$ for each $(z, w) \in \mathbb{S}^3$, and $\kappa$, $\tau$ are real numbers with $\kappa > 0$ and $\tau \neq 0$. From now on we will denote the Berger sphere $(\mathbb{S}^3,g)$ by $\mathbb{S}^3_b(\kappa, \tau)$; it is a model for the homogeneous space $\mathrm{E}(\kappa, \tau)$ when $\kappa > 0$ and $\tau \neq 0$. In this case the vertical Killing field is given by $\xi = \frac{\kappa}{4\tau}V$. We note that $\mathbb{S}^3_b(4, 1)$ is the round sphere. The group of isometries of $\mathbb{S}^3_b(\kappa, \tau)$ is $U(2)$. The next proposition classifies, up to conjugation, the $1$-parameter subgroups of $U(2)$ into two types. \begin{proposition}\label{prop:classificacion-subgroups-sphere} A $1$-parameter subgroup of $U(2)$, up to conjugation and reparametrization, must be one of the following types: \begin{enumerate}[(i)] \item $\mathrm{Rot} = \left\{\begin{pmatrix} 1 & 0 \\ 0 & e^{it} \end{pmatrix}:\, t \in \mathbb{R}\right\}$ \item $\left\{\begin{pmatrix} e^{i \alpha t} & 0 \\ 0 & e^{it} \end{pmatrix}:\, t \in \mathbb{R}\right\}$, with $\alpha \in \mathbb{R}\setminus\{0\}$. \end{enumerate} \end{proposition} \begin{proof} Every $1$-parameter subgroup of $U(2)$ is generated, via the exponential map, by an element of the Lie algebra \[ \mathfrak{u}(2) = \left\{ \begin{pmatrix} ia & x e^{iy} \\ -xe^{-iy} & ib \end{pmatrix}:\, a, b, x, y \in \mathbb{R} \right\}. \] We are going to reduce the possible generators by conjugation. It is clear that, given $A \in \mathfrak{u}(2)$ and $D \in U(2)$, the matrices $A$ and $DAD^{-1}$ generate conjugate subgroups.
So if $A = \left(\begin{smallmatrix}ia & x e^{iy} \\ -xe^{-iy} & ib\end{smallmatrix} \right)$, then taking $D = \left(\begin{smallmatrix} 1 & 0 \\ 0 & e^{iy} \end{smallmatrix} \right)$ it follows that \[ \begin{pmatrix} 1 & 0 \\ 0 & e^{iy} \end{pmatrix} \begin{pmatrix} ia & x e^{iy} \\ -xe^{-iy} & ib \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & e^{-iy} \end{pmatrix} = \begin{pmatrix} ia & x \\ -x & ib \end{pmatrix}. \] Hence we may suppose that, up to conjugation, $y = 0$, i.e., $A = \left(\begin{smallmatrix} ia & x \\ -x & ib\end{smallmatrix}\right)$. First, if $a = b$, taking $D = \frac{1}{\sqrt{2}} \left(\begin{smallmatrix} i & -1 \\ 1 & -i \end{smallmatrix} \right)$ we have \[ \frac{1}{\sqrt{2}} \begin{pmatrix} i & -1 \\ 1 & -i \end{pmatrix} \begin{pmatrix} ia & x \\ -x & ia \end{pmatrix} \frac{1}{\sqrt{2}} \begin{pmatrix} -i & 1 \\ -1 & i \end{pmatrix} = \begin{pmatrix} i (a-x) & 0 \\ 0 & i (a+x) \end{pmatrix}. \] On the other hand, if $a \neq b$ then taking $D = \left(\begin{smallmatrix} -\lambda & i \mu \\ -i \mu & \lambda \end{smallmatrix}\right)$, where $\lambda, \mu \in \mathbb{R}$ are such that $\lambda^2 + \mu^2 = 1$ and $\lambda \mu (a-b) = x(\lambda^2 - \mu^2)$, we have \[ \begin{pmatrix} -\lambda & i \mu \\ -i \mu & \lambda \end{pmatrix} \begin{pmatrix} ia & x \\ -x & ib \end{pmatrix} \begin{pmatrix} -\lambda & i \mu \\ -i \mu & \lambda \end{pmatrix} = \begin{pmatrix} i(a\lambda^2 + b\mu^2 +2x\lambda \mu) & 0 \\ 0 & i(a\mu^2 + b\lambda^2 - 2x\lambda \mu) \end{pmatrix}. \] So we may always assume that, up to conjugation, every $1$-parameter subgroup of $U(2)$ is generated by $\left(\begin{smallmatrix}i\alpha & 0 \\ 0 & i\beta \end{smallmatrix}\right)$ with $\alpha, \beta \in \mathbb{R}$. We note that we can interchange $\alpha$ and $\beta$ by conjugation. Via the exponential map this element generates the group $ t \mapsto \left( \begin{smallmatrix} e^{it\alpha} & 0 \\ 0 & e^{it\beta} \end{smallmatrix} \right) $.
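The diagonalization step can be checked numerically; the following self-contained sketch (our own illustration, with arbitrary sample values; the half-angle choice of $\lambda, \mu$ is one way to solve $\lambda\mu(a-b) = x(\lambda^2-\mu^2)$) conjugates a sample element of $\mathfrak{u}(2)$ into the diagonal form given above:

```python
import math

def matmul(A, B):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# sample element of u(2) with y = 0 and a != b (values are arbitrary)
a, b, x = 1.3, -0.7, 0.9
A = [[1j * a, x], [-x, 1j * b]]

# lambda = cos(phi), mu = sin(phi) solves lambda*mu*(a-b) = x*(lambda^2 - mu^2)
phi = 0.5 * math.atan2(2 * x, a - b)
lam, mu = math.cos(phi), math.sin(phi)
D = [[-lam, 1j * mu], [-1j * mu, lam]]   # unitary with D^2 = Id, so D^{-1} = D

M = matmul(matmul(D, A), D)
assert abs(M[0][1]) < 1e-12 and abs(M[1][0]) < 1e-12            # diagonalized
d1 = 1j * (a * lam**2 + b * mu**2 + 2 * x * lam * mu)
d2 = 1j * (a * mu**2 + b * lam**2 - 2 * x * lam * mu)
assert abs(M[0][0] - d1) < 1e-12 and abs(M[1][1] - d2) < 1e-12  # matches the proof
```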
Finally, if $\alpha = \beta = 0$ we get the trivial group; if $\beta \neq 0$ we can reparametrize $t \mapsto t/\beta$, obtaining (i) if $\alpha = 0$ and (ii) if $\alpha \neq 0$. The groups (i) and (ii) are not conjugate because their determinants do not coincide. \end{proof} Among the two types of groups described in the previous proposition, the only $1$-parameter group of isometries which fixes a curve is $\mathrm{Rot}$. It fixes the set $\ell = \{(z, 0) \in \mathbb{S}^3\}$, a great circle that we shall call in the sequel the \emph{axis of rotation}. A group of type (ii) is, for $\alpha = 1$, the translation along the fibers and, for $\alpha \neq 1$, the composition of a rotation and a translation along the fibers. In the Berger sphere $\mathbb{S}^3_b(\kappa, \tau)$ we will denote $E^1_{(z, w)} = (-\bar{w},\bar{z})$ and $E^2_{(z, w)} = (-i\bar{w}, i \bar{z})$. Then $\{E^1,E^2,V\}$ is an orthogonal basis of $T\mathbb{S}^3_b(\kappa, \tau)$ which satisfies $\lvert E^1\rvert^2 = \lvert E^2\rvert^2 = 4/\kappa$ and $\abs{V}^2 = 16\tau^2/\kappa^2$. The connection $\nabla$ associated to $g$ is given by \begin{align*} \nabla_{E_1} E_1 &= 0, &\nabla_{E_1} E_2 &= -V, &\nabla_{E_1} V &= \frac{4\tau^2}{\kappa}E_2, \\ \nabla_{E_2} E_1 &= V, &\nabla_{E_2} E_2 &= 0, &\nabla_{E_2} V &= - \frac{4\tau^2}{\kappa} E_1, \\ \nabla_{V} E_1 &= \left(\frac{4\tau^2}{\kappa} - 2\right)E_2, &\nabla_{V} E_2 &= -\left(\frac{4\tau^2}{\kappa} - 2\right)E_1, &\nabla_{V} V &= 0. \end{align*} Let $\Phi: \Sigma \rightarrow \mathbb{S}^3_b(\kappa, \tau)$ be an immersion of an oriented constant mean curvature surface $\Sigma$ invariant by $\mathrm{Rot}$. Then we can identify $\mathbb{S}^3_b(\kappa, \tau)/\mathrm{Rot}$ with $\mathbb{S}^2$, and so $\Sigma = \pi^{-1}(\gamma)$ for some smooth curve $\gamma \subseteq \mathbb{S}^2$, where $\pi$ denotes the quotient map.
It is sufficient to consider that $\gamma$ is contained in the upper half sphere and parametrized by arc length in $\mathbb{S}^2$, i.e., $\gamma(s) = \bigl(\cos x(s) e^{iy(s)}, \sin x(s) \bigr)$, with $\cos x(s) > 0$ and $x'(s)^2 + y'(s)^2\sin^2 x(s) = 1$ for all $s \in I$. Then we can write down the immersion as $\Phi(s, t) = \bigl(\cos x(s) e^{iy(s)}, \sin x(s) e^{it} \bigr)$. A unit normal vector along $\Phi$ is given by \[ N = C \left\{ - \tau \mathrm{Re} \left[\left(\frac{\tan \alpha}{\cos x} + i \tan x \right) e^{i(t + y)}\right] E^1_\Phi - \tau \mathrm{Im} \left[\left(\frac{\tan \alpha}{\cos x} + i \tan x \right) e^{i(t + y)}\right] E^2_\Phi + \frac{\kappa}{4\tau}V_\Phi \right\} \] where $\alpha$ is an auxiliary function defined by $\cos \alpha(s) = x'(s)$, and \[ C(s) = \frac{\cos x(s) \cos \alpha(s)}{\sqrt{\cos^2 \alpha(s) \bigl[\cos^2 x(s) + \frac{4\tau^2}{\kappa} \sin^2 x(s) \bigr] + \frac{4\tau^2}{\kappa} \sin^2 \alpha(s)}}. \] Now, by a straightforward computation, we obtain the mean curvature $H$ of $\Sigma$ with respect to the normal $N$ defined above: \[ \frac{2\cos^3 \alpha \cos^3 x}{\tau C^3}H = \left(\cos^2 x + \frac{4\tau^2}{\kappa}\sin^2 x\right)\alpha' + \frac{\sin \alpha}{\tan x}\left[ \left(1 - \frac{4\tau^2}{\kappa} \right)\cos^2 x \cos^2 \alpha + \frac{4\tau^2}{\kappa} (1- \tan^2 x) \right]. \] Then we get the following result: \begin{lemma} The generating curve $\gamma(s) = \bigl( \cos x(s) e^{iy(s)}, \sin x(s) \bigr)$ of a surface $\Sigma$ of $\mathbb{S}^3_b$ invariant by the group $\mathrm{Rot}$ satisfies the following system of ordinary differential equations: \begin{equation}\label{eq:sistema} \left\{ \begin{aligned} x' &= \cos \alpha, \\ y' &= \frac{\sin \alpha}{\cos x}, \\ \alpha' &= \frac{1}{(\cos^2 x + \frac{4\tau^2}{\kappa} \sin^2 x)}\left\{ \frac{2\cos^3 \alpha \cos^3 x}{\tau C^3}H + \right.\\ &\left.\qquad -\frac{\sin \alpha}{\tan
x}\left[ \left(1 - \frac{4\tau^2}{\kappa} \right)\cos^2 x \cos^2 \alpha + \frac{4\tau^2}{\kappa} (1- \tan^2 x) \right]\right\} \end{aligned} \right. \end{equation} where $H$ is the mean curvature of $\Sigma$ with respect to the normal defined before. Moreover, if $H$ is constant then the function \begin{equation}\label{eq:integral-primera} \tau C \sin x \tan \alpha - H \sin^2 x \end{equation} is a constant $E$ that we will call the \emph{energy} of the solution. \end{lemma} \begin{remark}\label{rmk:properties-system-sphere} From the uniqueness of the solutions of~\eqref{eq:sistema} for given initial conditions one can show that if $(x, y, \alpha)$ is a solution then: \begin{enumerate}[(i)] \item We can translate the solution along the $y$-axis, i.e., $(x, y + y_0, \alpha)$ is a solution for any $y_0 \in \mathbb{R}$. \item Reflection of a solution curve across a line $y = y_0$ is a solution curve with the opposite sign of $H$, that is, $(x, 2y_0 - y, -\alpha)$ is a solution for $-H$. \item Reversal of the parameter of a solution is a solution with the opposite sign of $H$, that is, $\bigl(x(2s_0-s), y(2s_0-s), \alpha(2s_0-s) + \pi\bigr)$ is a solution for $-H$. \item If $(x,y, \alpha)$ is defined for $s \in ]s_0 - \varepsilon, s_0 + \varepsilon[$ with $x'(s_0) = 0$ then the solution can be continued by reflection across $y = y(s_0)$. \end{enumerate} Thanks to the above properties we can always consider a solution $(x, y, \alpha)$ with positive mean curvature and initial condition $(x_0, 0, \alpha_0)$ at $s = 0$. \end{remark} \begin{lemma}\label{lm:restricciones-E-Berger} Let $\bigl(x(s), y(s), \alpha(s) \bigr)$ be a solution of~\eqref{eq:sistema} with energy $E$.
Then the energy $E$ satisfies \begin{equation}\label{eq:restricciones-energia-berger} -H - \frac{1}{2}\sqrt{4H^2 + \kappa} \leq 2E \leq -H + \frac{1}{2}\sqrt{4H^2 + \kappa} \end{equation} and $x(s) \in [x_1, x_2]$, where $x_j = \arcsin \sqrt{t_j}$, $j = 1, 2$, \[ t_1 = \frac{\kappa - 8HE - \sqrt{\kappa^2 - 16\kappa E(H+E)}}{2(4H^2 + \kappa)}, \quad t_2 = \frac{\kappa - 8HE + \sqrt{\kappa^2 - 16\kappa E(H+E)}}{2(4H^2 + \kappa)}. \] Also $x'(s) = \cos \alpha(s) = 0$ if, and only if, $x(s)$ is exactly $x_1$ or $x_2$. \end{lemma} \begin{proof} First, from~\eqref{eq:integral-primera} we obtain \begin{equation}\label{eq:sin-alpha} \sin \alpha = \frac{1}{\rho} (E + H \sin^2 x) \sqrt{1 + \frac{4\tau^2}{\kappa} \tan^2 x }, \quad \cos \alpha = \frac{1}{\rho}\tau \sin x \sqrt{1 - \frac{4}{\kappa} \frac{(E + H \sin^2 x)^2}{\sin^2 x \cos^2 x}} \end{equation} where $\rho = \sqrt{\tau^2 \sin^2 x + \left(1 - \frac{4\tau^2}{\kappa} \right) (E + H \sin^2 x)^2}$. Then $\frac{4}{\kappa}(E + H \sin^2 x)^2 -\sin^2 x \cos^2 x \leq 0$, that is, $p(\sin^2 x) \leq 0$, where $p$ is the polynomial \[ p(t) = \left(1 + \frac{4H^2}{\kappa}\right) t^2 - \left(1 - \frac{8H}{\kappa} E \right) t + \frac{4}{\kappa}E^2. \] As $p(t)$ must be non-positive somewhere, the discriminant of this parabola must be non-negative, that is, \begin{equation}\label{eq:inequality-for-E-H} \left(1 - \frac{8H}{\kappa}E \right)^2 - \frac{16}{\kappa}E^2 \left(1 + \frac{4H^2}{\kappa} \right) \geq 0, \end{equation} and $\sin^2 x(s) \in [t_1, t_2]$, where $t_1$ and $t_2$ are the roots of $p$. Finally, as $\cos x(s) \geq 0$ because we chose the curve $\gamma$ on the upper half sphere, it must be $x(s) \in [0, \pi/2]$, so $x(s) \in [x_1, x_2]$ where $x_j = \arcsin \sqrt{t_j}$, $j = 1, 2$. \end{proof} Now we describe the complete solutions of~\eqref{eq:sistema} in terms of $H$ and $E$.
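The expressions in~\eqref{eq:sin-alpha} can be sanity-checked numerically: for admissible data they must satisfy $\sin^2\alpha + \cos^2\alpha = 1$ and reproduce the energy $E$ through~\eqref{eq:integral-primera}, with $C$ given by the explicit formula for the unit normal. A minimal sketch of our own (all parameter values are arbitrary admissible choices):

```python
import math

kappa, tau, H, E = 4.0, 0.7, 0.3, 0.1   # arbitrary admissible data

for t in (0.2, 0.4, 0.6):               # t = sin^2 x, strictly between t_1 and t_2
    s, c = math.sqrt(t), math.sqrt(1 - t)
    F = E + H * t                        # E + H sin^2 x
    rho = math.sqrt(tau**2 * t + (1 - 4 * tau**2 / kappa) * F**2)
    sin_a = F * math.sqrt(1 + (4 * tau**2 / kappa) * (t / (1 - t))) / rho
    cos_a = tau * s * math.sqrt(1 - (4 / kappa) * F**2 / (t * (1 - t))) / rho
    assert abs(sin_a**2 + cos_a**2 - 1) < 1e-12
    # C from the explicit formula for the unit normal
    C = c * cos_a / math.sqrt(cos_a**2 * (c**2 + 4 * tau**2 / kappa * t)
                              + 4 * tau**2 / kappa * sin_a**2)
    # first integral (eq:integral-primera): tau*C*sin(x)*tan(alpha) - H*sin^2 x = E
    assert abs(tau * C * s * (sin_a / cos_a) - H * t - E) < 1e-12
```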
\begin{theorem}\label{tm:clasificacion-berger} Let $\Sigma$ be a complete, connected, rotationally invariant surface with constant mean curvature $H$ and energy $E$ in $\mathbb{S}^3_b(\kappa, \tau)$. Then $\Sigma$ must be of one of the following types: \begin{enumerate}[(i)] \item If $E = 0$ then $\Sigma$ is a $2$-sphere (possibly immersed, see Corollary~\ref{cor:sphere-CMC-berger}). Moreover, if in addition $H = 0$ then $\Sigma$ is the great $2$-sphere $\{(z, w) \in \mathbb{S}^3:\, \mathrm{Im}(z) = 0\}$, which is always embedded. \item If $E = \frac{1}{4}(-2H \pm \sqrt{4H^2 + \kappa})$ then $\Sigma$ is the Clifford torus with radius $r = \sqrt{\frac{1}{2} \pm \frac{H}{\sqrt{4H^2 + \kappa}}}$, that is, $\mathcal{T}_H = \{(z, w) \in \mathbb{S}^3:\, \abs{z} = r, \, \abs{w}^2 = 1- r^2\}$. \item If $E > 0$ or $E < -H$ (excluding the values of case (ii)) then $\Sigma$ is an unduloid-type surface (see figure~\ref{fig:unduloide}). \item If $-H < E < 0$ then $\Sigma$ is a nodoid-type surface (see figure~\ref{fig:nodoide}). \item If $E = -H$ then $\Sigma$ is generated by a union of curves meeting at the north pole (see figure~\ref{fig:circles}). \end{enumerate} Surfaces of type (iii)--(v) are compact if and only if \begin{equation} T(H,E) = 2\int_{x_1}^{x_2} \frac{(E+H \sin^2 x) \sqrt{1 + \frac{4\tau^2}{\kappa} \tan^2 x}}{\tau \sqrt{\sin^2 x \cos^2 x - \frac{4}{\kappa} (E+H\sin^2 x)^2}}\,\mathrm{d}x \label{eq:periodo-berger} \end{equation} is a rational multiple of $\pi$ (see Lemma~\ref{lm:restricciones-E-Berger} for the definition of $x_j$, $j = 1, 2$). Moreover, surfaces of type (iii) are compact and embedded if and only if $T = 2\pi/k$ with $k \in \mathbb{Z}$. \end{theorem} \begin{remark}\label{rm:clasificacion-berger}~ \begin{enumerate} \item In the round sphere case this study was carried out by Hsiang~\cite[Theorem 3]{H}. However, he did not distinguish, in terms of the energy, between the nodoid and the unduloid case.
The subriemannian case, which we can think of as fixing $\kappa = 4$ and letting $\tau \rightarrow \infty$, was studied by Hurtado and Rosales~\cite[Theorem 6.4]{HR}. \item As $T(H,E)$ is a non-constant continuous function on a non-empty subset of $\mathbb{R}^2$ (see~\eqref{eq:restricciones-energia-berger} for the restrictions on $E$), there exist values of $H$ and $E$ such that $T(H,E)$ is a rational multiple of $\pi$, and so the corresponding surfaces of type (iii)-(v) are compact. Among all these compact examples, the minimal ones only appear in (iii) and, from~\eqref{eq:restricciones-energia-berger}, for $0 < E^2 \leq \kappa/16$. For $\kappa = 4$ and $\tau = 0.4$, figure~\ref{fig:periodo-unduloide-minimal} shows that there exists a value of $E$ such that $T(0,E) = 2\pi$, that is, the corresponding surface is embedded and compact, so it is an embedded minimal torus which is not a Clifford torus. This surface is a counterexample to the Lawson conjecture in the Berger sphere $\mathbb{S}^3_b(4,0.4)$. The author thinks that there exists a value $\tau_0 \approx 0.57$ such that for $\tau \leq \tau_0$ there are always examples of compact embedded minimal tori (unduloid-type surfaces), whereas for $\tau > \tau_0$ there are not. These surfaces would be counterexamples to the Lawson conjecture in the Berger spheres with $\kappa = 4$ and $\tau \leq \tau_0$. \begin{figure} \caption{The period $T(0,E)$ (see~\eqref{eq:periodo-berger})} \label{fig:periodo-unduloide-minimal} \end{figure} \end{enumerate} \end{remark} \begin{proof} First we obtain several useful formulae.
Substituting~\eqref{eq:sin-alpha} in the third equation of~\eqref{eq:sistema} we get \begin{equation}\label{eq:alpha'} \alpha'(s) = \frac{\tau^2 \tan x(s) q(\sin^2 x(s))}{\cos x(s) \sqrt{\cos^2 x(s) + \frac{4\tau^2}{\kappa}\sin^2 x(s)}\left[\tau^2 \sin^2 x(s) + \left(1 - \frac{4\tau^2}{\kappa} \right)(E + H \sin^2 x(s))^2\right]^{3/2}} \end{equation} where $q(t)$ is the polynomial given by \begin{equation}\label{eq:alpha'-polinomio} \begin{split} q(t) &= \frac{H}{\kappa^2}(\kappa - 4\tau^2)(4H^2 + \kappa)t^3 + \frac{1}{\kappa}(\kappa - 4\tau^2) \left( \frac{12EH^2}{\kappa} - (E+2H) \right)t^2 + \\ &\quad +\left(\frac{12 H E^2\left(\kappa -4 \tau ^2\right)}{\kappa^2}+2 E+H \right) t + \frac{4 E^3 \left(\kappa -4 \tau ^2\right)}{\kappa ^2}-E. \end{split} \end{equation} \noindent \textbf{(i)} First, if $H = 0$ then by~\eqref{eq:sin-alpha} we get that $\sin \alpha = 0$, i.e., $x(s) = s + x_0$ and $y(s) = 0$. Hence the surface $\Sigma$ is the great $2$-sphere $\{(z, w) \in \mathbb{S}^3_b(\kappa, \tau):\, \mathrm{Im}(z) = 0\}$. Second, if $H > 0$ then, by Lemma~\ref{lm:restricciones-E-Berger}, $\sin^2 x(s) \in [0, \kappa/(4H^2 + \kappa)]$, i.e., $\tan^2 x(s) \in [0, \kappa/4H^2]$, and we may suppose that $\tan x(s) \in [0, \sqrt{\kappa}/2H]$. By~\eqref{eq:sin-alpha}, $\cos \alpha > 0$ in that interval, so we can express $y$ as a function of $x$. Taking into account~\eqref{eq:sistema} and~\eqref{eq:sin-alpha}, an easy computation shows that \[ y'(x) = \frac{H}{\tau} \tan x \frac{\sqrt{1 + \frac{4\tau^2}{\kappa} \tan^2 x}}{\sqrt{1 - \frac{4H^2}{\kappa}\tan^2x}}, \quad x \in\, \left]0, \arctan \frac{\sqrt{\kappa}}{2H}\right[. \] We can integrate the above equation by means of the change of variable \[ u = \sqrt{1 - \frac{4H^2}{\kappa}\tan^2 x}\left/\sqrt{1 + \frac{4\tau^2}{\kappa}\tan^2 x}\right..
\] Finally we get \begin{equation}\label{eq:def-y-sphere} y(x) = \begin{cases} -\arctan \left(\frac{\tau}{H} \lambda(x)\right) + \frac{H}{\tau} \frac{\sqrt{4\tau^2 - \kappa}}{\sqrt{4H^2+\kappa}}\arctan \left(\frac{\sqrt{4\tau^2 - \kappa}}{\sqrt{4H^2+\kappa}} \lambda(x)\right) & \text{if } \kappa - 4\tau^2 < 0, \\ \\ -\arctan \left(\frac{\tau}{H} \lambda(x)\right) - \frac{H}{\tau} \frac{\sqrt{\kappa - 4\tau^2}}{\sqrt{4H^2+\kappa}}\arctanh \left(\frac{\sqrt{\kappa - 4\tau^2}}{\sqrt{4H^2+\kappa}} \lambda(x)\right) & \text{if } \kappa - 4\tau^2 > 0, \\ \end{cases} \end{equation} where $\lambda(x) = \left.\sqrt{1 - \frac{4H^2}{\kappa}\tan^2 x}\right/\sqrt{1 + \frac{4\tau^2}{\kappa}\tan^2 x}$. We note that $y\bigl(\arctan(\sqrt{\kappa}/2H)\bigr)=0$, where the curve meets the axis $\ell$ orthogonally, and that $y$ is a strictly increasing function of $x$ for $x \in\, ]0, \arctan(\sqrt{\kappa}/2H)[$. Hence $y$ reaches its minimum at $x = 0$. The function $y$ only gives us half of the sphere, but we can obtain the other half by reflecting the solution across the line $x = 0$. Then it is easy to see that the sphere is embedded if, and only if, $y(0) > -\pi$. Otherwise the sphere is only immersed (see figure~\ref{fig:immersed-cmc-sphere}). \begin{figure} \caption{Non-embedded region of CMC spheres (we fix $\kappa = 4$)} \label{fig:immersed-cmc-sphere} \end{figure} \noindent \textbf{(ii)} If $E = \frac{1}{4}(-2H \pm \sqrt{4H^2 + \kappa})$, the previous lemma says that $t_1 = t_2$, and so $x(s)$ must be the constant $x_1 = x_2 = \arcsin \sqrt{\frac{1}{2}(1 \mp \frac{2H}{\sqrt{4H^2 + \kappa}})}$. We can completely integrate the solution to obtain $\Phi(s, t) = (r e^{is/r}, \sqrt{1-r^2} e^{it})$, where \[ r = \sqrt{\frac{1}{2} \pm \frac{H}{\sqrt{4H^2 + \kappa}}}, \] i.e., $\Sigma$ is a Clifford torus.
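As a quick cross-check of case (ii) against Lemma~\ref{lm:restricciones-E-Berger} (an illustrative sketch of our own, with arbitrary sample values of $H$ and $\kappa$): for $E = \frac14(-2H + \sqrt{4H^2+\kappa})$ one has $E(H+E) = \kappa/16$, so the discriminant $\kappa^2 - 16\kappa E(H+E)$ vanishes, $t_1 = t_2 = \frac12(1 - \frac{2H}{\sqrt{4H^2+\kappa}})$, and $r^2 = 1 - t_1$:

```python
import math

H, kappa = 0.5, 4.0                      # arbitrary sample values
rt = math.sqrt(4 * H**2 + kappa)
E = 0.25 * (-2 * H + rt)                 # energy of the Clifford torus (case (ii), sign +)

disc = kappa**2 - 16 * kappa * E * (H + E)
assert abs(disc) < 1e-9                  # double root: t_1 = t_2

t = (kappa - 8 * H * E) / (2 * (4 * H**2 + kappa))   # t_1 = t_2 from the lemma
assert abs(t - 0.5 * (1 - 2 * H / rt)) < 1e-12
r2 = 1 - t                               # cos^2 x = |z|^2 = r^2 on the torus
assert abs(r2 - (0.5 + H / rt)) < 1e-12
```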
\noindent\textbf{[(iii), case $E>0$]}. We suppose now that equality in~\eqref{eq:inequality-for-E-H} does not hold and that $E > 0$. We consider the maximal solution of~\eqref{eq:sistema} with initial condition $(x_1,0,\pi/2)$ (we will see later that this is not a restriction) and we may suppose, by the maximality condition, that there exists $s_2$ such that $\alpha(s_2) = \pi/2$. We analyze the sign of $\alpha'$ using~\eqref{eq:alpha'}. It is sufficient to study the sign of the polynomial $q$ in~\eqref{eq:alpha'-polinomio} between $t_1$ and $t_2$ (see Lemma~\ref{lm:restricciones-E-Berger}). A straightforward computation shows that $q$ is strictly increasing and that $q(t_1)q(t_2) \leq 0$. Then there exists a unique $s_1$ such that $\alpha'(s_1) = 0$. So $\alpha$ is a strictly increasing function on $]0, s_1[$, strictly decreasing on $]s_1, s_2[$, and $s_1$ is an absolute maximum. Now, as $\sin \alpha > 0$, we can express $x$ as a function of $y$; then from~\eqref{eq:sistema} \begin{equation}\label{eq:derivative-x(y)} \frac{\mathrm{d}\, x}{\mathrm{d}\, y} = \cos x \cot \alpha > 0, \end{equation} so $x(y)$ is a strictly increasing function and, because $\cos \alpha(s_2) = 0$, it must be $x(y(s_2)) = x_2$. In particular the solution $x$ takes all values in the interval $[x_1,x_2]$, so, by the uniqueness of solutions, every maximal solution with initial condition $(x_0, 0, \alpha_0)$, with $x_0$ necessarily in $[x_1,x_2]$, must be a reparametrization of this one. Finally, taking into account the above formula, the third equation of~\eqref{eq:sistema} and~\eqref{eq:sin-alpha}, we get \begin{equation}\label{eq:second-derivative-x(y)} \begin{split} \frac{\mathrm{d}^2\, x}{\mathrm{d}\, y^2} = &\frac{-\tau^2 \sin x \cos x}{\left(\cos^2 x + \frac{4\tau^2}{\kappa} \sin^2 x \right)^2 (E + H \sin^2 x)^3} \left[\cos^2 x\, q(\sin^2 x) +\right.
\\ &\quad +\left.\left(\cos^2 x + \frac{4\tau^2}{\kappa}\mathbb{S}in^2 x \right) (E + H \mathbb{S}in^2 x)\left(\cos^2 x \mathbb{S}in^2 x - \frac{4}{\kappa}(E + H \mathbb{S}in^2 x)^2 \right) \right] \end{split} \end{equation} It is straightforwad to check that $\mathrm{d}^2x/\mathrm{d}y^2$ has only one zero at $y_1$ in $]0, y(s_2)[$ and that $x$ is convex in $]0, y_1[$ and concave in $]y_1,y_2[$. By successive reflextions across the vertical lines on which $x(y)$ reaches its critial points, we get the full solution which is similar to an Euclidean unduloid (se figure~\ref{fig:unduloide}). The period of this unduloid is given by \begin{equation}\label{eq:periodo} T = 2y(s_2) = 2\int_{x_1}^{x_2} y'(x) \,\mathrm{d}x = 2\int_{x_1}^{x_2} \frac{(E+H \mathbb{S}in^2 x) \mathbb{S}qrt{1 + \frac{4\tau^2}{\kappa} \tan^2 x}}{\tau \mathbb{S}qrt{\mathbb{S}in^2 x \cos^2 x - \frac{4}{\kappa} (E+H\mathbb{S}in^2 x)^2}} \end{equation} Hence if~\eqref{eq:periodo} is a rational multiple of $\pi$ then the surface is compact. Moreover, the surface is embedded if and only if $T = 2\pi/k$ for $k \in \mathbb{N}$. \begin{figure} \caption{Curve $\gamma(s)$ for $E > 0$\label{fig:unduloide} \label{fig:unduloide} \caption{Curve $\gamma(s)$ for $E < 0$ and $E \neq -H$\label{fig:nodoide} \label{fig:nodoide} \end{figure} \noindent\textbf{[(iii), case $E<-H$]}. In this case $\mathbb{S}in \alpha < 0$ so we can express $x$ as a function of $y$ and a similar reasoning as in the previous case is sufficient to check that the surface must be a unduloid (see figure~\ref{fig:unduloide}). \noindent\textbf{(iv)} If $-H < E < 0$ we consider the maximal solution with initial condition $(x_2,0,\pi/2)$. We note that in this case $\mathbb{S}in \alpha$ may change its sign: $\mathbb{S}in \alpha < 0$ if $\mathbb{S}in^2 x \in [t_1, -E/H[$ and $\mathbb{S}in \alpha > 0$ if $\mathbb{S}in^2 x \in ]-E/H, t_2]$. By~\eqref{eq:alpha'} $\alpha' > 0$ so $\alpha$ is strictly increasing. 
Let $0 < s_1 < s_2$ be such that $\alpha(s_1) = \pi$ and $\alpha(s_2) = 3\pi/2$ (and so $x(s_2) = x_1$). Then $\alpha \in\, ]\pi/2,\pi[$ for $s \in\, ]0, s_1[$ and $\alpha \in\, ]\pi, 3\pi/2[$ for $s \in\, ]s_1,s_2[$. Now we can express the solution $\gamma$ on $]0,s_2[$ as two graphs of a function $x(y)$ meeting at the line $y = y(s_1)$. First, using~\eqref{eq:derivative-x(y)}, we get that $x(y)$ is strictly decreasing on $]0,y(s_1)[$ and strictly increasing on $]y(s_2), y(s_1)[$. Second, taking into account~\eqref{eq:second-derivative-x(y)}, $x(y)$ is strictly concave on $]0, y(s_1)[$ and strictly convex on $]y(s_2), y(s_1)[$. As $y = 0$ and $y = y(s_2)$ are lines of symmetry, because $x'(0) = x'(s_2) = 0$, we can reflect $\gamma$ successively to obtain the complete solution, which is similar to a Euclidean nodoid (see figure~\ref{fig:nodoide}). As in the unduloid case, the solution produces a compact surface if~\eqref{eq:periodo} is a rational multiple of $\pi$. In this case the surface is always immersed.
\noindent\textbf{(v)} Finally we study the case $E = -H \neq 0$. Now $\sin x \in [2H/\sqrt{4H^2 + \kappa}, 1]$, so the curve may approach the north pole $p_N$ of the $2$-sphere. We consider the maximal solution with initial condition $(\arcsin(2H/\sqrt{4H^2 + \kappa}), 0, 3\pi/2)$ and define $s_1 > 0$ as the first number such that $\alpha(s_1) = 2\pi$, that is, the first time the curve $\gamma$ meets the north pole. We can express $x$ as a function of $y$ on every connected component of $\gamma\setminus \{p_N\}$ because $\sin \alpha < 0$ away from $p_N$. Using~\eqref{eq:alpha'} we get that $\alpha' > 0$ and so $\alpha \in [3\pi/2, 2\pi]$. Then, taking into account~\eqref{eq:derivative-x(y)} and~\eqref{eq:second-derivative-x(y)}, we obtain that $x(y)$ is strictly decreasing and convex in $]y(s_1), 0[$. We continue the generating curve to obtain another branch of the graph of the function $x(y)$ meeting the north pole.
We observe now that
\[ \hat{x}(s) = x(2s_1 - s), \quad \hat{y}(s) = 2y(s_1) + \pi - y(2s_1-s), \quad \hat{\alpha}(s) = 3\pi - \alpha(2s_1 - s), \quad s \in [s_1, 2s_1], \]
is a solution of~\eqref{eq:sistema} with energy $E = -H$. That is, the other branch of the solution is just the reflection of $x(y)$ with respect to the line $y = y(s_1) + \pi/2$. By successive reflections across the critical points of $x$, we obtain the full solution (see figure~\ref{fig:circles}).
\begin{figure}
\caption{Graphic of the curve $\gamma(s) = (x(s), y(s))$ for $E = -H$}
\label{fig:circles}
\end{figure}
If $y(s_1) = -\pi/2$ then the reflection line corresponding to the critical point coincides with the reflection line of the branch, and so the solution is embedded. Using the following expression for $y(s_1)$, we have checked numerically that for $\tau$ small and suitable $H$ we get $y(s_1) = -\pi/2$, so that the solution is an embedded torus:
\[ y(s_1) = \int_{\arcsin\frac{2H}{\sqrt{4H^2 + \kappa}}}^{\pi/2} y'(x) \,\mathrm{d}x = \int_{\arcsin\frac{2H}{\sqrt{4H^2 + \kappa}}}^{\pi/2} \frac{(E+H \sin^2 x) \sqrt{1 + \frac{4\tau^2}{\kappa} \tan^2 x}}{\tau \sqrt{\sin^2 x \cos^2 x - \frac{4}{\kappa} (E+H\sin^2 x)^2}} \,\mathrm{d}x. \]
Moreover, $\gamma$ is closed (and so $\Sigma$ is compact) if, and only if, $y(s_1)$ is a rational multiple of $2\pi$. \end{proof}
\begin{corollary}\label{cor:sphere-CMC-berger} Let $\Phi:]-a,a[ \times ]-\pi, \pi[ \rightarrow \mathbb{S}^3_b(\kappa, \tau)$ be the immersion given by:
\[ \Phi(x, t) = \begin{cases} \left(\cos(x + a) e^{i y(x+a)}, \sin(x + a)e^{i t} \right), & \text{if } x < 0, \\ \left(\cos(a - x) e^{-i y(a - x)}, \sin(a - x)e^{i t} \right), & \text{if } x\geq 0, \end{cases} \]
where $a = \arctan(\sqrt{\kappa}/2H)$ and $y$ is the function defined in~\eqref{eq:def-y-sphere}. Then $\Phi$ defines an immersion of a sphere with constant mean curvature $H$.
Moreover, $\Phi$ is an embedding if and only if $y(0) > -\pi$ (see figure~\ref{fig:immersed-cmc-sphere}). \end{corollary}
\section{The special linear group $\mathrm{Sl}(2,\mathbb{R})$}
We are going to study the constant mean curvature surfaces invariant by a $1$-parameter group of isometries in $\mathrm{Sl}(2,\mathbb{R})$, that is, in the group of real matrices of order $2$ with determinant $1$. It is more convenient to give another description of this group as $\mathrm{Sl}(2,\mathbb{R}) = \{(z, w) \in \mathbb{C}^2:\, \abs{z}^2 - \abs{w}^2 = 1\}$. It is easy to check that the transformation
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \frac{1}{2}\Bigl( (a + d) + i (b - c), (b + c) + i(a -d)\Bigr), \quad ad-bc=1, \]
is a diffeomorphism. We endow $\mathrm{Sl}(2,\mathbb{R})$ with the metric $g$ given by
\[ g(E^i, E^j) = \delta_{ij}\frac{4}{-\kappa}, \quad g(V,V) = \frac{16\tau^2}{\kappa^2}, \quad g(V,E^j) = 0, \quad i,j = 1, 2, \]
where $\kappa$ and $\tau$ are real numbers such that $\kappa < 0$ and $\tau \neq 0$, and $\{E^1, E^2, V\}$ is a global frame on $T\mathrm{Sl}(2,\mathbb{R})$ defined by
\[ E^1_{(z, w)} = (\bar{w}, \bar{z}), \quad E^2_{(z, w)} = (i\bar{w}, i\bar{z}), \quad V_{(z, w)} = (iz, iw). \]
Then $(\mathrm{Sl}(2,\mathbb{R}), g)$ is a model for a homogeneous space $\mathrm{E}(\kappa, \tau)$ with $\kappa < 0$. $\mathrm{Sl}(2,\mathbb{R})$ is a fibration over $\mathbb{H}^2(\kappa)$ with fibers generated by the unit Killing field $\xi = -\frac{\kappa}{4\tau}V$. We can identify the isometry group of $\mathrm{Sl}(2,\mathbb{R})$ with $\mathrm{U}_1(2)$. The connection associated to $g$ is given by \begin{align*} \nabla_{E_1} E_1 &= 0, &\nabla_{E_1} E_2 &= V, &\nabla_{E_1} V &= \frac{4\tau^2}{\kappa}E_2, \\ \nabla_{E_2} E_1 &= -V, &\nabla_{E_2} E_2 &= 0, &\nabla_{E_2} V &= - \frac{4\tau^2}{\kappa} E_1, \\ \nabla_{V} E_1 &= \left(\frac{4\tau^2}{\kappa} - 2\right)E_2, &\nabla_{V} E_2 &= -\left(\frac{4\tau^2}{\kappa} - 2\right)E_1, &\nabla_{V} V &= 0.
\end{align*}
As in the Berger sphere case, we concentrate our attention on the $1$-parameter groups of isometries which fix a curve, which we call the axis. We define
\[ \mathrm{Rot} = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & e^{it} \end{pmatrix}:\, t \in \mathbb{R} \right\}. \]
Then $\mathrm{Rot}$ fixes the curve $\ell = \{(z, 0)\in \mathrm{Sl}(2,\mathbb{R})\}$, which is a circle, and we can identify $\mathrm{Sl}(2,\mathbb{R})/\mathrm{Rot}$ with $O =\{(z, a) \in \mathbb{C} \times \mathbb{R}:\, \abs{z}^2 -a^2 = 1\}$. Let $\Phi: \Sigma \rightarrow \mathrm{Sl}(2,\mathbb{R})$ be an immersion of an oriented constant mean curvature surface $\Sigma$ invariant by $\mathrm{Rot}$. Then $\Sigma = \pi^{-1}(\gamma)$ for some smooth curve $\gamma \subset O$, where $\pi: \mathrm{Sl}(2,\mathbb{R}) \rightarrow O$ is the projection. Writing $\gamma(s) = (\cosh x(s) e^{iy(s)}, \sinh x(s))$, we may suppose that
\[ x'(s)^2 + y'(s)^2 \cosh^2 x(s) = 1, \]
and we will call $\alpha$ the function such that $x'(s) = \cos \alpha(s)$. Then we can write down the immersion as $\Phi(s, t) = (\cosh x(s) e^{iy(s)}, \sinh x(s)e^{it})$.
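The identification of $\mathrm{Sl}(2,\mathbb{R})$ with the quadric $\{\abs{z}^2 - \abs{w}^2 = 1\}$ used above rests on the identity $\abs{z}^2 - \abs{w}^2 = ad - bc$, which can be checked directly. A quick numerical sketch (the helper name `to_quadric` is ours):

```python
import random

def to_quadric(a, b, c, d):
    """Map a matrix in Sl(2,R) to a pair (z, w) on the quadric |z|^2 - |w|^2 = 1."""
    z = complex(a + d, b - c) / 2
    w = complex(b + c, a - d) / 2
    return z, w

random.seed(1)
for _ in range(100):
    a, b, c = (random.uniform(-3, 3) for _ in range(3))
    if abs(a) < 0.5:
        a = 1.0                      # keep a away from 0 so d is well defined
    d = (1 + b * c) / a              # enforce det = ad - bc = 1
    z, w = to_quadric(a, b, c, d)
    assert abs(abs(z)**2 - abs(w)**2 - 1) < 1e-9
```

Indeed, expanding the squares gives $\abs{z}^2 - \abs{w}^2 = \frac{1}{4}\bigl[(a+d)^2 + (b-c)^2 - (b+c)^2 - (a-d)^2\bigr] = ad - bc$.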
A unit normal vector along $\Phi$ is given by
\[ N = C \left\{ -\tau \mathrm{Re}\left[\left(\frac{\tan \alpha}{\cosh x} - i\tanh x \right)e^{i(t+y)}\right]E^1_\Phi - \tau \mathrm{Im}\left[\left(\frac{\tan \alpha}{\cosh x} - i\tanh x \right)e^{i(t+y)}\right]E^2_\Phi - \frac{\kappa}{4\tau}V_\Phi\right\} \]
where
\[ C(s) = \frac{\cos \alpha(s) \cosh x(s)}{\sqrt{\cos^2 \alpha(s) \left[\cosh^2 x(s) - \frac{4\tau^2}{\kappa} \sinh^2 x(s)\right] - \frac{4\tau^2}{\kappa}\sin^2 \alpha(s)}}. \]
Now, by a straightforward computation, we get the mean curvature $H$ of $\Sigma$ with respect to the normal defined above:
\[ \begin{split} \frac{2\cos^3 \alpha \cosh^3 x}{\tau C^3}H &= \left( \cosh^2 x - \frac{4\tau^2}{\kappa}\sinh^2 x\right) \alpha' + \\ &\quad + \frac{\sin \alpha}{\tanh x} \left[ \left(1 - \frac{4\tau^2}{\kappa} \right) \cos^2 \alpha \cosh^2 x + \frac{4\tau^2}{\kappa}(2\cos^2 \alpha - 1)(1 + \tanh^2 x)\right] \end{split} \]
Hence we obtain the following result:
\begin{lemma}\label{lm:sistema-Sl(2,R)} The generating curve $\gamma(s) = (\cosh x(s) e^{iy(s)}, \sinh x(s))$ of a surface $\Sigma$ invariant by the group $\mathrm{Rot}$ satisfies the following system of ordinary differential equations:
\begin{equation}\label{eq:sistema-Sl(2,R)} \left\{ \begin{aligned} x' &= \cos \alpha, \\ y' &= \dfrac{\sin \alpha}{\cosh x}, \\ \alpha' &= \frac{1}{\cosh^2 x - \frac{4\tau^2}{\kappa} \sinh^2 x}\left\{ \frac{2\cos^3 \alpha \cosh^3 x}{\tau C^3}H \right.\\ &\left.\qquad -\frac{\sin \alpha}{\tanh x}\left[ \left(1 - \frac{4\tau^2}{\kappa} \right) \cos^2 \alpha \cosh^2 x + \frac{4\tau^2}{\kappa}(2\cos^2 \alpha - 1)(1 + \tanh^2 x)\right]\right\} \end{aligned} \right. \end{equation}
where $H$ is the mean curvature of $\Sigma$ with respect to the normal defined before.
Moreover, if $H$ is constant then the function
\begin{equation}\label{eq:energy-Sl(2,R)} \tau C \sinh x \tan \alpha - H \sinh^2 x \end{equation}
is a constant $E$ that we will call the \emph{energy} of the solution. \end{lemma}
Remark~\ref{rmk:properties-system-sphere} is also true for this system, so we can always consider a solution $(x, y, \alpha)$ with positive mean curvature vector and initial condition $(x_0, 0, \alpha_0)$.
\begin{lemma}\label{lm:restricciones-E-Sl(2,R)} Let $(x(s), y(s), \alpha(s))$ be a solution of~\eqref{eq:sistema-Sl(2,R)} with energy $E$. Then:
\begin{enumerate}[(i)]
\item If $4H^2 + \kappa > 0$ then it must be $4E < 2H - \sqrt{4H^2 + \kappa}$. Also $\sinh^2 x(s) \in [t_1, t_2]$, where
\[ t_1 = \frac{-8HE - \kappa - \sqrt{16\kappa E(H-E) + \kappa^2}}{2(4H^2 + \kappa)}, \quad t_2 = \frac{-8HE - \kappa + \sqrt{16\kappa E(H-E) + \kappa^2}}{2(4H^2 + \kappa)}. \]
Moreover, $x'(s) = \cos \alpha(s) = 0$ if and only if $\sinh^2 x(s)$ is exactly $t_1$ or $t_2$.
\item If $4H^2 + \kappa < 0$ then $\sinh^2 x(s) \in [t_1, +\infty[$. Moreover, $x'(s) = \cos \alpha(s) = 0$ if and only if $\sinh^2 x(s) = t_1$.
\item If $4H^2 + \kappa = 0$ then $E < H/2$ and $\sinh^2 x(s) \in [E^2/(H(H-2E)), +\infty[$. Moreover, $x'(s) = \cos \alpha(s) = 0$ if and only if $\sinh^2 x(s) = E^2/(H(H-2E))$.
\end{enumerate}
\end{lemma}
\begin{proof} Using~\eqref{eq:energy-Sl(2,R)} we get that
\begin{equation}\label{eq:sin-alpha-sl} \sin \alpha = \frac{1}{\mu}(E+H\sinh^2 x) \sqrt{1 - \frac{4\tau^2}{\kappa}\tanh^2 x}, \quad \cos \alpha = \frac{1}{\mu}\tau \sinh x \sqrt{1 + \frac{4}{\kappa} \frac{(E+H\sinh^2 x)^2}{\cosh^2 x \sinh^2 x}}, \end{equation}
where $\mu^2 = \tau^2 \sinh^2 x + (E+H\sinh^2 x)^2 \left[1 - \frac{4\tau^2}{\kappa}\left( \tanh^2 x - \frac{1}{\cosh^2 x}\right)\right]$.
From the above formula for $\cos \alpha$ we deduce that $p(\sinh^2 x) \geq 0$, where
\[ p(t) = \left(1 + \frac{4H^2}{\kappa} \right)t^2 + \left(1 + \frac{8HE}{\kappa}\right)t + \frac{4E^2}{\kappa}. \]
The result follows from the study of the sign of this polynomial for $t \geq 0$. \end{proof}
Now we describe the complete solutions of~\eqref{eq:sistema-Sl(2,R)} in terms of the mean curvature $H$ and the energy $E$.
\begin{theorem}\label{tm:clasificacion-Sl(2,R)} Let $\Sigma$ be a complete, connected, rotationally invariant surface with constant mean curvature $H$ and energy $E$ in $\mathrm{Sl}(2,\mathbb{R})$. Then $\Sigma$ must be one of the following types:
\begin{enumerate}
\item[1.] If $4H^2 + \kappa > 0$ then
\begin{enumerate}[(a)]
\item If $E = 0$ then $\Sigma$ is a $2$-sphere. It is not always embedded (see figure~\ref{fig:sl-embebidas}).
\item If $E > 0$ then $\Sigma$ is an unduloid-type surface.
\item If $E < 0$ then $\Sigma$ is a nodoid-type surface, which is always immersed.
\end{enumerate}
Moreover, surfaces of type 1.(b) and 1.(c) are compact if and only if
\[ T(H, E) = 2\int_{x_1}^{x_2} \frac{(E+H\sinh^2 x)\sqrt{1-\frac{4\tau^2}{\kappa}\tanh^2 x}}{\tau \sqrt{\sinh^2 x \cosh^2 x + \frac{4}{\kappa}(E+H\sinh^2 x)^2}} \,\mathrm{d}x \]
is a rational multiple of $\pi$, where $x_j = \arcsinh \sqrt{t_j}$, $j = 1, 2$ (see Lemma~\ref{lm:restricciones-E-Sl(2,R)}.(i)). Moreover, surfaces of type 1.(b) are compact and embedded if and only if $T = 2\pi/k$ with $k \in \mathbb{Z}$.
\item[2.] If $4H^2 + \kappa \leq 0$ then $\Sigma$ is immersed and non-compact. Moreover, the curve $\gamma$ which generates $\Sigma$ is of the type of figure~\ref{fig:esfera-open} when $E = 0$, figure~\ref{fig:unduloide-open} when $E > 0$ and figure~\ref{fig:nodoide-open} when $E < 0$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{rm:clasificacion-Sl(2,R)}
\begin{enumerate}
\item This theorem was first stated by Gorodsky in~\cite{G} for $\kappa = -4$ and $\tau = 1$. However, he did not take into account that for $4H^2 +\kappa \leq 0$ there are no constant mean curvature spheres (otherwise, by the Daniel correspondence, see~\cite{D}, we would be able to construct constant mean curvature spheres with $4H^2 - 1 \leq 0$ in $\mathbb{H}^2 \times \mathbb{R}$, which is a contradiction by~\cite[Corollary 5.2]{NR}).
\item All the examples described in the above theorem can be lifted to the universal cover. Because the fiber in the universal cover is a line, not a circle, all the constant mean curvature spheres are embedded there. Moreover, for $E \geq 0$ the surfaces are embedded too, for the same reason. This classification has been obtained, very recently, by Espinoza~\cite{E}.
\end{enumerate}
\end{remark}
\begin{proof} Firstly we analyze the case $4H^2 + \kappa > 0$, because it is quite similar to the Berger sphere case. In this case, taking into account the previous lemma and that $H \geq 0$, $x(s)$ moves between the two values $x_1 = -\arcsinh \sqrt{t_2}$ and $x_2 = -\arcsinh \sqrt{t_1}$. If $E = 0$ then $x_2 = 0$, and so the curve $\gamma$ may intersect the axis of rotation $\ell$. As $\cos \alpha > 0$ for $x(s) \in\, ]x_1 = -\arctanh(\sqrt{-\kappa}/2H), 0[$, we can express $y$ as a function of $x$.
Now using~\eqref{eq:sin-alpha-sl} we get that
\[ y'(x) = \frac{H \tanh x \sqrt{1 - \frac{4\tau^2}{\kappa} \tanh^2 x}}{\tau \sqrt{1 + \frac{4H^2}{\kappa}\tanh^2 x}}, \]
and we can integrate this equation explicitly to obtain
\begin{equation}\label{eq:def-sphere-sl} y(x) = \arctan \left(\frac{\tau}{H} \rho(x)\right) - \frac{H}{\tau} \dfrac{\sqrt{4\tau^2 - \kappa}}{\sqrt{4H^2+\kappa}}\arctan \left(\dfrac{\sqrt{4\tau^2 - \kappa}}{\sqrt{4H^2+\kappa}} \rho(x)\right) \end{equation}
where $\rho(x) = \left.\sqrt{1 + \frac{4H^2}{\kappa}\tanh^2 x}\right/\sqrt{1 - \frac{4\tau^2}{\kappa}\tanh^2 x}$. We note that $y(x_1) = 0$, where the curve meets the axis $\ell$ orthogonally, and that $y(x)$ is strictly increasing and strictly convex. The function $y(x)$ only describes half of the sphere, but we can obtain the whole sphere by reflecting the solution across the line $x = 0$. It is easy to see that the sphere is embedded if, and only if, $y(0) > -\pi$ (see figure~\ref{fig:sl-embebidas}).
\begin{figure}
\caption{Non-embedded region of CMC spheres in $\mathrm{Sl}(2,\mathbb{R})$}
\label{fig:sl-embebidas}
\end{figure}
Now if $E > 0$ then $\sin \alpha > 0$ by~\eqref{eq:sin-alpha-sl}, so we can express $x$ as a function of $y$. A reasoning similar to the Berger sphere case for $E > 0$ is sufficient to check that the surface must be an unduloid (see figure~\ref{fig:unduloide}). Finally, if $E < 0$ then $\sin \alpha$ may change its sign. As in the Berger sphere case for $-H < E < 0$, we can express the curve $(x(s), y(s))$ as two graphs of the function $x(y)$. Hence it is straightforward to check that the situation is the same as in figure~\ref{fig:nodoide} and the surface must be of nodoid type.
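The closed form \eqref{eq:def-sphere-sl} can also be checked numerically. The sketch below uses the sample values $\kappa = -4$, $\tau = 1$, $H = 3/2$ (so that $4H^2 + \kappa > 0$); these values and the helper name `y_sl` are our own choices. It verifies that $y(x_1) = 0$ at $x_1 = -\arctanh(\sqrt{-\kappa}/2H)$, that $y$ is strictly monotone on $]x_1, 0[$, and that the embeddedness criterion $y(0) > -\pi$ holds for these parameters.

```python
import math

def y_sl(x, kappa, tau, H):
    """Closed-form y(x) for the generating curve of the CMC sphere in Sl(2,R)."""
    t2 = math.tanh(x) ** 2
    # clamp the radicand: it vanishes exactly at x = x_1 (kappa < 0 here)
    rho = math.sqrt(max(0.0, 1 + 4 * H**2 / kappa * t2)) \
        / math.sqrt(1 - 4 * tau**2 / kappa * t2)
    c = math.sqrt(4 * tau**2 - kappa) / math.sqrt(4 * H**2 + kappa)
    return math.atan(tau / H * rho) - (H / tau) * c * math.atan(c * rho)

kappa, tau, H = -4.0, 1.0, 1.5                      # 4H^2 + kappa = 5 > 0
x1 = -math.atanh(math.sqrt(-kappa) / (2 * H))       # where the curve meets the axis

ys = [y_sl(x1 - x1 * k / 200, kappa, tau, H) for k in range(201)]
assert abs(ys[0]) < 1e-6                            # y(x_1) = 0
d = [v - u for u, v in zip(ys, ys[1:])]
assert all(e * d[0] > 0 for e in d)                 # y strictly monotone on ]x_1, 0[
assert ys[-1] > -math.pi                            # embedded for these parameters
```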
In both cases the surface is compact if and only if
\begin{equation}\label{eq:periodo-Sl(2,R)} T(H,E) = 2\int_{x_1}^{x_2} y'(x) \,\mathrm{d} x = 2 \int_{x_1}^{x_2} \frac{(E+H\sinh^2 x)\sqrt{1-\frac{4\tau^2}{\kappa}\tanh^2 x}}{\tau \sqrt{\sinh^2 x \cosh^2 x + \frac{4}{\kappa}(E+H\sinh^2 x)^2}} \,\mathrm{d} x \end{equation}
is a rational multiple of $\pi$, where $x_j = \arcsinh \sqrt{t_j}$, $j = 1, 2$ (see Lemma~\ref{lm:restricciones-E-Sl(2,R)}). On the other hand, the situation for $4H^2 + \kappa \leq 0$ is different from the above and has no counterpart in the Berger sphere case. We first observe, by the previous lemma, that in this case $x(s)$ does not have to move between two real values: it is only bounded above by a constant that depends on $H$, which vanishes only when $E = 0$, so the solution intersects the axis of rotation only in this case. Moreover, as $x'(s) = \cos \alpha(s)$ can only vanish once, the solution cannot be periodic. We are going to distinguish between $E = 0$, $E > 0$ and $E < 0$, and we define in all cases $x_1 = -\arcsinh\sqrt{t_1}$ when $4H^2 + \kappa < 0$ and $x_1 = -\arcsinh (\abs{E}/\sqrt{H(H-2E)})$ when $4H^2 + \kappa = 0$. Because we chose $H \geq 0$, it must be $x(s) \in\, ]-\infty, x_1]$. If $E = 0$ then we consider the maximal solution with initial condition $(0, 0, \pi)$. In this case $\cos \alpha(s) < 0$ for any $s$, so we can express $y$ as a function of $x$. Then
\[ \frac{\mathrm{d} y}{\mathrm{d} x} = \frac{H \sinh x \sqrt{1 - \frac{4\tau^2}{\kappa}\tanh^2 x}}{\sqrt{1 + \frac{4H^2}{\kappa} \tanh^2 x}} < 0, \]
so the function $y$ is strictly decreasing. Moreover, $\mathrm{d}^2 y/\mathrm{d} x^2 > 0$, so the function $y$ is strictly convex. In figure~\ref{fig:esfera-open} we can see the two situations for $E = 0$.
\begin{figure}
\caption{Different solutions for $E = 0$ depending on the sign of $4H^2 + \kappa$}
\label{fig:esfera-open}
\end{figure}
In the second case, that is, for $E > 0$, we consider the maximal solution with initial condition $(x_1, 0, \pi/2)$. Then there exists $s_1 > 0$ such that $\alpha'(s_1) = 0$, $\alpha'$ is positive for $s < s_1$ and negative for $s>s_1$. Hence, using that $\cos \alpha < 0$ and $\sin \alpha > 0$ by~\eqref{eq:sin-alpha-sl}, we get that $\alpha(s) \in\, ]\pi/2, \pi[$. We can express $x$ in terms of $y$ because $\sin \alpha > 0$ by~\eqref{eq:sin-alpha-sl}, and
\begin{equation}\label{eq:dx/dy-Sl(2,R)} \frac{\mathrm{d} x}{\mathrm{d} y} = \cot \alpha \cosh x < 0, \end{equation}
so $x$ is a strictly decreasing function of $y$ (see figure~\ref{fig:unduloide-open}). Finally, when $E < 0$ we consider the maximal solution with initial condition $(x_1, 0, 3\pi/2)$. In this case $\sin \alpha$ could vanish, so we cannot express $x$ as a function of $y$ globally. As $\alpha'$ is always negative, let $s_1 > 0$ be such that $\alpha(s_1) = \pi$. Then $\alpha \in\, ]\pi, 3\pi/2[$ for $s \in\, ]0, s_1[$ and $\alpha \in\, ]\pi/2, \pi[$ for $s > s_1$, because $\cos \alpha (s)$ does not vanish anymore. Now we can express the solution $\gamma$ as two graphs of the function $x(y)$ meeting at the line $y = y(s_1)$. Using~\eqref{eq:dx/dy-Sl(2,R)}, $x(y)$ is strictly increasing on $]y(s_1), 0[$ and strictly decreasing on $]y(s_1), +\infty[$. Therefore the solution must be similar to figure~\ref{fig:nodoide-open}.
\begin{figure}
\caption{Curve $\gamma(s)$ for $4H^2 + \kappa \leq 0$ and $E > 0$}
\label{fig:unduloide-open}
\caption{Curve $\gamma(s)$ for $4H^2 + \kappa \leq 0$ and $E < 0$}
\label{fig:nodoide-open}
\end{figure}
\end{proof}
\begin{corollary} Let $4H^2 + \kappa > 0$ and let $\Phi:]-a,a[ \times ]-\pi, \pi[ \rightarrow \mathrm{Sl}(2,\mathbb{R})$ be the immersion given by:
\[ \Phi(x, t) = \begin{cases} \left(\cosh(x + a) e^{i y(x+a)}, \sinh(x + a)e^{i t} \right), & \text{if } x < 0, \\ \left(\cosh(a - x) e^{-i y(a - x)}, \sinh(a - x)e^{i t} \right), & \text{if } x\geq 0, \end{cases} \]
where $a = \arctanh(\sqrt{-\kappa}/2H)$ and $y$ is the function defined in~\eqref{eq:def-sphere-sl}. Then $\Phi$ defines an immersion of a sphere with constant mean curvature $H$. Moreover, $\Phi$ is an embedding if and only if $y(0) > -\pi$ (see figure~\ref{fig:sl-embebidas}).
\end{corollary}
\section{The isoperimetric problem in the Berger spheres}
In~\cite{TU09-2} the authors studied the stability of constant mean curvature surfaces in the Berger spheres. They proved that for $1/3 \leq 4\tau^2/\kappa < 1$ the solutions to the isoperimetric problem are the rotationally invariant constant mean curvature $H$ spheres $\mathcal{S}_H$. Besides, they showed that there exist unstable constant mean curvature spheres for $\tau$ close to zero. Moreover, for $4\tau^2/\kappa < 1/3$ there exist stable constant mean curvature spheres and tori. The aim of this section is to study the relation between the area and the volume of the rotationally invariant constant mean curvature spheres in order to understand the isoperimetric problem for $4\tau^2/\kappa < 1/3$. We have given in Corollary~\ref{cor:sphere-CMC-berger} a parametrization of the constant mean curvature sphere $\mathcal{S}_H$.
Then, using that parametrization, we define the interior domain of $\mathcal{S}_H$ as
\[ \Omega_H = \{(z, w) \in \mathbb{S}^3:\, -y(\arccos\abs{z}) < \arg(z) < y(\arccos\abs{z})\}. \]
Hence one of the volumes determined by $\mathcal{S}_H$ is $\mathrm{vol}(\Omega_H)$ (note that this need not be the smaller one).
\begin{lemma} The volume of $\Omega_H$ is given by:
\[ \mathrm{vol}(\Omega_H) = \begin{cases} \dfrac{16\pi \tau}{\kappa^2} \left(2\arctan\left( \frac{\tau}{H} \right) - \dfrac{\kappa H}{\tau(4H^2 + \kappa)} + \mu \arctan\left(\frac{\sqrt{4\tau^2 -\kappa}}{\sqrt{4H^2 + \kappa}} \right)\right), & \text{if } \kappa - 4\tau^2 < 0, \\ \\ \dfrac{16\pi \tau}{\kappa^2} \left(2\arctan\left( \frac{\tau}{H} \right) - \dfrac{\kappa H}{\tau(4H^2 + \kappa)} + \mu\arctanh\left(\frac{\sqrt{\kappa - 4\tau^2}}{\sqrt{4H^2 + \kappa}} \right)\right), & \text{if } \kappa - 4\tau^2 > 0, \end{cases} \]
where
\[ \mu = \frac{2H}{\tau}\frac{(\kappa - 4\tau^2)(2H^2 + \kappa) - 2\tau^2(4H^2 + \kappa)}{\sqrt{\abs{4\tau^2 - \kappa}}(4H^2 + \kappa)^{3/2}}. \]
\end{lemma}
\begin{proof} Firstly, it is easy to see that, by the symmetry of the sphere, we can restrict ourselves to the domain $\Omega_H^+ = \{(z, w) \in \mathbb{S}^3:\, \arg(z) < y(\arccos \abs{z})\}$, so that $\mathrm{vol}(\Omega_H) = 2\,\mathrm{vol}(\Omega_H^+)$. Secondly, the volume form $\omega_b$ of $\mathbb{S}^3(\kappa, \tau)$ and the volume form $\omega$ of $\mathbb{S}^3$ are related by $\omega_b = \frac{16\tau}{\kappa^2}\omega$. Hence it is sufficient to calculate the volume of $\Omega_H^+$ with respect to the standard metric on the sphere. We are going to apply the co-area formula using the function $f(z, w) = \arccos\abs{z}$, which assigns to the point $(z, w)$ its distance to the curve $\ell = \{(z, 0) \in \mathbb{S}^3\}$, the axis of rotation of the group $\mathrm{Rot}$.
Then
\[ \mathrm{vol}(\Omega_H) = 2\int_{\Omega_H^+} \omega_b = \frac{32\tau}{\kappa^2}\int_{\Omega_H^+} \omega = \frac{32\tau}{\kappa^2}\int_\mathbb{R} \left( \int_{\Gamma_t \cap \Omega_H^+} \omega_t\right) \,\mathrm{d} t, \]
where $\omega_t$ is the restriction of the form $\omega$ to $\Gamma_t = f^{-1}(\{t\})$ and we have taken into account that $\abs{\nabla f} = 1$. Now we can parametrize $\Gamma_t \cap \Omega_H^+$ as $\varphi:[0, y(t)] \times [0, 2\pi] \rightarrow \Gamma_t \cap \Omega_H^+$, $\varphi(u, \theta) = \bigl(\cos t\, e^{iu}, \sin t\, e^{i\theta} \bigr)$. We note that $\Gamma_t \cap \Omega_H^+ = \emptyset$ for $t > \arctan(\sqrt{\kappa}/2H)$ or $t < 0$. Hence the above integral can be rewritten as
\[ \begin{split} \mathrm{vol}(\Omega_H) &= \frac{32\tau}{\kappa^2}\int_{0}^{\arctan(\sqrt{\kappa}/2H)} \left(\int_{[0, y(t)] \times [0, 2\pi]} \varphi^*(\omega_t) \right) \,\mathrm{d} t = \\ &= \frac{16\pi\tau}{\kappa^2}\, 4 \int_{0}^{\arctan(\sqrt{\kappa}/2H)} \sin t\, \cos t\, y(t) \,\mathrm{d} t \end{split} \]
Finally, a long but straightforward computation yields the above integral and the result. \end{proof}
Now we are able to draw the area of $\mathcal{S}_H$ in terms of its volume. We are going to compare it with the tori $\mathcal{T}_H$ (see figure~\ref{fig:perfil-esfera-toro}), because for $\tau$ close to zero there are stable constant mean curvature spheres and tori (see~\cite{TU09-2}), so both surfaces are candidates to solve the isoperimetric problem.
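The pull-back computation in the proof above amounts to the identity $\int_{[0,y(t)]\times[0,2\pi]} \varphi^*(\omega_t) = 2\pi\, y(t) \sin t \cos t$, since the coordinate vector fields of $\varphi$ are orthogonal with lengths $\cos t$ and $\sin t$. This can be verified with a crude numerical quadrature of the Gram determinant; the sample values of $t$ and $y(t)$, the grid size, and the difference step are arbitrary choices of ours.

```python
import math

def phi(u, theta, t):
    # patch of the torus Gamma_t = f^{-1}({t}) in S^3, viewed in R^4
    return (math.cos(t) * math.cos(u), math.cos(t) * math.sin(u),
            math.sin(t) * math.cos(theta), math.sin(t) * math.sin(theta))

def patch_area(t, ymax, n=60, h=1e-6):
    """Numerical area of phi([0, ymax] x [0, 2*pi]) via the Gram determinant."""
    total = 0.0
    du, dth = ymax / n, 2 * math.pi / n
    for i in range(n):
        for j in range(n):
            u, th = (i + 0.5) * du, (j + 0.5) * dth
            p = phi(u, th, t)
            pu = [(a - b) / h for a, b in zip(phi(u + h, th, t), p)]
            pt = [(a - b) / h for a, b in zip(phi(u, th + h, t), p)]
            E = sum(x * x for x in pu)
            G = sum(x * x for x in pt)
            F = sum(x * y for x, y in zip(pu, pt))
            total += math.sqrt(max(0.0, E * G - F * F)) * du * dth
    return total

t, ymax = 0.4, 1.3                      # arbitrary sample values for t and y(t)
exact = 2 * math.pi * ymax * math.sin(t) * math.cos(t)
assert abs(patch_area(t, ymax) - exact) < 1e-3 * exact
```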
The area and the smallest volume enclosed by $\mathcal{T}_H$ are given by:
\[ \mathrm{Area}(\mathcal{T}_H) = \frac{4\tau}{\kappa}\frac{4\pi^2}{\sqrt{4H^2 + \kappa}},\qquad \text{Volume enclosed by } \mathcal{T}_H = \frac{16\pi^2\tau}{\kappa^2}\left(1 - \frac{2H}{\sqrt{4H^2 + \kappa}} \right). \]
\begin{figure}
\caption{Graphics of the area of the CMC spheres (solid line) and CMC tori (dashed line) in terms of the volume for different Berger spheres ($\kappa = 4$).}
\label{fig:perfil-esfera-toro}
\end{figure}
We can fix, without loss of generality, $\kappa = 4$. Figure~\ref{fig:perfil-esfera-toro} shows the four different situations that appear in the Berger spheres: for $\tau = 0.5$ the spheres are the best candidates to solve the isoperimetric problem; for $\tau = 0.407$ the minimal Clifford torus has the same area and volume as the minimal sphere, so both are candidates to solve the isoperimetric problem. For $\tau = 0.374$ and $\tau = 0.244$ (in the last case there are unstable spheres and non-congruent spheres enclosing the same volume) there appears an open interval centered at $16\pi^2 \tau/\kappa^2$ in which the tori $\mathcal{T}_H$ are the candidates to solve the isoperimetric problem.
\end{document}
\begin{document} \title[$\overline{\partial}$ on homogeneous varieties] {The $\overline{\partial}$-equation on homogeneous varieties with an isolated singularity} \author{J. Ruppenthal} \address{Department of Mathematics, University of Wuppertal, Gau{\ss}str. 20, 42119 Wuppertal, Germany.} \email{[email protected]} \date{July 23, 2008} \subjclass[2000]{32C36, 32W05} \keywords{Cauchy-Riemann equations, singular spaces, $L^p$-estimates.} \begin{abstract} Let $X$ be a regular irreducible variety in $\C\mathbb{P}^{n-1}$, $Y$ the associated homogeneous variety in $\C^n$, and $N$ the restriction of the universal bundle of $\C\mathbb{P}^{n-1}$ to $X$. In the present paper, we compute the obstructions to solving the $\overline{\partial}$-equation in the $L^p$-sense on $Y$ for $1\leq p\leq \infty$ in terms of cohomology groups $H^q(X,\mathcal{O}(N^\mu))$. This allows us to identify the obstructions explicitly if $X$ is specified more precisely, for example if it is equivalent to $\C\mathbb{P}^1$ or an elliptic curve. \end{abstract} \maketitle \section{Introduction} One strategy to study the $\overline{\partial}$-equation on singular complex spaces is to use Hironaka's resolution of singularities in order to pull back the $\overline{\partial}$-equation to a regular setting, where it is much easier to treat. See \cite{AHL}, \cite{BiMi} or \cite{Ha} for detailed information about resolution of singularities. That strategy has already been pursued in \cite{FOV1} and \cite{Rp4}, where it leads to more or less imprecise results. But the method seems to be quite promising for further investigations, because it can be improved considerably. We were able to do that in this paper for homogeneous varieties with an isolated singularity, where the desingularization is obtained by a single blow-up.
We believe that one should draw special attention to this strategy, because there are some analogies to the case of complex projective varieties, where we have an intimate connection between the $L^2$-cohomology of the regular part of the variety and the $L^2$-cohomology of resolutions (see \cite{PaSt1}). For a complex projective variety $Z\subset\C\mathbb{P}^n$, the Cheeger-Goresky-MacPherson conjecture (see \cite{CGM}) states that the $L^2$-deRham cohomology $H^*_{(2)}(Z^*)$ of the regular part of the variety $Z^*:=\mbox{Reg } Z$ with respect to the (incomplete) restriction of the Fubini-Study metric is naturally isomorphic to the intersection cohomology of middle perversity $IH^*(Z)$ (which in turn is isomorphic to the cohomology of a small resolution of singularities). Ohsawa proved this conjecture in \cite{Oh} under the extra assumption that the variety has only isolated singularities, while it is still open for higher-dimensional singular sets. The early interest in the conjecture of Cheeger, Goresky and MacPherson was motivated in large part by the hope that one could then use the natural isomorphism and a Hodge decomposition for $H^k_{(2)}(Z^*)$ to put a pure Hodge structure on the intersection cohomology of $Z$ (cf. \cite{CGM}). That was in fact done by Pardon and Stern in the case of isolated singularities (see \cite{PaSt2}). Their work includes the computation of the $L^2$-Dolbeault cohomology groups $H^{p,q}_{(2)}(Z^*)$ in terms of cohomology groups of a resolution of singularities (see \cite{PaSt1}; also for further references).\\ Let us now direct our attention to the case of singular Stein spaces. Though one would expect similar relations in this (local) situation, no such representation of the $L^2$-Dolbeault cohomology is known. The best results include quite rough lower and upper bounds on the dimension of some of the $L^2$-Dolbeault cohomology groups (see \cite{DFV}, \cite{Fo}, \cite{FOV1}, \cite{FOV2}, \cite{OvVa} or \cite{Rp4}).
The origin of the present work is the attempt to compute the $L^2$-Dolbeault cohomology groups in the spirit of the work of Cheeger-Goresky-MacPherson, Ohsawa, Pardon-Stern and others in terms of certain cohomology groups on a resolution of singularities. But, in the absence of compactness, most of their arguments do not carry over to the local situation, and one has to develop some new strategies. One such tool which could be helpful for studying the $\overline{\partial}$-equation (even locally) on singular complex spaces is a Dolbeault complex with weights according to normal crossings developed in \cite{Rp6}. A short review of this construction is contained in section \ref{sec:sufficient} of this paper; the main result is the exactness of the complex, cited here as Theorem \ref{thm:main}. Weights according to normal crossings are a natural choice because one can achieve that the exceptional set of a desingularization consists of normal crossings only, and the deformation of a metric under desingularization produces singularities along the exceptional set which have to be taken into account when we treat the $\overline{\partial}$-equation (cf. the introduction to \cite{Rp6}). Another interesting tool that we use in this paper is an integration along the fibers of the normal bundle of the exceptional set of a desingularization. This idea has already been used by E.\ S.\ Zeron and the author in \cite{RuZe} to construct an explicit $\overline{\partial}$-integration formula on weighted homogeneous varieties. The method is described in section \ref{sec:fibers}. A crucial point about both these tools is that they depend on integral formulas. So, they allow us to drop the restriction to $L^2$-spaces imposed by the well-known Hilbert space methods.
In view of the considerable difficulties in computing the $L^2$-cohomology explicitly, it seems reasonable to gain a broader view and better understanding by also considering $L^p$-Dolbeault cohomology groups for arbitrary $1\leq p\leq \infty$. Besides the $L^2$-results mentioned above, only the $L^\infty$-case has been addressed in a number of publications: \cite{AcZe1}, \cite{AcZe2}, \cite{FoGa}, \cite{Rp2}, \cite{Rp4}, \cite{RuZe}, \cite{SoZe}. These papers treat H{\"o}lder regularity of the $\overline{\partial}$-equation provided the right-hand side of the equation is bounded. Clearly, this implies solvability of the Cauchy-Riemann equations in the $L^\infty$-sense. In view of those results, the present paper is an attempt to embed the $L^2$- and the $L^\infty$-case into the broader spectrum of an $L^p$-theory. In fact, by use of the Dolbeault complex with weights and the integration along the fibers of the normal bundle, it is possible to compute the $L^p$-Dolbeault cohomology groups on a homogeneous variety $Y$ with an isolated singularity for all $p$ such that $2d/p\notin \Z$ (where $d=\dim Y$) and for $p=1$ (see Theorem \ref{thm:sufficient} and Theorem \ref{thm:necessary} below). This does not solve the $L^2$-problem but gives a quite precise idea of what to expect for the $L^2$-groups.\\ We will now describe the results of this paper in detail. Let $X$ be a regular irreducible variety in $\C\mathbb{P}^{n-1}$, and $Y$ the associated homogeneous variety in $\C^n$ which has an isolated singularity at the origin. We denote by $N$ the restriction of the universal bundle on $\C\mathbb{P}^{n-1}$ to $X$. Let $d=\dim Y$. The regular complex manifold $Y^*:=Y\setminus \{0\}=\mbox{Reg } Y$ carries a hermitian structure induced by restriction of the Euclidean metric of the ambient space $\C^n$. Let $|\cdot|_Y$ and $dV_Y$ be the resulting metric and volume form on $Y^*$.
Now, if $U\subset Y^*$ is an open set, and $\omega$ a measurable $(0,q)$-form on $U$, we set \begin{eqnarray*} \|\omega\|_{L^p_{0,q}(U)}^p &:=& \int_U |\omega|_Y^p dV_Y\ , \mbox{ for } 1\leq p<\infty,\\ \|\omega\|_{L^\infty_{0,q}(U)} &:=& \mbox{ess} \sup_{\substack{z\in U}} |\omega|_Y(z). \end{eqnarray*} We are interested in the following cohomology groups, where the $\overline{\partial}$-equation has to be interpreted in the sense of distributions (throughout this paper). Due to the incompleteness of the metric, different extensions of the $\overline{\partial}$-operator on smooth forms lead to different cohomology groups. For $U\subset \mbox{Reg } Y$ open, let \begin{eqnarray*} H^q_{(p)}(U,\mathcal{O}) := \frac{\{\omega\in L^p_{0,q}(U):\overline{\partial}\omega=0\}}{\{\omega\in L^p_{0,q}(U):\exists f\in L^p_{0,q-1}(U): \overline{\partial} f=\omega\}}. \end{eqnarray*} We will show (giving sufficient conditions for $L^p$-solvability of the $\overline{\partial}$-equation): \begin{thm}\label{thm:sufficient} Let $X$, $Y$ and $N$ be as above, let $D\subset\subset Y$ be strongly pseudoconvex such that $0\in D$, $D^*=D\setminus\{0\}$, and $1\leq p\leq \infty$, $1\leq q\leq d=\dim Y$. Set $$a(p,q,d) := \left\{\begin{array}{ll} \max\{k\in\Z: k< 1+q- 2d/p\} &, p \neq 1,\\ \max\{k\in\Z: k\leq 1+ q -2d/p\} &, p=1. \end{array}\right.$$ Then there exists an injective homomorphism \begin{eqnarray}\label{eq:sufficient} H^q_{(p)}(D^*,\mathcal{O}) \hookrightarrow \bigoplus_{\mu\geq a(p,q,d)} H^q(X,\mathcal{O}(N^{-\mu})). \end{eqnarray} \end{thm} The right-hand side in \eqref{eq:sufficient} is finite-dimensional because $N$ is a negative holomorphic line bundle. Necessary conditions are determined by: \begin{thm}\label{thm:necessary} Let $X$, $Y$ and $N$ be as above, and let $D\subset\subset Y$ be an open set such that $0\in D$, $D^*=D\setminus\{0\}$, and $1\leq p\leq \infty$, $1\leq q\leq d=\dim Y$.
Set $$c(p,q,d) := \max\{k\in\Z: k\leq 1+q- 2d/p\}.$$ Then there exists an injective homomorphism \begin{eqnarray}\label{eq:necessary} \bigoplus_{\mu\geq c(p,q,d)} H^q(X,\mathcal{O}(N^{-\mu})) \hookrightarrow H^q_{(p)}(D^*,\mathcal{O}). \end{eqnarray} \end{thm} Note that sufficient and necessary conditions coincide if $2d/p\notin\Z$ or $p=1$, and that $c(p,q,d)=a(p,q,d)+1$ in all other cases. So, there remains a little uncertainty about the contribution of $H^q(X,\mathcal{O}(N^{-a}))$, for example if $p=2$.\\ The proof of Theorem \ref{thm:sufficient} and Theorem \ref{thm:necessary} depends heavily on an embedded desingularization of $Y\subset \C^n$, which is in our situation simply given by a single blow-up of the origin in $\C^n$. We will study the behavior of $L^p$-norms under this resolution of singularities in the next section, while we will present the first part of the proof of Theorem \ref{thm:sufficient} in section \ref{sec:sufficient}. The main tool here is a Dolbeault complex with weights according to normal crossings that was constructed in \cite{Rp6}. The second part of the proof is settled by another important tool of our work, namely an integration along the fibers of the holomorphic line bundle $N$, which we will develop in section \ref{sec:fibers}. This idea has already been used by E.\ S.\ Zeron and the author in \cite{RuZe} to construct an explicit $\overline{\partial}$-integration formula on weighted homogeneous varieties. In section \ref{sec:fibers}, we obtain as a byproduct: \begin{thm}\label{thm:integration} Let $X$ and $Y$ be as above, $D\subset\subset Y$ an open subset, $D^*=D\setminus\{0\}$, and $1\leq p\leq \infty$, $1\leq q\leq \dim Y$. Let $\omega \in L^p_{0,q}(D^*)\cap \ker \overline{\partial}$ with compact support in $D$. Then there exists $\eta\in L^p_{0,q-1}(D^*)$ such that $\overline{\partial}\eta=\omega$.
\end{thm} Using Theorem \ref{thm:integration} in case $q=1$ and Hartogs' Extension Theorem on normal Stein spaces with isolated singularities, it is easy to deduce vanishing of the first cohomology with compact support (see section \ref{sec:fibers}): \begin{thm}\label{thm:compact} Let $X$ and $Y$ be as above, $D\subset\subset Y$ an open subset, $D^*=D\setminus\{0\}$, and $1\leq p\leq \infty$. Then: $$H^1_{(p),cpt}(D,\mathcal{O}):=\frac{\{\omega\in L^p_{0,1}(D^*): \overline{\partial}\omega=0,\ \supp\omega\subset\subset D\}} {\{\omega\in L^p_{0,1}(D^*): \exists f\in L^p(D^*): \overline{\partial} f=\omega,\ \supp f\subset\subset D\}}=0.$$ \end{thm} We will then prove Theorem \ref{thm:necessary} in section \ref{sec:necessary}, and discuss some examples and applications in the last section \ref{sec:examples}. Let us mention a few of them at this point. Let $X$, $Y$ and $N$ be as above, and $D$ a strongly pseudoconvex neighborhood of the origin in $Y$, $D^*=D\setminus\{0\}$. If, for example, a group $H^q_{(p)}(D^*,\mathcal{O})$ vanishes, then it follows by standard techniques that we can construct a bounded $L^p$-solution operator for the $\overline{\partial}$-equation in degree $(0,q)$ on $D^*$ (see Theorem \ref{thm:bounded}). When we restrict our attention to the case $\dim Y=2$, $X$ is a compact Riemann surface, and that allows us to compute the groups $H^1(X,\mathcal{O}(N^{-\mu}))$ by the Theorem of Riemann-Roch. We will do that for $X\cong \C\mathbb{P}^1$ or $X$ an elliptic curve, and deduce some consequences for $L^p$-solvability of the $\overline{\partial}$-equation on $Y$. Combining an extension theorem of Scheja for cohomology classes on complex spaces (Theorem \ref{thm:scheja1}) with our integration along the fibers, we deduce that $$H^q_{(p)}(D^*,\mathcal{O})=0$$ for $1\leq q \leq \dim Y-2$ (Theorem \ref{thm:scheja2}), and that in turn gives vanishing results for some classes $H^q(X,\mathcal{O}(N^{-\mu}))$ (Theorem \ref{thm:scheja3}).
Similarly, it is easy to show that $$H^q(\C\mathbb{P}^k, \mathcal{O}(N^{-\mu}))=0$$ for all $\mu\geq q-2k$, where $N$ is the universal bundle over $\C\mathbb{P}^k$ (Theorem \ref{thm:extension}). \section{Behavior of $L^p$-norms under desingularization}\label{sec:resolution} Let $X$ be a regular irreducible (connected) variety in $\C\mathbb{P}^{n-1}$ of dimension $d-1\geq 1$, and let $Y$ be the associated homogeneous variety in $\C^n$ (given by the same homogeneous polynomials). So, $Y$ is an irreducible homogeneous variety in $\C^n$ of dimension $d$, and it is regular outside the origin. We will now investigate the embedded desingularization of $Y$, which is given by blowing up the origin in $\C^n$. Let $$U \subset \C^n\times \C \mathbb{P}^{n-1}$$ be given by the equations $$z_j w_k = z_k w_j\ \ \ \mbox{ for all }\ \ j\neq k,$$ where $z_1, ..., z_n$ are the Euclidean coordinates of $\C^n$, and $w_1, ..., w_n$ the homogeneous coordinates of $\C\mathbb{P}^{n-1}$. That is a submanifold of dimension $n$ in $\C^n\times \C \mathbb{P}^{n-1}$. Let $$\Pi: U \rightarrow \C^n,\ (z,w) \mapsto z,$$ be the projection to the first component. Then $$H:= \Pi^{-1}(\{0\}) \cong \C\mathbb{P}^{n-1},$$ while the pre-image of each point in $\C^n\setminus\{0\}$ consists of exactly one point. We have that $$\Pi|_{U\setminus H} : U\setminus H \rightarrow \C^n\setminus \{0\}$$ is biholomorphic; $\Pi: U \rightarrow \C^n$ is the blow-up of the origin. On the other hand, consider the projection $$P: U \rightarrow H,\ (z,w) \mapsto (0,w).$$ If $\{w_k=1\}$ is a chart in $H$, then $$P^{-1}(\{w_k=1\}) \cong \{w_k=1\} \times \C.$$ $U$ is in fact a holomorphic line bundle over $H\cong \C\mathbb{P}^{n-1}$. It is called the universal bundle. Now, let $$N := \overline{\Pi|_{U\setminus H}^{-1}(Y\setminus\{0\})} \subset U.$$ This is a complex submanifold of dimension $d$ in $U$.
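A simple example to keep in mind: for $n=3$, let $X$ be the regular conic $$X=\{[w]\in\C\mathbb{P}^2: w_1 w_3 = w_2^2\},$$ so that $Y$ is the quadric cone $$Y=\{z\in\C^3: z_1 z_3 = z_2^2\}$$ with $d=2$. Here, $Y$ is indeed regular outside the origin, since the gradient $(z_3, -2z_2, z_1)$ of the defining polynomial vanishes only at $0$.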
Let $$\pi:=\Pi|_N: N \rightarrow Y,\ \ \ \mbox{ and }\ \ E:=\pi^{-1}(\{0\}) = N\cap H \cong X.$$ Then $\pi: N \rightarrow Y$ is a desingularization of $Y$ (with exceptional set $E\cong X$). We will from now on identify $E$ with $X$. On the other hand, $$p:=P|_N: N \rightarrow X$$ is a holomorphic line bundle. It is the restriction of the universal bundle to $X$, and the normal bundle of $X$ in $N$ at the same time. Hence, it is a negative bundle in the sense of Grauert (see \cite{Gr1}). So, there exists an integer $\mu_0\geq 0$ such that $$H^q(X, \mathcal{O}(N^{-\mu})) = 0\ \ \ \mbox{ for all }\ \ q\geq 1,\ \mu\geq \mu_0,$$ because the dual bundle $N^{-1} := N^*$ is positive.\\ $U$ is covered by $n$ charts $U_j \cong \C^n$ ($j=1, ..., n$) defined by $w_j=1$. Let us consider one such domain, say $U_1$. Here, we have holomorphic coordinates $$z_1,\ w_2,\ ...,\ w_n,$$ and in these coordinates $$\Pi (z_1, w_2, ..., w_n) = ( z_1, z_1 w_2, ..., z_1 w_n).$$ This implies that \begin{eqnarray*} \Pi^* d\o{z_1} &=& d\o{z_1}\ ,\\ \Pi^* d\o{z_j} &=& \o{z_1} d\o{w_j}\ ,\ \ \mbox{ for }\ j=2, ..., n. \end{eqnarray*} We will now develop a similar statement on $N$, which is a bit more complicated. First of all, we will choose a nice hermitian metric $h$ on $U$. For this, let $h_1'$ be any hermitian metric on $H\cong \C\mathbb{P}^{n-1}$, say the Fubini-Study metric, and $$h_1=P^* h_1'$$ the pull-back to $U$. Furthermore, let $h_2$ be given in the charts $U_j$ (where $w_j=1$) as $$h_2=\big(|w_1|^2+\cdots +|w_{j-1}|^2 + 1 + |w_{j+1}|^2+\cdots + |w_n|^2\big) dz_j\otimes d\o{z_j}.$$ It is easy to see that $h_2$ is globally well defined because $z_j/w_j=z_k/w_k$. Then, \begin{eqnarray*} h:=h_1\oplus h_2 \end{eqnarray*} gives a (in some sense natural) hermitian metric on $U$, where in a chart $U_j$ the coordinate $z_j$ is orthogonal to $$w_1, ..., w_{j-1}, w_{j+1}, ..., w_n.$$ Let $$i: N \hookrightarrow U\ \ \ \mbox{ and } \ \ \ \iota: Y \hookrightarrow \C^n$$ be the natural inclusions.
This implies that $$\Pi\circ i = \iota \circ\pi.$$ As $Y^*$ carries the hermitian structure induced by restriction (respectively pull-back) of the Euclidean metric of the ambient $\C^n$, $N$ is a hermitian submanifold of $U$ with the induced hermitian structure $i^* h$. We denote by $\|\cdot\|_N$ the resulting norm on the Grassmannian of $N$, and by $dV_N$ the associated volume form.\\ Let $Q\in X$ be a point in the exceptional set. We can assume that $Q\in U_1$. Then there exists a neighborhood $W_Q'$ of $Q$ in $X\cap U_1$ with holomorphic coordinates $$x_2, ..., x_d$$ on $W_Q'$. It follows that $$t:=z_1,\ x_2, ..., x_d$$ are holomorphic coordinates on $W_Q:=p^{-1}(W_Q')\subset N$. We identify $x_k$ with $p^* x_k$. It follows from the construction of the metric that $t=z_1$ is orthogonal to the $x_k$.\\ Hence, by shrinking $W_Q'$ a little, it follows that \begin{eqnarray}\label{eq:ajk1} i_* \frac{\partial}{\partial\o{x_j}} = \sum_{k=2}^{n} a_{jk} \frac{\partial}{\partial\o{w_k}}, \end{eqnarray} where \begin{eqnarray}\label{eq:ajk2} \sum_{k=2}^{n} |a_{jk}|^2 \sim 1 \end{eqnarray} on $W_Q$ for all $j=2, ..., d$. We also have that \begin{eqnarray*} dt\wedge d\o{t} \wedge dx_2\wedge d\o{x_2}\wedge \cdots \wedge dx_d\wedge d\o{x_d} \sim dV_N \end{eqnarray*} on $W_Q$.
Using \eqref{eq:ajk1} and \eqref{eq:ajk2}, we calculate: \begin{eqnarray*} \big\|\pi_* \frac{\partial}{\partial \o{x_j}}\big\|_Y^2 &=& \big\|\iota_* \pi_* \frac{\partial}{\partial \o{x_j}}\big\|_{\C^n}^2 = \big\| \Pi_* i_* \frac{\partial}{\partial \o{x_j}}\big\|_{\C^n}^2\\ &=& \big\| \Pi_* \sum_{k=2}^n a_{jk} \frac{\partial}{\partial\o{w_k}}\big\|_{\C^n}^2 = \big\| \sum_{k=2}^n a_{jk} \sum_{l=1}^n \frac{\partial\o{\Pi_l}}{\partial \o{w_k}} \cdot \frac{\partial}{\partial\o{z_l}}\big\|_{\C^n}^2\\ &=& \big\| \sum_{k=2}^n a_{jk} \sum_{l=1}^n \delta_{lk}\cdot \o{z_1} \frac{\partial}{\partial\o{z_l}}\big\|_{\C^n}^2 = |z_1|^2 \sum_{k=2}^n |a_{jk}|^2 \sim |z_1|^2 \end{eqnarray*} (where $\delta_{lk}$ denotes the Kronecker-$\delta$), because $$\pi_* \frac{\partial}{\partial \o{x_j}}\big|_y\in T^{0,1}_y (Y\setminus\{0\})$$ for all $y\in\pi(W_Q)\setminus\{0\}$, and $\|v\|_Y=\|\iota_* v\|_{\C^n}$ on $T^{0,1} (Y\setminus\{0\})$ (since $\|\cdot\|_Y$ is the norm induced by $\|\cdot\|_{\C^n}$).\\ So, for a point $y\in \pi(W_Q)\setminus\{0\}$, we can now calculate \begin{eqnarray*} \big\|(\pi|_{N\setminus X}^{-1})^* d\o{x_k}\big\|_Y (y) &=& \max_{0\neq v\in T^{0,1}_y Y} \|v\|^{-1}_Y(y) \left|d\o{x_k}\big((\pi|_{N\setminus X}^{-1})_* v\big)\right|(\pi^{-1}(y))\\ &\sim& \max_{j=2, ..., d} \big\|\pi_* \frac{\partial}{\partial \o{x_j}}\big\|_Y^{-1} (y) \left|d\o{x_k}\big(\frac{\partial}{\partial \o{x_j}}\big)\right|(\pi^{-1}(y))\\ &\sim& \max_{j=2, ..., d} \big\|\pi_* \frac{\partial}{\partial \o{x_j}}\big\|_Y^{-1} (y) \sim |z_1(y)|^{-1}, \end{eqnarray*} because $\pi$ is a biholomorphism outside $X$, and $$d\o{x_k}\left(\frac{\partial}{\partial\o{t}}\right)=0,$$ since the coordinates $x_2, ..., x_d$ are orthogonal to $t=z_1$.\\ Since $t=\Pi^* z_1 = \pi^* z_1$, the estimate \begin{eqnarray*} \big\|(\pi|_{N\setminus X}^{-1})^* d\o{x_k}\big\|_Y \sim |z_1|^{-1} \end{eqnarray*} also yields \begin{eqnarray*} \big\|(\pi|_{N\setminus X}^{-1})^* ( \o{t} d\o{x_j})\big\|_Y \sim 1.
\end{eqnarray*} Summing up, we conclude: \begin{lem}\label{lem:blowup} Let $Q\in X$. Then there exists a neighborhood $W_Q$ of $Q$ in $N$ with holomorphic coordinates $t, x_2, ..., x_d$ such that $$X\cap W_Q = \{ t=0\},$$ and \begin{eqnarray*} \alpha_1 &:=& (\pi|_{N\setminus X}^{-1})^* d\o{t},\\ \alpha_j &:=& (\pi|_{N\setminus X}^{-1})^* (\o{t} d\o{x_j}),\ \ j=2, ..., d, \end{eqnarray*} form a basis of the $(0,1)$-forms on $\pi(W_Q \setminus X)\subset Y\setminus\{0\}$ with $$\|\alpha_j\|_Y \sim 1.$$ This implies for the volume forms that $$\pi|_{N\setminus X}^* dV_Y \sim |t|^{2d-2} dV_N$$ on $W_Q\setminus X$. Hence, for a function $f$ on $\pi(W_Q \setminus X)=\pi(W_Q)\setminus\{0\}$, and $1\leq p \leq \infty$, we have that $$f\in L^p(\pi(W_Q) \setminus \{0\})$$ if and only if $$|t|^{\frac{2d-2}{p}} \cdot \pi^* f \in L^p(W_Q \setminus X).$$ Let $1\leq q\leq d$. If $\omega \in L^p_{0,q}(\pi(W_Q)\setminus \{0\})$ is a $(0,q)$-form on $\pi(W_Q) \setminus\{0\}$, then \begin{eqnarray*} |t|^{\frac{2d-2}{p} - (q-1)} \cdot \pi^* \omega \in L^p_{0,q}(W_Q\setminus X). \end{eqnarray*} On the other hand, for $\eta\in L^p_{0,q}(W_Q\setminus X)$ a $(0,q)$-form on $W_Q\setminus X$, $$|t|^{\frac{2d-2}{p} - q} \cdot \eta \in L^p_{0,q}(W_Q\setminus X)$$ implies that \begin{eqnarray*} (\pi|_{N\setminus X}^{-1})^* \eta \in L^p_{0,q}(\pi(W_Q)\setminus \{0\}). \end{eqnarray*} \end{lem} \begin{proof} Only the last two statements remain to be shown. $\omega\in L^p_{0,q}(\pi(W_Q)\setminus\{0\})$ has a representation $$\sum_{1\leq k_1 < \cdots < k_q\leq d} f_{k_1\cdots k_q} \alpha_{k_1}\wedge\cdots \wedge \alpha_{k_q},$$ where the coefficients $f_{k_1\cdots k_q} \in L^p(\pi(W_Q)\setminus\{0\})$, and the proof is clear from what we have seen before. The last statement follows analogously. \end{proof} \section{Sufficient Conditions (Theorem \ref{thm:sufficient})}\label{sec:sufficient} Let $D$ be a strongly pseudoconvex domain in $Y$ such that $0\in D$, and let $D^*:=D\setminus\{0\}$.
We can assume that there is a neighborhood $U$ of $bD$ and a regular strictly plurisubharmonic defining function $\rho\in C^2(U)$ such that $D\cap U =\{z\in U:\rho(z)<0\}$. Then there exists $\epsilon>0$ such that $D_\epsilon:= D\cup \{z\in U:\rho(z)<\epsilon\}$ is a strongly pseudoconvex extension of $D$. So, it follows by Grauert's bump method that the natural homomorphism $$r_q: H^q_{(p)}(D_\epsilon^*,\mathcal{O}) \rightarrow H^q_{(p)}(D^*,\mathcal{O})$$ (induced by restriction of forms) is surjective (see \cite{LiMi}, chapter IV.7). Here, we also set $D_\epsilon^*=D_\epsilon\setminus\{0\}$. We will work with the desingularization $\pi: N \rightarrow Y$ described in the previous section. So, let $$G=\pi^{-1}(D),\ G_\epsilon=\pi^{-1}(D_\epsilon),\ G^*=\pi^{-1}(D^*),\ G^*_\epsilon=\pi^{-1}(D^*_\epsilon),$$ and let $[\omega] \in H^q_{(p)}(D^*,\mathcal{O})$ be represented by a form $\omega\in L^p_{0,q}(D^*_\epsilon)$, which is possible by surjectivity of $r_q$. We will show in this section how $\omega$ determines a class in \eqref{eq:iso}, and that $[\omega]=0$ if that class vanishes. The point that a different representative of $[\omega]$ defines the same class is postponed to the next section. We can use Lemma \ref{lem:blowup} to determine properties of $\pi^* \omega$. It is convenient to work with the weighted Dolbeault complexes that we introduced in \cite{Rp6}. So, we have to recall some concepts. Let $\mathcal{I}$ be the sheaf of ideals of $E=X$ in $N$. For $k\in\Z$ we will use the sheaves $\mathcal{I}^k\mathcal{O}$ which are subsheaves of the sheaf of germs of meromorphic functions on $N$. It follows from Theorem 5.1 in \cite{Rp4} that \begin{eqnarray}\label{eq:iso} H^q(G,\mathcal{I}^k\mathcal{O}) \cong H^q(G_\epsilon,\mathcal{I}^k\mathcal{O}) \cong \bigoplus_{\mu\geq k} H^q(X,\mathcal{O}(N^{-\mu})) \end{eqnarray} for all $q\geq 1$, because $G$ and $G_\epsilon$ are strongly pseudoconvex neighborhoods of the zero section of the negative holomorphic line bundle $N$.
Note that on the left-hand side of \eqref{eq:iso}, $\mathcal{O}$ is the structure sheaf on $N$, while on the right $\mathcal{O}(N^{-\mu})$ denotes the sheaf of germs of holomorphic sections in the bundle $N^{-\mu}$ over $X$. So, in order to prove Theorem \ref{thm:sufficient}, it is enough to show that there exists an injective homomorphism \begin{eqnarray}\label{eq:sufficient2} H^q_{(p)}(D^*,\mathcal{O}) \hookrightarrow H^q(G,\mathcal{I}^{a(p,q,d)}\mathcal{O}). \end{eqnarray} What we need is a suitable fine resolution for the sheaves $\mathcal{I}^k\mathcal{O}$. Let $s\in \R$, $U\subset N$ open, and $\eta$ a measurable $(0,r)$-form on $U$. Then, we say that $$\eta\in |\mathcal{I}|^s L^p_{(0,r),loc}(U)$$ if for each point $z\in U$ there is a local generator $f_z$ of $\mathcal{I}_z$ (defined on a neighborhood $V_z$ of $z$) such that $$|f_z|^{-s} \eta \in L^p_{0,r}(V_z).$$ This property does not depend on the choice of $f_z$, and so the spaces $|\mathcal{I}|^s L^p_{(0,r),loc}(U)$ are well-defined.\\ We have to use a weighted $\overline{\partial}$-operator, which we define locally again. Let $k\in\Z$, $z\in N$ and $f_z$ a local generator of $\mathcal{I}_z$ defined on $V_z$. Then, for a current $\Phi$ on $V_z$, we set $$\overline{\partial}_k \Phi:= f_z^k \overline{\partial} \big( f_z^{-k} \Phi\big),$$ provided the construction makes sense. In that case $\overline{\partial}_k$ is well-defined because the construction does not depend on the choice of the generator. Now, we have to make a connection between the weighted operators $\overline{\partial}_k$ and weighted $L^p$-spaces defined above. We will use: \begin{defn}\label{defn:k} Let $1\leq p \leq \infty$ and $s$ be real numbers. Then we call \begin{eqnarray*} k(p,s) := \max\{m\in\Z: |z_1|^s L^p_{loc}(\C) \subset |z_1|^m L^1_{loc}(\C)\} \end{eqnarray*} the $\overline{\partial}$-weight of $(p,s)$, where $|z_1|^t L^p_{loc}(\C)=\{f \mbox{ measurable}: |z_1|^{-t} f \in L^p_{loc}(\C)\}$. 
Now, we define for $0\leq r\leq d=\dim Y$ the sheaves $|\mathcal{I}|^s \mathcal{L}^p_{0,r}$ by: \begin{eqnarray*} |\mathcal{I}|^s \mathcal{L}^p_{0,r} (U) :=\{ f\in |\mathcal{I}|^s L^p_{(0,r),loc}(U): \overline{\partial}_{k(p,s)} f \in |\mathcal{I}|^s L^p_{(0,r+1),loc}(U)\} \end{eqnarray*} for open sets $U\subset N$ (this is a presheaf which is already a sheaf). \end{defn} From now on, if an index $k$ is not specified, it should always be the $\overline{\partial}$-weight $k(p,s)$, where $p$ and $s$ arise from the context. We need to compute the $\overline{\partial}$-weight of $(p,s)$ explicitly: \begin{lem}\label{lem:k1} Let $1\leq p \leq \infty$ and $s$ be real numbers, and $k(p,s)$ the $\overline{\partial}$-weight of $(p,s)$ according to Definition \ref{defn:k}. Then \begin{eqnarray}\label{eq:k1} k(p,s) = \left\{ \begin{array}{ll} \max\{m\in\Z: m<2 + s-2/p\} & ,\ p\neq 1,\\ \max\{m\in\Z: m\leq 2 + s-2/p\}& ,\ p=1. \end{array}\right. \end{eqnarray} \end{lem} \begin{proof} See \cite{Rp6}, Lemma 2.2. \end{proof} We can now cite the main results about the Dolbeault complex with weights according to normal crossings. Adapted to our present situation, Theorem 1.5 in \cite{Rp6} reads as follows: \begin{thm}\label{thm:main} For $1\leq p\leq \infty$ and $s\in \R$, let $k(p,s)\in \Z$ be the $\overline{\partial}$-weight according to Definition \ref{defn:k}. Then: \begin{eqnarray}\label{eq:complex} 0 \rightarrow \mathcal{I}^k\mathcal{O} \hookrightarrow |\mathcal{I}|^s \mathcal{L}^p_{0,0} \xrightarrow{\ \overline{\partial}_k\ } |\mathcal{I}|^s \mathcal{L}^p_{0,1} \xrightarrow{\ \overline{\partial}_k\ } \cdots \xrightarrow{\ \overline{\partial}_k\ } |\mathcal{I}|^s \mathcal{L}^p_{0,d} \rightarrow 0 \end{eqnarray} is an exact (and fine) resolution of $\mathcal{I}^k\mathcal{O}$. \end{thm} Let us now return to $\omega\in L^p_{0,q}(D^*_\epsilon)$.
If we extend $\pi^*\omega$ trivially over the exceptional set $E$, Lemma \ref{lem:blowup} implies immediately: \begin{lem}\label{lem:piw} Let $s=(q-1) - \frac{2d-2}{p}$. Then: $$\pi^* \omega \in |\mathcal{I}|^s L^p_{(0,q),loc} (G_\epsilon).$$ \end{lem} Now, we need to find a suitable weight $k$ such that $\overline{\partial}_k \pi^*\omega=0$. This is in fact the $\overline{\partial}$-weight of the pair $(p,s)$ from Lemma \ref{lem:piw}, as we will see shortly. But first, let us make the connection to Theorem \ref{thm:sufficient}: \begin{lem}\label{lem:piw1} Let $k(p,s)$ be the $\overline{\partial}$-weight of $(p,s)$, where $$s=(q-1) - \frac{2d-2}{p}.$$ Then: $$k(p,s) = a(p,q,d),$$ where $a(p,q,d)$ is the constant from Theorem \ref{thm:sufficient}. So, Lemma \ref{lem:piw} yields \begin{eqnarray}\label{eq:al1} \pi^* \omega \in |\mathcal{I}|^s L^p_{(0,q),loc}(G_\epsilon) \subset |\mathcal{I}|^{a(p,q,d)} L^1_{(0,q),loc} (G_\epsilon) \end{eqnarray} by definition of the $\overline{\partial}$-weight. \end{lem} \begin{proof} The proof is immediate, because \begin{eqnarray*} 2+s-2/p &=& (q+1) - \frac{2d-2}{p} - 2/p = (q+1) -2d/p. \end{eqnarray*} \end{proof} From now on, if the indices are not specified, $a$ should always be the constant $a(p,q,d)$ from Theorem \ref{thm:sufficient}.
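As a consistency check of \eqref{eq:k1}, consider the case $p=2$, $s=0$, where the formula yields $k(2,0)=\max\{m\in\Z: m<1\}=0$. On the one hand, $L^2_{loc}(\C)\subset L^1_{loc}(\C)$ by the Cauchy-Schwarz inequality, so $m=0$ is admissible. On the other hand, the function $f(z_1)=|z_1|^{-1}\big(\log|z_1|\big)^{-1}$ lies in $L^2_{loc}(\C)$, whereas $$\int_{|z_1|<1/2} \frac{|f(z_1)|}{|z_1|}\, dV_{\C} = 2\pi \int_0^{1/2} \frac{dr}{r\,|\log r|}=\infty,$$ so $m=1$ is not admissible.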
We will now see that in fact $$\overline{\partial}_a \pi^*\omega=0.$$ This is a consequence of \eqref{eq:al1}, Lemma \ref{lem:blowup} and the following extension theorem for the $\overline{\partial}$-equation, which we will prove in a slightly more general version than needed here (with further use in mind): \begin{lem}\label{lem:ext1} Let $D\subset\C^n$ be an open set, $1\leq P\leq\infty$ and $f\in L^P_{0,Q}(D)$ a $(0,Q)$-form on $D$ such that $\overline{\partial} f =g$ in the sense of distributions on $D\setminus H$, where $H=\{z\in \C^n: z_1=0\}$ and $g\in L^1_{0,Q+1}(D)$, and $f$ has the following structure: $$ f =\sum_{|J|=Q} f_J d\o{z_J}$$ (in multi-index notation) such that $$|z_1|^{-w(P)} f_J \in L^P(D) \mbox{ for all multi-indices } J \mbox{ with } 1\notin J,$$ where $$w(P)=\left\{\begin{array}{ll} 2/P -1 & , \mbox{ if } 1\leq P \leq 2,\\ 0& , \mbox{ if } 2\leq P \leq \infty.\end{array}\right.$$ Then $\overline{\partial} f=g$ on the whole set $D$. \end{lem} We will use the statement only in case $P=1$ and $w(P)=1$. \begin{proof} The statement is local, so we can assume that $D$ is bounded. For $r>0$, define $$U(r):=\{z\in\C^n: \dist(z,H)=|z_1|<r\}.$$ Choose a smooth cut-off function $\chi\in C^\infty_{cpt}(\R)$ with $|\chi|\leq 1$, $\chi(t)=1$ if $|t|\leq 1/2$, $\chi(t)=0$ if $|t|\geq 2/3$, and $|\chi'| \leq 8$. Now, let $$\chi_r(z):=\chi(\frac{\dist(z,H)}{r}).$$ Then $\chi_r\equiv 1$ on $U(r/2)$ and $\supp \chi_r \subset U(3r/4)$. $\chi_r$ is smooth and we have: $$\||z_1|^{w}\nabla \chi_r\|\leq |\chi'| r^{w-1} \leq 8r^{w-1}.$$ Since $D$ is bounded, there is $R>0$ such that $D\subset B_R(0)$. Let $s=P/(P-1)$ be the exponent conjugate to $P$.
It follows that \begin{eqnarray*} \int_{D}\||z_1|^w \nabla \chi_r\|^s dV_{\C^n} \leq 8^s (2R)^{2n-2} r^{s(w-1)} \int_{\{\zeta\in\C:|\zeta|<r\}} dV_{\C}, \end{eqnarray*} and we conclude: \begin{eqnarray}\label{eq:dchi} \||z_1|^w \nabla\chi_r\|_{L^s(D)} &\lesssim& r^{w-1 + 2/s} = r^{w-1+2 - 2/P}\lesssim 1 \end{eqnarray} by the choice of $w(P)$. The statement remains true in case $P=1$ and $s=\infty$. What we have to show is that \begin{eqnarray}\label{eq:dqe2} \int_D f\wedge \overline{\partial} \phi = (-1)^{Q+1} \int_D g\wedge\phi \end{eqnarray} for all smooth $(n,n-Q-1)$-forms $\phi$ with compact support in $D$. By assumption, $\overline{\partial} f=g$ on $D\setminus H$. That leads to: \begin{eqnarray*} \int_D f\wedge\overline{\partial}\phi &=& \int_D f\wedge\chi_r \overline{\partial}\phi +\int_{D\setminus H} f \wedge(1-\chi_r) \overline{\partial}\phi\\ &=& \int_D f \wedge\chi_r \overline{\partial}\phi + \int_{D\setminus H} f \wedge\overline{\partial}[ (1-\chi_r) \phi]- \int_{D\setminus H} f\wedge\overline{\partial}(1-\chi_r)\wedge \phi\\ &=& \int_D f \wedge\chi_r \overline{\partial}\phi + (-1)^{Q+1} \int_{D} g \wedge(1-\chi_r) \phi + \int_{D\setminus H} f \wedge\overline{\partial}\chi_r\wedge\phi. \end{eqnarray*} Now, we will consider what happens as $r\rightarrow 0$. Let us first consider $$\int_D f\wedge\chi_r\overline{\partial}\phi\ \ \mbox{ and }\ \ \int_{D} g \wedge(1-\chi_r) \phi.$$ Since $|\chi_r|\leq 1$, we have \begin{eqnarray*} \|f\wedge\chi_r\overline{\partial}\phi \|, \|g\wedge(1-\chi_r)\phi\| \in L^1(D), \end{eqnarray*} and we know that $f\wedge\chi_r\overline{\partial}\phi \rightarrow 0$ pointwise almost everywhere as $r\rightarrow 0$, and $g\wedge (1-\chi_r)\phi \rightarrow g\wedge\phi$. Hence, Lebesgue's Theorem on dominated convergence gives: \begin{eqnarray*} \lim_{r\rightarrow 0} \int_D f \wedge\chi_r \overline{\partial}\phi = 0,\ \ \ \lim_{r\rightarrow 0} \int_D g \wedge(1- \chi_r) \phi = \int_D g\wedge\phi.
\end{eqnarray*} To prove \eqref{eq:dqe2}, it remains to show that $$\lim_{r\rightarrow 0}\int_{D} f\wedge\overline{\partial}\chi_r\wedge\phi=0.$$ Because of $$\overline{\partial} \chi_r = \frac{\partial\chi_r}{\partial\o{z_1}} d\o{z_1},$$ we only have to consider the coefficients $f_J$ where $1\notin J$. So, using \eqref{eq:dchi} and the H\"older Inequality, we get \begin{eqnarray*} \lim_{r\rightarrow 0} \|f\wedge\overline{\partial}\chi_r\wedge \phi\|_{L^1(D)} &=& \lim_{r\rightarrow 0} \|f\wedge\overline{\partial}\chi_r\wedge \phi\|_{L^1(U(r))}\\ &\leq& \lim_{r\rightarrow 0} \||z_1|^{-w} f \wedge\phi\|_{L^P(U(r))}\||z_1|^w \nabla \chi_r\|_{L^s(D)}\\ &\lesssim& \lim_{r\rightarrow 0} \||z_1|^{-w} f \wedge\phi\|_{L^P(U(r))}. \end{eqnarray*} Since the relevant coefficients satisfy $|z_1|^{-w} f_J\in L^P(D)$, we conclude $$\lim_{r\rightarrow 0} \||z_1|^{-w} f \wedge\phi\|_{L^P(U(r))} = 0$$ (see for instance \cite{Alt}, Lemma A 1.16), and that completes the proof. \end{proof} So, choose a point on the exceptional set $E$. Locally, we can assume that this point is the origin in $\C^d$, and that $E=\{z_1=0\}$ in a small neighborhood $V$. It follows from Lemma \ref{lem:piw1} that $$z_1^{-a} \pi^* \omega \in L^1_{0,q}(V),$$ and it is clear that $$\overline{\partial} \big( z_1^{-a} \pi^* \omega\big) =0 \ \mbox{ on }\ V\setminus E$$ in the sense of distributions. But if we take a closer look at $z_1^{-a} \pi^*\omega$, it follows from Lemma \ref{lem:blowup} that $$z_1^{-a} \pi^*\omega = \sum_{|J|=q} f_J d\o{z_J},$$ where $f_J=|z_1| h_J$ with $h_J\in L^1(V)$ if $1\notin J$. So, Lemma \ref{lem:ext1} yields $\overline{\partial} (z_1^{-a} \pi^*\omega)=0$ on $V$.
In global terms, this means: \begin{lem}\label{lem:piw2} If $a=a(p,q,d)$ is the index from Theorem \ref{thm:sufficient}, then $$\overline{\partial}_a \pi^* \omega=0.$$ Hence, $\pi^* \omega\in |\mathcal{I}|^s \mathcal{L}^p_{0,q}(G_\epsilon)$ (where $s=(q-1)-\frac{2d-2}{p}$) defines a cohomology class $$[\pi^*\omega] \in H^q(G_\epsilon, \mathcal{I}^a \mathcal{O}).$$ \end{lem} \begin{proof} Because of $a(p,q,d)=k(p,s)$ by Lemma \ref{lem:piw1}, $\overline{\partial}_a \pi^*\omega=\overline{\partial}_k \pi^*\omega=0$ implies that $\pi^* \omega\in |\mathcal{I}|^s \mathcal{L}^p_{0,q}(G_\epsilon)$. Hence, $\pi^*\omega$ defines a cohomology class $[\pi^*\omega]$ in $H^q(G_\epsilon,\mathcal{I}^a\mathcal{O})$ by Theorem \ref{thm:main}. \end{proof} Now, assume that \begin{eqnarray}\label{eq:closed1} [\pi^*\omega]=0\ \ \ \mbox{ in } \ \ \ H^q(G_\epsilon,\mathcal{I}^a\mathcal{O}). \end{eqnarray} We will conclude this section by showing that this implies $[\omega]=0$ in $H^q_{(p)}(D^*,\mathcal{O})$. By the use of Theorem \ref{thm:main}, the assumption \eqref{eq:closed1} tells us that there exists \begin{eqnarray}\label{eq:closed2} \eta\in |\mathcal{I}|^s L^p_{(0,q-1),loc}(G_\epsilon) \end{eqnarray} such that $\overline{\partial}_k\eta=\pi^*\omega$ on $G_\epsilon$. This means that $\overline{\partial}\eta=\pi^*\omega$ on $G_\epsilon \setminus E=G_\epsilon^*$. Recall that \begin{eqnarray}\label{eq:closed3} s=(q-1) -\frac{2d-2}{p}. \end{eqnarray} Let $\eta':=\eta|_{G}$. Then, \eqref{eq:closed2}, \eqref{eq:closed3} and the last statement of Lemma \ref{lem:blowup} yield that $$\vartheta := (\pi|_{G \setminus E}^{-1})^* \eta' \in L^p_{0,q-1}(D\setminus \{0\}).$$ Because $\pi$ is a biholomorphic map outside the exceptional set, we know that $\overline{\partial}\vartheta=\omega$ on $D\setminus \{0\}=D^*$. So, it follows that $[\omega]=0$ in $H^q_{(p)}(D^*,\mathcal{O})$.
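For illustration, take $d=2$, $q=1$ and $p=2$ in this last step: then $s=-1$, so \eqref{eq:closed2} says precisely that $|t|\,\eta\in L^2_{loc}(G_\epsilon)$, and by Lemma \ref{lem:blowup} (applied to functions, with $\frac{2d-2}{p}=1$) this is exactly the integrability required for the push-forward $\vartheta$ to lie in $L^2(D^*)$.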
To complete the proof of Theorem \ref{thm:sufficient}, it remains to show that a different representing $(0,q)$-form for the class $[\omega]\in H^q_{(p)}(D^*,\mathcal{O})$ defines the same class in $$H^q(G_\epsilon,\mathcal{I}^{a}\mathcal{O}) \cong H^q(G,\mathcal{I}^a\mathcal{O}).$$ That will be done in the next section, where we can restrict our considerations to the set $G$ (no need to consider the extension $G_\epsilon$ any more). \section{Integration along the Fibers}\label{sec:fibers} Assume that $\wt{\omega}$ is another representing form for the class $[\omega]\in H^q_{(p)}(D^*,\mathcal{O})$, namely that $$\wt{\omega} \in L^p_{0,q}(D^*),$$ such that $\overline{\partial} \wt{\omega}=0$ on $D^*$ in the sense of distributions, and there exists $\sigma \in L^p_{0,q-1}(D^*)$ with $$\omega-\wt{\omega} = \overline{\partial} \sigma.$$ Here again, $\pi^*\wt{\omega}\in |\mathcal{I}|^s \mathcal{L}^p_{0,q}(G)$, but unfortunately we do not have $\pi^* \sigma \in |\mathcal{I}|^s\mathcal{L}^p_{0,q-1}(G)$. But, we can use $\pi^*\sigma$ to construct $\wt{\sigma}\in |\mathcal{I}|^s\mathcal{L}^p_{0,q-1}(G)$ such that $$\pi^*\omega-\pi^*\wt{\omega} = \overline{\partial}_a \wt{\sigma}.$$ Let $\chi \in C^\infty_{cpt}(G)$ be a smooth cut-off function with compact support in $G$ such that $\chi\equiv 1$ in a neighborhood of the exceptional set $E=X$. Now, consider \begin{eqnarray*} \delta := \pi^* \omega-\pi^*\wt{\omega} -\overline{\partial} \big((1-\chi)\pi^*\sigma\big) \in |\mathcal{I}|^s \mathcal{L}^p_{0,q}(G). \end{eqnarray*} We will now solve the equation $\overline{\partial}_a\tau=\delta$ with $\tau\in |\mathcal{I}|^s\mathcal{L}^p_{0,q-1}(G)$, and then $$\overline{\partial}_a\big( \tau + (1-\chi)\pi^* \sigma \big) = \pi^*\omega - \pi^*\wt{\omega}$$ will tell us that $[\pi^*\omega] = [\pi^*\wt{\omega}]$ in $H^q(G,\mathcal{I}^a \mathcal{O})$. The crucial point is that the form $\delta$ has compact support in $G$. 
That allows us to solve the equation $\overline{\partial}_a\tau=\delta$ by integrating over the fibers of $N$ interpreted as a holomorphic line bundle over $E=X$. That idea has already been used by E.\ S.\ Zeron and the author in \cite{RuZe}. We can define that integration locally: Let $Q\in X$. Then there exists a neighborhood $U_Q$ of $Q$ in $X$ with coordinates $z_1, ..., z_{d-1}$ such that $N$ is trivial over $U_Q$: $$N|_{U_Q} \cong U_Q\times \C.$$ So, let $z_1, ..., z_{d-1}, t$ be the coordinates on $V_Q:=N|_{U_Q}$. Then, $\delta|_{V_Q} \in |t|^s L^p_{0,q}(V_Q)$ can be written uniquely as \begin{eqnarray*} \delta|_{V_Q} = \sum_{|J|=q} g_J d\o{z_J} + \sum_{|J|=q-1} f_{J} d\o{t}\wedge d\o{z_J} \end{eqnarray*} where all the coefficients $g_J, f_{J} \in |t|^s L^p(V_Q)$ satisfy $t^{-a} g_J, t^{-a} f_J \in L^1(V_Q)$. Now, we define: \begin{eqnarray}\label{eq:fibers} \tau|_{V_Q} := \sum_{|J|=q-1} \left( \frac{t^a}{2\pi i} \int_{\C} \frac{f_J(z_1, ..., z_{d-1}, \zeta)}{\zeta^a} \frac{d\zeta \wedge d\o{\zeta}}{\zeta-t} \right) d\o{z_J}. \end{eqnarray} We have to show that this construction globally defines a form $\tau\in |\mathcal{I}|^s \mathcal{L}^p_{0,q-1}(G)$ such that $\overline{\partial}_a\tau=\delta$, where we intensively use the fact that $\delta$ has compact support. Firstly, we remark that the operator in \eqref{eq:fibers} maps continuously $$|t|^s L^p_{0,q}(V_Q\cap G) \rightarrow |t|^{s+1-\epsilon} L^p_{0,q-1}(V_Q\cap G) \subset |t|^s L^p_{0,q-1}(V_Q\cap G),$$ because $a(p,q,d)=k(p,s)$ is the $\overline{\partial}$-weight of $(p,s)$ (see \cite{Rp6}, Theorem 2.1). For $\overline{\partial}_a \tau=\delta$, we have to show that \begin{eqnarray}\label{eq:dqt} \overline{\partial}\big( t^{-a} \tau|_{V_Q}\big) = t^{-a} \delta|_{V_Q}. \end{eqnarray} Here, we have to work a little, because we are dealing with weak concepts.
Let $U\subset \C^{d-1}$ be an open set, $1\leq r \leq d$, and $\omega\in C^0_{0,r}(U\times\C)$ such that $\omega$ has compact support in the $z_d$-direction, i.e. $\supp \omega\cap F_u$ is compact in $F_u$ for all fibers $F_u=\{u\}\times\C$, $u\in U$. If $$\omega = \sum_{\substack{|J|=r-1,\\ d\notin J}} a_{dJ} d\o{z_d}\wedge d\o{z_J} + \sum_{\substack{|K|=r,\\ d\notin K}} a_K d\o{z_K}$$ (the multi-indices in ascending order), then let $${\bf S}_r^d \omega := \sum_{|J|=r-1} {\bf I}(a_{dJ})\ d\o{z_J}$$ where $${\bf I} f (z_1, ..., z_d) :=\frac{1}{2\pi i} \int_\C f(z_1, ..., z_{d-1}, t) \frac{dt \wedge d\o{t}}{t-z_d}.$$ It is not hard to see that we can use these operators ${\bf S}_r^d$ to construct a $\overline{\partial}$-homotopy formula for forms with compact support in the fibers: \begin{lem}\label{lem:dqc} Let $\omega\in C^1_{0,q}(U\times\C)$ such that $\omega$ has compact support in $z_d$. Then: $$\omega = \overline{\partial} {\bf S}^d_q \omega + {\bf S}^d_{q+1}\overline{\partial} \omega.$$ \end{lem} \begin{proof} Let $$\omega = \sum_{\substack{|J|=q-1,\\ d\notin J}} a_{dJ} d\o{z_d}\wedge d\o{z_J} + \sum_{\substack{|K|=q,\\ d\notin K}} a_K d\o{z_K}$$ and $$\overline{\partial} \omega = \sum_{\substack{|K|=q,\\ d\notin K}} c_{dK} d\o{z_d}\wedge d\o{z_K} + \cdots$$ Then we compute that: \begin{eqnarray}\label{eq:dq-closed} c_{dK}=\frac{\partial a_K}{\partial \o{z_d}} - \sum_{\substack{J\subset K,\\ |J|=q-1}} \mbox{sign} { K\setminus J\ J \choose K} \frac{\partial a_{dJ}}{\partial \o{z_{K\setminus J}}}. \end{eqnarray} By use of the inhomogeneous Cauchy-Integral Formula in one complex variable and the assumption about the support of $\omega$, we compute furthermore: \begin{eqnarray*} \overline{\partial} \big({\bf I} (a_{dJ}) d\o{z_J}\big) = a_{dJ} d\o{z_d}\wedge d\o{z_J} + \sum_{k\notin J\cup \{d\}} {\bf I}\left(\frac{\partial a_{dJ}}{\partial \o{z_k}}\right) d\o{z_k}\wedge d\o{z_J}.
\end{eqnarray*} Summing up, this leads to: $$\overline{\partial} {\bf S}_q^d \omega = \sum_{\substack{|J|=q-1,\\ d\notin J}} a_{dJ} d\o{z_d}\wedge d\o{z_J} + \sum_{\substack{|K|=q,\\ d\notin K}} {\bf I}\big(b_K\big) d\o{z_K},$$ where \begin{eqnarray*} b_K &=& \sum_{\substack{J\subset K,\\ |J|=q-1}} \mbox{sign} { K\setminus J\ J \choose K} \frac{\partial a_{dJ}}{\partial \o{z_{K\setminus J}}} = \frac{\partial a_K}{\partial \o{z_d}} - c_{dK} \end{eqnarray*} by the use of \eqref{eq:dq-closed}. But $a_K$ has compact support in $z_d$. So (using the inhomogeneous Cauchy Formula again), \begin{eqnarray*} {\bf{I}}\left(b_K\right) d\o{z_K} &=& {\bf I}\left(\frac{\partial a_K}{\partial \o{z_d}}\right) d\o{z_K} - {\bf I}\big( c_{dK} \big) d\o{z_K}\\ &=& a_K d\o{z_K} - {\bf I}\big( c_{dK} \big) d\o{z_K} , \end{eqnarray*} and we are done, because $${\bf S}_{q+1}^d \overline{\partial}\omega =\sum_{\substack{|K|=q,\\d\notin K}} {\bf I}\big( c_{dK}\big) d\o{z_K}.$$ \end{proof} Turning to $L^1$-forms, we can deduce: \begin{lem}\label{lem:dqc2} Let $R>0$ and $\omega\in L^1_{0,q}(U\times\C)$ such that $\overline{\partial}\omega=0$, and $\omega$ has support in $U\times \Delta_R(0)$, where $\Delta_R(0)$ is the disc of radius $R$ centered at $0$. Then: $$\omega = \overline{\partial} {\bf S}^d_q \omega$$ on each subset $V\times \Delta_R(0)$ where $V\subset\subset U$. \end{lem} \begin{proof} The assumption $V\subset\subset U$ allows us to disregard boundary effects. By convolution with a Dirac sequence, we obtain a sequence of smooth forms $f_j \in C^\infty_{0,q}(V\times \C)$ such that \begin{eqnarray*} \lim_{j\rightarrow \infty} f_j &=& \omega \ \ \mbox{ in }\ L^1_{0,q}(V\times \C),\\ \lim_{j\rightarrow \infty} \overline{\partial} f_j &=& 0 \ \ \mbox{ in }\ L^1_{0,q+1}(V\times \C), \end{eqnarray*} and we can assume that the $f_j$ have support in $V\times \Delta_{R+1}(0)$.
Lemma \ref{lem:dqc} tells us that $$f_j = \overline{\partial} {\bf S}^d_q f_j + {\bf S}^d_{q+1} \overline{\partial} f_j$$ for all $j$, and passing to the limit in $L^1$-spaces proves the Lemma, because the operators ${\bf S}^d_q$, ${\bf S}^d_{q+1}$ map continuously from $L^1$ to $L^1$. \end{proof} Let us return to the $\overline{\partial}$-equation \eqref{eq:dqt} that we are trying to prove. It is now an easy consequence of Lemma \ref{lem:dqc2}, because $$t^{-a} \tau|_{V_Q} = {\bf S}^d_q \big(t^{-a} \delta|_{V_Q}\big).$$ It only remains to show that $\tau$ is globally well-defined. A change of coordinates on $X$ does not affect the definition \eqref{eq:fibers}, but we have to check what happens under a change of the trivialization of $N$. So, let $w=\phi(z_1, ..., z_{d-1}) t$ and \begin{eqnarray*} \delta|_{V_Q} = \sum_{|J|=q} g_J d\o{z_J} + \sum_{|J|=q-1} \wt{f_{J}} d\o{w}\wedge d\o{z_J}. \end{eqnarray*} Then $d\o{w} = \o{\phi(z_1, ..., z_{d-1})} d\o{t}$ and $$\wt{f_J} = \o{\phi}^{-1} f_J.$$ That yields (with $\xi=\phi\zeta$): \begin{eqnarray*} \frac{t^a}{2\pi i} \int_{\C} \frac{f_J(z_1, ..., z_{d-1},\zeta)}{\zeta^a} \frac{d\zeta \wedge d\o{\zeta}}{\zeta-t} &=& \frac{\phi^{-a} w^a}{2\pi i} \int_{\C} \frac{\o{\phi} \wt{f_J}(z_1, ..., z_{d-1},\xi)}{\phi^{-a}\xi^a} \frac{|\phi|^{-2} d\xi \wedge d\o{\xi}}{\phi^{-1}(\xi-w)}\\ &=& \frac{w^a}{2\pi i} \int_{\C} \frac{\wt{f_J}(z_1, ..., z_{d-1},\xi)}{\xi^a} \frac{d\xi \wedge d\o{\xi}}{\xi-w}, \end{eqnarray*} and that shows that $\tau \in |\mathcal{I}|^s L^p_{0,q-1}(G)$ is globally well-defined by \eqref{eq:fibers}. Since $\overline{\partial}_a \tau=\delta$ as we have seen before, we have finished the proof that $[\pi^* \omega]=[\pi^*\wt{\omega}]$ in $H^q(G,\mathcal{I}^a \mathcal{O})$, and that also finishes the proof of Theorem \ref{thm:sufficient}.
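The operator ${\bf I}$ used above is the classical solid Cauchy transform, and the homotopy formula rests on the inhomogeneous Cauchy integral formula ${\bf I}(\partial f/\partial\o{z}) = f$ for data with compact (or rapidly decaying) support in the fiber variable. As a purely numerical sanity check (our own illustration, not part of the proof; the Gaussian test function, the evaluation point $z=1$ and the grid parameters are ad hoc choices), one can verify the reconstruction $f(z) = -\frac{1}{\pi}\int_{\C} \frac{(\partial f/\partial\o{\zeta})(\zeta)}{\zeta-z}\, dA(\zeta)$:

```python
import numpy as np

# Inhomogeneous Cauchy integral formula: for smooth, rapidly decaying f,
#   f(z) = -(1/pi) * Integral over C of (df/dzbar)(zeta) / (zeta - z) dA(zeta).
# Take f(zeta) = exp(-|zeta|^2), so df/dzbar = -zeta * exp(-|zeta|^2),
# and evaluate at z = 1, where f(1) = exp(-1) ~ 0.3679.
n, L = 801, 6.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
dA = (x[1] - x[0]) ** 2
dbar_f = -Z * np.exp(-np.abs(Z) ** 2)
z0 = 1.0                      # no grid node hits z0 exactly, so no division by zero
val = -(1.0 / np.pi) * np.sum(dbar_f / (Z - z0)) * dA
print(abs(val - np.exp(-1)))  # small: the 1/|zeta - z| singularity is integrable
```

Refining the grid shrinks the remaining error; the truncation to $|\zeta|\leq 6$ is negligible because of the Gaussian decay.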
Another interesting application of the integration along the fibers is Theorem \ref{thm:integration}, namely the solution of the $\overline{\partial}$-equation for forms with compact support: {\bf Theorem 1.3.} {\it Let $X$ and $Y$ be as in the introduction, $D\subset\subset Y$ an open subset, $D^*=D\setminus\{0\}$, and $1\leq p\leq \infty$, $1\leq q\leq \dim Y$. Let $$\omega \in L^p_{0,q}(D^*)\cap \ker \overline{\partial}$$ with compact support in $D$. Then there exists $\eta\in L^p_{0,q-1}(D^*)$ such that $\overline{\partial}\eta=\omega$.} \begin{proof} As before, let $G:=\pi^{-1}(D)$ and $G^*:=G\setminus E$. Let $\omega\in L^p_{0,q}(D^*)$ such that $\overline{\partial}\omega=0$, and $\omega$ has compact support in $D$. Then, by Lemma \ref{lem:ext1}, $$\pi^*\omega \in |\mathcal{I}|^s L^p_{0,q}(G),$$ where $$s=(q-1)-\frac{2d-2}{p},$$ and $\overline{\partial}_a\pi^*\omega=0$ with $a=a(p,q,d)=k(p,s)$ the $\overline{\partial}$-weight of $(p,s)$. Because $\pi^* \omega$ has compact support in $G$, we can integrate along the fibers as before and obtain $\tau \in |\mathcal{I}|^s L^p_{0,q-1} (G)$ such that $\overline{\partial}_a \tau=\pi^*\omega$. But then $$\eta:= (\pi|_{G\setminus E}^{-1})^* \tau \in L^p_{0,q-1}(D^*)$$ by Lemma \ref{lem:blowup}, and $\overline{\partial} \eta= \omega$ on $D^*$. \end{proof} As a consequence, we can now also prove Theorem \ref{thm:compact}, namely $H^1_{(p),cpt}(D^*,\mathcal{O})=0$ for $D\subset\subset Y$ as above. So, let $\omega\in L^p_{0,1}(D^*)$ be $\overline{\partial}$-closed with compact support in $D$. We can assume that $$D\subset\subset \wt{D}=Y\cap B_R(0)$$ for $R>0$ large enough, and extend $\omega$ trivially by $0$ to the whole set $\wt{D}^*=\wt{D}\setminus\{0\}$. By Theorem \ref{thm:integration}, there exists $f\in L^p(\wt{D}^*)$ such that $\overline{\partial} f= \omega$. So, $f$ is holomorphic on $\wt{D}\setminus D$ which is connected because $X$ and $Y$ are chosen irreducible.
But $Y$ is a normal complex space, because $Y$ is a complete intersection, and a complete intersection is a normal space exactly if the codimension of the singular set is $\geq 2$ (see \cite{Ab}, 12.3, or \cite{Sch2}, Korollar 4). So, $f|_{\wt{D}\setminus D}$ extends uniquely to a holomorphic function $F$ on the whole set $\wt{D}$ by Hartogs' Extension Theorem for singular spaces (see \cite{MePo2}, or \cite{Rp5}), and $$F\in \mathcal{O}(\wt{D}) \subset L^\infty_{loc}(\wt{D}).$$ But then $$f':= f- F \in L^p(D^*)$$ is the desired solution of $\overline{\partial} f' =\omega$, because $\supp f'\subset\supp\omega$ by the identity theorem for holomorphic functions. That proves Theorem \ref{thm:compact}. \section{Necessary Conditions (Theorem \ref{thm:necessary})}\label{sec:necessary} We will now prove Theorem \ref{thm:necessary}. So, let $D\subset\subset Y$ be a bounded open set such that $0\in D$, $D^*=D\setminus\{0\}$, $G=\pi^{-1}(D)$ and $G^*=\pi^{-1}(D^*)$. We will use the exhaustion function $\rho:=\|\cdot\|^2\circ \pi$ which is strictly plurisubharmonic on $N$ outside the zero section $X$. Then there exist $\epsilon>0$ and $\delta>0$ such that $$G_\epsilon \subset \subset G \subset \subset G_\delta,$$ where $$G_\epsilon=\{z\in N: \rho(z)<\epsilon\}$$ and $$G_\delta = \{z\in N:\rho(z)<\delta\}$$ are smoothly bounded strongly pseudoconvex neighborhoods of the zero section in $N$. We can again use the fact that $$\bigoplus_{\mu\geq c(p,q,d)} H^q(X,\mathcal{O}(N^{-\mu})) \cong H^q(G_\epsilon, \mathcal{I}^{c(p,q,d)} \mathcal{O})$$ by Theorem 5.1 in \cite{Rp4}. We must now clarify what the definition of $c(p,q,d)$ means for us: \begin{lem}\label{lem:c} There exists $0<\nu<1$ such that the following is true: Let $$t=(q-1)-\frac{2d-2}{p} + \nu,$$ and $k(p,t)$ the $\overline{\partial}$-weight of $(p,t)$.
Then: $$c(p,q,d) = k(p,t).$$ \end{lem} \begin{proof} When we represent the $\overline{\partial}$-weight $k(p,t)$ by the formula in Lemma \ref{lem:k1}, then it is easy to see that there exists $0<\nu<1$ such that \begin{eqnarray*} k(p,t) &=& \max\left\{\begin{array}{ll} m\in \Z: m < q + 1 -\frac{2d}{p} + \nu& , p\neq 1\\ m\in \Z: m\leq q+1 -\frac{2d}{p} + \nu&, p=1 \end{array}\right\}\\ &=& \max\{m\in\Z: m\leq q+1 -2d/p\} = c(p,q,d). \end{eqnarray*} One just has to choose $\nu>0$ small enough. \end{proof} We will abbreviate $c(p,q,d)$ by $c$ from now on. By use of Lemma \ref{lem:c}, the exact sequence in Theorem \ref{thm:main} tells us that a class $[\omega]\in H^q(G_\epsilon, \mathcal{I}^c \mathcal{O})$ can be represented by a form \begin{eqnarray*} \omega_1 \in |\mathcal{I}|^{t} \mathcal{L}^p_{0,q}(G_\epsilon). \end{eqnarray*} But, we will see that there also exists \begin{eqnarray*} \omega_2 \in |\mathcal{I}|^{t} \mathcal{L}^p_{0,q}(G_\delta) \end{eqnarray*} such that $[\omega_2|_{G_\epsilon}] = [\omega_1] =[\omega] \in H^q(G_\epsilon, \mathcal{I}^c\mathcal{O})$. That follows from the following consideration: As in the beginning of the proof of Theorem \ref{thm:sufficient}, Grauert's bump method (see \cite{LiMi}, chapter IV.7) tells us that the mapping \begin{eqnarray}\label{eq:bumps} H^q(G_\delta, \mathcal{I}^c \mathcal{O}) \rightarrow H^q(G_\epsilon, \mathcal{I}^c \mathcal{O}) \end{eqnarray} induced by restriction of forms is surjective. For later use, we remark that it is in fact an isomorphism because the groups under consideration are of equal finite dimension.\\ Now, let \begin{eqnarray*} s:=q -\frac{2d-2}{p} = t + (1-\nu). \end{eqnarray*} We will show that we can also assume \begin{eqnarray*} \omega_2 \in |\mathcal{I}|^s \mathcal{L}^p_{0,q}(\wt{G}), \end{eqnarray*} where $G\subset\subset \wt{G} \subset\subset G_\delta$.
This follows from the fact that we can solve the $\overline{\partial}_c$-equation locally from $$|\mathcal{I}|^t \mathcal{L}^p_{0,q}$$ into $$|\mathcal{I}|^{t+1-\nu} \mathcal{L}^p_{0,q-1} = |\mathcal{I}|^s \mathcal{L}^p_{0,q-1}.$$ That is well-known at points not on the exceptional set $X=E$, because at such points we only have to solve from $L^p_{0,q}$ into $L^p_{0,q-1}$. At points on the exceptional set, it follows from Theorem 2.1 in \cite{Rp4}, because $c$ is the $\overline{\partial}$-weight of $(p,t)$. So, cover a domain which is slightly smaller than $G_\delta$ by finitely many domains $\{U_j\}_{j\in J}$ where we have solutions $$\overline{\partial}_c v_j = \omega_2|_{U_j}, \ \ v_j \in |\mathcal{I}|^s \mathcal{L}^p_{0,q-1}(U_j).$$ Then, let $\{\chi_j\}_{j\in J}$ be a smooth partition of unity associated to $\{U_j\}_{j\in J}$, and define $$\eta:=\sum_{j\in J} \chi_j v_j \in |\mathcal{I}|^s \mathcal{L}^p_{0,q-1}(\wt{G}),$$ where $\wt{G}:=\bigcup U_j$. Then, we calculate: $$\overline{\partial}_c \eta = \sum_{j\in J} \chi_j \overline{\partial}_c v_j + \sum_{j\in J} \overline{\partial} \chi_j \wedge v_j =: \omega_2 - \omega.$$ Therefore $[\omega_2]=[\omega] \in H^q(G_\epsilon,\mathcal{I}^c \mathcal{O})$ can in fact be represented by $\omega\in |\mathcal{I}|^s \mathcal{L}^p_{0,q}(\wt{G})$. Now, it follows from the last statement of Lemma \ref{lem:blowup} that $$(\pi|_{G\setminus X}^{-1})^* \omega \in L^p_{0,q}(D^*),$$ and it is clear that this form is $\overline{\partial}$-closed in the sense of distributions on $D^*$. Hence, $\omega$ determines a class in $H^q_{(p)}(D^*,\mathcal{O})$.
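As an aside, the integer indices $a(p,q,d)$ and $c(p,q,d)$ that govern the sufficient and necessary conditions are elementary to compute. The following snippet (our own illustration; it only encodes the two max-formulas $a(p,q,d)=\max\{k: k<1+q-2d/p\}$, with ``$\leq$'' for $p=1$, and $c(p,q,d)=\max\{m: m\leq 1+q-2d/p\}$ from the text) evaluates them with exact rational arithmetic, which avoids floating-point trouble at the critical exponents:

```python
from fractions import Fraction
from math import floor

# Illustration only: the integer indices a(p,q,d) and c(p,q,d) as in the text.
# Exact rational arithmetic matters: e.g. 2 - 4/(4/3) = -1 is a tie that
# floating point may spoil.
def a_weight(p, q, d):
    # a = max{k in Z : k < 1+q-2d/p} for p != 1, with "<=" instead for p = 1
    s = 1 + q - Fraction(2 * d) / Fraction(p)
    f = floor(s)
    return f if (p == 1 or f < s) else f - 1

def c_weight(p, q, d):
    # c = max{m in Z : m <= 1+q-2d/p}
    return floor(1 + q - Fraction(2 * d) / Fraction(p))

# Thresholds for d = 2, q = 1, at the critical exponents of the examples below:
print(a_weight(Fraction(4, 3), 1, 2), c_weight(Fraction(4, 3), 1, 2))  # -2 -1
print(a_weight(2, 1, 2), c_weight(2, 1, 2))                            # -1 0
print(a_weight(4, 1, 2), c_weight(4, 1, 2))                            # 0 1
```

The gaps between the two indices at the critical exponents $p=4/3$, $p=2$ and $p=4$ account exactly for the undetermined cases in the dimension tables of the examples in the last section.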
We have to show that this assignment defines in fact a mapping from $H^q(G_\epsilon,\mathcal{I}^c \mathcal{O})$ into $H^q_{(p)}(D^*,\mathcal{O})$.\\ Because \eqref{eq:bumps} is an isomorphism, we only have to consider what happens if the class $[\omega]\in H^q(G_\epsilon,\mathcal{I}^c\mathcal{O})$ is given by a different form $$\omega_2' \in |\mathcal{I}|^t \mathcal{L}^p_{0,q}(G_\delta)$$ such that $$\omega_2 - \omega_2' = \overline{\partial}_c \vartheta$$ with $\vartheta \in |\mathcal{I}|^t \mathcal{L}^p_{0,q-1}(G_\delta)$. Now, construct $\omega' \in |\mathcal{I}|^s\mathcal{L}^p_{0,q}(\wt{G})$ from $\omega_2'$ analogously to the construction of $\omega$ (from $\omega_2$). Then, it follows that \begin{eqnarray*} \omega-\omega' &=& - \overline{\partial}_c \eta + \overline{\partial}_c \eta' +\omega_2 - \omega_2'\\ &=& - \overline{\partial}_c \eta + \overline{\partial}_c \eta' + \overline{\partial}_c \vartheta|_{\wt{G}} = \overline{\partial}_c \Delta, \end{eqnarray*} with $$\Delta := \eta'-\eta + \vartheta|_{\wt{G}} \in |\mathcal{I}|^t \mathcal{L}^p_{0,q-1}(\wt{G}).$$ Furthermore, we get $$(\pi|_{G\setminus X}^{-1})^* \omega - (\pi|_{G\setminus X}^{-1})^* \omega' = \overline{\partial} (\pi|_{G\setminus X}^{-1})^* \Delta,$$ where $(\pi|_{G\setminus X}^{-1})^* \Delta \in L^p_{0,q-1}(D^*)$ by the last statement of Lemma \ref{lem:blowup}, because $$t\geq s-1=(q-1) -\frac{2d-2}{p}.$$ This shows that $\omega$ and $\omega'$ determine the same class in $H^q_{(p)}(D^*,\mathcal{O})$, and hence our mapping is well-defined. It remains to show that it is injective. That can be done by integration along the fibers. So, assume that $$(\pi|_{G\setminus X}^{-1})^* \omega = \overline{\partial} \alpha$$ on $D^*$ where $\alpha \in L^p_{0,q-1}(D^*)$. Let $\chi \in C^\infty_{cpt}(D)$ be a smooth cut-off function with compact support which is identically $1$ in a neighborhood of the origin. 
Then: $$\beta := \omega - \overline{\partial} \pi^* \big( (1-\chi) \alpha\big) \in |\mathcal{I}|^t \mathcal{L}^p_{0,q}(G_\epsilon)$$ has compact support in $G_\epsilon$ and is $\overline{\partial}_c$-closed. Thus, integration along the fibers of $N$ as a holomorphic line bundle over $X$ (as in section \ref{sec:fibers}) gives $$\gamma \in |\mathcal{I}|^t \mathcal{L}^p_{0,q-1}(G_\epsilon)$$ such that $$\overline{\partial}_c\gamma=\beta,\ \ \ \mbox{ and }\ \ \overline{\partial}_c \left(\gamma + \pi^* \big( (1-\chi) \alpha\big)\right)= \omega,$$ where $$\gamma + \pi^* \big( (1-\chi) \alpha\big) \in |\mathcal{I}|^t \mathcal{L}^p_{0,q-1}(G_\epsilon).$$ Here, one should recall that $c=k(p,t)$ by Lemma \ref{lem:c}. So, Theorem \ref{thm:main} yields $[\omega]=0$ in $H^q(G_\epsilon,\mathcal{I}^c \mathcal{O})$, as we intended to show. That completes the proof of Theorem \ref{thm:necessary}. \section{Examples and Applications}\label{sec:examples} As a direct consequence of Theorem \ref{thm:sufficient}, we obtain: \begin{thm}\label{thm:bounded} Let $X$, $Y$ and $N$ be as in the introduction, $D\subset\subset Y$ strongly pseudoconvex such that $0\in D$, $D^*=D\setminus\{0\}$, $1\leq p\leq \infty$, $1\leq q\leq d=\dim Y$. Set $$a(p,q,d) := \left\{\begin{array}{ll} \max\{k\in\Z: k< 1+q- 2d/p\} &, p \neq 1,\\ \max\{k\in\Z: k\leq 1+ q -2d/p\} &, p=1, \end{array}\right.$$ and assume that $$\bigoplus_{\mu\geq a(p,q,d)} H^q(X,\mathcal{O}(N^{-\mu}))=0.$$ Then there exists a bounded linear operator $${\bf S}_q: L^p_{0,q}(D^*)\cap \ker \overline{\partial} \rightarrow L^p_{0,q-1}(D^*)$$ such that $\overline{\partial} {\bf S}_q \omega= \omega$. \end{thm} \begin{proof} Theorem \ref{thm:sufficient} tells us that $$H^q_{(p)}(D^*,\mathcal{O})=0.$$ Now, the statement follows by a standard technique based on the open mapping theorem (see for example \cite{FOV1}, Lemma 4.2), because $L^p_{0,q}(D^*)$ and $L^p_{0,q-1}(D^*)$ are Banach spaces.
\end{proof} Let us take a look at two simple examples in the case $d=\dim Y=2$. Then, $X$ is a compact Riemann surface. First, let us assume that the genus $g(X)=0$, hence $X\cong \C\mathbb{P}^1$. Let $z_0\in X$ be an arbitrary point and $D= - (z_0)$ the associated divisor. Then it follows that \begin{eqnarray*} H^j(X,\mathcal{O}(\mu D))\cong H^j(X,\mathcal{O}(N^\mu)) \end{eqnarray*} for all $j\geq 0$ and $\mu\in\Z$. It is well-known (and easy to calculate by power series) that $$l(\mu):=\dim H^0(X,\mathcal{O}(-\mu D)) = \left\{ \begin{array}{ll} 1 + \mu&,\mbox{ for } \mu\geq -1,\\ 0&,\mbox{ for } \mu \leq -2, \end{array}\right.$$ because $H^0(X,\mathcal{O}(-\mu D))$ is the space of meromorphic functions on $X$ with at most a pole of order $\mu$ at $z_0$. Hence, we calculate by the Theorem of Riemann-Roch that \begin{eqnarray*} - \dim H^1(X,\mathcal{O}(N^{-\mu})) = \deg(-\mu D) + 1 - g(\C\mathbb{P}^1) - l(\mu) = \left\{ \begin{array}{ll} 0&,\mbox{ for } \mu\geq -1,\\ \mu + 1&,\mbox{ for } \mu \leq -2. \end{array}\right. \end{eqnarray*} Therefore, Theorem \ref{thm:sufficient} and Theorem \ref{thm:necessary} tell us that \begin{eqnarray*} \dim H^1_{(p)} (D^*,\mathcal{O}) \left\{ \begin{array}{ll} =0&, \mbox{ if } p>4/3,\\ \leq 1&, \mbox{ if } p=4/3,\\ =1&, \mbox{ if } p<4/3, \end{array}\right. \end{eqnarray*} if $D$ is a strongly pseudoconvex neighborhood of the origin in $Y$. An important example for such a variety is $Y=\{(x,y,z)\in\C^3: xy=z^2\}$.\\ As a second example, we use the same construction, but assume that $X$ is an elliptic curve in $\C\mathbb{P}^{n-1}$. Here, $H^0(X,\mathcal{O}(-\mu D))$ is the space of elliptic functions with at most a pole of order $\mu$ at $z_0$. So, it is well-known that we have $$l(\mu):=\dim H^0(X,\mathcal{O}(-\mu D)) =\left\{ \begin{array}{ll} 0 &, \mbox{ for } \mu \leq -1,\\ 1 &, \mbox{ for } \mu = 0,\\ \mu &, \mbox{ for } \mu\geq 1.
\end{array}\right.$$ Using the Theorem of Riemann-Roch again, we calculate \begin{eqnarray*} - \dim H^1(X,\mathcal{O}(N^{-\mu})) = \deg(-\mu D) + 1 - g(X) - l(\mu) = \left\{ \begin{array}{ll} \mu &, \mbox{ for } \mu \leq -1,\\ -1 &, \mbox{ for } \mu = 0,\\ 0 &, \mbox{ for } \mu\geq 1. \end{array}\right. \end{eqnarray*} Therefore, Theorem \ref{thm:sufficient} and Theorem \ref{thm:necessary} tell us that \begin{eqnarray*} \dim H^1_{(p)} (D^*,\mathcal{O}) \left\{ \begin{array}{ll} =0&, \mbox{ if } p>4,\\ \in\{0, 1\}&, \mbox{ if } p=4,\\ =1&, \mbox{ if } 4>p>2,\\ \in \{1,2\}&, \mbox{ if } p=2,\\ =2&, \mbox{ if } 2>p> 4/3,\\ \in\{2,3,4\} &, \mbox{ if } p=4/3,\\ =4&, \mbox{ if } 4/3> p, \end{array}\right. \end{eqnarray*} if $D$ is a strongly pseudoconvex neighborhood of the origin. Examples are the varieties $Y=\{(x,y,z)\in\C^3: y^2z=x^3+axz^2 + bz^3\}$ for suitable values of $a$, $b$.\\ Let us return to the first example $X \cong \C\mathbb{P}^1$. Combining that consideration with Theorem \ref{thm:necessary}, we obtain: \begin{thm}\label{thm:l2} Let $X$ and $Y$ be as in the introduction, $\dim Y=2$ and $D\subset\subset Y$ strongly pseudoconvex such that $0\in D$, $D^*=D\setminus\{0\}$. Then: $$H^1_{(2)}(D^*,\mathcal{O})=0\ \ \Leftrightarrow\ \ X\cong \C\mathbb{P}^1.$$ \end{thm} \begin{proof} By assumption, $X$ is a compact Riemann surface. If $X\cong \C\mathbb{P}^1$ then $H^1_{(2)}(D^*,\mathcal{O})=0$ by the considerations above. Conversely, if $H^1_{(2)}(D^*,\mathcal{O})=0$, then $H^1(X,\mathcal{O})=0$ by Theorem \ref{thm:necessary}, and that implies that $X\cong\C\mathbb{P}^1$. 
\end{proof} This example, namely the groups $H^1_{(2)}$ at isolated singularities in codimension two, is of special interest because of the following Extension Theorem of Scheja (see \cite{Sch1,Sch2}), which settles the case of higher codimension: \begin{thm}\label{thm:scheja1} Let $Y$ be a closed pure dimensional analytic subset in $\C^n$ which is locally a complete intersection, and $A$ a closed pure dimensional analytic subset of $Y$. Then, the natural restriction mapping $$H^q(Y,\mathcal{O}_Y) \rightarrow H^q(Y\setminus A,\mathcal{O}_{Y\setminus A})$$ is bijective for all $0\leq q\leq \dim Y-\dim A-2$. \end{thm} Using this result, our integration along the fibers yields: \begin{thm}\label{thm:scheja2} Let $X$ and $Y$ be as in the introduction, and $D\subset\subset Y$ strongly pseudoconvex such that $0\in D$, $D^*=D\setminus\{0\}$, $1\leq p\leq \infty$, and $$1\leq q\leq d-2=\dim Y-2.$$ Then: $$H^q_{(p)}(D^*,\mathcal{O})=0.$$ \end{thm} \begin{proof} As in the beginning of section \ref{sec:sufficient}, assume that $$D\cap U =\{z\in U:\rho(z)<0\}$$ where $\rho\in C^2(U)$ is a regular strictly plurisubharmonic defining function on a neighborhood $U$ of $bD$, and that there exists $\epsilon>0$ such that $$D_\epsilon:= D\cup \{z\in U:\rho(z)<\epsilon\}$$ is a strongly pseudoconvex extension of $D$. So, it follows by Grauert's bump method that the natural homomorphism \begin{eqnarray}\label{eq:bump11} r_q: H^q_{(p)}(D_\epsilon^*,\mathcal{O}) \rightarrow H^q_{(p)}(D^*,\mathcal{O}) \end{eqnarray} (induced by restriction of forms) is surjective (see \cite{LiMi}, chapter IV.7). Here, we also set $D_\epsilon^*=D_\epsilon\setminus\{0\}$. We need to observe that $D_\epsilon$ is a Stein domain. But that follows from the fact that $D_\epsilon$ is a bounded strongly pseudoconvex domain in the Stein space $Y$ (see \cite{Na2}).
Moreover, $Y$ is a complete intersection, and so Theorem \ref{thm:scheja1} tells us that \begin{eqnarray}\label{eq:stein} H^q(D^*_\epsilon,\mathcal{O})=0. \end{eqnarray} Now, let $[\omega]\in H^q_{(p)}(D^*,\mathcal{O})$ be represented by the $\overline{\partial}$-closed form $\omega\in L^p_{0,q}(D^*)$. Because \eqref{eq:bump11} is surjective, we can assume that $$\omega\in L^p_{0,q}(D^*_\epsilon).$$ But now \eqref{eq:stein} tells us that there exists $$\eta\in L^p_{(0,q-1),loc}(D^*_\epsilon)$$ such that $\overline{\partial} \eta=\omega$ on $D^*_\epsilon$. So, choose a smooth cut-off function $\chi\in C^\infty_{cpt}(D)$ with compact support in $D$ such that $\chi$ is identically $1$ in a neighborhood of the origin. It follows that $[\omega]\in H^q_{(p)}(D^*,\mathcal{O})$ can be represented by $$\tau:= \omega - \overline{\partial}\big((1-\chi) \eta\big) \in L^p_{0,q}(D^*)$$ which has compact support in $D$. The integration along the fibers which we have already used in the sections \ref{sec:fibers} and \ref{sec:necessary} tells us that we can use the blow up of $Y$ to produce a solution $$\sigma \in L^p_{0,q-1}(D^*)$$ such that $\overline{\partial} \sigma=\tau$ (see Theorem \ref{thm:integration}), and that finishes the proof. \end{proof} Combining Theorem \ref{thm:scheja2} with Theorem \ref{thm:necessary} (for $p=1$), we obtain immediately: \begin{thm}\label{thm:scheja3} Let $X$, $Y$ and $N$ be as in the introduction, and $$1\leq q\leq d-2=\dim Y-2.$$ Then it follows that $$H^q(X,\mathcal{O}(N^{-\mu}))=0$$ for all $\mu\geq 1+q -2d$. \end{thm} Similarly, we can deduce from Theorem \ref{thm:necessary}: \begin{thm}\label{thm:extension} Let $N$ be the universal bundle over $\C\mathbb{P}^{k}$ for $k\geq 1$, and $1\leq q \leq k$. 
Then: $$H^q(\C\mathbb{P}^k, \mathcal{O}(N^{-\mu}))=0\ \ \ \mbox{ for all } \mu\geq q-2k.$$ \end{thm} \begin{proof} Note that we have at no place assumed that $X$ is a proper subset of $\C\mathbb{P}^{n-1}$, respectively that $Y$ should be a proper subset of $\C^n$. So, in the setting of the introduction let $X=\C\mathbb{P}^{n-1}=\C\mathbb{P}^k$ and $Y=\C^n=\C^{k+1}$. Moreover, let $D$ be the unit ball in $Y$ and $p=2n/(2n-1)$. Now, if $\omega \in L^p_{0,q}(D^*)$ is a $\overline{\partial}$-closed form on the punctured ball, then the $\overline{\partial}$-Extension Theorem 3.2 in \cite{Rp4} tells us that in fact $\omega$ defines a $\overline{\partial}$-closed $L^p$-form on the whole ball. So, there exists $\eta\in L^p_{0,q-1}(D)$ such that $\overline{\partial}\eta=\omega$ (see \cite{Kr}), and it follows that $H^q_{(p)}(D^*,\mathcal{O})=0$ for all $1\leq q \leq k+1$. Thus, Theorem \ref{thm:necessary} implies that $H^q(\C\mathbb{P}^k, \mathcal{O}(N^{-\mu}))=0$ for all \begin{eqnarray*} \mu &\geq& q+1 -\frac{2n}{p} = q+1 - 2n + 1 = q-2k. \end{eqnarray*} \end{proof} {\bf Acknowledgments} This work was done while the author was visiting the University of Michigan at Ann Arbor, supported by a fellowship within the Postdoc-Programme of the German Academic Exchange Service (DAAD). The author would like to thank the SCV group at the University of Michigan for its hospitality. \end{document}
\begin{document} \title{\sffamily Bregman distances and Chebyshev sets} \author{Heinz H.\ Bauschke\thanks{Mathematics, Irving K.\ Barber School, The University of British Columbia Okanagan, Kelowna, B.C. V1V 1V7, Canada. Email: \texttt{[email protected]}.},~ Xianfu Wang\thanks{Mathematics, Irving K.\ Barber School, The University of British Columbia Okanagan, Kelowna, B.C. V1V 1V7, Canada. Email: \texttt{[email protected]}.},~ Jane Ye\thanks{Department of Mathematics and Statistics, University of Victoria, Victoria, B.C. V8W 3P4, Canada. Email:~\texttt{[email protected]}.},~ and~Xiaoming Yuan\thanks{Department of Management Science, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, 200052, China. Email: \texttt{[email protected]}.}} \date{December 24, 2007} \maketitle \vskip 8mm \begin{abstract} \noindent A closed set of a Euclidean space is said to be Chebyshev if every point in the space has one and only one closest point in the set. Although the situation is not settled in infinite-dimensional Hilbert spaces, in 1932 Bunt showed that in Euclidean spaces a closed set is Chebyshev if and only if the set is convex. In this paper, from the more general perspective of Bregman distances, we show that if every point in the space has a unique nearest point in a closed set, then the set is convex. We provide two approaches: one is by nonsmooth analysis; the other by maximal monotone operator theory. Subdifferentiability properties of Bregman nearest distance functions are also given. \end{abstract} {\small \noindent {\bfseries 2000 Mathematics Subject Classification:} Primary 41A65; Secondary 47H05, 49J52. \noindent {\bfseries Keywords:} Bregman distance, Bregman projection, Chebyshev set with respect to a Bregman distance, Legendre function, maximal monotone operator, nearest point, subdifferential operators. 
} \section{Introduction} Throughout, $\ensuremath{\mathbb R}^J$ is the standard Euclidean space with inner product $\scal{\cdot}{\cdot}$ and induced norm $\|\cdot\|$, and $\Gamma$ is the set of proper lower semicontinuous convex functions on $\ensuremath{\mathbb R}^J$. Let $C$ be a nonempty closed subset of $\ensuremath{\mathbb R}^J$. If each $x\in \ensuremath{\mathbb R}^J$ has a unique nearest point in $C$, the set $C$ is called Chebyshev. The famous Chebyshev set problem inquires: ``Is a Chebyshev set necessarily convex?''. It has been studied by many authors, see \cite{Edgar,jborwein,Frank1,deutsch,lewis,urruty1,urruty3,urruty4} and the references therein. Although the problem was answered in the affirmative by Bunt in 1932, we look at it from the more general point of view of Bregman distances. Let \begin{equation} \label{eq:preD} \text{$f \colon \ensuremath{\mathbb R}^J\to\ensuremath{\,\left]-\infty,+\infty\right]}$ be convex and differentiable on $U := \ensuremath{\operatorname{int}\operatorname{dom}f} \neq \ensuremath{\varnothing}$.} \end{equation} The \emph{Bregman distance} associated with $f$ is defined by \begin{equation} \label{eq:D} D\colon \ensuremath{\mathbb R}^J \times \ensuremath{\mathbb R}^J \to \left[0,+\infty\right] \colon (x,y) \mapsto \begin{cases} f(x)-f(y)-\scal{\nabla f(y)}{x-y}, &\text{if}\;\;y\in\ensuremath{U};\\ \ensuremath{+\infty}, & \text{otherwise}. \end{cases} \end{equation} Assume that $C\subset U$.
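To get a feeling for \eqref{eq:D}, the following small numerical illustration (our own toy example, not part of the development; the choice $f(x)=\sum_j e^{x_j}$ and the test points are ad hoc) checks three basic features of a Bregman distance: it is nonnegative, it vanishes on the diagonal, and it is in general not symmetric:

```python
import numpy as np

# Toy illustration of the Bregman distance
#   D(x, y) = f(x) - f(y) - <grad f(y), x - y>,
# for the smooth, strictly convex function f(x) = sum_j exp(x_j)
# (an ad-hoc choice, different from the standard examples in the paper).
def f(x):
    return float(np.sum(np.exp(x)))

def grad_f(y):
    return np.exp(y)

def D(x, y):
    return f(x) - f(y) - float(np.dot(grad_f(y), x - y))

x = np.array([0.0, 1.0])
y = np.array([1.0, -0.5])
print(D(x, y) >= 0 and D(y, x) >= 0)   # True: convexity of f forces D >= 0
print(abs(D(x, x)) < 1e-12)            # True: D vanishes on the diagonal
print(abs(D(x, y) - D(y, x)) > 0.1)    # True: D is not symmetric in general
```

For the energy $f=\tfrac{1}{2}\|\cdot\|^2$ one recovers the symmetric distance $\tfrac{1}{2}\|x-y\|^2$; the asymmetry exhibited above is the typical situation.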
It is a natural generalization of the Chebyshev problem to ask the following: \begin{quotation} \noindent ``If every $x\in U$ has --- in terms of the Bregman distance --- a unique nearest point in $C$, i.e., $C$ is Chebyshev for the Bregman distance, must $C$ be convex?'' \end{quotation} We give two approaches to our affirmative answer: one uses a beautiful property of maximal monotone operators, namely Rockafellar's virtual convexity theorem on their ranges; the other uses generalized subdifferentials from nonsmooth analysis, which allow us to characterize Chebyshev sets. We also study subdifferentiability properties of Bregman distance functions associated with closed sets. These nonsmooth analysis results are interesting in their own right, since Bregman distances have found tremendous applications in Statistics, Engineering, and Optimization; see the recent books \cite{ButIus,CenZen} and the references therein. The function $D$ does not define a metric, since it is not symmetric and does not satisfy the triangle inequality. It is thus remarkable that it is not only possible to derive many results on projections and distances similar to those obtained in finite-dimensional Euclidean spaces, but also to provide a general framework for best approximations. The paper is organized as follows. In Section~\ref{assumption}, we state our assumptions on $f$ and provide some concrete choices. In Section~\ref{geodesicscurve}, we characterize left Bregman nearest points and geodesics. We show that the Bregman normal is a proximal normal. In Section~\ref{monotone}, when $f$ is Legendre and $1$-coercive and $C$ is Chebyshev, we show that the composition of the Bregman nearest-point map and $\nabla f^*$ is maximal monotone. This allows us to apply Rockafellar's theorem on the virtual convexity of the range of a maximal monotone operator to obtain that a Chebyshev set is convex.
In Section~\ref{clarkemor}, we study subdifferentiability properties of the left Bregman distance function. Formulas for the Clarke subdifferential, the limiting subdifferential and the Dini subdifferential are given. In Section~\ref{completecheby}, we give a complete characterization of Chebyshev sets. Our approach generalizes the results given by Hiriart-Urruty \cite{urruty4,urruty1} from the Euclidean to the Bregman setting. Finally, in Section~\ref{rightchar} we show that the convexity of Chebyshev sets for right Bregman projections of $f$ can be studied by using the left Bregman projections of $f^*$. We give an example showing that even if the right Bregman projection is single-valued, the set $C$ need not be convex. \noindent{\bf Notation:} In $\ensuremath{\mathbb R}^J$, the closed ball centered at $x$ with radius $\delta>0$ is denoted by ${B}_{\delta}(x)$ and the closed unit ball is $\ensuremath{B} = B_1(0)$. For a set $S$, the expressions $\ensuremath{\operatorname{int}} S$, $\ensuremath{\operatorname{cl}} S$, $\ensuremath{\operatorname{conv}} S$ signify the interior, closure, and convex hull of $S$, respectively. For a set-valued mapping $T:\ensuremath{\mathbb R}^J\ensuremath{\rightrightarrows}\ensuremath{\mathbb R}^J$, we use $\ensuremath{\operatorname{ran}} T$ and $\ensuremath{\operatorname{dom}} T$ for its range and domain, and $T^{-1}$ for its set-valued inverse, i.e., $x\in T^{-1}(y) \Leftrightarrow y\in T(x).$ For a function $f:\ensuremath{\mathbb R}^J\rightarrow \ensuremath{\,\left]-\infty,+\infty\right]}$, $\ensuremath{\operatorname{dom}} f$ is the domain of $f$, and $f^*$ is its Fenchel conjugate; $\ensuremath{\operatorname{conv}} f$ ($\ensuremath{\operatorname{cl}}\ensuremath{\operatorname{conv}} f$) denotes the convex hull (closed convex hull) of $f$. For a differentiable function $f$, $\nabla f(x)$ and $\nabla^2 f(x)$ denote the gradient vector and the Hessian matrix at $x$. Our notation is standard and follows, e.g., \cite{Rock70,Rock98}.
\section{Standing Assumptions and Examples}\label{assumption} From now on, and until the end of Section~\ref{completecheby}, our standing assumptions on $f$ and $C$ are: \begin{itemize} \item[\bfseries A1] $f\in\Gamma$ is a convex function of Legendre type, i.e., $f$ is essentially smooth and essentially strictly convex in the sense of \cite[Section~26]{Rock70}. \item[\bfseries A2] $f$ is $1$-coercive, i.e., $\displaystyle \lim_{\|x\|\rightarrow +\infty}f(x)/\|x\|=+\infty$. An equivalent requirement is $\ensuremath{\operatorname{dom}} f^*=\ensuremath{\mathbb R}^J$ (see \cite[Theorem~11.8(d)]{Rock98}). \item[\bfseries A3] The set $C$ is a nonempty closed subset of $\ensuremath{U}$. \end{itemize} Important instances of functions satisfying the above conditions are: \begin{example} Let $x=(x_j)_{1\leq j\leq J}$ and $y=(y_j)_{1\leq j\leq J}$ be two points in $\ensuremath{\mathbb R}^J$. \begin{enumerate} \item \emph{Energy:} If $f\colon x\mapsto\tfrac{1}{2}\|x\|^2$, then $U=\ensuremath{\mathbb R}^J$, \begin{equation*} D(x,y) = \frac{1}{2}\|x-y\|^2, \end{equation*} and $\ensuremath{\nabla^2\!} f(x)=\ensuremath{\operatorname{Id}}$ for every $x\in\ensuremath{\mathbb R}^J$. Note that $f^*(x)=\frac{1}{2}\|x\|^2$, $\ensuremath{\operatorname{dom}} f^*=\ensuremath{\mathbb R}^J$, and $\ensuremath{\nabla^2\!} f^*=\ensuremath{\operatorname{Id}}$. \item \label{ex:examples:KL} \emph{Boltzmann-Shannon entropy:} Let $f\colon x\mapsto\sum_{j=1}^{J}x_j\ln(x_j)-x_j$ if $x\geq 0$, and $+\infty$ otherwise. Here $x\geq 0$ means $x_{j}\geq 0$ for $1\leq j\leq J$ and similarly for $x> 0$, and $0\ln 0=0$. Then $U=\{x\in \ensuremath{\mathbb R}^J\colon x > 0\}$, and \begin{equation*} D(x,y) = \begin{cases} \textstyle \sum_{j=1}^J x_j \ln(x_j/y_j) - x_j + y_j, & \text{if $x \geq 0$ and $y>0$;}\\ \ensuremath{+\infty}, & \text{otherwise} \end{cases} \end{equation*} is the so-called \emph{Kullback-Leibler divergence}.
Note that $$\ensuremath{\nabla^2\!} f(x)= \begin{pmatrix} 1/x_{1} & 0 & \cdots & 0& 0\\ 0 & 1/x_{2} & 0& \cdots & 0\\ \vdots & 0 & \ddots &0 & 0 \\ 0 & \ldots & 0 & 1/x_{J-1}& 0\\ 0 & \ldots &0 & 0 & 1/x_{J} \end{pmatrix}, $$ that $ f^*(x)=\sum_{j=1}^{J}e^{x_{j}}$ with $\ensuremath{\operatorname{dom}} f^*=\ensuremath{\mathbb R}^{J}$, and that $$\ensuremath{\nabla^2\!} f^*(x)= \begin{pmatrix} e^{x_{1}} & 0 & \cdots & 0& 0\\ 0 & e^{x_{2}} & 0& \cdots & 0\\ \vdots & 0 & \ddots & 0& 0 \\ 0 & \ldots & 0 & e^{x_{J-1}}& 0\\ 0 & \ldots &0 & 0 & e^{x_{J}} \end{pmatrix}. $$ \item \emph{Fermi-Dirac entropy}: If $f:x\mapsto \sum_{j=1}^{J}x_{j}\ln x_{j}+(1-x_{j})\ln (1-x_{j})$, then $U=\{x\in \ensuremath{\mathbb R}^J: 0<x<1\}$ and $$ D(x,y) = \begin{cases} \textstyle \sum_{j=1}^J x_j \ln(x_j/y_j)+(1-x_{j})\ln((1-x_{j})/(1-y_{j})), & \text{if $0\leq x \leq 1$ and $0<y<1$;}\\ \ensuremath{+\infty}, & \text{otherwise.} \end{cases} $$ While $$\ensuremath{\nabla^2\!} f(x)= \begin{pmatrix} \frac{1}{x_{1}(1-x_{1})} & 0 & \cdots & 0\\ 0 & \frac{1}{x_{2}(1-x_{2})}& 0& 0 \\ \vdots & 0 & \ddots & 0\\ 0 & \ldots & 0 & \frac{1}{x_{J}(1-x_{J})}\\ \end{pmatrix}, \quad \forall 0<x<1, x\in \ensuremath{\mathbb R}^{J}, $$ we have $f^*(x)=\sum_{j=1}^{J}\ln (1+e^{x_{j}})$ with $$\ensuremath{\nabla^2\!} f^*(x)= \begin{pmatrix} \frac{e^{x_{1}}}{(1+e^{x_{1}})^2} & 0 & \cdots & 0\\ 0 & \frac{e^{x_{2}}}{(1+e^{x_{2}})^2} & 0& 0\\ \vdots & 0 & \ddots & 0\\ 0 & \ldots & 0 & \frac{e^{x_{J}}}{(1+e^{x_{J}})^2}\\ \end{pmatrix}, \quad \forall x\in \ensuremath{\mathbb R}^J. $$ \item In general, we can let $f\colon x\mapsto\sum_{i=1}^J \phi(x_{i})$ where $\phi:\ensuremath{\mathbb R}\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ is a Legendre function.
Then $U=(\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}}\phi)^{J}$, $$D(x,y)=\sum_{j=1}^{J}\phi(x_j)-\phi(y_{j})-\phi'(y_j)(x_j-y_j),\quad \forall\; x\in\ensuremath{\mathbb R}^J,y\in U.$$ In particular, one can use $\phi(t)=|t|^p/p$ with $p>1$. \end{enumerate} \end{example} The following result (see \cite[Theorem~26.5]{Rock70}) plays an important role in the sequel. \begin{fact}[Rockafellar]\label{isom} A convex function $f$ is of Legendre type if and only if $f^*$ is. In this case, the gradient mapping $$\nabla f:U \rightarrow\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*:x\mapsto \nabla f(x),$$ is a topological isomorphism with inverse mapping $(\nabla f)^{-1}=\nabla f^*$. \end{fact} \section{Bregman Distances and Projection Operators}\label{geodesicscurve} We start with \begin{definition} The \emph{left Bregman nearest-distance function} to $C$ is defined by \begin{equation} \bD{C}\colon U\to \ensuremath{\left[0,+\infty\right[} \colon y\mapsto \inf_{x\in C}D(x,y), \end{equation} and the \emph{left Bregman nearest-point map} (i.e., the classical Bregman projector) onto $C$ is $$\bproj{C}\colon \ensuremath{U} \to \ensuremath{U} \colon y\mapsto \underset{x\in C}{\operatorname{argmin}}\;\: D(x,y) = \{x\in C\colon D(x,y) = \bD{C}(y)\}.$$ \end{definition} The \emph{right Bregman distance} and \emph{right Bregman projector} onto $C$ are defined analogously and denoted by $\fD{C}$ and $\fproj{C}$, respectively. Note that while in \cite{noll} the authors consider proximity operators associated with a convex set $C$, here our set $C$ need not be convex and we do not assume that $D(\cdot,\cdot)$ is jointly convex. We shall often need the following identity \begin{equation}\label{3point} D(c,y)-D(x,y)=f(c)-f(x)-\langle\nabla f(y),c-x\rangle, \end{equation} which is an immediate consequence of the definition.
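The identity \eqref{3point} can be sanity-checked numerically; a minimal sketch using the Boltzmann-Shannon entropy of Example~\ref{ex:examples:KL} (the helper names are ours, not from the text):

```python
from math import log

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
sub = lambda u, v: [a - b for a, b in zip(u, v)]

# Boltzmann-Shannon entropy on the positive orthant.
f = lambda x: sum(t * log(t) - t for t in x)
grad_f = lambda x: [log(t) for t in x]
D = lambda x, y: f(x) - f(y) - dot(grad_f(y), sub(x, y))

c, x, y = [0.7, 1.3, 2.0], [2.5, 0.4, 1.1], [1.0, 3.0, 0.6]
lhs = D(c, y) - D(x, y)                          # left-hand side of (3point)
rhs = f(c) - f(x) - dot(grad_f(y), sub(c, x))    # right-hand side of (3point)
assert abs(lhs - rhs) < 1e-12
```

The point of the identity is that the terms involving $f(y)$ cancel, so the difference $D(c,y)-D(x,y)$ depends on $y$ only through $\nabla f(y)$; the check confirms this to machine precision.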
Our first result characterizes the left Bregman nearest point. \begin{proposition}\label{near} Let $x\in C$ and $y\in U$. \begin{enumerate} \item[{\rm (i)}] Then \begin{equation}\label{same}{x}\in \bproj{C}(y)\quad \Leftrightarrow \quad D(c,x)\geq \langle \nabla f(y)-\nabla f(x), c-x\rangle \quad \forall c\in C. \end{equation} If $C$ is convex, then \begin{equation}\label{convexpart} {x}\in \bproj{C}(y)\quad \Leftrightarrow \quad 0\geq \langle \nabla f(y)-\nabla f(x), c-x\rangle \quad \forall c\in C. \end{equation} \item[{\rm (ii)}] Suppose that $x\in \bproj{C}(y)$. Then the Bregman projection of \begin{equation}\label{geodesics} z_{\lambda}=\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x)) \mbox{ with $0\leq \lambda< 1$,} \end{equation} on $C$ is a singleton with \begin{equation}\label{25:i} \bproj{C}(z_{\lambda})=x. \end{equation} If $C$ is convex, \eqref{25:i} holds for every $\lambda\geq 0$. \end{enumerate} \end{proposition} \begin{proof} (i): By definition, $x\in \bproj{C}(y)$ if and only if $$0\leq D(c,y)-D(x,y) \quad \forall c\in C;$$ equivalently, $f(c)-f(x)\geq \langle \nabla f(y), c-x\rangle$ by (\ref{3point}). Subtracting $\langle \nabla f(x), c-x\rangle$ from both sides, we obtain $$D(c,x)\geq \langle \nabla f(y)-\nabla f(x),c-x\rangle.$$ Hence (\ref{same}) holds. The convex counterpart \eqref{convexpart} is well known and follows, e.g., from \cite[Proposition~3.16]{Baus97}. (ii): Assume that $x\in\bproj{C}(y)$ and $z_{\lambda}=\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x))$ with $0\leq \lambda <1$. Then by (\ref{same}), \begin{equation}\label{charac} D(c,x)\geq \langle \nabla f(y)-\nabla f(x), c-x\rangle \quad \forall c\in C. \end{equation} Take $c\in C$.
By Fact~\ref{isom}, $\nabla f\circ\nabla f^*=\ensuremath{\operatorname{Id}}$, so we have \begin{align}\label{relation} &\langle\nabla f(z_{\lambda})-\nabla f(x),c-x\rangle\\ &= \langle\nabla f \circ\nabla f^*(\lambda\nabla f(y)+(1-\lambda)\nabla f(x))-\nabla f(x),c-x\rangle\\ &=\langle (\lambda\nabla f(y)+(1-\lambda)\nabla f(x))-\nabla f(x),c-x\rangle\\ &=\lambda \langle \nabla f(y)-\nabla f(x),c-x\rangle. \end{align} If $\langle\nabla f(y)-\nabla f(x),c-x\rangle\leq 0,$ then $$\lambda \langle\nabla f(y)-\nabla f(x),c-x\rangle\leq 0\leq D(c,x);$$ if $\langle\nabla f(y)-\nabla f(x),c-x\rangle\geq 0$, then using $0\leq\lambda <1$ and (\ref{charac}), $$\lambda \langle\nabla f(y)-\nabla f(x),c-x\rangle\leq\langle\nabla f(y)-\nabla f(x),c-x\rangle \leq D(c,x).$$ In either case, by (\ref{relation}) we have $$\langle\nabla f(z_{\lambda})-\nabla f(x),c-x\rangle \leq D(c,x) \quad \forall c\in C.$$ Hence $x\in \bproj{C}(z_{\lambda})$ by (\ref{same}). We proceed to show that $\bproj{C}(z_{\lambda})$ is a singleton. If $\lambda=0$, then $z_{\lambda}=x$ and $\bproj{C}(x)=\{x\}$ by the strict convexity of $f$. It remains to consider the case $0<\lambda<1$. Let $\hat{x}\in \bproj{C}(z_{\lambda})$. Then $D(x,z_{\lambda})=D(\hat{x},z_{\lambda})$, which is $$f(\hat{x})-f(x)-\langle \nabla f(z_{\lambda}),\hat{x}-x\rangle =0,$$ by (\ref{3point}).
Using $z_{\lambda}=\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x)),$ we have $$f(\hat{x})-f(x)-\langle \lambda \nabla f(y)+(1-\lambda)\nabla f(x),\hat{x}-x\rangle=0,$$ so that $$\lambda [f(\hat{x})-f(x)-\langle \nabla f(y),\hat{x}-x\rangle]+(1-\lambda)[f(\hat{x})-f(x)-\langle \nabla f(x),\hat{x}-x\rangle]=0, $$ and $$\lambda [f(x)-f(\hat{x})-\langle \nabla f(y),x-\hat{x}\rangle] =(1-\lambda)[f(\hat{x})-f(x)-\langle \nabla f(x),\hat{x}-x\rangle]. $$ This gives, by (\ref{3point}), $\lambda [D(x,y)-D(\hat{x},y)]=(1-\lambda)D(\hat{x},x)$ and hence $$D(x,y)-D(\hat{x},y)=\frac{1-\lambda}{\lambda}D(\hat{x},x),$$ since $1>\lambda >0$. If $\hat{x}\neq x$, then $D(\hat{x},x)>0$ by the strict convexity of $f$ so that $D(x,y)>D(\hat{x},y)$, and this contradicts that $x\in \bproj{C}(y)$. Therefore, $\bproj{C}(z_{\lambda})=\{x\}$. When $C$ is convex, by (\ref{convexpart}), $x\in \bproj{C}(y)$ if and only if \begin{equation}\label{whythis} \langle \nabla f(y)-\nabla f(x), c-x\rangle \leq 0, \quad \forall c\in C. \end{equation} If $z_{\lambda}=\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x))$ with $\lambda\geq 0$, then \begin{align} &\langle \nabla f(z_{\lambda})-\nabla f(x), c-x\rangle=\langle \nabla f\circ\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x))-\nabla f(x),c-x\rangle\\ & =\lambda\langle \nabla f(y)-\nabla f(x),c-x\rangle\leq 0. \end{align} By (\ref{convexpart}), $x\in\bproj{C}(z_{\lambda})$. Applying (\ref{25:i}), we see that $x=\bproj{C}(z_{\lambda})$. Indeed, select $\lambda_{1}>\lambda$.
Observe that \begin{equation}\label{ch1} z_{\lambda}=\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x))\quad \Rightarrow \quad\nabla f(z_{\lambda})= \nabla f(x)+\lambda (\nabla f(y)-\nabla f(x)), \end{equation} and \begin{equation}\label{ch2} z_{\lambda_{1}}=\nabla f^*(\lambda_{1} \nabla f(y)+(1-\lambda_{1})\nabla f(x))\quad \Rightarrow \quad \nabla f(z_{\lambda_{1}})=\nabla f(x)+\lambda_{1} (\nabla f(y)-\nabla f(x)). \end{equation} Solve (\ref{ch2}) for $\nabla f(y)-\nabla f(x)$ and substitute into (\ref{ch1}) to get $$\nabla f(z_{\lambda}) =\left(1-\frac{\lambda}{\lambda_{1}}\right)\nabla f(x)+\frac{\lambda}{\lambda_{1}}\nabla f(z_{\lambda_{1}}).$$ This gives $$z_{\lambda}=\nabla f^*((1-\lambda/\lambda_{1})\nabla f(x)+\lambda/\lambda_{1}\nabla f(z_{\lambda_{1}})).$$ As $x\in \bproj{C}(z_{\lambda_{1}})$, (\ref{25:i}) applies. \end{proof} It is interesting to point out a connection to the \emph{proximal normal cone} $N_{C}^{P}(x)$ of $C$ at $x\in C$. Recall that $$N_{C}^{P}(x):=\{t(y-x):\; t\geq 0, x\in\proj{C}(y), y\in \ensuremath{\mathbb R}^J\},$$ in which $\proj{C}$ denotes the usual projection onto $C$ with respect to the Euclidean norm, and each vector $t(y-x)$ is called a proximal normal to $C$ at $x$; see, e.g., \cite[Section~1.1]{Yuri} for further information. \begin{proposition} Suppose that $f$ is twice continuously differentiable on $U$, let $y\in U$, and suppose that $x\in\bproj{C}(y)$. Then $\nabla f(y)-\nabla f(x)\in N_{C}^{P}(x).$ \end{proposition} \begin{proof} By Proposition~\ref{near}(i), \begin{equation}\label{proxnormal} D(c,x)\geq \langle \nabla f(y)-\nabla f(x), c-x\rangle \quad \forall c\in C. \end{equation} Since the Hessian of $f$ is continuous, using Taylor's formula, we obtain \begin{equation}\label{taylor} D(c,x)=f(c)-f(x)-\langle\nabla f(x),c-x\rangle=\frac{1}{2}\langle c-x, \ensuremath{\nabla^2\!} f(\xi)(c-x)\rangle \quad \mbox{ where $\xi\in [c,x]$}.
\end{equation} Fix $\delta>0$ such that ${B}_{\delta}(x)\subset U$. Since $\ensuremath{\nabla^2\!} f$ is continuous on the compact set ${B}_{\delta}(x)$, there exists $\sigma=\sigma(x,\delta) >0$ such that $\|\ensuremath{\nabla^2\!} f(\xi)\|\leq 2\sigma$ for every $\xi\in {B}_{\delta}(x)$. Since $\xi\in[c,x]\subset B_\delta(x)$ whenever $c\in C\cap B_\delta(x)$, (\ref{taylor}) gives $D(c,x)\leq\sigma \|c-x\|^2$ for every such $c$. By (\ref{proxnormal}), $$\sigma\|c-x\|^2\geq \langle \nabla f(y)-\nabla f(x), c-x\rangle \quad \forall c\in C\cap {B}_{\delta}(x).$$ By \cite[Proposition~1.1.5.(b) on page~25]{Yuri}, $\nabla f(y)-\nabla f(x)\in N_{C}^{P}(x).$ \end{proof} The following example illustrates the geodesics $\{z_{\lambda}:\; 0\leq\lambda \leq 1\}$ given by (\ref{geodesics}). \begin{example} Let $x=(x_j)_{1\leq j\leq J}$ and $y=(y_j)_{1\leq j\leq J}$ be two points in $\ensuremath{\mathbb R}^J$. \begin{enumerate} \item If $f\colon x \mapsto \frac{1}{2}\|x\|^2$, then $\nabla f=\nabla f^*=\ensuremath{\operatorname{Id}}.$ We have $$z_{\lambda}=\lambda y+(1-\lambda)x,$$ for $\lambda\in [0,1]$. Hence $z_{\lambda}$ is a component-wise \emph{arithmetic mean} of $x$ and $y$. \item If $f\colon x\mapsto\sum_{j=1}^{J}x_j\ln(x_j)-x_j$, then $$\nabla f(x)=(\ln x_{1},\ldots, \ln x_{J}),$$ $$f^*\colon x^*\mapsto \sum_{j=1}^{J}\exp(x_{j}^*),$$ so that $$\nabla f^*(x^*)=(\exp x_{1}^*,\ldots, \exp x_{J}^*).$$ We have \begin{align} z_{\lambda} &=\nabla f^*(\lambda \nabla f(y)+(1-\lambda)\nabla f(x)) \\ &=\nabla f^*(\lambda\ln y_{1}+(1-\lambda)\ln x_{1},\ldots, \lambda \ln y_{J}+(1-\lambda)\ln x_{J})\\ &=(\exp(\lambda\ln y_{1}+(1-\lambda)\ln x_{1}),\ldots, \exp(\lambda \ln y_{J}+(1-\lambda)\ln x_{J}))\\ &=(y_{1}^{\lambda}x_{1}^{1-\lambda},\ldots, y_{J}^{\lambda}x_{J}^{1-\lambda}). \end{align} Hence $z_{\lambda}$ is a component-wise \emph{geometric mean} of $x$ and $y$.
\item If $f\colon x\mapsto \sum_{j=1}^{J}\exp(x_{j})$, then $f^*\colon x^*\mapsto\sum_{j=1}^{J}x_j^*\ln(x_j^*)-x_j^*$ so that $$\nabla f(x)=(\exp(x_{1}),\ldots, \exp(x_{J})),\quad \nabla f^*(x^*)=(\ln x_{1}^*,\ldots, \ln x_{J}^*).$$ Hence $$z_{\lambda}=(\ln (\lambda \exp(y_{1})+(1-\lambda) \exp (x_{1})),\ldots,\ln (\lambda \exp(y_{J})+(1-\lambda) \exp (x_{J}))).$$ \end{enumerate} \end{example} Define the \emph{symmetrization} of $D$ for $x,y\in U$ by $$S(x,y):=D(x,y)+D(y,x)=\langle \nabla f(x)-\nabla f(y),x-y\rangle.$$ \begin{proposition} Given $x,y\in U$ and $0<\lambda < 1$, set $$z_{\lambda}:=\nabla f^*(\lambda\nabla f(y)+(1-\lambda) \nabla f(x)).$$ Then we have \begin{enumerate} \item[{\rm (i)}] $D(x,y)=D(x,z_{\lambda})+D(z_{\lambda},y)+\frac{1-\lambda}{\lambda}S(x,z_{\lambda}).$ \item[{\rm (ii)}] $S(x,y)=\frac{1}{1-\lambda}S(y,z_{\lambda})+\frac{1}{\lambda}S(z_{\lambda},x).$ \end{enumerate} \end{proposition} \begin{proof} Since $z_{\lambda}=\nabla f^*(\lambda\nabla f(y)+(1-\lambda) \nabla f(x))$, and $$D(x,z_{\lambda})=f(x)-f(z_{\lambda})-\langle\nabla f(z_{\lambda}),x-z_{\lambda}\rangle,$$ we have \begin{align} D(x,z_{\lambda}) &=f(x)-f(z_{\lambda})-\langle \lambda \nabla f(y)+(1-\lambda)\nabla f(x), x-z_{\lambda} \rangle\\ & =\lambda [f(x)-f(z_{\lambda})-\langle\nabla f(y),x-z_{\lambda}\rangle]+ (1-\lambda)[f(x)-f(z_{\lambda})-\langle\nabla f(x),x-z_{\lambda}\rangle]\\ &= \lambda [D(x,y)-D(z_{\lambda},y)]-(1-\lambda)[f(z_{\lambda})-f(x)-\langle\nabla f(x),z_{\lambda}-x\rangle] \\ & =\lambda [D(x,y)-D(z_{\lambda},y)]-(1-\lambda) D(z_{\lambda},x).
\end{align} Hence $(1-\lambda)[D(x,z_{\lambda})+D(z_{\lambda},x)]+\lambda D(x,z_{\lambda})+\lambda D(z_{\lambda},y)=\lambda D(x,y).$ Dividing both sides by $\lambda$ yields \begin{equation}\label{oneeq} D(x,y)=D(x,z_{\lambda})+D(z_{\lambda},y)+\frac{1-\lambda}{\lambda}S(x,z_{\lambda}), \end{equation} which is (i). To see (ii), we rewrite $$z_{\lambda}=\nabla f^*((1-\lambda) \nabla f(x)+\lambda\nabla f(y)).$$ Applying (i), we get \begin{equation}\label{twoeq} D(y,x)=D(y,z_{\lambda})+D(z_{\lambda},x)+\frac{\lambda}{1-\lambda}S(z_{\lambda},y). \end{equation} Adding (\ref{oneeq}) and (\ref{twoeq}), we obtain \begin{align} S(x,y)& =D(x,y)+D(y,x)\\ &= [D(z_{\lambda},y)+D(y,z_{\lambda})]+[D(z_{\lambda},x)+D(x,z_{\lambda})]+\frac{1-\lambda}{\lambda}S(x,z_{\lambda})\\ & \qquad +\frac{\lambda}{1-\lambda} S(z_{\lambda},y)\nonumber\\ &= \bigg(1+\frac{\lambda}{1-\lambda}\bigg)S(z_{\lambda}, y)+\bigg(1+\frac{1-\lambda}{\lambda}\bigg) S(x,z_{\lambda})\\ &=\frac{1}{1-\lambda}S(y,z_{\lambda})+\frac{1}{\lambda}S(z_{\lambda},x), \end{align} which is (ii). \end{proof} \section{Bregman Nearest Points and Maximal Monotone Operators} \label{monotone} We shall need the following pointwise version of a concept due to Rockafellar and Wets \cite[Definition~1.16]{Rock98}. \begin{definition} Let $g:\ensuremath{\mathbb R}^J\times \ensuremath{\mathbb R}^J\rightarrow \ensuremath{\,\left]-\infty,+\infty\right]}$ and let $\bar{y}\in\ensuremath{\mathbb R}^J$. We say that $g$ is level bounded in the first variable locally uniformly at $\bar{y}$, if for every $\alpha\in\ensuremath{\mathbb R}$, there exists $\delta>0$ such that $$ \bigcup_{y\in B_\delta(\bar{y})} \{x\in \ensuremath{\mathbb R}^J\colon g(x,y)\leq \alpha\}\;\;\text{is bounded.}$$ \end{definition} \begin{proposition}\label{generalcase} The Bregman distance $D$ is level bounded in the first variable locally uniformly at every point in $U$. \end{proposition} \begin{proof} Suppose the opposite. 
Then there exist $\bar{y}\in U$ and $\bar{\alpha}\in\ensuremath{\mathbb R}$ such that, for every $n\in \{1,2,\ldots\}$, there exist $y_{n}\in U$ and $x_{n}\in\ensuremath{\operatorname{dom}} f$ with $$\|y_{n}-\bar{y}\|<\frac{1}{n},\quad D(x_{n},y_{n})\leq \bar{\alpha},\quad \|x_{n}\|>n.$$ We then have $y_{n}\rightarrow\bar{y}$, $\|x_{n}\|\rightarrow\infty$ and \begin{equation}\label{finitevalue} D(x_{n},y_{n})\leq \bar{\alpha}. \end{equation} Now \begin{align} D(x_{n},y_{n})& =f(x_{n})-f(y_{n})-\scal{\nabla f(y_{n})}{x_{n}-y_{n}}\\ &= f(x_{n})-\scal{\nabla f(y_{n})}{x_{n}}+[-f(y_{n})+\scal{\nabla f(y_{n})}{y_{n}}].\label{timhorton} \end{align} Since $f$ is Legendre, $\nabla f$ is continuous on $U$. When $n\rightarrow\infty$, we have \begin{equation}\label{finitevalue1} -f(y_{n})+\scal{\nabla f(y_{n})}{y_{n}}\rightarrow -f(\bar{y}) +\scal{\nabla f(\bar{y})}{\bar{y}}, \end{equation} and \begin{align} f(x_{n})-\scal{\nabla f(y_{n})}{x_{n}} &=\|x_{n}\|\left(\frac{f(x_{n})}{\|x_{n}\|}-\langle\nabla f(y_{n}), \frac{x_{n}}{\|x_{n}\|}\rangle\right)\label{finitevalue2}\\ &\geq \|x_{n}\|\left(\frac{f(x_{n})}{\|x_{n}\|}-\|\nabla f(y_{n})\|\right)\rightarrow\infty,\label{finitevalue3} \end{align} since $\|\nabla f(y_{n})\|\rightarrow \|\nabla f(\bar{y})\|$ and $\lim f(x_{n})/\|x_{n}\|=+\infty.$ Together, (\ref{finitevalue1})--(\ref{finitevalue3}) and (\ref{timhorton}) show that $D(x_{n},y_{n})\rightarrow\infty$, which contradicts (\ref{finitevalue}). \end{proof} The following result will be very useful later. \begin{theorem}\label{banffday} The following hold. \begin{enumerate} \item[{\rm (i)}] For each $y\in U$, the set $\bproj{C}(y)$ is nonempty and compact. Moreover, $\bD{C}$ is continuous on $U$. \item[{\rm (ii)}] If $x_{n}\in \bproj{C}(y_{n})$ and $y_{n}\rightarrow y\in U$, then the sequence $(x_{n})_{n=1}^{\infty}$ is bounded, and all its cluster points lie in $\bproj{C}(y)$. \item[{\rm (iii)}] Let $y\in U$ and $\bproj{C}(y)=\{x\}$.
If $x_{n}\in \bproj{C}(y_{n})$ and $y_{n}\rightarrow y$, then $x_{n}\rightarrow x$; consequently, $\bproj{C}$ is continuous at $y$. \end{enumerate} \end{theorem} \begin{proof} Fix $\bar{y}\in U$ and $\delta>0$ such that $B_\delta(\bar{y})\subset U$. Consider the proper lower semicontinuous function $g:\ensuremath{\mathbb R}^J\times \ensuremath{\mathbb R}^J\rightarrow \ensuremath{\,\left]-\infty,+\infty\right]}$ defined by $$(x,y)\mapsto D(x,y)+\iota_{C}(x)+\iota_{B_\delta(\bar{y})}(y).$$ Observe that $\ensuremath{\operatorname{dom}} g=C\times B_\delta(\bar{y})$. For every $y \in \ensuremath{\mathbb R}^J$ and $\alpha\in\ensuremath{\mathbb R}$, we have \begin{equation} \label{e:071209:a} \{x\in\ensuremath{\mathbb R}^J \colon g(x,y)\leq\alpha\} = \begin{cases} C\cap \{x\in\ensuremath{\mathbb R}^J\colon D(x,y)\leq\alpha\}, &\text{if $y\in B_\delta(\bar{y})$;}\\ \varnothing, &\text{otherwise.} \end{cases} \end{equation} We now show that \begin{equation} \label{e:071209:b} \text{$g$ is level bounded in the first variable locally uniformly at every point in $\ensuremath{\mathbb R}^J$. } \end{equation} To this end, fix $\bar{z}\in\ensuremath{\mathbb R}^J$ and $\alpha\in\ensuremath{\mathbb R}$.\\ \emph{~~Case 1:} $\bar{z}\notin B_\delta(\bar{y})$.\\ Let $\epsilon>0$ be so small that $B_\delta(\bar{y})\cap B_\epsilon(\bar{z})=\varnothing$. Then \eqref{e:071209:a} yields $$ \bigcup_{z\in B_\epsilon(\bar{z})} \{x\in\ensuremath{\mathbb R}^J \colon g(x,z)\leq\alpha\} = \varnothing, $$ which is certainly bounded.\\ \emph{~~Case 2:} $\bar{z}\in B_\delta(\bar{y})$.\\ Since $B_\delta(\bar{y})\subset U$, we have $\bar{z}\in U$.
Proposition~\ref{generalcase} guarantees the existence of $\epsilon>0$ such that $$\bigcup_{z\in B_\epsilon(\bar{z})} \{x\in\ensuremath{\mathbb R}^J \colon D(x,z)\leq \alpha\}\;\;\text{is bounded.}$$ In view of \eqref{e:071209:a}, the set $$\bigcup_{z\in B_\epsilon(\bar{z})\cap B_\delta(\bar{y})} C \cap \{x\in\ensuremath{\mathbb R}^J\colon D(x,z)\leq \alpha\} = \bigcup_{z\in B_\epsilon(\bar{z})}\{x\in\ensuremath{\mathbb R}^J \colon g(x,z)\leq \alpha\} $$ is bounded as well. Altogether, we have verified \eqref{e:071209:b}. Define a function $m$ at $y\in\ensuremath{\mathbb R}^J$ by $$m(y):=\inf_{x\in\ensuremath{\mathbb R}^J} g(x,y)=\begin{cases} \inf_{x\in C}D(x,y)=\bD{C}(y), & \text{if $y\in B_\delta(\bar{y})$;}\\ {+\infty}, &\text{otherwise.} \end{cases}$$ Then $m = \bD{C} + \iota_{B_\delta(\bar{y})}$ and $$\ensuremath{\operatorname*{argmin}}_{x\in\ensuremath{\mathbb R}^J}g(x,y)=\begin{cases} \ensuremath{\operatorname*{argmin}}_{x\in C}D(x,y)=\bproj{C}(y), & \text{if $y\in B_\delta(\bar{y})$;}\\ \varnothing, &\text{otherwise.} \end{cases} $$ Now \eqref{e:071209:b} and \cite[Theorem~1.17(a)]{Rock98} imply that if $y\in B_\delta(\bar{y})$, then $\bproj{C}(y)$ is nonempty and compact. In particular, $\bproj{C}(\bar{y})$ is nonempty and compact. Take $\bar{x}\in\bproj{C}(\bar{y})$. As $$g(\bar{x},\cdot)=D(\bar{x},\cdot)+\iota_{B_\delta(\bar{y})}$$ is continuous at $\bar{y}$, by \cite[Theorem~1.17(c)]{Rock98}, the function $m$ is continuous at $\bar{y}$. Hence $\bD{C}$ is continuous at $\bar{y}$. Since $\bar{y}\in U$ is arbitrary, this proves (i). Next, \cite[Theorem~1.17(b)]{Rock98} gives (ii) since $\bD{C}$ is continuous on $U$. Finally, (iii) is an immediate consequence of (ii). \end{proof} Our next result states that $\bproj{C}\circ\nabla f^*$ is a monotone operator. This is also related to \cite[Proposition~3.32.(ii)(c)]{Sico03}, which establishes a stronger property when $C$ is convex.
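Theorem~\ref{banffday}(i) and the characterization \eqref{same} can be probed on a concrete nonconvex set: a finite $C$ is closed, the infimum defining $\bproj{C}$ is attained by direct enumeration, and the resulting nearest point must satisfy \eqref{same}. A minimal sketch (our own illustration, using the Boltzmann-Shannon entropy; variable names are not from the text):

```python
from math import log

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
sub = lambda u, v: [a - b for a, b in zip(u, v)]

f = lambda x: sum(t * log(t) - t for t in x)     # Boltzmann-Shannon entropy
grad_f = lambda x: [log(t) for t in x]
D = lambda x, y: f(x) - f(y) - dot(grad_f(y), sub(x, y))

# A finite (hence closed, nonconvex) C inside U = positive orthant.
C = [[0.5, 2.0], [2.0, 0.5], [3.0, 3.0]]
y = [1.0, 1.2]

# Left Bregman projection by enumeration: argmin_{c in C} D(c, y).
x_star = min(C, key=lambda c: D(c, y))

# Characterization (same): D(c, x*) >= <grad f(y) - grad f(x*), c - x*>.
g = sub(grad_f(y), grad_f(x_star))
ok = all(D(c, x_star) >= dot(g, sub(c, x_star)) - 1e-12 for c in C)
```

Replacing $y$ by $(1,1)$ makes the first two points of $C$ Bregman-equidistant from $y$, so $\bproj{C}(y)$ fails to be a singleton; this is exactly the multivaluedness that the Chebyshev property of the next section rules out.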
\begin{proposition}\label{monotoneyes} For every $x,y$ in $U$, \begin{equation}\label{seehow} \langle \bproj{C}(y)-\bproj{C}(x),\nabla f(y)-\nabla f(x)\rangle\geq 0; \end{equation} consequently, $\bproj{C}\circ \nabla f^*$ is monotone. \end{proposition} \begin{proof} Since $$D(\bproj{C}(x),y)\geq D(\bproj{C}(y),y),\quad D(\bproj{C}(y),x)\geq D(\bproj{C}(x),x),$$ we use (\ref{3point}) to get $$f(\bproj{C}(x))-f(\bproj{C}(y))-\langle\nabla f(y), \bproj{C}(x)-\bproj{C}(y)\rangle\geq 0,$$ $$f(\bproj{C}(y))-f(\bproj{C}(x))-\langle\nabla f(x), \bproj{C}(y)-\bproj{C}(x)\rangle\geq 0.$$ Adding these inequalities yields $$\langle\nabla f(y), \bproj{C}(y)-\bproj{C}(x)\rangle -\langle\nabla f(x), \bproj{C}(y)-\bproj{C}(x)\rangle\geq 0, $$ i.e., \eqref{seehow}. The monotonicity now follows from Fact~\ref{isom} and our assumption that $\ensuremath{\operatorname{dom}} f^*=\ensuremath{\mathbb R}^J$. \end{proof} \begin{definition} The set $C$ is \emph{Chebyshev with respect to the left Bregman distance}, or simply $\bD{}\,$-Chebyshev, if for every $x\in U$, $\bproj{C}(x)$ is a singleton. \end{definition} For some instances of $f$, it is known that if $C$ is convex, then it is $\bD{}\,$-Chebyshev\ (see, e.g., \cite[Theorem~3.14]{Baus97}) and $\bproj{C}$ is continuous (see, e.g., \cite[Proposition~3.10(i)]{noll}). The next result is a refinement. \begin{proposition}\label{maximalcase} Suppose that $C$ is $\bD{}\,$-Chebyshev. Then $\bproj{C}: U\rightarrow C$ is continuous. Hence $\bproj{C}\circ \nabla f^*$ is continuous and maximal monotone. \end{proposition} \begin{proof} While the continuity of $\bproj{C}$ follows from Theorem~\ref{banffday}(iii), Proposition~\ref{monotoneyes} shows that $\bproj{C}\circ \nabla f^*$ is monotone.
Since $\bproj{C}$ is continuous on $U$ and $\nabla f^*:\ensuremath{\mathbb R}^J\rightarrow U $ is continuous, we conclude that $\bproj{C}\circ\nabla f^*$ is continuous on $\ensuremath{\mathbb R}^J$. Altogether, since $\bproj{C}\circ \nabla f^*$ is single-valued, it is maximal monotone on $\ensuremath{\mathbb R}^J$ by \cite[Example~12.7]{Rock98}. \end{proof} Rockafellar's well-known result on the virtual convexity of the range of a maximal monotone operator allows us to show that $\bD{}\,$-Chebyshev\ sets are convex. Our proof extends a Hilbert space technique due to Berens and Westphal \cite{BW}. \begin{theorem}[$\bD{}\,$-Chebyshev\ sets are convex] Suppose that $C$ is $\bD{}\,$-Chebyshev. Then $C$ is convex. \end{theorem} \begin{proof} By Proposition~\ref{maximalcase}, $\bproj{C}\circ \nabla f^*$ is a maximal monotone operator on $\ensuremath{\mathbb R}^J$. Using \cite[Theorem 12.41]{Rock98} (or \cite[Theorem~19.2]{Simons}), $\ensuremath{\operatorname{cl}} [\ensuremath{\operatorname{ran}} \bproj{C}\circ \nabla f^*]$ is convex. Since $ \ensuremath{\operatorname{ran}} \nabla f^*=U$ and $C\subset U $, it follows that $$C\supset\ensuremath{\operatorname{ran}}\big(\bproj{C}\circ \nabla f^*\big)=\bproj{C}(\nabla f^*(\ensuremath{\mathbb R}^J))= \bproj{C}(U)\supset \bproj{C}(C)=C,$$ from which $\ensuremath{\operatorname{cl}} [\ensuremath{\operatorname{ran}} \bproj{C}\circ \nabla f^*]=\ensuremath{\operatorname{cl}} C=C.$ Hence $C$ is convex. \end{proof} \begin{corollary} The set $C$ is $\bD{}\,$-Chebyshev\ if and only if it is convex. \end{corollary} \section{Subdifferentiabilities of Bregman Distances}\label{clarkemor} Let us show that $\bD{C}$ is locally Lipschitz on $U$. \begin{proposition}\label{distance} Suppose $f$ is twice continuously differentiable on $U$. Then the left Bregman distance function satisfies \begin{equation} \bD{C} =f^*\circ\nabla f-(f+\iota_{C})^*\circ \nabla f =[f^*-(f+\iota_{C})^*]\circ \nabla f, \end{equation} and it is locally Lipschitz on $U$. 
\end{proposition} \begin{proof} The Mean Value Theorem and the continuity of $\ensuremath{\nabla^2\!} f$ on $U$ imply that $\nabla f$ is locally Lipschitz on $U$. For $y\in U$, \begin{align} \bD{C}(y) &=\inf_{c\in C}[f(c)-f(y)-\langle\nabla f(y),c-y\rangle]\\ &=\inf_{c}[(f+\iota_{C})(c)-\langle\nabla f(y),c\rangle +f^*(\nabla f(y))]\\ &=f^*(\nabla f(y))-\sup_{c}[\langle\nabla f(y),c\rangle-(f+\iota_{C})(c)]\\ & =f^*(\nabla f (y))-(f+\iota_{C})^*(\nabla f(y)). \end{align} Note that $f+\iota_{C}\geq f$, hence $(f+\iota_{C})^*\leq f^*$ and $\ensuremath{\operatorname{dom}} f^*\subset \ensuremath{\operatorname{dom}} (f+\iota_{C})^*$. Being convex functions, both $(f+\iota_{C})^*$ and $f^*$ are locally Lipschitz on the interior of their respective domains, in particular on $\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*=\ensuremath{\mathbb R}^J$. Since $\nabla f: U \rightarrow\ensuremath{\mathbb R}^J$ is locally Lipschitz, we conclude that $\bD{C}$ is locally Lipschitz on $U$.
\end{proof} For a function $g$ that is finite and locally Lipschitz at a point $y$, we define the \emph{Dini subderivative} and \emph{Clarke subderivative} of $g$ at $y$ in the direction $w$, denoted respectively by $\ensuremath{\operatorname{\; d}} g(y)(w)$ and $\ensuremath{\operatorname{\; \hat{d}}} g (y)(w)$, via $$\ensuremath{\operatorname{\; d}} g(y)(w):=\liminf_{t\downarrow 0}\frac{g(y+tw)-g(y)}{t},$$ $$\ensuremath{\operatorname{\; \hat{d}}} g(y)(w):=\limsup_{\stackrel{x\rightarrow y}{t\downarrow 0}}\frac{g(x+tw)-g(x)}{t},$$ and the corresponding \emph{Dini subdifferential} and \emph{Clarke subdifferential} via $$\hat{\partial} g(y): =\{y^*\in\ensuremath{\mathbb R}^J: \langle y^*,w\rangle\leq \ensuremath{\operatorname{\; d}} g(y)(w),\; \forall w\in\ensuremath{\mathbb R}^J\},$$ $$\overline{\partial} g(y): =\{y^*\in\ensuremath{\mathbb R}^J: \langle y^*,w\rangle\leq \ensuremath{\operatorname{\; \hat{d}}} g(y)(w),\; \forall w\in\ensuremath{\mathbb R}^J\}.$$ Furthermore, the \emph{limiting subdifferential} is defined by $$\partial_L g(y):=\limsup_{x\rightarrow y}\hat{\partial} g(x),$$ see \cite[Definition~8.3]{Rock98}. We say that $g$ is \emph{Clarke regular} at $y$ if $\ensuremath{\operatorname{\; d}} g(y)(w)=\ensuremath{\operatorname{\; \hat{d}}} g(y)(w)$ for every $w\in \ensuremath{\mathbb R}^J$, or equivalently $\hat{\partial} g(y)=\overline{\partial} g(y)$. For further properties of these subdifferentials and subderivatives, see \cite{Frank,mordukhovich,Rock98}. We now study the subdifferentiability of $\bD{C}$ in terms of $\bproj{C}$. \begin{proposition}\label{subdiff1} Suppose $f$ is twice continuously differentiable on $U$.
Then the function $-\bD{C}$ is Dini subdifferentiable on $U$; more precisely, if $y\in U$, then $$\ensuremath{\nabla^2\!} f(y)[\bproj{C}(y)-y]\subset \hat{\partial} (-\bD{C})(y),$$ and thus \begin{equation}\label{tom1} \ensuremath{\nabla^2\!} f(y)[\ensuremath{\operatorname{conv}}\bproj{C}(y)-y]\subset \hat{\partial} (-\bD{C})(y). \end{equation} \end{proposition} \begin{proof} Fix $y\in U$. By Theorem~\ref{banffday}(i), $\bproj{C}(y)\neq \varnothing$. Let $x\in \bproj{C}(y)$. As $\hat{\partial}$ is convex-valued, it suffices to show that \begin{equation}\label{subnegative} \ensuremath{\nabla^2\!} f(y)(x-y)\in \hat{\partial} (-\bD{C})(y). \end{equation} To this end, let $t>0$ and $w\in \ensuremath{\mathbb R}^J$. Since $y+tw\in U$ for all sufficiently small $t$, \begin{align}-\bD{C}(y+tw) &=\sup_{c\in C}\big(-f(c)+f(y+tw)+\langle\nabla f(y+tw),c-(y+tw)\rangle\big)\\ &\geq -f(x)+f(y+tw)+\langle\nabla f(y+tw),x-(y+tw)\rangle \end{align} and \begin{equation} \bD{C}(y)=f(x)-f(y)-\langle\nabla f(y),x-y\rangle, \end{equation} we have $$ -\bD{C}(y+tw)+\bD{C}(y)\geq f(y+tw)-f(y)+\langle \nabla f(y+tw)-\nabla f(y),x-y\rangle +\langle \nabla f(y+tw),-tw\rangle. $$ Dividing both sides by $t$ and taking the limit inferior as $t\downarrow 0$, we obtain \begin{align} \ensuremath{\operatorname{\; d}} (-\bD{C})(y)(w) &\geq \langle\nabla f(y),w\rangle +\langle \ensuremath{\nabla^2\!} f(y) w, x-y\rangle -\langle \nabla f(y),w\rangle\\ &=\langle \ensuremath{\nabla^2\!} f(y)(x-y), w\rangle, \end{align} which gives (\ref{subnegative}). \end{proof} \begin{lemma}\label{subdiff2} Suppose that $f$ is twice continuously differentiable on $U$, let $y\in U$, and suppose that $\bproj{C}(y)$ is a singleton. 
Then $\bD{C}$ is Dini subdifferentiable at $y$ and \begin{equation}\label{amy} \ensuremath{\nabla^2\!} f(y)(y-\bproj{C}(y))\in \hat{\partial} \bD{C}(y). \end{equation} \end{lemma} \begin{proof} Suppose that $\bproj{C}(y)=\{x\}$, and fix $w\in \ensuremath{\mathbb R}^{J}$. Let $(t_{n})$ be a positive sequence such that $(y+t_nw)$ lies in $U$, $t_{n}\downarrow 0$, and $$\ensuremath{\operatorname{\; d}} \bD{C}(y)(w)=\lim_{n\rightarrow \infty}\frac{\bD{C}(y+t_{n}w)-\bD{C}(y)}{t_{n}}.$$ Select $x_{n}\in \bproj{C}(y+t_{n}w)$, which is possible by Theorem~\ref{banffday}(i). We have \begin{multline} \bD{C}(y+t_{n}w)-\bD{C}(y)\nonumber\\ \begin{aligned} =&\; D(x_{n},y+t_{n}w)-D(x_{n},y)+D(x_{n},y)-D(x,y)\\ \geq &\; D(x_{n},y+t_{n}w)-D(x_{n},y)\\ = &\; f(x_{n})-f(y+t_{n}w)-\langle\nabla f(y+t_{n}w),x_{n}-(y+t_{n}w)\rangle -[f(x_{n})-f(y)-\langle \nabla f(y),x_{n}-y\rangle] \\ =&\; -(f(y+t_{n}w)-f(y))-\langle \nabla f(y+t_{n}w)-\nabla f(y),x_{n}-y\rangle +t_{n} \langle\nabla f(y+t_{n}w),w\rangle .\\ \end{aligned} \end{multline} Dividing by $t_{n}$, we get \begin{multline}\label{weneed} \frac{\bD{C}(y+t_{n}w)-\bD{C}(y)}{t_{n}}\geq \\ -\frac{f(y+t_{n}w)-f(y)}{t_{n}} -\frac{\langle \nabla f(y+t_{n}w)-\nabla f(y),x_{n}-y\rangle}{t_{n}}+\langle\nabla f(y+t_{n}w),w\rangle. \end{multline} By Theorem~\ref{banffday}(iii), $x_{n}\rightarrow x$. Taking limits in (\ref{weneed}) yields $$\ensuremath{\operatorname{\; d}} \bD{C}(y)(w)\geq -\langle \ensuremath{\nabla^2\!} f(y)w, x-y\rangle=\langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle.$$ Since this holds for every $w\in \ensuremath{\mathbb R}^J$, we conclude that $\ensuremath{\nabla^2\!} f(y)(y-x)\in \hat{\partial}\bD{C}(y)$. 
\end{proof} Lemma~\ref{subdiff2} allows us to generalize \cite[Example~8.53]{Rock98} from the Euclidean distance to the left Bregman distance. It delineates the differences between the Dini subdifferential, the limiting subdifferential, and the Clarke subdifferential. \begin{theorem}\label{complete} Suppose that $f$ is twice continuously differentiable on $U$ and that for every $u\in U$, $\ensuremath{\nabla^2\!} f(u)$ is positive definite. Set $g=\bD{C}$, and let $y\in U$ and $w\in\ensuremath{\mathbb R}^J$. Then the following hold. \begin{enumerate} \item The Dini subderivative is \begin{equation}\label{rightone} \ensuremath{\operatorname{\; d}} g(y)(w)=\min_{x\in\bproj{C}(y)}\langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle, \end{equation} so that the Dini subdifferential of $g$ is \begin{equation}\label{theset} \hat{\partial} g(y)=\begin{cases} \{\ensuremath{\nabla^2\!} f(y)[y-\bproj{C}(y)]\} & \mbox{ if $\bproj{C}(y)$ is a singleton;}\\ \varnothing, & \mbox{ otherwise}. \end{cases} \end{equation} The limiting subdifferential is \begin{equation}\label{mordusubdiff}\partial_L g(y)=\ensuremath{\nabla^2\!} f(y)[y-\bproj{C}(y)]. \end{equation} The Clarke subderivative is \begin{equation}\label{clarkederiv} \ensuremath{\operatorname{\; \hat{d}}} g(y) (w)=\max_{x\in\bproj{C}(y)}\langle \ensuremath{\nabla^2\!} f(y)(y-x), w\rangle, \end{equation} from which we get the Clarke subdifferential \begin{equation}\label{tom2}\overline{\partial}g(y)=\ensuremath{\nabla^2\!} f(y)[y-\ensuremath{\operatorname{conv}}\bproj{C}(y)]. \end{equation} Hence $-\bD{C}$ is Clarke regular on $U$. \item If $y\in C$, then $g$ is strictly differentiable with derivative $0$. \end{enumerate} \end{theorem} \begin{proof} (i): By Theorem~\ref{banffday}(i), $\bproj{C}(y)\neq\varnothing$. Fix $x\in \bproj{C}(y)$ and $t>0$ sufficiently small so that $y+tw\in U$. 
In view of $\bD{C}(y+tw)\leq D(x,y+tw)$ and $\bD{C}(y)=D(x,y)$, we have \begin{multline} \ensuremath{\operatorname{\; d}} g(y)(w)= \liminf_{t\downarrow 0}\frac{\bD{C}(y+tw)-\bD{C}(y)}{t}\leq \liminf_{t\downarrow 0}\frac{D(x,y+tw)-D(x,y)}{t}\nonumber\\ \begin{aligned} = &\;\liminf_{t\downarrow 0}\frac{ f(x)-f(y+tw)-\langle \nabla f(y+tw),x-(y+tw)\rangle-[f(x)-f(y)-\langle\nabla f(y), x-y\rangle]}{t} \\ = &\; \liminf_{t\downarrow 0}\frac{-[f(y+tw)-f(y)]-\langle \nabla f(y+tw)-\nabla f(y), x-y\rangle+t\langle\nabla f(y+tw), w\rangle}{t}\\ = &\; \liminf_{t\downarrow 0}-\frac{f(y+tw)-f(y)}{t}-\frac{\langle \nabla f(y+tw)-\nabla f(y), x-y\rangle}{t}+\langle\nabla f(y+tw), w\rangle\\ =&\; -\langle\nabla f(y), w\rangle-\langle \ensuremath{\nabla^2\!} f(y)w,x-y\rangle+\langle\nabla f(y),w\rangle\\ = &\; \langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle. \end{aligned} \end{multline} Since this holds for every $x\in\bproj{C}(y)$, it follows from Theorem~\ref{banffday}(i) that $$\ensuremath{\operatorname{\; d}} g (y)(w)\leq \min_{x\in\bproj{C}(y)}\langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle.$$ To get the opposite inequality, we consider a positive sequence $(t_n)$ such that $t_{n}\downarrow 0$, $(y+t_nw)$ lies in $U$, and $$\ensuremath{\operatorname{\; d}} g(y)(w)=\lim_{n\rightarrow\infty} \frac{\bD{C}(y+t_{n}w)-\bD{C}(y)}{t_{n}}.$$ Select $x_{n}\in\bproj{C}(y+t_{n}w)$, which is possible by Theorem~\ref{banffday}(i). 
Then \begin{align} \bD{C}(y+t_{n}w) &=D(x_{n},y+t_{n}w)\nonumber\\ & =f(x_{n})-f(y+t_{n}w)-\langle\nabla f(y+t_{n}w),x_{n}-(y+t_{n}w)\rangle \end{align} and \begin{equation} \bD{C}(y)\leq D(x_{n},y) = f(x_{n})-f(y)-\langle \nabla f(y),x_{n}-y\rangle. \end{equation} By Theorem~\ref{banffday}(ii), and after taking a subsequence if necessary, we may assume that $x_{n}\rightarrow x\in \bproj{C}(y)$. We estimate \begin{multline} \frac{\bD{C}(y+t_{n}w)-\bD{C}(y)}{t_{n}}\\ \begin{aligned} \geq &\; \frac{-[f(y+t_{n}w)-f(y)]-\langle \nabla f(y+t_{n}w)-\nabla f(y),x_{n}-y\rangle+\langle\nabla f(y+t_{n}w),t_{n}w\rangle}{t_{n}}\\ =& \frac{-[f(y+t_{n}w)-f(y)]}{t_{n}}-\frac{\langle \nabla f(y+t_{n}w)-\nabla f(y),x_{n}-y\rangle}{t_{n}}+\langle\nabla f(y+t_{n}w),w\rangle. \end{aligned} \end{multline} Taking limits, we obtain $$\ensuremath{\operatorname{\; d}} g(y)(w)\geq -\langle \ensuremath{\nabla^2\!} f(y)w,x-y\rangle =\langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle\geq \min_{x\in\bproj{C}(y)}\langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle.$$ Therefore, (\ref{rightone}) holds. For $y^*\in\ensuremath{\mathbb R}^J$, we have $y^*\in \hat{\partial}g(y)$ if and only if $$\langle y^*,w\rangle\leq \langle \ensuremath{\nabla^2\!} f(y)(y-x),w\rangle \quad \forall x\in\bproj{C}(y),\ \forall w\in \ensuremath{\mathbb R}^J.$$ This holds if and only if $y^*=\ensuremath{\nabla^2\!} f(y)(y-x)$ for every $x\in\bproj{C}(y)$; since $\ensuremath{\nabla^2\!} f(y)$ is invertible, we deduce that $x=y-(\ensuremath{\nabla^2\!} f(y))^{-1}y^*$, so that $\bproj{C}(y)$ is a singleton. Therefore, if $\bproj{C}(y)$ is not a singleton, then $\hat{\partial} g(y)$ must be empty. Hence (\ref{theset}) holds. 
For every $z\in \ensuremath{\mathbb R}^J$, we have $$\hat{\partial} g(z)\subset \ensuremath{\nabla^2\!} f(z)(z-\bproj{C}(z)).$$ The upper semicontinuity of $\bproj{C}$ (see Theorem~\ref{banffday}(ii)) implies through $\partial_L g(y)=\limsup_{z\rightarrow y}\hat{\partial} g(z)$ that \begin{equation}\label{hockey1}\partial_L g(y)\subset \ensuremath{\nabla^2\!} f(y)(y-\bproj{C}(y)). \end{equation} In fact, equality holds. Indeed, for $x\in\bproj{C}(y)$ and $0\leq \lambda< 1$, the point $$z_{\lambda}:=\nabla f^*(\lambda\nabla f(y)+(1-\lambda)\nabla f(x))$$ satisfies $\bproj{C}(z_{\lambda})=\{x\}$ by Proposition~\ref{near}(ii). Lemma~\ref{subdiff2} shows that $$\ensuremath{\nabla^2\!} f(z_{\lambda})(z_{\lambda}-x)\in \hat{\partial} g(z_{\lambda}),$$ where $\ensuremath{\nabla^2\!} f(z_\lambda)(z_\lambda-x)\to \ensuremath{\nabla^2\!} f(y)(y-x)$ as $\lambda \to 1$, since $\ensuremath{\nabla^2\!} f$ is continuous. Thus $\ensuremath{\nabla^2\!} f(y)(y-x)\in \partial_L g(y)$ and therefore \begin{equation}\label{hockey2} \ensuremath{\nabla^2\!} f(y)(y-\bproj{C}(y))\subset \partial_L g(y). \end{equation} Hence (\ref{hockey1}) and (\ref{hockey2}) together give (\ref{mordusubdiff}). Since $g$ is locally Lipschitz around $y\in U$ by Proposition~\ref{distance}, the singular subdifferential of $g$ at $y$ is $\{0\}$, so that its polar cone is $\ensuremath{\mathbb R}^J$. Then for every $w\in \ensuremath{\mathbb R}^J$, using \cite[Exercise~8.23]{Rock98} we have $$\ensuremath{\operatorname{\; \hat{d}}} g(y)(w)=\sup\{\langle y^*,w\rangle:\; y^*\in\partial_L g(y)\};$$ thus, (\ref{clarkederiv}) follows from (\ref{mordusubdiff}). Now (\ref{clarkederiv}) is the same as $$\ensuremath{\operatorname{\; \hat{d}}} g(y) (w)=\max\langle \ensuremath{\nabla^2\!} f(y)(y-\ensuremath{\operatorname{conv}}\bproj{C}(y)), w\rangle.$$ As $\ensuremath{\operatorname{conv}}\bproj{C}(y)$ is compact, we obtain (\ref{tom2}). 
Alternatively, one can obtain (\ref{tom2}) directly from \cite[Theorem~8.49]{Rock98} and (\ref{mordusubdiff}). The Clarke regularity of $-\bD{C}$ follows from combining (\ref{tom1}) and (\ref{tom2}). Indeed, $$\ensuremath{\nabla^2\!} f(y)[\ensuremath{\operatorname{conv}} \bproj{C}(y)-y]\subset \hat{\partial}(-\bD{C})(y)\subset\overline{\partial} (-\bD{C})(y)=\ensuremath{\nabla^2\!} f(y)[\ensuremath{\operatorname{conv}}\bproj{C}(y)-y],$$ so that $\hat{\partial} (-\bD{C})(y)=\overline{\partial}(-\bD{C})(y).$ (ii): When $y\in C$, $\bproj{C}(y)=\{y\}$. By (\ref{mordusubdiff}), $\partial_L g(y)=\{0\}$, and this implies that $g$ is strictly differentiable at $y$ by \cite[Theorem~9.18(b)]{Rock98}. \end{proof} \begin{corollary}\label{singletonchar} Suppose that $f$ is twice continuously differentiable on $U$ and that $\ensuremath{\nabla^2\!} f(y)$ is positive definite for every $y\in U$. Then for $y\in U$, the following are equivalent: \begin{enumerate} \item $\bD{C}$ is Dini subdifferentiable at $y$; \item $\bD{C}$ is differentiable at $y$; \item $\bD{C}$ is strictly differentiable at $y$; \item $\bD{C}$ is Clarke regular at $y$; \item $\bproj{C}(y)$ is a singleton. \end{enumerate} If these hold, then $\nabla \bD{C}(y)=\ensuremath{\nabla^2\!} f(y)[y-\bproj{C}(y)]$. \end{corollary} \begin{proof} (i)$\Rightarrow$(ii): By Proposition~\ref{subdiff1}, both $-\bD{C}$ and $\bD{C}$ are Dini subdifferentiable. Thus $\bD{C}$ is differentiable at $y$ (see \cite[Exercise~3.4.14 on page~143]{Yuri}), and $$\hat{\partial}\bD{C}(y)=-\hat{\partial} (-\bD{C})(y)=\{\nabla \bD{C}(y)\}.$$ (ii)$\Rightarrow$(i) is clear. (ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(iv): This is a consequence of \cite[Theorem~3.4]{WuYe}. (ii)$\Leftrightarrow$(v): If $\bD{C}$ is differentiable at $y$, then (\ref{theset}) implies that $\bproj{C}(y)$ is a singleton. 
Conversely, if $\bproj{C}(y)$ is a singleton, then (\ref{tom2}), Proposition~\ref{distance}, and \cite[Proposition~2.2.4]{Clarke} show that $\bD{C}$ is strictly differentiable and hence differentiable at $y$. Finally, the gradient formula $\nabla \bD{C}(y)=\ensuremath{\nabla^2\!} f(y)[y-\bproj{C}(y)]$ is a consequence of Proposition~\ref{subdiff1} or Lemma~\ref{subdiff2}. \end{proof} \begin{corollary} Suppose that $f$ is twice continuously differentiable on $U$ and that $\ensuremath{\nabla^2\!} f(y)$ is positive definite for every $y\in U$. Then $\bproj{C}$ is single-valued almost everywhere and generically on $U$. \end{corollary} \begin{proof} By Proposition~\ref{distance}, $\bD{C}$ is locally Lipschitz on $U$. Apply Rademacher's Theorem \cite[Theorem~9.1.2]{lewis} or \cite[Corollary~3.4.19]{Yuri} to obtain that $\bD{C}$ is differentiable almost everywhere on $U$. Moreover, since $-\bD{C}$ is Clarke regular on $U$ by Theorem~\ref{complete}, we use \cite[Theorem~10]{loewen} to conclude that $-\bD{C}$ is differentiable generically on $U$, and so is $\bD{C}$. Hence the result follows from Corollary~\ref{singletonchar}. 
\end{proof} \section{Characterizations of Chebyshev Sets} \label{completecheby} \begin{definition}\label{fenchelsubdifferential} For $g:\ensuremath{\mathbb R}^J\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ (not necessarily convex), we let $$\partial g(x):=\{s\in \ensuremath{\mathbb R}^J:\; g(y)\geq g(x)+\langle s,y-x\rangle\ \forall y\in\ensuremath{\mathbb R}^J\}\quad \mbox{if $x\in \ensuremath{\operatorname{dom}} g$}; $$ and $\partial g(x) = \varnothing$ otherwise; the Fenchel conjugate of $g$ is defined by $$s\mapsto g^*(s):=\sup\{\langle s,x\rangle -g(x):\; x\in \ensuremath{\mathbb R}^J\}.$$ \end{definition} According to \cite[Proposition~1.4.3]{urruty1}, \begin{equation}\label{onewayonly1} s\in\partial g(x)\quad \Rightarrow \quad x\in\partial g^*(s), \end{equation} which becomes an equivalence if $g\in\Gamma$. In order to study $\bD{}\,$-Chebyshev\ sets, we need two preparatory results concerning the subdifferentiability of $f+\iota_{C}$ and $(f+\iota_{C})^*$. Lemmas~\ref{friday} and~\ref{conjugatesub} and Theorem~\ref{mainpart} below are inspired by, and generalize, \cite[Propositions~3.2.1 and~3.2.2 and Theorem~3.2.3]{urruty1}, respectively. \begin{lemma}\label{friday} Let $x\in \ensuremath{\mathbb R}^J$. Then $$ \partial (f+\iota_{C})(x)=\{s\in\ensuremath{\mathbb R}^J\colon x\in\bproj{C}(\nabla f^*(s))\}=(\bproj{C}\circ\nabla f^*)^{-1}(x),$$ and consequently $\partial (f+\iota_C) = \big(\bproj{C}\circ\nabla f^*\big)^{-1}$. \end{lemma} \begin{proof} The statement is clear if $x\notin C$, so assume $x\in C$. By \cite[Theorem~1.4.1]{urruty1}, \begin{equation}\label{gonewithwind} s\in \partial (f+\iota_{C})(x) \quad \Leftrightarrow \quad (f+\iota_{C})^*(s)+(f+\iota_{C})(x)=\langle s,x\rangle. 
\end{equation} Proposition~\ref{distance} shows that $$(f+\iota_{C})^*=f^*-\bD{C}\circ\nabla f^* \quad \mbox{ on $\ensuremath{\mathbb R}^J$}.$$ Combining with (\ref{gonewithwind}) and since $x\in C$, we get $$s\in \partial (f+\iota_{C})(x)\quad \Leftrightarrow\quad f^*(s)-(\bD{C}\circ\nabla f^*)(s)+f(x)=\langle s,x\rangle;$$ equivalently, \begin{align} \bD{C}(\nabla f^*(s))& =f(x)+f^*(s)-\langle s,x\rangle\\ & =f(x)+f^*((\nabla f\circ \nabla f^*)(s))-\langle\nabla f\circ \nabla f^*(s),x\rangle\\ &= f(x)-f(\nabla f^*(s))-\langle\nabla f(\nabla f^*(s)), x-\nabla f^*(s)\rangle\\ &= D(x,\nabla f^*(s)), \end{align} i.e., $x\in\bproj{C}(\nabla f^*(s))$. \end{proof} The following result, which establishes the link between $\partial (f+\iota_{C})^*$ and $\bproj{C}\circ\nabla f^*$, is the cornerstone of the convexity characterization of $\bD{}\,$-Chebyshev\ sets. \begin{lemma} \label{conjugatesub} Let $s\in\ensuremath{\mathbb R}^J$. Then $$\partial (f+\iota_{C})^*(s)=\ensuremath{\operatorname{conv}} [\bproj{C}(\nabla f^*(s))]=\ensuremath{\operatorname{conv}} [\bproj{C}\circ\nabla f^*(s)] .$$ Consequently, $\bproj{C}\circ\nabla f^*$ is monotone on $\ensuremath{\mathbb R}^{J}$. \end{lemma} \begin{proof} Since $f$ is $1$-coercive and $C$ is closed, the function $f+\iota_{C}$ is $1$-coercive and lower semicontinuous. We have that $\ensuremath{\operatorname{conv}} (f+\iota_{C})$ is lower semicontinuous by \cite[Proposition~1.5.4]{urruty1}, and $\ensuremath{\operatorname{dom}} (f+\iota_{C})^{*}=\ensuremath{\mathbb R}^J$ by \cite[Proposition~1.3.8]{urruty1}. 
Now $$x\in \partial (f+\iota_{C})^*(s) \quad \Leftrightarrow\quad x\in \partial [\ensuremath{\operatorname{conv}} (f+\iota_{C})]^*(s) \quad \Leftrightarrow\quad s\in \partial [\ensuremath{\operatorname{conv}}(f+\iota_{C})](x),$$ in which the first equivalence follows from \cite[Corollary~1.3.6]{urruty1} and the second equivalence uses the lower semicontinuity of $\ensuremath{\operatorname{conv}} (f+\iota_{C})$. Using \cite[Theorem~1.5.6]{urruty1}, $s\in \partial [\ensuremath{\operatorname{conv}}(f+\iota_{C})](x)$ if and only if there exist $x_{1},\ldots, x_{k}\in \ensuremath{\mathbb R}^J$ and $\alpha_{1},\ldots,\alpha_{k}>0$ such that \begin{equation}\label{togo} \sum_{j=1}^{k}\alpha_j=1, \quad x=\sum_{j=1}^{k}\alpha_{j}x_{j},\quad \mbox{ and }\quad s\in \bigcap_{j=1}^{k}\partial (f+\iota_{C})(x_{j}). \end{equation} But $s\in \partial (f+\iota_{C})(x_{j})$ is equivalent to $$x_{j}\in \bproj{C}(\nabla f^*(s))$$ by Lemma~\ref{friday}. Hence (\ref{togo}) gives $\partial (f+\iota_{C})^*(s)=\ensuremath{\operatorname{conv}} \bproj{C}(\nabla f^*(s)).$ Finally, as a selection of $\partial(f+\iota_C)^*$, which is maximal monotone, the operator $\bproj{C}\circ\nabla f^*$ is monotone. \end{proof} \begin{remark} \label{r:071210} Let $y\in \ensuremath{\mathbb R}^J=\ensuremath{\operatorname{dom}} f^*$. Then $(f+\iota_{C})^*(y)=f^*(y)-\inf_{x\in C}[f(x)+f^*(y)-\langle y,x\rangle].$ Since $$f(x)+f^*(y)-\langle y,x\rangle=f(x)+f^*(\nabla f(\nabla f^*(y)))-\langle \nabla f(\nabla f^*(y)),x\rangle = D(x,\nabla f^*(y)),$$ we have $(f+\iota_{C})^*(y)=f^*(y)-\bD{C}(\nabla f^*(y)).$ Hence $$(f+\iota_{C})^*=f^*-\bD{C}\circ\nabla f^*;$$ see also Proposition~\ref{distance}. If $f=\tfrac{1}{2}\|\cdot\|^2$, then $$(f+\iota_{C})^*=\frac{1}{2}\|\cdot\|^2-\frac{1}{2}d_{C}^2$$ is the so-called \emph{Asplund function}, where $d_{C}(y):=\inf\{\|y-x\|:\; x\in C\}$ for every $y\in \ensuremath{\mathbb R}^J$. 
In this case, Lemma~\ref{conjugatesub} is classical; see \cite[pages~262--264]{urruty3} or \cite{urruty4}. \end{remark} We also need the following result from \cite{solo}. \begin{proposition}[Soloviov] \label{vlad} Let $g:\ensuremath{\mathbb R}^J\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]}$ be lower semicontinuous, and assume that $g^*$ is essentially smooth. Then $g$ is convex. \end{proposition} Now we are ready for the main result of this section. \begin{theorem}[Characterizations of $\bD{}$-Chebyshev sets]\label{mainpart} The following are equivalent: \begin{enumerate} \item\label{whati} $C$ is convex; \item\label{whatii} $C$ is $\bD{}\,$-Chebyshev, i.e., $\bproj{C}$ is single-valued on $U$; \item\label{whatiii} $\bproj{C}$ is continuous on $U$; \item \label{whativ} $\bD{C}\circ\nabla f^*$ is differentiable on $\ensuremath{\mathbb R}^J$; \item \label{whatv} $f+\iota_{C}$ is convex. \end{enumerate} When these equivalent conditions hold, we have \begin{equation}\label{chenchen1} \nabla (\bD{C}\circ \nabla f^*)=\nabla f^*-\bproj{C}\circ\nabla f^* \quad \mbox{ on $\ensuremath{\mathbb R}^J$}; \end{equation} consequently, $\bD{C}\circ \nabla f^*$ is continuously differentiable. If, in addition, $f$ is twice continuously differentiable on $U$ and $\ensuremath{\nabla^2\!} f(y)$ is positive definite for every $y\in U$, then \emph{\ref{whati}--\ref{whatv}} are equivalent to \begin{enumerate} \item[{\rm (vi)}] $\bD{C}$ is differentiable on $U$. \end{enumerate} In this case, we have \begin{equation}\label{chenchen2} \nabla\bD{C}(y)=\ensuremath{\nabla^2\!} f(y)[y-\bproj{C}(y)] \quad \mbox{ for every $y\in U$}; \end{equation} consequently, $\bD{C}$ is continuously differentiable. \end{theorem} \begin{proof} \ref{whati}$\Rightarrow$\ref{whatii} is well known; see, e.g., \cite[Theorem~3.12]{Baus97}. \ref{whatii}$\Rightarrow$\ref{whatiii} follows from Theorem~\ref{banffday}(iii). 
To see \ref{whatiii}$\Rightarrow$\ref{whativ}, we use Remark~\ref{r:071210}: $$\bD{C}\circ\nabla f^*=f^*-(f+\iota_{C})^*.$$ Since $\bproj{C}$ is continuous on $U$ and $\nabla f^*:\ensuremath{\mathbb R}^J\rightarrow U$, $\partial (f+\iota_{C})^*$ is single-valued on $\ensuremath{\mathbb R}^J$ by Lemma~\ref{conjugatesub}. Thus, $(f+\iota_{C})^*$ is differentiable on $\ensuremath{\mathbb R}^J$. Altogether, $\bD{C}\circ\nabla f^* = f^* - (f+\iota_C)^*$ is differentiable on $\ensuremath{\mathbb R}^J$. When $f+\iota_{C}$ is convex, since $C\subset U$ we have that $\ensuremath{\operatorname{dom}} (f+\iota_{C})=C$ is convex, and this shows \ref{whatv}$\Rightarrow$\ref{whati}. We now prove \ref{whativ}$\Rightarrow$\ref{whatv} and assume \ref{whativ}. Remark~\ref{r:071210} shows \begin{equation}\label{chenchen3} (f+\iota_{C})^*=f^*-\bD{C}\circ\nabla f^*, \end{equation} which implies that \begin{equation}\label{betterform} (f+\iota_{C})^* \mbox{ is differentiable, hence essentially smooth, on $\ensuremath{\mathbb R}^J$.} \end{equation} Since $f+\iota_{C}$ is lower semicontinuous, it follows from Proposition~\ref{vlad} that $f+\iota_{C}$ is convex. When the equivalent conditions \ref{whati}--\ref{whatv} hold, (\ref{chenchen1}) follows from Lemma~\ref{conjugatesub} and (\ref{chenchen3}). Since $\nabla f^*$ is continuous and $\bproj{C}$ is continuous by \ref{whatiii}, we obtain that $\bD{C}\circ\nabla f^*$ is continuously differentiable. When $\ensuremath{\nabla^2\!} f(y)$ is positive definite for every $y\in U$, \ref{whatii}$\Leftrightarrow$(vi) by Corollary~\ref{singletonchar}. Finally, (\ref{chenchen2}) follows from Theorem~\ref{complete}, i.e., from (\ref{theset}). This finishes the proof. \end{proof} \section{Right Bregman Projections}\label{rightchar} In this section, it will be convenient to write $D_{f}$ for the Bregman distance associated with $f$ (see \eqref{eq:D}). Correspondingly, we write $\bproj{C}^{f}$ and $\fproj{C}^{f}$ for the corresponding left and right projection operators. 
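Before turning to right projections, the left Bregman machinery above can be checked numerically in the classical case $f=\tfrac{1}{2}\|\cdot\|^2$, where $\nabla f^*$ is the identity, $D(x,y)=\tfrac{1}{2}\|x-y\|^2$, and the identity $(f+\iota_C)^*=f^*-\bD{C}\circ\nabla f^*$ reduces to the Asplund function of Remark~\ref{r:071210}. The following sketch is purely illustrative and not part of the development: the two-point nonconvex set $C$, the random test points, and the brute-force evaluation of the conjugate over $C$ are our own choices.

```python
import numpy as np

# Illustrative nonconvex (two-point) set C; for f = (1/2)||.||^2 the
# Bregman distance D(x, y) is (1/2)||x - y||^2 and the left Bregman
# projection is the Euclidean nearest-point map.
C = np.array([[0.0, 0.0], [2.0, 0.0]])

def conjugate_over_C(y):
    # (f + iota_C)^*(y) = max_{c in C} ( <y, c> - (1/2)||c||^2 )
    return max(y @ c - 0.5 * c @ c for c in C)

def asplund(y):
    # f^*(y) - D_C(y) = (1/2)||y||^2 - (1/2) d_C(y)^2
    d_C = min(np.linalg.norm(y - c) for c in C)
    return 0.5 * y @ y - 0.5 * d_C ** 2

# The two expressions agree everywhere, even though C is not convex.
rng = np.random.default_rng(0)
for _ in range(100):
    y = rng.uniform(-3.0, 3.0, size=2)
    assert abs(conjugate_over_C(y) - asplund(y)) < 1e-10

# On the bisector x_1 = 1 the projection onto C is multi-valued,
# consistent with the fact that this nonconvex C is not Chebyshev.
y = np.array([1.0, 0.5])
assert abs(np.linalg.norm(y - C[0]) - np.linalg.norm(y - C[1])) < 1e-12
```

For this particular $f$ the left and right Bregman projections coincide, so the sketch does not distinguish them; it only illustrates the conjugacy identity behind the characterization.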
While $D_{f}$ is convex in its first argument, it is not necessarily so in its second argument. The properties of $\fproj{C}^{f}$ can be studied by using $\bproj{\nabla f(C)}^{f^*}$. \begin{proposition}\label{different} Let $f\in\Gamma$ be Legendre and $C\subset\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f$. Then for the right Bregman nearest point projection, we have \begin{equation}\label{left1.1} \fproj{C}^{f}=\nabla f^*\circ \bproj{\nabla f(C)}^{f^*}\circ\nabla f; \end{equation} or equivalently, \begin{equation}\label{left1} \bproj{\nabla f(C)}^{f^*}=\nabla f\circ \fproj{C}^{f}\circ\nabla f^*. \end{equation} \end{proposition} \begin{proof} By \cite[Theorem~3.7(v)]{Baus97} (applied to $f^*$ rather than $f$), $$D_{f^*}(x^*,y^*)=D_{f}(\nabla f^*(y^*),\nabla f^*(x^*)) \quad \forall x^*,y^*\in\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*.$$ For every $y^*\in\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*$, we thus have \begin{align} \bproj{\nabla f(C)}^{f^*}(y^*) & =\ensuremath{\operatorname*{argmin}}_{x^*\in \nabla f(C)} D_{f^*}(x^*,y^*)=\ensuremath{\operatorname*{argmin}}_{x^*\in \nabla f(C)} D_{f}(\nabla f^*(y^*),\nabla f^*(x^*))\label{manteo1}\\ & = \nabla f (\fproj{\nabla f^*(\nabla f(C))}^{f}(\nabla f^*(y^*)))=\nabla f(\fproj{C}^{f}(\nabla f^*(y^*)))\\ & =(\nabla f \circ \fproj{C}^{f}\circ\nabla f^*)(y^*),\label{manteo3} \end{align} which gives (\ref{left1}). Finally, we see that (\ref{left1.1}) is equivalent to (\ref{left1}) by using Fact~\ref{isom}. \end{proof} \begin{lemma}\label{preparation} Let $f\in\Gamma$ be Legendre, let $C\subset\ensuremath{\mathbb R}^J$ be such that $\ensuremath{\operatorname{cl}}{C}\subset \ensuremath{\operatorname{int}\operatorname{dom}}\, f$, and assume that for every $y\in \ensuremath{\operatorname{int}\operatorname{dom}}\, f$, $\bproj{C}^{f}(y)\neq\varnothing$. Then $C$ is closed. 
\end{lemma} \begin{proof} Assume that $(c_{n})_{n=1}^{\infty}$ is a sequence in $C$ with $c_{n}\to y$. We need to show that $y\in C$. By assumption, $y\in\ensuremath{\operatorname{cl}}{C}$ and $y\in U$. If $y\notin C$, then \begin{equation}\label{strict} D_{f}(c,y)=f(c)-f(y)-\langle\nabla f(y),c-y\rangle>0 \quad \forall c\in C, \end{equation} by, e.g., \cite[Theorem~3.7(iv)]{Baus97}. On the other hand, as $f$ is continuous on $U$, $$ 0\leq \bD{C}^{f}(y) \leq D_{f}(c_{n},y)= f(c_{n})-f(y)-\langle \nabla f(y),c_{n}-y\rangle \to 0. $$ Thus, $\bD{C}^{f}(y)=0$; by (\ref{strict}), the infimum defining $\bD{C}^{f}(y)$ is then not attained, which contradicts the assumption that $\bproj{C}^{f}(y)\neq \varnothing$. \end{proof} \begin{theorem}\label{termbegin} Let $f\in\Gamma$ be Legendre, with full domain $\ensuremath{\mathbb R}^J$, and let $C\subset\ensuremath{\mathbb R}^J$ be closed with $\ensuremath{\operatorname{cl}} (\nabla f(C))\subset\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*$. Assume that $\fproj{C}^{f}(y)$ is a singleton for every $y\in\ensuremath{\mathbb R}^J$. Then $\nabla f(C)$ is convex. \end{theorem} \begin{proof} Note that $f^*$ is Legendre and $1$-coercive. By (\ref{left1}), $\bproj{\nabla f(C)}^{f^*}(y)$ is a singleton for every $y\in\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*$. As $\ensuremath{\operatorname{cl}} (\nabla f(C))\subset\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*$, Lemma~\ref{preparation} shows that the set $\nabla f(C)$ is closed. Hence we may apply Theorem~\ref{mainpart} to $f^*$ and $\nabla f(C)$, and we obtain that $\nabla f(C)$ is convex. \end{proof} \begin{corollary}\label{termbegin1} Let $f$ and $C$ satisfy \emph{\textbf{A1}--\textbf{A3}}, assume that $f$ has full domain, and that $\fproj{C}^{f}(y)$ is a singleton for every $y\in\ensuremath{\mathbb R}^J$. Then $\nabla f(C)$ is convex. 
\end{corollary} The following example shows that even if $\fproj{C}^{f}(y)$ is a singleton for every $y\in\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f$, the set $C$ may fail to be convex. Thus, Theorem~\ref{mainpart} fails for the right Bregman projection $\fproj{C}^f$. Note that Theorem~\ref{termbegin} only allows us to conclude that $\nabla f(C)$, rather than $C$, is convex. \begin{example} Consider the Legendre function $f:\ensuremath{\mathbb R}^2\rightarrow\ensuremath{\mathbb R}$ given by $$f(x,y):=e^{x}+e^{y}\qquad \forall (x,y)\in\ensuremath{\mathbb R}^2,$$ and its Fenchel conjugate \begin{equation*} f^*\colon \ensuremath{\mathbb R}^2\rightarrow\ensuremath{\,\left]-\infty,+\infty\right]} \colon (x,y) \mapsto \begin{cases} x\ln x-x+y\ln y-y, &\text{if}\;\;x\geq 0, y\geq 0;\\ \ensuremath{+\infty}, & \text{otherwise}. \end{cases} \end{equation*} Define the compact convex set $$C:=\big[(0,0),(1,2)\big]=\{(\lambda,2\lambda): 0\leq\lambda\leq 1\}.$$ As $\nabla f(x,y)=(e^x,e^y)$ for every $(x,y)\in \ensuremath{\mathbb R}^2$, we see that $$\nabla f(C)=\{(e^{\lambda},e^{2\lambda}):\ 0\leq \lambda\leq 1\}$$ is compact but clearly \emph{not convex}. (i) In view of Theorem~\ref{termbegin} and the lack of convexity of $\nabla f(C)$, there must exist $(x,y)\in \ensuremath{\mathbb R}^2$ such that $\fproj{C}^{f}(x,y)$ is \emph{multi-valued}. (ii) Since $\bproj{C}^{f}(x,y)$ is a singleton for every $(x,y)\in \ensuremath{\mathbb R}^2$, and since $\fproj{\nabla f(C)}^{f^*}=\nabla f\circ \bproj{ C}^{f}\circ\nabla f^*$ by Proposition~\ref{different} (applied to $f^*$ and $\nabla f(C)$), we deduce that $\fproj{\nabla f(C)}^{f^*}$ is single-valued on $\ensuremath{\operatorname{int}}\ensuremath{\operatorname{dom}} f^*=\{(x,y):\; x>0,y>0\}.$ Therefore, the analogue of Theorem~\ref{mainpart} for the right Bregman projection fails even though $f^*$ is Legendre and $1$-coercive. 
\end{example} \section*{Acknowledgments} Heinz Bauschke was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Canada Research Chair Program. Xianfu Wang was partially supported by the Natural Sciences and Engineering Research Council of Canada. Jane Ye was partially supported by the Natural Sciences and Engineering Research Council of Canada. Xiaoming Yuan was partially supported by the Pacific Institute for the Mathematical Sciences, by the University of Victoria, by the University of British Columbia Okanagan, and by the National Science Foundation of China Grant~10701055. \small \end{document}
\begin{document} \title{An embedded corrector problem for homogenization. \ Part I: Theory} \begin{abstract} This article is the first part of a two-fold study, the objective of which is the theoretical analysis and numerical investigation of new approximate corrector problems in the context of stochastic homogenization. We present here three new alternatives for the approximation of the homogenized matrix for diffusion problems with highly-oscillatory coefficients. These different approximations all rely on the use of an {\em embedded} corrector problem (that we previously introduced in~\cite{notre_cras}), where a finite-size domain made of the highly oscillatory material is embedded in a homogeneous infinite medium whose diffusion coefficients have to be appropriately determined. The motivation for considering such embedded corrector problems is made clear in the companion article~\cite{refpartii}, where a very efficient algorithm is presented for the resolution of such problems for particular heterogeneous materials. In the present article, we prove that the three different approximations we introduce converge to the homogenized matrix of the medium when the size of the embedded domain goes to infinity. \end{abstract} \section{Introduction} Let $D \subset {\mathbb R}^d$ be a smooth bounded domain (with $d \in {\mathbb N}^\star$), $f \in L^2(D)$ and $({\mathbb A}_\varepsilon)_{\varepsilon > 0}$ be a family of uniformly bounded and coercive diffusion matrix fields such that ${\mathbb A}_\varepsilon$ varies on the characteristic length-scale $\varepsilon > 0$. We consider the family of elliptic problems \begin{equation}\label{eq:diveps} u_\varepsilon \in H^1_0(D), \quad -\hbox{\rm div}\left[ {\mathbb A}_\varepsilon \, \nabla u_\varepsilon \right]=f \ \text{in $D$}. 
\end{equation} When $\varepsilon$ is much smaller than the characteristic size of the domain $D$, problem~\eqref{eq:diveps} is challenging to address from a numerical perspective. In order to obtain a sufficient accuracy, any discretization method indeed needs to resolve the oscillations of ${\mathbb A}_\varepsilon$, which leads to a discrete problem with a prohibitively large number of degrees of freedom. It is well-known (see e.g.~\cite{Bensoussan,Cioranescu,Jikov}) that, if ${\mathbb A}_\varepsilon$ is bounded and bounded away from zero uniformly in $\varepsilon$, problem~\eqref{eq:diveps} admits a homogenized limit. Up to the extraction of a subsequence, which we denote by $\varepsilon'$, there exists a homogenized matrix-valued field ${\mathbb A}^\star \in (L^\infty(D))^{d \times d}$ such that, for any $f \in L^2(D)$, the solution $u_{\varepsilon'}$ to~\eqref{eq:diveps} converges, weakly in $H^1_0(D)$, to $u^\star$, the unique solution to the homogenized equation \begin{equation}\label{eq:div0} u^\star \in H^1_0(D), \quad -\mbox{\rm div}\left[ {\mathbb A}^\star \nabla u^\star \right] = f \mbox{ in $D$}. \end{equation} Note that the homogenized matrix, and hence the function $u^\star$, depends in general on the extracted subsequence. 
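To make the homogenization limit concrete in the simplest setting, recall the classical one-dimensional periodic case, where the homogenized coefficient is the harmonic mean of the periodic coefficient. The following sketch is an illustrative aside (it is not the embedded corrector method developed in this article): the particular coefficient $a_{\rm per}(x)=2+\sin(2\pi x)$ and the quadrature grid are our own choices.

```python
import numpy as np

# One-dimensional periodic sanity check: for A_eps(x) = a_per(x/eps)
# with a_per 1-periodic, the homogenized coefficient is the harmonic mean
#     a* = ( \int_0^1 1 / a_per(x) dx )^{-1},
# which in general differs from the arithmetic mean \int_0^1 a_per.
a_per = lambda x: 2.0 + np.sin(2.0 * np.pi * x)

n = 200_000
x = (np.arange(n) + 0.5) / n           # midpoint rule over one period
a_star = 1.0 / np.mean(1.0 / a_per(x))

# For this a_per, \int_0^1 dx / (2 + sin(2 pi x)) = 1 / sqrt(3),
# so a* = sqrt(3) ~ 1.732, while the arithmetic mean is 2.
assert abs(a_star - np.sqrt(3.0)) < 1e-8
assert abs(np.mean(a_per(x)) - 2.0) < 1e-8
```

In dimension $d\geq 2$ no such closed formula is available, which is precisely why corrector problems, and the approximations studied below, are needed.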
This setting includes in particular the periodic case, where ${\mathbb A}_\varepsilon(x)={\mathbb A}_{\rm per}(x/\varepsilon)$ for a fixed ${\mathbb Z}^d$-periodic function ${\mathbb A}_{\rm per}$, the quasi-periodic case, where ${\mathbb A}_\varepsilon(x)={\mathbb A}_{\rm q-per}(x/\varepsilon)$ for a fixed quasi-periodic function ${\mathbb A}_{\rm q-per}$, and the stationary random case (see~\cite{kozlov,papa}), where $$ {\mathbb A}_\varepsilon(x)={\mathbb A}_{\rm sta}(x/\varepsilon,\omega) \ \text{for some realization $\omega$ of a stationary random function ${\mathbb A}_{\rm sta}$.} $$ In these three cases, the convergence of $u_\varepsilon$ to $u^\star$ holds for the whole sequence (and not only up to a subsequence extraction), and the homogenized matrix field ${\mathbb A}^\star$ is actually equal to a constant and deterministic matrix in the whole domain $D$. Once this homogenized matrix has been determined, problem~\eqref{eq:div0} can be solved by standard numerical techniques with a much lower computational cost than the original problem~\eqref{eq:diveps}. The computation of the homogenized matrix is often a challenging task. In the quasi-periodic case and in the random stationary case, corrector problems posed over the whole space ${\mathbb R}^d$ have to be solved. In practice, approximate corrector problems defined on truncated domains with appropriate boundary conditions (typically periodic boundary conditions) are considered to obtain approximate homogenized diffusion matrices. The larger the size of the truncated domain, the more accurate the corresponding approximation of the homogenized matrix. The use of standard finite element discretizations to tackle these corrector problems may lead to very large discretized problems, whose computational costs can be prohibitive. In this article, we propose some alternative methods to approximate the homogenized matrix.
These are based on the use of an {\em embedded corrector problem} that is again defined over the whole space ${\mathbb R}^d$. In this new problem (see~\eqref{eq:pbbase} below), the diffusion coefficient is equal to ${\mathbb A}_\varepsilon$ in a bounded domain of typical size $R$, and to a {\em constant} matrix $A_R$ outside this bounded domain, the value of which has to be properly chosen. Our motivation for considering such a family of corrector problems is the following. Recently, a very efficient numerical method has been proposed and developed in the series of works~\cite{Stamm1,Stamm2} in order to solve Poisson problems arising in implicit solvation models. The adaptation of this algorithm, which is based on a boundary integral formulation of the problem, has enabled us to solve these embedded corrector problems very efficiently in situations where the considered heterogeneous medium is composed of (possibly polydisperse) spherical inclusions embedded in a homogeneous material (see Fig.~\ref{fig:boules} below). This algorithm will be presented in detail in the companion article~\cite{refpartii}. The choice of the value of the exterior constant diffusion coefficient $A_R$ is instrumental in obtaining approximate effective matrices which converge to the exact homogenized matrix when $R$ goes to infinity. In this article, we propose three different approaches to choose the value of the constant exterior diffusion matrix $A_R$ and to define effective matrices from the embedded corrector problem. We prove the convergence of these three approximations to the actual homogenized matrix ${\mathbb A}^\star$ as $R$ goes to infinity. We also show that a naive choice of $A_R$ leads to an approximate homogenized matrix which {\em does not converge} to the exact homogenized matrix when $R$ goes to infinity. This article is organized as follows.
In Section~\ref{sec:existing}, we recall some basic elements of the theory of stochastic homogenization, and review the standard associated numerical methods. The embedded corrector problem mentioned above and the three different approaches we propose to compute effective matrices are presented in Section~\ref{sec:new_defs}. The proofs of consistency of the proposed approximations are collected in Section~\ref{sec:proofs}. Two particular situations, the case of a homogeneous material and the one-dimensional case, for which analytical computations can be performed, are briefly discussed in Section~\ref{sec:justif}. The present work complements the earlier publication~\cite{notre_cras}, where we briefly presented our approaches. We provide here a complete and detailed analysis of them. We refer to~\cite{refpartii} for a detailed presentation of the algorithmic aspects along with some numerical illustrations. \section{Stochastic homogenization: a prototypical example} \label{sec:existing} In the sequel, the following notation is used. Let $d\in {\mathbb N}^\star$, $0 < \alpha \leq \beta < +\infty$ and $$ {\cal M}:= \left\{ A\in {\mathbb R}^{d\times d}, \; A^T = A \ \text{and, for any $\xi \in {\mathbb R}^d$}, \ \alpha |\xi|^2 \leq \xi^T A \xi \leq \beta |\xi|^2 \right\}. $$ Let $(e_i)_{1\leq i \leq d}$ be the canonical basis of ${\mathbb R}^d$. Taking $\xi = e_i$ and next $\xi = e_i+e_j$ in the above definition, we see that any $A:=\left( A_{ij} \right)_{1\leq i,j \leq d} \in {\cal M}$ satisfies $|A_{ij}| \leq \beta$ for any $1 \leq i,j \leq d$. We further denote by $\mathcal{D}({\mathbb R}^d)$ the set of $C^\infty$ functions with compact support in ${\mathbb R}^d$. In this section, we briefly recall the well-known homogenization theory in the stationary ergodic setting, as well as standard strategies to approximate the homogenized coefficients.
We refer to~\cite{kozlov,papa} for some seminal contributions, to~\cite{engquist-souganidis} for a general, numerically oriented presentation, and to~\cite{Bensoussan,Cioranescu,Jikov} for classical textbooks. We also refer to the review article~\cite{singapour} (and the extensive bibliography contained therein) for a presentation of our particular setting. The stationary ergodic setting can be viewed as a prototypical example of contexts in which the alternative method we propose here for approximating the homogenized matrix can be used. \subsection{Theoretical setting} Let $(\Omega, {\cal F}, {\mathbb P})$ be a probability space and $\dps Q := \left( -\frac{1}{2},\frac{1}{2} \right)^d$. For a random variable $X\in L^1(\Omega, d{\mathbb P})$, we denote by $\dps {\mathbb E}[X]:= \int_\Omega X(\omega)\, d{\mathbb P}(\omega)$ its expectation value. For the sake of convenience, we restrict the presentation to the case of discrete stationarity, even though the ideas presented here can be readily extended to the case of continuous stationarity. We assume that the group $({\mathbb Z}^d, +)$ acts on~$\Omega$. We denote by $(\tau_k)_{k\in{\mathbb Z}^d}$ this action, and assume that it preserves the measure ${\mathbb P}$, i.e. $$ \forall k\in {\mathbb Z}^d, \quad \forall F\in {\cal F}, \quad {\mathbb P}(\tau_k(F)) = {\mathbb P}(F). $$ We also assume that $\tau$ is ergodic, that is, $$ \forall F\in {\cal F}, \quad \left( \forall k\in {\mathbb Z}^d, \; \tau_k F = F \right) \implies \left( {\mathbb P}(F) = 0 \mbox{ or } 1 \right). $$ A function ${\cal S}\in L^1_{\rm loc}\left( {\mathbb R}^d, L^1(\Omega)\right)$ is said to be stationary if \begin{equation} \label{eq:stationary} \forall k\in {\mathbb Z}^d, \quad {\cal S}(x + k, \omega) = {\cal S}(x, \tau_k \omega) \mbox{ for almost all $x\in {\mathbb R}^d$ and almost surely}.
\end{equation} In that context, the Birkhoff ergodic theorem~\cite{Birkhoff1,Birkhoff2,Birkhoff3} can be stated as follows: \begin{theorem} \label{th:Birkhoff} Let ${\cal S} \in L^\infty\left( {\mathbb R}^d, L^1(\Omega)\right)$ be a stationary function in the sense of~\eqref{eq:stationary}. For $k = (k_1,k_2, \dots, k_d) \in {\mathbb Z}^d$, we set $\dps |k|_\infty = \sup_{1\leq i \leq d} |k_i|$. Then, $$ \frac{1}{(2N+1)^d}\sum_{|k|_\infty \leq N} {\cal S}(y, \tau_k \omega) \mathop{\longrightarrow}_{N\to +\infty} {\mathbb E}\left[ {\cal S}(y, \cdot) \right] \mbox{ in $L^\infty({\mathbb R}^d)$, almost surely}. $$ This implies that $$ {\cal S}\left( \frac{x}{\varepsilon}, \omega\right) \mathop{\rightharpoonup}_{\varepsilon \to 0}^* {\mathbb E}\left[ \frac{1}{|Q|}\int_Q {\cal S}(y, \cdot)\,dy \right] \mbox{ in $L^\infty({\mathbb R}^d)$, almost surely}. $$ \end{theorem} Note that here $|Q|=1$. We have nevertheless kept the normalizing factor $|Q|^{-1}$ in the above formula to emphasize that the convergence holds toward the expectation of the mean value over the unit cell of the underlying lattice (here ${\mathbb Z}^d$). We also recall the definition of $G$-convergence introduced by F.~Murat and L.~Tartar in~\cite{MuratTartar}: \begin{definition}[$G$-convergence]\label{def:Gconv} Let $D$ be a smooth bounded domain of ${\mathbb R}^d$.
A sequence of matrix-valued functions $\left( {\mathbb A}^R \right)_{R>0} \subset L^\infty(D, {\cal M})$ is said to converge in the sense of homogenization (or to $G$-converge) in $D$ to a matrix-valued function ${\mathbb A}^\star\in L^\infty(D, {\cal M})$ if, for all $f\in H^{-1}(D)$, the sequence $(u^R)_{R>0}$ of solutions to $$ u^R\in H^1_0(D), \quad -\mbox{{\rm div}}\left( {\mathbb A}^R \nabla u^R \right) = f \mbox{ in $\mathcal{D}'(D)$} $$ satisfies $$ \left\{ \begin{array}{l} \dps u^R \mathop{\rightharpoonup}_{R\to +\infty} u^\star \mbox{ weakly in $H^1_0(D)$}, \\ \noalign{\vskip 3pt} \dps {\mathbb A}^R \nabla u^R \mathop{\rightharpoonup}_{R\to +\infty} {\mathbb A}^\star \nabla u^\star \mbox{ weakly in $L^2(D)$}, \end{array} \right . $$ where $u^\star$ is the unique solution to the homogenized equation $$ u^\star \in H^1_0(D), \quad -\mbox{\rm div}\left( {\mathbb A}^\star \nabla u^\star \right) = f \mbox{ in $\mathcal{D}'(D)$}. $$ \end{definition} The following theorem is a classical result of stochastic homogenization theory (see e.g.~\cite{Jikov}): \begin{theorem} \label{th:randhomog} Let ${\mathbb A} \in L^\infty({\mathbb R}^d, L^1(\Omega))$ be such that ${\mathbb A}(x, \omega) \in {\cal M}$ almost surely and for almost all $x\in {\mathbb R}^d$. We assume that ${\mathbb A}$ is stationary in the sense of~\eqref{eq:stationary}. For any $R>0$ and $\omega \in \Omega$, we set ${\mathbb A}^R(\cdot, \omega):= {\mathbb A}(R\cdot, \omega)$.
Then, almost surely, for any arbitrary smooth bounded domain $D\subset {\mathbb R}^d$, the sequence $\left({\mathbb A}^R(\cdot, \omega)\right)_{R>0}\subset L^\infty(D; {\cal M})$ $G$-converges to a \emph{constant and deterministic} matrix $A^\star\in{\cal M}$, which is given by $$ \forall p \in {\mathbb R}^d, \quad A^\star p = {\mathbb E} \left[\frac{1}{|Q|} \int_Q {\mathbb A}(x, \cdot) \left( p + \nabla w_p(x, \cdot) \right) \,dx \right], $$ where $w_p$ is the unique solution (up to an additive constant) in $$ \Big\{ v \in L^2_{\rm loc}({\mathbb R}^d, L^2(\Omega)), \quad \nabla v \in \left( L^2_{\rm unif}({\mathbb R}^d, L^2(\Omega)) \right)^d \Big\} $$ to the so-called corrector problem \begin{equation} \label{eq:correctorrandom} \left\{ \begin{array}{l} -\mbox{{\rm div}}\left( {\mathbb A}(\cdot, \omega)(p + \nabla w_p(\cdot, \omega))\right) = 0 \mbox{ almost surely in $\mathcal{D}'({\mathbb R}^d)$}, \\ \noalign{\vskip 3pt} \nabla w_p \mbox{ is stationary in the sense of~\eqref{eq:stationary}}, \\ \noalign{\vskip 3pt} \dps {\mathbb E}\left[ \int_Q \nabla w_p(x, \cdot)\,dx \right] = 0. \end{array} \right. \end{equation} \end{theorem} In Theorem~\ref{th:randhomog}, the notation $L^2_{\rm unif}$ refers to the uniform $L^2$ space: $$ L^2_{\rm unif}({\mathbb R}^d, L^2(\Omega)) :=\left\{ u \in L^2_{\rm loc}({\mathbb R}^d;L^2(\Omega)), \quad \sup_{x \in {\mathbb R}^d} \int_{x+(0,1)^d} \|u(y,\cdot)\|_{L^2(\Omega)}^2 \, dy < \infty \right\}. $$ The major difficulty in computing the homogenized matrix $A^\star$ is that the corrector problem~\eqref{eq:correctorrandom} is set over the whole space ${\mathbb R}^d$ and cannot be reduced to a problem posed over a bounded domain (in contrast e.g. to periodic homogenization). This is the reason why approximation strategies yielding practical approximations of $A^\star$ are necessary.
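For orientation, we record the one elementary case in which~\eqref{eq:correctorrandom} can be solved in closed form (a standard computation; the one-dimensional case is discussed further in Section~\ref{sec:justif}). In dimension $d=1$, writing $a$ for the scalar stationary coefficient, the first line of~\eqref{eq:correctorrandom} means that $a(\cdot,\omega)\left(p + w_p'(\cdot,\omega)\right)$ is constant in space, equal to some $c(\omega)$, which is in fact deterministic by ergodicity. Dividing by $a$ and using the stationarity and vanishing-mean conditions on $w_p'$ in~\eqref{eq:correctorrandom}, we obtain $$ p = {\mathbb E}\left[ \int_Q \left( p + w_p' \right) \right] = c \, {\mathbb E}\left[ \int_Q a^{-1} \right], $$ and therefore $$ A^\star p = {\mathbb E}\left[ \int_Q a \left( p + w_p' \right) \right] = c = \left( {\mathbb E}\left[ \int_Q a^{-1} \right] \right)^{-1} p, $$ so that the homogenized coefficient is the harmonic mean of $a$, and not its arithmetic mean.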
\subsection{Standard numerical practice} A common approach to approximate $A^\star$ consists in introducing a truncated version of~\eqref{eq:correctorrandom}, see e.g.~\cite{Bourgeat}. For any $R>0$, let us denote $\dps Q_R := \left(-\frac{R}{2},\frac{R}{2} \right)^d$ and $$ H^1_{\rm per}(Q_R):=\left\{ w \in H^1_{\rm loc}({\mathbb R}^d), \quad \mbox{$w$ is $R \, {\mathbb Z}^d$-periodic} \right\}. $$ Observing that $Q_1 = Q$, we also introduce $$ H^1_{\rm per}(Q):=\left\{ w \in H^1_{\rm loc}({\mathbb R}^d), \quad \mbox{$w$ is ${\mathbb Z}^d$-periodic} \right\}. $$ For any $p\in {\mathbb R}^d$, let $\widetilde{w}_p^R(\cdot, \omega)$ be the unique solution in $H^1_{\rm per}(Q_R)/{\mathbb R}$ to \begin{equation} \label{eq:correctorrandom-N} -\mbox{{\rm div}}\left( {\mathbb A}(\cdot, \omega) \left(p + \nabla \widetilde{w}_p^R(\cdot, \omega) \right) \right) = 0 \mbox{ almost surely in $\mathcal{D}'({\mathbb R}^d)$}. \end{equation} It satisfies the variational formulation $$ \forall v \in H^1_{\rm per}(Q_R), \quad \int_{Q_R} (\nabla v)^T {\mathbb A}(\cdot, \omega) \left( p + \nabla \widetilde{w}_p^R(\cdot, \omega) \right) = 0. $$ The corresponding approximate (or apparent) homogenized matrix $A^{\star,R}(\omega) \in {\cal M}$ is defined by $$ \forall p \in {\mathbb R}^d, \quad A^{\star,R}(\omega) \, p := \frac{1}{|Q_R|} \int_{Q_R} {\mathbb A}(\cdot, \omega) \left( p + \nabla \widetilde{w}_p^R(\cdot, \omega) \right). $$ The matrix $A^{\star,R}(\omega)$ is constant and random. A.~Bourgeat and A.~Piatnitski proved in~\cite{Bourgeat} that the sequence of matrices $\left(A^{\star,R}(\omega)\right)_{R>0}$ converges almost surely to $A^\star$ as $R$ goes to infinity.
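A minimal numerical illustration of this almost-sure convergence (ours, not taken from~\cite{Bourgeat}): in dimension one, the truncated corrector problem with periodic boundary conditions is solvable in closed form, and $A^{\star,R}(\omega)$ reduces to the harmonic mean of the coefficient over $Q_R$, while $A^\star = \left({\mathbb E}[a^{-1}]\right)^{-1}$. The "checkerboard" coefficient below, with values 1 and 4, is our arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D toy model (our choice): a stationary coefficient, constant on each unit
# cell [k, k+1), taking the values 1 or 4 with probability 1/2, i.i.d. across
# cells.  In 1D the truncated corrector problem with periodic boundary
# conditions gives A^{*,R}(omega) = harmonic mean of the coefficient over Q_R,
# while A^* = (E[1/a])^{-1} = (0.5 * 1 + 0.5 * 0.25)^{-1} = 1.6.
a_cells = rng.choice([1.0, 4.0], size=100_000)

for R in (10, 1_000, 100_000):
    A_star_R = R / np.sum(1.0 / a_cells[:R])
    print(R, A_star_R)   # approaches 1.6 as R grows (almost surely)
```

The fluctuations of $A^{\star,R}(\omega)$ around $A^\star$ shrink as $R$ increases, in line with the quantitative convergence results recalled below.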
Recent mathematical studies (initiated in~\cite{GloriaOtto1}) by A.~Gloria, F.~Otto and their collaborators have examined in detail the speed of convergence (along with related questions) of $A^{\star,R}(\omega)$ to $A^\star$ (see also~\cite[Theorem 1.3 and Proposition 1.4]{nolen}). Variance reduction techniques have also been introduced to improve the approximation of $A^\star$, see e.g.~\cite{legoll_jcp} for a review. \begin{remark} In~\cite{Bourgeat}, A.~Bourgeat and A.~Piatnitski also analyzed a truncated corrector problem supplied with homogeneous Dirichlet boundary conditions (in contrast to~\eqref{eq:correctorrandom-N}, where periodic boundary conditions are used) and proved similar convergence results. Likewise, in~\cite{Huet}, C.~Huet introduced a corrector problem supplied with Neumann boundary conditions. \end{remark} \begin{remark} Besides approximations based on~\eqref{eq:correctorrandom-N}, other techniques have been introduced to approximate $A^\star$. We refer to~\cite{cottereau,lemaire} for optimization-based techniques, to~\cite{mourrat} for an approach based on the heat equation associated to~\eqref{eq:diveps}, and to~\cite{filtrage_sab,filtrage_blanc} for approaches based on filtering. We also mention~\cite{brisard_autres_CL}, where a problem posed on ${\mathbb R}^d$ (which is different from our embedded problem~\eqref{eq:pbbase}) is considered. In a slightly different context, and with a different objective than ours here, the work~\cite{lu_otto} studies the question of optimal artificial boundary conditions for random elliptic media. \end{remark} The proof in~\cite{Bourgeat} relies on the following scaling argument. For any $R>0$, let ${\mathbb A}^R(\cdot, \omega) := {\mathbb A}(R\cdot, \omega)$ and $\dps w_p^R(\cdot, \omega) := \frac{1}{R}\widetilde{w}_p^R(R\cdot, \omega)$.
Rescaling problem~\eqref{eq:correctorrandom-N}, we obtain that, for any $p\in {\mathbb R}^d$, $w_p^R(\cdot, \omega)$ is the unique solution in $H^1_{\rm per}(Q)/{\mathbb R}$ to \begin{equation} \label{eq:correctorrandom-N2} -\mbox{{\rm div}}\left( {\mathbb A}^R(\cdot, \omega) \left( p + \nabla w_p^R(\cdot, \omega) \right) \right) = 0 \mbox{ almost surely in $\mathcal{D}'({\mathbb R}^d)$}, \end{equation} and that \begin{equation} \label{eq:maison} A^{\star,R}(\omega) \, p = \frac{1}{|Q|} \int_Q {\mathbb A}^R(\cdot, \omega) \left( p + \nabla w_p^R(\cdot, \omega) \right). \end{equation} Choosing $w_p^R(\cdot, \omega)$ as the solution to~\eqref{eq:correctorrandom-N2} of zero average, it is easy to see that $\left( w_p^R(\cdot, \omega) \right)_{R>0}$ is bounded in $H^1_{\rm per}(Q)$. In addition, we know that the sequence $\left( {\mathbb A}^R(\cdot, \omega)\right)_{R>0}$, which belongs to $L^\infty(Q, {\cal M})$, $G$-converges almost surely to $A^\star$ in $Q$. Using~\cite[Theorem~5.2 page~151]{Jikov} (which is recalled below as Theorem~\ref{th:th1}), we are in a position to pass to the limit $R\to +\infty$ in~\eqref{eq:maison} and obtain the desired convergence result. At this point, we make the following remark. If $\left( {\mathbb A}^R\right)_{R>0} \subset L^\infty(Q; {\cal M})$ is a general family of matrix-valued fields which $G$-converges to a constant matrix $A^\star$ as $R$ goes to infinity, one can define for all $R>0$ effective approximate matrices $A^{\star, R}$ as follows.
Consider, for any $p\in {\mathbb R}^d$, the unique solution $w_p^R$ in $H^1_{\rm per}(Q)/{\mathbb R}$ to \begin{equation} \label{eq:correctorrandom-N3} -\mbox{{\rm div}}\left( {\mathbb A}^R(p + \nabla w_p^R)\right) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, \end{equation} and define the matrix $A^{\star, R}$ by \begin{equation}\label{eq:defper} \forall p \in {\mathbb R}^d, \quad A^{\star,R} \, p = \frac{1}{|Q|} \int_Q {\mathbb A}^R \left( p + \nabla w_p^R \right). \end{equation} Then, using the same arguments as in the above stationary ergodic case, it can be proven that $\dps A^{\star,R} \mathop{\longrightarrow}_{R\to +\infty} A^\star$. Solving~\eqref{eq:correctorrandom-N3} by means of standard finite element methods requires the use of very fine discretization meshes, which may lead to prohibitive computational costs. This motivates our work and the alternative definitions of effective matrices that we propose in the next section. \section{Three alternative definitions of effective matrices}\label{sec:new_defs} Let $B = B(0,1)$ be the unit open ball of ${\mathbb R}^d$, $\Gamma = \partial B$ and $n(x)$ be the outward pointing unit normal vector at point $x \in \Gamma$. For any measurable subset $E$ of ${\mathbb R}^d$, we denote by $\chi_E$ the characteristic function of $E$. The embedded corrector problem we define below (see~\eqref{eq:pbbase}) depends on $B$. We note that none of the results presented in this article uses the fact that $B$ is a ball. They can thus be easily extended to the case where $B$ is a general smooth bounded domain of ${\mathbb R}^d$. \subsection{Embedded corrector problem}\label{sec:embedded} In this section, we introduce an {\em embedded corrector} problem, which we will use in the sequel to define new approximations of the homogenized coefficient $A^\star$.
We introduce the vector spaces \begin{equation} \label{eq:def_V0} V:=\left\{v\in L^2_{\rm loc}({\mathbb R}^d), \ \nabla v \in \left(L^2({\mathbb R}^d)\right)^d\right\} \quad \mbox{and} \quad V_0:= \left\{ v \in V, \ \int_B v = 0\right\}. \end{equation} The space $V_0$, endowed with the scalar product $\langle \cdot, \cdot \rangle$ defined by $$ \forall v,w\in V_0, \quad \langle v,w\rangle:= \int_{{\mathbb R}^d} \nabla v \cdot \nabla w, $$ is a Hilbert space. For any matrix-valued field ${\mathbb A} \in L^\infty(B,{\cal M})$, any constant matrix $A\in {\cal M}$, and any vector $p\in {\mathbb R}^d$, we denote by $w^{{\mathbb A},A}_p$ the unique solution in $V_0$ to \begin{equation} \label{eq:pbbase} -\mbox{{\rm div}}\Big( \mathcal{A}^{{\mathbb A},A} \left( p + \nabla w^{{\mathbb A}, A}_p \right) \Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, \end{equation} where (see Figure~\ref{fig:boules}) $$ \mathcal{A}^{{\mathbb A},A}(x) := \left| \begin{array}{l} {\mathbb A}(x) \mbox{ if } x\in B,\\ A \mbox{ if } x\in {\mathbb R}^d \setminus B. \end{array}\right . $$ The variational formulation of~\eqref{eq:pbbase} reads as follows: find $w_p^{{\mathbb A}, A} \in V_0$ such that \begin{equation}\label{eq:FV} \forall v\in V_0, \quad \int_B (\nabla v)^T {\mathbb A} (p + \nabla w_p^{{\mathbb A}, A}) + \int_{{\mathbb R}^d\setminus B} (\nabla v)^T A \nabla w_p^{{\mathbb A}, A} - \int_\Gamma (Ap\cdot n) \, v = 0. \end{equation} Problem~\eqref{eq:pbbase} is linear and the above bilinear form is coercive in $V_0$. This problem is thus equivalent to a minimization problem (recall that ${\mathbb A}$ and $A$ are symmetric).
The solution $w_p^{{\mathbb A}, A}$ to~\eqref{eq:pbbase} is equivalently the unique solution to the minimization problem \begin{equation} \label{eq:optim1} w_p^{{\mathbb A}, A} = \mathop{\mbox{argmin}}_{v\in V_0} J_p^{{\mathbb A}, A}(v), \end{equation} where \begin{equation} \label{eq:optim2} J^{{\mathbb A}, A}_p(v) := \frac{1}{|B|} \int_B (p + \nabla v)^T {\mathbb A} (p + \nabla v) + \frac{1}{|B|} \int_{{\mathbb R}^d\setminus B} (\nabla v)^T A \nabla v - \frac{2}{|B|} \int_{\Gamma} (Ap\cdot n) v. \end{equation} We define the map ${\cal J}_p^{{\mathbb A}}: {\cal M} \to {\mathbb R}$ by \begin{equation} \label{eq:def_curly_J} \forall A\in {\cal M}, \quad {\cal J}_p^{{\mathbb A}}(A) := J_p^{{\mathbb A}, A} \left( w_p^{{\mathbb A}, A} \right) = \min_{v\in V_0} J_p^{{\mathbb A}, A}(v). \end{equation} The linearity of the map ${\mathbb R}^d \ni p \mapsto w_p^{{\mathbb A}, A} \in V_0$ yields that, for any $A\in {\cal M}$, the map ${\mathbb R}^d\ni p \mapsto {\cal J}_p^{{\mathbb A}}(A)$ is quadratic. As a consequence, for all $A\in {\cal M}$, there exists a unique symmetric matrix $G^{{\mathbb A}}(A)\in {\mathbb R}^{d\times d}$ such that \begin{equation} \label{eq:defGA} \forall p\in {\mathbb R}^d, \quad {\cal J}^{{\mathbb A}}_p(A) = p^T G^{{\mathbb A}}(A) p. \end{equation} \subsection{Motivation of the embedded corrector problem} For all $R>0$, let us denote by $B_R$ the open ball of ${\mathbb R}^d$ centered at $0$ of radius $R$. We make the following remark, considering, for the sake of illustration, the stationary ergodic setting. Let ${\mathbb A}(x,\omega)$ be a stationary random matrix-valued field. 
A simple scaling argument shows that, in this case, for all $R>0$ and $p\in {\mathbb R}^d$, the unique solution $\widetilde{w}_p^{R,{\mathbb A},A}(\cdot, \omega)$ in $V$ to \begin{equation}\label{eq:equivrand} -\mbox{div}\Big( \big( {\mathbb A}(\cdot,\omega) \chi_{B_R} + A (1-\chi_{B_R}) \big) \big( p + \nabla \widetilde{w}_p^{R,{\mathbb A},A}(\cdot, \omega) \big) \Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, \quad \int_{B_R} \widetilde{w}_p^{R,{\mathbb A},A}(\cdot, \omega) = 0, \end{equation} satisfies $\dps w_p^{{\mathbb A}^R(\cdot, \omega),A}(\cdot, \omega) = \frac{1}{R} \, \widetilde{w}_p^{R,{\mathbb A},A}(R \cdot, \omega)$, where ${\mathbb A}^R(x,\omega):= {\mathbb A}\left(Rx, \omega\right)$ for any $x\in B$. Solving embedded corrector problems of the form~\eqref{eq:pbbase} with ${\mathbb A} = {\mathbb A}^R(\cdot, \omega)$ in $B$ is then equivalent to solving~\eqref{eq:equivrand}. Figure~\ref{fig:boules} gives an illustration of the matrix-valued field $\mathcal{A}^{R, \omega, A}:= {\mathbb A}(\cdot,\omega) \, \chi_{B_R} + A \, (1-\chi_{B_R})$. \begin{figure} \caption{Left: the field ${\mathbb A}(\cdot,\omega)$. Right: the field $\mathcal{A}^{R, \omega, A} = {\mathbb A}(\cdot,\omega) \, \chi_{B_R} + A \, (1-\chi_{B_R})$, equal to ${\mathbb A}(\cdot,\omega)$ in $B_R$ and to the constant matrix $A$ outside.} \label{fig:boules} \end{figure} From now on, we consider $\left( {\mathbb A}^R \right)_{R>0} \subset L^\infty(B; {\cal M})$ a general family of matrix-valued fields which $G$-converges in the sense of Definition~\ref{def:Gconv} to a {\em constant} matrix $A^\star$ in $B$. Keep in mind that the random stationary ergodic setting provides a prototypical example of such a family of matrix-valued fields. The rest of the section is devoted to the presentation of different methods for constructing approximate effective matrices, using corrector problems of the form~\eqref{eq:pbbase}. We first present in Section~\ref{sec:def0} a naive definition, which turns out to be non-convergent in general.
In the subsequent Sections~\ref{sec:def1}, \ref{sec:def2} and~\ref{sec:def3}, we present three possible choices leading to converging approximations, namely~\eqref{eq:optimisation}, \eqref{eq:def2} and~\eqref{eq:def3}. The motivation for considering problems of the form~\eqref{eq:pbbase} is twofold. First, we show below that the solution $w^{{\mathbb A}^R, A}_p$ to~\eqref{eq:pbbase} can be used to define consistent approximations of $A^\star$. We refer to Section~\ref{sec:proofs} for the proof that the upcoming approximations~\eqref{eq:optimisation}, \eqref{eq:def2} and~\eqref{eq:def3} converge to $A^\star$ when $R \to \infty$. Second, problem~\eqref{eq:pbbase} can be efficiently solved. We recall that, in~\cite{Stamm1, Stamm2}, an efficient numerical method has been introduced to compute the electrostatic interaction of molecules with an infinite continuous solvent medium, based on implicit solvation models. The problem to solve there reads: find $w\in H^1(\Omega)$ solution to \begin{equation} \label{eq:pb_stamm} - \Delta w = 0 \mbox{ in $\Omega$}, \quad w = g \mbox{ on $\partial \Omega$}, \end{equation} where $\Omega \subset {\mathbb R}^d$ is a bounded domain composed of the union of a finite but possibly very large number of balls, and $g\in L^2(\partial \Omega)$. As shown in~\cite{Stamm1,Stamm2}, Problem~\eqref{eq:pb_stamm} can be efficiently solved using a numerical approach based on domain decomposition, boundary integral formulation and discretization with spherical harmonics. Inspired by~\cite{Stamm1,Stamm2}, we have developed an efficient algorithm for the resolution of~\eqref{eq:pbbase}, which is somewhat similar to the method used for the resolution of~\eqref{eq:pb_stamm}. This algorithm is presented in the companion article~\cite{refpartii}.
In short, Problem~\eqref{eq:pbbase} can be efficiently solved using a boundary integral formulation, domain decomposition methods and approximation with spherical harmonics in the case when the matrix-valued field ${\mathbb A}(x)$ models the diffusion coefficient of a material composed of spherical inclusions embedded in a uniform medium. More precisely, our algorithm is specifically designed to solve~\eqref{eq:pbbase} in the case when, in $B$, $$ {\mathbb A}(x)=\left| \begin{array}{l} A_{\rm int}^i \mbox{ if } x \in B(x_i,r_i), \ 1 \leq i \leq I, \\ A_{\rm ext} \mbox{ if } x \in B \setminus \bigcup_{i=1}^I B(x_i, r_i), \end{array} \right. $$ for some $I\in{\mathbb N}^\star$, $A_{\rm int}^i, A_{\rm ext} \in {\cal M}$ for any $1 \leq i \leq I$, $(x_i)_{1\leq i \leq I}\subset B$ and $(r_i)_{1\leq i \leq I}$ some set of positive real numbers such that $\bigcup_{i=1}^I B(x_i,r_i) \subset B$ and $B(x_i,r_i) \cap B(x_j,r_j) = \emptyset$ for all $1\leq i \neq j \leq I$. We have denoted by $B(x_i,r_i) \subset {\mathbb R}^d$ the open ball of radius $r_i$ centered at $x_i$. We refer the reader to~\cite{refpartii} for more details on our numerical method. The approach we propose in this article is thus particularly suited for the homogenization of stochastic heterogeneous materials composed of spherical inclusions (see again Figure~\ref{fig:boules}). The properties of the inclusions (i.e. the coefficients $A_{\rm int}^i$), their centers $x_i$ and their radii $r_i$ may be random, as long as ${\mathbb A}$ is stationary. In particular, this algorithm makes it possible to compute very efficiently the effective thermal properties of polydisperse materials. \subsection{A failed attempt to define a homogenized matrix}\label{sec:def0} It is common knowledge in the homogenization community that {\em $G$-convergence is not sensitive to the choice of boundary conditions}, see e.g.~\cite[p.~27]{Allaire}.
Thus, at first glance, one could naively think that it would be sufficient to choose a fixed matrix $A\in {\cal M}$, define $w_p^{{\mathbb A}^R, A}$ for any $p \in {\mathbb R}^d$ and $R>0$ as the unique solution in $V_0$ to~\eqref{eq:pbbase} with ${\mathbb A} = {\mathbb A}^R$, and introduce, in the spirit of~\eqref{eq:defper}, the matrix $A_0^R$ defined by \begin{equation} \label{eq:def0} \forall p\in {\mathbb R}^d, \quad A^R_0 p = \frac{1}{|B|} \int_B {\mathbb A}^R \left( p + \nabla w_p^{{\mathbb A}^R, A} \right). \end{equation} However, as implied by the following lemma, the sequence $\left( A^R_0 \right)_{R>0}$ defined by~\eqref{eq:def0} {\em does not} converge in general to $A^\star$ as $R$ goes to infinity. Imposing the value of the exterior matrix $A$ in~\eqref{eq:pbbase} is actually much stronger than imposing some (non-oscillatory) boundary conditions on a truncated corrector problem as in~\eqref{eq:correctorrandom-N3}. It turns out that the sequence $\left( A^R_0 \right)_{R>0}$ actually converges as $R$ goes to infinity, but that its limit {\em depends} on the exterior matrix $A$. The following lemma, the proof of which is postponed until Section~\ref{sec:prooflem1}, is not only interesting to guide the intuition. It is also essential in our analysis, in particular for the identification of the limit of $A_0^R$. \begin{lemma}\label{lem:lem1} Let $({\mathbb A}^R)_{R>0}$ and $(A^R)_{R>0}$ be two sequences such that, for any $R>0$, ${\mathbb A}^R\in L^\infty(B, {\cal M})$ and $A^R \in {\cal M}$. We assume that $({\mathbb A}^R)_{R>0}$ $G$-converges to a matrix-valued field ${\mathbb A}^\star\in L^\infty(B,{\cal M})$ on $B$ and that $(A^R)_{R>0}$ converges to some $A^\infty\in {\cal M}$.
For any $R>0$ and $p\in {\mathbb R}^d$, let $w_p^{{\mathbb A}^R, A^R}$ be the unique solution in $V_0$ to \begin{equation} \label{eq:pbn} -\mbox{{\rm div}}\left( {\cal A}^{{\mathbb A}^R,A^R} \left(p + \nabla w_p^{{\mathbb A}^R, A^R}\right)\right) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, \end{equation} where $$ {\cal A}^{{\mathbb A}^R,A^R}(x):=\left\{ \begin{array}{l} {\mathbb A}^R(x) \mbox{ if $x\in B$},\\ A^R \mbox{ otherwise}. \end{array} \right. $$ Then, the sequence $\left(w^{{\mathbb A}^R, A^R}_p \right)_{R>0}$ weakly converges in $H^1_{\rm loc}({\mathbb R}^d)$ to $w_p^{{\mathbb A}^\star, A^\infty}$, which is the unique solution in $V_0$ to \begin{equation} \label{eq:infty} -\mbox{{\rm div}}\left( {\cal A}^{{\mathbb A}^\star,A^\infty} \left(p + \nabla w_p^{{\mathbb A}^\star, A^\infty} \right)\right) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, \end{equation} where $$ {\cal A}^{{\mathbb A}^\star,A^\infty}(x):= \left\{ \begin{array}{l} {\mathbb A}^\star(x) \mbox{ if $x\in B$},\\ A^\infty \mbox{ otherwise}. \end{array}\right. $$ Moreover, \begin{equation} \label{eq:infty2} {\cal A}^{{\mathbb A}^R,A^R} \left(p + \nabla w_p^{{\mathbb A}^R, A^R}\right) \mathop{\rightharpoonup}_{R\to +\infty} {\cal A}^{{\mathbb A}^\star,A^\infty} \left(p + \nabla w_p^{{\mathbb A}^\star, A^\infty}\right) \mbox{ weakly in $L^2_{\rm loc}({\mathbb R}^d)$}. \end{equation} \end{lemma} We now briefly show how to use the above lemma to study the limit of $\left( A_0^R\right)_{R>0}$ defined by~\eqref{eq:def0}. From Lemma~\ref{lem:lem1}, we immediately deduce that $$ \lim_{R \to \infty} A^R_0 p = \frac{1}{|B|}\int_B A^\star \left(p + \nabla w_p^{A^\star, A} \right) = A^\star p + A^\star \frac{1}{|B|} \int_B \nabla w_p^{A^\star, A}.
$$ The above right-hand side is different from $A^\star p$ in general, unless $A=A^\star$, as stated in the following lemma, the proof of which is given in Section~\ref{sec:prooflem2}. \begin{lemma}\label{lem:lem2} Let $A^\star, A\in {\cal M}$ and for all $p\in {\mathbb R}^d$, let $w_p^{A^\star, A}$ be the unique solution in $V_0$ to \begin{equation} \label{eq:taiwan} - \mbox{\rm div}\left( \mathcal{A}^{A^\star, A}\left( p + \nabla w_p^{A^\star, A}\right) \right) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, \end{equation} where $$ \mathcal{A}^{A^\star, A}(x):=\left\{ \begin{array}{cc} A^\star & \mbox{ if $x\in B$},\\ A & \mbox{ otherwise}. \end{array} \right . $$ Then, $$ \left[ \forall p\in {\mathbb R}^d, \ \ A^\star p = \frac{1}{|B|}\int_B A^\star \left(p + \nabla w_p^{A^\star, A} \right)\right] \quad \mbox{ if and only if } \quad A = A^\star. $$ \end{lemma} We thus have to find a way to define a sequence of constant exterior matrices $(A^R)_{R>0} \subset {\cal M}$ such that problem~\eqref{eq:pbbase} with ${\mathbb A} = {\mathbb A}^R$ and $A = A^R$ enables us to introduce converging approximations of $A^\star$. In Sections~\ref{sec:def1}, \ref{sec:def2} and~\ref{sec:def3}, we present three possible choices, which yield three alternative definitions of approximate homogenized matrices that all converge to $A^\star$ when $R \to \infty$. \subsection{First definition: minimizing the energy of the corrector} \label{sec:def1} To gain some intuition, we first recast~\eqref{eq:pbbase} as $$ -\mbox{{\rm div}}\left[ \Big(A + \chi_B ({\mathbb A} -A) \Big) \, \Big(p + \nabla w_p^{{\mathbb A}, A} \Big)\right] = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}.
$$ Thus, in this problem, the quantity ${\mathbb A}-A$ can be seen as a local perturbation in $B$ to the constant homogeneous exterior medium characterized by the diffusion coefficient $A$. In particular, in the case of a perfectly homogeneous infinite medium (when ${\mathbb A} = A$), the unique solution $w_p^{{\mathbb A}, A}$ to the above equation is $w_p^{{\mathbb A}, A}=0$. In the context of homogenization, when the inner matrix-valued coefficient ${\mathbb A}$ is fixed, a natural idea is then to define the value of the exterior matrix $A$ as {\em the matrix so that the energy ${\cal J}^{{\mathbb A}}_p(A)$ of $w_p^{{\mathbb A}, A}$ (which is always non-positive) is as close to 0 as possible (i.e. as small as possible in absolute value)}. In order to define a more isotropic criterion, we consider the maximization of the quantity $\dps \sum_{i=1}^d {\cal J}^{{\mathbb A}}_{e_i}(A)$ rather than ${\cal J}^{{\mathbb A}}_p(A)$. This motivates our first definition~\eqref{eq:optimisation}. We have the following result, the proof of which is postponed until Section~\ref{sec:proofconcavity}. We recall that ${\cal J}^{{\mathbb A}}_p$ and $G^{{\mathbb A}}$ are defined by~\eqref{eq:def_curly_J} and~\eqref{eq:defGA}. \begin{lemma} \label{lem:lemconcavity} For any ${\mathbb A}\in L^\infty(B,{\cal M})$, the function $\dps \mathcal{J}^{{\mathbb A}}: {\cal M} \ni A \mapsto \sum_{i=1}^d {\cal J}^{{\mathbb A}}_{e_i}(A) = \mbox{\rm Tr}\left(G^{{\mathbb A}}(A)\right)$ is concave. Moreover, when $d \leq 3$, $\mathcal{J}^{{\mathbb A}}$ is strictly concave. \end{lemma} Since we are interested in practical aspects, we did not investigate the case $d \geq 4$, but we are confident that our arguments could be extended to higher dimensions. 
We infer from Lemma~\ref{lem:lemconcavity} that, for any $R>0$, there exists a matrix $A^R_1 \in {\cal M}$ such that \begin{equation} \label{eq:optimisation} A^R_1 \in \mathop{\mbox{argmax }}_{A\in {\cal M}} \sum_{i=1}^d {\cal J}_{e_i}^{{\mathbb A}^R}(A) = \mathop{\mbox{argmax }}_{A\in {\cal M}} \mbox{Tr}\left(G^{{\mathbb A}^R}(A)\right), \end{equation} where we recall that ${\mathbb A}^R = {\mathbb A}(R \cdot)$. Moreover, in dimension $d \leq 3$, this matrix is unique. Such a matrix $A^R_1$ can be seen as a matrix which minimizes the absolute value of the sum of the energies of the corrector functions $w_{e_i}^{{\mathbb A}^R,A}$ over all possible $A\in {\cal M}$. Indeed, using the equivalent expression~\eqref{eq:expJ3} of ${\cal J}_p^{{\mathbb A}^R}(A)$ given below, we have that $$ A^R_1 \in \mathop{\mbox{argmin }}_{A\in {\cal M}} \sum_{i=1}^d \left( \int_B \left( \nabla w_{e_i}^{{\mathbb A}^R, A}\right)^T {\mathbb A}^R \nabla w_{e_i}^{{\mathbb A}^R, A} + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_{e_i}^{{\mathbb A}^R, A}\right)^T A \nabla w_{e_i}^{{\mathbb A}^R, A} \right). $$ This provides a justification of the definition of $A^R_1$ by~\eqref{eq:optimisation}. As shown in Proposition~\ref{prop:prop1} below, $A^R_1$ is a converging approximation of $A^\star$. \subsection{Second definition: an averaged effective matrix} \label{sec:def2} We present here a second natural way to define an effective approximation of the homogenized matrix using the matrix $A^R_1$ defined in the previous section.
The idea is to define the matrix $A^R_2 \in {\cal M}$ such that, formally, for any $p \in {\mathbb R}^d$, \begin{multline} \label{eq:maison2} \int_B \left[\left(p + \nabla w_p^{{\mathbb A}^R, A^R_1}\right)^T {\mathbb A}^R \left(p + \nabla w_p^{{\mathbb A}^R, A^R_1}\right) - p^T A^R_2 p \right] \\ + \int_{{\mathbb R}^d\setminus B}\left[ \left(p + \nabla w_p^{{\mathbb A}^R, A^R_1}\right)^T A^R_1 \left(p + \nabla w_p^{{\mathbb A}^R, A^R_1}\right) - p^T A^R_1 p \right] = 0. \end{multline} Formally, we thus ask that the energy of $p \cdot x + w_p^{{\mathbb A}^R, A^R_1}(x)$ (measured with the energy associated to $\mathcal{A}^{{\mathbb A}^R, A^R_1}(x)$) is equal to the energy of $p \cdot x$ (measured with the energy associated to $\mathcal{A}^{A^R_2, A^R_1}(x)$). Note however that the second term in~\eqref{eq:maison2} is not necessarily well-defined, since we may have $\nabla w_p^{{\mathbb A}^R, A^R_1} \not\in \left[L^1({\mathbb R}^d\setminus B)\right]^d$. Formally, the above relation reads \begin{multline*} \frac{1}{|B|} \int_B \left(p + \nabla w_p^{{\mathbb A}^R, A^R_1}\right)^T {\mathbb A}^R \left(p + \nabla w_p^{{\mathbb A}^R, A^R_1}\right) \\ + \frac{1}{|B|} \int_{{\mathbb R}^d\setminus B} \left(\nabla w_p^{{\mathbb A}^R, A^R_1}\right)^T A^R_1 \nabla w_p^{{\mathbb A}^R, A^R_1} - \frac{2}{|B|} \int_{\Gamma} (A^R_1 p \cdot n) \, w_p^{{\mathbb A}^R, A^R_1} = p^T A^R_2 p, \end{multline*} where now all the terms are well-defined. In view of~\eqref{eq:optim2} and~\eqref{eq:def_curly_J}, the above relation reads \begin{equation} \label{eq:def2} \forall p\in{\mathbb R}^d, \quad p^T A_2^R p = J_p^{{\mathbb A}^R,A^R_1}\left(w_p^{{\mathbb A}^R, A^R_1}\right) = {\cal J}_p^{{\mathbb A}^R}(A^R_1), \end{equation} which implies, in view of~\eqref{eq:defGA}, that $$ A_2^R = G^{{\mathbb A}^R}(A^R_1), $$ where $A^R_1$ is a solution to~\eqref{eq:optimisation}. We prove the following convergence result in Section~\ref{sec:proofprop1}.
\begin{proposition}\label{prop:prop1} Let $({\mathbb A}^R)_{R>0} \subset L^\infty(B, {\cal M})$ be a family of matrix-valued fields which $G$-converges in $B$ to a constant matrix $A^\star \in {\cal M}$ as $R$ goes to infinity. Then, the sequences of matrices $\left( A^R_1 \right)_{R>0}$ and $\left( A^R_2 \right)_{R>0}$, respectively defined by~\eqref{eq:optimisation} and~\eqref{eq:def2}, satisfy $$ A^R_1 \mathop{\longrightarrow}_{R\to +\infty} A^\star \quad \mbox{ and } \quad A^R_2 \mathop{\longrightarrow}_{R\to +\infty} A^\star. $$ \end{proposition} \subsection{Third definition: a self-consistent effective matrix} \label{sec:def3} We finally introduce a third definition, inspired by~\cite{Christensen}. Let us assume that, for any $R>0$, there exists a matrix $A^R_3 \in {\cal M}$ such that \begin{equation} \label{eq:def3} A^R_3 = G^{{\mathbb A}^R}(A^R_3). \end{equation} Such a matrix formally satisfies the following equation (see~\eqref{eq:maison2}): for all $p\in {\mathbb R}^d$, \begin{multline*} \int_B \left[\left(p + \nabla w_p^{{\mathbb A}^R, A^R_3}\right)^T {\mathbb A}^R \left(p + \nabla w_p^{{\mathbb A}^R, A^R_3}\right) - p^T A^R_3 p \right] \\ + \int_{{\mathbb R}^d\setminus B}\left[ \left(p + \nabla w_p^{{\mathbb A}^R, A^R_3}\right)^T A^R_3 \left(p + \nabla w_p^{{\mathbb A}^R, A^R_3}\right) - p^T A^R_3 p \right] = 0. \end{multline*} Formally, the energy of $p \cdot x + w_p^{{\mathbb A}^R, A^R_3}(x)$ (measured with the energy associated to $\mathcal{A}^{{\mathbb A}^R, A^R_3}(x)$) is equal to the energy of $p \cdot x$ (measured with the energy associated to $A^R_3$).
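In practice, a matrix satisfying the self-consistent equation~\eqref{eq:def3} would be sought by a fixed-point iteration on the map $A \mapsto G^{{\mathbb A}^R}(A)$, each evaluation of which requires solving $d$ corrector problems~\eqref{eq:pbbase}. The following Python sketch shows the generic loop; the toy map used in the demonstration is a hypothetical stand-in (a contraction with a known fixed point), not the actual operator $G^{{\mathbb A}^R}$, and no claim is made here that the true map is contractive.

```python
import numpy as np

def solve_self_consistent(G, A0, tol=1e-10, max_iter=1000):
    """Fixed-point iteration A_{k+1} = G(A_k) for the self-consistent
    definition A_3 = G(A_3); G maps d x d matrices to d x d matrices."""
    A = A0
    for _ in range(max_iter):
        A_next = G(A)
        if np.linalg.norm(A_next - A) < tol:
            return A_next
        A = A_next
    raise RuntimeError("fixed-point iteration did not converge")

# Demonstration with a hypothetical contraction whose fixed point is known;
# the true map A -> G^{A^R}(A) is a black box requiring PDE solves.
A_star = np.diag([2.0, 3.0])
A3 = solve_self_consistent(lambda A: 0.5 * (A + A_star), np.eye(2))
assert np.linalg.norm(A3 - A_star) < 1e-8
```

When no exact fixed point is available, the same loop applied to the spherical average~\eqref{eq:spher} (a scalar iteration) is a natural fallback.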
This third definition also yields a converging approximation of $A^\star$, as stated in the following proposition, which is proved in Section~\ref{sec:proofprop2}: \begin{proposition} \label{prop:prop2} Let $({\mathbb A}^R)_{R>0} \subset L^\infty(B, {\cal M})$ be a family of matrix-valued fields which $G$-converges in $B$ to a constant matrix $A^\star \in {\cal M}$ as $R$ goes to infinity. Let us assume that, for any $R>0$, there exists a matrix $A^R_3\in {\cal M}$ satisfying~\eqref{eq:def3}. Then, $$ A^R_3 \mathop{\longrightarrow}_{R\to +\infty} A^\star. $$ \end{proposition} \begin{remark} It is sufficient to assume that there exists a sequence $R_n$ converging to $\infty$ such that, for any $n \in {\mathbb N}^\star$, there exists a matrix $A^{R_n}_3\in {\cal M}$ satisfying~\eqref{eq:def3}. Then $\dps \lim_{n \to \infty} A^{R_n}_3 = A^\star$. \end{remark} In general, we are not able to prove the existence of a matrix $A_3^R$ satisfying~\eqref{eq:def3}. However, the following weaker existence result holds in the case of an isotropic homogenized medium. Its proof is postponed until Section~\ref{sec:proofprop3}. \begin{proposition} \label{prop:prop3} Let $({\mathbb A}^R)_{R>0} \subset L^\infty(B, {\cal M})$ be a family of matrix-valued fields which $G$-converges in $B$ to a constant matrix $A^\star \in {\cal M}$ as $R$ goes to infinity. In addition, assume that $A^\star = a^\star I_d$, where $I_d$ is the identity matrix of ${\mathbb R}^{d \times d}$. Then, for any $R>0$, there exists a positive number $a^R_3 \in [\alpha,\beta]$ (which is unique at least in the case when $d \leq 3$) such that \begin{equation} \label{eq:spher} a^R_3 = \frac{1}{d}\mbox{\rm Tr}\left( G^{{\mathbb A}^R} \left( a^R_3 \, I_d \right) \right). \end{equation} In addition, \begin{equation} \label{eq:spher2} a^R_3 \mathop{\longrightarrow}_{R\to +\infty} a^\star.
\end{equation} \end{proposition} Again, we did not investigate whether the solution to~\eqref{eq:spher} is unique in dimension $d \geq 4$. Note that, since $A^\star = a^\star I_d \in {\cal M}$, we have that $a^\star \in [\alpha,\beta]$. Note also that~\eqref{eq:spher} is weaker than~\eqref{eq:def3}, which would read $a^R_3 \, I_d = G^{{\mathbb A}^R}(a^R_3 \, I_d)$. However, this weaker result is sufficient to prove that $a^R_3$ is a converging approximation of $a^\star$. \begin{remark} In the mechanics literature, other types of approximations have been proposed, based on the analytical solution of the so-called Eshelby problem~\cite{Eshelby}. We refer the reader to the Appendix of~\cite{Thomines} for a pedagogical mathematical introduction to the main methods (including those presented in~\cite{Benveniste,Christensen,Hill,MoriTanaka}) that were derived from the works of Eshelby. For the sake of brevity, we do not detail them here. \end{remark} \section{Proofs of consistency}\label{sec:proofs} We collect in this section the proofs of the above propositions. We begin by proving some technical lemmas useful in our analysis. \subsection{Preliminary lemmas} We first recall two classical functional analysis results on the space $V_0$ defined by~\eqref{eq:def_V0}. The first result can be proved using a standard contradiction argument. \begin{lemma}[Poincar\'e-Wirtinger inequality in $V_0$]\label{lem:Poincare} For all $r>0$, there exists $K_r>0$ such that \begin{equation} \label{eq:Poincare} \forall v \in V_0, \quad \left\| v \right\|_{L^2(B_r)} \leq K_r \left\| \nabla v \right\|_{L^2(B_r)}, \end{equation} where $B_r := B(0,r)$ is the open ball of ${\mathbb R}^d$ of radius $r$ and centered at the origin. \end{lemma} The next lemma is a straightforward consequence of the continuity of the trace operator from $H^1(B)$ to $L^2(\Gamma)$ and of inequality~\eqref{eq:Poincare} for $r=1$.
\begin{lemma} \label{lem:trace} There exists $L>0$ such that \begin{equation} \label{eq:trace} \forall v \in V_0, \quad \left\| v \right\|_{L^2(\Gamma)} \leq L \left\| \nabla v \right\|_{L^2(B)}. \end{equation} \end{lemma} We next recall a classical homogenization result (see e.g.~\cite[Theorem 5.2 p.~151]{Jikov}), which plays a central role in our analysis: \begin{theorem} \label{th:th1} Let $O \subset {\mathbb R}^d$ be an open subset of ${\mathbb R}^d$ and $D$ and $D_1$ two subdomains of $O$ with $D_1\subset D \subset O$. Consider a sequence $({\mathbb A}^R)_{R>0} \subset L^\infty(O, {\cal M})$ and assume that it $G$-converges as $R$ goes to infinity to a matrix-valued function ${\mathbb A}^\star \in L^\infty(D, {\cal M})$ in the domain $D$. In addition, let $p\in {\mathbb R}^d$ and let $(w_p^R)_{R>0} \subset H^1(D_1)$ be a sequence of functions which weakly converges (in $H^1(D_1)$) to some $w_p^\infty \in H^1(D_1)$. We assume that $$ \forall R>0, \quad -\mbox{{\rm div}}\left( {\mathbb A}^R \left( p +\nabla w_p^R \right) \right) = 0 \mbox{ in $\mathcal{D}'(D_1)$}. $$ Then, $$ {\mathbb A}^R \left(p + \nabla w_p^R \right) \rightharpoonup {\mathbb A}^\star \left(p + \nabla w_p^\infty \right) \mbox{ weakly in $L^2(D_1)$} $$ and $w_p^\infty$ satisfies $$ -\mbox{{\rm div}}\left( {\mathbb A}^\star \left(p + \nabla w_p^\infty \right) \right) = 0 \mbox{ in $\mathcal{D}'(D_1)$}. $$ \end{theorem} Lastly, the following technical result will be useful in the proofs below: \begin{lemma} \label{lem:tech} Let $(p_i)_{1\leq i \leq d}$ be a basis of ${\mathbb R}^d$. Let $A_1$ and $A_2$ be two constant matrices in ${\cal M}$ such that, for any $1\leq i \leq d$, we have $$ -\mbox{{\rm div}}\Big( \big(A_1 \chi_B + A_2 (1 - \chi_B) \big) p_i \Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}. $$ Then $A_1 = A_2$.
\end{lemma} \begin{proof} For any $\varphi \in \mathcal{D}({\mathbb R}^d)$ and any $1\leq i \leq d$, we have \begin{eqnarray*} 0 &=& \int_{{\mathbb R}^d} (\nabla \varphi)^T \left(A_1 \chi_B + A_2 \chi_{{\mathbb R}^d \setminus B} \right) p_i \\ &=& \int_B (\nabla \varphi)^T A_1 \, p_i + \int_{{\mathbb R}^d \setminus B} (\nabla \varphi)^T A_2 \, p_i \\ &=& \int_\Gamma (A_1 p_i \cdot n) \varphi - \int_\Gamma (A_2 p_i \cdot n) \varphi. \end{eqnarray*} Since $\varphi$ is arbitrary, this implies that $$ \big( (A_1 -A_2) p_i \big) \cdot n(x) = 0 \ \text{on $\Gamma$}. $$ Hence $(A_1 -A_2) p_i = 0$ for any $1\leq i \leq d$. Since $(p_i)_{1\leq i \leq d}$ is a basis of ${\mathbb R}^d$, we get $A_1 = A_2$. \end{proof} \subsection{Equivalent definitions of $w^{{\mathbb A},A}_p$} \label{sec:equivalent} We collect here some equivalent definitions of the solution $w_p^{{\mathbb A}, A}$ to~\eqref{eq:pbbase}. As pointed out above (see~\eqref{eq:FV}), the variational formulation of~\eqref{eq:pbbase} is \begin{equation} \label{eq:FVgen} \forall v \in V_0, \quad \int_B (\nabla v)^T {\mathbb A} \left(p + \nabla w^{{\mathbb A}, A}_p \right) + \int_{{\mathbb R}^d \setminus B} \left(\nabla v\right)^T A \nabla w^{{\mathbb A}, A}_p - \int_{\Gamma} (A p\cdot n) \, v = 0. \end{equation} Taking $v = w_p^{{\mathbb A}, A}$ as a test function in~\eqref{eq:FVgen}, we obtain the following useful relation: \begin{equation} \label{eq:rel1} \int_B \left( \nabla w_p^{{\mathbb A}, A}\right)^T {\mathbb A} \nabla w_p^{{\mathbb A}, A} + \int_{{\mathbb R}^d \setminus B} \left(\nabla w_p^{{\mathbb A}, A}\right)^T A \nabla w_p^{{\mathbb A}, A} = - \int_B p^T {\mathbb A} \nabla w_p^{{\mathbb A}, A} + \int_{\Gamma} (Ap \cdot n) \, w_p^{{\mathbb A}, A}. \end{equation} We recall, as announced in Section~\ref{sec:embedded}, that $w_p^{{\mathbb A}, A}$ is equivalently the unique solution to the optimization problem~\eqref{eq:optim1}--\eqref{eq:optim2}. 
We then infer from~\eqref{eq:def_curly_J} and~\eqref{eq:rel1} that \begin{equation} \label{eq:expJ3} {\cal J}_p^{{\mathbb A}}(A) = \frac{1}{|B|} \int_B p^T {\mathbb A} p - \frac{1}{|B|} \int_B \left( \nabla w^{{\mathbb A}, A}_p\right)^T {\mathbb A} \nabla w^{{\mathbb A}, A}_p - \frac{1}{|B|} \int_{{\mathbb R}^d\setminus B} \left( \nabla w^{{\mathbb A}, A}_p\right)^T A \nabla w^{{\mathbb A}, A}_p. \end{equation} Equivalently, we also have that \begin{equation} \label{eq:expJ2} {\cal J}_p^{{\mathbb A}}(A) = \frac{1}{|B|} \int_B p^T {\mathbb A} \left(p + \nabla w^{{\mathbb A}, A}_p \right) - \frac{1}{|B|} \int_{\Gamma} (Ap \cdot n) \, w^{{\mathbb A}, A}_p. \end{equation} \subsection{Proof of Lemma~\ref{lem:lem1}} \label{sec:prooflem1} We first show that the sequence $\left( \left\| \nabla w_p^{{\mathbb A}^R, A^R}\right\|_{L^2({\mathbb R}^d)}\right)_{R>0}$ is bounded. The weak formulation of~\eqref{eq:pbn} is given by~\eqref{eq:FVgen} with ${\mathbb A} \equiv {\mathbb A}^R$ and $A \equiv A^R$. Using~\eqref{eq:rel1} (again with ${\mathbb A} \equiv {\mathbb A}^R$ and $A \equiv A^R$), we have \begin{eqnarray*} \alpha \left\| \nabla w_p^{{\mathbb A}^R, A^R}\right\|_{L^2({\mathbb R}^d)}^2 &\leq& \int_B \left(\nabla w_p^{{\mathbb A}^R, A^R} \right)^T {\mathbb A}^R \nabla w_p^{{\mathbb A}^R, A^R} + \int_{{\mathbb R}^d \setminus B} \left(\nabla w_p^{{\mathbb A}^R, A^R} \right)^T A^R \nabla w_p^{{\mathbb A}^R, A^R} \\ &=& \int_{\Gamma} (A^Rp \cdot n) \, w_p^{{\mathbb A}^R, A^R} - \int_B p^T {\mathbb A}^R \left( \nabla w_p^{{\mathbb A}^R, A^R} \right) \\ & \leq & \beta\left( |\Gamma|^{1/2} \left\| w_p^{{\mathbb A}^R, A^R}\right\|_{L^2(\Gamma)} + |B|^{1/2} \left\| \nabla w_p^{{\mathbb A}^R, A^R} \right\|_{L^2(B)} \right) \\ & \leq & \beta \left( |\Gamma|^{1/2} L + |B|^{1/2} \right) \left\|\nabla w_p^{{\mathbb A}^R, A^R} \right\|_{L^2({\mathbb R}^d)}, \end{eqnarray*} where, in the last line, we have used~\eqref{eq:trace}. 
We deduce that, for all $R>0$, $$ \left\| \nabla w_p^{{\mathbb A}^R, A^R} \right\|_{L^2({\mathbb R}^d)} \leq \frac{\beta}{\alpha}\left( |\Gamma|^{1/2} L + |B|^{1/2} \right). $$ Let $r>0$. Using~\eqref{eq:Poincare}, we deduce from the above bound that the sequence $\left( \left\| w_p^{{\mathbb A}^R, A^R} \right\|_{H^1(B_r)} \right)_{R>0}$ is bounded. Therefore, up to the extraction of a subsequence, there exists a function $w^{\infty,r}_p \in H^1(B_r)$ such that $$ w_p^{{\mathbb A}^R, A^R} \mathop{\rightharpoonup}_{R\to +\infty} w^{\infty,r}_p \mbox{ weakly in $H^1(B_r)$}. $$ By uniqueness of the limit in the distributional sense, we see that $w^{\infty,r'}_p|_{B_r} = w^{\infty,r}_p$ for any $r' >r$. Thus, there exists a function $w_p^\infty\in H^1_{\rm loc}({\mathbb R}^d)$ such that, up to the extraction of a subsequence, \begin{equation} \label{eq:maison3} w_p^{{\mathbb A}^R, A^R} \mathop{\rightharpoonup}_{R\to +\infty} w^{\infty}_p \mbox{ weakly in $H^1_{\rm loc}({\mathbb R}^d)$}. \end{equation} Moreover, since the sequence $\left( \left\| \nabla w_p^{{\mathbb A}^R, A^R} \right\|_{L^2({\mathbb R}^d)} \right)_{R>0}$ is bounded, there exists $W_p^\infty \in \left( L^2({\mathbb R}^d) \right)^d$ such that (up to the extraction of a subsequence) $\dps \nabla w_p^{{\mathbb A}^R, A^R} \mathop{\rightharpoonup}_{R\to +\infty} W^\infty_p$ weakly in $L^2({\mathbb R}^d)$. By uniqueness of the limit, we get that $\nabla w^{\infty}_p = W^\infty_p \in \left( L^2({\mathbb R}^d) \right)^d$. As a consequence, we obtain that $w^\infty_p \in V$. In addition, we obviously have $\dps \int_B w^\infty_p= 0$ and thus $w^\infty_p \in V_0$. At this point, we have shown that, up to the extraction of a subsequence, $w_p^{{\mathbb A}^R, A^R}$ weakly converges as $R \to \infty$ to $w^\infty_p$ in $H^1(B)$. 
Furthermore, we know that $$ -\mbox{{\rm div}} \left( {\mathbb A}^R \left( p + \nabla w_p^{{\mathbb A}^R, A^R} \right) \right) = 0 \mbox{ in $\mathcal{D}'(B)$} $$ and that the sequence $\left( {\mathbb A}^R \right)_{R>0}$ $G$-converges to ${\mathbb A}^\star$ in $B$. Hence, using Theorem~\ref{th:th1} with the choice $D_1 = B$, we obtain that \begin{equation} \label{eq:toto_in} {\mathbb A}^R \left( p+ \nabla w_p^{{\mathbb A}^R, A^R} \right) \rightharpoonup {\mathbb A}^\star \left(p + \nabla w_p^\infty \right) \mbox{ weakly in $L^2(B)$}. \end{equation} For any compact domain $D_1 \subset {\mathbb R}^d \setminus B$, we infer from~\eqref{eq:maison3} that $$ A^R \left( p+ \nabla w_p^{{\mathbb A}^R, A^R} \right) \rightharpoonup A^\infty \left( p + \nabla w_p^\infty \right) \mbox{ weakly in $L^2(D_1)$}. $$ This implies that \begin{equation} \label{eq:toto_out} A^R \left( p+ \nabla w_p^{{\mathbb A}^R, A^R} \right) \rightharpoonup A^\infty \left( p + \nabla w_p^\infty \right) \mbox{ weakly in $L^2_{\rm loc}({\mathbb R}^d \setminus B)$}. \end{equation} Collecting~\eqref{eq:toto_in} and~\eqref{eq:toto_out}, we get the claimed convergence~\eqref{eq:infty2}: $$ {\cal A}^{{\mathbb A}^R,A^R} \left(p+ \nabla w_p^{{\mathbb A}^R, A^R} \right) \rightharpoonup {\cal A}^{{\mathbb A}^\star,A^\infty} \left( p + \nabla w_p^\infty \right) \mbox{ weakly in $L^2_{\rm loc}({\mathbb R}^d)$}. $$ Testing the above convergence against $\nabla \varphi$, where $\varphi$ is an arbitrary function in ${\cal D}({\mathbb R}^d)$, and using~\eqref{eq:pbn}, we deduce~\eqref{eq:infty}. This concludes the proof of Lemma~\ref{lem:lem1}. \subsection{Proof of Lemma~\ref{lem:lem2}} \label{sec:prooflem2} Assume first that $A^\star = A$. Then, for all $p\in{\mathbb R}^d$, $w_p^{A^\star,A} = 0$ is obviously the unique solution in $V_0$ to~\eqref{eq:taiwan}.
This yields that $\dps A^\star p = \frac{1}{|B|} \int_B A^\star (p + \nabla w_p^{A^\star, A})$. Conversely, let us now assume that, for all $p\in {\mathbb R}^d$, we have $$ A^\star p = \frac{1}{|B|} \int_B A^\star (p + \nabla w_p^{A^\star, A}). $$ Since $A^\star$ is constant and invertible, this implies that $\dps \int_B \nabla w_p^{A^\star, A} = 0$. Multiplying this equation by $p^T A$, we get that $\dps 0 = \int_B p^T A \nabla w_p^{A^\star,A} = \int_\Gamma (A p\cdot n) \, w_p^{A^\star,A}$. We now write~\eqref{eq:rel1} with ${\mathbb A} \equiv A^\star$: $$ \int_B (\nabla w_p^{A^\star,A})^T A^\star \nabla w_p^{A^\star,A} + \int_{{\mathbb R}^d \setminus B} (\nabla w_p^{A^\star,A})^T A \nabla w_p^{A^\star,A} = - p^T A^\star \int_B \nabla w_p^{A^\star, A} + \int_\Gamma (A p\cdot n) \, w_p^{A^\star,A} = 0. $$ We thus get that $\nabla w_p^{A^\star,A} = 0$ in ${\mathbb R}^d$. Hence~\eqref{eq:taiwan} yields that $$ \forall p\in {\mathbb R}^d, \quad - \mbox{\rm div}\left[ \left(A^\star \chi_B + A (1 - \chi_B)\right)p\right] = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}. $$ Using Lemma~\ref{lem:tech}, we get that $A = A^\star$. This concludes the proof of Lemma~\ref{lem:lem2}. \subsection{Proof of Lemma~\ref{lem:lemconcavity}}\label{sec:proofconcavity} We first prove a technical lemma which will be used to prove the {\em strict} concavity of $\mathcal{J}^{{\mathbb A}}$ when $d=3$. \begin{lemma}\label{lem:dipole} Let $r>0$ and let $S_r$ (respectively $B_r$) be the sphere (respectively the ball) of radius $r$ of ${\mathbb R}^3$ centered at the origin. Let $\sigma \in {\cal C}^\infty(S_r)$ and $\Phi \in {\mathbb R}^{3\times 3}$ be a constant symmetric matrix such that $\mbox{\rm Tr }\Phi = 0$.
Assume that \begin{equation}\label{eq:rel2} \forall x \in {\mathbb R}^3 \setminus \overline{B_r}, \quad \int_{S_r} \frac{(x-y)^T \Phi (x-y) }{|x-y|^5} \ \sigma(y) \ dy = 0. \end{equation} Then, it holds that either $\Phi = 0$ or $\sigma = 0$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:dipole}] The proof falls into two steps. \noindent {\bf Step 1.} The first part of the proof consists in showing that the function $$ {\mathbb R}^3 \setminus S_r \ni x \mapsto \widetilde V(x) := \int_{S_r} \frac{(x-y)^T \Phi(x-y)}{|x-y|^5} \ \sigma(y) \ dy \in {\mathbb R} $$ is in fact the restriction to ${\mathbb R}^3 \setminus S_r$ of the electrostatic potential $V$ generated by the singular distribution $\rho \in {\cal E}'({\mathbb R}^3)$ supported on $S_r$ and defined by \begin{equation}\label{eq:defrho} \forall \psi \in C^\infty({\mathbb R}^3), \quad \langle \rho,\psi \rangle_{{\cal E}',C^\infty} = \frac{1}{3} \int_{S_r} (\Phi : D^2\psi(y)) \ \sigma(y) \ dy. \end{equation} The distribution $\rho$ can be interpreted as a smooth layer of quadrupoles on $S_r$. The link between $V$ and $\rho$ will be detailed below. Since $\rho$ defined by~\eqref{eq:defrho} is compactly supported and of order $2$ (for $\Phi \neq 0$), its Fourier transform is analytic, does not grow faster than $|k|^2$ at infinity, and we have $$ \widehat{\rho}(0) = \frac{1}{(2\pi)^{3/2}} \langle \rho, 1\rangle_{{\cal E}',C^\infty} = 0, \qquad \frac{\partial\widehat{\rho}}{\partial k_j}(0) = - \frac{i}{(2\pi)^{3/2}} \langle \rho, x_j \rangle_{{\cal E}',C^\infty} = 0. $$ The Poisson equation $-\Delta V = 4\pi \rho$ therefore has a unique solution $V$ belonging to ${\cal S}'({\mathbb R}^3)$ and vanishing at infinity. We have $\widehat{V} \in L^\infty({\mathbb R}^3)$ and $$ \forall k \in {\mathbb R}^3 \setminus \{0\}, \quad \widehat{V}(k) = \frac{4\pi}{|k|^2} \, \widehat{\rho}(k).
$$ Let $\phi \in \mathcal{D}({\mathbb R}^3)$ be supported in ${\mathbb R}^3 \setminus S_r$ and $\psi = \phi \star |\cdot|^{-1}$. Note that $\phi \in {\cal S}({\mathbb R}^3)$, $\psi \in C^\infty({\mathbb R}^3)$, $\widehat{\psi} \in L^1({\mathbb R}^3)$, and $|k|^2 \, \widehat{\psi}(k) = 4\pi \, \widehat{\phi}(k)$. However, $\psi \not\in {\cal S}({\mathbb R}^3)$. We write $$ \langle V,\phi \rangle_{{\cal D}',{\cal D}} = \langle V,\phi \rangle_{{\cal S}',{\cal S}} = \left\langle \overline{\widehat{V}}, \widehat{\phi} \right\rangle_{{\cal S}',{\cal S}} = \int_{{\mathbb R}^3} \overline{\widehat{V}(k)} \ \widehat{\phi}(k) \, dk = \int_{{\mathbb R}^3} \overline{\widehat{\rho}(k)} \ \widehat{\psi}(k) \, dk = \langle \rho,\psi \rangle_{{\cal E}',C^\infty}. $$ For any $y \in S_r$, we have $$ \psi(y) = \int_{{\rm Supp}(\phi)} \frac{\phi(x)}{|x-y|} \, dx, $$ hence, for $y \in S_r$, $$ \frac{\partial^2 \psi}{\partial y_i \partial y_j}(y) = 3 \int_{{\rm Supp}(\phi)} \frac{\phi(x) \, (x_i-y_i) \, (x_j - y_j)}{|x-y|^5} \, dx - \delta_{ij} \int_{{\rm Supp}(\phi)} \frac{\phi(x)}{|x-y|^3} \, dx. $$ Using next the fact that $\mbox{\rm Tr }\Phi = 0$, we get \begin{align*} \langle V,\phi \rangle_{{\cal D}',{\cal D}} = \langle \rho,\psi \rangle_{{\cal E}',C^\infty} & = \frac{1}{3} \int_{S_r} (\Phi : D^2 \psi(y)) \ \sigma(y) \ dy \\ & = \int_{{\rm Supp}(\phi)} \left( \int_{S_r} \frac{(x-y)^T \Phi(x-y)}{|x-y|^5} \, \sigma(y) \, dy \right) \phi(x) \, dx \\ &= \int_{{\rm Supp}(\phi)} \widetilde{V} \, \phi. \end{align*} Therefore $V|_{{\mathbb R}^3 \setminus S_r}=\widetilde{V}$, as claimed above. Furthermore, hypothesis~\eqref{eq:rel2} implies that $V=0$ in ${\mathbb R}^3 \setminus \overline{B_r}$, hence in particular that $V \in {\cal E}'({\mathbb R}^3)$. \noindent {\bf Step 2.} Let us denote by ${\cal H}_l$ the vector space of the homogeneous harmonic polynomials of total degree $l$.
Recall that ${\rm dim}({\cal H}_l)=2l+1$ and that a basis of ${\cal H}_l$ consists of the functions of the form $(r^l Y_{lm}(\theta,\varphi))_{-l \leq m \leq l}$, where $(r,\theta,\varphi)$ are the usual spherical coordinates and $Y_{lm}$ are the real spherical harmonics. Since $V \in {\cal E}'({\mathbb R}^3)$, we have \begin{equation}\label{eq:harmpol} \forall l \in {\mathbb N}, \quad \forall p_l \in {\cal H}_l, \quad \langle \rho, p_l \rangle_{{\cal E}',C^\infty} = - \frac {1}{4\pi} \langle \Delta V,p_l \rangle_{{\cal E}',C^\infty} = - \frac{1}{4\pi} \langle V, \Delta p_l \rangle_{{\cal E}',C^\infty} = 0. \end{equation} We now assume that $\Phi \neq 0$ and we show that $\sigma = 0$. Without loss of generality, we can assume that $\Phi = \mbox{\rm diag}(a_1,a_2,-a_1-a_2)$ with $a_1$ and $a_2$ in ${\mathbb R}_+$ and $a_1a_2 \neq 0$. For any $l \in {\mathbb N}$, consider the map $L_l : {\cal H}_{l+2} \ni p_{l+2} \mapsto L_l \, p_{l+2} = \Phi : D^2p_{l+2} \in {\cal H}_l$. We are going to prove that $L_l$ is surjective. Any $p_{l+2} \in {\cal H}_{l+2}$ is of the form $$ p_{l+2}(x_1,x_2,x_3) = \sum_{k=0}^{l+2} x_3^{l+2-k} q_k(x_1,x_2) $$ where the $q_k$'s are homogeneous polynomials of total degree $k$ on ${\mathbb R}^2$ satisfying \begin{equation} \label{eq:cond1} \forall 0 \le k \le l, \qquad \Delta q_{k+2}+(l+2-k)(l+1-k)q_k=0. \end{equation} If additionally $p_{l+2} \in \mbox{\rm Ker}(L_l)$, then there also holds \begin{equation} \label{eq:cond2} \forall 0 \le k \le l+2, \qquad \lambda \frac{\partial^2q_k}{\partial x_1^2} + \frac{\partial^2q_k}{\partial x_2^2} = 0 \quad \text{with} \quad \lambda = \frac{2a_1+a_2}{a_1+2a_2}. \end{equation} From~\eqref{eq:cond1}, we infer that $p_{l+2}$ is completely determined by $q_{l+1}$ and $q_{l+2}$. From~\eqref{eq:cond2}, we obtain that, for each $0 \le k \le l+2$, $r_k(x_1,x_2) := q_k(\lambda^{1/2}x_1,x_2)$ is a two-dimensional harmonic homogeneous polynomial of order $k$.
Consequently, we have $$ r_k(x_1,x_2) = \alpha_k \, \mbox{\rm Re}\big( (x_1+ix_2)^k \big) + \beta_k \, \mbox{\rm Im}\big( (x_1+ix_2)^k \big) \quad \text{for some $\alpha_k$ and $\beta_k$ in ${\mathbb R}$.} $$ An element of $\mbox{Ker}(L_l)$ is therefore completely determined by $\alpha_{l+1}$, $\beta_{l+1}$, $\alpha_{l+2}$ and $\beta_{l+2}$. Hence, $\mbox{\rm dim}({\mbox{\rm Ker}}(L_l))=4$. It follows that $$ \mbox{\rm Rank}(L_l)=\mbox{\rm dim}({\cal H}_{l+2})-\mbox{\rm dim}(\mbox{\rm Ker}(L_l))=(2(l+2)+1)-4=2l+1=\mbox{\rm dim}({\cal H}_l). $$ Therefore $L_l$ is surjective. For any $l \in {\mathbb N}$ and $q_l \in {\cal H}_l$, there thus exists $p_{l+2} \in {\cal H}_{l+2}$ such that $q_l = L_l p_{l+2}$. We then deduce from~\eqref{eq:defrho} and~\eqref{eq:harmpol} that $$ \int_{S_r} q_l(y) \, \sigma(y) \, dy = \int_{S_r} L_l p_{l+2}(y) \, \sigma(y) \, dy = 3 \langle \rho, p_{l+2} \rangle_{{\cal E}',C^\infty} = 0. $$ Since $(Y_{lm})_{-l \le m \le l}$ is a basis of ${\cal H}_l$, we finally obtain that $$ \forall l \in {\mathbb N}, \quad \forall -l \le m \le l, \quad \int_{{\mathbb S}^2} Y_{lm}(y) \, \sigma(ry) \, dy = 0, $$ where ${\mathbb S}^2$ is the unit sphere of ${\mathbb R}^3$. This implies that $\sigma=0$ and thus concludes the proof of Lemma~\ref{lem:dipole}. \end{proof} We are now in a position to prove Lemma~\ref{lem:lemconcavity}. \begin{proof}[Proof of Lemma~\ref{lem:lemconcavity}] Let ${\mathbb A}\in L^\infty(B, {\cal M})$. We first prove that, for all $p \in {\mathbb R}^d$, the function ${\cal M} \ni A \mapsto {\cal J}_p^{{\mathbb A}}(A)$ is concave. We next prove its strict concavity. The proof falls into three steps. \noindent \textbf{Step 1.} The concavity of ${\cal J}_p^{{\mathbb A}}$ is a straightforward consequence of~\eqref{eq:optim1}--\eqref{eq:optim2}--\eqref{eq:def_curly_J}: ${\cal J}_p^{{\mathbb A}}(A)$ is the minimum of a family of functions that depend on $A$ in an affine way; it is hence concave.
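The mechanism behind this step can be checked on a scalar toy problem (a sketch with a hypothetical energy, not the actual corrector functional): for each fixed $w$, the map $a \mapsto (1+w)^2 + a\,w^2$ is affine, so its pointwise minimum over $w$, here equal to $a/(1+a)$ for $a > 0$, is concave.

```python
def J(a, n=100001):
    """Toy analogue of the functional: pointwise minimum over w of the
    energy (1 + w)^2 + a * w^2, which is affine in a for each fixed w."""
    step = 10.0 / (n - 1)   # grid on [-5, 5], which contains the minimizer
    return min((1.0 + (-5.0 + step * k)) ** 2 + a * (-5.0 + step * k) ** 2
               for k in range(n))

# Closed form of the toy minimum: J(a) = a / (1 + a) for a > 0.
for a in (0.5, 1.0, 3.0):
    assert abs(J(a) - a / (1.0 + a)) < 1e-6

# Midpoint concavity, as in the proof: J((a1 + a2)/2) >= (J(a1) + J(a2))/2.
a1, a2 = 0.5, 4.0
assert J(0.5 * (a1 + a2)) >= 0.5 * (J(a1) + J(a2))
```

The same argument applies verbatim in the matrix-valued setting, where the minimization runs over $w \in V_0$.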
Because it will be useful for the proof of strict concavity, we now proceed more quantitatively. We recall that $w_p^{{\mathbb A}, A}$ is defined by~\eqref{eq:pbbase} or equivalently~\eqref{eq:optim1}. Consider $A_1$ and $A_2$ in ${\cal M}$, $\lambda \in [0,1]$ and $A_\lambda = \lambda A_1 + (1-\lambda) A_2$. We compute that \begin{align*} & |B| \, {\cal J}_p^{{\mathbb A}}(A_\lambda) \\ &= |B| \, J_p^{{\mathbb A},A_\lambda}(w_p^{{\mathbb A}, A_\lambda}) \\ &= \int_B (p + \nabla w_p^{{\mathbb A}, A_\lambda})^T {\mathbb A} (p + \nabla w_p^{{\mathbb A}, A_\lambda}) + \int_{{\mathbb R}^d\setminus B} (\nabla w_p^{{\mathbb A}, A_\lambda})^T A_\lambda \nabla w_p^{{\mathbb A}, A_\lambda} - 2 \int_{\Gamma} (A_\lambda p\cdot n) \, w_p^{{\mathbb A}, A_\lambda} \\ &= \lambda \left( \int_B (p + \nabla w_p^{{\mathbb A}, A_\lambda})^T {\mathbb A} (p + \nabla w_p^{{\mathbb A}, A_\lambda}) + \int_{{\mathbb R}^d\setminus B} (\nabla w_p^{{\mathbb A}, A_\lambda})^T A_1 \nabla w_p^{{\mathbb A}, A_\lambda} - 2 \int_{\Gamma} (A_1 p\cdot n) \, w_p^{{\mathbb A}, A_\lambda} \right) \\ &+ (1-\lambda) \left( \int_B (p + \nabla w_p^{{\mathbb A}, A_\lambda})^T {\mathbb A} (p + \nabla w_p^{{\mathbb A}, A_\lambda}) + \int_{{\mathbb R}^d\setminus B} (\nabla w_p^{{\mathbb A}, A_\lambda})^T A_2 \nabla w_p^{{\mathbb A}, A_\lambda} - 2 \int_{\Gamma} (A_2 p\cdot n) \, w_p^{{\mathbb A}, A_\lambda} \right) \\ &= \lambda \, |B| \, J_p^{{\mathbb A},A_1}(w_p^{{\mathbb A}, A_\lambda}) + (1-\lambda) \, |B| \, J_p^{{\mathbb A},A_2}(w_p^{{\mathbb A}, A_\lambda}). \end{align*} In view of~\eqref{eq:def_curly_J}, we obtain that \begin{equation} \label{eq:sera_utile1} {\cal J}_p^{{\mathbb A}}(A_\lambda) \geq \lambda {\cal J}_p^{{\mathbb A}}(A_1) + (1-\lambda) {\cal J}_p^{{\mathbb A}}(A_2), \end{equation} which means, as already pointed out above, that the function ${\cal M} \ni A \mapsto {\cal J}_p^{{\mathbb A}}(A)$ is concave. 
Furthermore, since the minimizer of $J_p^{{\mathbb A},A}$ is unique for any $A \in {\cal M}$, we get that \begin{equation} \label{eq:sera_utile2} {\cal J}_p^{{\mathbb A}}(A_\lambda) = \lambda {\cal J}_p^{{\mathbb A}}(A_1) + (1-\lambda) {\cal J}_p^{{\mathbb A}}(A_2) \quad \Longrightarrow \quad w_p^{{\mathbb A}, A_\lambda} = w_p^{{\mathbb A}, A_1} = w_p^{{\mathbb A}, A_2}. \end{equation} We now prove the strict concavity of $\dps {\cal J}^{{\mathbb A}} = \sum_{i=1}^d {\cal J}^{{\mathbb A}}_{e_i}$ in the case when $d \leq 3$. To this end, we assume that there exist two matrices $A_1$ and $A_2$ in ${\cal M}$ such that \begin{equation}\label{eq:nostrict} \forall \lambda\in (0,1), \quad \lambda {\cal J}^{{\mathbb A}}(A_1) + (1-\lambda){\cal J}^{{\mathbb A}}(A_2) = {\cal J}^{{\mathbb A}}\big(\lambda A_1 + (1-\lambda) A_2\big), \end{equation} and we are going to show that $A_1 = A_2$. In view of~\eqref{eq:sera_utile1}, the assumption~\eqref{eq:nostrict} implies that, for any $1 \leq i \leq d$, $$ \forall \lambda\in (0,1), \quad \lambda {\cal J}^{{\mathbb A}}_{e_i}(A_1) + (1-\lambda) {\cal J}^{{\mathbb A}}_{e_i}(A_2) = {\cal J}^{{\mathbb A}}_{e_i}\big(\lambda A_1 + (1-\lambda) A_2\big), $$ which implies, in view of~\eqref{eq:sera_utile2}, that $$ \forall \lambda\in (0,1), \quad w_{e_i}^{{\mathbb A}, \lambda A_1+(1-\lambda) A_2} = w_{e_i}^{{\mathbb A}, A_1} = w_{e_i}^{{\mathbb A}, A_2}. $$ For the sake of simplicity, we denote this function by $w_i$ in the rest of the proof. It satisfies $$ -\mbox{\rm div} \left( A_1 (e_i + \nabla w_i) \right) = -\mbox{\rm div}\left( A_2 (e_i + \nabla w_i) \right) = 0 \mbox{ in $\mathcal{D}'\big({\mathbb R}^d \setminus \overline{B}\big)$}. $$ Since $A_1$ and $A_2$ are constant matrices, this implies that, for any $1 \leq i \leq d$, \begin{equation}\label{eq:solwi} -\mbox{\rm div}\left( A_1 \nabla w_i \right) = -\mbox{\rm div}\left( A_2 \nabla w_i \right) = 0 \mbox{ in $\mathcal{D}'\big({\mathbb R}^d \setminus \overline{B}\big)$}.
\end{equation}
Standard elliptic regularity theory implies that $w_i$ is analytic in ${\mathbb R}^d \setminus \overline{B}$ (see e.g.~\cite[Sec.~2.4 p.~18]{gilbarg2001elliptic}).

\noindent \textbf{Step 2.} We now prove that, when $d \leq 3$, equation~\eqref{eq:solwi} implies that
\begin{equation}\label{eq:solwi_bis}
\text{either $A_1$ and $A_2$ are proportional or $w_i$ is a constant function in ${\mathbb R}^d \setminus \overline{B}$}.
\end{equation}
The case $d=1$ is straightforward. We now consider the case $d=2$. Without loss of generality, we can assume that $A_1 = I_2$ and $A_2$ is diagonal (this can be ensured by a linear change of variables: the map $x \mapsto A_1^{-1/2} x$ transforms $A_1$ into $I_2$, and a subsequent rotation diagonalizes the transformed $A_2$). If $A_1$ and $A_2$ are not proportional, it follows from~\eqref{eq:solwi} that
$$
\forall x=(x_1,x_2) \in {\mathbb R}^2 \setminus \overline{B}, \qquad \frac{\partial^2 w_i}{\partial x_1^2}(x_1,x_2) = \frac{\partial^2 w_i}{\partial x_2^2}(x_1,x_2) = 0.
$$
This implies that there exist real numbers $a$, $b$, $c$ and $e$ such that $w_i(x_1,x_2) = a x_1 x_2 + b x_1 + c x_2 + e$ in ${\mathbb R}^2 \setminus \overline{B}$. Since $\nabla w_i \in L^2({\mathbb R}^2 \setminus \overline{B})$, it follows that $a=b=c=0$, which proves the claim~\eqref{eq:solwi_bis} when $d=2$.

We now turn to the case $d=3$, which is more difficult. Let $r>1$ be sufficiently large so that the following two conditions are satisfied:
$$
E:= A_1^{1/2} S_r \subset {\mathbb R}^d \setminus \overline{B} \quad \mbox{and thus also} \quad {\mathbb R}^d \setminus \left( A_1^{1/2} \overline{B_r} \right) \subset {\mathbb R}^d \setminus \overline{B},
$$
where we recall that $S_r$ (respectively $B_r$) is the sphere (respectively open ball) of radius $r$ in ${\mathbb R}^3$.
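As an aside on the $d=2$ case above, one can check numerically that any function of the form $a x_1 x_2 + b x_1 + c x_2 + e$ does solve both constant-coefficient equations when $A_1 = I_2$ and $A_2$ is diagonal; this is why the integrability condition $\nabla w_i \in L^2$ is essential to discard the bilinear term. A small finite-difference sketch (the sample function and evaluation points below are arbitrary choices for illustration):

```python
# Check (illustrative): w(x, y) = 2xy + 3x - y + 1 satisfies
# m1 * w_xx + m2 * w_yy = 0 for ANY diagonal matrix diag(m1, m2),
# since its pure second derivatives vanish identically.
def div_A_grad(w, m1, m2, x, y, h=1e-3):
    # central second differences approximating m1*w_xx + m2*w_yy
    wxx = (w(x + h, y) - 2.0 * w(x, y) + w(x - h, y)) / h**2
    wyy = (w(x, y + h) - 2.0 * w(x, y) + w(x, y - h)) / h**2
    return m1 * wxx + m2 * wyy

w = lambda x, y: 2.0 * x * y + 3.0 * x - y + 1.0
assert abs(div_A_grad(w, 1.0, 1.0, 0.7, -1.3)) < 1e-6  # A_1 = I_2
assert abs(div_A_grad(w, 1.0, 4.0, 0.7, -1.3)) < 1e-6  # A_2 = diag(1, 4), not proportional to A_1
```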
As a consequence of~\eqref{eq:solwi}, there exists a function $\sigma_i \in {\cal C}^\infty(E)$ so that, for all $x \in {\mathbb R}^d \setminus \left( A_1^{1/2} \overline{B_r} \right)$, we have
\begin{equation}
\label{eq:taiwan2}
w_i(x) = C + \int_{E} G_{A_1}(x-e) \, \sigma_i(e) \, de
\end{equation}
where $C$ is a constant and $G_{A_1}$ is the Green function of the operator $-\mbox{\rm div}\left( A_1 \nabla \cdot \right)$, which reads $\dps G_{A_1}(z) = \frac{1}{4 \pi \, \sqrt{\mbox{\rm det}(A_1)}} \frac{1}{\sqrt{z^T (A_1)^{-1} z}}$ for all $z\in {\mathbb R}^3 \setminus \{0\}$. Using the change of variables $y := A_1^{-1/2} e$, we obtain that there exists a constant $c>0$ so that
$$
w_i(x) = C + c \int_{S_r} \frac{1}{\left| A_1^{-1/2}x-y \right|} \ \sigma_i(A_1^{1/2} y) \, dy.
$$
Let us denote $\Psi := A_2 - A_1$. For any $x \in {\mathbb R}^d \setminus \left( A_1^{1/2} \overline{B_r} \right)$, it holds that
\begin{align*}
0& = \mbox{\rm div}_x \left( \Psi \nabla_x w_i(x)\right)
\\
&= c \int_{S_r} \mbox{\rm div}_x\left( \Psi \nabla_x \left[ \frac{1}{\left| A_1^{-1/2}x - y \right|} \right] \right) \, \sigma_i(A_1^{1/2} y) \, dy
\\
& = c \int_{S_r} \mbox{\rm div}_x \left( - \frac{\Psi A_1^{-1/2}\left( A_1^{-1/2}x - y \right)}{\left| A_1^{-1/2}x - y \right|^3} \right) \, \sigma_i(A_1^{1/2} y) \, dy
\\
& = c \int_{S_r} \left[ 3 \, \frac{\left( A_1^{-1/2}x - y\right)^T A_1^{-1/2} \Psi A_1^{-1/2} \left(A_1^{-1/2}x - y\right)}{\left| A_1^{-1/2}x - y \right|^5} - \frac{\mbox{\rm Tr}\left( A_1^{-1/2} \Psi A_1^{-1/2} \right)}{{\left| A_1^{-1/2}x-y \right|^3}}\right] \sigma_i(A_1^{1/2} y) \, dy
\\
& = c \int_{S_r} \left[ \frac{\left(A_1^{-1/2}x -y\right)^T \Phi \left(A_1^{-1/2}x -y\right)}{\left| A_1^{-1/2}x-y \right|^5}\right] \, \sigma_i(A_1^{1/2} y) \, dy,
\end{align*}
where $\Phi := 3 A_1^{-1/2} \Psi A_1^{-1/2} - \mbox{\rm Tr}(A_1^{-1/2} \Psi A_1^{-1/2}) \, I_3$ is a symmetric matrix,
the trace of which vanishes. Since this equality holds true for all $x \in {\mathbb R}^d \setminus \left( A_1^{1/2} \overline{B_r} \right)$, it holds that, for all $\overline{x} \in {\mathbb R}^d \setminus \overline{B_r}$,
$$
0 = \int_{S_r} \left[ \frac{\left(\overline{x} -y\right)^T \Phi \left(\overline{x} -y\right)}{| \overline{x}-y |^5}\right] \widehat{\sigma}_i(y)\,dy,
$$
where for all $y\in S_r$, $\widehat{\sigma}_i(y) = \sigma_i(A_1^{1/2}y)$. Lemma~\ref{lem:dipole} then implies that:
\begin{itemize}
\item either $\widehat{\sigma}_i = 0$, hence $\sigma_i = 0$, which implies, in view of~\eqref{eq:taiwan2}, that $w_i = C$ on ${\mathbb R}^d \setminus \left( A_1^{1/2} \overline{B_r} \right) \subset {\mathbb R}^d \setminus \overline{B}$. Since $w_i$ is analytic in ${\mathbb R}^d \setminus \overline{B}$, we get that $w_i = C$ on ${\mathbb R}^d \setminus \overline{B}$ (this is the unique continuation property for elliptic equations, see e.g.~\cite{protter}).
\item or $\Phi = 0$. Then $\Psi = A_2 - A_1 = \mu A_1$ for some $\mu \in {\mathbb R}$ and thus $A_1$ and $A_2$ are proportional.
\end{itemize}
This proves the claim~\eqref{eq:solwi_bis} when $d=3$.

\noindent {\bf Step 3.} We have shown in Step 2 that, when $d \leq 3$, \eqref{eq:solwi_bis} holds for any $1 \leq i \leq d$. We now successively consider the two cases of~\eqref{eq:solwi_bis}.

\noindent {\bf Step 3a.} We consider the first possibility in~\eqref{eq:solwi_bis} and assume that $A_1$ and $A_2$ are proportional, that is $A_2 = (1 + \mu) A_1$. We proceed by contradiction and assume that $\mu \neq 0$. Since
$$
A_1 (\nabla w_i +p) \cdot n = {\mathbb A} (\nabla w_i +p)\cdot n = A_2 (\nabla w_i +p) \cdot n = (1+\mu) A_1 (\nabla w_i +p) \cdot n \mbox{ on $\Gamma$}
$$
with $\mu \neq 0$, these functions have to be equal to zero on $\Gamma$.
The function $u \in H^1(B)$ defined by $u(x) := w_i(x) + p\cdot x$ for all $x \in B$ is then a solution to
$$
- \mbox{\rm div}\left({\mathbb A} \nabla u\right) = 0 \quad \mbox{ in $B$}, \qquad {\mathbb A} \nabla u \cdot n = 0 \quad \mbox{ on $\Gamma$}.
$$
As a consequence, there exists a constant $C\in {\mathbb R}$ such that $u=C$ in $B$, and $w_i(x) = -p\cdot x + C$ for all $x\in B$. In particular, $\nabla w_i +p = 0$ in $B$. Using the variational formulation~\eqref{eq:FVgen} of the embedded corrector problem with test function $w_i$, we get
$$
\int_{{\mathbb R}^d \setminus B} \left(\nabla w_i\right)^T A_1 \nabla w_i = \int_\Gamma (A_1 p\cdot n) \, w_i,
$$
and
$$
\int_{{\mathbb R}^d \setminus B} \left(\nabla w_i\right)^T A_2 \nabla w_i = \int_\Gamma (A_2 p\cdot n) \, w_i.
$$
In view of~\eqref{eq:optim2}, this implies that
$$
{\cal J}^{{\mathbb A}}_{e_i}(A_1) = - \frac{1}{|B|} \int_{{\mathbb R}^d \setminus B} \left(\nabla w_i\right)^T A_1 \nabla w_i
$$
and
$$
{\cal J}^{{\mathbb A}}_{e_i}(A_2) = - \frac{1}{|B|} \int_{{\mathbb R}^d \setminus B} \left(\nabla w_i\right)^T A_2 \nabla w_i = (1+\mu) {\cal J}^{{\mathbb A}}_{e_i}(A_1).
$$
Since $\mu \neq 0$, we obtain that ${\cal J}^{{\mathbb A}}_{e_i}(A_2) = {\cal J}^{{\mathbb A}}_{e_i}(A_1) = 0$, which yields that $\nabla w_i = 0$ in ${\mathbb R}^d \setminus B$. As a consequence, there exists $\widetilde{C} \in {\mathbb R}$ such that $w_i(x) = \widetilde{C}$ for all $x\in {\mathbb R}^d \setminus B$. The continuity of $w_i$ on $\Gamma$ implies that
$$
\forall x\in \Gamma, \quad C - p\cdot x = \widetilde{C},
$$
which is impossible since $p = e_i \neq 0$. This is the desired contradiction: we have thus shown that, if $A_1$ and $A_2$ are proportional, then $A_2 = A_1$.

\noindent {\bf Step 3b.} We next assume that $A_1$ and $A_2$ are not proportional. Then, in view of~\eqref{eq:solwi_bis}, we know that, for any $1 \leq i \leq d$, $w_i$ is a constant function in ${\mathbb R}^d \setminus \overline{B}$, hence $\nabla w_i = 0$ in ${\mathbb R}^d \setminus \overline{B}$.
The function $w_i$ satisfies~\eqref{eq:pbbase} for the tensor $\mathcal{A}^{{\mathbb A},A_1}$, which implies that
$$
\left. n^T {\mathbb A} (e_i + \nabla w_i) \right|_{\Gamma_-} = \left. n^T A_1 (e_i + \nabla w_i) \right|_{\Gamma_+} = n^T A_1 e_i \quad \text{on $\Gamma$}.
$$
Since $w_i$ also satisfies~\eqref{eq:pbbase} for the tensor $\mathcal{A}^{{\mathbb A},A_2}$, we have
$$
\left. n^T {\mathbb A} (e_i + \nabla w_i) \right|_{\Gamma_-} = n^T A_2 e_i \quad \text{on $\Gamma$}.
$$
We hence deduce that $n^T A_1 e_i = n^T A_2 e_i$ on $\Gamma$. Since the normal vector $n$ spans all unit directions as $x$ varies on $\Gamma$, this yields $A_1 e_i = A_2 e_i$. This holds for any $1 \leq i \leq d$, thus $A_1 = A_2$. This concludes the proof of Lemma~\ref{lem:lemconcavity}.
\end{proof}

\subsection{Proof of Proposition~\ref{prop:prop1}}
\label{sec:proofprop1}

\noindent
{\bf Step 1: $A_1^R$ converges to $A^\star$.}
Since $A_1^R \in {\cal M}$, all its coefficients are bounded. Up to the extraction of a subsequence (which we still denote by $(A_1^R)_{R>0}$ for the sake of simplicity), we know that there exists a matrix $A^\infty_1 \in {\cal M}$ such that $\dps \lim_{R \to \infty} A^R_1 = A^\infty_1$. We now prove that $A^\infty_1 = A^\star$, which implies the convergence of the whole sequence $(A^R_1)_{R>0}$ to $A^\star$.

Let $p\in{\mathbb R}^d$. It follows from Lemma~\ref{lem:lem1} that $w_p^{{\mathbb A}^R, A^R_1}$ weakly converges in $H^1_{\rm loc}({\mathbb R}^d)$ to $w_p^{A^\star, A^\infty_1}$. In addition,
\begin{equation}
\label{eq:lim}
\mathcal{A}^{{\mathbb A}^R,A^R_1} (p + \nabla w_p^{{\mathbb A}^R, A^R_1}) \rightharpoonup \left(A^\star \chi_B + A^\infty_1 (1-\chi_B) \right) \left(p + \nabla w_p^{A^\star, A^\infty_1}\right) \mbox{ weakly in $L^2_{\rm loc}({\mathbb R}^d)$}.
\end{equation}
To prove that $A^\infty_1 = A^\star$, we consider a second family of functions of $V_0$, namely $\left(w_p^{{\mathbb A}^R, A^\star}\right)_{R>0}$.
Recall that, for all $R>0$, $w_p^{{\mathbb A}^R, A^\star}$ is the unique solution in $V_0$ to
$$
-\mbox{{\rm div}} \left( \mathcal{A}^{{\mathbb A}^R,A^\star} \left( p + \nabla w_p^{{\mathbb A}^R, A^\star} \right)\right) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}.
$$
Using Lemma~\ref{lem:lem1} again, we obtain that $\left(w_p^{{\mathbb A}^R, A^\star}\right)_{R>0}$ weakly converges in $H^1_{\rm loc}({\mathbb R}^d)$ to $w_p^{A^\star, A^\star} = 0$. Furthermore, we have
\begin{equation}
\label{eq:limchek}
\mathcal{A}^{{\mathbb A}^R,A^\star} \left( p + \nabla w_p^{{\mathbb A}^R, A^\star} \right) \rightharpoonup A^\star \left( p + \nabla w_p^{A^\star, A^\star} \right) = A^\star p \mbox{ weakly in $L^2_{\rm loc}({\mathbb R}^d)$}.
\end{equation}
Since $A^R_1$ is (the unique) solution to~\eqref{eq:optimisation}, we have
$$
\sum_{i=1}^d {\cal J}^{{\mathbb A}^R}_{e_i}(A^\star) \leq \sum_{i=1}^d {\cal J}^{{\mathbb A}^R}_{e_i}(A^R_1),
$$
which reads, using~\eqref{eq:expJ2}, as
\begin{multline}
\label{eq:genial}
\sum_{i=1}^d \int_B e_i^T {\mathbb A}^R \left( e_i + \nabla w^{{\mathbb A}^R, A^\star}_{e_i} \right) - \int_{\Gamma} (A^\star e_i \cdot n) \, w^{{\mathbb A}^R, A^\star}_{e_i}
\\
\leq \sum_{i=1}^d \int_B e_i^T {\mathbb A}^R \left( e_i + \nabla w_{e_i}^{{\mathbb A}^R, A^R_1} \right) - \int_{\Gamma} ( A_1^R e_i \cdot n) \, w_{e_i}^{{\mathbb A}^R, A^R_1}.
\end{multline}
We wish to pass to the limit $R \to \infty$ in this inequality.
Using~\eqref{eq:lim} and~\eqref{eq:limchek}, we first have, for any $p \in {\mathbb R}^d$,
\begin{equation}
\label{eq:exc1}
\int_B p^T {\mathbb A}^R \left( p + \nabla w_p^{{\mathbb A}^R, A^R_1} \right) \mathop{\longrightarrow}_{R\to +\infty} \int_B p^T A^\star \left( p + \nabla w_p^{A^\star, A^\infty_1} \right)
\end{equation}
and
\begin{equation}
\label{eq:exc2}
\int_B p^T {\mathbb A}^R \left( p + \nabla w_p^{{\mathbb A}^R, A^\star} \right) \mathop{\longrightarrow}_{R\to +\infty} \int_B p^T A^\star p.
\end{equation}
Second, we know that $\widetilde{w}_p^{{\mathbb A}^R, A^\star}$ (respectively $\widetilde{w}_p^{{\mathbb A}^R, A^R_1}$) weakly converges in $H^1(B)$ to $\widetilde{w}_p^{A^\star, A^\star} = 0$ (respectively to $\widetilde{w}_p^{A^\star, A^\infty_1}$). The compactness of the trace operator from $H^1(B)$ to $L^2(\Gamma)$ yields that these convergences also hold strongly in $L^2(\Gamma)$. Thus,
\begin{equation}\label{eq:exc3}
\int_\Gamma (A^\star p \cdot n) \, w_p^{{\mathbb A}^R, A^\star} \mathop{\longrightarrow}_{R\to +\infty} 0
\end{equation}
and
\begin{equation}\label{eq:exc4}
\int_\Gamma (A_1^R p \cdot n) \, w_p^{{\mathbb A}^R, A^R_1} \mathop{\longrightarrow}_{R\to +\infty} \int_\Gamma (A^\infty_1 p \cdot n) \, w_p^{A^\star, A^\infty_1}.
\end{equation}
Collecting~\eqref{eq:exc1}, \eqref{eq:exc2}, \eqref{eq:exc3} and~\eqref{eq:exc4}, we are in position to pass to the limit $R \to \infty$ in~\eqref{eq:genial}, and deduce that
\begin{equation}
\label{eq:ineg}
\sum_{i=1}^d \int_B e_i^T A^\star e_i \leq \sum_{i=1}^d \int_B e_i^T A^\star \left( e_i + \nabla w_{e_i}^{A^\star, A^\infty_1} \right) - \int_\Gamma (A_1^\infty e_i \cdot n) \, w_{e_i}^{A^\star, A^\infty_1}.
\end{equation}
In view of~\eqref{eq:rel1}, we have that, for all $p\in{\mathbb R}^d$,
\begin{multline*}
\int_B p^T A^\star \nabla w_p^{A^\star, A^\infty_1} - \int_\Gamma (A_1^\infty p \cdot n) \, w_p^{A^\star, A^\infty_1}
\\
= - \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_1} \right)^T A^\infty_1 \nabla w_p^{A^\star, A^\infty_1} - \int_B \left( \nabla w_p^{A^\star, A^\infty_1} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_1},
\end{multline*}
which implies that
\begin{multline*}
\int_B p^T A^\star \left( p + \nabla w_p^{A^\star, A^\infty_1} \right) - \int_\Gamma (A_1^\infty p \cdot n) \, w_p^{A^\star, A^\infty_1}
\\
= \int_B p^T A^\star p - \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_1} \right)^T A^\infty_1 \nabla w_p^{A^\star, A^\infty_1} - \int_B \left( \nabla w_p^{A^\star, A^\infty_1} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_1}.
\end{multline*}
Thus, \eqref{eq:ineg} yields that
$$
0 \leq - \sum_{i=1}^d \left[ \int_B \left( \nabla w_{e_i}^{A^\star, A^\infty_1} \right)^T A^\star \nabla w_{e_i}^{A^\star, A^\infty_1} + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_{e_i}^{A^\star, A^\infty_1} \right)^T A^\infty_1 \nabla w_{e_i}^{A^\star, A^\infty_1}\right],
$$
which implies that $\nabla w_{e_i}^{A^\star, A^\infty_1} = 0$ on ${\mathbb R}^d$ for all $1\leq i \leq d$. As a consequence, for all $1\leq i \leq d$,
$$
- \mbox{\rm div}\left[ \left(A^\star \chi_B + A_1^\infty(1-\chi_B)\right) e_i\right] = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}.
$$
In view of Lemma~\ref{lem:tech}, this implies that $A^\infty_1 = A^\star$ and concludes the proof of the first assertion of Proposition~\ref{prop:prop1}.
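The last step relies on Lemma~\ref{lem:tech}: a piecewise-constant field such as $\left(A^\star \chi_B + A_1^\infty(1-\chi_B)\right) e_i$ can only be divergence-free in $\mathcal{D}'({\mathbb R}^d)$ if its normal component does not jump across $\Gamma$. A hypothetical one-dimensional analogue of this fact can be tested numerically through the distributional pairing with a smooth test function:

```python
import math

# 1D analogue (illustrative): F = a_in on (-1, 1), a_out outside.
# For a smooth, fast-decaying test function phi,
#   <F', phi> = -<F, phi'> = -(a_in - a_out) * (phi(1) - phi(-1)),
# so F' = 0 in D'(R) forces a_in = a_out whenever phi(1) != phi(-1).
def pairing(a_in, a_out, phi_prime, lo=-8.0, hi=8.0, n=160000):
    # midpoint-rule approximation of <F, phi'>
    h = (hi - lo) / n
    s = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        s += (a_in if abs(x) < 1.0 else a_out) * phi_prime(x)
    return s * h

phi = lambda x: math.exp(-(x - 1.0) ** 2)   # deliberately not symmetric about 0
phip = lambda x: -2.0 * (x - 1.0) * math.exp(-(x - 1.0) ** 2)

a_in, a_out = 2.0, 5.0
expected = (a_in - a_out) * (phi(1.0) - phi(-1.0))
assert abs(pairing(a_in, a_out, phip) - expected) < 1e-3
# with no jump, the pairing vanishes: F is divergence-free
assert abs(pairing(3.0, 3.0, phip)) < 1e-6
```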
\noindent
{\bf Step 2: $A_2^R$ converges to $A^\star$.}
Recall that $A^R_2$ is defined, following~\eqref{eq:def2}, by $p^T A^R_2 p = {\cal J}^{{\mathbb A}^R}_p(A_1^R)$. Using~\eqref{eq:expJ2} and the above arguments, we see that
$$
\lim_{R \to \infty} {\cal J}^{{\mathbb A}^R}_p(A_1^R) = \frac{1}{|B|} \int_B p^T A^\star \left( p + \nabla w_p^{A^\star, A^\infty_1} \right) - \frac{1}{|B|} \int_\Gamma (A^\infty_1 p \cdot n) \, w_p^{A^\star, A^\infty_1}.
$$
Since $w_p^{A^\star, A^\infty_1} = w_p^{A^\star, A^\star} = 0$, we get that $\dps \lim_{R \to \infty} {\cal J}^{{\mathbb A}^R}_p(A_1^R) = p^T A^\star p$. For any $p \in {\mathbb R}^d$, we thus have $\dps \lim_{R \to \infty} p^T A^R_2 p = p^T A^\star p$, hence $\dps \lim_{R \to \infty} A^R_2 = A^\star$. This concludes the proof of the second assertion of Proposition~\ref{prop:prop1}.

\subsection{Proof of Proposition~\ref{prop:prop2}}
\label{sec:proofprop2}

Since $A^R_3\in {\cal M}$, all its coefficients are bounded. Hence, up to the extraction of a subsequence (that we still denote by $\left( A^R_3\right)_{R>0}$ to simplify the notation), there exists a matrix $A^\infty_3 \in {\cal M}$ such that $\dps A^R_3 \mathop{\longrightarrow}_{R\to +\infty} A^\infty_3$. We show that $A^\infty_3 = A^\star$.

Let $p\in{\mathbb R}^d$. Recall that, for all $R>0$, $w_p^{{\mathbb A}^R, A^R_3}$ is the unique solution in $V_0$ to
$$
-\mbox{{\rm div}}\left( \mathcal{A}^{{\mathbb A}^R,A^R_3} \left(p + \nabla w_p^{{\mathbb A}^R, A^R_3}\right)\right) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}.
$$
Using Lemma~\ref{lem:lem1}, we have that $\left(w_p^{{\mathbb A}^R, A^R_3}\right)_{R>0}$ weakly converges in $H^1_{\rm loc}({\mathbb R}^d)$ to $w_p^{A^\star, A^\infty_3}$, which is the unique solution in $V_0$ to
\begin{equation}
\label{eq:lim2}
-\mbox{{\rm div}}\Big( \left(A^\star \chi_B + A^\infty_3(1-\chi_B) \right) \left(p + \nabla w_p^{A^\star, A^\infty_3} \right)\Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}.
\end{equation}
Using~\eqref{eq:defGA} and~\eqref{eq:expJ2}, we see that
$$
p^T G^{{\mathbb A}^R}(A^R_3) p = {\cal J}^{{\mathbb A}^R}_p(A^R_3) = \frac{1}{|B|} \int_B p^T {\mathbb A}^R \left( p + \nabla w_p^{{\mathbb A}^R, A^R_3} \right) - \frac{1}{|B|} \int_\Gamma \left( A^R_3 p \cdot n \right) \, w_p^{{\mathbb A}^R, A^R_3}.
$$
Using Lemma~\ref{lem:lem1} and arguing as in the proof of Proposition~\ref{prop:prop1}, we deduce that
$$
\lim_{R \to \infty} p^T G^{{\mathbb A}^R}(A^R_3) p = \frac{1}{|B|} \int_B p^T A^\star \left( p + \nabla w_p^{A^\star, A^\infty_3} \right) - \frac{1}{|B|} \int_\Gamma \left( A^\infty_3 p\cdot n \right) \, w_p^{A^\star, A^\infty_3}.
$$
Passing to the limit $R \to \infty$ in~\eqref{eq:def3}, we hence get that
\begin{equation}
\label{eq:genial2}
p^T A^\infty_3 p = \frac{1}{|B|} \int_B p^T A^\star \left( p + \nabla w_p^{A^\star, A^\infty_3} \right) - \frac{1}{|B|} \int_\Gamma \left( A^\infty_3 p \cdot n \right) \, w_p^{A^\star, A^\infty_3}.
\end{equation}
Using the relation~\eqref{eq:rel1} for the problem~\eqref{eq:lim2}, we have that
\begin{multline*}
\int_B \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_3} + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \nabla w_p^{A^\star, A^\infty_3}
\\
= - \int_B p^T A^\star \nabla w_p^{A^\star, A^\infty_3} + \int_\Gamma \left( A^\infty_3 p \cdot n \right) \, w_p^{A^\star, A^\infty_3}.
\end{multline*}
We thus deduce from~\eqref{eq:genial2} that
\begin{equation}
\label{eq:relation}
p^T A^\infty_3 p = p^T A^\star p - \frac{1}{|B|} \int_B \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_3} - \frac{1}{|B|} \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \nabla w_p^{A^\star, A^\infty_3}.
\end{equation}
This implies that
\begin{equation}
\label{eq:genial3}
\text{$A^\infty_3 \leq A^\star$ in the sense of symmetric matrices.}
\end{equation}
In addition, we infer from~\eqref{eq:relation} that
\begin{align}
& \int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star p \nonumber
\\
&= |B| \, p^T A^\star p + \int_B p^T A^\star \nabla w_p^{A^\star, A^\infty_3} \nonumber
\\
&= |B| \, p^T A_3^\infty p + \int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_3} + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \nabla w_p^{A^\star, A^\infty_3}.
\label{eq:genial4}
\end{align}
The variational formulation of~\eqref{eq:lim2}, tested with the test function $w_p^{A^\star, A^\infty_3}$, yields
$$
0 = \int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_3} - \int_\Gamma \left( A^\infty_3 p \cdot n \right) \, w_p^{A^\star, A^\infty_3} + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \nabla w_p^{A^\star, A^\infty_3}.
$$
Subtracting twice the above relation from~\eqref{eq:genial4}, we get
\begin{multline*}
\int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star p = |B| \, p^T A_3^\infty p - \int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star \nabla w_p^{A^\star, A^\infty_3}
\\
- \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \nabla w_p^{A^\star, A^\infty_3} + 2 \int_\Gamma \left( A^\infty_3 p \cdot n \right) \, w_p^{A^\star, A^\infty_3},
\end{multline*}
which we recast as
\begin{eqnarray}
0 &=& \int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\star \left( p + \nabla w_p^{A^\star, A^\infty_3} \right) - 2 \int_\Gamma \left( A^\infty_3 p \cdot n \right) \, w_p^{A^\star, A^\infty_3} \nonumber
\\
& & \qquad + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \nabla w_p^{A^\star, A^\infty_3} - |B| \, p^T A^\infty_3 p \nonumber
\\
& \geq & \int_B \left( p + \nabla w_p^{A^\star, A^\infty_3} \right)^T A^\infty_3 \left( p + \nabla w_p^{A^\star, A^\infty_3} \right) - 2 \int_\Gamma \left( A^\infty_3 p \cdot n \right) \, w_p^{A^\star, A^\infty_3} \nonumber
\\
& & \qquad + \int_{{\mathbb R}^d \setminus B} \left( \nabla w_p^{A^\star, A^\infty_3} \right)^T
A^\infty_3 \nabla w_p^{A^\star, A^\infty_3} - |B| \, p^T A^\infty_3 p,
\label{eq:genial6}
\end{eqnarray}
where we have used~\eqref{eq:genial3} in the last inequality. We now define, for any $v\in V_0$,
$$
{\cal I}(v) := \frac{1}{2}\int_B (p + \nabla v)^T A^\infty_3 (p + \nabla v) - \int_\Gamma (A^\infty_3 p \cdot n) \, v + \frac{1}{2} \int_{{\mathbb R}^d \setminus B} (\nabla v)^T A^\infty_3 \nabla v - \frac{1}{2} |B| \, p^T A^\infty_3 p.
$$
The unique solution $v_0 \in V_0$ to the minimization problem
$$
v_0 = \mathop{\mbox{argmin}}_{v\in V_0} {\cal I}(v)
$$
satisfies
$$
-\mbox{{\rm div}} \Big( A^\infty_3 (p + \nabla v_0) \Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$},
$$
and therefore is simply $v_0 = 0$. Thus,
\begin{equation}
\label{eq:genial5}
\forall v \in V_0, \quad {\cal I}(v) \geq {\cal I}(v_0) = 0.
\end{equation}
We recast~\eqref{eq:genial6} as
$$
0 \geq 2 \, {\cal I}(w_p^{A^\star, A^\infty_3}).
$$
Together with~\eqref{eq:genial5}, the above inequality implies that $w_p^{A^\star, A^\infty_3}$ is the unique minimizer of ${\cal I}$ on $V_0$, hence $w_p^{A^\star, A^\infty_3} = 0$. This result holds for all $p\in{\mathbb R}^d$. In view of~\eqref{eq:lim2} and Lemma~\ref{lem:tech}, we thus obtain that $A^\infty_3 = A^\star$. This concludes the proof of Proposition~\ref{prop:prop2}.

\subsection{Proof of Proposition~\ref{prop:prop3}}
\label{sec:proofprop3}

\noindent \textbf{Step 1: Proof of~\eqref{eq:spher}.}
For all $R>0$, ${\mathbb A}^R \in L^\infty(B; {\cal M})$, hence, for any $v \in V_0$, we have
$$
J_p^{\alpha I_d,A}(v) \leq J_p^{{\mathbb A}^R,A}(v) \leq J_p^{\beta I_d,A}(v)
$$
and therefore
$$
p^T G^{\alpha I_d}(A) p \leq p^T G^{{\mathbb A}^R}(A) p \leq p^T G^{\beta I_d}(A) p.
$$
Thus, for any $A \in {\cal M}$, we have
\begin{equation}
\label{eq:h1}
\mbox{Tr}\left( G^{\alpha I_d}(A)\right) \leq \mbox{Tr}\left( G^{{\mathbb A}^R}(A)\right) \leq \mbox{Tr}\left( G^{\beta I_d}(A)\right).
\end{equation}
For any $\gamma \in [\alpha,\beta]$, we introduce $\dps f_{{\mathbb A}^R}(\gamma) = \frac{1}{d}\mbox{Tr}\left( G^{{\mathbb A}^R}(\gamma I_d)\right) - \gamma$. Satisfying~\eqref{eq:spher} amounts to finding $a^R_3 \in [\alpha,\beta]$ such that $f_{{\mathbb A}^R}(a^R_3)=0$. Introducing $\dps f_\alpha(\gamma) = \frac{1}{d}\mbox{Tr}\left( G^{\alpha I_d}(\gamma I_d)\right) - \gamma$ and likewise for $f_\beta(\gamma)$, we deduce from~\eqref{eq:h1} that
\begin{equation}
\label{eq:h2}
\forall \gamma \in [\alpha,\beta], \quad f_\alpha(\gamma) \leq f_{{\mathbb A}^R}(\gamma) \leq f_\beta(\gamma).
\end{equation}
To proceed, we note that we have an explicit expression for $f_\alpha(\gamma)$, using the explicit solution to Eshelby's problem~\cite{Eshelby}. Indeed, for any $1\leq i \leq d$ and any $\alpha,\gamma>0$, the solution $w_{e_i}^{\alpha I_d,\gamma I_d}$ to~\eqref{eq:pbbase} with $A = \gamma I_d$ and ${\mathbb A}(x) = \alpha I_d$ on ${\mathbb R}^d$ is given by
$$
w_{e_i}^{\alpha I_d,\gamma I_d}(x) =
\left\{
\begin{array}{l}
C(\alpha, \gamma) \, x_i \mbox{ if $|x|\leq 1$},
\\
\noalign{\vskip 5pt}
\dps C(\alpha, \gamma) \, \frac{x_i}{|x|^d} \mbox{ if $|x|\geq 1$},
\end{array}
\right.
\ \text{with} \ C(\alpha, \gamma) = \frac{\gamma - \alpha}{(d-1)\gamma + \alpha}.
$$
With~\eqref{eq:defGA}, \eqref{eq:expJ2} and the above expression, we easily obtain that
$$
\mbox{Tr} \left( G^{\alpha I_d}(\gamma I_d) \right) = \sum_{i=1}^d {\cal J}_{e_i}^{\alpha I_d}(\gamma I_d) = d \, \left( \alpha + (\alpha-\gamma) C(\alpha, \gamma) \right),
$$
hence
$$
\frac{1}{d}\mbox{Tr}\left( G^{\alpha I_d}(\gamma I_d) \right) = \alpha + (\alpha-\gamma) C(\alpha, \gamma) ,
$$
and thus
$$
f_\alpha(\gamma) = (\alpha-\gamma) \Big( C(\alpha, \gamma) + 1 \Big) = (\alpha-\gamma) \frac{d\gamma}{(d-1)\gamma+\alpha}.
$$
We see that, when $\gamma \in [\alpha,\beta]$, we have $f_\alpha(\gamma) \leq 0$ and the equation $f_\alpha(\gamma) = 0$ has a unique solution, $\gamma=\alpha$.
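The explicit formulas above lend themselves to direct verification: the constant $C(\alpha,\gamma)$ is precisely the one matching the normal fluxes across the unit sphere, the exterior profile $x_i/|x|^d$ is harmonic away from the origin, and $f_\alpha$ has the stated closed form. A short Python sketch performing these checks for $d=3$ (the sample values of $\alpha$, $\gamma$ and the evaluation point are arbitrary):

```python
# Verification sketch for d = 3 of the explicit Eshelby-type solution above.
d = 3
C = lambda a, g: (g - a) / ((d - 1) * g + a)

for a, g in [(1.0, 2.0), (0.3, 5.0), (4.0, 4.0)]:
    c = C(a, g)
    # flux continuity across |x| = 1: alpha*(1+C) = gamma*(1-(d-1)*C)
    assert abs(a * (1 + c) - g * (1 - (d - 1) * c)) < 1e-12
    # closed form f_alpha(gamma) = (alpha-gamma)*d*gamma/((d-1)*gamma+alpha)
    f_direct = a + (a - g) * c - g
    f_closed = (a - g) * d * g / ((d - 1) * g + a)
    assert abs(f_direct - f_closed) < 1e-12

# exterior profile x_1/|x|^3 is harmonic away from the origin (finite differences)
def u(x, y, z):
    return x / (x * x + y * y + z * z) ** 1.5

h, p = 1e-3, (0.9, -0.4, 1.1)
lap = 0.0
for ei in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
    plus = tuple(q + h * e for q, e in zip(p, ei))
    minus = tuple(q - h * e for q, e in zip(p, ei))
    lap += (u(*plus) - 2.0 * u(*p) + u(*minus)) / h**2
assert abs(lap) < 1e-5
```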
Likewise, when $\gamma \in [\alpha,\beta]$, we have $f_\beta(\gamma) \geq 0$ and the equation $f_\beta(\gamma) = 0$ has a unique solution, $\gamma=\beta$. Since $f_{{\mathbb A}^R}$ is continuous with, in view of~\eqref{eq:h2}, $f_{{\mathbb A}^R}(\alpha) \geq f_\alpha(\alpha) = 0$ and $f_{{\mathbb A}^R}(\beta) \leq f_\beta(\beta) = 0$, the intermediate value theorem implies that there exists $a^R_3 \in [\alpha,\beta]$ such that $f_{{\mathbb A}^R}(a^R_3)=0$. This proves~\eqref{eq:spher}. Besides, in the case when $d \leq 3$, Lemma~\ref{lem:lemconcavity} implies that, for any $R>0$, $f_{{\mathbb A}^R}$ is strictly concave. This yields the uniqueness of $a^R_3$ when $d \leq 3$.

\noindent \textbf{Step 2: Proof of~\eqref{eq:spher2}.}
We follow the same arguments as in the beginning of the proof of Proposition~\ref{prop:prop2}. Since $a^R_3 \in [\alpha,\beta]$, we know that, up to the extraction of a subsequence (that we still denote by $\left( a^R_3\right)_{R>0}$ to simplify the notation), there exists $a^\infty_3 \in [\alpha, \beta]$ such that $\dps a^R_3 \mathop{\longrightarrow}_{R\to +\infty} a^\infty_3$. Passing to the limit $R \to \infty$ in~\eqref{eq:spher}, we get that
\begin{equation}
\label{eq:genial2_bis}
d \, a^\infty_3 = \sum_{i=1}^d \frac{a^\star}{|B|} \int_B e_i^T \left( e_i + \nabla w_{e_i}^{a^\star I_d, a^\infty_3 I_d} \right) - \frac{a^\infty_3}{|B|} \int_\Gamma (e_i \cdot n) \, w_{e_i}^{a^\star I_d, a^\infty_3 I_d},
\end{equation}
where, for any $p \in {\mathbb R}^d$, $w_p^{a^\star I_d, a^\infty_3 I_d}$ is the unique solution in $V_0$ to
\begin{equation}
\label{eq:lim2_bis}
-\mbox{{\rm div}}\Big( \left(a^\star \chi_B + a^\infty_3( 1- \chi_B) \right) \left(p + \nabla w_p^{a^\star I_d, a^\infty_3 I_d} \right)\Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}.
\end{equation}
Using the relation~\eqref{eq:rel1} for problem~\eqref{eq:lim2_bis}, we have that
\begin{multline*}
a^\star \int_B \left| \nabla w_p^{a^\star I_d, a^\infty_3 I_d} \right|^2 + a^\infty_3 \int_{{\mathbb R}^d \setminus B} \left| \nabla w_p^{a^\star I_d, a^\infty_3 I_d} \right|^2
\\
= - a^\star \int_B p^T \nabla w_p^{a^\star I_d, a^\infty_3 I_d} + a^\infty_3 \int_\Gamma (p \cdot n) \, w_p^{a^\star I_d, a^\infty_3 I_d}.
\end{multline*}
We thus deduce from~\eqref{eq:genial2_bis} that
$$
d \, a^\infty_3 = d \, a^\star - \frac{1}{|B|} \sum_{i=1}^d \left( a^\star \int_B \left| \nabla w_{e_i}^{a^\star I_d, a^\infty_3 I_d} \right|^2 + a^\infty_3 \int_{{\mathbb R}^d \setminus B} \left| \nabla w_{e_i}^{a^\star I_d, a^\infty_3 I_d} \right|^2 \right).
$$
This implies that
$$
a^\infty_3 \leq a^\star.
$$
The remainder of the proof follows the same lines as the proof of Proposition~\ref{prop:prop2}.

\section{Two special cases}
\label{sec:justif}

In this section, we consider two special cases: the one-dimensional case (in Section~\ref{sec:1D}) and the case of a homogeneous material (in Section~\ref{sec:homogene}). In the first case, we show that our three definitions yield the same approximation of $A^\star$ as the standard method based on~\eqref{eq:correctorrandom-N}. In the second case, we show that our three definitions recover the coefficient of the homogeneous material.

\subsection{The one-dimensional case}
\label{sec:1D}

If $d=1$, then the solution to~\eqref{eq:pbbase} can be analytically computed. It satisfies
$$
\frac{d w^{{\mathbb A},A}}{dx} = 0 \quad \text{on ${\mathbb R} \setminus B$}, \qquad \frac{d w^{{\mathbb A},A}}{dx} = \frac{A}{{\mathbb A}(x)}-1 \quad \text{on $B$}.
$$
We then get from~\eqref{eq:expJ3} that
\begin{eqnarray}
\nonumber
|B| \, {\cal J}^{{\mathbb A}^R}(A) &=& \int_B {\mathbb A}^R - \int_B {\mathbb A}^R \left( \frac{d w^{{\mathbb A}^R,A}}{dx} \right)^2 - \int_{{\mathbb R} \setminus B} A \left( \frac{d w^{{\mathbb A}^R,A}}{dx} \right)^2
\\
\nonumber
&=& \int_B {\mathbb A}^R - \left[ \frac{A^2 \, |B|}{A^\star_R} - 2 A \, |B| + \int_B {\mathbb A}^R \right]
\\
&=& |B| \left( 2A - \frac{A^2}{A^\star_R} \right),
\label{eq:titi}
\end{eqnarray}
where we have introduced $\dps (A^\star_R)^{-1} := \frac{1}{|B|} \int_B ({\mathbb A}^R)^{-1}$, namely the harmonic mean of ${\mathbb A}^R$ on $B$. The definitions~\eqref{eq:optimisation}, \eqref{eq:def2} and~\eqref{eq:def3} all yield
$$
A^R_1 = A^R_2 = A^R_3 = A^\star_R.
$$
We point out that, in this one-dimensional case, the approximate coefficient $A^{\star}_R$ is identical to the effective coefficient $A^{\star,R}$ defined by~\eqref{eq:defper} (i.e. considering a {\em truncated} corrector problem supplied with periodic boundary conditions). Thus, in this context, we can see that our alternative definitions of effective coefficients are consistent with the standard one.

\subsection{The case of a homogeneous material}
\label{sec:homogene}

We assume here that
\begin{equation}
\label{eq:hyp_homog}
\text{for all $R>0$, \ \ ${\mathbb A}^R = {\mathbb A}$ is constant and equal to some matrix $\overline{A}\in{\cal M}$.}
\end{equation}
We show below that $\overline{A}$ is the unique maximizer of $\dps A \mapsto \sum_{i=1}^d {\cal J}_{e_i}^{{\mathbb A}}(A)$, and hence that Definition~\eqref{eq:optimisation} yields $A^R_1 = \overline{A}$ for all $R>0$. We next show that Definition~\eqref{eq:def2} yields $A^R_2 = \overline{A}$. We finally show that $A^R_3 = \overline{A}$ satisfies~\eqref{eq:def3}, and that $G^{{\mathbb A}}$ has a unique fixed point.
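Before detailing these three verifications, note that the one-dimensional expression ${\cal J}^{{\mathbb A}^R}(A) = 2A - A^2/A^\star_R$ obtained in Section~\ref{sec:1D} makes the fixed-point definition~\eqref{eq:def3} completely explicit: the associated iteration $A \mapsto 2A - A^2/A^\star_R$ coincides with Newton's method for the equation $1/A = 1/A^\star_R$, and converges to the unique positive fixed point $A^\star_R$ from any starting point in $(0, 2A^\star_R)$. A toy numeric sketch (illustrative only, with arbitrary values):

```python
# Toy 1D illustration of the fixed-point definition: with
# J(A) = 2A - A**2/Astar (the 1D expression derived in the text),
# the map G(A) = J(A) has Astar as its unique positive fixed point,
# and A_{n+1} = G(A_n) converges to it quadratically
# (it is Newton's method for the reciprocal equation 1/A = 1/Astar).
def fixed_point(Astar, A0, n_iter=60):
    A = A0
    for _ in range(n_iter):
        A = 2.0 * A - A * A / Astar
    return A

Astar = 1.7
for A0 in (0.2, 1.0, 3.0):          # any start in (0, 2*Astar) works
    assert abs(fixed_point(Astar, A0) - Astar) < 1e-12
```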
\subsubsection{Definition~\eqref{eq:optimisation}}

From~\eqref{eq:expJ3} and our assumption~\eqref{eq:hyp_homog}, we see that, for any $A \in {\cal M}$,
\begin{equation}
\label{eq:titi2}
{\cal J}_p^{{\mathbb A}}(A) \leq \frac{1}{|B|} \int_B p^T {\mathbb A} p = p^T \, \overline{A} \, p,
\end{equation}
hence
$$
\sum_{i=1}^d {\cal J}_{e_i}^{{\mathbb A}}(A) \leq \sum_{i=1}^d e_i^T \, \overline{A} \, e_i.
$$
If $A=\overline{A}$, we see that the diffusion matrix in~\eqref{eq:pbbase} is constant, therefore $w_p^{{\mathbb A},\overline{A}} = 0$. We then deduce from~\eqref{eq:optim2} that $\dps {\cal J}_p^{{\mathbb A}}\left( \overline{A} \right) = p^T \, \overline{A} \, p$, which directly implies that $A = \overline{A}$ is a maximizer of $\dps {\cal M} \ni A \mapsto \sum_{i=1}^d {\cal J}_{e_i}^{{\mathbb A}}(A)$.

Conversely, assume that $\widehat{A}$ is a maximizer of $\dps {\cal M} \ni A \mapsto \sum_{i=1}^d {\cal J}_{e_i}^{{\mathbb A}}(A)$. Then, for any $1\leq i\leq d$, $\dps {\cal J}_{e_i}^{{\mathbb A}}\left( \widehat{A} \right) = e_i^T \, \overline{A} \, e_i$. We thus infer from~\eqref{eq:expJ3} that $\nabla w_{e_i}^{{\mathbb A},\widehat{A}} = 0$. Using~\eqref{eq:pbbase}, we deduce that
$$
\mbox{{\rm div}}\Big( \left( \overline{A} \chi_B + \widehat{A} \chi_{{\mathbb R}^d \setminus B} \right) e_i \Big) = \mbox{{\rm div}}\Big( \mathcal{A}^{{\mathbb A}, \widehat{A}} e_i \Big) = 0.
$$
Using Lemma~\ref{lem:tech}, we obtain that $\widehat{A} = \overline{A}$. We hence have shown that $\overline{A}$ is the unique maximizer of $\dps {\cal M} \ni A \mapsto \sum_{i=1}^d {\cal J}_{e_i}^{{\mathbb A}}(A)$. Our first definition therefore yields $A^R_1 = \overline{A}$ for all $R>0$.
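The same one-dimensional caricature illustrates this maximization in the homogeneous case: the concave function $A \mapsto 2A - A^2/\overline{A}$ attains its maximum exactly at $A = \overline{A}$, with maximal value $\overline{A}$, mirroring the identity ${\cal J}_p^{{\mathbb A}}\left( \overline{A} \right) = p^T \, \overline{A} \, p$ established above. A grid-search sketch (illustrative only, with an arbitrary value of $\overline{A}$):

```python
# Toy check (1D caricature of definition 1): the concave function
# J(A) = 2A - A**2/Abar attains its maximum exactly at A = Abar,
# with maximal value J(Abar) = Abar.
Abar = 3.0
J = lambda A: 2.0 * A - A * A / Abar

grid = [0.01 * k for k in range(1, 1000)]      # A in (0, 10)
A_max = max(grid, key=J)
assert abs(A_max - Abar) < 0.011               # up to grid resolution
assert abs(J(Abar) - Abar) < 1e-12             # maximal value equals Abar
```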
\subsubsection{Definition~\eqref{eq:def2}} We deduce from~\eqref{eq:def2}, the fact that $A^R_1 = \overline{A}$ and the above expression of ${\cal J}_p^{{\mathbb A}}\left( \overline{A} \right)$ that, for any $p \in {\mathbb R}^d$, $$ p^T A_2^R p = {\cal J}_p^{{\mathbb A}}(A^R_1) = {\cal J}_p^{{\mathbb A}}\left( \overline{A} \right) = p^T \, \overline{A} \, p. $$ Since $A^R_2$ and $\overline{A}$ are symmetric, this implies that $A^R_2 = \overline{A}$. \subsubsection{Definition~\eqref{eq:def3}} We deduce from the above expression of ${\cal J}_p^{{\mathbb A}}\left( \overline{A} \right)$ that $G^{{\mathbb A}}\left( \overline{A} \right) = \overline{A}$, hence $\overline{A}$ is a fixed point of $G^{{\mathbb A}}$. The remainder of this section is devoted to showing that $\overline{A}$ is the {\em unique} fixed point of $G^{{\mathbb A}}$. We recast~\eqref{eq:titi2} as $$ \forall p \in {\mathbb R}^d, \quad p^T G^{{\mathbb A}}(A) p \leq p^T \, \overline{A} \, p. $$ If $A$ is a fixed point of $G^{{\mathbb A}}$, then we have that \begin{equation} \label{eq:genial7} A \leq \overline{A}. \end{equation} We now follow the same steps as in the proof of Proposition~\ref{prop:prop2}. Using~\eqref{eq:defGA} and~\eqref{eq:expJ2}, we see that \begin{eqnarray*} p^T A p = p^T G^{{\mathbb A}}(A) p &=& {\cal J}^{{\mathbb A}}_p(A) \\ &=& \frac{1}{|B|} \int_B p^T {\mathbb A} (p + \nabla w_p^{{\mathbb A}, A}) - \frac{1}{|B|} \int_{\Gamma} (A p\cdot n) \, w_p^{{\mathbb A}, A}. \end{eqnarray*} Using~\eqref{eq:rel1}, we deduce that $$ p^T A p = p^T \, \overline{A} p - \frac{1}{|B|} \int_B (\nabla w_p^{{\mathbb A}, A})^T {\mathbb A} \nabla w_p^{{\mathbb A}, A} - \frac{1}{|B|} \int_{{\mathbb R}^d \setminus B} (\nabla w_p^{{\mathbb A}, A})^T A \nabla w_p^{{\mathbb A}, A}. 
$$ We infer from the above relation and~\eqref{eq:rel1} that \begin{eqnarray} \int_B (p + \nabla w_p^{{\mathbb A}, A})^T {\mathbb A} p &=& |B| \, p^T \, \overline{A} p + \int_B p^T {\mathbb A} \nabla w_p^{{\mathbb A}, A} \nonumber \\ &=& |B| \, p^T A p + \int_B (p + \nabla w_p^{{\mathbb A}, A})^T {\mathbb A} \nabla w_p^{{\mathbb A}, A} + \int_{{\mathbb R}^d \setminus B} (\nabla w_p^{{\mathbb A}, A})^T A \nabla w_p^{{\mathbb A}, A} \nonumber \\ &=& |B| \, p^T A p + \int_{\Gamma} (A p \cdot n) \, w_p^{{\mathbb A}, A}. \label{eq:genial4_bis} \end{eqnarray} The equality~\eqref{eq:rel1} can also be written as $$ 0 = \int_B (p + \nabla w_p^{{\mathbb A}, A})^T {\mathbb A} \nabla w_p^{{\mathbb A}, A} - \int_{\Gamma} (A p\cdot n) \, w_p^{{\mathbb A}, A} + \int_{{\mathbb R}^d \setminus B} (\nabla w_p^{{\mathbb A}, A})^T A \nabla w_p^{{\mathbb A}, A}. $$ Subtracting~\eqref{eq:genial4_bis} from the above relation and next using~\eqref{eq:genial7}, we get \begin{eqnarray} 0 &=& \int_B (p + \nabla w_p^{{\mathbb A}, A})^T {\mathbb A} (p + \nabla w_p^{{\mathbb A}, A}) - 2 \int_{\Gamma} (A p\cdot n) \, w_p^{{\mathbb A}, A} \nonumber \\ & & \qquad + \int_{{\mathbb R}^d \setminus B} (\nabla w_p^{{\mathbb A}, A})^T A \nabla w_p^{{\mathbb A}, A} - |B| \, p^T A p \nonumber \\ & \geq & \int_B (p + \nabla w_p^{{\mathbb A}, A})^T A (p + \nabla w_p^{{\mathbb A}, A}) - 2 \int_{\Gamma} (A p\cdot n) \, w_p^{{\mathbb A}, A} \nonumber \\ & & \qquad + \int_{{\mathbb R}^d \setminus B} (\nabla w_p^{{\mathbb A}, A})^T A \nabla w_p^{{\mathbb A}, A} - |B| \, p^T A p. \label{eq:genial6_bis} \end{eqnarray} We now define, for all $v\in V_0$, the energy $$ {\cal I}(v) := \frac{1}{2} \int_B (p + \nabla v)^T A (p + \nabla v) - \int_{\Gamma} (A p\cdot n) \, v + \frac{1}{2} \int_{{\mathbb R}^d \setminus B} (\nabla v)^T A \nabla v - \frac{1}{2} |B| \, p^T A p. 
$$ The unique solution $v_0 \in V_0$ to the minimization problem $$ v_0 = \mathop{\mbox{argmin}}_{v\in V_0} {\cal I}(v) $$ satisfies $$ -\mbox{{\rm div}} \Big( A (p + \nabla v_0) \Big) = 0 \mbox{ in $\mathcal{D}'({\mathbb R}^d)$}, $$ and therefore is simply $v_0 = 0$. Thus, \begin{equation} \label{eq:genial5_bis} \forall v \in V_0, \quad {\cal I}(v) \geq {\cal I}(v_0) = 0. \end{equation} We recast~\eqref{eq:genial6_bis} as $$ 0 \geq 2 \, {\cal I}(w_p^{{\mathbb A}, A}) \quad \text{with $w_p^{{\mathbb A}, A} \in V_0$}. $$ Combining the above relation with~\eqref{eq:genial5_bis}, we deduce that ${\cal I}(w_p^{{\mathbb A}, A}) = 0$, hence $w_p^{{\mathbb A}, A}$ is a minimizer of ${\cal I}$ on $V_0$, and thus $w_p^{{\mathbb A}, A} = v_0 = 0$. This result holds for all $p\in{\mathbb R}^d$. In view of Lemma~\ref{lem:tech} and our assumption~\eqref{eq:hyp_homog}, we thus obtain that $A = \overline{A}$. This is the claimed uniqueness result for the fixed point of $G^{{\mathbb A}}$, under assumption~\eqref{eq:hyp_homog}. \end{document}
\begin{document} \maketitle \begin{abstract} In this work, we prove a new decomposition result for rank $m$ symmetric tensor fields which generalizes the well known solenoidal and potential decomposition of tensor fields. This decomposition is then used to describe the kernel and to prove an injectivity result for the first $(k+1)$ integral moment transforms of symmetric $m$-tensor fields in $\mathbb{R}^n$. Additionally, we also present a range characterization for the first $(k+1)$ integral moment transforms in terms of John's equation. \end{abstract} \section{Introduction} The space of covariant symmetric $m$-tensor fields on $\mathbb{R}^n$ with components in the Schwartz space $\mathcal{S}(\mathbb{R}^n)$ will be denoted by $\mathcal{S}(S^m)$. In standard Euclidean coordinates, any element $f \in \mathcal{S}(S^m)$ can be written as $$ f(x) = f_{i_1\dots i_m}(x)\, dx^{i_1} \cdots dx^{i_m},$$ where the components $f_{i_1 \dots i_m} \in \mathcal{S}(\mathbb{R}^n)$ are symmetric in their indices. The Einstein summation convention over repeated indices will be assumed throughout this article. Also, we will not distinguish between covariant and contravariant tensors, as we are working with the Euclidean metric. The space of oriented lines in $\mathbb{R}^n$ is parametrized by points of the tangent bundle of the unit sphere $\mathbb{S}^{n-1}$, denoted by $$T{\mathbb{S}}^{n-1}=\{(x,\xi)\in{\mathbb{R}}^n\times{\mathbb{R}}^n\mid |\xi|=1,\ \langle x,\xi\rangle=0\}. $$ Each $(x,\xi)\in T{\mathbb{S}}^{n-1}$ corresponds to the unique line $\{x+t\xi\mid t\in{\mathbb{R}}\}$ passing through the point $x$ in the direction $\xi$.
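The identification of oriented lines with points of $T\mathbb{S}^{n-1}$ can be made concrete: any parametrization of a line by a base point and a direction vector reduces to the canonical pair $(x,\xi)$ with $|\xi|=1$ and $\langle x,\xi\rangle=0$. A minimal numerical sketch (the specific points and directions are illustrative only):

```python
import numpy as np

# Canonical representative (x, xi) in T S^{n-1} of the line {p + t v}:
# xi = v/|v| and x = p - <p, xi> xi (the point of the line closest to 0),
# so that <x, xi> = 0 and |xi| = 1.
def canonical(p, v):
    xi = v / np.linalg.norm(v)
    x = p - np.dot(p, xi) * xi
    return x, xi

p = np.array([1.0, 2.0, -1.0])
v = np.array([0.0, 3.0, 4.0])
x1, xi1 = canonical(p, v)
# Another parametrization of the same oriented line:
x2, xi2 = canonical(p + 2.5 * v, 0.5 * v)
```

Both parametrizations reduce to the same point of $T\mathbb{S}^{2}$, confirming that the representative is well defined.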
For a non-negative integer $q$, the $q$-th integral moment transform of a symmetric $m$-tensor field is the function $I^q :{\mathcal{S}}(S^m)\rightarrow{\mathcal{S}}(T\mathbb{S}^{n-1})$ given by \cite{Sharafutdinov_Generalized_Tensor_Fields}: \begin{equation}\label{eq:definition of momentum ray transform} (I^q f)(x,\xi)=\int\limits_{-\infty}^\infty t^q\langle f(x+t\xi),\xi^m\rangle\, dt = \int\limits_{-\infty}^\infty t^q f_{i_1\dots i_m}(x+t\xi)\,\xi^{i_1} \cdots \xi^{i_m}\, dt. \end{equation} In the above equation, $\langle f, \xi^m \rangle$ stands for $\langle f, \xi^{\otimes m} \rangle$, where $\xi^{\otimes m}$ denotes the $m$-fold tensor product of $\xi$ with itself. \noindent The collection of the first $(k+1)$ integral moment transforms of $f \in \mathcal{S}(S^m)$ is denoted by $\mathcal{I}^k f$; more precisely, the operator $\mathcal{I}^k: \mathcal{S}(S^m)\rightarrow \left(\mathcal{S}(T\mathbb{S}^{n-1})\right)^{k+1}$ is defined by \begin{align}\label{eq:definition of Ik moment transforms} \mathcal{I}^k(f)(x, \xi) = (I^0 f(x, \xi),I^1 f(x, \xi),\dots, I^k f(x, \xi)), \quad \mbox{ for } (x, \xi) \in T\mathbb{S}^{n-1}. \end{align} The zeroth integral moment transform $I^0 = \mathcal{I}^0$ coincides with the well known longitudinal ray transform (also known simply as the ray transform) of symmetric $m$-tensor fields in $\mathbb{R}^n$. The problem of inverting the longitudinal ray transform (LRT) is primarily motivated by its appearance in several imaging problems, notably in medical imaging, seismic imaging and ocean imaging. It is well known \cite{Sharafutdinov1994} that the LRT has a non-trivial kernel (containing all potential tensor fields with a certain decay at infinity), which means that one cannot recover the entire tensor field from LRT data alone.
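Definition \eqref{eq:definition of momentum ray transform} is easy to test numerically in the scalar case $m=0$. For the Gaussian $f(z)=e^{-|z|^2}$ and $(x,\xi)\in T\mathbb{S}^{n-1}$ one has $f(x+t\xi)=e^{-|x|^2}e^{-t^2}$, hence $I^0 f=\sqrt{\pi}\,e^{-|x|^2}$, $I^1 f=0$ and $I^2 f=(\sqrt{\pi}/2)\,e^{-|x|^2}$. The quadrature sketch below (truncation and grid size are arbitrary choices) checks these values:

```python
import numpy as np

# (I^q f)(x, xi) = int t^q f(x + t xi) dt for the Gaussian f(z) = exp(-|z|^2),
# approximated by a simple Riemann sum (the integrand decays like exp(-t^2),
# so the truncation to [-T, T] is harmless).
def I(q, x, xi, T=10.0, N=20001):
    t = np.linspace(-T, T, N)
    pts = x[None, :] + t[:, None] * xi[None, :]
    vals = t**q * np.exp(-np.sum(pts**2, axis=1))
    return vals.sum() * (t[1] - t[0])

x = np.array([0.3, -0.4, 0.0])
xi = np.array([0.0, 0.0, 1.0])          # <x, xi> = 0 and |xi| = 1
I0, I1, I2 = I(0, x, xi), I(1, x, xi), I(2, x, xi)
e = np.exp(-np.dot(x, x))               # e^{-|x|^2}
```

The computed moments match the closed-form values to quadrature accuracy; note that the odd moment $I^1$ vanishes by symmetry of the Gaussian.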
On the other hand, the solenoidal part $f^s$ of a symmetric $m$-tensor field $f$ can be determined uniquely from the knowledge of $\mathcal{I}^0 f$. In this regard, explicit reconstruction algorithms have been studied by many researchers in various settings; please see \cite{Denisjuk1994,Denisjuk_Paper,Helgason_Book,Katsevich2006,Katsevich2007, Katsevich2013,Monard_Mishra_2019,Monard1,Monard2,Monard,Monard_2019,Palamodov2009,Schuster2000,Sharafutdinov2007,Svetov2012,Tuy1983,Vertgeim2000} and references therein. In addition to these explicit schemes, approximate inversion methods (such as microlocal inversion) have also been developed extensively to recover the solenoidal part of a symmetric $m$-tensor field, see \cite{Boman-Quinto-Duke,Boman1993,Greenleaf-Uhlmann-Duke1989,GU1990c,Krishnan2009a,Venky_and_Rohit,Krishnan-Quinto,Krishnan2009,Lan2003,Ramaseshan2004}. It is evident from the non-injectivity of the LRT that one needs more information (in addition to the LRT) for the full recovery of a tensor field. In 1984, Sharafutdinov \cite{Sharafutdinov_Generalized_Tensor_Fields} introduced integral moment transforms (see \eqref{eq:definition of momentum ray transform}) and showed that the collection of the first $(m+1)$ integral moment transforms, $\mathcal{I}^m$, is injective over symmetric $m$-tensor fields in $\mathbb{R}^n$. For the scalar case $(m=0)$, the integral moment transforms $I^k\, (k>0)$ appear in the study of the inversion of cone transforms and conical Radon transforms, see \cite{haltmeier2020conical,Kuchment_Fatma,Markus_Moon_conical_Radon_trans} and references therein. The latter transforms arise in image reconstruction from data obtained by Compton cameras, which have potential applications in medical and industrial imaging. In \cite{Anuj_Rohit}, the authors proved a support theorem and an injectivity result for the first $(m+1)$ integral moment transforms of symmetric $m$-tensor fields on simple real analytic Riemannian manifolds.
Then in \cite{Francois_Rohit_Venky}, the authors gave an inversion formula for integral moment transforms on a simple Riemannian surface. Later, in the article \cite{Rohit_Kumar_Mishra}, the author presented an explicit scheme for the recovery of a vector field in $\mathbb{R}^n$ using $n$-dimensional restricted data of the first 2 integral moment transforms of the unknown vector field. Most recently, in a couple of papers \cite{Krishnan2018,Krishnan2019a}, the authors studied the first $(m+1)$ integral moment transforms and their properties over $m$-tensor fields in great detail. In \cite{Krishnan2018}, the authors proved invertibility together with stability estimates for the collection of the first $(m+1)$ integral moment transforms $\mathcal{I}^m$. In their second paper \cite{Krishnan2019a}, the authors gave a detailed description of the range of the operator $\mathcal{I}^m$. To the best of our knowledge, the study of the transform $\mathcal{I}^k$ over symmetric $m$-tensor fields is limited to the cases $k=0$ and $k=m$ only. The current article addresses injectivity and range characterization questions for the intermediate cases $0 < k < m$ of the operator $\mathcal{I}^k$. It is well known that a symmetric $m$-tensor field $f$ can be decomposed uniquely into its potential part and solenoidal part. This decomposition is not closed, in the sense that the solenoidal and potential components of a tensor field $f$ need not be in the Schwartz space even if $f$ is. Therefore, it is not possible to apply an iterative scheme (similar to \cite{Anuj_Rohit}) to this decomposition. To overcome this difficulty, we introduce $k$-potential tensor fields and $k$-solenoidal tensor fields (see Definition \ref{def:k-potential and k-solenoidal}), extending the classical notions of potential and solenoidal tensor fields respectively.
Then, we prove a decomposition result (see Theorem \ref{th:decomp_in_original_sp}) showing that any symmetric $m$-tensor field $f$ can be decomposed uniquely into a $k$-potential tensor field and a $k$-solenoidal tensor field. With the help of this decomposition theorem, we provide an explicit description of the kernel of the operator $\mathcal{I}^k$, see Theorem \ref{th:injectivity result}. Additionally, we prove that the operator $\mathcal{I}^k$ is injective over $(k+1)$-solenoidal tensor fields. Our injectivity result generalizes the existing injectivity results for $\mathcal{I}^0$ (injective over solenoidal tensor fields) and $\mathcal{I}^m$ (injective over all symmetric $m$-tensor fields). Apart from injectivity and invertibility issues, range characterization questions are also very important in integral geometry. For instance, knowledge of the range is essential in order to project measured data onto the range before applying inversion algorithms. The second order differential operator (also known as the John operator) \begin{align}\label{def:John equation} J_{ij}=\frac{\partial^2}{\partial x^i\partial\xi^j}- \frac{\partial^2}{\partial x^j\partial\xi^i}, \quad 1\le i,j\le n, \end{align} shows up in the range characterization results for the ray transform of functions by Helgason \cite{Helgason_Book} and of tensor fields by Sharafutdinov \cite{Sharafutdinov1994} in $\mathbb{R}^n$ $(n\ge 3)$. The John differential equation was first introduced by Fritz John \cite{John_Fritz_ultrahyperbolic} in the study of ultrahyperbolic differential equations in $\mathbb{R}^3$. The final goal of this article is to give a detailed description of the range of the operator $\mathcal{I}^k$ in terms of John's differential equations, see Theorem \ref{th:range characterisation for I-k}. The rest of the article is organized as follows. In Section \ref{sec:Def and notation}, we introduce some definitions and notation used throughout this work.
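For a concrete illustration of the John operator \eqref{def:John equation}, one can check by finite differences that the (extended) ray transform of a function in $\mathbb{R}^3$ satisfies John's equation. For $f(z)=e^{-|z|^2}$, the line integral has the closed form $F(x,\xi)=\frac{\sqrt{\pi}}{|\xi|}\exp\!\big(\langle x,\xi\rangle^2/|\xi|^2-|x|^2\big)$ (a standard Gaussian integral, not taken from the text):

```python
import numpy as np

# Closed form of the extended ray transform F = J^0 f of f(z) = exp(-|z|^2).
def F(x, xi):
    n2 = np.dot(xi, xi)
    return np.sqrt(np.pi / n2) * np.exp(np.dot(x, xi)**2 / n2 - np.dot(x, x))

# Central finite difference for the mixed derivative d^2 F / dx_i dxi_j.
def mixed(i, j, x, xi, h=1e-4):
    ei, ej = np.eye(3)[i], np.eye(3)[j]
    return (F(x + h*ei, xi + h*ej) - F(x + h*ei, xi - h*ej)
            - F(x - h*ei, xi + h*ej) + F(x - h*ei, xi - h*ej)) / (4 * h * h)

x = np.array([0.2, -0.5, 0.3])
xi = np.array([1.0, 0.4, -0.2])
john_12 = mixed(0, 1, x, xi) - mixed(1, 0, x, xi)   # J_{12} F, should vanish
```

Each mixed derivative is of order one at this point, while their difference vanishes up to finite-difference error, as John's equation predicts.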
Section \ref{sec:Decomposition Results} is devoted to the proof of the decomposition theorem for symmetric $m$-tensor fields. The injectivity results and kernel description are discussed in Section \ref{sec: Injectivity results}. Finally, Section \ref{sec:range characterization} contains the proof of the range characterization for the integral moment transform $\mathcal{I}^k$. \\ \noindent \textbf{Acknowledgements.} The second author would like to thank Jenn-Nan Wang for suggesting this problem during his visit to Taiwan in 2018, and he would also like to express his sincere gratitude to Vladimir A. Sharafutdinov for introducing him to the subject of this paper. Both authors would like to thank Venky P. Krishnan for several fruitful discussions on the results of this article which helped us to improve the manuscript.\\ \noindent \textbf{Funding:} Both authors benefited from Venky P. Krishnan's SERB MATRICS grant MTR/2017/000837. \section{Definitions and notation}\label{sec:Def and notation} In this section we introduce some important definitions and notation used throughout this article. Most of these definitions and notation can be found in the book ``Integral geometry of tensor fields'' by Sharafutdinov \cite{Sharafutdinov1994} and also in the article \cite{Krishnan2018}. \subsection{Some differential operators} Let $T^m(\mathbb{R}^n)$ denote the space of $m$-tensors on $\mathbb{R}^n$. There is a natural projection of $T^m(\mathbb{R}^n)$ onto the space of symmetric tensors $S^m(\mathbb{R}^n)$, namely $\sigma : T^m(\mathbb{R}^n) \rightarrow S^m(\mathbb{R}^n)$ given by \begin{align}\label{eq:definition of sigma} (\sigma v)_{i_1\dots i_m} = \frac{1}{m!}\sum_{\pi \in \Pi_m} v_{\pi(i_1)\dots \pi(i_m)}, \end{align} where $\Pi_m$ is the set of permutations of $m$ elements.
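The symmetrization \eqref{eq:definition of sigma} is straightforward to implement; the sketch below averages a tensor over all permutations of its indices and checks that the result is symmetric and that $\sigma$ is a projection ($\sigma^2=\sigma$):

```python
import numpy as np
from itertools import permutations
from math import factorial

def sym(v):
    """Symmetrization sigma: average of v over all permutations of its indices."""
    m = v.ndim
    out = np.zeros_like(v)
    for perm in permutations(range(m)):
        out += np.transpose(v, perm)
    return out / factorial(m)

rng = np.random.default_rng(0)
v = rng.standard_normal((4, 4, 4))   # a non-symmetric 3-tensor with n = 4
s = sym(v)
```

Swapping any pair of indices leaves `s` unchanged, and applying `sym` again returns `s` itself.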
\\ For $x \in \mathbb{R}^n$, we define the \textit{symmetric multiplication operator} $i_x :S^m(\mathbb{R}^n) \rightarrow S^{m+1}(\mathbb{R}^n)$ by $$(i_x f)_{i_1i_2\dots i_{m+1}} =\sigma(i_1, \dots, i_m, i_{m+1})(x_{i_{m+1}}f_{i_{1}i_{2}\dots i_{m}}).$$ In the same spirit, we also define the dual of $i_x$, \textit{the convolution operator} $j_x :S^m(\mathbb{R}^n) \rightarrow S^{m-1}(\mathbb{R}^n)$, by $$(j_x f)_{i_1i_2\dots i_{m-1}} =f_{i_1i_2\dots i_m}x^{i_m}.$$ The compositions of these operators will be essential in the next section for the proof of the decomposition theorem; for the convenience of the reader, we introduce the operators $i_{x^{\otimes k}} : S^m(\mathbb{R}^n) \rightarrow S^{m+k}(\mathbb{R}^n)$ and $j_{x^{\otimes k}} : S^{m+k}(\mathbb{R}^n) \rightarrow S^{m}(\mathbb{R}^n)$, for any fixed integer $k \geq 1$, as follows: \begin{align*} \left(i_{x^{\otimes k}}f\right)_{i_1i_2\dots i_{m+k}} &=\sigma(i_1, \dots, i_{m+k})(x_{i_{m+1}}\dots x_{i_{m+k}}f_{i_{1}i_{2}\dots i_{m}})\\ (j_{x^{\otimes k}}f)_{i_1i_2\dots i_{m}} &=x^{i_{m+1}}\dots x^{i_{m+k}}f_{i_{1}i_{2}\dots i_{m}i_{m+1}\dots i_{m+k} }. \end{align*} \noindent Next, we define two important first order differential operators on $C^\infty(S^m)$, the space of symmetric $m$-tensor fields whose components are $C^\infty$ smooth. The operator of \textit{inner differentiation} or \textit{symmetrized derivative} $\mathrm{d}:C^\infty(S^m)\rightarrow C^\infty(S^{m+1})$ is defined by $$(\mathrm{d} u)_{i_1\dots i_mi_{m+1}} = \sigma(i_1, \dots , i_{m+1}) \left(\frac{\partial u_{i_1\dots i_m} }{\partial x_{i_{m+1}}}\right),$$ where $\sigma$ is defined in equation \eqref{eq:definition of sigma}.
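The duality between $i_x$ and $j_x$ can be checked numerically in the simplest case $m=1$, where $i_x f=\sigma(x\otimes f)=(x\otimes f+f\otimes x)/2$ for a vector $f$ and $j_x h=hx$ for a symmetric 2-tensor $h$; then $\langle i_x f,h\rangle=\langle f,j_x h\rangle$. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal(n)
f = rng.standard_normal(n)                    # a vector (m = 1 tensor)
h = rng.standard_normal((n, n))
h = (h + h.T) / 2                             # a symmetric 2-tensor

ixf = (np.outer(x, f) + np.outer(f, x)) / 2   # i_x f = sigma(x tensor f)
jxh = h @ x                                   # j_x h: contraction with x

lhs = np.sum(ixf * h)                         # <i_x f, h>
rhs = np.dot(f, jxh)                          # <f, j_x h>
```

The two pairings agree, which is exactly the duality used later in the decomposition arguments.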
\\ \noindent The \textit{divergence} operator $\delta:C^\infty(S^{m})\rightarrow C^\infty(S^{m-1})$ is defined by the formula $$ (\delta u)_{i_1\dots i_{m-1}} = \sum_{j=1}^n \frac{\partial u_{i_1\dots i_{m-1}j} }{\partial x_{j}}.$$ \subsection{Some properties of moment ray transforms} Note that the definition of the $q$-th integral moment transform $I^q$ still makes sense if we extend it to $\mathbb{R}^n \times (\mathbb{R}^n \setminus \{0\})$. For later use, we define the operator $J^q: \mathcal{S}(S^m) \longrightarrow C^\infty(\mathbb{R}^n \times (\mathbb{R}^{n} \setminus \{0\}))$ by extending $I^q$ to $\mathbb{R}^n \times (\mathbb{R}^n\setminus\{0\})$: \begin{equation}\label{eq:definition of Jk} J^q f(x,\xi)=\int\limits_{-\infty}^\infty t^q\langle f(x+t\xi),\xi^m\rangle \, dt \quad\mbox{for}\quad(x,\xi)\in \mathbb{R}^{n} \times (\mathbb{R}^{n}\setminus \{0\}). \end{equation} It has been shown in \cite{Krishnan2018} that the data $(I^0 f, I^1 f, \dots , I^k f)$ and $(J^0 f, J^1 f, \dots , J^k f)$ are equivalent for any $0 \leq k \leq m$, and there is an explicit relation between these operators: \begin{equation}\label{eq:relation between Ik and Jk} (J^q\!f)(x,\xi)=|\xi|^{m-2q-1}\sum\limits_{\ell=0}^q(-1)^{q-\ell}{q\choose\ell}|\xi|^\ell \langle\xi,x\rangle^{q-\ell}\,(I^\ell\!f) \left(x-\frac{\langle x, \xi \rangle}{|\xi|^2}\xi,\frac{\xi}{|\xi|}\right). \end{equation} In certain instances, it will be more convenient to work with the operator $J^q$ instead of $I^q$. One clear advantage of working with the functions $J^k f$ is that the partial derivatives $\frac{\partial}{\partial\xi^{i}}$ and $\frac{\partial}{\partial x^{i}}$ are well defined on $J^k f$ for $k=0,1,\dots,m$.
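The case $q=0$, $m=0$ of relation \eqref{eq:relation between Ik and Jk} reads $(J^0 f)(x,\xi)=|\xi|^{-1}(I^0 f)\big(x-\tfrac{\langle x,\xi\rangle}{|\xi|^2}\xi,\tfrac{\xi}{|\xi|}\big)$, which the quadrature sketch below verifies for a Gaussian (grid and truncation parameters are arbitrary choices):

```python
import numpy as np

# J^0 f(x, xi) = int f(x + t xi) dt for f(z) = exp(-|z|^2), by Riemann sum.
def line_integral(x, xi, T=12.0, N=24001):
    t = np.linspace(-T, T, N)
    pts = x[None, :] + t[:, None] * xi[None, :]
    return np.exp(-np.sum(pts**2, axis=1)).sum() * (t[1] - t[0])

x = np.array([0.5, -0.2, 0.1])
xi = np.array([1.0, 2.0, -2.0])                 # |xi| = 3, not a unit vector
lhs = line_integral(x, xi)                      # J^0 f(x, xi)
nrm = np.linalg.norm(xi)
xp = x - np.dot(x, xi) / nrm**2 * xi            # projection of x onto xi-perp
rhs = line_integral(xp, xi / nrm) / nrm         # |xi|^{-1} I^0 f(xp, xi/|xi|)
```

The two sides agree to quadrature accuracy, illustrating how $J^0 f$ on all of $\mathbb{R}^n\times(\mathbb{R}^n\setminus\{0\})$ is determined by $I^0 f$ on $T\mathbb{S}^{n-1}$.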
\noindent The Fourier transform of a symmetric $m$-tensor field $f \in \mathcal{S}(S^m)$ is defined component-wise, that is, \begin{align*} \widehat{f}_{i_1\dots i_m}(y) = \widehat{f_{i_1\dots i_m}}(y), \quad y \in \mathbb{R}^n, \end{align*} where $\widehat{h}(y)$ denotes the usual Fourier transform of a scalar function $h$ defined on $\mathbb{R}^n$. \noindent The Fourier transform $\mathcal{F} :\mathcal{S}(T\mathbb{S}^{n-1}) \longrightarrow \mathcal{S}(T\mathbb{S}^{n-1})$ is defined as follows, see \cite[Section 2.1]{Sharafutdinov1994}: \begin{align}\label{eq:Fourier transform on sphere bundle} \mathcal{F} (\varphi) (y, \xi) = \widehat{\varphi}(y, \xi) = \frac{1}{(2 \pi)^{(n-1)/2}}\int_{\xi^\perp} e^{-i x\cdot y} \varphi(x, \xi)\, dx, \end{align} where $dx$ is the $(n-1)$-dimensional Lebesgue measure on the hyperplane $\xi^\perp = \{ x\in \mathbb{R}^n : \langle x,\xi \rangle = 0\}.$ \noindent This definition of the Fourier transform is used to compute the Fourier transform of the $q$-th integral moment transform of $f$: \begin{align*} \widehat{I^q f}(y, \xi) = (2 \pi)^{1/2}\, i^q \langle \xi, \partial_y\rangle^q \langle \widehat{f}(y), \xi^m\rangle. \end{align*} For $q=0$, the above equality reduces to \begin{align*} \widehat{I^0 f}(y, \xi) = (2 \pi)^{1/2} \langle \widehat{f}(y), \xi^m\rangle. \end{align*} \section{Decomposition results}\label{sec:Decomposition Results} We start this section by defining two special classes of tensor fields which generalize solenoidal and potential tensor fields respectively. \begin{definition}[$k$-solenoidal and $k$-potential tensor fields]\label{def:k-potential and k-solenoidal} For any fixed $1 \leq k \leq m$, a symmetric $m$-tensor field $f \in C^\infty(S^m)$ is said to be \begin{enumerate} \item a $k$-solenoidal tensor field if \begin{align*} \delta^k f = 0.
\end{align*} \item a $k$-potential tensor field if there exists an $(m-k)$-tensor field $v \in C^\infty(S^{m-k})$ such that \begin{align*} f = \mathrm{d}^k v. \end{align*} \end{enumerate} \end{definition} \noindent For $k=1$, the $k$-solenoidal and $k$-potential tensor fields coincide with the usual solenoidal and potential tensor fields respectively. The goal of this section is to prove that any symmetric $m$-tensor field can be decomposed uniquely into its $k$-solenoidal part and $k$-potential part. This decomposition theorem extends the result \cite[Theorem 2.6.2]{Sharafutdinov1994}, which gives a unique decomposition of a symmetric $m$-tensor field into its solenoidal part and potential part. In fact, both these results can be viewed as generalizations of the well-known Helmholtz decomposition (the name Helmholtz--Hodge decomposition is also widely used) of a vector field into a divergence free part (solenoidal part) and a curl free part (potential part). \\ To prove the main decomposition result of this section, we need the following two lemmas: \begin{lemma}\label{th: decomposition of f in frequency variable} Let $f$ be a symmetric $m$-tensor field in $\mathbb{R}^n$ and let $x \in \mathbb{R}^n$ be a non-zero vector.
Then for $0 \leq k \leq m$, there exist a symmetric $m$-tensor field $g$ and a symmetric $(m-k)$-tensor field $v$ such that the following decomposition of $f$ holds: \begin{align}\label{eq: decomposition of f in frequency variable} f = g + i_{x^{\otimes k}} v, \end{align} where $g$ satisfies $j_{x^{\otimes k}} g =0$ and is given by \begin{align}\label{eq: expression of g} &g_{i_1i_2\cdots i_m}\nonumber\\ &= \sigma(i_1, \dots , i_m)\left( \delta_{i_1}^{j_1}\cdots \delta_{i_k}^{j_k} - \frac{x_{i_1}\dots x_{i_k}x^{j_1}\dots x^{j_k}}{|x|^{2k}}\right)\left(\delta^{j_{k+1}}_{i_{k+1}} - \frac{x^{j_{k+1}}x_{i_{k+1}} }{|x|^2}\right)\cdots \left(\delta^{j_{m}}_{i_{m}} - \frac{x^{j_{m}}x_{i_{m}} }{|x|^2}\right)f_{j_1 j_2 \dots j_m}. \end{align} \end{lemma} We skip the proof of this lemma, as it follows directly from the duality of the linear operators $i_{x^{\otimes k}}: S^m({\mathbb R}^n)\longrightarrow S^{m+k}({\mathbb R}^n)$ and $j_{x^{\otimes k}}:S^m({\mathbb R}^n)\longrightarrow S^{m-k}({\mathbb R}^n)$; see also \cite[Lemma 2.6.1]{Sharafutdinov1994}. \begin{lemma} Let $f\in \mathcal{S}(S^m)$ be a symmetric $m$-tensor field and let $g$, $v$ be as in the above Lemma \ref{th: decomposition of f in frequency variable}.
Then for any multi-index $\alpha$, the following identities hold: \begin{align}\label{estimate_for_g} D^{\alpha} g_{i_1i_2\dots i_m}(x)= |x|^{-2(|\alpha|+m)}\sum\limits_{|\beta|\le |\alpha|} P^{\alpha j_1 \dots j_m}_{\beta i_1 \dots i_m}(x) D^{\beta} f_{j_1 \dots j_m}(x), \end{align} \begin{align}\label{esti_for_v} D^{\alpha} v_{i_1 \dots i_{m-k}}(x)= |x|^{-2(|\alpha|+m)}\sum\limits_{|\beta|\le |\alpha|} Q^{\alpha j_1 \dots j_m}_{\beta i_1 \dots i_{m-k}}(x) D^{\beta} f_{j_1 \dots j_m}(x), \end{align} where $P^{\alpha j_1 \dots j_m}_{\beta i_1 \dots i_m}(x)$ and $Q^{\alpha j_1 \dots j_m}_{\beta i_1 \dots i_{m-k}}(x)$ are homogeneous polynomials of degree $(2m+|\alpha|+|\beta|)$ and $(2m+|\alpha|+|\beta|-k)$ respectively. Here $D=(D_1,\dots,D_n)$ with $D_{j}=-\mathrm{i}\, \partial_{x_j}$. \end{lemma} \begin{proof} Let us start with the observation that if we expand the right hand side of the expression for $g$ given in \eqref{eq: expression of g}, then every term in this expansion is of the following form: \begin{align*} \frac{x_{i_1}\dots x_{i_p}x^{j_1}\dots x^{j_p}}{|x|^{2p}}f_{j_1\dots j_pi_{p+1}\dots i_m} \quad \mbox{ for some }\quad 0 \leq p \leq m. \end{align*} Keeping this observation in mind, we prove the lemma by induction on $|\alpha|$. For $|\alpha|=1$, we have $D^\alpha= -\mathrm{i}\, \partial_{x_j}$ for some $1\le j\le n$, and therefore \[ D^{\alpha }\left(\frac{x_{i_1}\dots x_{i_p}x^{j_1}\dots x^{j_p}}{|x|^{2p}}\right) = \frac{\mbox{homogeneous polynomial of degree } (2p+1)}{|x|^{2(p+1)}}. \] For $p =m$, the above equality becomes \[ D^{\alpha }\left(\frac{x_{i_1}\dots x_{i_m}x^{j_1}\dots x^{j_m}}{|x|^{2m}}\right) = \frac{\mbox{homogeneous polynomial of degree } (2m+1)}{|x|^{2(m+1)}}.
\] Using this, one can easily verify the equality $$ D^{\alpha}g_{i_1i_2\cdots i_m} = \frac{1}{|x|^{2(m+1)}}\sum_{|\beta|=0}^{1} P^{\alpha j_1\dots j_m}_{\beta i_1\dots i_m}(x) D^{\beta} f_{j_1\dots j_m}(x), $$ where $P^{\alpha j_1\dots j_m}_{\beta i_1\dots i_m}(x)$ is a homogeneous polynomial of degree $2m+1+|\beta|$. This shows that the result is true for $|\alpha| =1$. Now, assume that the result is true for $|\alpha|=N$; we aim to verify it for $|\alpha| = N+1$. The idea is to write $\alpha$ (with $|\alpha| = N+1$) as $\alpha= \gamma_1+\gamma_2$ with $|\gamma_1|=N$ and $|\gamma_2| =1$. Then, applying $D^{\gamma_2}$ to $D^{\gamma_1}g_{i_1i_2\cdots i_m}$ (which is the same computation as the case $|\alpha|=1$), we get the desired result for $g$. Finally, to get the estimate for $v$, we first apply $j_{x^{\otimes k}}$ to equation \eqref{eq: decomposition of f in frequency variable} and then use a similar induction argument on $\alpha$, which concludes the proof of the lemma. \end{proof} Now we are ready to present our decomposition theorem for symmetric $m$-tensor fields, which is one of the key results of this article and will be used at several places later. \begin{theorem}\label{th:decomp_in_original_sp} Let $f \in \mathcal{S}(S^m)$ be a symmetric $m$-tensor field defined on $\mathbb{R}^n$ and let $1 \leq k \leq \min\{n-1, m\}$ be a fixed positive integer. Then there exist a uniquely determined smooth symmetric $m$-tensor field $g$ and an $(m-k)$-tensor field $v$ satisfying \begin{align}\label{decomp_of_f} f= g+\mathrm{d}^k v, \quad \ \delta^{k} g=0, \end{align} with $g(x), v(x) \rightarrow 0 \ \mbox{as}\ |x| \rightarrow \infty$.
Additionally, we have the following decay estimates: \begin{align}\label{estimate_for_g_and_v} |g(x)| \le C(1 + |x|)^{1-n}; \qquad |\mathrm{d}^{\ell}v(x)| \le C(1 + |x|)^{k+1-\ell-n} \quad \mbox{ for } 0 \leq \ell\leq k. \end{align} The tensor fields $g$ and $v$ will be called the $k$-solenoidal part and the $k$-potential part of $f$ respectively. \end{theorem} \begin{proof}[\textbf{Proof of existence}] We use the notation $\widehat{f}(y)$ for the Fourier transform of $f$, which we define component-wise, that is, $$\widehat{f_{i_1\dots i_m}}(y) = \widehat{f}_{i_1\dots i_m}(y).$$ We apply Lemma \ref{th: decomposition of f in frequency variable} to find unique symmetric tensor fields $\widehat{g}$ and $\widehat{v}$, of orders $m$ and $(m-k)$ respectively, such that \begin{align}\label{1.12} \widehat{f }(y)=\widehat{g}(y) + i_{y^{\otimes k}}\widehat{v}(y) \ \ \mbox{and }\ j_{y^{\otimes k}} \widehat{g}(y)=0. \end{align} Using relations \eqref{estimate_for_g} and \eqref{esti_for_v} for $\widehat{g}$ and $\widehat{v}$, we see that both fields $\widehat{g}(y)$ and $\widehat{v}(y)$ are smooth on $\mathbb{R}^n_{0}=\mathbb{R}^n\setminus\{0\}$ and decay rapidly as $|y| \rightarrow \infty$. Additionally, we also have the following estimates for $|y| \leq 1$ and for any multi-index $\alpha =(\alpha_1, \dots, \alpha_n)$: \begin{equation}\label{1.13} |D^{\alpha}\widehat{g}(y)|\le C |y|^{-|\alpha|}, \qquad |D^{\alpha}\widehat{v}(y)|\le C |y|^{-|\alpha|-k}. \end{equation} From the above estimates, we see that $D^{\alpha} \widehat{g}(y)$ is integrable for $|\alpha|\le n-1$ and $D^{\alpha} \widehat{v}(y)$ is integrable for $|\alpha| \le n-k-1$. Hence $g$ and $v$ are smooth under the assumption $1 \leq k \leq \min\{m, n-1\}$.
Also, by a direct application of the inverse Fourier transform to \eqref{1.12}, we get the required decomposition: \begin{align*} f= g+\mathrm{d}^k v, \quad \ \delta^{k} g=0. \end{align*} Further, the summability of $\widehat{g}$ and $\widehat{v}$ gives $g(x)$, $v(x) \rightarrow 0$ as $|x| \rightarrow \infty$. Thus it only remains to show the estimates $$ |g(x)| \le C(1 + |x|)^{1-n}; \qquad |\mathrm{d}^\ell v(x)| \le C(1 + |x|)^{k+1-\ell-n} \quad \mbox{ for } 0 \leq \ell\leq k.$$ We show the estimate for $g$ in detail; the estimate for $v$ follows by similar arguments. We start by writing $g$ in terms of the Fourier inversion formula: \begin{align*} g(x)&=\int_{\mathbb{R}^n} e^{\mathrm{i} x\cdot y}\, \widehat{g}(y)\, dy\\ \Longrightarrow \qquad x^{\alpha} g(x)&= (-\mathrm{i})^{|\alpha|}\int_{\mathbb{R}^n} \widehat{g}(y)\,D_{y}^{\alpha} e^{\mathrm{i} x\cdot y}\ dy. \end{align*} As $\widehat{g}(y)$ is not smooth at the origin, in order to integrate by parts in the above identity we rewrite the integral in the following way: \begin{align*} x^{\alpha} g(x)&= (-\mathrm{i})^{|\alpha|}\lim_{\epsilon \rightarrow 0}\int_{|y|\geq \epsilon} \widehat{g}(y)\,D_{y}^{\alpha} e^{\mathrm{i} x\cdot y}\ dy\\ &= \mathrm{i}^{|\alpha|}\lim_{\epsilon \rightarrow 0}\left( \int_{|y|\ge \epsilon} D_{y}^{\alpha}(\widehat{g}(y))\, e^{\mathrm{i} x\cdot y}\ dy -\int_{|y|=\epsilon}D_{y}^{\alpha}(\widehat{g}(y))\, \nu^{\alpha} e^{\mathrm{i} x\cdot y}\ d \sigma(y)\right).
\end{align*} Using the first inequality in \eqref{1.13}, we conclude that $\lim\limits_{\epsilon \rightarrow 0} \int_{|y|=\epsilon}D_{y}^{\alpha}(\widehat{g}(y))\, \nu^{\alpha} e^{\mathrm{i} x\cdot y}\ d \sigma(y)$ equals $0$ for $|\alpha|\le n-2$ and is a constant for $|\alpha| = n-1$. Additionally, $D^{\alpha} \widehat{g} \in L^1(\mathbb{R}^n)$ for $|\alpha|\le n-1$, which gives \begin{align*} |x^{\alpha} g(x)|\le & C_{\alpha}, \end{align*} where $C_{\alpha}$ is a constant depending only on the multi-index $\alpha$. Summing over $|\alpha|$ from $0$ to $n-1$ and using the fact that $(1+|x|)^{n-1}$ and $\sum\limits_{|\alpha|=0}^{n-1} |x^{\alpha}|$ are comparable, we get the estimate \begin{align*} |g(x)|\le C (1+|x|)^{1-n}. \end{align*} A similar argument gives the estimates for $v$ and its derivatives. This finishes the proof of existence. \\ \noindent \textbf{\textit{Proof of uniqueness.}} Assume, if possible, that we have two such decompositions, that is, there are $g_1,\ g_2, \ v_1$ and $v_2$ satisfying \begin{align*} g_1 + \mathrm{d}^k v_1 = f = g_2 + \mathrm{d}^k v_2, \quad \mbox{ and } \quad \delta^{k}g_1=0 = \delta^{k}g_2\\ \Rightarrow (g_1 -g_2) + \mathrm{d}^k (v_1 -v_2) = 0, \quad \mbox{ and } \quad \delta^{k}(g_1-g_2)=0. \end{align*} \noindent Therefore, to prove the uniqueness of the decomposition, it is enough to prove that $f=0$ implies $g=v=0$. Now $f=0$ gives $g+\mathrm{d}^{k}v=0$ and $\delta^{k}g=0$. Note that $g \in \mathcal{S}'(S^m)$ and $v\in \mathcal{S}'(S^{m-k})$, where $\mathcal{S}'$ denotes the space of tempered distributions.
Applying the Fourier transform to the equations $ g+\mathrm{d}^{k}v=0 $ and $ \delta^{k}g=0 $, we get $ \widehat{g}(y)+ (\mathrm{i})^{k} i_{y^{\otimes k}}\widehat{v}(y)=0 $ and $ j_{y^{\otimes k}} \widehat{g}(y)=0$. By Theorem \ref{th: decomposition of f in frequency variable}, we have $ \widehat{g}(y)=\widehat{v}(y)=0 $ in $ \mathbb{R}^n\setminus\{0\} $, i.e., the supports of these distributions are contained in $ \{0\}.$ Thus $ \widehat{g} $ and $ \widehat{v} $ can be written as finite linear combinations of derivatives of the Dirac delta distribution. Therefore \[ \widehat{g} = \sum\limits_{|\alpha|\le p} c_{\alpha} \partial^{\alpha} \delta_{0}\] for some positive integer $ p $, where $ \delta_0 $ is the Dirac delta distribution and each $ \partial^{\alpha}\delta_0 \in \mathcal{S}'(\mathbb{R}^n)$ is a tempered distribution. Taking the inverse Fourier transform of the above in the sense of tempered distributions, we obtain that $ g $ is a polynomial of degree at most $ p$. But $ g(x) \rightarrow 0$ as $ |x| \rightarrow \infty$ implies $ g = 0 $ in $ \mathbb{R}^n $. One can argue similarly to conclude that $ v =0$ in $ \mathbb{R}^n $. \end{proof} \begin{remark} We remark that the estimates for the Fourier transforms of $g$ and $v$ in \eqref{1.13} are optimal and cannot be improved. \end{remark} \section{Kernel description and injectivity result for the operator $\mathcal{I}^k$}\label{sec: Injectivity results} It is known \cite[Theorem 2.2.1]{Sharafutdinov1994} that the ray transform $\mathcal{I}^0 = I^0$ is injective over solenoidal tensor fields in $\mathbb{R}^n$ (one also says that $I^0$ is s-injective). In a recent article \cite{Krishnan2018}, the authors showed the injectivity of $\mathcal{I}^m$ over symmetric $m$-tensor fields in $\mathbb{R}^n$. In this section, our aim is to generalize this injectivity result to $\mathcal{I}^k$ ($0 < k < m$).
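Before turning to the proofs, the gain from taking higher moments can be illustrated numerically. The sketch below is an illustration only and not part of the argument: it uses an assumed Gaussian potential $v$ and a hand-rolled trapezoidal quadrature (all names and parameters are our choices) to check that a potential field $f=\mathrm{d}v$ is invisible to $I^0$, while $I^1$ still detects it through $I^1(\mathrm{d}v)=-I^0v$.

```python
import numpy as np

def v(p):
    # assumed scalar potential: a Gaussian, so line integrals converge rapidly
    return np.exp(-np.sum(p * p, axis=-1))

def grad_v(p):
    return -2.0 * p * v(p)[..., None]

def integrate(vals, ts):
    # plain trapezoidal rule over the sampled line
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))

def moment(vals, ts, k):
    # k-th integral moment along the line t -> x + t*xi
    return integrate(ts**k * vals, ts)

x = np.array([0.3, -0.5])
xi = np.array([0.6, 0.8])               # unit direction
ts = np.linspace(-12.0, 12.0, 8001)
pts = x + ts[:, None] * xi              # sample points on the line

dv_along = np.sum(grad_v(pts) * xi, axis=1)   # <dv(x+t*xi), xi> = d/dt v(x+t*xi)
I0_dv = moment(dv_along, ts, 0)
I1_dv = moment(dv_along, ts, 1)
I0_v = moment(v(pts), ts, 0)

print(abs(I0_dv))         # ~ 0: potential fields lie in the kernel of I^0
print(abs(I1_dv + I0_v))  # ~ 0: I^1(dv) = -I^0 v
```

The second relation is exactly the identity $I^{\ell}(\mathrm{d} f)=-\ell\,I^{\ell-1}f$ for $\ell=1$, obtained by integration by parts.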
Additionally, we provide an explicit description of the kernel of $\mathcal{I}^k$ ($0 < k < m$). \begin{theorem}[Injectivity of $\mathcal{I}^k$]\label{injectivity_result} Let $ f\in \mathcal{S}(S^m) $ be a $(k+1)$-solenoidal tensor field in $\mathbb{R}^n$, that is, $\delta^{k+1} f =0$. Then $$\mathcal{I}^k f = 0 \qquad \Longrightarrow \qquad f \equiv 0.$$ In other words, the operator $\mathcal{I}^k$ is injective over $(k+1)$-solenoidal tensor fields. \end{theorem} Given a symmetric $m$-tensor field $f$, we define a symmetric $(m -\ell)$-tensor field $f_{m-\ell}$ obtained from $ f $ by fixing the first $\ell$ indices $ i_1,\dots,i_\ell$. This can be done by fixing any $\ell$ indices; due to symmetry it is enough to fix the first $\ell$ indices, that is, \begin{equation}\label{def: of restricted tensor field} \left( f_{m-\ell}\right)_{j_1\dots j_{m-\ell}} = f_{i_1\dots i_\ell j_1\dots j_{m-\ell}}, \quad \mbox{ where } i_1,\dots, i_{\ell} \ \mbox{ are fixed.} \end{equation} Using this notation, the extended $q$-th integral moment ray transform of the tensor field $f_{m-\ell}$, for any fixed choice of $i_1,\dots,i_\ell$, will be denoted by $J^q f_{m-\ell} (x, \xi)$ for any integer $ q \ge 0$. \begin{lemma}\label{inversion} Suppose $ I^0f,\dots,I^r\!f\ (0\le r\le m)$ are given for a symmetric $m$-tensor field $ f \in \mathcal{S}(S^m)$. Then the following identity holds \begin{equation}\label{inversion_general} (J^0f_{m-r})_{i_1\dots i_r}= \frac{ (m-r)!}{m!}\sigma(i_1\dots i_r)\sum_{p=0}^{r} (-1)^p \binom{r}{p}\, \frac{\partial^r J^pf}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_r}} \end{equation} for $1\le i_1,\dots,i_r\le n $.
\end{lemma} \begin{proof} This result was already proved in \cite[Theorem 3.1]{Krishnan2018} for the case $r=m$, and we follow a similar technique, with the required modifications, for the case $0 \leq r < m$. The idea is to use induction on $m$. For $m=0$, the only choice for $r$ is $0$ and hence the relation \eqref{inversion_general} holds trivially. In fact, if $r=0$ then the relation \eqref{inversion_general} holds for any $m$. Assume the relation \eqref{inversion_general} is true for $m$-tensor fields with $ 0 \le r < m$. We want to use this induction hypothesis to verify \eqref{inversion_general} for any $1\le r+1<m+1$. Differentiating $ J^p\!f $ with respect to $ \xi^{i_{r+1}} $, we get \begin{align*} J^p\!f_m&=\frac{1}{m+1}\left(\frac{\partial J^pf}{\partial \xi^{i_{r+1}}}-\frac{\partial J^{p+1}f}{\partial x^{i_{r+1}}}\right)\\ &= \int_{-\infty}^{\infty} t^p \left( f_{i_1\cdots i_{m}i_{r+1}}(x+t\xi)\, \xi_{i_1}\cdots \xi_{i_{m}}\right)\, dt \end{align*} for $ 0\le p\le r $, where $ f_m= f_{(m+1)-1}$ is the symmetric $ m $-tensor field given by \eqref{def: of restricted tensor field}. Thus by the induction hypothesis, we have \begin{align*}\label{Eq2.2} (J^0(f_m)_{m-r})_{i_1\cdots i_r}&= \frac{ (m-r)!}{m!}\sigma(i_1\dots i_r)\sum_{p=0}^{r} (-1)^p \binom{r}{p}\, \frac{\partial^r J^pf_m}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_r}}\nonumber\\ &= \frac{ (m-r)!}{(m+1)!}\sigma(i_1\dots i_r)\sum_{p=0}^{r} (-1)^p \binom{r}{p}\, \frac{\partial^r }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_r}} \left(\frac{\partial J^pf}{\partial \xi^{i_{r+1}}}-\frac{\partial J^{p+1}f}{\partial x^{i_{r+1}}}\right). \end{align*} Note that $(J^0(f_m)_{m-r})_{i_1\cdots i_r}= (J^0f_{m-r})_{i_1\cdots i_{r+1}} $, which is symmetric with respect to the indices $ i_1, \dots, i_{r+1}$.
Therefore, the above equation reduces to \begin{equation}\label{Eq2.3} (J^0f_{m-r})_{i_1\cdots i_{r+1}}= \frac{ (m-r)!}{(m+1)!}\sigma(i_1\dots i_{r+1})\left[\sum_{p=0}^{r} (-1)^p \binom{r}{p}\, \frac{\partial^r }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_r}} \left(\frac{\partial J^pf}{\partial \xi^{i_{r+1}}}-\frac{\partial J^{p+1}f}{\partial x^{i_{r+1}}}\right)\right]. \end{equation} Using the arguments of \cite[Theorem 3.1]{Krishnan2018}, the term inside the bracket can be expressed as \begin{align*} \sum_{p=0}^{r} (-1)^p \binom{r}{p}\, \frac{\partial^r }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_r}} \left(\frac{\partial J^pf}{\partial \xi^{i_{r+1}}}-\frac{\partial J^{p+1}f}{\partial x^{i_{r+1}}}\right)&= \sum_{p=0}^{r+1} (-1)^p \binom{r+1}{p}\, \frac{\partial^{r+1}J^pf }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_{r+1}}}. \end{align*} With the help of this, \eqref{Eq2.3} implies \begin{equation*} (J^0f_{m-r})_{i_1\cdots i_{r+1}}=\frac{ (m-r)!}{(m+1)!}\sigma(i_1\dots i_{r+1}) \sum_{p=0}^{r+1} (-1)^p \binom{r+1}{p}\, \frac{\partial^{r+1}J^pf }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_{r+1}}}. \end{equation*} This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{injectivity_result}] Let $f$ be a symmetric $m$-tensor field in $\mathbb{R}^n$ satisfying $\delta^{k+1} f =0$ and $\mathcal{I}^k f =0 $, that is, $ I^\ell f =0 $ for $\ell = 0, 1 , \dots, k$. Our aim is to show that these conditions imply $f \equiv 0$. Before moving further, recall that $ J^0f,\dots,J^k f $ are the extended operators satisfying $ J^{\ell}f|_{T\mathbb{S}^{n-1}} =I^{\ell}f$ for $ \ell=0,1,\dots,k$.
By Lemma \ref{inversion}, we have \begin{equation*} J^0 f_{m-\ell}(x, \xi) =\frac{(m-\ell)!}{m!}\sigma(i_1\dots i_\ell)\sum\limits_{r=0}^{\ell}(-1)^r{\ell\choose r}\frac{\partial^{\ell}J^r f(x, \xi)} {\partial x^{i_1}\dots\partial x^{i_r}\partial\xi^{i_{r+1}}\dots\partial\xi^{i_\ell}}. \end{equation*} From equation \eqref{eq:relation between Ik and Jk}, we know that $ I^{\ell}f(x, \xi)= 0 $ implies $ J^{\ell}f (x, \xi)=0 $ for each $ \ell=0,1,\dots,k $. Therefore, the above equation gives $$ J^0 f_{m-\ell}(x,\xi)=0.$$ Taking the Fourier transform of the above equation over $T\mathbb{S}^{n-1}$ yields (see \cite[Equation 2.1.15]{Sharafutdinov1994}) \begin{align*} \left\langle \widehat{f}_{m-\ell}(y) , \xi^{m-\ell}\right\rangle =0 \quad \mbox{for}\quad y\perp\xi. \end{align*} Therefore for all $y \perp \xi$ we have \begin{align}\label{eq: Fourier transform of Jf(m-l)} \left\langle \widehat{f}(y), y^{\ell}\otimes \xi^{m-\ell}\right\rangle =0 \quad \mbox{for}\quad \ell=0,1, \dots,k. \end{align} For a fixed $y \in \mathbb{R}^n$, let $\zeta_1, \zeta_2,\dots, \zeta_{n-1}$ be $(n-1)$ linearly independent vectors in the hyperplane $y^\perp$. Then we can rewrite the above conditions as follows: \begin{align*} \left\langle \widehat{f}(y), y^{\ell}\otimes \zeta_{i_1}^{\otimes j_1}\otimes \cdots \otimes \zeta_{i_{n-1}}^{\otimes j_{n-1}}\right\rangle = 0, \quad \mbox{where } 1 \leq i_1, \dots, i_{n-1} \leq (n-1)\ \mbox{ and } \sum_{p=1}^{n-1} j_p = m-\ell. \end{align*} The collection $\left\{ y^{\ell}\otimes \zeta_{i_1}^{\otimes j_1}\otimes \cdots \otimes \zeta_{i_{n-1}}^{\otimes j_{n-1}}\right\}$ is a linearly independent set; for details see \cite[Section 5]{Venky_and_Rohit}.
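For small values of $n$, $m$ and $\ell$, this independence can also be confirmed by a direct rank computation. The following sketch is illustrative only and not part of the proof: the vectors $\zeta_1,\zeta_2$ spanning $y^\perp$ are picked by hand, and we check that the symmetrized tensors span a space of the predicted dimension $\binom{n+m-\ell-2}{m-\ell}$ for $n=3$, $m=3$, $\ell=1$.

```python
import itertools
from functools import reduce
from math import comb, factorial

import numpy as np

def sym(T):
    # symmetrization over all tensor slots
    m = T.ndim
    return sum(np.transpose(T, p) for p in itertools.permutations(range(m))) / factorial(m)

def outer(vectors):
    # iterated outer product of a list of vectors
    return reduce(np.multiply.outer, vectors)

n, m, ell = 3, 3, 1
y = np.array([1.0, 2.0, -1.0])
zeta1 = np.array([2.0, -1.0, 0.0])   # both vectors are orthogonal to y
zeta2 = np.array([1.0, 0.0, 1.0])

rows = []
for combo in itertools.combinations_with_replacement([zeta1, zeta2], m - ell):
    # pairing a symmetric tensor with T is the same as pairing with sym(T)
    rows.append(sym(outer([y] * ell + list(combo))).reshape(-1))

rank = np.linalg.matrix_rank(np.array(rows))
print(rank, comb(n + m - ell - 2, m - ell))  # rank equals the predicted count
```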
Since $f$ is symmetric, the above relation provides $\binom{n+m-\ell-2}{m-\ell}$ independent conditions on $\widehat{f}(y)$ for every fixed $y \in \mathbb{R}^n$ and $ 0 \leq \ell \leq k$. Therefore, in total, we have \begin{align*} \sum_{r =m-k}^{m}\binom{n+r-2}{r} \end{align*} independent conditions. But the dimension of the space of symmetric $m$-tensors in $\mathbb{R}^n$ is $$\binom{n+m-1}{m} = \sum_{r =0}^{m}\binom{n+r-2}{r}. $$ Therefore, we require $\sum_{r =0}^{m-k-1}\binom{n+r-2}{r}$ more conditions on $\widehat{f}(y)$ for the unique recovery of $\widehat{f}$ at $y \in \mathbb{R}^n$. To obtain these relations, we use the condition $\delta^{k+1}f=0$. Arguing as before, we take the Fourier transform of $\delta^{k+1}f=0$ to get \[ \left\langle\widehat{f}(y), y^{k+1}\right\rangle =0.\] This is a symmetric $(m-k-1)$-tensor field. Taking the tensor product with $y^{p-1}\otimes\xi^{m-k-p} \,$ for $ 1\le p \le m-k $ entails \begin{align*} \left\langle\widehat{f}(y),y^{k+p}\otimes\xi^{m-k-p}\right\rangle =0. \end{align*} Arguing in exactly the same way as above, we conclude that this equality provides in total $\sum_{r =0}^{m-k-1}\binom{n+r-2}{r}$ independent conditions, which are also independent of the conditions obtained from \eqref{eq: Fourier transform of Jf(m-l)}. Thus, combining all these independent conditions, we get $ \widehat{f}(y)=0 $ for all $ y\ne 0 $, i.e., the support of each component of $ \widehat{f}$ is contained in $ \{0\} $. Therefore the components of $ \widehat{f}(y) $ are distributions which can be written as linear combinations of derivatives of the Dirac delta distribution. But the condition $ f\in \mathcal{S}(S^m) $ then implies $ f=0 $.
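The dimension count above rests on the hockey-stick identity $\binom{n+m-1}{m}=\sum_{r=0}^{m}\binom{n+r-2}{r}$, which can be confirmed computationally over a range of small parameters (an illustrative check, not part of the proof):

```python
from math import comb

# dim S^m(R^n) = C(n+m-1, m), split as a sum of the counts C(n+r-2, r)
for n in range(2, 10):
    for m in range(0, 10):
        lhs = comb(n + m - 1, m)
        rhs = sum(comb(n + r - 2, r) for r in range(m + 1))
        assert lhs == rhs
print("dimension count verified")
```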
\end{proof} \begin{theorem}[Kernel of $\mathcal{I}^k$]\label{th:injectivity result} A symmetric $m$-tensor field $ f \in \mathcal{S}(S^m)$ is in the kernel of the operator $\mathcal{I}^k$ for $ 1 \leq k \leq \min\{m, n-1\}$ if and only if $ f = \mathrm{d}^{k+1} v$ for some $(m-k-1)$-tensor field $v$ satisfying $\mathrm{d}^{\ell}v \rightarrow 0 $ as $ |x| \rightarrow \infty$. \end{theorem} \begin{proof} To prove the `if' part of the theorem, assume $ f=\mathrm{d}^{k+1} v$ for some $ v \in C^{\infty}(S^{m-k-1})$ satisfying $\mathrm{d}^{\ell}v \rightarrow 0 $ as $ |x| \rightarrow \infty$ for $ \ell=0,1,\dots,k $. Then a simple application of integration by parts entails $$ I^{\ell}(f)= (-1)^{\ell}\,\ell!\; I^0(\mathrm{d}^{k+1-\ell}v)=0, \qquad \mbox{ for } 0\le \ell \le k.$$ Conversely, suppose $ f\in \mathcal{S}(S^{m}) $ satisfies $ I^{\ell}f =0$ for $ \ell=0,1,\dots,k$. According to the decomposition Theorem \ref{th:decomp_in_original_sp}, $ f $ can be written as \[f=g+\mathrm{d}^{k+1}v,\quad \delta^{k+1} g=0 \quad \mbox{and}\quad \mathrm{d}^{\ell}v\, \rightarrow 0 \quad \mbox{as} \quad |x| \rightarrow \infty, \ \ 0\le \ell\le k.\] \noindent Now from Lemma \ref{inversion} we have \[ J^{0}f_{m-\ell}(x,\xi)=0, \quad \ell=0,1,\dots,k, \] where $ f_{m-\ell} $ is the symmetric $(m-\ell)$-tensor field obtained from $ f $ by fixing $ \ell $ indices. This implies $ I^{0}f_{m-\ell}= J^{0}f_{m-\ell}|_{T\mathbb{S}^{n-1}}=0.$ Taking the Fourier transform of $ I^{0}f_{m-\ell} $ gives \[ \widehat{I^{0}f_{m-\ell}}(y,\xi)=\left\langle\widehat{f}_{m-\ell}(y), \xi^{m-\ell} \right\rangle=0 \] for $ y\perp \xi $ and $ 0\le \ell\le k $. Together with $ y\perp\xi $ and the fact that $ \widehat{f}(y) = \widehat{g}(y)+ y^{k+1}\otimes \widehat{v}(y)$, this gives \[\left\langle \widehat{g}_{m-\ell}(y),\xi^{m-\ell} \right\rangle=0.
\] By multiplying this equation with $ y^{\ell}=\underbrace{y\otimes\cdots\otimes y}_{\ell\ \text{times}}\,(y\ne 0) $ and then summing over $ \ell $ indices, we obtain \[\left\langle\widehat{g}(y), y^{\ell}\otimes \xi^{m-\ell}\right\rangle=0\quad \mbox{for}\quad 0\le \ell \le k. \] Applying the Fourier transform to the equation $\delta^{k+1}g=0$, we get \[\left\langle\widehat{g}(y), y^{k+1}\right\rangle=0.\] This is a symmetric $(m-k-1)$-tensor field, and taking the tensor product with $y^{r-1}\otimes\xi^{m-k-r} \,$ for $ 1\le r \le m-k $ entails \begin{align*} \left\langle \widehat{g}(y), y^{k+r}\otimes\xi^{m-k-r}\right\rangle =0. \end{align*} Thus for a non-zero vector $ y\in \mathbb{R}^n $ with $ y\perp\xi $, we have \begin{align}\label{Eq3.11} \left\langle \widehat{g}(y), y^{r}\otimes\xi^{m-r} \right\rangle=0 \end{align} for $ 0\le r\le m $. Therefore we are in the same situation as in Theorem \ref{injectivity_result} and have enough linearly independent relations, which implies $ \widehat{g}(y)=0 $ for $ y\ne 0 $. Since $ \widehat{g} $ is an integrable function, we can view it as a distribution whose support is contained in $ \{0\} $. Adapting the arguments used in the proof of the uniqueness part of Theorem \ref{th:decomp_in_original_sp}, we get $ g=0 $ in $ \mathbb{R}^n $. Putting $g=0$ in the decomposition above, we obtain $f =\mathrm{d}^{k+1}v$, which completes the proof of the converse part as well. \end{proof} \section{Range characterization}\label{sec:range characterization} \noindent This section is devoted to a detailed description of the range of the operator $\mathcal{I}^k$. More specifically, we prove \begin{theorem}\label{th:range characterisation for I-k} Let $ n\ge 3$ and $1 \leq k \leq m$.
An element $ (\varphi^0,\varphi^1,\dots,\varphi^k)\in (\mathcal{S}(T\mathbb{S}^{n-1}))^{k+1}$ belongs to the range of the operator $\mathcal{I}^k$ if and only if the following two conditions are satisfied: \begin{enumerate} \item $\varphi^\ell(x,-\xi)= (-1)^{m-\ell}\varphi^\ell(x,\xi) $ for $ \ell=0,1,\dots,k$. \item For $0\le \ell\le k$, the functions $ \psi^{\ell} \in C^{\infty}(\mathbb{R}^n \times \mathbb{R}^n \setminus\{0\})$, defined by \begin{equation}\label{def:of psi l} \psi^{\ell}= |\xi|^{m-2\ell-1}\sum\limits_{r=0}^\ell(-1)^{\ell-r}\binom{\ell}{r}|\xi|^r \langle\xi,x\rangle^{\ell-r}\,\varphi^r \left(x-\frac{\langle x, \xi \rangle}{|\xi|^2}\xi,\frac{\xi}{|\xi|}\right) \end{equation} are such that $\psi^{k}$ satisfies the equations \begin{equation}\label{Johns's condition for psi k} \big(\frac{\partial^2}{\partial x^{i_1}\partial\xi^{j_1}}-\frac{\partial^2}{\partial x^{j_1}\partial\xi^{i_1}}\big)\dots \big(\frac{\partial^2}{\partial x^{i_{m+1}}\partial\xi^{j_{m+1}}}-\frac{\partial^2}{\partial x^{j_{m+1}}\partial\xi^{i_{m+1}}}\big) \psi^k=0 \end{equation} for all indices $1\leq i_1,j_1,\dots,i_{m+1},j_{m+1}\leq n$. \end{enumerate} \end{theorem} \noindent The range characterization for the operator $ \mathcal{I}^m$ was already obtained in \cite{Krishnan2019a}; therefore we consider the case $ k<m $ here. The following theorem from \cite[Theorem 2.10.1]{Sharafutdinov1994} provides the range characterization for the operator $I^0$, and we will use it repeatedly to prove our range characterization theorem for $\mathcal{I}^k$. \begin{theorem} \label{Th3.1} Let $ n\ge 3 $.
A function $\varphi\in{\mathcal S}(T\mathbb{S}^{n-1})$ belongs to the range of $I^0$ if and only if $\varphi$ satisfies the following two conditions: \begin{enumerate} \item[(1)] $\varphi(x,-\xi)=(-1)^m\varphi(x,\xi)$; \item[(2)] the function $\psi\in C^\infty\big({\mathbb R}^n\times({\mathbb R}^n\setminus\{0\})\big)$, defined by \begin{align*} \psi(x,\xi)=|\xi|^{m-1}\varphi\big(x-\frac{\langle\xi,x\rangle}{|\xi|^2}\xi,\frac{\xi}{|\xi|}\big), \end{align*} satisfies the equations \begin{equation} \big(\frac{\partial^2}{\partial x^{i_1}\partial\xi^{j_1}}-\frac{\partial^2}{\partial x^{j_1}\partial\xi^{i_1}}\big)\dots \big(\frac{\partial^2}{\partial x^{i_{m+1}}\partial\xi^{j_{m+1}}}-\frac{\partial^2}{\partial x^{j_{m+1}}\partial\xi^{i_{m+1}}}\big) \psi=0 \label{Eq1.9} \end{equation} for all indices $1\leq i_1,j_1,\dots,i_{m+1},j_{m+1}\leq n$. \end{enumerate} \end{theorem} \noindent With the help of the John operator $J_{ij}$, we rewrite the relation \eqref{Johns's condition for psi k} as follows: $$ J_{i_{m+1}j_{m+1}}\dots J_{i_2j_2} J_{i_1j_1} \psi^k=0 \quad \mbox{for all indices} \quad 1\le i_1,j_1,\dots,i_{m+1},j_{m+1}\le n.$$ \subsection{Required lemmas and results for the proof of Theorem \ref{th:range characterisation for I-k}}\label{subsec:lemmas for range characterization} We need a good amount of preparation before we get into the proof of Theorem \ref{th:range characterisation for I-k}. We start with a quick observation: if a symmetric $m$-tensor field $ f\in \mathcal{S}(S^m) $ is given by \begin{equation}\label{def:of f} f= \sum_{s=0}^{k}\mathrm{d}^{s} g_{s}, \quad \mbox{ where } g_s \in \mathcal{S}(S^{m-s}) \mbox{ for } s = 0, 1, \dots , k, \end{equation} then using the identity $I^{\ell}(\mathrm{d} f) =-\ell\, I^{\ell-1}\!f$ recursively, we get \begin{align} I^{\ell} (\mathrm{d}^sg_s)&= \begin{cases} (-1)^s\, \binom{\ell}{s}\, s!
\,I^{\ell-s} g_s& \mbox{if}\quad s\le \ell\\ 0& \mbox{if} \quad s> \ell. \end{cases}\nonumber \\ \Longrightarrow \qquad \varphi^\ell =I^\ell\!f &= \sum_{s=0}^{\ell} (-1)^s\, \binom{\ell}{s}\, s! \,I^{\ell-s} g_s \label{eq:relation between phi and gs}\\ \mbox{ and } \qquad \psi^\ell =J^\ell\!f &= \sum_{s=0}^{\ell} (-1)^s\, \binom{\ell}{s}\, s! \,J^{\ell-s} g_s, \qquad \mbox{ for } 0\le \ell\le k. \label{eq:relation between psi and gs} \end{align} Note that if we can find tensor fields $g_s$, for $0 \leq s \leq k$, satisfying the relation \eqref{eq:relation between phi and gs}, then $(\varphi^0, \varphi^1, \dots, \varphi^k)$ will be in the range of the operator $\mathcal{I}^k$, with $\mathcal{I}^k f = (\varphi^0, \varphi^1, \dots, \varphi^k)$ for $f$ given by equation \eqref{def:of f}. Keeping this key conclusion in mind, we present a series of lemmas essential to proceed further. \begin{lemma}\label{Lm5.1} If $\psi^s$, for $ 0\le s\le \ell -1 $, is given by relation \eqref{eq:relation between psi and gs} for known tensor fields $g_s$ ($0 \leq s \leq \ell-1$), then the function $ \chi^{\ell} \in C^{\infty}(\mathbb{R}^n \times\mathbb{R}^n\setminus\{0\})$ (for each fixed $0\leq \ell \le k$) defined by \begin{equation}\label{eq:definition_of_chi_l} \chi^{\ell}=\frac{(-1)^\ell}{\ell!} \left(\psi^\ell - \sum_{s=0}^{\ell-1}(-1)^{s}\,\binom{\ell}{s}\,s! \,J^{\ell-s} g_{s}\right) \end{equation} satisfies the following properties: \begin{enumerate} \item For $(x, \xi) \in \mathbb{R}^n \times\mathbb{R}^n\setminus\{0\}$ and $t \in \mathbb{R}$, \begin{equation}\label{translation of chi l} \chi^\ell (x+t\xi,\xi)= \chi^{\ell}(x,\xi).
\end{equation} \item For $(x, \xi) \in \mathbb{R}^n \times\mathbb{R}^n\setminus\{0\}$ and $0 \neq t \in \mathbb{R}$, \begin{equation}\label{homogenety w r to xi} \chi^\ell (x,t\xi)=\frac{t^{m-\ell}}{|t|} \chi^{\ell}(x,\xi). \end{equation} \end{enumerate} \end{lemma} \begin{proof} For any $ t\in \mathbb{R} $, from \cite[Statement 2.8]{Krishnan2019a}, we have \begin{align}\label{Eq5.10} \psi^{\ell}(x+t\xi,\xi)&= \sum_{p=0}^{\ell}\binom{\ell}{p}(-t)^{\ell-p} \psi^p(x,\xi) \nonumber \\ &= \psi^{\ell}+\sum_{p=0}^{\ell-1}\sum_{s=0}^{p} \binom{\ell}{p}(-t)^{\ell-p} (-1)^{s}\binom{p}{s}s!\,J^{p-s} g_{s}, \qquad \mbox{ by \eqref{eq:relation between psi and gs}}, \nonumber \\ &= \psi^{\ell} +\sum_{s=0}^{\ell-1}\sum_{p=s}^{\ell-1} \binom{\ell}{p}(-t)^{\ell-p} (-1)^{s}\binom{p}{s}s!\,J^{p-s} g_{s}. \end{align} Also, from the definition of $J^\ell$, we get \begin{align*} J^{\ell}f(x+t\xi,\xi)= \sum_{p=0}^{\ell}\binom{\ell}{p}(-t)^{\ell-p} J^pf(x,\xi). \end{align*} Replacing $f$ by $g_s$ and $\ell$ by $\ell -s$, this relation reduces to \begin{align*} J^{\ell-s}g_s(x+t\xi,\xi)= \sum_{p=0}^{\ell-s}\binom{\ell-s}{p}(-t)^{\ell-s-p} J^pg_s(x,\xi). \end{align*} Consider \begin{align} \sum_{s=0}^{\ell-1}(-1)^{s}\,\binom{\ell}{s}\,s! \,J^{\ell-s} g_{s}(x+t\xi,\xi)&= \sum_{s=0}^{\ell-1}\sum_{p=0}^{\ell-s}(-1)^{s}\,\binom{\ell}{s}\,s!\binom{\ell-s}{p}(-t)^{\ell-s-p} J^pg_s(x,\xi)\nonumber \\ &=\sum_{s=0}^{\ell-1}\sum_{p=0}^{\ell-s}(-1)^{s}\,\frac{\ell!}{p!\, (\ell-s-p)!}(-t)^{\ell-s-p} J^pg_s(x,\xi)\nonumber \\ &=\sum_{s=0}^{\ell-1}\sum_{p=s}^{\ell}(-1)^{s}\,\frac{\ell!}{(\ell-p)!\, (p-s)!}(-t)^{\ell-p} J^{p-s}g_s(x,\xi) \nonumber \\ &= \sum_{s=0}^{\ell-1}\sum_{p=s}^{\ell}(-1)^s (-t)^{\ell-p} \binom{\ell}{p}\binom{p}{s}\, s! \,J^{p-s}g_s(x,\xi) \nonumber \\ &= \sum_{s=0}^{\ell-1}\sum_{p=s}^{\ell-1}(-1)^s (-t)^{\ell-p} \binom{\ell}{p}\binom{p}{s}\, s!
\,J^{p-s}g_s(x,\xi) \nonumber \\&\qquad \quad + \sum_{s=0}^{\ell-1}(-1)^s \binom{\ell}{s}\, s!\, J^{\ell-s}g_s(x,\xi). \label{Eq5.11} \end{align} Putting these expressions in the definition of the function $\chi^\ell$ (see equation \eqref{eq:definition_of_chi_l}), we get \begin{equation*} \chi^{\ell}(x+t\xi,\xi)= \frac{(-1)^\ell}{\ell!} \left(\psi^{\ell}(x+t\xi,\xi) - \sum_{s=0}^{\ell-1}(-1)^{s}\,\binom{\ell}{s}\,s! \,J^{\ell-s} g_{s}(x+t\xi,\xi)\right). \end{equation*} The identities \eqref{Eq5.10} and \eqref{Eq5.11} proved above yield $$ \chi^{\ell}(x+t\xi,\xi)= \frac{(-1)^\ell}{\ell!} \left( \psi^{\ell}(x,\xi)- \sum_{s=0}^{\ell-1}(-1)^s \binom{\ell}{s}\, s!\, J^{\ell-s}g_s(x,\xi)\right) = \chi^{\ell}(x,\xi). $$ This completes the proof of identity \eqref{translation of chi l}. Next, for $t\ne 0$, the definition of $\chi^\ell$ (equation \eqref{eq:definition_of_chi_l}) gives \begin{align*} \chi^{\ell}(x,t\xi)= \frac{(-1)^\ell}{\ell!} \left( \psi^{\ell}(x,t\xi)- \sum_{s=0}^{\ell-1}(-1)^s \binom{\ell}{s}\, s!\, J^{\ell-s}g_s(x,t\xi)\right). \end{align*} The required relation \eqref{homogenety w r to xi} then follows directly from the following two known homogeneity properties (the first identity follows from a direct computation and the second one is from \cite[Statement 2.8]{Krishnan2019a}): \begin{align*} J^{\ell-s}g_s(x,t\xi)= \frac{t^{m-\ell}}{|t|} J^{\ell-s}g_s(x,\xi)\quad \mbox{ and } \quad \psi^{\ell}(x,t\xi) = \frac{t^{m-\ell}}{|t|}\psi^{\ell}(x,\xi). \end{align*} Hence the proof of the lemma is complete. \end{proof} \begin{lemma}\label{extension lemma} Let $\chi^\ell$ and $\psi^\ell$ satisfy the same conditions as in the previous lemma. Also, define the function $\widetilde{\chi}^\ell$ on $T\mathbb{S}^{n-1}$ by \begin{equation*} \widetilde{\chi}^{\ell}= \frac{(-1)^\ell}{\ell!} \left(\varphi^\ell - \sum_{s=0}^{\ell-1}(-1)^{s}\binom{\ell}{s}\,s!\, I^{\ell-s} g_{s}\right).
\end{equation*} Then $\widetilde{\chi}^\ell = \chi^\ell|_{T\mathbb{S}^{n-1}}$, and we can recover $\chi^\ell$ from $\widetilde{\chi}^\ell$ using the following explicit relation: \[\chi^{\ell}(x,\xi)= |\xi|^{m-\ell-1}\,\widetilde{\chi}^{\ell}\left( x-\frac{\langle x,\xi \rangle }{|\xi|^2}\xi, \frac{\xi}{|\xi|} \right), \qquad (x, \xi) \in \mathbb{R}^n \times \mathbb{R}^n\setminus \{0\}.\] \end{lemma} \begin{proof} For any $ t,s\in\mathbb{R} $ with $ s\ne 0 $, equations \eqref{translation of chi l} and \eqref{homogenety w r to xi} give \begin{align*} \chi^{\ell}(x+t\xi,s\xi)=\frac{s^{m-\ell}}{|s|}\chi^{\ell}(x,\xi). \end{align*} Now choosing $ t= - \frac{\langle x,\xi\rangle}{|\xi|^2}$ and $ s = \frac{1}{|\xi|} $, this gives \begin{align*} \chi^{\ell}\left( x-\frac{\langle x,\xi \rangle }{|\xi|^2}\xi, \frac{\xi}{|\xi|} \right)= \frac{1}{|\xi|^{m-\ell-1}} \chi^{\ell}(x,\xi). \end{align*} Therefore \begin{align*} \chi^{\ell}(x,\xi)&= |\xi|^{m-\ell-1}\chi^{\ell}\left( x-\frac{\langle x,\xi \rangle }{|\xi|^2}\xi, \frac{\xi}{|\xi|} \right)\\ &= |\xi|^{m-\ell-1}\widetilde{\chi}^{\ell}\left( x-\frac{\langle x,\xi \rangle }{|\xi|^2}\xi, \frac{\xi}{|\xi|} \right). \end{align*} \end{proof} The next three lemmas are direct adaptations of results from \cite{Sharafutdinov1994} and \cite{Krishnan2019a}, and hence we state them without proofs. \begin{lemma}\cite[Theorem 2.10]{Sharafutdinov1994}\label{general johns condition} For all indices $ 1\le i_1,j_1,\dots,i_{r+1},j_{r+1}\le n $ and each $ h\in \mathcal{S}({S^r(\mathbb{R}^n)}) $, the following equality holds: \begin{equation} J_{i_{r+1}j_{r+1}} \cdots J_{i_1j_1} J^0h=0.
\end{equation} \end{lemma} \begin{lemma}\cite{Krishnan2019a} \label{Lm 4.1} For all integers $k\ge 0$ and $\ell\ge 0$, the following equality holds: \begin{equation}\label{Eq4.1} \begin{aligned} \langle\xi,\partial_x\rangle^\ell\psi^k=\begin{cases} (-1)^\ell\,{k\choose \ell}\,\ell!\,\psi^{k-\ell} &\mbox{if}\quad\ell\le k,\\ 0 &\mbox{if}\quad\ell>k. \end{cases} \end{aligned} \end{equation} \end{lemma} \begin{lemma}\cite[Lemma 2.6]{Krishnan2019a} \label{Lm 4.2} Let a function $\psi\in C^\infty\big({\mathbb R}^n\times({\mathbb R}^n\setminus\{0\})\big)$ be positively homogeneous in the second argument: \begin{equation} \psi(x,t\xi)=t^\lambda\psi(x,\xi)\quad(t>0). \label{Eq2.10} \end{equation} Assume that the restriction $\psi|_{T{\mathbb S}^{n-1}}$ belongs to ${\mathcal S}(T{\mathbb S}^{n-1})$. Assume also that the restrictions to $T{\mathbb S}^{n-1}$ of the function $\langle\xi,\partial_x\rangle\psi$ and of all its derivatives belong to ${\mathcal S}(T{\mathbb S}^{n-1})$, i.e., \begin{equation} \left.\frac{\partial^{k+p}(\langle\xi,\partial_x\rangle\psi)}{\partial x^{i_1}\dots\partial x^{i_k}\partial \xi^{j_1}\dots\partial \xi^{j_p}}\right|_{T{\mathbb S}^{n-1}}\in{\mathcal S}(T{\mathbb S}^{n-1})\quad\mbox{for all}\quad 1\le i_1,\dots,i_k,j_1,\dots,j_p\le n. \end{equation} Then the restriction to $T{\mathbb S}^{n-1}$ of every derivative of $\psi$ also belongs to ${\mathcal S}(T{\mathbb S}^{n-1})$, i.e., \begin{equation} \left.\frac{\partial^{k+p}\psi}{\partial x^{i_1}\dots\partial x^{i_k}\partial \xi^{j_1}\dots\partial \xi^{j_p}}\right|_{T{\mathbb S}^{n-1}} \in{\mathcal S}(T{\mathbb S}^{n-1})\quad\mbox{for all}\quad 1\le i_1,\dots,i_k,j_1,\dots,j_p\le n.
\end{equation} \end{lemma} \begin{lemma}\label{John's condition } Let $ \psi^{k} $, $0\le k<m$, satisfy \begin{equation}\label{Eq36} J_{i_1j_1} \cdots J_{i_{m+1}j_{m+1}}\psi^k=0 \end{equation} and \begin{equation}\label{Eq37} \psi^{k}(x,t\xi)= \frac{t^{m-k}}{|t|}\psi^{k} (x,\xi). \end{equation} For all indices $ 1\le i_1,i_2,\dots,i_{\ell}\le n $ and each $ 0\le \ell \le k<m$, let $ \Psi_{i_1\cdots i_\ell}$ denote the function defined by \begin{equation}\label{eq:John's condition} \Psi_{i_1\dots i_\ell}= \frac{ (m-\ell)!}{m!}\sigma(i_1\dots i_\ell) \bigg(\sum\limits_{p=0}^{\ell}(-1)^p\binom{\ell}{p}\,\frac{\partial^\ell \psi^p}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}\bigg). \end{equation} Then $ \Psi_{i_1\cdots i_\ell}$, $ 0\le \ell \le k<m$, satisfies the following John's condition: \begin{equation}\label{Eq5.6} J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \Psi_{i_1\dots i_{\ell}}=0. \end{equation} \end{lemma} \begin{remark} The proof of this lemma is very similar to that of \cite[Lemma 2.7]{Krishnan2019a}, so we do not present the complete proof and only indicate the key arguments here. We should mention that Lemma \ref{John's condition } is new and has not been proved in the earlier work \cite{Krishnan2019a}. \end{remark} \begin{proof} According to Lemma \ref{Lm 4.1}, for every $0\le r\le k $ we have \begin{equation}\label{Eq40} \begin{aligned} \langle\xi,\partial_x\rangle^{k-r}\psi^{k}= (-1)^{k-r}\,{k\choose r}\,(k-r)!\,\psi^{r}.
\end{aligned} \end{equation} Using the above relation with $k= \ell$ and $r= p$ in equation \eqref{Eq5.6}, we obtain \begin{align}\label{modified johns condition} J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \sigma(i_1\dots i_\ell) \bigg(\sum\limits_{p=0}^{\ell}\frac{1}{(\ell-p)!}\,\frac{\partial^\ell \langle\xi,\partial_x\rangle^{\ell-p}}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}\bigg) \psi^{\ell}=0. \end{align} Thus proving \eqref{Eq5.6} is equivalent to proving \eqref{modified johns condition}. The proof is based on an induction argument on $ m $. For $m =0$, we have $ \ell=k=0 $ and it is easy to see that equation \eqref{modified johns condition} holds. Assume that the relation \eqref{modified johns condition} is true for some $ m $ with $k < m$ and for all $0 \le \ell \le k $. We then aim to verify the result for $m+1$ with $ 1\le \ell +1\le k+1<m+1 $. For every index $i_{\ell+1}$ satisfying $1\le i_{\ell+1}\le n$, we define the function $\psi^{\ell}_{i_{\ell+1}}$ by \begin{equation*} \psi^{\ell}_{i_{\ell+1}}(x,\xi)=\left(\frac{1}{\ell+1}\,\frac{\partial}{\partial\xi^{i_{\ell+1}}} \langle \xi,\partial_{x}\rangle + \frac{\partial}{\partial x^{i_{\ell+1}}} \right) \psi^{\ell+1}(x,\xi). \label{Eq2.36} \end{equation*} In light of the homogeneity relation \eqref{Eq37} together with equation \eqref{Eq40}, this yields \begin{equation*} \psi^{\ell}_{i_{\ell+1}}(x,t\xi)=\frac{t^{m-\ell}}{|t|}\psi^{\ell}_{i_{\ell+1}}(x,\xi)\quad(0\neq t\in{\mathbb{R}}). \end{equation*} Moreover, from \cite[Lemma 2.7]{Krishnan2019a}, $\psi^{\ell}_{i_{\ell+1}}$ satisfies \eqref{Eq36}. Thus for every ${i_{\ell+1}}$ and each $ 0\le \ell\le k<m $, the function $\psi^{\ell}_{i_{\ell+1}}$ satisfies the hypotheses of Lemma \ref{John's condition }.
By the induction hypothesis \eqref{modified johns condition}, for all $1\le i,j,{i_{\ell+1}},i_1,\dots, i_{\ell}\le n$, \begin{align*} \sigma(i_1\dots i_\ell) \bigg(\sum\limits_{p=0}^{\ell}\frac{1}{(\ell-p)!}\,\frac{\partial^\ell \langle\xi,\partial_x\rangle^{\ell-p}}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}\bigg)J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \psi^{\ell}_{i_{\ell+1}}=0. \end{align*} Now, using arguments similar to those in \cite[Lemma 2.7]{Krishnan2019a}, we can conclude that \[ J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \sigma(i_1\dots i_{\ell+1}) \bigg(\sum\limits_{p=0}^{\ell+1}\frac{1}{(\ell+1-p)!}\,\frac{\partial^{\ell+1} \langle\xi,\partial_x\rangle^{\ell+1-p}}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_{\ell+1}}}\bigg) \psi^{\ell+1}=0. \] Thus, knowing \eqref{modified johns condition} for some $ m $ with $ 0\le \ell\le k<m $, the induction step proves \eqref{modified johns condition} for $ m+1 $ with $ 1\le \ell+1\le k+1<m+1 $. For $ m+1 $, the case $ \ell=0 $ is still left; it follows from the facts that $ \langle \xi,\partial_x\rangle $ commutes with the John operator $ J_{ij} $, that $\langle\xi,\partial_x\rangle^{k}\psi^{k}= (-1)^{k}\,k!\,\psi^{0} $, and from \eqref{Eq36}. This completes the proof.
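The role of the John condition can be seen concretely in the lowest-order case $m=0$, where $\psi(x,\xi)=\int f(x+t\xi)\,dt$ satisfies $J_{12}\psi=0$ exactly for every $\xi\ne 0$. The finite-difference sketch below is an illustration only, not part of the proof; the Gaussian test function, step sizes and quadrature are our choices.

```python
import numpy as np

def psi(x, xi, T=12.0, N=4001):
    # psi(x, xi) = integral of a Gaussian along the line t -> x + t*xi
    ts = np.linspace(-T, T, N)
    pts = x + ts[:, None] * xi
    vals = np.exp(-np.sum(pts * pts, axis=1))
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))

def john_12(x, xi, h=1e-3):
    # J_12 psi = psi_{x1 xi2} - psi_{x2 xi1}, via central differences
    def mixed(i, j):
        ei = np.zeros(2); ei[i] = h
        ej = np.zeros(2); ej[j] = h
        return (psi(x + ei, xi + ej) - psi(x + ei, xi - ej)
                - psi(x - ei, xi + ej) + psi(x - ei, xi - ej)) / (4 * h * h)
    return mixed(0, 1) - mixed(1, 0)

x = np.array([0.2, 0.1])
xi = np.array([0.8, 0.6])
print(abs(john_12(x, xi)))  # ~ 0: the ultrahyperbolic John equation holds
```

Both mixed derivatives equal $\int t\,\partial_1\partial_2 f(x+t\xi)\,dt$, so their difference vanishes identically; the numerical value is nonzero only through discretization error.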
\end{proof} \begin{lemma}\label{Lm 8.2} For $0\le p\le \ell-1$, if $\psi^p$ is given by \eqref{eq:relation between psi and gs}, then \begin{equation} \begin{aligned} \Psi_{i_1\dots i_\ell}= \frac{1}{\binom{m}{\ell}}\frac{\partial^{\ell}}{\partial x^{i_1}\cdots \partial x^{i_{\ell}}} \chi^{\ell} + \frac{1}{\binom{m}{\ell}} \sigma(i_1\dots i_{\ell}) \sum_{s=0}^{\ell-1} \binom{m-s}{\ell-s}\, \frac{\partial^{s} ((J^0g_s)_{m-\ell})_{i_1\dots i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}}, \end{aligned} \end{equation} where $\Psi_{i_1\dots i_\ell}$ is given by \eqref{eq:John's condition}. \end{lemma} \begin{proof} To prove this lemma, we compute the term in parentheses on the right-hand side of equation \eqref{eq:John's condition} using the expression $\psi^p = \sum_{s=0}^{p}(-1)^{s}\binom{p}{s}\, s!\, J^{p-s} g_{s}$. Consider \begin{align*} \sum_{p=0}^{\ell -1}(-1)^p\binom{\ell}{p}\frac{\partial^\ell \psi^p}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}&=\sum_{p=0}^{\ell -1} \sum_{s=0}^{p}(-1)^{p-s}\binom{p}{s} \binom{\ell}{p}\, s!\, \frac{\partial^\ell J^{p-s}g_s }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}\\ &=\sum_{s=0}^{\ell -1} \underbrace{\sum_{p=s}^{\ell-1}(-1)^{p-s}\binom{p}{s}\binom{\ell}{p}\, s!\, \frac{\partial^\ell J^{p-s}g_s }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}}_{\mathcal{J}_s}. \end{align*} First, let us focus on $\mathcal{J}_s$: \begin{align*} \mathcal{J}_s &= \sum_{p=s}^{\ell-1}(-1)^{p-s}\binom{p}{s}\binom{\ell}{p}\, s!\, \frac{\partial^\ell J^{p-s}g_s }{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}\\ &=\sum_{p=0}^{\ell-s-1} (-1)^{p} \binom{p+s}{s}\, s!
\binom{\ell}{p+s}\frac{\partial^\ell J^{p}g_s}{\partial x^{i_1}\dots\partial x^{i_{p+s}}\partial\xi^{i_{p+s+1}}\dots\partial\xi^{i_\ell}}\nonumber\\ & = \sum_{p=0}^{\ell-s-1} (-1)^p \frac{\ell!}{p!(\ell-p-s)!}\,\frac{\partial^\ell J^{p}g_s}{\partial x^{i_1}\dots\partial x^{i_{p+s}}\partial\xi^{i_{p+s+1}}\dots\partial\xi^{i_\ell}}\nonumber\\ &= \frac{\ell!}{(\ell-s)!}\sum_{p=0}^{\ell-s-1} (-1)^p \frac{(\ell-s)!}{p!(\ell-p-s)!}\,\frac{\partial^\ell J^{p}g_s}{\partial x^{i_1}\dots\partial x^{i_{p+s}}\partial\xi^{i_{p+s+1}}\dots\partial\xi^{i_\ell}}\nonumber\\ &=\frac{\ell!}{(\ell-s)!} \sum_{p=0}^{\ell-s-1} (-1)^p \binom{\ell-s}{p} \,\frac{\partial^\ell J^{p}g_s}{\partial x^{i_1}\dots\partial x^{i_{p+s}}\partial\xi^{i_{p+s+1}}\dots\partial\xi^{i_\ell}}. \end{align*} Next, applying Lemma \ref{inversion} with $m-s$ in place of $m$ and $r=\ell-s$ in equation \eqref{inversion_general} gives \begin{align*} (J^0h_{m-\ell})_{i_1\dots i_{\ell-s}}= \frac{ (m-\ell)!}{(m-s)!}\sigma(i_1\dots i_{\ell-s})\sum_{p=0}^{\ell-s}(-1)^p\binom{\ell-s}{p}\, \frac{\partial^{\ell-s} J^ph}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_{\ell-s}}}. \end{align*} Differentiating this relation $s$ times with respect to $x^{i_{\ell-s+1}},\dots,x^{i_{\ell}}$ and applying the symmetrization $\sigma(i_1\dots i_{\ell})$, we obtain \begin{align*}\label{Eq 8.26} \sigma(i_1\dots i_{\ell}) \frac{\partial^{s} (J^0h_{m-\ell})_{i_1\dots i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}} = \frac{ (m-\ell)!}{(m-s)!}\sigma(i_1\dots i_{\ell})\sum_{p=0}^{\ell-s}(-1)^p\binom{\ell-s}{p}\, \frac{\partial^{\ell} J^p h}{\partial x^{i_1}\dots\partial x^{i_{p+s}}\partial\xi^{i_{p+s+1}}\dots\partial\xi^{i_{\ell}}}.
\end{align*} This identity with $h = g_s$ reduces the expression for $\mathcal{J}_s$ to \begin{align*} \frac{(m-\ell)!}{m!}\sigma(i_1\dots i_\ell) \mathcal{J}_s= \frac{(-1)^{\ell-s+1}}{\binom{m}{\ell}(\ell-s)!}\frac{\partial^{\ell} J^{\ell-s}g_s}{\partial x^{i_1}\dots\partial x^{i_{\ell}}}+\frac{\binom{m-s}{\ell-s}}{\binom{m}{\ell}}\sigma(i_1\dots i_{\ell}) \frac{\partial^{s} ((J^0g_s)_{m-\ell})_{i_1\dots i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}}. \end{align*} Now from Lemma \ref{John's condition }, we have \begin{align*} \Psi_{i_1\dots i_\ell}&= \frac{ (m-\ell)!}{m!}\sigma(i_1\dots i_\ell) \bigg(\sum\limits_{p=0}^{\ell}(-1)^p\binom{\ell}{p}\,\frac{\partial^\ell \psi^p}{\partial x^{i_1}\dots\partial x^{i_p}\partial\xi^{i_{p+1}}\dots\partial\xi^{i_\ell}}\bigg)\\ &=\frac{ (m-\ell)!}{m!}\sigma(i_1\dots i_\ell) \sum\limits_{s=0}^{\ell-1}\mathcal{J}_s +\frac{ (m-\ell)!}{m!}(-1)^\ell \frac{\partial^\ell \psi^\ell}{\partial x^{i_1}\dots\partial x^{i_\ell}}\\ &=\frac{(-1)^\ell}{\binom{m}{\ell}\ell!}\sum_{s=0}^{\ell-1} \frac{(-1)^{s+1}\ell!}{(\ell-s)!}\frac{\partial^{\ell} J^{\ell-s}g_s}{\partial x^{i_1}\dots\partial x^{i_{\ell}}}+\sigma(i_1\dots i_{\ell})\sum_{s=0}^{\ell-1}\frac{\binom{m-s}{\ell-s}}{\binom{m}{\ell}} \frac{\partial^{s} ((J^0g_s)_{m-\ell})_{i_1\dots i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}}+ \frac{(-1)^\ell}{\binom{m}{\ell}\ell!} \, \frac{\partial^\ell \psi^{\ell}}{\partial x^{i_1}\dots\partial x^{i_{\ell}}}\nonumber\\ &=\frac{(-1)^\ell}{\binom{m}{\ell}\ell!}\frac{\partial^{\ell} }{\partial x^{i_1}\dots\partial x^{i_{\ell}}}\left(\psi^{\ell}-\sum_{s=0}^{\ell-1} (-1)^{s}\binom{\ell}{s}\,s!\,J^{\ell-s}g_s\right)+\sigma(i_1\dots i_{\ell})\sum_{s=0}^{\ell-1}\frac{\binom{m-s}{\ell-s}}{\binom{m}{\ell}} \frac{\partial^{s} ((J^0g_s)_{m-\ell})_{i_1\dots
i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}}\\ &=\frac{1}{\binom{m}{\ell}}\frac{\partial^{\ell}}{\partial x^{i_1}\cdots \partial x^{i_{\ell}}} \chi^{\ell} + \frac{1}{\binom{m}{\ell}} \sigma(i_1\dots i_{\ell}) \sum_{s=0}^{\ell-1} \binom{m-s}{\ell-s}\, \frac{\partial^{s} ((J^0g_s)_{m-\ell})_{i_1\dots i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}}. \end{align*} This completes the proof of Lemma \ref{Lm 8.2}. \end{proof} \begin{lemma}\label{restriction on Schwartz space} If $\psi^r$ is given by \eqref{eq:relation between psi and gs} for $0\le r \le \ell -1$, then for all indices $1\le i_1,j_1,\dots, i_{m-\ell+1}, j_{m-\ell+1}\le n$, \[ \left( J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \chi^{\ell} \right)\bigg|_{T\mathbb{S}^{n-1}}\in \mathcal{S}(T\mathbb{S}^{n-1}). \] \end{lemma} \begin{proof} The proof follows from repeated application of \cite[Statement 2.10]{Krishnan2019a}. We know from equation \eqref{homogenety w r to xi} of Lemma \ref{Lm5.1} that the function $\chi^{\ell}(x,\xi)$ is positively homogeneous of degree $\lambda=m-\ell-1$ in its second variable. Also, by definition, \[ \chi^{\ell}\big|_{T\mathbb{S}^{n-1}}\in \mathcal{S}(T\mathbb{S}^{n-1}).\] Differentiating \eqref{translation of chi l} with respect to $t$, we obtain \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} \chi^{\ell}(x+t\xi,\xi)& = \langle \xi,\partial_x\rangle \chi^{\ell}(x+t\xi,\xi) =0\\ \Longrightarrow\qquad \qquad \left.\frac{\mathrm{d}}{\mathrm{d} t} \chi^{\ell}(x+t\xi,\xi) \right|_{t =0}&= \langle \xi,\partial_x\rangle \chi^{\ell}(x,\xi) =0. \end{align*} This implies that $\langle \xi,\partial_x\rangle \chi^{\ell}(x,\xi)$ and all its derivatives with respect to the $x_j$'s and $\xi_j$'s, restricted to $T\mathbb{S}^{n-1}$, belong to $\mathcal{S}(T\mathbb{S}^{n-1})$.
Thus $\chi^{\ell}$ satisfies the hypotheses of Lemma \ref{Lm 4.2} with $k=p=m-\ell-1$, and we get \begin{equation}\label{Eq5.27} \left( J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \chi^{\ell} \right)\bigg|_{T\mathbb{S}^{n-1}}\in \mathcal{S}(T\mathbb{S}^{n-1}). \end{equation} This finishes the proof. \end{proof} \subsection{Proof of Theorem \ref{th:range characterisation for I-k}} \textbf{Proof of necessity.} To prove the necessity part of the theorem, assume that $(\varphi^0, \varphi^1, \dots, \varphi^k) \in (\mathcal{S}(T\mathbb{S}^{n-1}))^{k+1}$ is in the range of the operator $\mathcal{I}^k$, that is, there exists $f \in \mathcal{S}(S^m)$ such that \begin{align}\label{eq: relation between phil and Il} I^\ell f(x, \xi)= \varphi^{\ell}(x,\xi)= \int\limits_{-\infty}^\infty t^{\ell}\langle f(x+t\xi),\xi^m\rangle \,dt, \qquad \mbox{ for } 0 \leq \ell \leq k. \end{align} Then the first condition \textit{(1)} of Theorem \ref{th:range characterisation for I-k}, namely $\varphi^{\ell}(x,-\xi) = (-1)^{m-\ell} \varphi^{\ell}(x,\xi)$ for $\ell=0,1,\dots,k$, can be verified by the straightforward substitution $\xi \mapsto -\xi$. The second condition \textit{(2)} of Theorem \ref{th:range characterisation for I-k} follows from \cite[Lemma 2.5]{Krishnan2019a}, and can also be proved by a direct computation. \\ \noindent \textbf{Proof of sufficiency.} Assume that $(\varphi^0, \varphi^1, \dots, \varphi^k) \in (\mathcal{S}(T\mathbb{S}^{n-1}))^{k+1}$ satisfies properties \textit{(1)} and \textit{(2)}; our aim is to find $f \in \mathcal{S}(S^m)$ such that equation \eqref{eq: relation between phil and Il} holds.
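For the reader's convenience, the substitution argument verifying condition \textit{(1)} in the necessity part can be written out explicitly. Replacing $\xi$ by $-\xi$ in \eqref{eq: relation between phil and Il}, using the $m$-linearity of $\langle f,\xi^m\rangle$ in $\xi$, and then substituting $u=-t$ (which flips the limits and cancels the sign of $dt$), we get
\begin{align*}
\varphi^{\ell}(x,-\xi)
&= \int\limits_{-\infty}^{\infty} t^{\ell}\,\langle f(x-t\xi),(-\xi)^{m}\rangle \,dt
 = (-1)^{m}\int\limits_{-\infty}^{\infty} t^{\ell}\,\langle f(x-t\xi),\xi^{m}\rangle \,dt\\
&= (-1)^{m}(-1)^{\ell}\int\limits_{-\infty}^{\infty} u^{\ell}\,\langle f(x+u\xi),\xi^{m}\rangle \,du
 = (-1)^{m-\ell}\,\varphi^{\ell}(x,\xi).
\end{align*}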
Our idea is to construct a tensor field $f$ of the form \begin{equation}\label{Eq5.1} f= \sum_{s=0}^{k}\mathrm{d}^{s} g_{s}, \quad \mbox{ where } g_s \in \mathcal{S}(S^{m-s}), \mbox{ for } s = 0, 1, \dots , k, \end{equation} which is equivalent to finding tensor fields $g_s$, for $0 \leq s \leq k$, satisfying the following relation (see the discussion in the first two paragraphs of subsection \ref{subsec:lemmas for range characterization}): \begin{equation}\label{Eq5.2} \begin{aligned} \varphi^\ell=I^\ell\!f &= \sum_{s=0}^{\ell} (-1)^s\, \binom{\ell}{s}\, s! \,I^{\ell-s} g_s, \qquad \mbox{ for } 0\le \ell\le k. \end{aligned} \end{equation} To obtain the required tensor fields $g_s$, we use mathematical induction together with Theorem \ref{Th3.1}. In particular, we show that the function $\chi^{\ell} \in C^{\infty}(\mathbb{R}^n \times\mathbb{R}^n\setminus\{0\})$ (for each fixed $0\leq \ell \le k$) defined in Lemma \ref{Lm5.1} by \begin{equation}\label{definition_of_chi_l} \chi^{\ell}=\frac{(-1)^\ell}{\ell!} \left(\psi^\ell - \sum_{s=0}^{\ell-1}(-1)^{s}\,\binom{\ell}{s}\,s! \,J^{\ell-s} g_{s}\right) \end{equation} satisfies the hypotheses of Theorem \ref{Th3.1} with $m$ replaced by $m-\ell$. Then, by applying Theorem \ref{Th3.1} to $\chi^{\ell}$, we prove the existence of the tensor fields $g_s$ ($0\leq s \le k$) iteratively. \\ \textbf{Case $\ell =0$:} For $\ell=0$, we have $\chi^0=\psi^0$. Also, the statement of the theorem (equation \eqref{Johns's condition for psi k}) gives \[ J_{i_{m+1}j_{m+1}}\cdots J_{i_2j_2} J_{i_1j_1} \psi^k=0. \] Using \eqref{Eq4.1} for $\ell=k$ together with the fact that the operators $\langle \xi, \partial_{x}\rangle$ and $J_{ij}$ commute, we get \[J_{i_{m+1}j_{m+1}}\cdots J_{i_2j_2} J_{i_1j_1} \psi^0=0.
\] Hence, we can apply Theorem \ref{Th3.1} to $\chi^0$ to get $g_0\in \mathcal{S}(S^m)$ satisfying \[ I^0g_0 =\varphi^0 \quad \mbox{and}\quad J^0g_0=\psi^0=\chi^0. \] Thus our claim about the function $\chi^\ell$ is valid for $\ell=0$. \\ \textbf{Induction hypothesis:} Assume that the function $\chi^{r}$ satisfies properties \textit{(1)} and \textit{(2)} of Theorem \ref{Th3.1}, which yields the existence of $g_{r}\in \mathcal{S}(S^{m-r})$ such that \begin{equation*} \chi^{r} = J^0g_r= \frac{(-1)^r}{r!} \left(\psi^r - \sum_{s=0}^{r-1}(-1)^{s}\binom{r}{s}s!\, J^{r-s} g_{s}\right), \qquad \mbox{ for } 0\leq r \leq \ell -1. \end{equation*} This implies \begin{equation}\label{Eq 8.13} \begin{aligned} \psi^r=\sum_{s=0}^{r-1}(-1)^{s}\binom{r}{s}\,s!\, J^{r-s} g_{s}+ (-1)^r r!\, J^0g_r =\sum_{s=0}^{r}(-1)^{s}\binom{r}{s}s!\,J^{r-s} g_{s}. \end{aligned} \end{equation} Now we use Lemma \ref{John's condition } to get the relation \[J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \Psi_{i_1\cdots i_{\ell}} =0.\] This together with Lemma \ref{Lm 8.2} entails \begin{align}\label{Eq8.5} J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}}\left( \frac{\partial^{\ell}}{\partial x^{i_1}\cdots \partial x^{i_{\ell}}} \chi^{\ell} + \sigma(i_1\dots i_{\ell}) \sum_{s=0}^{\ell-1}\binom{m-s}{\ell-s} \frac{\partial^{s} ((J^0g_s)_{m-\ell})_{i_1\dots i_{\ell-s}}}{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}}\right)=0. \end{align} The operator $J_{ij}$ is a constant-coefficient differential operator and hence commutes with partial derivatives (in fact, with any constant-coefficient differential operator).
Therefore \eqref{Eq8.5} is equivalent to the equation \begin{equation}\label{Eq8.6} \begin{aligned} & \frac{\partial^{\ell}}{\partial x^{i_1}\cdots \partial x^{i_{\ell}}} \left( J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \chi^{\ell}\right) \\&\qquad \quad + \sigma(i_1\dots i_{\ell}) \sum_{s=0}^{\ell-1} \binom{m-s}{\ell-s} \frac{\partial^{s} }{\partial x^{i_{\ell-s+1}}\cdots \partial x^{i_{\ell}}} \left( J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} ((J^0g_s)_{m-\ell})_{i_1\dots i_{\ell-s}}\right)=0. \end{aligned} \end{equation} The second term on the left-hand side of the above equation vanishes by Proposition \ref{general johns condition}. Thus the relation reduces to \begin{equation}\label{Eq 8.14} \frac{\partial^{\ell}}{\partial x^{i_1}\cdots \partial x^{i_{\ell}}}\left(J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \chi^{\ell} \right)=0. \end{equation} For a fixed $\xi \ne 0$, Proposition \ref{restriction on Schwartz space} implies that the restriction of $\left(J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}} \chi^{\ell} \right)(\cdot,\xi)$ to the hyperplane $\xi^{\perp}=\{x\in \mathbb{R}^n: \langle x,\xi \rangle=0 \}$ belongs to $\mathcal{S}(\xi^{\perp})$. Moreover, \eqref{Eq 8.14} implies that this restriction is itself zero. Since $\xi\ne 0$ is arbitrary, \begin{equation*} J_{i_1j_1} \cdots J_{i_{m-\ell+1}j_{m-\ell+1}}\chi^{\ell}=0. \end{equation*} Lemma \ref{extension lemma} recovers the function $\chi^{\ell}$ from $\tilde{\chi}^{\ell} =\chi^{\ell}|_{T\mathbb{S}^{n-1}}$ by the rule \[ \chi^{\ell}=|\xi|^{m-\ell-1}\,\tilde{\chi}^{\ell}\left( x-\frac{\langle x,\xi \rangle }{|\xi|^2}\xi, \frac{\xi}{|\xi|} \right), \] where $\tilde{\chi}^{\ell}= \frac{(-1)^\ell}{\ell!} \left(\varphi^\ell - \sum_{s=0}^{\ell-1}(-1)^{s}\binom{\ell}{s}\,s!\, I^{\ell-s} g_{s}\right) \in \mathcal{S}(T\mathbb{S}^{n-1})$.
Equation \eqref{homogenety w r to xi} gives \[\chi^{\ell}(x,-\xi)= (-1)^{m-\ell} \chi^{\ell}(x,\xi), \] which implies that $\tilde{\chi}^{\ell}=\chi^{\ell}|_{T\mathbb{S}^{n-1}}$ enjoys the same parity. Thus we have proved that if \eqref{Eq 8.13} holds, then the function $\chi^{\ell}$ satisfies the hypotheses of Theorem \ref{Th3.1} for $(m-\ell)$-tensor fields. By the second principle of induction, there exists $g_{\ell}\in \mathcal{S}(S^{m-\ell})$ such that \begin{equation}\label{identity for chi l} J^0g_{\ell}=\chi^{\ell} \quad\text{and}\quad I^0g_{\ell}=\chi^{\ell}|_{T\mathbb{S}^{n-1}} \end{equation} holds for $\ell =0,\dots,k$. Thus, the tensor field $f$ defined by \eqref{Eq5.1} satisfies \[ I^{\ell}f=\varphi^{\ell},\quad \ell=0,1,\dots,k. \] This completes the proof of the sufficiency part, and of Theorem \ref{th:range characterisation for I-k} as well. \begin{remark} \begin{itemize} \item We already know from Theorem \ref{th:decomp_in_original_sp} that the operator $\mathcal{I}^k$ has an infinite-dimensional kernel containing all $(k+1)$-potential fields (tensor fields of the form $f=\mathrm{d}^{k+1}v$ for some $v\in\mathcal{S}(S^{m-k-1})$). This was the primary motivation for defining $f$ in the form \eqref{Eq5.1} in the proof of Theorem \ref{th:range characterisation for I-k}. \item The technique used in this work to prove the range characterization for the operator $\mathcal{I}^k$ is different from the one used in \cite[Theorem 1.3]{Krishnan2019a}. In fact, for the particular case $k=m$, our result provides a new proof of \cite[Theorem 1.3]{Krishnan2019a}. Hence, Theorem \ref{th:range characterisation for I-k} can be thought of as a generalization of \cite[Theorem 1.3]{Krishnan2019a}. \end{itemize} \end{remark} \end{document}
\begin{document} \newcommand{\AND}{} \title{Teams in Online Scheduling Polls: Game-Theoretic Aspects\thanks{R.\ Bredereck is supported by the DFG fellowship BR 5207/2. N. Talmon is supported by an I-CORE ALGO fellowship. Main work done while R.\ Bredereck and N. Talmon were with TU Berlin.}} \author{Robert Bredereck$^1$, Jiehua Chen$^2$, Rolf Niedermeier$^2$, Svetlana Obraztsova$^3$, and Nimrod~Talmon$^4$\\ $^1$University of Oxford, United Kingdom, \texttt{[email protected]}\\ $^2$TU Berlin, Germany, \texttt{\{jiehua.chen, rolf.niedermeier\}@tu-berlin.de}\\ $^3$I-CORE, Hebrew University of Jerusalem, Israel, \texttt{[email protected]}\\ $^4$Weizmann Institute of Science, Israel, \texttt{[email protected]} } \maketitle \begin{abstract} Consider an important meeting to be held in a team-based organization. Taking availability constraints into account, an online scheduling poll is being used in order to decide upon the exact time of the meeting. Decisions are to be taken during the meeting, therefore each team would like to maximize its relative attendance in the meeting (i.e., the proportional number of its participating team members). We introduce a corresponding game, where each team can declare (in the scheduling poll) a lower total availability, in order to improve its relative attendance---the pay-off. We are especially interested in situations where teams can form coalitions. We provide an efficient algorithm that, given a coalition, finds an optimal way for each team in the coalition to improve its pay-off. In contrast, we show that deciding whether such a coalition exists is NP-hard. We also study the existence of Nash equilibria: Finding Nash equilibria for various small sizes of teams and coalitions can be done in polynomial time, while deciding whether one exists is coNP-hard if the coalition size is unbounded. \end{abstract} \section{Introduction}\label{section:introduction} An organization is going to hold a meeting which a number of people are expected to attend.
Since people come from different places and have availability constraints, an open online scheduling poll is taken to decide upon the meeting time. Each individual can approve or disapprove each of the suggested time slots. In order to have the highest possible attendance, the organization will schedule the meeting at a time slot with the maximum sum of declared availabilities. During the meeting, proposals will be discussed and decisions will be made. Usually, people have different interests in the decision making, e.g., they come from different teams that each want their own proposals to be put through. We consider people with the same interest as belonging to the same \emph{team}; as a result, each team (instead of each individual) may declare (in the scheduling poll) the number of its members that can attend the meeting at each suggested time slot. For a simple illustration, suppose that three teams, $t_1$, $t_2$, and $t_3$, are about to hold a meeting, either at 9am or at 10am. Two members from~$t_1$, one member from~$t_2$, and three members from~$t_3$ are available at 9am, while exactly two members of each team are available at 10am. The availabilities of the teams can be illustrated as an integer matrix: \begin{align*} A\coloneqq \begin{blockarray}{ccc} c_1 & c_2 & \\ [3pt] \begin{block}{(cc)c} 2 & 2 & \ \ t_1 \\ 1 & 2 & \ \ t_2 \\ 3 & 2 & \ \ t_3 \\ \end{block} \end{blockarray} \end{align*} A time slot is a winner if it receives the maximum sum of declared availabilities. Thus, if the three teams declare their true availabilities, then both 9am and 10am co-win (since six people in total are available at 9am and at 10am each), and the meeting will be scheduled at either 9am or 10am. Now, if a team (i.e.
people with the same interest) wants to influence any decision made during the meeting, then it will want to send as many of its available team members to the meeting as possible, because this maximizes its \emph{relative power}---the proportion of its own attendees. For our simple example, if the meeting is to be held at 9am, then the relative powers of teams~$t_1$, $t_2$, and $t_3$ are $1/3$, $1/6$, and $1/2$, respectively. This also means that a sophisticated team may strategically change the availabilities it declares in the poll. However, a team must not report a number higher than its true availability, since it cannot send more members than it actually has. Given this constraint, it is interesting to know whether any team can increase its relative attendance by misreporting its availability. Aiming to have more power in the meeting, our teams may declare numbers different from their true availabilities, possibly changing the winning time slot to one where their relative powers are maximized. For the case where several time slots co-win, however, it is not clear which co-winning time slot will be used. To be on the safe side, the teams must consider their relative power at \emph{each} co-winning time slot. In other words, our teams are pessimistic and take as their \emph{pay-off} the \emph{minimum} over the relative powers at all co-winning time slots. In our example, this means that the pay-off of team~$t_2$ would be $1/6$, since this is its relative power at 9am, which is smaller than its relative power, $1/3$, at 10am. The pay-offs of teams~$t_1$ and $t_3$ are both~$1/3$. In this case, team~$t_2$ can be strategic by updating its availability and declaring zero availability at 9am; as a result, the meeting would be held at 10am, where team~$t_2$ has a better pay-off, with relative power $1/3$.
We do not allow arbitrary deviations from the real availabilities of teams; specifically, we do not allow a team to declare as available a higher number than is actually available. Further, we do not allow a team to send more team members to the meeting than it declared as available, because this is often mandated by the circumstances. As examples, the organizer might need to arrange a meeting room and specify the number of participants up-front (similarly, if the meeting is to be held in a restaurant, the number of chairs at the table must be decided beforehand); or the organizer might need to arrange buses to transport the participants. Thus, the teams must send exactly the declared number of members to the meeting. For instance, it is not possible for team~$t_2$ to declare $3$ at 9am, since only one of its team members is available then. A formal description of the corresponding game, called the \emph{team power game} (\textsc{TPG}\xspace for short), and a discussion of our example are given in~\cref{section:preliminaries}. As already remarked, to improve its pay-off, a team may lie about the number of its available members. Sometimes, teams can even form a coalition and update their availabilities strategically. In our example, after team~$t_2$ misreported its availabilities such that each team receives a pay-off of $1/3$, teams~$t_1$ and $t_3$ may collaborate: if both teams keep their declared availabilities at 9am but declare zero availability at 10am (note that team~$t_2$ does not change its updated availabilities), then 9am will be the unique winner (with total availability of $5$); as a result, $t_1$ and $t_3$ receive better pay-offs of $2/5$ and $3/5$, respectively. Such a successful deviation from the declared availabilities of the teams in a coalition (while keeping the declared availabilities of the teams not in the coalition unchanged) is called an \emph{improvement step}.
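The winner determination and pessimistic pay-offs of the running example can be made concrete with a small Python sketch (not from the paper; the function name \texttt{payoffs} is ours):

```python
from fractions import Fraction

def payoffs(decl):
    """Pessimistic pay-offs for a declared-availability matrix
    (rows = teams, columns = time slots): each team's minimum relative
    power over all co-winning slots, i.e. the slots whose total declared
    availability is maximum."""
    slots = range(len(decl[0]))
    totals = [sum(row[s] for row in decl) for s in slots]
    winners = [s for s in slots if totals[s] == max(totals)]
    return [min(Fraction(row[s], totals[s]) for s in winners) for row in decl]

truth = [[2, 2], [1, 2], [3, 2]]      # true availabilities at 9am / 10am
print(payoffs(truth))                 # both slots co-win: pay-offs 1/3, 1/6, 1/3
t2_hides = [[2, 2], [0, 2], [3, 2]]   # t2 declares zero at 9am
print(payoffs(t2_hides))              # 10am wins alone: 1/3 for every team
coalition = [[2, 0], [0, 2], [3, 0]]  # then coalition {t1, t3} zeroes 10am
print(payoffs(coalition))             # 9am wins alone: pay-offs 2/5, 0, 3/5
```

The three calls reproduce, in order, the truthful outcome, team~$t_2$'s unilateral deviation, and the improvement step of the coalition $\{t_1, t_3\}$ discussed above.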
After some teams perform an improvement step, other teams may also want to update their availabilities to improve. This iterative process leads to the question of whether there is a stable situation, where improvement is impossible---a \emph{Nash equilibrium}. Of course, when searching for equilibria, it is natural to ask how hard it is to decide whether an improvement step is possible. In this paper, we are interested in the computational complexity of the following problems: (1) finding an improvement step (if it exists) for a specific coalition, (2) finding an improvement step (if it exists) for any coalition, and (3) finding a \emph{$t$-strong Nash equilibrium} (if it exists). \newcommand{\tablefootnote}[1]{\footnote{#1}} \iftrue { \begin{table}[t] \caption{Complexity results for the team power game. ``Unary'' (resp.\ ``Binary'') means that the input and the strategy profiles are encoded in unary (resp.\ binary). Variable~$t$ stands for the number of teams in a coalition, while~$a_{\max}$ stands for the maximum true availability. An entry labeled with ``${\mathsf{P}}$'' means polynomial-time solvability. An entry labeled with ``${\mathsf{FPT}}$ for~$k$'' means solvability in $f(k)\cdot |I|^{O(1)}$ time, where $f$ is a function solely depending on $k$ and $|I|$ denotes the size of the given input.
An entry labeled with ``${\mathsf{W[2]}}$-hard for~$k$'' implies that the corresponding problem is not ``${\mathsf{FPT}}$ for $k$'' unless ${\mathsf{W[2]}}={\mathsf{FPT}}$ (this is considered unlikely in parameterized complexity theory). } {\center \begin{tabular}{@{}l@{}ll@{}} \toprule (1) & \multicolumn{2}{l}{Finding an improvement step for a given coalition} \\ & \qquad Unary & in ${\mathsf{P}}$~(Thm.~\ref{thm:unary-p}) \\ & \qquad Binary & in ${\mathsf{FPT}}$ for $t$~(Thm.~\ref{thm:binary-unbounded})$^1$\\[1ex] (2) & \multicolumn{2}{l}{Deciding the existence of an improvement step}\\ & \multicolumn{2}{l}{for any coalition}\\ & \qquad Binary & in ${\mathsf{P}}$ for constant~$t$ (Cor.~\ref{cor:poly})\\ & \qquad $a_{\max}= 1$ & ${\mathsf{NP}}$-complete~(Thm.~\ref{thm:cvc}) \\ & \qquad $a_{\max}= 1$ & ${\mathsf{W[2]}}$-hard for $t$ (Thm.~\ref{thm:cvc}) \\[1ex] (3) & \multicolumn{2}{l}{Finding a $1$-strong Nash equilibrium} \\ & \qquad $a_{\max} \le 3$ & in ${\mathsf{P}}$, always exists (Thm.~\ref{thm:simple_Nash_always_exists})\\ & \qquad $a_{\max} \ge 4$ & {\color{darkgray}open} (Rem.~\ref{rem:1-NE})\\[.8ex] & \multicolumn{2}{l}{Finding a $2$-strong Nash equilibrium} \\ & \qquad $a_{\max} = 1$ & in ${\mathsf{P}}$, always exists (Prop.~\ref{prop:two_Nash_always_exists})\\ & \qquad $a_{\max} \geq 2$ & {\color{darkgray}open}, does not always exist\\[.8ex] & \multicolumn{2}{l}{Deciding the existence of a $t$-strong Nash equilibrium} \\ & \qquad $a_{\max} = 2$ & ${\mathsf{coNP}}$-hard (Thm.~\ref{thm:strongNash-coNP-hard})\\ \bottomrule \end{tabular} \par} $^1$\footnotesize{We conjecture it to be even in ${\mathsf{P}}$.
Strong ${\mathsf{NP}}$-hardness is excluded by Theorem~\ref{thm:unary-p}.} \label{table:results} \end{table} } \fi \myparagraph{Main Contributions} We show that, depending on the size of the coalition (i.e., the amount of collaboration allowed), the computational complexity of finding an improvement step for a given coalition and of deciding whether an improvement step exists for an arbitrary coalition ranges from polynomial-time solvable to ${\mathsf{NP}}$-hard; further, deciding whether an improvement step exists for any coalition of size at most~$t$ is ${\mathsf{W[2]}}$-hard when parameterized by the coalition size~$t$. We show that a $1$-strong Nash equilibrium always exists for some special profiles, and we provide a simple polynomial-time algorithm for finding it in these cases. Finally, we show that deciding whether a $t$-strong Nash equilibrium exists is ${\mathsf{coNP}}$-hard. Our results are summarized in~\cref{table:results}. \myparagraph{Related Work} Recently, online scheduling polls such as Doodle~/~Survey Monkey have caught the attention of several researchers. \citet{RNBNG13} initiated empirical investigations of scheduling polls and identified influences of national culture on people's scheduling behavior, by analyzing actual Doodle polls from 211~countries. \citet{zou2015strategic} also analyzed actual Doodle polls, and devised a model to explain their experimental findings. They observed that people participating in open polls tend to be more ``cooperative'' and additionally approve time slots that are very popular or unpopular; this is different from the behavior of people participating in closed polls. \citet{OEPR15} formally modeled the behavior observed by \citet{zou2015strategic} as a game, where approving additional time slots may result in a pay-off increase.
While the game introduced by \citet{OEPR15} captures the scenario in which each individual player tries to appear cooperative, our team power game models the perspective that each individual team (player) as a whole tries to maximize its relative power in the meeting, which means that approving more time slots is not necessarily a good strategy. Quite different in flavor, \citet{lee2014algorithmic} considered a computational problem from the point of view of the poll initiator, whose goal is to choose the time slots to poll over, in order to optimize a specific cost function. Finally, since scheduling polls might be modeled as approval elections, we mention the vast amount of research done on approval elections in general, e.g.,~\cite{brams1978approval}, and on iterative approval voting in particular, e.g.,~\cite{DORK15,lev2012convergence,MPRJ10}. \section{Preliminaries}\label{section:preliminaries} We begin this section by defining the rules of the game that is of interest here. Then, we formally define the related computational problems we consider in this paper. Throughout, given a number~$n\in \mathds{N}$, by $[n]$ we mean the set~$\{1,2,\ldots, n\}$.
\myparagraph{Rules of the Game} The game is called the \emph{team power game} (\textsc{TPG}\xspace, in short). It consists of $n$ players, the \emph{teams}, $t_1,t_2,\ldots,t_n,$ and $m$ possible \emph{time slots}, $c_1,c_2,\ldots,c_m$. Each team $t_i$ is associated with a \emph{true availability vector} $A_i = (a_i^1, a_i^2, \ldots, a_i^m),$ \noindent where $a_i^j \in \mathds{N}$ is the (true) availability of team~$t_i$ for time slot~$c_j$. Importantly, each team is only aware of its own availability vector. During the game, each team~$t_i$ announces a \emph{declared (availability) vector} $B_i=(b^1_i,b^2_i,\ldots,b^m_i),$ \noindent where $b_i^j\le a_i^j$ is the declared availability of team~$t_i$ for time slot~$c_j$; using standard game-theoretic terms, we define the \emph{strategy} of team~$t_i$ to be its declared availability vector~$B_i$. We use~$A$ and~$B$ to denote the matrices consisting of a row for each team's true and declared availability vectors, respectively. That is, for $i \in [n]$ and $j \in [m]$, $A \coloneqq (a^j_i), B \coloneqq (b^j_i)$. Given a declared availability matrix~$B$, the co-winners of the corresponding scheduling poll, denoted as $\mathsf{winners}(B)$, are the time slots with the maximum sum of declared availabilities: \begin{align*} \mathsf{winners}(B) \coloneqq \argmax\limits_{c_j \in \{c_1,c_2,\ldots,c_m\}} \{\sum_{i \in [n]} b_i^j\}\text{.} \end{align*} Before we define the \emph{pay-off} of each team, we introduce the notion of \emph{relative power}. The relative power $\mathsf{team}\text{-}\mathsf{power}(B,t_i,c_j)$ of team $t_i$ at time slot~$c_j$ equals the number of members from $t_i$ who will attend the meeting at time slot $c_j$, divided by the total number of attendees at this time slot: \begin{align*} \mathsf{team}\text{-}\mathsf{power}(B,t_i,c_j) \coloneqq \frac{b_i^j}{\sum_{k\in [n]} b_k^j}\text{.} \end{align*} In order to define the pay-off of each team, we need to decide how to proceed when several time slots tie as co-winners.
In this paper we consider a \emph{maximin} version of the game, where ties are broken adversarially. That is, the pay-off of team $t_i$ is defined to be the minimum, over all co-winners, of its \emph{relative power}: \begin{align*} {\mathsf{P}}ayoff(B,t_i) \coloneqq \min\limits_{c_j \in \mathsf{winners}(B)} \mathsf{team}\text{-}\mathsf{power}(B,t_i,c_j)\text{.} \end{align*} When we refer to an \emph{input} for \textsc{TPG}\xspace, we mean a true availability matrix~$A\in \mathds{N}^{n\times m}$ where each row~$A_i$ represents the true availability of a team~$t_i$ for the $m$ time slots. When we refer to a \emph{strategy profile} (in short, \emph{strategy}) for input~$A$ we mean a declared availability matrix~$B\in \mathds{N}^{n\times m}$ where each row~$B_i$ represents the declared availability vector of team~$t_i$. \myparagraph{Computational Problems Related to the Game} Given a \emph{coalition}, i.e., a subset of teams, a deviation of the teams in the coalition from their current strategy profile is an \emph{improvement step} if, by this deviation, each team in the coalition strictly improves its pay-off. Given a positive integer~$t$, a \emph{$t$-strong Nash equilibrium} for some input $A$ is a strategy profile $B$ such that no coalition of at most $t$ teams has an improvement step wrt.~$B$. We are interested in the following computational questions: 1. Given an input, a strategy profile, and a coalition of at most $t$ teams, does this coalition admit an improvement step compared to the given strategy profile? 2. Given an input, a strategy profile, and a positive integer~$t$, is there any coalition of at most $t$ teams which has an improvement step compared to the given strategy profile? 3. Given an input and a positive integer~$t$, does a \emph{$t$-strong Nash equilibrium} for this input exist? We are particularly interested in understanding the dependency of the computational complexity of the above problems on the number $t$ of teams in a coalition. 
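To keep the later arguments concrete, the following sketch evaluates these definitions mechanically with exact rational arithmetic (the small availability matrix below is invented for illustration; it is not the matrix from the introduction).

```python
from fractions import Fraction

def winners(B):
    """Indices of the co-winning time slots: columns of B with maximum sum."""
    totals = [sum(row[j] for row in B) for j in range(len(B[0]))]
    best = max(totals)
    return [j for j, tot in enumerate(totals) if tot == best]

def team_power(B, i, j):
    """Relative power of team i at slot j (0 if nobody attends that slot)."""
    total = sum(row[j] for row in B)
    return Fraction(B[i][j], total) if total else Fraction(0)

def payoff(B, i):
    """Maximin pay-off: the worst relative power over all co-winners."""
    return min(team_power(B, i, j) for j in winners(B))

# A toy truthful profile: three teams, two slots.
B = [[2, 2],
     [1, 2],
     [3, 2]]
assert winners(B) == [0, 1]                      # both slots tie with total 6
# pay-offs under truthful declarations: 1/3, 1/6, 1/3
assert [payoff(B, i) for i in range(3)] == [Fraction(1, 3), Fraction(1, 6), Fraction(1, 3)]
```

Note the use of `Fraction` rather than floating point: the strict inequalities in the improvement-step analysis below would be unreliable with rounded arithmetic.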
Specifically, we consider (1) $t$ being a constant (modeling situations where not too many teams are willing to cooperate or where cooperation is costly) and (2) $t$ being unbounded. \myparagraph{Illustrating Example} Consider the input matrix~$A$ given in Section~\ref{section:introduction}, which specifies the true availabilities of three teams~$t_1,t_2,t_3$ over two time slots~$c_1,c_2$. If all teams declared their true availabilities, then both time slots would win with total availability $6$. The pay-offs of the teams~$t_1,t_2,t_3$ would be $1/3,1/6,1/3$, respectively. Team~$t_2$ can improve its pay-off by declaring~$(0,2)$ (i.e., declaring $0$ for $c_1$ and $2$ for $c_2$). As a result, $c_2$ would become the unique winner with total availability~$6$ and team~$t_2$ would receive a better pay-off: $1/3$. \noindent Thus, the profile $B$ for~$A$ where all teams declare their true availabilities (i.e., where $B = A$) is not a $1$-strong Nash equilibrium. Nevertheless, $A$ does admit the following $1$-strong Nash equilibrium: \begin{align*} B'\coloneqq \begin{pmatrix} 2 & 2 \\ 1 & 2 \\ 3 & 0 \end{pmatrix} \end{align*} The declared availability matrix~$B'$, however, is not a $2$-strong Nash equilibrium: if teams~$t_1$ and~$t_2$ formed a coalition and both declared the availability vector~$(0,2)$, then $c_2$ would be the unique winner with total availability $4$ and both~$t_1$ and $t_2$ would have a better pay-off: $1/2$. \section{Improvement Steps}\label{section:results_improvement_steps} We begin with the following lemma, which basically says that, in search for an improvement step, a fixed coalition of teams needs only to focus on a single time slot.
\begin{lemma}\label{one_is_enough} If a coalition has an improvement step wrt.\ a strategy profile~$B$, then it also has an improvement step~$E=(e^j_i)$ wrt.\ $B$, where there is one time slot~$c_k$ such that each team~$t_i$ in the coalition declares zero availability for all other time slots (i.e., $e^{j}_{i} = 0$ holds for each team~$t_i$ in the coalition and each time slot $c_{j}\neq c_k$). \end{lemma} The missing proof for \cref{one_is_enough} can be found in \cref{proof:one_is_enough}. By \cref{one_is_enough}, we know that if a fixed coalition has an improvement step for a strategy profile~$B$, then it admits an improvement step that involves only one time slot; assume that it is time slot~$c_k$. In order to compute an improvement step for the coalition, we first declare zero availabilities for the teams in the coalition for all other time slots. Then, we have to declare specific availabilities for the teams in the coalition for time slot~$c_k$. This is where the collaboration between the teams comes into play: even though each team, in order to improve its pay-off, might wish to declare as high an availability as possible for time slot~$c_k$ (i.e., its true availability), the teams shall collaboratively decide on the declared availabilities, since a too-high declared availability for one team might make it impossible for another team (even when declaring the maximum possible amount, i.e., its true availability) to improve its pay-off. It turns out that this problem is basically equivalent to the following problem (which, in our eyes, is also interesting in its own right). \myparagraph{Relation to Horn Constraint Systems} The problem, which we call $t$-\textsc{Threshold Covering}\xspace, is the following: given a natural number vector~$(a_1,a_2,\ldots,a_{t})\in \mathds{N}^t$, a rational number vector~$(p_1,p_2,\ldots,p_t)\in \mathds{Q}^{t}$ with $\sum_{i\in[t]}p_i\le 1$, and a natural number~$p\in \mathds{N}$, it asks for a natural number vector~$(x_1,x_2,\ldots,x_t)$ where for each $i\in [t]$ the following holds: \begin{inparaenum}[(1)] \item $1\le x_i \le a_{i}$ and \item $x_i/(p+\sum_{k\in[t]}x_k) > p_i$. \end{inparaenum} Intuitively, the vector $(a_1,\ldots, a_t)$ corresponds to the true availabilities of the teams in the coalition in time slot $c_k$, while the solution vector $(x_1, \ldots, x_t)$ corresponds to the declared availabilities of the teams in the coalition in time slot $c_k$; accordingly, the first constraint makes sure that each declared availability is upper-bounded by its true availability. Further, the vector $(p_1, \ldots, p_t)$ corresponds to the current pay-offs of the teams in the coalition, while $p$ corresponds to the sum of the declared availabilities of the teams not in the coalition at time slot $c_k$; accordingly, the second constraint makes sure that, for each team in the coalition, the new pay-off is strictly higher than its current pay-off. More formally, we argue that the coalition~$\{t_1,t_2,\ldots,t_t\}$ has an improvement step compared to strategy profile~$B$, involving only time slot~$c_k$, if and only if the instance $(A^*, P, p)$ for $t$-\textsc{Threshold Covering}\xspace has a solution, where $A^* \coloneqq (a^{k}_1,\ldots,a^{k}_t)$, $P \coloneqq ({\mathsf{P}}ayoff(B,t_1),\ldots,{\mathsf{P}}ayoff(B,t_t))$, and $p \coloneqq \sum_{i \in [n]\setminus [t]}b^{k}_i$. \begin{remark} Since the values~$p_i$ ($i \in [t]$) are rational numbers, we can rearrange the second constraint in the description of $t$-\textsc{Threshold Covering}\xspace to obtain an integer linear feasibility problem.
This means that $t$-\textsc{Threshold Covering}\xspace is a special variant of the so-called \textsc{Horn Constraint System}\xspace problem which, given a matrix~$U=(u_{i,j})\in \mathds{R}^{n'\times m'}$ with each row having at most one positive element, a vector~$b \in \mathds{R}^{n'}$, and a positive integer~$d$, decides the existence of an integer vector~$x\in \{0,1,\ldots,d\}^{m'}$ such that $U\cdot x \ge b$; \textsc{Horn Constraint System}\xspace is weakly ${\mathsf{NP}}$-hard and can be solved in pseudo-polynomial time~\cite{Lagarias1985}. \end{remark} Taking a closer look at $t$-\textsc{Threshold Covering}\xspace, we observe the following: if we knew the sum of the variables $(x_1, \ldots, x_t)$, then we would be able to solve our problem directly by checking every constraint and taking the smallest feasible value (i.e., given $\sum_{i \in [t]} x_i$, we would set each $x_i$ to be the minimum over all values satisfying $x_i / (p + \sum_{k\in[t]}x_k) > p_i$). This yields a simple polynomial-time algorithm for finding an improvement step in the likely case where all availabilities are polynomially upper-bounded in the input size; technically, this means that the input profile~$A$ is encoded in unary. \begin{theorem}\label{thm:unary-p} Consider an input~$A$ and a strategy profile~$B$. Let $s$ be the sum of all entries in~$A$. Finding an improvement step (if it exists) for a given coalition is solvable in $O(s^2)$~time. \end{theorem} The proof for \cref{thm:unary-p} can be found in \cref{proof:unary-p}. Indeed, $t$-\textsc{Threshold Covering}\xspace can be reduced to finding the sum $\sum_{i \in [t]} x_i$. If the input is encoded in binary, however, then this sum might be exponentially large in the number of bits that encode our input, thus we cannot simply enumerate all possible values.
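The enumeration idea behind \cref{thm:unary-p} can be sketched as follows (a hypothetical helper, not the paper's implementation; pay-offs are passed as exact fractions): guess the sum $s = \sum_i x_i$, take for each $i$ the smallest integer exceeding $p_i\cdot(p+s)$, and pad within the caps $a_i$ until the guessed sum is reached.

```python
from fractions import Fraction
from math import floor

def threshold_covering(a, pays, p):
    """Solve t-Threshold Covering by enumerating s = sum(x): search for
    integers x_i with 1 <= x_i <= a[i] and x_i / (p + s) > pays[i]."""
    t = len(a)
    for s in range(t, sum(a) + 1):          # each x_i >= 1, so s >= t
        # smallest integer x_i with x_i > pays[i] * (p + s)
        mins = [floor(q * (p + s)) + 1 for q in pays]
        if all(mins[i] <= a[i] for i in range(t)) and sum(mins) <= s:
            x, slack = list(mins), s - sum(mins)
            for i in range(t):              # pad up to reach the sum s exactly
                add = min(slack, a[i] - x[i])
                x[i] += add
                slack -= add
            return x                        # slack is 0 since s <= sum(a)
    return None                             # no candidate sum is feasible
```

For instance, for the coalition $\{t_1,t_2\}$ deviating to slot $c_2$ in the illustrating example (caps $(2,2)$, current pay-offs $(1/3,1/6)$, outside availability $p=0$), the search already succeeds at $s=2$, certifying via the equivalence above that an improvement step involving $c_2$ exists. Raising a coordinate for a fixed $s$ only helps, which is why padding with the minima preserves feasibility.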
If the coalition size~$t$ or a certain parameter~$\ell$ that measures the number of ``large'' true availabilities is a constant, then we still have polynomial-time algorithms for which the degree of the polynomial in the running time does not depend on the specific parameter value. Specifically, by Lenstra's famous theorem~(\cite{Len83}, later improved by~\citet{Kan87} and by~\citet{FraTar1987}), we have the following result. \begin{theorem}\label{thm:binary-unbounded} Consider an input~$A$ and a strategy profile~$B$. Let $L$ be the length of the binary encoding of $A$. For each of the following running times~$T$, there is a $T$-time algorithm that finds an improvement step, compared to~$B$, for a given coalition of $t$ teams: \begin{enumerate} \item $T = O(t^{2.5t+o(t)} \cdot L^2)$ and \item for each constant value~$c$, $T = f(\ell_c) \cdot t^2 \cdot L^{c^2+2}$, \end{enumerate} where $f$ is a computable function and $\ell_c \coloneqq \max_{j}|\{i \in [t]\colon a_{i}^{j}> L^{c}\}|$ is the maximum over the numbers of teams~$t_i$ in the coalition that have true availabilities~$a_i^{j}$ with $a_i^{j}>L^c$ for the same time slot~$c_j$. \end{theorem} The proof for \cref{thm:binary-unbounded} can be found in \cref{proof:binary-unbounded}. Using \cref{thm:binary-unbounded}, and checking all $\sum_{i=1}^{t}\binom{n}{i}$ possible coalitions of size at most~$t$, we obtain the following. \begin{corollary}\label{cor:poly} Given an input and a strategy profile, we can find, in polynomial time, a coalition of a constant number of teams and, for this coalition, find an improvement step compared to the given profile. \end{corollary} In general, however, deciding whether an improvement step exists is computationally intractable, as the next result shows.
We briefly note that, under standard complexity assumptions, a problem being ${\mathsf{W[2]}}$-hard for parameter~$k$ presumably excludes any algorithm with running time~$f(k)\cdot |I|^{O(1)}$, where $f$ is a computable function depending only on $k$ and $|I|$ is the size of the input. \begin{theorem}\label{thm:cvc} Given an input and a strategy profile, deciding whether there is a coalition of size~$t$ that has an improvement step is ${\mathsf{W[2]}}$-hard wrt.~$t$ even if all teams are of size one. It remains ${\mathsf{NP}}$-complete if there is no restriction on the coalition size. \end{theorem} \begin{proof}(Sketch). To show ${\mathsf{W[2]}}$-hardness, we provide a parameterized reduction from the \textsc{Set Cover}\xspace problem, which is ${\mathsf{W[2]}}$-complete wrt.\ the set cover size~$\textsc{SC}\xspacek$~\cite{DF13}: Given sets $\textsc{SC}\xspaceS=\{S_1,\dots,S_{\textsc{SC}\xspacem}\}$ over a universe~$\textsc{SC}\xspaceU=\{u_1,\dots,u_{\textsc{SC}\xspacen}\}$ of elements and a positive integer~$\textsc{SC}\xspacek$, \textsc{Set Cover} asks whether there is a size-$\textsc{SC}\xspacek$ \emph{set cover}~$\textsc{SC}\xspaceS' \subseteq \textsc{SC}\xspaceS$, i.e., $|\textsc{SC}\xspaceS'|= \textsc{SC}\xspacek$ and $\bigcup_{S_i\in \textsc{SC}\xspaceS'}S_i=\textsc{SC}\xspaceU$. The idea of such a parameterized reduction is, given a \textsc{Set Cover}\xspace instance~$(\textsc{SC}\xspaceS,\textsc{SC}\xspaceU,\textsc{SC}\xspacek)$, to produce, in $f(\textsc{SC}\xspacek)\cdot (|\textsc{SC}\xspaceS|+|\textsc{SC}\xspaceU|)^{O(1)}$ time, an equivalent instance~$(A,B,t)$ such that $t\le g(\textsc{SC}\xspacek)$, where $f$ and $g$ are two computable functions. Let $(\textsc{SC}\xspaceS,\textsc{SC}\xspaceU,\textsc{SC}\xspacek)$ denote a \textsc{Set Cover}\xspace instance. For technical reasons, we assume that each set cover contains at least three sets. 
\noindent \textbf{Time slots.}\quad For each element~$u_j\in\textsc{SC}\xspaceU$, we create one \emph{element slot}~$e_j$. Let $E:=\{e_1,\dots,e_{\textsc{SC}\xspacen}\}$ denote the set containing all element slots. We create two special time slots: $\alpha$ (the original winner) and~$\beta$ (the potential new winner). \noindent \textbf{Teams and true availabilities~$A=(a^j_i)$.}\quad For each set~$S_i \in \textsc{SC}\xspaceS$, we create a \emph{set team}~$t_i$ that has true availability~$1$ at time slot~$\alpha$, at time slot~$\beta$, and at each element slot~$e_j$ with~$u_j \in S_i$. We introduce several dummy teams, as follows. Intuitively, the role of these dummy teams is to allow setting specific sums of availabilities for the time slots; the crucial observation in this respect is that the dummy teams do not have any incentive to change their true availabilities, and therefore we can assume that they do not participate in any coalition. For each element~$u_j$, let $\#(u_j)$ denote the number of sets from~$\textsc{SC}\xspaceS$ that contain~$u_j$. For each element slot $e_j$, we create $(2\textsc{SC}\xspacem - 1 - \#(u_j))$ dummy teams such that each of these dummy teams has availability~$1$ at element slot~$e_j$ and availability~$0$ for all other time slots. Similarly, for time slot~$\alpha$, we create $\textsc{SC}\xspacem$~additional dummy teams, each of which has availability~$1$ for time slot~$\alpha$ and availability~$0$ for all other time slots. For time slot~$\beta$, we create $2\textsc{SC}\xspacem-1-\textsc{SC}\xspacek$ further dummy teams, each of which has availability~$1$ for time slot~$\beta$ and availability~$0$ for all other time slots. \noindent \textbf{Declared availabilities~$B=(b^j_i)$.}\quad Each dummy team declares availability for the time slot where it is available. Each set team declares availability for all time slots where it is available, except for time slot~$\beta$, where all set teams declare availability~$0$.
We set the coalition size $t$ to be~$\textsc{SC}\xspacek$. This completes the reduction, which can be computed in polynomial time. Indeed, it is also a parameterized reduction. A formal correctness proof as well as the extension to the case of unrestricted coalition sizes to show the ${\mathsf{NP}}$-hardness result are deferred to \cref{proof:thm-cvc}. \end{proof} Taking a closer look at the availability matrix constructed in the proof of \cref{thm:cvc}, we observe the following. \begin{corollary} Deciding the existence of an improvement step for any coalition is ${\mathsf{NP}}$-hard, even for very sparse availability matrices, i.e., even if each team has only one team member and is truly available at no more than four time slots. \end{corollary} \section{Nash equilibria}\label{section:results_nash_equilibria} We move on to consider the existence of Nash equilibria. Somewhat surprisingly, it seems that a $1$-strong Nash equilibrium always exists. Unfortunately, we can only prove this when the maximum availability $\ensuremath{a_{\max}} \coloneqq \max_{i \in [n], j \in [m]} a_i^j$ is at most three. Extending our proof strategy to $\ensuremath{a_{\max}}\ge4$ seems to require a huge case analysis. \begin{theorem}\label{thm:simple_Nash_always_exists} If the maximum availability $\ensuremath{a_{\max}}$ is at most three, then \textsc{TPG}\xspace always admits a $1$-strong Nash equilibrium. \end{theorem} \begin{proof}(Sketch). Let $A=(a^j_i)$ be the input profile. We begin by characterizing two simple cases for which $1$-strong Nash equilibria always exist. \noindent\textbf{Safe single-team slot.} Suppose that a time slot $c_{j}$ exists where only one team, $t_{i}$, is available with some availability~$a^*$ (i.e., $a^{j}_{i}=a^*$), all other teams are not available in this time slot (i.e., $a^{j}_{i'}=0$ for all $i' \neq i$), and no other team, $t_{i'}, i' \neq i$, is available with availability greater than~$a^*$ at any time slot.
Then, we obtain a $1$-strong Nash equilibrium~$B=(b^l_i)$ by setting $b_i^j \coloneqq a_i^j$, and, for each $i' \in [n]$ and each $j' \neq j$, setting $b_{i'}^{j'} \coloneqq 0$; to see why $B$ is a Nash equilibrium, notice that the only team (namely $t_i$) available at time slot~$c_j$ already receives the best possible pay-off (namely $1$) and no other team can prevent~$c_j$ from being a co-winner, which would be necessary to improve their pay-off (which is $0$). We call such a time slot~$c_j$ a \emph{safe single-team~slot}. \noindent\textbf{Safe multiple-team slot.} Suppose that a time slot~$c_j$ exists where multiple teams have non-zero true availabilities and no single team is powerful enough to prevent~$c_j$ from co-winning by declaring zero availability. That is, for each team~$t_i$ and each time slot~$c_{j'} \neq c_j$, it holds that $a^{j'}_i \leq \sum_{i'\neq i} a^j_{i'}$. Again, we obtain a $1$-strong Nash equilibrium~$B=(b^j_i)$ by setting $b_l^j \coloneqq a_l^j$ for each team~$t_l$ and setting $b^{j'}_l \coloneqq 0$ for each other time slot~$c_{j'} \neq c_j$. We call such a time slot~$c_j$ a \emph{safe multiple-team slot}. For example, the following input profile contains two safe multiple-team slots, namely $c_1$ and $c_4$: { \centering $ A\coloneqq \ \ \begin{blockarray}{ccccc} c_1 & c_2 & c_3 & c_4 \\ \begin{block}{(cccc)c} 1 & 2 & 0 & 0 & \ \ t_1 \\ 2 & 0 & 2 & 0 & \ \ t_2 \\ 1 & 0 & 0 & 1 & \ \ t_3 \\ 0 & 1 & 1 & 3 & \ \ t_4 \\ \end{block} \end{blockarray} $\par} We are ready to consider instances with $\ensuremath{a_{\max}} \le 3$. \noindent\textbf{Instances with $\ensuremath{a_{\max}} = 2$.} Consider the maximum availability sum~$x$ over all time slots, i.e., the maximum column sum of the matrix~$A$. Clearly, $x \ge \ensuremath{a_{\max}}$. We proceed by considering the different possible values of $x$.
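Before the case analysis, the two safe-slot conditions just defined can be checked mechanically; the following sketch (helper names are ours, not from the paper) applies both tests to a matrix of true availabilities, and on the $4\times 4$ profile above it reports exactly $c_1$ and $c_4$ as safe multiple-team slots.

```python
def is_safe_single(A, j):
    """Exactly one team is available at slot j, and no other team has a
    strictly larger availability at any time slot."""
    col = [row[j] for row in A]
    avail = [i for i, a in enumerate(col) if a > 0]
    if len(avail) != 1:
        return False
    i, a_star = avail[0], col[avail[0]]
    return all(max(A[k]) <= a_star for k in range(len(A)) if k != i)

def is_safe_multi(A, j):
    """At least two teams are available at slot j, and no single team can
    prevent j from co-winning: a_i^{j'} <= sum_{i' != i} a_{i'}^j."""
    col = [row[j] for row in A]
    if sum(1 for a in col if a > 0) < 2:
        return False
    total = sum(col)
    return all(A[i][jp] <= total - col[i]
               for i in range(len(A))
               for jp in range(len(A[0])) if jp != j)

# The example profile from the text (4 teams, 4 slots).
A = [[1, 2, 0, 0],
     [2, 0, 2, 0],
     [1, 0, 0, 1],
     [0, 1, 1, 3]]
safe = [j for j in range(4) if is_safe_multi(A, j)]
assert safe == [0, 3]      # slots c_1 and c_4, as claimed
```

In either safe case the equilibrium itself is immediate: everyone declares truthfully at the safe slot and zero elsewhere.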
\noindent\textit{Cases with $x = 2$:} If $x$ is two, then there is a time slot where only one team is available, with availability $\ensuremath{a_{\max}}=2$. Thus, there is a safe single-team slot. \noindent\textit{Cases with $x = 3$:} If $x$ is three, then, since we have $\ensuremath{a_{\max}}=2$, it follows that either (1) there is a safe single-team slot where only one team is available with availability $\ensuremath{a_{\max}}=2$ or (2) there is a time slot~$c_j$ where a single team~$t_i$ has availability $\ensuremath{a_{\max}}=2$ and another team~$t_{i'}$ has availability $1$. In the first case, there is a safe single-team slot, so let us consider the second case. To this end, let $c_j$ be the time slot such that a single team~$t_i$ has availability $\ensuremath{a_{\max}}=2$ and another team~$t_{i'}$ has availability $1$. Next, we show how to construct a $1$-strong Nash equilibrium~$B=(b^j_i)$. First, for each team~$t_l$, set $b_{l}^j\coloneqq a^j_l$ and $b_{l}^{j'}\coloneqq 0$ for $j'\neq j$. This makes time slot $c_j$ the unique winner. Team~$t_i$ receives pay-off~$2/3$ and team $t_{i'}$ receives pay-off~$1/3$. Second, for each time slot~$c_{j'} \neq c_j$, if $a^{j'}_{i'} > 0$, then set~$b^{j'}_{i'}\coloneqq 1$; otherwise, find any team~$t_k \neq t_i$ with non-zero availability $a^{j'}_{k} = 1$ and set~$b^{j'}_{k}\coloneqq 1$. In this way, every time slot except~$c_j$ has total declared availability one (if there is at least one team with non-zero availability for this slot). Thus, $c_j$ remains the unique winner, and the declared total availabilities of the other time slots make it impossible for any team to improve: First, team~$t_i$ cannot improve because it would receive the same pay-off~$2/3$ for every time slot which it could make a new single winner (recall that no safe single-team slot exists). Second, team~$t_{i'}$ also cannot improve because it cannot create a new single winner at all.
Last, neither of the remaining teams can improve because they cannot prevent $c_j$ from co-winning. Hence, we have a $1$-strong Nash equilibrium. \noindent\textit{Cases with $x \geq 4$:} Every time slot~$c_j$ with availability sum~$x$ is a safe multiple-team slot since $\forall j'\colon a^{j'}_i \le \ensuremath{a_{\max}}=2$ and $\forall i\colon \sum_{i'\neq i} a^j_{i'}\ge x-\ensuremath{a_{\max}} \ge 2$. We defer the proof details for instances with $\ensuremath{a_{\max}} = 3$ to Appendix~\ref{proof:simple_Nash_always_exists}. \end{proof} \begin{remark}\label{rem:1-NE} We do not know of any instances without $1$-strong Nash equilibria. However, we could not generalize our proof even to instances with $\ensuremath{a_{\max}} = 4$. Nevertheless, some general observations from our proof hold for every $\ensuremath{a_{\max}}$. In particular, if there is a column whose only non-zero entry equals~$\ensuremath{a_{\max}}$ (a special case of a safe single-team slot) or if the maximum column sum is at least~$2\ensuremath{a_{\max}}$ (a special case of a safe multiple-team slot), then a $1$-strong Nash equilibrium exists. \end{remark} Since our proof is constructive, we obtain the following. \begin{corollary} If the maximum availability $\ensuremath{a_{\max}}$ is at most three, then a $1$-strong Nash equilibrium for \textsc{TPG}\xspace can be found in polynomial time. \end{corollary} The situation for $t \ge 2$ is quite different, already for coalitions of only two teams. By a proof similar to that for the case of $t=1$ and $\ensuremath{a_{\max}}=2$, we can show that a $2$-strong Nash equilibrium always exists for $t = 2$ and $\ensuremath{a_{\max}}=1$: \begin{proposition} \label{prop:two_Nash_always_exists} If the maximum availability $\ensuremath{a_{\max}}$ is one, then a $2$-strong Nash equilibrium for \textsc{TPG}\xspace always exists and can be found in polynomial time.
\end{proposition} Complementing~\cref{thm:simple_Nash_always_exists}, we demonstrate that $2$-strong Nash equilibria do not always exist, even when $\ensuremath{a_{\max}} = 2$; to this end, consider the following example. \begin{example}\label{example:Nash_not_always_exists} We provide in the following an example with maximum true availability two, and show that it does not admit a $2$-strong Nash equilibrium. Consider the following input for \textsc{TPG}\xspace. {\centering $ A\coloneqq \begin{blockarray}{ccc} c_1 & c_2 & \\ [3pt] \begin{block}{(cc)c} 2 & 0 & t_1 \\ 2 & 2 & t_2 \\ 0 & 2 & t_3 \\ \end{block} \end{blockarray} $\par} Informally, the main crux of this example is that $t_1$ (or, symmetrically, $t_3$) can cooperate with $t_2$; in such a cooperation, $t_2$ can choose whether to be `in favor' of $t_1$ or of $t_3$, by declaring either $b_2^1 = 2$ and $b_2^2 = 0$ (favoring $t_1$), or $b_2^1 = 0$ and $b_2^2 = 2$ (favoring $t_3$). Moreover, $t_1$ or $t_3$ can `reward' $t_2$ by not declaring its true availability, which is $2$, but only $1$. In such a cooperation, both $t_2$ and $t_1$ (or $t_2$ and $t_3$) strictly improve their pay-offs. More formally, let us consider all possibilities for $t_2$. By symmetry, we can assume, without loss of generality, that $t_2$ declares either $(0,1)$, $(0,2)$, $(1,1)$, $(1,2)$, or $(2,2)$ (declaring~$(0,0)$ is not possible in a Nash equilibrium---see the analogous $(1,1)$ case). To this end, we denote the declared availability matrix~$B$ simply by the symbolic vector $[b^1_1 b^2_1,b^1_2 b^2_2,b^1_3 b^2_3]$ (e.g.\ the declared availability matrix~$B=A$ is denoted by $[20,22,02]$). We write $B \to^X B'$ if the coalition~$X$ receives a better pay-off by changing the declared availabilities from~$B$ to $B'$.
Now, we consider each of the possible strategies that team~$t_2$ may take: \begin{description} \item[$t_2$ declares $(0, 1)$:]\ \\ $[x0,01,0y] \to^{\{t_3\}} [x0,01,02]$ $(0\le x \le 2 ,0\le y \le 1)$; $[x0,01,02] \to^{\{t_2\}} [x0,02,02]$ $(0\le x \le 2)$. \item[$t_2$ declares $(0,2)$:] \ \\ $[x0,02,0y] \to^{\{t_3\}} [x0,02,02]$ $(0\le x \le 2 ,0\le y \le 1)$;\\ $[x0,02,02] \to^{\{t_1,t_2\}} [10,20,02]$ $(0\le x \le 2)$. \item[$t_2$ declares $(1, 1)$:]\ \\ $[x0,11,0y] \to^{\{t_2\}} [x0,20,0y]$ $(x>y)$;\\ $[x0,11,0y] \to^{\{t_2\}} [x0,20,0y]$ $(x=y>0)$;\\ $[00,11,00] \to^{\{t_1\}} [20,11,00]$;\\ $[x0,11,0y] \to^{\{t_2\}} [x0,02,0y]$ $(x<y)$. \item[$t_2$ declares $(1,2)$:] \ \\ $[x0,12,0y] \to^{\{t_3\}} [x0,12,02]$ $(0\le x \le 2,0\le y \le 1)$;\\ $[x0,12,02] \to^{\{t_2\}} [x0,20,02]$ $(0\le x \le 1)$;\\ $[20,12,02] \to^{\{t_1,t_2\}} [10,20,02]$. \item[$t_2$ declares $(2,2)$:]\ \\ $[x0,22,0y] \to^{\{t_1\}} [20,22,0y]$ $(0\le x \le 1,0\le y \le 1)$;\\ $[20,22,02] \to^{\{t_1,t_2\}} [10,20,02]$;\\ $[20,22,01] \to^{\{t_2\}} [20,02,01]$;\\ $[10,22,02] \to^{\{t_2\}} [10,20,02]$. \end{description} The above case analysis covers (by symmetry) all possible cases and thus shows that there is no $2$-strong Nash equilibrium for the instance~$A$. \end{example} Naturally, there are profiles for which $t$-strong Nash equilibria do exist: consider, for example, $A$ being the all-one matrix. Next, we show that if the coalition size is unbounded, then deciding the existence of a Nash equilibrium becomes ${\mathsf{coNP}}$-hard. \begin{theorem}\label{thm:strongNash-coNP-hard} Deciding whether a Nash equilibrium exists for a given input is ${\mathsf{coNP}}$-hard. \end{theorem} \begin{proof}(Sketch).
We reduce from the complement of the following ${\mathsf{NP}}$-complete problem~\cite{G84}: \textsc{Restricted X3C}, which, given sets $\mathcal{F}=\{S_1, \ldots, S_{3n}\}$, each containing exactly $3$ elements from $E = \{e_1, \ldots, e_{3n}\}$, such that (1) $n \geq 2$ and (2) each element $e_i$ appears in exactly $3$ sets, asks whether there is a size-$n$ \emph{exact cover}~$\mathcal{F'}\subseteq \mathcal{F}$, i.e., $|\mathcal{F'}|= n$ and $\bigcup_{S_i\in \mathcal{F'}}S_i=E$. Given an instance~$(\mathcal{F}, E)$ of the complement of \textsc{Restricted X3C}, we construct a game. For each element $e_i$ ($i \in [3n]$) we construct a time slot $e_i$. We construct one additional time slot, denoted by $\alpha$. For each set~$S_j$ ($j \in [3n]$) we construct a team $s_j$. For a team $s_j$, we set its availability at time slot $e_i$, namely $a_j^i$, to be $n$ if $e_i \in S_j$, and otherwise $0$. We set the availability of all teams to be $1$ at time slot $\alpha$. We consider $2n$-strong Nash equilibria; thus, we consider coalitions containing up to $2n$ teams. This finishes the description of the polynomial-time reduction. The correctness proof can be found in \cref{proof:strongNash-coNP-hard}. \end{proof} \section{Conclusion}\label{section:outlook} We introduced a game considering the power of teams (referred to as \textsc{TPG}\xspace) that is naturally motivated by online scheduling polls where teams declare and update their availabilities in a dynamic process to increase their relative power. Our work leads to several directions for future research. \noindent \textbf{Tie-breaking rules:} In this paper the teams are pessimistic, i.e., in case of several co-winners, the pay-off is defined as the \emph{minimum}, over the co-winners, of the relative power. This corresponds to situations where ties are broken adversarially.
We chose this tie-breaking rule because it is standard and natural, and because it models teams that are pessimistic in nature, for which having too little power in the winning time slot might have very bad consequences. Naturally, one might study other tie-breaking rules, such as breaking ties uniformly at random or breaking ties lexicographically; we mention that most of our results seem to transfer to lexicographic tie-breaking.

\noindent\textbf{More refined availability constraints:} In the online scheduling polls considered in this paper, the availability constraints expressed by the participants are dichotomous: each participant can only declare either ``available'' or ``not available'' for each time slot. Sometimes, the availability constraints of people participating in scheduling polls are more fine-grained; for example, a participant might not be sure whether she is available for some of the suggested time slots, and can only provide a ``maybe available'' answer for these time slots. Correspondingly, it is interesting to study \textsc{TPG}\xspace when we allow participants to express more refined availability constraints, perhaps even allowing them to fully rank the time slots according to their constraints.

\noindent\textbf{Nash modification problem:} Taking the point of view of the poll convener (who desires to reach a Nash equilibrium), we suggest to study the following problem: given an input for \textsc{TPG}\xspace, what is the minimum number of time slots that must be removed so that the modified input admits a Nash equilibrium?

\newcommand{\bibremark}[1]{}

\begin{thebibliography}{}

\bibitem[\protect\citeauthoryear{Brams and Fishburn}{1978}]{brams1978approval}
Brams, S.~J., and Fishburn, P.~C.
\newblock 1978.
\newblock Approval voting.
\newblock {\em American Political Science Review} 72(3):831--847.

\bibitem[\protect\citeauthoryear{Dery \bgroup et al\mbox.\egroup }{2015}]{DORK15}
Dery, L.~N.; Obraztsova, S.; Rabinovich, Z.; and Kalech, M.
\newblock 2015.
\newblock Lie on the fly: Iterative voting center with manipulative voters.
\newblock In {\em Proceedings of IJCAI-2015}, 2033--2039.

\bibitem[\protect\citeauthoryear{Downey and Fellows}{2013}]{DF13}
Downey, R.~G., and Fellows, M.~R.
\newblock 2013.
\newblock {\em Fundamentals of Parameterized Complexity}.
\newblock Springer.

\bibitem[\protect\citeauthoryear{Frank and Tardos}{1987}]{FraTar1987}
Frank, A., and Tardos, {\'E}.
\newblock 1987.
\newblock An application of simultaneous {D}iophantine approximation in combinatorial optimization.
\newblock {\em Combinatorica} 7(1):49--65.

\bibitem[\protect\citeauthoryear{Garey, Johnson, and Stockmeyer}{1976}]{GJS76}
Garey, M.~R.; Johnson, D.~S.; and Stockmeyer, L.~J.
\newblock 1976.
\newblock Some simplified {NP}-complete graph problems.
\newblock {\em Theoretical Computer Science} 1(3):237--267.

\bibitem[\protect\citeauthoryear{Gonzalez}{1984}]{G84}
Gonzalez, T.~F.
\newblock 1984.
\newblock Clustering to minimize the maximum intercluster distance.
\newblock {\em Theoretical Computer Science} 38:293--306.

\bibitem[\protect\citeauthoryear{Kannan}{1987}]{Kan87}
Kannan, R.
\newblock 1987.
\newblock Minkowski's convex body theorem and integer programming.
\newblock {\em Mathematics of Operations Research} 12:415--440.

\bibitem[\protect\citeauthoryear{Lagarias}{1985}]{Lagarias1985}
Lagarias, J.~C.
\newblock 1985.
\newblock The computational complexity of simultaneous {D}iophantine approximation problems.
\newblock {\em SIAM Journal on Computing} 14(1):196--209.

\bibitem[\protect\citeauthoryear{Lee}{2014}]{lee2014algorithmic}
Lee, H.
\newblock 2014.
\newblock Algorithmic and game-theoretic approaches to group scheduling.
\newblock In {\em Proceedings of AAMAS-2014}, 1709--1710.

\bibitem[\protect\citeauthoryear{Lenstra}{1983}]{Len83}
Lenstra, H.~W.
\newblock 1983.
\newblock Integer programming with a fixed number of variables.
\newblock {\em Mathematics of Operations Research} 8:538--548.

\bibitem[\protect\citeauthoryear{Lev and Rosenschein}{2012}]{lev2012convergence}
Lev, O., and Rosenschein, J.~S.
\newblock 2012.
\newblock Convergence of iterative voting.
\newblock In {\em Proceedings of AAMAS-2012}, 611--618.

\bibitem[\protect\citeauthoryear{Meir \bgroup et al\mbox.\egroup }{2010}]{MPRJ10}
Meir, R.; Polukarov, M.; Rosenschein, J.~S.; and Jennings, N.~R.
\newblock 2010.
\newblock Convergence to equilibria in plurality voting.
\newblock In {\em Proceedings of AAAI-2010}, 823--828.

\bibitem[\protect\citeauthoryear{Obraztsova \bgroup et al\mbox.\egroup }{2015}]{OEPR15}
Obraztsova, S.; Elkind, E.; Polukarov, M.; and Rabinovich, Z.
\newblock 2015.
\newblock Doodle poll games.
\newblock In {\em Proceedings of the 1st IJCAI Workshop on AGT}.

\bibitem[\protect\citeauthoryear{Reinecke \bgroup et al\mbox.\egroup }{2013}]{RNBNG13}
Reinecke, K.; Nguyen, M.~K.; Bernstein, A.; N{\"{a}}f, M.; and Gajos, K.~Z.
\newblock 2013.
\newblock Doodle around the world: {O}nline scheduling behavior reflects cultural differences in time perception and group decision-making.
\newblock In {\em Proceedings of CSCW-2013}, 45--54.

\bibitem[\protect\citeauthoryear{Zou, Meir, and Parkes}{2015}]{zou2015strategic}
Zou, J.~Y.; Meir, R.; and Parkes, D.~C.
\newblock 2015.
\newblock Strategic voting behavior in {D}oodle polls.
\newblock In {\em Proceedings of CSCW-2015}, 464--472.

\end{thebibliography}

\ifcompileappendix
\appendix
\section{Missing Proofs}\label{apx}
Below we provide the proofs missing from the main text.

\subsection{Proof of Lemma~\ref{one_is_enough}}
\label{proof:one_is_enough}
\begin{proof}
Suppose that, for a certain coalition, $D=(d^j_i)$ is an improvement step for $B=(b^{j}_i)$. Let $c_{j_1}$ and $c_{j_2}$ be two time slots such that at least one team~$t_i$ in the coalition has $d^{j_1}_{i} \neq 0$ and at least one team~$t_{i'}$ (possibly different) in the coalition has $d^{j_2}_{i'}\neq 0$. We distinguish between two cases. If $c_{j_1}\notin \mathsf{winners}(D)$ (or~$c_{j_2} \notin \mathsf{winners}(D)$), then the pay-offs of all teams remain the same even if each team~$t_i$ in the coalition with non-zero declared availability~$d^{j_1}_{i}$ (or~$d^{j_2}_{i}$) for~$c_{j_1}$ (or~$c_{j_2}$) declares zero instead. Otherwise, both~$c_{j_1}$ and $c_{j_2}$ are co-winners in~$D$. Fix any team $t_i$ in the coalition. Since strategy~$D$ is an improvement step compared to $B$, it follows that the relative powers of $t_i$ in $c_{j_1}$ and in $c_{j_2}$ for strategy~$D$ are both strictly larger than the pay-off of $t_i$ for strategy~$B$. Thus, we can change the improvement step~$D$ so that all teams in the coalition declare zero availability, say, in $c_{j_2}$, and still obtain an improvement step. By repeating the above reasoning, we obtain an improvement step in which all teams in the coalition have non-zero availability only in the same single time slot.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:unary-p}}
\label{proof:unary-p}
\begin{proof}
Following~\cref{one_is_enough}, we begin by guessing the unique time slot $c_k$ for which the teams in the coalition will declare non-zero availability. Further, we guess the value of~$w\coloneqq\sum_{i=1}^{t} x_i$.
Crucially, this value is upper-bounded by $\sum_{i=1}^{t}a_{i}^{k} \leq s n$; recall that $s$ is the sum of all true availabilities and $n$ is the number of teams. Then, for each team~$t_i$ in the coalition, we compute the minimum value needed for $t_i$ to get a strictly better pay-off than $\mathsf{Payoff}(B,t_i)$, i.e., $\min\{x\le a_{i}^{k} \mid x / (p + w) > \mathsf{Payoff}(B,t_i)\}$, where $p$ is the sum of the declared availabilities of the teams not in the coalition; we pick these values as the $x_i$'s. If as a result we obtain $\sum_{i=1}^{t}x_i \le w$, then we return $(x_1, \ldots, x_t)$. Otherwise, we proceed to the next guess. If all guesses fail, then we reject. The running time is $O(s^2)$.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:binary-unbounded}}
\label{proof:binary-unbounded}
\begin{proof}
Following~\cref{one_is_enough}, we begin by guessing the unique time slot $c_k$ for which the teams in the coalition will declare non-zero availability. We can then run the integer linear program (ILP, in short) specified for the $t$-\textsc{Threshold Covering}\xspace problem. By the famous result of \citet{Len83} (later improved by~\citet{Kan87} and by~\citet{FraTar1987}), we know that an ILP with $\rho$~variables and $z$ input bits can be solved in $O(\rho^{2.5\rho+o(\rho)}\cdot z)$ time. Since the ILP specified for $t$-\textsc{Threshold Covering}\xspace has $t$ variables and can be represented in $O(|A'|\cdot |B'|)$ bits, where $|A'|$ and $|B'|$ denote the numbers of binary bits needed to encode the true and the declared availabilities at time slot~$c_k$, respectively, we obtain an algorithm with running time $O(t^{2.5t+o(t)} \cdot L^2)$.
As for the second running time, after guessing the unique time slot $c_k$, we additionally guess the sum~$w'$ of the declared availabilities of the teams in the coalition whose true availabilities are upper-bounded by $L^c$; we call these teams \emph{small teams}. Then, we modify the ILP specified for $t$-\textsc{Threshold Covering}\xspace to search for the declared availabilities of the remaining teams. Using the declared availabilities of the remaining teams, we can calculate the declared availabilities of the small teams as described in the first algorithm. Again, by Lenstra's result, we can solve the ILP in $g(\ell_c)\cdot |A_k'|\cdot |B_k'|$~time, where $|A_k'|$ and $|B_k'|$ denote the numbers of binary bits needed to encode the true and declared availabilities at time slot~$c_k$, respectively. Searching for the declared availabilities of the small teams can be done in $O(t^2\cdot L^{c^2})$ time. In total, we obtain an algorithm with running time~$f'(\ell_c)\cdot t^2\cdot L^{c^2+2}$.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:cvc} \\ (correctness, unrestricted coalition size)}\label{proof:thm-cvc}
\begin{proof}
Let us first state the total availabilities of the various time slots. Each element slot has total availability~$2m-1$, time slot~$\beta$ has total availability~$2m-1-k$, and time slot~$\alpha$ has total availability~$2m$. Hence, time slot~$\alpha$ is the unique winner. Each set team receives a pay-off of~$1/(2m)$ and each dummy team receives pay-off zero.

Next, we prove the correctness of the reduction. For the ``if'' case, assume that there is a size-$k$ set cover $\mathcal{S}'\subseteq\mathcal{S}$. We show that the coalition that corresponds to the set cover~$\mathcal{S}'$ (recall that $t = k$) can improve by making time slot~$\beta$ the unique winner.
To this end, each set team~$t_i$ with $S_i \in \mathcal{S}'$ changes its declared availability for time slot~$\alpha$ and for all element slots to~$0$ and changes its declared availability for time slot~$\beta$ to~$1$. As a result, time slot~$\alpha$ has total availability~$2m-k$, each element slot has total availability at most~$2m-2$ (since the coalition corresponds to a set cover), and time slot~$\beta$ has total availability~$2m-1$. Thus, time slot~$\beta$ is the unique winner and each set team receives a pay-off of~$1/(2m-1)$; this is a strict improvement for all teams in the coalition.

For the ``only if'' case, assume that there is a coalition of $t$~teams that can improve their pay-offs by changing their declared availabilities. We observe that time slot~$\alpha$ cannot be a (co-)winner since if it were, then either no team would improve or the pay-off of at least one coalition member would be zero. Now, we show that the subfamily~$\mathcal{S}'$ that corresponds to the coalition is a set cover of size~$t=k$. For this, we distinguish between two cases, depending on whether time slot~$\beta$ is a unique winner or not. First, suppose that the coalition makes time slot~$\beta$ become the new unique winner. Then, after changing the coalition's declared availabilities, time slot~$\beta$ has total availability of at most~$2m-1$. It follows that each element slot has total availability of at most~$2m-2$. This implies that $\mathcal{S}'$, which corresponds to the set teams of the coalition, forms a set cover, since otherwise there would be one element slot for which the coalition cannot decrease its total availability to at most~$2m-2$.
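The availability totals in the ``if'' direction can also be sanity-checked numerically. The following sketch uses illustrative values $m=4$ and $k=2$ (not taken from any concrete instance of the construction); any $m > k \ge 2$ behaves the same.

```python
# Numeric sanity check of the availability totals in the "if" direction.
m, k = 4, 2  # illustrative sizes (hypothetical, not from the paper)

# Totals before the deviation: alpha is the unique winner.
before = {"alpha": 2*m, "beta": 2*m - 1 - k, "element_max": 2*m - 1}
assert before["alpha"] > before["beta"]
assert before["alpha"] > before["element_max"]

# Totals after the k coalition (set-cover) teams move to beta:
after = {"alpha": 2*m - k, "beta": 2*m - 1, "element_max": 2*m - 2}
assert after["beta"] > after["alpha"]
assert after["beta"] > after["element_max"]  # beta is the unique winner

# Each set team's pay-off strictly improves:
assert 1 / (2*m - 1) > 1 / (2*m)
```

The assertions mirror exactly the totals stated in the proof: $\beta$ becomes the unique winner and every coalition member's pay-off rises from $1/(2m)$ to $1/(2m-1)$.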
Next, suppose that $\beta$ is not a unique winner, which means that there is some subset~$E'\subseteq E$ of element slots within the co-winner set (including the case that some element slot~$e_j$ is a unique winner). Note that if the coalition contains only one team~$t^*$, then time slot~$\alpha$ would still be a (co-)winner, which is not possible by the arguments above. Thus, let us assume that the coalition has at least two teams that make all element slots from~$E'$ become (co-)winners. All coalition members must still declare availability~$1$ for all element slots from~$E'$, since otherwise the pay-off of some coalition members would decrease to zero. Furthermore, for each element slot~$e_j\notin E'$ that is not a co-winner, there is at least one coalition member that changes its declared availability from~$1$ to~$0$, since otherwise $e_j$~would be a co-winner. Hence, subfamily~$\mathcal{S}'$ is a set cover (recall that it corresponds to the teams in the coalition): each element corresponding to an element slot from~$E'$ is covered by all sets from $\mathcal{S}'$ and each element corresponding to an element slot from~$E \setminus E'$ is covered by at least one set from~$\mathcal{S}'$. By the analysis of the two cases above, the existence of a coalition that can improve always implies the existence of a set cover. This completes the proof for the case that the size of the coalition is at most~$t$.

\noindent\textbf{Unrestricted coalition size.}\quad Our problem is in ${\mathsf{NP}}$, as we can check in polynomial time whether a specific strategy is an improvement step for a given coalition. If there is no restriction on the coalition size, then NP-hardness does not immediately transfer from the above construction, but it can be obtained by a slight modification as follows.
The above proof almost goes through: a size-$k$ set cover in the original instance still implies a coalition of size~$t$ that can improve by making~$\beta$ the new unique winner. Furthermore, by the same arguments as above, it still follows that every coalition that can improve consists of set teams which correspond to a set cover. We cannot, however, exclude the existence of a coalition which corresponds to a set cover of size larger than~$t$. To fix this, we assume that each element in our \textsc{Set Cover}\xspace instance occurs in at most three sets. (This variant remains NP-hard since \textsc{Vertex Cover} remains NP-hard even if each vertex has degree at most three~\cite{GJS76}; we did not assume this in the ${\mathsf{W[2]}}$-hardness proof since this variant is not ${\mathsf{W[2]}}$-hard.) We further assume, without loss of generality, that~$k\ge3$. The reasoning for the restricted coalition size case still holds, so it only remains to show that no coalition of size larger than~$t$ can improve. Assume, towards a contradiction, that there is a coalition of size at least~$t+1$ that can improve. First, notice again that time slot~$\alpha$ cannot be a (co\nobreakdash-)winner. Second, assume that there is some subset~$E'\subseteq E$ of element slots within the set of co-winners (including the case that some element slot~$e_j$ is a single winner). All coalition members must still declare availability~$1$ for all element slots from~$E'$, since otherwise the pay-off of some coalition member would decrease to zero. However, this is not possible since each element occurs in at most three sets and $t+1=k+1>3$. Third, assume that the coalition makes time slot~$\beta$ the new unique winner. Then, after changing the coalition's declared availabilities, time slot~$\beta$ has total availability of at least~$2m$, since otherwise some coalition members would decrease their pay-off.
However, this also means that no coalition member improves its pay-off, which is at most the same as the original pay-off~$1/(2m)$. This completes the proof for the case of unrestricted coalition size.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:simple_Nash_always_exists} (instances with $\ensuremath{a_{\max}} \neq 2$)}\label{proof:simple_Nash_always_exists}
\begin{proof}
\noindent\textbf{Instances with $\ensuremath{a_{\max}} = 1$.} Considering the above two cases, the proof is relatively simple for inputs with $\ensuremath{a_{\max}}=1$: either there is a column~$j$ in~$A$ where only one team has availability~$1$, implying that time slot~$c_j$ is a safe single-team slot, or every time slot~$c_j$ is a safe multiple-team slot since $\forall j'\colon a^{j'}_i \le \ensuremath{a_{\max}}=1$ and $\forall i\colon \sum_{i'\neq i} a^j_{i'}\ge 1$.

\noindent\textbf{Instances with $\ensuremath{a_{\max}} = 3$.} We consider the case where the maximum true availability~$\ensuremath{a_{\max}}$ is three. Again, let $x$ be the maximum sum of availabilities over all time slots, and notice that $x \geq \ensuremath{a_{\max}}$. If $x$ is at least six, then every time slot~$c_j$ with availability sum~$x$ is a safe multiple-team slot, because $\forall j'\colon a^{j'}_i \le \ensuremath{a_{\max}}=3$ and $\forall i\colon \sum_{i'\neq i} a^j_{i'}\ge x-\ensuremath{a_{\max}} \ge 3$. If $x$ is three, then there is a time slot where only one team is available, with availability $\ensuremath{a_{\max}}=3$; i.e., there is a safe single-team slot. Now, assume that $x$ is four or five. We distinguish between four cases and implicitly assume that the $k$th case does not hold in the $(k+1)$st case. First, there is a safe single-team slot (which implies a $1$-strong Nash equilibrium by our observation).
Note that this case includes time slots where only one team is available with availability $\ensuremath{a_{\max}}=3$, as well as the situation that there is a time slot~$c_j$ where only one team~$t_i$ is available with availability~$2$ and~$t_i$ is the only team with availability~$2$ for every time slot. Second, there is a time slot~$c_j$ where one team~$t_i$ has availability $\ensuremath{a_{\max}}=3$ and another team~$t_{i'}$ has availability one, while every remaining team has availability zero. Analogously to the case with $\ensuremath{a_{\max}}=2$ and $x=3$, first, for each team~$t_\ell$, we set $b_\ell^j\coloneqq a_\ell^j$ and $b_\ell^{j'}\coloneqq 0$ ($j'\neq j$) to make time slot $c_j$ a single winner. Team $t_i$ receives pay-off~$3/4$ and team $t_{i'}$ receives pay-off~$1/4$. Now, for each team~$t_\ell$ and for each time slot~$c_{j'}\neq c_j$, we modify the declared availabilities~$b_{\ell}^{j'}$ as follows. If there is some $\ell \notin \{i,i'\}$ with $a^{j'}_\ell=3$, then set~$b^{j'}_\ell\coloneqq a^{j'}_\ell$. Otherwise, if $a^{j'}_{i'} > 0$, then set~$b^{j'}_{i'}\coloneqq a^{j'}_{i'}$, and if $a^{j'}_{i'} = 0$, then set~$b^{j'}_{\ell}\coloneqq a^{j'}_{\ell}$ for the first team~$t_\ell$ with $\ell \notin \{i,i'\}$ and $a^{j'}_{\ell}>0$. This does not prevent $c_j$ from being the unique winner but makes it impossible for the teams to improve their pay-offs: team~$t_i$ cannot improve, because it would receive at most the same pay-off~$3/4$ for every time slot which it could make a new single winner. Note that, since we are not in the first case, it follows that there is no time slot where only team~$t_i$ is available with availability~$\ensuremath{a_{\max}}=3$. Furthermore, if there is a time slot~$c_{j^*}$ where team~$t_i$ is available with availability~$2$ and no other team is available, then there is some time slot $c_{j''}$ with $a^{j''}_{i''}=3$ ($i''\neq i$) and, hence, $b^{j''}_{i''}=3$.
(This slot~$c_{j''}$ must exist since otherwise we would be in the first case and have a safe single-team slot~$c_{j^*}$.) Thus, team~$t_i$ cannot make $c_{j^*}$ become a new single winner. Team $t_{i'}$ also cannot improve, because it cannot create a new single winner at all. Hence, we have a $1$-strong Nash equilibrium. Third, there is a time slot~$c_j$ where one team~$t_i$ has availability $\ensuremath{a_{\max}}=3$ and another team~$t_{i'}$ has availability two, implying that the remaining teams are not available at slot~$c_j$. Similarly to the previous case, for each team~$t_\ell$, we first set $b^j_\ell\coloneqq a_\ell^j$ and $b^{j'}_\ell\coloneqq 0$ ($j'\neq j$) to make time slot $c_j$ a single winner. Team $t_i$ receives pay-off~$3/5$ and team $t_{i'}$ receives pay-off~$2/5$. Now, for each team~$t_\ell\neq t_i$ and for each time slot~$c_{j'}\neq c_j$ with $a^{j'}_i=\ensuremath{a_{\max}}=3$, we modify its declared availability~$b^{j'}_{\ell}$ such that the total declared availability of the teams other than $t_i$ is always two at time slot~$c_{j'}$.
\begin{itemize}
\item If $a^{j'}_{i'} \ge 2$, then set~$b^{j'}_{i'}\coloneqq a^{j'}_{i'}$.
\item If $a^{j'}_{i'} = 1$, then set~$b^{j'}_{i'}\coloneqq a^{j'}_{i'}$, let $t_\ell\notin \{t_i,t_{i'}\}$ be the first team with $0<a^{j'}_{\ell}<\ensuremath{a_{\max}}$, and set~$b^{j'}_{\ell}\coloneqq a^{j'}_{\ell}$. Note that such a team~$t_\ell$ must exist since we are not in the second case.
\item If $a^{j'}_{i'} = 0$, then set~$b^{j'}_{\ell}\coloneqq a^{j'}_{\ell}$ for the first team~$t_\ell \notin \{t_i,t_{i'}\}$ with $a^{j'}_{\ell}>1$ or for the first two teams~$t_\ell \notin \{t_i,t_{i'}\}$ with $a^{j'}_{\ell}=1$. Again, such teams~$t_\ell$ exist since we are not in the second case.
\end{itemize}
This does not prevent $c_j$ from being the unique winner but makes it impossible for the teams to improve their pay-offs: team~$t_i$ cannot improve, because it would receive at most the same pay-off~$3/5$ for every time slot which it could make a new (co-)winner. Each of the remaining teams (including $t_{i'}$) also cannot improve, because it cannot create a new single winner at all (note that $x=5$). Hence, we have a $1$-strong Nash equilibrium. Fourth, there is a time slot~$c_j$ where one team~$t_i$ has availability~$\ensuremath{a_{\max}}=3$ and there are another two teams, $t_{i'}$ and~$t_{i''}$, both with availability one. Similarly to the previous case, for each team~$t_\ell$, we first set $b^j_\ell\coloneqq a^j_\ell$ and $b^{j'}_\ell\coloneqq 0$ ($j'\neq j$) to make time slot~$c_j$ a single winner. Team $t_i$ receives pay-off~$3/5$ and teams~$t_{i'}$ and~$t_{i''}$ both receive pay-off~$1/5$. Then, for each time slot~$c_{j'} \neq c_j$ such that $a^{j'}_i=\ensuremath{a_{\max}}=3$, we modify the declared availabilities~$b^{j'}_\ell$ of some teams~$t_\ell$:
\begin{itemize}
\item If $a^{j'}_{i'} > 0$ and $a^{j'}_{i''} > 0$, then set~$b^{j'}_{i'}\coloneqq 1$ and~$b^{j'}_{i''}\coloneqq 1$.
\item If $a^{j'}_{i'} = 0$ and $a^{j'}_{i''} > 1$, then set~$b^{j'}_{i''}\coloneqq a^{j'}_{i''}$.
\item If $a^{j'}_{i'} = 0$ and $a^{j'}_{i''} = 1$, then set~$b^{j'}_{i''}\coloneqq a^{j'}_{i''}$, let $t_\ell$ be the first team with $0<a^{j'}_{\ell}<\ensuremath{a_{\max}}$, and set~$b^{j'}_{\ell}\coloneqq a^{j'}_{\ell}$. Note that such a team~$t_\ell$ must exist since we are not in the second case.
\item If $a^{j'}_{i'} = 0$ and $a^{j'}_{i''} = 0$, then let $t_\ell$ and $t_{\ell'}$ be the first two teams with $a^{j'}_{\ell} = a^{j'}_{\ell'}=1$ and set~$b^{j'}_{\ell}\coloneqq a^{j'}_{\ell}$ and~$b^{j'}_{\ell'}\coloneqq a^{j'}_{\ell'}$. Note that such teams must exist since we are not in the first three cases. The remaining cases, with $a^{j'}_{i''} = 0$ and $a^{j'}_{i'} > 0$, follow analogously.
\end{itemize}
The new declared availabilities still make $c_j$ a unique winner but make it impossible for the teams to improve their pay-offs: team~$t_i$ cannot improve, because it would receive at most the same pay-off~$3/5$ for every time slot which it could make a new (co-)winner. None of the remaining teams (including~$t_{i'}$ and~$t_{i''}$) can improve, because they cannot create a new single winner at all. Hence, we have a $1$-strong Nash equilibrium.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:strongNash-coNP-hard} (correctness)}\label{proof:strongNash-coNP-hard}
We will use the following observation.
\begin{observation}\label{truth_for_winning_slot}
Let $B$ be a Nash equilibrium for some input $A$. Then, for each time slot $c_j \in \mathsf{winners}(B)$, all teams declare their true availabilities.
\end{observation}
\begin{proof}
Assume, towards a contradiction, that the declared availability of some team $t_i$ for time slot $c_j$ is strictly less than its true availability. Then, this team can improve its pay-off by declaring its true availability.
\end{proof}

Now we are ready to prove the correctness of the construction for Theorem~\ref{thm:strongNash-coNP-hard}.
\begin{proof}(of Theorem~\ref{thm:strongNash-coNP-hard}).
We show that a $2n$-strong Nash equilibrium exists if and only if an exact cover does not exist.

\noindent For the ``only if'' part, we show that the existence of an exact cover implies the non-existence of a Nash equilibrium. To this end, assume that there is an exact cover~$\mathcal{F}'$. Consider the remaining sets $\bar{\mathcal{F}'} = \{S_1, \ldots, S_{3n}\} \setminus \mathcal{F}'$. It holds that $|\mathcal{F}'| = n$ and $|\bar{\mathcal{F}'}| = 2n$. Notice further that, while $\mathcal{F}'$ covers each element exactly once, $\bar{\mathcal{F}'}$ covers each element exactly twice.
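This counting fact can be verified on a toy \textsc{Restricted X3C} instance. The instance below (with $n=2$, so $3n=6$ sets over $6$ elements) is purely illustrative and not taken from the reduction:

```python
from collections import Counter

# Toy Restricted X3C instance with n = 2: each element occurs in exactly 3 sets
# (illustrative instance, not from the paper).
E = set(range(1, 7))
F = [{1, 2, 3}, {4, 5, 6},                       # an exact cover F' of size n = 2
     {1, 2, 4}, {2, 5, 6}, {1, 3, 5}, {3, 4, 6}]  # the remaining 2n = 4 sets
cover, rest = F[:2], F[2:]

occurrences = Counter(e for S in F for e in S)
assert all(occurrences[e] == 3 for e in E)  # Restricted X3C property (2)

# F' covers each element exactly once; its complement covers each exactly twice.
assert all(Counter(e for S in cover for e in S)[e] == 1 for e in E)
assert all(Counter(e for S in rest for e in S)[e] == 2 for e in E)
```

The last two assertions are precisely the statement above: removing an exact cover from a family in which every element occurs three times leaves a family covering every element exactly twice.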
We further assume, towards a contradiction, that a Nash equilibrium does exist; let us denote it by $B$. In what follows, we consider several cases concerning the structure of profile~$B$ and show an improvement step for each of these cases, contradicting the assumption that $B$ is a Nash equilibrium. Consider the set of winning time slots of $B$. First, we consider the case where $\alpha \in \mathsf{winners}(B)$.

\noindent\textbf{Case 1.} In this case, $\alpha \in \mathsf{winners}(B)$. By Observation~\ref{truth_for_winning_slot}, the declared availability of every team for $\alpha$ is $1$; thus, the sum of declared availabilities of each winning slot is $3n$. Below, we consider three different sub-cases.

\noindent\textbf{Case 1a.} In this case, $\alpha$ is the only winning time slot, i.e., $\{\alpha\} = \mathsf{winners}(B)$. Consider any time slot, say $e_1$, and consider the three teams that are available in it, say $s_i$, $s_j$, and $s_k$. We claim that these teams can improve their pay-offs as follows: if $s_i$, $s_j$, and $s_k$ declare availability of $0$ for all time slots except for $e_1$, for which they declare availability of $n$, then their pay-offs increase from $\frac{1}{3n}$ to~$\frac{1}{3}$.

\noindent\textbf{Case 1b.} In this case, there are at most three other time slots, besides $\alpha$, with sum of declared availabilities being $3n$, and there exists a team $s_i$ such that $a^i_j = n$ for each $j$ such that $e_j \in \mathsf{winners}(B)$. We claim that $s_i$ can improve its pay-off as follows: if $s_i$ declares availability of $0$ for time slot $\alpha$, then its pay-off increases from $\frac{1}{3n}$ to $\frac{1}{3}$.

\noindent\textbf{Case 1c.} In this case, $\alpha$ is a winning slot, but neither Case 1a nor Case 1b holds. It follows that, for every team~$s_j$, there must exist $e_i \in \mathsf{winners}(B)$ such that $a^j_i = 0$. Thus, $\mathsf{Payoff}(s_j, B) = 0$ for each team $s_j$.
Recall that $\bar{\mathcal{F}'}$ contains those sets which are not part of the exact cover for the instance of \textsc{Restricted X3C}. We claim that the set of teams corresponding to the sets in $\bar{\mathcal{F}'}$ can improve their pay-offs as follows: if each such team declares availability of~$0$ for all time slots $e_i$ and availability of~$1$ for $\alpha$, then their pay-offs increase to $\frac{1}{3n}$. This follows since $\bar{\mathcal{F}'}$ covers each element twice; thus, by deviating as described above, the sum of declared availabilities of each time slot $e_i$ decreases to at most $n$, while the sum of availabilities of $\alpha$ is at least $2n$, making $\alpha$ a unique winner. Next, we consider the case where $\alpha$ is not winning.

\noindent\textbf{Case 2.} In this case, $\alpha \notin \mathsf{winners}(B)$. By Observation~\ref{truth_for_winning_slot}, all teams declare their true availabilities for the winning time slots; thus, the sum of declared availabilities of each winning time slot is $3n$. Below, we consider two sub-cases.

\noindent\textbf{Case 2a.} In this case, there exists a team $s_i$ such that $a^i_j = n$ for each $j$ such that $e_j \in \mathsf{winners}(B)$. Since in the instance of \textsc{Restricted X3C} no two sets are the same, it follows that there are four other teams, besides $s_i$, denoted by $s_j$, $s_f$, $s_p$, and $s_q$, and two different time slots, denoted by $e_x$ and by $e_y$, such that the following conditions hold:
\begin{enumerate}
\item $a^i_x = a^j_x = a^f_x = n$;
\item $a^i_y = a^p_y = a^q_y = n$;
\item $j \notin \{p, q\}$;
\item $e_y \in \mathsf{winners}(B)$.
\end{enumerate}
Notice that $\mathsf{Payoff}(s_j,B) = 0$, $\mathsf{Payoff}(s_i,B) = \frac{1}{3}$, and $\mathsf{Payoff}(s_f, B) \leq \frac{1}{3}$.
We claim that $s_i, s_j, s_f$ can improve their pay-offs as follows: if $s_i$ and $s_f$ declare availability of $n$ for time slot $E_x$ and $0$ for all other slots, while $s_j$ declares availability of $n - 1$ for time slot $E_x$ and $0$ for all other slots, then $E_x$ would become the unique winning time slot; as a result, the pay-offs of $s_i$ and $s_f$ would increase to $\frac{n}{3n-1}$, while the pay-off of $s_j$ would increase to $\frac{n - 1}{3n-1}$. \noindent\textbf{Case 2b.} In this case, no team is available in all winning time slots. Here the improvement step described in Case 1c applies; the reasoning is the same and is thus omitted. \noindent For the ``if'' part, we have to show that if no exact cover exists, then there is a Nash-equilibrium. To this end, we describe a profile $B$ and argue that, if indeed an exact cover does not exist, then $B$ is a Nash-equilibrium. The profile~$B$ is as follows. Each team~$s_j$ declares availability of $n$ for all time slots $E_i$ and availability of $0$ for $\alpha$, making each time slot a co-winner. Thus, the pay-off of each team is $0$. Let us assume, towards a contradiction, that $B$ is not a $2n$-Nash-equilibrium, i.e., that an improvement step with respect to $B$, involving at most $2n$ teams, exists; denote it by $B'$. First of all, we observe that no time slot $E_i$ is a winning time slot (i.e., $E_i \notin \mathsf{winners}(B')$ for all $E_i$), because otherwise, after the improvement step~$B'$, at least $3n - 3$ teams would have pay-off $0$ (this follows since, in the instance of \textsc{Restricted X3C}, each element is included in exactly three sets). However, no coalition of at most $3$ teams can decrease the sum of availabilities of all other time slots so as to make $E_i$ the unique winner; thus, we conclude that, after the improvement step, $\alpha$ must be the unique winning slot.
The maximum sum of availabilities that time slot $\alpha$ might get after an improvement step involving at most $2n$ teams is $2n$. Therefore, for such a deviation to be profitable, i.e., to be an improvement step, the sum of declared availabilities of each time slot $E_i$ has to decrease by at least $n + 1$. This could happen only if, for each time slot $E_i$, at least two teams with non-zero availabilities decrease their declared availabilities. Thus, an exact cover must exist, contradicting the assumption that no exact cover exists; hence $B$ is indeed a Nash-equilibrium. \end{proof} \fi \end{document}
\begin{document} \newcommand{\oto}{{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.9ex}}} \numberwithin{equation}{section} \renewcommand{\theequation}{\thesection.\arabic{equation}} \begin{frontmatter} \title{A comparative study of ideals in fuzzy orders\tnoteref{F}} \tnotetext[F]{This work is supported by National Natural Science Foundation of China (No.
11771310)} \author{Hongliang Lai} \ead{[email protected]} \author{Dexue Zhang\corref{cor}}\cortext[cor]{Corresponding author.} \ead{[email protected]} \author{Gao Zhang} \ead{[email protected]} \address{School of Mathematics, Sichuan University, Chengdu 610064, China} \begin{abstract} This paper presents a comparative study of three kinds of ideals in fuzzy order theory: forward Cauchy ideals (those generated by forward Cauchy nets), flat ideals and irreducible ideals, including their role in connecting fuzzy orders with fuzzy topology. \end{abstract} \begin{keyword} Fuzzy order \sep Fuzzy topology \sep Forward Cauchy ideal \sep Flat ideal \sep Irreducible ideal \sep Scott $\mathcal{Q}$-topology \sep Scott $\mathcal{Q}$-cotopology \end{keyword} \end{frontmatter} \section{Introduction} The notion of ideals (i.e., directed lower sets) in ordered sets is primitive in domain theory. Domains and the Scott topology are both formulated in terms of ideals and their suprema. For a partially ordered set $P$, let ${\rm Idl}(P)$ denote the set of ideals in $P$ with the inclusion order, and let ${\bf y}:P\longrightarrow {\rm Idl}(P)$ be the map that assigns to each $x\in P$ the principal ideal $\downarrow\!x$. Then $P$ is directed complete if ${\bf y}$ has a left adjoint $\sup:{\rm Idl}(P)\longrightarrow P$ (which sends each ideal to its supremum); $P$ is a domain if it is directed complete and the left adjoint of ${\bf y}$ also has a left adjoint. A Scott open set of $P$ is an upper set $U$ such that for each ideal $I$ in $P$, if the supremum of $I$ is in $U$ then $I$ intersects with $U$. In order to establish a theory of fuzzy domains (or, quantitative domains), the first step is to find an appropriate notion of ideals for fuzzy orders (or, $\mathcal{Q}$-orders, where $\mathcal{Q}$ is the truth-value quantale). The problem seems simple, but it turns out to be a very intricate one because of the complication of the table of truth-values --- the quantale $\mathcal{Q}$.
In fact, there are several natural extensions of this notion to the fuzzy setting. This paper presents a comparative study of three of them: forward Cauchy ideals, flat ideals and irreducible ideals. Before summarizing related attempts in the literature and explaining what we will do in this paper, we recall some equivalent reformulations of ideals in a partially ordered set. Let $P$ be a partially ordered set. A net $\{x_i\}$ in $P$ is eventually monotone if there is some $i$ such that $x_j\leq x_k$ whenever $i\leq j\leq k$ \cite{Gierz2003}. Let $I$ be a non-empty lower set in $P$. The following are equivalent: \begin{itemize} \setlength{\itemsep}{-2pt} \item $I$ is an ideal, that is, for any $x,y$ in $I$, there is some $z\in I$ such that $x,y\leq z$. \item There exists an eventually monotone net $\{x_i\}$ such that $I= \bigcup_i\bigcap_{j\geq i}\downarrow\!x_j$. \item $I$ is \emph{flat} in the sense that for any upper sets $G,H$ of $P$, if $I$ intersects with both $G$ and $H$, then $I$ intersects with $G\cap H$. \item $I$ is \emph{irreducible} in the sense that for any lower sets $B,C$ of $P$, if $I\subseteq B\cup C$ then either $I\subseteq B$ or $I\subseteq C$. \end{itemize} The net-approach is extended to fuzzy orders in \cite{BvBR1998,Wagner94,Wagner97}, resulting in the notions of forward Cauchy net and Yoneda completeness (a.k.a.\ liminf completeness). Fuzzy lower sets generated by forward Cauchy nets are called ideals in \cite{FK97,FS02}. They will be called forward Cauchy ideals in this paper, in order to distinguish them from flat ideals and irreducible ideals. Yoneda completeness, as a version of quantitative directed completeness, has received wide attention in the study of fuzzy orders, including generalized metric spaces as a special case, see e.g. \cite{BvBR1998,FK97,FS02,FSW,Goubault,HW2011,HW2012, KS2002,LZ16,Ru,Wagner97}.
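The four equivalent reformulations above can be checked exhaustively on a small poset. A minimal sketch in Python (the divisibility order on $\{1,2,3,6\}$ is our own toy example; the eventually-monotone-net condition is omitted since it involves nets rather than sets):

```python
from itertools import combinations

P = {1, 2, 3, 6}
leq = lambda x, y: y % x == 0           # divisibility order on P

def subsets(s):
    s = sorted(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

lowers = [S for S in subsets(P) if all(x in S for y in S for x in P if leq(x, y))]
uppers = [S for S in subsets(P) if all(y in S for x in S for y in P if leq(x, y))]

def is_ideal(I):        # directed: any two elements have an upper bound in I
    return all(any(leq(x, z) and leq(y, z) for z in I) for x in I for y in I)

def is_flat(I):         # meets G and meets H  =>  meets G /\ H, for upper G, H
    return all(I & G & H for G in uppers for H in uppers if I & G and I & H)

def is_irreducible(I):  # I <= B \/ C  =>  I <= B or I <= C, for lower B, C
    return all(I <= B or I <= C for B in lowers for C in lowers if I <= B | C)

for I in lowers:
    if I:               # the equivalence, on all non-empty lower sets
        assert is_ideal(I) == is_flat(I) == is_irreducible(I)
print("ideal = flat = irreducible on all non-empty lower sets")
```

For instance, the lower set $\{1,2,3\}$ fails all three conditions (the pair $2,3$ has no upper bound in it), while $\{1,2\}$ satisfies all three.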
The extension of the flat-approach to the fuzzy setting originates in the work of Vickers \cite{SV2005} in the case that the truth-value quantale is Lawvere's quantale $([0,\infty]^{\rm op},+)$ (which is isomorphic to the unit interval with the product t-norm). This approach results in the notions of flat ideal (called flat left module in \cite{SV2005}) and flat completeness of fuzzy orders. It is shown in \cite{SV2005} that for Lawvere's quantale, flat completeness is equivalent to Yoneda completeness. The recent paper \cite{Zhang18} extends the irreducible-approach to the fuzzy setting in the study of sobriety of fuzzy cotopological spaces, resulting in the notions of irreducible ideal and irreducible completeness of fuzzy orders. Forward Cauchy ideals, flat ideals and irreducible ideals in a fuzzy ordered set are all natural generalizations of the notion of ideals in a partially ordered set; the resulting completeness notions for fuzzy orders are natural extensions of directed completeness in order theory. We note in passing that, from the perspective of category theory, such completeness notions for fuzzy orders are instances of the theory of cocompleteness with respect to a class of weights in enriched category theory \cite{AK,Kelly,KS05}. This paper aims to present a comparative study of forward Cauchy ideals, flat ideals and irreducible ideals, hence of the resulting completeness notions. Since all of them are intended to play the role of directed lower sets in fuzzy order theory, before comparing them with each other, we propose the following criteria for a class $\Phi$ of fuzzy sets that are meant for the role of ideals in fuzzy orders: \begin{enumerate} \item[(I1)] If the truth-value quantale $\mathcal{Q}$ is the two-element Boolean algebra, then for each partially ordered set $A$, $\Phi(A)$ is the set of ideals in $A$. This is to require that $\Phi$ is a generalization of the class of ideals. \item[(I2)] $\Phi$ is saturated.
Saturatedness of $\Phi$ guarantees that for each $\mathcal{Q}$-ordered set $A$, $\Phi(A)$ is the free $\Phi$-continuous $\mathcal{Q}$-ordered set generated by $A$. So, for a saturated class $\Phi$ of fuzzy sets, there exist enough $\Phi$-continuous $\mathcal{Q}$-ordered sets. \item[(I3)] $\Phi$ generates a functor from the category of $\mathcal{Q}$-ordered sets and $\Phi$-cocontinuous maps to that of $\mathcal{Q}$-topological spaces and/or $\mathcal{Q}$-cotopological spaces. This functor is expected to play the role of the functor in domain theory that sends each partially ordered set to its Scott topology. As in the classical case, such functors are of fundamental importance in the theory of fuzzy domains. \end{enumerate} Besides the interrelationship among the classes of forward Cauchy ideals, flat ideals and irreducible ideals, their connection to fuzzy topology will also be discussed. The contents are arranged as follows. Section 2 recalls some basic ideas that are needed in the subsequent sections. Section 3 concerns the relationship among forward Cauchy ideals, flat ideals and irreducible ideals. The main results are: (i) Every forward Cauchy ideal is flat (irreducible, resp.) if and only if the truth-value quantale is meet continuous (dually meet continuous, resp.). (ii) For the quantale obtained by equipping $[0,1]$ with a left continuous t-norm, irreducible ideals coincide with forward Cauchy ideals. (iii) For a prelinear quantale, every irreducible ideal is flat. (iv) For a quantale that satisfies the law of double negation, flat ideals coincide with irreducible ideals. Section 4 proves that for every quantale, both the class of flat ideals and that of irreducible ideals are saturated. As for forward Cauchy ideals, it is shown in \cite{FSW} that for a completely distributive value quantale (see \cite{FS02,FSW} for definition), the class of forward Cauchy ideals is saturated. 
The conclusion is extended in \cite{LZ07} to the case that $\mathcal{Q}$ is a continuous and integral quantale. Section 5 concerns the connection between fuzzy orders and fuzzy topological spaces. For each subclass $\Phi$ of flat ideals, a functor is constructed from the category of $\mathcal{Q}$-ordered sets and $\Phi$-cocontinuous maps to that of stratified $\mathcal{Q}$-topological spaces. For each subclass $\Phi$ of irreducible ideals, a full functor is constructed from the category of $\mathcal{Q}$-ordered sets and $\Phi$-cocontinuous maps to that of stratified $\mathcal{Q}$-cotopological spaces. This shows that irreducible ideals can be used to generate closed sets, hence $\mathcal{Q}$-cotopologies, whereas flat ideals can be used to generate open sets, hence $\mathcal{Q}$-topologies. We would like to remind the reader that, in general, there is no natural way to switch between closed sets and open sets in the fuzzy setting. This lack of ``duality'' between closed sets and open sets demands that we need different kinds of fuzzy ideals to connect fuzzy orders with fuzzy topological spaces and/or fuzzy cotopological spaces. This is the \emph{raison d'\^{e}tre} for flat ideals and irreducible ideals. \section{Preliminaries} In this preliminary section, we recall briefly some basic ideas of complete lattices \cite{Gierz2003}, quantales \cite{Rosenthal1990}, and $\mathcal{Q}$-orders that will be needed. A quantale $\mathcal{Q}$ is a monoid in the monoidal category of complete lattices and join-preserving maps \cite{Rosenthal1990}. Explicitly, a quantale $\mathcal{Q}$ is a monoid $(Q,\&)$ such that $Q$ is a complete lattice and \begin{equation*} p\&\bigvee_{j\in J}q_j=\bigvee_{j\in J}p\& q_j, \qquad \Big(\bigvee_{j\in J}q_j\Big)\&p=\bigvee_{j\in J}q_j\&p \end{equation*} for all $p\in Q$ and $\{q_j\}_{j\in J}\subseteq Q$. The unit $1$ of the monoid $(Q,\&)$ is in general not the top element of $Q$.
If it happens that the unit element coincides with the top element of $Q$, then we say that $\mathcal{Q}$ is \emph{integral}. If the operation $\&$ is commutative, then we say that $\mathcal{Q}$ is a commutative quantale. A quantale $(Q,\&)$ is meet continuous if the underlying lattice $Q$ is meet continuous. \begin{SA} Throughout this paper, if not otherwise specified, all quantales are assumed to be integral and commutative.\end{SA} Since the semigroup operation $\&$ distributes over arbitrary joins, it determines a binary operation $\rightarrow$ on $Q$ via the adjoint property \begin{equation*} p\&q\leq r\iff q\leq p\rightarrow r. \end{equation*} The binary operation $\rightarrow$ is called the \emph{implication}, or the \emph{residuation}, corresponding to $\&$. Some basic properties of the binary operations $\&$ and $\rightarrow$ are collected below; they can be found in many places, e.g. \cite{Belo02,Rosenthal1990}. \begin{prop}\label{2.1} Let $\mathcal{Q}$ be a quantale. Then \begin{enumerate}[(1)] \item $1\rightarrow p=p$. \item $p\leq q \iff 1= p\rightarrow q$. \item $p\rightarrow(q\rightarrow r)=(p\& q)\rightarrow r$. \item $p\&(p\rightarrow q)\leq q$. \item $\Big(\bigvee_{j\in J}p_j\Big)\rightarrow q=\bigwedge_{j\in J}(p_j\rightarrow q)$. \item $p\rightarrow\Big(\bigwedge_{j\in J} q_j\Big)=\bigwedge_{j\in J}(p\rightarrow q_j)$. \item $p=\bigwedge_{q\in Q}((p\rightarrow q)\rightarrow q)$. \end{enumerate} \end{prop} We often write $\neg p$ for $p\rightarrow 0$ and call it the \emph{negation} of $p$. Though it is true that $p\leq\neg\neg p$ for all $p\in Q$, the inequality $\neg\neg p\leq p$ does not always hold. A quantale $\mathcal{Q}$ satisfies the {\it law of double negation} if \[(p\rightarrow 0)\rightarrow 0 = p \] for all $p\in Q$. \begin{prop}{\rm(\cite{Belo02})} \label{properties of negation} Suppose that $\mathcal{Q}$ is a quantale that satisfies the law of double negation.
Then \begin{enumerate}[(1)] \item $p\rightarrow q = \neg(p\&\neg q)=\neg q\rightarrow \neg p$. \item $p\&q =\neg (q\rightarrow\neg p)=\neg (p\rightarrow\neg q)$. \item $\neg(\bigwedge_{i\in I}p_i) = \bigvee_{i\in I}\neg p_i$. \end{enumerate}\end{prop} The quantales with the unit interval $[0,1]$ as underlying lattice are of particular interest in fuzzy set theory \cite{Ha98,KMP00}. In this case, the semigroup operation $\&$ is exactly a left continuous t-norm on $[0,1]$ \cite{KMP00}. A continuous t-norm on $[0,1]$ is a left continuous t-norm $\&$ that is continuous with respect to the usual topology. \begin{exmp} (\cite{KMP00}) Some basic t-norms: \begin{enumerate}[(1)] \item The t-norm $\min$: $a\&b=a\wedge b=\min\{a,b\}$. The corresponding implication is given by \[a\rightarrow b=\left\{\begin{array}{ll} 1, & a\leq b;\\ b, & a>b.\end{array}\right.\] \item The product t-norm: $a\&b=a\cdot b$. The corresponding implication is given by $$a\rightarrow b=\left\{\begin{array}{ll} 1, & a\leq b;\\ b/a, & a>b.\end{array}\right.$$ \item The {\L}ukasiewicz t-norm: $a\&b=\max\{a+b-1,0\}$. The corresponding implication is given by $$a\rightarrow b= \min\{1, 1-a+b\}. $$ In this case, $([0,1],\&)$ satisfies the law of double negation. \item The nilpotent minimum t-norm: $$a\& b=\left\{\begin{array}{ll} 0, & a+b\leq 1;\\ \min\{a,b\}, & a+b>1.\end{array}\right.$$ The corresponding implication is given by $$a\rightarrow b=\left\{\begin{array}{ll} 1, & a\leq b;\\ \max\{1-a,b\}, & a>b.\end{array}\right.$$ In this case, $([0,1],\&)$ satisfies the law of double negation. \end{enumerate}\end{exmp} The following theorem, known as the ordinal sum decomposition theorem, is of fundamental importance in the theory of continuous t-norms. 
\begin{thm} {\rm(\cite{Fau55,Mostert1957})} \label{ordinal sum} For each continuous t-norm $\&$ on $[0,1]$, there is a set of disjoint open intervals $\{(a_i,b_i)\}$ of $[0,1]$ that satisfies the following conditions: \begin{enumerate}[(i)] \item For each $i$, both $a_i$ and $b_i$ are idempotent and the restriction of $\&$ to $[a_i,b_i]$ is isomorphic either to the \L ukasiewicz t-norm or to the product t-norm; \item $x\&y=\min\{x,y\}$ if $(x,y)\notin\bigcup_i[a_i,b_i]^2$. \end{enumerate} \end{thm} A \emph{$\mathcal{Q}$-order} (or an order valued in the quantale $\mathcal{Q}$) \cite{Wagner94,Zadeh71} on a set $A$ is a reflexive and transitive $\mathcal{Q}$-relation on $A$. Explicitly, a $\mathcal{Q}$-order on $A$ is a map $R: A\times A\longrightarrow Q$ such that $R(x,x)=1$ and $R(y,z)\& R(x,y)\leq R(x,z)$ for any $x,y,z\in A$. The pair $(A,R)$ is called a $\mathcal{Q}$-ordered set. A $\mathcal{Q}$-ordered set is also called a \emph{$\mathcal{Q}$-category} in the literature, since it is precisely a category enriched over the symmetric monoidal category $\mathcal{Q}$. As usual, we write $A$ for the pair $(A, R)$ and $A(x,y)$ for $R(x,y)$ if no confusion would arise. Two elements $x,y$ in a $\mathcal{Q}$-ordered set $A$ are \emph{isomorphic} if $A(x,y)=A(y,x)=1$. We say that $A$ is \emph{separated} if isomorphic elements in $A$ are equal, that is, $A(x,y)=A(y,x)=1$ implies that $x=y$. If $R: A\times A\longrightarrow Q$ is a $\mathcal{Q}$-order on $A$, then $R^{\rm op}: A\times A\longrightarrow Q$, given by $R^{\rm op}(x,y)=R(y,x)$, is also a $\mathcal{Q}$-order on $A$ (by commutativity of $\&$), called the opposite of $R$. \begin{exmp} This example belongs to the folklore in fuzzy order theory, see e.g. \cite{Belo02}. For all $p, q\in Q$, let \[d_L(p,q)=p\rightarrow q.\] Then $(Q,d_L)$ is a separated $\mathcal{Q}$-ordered set.
The opposite of $(Q,d_L)$ is $(Q,d_R)$, where \[d_R(p,q)=q\rightarrow p.\] Both $(Q,d_L)$ and $(Q,d_R)$ play important roles in the theory of $\mathcal{Q}$-ordered sets. \end{exmp} \begin{exmp} \cite{Belo02} Let $X$ be a set. A map $\lambda: X\longrightarrow Q$ is called a fuzzy set (valued in $\mathcal{Q}$) of $X$; the value $\lambda(x)$ is interpreted as the membership degree of $x$. The map \[{\rm sub}_X: Q^X\times Q^X\longrightarrow Q,\] given by \begin{equation*}{\rm sub}_X(\lambda, \mu)=\bigwedge_{x\in X}\lambda(x)\rightarrow \mu(x),\end{equation*} defines a separated $\mathcal{Q}$-order on $Q^X$. Intuitively, the value ${\rm sub}_X(\lambda,\mu)$ measures the degree that $\lambda$ is a subset of $\mu$. Thus, ${\rm sub}_X$ is called the \emph{fuzzy inclusion order} on $Q^X$. The opposite of ${\rm sub}_X$ is called the \emph{converse fuzzy inclusion order} on $Q^X$. In particular, if $X$ is a singleton set then the $\mathcal{Q}$-ordered sets $(Q^X,{\rm sub}_X)$ and $(Q^X,{\rm sub}_X^{\rm op})$ degenerate to the $\mathcal{Q}$-ordered sets $(Q,d_L)$ and $(Q,d_R)$, respectively. \end{exmp} A map $f: A\longrightarrow B$ between $\mathcal{Q}$-ordered sets is $\mathcal{Q}$-order preserving if \[A(x_1,x_2)\leq B(f(x_1),f(x_2))\] for any $x_1,x_2\in A$. We write \[\mbox{$\mathcal{Q}$-{\sf Ord}}\] for the category of $\mathcal{Q}$-ordered sets and $\mathcal{Q}$-order preserving maps. Let $f: A\longrightarrow B$ and $g:B\longrightarrow A$ be $\mathcal{Q}$-order preserving maps. We say that $f$ is left adjoint to $g$ (or, $g$ is right adjoint to $f$), $f\dashv g$ in symbols, if $$A(x,g(y))=B(f(x),y)$$ for all $x\in A$ and $y\in B$. Let $A,B$ be $\mathcal{Q}$-ordered sets. A $\mathcal{Q}$-distributor $\phi:A\oto B$ from $A$ to $B$ is a map $\phi:A\times B\longrightarrow Q$ such that \[B(b,b')\&\phi(a,b)\&A(a',a)\leq \phi(a',b')\] for any $a,a'\in A$ and $b,b'\in B$.
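The structures in the two examples above can be verified pointwise for a concrete quantale. A minimal sketch for the Łukasiewicz quantale, checking the residuation law $p\&q\leq r\iff q\leq p\rightarrow r$, the fact that $d_L$ is a separated $\mathcal{Q}$-order, and a sample value of the fuzzy inclusion order (the finite grid of test points and the sample fuzzy sets are our own choices):

```python
from fractions import Fraction

# Lukasiewicz quantale on [0,1]: a & b = max(a+b-1, 0), a -> b = min(1, 1-a+b).
def conj(a, b): return max(a + b - 1, Fraction(0))
def imp(a, b):  return min(Fraction(1), 1 - a + b)

grid = [Fraction(i, 10) for i in range(11)]

# residuation: p & q <= r  iff  q <= p -> r
assert all((conj(p, q) <= r) == (q <= imp(p, r))
           for p in grid for q in grid for r in grid)

# d_L(p,q) = p -> q is a separated Q-order on Q:
d_L = imp
assert all(d_L(x, x) == 1 for x in grid)                  # reflexive
assert all(conj(d_L(y, z), d_L(x, y)) <= d_L(x, z)        # transitive
           for x in grid for y in grid for z in grid)
assert all(p == q for p in grid for q in grid
           if d_L(p, q) == 1 and d_L(q, p) == 1)          # separated

# the fuzzy inclusion order sub_X on Q^X for X = {0, 1, 2}:
def sub(lam, mu):
    return min(imp(lam[x], mu[x]) for x in range(3))

lam = [Fraction(1, 2), Fraction(3, 10), Fraction(1)]
mu  = [Fraction(7, 10), Fraction(1, 10), Fraction(9, 10)]
print(sub(lam, mu))   # min(1, 4/5, 9/10) -> 4/5
```

Exact rational arithmetic (`fractions.Fraction`) is used so that the equalities hold on the nose rather than up to floating-point error.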
Roughly speaking, a $\mathcal{Q}$-distributor $\phi:A\oto B$ is a $\mathcal{Q}$-relation between $A$ and $B$ that is compatible with the $\mathcal{Q}$-orders on $A$ and $B$. It is easy to see that the set $\mathcal{Q}$-Dist$(A,B)$ of all $\mathcal{Q}$-distributors from $A$ to $B$ forms a complete lattice under the pointwise order. \begin{exmp}\label{lower set as dist} (Fuzzy lower sets as $\mathcal{Q}$-distributors) A \emph{fuzzy lower set} \cite{LZ06} of a $\mathcal{Q}$-ordered set $A$ is a map $\phi:A\longrightarrow Q$ such that \[\phi(y)\&A(x,y)\leq\phi(x).\] It is obvious that $\phi:A\longrightarrow Q$ is a fuzzy lower set if and only if $\phi: A\longrightarrow(Q, d_R)$ preserves $\mathcal{Q}$-order. Dually, a \emph{fuzzy upper set} \cite{LZ06} of $A$ is a map $\psi:A\longrightarrow Q$ such that \[A(x,y)\&\psi(x)\leq\psi(y).\] It is clear that $\psi:A\longrightarrow Q$ is a fuzzy upper set if and only if $\psi: A\longrightarrow(Q, d_L)$ preserves $\mathcal{Q}$-order. If we write $*$ for the terminal object in the category $\mathcal{Q}$-{\sf Ord}, namely, a $\mathcal{Q}$-ordered set with only one element, then for each fuzzy lower set $\phi:A\longrightarrow Q$ of $A$, the map \[\phi\urcorner:A\times\{*\}\longrightarrow Q,\quad \phi\urcorner(x,*)=\phi(x)\] is a $\mathcal{Q}$-distributor $\phi\urcorner:A\oto *$. This establishes a bijection between fuzzy lower sets of $A$ and $\mathcal{Q}$-distributors from $A$ to $*$. For each fuzzy upper set $\psi$ of $A$, the map \[\ulcorner\psi: \{*\}\times A\longrightarrow Q,\quad \ulcorner\psi(*,x)=\psi(x)\] is a $\mathcal{Q}$-distributor $\ulcorner\psi:*\oto A$. This establishes a bijection between fuzzy upper sets of $A$ and $\mathcal{Q}$-distributors from $*$ to $A$. \end{exmp} \begin{lem}Let $\phi$ be a fuzzy lower set (fuzzy upper set, resp.) of a $\mathcal{Q}$-ordered set $A$, and let $p\in Q$. \begin{enumerate}[(1)] \item Both $p\&\phi$ and $p\rightarrow \phi$ are fuzzy lower sets (fuzzy upper sets, resp.) of $A$.
\item $\phi\rightarrow p$ is a fuzzy upper set (fuzzy lower set, resp.) of $A$ and $\phi=\bigwedge_{q\in Q}((\phi\rightarrow q)\rightarrow q)$. \end{enumerate} \end{lem} Let ${\cal P} A$ denote the set of fuzzy lower sets of $A$ endowed with the fuzzy inclusion order. Explicitly, elements in ${\cal P} A$ are $\mathcal{Q}$-order preserving maps $A\longrightarrow(Q, d_R)$, and \[{\cal P} A(\phi_1,\phi_2)={\rm sub}_A(\phi_1,\phi_2)=\bigwedge_{x\in A}(\phi_1(x)\rightarrow\phi_2(x)).\] Dually, let ${\cal P}^\dag A$ denote the set of fuzzy upper sets of $A$ endowed with the \emph{converse} fuzzy inclusion order. Explicitly, elements in ${\cal P}^\dag A$ are $\mathcal{Q}$-order preserving maps $A\longrightarrow(Q, d_L)$, and \[{\cal P}^\dag A(\psi_1,\psi_2)={\rm sub}_A(\psi_2,\psi_1)=\bigwedge_{x\in A}(\psi_2(x)\rightarrow\psi_1(x)).\] It is clear that $({\cal P}^\dag A)^{\rm op}={\cal P} (A^{\rm op})$ \cite{St05}. For each $a\in A$, $A(-,a)$ is a fuzzy lower set of $A$. Moreover, \[{\cal P} A(A(-,a),\phi)=\phi(a)\] for all $a\in A$ and $\phi\in{\cal P} A$. This fact is indeed a special case of the Yoneda lemma in enriched category theory. The Yoneda lemma ensures that the assignment $a\mapsto A(-,a)$ defines an embedding ${\bf y}: A\longrightarrow{\cal P} A$, known as the Yoneda embedding. The correspondence $A\mapsto{\cal P} A$ gives rise to a functor ${\cal P}:\mathcal{Q}$-${\sf Ord}\longrightarrow\mathcal{Q}$-{\sf Ord} that sends a $\mathcal{Q}$-order preserving map $f: A\longrightarrow B$ to ${\cal P} f=f^\rightarrow:{\cal P} A\longrightarrow{\cal P} B$, where \[f^\rightarrow(\phi)(y)=\bigvee_{x\in A}\phi(x)\&B(y,f(x)).\] Moreover, $f^\rightarrow:{\cal P} A\longrightarrow{\cal P} B$ has a right adjoint given by $f^\leftarrow:{\cal P} B\longrightarrow{\cal P} A$, where $f^\leftarrow(\psi) = \psi\circ f$.
This means that for all $\phi\in{\cal P} A$ and $\psi\in{\cal P} B$, \begin{equation} \label{kan adjunction} {\rm sub}_B(f^\rightarrow(\phi),\psi) = {\rm sub}_A(\phi,f^\leftarrow(\psi)).\end{equation} The adjunction $f^\rightarrow\dashv f^\leftarrow$ is a special case of the enriched Kan extension in category theory \cite{Kelly,Lawvere73}. For $\mathcal{Q}$-distributors $\phi:A\oto B$ and $\psi: B\oto C$, the composite $\psi\circ\phi: A\oto C$ is given by $$(\psi\circ\phi)(a,c)=\bigvee_{b\in B}\psi(b,c)\&\phi(a,b).$$ It is clear that ($\mathcal{Q}$-Dist$(*,*),\circ)$ is a quantale and is isomorphic to $\mathcal{Q}=(Q,\&)$. In this paper, we identify ($\mathcal{Q}$-Dist$(*,*),\circ)$ with $\mathcal{Q}$. For a fuzzy lower set $\phi:A\longrightarrow Q$ and a fuzzy upper set $\psi:A\longrightarrow Q$ of a $\mathcal{Q}$-ordered set $A$, the tensor product \[\phi\otimes\psi\] is defined as the composite of $\mathcal{Q}$-distributors \[\phi\urcorner\circ\ulcorner\psi: *\oto A\oto*.\] Explicitly, $\phi \otimes \psi$ is an element of the quantale $\mathcal{Q}$ given by $\phi \otimes \psi=\bigvee_{x\in A}\phi(x)\&\psi(x)$. Intuitively, the value $\phi\otimes\psi$ measures the degree that the fuzzy lower set $\phi$ intersects with the fuzzy upper set $\psi$. The correspondence \[(\psi,\phi)\mapsto \phi \otimes \psi\] defines a $\mathcal{Q}$-distributor \[ \otimes: {\cal P}^\dag A\oto{\cal P} A.\] In particular, for each fuzzy upper set $\psi$ of $A$, the correspondence $\phi\mapsto \phi\otimes \psi$ defines a fuzzy upper set of ${\cal P} A$: \begin{equation}\label{compositon as distributor}{-\otimes \psi}: {\cal P} A\longrightarrow Q. \end{equation} The following lemma exhibits a close relationship between the $\mathcal{Q}$-distributor $\otimes: {\cal P}^\dag A\oto{\cal P} A$ (intersection degree) and the fuzzy inclusion order (subset degree). \begin{lem}\label{tensor via sub} Let $A$ be a $\mathcal{Q}$-ordered set.
\begin{enumerate}[(1)]\item For each fuzzy lower set $\phi$ and each fuzzy upper set $\psi$ of $A$, \[\phi\otimes\psi=\bigwedge_{p\in Q}({\rm sub}_A(\phi,\psi\rightarrow p)\rightarrow p).\] In particular, if $\mathcal{Q}$ satisfies the law of double negation, then $\phi\otimes\psi = \neg({\rm sub}_A(\phi, \neg\psi))$. \item For any fuzzy lower sets $\phi_1,\phi_2$ of $A$, \[{\rm sub}_A(\phi_1,\phi_2)=\bigwedge_{p\in Q}(\phi_1\otimes(\phi_2\rightarrow p) \rightarrow p).\] In particular, if $\mathcal{Q}$ satisfies the law of double negation, then ${\rm sub}_A(\phi_1, \phi_2) = \neg(\phi_1\otimes(\neg\phi_2))$. \end{enumerate} \end{lem} \begin{proof}(1) By Proposition \ref{2.1}(7), it holds that \begin{align*}\phi\otimes\psi &= \bigvee_{x\in A}\phi(x)\&\psi(x)\\ &= \bigwedge_{p\in Q}\Big[\Big(\Big(\bigvee_{x\in A}\phi(x)\&\psi(x)\Big)\rightarrow p\Big) \rightarrow p\Big] \\ &= \bigwedge_{p\in Q}\Big[\bigwedge_{x\in A}(\phi(x)\rightarrow(\psi(x)\rightarrow p))\rightarrow p\Big] \\ &= \bigwedge_{p\in Q}({\rm sub}_A(\phi,\psi\rightarrow p)\rightarrow p). \end{align*} (2) The proof is similar, so we omit it here. \end{proof} A supremum of a fuzzy lower set $\phi$ of a $\mathcal{Q}$-ordered set $A$ is an element of $A$, say $\sup\phi$, such that \[A(\sup\phi,x)={\rm sub}_A(\phi,{\bf y}(x))\] for all $x\in A$. It is clear that, up to isomorphism, every fuzzy lower set has at most one supremum. So, we shall speak of \emph{the supremum of a fuzzy lower set}. A $\mathcal{Q}$-order preserving map $f:A\longrightarrow B$ preserves the supremum of a fuzzy lower set $\phi$ of $A$ if, whenever $\sup\phi$ exists, $f(\sup\phi)$ is a supremum of $f^\rightarrow(\phi)$. It is well-known that left adjoints preserve suprema. \begin{exmp}\cite{St05} \label{PA is complete} Let $A$ be a $\mathcal{Q}$-ordered set. Then every fuzzy lower set of ${\cal P} A$ has a supremum.
Actually, for each fuzzy lower set $\Lambda$ of ${\cal P} A$, $\sup\Lambda=\bigvee_{\phi\in{\cal P} A}\Lambda(\phi)\&\phi$. \end{exmp} \begin{exmp}(Intersection degree as supremum) \label{intersection as sup} For each fuzzy lower set $\phi$ and each fuzzy upper set $\psi$ of a $\mathcal{Q}$-ordered set $A$, the intersection degree of $\phi$ with $\psi$ is the supremum of $\psi^\rightarrow(\phi)$ in $(Q,d_L)$ (recall that $\psi:A\longrightarrow(Q,d_L)$ is a $\mathcal{Q}$-order preserving map), i.e., $\phi\otimes\psi=\sup\psi^\rightarrow(\phi)$. This is because for all $q\in Q$, \begin{align*}{\rm sub}_Q(\psi^\rightarrow(\phi),d_L(-,q)) &={\rm sub}_A(\phi,d_L(\psi(-), q))\\ &=\bigwedge_{x\in A}(\phi(x)\rightarrow(\psi(x)\rightarrow q)) \\ &= d_L\Big(\bigvee_{x\in A}\phi(x)\&\psi(x), q\Big)\\ &=d_L(\phi\otimes\psi,q). \end{align*} In particular, letting $\psi$ be the identity map on $(Q,d_L)$ one obtains that for each fuzzy lower set $\phi$ of $(Q,d_L)$, $\sup\phi=\bigvee_{q\in Q}q\&\phi(q)$. \end{exmp} \begin{exmp}(Inclusion degree as supremum) \label{inclusion as sup} For any fuzzy lower sets $\phi,\lambda$ of a $\mathcal{Q}$-ordered set $A$, the inclusion degree ${\rm sub}_A(\phi,\lambda)$ is the supremum of $\lambda^\rightarrow(\phi)$ in $(Q,d_R)$ (recall that $\lambda:A\longrightarrow(Q,d_R)$ is a $\mathcal{Q}$-order preserving map), i.e., ${\rm sub}_A(\phi,\lambda)=\sup\lambda^\rightarrow(\phi)$. This is because for all $q\in Q$, \begin{align*}{\rm sub}_Q(\lambda^\rightarrow(\phi),d_R(-,q)) &={\rm sub}_A(\phi,d_R(\lambda(-), q))\\ &=\bigwedge_{x\in A}(\phi(x)\rightarrow(q\rightarrow\lambda(x))) \\ &= \bigwedge_{x\in A}(q\rightarrow(\phi(x)\rightarrow\lambda(x)))\\ &=d_R({\rm sub}_A(\phi,\lambda),q).
\end{align*} In particular, letting $\lambda$ be the identity map on $(Q,d_R)$ one obtains that for each fuzzy lower set $\phi$ of $(Q,d_R)$, the supremum of $\phi$ in $(Q,d_R)$ is given by $\bigwedge_{q\in Q}(\phi(q)\rightarrow q)$. \end{exmp} \section{Forward Cauchy ideals, flat ideals and irreducible ideals} A net $\{x_i\}$ in a $\mathcal{Q}$-ordered set $A$ is forward Cauchy \cite{Wagner97} if \[\bigvee_i\bigwedge_{i\leq j\leq k}A(x_j,x_k)=1.\] Forward Cauchy nets are clearly a $\mathcal{Q}$-analogue of eventually monotone nets in partially ordered sets. A Yoneda limit (a.k.a.\ liminf) \cite{Wagner97} of a forward Cauchy net $\{x_i\}$ in $A$ is an element $a$ in $A$ such that \[A(a,y)=\bigvee_i\bigwedge_{i\leq j}A(x_j,y) \] for all $y\in A$. It is clear that the Yoneda limit is a $\mathcal{Q}$-version of the \emph{least eventual upper bound}. Yoneda limits of a forward Cauchy net, if they exist, are unique up to isomorphism. \begin{lem}\label{yoneda limit in Q} If $\{a_i\}$ is a forward Cauchy net in $(Q,d_L)$, then $\bigvee_i\bigwedge_{j\geq i}a_j$ is a Yoneda limit of $\{a_i\}$ and \[\bigvee_i\bigwedge_{j\geq i}a_j=\bigwedge_i\bigvee_{j\geq i}a_j.\] \end{lem} \begin{proof}The first half is \cite[Proposition 2.30]{Wagner97}. It remains to check the equality \[\bigvee_i\bigwedge_{j\geq i}a_j=\bigwedge_i\bigvee_{j\geq i}a_j.\] Since $\bigvee_i\bigwedge_{j\geq i}a_j$ is a Yoneda limit of $\{a_i\}$, it follows that for all $x\in Q$, \begin{align*} \Big(\bigvee_i\bigwedge_{j\geq i}a_j\Big)\rightarrow x& = d_L\Big(\bigvee_i\bigwedge_{j\geq i}a_j,x\Big)\\ &=\bigvee_i\bigwedge_{j\geq i}d_L(a_j,x) \\ &=\bigvee_i\bigwedge_{j\geq i}(a_j\rightarrow x) \\ &\leq \Big(\bigwedge_i\bigvee_{j\geq i}a_j\Big)\rightarrow x. \end{align*} Letting $x= \bigvee_i\bigwedge_{j\geq i}a_j$ we obtain that \[\bigvee_i\bigwedge_{j\geq i}a_j\geq\bigwedge_i\bigvee_{j\geq i}a_j.\] The inequality `$\leq$' is trivial, so the equality is valid.
\end{proof} \begin{prop} \label{3.9} {\rm(A special case of \cite[Theorem 3.1] {Wagner97})} For each forward Cauchy net $\{\phi_i\}$ in ${\cal P} A$, the fuzzy lower set $\bigvee_i\bigwedge_{j\geq i}\phi_j$ is a Yoneda limit of $\{\phi_i\}$. That is, for each fuzzy lower set $\phi$ of $A$, \[{\rm sub}_A\Big(\bigvee_i\bigwedge_{j\geq i}\phi_j,\phi\Big) =\bigvee_i\bigwedge_{j\geq i}{\rm sub}_A(\phi_j,\phi). \] \end{prop} The following proposition says that every Yoneda limit of a forward Cauchy net $\{x_i\}$ is a supremum of the fuzzy lower set generated by $\{x_i\}$. \begin{prop} {\rm (\cite[Lemma 46]{FSW})} \label{yoneda limit as suprema} An element $a$ in a $\mathcal{Q}$-ordered set $A$ is a Yoneda limit of a forward Cauchy net $\{x_i\}$ if and only if $a$ is a supremum of the fuzzy lower set $\bigvee_i\bigwedge_{i\leq j}A(-,x_j)$ generated by $\{x_i\}$. \end{prop} A fuzzy set $\lambda:A\longrightarrow Q$ is \emph{inhabited} if $\bigvee_{a\in A}\lambda(a)=1$. Inhabited fuzzy sets are the counterpart of non-empty sets in the fuzzy setting. \begin{defn}Let $A$ be a $\mathcal{Q}$-ordered set, $\phi:A\longrightarrow Q$ a fuzzy lower set of $A$. \begin{enumerate} \item[\rm(1)] $\phi $ is a forward Cauchy ideal if there exists a forward Cauchy net $\{x_i\}$ in $A$ such that \[\phi=\bigvee_i\bigwedge_{i\leq j}A(-,x_j).\] \item[\rm(2)] $\phi$ is a flat ideal if it is inhabited and is flat in the sense that \[\phi\otimes(\psi_1\wedge\psi_2)= \phi\otimes\psi_1 \wedge\phi\otimes\psi_2 \] for all fuzzy upper sets $\psi_1, \psi_2$ of $A$. \item[\rm(3)] $\phi$ is an irreducible ideal if it is inhabited and is irreducible in the sense that \[{\rm sub}_A(\phi, \phi_1\vee\phi_2)={\rm sub}_A(\phi, \phi_1)\vee{\rm sub}_A(\phi, \phi_2) \] for all fuzzy lower sets $\phi_1, \phi_2$ of $A$.
\end{enumerate} \end{defn} \begin{rem}\label{history} Forward Cauchy ideals, flat ideals and irreducible ideals in $\mathcal{Q}$-ordered sets are all natural extensions of ideals in a partially ordered set. The study of forward Cauchy ideals dates back to Wagner \cite{Wagner94,Wagner97}. For more information on forward Cauchy ideals the reader is referred to \cite{FK97,FS02,FSW,HW2011,HW2012,LZ07,ZF05}, besides the works of Wagner. The notion of flat ideals originates in the paper \cite{SV2005} of Vickers in the case that $\mathcal{Q}$ is Lawvere's quantale $([0,\infty]^{\rm op},+)$, under the name of \emph{flat left module}. It is extended to the general case in \cite{TLZ2014}. It is shown in \cite{TLZ2014} that if the quantale $\mathcal{Q}=(Q,\&)$ is a frame, i.e., $\&=\wedge$, then a fuzzy lower set $\phi$ of a $\mathcal{Q}$-ordered set $A$ is flat if and only if for any $x,y\in A$, \[\phi(x)\wedge\phi(y)\leq\bigvee_{z\in A}\phi(z)\wedge A(x,z)\wedge A(y,z).\] Hence, in the case that $\mathcal{Q}=(Q,\&)$ is a frame, flat ideals in a $\mathcal{Q}$-ordered set $A$ coincide with ideals of $A$ in the sense of \cite[Definition 5.1]{LZ07}. Irreducible ideals are introduced in \cite{Zhang18} in the study of sobriety of $\mathcal{Q}$-cotopological spaces. \end{rem} \begin{exmp}\label{principal ideal} For each $a$ in a $\mathcal{Q}$-ordered set $A$, the fuzzy lower set $A(-,a)$ is a forward Cauchy ideal, a flat ideal and an irreducible ideal.
\end{exmp} \begin{defn} (\cite{AK,KS05,LZ07}) \label{class} By a class of weights we mean a functor $\Phi: \mathcal{Q}$-${\sf Ord} \longrightarrow\mathcal{Q}$-{\sf Ord} such that \begin{enumerate}[(1)] \item for each $\mathcal{Q}$-ordered set $A$, $\Phi(A)$ is a subset of ${\cal P} A$ with the $\mathcal{Q}$-order inherited from ${\cal P} A$; \item for each $\mathcal{Q}$-ordered set $A$ and all $a\in A$, ${\bf y}(a)\in\Phi(A)$; \item $\Phi(f)= {\cal P} f =f^\rightarrow$ for every $\mathcal{Q}$-order preserving map $f: A\longrightarrow B$. \end{enumerate} \end{defn} The second condition ensures that $A$ can be embedded in $\Phi(A)$ via the Yoneda embedding. We also write ${\bf y}$ for the embedding $A\longrightarrow\Phi(A)$ if no confusion will arise. In category theory, a $\mathcal{Q}$-distributor of the form $A\oto *$ is called a \emph{weight} or a \emph{presheaf} \cite{KS05,St05}. This accounts for the terminology \emph{class of weights}. Together with Example \ref{principal ideal}, the following conclusion asserts that forward Cauchy ideals, flat ideals and irreducible ideals are all examples of classes of weights. \begin{prop}If $f:A\longrightarrow B$ is $\mathcal{Q}$-order preserving, then for each forward Cauchy ideal (flat ideal, irreducible ideal, resp.) $\phi$ of $A$, $f^\rightarrow(\phi)$ is a forward Cauchy ideal (flat ideal, irreducible ideal, resp.) of $B$. \end{prop} \begin{proof} We check, for example, that if $\phi$ is irreducible then so is $f^\rightarrow(\phi)$. For all fuzzy lower sets $\phi_1, \phi_2$ of $B$, thanks to Equation (\ref{kan adjunction}), we have \begin{align*}{\rm sub}_B(f^\rightarrow(\phi),\phi_1\vee\phi_2) &= {\rm sub}_A(\phi,(\phi_1\vee\phi_2)\circ f) \\ &= {\rm sub}_A(\phi,\phi_1\circ f)\vee{\rm sub}_A(\phi,\phi_2\circ f)\\ &= {\rm sub}_B(f^\rightarrow(\phi),\phi_1)\vee{\rm sub}_B(f^\rightarrow(\phi),\phi_2), \end{align*} hence $f^\rightarrow(\phi)$ is irreducible.
\end{proof} In the following, we write $\mathcal{W}$, ${\cal I}$ and ${\cal F}$ for the classes of forward Cauchy ideals, irreducible ideals and flat ideals, respectively. \begin{defn}Let $\Phi$ be a class of weights. A $\mathcal{Q}$-ordered set $A$ is $\Phi$-complete\footnote{~\emph{$\Phi$-cocomplete} would be a better terminology from the viewpoint of category theory. However, following the tradition in domain theory, we choose \emph{$\Phi$-complete} here.} if each $\phi\in\Phi(A)$ has a supremum. In particular, a $\mathcal{Q}$-ordered set $A$ is \begin{enumerate}[(1)] \item Yoneda complete (a.k.a.\ liminf complete) if each forward Cauchy ideal of $A$ has a supremum (which is equivalent to requiring that every forward Cauchy net in $A$ has a Yoneda limit); \item irreducible complete if each irreducible ideal of $A$ has a supremum; \item flat complete if each flat ideal of $A$ has a supremum. \end{enumerate} \end{defn} Yoneda completeness, irreducible completeness, and flat completeness are all natural extensions of \emph{directed completeness} to the fuzzy setting. In the case that $\mathcal{Q}=(Q,\&)$ is a frame, based on flat completeness (under the name of \emph{fuzzy directed completeness}), a theory of frame-valued directed complete orders and frame-valued domains has been developed in \cite{Liu-Zhao,Yao10,Yao16,YS11}. It is easily seen that $A$ is $\Phi$-complete if and only if ${\bf y}:A\longrightarrow\Phi(A)$ has a left adjoint. In this case, the left adjoint of ${\bf y}$ sends each $\phi\in\Phi(A)$ to its supremum $\sup\phi$. A $\mathcal{Q}$-order preserving map $f:A\longrightarrow B$ is \emph{$\Phi$-cocontinuous} if for all $\phi\in\Phi(A)$, $f(\sup\phi)=\sup f^\rightarrow(\phi)$ whenever $\sup\phi$ exists. This section mainly concerns the relationship among the class $\mathcal{W}$ of forward Cauchy ideals, the class ${\cal I}$ of irreducible ideals, and the class ${\cal F}$ of flat ideals.
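The supremum formula $\sup\phi=\bigvee_{q\in Q}q\&\phi(q)$ for fuzzy lower sets of $(Q,d_L)$ (Example \ref{intersection as sup}), which is exactly what the left adjoint of ${\bf y}$ computes in the simplest case, can be checked numerically. The following sketch is an illustration only, not part of the development: it assumes the \L{}ukasiewicz quantale on $[0,1]$ discretized to a finite grid, and the helper names (\verb|tensor|, \verb|implies|, \verb|sup_lower_set|) are ours. For the principal ideal ${\bf y}(a)=d_L(-,a)$ the computed supremum recovers $a$ itself, in accordance with the Yoneda lemma.

```python
# Numerical sketch (our illustration; the Lukasiewicz quantale is assumed):
# Q = [0,1] with a & b = max(0, a+b-1) and residual a -> b = min(1, 1-a+b).
# For a fuzzy lower set phi of (Q, d_L), sup phi = \/_q q & phi(q).
GRID = [i / 100 for i in range(101)]       # finite grid standing in for [0,1]

def tensor(a, b):                          # Lukasiewicz t-norm a & b
    return max(0.0, a + b - 1.0)

def implies(a, b):                         # its residual a -> b
    return min(1.0, 1.0 - a + b)

def sup_lower_set(phi):
    """sup phi = \\/_q q & phi(q) in (Q, d_L)."""
    return max(tensor(q, phi(q)) for q in GRID)

a = 0.6
principal = lambda q: implies(q, a)        # the principal ideal y(a) = d_L(-, a)
print(round(sup_lower_set(principal), 6))  # → 0.6, i.e., sup y(a) = a
```

Replacing the \L{}ukasiewicz operations by any other left continuous t-norm and its residual gives the corresponding check for that quantale.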
Given classes of weights $\Phi$ and $\Psi$, we say that $\Phi$ is a subclass of $\Psi$ if $\Phi(A)\subseteq \Psi(A)$ for each $\mathcal{Q}$-ordered set $A$. As we shall see, under some mild assumptions, $\mathcal{W}$ is a subclass of both ${\cal I}$ and ${\cal F}$. A complete lattice $L$ is meet continuous \cite{Gierz2003} if for all $a\in L$ and every directed subset $D$ of $L$, \[a\wedge \bigvee D=\bigvee_{d\in D}(a\wedge d).\] A complete lattice is dually meet continuous if its opposite is meet continuous. A quantale $\mathcal{Q}=(Q,\&)$ is (dually, resp.) meet continuous if the complete lattice $Q$ is (dually, resp.) meet continuous. \begin{thm}\label{FC is irreducible} For a dually meet continuous quantale $\mathcal{Q}$, every forward Cauchy ideal is irreducible. \end{thm} \begin{lem}\label{order convergence} If $\{a_i\}$ is a forward Cauchy net in the $\mathcal{Q}$-ordered set $(Q,d_R)$, then $ \bigwedge_i\bigvee_{j\geq i}a_j$ is a Yoneda limit of $\{a_i\}$ and \[\bigvee_i\bigwedge_{j\geq i}a_j=\bigwedge_i\bigvee_{j\geq i}a_j.\] \end{lem} \begin{proof}First, we show that $\bigwedge_i\bigvee_{j\geq i}a_j$ is a Yoneda limit of $\{a_i\}$. That is, for all $x\in Q$, \[d_R\Big(\bigwedge_i\bigvee_{j\geq i}a_j,x\Big)=\bigvee_i\bigwedge_{j\geq i}d_R(a_j,x).\] On one hand, since $\{a_i\}$ is a forward Cauchy net in $(Q,d_R)$, \[\bigvee_i\bigwedge_{i\leq j\leq l}(a_l\rightarrow a_j)=\bigvee_i\bigwedge_{i\leq j\leq l}d_R(a_j,a_l)=1,\] then \[\bigvee_i\bigwedge_{j\geq i}\Big[\bigvee_k\bigwedge_{l\geq k}(a_l\rightarrow a_j)\Big]=1,\] hence \[\bigvee_i\bigwedge_{j\geq i}\Big[\Big(\bigwedge_k\bigvee_{l\geq k}a_l\Big)\rightarrow a_j\Big]=1.\] Thus, \begin{align*}d_R\Big(\bigwedge_i\bigvee_{j\geq i}a_j,x\Big)&= \Big[x\rightarrow\bigwedge_k\bigvee_{l\geq k}a_l\Big]\&\bigvee_i\bigwedge_{j\geq i}\Big[\Big(\bigwedge_k\bigvee_{l\geq k}a_l\Big)\rightarrow a_j\Big]\\ &\leq \bigvee_i\bigwedge_{j\geq i}(x\rightarrow a_j) \\ &=\bigvee_i\bigwedge_{j\geq i}d_R(a_j,x).
\end{align*} On the other hand, since for each $i$ we always have \[x\rightarrow \bigvee_{j\geq i}a_j \geq \bigvee_k\bigwedge_{l\geq k}(x\rightarrow a_l),\] it follows that \[d_R\Big(\bigwedge_i\bigvee_{j\geq i}a_j,x\Big)=\bigwedge_i\Big(x\rightarrow\bigvee_{j\geq i}a_j\Big)\geq \bigvee_k\bigwedge_{l\geq k}(x\rightarrow a_l)=\bigvee_i\bigwedge_{j\geq i}d_R(a_j,x) . \] Therefore, $\bigwedge_i\bigvee_{j\geq i}a_j$ is a Yoneda limit of $\{a_i\}$. Next, we prove the equality \[\bigvee_i\bigwedge_{j\geq i}a_j=\bigwedge_i\bigvee_{j\geq i}a_j.\] Since $\bigwedge_i\bigvee_{j\geq i}a_j$ is a Yoneda limit of $\{a_i\}$ in $(Q,d_R)$, it follows that for all $x\in Q$, \begin{align*} x\rightarrow \bigwedge_i\bigvee_{j\geq i}a_j& = d_R\Big(\bigwedge_i\bigvee_{j\geq i}a_j,x\Big)\\ &=\bigvee_i\bigwedge_{j\geq i}d_R(a_j,x) \\ &=\bigvee_i\bigwedge_{j\geq i}(x\rightarrow a_j) \\ &\leq x\rightarrow \bigvee_i\bigwedge_{j\geq i}a_j. \end{align*} Letting $x=1$, we obtain that \[\bigwedge_i\bigvee_{j\geq i}a_j\leq \bigvee_i\bigwedge_{j\geq i}a_j.\] The converse inequality is trivial, so the equality is valid. \end{proof} Lemma \ref{yoneda limit in Q} and Lemma \ref{order convergence} imply that every forward Cauchy net in the $\mathcal{Q}$-ordered sets $(Q,d_L)$ and $(Q,d_R)$ is order convergent. But, $(Q,d_L)$ and $(Q,d_R)$ may have different forward Cauchy nets. For example, the sequence $\{n\}$ is forward Cauchy in $([0,\infty],d_R)$ but not in $([0,\infty],d_L)$. \begin{proof}[Proof of Theorem \ref{FC is irreducible}] Let $\{x_i\}$ be a forward Cauchy net in a $\mathcal{Q}$-ordered set $A$ and $\varphi=\bigvee_i\bigwedge_{j\geq i}A(-,x_j)$. We show that $\varphi$ is an irreducible ideal. \textbf{Step 1}. $\varphi$ is inhabited. This is easy since \[\bigvee_{x\in A}\varphi(x)\geq\bigvee_{i}\varphi(x_i)=\bigvee_{i}\bigvee_j\bigwedge_{k\geq j}A(x_i,x_k)\geq \bigvee_{i} \bigwedge_{k\geq i}A(x_i,x_k)=1.\] \textbf{Step 2}. 
For each fuzzy lower set $\phi$ of $A$, \[{\rm sub}_A(\varphi,\phi)= \bigwedge_i\bigvee_{j\geq i}\phi(x_j).\] Since $\phi$ is a fuzzy lower set, $\{\phi(x_j)\}$ is a forward Cauchy net in $(Q, d_R)$. Then, \begin{align*}{\rm sub}_A(\varphi,\phi)&= {\rm sub}_A\Big(\bigvee_i\bigwedge_{j\geq i}A(-,x_j),\phi\Big) \\ & =\bigvee_i\bigwedge_{j\geq i}{\rm sub}_A(A(-,x_j),\phi) & \text{(Proposition \ref{3.9})}\\ &= \bigvee_i\bigwedge_{j\geq i}\phi(x_j)& \text{(Yoneda lemma)}\\ &= \bigwedge_i\bigvee_{j\geq i}\phi(x_j).& \text{(Lemma \ref{order convergence})} \end{align*} \textbf{Step 3}. For all fuzzy lower sets $\phi_1, \phi_2$ of $A$, ${\rm sub}_A(\varphi, \phi_1\vee\phi_2)={\rm sub}_A(\varphi, \phi_1)\vee{\rm sub}_A(\varphi, \phi_2)$. This is easy since \begin{align*}{\rm sub}_A(\varphi, \phi_1)\vee{\rm sub}_A(\varphi, \phi_2)&= \bigwedge_i\bigvee_{j\geq i}\phi_1(x_j)\vee \bigwedge_i\bigvee_{j\geq i}\phi_2(x_j)& \text{(Step 2)}\\ & = \bigwedge_i\bigvee_{j\geq i}(\phi_1(x_j)\vee\phi_2(x_j)) & \text{($\mathcal{Q}$ is dually meet continuous)}\\&= {\rm sub}_A(\varphi, \phi_1\vee\phi_2).& \text{(Step 2)}\end{align*} The proof is completed. \end{proof} Interestingly, the dual meet continuity of $\mathcal{Q}$ is also necessary in Theorem \ref{FC is irreducible}. \begin{prop}\label{DMC is necessary} If all forward Cauchy ideals are irreducible, then the quantale $\mathcal{Q}$ is dually meet continuous. \end{prop} \begin{proof}We show that for each $a\in Q$ and each filtered subset $F$ of $Q$, \[a\vee\bigwedge_{x\in F} x=\bigwedge_{x\in F}(a\vee x).\] Consider the fuzzy lower set $\phi =\bigvee_{x\in F}d_R(-,x)$ of the $\mathcal{Q}$-ordered set $(Q, d_R)$. Since \[\phi =\bigvee_{x\in F}\bigwedge_{y\in F,y\leq x}d_R(-,y),\] it follows that $\phi$ is a forward Cauchy ideal, hence an irreducible ideal by assumption.
Since both the identity map ${\rm id}_Q$ on $Q$ and the constant map $\underline{a}:Q\longrightarrow Q$ with value $a$ are fuzzy lower sets of $(Q, d_R)$, we have \begin{align*}a\vee\bigwedge_{x\in F} x&={\rm sub}_Q(\phi,\underline{a})\vee{\rm sub}_Q(\phi,{\rm id}_Q) \\ &={\rm sub}_Q(\phi,\underline{a}\vee{\rm id}_Q) \\ &= \bigwedge_{x\in F}(a\vee x).\end{align*} This finishes the proof. \end{proof} Irreducible ideals need not be forward Cauchy in general. Let $\mathcal{Q}=\{0,a,b,1\}$ be the Boolean algebra with four elements. Assume that $A$ is the $\mathcal{Q}$-ordered set with points $x,y$ and \[A(x,x)=A(y,y)=1, \quad A(x,y)=A(y,x)=0.\] Then the map $\phi$, given by $\phi(x)=a$ and $\phi(y)=b$, is an irreducible ideal in $A$. But, $\phi$ cannot be generated by any forward Cauchy net in $A$. This example is essentially \cite[Note 3.12]{Zhang18}. The following conclusion is very useful in the theory of fuzzy orders based on left continuous t-norms; it says that every irreducible ideal is forward Cauchy in this case. \begin{thm}\label{G is FC} If the quantale $\mathcal{Q}$ is the unit interval equipped with a left continuous t-norm $\&$, then irreducible ideals coincide with forward Cauchy ideals. \end{thm} \begin{proof} By Theorem \ref{FC is irreducible}, we only need to prove that every irreducible ideal $\phi $ of a $\mathcal{Q}$-ordered set $A$ is forward Cauchy. Let \[{\rm C}\phi = \{(x,r)\in A\times[0,1)\mid \phi(x)>r\}.\] Define a relation $\sqsubseteq$ on ${\rm C}\phi $ by \[ (x,r)\sqsubseteq(y,s)\iff A(x,y)\rightarrow r \leq s. \] We claim that $({\rm C}\phi,\sqsubseteq)$ is a directed set. Before proving this, we note that if $(x,r)\sqsubseteq(y,s)$ then $r<A(x,y) $ and $r\leq s$. That $\sqsubseteq$ is reflexive and transitive is easy; it remains to check that it is directed. For any $ (x,r),(y,s)\in {\rm C}\phi$, consider the fuzzy lower sets $\psi_1=A(x,-)\rightarrow r$ and $ \psi_2=A(y,-)\rightarrow s$.
Since $\phi $ is an irreducible ideal, \begin{align*} {\rm sub}_A(\phi,\psi_1\vee\psi_2) &={\rm sub}_A(\phi,A(x,-)\rightarrow r)\vee {\rm sub}_A(\phi,A(y,-)\rightarrow s) \\ &= {\rm sub}_A(A(x,-),\phi\rightarrow r)\vee {\rm sub}_A(A(y,-),\phi\rightarrow s)\\ &=(\phi(x)\rightarrow r)\vee(\phi(y)\rightarrow s). \end{align*} Since $(\phi(x)\rightarrow r)\vee(\phi(y)\rightarrow s)<1$, there exists some $z$ such that \[ \phi(z)\rightarrow\big[(A(x,z)\rightarrow r)\vee(A(y,z)\rightarrow s)\big] <1. \] Let $ t=(A(x,z)\rightarrow r)\vee(A(y,z)\rightarrow s)$, then $(z,t)\in {\rm C}\phi$ and $(x,r)\sqsubseteq(z,t), (y,s)\sqsubseteq(z,t)$. Hence $({\rm C}\phi,\sqsubseteq)$ is a directed set. From now on, we also write an element in ${\rm C}\phi$ as a pair $(x_i,r_i)$. Define a net \[\mathfrak{x}:{\rm C}\phi\longrightarrow A\] by $\mathfrak{x}(x_i,r_i)=x_i.$ We prove in two steps that $\mathfrak{x}$ is a forward Cauchy net and that it generates $\phi$, hence $\phi$ is a forward Cauchy ideal. \textbf{Step 1}. $\mathfrak{x}$ is forward Cauchy. Let $t<1$. Since $\phi$ is inhabited, there is some $(x_i,r_i)\in {\rm C}\phi$ such that $t\leq r_i$. Then $A(x_j,x_k)\rightarrow r_j\leq r_k<1$ whenever $(x_k,r_k)\sqsupseteq(x_j, r_j)\sqsupseteq(x_i,r_i)$, hence \[A(x_j,x_k)> r_j\geq r_i\geq t.\] By arbitrariness of $t$ we obtain that $\mathfrak{x}$ is forward Cauchy. \textbf{Step 2}. $\phi$ is generated by $\mathfrak{x}$, i.e., \[ \phi(x)=\bigvee_{(x_i,r_i)}\bigwedge_{(x_j,r_j) \sqsupseteq (x_i,r_i)}A(x,x_{j}) \] for all $x\in A$. Take $x\in A$ and $r<\phi(x)$. For all $(x_j,r_j)\in {\rm C}\phi$, if $(x,r)\sqsubseteq(x_j,r_j)$, then $A(x,x_j)> r$, hence, by arbitrariness of $r$, \[ \phi(x)\leq\bigvee_{r<\phi(x)}\bigwedge_{(x_j,r_j) \sqsupseteq (x,r)}A(x,x_{j})\leq\bigvee_{(x_i, r_i)}\bigwedge_{(x_j,r_j) \sqsupseteq (x_i,r_i)}A(x,x_{j}). \] For the converse inequality, we show that for each $(x_i,r_i)\in{\rm C}\phi$, \[ \phi(x)\geq \bigwedge_{(x_j,r_j) \sqsupseteq (x_i,r_i)}A(x,x_j).
\] Let $t$ be an arbitrary number that is strictly smaller than \[\bigwedge_{(x_j,r_j) \sqsupseteq (x_i,r_i)}A(x,x_j).\] Since $\& $ is left continuous and $\phi$ is inhabited, there is some $(x_k,r_k) \in {\rm C}\phi$ such that \[ t\leq r_k\&\bigwedge_{(x_j,r_j) \sqsupseteq (x_i,r_i)}A(x,x_j). \] Take some $(x_l,r_l)\in {\rm C}\phi$ such that $(x_i,r_i), (x_k,r_k)\sqsubseteq(x_l,r_l)$. Then \begin{align*} \phi(x) \geq\phi(x_l)\&A(x,x_l) \geq r_k\& A(x,x_l)\geq t.\end{align*} Therefore, by arbitrariness of $t$, \[\phi(x)\geq \bigwedge_{(x_j,r_j) \sqsupseteq (x_i,r_i)}A(x,x_j). \] The proof is completed. \end{proof} A slight improvement of the argument shows that the above theorem is valid for all linearly ordered quantales. That is, if $\mathcal{Q}$ is a linearly ordered quantale, then irreducible ideals coincide with forward Cauchy ideals. As an application, the following corollary characterizes, for the quantale $\mathcal{Q}=([0,1],\&)$ with $\&$ being a left continuous t-norm, the irreducible ideals in the $\mathcal{Q}$-ordered sets $([0,1],d_L)$ and $([0,1],d_R)$. \begin{cor} \label{irreducible ideals in [0,1]} Let $\&$ be a left continuous t-norm and $\mathcal{Q}=([0,1],\&)$. \begin{enumerate}[(1)]\item A fuzzy lower set $\phi$ of the $\mathcal{Q}$-ordered set $([0,1],d_L)$ is an irreducible ideal if and only if either $\phi(x)=x \rightarrow a$ for some $a\in [0,1]$ or $\phi(x)=\bigvee_{b<a}(x\rightarrow b)$ for some $ a>0$. \item A fuzzy lower set $\psi$ of the $\mathcal{Q}$-ordered set $([0,1],d_R)$ is an irreducible ideal if and only if either $\psi(x)=a \rightarrow x$ for some $a\in [0,1]$ or $\psi(x)=\bigvee_{b>a}(b\rightarrow x)$ for some $ a<1$. \end{enumerate} \end{cor} \begin{proof}(1) Sufficiency is easy since the fuzzy lower set $\phi(x)=\bigvee_{b<a}(x\rightarrow b)$ is generated by the forward Cauchy sequence $\{a-1/n\}$. As for necessity, suppose that $\phi$ is an irreducible ideal of $([0,1],d_L)$.
Then there is a forward Cauchy net $\{x_i\}$ in $([0,1],d_L)$ such that \[\phi(x)=\bigvee_i\bigwedge_{j\geq i}(x\rightarrow x_j).\] Let $a= \bigvee_i\bigwedge_{j\geq i}x_j$. Then \[\phi(x)=\bigvee_i\bigwedge_{j\geq i}(x\rightarrow x_j)=\bigvee_i\Big(x\rightarrow \bigwedge_{j\geq i}x_j\Big), \] hence either $\phi(x)=x \rightarrow a$ or $\phi(x)=\bigvee_{b<a}(x\rightarrow b)$. (2) Similar to (1). \end{proof} \begin{thm}\label{FC is flat} For a meet continuous quantale $\mathcal{Q}$, every forward Cauchy ideal is flat. \end{thm} \begin{proof}We only need to show that if $\{x_i\}$ is a forward Cauchy net in a $\mathcal{Q}$-ordered set $A$ and \[\varphi=\bigvee_i\bigwedge_{j\geq i}A(-,x_j),\] then $\varphi$ is flat. We do this in two steps. \textbf{Step 1}. For each fuzzy upper set $\psi$ of $A$, \[\varphi\otimes\psi=\bigvee_i\bigwedge_{j\geq i}\psi(x_j).\] Since $\psi$ is a fuzzy upper set, $\{\psi(x_i)\}$ is a forward Cauchy net in $(Q,d_L)$, hence \begin{align*}\varphi\otimes\psi &= \bigwedge_{p\in Q}({\rm sub}_A(\varphi,\psi\rightarrow p)\rightarrow p) &(\text{Lemma \ref{tensor via sub}})\\ &= \bigwedge_{p\in Q}\Big({\rm sub}_A\Big(\bigvee_i\bigwedge_{j\geq i}A(-,x_j),\psi\rightarrow p\Big)\rightarrow p\Big) \\ &= \bigwedge_{p\in Q}\Big(\Big(\bigvee_i\bigwedge_{j\geq i}{\rm sub}_A \big(A(-,x_j),\psi\rightarrow p \big)\Big)\rightarrow p\Big)&(\text{Proposition \ref{3.9}}) \\ &= \bigwedge_{p\in Q}\Big(\Big(\bigvee_i\bigwedge_{j\geq i}(\psi(x_j)\rightarrow p)\Big)\rightarrow p\Big) &(\text{Yoneda lemma})\\ &= \bigwedge_{p\in Q}\Big(\Big(\bigvee_i\bigwedge_{j\geq i}\psi(x_j)\rightarrow p\Big)\rightarrow p\Big) &(\text{Lemma \ref{yoneda limit in Q}})\\ &= \bigvee_i\bigwedge_{j\geq i}\psi(x_j). \end{align*} \textbf{Step 2}. $\varphi$ is flat.
For any fuzzy upper sets $\psi_1$ and $\psi_2$, \begin{align*}\varphi\otimes \psi_1\wedge\varphi\otimes\psi_2 &=\bigvee_i\bigwedge_{j\geq i} \psi_1(x_j) \wedge\bigvee_i\bigwedge_{j\geq i} \psi_2(x_j) &(\text{Step 1})\\ & = \bigvee_i\bigwedge_{j\geq i}(\psi_1(x_j)\wedge\psi_2(x_j)) & (\mathcal{Q}~\text{is meet continuous})\\ & = \varphi\otimes(\psi_1\wedge\psi_2). &(\text{Step 1}) \end{align*} The proof is completed. \end{proof} Similar to Proposition \ref{DMC is necessary}, it can be shown that the meet continuity of $\mathcal{Q}$ is also necessary in Theorem \ref{FC is flat}. \begin{prop}\label{MC is necessary} If all forward Cauchy ideals are flat, then the quantale $\mathcal{Q}$ is meet continuous. \end{prop} \begin{exmp} Consider the quantale $\mathcal{Q}=([0,1],\wedge)$ and the $\mathcal{Q}$-ordered set $ ([0,1],d_L)$. By linearity of $[0,1]$, every fuzzy lower set $\phi$ of $([0,1],d_L)$ satisfies \[\phi(x)\wedge \phi(y)\leq \bigvee\limits_{z\in[0,1]}\phi(z)\wedge d_L(x, z)\wedge d_L(y, z), \] hence every inhabited fuzzy lower set $\phi$ of $([0,1],d_L)$ is a flat ideal by Remark \ref{history}. In particular, the map $\phi:[0,1]\longrightarrow[0,1]$, given by \[\phi(x)=\begin{cases}1-x, & x\leq 1/2, \\ 1/2, &x>1/2,\end{cases} \] is a flat ideal of $([0,1],d_L)$. But, it is not irreducible by Corollary \ref{irreducible ideals in [0,1]}, hence not forward Cauchy by Theorem \ref{FC is irreducible}. \end{exmp} In the case that $\mathcal{Q}$ is the interval $[0,1]$ equipped with a continuous t-norm, we are able to present a necessary and sufficient condition for flat ideals to be forward Cauchy. \begin{thm}\label{main} Let $\mathcal{Q}$ be the unit interval equipped with a continuous t-norm $\&$. The following are equivalent: \begin{enumerate}[(1)] \item $\&$ is Archimedean, i.e., $\&$ has no non-trivial idempotent elements. \item Every flat ideal is forward Cauchy. \item Every flat ideal is irreducible. \end{enumerate} \end{thm} We prove a lemma first.
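Before turning to the lemma, we note that the flat-but-not-irreducible ideal of the example above admits a direct numerical check. The following sketch is our own illustration, not part of the proofs: it assumes the G\"odel t-norm $\&=\wedge$ restricted to a finite grid, and the two witness lower sets $\phi_1,\phi_2$ are hypothetical choices of ours. It confirms the flatness criterion of Remark \ref{history} for $\phi$ and exhibits a pair $\phi_1,\phi_2$ on which irreducibility fails.

```python
# Sanity check for the example above (our sketch; Godel t-norm & = min assumed,
# so the residual is a -> b = 1 if a <= b else b; the grid stands in for [0,1]).
GRID = [i / 20 for i in range(21)]

def imp(a, b):                 # Godel implication, the residual of min
    return 1.0 if a <= b else b

def sub(f, g):                 # inclusion degree sub(f, g) over the grid
    return min(imp(f(x), g(x)) for x in GRID)

phi  = lambda x: 1 - x if x <= 0.5 else 0.5   # the flat ideal of the example
phi1 = lambda x: 1.0 if x <= 0.35 else 0.4    # witness lower set (our choice)
phi2 = lambda x: 1.0 if x <= 0.2 else 0.7     # witness lower set (our choice)
join = lambda x: max(phi1(x), phi2(x))

# flatness criterion of Remark [history] (here Q is a frame, & = min):
flat = all(min(phi(x), phi(y)) <=
           max(min(phi(z), imp(x, z), imp(y, z)) for z in GRID)
           for x in GRID for y in GRID)
print(flat)                                   # → True: phi is flat
print(sub(phi, join), max(sub(phi, phi1), sub(phi, phi2)))  # → 1.0 0.7
# sub(phi, phi1 v phi2) = 1.0 > 0.7, so phi is not irreducible.
```

Here $\phi_1,\phi_2$ are the step functions $1$ on $[0,0.35]$ then $0.4$, and $1$ on $[0,0.2]$ then $0.7$; both satisfy the lower set inequality for the G\"odel structure, so the strict gap $1.0>0.7$ really witnesses the failure of irreducibility.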
A quantale $\mathcal{Q}=(Q,\&)$ is called \emph{divisible} \cite{Ha98,Ho95} if \[x\&(x\rightarrow y)=x\wedge y\] for all $x,y\in Q$. It is known that the underlying lattice of a divisible quantale is a frame, hence a distributive lattice, see e.g. \cite{Ho95}. Let $\mathcal{Q}$ be a divisible quantale and $b$ an idempotent element. Then for all $x\in Q$, \[b\wedge x=b\&(b\rightarrow x)=b\&b\&(b\rightarrow x)\leq b\&x\leq b\wedge x,\] hence $b\wedge x=b\&x$. \begin{lem}\label{3.11} Suppose that $\mathcal{Q}=(Q,\&)$ is a divisible quantale and $b$ is an idempotent element of $\&$. Then for each $a\in Q$, the map $\phi(x)=b\vee(x\rightarrow a)$ is a flat ideal of the $\mathcal{Q}$-ordered set $(Q,d_L)$.\end{lem} \begin{proof} It is clear that $\phi$ is a fuzzy lower set of $(Q,d_L)$ and $\bigvee_{x\in Q}\phi(x)=1$. It remains to check that $\phi\otimes(\psi_1\wedge\psi_2)= (\phi\otimes\psi_1)\wedge(\phi\otimes\psi_2)$ for all fuzzy upper sets $\psi_1,\psi_2 $ of $(Q,d_L)$. Because for $i=1,2$, \begin{align*} \phi\otimes \psi_i &= \bigvee_{x\in Q}((b\&\psi_i(x))\vee((x\rightarrow a)\&\psi_i(x)))\\ &= (b\wedge\psi_i(1))\vee \psi_i(a) &(\text{$b$ is idempotent})\\ & = (b\vee\psi_i(a))\wedge\psi_i(1), &(\text{$Q$ is distributive}) \end{align*} it follows that \begin{align*}(\phi\otimes\psi_1)\wedge(\phi\otimes\psi_2)&= (b\vee\psi_1(a))\wedge\psi_1(1)\wedge(b\vee\psi_2(a))\wedge\psi_2(1)\\ &= (b\vee(\psi_1(a)\wedge\psi_2(a)))\wedge (\psi_1(1)\wedge\psi_2(1))\\ &= \phi\otimes(\psi_1\wedge\psi_2). \end{align*} This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] $(1)\Rightarrow(2)$ This is contained in \cite[Proposition 7.9]{LLZ17}. If $\&$ is isomorphic to the product t-norm, an equivalent version of this implication can also be found in Vickers \cite{SV2005}. $(2)\Rightarrow(3)$ This follows immediately from Theorem \ref{FC is irreducible}. $(3)\Rightarrow(1)$ Suppose $b$ is a non-trivial idempotent element of $\&$. Take some $a\in (0,b)$.
Since $[0,1]$ together with a continuous t-norm is a divisible quantale \cite{Belo02,Ha98}, it follows from Lemma \ref{3.11} that $\phi(x)=b\vee(x\rightarrow a)$ is a flat ideal of $([0,1],d_L)$. But, $\phi$ is not irreducible, because neither $\phi(x)\leq b$ for all $x$ nor $\phi(x)\leq x\rightarrow a$ for all $x$ holds, a contradiction. \end{proof} Finally, we discuss the relationship between irreducible ideals and flat ideals. A quantale $\mathcal{Q}$ is prelinear if $(p\rightarrow q)\vee(q\rightarrow p)=1$ for all $p,q\in Q$. It is known that $\mathcal{Q}$ is prelinear if and only if $(p\wedge q)\rightarrow r= (p\rightarrow r)\vee(q\rightarrow r)$ for all $p,q,r\in Q$, see e.g. \cite{Belo02}. \begin{prop}\label{irr is flat} If $\mathcal{Q}$ is prelinear, then every irreducible ideal is a flat ideal. \end{prop} \begin{proof}Assume that $\phi$ is an irreducible ideal. Then for all fuzzy upper sets $\psi_1,\psi_2$, \begin{align*}\phi\otimes(\psi_1\wedge\psi_2) &= \bigwedge_{p\in Q} ({\rm sub}_A (\phi,(\psi_1\wedge\psi_2)\rightarrow p )\rightarrow p )\\ &= \bigwedge_{p\in Q} ({\rm sub}_A (\phi,(\psi_1\rightarrow p)\vee(\psi_2 \rightarrow p) )\rightarrow p )\\ &= \bigwedge_{p\in Q}\Big( ({\rm sub}_A (\phi, \psi_1\rightarrow p )\rightarrow p )\wedge ({\rm sub}_A (\phi, \psi_2 \rightarrow p )\rightarrow p )\Big)\\ &= (\phi\otimes\psi_1)\wedge(\phi\otimes\psi_2), \end{align*} hence $\phi$ is flat. \end{proof} \begin{prop} If $\mathcal{Q}$ satisfies the law of double negation then flat ideals coincide with irreducible ideals.\end{prop} \begin{proof}If $\mathcal{Q}$ satisfies the law of double negation, then, by Lemma \ref{tensor via sub}, for all fuzzy lower sets $\phi,\varphi$ and each fuzzy upper set $\psi$, \begin{equation*}{\rm sub}_A(\phi, \varphi) = \phi\otimes(\varphi\rightarrow0)\rightarrow0 \end{equation*} and \begin{equation*}\phi\otimes\psi = {\rm sub}_A(\phi, \psi\rightarrow0)\rightarrow0. \end{equation*} The conclusion follows easily from these equations.
\end{proof} \begin{cor}\label{irreducible weights in [0,1] NM} Let $\mathcal{Q}$ be the unit interval equipped with a left continuous t-norm $\&$. If $\mathcal{Q}$ satisfies the law of double negation, then for each fuzzy lower set $\phi$ of a $\mathcal{Q}$-ordered set, the following are equivalent: \begin{enumerate}[\rm(1)] \item $\phi$ is a forward Cauchy ideal. \item $\phi$ is an irreducible ideal. \item $\phi$ is a flat ideal. \end{enumerate} \end{cor} \section{Saturatedness} Let $\Phi$ be a class of weights. A $\mathcal{Q}$-ordered set $A$ is \emph{$\Phi$-continuous} if it is $\Phi$-complete and the left adjoint $\sup:\Phi(A)\longrightarrow A$ of ${\bf y}:A\longrightarrow\Phi(A)$ has a left adjoint. This kind of postulation is standard in order theory \cite{Wood}. In the case that $\Phi={\cal P}$ (the largest class of weights), $\Phi$-continuous $\mathcal{Q}$-ordered sets are the completely distributive (or, totally continuous) $\mathcal{Q}$-categories in \cite{PZ15,St07}. We write $\Phi$-{\sf Cont} for the category of $\Phi$-continuous $\mathcal{Q}$-ordered sets and $\Phi$-cocontinuous maps. The category $\Phi$-{\sf Cont} is the subject of fuzzy domain theory. So, a natural question is whether such $\mathcal{Q}$-ordered sets exist. As we will see, saturatedness of $\Phi$ guarantees that there exist enough of them. A class of weights $\Phi$ is {\it saturated} \cite{KS05,LZ07} if for each $\mathcal{Q}$-ordered set $A$ and each $\Lambda\in\Phi(\Phi(A))$, \[\bigvee_{\phi\in\Phi(A)}\Lambda(\phi)\&\phi\in\Phi(A).\] A category-minded reader will soon recognize that a saturated class of weights is an example of a KZ-monad \cite{Kock,Zo76}. \begin{thm}Let $\Phi$ be a saturated class of weights. \begin{enumerate}[(1)] \item For each $\mathcal{Q}$-ordered set $A$, $\Phi(A)$ is $\Phi$-continuous. \item For each $\mathcal{Q}$-order preserving map $f:A\longrightarrow B$, $\Phi(f)$ is $\Phi$-cocontinuous.
\item The functor $\Phi:\mathcal{Q}$-${\sf Ord}\longrightarrow\Phi$-{\sf Cont}, which sends each $\mathcal{Q}$-order preserving map $f$ to $\Phi(f)$, is a left adjoint of the forgetful functor $\Phi$-${\sf Cont}\longrightarrow\mathcal{Q}$-{\sf Ord}. \end{enumerate} \end{thm} \begin{proof}(1) For each $\Lambda\in\Phi(\Phi(A))$, since $\Phi$ is saturated, $\bigvee_{\phi\in\Phi(A)}\Lambda(\phi)\&\phi$ belongs to $\Phi(A)$. It is easy to verify that for all $\psi\in\Phi(A)$, \begin{align*}{\cal P} \Phi(A) (\Lambda, \Phi(A)(-,\psi)) &= \Phi(A)\Big(\bigvee_{\phi\in\Phi(A)}\Lambda(\phi)\&\phi, \psi\Big) \end{align*} hence $\bigvee_{\phi\in\Phi(A)}\Lambda(\phi)\&\phi$ is a supremum of $\Lambda$ in $\Phi(A)$, i.e., \[{\sup}_{\Phi(A)} \Lambda =\bigvee_{\phi\in\Phi(A)}\Lambda(\phi)\&\phi.\] This shows that $\Phi(A)$ is $\Phi$-complete. Next, we show that $\Phi(A)$ is $\Phi$-continuous, that is, $\sup_{\Phi(A)}:\Phi(\Phi(A))\longrightarrow\Phi(A)$ has a left adjoint. To this end, write ${\bf y}_A$ for the Yoneda embedding $A\longrightarrow\Phi(A)$. For each $\Lambda\in\Phi(\Phi(A))$, since $\Lambda:\Phi(A) \longrightarrow(Q,d_R)$ preserves $\mathcal{Q}$-order, it follows that for all $x\in A$ and $\phi\in\Phi(A)$, \[\phi(x)= {\rm sub}_A({\bf y}_A(x),\phi)\leq \Lambda(\phi)\rightarrow\Lambda({\bf y}_A(x)), \] hence \[{\sup}_{\Phi(A)}(\Lambda)(x)= \bigvee_{\phi\in\Phi(A)}\Lambda(\phi)\&\phi(x) =\Lambda\circ{\bf y}_A(x).\] This means that $\sup_{\Phi(A)}$ is the map obtained by restricting the domain and codomain of \[{\bf y}_A^\leftarrow:{\cal P}\Phi(A)\longrightarrow{\cal P} A\] to $\Phi(\Phi(A))$ and $\Phi(A)$, respectively. Therefore, $\sup_{\Phi(A)}$ has a left adjoint, given by restricting the domain and codomain of ${\bf y}_A^\rightarrow:{\cal P} A\longrightarrow{\cal P}\Phi(A)$ to $\Phi(A)$ and $\Phi(\Phi(A))$, respectively.
(2) and (3) are a special case of \cite[Theorem 4.7]{LZ07}, which is again a special case of a general result in category theory \cite{AK,Kelly,KS05}. \end{proof} The above theorem shows that if $\Phi$ is a saturated class of weights, then for each $\mathcal{Q}$-ordered set $A$, $\Phi(A)$ is the free $\Phi$-continuous $\mathcal{Q}$-ordered set generated by $A$. This section concerns the saturatedness of the classes of forward Cauchy ideals, irreducible ideals, and flat ideals. A quantale $\mathcal{Q}=(Q,\&)$ is completely distributive (continuous, resp.) if the complete lattice $Q$ is a completely distributive lattice (a continuous lattice, resp.). So, each completely distributive quantale is a continuous quantale and each continuous quantale is a meet continuous quantale. For continuity and complete distributivity of complete lattices, the reader is referred to the monograph \cite{Gierz2003}. The following proposition was first proved in \cite{FSW} in the case that $\mathcal{Q}$ is a \emph{completely distributive value quantale}; the version presented below was proved in \cite{LZ06}, making use of Lemma \ref{characterization of FC fuzzy lower set}. \begin{prop} If $\mathcal{Q}$ is a continuous quantale, then the class of forward Cauchy ideals is saturated. \end{prop} \begin{lem} \label{characterization of FC fuzzy lower set} {\rm(\cite{LZ06})} Let $\mathcal{Q}$ be a continuous quantale and $\phi$ be an inhabited fuzzy lower set of a $\mathcal{Q}$-ordered set $A$. The following are equivalent: \begin{enumerate}[\rm(1)] \item $\phi$ is a forward Cauchy ideal. \item If $r\ll\phi(x)$ and $s\ll\phi(y)$, then for every $t\ll 1$, there is some $z\in A$ such that $t\ll\phi(z)$, $r\ll A(x,z)$ and $s\ll A(y,z)$. \end{enumerate}\end{lem} The saturatedness of the classes of flat ideals and irreducible ideals is a special case of a general result in enriched category theory, namely, \cite[Proposition 5.4]{KS05}.
However, in order to make this paper self-contained, a direct verification in this special case is included here. \begin{prop}The class of irreducible ideals is saturated. \end{prop} \begin{proof}It suffices to show that for each $\mathcal{Q}$-ordered set $A$ and each irreducible ideal $\Lambda:\mathcal{I} A\longrightarrow\mathcal{Q}$ of $(\mathcal{I} A,{\rm sub}_A)$, the map $\sup\Lambda:A\longrightarrow\mathcal{Q} $, given by $$\sup\Lambda(x) = \bigvee_{\phi\in\mathcal{I} A} \Lambda(\phi)\&\phi(x) ,$$ is an irreducible ideal of $A$. \textbf{Step 1}. $\bigvee_{x\in A}\sup\Lambda(x)=1$. This is easy since \[\bigvee_{x\in A}\sup\Lambda(x) = \bigvee_{x\in A}\bigvee_{\phi\in\mathcal{I} A} \Lambda(\phi)\&\phi(x) = \bigvee_{\phi\in\mathcal{I} A}\bigvee_{x\in A} \Lambda(\phi)\&\phi(x) =\bigvee_{\phi\in\mathcal{I} A} \Lambda(\phi) =1. \] \textbf{Step 2}. For any fuzzy lower sets $\phi_1, \phi_2$ of $A$, \[{\rm sub}_A(\sup\Lambda, \phi_1\vee \phi_2)= {\rm sub}_A(\sup\Lambda, \phi_1)\vee {\rm sub}_A(\sup\Lambda,\phi_2).\] To see this, for a fuzzy lower set $\phi$ of $A$, consider the fuzzy lower set of $(\mathcal{I} A,{\rm sub}_A)$: \[{\rm sub}_A(-,\phi): \mathcal{I} A\longrightarrow\mathcal{Q} .\] Then \begin{align*}{\rm sub}_{\mathcal{I} A}(\Lambda,{\rm sub}_A(-,\phi))&= \bigwedge_{\psi\in\mathcal{I} A} (\Lambda(\psi)\rightarrow{\rm sub}_A(\psi,\phi))\\ &= \bigwedge_{\psi\in\mathcal{I} A}\Big(\Lambda(\psi)\rightarrow\bigwedge_{x\in A}(\psi(x)\rightarrow \phi(x))\Big)\\ &=\bigwedge_{x\in A}\Big( \Big(\bigvee_{\psi\in\mathcal{I} A}\Lambda(\psi)\& \psi(x)\Big)\rightarrow \phi(x)\Big) \\ & ={\rm sub}_A(\sup\Lambda,\phi).
\end{align*} Therefore, \begin{align*} {\rm sub}_A(\sup\Lambda, \phi_1\vee \phi_2) &={\rm sub}_{\mathcal{I} A}(\Lambda, {\rm sub}_A(-, \phi_1\vee \phi_2))\\ &={\rm sub}_{\mathcal{I} A}(\Lambda, {\rm sub}_A(-, \phi_1)\vee {\rm sub}_A(-,\phi_2)) \\ &= {\rm sub}_{\mathcal{I} A}(\Lambda, {\rm sub}_A(-, \phi_1))\vee{\rm sub}_{\mathcal{I} A}(\Lambda, {\rm sub}_A(-, \phi_2))\\ &={\rm sub}_A(\sup\Lambda, \phi_1)\vee {\rm sub}_A(\sup\Lambda,\phi_2), \end{align*} where the second equality holds since each element in $\mathcal{I} A$ is irreducible; the reason for the third equality is that $\Lambda$ is irreducible. \end{proof} \begin{prop}The class of flat ideals is saturated. \end{prop} \begin{proof} We only need to show that for each $\mathcal{Q}$-ordered set $A$ and each flat ideal $\Lambda:{\cal F} A\longrightarrow\mathcal{Q}$ of $({\cal F} A,{\rm sub}_A)$, the map $\sup\Lambda:A\longrightarrow\mathcal{Q} $, given by $$\sup\Lambda(x) = \bigvee_{\phi\in{\cal F} A} \Lambda(\phi)\&\phi(x) ,$$ is a flat ideal of $A$. \textbf{Step 1}. $\bigvee_{x\in A}\sup\Lambda(x)=1$. This is easy since \[\bigvee_{x\in A}\sup\Lambda(x) = \bigvee_{x\in A}\bigvee_{\phi\in{\cal F} A} \Lambda(\phi)\&\phi(x) = \bigvee_{\phi\in{\cal F} A}\bigvee_{x\in A} \Lambda(\phi)\&\phi(x) =\bigvee_{\phi\in{\cal F} A} \Lambda(\phi) =1. \] \textbf{Step 2}. For all fuzzy upper sets $\psi_1, \psi_2$ of $A$, \[\sup\Lambda \otimes(\psi_1\wedge \psi_2)= (\sup\Lambda\otimes\psi_1)\wedge (\sup\Lambda\otimes\psi_2).\] To see this, for each fuzzy upper set $\psi$ on $A$, consider the fuzzy upper set of $({\cal F} A,{\rm sub}_A)$ (see Equation (\ref{compositon as distributor})): \[-\otimes\psi: {\cal F} A\longrightarrow\mathcal{Q}.
\] Then \begin{align*}\Lambda\otimes(-\otimes \psi) &= \bigvee_{\phi\in{\cal F} A}\Big(\Lambda(\phi)\&\bigvee_{x\in A}(\phi(x)\& \psi(x))\Big)\\ &=\bigvee_{x\in A} \bigvee_{\phi\in{\cal F} A}(\Lambda(\phi)\& \phi(x))\& \psi(x) \\ &=\bigvee_{x\in A} \sup\Lambda(x)\& \psi(x) \\ & =\sup\Lambda\otimes\psi. \end{align*} Therefore, \begin{align*} \sup\Lambda \otimes(\psi_1\wedge \psi_2)&=\Lambda\otimes( -\otimes(\psi_1 \wedge \psi_2))\\ &=\Lambda\otimes((-\otimes \psi_1)\wedge (-\otimes\psi_2))\\ &= (\Lambda\otimes(-\otimes \psi_1))\wedge (\Lambda\otimes(-\otimes\psi_2))\\ &=(\sup\Lambda\otimes\psi_1)\wedge( \sup\Lambda\otimes\psi_2). \end{align*} The proof is completed. \end{proof} \section{Scott $\mathcal{Q}$-topology and Scott $\mathcal{Q}$-cotopology} The connection between partially ordered sets and topological spaces is the essence of domain theory. The fuzzy version of Alexandroff topology has been investigated in \cite{LZ06}. This section concerns the extension of Scott topology to the fuzzy setting. We recall some basic definitions first. A $\mathcal{Q}$-topology on a set $X$ is a subset $\tau$ of $Q^X$ subject to the following conditions: \begin{enumerate} \item[(O1)] $p_X\in\tau$ for all $p\in Q$; \item[(O2)] $\lambda\wedge \mu\in\tau$ for all $\lambda,\mu\in\tau$; \item[(O3)] $\bigvee_{j\in J}\lambda_j\in\tau$ for each subset $\{\lambda_j\}_{j\in J}$ of $\tau$.\end{enumerate} For a $\mathcal{Q}$-topological space $(X,\tau)$, elements in $\tau$ are said to be open.
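To illustrate axioms (O1)--(O3), here is a small Python sketch (ours, not part of the paper; all names are our own) checking that on a finite $\mathcal{Q}$-ordered set the fuzzy upper sets form a $\mathcal{Q}$-topology, namely the fuzzy Alexandroff topology mentioned above, which is moreover stratified in the sense of (O4) below. $Q$ is again the three-element Łukasiewicz quantale and $A=(Q,d_R)$.

```python
# Illustration: the fuzzy upper sets of a finite Q-ordered set satisfy
# (O1)-(O3), i.e. form a Q-topology (the fuzzy Alexandroff topology).
# Q = {0, 1/2, 1} with the Lukasiewicz t-norm; A = (Q, d_R).
from itertools import product

Q = [0.0, 0.5, 1.0]
tnorm = lambda a, b: max(0.0, a + b - 1.0)   # Lukasiewicz t-norm &
impl  = lambda a, b: min(1.0, 1.0 - a + b)   # its residual ->
A = Q
dR = lambda x, y: impl(y, x)                 # A(x, y) = y -> x

def is_upper(lam):
    # fuzzy upper set: A(x, y) & lam(x) <= lam(y) for all x, y
    return all(tnorm(dR(x, y), lam[i]) <= lam[j] + 1e-9
               for i, x in enumerate(A) for j, y in enumerate(A))

tau = [lam for lam in product(Q, repeat=len(A)) if is_upper(lam)]

# (O1): every constant fuzzy set p_X is open
assert all((p,) * len(A) in tau for p in Q)
# (O2): binary meets of open fuzzy sets are open
assert all(tuple(map(min, l, m)) in tau for l in tau for m in tau)
# (O3): joins are open (pairwise joins plus the bottom constant suffice,
# since tau is finite)
assert all(tuple(map(max, l, m)) in tau for l in tau for m in tau)
# and tau is even stratified: p & lam is open for all p
assert all(tuple(tnorm(p, v) for v in l) in tau for p in Q for l in tau)

print("fuzzy Alexandroff topology on A has", len(tau), "open fuzzy sets")
```

The same script with `min`/`max` swapped checks the cotopology axioms (C1)--(C3) for the fuzzy lower sets.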
A $\mathcal{Q}$-topology $\tau$ is \emph{stratified} \cite{HS95,HS99} if \begin{enumerate} \item[(O4)] $p\&\lambda \in\tau$ for all $p\in Q$ and $\lambda\in \tau$.\end{enumerate} A $\mathcal{Q}$-topology $\tau$ is \emph{co-stratified} \cite{CLZ11} if \begin{enumerate} \item[(O5)] $p\rightarrow\lambda \in\tau$ for all $p\in Q$ and $\lambda\in \tau$.\end{enumerate} A $\mathcal{Q}$-topology is \emph{strong} \cite{CLZ11,Zhang07} if it is both stratified and co-stratified. A $\mathcal{Q}$-cotopology on a set $X$ is a subset $\tau$ of $Q^X$ subject to the following conditions: \begin{enumerate} \item[(C1)] $p_X\in\tau$ for all $p\in Q$; \item[(C2)] $\lambda\vee \mu\in\tau$ for all $\lambda, \mu\in\tau$; \item[(C3)] $\bigwedge_{j\in J} \lambda_j\in\tau$ for each subset $\{\lambda_j\}_{j\in J}$ of $\tau$.\end{enumerate} For a $\mathcal{Q}$-cotopological space $(X,\tau)$, elements in $\tau$ are said to be closed. A $\mathcal{Q}$-cotopology $\tau$ is \emph{stratified} if \begin{enumerate} \item[(C4)] $p\rightarrow \lambda\in\tau$ for all $p\in Q$ and $\lambda\in\tau$. \end{enumerate} A $\mathcal{Q}$-cotopology $\tau$ is \emph{co-stratified} if \begin{enumerate} \item[(C5)] $p\&\lambda\in\tau$ for all $p\in Q$ and $\lambda\in\tau$. \end{enumerate} A $\mathcal{Q}$-cotopology $\tau$ is \emph{strong} \cite{CLZ11,Zhang07} if it is both stratified and co-stratified. Let $\mathcal{Q}$ be a quantale that satisfies the law of double negation. If $\tau$ is a stratified (co-stratified, resp.) $\mathcal{Q}$-cotopology on a set $X$, then \[\neg(\tau)=\{\neg \lambda \mid \lambda\in\tau\} \] is a stratified (co-stratified, resp.) $\mathcal{Q}$-topology on $X$, where $\neg \lambda (x)=\neg(\lambda(x))$ for all $x\in X$. Conversely, if $\tau$ is a stratified (co-stratified, resp.)
$\mathcal{Q}$-topology on $X$, then \[\neg(\tau)=\{\neg \lambda \mid \lambda\in\tau\} \] is a stratified (co-stratified, resp.) $\mathcal{Q}$-cotopology on $X$. In general, there does not exist a natural way to switch between closed sets and open sets, so we need to consider both the open-set version and the closed-set version when generalizing the Scott topology to $\mathcal{Q}$-ordered sets. As demonstrated below, flat ideals and irreducible ideals are related to the open-set and the closed-set version, respectively. \begin{defn} Let $\Phi$ be a class of weights and $A$ be a $\mathcal{Q}$-ordered set. \begin{enumerate}[\rm(1)] \item A fuzzy set $\psi: A\longrightarrow Q$ is $\Phi$-open if it is a fuzzy upper set and for all $\phi\in\Phi(A)$, \[\psi(\sup\phi)\leq \phi\otimes\psi \] whenever $\sup\phi$ exists. \item A fuzzy set $\lambda:A\longrightarrow Q$ is $\Phi$-closed if it is a fuzzy lower set and for all $\phi\in\Phi(A)$, \[{\rm sub}_A(\phi,\lambda)\leq \lambda(\sup\phi) \] whenever $\sup\phi$ exists. \end{enumerate} \end{defn} Let $\psi$ be a fuzzy upper set of $A$. Since $\phi\otimes\psi=\sup\psi^\rightarrow(\phi)$ by Example \ref{intersection as sup}, we have $\phi\otimes\psi\leq \psi(\sup\phi)$, hence a fuzzy upper set $\psi$ is $\Phi$-open if and only if \begin{equation*} \psi(\sup\phi)= \phi\otimes\psi\end{equation*}for all $\phi\in\Phi(A)$, if and only if $\psi:A\longrightarrow(Q,d_L)$ is $\Phi$-cocontinuous in the sense that $\psi(\sup\phi)=\sup\psi^\rightarrow(\phi)$ for all $\phi\in\Phi(A)$. Similarly, a fuzzy lower set $\lambda$ is $\Phi$-closed if and only if \begin{equation}\label{Scott closed =}{\rm sub}_A(\phi,\lambda)= \lambda(\sup\phi)\end{equation} for all $\phi\in\Phi(A)$, if and only if $\lambda: A\longrightarrow (Q,d_R)$ is $\Phi$-cocontinuous. \begin{prop} \label{open sets} Let $\Phi$ be a class of weights.
\begin{enumerate}[(1)] \item Each constant fuzzy set is $\Phi$-open. \item If $\psi$ is $\Phi$-open, then so is $p\&\psi$ for all $p\in Q$. \item The join of a set of $\Phi$-open fuzzy sets is $\Phi$-open. \item If $\Phi$ is a subclass of flat ideals, then the meet of two $\Phi$-open fuzzy sets is $\Phi$-open. \end{enumerate} \end{prop} Thus, if $\Phi$ is a subclass of flat ideals, then for each $\mathcal{Q}$-ordered set $A$, the $\Phi$-open fuzzy sets of $A$ form a stratified $\mathcal{Q}$-topology on $A$, called the $\Phi$-Scott $\mathcal{Q}$-topology and denoted by $\sigma_\Phi(A)$. \begin{prop} \label{closed sets} Let $\Phi$ be a class of weights. \begin{enumerate}[(1)] \item Each constant fuzzy set is $\Phi$-closed. \item If $\psi$ is a $\Phi$-closed fuzzy set of $A$, then so is $p\rightarrow\psi$ for all $p\in Q$. \item The meet of a set of $\Phi$-closed fuzzy sets is $\Phi$-closed. \item If $\Phi$ is a subclass of irreducible ideals, then the join of two $\Phi$-closed fuzzy sets is $\Phi$-closed. \end{enumerate} \end{prop} Thus, if $\Phi$ is a subclass of irreducible ideals, then for each $\mathcal{Q}$-ordered set $A$, the $\Phi$-closed fuzzy sets form a stratified $\mathcal{Q}$-cotopology on $A$, called the $\Phi$-Scott $\mathcal{Q}$-cotopology on $A$ and denoted by $\sigma^{\rm co}_\Phi(A)$. \begin{con} For the class ${\cal F}$ of all flat ideals, we say fuzzy Scott open sets (Scott $\mathcal{Q}$-topology, resp.) instead of ${\cal F}$-open fuzzy sets (${\cal F}$-Scott $\mathcal{Q}$-topology, resp.), and write $\sigma(A)$ instead of $\sigma_{\cal F}(A)$. Dually, for the class ${\cal I}$ of all irreducible ideals, we say fuzzy Scott closed sets (Scott $\mathcal{Q}$-cotopology, resp.) instead of ${\cal I}$-closed fuzzy sets (${\cal I}$-Scott $\mathcal{Q}$-cotopology, resp.), and write $\sigma^{\rm co}(A)$ instead of $\sigma^{\rm co}_{\cal I}(A)$.
\end{con} \begin{rem} (1) Let $\{x_i\}$ be a forward Cauchy net in a $\mathcal{Q}$-ordered set $A$ and \[\varphi=\bigvee_i\bigwedge_{j\geq i}A(-,x_j).\] For each fuzzy upper set $\psi$ of $A$, by the argument of Theorem \ref{FC is flat}, we have \[\varphi\otimes\psi=\bigvee_i\bigwedge_{j\geq i}\psi(x_j).\] Thus, a fuzzy upper set $\psi$ of $A$ is $\mathcal{W}$-open if and only if \[\bigvee_i\bigwedge_{j\geq i}\psi(x_j)\geq\psi(x)\] for every forward Cauchy net $\{x_i\}$ with a Yoneda limit $x$. This shows that the $\mathcal{W}$-open fuzzy sets are exactly the Scott open fuzzy sets in the sense of Wagner \cite[Definition 4.1]{Wagner97}. In particular, if $\mathcal{Q}$ is meet continuous, then the $\mathcal{W}$-open fuzzy sets of $A$ form a stratified $\mathcal{Q}$-topology on $A$. (2) Lemma 4.6 in \cite{Wagner97} claims that if $\phi,\psi$ are $\mathcal{W}$-open fuzzy sets of a $\mathcal{Q}$-ordered set $A$, then so is the fuzzy set $\phi\&\psi:A\longrightarrow Q$ given by $(\phi\&\psi)(x)= \phi(x)\&\psi(x)$. This is not true in general. Let $\mathcal{Q}$ be the unit interval $[0,1]$ equipped with the product t-norm $\&$. For each class of weights $\Phi$, the identity map ${\rm id}$ is clearly $\Phi$-open in the $\mathcal{Q}$-ordered set $([0,1],d_L)$, but ${\rm id}\&{\rm id}$ is not a fuzzy upper set of $([0,1],d_L)$. (3) If $\mathcal{Q}=(Q,\&)$ is a frame, i.e., $\&=\wedge$, then, as noted in Remark \ref{history}, the fuzzy ideals considered in \cite{Yao16} are exactly the flat ideals, hence a fuzzy set $\psi$ of a $\mathcal{Q}$-ordered set $A$ is fuzzy Scott open if and only if it is so in the sense of Yao \cite[Definition 2.10]{Yao16}.
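The counterexample in (2) is easy to confirm numerically. A minimal Python sketch (ours, for illustration only), sampling $[0,1]$ on a finite grid:

```python
# Numerical check of the counterexample in (2): over Q = ([0,1], product
# t-norm), id is a fuzzy upper set of ([0,1], d_L) but id & id (x |-> x^2)
# is not.
tnorm = lambda a, b: a * b                       # product t-norm
impl  = lambda a, b: 1.0 if a <= b else b / a    # its residual ->
dL = lambda x, y: impl(x, y)                     # d_L(x, y) = x -> y

grid = [i / 20 for i in range(21)]

def is_upper(f):
    # fuzzy upper set of ([0,1], d_L): d_L(x, y) & f(x) <= f(y)
    return all(tnorm(dL(x, y), f(x)) <= f(y) + 1e-9
               for x in grid for y in grid)

assert is_upper(lambda x: x)            # id is a fuzzy upper set
assert not is_upper(lambda x: x * x)    # id & id is not
```

The failing pair is $x=1$, $y=\tfrac12$: $d_L(1,\tfrac12)\&1^2=\tfrac12\not\leq\tfrac14$.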
\end{rem} \begin{rem}Let $\{x_i\}$ be a forward Cauchy net in a $\mathcal{Q}$-ordered set $A$ and \[\phi=\bigvee_i\bigwedge_{j\geq i}A(-,x_j).\] By Proposition \ref{3.9} (or \textbf{Step 2} in the argument of Theorem \ref{FC is irreducible}), \[{\rm sub}_A(\phi,\psi)= \bigvee_i\bigwedge_{j\geq i}\psi(x_j)\] for each fuzzy lower set $\psi$ of $A$. By Proposition \ref{yoneda limit as suprema}, Yoneda limits of $\{x_i\}$ are exactly the suprema of $\phi$. So, a fuzzy lower set $\psi$ of $A$ is $\mathcal{W}$-closed if and only if \[\bigvee_i\bigwedge_{j\geq i}\psi(x_j)\leq\psi(x) \] for every forward Cauchy net $\{x_i\}$ with a Yoneda limit $x$. This shows that the $\mathcal{W}$-closed fuzzy sets are exactly the Scott closed fuzzy sets in the sense of Wagner \cite[Definition 4.4]{Wagner97}. \end{rem} \begin{prop}Let $\Phi$ be a subclass of flat ideals. Then for each $\Phi$-cocontinuous map $f: A\longrightarrow B$ between $\mathcal{Q}$-ordered sets, $f:(A,\sigma_\Phi(A))\longrightarrow (B,\sigma_\Phi(B))$ is continuous. \end{prop} Therefore, for a subclass $\Phi$ of flat ideals, assigning to each $\mathcal{Q}$-ordered set $A$ the $\mathcal{Q}$-topological space $(A,\sigma_\Phi(A))$ defines a functor $\Sigma_\Phi$ from the category of $\mathcal{Q}$-ordered sets and $\Phi$-cocontinuous maps to that of stratified $\mathcal{Q}$-topological spaces. It is known in domain theory that the functor sending each ordered set to its Scott topology is full, but it is not clear whether $\Sigma_\Phi$ is a full functor. The situation with the Scott $\mathcal{Q}$-cotopology looks more promising.
The following conclusion implies that for every subclass $\Phi$ of irreducible ideals, assigning to each $\mathcal{Q}$-ordered set $A$ the $\mathcal{Q}$-cotopological space $(A,\sigma^{\rm co}_\Phi(A))$ gives a full functor $\Sigma^{\rm co}_\Phi$ from the category of $\mathcal{Q}$-ordered sets and $\Phi$-cocontinuous maps to that of stratified $\mathcal{Q}$-cotopological spaces. \begin{prop}\label{full functor} {\rm(\cite[Proposition 4.15]{Wagner97} for the class of forward Cauchy ideals)} Let $\Phi$ be a class of weights. For each map $f: A\longrightarrow B $ between $\mathcal{Q}$-ordered sets, the following are equivalent: \begin{enumerate}[(1)] \item $ f: A\longrightarrow B $ is $\Phi$-cocontinuous. \item For each $\Phi$-closed fuzzy set $\phi$ of $B$, $\phi\circ f$ is a $\Phi$-closed fuzzy set of $A$. \end{enumerate}\end{prop} \begin{proof} $(1)\Rightarrow(2)$ This is easy since the composite of $\Phi$-cocontinuous maps is $\Phi$-cocontinuous. $(2)\Rightarrow(1)$ First, we show that $f$ preserves $\mathcal{Q}$-order. For all $a_1,a_2\in A$, since $\psi=B(-,f(a_2))$ is a $\Phi$-closed fuzzy set of $B$, the composite $\psi\circ f=B(f(-),f(a_2))$ is $\Phi$-closed, hence \[A(a_1,a_2) =\psi\circ f(a_2)\&A(a_1,a_2) \leq \psi\circ f(a_1)= B(f(a_1),f(a_2)), \] showing that $f$ preserves $\mathcal{Q}$-order. Second, we show that for each $\phi\in\Phi(A)$, if $\sup\phi $ exists, then for all $b\in B$, \[{\rm sub}_B(f^\rightarrow(\phi),B(-,b))=B(f({\sup}\phi),b),\] hence $f(\sup\phi)$ is a supremum of $f^\rightarrow(\phi)$. Since $B(-,b)$ is a $\Phi$-closed fuzzy set of $B$, $B(-,b)\circ f$ is a $\Phi$-closed fuzzy set of $A$, hence, by Eq. (\ref{Scott closed =}), \[{\rm sub}_B(f^\rightarrow(\phi),B(-,b))={\rm sub}_A(\phi,B(-,b)\circ f) = B(f({\sup}\phi),b).\] This completes the proof. \end{proof} \begin{exmp} \label{5.8} Let $\mathcal{Q}=([0,1],\&)$ with $\&$ being a left continuous t-norm.
Then $\phi$ is a fuzzy Scott closed set in $([0,1],d_R)$ if and only if $\phi:([0,1],d_L)\longrightarrow([0,1],d_L)$ is right continuous and $\mathcal{Q}$-order preserving. By Corollary \ref{irreducible ideals in [0,1]}, a fuzzy lower set $\psi$ of $([0,1],d_R)$ is an irreducible ideal if and only if either $\psi(x)=a \rightarrow x$ for some $a\in [0,1]$ or $\psi(x)=\bigvee_{b>a}(b\rightarrow x)$ for some $ a<1$. Since the supremum of $ \bigvee_{b>a}(b\rightarrow x)$ in $([0,1],d_R)$ is (see Example \ref{inclusion as sup}) \[\bigwedge_{x\in [0,1]}\Big(\bigvee_{b>a}(b\rightarrow x)\rightarrow x\Big)=\bigwedge_{b>a}\bigwedge_{x\in [0,1]}((b\rightarrow x)\rightarrow x)= a,\] it follows that a fuzzy lower set $\phi$ of $([0,1],d_R)$ is fuzzy Scott closed if and only if for all $a<1$, \[{\rm sub}_{[0,1]}\Big(\bigvee_{b>a}(b\rightarrow x),\phi\Big)= \bigwedge_{b>a}\phi(b)\leq\phi(a).\] The conclusion thus follows. \end{exmp} If $\mathcal{Q}$ is the unit interval $[0,1]$ equipped with a continuous t-norm $\&$, we have a bit more: the fuzzy Scott $\mathcal{Q}$-cotopology on each $\mathcal{Q}$-ordered set is a strong $\mathcal{Q}$-cotopology. \begin{prop}\label{strong} Let $\mathcal{Q}=([0,1],\&)$ with $\&$ being a left continuous t-norm. The following are equivalent: \begin{enumerate}[\rm (1)] \item $\&$ is a continuous t-norm. \item The Scott $\mathcal{Q}$-cotopology on each $\mathcal{Q}$-ordered set is a strong $\mathcal{Q}$-cotopology. \end{enumerate} \end{prop} \begin{proof}$(1)\Rightarrow(2)$ We only need to check that if $\phi$ is a fuzzy Scott closed set of a $\mathcal{Q}$-ordered set $A$, then so is $a\&\phi$ for all $a\in[0,1]$.
Since $\phi$ is a fuzzy Scott closed set of $A$ and $a\&{\rm id}$ is a fuzzy Scott closed set of $([0,1],d_R)$, both $\phi:A\longrightarrow([0,1],d_R)$ and $a\&{\rm id}:([0,1],d_R)\longrightarrow([0,1],d_R)$ preserve suprema of irreducible ideals, hence so does the composite $a\&\phi=(a\&{\rm id})\circ\phi:A\longrightarrow([0,1],d_R)$, and therefore $a\&\phi$ is fuzzy Scott closed. $(2)\Rightarrow(1)$ If $\&$ is not continuous, by \cite[Proposition 1.19]{KMP00} there is some $a\in[0,1]$ such that $a\&{\rm id}:[0,1]\longrightarrow[0,1]$ is not right continuous, hence not a fuzzy Scott closed set in $([0,1],d_R)$. Since the identity map on $[0,1]$ is fuzzy Scott closed in $([0,1],d_R)$ by Example \ref{5.8}, the Scott $\mathcal{Q}$-cotopology on $([0,1],d_R)$ cannot be a strong one. \end{proof} \begin{exmp}This example shows that if $\mathcal{Q}=([0,1],\&)$ with $\&$ being a continuous t-norm, then the Scott $\mathcal{Q}$-cotopology on $([0,1],d_R)$ is the strong $\mathcal{Q}$-cotopology on $[0,1]$ generated by the identity map. Let $\tau$ denote the strong $\mathcal{Q}$-cotopology on $[0,1]$ generated by the identity map. By Example \ref{5.8}, a fuzzy Scott closed set in $([0,1],d_R)$ is exactly a right continuous and $\mathcal{Q}$-order preserving map $\phi:([0,1],d_L)\longrightarrow([0,1],d_L)$. So, the conclusion has already been proved in \cite{Zhang18} in the case that $\&$ is the t-norm $\min$, the product t-norm, or the {\L}ukasiewicz t-norm. Here we prove it in the general case with the help of the ordinal sum decomposition of continuous t-norms. Since the Scott $\mathcal{Q}$-cotopology on $([0,1],d_R)$ is strong and contains the identity map as a closed set, it suffices to show that if $\phi$ is a fuzzy Scott closed set in $([0,1],d_R)$ then $\phi\in\tau$. We do this in two steps. \textbf{Step 1}. If $\phi$ is a fuzzy Scott closed set in $([0,1],d_R)$ and $\phi\geq {\rm id}$, then $\phi\in\tau$.
Since $\&$ is a continuous t-norm, there is a set of disjoint open intervals $\{(a_i,b_i)\}$ such that \begin{itemize}\setlength{\itemsep}{-2pt} \item for each $i$, both $a_i$ and $b_i$ are idempotent and the restriction of $\&$ to $[a_i,b_i]$ is either isomorphic to the \L ukasiewicz t-norm or to the product t-norm; \item $x\&y=\min\{x,y\}$ if $(x,y)\notin\bigcup_i[a_i,b_i]^2$. \end{itemize} For each $x\in[0,1]$, define $g_x: [0,1]\longrightarrow[0,1]$ by \[ g_x(y)=\begin{cases} \phi(x)\vee((\phi(x)\rightarrow x)\rightarrow y),&\text{$ (x,\phi(x))\in(a_i,b_i)^2 $ for some $i$ and $\phi(x)>x$,} \\ \phi(x)\vee(b_i\rightarrow y),&\text{$ (x,\phi(x))\in(a_i,b_i)^2 $ for some $i$ and $ \phi(x)=x $,}\\ \phi(x)\vee(x\rightarrow y),&\text{$ (x,\phi(x))\not\in(a_i,b_i)^2 $ for any $ i $.} \end{cases} \] Each $g_x$ is clearly a member of $\tau$, so, in order to see that $\phi\in\tau$, it suffices to show that for all $y\in[0,1]$, \[\phi(y)=\bigwedge_{x\in[0,1]}g_x(y).\] Before proving this equality, we list here some facts about the maps $g_x$; the verifications are left to the reader. \begin{enumerate}[(M1)] \item $\phi(y)\leq g_x(y)$ whenever $y\leq x$. \item If $ (x,\phi(x))\in(a_i,b_i)^2 $ for some $i$ and $\phi(x)>x$, then $g_x(x)= (\phi(x)\rightarrow x)\rightarrow x=\phi(x)$ and $g_x(y)= (\phi(x)\rightarrow x)\rightarrow y\geq\phi(y)$ for all $y>x$. \item If $(x,\phi(x))\in(a_i,b_i)^2$ for some $i$ and $\phi(x)=x$, then $\phi(y)=y=g_x(y)$ whenever $x\leq y<b_i$ and $g_x(y)=1\geq\phi(y)$ for all $y\geq b_i$. \item If $(x,\phi(x))\not\in(a_i,b_i)^2$ for any $i$, then for all $y\geq x$, $g_x(y)=\phi(x)\vee (x\rightarrow y)=1 \geq \phi(y)$. \end{enumerate} It follows immediately from these facts that for all $y\in[0,1]$, $\phi(y)\leq\bigwedge_{x\in[0,1]}g_x(y)$. For the converse inequality, we distinguish three cases. \textbf{Case 1}. $(y,\phi(y))\in(a_i,b_i)^2 $ for some $i$ and $ \phi(y)>y$.
Then by fact (M2), $g_{y}(y)=\phi(y)$, hence $\phi(y)\geq\bigwedge_{x\in[0,1]}g_x(y)$. \textbf{Case 2}. $(y,\phi(y))\in(a_i,b_i)^2$ for some $i$ and $\phi(y)=y$. Then by fact (M3), $g_{y}(y)=\phi(y)$, hence $\phi(y)\geq\bigwedge_{x\in[0,1]}g_x(y)$. \textbf{Case 3}. $ (y,\phi(y))\not\in(a_i,b_i)^2 $ for any $i$. In this case, if we can show that $g_x(y)=\phi(x)$ for all $x>y$, then we will obtain that $\phi(y)=\bigwedge_{x>y}\phi(x)\geq\bigwedge_{x\in[0,1]}g_x(y)$ by right continuity of $\phi$. The proof is divided into four subcases. Subcase 1. $y \in (a_i,b_i) $ for some $i$ and $\phi(y)\geq b_i$. If $x\leq b_i $, then $x\rightarrow y \leq b_i$ and $\phi(x)\geq\phi(y)\geq b_i $, hence $g_x(y)=\phi(x)\vee (x\rightarrow y)=\phi(x)$. For $x>b_i$, \begin{itemize}\setlength{\itemsep}{-2pt} \item if $ (x,\phi(x))\in(a_j,b_j)^2 $ for some $j$ and $ \phi(x)>x $, then \[g_x(y)=\phi(x)\vee((\phi(x)\rightarrow x)\rightarrow y)=\phi(x)\vee y =\phi(x);\] \item if $ (x,\phi(x))\in(a_j,b_j)^2 $ for some $j$ and $ \phi(x)=x $, then \[g_x(y)=\phi(x)\vee(b_j\rightarrow y)=\phi(x)\vee y =\phi(x);\] \item if $(x,\phi(x))\not\in(a_j,b_j)^2$ for any $j$, then \[g_x(y)=\phi(x)\vee(x\rightarrow y)=\phi(x)\vee y =\phi(x).\] \end{itemize} Subcase 2. $y \not\in [a_i,b_i]$ for any $i$. In this case, since $t\rightarrow y= y$ for all $t>y$, it follows that $g_x(y)=\phi(x)\vee y=\phi(x)$. Subcase 3. $y=a_i$. For $x<b_i$, \begin{itemize}\setlength{\itemsep}{-2pt} \item if $ x<\phi(x)<b_i $, then $(x,\phi(x))\in(a_i,b_i)^2$, hence $g_x(y)=\phi(x)\vee((\phi(x)\rightarrow x)\rightarrow a_i)=\phi(x)$; \item if $ \phi(x)\geq b_i $, then $g_x(y)=\phi(x)\vee(x\rightarrow y)=\phi(x)$ since $x\rightarrow y=x\rightarrow a_i \leq b_i$; \item if $ \phi(x)=x $, then $g_x(y)=\phi(x)\vee(b_i\rightarrow y)=\phi(x)\vee(b_i\rightarrow a_i)=\phi(x)$. 
\end{itemize} For $x>b_i$, \begin{itemize}\setlength{\itemsep}{-2pt} \item if $(x,\phi(x))\in(a_j,b_j)^2$ for some $j$ and $\phi(x)>x$, then $a_i<a_j$, hence \[g_x(y)=\phi(x)\vee((\phi(x)\rightarrow x)\rightarrow a_i)=\phi(x)\vee a_i =\phi(x);\] \item if $(x,\phi(x))\in(a_j,b_j)^2$ for some $j$ and $\phi(x)=x$, then $g_x(y)=\phi(x)\vee(b_j\rightarrow a_i)=\phi(x)$; \item if $(x,\phi(x))\not\in(a_j,b_j)^2$ for any $j$, then $g_x(y)=\phi(x)\vee(x\rightarrow a_i)=\phi(x)$. \end{itemize} Subcase 4. $y=b_i$. If $a_j=b_i$ for some $j$, then the conclusion holds by Subcase 3. Otherwise, the argument for Subcase 2 can be applied to show that $g_x(y)=\phi(x)$. \textbf{Step 2}. If $\phi$ is a fuzzy Scott closed set in $([0,1],d_R)$, then $\phi\in\tau$. Since $\phi(1)\rightarrow \phi$ is fuzzy Scott closed and ${\rm id}\leq \phi(1)\rightarrow \phi$, it follows that $\phi(1)\rightarrow \phi\in\tau$ by Step 1. Since $\tau$ is strong and $\&$ is continuous, $\phi=\phi(1)\&(\phi(1)\rightarrow \phi) \in\tau$. \end{exmp} \end{document}
\begin{document} \title{Constructible isocrystals} \begin{abstract} We introduce a new category of coefficients for $p$-adic cohomology called constructible isocrystals. Conjecturally, the category of constructible isocrystals endowed with a Frobenius structure is equivalent to the category of perverse holonomic arithmetic $\mathcal D$-modules. We prove here that a constructible isocrystal is completely determined by any of its geometric realizations. \end{abstract} \tableofcontents \addcontentsline{toc}{section}{Introduction} \section*{Introduction} The relation between topological invariants and differential invariants of a manifold is always fascinating. We may first recall de Rham's theorem, which implies the existence of an isomorphism $$ \mathrm H^i_{\mathrm{dR}}(X) \simeq \mathrm{Hom}(\mathrm H_{i}(X), \mathbb C) $$ on any complex analytic manifold. The non-abelian version is an equivalence of categories $$ \mathrm{MIC}(X) \simeq \mathrm{Rep}_{\mathbb C}(\pi_{1}(X,x)) $$ between coherent modules endowed with an integrable connection and finite dimensional representations of the fundamental group. The same result holds on a smooth algebraic variety if we stick to regular connections (see \cite{Deligne70} or Bernard Malgrange's lecture in \cite{Borel87}). It has been generalized by Masaki Kashiwara (\cite{Kashiwara84}) to an equivalence $$ \mathrm D^{\mathrm b}_{\mathrm{reg,hol}}(X) \simeq \mathrm D^{\mathrm b}_{\mathrm{cons}}(X^{\mathrm {an}}) $$ between the categories of bounded complexes of $\mathcal D_{X}$-modules with regular holonomic cohomology and bounded complexes of $\mathbb C_{X^{\mathrm{an}}}$-modules with constructible cohomology. Both categories come with a so-called $t$-structure, but these $t$-structures do not correspond under this equivalence. Actually, they define a new $t$-structure on the other side that may be called \emph{perverse}.
The notion of perverse sheaf on $X^\mathrm{an}$ has been studied for some time now (see \cite{Borel87} for example). On the $\mathcal D$-module side however, this notion only appeared in a recent article of Kashiwara (\cite{Kashiwara04}), even if he does not give it a name (we call it perverse, but it might as well be called constructible; see \cite{Abe13*}). Anyway, he shows that the perverse $t$-structure on $\mathrm D^b_{\mathrm{reg,hol}}(X)$ is given by $$ \left\{ \begin{array}{l} \mathrm D^{\leq 0} : \mathrm{codim}\,\mathrm{supp}\, \mathcal H^n(\mathcal F^\bullet) \geq n\ \mathrm{for}\ n \geq 0 \\ \mathrm D^{\geq 0} : \mathcal H^n_{Z}(\mathcal F^\bullet) = 0\ \mathrm{for}\ n < \mathrm{codim} Z. \end{array} \right. $$ In particular, if we call \emph{perverse} a complex of $\mathcal D_{X}$-modules satisfying both conditions, there exists an equivalence of categories $$ \mathrm D^{\mathrm{perv}}_{\mathrm{reg,hol}}(X) \simeq \mathrm{Cons}(X^{\mathrm {an}}) $$ between the categories of perverse (complexes of) $\mathcal D_{X}$-modules with regular holonomic cohomology and constructible $\mathbb C_{X^{\mathrm{an}}}$-modules. In a handwritten note \cite{Deligne*} called ``Cristaux discontinus'', Pierre Deligne gave an algebraic interpretation of the right hand side of this equivalence. More precisely, he introduces the notion of constructible pro-coherent crystal and proves an equivalence $$ \mathrm{Cons}_{\mathrm{reg,pro-coh}}(X/\mathbb C) \simeq \mathrm{Cons}(X^{\mathrm {an}}) $$ between the categories of regular constructible pro-coherent crystals and constructible $\mathbb C_{X^{\mathrm{an}}}$-modules. By composition, we obtain what may be called the \emph{Deligne-Kashiwara correspondence} $$ \mathrm{Cons}_{\mathrm{reg,pro-coh}}(X/\mathbb C) \simeq \mathrm D^{\mathrm{perv}}_{\mathrm{reg,hol}}(X). $$ It would be quite interesting to give an algebraic construction of this equivalence, but this is not our purpose here.
Actually, we would like to describe an arithmetic analog. Let $K$ be a $p$-adic field with discrete valuation ring $\mathcal V$ and perfect residue field $k$. Let $X \hookrightarrow P$ be a locally closed embedding of an algebraic $k$-variety into a formal $\mathcal V$-scheme. Assume for the moment that $P$ is smooth and quasi-compact, and that the locus of $X$ at infinity inside $P$ has the form $D \cap \overline X$ where $D$ is a divisor in $P$. We may consider the category $\mathrm D^b(X \subset P/K)$ of bounded complexes of $\mathcal D^\dagger_{P}({}^\dagger D)_{\mathbb Q}$-modules on $P$ with support on $\overline X$ (see \cite{Berthelot02} for example). On the other hand, we may also consider the category of overconvergent isocrystals on $(X \subset P/K)$. In \cite{Caro09}, Daniel Caro proved that there exists a fully faithful functor $$ \mathrm{sp}_{+} : \mathrm{Isoc}^\dagger_{\mathrm{coh}}(X \subset P/K) \to \mathrm D^{\mathrm b}_{\mathrm{coh}}(X \subset P/K) $$ (the index $\mathrm{coh}$ simply means overconvergent isocrystals in Berthelot's sense - see below). This is the first step towards an overconvergent Deligne-Kashiwara correspondence. Note that this construction is extended to a slightly more general situation by Tomoyuki Abe and Caro in \cite{AbeCaro13*} and was already known to Pierre Berthelot in the case $\overline X = P_{k}$ (proposition 4.4.3 of \cite{Berthelot96}). In \cite{LeStum14}, we defined a category that we may denote $\mathrm{MIC}^\dagger_{\mathrm{cons}}(P/K)$ of convergent constructible $\nabla$-modules on $P_{K}$ when $P$ is a geometrically connected smooth proper curve over $\mathcal V$, as well as a category $\mathrm D^{\mathrm {perv}}(P/K)$ of perverse (complexes of) $\mathcal D^\dagger_{P\mathbb Q}$-modules on $P$, and built a functor $$ \mathrm R \widetilde{\mathrm{sp}}_{*} : \mathrm{MIC}^\dagger_{\mathrm{cons}}(P/K) \to \mathrm D^{\mathrm{perv}}_{\mathrm{coh}}(P/K). 
$$ Actually, we proved the overconvergent Deligne-Kashiwara correspondence in this situation: this functor induces an equivalence of categories $$ \mathrm R \widetilde{\mathrm{sp}}_{*} : F\mathrm{-MIC}^\dagger_{\mathrm{cons}}(P/K) \simeq F\mathrm{-D}^{\mathrm{perv}}_{\mathrm{hol}}(P/K) $$ between (convergent) constructible $F$-$\nabla$-modules on $P_{K}$ and perverse holonomic $F$-$\mathcal D^\dagger_{P\mathbb Q}$-modules on $P$. Note that this is compatible with Caro's $\mathrm{sp}_{+}$ functor. In order to extend this theorem to higher dimension, it is necessary to develop a general theory of \emph{constructible (overconvergent) isocrystals}. One could try to mimic Berthelot's original definition and let $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X \subset Y \subset P/K)$ be the category of $j^\dagger_{X}\mathcal O_{]Y[}$-modules $\mathcal F$ endowed with an overconvergent connection which are only ``constructible'' and not necessarily coherent (here $X$ is open in $Y$ and $Y$ is closed in $P$). It means that there exists a locally finite covering of $X$ by locally closed subvarieties $Z$ such that $j^\dagger_{Z}\mathcal F$ is a coherent $j^\dagger_{Z}\mathcal O_{]Y[}$-module. It would then be necessary to show that the definition is essentially independent of $P$ as long as $P$ is smooth and $Y$ proper, and that they glue when there does not exist any global geometric realization. We choose here an equivalent but different approach with built-in functoriality. I introduced in \cite{LeStum11} the overconvergent site of the algebraic variety $X$ and showed that we can identify the category of locally finitely presented modules on this site with the category of overconvergent isocrystals in the sense of Berthelot. 
Actually, we can define a broader category of overconvergent isocrystals (without any finiteness condition) and call an overconvergent isocrystal $E$ \emph{constructible} when there exists a locally finite covering of $X$ by locally closed subvarieties $Y$ such that $E_{|Y}$ is locally finitely presented. Note that $K$ may be any non trivial complete ultrametric field and that there exists a relative theory (over some base $O$). We denote by $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X/O)$ the category of constructible overconvergent isocrystals on $X/O$. In order to compute these objects, one may define a category $\mathrm{MIC}^\dagger_{\mathrm{cons}}(X,V/O)$ of constructible modules endowed with an overconvergent connection on any ``geometric realization'' $V$ of $X/O$, as in Berthelot's approach. We will prove (theorem \ref{thm} below) that, when $\mathrm{Char}(K) = 0$, there exists an equivalence of categories $$ \mathrm{Isoc}^\dagger_{\mathrm{cons}}(X/O) \simeq \mathrm{MIC}^\dagger_{\mathrm{cons}}(X,V/O). $$ As a corollary, we obtain that the latter category is essentially independent of the choice of the geometric realization (and that these categories glue when there does not exist such a geometric realization). Note that this applies in particular to the case of the curve $P$ above which ``is'' a geometric realization of $P_{k}$ so that $$ \mathrm{Isoc}^\dagger_{\mathrm{cons}}(P_{k}/K) = \mathrm{MIC}^\dagger_{\mathrm{cons}}(P/K). $$ In the first section, we briefly present the overconvergent site and review some material that will be needed afterwards. In the second one, we study some functors between overconvergent sites that are associated to locally closed embeddings. We do a little more than what is necessary for the study of constructible isocrystals, hoping that this will be useful in the future. In section three, we introduce overconvergent isocrystals and explain how one can construct and deconstruct them.
In the last section, we show that constructible isocrystals may be interpreted in terms of modules with integrable connections. \section*{Notations and conventions} Throughout this article, $K$ denotes a non trivial complete ultrametric field with valuation ring $\mathcal V$ and residue field $k$. An \emph{algebraic variety} over $k$ is a scheme over $k$ that admits a locally finite covering by schemes of finite type over $k$. A \emph{formal scheme} over $\mathcal V$ is always assumed to admit a locally finite covering by $\pi$-adic formal schemes of finite presentation over $\mathcal V$. An \emph{analytic variety} over $K$ is a strictly analytic $K$-space in the sense of Berkovich (see \cite{Berkovich93} for example). We will use the letters $X, Y, Z, U, C, D, \ldots$ to denote algebraic varieties over $k$, $P, Q, S$ for formal schemes over $\mathcal V$ and $V, W, O$ for analytic varieties over $K$. An analytic variety over $K$ is said to be \emph{good} if it is locally affinoid. This is the case, for example, if $V$ is affinoid, proper or algebraic, or more generally, if $V$ is an open subset of such a variety. Note that, in Berkovich's original definition \cite{Berkovich90}, all analytic varieties were good. As usual, we will write $\mathbb A^1$ and $\mathbb P^1$ for the affine and projective lines. We will also use $\mathbb D(0, 1^\pm)$ for the open or closed disc of radius $1$. \section*{Acknowledgments} Many thanks to Tomoyuki Abe, Pierre Berthelot, Florian Ivorra, Vincent Mineo-Kleiner, Laurent Moret-Bailly, Matthieu Romagny and Atsushi Shiho, with whom I had interesting conversations related to some of the questions discussed here. \section{The overconvergent site} \label{bible} We briefly recall the definition of the overconvergent site from \cite{LeStum11}.
An object is made of \begin{enumerate} \item a locally closed embedding $X \hookrightarrow P$ of an algebraic variety (over $k$) into a formal scheme (over $\mathcal V$) and \item a morphism $\lambda : V \to P_{K}$ of analytic varieties (over $K$). \end{enumerate} We denote this object by $X \subset P \stackrel {\mathrm{sp}}\leftarrow P_{K} \leftarrow V$ and call it an \emph{overconvergent variety}. Here, $\mathrm{sp}$ denotes the \emph{specialization} map and we also introduce the notion of \emph{tube} of $X$ in $V$ which is $$ ]X[_{V} := \lambda^{-1}(\mathrm{sp}^{-1}(X)). $$ We call the overconvergent variety \emph{good} if any point of $]X[_{V}$ has an affinoid neighborhood in $V$. It is simpler to assume from the beginning that all overconvergent varieties are good, since the important theorems only hold for those (and bad overconvergent varieties play no role in the theory). But, on the other hand, most constructions can be carried out without this assumption. We define a \emph{formal morphism} between overconvergent varieties as a triple of compatible morphisms: $$ \xymatrix{X' \ar@{^{(}->}[r] \ar[d]^f & P' \ar[d]^v & P'_{K} \ar[l] \ar[d]^{v_{K}} & V' \ar[l] \ar[d]^u \\ X \ar@{^{(}->}[r] & P & P_{K} \ar[l] & V \ar[l] } $$ Such a formal morphism induces a continuous map $$ ]f[_{u} : ]X'[_{V'} \to ]X[_{V} $$ between the tubes. Actually, the notion of formal morphism is too rigid to reflect the true nature of the algebraic variety $X$, and it is necessary to invert what we call a \emph{strict neighborhood}, which we define now: it is a formal morphism as above such that $f$ is an isomorphism $X' \simeq X$ and $u$ is an open immersion that induces an isomorphism between the tubes $]X'[_{V'} \simeq ]X[_{V}$. Formal morphisms admit a calculus of right fractions with respect to strict neighborhoods and the quotient category is the \emph{overconvergent site} $\mathrm{An}^\dagger_{/\mathcal V}$.
Roughly speaking, we allow the replacement of $V$ by any neighborhood of $]X[_{V}$ in $V$ and we make the role of $P$ secondary (only existence is required). Since we call our category a site, we must endow it with a topology which is actually defined by the pretopology of families of formal morphisms $$ \xymatrix{X \ar@{^{(}->}[r] \ar@{=}[d] & P_{i} \ar[d]^{v_{i}} & P_{iK} \ar[l] \ar[d]^{v_{iK}} & V_{i} \ar[l] \ar@{_{(}->}[d] \\ X \ar@{^{(}->}[r] & P & P_{K} \ar[l] & V \ar[l] } $$ where $V_{i}$ is open in $V$ and $]X[_{V} \subset \cup V_{i}$ (this is a \emph{standard} site). Since the formal scheme plays a very loose role in the theory, we usually denote by $(X, V)$ an overconvergent variety and write $(f,u)$ for a morphism. We use the general formalism of \emph{restricted} category (also called \emph{localized} or \emph{comma} or \emph{slice} category) to define relative overconvergent sites. First of all, we define an \emph{overconvergent presheaf} as a presheaf (of sets) $T$ on $\mathrm{An}^\dagger_{/\mathcal V}$. If we are given an overconvergent presheaf $T$, we may consider the restricted site $\mathrm{An}^\dagger_{/T}$. An object is a section $s$ of $T$ on some overconvergent variety $(X, V)$ but we like to see $s$ as a morphism from (the presheaf represented by) $(X,V)$ to $T$. We will then say that $(X,V)$ is a \emph{(overconvergent) variety over $T$}. A morphism between varieties over $T$ is just a morphism of overconvergent varieties which is compatible with the given sections. The above pretopology is still a pretopology on $\mathrm{An}^\dagger_{/T}$ and we denote by $T_{\mathrm{An}^\dagger}$ the corresponding topos. As explained by David Zureick-Brown in his thesis (see \cite{ZureickBrown10} and \cite{ZureickBrown14*}), one may as well replace $\mathrm{An}^\dagger_{/T}$ by any fibered category over $\mathrm{An}^\dagger_{/\mathcal V}$. This is necessary if one wishes to work with algebraic stacks instead of algebraic varieties. 
As a first example, we can apply our construction to the case of a representable sheaf $T := (X, V)$. Another very important case is the following: we are given an overconvergent variety $(C, O)$ and an algebraic variety $X$ over $C$. Then, we define the overconvergent sheaf $X/O$ as follows: a section of $X/O$ is a variety $(X', V')$ over $(C, O)$ with a given factorization $X' \to X \to C$ (this definition extends immediately to algebraic spaces - or even algebraic stacks if one is ready to work with fibered categories). Alternatively, if we are actually given a variety $(X, V)$ over $(C, O)$, we may also consider the overconvergent presheaf $X_{V}/O$: a section is a variety $(X', V')$ over $(C, O)$ with a given factorization $X' \to X \to C$ which extends to \emph{some} factorization $(X', V') \to (X, V) \to (C, O)$. Note that we only require the \emph{existence} of the second factorization. In other words, $X_{V}/O$ is the image presheaf of the natural map $(X, V) \to X/O$. An important theorem (more precisely its corollary 2.5.12 in \cite{LeStum11}) states that, if we work only with \emph{good} overconvergent varieties, then there exists an isomorphism of topos $(X_{V}/O)_{\mathrm{An}^\dagger} \simeq (X/O)_{\mathrm{An}^\dagger}$ when we start from a \emph{geometric} situation \begin{align} \label{geom} \xymatrix{X \ar@{^{(}->}[r] \ar[d]^f & P \ar[d]^v & P_{K} \ar[l] \ar[d]^{v_{K}} & V \ar[l] \ar[d]^u \\ C \ar@{^{(}->}[r] & S & S_{K} \ar[l] & O \ar[l] } \end{align} with $P$ proper and smooth around $X$ over $S$ and $V$ a neighborhood of the tube of $X$ in $P_{K} \times_{S_{K}} O$ (and $(C,O)$ is good). If we are given a morphism of overconvergent presheaves $v : T' \to T$, we will also say that $T'$ is a \emph{(overconvergent) presheaf over} $T$. It will induce a morphism of topos $v_{\mathrm{An}^\dagger} : T'_{\mathrm{An}^\dagger} \to T_{\mathrm{An}^\dagger}$. 
We will often drop the index $\mathrm{An}^\dagger$ and keep writing $v$ instead of $v_{\mathrm{An}^\dagger}$. Also, we will usually write the inverse image of a sheaf $\mathcal F$ as $\mathcal F_{|T'}$ when there is no ambiguity about $v$. Note that there will exist a triple of adjoint functors $v_{!}, v^{-1}, v_{*}$ with $v_{!}$ exact. For example, any morphism $(f, u) : (Y, W) \to (X, V)$ of overconvergent varieties will give rise to a morphism of topos $$ (f,u)_{\mathrm{An}^\dagger} : (Y, W)_{\mathrm{An}^\dagger} \to (X, V)_{\mathrm{An}^\dagger}. $$ It will also induce a morphism of overconvergent presheaves $f_{u} : Y_{W}/O \to X_{V}/O$ giving rise to a morphism of topos $f_{u\mathrm{An}^\dagger} : (Y_{W}/O)_{\mathrm{An}^\dagger} \to (X_{V}/O)_{\mathrm{An}^\dagger}$. Finally, if $(C,O)$ is an overconvergent variety, then any morphism $f : Y \to X$ of algebraic varieties over $C$ induces a morphism of overconvergent presheaves $f : Y/O \to X/O$ giving rise to a morphism of topos $f_{\mathrm{An}^\dagger} : (Y/O)_{\mathrm{An}^\dagger} \to (X/O)_{\mathrm{An}^\dagger}$. If we are given an overconvergent variety $(X, V)$, there exists a \emph{realization} map (morphism of topos) $$ \xymatrix@R0cm{(X, V)_{\mathrm{An}^\dagger} \ar[r]^{\varphi} & ]X[_{V \mathrm{an}} \\ (X, V') & \ar@{|->}[l] ]X[_{V'}} $$ where $]X[_{V \mathrm{an}}$ denotes the category of sheaves (of sets) on the analytic variety $]X[_{V}$ (which has a section $\psi$). Now, if $T$ is any overconvergent presheaf and $(X, V)$ is a variety over $T$, then there exists a canonical morphism $(X, V) \to T$. Therefore, if $\mathcal F$ is a sheaf on $T$, we may consider its restriction $\mathcal F_{|(X,V)}$ which is a sheaf on $(X, V)$. We define the \emph{realization} of $\mathcal F$ on $(X, V)$ as $$ \mathcal F_{X,V} := \varphi_{V*}\left(\mathcal F_{|(X,V)}\right) $$ (we shall simply write $\mathcal F_{V}$ in practice unless we want to emphasize the role of $X$).
As one might expect, the sheaf $\mathcal F$ is completely determined by its realizations $\mathcal F_{V}$ and the transition morphisms $]f[_{u}^{-1}\mathcal F_{V} \to \mathcal F_{V'}$ obtained by functoriality whenever $(f, u) : (X', V') \to (X, V)$ is a morphism over $T$. We will need below the following result : \begin{prop}\label{cart} If we are given a \emph{cartesian} diagram of overconvergent presheaves (with a representable upper map) $$ \xymatrix{ (X', V') \ar[rr]^{(f,u)} \ar[d]^{s'} && (X, V) \ar[d]^s \\ T' \ar[rr]^v && T, } $$ and $\mathcal F'$ is a sheaf on $T'$, then $$ (v_{*}\mathcal F')_{V} = ]f[_{u*}\mathcal F'_{V'}. $$ \end{prop} \begin{proof} Since the diagram is cartesian, we have (this is formal) $$ s^{-1}v_{*}\mathcal F' = (f,u)_{*}s'^{-1} \mathcal F'. $$ It follows that \begin{align*} (v_{*}\mathcal F')_{V} &= \varphi_{V*}s^{-1}v_{*}\mathcal F' \\ &= \varphi_{V*}(f,u)_{*}s'^{-1}\mathcal F' \\ &= ]f[_{u*}\varphi_{V'*}s'^{-1}\mathcal F' \\ &= ]f[_{u*}\mathcal F'_{V'}. \qedhere \end{align*} \end{proof} If $(X, V)$ is an overconvergent variety, we will denote by $i_{X} : ]X[_{V} \hookrightarrow V$ the inclusion map. Then, if $T$ is an overconvergent presheaf, we define the structural sheaf of $\mathrm{An}^\dagger_{/T}$ as the sheaf $\mathcal O^\dagger_{T}$ whose realization on any $(X, V)$ is $i_{X}^{-1} \mathcal O_{V}$. An $\mathcal O^\dagger_{T}$-module $E$ will also be called a \emph{(overconvergent) module} on $T$. As it was the case for sheaves of sets, the module $E$ is completely determined by its realizations $E_{V}$ and the transition morphisms \begin{align} \label{transm} ]f[_{u}^{\dagger}E_{V} := i_{X'}^{-1}u^*i_{X*} E_{V }\to E_{V'} \end{align} obtained by functoriality whenever $(f, u) : (X', V') \to (X, V)$ is a morphism over $T$. A module on $T$ is called an \emph{(overconvergent) isocrystal} if all the transition maps \eqref{transm} are actually isomorphisms (used to be called a crystal in \cite{LeStum11}). 
We will denote by $$ \mathrm{Isoc}^\dagger(T) \subset \mathcal O^\dagger_{T}\mathrm{-Mod} $$ the full subcategory made of all isocrystals on $T$ (used to be denoted by $\mathrm{Cris}^\dagger(T)$ in \cite{LeStum11}). Be careful that inclusion is only right exact in general. If we are given a morphism of overconvergent presheaves $v : T' \to T$ then the functors $v_{!}, v^{-1}, v_{*}$ preserve modules (we use the same notation $v_{!}$ for sheaves of sets and abelian groups: this should not create any confusion) and $v^{-1}$ preserves isocrystals. One can show that a module on $T$ is locally finitely presented if and only if it is an isocrystal with coherent realizations. We will denote their category by $\mathrm{Isoc}^\dagger_{\mathrm{coh}}(T)$ (be careful that it only means that the realizations are coherent: $\mathcal O^\dagger_{T}$ is not a coherent ring in general). In the case $T = X/S_{K}$ and $\mathrm{Char}(K) = 0$, this is equivalent to Berthelot's original definition 2.3.6 in \cite{Berthelot96c*} of an overconvergent isocrystal. Back to our examples, it is not difficult to see that, when $(X,V)$ is an overconvergent variety, the realization functor induces an equivalence of categories $$ \mathrm{Isoc}^\dagger(X,V) \simeq i_{X}^{-1}\mathcal O_{V}\mathrm{-Mod} $$ between isocrystals on $(X, V)$ and $i_{X}^{-1}\mathcal O_{V}$-modules. Now, if $(X, V)$ is a variety over an overconvergent variety $(C,O)$ and $$ p_{1}, p_{2} : (X, V \times_{O} V) \to (X, V) $$ denote the projections, we define an \emph{overconvergent stratification} on an $i_{X}^{-1}\mathcal O_{V}$-module $\mathcal F$ as an isomorphism $$ \epsilon : ]p_{2}[^\dagger \mathcal F \simeq ]p_{1}[^\dagger \mathcal F $$ that satisfies the cocycle condition on triple products and the normalization condition along the diagonal. They form an additive category $\mathrm{Strat}^\dagger(X,V/O)$ with cokernels and tensor products. 
It is even an abelian category when $V$ is universally flat over $O$ in a neighborhood of $]X[_{V}$. In any case, the realization functor will induce an equivalence $$ \mathrm{Isoc}^\dagger(X_{V}/O) \simeq \mathrm{Strat}^\dagger(X,V/O). $$ We may also consider, for $n \in \mathbb N$, the $n$-th infinitesimal neighborhood $V^{(n)}$ of $V$ in $V \times_{O} V$. Then, a \emph{(usual) stratification} on an $i_{X}^{-1}\mathcal O_{V}$-module $\mathcal F$ is a compatible family of isomorphisms $$ \epsilon^{(n)} : i_{X}^{-1}\mathcal O_{V^{(n)}} \otimes_{i_{X}^{-1}\mathcal O_{V}} \mathcal F \simeq \mathcal F \otimes_{i_{X}^{-1}\mathcal O_{V}} i_{X}^{-1}\mathcal O_{V^{(n)}} $$ that satisfy the cocycle condition on triple products and the normalization condition along the diagonal. Again, they form an additive category $\mathrm{Strat}(X,V/O)$ with cokernels and tensor products, and even an abelian category when $V$ is smooth over $O$ in a neighborhood of $]X[_{V}$. There exists an obvious faithful functor \begin{align} \label{map1} \mathrm{Strat}^\dagger(X,V/O) \to \mathrm{Strat}(X,V/O). \end{align} Note that, \emph{a priori}, different overconvergent stratifications might give rise to the same usual stratification (and of course many usual stratifications will \emph{not} extend at all to an overconvergent one). Finally, a \emph{connection} on an $i_{X}^{-1}\mathcal O_{V}$-module $\mathcal F$ is an $\mathcal O_{O}$-linear map $$ \nabla : \mathcal F \to \mathcal F \otimes_{i_{X}^{-1}\mathcal O_{V}} i_{X}^{-1} \Omega^1_{V} $$ that satisfies the Leibniz rule. Integrability is defined as usual. They form an additive category $\mathrm{MIC}(X,V/O)$ and there exists again a faithful functor \begin{align} \label{map2} \mathrm{Strat}(X,V/O) \to \mathrm{MIC}(X,V/O) \end{align} ($\nabla$ is induced by $\epsilon^{(1)} - \sigma$ where $\sigma$ switches the factors in $V \times_{O} V$).
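Let us make this last construction explicit; this is the standard dictionary between stratifications and connections (we only sketch it, and sign and side conventions may vary in the literature). Given a stratification $(\epsilon^{(n)})$ on $\mathcal F$, one sets, for a local section $s$ of $\mathcal F$, $$ \nabla(s) := \epsilon^{(1)}(1 \otimes s) - s \otimes 1 \in \mathcal F \otimes_{i_{X}^{-1}\mathcal O_{V}} i_{X}^{-1} \Omega^1_{V}, $$ using the canonical identification of $i_{X}^{-1}\Omega^1_{V}$ with the kernel of the augmentation $i_{X}^{-1}\mathcal O_{V^{(1)}} \to i_{X}^{-1}\mathcal O_{V}$: the normalization condition along the diagonal guarantees that the difference does land in this kernel, and one checks that the two $i_{X}^{-1}\mathcal O_{V}$-module structures on $i_{X}^{-1}\mathcal O_{V^{(1)}}$ produce exactly the Leibniz rule for $\nabla$.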
When $V$ is smooth over $O$ in a neighbourhood of $]X[_{V}$ and $\mathrm{Char}(K) = 0$, then the functor \eqref{map2} is an equivalence. Actually, both categories are then equivalent to the category of $i_{X}^{-1}\mathcal D_{V/O}$-modules. In general, we will denote by $\mathrm{MIC}^\dagger(X,V/O)$ the image of the composition of the functors \eqref{map1} and \eqref{map2} and then call the connection \emph{overconvergent} (and add an index $\mathrm{coh}$ when we consider only coherent modules). Thus, there exists a realization functor \begin{align} \label{map3} \mathrm{Isoc}^\dagger(X_{V}/O) \to \mathrm{MIC}^\dagger(X,V/O) \end{align} which is faithful and essentially surjective (but not an equivalence in general). In practice, we are interested in isocrystals on $X/O$ where $(C,O)$ is an overconvergent variety and $X$ is an algebraic variety over $C$. We can localize in order to find a geometric realization $V$ for $X$ over $O$ such as \eqref{geom} and work directly on $(X,V)$: there exists an equivalence of categories $$ \mathrm{Isoc}^\dagger(X/O) \simeq \mathrm{Isoc}^\dagger(X_{V}/O) $$ that may be composed with \eqref{map3} in order to get the realization functor $$ \mathrm{Isoc}^\dagger(X/O) \to \mathrm{MIC}^\dagger(X,V/O). $$ In \cite{LeStum11}, we proved that, when $\mathrm{Char}(K) = 0$, it induces an equivalence $$ \mathrm{Isoc}_{\mathrm{coh}}^\dagger(X/O) \simeq \mathrm{MIC}_{\mathrm{coh}}^\dagger(X,V/O) $$ (showing in particular that the right hand side is independent of the choice of the geometric realization and that they glue). We will extend this below to what we call \emph{constructible isocrystals}. \section{Locally closed embeddings} \label{emb} In this section, we fix an algebraic variety $X$ over $k$. Recall that a (overconvergent) variety over $X/\mathcal M(K)$ (we will simply write $X/K$ in the future) is a pair made of an overconvergent variety $(X', V')$ and a morphism $X' \to X$. 
In other words, it is a diagram $$ \xymatrix{&& V' \ar[d] \\ X' \ar@{^{(}->}[r] \ar[d] & P' & P'_{K} \ar[l] \\ X && \quad } $$ where $P'$ is a formal scheme. We also fix a presheaf $T$ over $X/K$. For example, $T$ could be (the presheaf represented by) an overconvergent variety $(X', V')$ over $X/K$. Also, if $(C, O)$ is an overconvergent variety and $X$ is an algebraic variety over $C$, then we may consider the sheaf $T := X/O$ (see section \ref{bible}). Finally, if we are given a morphism of overconvergent varieties $(X, V) \to (C, O)$, then we could set $T := X_{V}/O$ (see section \ref{bible} again). Next, we also fix an open immersion $\alpha : U \hookrightarrow X$ and denote by $\beta : Z \hookrightarrow X$ the embedding of a closed complement. Actually, in the beginning, we consider more generally a locally closed embedding $\gamma : Y \hookrightarrow X$. \begin{dfn} The \emph{restriction} of $T$ to $Y$ is the inverse image $$ T_{Y} := (Y/K) \times_{(X/K)} T $$ of $T$ over $Y/K$. We will still denote by $\gamma : T_{Y} \hookrightarrow T$ the corresponding map. When $\mathcal F$ is a sheaf on $T$, the \emph{restriction} of $\mathcal F$ to $Y$ is $\mathcal F_{|Y} := \gamma^{-1}\mathcal F$. \end{dfn} For example, if $T = (X', V')$ is a variety over $X/K$, then $T_{Y} = (Y', V')$ where $Y'$ is the inverse image of $Y$ in $X'$. Also, if $(C, O)$ is an overconvergent variety, $X$ is an algebraic variety over $C$ and $T = X/O$, then $T_{Y} = Y/O$. Finally, if we are given a morphism of overconvergent varieties $(X, V) \to (C, O)$ and $T = X_{V}/O$, then we will have $T_{Y} = Y_{V}/O$. If $(X, V)$ is an overconvergent variety, we may consider the morphism of overconvergent varieties $(\gamma, \mathrm{Id}_{V}) : (Y, V) \hookrightarrow (X, V)$. We will then denote by $]\gamma[_{V} : ]Y[_{V} \hookrightarrow ]X[_{V}$, or simply $]\gamma[$ if there is no ambiguity, the corresponding map on the tubes. Recall that $]\gamma[$ is the inclusion of an analytic domain.
This is an open immersion when $\gamma$ is a closed embedding and conversely (we use the Berkovich topology). The next result generalizes proposition 3.1.10 of \cite{LeStum11}. \begin{prop}\label{direct} Let $(X', V')$ be an overconvergent variety over $T$ and $\gamma' : Y' \hookrightarrow X'$ the inclusion of the inverse image of $Y$ inside $X'$. If $\mathcal F$ is a sheaf on $T_{Y}$, then $$ (\gamma_{*}\mathcal F)_{X',V'} = ]\gamma'[_{*}\mathcal F_{Y',V'}. $$ \end{prop} \begin{proof} Using corollary 2.4.15 of \cite{LeStum11}, this follows from proposition \ref{cart}. \end{proof} Since we will use it in some of our examples, we should also mention that $\mathrm R^i\gamma_{*} E = 0$ for $i > 0$ when $E$ is an isocrystal with coherent realizations. We can work out very simple examples right now. We will do our computations on the overconvergent variety $$ \mathbb P^1_{k/K} := \mathbb P^{1}_{k} \hookrightarrow \widehat {\mathbb P}^{1}_{\mathcal V} \leftarrow \mathbb P^{1,\mathrm{an}}_{K}. $$ We consider first the open immersion $\alpha : \mathbb A^{1}_{k} \hookrightarrow \mathbb P^{1}_{k}$ and the structural sheaf $\mathcal O^\dagger_{\mathbb A^{1}_{k}/K}$. If we let $i : \mathbb D(0, 1^+) \hookrightarrow \mathbb P^{1,\mathrm{an}}_{K}$ denote the inclusion map, we have $$ \mathrm R\Gamma(\mathbb P^{1}_{k/K}, \alpha_{*} \mathcal O^\dagger_{\mathbb A^{1}_{k}/K}) = \mathrm R\Gamma(\mathbb P^{1,\mathrm{an}}_{K}, i_{*}i^{-1}\mathcal O_{\mathbb P^{1,\mathrm{an}}_{K}}) = K[t]^\dagger := \cup_{\lambda > 1} K\{t/\lambda\} $$ (functions with radius of convergence (strictly) bigger than one at the origin).
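Let us briefly indicate where this computation comes from (only a sketch): the tube of $\mathbb A^{1}_{k}$ in $\mathbb P^{1,\mathrm{an}}_{K}$ is the closed unit disc, and sections of $i^{-1}\mathcal O_{\mathbb P^{1,\mathrm{an}}_{K}}$ on it are germs of analytic functions defined on slightly bigger discs, so that $$ ]\mathbb A^{1}_{k}[_{\mathbb P^{1,\mathrm{an}}_{K}} = \mathbb D(0, 1^+) \quad \mathrm{and} \quad \Gamma(\mathbb D(0, 1^+), i^{-1}\mathcal O_{\mathbb P^{1,\mathrm{an}}_{K}}) = \cup_{\lambda > 1} K\{t/\lambda\} = K[t]^\dagger. $$ Since the higher coherent cohomology of these affinoid discs vanishes, the complex is concentrated in degree zero.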
On the other hand, if we start from the inclusion $\beta : \infty \hookrightarrow \mathbb P^{1}_{k}$ and let $j : \mathbb D(\infty, 1^-) \hookrightarrow \mathbb P^{1,\mathrm{an}}_{K}$ denote the inclusion map, we have $$ \mathrm R\Gamma(\mathbb P^{1}_{k/K}, \beta_{*} \mathcal O^\dagger_{\infty/K}) = \mathrm R\Gamma(\mathbb P^{1,\mathrm{an}}_{K}, j_{*}j^{-1}\mathcal O_{\mathbb P^{1,\mathrm{an}}_{K}}) = K[1/t]^\mathrm{an} := \cap_{\lambda > 1} K\{\lambda/t\} $$ (functions with radius of convergence at least one at infinity). \begin{cor} \label{resex} We have \begin{enumerate} \item $ \gamma_{\mathrm{An}^\dagger}^{-1} \circ \gamma_{\mathrm{An}^\dagger *} = \mathrm{Id}, $ \item if $\gamma' : Y' \hookrightarrow X$ is another locally closed embedding with $Y \cap Y' = \emptyset$, then $$ \gamma_{\mathrm{An}^\dagger}^{-1} \circ \gamma'_{\mathrm{An}^\dagger *} = 0. $$ \end{enumerate} \end{cor} Alternatively, one may say that if $\mathcal F$ is a sheaf on $T_{Y}$, we have $$ (\gamma_{*}\mathcal F)_{|Y} = \mathcal F \quad \mathrm{and} \quad (\gamma_{*}\mathcal F)_{|Y'} = 0. $$ The first assertion of the corollary means that $\gamma_{\mathrm{An}^\dagger}$ is an embedding of topos (direct image is fully faithful). Actually, from the fact that $Y$ is a subobject of $X$ in the category of varieties, one easily deduces that $T_{Y}$ is a subobject of $T$ in the overconvergent topos and $\gamma_{\mathrm{An}^\dagger}$ is therefore an \emph{open} immersion of topos. Note also that the second assertion applies in particular to open and closed complements (both ways): consequently, these functors \emph{cannot} be used to glue along open and closed complements. We will need some refinement. We focus now on the case of an \emph{open} immersion $\alpha : U \hookrightarrow X$ which gives rise to a closed embedding on the tubes. \begin{prop} The functor $\alpha_{\mathrm{An}^\dagger*} : T_{U\mathrm{An}^\dagger} \to T_{\mathrm{An}^\dagger}$ is exact and preserves isocrystals.
\end{prop} \begin{proof} This is not trivial but can be proved exactly as in Corollary 3.1.12 and Proposition 3.3.15 of \cite{LeStum11} (which is the case $T = X/O$). \end{proof} The following definition is related to rigid cohomology with compact support (recall that $\beta : Z \hookrightarrow X$ denotes the embedding of a closed complement of $U$): \begin{dfn} If $\mathcal F$ is a sheaf of abelian groups on $T$, then $$ \underline \Gamma_{U}\mathcal F = \ker(\mathcal F \to \beta_{*}\mathcal F_{|Z}) $$ is the subsheaf of \emph{sections of $\mathcal F$ with support in $U$}. \end{dfn} If we denote by $\mathcal U$ the \emph{closed} subtopos of $T_{\mathrm{An}^\dagger}$ which is the complement of the open topos $T_{Z\mathrm{An}^\dagger}$, then $\underline \Gamma_{U}$ is the same thing as the functor $\mathcal H^0_{\mathcal U}$ of sections with support in $\mathcal U$. With this in mind, the first two assertions of the next proposition below are completely formal. One may also show that the functor $\mathcal F \mapsto \mathcal F/\beta_{!}\beta^{-1}\mathcal F$ is an exact left adjoint to $\underline \Gamma_{U}$; it follows that $\underline \Gamma_{U}$ preserves injectives. Actually, we will use the open/closed formalism only in the classical topological situation. Recall (see, for example, \cite{Iversen86}, section II.6, for these kinds of things) that if $i : W \hookrightarrow V$ is a closed embedding of topological spaces, then $i_{*}$ has a right adjoint $i^!$ (and one usually sets $\underline \Gamma_{W}:= i_{*}i^!$) which commutes with direct images. If $(X, V)$ is an overconvergent variety, we know that $]\alpha[ : ]U[ \hookrightarrow ]X[$ is a closed embedding and we may therefore consider the functors $]\alpha[^{!}$ and $\underline \Gamma_{]U[}$.
\begin{prop} \label{gammu} \begin{enumerate} \item The functor $\underline \Gamma_{U}$ is left exact and preserves modules, \item if $\mathcal F$ is a sheaf of abelian groups on $T$, then there exists a distinguished triangle $$ \mathrm R \underline \Gamma_{U} \mathcal F \to \mathcal F \to \mathrm R \beta_{*}\mathcal F_{|Z} \to , $$ \item \label{locgam} if $(X', V')$ is a variety over $T$ and $\alpha' : U' \hookrightarrow X'$ denotes the immersion of the inverse image of $U$ into $X'$, we have $$ (\mathrm R \underline \Gamma_{U}E)_{V'} = \mathrm R \underline \Gamma_{]U'[_{V'}}E_{V'} $$ for any \emph{isocrystal} $E$ on $T$. \end{enumerate} \end{prop} \begin{proof} The first assertion follows immediately from the fact that all the functors involved ($\beta^{-1}$, $\beta_{*}$ and $\ker$) do have these properties. The second assertion results from the fact that the map $\mathcal F \to \beta_{*}\mathcal F_{|Z}$ is surjective when $\mathcal F$ is an injective sheaf (this is formal). In order to prove the last assertion, it is sufficient to remember (this is a standard fact) that there exists a distinguished triangle $$ \mathrm R \underline \Gamma_{]U'[_{V'}}E_{V'} \to E_{V'} \to \mathrm R ]\beta'[_{*}]\beta'[^{-1}E_{V'} \to $$ where $\beta' : Z' \hookrightarrow X'$ denotes the inverse image of the inclusion of a closed complement of $U$. Since $E$ is an isocrystal, we have $(E_{|Z})_{Z',V'} = ]\beta'[^{-1}E_{X',V'}$. \end{proof} Note that the second assertion means that there exists an exact sequence $$ 0 \to \underline \Gamma_{U} \mathcal F \to \mathcal F \to \beta_{*}\mathcal F_{|Z} \to \mathrm R^1\underline \Gamma_{U} \mathcal F \to 0 $$ and that $\mathrm R^{i} \beta_{*}\mathcal F_{|Z} = \mathrm R^{i+1}\underline \Gamma_{U} \mathcal F$ for $i > 0$. We can do the exercise with $\alpha : \mathbb A^{1}_{k} \hookrightarrow \mathbb P^{1}_{k}$ and $\beta : \infty \hookrightarrow \mathbb P^{1}_{k}$ as above.
We obtain $$ \mathrm R\Gamma(\mathbb P^{1}_{k/K}, \mathrm R \underline \Gamma_{\mathbb A^1_{k}}\mathcal O^\dagger_{\mathbb P^{1}_{k}/K}) = [K \to K[1/t]^{\mathrm {an}}] = (K[1/t]^{\mathrm {an}}/K)[-1]. $$ Since realization does not commute with the inverse image in general, we need to consider the following functor: \begin{lem} If $\mathcal F$ is a sheaf on $T$, then the assignment $$ (X', V') \mapsto (j_{U}^\dagger \mathcal F)_{V'} := ]\alpha'[_{*}]\alpha'[^{-1}\mathcal F_{V'}, $$ where $\alpha' : U' \hookrightarrow X'$ denotes the immersion of the inverse image of $U$ into $X'$, defines a sheaf on $T$. \end{lem} \begin{proof} We give ourselves a morphism $(f, u) : (X'', V'') \to (X', V')$ over $T$, we denote by $g : U'' \to U'$ the map induced by $f$ on the inverse images of $U$ into $X'$ and $X''$ respectively, and by $\alpha'' : U'' \hookrightarrow X''$ the inclusion map. We then consider the cartesian diagram (forgetful functor to algebraic varieties is left exact) $$ \xymatrix{(U'', V'') \ar@{^{(}->}[r] \ar[d]^{(g,u)}& (X'', V'') \ar[d]^{(f,u)} \\ (U', V') \ar@{^{(}->}[r] & (X', V')} $$ which gives rise to a cartesian diagram (tube is left exact) $$ \xymatrix{]U''[_{V''} \ar@{^{(}->}[r] \ar[d]^{]g[_{u}}& ]X''[_{V''} \ar[d]^{]f[_{u}}\\ ]U'[_{V'} \ar@{^{(}->}[r] & ]X'[_{V'} }. $$ Since $]\alpha'[$ is a closed embedding, we have $]f[_{u}^{-1} \circ ]\alpha'[_{*} = ]\alpha''[_{*} \circ ]g[_{u}^{-1}$ and there exists a canonical map $$ ]f[_{u}^{-1}]\alpha'[_{*}]\alpha'[^{-1}\mathcal F_{V'} = ]\alpha''[_{*}]g[_{u}^{-1}]\alpha'[^{-1}\mathcal F_{V'} = ]\alpha''[_{*}]\alpha''[^{-1}]f[_{u}^{-1}\mathcal F_{V'} \to ]\alpha''[_{*}]\alpha''[^{-1}\mathcal F_{V''}.\qedhere $$ \end{proof} \begin{dfn} If $\mathcal F$ is a sheaf on $T$, then $j^\dagger_{U}\mathcal F$ is the sheaf of \emph{overconvergent sections of $\mathcal F$ around $U$}.
\end{dfn} \begin{prop} \begin{enumerate} \item The functor $j_{U}^\dagger$ is exact and preserves isocrystals, \item if $E$ is an isocrystal on $T$, we have $j_{U}^\dagger E = \alpha_{*}\alpha^{-1} E$. \end{enumerate} \end{prop} \begin{proof} Exactness can be checked on realizations. But, if $(X',V')$ is a variety over $T$ and $\alpha' : U' \hookrightarrow X'$ denotes the immersion of the inverse image of $U$ in $X'$, then we know the exactness of $]\alpha'[_{*}$ (because $]\alpha'[$ is a closed embedding) and $]\alpha'[^{-1}$. The second part of the first assertion is a consequence of the second assertion, which follows from the fact that $(\alpha^{-1}E)_{V'} = ]\alpha'[^{-1} E_{V'}$ when $E$ is an isocrystal. \end{proof} Note that the canonical map $j_{U}^\dagger \mathcal F \to \alpha_{*}\alpha^{-1} \mathcal F$ is still bijective when $\mathcal F$ is a sheaf of \emph{Zariski type} (see Definition 4.6.1\footnote{the comment following definition 4.6.1 in \cite{LeStum11} is not correct and Lemma 4.6.2 is only valid for an \emph{open} immersion} of \cite{LeStum11}) but there are important concrete situations where equality fails, as we shall see right now. In order to exhibit a counterexample, we let again $\alpha : \mathbb A^{1}_{k} \hookrightarrow \mathbb P^{1}_{k}$ and $\beta : \infty \hookrightarrow \mathbb P^{1}_{k}$ denote the inclusion maps and consider the sheaf $ \mathcal F := \beta_{*}\mathcal O^\dagger_{\infty/K} $ which is \emph{not} an isocrystal (and not even of Zariski type). Since $\alpha^{-1} \circ \beta_{*} = 0$, we have $\alpha_{*}\alpha^{-1}\mathcal F = 0$. Now, let us denote by $i_{\xi} : \xi \hookrightarrow \mathbb P^{1,\mathrm{an}}_{K}$ the inclusion of the generic point of the unit disc (corresponding to the Gauss norm) and let $i : \mathbb D(0, 1^+) \hookrightarrow \mathbb P^{1,\mathrm{an}}_{K}$ and $j : \mathbb D(\infty, 1^-) \hookrightarrow \mathbb P^{1,\mathrm{an}}_{K}$ be the inclusion maps as above.
Let $$ \mathcal R := \left\{\sum_{n \in \mathbb Z} a_{n}t^n, \quad \left\{\begin{array}{c} \exists \lambda > 1, \lambda^na_{n} \to 0\ \mathrm{for}\ n \to + \infty \\ \forall \lambda > 1, \lambda^na_{n} \to 0 \ \mathrm{for}\ n \to -\infty \end{array} \right. \right\} $$ be the \emph{Robba ring} (functions that converge on some open annulus of outer radius one at infinity). Then, one easily sees that $$ (j^\dagger_{\mathbb A^1_{k}}\beta_{*}\mathcal O_{\infty/K}^\dagger)_{\mathbb P^{1}_{k/K}} = i_{*}i^{-1}j_{*}\mathcal O_{\mathbb D(\infty, 1^-)} = i_{\xi*}\mathcal R $$ so that $j^\dagger_{\mathbb A^1_{k}}\mathcal F \neq 0$. This computation also shows that $$ \mathrm R\Gamma(\mathbb P^{1}_{k/K}, j^\dagger_{\mathbb A^1_{k}}\beta_{*}\mathcal O^\dagger_{\infty/K}) = \mathcal R. $$ We now turn to the study of the \emph{closed} embedding $\beta : Z \hookrightarrow X$, which requires some care (as we just experienced, the direct image of an isocrystal need not be an isocrystal). The following definition has to do with cohomology with support in a closed subset. \begin{dfn} For any sheaf of abelian groups $\mathcal F$ on $T$, $$ \underline \Gamma^\dagger_{Z} \mathcal F := \ker\left(\mathcal F \to \alpha_{*}\mathcal F_{|U}\right) $$ is the subsheaf of \emph{overconvergent sections of $\mathcal F$ with support in $Z$}. \end{dfn} We will do some examples below when we have more material at our disposal. As above, if we denote by $\mathcal Z$ the closed subtopos of $T_{\mathrm{An}^\dagger}$ which is the complement of the open topos $T_{U\mathrm{An}^\dagger}$, then $\underline \Gamma^\dagger_{Z}$ is the same thing as the functor $\mathcal H^0_{\mathcal Z}$ of sections with support in $\mathcal Z$. This is the approach taken by David Zureick-Brown in \cite{ZureickBrown10} and \cite{ZureickBrown14*} in order to define cohomology with support in $Z$ on the overconvergent site. The next proposition is completely formal if one uses Zureick-Brown's approach.
Also, as above, one may prove that $\underline \Gamma^\dagger_{Z}$ preserves injectives because the functor $\mathcal F \mapsto \mathcal F/\alpha_{!}\alpha^{-1}\mathcal F$ is an exact left adjoint. \begin{prop}\label{gamdag} \begin{enumerate} \item The functor $\underline \Gamma^\dagger_{Z}$ is left exact and preserves modules. \item If $\mathcal F$ is an abelian sheaf on $T$, then there exists a distinguished triangle $$ \mathrm R\underline \Gamma^\dagger_{Z} \mathcal F \to \mathcal F \to \alpha_{*}\mathcal F_{|U} \to. $$ \end{enumerate} \end{prop} We will also show below that $\underline \Gamma^\dagger_{Z}$ preserves isocrystals. \begin{proof} As in the proof of proposition \ref{gammu}, the first assertion follows from the fact that all the functors involved (and the kernel as well) are left exact and preserve overconvergent modules. And similarly, the second one is a formal consequence of the definition because $\alpha_{*}$ and $\alpha^{-1}$ both preserve injectives (they both have an exact left adjoint) and the map $\mathcal F \to \alpha_{*}\mathcal F_{|U}$ is an epimorphism when $\mathcal F$ is injective (standard). \end{proof} Note that the last assertion of the proposition means that there exists an exact sequence $$ 0 \to \underline \Gamma^\dagger_{Z} \mathcal F \to \mathcal F \to \alpha_{*}\mathcal F_{|U} \to \mathrm R^1\underline \Gamma^\dagger _{Z} \mathcal F \to 0 $$ and that $\mathrm R^{i}\underline \Gamma^\dagger_{Z} \mathcal F = 0$ for $i > 1$. Before going any further, we want to stress the fact that $\beta^{-1}$ has a left adjoint $\beta_{!}$ in the category of all modules (or abelian groups, or even sets with a light modification) but $\beta_{!}$ does not preserve isocrystals in general. Actually, we always have $(\beta_{!}\mathcal F)_{X',V'} = 0$ unless the morphism $X' \to X$ factors through $Z$ (recall that we use the coarse topology on the algebraic side).
Again, the workaround consists in working directly with the realizations. If $j : W \hookrightarrow V$ is an open immersion of topological spaces, then $j^{-1}$ has an adjoint $j_{!}$ on the left also (on sheaves of abelian groups or sheaves of sets with a light modification). This is an exact functor that commutes with inverse images (see \cite{Iversen86}, II.6 again). Now, if $(X, V)$ is an overconvergent variety, then $]\beta[ : ]Z[ \hookrightarrow ]X[$ is an \emph{open} immersion and we may consider the functor $]\beta[_{!}$. \begin{lem} \label{lmbdg} If $\mathcal F$ is a sheaf (of sets or abelian groups) on $T_{Z}$, then the assignment $$ (X', V') \mapsto (\beta_{\dagger} \mathcal F)_{X',V'} := ]\beta'[_{!} \mathcal F_{Z',V'}, $$ where $\beta' : Z' \hookrightarrow X'$ denotes the embedding of the inverse image of $Z$ into $X'$, defines a sheaf on $T$. Moreover, if $E$ is an isocrystal on $T_{Z}$, then $\beta_{\dagger}E$ is an isocrystal on $T$. \end{lem} \begin{proof} As above, we consider a morphism $(f, u) : (X'', V'') \to (X', V')$ over $T$. We denote by $h : Z'' \to Z'$ the map induced by $f$ on the inverse images of $Z$ into $X'$ and $X''$ respectively, and by $\beta'' : Z'' \hookrightarrow X''$ the inclusion map. We have a cartesian diagram $$ \xymatrix{(Z'', V'') \ar@{^{(}->}[r] \ar[d]^{(h,u)}& (X'', V'') \ar[d]^{(f,u)} \\ (Z', V') \ar@{^{(}->}[r] & (X', V')} $$ giving rise to a cartesian diagram $$ \xymatrix{]Z''[_{V''} \ar@{^{(}->}[r] \ar[d]^{]h[_{u}}& ]X''[_{V''} \ar[d]^{]f[_{u}}\\ ]Z'[_{V'} \ar@{^{(}->}[r] & ]X'[_{V'} }. $$ It follows that there exists a canonical map $$ ]f[_{u}^{-1} ]\beta'[_{!} \mathcal F_{V'} = ]\beta''[_{!} ]h[_{u}^{-1} \mathcal F_{V'} \to ]\beta''[_{!} \mathcal F_{V''} $$ as asserted. We consider now an isocrystal $E$ and we want to show that $$ ]f[_{u}^{\dagger} ]\beta'[_{!} E_{V'} \simeq ]\beta''[_{!} E_{V''}. 
$$ This immediately follows from the equality (which is formal) $$ i_{X''}^{-1}\mathcal O_{V''} \otimes_{i_{X''}^{-1}u^{-1}\mathcal O_{V'}} ]\beta''[_{!} ]h[_{u}^{-1} E_{V'} = ]\beta''[_{!} \left(i_{Z''}^{-1}\mathcal O_{V''} \otimes_{i_{Z''}^{-1}u^{-1}\mathcal O_{V'}} ]h[_{u}^{-1} E_{V'}\right). \qedhere $$ \end{proof} \begin{dfn} $\beta_{\dagger} \mathcal F$ is the \emph{overconvergent direct image} of $\mathcal F$. \end{dfn} Note that there exist two flavors of $\beta_{\dagger}$: one for sheaves of sets and one for sheaves of abelian groups. Whichever we consider should be clear from the context. \begin{prop} \label{invim} \begin{enumerate} \item If $\mathcal F$ is a sheaf on $T_{Z}$, then \begin{enumerate} \item $(\beta_{\dagger} \mathcal F)_{|Z} = \mathcal F$, \item $(\beta_{\dagger} \mathcal F)_{|U} = 0$, \item if $E$ is an isocrystal on $T$, then \begin{align} \label{omdag} \mathcal Hom(\beta_{\dagger} \mathcal F, E) = \beta_{*}\mathcal Hom(\mathcal F, \beta^{-1}E), \end{align} \item there exists a short exact sequence \begin{align} \label{fonsec} 0 \to \beta_{\dagger} \mathcal F \to \beta_{*} \mathcal F \to j_{U}^\dagger \beta_{*} \mathcal F \to 0, \end{align} \end{enumerate} \item $\beta_{\dagger}$ is a fully faithful exact functor that preserves isocrystals, and the induced functor $$ \beta_{\dagger} : \mathrm{Isoc}^\dagger(T_{Z}) \to \mathrm{Isoc}^\dagger(T) $$ is left adjoint to $$ \beta^{-1} : \mathrm{Isoc}^\dagger(T) \to \mathrm{Isoc}^\dagger(T_{Z}). $$ \end{enumerate} \end{prop} \begin{proof} As usual, if $(X',V')$ is a variety over $T$, then we denote by $\alpha' : U' \hookrightarrow X'$ and $\beta' : Z' \hookrightarrow X'$ the inclusions of the inverse images of $U$ and $Z$ respectively. When $(X', V')$ is an overconvergent variety over $T_{Z}$, we have $]\beta'[ = \mathrm{Id}$, and when $(X', V')$ is an overconvergent variety over $T_{U}$, we have $]\beta'[ = \emptyset$. We obtain the first two assertions.
When $E$ is an isocrystal on $T$, we have an isomorphism (this is standard) $$ \mathcal Hom(]\beta'[_{!} \mathcal F_{V'}, E_{V'}) = ]\beta'[_{*}\mathcal Hom(\mathcal F_{V'}, ]\beta'[^{-1}E_{V'}), $$ from which the third assertion follows. Also, there exists a short exact sequence $$ 0 \to ]\beta'[_{!} \mathcal F_{Z',V'} \to ]\beta'[_{*} \mathcal F_{Z',V'} \to ]\alpha'[_{*}]\alpha'[^{-1}]\beta'[_{*} \mathcal F_{Z',V'} \to 0 $$ which provides the fourth assertion. Full faithfulness and exactness of $\beta_{\dagger}$ follow from the full faithfulness and exactness of $]\beta'[_{!}$ for all $(X', V')$. The fact that $\beta_{\dagger}$ preserves isocrystals was proved in lemma \ref{lmbdg}. The last assertion may be obtained by taking global sections of the equality \eqref{omdag}. \end{proof} We can also mention that there exists a distinguished triangle $$ \beta_{\dagger} \mathcal F \to \mathrm R \beta_{*} \mathcal F \to j_{U}^\dagger \mathrm R \beta_{*} \mathcal F \to. $$ Now, we prove that the exact sequence \eqref{fonsec} is universal: \begin{prop} \label{exthom} If $\mathcal F'$ and $\mathcal F''$ are modules on $T_{Z}$ and $T_{U}$ respectively, then any extension $$ 0 \to \beta_{\dagger} \mathcal F' \to \mathcal F \to \alpha_{*} \mathcal F'' \to 0 $$ is a pullback of the fundamental extension \eqref{fonsec} through a unique morphism $\alpha_{*} \mathcal F'' \to j_{U}^\dagger \beta_{*} \mathcal F'$. \end{prop} \begin{proof} We know that $\beta^{-1}\alpha_{*}\mathcal F'' = 0$ and it follows that $$ \mathrm{Hom}(\alpha_{*}\mathcal F'', \beta_{*}\mathcal F') = \mathrm{Hom}(\beta^{-1}\alpha_{*}\mathcal F'', \mathcal F') = 0. $$ This being true for any sheaves, we see that actually, we have $\mathrm R\mathrm{Hom}(\alpha_{*}\mathcal F'', \mathrm R\beta_{*}\mathcal F') = 0$. It formally follows that $\mathrm R^{i}\mathrm{Hom}(\alpha_{*}\mathcal F'', \beta_{*}\mathcal F') = 0$ for $i \leq 1$.
As a consequence, we obtain a canonical isomorphism \begin{align} \label{fdis} \mathrm{Hom}(\alpha_{*}\mathcal F'', j_{U}^\dagger\beta_{*}\mathcal F') \simeq \mathrm{Ext}(\alpha_{*}\mathcal F'', \beta_{\dagger}\mathcal F'). \end{align} This is exactly the content of our assertion. \end{proof} We should observe that we always have $\mathrm{Hom}(\alpha_{*}\mathcal F'', \beta_{\dagger}\mathcal F') = 0$. However, it is \emph{not} true that $\mathrm{Ext}(\alpha_{*}\mathcal F'', \beta_{\dagger}\mathcal F') = 0$ in general. This can happen because $\beta_{\dagger}$ does not preserve injectives (although it is exact). The overconvergent direct image is related to overconvergent support as follows: \begin{prop} \label{isex} If $E$ is an isocrystal on $T$, then $$ \underline \Gamma^\dagger_{Z} E = \beta_{\dagger} E_{|Z} $$ and, for all $i > 0$, $\mathrm R^i \underline \Gamma^\dagger_{Z} E =0$. \end{prop} \begin{proof} Recall from proposition \ref{gamdag} that there exists an exact sequence $$ 0 \to \underline \Gamma^\dagger_{Z} E \to E \to \alpha_{*}E_{|U} \to \mathrm R^1\underline \Gamma^\dagger _{Z} E \to 0 $$ and that $\mathrm R^i \underline \Gamma^\dagger_{Z} E =0$ for $i > 1$. Now, let $(X', V')$ be a variety over $T$. Denote by $\beta' : Z' \hookrightarrow X', \alpha' : U' \hookrightarrow X'$ the embeddings of the inverse images of $Z$ and $U$ into $X'$. There exists a short exact sequence (standard again) $$ 0 \to ]\beta'[_{!}]\beta'[^{-1}E_{V'} \to E_{V'} \to ]\alpha'[_{*}]\alpha'[^{-1}E_{V'} \to 0. $$ Since $E$ is an isocrystal, we have $(\alpha_{*}E_{|U})_{V'} = ]\alpha'[_{*}]\alpha'[^{-1}E_{V'}$. It follows that $(\mathrm R^1 \underline \Gamma^\dagger_{Z} E)_{V'} =0$ and we also see that $$ (\underline \Gamma^\dagger_{Z} E)_{V'} = ]\beta'[_{!}]\beta'[^{-1} E_{V'} = (\beta_{\dagger} E_{|Z})_{V'}. \qedhere $$ \end{proof} Note that the proposition is still valid for sheaves of Zariski type and not merely for isocrystals. 
Be careful however that $\beta_{\dagger} E \neq \underline \Gamma^\dagger_{Z} \beta_{*}E$ in general even when $E$ is an isocrystal on $T_{Z}$. With our favorite example in mind, we have $\underline \Gamma^\dagger_{Z} \beta_{*}\mathcal O^\dagger_{\infty/K} = \beta_{*}\mathcal O^\dagger_{\infty/K} \neq \beta_{\dagger}\mathcal O^\dagger_{\infty/K}$ as our computations below will show. \begin{cor} \label{exact} The functor $\underline \Gamma^\dagger_{Z}$ preserves isocrystals and the induced functor $$ \underline \Gamma^\dagger_{Z} : \mathrm{Isoc}^\dagger(T) \to \mathrm{Isoc}^\dagger(T) $$ is exact. Moreover, if $E$ is an isocrystal on $T$, then there exists a short exact sequence $$ 0 \to \underline \Gamma^\dagger_{Z} E \to E \to j^\dagger_{U}E \to 0. $$ \end{cor} We might as well write this last short exact sequence as $$ 0 \to \beta_{\dagger}E_{|Z} \to E \to \alpha_{*}E_{|U} \to 0. $$ As promised above, we can do an example and consider the closed embedding $\beta : \infty \hookrightarrow \mathbb P^{1}_{k}$ again. We compute $\beta_{\dagger}\mathcal O^\dagger_{\infty/K} = \underline \Gamma^\dagger_{\infty} \mathcal O^\dagger_{\mathbb P^1_{k}/K}$. We have $$ \mathrm R\Gamma(\mathbb P^{1}_{k/K}, \beta_{\dagger}\mathcal O^\dagger_{\infty/K}) = [K \to K[t]^\dagger] = \left(K[t]^\dagger/K\right)[-1]. $$ We can also remark that the (long) exact sequence obtained by applying $\mathrm R\Gamma(\mathbb P^{1}_{k/K}, -)$ to the fundamental short exact sequence $$ 0 \to \beta_{\dagger}\mathcal O^\dagger_{\infty/K} \to \beta_{*} \mathcal O^\dagger_{\infty/K} \to j_{U}^\dagger \beta_{*} \mathcal O^\dagger_{\infty/K} \to 0 $$ reads \begin{align} \label{Robseq} 0 \to K[1/t]^{\mathrm{an}} \to \mathcal R \to K[t]^\dagger/K \to 0. \end{align} \begin{cor} \label{eqemb} \begin{enumerate} \item The functors $\alpha_{*}$ and $\alpha^{-1}$ induce an equivalence between isocrystals on $T_{U}$ and isocrystals on $T$ such that $\underline \Gamma^\dagger_{Z}E = 0$ (or $j^\dagger_{U}E= E$). 
\item The functors $\beta_{\dagger}$ and $\beta^{-1}$ induce an equivalence between isocrystals on $T_{Z}$ and isocrystals on $T$ such that $\underline \Gamma^\dagger_{Z}E = E$ (or $j^\dagger_{U}E=0$). \end{enumerate} \end{cor} \begin{proof} If $E''$ is an isocrystal on $T_{U}$, then $\alpha_{*}E''$ is an isocrystal on $T$ and therefore $\underline \Gamma^\dagger_{Z}\alpha_{*}E'' = \beta_{\dagger}\beta^{-1} \alpha_{*} E'' = 0$. And conversely, if $E$ is an isocrystal on $T$ such that $\underline \Gamma^\dagger_{Z}E = 0$, then $E = j^\dagger_{U}E = \alpha_{*}\alpha^{-1}E$. This shows the first part. Now, if $E'$ is an isocrystal on $T_{Z}$, then $\beta_{\dagger}E'$ is an isocrystal on $T$ and therefore $\underline \Gamma^\dagger_{Z}\beta_{\dagger}E' = \beta_{\dagger}\beta^{-1}\beta_{\dagger} E' = \beta_{\dagger} E'$. And conversely, if $E$ is an isocrystal on $T$ such that $\underline \Gamma^\dagger_{Z}E = E$, then $E = \beta_{\dagger}\beta^{-1}E$. \end{proof} We can also make the functor of sections with support in an open subset come back into the picture: \begin{cor} If $E$ is an isocrystal on $T$, then there exists a distinguished triangle $$ \mathrm R \underline \Gamma_{U} E \to j^\dagger_{U} E \to j^\dagger_{U} \mathrm R \beta_{*}E_{|Z} \to. $$ \end{cor} \begin{proof} There exists actually a commutative diagram of distinguished triangles: $$ \xymatrix{ & \underline \Gamma^\dagger_{Z} E \ar@{=}[r] \ar[d] & \underline \Gamma^\dagger_{Z} E \ar[d] \\ \mathrm R \underline \Gamma_{U} E \ar[r] \ar@{=}[d] & E \ar[r] \ar[d] & \mathrm R \beta_{*}E_{|Z} \ar[r] \ar[d] & \\ \mathrm R \underline \Gamma_{U} E \ar[r] & j^\dagger_{U} E \ar[r] \ar[d] & j^\dagger_{U} \mathrm R \beta_{*}E_{|Z} \ar[r] \ar[d] &. \\ &&& } $$ More precisely, we know that the vertical triangles as well as the middle horizontal one are all distinguished. The bottom one must be distinguished too. 
\end{proof} Back to our running example, we see that the long exact sequence obtained by applying $\mathrm R\Gamma(\mathbb P^{1}_{k/K}, -)$ to the distinguished triangle $$ \mathrm R \underline \Gamma_{\mathbb A^1_{k}} \mathcal O^\dagger_{\mathbb P^1_{k}/K} \to j^\dagger_{\mathbb A^1_{k}} \mathcal O^\dagger_{\mathbb P^1_{k}/K} \to j^\dagger_{\mathbb A^1_{k}} \mathrm R \beta_{*}\mathcal O^\dagger_{\infty/K} \to $$ reads $$ 0 \to K[t]^\dagger \to \mathcal R \to K[1/t]^{\mathrm{an}}/K \to 0. $$ We can summarize the situation as follows: \begin{enumerate} \item There exist two triples of adjoint functors (up means left): $$ \xymatrix{ \mathcal O^\dagger_{T_{U}}\mathrm{-Mod} \ar@<.5cm>@{^{(}->}[rr]^{\alpha_{!}} \ar@<-.5cm>@{^{(}->}[rr]^{\alpha_{*}} && \mathcal O^\dagger_{T}\mathrm{-Mod} \ar[ll]_-{\alpha^{-1}} \ar[rr]^-{\beta^{-1}} && \mathcal O^\dagger_{T_{Z}}\mathrm{-Mod}. \ar@<-.5cm>@{_{(}->}[ll]_{\beta_{!}} \ar@<.5cm>@{_{(}->}[ll]_{\beta_{*}} } $$ Moreover, $\alpha_{*}$ is \emph{exact} and \emph{preserves isocrystals} (and so do $\alpha^{-1}$ and $\beta^{-1}$). \item There exist two functors with support (that preserve injectives) $$ \xymatrix{ \underline \Gamma_{U} & \ar@(ul,dl)[] & \mathcal O^\dagger_{T}\mathrm{-Mod} & \ar@(ur,dr)[] & \underline \Gamma^\dagger_{Z}. } $$ Moreover, $\underline \Gamma^\dagger_{Z}$ \emph{preserves isocrystals} and is \emph{exact on isocrystals}. \item There exist two other functors $$ \xymatrix{ j^\dagger_{U} & \ar@(ul,dl)[] & \mathcal O^\dagger_{T}\mathrm{-Mod} && \mathcal O^\dagger_{T_{Z}}\mathrm{-Mod}. \ar@{_{(}->}[ll]_{\beta_{\dagger}} } $$ They are both \emph{exact} and \emph{preserve isocrystals} (but not injectives). If $E$ is an isocrystal on $T$, we have $$ j^\dagger_{U}E = \alpha_{*} E_{|U} \quad \mathrm {and} \quad \underline \Gamma^\dagger_{Z} E = \beta_{\dagger}E_{|Z}. $$ \end{enumerate} \section{Constructibility} Recall that $K$ denotes a complete ultrametric field with ring of integers $\mathcal V$ and residue field $k$.
We let $X$ be an algebraic variety over $k$ and $T$ an (overconvergent) presheaf over $X/K$. Roughly speaking, $T$ is some family of varieties $X'$ over $X$ which embed into a formal $\mathcal V$-scheme $P'$, together with a morphism of analytic $K$-varieties $V' \to P'_{K}$. An \emph{(overconvergent) module} $\mathcal F$ on $T$ is then a compatible family of $i_{X'}^{-1}\mathcal O_{V'}$-modules $\mathcal F_{V'}$, where $i_{X'} : ]X'[_{V'} \hookrightarrow V'$ denotes the inclusion of the tube (the reader is referred to section \ref{bible} for the details). \begin{dfn} A module $\mathcal F$ on $T$ is said to be \emph{constructible} (with respect to $X$) if there exists a locally finite covering of $X$ by locally closed subvarieties $Y$ such that $\mathcal F_{|Y}$ is locally finitely presented. \end{dfn} Recall that a locally finitely presented module is the same thing as an isocrystal with coherent realizations. It is important to notice however that a constructible module is \emph{not} necessarily an isocrystal (the transition maps might not be bijective). We will give an example later. \begin{prop} \begin{enumerate} \item Constructible modules on $T$ form an additive category which is stable under cokernel, extension, tensor product and internal Hom. \item Constructible isocrystals on $T$ form an additive category $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(T)$ which is stable under cokernel, extensions and tensor product. \end{enumerate} \end{prop} \begin{proof} The analog of the first assertion for locally finitely presented modules is completely formal, apart from the internal Hom question, which was proved in proposition 3.3.12 of \cite{LeStum11}. The analog of the second assertion for all isocrystals was proved in corollary 3.3.9 of \cite{LeStum11}. Since the restriction maps $\mathcal F \mapsto \mathcal F_{|Y}$ are exact and commute with tensor product and internal Hom, everything follows.
\end{proof} Note however that $\mathcal Hom(E_{1}, E_{2})$ need \emph{not} be an isocrystal (see example below) when $E_{1}$ and $E_{2}$ are two constructible isocrystals. \begin{prop} \label{firstprop} Let $\mathcal F$ be a module on $T$. \begin{enumerate} \item \label{equiv} $\mathcal F$ is constructible if and only if there exists a locally finite covering by locally closed subvarieties $Y$ of $X$ such that $\mathcal F_{|Y}$ is constructible. \item If $T' \to T$ is any morphism of overconvergent presheaves and $\mathcal F$ is constructible, then $\mathcal F_{|T'}$ is constructible. The converse is also true if $T' \to T$ is a covering. \item \label{astre} Assume that $T$ is actually a presheaf on $X'/K$ for some $f : X' \to X$. If $\mathcal F$ is constructible with respect to $X$, then it is also constructible with respect to $X'$. \end{enumerate} \end{prop} \begin{proof} The first assertion is an immediate consequence of the transitivity of locally finite coverings by locally closed subsets: if $X = \cup X_{i}$ and $X_{i} = \cup X_{ij}$ are such coverings, so is the covering $X = \cup X_{ij}$. In order to prove the second assertion, note first that it is formally satisfied by locally finitely presented modules. Moreover, if $Y$ is a locally closed subvariety of $X$, we have $(\mathcal F_{|T'})_{|Y} = (\mathcal F_{|Y})_{|T'_{Y}}$. The result follows. Finally, for the third assertion, if $X = \cup X_{i}$ is a locally finite covering by locally closed subvarieties, so is $X' = \cup f^{-1}(X_{i})$. Moreover, by definition $\mathcal F_{|f^{-1}(X_{i})} = \mathcal F_{|X_{i}}$ and there is nothing to do.
\end{proof} Together with corollary \ref{eqemb} above, the next proposition will allow us to move freely along a closed or open embedding when we consider constructible isocrystals (note that this obviously fails for overconvergent isocrystals with coherent realizations): \begin{prop} \begin{enumerate} \item If $\alpha : U \hookrightarrow X$ is an open immersion of algebraic varieties, then a module $\mathcal F''$ on $T_{U}$ is constructible if and only if $\alpha_{*}\mathcal F''$ is constructible. \item If $\beta : Z \hookrightarrow X$ is a closed embedding of algebraic varieties, then a module $\mathcal F'$ on $T_{Z}$ is constructible if and only if $\beta_{\dagger}\mathcal F'$ is constructible. \end{enumerate} \end{prop} \begin{proof} We may assume that $U$ and $Z$ are open and closed complements. We saw in corollary \ref{resex} that $(\alpha_{*} \mathcal F'')_{|U} = \mathcal F''$ and $(\alpha_{*} \mathcal F'')_{|Z} = 0$. And we also saw in proposition \ref{invim} that $(\beta_{\dagger} \mathcal F')_{|Z} = \mathcal F'$ and $(\beta_{\dagger} \mathcal F')_{|U} = 0$. \end{proof} It is easy to see that the \emph{usual} dual to a constructible isocrystal is \emph{not} an isocrystal in general: if $\beta : Z \hookrightarrow X$ is a closed embedding of algebraic varieties and $E$ is an overconvergent isocrystal on $Z$ with coherent realizations, it follows from proposition \ref{invim} that $$ (\beta_{\dagger}E)\check{} := \mathcal Hom(\beta_{\dagger}E, \mathcal O^\dagger_{T}) = \beta_{*}\mathcal Hom(E, \mathcal O^\dagger_{T_{Z}}) = \beta_{*}E\check{}, $$ which is constructible but is not an isocrystal in general (as we saw in section \ref{emb}). The next property is also very important because it allows the use of noetherian induction to reduce some assertions about constructible isocrystals to analogous assertions about overconvergent isocrystals with coherent realizations.
\begin{lem} \label{semloc} A module $\mathcal F$ on $T$ is constructible if and only if there exists a closed subvariety $Z$ of $X$ such that, if $U := X \setminus Z$, then both $\mathcal F_{|Z}$ and $\mathcal F_{|U}$ are constructible. We may even assume that $U$ is dense in $X$ and $\mathcal F_{|U}$ is locally finitely presented. \end{lem} \begin{proof} The condition is sufficient thanks to assertion \ref{equiv} of proposition \ref{firstprop}. Conversely, if $\xi$ is a generic point of $X$, then there exists a locally closed subset $Y$ of $X$ such that $\xi \in Y$ and $\mathcal F_{|Y}$ is locally finitely presented. And $Y$ necessarily contains an open neighborhood $U_{\xi}$ of $\xi$ in $X$. We may choose $U := \cup U_{\xi}$. \end{proof} \begin{prop} \label{extension} An isocrystal $E$ on $T$ is constructible if and only if there exists an exact sequence \begin{align} \label{fdext} 0 \to \beta_{\dagger}E' \to E \to \alpha_{*} E'' \to 0 \end{align} where $E'$ (resp. $E''$) is a constructible isocrystal on a closed subvariety $Z$ of $X$ (resp. on $U := X \setminus Z$) and $\beta : Z \hookrightarrow X$ (resp. $\alpha : U \hookrightarrow X$) denotes the inclusion map. We may assume that $U$ is dense in $X$ and that $E''$ has coherent realizations. \end{prop} \begin{proof} If we are given such an exact sequence, we may pull back along $\alpha$ and $\beta$ in order to obtain $E' \simeq E_{|Z}$ and $E'' \simeq E_{|U}$. And conversely, we may set $E' := E_{|Z}$ and $E'' := E_{|U}$ in order to get such an exact sequence by proposition \ref{isex}. \end{proof} Note that this property is specific to constructible \emph{isocrystals} and that the analog for constructible modules fails. It follows from proposition \ref{exthom} that any extension such as \eqref{fdext} comes from a unique morphism $\alpha_{*}E'' \to j^\dagger_{U}\beta_{*}E'$.
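The last two results can be condensed into a glueing recipe; the display below is our own paraphrase (hedged, not a statement from the text): a constructible isocrystal $E$ on $T$ should amount to the data of its two restrictions together with a single glueing morphism.

```latex
% Hedged paraphrase of propositions \ref{extension} and \ref{exthom}:
% a constructible isocrystal E on T should correspond to a triple (E', E'', u)
% with E' := E_{|Z} constructible on Z, E'' := E_{|U} constructible on U, and
u \;:\; \alpha_{*}E'' \longrightarrow j^{\dagger}_{U}\beta_{*}E'
% the unique morphism through which the extension
% 0 \to \beta_{\dagger}E' \to E \to \alpha_{*}E'' \to 0
% is pulled back from the fundamental extension \eqref{fonsec}.
```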
This is a classical glueing method and the correspondence is given by the following morphism of exact sequences $$ \xymatrix{ 0 \ar[r] & \beta_{\dagger}E' \ar[r] & \beta_{*}E' \ar[r] &j^\dagger_{U}\beta_{*}E' \ar[r] & 0 \\ 0 \ar[r] & \beta_{\dagger}E' \ar[r] \ar@{=}[u] & E \ar[u] \ar[r] & \alpha_{*} E'' \ar[r] \ar[u] & 0. } $$ We can do the computations in the very special case of $\alpha : \mathbb A^{1}_{k} \hookrightarrow \mathbb P^{1}_{k}$ and $\beta : \infty \hookrightarrow \mathbb P^{1}_{k}$. We have $E' = \mathcal O_{\infty/K}^\dagger \otimes_{K} H$ for some finite dimensional vector space $H$, and $E''$ is given by a free $K[t]^\dagger$-module $M$ of finite rank endowed with an (overconvergent) connection. One can show that there exists a canonical isomorphism \begin{align*} \mathrm {Ext}(\alpha_{*}E'', \beta_{\dagger}E') & = \mathrm {Hom}(\alpha_{*}E'', j^\dagger_{U}\beta_{*}E') \\ &= \mathrm{Hom}_{\nabla}(M, \mathcal R \otimes_{K} H) \\ & = H^{0}_{\mathrm{dR}}(\check M \otimes_{K[t]^\dagger} \mathcal R) \otimes_{K} H \end{align*} (the second identity is not trivial). A slight generalization will give a classification of constructible isocrystals on smooth projective curves as in theorem 6.15 of \cite{LeStum14}. \section{Integrable connections and constructibility} In this section, we will give a more concrete description of constructible isocrystals in the case where $T$ is representable by some overconvergent variety $(X,V)$, in the case where $T = X_{V}/O$ for $(X,V)$ a variety over some overconvergent variety $(C,O)$, and finally in the case where $T = X/O$ for $X$ a variety over $C$ (see section \ref{bible}). \begin{dfn} Let $(X, V)$ be an overconvergent variety. An $i_{X}^{-1}\mathcal O_{V}$-module $\mathcal F$ is \emph{constructible} if there exists a locally finite covering of $X$ by locally closed subvarieties $Y$ such that $i_{Y}^{-1} i_{X*} \mathcal F$ is a coherent $i_{Y}^{-1}\mathcal O_{V}$-module.
\end{dfn} Of course, we have $i_{Y}^{-1} i_{X*} \mathcal F = i_{Y\subset X}^{-1}\mathcal F$ if we denote by $i_{Y \subset X} : ]Y[_{V} \hookrightarrow ]X[_{V}$ the inclusion of the tubes. \begin{prop} \label{consm} Let $(X, V)$ be an overconvergent variety. Then, \begin{enumerate} \item $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X,V)$ is an abelian subcategory of $\mathrm{Isoc}^\dagger(X,V)$, \item the realization functor induces an equivalence between $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X,V)$ and the category of all constructible $i_{X}^{-1}\mathcal O_{V}$-modules. \end{enumerate} \end{prop} \begin{proof} It was shown in proposition 3.3.8 of \cite{LeStum11} that the realization functor induces an equivalence between $\mathrm{Isoc}^\dagger(X,V)$ and the category of all $i_{X}^{-1}\mathcal O_{V}$-modules. And overconvergent isocrystals correspond to coherent modules. The second assertion is an immediate consequence of these observations. The first assertion then follows immediately from the analogous result about coherent modules. \end{proof} Recall that an $i_{X}^{-1}\mathcal O_{V}$-module may be endowed with an overconvergent stratification. Then, we have: \begin{prop} \label{relco} Let $(X, V)$ be a variety over an overconvergent variety $(C,O)$. \begin{enumerate} \item If $V$ is universally flat over $O$ in a neighborhood of $]X[$, then $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X_{V}/O)$ is an abelian subcategory of $\mathrm{Isoc}^\dagger(X_{V}/O)$, \item the realization functor induces an equivalence between $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X_{V}/O)$ and the category of constructible $i_{X}^{-1}\mathcal O_{V}$-modules $\mathcal F$ endowed with an overconvergent stratification. \end{enumerate} \end{prop} \begin{proof} According to proposition 3.5.3 of \cite{LeStum11}, its corollary and proposition 3.5.5 of \cite{LeStum11}, the proof goes exactly as in proposition \ref{consm}.
\end{proof} The next corollary is valid if we work with \emph{good} overconvergent varieties (which we may have assumed from the beginning). \begin{cor} If $(C, O)$ is a (good) overconvergent variety and $X$ is an algebraic variety over $C$, then $\mathrm{Isoc}^\dagger_{\mathrm{cons}}(X/O)$ is an abelian subcategory of $\mathrm{Isoc}^\dagger(X/O)$. \end{cor} \begin{proof} Using proposition 4.6.3 of \cite{LeStum11}, we may assume that $X$ has a geometric realization over $(C,O)$ and use the second part of proposition 3.5.8 in \cite{LeStum11}. \end{proof} We could have included a description of constructible isocrystals as modules endowed with an overconvergent stratification on some geometric realization of $X/O$ but we are heading towards a finer description (this is what the rest of this section is all about). Recall that any overconvergent stratification will induce, by pull back at each level, a usual stratification. This is a faithful construction and we want to show that it is actually \emph{fully} faithful when we work with constructible modules (in suitable geometric situations). Thus, we have the following sequence of injective maps $$ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(\mathcal F, \mathcal G) \hookrightarrow \mathrm{Hom}_{\mathrm{Strat}}(\mathcal F, \mathcal G) \hookrightarrow \mathrm{Hom}(\mathcal F, \mathcal G) $$ and we wonder whether the first one is actually bijective. In order to do so, we will also have to study the injectivity of the maps in the sequence $$ \mathrm{Ext}_{\mathrm{Strat}^\dagger}(\mathcal F, \mathcal G) \to \mathrm{Ext}_{\mathrm{Strat}}(\mathcal F, \mathcal G) \to \mathrm{Ext}(\mathcal F, \mathcal G). $$ We start with the following observation: \begin{prop} Let $(X, V)$ be a variety over an overconvergent variety $(C,O)$, $\alpha : U \hookrightarrow X$ the inclusion of an open subvariety of $X$ and $\beta : Z \hookrightarrow X$ the inclusion of a closed complement.
Let $\mathcal F'$ be an $i_{Z}^{-1}\mathcal O_{V}$-module and $\mathcal F''$ an $i_{U}^{-1}\mathcal O_{V}$-module. Then a usual (resp. an overconvergent) stratification on the direct sum $]\beta[_{!} \mathcal F' \oplus ]\alpha[_{*} \mathcal F''$ is uniquely determined by its restrictions to $\mathcal F'$ and $\mathcal F''$. \end{prop} \begin{proof} Let us denote by $$ \epsilon^{(n)} = \left(\begin{array} {cc} ]\beta[_{!} \epsilon'^{(n)} & \varphi_{n} \\ \psi_{n} & ]\alpha[_{*} \epsilon''^{(n)}\end{array} \right) \quad \left(\mathrm{resp.}\ \epsilon = \left(\begin{array} {cc} ]\beta[_{!} \epsilon' & \varphi \\ \psi & ]\alpha[_{*} \epsilon''\end{array} \right)\right) $$ the (resp. the overconvergent) stratification of $]\beta[_{!} \mathcal F' \oplus ]\alpha[_{*} \mathcal F''$ (recall that the maps $]\beta[_{!}$ and $]\alpha[_{*}$ are fully faithful). Then, the maps $$ \varphi_{n} : i_{X}^{-1}\mathcal O_{V^{(n)}} \otimes_{i_{X}^{-1}\mathcal O_{V}} ]\alpha[_{*} \mathcal F'' \to ]\beta[_{!} \mathcal F' \otimes_{i_{X}^{-1}\mathcal O_{V}} i_{X}^{-1}\mathcal O_{V^{(n)}} $$ and $$ \psi_{n} : i_{X}^{-1}\mathcal O_{V^{(n)}} \otimes_{i_{X}^{-1}\mathcal O_{V}} ]\beta[_{!} \mathcal F' \to ]\alpha[_{*} \mathcal F'' \otimes_{i_{X}^{-1}\mathcal O_{V}} i_{X}^{-1}\mathcal O_{V^{(n)}} $$ $$ (\mathrm{resp.}\ \varphi: ]p_{2}[^\dagger ]\alpha[_{*} \mathcal F'' \simeq ]p_{1}[^\dagger ]\beta[_{!} \mathcal F' \quad \mathrm{and} \quad \psi : ]p_{2}[^\dagger ]\beta[_{!} \mathcal F' \simeq ]p_{1}[^\dagger ]\alpha[_{*} \mathcal F'') $$ are necessarily zero as one may see by considering the fibres (resp. and using the fact that $]p_{i}[^\dagger$ commutes with $]\alpha[_{*}$ and $]\beta[_{!}$). \end{proof} We keep the assumptions and the notations of the proposition for a while and assume that $\mathcal F'$ and $\mathcal F''$ are both endowed with a usual (resp. an overconvergent) stratification.
From the general fact that $$ \mathrm{Hom}( ]\beta[_{!} \mathcal F', ]\alpha[_{*} \mathcal F'') = 0\quad \mathrm{and} \quad \mathrm{Hom}(]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') = 0, $$ we can deduce that $$ \mathrm{Hom}_{\mathrm{Strat}}( ]\beta[_{!} \mathcal F', ]\alpha[_{*} \mathcal F'') = 0\quad \mathrm{and} \quad \mathrm{Hom}_{\mathrm{Strat}}(]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') = 0 $$ $$ (\mathrm{resp.}\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}( ]\beta[_{!} \mathcal F', ]\alpha[_{*} \mathcal F'') = 0\quad \mathrm{and} \quad \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') = 0). $$ Since we also know that $$ \mathrm{Ext}( ]\beta[_{!} \mathcal F', ]\alpha[_{*} \mathcal F'') = 0, $$ we can deduce the following result from the proposition: \begin{cor} \label{spliov1} If $\mathcal F'$ and $\mathcal F''$ are both endowed with a usual (resp. an overconvergent) stratification, then we have $$ \mathrm{Ext}_{\mathrm{Strat}}( ]\beta[_{!} \mathcal F', ]\alpha[_{*} \mathcal F'') = 0 \quad (\mathrm{resp.}\ \mathrm{Ext}_{\mathrm{Strat}^\dagger}( ]\beta[_{!} \mathcal F', ]\alpha[_{*} \mathcal F'') = 0). $$ \end{cor} Alternatively, it means that any short exact sequence of $i_{X}^{-1}\mathcal O_{V}$-modules (resp. with a usual stratification, resp. with an overconvergent stratification) $$ \xymatrix{ 0 \ar[r] & ]\alpha[_{*} \mathcal F'' \ar[r] & \mathcal F \ar[r] & ]\beta[_{!} \mathcal F' \ar[r] & 0 } $$ splits (and the splitting is compatible with the extra structure). From the proposition, we may also deduce the following: \begin{cor} \label{spliov2} If $\mathcal F'$ and $\mathcal F''$ are both endowed with a usual (resp. an overconvergent) stratification, then the following map is (resp.
maps are) injective $$ \mathrm{Ext}_{\mathrm{Strat}}( ]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') \hookrightarrow \mathrm{Ext}( ]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') $$ $$ \left(\mathrm{resp.}\ \mathrm{Ext}_{\mathrm{Strat}^\dagger}( ]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') \hookrightarrow \mathrm{Ext}_{\mathrm{Strat}}( ]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F') \hookrightarrow \mathrm{Ext}( ]\alpha[_{*} \mathcal F'', ]\beta[_{!} \mathcal F')\right). $$ \end{cor} Alternatively, this means that if $\mathcal F$ is an $i_{X}^{-1}\mathcal O_{V}$-module with a usual (resp. an overconvergent) stratification, \emph{and} if the exact sequence of $i_{X}^{-1}\mathcal O_{V}$-modules $$ \xymatrix{ 0 \ar[r] & ]\beta[_{!} \mathcal F_{|]Z[}\ar[r] & \mathcal F \ar[r] & ]\alpha[_{*} \mathcal F_{|]U[} \ar[r] & 0 } $$ splits, then the splitting is always compatible with the usual (resp. the overconvergent) stratifications. We are now ready to prove our main result: \begin{prop} \label{fulfait} Let $$ \xymatrix{X \ar@{^{(}->}[r] \ar[d]^f & P \ar[d]^v & P_{K} \ar[l] \ar[d]^{v_{K}} & V \ar[l] \ar[d]^u \\ C \ar@{^{(}->}[r] & S & S_{K} \ar[l] & O \ar[l] } $$ be a formal morphism of overconvergent varieties with $f$ quasi-compact, $v$ smooth at $X$, $O$ locally separated and $V$ a good neighborhood of $X$ in $P_{K} \times_{S_{K}} O$. If $\mathcal F$ and $\mathcal G$ are two constructible $i_{X}^{-1}\mathcal O_{V}$-modules endowed with an overconvergent stratification, then $$ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(\mathcal F, \mathcal G) \simeq \mathrm{Hom}_{\mathrm{Strat}}(\mathcal F, \mathcal G). $$ \end{prop} \begin{proof} Since we know that the map is injective, we may rephrase the assertion as follows: we are given a morphism $\varphi : \mathcal F \to \mathcal G$ of constructible $i_{X}^{-1}\mathcal O_{V}$-modules and we have to show that $\varphi$ is actually compatible with the overconvergent stratifications.
This question is clearly local on $O$, which is locally compact. We may therefore assume that the image of $O$ in $S_{K}$ is contained in some $S'_{K}$ with $S'$ quasi-compact. We may then pull back the diagram along $S' \to S$ and assume that $X$ is finite dimensional (use assertion \ref{astre} of proposition \ref{firstprop}). This will allow us to use noetherian induction. We know (use for example propositions \ref{semloc} and \ref{relco}) that there exists a dense open subset $U$ of $X$ such that the restrictions $\mathcal F''$ and $\mathcal G''$ of $\mathcal F$ and $\mathcal G$ to $U$ are coherent. Moreover, it was shown in corollary 3.4.10 of \cite{LeStum11} that the proposition is valid for $\mathcal F''$ and $\mathcal G''$ on $U$. Let us denote as usual by $\alpha : U \hookrightarrow X$ the inclusion map. Since $]\alpha[_{*}$ is fully faithful, we see that the proposition is valid for $]\alpha[_{*}\mathcal F''$ and $]\alpha[_{*}\mathcal G''$. In other words, we have a bijection \begin{align} \label{pf1} \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', ]\alpha[_{*}\mathcal G'') \simeq \mathrm{Hom}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', ]\alpha[_{*}\mathcal G''). \end{align} We denote now by $\beta : Z \hookrightarrow X$ the inclusion of a closed complement of $U$ and let $\mathcal F'$ and $\mathcal G'$ be the restrictions of $\mathcal F$ and $\mathcal G$ to $Z$.
And we observe the following commutative diagram: $$ \xymatrix{ 0 \ar[d] & 0 \ar[d] \\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', \mathcal G) \ar[d] \ar@{^{(}->}[r] & \mathrm{Hom}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', \mathcal G) \ar[d] \\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', ]\alpha[_{*}\mathcal G'') \ar[d] \ar[r]^-\simeq & \mathrm{Hom}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', ]\alpha[_{*}\mathcal G'') \ar[d] \\ \mathrm{Ext}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', ]\beta[_{!}\mathcal G') \ar@{^{(}->}[r] & \mathrm{Ext}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', ]\beta[_{!}\mathcal G'). } $$ The columns are exact because $\mathrm{Hom}(]\alpha[_{*}\mathcal F'', ]\beta[_{!}\mathcal G') = 0$, the bottom map is injective thanks to corollary \ref{spliov2} and the middle map is the isomorphism \eqref{pf1}. It follows from the five lemma (or an easy diagram chase) that the upper map is necessarily bijective: we have \begin{align} \label{pf2} \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', \mathcal G) \simeq \mathrm{Hom}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', \mathcal G). \end{align} We turn now to the other side: by induction, the proposition is valid for $\mathcal F'$ and $\mathcal G'$ on $Z$, and since $]\beta[_{!}$ is fully faithful, it also holds for $]\beta[_{!}\mathcal F'$ and $]\beta[_{!}\mathcal G'$. Hence, we have \begin{align} \label{pf3} \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\beta[_{!}\mathcal F', ]\beta[_{!}\mathcal G') \simeq \mathrm{Hom}_{\mathrm{Strat}}(]\beta[_{!}\mathcal F', ]\beta[_{!}\mathcal G').
\end{align} Now, we consider the commutative square $$ \xymatrix{ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\beta[_{!}\mathcal F', ]\beta[_{!}\mathcal G') \ar[d]^-\simeq \ar[r]^-\simeq & \mathrm{Hom}_{\mathrm{Strat}}(]\beta[_{!}\mathcal F', ]\beta[_{!}\mathcal G') \ar[d]^-\simeq \\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\beta[_{!}\mathcal F',\mathcal G) \ar@{^{(}->}[r] & \mathrm{Hom}_{\mathrm{Strat}}(]\beta[_{!}\mathcal F', \mathcal G). } $$ The vertical maps are bijective because $\mathrm{Hom}(]\beta[_{!}\mathcal F', ]\alpha[_{*}\mathcal G'') = 0$ and the upper map is simply the isomorphism \eqref{pf3}. It follows that we have an isomorphism \begin{align} \label{pf4} \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\beta[_{!}\mathcal F', \mathcal G) \simeq \mathrm{Hom}_{\mathrm{Strat}}(]\beta[_{!}\mathcal F', \mathcal G). \end{align} In order to end the proof, we will need to kill another obstruction. Since the proposition holds for $]\alpha[_{*}\mathcal F''$ and \emph{any} constructible $\mathcal G$, the following canonical map is necessarily injective: \begin{align} \label{pf5} \mathrm{Ext}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', \mathcal G) \hookrightarrow \mathrm{Ext}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', \mathcal G).
\end{align} We consider now the commutative diagram with exact columns: $$ \xymatrix{ 0 \ar[d] & 0 \ar[d] \\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', \mathcal G) \ar[d] \ar[r]^\simeq & \mathrm{Hom}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', \mathcal G) \ar[d] \\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(\mathcal F, \mathcal G) \ar[d] \ar@{^{(}->}[r] & \mathrm{Hom}_{\mathrm{Strat}}(\mathcal F, \mathcal G) \ar[d] \\ \mathrm{Hom}_{\mathrm{Strat}^\dagger}(]\beta[_{!}\mathcal F',\mathcal G) \ar[d] \ar[r]^-\simeq & \mathrm{Hom}_{\mathrm{Strat}}(]\beta[_{!}\mathcal F', \mathcal G) \ar[d] \\ \mathrm{Ext}_{\mathrm{Strat}^\dagger}(]\alpha[_{*}\mathcal F'', \mathcal G) \ar@{^{(}->}[r] & \mathrm{Ext}_{\mathrm{Strat}}(]\alpha[_{*}\mathcal F'', \mathcal G). } $$ The horizontal isomorphisms are just \eqref{pf2} and \eqref{pf4} and the bottom injection is \eqref{pf5}. It is then sufficient to apply the five lemma again. \end{proof} We may reformulate the statement of the proposition as follows: \begin{cor} The forgetful functor from constructible $i_{X}^{-1}\mathcal O_{V}$-modules endowed with an overconvergent stratification to $i_{X}^{-1}\mathcal O_{V}$-modules endowed with a usual stratification is fully faithful. \end{cor} It is also worth mentioning the following immediate consequence: \begin{cor} \label{injbij} If $\mathcal F$ and $\mathcal G$ are two constructible $i_{X}^{-1}\mathcal O_{V}$-modules endowed with an overconvergent stratification, then we have an injective map \begin{align} \label{sur} \mathrm{Ext}_{\mathrm{Strat}^\dagger}(\mathcal F, \mathcal G) \hookrightarrow \mathrm{Ext}_{\mathrm{Strat}}(\mathcal F, \mathcal G).
\end{align} \end{cor} This means that if \begin{align} \label{ext} \xymatrix{ 0 \ar[r] & \mathcal F \ar[r] & \mathcal G \ar[r] & \mathcal H \ar[r] & 0 } \end{align} is a short exact sequence of constructible $i_{X}^{-1}\mathcal O_{V}$-modules endowed with an overconvergent stratification, then any splitting for the usual stratifications will be compatible with the overconvergent stratifications. I strongly suspect that much more is actually true: if we are given an exact sequence \eqref{ext} of constructible $i_{X}^{-1}\mathcal O_{V}$-modules endowed with usual stratifications and if the stratifications of $\mathcal F$ and $\mathcal H$ are overconvergent, then the stratification of $\mathcal G$ should also be overconvergent. In other words, the injective map \eqref{sur} would be an isomorphism. If $(X, V)$ is a variety over an overconvergent variety $(C,O)$, then we will denote by $$ \mathrm{MIC}^\dagger_{\mathrm{cons}}(X,V/O) $$ the category of constructible $i_{X}^{-1}\mathcal O_{V}$-modules $\mathcal F$ endowed with an overconvergent connection (recall that this means that the connection extends to some overconvergent stratification). Then, we can also state the following corollary: \begin{cor} \label{equiV} If $\mathrm{Char}(K) = 0$, then the realization functor induces an equivalence of categories $$ \mathrm{Isoc}^\dagger_{\mathrm{cons}}(X_{V}/O) \simeq \mathrm{MIC}^\dagger_{\mathrm{cons}}(X,V/O). $$ \end{cor} As a consequence, we may observe that we will have, for a constructible isocrystal $E$ on $X_{V}/O$, $$ \Gamma(X_{V}/O, E) \simeq H^{0}_{\mathrm{dR}}(E_{V}), $$ and we expect the same to hold for higher cohomology spaces (we only know at this point that $$ H^1(X_{V}/O, E) \subset H^{1}_{\mathrm{dR}}(E_{V})).
$$ Again, we need to work with good overconvergent varieties for the theorem to hold: \begin{thm} \label{thm} Assume that $\mathrm{Char}(K) = 0$ and that we are given a commutative diagram \begin{align} \label{geomag} \xymatrix{X \ar@{^{(}->}[r] \ar[d]^f & P \ar[d]^v & P_{K} \ar[l] \ar[d]^{v_{K}} & V \ar[l] \ar[d]^u \\ C \ar@{^{(}->}[r] & S & S_{K} \ar[l] & O \ar[l] } \end{align} where $P$ is a formal scheme over $S$ which is proper and smooth around $X$ and $V$ is a neighborhood of the tube of $X$ in $P_{K} \times_{S_{K}} O$ (and $O$ is good in the neighborhood of $]C[$). Then the realization functor induces an equivalence of categories $$ \mathrm{Isoc}^\dagger_{\mathrm{cons}}(X/O) \simeq \mathrm{MIC}^\dagger_{\mathrm{cons}}(X,V/O) $$ between constructible overconvergent isocrystals on $X/O$ and constructible $i_{X}^{-1}\mathcal O_{V}$-modules endowed with an overconvergent connection. \end{thm} \begin{proof} Using the second assertion of proposition 3.5.8 in \cite{LeStum11}, this follows immediately from corollary \ref{equiV}. \end{proof} As a consequence of the theorem, we see that the notion of a constructible module endowed with an overconvergent connection only depends on $X$ and \emph{not} on the choice of the geometric realization \eqref{geomag}. It is likely that this could have been proven directly using Berthelot's technique of diagonal embeddings. However, we believe that our method is much more natural because functoriality is built in. \addcontentsline{toc}{section}{Bibliography} \end{document}
\begin{document}

\title{\bf Exact Controllability of Linear Stochastic Differential Equations and Related Problems\thanks{This work is supported in part by the National Natural Science Foundation of China (11471192, 11371375, 11526167), the Fundamental Research Funds for the Central Universities (SWU113038, XDJK2014C076), the Natural Science Foundation of Shandong Province (JQ201401), the Natural Science Foundation of CQCSTC (2015jcyjA00017), and NSF Grant DMS-1406776.}}

\author{Yanqing Wang\footnote{School of Mathematics and Statistics, Southwest University, Chongqing 400715, China; email: {\tt [email protected]}},~~Donghui Yang\footnote{School of Mathematics and Statistics, Central South University, Changsha 410075, China; email: {\tt [email protected]}},~~Jiongmin Yong\footnote{Department of Mathematics, University of Central Florida, Orlando, FL 32816, USA; email: {\tt [email protected]}},~~and~~Zhiyong Yu\footnote{Corresponding author. School of Mathematics, Shandong University, Jinan 250100, China; email: {\tt [email protected]}}}

\maketitle

\begin{abstract}
A notion of $L^p$-exact controllability is introduced for linear controlled (forward) stochastic differential equations, for which several sufficient conditions are established. Further, it is proved that the $L^p$-exact controllability, the validity of an observability inequality for the adjoint equation, the solvability of an optimization problem, and the solvability of an $L^p$-type norm optimal control problem are all equivalent.
\end{abstract}

\medskip
\noindent{\bf Keywords:} controlled stochastic differential equation, $L^p$-exact controllability, observability inequality, norm optimal control problem.

\medskip
\noindent{\bf AMS subject classification:} 93B05, 93E20, 60H10.

\section{Introduction}

Let $(\Omega,{\cal F},\mathbb{F},\mathbb{P})$ be a complete filtered probability space on which a $d$-dimensional standard Brownian motion $W(\cdot)\equiv(W_1(\cdot),\cdots,W_d(\cdot))$ is defined, such that $\mathbb{F}\equiv\{{\cal F}_t\}_{t\geqslant0}$ is its natural filtration augmented by all the $\mathbb{P}$-null sets. Consider the following linear controlled (forward) stochastic differential equation (FSDE, for short):
\begin{equation}\label{FSDE1}
dX(t)=\Big[A(t)X(t)+B(t)u(t)\Big]dt+\sum_{k=1}^d\Big[C_k(t)X(t)+D_k(t)u(t)\Big]dW_k(t),\qquad t\geqslant0,
\end{equation}
where $A,C_1,\cdots,C_d:[0,T]\times\Omega\to\mathbb{R}^{n\times n}$ and $B,D_1,\cdots,D_d:[0,T]\times\Omega\to\mathbb{R}^{n\times m}$ are suitable matrix-valued processes. Here, $\mathbb{R}^{n\times n}$ and $\mathbb{R}^{n\times m}$ are the sets of all $(n\times n)$ and $(n\times m)$ real matrices, respectively. In the above, $X(\cdot)$ is the {\it state process} valued in the $n$-dimensional (real) Euclidean space $\mathbb{R}^n$ and $u(\cdot)$ is the {\it control process} valued in the $m$-dimensional (real) Euclidean space $\mathbb{R}^m$. We will denote system \eqref{FSDE1} by $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$, with $C(\cdot)=(C_1(\cdot),\cdots,C_d(\cdot))$ and $D(\cdot)=(D_1(\cdot),\cdots,D_d(\cdot))$, and denote
$$[A(\cdot),0;B(\cdot),0]=[A(\cdot);B(\cdot)].$$
In addition, if $A(\cdot)$ and $B(\cdot)$ are deterministic, $[A(\cdot);B(\cdot)]$ reduces to a linear controlled ordinary differential equation (ODE, for short), for which a very mature theory is available; see, for example, Wonham \cite{Wonham 1985} and the references cited therein.

\medskip
For system \eqref{FSDE1}, a control process $u:[0,T]\times\Omega\to\mathbb{R}^m$ is said to be {\it feasible} on $[0,T]$ if $t\mapsto u(t)$ is $\mathbb{F}$-progressively measurable and, for any initial state $x\in\mathbb{R}^n$, the state equation \eqref{FSDE1} admits a unique strong solution $X(\cdot)\equiv X(\cdot\,;x,u(\cdot))$ on $[0,T]$ with the property
\begin{equation}\label{|X|}
X(0)=x,\qquad\mathbb{E}\Big[\sup_{t\in[0,T]}|X(t)|\Big]<\infty.
\end{equation}
Let ${\cal U}[0,T]$ be the set of all {\it feasible controls} on $[0,T]$, whose precise definition will be given a little later, and let $\mathbb{U}[0,T]\subseteq{\cal U}[0,T]$. System $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is said to be $L^p$-{\it exactly controllable} on $[0,T]$ by $\mathbb{U}[0,T]$ if for any initial state $x\in\mathbb{R}^n$ and any {\it terminal state} $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, where $L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ denotes the set of all ${\cal F}_T$-measurable random variables $\xi:\Omega\to\mathbb{R}^n$ with $\mathbb{E}|\xi|^p<\infty$, there exists a $u(\cdot)\in\mathbb{U}[0,T]$ such that
\begin{equation}\label{X=xi}
X(T;x,u(\cdot))=\xi.
\end{equation}
Clearly, the above notion is a natural extension of the controllability of the ODE system $[A(\cdot);B(\cdot)]$ (\cite{Wonham 1985}).

\medskip
Controllability is one of the most important concepts in control theory. For time-invariant linear ODE systems, it is well known that controllability is equivalent to many conditions, among which Kalman's rank condition is the most interesting one. It is also known that the controllability of a controlled linear ODE system is equivalent to the observability of its adjoint system. For controlled linear partial differential equations (PDEs, for short), the notion of controllability can further be split into the so-called {\it exact controllability}, {\it null controllability}, and {\it approximate controllability}, which are not equivalent in general; all three are closely related to the so-called {\it unique continuation} property and/or the {\it observability inequality} for the adjoint equation. For extensive surveys of controllability results for deterministic systems, see Lee--Markus \cite{Lee-Markus 1967} for ODE systems, and Russell \cite{Russell 1978}, Lions \cite{Lions 1988a, Lions 1988b} and Zuazua \cite{Zuazua 2006} for PDE systems.

\medskip
The study of the controllability of stochastic systems can be traced back to the work of Connors \cite{Connors-1967} in 1967, followed by Sunahara--Aihara--Kishino \cite{Sunahara-Aihara-Kishino 1975}, Zabczyk \cite{Zabczyk 1981}, Ehrhardt--Kliemann \cite{Ehrhardt-Kliemann-1982}, and Chen--Li--Peng--Yong \cite{Chen-Li-Peng-Yong-1993}. With the help of backward stochastic differential equations (BSDEs, for short), Peng \cite{Peng 1994} introduced the so-called {\it exact terminal controllability} and {\it exact controllability}\footnote{In the terminology of the current paper, this should roughly be called the $L^2$-exact controllability.} for linear FSDE systems with constant coefficients; the former was characterized by the non-degeneracy of the matrix $D$ and the latter by a generalized version of Kalman's rank condition. Later, Liu--Peng \cite{Liu-Peng 2010} extended these results to linear FSDE systems with bounded time-varying coefficients, using a random version of the Gramian matrix.
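To make the BSDE viewpoint concrete, here is an illustrative sketch under simplifying assumptions that are ours, not Peng's general hypotheses: $n=m$, $d=1$, constant coefficients $A,B,C,D$, and $D$ invertible. Given a terminal target $\xi\in L^2_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, consider the linear BSDE

```latex
% Sketch (simplifying assumptions: n = m, d = 1, constant coefficients,
% D invertible). The driver is Lipschitz in (Y,Z), so the BSDE admits a
% unique adapted solution (Y,Z) for every terminal datum xi in L^2.
\left\{\begin{aligned}
dY(t)&=\Big[AY(t)+BD^{-1}\big(Z(t)-CY(t)\big)\Big]dt+Z(t)\,dW(t),\qquad t\in[0,T],\\
Y(T)&=\xi,
\end{aligned}\right.
\qquad\text{and set}\qquad u(t):=D^{-1}\big(Z(t)-CY(t)\big).
```

With this choice, $CY(t)+Du(t)=Z(t)$ and the drift above equals $AY(t)+Bu(t)$, so $Y(\cdot)$ coincides with the controlled state $X(\cdot\,;Y(0),u(\cdot))$ and $X(T;Y(0),u(\cdot))=\xi$. Note that this only steers the system to an arbitrary target from the particular initial state $Y(0)$; reaching $\xi$ from an arbitrary $x$ is precisely where the rank-type conditions mentioned above enter.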
On the other hand, Buckdahn--Quincampoix--Tessitore \cite{Buckdahn-Quincampoix-Tessitore 2006} and Goreac \cite{Goreac 2008} studied the so-called {\it approximate controllability} (in the $L^2$ sense) for linear FSDE systems with constant coefficients and with a degenerate matrix $D$ in the state equation; some generalized Kalman-type conditions were obtained to characterize the approximate controllability. In \cite{Lu-Yong-Zhang 2012}, L\"u--Yong--Zhang established a representation of It\^o's integral as a Lebesgue/Bochner integral, which has some interesting consequences for the controllability of linear FSDE systems with vanishing diffusion (see below).

\medskip
In this paper, for any $p\in[1,\infty)$, we propose a notion of $L^p$-exact controllability (see Definition \ref{Sec3_Def_Lp_Exact}) for FSDE systems. When $p=2$ and all the coefficients of the system are bounded, our notion reduces to the one studied in \cite{Peng 1994, Liu-Peng 2010}. We point out that since the coefficients $B(\cdot)$ and $D_k(\cdot)$ are allowed to be unbounded, the corresponding set of admissible controls is delicate, which makes the controllability problem under consideration more interesting (see below for a detailed explanation). Inspired by the results for deterministic systems, for any $p\in(1,\infty)$, we introduce a stochastic version of the observability inequality (see Theorem \ref{equivalence}) for the adjoint equation, whose validity is proved to be equivalent to the $L^p$-exact controllability of the linear FSDE (with random coefficients). This provides an approach to studying the controllability of linear FSDE systems by establishing an inequality for BSDEs. Moreover, we introduce a family of optimization problems for the adjoint linear BSDE (see Problem (O) and Problem (O)$'$ in Section \ref{S_Observability}), and the solvability of these optimization problems is proved to be equivalent to the $L^p$-exact controllability of the linear FSDE. In other words, we additionally provide an approach to studying the exact controllability through infinite-dimensional optimization theory.

\medskip
As an application, we consider some $L^p$-type norm optimal control problems (see Problem (N) and Problem (N)$'$ in Section \ref{S_Norm_Optimal_Control}). The norm optimal control problem for deterministic finite- or infinite-dimensional systems has been investigated by many researchers (see, e.g., \cite{Fattorini 1999, Fattorini 2011, Gugat-Leugering 2008, Wang-Xu 2013, Wang-Zuazua 2012}). Recently, Gashi \cite{Gashi 2015} studied a norm optimal control problem (in the $L^2$ sense) for linear FSDE systems with deterministic time-varying coefficients by virtue of the corresponding Hamiltonian system and Riccati equation. Moreover, Wang--Zhang \cite{Wang-Zhang 2015} studied a kind of approximate norm optimal control problem for linear FSDEs. In the present paper, with the help of optimization problems for BSDEs, we solve the norm optimal control problem for linear FSDE systems with random coefficients (see Theorems \ref{Sec4_Theorem_Equiv} and \ref{Sec4_weak_Theorem_Equiv}, and Corollary \ref{Corollary 5.5}).

\medskip
The rest of this paper is organized as follows. In Section \ref{S_Preliminary}, we present some preliminaries. Section \ref{Controllability} is devoted to the introduction of the $L^p$-exact controllability of linear FSDE systems; some sufficient conditions for $L^p$-exact controllability are established for two types of systems: those whose diffusion is control-free and those whose diffusion is ``fully'' controlled. In Section \ref{S_Observability}, we establish the equivalence among the $L^p$-exact controllability, the validity of the observability inequality for the adjoint equation, and the solvability of the optimal control problems. Finally, as an application, a norm optimization problem is considered in Section \ref{S_Norm_Optimal_Control}.

\section{Preliminaries}\label{S_Preliminary}

Recall that $\mathbb{R}^n$ is the $n$-dimensional (real) Euclidean (vector) space with the standard Euclidean norm $|\cdot|$ induced by the standard Euclidean inner product $\langle\cdot\,,\cdot\rangle$, and $\mathbb{R}^{n\times m}$ is the space of all $(n\times m)$ (real) matrices, with the inner product
$$\langle A,B\rangle=\mathrm{tr}\,[A^\top B],\qquad\forall A,B\in\mathbb{R}^{n\times m},$$
so that $\mathbb{R}^{n\times m}$ is also a Euclidean space. Hereafter, the superscript $^\top$ denotes the transpose of a vector or a matrix. We now introduce some spaces, besides the space $L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ introduced in the previous section. Let $H=\mathbb{R}^n,\mathbb{R}^{n\times m}$, etc., and $p,q\in[1,\infty)$.
\begin{itemize}
\item $L^p_\mathbb{F}(\Omega;L^q(0,T;H))$ is the set of all $\mathbb{F}$-progressively measurable processes $\varphi(\cdot)$ valued in $H$ such that
$$\|\varphi(\cdot)\|_{L^p_\mathbb{F}(\Omega;L^q(0,T;H))}\equiv\Big[\mathbb{E}\Big(\int_0^T|\varphi(t)|^qdt\Big)^{p\over q}\Big]^{1\over p}<\infty.$$
\item $L^q_\mathbb{F}(0,T;L^p(\Omega;H))$ is the set of all $\mathbb{F}$-progressively measurable processes $\varphi(\cdot)$ valued in $H$ such that
$$\|\varphi(\cdot)\|_{L^q_\mathbb{F}(0,T;L^p(\Omega;H))}\equiv\Big[\int_0^T\Big(\mathbb{E}|\varphi(t)|^p\Big)^{q\over p}dt\Big]^{1\over q}<\infty.$$
\item $L^p_\mathbb{F}(\Omega;C([0,T];H))$ is the set of all $\mathbb{F}$-progressively measurable processes $\varphi(\cdot)$ valued in $H$ such that for almost all $\omega\in\Omega$, $t\mapsto\varphi(t,\omega)$ is continuous and
$$\|\varphi(\cdot)\|_{L^p_\mathbb{F}(\Omega;C([0,T];H))}\equiv\Big[\mathbb{E}\Big(\sup_{t\in[0,T]}|\varphi(t)|^p\Big)\Big]^{1\over p}<\infty.$$
\end{itemize}
In a similar manner, one may define
$$\begin{array}{ll}
\displaystyle L^p_\mathbb{F}(\Omega;L^\infty(0,T;H)),\quad L^\infty_\mathbb{F}(0,T;L^p(\Omega;H)),\quad L^p_\mathbb{F}(0,T;L^\infty(\Omega;H)),\quad L^\infty_\mathbb{F}(\Omega;L^p(0,T;H)),\\
\noalign{\smallskip}\displaystyle L^\infty_\mathbb{F}(\Omega;L^\infty(0,T;H)),\quad L^\infty_\mathbb{F}(0,T;L^\infty(\Omega;H)),\quad L^\infty_\mathbb{F}(\Omega;C([0,T];H)).\end{array}$$
We have
\begin{equation}\label{L=L}
L^p_\mathbb{F}(\Omega;L^p(0,T;H))=L^p_\mathbb{F}(0,T;L^p(\Omega;H))\equiv L^p_\mathbb{F}(0,T;H),\qquad p\in[1,\infty],
\end{equation}
and
\begin{equation}\label{Minkowski}
\left\{\begin{array}{ll}
\displaystyle L^p_\mathbb{F}(\Omega;L^q(0,T;H))\subseteq L^q_\mathbb{F}(0,T;L^p(\Omega;H)),\qquad 1\leqslant p\leqslant q\leqslant\infty,\\
\noalign{\smallskip}\displaystyle L^q_\mathbb{F}(0,T;L^p(\Omega;H))\subseteq L^p_\mathbb{F}(\Omega;L^q(0,T;H)),\qquad 1\leqslant q\leqslant p\leqslant\infty.\end{array}\right.
\end{equation}
In particular,
\begin{equation}\label{L(1p)}
L^1_\mathbb{F}(0,T;L^p(\Omega;H))\subseteq L^p_\mathbb{F}(\Omega;L^1(0,T;H)),\qquad 1\leqslant p\leqslant\infty.
\end{equation}
Also, we have
\begin{equation}\label{C-ne-C}
L^p_\mathbb{F}(\Omega;C([0,T];H))\subseteq L^p_\mathbb{F}(\Omega;L^\infty(0,T;H))\subseteq L^\infty_\mathbb{F}(0,T;L^p(\Omega;H)).
\end{equation}
In fact, for $1\leqslant p\leqslant q<\infty$, by Minkowski's integral inequality, we have
$$\|\varphi(\cdot)\|_{L^q_\mathbb{F}(0,T;L^p(\Omega;H))}^p=\Big[\int_0^T\Big(\mathbb{E}|\varphi(t)|^p\Big)^{q\over p}dt\Big]^{p\over q}\leqslant\mathbb{E}\Big(\int_0^T|\varphi(t)|^qdt\Big)^{p\over q}=\|\varphi(\cdot)\|_{L^p_\mathbb{F}(\Omega;L^q(0,T;H))}^p.$$
This gives the first inclusion in \eqref{Minkowski}.
Other cases can be proved similarly. Now, we introduce the following definition.

\begin{definition}\label{feasible control} \rm An $\mathbb{F}$-progressively measurable process $u:[0,T]\times\Omega\to\mathbb{R}^m$ is called a {\it feasible control} of system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ if, under $u(\cdot)$, for any $x\in\mathbb{R}^n$, system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ admits a unique strong solution $X(\cdot)\in L^1_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))$ satisfying $X(0)=x$. The set of feasible controls is denoted by ${\cal U}[0,T]$.
\end{definition}

Now, for the state equation \eqref{FSDE1}, we introduce the following basic hypothesis.

\medskip
{\bf(H1)} The $\mathbb{R}^{n\times n}$-valued processes $A(\cdot),C_1(\cdot),\cdots,C_d(\cdot)$ satisfy
\begin{equation}\label{AC}
A(\cdot),C_1(\cdot),\cdots,C_d(\cdot)\in L^\infty_\mathbb{F}(0,T;\mathbb{R}^{n\times n}).
\end{equation}
The $\mathbb{R}^{n\times m}$-valued processes $B(\cdot),D_1(\cdot),\cdots,D_d(\cdot)$ are $\mathbb{F}$-progressively measurable.

\medskip
The following result gives a big class of feasible controls for system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$, whose proof is standard.

\begin{proposition}\label{well-posedness of SDE} \sl Let {\rm(H1)} hold. Let $u:[0,T]\times\Omega\to\mathbb{R}^m$ be $\mathbb{F}$-progressively measurable. Suppose the following holds:
\begin{equation}\label{Sec2_Ups}
p\geqslant 1,~B(\cdot)u(\cdot)\in L^p_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n)),~D_k(\cdot)u(\cdot)\in L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n)),~1\leqslant k\leqslant d.
\end{equation}
Then $u(\cdot)\in{\cal U}[0,T]$, and the solution $X(\cdot)\equiv X(\cdot\,;x,u(\cdot))$ of \eqref{FSDE1} with initial state $x$ under the control $u(\cdot)$ satisfies the following:
\begin{equation}\label{Lp-estimate}
\|X(\cdot)\|_{L^p_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))}\leqslant K\Big\{|x|+\|B(\cdot)u(\cdot)\|_{L^p_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))}+\sum_{k=1}^d\|D_k(\cdot)u(\cdot)\|_{L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))}\Big\}.
\end{equation}
Hereafter, $K>0$ will denote a generic constant which could be different from line to line. Further, if
\begin{equation}\label{Sec2_Upw}
p\geqslant 2,~B(\cdot)u(\cdot)\in L^1_\mathbb{F}(0,T;L^p(\Omega;\mathbb{R}^n)),~D_k(\cdot)u(\cdot)\in L^2_\mathbb{F}(0,T;L^p(\Omega;\mathbb{R}^n)),~1\leqslant k\leqslant d,
\end{equation}
then $u(\cdot)\in{\cal U}[0,T]$, and the following holds:
\begin{equation}\label{Lp-estimate*}
\|X(\cdot)\|_{L^p_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))}\leqslant K\Big\{|x|+\|B(\cdot)u(\cdot)\|_{L^1_\mathbb{F}(0,T;L^p(\Omega;\mathbb{R}^n))}+\sum_{k=1}^d\|D_k(\cdot)u(\cdot)\|_{L^2_\mathbb{F}(0,T;L^p(\Omega;\mathbb{R}^n))}\Big\}.
\end{equation}
\end{proposition}

\medskip
The above result leads us to the following definitions.

\begin{definition}\label{Sec2_Lp feasible control} \rm A control $u(\cdot)\in{\cal U}[0,T]$ is said to be {\it $L^p$-feasible} (respectively, {\it $L^p$-restricted feasible}) for system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ if \eqref{Sec2_Ups} (respectively, \eqref{Sec2_Upw}) holds true. The set of $L^p$-feasible controls (respectively, $L^p$-restricted feasible controls) is denoted by ${\cal U}^p[0,T]$ (respectively, ${\cal U}^p_r[0,T]$).
\end{definition}

Now, let us introduce the following two sets of hypotheses.

\medskip
{\bf(H2)} For some $\rho\in(1,\infty]$ and $\sigma\in(2,\infty]$, the following hold:
\begin{equation}\label{BD}
\begin{array}{ll}
\displaystyle B(\cdot)\in\left\{\begin{array}{ll}
\displaystyle L^\rho_\mathbb{F}(\Omega;L^{2\sigma\over\sigma+2}(0,T;\mathbb{R}^{n\times m})),\quad\rho\in(1,\infty],\ \sigma\in(2,\infty),\\
\noalign{\smallskip}\displaystyle L^\rho_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^{n\times m})),\quad\rho\in(1,\infty],\ \sigma=\infty,\end{array}\right.\\
\noalign{\smallskip}\displaystyle D_1(\cdot),\cdots,D_d(\cdot)\in L^\rho_\mathbb{F}(\Omega;L^\sigma(0,T;\mathbb{R}^{n\times m})).\end{array}
\end{equation}

\medskip
{\bf(H2)$'$} For some $\rho,\sigma\in(2,\infty]$, the following hold:
\begin{equation}\label{BD'}
\begin{array}{ll}
\displaystyle B(\cdot)\in\left\{\begin{array}{ll}
\displaystyle L^{2\sigma\over\sigma+2}_\mathbb{F}(0,T;L^\rho(\Omega;\mathbb{R}^{n\times m})),\quad\rho\in(2,\infty],\ \sigma\in(2,\infty),\\
\noalign{\smallskip}\displaystyle L^2_\mathbb{F}(0,T;L^\rho(\Omega;\mathbb{R}^{n\times m})),\quad\rho\in(2,\infty],\ \sigma=\infty,\end{array}\right.\\
\noalign{\smallskip}\displaystyle D_1(\cdot),\cdots,D_d(\cdot)\in L^\sigma_\mathbb{F}(0,T;L^\rho(\Omega;\mathbb{R}^{n\times m})).\end{array}
\end{equation}

\medskip
In what follows, (H2) will be used for problems involving $L^p$-feasible controls and (H2)$'$ will be used for problems involving $L^p$-restricted feasible controls. We now introduce the following set of controls:
$$\mathbb{U}^{p,\rho,\sigma}[0,T]=\left\{\begin{array}{ll}
\displaystyle L^{\rho p\over\rho-p}_\mathbb{F}(\Omega;L^{2\sigma\over\sigma-2}(0,T;\mathbb{R}^m)),\qquad p\in[1,\rho),~\rho\in(1,\infty),~\sigma\in(2,\infty),\\
\noalign{\smallskip}\displaystyle L^p_\mathbb{F}(\Omega;L^{2\sigma\over\sigma-2}(0,T;\mathbb{R}^m)),\qquad p\in[1,\rho),~\rho=\infty,~\sigma\in(2,\infty),\\
\noalign{\smallskip}\displaystyle L^{\rho p\over\rho-p}_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^m)),\qquad p\in[1,\rho),~\rho\in(1,\infty),~\sigma=\infty,\\
\noalign{\smallskip}\displaystyle L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^m)),\qquad p\in[1,\rho),~\rho=\sigma=\infty.\end{array}\right.$$
Clearly, as $\rho\uparrow\infty$ and/or $\sigma\uparrow\infty$, the set $\mathbb{U}^{p,\rho,\sigma}[0,T]$ becomes larger, whereas as $p\uparrow\rho$, it becomes smaller.
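To see the monotonicity in $\rho$ claimed above: since $\mathbb{P}(\Omega)=1$, H\"older's inequality gives $L^{r_2}(\Omega)\subseteq L^{r_1}(\Omega)$ whenever $r_1\leqslant r_2$, and for fixed $p$ the exponent $\rho\mapsto{\rho p\over\rho-p}$ is decreasing on $(p,\infty)$:

```latex
% Derivative of the exponent rho p/(rho - p) with respect to rho:
{d\over d\rho}\Big({\rho p\over\rho-p}\Big)
={p(\rho-p)-\rho p\over(\rho-p)^2}
=-{p^2\over(\rho-p)^2}<0.
```

Hence enlarging $\rho$ lowers the integrability required of $u(\cdot)$ over $\Omega$ and so enlarges $\mathbb{U}^{p,\rho,\sigma}[0,T]$; similarly, enlarging $\sigma$ lowers the exponent ${2\sigma\over\sigma-2}$ over $[0,T]$, while letting $p\uparrow\rho$ raises ${\rho p\over\rho-p}$ and shrinks the set.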
Similarly, we introduce
$$\mathbb{U}_r^{p,\rho,\sigma}[0,T]=\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle L^{2\sigma\over\sigma-2}_\mathbb{F}(0,T;L^{\rho p\over\rho-p}(\Omega;\mathbb{R}^m)),\qquad p\in[2,\rho),~\rho,\sigma\in(2,\infty),\\
\noalign{\smallskip}\displaystyle L^{2\sigma\over\sigma-2}_\mathbb{F}(0,T;L^p(\Omega;\mathbb{R}^m)),\qquad p\in[2,\rho),~\rho=\infty,~\sigma\in(2,\infty),\\
\noalign{\smallskip}\displaystyle L^2_\mathbb{F}(0,T;L^{\rho p\over\rho-p}(\Omega;\mathbb{R}^m)),\qquad p\in[2,\rho),~\rho\in(2,\infty),~\sigma=\infty,\\
\noalign{\smallskip}\displaystyle L^2_\mathbb{F}(0,T;L^p(\Omega;\mathbb{R}^m)),\qquad p\in[2,\rho),~\rho=\sigma=\infty.\end{array}\right.$$
We have the following proposition.

\begin{proposition}\sl {\rm(i)} Let {\rm(H1)} and {\rm(H2)} hold. Then
\begin{equation}\label{U in U}\mathbb{U}^{p,\rho,\sigma}[0,T]\subseteq{\cal U}^p[0,T],\qquad\forall p\in[1,\rho).\end{equation}
{\rm(ii)} Let {\rm(H1)} and {\rm(H2)$'$} hold. Then
\begin{equation}\label{U in U*}\mathbb{U}_r^{p,\rho,\sigma}[0,T]\subseteq{\cal U}_r^p[0,T],\qquad\forall p\in[2,\rho).\end{equation}\end{proposition}

\it Proof. \rm (i) Let (H1) and (H2) hold, and let $u(\cdot)\in\mathbb{U}^{p,\rho,\sigma}[0,T]$. We only consider the case $\rho<\infty$ and $\sigma<\infty$; the other cases can be proved similarly.
We make the following calculations, using H\"older's inequality (noting $\sigma>2$ and $1\le p<\rho$):
\begin{equation}\label{|Bu|}\begin{array}{ll}
\noalign{\smallskip}\displaystyle\|B(\cdot)u(\cdot)\|_{L^p_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))}^p=\mathbb{E}\Big(\int_0^T|B(t)u(t)|dt\Big)^p\\
\noalign{\smallskip}\displaystyle\qquad\le\mathbb{E}\Big[\Big(\int_0^T|B(t)|^{2\sigma\over\sigma+2}dt\Big)^{(\sigma+2)p\over2\sigma}\Big(\int_0^T|u(t)|^{2\sigma\over\sigma-2}dt\Big)^{(\sigma-2)p\over2\sigma}\Big]\\
\noalign{\smallskip}\displaystyle\qquad\le\Big[\mathbb{E}\Big(\int_0^T|B(t)|^{2\sigma\over\sigma+2}dt\Big)^{(\sigma+2)\rho\over2\sigma}\Big]^{p\over\rho}\Big[\mathbb{E}\Big(\int_0^T|u(t)|^{2\sigma\over\sigma-2}dt\Big)^{(\sigma-2)p\rho\over2\sigma(\rho-p)}\Big]^{\rho-p\over\rho}\\
\noalign{\smallskip}\displaystyle\qquad\equiv\|B(\cdot)\|_{L^\rho_\mathbb{F}(\Omega;L^{2\sigma\over\sigma+2}(0,T;\mathbb{R}^{n\times m}))}^p\,\|u(\cdot)\|_{L^{p\rho\over\rho-p}_\mathbb{F}(\Omega;L^{2\sigma\over\sigma-2}(0,T;\mathbb{R}^m))}^p,\end{array}\end{equation}
and for each $k=1,\cdots,d$,
\begin{equation}\label{|Du|}\begin{array}{ll}
\noalign{\smallskip}\displaystyle\|D_k(\cdot)u(\cdot)\|_{L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))}^p=\mathbb{E}\Big(\int_0^T|D_k(t)u(t)|^2dt\Big)^{p\over2}\\
\noalign{\smallskip}\displaystyle\qquad\le\mathbb{E}\Big[\Big(\int_0^T|D_k(t)|^\sigma dt\Big)^{p\over\sigma}\Big(\int_0^T|u(t)|^{2\sigma\over\sigma-2}dt\Big)^{(\sigma-2)p\over2\sigma}\Big]\\
\noalign{\smallskip}\displaystyle\qquad\le\Big[\mathbb{E}\Big(\int_0^T|D_k(t)|^\sigma dt\Big)^{\rho\over\sigma}\Big]^{p\over\rho}\Big[\mathbb{E}\Big(\int_0^T|u(t)|^{2\sigma\over\sigma-2}dt\Big)^{(\sigma-2)p\rho\over2\sigma(\rho-p)}\Big]^{\rho-p\over\rho}\\
\noalign{\smallskip}\displaystyle\qquad\equiv\|D_k(\cdot)\|_{L^\rho_\mathbb{F}(\Omega;L^\sigma(0,T;\mathbb{R}^{n\times m}))}^p\,\|u(\cdot)\|_{L^{p\rho\over\rho-p}_\mathbb{F}(\Omega;L^{2\sigma\over\sigma-2}(0,T;\mathbb{R}^m))}^p.\end{array}\end{equation}
By Definition \ref{Sec2_Lp feasible control}, $u(\cdot)\in{\cal U}^p[0,T]$, proving (i).

\medskip

In a similar manner, one can prove (ii). \hfill$\square$

\section{Exact Controllability}\label{Controllability}

We now give a precise definition of $L^p$-exact controllability.

\begin{definition}\label{Sec3_Def_Lp_Exact} \rm Let $\mathbb{U}[0,T]\subseteq{\cal U}[0,T]$. The system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is said to be $L^p$-{\it exactly controllable} by $\mathbb{U}[0,T]$ on the time interval $[0,T]$ if for any $(x,\xi)\in\mathbb{R}^n\times L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, there exists a $u(\cdot)\in\mathbb{U}[0,T]$ such that the solution $X(\cdot)\in L^1_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$ of \eqref{FSDE1} with $X(0)=x$ satisfies $X(T)=\xi$. \end{definition}

In the above, $\mathbb{U}[0,T]$ could be ${\cal U}^q[0,T]$ or ${\cal U}^q_r[0,T]$ for some suitable $q\ge1$, and it could also be $\mathbb{U}^{p,\rho,\sigma}[0,T]$ or $\mathbb{U}^{p,\rho,\sigma}_r[0,T]$. We emphasize that in defining the system to be $L^p$-exactly controllable (for $p\ge1$) by $\mathbb{U}[0,T]$, we only require $X(\cdot)\in L^1_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$ (since $\mathbb{U}[0,T]\subseteq{\cal U}[0,T]$). Depending on the choice of $\mathbb{U}[0,T]$, $X(\cdot)$ might have better integrability/regularity, but we do not require any property beyond membership in $L^1_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))$. We will see shortly that this gives us great flexibility.
\subsection{The case $D(\cdot)=0$}

In this subsection, we consider the system $[A(\cdot),C(\cdot);B(\cdot),0]$, i.e., the state equation reads
\begin{equation}\label{FSDE0}dX(t)=\Big[A(t)X(t)+B(t)u(t)\Big]dt+\sum_{k=1}^dC_k(t)X(t)dW_k(t),\qquad t\ge0.\end{equation}
Thus, the control $u(\cdot)$ does not appear in the diffusion. When all the coefficients in the above are constants, it was shown in \cite{Buckdahn-Quincampoix-Tessitore 2006} that the system is approximately controllable (under some additional conditions) in the following sense: for any $(x,\xi)\in\mathbb{R}^n\times L^2_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ and any $\varepsilon>0$, there exists a $u_\varepsilon(\cdot)\in L^2_\mathbb{F}(0,T;\mathbb{R}^m)\equiv\mathbb{U}^{2,\infty,\infty}[0,T]$ such that the solution $X(\cdot)\equiv X(\cdot\,;x,u_\varepsilon(\cdot))$ with $X(0)=x$ satisfies
$$\|X(T)-\xi\|_{L^2_{{\cal F}_T}(\Omega;\mathbb{R}^n)}<\varepsilon.$$
The following is our first result, which improves the results of \cite{Buckdahn-Quincampoix-Tessitore 2006} significantly.

\begin{theorem}\label{D=0}\sl Let $D(\cdot)=0$ and {\rm(H1)} hold. Let
\begin{equation}\label{B(t)B(t)>d}B(t)B(t)^\top\ge\delta I>0,\qquad t\in[0,T],~\hbox{a.s.},\end{equation}
for some $\delta>0$. Then for any $p>1$, the system $[A(\cdot),C(\cdot);B(\cdot),0]$ is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^q[0,T]$ for any $q\in(1,p)$. \end{theorem}

\it Proof. \rm Consider the following system:
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(t)=v(t)dt+\sum_{k=1}^dC_k(t)X(t)dW_k(t),\qquad t\ge0,\\
\noalign{\smallskip}\displaystyle X(0)=x,\end{array}\right.$$
with $v(\cdot)\in L^q_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))$, $q>1$.
Then the unique solution $X(\cdot)$ satisfies
$$\mathbb{E}\Big[\sup_{t\in[0,T]}|X(t)|^q\Big]\le K\Big[|x|^q+\mathbb{E}\Big(\int_0^T|v(t)|dt\Big)^q\Big].$$
Let $\Phi(\cdot)$ be the solution to the following:
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle d\Phi(t)=\sum_{k=1}^dC_k(t)\Phi(t)dW_k(t),\qquad t\ge0,\\
\noalign{\smallskip}\displaystyle\Phi(0)=I.\end{array}\right.$$
Then $\Phi(\cdot)^{-1}$ exists and satisfies the following:
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle d\big[\Phi(t)^{-1}\big]=\sum_{k=1}^d\Phi(t)^{-1}C_k(t)^2dt-\sum_{k=1}^d\Phi(t)^{-1}C_k(t)dW_k(t),\qquad t\ge0,\\
\noalign{\smallskip}\displaystyle\Phi(0)^{-1}=I.\end{array}\right.$$
Therefore, for any $q\ge1$,
$$\mathbb{E}\Big[\sup_{t\in[0,T]}|\Phi(t)|^q+\sup_{t\in[0,T]}|\Phi(t)^{-1}|^q\Big]\le K(T,q),$$
with the constant $K(T,q)$ depending on $T$ and $q$ (as well as the bound of $C_k(\cdot)$), and we have the following variation of constants formula for $X(\cdot)$:
\begin{equation}\label{X(t)}X(t)=\Phi(t)x+\Phi(t)\int_0^t\Phi(s)^{-1}v(s)ds,\qquad t\ge0.\end{equation}
Now, for any $q\in(1,p)$, we want to choose some $v(\cdot)\in L^q_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))$ so that $X(T)=\xi$, which is equivalent to the following:
$$\Phi(T)^{-1}\xi-x=\int_0^T\widehat v(s)ds,$$
with
$$\widehat v(t)=\Phi(t)^{-1}v(t),\qquad t\in[0,T].$$
Since $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, for any $\bar q\in(q,p)$, we have
$$\mathbb{E}|\Phi(T)^{-1}\xi|^{\bar q}\le\Big(\mathbb{E}|\Phi(T)^{-1}|^{\bar qp\over p-\bar q}\Big)^{p-\bar q\over p}\Big(\mathbb{E}|\xi|^p\Big)^{\bar q\over p}\le K\Big(\mathbb{E}|\xi|^p\Big)^{\bar q\over p}.$$
Thus, $\Phi(T)^{-1}\xi-x\in L^{\bar q}_{{\cal F}_T}(\Omega;\mathbb{R}^n)$. Then, by \cite[Theorem 3.1]{Lu-Yong-Zhang 2012}, we can find a $\widehat v(\cdot)\in L^{\bar q}_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))$ such that
$$\Phi(T)^{-1}\xi-x=\int_0^T\widehat v(s)ds.$$
Define
$$v(t)=\Phi(t)\widehat v(t),\qquad t\in[0,T].$$
Since $q<\bar q$, one has
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big(\int_0^T|v(t)|dt\Big)^q\le\mathbb{E}\Big(\int_0^T|\Phi(t)|\,|\widehat v(t)|dt\Big)^q\le\mathbb{E}\Big[\Big(\sup_{t\in[0,T]}|\Phi(t)|\Big)\int_0^T|\widehat v(t)|dt\Big]^q\\
\noalign{\smallskip}\displaystyle\le K\Big[\mathbb{E}\Big(\int_0^T|\widehat v(t)|dt\Big)^{\bar q}\Big]^{q\over\bar q}=K\|\widehat v(\cdot)\|_{L^{\bar q}_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))}^{q\over\bar q}.\end{array}$$
Now, we define
$$u(t)=B(t)^\top[B(t)B(t)^\top]^{-1}\big[v(t)-A(t)X(t)\big],\qquad t\ge0,$$
with $X(\cdot)$ defined by \eqref{X(t)}. Then
$$A(t)X(t)+B(t)u(t)=v(t),\qquad t\ge0,$$
which implies that
$$X(t)=\Phi(t)x+\Phi(t)\int_0^t\Phi(s)^{-1}\Big[A(s)X(s)+B(s)u(s)\Big]ds,\qquad t\ge0.$$
This means that $X(\cdot)$ is the solution to \eqref{FSDE0} corresponding to $(x,u(\cdot))$, with
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big(\int_0^T|B(t)u(t)|dt\Big)^q\le K\,\mathbb{E}\Big(\int_0^T|v(t)-A(t)X(t)|dt\Big)^q\\
\noalign{\smallskip}\displaystyle\qquad\qquad\qquad\qquad\qquad\le K\Big[\mathbb{E}\Big(\int_0^T|v(t)|dt\Big)^q+\mathbb{E}\Big(\int_0^T|X(t)|dt\Big)^q\Big].\end{array}$$
Therefore, $u(\cdot)\in{\cal U}^q[0,T]$ and $X(T)=\xi$. This proves our conclusion. \hfill$\square$

\medskip

Let us make some comments on the above result.
To this end, let us define
$$\mathbb{L}_T(u(\cdot))=\int_0^Tu(t)dt,\qquad u(\cdot)\in L^1_\mathbb{F}(0,T;\mathbb{R}^n).$$
Then a result from \cite[Theorems 3.1, 3.2]{Lu-Yong-Zhang 2012} tells us that
\begin{equation}\label{L=L}\mathbb{L}_T\Big(L^p_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))\Big)\supseteq\mathbb{L}_T\Big(L^1_\mathbb{F}\big(0,T;L^p(\Omega;\mathbb{R}^n)\big)\Big)=L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n),\qquad\forall p\in[1,\infty),\end{equation}
and
\begin{equation}\label{Lq ne Lp}\mathbb{L}_T\Big(L^q_\mathbb{F}\big(0,T;L^p(\Omega;\mathbb{R}^n)\big)\Big)\subsetneq L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n),\qquad\forall p\in(1,\infty),~q\in(1,\infty].\end{equation}
Thus,
\begin{equation}\label{Lp ne Lp}\mathbb{L}_T\Big(L^p_\mathbb{F}\big(\Omega;L^q(0,T;\mathbb{R}^n)\big)\Big)\subsetneq L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n),\qquad\forall 1<p\le q\le\infty.\end{equation}
In particular,
\begin{equation}\label{L2 ne Lp}\mathbb{L}_T\Big(L^p_\mathbb{F}\big(\Omega;L^2(0,T;\mathbb{R}^n)\big)\Big)\subsetneq L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n),\qquad\forall 1<p\le2.\end{equation}
Let us look at an implication of the above. Consider a system of the following form:
$$dX(t)=u(t)dt,\qquad t\ge0.$$
For a terminal state $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ with $p>1$, if one is only allowed to use controls from $L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$, the above system is not controllable. Whereas, by Theorem \ref{D=0}, this system is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^q[0,T]$ for any $q\in(1,p)$.
This is a main reason why we define $L^p$-exact controllability allowing the control to be taken from a larger space than $L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^m))$, and why we do not restrict the state process $X(\cdot)$ to belong to $L^p_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))$.

\medskip

Next, we notice that in Theorem \ref{D=0}, condition \eqref{B(t)B(t)>d} implies that
$$\hbox{rank}\,B(t)=n\le m,\qquad t\in[0,T],~\hbox{a.s.}$$
This means that the dimension of the control process is no less than that of the state process. Now, if
\begin{equation}\label{rank B<n}\hbox{rank}\,B(t)<n,\qquad t\in[0,T],~\hbox{a.s.},\end{equation}
which will always be the case if $m<n$, then for each $t\in[0,T]$, there exists a $\theta(t)\in\mathbb{R}^n\setminus\{0\}$ such that
\begin{equation}\label{th(t)B(t)=0}\theta(t)^\top B(t)=0.\end{equation}
The following gives a negative result for exact controllability under condition \eqref{th(t)B(t)=0}, with some additional regularity conditions on $\theta(\cdot)$ and $C(\cdot)$; it is essentially an extension of \eqref{Lp ne Lp}.

\begin{theorem}\label{non-controllable} \sl Let $D(\cdot)=0$ and {\rm(H1)} hold. Suppose there exists a continuously differentiable function $\theta:[0,T]\to\mathbb{R}^n$ with $|\theta(t)|=1$ for all $t\in[0,T]$ such that \eqref{th(t)B(t)=0} holds. Also, let
\begin{equation}\label{C_k}C_k(\cdot)\in L^\infty_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^{n\times n})),\qquad 1\le k\le d.\end{equation}
Then for any $p>1$, $[A(\cdot),C(\cdot);B(\cdot),0]$ is not $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^p[0,T]$. \end{theorem}

\it Proof.
\rm Let
$$0=t_0<t_1<t_2<\cdots,\qquad t_\ell\to T.$$
Define
$$G_0=\bigcup_{\ell=0}^\infty\Big[t_\ell,{t_\ell+t_{\ell+1}\over2}\Big),\qquad G_1=\bigcup_{\ell=0}^\infty\Big[{t_\ell+t_{\ell+1}\over2},t_{\ell+1}\Big).$$
Take
$$\zeta_0,\zeta_1\in\mathbb{R}^n,\qquad|\theta(T)^\top\zeta_1-\theta(T)^\top\zeta_0|=1.$$
Set
$$\zeta(s)=\zeta_0I_{G_0}(s)-\zeta_1I_{G_1}(s),\qquad s\in[0,T),$$
and
\begin{equation}\label{xi}\xi=x+\int_0^T\sum_{k=1}^d\zeta(s)dW_k(s).\end{equation}
We claim that the above constructed $\xi$ cannot be hit by the state $X(T)$ from $X(0)=x$ under any $u(\cdot)\in{\cal U}^p[0,T]$. We show this by contradiction. Suppose otherwise; then for some $u(\cdot)\in{\cal U}^p[0,T]$, $X(\cdot)$ satisfies
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(s)=\Big[A(s)X(s)+B(s)u(s)\Big]ds+C(s)X(s)dW(s),\qquad s\in[0,T],\\
\noalign{\smallskip}\displaystyle X(0)=x,\qquad X(T)=\xi.\end{array}\right.$$
Hence,
$$d[\theta(t)^\top X(t)]=\Big[\theta(t)^\top A(t)+\dot\theta(t)^\top\Big]X(t)dt+\sum_{k=1}^d\theta(t)^\top C_k(t)X(t)dW_k(t),\qquad t\ge0.$$
Now, let
\begin{equation}\label{eta}\eta(t)=x+\int_0^t\sum_{k=1}^d\zeta(s)dW_k(s),\qquad t\in[0,T].\end{equation}
Then
\begin{equation}\label{BSDE}\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle d\Big[\theta(t)^\top\Big(X(t)-\eta(t)\Big)\Big]=\Big[\theta(t)^\top A(t)X(t)+\dot\theta(t)^\top\Big(X(t)-\eta(t)\Big)\Big]dt\\
\noalign{\smallskip}\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{k=1}^d\theta(t)^\top\Big(C_k(t)X(t)-\zeta(t)\Big)dW_k(t),\quad t\in[0,T],\\
\noalign{\smallskip}\displaystyle\theta(T)^\top\Big(X(T)-\eta(T)\Big)=0.\end{array}\right.\end{equation}
By a standard estimate for BSDEs and the Burkholder--Davis--Gundy inequality, one obtains
\begin{equation}\label{SSec3_Eq1}\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big\{\sup_{s\in[t,T]}\Big|\theta(s)^\top\Big(X(s)-\eta(s)\Big)\Big|^p+\Big[\int_t^T\sum_{k=1}^d\Big|\theta(s)^\top\Big(C_k(s)X(s)-\zeta(s)\Big)\Big|^2ds\Big]^{p\over2}\Big\}\\
\noalign{\smallskip}\displaystyle\le K\,\mathbb{E}\Big(\int_t^T|\theta(s)^\top A(s)X(s)+\dot\theta(s)^\top X(s)-\dot\theta(s)^\top\eta(s)|ds\Big)^p\\
\noalign{\smallskip}\displaystyle\le K(T-t)^p\,\mathbb{E}\Big[\sup_{s\in[t,T]}|X(s)|^p+\sup_{s\in[t,T]}|\eta(s)|^p\Big]\\
\noalign{\smallskip}\displaystyle\le K(T-t)^p\Big[|x|^p+\mathbb{E}\Big(\int_0^T|B(s)u(s)|ds\Big)^p+\Big(\int_0^T|\zeta(s)|^2ds\Big)^{p\over2}\Big].\end{array}\end{equation}
On the other hand,
\begin{equation}\label{SSec3_Eq2}\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big[\sup_{s\in[t,T]}|X(s)-\xi|^p\Big]\le K\max\{|T-t|^p,|T-t|^{p\over2}\}\Big[\,|x|^p+\mathbb{E}\Big(\int_t^T|B(s)u(s)|ds\Big)^p\Big]\\
\noalign{\smallskip}\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad+K\,\mathbb{E}\Big(\int_t^T|B(s)u(s)|ds\Big)^p,\end{array}\end{equation}
and for any $f,g,h\in L^2(t,T;\mathbb{R})$,
$$\|f-g\|^p_{L^2(t,T;\mathbb{R})}\ge2^{-(p-1)}\|f-h\|^p_{L^2(t,T;\mathbb{R})}-\|h-g\|^p_{L^2(t,T;\mathbb{R})}.$$
Thus, for each $k=1,2,\cdots,d$,
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big[\int_{t_i}^T\Big|\theta(s)^\top\Big(C_k(s)X(s)-\zeta(s)\Big)\Big|^2ds\Big]^{p\over2}\\
\noalign{\smallskip}\displaystyle\ge2^{-4(p-1)}\mathbb{E}\Big(\int_{t_i}^T\big|\theta(T)^\top\big[C_k(T)\xi-\zeta(s)\big]\big|^2ds\Big)^{p\over2}-2^{-3(p-1)}\mathbb{E}\Big(\int_{t_i}^T\big|\big[\theta(T)^\top-\theta(s)^\top\big]\zeta(s)\big|^2ds\Big)^{p\over2}\\
\noalign{\smallskip}\displaystyle\quad-2^{-(p-1)}\mathbb{E}\Big(\int_{t_i}^T\big|\big[\theta(s)-\theta(T)\big]^\top C_k(T)\xi\big|^2ds\Big)^{p\over2}-2^{-2(p-1)}\mathbb{E}\Big(\int_{t_i}^T\big|\theta(s)^\top\big[C_k(s)-C_k(T)\big]\xi\big|^2ds\Big)^{p\over2}\\
\noalign{\smallskip}\displaystyle\quad-\mathbb{E}\Big(\int_{t_i}^T\big|\theta(s)^\top C_k(s)\big[X(s)-\xi\big]\big|^2ds\Big)^{p\over2}\end{array}$$
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\ge2^{-4(p-1)}\Big({T-t_i\over2}\Big)^{p\over2}\mathbb{E}\Big[|\theta(T)^\top\zeta_0-\theta(T)^\top C_k(T)\xi|^2+|\theta(T)^\top\zeta_1-\theta(T)^\top C_k(T)\xi|^2\Big]^{p\over2}\\
\noalign{\smallskip}\displaystyle\qquad-2^{-3(p-1)}(T-t_i)^{p\over2}\|\dot\theta(\cdot)\|_{L^\infty(0,T;\mathbb{R}^n)}^p\Big(\mathbb{E}\int_{t_i}^T|\zeta(s)|^2ds\Big)^{p\over2}\\
\noalign{\smallskip}\displaystyle\qquad-2^{-2(p-1)}(T-t_i)^{p\over2}\mathbb{E}\Big[\sup_{s\in[t_i,T]}\big|[\theta(s)-\theta(T)]^\top C_k(T)\xi\big|^p\Big]\\
\noalign{\smallskip}\displaystyle\qquad-2^{-(p-1)}(T-t_i)^{p\over2}\mathbb{E}\Big[\sup_{s\in[t_i,T]}|C_k(T)-C_k(s)|^p|\xi|^p\Big]\\
\noalign{\smallskip}\displaystyle\qquad-(T-t_i)^{p\over2}\|\theta(\cdot)^\top C_k(\cdot)\|^p_{L^\infty_\mathbb{F}(0,T;\mathbb{R}^{n\times n})}\mathbb{E}\Big[\sup_{s\in[t_i,T]}|X(s)-\xi|^p\Big].\end{array}$$
Since $C_k(\cdot)\in L^\infty_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^{n\times n}))$, by Lebesgue's dominated convergence theorem, we have
$$\mathbb{E}\Big[\sup_{s\in[t_i,T]}|C_k(T)-C_k(s)|^p|\xi|^p\Big]=o(1),\qquad\hbox{a.s.},~~i\to\infty.$$
From \eqref{SSec3_Eq2},
$$\mathbb{E}\Big[\sup_{s\in[t_i,T]}|X(s)-\xi|^p\Big]=o(1),\qquad\hbox{a.s.},~~i\to\infty.$$
The other two negative terms can be estimated similarly. Consequently, making use of \eqref{SSec3_Eq1},
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle2^{-4(p-1)}\Big({T-t_i\over2}\Big)^{p\over2}\mathbb{E}\Big[|\theta(T)^\top\zeta_0-\theta(T)^\top C_k(T)\xi|^2+|\theta(T)^\top\zeta_1-\theta(T)^\top C_k(T)\xi|^2\Big]^{p\over2}\\
\noalign{\smallskip}\displaystyle\le\mathbb{E}\Big(\int_{t_i}^T|\theta(s)^\top C_k(s)X(s)-\theta(s)^\top\zeta(s)|^2ds\Big)^{p\over2}+o\Big(|T-t_i|^{p\over2}\Big)=o\Big(|T-t_i|^{p\over2}\Big),\quad\hbox{a.s.},~~i\to\infty.\end{array}$$
This leads to
$$\mathbb{E}\Big(|\theta(T)^\top\zeta_0-\theta(T)^\top C_k(T)\xi|^2+|\theta(T)^\top\zeta_1-\theta(T)^\top C_k(T)\xi|^2\Big)^{p\over2}=0.$$
Thus,
$$|\theta(T)^\top\zeta_0-\theta(T)^\top C_k(T)\xi|^2+|\theta(T)^\top\zeta_1-\theta(T)^\top C_k(T)\xi|^2=0,\qquad\hbox{a.s.},$$
which is a contradiction since $|\theta(T)^\top(\zeta_0-\zeta_1)|=1$. Hence, the terminal state $\xi$ constructed above cannot be hit by the state under any $u(\cdot)\in{\cal U}^p[0,T]$. This completes the proof. \hfill$\square$

\subsection{The case $D(\cdot)$ is surjective}

In this subsection, we let $d=1$; the case $d>1$ can be discussed similarly. For the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$, we assume the following:
\begin{equation}\label{DD>d}D(t)D(t)^\top\ge\delta I,\qquad\hbox{a.s.},~\hbox{a.e.}~t\in[0,T].\end{equation}
In this case, $[D(t)D(t)^\top]^{-1}$ exists and is uniformly bounded.
We define
\begin{equation}\label{hABD}\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle\widehat A(t)=A(t)-B(t)D(t)^\top[D(t)D(t)^\top]^{-1}C(t),\\
\noalign{\smallskip}\displaystyle\widehat B(t)=B(t)\big\{I-D(t)^\top[D(t)D(t)^\top]^{-1}D(t)\big\},\\
\noalign{\smallskip}\displaystyle\widehat D(t)=B(t)D(t)^\top[D(t)D(t)^\top]^{-1},\end{array}\right.\end{equation}
and introduce the following controlled system:
\begin{equation}\label{Yu_Sec3_Equation1000}\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(t)=\Big[\widehat A(t)X(t)+\widehat B(t)v(t)+\widehat D(t)Z(t)\Big]dt+Z(t)dW(t),\qquad t\in[0,T],\\
\noalign{\smallskip}\displaystyle X(0)=x,\end{array}\right.\end{equation}
with $X(\cdot)$ being the state and $(v(\cdot),Z(\cdot))$ being the control. In our notation, the above system can be denoted by $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$. Compared with $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$, the latter system has a simpler structure. For the system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$, we need the following set:
$$\widehat{\cal U}^p[0,T]\equiv\Big\{v(\cdot)\bigm|\widehat B(\cdot)v(\cdot)\in L^p_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))\Big\}.$$
The following result is a kind of reduction.

\begin{theorem}\label{Theorem 3.4} \sl Let {\rm(H1)} and \eqref{DD>d} hold. Let $\widehat A(\cdot),\widehat B(\cdot),\widehat D(\cdot)$ be defined by \eqref{hABD}. Suppose
\begin{equation}\label{|hA|,|hD|}\widehat A(\cdot)\in L^\infty_{\mathbb F}(\Omega;L^{1+\varepsilon}(0,T;\mathbb R^{n\times n})),\qquad\widehat D(\cdot)\in L^\infty_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^{n\times n})),\end{equation}
where $\varepsilon>0$ is a given constant.
Then the system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ is $L^p$-exactly controllable on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$ if and only if the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^p[0,T]$. \end{theorem}

\it Proof. \rm ($\Rightarrow$) First of all, we note that for any $Z(\cdot)\in L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$,
\begin{equation}\label{hDZ}\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big(\int_0^T|\widehat D(t)Z(t)|dt\Big)^p\le\mathbb{E}\Big[\Big(\int_0^T|\widehat D(t)|^2dt\Big)^{p\over2}\Big(\int_0^T|Z(t)|^2dt\Big)^{p\over2}\Big]\\
\noalign{\smallskip}\displaystyle\le\|\widehat D(\cdot)\|_{L^\infty_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^{n\times n}))}^p\,\mathbb{E}\Big(\int_0^T|Z(t)|^2dt\Big)^{p\over2}.\end{array}\end{equation}
Thus, under condition \eqref{|hA|,|hD|}, for any $x\in\mathbb{R}^n$ and $(v(\cdot),Z(\cdot))\in\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$, system \eqref{Yu_Sec3_Equation1000} admits a unique solution.
\medskip

Now, if the system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ is $L^p$-exactly controllable on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$, then for any $x\in\mathbb{R}^n$ and $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, there exists a triple $(X(\cdot),v(\cdot),Z(\cdot))\in L^p_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))\times\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$ such that
\begin{equation}\label{Yu_Sec3_Equation1}\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(t)=\Big[\widehat A(t)X(t)+\widehat B(t)v(t)+\widehat D(t)Z(t)\Big]dt+Z(t)dW(t),\qquad t\in[0,T],\\
\noalign{\smallskip}\displaystyle X(0)=x,\qquad X(T)=\xi.\end{array}\right.\end{equation}
Define
$$u(t)=D(t)^\top[D(t)D(t)^\top]^{-1}[Z(t)-C(t)X(t)]+[I-D(t)^\top[D(t)D(t)^\top]^{-1}D(t)]v(t),\quad t\in[0,T].$$
We have
\begin{equation}\label{Yu_Sec3_Bu}B(t)u(t)=\widehat D(t)[Z(t)-C(t)X(t)]+\widehat B(t)v(t),\quad t\in[0,T].\end{equation}
Since $v(\cdot)\in\widehat{\cal U}^p[0,T]$ and
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big(\int_0^T|\widehat D(t)(Z(t)-C(t)X(t))|dt\Big)^p\le\mathbb{E}\Big[\Big(\int_0^T|\widehat D(t)|^2dt\Big)^{p\over2}\Big(\int_0^T|Z(t)-C(t)X(t)|^2dt\Big)^{p\over2}\Big]\\
\noalign{\smallskip}\displaystyle\le\|\widehat D(\cdot)\|^p_{L^\infty_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^{n\times n}))}\|Z(\cdot)-C(\cdot)X(\cdot)\|^p_{L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))}<\infty,\end{array}$$
one has $B(\cdot)u(\cdot)\in L^p_\mathbb{F}(\Omega;L^1(0,T;\mathbb{R}^n))$. Further, by
\begin{equation}\label{Yu_Sec3_Du}D(t)u(t)=Z(t)-C(t)X(t),\quad t\in[0,T],\end{equation}
we obtain $D(\cdot)u(\cdot)\in L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$. Therefore, $u(\cdot)\in{\cal U}^p[0,T]$.
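The two identities for $B(t)u(t)$ and $D(t)u(t)$ above are purely algebraic consequences of \eqref{hABD} and can be spot-checked numerically; the matrices below are illustrative random choices (a single frozen time slice), not data from the text.

```python
import numpy as np

# Pointwise check: with
#   hatB = B (I - D^T (D D^T)^{-1} D),   hatD = B D^T (D D^T)^{-1},
# the control u = D^T (D D^T)^{-1}(Z - C X) + (I - D^T (D D^T)^{-1} D) v
# satisfies  D u = Z - C X  and  B u = hatD (Z - C X) + hatB v.
rng = np.random.default_rng(0)
n, m = 2, 4
B, C = rng.standard_normal((n, m)), rng.standard_normal((n, n))
D = rng.standard_normal((n, m))          # D D^T invertible (generic choice)
X, Z, v = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(m)

Dp = D.T @ np.linalg.inv(D @ D.T)        # D^T (D D^T)^{-1}
P = np.eye(m) - Dp @ D                   # projection onto ker D
hatB, hatD = B @ P, B @ Dp

u = Dp @ (Z - C @ X) + P @ v
assert np.allclose(D @ u, Z - C @ X)                      # D u = Z - C X
assert np.allclose(B @ u, hatD @ (Z - C @ X) + hatB @ v)  # B u = hatD(Z-CX)+hatB v
```

The first identity uses $D D^\top(DD^\top)^{-1}=I$ and $DP=0$; the second is the definition of $\widehat B,\widehat D$ applied to the two summands of $u$.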
From \eqref{Yu_Sec3_Bu} and \eqref{Yu_Sec3_Du}, it is easy to see that
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle A(t)X(t)+B(t)u(t)=\widehat A(t)X(t)+\widehat B(t)v(t)+\widehat D(t)Z(t),\\
\noalign{\smallskip}\displaystyle C(t)X(t)+D(t)u(t)=Z(t),\end{array}\right.\quad t\in[0,T],$$
and thus \eqref{Yu_Sec3_Equation1} reads
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(t)=\Big[A(t)X(t)+B(t)u(t)\Big]dt+\Big[C(t)X(t)+D(t)u(t)\Big]dW(t),\qquad t\in[0,T],\\
\noalign{\smallskip}\displaystyle X(0)=x,\qquad X(T)=\xi.\end{array}\right.$$
This proves the $L^p$-exact controllability of the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ on $[0,T]$ by ${\cal U}^p[0,T]$.

\medskip

($\Leftarrow$) If the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^p[0,T]$, then for any $x\in\mathbb{R}^n$ and $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, there exists a pair $(X(\cdot),u(\cdot))\in L^p_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))\times{\cal U}^p[0,T]$ such that
\begin{equation}\label{Yu_Sec3_Equation2}\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(t)=\Big[A(t)X(t)+B(t)u(t)\Big]dt+\Big[C(t)X(t)+D(t)u(t)\Big]dW(t),\qquad t\in[0,T],\\
\noalign{\smallskip}\displaystyle X(0)=x,\qquad X(T)=\xi.\end{array}\right.\end{equation}
Let
$$\left\{\begin{aligned} & Z(t)=C(t)X(t)+D(t)u(t),\\ & v(t)=u(t),\end{aligned}\right.\qquad t\in[0,T].$$
Then $Z(\cdot)\in L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$ and
$$u(t)=D(t)^\top[D(t)D(t)^\top]^{-1}[Z(t)-C(t)X(t)]+[I-D(t)^\top[D(t)D(t)^\top]^{-1}D(t)]v(t),\quad t\in[0,T].$$
Further,
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle B(t)u(t)=B(t)D(t)^\top[D(t)D(t)^\top]^{-1}[Z(t)-C(t)X(t)]+B(t)[I-D(t)^\top[D(t)D(t)^\top]^{-1}D(t)]v(t)\\
\noalign{\smallskip}\displaystyle\qquad\quad\,=\widehat D(t)Z(t)+\Big[\widehat A(t)-A(t)\Big]X(t)+\widehat B(t)v(t),\quad t\in[0,T].\end{array}$$
Consequently,
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\mathbb{E}\Big(\int_0^T|\widehat B(t)v(t)|dt\Big)^p\le3^{p-1}\mathbb{E}\Big[\Big(\int_0^T|\widehat D(t)Z(t)|dt\Big)^p+\Big(\int_0^T|[\widehat A(t)-A(t)]X(t)|dt\Big)^p\\
\noalign{\smallskip}\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad+\Big(\int_0^T|B(t)u(t)|dt\Big)^p\Big]\\
\noalign{\smallskip}\displaystyle\qquad\le K\,\mathbb{E}\Big[\Big(\int_0^T|\widehat D(t)|^2dt\Big)^{p\over2}\Big(\int_0^T|Z(t)|^2dt\Big)^{p\over2}+\sup_{t\in[0,T]}|X(t)|^p+\Big(\int_0^T|B(t)u(t)|dt\Big)^p\Big]\\
\noalign{\smallskip}\displaystyle\qquad\le K\Big[|x|^p+\mathbb{E}\Big(\int_0^T|Z(t)|^2dt\Big)^{p\over2}+\mathbb{E}\Big(\int_0^T|B(t)u(t)|dt\Big)^p+\mathbb{E}\Big(\int_0^T|D(t)u(t)|^2dt\Big)^{p\over2}\Big].\end{array}$$
Thus, $v(\cdot)\in\widehat{\cal U}^p[0,T]$.
Also,
$$\begin{array}{ll}
\noalign{\smallskip}\displaystyle\widehat A(t)X(t)+\widehat B(t)v(t)+\widehat D(t)Z(t)\\
\noalign{\smallskip}\displaystyle=\Big[A(t)-B(t)D(t)^\top\big[D(t)D(t)^\top\big]^{-1}C(t)\Big]X(t)+B(t)\big\{I-D(t)^\top\big[D(t)D(t)^\top\big]^{-1}D(t)\big\}u(t)\\
\noalign{\smallskip}\displaystyle\qquad+B(t)D(t)^\top\big[D(t)D(t)^\top\big]^{-1}\big[C(t)X(t)+D(t)u(t)\big]\\
\noalign{\smallskip}\displaystyle=A(t)X(t)+B(t)u(t),\quad t\in[0,T].\end{array}$$
Hence, \eqref{Yu_Sec3_Equation2} can be rewritten as
$$\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle dX(t)=\Big[\widehat A(t)X(t)+\widehat B(t)v(t)+\widehat D(t)Z(t)\Big]dt+Z(t)dW(t),\qquad t\in[0,T],\\
\noalign{\smallskip}\displaystyle X(0)=x,\qquad X(T)=\xi.\end{array}\right.$$
This proves the $L^p$-exact controllability of the system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$. \hfill$\square$

\medskip

Now, for the system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$, similar to the definition of exact controllability, we introduce the following definition.

\begin{definition}\label{null-controllability} \rm The system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ is said to be {\it exactly null-controllable} by $\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$ on the time interval $[0,T]$ if for any $x\in\mathbb{R}^n$, there exists a pair $(v(\cdot),Z(\cdot))\in\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$ such that the solution $X(\cdot)$ to \eqref{Yu_Sec3_Equation1} under $(v(\cdot),Z(\cdot))$ satisfies $X(T)=0$. \end{definition}

We have the following result for the system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$.

\begin{theorem}\label{Liu-Peng} \sl Let {\rm(H1)} and \eqref{DD>d} hold. Let $\widehat A(\cdot),\widehat B(\cdot),\widehat D(\cdot)$ be defined by \eqref{hABD}. Suppose
\begin{equation}\label{Sec2_ABD_Epsilon}\begin{aligned}
& \widehat A(\cdot)\in L^\infty_{\mathbb F}(\Omega;L^{1+\varepsilon}(0,T;\mathbb R^{n\times n})),\quad\widehat B(\cdot)\in L^{(2\vee p)+\varepsilon}_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times m})),\\
& \widehat D(\cdot)\in L^\infty_{\mathbb F}(\Omega;L^{2+\varepsilon}(0,T;\mathbb R^{n\times n})),
\end{aligned}\end{equation}
where $2\vee p\equiv\max\{2,p\}$ and $\varepsilon>0$ is a given constant. Then the following are equivalent:\\
{\rm(i)} $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ is $L^p$-exactly controllable on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$;\\
{\rm(ii)} $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ is exactly null-controllable on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^n))$;\\
{\rm(iii)} the matrix $G$ defined below is invertible:
\begin{equation}\label{Yu_Sec3_G}G=\mathbb{E}\int_0^T{\cal Y}(t)\widehat B(t)\widehat B(t)^\top{\cal Y}(t)^\top dt,\end{equation}
where ${\cal Y}(\cdot)$ is the adapted solution to the following FSDE:
\begin{equation}\label{cY}\left\{\negthinspace\negthinspace\begin{array}{ll}
\noalign{\smallskip}\displaystyle d{\cal Y}(t)=-{\cal Y}(t)\widehat A(t)dt-{\cal Y}(t)\widehat D(t)dW(t),\qquad t\ge0,\\
\noalign{\smallskip}\displaystyle{\cal Y}(0)=I.\end{array}\right.\end{equation}
\end{theorem}

\it Proof. \rm (i) $\Rightarrow$ (ii) is trivial.

\medskip

(ii) $\Rightarrow$ (iii).
First of all, under \eqref{Sec2_ABD_Epsilon}, FSDE \eqref{cY} admits a unique strong solution ${\cal Y}(\cdot)\in\bigcap_{q>1}L^q_{\mathbb F}(\Omega;C([0,T];\mathbb{R}^{n\times n}))$, and the matrix $G$ is well defined. We now prove the conclusion by contradiction. Suppose the matrix $G$ is not invertible. Then there exists a vector $0\ne\beta\in\mathbb{R}^n$ such that
$$0=\beta^\top G\beta=\mathbb{E}\int_0^T\beta^\top{\cal Y}(t)\widehat B(t)\widehat B(t)^\top{\cal Y}(t)^\top\beta\,dt=\mathbb{E}\int_0^T|\widehat B(t)^\top{\cal Y}(t)^\top\beta|^2dt.$$
Therefore,
\begin{equation}\label{bYB=0}\beta^\top{\cal Y}(t)\widehat B(t)=0,\qquad\mbox{a.e. }t\in[0,T],\ \mbox{a.s.}\end{equation}
Now, we claim that by choosing $x=\beta\in\mathbb{R}^n$, there will be no $(v(\cdot),Z(\cdot))\in\widehat{\cal U}^p[0,T]\times L^p_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))$ such that the corresponding state process $X(\cdot)\equiv X(\cdot\,;x,v(\cdot),Z(\cdot))$ satisfies
$$X(0)=x,\qquad X(T)=0.$$
In fact, suppose there exists a pair $(v(\cdot),Z(\cdot))\in\widehat{\cal U}^p[0,T]\times L^p_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))$ such that the above is true.
Then, applying It\^o's formula to ${\cal Y}(t)X(t)$ on the interval $[0,T]$, one obtains the following relationship:
\begin{equation}\label{Yu_Sec3_ItoFormulaHat}-\beta={\cal Y}(T)X(T)-X(0)=\int_0^T{\cal Y}(t)\widehat B(t)v(t)dt+\int_0^T\Big[{\cal Y}(t)Z(t)-{\cal Y}(t)\widehat D(t)X(t)\Big]dW(t).\end{equation}
It is easy to check that
$$\mathbb{E}\Big(\int_0^T|{\cal Y}(t)Z(t)-{\cal Y}(t)\widehat D(t)X(t)|^2dt\Big)^{1/2}<\infty.$$
Thus,
\begin{equation}\label{Yu_Sec3_xHat}-\beta=\mathbb{E}\int_0^T{\cal Y}(t)\widehat B(t)v(t)dt.\end{equation}
Making use of \eqref{bYB=0}, we get
$$-|\beta|^2=\mathbb{E}\int_0^T\beta^\top{\cal Y}(t)\widehat B(t)v(t)dt=0,$$
a contradiction.

\medskip

(iii) $\Rightarrow$ (i). Under \eqref{Sec2_ABD_Epsilon}, for any given $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, the following BSDE
$$\left\{\begin{array}{ll}
\displaystyle dX_1(t)=\Big[\widehat A(t)X_1(t)+\widehat D(t)Z_1(t)\Big]dt+Z_1(t)dW(t),\quad t\in[0,T],\\
\displaystyle X_1(T)=\xi\end{array}\right.$$
admits a unique adapted solution $(X_1(\cdot),Z_1(\cdot))\in L^p_{\mathbb{F}}(\Omega;C([0,T];\mathbb{R}^n))\times L^p_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))$.
Since $G$ is invertible, for any $x\in\mathbb{R}^n$, we may define
$$v(t)=-\widehat B(t)^\top{\cal Y}(t)^\top G^{-1}\big[x-X_1(0)\big],\qquad t\in[0,T].$$
Note that $\widehat B(\cdot)\in L^{(2\vee p)+\varepsilon}_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^{n\times m}))$ leads to the following:
$$\begin{array}{ll}
\displaystyle\mathbb{E}\Big(\int_0^T|\widehat B(t)v(t)|dt\Big)^p\le K\mathbb{E}\Big(\int_0^T|\widehat B(t)|^2|{\cal Y}(t)|dt\Big)^p\le K\mathbb{E}\Big[\Big(\sup_{t\in[0,T]}|{\cal Y}(t)|^p\Big)\Big(\int_0^T|\widehat B(t)|^2dt\Big)^p\Big]\\
\displaystyle\le K\Big[\mathbb{E}\Big(\int_0^T|\widehat B(t)|^2dt\Big)^{p+\varepsilon}\Big]^{\frac{p}{p+\varepsilon}}\Big[\mathbb{E}\Big(\sup_{t\in[0,T]}|{\cal Y}(t)|^{\frac{p(p+\varepsilon)}{\varepsilon}}\Big)\Big]^{\frac{\varepsilon}{p+\varepsilon}}<\infty,\end{array}$$
which implies $v(\cdot)\in\widehat{\cal U}^p[0,T]$.
For this $v(\cdot)$, we define $(X_2(\cdot),Z_2(\cdot))\in L^p_{\mathbb{F}}(\Omega;C([0,T];\mathbb{R}^n))\times L^p_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))$ to be the unique adapted solution of the following BSDE:
$$\left\{\begin{array}{ll}
\displaystyle dX_2(t)=\Big[\widehat A(t)X_2(t)+\widehat B(t)v(t)+\widehat D(t)Z_2(t)\Big]dt+Z_2(t)dW(t),\quad t\in[0,T],\\
\displaystyle X_2(T)=0.\end{array}\right.$$
Applying It\^o's formula to ${\cal Y}(\cdot)X_2(\cdot)$, we have (compare with \eqref{Yu_Sec3_ItoFormulaHat})
$$\begin{array}{ll}
\displaystyle-X_2(0)={\cal Y}(T)X_2(T)-X_2(0)\\
\displaystyle=\int_0^T{\cal Y}(t)\widehat B(t)v(t)dt+\int_0^T\Big[{\cal Y}(t)Z_2(t)-{\cal Y}(t)\widehat D(t)X_2(t)\Big]dW(t)=\mathbb{E}\int_0^T{\cal Y}(t)\widehat B(t)v(t)dt\\
\displaystyle=-\Big\{\int_0^T\mathbb{E}\Big[{\cal Y}(t)\widehat B(t)\widehat B(t)^\top{\cal Y}(t)^\top\Big]dt\Big\}G^{-1}\big[x-X_1(0)\big]=-\big[x-X_1(0)\big],\end{array}$$
which implies $X_1(0)+X_2(0)=x$. Now, we define
$$X(t)=X_1(t)+X_2(t),\qquad Z(t)=Z_1(t)+Z_2(t),\qquad t\in[0,T].$$
Then, by linearity, we see that $(X(\cdot),v(\cdot),Z(\cdot))$ satisfies the following:
$$\left\{\begin{array}{ll}
\displaystyle dX(t)=\Big[\widehat A(t)X(t)+\widehat B(t)v(t)+\widehat D(t)Z(t)\Big]dt+Z(t)dW(t),\quad t\in[0,T],\\
\displaystyle X(0)=x,\qquad X(T)=\xi.\end{array}\right.$$
This means that system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ is $L^p$-exactly controllable on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))$. \hfill$\square$

\medskip

The above result is essentially due to Liu--Peng \cite{Liu-Peng 2010}.
We have reorganized the presentation of the result. It is worth pointing out that, unlike in \cite{Liu-Peng 2010}, we allow the coefficients to be unbounded and allow $p$ to be different from $2$. Combining Theorems \ref{Theorem 3.4} and \ref{Liu-Peng}, we have the following result.

\begin{theorem}\label{Theorem 3.7} \sl Let {\rm(H1)}, \eqref{DD>d} and \eqref{Sec2_ABD_Epsilon} hold with $\widehat A(\cdot)$, $\widehat B(\cdot)$ and $\widehat D(\cdot)$ defined by \eqref{hABD}. Then system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^p[0,T]$ if and only if $G$ defined by \eqref{Yu_Sec3_G} is invertible.
\end{theorem}

As a simple corollary of the above, we have the following result for the case of deterministic coefficients.

\begin{corollary}\label{deterministic coef} \sl Let {\rm(H1)}, \eqref{DD>d} and \eqref{Sec2_ABD_Epsilon} hold. Let $\widehat A(\cdot)$ and $\widehat B(\cdot)$ be deterministic. Let $\widehat\Phi(\cdot)$ be the solution to the following ODE:
\begin{equation}\label{h Phi}\left\{\begin{array}{ll}
\displaystyle d\widehat\Phi(t)=\widehat A(t)\widehat\Phi(t)dt,\qquad t\ge0,\\
\displaystyle\widehat\Phi(0)=I.\end{array}\right.\end{equation}
Denote
$$\Psi=\int_0^T\widehat\Phi(s)^{-1}\widehat B(s)\Big(\widehat\Phi(s)^{-1}\widehat B(s)\Big)^\top ds.$$
Suppose that $\Psi$ is invertible. Then system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^p[0,T]$.
\end{corollary}

\it Proof.
\rm In the current case, we have
$$\widehat\Phi(t)^{-1}=\mathbb{E}{\cal Y}(t),\qquad t\in[0,T].$$
On the other hand,
$$\begin{array}{ll}
\displaystyle0\le\mathbb{E}\Big[\Big({\cal Y}(t)-\mathbb{E}{\cal Y}(t)\Big)\widehat B(t)\widehat B(t)^\top\Big({\cal Y}(t)-\mathbb{E}{\cal Y}(t)\Big)^\top\Big]\\
\displaystyle\quad=\mathbb{E}\Big[{\cal Y}(t)\widehat B(t)\widehat B(t)^\top{\cal Y}(t)^\top\Big]-\Big[\mathbb{E}{\cal Y}(t)\Big]\widehat B(t)\widehat B(t)^\top\Big[\mathbb{E}{\cal Y}(t)\Big]^\top.\end{array}$$
Hence,
$$G\ge\Psi.$$
Then our conclusion follows from the above theorem. \hfill$\square$

\medskip

The invertibility of the matrix $G$ defined by \eqref{Yu_Sec3_G} gives a nice criterion for the $L^p$-exact controllability of system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ through the $L^p$-exact controllability of system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$. However, unless $n=1$, in the case of random coefficients, the solution ${\cal Y}(\cdot)$ of FSDE \eqref{cY} does not have a relatively simple (explicit) form. Thus, the applicability of condition (iii) in Theorem \ref{Liu-Peng} is somewhat limited. In the rest of this subsection, we will present another sufficient condition for the $L^p$-exact controllability of $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ which might have better applicability.

\medskip

Now, with random coefficients, we still let $\widehat\Phi(\cdot)$ be the solution to \eqref{h Phi}, which is now a random ODE. Presumably, $\widehat\Phi(\cdot)$ is easier to obtain than ${\cal Y}(\cdot)$ (the solution of \eqref{cY}).
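Although ${\cal Y}(\cdot)$ rarely has a closed form, the Gramian $G$ can at least be approximated by simulation. The following NumPy sketch (our own illustration, under the simplifying assumptions of constant matrix coefficients and a one-dimensional Brownian motion; the helper name \verb|gramian_mc| is hypothetical) estimates $G$ by an Euler--Maruyama scheme for ${\cal Y}(\cdot)$, after which invertibility can be tested numerically:

```python
import numpy as np

def gramian_mc(A_hat, B_hat, D_hat, T=1.0, steps=200, paths=2000, seed=0):
    """Monte Carlo / Euler-Maruyama estimate of
    G = E int_0^T Y(t) B B^T Y(t)^T dt, where
    dY(t) = -Y(t) A dt - Y(t) D dW(t), Y(0) = I.
    Constant coefficients and scalar noise are simplifying assumptions."""
    n = A_hat.shape[0]
    dt = T / steps
    rng = np.random.default_rng(seed)
    G = np.zeros((n, n))
    for _ in range(paths):
        Y = np.eye(n)
        for _ in range(steps):
            # accumulate the integrand at the left endpoint of each step
            G += (Y @ B_hat @ B_hat.T @ Y.T) * dt / paths
            dW = rng.normal(0.0, np.sqrt(dt))
            Y = Y - Y @ A_hat * dt - Y @ D_hat * dW
    return G
```

One would then check, e.g., \verb|np.linalg.matrix_rank(G) == n| (or the smallest eigenvalue of $G$) as a numerical proxy for condition (iii).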
Define
$$\widetilde D(t)=\widehat\Phi(t)^{-1}\widehat D(t)\widehat\Phi(t),\qquad t\in[0,T],$$
and introduce the following {\it mean-field stochastic Fredholm integral equation} of the first kind:
\begin{equation}\label{MF-Fredholm}Y(t)=\zeta+\Lambda\int_\tau^{\bar\tau}\mathbb{E}_\tau\big[\widetilde D(s)\widehat Z(s)\big]ds-\int_t^{\bar\tau}\widetilde D(s)\widehat Z(s)ds-\int_t^{\bar\tau}\widehat Z(s)dW(s),\qquad t\in[\tau,\bar\tau],\end{equation}
where $\mathbb{E}_\tau[\,\cdot\,]=\mathbb{E}[\,\cdot\,|{\cal F}_\tau]$. We have the following result.

\begin{lemma}\label{Yu_Sec3_Lemma_MF_L} \sl Let the following hold:
\begin{equation}\label{|hA|,|hD|g}\widehat A(\cdot)\in L^\infty_{\mathbb{F}}(\Omega;L^{1+\varepsilon}(0,T;\mathbb{R}^{n\times n})),\qquad\widehat D(\cdot)\in L^\infty_{\mathbb{F}}(\Omega;L^{2+\varepsilon}(0,T;\mathbb{R}^{n\times n})),\end{equation}
for some $\varepsilon>0$. Then there exists a positive constant $\varepsilon'$, depending only on $\widehat D(\cdot)$ and $\Lambda$, such that for any $0\le\tau<\bar\tau\le T$ with $\bar\tau-\tau\le\varepsilon'$, any $\zeta\in L^p_{{\cal F}_{\bar\tau}}(\Omega;\mathbb{R}^n)$ and $\Lambda\in L^\infty_{{\cal F}_{\bar\tau}}(\Omega;\mathbb{R}^{n\times n})$, \eqref{MF-Fredholm} admits a unique solution $(Y(\cdot),\widehat Z(\cdot))\in L^p_{\mathbb{F}}(\Omega;C([\tau,\bar\tau];\mathbb{R}^n))\times L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$.
\end{lemma}

\it Proof.
\rm Let $0\le\tau<\bar\tau\le T$, let $\widehat z(\cdot)\in L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$ be given, and consider the following BSDE:
$$Y(t)=\zeta+\Lambda\int_\tau^{\bar\tau}\mathbb{E}_\tau\big[\widetilde D(s)\widehat z(s)\big]ds-\int_t^{\bar\tau}\widetilde D(s)\widehat Z(s)ds-\int_t^{\bar\tau}\widehat Z(s)dW(s),\qquad t\in[\tau,\bar\tau].$$
On the right-hand side of the above, the sum of the first two terms is treated as the terminal state. By the standard theory of BSDEs (\cite{El Karoui-Peng-Quenez 1997}), we know that the above BSDE admits a unique adapted solution $(Y(\cdot),\widehat Z(\cdot))$, and the following estimate holds:
$$\mathbb{E}\Big[\sup_{t\in[\tau,\bar\tau]}|Y(t)|^p+\Big(\int_\tau^{\bar\tau}|\widehat Z(s)|^2ds\Big)^{p\over2}\Big]\le K\mathbb{E}\Big|\zeta+\Lambda\int_\tau^{\bar\tau}\mathbb{E}_\tau\big[\widetilde D(s)\widehat z(s)\big]ds\Big|^p.$$
Thus, for $\widehat z_1(\cdot),\widehat z_2(\cdot)\in L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$, if we let $(Y_1(\cdot),\widehat Z_1(\cdot))$ and $(Y_2(\cdot),\widehat Z_2(\cdot))$ be the corresponding adapted solutions, then, noting that both $\widehat\Phi(\cdot)$ and $\widehat\Phi(\cdot)^{-1}$ are bounded, and $\widehat D(\cdot)\in L^\infty_{\mathbb{F}}(\Omega;L^{2+\varepsilon}(0,T;\mathbb{R}^{n\times n}))$, one has
$$\begin{array}{ll}
\displaystyle\mathbb{E}\Big[\sup_{t\in[\tau,\bar\tau]}|Y_1(t)-Y_2(t)|^p+\Big(\int_\tau^{\bar\tau}|\widehat Z_1(s)-\widehat Z_2(s)|^2ds\Big)^{p\over2}\Big]\\
\displaystyle\le K\mathbb{E}\Big|\int_\tau^{\bar\tau}\mathbb{E}_\tau\Big[|\widehat D(s)|\,|\widehat z_1(s)-\widehat z_2(s)|\Big]ds\Big|^p\le K\mathbb{E}\Big\{\int_\tau^{\bar\tau}\Big[|\widehat D(s)|\,|\widehat z_1(s)-\widehat z_2(s)|\Big]ds\Big\}^p\\
\displaystyle\le K\mathbb{E}\Big\{\Big(\int_\tau^{\bar\tau}|\widehat D(s)|^2ds\Big)^{1\over2}\Big(\int_\tau^{\bar\tau}|\widehat z_1(s)-\widehat z_2(s)|^2ds\Big)^{1\over2}\Big\}^p\\
\displaystyle\le K\sup_{\omega\in\Omega}\Big(\int_\tau^{\bar\tau}|\widehat D(s,\omega)|^2ds\Big)^{p\over2}\mathbb{E}\Big(\int_\tau^{\bar\tau}|\widehat z_1(s)-\widehat z_2(s)|^2ds\Big)^{p\over2}\\
\displaystyle\le K(\bar\tau-\tau)^{\varepsilon p\over2(2+\varepsilon)}\Big[\sup_{\omega\in\Omega}\Big(\int_0^T|\widehat D(s,\omega)|^{2+\varepsilon}ds\Big)^{p\over2+\varepsilon}\Big]\mathbb{E}\Big(\int_\tau^{\bar\tau}|\widehat z_1(s)-\widehat z_2(s)|^2ds\Big)^{p\over2}.\end{array}$$
Consequently, we can find an absolute constant $\varepsilon'>0$ such that, as long as $0<\bar\tau-\tau\le\varepsilon'$, the map $\widehat z(\cdot)\mapsto\widehat Z(\cdot)$ is a contraction, which therefore admits a unique fixed point $\widehat Z(\cdot)$. Letting $Y(\cdot)$ be given by \eqref{MF-Fredholm}, one sees that $(Y(\cdot),\widehat Z(\cdot))\in L^p_{\mathbb{F}}(\Omega;C([\tau,\bar\tau];\mathbb{R}^n))\times L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$ is the unique solution to \eqref{MF-Fredholm}. This completes the proof. \hfill$\square$

\medskip

Now, we are ready to prove the following result.
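The contraction argument above has a simple deterministic analogue that can be simulated. The sketch below (our own illustration, not the mean-field equation itself; the helper name \verb|picard_fredholm| is hypothetical) solves the scalar equation $y(t)=\zeta-\int_t^{\bar\tau}d(s)y(s)\,ds$ by Picard iteration on a grid, which converges when the interval is short enough that $\int|d|<1$, mirroring the role of $\varepsilon'$ in the lemma:

```python
import numpy as np

def picard_fredholm(zeta, d, tau, bar_tau, steps=400, iters=50):
    """Deterministic analogue of the contraction step: solve
    y(t) = zeta - int_t^{bar_tau} d(s) y(s) ds
    by Picard iteration on a uniform grid.  The iteration map is a
    sup-norm contraction when int_tau^{bar_tau} |d(s)| ds < 1."""
    t = np.linspace(tau, bar_tau, steps + 1)
    dt = t[1] - t[0]
    dvals = d(t)
    y = np.full_like(t, zeta)                 # initial guess y_0 = zeta
    for _ in range(iters):
        # reversed cumulative sum approximates the tail integral int_t^{bar_tau}
        tail = np.cumsum((dvals * y)[::-1])[::-1] * dt
        y = zeta - tail
    return t, y
```

For constant $d\equiv c$ the exact solution is $y(t)=\zeta e^{c(t-\bar\tau)}$, which the iteration reproduces up to the grid discretization error.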
\begin{theorem}\label{Sufficient 1} \sl Let {\rm(H1)}, \eqref{DD>d} and \eqref{Sec2_ABD_Epsilon} hold. Let
\begin{equation}\label{Psi,Th}\left\{\begin{array}{ll}
\displaystyle\Psi(t,\tau)=\int_\tau^t\Big[\mathbb{E}_\tau\Big(\widehat\Phi(s)^{-1}\widehat B(s)\Big)\mathbb{E}_\tau\Big([\widehat\Phi(s)^{-1}\widehat B(s)]^\top\Big)\Big]ds,\\
\displaystyle\Theta(t,\tau)=\int_\tau^t\Big[\widehat\Phi(s)^{-1}\widehat B(s)\mathbb{E}_\tau\Big([\widehat\Phi(s)^{-1}\widehat B(s)]^\top\Big)\Big]ds,\end{array}\right.\qquad0\le\tau<t\le T.\end{equation}
Suppose there exists a $\delta>0$ such that, for any $T-\delta\le\tau<t\le T$, $\Psi(t,\tau)$ is invertible and $\Psi(t,\tau)^{-1}\in L^\infty_{{\cal F}_\tau}(\Omega;\mathbb{R}^{n\times n})$. Moreover, suppose there exists a constant $M>0$ such that
\begin{equation}\label{M}|\Theta(t,\tau)\Psi(t,\tau)^{-1}|\le M,\qquad T-\delta\le\tau<t\le T,\ \mbox{a.s.}\end{equation}
Then, for any $p>1$, system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by ${\cal U}^p[0,T]$.
\end{theorem}

\medskip

\rm Note that for $\Psi(t,\tau)$ and $\Theta(t,\tau)$ defined in \eqref{Psi,Th}, one has
$$\mathbb{E}_\tau\Theta(t,\tau)=\Psi(t,\tau).$$
Thus, in the case that $\widehat A(\cdot)$ and $\widehat B(\cdot)$ are deterministic, condition \eqref{M} automatically holds with $M=1$, as long as $\Psi(t,\tau)$ is invertible. We will present an example in which $\widehat A(\cdot)$ is random and \eqref{M} holds. We also point out that condition \eqref{DD>d} implies $m\ge n$. Further, the condition that $\Psi(t,\tau)^{-1}$ exists implies $m>n$. In fact, if \eqref{DD>d} holds and $m=n$, then $\widehat B(\cdot)=0$, which implies $\Psi(t,\tau)=0$. We will say a little more about this shortly.

\medskip

\it Proof.
\rm By Theorem \ref{Theorem 3.4}, all we need to do is prove the $L^p$-exact controllability of system $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ on $[0,T]$ by $\widehat{\cal U}^p[0,T]\times L^p_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^n))$. Now, let $T-\delta\le\tau<\bar\tau\le T$, and $\xi\in L^p_{{\cal F}_\tau}(\Omega;\mathbb{R}^n)$, $\bar\xi\in L^p_{{\cal F}_{\bar\tau}}(\Omega;\mathbb{R}^n)$. For any given $v(\cdot),Z(\cdot)$, the solution to $[\widehat A(\cdot),0;(\widehat B(\cdot),\widehat D(\cdot)),(0,I)]$ on $[\tau,\bar\tau]$ with $X(\tau)=\xi$ is given by
\begin{equation}\label{X2(t)}X(t)=\widehat\Phi(t)\widehat\Phi(\tau)^{-1}\xi+\widehat\Phi(t)\int_\tau^t\widehat\Phi(s)^{-1}\Big[\widehat B(s)v(s)+\widehat D(s)Z(s)\Big]ds+\widehat\Phi(t)\int_\tau^t\widehat\Phi(s)^{-1}Z(s)dW(s).\end{equation}
Therefore, obtaining $X(\bar\tau)=\bar\xi$ is equivalent to having the following:
\begin{equation}\label{X(T)=xi}\widehat\Phi(\bar\tau)^{-1}\bar\xi-\widehat\Phi(\tau)^{-1}\xi=\int_\tau^{\bar\tau}\widehat\Phi(s)^{-1}\Big[\widehat B(s)v(s)+\widehat D(s)Z(s)\Big]ds+\int_\tau^{\bar\tau}\widehat\Phi(s)^{-1}Z(s)dW(s).\end{equation}
This implies
\begin{equation}\label{3.22}\mathbb{E}_\tau[\widehat\Phi(\bar\tau)^{-1}\bar\xi]-\widehat\Phi(\tau)^{-1}\xi-\int_\tau^{\bar\tau}\mathbb{E}_\tau\Big[\widehat\Phi(s)^{-1}\widehat D(s)Z(s)\Big]ds=\int_\tau^{\bar\tau}\mathbb{E}_\tau\Big[\widehat\Phi(s)^{-1}\widehat B(s)v(s)\Big]ds.\end{equation}
We now take
$$\begin{array}{ll}
\displaystyle v(s)=\mathbb{E}_\tau\Big(\widehat\Phi(s)^{-1}\widehat B(s)\Big)^\top\Psi(\bar\tau,\tau)^{-1}\Big[\mathbb{E}_\tau[\widehat\Phi(\bar\tau)^{-1}\bar\xi]-\widehat\Phi(\tau)^{-1}\xi\\
\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\int_\tau^{\bar\tau}\mathbb{E}_\tau\Big(\widehat\Phi(t)^{-1}\widehat D(t)Z(t)\Big)dt\Big],\qquad s\in[\tau,\bar\tau],\end{array}$$
which is ${\cal F}_\tau$-measurable. Further, noting that $\widehat\Phi(\cdot)^{-1}$ is bounded and $\Psi(\bar\tau,\tau)^{-1}\in L^\infty_{{\cal F}_\tau}(\Omega;\mathbb{R}^{n\times n})$, one has
$$\mathbb{E}\Big(\int_\tau^{\bar\tau}|\widehat B(s)v(s)|ds\Big)^p\le K\mathbb{E}\Big\{\Big(\int_\tau^{\bar\tau}|\widehat B(s)|\,\mathbb{E}_\tau|\widehat B(s)|ds\Big)^p\Gamma\Big\},$$
where
$$\Gamma\equiv\mathbb{E}_\tau|\bar\xi|^p+|\xi|^p+\Big(\int_\tau^{\bar\tau}\mathbb{E}_\tau\big[\,|\widehat D(t)||Z(t)|\,\big]dt\Big)^p,$$
which is ${\cal F}_\tau$-measurable.
Since
$$\begin{array}{ll}
\displaystyle\Big(\int_\tau^{\bar\tau}|\widehat B(s)|\,\mathbb{E}_\tau|\widehat B(s)|ds\Big)^p\le\Big({1\over2}\int_\tau^{\bar\tau}|\widehat B(s)|^2ds+{1\over2}\int_\tau^{\bar\tau}\mathbb{E}_\tau|\widehat B(s)|^2ds\Big)^p\\
\displaystyle\qquad\qquad\qquad\qquad\qquad\quad\le K\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p+K\mathbb{E}_\tau\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p,\end{array}$$
we have
$$\begin{array}{ll}
\displaystyle\mathbb{E}\Big(\int_\tau^{\bar\tau}|\widehat B(s)v(s)|ds\Big)^p\le K\mathbb{E}\Big\{\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p\Gamma+\mathbb{E}_\tau\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p\Gamma\Big\}\\
\displaystyle=K\mathbb{E}\Big\{\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p\Gamma\Big\}+K\mathbb{E}\Big\{\mathbb{E}_\tau\Big[\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p\Gamma\Big]\Big\}\le K\mathbb{E}\Big\{\Big(\int_\tau^{\bar\tau}|\widehat B(s)|^2ds\Big)^p\Gamma\Big\}.\end{array}$$
Since $\widehat B(\cdot)\in L^\infty_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^{n\times m}))$ and $\widehat D(\cdot)\in L^\infty_{\mathbb{F}}(\Omega;L^2(0,T;\mathbb{R}^{n\times n}))$, one has
$$\begin{array}{ll}
\displaystyle\mathbb{E}\Big(\int_\tau^{\bar\tau}|\widehat B(s)v(s)|ds\Big)^p\le K\mathbb{E}\Big\{|\xi|^p+|\bar\xi|^p+\Big[\int_\tau^{\bar\tau}\mathbb{E}_\tau\Big(|\widehat D(t)||Z(t)|\Big)dt\Big]^p\Big\}\\
\displaystyle\le K\mathbb{E}\Big(|\xi|^p+|\bar\xi|^p\Big)+K\mathbb{E}\Big\{\mathbb{E}_\tau\Big[\Big(\int_\tau^{\bar\tau}|\widehat D(t)|^2dt\Big)^{1\over2}\Big(\int_\tau^{\bar\tau}|Z(t)|^2dt\Big)^{1\over2}\Big]\Big\}^p\\
\displaystyle\le K\mathbb{E}\Big(|\xi|^p+|\bar\xi|^p\Big)+K\mathbb{E}\Big[\mathbb{E}_\tau\Big(\int_\tau^{\bar\tau}|Z(t)|^2dt\Big)^{1\over2}\Big]^p\le K\mathbb{E}\Big[|\xi|^p+|\bar\xi|^p+\Big(\int_\tau^{\bar\tau}|Z(t)|^2dt\Big)^{p\over2}\Big].\end{array}$$
Thus, $Z(\cdot)\in L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$ implies $\widehat B(\cdot)v(\cdot)\in L^p_{\mathbb{F}}(\Omega;L^1(\tau,\bar\tau;\mathbb{R}^n))$. For such a $v(\cdot)$, \eqref{3.22} holds. Moreover, \eqref{X(T)=xi} becomes
\begin{equation}\label{Fredholm}0=\eta(\bar\tau,\tau)+\int_\tau^{\bar\tau}\Theta(\bar\tau,\tau)\Psi(\bar\tau,\tau)^{-1}\mathbb{E}_\tau\Big(\widetilde D(s)\widehat Z(s)\Big)ds-\int_\tau^{\bar\tau}\widetilde D(s)\widehat Z(s)ds-\int_\tau^{\bar\tau}\widehat Z(s)dW(s),\end{equation}
where
$$\widetilde D(s)=\widehat\Phi(s)^{-1}\widehat D(s)\widehat\Phi(s),\qquad\widehat Z(s)=\widehat\Phi(s)^{-1}Z(s),$$
and
$$\eta(\bar\tau,\tau)\equiv\widehat\Phi(\bar\tau)^{-1}\bar\xi-\widehat\Phi(\tau)^{-1}\xi-\Theta(\bar\tau,\tau)\Psi(\bar\tau,\tau)^{-1}\Big(\mathbb{E}_\tau[\widehat\Phi(\bar\tau)^{-1}\bar\xi]-\widehat\Phi(\tau)^{-1}\xi\Big),$$
which is ${\cal F}_{\bar\tau}$-measurable with
$$\mathbb{E}|\eta(\bar\tau,\tau)|^p\le K\mathbb{E}\Big(|\xi|^p+|\bar\xi|^p\Big),\qquad\mathbb{E}_\tau\eta(\bar\tau,\tau)=0.$$
To summarize the above: for given $0\le\tau<\bar\tau\le T$ and $\xi\in L^p_{{\cal F}_\tau}(\Omega;\mathbb{R}^n)$, $\bar\xi\in L^p_{{\cal F}_{\bar\tau}}(\Omega;\mathbb{R}^n)$, one can find a pair $(v(\cdot),Z(\cdot))$ so that $X(\tau)=\xi$ and $X(\bar\tau)=\bar\xi$ if and only if there exists an $\mathbb{F}$-adapted process $\widehat Z(\cdot)$ such that \eqref{Fredholm} holds.

\medskip

By Lemma \ref{Yu_Sec3_Lemma_MF_L}, there exists a uniform $\varepsilon'>0$ such that, as long as $(T-\delta)\vee(T-\varepsilon')\le\tau<\bar\tau\le T$, the following equation
\begin{equation}\label{BSDE*}\begin{array}{ll}
\displaystyle Y(t)=\eta(\bar\tau,\tau)+\int_\tau^{\bar\tau}\Theta(\bar\tau,\tau)\Psi(\bar\tau,\tau)^{-1}\mathbb{E}_\tau\Big(\widetilde D(s)\widehat Z(s)\Big)ds-\int_t^{\bar\tau}\widetilde D(s)\widehat Z(s)ds\\
\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\int_t^{\bar\tau}\widehat Z(s)dW(s),\qquad t\in[\tau,\bar\tau],\end{array}\end{equation}
admits a unique adapted solution $(Y(\cdot),\widehat Z(\cdot))\in L^p_{\mathbb{F}}(\Omega;C([\tau,\bar\tau];\mathbb{R}^n))\times L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$. We notice that
$$Y(\tau)=\mathbb{E}_\tau[Y(\tau)]=0,$$
which proves the existence of a $\widehat Z(\cdot)\in L^p_{\mathbb{F}}(\Omega;L^2(\tau,\bar\tau;\mathbb{R}^n))$ solving \eqref{Fredholm}.

\medskip

Now, we can complete our proof as follows. Arbitrarily select a $T_0$ such that $(T-\delta)\vee(T-\varepsilon')\le T_0<T$.
For any $x\inftyn\muathbb{R}^n$ and $\begin{equation}taar\xi\inftyn L^p_{{\cal F}_T}(\Omega;\muathbb{R}^n)$, let $$\xi=X(T_0;x,0,0)\inftyn L^p_{{\cal F}_{T_0}}(\Omega;\muathbb{R}^n).$$ Then by what we have proved, there exists a pair $(v(\cdot),Z(\cdot))\inftyn\widehat{\cal U}^p[T_0,T]\tauimes L^p_\muathbb{F}(\Omega;L^2(T_0,T;\muathbb{R}^n))$ such that $$X(T_0)=\xi,\quadq X(T)=\begin{equation}taar\xi.$$ We obtain the $L^p$-exact controllability on $[0,T]$ by ${\cal U}^p[0,T]$. \sigmagned {$\sqr69$} \mus The following simple example is to show that condition \frepsilonqref{M} is possible for random coefficient case. \begin{equation}taex{} \rm Let $$\widehat A(t)=\begin{equation}taegin{pmatrix}0&a(t)\\ 0&0\frepsilonnd{pmatrix},\quadq\widehat B(t)=\begin{equation}taegin{pmatrix}0\\ 1\frepsilonnd{pmatrix},$$ where $a:[0,T]\tauimes\Omega\tauo\muathbb{R}$ satisfies the following conditions: $t\muapsto a(t)$ is $C^2$ with $$a(\cdot),a'(\cdot),a''(\cdot)\inftyn L^\inftynfty_\muathbb{F}(0,T;\muathbb{R}),\quadq a(t)\gammaes1,\quadq t\inftyn[0,T],~\alphas$$ For example, we may choose $$a(t)=1+\inftynt_0^t\inftynt_0^s{W(\tau)^2\omegaver1+W(\tau)^2}d\tau ds,\quadq t\inftyn[0,T].$$ Then $$\widehat\Phi(t)=\begin{equation}taegin{pmatrix}1&\inftynt_0^ta(s)ds\\ 0&1\frepsilonnd{pmatrix},\quadq t\inftyn[0,T].$$ Thus, $$\widehat\Phi(t)^{-1}\widehat B(t)=\begin{equation}taegin{pmatrix}1&-\inftynt_0^ta(s)ds\\ 0&1\frepsilonnd{pmatrix}\begin{equation}taegin{pmatrix}0\\ 1\frepsilonnd{pmatrix}=\begin{equation}taegin{pmatrix}-\inftynt_0^ta(s)ds\\ 1\frepsilonnd{pmatrix}\frepsilonquiv\begin{equation}taegin{pmatrix}\alpha(t)\\ 1\frepsilonnd{pmatrix}.$$ Denoting $\begin{equation}taar\alpha(s)=\muathbb{E}_\tau\alpha(s)$, we have $$\begin{equation}taa{ll} \nuoalign{ }\deltaisplaystyle\Psi(t,\tau)=\inftynt_\tau^t\muathbb{E}_\tau\Big (\widehat\Phi(s)^{-1}\widehat B(s)\Big )\muathbb{E}_\tau\Big (\widehat\Phi(s)^{-1}\widehat B(s)\Big )^\tauop ds 
=\int_\tau^t\begin{pmatrix}\bar\alpha(s)\\ 1\end{pmatrix}\begin{pmatrix}\bar\alpha(s)&1\end{pmatrix}ds\\
\displaystyle\qquad\quad=\begin{pmatrix}\int_\tau^t\bar\alpha(s)^2ds&\int_\tau^t\bar\alpha(s)ds\\ \int_\tau^t\bar\alpha(s)ds&t-\tau\end{pmatrix}.\end{array}$$
A direct computation shows that (denoting $\bar a(\tau)=\mathbb{E}_\tau a(\tau)$)
$$F(t)\equiv\det\Psi(\tau,t)=(t-\tau)\int_\tau^t\bar\alpha(s)^2ds-\Big(\int_\tau^t\bar\alpha(s)ds\Big)^2=\Big[2\bar a(\tau)^2+R(t)\Big](t-\tau)^4,$$
with
$$\lim_{t\to\tau}R(t)=0.$$
Hence, for $t-\tau>0$ small, $\Psi(\tau,t)$ is invertible, and
$$\Psi(t,\tau)^{-1}=\frac{1}{F(t)}\begin{pmatrix}t-\tau&-\int_\tau^t\bar\alpha(s)ds\\ -\int_\tau^t\bar\alpha(s)ds&\int_\tau^t\bar\alpha(s)^2ds\end{pmatrix}.$$
Also,
$$\begin{array}{ll}
\displaystyle\Theta(t,\tau)=\int_\tau^t\widehat\Phi(s)^{-1}\widehat B(s)\mathbb{E}_\tau\Big(\widehat\Phi(s)^{-1}\widehat B(s)\Big)^\top ds=\int_\tau^t\begin{pmatrix}\alpha(s)\\ 1\end{pmatrix}\begin{pmatrix}\bar\alpha(s)&1\end{pmatrix}ds\\
\displaystyle\qquad\quad=\begin{pmatrix}\int_\tau^t\alpha(s)\bar\alpha(s)ds&\int_\tau^t\alpha(s)ds\\ \int_\tau^t\bar\alpha(s)ds&t-\tau\end{pmatrix}.\end{array}$$
Then
$$\begin{array}{ll}
\displaystyle\Theta(t,\tau)\Psi(t,\tau)^{-1}=\frac{1}{F(t)}\begin{pmatrix}\int_\tau^t\alpha(s)\bar\alpha(s)ds&\int_\tau^t\alpha(s)ds\\ \int_\tau^t\bar\alpha(s)ds&t-\tau\end{pmatrix}\begin{pmatrix}t-\tau&-\int_\tau^t\bar\alpha(s)ds\\ -\int_\tau^t\bar\alpha(s)ds&\int_\tau^t\bar\alpha(s)^2ds\end{pmatrix}\\
\displaystyle\qquad\qquad\qquad\quad\equiv\frac{1}{F(t)}\begin{pmatrix}\Lambda_1(t)&\Lambda_2(t)\\ 0&F(t)\end{pmatrix},\end{array}$$
where
$$\begin{array}{ll}
\displaystyle\Lambda_1(t)=(t-\tau)\int_\tau^t\alpha(s)\bar\alpha(s)ds-\Big(\int_\tau^t\alpha(s)ds\Big)\Big(\int_\tau^t\bar\alpha(s)ds\Big),\\
\displaystyle\Lambda_2(t)=\Big(\int_\tau^t\alpha(s)ds\Big)\Big(\int_\tau^t\bar\alpha(s)^2ds\Big)-\Big(\int_\tau^t\bar\alpha(s)ds\Big)\Big(\int_\tau^t\alpha(s)\bar\alpha(s)ds\Big).\end{array}$$
Some direct (lengthy) calculations show that
$$\begin{array}{ll}
\displaystyle\Lambda_1(t)=\Big[2a(\tau)+\gamma_1(t)\Big](t-\tau)^4,\qquad\lim_{t\downarrow\tau}\gamma_1(t)=0,\\
\displaystyle\Lambda_2(t)=\gamma_2(t)(t-\tau)^4,\qquad\lim_{t\downarrow\tau}\gamma_2(t)=0.\end{array}$$
Hence,
$$\begin{array}{ll}
\displaystyle\Theta(t,\tau)\Psi(t,\tau)^{-1}=\frac{1}{F(t)}\begin{pmatrix}\Lambda_1(t)&\Lambda_2(t)\\ 0&F(t)\end{pmatrix}\\
\displaystyle=\frac{1}{\big[2\bar a(\tau)^2+R(t)\big](t-\tau)^4}\begin{pmatrix}\big[2a(\tau)+\gamma_1(t)\big](t-\tau)^4&\gamma_2(t)(t-\tau)^4\\ 0&\big[2\bar a(\tau)^2+R(t)\big](t-\tau)^4\end{pmatrix}\\
\displaystyle=\frac{1}{2\bar a(\tau)^2+R(t)}\begin{pmatrix}2a(\tau)+\gamma_1(t)&\gamma_2(t)\\ 0&2\bar a(\tau)^2+R(t)\end{pmatrix}.\end{array}$$
As a result, we obtain
$$\Big|\Theta(t,\tau)\Psi(t,\tau)^{-1}\Big|\leqslant K,\qquad\forall t>\tau\hbox{ with $t-\tau$ small}.$$
This shows that \eqref{M} holds.
\end{example}

As we mentioned earlier, condition \eqref{DD>d} implies that $m\geqslant n$, and if $m=n$ and \eqref{DD>d} holds, then $\widehat B(\cdot)=0$. Hence, in order for $\Psi$ to be invertible, one must have $m>n$. The following result concerns a case where $D(\cdot)$ is surjective and $m=n$.

\begin{proposition}\label{m=n} Let {\rm(H1)}, \eqref{DD>d} and \eqref{Sec2_ABD_Epsilon} hold, and let $m=n$. Then, for any $p>1$, the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is not $L^p$-exactly controllable on any $[0,T]$, $T>0$, by ${\cal U}^p[0,T]$.
\end{proposition}

{\it Proof.} In the current case, $D(\cdot)^{-1}$ is bounded. Suppose that for any $x\in\mathbb{R}^n$ and $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ one can find a $u(\cdot)\in{\cal U}^p[0,T]$ such that $X(0)=x$ and $X(T)=\xi$. Letting
$$Z(t)=C(t)X(t)+D(t)u(t),\qquad t\in[0,T],$$
we obtain
$$u(t)=D(t)^{-1}\Big[Z(t)-C(t)X(t)\Big],\qquad t\in[0,T].$$
Hence, $(X(\cdot),Z(\cdot))$ is an adapted solution to the following BSDE:
$$\left\{\begin{array}{ll}
\displaystyle dX(t)=\Big[\Big(A(t)-B(t)D(t)^{-1}C(t)\Big)X(t)+B(t)D(t)^{-1}Z(t)\Big]dt+Z(t)dW(t),\quad t\in[0,T],\\
\displaystyle X(T)=\xi.\end{array}\right.$$
But then $X(0)$ cannot be arbitrarily specified. Hence, $L^p$-exact controllability is not possible for the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$. \hfill$\square$

\medskip
In the above two subsections we have discussed the two extreme cases: either $D(\cdot)=0$, or $D(\cdot)$ has full rank (for the case $d=1$). The cases in between remain open. Some partial results have been obtained, but they are not yet mature enough to be reported; we hope to present them in a forthcoming paper.

\section{Duality and Observability Inequality}\label{S_Observability}

As is well known, for deterministic linear ODE systems, controllability of the original system is equivalent to observability of the dual equation. We would like to see what such a result looks like for our FSDE system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$. To this end, let us first look at an abstract result whose proof is standard; for the reader's convenience, we present a proof.
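For the deterministic finite-dimensional analogue recalled above, the equivalence between controllability and dual observability can be checked numerically. The following is a minimal sketch (not part of the paper), assuming constant coefficients: the pair $(A,B)$ is controllable iff the Kalman matrix $[B,AB,\dots,A^{n-1}B]$ has full rank, which holds iff the observability matrix of the dual pair $(A^\top,B^\top)$ has full rank. The function names \texttt{kalman\_rank} and \texttt{observability\_rank} are ours, introduced only for illustration.

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

def observability_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

# Controllability of (A, B) coincides with observability of the dual (A^T, B^T).
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
assert kalman_rank(A, B) == observability_rank(A.T, B.T) == 2
```

The same identity of ranks holds for uncontrollable pairs, e.g. a diagonal $A$ with $B$ supported in a single eigendirection.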
\begin{proposition}\label{onto} Let $\mathbb{X}$ and $\mathbb{Y}$ be Banach spaces, let $\mathbb{K}:\mathbb{X}\to\mathbb{Y}$ be a bounded linear operator, and let $\mathbb{K}^*:\mathbb{Y}^*\to\mathbb{X}^*$ be the adjoint operator of $\mathbb{K}$. Then $\mathbb{K}$ is surjective if and only if there exists a $\delta>0$ such that
\begin{equation}\label{>d}|\mathbb{K}^*y^*|_{\mathbb{X}^*}\geqslant\delta|y^*|_{\mathbb{Y}^*},\qquad\forall y^*\in\mathbb{Y}^*.\end{equation}
Further, if $\mathbb{X}$ and $\mathbb{Y}$ are reflexive and the map $x^*\mapsto|x^*|_{\mathbb{X}^*}^2$ from $\mathbb{X}^*$ to $\mathbb{R}$ is Fr\'echet differentiable, then \eqref{>d} is also equivalent to the following: for any $y\in\mathbb{Y}$, the functional
\begin{equation}\label{J(y)}J(y^*;y)=\frac12|\mathbb{K}^*y^*|_{\mathbb{X}^*}^2+\langle y,y^*\rangle,\qquad y^*\in\mathbb{Y}^*,\end{equation}
admits a minimum over $\mathbb{Y}^*$. In addition, if the norm of $\mathbb{X}^*$ is strictly convex, then for any $y\in\mathbb{Y}$, the optimal solution of \eqref{J(y)} is necessarily unique.
\end{proposition}

{\it Proof.} Suppose ${\cal R}(\mathbb{K})=\mathbb{Y}$, i.e., $\mathbb{K}$ is a surjection. Then ${\cal R}(\mathbb{K})$ is closed. By the Banach closed range theorem (\cite{Yosida 1980}), ${\cal R}(\mathbb{K}^*)$ is closed. Moreover,
$${\cal N}(\mathbb{K}^*)^\perp\equiv\big\{y\in\mathbb{Y}\bigm|\langle y^*,y\rangle=0,\quad\forall y^*\in{\cal N}(\mathbb{K}^*)\big\}={\cal R}(\mathbb{K})=\mathbb{Y}.$$
This implies that ${\cal N}(\mathbb{K}^*)=\{0\}$. Thus, $\mathbb{K}^*$ is injective with ${\cal R}(\mathbb{K}^*)$ closed. Therefore, $\mathbb{K}^*:\mathbb{Y}^*\to{\cal R}(\mathbb{K}^*)$ is one-to-one and onto, and hence $(\mathbb{K}^*)^{-1}:{\cal R}(\mathbb{K}^*)\to\mathbb{Y}^*$ is bounded.
Consequently, for any $x^*=\mathbb{K}^*y^*\in{\cal R}(\mathbb{K}^*)$,
$$|y^*|_{\mathbb{Y}^*}=|(\mathbb{K}^*)^{-1}\mathbb{K}^*y^*|_{\mathbb{Y}^*}\leqslant\|(\mathbb{K}^*)^{-1}\|\,|\mathbb{K}^*y^*|_{\mathbb{X}^*}\equiv\frac{1}{\delta}|\mathbb{K}^*y^*|_{\mathbb{X}^*},$$
which leads to \eqref{>d}.

\medskip
Conversely, suppose \eqref{>d} holds. Then ${\cal R}(\mathbb{K}^*)$ is closed and $\mathbb{K}^*$ is injective. Thus, by the Banach closed range theorem, ${\cal R}(\mathbb{K})$ is also closed and
$${\cal R}(\mathbb{K})={\cal N}(\mathbb{K}^*)^\perp=\{0\}^\perp=\mathbb{Y},$$
proving that $\mathbb{K}$ is surjective.

\medskip
Now, for any $y\in\mathbb{Y}$, consider the functional $y^*\mapsto J(y^*;y)$. Clearly, under condition \eqref{>d}, $y^*\mapsto J(y^*;y)$ is coercive and weakly lower semicontinuous. Hence, if $\{y^*_k\}_{k\geqslant1}$ is a minimizing sequence, it is bounded. By the reflexivity of $\mathbb{Y}^*$, we may assume that $y^*_k$ converges weakly to some $\bar y^*\in\mathbb{Y}^*$. Then, by the weak lower semicontinuity of the functional $J(\cdot\,;y)$, $\bar y^*$ must be a minimum.

\medskip
Conversely, for any $y\in\mathbb{Y}$, let $\bar y^*\in\mathbb{Y}^*$ be a minimum of $y^*\mapsto J(y^*;y)$. Denote the Fr\'echet derivative of $x^*\mapsto\frac12|x^*|_{\mathbb{X}^*}^2$ by $\Gamma(x^*)$, i.e.,
$$\lim_{\delta\to0}\frac{|x^*+\delta\xi^*|_{\mathbb{X}^*}^2-|x^*|^2_{\mathbb{X}^*}}{2\delta}\equiv\frac{d}{d\delta}\frac12|x^*+\delta\xi^*|^2_{\mathbb{X}^*}\Big|_{\delta=0}=\langle\Gamma(x^*),\xi^*\rangle,\qquad\forall\xi^*\in\mathbb{X}^*,$$
where $\frac{d}{d\delta}\big|_{\delta=0}f(\delta)$ denotes the derivative of the function $f$ at $\delta=0$.
Then, by the optimality of $\bar y^*$, we have
\begin{equation}\label{diff}\begin{array}{ll}
\displaystyle0\leqslant\lim_{\delta\to0}\frac{J(\bar y^*+\delta y^*;y)-J(\bar y^*;y)}{\delta}=\lim_{\delta\to0}\frac{|\mathbb{K}^*(\bar y^*+\delta y^*)|_{\mathbb{X}^*}^2-|\mathbb{K}^*\bar y^*|_{\mathbb{X}^*}^2}{2\delta}+\langle y,y^*\rangle\\
\displaystyle\quad=\langle\Gamma(\mathbb{K}^*\bar y^*),\mathbb{K}^*y^*\rangle+\langle y,y^*\rangle=\langle\mathbb{K}\Gamma(\mathbb{K}^*\bar y^*)+y,y^*\rangle,\qquad\forall y^*\in\mathbb{Y}^*.\end{array}\end{equation}
Hence (replacing $y^*$ by $-y^*$ in \eqref{diff}),
$$\mathbb{K}\Gamma(\mathbb{K}^*\bar y^*)+y=0.$$
Since $y\in\mathbb{Y}$ is arbitrary, we obtain ${\cal R}(\mathbb{K})=\mathbb{Y}$. Then, by what we have proved, \eqref{>d} holds. Finally, if we further assume that the norm of $\mathbb{X}^*$ is strictly convex, then the optimal solution of $y^*\mapsto J(y^*;y)$ must be unique. \hfill$\square$

\medskip
For any given $1<p<\rho\leqslant\infty$ and $2<\sigma\leqslant\infty$, we denote $q\equiv\frac{p}{p-1}$, and
\begin{equation}\label{pqmn}\begin{array}{ll}
\displaystyle\bar p\equiv\left\{\begin{array}{ll}
\displaystyle\frac{p\rho}{\rho-p},&\rho<\infty,\\
\displaystyle p,&\rho=\infty,\end{array}\right.
\qquad\bar q\equiv\left\{\begin{array}{ll}
\displaystyle\frac{p\rho}{p\rho-\rho+p},&\rho<\infty,\\
\displaystyle\frac{p}{p-1},&\rho=\infty,\end{array}\right.\\
\displaystyle\mu\equiv\left\{\begin{array}{ll}
\displaystyle\frac{2\sigma}{\sigma-2},&\sigma<\infty,\\
\displaystyle2,&\sigma=\infty,\end{array}\right.
\qquad\nu\equiv\left\{\begin{array}{ll}
\displaystyle\frac{2\sigma}{\sigma+2},&\sigma<\infty,\\
\displaystyle2,&\sigma=\infty.\end{array}\right.\end{array}\end{equation}
Clearly, with the above notation, we have
\begin{equation}\label{=1}\left\{\begin{array}{ll}
\displaystyle\frac1p+\frac1q=1,\qquad\frac1{\bar p}+\frac1{\bar q}=1,\qquad\frac1\mu+\frac1\nu=1,\\ [4mm]
\displaystyle1<\nu\leqslant2\leqslant\mu<\infty,\qquad1<p\leqslant\bar p<\infty,\qquad1<\bar q\leqslant\frac{p}{p-1},\\ [4mm]
\displaystyle\mathbb{U}^{p,\rho,\sigma}[0,T]\equiv L^{\bar p}_\mathbb{F}(\Omega;L^\mu(0,T;\mathbb{R}^m)),\qquad\mathbb{U}^{p,\rho,\sigma}[0,T]^*\equiv L^{\bar q}_\mathbb{F}(\Omega;L^\nu(0,T;\mathbb{R}^m)),\\ [3mm]
\displaystyle\mathbb{U}_r^{p,\rho,\sigma}[0,T]\equiv L^\mu_\mathbb{F}(0,T;L^{\bar p}(\Omega;\mathbb{R}^m)),\qquad\mathbb{U}_r^{p,\rho,\sigma}[0,T]^*\equiv L^\nu_\mathbb{F}(0,T;L^{\bar q}(\Omega;\mathbb{R}^m)).\end{array}\right.\end{equation}
In the rest of this paper we will keep the above notation. We now present the first main result of this section.

\begin{theorem}\label{equivalence} Let {\rm(H1)--(H2)} (respectively, {\rm(H1)} and {\rm(H2)$'$}) hold. Then the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by $\mathbb{U}^{p,\rho,\sigma}[0,T]$ (respectively, $\mathbb{U}^{p,\rho,\sigma}_r[0,T]$) if and only if there exists a $\delta>0$ such that the following, called {\it an observability inequality}, holds:
\begin{equation}\label{observability inequality}
\Big\|B(\cdot)^\top Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top Z_k(\cdot)\Big\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]^*}\geqslant\delta\|\eta\|_{L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)},\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)
\end{equation}
(respectively,
\begin{equation}\label{weak observability inequality}
\Big\|B(\cdot)^\top Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top Z_k(\cdot)\Big\|_{\mathbb{U}_r^{p,\rho,\sigma}[0,T]^*}\geqslant\delta\|\eta\|_{L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)},\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)\big),
\end{equation}
where $(Y(\cdot),Z(\cdot))$ (with $Z(\cdot)\equiv(Z_1(\cdot),\cdots,Z_d(\cdot))$) is the unique adapted solution to the following BSDE:
\begin{equation}\label{Sec3_Dual_Sys}\left\{\begin{array}{ll}
\displaystyle dY(t)=-\Big[A(t)^\top Y(t)+\sum_{k=1}^dC_k(t)^\top Z_k(t)\Big]dt+\sum_{k=1}^dZ_k(t)dW_k(t),\qquad t\in[0,T],\\
\displaystyle Y(T)=\eta.\end{array}\right.\end{equation}
\end{theorem}

{\it Proof.}
We only prove the equivalence between the system's $L^p$-exact controllability on $[0,T]$ by $\mathbb{U}^{p,\rho,\sigma}[0,T]$ and the validity of the observability inequality \eqref{observability inequality}; the other part can be proved by a similar procedure. For any $(x,u(\cdot))\in\mathbb{R}^n\times\mathbb{U}^{p,\rho,\sigma}[0,T]$, with $p>1$, let $X(\cdot)\equiv X(\cdot\,;x,u(\cdot))$ be the unique solution to \eqref{FSDE1}. Then
$$X(\cdot\,;x,u(\cdot))=X(\cdot\,;x,0)+X(\cdot\,;0,u(\cdot)).$$
Define
$$\mathbb{K}u(\cdot)=X(T;0,u(\cdot)),\qquad\forall u(\cdot)\in\mathbb{U}^{p,\rho,\sigma}[0,T].$$
Then
$$\|\mathbb{K}u(\cdot)\|_{L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)}\leqslant K\|u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]},\qquad u(\cdot)\in\mathbb{U}^{p,\rho,\sigma}[0,T].$$
Thus, $\mathbb{K}:\mathbb{U}^{p,\rho,\sigma}[0,T]\to L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ is a bounded linear operator. Now, the system $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by $\mathbb{U}^{p,\rho,\sigma}[0,T]$ if and only if for any $(x,\xi)\in\mathbb{R}^n\times L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$,
$$\xi-X(T;x,0)=\mathbb{K}u(\cdot)$$
for some $u(\cdot)\in\mathbb{U}^{p,\rho,\sigma}[0,T]$, which is equivalent to
$${\cal R}(\mathbb{K})=L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n),$$
i.e., $\mathbb{K}:\mathbb{U}^{p,\rho,\sigma}[0,T]\to L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ is surjective. Hence, by Proposition \ref{onto}, this is equivalent to the following: for some $\delta>0$,
\begin{equation}\label{|K*eta|}\|\mathbb{K}^*\eta\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]^*}\geqslant\delta\|\eta\|_{L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)},\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n).\end{equation}
We now identify $\mathbb{K}^*:L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)\to\mathbb{U}^{p,\rho,\sigma}[0,T]^*$.
To this end, we consider BSDE \eqref{Sec3_Dual_Sys} with $\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$. By a standard result on BSDEs (\cite{El Karoui-Peng-Quenez 1997}), under (H1), \eqref{Sec3_Dual_Sys} admits a unique adapted solution
$$(Y(\cdot),Z(\cdot))\equiv\big(Y(\cdot\,;\eta),Z(\cdot\,;\eta)\big)\in L^q_\mathbb{F}(\Omega;C([0,T];\mathbb{R}^n))\times L^q_\mathbb{F}(\Omega;L^2(0,T;\mathbb{R}^{n\times d})),$$
where we denote $Z(\cdot)\equiv(Z_1(\cdot),Z_2(\cdot),\dots,Z_d(\cdot))$, and the following estimate holds:
\begin{equation}\mathbb{E}\Big[\sup_{t\in[0,T]}|Y(t)|^{q}+\Big(\int_0^T|Z(t)|^2dt\Big)^{\frac q2}\Big]\leqslant K\,\mathbb{E}|\eta|^{q}.\end{equation}
Applying It\^o's formula to $\langle X(\cdot\,;0,u(\cdot)),Y(\cdot)\rangle$ on the interval $[0,T]$, we obtain the duality relation
\begin{equation}\label{Sec3_Duality}\langle\mathbb{K}u(\cdot),\eta\rangle=\mathbb{E}\langle X(T),\eta\rangle=\mathbb{E}\int_0^T\langle u(t),B(t)^\top Y(t)+\sum_{k=1}^dD_k(t)^\top Z_k(t)\rangle dt=\langle u(\cdot),\mathbb{K}^*\eta\rangle.\end{equation}
Hence,
\begin{equation}\label{K*}(\mathbb{K}^*\eta)(t)\equiv B(t)^\top Y(t)+\sum_{k=1}^dD_k(t)^\top Z_k(t),\qquad t\in[0,T].\end{equation}
Combining \eqref{|K*eta|} with \eqref{K*}, we obtain \eqref{observability inequality}. \hfill$\square$

\medskip
Theorem \ref{equivalence} provides an approach to studying the controllability of stochastic linear systems by establishing an inequality for BSDEs. The following example illustrates this approach.

\begin{example}\label{Example 4.3} Let the state process and the Brownian motion be one-dimensional and the control process two-dimensional. Let
$$A(\cdot),B_1(\cdot),D_2(\cdot)\in L^\infty_\mathbb{F}(0,T;\mathbb{R}),\qquad|B_1(t)|,\ |D_2(t)|\geqslant\delta,\qquad t\in[0,T],$$
for some $\delta>0$. Consider the following system:
\begin{equation}\label{Sec3_Example_Orig_Sys}\left\{\begin{array}{ll}
\displaystyle dX(t)=\bigg[A(t)X(t)+\begin{pmatrix}B_1(t)&0\end{pmatrix}\begin{pmatrix}u_1(t)\\ u_2(t)\end{pmatrix}\bigg]dt+\begin{pmatrix}0&D_2(t)\end{pmatrix}\begin{pmatrix}u_1(t)\\ u_2(t)\end{pmatrix}dW(t),\quad t\geqslant0,\\
\displaystyle X(0)=x,\end{array}\right.\end{equation}
whose adjoint system is given by
\begin{equation}\label{Sec3_Example_Dual_Sys}\left\{\begin{array}{ll}
\displaystyle dY(t)=-A(t)Y(t)dt+Z(t)dW(t),\qquad t\in[0,T],\\
\displaystyle Y(T)=\eta,\end{array}\right.\end{equation}
where $\eta\in L^2_{{\cal F}_T}(\Omega;\mathbb{R})$. A direct calculation leads to
\begin{equation}\label{Sec3_Example_Eq_0}\begin{array}{ll}
\displaystyle\mathbb{E}\int_0^T\bigg|\begin{pmatrix}B_1(t)\\ 0\end{pmatrix}Y(t)+\begin{pmatrix}0\\ D_2(t)\end{pmatrix}Z(t)\bigg|^2dt=\mathbb{E}\int_0^T\Big[|B_1(t)|^2|Y(t)|^2+|D_2(t)|^2|Z(t)|^2\Big]dt\\ [3mm]
\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\geqslant\delta^2\,\mathbb{E}\int_0^T\Big[|Y(t)|^2+|Z(t)|^2\Big]dt.\end{array}\end{equation}
Let $\beta\in L^\infty_\mathbb{F}(0,T;\mathbb{R})$ be chosen later.
Applying It\^o's formula to $|Y(\cdot)|^2e^{\int_0^\cdot\beta(s)ds}$ on the interval $[0,T]$, we deduce that
\begin{equation}\label{Sec3_Example_Eq_1}\mathbb{E}\Big[|\eta|^2e^{\int_0^T\beta(s)ds}\Big]-|Y(0)|^2=\mathbb{E}\int_0^T\Big[\big(\beta(t)-2A(t)\big)|Y(t)|^2+|Z(t)|^2\Big]e^{\int_0^t\beta(s)ds}dt.\end{equation}
Selecting $\beta(\cdot)=2A(\cdot)+1$ in \eqref{Sec3_Example_Eq_1} leads to
\begin{equation}\label{Sec3_Example_Eq_2}\mathbb{E}\int_0^T\Big[|Y(t)|^2+|Z(t)|^2\Big]e^{\int_0^t(2A(s)+1)ds}dt=\mathbb{E}\Big[|\eta|^2e^{\int_0^T(2A(s)+1)ds}\Big]-|Y(0)|^2.\end{equation}
On the other hand, selecting $\beta(\cdot)=2A(\cdot)$, we get
$$\mathbb{E}\Big[|\eta|^2e^{\int_0^T2A(s)ds}\Big]-|Y(0)|^2=\mathbb{E}\int_0^T|Z(t)|^2e^{\int_0^t2A(s)ds}dt\geqslant0.$$
Then
\begin{equation}\label{Sec3_Example_Eq_3}-|Y(0)|^2\geqslant-\mathbb{E}\Big[|\eta|^2e^{\int_0^T2A(s)ds}\Big].\end{equation}
Combining \eqref{Sec3_Example_Eq_2} with \eqref{Sec3_Example_Eq_3}, one has
$$\begin{array}{ll}
\displaystyle\mathbb{E}\int_0^T\Big[|Y(t)|^2+|Z(t)|^2\Big]e^{\int_0^t(2A(s)+1)ds}dt\geqslant\mathbb{E}\Big[|\eta|^2e^{\int_0^T(2A(s)+1)ds}\Big]-\mathbb{E}\Big[|\eta|^2e^{\int_0^T2A(s)ds}\Big]\\
\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad~=(e^T-1)\,\mathbb{E}\Big[|\eta|^2e^{\int_0^T2A(s)ds}\Big].\end{array}$$
By the boundedness of $A$ (without loss of generality, assume $|A(t)|\leqslant K$ a.s., a.e.), we obtain
$$e^{(2K+1)T}\mathbb{E}\int_0^T\Big[|Y(t)|^2+|Z(t)|^2\Big]dt\geqslant(e^T-1)e^{-2KT}\mathbb{E}|\eta|^2,$$
i.e.,
$$\mathbb{E}\int_0^T\Big[|Y(t)|^2+|Z(t)|^2\Big]dt\geqslant(e^T-1)e^{-(4K+1)T}\mathbb{E}|\eta|^2.$$
Combining this with \eqref{Sec3_Example_Eq_0}, we have proved that the observability inequality holds for BSDE \eqref{Sec3_Example_Dual_Sys}. By Theorem \ref{equivalence}, system \eqref{Sec3_Example_Orig_Sys} is $L^2$-exactly controllable on $[0,T]$ by $\mathbb{U}^{2,\infty,\infty}[0,T]\equiv L^2_\mathbb{F}(0,T;\mathbb{R}^2)$.
\end{example}

We now introduce the following definition, which justifies the name ``observability inequality'' used above.

\begin{definition}\label{observability} Let (H1) hold and let $(Y(\cdot),Z(\cdot))$ be the adapted solution to BSDE \eqref{Sec3_Dual_Sys} with $\eta\in L_{{\cal F}_T}^q(\Omega;\mathbb{R}^n)$.

\medskip
(i) For the pair $(B(\cdot),D(\cdot))$ with $B(\cdot),D_k(\cdot)\in L^1_\mathbb{F}(0,T;\mathbb{R}^{n\times m})$ ($k=1,2,\cdots,d$) and $D(\cdot)=(D_1(\cdot),\cdots,D_d(\cdot))$, the map
\begin{equation}\label{observer}\eta\mapsto\mathbb{K}^*\eta\equiv B(\cdot)^\top Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top Z_k(\cdot)\end{equation}
is called a $\mathbb{Y}[0,T]$-{\it observer} of BSDE \eqref{Sec3_Dual_Sys} if
$$\mathbb{K}^*\eta\in\mathbb{Y}[0,T],\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n),$$
where $\mathbb{Y}[0,T]$ is a subspace of $L^1_\mathbb{F}(0,T;\mathbb{R}^m)$. BSDE \eqref{Sec3_Dual_Sys}, together with the observer \eqref{observer}, is denoted by $[A(\cdot)^\top,C(\cdot)^\top;B(\cdot)^\top,D(\cdot)^\top]$.
\medskip
(ii) The system $[A(\cdot)^\top,C(\cdot)^\top;B(\cdot)^\top,D(\cdot)^\top]$ is said to be {\it $L^q$-exactly observable} by $\mathbb{Y}[0,T]$ observations if from the {\it observation} $\mathbb{K}^*\eta\in\mathbb{Y}[0,T]$, the terminal value $\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ of $Y(\cdot)$ at $T$ can be uniquely determined, i.e., the map $\mathbb{K}^*:L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)\to\mathbb{Y}[0,T]$ admits a bounded inverse.
\end{definition}

With the above definition, we clearly have the following result.

\begin{theorem}\label{observe} Let {\rm(H1)--(H2)} (respectively, {\rm(H1)} and {\rm(H2)$'$}) hold. Then $[A(\cdot),C(\cdot);B(\cdot),D(\cdot)]$ is $L^p$-exactly controllable on $[0,T]$ by $\mathbb{U}^{p,\rho,\sigma}[0,T]$ (respectively, $\mathbb{U}_r^{p,\rho,\sigma}[0,T]$) if and only if $[A(\cdot)^\top,C(\cdot)^\top;B(\cdot)^\top,D(\cdot)^\top]$ is $L^q$-exactly observable by $\mathbb{U}^{p,\rho,\sigma}[0,T]^*$ (respectively, $\mathbb{U}_r^{p,\rho,\sigma}[0,T]^*$) observations.
\end{theorem}

Next, for any $x\in\mathbb{R}^n$, let $X(\cdot\,;x,0)$ be the solution to the state equation \eqref{FSDE1} corresponding to the initial state $x$ and $u(\cdot)=0$.
Denote
$$\mathbb{K}_0x=X(T;x,0),\qquad\forall x\in\mathbb{R}^n.$$
Then, applying It\^o's formula to $\langle X(\cdot\,;x,0),Y(\cdot)\rangle$, with $(Y(\cdot),Z(\cdot))\equiv(Y(\cdot\,;\eta),Z(\cdot\,;\eta))$ being the adapted solution to \eqref{Sec3_Dual_Sys}, we have
$$\begin{array}{ll}
\displaystyle\mathbb{E}\langle X(T),\eta\rangle-\langle x,Y(0)\rangle=\mathbb{E}\int_0^T\Big(\langle A(t)X(t),Y(t)\rangle+\langle X(t),-A(t)^\top Y(t)-\sum_{k=1}^dC_k(t)^\top Z_k(t)\rangle\\
\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{k=1}^d\langle C_k(t)X(t),Z_k(t)\rangle\Big)dt=0.\end{array}$$
Hence,
$$\langle x,Y(0)\rangle=\mathbb{E}\langle\mathbb{K}_0x,\eta\rangle=\langle x,\mathbb{K}_0^*\eta\rangle,\qquad\forall x\in\mathbb{R}^n,$$
which leads to
\begin{equation}\label{K0*}\mathbb{K}_0^*\eta=Y(0;\eta),\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n).\end{equation}

\medskip
Now, for any $(x,\xi)\in\mathbb{R}^n\times L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, we introduce the functional $J(\cdot\,;x,\xi):L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)\to\mathbb{R}$ defined by
\begin{equation}\label{Sec3_Dual_Cost}J(\eta;x,\xi)=\frac12\|\mathbb{K}^*\eta\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]^*}^2+\langle x,\mathbb{K}_0^*\eta\rangle-\mathbb{E}\langle\xi,\eta\rangle,\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n),\end{equation}
where $\mathbb{K}^*$ and $\mathbb{K}_0^*$ are given by \eqref{K*} and \eqref{K0*}, respectively.
Equivalently,
\begin{equation}\label{Sec3_Dual_Cost*}
J(\eta;x,\xi)=\frac12\Big\|B(\cdot)^\top Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top Z_k(\cdot)\Big\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]^*}^2+\langle x,Y(0)\rangle-\mathbb{E}\langle\xi,\eta\rangle,\qquad\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n),
\end{equation}
with $(Y(\cdot),Z(\cdot))$ being the adapted solution to BSDE \eqref{Sec3_Dual_Sys}. One can pose the following optimization problem.

\medskip
{\bf Problem (O).} Minimize \eqref{Sec3_Dual_Cost*}, subject to BSDE \eqref{Sec3_Dual_Sys}, over $L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$.

\medskip
Note that the spaces $\mathbb{U}^{p,\rho,\sigma}[0,T]$ and $L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$ are reflexive, since their norms are uniformly convex. In order to apply Proposition \ref{onto}, we need to show that the map
$$\varphi(\cdot)\mapsto\frac12\|\varphi(\cdot)\|^2_{\mathbb{U}^{p,\rho,\sigma}[0,T]^*}\equiv\frac12\|\varphi(\cdot)\|^2_{L^{\bar q}_\mathbb{F}(\Omega;L^\nu(0,T;\mathbb{R}^m))}$$
is Fr\'echet differentiable. For notational simplicity, we shall write for a while
$$\|\cdot\|_{L^{\bar q}_\mathbb{F}(\Omega;L^\nu(0,T;\mathbb{R}^m))}\equiv\|\cdot\|_{L^{\bar q,\nu}_\mathbb{F}}\quad\mbox{and}\quad\|\cdot\|_{L^\nu(0,T;\mathbb{R}^m)}\equiv\|\cdot\|_{L^\nu}.$$
Now let $\varphi(\cdot),\psi(\cdot)\in L^{\bar q}_\mathbb{F}(\Omega;L^\nu(0,T;\mathbb{R}^m))$.
If $\|\varphi(\cdot)\|_{L^{\bar q,\nu}_\mathbb{F}}=0$, then
\begin{equation}\label{Sec3_Eq_Temp1.1}
\frac12\frac{d}{d\delta}\Big\{\|\varphi(\cdot)+\delta\psi(\cdot)\|_{L^{\bar q,\nu}_\mathbb{F}}^2\Big\}\Big|_{\delta=0}=0.
\end{equation}
If $\|\varphi(\cdot)\|_{L^{\bar q,\nu}_\mathbb{F}}\neq0$, then
\begin{equation}\label{Sec3_Eq_Temp1.2}
\begin{aligned}
\frac12\frac{d}{d\delta}\Big\{\|\varphi(\cdot)+\delta\psi(\cdot)\|_{L^{\bar q,\nu}_\mathbb{F}}^2\Big\}\Big|_{\delta=0}=\ &\frac12\frac{d}{d\delta}\bigg\{\bigg[\mathbb{E}\bigg(\int_0^T|\varphi(t)+\delta\psi(t)|^{\nu}dt\bigg)^{\frac{\bar q}{\nu}}\bigg]^{\frac{2}{\bar q}}\bigg\}\bigg|_{\delta=0}\\
=\ &\frac{1}{\bar q}\|\varphi(\cdot)\|_{L^{\bar q,\nu}_\mathbb{F}}^{2-\bar q}\frac{d}{d\delta}\Big\{\mathbb{E}[f(\omega,\delta)]\Big\}\Big|_{\delta=0},
\end{aligned}
\end{equation}
provided the derivative on the right-hand side exists, where
$$f(\omega,\delta)=\bigg(\int_0^T|\varphi(t,\omega)+\delta\psi(t,\omega)|^{\nu}dt\bigg)^{\frac{\bar q}{\nu}}.$$
For simplicity of notation, we adopt the convention $\infty\times0=0$.
Then \frepsilonqref{Sec3_Eq_Temp1.1} and \frepsilonqref{Sec3_Eq_Temp1.2} can be combined into \begin{equation}taegin{equation}\lambdaabel{Sec3_Eq_Temp2} \frphirac 1 2 \frphirac{d}{d\deltaelta}{\bf i}g\{ \|\frphi(\cdotot) +\deltaelta \phisi(\cdotot)\|_{L^{\begin{equation}taar q,\nuu}_\muathbb{F}}^2 {\bf i}g\}{\bf i}g|_{\deltaelta =0} = \frphirac {1}{\begin{equation}taar q} \| \frphi(\cdotot) \|_{L^{\begin{equation}taar q,\nuu}_{\muathbb F}}^{2-\begin{equation}taar q} {\begin{equation}taf1}_{\{ \|\frphi(\cdotot)\|_{L^{\begin{equation}taar q,\nuu}_\muathbb{F}} \nueq 0 \}} \frphirac{d}{d\deltaelta}{\bf i}g\{\muathbb E[f(\omegamega,\deltaelta)] {\bf i}g\}{\bf i}g|_{\deltaelta =0}. \frepsilonnd{equation} To exchange the order of derivation and expectation in \frepsilonqref{Sec3_Eq_Temp2}, we calculate $\frphirac{\phiartial f}{\phiartial\deltaelta}$. For any $\deltaelta\inftyn (-1,1)$ and $\omegamega\inftyn\Omegamega$, if $\| \frphi(\cdotot,\omegamega) +\deltaelta\phisi(\cdotot,\omegamega) \|_{L^\nuu} =0$, i.e., $\frphi(t,\omegamega) +\deltaelta\phisi(t,\omegamega) =0$ a.e. $t\inftyn [0,T]$, then \begin{equation}taegin{equation}\lambdaabel{sec3_Eq_Temp2.1} \frphirac{\phiartial f}{\phiartial\deltaelta} (\omegamega,\deltaelta) = \lambdaim_{\Deltaelta\deltaelta\rightarrow 0} \frphirac{1}{\Deltaelta\deltaelta} {\bf i}g( f(\omegamega,\deltaelta+\Deltaelta\deltaelta) -f(\omegamega,\deltaelta) {\bf i}g) = \lambdaim_{\Deltaelta\deltaelta\rightarrow 0} \frphirac{1}{\Deltaelta\deltaelta} \|\Deltaelta\deltaelta\phisi(\cdotot,\omegamega)\|_{L^\nuu}^{\begin{equation}taar q} =0. 
\end{equation}
On the other hand, when $\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^\nu}\neq 0$,
\begin{equation}\label{sec3_Eq_Temp2.2}
\frac{\partial f}{\partial\delta}(\omega,\delta) = \frac{\bar q}{\nu}\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\,\frac{\partial}{\partial\delta}\bigg(\int_0^T g(t,\omega,\delta)\,dt\bigg),
\end{equation}
where
$$ g(t,\omega,\delta) = |\f(t,\omega)+\delta\psi(t,\omega)|^{\nu}. $$
Combining \eqref{sec3_Eq_Temp2.1} with \eqref{sec3_Eq_Temp2.2}, one has
\begin{equation}\label{sec3_Eq_Temp2.3}
\frac{\partial f}{\partial\delta}(\omega,\delta) = \frac{\bar q}{\nu}\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\,{\bf1}_{\{\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}\neq 0\}}\,\frac{\partial}{\partial\delta}\bigg(\int_0^T g(t,\omega,\delta)\,dt\bigg).
\end{equation}
To exchange the order of differentiation and integration in \eqref{sec3_Eq_Temp2.3}, we compute
$$ \frac{\partial g}{\partial\delta}(t,\omega,\delta) = \nu\,|\f(t,\omega)+\delta\psi(t,\omega)|^{\nu-2}\,\langle \f(t,\omega)+\delta\psi(t,\omega),\ \psi(t,\omega)\rangle, $$
and then
$$\begin{aligned}
\bigg|\frac{\partial g}{\partial\delta}(t,\omega,\delta)\bigg| \les\ & \nu\,|\f(t,\omega)+\delta\psi(t,\omega)|^{\nu-1}\,|\psi(t,\omega)|\\
\les\ & C\big(|\f(t,\omega)|^\nu+|\psi(t,\omega)|^\nu\big) \in L^1(0,T;\mathbb R),
\end{aligned}$$
where $C>0$ is a constant depending only on $\nu$. By Theorem 2.27 in \cite[Page 56]{Folland 1999}, the order of differentiation and integration in \eqref{sec3_Eq_Temp2.3} can be exchanged. Then, we have
\begin{equation}\label{sec3_Eq_Temp2.4}
\begin{aligned}
\frac{\partial f}{\partial\delta}(\omega,\delta) =&\ \frac{\bar q}{\nu}\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\,{\bf1}_{\{\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}\neq 0\}}\int_0^T\frac{\partial g}{\partial\delta}(t,\omega,\delta)\,dt\\
=&\ \bar q\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\,{\bf1}_{\{\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}\neq 0\}}\\
&\ \times\int_0^T|\f(t,\omega)+\delta\psi(t,\omega)|^{\nu-2}\,\langle\f(t,\omega)+\delta\psi(t,\omega),\ \psi(t,\omega)\rangle\,dt\\
=&\ \bar q\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\\
&\ \times\int_0^T|\f(t,\omega)+\delta\psi(t,\omega)|^{\nu-2}\,\langle\f(t,\omega)+\delta\psi(t,\omega),\ \psi(t,\omega)\rangle\,dt.
\end{aligned}
\end{equation}
By virtue of H\"older's inequality, we have
$$\begin{aligned}
\bigg|\frac{\partial f}{\partial\delta}(\omega,\delta)\bigg| \les&\ \bar q\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\int_0^T|\f(t,\omega)+\delta\psi(t,\omega)|^{\nu-1}\,|\psi(t,\omega)|\,dt\\
\les&\ \bar q\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^\nu}^{\bar q-\nu}\,\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^\nu}^{\nu-1}\,\|\psi(\cdot,\omega)\|_{L^\nu}\\
\les&\ C\left(\|\f(\cdot,\omega)\|_{L^\nu}^{\bar q}+\|\psi(\cdot,\omega)\|_{L^\nu}^{\bar q}\right)\in L^1_{\mathcal F_T}(\Omega;\mathbb R),
\end{aligned}$$
where $C>0$ is a constant depending only on $\bar q$. Theorem 2.27 in \cite[Page 56]{Folland 1999} applies again to exchange the order of differentiation and expectation in \eqref{Sec3_Eq_Temp2}.
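The domination $\big|\frac{\partial f}{\partial\delta}(\omega,\delta)\big|\les C\big(\|\f(\cdot,\omega)\|_{L^\nu}^{\bar q}+\|\psi(\cdot,\omega)\|_{L^\nu}^{\bar q}\big)$ used above can be seen, for instance, via Young's inequality; a sketch: with $a=\|\f(\cdot,\omega)+\delta\psi(\cdot,\omega)\|_{L^\nu}$, $b=\|\psi(\cdot,\omega)\|_{L^\nu}$, and noting $(\bar q-\nu)+(\nu-1)=\bar q-1$,
$$a^{\bar q-1}b\les\frac{\bar q-1}{\bar q}\,a^{\bar q}+\frac1{\bar q}\,b^{\bar q},\qq a^{\bar q}\les 2^{\bar q-1}\big(\|\f(\cdot,\omega)\|_{L^\nu}^{\bar q}+|\delta|^{\bar q}\|\psi(\cdot,\omega)\|_{L^\nu}^{\bar q}\big),$$
which, for $\delta$ ranging over a bounded set, yields a dominating function of the stated form.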
Then, combining with \eqref{sec3_Eq_Temp2.4}, we have
\begin{equation}\label{sec3_Eq_Temp2.5}
\begin{aligned}
&\ \frac12\frac{d}{d\delta}\big\{\|\f(\cdot)+\delta\psi(\cdot)\|_{L^{\bar q,\nu}_{\mathbb F}}^2\big\}\big|_{\delta=0} = \frac{1}{\bar q}\,\|\f(\cdot)\|_{L^{\bar q,\nu}_{\mathbb F}}^{2-\bar q}\,{\bf1}_{\{\|\f(\cdot)\|_{L^{\bar q,\nu}_{\mathbb F}}\neq 0\}}\,\mathbb E\bigg\{\frac{\partial}{\partial\delta}f(\omega,\delta)\bigg\}\bigg|_{\delta=0}\\
=&\ \|\f(\cdot)\|_{L^{\bar q,\nu}_{\mathbb F}}^{2-\bar q}\,{\bf1}_{\{\|\f(\cdot)\|_{L^{\bar q,\nu}_{\mathbb F}}\neq 0\}}\,\mathbb E\bigg\{\|\f(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\int_0^T|\f(t)|^{\nu-2}\langle\f(t),\ \psi(t)\rangle\,dt\bigg\}\\
=&\ \|\f(\cdot)\|_{L^{\bar q,\nu}_{\mathbb F}}^{2-\bar q}\,\mathbb E\bigg\{\|\f(\cdot,\omega)\|_{L^{\nu}}^{\bar q-\nu}\int_0^T|\f(t)|^{\nu-2}\langle\f(t),\ \psi(t)\rangle\,dt\bigg\}\\
=&\ \mathbb E\int_0^T\bigg\langle\|\f(\cdot)\|^{2-\bar q}_{L^{\bar q,\nu}_{\mathbb F}}\,\mathbb E_t\big[\|\f(\cdot,\omega)\|^{\bar q-\nu}_{L^\nu}{\bf1}_{\{\|\f(\cdot,\omega)\|_{L^\nu}\neq 0\}}\big]\,|\f(t)|^{\nu-2}\f(t),\ \psi(t)\bigg\rangle\,dt\\
\equiv&\ \mathbb E\int_0^T\langle\Gamma(\f(\cdot))(t),\ \psi(t)\rangle\,dt,
\end{aligned}
\end{equation}
where
\begin{equation}\label{G(f)*}
\Gamma(\f(\cdot))(t) = \|\f(\cdot)\|^{2-\bar q}_{L^{\bar q}_{\mathbb F}(\Omega;L^\nu(0,T;\mathbb R^m))}\,M(t)\,|\f(t)|^{\nu-2}\f(t),\quad t\in[0,T],
\end{equation}
with
\bel{M(t)}M(t)=\mathbb E_t\Big[\|\f(\cdot,\omega)\|^{\bar q-\nu}_{L^\nu(0,T;\mathbb R^m)}{\bf1}_{\{\|\f(\cdot,\omega)\|_{L^\nu(0,T;\mathbb R^m)}\neq 0\}}\Big],\qq t\in[0,T].\ee
We have the following lemma.

\bl{Lemma 4.6} \sl Let $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$. Then for any $\f(\cdot)\in L^{\bar q}_{\mathbb F}(\Omega;L^\nu(0,T;\mathbb R^m))\equiv\mathbb U^{p,\rho,\sigma}[0,T]^*$,
\bel{}\Gamma(\f(\cdot))(\cdot)\in L^{\bar p}_{\mathbb F}(\Omega;L^\mu(0,T;\mathbb R^m))=\mathbb U^{p,\rho,\sigma}[0,T].\ee
\el

We note that, in the inequality $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$, $\rho$ takes values in $(1,\infty]$ and $\sigma$ takes values in $(2,\infty]$; when $\rho=\infty$ and/or $\sigma=\infty$, the right-hand side is understood as the corresponding limit. Moreover, the inequality $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ is equivalent to $\bar q\ges\nu$, and also to $\bar p\les\mu$.

\ms

\it Proof of Lemma \ref{Lemma 4.6}. \rm Note that $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ is equivalent to $\bar q\ges\nu$.
It suffices to show that
$$\widehat\Gamma(\f(\cdot))\equiv M(\cdot)|\f(\cdot)|^{\nu-2}\f(\cdot)\in L^{\bar p}_{\mathbb F}(\Omega;L^\mu(0,T;\mathbb R^m)).$$
First of all, if $\bar q=\nu$, then $\bar p=\mu$. In this case, $M(\cdot)=1$ and
$$\widehat\Gamma(\f(\cdot))(t)=|\f(t)|^{\nu-2}\f(t),\qq t\in[0,T].$$
Hence (note that $L^{\bar p}_{\mathbb F}(\Omega;L^\mu(0,T;\mathbb R^m))=L^\mu_{\mathbb F}(0,T;\mathbb R^m)$ in the current case),
$$\mathbb E\int_0^T|\widehat\Gamma(\f(\cdot))(t)|^\mu\,dt=\mathbb E\int_0^T\big(|\f(t)|^{\nu-1}\big)^\mu\,dt=\mathbb E\int_0^T|\f(t)|^\nu\,dt<\infty.$$
Now, let $\bar q>\nu$. Then
$$\mu={\nu\over\nu-1}>{\bar q\over\bar q-1}=\bar p,$$
which leads to ${\bar q\mu\over\bar p\nu}>1$. Note that $M(\cdot)$ is a (nonnegative) martingale.
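Indeed, the martingale property of $M(\cdot)$ is a direct consequence of the tower property of conditional expectations; a sketch: writing $X=\|\f(\cdot,\omega)\|^{\bar q-\nu}_{L^\nu}{\bf1}_{\{\|\f(\cdot,\omega)\|_{L^\nu}\neq 0\}}$, which is $\mathcal F_T$-measurable and integrable (since $\f(\cdot)\in L^{\bar q}_{\mathbb F}(\Omega;L^\nu(0,T;\mathbb R^m))$ and $\bar q>\nu$), we have
$$\mathbb E_s[M(t)]=\mathbb E_s\big[\mathbb E_t[X]\big]=\mathbb E_s[X]=M(s),\qq 0\les s\les t\les T.$$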
By Jensen's inequality,
$$\mathbb E\,M(t)^{\bar q\over\bar q-\nu}=\mathbb E\Big\{\mathbb E_t\Big[\Big(\int_0^T|\f(t)|^\nu dt\Big)^{{\bar q\over\nu}-1}\Big]\Big\}^{\bar q\over\bar q-\nu}\les\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}.$$
Hence, using Doob's inequality,
$$\mathbb E\Big[\sup_{t\in[0,T]}M(t)^{\bar q\over\bar q-\nu}\Big]\les\Big({\bar q\over\nu}\Big)^{\bar q\over\bar q-\nu}\mathbb E\big[M(T)^{\bar q\over\bar q-\nu}\big]=\Big({\bar q\over\nu}\Big)^{\bar q\over\bar q-\nu}\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}.$$
Consequently, by H\"older's inequality,
$$\ba{ll}
\noalign{\smallskip}\displaystyle\mathbb E\Big(\int_0^T|\widehat\Gamma(\f(\cdot))(t)|^\mu dt\Big)^{\bar p\over\mu}=\mathbb E\Big(\int_0^TM(t)^\mu|\f(t)|^{(\nu-1)\mu}dt\Big)^{\bar p\over\mu}\\
\noalign{\smallskip}\displaystyle\les\mathbb E\Big[\sup_{t\in[0,T]}M(t)^{\bar p}\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar p\over\mu}\Big]\\
\noalign{\smallskip}\displaystyle\les\Big[\mathbb E\Big(\sup_{t\in[0,T]}M(t)^{\bar p\bar q\mu\over\bar q\mu-\bar p\nu}\Big)\Big]^{\bar q\mu-\bar p\nu\over\bar q\mu}\Big[\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}\Big]^{\bar p\nu\over\bar q\mu}.\ea$$
Since
$${\bar p\mu\over\bar q\mu-\bar p\nu}={{\bar q\over\bar q-1}{\nu\over\nu-1}\over {\bar q\nu\over\nu-1}-{\bar q\nu\over\bar q-1}}={1\over\bar q-\nu},\qq{\bar p\nu\over\bar q\mu}={\nu-1\over\bar q-1},$$
from the above, we obtain
\bel{doob}\ba{ll}
\noalign{\smallskip}\displaystyle\mathbb E\Big(\int_0^T|\widehat\Gamma(\f(\cdot))(t)|^\mu dt\Big)^{\bar p\over\mu}\les\Big[\mathbb E\Big(\sup_{t\in[0,T]}M(t)^{\bar q\over\bar q-\nu}\Big)\Big]^{\bar p(\bar q-\nu)\over\bar q}\Big[\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}\Big]^{\nu-1\over\bar q-1}\\
\noalign{\smallskip}\displaystyle\les\Big\{\Big({\bar q\over\nu}\Big)^{\bar q\over\bar q-\nu}\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}\Big\}^{\bar p(\bar q-\nu)\over\bar q}\Big[\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}\Big]^{\nu-1\over\bar q-1}\\
\noalign{\smallskip}\displaystyle=\Big({\bar q\over\nu}\Big)^{\bar q\over\bar q-1}\mathbb E\Big(\int_0^T|\f(t)|^\nu dt\Big)^{\bar q\over\nu}<\infty.\ea\ee
This proves our conclusion. \signed{$\sqr69$}

\ms

Now, let us look at the optimal solution $\bar\eta$ of Problem (O).
According to the above, we see that the optimal solution $\bar\eta$ of Problem (O) satisfies
$$0=\mathbb E\lan\mathbb{K}\Gamma(\mathbb{K}^*\bar\eta)+\mathbb{K}_0x-\xi,\eta\ran,\qq\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n),$$
where $\Gamma(\mathbb{K}^*\bar\eta)$ is given by \eqref{G(f)*} with $\f(\cdot)=\mathbb{K}^*\bar\eta$. Thus,
\bel{4.25}\mathbb{K}\Gamma(\mathbb{K}^*\bar\eta)+\mathbb{K}_0x-\xi=0.\ee
Now, when $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$, we define
\bel{bar u}\bar u(\cdot)=\Gamma(\mathbb{K}^*\bar\eta)\in L^{\bar p}_{\mathbb F}(\Omega;L^\mu(0,T;\mathbb{R}^m))\equiv\mathbb{U}^{p,\rho,\sigma}[0,T].\ee
Then \eqref{4.25} reads
$$\xi=\mathbb{K}\bar u(\cdot)+\mathbb{K}_0x=X(T;x,\bar u(\cdot)),$$
which means that $\bar u(\cdot)\in\mathbb{U}^{p,\rho,\sigma}[0,T]$ is a control steering $x\in\mathbb{R}^n$ to $\xi\in L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$. Therefore, making use of Proposition \ref{onto} and Theorem \ref{equivalence}, we obtain the following result.

\bt{Theorem 4.7} \sl Let {\rm(H1)--(H2)} hold and $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$. Then the observability inequality \eqref{observability inequality} holds if and only if for any $(x,\xi)\in\mathbb{R}^n\times L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, Problem {\rm(O)} admits a unique optimal solution $\bar\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$. In this case, the control $\bar u(\cdot)\in\mathbb{U}^{p,\rho,\sigma}[0,T]$ defined by \eqref{bar u} steers $x$ to $\xi$. Moreover, with $\bar u(\cdot)$ defined by \eqref{bar u} for
$$\mathbb{K}^*\bar\eta=B(\cdot)^\top\bar Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top\bar Z_k(\cdot),$$
the coupled FBSDE
\bel{Sec3_Haml_Sys}\left\{\negthinspace\negthinspace\ba{ll}
\noalign{\smallskip}\displaystyle d\bar X(t)=\Big[A(t)\bar X(t)+B(t)\bar u(t)\Big]dt+\sum_{k=1}^d\Big[C_k(t)\bar X(t)+D_k(t)\bar u(t)\Big]dW_k(t),\\
\noalign{\smallskip}\displaystyle d\bar Y(t)=-\Big[A(t)^\top\bar Y(t)+\sum_{k=1}^dC_k(t)^\top\bar Z_k(t)\Big]dt+\sum_{k=1}^d\bar Z_k(t)dW_k(t),\\
\noalign{\smallskip}\displaystyle\bar X(0)=x,\qq\bar X(T)=\xi\ea\right.\ee
admits a unique adapted solution
$$(\bar X(\cdot),\bar Y(\cdot),\bar Z(\cdot))\in L^p_{\mathbb F}(\Omega;C([0,T];\mathbb{R}^n))\times L^q_{\mathbb F}(\Omega;C([0,T];\mathbb{R}^n))\times L^q_{\mathbb F}(\Omega;L^2(0,T;\mathbb{R}^{n\times d})).$$
\et

\it Proof. \rm By the above analysis, it remains only to prove the uniqueness of the adapted solution to FBSDE \eqref{Sec3_Haml_Sys}.
Now, let $(\tilde X(\cdot),\tilde Y(\cdot),\tilde Z(\cdot))\in L^p_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^q_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^q_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times d}))$ be a solution to \eqref{Sec3_Haml_Sys} with
\begin{equation}\label{Sec4_tilde_u}
\tilde u(\cdot) = \Gamma(\mathbb K^*\tilde Y(T)) = \Gamma\Big(B(\cdot)^\top\tilde Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top\tilde Z_k(\cdot)\Big),
\end{equation}
where $\Gamma(\cdot)$ is given by \eqref{G(f)*}. When $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$, Lemma \ref{Lemma 4.6} implies $\tilde u(\cdot)\in\mathbb U^{p,\rho,\sigma}[0,T]$. Next we prove that $\tilde\eta=\tilde Y(T)$ is an optimal solution to Problem (O). For each $\eta\in L^q_{\mathcal F_T}(\Omega;\mathbb R^n)$, we denote the solution to BSDE \eqref{Sec3_Dual_Sys} by $(Y(\cdot),Z(\cdot))$; then
$$
J(\eta;x,\xi)-J(\tilde\eta;x,\xi) = \frac12\big(\|\mathbb K^*\eta\|^2_{\mathbb U^{p,\rho,\sigma}[0,T]^*}-\|\mathbb K^*\tilde\eta\|^2_{\mathbb U^{p,\rho,\sigma}[0,T]^*}\big)+\langle x,\ \mathbb K^*_0\eta-\mathbb K^*_0\tilde\eta\rangle-\mathbb E\langle\xi,\ \eta-\tilde\eta\rangle.
$$
Due to the convexity of $\|\cdot\|^2_{\mathbb U^{p,\rho,\sigma}[0,T]^*}$, \eqref{Sec4_tilde_u}, \eqref{K*} and \eqref{K0*}, we have
$$\begin{aligned}
& J(\eta;x,\xi)-J(\tilde\eta;x,\xi)\ges\mathbb E\int_0^T\langle\Gamma(\mathbb K^*\tilde\eta)(t),\ (\mathbb K^*\eta-\mathbb K^*\tilde\eta)(t)\rangle\,dt+\langle x,\ \mathbb K^*_0\eta-\mathbb K^*_0\tilde\eta\rangle-\mathbb E\langle\xi,\ \eta-\tilde\eta\rangle\\
&\ =\mathbb E\int_0^T\Big\langle\tilde u(t),\ B(t)^\top(Y(t)-\tilde Y(t))+\sum_{k=1}^dD_k(t)^\top(Z_k(t)-\tilde Z_k(t))\Big\rangle\,dt\\
&\qquad+\langle x,\ Y(0)-\tilde Y(0)\rangle-\mathbb E\langle\xi,\ \eta-\tilde\eta\rangle.
\end{aligned}$$
Applying It\^o's formula to $\langle\tilde X(\cdot),\ Y(\cdot)-\tilde Y(\cdot)\rangle$ and using \eqref{Sec3_Haml_Sys}, we find that the right-hand side of the above inequality equals zero. Hence $J(\eta;x,\xi)-J(\tilde\eta;x,\xi)\geq 0$, which implies that $\tilde\eta=\tilde Y(T)$ is an optimal solution to Problem (O). Now, let $(X^i(\cdot),Y^i(\cdot),Z^i(\cdot))\in L^p_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^q_{\mathbb F}(\Omega;C([0,T];\mathbb R^n))\times L^q_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times d}))$ ($i=1,2$) be two solutions to \eqref{Sec3_Haml_Sys}. From the above analysis, both $Y^1(T)$ and $Y^2(T)$ are optimal solutions to Problem (O). By the uniqueness of the optimal solution (see Proposition \ref{onto}), $Y^1(T)=Y^2(T)$. Moreover, by the uniqueness of solutions to the BSDE, we have $(Y^1(\cdot),Z^1(\cdot))=(Y^2(\cdot),Z^2(\cdot))$.
Furthermore, by the uniqueness of solutions to the FSDE, we have $X^1(\cdot)=X^2(\cdot)$. This gives the uniqueness for \eqref{Sec3_Haml_Sys} and completes the proof. \signed{$\sqr69$}

\ms

\br{Remark 4.8} \rm The notion of adaptedness represents a fundamental difference between deterministic and stochastic systems. From the derivation of the Fr\'echet derivative (see the third line of \eqref{sec3_Eq_Temp2.5}), one naturally obtains the process
$$
\tilde\Gamma(\f(\cdot))(t) \equiv \|\f(\cdot)\|^{2-\bar q}_{L^{\bar q}_{\mathbb F}(\Omega;L^\nu(0,T;\mathbb R^m))}\,\|\f(\cdot,\omega)\|^{\bar q-\nu}_{L^\nu(0,T;\mathbb R^m)}\,|\f(t)|^{\nu-2}\f(t),\quad t\in[0,T],
$$
which is closely linked to our problem. Unfortunately, $\tilde\Gamma(\f(\cdot))(\cdot)$ is not $\mathbb F$-adapted when $p\neq\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ (equivalently, $\bar q\neq\nu$). Hence, in order to meet the requirement of adaptedness, we replace $\tilde\Gamma(\f(\cdot))(\cdot)$ by
$$
\Gamma(\f(\cdot))(t) = \mathbb E_t[\tilde\Gamma(\f(\cdot))(t)],\quad t\in[0,T].
$$
However, this treatment leads to some difficulty. As a matter of fact, a direct calculation shows that the equality
$$\left\{\mathbb E\left(\int_0^T|\tilde\Gamma(\f(\cdot))(t)|^{\mu}dt\right)^{\frac{\bar p}{\mu}}\right\}^{\frac{1}{\bar p}} = \|\f(\cdot)\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*}$$
holds for any $p\in(1,\infty)$. But due to the introduction of the conditional expectation, we only get the inequality
$$\left\{\mathbb E\left(\int_0^T|\Gamma(\f(\cdot))(t)|^{\mu}dt\right)^{\frac{\bar p}{\mu}}\right\}^{\frac{1}{\bar p}} \leq \frac{\bar q}{\nu}\,\|\f(\cdot)\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*}$$
for $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ (see \eqref{doob}). The technique involving Doob's martingale inequality used in Lemma \ref{Lemma 4.6} is invalid for $p>\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$.
\er

It is natural to ask what happens if $p>\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$. To obtain a result similar to Theorem \ref{Theorem 4.7}, instead of the functional $J(\eta;x,\xi)$ defined by \eqref{Sec3_Dual_Cost}, we introduce the following functional:
\bel{J'}J'(\eta;x,\xi)={1\over2}\|\mathbb{K}^*\eta\|_{\mathbb{U}_r^{p,\rho,\sigma}[0,T]^*}^2+\lan x,\mathbb{K}_0^*\eta\ran-\mathbb{E}\lan\xi,\eta\ran,\qq\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n),\ee
where $\mathbb{K}^*$ and $\mathbb{K}_0^*$ are given by \eqref{K*} and \eqref{K0*}, respectively. Equivalently,
\bel{J'*}\ba{ll}
\noalign{\smallskip}\displaystyle J'(\eta;x,\xi)={1\over2}\Big\|B(\cdot)^\top Y(\cdot)+\sum_{k=1}^dD_k(\cdot)^\top Z_k(\cdot)\Big\|_{\mathbb{U}_r^{p,\rho,\sigma}[0,T]^*}^2+\lan x,Y(0)\ran-\mathbb{E}\lan\xi,\eta\ran,\\
\noalign{\smallskip}\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qq\forall\eta\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n),\ea\ee
with $(Y(\cdot),Z(\cdot))$ being the adapted solution to BSDE \eqref{Sec3_Dual_Sys}.
Note that we have changed from $\mathbb{U}^{p,\rho,\sigma}[0,T]$ to $\mathbb{U}_r^{p,\rho,\sigma}[0,T]$ in the above. We now pose the following optimization problem.

\ms

{\bf Problem (O)$'$.} \rm Minimize \eqref{J'} subject to BSDE \eqref{Sec3_Dual_Sys} over $L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$.

\ms

Suppose $\f(\cdot),\psi(\cdot)\in L^\nu_{\mathbb F}(0,T;L^{\bar q}(\Omega;\mathbb{R}^m))$. A procedure similar to that for Problem (O) leads to
$$\begin{aligned}
& \frac12\frac{d}{d\delta}\big\{\|\f(\cdot)+\delta\psi(\cdot)\|^2_{L^\nu_{\mathbb F}(0,T;L^{\bar q}(\Omega;\mathbb R^m))}\big\}\big|_{\delta=0}\\
&\ =\|\f(\cdot)\|^{2-\nu}_{L^\nu_{\mathbb F}(0,T;L^{\bar q}(\Omega;\mathbb R^m))}\int_0^T\big(\mathbb E|\f(t)|^{\bar q}\big)^{\frac{\nu-\bar q}{\bar q}}\,\mathbb E\big[|\f(t)|^{\bar q-2}\langle\f(t),\ \psi(t)\rangle\big]dt\\
&\ \equiv\mathbb E\int_0^T\langle\Gamma'(\f(\cdot))(t),\ \psi(t)\rangle\,dt,
\end{aligned}$$
where
\bel{G'(f)*}\Gamma'(\f(\cdot))(t)=\|\f(\cdot)\|^{2-\nu}_{L^\nu_{\mathbb F}(0,T;L^{\bar q}(\Omega;\mathbb R^m))}\Big(\mathbb{E}|\f(t)|^{\bar q}\Big)^{\nu-\bar q\over\bar q}|\f(t)|^{\bar q-2}\f(t),\qq t\in[0,T].\ee
Unlike the case of Problem (O), here no conditional expectation is needed. Owing to this, the following lemma holds without the constraint $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$. However, due to the use of $\mathbb{U}^{p,\rho,\sigma}_r[0,T]^*$, the condition $p\ges 2$ is needed.
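For the reader's convenience, we sketch the differentiation behind the formula for $\Gamma'(\f(\cdot))$ at a fixed $t$ (assuming, as before, that differentiation under the expectation is justified):
$$\frac{d}{d\delta}\Big(\mathbb E|\f(t)+\delta\psi(t)|^{\bar q}\Big)^{\nu\over\bar q}\Big|_{\delta=0}=\nu\Big(\mathbb E|\f(t)|^{\bar q}\Big)^{\nu-\bar q\over\bar q}\,\mathbb E\big[|\f(t)|^{\bar q-2}\langle\f(t),\ \psi(t)\rangle\big];$$
integrating in $t$ and applying the chain rule to the outer power $2/\nu$ of the squared norm then gives the displayed identity.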
\bl{Lemma 4.8} \sl Let $p\ges 2$. Then for any $\f(\cdot)\in L^\nu_{\mathbb F}(0,T;L^{\bar q}(\Omega;\mathbb{R}^m))\equiv\mathbb{U}_r^{p,\rho,\sigma}[0,T]^*$,
\bel{}\Gamma'(\f(\cdot))(\cdot)\in L^\mu_{\mathbb F}(0,T;L^{\bar p}(\Omega;\mathbb{R}^m))=\mathbb{U}_r^{p,\rho,\sigma}[0,T].\ee
\el

\it Proof. \rm It suffices to calculate the following:
$$\ba{ll}
\noalign{\smallskip}\displaystyle\int_0^T\Big\{\mathbb{E}\Big[\Big(\mathbb{E}|\f(t)|^{\bar q}\Big)^{\nu-\bar q\over\bar q}|\f(t)|^{\bar q-1}\Big]^{\bar p}\Big\}^{\mu\over\bar p}dt=\int_0^T\Big[\Big(\mathbb{E}|\f(t)|^{\bar q}\Big)^{\nu-\bar q\over\bar q-1}\mathbb{E}|\f(t)|^{\bar q}\Big]^{\mu\over\bar p}dt=\int_0^T\Big(\mathbb{E}|\f(t)|^{\bar q}\Big)^{\nu\over\bar q}dt<\infty.\ea$$
Hence, our conclusion follows. \signed{$\sqr69$}

\ms

Then, similarly to Theorem \ref{Theorem 4.7}, we have the following result.

\bt{Theorem 4.9} \sl Let {\rm(H1)} and {\rm(H2)$'$} hold and $p\ges 2$. Then the observability inequality \eqref{weak observability inequality} holds if and only if for any $(x,\xi)\in\mathbb{R}^n\times L^p_{{\cal F}_T}(\Omega;\mathbb{R}^n)$, Problem {\rm(O)$'$} admits a unique optimal solution $\bar\eta'\in L^q_{{\cal F}_T}(\Omega;\mathbb{R}^n)$. In this case, the control $\bar u'(\cdot)\in\mathbb{U}_r^{p,\rho,\sigma}[0,T]$ defined by
\bel{baru}\bar u'(\cdot)=\Gamma'(\mathbb{K}^*\bar\eta'),\ee
with $\Gamma'(\cdot)$ given by \eqref{G'(f)*}, steers $x$ to $\xi$. Moreover, with such defined $\bar u'(\cdot)$, the coupled FBSDE \eqref{Sec3_Haml_Sys} admits a unique adapted solution
$$(\bar X(\cdot),\bar Y(\cdot),\bar Z(\cdot))\in L^p_{\mathbb F}(\Omega;C([0,T];\mathbb{R}^n))\times L^q_{\mathbb F}(\Omega;C([0,T];\mathbb{R}^n))\times L^q_{\mathbb F}(\Omega;L^2(0,T;\mathbb{R}^{n\times d})).$$
\et

\section{Norm optimal control problems}\label{S_Norm_Optimal_Control}

When the system \eqref{FSDE1} is $L^p$-exactly controllable on $[0,T]$ by $\mathbb{U}^{p,\rho,\sigma}[0,T]$ (respectively, $\mathbb{U}_r^{p,\rho,\sigma}[0,T]$) for a given $p$ satisfying $p\les\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ (respectively, $p\ges 2$), then for any $(x,\xi)\in\mathbb R^n\times L^p_{\mathcal F_T}(\Omega;\mathbb R^n)$, from Theorem \ref{Theorem 4.7} (respectively, Theorem \ref{Theorem 4.9}), we know that $\bar u$ (respectively, $\bar u'$) defined by \eqref{bar u} (respectively, by \eqref{baru}) is one of the $\mathbb U^{p,\rho,\sigma}[0,T]$- (respectively, $\mathbb U_r^{p,\rho,\sigma}[0,T]$-) admissible controls which steer the state process from the initial value $x$ to the terminal value $\xi$. In this section, we restrict ourselves to the case $p=\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ (respectively, $p\ges 2$) and further show that $\bar u$ (respectively, $\bar u'$) has a minimum-norm property.
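Underlying this minimum-norm property is the fact that $\Gamma$ in \eqref{G(f)*} acts as a duality map; a sketch of the computation, using the definition of $M(\cdot)$, the $\mathcal F_t$-measurability of $|\f(t)|$, and the property $\mathbb E\big[\mathbb E_t[X]\,Y\big]=\mathbb E[XY]$ for $\mathcal F_t$-measurable $Y$: for any $\f(\cdot)\in\mathbb U^{p,\rho,\sigma}[0,T]^*$,
$$\mathbb E\int_0^T\lan\Gamma(\f(\cdot))(t),\ \f(t)\ran dt=\|\f(\cdot)\|^{2-\bar q}_{L^{\bar q}_{\mathbb F}(\Omega;L^\nu)}\,\mathbb E\Big[\|\f(\cdot,\omega)\|^{\bar q-\nu}_{L^\nu}\int_0^T|\f(t)|^\nu dt\Big]=\|\f(\cdot)\|^2_{\mathbb U^{p,\rho,\sigma}[0,T]^*}.$$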
\mus First, for any given $1<\rho\lambdaes\inftynfty$, $2<\sigma\lambdaes\inftynfty$ and $p= \frphirac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$, we introduce a $\muathbb U^{p,\rho,\sigma}[0,T]$-norm optimal control problem: for any $(x,\xi)\inftyn\muathbb R^n\tauimes L^p_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, minimize $\|u\|_{\muathbb{U}^{p,\rho,\sigma}[0,T]}$ over the $\muathbb U^{p,\rho,\sigma}[0,T]$-admissible control set: $$ \muathcal U(x,\xi) := {\bf i}g\{ u(\cdotot)\inftyn \muathbb{U}^{p,\rho,\sigma}[0,T]\ {\bf i}g|\ X(T;x,u(\cdotot)) = \xi {\bf i}g\}. $$ For simplicity, we denote the above $\muathbb U^{p,\rho,\sigma}[0,T]$-norm optimal control problem by {\begin{equation}taf Problem (N)}. Note that the system \frepsilonqref{FSDE1} is $L^p$-exactly controllable on $[0,T]$ by $\muathbb{U}^{p,\rho,\sigma}[0,T]$ if and only if, for any $(x,\xi)\inftyn\muathbb R^n\tauimes L^p_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, the $\muathbb U^{p,\rho,\sigma}[0,T]$-admissible control set $\muathcal U(x,\xi)$ is not empty. We call $\begin{equation}taar u\inftyn \muathcal U(x,\xi)$ a $\muathbb U^{p,\rho,\sigma}[0,T]$-norm optimal control to Problem (N) if $$\|\begin{equation}taar u(\cdotot)\|_{\muathbb{U}^{p,\rho,\sigma}[0,T]} = \inftynf_{u(\cdotot)\inftyn \muathcal U(x,\xi)} \|u(\cdotot)\|_{\muathbb{U}^{p,\rho,\sigma}[0,T]}.$$ For any given $2\lambdaes p <\rho\lambdaes \inftynfty$ and $2< \sigma\lambdaes \inftynfty$, we similarly introduce the $\muathbb U_r^{p,\rho,\sigma}[0,T]$-norm optimal control problem reading: \mus {\begin{equation}taf Problem (N)$'$.} For any $(x,\xi)\inftyn\muathbb R^n\tauimes L^p_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, minimize $\|\cdotot\|_{\muathbb{U}_r^{p,\rho,\sigma}[0,T]}$ over the $\muathbb U_r^{p,\rho,\sigma}[0,T]$-admissible control set $$ \muathcal U'(x,\xi) := {\bf i}g\{ u\inftyn \muathbb{U}_r^{p,\rho,\sigma}[0,T]\ {\bf i}g|\ X(T;x,u(\cdotot)) = \xi {\bf i}g\}. 
$$ In the previous section, we have given some equivalent conditions for the $L^p$-exact controllability of system \frepsilonqref{FSDE1} on $[0,T]$ by $\muathbb U^{p,\rho,\sigma}[0,T]$ (respectively, $\muathbb U^{p,\rho,\sigma}_r[0,T]$) (see Theorem \ref{equivalence}, Theorem \ref{observe}, Theorem \ref{Theorem 4.7} and Theorem \ref{Theorem 4.9}). Now, by virtue of the related $\muathbb U^{p,\rho,\sigma}[0,T]$- (respectively, $\muathbb U_r^{p,\rho,\sigma}[0,T]$-) norm optimal control problem, we present another one. \begin{equation}tat{Sec4_Theorem_Equiv} \sl Let the assumptions {\rm(H1)}, {\rm(H2)} and $p= \frphirac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ hold. Then the following two statements are equivalent: \mus $\begin{equation}taullet$ For any $(x,\xi)\inftyn \muathbb R^n\tauimes L^p_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, Problem {\rm(O)} admits a unique optimal solution $\begin{equation}taar\frepsilonta\inftyn L^q_{\muathcal F_T}(\Omega;\muathbb R^n)$; \mus $\begin{equation}taullet$ For any $(x,\xi)\inftyn \muathbb R^n\tauimes L^p_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, Problem {\rm(N)} admits a unique optimal control $\begin{equation}taar u(\cdotot)\inftyn \muathbb{U}^{p,\rho,\sigma}[0,T]$. \nuoindent Moreover, the unique norm optimal control $\begin{equation}taar u$ to Problem {\rm(N)} is given by \frepsilonqref{bar u}, and the minimal norm is given by \begin{equation}taegin{equation}\lambdaabel{Sec4_Min_Norm} \|\begin{equation}taar u(\cdotot)\|_{\muathbb{U}^{p,\rho,\sigma}[0,T]} = \sqrt{\muathbb E\lambdaangle \xi,\ \begin{equation}taar\frepsilonta\ranglegle -\lambdaangle x,\ \muathbb K_0^*\begin{equation}taar\frepsilonta \ranglegle}. 
\frepsilonnd{equation} The minimal value of functional $J(\cdotot;x,\xi)$ is given by \begin{equation}taegin{equation}\lambdaabel{Sec4_Min_J} J(\begin{equation}taar\frepsilonta;x,\xi) = -\frphirac 1 2 \|\begin{equation}taar u(\cdotot)\|^2_{\muathbb{U}^{p,\rho,\sigma}[0,T]} = -\frphirac 1 2 {\bf i}g(\muathbb E\lambdaangle \xi,\ \begin{equation}taar\frepsilonta\ranglegle -\lambdaangle x,\ \muathbb K_0^*\begin{equation}taar\frepsilonta \ranglegle{\bf i}g). \frepsilonnd{equation} \frepsilont \inftyt Proof. \rm (Sufficiency). Since for any $(x,\xi)\inftyn\muathbb R^n\tauimes L^p_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, Problem (N) admits an optimal control, then the corresponding $\muathbb U^{p,\rho,\sigma}[0,T]$-admissible control set $\muathcal U(x,\xi)$ is not empty. Therefore, the system \frepsilonqref{FSDE1} is $L^p$-exactly controllable. By Theorem \ref{equivalence} and Theorem \ref{Theorem 4.7}, the first statement holds true. (Necessity). First of all, when $p= \frphirac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$, by Lemma \ref{Lemma 4.6}, $\begin{equation}taar u$ defined by \frepsilonqref{bar u} is a $\muathbb U^{p,\rho,\sigma}[0,T]$-admissible control to Problem (N). Let $\begin{equation}taar\frepsilonta\inftyn L^q_{\muathcal F_T}(\Omegamega;\muathbb R^n)$ be the optimal solution to Problem (O), and $(\begin{equation}taar Y(\cdotot),\begin{equation}taar Z(\cdotot))$ be the solution to BSDE \frepsilonqref{Sec3_Dual_Sys} with terminal condition $\begin{equation}taar Y(T)=\begin{equation}taar\frepsilonta$. For any $\frepsilonta\inftyn L^q_{\muathcal F_T}(\Omegamega;\muathbb R^n)$, similarly, we denote by $(Y(\cdotot),Z(\cdotot))$ the solution to BSDE \frepsilonqref{Sec3_Dual_Sys} with terminal condition $Y(T)=\frepsilonta$. 
Since $\begin{equation}taar\frepsilonta$ is optimal, we obtain the following relation named the Euler-Lagrange equation: \begin{equation}taegin{equation}\lambdaabel{Sec3_EulerLagrange} \begin{equation}taegin{aligned} 0=\ & \frphirac{d}{d\deltaelta} J(\begin{equation}taar\frepsilonta+\deltaelta\frepsilonta;x,\xi){\bf i}g|_{\deltaelta =0}=\muathbb{E}\inftynt_0^T \lambdaangle \Gammaamma(\muathbb K^*\begin{equation}taar\frepsilonta)(t),\ (\muathbb K^*\frepsilonta)(t) \ranglegle dt +\lambdaangle x,\ \muathbb K^*_0 \frepsilonta \ranglegle -\muathbb E\lambdaangle \xi,\ \frepsilonta \ranglegle\\ =\ & \muathbb E\inftynt_0^T \lambdaangle \begin{equation}taar u(t),\ (\muathbb K^*\frepsilonta)(t) \ranglegle dt +\lambdaangle x, Y(0) \ranglegle -\muathbb E\lambdaangle \xi,\ \frepsilonta \ranglegle. \frepsilonnd{aligned} \frepsilonnd{equation} Meanwhile, for any control $u(\cdotot)\inftyn \muathcal U(x,\xi)$, by applying It\^o's formula to $\lambdaan X(\cdot\,;x,u(\cdot)),Y(\cdot)\rangle$ on the interval $[0,T]$, we have \begin{equation}taegin{equation}\lambdaabel{ito2} \muathbb E\inftynt_0^T \lambdaangle u(t),\ (\muathbb K^*\frepsilonta)(t) \ranglegle dt +\lambdaangle x,\ Y(0) \ranglegle -\muathbb E\lambdaangle \xi,\ \frepsilonta \ranglegle =0. \frepsilonnd{equation} By letting $\frepsilonta = \begin{equation}taar\frepsilonta$ in both \frepsilonqref{Sec3_EulerLagrange} and \frepsilonqref{ito2}, for any $u\inftyn\muathcal U(x,\xi)$, we obtain \begin{equation}taegin{equation}\lambdaabel{Sec4_Temp1} \muathbb E \inftynt_0^T \lambdaan \begin{equation}taar u(t),\ (\muathbb K^*\begin{equation}taar\frepsilonta)(t)\rangle dt = \muathbb E\lambdaeft\lambdaangle \xi,\ \begin{equation}taar\frepsilonta \right\ranglegle -\lambdaangle x,\ \begin{equation}taar Y(0) \ranglegle = \muathbb E\inftynt_0^T \lambdaangle u(t),\ (\muathbb K^*\begin{equation}taar\frepsilonta)(t) \ranglegle dt. 
\end{equation} It is easy to calculate that $$ \mathbb E \int_0^T \langle \bar u(t), (\mathbb K^*\bar\eta)(t)\rangle dt=\|\mathbb K^*\bar\eta\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*}^2, $$ and $$ \|\bar u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]}=\|\mathbb K^*\bar\eta\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*}. $$ By the above three equations and H\"{o}lder's inequality, we get the optimality of $\bar u$. The uniqueness of the optimal control to Problem (N) comes from the strict convexity of $\mathbb{U}^{p,\rho,\sigma}[0,T]$.

Let us come back to the Euler-Lagrange equation. Letting $\eta=\bar\eta$ and using the definition of $\bar u$, \eqref{Sec3_EulerLagrange} reduces to $$\|\bar u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]}^2 = \mathbb E\langle \xi,\ \bar\eta\rangle -\langle x,\ \bar Y(0) \rangle,$$ which, together with \eqref{K0*}, implies \eqref{Sec4_Min_Norm}. Then, we calculate $$J(\bar\eta;x,\xi) = \frac 1 2 \|\bar u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]}^2 -\big( \mathbb E\langle \xi,\ \bar\eta\rangle -\langle x,\ \bar Y(0) \rangle \big) = - \frac 1 2 \|\bar u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]}^2,$$ which is \eqref{Sec4_Min_J}, and the proof is complete.
\signed {$\sqr69$}

\br{Remark 5.2} \rm When we consider the corresponding $\mathbb U^{p,\rho,\sigma}[0,T]$-norm optimal control problems with $p< \frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ in the same way, we can obtain $$\|\mathbb K^*\bar\eta\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*} \les \|u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]},$$ where $u(\cdot)$ is any given $\mathbb U^{p,\rho,\sigma}[0,T]$-admissible control. If we could also obtain $$\|\bar u(\cdot)\|_{\mathbb{U}^{p,\rho,\sigma}[0,T]} \les \|\mathbb K^*\bar\eta\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*},$$ then we would solve the $\mathbb U^{p,\rho,\sigma}[0,T]$-norm optimal control problems. However, due to the additional conditional expectation in the Fr\'echet derivative $\Gamma(\cdot)$, we cannot prove the above inequality. In fact, we only have (compare \eqref{doob}) $$\|\bar u(\cdot)\|_{\mathbb U^{p,\rho,\sigma}[0,T]} \les \frac{\bar q}{\nu} \|\mathbb K^*\bar\eta\|_{\mathbb U^{p,\rho,\sigma}[0,T]^*}.$$ \er

Since no conditional expectation is introduced in $\Gamma'(\cdot)$, we can solve the $\mathbb U_r^{p,\rho,\sigma}[0,T]$-norm optimal control problems for any $p\ges 2$; the proof is similar to that of Theorem \ref{Sec4_Theorem_Equiv}.

\bt{Sec4_weak_Theorem_Equiv} \sl Let the assumptions {\rm(H1)}, {\rm(H2)$'$} and $p\ges 2$ hold.
Then the following two statements are equivalent:
\ms $\bullet$ For any $(x,\xi)\in \mathbb R^n\times L^p_{\mathcal F_T}(\Omega;\mathbb R^n)$, Problem {\rm(O)$'$} admits a unique optimal solution $\bar\eta'\in L^q_{\mathcal F_T}(\Omega;\mathbb R^n)$;
\ms $\bullet$ For any $(x,\xi)\in \mathbb R^n\times L^p_{\mathcal F_T}(\Omega;\mathbb R^n)$, Problem {\rm(N)$'$} admits a unique optimal control $\bar u'(\cdot)\in \mathbb{U}_r^{p,\rho,\sigma}[0,T]$.
\noindent Moreover, the unique norm optimal control $\bar u'(\cdot)$ to Problem {\rm(N)$'$} is given by \eqref{baru}, and the minimal norm is given by \begin{equation} \|\bar u'(\cdot)\|_{\mathbb{U}_r^{p,\rho,\sigma}[0,T]} = \sqrt{\mathbb E\langle \xi,\ \bar\eta'\rangle -\langle x,\ \mathbb K_0^*\bar\eta' \rangle}. \end{equation} The minimal value of the functional $J'(\cdot;x,\xi)$ is given by \begin{equation} J'(\bar\eta';x,\xi) = -\frac 1 2 \|\bar u'(\cdot)\|^2_{\mathbb{U}_r^{p,\rho,\sigma}[0,T]} = -\frac 1 2 \big(\mathbb E\langle \xi,\ \bar\eta'\rangle -\langle x,\ \mathbb K_0^*\bar\eta' \rangle\big). \end{equation} \et

\br{Remark 5.4} \rm When $p=\frac{2\sigma\rho}{\sigma\rho-2\rho+2\sigma}$ (equivalently, $\bar p =\mu$), the spaces $\mathbb U^{p,\rho,\sigma}[0,T]$ and $\mathbb U^{p,\rho,\sigma}_r[0,T]$ coincide with $L^\mu_{\mathbb F}(0,T;\mathbb R^m)$, and then both Problem (N) and Problem (N)$'$ reduce to the $L^\mu_{\mathbb F}(0,T;\mathbb R^m)$-norm optimal control problem.
Precisely, Theorems \ref{Sec4_Theorem_Equiv} and \ref{Sec4_weak_Theorem_Equiv} provide the same result for the $L^\mu_{\mathbb F}(0,T;\mathbb R^m)$-norm optimal control problem when the system is $L^p$-exactly controllable with $p\ges 2$. However, when $1<p<2$, only Theorem \ref{Sec4_Theorem_Equiv} gives a result, and Theorem \ref{Sec4_weak_Theorem_Equiv} is invalid. \er

More specifically, when $\sigma=\infty$ and $p=\frac{2\rho}{\rho+2}$ (equivalently, $\bar p =\mu =2$), $\mathbb U^{p,\rho,\sigma}[0,T] =\mathbb U^{p,\rho,\sigma}_r[0,T] = L^2_{\mathbb F}(0,T;\mathbb R^m)$. Problem (N) becomes the classical norm optimal control problem (see \cite{Wang-Zhang 2015}), and Theorem \ref{Sec4_Theorem_Equiv} provides a result for the classical $L^2_{\mathbb F}(0,T;\mathbb R^m)$-norm optimal control problem. We notice that in the present paper the matrices $B(\cdot)$ and $D_k(\cdot)$ ($k=1,2,\dots,d$) are not necessarily bounded (see Assumption (H2)), while in the literature, only the bounded matrix case was studied.

Furthermore, instead of the standard norm $\|\cdot\|_{L_{\mathbb F}^{2}(0,T;\mathbb R^m)}$, we can extend our method to minimize the following generalized weighted norm \begin{equation}\label{Sec4_Weighted_Norm} \left(\mathbb E\int_0^T \langle R(t)u(t),u(t) \rangle dt\right)^{\frac 1 2} \end{equation} subject to \eqref{FSDE1} over the $L^2_{\mathbb F}(0,T;\mathbb R^m)$-admissible control set $\mathcal U(x,\xi)$, where $R(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathbb R^{m\times m})$ is symmetric, and there exists a constant $\delta>0$ such that $R(t)-\delta I$ is positive semi-definite for a.e. $t\in [0,T]$.
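For a symmetric weight $R(t)$ with $R(t)-\delta I$ positive semi-definite, a square root $N(t)$ with $N^\top(t) N(t)=R(t)$ exists pointwise in $t$; one classical way to compute it is the Denman-Beavers iteration for the matrix square root. The following pure-Python sketch is our own illustration (not part of the paper; the $2\times 2$ setting and all function names are ours) of that iteration for a fixed symmetric positive-definite matrix:

```python
# Illustrative sketch: the Denman-Beavers iteration for the principal square
# root of a symmetric positive-definite matrix R. Since the limit N is
# symmetric, N^T N = N N = R. Pure-Python 2x2 linear algebra keeps the
# example self-contained.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def denman_beavers(R, iters=60):
    Y = [row[:] for row in R]            # Y_0 = R
    Z = [[1.0, 0.0], [0.0, 1.0]]         # Z_0 = I
    for _ in range(iters):
        Yinv, Zinv = mat_inv(Y), mat_inv(Z)
        # Simultaneous update from the previous iterates:
        Y = [[(Y[i][j] + Zinv[i][j]) / 2 for j in range(2)] for i in range(2)]
        Z = [[(Z[i][j] + Yinv[i][j]) / 2 for j in range(2)] for i in range(2)]
    return Y                             # Y_k -> R^{1/2}, Z_k -> R^{-1/2}

R = [[4.0, 1.0], [1.0, 3.0]]             # symmetric, positive definite
N = denman_beavers(R)
NTN = mat_mul(N, N)                      # N is symmetric, so N^T N = N N
assert all(abs(NTN[i][j] - R[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

Because the limit is symmetric, $N^\top N = N^2 = R$, which is exactly the kind of factorization used below to absorb the weight into the control.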
In fact, due to the definition of $R(\cdot)$ and the Denman-Beavers iteration \cite{Denman-Beavers 1976} for the square roots of matrices, there exists a matrix-valued process $N(\cdot)\in L^\infty_{\mathbb F}(0,T;\mathbb R^{m\times m})$ which is invertible with $N^{-1}$ bounded as well, such that $R(\cdot) = N^\top(\cdot)N(\cdot)$. Then, letting $$\begin{aligned} & \widehat u(\cdot) = N(\cdot)u(\cdot),\quad \widehat B(\cdot) = B(\cdot)N^{-1}(\cdot),\quad \widehat D_j(\cdot)=D_j(\cdot)N^{-1}(\cdot),\quad j=1,2,\dots,d, \end{aligned}$$ we transform the original weighted norm optimal control problem \eqref{FSDE1} and \eqref{Sec4_Weighted_Norm} into the following equivalent standard one: to minimize $\|\widehat u(\cdot)\|_{L^{2}_{\mathbb F}(0,T;\mathbb R^m)}$ subject to $$\left\{ \begin{aligned} d X(t) =\ & \big[A(t) X(t) +\widehat B(t)\widehat u(t)\big] dt +\sum_{k=1}^d \big[ C_k(t) X(t) +\widehat D_k(t)\widehat u(t) \big]dW_k(t),\quad t\in [0,T],\\ X(0) =\ & x, \end{aligned} \right.$$ over the corresponding $L^2_{\mathbb F}(0,T;\mathbb R^m)$-admissible control set $$ \begin{aligned} \widehat{\mathcal U}(x,\xi) \equiv\ & \big\{ \widehat u(\cdot)\in L^2_{\mathbb F}(0,T;\mathbb R^m)\ \big|\ X(T;x,u(\cdot)) = \xi \big\}=N^{-1}\mathcal U(x,\xi)\\ \equiv\ & \big\{ \widehat u(\cdot) = N^{-1} u(\cdot)\ \big|\ u(\cdot)\in \mathcal U(x,\xi) \big\}. \end{aligned}$$ Applying the results of Theorems \ref{equivalence}, \ref{Theorem 4.7} and \ref{Sec4_Theorem_Equiv}, we can solve the generalized weighted norm optimal control problem.

\bc{Corollary 5.5} \sl Let $\sigma=\infty$, $p=\frac{2\rho}{\rho+2}$ and {\rm (H1)--(H2)} hold.
Then the system \eqref{FSDE1} is $L^p$-exactly controllable on $[0,T]$ by $L^2_{\mathbb F}(0,T;\mathbb R^m)$, if and only if, for any $(x, \xi)\in\mathbb R^n\times L^p_{\mathcal F_T}(\Omega;\mathbb R^n)$, the weighted norm optimal control problem \eqref{FSDE1} and \eqref{Sec4_Weighted_Norm} admits a unique optimal control. In this case, with $\bar u(\cdot)$ given by \begin{equation}\label{Sec4_Norm_Optimal_Control_Weighted} \bar u(t) = R^{-1}(t) \big( B^\top(t)\bar Y(t) +\sum_{k=1}^d D_k(t)^\top \bar Z_k(t) \big),\quad t\in [0,T], \end{equation} the following coupled FBSDE \begin{equation}\label{Sec4_Haml_Sys_Weighted} \left\{ \begin{aligned} d\bar X(t) =\ & \big[ A(t)\bar X(t) +B(t)\bar u(t) \big] dt +\sum_{k=1}^d \big[ C_k(t)\bar X(t) +D_k(t)\bar u(t) \big] dW_k(t),\\ -d\bar Y(t) =\ & \big[ A(t)^\top \bar Y(t) +\sum_{k=1}^d C_k(t)^\top \bar Z_k(t) \big] dt -\sum_{k=1}^d \bar Z_k(t) dW_k(t),\\ \bar X(0) =\ & x,\qquad \bar X(T) = \xi \end{aligned} \right. \end{equation} admits a unique adapted solution $$ (\bar X(\cdot),\bar Y(\cdot),\bar Z(\cdot)) \in L^p_{\mathbb F}(\Omega;C(0,T;\mathbb R^n)) \times L^p_{\mathbb F}(\Omega;C(0,T;\mathbb R^n)) \times L^p_{\mathbb F}(\Omega;L^2(0,T;\mathbb R^{n\times d})).
$$ Moreover, $\bar u(\cdot)$ defined by \eqref{Sec4_Norm_Optimal_Control_Weighted} is the unique weighted norm optimal control, and the minimal weighted norm is given by $$ \left(\mathbb E\int_0^T \langle R(t)\bar u(t),\bar u(t) \rangle dt\right)^{\frac 1 2} = \sqrt{\mathbb E \langle \xi,\bar Y(T) \rangle -\langle x,\bar Y(0) \rangle}. $$ \ec

\begin{thebibliography}{99}

\bibitem{Buckdahn-Quincampoix-Tessitore 2006} R.~Buckdahn, M.~Quincampoix, and G.~Tessitore, \it A characterization of approximately controllable linear stochastic differential equations, \sl Stoch. Partial Differ. Equ. Appl., edited by G. Da Prato and L. Tubaro, \rm Chapman \& Hall, Boca Raton (2006), 253--260.

\bibitem{Chen-Li-Peng-Yong-1993} S.~Chen, X.~Li, S.~Peng, and J.~Yong, \it On stochastic linear controlled systems, \rm Preprint, 1993.

\bibitem{Connors-1967} M.~M.~Connors, \it Controllability of discrete, linear, random dynamic systems, \sl SIAM J. Control, \rm 5 (1967), 183--210.

\bibitem{Denman-Beavers 1976} E.~D.~Denman and A.~N.~Beavers, Jr., \it The matrix sign function and computations in systems, \sl Appl. Math. Comput., \rm 2 (1976), 63--94.

\bibitem{El Karoui-Peng-Quenez 1997} N.~El Karoui, S.~Peng, and M.~C.~Quenez, \it Backward stochastic differential equations in finance, \sl Math. Finance, \rm 7 (1997), 1--71.

\bibitem{Ehrhardt-Kliemann-1982} M.~Ehrhardt and W.~Kliemann, \it Controllability of linear stochastic systems, \sl Systems Control Lett., \rm 2 (1982/83), 145--153.

\bibitem{Fattorini 1999} H.~O.~Fattorini, \sl Infinite-Dimensional Optimization and Control Theory, \rm Cambridge Univ. Press, Cambridge, 1999.
\bibitem{Fattorini 2011} H.~O.~Fattorini, \it Time and norm optimal controls: a survey of recent results and open problems, \sl Acta Math. Sci. Ser. B Engl. Ed., \rm 31 (2011), 2203--2218.

\bibitem{Folland 1999} G.~B.~Folland, \sl Real Analysis: Modern Techniques and Their Applications, 2nd Edition, \rm Pure and Applied Mathematics, John Wiley \& Sons, New York, 1999.

\bibitem{Gashi 2015} B.~Gashi, \it Stochastic minimum-energy control, \sl Systems Control Lett., \rm 85 (2015), 70--76.

\bibitem{Goreac 2008} D.~Goreac, \it A Kalman-type condition for stochastic approximate controllability, \sl C. R. Math. Acad. Sci. Paris, \rm 346 (2008), 183--188.

\bibitem{Gugat-Leugering 2008} M.~Gugat and G.~Leugering, \it $L^\infty$-norm minimal control of the wave equation: on the weakness of the bang-bang principle, \sl ESAIM Control Optim. Calc. Var., \rm 14 (2008), 254--283.

\bibitem{Lee-Markus 1967} E.~B.~Lee and L.~Markus, \sl Foundations of Optimal Control Theory, \rm The SIAM Series in Applied Mathematics, John Wiley \& Sons, 1967.

\bibitem{Lions 1988a} J.~L.~Lions, \it Exact controllability, stabilizability and perturbations for distributed systems, \sl SIAM Rev., \rm 30 (1988), 1--68.

\bibitem{Lions 1988b} J.~L.~Lions, \sl Contr\^olabilit\'e exacte, perturbations et stabilisation de syst\`emes distribu\'es, \rm Masson, Paris, RMA 8, 1988.

\bibitem{Liu-Peng 2010} F.~Liu and S.~Peng, \it On controllability for stochastic control systems when the coefficient is time-variant, \sl J. Syst. Sci. Complex., \rm 23 (2010), 270--278.

\bibitem{Lu-Yong-Zhang 2012} Q.~L\"u, J.~Yong, and X.~Zhang, \it Representation of It\^o integrals by Lebesgue/Bochner integrals, \sl J. Eur. Math. Soc. (JEMS), \rm 14 (2012), 1795--1823.
\bibitem{Peng 1994} S.~Peng, \it Backward stochastic differential equation and exact controllability of stochastic control systems, \sl Progr. Natur. Sci. (English Ed.), \rm 4 (1994), 274--284.

\bibitem{Shi-Wang-Yong 2013} Y.~Shi, T.~Wang, and J.~Yong, \it Mean-field backward stochastic Volterra integral equations, \sl Discrete \& Continuous Dyn. Systems, Ser. B, \rm 18 (2013), 1929--1967.

\bibitem{Russell 1978} D.~L.~Russell, \it Controllability and stabilizability theory for linear partial differential equations: recent progress and open questions, \sl SIAM Rev., \rm 20 (1978), 639--739.

\bibitem{Sunahara-Aihara-Kishino 1975} Y.~Sunahara, S.~Aihara, and K.~Kishino, \it On the stochastic observability and controllability for non-linear systems, \sl Int. J. Control, \rm 22 (1975), 65--82.

\bibitem{Wang-Xu 2013} G.~Wang and Y.~Xu, \it Equivalence of three different kinds of optimal control problems for heat equations and its applications, \sl SIAM J. Control Optim., \rm 51 (2013), 848--880.

\bibitem{Wang-Zuazua 2012} G.~Wang and E.~Zuazua, \it On the equivalence of minimal time and minimal norm controls for internally controlled heat equations, \sl SIAM J. Control Optim., \rm 50 (2012), 2938--2958.

\bibitem{Wang-Zhang 2015} Y.~Wang and C.~Zhang, \it The norm optimal control problem for stochastic linear control systems, \sl ESAIM Control Optim. Calc. Var., \rm 21 (2015), 399--413.

\bibitem{Wonham 1985} W.~M.~Wonham, \sl Linear Multivariable Control: A Geometric Approach, 3rd Edition, \rm Springer-Verlag, New York, 1985.

\bibitem{Yong-Zhou 1999} J.~Yong and X.~Y.~Zhou, \sl Stochastic Controls: Hamiltonian Systems and HJB Equations, \rm Springer, New York, 1999.

\bibitem{Yosida 1980} K.~Yosida, \sl Functional Analysis, \rm Springer-Verlag, Berlin, New York, 1980.
\bibitem{Zabczyk 1981} J.~Zabczyk, \it Controllability of stochastic linear systems, \sl Systems \& Control Letters, \rm 1 (1981), 25--31.

\bibitem{Zuazua 2006} E.~Zuazua, \sl Controllability of Partial Differential Equations, \rm manuscript, 2006.

\end{thebibliography} \end{document}
\begin{document} \title{Difference independence of the Euler gamma function} \begin{abstract} In this paper, we establish a sharp version of the difference analogue of the celebrated H\"{o}lder theorem concerning the differential independence of the Euler gamma function $\Gamma$. More precisely, if $P$ is a polynomial of $n+1$ variables in $\mathbb{C}[X, Y_0,\dots, Y_{n-1}]$ such that \begin{equation*} P(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0 \end{equation*} for some $(a_0, \dots, a_{n-1})\in \mathbb{C}^{n}$ with $a_i-a_j\notin \mathbb{Z}$ for any $0\leq i<j\leq n-1$, then we have $$P\equiv 0.$$ Our result complements a classical result on the algebraic differential independence of the Euler gamma function proved by H\"{o}lder in 1886, and also a result on the algebraic difference independence of the Riemann zeta function proved by Chiang and Feng in 2006. \end{abstract} \section{Introduction} A classical theorem of H\"{o}lder \cite{6} states that the Euler gamma function $$\Gamma(s)=\int_{0}^{+\infty}t^{s-1}e^{-t}dt, ~~~ \Re s> 0,$$ which can be analytically continued to the whole complex plane $\mathbb{C}$, does not satisfy any nontrivial algebraic differential equation whose coefficients are polynomials over $\mathbb{C}$. We state it in the following. \begin{thmx} Let $P$ be a polynomial of $n+1$ variables in $\mathbb{C}[X, Y_0,\dots, Y_{n-1}]$. Assume that \begin{equation*} P(s, \Gamma(s), \dots, \Gamma^{(n-1)}(s))\equiv 0, \end{equation*} then we have \begin{equation*} P\equiv 0. \end{equation*} \end{thmx} To the best of our knowledge, the Euler gamma function $\Gamma$ seems to be the first known example which satisfies the algebraic differential independence property in the literature. It is well known that the Riemann zeta function $\zeta$ is associated with $\Gamma$ by the famous Riemann functional equation \begin{equation}\label{eq0.1} \begin{aligned} \zeta(1-s)=2^{1-s}\pi^{-s}\cos \frac{\pi s}{2}\Gamma(s)\zeta(s).
\end{aligned} \end{equation} Motivated by the Riemann functional equation, it is natural to consider the algebraic differential independence property for the Riemann zeta function. The study of the algebraic differential independence of the Riemann zeta function $\zeta$ dates back to Hilbert. In \cite{Hilbert}, he conjectured that H\"{o}lder's result can be extended to the Riemann zeta function $\zeta$. Later, this conjecture was verified by Ostrowski in \cite{11}. Bank and Kaufman \cite{1, 2} made the following celebrated generalizations of H\"{o}lder's result. \begin{thmx} \label{thm-Bank-Kaufman} Let $P$ be a polynomial in $K[X, Y_0, \dots, Y_{n-1}]$, where $K$ is the field of all meromorphic functions such that the Nevanlinna characteristic $T(r, f)=o(r)$ as $r$ goes to infinity for any $f$ in $K$. Assume that \begin{equation*} P(s, \Gamma(s), \dots, \Gamma^{(n-1)}(s))\equiv 0, \end{equation*} then we have \begin{equation*} P\equiv 0. \end{equation*} \end{thmx} For the Nevanlinna characteristic $T(r, f)$, we refer to Hayman's book \cite{Hayman} for a detailed introduction. Since $\Gamma$ and $\zeta$ appear very naturally in the Riemann functional equation \eqref{eq0.1}, Markus in \cite{Markus} posed an open problem to study the joint algebraic differential independence of $\Gamma$ and $\zeta$. We refer the readers to the references \cite{7, 8, 8B, 9} for the recent developments in this direction. It is interesting to study the algebraic difference independence of $\zeta$ or $\Gamma$. Chiang and Feng proved the following result. \begin{thmx}\label{thm-Chiang-Feng} Let $P$ be a polynomial of $n+1$ variables in $\mathbb{C}[X, Y_0,\dots, Y_{n-1}]$ and $s_0, \dots, s_{n-1}$ be $n$ distinct numbers in $\mathbb{C}$. Assume that \begin{equation*} P(s, \zeta(s+s_0), \dots, \zeta(s+s_{n-1}))\equiv 0, \end{equation*} then we have \begin{equation*} P\equiv 0.
\end{equation*} \end{thmx} Chiang and Feng's result extended a result of Ostrowski in \cite{11}, where the assumption that $s_0, \dots, s_{n-1}$ are $n$ distinct real numbers is needed. Indeed, Chiang and Feng proved that Theorem \ref{thm-Chiang-Feng} also holds under the same assumption as in Theorem \ref{thm-Bank-Kaufman}; we refer the interested readers to \cite{3} for the details. Here, we also mention two remarkable universality results due to Voronin in the 1970s for the differential case \cite{Voronin-1975} and the difference case \cite{Voronin}. We refer to \cite{Steuding} for a detailed introduction to the recent developments in this direction. To the best of our knowledge, the topic of the algebraic difference independence of the Euler gamma function was first addressed by Hardouin in \cite{Hardouin} in the framework of difference Galois theory. Motivated by the multiplication theorem of the Euler gamma function \begin{equation} \Gamma(ns)=n^{ns-\frac{1}{2}}(2\pi)^{\frac{1-n}{2}}\prod_{j=0}^{n-1} \Gamma(s+\frac{j}{n}), \end{equation} Hardouin proved the following result. \begin{thmx}[\cite{Hardouin}]\label{thm-D} Let $a_0, \dots, a_{n-1}$ be $n$ complex numbers in $\mathbb{C}$, and let $b_0, \dots, b_{m-1}$ ($\geq 2$) be $m$ integers such that $\{a_j \ (\mathrm{mod} \ 1)\}_{j=0}^{n-1}$ and $\{\sum\limits_{l=0}^{b_j-1}\frac{l}{b_j} \ (\mathrm{mod} \ 1) \}_{j=0}^{m-1}$ are $\mathbb{Z}$-linearly independent. Assume that \begin{equation*} P(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}), \Gamma(b_0s), \dots, \Gamma(b_{m-1} s))\equiv 0 \end{equation*} for some polynomial $P$, then we have \begin{equation*} P\equiv 0. \end{equation*} \end{thmx} Hardouin's proof relies on a Kolchin-type theorem in an essential way. See also \cite{BBD} for a detailed discussion of Kolchin-type theorems and several powerful applications in algebraic independence problems. Our starting point is another well-known difference equation of $\Gamma$, \begin{equation}\label{eq-Gamma} \Gamma(s+1)=s\Gamma(s).
\end{equation} This is the obvious obstruction to studying the algebraic difference independence of the Euler gamma function $\Gamma$: one cannot expect to obtain Theorem B for $\Gamma$ directly. In this paper, however, we will show that the relation exhibited in \eqref{eq-Gamma} is the only obstruction to the algebraic difference independence of $\Gamma$. We will use an elementary method inspired by \cite{6, 11} to prove our main result, which avoids the advanced difference Galois theory. This may be of independent interest. We define \begin{equation} \mathcal{H}:=\{(a_0, \dots, a_{n-1})\in \mathbb{C}^{n}: a_i-a_j\notin \mathbb{Z} \ \text{for any} \ 0\leq i< j\leq n-1 \}. \end{equation} Now, we state our main result in the following. \begin{theorem}\label{thm-difference-Holder} Let $P$ be a polynomial of $n+1$ variables in $\mathbb{C}[X, Y_0,\dots, Y_{n-1}]$. Assume that \begin{equation*} P(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0 \end{equation*} for some $(a_0, \dots, a_{n-1})\in \mathcal{H}$, then we have \begin{equation*} P\equiv 0. \end{equation*} \end{theorem} We remark that Theorem \ref{thm-D} recovers part of the result of Theorem \ref{thm-difference-Holder}, under the same condition on $(a_j)_{j=0}^{n-1}$ and with $m=0$ in Theorem \ref{thm-D}. However, it cannot completely recover Theorem \ref{thm-difference-Holder}, since the condition in Theorem \ref{thm-difference-Holder} is sharp. Our result complements the classical result on the algebraic differential independence of the Euler gamma function proved by H\"{o}lder \cite{6} in 1886, and also a result on the algebraic difference independence of the Riemann zeta function proved by Chiang and Feng \cite{3} in 2006. \begin{corollary} Let $P$ be a polynomial of $n+1$ variables in $\mathbb{C}[X, Y_0,\dots, Y_{n-1}]$.
Assume that \begin{equation*} P(s, \Gamma(s), \dots, \Gamma(s+(n-1)\alpha))\equiv 0 \end{equation*} for some $\alpha\not\in\mathbb{Q}$, then we have \begin{equation*} P\equiv 0. \end{equation*} \end{corollary} \begin{remark} Theorem \ref{thm-difference-Holder} can be seen as a difference version of H\"{o}lder's theorem. The identity \eqref{eq-Gamma} shows that restricting the discussion to $\mathcal{H}$ is necessary. We can also extend Theorem \ref{thm-difference-Holder} to the setting of $K[X, Y_0, \dots, Y_{n-1}]$, where $K$ is the field of all meromorphic functions such that the Nevanlinna characteristic $T(r, f)=o(r)$ as $r$ goes to infinity for any $f$ in $K$. However, we will not address this in this paper. \end{remark} By Theorem \ref{thm-difference-Holder} and the Euclidean algorithm, it is not hard to give the following two examples. \begin{example} Let $P=P(X, Y, Z)$ be a polynomial of $3$ variables in $\mathbb{C}[X, Y, Z]$. Assume that \begin{equation*} P(s, \Gamma(s+a_0), \Gamma(s+a_1))\equiv 0, \end{equation*} then \begin{equation*} P\equiv 0, \end{equation*} unless $a_1-a_0\in \mathbb{Z}$. In the latter case, if $\Re a_0<\Re a_1$, $P$ is divisible by the polynomial $R(X, Y, Z)=Z-(X+a_0)\dots (X+a_1-1)Y$. \end{example} \begin{example} Let $P(X, Y, Z, W)=YW-Z^2-YZ$ in $\mathbb{C}[X, Y, Z, W]$. We have \begin{equation*} P(s, \Gamma(s), \Gamma(s+1), \Gamma(s+2))\equiv 0. \end{equation*} $P$ belongs to the ideal \begin{equation*} \langle W-(X+1)Z,\ Z-XY\rangle \end{equation*} generated by $W-(X+1)Z$ and $Z-XY$ in $\mathbb{C}[X, Y, Z, W]$. Furthermore, $P$ can be written as \begin{equation*} P(X, Y, Z, W)=Y(W-(X+1)Z)+Z(XY-Z).
\end{equation*} \end{example} \begin{remark} Indeed, inspired by Examples 1 and 2, we can apply Theorem \ref{thm-difference-Holder} and the Euclidean algorithm again to give a complete characterization of the following set \begin{equation*} \mathcal{I}:=\{P\in \mathbb{C}[X, Y_0,\dots, Y_{n-1}]: P(s,\Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0\} \end{equation*} without any assumption on $a_0, \dots, a_{n-1}$. However, we will not discuss this in this paper. \end{remark} \section{Proof of Theorem \ref{thm-difference-Holder}} In order to prove Theorem \ref{thm-difference-Holder}, we need to introduce a lexicographic order between any two monomials $Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}$ and $Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}}$ in $\mathbb{C}[Y_0, \dots, Y_{n-1}]$, which plays an important role in our proof. This strategy was inspired by Ostrowski's proof \cite{11} of H\"{o}lder's classical theorem. It also shares some of the spirit of the Kolchin-type theorems used in \cite{Hardouin, BBD}. We first introduce an order for the $n$ symbols $Y_0, \dots, Y_{n-1}$, \begin{equation}\label{eq-order} Y_0\prec Y_1\prec\dots\prec Y_{n-1}. \end{equation} This can be used to induce a lexicographic order between any two monomials $Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}$ and $Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}}$. We still denote it by $\prec$ to simplify the notation. We define it as follows: \begin{enumerate} \item [{\bf case 1}:] $Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}=Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}}$ if $i_k=j_k$ for $k=0, \dots, n-1$; \item [{\bf case 2}:] $Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}\prec Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}} $ if $i_0<j_0$ or there exists $1\leq k \leq n-1$ such that $$i_0=j_0, \dots, i_{k-1}=j_{k-1}, i_{k}<j_{k};$$ \item [{\bf case 3}:] $Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}}\prec Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}} $ can be defined similarly as in {\bf case 2}.
\end{enumerate} For any nonzero polynomial $P=P(X, Y_0, \dots, Y_{n-1})$ in $\mathbb{C}[X, Y_0, \dots, Y_{n-1}]$, we write it as \begin{equation}\label{eq-sum-P} P=\sum_{i=(i_0, \dots, i_{n-1})}\Phi_{i}(X)Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}, \end{equation} where $\Phi_i(X)\in \mathbb{C}[X]$ and $\Phi_i(X)\neq 0$. The {\bf highest term} of $P$ is defined as the maximal element in $\mathcal{T}_{P}$ with respect to the lexicographic order $\prec$ introduced above, where \begin{equation} \mathcal{T}_{P}:=\{Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}: \Phi_i(X)\ Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}} \ \text{appears in} \ \eqref{eq-sum-P}\}. \end{equation} For any monomial $L=Y_{0}^{i_0}Y_{1}^{i_1}\dots Y_{n-1}^{i_{n-1}}$, we define its {\bf degree} $\deg(L)$ by $$\deg(L):=\sum_{k=0}^{n-1}i_{k}.$$ The {\bf height} of $P$ is defined as the degree of the highest term of $P$. Now, we will prove Theorem \ref{thm-difference-Holder}. \begin{proof} Let \begin{equation} \mathcal{S}:=\{P\in \mathbb{C}[X, Y_0, \dots, Y_{n-1}]: P(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0\}. \end{equation} We will prove Theorem \ref{thm-difference-Holder} by contradiction, assuming that $\mathcal{S} \neq \{0\}$. By this assumption, there exists a nonzero polynomial \begin{equation*} Q=\sum\limits_{i=(i_0, \dots, i_{n-1})}\Psi_{i}(X)Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}, \end{equation*} which is of the lowest height in $\mathcal{S}\backslash\{0\}$, with $\Psi_j(X)Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}}$ being its highest term for some $j=(j_0, \dots, j_{n-1})$. Moreover, we choose $Q$ subject to the following condition. {\noindent \bf Assumption LD:} The nonzero polynomial $\Psi_j(X)$ appearing in the highest term of $Q$ is also of the lowest degree. Let \begin{equation}\label{eq-def-T} T(X, Y_0, \dots, Y_{n-1}):=Q(X+1, (X+a_0)Y_0, \dots, (X+a_{n-1})Y_{n-1}).
\end{equation} Noting that \begin{equation*} Q(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0, \end{equation*} we have $$ T(s, \Gamma(s+a_0), \dots, \Gamma(s+a_{n-1}))\equiv 0 $$ by \eqref{eq-Gamma}. Moreover, the highest term of $T$ is $\hat{\Psi}_{j}(X)Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}}$, where \begin{equation*} \hat{\Psi}_{j}(X):=\Psi_{j}(X+1)(X +a_0)^{j_0}\dots (X+a_{n-1})^{j_{n-1}}. \end{equation*} By the Euclidean algorithm, there exist two polynomials $R=R(X)$ and $U=U(X)$ in $\mathbb{C}[X]$ such that \begin{equation*} \hat{\Psi}_j=R\Psi_j+U, \end{equation*} where either $U=0$ or $0\leq\deg U<\deg \Psi_j$. It is easy to see that $\deg R\geq 1$. We claim that $U=0$. Otherwise, the polynomial \begin{equation*} H(X, Y_0, \dots, Y_{n-1}):=T(X, Y_0, \dots, Y_{n-1})-R(X)Q(X, Y_0, \dots, Y_{n-1}) \end{equation*} is in $\mathcal{S}$, and the highest term of $H$ is \begin{equation*} U(X)Y_{0}^{j_0}\dots Y_{n-1}^{j_{n-1}} \end{equation*} with $0\leq\deg U<\deg \Psi_j$. Thus $H\neq 0$, which contradicts the choice of $Q$ and {\bf Assumption LD}. Hence $U=0$, and then the highest term of $H$ would be less than the highest term of $Q$ if $H\neq 0$; this again contradicts our choice of $Q$. Thus, we get $H= 0$. That is, \begin{equation}\label{eq-TQ} T(X, Y_0, \dots ,Y_{n-1})=R(X)Q(X, Y_0, \dots, Y_{n-1}). \end{equation} We first assume that there exists $\beta\notin \Lambda :=\{-a_k: 0\leq k\leq n-1\}$ such that $R(\beta)=0$. By \eqref{eq-def-T} and \eqref{eq-TQ}, we get \begin{equation*} Q(\beta+1, (\beta+a_0)Y_0, \dots, (\beta+a_{n-1})Y_{n-1})=0 \end{equation*} in $\mathbb{C}[Y_0, \dots, Y_{n-1}]$. This implies that \begin{equation*} Q(\beta+1, Y_0, \dots, Y_{n-1})=\sum\limits_{i=(i_0, \dots, i_{n-1})}\Psi_{i}(\beta+1)Y_{0}^{i_0}\dots Y_{n-1}^{i_{n-1}}= 0 \end{equation*} in $\mathbb{C}[Y_0, \dots, Y_{n-1}]$.
Thus, we have \begin{equation*} \Psi_{i}(\beta+1)=0 \end{equation*} for all $i$, which implies that each $\Psi_i(X)$ is divisible by $X-\beta-1$. This contradicts our assumption that $\Psi_j$ is of the lowest degree. Hence, each root of $R$ lies in $\Lambda$. Without loss of generality, we assume that $R(-a_0)=0$. Thus, we get \begin{equation*}\label{eq-first-level} Q(-a_0+1, 0, (a_1-a_0)Y_1, \dots, (a_{n-1}-a_0)Y_{n-1})= 0 \end{equation*} by \eqref{eq-def-T} and \eqref{eq-TQ}. Recalling that $a_j-a_0\notin \mathbb{Z}$ for any $j\neq 0$, we have \begin{equation}\label{eq-Q} Q(-a_0+1, 0, Y_1, \dots, Y_{n-1})= 0. \end{equation} Taking $X=-a_0+1$, $Y_0=0$ in \eqref{eq-def-T} and \eqref{eq-TQ}, we get \begin{align*} &Q(-a_0+2, 0, (a_1-a_0+1)Y_1, \dots, (a_{n-1}-a_0+1)Y_{n-1})\\ =&R(-a_0+1)Q(-a_0+1, 0, Y_1, \dots, Y_{n-1})= 0 \end{align*} by \eqref{eq-Q}. Noting that $a_j-a_0\notin \mathbb{Z}$ for any $j\neq 0$ again, we obtain \begin{equation*} Q(-a_0+2, 0, Y_1, \dots, Y_{n-1})=0 \end{equation*} in $\mathbb{C}[Y_0, \dots, Y_{n-1}]$. By induction, we can prove that for any $m\in \mathbb{N}$, \begin{equation*} Q(-a_0+m, 0, Y_1, \dots, Y_{n-1})= 0 \end{equation*} in $\mathbb{C}[Y_0, \dots, Y_{n-1}]$. Since a nonzero polynomial in $X$ has only finitely many roots, it follows that \begin{equation*} Q(X, 0, Y_1, \dots, Y_{n-1})=0 \end{equation*} in $\mathbb{C}[X, Y_0, \dots, Y_{n-1}]$. Thus, we have proved that $Q$ is divisible by the monomial $Y_0$, which contradicts the assumption that $Q$ is of the lowest height in $\mathcal{S}$. This finishes the proof of Theorem \ref{thm-difference-Holder}. \end{proof} \end{document}
\begin{document} \title{Preserving Entanglement of Flying Qubits in Optical Fibers by \\Dynamical Decoupling} \author{Bin Yan} \affiliation{Key Laboratory of Quantum Information, University of Science and Technology of China, CAS, Hefei, 230026, People's Republic of China} \author{Chuan-Feng Li$\footnote{email:[email protected]}$} \affiliation{Key Laboratory of Quantum Information, University of Science and Technology of China, CAS, Hefei, 230026, People's Republic of China} \author{Guang-Can Guo} \affiliation{Key Laboratory of Quantum Information, University of Science and Technology of China, CAS, Hefei, 230026, People's Republic of China} \date{\today } \begin{abstract} We theoretically investigate the performance of dynamical decoupling sequences in preserving the entanglement of polarized photons in polarization-maintaining birefringent fibers (PMF) under classical Gaussian $1/f$ noise. We study the dynamical evolution of entanglement along fibers with the control sequence embedded. Decoherence due to polarization mode dispersion in PMF can be dramatically suppressed, even for a wide optical spectral width. The degree of entanglement can be effectively preserved when the control pulses are implemented. \end{abstract} \maketitle Entanglement plays a central role in quantum communication. Due to interaction with uncontrollable degrees of freedom of the environment, a quantum system may lose its entanglement. In an optical fiber-based quantum communication channel, the residual optical birefringence randomly accumulating along the fiber sets up a major obstacle to maintaining the fidelity of the transmitted information. Polarization-entangled photon pairs distributed over optical fibers may even suffer an abrupt disappearance of entanglement\cite{1}, known as entanglement sudden death\cite{2,3}. It is therefore of utmost importance to find effective methods for preserving the entanglement of polarized photons propagating in optical fibers.
Several approaches have been developed to address this issue. Notable examples are decoherence-free subspaces\cite{4,5} and quantum error-correction codes\cite{6,7}, both based on carefully encoding the quantum information into a wider, though partially redundant, Hilbert space. An alternative approach is quantum feedback\cite{8}, in which the information channel is designed as a closed loop with appropriate measurements and real-time corrections to the system. However, all these strategies have the drawback of requiring a large amount of extra resources. In contrast, an open-loop control method known as dynamical decoupling avoids these hindrances. Dynamical decoupling (DD) is a simple and effective method for coherence control, in which undesired effects of the environment are eliminated via strong and rapid time-dependent pulses faster than the environment correlation time. The physical idea behind the DD scheme comes from refocusing techniques in nuclear magnetic resonance (NMR) systems\cite{9} and has since been extended to many other physical contexts, such as nuclear-quadrupole qubits and electron-spin qubits\cite{10}. Preserving entanglement between two stationary qubits by dynamical decoupling has also been widely discussed\cite{11,12,13}. Some prominent examples of DD schemes are the widely used Hahn spin echo, periodic DD (PDD), Carr-Purcell-Meiboom-Gill (CPMG)\cite{14}, concatenated DD (CDD)\cite{15}, Uhrig DD (UDD)\cite{16}, and the recently proposed near-optimal decoupling scheme (QDD)\cite{17} used to eliminate general decoherence of qubits. Extending the time-dependent DD sequences to the space-dependent dynamical evolution process leads to the idea of applying DD controls to the optical fiber-based quantum communication channel\cite{18,19}. Experimental implementations have notably revealed the potential of this extension\cite{20,21}.
However, these delicately designed experiments are limited to conditions where the noise is introduced systematically, in a non-stochastic way, while in a real fiber-based channel it is distributed randomly. In this paper, we investigate the performance of the CPMG sequence in preserving the entanglement of polarized photons under stochastic classical Gaussian noise. We consider entangled photons distributed through polarization-maintaining (PM) birefringent fibers, in which case the polarized photons suffer a pure dephasing process, $T_1\gg T_2$. Such building blocks of coherence control for dephasing dynamics can be extended to general decoherence processes in ordinary single-mode optical fibers with more complicated DD schemes like CDD\cite{15} or QDD\cite{17}. In the following, we derive the entanglement evolution using the filter-design method\cite{22} and give numerical simulations for some special initial states. We consider the photon distribution scheme depicted in Fig. 1. Photon B is preserved by a quantum register, while the entangled photon A, carrying the encoded information, propagates through the fiber to the recipient. The fiber is embedded with $\pi$ pulses realized by half-wave plates. In our scheme the intervals between waveplates are fixed to a certain scale in the fiber, and the waveplate sequence is arranged as consecutive CPMG cycles. Note that the CPMG sequence has a self-similar feature, as illustrated in Fig. 1: two cycles of CPMG sequences with $N$ pulses each can be viewed as one cycle of a CPMG sequence with $2N$ pulses. This feature allows us to analyze the dynamical evolution of entanglement through the fiber directly, as illustrated in the following. \begin{figure} \caption{(a) Photon distribution scheme. (b) The length between pulses embedded in the fiber is fixed to a certain scale.
Measurement at length $L$ in the fiber corresponds to a 2-pulse CPMG sequence, while measurement at length $2L$ corresponds to a 4-pulse CPMG sequence.} \end{figure} Under the photon distribution process described above, the qubit-environment interaction Hamiltonian can be written as\cite{23}: \begin{equation} \hat{H}=\dfrac{1}{2}\int d\omega\left(|\omega\rangle_{A}\langle\omega|\otimes b(\omega,L)\sigma_z^A+|\omega\rangle_{A}\langle\omega|\otimes I_B\right) \end{equation} In nondispersive media only the first order of $\omega$ remains in $b(\omega,L)$\cite{23}. Thus we can rewrite $b$ as: \begin{equation} b(\omega,L)=\omega(\Omega+\beta(L)) \end{equation} Here $\Omega$ is a constant and $\beta(L)$ represents the stochastic fluctuation of the noise, with zero mean and the two-point correlation function: \begin{equation} S(L_1-L_2)=\langle\beta(L_1)\beta(L_2)\rangle \end{equation} where $\langle\cdot\rangle$ denotes the average with respect to the noise realizations. The statistical properties of the environment can also be expressed through the noise spectral density: \begin{equation} S(\omega)=\int e^{i\omega L}S(L)dL \end{equation} In the following discussion the statistics of the fluctuations is assumed to be Gaussian, in which case the noise is completely defined by the two-point correlation function $S(L)$ in equation (3). We assume the polarization-entangled photon pairs are generated via spontaneous parametric down-conversion. The quantum state of the generated photon pairs can then be written as\cite{1}: \begin{equation} |\varphi_{in}\rangle=\int d\omega f(\omega)|\omega,-\omega\rangle\otimes|P\rangle \end{equation} $|\omega,-\omega\rangle$ is the frequency basis vector of the photon pairs, where $\omega$ denotes the offset from the central frequency.
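The Fourier relation between $S(L)$ and $S(\omega)$ above can be checked numerically. The sketch below assumes, purely for illustration, a Gaussian correlation function $S(L)=e^{-L^2/2}$, whose spectrum $\sqrt{2\pi}\,e^{-\omega^2/2}$ is known in closed form; neither the correlation shape nor the grid is taken from the paper.

```python
import numpy as np

# Illustrative (assumed) correlation function S(L) = exp(-L^2/2).
L = np.arange(-10.0, 10.0, 0.01)
S_L = np.exp(-L**2 / 2)

def spectrum(w):
    # S(w) = integral e^{i w L} S(L) dL, here a Riemann sum;
    # S(L) is even, so the imaginary (sine) part vanishes.
    return np.sum(np.cos(w * L) * S_L) * 0.01

w = 1.0
assert abs(spectrum(w) - np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)) < 1e-6
```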
$|P\rangle$ represents the polarization mode, which can be expressed as a superposition of the horizontal and vertical polarization bases $|HH\rangle,|HV\rangle,|VH\rangle,|VV\rangle$. Due to the stochastic dephasing noise described by Hamiltonian (1), the output state of the photon pairs after photon A has propagated through the fiber for a length $L$, without any DD sequence implemented, is: \begin{eqnarray} \nonumber |\varphi_{out}\rangle=\int d\omega f(\omega)e^{-\frac{i}{2}\int_0^L b_s(\omega,L')dL'}|\omega,-\omega\rangle\\ \nonumber \otimes (a|HH\rangle+b|HV\rangle)\\ \nonumber +\int d\omega f(\omega)e^{\frac{i}{2}\int_0^L b_s(\omega,L')dL'}|\omega,-\omega\rangle\\\otimes (c|VH\rangle+d|VV\rangle) \end{eqnarray} where the subscript $s$ stands for a given realization of the noise before taking the ensemble average. The density matrix that characterizes the detected polarization qubits can be obtained by tracing over the frequency modes and taking the ensemble average over the environmental noise: \begin{equation} \rho=\int \mathcal{D}\beta\left(\mathrm{Tr}_{\omega}|\varphi_{out}\rangle\langle\varphi_{out}|\right) \end{equation} The integration is a Gaussian functional integral over the variable $\beta(L)$, which has a Gaussian distribution. The elements of the resulting density matrix are then given by: \begin{eqnarray} \nonumber &&\rho_{ii,12,34}=\rho_{ii,12,34}(0),\\ \nonumber &&i=1,2,3,4 \\ \nonumber&&\rho_{ij}=\rho_{ij}(0)\dfrac{1}{\sqrt{1+\sigma^2f(L)}}\exp\left({-\dfrac{\omega_0^2f(L)}{1+\sigma^2f(L)}}\right),\\ && i,j\ \mathrm{otherwise}, \end{eqnarray} where $f(L)$ can be expressed as an integral of the noise spectrum $S(\omega)$: \begin{equation} f(L)=\int\dfrac{d\omega}{2\pi}S(\omega)\dfrac{F(\omega L)}{\omega^2} \end{equation} $F(x)$ is a filter function that depends on the shape of the DD sequence. For free evolution without control implemented, $F(x)=2\sin^2{\dfrac{x}{2}}$. The fiber-based DD control can be considered as a filter-design process (see Ref. \cite{22} for details); any control sequence can be expressed through a particular filter function.
The analytical expression for a CPMG sequence with an even number $N$ of pulses (replace $\sin^2{x}$ with $\cos^2{x}$ for odd-$N$ CPMG) embedded before the measurement point can be written as: \begin{equation} F(x)=8\sin^4(\dfrac{x}{4N})\sin^2(\dfrac{x}{2})/\cos^2{\dfrac{x}{2N}} \end{equation} Note that the intervals between pulses are fixed in our scheme; namely, $N$ is proportional to the length of the fiber. Owing to the self-similar property of CPMG described above, we can investigate the dynamical evolution process using the deduced filter function: \begin{equation} F(\omega L)=8\sin^4(\dfrac{\omega}{4n})\sin^2(\dfrac{\omega L}{2})/\cos^2{\dfrac{\omega}{2n}} \end{equation} where $n=N/L$ is the fixed number of pulses per unit length. \begin{figure} \caption{(a) Evolution of entanglement along the fiber for different pulse intervals. (b) Evolution of entanglement with one SE pulse embedded into the fiber. Note that in the SE case here, the point at which the pulse is embedded changes with the measurement point.} \end{figure} \begin{figure} \caption{Concurrence as a function of the total pulse number within 50 units of length. } \end{figure} Consider the frequency-dependent character of the evolution process. From equation (8) one can see that the off-diagonal elements of the resulting density matrix increase with $\sigma$. Namely, decoherence is inhibited, rather than strengthened, by the optical frequency dispersion. This effect, though hardly observable in the ordinary situation $\sigma\ll\omega_0$, suggests more generally that under a DD control process an additional degree of freedom with a special structure can stabilize coherence. In the optical case here, since significant decoherence already emerges on the length scale where $\sigma^2f(L)\ll 1$, equation (8) can be simplified as: \begin{equation} \rho_{13,14,23,24}=\rho_{13,14,23,24}(0)e^{-\omega_0^2f(L)} \end{equation} which is not sensitive to the optical frequency dispersion.
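To illustrate how the CPMG filter suppresses the decoherence integral $f(L)$ relative to free evolution, the sketch below evaluates $f(L)$ numerically for an assumed unit-amplitude $1/f$ spectrum $S(\omega)=1/\omega$ with hypothetical frequency cutoffs; the cutoffs and parameters are illustrative choices, not values from the paper.

```python
import numpy as np

def F_free(x):
    # Free-evolution filter function
    return 2 * np.sin(x / 2) ** 2

def F_cpmg(x, N):
    # Even-N CPMG filter function
    return 8 * np.sin(x / (4 * N)) ** 4 * np.sin(x / 2) ** 2 / np.cos(x / (2 * N)) ** 2

def f_L(filter_fn, L, w_min=1e-2, w_max=10.0, num=100001):
    # f(L) = (1/2pi) * integral S(w) F(wL) / w^2 dw, with S(w) = 1/w (assumed)
    w = np.linspace(w_min, w_max, num)
    y = (1.0 / w) * filter_fn(w * L) / w ** 2 / (2 * np.pi)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(w))  # trapezoidal rule

L, N = 1.0, 4
f_free = f_L(F_free, L)
f_cpmg = f_L(lambda x: F_cpmg(x, N), L)
assert f_cpmg < 0.05 * f_free   # CPMG strongly suppresses the dephasing integral
```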
We now turn to the characterization of the degree of entanglement of the two-photon state. The Wootters concurrence\cite{24} is particularly convenient for the two-qubit case here; other reliable measures of entanglement yield the same conclusion. The concurrence can be calculated explicitly from the density matrix $\rho$ described in equation (12): \begin{equation} C(\rho)=\max(0,\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}) \end{equation} where the quantities $\lambda_i$ are the eigenvalues, in decreasing order, of the matrix: \begin{equation} \rho^{'}=\rho(\sigma^A_y\otimes\sigma^B_y)\rho^*(\sigma^A_y\otimes\sigma^B_y) \end{equation} In the following we analyze the evolution of entanglement for a class of bipartite density matrices initially prepared in the common X-form\cite{25}: $$ \rho^{AB}=\begin{pmatrix} \rho_{11} & 0 & 0 & \rho_{14}\\0 & \rho_{22} & \rho_{23} & 0\\0 & \rho_{32} & \rho_{33} & 0\\\rho_{41} & 0 & 0 & \rho_{44} \end{pmatrix} $$ which occurs in many contexts, including pure Bell states as well as Werner mixed states. For the concurrence with such an arbitrary initial state there is no compact analytical expression. However, it has been shown\cite{26} that entanglement sudden death always occurs under classical Gaussian noise, as well as in the evolution with DD control implemented. The DD method inhibits the loss of entanglement by reshaping the exponential decay factor of the off-diagonal elements. We now give numerical simulations to evaluate the performance of CPMG in our scheme. Without loss of generality, we numerically analyze the concurrence evolution for a specific initial X-form entangled state: $$ \rho^{AB}(0)=\dfrac{1}{3}\begin{pmatrix} 1/2 & 0 & 0 & 0\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\0 & 0 & 0 & 1/2 \end{pmatrix} $$ Thus the initial concurrence is $C(0)=\dfrac{1}{3}$.
In Fig. 2(a) we give a numerical simulation of the concurrence evolution under Gaussian $1/f$ noise (the noise spectral density is $S(\omega)\propto 1/\omega$; its origins are discussed in Ref. \cite{27}). We observe that the preservation of entanglement improves as the intervals between pulses decrease. As a contrast, we also depict the entanglement evolution with a single spin-echo (SE) pulse implemented in Fig. 2(b). It can be inferred that the CPMG sequence drastically outperforms the single SE pulse in inhibiting entanglement sudden death. The SE pulse itself has been experimentally tested as an effective method for coherence control\cite{28}. \begin{figure} \caption{Concurrence evolution under a general Gaussian noise environment with the exponent $\alpha$ ranging from 0.5 to 1.5} \end{figure} We also analyze the entanglement preservation over a fixed fiber length as the total number of pulses changes; thus the minimum number of pulses needed to achieve a given level of entanglement can be estimated. Fig. 3 shows that the entanglement revives once a certain number of pulses is implemented. However, the growth rate of the concurrence declines as the number of pulses increases, which makes preserving entanglement at an extremely high level difficult. The performance of the CPMG sequence under general Gaussian $1/f^\alpha$ noise with spectral density $S(\omega)\propto 1/\omega^\alpha$ is also considered. Fig. 4 shows the concurrence evolution under noise spectral densities with $\alpha$ ranging from 0.5 to 1.5. In conclusion, we have demonstrated that dynamical decoupling can effectively overcome the stochastic dispersion in polarization-maintaining birefringent fibers under classical Gaussian noise. Entanglement can be successfully preserved by embedding waveplates as CPMG sequences into the fibers. Our work will evidently enhance the scope of fiber-based quantum communication.
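For the specific initial state used above, the sudden-death threshold can be read off analytically: under pure dephasing the coherences decay as $e^{-g}$ with $g=\omega_0^2 f(L)$, and the standard X-state concurrence formula gives $C=2\max(0,\tfrac{1}{3}e^{-g}-\tfrac{1}{6})$, so entanglement vanishes once $g\ge\ln 2$. A small sketch assuming this closed form:

```python
import math

def concurrence_dephased(g):
    # X-state concurrence after dephasing by exp(-g), g = w0^2 f(L), applied
    # to the initial state rho(0) above: C = 2 * max(0, e^{-g}/3 - 1/6)
    return max(0.0, 2 * (math.exp(-g) / 3 - 1 / 6))

assert abs(concurrence_dephased(0.0) - 1/3) < 1e-12    # initial concurrence
assert concurrence_dephased(math.log(2) + 1e-9) == 0.0 # sudden death at g = ln 2
assert concurrence_dephased(math.log(2) - 1e-3) > 0.0  # still entangled just below
```

This makes explicit why a DD sequence that keeps $f(L)$ below $\ln 2/\omega_0^2$ prevents sudden death for this state.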
We also hope this method can be extended, with more complicated DD sequences, to general decoherence environments in ordinary single-mode fibers. This work is supported by the National Basic Research Program (2011CB921200) and the National Natural Science Foundation of China (Grants No. 60921091 and No. 10874162). \end{document}
\begin{document} \title{Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems} \begin{abstract} This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems.\ Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques \cite{CBC03}. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) which readily transforms a probabilistic model specified in a proof-based environment to its equivalent verifiable PRISM \cite{PRISM} model equipped with reward structures \cite{KNP07}.\ The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations.\ Finally, we demonstrate the novelty of the technique on a probabilistic library case study. \end{abstract} \section{Introduction} There are two main techniques for investigating quantitative properties of {\it probabilistic} system behaviours: {\it Probabilistic model checking} comprises the formalisation of a model (which describes a system operationally), and a suite of algorithms to analyse various correctness and performance properties (usually expressed in a probabilistic temporal logic) of the model. {\it Proof-based verification}, on the other hand, is the practical application of deductive proof methods to establish a link between the operational description of the model and the desired properties.
Over the years these two techniques have developed almost independently.\ The goal of this paper is to establish a formal link between them in a way that has not been previously explored.\ Our intention is to make such a linkage beneficial to practitioners of both techniques, and hence ensure future applicability of the proposed practice, especially on an industrial scale. The B-Method \cite{Abrial96} is an industrial-strength specification language for describing large-scale abstract system behaviours.\ The method's development process ensures that specifications gradually evolve via {\it refinement} to implementable code.\ The probabilistic B (pB) \cite{Hoang05} extends the B-Method to incorporate probability. PRISM \cite{PRISM} is a probabilistic model checker which accepts probabilistic models described in its modeling language --- a simple state-based language. \ Three types of probabilistic models are supported directly --- Discrete Time Markov Chains (DTMCs), Markov Decision Processes (MDPs), and Continuous Time Markov Chains (CTMCs).\ This work is based on the MDP type of the language. One of the key features of the pB language is the statement of invariants --- they provide a means of describing properties required to maintain the integrity of a system under construction.\ In an attempt to establish a generalised view of probabilistic system invariant properties, Hoang T.S. {\it et al.} \cite{HJRMM03} originally explored the use of ``expectations'' as system invariants in proof-based systems.\ Including probabilistic invariants in system designs is a useful means of enforcing quantitative safety properties --- for example, tolerance for the expected error in a probabilistic system behaviour.
Discharging a pB machine's safety proof obligation involves the verification (by a human or automated support) that the expectation indeed holds for all executions of the machine; but sometimes the prover fails to establish this goal.\ It is pertinent to mention that not all undischargeable proof obligations are malignant to the overall performance outlook of the final machine for deployment.\ However, in safety-critical systems development, this assumption cannot be taken lightly, hence the need to explore other techniques for gaining intuition into the failure of the provers to discharge their proof obligations. For standard (non-probabilistic) B machines, a prototype system \cite{LB03} which incorporates a model checking tool has been used to detect various errors in simple machine specifications.\ For their probabilistic counterparts, we show how to link the model checking facility of PRISM with abstract systems specified in pB via the latter's probabilistic invariants.\ The importance of such a link is to enable pB designers to explore their models experimentally; such an exploration is likely to reveal undesirable performance attributes of their models, and in particular to guide their decisions in the event that the invariants fail to hold. Our technique is as follows.\ Given a proof-based machine specified in pB, we generate its equivalent probabilistic action systems representation (in the PRISM language) fully augmented with reward structures \cite{KNP07} inherited from the pB machine's expectations, and either confirm or refute the statement over the expectations.\ In the event that the statement fails to hold, we get an intuition of the possible cause(s) of failure.\ Our specific contributions are as follows: \begin{itemize} \item [(a)]\ The statement of a theorem that forms the implicit link between a pB invariant and its reward-based model checking equivalent.
\item [(b)]\ A procedure for transforming a pB machine specified in a proof-based environment to its equivalent PRISM representation. \item [(c)]\ A prototype system to automate the procedure.\ In addition, we equip the resultant PRISM model with reward structures inherited from the pB machine's expectations, to allow for experiments. \item [(d)]\ A demonstration of the novelty of our technique on a small case study of a probabilistic library system. \end{itemize} Overall, this paper is structured as follows.\ We introduce the pGSL and its expectations in sec. 2, and set the theoretical foundation of our procedure in secs. 3 and 4.\ The automation of that procedure is in sec. 5.\ A practical demonstration of our technique is in secs. 6 and 7.\ Finally, we conclude in sec. 8. \begin{figure} \caption{\rm Structural definition of the expectation transformer-style semantics.} \label{fig:semantics} \end{figure} \section{Probabilistic Generalised Substitution Language {\it pGSL}} Abrial's Generalised Substitution Language {\it GSL} \cite{Abrial96} is based on Dijkstra's weakest-precondition $wp$ semantics for describing computations and their meaning \cite{Dijkstra76}.\ The semantics, expressed in the {\it B-Method} (B) \cite{Abrial96}, defines the concept of an ``abstract machine''.\ The {\it Abstract Machine Notation (AMN)} exploits B's capabilities via {\it refinement} for incrementally developing designs such that relevant system properties are always preserved.\ The complete framework supports the development of provably correct systems.
The logic {pGSL} \cite{Morgan98} is a smooth extension of the {GSL}, in which the standard boolean values --- representing certainty --- are replaced by real values --- representing probability.\ Its logical framework is the {\it probabilistic Abstract Machine Notation (pAMN)} \cite{Hoang05}, an extension of the standard {AMN}.\ Its specification is based on the probabilistic B (pB).\ The syntactic structure of the {pGSL} is rich enough to permit the specification of abstract probabilistic system behaviours.\ Details of the GSL can be found in \cite{Abrial96}, while those of its theoretical and practical extensions are in \cite{Morgan98} and \cite{HJRMM03,Hoang05} respectively. An important component of the {pGSL} is the specification of probabilistic invariants to ensure consistency between system designs.\ This is indeed crucial since it also assures that undesirable operation sequences do not lead to a violation of the critical properties of a system.\ In fact, the semantics of the {pGSL}, which is based on the expectation transformer-style semantics of {pGCL} \cite{MM04} (shown in Fig. {\ref{fig:semantics}}), gives a complete characterisation of probabilistic programs with nondeterminism, and is sufficient to express many performance-style properties.\ To further explore this capability, we set out the definitions below.\ Their elementary details can be found elsewhere \cite{GW86,CGP99}. \noindent {\it \textbf{Definition 1:} (Sub-distribution)\ For any finite state space S, the set of sub-distributions over S is \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} \overline{S} \ \triangleq \ \quad \{\Delta: S \rightarrow [0, 1] \mid \sum \Delta \le 1\} \ , \end{array}\label{eq:distribution} \end{eqnarray} the set of functions from S into the closed interval of reals [0, 1] that sum to no more than one \footnote{Probabilities that sum to less than one represent aborting behaviours. We do not discuss such program behaviours here.}}.
\noindent {\it \textbf{Definition 2:} (Labeled Markov Decision Process)\ A tuple (S, $\hat{s}$, $\textsf{A}$, $\textsf{L}$), where S is as defined above, $\hat{s} \in S$ is the initial state, $\textsf{A} : S \rightarrow 2^{\overline{S}}$ is a transition function, and $\textsf{L} : S \rightarrow 2^{AP}$ is a labeling function which assigns to each state a subset of the set of atomic propositions AP that are valid in that state.} \noindent {\it \textbf{Definition 3:} (Path)\ A path in an MDP is a non-empty finite or infinite sequence of states $s_{0}\stackrel{\alpha_{0}}{\longrightarrow}$ $s_{1}\stackrel{\alpha_{1}}{\longrightarrow}$ ... where $\alpha_{i} \in \textsf{A}(s_{i})$ \ and \ $\alpha_{i}(s_{i + 1}) > 0$ \ for all $s_{i} \in S$.} \noindent {\it \textbf{Definition 4:} (Absorbing state)\ A state $s_{i} \in S$ is said to be absorbing if no transition leaves this state after resolving all the nondeterministic selections in the state {\it i.e.}, $s_{i}\stackrel{\alpha_{i} = 1}{\longrightarrow} s_{i}$, and $s_{i}\stackrel{\alpha_{i} = 0}{\longrightarrow} s_{j}$ whenever $\mathord{i \ne j}$.} A probabilistic computation tree formalises the notion of a probabilistic distribution over execution traces, which is required to give semantic interpretation to temporal properties.\ Each step on a path has an associated probability (often probability one for standard steps in the computation) --- and the probabilities on those individual steps, when multiplied together, determine the probability of paths ending up in a particular absorbing state.\ Our interest lies only in using such probability masses for {\it finite} paths.
\\ \noindent {\it \textbf{Definition 5:} (Endpoint of a distribution)\ Any absorbing state s over the distribution $\Delta$ is said to be at the endpoint of the distribution.} \noindent {\it \textbf{Definition 6:} (Random variable)\ A random variable is a non-negative real-valued function over the state space in which our programs operate.} \noindent {\it \textbf{Definition 7:} (Expected value) \ For any bounded random variable $\alpha$ in $S \rightarrow \mathbb{R}_{\ge}$ and distribution $\Delta \in \overline{S}$, the expected value of $\alpha$ over $\Delta$ is defined \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} \mathord{\displaystyle\int_{\Delta}\alpha} \ \triangleq \ \quad \mathord{\displaystyle\sum (\alpha.s\ast\Delta.s)} \ , \end{array}\label{eq:expected} \end{eqnarray} for any state $s$ in the endpoint of the distribution $\Delta$. } \subsection{The PCHOICE Operator} In \cite{HJRMM03} Hoang T.S. {\it et al.} introduced a {\bf PCHOICE} operator into the standard AMN's operations --- similar to the probabilistic choice operator of Fig. \ref{fig:semantics} --- which also permits the specification of probabilistic behaviours in a typical machine.\ This extension, captured in the probabilistic Abstract Machine Notation (pAMN) and expressed in the pB method, describes probabilistic machines with an additional EXPECTATIONS clause\footnote{The complete framework encapsulates state variables and the operations on the states by the use of `clauses'.}.\ Probabilistic invariant properties are then defined as random variables over the machine's state, and encoded in the EXPECTATIONS clause.\ An invariant of this form is then an ``expected value-invariant''.\ Later on, we show how the pAMN can be used to specify abstract probabilistic system behaviours.\ A comprehensive list of the pAMN clauses can be found in \cite{Hoang05}.
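Definitions 1 and 7 can be captured in a few lines of code; the sketch below treats a sub-distribution over a finite state space as a finite map and computes the expected value $\int_{\Delta}\alpha$. The state names are illustrative assumptions.

```python
# Sub-distribution (Definition 1) as a finite map and expected value (Definition 7).
def is_subdistribution(delta, tol=1e-12):
    # All masses in [0, 1] and total mass at most one
    return (all(0.0 <= p <= 1.0 for p in delta.values())
            and sum(delta.values()) <= 1.0 + tol)

def expected_value(alpha, delta):
    # integral_Delta alpha = sum over endpoint states s of alpha.s * Delta.s
    return sum(alpha[s] * p for s, p in delta.items())

delta = {'s0': 0.5, 's1': 0.25}   # mass 0.75 <= 1: the missing 0.25 is aborting mass
alpha = {'s0': 2.0, 's1': 4.0}    # a bounded, non-negative random variable
assert is_subdistribution(delta)
assert expected_value(alpha, delta) == 2.0
```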
\subsection{The EXPECTATIONS clause} It gives a random variable $\xi$ over the program state, denoting the expected value-invariant, and an initial expression {\it e} which is evaluated over the program variables when the machine is initialised.\ The idea is that after arbitrary executions of the program, the expected value of $\xi$ at any given program state is always at least the value of {\it e} initially \cite{HJRMM03}. More formally, suppose a probabilistic machine has initialisation {\it INIT} and two operations {\it OpX} \ and {\it OpY}; then satisfying the probabilistic proof obligation for some expected value-invariant $\xi$ and initial expression $e$ would operationally (Fig. \ref{fig:semantics}) imply that\footnote{For random variables $R$, $R^{'}$, the implication-like relation $R \Rrightarrow R^{'}$ means $R$ is everywhere less than or equal to $R^{'}$.} \begin{eqnarray} \begin{array}{l} \hspace*{-1ex}\begin{array}[t]{l} \quad \xi \Rrightarrow \wP{{OpX}}{\xi}\quad \quad $and$ \quad \quad \xi \Rrightarrow \wP{{OpY}}{\xi} \quad $provided$ \quad e \ \Rrightarrow \wP{{INIT}}{\xi}, \end{array} \end{array}\label{eq:probInv} \end{eqnarray} which then assures that \begin{eqnarray} \begin{array}{l} \hspace*{-1ex}\begin{array}[t]{l} \quad e \ \Rrightarrow \mbox{$wp.{INIT}.{(wP.{({\mathrm{It}} \ OpX \ \sqcap \ OpY \ \mathrm{tI})}.{\xi})}$}.
\end{array} \end{array}\label{eq:probdOP} \end{eqnarray} \noindent The operational interpretation of (\ref{eq:probdOP}) is that arbitrary interleaving of the operations {\it OpX} and {\it OpY} after the initialisation {\it INIT} should always result in a distribution over the final states (of variables) such that the expected value (with respect to the invariant $\xi$) is at least the initial value specified by the expression $e$.\ Clearly, the conditions in (\ref{eq:probInv}) imply the operational interpretation of (\ref{eq:probdOP}).\ However, if there is a particular interleaving of the machine which demonstrates the failure of (\ref{eq:probdOP}), then it must be true that (\ref{eq:probInv}) has failed as well.\ The example below illustrates our argument. \subsubsection{Example: A Simple Demonic Machine} Fig. \ref{fig:simple} shows a pAMN machine (adapted from \cite{HJRMM03}) that captures the operations of a simple pB machine called Demon, with a single variable {\it cc}; the INVARIANT clause specifies that {\it cc} must be integer-valued --- pB's prover always checks that this statement is true using the operational reasoning established in the previous section.\ Initially, {\it cc} is set to 0; the OPERATIONS clause contains operations {\it OpX} and {\it OpY}.\ {\it OpX} either increments {\it cc} by 1 or decrements it by the same value, each with probability $1/2$, while {\it OpY} just re-initialises {\it cc} to 0.\ The variable {\it nn} in the operation {\it OpX} is an output parameter which need not occur in the VARIABLES clause\footnote{It must however follow a similar machine declaration as $cc$ to enable its PRISM transformation.}.\ The EXPECTATIONS clause specifies the expected value-invariant $\xi$ to be the random variable {\it cc}, and the initial expression {\it e} to be 0, so that ``the expected value of {\it cc} over any endpoint distribution is never decreased below 0'' by the Demon's {\it OpX} and {\it OpY} operations.
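The two obligations $\xi \Rrightarrow wp.Op.\xi$ for the Demon machine can be checked mechanically on a finite range of states. The sketch below encodes the expectation-transformer semantics of the two operations directly; the sampled state range is an illustrative assumption, not part of the pB tooling.

```python
# Expectation-transformer check of xi => wp.OpX.xi and xi => wp.OpY.xi
# for the Demon machine, over a sampled (illustrative) range of states.
def wp_opx(xi):
    # OpX: cc := cc + 1 with prob 1/2, cc := cc - 1 with prob 1/2
    return lambda cc: 0.5 * xi(cc + 1) + 0.5 * xi(cc - 1)

def wp_opy(xi):
    # OpY: cc := 0
    return lambda cc: xi(0)

xi = lambda cc: cc                  # the expected value-invariant of the machine
states = range(-5, 6)
assert all(xi(cc) <= wp_opx(xi)(cc) for cc in states)   # OpX obligation holds
assert any(xi(cc) > wp_opy(xi)(cc) for cc in states)    # OpY obligation fails
```

The failing witness is any state with $cc > 0$: there $\xi(cc) = cc$ but $wp.OpY.\xi = 0$.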
\begin{figure} \caption{\rm A pB model specified in the pAMN.} \label{fig:simple} \end{figure} In this example, the machine {\it fails} to satisfy the probabilistic proof obligation specified in (\ref{eq:probdOP}), and the reason is that \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} \quad (\exists cc \in INT: \ \neg(cc \ \Rrightarrow \ \wP{OpY}{cc})). \end{array}\label{eq:expectedvalue} \end{eqnarray} This expression captures the failure of the pB prover to establish the proof obligations in (\ref{eq:probInv}).\ However, from a distribution viewpoint, it is possible to see exactly why this failure of the pB prover would similarly result in the failure of (\ref{eq:probdOP}).\ Consider the program fragment \begin{eqnarray} \begin{array}{l} INIT; OpX; (\ccc{OpY}{(nn \ge 0)}{{\mathrm{skip}}}); \end{array}\label{eq:probinit} \end{eqnarray} it yields the distribution given by \[ \delta = \left\{ \begin{array}{l} \hspace*{-1ex}\begin{array}[t]{l} \ cc := 0 \ \ \ \quad @ 1/2 \\ \ cc := -1 \quad @ 1/2 \end{array} \end{array}\right.\] over the final state of the random variable {\it cc}.\ Calculating the expected value over this distribution we get \begin{eqnarray}\nonumber \hspace*{-1ex}\begin{array}[t]{l} \mathord{\displaystyle\int_{\delta} cc} = 1/2 \times 0 + 1/2 \times -1 \quad \equiv - 1/2, \end{array} \end{eqnarray} which is clearly a violation of the conditions specified in the EXPECTATIONS clause. In this simple case, it is clear that the failure to establish the pB proof obligation corresponds to an exact result distribution over endpoints that demonstrates the failure.\ Currently pB provers do not provide the diagnostic information necessary to give practitioners the much needed operational intuition (in terms of distributions) for locating failures.\ This becomes even more complicated with increasing size and complexity of the pB machines.\ For such cases, we simply rely on the model checking capabilities of state-of-the-art tools like PRISM.
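The endpoint distribution $\delta$ and its expected value can be reproduced by enumerating the branches of the fragment. The sketch below assumes, as the resulting distribution in the text indicates, that OpX's output parameter $nn$ reveals which probabilistic branch was taken ($nn = \pm 1$); that reading is an interpretive assumption, not taken verbatim from the machine text.

```python
from fractions import Fraction

# Endpoint distribution of INIT; OpX; (OpY if nn >= 0 else skip)
def run_fragment():
    dist = {}
    cc0 = 0                                                   # INIT: cc := 0
    for nn, p in [(1, Fraction(1, 2)), (-1, Fraction(1, 2))]: # OpX branches
        cc = cc0 + nn
        if nn >= 0:
            cc = 0                                            # OpY: cc := 0
        dist[cc] = dist.get(cc, Fraction(0)) + p              # else: skip
    return dist

delta = run_fragment()
expected = sum(cc * p for cc, p in delta.items())
assert delta == {0: Fraction(1, 2), -1: Fraction(1, 2)}
assert expected == Fraction(-1, 2)   # below e = 0: the EXPECTATIONS clause is violated
```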
To do this, we need to translate the pB machines to their equivalent PRISM models, and use the latter's algorithmic analysis to attempt to locate failures. In the sections that follow, we show how the analysis of an equivalent PRISM representation of a pB machine can provide a link between the operational viewpoints (distribution-centered) of both a proof-based environment (encapsulating probabilistic invariants), and a model checking platform.\ Such a link is key to getting a better understanding of the expected value-invariants over endpoint probabilistic distributions. \section{PRISM Reward Specification} The PRISM model checker permits models to be augmented with information about rewards (or costs).\ The tool can analyse properties which relate to the expected values of the rewards.\ A reward structure \cite{KNP07} can be used to represent additional information about the system the MDP (the model type used in this paper) represents --- for example, the expected number of packets sent (or lost) on a protocol's request. The temporal logic probabilistic CTL \cite{HJ94} has been extended in \cite{AHK03} to allow reward-based specifications, with operators that express {\it reachability}, {\it cumulative} and {\it instantaneous} rewards.\ For the purposes of this work, the instantaneous variant is the relevant one.
\noindent {\it \textbf{Definition 8:} (Expected instantaneous reward)\ The probabilistic CTL permits reward properties of the form \ $\mathord{{R}_{\sim r} [\mathbb{I}^{= k}]}$, at time-step $k$, where \ $\mathord{\sim \in \{<, \le, \ge, >\}}$, $\mathord{r \in \mathbb{R}_{\ge 0}}$ and $\mathord{k \in \mathbb{N}}$.\ The reward formula $\mathord{{R}_{\sim r} [\mathbb{I}^{= k}]}$\footnote{For MDPs we require Rmin or Rmax; this is allowed in PRISM by enabling the sparse engine in the tool's options menu.} is true if from some initial state $s_{0}$, the expected state reward at time-step $k$ meets the bound $\sim r$.\ For example, the specification $\mathord{R_{\ge 50}[\mathbb{I}^{=2}]}$ could be interpreted to mean that the expected number of packets sent by the protocol after two time-steps is at least 50.} \section{Quantitative Safety and pB Machines}\label{sec:safety} Formal approaches necessitate that every safe system be {\it invariant-driven} \cite{Dijkstra76}.\ Therefore, quantitative safety properties can be proved by verifying invariants.\ The EXPECTATIONS clause of a pB machine encapsulates the machine's safety property.\ However, an interesting dimension in investigating pB safety properties is whether it is possible to find a nondeterministic selection $\sqcap_{0\le n \le N} \ P_{n}$ of the operations of the machine that demonstrates the failure of the invariants --- finding such a schedule leads directly to the problem of locating counterexamples in probabilistic model checking \cite{HK07}.
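To make Definition 8 concrete, the sketch below evaluates the expected instantaneous reward at time-step $k$ for a small discrete-time Markov chain invented for this illustration (the states, probabilities and rewards are our own assumptions, not part of any model in this paper).\ For a DTMC this is just forward propagation of the state distribution; for MDPs, PRISM additionally minimises or maximises over schedulers ($Rmin$/$Rmax$).

```python
# Expected instantaneous reward R[I^{=k}]: propagate the state distribution
# of a discrete-time Markov chain for k steps, then average the state reward.
# The chain, probabilities and rewards below are invented for illustration.
P = {                                  # transition probabilities P[s][t]
    "busy": {"busy": 0.9, "idle": 0.1},
    "idle": {"busy": 0.5, "idle": 0.5},
}
reward = {"busy": 60.0, "idle": 0.0}   # e.g. packets sent per time-step

def instantaneous_reward(init_state, k):
    dist = {init_state: 1.0}
    for _ in range(k):
        nxt = {}
        for s, p in dist.items():
            for t, q in P[s].items():
                nxt[t] = nxt.get(t, 0.0) + p * q
        dist = nxt
    return sum(reward[s] * p for s, p in dist.items())

# checking a property like R_{>=50}[I^{=2}] from the state "busy":
print(instantaneous_reward("busy", 2) >= 50.0)   # True (the value is 51.6)
```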
In \cite{MMG08} McIver {\it et al.} set out a strategy for computing a ``refutation-of-safety'' certificate for probabilistic systems using model checking techniques.\ The existence of a certificate corresponds to an invariant failure in our own context of safety.\ In the next section, we present an automation which demonstrates the key features of that strategy and in particular show how it can be used to investigate pB safety properties.\ The theorem below is fundamental for investigating safety properties specified for pB machines. \noindent {\it \textbf{Theorem:}\ Given a pB machine invariant $\xi$ and initial expression {\it e} encapsulated in the machine's EXPECTATIONS clause, let $\Delta$ be the finite result distribution over endpoints for the interleaving $\mathord{\mathrm{It} \ \sqcap_{0\le n \le N} \ P_{n} \ \mathrm{tI}}$ after the machine's initialisation INIT.\ A safe machine always guarantees that \begin{eqnarray} \begin{array}{l} e \ \Rrightarrow \ \wP{INIT}{\wP{(\mathrm{It} \ \sqcap_{0\le n \le N} \ P_{n}\ \mathrm{tI})}{\xi}} \quad \Longrightarrow \quad \displaystyle\int_{\Delta} \xi \ \ge \ e, \end{array}\label{eqn:theorem} \end{eqnarray} \noindent provided the pB machine's prover always discharges the proof obligations given by $\mathord{e \ \Rrightarrow \wP{INIT}{\xi}}$ \ and \ $\mathord{\xi \ \Rrightarrow \wP{(\mathrm{It} \ \sqcap_{0\le n \le N} \ P_{n} \ \mathrm{tI})}{\xi}}$ } respectively. \noindent {\it \textbf{Corollary:}\ Suppose it is always possible to split the distribution $\Delta$ into finite sub-distributions $\delta_{k}$ for all $k \ge 0$, where $\delta_{k}$ is the result distribution on the $k$th iteration; then \begin{eqnarray} \begin{array}{l} (\forall k \ge 0,\ \delta_{k} \in \Delta: \quad \displaystyle\int_{\delta_{k}} \xi \ \ge \ e) \ .
\end{array}\label{eqn:corollary} \end{eqnarray}} The usefulness of the corollary is in the practical demonstration of the theorem.\ Its simplicity lies in the fact that, if a verifier has access to a probabilistic computation tree interpreting the execution traces of the machine, then these traces are finite with respect to the result distributions they represent.\ Using bounded model checking techniques, it then suffices to argue that only finite values of $k$ are required to establish the proof obligation for a safe pB machine.\ Moreover, with state-of-the-art probabilistic model checking tools such as PRISM, it is possible to identify a sub-space of the entire distribution in which the failure is located (if any). \section{YAGA: A pAMN To PRISM Translator}\label{sec:YAGA} In section \ref{sec:safety}, we stated a theorem that is central to defining safety features for pB machines with a finite trace distribution over endpoints.\ The corollary of that theorem has the practical interpretation that bounded model checking techniques, when equipped with reward structures, are likely to provide an intuition for locating invariant failures in their transformed proof-based models.\ In this section we discuss a language-level translator nicknamed \texttt{YAGA}\footnote{The name YAGA is coined from an Igbo (a language largely spoken in southeastern Nigeria) word --- YAGAzie, which literally means ``may it go well ...''.\ In reality, it could be argued that YAGA is simply Yet Another Gangling Automation.} with the architectural framework shown in Fig.
\ref{fig:YAGA}.\ \texttt{YAGA}, a Java-based implementation of the algorithm {\it pAMN2PRISM} (shown in Appendix A), is a prototype system that essentially takes a pAMN framework, encapsulating a pB model (with syntactic checks already discharged) as its input parameter, and generates its precise probabilistic action systems representation in the PRISM language.\ The associated reward structure of the generated PRISM model is inherited from the pB machine's EXPECTATIONS clause.\ The PRISM model checker then readily offers its temporal logic specification (as probabilistic CTL formulas) which can easily be checked by conducting experiments on the transformed model.\ The experimental results are sufficient to validate (or refute) the probabilistic invariants specified in the abstract machine's EXPECTATIONS clause. \subsection{Overview: YAGA Transformation Rules} We summarise the transformation rules as follows.\ YAGA's algorithmic interpretation is in Appendix A. \begin{figure} \caption{\rm YAGA - Architectural Overview} \label{fig:YAGA} \end{figure} \subsubsection{Main Module} \noindent {\bf PRISM constants list:} Are constructed from the pB machine's parameter list (if any) and the CONSTANTS clause.\ The type of a constant is implicitly checked from the PROPERTIES clause. \noindent{\bf PRISM formula list:} Are auto-generated as {\it atomic} predicates from the pB machine's PROPERTIES and INVARIANT clauses. \noindent{\bf PRISM module name:} Is the pB machine's name. \noindent{\bf PRISM variables declaration and initial values list:} Are constructed from each variable in the VARIABLES clause, its type in the INVARIANT clause, and its initial values from the INITIALISATION clause.\ The lower and upper limits of the variables are respectively the default lowest values of their types, and a bound specified from the PRISM constants list (above).
\noindent{\bf PRISM update statements:} Each update statement is labeled with the operation's name from the pB machine.\ In addition, \begin{itemize} \item [(a)] its guard is inherited from the guard in the pB machine's OPERATIONS clause and strengthened by the formulas in the PRISM formula list, such that \item [(b)] the choice of formulas depends on the expressions in the pB machine's update statement.\ For each update, YAGA checks that the formula-dependent expressions are included in the PRISM guard. \end{itemize} \subsubsection{Counter Module} The counter module is a vital encoding that helps enumerate the distributions $\delta_{k}$ (see the corollary) for finite $k$ steps.\ To capture this behaviour in a model checking environment, we apply the following rules. \noindent{\bf PRISM module name:} Counter. \noindent{\bf PRISM variable declaration and initial value:} Variable {\it count} is initially set to 0 and bounded by $\mathord{(MAX\_COUNT + 1)}$. \noindent{\bf PRISM update statement:} Each update is constructed to synchronise with the updates in the main module.\ They can only increment the {\it count} variable by 1 on each action.\ In addition, this module also contains a similar unsynchronised update statement which ensures that {\it count} will eventually reach $\mathord{(MAX\_COUNT + 1)}$. \subsubsection{Reward Structure} The specific reward structure is inherited from the pB machine's EXPECTATIONS clause --- states where the {\it count} variable equals $\mathord{(MAX\_COUNT + 1)}$ are worth the random variable value specified in the EXPECTATIONS clause plus $\mathord{(MAX\_COUNT)}$\footnote{This padding is to ensure the PRISM engine is consistent with computing positive instantaneous rewards.\ Finally, we subtract this parameter from the PRISM computed reward value.}. \ However, to make the construction of our reward structures precise for model checking, we note more formally as a result of the theorem in sec.
\ref{sec:safety} that \noindent {\it \textbf{Remark 1:} Given any pB machine invariant $\xi$ and initial expression {\it e}, then from an initial state $s_{0}$, after the machine's initialisation INIT, any arbitrary interleaving $\mathord{\mathrm{It} \ \sqcap_{0\le n \le N} \ P_{n} \ \mathrm{tI}}$ must guarantee that: \begin{eqnarray} \begin{array}{l} (\forall k \in [0, MAX\_COUNT + 1]: \ e \ \Rrightarrow \ \wP{INIT}{\wP{(\mathrm{It} \ \sqcap_{0\le n \le N} P_{n} \ \mathrm{tI})}{\xi}} \ \Rightarrow \ Rmin_{=?} [\mathbb{I}^{= k}] \ \ge \ \xi.s_{0}) \end{array}\label{eq:bdmodelchecking} \end{eqnarray} such that the expected minimum instantaneous reward at the $k$th step is worth the random variable value of the EXPECTATIONS clause plus $\mathord{MAX\_COUNT}$.\ We recall that the {\it Counter} module keeps explicit track of the $k$th time-step, and the expected value here is captured by the sub-distribution in (\ref{eqn:corollary}). }\\ \noindent {\it \textbf{Remark 2:} However, if there exists some {\it k} such that (\ref{eq:bdmodelchecking}) above fails to hold, then $\xi$ cannot be an invariant}. \section{Case Study: A Library Bookkeeping System}\label{sec:casestudy} We present a pB machine which captures the basic operations underlying the accounting package of a library system --- the implication of an undischargeable proof obligation of the machine on the performance of the library was an open problem in \cite{HJRMM03}.\ The state of the machine contains four variables: {\it booksInLibrary}, {\it loansStarted}, {\it loansEnded} and {\it booksLost} which are respectively used to keep track of: the number of books in the library, the number of book loans initiated by the library, the number of book loans completed by the library, and the number of books possibly never returned to the library.
In its first design, the machine has two operations: {\it StartLoan}, to initiate a loan on a book, and {\it EndLoan}, to terminate the loan of a book.\ The {\it StartLoan} operation has a precondition that there are books available for loan; it decrements {\it booksInLibrary} and increments {\it loansStarted}; when a book is returned, the {\it EndLoan} operation reverses the effect of the {\it StartLoan} operation by recording that either the book ``really is'' returned, or is actually reported lost with some probability $pp$, so that {\it booksLost} is incremented. The machine uses the random variables {\it loansEnded} and {\it booksLost} to record the expected losses of books over time.\ Since with probability $pp$ a book is lost on each {\it EndLoan} operation, the library system would be expected to lose a proportion $pp$ of the books loaned over a number of {\it EndLoan} operations.\ However, to ensure that the library is always in the business of lending books, we define the expected value-invariant \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} \displaystyle\int_{\Delta} \ (pp \times loansEnded - booksLost) \ \ge \ 0, \end{array}\label{eq:inv} \end{eqnarray} \noindent which captures the idea that the expected value of the random variable $\mathord{pp \times loansEnded - booksLost}$ over its endpoint distributions $\Delta$ can never be decreased below 0.\ This is indeed a safety property for the library and ought to be checked throughout its lifetime to ensure it is not violated by future designs.\ Below, we present two designs of the library, keeping in mind the property specified in (\ref{eq:inv}).
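Why is (\ref{eq:inv}) plausible for the {\it StartLoan}/{\it EndLoan} dynamics alone?\ Each {\it EndLoan} increases $pp \times loansEnded$ by $pp$ deterministically and increases $booksLost$ by 1 with probability $pp$, so the expected change of the random variable is $pp - pp = 0$.\ The sketch below (our own illustration; the reading of {\it EndLoan} is an assumption taken from the informal description) verifies this in exact arithmetic:

```python
from fractions import Fraction

def end_loan(dist, pp):
    # Our reading of EndLoan: loansEnded is always incremented; with
    # probability pp the book is reported lost, incrementing booksLost.
    out = {}
    for (ended, lost), p in dist.items():
        for d_lost, q in ((1, pp), (0, 1 - pp)):
            s = (ended + 1, lost + d_lost)
            out[s] = out.get(s, Fraction(0)) + p * q
    return out

def xi(dist, pp):
    # expected value of pp*loansEnded - booksLost over the distribution
    return sum(p * (pp * ended - lost) for (ended, lost), p in dist.items())

pp = Fraction(1, 2)
dist = {(0, 0): Fraction(1)}
for _ in range(10):               # ten consecutive EndLoan operations
    dist = end_loan(dist, pp)
    assert xi(dist, pp) == 0      # the expectation stays exactly at 0
```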
\subsection{A Safe Library Bookkeeping System} Since there are no restrictions on when the operations of the machine can be invoked, except for the obvious preconditions on {\it StartLoan} and {\it EndLoan}, the specification of a safe library system is the nondeterministic choice given by \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} SafeLibrary \ \triangleq \ \ \mathrm{It} \ StartLoan \ \sqcap \ EndLoan \ \mathrm{tI} \ . \end{array}\label{eq:spec1} \end{eqnarray} \subsection{An Unsafe Library Bookkeeping System} Suppose that to enable the library accountant to do a periodic stock take of the library transactions, a new operation called {\it StockTake} is introduced into the system.\ The {\it StockTake} operation is very similar to the initialisation, except for an extra output ({\it totalCost}) to record the cost of replacing the books lost up to the time of doing a stock take.\ We augment (\ref{eq:spec1}) to give another specification of the library as \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} UnsafeLibrary \ \triangleq \ \ \mathrm{It} \ StartLoan \ \sqcap \ EndLoan \ \sqcap \ StockTake \ \mathrm{tI} \ . \end{array}\label{eq:spec2} \end{eqnarray} The complete pB machine describing the library (all of its three operations) is shown in Appendix A. \begin{figure} \caption{\rm Library Bookkeeping System} \label{fig:result} \end{figure} \section{PRISM Experimental Results} In this section we report experimental results that are indeed performance-style characterisations of the two designs of our library model --- the safe library (\ref{eq:spec1}) and the unsafe library (\ref{eq:spec2}).\ Our interest lies in justifying why one design of the library (without {\it StockTake}) is safe and why the other (with {\it StockTake}) is unsafe.\ We note that our safety property of concern is captured by (\ref{eq:inv}).
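Before turning to the PRISM experiments, the contrast between the two designs can be previewed with a small hand-rolled value iteration.\ This is our own sketch, not YAGA output: the operation semantics, and a stuttering step standing in for the Counter module's unsynchronised update, are assumptions read off the informal description.\ The demon (a state-dependent scheduler) minimises the expected value of the random variable of (\ref{eq:inv}).

```python
from fractions import Fraction
from functools import lru_cache

PP, TOTAL_BOOKS = Fraction(1, 2), 1   # small parameters for illustration

def xi(state):
    # the random variable of the EXPECTATIONS clause: pp*loansEnded - booksLost
    books, started, ended, lost = state
    return PP * ended - lost

def moves(state, with_stocktake):
    # Enabled operations, as we read them off the informal description.
    books, started, ended, lost = state
    yield [(state, Fraction(1))]                              # stutter (Counter's
                                                              # unsynchronised step)
    if books > 0:                                             # StartLoan
        yield [((books - 1, started + 1, ended, lost), Fraction(1))]
    if started > ended:                                       # EndLoan
        yield [((books, started, ended + 1, lost + 1), PP),   # book reported lost
               ((books + 1, started, ended + 1, lost), 1 - PP)]
    if with_stocktake:                                        # StockTake: acts like
        yield [((TOTAL_BOOKS, 0, 0, 0), Fraction(1))]         # the initialisation

@lru_cache(maxsize=None)
def rmin(state, k, with_stocktake):
    # minimal expected value of xi after k steps (the demon minimises)
    if k == 0:
        return xi(state)
    return min(sum(p * rmin(s, k - 1, with_stocktake) for s, p in branch)
               for branch in moves(state, with_stocktake))

init = (TOTAL_BOOKS, 0, 0, 0)
print(rmin(init, 3, True))    # -1/4: with StockTake the expectation drops below 0
print(rmin(init, 3, False))   # 0: without StockTake it never does
```

The winning demonic schedule is {\it StartLoan}; {\it EndLoan}; then {\it StockTake} only in the branch where the book came back, wiping out the positive half of the expectation while keeping the negative half.
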
To carry out this performance analysis, we use the capabilities of YAGA.\ The equivalent PRISM representation of the pB machine discussed in the previous section as generated by YAGA is shown in Appendix A.\ From {Remark 1}, our obvious reward specification becomes \begin{eqnarray} \hspace*{-1ex}\begin{array}[t]{l} (\forall \ 0 \le k \le MAX\_COUNT: \quad Rmin_{=?} [\mathbb{I}^{= k}] - MAX\_COUNT)\ . \end{array}\label{eq:spec3} \end{eqnarray} The requirement for a safe library system is that for all time-steps $k$, $Rmin$ never decreases below zero.\ However, our experimental result (shown in Fig. \ref{fig:result}) reveals that the unsafe library indeed violates this safety property after just three time-steps ($k$ = 3) of its execution --- that is, for $MAX\_COUNT$ = 2, $\mathord{pp = 0.5}$, and $totalBooks = 1$, the expected minimum instantaneous reward $Rmin$ of the unsafe library is -0.25.\ A quick conclusion that can be drawn from our analysis is that introducing the ``demonic'' {\it StockTake} operation has the adverse effect of subverting the overall performance outlook of the library system.\ Our result is a practical demonstration of the claim by Hoang T.S. {\it et al.} \cite{HJRMM03} in an attempt to explain why the presence of the {\it StockTake} operation would result in a failure of the proof obligation for the probabilistic invariant property in (\ref{eq:inv}).\ In the proof-based system, reaching this conclusion was practically impossible. \section{Conclusion and Future Work} This paper has explored the practical application of reward-based specifications of bounded model checking techniques \cite{CBC03} to locate failures in the context of proof-based verification for simple safety properties of probabilistic systems.\ We demonstrated the rich benefits that can be derived by complementing proof-based probabilistic verification techniques with a model checking performance-style evaluation, in a manner that has never been previously explored.
Our contribution is seen as a first attempt at fully integrating quantitative performance analysis into systems design at early stages of development.\ Our method scales in this regard since it can be carried out at the level of source code and hence can guide system developers with decisions on a choice of design most suitable for implementation. However, in order to fully integrate this performance-style analysis into the software development process, we intend, in the future, to incorporate into YAGA a diagnostic mechanism based on the counterexample-location techniques employed in \cite{HK07,HS07}.\ Our intention is that such a mechanism will report explicit cause(s) of failure by also exploring the backward analysis strategy in \cite{MMG08}.\ It is our belief that these enhancements would provide a useful performance analysis suite for probabilistic systems developed in the pB language. \begin{figure} \caption{\rm The pB Model of Section \ref{sec:casestudy}} \label{fig:pB} \end{figure} \begin{figure} \caption{\rm A YAGA-Generated PRISM Representation of Fig. \ref{fig:pB}} \label{fig:PRISM} \end{figure} \begin{figure} \caption{\rm An Algorithmic Description of YAGA} \end{figure} \end{document}
\begin{document} \title{Improved Adams-type inequalities and their extremals in dimension $2m$} \begin{abstract} In this paper we prove the existence of extremal functions for the Adams-Moser-Trudinger inequality on the Sobolev space $H^{m}(\Omega)$, where $\Omega$ is any bounded, smooth, open subset of $\mathbb{R}^{2m}$, $m\ge 1$. Moreover, we extend this result to improved versions of Adams' inequality of Adimurthi-Druet type. Our strategy is based on blow-up analysis for sequences of subcritical extremals and introduces several new techniques and constructions. The most important one is a new procedure for obtaining capacity-type estimates on annular regions. \end{abstract} \section{Introduction} Given $m\in \mathbb{N}$, $m\ge 1$, let $\Omega\subseteq \mathbb{R}^{2m}$ be a bounded open set with smooth boundary. For any $\beta >0$, we consider the Moser-Trudinger functional $$ F_{\beta}(u) := \int_{\Omega} e^{\beta u^2}dx $$ and the set $$ M_{0}:= \cur{u\in H^m_0(\Omega)\: :\; \|u\|_{H^m_0(\Omega)}\le 1}, $$ where $$ \|u\|_{H^m_0(\Omega)} = \|\Delta^{\frac{m}{2}} u\|_{L^2(\Omega)} \qquad \mbox{ and }\qquad \Delta^{\frac{m}{2}} u := \begin{Si}{cl} \Delta^{n} u & \mbox{ if } m = 2n, \;n\in \mathbb{N}, \\ \nabla\Delta^n u & \mbox{ if } m = 2n +1,\; n\in \mathbb{N}. \end{Si} $$ The Adams-Moser-Trudinger inequality (see \cite{Adams}) implies that \begin{equation}\label{Adams} \sup_{M_{0}} F_\beta <+\infty \qquad \Longleftrightarrow \qquad \beta \le \beta^*, \end{equation} where $\beta^*:= m (2m-1)! Vol(\mathbb S^{2m})$. This result is an extension to dimension $2m$ of the work done by Moser \cite{mos} and Trudinger \cite{tru} in the case $m=1$, and can be considered as a critical version of the Sobolev inequality for the space $H^m_0(\Omega)$. A classical problem related to Moser-Trudinger and Sobolev-type embeddings consists in investigating the existence of extremal functions. 
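The constant $\beta^*$ above can be made concrete in low dimensions; a direct computation from the definition, using $Vol(\mathbb S^{2})=4\pi$ and $Vol(\mathbb S^{4})=\frac{8\pi^2}{3}$, gives
$$
\beta^*\big|_{m=1} = 1\cdot 1!\cdot Vol(\mathbb S^{2}) = 4\pi, \qquad \beta^*\big|_{m=2} = 2\cdot 3!\cdot Vol(\mathbb S^{4}) = 32\pi^2,
$$
recovering Moser's constant $4\pi$ for $H^1_0(\Omega)$, $\Omega\subseteq \mathbb{R}^2$, and Adams' constant $32\pi^2$ for $H^2_0(\Omega)$, $\Omega\subseteq \mathbb{R}^4$.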
While it is rather simple to prove that the supremum in \eqref{Adams} is attained for any $\beta <\beta^*$, lack of compactness due to concentration phenomena makes the critical case $\beta = \beta^*$ challenging. The first proof of existence of extremals for \eqref{Adams} was given by Carleson and Chang \cite{CC} in the special setting $m=1$ and $\Omega = B_1(0)$. The case of arbitrary domains $\Omega\subseteq \mathbb{R}^2$ was treated by Flucher in \cite{flu}. These results are based on sharp estimates on the values that $F_\beta$ can attain on concentrating sequences of functions. Recently, a different approach was proposed in \cite{MarMan} and \cite{DizDru}. Concerning the higher order case, as far as we know, the existence of extremals was proved only for $m=2$ by Lu and Yang in \cite{LuYa2} (see also \cite{Ndi}). In this work, we are able to study the problem for any arbitrary $m\ge 1$. Indeed, we prove here the following result. \begin{trm} Let $\Omega\subseteq \mathbb{R}^{2m}$ be a smooth bounded domain, then for any $m\ge 1$ and $\beta\le \beta^*$ the supremum in \eqref{Adams} is attained, i.e. there exists a function $u^*\in M_0$ such that $F_{\beta}(u^*) = \sup_{M_0} F_\beta$. \end{trm} More generally, we are interested in studying extremal functions for a larger family of inequalities. Let us denote $$ \lambda_1(\Omega):= \inf_{u\in H^m_0(\Omega), u\neq 0} \frac{\|u\|^2_{H^m_0(\Omega)}}{\|u\|^2_{L^2(\Omega)}}. $$ For the 2-dimensional case, in \cite{AD} it was proved that if $\Omega \subseteq \mathbb{R}^2$ and $0\le \alpha<\lambda_1(\Omega)$, then \begin{equation}\label{AD} \sup_{u\in M_0} \int_{\Omega} e^{\beta^*u^2(1+\alpha\|u\|^2_{L^2(\Omega)})} dx <+\infty. \end{equation} Moreover the bound on $\alpha$ is sharp, i.e. the supremum is infinite for any $\alpha \ge\lambda_1(\Omega)$. 
A stronger form of this inequality can be deduced from the results in \cite{Tint}: \begin{equation}\label{ADstr} \sup_{u\in H^1_0(\Omega),\; \|u\|_{H^1_0(\Omega)}^2-\alpha\|u\|_{L^2(\Omega)}^2 \le 1 } F_{\beta^*} <+\infty. \end{equation} Surprisingly, the study of extremals for the stronger inequality \eqref{ADstr} is easier than for \eqref{AD}. In fact, it was proved in \cite{Yang} that the supremum in \eqref{ADstr} is attained for any $0\le \alpha<\lambda_1(\Omega)$, while existence of extremal functions for \eqref{AD} is known only for small values of $\alpha$ (see \cite{LuYa}). Such results have been extended to dimension 4 in \cite{LuYa2} and \cite{Ngu}. In this paper, we consider the case of an arbitrary $m\ge 1$. For any $0\le \alpha <\lambda_1(\Omega)$ we denote $$ \|u\|_{\alpha}^2 := \|u\|_{H^{m}_0(\Omega) }^2 - \alpha\|u\|_{L^2(\Omega)}^2, $$ and we consider the set $$ M_{\alpha}:= \cur{u\in H^m_0(\Omega)\: :\; \|u\|_{\alpha}\le 1} $$ and the quantity \begin{equation}\label{sup} S_{\alpha,\beta}:= \sup_{M_\alpha} F_\beta. \end{equation} Observe that Poincar\'e's inequality implies that for any $0\le \alpha<\lambda_1(\Omega)$, $\|\cdot\|_\alpha$ is a norm on $H^m_0$ which is equivalent to $\|\cdot\|_{H^m_0}$. Our main result is the following: \begin{trm}\label{main} Let $\Omega\subseteq \mathbb{R}^{2m}$ be a smooth bounded domain; then for any $m\ge 1$ the following holds: \begin{enumerate} \item For any $0\le \beta \le \beta^*$ and $0\le \alpha <\lambda_1(\Omega)$ we have $S_{\alpha,\beta}<+\infty$, and there exists a function $u^*\in M_\alpha$ such that $F_\beta(u^*)=S_{\alpha,\beta}$. \item If $\alpha \ge \lambda_1(\Omega)$, or $\beta > \beta^*$, we have $S_{\alpha,\beta}=+\infty$. \end{enumerate} \end{trm} The proof of the first part of Theorem \ref{main} for $\beta = \beta^*$ is the most difficult one and it is based on blow-up analysis for sequences of sub-critical extremals.
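Incidentally, the norm equivalence granted by Poincar\'e's inequality is quantitative: by the definition of $\lambda_1(\Omega)$ one has $\|u\|^2_{L^2(\Omega)} \le \lambda_1(\Omega)^{-1} \|u\|^2_{H^m_0(\Omega)}$, whence
$$
\bra{1-\frac{\alpha}{\lambda_1(\Omega)}} \|u\|^2_{H^m_0(\Omega)} \ \le \ \|u\|_{\alpha}^2 \ \le \ \|u\|^2_{H^m_0(\Omega)} \qquad \mbox{ for all } u \in H^m_0(\Omega).
$$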
We will take a sequence $\beta_n \nearrow \beta^*$ and find $u_n \in M_\alpha$, such that $F_{\beta_n}(u_n) = S_{\alpha,\beta_n}$. If $u_n$ is bounded in $L^\infty(\Omega)$, then standard elliptic regularity proves that $u_n $ converges in $H^m(\Omega)$ to a function $u_0 \in M_\alpha$ such that $F_{\beta^*}(u_0)= S_{\alpha,\beta^*}$. Hence, one has to exclude that $u_n$ blows up, i.e. that $\displaystyle{\mu_n:= \max_{\overline{\Omega}}|u_n|\to +\infty}$. This is done through a contradiction argument. On the one hand, if $\mu_n \to +\infty$, one can show that $u_n$ admits a unique blow-up point $x_0$ and give a precise description of the behavior of $u_n$ around $x_0$. Specifically, we will prove (see Proposition \ref{big}) that blow-up implies \begin{equation*} S_{\alpha,\beta^*}= \lim_{n\to + \infty} F_{\beta_n}(u_n) \le |\Omega|+\frac{Vol(\mathbb S^{2m})}{2^{2m}} e^{ \displaystyle{\beta^* \bra{C_{\alpha,x_0} -I_m}}}, \end{equation*} where $C_{\alpha,x_0}$ is the value at $x_0$ of the trace of the regular part of the Green's function for the operator $(-\Delta)^m - \alpha$, and $I_m$ is a dimensional constant. On the other hand, by exhibiting a suitable test function, we will prove (see Proposition \ref{proptest}) that such an upper bound cannot hold, concluding the proof. While the general strategy is rather standard in the study of this kind of problems (see e.g. \cite{AD}, \cite{flu}, \cite{IulMan}, \cite{Li1}, \cite{Li2}, \cite{LuYa}, \cite{LuYa2}, \cite{Ngu} and \cite{Yang}), our proof introduces several elements of novelty. First, our description of the behaviour of $u_n$ near its blow-up point $x_0$ is sharper than the one given for $m=2$ in \cite{LuYa2} and \cite{Ngu}. There, in order to compensate for the lack of sufficiently sharp standard elliptic estimates on a small scale, the authors needed to modify the standard scaling for the Euler-Lagrange equation satisfied by $u_n$.
Instead, following the approach first introduced in \cite{mar}, we are able to use the standard scaling, replacing classical elliptic estimates with Lorentz-Zygmund type regularity estimates. Secondly, in order to describe the behaviour of $u_n$ far from $x_0$, we extend to higher dimension the approach of Adimurthi and Druet \cite{AD}, which is based on the properties of truncations of $u_n$. To preserve the high-order regularity required in the high-dimensional setting, we introduce polyharmonic truncations. This step requires precise pointwise estimates on the derivatives of $u_n$, which are a generalisation of the ones in \cite{MS}, where the authors study sequences of positive critical points of $F_\beta$ constrained to spheres in $H^m_0$. We stress that the results of \cite{MS} cannot be directly applied to our case, since here subcritical maximizers are not necessarily positive in $\Omega$ if $m\ge 2$. In addition, the presence of the parameter $\alpha$ modifies the Euler-Lagrange equation. While the differences in the nonlinearity do not create significant issues, the argument in \cite{MS} relies strongly on the positivity assumption. Therefore, here we propose a different proof. The most important feature of our proof of Theorem \ref{main} is that it does not rely on explicit capacity estimates. A crucial step in our blow-up analysis consists in finding sharp lower bounds for the integral of $|\Delta^\frac{m}{2}u_n|^2$ on annular regions.
In all the earlier works, this is achieved by comparing the energy of $u_n$ with the quantity $$ i(a,b,R_1,R_2):= \min_{u\in E_{a,b}} \int_{ \cur{R_1 \le |x| \le R_2 }} |\Delta^\frac{m}{2}u|^2 dx $$ for suitable choices of $a=(a_0,\ldots,a_{m-1})$, $b=(b_0,\ldots,b_{m-1})$, and where $E_{a,b}$ denotes the set of all the $H^m$ functions on $\cur{R_1 \le |x| \le R_2 } $ satisfying $\partial_\nu^i u = a_i$ on $\partial B_{R_1}(0)$ and $\partial_\nu^i u = b_i$ on $\partial B_{R_2}(0)$ for $i =0,\ldots, m-1$. While for $m=1$ or $m=2$, $i(a,b,R_1,R_2)$ can be explicitly computed, finding its expression for an arbitrary $m$ appears to be very hard. In our work we show that these capacity estimates are unnecessary, since equivalent lower bounds can be obtained by directly comparing the Dirichlet energy of $u_n$ with the energy of a suitable polyharmonic function. This results in a considerable simplification of the proof, even for $m=1,2$. Finally, working with arbitrary values of $m$ makes the construction of good test functions and the study of blow-up near $\partial \Omega$ much harder, since standard moving-plane techniques are not available for $m\ge 2$. To address the last issue, we will apply the Pohozaev-type identity introduced in \cite{RobWei} and applied in \cite{MarPet} to Liouville-type equations. It would be interesting to extend our result to Adams' inequality in odd dimension or, more generally, to the non-local Moser-Trudinger inequality for fractional-order Sobolev spaces proved in \cite{Marfraz}, for which the existence of extremals is still open. In this fractional setting, the behavior of blowing-up subcritical extremals was studied in \cite{MarSch} (at least for nonnegative functions). However, obtaining capacity-type estimates becomes much more challenging, and our argument to avoid them relies strongly on the local nature of the operator $(-\Delta)^m$. This paper is organized as follows.
In Section \ref{prel}, we will introduce some notation and state some preliminary results. In Section \ref{subcrit}, we will focus on the subcritical case $\beta<\beta^*$. In Section \ref{main sec}, we will analyze the blow-up behavior of subcritical extremals. Since this part of the paper will discuss the most important elements of our work, it will be divided into several subsections. Finally, in Section \ref{sec test}, we will introduce new test functions and we will complete the proof of Theorem \ref{main}. For the reader's convenience, we will recall in the Appendix some known results concerning elliptic estimates for the operator $(-\Delta)^m$. \section*{Acknowledgments} We are grateful to Professor Luca Martinazzi for introducing us to the problem and for supporting us in the preparation of this work with his encouragement and with many invaluable suggestions. A consistent part of this work was carried out while we were employed by the University of Basel. We would like to thank the Department of Mathematics and Computer Science for their hospitality and support. \section{Preliminaries}\label{prel} Throughout the paper we will denote by $\omega_l$ the $l-$dimensional Hausdorff measure of the unit sphere $\mathbb S^{l}\subseteq \mathbb{R}^{l+1}.$ We recall that, for any $m\ge 1$, \begin{equation}\label{omega} \omega_{2m-1} = \frac{2\pi^m}{(m-1)!} \qquad \mbox{ and } \qquad \omega_{2m} = \frac{2^{m+1}\pi^m}{(2m-1)!!} . \end{equation} It is known that the fundamental solution of $(-\Delta)^m$ in $\mathbb{R}^{2m}$ is given by $-\frac{1}{\gamma_m} \log|x|$, where $$ \gamma_m:= \omega_{2m-1} 2^{2m-2} [(m-1)!]^2 = \frac{\beta^*}{2m}, $$ with $\beta^*$ defined as in \eqref{Adams}. In other words, one has $$ (-\Delta)^m \bra{-\frac{2m}{\beta^*} \log|x| }= \delta_0 \qquad \mbox{ in } \mathbb{R}^{2m}.
$$ More generally, for any $1 \le l \le m-1$, we have $$ \Delta^l (\log|x|) =\widetilde K_{m,l} \frac{1}{|x|^{2l}}, $$ where \begin{equation}\label{Kml1} \begin{split} \widetilde K_{m,l} & = (-1)^{l+1} 2^{2l-1} \frac{(l-1)!(m-1)!}{(m-l-1)! }. \end{split}\end{equation} This also yields $$ \Delta^{l+\frac{1}{2}} (\log|x|) = -2l \widetilde K_{m,l} \frac{x}{|x|^{2l+2}}. $$ For any $1\le j\le 2m-1$, we define \begin{equation}\label{Kml2} K_{m,\frac{j}{2}}:= \begin{Si}{cl} \widetilde K_{m,\frac{j}{2}} & \text{ for } j \text{ even }\\ \rule{0cm}{0.5cm} -(j-1) \widetilde K_{m,\frac{j-1}{2}} & \text{ for } j \text{ odd}, j\ge 3, \\ 1 & \text{ for } j= 1. \end{Si} \end{equation} Then, we obtain \begin{equation}\label{ej} \Delta^\frac{j}{2} (\log |x|) = \frac{K_{m,\frac{j}{2}}}{|x|^j} e_j(x), \qquad \text{ where }\qquad e_j(x):= \begin{Si}{cc} 1 & j \text{ even,}\\ \frac{x}{|x|} & j\text{ odd}. \end{Si} \end{equation} In order to use the same notation for all the values of $m$, we will use the symbol $\cdot $ to denote both the scalar product between vectors in $\mathbb{R}^{2m}$ and the standard Euclidean product between real numbers. This turns out to be very useful for writing compact integration-by-parts formulas. For instance, we will use several times the following Proposition: \begin{prop}\label{parts} Let $\Omega \subseteq \mathbb{R}^{2m}$ be a bounded open domain with Lipschitz boundary. Then, for any $u\in H^m(\Omega)$, $v\in H^{2m}(\Omega)$, we have $$ \int_{\Omega}\Delta^\frac{m}{2} u \cdot \Delta^\frac{m}{2} v\, dx = \int_{\Omega} u (-\Delta)^m v \, dx -\sum_{j=0}^{m-1} \int_{\partial \Omega} (-1)^{m+j} \nu \cdot \Delta^{\frac{j}{2}} u \, \Delta^{\frac{2m-j-1}{2}} v \, d\sigma, $$ where $\nu$ denotes the outer normal to $\partial \Omega$. \end{prop} A crucial role in our proof will be played by Green's functions for operators of the form $(-\Delta)^m -\alpha$.
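As a quick consistency check of \eqref{Kml1} in the simplest case $l=1$ (for $m\ge 2$): since $\Delta \log|x| = \mathrm{div}\, \frac{x}{|x|^2} = \frac{2m-2}{|x|^2}$ in $\mathbb{R}^{2m}\setminus\{0\}$, the formula indeed gives
$$
\widetilde K_{m,1} = (-1)^{2}\, 2^{1}\, \frac{0!\,(m-1)!}{(m-2)!} = 2(m-1).
$$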
We recall here that for any $x_0\in \Omega$, and $0\le \alpha <\lambda_1(\Omega)$, there exists a unique distributional solution $G_{\alpha,x_0}$ of \begin{equation}\label{green} \begin{Si}{cc} (-\Delta)^m G_{\alpha,x_0} = \alpha G_{\alpha,x_0} + \delta_{x_0} & \mbox{ in }\Omega,\\ G_{\alpha,x_0} = \partial_\nu G_{\alpha,x_0} = \ldots = \partial^{m-1}_{\nu} G_{\alpha,x_0} = 0 & \mbox{ on } \partial \Omega. \end{Si} \end{equation} Some of the main properties of the function $G_{\alpha,x_0}$ are listed in the following Proposition. We refer to \cite{Boggio} and \cite{AcSw} for the proof of the case $\alpha=0$, while the general case can be obtained with minor modifications. \begin{prop}\label{prop green} Let $\Omega$ be a bounded open set with smooth boundary. Then, for any $x_0\in \Omega$ and $0\le \alpha < \lambda_1(\Omega)$, we have: \begin{enumerate} \item There exist $C_{\alpha,x_0} \in \mathbb{R}$ and $\psi_{\alpha,x_0} \in C^{2m-1}(\overline{\Omega})$ such that $\psi_{\alpha,x_0}(x_0)=0$ and $$G_{\alpha,x_0}(x)= -\frac{2m}{\beta^*}\log|x-x_0| + C_{\alpha,x_0} + \psi_{\alpha,x_0}(x), \qquad \text{ for any } x\in \Omega \setminus \{x_0\}.$$ \item There exists a constant $C = C(m,\alpha,\Omega)$, independent of $x_0$, such that $$ |G_{\alpha, x_0}(x)| \le C |\log |x-x_0||, $$ and $$ |\nabla^l G_{\alpha,x_0}(x)| \le \frac{C}{|x-x_0|^l},$$ for any $ 1\le l \le 2m-1, x \in \Omega \setminus \{x_0\}.$ \item $G_{\alpha,x_0}(x)= G_{\alpha,x}(x_0)$, for any $x\in \Omega \setminus \{x_0\}.$ \end{enumerate} \end{prop} In addition, using integration by parts and Proposition \ref{prop green}, we can establish the following new property.
\begin{lemma}\label{int Green} For any $x_0\in \Omega$ and $0\le \alpha<\lambda_1(\Omega)$, we have $$ \int_{\Omega\setminus B_\delta(x_0)} |\Delta^\frac{m}{2} G_{\alpha,x_0} |^2 dx = \alpha \|G_{\alpha,x_0} \|_{L^2(\Omega)}^2 - \frac{2m }{\beta^*} \log \delta + C_{\alpha,x_0} +H_m + O(\delta |\log \delta|), $$ as $\delta \to 0$, where $C_{\alpha, x_0}$ is as in Proposition \ref{prop green} and \begin{equation}\label{Hm} H_m := \begin{Si}{cc} \displaystyle{\bra{\frac{2m}{\beta^*} }^2 \omega_{2m-1} \sum_{j=1}^{m-1} (-1)^{j+m} K_{m,\frac{j}{2}} K_{m,\frac{2m-j-1}{2}}} & \mbox{ if } m\ge 2,\\ 0 & \mbox{ if } m=1. \end{Si} \end{equation} \end{lemma} \begin{proof} From Proposition \ref{parts} applied in $\Omega \setminus B_\delta(x_0)$ and \eqref{green}, we find $$ \int_{\Omega\setminus B_\delta(x_0)} |\Delta^\frac{m}{2} G_{\alpha,x_0} |^2 dx = \alpha \int_{\Omega\setminus B_\delta(x_0)} G_{\alpha,x_0}^2 dx + \sum_{j=0}^{m-1}\int_{\partial B_\delta(x_0)} (-1)^{m+j} \nu \cdot \Delta^{\frac{j}{2}} G_{\alpha,x_0} \Delta^{\frac{2m-j-1}{2}} G_{\alpha,x_0} \, d\sigma.
$$ On $\partial B_\delta(x_0)$, Proposition \ref{prop green}, \eqref{ej}, and the identity $\frac{2m}{\beta^*}K_{m,\frac{2m-1}{2}}= \frac{(-1)^{m-1}}{\omega_{2m-1}} $ yield \[\begin{split} \nu \cdot G_{\alpha,x_0} \Delta^{\frac{2m-1}{2}} G_{\alpha,x_0} & =\bra{-\frac{2m}{\beta^*} \log \delta + C_{\alpha,x_0} +O(\delta)} \bra{\frac{-2m}{\beta^*}K_{m,\frac{2m-1}{2}}\delta^{1-2m}+O(1)} \\ &= \frac{(-1)^m }{\omega_{2m-1}} \delta^{1-2m} \bra{-\frac{2m}{\beta^*}\log \delta +C_{\alpha,x_0} + O(\delta) + O(\delta^{2m-1} |\log \delta|)}, \end{split} \] and, for $m\ge 2$ and $1\le j \le m-1$, that \[\begin{split} \nu \cdot \Delta^{\frac{j}{2}} G_{\alpha,x_0} \Delta^{\frac{2m-j-1}{2}} G_{\alpha,x_0} &= \bra{-\frac{2m}{\beta^*}K_{m,\frac{j}{2}}\delta^{-j}+O(1)} \bra{-\frac{2m }{\beta^*} K_{m,\frac{2m-j-1}{2}} \delta^{1+j-2m}+ O(1)} \\ & = \bra{\frac{2m}{\beta^*}}^2 K_{m,\frac{j}{2}} K_{m,\frac{2m-j-1}{2}} \delta^{1-2m} (1+O(\delta^j)). \end{split} \] Then, we get \begin{equation}\label{int boun} \sum_{j=0}^{m-1}\int_{\partial B_\delta(x_0)} (-1)^{m+j} \nu \cdot \Delta^{\frac{j}{2}} G_{\alpha,x_0} \Delta^{\frac{2m-j-1}{2}} G_{\alpha,x_0} \, d\sigma = - \frac{2m }{\beta^*} \log \delta + C_{\alpha,x_0} +H_m + O(\delta |\log \delta|), \end{equation} with $H_m$ as in \eqref{Hm}. Finally, applying again Proposition \ref{prop green}, we find \begin{equation}\label{l2 part} \int_{\Omega\setminus B_\delta(x_0)} G_{\alpha,x_0}^2 dx = \|G_{\alpha,x_0} \|_{L^2(\Omega)}^2 + O(\delta^{2m}\log^2 \delta). \end{equation} The conclusion follows by \eqref{int boun} and \eqref{l2 part}. \end{proof} \begin{rem}One can further observe that $$ H_m = \frac{m}{\beta^*} \sum_{j=1}^{m-1} \frac{(-1)^{[\frac{2j}{m}]}}{j}.
$$ Indeed, we have the identity $$ (-1)^m \omega_{2m-1} \frac{2m}{\beta^*} K_{m,\frac{j}{2}} K_{m,m-\frac{j}{2}-\frac{1}{2}} = \begin{Si}{cc} \frac{1}{j} & j \text{ even}, \\ \rule{0cm}{0.5cm} \frac{1}{2m-j-1} & j \text{ odd}. \end{Si} $$ Hence, \[\begin{split} \omega_{2m-1} \frac{2m}{\beta^*} \sum_{j=1}^{m-1} (-1)^{j+m} K_{m,\frac{j}{2}} K_{m,m-\frac{j}{2}-\frac{1}{2}} &= \sum_{j=1,\, j \text{ even}}^{m-1} \frac{1}{j} - \sum_{j=1,\, j \text{ odd}}^{m-1} \frac{1}{2m-j-1} \\ & = \sum_{j=1,\, j \text{ even}}^{m-1} \frac{1}{j} - \sum_{j=m,\, j \text{ even}}^{2m-2} \frac{1}{j} \\ & = \frac{1}{2} \sum_{j=1}^{m-1} \frac{(-1)^{[\frac{2j}{m}]}}{j}. \end{split} \] \end{rem} We conclude this section by recalling the following standard consequence of Adams' inequality and the density of $C^{\infty}_c(\Omega)$ in $H^m_0(\Omega)$. \begin{lemma}\label{integrability} For any $u\in H^m_0(\Omega)$ and $\beta\in \mathbb{R}^+$, we have $e^{\beta u^2}\in L^1(\Omega)$. \end{lemma} \begin{proof} For any $\varepsilon>0$ we can find a function $v_{\varepsilon}\in C^\infty_0(\Omega)$ such that $\|v_\varepsilon - u \|_{H^m_0(\Omega)}^2\le \varepsilon $. Since $$ u^2 = v_\varepsilon^2 + (u-v_\varepsilon)^2 + 2 v_\varepsilon (u-v_\varepsilon) \le 2 v_\varepsilon^2 + 2 (u-v_\varepsilon)^2, $$ we have $$ e^{\beta u^2} \le \|e^{2\beta v_\varepsilon^2}\|_{L^\infty(\Omega)} e^{2\beta (u-v_\varepsilon)^2} \le \|e^{2\beta v_\varepsilon^2}\|_{L^\infty(\Omega)} e^{2\beta\varepsilon \bra{\frac{u-v_\varepsilon}{\|u-v_\varepsilon\|_{H^m_0(\Omega)}}}^2}. $$ If we choose $\varepsilon>0$ small enough, we get $2\varepsilon \beta \le \beta^*$ and, applying Adams' inequality \eqref{Adams}, we find $$ \int_{\Omega} e^{\beta u^2}dx \le \|e^{2\beta v_\varepsilon^2}\|_{L^\infty(\Omega)} F_{\beta^*}\bra{ \frac{u-v_\varepsilon}{\|u-v_\varepsilon\|_{H^m_0(\Omega)}}} <+\infty.
$$ \end{proof} \section{Subcritical inequalities and their extremals}\label{subcrit} In this section, we prove the existence of extremal functions for $F_\beta$ on $M_\alpha$ in the subcritical case $\beta<\beta^*$, $0\le \alpha<\lambda_1(\Omega)$. As in the case $m=1$, this is a consequence of Vitali's convergence theorem and of the following improved Adams-type inequality, which is a generalization of Theorem $1.6$ in \cite{Lions}. \begin{prop}\label{Lions} Let $u_n\in H^m_0(\Omega)$ be a sequence of functions such that $\|u_n\|_{H^m_0(\Omega)}\le 1$ and $u_n\rightharpoonup u_0$ in $H^m_0(\Omega)$. Then, for any $0<p<\frac{1}{1-\|u_0\|_{H^m_0}^2}$, we have $$ \limsup_{n\to +\infty } F_{p\beta^*} (u_n) <+\infty. $$ \end{prop} \begin{proof} First, we observe that $$ \|u_n-u_0\|_{H^m_0(\Omega)}^2 = \|u_n\|_{H^m_0(\Omega)}^2 + \|u_0\|_{H^m_0(\Omega)}^2 - 2(u_n,u_0)_{H^m_0(\Omega)} \le 1 - \|u_0\|_{H^m_0(\Omega)}^2 + o(1). $$ Hence, there exists $\sigma>0$ such that $$ p\|u_n-u_0\|_{H^m_0(\Omega)}^2 \le \sigma <1, $$ for sufficiently large $n$. For any $\gamma>0$, we have $$ u_n^2 \le (1+\gamma^2) u_0^2 + (1+\frac{1}{\gamma^2}) (u_n-u_0)^2. $$ Since $0<\sigma<1$, we can choose $\gamma$ sufficiently large so that $\sigma \bra{1+\frac{1}{\gamma^2}}<1$. Applying H\"older's inequality with exponents $q =\frac{1}{\sigma \bra{1+\frac{1}{\gamma^2}} } $ and $q'= \frac{q}{q-1}$, we get $$ F_{p \beta^*} (u_n) \le \int_{\Omega} e^{p\beta^* (1+\gamma^2) u_0^2} e^{p\beta^* (1+\frac{1}{\gamma^2}) (u_n-u_0)^2} dx \le \|e^{p\beta^* (1+\gamma^2) u_0^2}\|_{L^{q'}(\Omega)} \| e^{p\beta^* (1+\frac{1}{\gamma^2}) (u_n-u_0)^2} \|_{L^q(\Omega)}. $$ Lemma \ref{integrability} guarantees that $\|e^{p\beta^* (1+\gamma^2) u_0^2}\|_{L^{q'}(\Omega)}<+\infty$. 
Moreover, since $$ p q (1+\frac{1}{\gamma^2}) \|u_n-u_0\|_{H^m_0(\Omega)}^2 = \frac{p}{\sigma}\|u_n-u_0\|_{H^m_0(\Omega)}^2 \le 1, $$ for large $n$, Adams' inequality \eqref{Adams} yields $$ \| e^{p\beta^* (1+\frac{1}{\gamma^2}) (u_n-u_0)^2} \|_{L^q(\Omega)} = F_{\beta^*} \bra{\sqrt{p q (1+\frac{1}{\gamma^2})}(u_n-u_0)}^\frac{1}{q} \le S_{0,\beta^*}^\frac{1}{q}<+\infty. $$ Hence, $\limsup_{n\to + \infty } F_{p\beta^*} (u_n) <+\infty.$ \end{proof} Next, we recall the following consequence of Vitali's convergence theorem (see e.g.\ \cite{Rudin}). \begin{trm} \label{Vitali} Let $\Omega\subseteq \mathbb{R}^{2m}$ be a bounded open set and take a sequence $\{f_n\}_{n\in \mathbb{N}} \subseteq L^1(\Omega).$ Assume that: \begin{enumerate} \item For a.e. $x\in \Omega$ the pointwise limit $f(x):=\lim_{n\to + \infty}f_n(x)$ exists. \item There exists $p>1$ such that $\|f_n\|_{L^p(\Omega)}\le C$. \end{enumerate} Then, $f\in L^1(\Omega)$ and $f_n\to f$ in $L^1(\Omega)$. \end{trm} We can now prove the existence of subcritical extremals. \begin{prop}\label{sub} For any $\beta <\beta^*$ and $0\le \alpha <\lambda_1(\Omega)$, we have $S_{\alpha,\beta}<+\infty$. Moreover, $S_{\alpha, \beta}$ is attained, i.e., there exists $u_{\alpha,\beta}\in M_{\alpha}$ such that $S_{\alpha,\beta}=F_\beta(u_{\alpha,\beta}).$ \end{prop} \begin{proof} Let $u_n \in M_\alpha$ be a maximizing sequence for $F_\beta$, i.e. such that $F_\beta(u_n)\to S_{\alpha,\beta}$ as $n\to + \infty$. Since $F_\beta(u_n)\le F_\beta(\frac{u_n}{\|u_n\|_\alpha})$, w.l.o.g.\ we can assume $\|u_n\|_\alpha =1$, for any $n\in \mathbb{N}$. Since $\alpha<\lambda_1(\Omega)$, $u_n$ is uniformly bounded in $H^m_0(\Omega)$. In particular, extracting a subsequence, we can find $u_0\in H^m_0(\Omega)$ such that $u_n\rightharpoonup u_0$ in $H^m_0(\Omega)$, $u_n\to u_0$ in $L^2(\Omega)$ and $u_n \to u_0$ a.e. in $\Omega$.
Observe that \[ \begin{split} \|u_0\|_{\alpha}^2 = \|u_0\|_{H^m_0(\Omega)}^2 -\alpha \|u_0 \|_{L^2(\Omega)}^2 \le \liminf_{n\to +\infty }\|u_n\|_{H^m_0(\Omega)}^2 -\alpha \|u_n \|_{L^2(\Omega)}^2 = \liminf_{n\to +\infty }\|u_n\|_{\alpha} ^2 = 1, \end{split} \] hence $u_0\in M_\alpha$. If we prove that there exists $p>1$ such that \begin{equation}\label{p>1} \|e^{\beta u_n^2}\|_{L^p(\Omega)}\le C, \end{equation} then we can apply Theorem \ref{Vitali} to $f_n:= e^{\beta u_n^2}$ and we obtain $F_{\beta}(u_0)=S_{\alpha,\beta}$ and $S_{\alpha,\beta}<+\infty$, which concludes the proof. To prove \eqref{p>1}, we shall treat two different cases. Assume first that $u_0=0$. Then we have $$ \beta \|u_n\|_{H^m_0(\Omega)}^2 = \beta(1 +\alpha \|u_n \|_{L^2(\Omega)}^2 ) = \beta + o(1) < \beta^*, $$ and we can find $p >1$ such that $$ p \beta \|u_n\|_{H^m_0(\Omega)}^2 \le \beta^*, $$ for $n$ large enough. In particular, using \eqref{Adams}, we obtain $$ \|e^{\beta u_n^2}\|_{L^p(\Omega)}^p = \int_{\Omega} e^{p\beta u_n^2} dx \le F_{\beta^*}\bra{\frac{u_n}{\|u_n\|_{H^m_0(\Omega)}}}\le S_{0,\beta^*}<+\infty. $$ Assume instead $u_0\neq 0$. Consider the sequence $v_n:= \frac{u_n}{\|u_n\|_{H^m_0(\Omega)}}$, and observe that $v_n\rightharpoonup v_0$ in $H^m_0(\Omega)$ where $v_0 = \frac{u_0}{\sqrt{1+\alpha\|u_0\|^2_{L^2}}}$. Since \[\begin{split} \|u_n\|_{H^m_0}^2 (1-\|v_0\|_{H^m_0}^2) & = \bra{1+\alpha \|u_n\|_{L^2}^2 }\bra{1 -\frac{\|u_0\|_{H^m_0}^2}{1+\alpha\|u_0\|^2_{L^2}}} \\ & = 1 + \alpha\|u_0\|_{L^2}^2- \|u_0\|_{H^m_0}^2 + o(1)\\ & = 1 - \|u_0\|_{\alpha}^2 + o(1), \end{split} \] and $u_0\neq 0$, we get $$ \limsup_{n\to +\infty} \|u_n\|_{H^m_0}^2 < \frac{1}{1-\|v_0\|_{H^m_0}^2}. $$ In particular, there exist $p,q>1$ such that $$ p\| u_n\|_{H^m_0}^2\le q < \frac{1}{1-\|v_0\|_{H^m_0}^2}, $$ for $n$ large enough.
Then, we get $$ \|e^{\beta u_n^2}\|_{L^p}^p \le \|e^{\beta^* u_n^2}\|_{L^p}^p = \|e^{\beta^* \|u_n\|_{H^m_0}^2 v_n^2}\|_{L^p}^p \le \|e^{\beta^* q v_n^2}\|_{L^1}= F_{q\beta^*}(v_n) \le C, $$ where the last inequality follows from Proposition \ref{Lions}. Therefore, the proof of \eqref{p>1} is complete. \end{proof} Finally, we stress that, as $\beta \to \beta^*$, the family $u_{\alpha,\beta}$ is a maximizing family for the critical functional $F_{\beta^*}$. \begin{lemma}\label{limsub} For any $0\le \alpha <\lambda_1(\Omega)$, we have $$ \lim_{\beta \nearrow \beta^*} S_{\alpha,\beta}= S_{\alpha,\beta^*}. $$ \end{lemma} \begin{proof} Clearly, $S_{\alpha,\beta}$ is monotone increasing with respect to $\beta$. In particular, we must have $$ \lim_{\beta\nearrow \beta^* } S_{\alpha,\beta}\le S_{\alpha,\beta^*}. $$ To prove the opposite inequality, we observe that, for any function $u\in M_\alpha$, the monotone convergence theorem implies $$ F_{\beta^*}(u) = \lim_{\beta\nearrow \beta^*} F_\beta(u) \le \lim_{\beta\nearrow \beta^*} S_{\alpha,\beta}. $$ Since $u$ is an arbitrary function in $M_\alpha$, we get $$ S_{\alpha,\beta^*} \le \lim_{\beta \nearrow \beta^*} S_{\alpha,\beta}. $$ \end{proof} \section{Blow-up analysis at the critical exponent}\label{main sec} In this section, we will study the behaviour of subcritical extremals as $\beta$ approaches the critical exponent $\beta^*$ from below. In the following, we will take a sequence $(\beta_n)_{n\in \mathbb{N}}$ such that \begin{equation}\label{betan} 0< \beta_n <\beta^* \quad \mbox{ and } \quad \beta_n \to \beta^*, \text{ as } n\to+\infty. \end{equation} Due to Proposition \ref{sub}, for any $n\in \mathbb{N}$, we can find a function $u_n\in M_{\alpha}$ such that \begin{equation}\label{extremal} F_{\beta_n}(u_n)= S_{\alpha,\beta_n}. \end{equation} \begin{lemma}\label{primo} If $u_n\in M_\alpha$ satisfies \eqref{extremal}, then $u_n$ has the following properties: \begin{enumerate} \item $\|u_n\|_\alpha=1$.
\item $u_n$ is a solution to \begin{equation}\label{star} \begin{Si}{ll} (-\Delta )^m u_n = \lambda_n u_n e^{\beta_n u_n^2} +\alpha u_n & \mbox{ in }\Omega, \\ u_n = \partial_\nu u_n= \cdots = \partial^{m-1}_{\nu} u_n = 0 & \mbox{ on }\partial \Omega, \end{Si} \end{equation} where \begin{equation}\label{lambdan} \lambda_n = \bra{\int_{\Omega} u_n^2 e^{\beta_n u_n^2}dx }^{-1}. \end{equation} \item $u_n\in C^{\infty}(\overline{\Omega})$. \item $F_{\beta_n}(u_n) \to S_{\alpha,\beta^*}$ as $n\to +\infty$. \item If $\lambda_n$ is as in \eqref{lambdan}, then $\displaystyle{\limsup_{n\to +\infty} \lambda_n <+\infty}$. \end{enumerate} \begin{proof} \emph{1.} Since $u_n\in M_{\alpha}$, we have $\|u_n\|_{\alpha}\le 1$, $\forall\; n\in \mathbb{N}$. Moreover, the maximality of $u_n$ implies $u_n\neq 0$. If $\|u_{n}\|_\alpha<1$, then we would have $$ S_{\alpha,\beta_n } = F_{\beta_n}(u_n) < F_{\beta_n}\bra{\frac{u_n}{\|u_n\|_{\alpha}}}, $$ which is a contradiction. \emph{2.} Since $u_n$ is a critical point for $F_{\beta_n}$ constrained to $M_\alpha$, there exists $\gamma_n \in \mathbb{R}$ such that \begin{equation}\label{e1} \gamma_n\bra{ (u_n,\varphi)_{H^m_0} -\alpha (u_n,\varphi)_{L^2} } = \beta_n \int_{\Omega} u_n e^{\beta_n u_n^2}\varphi dx, \end{equation} for any function $\varphi \in H^m_0(\Omega)$. Taking $u_n$ as a test function and using \emph{1}., we find \begin{equation}\label{e2} \gamma_n = \beta_n \int_{\Omega} u_n^2 e^{\beta_n u_n^2} dx. \end{equation} In particular, $\gamma_n \neq 0$ and \eqref{e1} implies that $u_n$ is a weak solution of \eqref{star} with $\lambda_n:=\frac{\beta_n}{\gamma_n}$. Finally, \eqref{e2} is equivalent to \eqref{lambdan}. \emph{3.} By Lemma \ref{integrability}, we know that $u_n$ and $e^{\beta_n u_n^2}$ belong to every $L^p$ space, $p>1$. Then, applying standard elliptic regularity results (see e.g.
Proposition \ref{ell zero}) and the Sobolev embedding theorem, we find $u_n \in W^{2m,p}(\Omega)\subseteq C^{2m-1,\gamma}(\Omega)$, for any $\gamma\in (0,1)$. Then, we also have $(-\Delta)^m u_n\in C^{2m-1,\gamma}(\Omega)$ and, recursively applying Schauder estimates (Proposition \ref{ell shau}), we conclude that $u_n \in C^{\infty}(\overline \Omega)$. \emph{4.} This is a direct consequence of Lemma \ref{limsub}. \emph{5.} Assume by contradiction that there exists a subsequence for which $\lambda_n \to +\infty$, as $n\to +\infty$. Then, by \eqref{lambdan}, we have $$ \int_{\Omega}u_n^2 e^{\beta_n u_n^2} dx \to 0, $$ as $n\to +\infty$. Exploiting the basic inequality $e^{t}\le 1+t e^{t}$ for $t\ge 0$, we obtain $$ F_{\beta_n}(u_n) \le |\Omega|+\beta_n \int_{\Omega} u_n^2 e^{\beta_n u_n^2}dx \to |\Omega|. $$ Since, by \emph{4.}, $F_{\beta_n}(u_n) = S_{\alpha,\beta_n}\to S_{\alpha,\beta^*} >|\Omega|$, we get a contradiction. \end{proof} \end{lemma} In order to prove that $S_{\alpha,\beta^*}$ is finite and attained, we need to show that $u_n$ does not blow up as $n\to +\infty$. Let us take a point $x_n \in \Omega$ such that \begin{equation}\label{mun and xn} \mu_n:=\max_{\overline \Omega} |u_n| =|u_n(x_n)|. \end{equation} Extracting a subsequence and changing the sign of $u_n$, we can always assume that \begin{equation}\label{mun and xn2} u_n (x_n) = \mu_n \quad \mbox{ and } \quad x_n \to x_0 \in \overline \Omega, \text{ as }n\to +\infty. \end{equation} The main purpose of this section consists in proving the following Proposition. \begin{prop}\label{big} Let $\beta_n$, $u_n$, $\mu_n$, $x_n$, and $x_0$ be as in \eqref{betan}, \eqref{extremal}, \eqref{mun and xn}, and \eqref{mun and xn2}.
If $\mu_n\to +\infty$, then $x_0\in \Omega$ and we have $$ S_{\alpha,\beta^*} = \lim_{n\to +\infty} F_{\beta_n}(u_n) \le |\Omega| + \frac{\omega_{2m}}{2^{2m}} e^{\beta^* \bra{C_{\alpha,x_0} - I_m}}, $$ where $C_{\alpha,x_0}$ is as in Proposition \ref{prop green} and \begin{equation}\label{Imbrutto} I_m:= - \frac{m 4^{2m}}{\beta^*\omega_{2m} }\int_{\mathbb{R}^{2m}} \frac{\log\bra{1+\frac{|y|^2}{4}}}{\bra{4+|y|^2}^{2m}}dy. \end{equation} \end{prop} The proof of Proposition \ref{big} is quite long and it will be divided into several subsections. Some standard properties of $u_n$ will be established in Section 4.1. Then, in Section 4.2, as a consequence of Lorentz-Zygmund elliptic estimates, we will prove uniform bounds for $\Delta u_n^2$. Such bounds will be crucial in the analysis given in Section 4.3, where we will study the behaviour of $u_n$ on a small scale. Sections 4.4, 4.5 and 4.6 contain respectively estimates on the derivatives of $u_n$, the definition of suitable polyharmonic truncations of $u_n$, and the description of the behaviour of $u_n$ far from $x_0$. In Section 4.7 we will deal with blow-up at the boundary, which will be excluded using Pohozaev-type identities. Finally, we conclude the proof in Section 4.8 by introducing a new technique to obtain lower bounds on the Dirichlet energy for $u_n$ on suitable annular regions. In the rest of this section, $\beta_n$, $u_n$, $\mu_n$, $x_n$, and $x_0$ will always be as in Proposition \ref{big} and we will always assume that $\mu_n \to +\infty$. \subsection{Concentration near the blow-up point} In this subsection we will prove that, if $\mu_n \to +\infty$, $u_n$ must concentrate around the blow-up point $x_0$. We start by proving that its weak limit in $H^m_0(\Omega)$ is $0$. \begin{lemma}\label{weak lim} If $\mu_n \to +\infty$, then $u_n \rightharpoonup 0$ in $H^m_0(\Omega)$ and $u_n \to 0$ in $L^p(\Omega)$ for any $p\ge 1$.
\end{lemma} \begin{proof} Since $u_n$ is bounded in $H^m_0(\Omega)$, we can assume that $u_n \rightharpoonup u_0$ in $H^m_0(\Omega)$ with $u_0 \in H^m_0(\Omega)$. The compactness of the embedding of $H^m_0(\Omega)$ into $L^p(\Omega)$ implies $u_n \to u_0$ in $L^p(\Omega)$, for any $p\ge 1$. If $u_0\neq 0$, then, by Proposition \ref{Lions}, $e^{\beta_n u_n^2}$ is bounded in $L^{p_0}(\Omega)$ for some $p_0>1$. By Lemma \ref{primo}, we know that $\lambda_n$ is bounded. Hence $(-\Delta)^m u_n$ is bounded in $L^{s}(\Omega)$ for any $1<s<p_0$. Then, by elliptic estimates (see Proposition \ref{ell zero}), we find that $u_n$ is bounded in $W^{2m,s}(\Omega)$ and, by Sobolev embeddings, in $L^\infty(\Omega)$. This contradicts $\mu_n \to +\infty$. Hence, we have $u_0=0$. \end{proof} In fact, $u_n$ converges to $0$ in a much stronger sense if we stay far from the blow-up point $x_0$, while $|\Delta^\frac{m}{2} u_n|^2 $ concentrates around $x_0$. \begin{lemma}\label{conv0C} If $\mu_n\to +\infty$, then we have: \begin{enumerate} \item $|\Delta^\frac{m}{2} u_n|^2 \rightharpoonup \delta_{x_0}$ in the sense of measures. \item $e^{\beta_n u_n^2}$ is bounded in $L^s(\Omega\setminus B_\delta(x_0))$, for any $s\ge 1$, $\delta>0$. \item $u_n\to 0$ in $C^{2m-1,\gamma}(\Omega\setminus B_\delta(x_0))$, for any $\gamma\in (0,1)$, $\delta>0$. \end{enumerate} \end{lemma} \begin{proof} First of all, for any function $\xi\in C^{2m}(\overline{\Omega})$, we observe that $$ \Delta^{\frac{m}{2}} (u_n \xi) = \xi \Delta^\frac{m}{2} u_n + f_n, $$ with $$ |f_n|\le C_1 \sum_{l=0}^{m-1} |\nabla^l u_n||\nabla^{m-l} \xi| \le C_2 \sum_{l=0}^{m-1} |\nabla^l u_n|, $$ for some constants $C_1,C_2>0$, depending only on $m$ and $\xi$. Since $u_n\rightharpoonup 0$ in $H^m_0(\Omega)$, and $H^m_0(\Omega)$ is compactly embedded in $H^{m-1}(\Omega)$, we get that $f_n \to 0$ in $L^2(\Omega)$.
In particular, we have \begin{equation}\label{cutoff1} \begin{split} \|\Delta^\frac{m}{2} ( u_n \xi)\|^2_{L^2(\Omega)} &= \int_{\Omega} \xi^2 |\Delta^\frac{m}{2} u_n |^2dx + 2 \int_{\Omega} \Delta^\frac{m}{2} u_n \cdot f_n dx + \int_{\Omega} |f_n|^2 dx \\ & = \int_{\Omega} \xi^2 |\Delta^\frac{m}{2} u_n |^2dx +o(1). \end{split} \end{equation} We can now prove the first statement of this lemma. Assume by contradiction that there exists $r>0$ such that \begin{equation}\label{cut2} \limsup_{n\to +\infty} \|\Delta^\frac{m}{2} u_n\|_{L^2(B_{r}(x_0)\cap \Omega)}^2 <1. \end{equation} Take a function $\xi \in C^\infty_c(\mathbb{R}^{2m})$ such that $\xi \equiv 1$ on $B_\frac{r}{2}(x_0)$, $\xi \equiv 0$ on $\mathbb{R}^{2m}\setminus B_{r}(x_0)$ and $0\le \xi\le 1$. By \eqref{cutoff1} and \eqref{cut2}, we have that $ \limsup_{n\to + \infty} \|\Delta^\frac{m}{2} ( u_n\xi) \|_{L^2(\Omega)}^2 <1. $ Adams' inequality implies that we can find $s> 1$ such that $e^{\beta_n (u_n\xi)^2}$ is bounded in $L^s(\Omega)$. In particular, $e^{\beta_n u_n^2}$ is bounded in $L^{s}(B_\frac{r}{2}(x_0))$. By Lemma \ref{weak lim}, $u_n \to 0$ in $L^p(\Omega)$ for any $p\ge 1$. Therefore, we get that $(-\Delta)^m u_n \to 0 $ in $L^{q}(\Omega)$ for any $1<q<s$. Then, Proposition \ref{ell zero} yields $u_n \to 0$ in $W^{2m,q}(\Omega)$ and, since $q>1$, in $L^\infty(\Omega)$. This contradicts $\mu_n\to +\infty$. To prove \emph{2.}, we fix a cut-off function $\xi_2\in C^{\infty}_c(\mathbb{R}^{2m})$ such that $\xi_2 \equiv 1$ in $\mathbb{R}^{2m}\setminus B_\delta(x_0)$, $\xi_2 \equiv 0$ in $B_\frac{\delta}{2}(x_0)$, and $0\le \xi_2 \le 1$. Since $|\Delta^\frac{m}{2} u_n |^2\rightharpoonup \delta_{x_0}$, from \eqref{cutoff1} we get $\|\Delta^\frac{m}{2} (u_n\xi_2) \|_{L^2(\Omega)} \to 0$. Then, Adams' inequality implies that $e^{\beta_n (u_n \xi_2)^2}$ is bounded in $L^s(\Omega)$, for any $s>1$. Because of the definition of $\xi_2$, we get the conclusion.
To prove \emph{3.}, we apply standard elliptic estimates. By part \emph{2.}, we know that $u_n$ and $e^{\beta_n u_n^2}$ are bounded in $L^s(\Omega \setminus B_\delta(x_0))$ for any $s\ge 1$. Since $\lambda_n$ is bounded, the same bound holds for $(-\Delta)^m u_n$. Then, elliptic estimates (Proposition \ref{ell new}) imply that $u_n $ is bounded in $W^{2m,s}(\Omega\setminus B_{2\delta}(x_0))$. By the Sobolev embedding theorem, it is also bounded in $C^{2m-1,\gamma}(\Omega \setminus B_{2\delta}(x_0))$, for any $\gamma \in (0,1)$. Then, up to a subsequence, we can find a function $u_0\in C^{2m-1,\gamma}(\Omega \setminus B_{2\delta}(x_0))$ such that $u_n \to u_0$ in $C^{2m-1,\gamma}(\Omega \setminus B_{2\delta} (x_0))$. Since $u_n \rightharpoonup 0$ in $H^m_0(\Omega)$, we must have $u_0 \equiv 0$ in $\Omega \setminus B_{2\delta}(x_0)$ and $u_n \to 0$ in $C^{2m-1,\gamma}(\Omega \setminus B_{2\delta}(x_0))$. \end{proof} \subsection{Lorentz-Sobolev elliptic estimates} In this subsection, we prove uniform integral estimates on the derivatives of $u_n$. Notice that Sobolev's inequality implies $\|\nabla^l u_n\|_{L^{\frac{2m}{l}}(\Omega)}\le C$ for any $1\le l \le m-1$. In addition, standard elliptic estimates (Proposition \ref{ell L1}) yield $\|\nabla^l u_n\|_{L^p(\Omega)}\le C$, for any $p<\frac{2m}{l}$ and $m \le l\le 2m-1$. Arguing as in \cite{mar}, we will prove that sharper estimates can be obtained thanks to Lorentz-Zygmund elliptic regularity theory (see Proposition \ref{ell Lor} in Appendix). In the following, for any $\alpha\ge 0$, $1<p<+\infty$, and $1\le q \le +\infty$, $(L(\log L)^\alpha,\|\cdot\|_{L(\log L)^\alpha})$ and $(L^{(p,q)}(\Omega),\|\cdot\|_{(p,q)})$, will denote respectively the Zygmund and Lorentz spaces on $\Omega$. We refer to the Appendix for the precise definitions (see \eqref{zygm}-\eqref{Lor fin}).
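To illustrate the gain with respect to the Lebesgue scale, consider the model function $f(x)=|x|^{-l}$ on the unit ball $B_1\subseteq \mathbb{R}^{2m}$, with $1\le l\le 2m-1$. Its decreasing rearrangement is $f^*(t)=\bra{\frac{t}{|B_1|}}^{-\frac{l}{2m}}$ for $0<t\le |B_1|$, so that $t^{\frac{l}{2m}} f^*(t)\equiv |B_1|^{\frac{l}{2m}}$ on $(0,|B_1|]$. Hence
$$
\sup_{0<t\le |B_1|} t^{\frac{l}{2m}} f^*(t)<+\infty, \qquad \text{while} \qquad \int_0^{|B_1|} \bra{t^{\frac{l}{2m}} f^*(t)}^2 \frac{dt}{t}=+\infty,
$$
i.e. $f\in L^{(\frac{2m}{l},\infty)}(B_1)\setminus L^{(\frac{2m}{l},2)}(B_1)$, regardless of the chosen normalization of the Lorentz quasi-norm. In particular, the bound $\|\nabla^l u_n\|_{(\frac{2m}{l},2)}\le C$ established in the next lemma excludes precisely this borderline behaviour.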
\begin{lemma}\label{Lorentz} For any $1\le l\le 2m-1$, we have $$\| \nabla^l u_n \|_{(\frac{2m}{l},2)} \le C.$$ \end{lemma} \begin{proof} Set $f_n:= (-\Delta)^m u_n$. By Proposition \ref{ell Lor}, there exists a constant $C>0$ such that $$ \|\nabla^{l} u_n\|_{(\frac{2m}{l},2)}\le C \|f_n\|_{L(\log L)^\frac{1}{2}}, $$ for any $1\le l\le 2m-1$, $n\in \mathbb{N}$. Therefore, it is sufficient to prove that $f_n$ is bounded in $L(\log L)^\frac{1}{2}$. For any $x \in \mathbb{R}^+$, let $\log^+x := \max\{0,\log x\}$ be the positive part of $\log x$. Since $\beta_n$ and $\lambda_n$ are bounded, using the simple inequalities $$ \log(x+y) \le x+ \log^+ y \qquad \text{and} \qquad \log^+(xy) \le \log^+x +\log^+ y,\qquad x,y\in \mathbb{R}^+, $$ we find \[ \begin{split} \log(2+|f_n|) & \le 2 + \log^+|u_n| + \log^+ \bra{ \lambda_n e^{\beta_n u_n^2}+\alpha}\\ & \le C+ \log^+ |u_n| + \beta_n u_n^2 \\ & \le C (|u_n| + 1)^2. \end{split} \] Then, $$ |f_n| \log^\frac{1}{2}(2+|f_n|) \le C |f_n|(1+|u_n|) \le C \bra{ \lambda_n |u_n| e^{\beta_n u_n^2} + \lambda_n u_n^2 e^{\beta_n u_n^2} + \alpha |u_n| + \alpha u_n^2 }, $$ and, by Lemma \ref{weak lim} and \eqref{lambdan}, as $n\to + \infty$ we get \[\begin{split} \int_{\Omega } |f_n| \log^\frac{1}{2}(2+|f_n|)dx &\le C \bra{\lambda_n \int_{\Omega} |u_n| e^{\beta_n u_n^2} dx + 1+ o(1)} \\ & \le C \bra{\lambda_n \int_{\{ |u_n|<1\}} |u_n| e^{\beta_n u_n^2} dx + \lambda_n \int_{\{ |u_n|\ge 1\}} |u_n|^2 e^{\beta_n u_n^2} dx + 1+ o(1)}\\ & \le C\bra{\lambda_n e^{\beta_n} |\Omega|+2 + o(1)} =O(1). \end{split} \] Hence, $f_n$ is bounded in $L(\log L)^\frac{1}{2}$. \end{proof} As a consequence of Lemma \ref{Lorentz}, we obtain an integral estimate on the derivatives of $u_n^2$, which will play an important role in Sections 4.3 and 4.4.
The idea behind this estimate is based on the following remark: up to terms involving only lower order derivatives, which can be controlled using Lemma \ref{Lorentz}, $(-\Delta)^m u_n^2$ coincides with $u_n(-\Delta)^m u_n$, which is bounded in $L^1(\Omega)$. Then, estimates on $u_n^2$ can be obtained via Green's representation formula. \begin{lemma}\label{integral est} There exists a constant $C>0$ such that for any $1\le l\le 2m-1$, $x\in \Omega$, and $\rho>0$ with $B_{\rho}(x)\subseteq \Omega$, we have $$ \int_{B_{\rho}(x)} |\nabla^l u_n^2| dy \le C \rho^{2m-l}. $$ \end{lemma} \begin{proof} We start by observing that $(-\Delta)^m u_n^2$ is bounded in $L^1(\Omega)$. Clearly, \[ \begin{split} |(-\Delta)^m u_n^2 | &\le 2 |u_n(-\Delta)^m u_n| + C \sum_{j=1}^{2m-1}|\nabla^j u_n| |\nabla^{2m-j} u_n|. \end{split} \] Equation \eqref{lambdan} and Lemma \ref{weak lim} imply that $u_n(-\Delta)^m u_n$ is bounded in $L^1(\Omega)$. As a consequence of H\"older's inequality for Lorentz spaces (Proposition \ref{HolLor}) and Lemma \ref{Lorentz}, we find $$\int_{\Omega} |\nabla^{2m-j} u_n||\nabla^j u_n|dx \le \|\nabla^{2m-j} u_n \|_{(\frac{2m}{2m-j},2)}\|\nabla^{j} u_n \|_{(\frac{2m}{j},2)} \le C. $$ Thus, $(-\Delta)^m u_n^2$ is bounded in $L^1(\Omega)$. Now, we apply Green's representation formula to $u_n^2$ to get $$ u_n^2 (y) = \int_{\Omega} G_y (z) (-\Delta)^m u_n^2(z) dz, $$ for any $y \in \Omega$, where $G_y:= G_{0,y}$ is defined as in \eqref{green}. By the properties of $G_y$ (see Proposition \ref{prop green}), we have $$ |\nabla^l_y G_y (z) |\le \frac{C}{|y-z|^l}, $$ for any $y,z\in \Omega$ with $z\neq y$. Hence $$ |\nabla^l u_n^2(y)|\le \int_{\Omega} \frac{C |(-\Delta)^m u_n^2(z)|}{|y-z|^l} dz. $$ Let $x\in \Omega$ and $\rho>0$ be as in the statement.
Then, we find \[ \begin{split} \int_{B_\rho(x)} |\nabla^l u_n^2| dy &\le \int_{B_\rho(x)} \int_{\Omega} \frac{C |(-\Delta)^m u_n^2(z)|}{|y-z|^l} dzdy \\ &= C \int_{\Omega} |(-\Delta)^m u_n^2(z)| \int_{B_\rho(x)} \frac{1}{|y-z|^l} dydz. \end{split} \] Since $$ \int_{B_\rho(x)} \frac{1}{|y-z|^l} dy \le \int_{B_\rho(x)} \frac{1}{|y-x|^l} dy = C \rho^{2m-l}, $$ and $(-\Delta)^m u_n^2$ is bounded in $L^1(\Omega)$, we get the conclusion. \end{proof} \subsection{The behaviour on a small scale} Let $u_n$, $\mu_n$ and $x_n$ be as in \eqref{extremal}, \eqref{mun and xn}, \eqref{mun and xn2}. In this subsection, we will study the behaviour of $u_n$ on small balls centered at the maximum point $x_n$. Define $r_n>0$ so that \begin{equation}\label{rn} \omega_{2m} r_n^{2m} \lambda_n \mu_n^2 e^{\beta_n \mu_n^2}=1, \end{equation} with $\omega_{2m}$ as in \eqref{omega}. \begin{rem} Note that, as $n\to +\infty$, we have $r_n^{2m} = o(\mu_n^{-2})$ and, in particular, $r_n \to 0$. \end{rem} \begin{proof} Indeed, by \eqref{lambdan}, we have $$\frac{1}{\lambda_ne^{\beta_n\mu_n^2}} = \frac{1}{e^{\beta_n \mu_n^2}}\int_{\Omega} u_n^2 e^{\beta_n u_n^2} dx \le \|u_n\|_{L^2(\Omega)}^2.$$ Since $u_n \to 0$ in $L^2(\Omega)$, the definition of $r_n^{2m}$ yields $r_n^{2m} \mu_n^2 \to 0$ as $n\to + \infty$. \end{proof} Let us now consider the scaled function \begin{equation}\label{etan} \eta_n (y):= \mu_n ( u_n(x_n+ r_n y) -\mu_n), \end{equation} which is defined on the set \begin{equation*} \Omega_n :=\{y\in \mathbb{R}^{2m}\;:\; x_n + r_n y \in \Omega\}. \end{equation*} The main purpose of this subsection consists in proving the following convergence result. \begin{prop}\label{conv eta} We have $\frac{d(x_n, \partial \Omega)}{r_n} \to +\infty$ and, in particular, $\Omega_n$ approaches $\mathbb{R}^{2m}$ as $n\to + \infty$.
Moreover, $\eta_n$ converges to the limit function \begin{equation}\label{eta0} \eta_0 (y) = - \frac{m}{\beta^*} \log\bra{1+\frac{|y|^2}{4}} \end{equation} in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m})$, for any $\gamma\in (0,1)$. \end{prop} In order to avoid repetitions, it is convenient to see Proposition \ref{conv eta} as a special case of the following more general result, which will be useful also in the proof of Proposition \ref{stima Luca}. \begin{prop}\label{conv eta gen} Given two sequences $\widetilde x_n\in \Omega$ and $s_n\in\mathbb{R}^+$, consider the scaled set $\widetilde \Omega_n :=\{ y\in \mathbb{R}^{2m}\;:\; \widetilde x_n +s_n y \in \Omega \}$ and the functions $v_n(y):= u_n(\widetilde x_n +s_n y)$ and $\widetilde \eta_n(y):= \widetilde \mu_n \bra{ v_n(y)- \widetilde \mu_n}$, where $\widetilde \mu_n := u_n(\widetilde x_n)$. Assume that \begin{enumerate} \item $\omega_{2m} s_n^{2m} \lambda_n \widetilde \mu_n^2 e^{\beta_n \widetilde \mu_n^2} = 1$ and $|\widetilde \mu_n| \to +\infty$, $s_n^{2m} \to 0$, as $n\to+ \infty$. \item For any $R>0$ there exists a constant $C(R)>0$ such that \begin{equation}\label{ass} \left|\frac{v_n}{\widetilde \mu_n} \right| \le C(R) \quad \mbox{and }\quad v_n^2 - \widetilde \mu_n^2 \le C(R) \quad \text{ in }\widetilde \Omega_n \cap B_{R}(0). \end{equation} \end{enumerate} Then, we have $\frac{d(\widetilde x_n, \partial \Omega)}{s_n} \to +\infty$ and $\frac{v_n}{\widetilde \mu_n}\to 1$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m})$, for any $\gamma\in (0,1)$. Moreover, $\widetilde \eta_n\to \eta_0$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m})$, where $\eta_0$ is defined as in \eqref{eta0}. \end{prop} Note that the assumptions of Proposition \ref{conv eta gen} are satisfied when $\widetilde x_n =x_n$ and $s_n =r_n$. Hence, Proposition \ref{conv eta} follows from Proposition \ref{conv eta gen}. We split the proof of Proposition \ref{conv eta gen} into four steps.
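Before starting, let us point out that $\eta_0$ is an explicit solution of the Liouville-type equation $(-\Delta)^m \eta_0 = \frac{1}{\omega_{2m}} e^{2\beta^* \eta_0}$ in $\mathbb{R}^{2m}$, in accordance with the scaling \eqref{rn}. For instance, for $m=1$ (so that $\beta^*=4\pi$ and $\omega_2=4\pi$ by \eqref{omega}), writing $r=|y|$, a direct radial computation gives
$$
-\Delta \eta_0 = -\bra{\eta_0''+\frac{1}{r}\,\eta_0'} = \frac{1}{4\pi}\frac{1}{\bra{1+\frac{r^2}{4}}^2} = \frac{1}{4\pi}\, e^{2\beta^*\eta_0},
$$
which is the classical Liouville bubble equation.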
The first two steps (Lemma \ref{boundarydist0} and Lemma \ref{boundarydist}) are stated under more general assumptions, since they will be reused in the proof of Proposition \ref{stime grad}. \begin{lemma}\label{boundarydist0} Given two sequences $\widetilde x_n \in \Omega$ and $s_n\in \mathbb{R}^+$, let $\widetilde \Omega_n$ and $v_n$ be defined as in Proposition \ref{conv eta gen}. Let also $\Sigma$ be a finite (possibly empty) subset of $\mathbb{R}^{2m}\setminus \{0\}$. Assume that \begin{enumerate} \item $s_n\to 0$ and $\displaystyle{D_n:= \max_{0\le i\le 2m-1}|\nabla^i v_n(0)|\to +\infty}$ as $n\to +\infty$. \item For any $R>0$, there exist $C(R)>0$ and $N(R)\in \mathbb{N}$ such that $$ |v_n(y)|\le C(R)D_n \qquad \mbox{ and } \qquad |(-\Delta)^m v_n(y)| \le C(R)D_n, $$ for any $\displaystyle{y\in \widetilde \Omega_{n,R}:= \widetilde \Omega_n \cap B_R(0)\setminus \bigcup_{\xi \in \Sigma} B_\frac{1}{R}(\xi)}$ and any $n\ge N(R)$. \end{enumerate} Then, we have $$ \lim_{n\to + \infty} \frac{d(\widetilde x_n,\partial \Omega)}{s_n} = +\infty. $$ \end{lemma} \begin{proof} Let us consider the functions $w_n(y):= \frac{v_n(y)}{D_n}.$ First, we observe that the assumptions on $\widetilde x_n$ and $s_n$ imply \begin{equation}\label{cond wn 1} w_n = O(1), \end{equation} and \begin{equation}\label{cond wn 2} |(-\Delta)^m w_n| = O(1), \end{equation} uniformly in $\widetilde \Omega_{n,R}$, for any $R>0$. Moreover, by Sobolev's inequality, for any $1\le j\le m$ we have \begin{equation}\label{cond wn 3} \|\nabla^j w_{n}\|_{L^{\frac{2m}{j}}(\widetilde \Omega_n)} = D_n^{-1} \|\nabla^j u_{n}\|_{L^{\frac{2m}{j}}( \Omega)} \le C D_{n}^{-1} \|\Delta^\frac{m}{2} u_n\|_{L^2(\Omega)} = O(D_{n}^{-1}). \end{equation} Then, H\"older's inequality, \eqref{cond wn 1}, and \eqref{cond wn 3} give \begin{equation}\label{cond wn 4} \|w_n\|_{W^{m,1}(\widetilde \Omega_{n,R})} =O(1).
\end{equation} Now, we assume by contradiction that, for a subsequence, \[ \frac{d(\widetilde x_n,\partial \Omega)}{s_n} \to R_0 \in [0,+\infty). \] Then, the sets $\widetilde \Omega_n$ converge in $C^{\infty}_{loc}$ to a half-space $\mathcal P$ such that $d(0,\partial \mathcal P)=R_0$. For any sufficiently large $R>0$ and any $p>1$, using \eqref{cond wn 2}, \eqref{cond wn 4}, Proposition \ref{ell new}, and Remark \ref{remdomains}, we find a constant $C=C(R)$ such that $\|w_n\|_{W^{2m,p}(\widetilde \Omega_{n,\frac{R}{2}})} \le C.$ Then, Sobolev's embeddings imply that $\|w_n\|_{C^{2m-1,\gamma}(\widetilde \Omega_{n,\frac{R}{2}})}\le C$, for any $\gamma\in (0,1)$. Reproducing the standard proof of the Ascoli-Arzelà theorem, we find a function $w_0\in C^{2m-1,\gamma}_{loc}(\overline{\mathcal P}\setminus \Sigma)$ such that, up to a subsequence, we have \begin{equation}\label{conv wn} w_n \to w_0 \qquad \text{ in } C^{2m-1}_{loc}(\mathcal P \setminus \Sigma) \end{equation} and \begin{equation}\label{conv wn bound} \nabla^j w_n (\xi_n) \to \nabla^j w_0 (\xi), \quad 0\le j\le 2m-1, \end{equation} for any $\xi \in \overline{\mathcal P}\setminus \Sigma$ and any sequence $\{\xi_n\}_{n\in \mathbb{N}}$ such that $\xi_n \to \xi$. Since $w_n =0$ on $\partial \widetilde \Omega_n$ and $\widetilde \Omega_n$ converges to $\mathcal P$, \eqref{conv wn bound} yields $w_0 \equiv 0$ on $\partial \mathcal P \setminus \Sigma$. Furthermore, \eqref{cond wn 3} and \eqref{conv wn} imply that $\nabla w_0 \equiv 0$ in $\mathcal P\setminus \Sigma $. Therefore, $w_0 \equiv 0$ on $\overline{\mathcal P} \setminus \Sigma$. But, by the definition of $D_n$ and $w_n$, we have $$ \max_{0\le i\le 2m-1}|\nabla^i w_n(0)| = 1, $$ which contradicts either \eqref{conv wn} (if $R_0>0$) or \eqref{conv wn bound} (if $R_0=0$).
\end{proof} \begin{lemma}\label{boundarydist} Let $s_n$, $\widetilde x_n$, $v_n$, $\widetilde \Omega_n$, $D_n$ and $\Sigma$ be as in Lemma \ref{boundarydist0}. Then, $|v_n(0)|\to + \infty$ and $$ \frac{v_n}{v_n(0)} \to 1 \qquad \text{ in } C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}\setminus \Sigma), $$ for any $\gamma\in (0,1)$. \end{lemma} \begin{proof} Consider the function $ w_n (y) := \dfrac{v_n(y)}{D_n} $, $y\in \widetilde \Omega_n$. As in \eqref{cond wn 1}, \eqref{cond wn 2} and \eqref{cond wn 3}, we have \begin{equation}\label{eq risc1} \begin{split} w_n = O(1) \qquad \text{ and }\qquad (-\Delta)^m w_n = O(1), \end{split} \end{equation} uniformly in $B_{R}(0)\setminus \bigcup_{\xi\in \Sigma}B_\frac{1}{R}(\xi)$, for any $R>0$, and \begin{equation}\label{grad zero1} \|\nabla w_n\|_{L^{2m}(\widetilde \Omega_n)}\to 0. \end{equation} By \eqref{eq risc1}, Proposition \ref{ell loc}, Sobolev's embeddings, and \eqref{grad zero1}, a subsequence of $w_n$ must converge to a constant function $w_0$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}\setminus \Sigma)$, for any $\gamma \in (0,1)$. In particular, we have $|\nabla^j w_n(0)|\to 0$ for any $1\le j\le 2m-1$. Then, the definitions of $D_n$ and $w_n$ give $$ 1 = \max_{0\le j\le 2m-1} |\nabla^j w_n(0)| = |w_n(0)|, $$ which implies that $|v_n(0)|=D_n\to +\infty$ and that $|w_0|\equiv 1$ in $\mathbb{R}^{2m}\setminus \Sigma$. Hence, $$ \frac{v_n}{v_n(0)} = \frac{w_n}{w_n(0)} \to 1 \qquad \mbox{ in }C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}\setminus \Sigma). $$ \end{proof} Next, we let $\widetilde x_n, s_n$, $\widetilde \mu_n$ and $\widetilde \eta_n$ be as in Proposition \ref{conv eta gen} and apply Lemma \ref{integral est} to prove bounds for $\Delta \widetilde \eta_n$ in $L^1_{loc}(\mathbb{R}^{2m})$.
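On the explicit limit profile, such an $L^1$ bound can be checked directly. The sketch below is illustrative and lies outside the argument: it takes $m=1$ and assumes $\beta^*=4\pi$ as before, for which $-\Delta\eta_0=\frac{1}{4\pi}\bra{1+\frac{|y|^2}{4}}^{-2}$, and confirms that $\|\Delta\eta_0\|_{L^1(B_R(0))}$ stays bounded, consistent with a bound of the form $CR^{2m-2}$, which is a constant when $m=1$.

```python
import math

# Illustration for m = 1, outside the argument: assuming beta* = 4*pi,
# the limit profile satisfies -Delta eta_0 = (1/(4*pi)) (1 + r^2/4)^{-2},
# so its Laplacian is absolutely integrable on all of R^2.
def l1_norm_laplacian_eta0(R, h=1e-3):
    # radial midpoint rule for the integral of |Delta eta_0| over B_R
    total, r = 0.0, h / 2
    while r < R:
        total += 2 * math.pi * r * h / (4 * math.pi * (1 + r * r / 4) ** 2)
        r += h
    return total

# The L^1 norms increase with R but stay below 1: a bound C R^{2m-2}
# degenerates to a constant when m = 1, and 1 - 4/(4 + R^2) is exact here.
values = [l1_norm_laplacian_eta0(R) for R in (1.0, 10.0, 100.0)]
assert values[0] < values[1] < values[2] < 1.0
assert abs(values[2] - 1.0) < 1e-3
print("L^1 bound verified on the limit profile (m = 1)")
```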
\begin{lemma}\label{lap etan} Under the assumptions of Proposition \ref{conv eta gen}, there exists a constant $C>0$ such that $$ \|\Delta \widetilde \eta_n\|_{L^1(B_R(0))} \le C R^{2m-2}, $$ for any $R>1$ and for sufficiently large $n$. \end{lemma} \begin{proof} First, we observe that $\widetilde x_n$ and $s_n$ satisfy the assumptions of Lemma \ref{boundarydist0} and Lemma \ref{boundarydist}. Indeed, equation \eqref{star}, the definition of $v_n$, and the assumptions on $\widetilde x_n$ and $s_n$ yield $v_n = O(|\widetilde \mu_n|)$ and \begin{equation}\begin{split}\label{verify} (-\Delta)^m v_n & = s_n^{2m } \lambda_n v_n e^{\beta_n v_n^2} +s_n^{2m}\alpha v_n \\ & = \omega_{2m}^{-1} \frac{v_n}{\widetilde \mu_n^2} e^{\beta_n (v_n^2-\widetilde \mu_n^2)} + s_n^{2m}\alpha v_n \\ & = O(|\widetilde \mu_n|^{-1}) + O(s_n^{2m}|\widetilde \mu_n|), \end{split} \end{equation} uniformly in $\widetilde \Omega_n \cap B_R(0)$, for any $R>0$. Then, Lemma \ref{boundarydist0} and Lemma \ref{boundarydist} imply that $\widetilde \Omega_n$ approaches $\mathbb{R}^{2m}$ and \begin{equation}\label{rip} \frac{v_n}{\widetilde \mu_n} \to 1 \qquad \text{ in } C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}), \text{ for any }\gamma\in (0,1). \end{equation} Next, we rewrite the estimates of Lemma \ref{integral est} in terms of $\widetilde \eta_n$. On the one hand, by Lemma \ref{integral est}, there exists $C>0$ such that \begin{equation*} \|\Delta u_n^2\|_{L^1(B_{Rs_n}(\widetilde x_n))} \le C (s_n R)^{2m-2}, \end{equation*} for any $R>0$ and $n\in \mathbb{N}$. On the other hand, we have \[ \begin{split} \|\Delta u_n^2\|_{L^1(B_{Rs_n}(\widetilde x_n))} & \ge 2 \|u_n \Delta u_n\|_{L^1(B_{Rs_n}(\widetilde x_n))} - 2 \| \nabla u_n\|^2_{L^2(B_{Rs_n}(\widetilde x_n))} \\ & = 2 s_n^{2m-2} \bra{ \|v_n \Delta v_n\|_{L^1(B_{R}(0))} - \| \nabla v_n\|^2_{L^2(B_{R}(0))} }.
\end{split} \] Then, we obtain \begin{equation}\label{stima20b} \|v_n \Delta v_n\|_{L^1(B_{R}(0))} \le C R^{2m-2} + \| \nabla v_n\|^2_{L^2(B_{R}(0))}. \end{equation} By \eqref{rip} and the definition of $\widetilde \eta_n$, we infer \begin{equation}\label{stima20c} \begin{split} \|v_n \Delta v_n\|_{L^1(B_{R}(0))} = |\widetilde \mu_n | \|\Delta v_n\|_{L^1(B_{R}(0))} (1+o(1)) &= \| \Delta \widetilde \eta_n \|_{L^1(B_{R}(0))} (1+o(1)) \\ &\ge \frac{1}{2} \| \Delta \widetilde \eta_n \|_{L^1(B_{R}(0))}, \end{split} \end{equation} for sufficiently large $n$. Finally, applying H\"older's inequality, \begin{equation}\label{stima20d} \| \nabla v_n\|^2_{L^2(B_{R}(0))} \le \| \nabla v_n\|^2_{L^{2m}(B_{R}(0))} |B_R|^{1-\frac{1}{m}} \le \| \nabla u_n\|^2_{L^{2m}(\Omega)} |B_R|^{1-\frac{1}{m}} \le C R^{2m-2}, \end{equation} where we used that $\|\nabla u_n\|_{L^{2m}(\Omega)}$ is bounded, as in \eqref{cond wn 3}, and that $|B_R|^{1-\frac{1}{m}} = C R^{2m-2}$. The conclusion then follows from \eqref{stima20b}, \eqref{stima20c}, and \eqref{stima20d}. \end{proof} We can now complete the proof of Proposition \ref{conv eta gen}. \begin{proof}[Proof of Proposition \ref{conv eta gen}] Arguing as in the previous lemma, we have that $\frac{d(\widetilde x_n,\partial \Omega)}{s_n}\to +\infty$ and that \eqref{rip} holds. Observe that \eqref{rip} implies \begin{equation}\label{explanation} (1+o(1))s_n^{2m} \widetilde \mu_n^2 = \frac{s_n^{2m}}{\omega_{2m}}\int_{B_1(0) }v_n^2(y) dy = \frac{1}{\omega_{2m}} \int_{B_{s_n}(\widetilde x_n)} u_n^2 (x)dx = O(\|u_n\|_{L^2(\Omega)}^2) =o(1). \end{equation} Moreover, as in \eqref{verify}, by the definitions of $\widetilde \eta_n$ and $v_n$, and the assumptions on $\widetilde \mu_n$, $s_n$ and $\widetilde x_n$, we get \begin{equation}\label{lap bound} (-\Delta)^m \widetilde \eta_n = O(1) + O(s_n^{2m}\widetilde \mu_n^2) =O(1), \end{equation} uniformly in $B_R(0)$, for any $R>0$. In addition, Lemma \ref{lap etan} implies that $\Delta \widetilde \eta_n$ is bounded in $L^1_{loc}(\mathbb{R}^{2m})$.
By Proposition \ref{ell loc} and Sobolev's embedding theorem, $\Delta \widetilde \eta_n$ is bounded in $L^\infty_{loc}(\mathbb{R}^{2m})$. As a consequence of \eqref{ass} and \eqref{rip}, we have $$ C(R) \ge v_n^2 - \widetilde \mu_n^2 = (v_n -\widetilde \mu_n)(v_n + \widetilde \mu_n) = \widetilde \eta_n \, \frac{v_n + \widetilde \mu_n}{\widetilde \mu_n} = \widetilde \eta_n (2+o(1)) $$ in $B_{R}(0)$. Since $\widetilde \eta_n(0)=0$, Proposition \ref{ell use} shows that $\widetilde \eta_n$ is bounded in $L^\infty_{loc}(\mathbb{R}^{2m})$. Together with \eqref{lap bound}, Proposition \ref{ell loc}, and Sobolev's embeddings, this implies that $\widetilde \eta_n$ is bounded in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m})$, for any $\gamma\in (0,1)$. Then, we can extract a subsequence such that $\widetilde \eta_n$ converges in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m})$ to a limit function $\eta_0\in C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m})$. Observe that, as $n\to +\infty$, $$ (-\Delta)^m \widetilde \eta_n = \bra{1+ \frac{\widetilde\eta_n}{\widetilde \mu_n^2}} \bra{ \omega_{2m}^{-1} e^{2 \beta_n \widetilde \eta_n +\beta_n \frac{\widetilde \eta_n^2}{\widetilde \mu_n^2}} + \alpha s_n^{2m} \widetilde \mu_n^2 } \to \omega_{2m}^{-1} e^{2\beta^* \eta_0}, $$ locally uniformly in $\mathbb{R}^{2m}$. This implies that $\eta_0$ must be a weak solution of \begin{equation}\label{Liouville} \begin{Si}{l} (-\Delta)^m \eta_0 = \omega_{2m}^{-1} e^{2\beta^* \eta_0}, \\ e^{2\beta^*\eta_0}\in L^1(\mathbb{R}^{2m}), \\ \eta_0\le 0, \quad \eta_0 (0) =0. \end{Si} \end{equation} Solutions of problem \eqref{Liouville} have been classified in \cite{MarClass} (see also \cite{Lin} and \cite{Xu}). In particular, Theorems 1 and 2 in \cite{MarClass} imply that there exists a real number $a\le 0$ such that $\lim_{|y|\to +\infty} \Delta \eta_0(y) = a$. Moreover, either $a\neq 0$, or $\eta_0 (y) = - \frac{m}{\beta^*} \log\bra{1+\frac{|y|^2}{4}},$ for any $y\in \mathbb{R}^{2m}$.
To exclude the first possibility, we observe that, if $a\neq 0$, then we can find $R_0>0$ such that $|\Delta \eta_0|\ge \frac{|a|}{2}$ for $|y|\ge R_0$. This yields \begin{equation}\label{lap eta0} \int_{B_R(0)} |\Delta \eta_0| dy \ge \int_{B_{R_0}(0)} |\Delta \eta_0| dy + \frac{|a|}{2} \omega_{2m} (R^{2m}-R_0^{2m}), \end{equation} for any $R>R_0$. But Lemma \ref{lap etan} implies \begin{equation}\label{cont1} \int_{B_R(0)}|\Delta \eta_0| dy \le C R^{2m-2}, \end{equation} for any $R>1$. For large values of $R$, \eqref{cont1} contradicts \eqref{lap eta0}. \end{proof} Now, we state some properties of the function $\eta_0$ that will play a crucial role in the next sections. \begin{lemma}\label{int eta0}Let $\eta_0$ be as in \eqref{eta0}. Then, as $R\to +\infty $, we have \begin{equation}\label{propeta1} \omega_{2m}^{-1} \int_{B_R(0)} e^{2\beta^*\eta_0} dy=1 + O(R^{-2m}) \end{equation} and \begin{equation}\label{propeta2} \int_{B_R(0)}|\Delta^\frac{m}{2}\eta_0|^2 dy= \frac{2m}{\beta^*}\log \frac{R}{2} + I_m -H_m + O(R^{-2}\log R ), \end{equation} where $H_m$ is defined as in \eqref{Hm} and \begin{equation}\label{Im} I_m=\int_{\mathbb{R}^{2m}} \eta_0 (-\Delta)^m \eta_0\: dy= - \frac{m 4^{2m}}{\beta^*\omega_{2m}}\int_{\mathbb{R}^{2m}} \frac{\log\bra{1+\frac{|y|^2}{4}}}{\bra{4+|y|^2}^{2m}}dy \end{equation} is as in \eqref{Imbrutto}. \end{lemma} \begin{proof} First, using a straightforward change of variables and the representation of $\mathbb{S}^{2m}$ through the standard stereographic projection, we observe that \[ \int_{\mathbb{R}^{2m}} e^{2\beta^*\eta_0} dy= \int_{\mathbb{R}^{2m}} \frac{4^m}{(1+|y|^2)^{2m}}dy = \omega_{2m}. \] Since $e^{2\beta^*\eta_0}= O(\frac{1}{|y|^{4m}})$ as $|y|\to +\infty$, we get \eqref{propeta1}. The proof of \eqref{propeta2} relies on the integration by parts formula of Proposition \ref{parts}.
For any $1\le l\le m-1$, we have \[ \Delta^l \eta_0(y)= \frac{m}{\beta^*} \sum_{k=0}^l a_{k,l} \frac{|y|^{2k}}{(4+|y|^2)^{2l}}, \quad a_{k,l}= (-1)^l (l-1)! \binom{l}{k} \frac{(m+l-1)! (m-l+k-1)!}{(m+k-1)!(m-l-1)!} 2^{4l-2k}, \] and \[ \Delta^{l+\frac{1}{2}} \eta_0(y)= \frac{m}{\beta^*} \sum_{k=0}^l b_{k,l} \frac{|y|^{2k} y }{(4+|y|^2)^{2l+1}} , \quad b_{k,l}= \begin{Si}{cl } 8(k+1)a_{k+1,l}+ (2k-4l) a_{k,l} & 0\le k \le l-1, \\ -2l a_{ll} & k = l. \end{Si} \] Note that $a_{ll}=-2\widetilde K_{m,l}$, where $\widetilde K_{m,l}$ is as in \eqref{Kml1}. In any case, for $1\le j\le 2m-1$, we find \begin{equation}\label{asym} \Delta^\frac{j}{2} \eta_0 =-\frac{2m}{\beta^*} K_{m,\frac{j}{2}} \frac{e_j(y)}{|y|^j} + O(|y|^{-2-j}), \end{equation} as $|y|\to +\infty$, where $K_{m,\frac{j}{2}}$ and $e_{j}$ are defined as in \eqref{Kml2} and \eqref{ej}. Integrating by parts, we find \[ \int_{B_R(0)}|\Delta^\frac{m}{2}\eta_0|^2 dy= \int_{B_R(0)} \eta_0 (-\Delta)^m \eta_0 \, dy - \sum_{j=0}^{m-1}\int_{\partial B_R(0)} (-1)^{j+m} \nu \cdot \Delta^{\frac{j}{2}} \eta_0 \Delta^{\frac{2m-j-1}{2}} \eta_0 \, d\sigma. \] On $\partial B_R(0)$, \eqref{asym} and the identity $\frac{2m}{\beta^*}K_{m,\frac{2m-1}{2}}= \frac{(-1)^{m-1}}{\omega_{2m-1}}$ imply \[\begin{split} \nu \cdot \eta_0 \Delta^{\frac{2m-1}{2}} \eta_0 & =\bra{-\frac{2m}{\beta^*} \log \frac{R}{2} +O(R^{-2})} \bra{\frac{-2m}{\beta^*}K_{m,\frac{2m-1}{2}}R^{1-2m}+O(R^{-2m-1})} \\ &= \frac{(-1)^m }{\omega_{2m-1}} R^{1-2m} \bra{-\frac{2m}{\beta^*}\log \frac{R}{2} + O(R^{-2}\log R)}, \end{split} \] and, for $1\le j \le m-1$, that \[\begin{split} \nu \cdot \Delta^{\frac{j}{2}} \eta_0 \Delta^{\frac{2m-j-1}{2}} \eta_0 &= \bra{-\frac{2m}{\beta^*}K_{m,\frac{j}{2}}R^{-j}+O(R^{-j-2})} \bra{-\frac{2m }{\beta^*} K_{m,\frac{2m-j-1}{2}} R^{1+j-2m}+ O(R^{j-2m-1})} \\ & = \bra{\frac{2m}{\beta^*}}^2 K_{m,\frac{j}{2}} K_{m,\frac{2m-j-1}{2}} R^{1-2m} +O(R^{-2m-1}).
\end{split} \] Hence, we have \begin{equation}\label{for} \int_{B_R(0)}|\Delta^\frac{m}{2}\eta_0|^2 dy = \int_{B_R(0)} \eta_0 (-\Delta)^m \eta_0 \, dy +\frac{2m}{\beta^*}\log \frac{R}{2} -H_m +O(R^{-2}\log R). \end{equation} Finally, since $\eta_0 (-\Delta)^m \eta_0$ decays like $|y|^{-4m}\log |y|$ as $|y|\to +\infty$, we get \[ \int_{B_R(0)}\eta_0 (-\Delta)^m \eta_0 \,dy= I_m +O(R^{-2m} \log R), \] which, together with \eqref{for}, gives the conclusion. \end{proof} \begin{rem}\label{rem integrals} Proposition \ref{conv eta} and Lemma \ref{int eta0} imply \begin{enumerate} \item $\displaystyle{\lim_{n\to +\infty}\int_{B_{R r_n}(x_n) } \lambda_n u_n^2 e^{\beta_n u_n^2} dx =1+O(R^{-2m})}$. \item $\displaystyle{\lim_{n\to +\infty}\int_{B_{R r_n}(x_n) } \lambda_n \mu_n u_n e^{\beta_n u_n^2} dx =1+O(R^{-2m})}$. \item $\displaystyle{\lim_{n\to + \infty}\int_{B_{R r_n}(x_n) } \lambda_n \mu_n |u_n| e^{\beta_n u_n^2} dx =1+O(R^{-2m})}$. \item $\displaystyle{\lim_{n\to +\infty}\int_{B_{R r_n}(x_n) } \lambda_n \mu_n^2 e^{\beta_n u_n^2} dx =1+O(R^{-2m})}$. \end{enumerate} Indeed, all these integrals converge to $\displaystyle{\omega_{2m}^{-1}\int_{B_R(0)} e^{2\beta^* \eta_0} dy}$. \end{rem} \subsection{Estimates on the derivatives of \texorpdfstring{$\boldsymbol{u_n}$}{un} } In this subsection, we prove some pointwise estimates on $u_n$ and its derivatives, inspired by those in Theorem 1 of \cite{mar} and Proposition 11 of \cite{MS} (where the authors assume $\alpha=0$ and $u_n\ge0$). \begin{prop}\label{stima Luca} There exists a constant $C>0$ such that $$ |x-x_n|^{2m}\lambda_n u_n^2 e^{\beta_n u_n^2} \le C, $$ for any $x\in \Omega$. \end{prop} \begin{proof} Let us denote \begin{equation}\label{defln1} L_n:= \sup_{x\in \overline{\Omega}} |x-x_n|^{2m} \lambda_n u_n^2(x)e^{\beta_n u_n^2(x)}. \end{equation} Assume by contradiction that $L_n\to +\infty$ as $n\to +\infty$.
Take a point $\widetilde x_n\in \Omega$ such that \begin{equation}\label{defln2} L_n = |\widetilde x_n -x_n |^{2m} \lambda_n u_n^2(\widetilde x_n) e^{\beta_n u_n^2(\widetilde x_n)}, \end{equation} and define $\widetilde \mu_n := u_n(\widetilde x_n)$ and $s_n\in \mathbb{R}^+$ such that \begin{equation}\label{defsn} \omega_{2m} s_n^{2m} \lambda_n \widetilde \mu_n^2 e^{\beta_n \widetilde \mu_n^2}= 1. \end{equation} We will show that $\widetilde x_n$ and $s_n$ satisfy the assumptions of Proposition \ref{conv eta gen}. Clearly, since $L_n\to+\infty$, \eqref{defln2} and \eqref{defsn} imply that \begin{equation}\label{ratio} |\widetilde \mu_n| \to +\infty\qquad \text{ and } \qquad \frac{|x_n-\widetilde x_n|}{s_n}\to +\infty. \end{equation} In particular, $s_n\to 0$. Let $v_n$ and $\widetilde \Omega_n$ be as in Proposition \ref{conv eta gen}. Using \eqref{defln1} and \eqref{defln2}, we obtain \begin{equation}\label{important} \frac{v_n^2}{\widetilde \mu_n^2} e^{\beta_n (v_n^2- \widetilde \mu_n^2)} \le \frac{|y_n|^{2m}}{|y-y_n|^{2m}}, \end{equation} for any $y\in \widetilde \Omega_{n}$, where $y_n:= \frac{x_n -\widetilde x_n}{s_n}$. Since $|y_n|\to+\infty$, \eqref{important} yields \begin{equation}\label{important2} \frac{v_n^2}{\widetilde \mu_n^2} e^{\beta_n (v_n^2- \widetilde \mu_n^2)} \le C(R) \qquad \text{in } \widetilde \Omega_n\cap B_R(0), \end{equation} for sufficiently large $n$. Thanks to \eqref{important2}, we infer that $$ \left|\frac{v_n}{\widetilde \mu_n}\right| \le C(R) \quad \text{ and } \quad v_n^2 - \widetilde \mu_n^2 \le C(R) $$ on the set $\{|v_n|\ge |\widetilde \mu_n|\}\cap B_R(0)$; on its complement both bounds hold trivially, so they hold on all of $\widetilde \Omega_n\cap B_R(0)$. Then, all the assumptions of Proposition \ref{conv eta gen} are satisfied.
In particular, as in Remark \ref{rem integrals}, by Proposition \ref{conv eta gen} and Lemma \ref{int eta0}, we get \begin{equation}\label{mass} \lim_{n\to+\infty}\int_{B_{Rs_n}(\widetilde x_n)} \lambda_n u_n^2 e^{\beta_n u_n^2}dx = \omega_{2m}^{-1}\int_{B_R(0)}e^{2\beta^*\eta_0} dy = 1 +O(R^{-2m}). \end{equation} Besides, if $r_n$ is as in \eqref{rn}, we have $r_n\le s_n$ and, by \eqref{ratio}, $B_{Rs_n}(\widetilde x_n)\cap B_{Rr_n}(x_n)=\emptyset$, for any $R>0$. Then, \eqref{lambdan}, Remark \ref{rem integrals}, and \eqref{mass} imply $$ 1=\lim_{n\to +\infty} \int_{\Omega} \lambda_n u_n^2 e^{\beta_n u_n^2} dx \ge \lim_{n\to +\infty} \int_{B_{R r_n}(x_n)\cup B_{Rs_n}(\widetilde x_n)} \lambda_n u_n^2 e^{\beta_n u_n^2} dx = 2 +O(R^{-2m}), $$ which is a contradiction for large values of $R$. \end{proof} Next, we prove pointwise estimates on $|\nabla^l u_n|$ for any $1\le l\le 2m-1$. \begin{prop}\label{stime grad} There exists a constant $C>0$ such that $$ |x-x_n|^l | u_n \nabla^l u_n | \le C, $$ for any $x\in \Omega$ and $1\le l\le 2m-1$. \end{prop} The proof of Proposition \ref{stime grad} follows the same steps as the proof of Proposition \ref{conv eta gen}. However, in this case it will be more difficult to obtain uniform bounds on $u_n$ on a small scale. For any $1\le l \le 2m-1$, we denote \begin{equation}\label{Lnl} L_{n,l}:= \sup_{x\in \overline \Omega} |x-x_n|^l | u_n| |\nabla^l u_n |. \end{equation} Let $ x_{n,l} \in \Omega$ be such that \begin{equation}\label{Lnl2} | x_{n,l} -x_n|^l |u_n( x_{n,l}) \nabla^l u_n ( x_{n,l})|=L_{n,l}. \end{equation} We define $s_{n,l} := | x_{n,l} -x_n |$, $ \mu_{n,l}:=u_n( x_{n,l})$, and $y_{n,l} := \frac{x_n- x_{n,l}}{s_{n,l}}$. Up to subsequences, we can assume $y_{n,l} \to \overline{y}_l\in \mathbb S^{2m-1}$ as $n\to +\infty$.
Consider now the scaled functions \[ v_{n,l} (y) = u_{n} ( x_{n,l} + s_{n,l} y), \] which are defined on the sets $\Omega_{n,l}:=\{ y\in \mathbb{R}^{2m}\;:\; x_{n,l} + s_{n,l}y \in \Omega \}$. Observe that $v_{n,l}$ satisfies \begin{equation}\label{eqvnl} \begin{Si}{cc} (-\Delta)^m v_{n,l} = s_{n,l}^{2m} \lambda_n v_{n,l} e^{\beta_n v_{n,l}^2} +s_{n,l}^{2m} \alpha v_{n,l} & \text{ in }\Omega_{n,l},\\ v_{n,l}= \partial_{\nu} v_{n,l} = \ldots = \partial_{\nu}^{m-1} v_{n,l}=0, & \text{ on }\partial \Omega_{n,l}. \end{Si} \end{equation} Moreover, Proposition \ref{stima Luca} yields \begin{equation}\label{scaled estimate} s_{n,l}^{2m } \lambda_n v_{n,l}^{2} e^{\beta_n v_{n,l}^2} \le \frac{C}{|y-y_{n,l}|^{2m}}, \end{equation} for any $y\in \Omega_{n,l}$, and \eqref{Lnl2} can be rewritten as \begin{equation}\label{Lnl3} L_{n,l}= |v_{n,l}(0) ||\nabla^l v_{n,l}(0)| = |\mu_{n,l}| |\nabla^l v_{n,l}(0)|. \end{equation} \begin{rem}\label{rem sn} If $L_{n,l}\to +\infty$ as $n\to +\infty$, then Lemma \ref{conv0C} implies that $s_{n,l} \to 0$. In particular, \eqref{scaled estimate} gives \[ s_{n,l}^{2m } \lambda_n v_{n,l} e^{\beta_n v_{n,l}^2} \to 0 \] as $n\to +\infty$, uniformly in $\Omega_{n,l} \setminus B_{\frac{1}{R}}(\overline y_l)$, for any $R>0$. Indeed, if we choose a sequence $\{a_n\}_{n\in \mathbb{N}}$ such that $a_n \to +\infty$ and $s_{n,l}^{2m}\lambda_n a_n e^{\beta_n a_n^2} \to 0$ as $n\to +\infty$, then we have $$ \left|s_{n,l}^{2m } \lambda_n v_{n,l} e^{\beta_n v_{n,l}^2} \right|\le s_{n,l}^{2m } \lambda_n a_ne^{\beta_n a_n^2}, $$ on the set $\{|v_{n,l}|\le a_n\}$, while \eqref{scaled estimate} gives $$ \left|s_{n,l}^{2m } \lambda_n v_{n,l} e^{\beta_n v_{n,l}^2} \right|\le \frac{s_{n,l}^{2m } \lambda_n v_{n,l}^2 e^{\beta_n v_{n,l}^2} }{a_n} \le \frac{C}{a_n|y-y_{n,l}|^{2m}}, $$ on the set $\{|v_{n,l}|\ge a_n\}$. \end{rem} In the following, we will treat separately the cases $l=1$ and $2\le l \le 2m-1$.
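For orientation, the first-order case of the estimate above is sharp on the limit profile. The following sketch is purely illustrative and outside the argument: it takes $m=1$ and assumes $\beta^*=4\pi$ as before, for which $|y|\,|\nabla \eta_0(y)| = \frac{1}{2\pi}\frac{|y|^2}{4+|y|^2}$ is bounded by $\frac{1}{2\pi}$, the bound being attained only as $|y|\to+\infty$.

```python
import math

# Outside the argument: for m = 1 (assuming beta* = 4*pi, an illustrative
# value), the limit profile satisfies
#   |y| |grad eta_0(y)| = (1/(2*pi)) |y|^2 / (4 + |y|^2) <= 1/(2*pi),
# the scale-invariant decay behind the first-order pointwise estimate.
BETA_STAR = 4 * math.pi

def eta0_prime(r):
    # d/dr of eta_0(r) = -(1/beta*) log(1 + r^2/4)
    return -(2 * r / (4 + r * r)) / BETA_STAR

bound = 1 / (2 * math.pi)
for r in (0.1, 1.0, 10.0, 1000.0):
    assert r * abs(eta0_prime(r)) < bound
# the bound is approached only in the limit r -> infinity
assert abs(1000.0 * abs(eta0_prime(1000.0)) - bound) < 1e-5
print("first-order decay check passed")
```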
\begin{lemma}\label{grad dist} If $L_{n,1}\to +\infty$ as $n\to +\infty$, then we have $\frac{d( x_{n,1},\partial \Omega)}{s_{n,1}} \to +\infty $. Moreover, $\frac{v_{n,1}}{\mu_{n,1}} \to 1$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}\setminus \{\overline y_1\})$, for any $\gamma\in (0,1)$. \end{lemma} \begin{proof} It is sufficient to prove that $x_{n,1}$, $s_{n,1}$ and $v_{n,1}$ satisfy the assumptions of Lemma \ref{boundarydist0} and Lemma \ref{boundarydist}, with $\Sigma=\{\overline y_1\}$. First of all, we observe that, for any $R>0$, the definition of $L_{n,1}$ implies $|\nabla v_{n,1}^2|\le C(R)L_{n,1}$ in $\Omega_{n,1} \setminus B_\frac{1}{R}(\overline y_1)$. Then, a Taylor expansion and \eqref{Lnl3} yield \begin{equation}\label{L1} v_{n,1}^2 \le \mu_{n,1}^2+ C(R) L_{n,1}\le C(R) D_{n,1}^2 \end{equation} in $\Omega_{n,1} \cap B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1)$, where $D_{n,1}:= \max_{0\le i\le 2m-1}|\nabla^i v_{n,1}(0)|$. Moreover, by equation \eqref{eqvnl}, Remark \ref{rem sn}, and \eqref{L1}, we get $$|(-\Delta)^m v_{n,1} | \le o(1) + s_{n,1}^{2m} |\alpha| |v_{n,1}| = o(1) + O(s_{n,1}^{2m}D_{n,1}),$$ uniformly in $\Omega_{n,1} \cap B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1)$. Finally, Remark \ref{rem sn} gives $s_{n,1}\to 0$, while \eqref{Lnl3} and the condition $L_{n,1}\to +\infty$ imply $D_{n,1}\to +\infty$. \end{proof} We can now prove Proposition \ref{stime grad} for $l=1$. \begin{proof}[Proof of Proposition \ref{stime grad} for $l=1$] Assume by contradiction that $L_{n,1}\to +\infty$, as $n\to +\infty$. Consider the function $z_n(y):= \dfrac{v_{n,1}(y)- \mu_{n,1}}{| \nabla v_{n,1}(0)|}$. On the one hand, by the definitions of $L_{n,1}$ and $ x_{n,1}$ in \eqref{Lnl} and \eqref{Lnl2}, and by Lemma \ref{grad dist}, we have $$ |\nabla v_{n,1}(y)|\le \frac{|\nabla v_{n,1}(0)|(1+o(1))}{|y-y_{n,1}|} \le C(R) |\nabla v_{n,1}(0)|, $$ uniformly in $B_R(0)\setminus B_\frac{1}{R}(\overline y_{1})$, for any $R>0$.
In particular, \[ |\nabla z_n(y) | \le C(R) \qquad \text{ in } B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1). \] Since $z_n(0)=0$, $z_n$ is bounded in $L^\infty_{loc}(\mathbb{R}^{2m}\setminus \{\overline y_1\})$. On the other hand, arguing as in \eqref{explanation}, Lemma \ref{grad dist} implies that $$s_{n,1}^{2m} \mu_{n,1}^2 =o(1),$$ and, using also \eqref{scaled estimate}, that $$ (-\Delta)^m z_n = \frac{\lambda_n s_{n,1}^{2m} v_{n,1} e^{\beta_n v_{n,1}^2} + \alpha s_{n,1}^{2m} v_{n,1}}{|\nabla v_{n,1} (0)|} = O\bra{\frac{1}{ |\mu_{n,1}| |\nabla v_{n,1}(0)|}}=o(1), \qquad \text{ in } B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1). $$ By Proposition \ref{ell loc}, we find a function $z_0$, $m$-harmonic in $\mathbb{R}^{2m}\setminus \{ \overline y_1\}$, such that, up to subsequences, $z_n\to z_0$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m} \setminus \{\overline y_{1}\})$, for any $\gamma\in (0,1)$. We now claim that $z_0$ must be constant on $\mathbb{R}^{2m} \setminus \{ \overline y_{1}\}$. To prove this, we observe that, by Lemma \ref{integral est}, for any $R>0$ there exists a constant $C(R)>0$ such that $$ \| \nabla v_{n,1}^2\|_{L^1(B_R(0))} \le C(R). $$ Applying Lemma \ref{grad dist} and \eqref{Lnl3}, we obtain \[\begin{split} \| \nabla v_{n,1}^2\|_{L^1(B_R(0))} &\ge 2 \int_{B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1)} |v_{n,1}| |\nabla v_{n,1}| dy \\ &= 2|{ \mu_{n,1}}| (1+o(1)) \|\nabla v_{n,1}\|_{L^1(B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1))} \\ & =2 L_{n,1} (1+o(1)) \|\nabla z_n\|_{L^1(B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1))}. \end{split} \] Thus, as $n\to +\infty$, we have $$ \|\nabla z_n\|_{L^1(B_{R}(0)\setminus B_\frac{1}{R}(\overline y_1))} \le \frac{C(R)}{L_{n,1}}\to 0. $$ Hence, $z_0$ must be constant, which contradicts $$ |\nabla z_0(0)| = \lim_{n\to +\infty} |\nabla z_n(0)| =1. $$ \end{proof} We shall now deal with the case $2\le l\le 2m-1$.
Since Proposition \ref{stime grad} has been proved for $l=1$, we know that $L_{n,1}$ is bounded, i.e. $$ |x-x_n| |u_n(x)| |\nabla u_n (x)| \le C, $$ for any $x\in \Omega$. Equivalently, given any $1\le l\le 2m-1$, we have \begin{equation}\label{new} |v_{n,l}(y)||\nabla v_{n,l}(y)| \le \frac{C}{|y-y_{n,l}|}, \end{equation} for any $y\in \Omega_{n,l}$. In particular, \eqref{new} yields \begin{equation}\label{new2} \|\nabla v_{n,l}^2\|_{L^\infty(\Omega_{n,l}\setminus B_{\frac{1}{R}}(\overline y_l))}\le C(R), \end{equation} for any $R>0$. \begin{lemma}\label{grad dist case l} Fix any $2\le l\le 2m-1$. If $L_{n,l}\to +\infty$ as $n\to +\infty$, then we have $\frac{d( x_{n,l},\partial \Omega)}{s_{n,l}} \to +\infty $. Moreover, $\frac{v_{n,l}}{\mu_{n,l}} \to 1$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}\setminus \{\overline y_l\})$, for any $\gamma\in (0,1)$. \end{lemma} \begin{proof} As in Lemma \ref{grad dist}, we show that $x_{n,l}$, $s_{n,l}$ and $v_{n,l}$ satisfy the assumptions of Lemma \ref{boundarydist0} and Lemma \ref{boundarydist}, with $\Sigma=\{\overline y_l\}$. Let us denote $D_{n,l}:= \max_{0\le i\le 2m-1}|\nabla^i v_{n,l}(0)|$. Note that \eqref{Lnl3} and the condition $L_{n,l}\to +\infty$ imply $D_{n,l}\to +\infty$. Then, for any $R>0$, a Taylor expansion and \eqref{new2} yield \begin{equation}\label{L1 case l} v_{n,l}^2 \le \mu_{n,l}^2+ C(R) \le C(R) D_{n,l}^2 \end{equation} in $\Omega_{n,l} \cap B_{R}(0)\setminus B_\frac{1}{R}(\overline y_l)$. Moreover, by equation \eqref{eqvnl}, Remark \ref{rem sn}, and \eqref{L1 case l}, we get $$|(-\Delta)^m v_{n,l} | \le o(1) + s_{n,l}^{2m} |\alpha| |v_{n,l}| = o(1) + O(s_{n,l}^{2m}D_{n,l}),$$ uniformly in $\Omega_{n,l} \cap B_{R}(0)\setminus B_\frac{1}{R}(\overline y_l)$. \end{proof} \begin{proof}[Proof of Proposition \ref{stime grad} for $2\le l\le 2m -1$.] Assume by contradiction that $L_{n,l}\to +\infty$ as $n\to +\infty$. Consider the function $z_n:= \frac{v_{n,l}- \mu_{n,l}}{|\nabla^l v_{n,l}(0)|}$.
Observe that \eqref{Lnl3}, \eqref{new}, and Lemma \ref{grad dist case l} yield \begin{equation}\label{grad fin} |\nabla z_n (y)|\le \frac{C(R)}{L_{n,l}} \to 0, \end{equation} uniformly in $B_R(0)\setminus B_\frac{1}{R}(\overline y_l)$, for any $R>0$. Since $z_n(0)=0$, \eqref{grad fin} implies that \[ |z_n|\le \frac{C(R)}{L_{n,l}}\to 0, \] uniformly in $B_R(0)\setminus B_\frac{1}{R}(\overline y_l)$. Similarly, as a consequence of equation \eqref{eqvnl}, \eqref{scaled estimate}, and Lemma \ref{grad dist case l}, one has $$ |(-\Delta)^m z_n |\le \frac{C(R)}{L_{n,l}}, $$ in $B_R(0)\setminus B_\frac{1}{R}(\overline y_l)$. Therefore, up to subsequences, $z_n\to 0$ in $C^{2m-1,\gamma}_{loc}(\mathbb{R}^{2m}\setminus \{\overline y_{l}\})$, for any $\gamma\in (0,1)$. Since $|\nabla^l z_n(0)| = 1$ for any $n$, we get a contradiction. \end{proof} \subsection{Polyharmonic truncations} In this subsection, we generalize the truncation argument introduced in \cite{AD} and \cite{Li1}. For any $A>1$ and $n\in \mathbb{N}$, we will introduce a new function $u_n^A$ whose values are close to $\frac{\mu_n}{A}$ in a small ball centered at $x_n$, and which coincides with $u_n$ outside the same ball. \begin{lemma}\label{def rhon} For any $A>1$ and $n\in \mathbb{N}$, there exist a radius $0<\rho_n^A <d(x_n,\partial \Omega)$ and a constant $C=C(A)$ such that \begin{enumerate} \item $u_n\ge \frac{\mu_n}{A}$ in $B_{\rho^A_n}(x_n)$. \item $|u_n - \frac{\mu_n}{A}|\le C \mu_n^{-1}$ on $\partial B_{\rho^A_n}(x_n)$. \item $|\nabla^l u_n |\le \dfrac{C}{\mu_n (\rho_n^A)^l}$ on $\partial B_{\rho_n^A}(x_n)$, for any $1\le l\le 2m-1$. \item If $r_n$ is defined as in \eqref{rn}, then $\frac{\rho_n^A}{r_n} \to +\infty$ as $n\to +\infty$.
\end{enumerate} \end{lemma} \begin{proof} For any $\sigma\in \mathbb S^{2m-1}$, the function $t\mapsto u_n(x_n + t \sigma)$ ranges from $\mu_n$ to $0$ in the interval $[0,t^*_n(\sigma)]$, where $t^*_n(\sigma) := \sup\{ t>0 \;:\; x_n +s \sigma \in \Omega \text{ for any } s\in [0,t] \}$. Since $u_n\in C(\overline\Omega)$, one can define $$ t_n^A(\sigma):= \inf\{t \in [0,t^*_n(\sigma))\;:\; u_n(x_n +t\sigma)=\frac{\mu_n}{A}\}. $$ Clearly, one has $0<t_n^A(\sigma)<t_n^*(\sigma)$ and $u_n(x_n +t_n^A(\sigma) \sigma) = \frac{\mu_n}{A}$, for any $\sigma \in \mathbb S^{2m-1}$. Moreover, the function $\sigma\longmapsto t_n^A(\sigma)$ is lower semi-continuous on $\mathbb S^{2m-1}$. In particular, we can find $\overline{\sigma}_n^A$ such that $\displaystyle{t_n^A(\overline \sigma_n^A)=\min_{\sigma\in \mathbb S^{2m-1}} t_n^A(\sigma)}$. We define $\rho_n^A:=t_n^A(\overline \sigma_n^A)$ and $y_n^A:= x_n + \rho_n^A\overline \sigma_n^A \in \partial B_{\rho_n^A}(x_n)$. By construction, we have $0<\rho_{n}^A<d(x_n,\partial \Omega)$, $u_n\ge \frac{\mu_n}{A}$ on $B_{\rho_n^A}(x_n)$, and $u_n(y_n^A)=\frac{\mu_n}{A}$. Thus, applying Proposition \ref{stime grad}, we get $$ |\nabla^l u_n|\le \frac{C A}{\mu_n (\rho_n^A)^l}, $$ on $\partial B_{\rho_n^A} (x_n)$, for any $1\le l\le 2m-1$. Furthermore, for any $x\in \partial B_{\rho_n^A} (x_n)$, one has $$ |u_n(x) - \frac{\mu_n}{A}| = |u_n(x) - u_n(y_n^A)| \le \pi \rho_n^A \sup_{\partial B_{\rho_n^A} (x_n)} |\nabla u_n| \le \frac{C}{\mu_n}. $$ Finally, if $r_n$ is as in \eqref{rn}, Proposition \ref{conv eta} and \eqref{etan} imply that $u_n = \mu_n + O(\mu_n^{-1})$ uniformly in $B_{r_nR}(x_n)$, for any $R>0$. Therefore, for sufficiently large $n$, we have $r_n R < \rho_n^A$. Since $R$ is arbitrary, we get the conclusion.
\end{proof} Let $\rho_n^A$ be as in the previous lemma and let $v_n^A \in C^{2m} (\overline{B_{\rho_n^A}(x_n)})$ be the unique solution of $$ \begin{Si}{ll} (-\Delta)^m v_n^A =0 & \text{ in } B_{\rho_n^A}(x_n), \\ \partial^i_\nu v_n^A = \partial^i_\nu u_n & \text{ on } \partial B_{\rho_n^A}(x_n), \ 0\le i \le m-1. \end{Si} $$ We consider the function \begin{equation}\label{unA} u_n^A(x) := \begin{Si}{cl} v_n^A & \text{ in } B_{\rho_n^A}(x_n),\\ u_n & \text{ in } \Omega \setminus B_{\rho_n^A}(x_n). \end{Si} \end{equation} By definition, we have $u_n^A\in H^{m}_0(\Omega)$. The main purpose of this section is to study the properties of $u_n^A$. \begin{lemma}\label{est unA} For any $A>1$, we have $$ u_n^A = \frac{\mu_n}{A} + O(\mu_n^{-1}), $$ uniformly on $\overline{B_{\rho_n^A}(x_n)}$. \end{lemma} \begin{proof} Define $\widetilde v_n(y) := v_n^A ( x_n + \rho_n^A y) -\frac{\mu_n}{A}$ for $y\in B_1(0)$. Then, by elliptic estimates (Proposition \ref{ell harm2}), we have \[\begin{split} \|v_n^A-\frac{\mu_n}{A}\|_{L^\infty(B_{\rho_n^A}(x_n))} = \|\widetilde v_n\|_{L^\infty(B_{1}(0))} & \le C\sum_{l=0}^{m-1} \|\nabla^l \widetilde v_n\|_{L^\infty(\partial B_1(0))} \\ & = C \sum_{l=0}^{m-1} (\rho_n^A)^l\|\nabla^l v_n^A\|_{L^\infty(\partial B_{\rho_n^A}(x_n))}\\ & = C \sum_{l=0}^{m-1} (\rho_n^A)^l\|\nabla^l u_n\|_{L^\infty(\partial B_{\rho_n^A}(x_n))}. \end{split} \] By Lemma \ref{def rhon}, we know that $(\rho_n^A)^l\|\nabla^l u_n\|_{L^\infty(\partial B_{\rho_n^A}(x_n))}\le \frac{C}{\mu_n}$, and the proof is complete. \end{proof} \begin{prop}\label{trunc} For any $A>1$, we have $$ \limsup_{n\to +\infty} \int_{\Omega} |\Delta^\frac{m}{2} u_n^A|^2 dx \le \frac{1}{A}.
$$ \end{prop} \begin{proof} Since $u_n^A \equiv u_n$ in $\Omega \setminus B_{\rho_n^A}(x_n)$, $u_n^A$ is $m-$harmonic in $B_{\rho_n^A}(x_n)$, and $\partial_\nu^j u_n^A =\partial_\nu^j u_n$ on $\partial B_{\rho_n^A}(x_n)$ for $0\le j\le m-1$, we have \begin{equation} \begin{split}\label{A1} \int_{\Omega} |\Delta^\frac{m}{2} (u_n -u_n^A)|^2 dx & = \int_{B_{\rho_n^A}(x_n)} \Delta^\frac{m}{2} (u_n - u^A_n) \Delta^\frac{m}{2} u_n \,dx\\ & = \int_{B_{\rho_n^A}(x_n)} (u_n - u_n^A) (-\Delta)^m u_n \,dx. \end{split} \end{equation} As a consequence of Lemma \ref{def rhon}, we get $(-\Delta)^m u_n\ge 0$ in $B_{\rho_n^A}(x_n)$. Therefore, the maximum principle guarantees $u_n\ge u_n^A$ in $B_{\rho_n^A}(x_n)$. Hence, if $r_n$ is as in \eqref{rn}, we have \begin{equation}\label{A2} \begin{split} \int_{B_{\rho_n^A}(x_n)} (u_n - u^A_n) (-\Delta)^m u_n dx &\ge \int_{B_{Rr_n}(x_n)} (u_n - u^A_n) (-\Delta)^m u_n dx \\ &\ge \int_{B_{Rr_n}(x_n)} (u_n - u^A_n) \lambda_n u_n e^{\beta_n u_n^2} dx, \end{split} \end{equation} for any $R>0$. By Lemma \ref{est unA}, \eqref{rn}, and Proposition \ref{conv eta}, we find \begin{equation}\label{A3} \begin{split} \int_{B_{Rr_n}(x_n)} &(u_n - u^A_n) \lambda_n u_n e^{\beta_n u_n^2} dx \\& = r_n^{2m}\lambda_n \int_{B_{R}(0)} \bra{\mu_n +\frac{\eta_n}{\mu_n}-\frac{\mu_n}{A} + O(\mu_n^{-1}) } \bra{\mu_n +\frac{\eta_n}{\mu_n}} e^{\beta_n \bra{ \mu_n^2 +2 \eta_n + \frac{\eta_n^2}{\mu_n^2}}} dy\\ &= \omega_{2m}^{-1}\bra{ 1- \frac{1}{A}} \int_{B_{R}(0)} e^{2\beta^*\eta_0} dy + o(1), \end{split} \end{equation} where $\eta_n$ and $\eta_0$ are as in \eqref{etan} and \eqref{eta0}. Using \eqref{A1}, \eqref{A2}, \eqref{A3}, and Lemma \ref{int eta0}, letting $n\to+\infty$ and then $R\to +\infty$, we find \begin{equation}\label{A4} \liminf_{n\to +\infty} \int_{\Omega} |\Delta^\frac{m}{2} (u_n -u_n^A)|^2 dx \ge 1-\frac{1}{A}.
\end{equation} Finally, since $u_n^A$ is $m-$harmonic in $B_{\rho_n^A}(x_n)$, we have \begin{equation}\label{A5} \begin{split} 1+o(1)&=\int_{\Omega} |\Delta^\frac{m}{2}u_n|^2 dx \\& = \int_{\Omega} |\Delta^\frac{m}{2} u_n^A|^2dx +\int_{\Omega} |\Delta^\frac{m}{2}\bra{u_n -u_n^A}|^2dx + 2\int_{\Omega} \Delta^\frac{m}{2}u_n^A \cdot \Delta^\frac{m}{2}(u_n-u_n^A)dx \\ & = \int_{\Omega} |\Delta^\frac{m}{2} u_n^A|^2dx +\int_{\Omega} |\Delta^\frac{m}{2}\bra{u_n -u_n^A}|^2dx. \end{split} \end{equation} Thus, \eqref{A4} and \eqref{A5} yield the conclusion. \end{proof} As a consequence of Proposition \ref{trunc}, we obtain some simple but crucial estimates. \begin{lemma}\label{lemma crucial} Let $0\le \alpha<\lambda_1(\Omega) $ and let $S_{\alpha,\beta^*}$ be as in \eqref{sup}. Then, we have $$S_{\alpha,\beta^*} = |\Omega| + \lim_{n\to +\infty} \frac{1}{\lambda_n \mu_n^2 }.$$ In particular, $\lambda_n \mu_n\to 0$ as $n\to +\infty$. \end{lemma} \begin{proof} Fix $A>1$ and let $u_n^A$ be as in \eqref{unA}. By Adams' inequality \eqref{Adams} and Proposition \ref{trunc}, we know that $e^{\beta_n (u_n^A)^2}$ is bounded in $L^{p}(\Omega)$, for any $1<p<A$. Since $u_n^A\to 0$ a.e. in $\Omega$, Theorem \ref{Vitali} gives \begin{equation}\label{outside} \lim_{n\to +\infty}\int_{\Omega\setminus B_{\rho_n^A}(x_n)} e^{\beta_n u_n^2} dx = \lim_{n\to +\infty} \int_{\Omega\setminus B_{\rho_n^A}(x_n)} e^{\beta_n (u_n^A)^2} dx = |\Omega|. \end{equation} By Lemma \ref{def rhon}, $u_n \ge \frac{\mu_{n}}{A}$ in $B_{\rho_n^A}(x_n)$. Hence, \begin{equation}\label{inside1} \int_{B_{\rho_n^A}(x_n)} e^{\beta_n u_n^2} dx \le \frac{A^2}{\mu_n^2} \int_{B_{\rho_n^A}(x_n)} u_n^2 e^{\beta_n u_n^2}dx \le \frac{A^2}{\lambda_n \mu_n^2}.
\end{equation} Moreover, for $R>0$ large enough, Lemma \ref{def rhon} and Remark \ref{rem integrals} imply \begin{equation}\label{inside2} \limsup_{n\to+\infty} \int_{B_{\rho_n^A}(x_n)} e^{\beta_n u_n^2} dx \ge \limsup_{n\to+ \infty} \int_{B_{r_n R}(x_n)} e^{\beta_n u_n^2} dx = (1+O(R^{-2m})) \limsup_{n\to +\infty} \frac{1}{\lambda_n \mu_n^2}. \end{equation} From \eqref{extremal}, \eqref{outside}, \eqref{inside1}, \eqref{inside2}, and Lemma \ref{limsub}, we get $$ |\Omega| + \limsup_{n\to +\infty} \frac{1}{\lambda_n\mu_n^2} \le S_{\alpha,\beta^*} \le |\Omega| + A^2\liminf_{n\to +\infty} \frac{1}{\lambda_n\mu_n^2}. $$ Since $A$ is an arbitrary number greater than $1$, we get the conclusion. \end{proof} We conclude this section with the following lemma, which gives $L^1$ bounds on $(-\Delta)^m (\mu_n u_n)$. This will be important in the analysis of the behaviour of $u_n$ far from $x_0$, which is carried out in the next section. \begin{lemma}\label{lemma mun} The sequence $\lambda_n \mu_n u_n e^{\beta_n u_n^2}$ is bounded in $L^1(\Omega)$. Moreover, $\lambda_n \mu_n u_n e^{\beta_n u_n^2} \rightharpoonup \delta_{x_0} $ in the sense of measures. \end{lemma} \begin{proof} By Remark \ref{rem integrals}, it is sufficient to show that $$ \lim_{R\to +\infty}\limsup_{n\to +\infty} \lambda_n \int_{\Omega \setminus B_{r_n R}(x_n)} \mu_n |u_n| e^{\beta_n u_n^2} dx =0. $$ Let us denote $f_n = \lambda_n \mu_n u_n e^{\beta_n u_n^2}$. Fix $A>1$ and let $\rho_n^A$ and $u_n^A$ be as in Lemma \ref{def rhon} and \eqref{unA}. Then, for any $R>0$ and $n$ sufficiently large, we have $$ \int_{\Omega \setminus B_{r_n R}(x_n)} | f_n(x)| dx = \int_{B_{\rho_n^A}(x_n) \setminus B_{r_n R}(x_n)}|f_n(x)| dx + \int_{\Omega \setminus B_{\rho_n^A}(x_n)} |f_n(x)| dx =: I_n^1+I_n^2.
$$ By Lemma \ref{def rhon}, \eqref{lambdan}, and Remark \ref{rem integrals}, we obtain \[\begin{split} I_n^1 \le A \int_{B_{\rho_n^A}(x_n) \setminus B_{r_n R}(x_n)} \lambda_n u_n^2 e^{\beta_n u_n^2} dx &\le A \int_{\Omega \setminus B_{r_n R}(x_n)} \lambda_n u_n^2 e^{\beta_n u_n^2} dx \\ & = A \bra{1 - \int_{ B_{r_n R}(x_n)} \lambda_n u_n^2 e^{\beta_n u_n^2} dx} \\ &= A \,O(R^{-2m}). \end{split} \] Therefore, \begin{equation}\label{In1} \displaystyle{\limsup_{n\to +\infty} I_n^1 \le A\, O(R^{-2m})}. \end{equation} For the second integral, we observe that Proposition \ref{trunc} and Adams' inequality imply that $e^{\beta_n (u_n^A)^2}$ is bounded in $L^p(\Omega)$, for any $1<p<A$. In particular, applying H\"older's inequality and Lemma \ref{lemma crucial}, we get \begin{equation}\label{In2} \begin{split} I_n^2 \le \int_{\Omega \setminus B_{\rho_n^A}(x_n)} | f_n(x)| dx &\le \lambda_n \mu_n \|e^{\beta_n (u_n^A)^2}\|_{L^p(\Omega)}\|u_n\|_{L^{\frac{p}{p-1}}(\Omega)} \\ & \le C \lambda_n \mu_n \|u_n\|_{L^{\frac{p}{p-1}}(\Omega)} \to 0, \end{split} \end{equation} as $n\to +\infty$. Since $R$ is arbitrary, the conclusion follows from \eqref{In1} and \eqref{In2}. \end{proof} \subsection{Convergence to Green's function} In this subsection, we study the behavior of the sequence $\mu_n u_n$ according to the position of the blow-up point $x_0$. First, we show that, if $x_0 \in \Omega$, then $\mu_n u_n \to G_{\alpha,x_0}$ locally uniformly in $\overline{\Omega} \setminus \{x_0\}$, where $G_{\alpha,x_0}$ is the Green's function of $(-\Delta)^m -\alpha$, defined as in \eqref{green}. \begin{lemma}\label{bound green} The sequence $\mu_n u_n$ is bounded in $W^{m,p}_0(\Omega)$, for any $p \in [1,2)$. \end{lemma} \begin{proof} Let $v_n$ be the unique solution to \[ \begin{Si}{cc} (-\Delta)^m v_n = \lambda_n \mu_n u_n e^{\beta_n u_n^2}=:f_n & \mbox{ in }\Omega,\\ v_n = \partial_\nu v_n = \ldots = \partial^{m-1}_{\nu} v_n =0 & \mbox{ on }\partial \Omega.
\end{Si} \] By Lemma \ref{lemma mun}, we know that $f_n$ is bounded in $L^1(\Omega)$. By Proposition \ref{ell L1}, we can conclude that $v_n$ is bounded in $W^{m,p}_0(\Omega)$ for any $1\le p<2$. Define now $w_n = \mu_n u_n -v_n$. Then $w_n$ solves \[ \begin{Si}{cc} (-\Delta)^m w_n = \alpha w_n + \alpha v_n & \mbox{ in }\Omega,\\ w_n = \partial_\nu w_n =\ldots= \partial^{m-1}_\nu w_n =0 & \mbox{ on }\partial\Omega. \end{Si} \] Testing the equation against $w_n$ and using Poincar\'e's and Sobolev's inequalities, we find that \[\begin{split} \|w_n\|_{H^m_0(\Omega)}^2 = \alpha \|w_n\|_{L^2(\Omega)}^2 + \alpha\int_{\Omega} w_n v_n dx & \le \alpha \|w_n\|_{L^2(\Omega)}^2 + \alpha\|w_n\|_{L^2(\Omega)}\|v_n\|_{L^2(\Omega)}\\ & \le \frac{\alpha}{\lambda_1(\Omega)}\|w_n\|_{H^m_0(\Omega)}^2 +\frac{\alpha}{\sqrt{\lambda_1(\Omega)}} \|w_n\|_{H^m_0(\Omega)} \|v_n\|_{L^2(\Omega)} \\ & \le \frac{\alpha}{\lambda_1(\Omega)}\|w_n\|_{H^m_0(\Omega)}^2 +C \|w_n\|_{H^m_0(\Omega)}. \end{split} \] Then, $$ \|w_n\|_{H^m_0(\Omega)} \bra{1-\frac{\alpha}{\lambda_1(\Omega)}}\le C, $$ which implies that $w_n$ is bounded in $H^{m}_0(\Omega)$. This yields the conclusion. \end{proof} \begin{lemma}\label{conv Green} Let $x_0$ be as in \eqref{mun and xn2}. If $x_0 \in \Omega$, then we have: \begin{enumerate} \item $\mu_n u_n \rightharpoonup G_{\alpha,x_0}$ in $W^{m,p}_0(\Omega)$ for any $1< p<2$; \item $ \mu_n u_n \to G_{\alpha,x_0}$ in $\displaystyle{C^{2m-1,\gamma}_{loc}(\overline{\Omega}\setminus\{x_0\})}$, for any $\gamma\in (0,1)$. \end{enumerate} \end{lemma} \begin{proof} Fix $1<p<2$. By Lemma \ref{bound green}, we can find $\widetilde u\in W^{m,p}_0(\Omega)$ such that, up to a subsequence, $\mu_n u_n \rightharpoonup \widetilde{u}$ in $W^{m,p}_0(\Omega)$. Let $\varphi$ be any test function in $ C^\infty_c(\Omega)$.
Applying Lemma \ref{lemma mun} and the compactness of the embedding of $W^{m,p}_0(\Omega)$ into $L^1(\Omega)$, we obtain \[\begin{split} \int_{\Omega} (\mu_n\lambda_n u_n e^{\beta_n u_n^2} +\alpha \mu_n u_n ) \varphi dx &=\varphi(x_0) +\alpha \int_{\Omega} \widetilde{u} \varphi \, dx + o(1). \end{split} \] Hence, necessarily $\widetilde{u}=G_{\alpha,x_0}$. To conclude the proof, it remains to show that $\mu_n u_n\to G_{\alpha,x_0}$ in $C^{2m-1,\gamma}_{loc}(\overline \Omega \setminus \{x_0\})$. By elliptic estimates (Proposition \ref{ell new}), it is sufficient to show that $(-\Delta)^m (\mu_n u_n )$ is bounded in $L^s(\Omega\setminus B_\delta(x_0))$, for any $s>1$ and $\delta>0$. This follows from Lemma \ref{conv0C} and Lemma \ref{lemma crucial}. \end{proof} Lemma \ref{conv Green} describes the behaviour of $\mu_n u_n$ when $x_0\in \Omega$. The following lemma deals with the case $x_0\in \partial \Omega$. In fact, we will prove in the next subsection that blow-up at the boundary is not possible. \begin{lemma}\label{conv 0 boun} If $x_0 \in \partial \Omega$, we have: \begin{enumerate} \item $\mu_n u_n \rightharpoonup 0$ in $W^{m,p}_0(\Omega)$ for any $1<p<2$. \item $ \mu_n u_n \to 0$ in $\displaystyle{C^{2m-1,\gamma}_{loc}(\overline{\Omega}\setminus\{x_0\})}$, for any $\gamma \in (0,1)$. \end{enumerate} \end{lemma} \begin{proof} As before, using Lemma \ref{bound green} and Lemma \ref{lemma mun}, we can find $\widetilde u \in W^{m,p}_0(\Omega)$, $p\in (1,2)$, such that $\mu_nu_n\rightharpoonup \widetilde u$ in $W^{m,p}_0(\Omega)$ for any $p\in (1,2)$ and $\mu_n u_n \to \widetilde u $ in $C^{2m-1,\gamma}_{loc}(\overline \Omega\setminus \{x_0\})$, for any $\gamma \in (0,1)$.
Moreover, for any $\varphi\in C^\infty_c(\Omega)$, as $n\to +\infty$ we have \[\begin{split} \int_{\Omega} (\mu_n\lambda_n u_n e^{\beta_n u_n^2} +\alpha \mu_n u_n ) \varphi dx &= \alpha \int_{\Omega} \widetilde{u} \varphi \, dx + o(1). \end{split} \] Then, $\widetilde u$ is a weak solution of $(-\Delta)^m \widetilde u = \alpha \widetilde u$ in $\Omega$. Since $\widetilde u\in W^{m,p}_0(\Omega)$, elliptic regularity (Proposition \ref{ell zero}) implies $\widetilde u\in {W^{3m,p}(\Omega)}$, for any $p\in (1,2)$. In particular, we have $\widetilde u\in H^m_0(\Omega)$, and $$ \|\widetilde u\|_{H^m_0(\Omega)}^2 = \alpha \|\widetilde u\|_{L^2(\Omega)}^2. $$ Since $0\le \alpha <\lambda_1(\Omega)$, we must have $\widetilde u \equiv 0$. \end{proof} \subsection{The Pohozaev identity and blow-up at the boundary} In this subsection, we prove that the blow-up point $x_0$ cannot lie on $\partial \Omega$. The proof is based on the following Pohozaev-type identity. \begin{lemma}\label{Pohozaev} Let $\Omega\subseteq \mathbb{R}^{2m}$ be a bounded open set with Lipschitz boundary. If $u\in C^{2m}(\overline{\Omega})$ is a solution of \begin{equation}\label{eq Poho} (-\Delta)^m u = h(u), \end{equation} with $h:\mathbb{R} \longrightarrow \mathbb{R}$ continuous, then for any $y\in \mathbb{R}^{2m}$ the following identity holds: $$ \frac{1}{2}\int_{\partial\Omega} |\Delta^\frac{m}{2} u|^2 (x-y)\cdot \nu \,d\sigma(x) +\int_{\partial\Omega} f(x)d\sigma(x) = \int_{\partial \Omega} H(u(x)) (x-y) \cdot \nu d\sigma(x) - 2m \int_{\Omega} H(u(x))dx, $$ where $H(t):= \int_0^t h(s)ds$ and $$ f(x):= \sum_{j=0}^{m-1} (-1)^{m+j} \nu\cdot \bra{\Delta^{\frac{j}{2}} ((x-y)\cdot \nabla u) \Delta^{\frac{2m-j-1}{2}}u}. $$ \end{lemma} \begin{proof} We multiply equation \eqref{eq Poho} by $(x-y)\cdot\nabla u$ and integrate over $\Omega$ to obtain \begin{equation}\label{first step Poho} \int_{\Omega} (x-y)\cdot\nabla u \, (-\Delta)^m u \,dx = \int_{\Omega} (x-y)\cdot\nabla u \, h(u)dx.
\end{equation} On the one hand, using the divergence theorem, we can rewrite the RHS of \eqref{first step Poho} as \[ \begin{split} \int_{\Omega} (x-y)\cdot\nabla u \, h(u)dx & = \int_{\Omega} (x-y)\cdot\nabla H(u)dx \\ & = \int_{\Omega} \dv \bra{ (x-y) H(u)}dx - 2m \int_{\Omega} H(u) dx \\ & = \int_{\partial\Omega} H(u) (x-y)\cdot \nu \, d\sigma (x) - 2m \int_{\Omega} H(u) dx . \end{split} \] On the other hand, we can integrate by parts the LHS of \eqref{first step Poho} to find $$ \int_{\Omega} (x-y)\cdot\nabla u \, (-\Delta)^m u \, dx = \int_{\Omega} \Delta^\frac{m}{2} \bra{ (x-y) \cdot \nabla u } \Delta^\frac{m}{2} u \, dx + \int_{\partial\Omega} f d\sigma. $$ As proved in Lemma 14 of \cite{MarPet}, we have the identity $$ \Delta^\frac{m}{2} \bra{ (x-y) \cdot \nabla u } \cdot \Delta^\frac{m}{2} u = \frac{1}{2} \dv \bra{ (x-y)|\Delta^\frac{m}{2} u|^2}. $$ Hence, the divergence theorem yields $$ \int_{\Omega} (x-y)\cdot\nabla u \, (-\Delta)^m u \, dx = \frac{1}{2}\int_{\partial\Omega} (x-y)\cdot \nu \, |\Delta^\frac{m}{2} u|^2 d\sigma(x) + \int_{\partial\Omega} f d\sigma. $$ Combining the two computations gives the conclusion. \end{proof} We now apply Lemma \ref{Pohozaev} to $u_n$ in a neighborhood of $x_0$, and we use Lemma \ref{conv 0 boun} to prove that $x_0$ must lie in $\Omega$. A careful choice of the point $y$ is crucial to control the boundary terms in the identity. This strategy was first introduced in \cite{RobWei} and was applied in \cite{MarPet} to Liouville equations in dimension $2m$. \begin{lemma}\label{no boundary} Let $x_0$ be as in \eqref{mun and xn2}. Then $x_0 \in \Omega$. \end{lemma} \begin{proof} We assume by contradiction that $x_0\in \partial \Omega$. For a sufficiently small $\delta >0$, we have $\frac{1}{2}\le \nu \cdot \nu(x_0)\le 1$ on $\partial \Omega \cap B_\delta(x_0)$.
Then we can define \begin{equation*} \rho_n := \frac{ \int_{\partial \Omega \cap B_\delta (x_0)} |\Delta^\frac{m}{2} u_n|^2 (x-x_0)\cdot \nu d\sigma(x) }{\int_{ \partial \Omega \cap B_\delta (x_0)} |\Delta^\frac{m}{2} u_n|^2 \nu \cdot \nu(x_0)d\sigma(x) } \quad \text{ and } \quad y_n:= x_0 +\rho_n \nu(x_0). \end{equation*} Observe that $|y_n -x_0 | \le 2\delta$. Applying the Pohozaev identity of Lemma \ref{Pohozaev} on $\Omega_\delta = \Omega \cap B_\delta(x_0)$, we obtain \begin{equation}\label{our Pohozaev} \begin{split} \frac{1}{2}\int_{\partial \Omega_\delta} |\Delta^\frac{m}{2} u_n|^2 &(x-y_n)\cdot \nu \,d\sigma(x) +\int_{\partial \Omega_\delta} f_n(x)d\sigma(x) \\ & = \int_{\partial \Omega_\delta} H_n(u_n(x)) (x-y_n) \cdot \nu d\sigma(x) - 2m \int_{ \Omega_\delta} H_n(u_n(x))dx, \end{split} \end{equation} where $H_n(t)= \frac{\lambda_n}{2\beta_n}e^{\beta_n t^2} + \frac{\alpha}{2}t^2$, and $$ f_n:= \sum_{j=0}^{m-1} (-1)^{m+j} \nu\cdot \bra{\Delta^{\frac{j}{2}} ((x-y_n)\cdot \nabla u_n) \Delta^{\frac{2m-j-1}{2}}u_n}. $$ Observe that the definition of $y_n$ implies \begin{equation}\label{int 0} \int_{\partial \Omega \cap B_\delta (x_0)} |\Delta^\frac{m}{2} u_n|^2 (x-y_n)\cdot \nu \,d\sigma(x) =0, \end{equation} and thus, by Lemma \ref{conv 0 boun}, we have \begin{equation}\label{int 1} \int_{\partial \Omega_\delta} |\Delta^\frac{m}{2} u_n|^2 (x-y_n)\cdot \nu \,d\sigma(x) = \int_{\Omega \cap \partial B_\delta(x_0)} |\Delta^\frac{m}{2} u_n|^2 (x-y_n)\cdot \nu \,d\sigma(x) = o(\mu_n^{-2}). \end{equation} Similarly, since $f_n= - |\Delta^\frac{m}{2} u_n|^2 (x-y_n)\cdot \nu $ on $\partial \Omega \cap B_\delta(x_0)$, applying \eqref{int 0} and Lemma \ref{conv 0 boun}, we get \begin{equation}\label{int 2} \int_{\partial \Omega_\delta} f_n(x)d\sigma(x) = \int_{\Omega \cap \partial B_\delta(x_0)} f_n(x)d\sigma(x) = o(\mu_n^{-2}).
\end{equation} Furthermore, we have \[\begin{split} \int_{\partial \Omega_\delta} e^{\beta_n u_n^2} (x-y_n)\cdot \nu d\sigma(x) &= \int_{ \Omega \cap \partial B_\delta(x_0) } e^{\beta_n u_n^2} (x-y_n)\cdot \nu d\sigma (x) + \int_{\partial \Omega \cap B_\delta(x_0)} (x-y_n)\cdot \nu d\sigma(x) \\ & = I_{\delta,n} + o(\mu_n^{-2}), \end{split} \] where $I_{\delta,n} = \int_{\partial \Omega_\delta} (x-y_n)\cdot \nu d\sigma(x)=O(\delta)$ uniformly with respect to $n$. In particular, \begin{equation}\label{int 3} \begin{split} \int_{\partial \Omega_\delta} H_n(u_n(x)) (x-y_n)\cdot \nu d\sigma(x) &= \frac{\lambda_n }{2\beta_n} \int_{\partial \Omega_\delta} e^{\beta_n u_n^2} (x-y_n)\cdot \nu d\sigma(x) + \frac{\alpha}{2} \int_{\Omega \cap \partial B_\delta(x_0) } u_n^2 (x-y_n)\cdot \nu d\sigma (x)\\ & = \frac{\lambda_n}{2\beta_n}I_{\delta,n} +o(\mu_n^{-2}).\\ \end{split} \end{equation} Finally, we have \begin{equation} \label{int 4} \begin{split} \int_{ \Omega_\delta} H_n(u_n(x))dx &= \frac{\lambda_n }{2\beta_n} \int_{\Omega_\delta } e^{\beta_n u_n^2} dx + \frac{ \alpha}{2} \int_{ \Omega_\delta} u_n^2 dx \\ & = \frac{\lambda_n}{2\beta_n} \int_{\Omega_\delta}e^{\beta_n u_n^2} dx+ o(\mu_{n}^{-2}). \\ \end{split} \end{equation} Therefore, \eqref{int 1}, \eqref{int 2}, \eqref{int 3}, and \eqref{int 4} allow us to rewrite the identity in \eqref{our Pohozaev} as \begin{equation}\label{Poho1} \lambda_n \mu_n^2 \bra{ 2m \int_{\Omega_\delta}e^{\beta_n u_n^2} dx -I_{\delta,n}} =o(1). \end{equation} Lemma \ref{conv 0 boun}, \eqref{extremal}, and Lemma \ref{limsub} ensure that $$ \int_{\Omega_\delta}e^{\beta_n u_n^2} dx = F_{\beta_n}(u_n) - \int_{\Omega\setminus B_\delta(x_0)} e^{\beta_n u_n^2}dx \to S_{\alpha,\beta^*} - |\Omega \setminus B_\delta(x_0)|\ge S_{\alpha,\beta^*}- |\Omega| >0, $$ as $n\to +\infty$.
Then, for $\delta$ sufficiently small, the quantity $2m\int_{\Omega_\delta}e^{\beta_n u_n^2} dx - I_{\delta,n}$ is bounded away from $0$. Hence, the identity \eqref{Poho1} implies $\lambda_n \mu_n^2 \to 0$ and, since $I_{\delta,n}=O(\delta)$, \begin{equation}\label{Poho2} \lambda_n \mu_n^2 \int_{\Omega_\delta}e^{\beta_n u_n^2} dx =o(1). \end{equation} But \eqref{Poho2} contradicts Remark \ref{rem integrals}, since for any large $R>0$ one has $$ \lambda_n \mu_n^2 \int_{\Omega_\delta}e^{\beta_n u_n^2} dx \ge \lambda_n \mu_n^2 \int_{B_{R r_n }(x_n)}e^{\beta_n u_n^2} dx = 1 +O(R^{-2m}). $$ \end{proof} \subsection{Neck analysis} In this subsection, we complete the proof of Proposition \ref{big} by giving a sharp upper bound on $\displaystyle{\frac{1}{\lambda_n \mu_n^2}}$. Let us fix a large $R>0$ and a small $\delta>0$ and consider the annular region \begin{equation*} A_n(R,\delta):=\{x \in \Omega : r_n R \le |x-x_n| \le \delta \}, \end{equation*} where $r_n$ is given by \eqref{rn}. Note that, by Lemma \ref{no boundary}, we have $A_n(R,\delta)\subseteq \Omega$, for any $0<\delta<d(x_0,\partial \Omega)$ and any sufficiently large $n\in \mathbb{N}$. Our main idea is to compare the Dirichlet energy of $u_n$ on $A_n(R,\delta)$ with the energy of the $m-$harmonic function \begin{equation*} \mathcal W_n(x):=-\frac{2m}{\beta^*\mu_n}\log |x-x_n|. \end{equation*} As a consequence of Proposition \ref{conv eta} and \eqref{etan}, on $\partial B_{Rr_n}(x_n)$, we have $$ u_n(x) = \mu_n + \frac{\eta_0(\frac{x-x_n}{r_n})}{\mu_n} +o(\mu_n^{-1}) = \mu_n -\frac{2m}{\beta^*\mu_n} \log \frac{R}{2} + \frac{O(R^{-2})}{\mu_n}+o(\mu_n^{-1}), $$ as $n\to +\infty$.
Similarly, using also \eqref{asym}, we find $$ \Delta^\frac{j}{2} u_n (x)= \frac{\Delta^\frac{j}{2} \eta_0(\frac{x-x_n}{r_n})}{r_n^j \mu_n} + o(r_n^{-j}\mu_n^{-1}) =-\frac{2m K_{m,\frac{j}{2}}}{\beta^*r_n^j\mu_n R^{j}} e_{n,j} + \frac{O(R^{-j-2})}{r_n^j\mu_n}+o(r_n^{-j}\mu_n^{-1}), $$ for any $1\le j \le 2m-1$, where $e_{n,j}:= e_j(x-x_n)$ with $e_j$ as in \eqref{ej}. The function $\mathcal W_n$ has an analogous behaviour. Indeed, recalling the definition of $r_n$ in \eqref{rn}, we get \begin{equation}\label{Wn1} \mathcal W_n(x) = \frac{\beta_n}{\beta^*} \mu_n - \frac{2m}{\beta^*\mu_n} \log R + \frac{1}{\beta^* \mu_n} \log \bra{\omega_{2m} \lambda_n \mu_n^2}, \end{equation} and, by \eqref{ej}, \begin{equation}\label{Wn2} \Delta^\frac{j}{2} \mathcal W_n = -\frac{2m K_{m,\frac{j}{2}}}{\beta^*\mu_n r_n^j R^{j}}e_{n,j}, \qquad \mbox{ for any }\, 1\le j\le 2m-1, \end{equation} on $\partial B_{Rr_n}(x_n)$. We can thus conclude that, as $n\to+\infty$, on $\partial B_{R r_n}(x_n)$, we have the expansions \begin{equation}\label{ex1} u_n - \mathcal W_n = \bra{1-\frac{\beta_n}{\beta^*} }\mu_n +\frac{1}{\beta^*\mu_n} \log \bra{ \frac{2^{2m}}{\omega_{2m} \lambda_n \mu_n^2}} + \frac{O(R^{-2})}{\mu_n}+o(\mu_n^{-1}), \end{equation} and \begin{equation}\label{ex2} \Delta^\frac{j}{2} (u_n -\mathcal W_n) = \frac{O(R^{-j-2})}{r_n^j\mu_n}+o(r_n^{-j}\mu_n^{-1}),\qquad \text{ for any } 1\le j\le 2m-1. \end{equation} Similarly, on $\partial B_\delta (x_n)$, we can use Lemma \ref{conv Green} and Proposition \ref{prop green} to get \begin{equation}\label{ex3} u_n - \mathcal W_n =\frac{C_{\alpha, x_0} }{\mu_n} + \frac{O(\delta)}{\mu_n}+o(\mu_n^{-1}), \end{equation} and \begin{equation}\label{ex4} \Delta^\frac{j}{2} (u_n -\mathcal W_n)= \frac{O(1)}{\mu_n}+o(\mu_n^{-1}), \qquad \text{ for any } 1\le j\le 2m-1. \end{equation} Here we have also used that $\frac{|x-x_n|}{|x-x_0|}\to 1$, uniformly on $\partial B_\delta(x_n)$.
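As a consistency check for \eqref{Wn1}, whose constant term is where the crucial quantity $\lambda_n\mu_n^2$ enters, assume, as \eqref{Wn1} itself encodes, that \eqref{rn} normalizes $r_n$ so that $r_n^{2m}\,\omega_{2m}\lambda_n\mu_n^2\, e^{\beta_n\mu_n^2}=1$. Then, on $\partial B_{Rr_n}(x_n)$, one computes \[ \mathcal W_n = -\frac{2m}{\beta^*\mu_n}\log (R r_n) = -\frac{2m}{\beta^*\mu_n}\log R + \frac{1}{\beta^*\mu_n}\log\frac{1}{r_n^{2m}} = \frac{\beta_n}{\beta^*}\mu_n - \frac{2m}{\beta^*\mu_n}\log R + \frac{1}{\beta^*\mu_n}\log\bra{\omega_{2m}\lambda_n\mu_n^2}, \] which is precisely \eqref{Wn1}.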
The asymptotic formulas in \eqref{Wn1}-\eqref{ex4} allow us to compare $\| \Delta^{\frac{m}{2}}u_n \|_{L^2(A_n(R,\delta))}$ and $\| \Delta^{\frac{m}{2}} \mathcal W_n \|_{L^2(A_n(R,\delta))}$. Since the quantity $\lambda_n \mu_n^2$ appears in \eqref{ex1}, this will result in the desired upper bound. \begin{lemma}\label{final} Under the assumptions of Proposition \ref{big}, we have $$ \lim_{n\to +\infty} \frac{1}{\lambda_n \mu_n^2} \le \frac{\omega_{2m}}{2^{2m}} e^{\beta^*\bra{C_{\alpha,x_0} -I_m}}. $$ \end{lemma} \begin{proof} First, Young's inequality yields \begin{equation}\label{first} \begin{split} \| \Delta^{\frac{m}{2}}u_n \|_{L^2(A_n(R,\delta))}^2 - \| \Delta^{\frac{m}{2}}\mathcal W_n \|_{L^2(A_n(R,\delta))}^2\ge 2 \int_{A_n(R,\delta)} \Delta^\frac{m}{2} (u_n-\mathcal W_n) \cdot \Delta^\frac{m}{2} \mathcal W_n dx. \end{split} \end{equation} Integrating by parts, the integral on the RHS equals \begin{equation}\label{b0} \int_{A_n(R,\delta)} \Delta^\frac{m}{2} (u_n-\mathcal W_n) \cdot \Delta^\frac{m}{2} \mathcal W_n dx = - \int_{\partial A_n(R,\delta)} \sum_{j=0}^{m-1} (-1)^{m+j} \nu\cdot \bra{\Delta^{\frac{j}{2}} (u_n-\mathcal W_n) \Delta^{\frac{2m-j-1}{2}}\mathcal W_n} d\sigma. \end{equation} Let us denote $\Lambda_n:= \frac{2^{2m}}{\omega_{2m} \lambda_n \mu_n^2}$.
On $\partial B_{R r_n}(x_n)$, by \eqref{Wn2}, \eqref{ex1}, \eqref{ex2}, and the explicit expression of $K_{m,\frac{2m-1}{2}}$ (see \eqref{Kml2}), we find \begin{equation}\label{b1} \begin{split} (u_n -\mathcal W_n ) \Delta^{\frac{2m-1}{2}} \mathcal W_n \cdot \nu &= -\frac{2m}{\beta^*} \bra{1 - \frac{\beta_n}{\beta^*} +\frac{1}{\beta^* \mu_n^2} \log\bra{ \Lambda_n} + \frac{O(R^{-2})}{\mu_n^2}+o(\mu_n^{-2})} \frac{K_{m,\frac{2m-1}{2}}}{(r_nR)^{2m-1}} \\ & = \frac{(-1)^m}{\omega_{2m-1}(r_n R)^{2m-1}} \bra{1 - \frac{\beta_n}{\beta^*} +\frac{1}{\beta^* \mu_n^2} \log\bra{ \Lambda_n} + \frac{O(R^{-2})}{\mu_n^2}+o(\mu_n^{-2})}, \end{split} \end{equation} and, for $1\le j\le m-1$, \begin{equation}\label{b2} \begin{split} \Delta^\frac{j}{2}(u_n -\mathcal W_n ) \Delta^{\frac{2m-j-1}{2}} \mathcal W_n \cdot \nu &= \bra{\frac{O(R^{-2})}{\mu_n^2} + o(\mu_n^{-2})}O\bra{(r_n R)^{1-2m}}. \end{split} \end{equation} Similarly, on $\partial B_{\delta}(x_0)$, \eqref{ej}, \eqref{ex3}, and \eqref{ex4} yield \begin{equation}\label{b3} \begin{split} (u_n -\mathcal W_n ) \Delta^{\frac{2m-1}{2}} \mathcal W_n \cdot \nu & = \frac{(-1)^m}{\omega_{2m-1}\delta^{2m-1}} \bra{\frac{C_{\alpha,x_0}}{\mu_n^2}+ \frac{O(\delta)}{\mu_n^2}+o(\mu_n^{-2})}, \end{split} \end{equation} and \begin{equation}\label{b4} \begin{split} \Delta^\frac{j}{2}(u_n -\mathcal W_n ) \Delta^{\frac{2m-j-1}{2}} \mathcal W_n \cdot \nu &= \bra{ \frac{O(1)}{\mu_n^2} + o(\mu_n^{-2}) }O(\delta^{1+j- 2m}), \end{split} \end{equation} for any $1\le j\le m-1$. Using \eqref{b1}, \eqref{b2}, \eqref{b3}, and \eqref{b4}, we can rewrite \eqref{b0} as \[ \int_{A_n(R,\delta)} \Delta^\frac{m}{2} (u_n-\mathcal W_n) \cdot \Delta^\frac{m}{2} \mathcal W_n dx = \Gamma_n+ \frac{O(R^{-2})}{\mu_n^2}+\frac{O(\delta)}{\mu_n^2}+o(\mu_n^{-2}), \] with \begin{equation}\label{Gamman} \Gamma_n := 1 - \frac{\beta_n}{\beta^*} +\frac{1}{\beta^* \mu_n^2} \log\bra{ \Lambda_n} - \frac{C_{\alpha,x_0}}{\mu_n^2}.
\end{equation} Therefore, \eqref{first} reads as \begin{equation}\label{lower} \| \Delta^{\frac{m}{2}}u_n \|_{L^2(A_n(R,\delta))}^2 - \| \Delta^{\frac{m}{2}}\mathcal W_n \|_{L^2(A_n(R,\delta))}^2\ge 2\Gamma_n+ \frac{O(R^{-2})}{\mu_n^2}+\frac{O(\delta)}{\mu_n^2}+o(\mu_n^{-2}). \end{equation} We shall now compute the difference on the LHS of \eqref{lower} more precisely. Since $\|u_n\|_\alpha =1$, we have \[ \| \Delta^{\frac{m}{2}}u_n \|_{L^2(A_n(R,\delta))}^2 = 1 + \alpha \|u_n\|^2_{L^2(\Omega)} - \int_{\Omega\setminus B_\delta(x_0)} |\Delta^\frac{m}{2} u_n|^2 dx - \int_{B_{r_n R}(x_n)} |\Delta^\frac{m}{2} u_n|^2 dx. \] By Lemma \ref{conv Green} and Lemma \ref{int Green}, we infer $$ \|u_n\|_{L^2(\Omega)}^2 = \frac{\|G_{\alpha,x_0}\|_{L^2(\Omega)}^2}{\mu_n^2} +o(\mu_n^{-2}), $$ and $$ \int_{\Omega\setminus B_\delta(x_0)} |\Delta^\frac{m}{2} u_n |^2 dx = \mu_n^{-2}\bra{ \alpha \|G_{\alpha,x_0} \|_{L^2(\Omega)}^2 - \frac{2m }{\beta^*} \log \delta + C_{\alpha,x_0} +H_m + O(\delta| \log \delta|)+o(1)}. $$ Moreover, Proposition \ref{conv eta} and Lemma \ref{int eta0} imply \[ \int_{B_{r_nR}(x_n)}|\Delta^\frac{m}{2} u_n |^2 dx= \mu_n^{-2}\bra{\frac{2m}{\beta^*}\log \frac{R}{2} + I_m -H_m + O(R^{-2}\log R )+o(1)}. \] Therefore, \[ \| \Delta^{\frac{m}{2}}u_n \|_{L^2(A_n(R,\delta))}^2 = 1+ \frac{2m}{\beta^*\mu_n^2} \log \frac{2\delta}{R} -\frac{C_{\alpha,x_0} +I_m}{\mu_n^2} + \frac{O(R^{-2}\log R )}{\mu_n^2} +\frac{O(\delta |\log \delta| )}{\mu_n^2} +o(\mu_n^{-2}). \] The identity $\omega_{2m-1}\frac{2m}{\beta^*}K_{m,\frac{m}{2}}^2=1$ and a direct computation show that \[\begin{split} \| \Delta^{\frac{m}{2}}\mathcal W_n \|_{L^2(A_n(R,\delta))}^2 &= \omega_{2m-1}\bra{\frac{2m K_{m,\frac{m}{2}}}{\beta^* \mu_n }}^2 \log \frac{\delta}{R r_n} \\ &= \frac{2m}{\beta^* \mu_n^2} \log \frac{\delta}{R} +\frac{\beta_n}{\beta^*} +\frac{1}{\beta^*\mu_n^2}\log\bra{{\omega_{2m}\lambda_n \mu_n^2}}.
\end{split} \] Hence, \begin{equation}\label{upper} \| \Delta^{\frac{m}{2}}u_n \|_{L^2(A_n(R,\delta))}^2 - \| \Delta^{\frac{m}{2}}\mathcal W_n \|_{L^2(A_n(R,\delta))}^2 = \Gamma_n -\frac{I_m}{\mu_n^2} +\frac{O(R^{-2}\log R)}{\mu_n^2}+\frac{O(\delta |\log \delta|)}{\mu_n^2}+o(\mu_n^{-2}), \end{equation} with $\Gamma_n$ as in \eqref{Gamman}. Comparing \eqref{lower} and \eqref{upper}, we find the upper bound \begin{equation}\label{quasi} \Gamma_n \le -\frac{I_m}{\mu_n^2} +\frac{O(R^{-2}\log R)}{\mu_n^2}+\frac{O(\delta| \log \delta|)}{\mu_n^2}+o(\mu_n^{-2}). \end{equation} Since $\beta_n <\beta^*$, the definition of $\Gamma_n$ in \eqref{Gamman} implies \[ \Gamma_n \ge \frac{1}{\beta^* \mu_n^2} \log\bra{ \Lambda_n} - \frac{C_{\alpha,x_0}}{\mu_n^2}. \] Then, \eqref{quasi} yields \[ \log\bra{ \Lambda_n} \le \beta^* ( C_{\alpha,x_0} -I_m) + O(R^{-2} \log R)+O(\delta|\log \delta|) + o(1). \] Passing to the limit as $n \to +\infty$, $R\to +\infty$, and $\delta \to 0$, we conclude that \[ \lim_{n\to +\infty}\Lambda_n \le e^{\beta^*\bra{ C_{\alpha,x_0} - I_m}}. \] \end{proof} This concludes the proof of Proposition \ref{big}, which follows directly from Lemma \ref{lemma crucial}, Lemma \ref{no boundary}, and Lemma \ref{final}. \section{Test functions and the proof of Theorem \ref{main}}\label{sec test} In this section, we complete the proof of Theorem \ref{main} by showing that the upper bound on $S_{\alpha,\beta^*}$ given in Proposition \ref{big} cannot hold. Consequently, any sequence $u_n \in M_\alpha$ satisfying \eqref{extremal} must be uniformly bounded in $\Omega$.
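Let us make the contradiction explicit: combining Lemma \ref{lemma crucial}, Lemma \ref{no boundary}, and Lemma \ref{final}, blow-up would force \[ S_{\alpha,\beta^*} = |\Omega| + \lim_{n\to +\infty}\frac{1}{\lambda_n\mu_n^2} \le |\Omega| + \frac{\omega_{2m}}{2^{2m}}\, e^{\beta^*\bra{C_{\alpha,x_0}-I_m}}, \] while Proposition \ref{proptest} below provides the strict reverse inequality.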
\begin{lemma}\label{polynomial} For any $x_0\in \mathbb{R}^{2m}$, and $\varepsilon$,$R$,$\mu>0$, there exists a unique radially symmetric polynomial $p_{\varepsilon,R,\mu,x_0}$ such that \begin{equation}\label{derp} \partialrtial_\nu^i p_{\varepsilon,R,\mu,x_0} (x)= - \partialrtial_\nu^i \bra{\mu^2+ \eta_0 \bra{ \frac{x -x_0}{\varepsilon}} + \frac{2m}{\beta^*}\log |x-x_0| } \quad \mbox{ on }\partialrtial B_{\varepsilon R} (x_0), \end{equation} for any $0\le i\le m-1$, where $\eta_0$ is as in \eqref{eta0}. Moreover, $p_{\varepsilon,R,\mu,x_0}$ has the form \begin{equation}\label{formp} p_{\varepsilon,R,\mu,x_0} (x) = -\mu^2 + \sum_{j=0}^{m-1} c_j(\varepsilon,R) |x-x_0|^{2j}, \end{equation} with $$ c_0 (\varepsilon,R) = -\frac{2m}{\beta^*}\log (2\varepsilon) +d_0(R) \quad \mbox{and} \quad c_j(\varepsilon,R) = \varepsilon^{-{2j}} R^{-2j} d_j (R), \quad 1\le j\le m-1, $$ where $d_j(R) =O(R^{-2}) $ as $R \to +\infty, \text{ for } 0\le j \le m-1$. \end{lemma} \begin{proof} We can construct $p_{\varepsilon,R,\mu,x_0}$ in the following way. Let $ d_1(R)$,...,$ d_{m-1}(R)$ be the unique solution of the non-degenerate linear system \begin{equation}\label{system} \sum_{j=\sqbra{\frac{i+1}{2}}}^{m-1} \frac{(2j)!}{(2j-i)!} d_j (R)= \frac{2m}{\beta^*}(-1)^{i} (i-1)!- R^i \eta_0^{(i)} (R), \quad i=1,\ldots, m-1. \end{equation} Set also \begin{equation}\label{0} \begin{split} \widetildelde d_0(\varepsilon, R,\mu):&= - \bra{\mu^2 +\eta_0(R) + \frac{2m}{\beta^*}\log (\varepsilon R)}-\sum_{j=1}^{m-1} d_j(R), \end{split} \end{equation} and $$ q(x):= \widetildelde{d}_0(\varepsilon, R,\mu)+ \sum_{j=1}^{m-1} d_j(R)|x|^{2j}. $$ If we define $p_{\varepsilon,R,\mu,x_0}(x):= q\bra{\frac{x-x_0}{\varepsilon R}}$, then $p_{\varepsilon,R,\mu,x_0}(x)$ satisfies \eqref{derp} for any $0\le i\le m-1$. Since, as $R\to +\infty$, $$ \eta_0^{(i)} (R) = \frac{2m}{\beta^*}(-1)^{i} (i-1)! 
R^{-i} + O(R^{-i-2}), \quad \text{ for } 1\le i\le m-1,$$ and the system in \eqref{system} is nondegenerate, we find ${d}_j = O(R^{-2})$ as $R \to +\infty$ for $1\le j\le m-1$. Similarly, we have \[ \begin{split} \widetilde d_0 (\varepsilon, R, \mu) &= -\mu^2 -\frac{2m}{\beta^*}\log (2\varepsilon) + d_0(R), \end{split} \] where \[ d_0(R):= -\eta_0(R)-\frac{2m}{\beta^*}\log \frac{ R}{2} - \sum_{j=1}^{m-1} d_j(R), \] and, by \eqref{0} and the asymptotic behavior at infinity of $\eta_0$, $d_0(R)= O(R^{-2})$ as $R\to +\infty$. Then $p_{\varepsilon,R,\mu,x_0}$ has the form \eqref{formp} with $c_0(\varepsilon,R):= \widetilde d_0(\varepsilon, R,\mu) +\mu^2$ and $c_j(\varepsilon,R):= (\varepsilon R)^{-2j} d_j(R).$ \end{proof} \begin{rem}\label{rem poly} Observe that Lemma \ref{polynomial} gives $$ \left| p_{\varepsilon,R,\mu,x_0} +\mu^2 +\frac{2m}{\beta^*}\log(2\varepsilon) \right|\le C R^{-2} \quad \mbox{ and } \quad |\Delta^{\frac{m}{2}}p_{\varepsilon,R,\mu,x_0}| \le C \varepsilon^{-m} R^{-m-2} , $$ in $B_{\varepsilon R}(x_0)$, where $C$ depends only on $m$. \end{rem} \begin{prop}\label{proptest} For any $x_0\in \Omega$, and $0\le \alpha<\lambda_1(\Omega)$, we have $$ S_{\alpha,\beta^*}> |\Omega|+ \frac{\omega_{2m}}{2^{2m}} e^{\beta^*\bra{C_{\alpha,x_0}-I_m}}, $$ where $C_{\alpha,x_0}$ and $I_m$ are respectively as in Proposition \ref{prop green} and \eqref{Im}.
\end{prop} \begin{proof} We consider the function \[ u_{\varepsilon,\alpha,x_0}(x):= \begin{Si}{cc} \mu_\varepsilon +\dfrac{\eta_0\bra{\frac{x-x_0}{\varepsilon}}}{\mu_\varepsilon} + \dfrac{C_{\alpha,x_0} + \psi_{\alpha, x_0}(x) + p_{\varepsilon} (x)}{\mu_\varepsilon} & \text{ for } |x-x_0| < \varepsilon R_\varepsilon,\\ \dfrac{G_{\alpha,x_0}(x)}{\mu_\varepsilon} & \text{ for } |x-x_0|\ge \varepsilon R_\varepsilon, \end{Si} \] where $\psi_{\alpha,x_0}$ is as in the expansion of $G_{\alpha,x_0}$ given in Proposition \ref{prop green}, $R_\varepsilon= |\log \varepsilon|$, $\mu_\varepsilon$ is a constant that will be fixed later, and $p_\varepsilon := p_{\varepsilon,R_\varepsilon,\mu_\varepsilon,x_0}$ is the polynomial defined in Lemma \ref{polynomial}. To simplify the notation, in this proof we will write $u_\varepsilon$ in place of $u_{\varepsilon,\alpha,x_0}$ without specifying the dependence on $\alpha$ and $x_0$. Note that the choice of $p_\varepsilon$ (specifically \eqref{derp}) implies that, for sufficiently small $\varepsilon$, $u_\varepsilon \in H^m_0(\Omega)$. Moreover, we can write $u_\varepsilon= \frac{\widetilde u_\varepsilon}{\mu_\varepsilon}$, where \begin{equation}\label{testfunction} \widetilde u_\varepsilon (x) = \begin{Si}{cc} \eta_0\bra{\frac{x-x_0}{\varepsilon}} + C_{\alpha,x_0} +\psi_{\alpha, x_0} (x) + p_\varepsilon +\mu_\varepsilon^2 & \text{ if } |x-x_0|< \varepsilon R_\varepsilon,\\ G_{\alpha,x_0} & \text{ if } |x-x_0|\ge \varepsilon R_\varepsilon, \end{Si} \end{equation} is a function that does not depend on the choice of $\mu_\varepsilon$, because of Lemma \ref{polynomial}. In particular, if we fix $\mu_\varepsilon := \|\widetilde u_\varepsilon \|_\alpha$, we get $\|u_\varepsilon\|_\alpha=1$, and so $u_\varepsilon \in M_\alpha$. In order to compute $F_{\beta^*}(u_\varepsilon)$, we need a precise expansion of $\mu_\varepsilon$.
Observe that, by Lemma \ref{int eta0}, the function $\eta_\varepsilon(x):= \eta_0\bra{\frac{x-x_0}{\varepsilon}}$ satisfies \begin{equation}\label{test1} \begin{split} \int_{B_{\varepsilon R_\varepsilon}(x_0)}|\Delta^\frac{m}{2} \eta_\varepsilon|^2 dx &= \int_{B_{R_\varepsilon}(0)} |\Delta^{\frac{m}{2}} \eta_0|^2 dx \\ &= \frac{2m}{\beta^*}\log \frac{R_\varepsilon}{2} + I_m - H_m + O(R_\varepsilon^{-2}\log R_\varepsilon). \end{split} \end{equation} Since $\psi_{\alpha, x_0}\in C^{2m-1}(\overline \Omega)$, we have \begin{equation}\label{test2} \int_{B_{\varepsilon R_\varepsilon}(x_0)} |\Delta^\frac{m}{2} \psi_{\alpha,x_0} |^2 dx= O(\varepsilon^{2m} R_\varepsilon^{2m}). \end{equation} Moreover, Remark \ref{rem poly} gives $|\Delta^\frac{m}{2}p_\varepsilon| = O(\varepsilon^{-m}R_\varepsilon^{-m-2})$ in $B_{\varepsilon R_\varepsilon}(x_0)$. Therefore, \begin{equation}\label{test3} \int_{B_{\varepsilon R_\varepsilon}(x_0)} |\Delta^\frac{m}{2} p_\varepsilon |^2 dx = O( R_\varepsilon^{-4}). \end{equation} Using H\"older's inequality, \eqref{test1} and \eqref{test2}, we find \begin{equation}\label{test4} \begin{split} \int_{B_{\varepsilon R_\varepsilon}(x_0)} \Delta^\frac{m}{2}\eta_\varepsilon \cdot \Delta^\frac{m}{2}\psi_{\alpha,x_0} dx &\le \| \Delta^\frac{m}{2}\eta_\varepsilon \|_{L^2(B_{\varepsilon R_\varepsilon}(x_0))} \| \Delta^\frac{m}{2}\psi_{\alpha,x_0} \|_{L^2(B_{\varepsilon R_\varepsilon}(x_0))}\\ &=O(\varepsilon^{m}R_\varepsilon^{m} \log^\frac{1}{2} R_\varepsilon). \end{split} \end{equation} Similarly, by \eqref{test1}, \eqref{test2} and \eqref{test3}, we get \begin{equation}\label{test5} \int_{B_{\varepsilon R_\varepsilon}(x_0)} \Delta^{\frac{m}{2}} \eta_\varepsilon \cdot \Delta^\frac{m}{2} p_\varepsilon dx = O (R_\varepsilon^{-2}\log^\frac{1}{2} R_\varepsilon), \end{equation} and \begin{equation}\label{test6} \int_{B_{\varepsilon R_\varepsilon}(x_0)} \Delta^\frac{m}{2}p_\varepsilon \cdot \Delta^\frac{m}{2}\psi_{\alpha,x_0} dx = O(\varepsilon^{m}R_\varepsilon^{m-2}).
\end{equation} By \eqref{test1}, \eqref{test2}, \eqref{test3}, \eqref{test4}, \eqref{test5} and \eqref{test6}, we infer $$ \int_{B_{\varepsilon R_\varepsilon}(x_0)} |\Delta^\frac{m}{2} \widetilde u_\varepsilon|^2dx = \frac{2m}{\beta^*}\log \frac{R_\varepsilon}{2} +I_m- H_m + O(R_\varepsilon^{-2}\log R_\varepsilon). $$ Furthermore, applying Lemma \ref{int Green}, we have \[ \begin{split} \int_{\Omega \setminus B_{\varepsilon R_\varepsilon}(x_0)} |\Delta^\frac{m}{2} \widetilde u_\varepsilon|^2 dx &= \int_{\Omega \setminus B_{\varepsilon R_\varepsilon}(x_0)} |\Delta^\frac{m}{2} G_{\alpha,x_0}|^2 dx \\ &= -\frac{2m}{\beta^*} \log (\varepsilon R_\varepsilon) + C_{\alpha,x_0} + H_m +\alpha \|G_{\alpha,x_0}\|_{L^2(\Omega)}^2 + O(\varepsilon R_\varepsilon |\log (\varepsilon R_\varepsilon)|). \end{split} \] Hence, \begin{equation}\label{test7} \int_{\Omega} |\Delta^\frac{m}{2} \widetilde u_\varepsilon|^2dx = -\frac{2m}{\beta^*}\log (2\varepsilon)+C_{\alpha,x_0} + I_m + \alpha \|G_{\alpha,x_0} \|_{L^2(\Omega)}^2 + O(R_\varepsilon^{-2}\log R_\varepsilon). \end{equation} Finally, since \eqref{testfunction} and Remark \ref{rem poly} imply $\widetilde u_\varepsilon = O( |\log \varepsilon|)$ on $B_{\varepsilon R_\varepsilon}(x_0)$, and since $G_{\alpha,x_0}=O(|\log|x-x_0||)$ near $x_0$, we find \begin{equation}\label{test8} \begin{split} \| \widetilde u_\varepsilon\|_{L^2(\Omega)}^2 &= \|G_{\alpha,x_0}\|_{L^2(\Omega\setminus B_{\varepsilon R_\varepsilon}(x_0))}^2 + O(\varepsilon^{2m} R_\varepsilon^{2m} \log^2\varepsilon )\\ & = \|G_{\alpha,x_0}\|_{L^2(\Omega)}^2 + O(\varepsilon^{2m} R_\varepsilon^{2m} \log^2 \varepsilon ). \end{split} \end{equation} Therefore, using \eqref{test7} and \eqref{test8}, we obtain \begin{equation}\label{mueps} \mu_\varepsilon^2=\| \widetilde u_\varepsilon\|^2_\alpha = -\frac{2m}{\beta^*}\log (2\varepsilon)+C_{\alpha,x_0} + I_m + O(R_\varepsilon^{-2}\log R_\varepsilon). \end{equation} We can now estimate $F_{\beta^*}(u_\varepsilon)$.
On $B_{\varepsilon R_\varepsilon} (x_0)$, by definition of $u_\varepsilon$, we get \[ u_\varepsilon ^2 \ge \mu_\varepsilon^2 + 2\bra{ \eta_0\bra{\frac{x-x_0}{\varepsilon}} +C_{\alpha,x_0} +\psi_{\alpha,x_0}(x) + p_\varepsilon(x)}. \] Then, Lemma \ref{polynomial}, Remark \ref{rem poly}, and \eqref{mueps} give \[ \begin{split} u_\varepsilon ^2 \ge -\frac{2m}{\beta^*} \log(2 \varepsilon) + 2\eta_0\bra{\frac{x-x_0}{\varepsilon}} + C_{\alpha,x_0} -I_m +O(R_\varepsilon^{-2}\log R_\varepsilon). \end{split} \] Hence, using a change of variables and Lemma \ref{int eta0}, \begin{equation}\label{final1} \begin{split} \int_{B_{\varepsilon R_\varepsilon}(x_0)} e^{\beta^* u_\varepsilon^2} dx & \ge \frac{1}{2^{2m}} e^{\beta^* \bra{C_{\alpha,x_0}- I_m }} (1 + O(R_\varepsilon^{-2} \log R_\varepsilon)) \int_{B_{R_\varepsilon}(0)} e^{2\beta^* \eta_0} dy \\ & =\frac{\omega_{2m}}{2^{2m}} e^{\beta^* \bra{C_{\alpha,x_0} -I_m} } + O(R_\varepsilon^{-2} \log R_\varepsilon). \end{split} \end{equation} Outside $B_{\varepsilon R_\varepsilon}(x_0)$, the basic inequality $e^{t^2}\ge 1+t^2$ gives \begin{equation}\label{final2} \begin{split} \int_{\Omega \setminus B_{\varepsilon R_\varepsilon}(x_0)} e^{\beta^* u_\varepsilon^2} dx & = \int_{\Omega \setminus B_{\varepsilon R_\varepsilon}(x_0)} e^{\frac{\beta^*}{\mu_\varepsilon^2} G_{\alpha,x_0}^2 } dx \\ & \ge |\Omega| + \frac{\beta^*}{\mu_\varepsilon^2} \|G_{\alpha,x_0}\| _{L^2(\Omega)}^2 + o(\mu_\varepsilon^{-2}) + O(\varepsilon^{2m} R_\varepsilon^{2m}). \end{split} \end{equation} Since $R_\varepsilon = O(\mu_\varepsilon^2 )$, by \eqref{final1} and \eqref{final2}, we conclude that \[\begin{split} F_{\beta^*} (u_\varepsilon) &\ge |\Omega| + \frac{\omega_{2m}}{2^{2m}} e^{\beta^* \bra{C_{\alpha,x_0} - I_m} } + \frac{\beta^*}{\mu_\varepsilon ^2} \|G_{\alpha,x_0}\| _{L^2(\Omega)}^2 + o(\mu_\varepsilon^{-2}).
\end{split}\] In particular, for sufficiently small $\varepsilon$, we find $$ S_{\alpha,\beta^*}\ge F_{\beta^*}(u_\varepsilon)>|\Omega| + \frac{\omega_{2m}}{2^{2m}} e^{\beta^* \bra{C_{\alpha,x_0} - I_m} }. $$ \end{proof} We can now prove Theorem \ref{main} using Proposition \ref{big} and Proposition \ref{proptest}. \begin{proof}[Proof of Theorem \ref{main}] \mbox{ } \emph{1.} Let $\beta_n$, $u_n$ and $\mu_n$ be as in \eqref{betan}, \eqref{extremal}, \eqref{mun and xn} and \eqref{mun and xn2}. Since $\|u_n\|_{\alpha}=1$ and $0\le \alpha <\lambda_1(\Omega)$, $u_n$ is bounded in $H^m_0(\Omega)$. In particular, we can find a function $u_0\in H^m_0(\Omega)$ such that, up to subsequences, $u_n\rightharpoonup u_0$ in $H^m_0(\Omega)$ and $u_n \to u_0$ a.e. in $\Omega$. The weak lower semicontinuity of $\|\cdot\|_\alpha$ implies that $u_0\in M_\alpha$. By Propositions \ref{big} and \ref{proptest}, we must have $\displaystyle{\limsup_{n\to +\infty}\mu_n \le C}$. Then, Fatou's Lemma and the dominated convergence theorem imply respectively $F_{\beta^*}(u_0)<+\infty$ and $F_{\beta_n}(u_n)\to F_{\beta^*}(u_0)$. Since, by Lemma \ref{limsub}, $u_n$ is a maximizing sequence for $S_{\alpha,\beta^*}$, we conclude that $S_{\alpha,\beta^*}=F_{\beta^*}(u_0)$. Then, $S_{\alpha,\beta^*}$ is finite and attained. \emph{2.} Clearly, if $\beta>\beta^*$, using \eqref{Adams}, we get $$ S_{\alpha,\beta} \ge S_{0,\beta} = +\infty, \qquad \text{ for any } \alpha\ge 0. $$ Assume now $\alpha\ge \lambda_1(\Omega)$ and $0\le \beta \le \beta^*$. Let $\varphi_1$ be an eigenfunction for $(-\Delta)^m $ on $\Omega$ corresponding to $\lambda_1(\Omega)$, i.e. a nontrivial solution of \[ \begin{Si}{cc} (-\Delta)^m \varphi_1 = \lambda_1(\Omega)\varphi_1 & \text{ in } \Omega,\\ \varphi_1 = \partial_{\nu}\varphi_1 = \ldots= \partial_{\nu}^{m-1} \varphi_1 =0 & \text{ on }\partial \Omega.
\end{Si} \] Observe that, for any $t\in \mathbb{R}$, $$ \|t\varphi_1\|_{\alpha}^2 = t^2(\lambda_1(\Omega) -\alpha)\|\varphi_1\|^2_{L^2} \le 0. $$ In particular, $t\varphi_1\in M_\alpha$. Then we have $$ S_{\alpha,\beta} \ge F_{\beta}(t\varphi_1) \to +\infty, $$ as $t\to +\infty$. \end{proof} \appendix \renewcommand\thesection{} \section{Appendix: Some elliptic estimates} \renewcommand\thesection{\alph{section}} In this appendix, we recall some useful elliptic estimates which have been used several times throughout the paper. We start by recalling that $m-$harmonic functions are of class $C^\infty$ and that bounds on their $L^1$-norm give local uniform estimates on all their derivatives. \begin{prop}\label{ell harm} Let $\Omega \subseteq \mathbb{R}^N$ be a bounded open set. Then, for any $m\ge 1$, $l \in \mathbb{N}$, $\gamma\in (0,1)$, and any open set $V\subset\subset\Omega$, there exists a constant $C=C(m,l,\gamma,V,\Omega)$ such that every $m-$harmonic function $u$ in $\Omega$ satisfies $$ \|u\|_{C^{l,\gamma}(V)} \le C \|u\|_{L^1(\Omega)}. $$ \end{prop} Proposition \ref{ell harm} can be deduced e.g. from Proposition 12 in \cite{mar}, and its proof is based on Pizzetti's formula \cite{Piz}, which is a generalization of the standard mean value property for harmonic functions. If $m\ge 2$, in general $m-$harmonic functions on a bounded open set $\Omega$ do not satisfy the maximum principle, unless $\Omega$ is one of the so-called positivity-preserving domains (balls are the simplest example). However, it is always true that the $C^{m-1}$ norm of an $m-$harmonic function can be controlled in terms of the $L^\infty$ norm of its derivatives on $\partial \Omega$. \begin{prop}\label{ell harm2} Let $\Omega \subseteq \mathbb{R}^N$ be a smooth bounded open set.
Then, there exists a constant $C=C(\Omega)>0$ such that $$ \|u\|_{C^{m-1}(\Omega)} \le C \sum_{l=0}^{m-1} \|\nabla^l u\|_{L^\infty (\partial \Omega)}, $$ for any $m-$harmonic function $u\in C^{m-1}(\overline \Omega)$. \end{prop} We recall now the main results concerning Schauder and $L^p$ elliptic estimates for $(-\Delta)^m$. \begin{prop}[see Theorem 2.18 of \cite{GGS}]\label{ell shau} Let $\Omega\subseteq \mathbb{R}^N$ be a bounded open set with smooth boundary, and take $k,m\in \mathbb{N}$, $k \ge 2m$, and $\gamma \in (0,1)$. If $u \in H^m(\Omega)$ is a weak solution of the problem \begin{equation}\label{general prob} \begin{Si}{cl} (-\Delta)^m u = f & \mbox{ in } \Omega, \\ \partial_\nu^j u = h_j & \mbox{ on }\partial \Omega, \; 0\le j\le m-1, \end{Si} \end{equation} with $f\in C^{k-2m,\gamma}(\Omega)$ and $h_j \in C^{k-j,\gamma}(\partial \Omega)$, $0\le j\le m-1$, then $u\in C^{k,\gamma}(\Omega)$ and there exists a constant $C=C(\Omega,k,\gamma)$ such that $$ \|u\|_{C^{k,\gamma}(\Omega)} \le C \bra{ \| f\|_{C^{k-2m,\gamma}(\Omega)} + \sum_{j=0}^{m-1} \|h_j\|_{C^{k-j,\gamma}(\partial \Omega)}}. $$ \end{prop} \begin{prop}[see Theorem 2.20 of \cite{GGS}]\label{ell zero} Let $\Omega\subseteq \mathbb{R}^N$ be a bounded open set with smooth boundary, and take $m,k\in \mathbb{N}$, $k \ge 2m$, and $p >1$. If $u \in H^m(\Omega)$ is a weak solution of \eqref{general prob} with $f\in W^{k-2m,p}(\Omega)$ and $h_j \in W^{k-j-\frac{1}{p},p}(\partial \Omega)$, $0\le j\le m-1$, then $u\in W^{k,p}(\Omega)$ and there exists a constant $C=C(\Omega,k,p)$ such that $$ \|u\|_{W^{k,p}(\Omega)} \le C \bra{ \| f\|_{W^{k-2m,p}(\Omega)} + \sum_{j=0}^{m-1} \|h_j\|_{W^{k-j-\frac{1}{p},p}(\partial \Omega)}}. $$ \end{prop} In the absence of boundary conditions one can obtain local estimates combining Propositions \ref{ell shau} and \ref{ell zero} with Proposition \ref{ell harm}.
\begin{prop}\label{ell loc} Let $\Omega\subseteq \mathbb{R}^N$ be a bounded open set with smooth boundary and take $m,k\in \mathbb{N}$, $k \ge 2m$, $p >1$. If $f\in W^{k-2m,p}(\Omega)$ and $u$ is a weak solution of $(-\Delta)^m u = f$ in $\Omega$, then $u\in W^{k,p}_{loc}(\Omega)$ and, for any open set $V\subset \subset \Omega$, there exists a constant $C=C(k,p,V,\Omega)$ such that $$ \|u\|_{W^{k,p}(V)} \le C \bra{ \| f\|_{W^{k-2m,p}(\Omega)} + \|u\|_{L^1(\Omega)}}. $$ Similarly, if $f\in C^{k-2m,\gamma}(\Omega)$ and $u$ is a weak solution of $(-\Delta)^m u =f$ in $\Omega$, then $u\in C^{k,\gamma}_{loc}(\Omega)$ and, for any open set $V\subset \subset \Omega$, there exists a constant $C=C(k,\gamma,V,\Omega)$ such that $$ \|u\|_{C^{k,\gamma}(V)} \le C \bra{ \| f\|_{C^{k-2m,\gamma}(\Omega)} + \|u\|_{L^1(\Omega)}}. $$ \end{prop} In many cases, one has to deal with solutions of $(-\Delta)^m u =f$ in $\Omega$, with boundary conditions satisfied only on a subset of $\partial \Omega$. For instance, as a consequence of Proposition \ref{ell zero}, Green's representation formula, and the continuity of trace operators on $W^{m,1}(\Omega)$, one obtains the following proposition. \begin{prop}\label{ell new} Let $\Omega\subseteq \mathbb{R}^N$ be an open set with smooth boundary, and fix $x_0, x_1 \in \mathbb{R}^{2m}$ and $p>1$. For any $\delta,R >0$ such that $\Omega \cap B_R(x_1) \setminus B_{2\delta}(x_0)\neq \emptyset$, there exists a constant $C=C(\Omega,x_0, x_1,\delta,R)$ such that every weak solution $u$ of problem \eqref{general prob}, with $f\in L^p(\Omega)$ and $h_j = 0$, $0\le j\le m-1$, satisfies $$ \|u\|_{W^{2m,p}(\Omega \cap B_R(x_1)\setminus B_{2\delta}(x_0))} \le C( \|f\|_{L^p(\Omega\cap B_{2R}(x_1)\setminus B_\delta(x_0))} + \|u\|_{W^{m,1}(\Omega\setminus B_{2R}(x_1)\cap B_\delta(x_0))}).
$$ \end{prop} \begin{rem}\label{remdomains} The constant $C$ appearing in Proposition \ref{ell new} depends on $\Omega$ only through the $C^{2m}$ norms of the local maps that define $B_{2R}(x_1)\cap \partial \Omega$. In particular, Proposition \ref{ell new} can be applied uniformly to sequences $\{\Omega_n\}_{n\in \mathbb{N}}$ that converge in the $C^{2m}_{loc}$ sense to a limit domain $\Omega$. \end{rem} The following proposition holds only in the special case $m=1$. It gives a Harnack-type inequality, which is useful to control the local behavior of a sequence of solutions of $-\Delta u = f$ when the behavior at one point is known. \begin{prop}\label{ell use} Let $u_n \in H^1(B_R(0))$ be a sequence of weak solutions of $-\Delta u_n = f_n$ in $B_R(0)\subseteq \mathbb{R}^N$, $R>0$. Assume that $f_n$ is bounded in $L^\infty(B_R(0))$, and there exists $C>0$ such that $u_n\le C$ and $u_n(0)\ge-C$. Then, $u_n$ is bounded in $L^\infty(B_\frac{R}{2}(0))$.
\end{prop} \begin{proof} We write $u_n = v_n + h_n$, with $h_n$ harmonic in $B_R(0)$, and $v_n$ solving $$ \begin{Si}{cc} -\Delta v_n = f_n & \mbox{ in }B_R(0),\\ v_n = 0 & \mbox{ on }\partial B_R(0). \end{Si} $$ By Proposition \ref{ell zero}, $v_n$ is bounded in $W^{2,p}(B_R(0))$, for any $p>1$. In particular, it is bounded in $L^\infty(B_R(0))$. Then, we have $$h_n = u_n -v_n \le C +\|v_n\|_{L^\infty(B_R(0))} \le \widetilde{C},$$ and $$h_n(0)= u_n(0) -v_n(0)\ge -C -\|v_n\|_{L^\infty(B_R(0))}\ge - \widetilde{C}.$$ By the mean value property, for any $x\in B_{\frac{R}{2}}(0)$, we get \[ \begin{split} h_n(x) - \widetilde{C} & = \frac{2^N}{\omega_N R^N} \int_{B_{\frac{R}{2}}(x)} (h_n - \widetilde{C}) dy \\ & \ge \frac{2^N}{\omega_N R^N} \int_{B_{R}(0)} (h_n - \widetilde{C}) dy \\ & = 2^N (h_n(0)-\widetilde{C}) \\ & \ge -2^{N+1} \widetilde{C}. \end{split} \] Hence, $h_n$ is bounded in $L^\infty(B_\frac{R}{2}(0))$. \end{proof} Finally, we recall some Lorentz-Zygmund type elliptic estimates. For any $\alpha\ge 0$, let $L(\log L)^\alpha$ be defined as the space \begin{equation}\label{zygm} L (\log L)^\alpha=\cur{ f: \Omega \longrightarrow \mathbb{R} \text{ s.t. } f \text{ is measurable and } \int_\Omega |f|\log^\alpha (2+|f|) dx <+\infty}, \end{equation} and endowed with the norm \begin{equation}\label{zygm norm} \|f\|_{L(\log L)^\alpha}:= \int_\Omega |f|\log^\alpha (2+|f|) dx.
\end{equation} Given $1<p<+\infty$ and $1\le q\le +\infty$, let $L^{(p,q)}(\Omega)$ be the Lorentz space \begin{equation}\label{Lor} L^{(p,q)}(\Omega):= \{ u : \Omega \longrightarrow \mathbb{R}\, :\, u \text{ is measurable and } \|u\|_{(p,q)}<+\infty \}, \end{equation} where \begin{equation}\label{Lor norm} \|u\|_{(p,q)}:= \bra{\int_{0}^{|\Omega|} t^{\frac{q}{p}-1} u^{**}(t)^q dt}^\frac{1}{q}, \quad \text{ for } 1 \le q<+\infty, \end{equation} and \begin{equation} \|u\|_{{(p,\infty)}} = \sup_{t\in (0,|\Omega|)} t^\frac{1}{p} u^{**}(t), \end{equation} with \begin{equation} u^{**}(t) := t^{-1} \int^t_0 u^*(s) ds, \end{equation} and \begin{equation}\label{Lor fin} u^*(t):=\inf\{\lambda >0 \; :\; |\{|u|>\lambda\} | \le t \}. \end{equation} Among the many properties of Lorentz spaces we recall the following H\"older-type inequality (see \cite{ONe}). \begin{prop}\label{HolLor} Let $1<p,p'< +\infty$, $1\le q,q'\le +\infty$, be such that $\frac{1}{p}+\frac{1}{p'}= \frac{1}{q}+\frac{1}{q'}=1$. Then, for any $u\in L^{(p,q)}(\Omega)$, $v\in L^{(p',q')}(\Omega)$, we have $$ \|u v\|_{L^1(\Omega)} \le \|u\|_{(p,q)} \|v\|_{(p',q')}. $$ \end{prop} As proved in Corollary 6.16 of \cite{BS} (see also Theorem 10 in \cite{mar}) one has the following: \begin{prop}\label{ell Lor} Let $\Omega\subseteq \mathbb{R}^N$, $N\ge 2m$, be a bounded smooth domain and take $0\le \alpha \le 1$. If $f\in L(\log L)^\alpha$, and $u$ is a weak solution of \eqref{general prob}, then $\nabla^{2m-l}u \in L^{(\frac{N}{N-l}, \frac{1}{\alpha})}(\Omega)$, for any $1\le l\le 2m-1$. Moreover, there exists a constant $C=C(\Omega,l)>0$ such that $$ \|\nabla^{2m-l} u \|_{(\frac{N}{N-l},\frac{1}{\alpha})} \le C \|f\|_{L(\log L)^\alpha}. $$ \end{prop} Note that, if $\alpha=0$, we have $L(\log L)^\alpha = L^1(\Omega)$. Moreover, $L^{(\frac{N}{N-l}, \frac{1}{\alpha})}(\Omega)= L^{(\frac{N}{N-l},\infty)}(\Omega)$ coincides with the weak $L^\frac{N}{N-l}$ space on $\Omega$.
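The rearrangements $u^*$ and $u^{**}$ and the norm \eqref{Lor norm} can be approximated on a uniform grid, which gives a quick numerical illustration of the H\"older-type inequality in Proposition \ref{HolLor}. The sketch below is our own illustration (the step functions, the grid size and the choice $p=q=2$ are arbitrary), not part of the original argument:

```python
def lorentz_norm(samples, p, q):
    """Discrete (p,q)-Lorentz norm on Omega = [0,1]: `samples` are values of |u|
    on a uniform grid; sorting them in decreasing order approximates u*, the
    running average approximates u**, and the integral in the norm becomes a
    Riemann sum over t in (0, 1]."""
    n = len(samples)
    ustar = sorted((abs(s) for s in samples), reverse=True)
    running, norm_q = 0.0, 0.0
    for k, val in enumerate(ustar):
        running += val
        t = (k + 1) / n            # right endpoint of the k-th cell
        ustarstar = running / (k + 1)
        norm_q += t ** (q / p - 1) * ustarstar ** q / n
    return norm_q ** (1 / q)

n = 20000
xs = [(k + 0.5) / n for k in range(n)]
u = [1.0 if x <= 0.5 else 2.0 for x in xs]   # sample step functions on [0,1]
v = [3.0 if x <= 0.3 else 1.0 for x in xs]

l1 = sum(a * b for a, b in zip(u, v)) / n     # ||u v||_{L^1}
# O'Neil's inequality with p = q = 2 (so p' = q' = 2)
assert l1 <= lorentz_norm(u, 2, 2) * lorentz_norm(v, 2, 2)
```

For $p=q=2$ the bound is stronger than Cauchy--Schwarz only in the sense that $u^{**}\ge u^*$, so the inequality holds with visible slack for these step functions.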
In particular, $L^{(\frac{N}{N-l},\infty)}(\Omega)\subseteq L^p(\Omega)$ for any $1\le p< \frac{N}{N-l}$. Therefore, as a consequence of Proposition \ref{ell Lor}, we recover the following well known result, whose classical proof relies on Green's representation formula. \begin{prop}\label{ell L1} Let $\Omega\subseteq \mathbb{R}^{N}$, $N\ge 2m$, be a bounded smooth domain. Then, for any $1\le l\le 2m-1$ and $1\le p<\frac{N}{N-l}$, there exists a constant $C=C(p,l,\Omega)$ such that every weak solution of \eqref{general prob} with $f\in L^1(\Omega)$ satisfies $$ \|\nabla^{2m-l} u \|_{L^p(\Omega)} \le C \|f\|_{L^1(\Omega)}. $$ \end{prop} \end{document}
\begin{document} \title[Hermite-Hadamard, Hermite-Hadamard-Fej\'{e}r, Dragomir-Agarwal and ...] {Hermite-Hadamard, Hermite-Hadamard-Fej\'{e}r, Dragomir-Agarwal and Pachpatte type inequalities for convex functions via new fractional integrals} \author[B.~Ahmad, A.~Alsaedi, M.~Kirane and B.\,T.~Torebek\hfil \hfilneg] {Bashir~Ahmad, Ahmed~Alsaedi, Mokhtar~Kirane and Berikbol~T.~Torebek} \address{Bashir Ahmad \newline NAAM Research Group, Department of Mathematics, \newline Faculty of Science, King Abdulaziz University, \newline P.O. Box 80203, Jeddah 21589, Saudi Arabia} \email{bashirahmad\[email protected]} \address{Ahmed Alsaedi \newline NAAM Research Group, Department of Mathematics, \newline Faculty of Science, King Abdulaziz University, \newline P.O. Box 80203, Jeddah 21589, Saudi Arabia} \email{[email protected]} \address{Mokhtar Kirane \newline LaSIE, Facult\'{e} des Sciences, \newline Pole Sciences et Technologies, Universit\'{e} de La Rochelle, \newline Avenue M. Crepeau, 17042 La Rochelle Cedex, France \newline NAAM Research Group, Department of Mathematics, \newline Faculty of Science, King Abdulaziz University, \newline P.O. Box 80203, Jeddah 21589, Saudi Arabia} \email{[email protected]} \address{Berikbol T. Torebek \newline Al--Farabi Kazakh National University \newline al--Farabi ave. 71, 050040, Almaty, Kazakhstan \newline Institute of Mathematics and Mathematical Modeling,\newline 125 Pushkin str., 050010 Almaty, Kazakhstan} \email{[email protected]} \subjclass[2000]{26A33; 26D10} \keywords{Hermite-Hadamard inequality; Hermite-Hadamard-Fej\'{e}r inequality; Dragomir-Agarwal inequality; Pachpatte inequalities; new fractional integral operator. } \begin{abstract} The aim of this paper is to establish Hermite-Hadamard, Hermite-Hadamard-Fej\'{e}r, Dragomir-Agarwal and Pachpatte type inequalities for new fractional integral operators with exponential kernel.
These results allow us to obtain a new class of functional inequalities which generalizes known inequalities involving convex functions. Furthermore, the obtained results may act as a useful source of inspiration for future research in convex analysis and related optimization fields. \end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{problem}[theorem]{Problem} \newtheorem{example}[theorem]{Example} \newtheorem{definition}[theorem]{Definition} \allowdisplaybreaks \section{Introduction} The inequalities for convex functions due to Hermite and Hadamard are of great importance; see, for example, \cite{DP00, PPT92}. According to the inequalities \cite{H1893, H1883}, \begin{itemize} \item if $u:\,I\rightarrow \mathbb{R}$ is a convex function on the interval $I\subset \mathbb{R}$ and $a,b\in I$ with $b>a,$ then \begin{equation}\label{1.1}u\left(\frac{a+b}{2}\right)\leq \frac{1}{b-a}\int\limits^b_a u(y)dy \leq \frac{u(a)+u(b)}{2}.\end{equation} \end{itemize} For a concave function $u$, the inequalities in \eqref{1.1} hold in the reversed direction. We note that Hadamard's inequality refines the concept of convexity, and it follows from Jensen's inequality. The classical Hermite-Hadamard inequality yields estimates for the mean value of a continuous convex function $u : [a, b] \rightarrow \mathbb{R}.$ The well-known inequalities dealing with the integral mean of a convex function $u$ are the Hermite-Hadamard inequalities or their weighted versions. They are also known as Hermite-Hadamard-Fej\'{e}r inequalities. In \cite{F06}, Fej\'{e}r obtained the weighted generalization of the Hermite-Hadamard inequality \eqref{1.1} as follows. \begin{itemize} \item Let $u:\, [a,b]\rightarrow \mathbb{R}$ be a convex function.
Then the inequality \begin{equation}\label{1.2}u\left(\frac{a+b}{2}\right)\int\limits^b_a w(y)dy\leq\int\limits^b_a u(y) w(y)dy \leq \frac{u(a)+u(b)}{2}\int\limits^b_a w(y)dy \end{equation} holds for a nonnegative, integrable function $w:\, [a,b]\rightarrow \mathbb{R}$, which is symmetric with respect to $\frac{a+b}{2}.$ \end{itemize} In \cite{DA98}, Dragomir and Agarwal obtained the following results in connection with the right part of \eqref{1.1}: \begin{itemize} \item Let $u:\, I\subseteq \mathbb{R}\rightarrow \mathbb{R}$ be a differentiable mapping on $I$, $a,b \in I.$ If $|u'|$ is convex on $[a,b],$ then the following inequality holds: \begin{equation}\label{1.3}\left|\frac{u(a)+u(b)}{2}-\frac{1}{b-a}\int\limits^b_a u(y)dy\right|\leq \frac{b-a}{8}\left(|u'(a)|+|u'(b)|\right). \end{equation} \end{itemize} In \cite{P03}, Pachpatte established two new Hermite-Hadamard type inequalities for products of convex functions as follows: \begin{itemize} \item Let $u$ and $w$ be nonnegative and convex functions on $[a,b]\subset \mathbb{R},$ then \begin{equation}\label{1.4}\begin{aligned}\frac{1}{b-a}\int\limits^b_a u(y)w(y)dy \leq \frac{u(a)w(a)+u(b)w(b)}{3} + \frac{u(a)w(b)+u(b)w(a)}{6} \end{aligned}\end{equation} and \begin{equation}\begin{aligned} 2 &u\left(\frac{a+b}{2}\right) w\left(\frac{a+b}{2}\right) \leq \frac{1}{b-a}\int\limits^b_a u(y)w(y)dy \\& +\frac{u(a)w(a)+u(b)w(b)}{6} + \frac{u(a)w(b)+u(b)w(a)}{3}.\label{1.5} \end{aligned}\end{equation}\end{itemize} Next we present some results on the generalization of the aforementioned inequalities. In \cite{SSYB13}, Sarikaya et al. presented the Hermite-Hadamard and Dragomir-Agarwal inequalities in fractional integral form as follows.
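Before turning to the fractional generalizations, the classical inequalities \eqref{1.1}--\eqref{1.4} can be checked numerically. The following sketch is our own illustration, not part of the original argument; the sample functions $u(y)=e^y$ (convex, with $|u'|=e^y$ also convex), the symmetric weight $w(y)=(y-a)(b-y)$, the product partner $g(y)=y^2$, and the grid size are arbitrary choices:

```python
import math

def riemann(f, a, b, n=100000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, 1.0
u = math.exp                     # convex on [0, 1], and |u'| = exp is convex too
w = lambda y: (y - a) * (b - y)  # nonnegative, symmetric about (a + b) / 2

mean_u = riemann(u, a, b) / (b - a)

# Hermite-Hadamard (1.1): u((a+b)/2) <= integral mean of u <= (u(a)+u(b))/2
assert u((a + b) / 2) <= mean_u <= (u(a) + u(b)) / 2

# Fejer (1.2): weighted version with the symmetric weight w
int_w = riemann(w, a, b)
int_uw = riemann(lambda y: u(y) * w(y), a, b)
assert u((a + b) / 2) * int_w <= int_uw <= (u(a) + u(b)) / 2 * int_w

# Dragomir-Agarwal (1.3): trapezoid error bound via |u'| (here u' = u for exp)
lhs = abs((u(a) + u(b)) / 2 - mean_u)
assert lhs <= (b - a) / 8 * (abs(u(a)) + abs(u(b)))

# Pachpatte (1.4): product of the convex functions u and g(y) = y^2
g = lambda y: y * y
int_ug = riemann(lambda y: u(y) * g(y), a, b) / (b - a)
rhs = (u(a) * g(a) + u(b) * g(b)) / 3 + (u(a) * g(b) + u(b) * g(a)) / 6
assert int_ug <= rhs
```

All four assertions pass with comfortable margins for these sample choices, which matches the strict convexity of $e^y$.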
\begin{itemize} \item Let $u:\,[a,b]\rightarrow \mathbb{R}$ be a positive function and $u\in L^1([a,b]).$ If $u$ is a convex function on $[a,b],$ then the following inequalities for fractional integrals hold \begin{equation*}u\left(\frac{a+b}{2}\right)\leq \frac{\Gamma(\alpha+1)}{2(b-a)^\alpha}\left[I^\alpha_au(b)+I^\alpha_bu(a)\right] \leq \frac{u(a)+u(b)}{2}\end{equation*} with $\alpha>0.$ \end{itemize} \begin{itemize} \item Let $u:\,[a,b]\rightarrow \mathbb{R}$ be a differentiable mapping on $(a, b).$ If $|u'|$ is convex on $[a, b]$, then the following inequality for fractional integrals holds: \begin{multline*}\left|\frac{u(a)+u(b)}{2}-\frac{\Gamma(\alpha+1)}{2(b-a)^\alpha}\left[I^\alpha_au(b)+I^\alpha_bu(a)\right]\right|\\ \leq \frac{b-a}{2(\alpha+1)}(1-2^{-\alpha})\left(|u'(a)|+|u'(b)|\right).\end{multline*} \end{itemize} In \cite{I16}, I\c{s}can obtained the following Hermite-Hadamard-Fej\'{e}r integral inequalities via fractional integrals: \begin{itemize} \item Let $u:\,[a,b]\rightarrow \mathbb{R}$ be a convex function with $a < b$ and $u \in L^1([a, b]).$ If $v:\,[a,b]\rightarrow \mathbb{R}$ is nonnegative, integrable and symmetric with respect to $(a+b)/2,$ then the following inequalities for fractional integrals hold \begin{align*}u\left(\frac{a+b}{2}\right)\left[I^\alpha_av(b)+I^\alpha_bv(a)\right]&\leq \left[I^\alpha_a(uv)(b)+I^\alpha_b(uv)(a)\right]\\& \leq \frac{u(a)+u(b)}{2}\left[I^\alpha_av(b)+I^\alpha_bv(a)\right] \end{align*} with $\alpha>0.$ \end{itemize} Many generalizations and extensions of the Hermite-Hadamard, Hermite-Hadamard-Fej\'{e}r, Dragomir-Agarwal and Pachpatte type inequalities were obtained for various classes of functions using fractional integrals; see \cite{BPP16, C16, CK17, HYT14, ITM16, I16, JS16, SSYB13, WLFZ12, ZW13} and references therein.
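The fractional Hermite-Hadamard chain of Sarikaya et al. quoted above can also be tested numerically. In the sketch below (our own illustration; $u(y)=e^y$ on $[0,1]$ and the chosen values of $\alpha$ are arbitrary samples), the substitution $(b-s)^\alpha=w$ removes the endpoint singularity of the Riemann--Liouville kernel, so a plain midpoint rule applies:

```python
import math

def rl_pair(u, a, b, alpha, n=20000):
    """Left and right Riemann-Liouville integrals I^alpha_a u(b), I^alpha_b u(a).
    After the substitution (b-s)^alpha = w one has
      I^alpha_a u(b) = 1/(alpha*Gamma(alpha)) * int_0^{(b-a)^alpha} u(b - w^(1/alpha)) dw,
    and symmetrically for the right-sided integral, with a smooth integrand."""
    top = (b - a) ** alpha
    h = top / n
    c = 1.0 / (alpha * math.gamma(alpha))
    left = c * h * sum(u(b - (h * (k + 0.5)) ** (1 / alpha)) for k in range(n))
    right = c * h * sum(u(a + (h * (k + 0.5)) ** (1 / alpha)) for k in range(n))
    return left, right

a, b, u = 0.0, 1.0, math.exp   # sample convex function on [0, 1]
for alpha in (0.3, 0.5, 0.9):
    left, right = rl_pair(u, a, b, alpha)
    mid = math.gamma(alpha + 1) / (2 * (b - a) ** alpha) * (left + right)
    # u((a+b)/2) <= Gamma(alpha+1)/(2(b-a)^alpha) [I^a u(b) + I^b u(a)] <= (u(a)+u(b))/2
    assert u((a + b) / 2) <= mid <= (u(a) + u(b)) / 2
```

As $\alpha\to 1$ the middle quantity approaches the ordinary integral mean of $u$, recovering the classical inequality \eqref{1.1}.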
These studies motivated us to consider a new class of functional inequalities for convex functions generalizing the classical Hermite-Hadamard, Hermite-Hadamard-Fej\'{e}r, Dragomir-Agarwal and Pachpatte inequalities. Here we emphasize that we derive some functional inequalities for the new fractional integral operators with exponential kernel. The difference between our results and the known generalizations is that the above fractional analogues of the functional inequalities do not follow from our results; in fact, our results generalize only the classical inequalities. The paper is organized as follows. Section \ref{Prel} contains some basic concepts related to our proposed study. In Section \ref{HH}, a Hermite-Hadamard type inequality for a fractional integral with an exponential kernel is proved. The fractional analogue of the Hermite-Hadamard-Fej\'{e}r inequality is investigated in Section \ref{HHF}. Section \ref{DA} is devoted to the generalization of Dragomir-Agarwal's inequality. In Section \ref{P}, we obtain generalized Pachpatte-type inequalities with fractional integrals in the class of convex functions. \section{Preliminaries}\label{Prel} We give some definitions for further use. \begin{definition}\label{def1.1} A function $u : [a, b]\subset \mathbb{R} \rightarrow \mathbb{R}$ is said to be convex if $$u(\mu x+(1-\mu)y)\leq \mu u(x)+(1-\mu)u(y)$$ for all $x, y\in [a,b]$ and $\mu\in [0,1].$ We call $u$ a concave function if $(-u)$ is convex. \end{definition} Now we give some necessary concepts related to the new fractional integral which are used in the sequel.
\begin{definition}\label{def1.2} Let $f\in L_1(a,b).$ The fractional integrals $\mathcal{I}^\alpha_a$ and $\mathcal{I}^\alpha_b$ of order $\alpha\in (0,1)$ are defined by \begin{equation} \label{I-1} \mathcal{I}^\alpha_a u(x)=\frac{1}{\alpha}\int\limits^x_a \exp\left(-\frac{1-\alpha}{\alpha}(x-s)\right) u(s)ds,\, x>a \end{equation} and \begin{equation}\label{I-2} \mathcal{I}^\alpha_b u(x)=\frac{1}{\alpha}\int\limits^b_x \exp\left(-\frac{1-\alpha}{\alpha}(s-x)\right) u(s)ds,\, x<b \end{equation} respectively. \end{definition} Observe that $$\lim_{\alpha\rightarrow 1}\mathcal{I}^\alpha_a u(x)=\int\limits^x_a u(s)ds,\,\, \lim_{\alpha\rightarrow 1}\mathcal{I}^\alpha_b u(x)=\int\limits^b_x u(s)ds.$$ Moreover, in view of $$\lim_{\alpha\rightarrow 0} \frac{1}{\alpha}\exp\left(-\frac{1-\alpha}{\alpha}(x-s)\right)=\delta(x-s),$$ we deduce that $$\lim_{\alpha\rightarrow 0}\mathcal{I}^\alpha_a u(x)=u(x),\,\, \lim_{\alpha\rightarrow 0}\mathcal{I}^\alpha_b u(x)=u(x).$$ \begin{definition} The left and right Riemann--Liouville fractional integrals $I_{a} ^\alpha$ and $I_{b} ^\alpha$ of order $\alpha\in\mathbb R$ ($\alpha>0$) are given by $$ I_{a} ^\alpha \left[ f \right]\left( t \right) = {\rm{ }}\frac{1}{{\Gamma \left( \alpha \right)}}\int\limits_a^t {\left( {t - s} \right)^{\alpha - 1} f\left( s \right)} ds, \,\,\, t\in(a,b], $$ and $$ I_{b} ^\alpha \left[ f \right]\left( t \right) = {\rm{ }}\frac{1}{{\Gamma \left( \alpha \right)}}\int\limits_t^b {\left( {s - t} \right)^{\alpha - 1} f\left( s \right)} ds, \,\,\, t\in[a,b), $$ respectively. Here $\Gamma$ denotes the Euler gamma function.
\end{definition} We henceforth set $\rho=\frac{1-\alpha}{\alpha}(b-a).$ \section{Hermite-Hadamard type inequality}\label{HH} \begin{theorem}\label{th2.1} Let $u: [a, b]\rightarrow \mathbb{R}$ be a positive function with $0\leq a < b$ and $u \in L_1 (a, b).$ If $u$ is a convex function on $[a, b],$ then the following inequalities for fractional integrals \eqref{I-1} and \eqref{I-2} hold: \begin{equation}\label{2.1} u\left(\frac{a+b}{2}\right)\leq \frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)}\left[\mathcal{I}^\alpha_a u(b)+\mathcal{I}^\alpha_b u(a)\right]\leq \frac{u(a)+u(b)}{2}. \end{equation} \end{theorem} \begin{proof} Since $u$ is a convex function on $[a,b],$ we get, for all $x, y \in [a, b]$ and $\mu=\frac{1}{2},$ \begin{equation}\label{2.2}u\left(\frac{x+y}{2}\right) \leq \frac{u(x)+u(y)}{2},\end{equation} which, for $x=ta+(1-t)b,\, y=(1-t)a+tb,$ takes the form: \begin{equation}\label{2.3}2u\left(\frac{a+b}{2}\right)\leq u(ta+(1-t)b)+u((1-t)a+tb).\end{equation} Multiplying both sides of \eqref{2.3} by $\exp\left(-\rho t\right)$ and then integrating the resulting inequality with respect to $t$ over $[0,1],$ we obtain \begin{align*}\frac{2\left(1-\exp\left(-\rho \right)\right)}{\rho}u\left(\frac{a+b}{2}\right) & \leq\int\limits^1_0 \exp\left(-\rho t\right) \left[u(ta+(1-t)b)+u((1-t)a+tb)\right]dt\\&= \int\limits^1_0 \exp\left(-\rho t\right) u(ta+(1-t)b)dt \\&+\int\limits^1_0 \exp\left(-\rho t\right)u((1-t)a+tb)dt\\&=\frac{1}{b-a}\int\limits^b_a \exp\left(-\frac{1-\alpha}{\alpha}(b-s)\right)u(s)ds\\&+ \frac{1}{b-a}\int\limits^b_a \exp\left(-\frac{1-\alpha}{\alpha}(s-a)\right)u(s)ds\\&= \frac{\alpha}{b-a}\left[\mathcal{I}^\alpha_a u(b)+ \mathcal{I}^\alpha_b u(a)\right]. \end{align*} As a result, we get $$\frac{2\left(1-\exp\left(-\rho \right)\right)}{\rho}u\left(\frac{a+b}{2}\right) \leq \frac{\alpha}{b-a}\left[\mathcal{I}^\alpha_a u(b)+ \mathcal{I}^\alpha_b u(a)\right].$$ Thus the first inequality of \eqref{2.1} is established.
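The full chain \eqref{2.1} can be verified numerically; the following sketch (ours, not part of the original proof; the midpoint quadrature and the sample function $u(x)=x^2+1$ are illustrative choices) evaluates \eqref{I-1} and \eqref{I-2} for $a=0$, $b=1$, $\alpha=1/2$:

```python
import math

def midpoint(f, lo, hi, n=4000):
    # Simple midpoint-rule quadrature.
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

a, b, alpha = 0.0, 1.0, 0.5
lam = (1 - alpha) / alpha       # the kernel rate (1 - alpha) / alpha
rho = lam * (b - a)
u = lambda x: x * x + 1         # positive and convex on [0, 1]

# (I^alpha_a u)(b) and (I^alpha_b u)(a) from definitions (I-1), (I-2).
Ia = midpoint(lambda s: math.exp(-lam * (b - s)) * u(s), a, b) / alpha
Ib = midpoint(lambda s: math.exp(-lam * (s - a)) * u(s), a, b) / alpha

middle = (1 - alpha) / (2 * (1 - math.exp(-rho))) * (Ia + Ib)
print(u((a + b) / 2) <= middle <= (u(a) + u(b)) / 2)  # True
```

Here the middle term evaluates to roughly $1.336$, sitting between $u(1/2)=1.25$ and $(u(0)+u(1))/2=1.5$, as the theorem predicts.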
For the proof of the second inequality in \eqref{2.1}, we first note that if $u$ is a convex function, then, for $t\in [0, 1],$ we have $$u(ta+(1-t)b)\leq tu(a)+(1-t)u(b)$$ and $$u((1-t)a+tb)\leq (1-t)u(a)+tu(b).$$ By adding the above two inequalities, we have \begin{equation}\label{2.4}u(ta+(1-t)b)+u((1-t)a+tb)\leq u(a)+u(b). \end{equation} Multiplying both sides of \eqref{2.4} by $\exp\left(-\rho t \right)$ and integrating the resulting inequality with respect to $t$ over $[0,1],$ we obtain \begin{align*}\frac{1-\exp\left(-\rho \right)}{\rho}\left[u(a)+u(b)\right] & \geq \int\limits^1_0 \exp\left(-\rho t\right) u(ta+(1-t)b)dt \\&+\int\limits^1_0 \exp\left(-\rho t\right)u((1-t)a+tb)dt,\end{align*} that is, $$\frac{\alpha}{b-a}\left[\mathcal{I}^\alpha_a u(b)+ \mathcal{I}^\alpha_b u(a)\right]\leq \frac{1-\exp\left(-\rho \right)}{\rho}\left[u(a)+u(b)\right].$$ Hence the second inequality in \eqref{2.1} is proved. This completes the proof of Theorem \ref{th2.1}. \end{proof} \begin{corollary}Let $u: [a, b]\rightarrow \mathbb{R}$ be a positive function with $0\leq a < b$ and $u \in L_1 (a, b).$ If $u$ is a concave function on $[a, b],$ then the following inequalities for fractional integrals \eqref{I-1} and \eqref{I-2} hold: \begin{align*} u\left(\frac{a+b}{2}\right)\geq \frac{1-\alpha}{2\left(1-\exp\left(-\rho \right)\right)}\left[\mathcal{I}^\alpha_a u(b)+\mathcal{I}^\alpha_b u(a)\right]\geq \frac{u(a)+u(b)}{2}. \end{align*} \end{corollary} \begin{remark} For $\alpha \rightarrow 1,$ observe that \begin{align*}\lim_{\alpha \rightarrow 1}\frac{1-\alpha}{2\left(1-\exp\left(-\rho \right)\right)}=\frac{1}{2(b-a)}.
\end{align*} Thus, the Hermite-Hadamard inequality \eqref{1.1} follows from Theorem \ref{th2.1} in the limit $\alpha \rightarrow 1.$ \end{remark} \section{Hermite-Hadamard-Fej\'{e}r type inequality}\label{HHF} \begin{theorem}\label{th3.1} Let $u:\,[a,b]\rightarrow \mathbb{R}$ be a convex and integrable function with $a<b.$ If $w:\,[a,b]\rightarrow \mathbb{R}$ is nonnegative, integrable and symmetric with respect to $\frac{a+b}{2},$ that is, $w(a+b-x)=w(x),$ then the following inequalities hold \begin{multline}\label{3.1} u\left(\frac{a+b}{2}\right)\left[\mathcal{I}^{\alpha}_a w(b)+\mathcal{I}^{\alpha}_b w(a)\right] \leq \left[\mathcal{I}^{\alpha}_a \left(u w\right)(b)+\mathcal{I}^{\alpha}_b\left(u w\right)(a)\right]\\\leq \frac{u(a)+u(b)}{2}\left[\mathcal{I}^{\alpha}_a w(b)+\mathcal{I}^{\alpha}_b w(a)\right]. \end{multline} \end{theorem} \begin{proof} Since $u$ is a convex function on $[a, b],$ we have the inequality \eqref{2.3} for all $t\in [0, 1]$. Multiplying both sides of \eqref{2.3} by \begin{equation}\label{3.2}\exp\left(-\rho t \right)w\left((1-t)a+tb\right),\end{equation} and then integrating the resulting inequality with respect to $t$ over $[0, 1],$ we obtain \begin{align*}2u\left(\frac{a+b}{2}\right)&\int\limits^1_0 \exp\left(-\rho t\right)w\left((1-t)a+tb\right) dt \\& \leq \int\limits^1_0 \exp\left(-\rho t\right) u\left(ta+(1-t)b\right) w\left((1-t)a+tb\right)dt\\& + \int\limits^1_0 \exp\left(-\rho t\right) u\left((1-t)a+tb\right) w\left((1-t)a+tb\right)dt\\& = \frac{1}{b-a}\int\limits^b_a \exp\left(-\frac{1-\alpha}{\alpha}(s-a)\right) u\left(a+b-s\right) w(s)ds\\& + \frac{1}{b-a}\int\limits^b_a \exp\left(-\frac{1-\alpha}{\alpha}(s-a)\right) u(s)w(s)ds\\&=\frac{1}{b-a}\int\limits^b_a \exp\left(-\frac{1-\alpha}{\alpha}(b-s)\right) u(s) w\left(a+b-s\right)ds \\&+ \frac{\alpha}{b-a}\mathcal{I}^\alpha_b \left(u w\right)(a)=\frac{\alpha}{b-a}\left[\mathcal{I}^\alpha_a \left(u w\right)(b)+\mathcal{I}^\alpha_b \left(u w\right)(a)\right], \end{align*} that
is, \begin{multline*}2u\left(\frac{a+b}{2}\right)\int\limits^1_0 \exp\left(-\rho t\right)w\left((1-t)a+tb\right) dt \\ \leq \frac{\alpha}{b-a}\left[\mathcal{I}^\alpha_a \left(u w\right)(b)+\mathcal{I}^\alpha_b \left(u w\right)(a)\right]. \end{multline*} Since $w$ is symmetric with respect to $\frac{a+b}{2},$ we have $$\mathcal{I}^\alpha_a w(b)=\mathcal{I}^\alpha_b w(a)=\frac{1}{2}\left[\mathcal{I}^\alpha_a w(b)+\mathcal{I}^\alpha_b w(a)\right].$$ Therefore, we have \begin{align*}u\left(\frac{a+b}{2}\right)\left[\mathcal{I}^\alpha_a w(b)+\mathcal{I}^\alpha_b w(a)\right] \leq\mathcal{I}^\alpha_a \left(u w\right)(b)+\mathcal{I}^\alpha_b \left(u w\right)(a). \end{align*} This establishes the first inequality of Theorem \ref{th3.1}. To prove the second inequality in \eqref{3.1}, we first notice that if $u$ is a convex function, then, for all $t\in [0, 1],$ we have the inequality \eqref{2.4}. Multiplying both sides of \eqref{2.4} by \eqref{3.2} and integrating the resulting inequality with respect to $t$ over $[0, 1],$ we get \begin{align*}&\int\limits^1_0 \exp\left(-\rho t\right) u\left(ta+(1-t)b\right) w\left((1-t)a+tb\right)dt \\&+ \int\limits^1_0 \exp\left(-\rho t\right) u\left((1-t)a+tb\right)w\left((1-t)a+tb\right) dt \\& \leq \left[u(a)+u(b)\right]\int\limits^1_0 \exp\left(-\rho t\right)w\left((1-t)a+tb\right) dt. \end{align*} In consequence, we obtain \begin{align*}\mathcal{I}^\alpha_a \left(u w\right)(b)+\mathcal{I}^\alpha_b \left(u w\right)(a) \leq \frac{u(a)+u(b)}{2} \left[\mathcal{I}^\alpha_a w(b)+\mathcal{I}^\alpha_b w(a)\right].\end{align*} Thus the proof of Theorem \ref{th3.1} is complete.
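The chain \eqref{3.1} can also be checked numerically. The sketch below (ours, not part of the original argument; the quadrature routine, the convex function $u(x)=x^2$, and the symmetric weight $w(x)=x(1-x)$ are illustrative choices) uses $a=0$, $b=1$, $\alpha=1/2$:

```python
import math

def midpoint(f, lo, hi, n=4000):
    # Simple midpoint-rule quadrature.
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

def I_a(g, a, b, alpha):
    # Left fractional integral (I^alpha_a g)(b) with exponential kernel.
    lam = (1 - alpha) / alpha
    return midpoint(lambda s: math.exp(-lam * (b - s)) * g(s), a, b) / alpha

def I_b(g, a, b, alpha):
    # Right fractional integral (I^alpha_b g)(a).
    lam = (1 - alpha) / alpha
    return midpoint(lambda s: math.exp(-lam * (s - a)) * g(s), a, b) / alpha

a, b, alpha = 0.0, 1.0, 0.5
u = lambda x: x * x            # convex on [0, 1]
w = lambda x: x * (1 - x)      # nonnegative, symmetric about (a + b) / 2
uw = lambda x: u(x) * w(x)

W = I_a(w, a, b, alpha) + I_b(w, a, b, alpha)
left = u((a + b) / 2) * W
mid = I_a(uw, a, b, alpha) + I_b(uw, a, b, alpha)
right = (u(a) + u(b)) / 2 * W
print(left <= mid <= right)  # True
```

Note that $\mathcal{I}^\alpha_a w(b)$ and $\mathcal{I}^\alpha_b w(a)$ come out numerically equal, matching the symmetry identity used in the proof.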
\end{proof} \begin{corollary}Let $u:\,[a,b]\rightarrow \mathbb{R}$ be a concave and integrable function with $a<b.$ If $w:\,[a,b]\rightarrow \mathbb{R}$ is nonnegative, integrable and symmetric with respect to $\frac{a+b}{2},$ that is, $w(a+b-x)=w(x),$ then the following inequalities hold \begin{align*}u\left(\frac{a+b}{2}\right)\left[\mathcal{I}^{\alpha}_a w(b)+\mathcal{I}^{\alpha}_b w(a)\right]& \geq \left[\mathcal{I}^{\alpha}_a \left(u w\right)(b)+\mathcal{I}^{\alpha}_b\left(u w\right)(a)\right]\\& \geq \frac{u(a)+u(b)}{2}\left[\mathcal{I}^{\alpha}_a w(b)+\mathcal{I}^{\alpha}_b w(a)\right]. \end{align*} \end{corollary} \begin{remark} From Theorem \ref{th3.1} with $\alpha \to 1,$ we indeed recover the Hermite-Hadamard-Fej\'{e}r inequality \eqref{1.2}. \end{remark} \section{Dragomir-Agarwal type inequality}\label{DA} \begin{theorem}\label{th4.1} Let $u:\, I\subseteq \mathbb{R}\rightarrow \mathbb{R}$ be a differentiable mapping on $I$ and let $a, b \in I$ with $a<b.$ If $|u'|$ is convex on $[a,b],$ then the following inequality involving fractional integrals \eqref{I-1} and \eqref{I-2} holds: \begin{multline}\label{4.1}\left|\frac{u(a)+u(b)}{2}-\frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)} \left[\mathcal{I}^\alpha_a u(b)+\mathcal{I}^\alpha_b u(a)\right]\right|\\ \leq \frac{b-a}{2\rho}\tanh\left(\frac{\rho}{4}\right)\left(|u'(a)|+|u'(b)|\right).
\end{multline} \end{theorem} \begin{proof} For $u'\in L_1(a,b),$ it is easy to find that \begin{multline}\label{4.2} \frac{u(a)+u(b)}{2}-\frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)}\left[\mathcal{I}^\alpha_b u(a)+\mathcal{I}^\alpha_a u(b)\right]\\ =\frac{b-a}{2\left(1-\exp\left(-\rho\right)\right)}\left\{\int\limits^1_0 \exp\left(-\rho t\right)u'\left(ta+(1-t)b\right)dt\right.\\ -\left.\int\limits^1_0 \exp\left(-\rho(1-t)\right)u'\left(ta+(1-t)b\right)dt\right\}.\end{multline} Then, using \eqref{4.2} and the convexity of $|u'|,$ we obtain \begin{align*}\left|\frac{u(a)+u(b)}{2}\right.& \left.-\frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)} \left[\mathcal{I}^\alpha_b u(a)+\mathcal{I}^\alpha_a u(b)\right]\right|\\& \leq \frac{b-a}{2}\int\limits^1_0 \frac{\left|\exp\left(-\rho t\right) -\exp\left(-\rho(1-t)\right)\right|}{1-\exp\left(-\rho\right)} \left|u'\left(ta+(1-t)b\right)\right|dt\\& \leq \frac{b-a}{2}\int\limits^1_0 \frac{\left|\exp\left(-\rho t\right) -\exp\left(-\rho(1-t)\right)\right|}{1-\exp\left(-\rho\right)} t\left|u'\left(a\right)\right|dt\\& +\frac{b-a}{2}\int\limits^1_0 \frac{\left|\exp\left(-\rho t\right) -\exp\left(-\rho(1-t)\right)\right|}{1-\exp\left(-\rho\right)} (1-t)\left|u'\left(b\right)\right|dt\\&= \frac{b-a}{2}\left|u'\left(a\right)\right|\int\limits^{\frac{1}{2}}_0 \frac{\exp\left(-\rho t\right) -\exp\left(-\rho(1-t)\right)}{1-\exp\left(-\rho\right)} t dt\\& + \frac{b-a}{2}\left|u'\left(a\right)\right|\int\limits^1_{\frac{1}{2}} \frac{\exp\left(-\rho(1-t)\right)-\exp\left(-\rho t\right)} {1-\exp\left(-\rho\right)} t dt\\& + \frac{b-a}{2}\left|u'\left(b\right)\right|\int\limits^{\frac{1}{2}}_0 \frac{\exp\left(-\rho t\right) -\exp\left(-\rho(1-t)\right)}{1-\exp\left(-\rho\right)} (1-t) dt\\&+ \frac{b-a}{2}\left|u'\left(b\right)\right|\int\limits^1_{\frac{1}{2}} \frac{\exp\left(-\rho (1-t)\right)- \exp\left(-\rho t\right)}{1-\exp\left(-\rho\right)} (1-t) dt\\& = \frac{b-a}{2\left(1-\exp\left(-\rho \right)\right)} 
\left[\left|u'\left(a\right)\right|\left(I_1+I_2\right) + \left|u'\left(b\right)\right|\left(I_3+I_4\right)\right]. \end{align*} As a result, we get \begin{multline}\label{4.3}\left|\frac{u(a)+u(b)}{2}-\frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)}\left[\mathcal{I}^\alpha_b u(a)+\mathcal{I}^\alpha_a u(b)\right]\right|\\ \leq \frac{b-a}{2\left(1-\exp\left(-\rho \right)\right)}\left[\left|u'\left(a\right)\right|\left(I_1+I_2\right) + \left|u'\left(b\right)\right|\left(I_3+I_4\right)\right], \end{multline} where \begin{multline}\label{4.4}I_1=\int\limits^{\frac{1}{2}}_0 \left(\exp\left(-\rho t\right) -\exp\left(-\rho (1-t)\right)\right) t dt\\ =-\frac{\exp\left(-\frac{\rho}{2}\right)}{\rho}+ \frac{1}{\rho^2}\left(1 -\exp\left(-\rho\right)\right), \end{multline} \begin{multline}\label{4.5}I_2=\int\limits^1_{\frac{1}{2}} \left(\exp\left(-\rho(1-t)\right)-\exp\left(-\rho t\right)\right) t dt\\ =\frac{1}{\rho}\left(1 -\exp\left(-\frac{\rho}{2}\right)+\exp\left(-\rho\right)\right) -\frac{1}{\rho^2}\left(1 -\exp\left(-\rho \right)\right), \end{multline} \begin{multline}\label{4.6}I_3=\int\limits^{\frac{1}{2}}_0 \left(\exp\left(-\rho t\right) -\exp\left(-\rho (1-t)\right)\right) (1-t) dt\\ =-\frac{\exp\left(-\frac{\rho}{2}\right)} {\rho}+\frac{1}{\rho}\left(1+\exp\left(-\rho\right)\right) - \frac{1}{\rho^2}\left(1 -\exp\left(-\rho\right)\right) \end{multline} and \begin{multline}\label{4.7}I_4=\int\limits^1_{\frac{1}{2}} \left(\exp\left(-\rho(1-t)\right)- \exp\left(-\rho t\right)\right) (1-t) dt\\ =-\frac{\exp\left(-\frac{\rho}{2}\right)}{\rho}+ \frac{1}{\rho^2}\left(1 -\exp\left(-\rho\right)\right).\end{multline} Inserting the values of $I_i \, (i=1,2,3,4)$ given by \eqref{4.4}-\eqref{4.7} in \eqref{4.3}, we obtain the inequality \eqref{4.1}. This completes the proof.
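Inequality \eqref{4.1} can be verified numerically as well; in the sketch below (ours, not part of the original proof; the quadrature helper and the sample function $u(x)=x^2+1$ with $|u'(x)|=|2x|$ convex are illustrative choices) we take $a=0$, $b=1$, $\alpha=1/2$, so $\rho=1$:

```python
import math

def midpoint(f, lo, hi, n=4000):
    # Simple midpoint-rule quadrature.
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

a, b, alpha = 0.0, 1.0, 0.5
lam = (1 - alpha) / alpha
rho = lam * (b - a)
u = lambda x: x * x + 1   # differentiable; |u'| = |2x| is convex on [0, 1]
du = lambda x: 2 * x

# (I^alpha_a u)(b) and (I^alpha_b u)(a) from definitions (I-1), (I-2).
Ia = midpoint(lambda s: math.exp(-lam * (b - s)) * u(s), a, b) / alpha
Ib = midpoint(lambda s: math.exp(-lam * (s - a)) * u(s), a, b) / alpha

lhs = abs((u(a) + u(b)) / 2 - (1 - alpha) / (2 * (1 - math.exp(-rho))) * (Ia + Ib))
rhs = (b - a) / (2 * rho) * math.tanh(rho / 4) * (abs(du(a)) + abs(du(b)))
print(lhs <= rhs)  # True
```

For this instance the left-hand side is roughly $0.164$ against a bound of $\tanh(1/4)\approx 0.245$.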
\end{proof} \begin{corollary} Let $u:\, I\subseteq \mathbb{R}\rightarrow \mathbb{R}$ be a differentiable mapping on $I,a,b \in I.$ If $|u'|$ is concave on $[a,b],$ then the following inequality holds: \begin{multline*}\left|\frac{u(a)+u(b)}{2}-\frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)} \left[\mathcal{I}^\alpha_a u(b)+\mathcal{I}^\alpha_b u(a)\right]\right|\\ \geq \frac{b-a}{2\rho}\tanh\left(\frac{\rho}{4}\right)\left(|u'(a)|+|u'(b)|\right). \end{multline*} \end{corollary} \begin{remark} For $\alpha \rightarrow 1,$ we find that \begin{align*}\lim_{\alpha \rightarrow 1}\frac{1-\alpha}{2\left(1-\exp\left(-\rho \right)\right)}=\frac{1}{2(b-a)}, \end{align*} \begin{align*}\lim_{\alpha \rightarrow 1}\frac{b-a}{2\rho}\tanh\left(\frac{\rho}{4}\right)=\frac{b-a}{8}. \end{align*} Thus we get Dragomir-Agarwal inequality \eqref{1.3} from Theorem \ref{th4.1} when $\alpha \rightarrow 1.$ \end{remark} \section{Pachpatte type inequalities}\label{P} \begin{theorem}\label{th5.1} Let $u$ and $w$ be real-valued, nonnegative and convex functions on $[a, b].$ Then the following inequalities involving fractional integrals \eqref{I-1} and \eqref{I-2} hold: \begin{multline}\label{5.1}\frac{\alpha}{2(b-a)}\left[\mathcal{I}^\alpha_a\left(u(b)w(b)\right)+\mathcal{I}^\alpha_b\left(u(a)w(a)\right)\right]\\ \leq \left[u(a)w(a)+ u(b)w(b)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho \right)}{2\rho^3}\\ +\left[u(a)w(b)+u(b)w(a)\right]\frac{\rho-2+ \exp\left(-\rho \right)\left(\rho+2\right)}{\rho^3}, \end{multline} \begin{multline}\label{5.2}2u\left(\frac{a+b}{2}\right)w\left(\frac{a+b}{2}\right) \leq \frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)}\left[\mathcal{I}^{\alpha}_a u(b)w(b)+\mathcal{I}^{\alpha}_b u(a)w(a)\right]\\ + \left[u(a)w(a)+u(b)w(b)\right]\frac{\rho-2+ \exp\left(-\rho\right)\left(\rho +2\right)} {\rho^2\left(1-\exp\left(-\rho \right)\right)}\\ +\left[u(a)w(b)+u(b)w(a)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 
\rho+4\right)\exp\left(-\rho\right)} {2\rho^2\left(1-\exp\left(-\rho\right)\right)}. \end{multline} \end{theorem} \begin{proof} Since $u$ and $w$ are convex on $[a, b],$ for $\xi\in [0, 1],$ it follows from Definition \ref{def1.1} that \begin{align*}u\left(\xi a+(1-\xi)b\right)w\left(\xi a+(1-\xi)b\right)& \leq \xi^2 u(a)w(a)+ (1-\xi)^2u(b)w(b)\\&+\xi(1-\xi)\left[u(a)w(b)+u(b)w(a)\right] \end{align*} and \begin{align*} u\left((1-\xi)a+\xi b\right)w\left((1-\xi)a+\xi b\right)& \leq (1-\xi)^2 u(a)w(a)+ \xi^2u(b)w(b)\\& +\xi(1-\xi)\left[u(a)w(b)+u(b)w(a)\right]. \end{align*} Consequently, we have \begin{equation}\label{5.3}\begin{aligned} u\left(\xi a+(1-\xi)b\right)w\left(\xi a+(1-\xi)b\right)& +u\left((1-\xi)a+\xi b\right)w\left((1-\xi)a+\xi b\right)\\& \leq (2\xi^2-2 \xi+1) \left[u(a)w(a)+ u(b)w(b)\right]\\& +2\xi(1-\xi)\left[u(a)w(b)+u(b)w(a)\right].\end{aligned} \end{equation} Multiplying both sides of inequality \eqref{5.3} by $\exp\left(-\rho \xi \right)$ and integrating the resulting inequality with respect to $\xi \in [0, 1],$ we obtain \begin{multline*}\int\limits^1_0 \exp\left(-\rho \xi \right) u\left(\xi a+(1-\xi)b\right)w\left(\xi a+(1-\xi)b\right)d \xi \\ +\int\limits^1_0 \exp\left(-\rho \xi \right)u\left((1-\xi)a+\xi b\right)w\left((1-\xi)a+\xi b\right)d \xi \\ =\frac{\alpha}{b-a}\left[\mathcal{I}^\alpha_a\left(u(b)w(b)\right)+ \mathcal{I}^\alpha_b\left(u(a)w(a)\right)\right]\\ \leq \left[u(a)w(a)+ u(b)w(b)\right]\int\limits^1_0 \exp\left(-\rho \xi \right)(2\xi^2-2\xi+1)d \xi \\ +\left[u(a)w(b)+u(b)w(a)\right]\int\limits^1_0 \exp\left(-\rho \xi \right)2\xi(1-\xi)d \xi\\ =\left[u(a)w(a)+ u(b)w(b)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho\right)}{\rho^3}\\ +2\left[u(a)w(b)+u(b)w(a)\right]\frac{\rho-2+ \exp\left(-\rho\right)\left(\rho+2\right)}{\rho^3}.
\end{multline*} So \begin{multline*}\frac{\alpha}{2(b-a)}\left[\mathcal{I}^\alpha_a\left(u(b)w(b)\right)+\mathcal{I}^\alpha_b\left(u(a)w(a)\right)\right]\\ \leq \left[u(a)w(a)+ u(b)w(b)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho \right)}{2\rho^3}\\ +\left[u(a)w(b)+u(b)w(a)\right] \frac{\rho-2+\exp\left(-\rho \right)\left(\rho+2\right)}{\rho^3}, \end{multline*} which completes the proof of \eqref{5.1}. Next we establish the inequality \eqref{5.2}. Again using convexity of the functions $u$ and $w$ on $[a,b],$ we have \begin{align*}& u\left(\frac{a+b}{2}\right)w\left(\frac{a+b}{2}\right)\\& = u\left(\frac{\xi a+(1-\xi)b}{2}+\frac{(1-\xi)a+\xi b}{2}\right) w\left(\frac{\xi a+(1-\xi)b}{2}+\frac{(1-\xi)a+\xi b}{2}\right)\\& \leq \left(\frac{u\left(\xi a+(1-\xi)b\right)+u\left((1-\xi)a+\xi b\right)}{2}\right) \left(\frac{w\left(\xi a+(1-\xi)b\right)+w\left((1-\xi)a+\xi b\right)}{2}\right)\\& \leq\frac{u\left(\xi a+(1-\xi)b\right)w\left(\xi a+(1-\xi)b\right)}{4} +\frac{u\left((1-\xi)a+\xi b\right)w\left((1-\xi)a+\xi b\right)}{4}\\& +\frac{\xi(1-\xi)}{2}\left[u(a)w(a)+u(b)w(b)\right] +\frac{(2\xi^2-2\xi+1)}{4}\left[u(a)w(b)+u(b)w(a)\right]. \end{align*} Thus \begin{equation}\label{5.4} \begin{aligned} &u\left(\frac{a+b}{2}\right)w\left(\frac{a+b}{2}\right)\\ & \leq\frac{u\left(\xi a+(1-\xi)b\right)w\left(\xi a+(1-\xi)b\right)}{4} +\frac{u\left((1-\xi)a+\xi b\right)w\left((1-\xi)a+\xi b\right)}{4}\\& +\frac{\xi(1-\xi)}{2}\left[u(a)w(a)+u(b)w(b)\right] +\frac{(2\xi^2-2\xi+1)}{4}\left[u(a)w(b)+u(b)w(a)\right].
\end{aligned}\end{equation} Multiplying both sides of \eqref{5.4} by $\exp\left(-\rho \xi \right)$ and then integrating the resulting inequality with respect to $\xi \in [0, 1],$ we have \begin{align*}&\frac{1-\exp\left(-\rho \right)} {\rho}u\left(\frac{a+b}{2}\right)w\left(\frac{a+b}{2}\right)\\& \leq \int\limits^1_0 \exp\left(-\rho \xi \right)\frac{u\left(\xi a+(1-\xi)b\right)w\left(\xi a+(1-\xi)b\right)}{4}d \xi\\& +\int\limits^1_0 \exp\left(-\rho \xi \right)\frac{u\left((1-\xi)a+\xi b\right)w\left((1-\xi)a+\xi b\right)}{4}d \xi\\& +\int\limits^1_0 \exp\left(-\rho \xi\right) \frac{\xi(1-\xi)}{2}\left[u(a)w(a)+u(b)w(b)\right]d \xi\\& +\int\limits^1_0 \exp\left(-\rho \xi \right)\frac{2\xi^2-2\xi+1}{4}\left[u(a)w(b)+u(b)w(a)\right]d \xi\\& =\frac{\alpha}{4(b-a)}\left[\mathcal{I}^{\alpha}_a u(b)w(b)+\mathcal{I}^{\alpha}_b u(a)w(a)\right]\\& + \left[u(a)w(a)+u(b)w(b)\right]\frac{\rho-2+ \exp\left(-\rho\right)\left(\rho+2\right)}{2\rho^3}\\ &+\left[u(a)w(b)+u(b)w(a)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho\right)}{4\rho^3}, \end{align*} which can alternatively be written as \begin{align*}& u\left(\frac{a+b}{2}\right)w\left(\frac{a+b}{2}\right)\\& \leq\frac{1-\alpha}{4\left(1-\exp\left(-\rho\right)\right)}\left[\mathcal{I}^{\alpha}_a u(b)w(b)+\mathcal{I}^{\alpha}_b u(a)w(a)\right]\\& + \left[u(a)w(a)+u(b)w(b)\right]\frac{\rho-2+ \exp\left(-\rho\right)\left(\rho+2\right)} {2\rho^2\left(1-\exp\left(-\rho\right)\right)}\\ &+\left[u(a)w(b)+u(b)w(a)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right) \exp\left(-\rho\right)}{4\rho^2\left(1-\exp\left(-\rho\right)\right)}. \end{align*} This completes the proof.
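Inequality \eqref{5.1} admits a direct numerical check; the following sketch (ours, not part of the original proof; the quadrature helper and the sample pair $u(x)=x^2$, $w(x)=x+1$ are illustrative choices) takes $a=0$, $b=1$, $\alpha=1/2$, so $\rho=1$:

```python
import math

def midpoint(f, lo, hi, n=4000):
    # Simple midpoint-rule quadrature.
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

a, b, alpha = 0.0, 1.0, 0.5
lam = (1 - alpha) / alpha
rho = lam * (b - a)
u = lambda x: x * x       # nonnegative and convex on [0, 1]
w = lambda x: x + 1       # nonnegative and convex on [0, 1]
uw = lambda x: u(x) * w(x)

# Fractional integrals of the product uw, per definitions (I-1), (I-2).
Ia = midpoint(lambda s: math.exp(-lam * (b - s)) * uw(s), a, b) / alpha
Ib = midpoint(lambda s: math.exp(-lam * (s - a)) * uw(s), a, b) / alpha
lhs = alpha / (2 * (b - a)) * (Ia + Ib)

# The two coefficients from the right-hand side of (5.1).
K1 = (rho**2 - 2 * rho + 4 - (rho**2 + 2 * rho + 4) * math.exp(-rho)) / (2 * rho**3)
K2 = (rho - 2 + math.exp(-rho) * (rho + 2)) / rho**3
rhs = (u(a) * w(a) + u(b) * w(b)) * K1 + (u(a) * w(b) + u(b) * w(a)) * K2
print(lhs <= rhs)  # True
```

For this instance the left-hand side is roughly $0.373$ against a bound of roughly $0.528$.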
\end{proof} \begin{corollary} Suppose that $u$ and $w$ are real-valued, nonnegative and concave functions on $[a, b].$ Then the following inequalities hold: \begin{multline*}\frac{\alpha}{2(b-a)}\left[\mathcal{I}^\alpha_a\left(u(b)w(b)\right)+ \mathcal{I}^\alpha_b\left(u(a)w(a)\right)\right]\\ \geq \left[u(a)w(a)+ u(b)w(b)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho\right)}{2\rho^3}\\ +\left[u(a)w(b)+u(b)w(a)\right]\frac{\rho-2+ \exp\left(-\rho \right)\left(\rho+2\right)}{\rho^3}, \end{multline*} \begin{multline*} 2 u\left(\frac{a+b}{2}\right)w\left(\frac{a+b}{2}\right) \geq \frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)}\left[\mathcal{I}^{\alpha}_a u(b)w(b)+\mathcal{I}^{\alpha}_b u(a)w(a)\right]\\ + \left[u(a)w(a)+u(b)w(b)\right]\frac{\rho-2+ \exp\left(-\rho\right)\left(\rho+2\right)} {\rho^2\left(1-\exp\left(-\rho\right)\right)}\\ +\left[u(a)w(b)+u(b)w(a)\right]\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho\right)} {2\rho^2\left(1-\exp\left(-\rho\right)\right)}. \end{multline*} \end{corollary} \begin{remark} Using the limiting values \begin{align*}\lim_{\alpha \rightarrow 1}\frac{1-\alpha}{2\left(1-\exp\left(-\rho\right)\right)}=\frac{1}{2(b-a)}, \, \, \lim_{\alpha \rightarrow 1}\frac{\rho-2+\exp\left(-\rho\right)\left(\rho+2\right)}{\rho^3}=\frac{1}{6}, \end{align*} \begin{align*} \lim_{\alpha \rightarrow 1}\frac{{\rho}^2-2 \rho+ 4- \left({\rho}^2+2 \rho+4\right)\exp\left(-\rho\right)} {2\rho^2\left(1-\exp\left(-\rho\right)\right)}=\frac{1}{3}, \end{align*} we obtain the Pachpatte inequalities \eqref{1.4} and \eqref{1.5} from Theorem \ref{th5.1} when $\alpha \to 1.$ \end{remark} \section*{Discussions and Conclusions} We obtained generalizations of the Hermite-Hadamard, Hermite-Hadamard-Fej\'{e}r, Dragomir-Agarwal and Pachpatte type inequalities for the new fractional integral operators with exponential kernel.
As an immediate consequence of the results derived in this paper, one can obtain similar inequalities for the following fractional integrals with Mittag-Leffler nonsingular kernel:\\ \begin{equation*} \mathfrak{I}^\alpha_a u(x)=\frac{1}{\alpha}\int\limits^x_a E_{\alpha,1}\left(-\frac{1-\alpha}{\alpha}(x-s)^\alpha\right) u(s)ds,\, x>a \end{equation*} and \begin{equation*} \mathfrak{I}^\alpha_b u(x)=\frac{1}{\alpha}\int\limits^b_x E_{\alpha,1}\left(-\frac{1-\alpha}{\alpha}(s-x)^\alpha\right) u(s)ds,\, x<b \end{equation*} for $u\in L_1(a,b)$ and $\alpha\in (0,1).$ Here $E_{\alpha,\mu}\left(z\right)$ is the Mittag-Leffler type function: $$E_{\alpha,\mu}\left(z\right)=\sum\limits_{k=0}^{\infty} \frac{z^k}{\Gamma\left(\alpha k+\mu\right)}.$$ Moreover, we believe that the present work will serve as a motivation for fellow researchers to further enrich the literature on related topics. \end{document}
\begin{document} \title{Combinatorial Prophet Inequalities} \begin{abstract}{ \noindent We introduce a novel framework of Prophet Inequalities for combinatorial valuation functions. For a (non-monotone) submodular objective function over an arbitrary matroid feasibility constraint, we give an $O(1)$-competitive algorithm. For a monotone subadditive objective function over an arbitrary downward-closed feasibility constraint, we give an $O(\log n \log^2 r)$-competitive algorithm (where $r$ is the cardinality of the largest feasible subset). Inspired by the proof of our subadditive prophet inequality, we also obtain an $O(\log n \cdot \allowbreak \log^2 r)$-competitive algorithm for the Secretary Problem with a monotone subadditive objective function subject to an arbitrary downward-closed feasibility constraint. Even for the special case of a cardinality feasibility constraint, our algorithm circumvents an $\Omega(\sqrt{n})$ lower bound by Bateni, Hajiaghayi, and Zadimoghaddam \cite{BHZ13-submodular-secretary_original} in a restricted query model. En route to our submodular prophet inequality, we prove a technical result of independent interest: we show a variant of the Correlation Gap Lemma \cite{CCPV07-correlation_gap, ADSY-OR12} for non-monotone submodular functions. }\end{abstract} \ifFULL \else \setcounter{page}{0} \thispagestyle{empty} \fi \section{Introduction}\label{sec:Intro} The {\em Prophet Inequality} and {\em Secretary Problem} are classical problems in stopping theory. In both problems a decision maker must choose one of $n$ items arriving in an online fashion. In the Prophet Inequality, each item is drawn independently from a known distribution, but the order of arrival is chosen adversarially. In the Secretary Problem, the decision maker has no prior information about the set of items to arrive (except their cardinality, $n$), but the items are guaranteed to arrive in a uniformly random order.
Historically, there are many parallels between the research of those two problems. The classic (single item) variants of both problems were resolved a long time ago: in 1963 Dynkin gave a tight $e$-competitive algorithm for the Secretary Problem \cite{Dynkin63}; a little over a decade later Krengel and Sucheston \cite{KS77-prophet} gave a tight $2$-competitive algorithm for the Prophet Inequality. Motivated in part by applications to mechanism design, multiple-choice variants of both problems have been widely studied for the past decade in the online algorithms community. Instead of one item, the decision maker is restricted to selecting a feasible subset of the items. The seminal papers of \cite{HKP04-first_secretary, Kleinberg05-multiple_choice-secretary} introduced a secretary problem subject to a cardinality constraint (and \cite{Kleinberg05-multiple_choice-secretary} also obtained a $1-O(1/\sqrt{r})$-competitive algorithm). In 2007, Hajiaghayi et al. followed with a prophet inequality subject to a cardinality constraint \cite{HKS07-prophet_and_online-MD}. In the same year, Babaioff et al. introduced the famous {\em matroid secretary problem} \cite{BIK07-secretary_original}; in 2012 Kleinberg and Weinberg introduced (and solved!) the analogous matroid prophet inequality. For general downward-closed constraints, $O(\log n \log r)$-competitive algorithms were recently obtained for both problems \cite{Rub16-downward_closed}. In all the works mentioned in the previous paragraph, the goal is to maximize the sum of selected items' values, i.e. an additive objective is optimized. For the secretary problem, there has also been significant work on optimizing more general, combinatorial objective functions.
A fruitful line of works \cite{BHZ13-submodular-secretary_original, FNS11-submodular_secretary-old, BUCM12-submodular_secretary-old, FZ15-submodular_secretary} on the secretary problem with submodular valuations culminated in a general reduction by Feldman and Zenklusen \cite{FZ15-submodular_secretary} from any submodular function to additive (linear) valuations with only $O(1)$ loss. Going beyond submodular functions is an important problem \cite{FI15-supermodular_secretary}, but for subadditive objective functions there is a daunting $\Omega(\sqrt n)$ lower bound on the competitive ratio for restricted value queries \cite{BHZ13-submodular-secretary_original}. Surprisingly, this line of work on combinatorial secretary problems has seen no parallels in the world of prophet inequalities. In this work we break the ice by introducing a new framework of combinatorial prophet inequalities. \paragraph{Combinatorial Prophet Inequalities:} Our main conceptual contribution is a generalization of the Prophet Inequality setting to combinatorial valuations. Roughly, on each of $n$ days, the decision maker knows an independent prior distribution over $k$ potential items that could appear.\footnote{Note that some notion of independence assumption is necessary as even for the single choice problem, if values are arbitrarily correlated then every online algorithm is $\Omega(n)$ competitive~\cite{HillKertz-Journal92}.} She also has access to a combinatorial (in particular, submodular or monotone subadditive) function $f$ that describes the value of any subset of the $n \cdot k$ items (see Section~\ref{sec:combinatorial} for a formal definition and further discussion). We obtain the following combinatorial prophet inequalities: \begin{thm}[Submodular Prophet; informal]\label{thm:submodular-prophet} There exists an efficient randomized $O(1)$-competitive algorithm for (non-monotone) submodular prophet over any matroid.
\end{thm} \begin{thm}[Monotone Subadditive Prophet; informal]\label{thm:subadditive-prophet} There exists an $O(\log n \cdot \log^2 r)$-competitive algorithm for monotone subadditive prophet inequality subject to any downward-closed constraints family. \end{thm} \paragraph{Subadditive Secretary Problem:} Building on the techniques of our subadditive prophet inequality, we go back to the secretary world and prove a (computationally inefficient) $O(\log n \cdot \log^2 r)$-competitive algorithm for the subadditive secretary problem subject to any downward-closed feasibility constraint. As noted earlier, this algorithm circumvents the impossibility result of Bateni et al. \cite{BHZ13-submodular-secretary_original} for efficient algorithms\footnote{In fact, for general downward-closed constraint, even with additive valuations one should not expect efficient algorithms with membership queries \cite{Rub16-downward_closed}.}. \begin{thm}[Monotone Subadditive Secretary; informal]\label{thm:subadditive-secretary-informal} There exists an $O(\log n \cdot \log ^2 r)$-competitive algorithm for monotone subadditive secretaries subject to any downward-closed constraints family. \end{thm} \paragraph{Non-monotone correlation gap:} En route to Theorem \ref{thm:submodular-prophet}, we prove a technical contribution that is of independent interest: a constant {\em correlation gap} for non-monotone submodular functions. For a monotone submodular function $f$, \cite{CCPV07-correlation_gap} showed that the expected value of $f$ over any distribution of subsets is at most a constant factor larger than the expectation over subsets drawn from the product distribution with the same marginals.
This bound on the correlation gap has been very useful in the past decade with applications in optimization \cite{CVZ14-CRS}, mechanism design \cite{ADSY-OR12, Yan-SODA11, BCK12-correlation_gap_risk-averse, BH16-correlatin_gap-mechanism_design}, influence in social networks \cite{RSS15-knapsack_adaptive_seeding, BPRS16-locally_adaptive_seeding}, and recommendation systems \cite{KSS13-recommendation_system-correlation_gap}. It turns out (see Example \ref{ex:naive-corr-gap}) that when $f$ is non-monotone, the correlation gap is unbounded, even for $n=2$! Instead, we prove a correlation gap for a related function: \[ f_{\max}(S) \triangleq \max_{T \subseteq S} f(T).\] (Note that $f_{\max}$ is monotone, but may not be submodular.) \begin{thm}[Non-monotone correlation gap; informal]\label{thm:corrgap} For any (non-monotone) submodular function $f$, the function $f_{\max}$ has a correlation gap of $O(1)$. \end{thm} \ifFULL \subsection{Further related work} \paragraph{Secretary Problem} In recent years, following the seminal work of Babaioff, Immorlica, and Kleinberg \cite{BIK07-secretary_original}, there has been extensive work on the {\em matroid secretary problem}, where the objective is to maximize the sum of values of secretaries subject to a matroid constraint. For general matroids, there has been a sequence of improvements to the competitive ratio, with the state of the art being $O(\log \log r)$ \cite{Lachish14-secretary, FSZ15-loglogr}. Obtaining a constant competitive ratio for general matroids remains a central open problem, but constant bounds are known for many special cases (see survey by Dinitz \cite{Dinitz13-survey} and references therein). Other variants have also been considered, such as a secretary that returns for a second (or potentially $k$-th) interview \cite{Vardi15-returning_secretary}, or secretaries whose order of arrival has exceptionally low entropy \cite{KKN15-limited_randomness}.
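To make the definition of $f_{\max}$ above concrete, here is a brute-force sketch (ours, purely illustrative and feasible only for tiny ground sets). It takes the cut function of a triangle, a standard non-monotone submodular function, computes $f_{\max}$ by enumeration, and compares its expectation under a correlated distribution against the product distribution with the same marginals:

```python
from itertools import chain, combinations

# Toy non-monotone submodular function: cut function of a 3-cycle on {0, 1, 2}.
EDGES = [(0, 1), (1, 2), (0, 2)]

def f(S):
    S = set(S)
    return sum(1 for (u, v) in EDGES if (u in S) != (v in S))

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

def f_max(S):
    # f_max(S) = max over T subseteq S of f(T); monotone even though f is not.
    return max(f(T) for T in subsets(S))

def product_expectation(g, probs):
    # E[g(R)] when element i is included independently with probability probs[i].
    total = 0.0
    for R in subsets(range(len(probs))):
        p = 1.0
        for i in range(len(probs)):
            p *= probs[i] if i in R else 1 - probs[i]
        total += p * g(R)
    return total

# Correlated distribution: one uniformly random vertex (marginals 1/3 each).
corr = sum(f_max({i}) for i in range(3)) / 3
indep = product_expectation(f_max, [1 / 3, 1 / 3, 1 / 3])
print(corr / indep)  # a small constant, consistent with the O(1) gap for f_max
```

Of course, a single toy instance proves nothing; the theorem guarantees the $O(1)$ bound for every non-monotone submodular $f$ and every distribution.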
Of special interest to us is a recent paper by Feldman and Izsak \cite{FI15-supermodular_secretary} that considers matroid secretary problems with general monotone objective functions, parametrized by the {\em supermodular degree}. For objective $f$ with supermodular degree $\mathcal{D}_f^+$ and general matroid constraint, they obtain a competitive ratio of $O\left({\mathcal{D}_f^+}^3 \log \mathcal{D}_f^+ + {\mathcal{D}_f^+}^2\log r\right)$. Their results are incomparable to our Theorem \ref{thm:subadditive-secretary-informal}, but the motivation is related---obtaining secretary algorithms beyond submodular functions. \paragraph{Prophet Inequality} The connection between multiple-choice prophet inequalities and mechanism design was recognized in the seminal paper by Hajiaghayi et al.~\cite{HKS07-prophet_and_online-MD}. In particular, they proved a prophet inequality for uniform matroids; their bound was later improved by Alaei~\cite{Alaei11-tight_uniform_prophet}. Chawla et al. further developed the connection between prophet inequalities and mechanism design, and proved, for general matroids, a variant of the prophet inequality where the algorithm may choose the order in which the items are viewed. The {\em matroid prophet inequality} was first explicitly formulated by Kleinberg and Weinberg \cite{KW12-matroid_prophet}, who also gave a tight $2$-competitive algorithm. In a different direction, Alaei, Hajiaghayi, and Liaghat~\cite{AHL12-prophet_matching} considered a variant they call {\em prophet-inequality matching}, which is useful for online ad allocation. More generally, for the intersection of a constant number of matroid, knapsack, and matching constraints, Feldman, Svensson, and Zenklusen \cite{FSZ16-OCRS} gave an $O(1)$-competitive algorithm; this is a corollary of their {\em online contention resolution schemes} (OCRS), which we also use heavily (see Section~\ref{sec:OCRS}).
Azar, Kleinberg, and Weinberg \cite{AKW14-limited-information} considered a limited-information variant where the algorithm only has access to samples from each day's distributions. Esfandiari et al.~\cite{EHLM15-secrotrophet} considered a mixed notion of ``Prophet Secretary'' where the items arrive in a uniformly random order and draw their values from known independent distributions. Finally, for general downward-closed constraints, \cite{Rub16-downward_closed} gave $O(\log n \log r)$-competitive algorithms for both the Prophet Inequality and the Secretary Problem; these algorithms are the basis of our algorithms for the respective subadditive problems. \paragraph{Other notions of online submodular optimization} Online submodular optimization has been studied in contexts beyond secretary. In online submodular welfare maximization, there are $m$ items, $n$ people, and each person has a monotone submodular value function. Given the value functions, the items are revealed one-by-one and the problem is to immediately and irrevocably allocate each item to a person, while trying to maximize the sum of all the value functions (welfare). The greedy strategy is already $1/2$-competitive. Kapralov et al.~\cite{KPV-SODA13} showed that for adversarial arrival greedy is the best possible in general (competitive ratio of $1/2$), but under a ``large capacities'' assumption, a primal-dual algorithm can obtain a $1-1/e$ competitive ratio \cite{D0KMY13-large_capacities}. For random arrival, Korula et al.~\cite{KMZ-STOC15} showed that greedy can beat half; obtaining $1-1/e$ in this setting remains open. Buchbinder et al.~\cite{BFS-SODA15} considered the problem of (monotone) submodular maximization with {\em preemption}, when the items are revealed in an adversarial order. Since a sublinear competitive ratio is not possible in general with adversarial order, they consider a relaxed model where we are allowed to drop items (preemption) and give constant-competitive algorithms.
Submodular maximization has also been studied in the \emph{streaming setting}, where we have space constraints but are again allowed to drop items~\cite{BMKK-KDD14,CK-IPCO15,CGQ-ICALP15}. The learning community has looked into \emph{experts} and \emph{bandits} settings for submodular optimization. In these settings, different submodular functions arrive one-by-one and the algorithm, which is trying to minimize/maximize its value, has to select a set before seeing the function. The function is then revealed and the algorithm receives the value of the selected set. The goal is to perform as close as possible to the best fixed set in hindsight. Since submodular minimization can be reduced to convex function minimization using the Lov\'asz extension, sublinear regret is possible~\cite{HazanKale12}. For submodular maximization, the usual benchmark is a $1-1/e$ multiplicative loss and an additive regret~\cite{GolovinStreeter-NIPS08,GolovinKrause-COLT10,GKS-ArXiv14}. Interestingly, to the best of our knowledge none of these problems has been studied for subadditive functions. \else \subsection{Related work} Due to space constraints, we defer to the full version important discussion on connections to related works on the secretary problem~\cite{BIK07-secretary_original, Lachish14-secretary, FSZ15-loglogr, Dinitz13-survey, Vardi15-returning_secretary, KKN15-limited_randomness, FI15-supermodular_secretary}, prophet inequality~\cite{HKS07-prophet_and_online-MD, Alaei11-tight_uniform_prophet, KW12-matroid_prophet, AHL12-prophet_matching, FSZ16-OCRS, AKW14-limited-information, EHLM15-secrotrophet, Rub16-downward_closed}, and other related online models~\cite{KPV-SODA13, D0KMY13-large_capacities, KMZ-STOC15, BFS-SODA15, BMKK-KDD14,CK-IPCO15,CGQ-ICALP15, HazanKale12, GolovinStreeter-NIPS08,GolovinKrause-COLT10,GKS-ArXiv14}.
\fi \subsection{Organization} We begin by defining our model for combinatorial prophet inequalities in Section~\ref{sec:combinatorial}; in Section~\ref{sec:Prelim} we develop some necessary notation and recall known results; in Section~\ref{sec:pf-cor-gap} we formalize and prove our correlation gap for non-monotone submodular functions; and in Section~\ref{sec:ProphetMatroid} we prove the submodular prophet inequality. The subadditive prophet and secretary algorithms share the following high-level approach: use a lemma of Dobzinski \cite{Dobzinski07-subadditive-vs-XOS} to reduce subadditive to XOS objective functions; then solve the XOS case using the respective (prophet and secretary) algorithms of \cite{Rub16-downward_closed} for additive objective functions and general downward-closed feasibility constraints. It turns out that the secretary case is much simpler since we can use the additive downward-closed algorithm of \cite{Rub16-downward_closed} as a black box; we present it in Section~\ref{sec:Secretary}. For the prophet inequality, we have to make changes to the already highly non-trivial algorithm of \cite{Rub16-downward_closed} for the additive, downward-closed case; we defer this proof \ifFULL to Section~\ref{sec:Prophet}. \else to the full version. \fi \section{Combinatorial but Independent Functions}\label{sec:combinatorial} In this section, we define and motivate what we mean by ``submodular (or subadditive) valuations over independent items''. Recall that in the classic prophet inequality, a gambler is asked to choose one of $n$ independent non-negative random payoffs. In {\em multiple choice prophet inequalities}~\cite{HKS07-prophet_and_online-MD}, the gambler chooses multiple payoffs, subject to some known-in-advance feasibility constraint, and receives their sum.
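For concreteness, the classic single-choice setting admits a simple threshold rule: accept the first payoff exceeding half the prophet's benchmark $\E[\max_i X_i]$, which is well known to guarantee half of the benchmark in expectation. The sketch below (our own illustration; the uniform distributions are arbitrary choices) estimates both quantities by simulation:

```python
import random

random.seed(1)

# Each day i, a payoff is drawn uniformly from [0, scale[i]].
scales = [1.0, 3.0, 2.0, 5.0, 4.0]

def draw():
    return [random.uniform(0, s) for s in scales]

# Estimate the prophet's benchmark E[max] by simulation.
trials = 20000
samples = [draw() for _ in range(trials)]
e_max = sum(max(s) for s in samples) / trials

# Threshold rule: accept the first payoff >= E[max] / 2.
tau = e_max / 2
def gambler(payoffs):
    for v in payoffs:
        if v >= tau:
            return v
    return 0.0

alg = sum(gambler(s) for s in samples) / trials
print(alg / e_max)  # gambler-to-prophet ratio; guaranteed >= 1/2 in expectation
```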
Here, we are interested in the case where the gambler's utility is not additive over the outcomes of the random draws: For example, instead of monetary payoffs, at each time period the gambler can choose to receive a random item, say a car, and the utility from owning multiple cars diminishes quickly. As another example, the gambler receives monetary payoffs, but his marginal utility for the one-millionth dollar is much smaller than for the first. Formally, we define: \begin{defn} [{${\cal C}$ Valuations over Independent Items}]\label{def:C-independent} Let ${\cal C}$ be a class of valuation functions (in particular, we are interested in ${\cal C} \in \{\text{submodular, monotone subadditive}\}$). Consider: \begin{itemize} \item $n$ sets $U_1, \dots ,U_n$ and distributions ${\cal D}_1, \dots ,{\cal D}_n$, where ${\cal D}_i$ returns a single item from $U_i$. \item A function $f:\{0,1\}^{U} \rightarrow \mathbb{R}_+$, where $U \triangleq \bigcup_{i=1}^n U_i$ and $f \in {\cal C}$. \item For $\myvec{X} \in \prod_i U_i$ and subset $S \subseteq [n]$, let $v_{\myvec{X}}(S) \triangleq f\big(\{X_i: i\in S\}\big)$. \end{itemize} Let ${\cal D}$ be a distribution over valuation functions $v\left(\cdot\right):2^{n}\rightarrow\mathbb{R}_+$. We say that ${\cal D}$ is ``{\em ${\cal C}$ over independent items}'' if it can be written as the distribution that first samples $\myvec{X} \sim \bigtimes_i {\cal D}_i$, and then outputs the valuation function $v_{\myvec{X}}(\cdot)$. \end{defn} \subsubsection*{Related notions in the literature} Combinatorial functions over independent items have been considered before. Agrawal et al. \cite{ADSY-OR12}, for example, consider a different, incomparable framework defined via the generalization of submodular and monotone functions to non-binary, ordered domains (e.g. $f: \bigtimes U_i \rightarrow \mathbb{R}_+$ is submodular if $f(\max\{\myvec{X}, \myvec{Y}\}) + f(\min\{\myvec{X}, \myvec{Y}\}) \leq f(\myvec{X}) + f(\myvec{Y})$).
In our definition, by contrast, there is no natural way to define a full order over the set of items potentially available on each time period. For example, if we select an item on Day~$1$, an item $X_2$ on Day~$2$ may have a larger marginal contribution than item $Y_2$, but a lower contribution if we did not select any item on Day~$1$. Another relevant definition has been considered in probability theory \cite{Sch99-concentration_results} and more recently in mechanism design \cite{RW15-subadditive}. The latter paper considers auctioning $n$ items to a buyer that has a random, ``independent'' monotone subadditive valuation over the items. Here the seller knows which items she is selling, but different types of buyers may perceive each item differently. This is captured via an {\em attribute} of an item, which describes how each buyer values a bundle containing this item. Formally, \begin{defn} [{Monotone Subadditive Valuations over Independent Items \cite{Sch99-concentration_results, RW15-subadditive}}]\label{def:subadditive-independent-RW} We say that a distribution ${\cal D}$ over valuation functions $v\left(\cdot\right):2^{n}\rightarrow\mathbb{R}$ is {\em subadditive over independent items} if: \begin{enumerate} \item All $v\left(\cdot\right)$ in the support of ${\cal D}$ exhibit \emph{no externalities}. Formally, let $\Omega_{S}=\bigtimes_{i\in S}\Omega_{i}$, where each $\Omega_{i}$ is a compact subset of a normed space. There exists a distribution ${\cal D}^{\myvec{X}}$ over $\Omega_{[n]}$ and functions $V_{S}:\Omega_{S}\rightarrow\mathbb{R}$ such that ${\cal D}$ is the distribution that first samples $\myvec{X}\leftarrow{\cal D}^{\myvec{X}}$ and outputs the valuation function $v\left(\cdot\right)$ with $v\left(S\right)=V_{S}\left(\langle X_{i}\rangle_{i\in S}\right)$ for all $S$. \item All $v\left(\cdot\right)$ in the support of ${\cal D}$ are monotone and subadditive.
\item The private information is \emph{independent across items}. That is, the ${\cal D}^{\myvec{X}}$ guaranteed in Property 1 is a product distribution. \end{enumerate} \end{defn} { \noindent For monotone valuations, Definition~\ref{def:C-independent} is stronger than Definition~\ref{def:subadditive-independent-RW} as it assumes that the valuation function is defined over every subset of $U = \bigcup U_i$, rather than just the support of ${\cal D}$. However, it turns out that for monotone subadditive functions Definitions \ref{def:C-independent} and \ref{def:subadditive-independent-RW} are equivalent.} \begin{observation} A distribution ${\cal D}$ is subadditive over independent items according to Definition \ref{def:C-independent} if and only if it is subadditive over independent items according to Definition \ref{def:subadditive-independent-RW}. \end{observation} \ifFULL \begin{proof}[Proof sketch] It is easy to see that Definition \ref{def:C-independent} implies Definition \ref{def:subadditive-independent-RW}: We can simply identify the set of attributes $\Omega_i$ on day $i$ with the set of potential items $U_i$, and let $V_{S}\left(\langle X_{i}\rangle_{i\in S}\right) \triangleq f\big(\{X_i: i\in S\}\big)$. Observe that the desiderata of Definition \ref{def:subadditive-independent-RW} are satisfied. In the other direction, we again identify each $U_i$ with $\Omega_i$. For a feasible set $S$, consisting of exactly one item from $U_i$ for each $i$ in some set $R \subseteq [n]$, we can let $\myvec{X}_S$ be the corresponding vector in $\prod \Omega_i$, and define $f\big(S\big) \triangleq V_{\myvec{X}_S}(R)$. Definition \ref{def:C-independent} requires that we define $f(\cdot)$ over any subset of $U \triangleq \bigcup U_i$. We do this by taking the maximum of $f(\cdot)$ over all feasible subsets. Namely, for a set $T \subseteq U$, let $R_T \subseteq [n]$ again denote the set of $i$'s such that $|T \cap U_i| \geq 1$.
We set: \[ f(T) \triangleq \max_{\substack{S\subseteq T \;\;\; \text{s.t.}\\ \forall i \;\;\; |S \cap U_i| \leq 1}} V_{\myvec{X}_S}(R_T). \] Now $f(\cdot)$ is monotone subadditive because it is a maximum of monotone subadditive functions. \end{proof} \else (See full version for details.) \fi \section{Preliminaries}\label{sec:Prelim} \subsection{Submodular and Matroid Preliminaries}\label{sec:submodular-prelims} A set function $f: \{0,1\}^n \rightarrow \mathbb{R}_+$ is a submodular function if for all $S,T \subseteq [n]$ it satisfies $f(S\cup T) + f(S \cap T) \leq f(S) + f(T)$. For any $e\in [n]$ and $S \subseteq [n]$, let $f_S(e)$ denote $f(S\cup e) - f(S)$. We recall some notation used to extend submodular functions from the discrete hypercube $\{0,1\}^n$ to relaxations whose domain is the continuous hypercube $[0,1]^n$. For any vector $\myvec{x} \in [0,1]^n$, let $S \sim \myvec{x}$ denote a random set $S$ that contains each element $i \in [n]$ independently w.p. $x_i$. Moreover, let $1_S$ denote a vector of length $n$ containing $1$ for $i \in S$ and $0$ for $i \not\in S$. \begin{defn} We define important continuous extensions of any set function $f$. \noindent \quad \emph{Multilinear extension $F$}: \begin{align*} F(\myvec{x}) &\triangleq \E_{S\sim \myvec{x}}[f(S)]. \intertext{\quad \emph{Concave closure $f^+$}:} f^+(\myvec{x}) &\triangleq \max_{\alpha}\Big\{ \sum_{S\subseteq [n]} \alpha_S f(S) \mid \sum_S \alpha_S =1 \text{ and }\sum_S \alpha_S 1_S = \myvec{x} \Big\}. \intertext{\quad \emph{Continuous relaxation $f^*$}:} f^*(\myvec{x}) &\triangleq \min_{S \subseteq [n]} \bigg\{ f(S) + \sum_{i \in [n] \setminus S} f_S(i)\cdot x_i \bigg\} .
\end{align*} \end{defn} \subsubsection{Some Useful Results} \begin{lem}[Correlation gap \cite{CCPV07-correlation_gap}] \label{lem:corrgapmonot} For any monotone submodular function and $\myvec{x} \in [0,1]^n$, \[ F(\myvec{x}) \leq f^+(\myvec{x}) \leq f^*(\myvec{x}) \leq \left(1 - \frac1e \right)^{-1} F(\myvec{x}). \] \end{lem} \begin{lem}[Lemma $2.2$ of~\cite{BFNS-SODA14}]\label{lem:BFNS} Consider any submodular function $f$ and any set $A \subseteq [n]$. Let $S$ be a random subset of $A$ that contains each element of $A$ w.p. at most $p$ (not necessarily independently); then \[ \E_{S} [f(S)] \geq (1-p) \cdot f(\emptyset). \] \end{lem} \ifFULL \begin{lem}[Lemma $2.3$ of~\cite{FMV-SICOMP11}] \label{lem:FMV-doubleLem} For any non-negative submodular function $f$ and any sets $A,B \subseteq [n]$, \[ \E_{ \substack{S \sim 1_A/2 \\T \sim 1_B/2}} [f(S \cup T)] \geq \frac14 \left( f(\emptyset) + f(A) + f(B) + f(A \cup B)\right). \] \end{lem} \begin{lem}[Theorem $2.1$ of~\cite{FMV-SICOMP11}] \label{lem:FMV-double} For any non-negative submodular function $f$, any set $A \subseteq [n]$ and $p \in [0,1]$, \[ F({1}_A \cdot p ) \geq p(1-p) \cdot \max_{T \subseteq A} f(T). \] \end{lem} We can now prove the following useful variant of the previous two lemmata (see also an alternative self-contained proof in Appendix~\ref{sec:missingProofs}). \else We also need the following lemma, which is inspired by similar lemmata of Feige, Mirrokni, and Vondrak~\cite{FMV-SICOMP11}. \fi \begin{lem}\label{lem:low-high} Consider any non-negative submodular function $f$ and $0 \leq L \leq H \leq 1$. Let $S^*$ be a set that maximizes $f$, and let $\myvec{x} \in [0,1]^n$ be such that for all $i \in [n]$, $L \leq x_i \leq H$. Then, \[F(\myvec{x}) \geq L(1-H)\cdot f(S^*).\] \end{lem} \ifFULL \begin{proof} Imagine a process in which we construct the random set $T \sim \myvec{x}$, i.e.
a set $T$ containing each element $e$ independently w.p. $x_e$, in two steps. In the first step we construct a set $T'$ by selecting every element independently w.p. exactly $L$. In the second step we construct a set $\tilde{T}$ containing each element $e$ independently w.p. $(x_e - L)/(1-L)$. It's easy to verify that the union of the two sets $T' \cup \tilde{T}$ contains each element $e$ independently with probability exactly $x_e$. From Lemma~\ref{lem:FMV-double}, we know that at the end of the first step, the generated set has expected value $\E[f(T')] \geq L (1-L) f(S^*)$. Now, we argue that the second step does not ``hurt'' the value by a lot. We note that in the second step each element is added w.p. at most $(H - L)/(1-L)$ because $x_e \leq H$. Let $g(S) := f(S \cup T')$ be a non-negative submodular function. We apply Lemma~\ref{lem:BFNS} on $g$ to get $\E[g(\tilde{T})] \geq (1-H)/(1-L)\cdot g(\emptyset) $, which implies $\E[f(T' \cup \tilde{T})] \geq (1-H)/(1-L)\cdot \E[f(T')]$. Together, we get $F(\myvec{x}) = \E[f(T' \cup \tilde{T})] \geq L (1-L) (1-H)/(1-L) f(S^*) = L (1-H) f(S^*)$. \end{proof} \else \fi \subsubsection{Online Contention Resolution Schemes}\label{sec:OCRS} Given a point $\myvec{x}$ in the matroid polytope $P$ of a matroid $\mathcal{M}$, many submodular maximization applications would like to select each element $i$ independently with probability $x_i$ and claim that the selected set $S$ has expected value $F(\myvec{x})$~\cite{CVZ14-CRS}. The difficulty is that $S$ need not be feasible in $\mathcal{M}$, and we can only select a $T \subseteq S$ that is feasible. Chekuri et al.~\cite{CVZ14-CRS} introduced the notion of \emph{contention resolution schemes} (CRS) that describes how, given a random $S$, one can find a feasible $T\subseteq S$ such that the expected value $f(T)$ will be close to $F(\myvec{x})$. Recently, Feldman, Svensson, and Zenklusen gave {\em online contention resolution schemes} (OCRS).
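To make the contention resolution idea concrete, consider a toy offline CRS for the simplest matroid, the rank-one uniform matroid (at most one element may be kept): sample $S \sim \myvec{x}$ with $\sum_i x_i \leq 1$ and keep a uniformly random element of $S$. A short Jensen argument shows each element then survives with probability at least $x_i/2$. The sketch below (our own illustration, not the scheme of \cite{CVZ14-CRS}) checks this by exact enumeration:

```python
from itertools import product

# x in the rank-1 matroid polytope: sum(x) <= 1.
x = [0.3, 0.25, 0.2, 0.15]
n = len(x)

# Pr[i is kept] when S ~ x and we keep one uniform element of S.
selected = [0.0] * n
for bits in product([0, 1], repeat=n):
    p = 1.0
    for i in range(n):
        p *= x[i] if bits[i] else 1 - x[i]
    S = [i for i in range(n) if bits[i]]
    for i in S:
        selected[i] += p / len(S)

# Pr[i kept] = x_i * E[1/(1 + |S \ {i}|)] >= x_i / (1 + sum_{j != i} x_j)
# by Jensen's inequality, hence >= x_i / 2 whenever sum(x) <= 1.
for i in range(n):
    assert selected[i] >= x[i] / 2
```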
Informally, with an OCRS the decision of whether to select element $i \in S$ into $T$ can be made online, even before knowing the entire set $S$~\cite{FSZ16-OCRS}. In particular, we will need their definition of a {\em greedy} OCRS; we define it below and state the results from \cite{FSZ16-OCRS} that we use in our $O(1)$-submodular prophet inequality result over matroids. \begin{defn}[Greedy OCRS] Let $\myvec{x}$ belong to a matroid polytope $P$ and $S \sim \myvec{x}$. A greedy OCRS defines a downward-closed family $\cal{F}_{\myvec{x}}$ of feasible sets in the matroid. The elements reveal one-by-one whether they belong to $S$, and when element $i \in [n]$ is revealed, the greedy OCRS selects it if, together with the already selected elements, the obtained set is in $\cal{F}_{\myvec{x}}$. \end{defn} \ignore{ \begin{defn}[$(b,c)$-selectable] For $0 \leq b,c \leq 1$, a greedy OCRS is $(b,c)$-selectable if for any $\myvec{x} \in b\cdot P $, given that some element $i \in S$, where $S\sim \myvec{x}$, it selects $i$ with probability at least $c$. \end{defn} \begin{lem}[Theorem $1.8$ of~\cite{FSZ16-OCRS}]\label{lem:OcrsExist} There exists a $(\frac12,\frac12 )$-selectable deterministic greedy OCRS for matroid polytopes. \end{lem} \begin{lem}[Theorem $1.10$ of~\cite{FSZ16-OCRS}]\label{lem:OcrsUseful} Given a non-negative submodular function $f$ and a $(b,c)$-selectable greedy OCRS for a polytope $P$, applying the OCRS to an input $x \in b\cdot P$ results in a random set $T$ satisfying $\E_T[F(1_T/2)] \geq (c/4)\cdot F(x)$. \end{lem}} \begin{lem}[Theorems 1.8 and 1.10 of~\cite{FSZ16-OCRS}]\label{lem:OCRS} Given a non-negative submodular function $f$, a matroid $\mathcal{M}$, and a vector $\myvec{x}$ in the convex hull of independent sets in $\mathcal{M}$, there exists a deterministic greedy OCRS that outputs a set $T$ satisfying $\E_T[F(1_T/2)] \geq (1/16)\cdot F(x)$.
\end{lem} \subsection{Subadditive and Downward-Closed Preliminaries} A set function $f: \{0,1\}^n \rightarrow \mathbb{R}_+$ is {\em subadditive} if for all $S,T \subseteq [n]$ it satisfies $f(S\cup T) \leq f(S) + f(T)$. It is monotone if $f(S) \leq f(T)$ for any $S \subseteq T$. A set function $f: \{0,1\}^n \rightarrow \mathbb{R}_+$ is an {\em XOS} (alternately, {\em fractionally subadditive}) function if there exist linear functions $L_i: \{0,1\}^n \rightarrow \mathbb{R}_+$ such that $f(S) = \max_i \{L_i(S) \}$. (See Feige~\cite{Feige-SICOMP09} for illustrative examples.) \begin{lem} [{\cite[Lemma 3]{Dobzinski07-subadditive-vs-XOS}}]\label{lem:dobzinski} Any subadditive function $v:2^{n}\rightarrow\mathbb{R}_{+}$ can be $\left(\frac{\log\left|S\right|}{2e}\right)$-approximated by an XOS function $\widehat{v}:2^{n}\rightarrow\mathbb{R}_{+}$; i.e. for every $S\subseteq\left[n\right]$, \[ \widehat{v}\left(S\right)\leq v\left(S\right)\leq\left(\frac{\log\left|S\right|}{2e}\right)\widehat{v}\left(S\right). \] Furthermore, the XOS function has the form $\widehat{v}\left(S\right)=\max_{T\subseteq\left[n\right]}p_{T}\cdot\left|T\cap S\right|$ for an appropriate choice of $p_{T}$'s.\end{lem} \begin{thm} [{\cite{Rub16-downward_closed}}] \label{thm:additive_secretary} When items take values in $\left\{ 0,1\right\}$, there are $O\left(\log n\right)$-competitive algorithms for (additive) {\sc Downward-Closed Secretary} and {\sc Downward-Closed Prophet}. \end{thm} \section{Correlation Gap for non-monotone submodular functions}\label{sec:pf-cor-gap} For monotone submodular functions, \cite{CCPV07-correlation_gap} proved that \begin{gather} \label{eq:monotone-gap} F(\myvec{x}) \geq (1-1/e) f^+(\myvec{x}). \end{gather} This result was later rediscovered by \cite{ADSY-OR12}, who called the ratio between $f^+(\myvec{x})$ and $F(\myvec{x})$ the {\em correlation gap}.
It is useful in many applications since it says that, up to a constant factor, picking items independently is as good as the best correlated distribution with the same element marginals. What is the correct generalization of \eqref{eq:monotone-gap} to non-monotone submodular functions? It is tempting to conjecture that $F(\myvec{x}) \geq c\cdot f^+(\myvec{x})$ for some constant $c>0$. However, the following example shows that even for a function as simple as the directed cut function on a two-vertex graph, this gap may be unbounded. \begin{example} \label{ex:naive-corr-gap} Let $f$ be the directed cut function on the two-vertex graph $u \rightarrow v$; i.e. $f(\emptyset) = 0$, $f(\{u\}) = 1$, $f(\{v\}) = 0$, and $f(\{u,v\}) = 0$. Let $\myvec{x} = (\epsilon, 1-\epsilon)$. Then, \[F(\myvec{x}) = \epsilon^2 \ll \epsilon = f^+(\myvec{x}).\] \end{example} It turns out that the right way to generalize \eqref{eq:monotone-gap} to non-monotone submodular functions is to first make them monotone: \begin{defn} [$f_{\max}$] \[ f_{\max}(S) \triangleq \max_{T \subseteq S} f(T).\] \end{defn} For non-monotone submodular $f$, we have that $f_{\max}$ is monotone, but it may no longer be submodular, as shown by the following example: \begin{example} [$f_{\max}$ is not submodular] Let $f$ be the directed cut function on the four-vertex graph $u \rightarrow v \rightarrow w \rightarrow x$. In particular, $f(\{v\}) = 1$, $f(\{u,v\}) = 1$, $f(\{v,w\}) = 1$, and $f(\{u,w\}) = 2$.
\[f_{\max}(\{u,v\}) - f_{\max}(\{v\}) = 1 - 1 < 2 -1 = f_{\max}(\{u,v,w\}) - f_{\max}(\{v,w\}).\] \end{example} Finally, we are ready to define the correlation gap for non-monotone functions: \begin{defn} [Correlation gap] \label{def:correlation-gap} The {\em correlation gap} of any set function $f$ is \[ \max_{\myvec{x} \in [0,1]^n} \frac{ f^+(\myvec{x}) }{ F_{\max} (\myvec{x}) } , \] where $F_{\max}$ is the multilinear extension of $f_{\max}$. \end{defn} Notice that for monotone $f$, we have that $f_{\max} \equiv f$, so Definition~\ref{def:correlation-gap} generalizes the correlation gap for monotone submodular functions. Furthermore, one could replace $f^+$ with $f^+_{\max}$ in Definition~\ref{def:correlation-gap}; observe that the resulting definition is equivalent. \begin{thm}[Non-monotone correlation gap]\label{thm:correlation-gap_formal} For any (non-monotone non-negative) submodular function $f$, the correlation gap is at most $200$. \end{thm} While the constant can be improved slightly, we have not tried to optimize it, focusing instead on clarity of exposition. Our proof goes through a third relaxation, $f^*_{1/2}$. \begin{defn} [$f^*_{1/2}$] \label{defn:gstar} For any set function $f$ and any $\myvec{x}\in [0,1]^n$, \begin{align*} f^*_{1/2}(\myvec{x}) \triangleq \min_{S \subseteq [n]} \bigg\{ \E_{T \sim 1_S/2} \big[f(T) + \sum_{i \in [n] \setminus S} f_T(i)\cdot x_i \big] \bigg\}. \end{align*} \end{defn} Below, we will prove (Lemma \ref{lem:gstaratleast}) that $f^+(\myvec{x}) \leq 4 \cdot f^*_{1/2}(\myvec{x})$. We then show (Lemma \ref{lem:gstaratmost}) that $f^*_{1/2}(\myvec{x}) \leq 50 \cdot F(\myvec{x}/2)$, which implies \begin{gather}\label{eq:x/2-correlation_gap} f^+(\myvec{x}) \leq 4 \cdot f^*_{1/2}(\myvec{x}) \leq 200 \cdot F(\myvec{x}/2) .
\end{gather} Finally, to finish the proof of Theorem~\ref{thm:correlation-gap_formal}, it suffices to show that $F(\myvec{x}/2) \leq F_{\max}(\myvec{x})$. This is easy to see since drawing $T$ according to $\myvec{x}/2$ is equivalent to drawing $S$ according to $\myvec{x}$, and then throwing out each element from $S$ independently with probability $1/2$. For $F_{\max}(\myvec{x})$, on the other hand, we draw the same set $S$ and then take the optimal subset. \subsection{Proof that $f^+(\myvec{x}) \leq 4 \cdot f^*_{1/2}(\myvec{x})$} \begin{lem}\label{lem:gstaratleast} For any $\myvec{x}\in [0,1]^n$ and non-negative submodular function $f: \{0,1\}^n \rightarrow \mathbb{R}_+$, \[ f^+(\myvec{x}) \leq 4 \cdot f^*_{1/2}(\myvec{x}). \]\end{lem} We first prove the following auxiliary claim: \begin{claim} \label{cla:aux} For any sets $S,T \subseteq [n]$, \[ \E_{T_{1/2} \sim 1_T/2} [f((S \setminus T) \cup T_{1/2})] \geq \frac14 f(S). \] \end{claim} \begin{proof} Define a new auxiliary function $h(U) \triangleq f((S \setminus T) \cup U)$. Observe that $h$ continues to be non-negative and submodular. We now have, \begin{align*} \E_{T_{1/2} \sim 1_T/2} [f((S \setminus T) \cup T_{1/2})] & = \E_{T_{1/2} \sim 1_T/2} [h(T_{1/2})] \\ & \geq \frac14 h(T) && \text{(Lemma~\ref{lem:low-high} for $L=H=1/2$)}\\ & = \frac14 f(S) && \text{(Definition of $h$)}. \end{align*} \ignore{ Define also $A \triangleq T \cap S$ and $B \triangleq T \setminus S$. We can now rewrite, \begin{gather*} \E_{T_{1/2} \sim 1_T/2} [f((S \setminus T) \cup T_{1/2})] = \E_{T_{1/2} \sim 1_T/2} [h(T_{1/2})] = \E_{ \substack{S \sim 1_A/2 \\T \sim 1_B/2}} [h(S \cup T)] .
\end{gather*} Applying Lemma \ref{lem:FMV-doubleLem}, we get: \begin{align*} \E_{T_{1/2} \sim 1_T/2} [f((S \setminus T) \cup T_{1/2})] & \geq \frac14 \left( h(\emptyset) + h(A) + h(B) + h(A \cup B)\right) \\ & \geq \frac14 h(A) && \text{($h$ is non-negative)}\\ & = \frac14 f((S \setminus T) \cup A) && \text{(Definition of $h$)}\\ & = \frac14 f(S) && \text{(Definition of $A$).} \end{align*} } \end{proof} \begin{proof}[Proof of Lemma \ref{lem:gstaratleast}] Fix $\myvec{x}$, and let $S^* = S^*(\myvec{x})$ denote the optimal set that satisfies $f^*_{1/2}(\myvec{x}) = \E_{T \sim 1_{S^*}/2} \big[f(T) + \sum_{i \in [n] \setminus S^*} f_T(i) x_i \big] $. Let $\{\alpha_S\}$ be the optimal distribution that satisfies $f^+(\myvec{x}) = \sum_S \alpha_S f(S)$. Then, $\frac 14 f^+(\myvec{x}) = \frac 14 \sum_S \alpha_S f(S) $ \begin{align*} &\leq \sum_S \alpha_S \cdot \E_{T \sim 1_{S^*}/2} [ f((S\setminus S^*)\cup T) ] && \text{(Claim \ref{cla:aux})} \\ &= \sum_S \alpha_S \cdot \E_{T \sim 1_{S^*}/2} \left[ f(T) + f_T(S \setminus S^*) \right] \\ &\leq \sum_S \alpha_S \cdot \E_{{T \sim 1_{S^*}/2}} \left[ f(T) + \sum_{i \in S\setminus S^*} f_T(i) \right] && \text{(submodularity)}\\ &= \E_{{T \sim 1_{S^*}/2}} \left[f(T) \sum_S \alpha_S + \sum_S \alpha_S \sum_{i \in S\setminus S^*} f_T(i) \right] \\ &= \E_{{T \sim 1_{S^*}/2}} \left[ f(T) + \sum_{i \in [n]\setminus S^*} f_T(i) x_i \right] = f^*_{1/2} (\myvec{x}) && \text{(using $\sum_S \alpha_S 1_S = \myvec{x}$)}. \end{align*} \end{proof} \subsection{Proof that $f^*_{1/2}(\myvec{x}) \leq 50 \cdot F(\myvec{x} /2)$} The proof of the following lemma is similar to Lemma $5$ in~\cite{CCPV07-correlation_gap}. \ifFULL \else (See the full version.) \fi \begin{lem}\label{lem:gstaratmost} \[ f^*_{1/2}(\myvec{x}) \leq 50 \cdot F\left( \myvec{x} / 2\right). \] \end{lem} \ifFULL \begin{proof} Consider an exponential clock running for each element $i \in [n]$ at rate $x_i$.
Whenever the clock triggers, we update set $S$ to $S \cup \{i\}$. For $t \in [0,1]$, let $S(t)$ denote the set of elements in $S$ by time $t$. Thus, each element $i$ belongs to $S(1)$ w.p. $1 - \exp(-x_i)$, which is between $x_i (1- \frac1e)$ and $x_i$. Let $V(t) \triangleq \E_{T \sim 1_{S(t)}/2 } [f(T)]$, i.e. the expected value of a set that picks each element in $S(t)$ independently w.p. $\frac12$. Our goal is to show that: \begin{gather} \label{eq:goal} f^*_{1/2}(\myvec{x}) \leq \left(\frac{2}{1-e^{-1/2}} \right) \cdot \E [V(1)] \leq \left(\frac{2}{1-e^{-1/2}} \right) \left(\frac{4(e-1)}{e-2} \right) \cdot F\left( \myvec{x} / 2\right). \end{gather} We begin with the second inequality of \eqref{eq:goal}. Consider the auxiliary submodular function $g(S) \triangleq \E_{T \sim 1 - \exp(-\myvec{x})} [f(S \cap T)]$, and let $G$ denote its multilinear extension. Let $S^*$ be a maximizer of $g$, and observe that \begin{gather*} \E[V(1)] = G(1_{[n]}/2) \leq g(S^*). \end{gather*} Observe further that, with slight abuse of notation, $F(\myvec{x} / 2) = G\left(\frac{\myvec{x}/2}{1 - \exp(-\myvec{x})} \right)$; this is well-defined since for any $x_i \in [0,1]$, we have \begin{gather*} \frac12 \leq \frac{x_i/2}{1 - \exp(-x_i)} \leq \frac{e}{2(e-1)} < 1. \end{gather*} Moreover, since $\frac{x_i/2}{1 - \exp(-x_i)}$ is bounded, Lemma~\ref{lem:low-high} gives \[F(\myvec{x} / 2) \geq \frac12 \cdot \left(1- \frac{e}{2(e-1)}\right) \cdot g(S^*) = \frac{e-2}{4(e-1)} g(S^*) =\Omega(1) \cdot g(S^*).\] We now turn to the first inequality of \eqref{eq:goal}. Consider an infinitesimal interval $(t, t+dt]$. For any $i \notin S(t)$ the exponential clock triggers with probability $x_i \; dt$, so it contributes to $V(t+dt)$ with probability $x_i/2 \; dt$. The probability that two clocks trigger in the same infinitesimal interval is negligible ($O(dt^2)$).
Therefore, \begin{align*} \E[ V(t+dt) - V(t)] & = \E_{S(t)} \E_{T \sim 1_{S(t)}/2 } \left[ \sum_{j \in [n]\setminus S(t)} \frac{x_j}{2} f_T(j) \; dt \right] -O(dt^2)\\ & \geq \frac{1}{2} \Big(f^*_{1/2}(\myvec{x}) - \underbrace{\E_{S(t)} \E_{T \sim 1_{S(t)}/2 } [f(T)]}_{\E[ V(t) ]}\Big)\; dt -O(dt^2). \end{align*} Dividing both sides by $dt$ and taking the limit as $dt \rightarrow 0$, we get: \begin{gather*} \frac{d}{dt} \E[ V(t) ] \geq \frac{1}{2} \Big(f^*_{1/2}(\myvec{x}) - \E[ V(t) ]\Big). \end{gather*} To solve the differential inequality, let $\phi(t) = \E[V(t) ] $ and $\psi(t) = \exp(\frac{t}{2})\, \phi(t)$. We get $\frac{d\phi}{dt} \geq \frac12 (f^*_{1/2}(\myvec{x}) - \phi(t))$ and $\frac{d\psi}{dt} = \exp(\frac{t}{2}) (\frac{d\phi}{dt} + \frac{\phi(t)}{2}) \geq \exp(\frac{t}{2})\,\frac{f^*_{1/2}(\myvec{x})}{2}$. Since $\psi(0) = \phi(0) = 0$, integration over $t$ gives \[ \E[V(t)] = \phi(t) = \exp(-t/2) \,\psi(t) \geq \frac{f^*_{1/2}(\myvec{x})}{2} (1 - \exp(-t/2) ). \] In particular, plugging in $t=1$ completes the proof of the first inequality in \eqref{eq:goal}. \end{proof} \else \fi \section{Submodular Prophets over Matroids}\label{sec:ProphetMatroid} \begin{defn} [{\sc Submodular Matroid Prophet}] The offline inputs to the problem are: \begin{itemize} \item $n$ sets $U_1, \dots ,U_n$; we denote their union $U \triangleq \bigcup_{i=1}^n U_i$; \item a (not necessarily monotone) non-negative submodular function $f:\{0,1\}^{U} \rightarrow \mathbb{R}_+$; \item $n$ distributions, where $\mathcal{D}_i$ is a distribution over $U_i$; and \item a matroid $\cal{M}$ over $[n]$. \end{itemize} On the $i$-th time period, the algorithm observes an element $X_i \in U_i$ drawn according to $\mathcal{D}_i$, independently from outcomes and actions on past and future days.
The algorithm must decide (immediately and irrevocably) whether to add $i$ and $X_i$ to sets $W$ and $X_W$, respectively, subject to $W$ remaining independent in $\cal{M}$. The objective is to maximize $f(X_W)$. \end{defn} \begin{thm}\label{thm:submod-prophet-matroid} There is a randomized algorithm with a competitive ratio of $O(1)$ for any {\sc Submodular Matroid Prophet}. \end{thm} \paragraph{Proof overview} The main ingredients in the proof of Theorem \ref{thm:submod-prophet-matroid} are known {\em online contention resolution schemes} (OCRS) due to Feldman, Svensson, and Zenklusen~\cite{FSZ16-OCRS}, and our new bound on the {\em correlation gap} for non-monotone submodular functions (Theorem~\ref{thm:correlation-gap_formal}). Let $\myvec{x} \in [0,1]^{U}$ denote the vector of probabilities that each element realizes (i.e. $x_{(i,j)} = \mathcal{D}_i(j)$). A naive proof plan proceeds as follows: Select elements online using the OCRS (w.r.t.\ $\myvec{x}$); obtain a constant factor approximation to $F(\myvec{x})$; use a ``correlation gap'' argument to show a constant factor approximation of $f^+(\myvec{x})$; finally, observe that $f^+(\myvec{x})$ is an upper bound on $OPT$. There are two problems with that plan: First, the OCRS of Feldman et al. applies when elements realize independently. The realizations of different elements on the same day are obviously correlated (exactly one element realizes), so we cannot directly apply their OCRS. The second problem is that for non-monotone submodular functions, it is in general not true that $F(\myvec{x})$ approximates $f^+(\myvec{x})$ (see Example \ref{ex:naive-corr-gap}). The solution to both obstacles is working with $\myvec{x}/2$ instead of $\myvec{x}$. In Section~\ref{sec:pf-cor-gap} we showed that $F(\myvec{x}/2)$ is a constant factor approximation of $f^+(\myvec{x})$ (Ineq. \eqref{eq:x/2-correlation_gap}).
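To make the role of the scaling by $2$ concrete, here is a standalone sketch (a toy example of our own, not from the text) that evaluates the multilinear extension $F$ by brute force on the cut function of a single directed edge, a non-monotone submodular function: $F$ vanishes at $\myvec{x}=(1,1)$ even though the function attains value $1$, while $F(\myvec{x}/2)$ retains constant value.

```python
from itertools import product

def cut_edge(S):
    # Directed-cut value of the single edge 0 -> 1: non-monotone submodular.
    return 1.0 if (0 in S and 1 not in S) else 0.0

def multilinear_F(f, x):
    # F(x) = E[f(S)] where each element i is in S independently w.p. x[i].
    # Brute force over all 2^n subsets (fine for toy sizes).
    n = len(x)
    total = 0.0
    for bits in product([0, 1], repeat=n):
        S = {i for i in range(n) if bits[i]}
        p = 1.0
        for i in range(n):
            p *= x[i] if bits[i] else 1.0 - x[i]
        total += p * f(S)
    return total

# With x = (1, 1) both endpoints always appear, so F(x) = 0 ...
assert multilinear_F(cut_edge, (1.0, 1.0)) == 0.0
# ... but scaling to x/2 leaves constant mass on the cut {0}:
assert abs(multilinear_F(cut_edge, (0.5, 0.5)) - 0.25) < 1e-12
```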
Then, in Subsection \ref{sec:use-OCRS}, we give an algorithm that approximates the selection of the greedy OCRS on $\myvec{x}/2$. Our plan is then to show: \begin{align*} ALG & = \Omega(\E_{S \sim OCRS(\myvec{x}/2)}[f(S)]) && \text{(Subsection \ref{sec:use-OCRS})} \\ & = \Omega(F(\myvec{x}/2)) && \text{(Lemma \ref{lem:OCRS})} \\ & = \Omega (f^+(\myvec{x})) && \text{(Ineq. \eqref{eq:x/2-correlation_gap})} \\ & = \Omega (OPT). \end{align*} \subsection{Applying the OCRS to our setting}\label{sec:use-OCRS} In this subsection we show an algorithm that obtains, in expectation, $1/2$ of the expected value of the OCRS with probabilities $\myvec{x}/2$. Our algorithm uses the greedy OCRS as a black box. On each day, the algorithm (sequentially) feeds the OCRS a subset of the elements of $U_i$ that can potentially arrive on that day. The subset on each day is chosen at random; it is correlated with the element that actually arrives on that day, and independent of the subsets chosen on other days. The guarantee is that the distribution over sequences fed into the OCRS is identical to the distribution induced by $\myvec{x}/2$. \subsubsection*{Reduction} For each $i$, let $U_i$ denote the set of elements that can arrive on day $i$, and fix some (arbitrary) order over $U_i$. For a subset $S_i \subseteq U_i$, let $P_{\myvec{x}/2}^i(S_i)$ denote the probability that the set $S_i$ is exactly the outcome of sampling from $U_i$ according to $\myvec{x}/2$. When element $(i,j)$ arrives on day $i$, the algorithm feeds into the OCRS a random set $T_i$ drawn from the following distribution. With probability $\frac{P_{\myvec{x}/2}^i(\{(i,j)\})}{x_{i,j}}$, the algorithm feeds just element $(i,j)$, i.e. $T_i = \{(i,j)\}$; notice that this guarantees $ \Pr \left[T_i = \{(i,j)\} \right]= P_{\myvec{x}/2}^i(\{(i,j)\})$. Otherwise, the algorithm lets $T_i$ be a random subset of $U_i$, drawn according to $\myvec{x}/2$, conditioned on $|T_i| \neq 1$.
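The distributional guarantee of this reduction, namely that the set fed to the OCRS on a day is distributed exactly as independent sampling at rates $\myvec{x}/2$, can be checked exactly on a toy day; the arrival probabilities below are illustrative numbers of our own:

```python
from itertools import product

def product_law(rates):
    # P[S] under independent inclusion with probability rates[i].
    n = len(rates)
    law = {}
    for bits in product([0, 1], repeat=n):
        S = frozenset(i for i in range(n) if bits[i])
        p = 1.0
        for i in range(n):
            p *= rates[i] if bits[i] else 1.0 - rates[i]
        law[S] = p
    return law

def induced_law(x):
    # Law of the set T_i fed to the OCRS on one day, following the reduction:
    # element j arrives w.p. x[j]; with prob P[{j}]/x[j] feed the singleton {j},
    # otherwise feed a sample from P conditioned on |T| != 1.
    P = product_law([xi / 2.0 for xi in x])
    mass_non_singleton = sum(p for S, p in P.items() if len(S) != 1)
    law = {S: 0.0 for S in P}
    for j, xj in enumerate(x):
        q = P[frozenset({j})] / xj      # prob of the singleton branch
        law[frozenset({j})] += xj * q
        for S, p in P.items():
            if len(S) != 1:
                law[S] += xj * (1 - q) * p / mass_non_singleton
    return law

x = [0.5, 0.3, 0.2]                    # arrival probabilities; sum to 1
P = product_law([xi / 2 for xi in x])
L = induced_law(x)
assert all(abs(L[S] - P[S]) < 1e-9 for S in P)
```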
This guarantees that the probability mass on subsets of size $\neq 1$ is also allocated according to $\myvec{x}/2$. Now, if the algorithm fed the singleton $\{(i,j)\}$ and the OCRS selected it, then the algorithm also takes $\{(i,j)\}$; otherwise the algorithm does not take $\{(i,j)\}$. (In particular, if $|T_i| \neq 1$, the algorithm ignores the decisions of the OCRS.) \subsubsection*{Analyzing the reduction} Observe that on each day the distribution over $T_i$'s is identical to the distribution $P_{\myvec{x}/2}^i(\cdot)$. Since the $T_i$'s are also independent, the sequence of inputs to the OCRS is indeed distributed according to $\myvec{x}/2$. Conditioned on $(i,j)$ being fed (i.e., with probability $x_{i,j}/2$), $P_{\myvec{x}/2}^i(\cdot)$ assigns at least $1/2$ probability to the event that no other element is fed alongside it (this is precisely the reason we divide $\myvec{x}$ by $2$): \[ \Pr[T_i = \{(i,j)\} \mid T_i \ni (i,j)] \geq 1/2. \] Since the OCRS is greedy, for any history on days $1,\dots,i-1$, if it selects $(i,j)$ when observing set $T_i \ni (i,j)$, it would also select $(i,j)$ when observing only this element on day $i$. Furthermore, since the OCRS is only allowed to select one element on day $i$, conditioned on the OCRS selecting $(i,j)$, the future days ($i+1,\dots,n$) proceed independently of whether the algorithm also selected $(i,j)$. Therefore, conditioned on the greedy OCRS selecting any set $S_{\textrm{OCRS}}$, the algorithm selects a subset $T_{\textrm{ALG}} \subseteq S_{\textrm{OCRS}}$ where each element appears with probability at least $1/2$. Finally, to argue that the algorithm obtains at least $1/2$ of the expected value of the set selected by the OCRS, fix the set $S_{\textrm{OCRS}}$ selected by the OCRS, and consider the submodular function $g(\bar{T}) \triangleq f(S_{\textrm{OCRS}} \setminus \bar{T})$.
Setting $\bar{T} \triangleq S_{\textrm{OCRS}} \setminus T_{\textrm{ALG}}$, we have that $f(T_{\textrm{ALG}}) = g(\bar{T})$, and each element appears in $\bar{T}$ with probability at most $1/2$. Thus by Lemma \ref{lem:BFNS}, \begin{gather*} \E[f(T_{\textrm{ALG}})] \geq \frac{1}{2} \E[g(\emptyset)] = \frac{1}{2}\E[f(S_{\textrm{OCRS}})]. \qed \end{gather*} \section{Subadditive Secretary over Downward-Closed Constraints}\label{sec:Secretary} \begin{defn} [{\sc Monotone Subadditive Downward-Closed Secretary}] Consider $n$ items, a monotone subadditive valuation function $f$ from subsets of items to $\mathbb{R}_+$, and an arbitrary downward-closed set system ${\cal F}$ over the items; both $f$ and ${\cal F}$ are adversarially chosen. The algorithm receives as input $n$ (but not ${\cal F}$ or $f$). The items arrive in a uniformly random order. Initialize $W$ as the empty set. When item $i$ arrives, the algorithm observes all feasible subsets of items that have already arrived, and their valuation in $f$. The algorithm then decides (immediately and irrevocably) whether to add $i$ to the set $W$, subject to the constraint that $W$ remains a feasible set in ${\cal F}$. The goal is to maximize $f(W)$. \end{defn} \begin{thm} \label{thm:subadditive-secretary}There is a deterministic algorithm for {\sc Monotone Subadditive Downward-Closed Secretary} that achieves a competitive ratio of $O\left(\log n\cdot\log^{2}r\right)$. \end{thm} \begin{proof} Let $T^{\star}$ be the set chosen by the offline algorithm ($OPT = f(T^{\star})$). By Lemma \ref{lem:dobzinski} there exists a $p_{T^{\star}}$ such that for every $S\subseteq T^{\star}$: \begin{align} \label{eq:f(S)>pt|S|} f(S) &\geq p_{T^{\star}}\left|S\cap T^{\star}\right|; \\ \label{eq:OPT<pt|T|logr} OPT &= f(T^{\star}) = O\left( p_{T^{\star}} \left|T^{\star}\right| \log\left|T^{\star}\right|\right) = O\left(p_{T^{\star}} \left|T^{\star}\right| \log r\right). \end{align} Assume that we know $p_{T^{\star}}$ (discussed later).
We define a new feasibility constraint ${\cal F}'$ as follows: a set $T \subseteq [n]$ is feasible in ${\cal F}'$ iff it is feasible in ${\cal F}$ and for every subset $S \subseteq T$, we have $f(S) \geq p_{T^{\star}}\left|S\right|$. Notice that because we also force the condition on all subsets of $T$, ${\cal F}'$ is downward-closed, and it does not depend on the order of arrival. We run the algorithm for $\{0,1\}$-valued (additive) {\sc Downward-Closed Secretary} (as guaranteed by Theorem \ref{thm:additive_secretary}) with feasibility constraint ${\cal F}'$, where all values are $1$. By \eqref{eq:f(S)>pt|S|}, $T^{\star}$ is feasible in ${\cal F}'$, and by \eqref{eq:OPT<pt|T|logr}, $ p_{T^{\star}} \left|T^{\star}\right| = \Omega\left(\frac{OPT}{\log r}\right)$. Therefore, the additive $\{0,1\}$-valued algorithm returns a set $T^{ALG}$ of size $\left|T^{ALG}\right| = \Omega\left(\frac{OPT}{p_{T^{\star}} \log n \log r}\right)$. Furthermore, $T^{ALG}$ is also feasible in ${\cal F}'$, i.e. \begin{gather}\label{eq:t^ALG} f(T^{ALG}) \geq p_{T^{\star}} \left|T^{ALG}\right| = \Omega\left(\frac{OPT}{\log n \log r}\right). \end{gather} \subsubsection*{Guessing $p_{T^{\star}}$} Finally, we don't actually know $p_{T^{\star}}$, but we can guess it correctly, up to a constant factor, with probability $\Omega(1/\log r)$. \ifFULL We run the classic secretary algorithm over the first $n/2$ items, where we use the value of the singleton $f(\{i\})$ as ``the value of item $i$'': Observe the first $n/4$ items and select none; then take the next item whose value is larger than that of every item observed so far. With constant probability this algorithm selects the item with the largest value, which we denote by $M$. Also, with constant probability the algorithm sees the item with the largest value too early and does not select it. Assume that this is the case.
Since we obtained expected value $\Omega\left(M\right)$ on the first $n/2$ items, we can, without loss of generality, ignore values less than $M/r$. In particular, we know that $p_{T^{\star}} \in [M/r, M]$. Pick $\alpha \in \{M/r,M/(2r),\dots,M/2,M\}$ uniformly at random, and use it instead of $p_{T^{\star}}$ to define ${\cal F}'$. With probability $\Omega(1/\log r)$, $p_{T^{\star}} \in [\alpha, 2\alpha]$, in which case the algorithm returns a set $T^{ALG}$ satisfying \eqref{eq:t^ALG}. \else See full version for details. \fi \end{proof} \ifFULL \else \end{document} \fi \section{Subadditive Prophet}\label{sec:Prophet} \begin{defn} [{\sc Monotone Subadditive Downward-Closed Prophet}] The offline inputs to the problem are: \begin{itemize} \item $n$ sets $U_1, \dots ,U_n$; we denote their union $U \triangleq \bigcup_{i=1}^n U_i$; \item a monotone non-negative subadditive function $f:\{0,1\}^{U} \rightarrow \mathbb{R}_+$; \item $n$ distributions $\mathcal{D}_i$ over the sets $U_i$; and \item a feasibility constraint $\cal{F}$ over $[n]$. \end{itemize} On the $i$-th time period, the algorithm observes an element $X_i \in U_i$ drawn according to $\mathcal{D}_i$, independently from outcomes and actions on past and future days. The algorithm must decide (immediately and irrevocably) whether to add $i$ and $X_i$ to sets $W$ and $X_W$, respectively, subject to the constraint that $W$ remains feasible in $\cal{F}$. The objective is to maximize $f(X_W)$. \end{defn} Let $r$ denote the maximum cardinality of a feasible set $S\in{\cal F}$. \begin{thm} \label{thm:subadditive-prophet}There is a deterministic algorithm for {\sc Monotone Subadditive Downward-Closed Prophet} that achieves a competitive ratio of $O\left(\log n\cdot\log^{2}r\right)$.
\end{thm} The proof of Theorem \ref{thm:subadditive-prophet} consists of three steps: in Subsection \ref{sub:Subadditive-to-XOS} we reduce monotone subadditive valuations over independent items to monotone XOS valuations over independent items, with a loss of $O\left(\log r\right)$, using a lemma of Dobzinski \cite{Dobzinski07-subadditive-vs-XOS}. Then in Subsection \ref{sub:XOS-to-XOS} we use a standard reduction from general XOS valuations to XOS with $\left\{ 0,1\right\} $ marginal contributions, losing another factor of $O\left(\log r\right)$. Finally, in Subsection \ref{sub:XOS-with-01} we use techniques from \cite{Rub16-downward_closed} to give an $O\left(\log n\right)$-competitive algorithm for monotone XOS with $\left\{ 0,1\right\} $ marginal contributions. \subsection{Subadditive to XOS \label{sub:Subadditive-to-XOS}} \begin{defn} [{\sc Monotone XOS Downward-Closed Prophet}] For an arbitrary index set $M$ and items $[n]$, the offline inputs to the problem are: \begin{itemize} \item $n$ sets $U_1, \dots ,U_n$ of {\em valuations vectors} in $\mathbb{R}_+^M$; we denote their union $U \triangleq \bigcup_{i=1}^n U_i$; \item a monotone XOS function $\widehat{f}:\{0,1\}^{U} \rightarrow \mathbb{R}_+$ \[\widehat{f}(S) \triangleq \max_{m \in M} \sum_{\mathbf{u} \in S}u_m \text{ for $S \in \{0,1\}^{U}$};\] \item $n$ distributions $\mathcal{D}_i$ over the sets $U_i$; and \item a feasibility constraint $\cal{F}$ over $[n]$, which is a collection of subsets of $[n]$. \end{itemize} On the $i$-th time period, the algorithm observes a valuations vector $X_i \in U_i$ drawn according to $\mathcal{D}_i$, independently from outcomes and actions on past and future days. The algorithm must decide (immediately and irrevocably) whether to add $i$ and $X_i$ to sets $W$ and $X_W$, respectively, subject to the constraint that $W$ remains feasible in $\cal{F}$. The objective is to maximize $\widehat{f}(X_W)$.
\end{defn} Below (Proposition \ref{prop:XOS-prophet-general}) we give an $O\left(\log n\cdot\log r\right)$-competitive algorithm for {\sc Monotone XOS Downward-Closed Prophet}. By Dobzinski's lemma (Lemma \ref{lem:dobzinski}), this implies an $O\left(\log n\cdot\log^2 r\right)$-competitive algorithm for {\sc Monotone Subadditive Downward-Closed Prophet}. \begin{prop} \label{prop:XOS-prophet-general}There is a deterministic algorithm for {\sc Monotone XOS Downward-Closed Prophet} that achieves a competitive ratio of $O\left(\log n\cdot\log r\right)$.\end{prop} \subsection{XOS to XOS with $\left\{ 0,1\right\} $ coefficients\label{sub:XOS-to-XOS}} Below (Proposition \ref{prop:XOS-prophet-01}), we give an $O\left(\log n\right)$-competitive algorithm for {\sc Monotone XOS Downward-Closed Prophet} in the special case where all the vectors $v \in U$ are in $\{0,1\}^M$. First, let us show why this would imply Proposition \ref{prop:XOS-prophet-general}. \begin{proof} [Proof of Proposition \ref{prop:XOS-prophet-general} from Proposition \ref{prop:XOS-prophet-01}] We recover separately the contributions from ``tail'' events (a single item taking an exceptionally high value) and the ``core'' contribution that is spread over many items. Run the better of the following two algorithms: \paragraph*{Tail } Let $OPT$ denote the expected offline optimum value. Whenever we see a feasible item whose valuations vector $X_{i}$ has value at least $2OPT$, we select it. For item $i$, let $p_{i}=\Pr\left[X_{i}\geq2OPT\right]$. We have \[ OPT\geq2OPT\cdot\Pr\left[\exists i\colon X_{i}\geq2OPT\right]=2OPT\cdot\left(1-\prod\left(1-p_{i}\right)\right). \] Dividing by $OPT$ and rearranging, we get \[ 1/2\leq\prod\left(1-p_{i}\right)\leq e^{-\sum p_{i}}, \] and thus \[ \sum p_{i}\leq\ln2.
\] Therefore the probability that we want to take an item but can't is at most $\ln2$, so this algorithm achieves at least a $\left(1-\ln2\right)$-fraction of the expected contribution from values greater than $2OPT$. \paragraph*{Core } Observe that we can safely ignore values less than $OPT/2r$, as those can contribute a total of at most $OPT/2$. Partition all remaining values into $2+\log r$ intervals $\left[OPT/2r,OPT/r\right],\dots,\left[OPT,2OPT\right]$. The expected contribution from the values in the best of these intervals is an $\Omega\left(1/\log r\right)$-fraction of the expected offline optimum restricted to values at most $2OPT$. Pick the interval with the largest expected contribution, round down all the values in this interval, and run the algorithm guaranteed by Proposition \ref{prop:XOS-prophet-01}. This achieves an $\Omega\left(\frac{1}{\log n\cdot\log r}\right)$-fraction of the expected contribution from values less than or equal to $2OPT$. \end{proof} \subsection{XOS with $\left\{ 0,1\right\} $ coefficients\label{sub:XOS-with-01}} \begin{prop} \label{prop:XOS-prophet-01}When the $X_{i}$'s take values in $\left\{ 0,1\right\} ^{M}$, there is a deterministic algorithm for {\sc Monotone XOS Downward-Closed Prophet} that achieves a competitive ratio of $O\left(\log n\right)$. \end{prop} \subsubsection{A dynamic potential function}\label{sec:dynpotent} At each iteration, the algorithm maintains a target value $\tau$ and a target probability $\pi$, where $\pi$ is the probability (over future realizations) that the current restricted prophet beats $\tau$. We say that an outcome (i.e. a pair of item and valuations vector) is {\em good} if selecting it does not decrease the probability of beating the target value by a factor greater than $n^{2}$, and {\em bad} otherwise. Notice that all the bad outcomes together contribute at most a $\left(1/n\right)$-fraction of the probability of beating $\tau$.
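The good/bad accounting in the last observation can be illustrated with a small standalone computation (toy numbers of our own; each pair is the probability of a hypothetical future outcome together with its value $\pi^{j,y_j}$):

```python
def bad_mass_bound(outcomes, n):
    # outcomes: list of (prob, pi_jy) pairs, one per future outcome (j, y_j).
    # pi decomposes as the probability-weighted sum of the pi_jy values.
    pi = sum(p * q for p, q in outcomes)
    # An outcome is bad if pi_jy < pi / n^2; sum the pi-mass of bad outcomes.
    bad = sum(p * q for p, q in outcomes if q < pi / n**2)
    return pi, bad

# With at most n items, each contributing total outcome probability at most 1,
# every bad outcome contributes < pi/n^2, so the bad mass is < pi/n overall.
outcomes = [(0.2, 0.5), (0.3, 0.4), (0.5, 1e-6)]
pi, bad = bad_mass_bound(outcomes, n=3)
assert bad <= pi / 3
```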
A key ingredient is that $\tau$ is updated dynamically. If the probability of observing a good outcome is too low (less than $1/4$), we deduct $1$ from $\tau$. We show (Lemma \ref{lem:main}) that this increases $\pi$ by a factor of at least $2$. Since $\pi$ decreases by at most an $n^{2}$ factor when we select an item, and increases by a factor of $2$ whenever we deduct $1$ from $\tau$, we can balance $2\log n$ deductions against every item the algorithm selects; this gives the $O\left(\log n\right)$ competitive ratio. So far our algorithm is roughly as follows: set a target value $\tau$; whenever the probability $\pi$ of reaching the target $\tau$ drops below $1/4$, decrease $\tau$; if $\pi>1/4$, sit and wait for a good outcome - one will arrive with probability at least $1/4$ (we actually do this with $\Pr[A]$ instead of $\pi$, where $A$ is a closely related event). There is one more subtlety: what should the algorithm do if no good outcomes arrive? In other words, what if the probability of observing a good outcome is neither very low nor very close to $1$, say $1/2$ or even $1-\frac{1}{\log n}$? On one hand, we can't decrease $\tau$ again, because we are no longer guaranteed a significant increase in $\pi$; on the other hand, after, say, $\Theta\left(\log^{2}n\right)$ iterations, we still have a high probability of reaching an iteration where none of the good outcomes arrive. (If no good outcomes are coming, we don't want the algorithm to wait forever...) Fortunately, there is a simple solution: the algorithm waits for the last item with a good outcome in its support; if, against the odds, no good outcomes have yet been observed, the algorithm ``hallucinates'' that this last item has a good valuations vector, and selects it. In expectation, at most a constant fraction of the items we select are ``hallucinated'', so the competitive ratio is still $O\left(\log n\right)$. \subsubsection{Notation} We let $OPT$ denote the expected (offline) optimum.
$W$ is the set of items selected so far ($W$ for ``Wins''), and $\ell_{W}\triangleq\max\left\{ i\in W\right\} $ is the index of the last selected item. Let ${\cal F}$ denote the family of all feasible subsets of $[n]$. For any $T\subseteq\left[n\right]$, let ${\cal F}_{T}$ denote the family of feasible sets whose intersection with $\left\{ 1,\dots,\max\left( T\right) \right\} $ is exactly $T$. Let $X_{i}=\left(X_{i}^{m}\right)_{m\in M}\in\left\{ 0,1\right\} ^{M}$ denote the random vector drawn for the $i$-th item. We use $z_{i}$ to refer to the observed realization of $X_{i}$. Our algorithm will maintain a subset $M' \subseteq M$. We let \[ V_{M'}\left({\cal F},X_{\left[n\right]}\right)\triangleq\max_{S\in{\cal F}}\max_{m\in M'}\sum_{i\in S}(X_{i})_{m} \] denote the value of the optimum offline solution (note that this is also a random variable). Let $\tau=\tau\left(W\right)$ be the current target value, and let $\pi=\pi\left(\tau,W\right)$ denote the current target probability: \[ \pi\left(\tau,W\right)\triangleq\Pr\left[V_{M'}\left({\cal F}_{W},X_{\left[n\right]}\right)>\tau\mid X_{\left[\ell_{W}\right]}=z_{\left[\ell_{W}\right]}\right]. \] For each $y_{j}\in\supp\left(X_{j}\right)$, we define $\pi^{j,y_{j}}=\pi^{j,y_{j}}\left(\tau,W\right)$ to be the probability of reaching $\tau$, given that: \begin{itemize} \item $z_{j}=y_{j}$, \item $j$ is the next item we select, and \item item $j$ actually contributes $1$ to the offline optimum. \end{itemize} Formally, \[ \pi^{j,y_{j}}\left(\tau,W\right)\triangleq\Pr\left[V_{M'\cap y_{j}}\left({\cal F}_{W+j},X_{\left[n\right]}\right)>\tau\mid X_{\left[\ell_{W}\right]\cup\left[j\right]}=\left(z_{\left[\ell_{W}\right]},y_{j}\right)\right], \] where we slightly abuse notation and also use $y_{j}$ to denote the set of $m\in M$ such that $y_{j}^{m}=1$.
We say that a future outcome $\left(j,y_{j}\right)$ is {\em good} if $\pi^{j,y_{j}}\geq n^{-2}\cdot\pi$ and $j$ is feasible (and otherwise it is {\em bad}), and let $G=\left\{ \mbox{good\,}\left(j,y_{j}\right)\right\} $ denote the set of good future outcomes. Finally, let $A = A\left(\pi,\tau,W\right)$ denote the event that at least one of the good outcomes occurs. \subsubsection{Updated proof plan and the algorithm} The idea is to always maintain a threshold $\tau$ such that the probability that one of the good outcomes occurs is large, i.e. $\Pr[A]$ is at least the constant $\frac14$. The way we do this is by showing in Claim~\ref{claim:Api} that at any time during the execution of the algorithm, conditioned on everything that has happened so far, the probability $\pi$ that the offline algorithm achieves the threshold $\tau$ gives a lower bound on $\Pr[A]$. Hence, whenever $\Pr[A]$ goes below $\frac14$, we decrease the threshold $\tau$, which increases $\pi$ due to Lemma~\ref{lem:main} and, indirectly, increases $\Pr[A]$ by Claim~\ref{claim:Api}. Initialize $\tau\leftarrow OPT/2$, $M'\leftarrow M$, and $W\leftarrow\emptyset$. Lemma~\ref{lem:concentration} uses a concentration bound due to Ledoux to show that in the beginning $\tau = OPT/2$ satisfies $\pi > \frac14$. After each update to $W$, decrease $\tau$ until $\Pr\left[A\right]\geq1/4$, or until $\left|W\right|>\tau$. When $\Pr\left[A\right]\geq1/4$, reveal the values of items until observing a good outcome. When we observe a good outcome $z_{j}$, add $j$ to $W$ and restrict $M'$ to its intersection with $z_{j}$. Since we restrict $M$ to $M'$, we have at any time \[V_{M'}\left({\cal F},X_{W}\right) = |W|.
\] If we reach the last item with good outcomes in its support, and none of the good outcomes realize, add this last item to $W$ and subtract $1$ from $\tau$ \begin{comment} \footnote{Adding an item with value $0$ can clearly only hurt the performance of our algorithm, but it seems to simplify the analysis. } \end{comment} (without modifying $M'$). See also the pseudocode in Algorithm \ref{alg:prophet}. \begin{algorithm} \protect\caption{\label{alg:prophet}Prophet} \begin{enumerate} \item $\tau\leftarrow\frac{OPT}{2}$; $M'\leftarrow M$; $W\leftarrow\emptyset$ \item while $\tau>\left|W\right|$: \begin{enumerate} \item $\pi\leftarrow\Pr\left[V_{M'}\left({\cal F}_{W},X_{\left[n\right]}\right)>\tau\mid X_{\left[\ell_{W}\right]}=z_{\left[\ell_{W}\right]}\right]$ {\color{gray-comment} \# $\pi$ is the probability that, given the history, the offline optimum can still beat $\tau$.} \item $G\leftarrow\left\{ \left(j,y_{j}\right):j>\ell_{W}\,\mbox{AND\,}\pi^{j,y_{j}}\geq n^{-2}\cdot\pi\right\} \cap\left(\bigcup_{S\in{\cal F}_{W}}S\right)$ {\color{gray-comment} \# $G$ is the set of good and feasible outcomes.} \item if $\Pr\left[A\right]\geq1/4$ {\color{gray-comment} \# A good outcome is likely to occur.} \begin{enumerate} \item $j^{*}\leftarrow\min\left\{ j\colon\left(j,z_{j}\right)\in G\right\} $ {\color{gray-comment} \# Wait for a good and feasible outcome.} \item if $j^{*}=\infty$ {\color{gray-comment} \# No good outcomes.} \begin{enumerate} \item $j^{*}\leftarrow\max\left\{ j\colon\left(j,y_{j}\right)\in G\mbox{ for some }y_{j}\right\} $ {\color{gray-comment} \# Select the last potentially good item.} \item $\tau\leftarrow\tau-1$ {\color{gray-comment} \# Adjust the target value to account for selecting an item with value $0$.} \end{enumerate} \item else {\color{gray-comment} \# $j^{*}$ is actually a good item.} \begin{enumerate} \item $M'\leftarrow M'\cap z_{j^{*}}$ \end{enumerate} \item $W\leftarrow W\cup\left\{ j^{*}\right\} $ \end{enumerate} \item else \begin{enumerate} \item
$\tau\leftarrow\tau-1$\label{enu:deduct-from-tau} {\color{gray-comment} \# decrease target value $\tau$ until $\Pr\left[A\right]\geq1/4.$}\end{enumerate} \end{enumerate} \end{enumerate} \end{algorithm} We first claim that $\pi$ gives us a lower bound on $\Pr[A]$ because most of the mass in $\pi$ comes from good outcomes. \begin{claim}\label{claim:Api} At any point during the run of the algorithm, \[ \Pr[A\left(\pi,\tau,W\right)] \geq \left(1- \frac1n \right) \pi(W,\tau). \] \end{claim} \begin{proof} For each $\left(j,y_{j}\right)\notin G$, we have, by definition of $G$, \[ \pi^{j,y_{j}}\left(W,\tau\right)<n^{-2}\cdot\pi\left(W,\tau\right). \] Summing over all $\left(j,y_{j}\right)\notin G$, \begin{flalign*} \sum_{j}\sum_{y_{j}:\left(j,y_{j}\right)\notin G}\Pr\left[y_{j}\right]\cdot\pi^{j,y_{j}}\left(W,\tau\right) & \leq\sum_{j}\sum_{y_{j}:\left(j,y_{j}\right)\notin G}\Pr\left[y_{j}\right]\cdot\left(n^{-2}\cdot\pi\left(W,\tau\right)\right)\\ & \leq\sum_{j}n^{-2}\cdot\pi\left(W,\tau\right)\\ & \leq n^{-1}\cdot\pi\left(W,\tau\right). \end{flalign*} Since $\pi\left(W,\tau\right)=\sum_{j}\sum_{y_{j}}\Pr\left[y_{j}\right]\cdot\pi^{j,y_{j}}\left(W,\tau\right)$ (decomposing over the next item of the optimum solution, as in the proof of Lemma \ref{lem:main}), most of $\pi$ comes from good $\left(j,y_{j}\right)$'s: \begin{equation*} \Pr[A] \geq \sum_{j}\sum_{y_{j}:\left(j,y_{j}\right)\in G}\Pr\left[y_{j}\right]\cdot\pi^{j,y_{j}}\left(W,\tau\right)\geq\left(1-1/n\right)\pi\left(W,\tau\right). \end{equation*} \end{proof} \subsubsection{Concentration for the beginning} \label{sec:concsubaddproph} \begin{thm} {\cite[Theorem 2.4]{Ledoux1997}}\label{thm:ledoux} There exists some constant $K>0$ such that the following holds. Let the $Y_{i}$'s be independent (but not necessarily identical) random variables in some space $S$; let ${\cal C}$ be a countable class of measurable functions $f\colon S\rightarrow\left[0,1\right]$; and let $Z=\sup_{f\in{\cal C}}\sum_{i=1}^{n}f\left(Y_{i}\right)$.
Then, \[ \Pr\left[Z\geq\E\left[Z\right]+t\right]\le\exp\left(-\frac{t}{K}\cdot\log\left(1+\frac{t}{\E\left[Z\right]}\right)\right). \] \end{thm} To make the connection to our setting, let $Y_{i}$ be the vector in $\left[0,1\right]^{{\cal F}\times M}$ whose $\left(S,m\right)$-th coordinate is $X_{i}^{m}$ if $i\in S$, and $0$ otherwise. Let $f_{S,m}\left(Y_{i}\right)\triangleq\left[Y_{i}\right]_{S,m}$, so $\sum_{i=1}^{n}f_{S,m}\left(Y_{i}\right)$ is simply the value of the set $S$ under the $m$-th summation in the XOS representation of the valuation function. Let ${\cal C}\triangleq\left\{ f_{S,m}\right\} _{S\in{\cal F},m\in M}$. The above concentration inequality can now be written as \begin{equation} \Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq OPT+t\right]\le\exp\left(-\frac{t}{K}\cdot\log\left(1+\frac{t}{OPT}\right)\right).\label{eq:V-concentrates} \end{equation} \begin{lem} \label{lem:concentration}Assume $OPT\geq\Omega\left(\log n\right)$. Then, \[ \Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq\frac{OPT}{2}\right]>1/4. \] \end{lem} \begin{proof} We have \begin{align} OPT & =\int_{-OPT}^{\infty}\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq OPT+t\right]dt,\label{eq:OPT-integral} \end{align} which we decompose into integrals over $\left[-OPT,-OPT/2\right]$, $\left[-OPT/2,OPT\right]$, and $\left[OPT,\infty\right)$. The first two integrals can be easily bounded as \[ \int_{-OPT}^{-OPT/2}\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq OPT+t\right]dt\leq\int_{-OPT}^{-OPT/2}1\cdot dt\leq\frac{OPT}{2} \] and \begin{eqnarray*} \int_{-OPT/2}^{OPT}\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq OPT+t\right]dt & \leq & \int_{-OPT/2}^{OPT}\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq OPT/2\right]dt\\ & \leq & \frac{3OPT}{2}\cdot\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)>\frac{OPT}{2}\right].
\end{eqnarray*} For the third integral we use the concentration bound (\ref{eq:V-concentrates}): \begin{eqnarray*} \int_{OPT}^{\infty}\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)\geq OPT+t\right]dt & \leq & \int_{OPT}^{\infty}\exp\left(-\frac{t}{K}\cdot\log\left(1+\frac{t}{OPT}\right)\right)dt\\ & \leq & \int_{OPT}^{\infty}\exp\left(-\frac{t\log2}{K}\right)dt\\ & = & \left[-\frac{K}{\log2}e^{-t\log2/K}\right]_{OPT}^{\infty}=\frac{K}{\log2}\cdot e^{-OPT\log2/K}, \end{eqnarray*} where the second inequality uses $\log\left(1+t/OPT\right)\geq\log2$ for $t\geq OPT$; this is negligible since $OPT=\omega\left(1\right)$. Plugging into (\ref{eq:OPT-integral}), we have: \[ OPT\leq\frac{OPT}{2}+\frac{3OPT}{2}\cdot\Pr\left[V\left({\cal F},X_{\left[n\right]}\right)>\frac{OPT}{2}\right]+o\left(1\right), \] and after rearranging we get \[ \Pr\left[V\left({\cal F},X_{\left[n\right]}\right)>\frac{OPT}{2}\right]\geq1/3-o\left(1\right). \] \end{proof} \subsubsection{Main lemma} \begin{lem} \label{lem:main}At any point during the run of the algorithm, if $\Pr\left[A\right]\leq1/4$, then subtracting $1$ from $\tau$ doubles $\pi$; i.e. \[ \pi\left(W,\tau-1\right)\geq2\pi\left(W,\tau\right). \] \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:main}] Consider the event that the optimum solution (conditioned on the items $W$ we already selected and the realizations $z_{\left[\ell_{W}\right]}$ we have already seen) reaches $\tau$. We can write it as a union of disjoint events, depending on the next item $j>\ell_{W}$ that is part of the optimum solution, and its possible realizations $y_{j}$: \[ \pi\left(W,\tau\right)=\sum_{j}\sum_{y_{j}}\Pr\left[y_{j}\right]\cdot\underbrace{\Pr\left[V_{M'\cap y_{j}}\left({\cal F}_{W\cup\left\{ j\right\} },X_{\left[n\right]}\right)>\tau\mid X_{\left[\ell_{W}\right]\cup\left[j\right]}=\left(z_{\left[\ell_{W}\right]},y_{j}\right)\right]}_{\pi^{j,y_{j}}\left(W,\tau\right)}. \] We break the RHS into the sum over $\left(j,y_{j}\right)$'s that are good and the sum over those that are bad.
Now, the proof of Claim~\ref{claim:Api} gives \begin{equation} \sum_{j}\sum_{y_{j}:\left(j,y_{j}\right)\in G}\Pr\left[y_{j}\right]\cdot\pi^{j,y_{j}}\left(W,\tau\right)\geq\left(1-1/n\right)\pi\left(W,\tau\right).\label{eq:most-pi-from-good} \end{equation} Since $y_{j}\in\left\{ 0,1\right\} ^{M}$, each item can contribute at most $1$ to the offline optimum. Therefore: \[ \underbrace{\Pr\left[V_{M'\cap y_{j}}\left({\cal F}_{W\cup\left\{ j\right\} },X_{\left[n\right]}\right)>\tau\mid X_{\left[\ell_{W}\right]\cup\left[j\right]}=\left(z_{\left[\ell_{W}\right]},y_{j}\right)\right]}_{\pi^{j,y_{j}}=\pi^{j,y_{j}}\left(\tau,W\right)}\leq\pi^{j,0}\left(W,\tau-1\right). \] Plugging into (\ref{eq:most-pi-from-good}), we have \begin{eqnarray} \left(1-1/n\right)\pi\left(W,\tau\right) & \leq & \sum_{j}\sum_{y_{j}:\left(j,y_{j}\right)\in G}\Pr\left[y_{j}\right]\cdot\pi^{j,0}\left(W,\tau-1\right)\nonumber \\ & \leq & \sum_{j}\Pr\left[\left(j,y_{j}\right)\in G\right]\cdot\pi^{j,0}\left(W,\tau-1\right)\nonumber \\ & \leq & \left(\sum_{j}\Pr\left[\left(j,y_{j}\right)\in G\right]\right)\cdot\pi\left(W,\tau-1\right),\label{eq:sum-of-Pr-j-y} \end{eqnarray} where the second inequality follows because $\pi^{j,0}\left(W,\tau-1\right)$ doesn't depend on $y_{j}$, and the third because conditioning on the $j$-th item being $0$ can only decrease the probability of reaching $\tau-1$. Recall that $A$ is the union of all the events $\left(j,y_{j}\right)\in G$. Therefore, \[ \Pr\left[A\right]\geq\sum_{j}\Pr\left[\left(j,y_{j}\right)\in G\right]\left(1-\Pr\left[A\right]\right). \] Plugging in $\Pr\left[A\right]\leq1/4$, we get that $\sum_{j}\Pr\left[\left(j,y_{j}\right)\in G\right]\leq1/3$. Plugging into (\ref{eq:sum-of-Pr-j-y}) and rearranging, we get \[ \pi\left(W,\tau-1\right)\geq\frac{3\left(n-1\right)}{n}\pi\left(W,\tau\right)\geq2\pi\left(W,\tau\right).
\] For $n\geq3$ the right-hand side is at least $2\pi\left(W,\tau\right)$, as claimed. \end{proof} \subsubsection{Putting it all together} \begin{lem} \label{lem:potential}At any point during the run of the algorithm, \[ \tau\geq\frac{OPT}{2}-\left(2\log n+1\right)\cdot\left|W\right|-2 \] \end{lem} \begin{proof} We prove by induction that at any point during the run of the algorithm, \begin{equation} \log\pi\geq-2-\left(2\log n+1\right)\cdot\left|W\right|+\left(\frac{OPT}{2}-\tau\right).\label{eq:induction} \end{equation} After initialization, $\log\pi\geq-2$ by Lemma \ref{lem:concentration}. By definition of $G$, whenever we add an item to $W$, we decrease $\log\pi$ by at most $2\log n$; hence the $2\log n\cdot\left|W\right|$ term. Notice that when the algorithm ``hallucinates'' a $1$, we also decrease $\tau$ by $1$ to correct for the hallucination; at any point during the run of the algorithm, this has happened at most $\left|W\right|$ times. Recall that we may also decrease $\tau$ in the last line of Algorithm \ref{alg:prophet} (in order to increase $\pi$); whenever we do this, $\tau$ decreases by $1$, but $\pi$ doubles (by Lemma \ref{lem:main}), so $\log\pi$ increases by $1$, and Inequality (\ref{eq:induction}) is preserved. Finally, since $\pi$ is a probability, we always maintain $\log\pi\leq0$. \end{proof} We are now ready to complete the proof of Theorem \ref{thm:subadditive-prophet}. \begin{proof} [Proof of Proposition~\ref{prop:XOS-prophet-01}]The algorithm always terminates after at most $O\left(OPT\right)$ decreases to the value of $\tau$. By Lemma \ref{lem:potential}, when the algorithm terminates, we have $\left|W\right|\geq\tau\geq\frac{OPT}{2}-\left(2\log n+1\right)\cdot\left|W\right|-2$, and therefore in particular $\left|W\right|\geq\frac{OPT-4}{4\log n+4}$. Finally, recall that sometimes the algorithm ``hallucinates'' good realizations, i.e. for some items $i\in W$ that we select, $X_{i}=0$.
However, each time we add an item, the probability that we add a zero-value item is at most $3/4$ (by the condition $\Pr\left[A\right]>1/4$). Therefore in expectation the value of the algorithm is at least $\left|W\right|/4$. \end{proof} \appendix \section{Missing Proofs}\label{sec:missingProofs} \noindent \textbf{Lemma~\ref{lem:low-high}}. Consider any non-negative submodular function $f$ and $L,H \in [0,1]$. Let $S^*$ be a set that maximizes $f$, and let $\myvec{x} \in [0,1]^n$ such that for all $i \in [n]$, $L \leq x_i \leq H$. Then, \[F(\myvec{x}) \geq (1-H)(1-L) f(\emptyset) + (1-H)L\cdot f(S^*)\] \begin{proof} Assume, wlog, that $S^* = \{1,\dots,k\}$. By submodularity, each time we add an element, the marginal gain of every remaining element can only decrease. In particular, if we add the elements of $S^*$ in any order, they all have non-negative marginal contribution (since each has a non-negative marginal contribution when added last, by the optimality of $S^*$). Let $\myvec{x}_{\leq i}$ denote the restriction of $\myvec{x}$ to $[i]$. We first show by induction that for every $i \leq k$, $F(\myvec{x}_{\leq i}) \geq (1-L)f(\emptyset) + L\cdot f([i])$. Denote $S_i \triangleq S \cap [i]$. We have that $F(\myvec{x}_{\leq i})$ is at least: \begin{align*} \E_{S \sim \myvec{x}} \Big[f(S_i)\Big] & \geq \E_{S \sim \myvec{x}} \Big[f(S_{i-1}) + f(S_i \cup [i-1]) - f([i-1]) \Big] && \text{(Submodularity)}\\ & = F(\myvec{x}_{\leq i-1}) + \E_{S \sim \myvec{x}} \Big[f(S_i \cup [i-1]) - f([i-1]) \Big] \\ & \geq F(\myvec{x}_{\leq i-1}) + x_i \Big(f([i]) - f([i-1])\Big) && \text{($i \in S$ with probability $x_i$)} \\ & \geq F(\myvec{x}_{\leq i-1}) + L \Big(f([i]) - f([i-1])\Big) && (f([i]) - f([i-1]) \geq 0). \end{align*} Finally by the induction hypothesis, $F(\myvec{x}_{\leq i-1}) \geq (1-L)f(\emptyset) + L\cdot f([i-1])$.
In particular, we now have that \[F(\myvec{x}_{\leq k}) \geq (1-L) f(\emptyset) + L\cdot f(S^*).\] It is left to argue that the rest of the elements do not hurt the value too much. Consider any $S_k \subseteq S^*$, and let $B^* = B^*(S_k) = \{k+1, \dots ,\ell \}$ be the worst set that we could add to $S_k$, i.e. the $B$ that minimizes $f(S_k \cup B)$. Let $B_j \triangleq \{k+1, \dots ,j \} \subseteq B^*$, and let $T_j$ denote the intersection of $B_j$ with a set $T \subseteq B^*$ sampled according to $\myvec{x}$. Now consider two options for adding elements from $B^*$ to $S_k$: \begin{enumerate} \item deterministically, or \item independently at random with probabilities sampled according to $\myvec{x}$. \end{enumerate} Since $f$ is non-negative, we have that even when we add all the bad elements deterministically, \begin{gather}\label{eq:from-positivity} \sum_{j=k+1}^{\ell} \Big(f(S_k \cup B_j) - f(S_k \cup B_{j-1})\Big) = f(S_k \cup B^*) - f(S_k) \geq - f(S_k). \end{gather} When we add the elements at random, we have (by submodularity) that the marginal contribution of each bad element can only increase compared to its contribution in the first case. Therefore, \begin{align*} \E_T \left[f(S_k \cup T) -f(S_k) \right] & = \sum_{j=k+1}^{\ell} \E_T\left[ f(S_k \cup T_j) - f(S_k \cup T_{j-1})\right] \\ & \geq \sum_{j=k+1}^{\ell} x_j \left(f(S_k \cup B_j) - f(S_k \cup B_{j-1})\right) && \text{(Submodularity)}\\ & \geq \sum_{j=k+1}^{\ell} H \left(f(S_k \cup B_j) - f(S_k \cup B_{j-1})\right) && (f(S_k \cup B_j) - f(S_k \cup B_{j-1}) \leq 0) \\ & \geq - H f(S_k) && \text{(Inequality \eqref{eq:from-positivity})}. \end{align*} So far we have $\E_T \left[f(S_k \cup T)\right] \geq (1-H) f(S_k)$. Finally, the marginal contribution of the remaining elements $\{\ell+1,\dots, n\}$ is non-negative by the minimality of $B^*$ (if it were negative, we could get a worse set $B'$). Therefore, for every $S_k$, adding the rest of the elements in expectation retains at least a $(1-H)$ fraction of the value. Combining with the bound on $F(\myvec{x}_{\leq k})$, we get $F(\myvec{x}) \geq (1-H)\big((1-L)f(\emptyset) + L\cdot f(S^*)\big)$, as required.
\end{proof} \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{question}[theorem]{Question} \theoremstyle{definition} \newtheorem{remark}{Remark} \newtheorem{example}{Example} \newcommand{\pr}[1]{\left\langle #1 \right\rangle} \newcommand{\sel}[1]{\mathsf{S}_{#1}} \newcommand{\game}[1]{\mathsf{G}_{#1}} \newcommand{\pl}[1]{\textnormal{\texttt{Player}}\textnormal{\texttt{ #1}}} \newcommand{\win}[2]{\textnormal{\texttt{#1}}\uparrow{\mathsf{G}}_{\mathrm{#2}}}
\newcommand{\Win}[2]{\textnormal{\texttt{#1}}\uparrow{\mathsf{G}}_{#2}} \newcommand{\los}[2]{\textnormal{\texttt{#1}}\not{\!\uparrow}\,{\mathsf{G}}_{\mathrm{#2}}} \newcommand{\Los}[2]{\textnormal{\texttt{#1}}\not{\!\uparrow}\,{\mathsf{G}}_{{#2}}} \title{Bornologies and filters in selection principles on function spaces} \author[L. F. Aurichi]{Leandro F. Aurichi$^1$} \thanks{$^1$ Supported by FAPESP (2017/09252-3)} \address{Instituto de Ci\^encias Matem\'aticas e de Computa\c c\~ao, Universidade de S\~ao Paulo, Caixa Postal 668, S\~ao Carlos, SP, 13560-970, Brazil} \email{[email protected]} \author[R. M. Mezabarba]{Renan M. Mezabarba$^2$} \thanks{$^2$ Supported by CNPq (140427/2016-3)} \address{Instituto de Ci\^encias Matem\'aticas e de Computa\c c\~ao, Universidade de S\~ao Paulo, Caixa Postal 668, S\~ao Carlos, SP, 13560-970, Brazil} \email{[email protected]} \keywords{topological games, selection principles, countable fan tightness, bornology, filters, function spaces} \subjclass[2010]{Primary 54D20; Secondary 54G99, 54A10} \begin{abstract} We extend known results of selection principles in $C_p$-theory to the context of spaces of the form $C_{\mathcal{B}}(X)$, where $\mathcal{B}$ is a bornology on $X$. Particularly, by using the filter approach of Jordan to $C_p$-theory, we show that $\gamma$-productive spaces are productive with a larger class of $\gamma$-like spaces. \end{abstract} \maketitle \section{Introduction} The framework of selection principles, introduced by Scheepers in \cite{Scheep1996}, provides a uniform manner to deal with diagonalization processes that have appeared in several mathematical contexts since the 1920's. Detailed surveys on this subject are provided in \cite{tsabanextravaganza,sakaischeepers}. Here we present a brief introduction, in order to fix notations. Given an infinite set $S$, let $\mathcal{A}$ and $\mathcal{C}$ be families of nonempty subsets of $S$.
We consider the following classic selection principles: \begin{itemize} \item $\mathsf{S}_1(\mathcal{A,C})$: for each sequence $(A_n:n\in\omega)$ of elements of $\mathcal{A}$ there is a sequence $(C_n:n\in\omega)$ such that $C_n\in A_n$ for all $n$ and $\{C_n:n\in\omega\}\in \mathcal{C}$; \item $\mathsf{S}_\mathrm{fin}(\mathcal{A,C})$: for each sequence $(A_n:n\in\omega)$ of elements of $\mathcal{A}$ there is a sequence $(C_n:n\in\omega)$ such that $C_n\in[A_n]^{<\omega}$ for all $n$ and $\bigcup_{n\in\omega}C_n\in\mathcal{C}$. \end{itemize} There are natural infinite games of perfect information associated with these selection principles. In the same setting of the above paragraph, a play of the game $\mathsf{G}_1(\mathcal{A,C})$ is defined as follows: for every inning $n<\omega$, \pl{I} chooses an element $A_n\in \mathcal{A}$, and then \pl{II} picks a $C_n\in A_n$; \pl{II} wins the play if $\{C_n:n\in \omega\}\in\mathcal{C}$. The game $\mathsf{G}_\mathrm{fin}(\mathcal{A,C})$ is defined in a similar way. For $\textnormal{\texttt{J}}\in\{\textnormal{\texttt{I,II}}\}$, we denote the sentence ``\pl{J} has a winning strategy in the game $\mathsf{G}$'' by $\win{J}{}$, while its negation is denoted by $\texttt{J}\,{\not{\!\uparrow}}\,\,\mathsf{G}$. The interest in these games lies in finding winning strategies for some of the players and, in the topological context, asking how the topological properties of a space determine these strategies for particular instances of families $\mathcal{A}$ and $\mathcal{C}$. In the next diagram, the straight arrows summarize the general implications between these principles.
\footnotesize \begin{equation}\begin{tikzpicture} \matrix(m)[matrix of nodes, row sep=6mm, column sep=5mm, jump/.style={text width=10mm,anchor=center}, txt/.style={anchor=center},] { $\win{II}{1}(\mathcal{A,C})$&$\los{I}{1}(\mathcal{A,C})$&$\mathsf{S}_1(\mathcal{A,C})$\\ $\win{II}{fin}(\mathcal{A,C})$&$\los{I}{fin}(\mathcal{A,C})$&$\mathsf{S}_\mathrm{fin}(\mathcal{A,C})$\\ }; \draw[double, ->] (m-1-1) -- (m-1-2); \draw[double, ->] (m-1-2) -- (m-1-3); \draw[double, ->] (m-2-1) -- (m-2-2); \draw[double, ->] (m-2-2) -- (m-2-3); \path[dashed, ->] (m-1-3) edge [double,bend right=30] (m-1-2); \path[dashed, ->] (m-2-3) edge [double,bend left=30] (m-2-2); \draw[double, ->] (m-1-1) -- (m-2-1); \draw[double, ->] (m-1-2) -- (m-2-2); \draw[double, ->] (m-1-3) -- (m-2-3); \end{tikzpicture}\end{equation}\normalsize The dashed arrows above mark implications that are not necessarily true in general. An important situation, in which these converses hold, occurs when one takes $\mathcal{A}=\mathcal{C}=\mathcal{O}(X)$, where $\mathcal{O}(X)$ denotes the family of all open coverings of a topological space $X$. \begin{theorem}[Hurewicz, 1925]\label{Hur} For a topological space $X$, $\mathsf{S}_\mathrm{fin}(\mathcal{O}(X),\mathcal{O}(X))$ is equivalent to $\los{I}{fin}(\mathcal{O}(X),\mathcal{O}(X))$.\end{theorem} \begin{theorem}[Pawlikowski, 1994]\label{Paw} For a topological space $X$, $\mathsf{S}_1(\mathcal{O}(X),\mathcal{O}(X))$ is equivalent to $\los{I}{1}(\mathcal{O}(X),\mathcal{O}(X))$.\end{theorem} We introduce in Section~\ref{newsel} a variation of the principles defined above, allowing us to treat several selection principles simultaneously, based on the ideas presented in \cite{garcia95, ABD}. We shall use these principles in connection with function spaces.
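To fix terminology, we record the classical names of the covering instances just mentioned.
\begin{remark}
For $\mathcal{A}=\mathcal{C}=\mathcal{O}(X)$, the principles above are classical covering properties: $\mathsf{S}_\mathrm{fin}(\mathcal{O}(X),\mathcal{O}(X))$ is the Menger property, while $\mathsf{S}_1(\mathcal{O}(X),\mathcal{O}(X))$ is the Rothberger property. Thus Theorems~\ref{Hur} and~\ref{Paw} say that, for these two properties, the selection principle is equivalent to the non-existence of a winning strategy for \pl{I} in the associated game.
\end{remark}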
In this context, many dualities are known between selective local properties of $C_p(X)$ and selective covering properties of $X$, where $X$ is a Tychonoff space and $C_p(X)$ denotes the space of continuous real-valued functions on $X$ with the topology of pointwise convergence. Particularly, we are interested in the dualities summarized in the next diagram, where $\Omega$ stands for the collection of $\omega$-coverings of $X$ -- those open coverings $\mathcal{U}$ such that each finite subset $F$ of $X$ is contained in some element of $\mathcal{U}$ --, $\Omega_{\mathrm{o}}$ denotes the family $\{A\subset C_p(X):\mathrm{o}\in\overline{A}\}$ and $\mathrm{o}$ is the constant zero function. \small \[\begin{tikzpicture} \matrix(m)[matrix of nodes, row sep=7mm, column sep=8mm, jump/.style={text width=10mm,anchor=center}, txt/.style={anchor=center},] { $\win{II}{1}({\Omega,\Omega})$&$\win{II}{1}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$&$\win{II}{fin}({\Omega,\Omega})$&$\win{II}{fin}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$\\ $\los{I}{1}(\Omega,\Omega)$&$\los{I}{1}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$&$\los{I}{fin}(\Omega,\Omega)$&$\los{I}{fin}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$\\ $\mathsf{S}_1(\Omega,\Omega)$&$\mathsf{S}_1(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$&$\mathsf{S}_\mathrm{fin}(\Omega,\Omega)$&$\mathsf{S}_\mathrm{fin}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$\\ }; \draw[double, <->] (m-1-1) to node[auto]{\cite{Scheep2015}} (m-1-2); \draw[double, <->] (m-2-1) to node[auto]{\cite{Scheep1997}} (m-2-2); \draw[double, <->] (m-3-1) to node[auto]{\cite{Sakai1988}} (m-3-2); \draw[double, ->] (m-1-1) -- (m-2-1); \draw[double, ->] (m-1-2) -- (m-2-2); \draw[double, <->] (m-2-1) to node[auto]{\cite{Scheep1997}} (m-3-1); \draw[double, <->] (m-2-2) -- (m-3-2); \draw[double, <->] (m-1-3) to node[auto]{\cite{Scheep2015}} (m-1-4); \draw[double, <->] (m-2-3) to node[auto]{\cite{Scheep1997}} (m-2-4); \draw[double, <->] (m-3-3) to
node[auto]{\cite{Arhanbook} and \cite{Scheep1996b}} (m-3-4); \draw[double, ->] (m-1-3) -- (m-2-3); \draw[double, ->] (m-1-4) -- (m-2-4); \draw[double, <->] (m-2-3) to node[auto]{\cite{Scheep1997}} (m-3-3); \draw[double, <->] (m-2-4) -- (m-3-4); \draw[double, ->] (m-1-2) -- (m-1-3); \draw[double, ->] (m-2-2) -- (m-2-3); \draw[double, ->] (m-3-2) -- (m-3-3); \end{tikzpicture} \] \normalsize The work of Caserta et al.~\cite{caserta2012}, generalizing the bottom horizontal equivalences of the diagram above, motivated us to investigate the other equivalences in a broader context, for spaces of the form $C_\mathcal{B}(X)$, where $\mathcal{B}$ is a bornology with a compact base. In~\cite{AurMez} we presented generalizations for the horizontal equivalences, but at that time we were not able to solve the vertical ones, originally proved by Scheepers~\cite{Scheep1997} in the context of $C_p$-theory. In Section~\ref{born} we settle these remaining equivalences, by using the upper semi-finite topology \cite{oldguyMichael} on the family $\mathcal{B}$, and we analyze its consequences according to the framework presented in Section~\ref{newsel}. Back to the $C_p$-theory context, Jordan~\cite{jordan2007} obtains general dualities by using filters, and Sections~\ref{filters} and~\ref{productive} are dedicated to extending his results to spaces of the form $C_{\mathcal{B}}(X)$. Particularly, in the last section we show that the class of $\gamma$-productive spaces is \emph{productive} with a class (formally) larger than the class of $\gamma$-spaces -- both definitions are recalled there. \section{In between $\mathsf{S}_1$/$\mathsf{G}_1$ and $\mathsf{S}_\mathrm{fin}$/$\mathsf{G}_\mathrm{fin}$}\label{newsel} In this section we fix an infinite set $S$ and families $\mathcal{A}$ and $\mathcal{C}$ of subsets of $S$. We shall denote by $[2,\aleph_0]$ (resp. $[2,\aleph_0)$) the set of all cardinals $\alpha$ such that $2\leq\alpha\leq\aleph_0$ (resp.
$2\leq \alpha<\aleph_0)$ and for $n\geq 1$, let $\underline{n}\colon \omega\to[2,\aleph_0)$ be the constant function given by $m\mapsto n+1$ for all $m\in\omega$. For a function $\varphi\colon\omega\to[2,\aleph_0]$, we consider the following selection principle: \begin{itemize} \item $\sel{\varphi}(\mathcal{A,C})$: for each sequence $(A_n:n\in\omega)$ of elements of $\mathcal{A}$ there is a sequence $(C_n:n\in\omega)$ such that $C_n\in[A_n]^{<\varphi(n)}$ for all $n$ and $\bigcup_{n\in\omega}C_n\in\mathcal{C}$. \end{itemize} Note that for $\varphi\equiv \aleph_0$, one gets the definition of the selection principle $\mathsf{S}_\mathrm{fin}$. On the other hand, $\mathsf{S}_1(\mathcal{A,C})\Rightarrow\sel{\underline{1}}(\mathcal{A,C})$, and the former is formally stronger. However, since $\sel{\underline{1}}(\mathcal{A,C})=\mathsf{S}_1(\mathcal{A,C})$ holds for all pairs $(\mathcal{A,C})$ considered throughout this work, we shall not worry about this, and for simplicity we assume this equality as an additional hypothesis for the general case. Hence, for each $n\geq 1$ it makes sense to denote the selection principle $\sel{\underline{n}}(\mathcal{A,C})$ as $\sel{n}(\mathcal{A,C})$. The original prototype of the above selection principle was defined in \cite{garcia95}, where the authors were concerned with variations of tightness, taking \[\mathcal{A}=\mathcal{C}=\Omega_x:=\{A\subset X:x\in\overline{A}\}.\] In~\cite{ABD}, the natural adaptation of the principle $\sel{\varphi}$ to the context of games was analyzed for the same pair $(\Omega_x,\Omega_x)$. This motivates our next definition. For $\mathcal{A},\mathcal{C}$ and $\varphi$ as before, let $\game{\varphi}(\mathcal{A,C})$ be the infinite game of perfect information between $\pl{I}$ and $\pl{II}$, defined as follows: \begin{itemize} \item for every inning $n<\omega$, \pl{I} chooses an element $A_n\in \mathcal{A}$, and then \pl{II} picks a $C_n\in [A_n]^{<\varphi(n)}$; \item \pl{II} wins if $\bigcup_{n\in\omega}C_n\in\mathcal{C}$.
\end{itemize} Again, the constant function $\varphi\equiv \aleph_0$ yields the game $\mathsf{G}_\mathrm{fin}(\mathcal{A,C})$, and since we assume $\mathsf{G}_1(\mathcal{A,C})=\game{\underline{1}}(\mathcal{A,C})$, we may denote the game $\game{\underline{n}}(\mathcal{A,C})$ as $\game{n}(\mathcal{A,C})$. The general relationship between these principles is stated in the following \begin{proposition} Let $\varphi$ and $\psi$ be functions of the form $\omega\to[2,\aleph_0]$ such that $\psi\leq \varphi$. Then \[\label{diagramageral} \begin{tikzpicture} \matrix(m)[matrix of nodes, row sep=7mm, column sep=6mm, jump/.style={text width=10mm,anchor=center}, txt/.style={anchor=center},] { $\Win{II}{\psi}(\mathcal{A,C})$&$\Los{I}{\psi}(\mathcal{A,C})$&$\sel{\psi}(\mathcal{A,C})$\\ $\Win{II}{\varphi}(\mathcal{A,C})$&$\Los{I}{\varphi}(\mathcal{A,C})$&$\sel{\varphi}(\mathcal{A,C})$\\ }; \draw[double, ->] (m-1-1) -- (m-1-2); \draw[double, ->] (m-1-2) -- (m-1-3); \draw[double, ->] (m-2-1) -- (m-2-2); \draw[double, ->] (m-2-2) -- (m-2-3); \draw[double, ->] (m-1-1) -- (m-2-1); \draw[double, ->] (m-1-2) -- (m-2-2); \draw[double, ->] (m-1-3) -- (m-2-3); \end{tikzpicture} \] \end{proposition} Particularly, note that for $\psi=\underline{1}$ and $\varphi\equiv\aleph_0$, the above diagram yields the first one presented in the Introduction. Also, for $\mathcal{A}=\mathcal{C}=\mathcal{O}(X)$, one has the following \begin{theorem} [García-Ferreira and Tamariz-Mascarúa~\cite{garcia95}]\label{minicurso} $\sel{f}(\mathcal{O}(X),\mathcal{O}(X))$ and $\mathsf{S}_1(\mathcal{O}(X),\mathcal{O}(X))$ are equivalent for any space $X$ and any function $f\colon\omega\to[2,\aleph_0)$.
\end{theorem} Thus, in this context, all of the following statements are equivalent: \begin{enumerate}[(i)]\begin{multicols}{2} \item$\los{I}{1}(\mathcal{O,O})$ \item $\Los{I}{f}(\mathcal{O,O})$ \columnbreak \item $\sel{f}(\mathcal{O,O})$ \item $\mathsf{S}_1(\mathcal{O,O})$,\end{multicols} \end{enumerate} because $\mathsf{S}_1(\mathcal{O,O})\Rightarrow \los{I}{1}(\mathcal{O,O})$ by Theorem \ref{Paw}. \begin{remark} The natural question then is whether the games $\game{f}(\mathcal{O}(X),\mathcal{O}(X))$ and $\mathsf{G}_1(\mathcal{O}(X),\mathcal{O}(X))$ are equivalent or not. Nathaniel Hiers, in a joint work with Logan Crone, Lior Fishman, and Stephen Jackson, recently\footnote{At the Conference Frontiers of Selection Principles, which took place in Warsaw during the last two weeks of August, 2017.} presented an affirmative answer concerning the game $\game{2}$, for any Hausdorff space $X$. Although their solution can possibly be extended to any function $f\colon\omega\to[2,\aleph_0)$, we mention that in this general case, an affirmative answer can also be obtained when $X$ is a T$_1$ second countable space or a Hausdorff space with G$_\delta$-points. \end{remark} However, the above equivalences do not hold in the tightness context. Denoting by $\underline{\textnormal{Id}}\colon\omega\to[2,\aleph_0)$ the function given by $\underline{\textnormal{Id}}(n)=n+2$ for all $n\in\omega$, one has the following \begin{theorem} [Garc\'ia-Ferreira and Tamariz-Mascar\'ua, \cite{garcia95}]\label{seleq} Let $Y$ be a topological space, $y\in Y$ and let $f\colon\omega\to[2,\aleph_0)$ be a function. \begin{enumerate} \item If $f$ is bounded, then $\sel{f}(\Omega_y,\Omega_y)$ is equivalent to $\mathsf{S}_1(\Omega_y,\Omega_y)$. \item If $f$ is unbounded, then $\sel{f}(\Omega_y,\Omega_y)$ is equivalent to $\sel{\underline{\textnormal{Id}}}(\Omega_y,\Omega_y)$.
\end{enumerate} \end{theorem} Examples 3.7 and 3.8 in \cite{garcia95} show that in general the implications \[\mathsf{S}_1(\Omega_y,\Omega_y)\stackrel{\#}{\Longrightarrow}\sel{\underline{\textnormal{Id}}}(\Omega_y,\Omega_y)\Rightarrow\mathsf{S}_\mathrm{fin}(\Omega_y,\Omega_y)\] are not reversible. Still, the authors also show that for spaces of the form $C_p(X)$, where $X$ is a Tychonoff space, the converse of $(\#)$ holds. We prove this in the next section in a more general context. We finish this section with the counterpart of Theorem \ref{seleq} for games, which will be useful later. \begin{theorem} [Aurichi, Bella and Dias~\cite{ABD}]\label{abdt} Let $Y$ be a topological space, $y\in Y$ and let $f\colon\omega\to[2,\aleph_0)$ be a function. \begin{enumerate} \item If $f$ is bounded, then the games $\game{f}(\Omega_y,\Omega_y)$ and $\game{{k-1}}(\Omega_y,\Omega_y)$ are equivalent, where $k=\limsup_{n\in\omega} f(n)$. \item If $f$ is unbounded, then $\game{f}(\Omega_y,\Omega_y)$ and $\game{\underline{\textnormal{Id}}}(\Omega_y,\Omega_y)$ are equivalent. \end{enumerate} \end{theorem} \section{Bornologies as hyperspaces}\label{born} We recall the basic definitions from \cite{AurMez}. A \emph{bornology} $\mathcal{B}$ on a topological space $X$ is an ideal of subsets of $X$ that covers the space. A subset $\mathcal{B}'$ of $\mathcal{B}$ is called a \emph{compact base} for the bornology $\mathcal{B}$ if $\mathcal{B}'$ is cofinal in $\mathcal{B}$ with respect to inclusion and all its elements are compact subspaces of $X$. For a topological space $X$ and a bornology $\mathcal{B}$ on $X$, the \emph{topology of uniform convergence on} $\mathcal{B}$, denoted by $\mathcal{T}_{\mathcal{B}}$, is the topology on $C(X)$ having as a neighborhood base at each $f\in C(X)$ the sets of the form \[\langle B,\varepsilon\rangle[f]:=\{g\in C(X):\forall x\in B(|f(x)-g(x)|<\varepsilon)\},\] for $B\in\mathcal{B}$ and $\varepsilon>0$.
By $C_{\mathcal{B}}(X)$ we mean the space $(C(X),\mathcal{T}_{\mathcal{B}})$. It can be shown that $\mathcal{T}_{\mathcal{B}}$ is obtained from a separating uniformity over $C(X)$, from which it follows that $C_{\mathcal{B}}(X)$ is a Tychonoff space (see McCoy and Ntantu \cite{McCoybook}). It is also worth mentioning that $C_{\mathcal{B}}(X)$ is a homogeneous space, so there is no loss of generality in fixing an appropriate point of $C_{\mathcal{B}}(X)$ in order to analyze its closure properties -- in this case, we fix the zero function $\mathrm{o}\colon X\to\mathbb{R}$. A collection $\mathcal{C}$ of open sets of $X$ is a $\mathcal{B}$-covering for $X$ if for every $B\in\mathcal{B}$ there is a $C\in\mathcal{C}$ such that $B\subset C$. Following the notation of Caserta \emph{et al.}~\cite{caserta2012}, we denote by $\mathcal{O}_{\mathcal{B}}$ the collection of all open $\mathcal{B}$-coverings for $X$. When $\mathcal{U}\in\mathcal{O}_{\mathcal{B}}$ is such that $X\not\in\mathcal{U}$, the $\mathcal{B}$-covering $\mathcal{U}$ is said to be nontrivial. An important fact about a nontrivial $\mathcal{B}$-covering is that any cofinite subset of it is also a $\mathcal{B}$-covering of $X$. The main examples of bornologies with a compact base on a topological space $X$ are the bornologies $\mathcal{F}=[X]^{<\aleph_0}$ and $\mathsf{K}=\{A\subset X:\exists K\subset X$ compact and $A\subset K\}$ -- if $X$ is a Hausdorff space, then $\mathsf{K}=\{A\subset X:\overline{A}$ is compact$\}$. For $\mathcal{B}=\mathcal{F}$, one has $C_{\mathcal{F}}(X)=C_p(X)$ and the $\mathcal{F}$-coverings turn out to be the $\omega$-coverings of $X$. Also, if $X$ is Hausdorff, it follows that $C_{\mathsf{K}}(X)=C_{k}(X)$, where $C_k(X)$ denotes the set $C(X)$ with the compact-open topology. One readily sees that $\mathcal{O}_{\mathsf{K}}=\mathcal{K}$, where $\mathcal{K}$ denotes the set of the so-called $K$-coverings of $X$.
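The following simple example may help to contrast these notions of covering.
\begin{example}
Let $X=\mathbb{R}$. The family $\{(-n,n):n\geq 1\}$ is a nontrivial $\mathsf{K}$-covering (hence also an $\mathcal{F}$-covering), since every compact subset of $\mathbb{R}$ is bounded. On the other hand, $\{(n,n+2):n\in\mathbb{Z}\}$ is an open covering of $\mathbb{R}$ that is not even an $\mathcal{F}$-covering: no single interval of length $2$ contains the finite set $\{0,3\}$.
\end{example}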
Now, recall we want to generalize the following theorem to $\mathcal{B}$-coverings. \begin{theorem}[Scheepers~\cite{Scheep1997}]\label{original} Let $X$ be a Tychonoff space. \begin{enumerate} \item $\mathsf{S}_\mathrm{fin}(\Omega,\Omega)$ is equivalent to $\los{I}{fin}(\Omega,\Omega)$. \item $\mathsf{S}_1(\Omega,\Omega)$ is equivalent to $\los{I}{1}(\Omega,\Omega)$. \end{enumerate} \end{theorem} Although the requirement of a compact base is necessary to settle the dualities between local properties of $C_\mathcal{B}(X)$ and covering properties of $X$, the generalization of the above theorem to $\mathcal{B}$-coverings does not need any requirement on the bornology $\mathcal{B}$: in fact, it holds for an arbitrary family $\mathcal{B}$ of subsets of $X$. \begin{remark}[Scheepers' key idea]\label{key} It may be enlightening to review Scheepers' original proof. The key idea in his arguments for proving Theorem~\ref{original} consists in finding an appropriate hyperspace $Y=Y(X)$ such that $\sel{\bullet}(\Omega,\Omega)$ in $X$ translates as $\sel{\bullet}(\mathcal{O}(Y),\mathcal{O}(Y))$ in $Y$. This is done in such a way that he can carry strategies and plays back and forth between the game $\game{\bullet}(\Omega,\Omega)$ in $X$ and the game $\game{\bullet}(\mathcal{O}(Y),\mathcal{O}(Y))$. This allows him to reduce the problem to a scenario where Theorems~\ref{Hur} and~\ref{Paw} are available.\end{remark} The difficulty in following the above sketch when trying to generalize it to $\mathcal{B}$-coverings consists in finding an appropriate hyperspace $Y(X)$. Scheepers originally used $Y(X)=\sum_{n\in\omega}X^n$, but we were not able to relate this construction to the bornology $[X]^{<\aleph_0}$. The way we found to solve this problem was to consider $Y(X)$ as the bornology itself, with an appropriate topology.
More generally, given a family $\mathcal{B}$ of nonempty subsets of a topological space $X$, we consider the topology on $\mathcal{B}$ whose basic open neighborhoods are sets of the form \[\langle U\rangle :=\{B\in\mathcal{B}:B\subset U\},\] for $U\subset X$ open. This type of hyperspace has already been studied in the literature\footnote{We would like to thank Valentin Gutev for pointing this out.}: in~\cite{oldguyMichael}, Michael considers over $\mathcal{A}(X):=\{A\subset X:A\ne\emptyset\}$ the topology generated by sets of the form \begin{equation} U^{+}:=\{A\in \mathcal{A}(X):A\subset U\},\end{equation} with $U$ ranging over the open sets of $X$, and he calls it the \emph{upper semi-finite topology} on $\mathcal{A}(X)$; by restricting this construction to the family of all nonempty closed subsets of $X$, one obtains the so-called \emph{upper Vietoris} topology~\cite{Hola}. Since the topology on $\mathcal{B}$ generated by the family $\{\langle U\rangle:U\subset X$ is open$\}$ is the topology of $\mathcal{B}$ as a subspace of $\mathcal{A}(X)$, we shall write ${\mathcal{B}^+}$ to denote the family $\mathcal{B}$ endowed with this topology. The main problem with the hyperspace ${\mathcal{B}^+}$ concerns its poor separation properties: one readily sees that if there are $A,B\in\mathcal{B}$ such that $A\subset B$, then they cannot be separated as points of ${\mathcal{B}^+}$, showing that ${\mathcal{B}^+}$ is not T$_1$. However, this lack of separation properties will be harmless in our context. \begin{lemma}\label{lemma1} Let $X$ be a topological space and let $\mathcal{B}$ be a family of subsets of $X$. \begin{enumerate} \item If $\mathcal{U}$ is a $\mathcal{B}$-covering for $X$, then $\langle \mathcal{U}\rangle:=\{\langle U\rangle:U\in\mathcal{U}\}$ is an open covering for ${\mathcal{B}^+}$.
\item If $\mathcal{W}$ is an open covering for ${\mathcal{B}^+}$ consisting of basic open sets, then the family $\widetilde{\mathcal{W}}:=\{U:\langle U\rangle\in \mathcal{W}\}$ is a $\mathcal{B}$-covering for $X$. \end{enumerate} Now, let $\varphi\colon\omega\to[2,\aleph_0]$ be a function. \begin{enumerate} \setcounter{enumi}{2} \item $\sel{\varphi}(\mathcal{O_B,O_B})$ holds in $X$ if and only if $\sel{\varphi}(\mathcal{O}({\mathcal{B}^+}),\mathcal{O}({\mathcal{B}^+}))$ holds. \item The games $\game{\varphi}(\mathcal{O_B,O_B})$ in $X$ and $\game{\varphi}(\mathcal{O}({\mathcal{B}^+}),\mathcal{O}({\mathcal{B}^+}))$ are equivalent. \end{enumerate} \end{lemma} \begin{proof} The items $(1)$ and $(2)$ follow from the definition of ${\mathcal{B}^+}$. The other items hold because one can replace arbitrary open coverings in ${\mathcal{B}^+}$ with open coverings consisting of basic open sets, which enables one to use the previous items. \end{proof} \begin{theorem}\label{neworiginal} Let $X$ be a topological space and let $\mathcal{B}$ be a family of subsets of $X$. \begin{enumerate} \item If $\mathsf{S}_1(\mathcal{O_B,O_B})$ holds, then $\Los{I}{1}(\mathcal{O_B,O_B})$ also holds. \item If $\mathsf{S}_\mathrm{fin}(\mathcal{O_B,O_B})$ holds, then $\los{I}{fin}(\mathcal{O_B,O_B})$ also holds. \end{enumerate} \end{theorem} \begin{proof} Repeat the steps in Remark~\ref{key} with $Y(X)={\mathcal{B}^+}$. \end{proof} \begin{corollary}\label{easy} Let $X$ be a topological space and let $\mathcal{B}$ be a family of subsets of $X$. For a function $f\colon \omega\to[2,\aleph_0)$, the following are equivalent: \begin{enumerate} \item $\los{I}{1}(\mathcal{O_B,O_B})$; \item $\Los{I}{f}(\mathcal{O_B,O_B})$; \item $\sel{f}(\mathcal{O_B,O_B})$; \item $\mathsf{S}_1(\mathcal{O_B,O_B})$. \end{enumerate} \end{corollary} \begin{proof} These equivalences hold for the pair $(\mathcal{O}({\mathcal{B}^+}),\mathcal{O}({\mathcal{B}^+}))$. Apply Lemma \ref{lemma1} to finish.
\end{proof} Replacing $\mathcal{B}$ with $[X]^{<\aleph_0}$ in the above corollary results in a strengthening of Theorem~\ref{original}, while taking $\mathcal{B}$ as the family of all compact subsets of $X$ yields new results about $K$-coverings. Well, \emph{almost new} results, as we explain below. \begin{remark} Following the announcement of this work, Boaz Tsaban brought to our attention that a result similar to Theorem~\ref{neworiginal} also appears in the (thus far, unpublished) MSc thesis of his student Nadav Samet~\cite{Nadav}. Instead of considering a topology over a family of subsets of $X$, they take a family $\mathbb{P}$ of filters of open sets and observe that sets of the form $O_U:=\{p\in\mathbb{P}:U\in p\}$ define a base for a topology over $\mathbb{P}$ when $U$ ranges over the open sets of $X$. \end{remark} In connection with spaces of the form $C_{\mathcal{B}}(X)$, we first state a generalization of some of our results in \cite{AurMez}. \begin{proposition}\label{prop1} Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology on $X$ with a compact base. Consider a function $\varphi\colon \omega\to[2,\aleph_0]$. \begin{enumerate} \item $\sel{\varphi}(\mathcal{O_B,O_B})$ holds in $X$ if and only if $\sel{\varphi}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$ holds in $C_{\mathcal{B}}(X)$. \item If $\varphi$ is non-decreasing, then the games $\game{\varphi}(\mathcal{O_B,O_B})$ in $X$ and $\game{\varphi}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$ in $C_\mathcal{B}(X)$ are equivalent. \end{enumerate} \end{proposition} The proof is essentially an adaptation of the arguments presented in \cite{AurMez}, and follows from appropriate applications of the following lemma, adapted from \cite{caserta2012}. \begin{lemma} Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology with a compact base on $X$.
\begin{enumerate} \item If $\mathcal{U}$ is a collection of open sets of $X$ such that $X\not\in\mathcal{U}$, then $\mathcal{U}\in\mathcal{O_B}$ if and only if $\mathcal{A}(\mathcal{U})=\{f\in C_{\mathcal{B}}(X):\exists U\in\mathcal{U}\left(f\upharpoonright (X\setminus U)\equiv 1\right)\}\in\Omega_{\mathrm{o}}.$ \item Let $A\subset C_{\mathcal{B}}(X)$, $n\in\omega$ and set $\mathcal{U}_n(A)=\left\{f^{-1}\left[\left(-\frac{1}{n+1},\frac{1}{n+1}\right)\right]:f\in A\right\}$. If $\mathrm{o}\in\overline{A}$, then $\mathcal{U}_n(A)\in\mathcal{O_B}$. \item If $(A_n)_{n\in\omega}$ is a sequence of finite subsets of $C_{\mathcal{B}}(X)$ such that $\bigcup_{n\in\omega}\mathcal{U}_n(A_n)$ is a nontrivial $\mathcal{B}$-covering, then $\bigcup_{n\in\omega}A_n\in\Omega_{\mathrm{o}}$. \item If $(A_n)_{n\in\omega}$ is a sequence of finite subsets of $C_{\mathcal{B}}(X)$ such that $\bigcup_{n\in\omega}A_n\in\Omega_{\mathrm{o}}$ and for each $n\in\omega$ and each $g\in A_n$ there is a proper open set $U_g\subset X$ such that $g\upharpoonright(X\setminus U_g)\equiv 1$, then $\bigcup_{n\in\omega}\{U_g:g\in A_n\}\in \mathcal{O_B}$. \end{enumerate} \end{lemma} \begin{proof}\text{ } (1)$\quad$If $\mathcal{U}\in\mathcal{O}_\mathcal{B}$ and $\langle B,\varepsilon\rangle[\mathrm{o}]$ is a neighborhood of $\mathrm{o}$, then we obtain a function ${f\in \mathcal{A}(\mathcal{U})\cap \langle B,\varepsilon\rangle[\mathrm{o}]}$, because $\overline{B}$ is compact and $X$ is a Tychonoff space (see \cite[Theorem 3.1.7]{Engelking})\footnote{Particularly, everything still works if $X$ is a normal space and $\mathcal{B}$ is a bornology with a closed base.}. Conversely, for a $B\in\mathcal{B}$ we take an $f\in \mathcal{A}(\mathcal{U})\cap \langle B,1\rangle[\mathrm{o}]$, from which we obtain an open set $U\in\mathcal{U}$ such that $B\subset U$. 
(2)$\quad$It follows because for a function $f\in C_{\mathcal{B}}(X)$, $f\in \langle B,\frac{1}{n+1}\rangle[\mathrm{o}]$ if and only if $B\subset f^{-1}\left[\left(-\frac{1}{n+1},\frac{1}{n+1}\right)\right]$. (3)$\quad$In addition to the previous observation, we use the fact that if $\mathcal{U}\in\mathcal{O}_\mathcal{B}$ is nontrivial, then $\mathcal{U}\setminus F\in\mathcal{O}_\mathcal{B}$ for any finite subset $F\subset\mathcal{U}$. (4)$\quad$For a $B\in\mathcal{B}$, we take a $g\in\left(\bigcup_{n\in\omega} A_n\right)\cap \langle B,1\rangle[\mathrm{o}]$, from which it follows that $B\subset U_g$, because $g\upharpoonright (X\setminus U_g)\equiv 1$.\qedhere \end{proof} In particular, the monotonicity hypothesis in Proposition \ref{prop1} can be dropped if the function $\varphi$ is of the form $\omega\to[2,\aleph_0)$. In fact, this follows from Theorem \ref{abdt} and from its counterpart for $\mathcal{B}$-coverings, which we state below. \begin{proposition} Let $X$ be a topological space with a bornology $\mathcal{B}$, and consider a function $f\colon\omega\to[2,\aleph_0)$. \begin{enumerate} \item If $f$ is bounded, then the games $\game{f}(\mathcal{O_B,O_B})$ and $\game{{k}}(\mathcal{O_B,O_B})$ are equivalent, where $k=\limsup_{n\in\omega} f(n)$; \item If $f$ is unbounded, then $\game{f}(\mathcal{O_B,O_B})$ and $\game{\underline{\textnormal{Id}}}(\mathcal{O_B,O_B})$ are equivalent. \end{enumerate} \end{proposition} \begin{proof} In view of Corollary \ref{easy}, we just need to worry about $\pl{II}$. For the first case, where $f$ is bounded, note that there exists an $m_0\in\omega$ such that $f(n)\leq k$ for all $n\geq m_0$. Thus, if $\mu$ is a winning strategy for $\pl{II}$ in $\game{f}$, then $\mu$ induces a winning strategy on $\game{k}$ simply by ignoring the first $m_0$ innings -- here we also use the fact that $\mathcal{U}\setminus F\in\mathcal{O}_\mathcal{B}$ whenever $\mathcal{U}\in\mathcal{O}_\mathcal{B}$ is nontrivial and $F\in[\mathcal{U}]^{<\omega}$.
The converse holds because the set $N=\{n\in\omega:f(n)=k\}$ is infinite. Now, if $f$ and $g$ are unbounded, by symmetry it is enough to show that \[\Win{II}{f}(\mathcal{O}_\mathcal{B},\mathcal{O}_\mathcal{B})\Rightarrow\Win{II}{g}(\mathcal{O}_\mathcal{B},\mathcal{O}_\mathcal{B}).\] Indeed, if $\mu$ is a winning strategy for $\pl{II}$ in $\game{f}$, we fix a strictly increasing sequence $(n_i)_{i\in\omega}$ of natural numbers such that $g(n_i)\geq f(i)$ for all $i<\omega$ and then we induce a winning strategy for $\pl{II}$ in the game $\game{g}$ by using $\mu$ only in the innings $n\in\{n_i:i<\omega\}$. \end{proof} Now, we can translate Corollary~\ref{easy} to function spaces automatically. \begin{corollary} Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology on $X$ with a compact base. For a function $f\colon \omega\to[2,\aleph_0)$, the following are equivalent: \begin{enumerate} \item $\los{I}{1}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$; \item $\Los{I}{f}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$; \item $\sel{f}(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$; \item $\mathsf{S}_1(\Omega_{\mathrm{o}},\Omega_{\mathrm{o}})$. \end{enumerate} \end{corollary} \begin{remark} We still do not know if the games $\game{f}(\mathcal{O},\mathcal{O})$ and $\mathsf{G}_1(\mathcal{O},\mathcal{O})$ are equivalent for arbitrary topological spaces. If this turns out to be true, at least for T$_0$-spaces, then an analogous result can be derived for function spaces with the tools we presented above. \end{remark} \section{Bornologies and filters}\label{filters} Recall that a filter $\mathcal{F}$ on a set $C$ is a family of subsets of $C$ that is upward closed and closed under finite intersections -- it is called a proper filter if $\emptyset\not\in \mathcal{F}$.
For a topological space $Y$ and a point $y\in Y$, we consider the neighborhood filter of $y$, \begin{equation}\mathcal{N}_{y,Y}:=\{N\subset Y:\exists V\subset Y\text{, }V\text{ is open and }y\in V\subset N\}.\end{equation} In this section we intend to generalize Theorem 3 in \cite{jordan2007}, but first we make the necessary definitions, adapted from \cite{jordan2006,jordan2007}. For a fixed set $C$, we denote by $\mathbb{F}(C)$ the family of the proper filters of $C$, and for a cardinal $\kappa\geq 1$ we let $\mathbb{F}_\kappa(C)$ be the family of those proper filters of $C$ of the form \begin{equation}\label{filtrogerado}\mathcal{G}^\uparrow:=\{F\subset C:\exists G\in\mathcal{G}\left(G\subset F\right)\},\end{equation} where $\mathcal{G}\in[\wp(C)]^{\leq \kappa}$ -- in particular, we call the elements of $\mathbb{F}_1(C)$ and $\mathbb{F}_{\aleph_0}(C)$ principal filters and countably based filters, respectively. If $R\subset C\times D$ is a binary relation between the sets $C$ and $D$ and if $\mathcal{F}$ is a collection of subsets of $C$, we set $R\left(\mathcal{F}\right):=\{R[F]:F\in\mathcal{F}\}^{\uparrow}$. It is worth mentioning that the correspondence \[\wp\left(\wp(C)\right)\ni \mathcal{G}\longmapsto \mathcal{G}^{\uparrow}\in\wp\left(\wp(C)\right),\] does not determine a function $\wp\left(\wp(C)\right)\to \mathbb{F}(C)$. In fact, $\mathcal{G}^{\uparrow}$ is a proper filter if and only if $\emptyset\not\in\mathcal{G}$ and for all $A,B\in\mathcal{G}$ there is a $G\in\mathcal{G}$ such that $G\subset A\cap B$. By a class of filters $\mathbb{K}$ we mean a property about filters, and we write $\mathcal{F}\in\mathbb{K}$ to indicate that the filter $\mathcal{F}$ has property $\mathbb{K}$. We also say that a topological space $Y$ is a $\mathbb{K}$-space if $\mathcal{N}_{y,Y}\in\mathbb{K}$ for all $y\in Y$. For instance, the class of $\mathbb{F}_{\aleph_0}$-spaces is the class of spaces with countable character.
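The directedness criterion above is mechanical to verify on finite examples. The following sketch (ours, purely illustrative; it enumerates power sets, so it is only suitable for very small sets) checks that $\mathcal{G}^{\uparrow}$ is a proper filter exactly when $\emptyset\not\in\mathcal{G}$ and $\mathcal{G}$ is directed:

```python
from itertools import combinations

def subsets(C):
    """All subsets of the finite set C, as frozensets."""
    C = sorted(C)
    return [frozenset(s) for r in range(len(C) + 1) for s in combinations(C, r)]

def is_proper_filter(F, C):
    """The filter axioms from the text: upward closed within C, closed
    under finite intersections, and (properness) the empty set excluded."""
    F = {frozenset(A) for A in F}
    return (frozenset() not in F
            and all(B in F for A in F for B in subsets(C) if A <= B)
            and all(A & B in F for A in F for B in F))

def upward_closure(G, C):
    """G^up = {F subset of C : some member of G is contained in F}."""
    G = [frozenset(A) for A in G]
    return {B for B in subsets(C) if any(A <= B for A in G)}

def directed_without_empty(G):
    """The criterion from the text: the empty set is not in G, and the
    intersection of any two members of G contains a member of G."""
    G = [frozenset(A) for A in G]
    return (frozenset() not in G
            and all(any(H <= A & B for H in G) for A in G for B in G))
```

For instance, on $C=\{1,2,3\}$ the directed family $\{\{1,2\},\{1\}\}$ generates a proper (in fact principal) filter, while $\{\{1\},\{2\}\}$ fails the criterion and its upward closure is not closed under intersections.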
We say that a class of filters $\mathbb{K}$ is $\mathbb{F}_1$-composable if for any sets $C,D$ and any relation $R\subset C\times D$, the following holds: \begin{equation}\label{composable}\mathcal{F}\in\mathbb{F}(C)\cap \mathbb{K}\text{ and }R(\mathcal{F})\in\mathbb{F}(D)\Rightarrow R(\mathcal{F})\in\mathbb{K}.\end{equation} Note that the condition ``$R(\mathcal{F})\in\mathbb{F}(D)$'' on the left-hand side of the above implication is necessary, because in general there is no guarantee that $R(\mathcal{F})$ is a proper filter on $D$. If $R$ is just a relation in $C\times D$, it may happen that $R[F]=\emptyset$ for some $F\in\mathcal{F}$, and in this case $R(\mathcal{F})$ is not a proper filter. However, the situation becomes simpler if $R$ is a function. \begin{lemma}\label{aha} Let $C$ and $D$ be sets and let $f\colon C\to D$ be a function. \begin{enumerate} \item\label{aha1} If $\mathcal{F}\in\mathbb{F}(C)$, then $f(\mathcal{F})\in\mathbb{F}(D)$. \item\label{aha2} If $\mathcal{G}\in\mathbb{F}(D)$, then $f^{-1}(\mathcal{G})\in\mathbb{F}(C)$ if and only if $G\cap f[C]\ne\emptyset$ for all $G\in\mathcal{G}$. \end{enumerate} \end{lemma} Finally, a class of filters $\mathbb{K}$ is called $\mathbb{F}_{\omega}$-steady if for any set $C$ and each pair of filters $\mathcal{F}\in\mathbb{F}(C)\cap\mathbb{K}$ and $\mathcal{G}\in\mathbb{F}_{\omega}(C)$ such that $F\cap G\ne\emptyset$ for all $(F,G)\in\mathcal{F}\times\mathcal{G}$, the following holds \begin{equation}\label{steady}\mathcal{F}\vee\mathcal{G}:=\{F\cap G:(F,G)\in\mathcal{F}\times\mathcal{G}\}^{\uparrow}\in\mathbb{K}.\end{equation} Given a Tychonoff space $(X,\tau)$ and a bornology $\mathcal{B}$ on $X$, we denote by $\Gamma_{\mathcal{B}}(X)$ the filter on $\tau$ generated by the sets \begin{equation}V(B):=\{U\in\tau:{B}\subset U\},\end{equation} i.e., $\Gamma_{\mathcal{B}}(X):=\{V(B):B\in\mathcal{B}\}^\uparrow$.
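On a finite example the generating sets $V(B)$ are easy to inspect. The sketch below (ours, purely illustrative) computes $V(B)$ for the three-point space with the nested topology $\{\emptyset,\{1\},\{1,2\},X\}$ and checks the filter-base identity $V(B_1)\cap V(B_2)=V(B_1\cup B_2)$; since a bornology is closed under finite unions, this identity shows that $\{V(B):B\in\mathcal{B}\}$ is a filter base, so $\Gamma_{\mathcal{B}}(X)$ is indeed a proper filter on $\tau$:

```python
def V(B, tau):
    """V(B) = {U in tau : B is contained in U}: the generating sets of
    Gamma_B(X) from the text, for a finite topology tau."""
    B = frozenset(B)
    return frozenset(U for U in tau if B <= U)

# X = {1, 2, 3} with the nested topology {emptyset, {1}, {1, 2}, X}.
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})}

# V({1}) consists of the three nonempty open sets, V({2}) of the two
# largest, and V({1}) & V({2}) == V({1, 2}): the filter-base identity.
```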
Note that if $\mathcal{B}'\subset \mathcal{B}$ is a base for $\mathcal{B}$, then $\{V(B):B\in\mathcal{B}'\}^\uparrow =\Gamma_{\mathcal{B}}(X)$. In \cite{jordan2007}, Jordan considers the filter $\Gamma(X)$ on $\tau$, which coincides with $\Gamma_{[X]^{<\omega}}(X)$ according to our previous definition. Jordan shows that if a class of filters $\mathbb{K}$ is $\mathbb{F}_1$-composable and $\mathbb{F}_\omega$-steady, then $\Gamma(X)\in \mathbb{K}$ if and only if $C_{p}(X)$ is a $\mathbb{K}$-space. Our next results aim to extend this conclusion to $\Gamma_{\mathcal{B}}(X)$ and $C_{\mathcal{B}}(X)$. Their proofs are natural adaptations of the arguments in \cite{jordan2007}. \begin{proposition}\label{halfultimate} Let $(X,\tau)$ be a Tychonoff space and let $\mathcal{B}$ be a bornology with a compact base on $X$. Suppose that $\mathbb{K}$ is an $\mathbb{F}_1$-composable class of filters. If $C_{\mathcal{B}}(X)$ is a $\mathbb{K}$-space, then $\Gamma_{\mathcal{B}}(X)\in\mathbb{K}$. \end{proposition} \begin{proof} It is enough to present a neighborhood filter $\mathcal{F}$ in $C_{\mathcal{B}}(X)$, a set $Y$ with functions $\pi\colon Y\to\tau$ and $\Phi\colon Y\to C_{\mathcal{B}}(X)$ such that $\Phi^{-1}(\mathcal{F})\in\mathbb{F}(Y)$ and $\Gamma_\mathcal{B}(X)=\pi(\Phi^{-1}(\mathcal{F}))$. In fact, since $\mathcal{F}\in\mathbb{K}$ and $\mathbb{K}$ is $\mathbb{F}_1$-composable, it follows by the previous lemma that $\Phi^{-1}(\mathcal{F})\in\mathbb{K}$ and $\pi(\Phi^{-1}(\mathcal{F}))\in\mathbb{K}$. Let $\mathcal{B}_0$ be a compact base for $\mathcal{B}$. Let $Y:=\{(B,U)\in\mathcal{B}_0\times\tau:B\subset U\}$ and define $\pi:Y\rightarrow \tau$ by $\pi(B,U)=U$. Since $X$ is a Tychonoff space, for each $(B,U)\in Y$ there exists some $f=f_{(B,U)}\in C_{\mathcal{B}}(X)$ such that $f\upharpoonright B\equiv 0$ and $f\upharpoonright X\setminus U\equiv 1$, so we may set $\Phi:Y\rightarrow C_{\mathcal{B}}(X)$ by $\Phi(B,U)=f_{(B,U)}$ for each $(B,U)\in Y$.
Now, we take $\mathcal{F}:=\mathcal{N}_{\mathrm{o},C_{\mathcal{B}}(X)}=\{\langle B,\frac{1}{n+1}\rangle[\mathrm{o}]:B\in\mathcal{B}_0,n\in\omega\}^\uparrow$. In order to prove $\pi(\Phi^{-1}(\mathcal{F}))=\Gamma_{\mathcal{B}}(X)$, it is enough to note that both filters are generated by the same base. Indeed, for any $B\in\mathcal{B}_0$ and $\varepsilon\in(0,1)$ one has \[V(B)=\pi\left[\Phi^{-1}\left[\left\langle B,\varepsilon\right\rangle [\mathrm{o}]\right]\right],\] from which the desired equality follows. \end{proof} \begin{proposition}\label{halfultimate2} Let $\mathbb{K}$ be a class of filters that is $\mathbb{F}_1$-composable and $\mathbb{F}_\omega$-steady. For a topological space $(X,\tau)$ and a bornology $\mathcal{B}$ on $X$, $\Gamma_{\mathcal{B}}(X)\in\mathbb{K}$ implies that $C_{\mathcal{B}}(X)$ is a $\mathbb{K}$-space. \end{proposition} \begin{proof} We use a strategy similar to the one in the proof of the previous proposition. We define a set $Y$ with a proper countably based filter $\mathcal{H}\in\mathbb{F}_\omega(Y)$ and functions $\pi\colon Y\to C_\mathcal{B}(X)$ and $\Phi\colon Y\to \tau$ such that $\Phi^{-1}(\Gamma_\mathcal{B}(X))$ and $\Phi^{-1}(\Gamma_\mathcal{B}(X))\vee \mathcal{H}$ are proper filters in $Y$, and $\pi(\Phi^{-1}(\Gamma_\mathcal{B}(X))\vee \mathcal{H})=\mathcal{N}_{\mathrm{o},\mathcal{C}_\mathcal{B}(X)}$. Again, the hypotheses on $\mathbb{K}$ guarantee that $\pi(\Phi^{-1}(\Gamma_\mathcal{B}(X))\vee\mathcal{H})\in\mathbb{K}$, which is enough to finish the proof, since $C_{\mathcal{B}}(X)$ is a homogeneous space. For brevity, we write $I_n:=(-\frac{1}{n+1},\frac{1}{n+1})\subset\mathbb{R}$ for each $n\in\omega$. Let $Y:=\{(f,B,n)\in C_{\mathcal{B}}(X)\times \mathcal{B}\times\omega:f[{B}]\subset I_n\}$ and $\mathcal{H}:=\{M_n:n\in\omega\}^{\uparrow}$, where $M_n:=\{(f,B,m)\in Y:m\geq n\}$ for each $n\in\omega$.
Now, let $\pi:Y\rightarrow C_{\mathcal{B}}(X)$ be defined by $\pi(f,B,n)=f$ and $\Phi:Y\rightarrow \tau$ defined as $\Phi(f,B,n)=f^{-1}[I_n]$. Since $\Gamma_{\mathcal{B}}(X)\in\mathbb{K}$ and $\mathbb{K}$ is $\mathbb{F}_1$-composable, it follows that $\Phi^{-1}(\Gamma_{\mathcal{B}}(X))\in\mathbb{K}$. Also, since $\Phi^{-1}[V(B)]\cap M_n\ne\emptyset$ for all $n\in\omega$, the $\mathbb{F}_\omega$-steadiness of $\mathbb{K}$ gives $\Phi^{-1}(\Gamma_{\mathcal{B}}(X))\vee \mathcal{H}\in\mathbb{K}$ and, again by the $\mathbb{F}_1$-composability of $\mathbb{K}$, $\pi(\Phi^{-1}(\Gamma_{\mathcal{B}}(X))\vee\mathcal{H})\in\mathbb{K}$. So, in order to finish the proof we must show that $\mathcal{N}_{\mathrm{o},C_\mathcal{B}(X)}=\pi(\Phi^{-1}(\Gamma_{\mathcal{B}}(X))\vee\mathcal{H})$. The desired equality follows because \[\left\langle B,\frac{1}{n+1}\right\rangle[\mathrm{o}]=\pi\left[\Phi^{-1}\left[V(B)\right]\cap M_n\right]\] holds for any $B\in\mathcal{B}$ and $n\in\omega$. \end{proof} Altogether, the propositions above yield the following \begin{theorem}\label{ultimate} Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology with a compact base on $X$. If $\mathbb{K}$ is a class of filters that is $\mathbb{F}_1$-composable and $\mathbb{F}_\omega$-steady, then $\Gamma_{\mathcal{B}}(X)\in\mathbb{K}$ if and only if $C_{\mathcal{B}}(X)$ is a $\mathbb{K}$-space. \end{theorem} \begin{example} For a cardinal $\kappa\geq \aleph_0$, we say that a filter $\mathcal{F}$ on $C$ belongs to $\mathbb{T}_\kappa$ if for every $A\subset C$ with $A\cap F\ne \emptyset$ for all $F\in\mathcal{F}$, there is a $B\in[A]^{\leq \kappa}$ such that $B\cap F\ne\emptyset$ for all $F\in\mathcal{F}$. It can be shown that $\mathbb{T}_\kappa$ is an $\mathbb{F}_1$-composable and $\mathbb{F}_\omega$-steady class of filters.
So, under the assumptions of the previous theorem, $C_\mathcal{B}(X)$ has tightness less than or equal to $\kappa$ if and only if $\Gamma_\mathcal{B}(X)\in\mathbb{T}_\kappa$, i.e., any $\mathcal{B}$-covering of $X$ has a $\mathcal{B}$-subcovering of cardinality $\leq\kappa$, a result originally due to McCoy and Ntantu~\cite{McCoybook}. \end{example} \begin{example} Let $\mathcal{F}$ be a proper filter on a set $C$, and consider the family \[\mathcal{M}:=\{A\subset C:\forall F\in\mathcal{F}(A\cap F\ne\emptyset)\}.\] Note that for any function $\varphi\colon\omega\to[2,\aleph_0]$, both classes of filters $\sel{\varphi}(\mathcal{M},\mathcal{M})$ and ${\Win{II}{\varphi}(\mathcal{M},\mathcal{M})}$ are $\mathbb{F}_1$-composable. Thus, the directions ``property in $C_\mathcal{B}(X)$ implies property in $X$'' of Proposition~\ref{prop1} concerning both $\sel{\varphi}$ and $\pl{II}$ follow from Proposition~\ref{halfultimate}. On the other hand, the converses follow from Proposition~\ref{halfultimate2} whenever $\varphi$ is a constant function, because in this case it can be proved that $\sel{\varphi}(\mathcal{M,M})$ and $\Win{II}{\varphi}(\mathcal{M,M})$ are also $\mathbb{F}_{\omega}$-steady classes of filters. However, we were not able to prove similar statements regarding $\pl{I}$ in the game $\game{\varphi}(\mathcal{M,M})$. \end{example} \section{$\gamma$-productive spaces}\label{productive} Gerlits and Nagy \cite{Gerlits1982} introduced the concept of point-cofinite open coverings\footnote{Usually called $\gamma$-coverings in the literature.} in their analysis of the Fr\'echet property in $C_p(X)$. Recall that a topological space $Y$ is Fr\'echet (resp. strictly Fr\'echet) if for each $y\in Y$ and for each $A\in\Omega_y$ there is a subset $B\subset A$ such that $B\in\Gamma_y$, where \[\Gamma_y:=\{A\subset Y:|A\setminus V|<\aleph_0\text{ for all }V\in\mathcal{N}_{y,Y}\},\] (resp. if $\mathsf{S}_1(\Omega_y,\Gamma_y)$ holds).
An infinite collection $\mathcal{U}$ of proper open sets of $X$ is called a point-cofinite covering if for all $x\in X$ the set $\{U\in\mathcal{U}:x\not\in U\}$ is finite, and we denote by $\Gamma$ the collection of all point-cofinite coverings of $X$ -- in particular, note that $\Gamma\subset \Omega$. A space $X$ is called a $\gamma$-space if any nontrivial $\omega$-covering has a (countable) point-cofinite subcovering. Then we have the following \begin{theorem} [Gerlits and Nagy\cite{Gerlits1982}]\label{frechetG} For a Tychonoff space $X$, the following are equivalent: \begin{enumerate} \item $X$ is a $\gamma$-space; \item $\mathsf{S}_1(\Omega,\Gamma)$ holds; \item $C_p(X)$ is a strictly Fr\'echet space; \item $C_p(X)$ is a Fr\'echet space. \end{enumerate} \end{theorem} In \cite{jordan2007}, Jordan obtains a variation of the above theorem as a corollary of Theorem \ref{ultimate}, with appropriate definitions for Fr\'echet filters and \emph{strongly}\footnote{Strictly Fr\'echet spaces and strongly Fr\'echet spaces are not formally the same: the latter were independently introduced by Michael~\cite{Michaelfrechet} and Siwiec~\cite{Siwiec}. Although the definition is similar, in the strong case there is the additional requirement that the sequence $(A_n)_{n\in\omega}$ with $y\in\bigcap_{n\in\omega}\overline{A_n}$ be decreasing.} Fr\'echet filters, which turn out to be $\mathbb{F}_1$-composable and $\mathbb{F}_\omega$-steady classes of filters. However, we shall pay more attention to the following characterization. \begin{theorem}[Jordan and Mynard\cite{jordan2004}]\label{prodfrechet} A topological space $Y$ is productively Fr\'echet if and only if $Y\times Z$ is a Fr\'echet space for any strongly Fr\'echet space $Z$.
\end{theorem} In the above theorem, the sentence ``$Y$ is productively Fr\'echet'' has a very precise meaning -- it is a space such that all of its neighborhood filters belong to the class of productively Fr\'echet filters: \begin{center} $\mathcal{F}\in\mathbb{F}(C)$ is productively Fr\'echet if for each strongly Fr\'echet filter $\mathcal{H}\in\mathbb{F}(C)$ such that $F\cap H\ne\emptyset$ for all $(F,H)\in\mathcal{F}\times\mathcal{H}$ there is a filter $\mathcal{G}\in\mathbb{F}_{\omega}(C)$ refining $\mathcal{F}\vee \mathcal{H}$. \end{center} The considerations above justify the following definition of Jordan \cite{jordan2007}: a Tychonoff space $X$ is called $\gamma$-productive if the filter $\Gamma(X)$ is productively Fr\'echet. Then, by using the fact that productively Fr\'echet filters are $\mathbb{F}_1$-composable and $\mathbb{F}_\omega$-steady, Jordan proves the following. \begin{proposition}[Jordan \cite{jordan2007}]\label{corjordan} If $X$ is $\gamma$-productive, then $X\times Y$ is a $\gamma$-space for every $\gamma$-space $Y$. \end{proposition} By extending this definition to the filter $\Gamma_{\mathcal{B}}(X)$, we will prove that the \emph{productivity} of $\gamma$-productive spaces is far stronger. In order to do this, we will follow the indirect approach presented by Miller, Tsaban and Zdomskyy in \cite{tsaban2016} to derive Proposition~\ref{corjordan}. Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology on $X$. We say that an infinite collection $\mathcal{U}$ of proper open sets of $X$ is a $\mathcal{B}$-cofinite covering if for all $B\in\mathcal{B}$ the set $\{U\in\mathcal{U}:B\not\subset U\}$ is finite, and we denote by $\Gamma_{\mathcal{B}}$ the collection of all $\mathcal{B}$-cofinite coverings of $X$. Naturally, we say that $X$ is a $\gamma_\mathcal{B}$-space if any nontrivial $\mathcal{B}$-covering has a (countable) $\mathcal{B}$-cofinite subcovering.
McCoy and Ntantu~\cite{McCoybook} have introduced $\mathcal{B}$-cofinite open coverings in their generalization of Theorem~\ref{frechetG}, calling them \emph{$\mathcal{B}$-sequences}. Although they only stated items (\ref{enrolo1}) and (\ref{enrolo4}) of the theorem below, their arguments, which are adapted from Gerlits and Nagy~\cite{Gerlits1982}, can be used to prove the following. \begin{theorem}[McCoy and Ntantu~\cite{McCoybook}] Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology with a compact base on $X$. The following are equivalent: \begin{enumerate} \item\label{enrolo1} $X$ is a $\gamma_\mathcal{B}$-space; \item $\mathsf{S}_1(\mathcal{O_B},\Gamma_\mathcal{B})$ holds; \item $C_{\mathcal{B}}(X)$ is strictly Fr\'echet; \item\label{enrolo4} $C_{\mathcal{B}}(X)$ is Fr\'echet. \end{enumerate} \end{theorem} \begin{remark} In particular, all of the above conditions are equivalent to requiring that $C_{\mathcal{B}}(X)$ is strongly Fr\'echet. \end{remark} Now, we will say that a Tychonoff space $X$ with a bornology $\mathcal{B}$ is $\gamma_\mathcal{B}$-productive if the filter $\Gamma_{\mathcal{B}}(X)$ is productively Fr\'echet -- note that for $\mathcal{B}=[X]^{<\aleph_0}$, one obtains the original definition of $\gamma$-productive spaces. Since we will work with the product of spaces endowed with different bornologies, we need to describe their behavior under products. \begin{proposition} Given a family $\{X_t:t\in T\}$ of pairwise disjoint topological spaces, consider for each $t\in T$ a set $\mathcal{B}_t\subset\wp(X_t)$. Let $\mathcal{B}_0=\{\prod_{t\in T}B_t:\forall t(B_t\in \mathcal{B}_t)\}$ and $\mathcal{B}_1=\{\bigsqcup_{t\in T}B_t:B_t\in\mathcal{B}_t$ for finitely many $t\in T$, $B_t=\emptyset$ otherwise$\}$. If $\mathcal{B}_t$ is a (compact) base for each $t\in T$, then $\mathcal{B}_0$ and $\mathcal{B}_1$ are (compact) bases for bornologies on $\prod_{t\in T}X_t$ and $\sum_{t\in T}X_t$, respectively.
\end{proposition} We denote by $\bigotimes_{t\in T}\mathcal{B}_t$ and $\bigoplus_{t\in T}\mathcal{B}_t$ the bornologies generated by the bases $\mathcal{B}_0$ and $\mathcal{B}_1$ in the above proposition, respectively. \begin{proposition} Let $\{X_t:t\in T\}$ be a family of topological spaces, and for each $t\in T$ let $\mathcal{B}_t$ be a bornology on $X_t$. Then $C_{\bigoplus_{t\in T}\mathcal{B}_t}(\sum_{t\in T}X_t)$ is homeomorphic to $\prod_{t\in T}C_{\mathcal{B}_t}(X_t)$. \end{proposition} \begin{proof} Note that the map \[C_{\bigoplus_{t\in T}\mathcal{B}_t}\left(\sum_{t\in T}X_t\right)\ni f\longmapsto (f\upharpoonright X_t)_{t\in T}\in\prod_{t\in T}C_{\mathcal{B}_t}(X_t)\] is continuous and it has a continuous inverse. \end{proof} \begin{remark}In particular, it follows from the previous proposition that \begin{equation} C_p\left(\sum_{t\in T}X_t\right)\textnormal{ is homeomorphic to }\prod_{t\in T}C_p(X_t), \end{equation} and, if each $X_t$ is a Hausdorff space, then \begin{equation} C_k\left(\sum_{t\in T}X_t\right)\textnormal{ is homeomorphic to }\prod_{t\in T}C_k(X_t). \end{equation} For brevity, if $X_t=X$ and $\mathcal{B}_t=\mathcal{B}$ for all $t\in T$, we will write $\mathcal{B}^{|T|}$ instead of $\bigotimes_{t\in T}\mathcal{B}$.\end{remark} The next lemma will be very useful later. \begin{lemma} [Miller, Tsaban and Zdomskyy~\cite{tsaban2016}]\label{trick} Let $\mathfrak{P}$ be a topological property hereditary for closed subspaces and preserved under finite powers. Then for any pair of topological spaces $X$ and $Y$, $X\times Y$ has the property $\mathfrak{P}$ provided that $X+ Y$ has the property $\mathfrak{P}$. \end{lemma} The first three items in the next proposition state that the property ``having a bornology $\mathcal{B}$ with a compact base such that it is a $\gamma_\mathcal{B}$-space'' satisfies the conditions of the previous lemma. \begin{proposition} Let $X$ be a topological space and let $\mathcal{B}$ be a bornology on $X$.
\begin{enumerate}[(a)] \item If $Y\subset X$, then $\mathcal{B}_Y:=\{B\cap Y:B\in \mathcal{B}\}$ is a bornology on $Y$. If $Y$ is closed and $\mathcal{B}$ has a compact base on $X$, then $\mathcal{B}_Y$ has a compact base on $Y$. \item If $X$ is a $\gamma_\mathcal{B}$-space and $Y\subset X$ is closed, then $Y$ is a $\gamma_{\mathcal{B}_Y}$-space. \item If $X$ is a $\gamma_\mathcal{B}$-space and $\mathcal{B}$ has a compact base, then $X^n$ is a $\gamma_{\mathcal{B}^n}$-space for any $n\in\omega$. \item\label{voltagama} If $X$ is a $\gamma_\mathcal{B}$-space and $Y$ is a $\gamma_{\mathcal{L}}$-space for a bornology $\mathcal{L}$ on $Y$ such that $X\times Y$ is a $\gamma_{\mathcal{B}\otimes\mathcal{L}}$-space, then $X+ Y$ is a $\gamma_{\mathcal{B}\oplus\mathcal{L}}$-space. \end{enumerate} \end{proposition} \begin{corollary} Let $X$ and $Y$ be topological spaces with bornologies $\mathcal{B}$ and $\mathcal{L}$, respectively, both of them with compact bases. Then $X\times Y$ is a $\gamma_{\mathcal{B}\otimes\mathcal{L}}$-space if and only if $X+ Y$ is a $\gamma_{\mathcal{B}\oplus \mathcal{L}}$-space. \end{corollary} We can finally state and prove the desired extension of Proposition \ref{corjordan}. \begin{corollary} Let $X$ be a Tychonoff space and let $\mathcal{B}$ be a bornology with a compact base on $X$. If $X$ is $\gamma_\mathcal{B}$-productive, then $X\times Y$ is a $\gamma_{\mathcal{B}\otimes \mathcal{L}}$-space for any Tychonoff space $Y$ endowed with a bornology $\mathcal{L}$ with a compact base such that $Y$ is a $\gamma_{\mathcal{L}}$-space. \end{corollary} \begin{proof} If $X$ is $\gamma_\mathcal{B}$-productive, then, by Theorem~\ref{ultimate}, $C_\mathcal{B}(X)$ is productively Fr\'echet. Since $Y$ is a $\gamma_{\mathcal{L}}$-space, it follows that $C_{\mathcal{L}}(Y)$ is Fr\'echet, hence $C_\mathcal{B}(X)\times C_\mathcal{L}(Y)$ is (strongly) Fr\'echet.
But $C_\mathcal{B}(X)\times C_\mathcal{L}(Y)$ is homeomorphic to $C_{\mathcal{B}\oplus \mathcal{L}}(X+ Y)$, thus $X+ Y$ is a $\gamma_{\mathcal{B}\oplus \mathcal{L}}$-space, and the conclusion follows from the last corollary. \end{proof} In particular, if a Tychonoff space $X$ is $\gamma$-productive, then $X\times Y$ is a $\gamma_{[X]^{<\omega}\otimes \mathcal{L}}$-space whenever $Y$ is a Tychonoff $\gamma_{\mathcal{L}}$-space, where $\mathcal{L}$ is a bornology with a compact base on $Y$. This suggests that the converse of Proposition \ref{corjordan} may be false. \end{document}
\begin{document} \title{\sc Rainbow domination and related problems on some classes of perfect graphs} \begin{abstract} Let $k \in \mathbb{N}$ and let $G$ be a graph. A function $f: V(G) \rightarrow 2^{[k]}$ is a $k$-rainbow function if, for every vertex $x$ with $f(x)=\varnothing$, $f(N(x)) =[k]$. The $k$-rainbow domination number $\gamma_{rk}(G)$ is the minimum of $\sum_{x \in V(G)} |f(x)|$ over all $k$-rainbow functions. We investigate the rainbow domination problem for some classes of perfect graphs. \end{abstract} \section{Introduction} Bre\u{s}ar et al. introduced the rainbow domination problem in 2008~\cite{kn:bresar}. The $k$-rainbow domination problem drew our attention because it is solvable in polynomial time for classes of graphs of bounded rankwidth but, unless one fixes $k$ as a constant, it does not seem to be formulable in monadic second-order logic. Let us start with the definition. \begin{definition} Let $k \in \mathbb{N}$ and let $G$ be a graph. A function $f: V(G) \rightarrow 2^{[k]}$ is a $k$-rainbow function if, for every $x \in V(G)$, \begin{equation} \label{eqn1} f(x)=\varnothing \quad \text{implies}\quad \cup_{y \in N(x)} \; f(y)\; = [k]. \end{equation} The $k$-rainbow domination number of $G$ is \begin{multline} \label{eqn2} \gamma_{rk}(G) = \min \; \bigl\{\; \|f\| \; \mid \; \text{$f$ is a $k$-rainbow function for $G$}\;\bigr \},\\ \text{where} \quad \|f\|=\sum\nolimits_{x \in V(G)} \; |f(x)|. \end{multline} \end{definition} We call $\|f\|$ the \underline{cost} of $f$ over the graph $G$. When there is danger of confusion, we write $\|f\|_{G}$ instead of $\|f\|$. We call the elements of $[k]$ the \underline{colors} of the rainbow and, for a vertex $x$, we call $f(x)$ the \underline{label} of $x$. For a set $S$ of vertices we write \[f(S)=\cup_{x \in S}\; f(x).\] It is a common phenomenon that the introduction of a new domination variant is followed chop-chop by an explosion of research results and their write-ups.
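The definition can be checked mechanically on very small graphs. The following brute-force sketch (ours, purely illustrative; it enumerates all $(2^k)^n$ label assignments, so it is exponential in $nk$) computes $\gamma_{rk}$ straight from the definition:

```python
from itertools import combinations, product

def rainbow_domination_number(adj, k):
    """Brute-force gamma_rk: minimize sum |f(x)| over all assignments
    f: V -> 2^[k] such that f(x) = {} forces the union of the labels
    over N(x) to be all of [k].  adj maps each vertex to its neighbors."""
    vertices = sorted(adj)
    colors = frozenset(range(1, k + 1))
    labels = [frozenset(s) for r in range(k + 1)
              for s in combinations(sorted(colors), r)]
    best = k * len(vertices)  # labeling every vertex with [k] always works
    for assignment in product(labels, repeat=len(vertices)):
        f = dict(zip(vertices, assignment))
        if all(f[x] or colors <= frozenset().union(*(f[y] for y in adj[x]))
               for x in vertices):
            best = min(best, sum(len(s) for s in assignment))
    return best
```

For the path $P_3$ on vertices $1$--$2$--$3$, this gives $\gamma_{r1}=1$ (put one color on the center), $\gamma_{r2}=2$ and $\gamma_{r3}=3$.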
One reason for the popularity of domination problems is the wide range of applicability and directions of possible research. We moved our bibliography of recent publications on this specific domination variant to the appendix. We refer to~\cite{kn:sumenjak} for the description of an application of rainbow domination. To begin with, Bre\u{s}ar et al. showed that, for any graph $G$, \begin{equation} \label{eqn3} \gamma_{rk}(G)=\gamma(G \Box K_k), \end{equation} where $\gamma$ denotes the domination number and where $\Box$ denotes the Cartesian product. This observation, together with Vizing's conjecture, stimulated the search for graphs for which $\gamma=\gamma_{r2}$ (see also~\cite{kn:aharoni,kn:hartnell}). Notice that~\eqref{eqn3} implies the upper bound $\gamma_{rk}(G) \leq k \cdot \gamma(G)$~\cite{kn:vizing}. Chang et al.~\cite{kn:chang} were quick on the uptake and showed that, for $k \in \mathbb{N}$, the $k$-rainbow domination problem is NP-complete, even when restricted to chordal graphs or bipartite graphs. The same paper shows that there is a linear-time algorithm to determine the parameter on trees. A similar algorithm for trees appears in~\cite{kn:yen} and this paper also shows that the problem remains NP-complete on planar graphs. Notice that~\eqref{eqn3} shows that $\gamma_{rk}(G)$ is a non-decreasing function in $k$. Chang et al. show that, for all graphs $G$ with $n$ vertices and all $k \in \mathbb{N}$, \begin{equation} \label{eqn13} \min \; \{\;k,\;n\;\} \leq \gamma_{rk}(G) \leq n \quad\text{and} \quad \gamma_{rn}(G)=n. \end{equation} For trees $T$, Chang et al.~\cite{kn:chang} give sharp bounds for the smallest $k$ satisfying $\gamma_{rk}(T)=|V(T)|$.
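Identity~\eqref{eqn3} is easy to sanity-check on tiny graphs. The sketch below (ours, purely illustrative) builds $G \Box K_k$ and computes its domination number by exhaustive search; for the path $P_3$ the results $1$, $2$ and $3$ for $k=1,2,3$ match the values of $\gamma_{rk}(P_3)$ predicted by~\eqref{eqn3} and~\eqref{eqn13}.

```python
from itertools import combinations

def domination_number(adj):
    """Smallest dominating set, found by exhaustive search (tiny graphs only)."""
    V = sorted(adj)
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            S = set(S)
            if all(x in S or S & set(adj[x]) for x in V):
                return r

def box_with_clique(adj, k):
    """The Cartesian product G box K_k: (v, i) ~ (w, j) iff
    (v == w and i != j) or (i == j and w is a neighbor of v)."""
    return {(v, i): [(v, j) for j in range(k) if j != i]
                    + [(w, i) for w in adj[v]]
            for v in adj for i in range(k)}
```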
Many other papers establish bounds and relations, eg, between the $2$-rainbow domination number and the total domination number or the (weak) Roman domination number~\cite{kn:chellali,kn:fujita3,kn:furuya,kn:wu,kn:wu2}, or study edge- or vertex-critical graphs with respect to rainbow domination~\cite{kn:rad}, or obtain results for special graphs such as paths, cycles, graphs with given radius, and the generalized Petersen graphs~\cite{kn:ali,kn:fujita4,kn:shao,kn:stepien,kn:stepien2,kn:tong,kn:wang,kn:xavier,kn:xu}. Pai and Chiu developed an exact algorithm and a heuristic for $3$-rainbow domination and present the results of some experiments in~\cite{kn:pai}. Let us mention that the $k$-rainbow domination number may be computed, via~\eqref{eqn3}, by an exact, exponential algorithm that computes the domination number. For example, this shows that the $k$-rainbow domination number can be computed in $O(1.4969^{nk})$ time~\cite{kn:rooij2,kn:rooij}. A $k$-rainbow family is a set of $k$-rainbow functions $f_1,\dots,f_d$ such that $\sum_{i=1}^{d} |f_i(x)| \leq k$ for every vertex $x$. The $k$-rainbow domatic number is defined as the maximal number of elements in such a family. Some results were obtained in~\cite{kn:fujita,kn:meierling,kn:sheikholeslami}. Whenever domination problems are under investigation, the class of strongly chordal graphs is of interest from a computational point of view. Farber showed that a minimum-weight dominating set can be computed in polynomial time on strongly chordal graphs~\cite{kn:farber}. Recently, Chang et al. showed that the $k$-rainbow domination number is equal to the so-called weak $\{k\}$-domination number for strongly chordal graphs~\cite{kn:bresar,kn:bresar2,kn:chang2}. A weak $\{k\}$-dominating function is a function $g:V(G) \rightarrow \{0,\dots,k\}$ such that, for every vertex $x$, \begin{equation} \label{eqn4} g(x)=0 \quad \text{implies} \quad \sum_{y \in N(x)} g(y) \geq k.
\end{equation} The weak $\{k\}$-domination number $\gamma_{wk}(G)$ is the minimum of $\sum_{x \in V(G)} g(x)$ over all weak $\{k\}$-dominating functions $g$. In their paper, Chang et al. show that the $k$-rainbow domination number can be computed in polynomial time for block graphs. As far as we know, the complexity of computing the $k$-rainbow domination number on strongly chordal graphs is open. It is easy to see that, for each $k$, the $k$-rainbow domination problem can be formulated in monadic second-order logic. This shows that, for each $k$, the parameter is computable in linear time for graphs of bounded treewidth or rankwidth~\cite{kn:courcelle}. \begin{theorem} \label{thm courcelle} Let $k \in \mathbb{N}$. There exists a linear-time algorithm that computes $\gamma_{rk}(G)$ for graphs of bounded rankwidth. \end{theorem} For example, Theorem~\ref{thm courcelle} implies that, for each $k$, $\gamma_{rk}(G)$ is computable in polynomial time for distance-hereditary graphs, ie, the graphs of rankwidth 1~\cite{kn:howorka}. Also, graphs of bounded outerplanarity have bounded treewidth, which implies bounded rankwidth. A direct application of the monadic second-order theory involves a constant which is an exponential function of $k$. In the following section we show that, often, this exponential factor can be avoided. \section{$k$-Rainbow domination on cographs} Cographs are the graphs without an induced $P_4$. As a consequence, cographs are completely decomposable by series and parallel operations, that is, joins and unions~\cite{kn:gallai}. In other words, a graph is a cograph if and only if every nontrivial induced subgraph is disconnected or its complement is disconnected. Cographs have a rooted, binary decomposition tree, called a cotree, with internal nodes labeled as joins and unions~\cite{kn:corneil3}. For a graph $G$ and $k \in \mathbb{N}$, let $F(G,k)$ denote the set of $k$-rainbow functions on $G$.
Furthermore, define \begin{eqnarray} F^{+}(G,k) &=& \{\; f \in F(G,k) \;\mid\; \forall_{x \in V(G)} \; f(x) \neq \varnothing \; \} \\ \text{and} \quad F^{-}(G,k) & = & F(G,k) \setminus F^{+}(G,k). \end{eqnarray} \begin{theorem} \label{thm cograph} There exists a linear-time algorithm to compute the $k$-rainbow domination number $\gamma_{rk}(G)$ for cographs $G$ and $k \in \mathbb{N}$. \end{theorem} \begin{proof} We describe a dynamic programming algorithm to compute the $k$-rainbow domination number. A minimizing $k$-rainbow function can be obtained by backtracking. \noindent It is easy to determine the minimal cost of $k$-rainbow functions that have no empty set-labels. We therefore concentrate on those $k$-rainbow functions for which some labels are empty sets. \noindent Let $k \in \mathbb{N}$. For a cograph $H$ define \begin{eqnarray} \label{eqn5} R^{+}(H) & =& \min \; \{\; \|f\|_{H} \; \mid \; f \in F^{+}(H,k) \; \text{and}\; f(V(H)) = [k] \;\}, \\ \label{eqn52} R^{-}(H) &=& \min \; \{\; \|f\|_{H} \; \mid \; f \in F^{-}(H,k)\;\}. \end{eqnarray} Here, we adopt the convention that $R^{-}(H)=\infty$ if $F^{-}(H,k)=\varnothing$. \noindent Notice that \begin{equation} \label{eqn16} \boxed{R^{+}(H) = \max \; \{\; |V(H)|,\; k\;\}.} \end{equation} \noindent Assume that $H$ is the union of two smaller cographs $H_1$ and $H_2$. Then \begin{equation} \label{eqn6} R^{-}(H) = \min \; \{ R^{-}(H_1)+|V(H_2)|,\; R^{-}(H_2)+|V(H_1)|, \; R^{-}(H_1)+R^{-}(H_2) \;\}. \end{equation} \noindent Now assume that $H$ is the join of two smaller cographs, $H_1$ and $H_2$. Then we have \begin{equation} \label{eqn7} R^{-}(H) = \min \;\{\; R^{+}(H_1), \;R^{+}(H_2), \;R^{-}(H_1), \;R^{-}(H_2),\; 2k \; \}. \end{equation} \noindent We prove the correctness of Equation~\eqref{eqn7} below. \noindent Let $f$ be a $k$-rainbow function from $F^{-}(H,k)$ with minimum cost over $H$. Consider the following cases. \begin{enumerate}[\rm (a)] \item $f(x) \neq \varnothing$ for all $x \in V(H_1)$. 
Then there is a vertex with an empty label in $H_2$. Let $L_2=\cup_{z \in V(H_2)} f(z)$. Define another $k$-rainbow function $f^{\prime}$ as follows. For an arbitrary vertex $x \in V(H_1)$ let $f^{\prime}(x)=f(x) \cup L_2$. For all other vertices $y$ in $H_1$ let $f^{\prime}(y)=f(y)$ and for all vertices $z$ in $H_2$ let $f^{\prime}(z)=\varnothing$. Notice that $f^{\prime}(V(H_1))=[k]$. So, $f^{\prime}$ is a $k$-rainbow function with at most the same cost as $f$. This shows that, in this case, $R^{-}(H)=R^{+}(H_1)$. \item $f(y) \neq \varnothing$ for all $y \in V(H_2)$. This case is similar to the previous one, so that $R^{-}(H) = R^{+}(H_2)$. \item $f(x) = f(y) = \varnothing$ for some $x \in V(H_1)$ and some $y \in V(H_2)$. Let \[L_1 = f(V(H_1)) \quad \text{and}\quad L_2=f(V(H_2)).\] For each color $\ell \in [k]$, let $\nu_{\ell}$ be the number of times that $\ell$ is used as a label, that is, \[\nu_\ell = | \{ \; x \; | \; x \in V(H) \quad \text{and}\quad \ell \in f(x)\;\}|.\] Consider the following two subcases. \begin{enumerate}[\rm (i)] \item There exists some $\ell$ with $\nu_\ell = 1$. Let $u$ be the unique vertex with $\ell \in f(u)$. Assume that $u \in V(H_1)$. The case where $u \in V(H_2)$ is similar. Then, $u$ is adjacent to all $x \in V(H)$ with $f(x) = \varnothing$. Modify $f$ to $f'$, such that $f'(u) = f(u) \cup L_2$, $f'(x) = f(x)$ for all $x \in V(H_1)\setminus \{u\}$, and $f'(y) = \varnothing$ for all $y \in V(H_2)$. Since every color of $L_2$ appears on some vertex of $H_2$, the cost of $f'$ is at most the cost of $f$, and $f'$ is a $k$-rainbow function from $F^{-}(H,k)$. Moreover, $f'$ restricted to $H_1$ is a $k$-rainbow function with minimum cost over $H_1$. Thus, in this case, $R^{-}(H) = R^{-}(H_1)$. \item For all $\ell$, $\nu_\ell \geq 2$. Then, the cost of $f$ over $H$ is at least $2k$. In this case, we use an alternative function $f'$, which selects an arbitrary vertex $u$ in $H_1$ and an arbitrary vertex $v$ in $H_2$, and sets $f'(u) = f'(v) = [k]$.
For all vertices $z \in V(H)\setminus \{u,v\}$, let $f'(z) = \varnothing$. The cost of $f'$ is $2k$ (which is at most the cost of $f$), and $f'$ remains a $k$-rainbow function from $F^{-}(H,k)$. Thus, in this case, $R^{-}(H) = 2k$. \end{enumerate} \end{enumerate} This proves the correctness of Equation~\eqref{eqn7}. \noindent At the root of the cotree, we obtain $\gamma_{rk}(G)$ via \begin{equation} \gamma_{rk}(G) = \min\; \{\; |V(G)|, \; R^{-}(G)\; \}. \end{equation} \noindent The cotree can be obtained in linear time (see, eg,~\cite{kn:bretscher,kn:corneil2,kn:habib}). Each $R^{+}(H)$ is obtained in $O(1)$ time via Equation~\eqref{eqn16}, and $R^{-}(H)$ is obtained in $O(1)$ time via Equations~\eqref{eqn6} and~\eqref{eqn7}. \noindent This proves the theorem. \qed\end{proof} The weak $\{k\}$-domination number (recall the definition near Equation~\eqref{eqn4}) was introduced by Bre\v{s}ar, Henning and Rall in~\cite{kn:bresar2} as an accessible, `monochromatic version' of $k$-rainbow domination. In the following theorem we turn the tables. In general, for graphs $G$ one has that $\gamma_{wk}(G) \leq \gamma_{rk}(G)$ since, given a $k$-rainbow function $f$, one obtains a weak $\{k\}$-dominating function $g$ by defining, for $x \in V(G)$, $g(x)=|f(x)|$. The parameters $\gamma_{wk}$ and $\gamma_{rk}$ do not always coincide. For example, $\gamma_{w2}(C_6)=3$ and $\gamma_{r2}(C_6)=4$. Bre\v{s}ar et al. ask, in their Question~3, for which graphs the equality $\gamma_{w2}(G)=\gamma_{r2}(G)$ holds. As far as we know this problem is still open. Chang et al. showed that weak $\{k\}$-domination and $k$-rainbow domination are equivalent for strongly chordal graphs~\cite{kn:chang2}. For cographs equality does not hold. For example, \begin{equation} \label{eqn22} \text{when} \quad G=(P_3 \oplus P_3) \otimes (P_3 \oplus P_3) \quad \text{then}\quad \gamma_{w3}(G)=4 \quad \text{and}\quad \gamma_{r3}(G)=6. \end{equation} Let $G$ be a graph and let $k \in \mathbb{N}$.
For a function $g: V(G) \rightarrow \{0,\dots,k\}$ we write $\|g\|_{G}=\sum_{x \in V(G)} g(x)$. Furthermore, for $S \subset V(G)$ we write $g(S)=\sum_{x \in S} g(x)$. \begin{theorem} \label{thm weak cograph} There exists an $O(k^2 \cdot n)$ algorithm to compute the weak $\{k\}$-domination number for cographs when a cotree is part of the input. \end{theorem} \begin{proof} Let $k \in \mathbb{N}$. For a cograph $H$ and $q \in \mathbb{N} \cup \{0\}$, define \begin{multline} \label{eqn23} W(H,q)=\min \; \{\;\|g\|_{H}\;|\; g:V(H) \rightarrow \{0,\dots,k\} \quad\text{and}\\ \forall_{x \in V(H)} \; g(x)=0 \quad \Rightarrow \quad g(N(x))+ q \geq k\;\}. \end{multline} \noindent When a cograph $H$ is the union of two smaller cographs $H_1$ and $H_2$ then \begin{equation} \label{eqn24} \gamma_{wk}(H)=\gamma_{wk}(H_1)+\gamma_{wk}(H_2). \end{equation} \noindent In such a case, we have \begin{equation} \label{eqn24-2} W(H,q)= W(H_1,q) + W(H_2,q). \end{equation} \noindent When a cograph $H$ is the join of two cographs $H_1$ and $H_2$ then the minimal cost of a weak $\{k\}$-dominating function is at most $2k$: give one vertex on each side the value $k$. Then \begin{equation} \label{eqn25} W(H,q)= \min \; \{\;W_1+W_2\;|\; W_1=W(H_1,q+W_2) \quad\text{and}\quad W_2=W(H_2,q+W_1) \;\}. \end{equation} \noindent The weak $\{k\}$-domination number of a cograph~$G$, $W(G,0)$, can be obtained via the above recursion, spending $O(k^2)$ time in each of the $n$ nodes of the cotree. This completes the proof. \qed\end{proof} \begin{remark} A $\{k\}$-dominating function~\cite{kn:domke} $g:V(G)\rightarrow \{0,\dots,k\}$ satisfies \[\forall_{x \in V(G)} \; g(N[x]) \geq k.\] The $\{k\}$-domination number $\gamma_{\{k\}}(G)$ is the minimal cost of a $\{k\}$-dominating function. A proof similar to that of Theorem~\ref{thm weak cograph} shows the following theorem. \begin{theorem} There exists an $O(k^2 \cdot n)$ algorithm to compute $\gamma_{\{k\}}(G)$ when $G$ is a cograph.
\end{theorem} Similar results can be obtained for, eg, the $(j,k)$-domination number, introduced by Rubalcaba and Slater~\cite{kn:rubalcaba,kn:rubalcaba2}. \end{remark} \begin{remark} A frequently studied generalization of cographs is the class of $P_4$-sparse graphs. A graph is $P_4$-sparse if every set of 5 vertices induces at most one $P_4$~\cite{kn:hoang,kn:jamison}. We show in Appendix~\ref{appendix cographs} that the rainbow domination problem can be solved in linear time on $P_4$-sparse graphs. \end{remark} \section{Weak $\{k\}$-L-domination on trivially perfect graphs} Chang et al. were able to solve the $k$-rainbow domination problem (and the weak $\{k\}$-domination problem) for two subclasses of strongly chordal graphs, namely for trees and for block graphs. In order to obtain linear-time algorithms, they introduced a variant, called the weak $\{k\}$-L-domination problem~\cite{kn:chang2,kn:chang}. In this section we show that this problem can be solved in $O(k\cdot n)$ time for trivially perfect graphs. \begin{definition} A $\{k\}$-assignment of a graph $G$ is a map $L$ that assigns to each vertex $x$ a label $L(x)=(a_x,b_x)$, where $a_x$ and $b_x$ are elements of $\{0,\dots,k\}$. A weak $\{k\}$-L-dominating function is a function $w:V(G) \rightarrow \{0,\dots,k\}$ such that, for each vertex $x$, the following two conditions hold. \begin{eqnarray} w(x) & \geq & a_x, \quad\text{and}\\ w(x)=0 & \Rightarrow & w(N[x]) \geq b_x. \end{eqnarray} The weak $\{k\}$-L-domination number is defined as \begin{equation} \label{27} \gamma_{wkL}(G)=\min\;\{\;\|g\|\;|\; \text{$g$ is a weak $\{k\}$-L-dominating function on $G$}\;\}. \end{equation} \end{definition} Notice that \begin{equation} \label{eqn28} \forall_{x \in V(G)}\; L(x)=(0,k) \quad\Rightarrow \quad \gamma_{wk}(G)=\gamma_{wkL}(G). \end{equation} \begin{definition} A graph is trivially perfect if it has no induced $P_4$ or $C_4$.
\end{definition} Wolk investigated the trivially perfect graphs as the comparability graphs of forests~\cite{kn:wolk}. Each component of a trivially perfect graph $G$ has a model which is a rooted tree $T$ with vertex set $V(G)$. Two vertices of $G$ are adjacent if, in $T$, one lies on the path to the root of the other one. Thus each path from a leaf to the root is a maximal clique in $G$, and these are all the maximal cliques. See~\cite{kn:chu,kn:golumbic} for the recognition of these graphs. In the following we assume that a rooted tree $T$, serving as a model for the connected graph, is part of the input. We simplify the problem by using two basic observations. (See~\cite{kn:chang2,kn:chang} for similar observations.) Let $T$ be a rooted tree which is the model for a connected trivially perfect graph $G$. Let $R$ be the root of $T$; note that $R$ is a universal vertex in $G$. We assume that $G$ is equipped with a $\{k\}$-assignment $L$, which assigns to each vertex $x$ a pair $(a_x,b_x)$ of numbers from $\{0,\dots,k\}$. \begin{enumerate}[\rm (I)] \item There exists a weak $\{k\}$-L-dominating function $g$ of minimal cost such that \begin{equation} \label{eqn29} \forall_{x \in V(G)\setminus \{R\}} \; a_x > 0 \quad \Rightarrow \quad g(x)=a_x. \end{equation} \item There exists a weak $\{k\}$-L-dominating function $g$ of minimal cost such that \begin{equation} \label{eqn32} \forall_{x \in V(G) \setminus \{R\}} \; a_x=0 \quad\text{and}\quad b_x \leq \sum_{y \in N[x]} a_y \quad \Rightarrow \quad g(x)=0. \end{equation} \end{enumerate} \begin{definition} The \underline{reduced instance} of the weak $\{k\}$-L-domination problem is the subtree $T^{\prime}$ of $T$ with vertex set $V(G) \setminus W$, where \begin{multline} \label{eqn30} W = \{\; x \;|\; x \in V(G) \setminus \{R\} \quad \text{and}\quad a_x >0\;\} \quad \cup \quad \\ \{\;x \;|\; x \in V(G) \setminus \{R\} \quad\text{and}\quad a_x =0 \quad\text{and}\quad \sum_{y \in N[x]} a_y \geq b_x\;\}.
\end{multline} The labels of the reduced instance are, for $x \neq R$, $L(x)=(a_x^{\prime},b_x^{\prime})$, where \begin{equation} \label{eqn31} a_x^{\prime}=0 \quad \text{and}\quad b_x^{\prime}=b_x - \sum_{y \in N[x]} a_y, \end{equation} and the root $R$ has a label $L(R)=(a_R^{\prime},b_R^{\prime})$, where \begin{equation} \label{eqn33} a_R^{\prime}=a_R \quad\text{and}\quad b_R^{\prime}=\max\;\{\;0,\;b_R-\sum_{x \in V(G) \setminus \{R\}} a_x\;\}. \end{equation} \end{definition} The previous observations prove the following lemma. \begin{lemma} Let $T^{\prime}$ and $L^{\prime}$ be a reduced instance of a weak $\{k\}$-L-domination problem. Then \[\gamma_{wkL}(G)=\gamma_{wkL^{\prime}}(G^{\prime})+ \sum_{x \in V(G)\setminus \{R\}} a_x.\] \end{lemma} In the following, let $G$ be a connected, trivially perfect graph and let $G$ be equipped with a $\{k\}$-assignment. Let $G^{\prime}=(V^{\prime},E^{\prime})$ be a reduced instance with a model $T^{\prime}$, a root $R$, and a reduced assignment $L^{\prime}$. Let $g$ be a weak $\{k\}$-$L^{\prime}$-dominating function on $G^{\prime}$ of minimal cost. Notice that we may assume that \[\boxed{\forall_{x \in V(G^{\prime}) \setminus \{R\}} \; g(x) \in \{0,1\}.}\] Let $x$ be an internal vertex in the tree $T^{\prime}$ and let $Z$ be the set of descendants of $x$. Let $P$ be the path in $T^{\prime}$ from $x$ to the root $R$. Assume that $Z$ is a union of $r$ distinct cliques, say $B_1,\dots,B_r$. Assume that the vertices of each $B_j$ are ordered $x^j_1,\dots,x^j_{r_j}$ such that \[\boxed{p \leq q \quad\Rightarrow\quad b^{\prime}_{x^j_p} \geq b^{\prime}_{x^j_q}.}\] Define $d_{x^j_p}=b^{\prime}_{x^j_p}-p+1$. Relabel the vertices of $Z$ as $z_1,\dots,z_{\ell}$ such that \[\boxed{p \leq q \quad\Rightarrow\quad d_{z_p} \geq d_{z_q}.}\] \begin{lemma} There exists an optimal weak $\{k\}$-$L^{\prime}$-dominating function $g$ such that $g(z_i) \geq g(z_j)$ when $i < j$. \end{lemma} We moved the proof of this lemma to Appendix~\ref{appendix TP}.
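The two boxed orderings can be sketched in a few lines. This is our own illustration with a hypothetical input convention: each clique $B_j$ is given as a list of the reduced bounds $b^{\prime}$ of its vertices.

```python
def order_by_d(cliques):
    """cliques: one list of b'-values per clique B_j.
    Sorts each clique non-increasingly, computes d = b' - p + 1 for the
    vertex in (1-based) position p, and returns all (d, clique, position)
    triples sorted by non-increasing d: the relabelling z_1, ..., z_l."""
    ranked = []
    for j, clique in enumerate(cliques):
        for p, b in enumerate(sorted(clique, reverse=True), start=1):
            ranked.append((b - p + 1, j, p))
    ranked.sort(key=lambda triple: -triple[0])
    return ranked
```

For instance, two cliques with bounds $[3,1]$ and $[2]$ yield $d$-values $3,0$ and $2$, so the relabelling visits the cliques in the order $3, 2, 0$.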
\begin{definition} For $a \in \{0,\dots,k\}$, $a \geq a_R^{\prime}$, let $\Gamma(G^{\prime},L^{\prime},a)$ be the minimal cost over all weak $\{k\}$-$L^{\prime}$-dominating functions $g$ on $G^{\prime}$ on condition that $g(P) \geq a$. \end{definition} \begin{lemma} Define $d_{z_{\ell+1}}=a$. Let $i^{\ast} \in \{1,\dots,\ell+1\}$ be such that \begin{enumerate}[\rm (a)] \item $\max\;\{\;a,\;d_{z_{i^{\ast}}}\;\}+i^{\ast}-1$ is smallest possible, and \item $i^{\ast}$ is smallest possible with respect to $(a)$. \end{enumerate} Let $H=G^{\prime}-Z$. Let $L^H$ be the restriction of $L^{\prime}$ to $V(H)$ with the following modifications. \[\forall_{y \in P}\; b^H_y=\max\;\{\;0,\;b^{\prime}_y-i^{\ast}+1\;\}.\] Let $a^H=\max\;\{\;a,\;d_{z_{i^{\ast}}}\;\}$. Then \[\Gamma(G^{\prime},L^{\prime},a)=\Gamma(H,L^H,a^H)+i^{\ast}-1.\] \end{lemma} We moved the proof of this lemma to Appendix~\ref{appendix TP}. The previous lemmas prove the following theorem. \begin{theorem} Let $G$ be a trivially perfect graph with $n$ vertices. Let $T$ be a rooted tree that represents $G$. Let $k \in \mathbb{N}$ and let $L$ be a $\{k\}$-assignment of $G$. Then there exists an $O(k \cdot n)$ algorithm that computes a weak $\{k\}$-L-dominating function of $G$ of minimal cost. \end{theorem} The related $(j,k)$-domination problem can be solved in linear time on trivially perfect graphs. The weak $\{k\}$-L-domination problem can be solved in linear time on complete bipartite graphs. We moved that section to Appendix~\ref{appendix CB}. \section{$2$-Rainbow domination of interval graphs} \label{section interval} In~\cite{kn:bresar2} the authors ask four questions, the last one of which is whether there is a polynomial algorithm for the $2$-rainbow domination problem on (proper) interval graphs. In this section we show that $2$-rainbow domination can be solved in polynomial time on interval graphs. We use the equivalence of the $2$-rainbow domination problem with the weak $\{2\}$-domination problem.
The equivalence of the two problems, when restricted to trees and interval graphs, was observed in~\cite{kn:bresar2}. Chang et al. proved that it holds for general $k$ when restricted to the class of strongly chordal graphs~\cite{kn:chang2}. The class of interval graphs is properly contained in that of the strongly chordal graphs. An interval graph has a consecutive clique arrangement. That is, a linear ordering $[C_1,\dots,C_t]$ of the maximal cliques of the interval graph such that, for each vertex, the cliques that contain it occur consecutively in the ordering~\cite{kn:gilmore}. For a function $g: V(G) \rightarrow \{0,1,2\}$ we write, as usual, for any $S \subseteq V(G)$, \[g(S)=\sum_{x \in S} g(x).\] The weak $\{2\}$-domination problem is defined as follows. \begin{definition} Let $G$ be a graph. A function $g: V(G) \rightarrow \{0,1,2\}$ is a weak $\{2\}$-dominating function on $G$ if \begin{equation} \label{eqn19} \forall_{x \in V(G)} \;\; g(x)=0 \quad \text{implies}\quad g(N[x]) \geq 2. \end{equation} The weak $\{2\}$-domination number of $G$ is \begin{equation} \label{eqn20} \gamma_{w2}(G)=\min \;\{\;\sum_{x \in V(G)}\; g(x) \;|\; \text{$g$ is a weak $\{2\}$-dominating function on $G$}\;\}. \end{equation} \end{definition} Bre\v{s}ar and \v{S}umenjak proved the following theorem~\cite{kn:bresar2}. \begin{theorem} When $G$ is an interval graph, \begin{equation} \label{eqn21} \gamma_{w2}(G)=\gamma_{r2}(G). \end{equation} \end{theorem} In the following, let $G=(V,E)$ be an interval graph. \begin{lemma} \label{lm int1} There exists a weak $\{2\}$-dominating function $g$, with $g(V)=\gamma_{r2}(G)$, such that every maximal clique has at most 2 vertices assigned the value 2. \end{lemma} \begin{proof} Assume that $C_i$ is a maximal clique in the consecutive clique arrangement of $G$. Assume that $C_i$ has 3 vertices $x$, $y$ and $z$ with $g(x)=g(y)=g(z)=2$.
Assume that, among the three of them, $x$ has the most neighbors in $\cup_{j\geq i} C_j$ and that $y$ has the most neighbors in $\cup_{j \leq i} C_j$. Then any neighbor of $z$ is also a neighbor of $x$ or it is a neighbor of $y$. So, if we redefine $g(z)=1$, we obtain a weak $\{2\}$-dominating function with value less than $g(V)$, a contradiction. \qed\end{proof} \begin{lemma} \label{lm int2} There exists a weak $\{2\}$-dominating function $g$ with minimum value $g(V)=\gamma_{r2}(G)$ such that every maximal clique has at most four vertices with value 1. \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{lm int1}. Let $C_i$ be a clique in the consecutive clique arrangement of $G$. Assume that $C_i$ has 5 vertices $x_1,\dots,x_5$ with $g(x_t)=1$ for each $t$. Order the vertices $x_t$ according to their neighborhoods in $\cup_{j \geq i} C_j$ and according to their neighborhoods in $\cup_{j \leq i} C_j$. For simplicity, assume that $x_1$ and $x_2$ have the most neighbors in the first union of cliques and that $x_3$ and $x_4$ have the most neighbors in the second union of cliques. Then $g(x_5)$ can be reduced to zero; every vertex that has $x_5$ in its neighborhood already sees two other vertices with value 1. \noindent This proves the lemma. \qed\end{proof} \begin{theorem} \label{thm int} There exists a polynomial algorithm to compute the $2$-rainbow domination number for interval graphs. \end{theorem} We moved the proof of this theorem and some remarks to Appendix~\ref{appendix interval}. We obtained similar results for the class of permutation graphs. We moved that section to Appendix~\ref{appendix permutation}. \section{$\NP$-Completeness for splitgraphs} A graph $G$ is a splitgraph if $G$ and $\Bar{G}$ are both chordal. A splitgraph has a partition of its vertices into two sets $C$ and $I$, such that the subgraph induced by $C$ is a clique and the subgraph induced by $I$ is an independent set.
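The split partition is easy to verify programmatically. The sketch below is our own illustration (not code from the paper): it checks that a candidate pair $(C,I)$ partitions the vertex set into a clique and an independent set.

```python
def is_split_partition(adj, C, I):
    """adj: dict vertex -> set of neighbors.
    Returns True iff (C, I) partitions V(G), C induces a clique,
    and I induces an independent set."""
    C, I = set(C), set(I)
    if C | I != set(adj) or C & I:
        return False
    clique = all(v in adj[u] for u in C for v in C if u != v)
    independent = all(v not in adj[u] for u in I for v in I if u != v)
    return clique and independent

# the star K_{1,3} is a splitgraph: the center forms the clique,
# the leaves the independent set
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
```

Note that a split partition need not be unique: in the star above, moving one leaf into $C$ still gives a valid partition.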
Although the $\NP$-completeness of $k$-rainbow domination for chordal graphs was established in~\cite{kn:chang}, their proof does not imply the intractability for the class of splitgraphs. However, that is easy to mend. \begin{theorem} \label{thm NP-c rainbow} Let $k \in \mathbb{N}$. Computing $\gamma_{rk}(G)$ is $\NP$-complete for splitgraphs. \end{theorem} We moved the proof of this theorem to Appendix~\ref{appendix splitgraphs}. Similarly, we have the following theorem. \begin{theorem} \label{thm NP-c weak} Let $k \in \mathbb{N}$. Computing $\gamma_{wk}(G)$ is $\NP$-complete for splitgraphs. \end{theorem} We moved the proof of this theorem to Appendix~\ref{appendix splitgraphs}. \begin{thebibliography}{99} \bibitem{kn:aharoni}Aharoni,~R. and T.~Szab\'o, Vizing's conjecture for chordal graphs, {\em Discrete Mathematics\/} {\bf 309} (2009), pp.~1766--1768. \bibitem{kn:araujo}Araujo,~J., C.~Sales and I.~Sau, Weighted coloring on $P_4$-sparse graphs. Manuscript on HAL, Id:~inria--00467853, 2010. \bibitem{kn:babel}Babel,~L., {\em On the $P_4$-structure of graphs\/}, PhD Thesis, TU-M\"unchen, 1997. \bibitem{kn:bertossi}Bertossi,~A., Dominating sets for split and bipartite graphs, {\em Information Processing Letters\/} {\bf 19} (1984), pp.~37--40. \bibitem{kn:bretscher}Bretscher,~A., D.~Corneil, M.~Habib and C.~Paul, A simple linear time LexBFS cograph recognition algorithm, {\em SIAM Journal on Discrete Mathematics\/} {\bf 22} (2008), pp.~1277--1296. \bibitem{kn:chu}Chu,~F., A simple linear time certifying LBFS-based algorithm for recognizing trivially perfect graphs and their complements, {\em Information Processing Letters\/} {\bf 107} (2008), pp.~7--12. \bibitem{kn:corneil2}Corneil,~D., H.~Lerchs and L.~Stewart-Burlingham, Complement reducible graphs, {\em Discrete Applied Mathematics\/} {\bf 3} (1981), pp.~163--174. 
\bibitem{kn:corneil3}Corneil,~D., Y.~Perl and L.~Stewart, A linear recognition algorithm for cographs, {\em SIAM Journal on Computing\/} {\bf 14} (1985), pp.~926--934. \bibitem{kn:courcelle}Courcelle,~B., The expression of graph properties and graph transformations in monadic second-order logic. In: {\em Handbook of graph grammars and computing by graph transformation\/}, World Scientific Publishing Co., Inc. River Edge, NJ, USA, 1997. \bibitem{kn:courcelle2}Courcelle,~B., J.~Makowsky and U.~Rotics, Linear time solvable optimization problems on graphs of bounded cliquewidth, {\em Theory of Computing Systems\/} {\bf 33} (2000), pp.~125--150. \bibitem{kn:farber}Farber,~M., Domination, independent domination, and duality in strongly chordal graphs, {\em Discrete Applied Mathematics\/} {\bf 7} (1984), pp.~115--130. \bibitem{kn:gallai}Gallai,~T., Transitiv orientierbare Graphen, {\em Acta Math. Acad. Sci. Hung.\/} {\bf 18} (1967), pp.~25--66. \\ A translation appears in (J.~Ram\'irez-Alfons\'in and B.~Reed, eds) {\em Perfect graphs\/}, John Wiley \& Sons, Interscience series in discrete mathematics and optimization, Chichester, 2001. \bibitem{kn:gilmore}Gilmore,~P. and A.~Hoffman, A characterization of comparability graphs and of interval graphs, {\em Canadian Journal of Mathematics\/} {\bf 16} (1964), pp.~539--548. \bibitem{kn:golumbic}Golumbic,~M., Trivially perfect graphs, {\em Discrete Mathematics\/} {\bf 24} (1978), pp.~105--107. \bibitem{kn:habib}Habib,~M. and C.~Paul, A simple linear time algorithm for cograph recognition, {\em Discrete Applied Mathematics\/} {\bf 145} (2005), pp.~183--197. \bibitem{kn:hoang}Ho\`ang,~C., {\em A class of perfect graphs\/}, Master's thesis, School of Computer Science, McGill University, Montreal, 1983. \bibitem{kn:howorka}Howorka,~E., A characterization of distance-hereditary graphs, {\em The Quarterly Journal of Mathematics, Oxford, Second Series\/} {\bf 28} (1977), pp.~417--420. \bibitem{kn:jamison}Jamison,~B.
and S.~Olariu, A tree representation for $P_4$-sparse graphs, {\em Discrete Applied Mathematics\/} {\bf 35} (1992), pp.~115--129. \bibitem{kn:klavik}Klav\'{\i}k,~P., J.~Kratochv\'{\i}l and B.~Walczak, Extending partial representations of function graphs and permutation graphs. Manuscript on arXiv: 1204.6391, 2012. \bibitem{kn:kloks}Kloks,~T. and Y.~Wang, {\em Advances in graph algorithms\/}. Manuscript on viXra: 1409.0165, 2014. \bibitem{kn:rooij2}van~Rooij,~J., {\em Exact exponential-time algorithms for domination problems in graphs\/}, PhD Thesis, Utrecht University, 2011. \bibitem{kn:rooij}van~Rooij,~J. and H.~Bodlaender, Exact algorithms for dominating set, {\em Discrete Applied Mathematics\/} {\bf 159} (2011), pp.~2147--2164. \bibitem{kn:vizing}Vizing,~V., Some unsolved problems in graph theory, {\em Uspehi Mat. Nauk\/} (in Russian) {\bf 23} (1968), pp.~117--134. \bibitem{kn:wolk}Wolk,~E., The comparability graph of a tree, {\em Proceedings of the American Mathematical Society\/} {\bf 13} (1962), pp.~789--795. (See also the note that appeared in the same journal, volume 16, 1965.) \end{thebibliography} \appendix \section{Rainbow domination on $P_4$-sparse graphs} \label{appendix cographs} Many problems can be solved in linear time for the class of cographs and $P_4$-sparse graphs, see, eg,~\cite[Theorem~2 and Corollary~3]{kn:courcelle2}. Both classes are of bounded cliquewidth (or rankwidth). We are not aware of many problems whose complexities differ on the two classes of graphs. An interesting example, which might have different complexities on cographs and $P_4$-sparse graphs, is the weighted coloring problem~\cite{kn:araujo}. In this section we show that the $k$-rainbow domination problem can be solved in linear time on $P_4$-sparse graphs. Ho\`ang introduced $P_4$-sparse graphs as follows. \begin{definition} A graph is $P_4$-sparse if every set of 5 vertices contains at most one $P_4$ as an induced subgraph.
\end{definition} Jamison and Olariu showed that $P_4$-sparse graphs have a decomposition similar to the decomposition of cographs. To define it we need the concept of a spider. \begin{definition} A graph $G$ is a thin spider if its vertices can be partitioned into three sets $S$, $K$ and $T$, such that \begin{enumerate}[\rm (a)] \item $S$ induces an independent set and $K$ induces a clique and \[|S|=|K| \geq 2.\] \item Every vertex of $T$ is adjacent to every vertex of $K$ and to no vertex of $S$. \item There is a bijection from $S$ to $K$ such that every vertex of $S$ is adjacent to exactly one vertex of $K$, namely its image under the bijection. \end{enumerate} A thick spider is the complement of a thin spider. \end{definition} Notice that, possibly, $T=\varnothing$. The set $T$ is called the head, and $K$ and $S$ are called the body and the feet of the thin spider. Jamison and Olariu proved the following theorem~\cite{kn:jamison}. \begin{theorem} A graph is $P_4$-sparse if and only if for every induced subgraph $H$ one of the following holds. \begin{enumerate}[\rm (i)] \item $H$ is disconnected, or \item $\Bar{H}$ is disconnected, or \item $H$ is isomorphic to a spider. \end{enumerate} \end{theorem} \begin{theorem} \label{P4sparse} There exists a linear-time algorithm to compute the $k$-rainbow domination number for $P_4$-sparse graphs and $k \in \mathbb{N}$. \end{theorem} \begin{proof} We extend Formula~\eqref{eqn7} in the proof of Theorem~\ref{thm cograph} to include spiders. \noindent Assume that $G$ is a thin spider, with a head $T$, a body $K$ and an independent set of feet $S$. We need to consider only $k$-rainbow colorings such that at least one vertex of $G$ has an empty label. Notice that we may assume that all the feet have labels of cardinality at most one. Furthermore, there is at most one vertex in $S$ that has an empty label. \noindent First assume that one foot, say $x$, has an empty label. Then its neighbor, say $y \in K$, has label $[k]$.
In that case, the (only) optimal $k$-rainbow coloring has \begin{enumerate}[\rm (a)] \item all vertices in $S \setminus \{x\}$ a label of cardinality 1, \item all vertices in $K \setminus \{y\}$ an empty label, and \item all vertices of $T$ also an empty label. \end{enumerate} It follows that the cost in this case is \begin{equation} \label{eqsparse1} |S|-1+k. \end{equation} \noindent All other optimal $k$-rainbow colorings give all feet a label of cardinality 1. If some vertex of $T$ has an empty label, then its neighborhood has cost at least $k$, and the total cost will be more than~\eqref{eqsparse1}. So, the only alternative $k$-rainbow coloring assigns no empty label to $S$ nor $T$, and assigns an empty label to some vertex $v$ of $K$. In such a case, the neighborhood of $v$ has cost at least $k$, so that, together with the cost of the $|S|-1$ feet that are not adjacent to $v$, the total cost is at least that of~\eqref{eqsparse1}. \noindent In conclusion, the optimal cost of a thin spider $G$ is \begin{equation} \min\; \{\; |V(G)|, |S| - 1 + k\; \}. \end{equation} \noindent The case analysis for the thick spider is similar. This proves the theorem. \qed\end{proof} \begin{remark} A graph $G$ is $P_4$-lite if every induced subgraph $H$ of at most 6 vertices satisfies one of the following. \begin{enumerate}[\rm (i)] \item $H$ contains at most two induced $P_4$'s, or \item $H$ is isomorphic to $S_3$ or $\Bar{S_3}$. \end{enumerate} The class of $P_4$-lite graphs contains the $P_4$-sparse graphs. Babel and Olariu studied $(q,q-4)$-graphs, see, eg,~\cite{kn:babel}. It would be interesting to study the complexity of the $k$-rainbow problem on these graphs. \end{remark} \section{Trivially perfect graphs} \label{appendix TP} \begin{lemma} There exists an optimal weak $\{k\}$-$L^{\prime}$-dominating function $g$ such that $g(z_i) \geq g(z_j)$ when $i < j$. \end{lemma} \begin{proof} Assume not; then there are indices $i < j$ with $g(z_i)=0$ and $g(z_j)=1$. Let $\Hat{g}$ be the same as $g$ except that $\Hat{g}(z_i)=1$ and $\Hat{g}(z_j)=0$.
To see that $\Hat{g}$ is a weak $\{k\}$-$L^{\prime}$-dominating function, first notice that when $z_i$ and $z_j$ are in a common clique of $G^{\prime}[Z]$, $\Hat{g}$ defines a weak $\{k\}$-$L^{\prime}$-dominating function of at most the same cost. \noindent Now assume that the claim holds for all pairs contained in common cliques. Then choose $z_i$ and $z_j$ with $i < j$, $g(z_i)=0$ and $g(z_j)=1$, and $z_i$ and $z_j$ in different cliques, with $i$ as small as possible and $j$ as large as possible. Say $z_i=x^m_{m^{\ast}}$ and $z_j=x^h_{h^{\ast}}$. Then, by assumption, \[\forall_{t < m^{\ast}}\; g(x^m_t)=1 \quad \forall_{t > m^{\ast}}\; g(x^m_t)=0 \quad \forall_{t < h^{\ast}}\; g(x^h_t)=1 \quad \forall_{t > h^{\ast}}\; g(x^h_t)=0.\] Now, notice that \[g(N(z_i))=m^{\ast}-1+g(P) \geq b^{\prime}_{z_i} \quad\Rightarrow\quad g(P) \geq b^{\prime}_{z_i}-m^{\ast}+1.\] We prove that $\Hat{g}(N[z_j]) \geq b^{\prime}_{z_j}$. We have that \[\Hat{g}(N[z_j])= g(P) +h^{\ast} -1 \geq b^{\prime}_{z_i}-m^{\ast}+h^{\ast}.\] By definition and the ordering on $Z$, \[b^{\prime}_{z_i}-m^{\ast}+1 = d_{z_i} \geq d_{z_j}=b^{\prime}_{z_j}-h^{\ast}+1 \quad\Rightarrow \quad \Hat{g}(N[z_j]) \geq b^{\prime}_{z_j}.\] This proves the lemma. \qed\end{proof} \begin{lemma} \label{lm appendix TP} Define $d_{z_{\ell+1}}=a$. Let $i^{\ast} \in \{1,\dots,\ell+1\}$ be such that \begin{enumerate}[\rm (a)] \item $\max\;\{\;a,\;d_{z_{i^{\ast}}}\;\}+i^{\ast}-1$ is smallest possible, and \item $i^{\ast}$ is smallest possible with respect to $(a)$. \end{enumerate} Let $H=G^{\prime}-Z$. Let $L^H$ be the restriction of $L^{\prime}$ to $V(H)$ with the following modifications. \[\forall_{y \in P}\; b^H_y=\max\;\{\;0,\;b^{\prime}_y-i^{\ast}+1\;\}.\] Let $a^H=\max\;\{\;a,\;d_{z_{i^{\ast}}}\;\}$. Then \[\Gamma(G^{\prime},L^{\prime},a)=\Gamma(H,L^H,a^H)+i^{\ast}-1.\] \end{lemma} \begin{proof} Let $g$ be a weak $\{k\}$-$L^{\prime}$-dominating function, satisfying $g(P) \geq a$, of minimal cost.
We first prove that the restriction of $g$ to $H$ is a weak $\{k\}$-$L^H$-dominating function and that $g(P) \geq a^H$. \noindent Let $i$ be the largest index such that $g(z_j)=1$ for all $j < i$. If $i=\ell+1$, we have $g(P) \geq a=d_{z_{\ell+1}}$. \noindent Now assume that $i \leq \ell$. Let $z_i=x^m_{m^{\ast}}$. Then \[g(N[z_i]) = g(P)+m^{\ast}-1 \geq b^{\prime}_{z_i} \quad\Rightarrow\quad g(P) \geq b^{\prime}_{z_i} - m^{\ast}+1=d_{z_i}.\] Thus $g(P) \geq \max \{\;a,\;d_{z_i}\;\}$ and so we have that \[g(P) +i-1 \geq \max \;\{\;a,\;d_{z_i}\;\} +i-1 \geq \max\;\{\;a,\;d_{z_{i^{\ast}}}\;\}+i^{\ast}-1.\] We claim that $i^{\ast} \geq i$. \noindent Suppose that $i^{\ast} < i$. Then let $\Hat{g}$ be the same as $g$ except that \[\Hat{g}(R)= \min\;\{\;g(R)+i - i^{\ast},\; k\;\} \quad\text{and}\quad \forall_{i^{\ast} \leq j < i} \; \Hat{g}(z_j)=0.\] Let $j \geq i^{\ast}$ and let $h$ and $h^{\ast}$ be such that $z_j=x^h_{h^{\ast}}$. Let $t$ be the smallest index for which $\Hat{g}(x^h_t)=0$. By the inequality above, $\Hat{g}(P)\geq \max\{\;a,\;d_{z_{i^{\ast}}}\;\}$, and so, \[\Hat{g}(N[x^h_{h^{\ast}}]) = \Hat{g}(N[x^h_t])=\Hat{g}(P)+t-1 \geq d_{z_{i^{\ast}}}+t-1 \geq d_{x^h_t}+t-1 =b^{\prime}_{x^h_t} \geq b^{\prime}_{x^h_{h^{\ast}}}.\] Thus $\Hat{g}$ is a weak $\{k\}$-$L^{\prime}$-dominating function on $G^{\prime}$ with $\Hat{g}(P) \geq a$ and with cost at most $\|g\|$. Therefore, we may assume that \[i^{\ast} \geq i \quad\text{and}\quad g(P) \geq \max \; \{\;a,\;d_{z_{i^{\ast}}}\;\} = a^H.\] \noindent Also, notice that \begin{multline} \forall_{y \in P} \; g(y)=0 \quad\Rightarrow \\ g^H(N[y]) \geq \max \{\;0,\; g(N[y])-i+1\;\} \geq \max\;\{\;0,\;b_y^{\prime}-i^{\ast}+1\;\}=b_y^H. \end{multline} This proves that \[\Gamma(G^{\prime},L^{\prime},a) - i^{\ast}+1 \geq \Gamma(H,L^H,a^H).\] \noindent Now let $g^H$ be a weak $\{k\}$-$L^H$-dominating function of $H$, with $g^H(P) \geq a^H$, of minimal cost.
Extend $g^H$ to a function $g^{\prime}$ on $G^{\prime}$ by defining $g^{\prime}(z_j)=1$ for all $j < i^{\ast}$ and $g^{\prime}(z_j)=0$ for all $j \geq i^{\ast}$. We claim that $g^{\prime}$ is a weak $\{k\}$-$L^{\prime}$-dominating function of $G^{\prime}$. Let $i \geq i^{\ast}$. Say $z_i=x^m_{m^{\ast}}$, and note that $m^{\ast}$ is then the first index such that $g^{\prime}(x^m_j)=0$ for all $j \geq m^{\ast}$. Then \[g^{\prime}(N[z_i])=g^{\prime}(P)+m^{\ast}-1 \geq d_{z_{i^{\ast}}}+m^{\ast}-1 \geq d_{z_i}+m^{\ast}-1 = b^{\prime}_{z_i}.\] For $i < i^{\ast}$ we have $g^{\prime}(z_i) \neq 0$. For the vertices $y \in P$, $g^{\prime}(N[y])=k$ or \[g^{\prime}(N[y]) \geq g^H(N_H[y])+i^{\ast}-1 \geq b^H_y+i^{\ast}-1 \geq b_y^{\prime}.\] This proves the lemma. \qed\end{proof} \section{$\NP$-Completeness proofs for splitgraphs} \label{appendix splitgraphs} \begin{theorem} \label{appendix thm NP-c rainbow} For each $k \in \mathbb{N}$, the $k$-rainbow domination problem is $\NP$-complete for splitgraphs. \end{theorem} \begin{proof} Assume that $G$ is a splitgraph with maximal clique $C$ and independent set $I$. Construct an auxiliary graph $G^{\prime}$ by attaching $k-1$ pendant vertices to each vertex of $C$. Thus $G^{\prime}$ has $|V(G)|+|C|(k-1)$ vertices, and $G^{\prime}$ remains a splitgraph. We prove that \[\gamma_{rk}(G^{\prime})=\gamma(G) + |C|\cdot (k-1).\] Since domination is $\NP$-complete for splitgraphs~\cite{kn:bertossi}, this proves that $k$-rainbow domination is $\NP$-complete also. \noindent We first show that \[\gamma_{rk}(G^{\prime}) \leq \gamma(G) + |C|\cdot (k-1).\] Consider a dominating set $D$ of $G$ with $|D|=\gamma(G)$.
We use $D$ to construct a $k$-rainbow function $f$ for $G^{\prime}$ as follows: \begin{itemize} \item For any $v \in D$, if $v \in C$, let $f(v) = [k]$; else, if $v \in I$, let $f(v) = \{k\}$; \item For any $v \in V(G)\setminus D$, let $f(v) = \varnothing$; \item For the $k-1$ pendant vertices attached to a vertex $v \in C$, if $f(v) = [k]$, then $f$ assigns to each of these pendant vertices an empty set. Otherwise, if $f(v) = \varnothing$, then $f$ assigns the distinct size-1 sets $\{1\}, \{2\}, \ldots, \{k-1\}$ to these pendant vertices, respectively. \end{itemize} It is straightforward to check that $f$ is a $k$-rainbow function. Moreover, we have \begin{equation} \gamma_{rk}(G^{\prime}) \leq \sum_{x \in V(G^{\prime})} \: |f(x)| = \gamma(G) + |C|\cdot (k-1). \end{equation} \noindent We now show that \[\gamma_{rk}(G^{\prime}) \geq \gamma(G) + |C|\cdot (k-1).\] Consider a minimizing $k$-rainbow function $f$ for $G^{\prime}$. Without loss of generality, we further assume that $f$ assigns either $\varnothing$ or a size-1 subset to each pendant vertex. \footnote{Otherwise, if a pendant vertex $p$ attached to $v$ is assigned a set with two or more labels, say $f(p) = \{\ell_1, \ell_2, \ldots\}$, we modify $f$ into $f'$ so that $f'(p) = \{ \ell_1 \}$, $f'(v) = f(v) \cup (f(p)\setminus \{\ell_1\})$, and $f'(x) = f(x)$ for the remaining vertices; the resulting $f'$ is still a minimizing $k$-rainbow function.} Define $D \subseteq V(G)$ as \begin{equation} D = \{\; x \; \mid\; f(x) \neq \varnothing \;\mbox{\ and\ }\; x \in V(G)\; \}. \end{equation} That is, $D$ is formed by removing all the pendant vertices from $G^{\prime}$, and selecting all those vertices where $f$ assigns a non-empty set. Observe that $D$ is a dominating set of $G$.
\footnote{That is so because for any $v \in V(G)\setminus D$, we have $f(v) = \varnothing$ so that the union of the labels of $v$'s neighbors in $G^{\prime}$ is $[k]$; however, at most $k-1$ neighbors of $v$ are removed, and each was assigned a set of size at most 1, so that $v$ must have at least one neighbor in $D$.} Moreover, we have \begin{eqnarray*} |D| &=& \sum_{x \in C}\; [f(x) \neq \varnothing]\; +\; \sum_{x \in I}\; [f(x) \neq \varnothing] \\ &\leq& \sum_{x \in V(G^{\prime}) \setminus I}\; |f(x)| - |C|\cdot (k-1) + \sum_{x \in I}\; |f(x)| \\ &\leq& \sum_{x \in V(G^{\prime})} |f(x)| - |C|\cdot (k-1), \end{eqnarray*} where the first inequality follows from the fact that for each $v \in C$ and its corresponding pendant vertices $P_v$, \[|f(v)| + \sum_{x \in P_v} |f(x)| - (k-1) = \begin{cases} 0 & \text{if $f(v) = \varnothing$}\\ \geq 1 & \text{if $f(v) \neq \varnothing$.} \end{cases}\] Consequently, we have \begin{equation} \gamma(G) \leq |D| \leq \gamma_{rk}(G^{\prime}) - |C|\cdot (k-1). \end{equation} This proves the theorem. \qed\end{proof} \begin{theorem} \label{appendix thm NP-c weak} For each $k \in \mathbb{N}$, the weak $\{k\}$-domination problem is $\NP$-complete for splitgraphs. \end{theorem} \begin{proof} Let $G$ be a splitgraph with maximal clique $C$ and independent set $I$. Construct the graph $G^{\prime}$ as in Theorem~\ref{thm NP-c rainbow}, by adding $k-1$ pendant vertices to each vertex of the maximal clique $C$. We prove that \[\gamma_{wk}(G^{\prime})=\gamma(G)+|C|\cdot (k-1).\] \noindent First, let us prove that \[\gamma_{wk}(G^{\prime}) \leq \gamma(G)+|C|(k-1).\] Let $D$ be a minimum dominating set of $G$. Construct a weak $\{k\}$-dominating function $g: V(G^{\prime}) \rightarrow \{0,\dots,k\}$ as follows. \begin{enumerate}[\rm (i)] \item For $x \in D \cap C$, let $g(x)=k$. \item For $x \in D \cap I$, let $g(x)=1$. \item For $x \in V(G) \setminus D$, let $g(x)=0$. \item For a pendant vertex $x$ whose unique neighbor is in $D$, let $g(x)=0$.
\item For a pendant vertex $x$ whose unique neighbor is not in $D$, let $g(x)=1$. \end{enumerate} It is easy to check that $g$ is a weak $\{k\}$-dominating function with cost \[\gamma_{wk}(G^{\prime}) \leq \sum_{x \in V(G^{\prime})} g(x) = \gamma(G)+|C| \cdot (k-1).\] \noindent To prove the converse, let $g$ be a weak $\{k\}$-dominating function for $G^{\prime}$ of minimal cost. We may assume that $g(x) \in \{0,1\}$ for every pendant vertex $x$. Define \[D=\{\;x\;|\; x \in V(G) \quad\text{and}\quad g(x) > 0\;\}.\] Then $D$ is a dominating set of $G$. Furthermore, \begin{eqnarray*} \gamma(G) \leq |D| &=& \sum_{x \in C}\; [g(x) > 0] + \sum_{x \in I}\; [g(x)>0]\\ & \leq & \sum_{x \in V(G^{\prime}) \setminus I} g(x) - |C|\cdot(k-1)+\sum_{x \in I} g(x) \\ & \leq & \sum_{x \in V(G^{\prime})}g(x) - |C| \cdot (k-1) \\ & \leq & \gamma_{wk}(G^{\prime}) - |C| \cdot (k-1). \end{eqnarray*} This proves the theorem. \qed\end{proof} \section{2-Rainbow domination of interval graphs} \label{appendix interval} \begin{theorem} \label{appendix thm int} There exists a polynomial algorithm to compute the $2$-rainbow domination number for interval graphs. \end{theorem} \begin{proof} By~Lemmas~\ref{lm int1} and~\ref{lm int2} there is a polynomial dynamic programming algorithm which solves the problem. Let $[C_1,\dots,C_t]$ be a consecutive clique arrangement. For each $i$ the algorithm computes a table of partial $2$-rainbow domination numbers for the subgraph induced by $\cup_{\ell =1}^i C_{\ell}$, parameterized by the subsets of vertices in $C_i$ that are assigned the values 0, 1 and 2. We say that a vertex $x \in C_i$ is {\em satiated\/} if \[g(N(x) \cap (\cup_{\ell=1}^i C_{\ell})) \geq 2.\] The tabulated rainbow domination numbers are partial in the sense that the neighborhood condition is not necessarily satisfied for all vertices in $C_i$ that are assigned the value 0. The vertices of $C_i$ that are assigned the value 0 and are not satiated either need an extra 1 or an extra 2.
Each of the two subsets of nonsatiated vertices is characterized by one representative vertex, the one among them that extends furthest to the left. \noindent In total, the dynamic system is characterized by 4 state variables. They are \begin{enumerate}[\rm (i)] \item the set of at most two 2's, \item the set of at most four 1's, \item the nonsatiated vertex that extends furthest to the left and that needs an extra 1, and \item a similar nonsatiated vertex that needs an extra 2. \end{enumerate} Each clique has at most $n=|V(G)|$ vertices, and so there are at most $n^2 \cdot n^4 \cdot n \cdot n=n^8$ different assignments of the state variables. In the transition $i \rightarrow i+1$, the table is computed by minimizing, for all sensible assignments of vertices in $C_{i+1}$, over the compatible values in the table at stage $i$. Each update takes $O(1)$ time. The number of cliques is at most $n$, and so the algorithm runs in $O(n^9)$ time. \qed\end{proof} \begin{remark} The observations above can be generalized to show that, for each $k$, the $k$-rainbow domination problem can be solved in polynomial time for interval graphs. However, this leaves open the question whether $k$-rainbow domination is in $\FPT$ for interval graphs. \end{remark} \begin{remark} Let $A$ be the closed neighborhood matrix of an interval graph or a strongly chordal graph $G$. Then $A$ is totally balanced, that is, after a suitable permutation of rows and columns the matrix becomes greedy, i.e., $\Gamma$-free. A matrix is greedy if it has no $2 \times 2$ submatrix $\bigl ( \begin{smallmatrix} 1 & 1 \\ 1 & 0 \end{smallmatrix} \bigr )$. For (totally) balanced matrices some polyhedra have only integer extreme points. This leads to polynomial algorithms for domination on, e.g., strongly chordal graphs. Notice that the closed neighborhood matrix of $G \Box K_2$ is \begin{equation} \label{eqn11} B=\begin{pmatrix} A & I \\ I & A \end{pmatrix}.
\end{equation} To solve the $2$-rainbow domination problem for strongly chordal graphs, one needs to solve the following integer programming problem. \begin{eqnarray} \label{eqn12} \min \; \sum_{i=1}^{2n} \; x_i \quad & & \text{such that}\\ && B \mathbf{x} \geq \mathbf 1 \quad \text{and}\quad \forall_{i} \; x_i \in \{0,1\}. \end{eqnarray} For the $2$-rainbow domination problem it would be interesting to know under what conditions on the adjacency matrix $A$ the matrix $B$ is (totally) balanced. \end{remark} \section{2-Rainbow domination of permutation graphs} \label{appendix permutation} Permutation graphs properly contain the class of cographs. They can be defined as the intersection graphs of a set of straight line segments that have their endpoints on two parallel lines. In other words, a graph $G$ is a permutation graph whenever both $G$ and $\Bar{G}$ are comparability graphs~\cite{kn:dushnik,kn:klavik}. We can use a technique similar to that used for interval graphs to show that $2$-rainbow domination is polynomial for permutation graphs. An intersection model of straight line segments for a permutation graph, as described above, is called a permutation diagram. \begin{definition} Consider a permutation diagram for a permutation graph $G$. A scanline is a straight line segment with its endpoints on the two parallel lines such that neither endpoint coincides with an endpoint of a line segment of the permutation diagram. \end{definition} We proceed as in Section~\ref{section interval}. Consider a $2$-rainbow function $f$ for a graph $G=(V,E)$. For a set $S \subseteq V$ we write $f(S)=\cup_{x \in S} f(x)$. \begin{lemma} \label{lm perm1} Let $G=(V,E)$ be a permutation graph and let $f$ be a $2$-rainbow function with $\sum|f(x)|=\gamma_{r2}(G)$. Consider a permutation diagram with two parallel horizontal lines. Let $s$ be a scanline. There are at most 4 line segments that cross $s$ with function value $\{1,2\}$.
\end{lemma} \begin{proof} Assume that there are at least 5 such line segments. Take 4 of them, each with an endpoint as far as possible from the endpoint of $s$ on the top and the bottom line. Any line segment crossing the fifth also crosses at least one of these four. This shows that the function value of the fifth line can be replaced by $\varnothing$, which contradicts the optimality of $f$. \noindent This proves the lemma. \qed\end{proof} \begin{lemma} \label{lm perm2} For every scanline $s$ there are at most 8 line segments that cross it and that have function values not $\varnothing$. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma~\ref{lm perm1}. Consider the 8 line segments that cross $s$, with function values that are nonempty subsets of $\{1,2\}$ whose pairwise unions are $\{1,2\}$, and that are furthest away from the endpoints of $s$. \noindent Any other line segment crossing $s$ with function value $\neq \varnothing$ would contradict the optimality of $f$. \qed\end{proof} \begin{theorem} \label{thm perm} There exists a polynomial algorithm to compute the $2$-rainbow domination number for permutation graphs. \end{theorem} \begin{proof} Consider a dynamic programming algorithm, similar to that described for interval graphs in Theorem~\ref{thm int}, that proceeds by moving a scanline from left to right through the permutation diagram. \qed\end{proof} \begin{remark} Obviously, a similar technique shows that the weak $\{2\}$-domination number can be computed in polynomial time for permutation graphs. It is natural to ask whether these two domination numbers are equal for the class of permutation graphs.
\end{remark} \section{Weak $\{k\}$-L-domination on complete bipartite graphs} \label{appendix CB} For integers $1 \leq j \leq k$, a $(j,k)$-dominating function on a graph $G$ is a function $g: V(G) \rightarrow \{0,\dots,j\}$ such that for every vertex $x$, $g(N[x]) \geq k$~\cite{kn:rubalcaba,kn:rubalcaba2}. The $(j,k)$-domination number $\gamma_{j,k}(G)$ is the minimal cost of a $(j,k)$-dominating function. \begin{theorem} Let $G$ be trivially perfect. There exists a linear-time algorithm to compute $\gamma_{j,k}(G)$. \end{theorem} \begin{proof} Let $T$ be a tree model for $G$. It is easy to check that the following procedure solves the problem. Color the vertices of the first $\lfloor \frac{k}{j} \rfloor$ $\BFS$-levels of the tree $T$ with color $j$, and color the vertices in the next level with the remainder $k-\lfloor \frac{k}{j} \rfloor \cdot j = k \bmod j$. \qed\end{proof} We show that the weak $\{k\}$-L-domination problem can be solved in linear time on complete bipartite graphs. Let $G$ be complete bipartite, with color classes $V$ and $V^{\prime}$. Let $L$ be a $\{k\}$-assignment, that is, $L$ assigns a pair $(a_x,b_x)$ of numbers from $\{0,\dots,k\}$ to every vertex $x$. For simplicity we may assume that, for all vertices $x$, \[a_x=0.\] For simplicity we also assume that $|V|=|V^{\prime}|=n$. Denote the $b$-labels of vertices in $V$ by $b(1),\dots,b(n)$ and denote the $b$-labels of vertices in $V^{\prime}$ by $b^{\prime}(1),\dots,b^{\prime}(n)$. We assume that these are ordered such that \[b(1) \; \geq\; \dots \;\geq\; b(n) \quad\text{and}\quad b^{\prime}(1) \;\geq\; \dots\;\geq\; b^{\prime}(n).\] Then the weak $\{k\}$-L-domination problem can be formulated as follows. Let $b(n+1)=b^{\prime}(n+1)=0$. \begin{gather*} \min \; x+y \\ \text{subject to}\quad x \geq b^{\prime}(y+1) \quad \text{and}\quad y \geq b(x+1). \end{gather*} \begin{theorem} The weak $\{k\}$-L-domination problem can be solved in linear time on complete bipartite graphs. 
\end{theorem} \begin{proof} Let, for $x \in \{0,\dots,n\}$, \begin{eqnarray*} y^1(x)&=&\min\;\{\;y\;|\; y \in \{0,\dots,n\} \quad\text{and}\quad y \geq b(x+1)\;\}=b(x+1)\\ y^2(x)&=& \min \; \{\;y\;|\; y \in \{0,\dots,n\} \quad\text{and}\quad x \geq b^{\prime}(y+1)\;\}. \end{eqnarray*} Let \[m(x)=\max\;\{\;y^1(x),\;y^2(x)\;\}.\] Then the solution to the weak $\{k\}$-L-domination problem is \[\min \;\{\;x+m(x)\;|\; x \in \{0,\dots,n\}\;\}.\] It is easy to check that the values $y^1(x)$ and $y^2(x)$ can be computed in, overall, linear time. This proves the theorem. \qed\end{proof} \end{document}
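The linear-time procedure in the proof above is easy to make concrete. Below is a minimal sketch in Python (the function name and the 0-indexed list representation are ours, not from the text): the label sequences are padded with $b(n+1)=b^{\prime}(n+1)=0$, and while $x$ sweeps upward a second pointer for $y^2(x)$ only ever moves downward, so the whole loop runs in linear time.

```python
def weak_kL_complete_bipartite(b, bp):
    """Minimal cost x + y with x >= b'(y+1) and y >= b(x+1).

    b and bp are the nonincreasing b-labels b(1..n) and b'(1..n)
    of the two color classes, given as 0-indexed Python lists.
    """
    n = len(b)
    b = [None] + list(b) + [0]    # shift to 1-indexed; b(n+1) = 0
    bp = [None] + list(bp) + [0]  # shift to 1-indexed; b'(n+1) = 0
    best = None
    y2 = n  # smallest feasible y for the current x; y = n is always feasible
    for x in range(n + 1):
        # bp is nonincreasing, so as x grows the pointer only moves down:
        # y2 - 1 becomes feasible as soon as b'((y2-1)+1) = b'(y2) <= x.
        while y2 > 0 and bp[y2] <= x:
            y2 -= 1
        y1 = b[x + 1]             # y^1(x) = b(x+1)
        cost = x + max(y1, y2)    # x + m(x)
        if best is None or cost < best:
            best = cost
    return best
```

For instance, with $b=(2,1)$ and $b^{\prime}=(2,2)$ the minimum cost is $2$, attained at $(x,y)=(0,2)$ and at $(2,0)$.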
\begin{document} \draft \title{The Halting Problem for Quantum Computers} \begin{abstract} We argue that the halting problem for quantum computers, which was first raised by Myers, is by no means solved, as has been claimed recently. We explicitly demonstrate the difficulties that arise in a quantum computer when different branches of the computation halt at different, unknown, times. \end{abstract} \pacs{PACS numbers: 03.65.Bz, 03.67.Lx} \begin{multicols}{2} \newcommand\mathC{\mkern1mu\raise2.2pt\hbox{$\scriptscriptstyle|$} {\mkern-7mu\rm C}} \newcommand\mathR{{\rm I\! R}} In \cite{Myers} Myers drew attention to the fact that there may be a problem if different branches of a quantum computation take different numbers of steps to complete their calculation. In a subsequent paper \cite{Ozawa}, Ozawa claimed, in effect, to have solved this problem. We wish to reopen the issue. The reasons are two-fold. Firstly, we will show that the standard halting scheme for Turing machines which was also used in \cite{Ozawa} does not apply to any useful computers; the scheme is unitary only for computers which do not halt. Secondly, and more importantly, we will argue that the specific way the problem was framed, namely whether monitoring the halting does or does not spoil the computation, is not the important issue. Indeed, one can certainly build a quantum computer for which monitoring the halting does not spoil the computation by building a computer which is effectively classical. The key issue is whether the computer allows useful interference. Later in this letter we will describe why the standard framework for quantum Turing machines as used in \cite{Ozawa} applies only to computers which do not halt. First we set out, in general, what we would like a quantum computer to do.
Any quantum algorithm relies on the fact that if an arbitrary input state $|i>$ evolves to the final state $|\psi_i>$ then the superposition $\sum_i a_i |i>$ evolves as \begin{eqnarray} \sum_i a_i |i> \mapsto \sum_i a_i |\psi_i>. \end{eqnarray} The states $|\psi_i>$ are, of course, not known beforehand; they arise at the end of the computation in which the computer computes them, step by step, according to the program. The problem is that for different inputs $|i>$, the numbers of steps required may not all be the same. A situation might thus arise in which, say, the superposition \begin{eqnarray} |1> + |2> \end{eqnarray} reaches \begin{eqnarray} |\psi_1> + |\tilde\psi_2> \end{eqnarray} at a certain point during the computation, where $|\psi_1>$ is the final state obtained from $|1>$, but $|\tilde\psi_2>$ is not the final state which will be obtained from $|2>$, but just some intermediate result. What can one do? An obvious suggestion is simply to wait until the computation in the second branch has also finished. The problem is that unitarity of quantum evolution prevents the state $|\psi_1>$ from remaining unchanged. To see this, let us denote by $U$ the time evolution operator corresponding to one step of computation. We take $U$ to be time-independent; that is, we include all the relevant degrees of freedom as part of our computer. Otherwise we would have to take into account the interactions of the computer with external degrees of freedom. Let us suppose that $|\tilde\psi_1>$ is the state of the first branch a step before first reaching the final result, i.e. \begin{eqnarray} U|\tilde\psi_1> = |\psi_1>.\label{UPsiTilde} \end{eqnarray} Then it is impossible to also have \begin{eqnarray} U|\psi_1> = |\psi_1>.\label{HaltedPsi} \end{eqnarray} Indeed, since $U$ is invertible, (\ref{UPsiTilde}) and (\ref{HaltedPsi}) together would force $|\tilde\psi_1>$ to be equal to $|\psi_1>$, but we have assumed that they are different by construction.
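This invertibility argument can also be checked numerically. The following is a small illustrative sketch (ours, not part of the letter; the random unitary merely stands in for an arbitrary one-step evolution $U$):

```python
import numpy as np

# A unitary step U is an isometry: ||U(phi - psi)|| = ||phi - psi||.
# Hence U cannot map the distinct states |psi~_1> and |psi_1> to the
# same state |psi_1>; a halted branch cannot simply stand still.
rng = np.random.default_rng(0)

def random_unitary(d):
    # the unitary Q-factor of the QR decomposition of a random complex matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

d = 4
U = random_unitary(d)
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
# distances between states are preserved, so distinct states stay distinct
assert np.isclose(np.linalg.norm(U @ (phi - psi)), np.linalg.norm(phi - psi))
```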
In order to allow the result of a computation to remain unchanged once a computation is finished, it is necessary to add some other degrees of freedom (i.e. an ancilla) to the computer which continue to evolve and thus preserve unitarity. Following Deutsch \cite{Deutsch}, it is also customary to introduce a halt qubit which lies in a two-dimensional Hilbert space spanned by $|0>$ and $|1>$, where $|0>$ means that the computation is still continuing and $|1>$ means ``halted''. We thus take the complete Hilbert space of a quantum computer to be spanned by vectors of the form \begin{eqnarray} |\psi>_C|H>_H |A>_A; \end{eqnarray} where the subscripts $C,\ H$ and $A$ refer to the computational states, the halt qubit and the ancilla respectively. The introduction of an ancilla and the halt bit allows one to solve the problem of the halting of a given branch. We require two conditions to be fulfilled for this branch. Firstly, we require the computation to be able to stop. That is we require the computational state to change until it reaches its final state. At that moment the halt qubit should change from $|0>$ to $|1>$. The second requirement is that, once the computation has halted (i.e. once the halt qubit is in state $|1>$), neither the halt qubit nor the computational state should change further. All that is allowed to change is the state of the ancilla. These two requirements are written as follows: \begin{eqnarray} U|\tilde\chi_1>_{C,A}|0>_H=|\psi_1>_C|1>_H|a_0>_A\label{firsthalted} \end{eqnarray} and \begin{eqnarray} U|\psi_1>_C|1>_H|a_k>_A=|\psi_1>_C|1>_H|a_{k+1}>_A\label{haltedk} \end{eqnarray} where $|\tilde\chi_1>_{C,A}$ is the, possibly entangled, state of the computer and ancilla a step before first reaching the final result $|\psi_1>_C$, and $|a_k>_A$ are a set of states of the ancilla. It is straightforward to see that unitarity requires \begin{eqnarray} {}_A<a_k|a_{k^\prime}>_A=0\quad \forall k \neq k^\prime. 
\end{eqnarray} Thus once the computation has halted the ancilla starts evolving through a sequence of orthogonal states. In effect, the ancilla contains a record of the time since the computation halted. As we will show, this is at the core of the halting problem. Roughly speaking, the computational states in two branches which halt at different, unknown, times do not interfere because they are entangled with this record of the halting time. Within this general framework we can now discuss the different situations which might occur. The simplest case is if all branches of the calculation halt at the same time. In this case one can arrange things to have the desired interference of the computational states. One way to do so is as follows. Take the state of the ancilla to be $|a_0>_A$, say, for all states at the beginning of the computation. The ancilla remains in this state until the point at which all the branches simultaneously halt. We arrange exactly the same evolution of the ancilla for all branches (i.e. the associated state of the ancilla remains $|a_0>_A$ until the halt qubit changes, then the ancilla starts to evolve in the sequence $|a_0>_A,\ |a_1>_A,\ |a_2>_A$ etc.). We have thus arranged that there can be the required interference between branches. Thus in the situation where all branches halt at the same time, there is no difficulty. It may also be noted that one can monitor the computer without spoiling the computation. We now consider the crucial case in which different branches halt at different times, but we do not know in advance how long each branch takes. Consider two branches which halt at different times and assume that the first branch halts first. 
It then evolves in the following way: \begin{eqnarray} & &|\psi_1>_C|1>_H|a_0>_A\nonumber\\ & &\mapsto |\psi_1>_C|1>_H|a_{1}>_A\nonumber\\ & &\mapsto |\psi_1>_C|1>_H|a_{2}>_A\quad\ldots \end{eqnarray} The second branch halts at some later time and then evolves as \begin{eqnarray} & &|\psi_2>_C|1>_H|b_0>_A\nonumber\\ & &\mapsto |\psi_2>_C|1>_H|b_{1}>_A\nonumber\\ & &\mapsto |\psi_2>_C|1>_H|b_{2}>_A\quad\ldots \end{eqnarray} Here the $|b_k>$ are some other orthogonal set of states of the ancilla which need not be related to the $|a_k>$ (we note that (\ref{firsthalted}) and (\ref{haltedk}) do not require that the set of states through which the ancilla evolves after halting be the same for different branches). The simplest possibility is to arrange that the sequence of states of the ancilla is the same for all branches (i.e. the state of the ancilla remains $|a_0>_A$ for all states until the halt qubit changes, then the ancilla starts to evolve in the sequence $|a_0>_A,\ |a_1>_A,\ |a_2>_A$ etc.). However, although the ancilla evolves through exactly the same set of states for all branches, branches which halt at different times are not synchronized. Consequently, two computational branches which halt at different times first decohere because they are entangled with two different states of the halt bit; even after both branches have halted, they still decohere due to entanglement with orthogonal states of the ancilla. This situation has the property that monitoring the computation does not affect it and is thus an example of how it can be arranged that monitoring the calculation does not spoil it. However, the reason that this is the case is that, in fact, there is no interference at all between branches which halt at different times even in the absence of monitoring. Thus, as far as branches which halt at different times are concerned, the computation is effectively classical.
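The decoherence just described can be made quantitative in a toy model (our construction, for illustration only): two computational branches are entangled with ancilla ``time-stamp'' states whose overlap we vary; tracing out the ancilla shows that the coherence between the branches is proportional to that overlap, so an orthogonal halting record destroys interference completely.

```python
import numpy as np

def reduced_coherence(overlap):
    # ancilla record states |a>, |b> in C^2 with real overlap <a|b>
    a = np.array([1.0, 0.0])
    b = np.array([overlap, np.sqrt(1.0 - overlap**2)])
    psi1, psi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # the two branches
    # |state> = (|psi_1>|a> + |psi_2>|b>) / sqrt(2)
    state = (np.kron(psi1, a) + np.kron(psi2, b)) / np.sqrt(2)
    rho = np.outer(state, state.conj())
    # partial trace over the ancilla (second tensor factor)
    rho_c = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
    return abs(rho_c[0, 1])  # coherence between the two branches

print(reduced_coherence(0.0))  # orthogonal record: coherence 0, no interference
print(reduced_coherence(1.0))  # identical record: coherence ~0.5, full interference
```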
Alternatively it might be possible, for example, to choose the $|b_k>$ to be the same set as the $|a_k>$, but in some different order. At certain times in the future there might be reinterference of different branches; however, these times are unknown. We also note that in this situation monitoring the halting bit does affect the results of the computation as it prevents reinterference of the computational bits. However, monitoring is not the important issue; the issue is that although (in the absence of monitoring) there might be reinterference, since one does not know when it occurs, it is not useful for computational purposes. Other choices for the $|b_k>$ can be made, but not so as to arrange useful interference. We now turn to a discussion of the standard halting scheme for quantum Turing machines as in \cite{Ozawa}. We will argue that this halting scheme is not consistent with unitarity except in the trivial case in which the computer never halts. (As will become clear, the problems with this halting scheme are independent of the issue of whether the branches halt at the same or different times.) Following the discussion in \cite{Ozawa} we write the state of a quantum Turing machine in terms of a basis \begin{equation} |C> = |q_C>|h_C>|T_C>|H>. \end{equation} $|q_C>$ is the internal state of the head, assumed, by definition, to lie in a finite-dimensional Hilbert space, and $|h_C>$ is the position of the head. $|T_C>$ is the state of the tape; the tape is built out of cells, each cell carrying an identical finite-dimensional Hilbert space. In addition the system has a halt qubit $|H>$ which lies in a two-dimensional Hilbert space spanned by $|0>$ and $|1>$, where $|1>$ means ``halted''. The evolution of the computer occurs in steps, each step being described by the same unitary operator $U$.
This unitary operator is such that the internal state of the head, the state of the tape cell at the location of the head, the state of the halting bit and the position of the head are updated according to the current state of the head, the state of the qubit at the current position of the head and the current state of the halting qubit. The key equation describing the halting scheme is equation (6) of \cite{Ozawa}: \begin{eqnarray} & & U|q_C>|h_C>|T_C>|1>\nonumber\\ & &\quad = \sum_{q,d}c_{q,d}|q>|h_C+d>|T_C>|1>,\label{Ozawa6} \end{eqnarray} where the quantity $d$ may have values $+1$ or $-1$ denoting whether the head has moved to the right or left, and $c_{q,d}$ are constants. According to (\ref{Ozawa6}), once the halt qubit is set to $|1>$, the proposed quantum Turing machine no longer changes the halt qubit or the tape string. The above halting scheme seems very natural; however, it contains a subtle but very serious problem. We will show that (\ref{Ozawa6}) implies that the unitary evolution operator $U$ is essentially trivial, namely that (\ref{Ozawa6}) cannot be satisfied by any $U$ which allows the halt bit of any state to change from $|0>$ to $|1>$, i.e. the halting scheme is valid only for a computer which never halts. In order to prove our result, let us first consider the following set of states in which the halt qubit is $|1>$. \begin{eqnarray} |q_j>|n>|{\bf \hat T},T_n=\xi,T_{n+2}=\xi>|1>,\label{halted} \end{eqnarray} where the states $|q_j>$, $j=1,\dots,M$, are an orthonormal basis for the internal states, and where $|n>$, $-\infty<n<\infty$, are states labeled by the integer $n$ specifying the position of the head. $|{\bf \hat T},T_n=\xi,T_{n+2}=\xi>$ is the state of the tape; ${\bf \hat T}$ labels the states of the cells at all positions on the tape except those explicitly exhibited, in this case the cells at positions $n$ and $n+2$ where the states are $\xi$.
The most general evolution of (\ref{halted}) under (\ref{Ozawa6}) is \begin{eqnarray} & & U|q_j>|n>|{\bf \hat T},T_n=\xi,T_{n+2}=\xi>|1>\nonumber\\ & &\quad = (|Q^+_j>|n+1> + |Q^-_j>|n-1>)\nonumber\\ & &\qquad\times|{\bf \hat T},T_n=\xi,T_{n+2}=\xi>|1>,\label{Uhalted} \end{eqnarray} where we have not assumed that the states $|Q^+_j>$ and $|Q^-_j>$ are necessarily normalized or orthogonal. For two different values of $j$, states of the form (\ref{halted}) are orthogonal, so the states to which they evolve must also be orthogonal; also the norm of a given state must not change under evolution. Thus one derives that \begin{equation} <Q^+_j|Q^+_k> + <Q^-_j|Q^-_k> = \delta_{jk}.\label{condition1} \end{equation} Let us now consider the following state \begin{equation} |q_k>|n+2>|{\bf \hat T},T_n=\xi,T_{n+2}=\xi>|1>;\label{haltedplus2} \end{equation} this evolves to \begin{eqnarray} & &(|Q^+_k>|n+3> + |Q^-_k>|n+1>)\nonumber\\ & &\quad\times |{\bf \hat T},T_n=\xi,T_{n+2}=\xi>|1>.\label{Uhaltedplus2} \end{eqnarray} Note that $|Q^+_k>$ and $|Q^-_k>$ are the same set of states as appear in (\ref{Uhalted}), since the evolved state can only depend on the internal state of the head and the tape at the position of the head. Now (\ref{halted}) and (\ref{haltedplus2}) are orthogonal for all $j$ and $k$; thus \begin{eqnarray} <Q^-_j|Q^+_k>=0\quad \forall j,k.\label{condition2} \end{eqnarray} Now consider the $M$ states \begin{eqnarray} |E_k> := |Q^+_k> + |Q^-_k>.\label{Ek} \end{eqnarray} The conditions (\ref{condition1}) and (\ref{condition2}) imply that \begin{eqnarray} <E_j|E_k>=\delta_{jk}. \end{eqnarray} Since $j$ and $k$ run from $1$ to $M$ (the dimension of the space) the $|E_k>$ form an orthonormal basis for the internal states of the head. Let us now consider the action of $U$ on a state which has the halt qubit in the state $|0>$.
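The linchpin of this step, that the norm-preservation condition (\ref{condition1}) together with the orthogonality condition (\ref{condition2}) forces the $|E_k>$ to be orthonormal, can be illustrated with a small concrete example (ours, not from the paper). Take $M=2$ internal states; condition (\ref{condition2}) forces the span of the $|Q^-_k>$ to be orthogonal to the span of the $|Q^+_k>$, which a toy realization can satisfy by placing them in orthogonal coordinates:

```python
import math

def dot(u, v):
    """Hermitian inner product <u|v>."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

# Toy realization with M = 2 (hypothetical example values): put the Q^+
# vectors in the first coordinate and the Q^- vectors in the second, so
# <Q^-_j|Q^+_k> = 0 automatically, and choose amplitudes so that
# <Q^+_j|Q^+_k> + <Q^-_j|Q^-_k> = delta_jk.
t = 0.3  # arbitrary mixing angle
Qp = [[complex(math.cos(t)), 0j], [complex(-math.sin(t)), 0j]]
Qm = [[0j, complex(math.sin(t))], [0j, complex(math.cos(t))]]

# Check the norm-preservation condition.
for j in range(2):
    for k in range(2):
        s = dot(Qp[j], Qp[k]) + dot(Qm[j], Qm[k])
        assert abs(s - (1.0 if j == k else 0.0)) < 1e-12

# The E_k = Q^+_k + Q^-_k then come out orthonormal ...
E = [[Qp[k][i] + Qm[k][i] for i in range(2)] for k in range(2)]
for j in range(2):
    for k in range(2):
        assert abs(dot(E[j], E[k]) - (1.0 if j == k else 0.0)) < 1e-12

# ... and since they span the whole internal space, any vector orthogonal
# to every E_k must be zero: a generic vector equals its expansion in E.
phi = [0.7 + 0.2j, -0.1 + 0.5j]
coeffs = [dot(E[k], phi) for k in range(2)]
recon = [sum(coeffs[k] * E[k][i] for k in range(2)) for i in range(2)]
assert max(abs(recon[i] - phi[i]) for i in range(2)) < 1e-12
```

The completeness checked in the last step is exactly what is used below to conclude that states orthogonal to every $|E_k>$ vanish.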
For the halt scheme to be non-trivial there must exist at least one internal state of the head, $|q_0>$ say, and a tape-cell state $\eta$, such that if the internal state of the head is $|q_0>$ and the state of the cell at the current position of the head is $\eta$, then the state with halt qubit set to $|0>$ evolves to a state which has at least one component with the halt qubit set to $|1>$. Let us now consider the state \begin{equation} |q_0>|n>|{\bf \hat T}, T_n = \eta,T_{n+2} = \psi>|0>,\label{nonhalted} \end{equation} where we have labeled the state at $n+2$ for later convenience. The most general way in which (\ref{nonhalted}) can evolve is \begin{eqnarray} & &U|q_0>|n>|{\bf \hat T}, T_n = \eta,T_{n+2} = \psi>|0>\nonumber\\ & &\quad =\sum_\mu |\Phi^-_\mu>|n-1>|{\bf \hat T}, T_n = e_\mu,T_{n+2} = \psi>|1>\nonumber\\ & &\qquad + \sum_\mu |\Phi^+_\mu>|n+1>|{\bf \hat T}, T_n = e_\mu,T_{n+2} = \psi>|1> \nonumber\\ & &\qquad + |\Psi>|0>.\label{Unonhalted} \end{eqnarray} The states $| e_\mu>$ are an orthonormal basis for the states of the tape at the position $n$, and $|\Phi^\pm_\mu>$ are generic (non-normalized, and not necessarily mutually orthogonal) internal states, so that $\sum_\mu |\Phi^\pm_\mu>|T_n = e_\mu>$ is the most general entangled state of the tape cell at $n$ and the internal states of the head. $|\Psi>$ is some state of the position of the head, the tape and the internal state of the head which need not be specified. Note that the output states of the head, $|\Phi^\pm_\mu>$, are independent of the states of the tape except at $n$; hence in particular they are independent of $\psi$. Now consider a halted state with the state of the tape chosen to be $| e_\nu>$ at the positions $n$ and $n+2$, i.e. \begin{eqnarray} |q_j>|n>|{\bf \hat T},T_n=e_\nu,T_{n+2}=e_\nu>|1>.
\end{eqnarray} This state is orthogonal to the state (\ref{nonhalted}) for any $\psi$, and for $\psi = e_\nu$ in particular, so the fact that the states must also be orthogonal after evolution by $U$ requires that \begin{equation} <Q^+_j|\Phi^+_\nu> + <Q^-_j|\Phi^-_\nu> = 0 \quad\forall \nu,j.\label{condition3} \end{equation} Also the fact that (\ref{nonhalted}), with $\psi$ chosen to be $e_\nu$, is orthogonal to \begin{equation} |q_j>|n+2>|{\bf \hat T},T_n=e_\nu,T_{n+2}=e_\nu>|1> \end{equation} shows that \begin{equation} <Q^-_j|\Phi^+_\nu> = 0 \quad\forall \nu,j.\label{condition4} \end{equation} Similarly, by considering a suitably chosen halted state at position $n-2$, we may show that \begin{equation} <Q^+_j|\Phi^-_\nu> = 0 \quad\forall \nu,j.\label{condition5} \end{equation} Now (\ref{condition3}), (\ref{condition4}) and (\ref{condition5}) imply that \begin{equation} |\Phi^+_\nu> + |\Phi^-_\nu> \end{equation} is orthogonal to all basis vectors $|E_k>$, and so \begin{equation} |\Phi^+_\nu> + |\Phi^-_\nu> = 0. \end{equation} Thus substituting $|\Phi^+_\nu> = - |\Phi^-_\nu>$ in (\ref{condition4}) we find \begin{equation} <Q^-_j|\Phi^-_\nu> = 0 \quad\forall \nu,j,\label{condition6} \end{equation} and so from (\ref{Ek}), (\ref{condition5}) and (\ref{condition6}), $|\Phi^-_\nu>$ is orthogonal to $|E_k>$ for all $k$; hence we find \begin{equation} |\Phi^-_\nu> =|\Phi^+_\nu> = 0 \quad\forall \nu.\label{Phizero} \end{equation} Thus, considering (\ref{Unonhalted}) and (\ref{Phizero}), we reach our conclusion that the requirement that $U$ be unitary prevents any state from evolving from un-halted to halted. Thus the halting scheme, and therefore the results in \cite{Ozawa}, only apply to computers which do not halt. It might be wondered why this particular halting scheme does not work, although it seems to be almost identical to the one presented earlier in this letter.
The internal states of the machine and the position of the head appear to play the role of the ancilla in our model. However, these degrees of freedom play an active role during computation and have highly constrained dynamics, and it is this which creates the problem. In conclusion, we have shown that the halting problem for quantum computers is by no means solved, as has been claimed. In particular we have shown that the standard halting scheme for quantum Turing machines, as used in \cite{Ozawa}, is not consistent with unitarity. We have also given a general discussion of halting in quantum computers, illustrating the problems which arise when different branches of the computation halt at different times. We should point out that our discussion of the general problem of halting is not exhaustive. For example, we have considered a particular model of quantum computation in which, when a branch halts, it halts with certainty in a single step. We have shown that, in this model, if the halting times of different branches of the computer are different and unknown, then useful interference is not possible. We anticipate that similar problems will arise in any model in which branches halt at unknown times. One possible resolution, as has been discussed by Bernstein and Vazirani \cite{BV}, is to restructure algorithms so as to ensure simultaneous halting. We do not know, however, whether this can be done for all computational problems. \noindent{\large\bf Acknowledgments} We are grateful to Yu Shi and Richard Jozsa for very helpful discussions. \end{multicols} \end{document}
\begin{document} \title {\bf Identities for the shifted harmonic numbers and binomial coefficients} \author{ {Ce Xu\thanks{Email: [email protected] (C. Xu)}}\\[1mm] \small School of Mathematical Sciences, Xiamen University\\ \small Xiamen 361005, P.R. China} \date{} \maketitle \noindent{\bf Abstract } We develop new closed form representations of sums of $(n+\alpha)$th shifted harmonic numbers and reciprocal binomial coefficients in terms of $\alpha$th shifted harmonic numbers and the Riemann zeta function at positive integer arguments. Some interesting new consequences and illustrative examples are considered. \\[2mm] \noindent{\bf Keywords} Harmonic numbers; polylogarithm function; binomial coefficients; integral representations; combinatorial series identities; summation formulas. \\[2mm] \noindent{\bf AMS Subject Classifications (2010):} 05A10; 05A19; 11B65; 11B83; 11M06; 33B15; 33D60; 33C20 \section{Introduction} Let $\mathbb{N}:=\{1,2,3,\ldots\}$ be the set of natural numbers, and $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$.
In this paper we will develop identities and closed form representations for sums of shifted harmonic numbers and reciprocal binomial coefficients of the form
\[{W^{(l)}_{k,r}}\left( {p,m,\alpha } \right): = \sum\limits_{n = 1}^\infty {\frac{\left(H_{n + \alpha }^{(m)}\right)^l}{n^p\binom{n+k+r}{k}}},\ (r,k\in \mathbb{N}_0,\ \alpha \notin \mathbb{N}^-:=\{-1,-2,\ldots\}),\tag{1.1}\]
for $m,l\in \{1,2,3\},\ l+m\in\{2,3,4\},\ p\in \{0,1\}$ with $p+k\geq 2$, where $H^{(m)}_\alpha$ stands for the $\alpha$th generalized shifted harmonic number defined by
\[H_\alpha ^{\left( m \right)}: = \frac{(-1)^{m-1}}{(m-1)!}\left( {{\psi ^{\left( {m - 1} \right)}}\left( {\alpha + 1} \right) - {\psi ^{\left( {m - 1} \right)}}\left( 1 \right)} \right),\;2 \le m \in \mathbb{N},\tag{1.2}\]
\[{H_\alpha } = H_\alpha ^{\left( 1 \right)}: = \psi \left( {\alpha + 1} \right) + \gamma ,\tag{1.3}\]
where $\psi \left( z \right)$ is the digamma function (also called the Psi function), defined by
\[\psi \left( z \right) := \frac{d}{dz}\left( {\ln \Gamma \left( z \right)} \right) = \frac{\Gamma '\left( z \right)}{\Gamma \left( z \right)},\]
which satisfies the relations
\[\psi \left( z \right) = - \gamma + \sum\limits_{n = 0}^\infty {\left( {\frac{1}{n + 1} - \frac{1}{n + z}} \right)} ,\;z\notin \mathbb{N}^-_0:=\{0,-1,-2,\ldots\}, \]
\[{\psi ^{\left( n \right)}}\left( z \right) = {\left( { - 1} \right)^{n + 1}}n!\sum\limits_{k = 0}^\infty {\frac{1}{\left( {z + k} \right)^{n + 1}}},\ n\in \mathbb{N},\]
\[\psi \left( {x + n} \right) = \frac{1}{x} + \frac{1}{x + 1} + \cdots + \frac{1}{x + n - 1} + \psi \left( x \right),\;n = 1,2,3, \ldots .\]
From the definitions of the Riemann zeta function and the Hurwitz zeta function, we know that
\[{\psi ^{\left( n \right)}}\left( 1 \right) = {\left( { - 1} \right)^{n + 1}}n!\zeta \left( {n + 1} \right),\quad {\psi ^{\left( n \right)}}\left( z \right) = {\left( { - 1} \right)^{n + 1}}n!\zeta \left( {n + 1,z} \right).\]
The Riemann zeta function and the Hurwitz zeta function are defined by
\[\zeta(s):=\sum\limits_{n = 1}^\infty {\frac {1}{n^{s}}},\ \Re(s)>1,\tag{1.4}\]
and
\[\zeta \left( {s,\alpha + 1} \right): = \sum\limits_{n = 1}^\infty {\frac{1}{\left( {n + \alpha } \right)^s}} ,\ \left( {\Re \left( s \right) > 1,\ \alpha \notin \mathbb{N}^ - } \right).\tag{1.5}\]
Therefore, for non-integer values we may write the generalized shifted harmonic numbers in terms of zeta functions:
\[H_\alpha ^{\left( m \right)} = \zeta \left( m \right) - \zeta \left( {m,\alpha + 1} \right),\;\alpha \notin \mathbb{N}^-,\ 2\leq m\in \mathbb{N},\tag{1.6}\]
and for $m=1$,
\[{H_\alpha } \equiv H_\alpha ^{\left( 1 \right)} = \sum\limits_{k = 1}^\infty {\left( {\frac{1}{k} - \frac{1}{k + \alpha }} \right)} .\tag{1.7}\]
Here $\Gamma \left( z \right) := \int\limits_0^\infty {{e^{ - t}}{t^{z - 1}}dt} ,\;\Re \left( z \right) > 0$, is the gamma function, and $\gamma$ denotes the Euler-Mascheroni constant defined by
\[\gamma := \lim\limits_{n \to \infty } \left( {\sum\limits_{k = 1}^n {\frac{1}{k}} - \ln n} \right) = - \psi
\left( 1 \right) \approx 0.577215664901532860606512\ldots.\] The evaluation of the polygamma function $\psi^{(n)}(p/q)$ at rational values of the argument can be done explicitly via a formula given by K\"olbig [14], or by Choi and Cvijovi\'c [10], in terms of polylogarithms or other special functions. Some specific values are listed in the books [1,18,25]. For example, George E. Andrews, Richard Askey and Ranjan Roy [1], and H.M. Srivastava and J. Choi [25], give the following formula of Gauss \begin{align*} \psi \left( {\frac{p}{q}} \right) =& 2\sum\limits_{k = 1}^{\left[ {(q - 1)/2} \right]} {\cos \left( {\frac{2kp\pi }{q}} \right)\ln \left( {2\sin \frac{k\pi }{q}} \right)} + {r_q}\left( p \right)\\& - \gamma - \frac{\pi }{2}\cot \frac{p\pi }{q} - \ln q ,\ \ \ \ \ \ \ \ \ \ \left( {0 < p < q;\ p,q \in \mathbb{N}} \right),\tag{1.8} \end{align*} where $[x]$ denotes the greatest integer $\leq x$, and ${r_q}\left( p \right) = {\left( { - 1}\right)^p}\ln 2$ if $q$ is even, ${r_q}\left( p \right) = 0$ if $q$ is odd. For details and historical introductions, please see [1, 3, 4, 10, 14, 18, 25] and references therein.
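As a quick numerical sanity check (ours, not part of the paper), Gauss's formula (1.8) can be compared against the series representation of $\psi$ recalled above; the following Python sketch does this for $\psi(1/2)$ and $\psi(1/3)$:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def digamma_series(z, terms=200_000):
    """psi(z) from psi(z) = -gamma + sum_{n>=0} (1/(n+1) - 1/(n+z));
    the truncation error is O((1-z)/terms)."""
    return -GAMMA + sum(1.0 / (n + 1) - 1.0 / (n + z) for n in range(terms))

def digamma_gauss(p, q):
    """Gauss's formula (1.8) for psi(p/q) with integers 0 < p < q."""
    s = 2.0 * sum(math.cos(2.0 * math.pi * k * p / q)
                  * math.log(2.0 * math.sin(math.pi * k / q))
                  for k in range(1, (q - 1) // 2 + 1))
    r = (-1.0) ** p * math.log(2.0) if q % 2 == 0 else 0.0
    return s + r - GAMMA - (math.pi / 2.0) / math.tan(math.pi * p / q) - math.log(q)

# Known closed forms: psi(1/2) = -gamma - 2 ln 2 and
# psi(1/3) = -gamma - pi/(2 sqrt 3) - (3/2) ln 3.
print(digamma_gauss(1, 2), digamma_series(0.5))
print(digamma_gauss(1, 3), digamma_series(1.0 / 3.0))
```

The two evaluations agree to the accuracy of the truncated series (a few parts in $10^6$ with the default number of terms).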
From (1.2), (1.3) and (1.8), we can obtain some specific values of shifted harmonic numbers: \[{H_{1/2}} = 2 - 2\ln 2,\ {H_{3/2}} = \frac{8}{3} - 2\ln 2,\ H_{1/2}^{\left( 2 \right)} = 4 - 2\zeta \left( 2 \right),\ H_{3/2}^{\left( 2 \right)} = \frac{40}{9} - 2\zeta \left( 2 \right),\ {H_{5/2}} = \frac{46}{15} - 2\ln 2.\] Letting $\alpha$ approach $n$ ($n$ a positive integer) in (1.6) and (1.7), the shifted harmonic numbers reduce to the classical harmonic numbers defined by \[{H_n} \equiv H_n^{\left( 1 \right)}:=\sum\limits_{j=1}^n\frac {1}{j},\ H^{(m)}_n:=\sum\limits_{j=1}^n\frac {1}{j^m},\ n, m \in \mathbb{N}.\tag{1.9}\] There are many results for sums of harmonic numbers (or shifted harmonic numbers) with positive terms; for example, it is known that [20,23] \[\sum\limits_{n = 1}^\infty {\frac{H_n^{2}}{\binom{n+k}{k}}} = \frac{k}{k - 1}\left( {\zeta \left( 2 \right) - H_{k - 1}^{\left( 2 \right)} + \frac{2}{\left( {k - 1} \right)^2}} \right),\;2 \le k \in \mathbb{N},\] \[\sum\limits_{n = 1}^\infty {\frac{H_{n + 1/4}^{\left( 2 \right)}}{n\binom{n+4}{4}}} = 152 - \frac{140}{3}G - \frac{128}{3}\ln 2 - \frac{64}{9}\pi - \frac{69}{2}\zeta \left( 2 \right),\] where $G$ is Catalan's constant, defined by \[G = \sum\limits_{n = 1}^\infty {\frac{\left( { - 1} \right)^{n - 1}}{\left( {2n - 1} \right)^2}} \approx 0.915965\ldots.\] Further work in the summation of harmonic numbers and binomial
coefficients has also been done by Sofo [18-24]. In this paper, we will prove that the series (1.1) can be expressed as rational linear combinations of products of zeta values and shifted harmonic numbers. Next, we give two lemmas. The following lemma will be useful in the development of the main theorems. \begin{lem} For integers $k,r\geq 0$ and $\alpha\not\in \mathbb{N}^-$, we have \[\sum\limits_{n = 1}^\infty {\frac{f\left( {n,\alpha } \right)}{\left( {n + r} \right)\left( {n + k} \right)}} = \frac{k - \alpha }{k - r}\sum\limits_{n = 1}^\infty {\frac{f\left( {n,\alpha } \right)}{\left( {n + k} \right)\left( {n + \alpha } \right)}} + \frac{\alpha - r}{k - r}\sum\limits_{n = 1}^\infty {\frac{f\left( {n,\alpha } \right)}{\left( {n + r} \right)\left( {n + \alpha } \right)}} ,\tag{1.10}\] where the function $f\left( {n,\alpha } \right)$ satisfies \[\lim\limits_{n \to \infty } {n^\beta }f\left( {n,\alpha } \right) = c,\;\beta > - 1,\] for some constant $c$. \end{lem} \it{Proof.}\rm\quad The lemma follows at once from the identity $\left( {k - \alpha } \right)\left( {n + r} \right) + \left( {\alpha - r} \right)\left( {n + k} \right) = \left( {k - r} \right)\left( {n + \alpha } \right)$, which shows that the two summands on the right hand side of (1.10) add up to the summand on the left; the growth condition on $f$ guarantees that all three series converge absolutely.
$\square$ \begin{lem} For integer $p>0$ and $\alpha \neq 0,-1,-2,\ldots$, we have \[\int\limits_0^1 {{x^{\alpha - 1}}{\rm{Li}}_p\left( x \right)dx} = \sum\limits_{i = 1}^{p - 1} {\frac{\left( { - 1} \right)^{i-1}}{\alpha ^i}\zeta \left( {p + 1 - i} \right)} - {\left( { - 1} \right)^p}\frac{H_\alpha }{\alpha ^p},\tag{1.11}\] where ${\rm Li}_p\left( x \right)$ is the polylogarithm function defined for $\left| x \right| \le 1$ by \[{\rm Li}_p\left( x \right) = \sum\limits_{n = 1}^\infty {\frac{x^n}{n^p}}, \quad \Re(p)>1 .\tag{1.12}\] \end{lem} \it{Proof.}\rm\quad It is obvious that we can rewrite the integral on the left hand side of (1.11) as \[\int\limits_0^1 {{x^{\alpha - 1}}{\rm{Li}}_p\left( x \right)dx} = \sum\limits_{n = 1}^\infty {\frac{1}{n^p\left( {n + \alpha } \right)}}.\tag{1.13} \] By using the partial fraction decomposition \[\frac{1}{n^p\left( {n + \alpha } \right)} = \sum\limits_{i = 1}^{p - 1} {\frac{\left( { - 1} \right)^{i-1}}{\alpha ^i} \cdot \frac{1}{n^{p + 1 - i}}} - {\left( { - 1} \right)^p}\frac{1}{\alpha ^p}\left( {\frac{1}{n} - \frac{1}{n + \alpha }} \right),\tag{1.14}\] and combining (1.7) with (1.13), we deduce the desired result. This completes the proof of Lemma 1.2.
$\square$ \section{Shifted harmonic number identities} In this section, we will establish some explicit relationships that involve shifted harmonic numbers and sums of the type \[\sum\limits_{n = 1}^\infty {\frac{H_{n + \alpha }^{\left( m \right)}}{\left( {n + r} \right)\left( {n + k} \right)}} ,\;\left( {r,k\in \mathbb{N}_0,\ r \neq k,\ \alpha \notin \mathbb{N}^-,\ m\in \{1,2\}} \right).\] We now prove the following theorems. \begin{thm} For integers $m,k\geq 1$ and $\Re(\alpha)>0$, we have the following recurrence relation \begin{align*} &I\left( {\alpha ,m,k} \right) = \sum\limits_{i = 0}^{m - 1} {\binom{m-1}{i}\left( {m - i - 1} \right)!\frac{\left( { - 1} \right)^{m - i}}{\alpha ^{m - i}}I\left( {\alpha ,i,k} \right)} \\ &\quad\ + \sum\limits_{j = 0}^{k - 1} {\binom{k}{j}{{\left( { - 1} \right)}^{m + k - j}}\left( {m + k - j - 1} \right)!H_\alpha ^{\left( {m + k - j} \right)}I\left( {\alpha ,0,j} \right)} \\ &\quad\ - \sum\limits_{j = 0}^{k - 1} {\binom{k}{j}{{\left( { - 1} \right)}^{m + k - j}}\left( {m + k - j - 1} \right)!\zeta \left( {m + k - j} \right)I\left( {\alpha ,0,j} \right)} \\ &\quad\ + \sum\limits_{i = 1}^{m - 1} {\sum\limits_{j = 0}^{k - 1} {\binom{m-1}{i}\binom{k}{j}{{\left( { - 1} \right)}^{m + k - i - j}}\left( {m + k - i - j - 1} \right)!H_\alpha ^{\left( {m + k - i - j} \right)}I\left( {\alpha ,i,j} \right)} } \\ &\quad\ - \sum\limits_{i = 1}^{m - 1} {\sum\limits_{j = 0}^{k - 1} {\binom{m-1}{i}\binom{k}{j}{{\left( { - 1} \right)}^{m + k - i - j}}\left( {m + k - i - j - 1} \right)!\zeta \left( {m + k - i - j} \right)I\left( {\alpha ,i,j} \right)} },\tag{2.1} \end{align*} where $I\left( {\alpha ,m,k} \right)$ is defined by the integral \[I\left( {\alpha ,m,k} \right): = \int\limits_0^1 {{x^{\alpha - 1}}{{\ln }^m}x\,{{\ln }^k}\left( {1 - x} \right)} dx,\tag{2.2}\] with \[I\left( {\alpha ,0,0} \right) = \frac{1}{\alpha },\quad I\left( {\alpha ,i,0} \right) = {\left( { - 1} \right)^i}i!\frac{1}{\alpha ^{i + 1}}.\] \end{thm} \it{Proof.}\rm\quad Applying the definition of the Beta function ${B\left( {\alpha ,\beta } \right)}$, we can find that \[I\left( {\alpha ,m,k} \right): = \int\limits_0^1 {{x^{\alpha - 1}}{{\ln }^m}x\,{{\ln }^k}\left( {1 - x} \right)} dx = {\left.
{\frac{{{\partial ^{m + k}}B\left( {\alpha ,\beta } \right)}}{{\partial {\alpha ^m}\partial {\beta ^k}}}} \right|_{\beta = 1}},\tag{2.3}\] where the Beta function is defined by \[B\left( {\alpha,\beta} \right) := \int\limits_0^1 {{x^{\alpha - 1}}{{\left( {1 - x} \right)}^{\beta - 1}}dx} = \frac{\Gamma \left( \alpha \right)\Gamma \left( \beta \right)}{\Gamma \left( {\alpha + \beta} \right)},\;\Re \left( \alpha \right) > 0,\ \Re \left( \beta \right) > 0.\tag{2.4}\] By using (2.4) and the definition of $\psi (x)$, it is obvious that \[\frac{\partial B\left( {\alpha ,\beta } \right)}{\partial \alpha } = B\left( {\alpha ,\beta } \right)\left[ {\psi \left( \alpha \right) - \psi \left( {\alpha + \beta } \right)} \right].\] Therefore, differentiating this equality $m-1$ times, we can deduce that \[\frac{{\partial ^m}B\left( {\alpha ,\beta } \right)}{\partial {\alpha ^m}} = \sum\limits_{i = 0}^{m - 1} {\binom{m-1}{i}\frac{{\partial ^i}B\left( {\alpha ,\beta } \right)}{\partial {\alpha ^i}}} \cdot\left[ {{\psi ^{\left( {m - i - 1} \right)}}\left( \alpha \right) - {\psi ^{\left( {m - i - 1} \right)}}\left( {\alpha + \beta } \right)} \right].\tag{2.5}\] Since $B(\alpha,\beta)=B(\beta,\alpha)$, we also have \[\frac{{\partial ^m}B\left( {\alpha ,\beta } \right)}{\partial {\beta ^m}} = \sum\limits_{i = 0}^{m - 1} {\binom{m-1}{i}\frac{{\partial ^i}B\left( {\alpha ,\beta } \right)}{\partial {\beta ^i}}} \cdot\left[ {{\psi ^{\left( {m - i - 1} \right)}}\left( \beta \right) - {\psi ^{\left( {m - i - 1} \right)}}\left( {\beta + \alpha } \right)} \right].\tag{2.6}\] Putting $\beta=1$ in
(2.6) and combining (1.2) and (2.3), we arrive at the conclusion that \[I\left( {\alpha ,0,m} \right) = \sum\limits_{i = 0}^{m - 1} {{{\left( { - 1} \right)}^{m - i}}\left( {m - i - 1} \right)!\binom{m-1}{i}I\left( {\alpha ,0,i} \right)} H_\alpha ^{\left( {m - i} \right)}.\tag{2.7}\] Furthermore, by using (2.5), the following identity is easily derived \begin{align*} \frac{{\partial ^{m + k}}B\left( {\alpha,\beta} \right)}{\partial {\alpha^m}\partial {\beta^k}} &=\frac{{\partial ^k}}{\partial {\beta^k}}\left( {\frac{{\partial ^m}B\left( {\alpha,\beta} \right)}{\partial {\alpha^m}}} \right) \nonumber \\ &=\sum\limits_{i = 0}^{m - 1} {\binom{m-1}{i}\frac{{\partial ^{i + k}}B\left( {\alpha,\beta} \right)}{\partial {\alpha^i}\partial {\beta^k}} \cdot } \left[ {{\psi ^{\left( {m - i - 1} \right)}}\left(\alpha \right) - {\psi ^{\left( {m - i - 1} \right)}}\left( {\alpha + \beta} \right)} \right] \nonumber \\ &\quad \ - \sum\limits_{j = 0}^{k - 1} {\binom{k}{j}} \frac{{\partial ^{ j}}B\left( {\alpha,\beta} \right)}{\partial {\beta^j}}{\psi ^{\left( {m + k - j - 1} \right)}}\left( {\alpha + \beta} \right) \nonumber \\ &\quad \ - \sum\limits_{i = 1}^{m - 1} {\sum\limits_{j = 0}^{k - 1} {\binom{m-1}{i}\binom{k}{j}} \frac{{\partial ^{i + j}}B\left( {\alpha,\beta} \right)}{\partial {\alpha^i}\partial {\beta^j}}} {\psi ^{\left( {m + k - i - j - 1} \right)}}\left( {\alpha + \beta} \right) .\tag{2.8} \end{align*} From (1.2) and (1.5), we know that if $\beta=1$, then we have
\[{\psi ^{\left( {m - i - 1} \right)}}\left( \alpha \right) - {\psi ^{\left( {m - i - 1} \right)}}\left( {\alpha + 1} \right) = {\left( { - 1} \right)^{m - i}}\left( {m - i - 1} \right)!\frac{1}{\alpha ^{m - i}},\tag{2.9}\] \[{\psi ^{\left( {m + k - j - 1} \right)}}\left( {\alpha + 1} \right) = {\left( { - 1} \right)^{m + k - j}}\left( {m + k - j - 1} \right)!\left( {\zeta \left( {m + k - j} \right) - H_\alpha ^{\left( {m + k - j} \right)}} \right).\tag{2.10}\] Hence, taking $\beta=1$ in (2.8) and then substituting (2.9) and (2.10) into (2.8), we obtain (2.1). The proof of Theorem 2.1 is finished. $\square$\\ From (2.1) and (2.7), we can get the following identities: for $\alpha>0$, \begin{align*} &I\left( {\alpha ,0,1} \right) = \int\limits_0^1 {{x^{\alpha - 1}}\ln \left( {1 - x} \right)dx} = - \frac{H_\alpha }{\alpha },\tag{2.11}\\ &I\left( {\alpha ,0,2} \right) = \int\limits_0^1 {{x^{\alpha - 1}}{{\ln }^2}\left( {1 - x} \right)dx} = \frac{H_\alpha ^2 + H_\alpha ^{\left( 2 \right)}}{\alpha },\tag{2.12}\\ &I\left( {\alpha ,1,1} \right) = \int\limits_0^1 {{x^{\alpha - 1}}\ln x\ln \left( {1 - x} \right)dx} = \frac{H_\alpha }{\alpha ^2} - \frac{\zeta \left( 2 \right) - H_\alpha ^{\left( 2 \right)}}{\alpha },\tag{2.13}\\ &I\left( {\alpha ,0,3} \right) = \int\limits_0^1 {{x^{\alpha - 1}}{{\ln }^3}\left( {1 - x} \right)dx} = - \frac{H_\alpha ^3 + 3{H_\alpha}H_\alpha ^{\left( 2 \right)} + 2H_\alpha ^{\left( 3 \right)}}{\alpha },\tag{2.14}\\ &I\left( {\alpha ,1,2} \right) = \int\limits_0^1 {{x^{\alpha - 1}}\ln x{{\ln }^2}\left( {1 - x} \right)dx} = - \frac{H_\alpha ^2 + H_\alpha ^{\left( 2 \right)}}{\alpha ^2} + 2\frac{\zeta \left( 3 \right) - H_\alpha ^{\left( 3 \right)}}{\alpha }
+ 2\frac{\zeta \left( 2 \right) - H_\alpha ^{\left( 2 \right)}}{\alpha }{H_\alpha }.\tag{2.15} \end{align*} \begin{thm} For integers $r,k\in \mathbb{N}_0$ $(r\neq k)$ and $\alpha> \max\{k,r\}$, we have \begin{align*} \sum\limits_{n = 1}^\infty {\frac{H_{n + \alpha}}{\left( {n + r} \right)\left( {n + k} \right)}} =& \frac{k - \alpha }{k - r}\left\{ {\frac{H_{\alpha - k}^2 + H_{\alpha - k}^{\left( 2 \right)}}{\alpha - k} - \sum\limits_{j = 1}^k {\frac{H_{\alpha + j - k}}{j\left( {\alpha + j - k} \right)}} } \right\}\\ &+ \frac{\alpha - r}{k - r}\left\{ {\frac{H_{\alpha - r}^2 + H_{\alpha - r}^{\left( 2 \right)}}{\alpha - r} - \sum\limits_{j = 1}^r {\frac{H_{\alpha + j - r}}{j\left( {\alpha + j - r} \right)}} } \right\}.\tag{2.16} \end{align*} \end{thm} \it{Proof.}\rm\quad Replacing $\alpha$ by $n+\alpha$ in (2.11), then multiplying by $(n+k)^{-1}$ and summing over $n$, we conclude that \begin{align*} \sum\limits_{n = 1}^\infty {\frac{H_{n + \alpha }}{\left( {n + k} \right)\left( {n + \alpha } \right)}} =& - \sum\limits_{n = 1}^\infty {\frac{1}{n + k}\int\limits_0^1 {{x^{n + \alpha - 1}}\ln \left( {1 - x} \right)} } dx\\ = &\sum\limits_{j = 1}^k {\frac{1}{j}\int\limits_0^1 {{x^{\alpha + j - k - 1}}\ln \left( {1 - x} \right)dx} } + \int\limits_0^1 {{x^{\alpha - k - 1}}{{\ln }^2}\left( {1 - x} \right)dx}.\tag{2.17} \end{align*} The relations (2.11), (2.12) and (2.17) yield the following result \[\sum\limits_{n =
1}^\infty {\frac{H_{n + \alpha }}{\left( {n + k} \right)\left( {n + \alpha } \right)}} = \frac{H_{\alpha - k}^2 + H_{\alpha - k}^{\left( 2 \right)}}{\alpha - k} - \sum\limits_{j = 1}^k {\frac{H_{\alpha + j - k}}{j\left( {\alpha + j - k} \right)}} \;\;\left( {\alpha > k} \right).\tag{2.18}\] Putting $f\left( {n,\alpha } \right) = {H_{n + \alpha }}$ in (1.10) and combining with (2.18), we easily deduce the desired result. $\square$ \begin{cor} For integers $r,k,m\in \mathbb{N}$ with $r\neq k$ and real $\alpha> \max\{k,r\}$, we have \begin{align*} \sum\limits_{n = 1}^\infty {\frac{H_{n + \alpha - m}}{\left( {n + r} \right)\left( {n + k} \right)}} =& \frac{k - \alpha }{k - r}\left\{ {\frac{H_{\alpha - k}^2 + H_{\alpha - k}^{\left( 2 \right)}}{\alpha - k} - \sum\limits_{j = 1}^k {\frac{H_{\alpha + j - k}}{j\left( {\alpha + j - k} \right)}} } \right\}\\ &+ \frac{\alpha - r}{k - r}\left\{ {\frac{H_{\alpha - r}^2 + H_{\alpha - r}^{\left( 2 \right)}}{\alpha - r} - \sum\limits_{j = 1}^r {\frac{H_{\alpha + j - r}}{j\left( {\alpha + j - r} \right)}} } \right\}\\ &- \frac{1}{k - r}\sum\limits_{j = 1}^m {\left\{ {\frac{{H_{j + \alpha - m}} - {H_r}}{j + \alpha - m - r} - \frac{{H_{j + \alpha - m}} - {H_k}}{j + \alpha - m - k}} \right\}},\tag{2.19} \end{align*} where $\alpha \ne m + r - j$ and $\alpha \ne m + k - j$ $\left( {j = 1,2, \cdots ,m} \right)$, with $\alpha - m \notin \mathbb{N}^-$.
\end{cor} \it{Proof.}\rm\quad By the definition of $H_{n+\alpha}$, we have the relation \[{H_{n + \alpha }} - {H_{n + \alpha - m}} = \sum\limits_{j = 1}^m {\frac{1}{j + n + \alpha - m}} .\] Hence, using the above identity, we can rewrite the series on the left hand side of (2.19) as \[\sum\limits_{n = 1}^\infty {\frac{H_{n + \alpha - m}}{\left( {n + r} \right)\left( {n + k} \right)}} = \sum\limits_{n = 1}^\infty {\frac{H_{n + \alpha }}{\left( {n + r} \right)\left( {n + k} \right)}} - \sum\limits_{j = 1}^m {\sum\limits_{n = 1}^\infty {\frac{1}{\left( {n + j + \alpha - m} \right)\left( {n + r} \right)\left( {n + k} \right)}} } .\tag{2.20}\] By a direct calculation, we obtain \begin{align*} &\sum\limits_{n = 1}^\infty {\frac{1}{\left( {n + j + \alpha - m} \right)\left( {n + r} \right)\left( {n + k} \right)}} \\ &= \frac{1}{k - r}\left\{ {\sum\limits_{n = 1}^\infty {\frac{1}{\left( {n + j + \alpha - m} \right)\left( {n + r} \right)}} - \sum\limits_{n = 1}^\infty {\frac{1}{\left( {n + j + \alpha - m} \right)\left( {n + k} \right)}} } \right\}\\ & = \frac{{H_{j + \alpha - m}} - {H_r}}{\left( {k - r} \right)\left( {j + \alpha - m - r} \right)} - \frac{{H_{j + \alpha - m}} - {H_k}}{\left( {k - r} \right)\left( {j + \alpha - m - k} \right)}.\tag{2.21} \end{align*} Substituting (2.16) and (2.21) into (2.20), we can prove (2.19). The proof of Corollary 2.3 is thus completed.
$\square$\\ From (2.16) and (2.19), we have the following special cases: \begin{align*} &\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n - 1/2}}}}{{\left( {n + 1} \right)\left( {n + 2} \right)}}} = \displaystyle\!frac{2}{3} + \displaystyle\!frac{1}{3}\ln 2,\ \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + 1/2}}}}{{\left( {n + 1} \right)\left( {n + 2} \right)}}} = 3\ln 2 - 1,\\ &\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + 3/2}}}}{{\left( {n + 1} \right)\left( {n + 2} \right)}}} = \displaystyle\!frac{{14}}{3} - 5\ln 2,\ \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + 5/2}}}}{{\left( {n + 1} \right)\left( {n + 2} \right)}}} = \displaystyle\!frac{{131}}{{45}} - \displaystyle\!frac{7}{3}\ln 2. \end{align*} Next, we evaluate sums of shifted harmonic numbers of order two, of the form \[\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} ,\;\;\alpha > \displaystyle\!max \{ k,r\} ,\;k,r \in \mathbb{N}_0,k \ne r.\] First, we need to obtain the integral representation of ${H_{\alpha }^{\left( 2 \right)}}$.
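The four special values displayed above can be confirmed numerically. The following Python sketch (an illustration only, not part of the derivation) sums each series directly, generating $H_{n+\alpha}$ recursively from the standard value $H_{1/2}=2-2\ln 2$ via $H_{x+1}=H_x+1/(x+1)$; after $N=5\cdot 10^{5}$ terms the truncation tail is of order $(\ln N)/N\approx 3\cdot 10^{-5}$.

```python
import math

def check(alpha, closed_form, N=500_000):
    # Shifted harmonic numbers at half-integer arguments follow from the
    # standard value H_{1/2} = 2 - 2 ln 2 via H_{x+1} = H_x + 1/(x+1).
    H, x = 2 - 2 * math.log(2), 0.5
    while x < 1 + alpha - 1e-9:   # advance H from H_{1/2} up to H_{1+alpha}
        H += 1 / (x + 1)
        x += 1
    # Partial sum of  sum_{n>=1} H_{n+alpha} / ((n+1)(n+2)).
    s = 0.0
    for n in range(1, N + 1):
        s += H / ((n + 1) * (n + 2))
        H += 1 / (n + 1 + alpha)  # advance to H_{(n+1)+alpha}
    # Truncation tail ~ (ln N)/N ~ 3e-5, so 5e-5 is a safe tolerance.
    assert abs(s - closed_form) < 5e-5

check(-0.5, 2/3 + math.log(2)/3)
check(0.5, 3*math.log(2) - 1)
check(1.5, 14/3 - 5*math.log(2))
check(2.5, 131/45 - 7/3*math.log(2))
```

All four asserts pass, in agreement with the closed forms obtained from (2.16) and (2.19).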
Setting $p=2$ in (1.11) gives \[\displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - 1}}{\rm{L}}{{\rm{i}}_2}\left( x \right)dx} = \displaystyle\!frac{{\zeta \left( 2 \right)}}{\alpha } - \displaystyle\!frac{{{H_\alpha }}}{{{\alpha ^2}}}.\tag{2.22}\] With the help of (2.13), we get \[\displaystyle\!frac{{H_\alpha ^{\left( 2 \right)}}}{\alpha } = \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - 1}}\ln x\ln \left( {1 - x} \right)} dx + \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - 1}}{\rm{L}}{{\rm{i}}_2}\left( x \right)dx},\ \alpha>0.\tag{2.23} \] Replacing $\alpha$ by $n+\alpha$ in (2.23), then multiplying it by $(n+k)^{-1}$ and summing with respect to $n$, the result is \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + k} \right)\left( {n + \alpha } \right)}}} =& \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{n + k}}\displaystyle\!int\displaystyle\!limits_0^1 {{x^{n + \alpha - 1}}\ln x\ln \left( {1 - x} \right)} dx} + \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{n + k}}\displaystyle\!int\displaystyle\!limits_0^1 {{x^{n + \alpha - 1}}{\rm{L}}{{\rm{i}}_2}\left( x \right)dx} } \\ = & - \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - k - 1}}\ln x{{\ln }^2}\left( {1 - x} \right)dx} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{1}{j}} \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha + j - k - 1}}\ln x\ln \left( {1 - x} \right)} dx\\ &- \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - k - 1}}\ln \left( {1 - x} \right){\rm{L}}{{\rm{i}}_2}\left( x \right)dx} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{1}{j}} \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha + j - k - 1}}{\rm{L}}{{\rm{i}}_2}\left( x \right)} dx.\tag{2.24} \end{align*} We note that by using (2.11), 
the following identities are easily derived \begin{align*} \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - 1}}\ln \left( {1 - x} \right){\rm{L}}{{\rm{i}}_2}\left( x \right)dx} = & - \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{{n^2}\left( {n + \alpha } \right)}}} \\ =& \displaystyle\!frac{1}{\alpha }\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{n\left( {n + \alpha } \right)}}} - \displaystyle\!frac{1}{\alpha }\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{{n^2}}}} .\tag{2.25} \end{align*} On the other hand, from (2.18), setting $k=0$, we have \[\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{n\left( {n + \alpha } \right)}}} = \displaystyle\!frac{{H_\alpha ^2 + H_\alpha ^{\left( 2 \right)}}}{\alpha },\ \alpha \notin \mathbb{N}^-\cup\{0\}.\tag{2.26}\] Hence, combining (2.13), (2.15), (2.22) and (2.24)-(2.26), we obtain \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + k} \right)\left( {n + \alpha } \right)}}} =& \displaystyle\!frac{1}{{\alpha - k}}\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha - k}}}}{{{n^2}}}} - 2\displaystyle\!frac{{\zeta \left( 3 \right) - H_{\alpha - k}^{\left( 3 \right)}}}{{\alpha - k}}\\ &- 2\displaystyle\!frac{{\zeta \left( 2 \right) - H_{\alpha - k}^{\left( 2 \right)}}}{{\alpha - k}}{H_{\alpha - k}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^{\left( 2 \right)}}}{{j\left( {\alpha + j - k} \right)}}} ,\ (\alpha>k) .\tag{2.27} \end{align*} Letting $f\left( {n,\alpha } \right) = H_{n + \alpha }^{\left( 2 \right)}$ in (1.10), then substituting (2.27) into
(1.10), we get the following Theorem. \begin{thm} For integers $r,k\geq 0$ and real $\alpha>k>r$, then we have \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} = \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} \left( {r - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^{\left( 2 \right)}}}{{j\left( {\alpha + j - r} \right)}}} - \left( {k - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^{\left( 2 \right)}}}{{j\left( {\alpha + j - k} \right)}}} \\ - \displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!frac{{{H_{\alpha + j - k}}}}{{{{\left( {\alpha + j - k} \right)}^2}}}} + 2H_{\alpha - r}^{\left( 3 \right)} + {H_{\alpha - k}}\zeta \left( 2 \right) + 2{H_{\alpha - r}}H_{\alpha - r}^{\left( 2 \right)} \\ - 2H_{\alpha - k}^{\left( 3 \right)} - {H_{\alpha - r}}\zeta \left( 2 \right) - 2{H_{\alpha - k}}H_{\alpha - k}^{\left( 2 \right)} \\ \end{array} \right\}.\tag{2.28} \end{align*} \end{thm} \it{Proof.}\rm\quad From (1.10) and (2.27), we arrive at the conclusion that \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty \displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}} =& \displaystyle\!frac{1}{{k - r}}\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha - r}} - {H_{n + \alpha - k}}}}{{{n^2}}}} \\ &+ \displaystyle\!frac{{r - \alpha }}{{k - r}}\displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^{\left( 2 \right)}}}{{j\left( {\alpha + j - r} \right)}}} - \displaystyle\!frac{{k - \alpha }}{{k - r}}\displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^{\left( 2 \right)}}}{{j\left( 
{\alpha + j - k} \right)}}} \\ & + \displaystyle\!frac{2}{{k - r}}\left\{ \begin{array}{l} H_{\alpha - r}^{\left( 3 \right)} + {H_{\alpha - k}}\zeta \left( 2 \right) + {H_{\alpha - r}}H_{\alpha - r}^{\left( 2 \right)} \\ - H_{\alpha - k}^{\left( 3 \right)} - {H_{\alpha - r}}\zeta \left( 2 \right) - {H_{\alpha - k}}H_{\alpha - k}^{\left( 2 \right)} \\ \end{array} \right\}.\tag{2.29} \end{align*} By the definition of shifted harmonic number, and using (1.14), we conclude that \[H_{\alpha - r}^{\left( m \right)} - H_{\alpha - k}^{\left( m \right)} = \displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!frac{1}{{{{\left( {j + \alpha - k} \right)}^m}}}},\] and for $2\leq p\in \mathbb{N},\ 0\leq r<k\in \mathbb{N}$, \begin{align*} &\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha - r}} - {H_{n + \alpha - k}}}}{{{n^p}}}} =\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{{n^p}\left( {n + j + \alpha - k} \right)}}} }\\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!sum\displaystyle\!limits_{i = 1}^{p - 1} {\displaystyle\!frac{{{{\left( { - 1} \right)}^{i - 1}}}}{{{{\left( {j + \alpha - k} \right)}^i}}}\zeta \left( {p + 1 - i} \right)} } + {\left( { - 1} \right)^{p - 1}}\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} { {\displaystyle\!frac{{{H_{j + \alpha - k}}}}{{{{\left( {j + \alpha - k} \right)}^p}}}} } \\ &= \displaystyle\!sum\displaystyle\!limits_{i = 1}^{p - 1} {{{\left( { - 1} \right)}^{i - 1}}\zeta \left( {p + 1 - i} \right)\left( {H_{\alpha - r}^{\left( i \right)} - H_{\alpha - k}^{\left( i \right)}} \right)} + {\left( { - 1} \right)^{p - 1}}\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} { {\displaystyle\!frac{{{H_{j + \alpha - k}}}}{{{{\left( {j + \alpha - k} \right)}^p}}}} } .\tag{2.30} \end{align*} Taking
$p=2$ in (2.30) and substituting it into (2.29), by simple calculation, we obtain the desired result. $\square$ \begin{cor} For integers $r,k,m\in \mathbb{N}$ and real $\alpha> k>r$, we have \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha - m}^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} =& \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} \left( {r - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^{\left( 2 \right)}}}{{j\left( {\alpha + j - r} \right)}}} - \left( {k - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^{\left( 2 \right)}}}{{j\left( {\alpha + j - k} \right)}}} \\ - \displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!frac{{{H_{\alpha + j - k}}}}{{{{\left( {\alpha + j - k} \right)}^2}}}} + 2H_{\alpha - r}^{\left( 3 \right)} + {H_{\alpha - k}}\zeta \left( 2 \right) + 2{H_{\alpha - r}}H_{\alpha - r}^{\left( 2 \right)} \\ - 2H_{\alpha - k}^{\left( 3 \right)} - {H_{\alpha - r}}\zeta \left( 2 \right) - 2{H_{\alpha - k}}H_{\alpha - k}^{\left( 2 \right)} \\ \end{array} \right\}\\ & - \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{{H_{\alpha + j - m}} - {H_r}}}{{{{\left( {\alpha + j - m - r} \right)}^2}}}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{\zeta \left( 2 \right) - H_{\alpha + j - m}^{\left( 2 \right)}}}{{\alpha + j - m - r}}} \\ - \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{{H_{\alpha + j - m}} - {H_k}}}{{{{\left( {\alpha + j - m - k} \right)}^2}}}} + \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{\zeta \left( 2 \right) - H_{\alpha + j - m}^{\left( 2 \right)}}}{{\alpha + j - m - k}}} \\ \end{array} \right\},\tag{2.31} \end{align*} where $\alpha
\ne m + r - j,m + k - j\;\left( {j = 1,2, \cdots ,m} \right)$ and $\alpha - m \notin \mathbb{N}^-$. \end{cor} \it{Proof.}\rm\quad By a similar argument as in the proof of Corollary 2.3, we have \[\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha - m}^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} = \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{{{\left( {n + j + \alpha - m} \right)}^2}\left( {n + k} \right)\left( {n + r} \right)}}} }.\tag{2.32} \] On the other hand, we easily obtain the result \begin{align*} &\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{{{\left( {n + j + \alpha - m} \right)}^2}\left( {n + k} \right)\left( {n + r} \right)}}} \\ & = \displaystyle\!frac{1}{{k - r}}\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\left( {\displaystyle\!frac{1}{{{{\left( {n + j + \alpha - m} \right)}^2}\left( {n + r} \right)}} - \displaystyle\!frac{1}{{{{\left( {n + j + \alpha - m} \right)}^2}\left( {n + k} \right)}}} \right)} \\ & = \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} \displaystyle\!frac{1}{{{{\left( {j + \alpha - m - r} \right)}^2}}}\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\left( {\displaystyle\!frac{1}{{n + r}} - \displaystyle\!frac{1}{{n + j + \alpha - m}}} \right)} \\ - \displaystyle\!frac{1}{{j + \alpha - m - r}}\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{{{\left( {n + j + \alpha - m} \right)}^2}}}} \\ - \displaystyle\!frac{1}{{{{\left( {j + \alpha - m - k} \right)}^2}}}\displaystyle\!sum\displaystyle\!limits_{n =
1}^\displaystyle\!infty {\left( {\displaystyle\!frac{1}{{n + k}} - \displaystyle\!frac{1}{{n + j + \alpha - m}}} \right)} \\ + \displaystyle\!frac{1}{{j + \alpha - m - k}}\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{{{\left( {n + j + \alpha - m} \right)}^2}}}} \\ \end{array} \right\}\\ & = \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{{H_{\alpha + j - m}} - {H_r}}}{{{{\left( {\alpha + j - m - r} \right)}^2}}}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{\zeta \left( 2 \right) - H_{\alpha + j - m}^{\left( 2 \right)}}}{{\alpha + j - m - r}}} \\ - \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{{H_{\alpha + j - m}} - {H_k}}}{{{{\left( {\alpha + j - m - k} \right)}^2}}}} + \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{\zeta \left( 2 \right) - H_{\alpha + j - m}^{\left( 2 \right)}}}{{\alpha + j - m - k}}} \\ \end{array} \right\}.\tag{2.33} \end{align*} Substituting (2.28) and (2.33) into (2.32) respectively, we deduce (2.31). This completes the proof of Corollary 2.5. 
$\square$ \begin{thm} For integers $r,k\geq 0$ and $\alpha>k>r$, then we have \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2 + H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} =& \displaystyle\!frac{{k - \alpha }}{{k - r}}\left\{ {\displaystyle\!frac{{H_{\alpha - k}^3 + 3{H_{\alpha - k}}H_{\alpha - k}^{\left( 2 \right)} + 2H_{\alpha - k}^{\left( 3 \right)}}}{{\alpha - k}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^2 + H_{\alpha + j - k}^{\left( 2 \right)}}}{{j\left( {\alpha + j - k} \right)}}} } \right\}\\ & + \displaystyle\!frac{{\alpha - r}}{{k - r}}\left\{ {\displaystyle\!frac{{H_{\alpha - r}^3 + 3{H_{\alpha - r}}H_{\alpha - r}^{\left( 2 \right)} + 2H_{\alpha - r}^{\left( 3 \right)}}}{{\alpha - r}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^2 + H_{\alpha + j - r}^{\left( 2 \right)}}}{{j\left( {\alpha + j - r} \right)}}} } \right\}.\tag{2.34} \end{align*} \end{thm} \it{Proof.}\rm\quad From (2.12) and (2.14), we know that \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2 + H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + k} \right)\left( {n + \alpha } \right)}}} &=\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{1}{{n + k}}\displaystyle\!int\displaystyle\!limits_0^1 {{x^{n + \alpha - 1}}{{\ln }^2}\left( {1 - x} \right)} } dx\\ & = - \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - k - 1}}{{\ln }^3}\left( {1 - x} \right)} dx - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{1}{j}\displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha + j - k - 1}}{{\ln }^2}\left( {1 - x} \right)} dx} \\ & = \displaystyle\!frac{{H_{\alpha - k}^3 + 3{H_{\alpha - k}}H_{\alpha - k}^{\left( 2 \right)} + 2H_{\alpha - k}^{\left( 3 
\right)}}}{{\alpha - k}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^2 + H_{\alpha + j - k}^{\left( 2 \right)}}}{{j\left( {\alpha + j - k} \right)}}} .\tag{2.35} \end{align*} Setting $f\left( {n,\alpha } \right) = H_{n + \alpha }^2 + H_{n + \alpha }^{\left( 2 \right)}$ in (1.10) and combining (2.35), the result is (2.34). $\square$\\ Similarly to the proof of Theorem 2.6, we have the following similar result. \begin{thm} For integers $r,k\geq 0$ and $\alpha>k>r$, then the following identity holds: \begin{align*} &\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^3 + 3{H_{n + \alpha }}H_{n + \alpha }^{\left( 2 \right)} + 2H_{n + \alpha }^{\left( 3 \right)}}} {{\left( {n + r} \right)\left( {n + k} \right)}}} \\ & = \displaystyle\!frac{{k - \alpha }} {{k - r}}\left\{ \begin{gathered} \displaystyle\!frac{{H_{\alpha - k}^4 + 6H_{\alpha - k}^2H_{\alpha - k}^{\left( 2 \right)} + 8{H_{\alpha - k}}H_{\alpha - k}^{\left( 3 \right)} + 3{{\left( {H_{\alpha - k}^{\left( 2 \right)}} \right)}^2} + 6H_{\alpha - k}^{\left( 4 \right)}}} {{\alpha - k}} \\ - \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^3 + 3{H_{\alpha + j - k}}H_{\alpha + j - k}^{\left( 2 \right)} + 2H_{\alpha + j - k}^{\left( 3 \right)}}} {{j\left( {\alpha + j - k} \right)}}} \\ \end{gathered} \right\}\\ & \quad + \displaystyle\!frac{{\alpha - r}} {{k - r}}\left\{ \begin{gathered} \displaystyle\!frac{{H_{\alpha - r}^4 + 6H_{\alpha - r}^2H_{\alpha - r}^{\left( 2 \right)} + 8{H_{\alpha - r}}H_{\alpha - r}^{\left( 3 \right)} + 3{{\left( {H_{\alpha - r}^{\left( 2 \right)}} \right)}^2} + 6H_{\alpha - r}^{\left( 4 \right)}}} {{\alpha - r}} \\ - \displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^3 + 3{H_{\alpha + j - r}}H_{\alpha + j - r}^{\left( 2 \right)} + 2H_{\alpha + j - r}^{\left( 3 \right)}}} {{j\left( {\alpha + j - r} 
\right)}}} \\ \end{gathered} \right\} .\tag{2.36} \end{align*} \end{thm} \it{Proof.}\rm\quad Applying the same arguments as in the proof of Theorem 2.6, we may easily deduce \begin{align*} &\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^3 + 3{H_{n + \alpha }}H_{n + \alpha }^{\left( 2 \right)} + 2H_{n + \alpha }^{\left( 3 \right)}}} {{\left( {n + \alpha } \right)\left( {n + k} \right)}}} \\ &= \displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - k - 1}}{{\ln }^4}\left( {1 - x} \right)} dx + \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{1} {j}\displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha + j - k - 1}}{{\ln }^3}\left( {1 - x} \right)} dx} .\tag{2.37} \end{align*} Setting $m=4$ in (2.7) and combining (1.10), (2.11), (2.12) with (2.14), we obtain \[\displaystyle\!int\displaystyle\!limits_0^1 {{x^{\alpha - 1}}{{\ln }^4}\left( {1 - x} \right)} dx = \displaystyle\!frac{{H_\alpha ^4 + 6H_\alpha ^2H_\alpha ^{\left( 2 \right)} + 8{H_\alpha }H_\alpha ^{\left( 3 \right)} + 3{{\left( {H_\alpha ^{\left( 2 \right)}} \right)}^2} + 6H_\alpha ^{\left( 4 \right)}}} {\alpha }.\tag{2.38}\] Therefore, using (2.14), (2.37) and (2.38) yields the desired result. This completes the proof of Theorem 2.7. $\square$\\ Substituting (2.28) into (2.34), we can obtain the following Corollary.
\begin{cor} For integers $r,k\geq 0$ and $\alpha>k>r$, then we have the quadratic sums \begin{align*} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2}}{{\left( {n + r} \right)\left( {n + k} \right)}}} = \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} H_{\alpha - r}^3 + {H_{\alpha - r}}H_{\alpha - r}^{\left( 2 \right)} + {H_{\alpha - r}}\zeta \left( 2 \right) \\ - H_{\alpha - k}^3 - {H_{\alpha - k}}H_{\alpha - k}^{\left( 2 \right)} - {H_{\alpha - k}}\zeta \left( 2 \right) \\ + \left( {r - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^2}}{{j\left( {\alpha + j - r} \right)}}} + \displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!frac{{{H_{\alpha + j - k}}}}{{{{\left( {\alpha + j - k} \right)}^2}}}} \\ - \left( {k - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^2}}{{j\left( {\alpha + j - k} \right)}}} \\ \end{array} \right\}.\tag{2.39} \end{align*} \end{cor} Further, using the definition of shifted harmonic number, by a simple calculation, the following relations are easily derived \begin{align*} &\displaystyle\!frac{\partial }{{\partial \alpha }}\left( {H_{n + \alpha }^{\left( m \right)}} \right) = m\left( {\zeta \left( {m + 1} \right) - H_{n + \alpha }^{\left( {m + 1} \right)}} \right),\\ &\displaystyle\!frac{{{\partial ^m}}}{{\partial {\alpha ^m}}}\left( {{H_{n + \alpha }}} \right) = {\left( { - 1} \right)^{m + 1}}m!\left( {\zeta \left( {m + 1} \right) - H_{n + \alpha }^{\left( {m + 1} \right)}} \right),\\ &\displaystyle\!frac{\partial }{{\partial \alpha }}\left( {H_{n + \alpha }^2} \right) = 2\zeta \left( 2 \right){H_{n + \alpha }} - 2{H_{n + \alpha }}H_{n + \alpha }^{\left( 2 \right)},\\ &\displaystyle\!frac{\partial }{{\partial \alpha }}\left( {H_{n + \alpha }^3} \right) = 3\zeta \left( 2 \right)H_{n + \alpha }^2 - 3H_{n + \alpha }^2H_{n + \alpha }^{\left( 
2 \right)}. \end{align*} Hence, from Theorems 2.2, 2.4 and 2.7 and Corollary 2.8, together with the above relations, we can get Theorem 2.9. \begin{thm} Let $k,r,m$ be positive integers and let $\alpha>\max\{k,r\}$ be real with $k\neq r$. Then the linear, quadratic and cubic sums \[\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( m \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} ,\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} ,\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^3}}{{\left( {n + r} \right)\left( {n + k} \right)}}} ,\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} \] can be expressed in terms of shifted harmonic numbers and zeta values.
\end{thm} A simple example is as follows: \begin{cor} For integers $r,k\geq 0$ with $\alpha>k>r$, we have \[\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 3 \right)}}}{{\left( {n + r} \right)\left( {n + k} \right)}}} = \displaystyle\!frac{1}{{k - r}}\left\{ \begin{array}{l} \left( {r - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^r {\displaystyle\!frac{{H_{\alpha + j - r}^{\left( 3 \right)}}}{{j\left( {\alpha + j - r} \right)}}} - \left( {k - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{j = 1}^k {\displaystyle\!frac{{H_{\alpha + j - k}^{\left( 3 \right)}}}{{j\left( {\alpha + j - k} \right)}}} \\ - \displaystyle\!frac{1}{2}\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!frac{{H_{\alpha + j - k}^{\left( 2 \right)}}}{{{{\left( {\alpha + j - k} \right)}^2}}}} - \displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - r} {\displaystyle\!frac{{{H_{\alpha + j - k}}}}{{{{\left( {\alpha + j - k} \right)}^3}}}} + 3\left( {H_{\alpha - r}^{\left( 4 \right)} - H_{\alpha - k}^{\left( 4 \right)}} \right) \\ + \left( {H_{\alpha - k}^{\left( 2 \right)} - H_{\alpha - r}^{\left( 2 \right)}} \right)\zeta \left( 2 \right) + \left( {{H_{\alpha - k}} - {H_{\alpha - r}}} \right)\zeta \left( 3 \right) \\ + \left( {{{\left( {H_{\alpha - r}^{\left( 2 \right)}} \right)}^2} - {{\left( {H_{\alpha - k}^{\left( 2 \right)}} \right)}^2}} \right) + 2\left( {{H_{\alpha - r}}H_{\alpha - r}^{\left( 3 \right)} - {H_{\alpha - k}}H_{\alpha - k}^{\left( 3 \right)}} \right) \\ \end{array} \right\}.\] \end{cor} \section{Some results of ${W^{(l)}_{k,r}}\left( {p,m,\alpha } \right)$ } In this section, we give some closed-form sums of ${W^{(l)}_{k,r}}\left( {p,m,\alpha } \right)$ in terms of shifted harmonic numbers and zeta values.
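The evaluations in this section all rest on the basic identity (2.26). As a numerical sanity check (an illustration only, not part of the paper's argument), the Python sketch below verifies (2.26) at the fractional argument $\alpha=1/2$, using the standard closed forms $H_{1/2}=2-2\ln 2$ and $H_{1/2}^{(2)}=4-\pi^{2}/3$.

```python
import math

# Closed-form ingredients (standard values):
#   H_{1/2}       = 2 - 2 ln 2
#   H_{1/2}^{(2)} = 4 - pi^2 / 3
alpha = 0.5
H_a = 2 - 2 * math.log(2)
H_a2 = 4 - math.pi ** 2 / 3
rhs = (H_a ** 2 + H_a2) / alpha   # right-hand side of (2.26)

# Partial sum of the left-hand side of (2.26), with H_{n+1/2} built
# recursively from H_{1/2} via H_{x+1} = H_x + 1/(x+1).
N = 500_000
H = H_a + 1 / (1 + alpha)         # H = H_{1+alpha}, the n = 1 value
s = 0.0
for n in range(1, N + 1):
    s += H / (n * (n + alpha))
    H += 1 / (n + 1 + alpha)      # advance to H_{(n+1)+alpha}

# Truncation tail ~ (ln N)/N ~ 3e-5
assert abs(s - rhs) < 5e-5
```

The two sides agree to within the truncation tail, as expected.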
First, we consider the partial fraction decomposition \[\displaystyle\!frac{1}{{\prod\displaystyle\!limits_{i = 1}^m {\left( {n + {a_i}} \right)} }} = \displaystyle\!sum\displaystyle\!limits_{j = 1}^m {\displaystyle\!frac{{{A_j}}}{{n + {a_j}}}},\tag{3.1} \] where \[{A_j} = \mathop {\displaystyle\!lim }\displaystyle\!limits_{n \to - {a_j}} \displaystyle\!frac{{n + {a_j}}}{{\prod\displaystyle\!limits_{i = 1}^m {\left( {n + {a_i}} \right)} }} = \prod\displaystyle\!limits_{i = 1,i \ne j}^m {{{\left( {{a_i} - {a_j}} \right)}^{ - 1}}}.\] Letting $m=k$ and $a_i=r+i$ in (3.1), we have \[\displaystyle\!frac{1}{{\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}} = \displaystyle\!frac{{k!}}{{\prod\displaystyle\!limits_{i = 1}^k {\left( {n + r + i} \right)} }} = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)} \displaystyle\!frac{1}{{n + r + j}},\ k,r\in \mathbb{N}_0.\tag{3.2}\] On the other hand, we can find that \begin{align*} \displaystyle\!frac{1}{{\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}} =& \displaystyle\!frac{k}{{\left( {n + r + 1} \right)\left( {\begin{array}{*{20}{c}} {n + k + r} \\ {k - 1} \\ \end{array}} \right)}}\\ = &\displaystyle\!frac{k}{{\left( {n + r + 1} \right)}}\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)} \displaystyle\!frac{1}{{n + r + 1 + j}}\\ =& k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)} \displaystyle\!frac{1}{{\left( {n + r + 1} \right)\left( {n + r + 1 + j} \right)}},\ r\in\mathbb{N}_0,k\in\mathbb{N}.\tag{3.3} \end{align*} From identities (2.16), (2.28), (2.39), (3.2) and (3.3), we obtain the following new
results: for $\alpha>k+r,\ 2\leq k\in \mathbb{N},\ r\in \mathbb{N}_0$, \begin{align*} W_{k,r}^{\left( 1 \right)}\left( {0,1,\alpha } \right) &= \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}}} \\ & = k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{\left( {n + r + 1} \right)\left( {n + r + 1 + j} \right)}}} \\ & = k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)\left\{ \begin{array}{l} H_{\alpha - r - 1}^2 + H_{\alpha - r - 1}^{\left( 2 \right)} - H_{\alpha - r - 1 - j}^2 - H_{\alpha - r - 1 - j}^{\left( 2 \right)} \\ - \left( {r + 1 + j - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^{r + 1 + j} {\displaystyle\!frac{{{H_{\alpha + i - r - 1 - j}}}}{{i\left( {\alpha + i - r - 1 - j} \right)}}} \\ + \left( {r + 1 - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^{r + 1} {\displaystyle\!frac{{{H_{\alpha + i - r - 1}}}}{{i\left( {\alpha + i - r - 1} \right)}}} \\ \end{array} \right\}}.\tag{3.4}\\ W_{k,r}^{\left( 1 \right)}\left( {0,2,\alpha } \right) &= \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}}} \\ & = k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2
\right)}}}{{\left( {n + r + 1} \right)\left( {n + r + 1 + j} \right)}}} \\ & = k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)\left\{ \begin{array}{l} \left( {r + 1 - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^{r + 1} {\displaystyle\!frac{{H_{\alpha + i - r - 1}^{\left( 2 \right)}}}{{i\left( {\alpha + i - r - 1} \right)}}} \\ - \left( {r + 1 + j - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^{r+1+j} {\displaystyle\!frac{{H_{\alpha + i - r - 1 - j}^{\left( 2 \right)}}}{{i\left( {\alpha + i - r - 1 - j} \right)}}} \\ - \displaystyle\!sum\displaystyle\!limits_{i = 1}^{j} {\displaystyle\!frac{{{H_{\alpha + i - r - 1 - j}}}}{{{{\left( {\alpha + i - r - 1 - j} \right)}^2}}}} \\ + 2H_{\alpha - r - 1}^{\left( 3 \right)} + {H_{\alpha - r - 1 - j}}\zeta \left( 2 \right) + 2{H_{\alpha - r - 1}}H_{\alpha - r - 1}^{\left( 2 \right)} \\ - 2H_{\alpha - r - 1 - j}^{\left( 3 \right)} - {H_{\alpha - r - 1}}\zeta \left( 2 \right) - 2{H_{\alpha - r - 1 - j}}H_{\alpha - r - 1 - j}^{\left( 2 \right)} \\ \end{array} \right\}},\tag{3.5}\\ W_{k,r}^{\left( 2 \right)}\left( {0,1,\alpha } \right) &= \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2}}{{\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}}} \\ & = k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2}}{{\left( {n + r + 1} \right)\left( {n + r + 1 + j} \right)}}} \\ & = k\displaystyle\!sum\displaystyle\!limits_{j = 1}^{k - 1} {{{\left( { - 1} \right)}^{j + 1}}\left( {\begin{array}{*{20}{c}} {k - 1} \\ j \\ \end{array}} \right)} \left\{ \begin{array}{l} H_{\alpha - r - 1}^3 + {H_{\alpha - 
r - 1}}H_{\alpha - r - 1}^{\left( 2 \right)} + {H_{\alpha - r - 1}}\zeta \left( 2 \right) \\ - H_{\alpha - r - 1 - j}^3 - {H_{\alpha - r - 1 - j}}H_{\alpha - r - 1 - j}^{\left( 2 \right)} - {H_{\alpha - r - 1 - j}}\zeta \left( 2 \right) \\ + \left( {r + 1 - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^{r + 1} {\displaystyle\!frac{{H_{\alpha + i - r - 1}^2}}{{i\left( {\alpha + i - r - 1} \right)}}} \\ + \displaystyle\!sum\displaystyle\!limits_{i = 1}^j {\displaystyle\!frac{{{H_{\alpha + i - r - 1 - j}}}}{{{{\left( {\alpha + i - r - 1 - j} \right)}^2}}}} \\ - \left( {r + 1 + j - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^{r + 1 + j} {\displaystyle\!frac{{H_{\alpha + i - r - 1 - j}^2}}{{i\left( {\alpha + i - r - 1 - j} \right)}}} \\ \end{array} \right\}.\tag{3.6} \end{align*} and for $k\in \mathbb{N}$ and $\alpha>k$, \begin{align*} W_{k,0}^{\left( 1 \right)}\left( {1,1,\alpha } \right) &= \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{n\left( {\begin{array}{*{20}{c}} {n + k} \\ k \\ \end{array}} \right)}}} \\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}}}{{n\left( {n + j} \right)}}} \\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)} \left\{ \begin{array}{l} H_\alpha ^2 + H_\alpha ^{\left( 2 \right)} - H_{\alpha - j}^2 - H_{\alpha - j}^{\left( 2 \right)} \\ - \left( {j - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^j {\displaystyle\!frac{{{H_{\alpha + i - j}}}}{{i\left( {\alpha + i - j} \right)}}} \\ \end{array} \right\},\tag{3.7}\\ W_{k,0}^{\left( 1 \right)}\left( {1,2,\alpha } \right) &=
\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^{\left( 2 \right)}}}{{n\left( {\begin{array}{*{20}{c}} {n + k} \\ k \\ \end{array}} \right)}}} \\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H^{(2)}_{n + \alpha }}}}{{n\left( {n + j} \right)}}} \\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)\left\{ \begin{array}{l} 2H_\alpha ^{\left( 3 \right)} + {H_{\alpha - j}}\zeta \left( 2 \right) + 2{H_\alpha }H_\alpha ^{\left( 2 \right)} \\ - 2H_{\alpha - j}^{\left( 3 \right)} - {H_\alpha }\zeta \left( 2 \right) - 2{H_{\alpha - j}}H_{\alpha - j}^{\left( 2 \right)} \\ - \left( {j - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^j {\displaystyle\!frac{{H_{\alpha + i - j}^{\left( 2 \right)}}}{{i\left( {\alpha + i - j} \right)}}} \\ - \displaystyle\!sum\displaystyle\!limits_{i = 1}^j {\displaystyle\!frac{{{H_{\alpha + i - j}}}}{{{{\left( {\alpha + i - j} \right)}^2}}}} \\ \end{array} \right\}},\tag{3.8} \\ W_{k,0}^{\left( 2 \right)}\left( {1,1,\alpha } \right) &= \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2}}{{n\left( {\begin{array}{*{20}{c}} {n + k} \\ k \\ \end{array}} \right)}}} \\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}j\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)} \displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2}}{{n\left( {n + j} \right)}}} \\ & = \displaystyle\!sum\displaystyle\!limits_{j = 1}^k {{{\left( { - 1} \right)}^{j + 1}}\left( {\begin{array}{*{20}{c}} k \\ j \\ \end{array}} \right)\left\{ 
\begin{array}{l} H_\alpha ^3 + {H_\alpha }H_\alpha ^{\left( 2 \right)} + {H_\alpha }\zeta \left( 2 \right) \\ - H_{\alpha - j}^3 - {H_{\alpha - j}}H_{\alpha - j}^{\left( 2 \right)} - {H_{\alpha - j}}\zeta \left( 2 \right) \\ + \displaystyle\!sum\displaystyle\!limits_{i = 1}^j {\displaystyle\!frac{{{H_{\alpha + i - j}}}}{{{{\left( {\alpha + i - j} \right)}^2}}}} \\ - \left( {j - \alpha } \right)\displaystyle\!sum\displaystyle\!limits_{i = 1}^j {\displaystyle\!frac{{H_{\alpha + i - j}^2}}{{i\left( {\alpha + i - j} \right)}}} \\ \end{array} \right\}}.\tag{3.9} \end{align*} Hence, from Corollary 2.8, Theorem 2.9 and formulas (3.2), (3.3), we obtain the following description of ${W^{(l)}_{k,r}}\left( {p,m,\alpha } \right)$. \begin{thm} For positive integers $k,r,m$ and real $\alpha\ (\alpha>k>r)$ with $p=0,1$, then the sums \[W_{k,r}^{(1)}\left( {p,m,\alpha } \right),W_{k,r}^{(2)}\left( {p,1,\alpha } \right),W_{k,r}^{(3)}\left( {p,1,\alpha } \right)\] and \[\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{{H_{n + \alpha }}H_{n + \alpha }^{\left( 2 \right)}}}{{{n^p}\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}}} ,\displaystyle\!sum\displaystyle\!limits_{n = 1}^\displaystyle\!infty {\displaystyle\!frac{{H_{n + \alpha }^2H_{n + \alpha }^{\left( 2 \right)}}}{{{n^p}\left( {\begin{array}{*{20}{c}} {n + k + r} \\ k \\ \end{array}} \right)}}} \] can be expressed in terms of shifted harmonic numbers and ordinary zeta values. \end{thm} At the end of this section we give a explicit formula of sums associated with shifted harmonic numbers. 
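Closed forms such as (3.7) are easy to sanity-check numerically. The following sketch (not part of the derivation; it takes $\alpha$ to be a positive integer for simplicity, whereas the theorem allows real $\alpha$) compares a partial sum of the series on the left of (3.7) with the finite expression on the right:

```python
from math import comb

def H(n, s=1):
    # Generalized harmonic number H_n^{(s)} = sum_{i=1}^n 1/i^s.
    return sum(1.0 / i**s for i in range(1, n + 1))

def lhs(k, a, N=200000):
    # Partial sum of sum_{n>=1} H_{n+a} / (n * binom(n+k, k)),
    # with H_{n+a} updated incrementally to keep the loop O(N).
    total, h = 0.0, H(a + 1)              # h = H_{1+a}
    for n in range(1, N + 1):
        total += h / (n * comb(n + k, k))
        h += 1.0 / (n + 1 + a)            # h becomes H_{(n+1)+a}
    return total

def rhs(k, a):
    # Right-hand side of (3.7).
    s = 0.0
    for j in range(1, k + 1):
        inner = sum(H(a + i - j) / (i * (a + i - j)) for i in range(1, j + 1))
        s += (-1) ** (j + 1) * comb(k, j) * (
            H(a) ** 2 + H(a, 2) - H(a - j) ** 2 - H(a - j, 2) - (j - a) * inner
        )
    return s
```

For instance, with $k=1$, $\alpha=2$ the right-hand side evaluates to $9/4$, and the partial sums of the series approach the same value.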
We define the parametric polylogarithm function by the series
\[\mathrm{Li}_{p,\alpha}(x) := \sum\limits_{n = 1}^{\infty} \dfrac{x^n}{(n + \alpha)^p},\quad x \in (-1,1),\ \Re(p)>1,\ \alpha \notin \mathbb{N}^-.\]
Next, we consider the following integral
\[\int\limits_0^1 x^{r-1}\,\mathrm{Li}_{p,\alpha}(x)\,\mathrm{Li}_{m,\beta}(x)\,dx,\quad p,m \in \mathbb{N},\ \alpha,\beta,r \notin \mathbb{N}^-.\]
First, using integration by parts, the following identity is easily derived:
\[\int\limits_0^1 x^{r-1}\,\mathrm{Li}_{p,\alpha}(x)\,dx = \sum\limits_{i = 1}^{p-1} \dfrac{(-1)^{i-1}}{(r-\alpha)^i}\,\zeta(p+1-i,\alpha+1) + (-1)^{p-1}\dfrac{H_r - H_\alpha}{(r-\alpha)^p}.\tag{3.10}\]
We note that
\begin{align*}
\int\limits_0^1 x^{r-1}\,\mathrm{Li}_{p,\alpha}(x)\,\mathrm{Li}_{m,\beta}(x)\,dx &= \sum\limits_{n = 1}^{\infty} \dfrac{1}{(n+\alpha)^p}\int\limits_0^1 x^{n+r-1}\,\mathrm{Li}_{m,\beta}(x)\,dx \\
&= \sum\limits_{n = 1}^{\infty} \dfrac{1}{(n+\beta)^m}\int\limits_0^1 x^{n+r-1}\,\mathrm{Li}_{p,\alpha}(x)\,dx.\tag{3.11}
\end{align*}
Applying (3.10) to both sides of (3.11), we can deduce that
\begin{align*}
&(-1)^{m-1}\sum\limits_{n = 1}^{\infty} \dfrac{H_{n+r}}{(n+\alpha)^p (n+r-\beta)^m} - (-1)^{p-1}\sum\limits_{n = 1}^{\infty} \dfrac{H_{n+r}}{(n+\beta)^m (n+r-\alpha)^p} \\
&= (-1)^{m-1} H_\beta \sum\limits_{n = 1}^{\infty} \dfrac{1}{(n+\alpha)^p (n+r-\beta)^m} - (-1)^{p-1} H_\alpha \sum\limits_{n = 1}^{\infty} \dfrac{1}{(n+\beta)^m (n+r-\alpha)^p} \\
&\quad + \sum\limits_{i = 1}^{p-1} (-1)^{i-1}\zeta(p+1-i,\alpha+1) \sum\limits_{n = 1}^{\infty} \dfrac{1}{(n+\beta)^m (n+r-\alpha)^i} \\
&\quad - \sum\limits_{i = 1}^{m-1} (-1)^{i-1}\zeta(m+1-i,\beta+1) \sum\limits_{n = 1}^{\infty} \dfrac{1}{(n+\alpha)^p (n+r-\beta)^i}.\tag{3.12}
\end{align*}
Putting $r=2\alpha$, $\alpha=\beta$ in (3.12), we have that
\begin{align*}
&\left\{ (-1)^{m-1} - (-1)^{p-1} \right\}\sum\limits_{n = 1}^{\infty} \dfrac{H_{n+2\alpha}}{(n+\alpha)^{p+m}} \\
&= \left\{ (-1)^{m-1} - (-1)^{p-1} \right\} H_\alpha\,\zeta(p+m,\alpha+1) \\
&\quad + \sum\limits_{i = 1}^{p-1} (-1)^{i-1}\zeta(p+1-i,\alpha+1)\,\zeta(m+i,\alpha+1) \\
&\quad - \sum\limits_{i = 1}^{m-1} (-1)^{i-1}\zeta(m+1-i,\alpha+1)\,\zeta(p+i,\alpha+1).\tag{3.13}
\end{align*}
From (3.13), we can derive some specific cases:
\begin{align*}
&\sum\limits_{n = 1}^{\infty} \dfrac{H_{n+2\alpha}}{(n+\alpha)^3} = H_\alpha\,\zeta(3,\alpha+1) + \dfrac{1}{2}\zeta^2(2,\alpha+1),\\
&\sum\limits_{n = 1}^{\infty} \dfrac{H_{n+2\alpha}}{(n+\alpha)^5} = H_\alpha\,\zeta(5,\alpha+1) + \zeta(2,\alpha+1)\,\zeta(4,\alpha+1) - \dfrac{1}{2}\zeta^2(3,\alpha+1),\\
&\sum\limits_{n = 1}^{\infty} \dfrac{H_{n+2\alpha}}{(n+\alpha)^7} = H_\alpha\,\zeta(7,\alpha+1) + \zeta(2,\alpha+1)\,\zeta(6,\alpha+1) + \dfrac{1}{2}\zeta^2(4,\alpha+1) - \zeta(3,\alpha+1)\,\zeta(5,\alpha+1).
\end{align*}
{\bf Acknowledgments.} The authors would like to thank the anonymous referee for his/her helpful comments, which improved the presentation of the paper.
\end{document}
\begin{document} \title{Distribution of Non-Wieferich primes in certain algebraic groups under ABC} \date{\today} \author{Subham Bhakta} \address{Mathematisches Institut, Bunsenstrasse 3-5, D-37073 Germany} \email{[email protected]} \subjclass[2010]{Primary: 11G05; 11N13; Secondary 11R04, 11R29} \keywords{ABC conjecture, non-Wieferich primes, number fields, elliptic curves} \begin{abstract} Under ABC, Silverman showed that there are infinitely many non-Wieferich primes with respect to any (non-trivial) base $a$. Recently, Srinivas and Subramani proved an analogous result over number fields with trivial class group. In the first part of this article, we extend their result to arbitrary number fields. Secondly, we give an asymptotic lower bound for the number of non-Wieferich prime ideals. Furthermore, we show that a lower bound of the same order is achievable for non-Wieferich prime ideals having norm congruent to $1 \pmod k.$ Lastly, we generalize Silverman's work to elliptic curves over arbitrary number fields, following the treatment of K\"uhn and M\"uller. \end{abstract} \maketitle \section{Introduction} Classically, a rational prime $p$ is called a non-Wieferich prime with respect to base $a$ if $$a^{p-1} \equiv 1 \pmod p \hspace{0.15cm} \text{and} \hspace{0.15cm} a^{p-1} \not\equiv 1 \pmod {p^2}$$ hold simultaneously. It is not known whether there are infinitely many non-Wieferich primes. Under the ABC conjecture, it is known that there are infinitely many non-Wieferich primes for any non-trivial base $a$; by non-trivial we mean $a\neq \pm 1.$ Silverman showed that there are at least $c\log x$ many non-Wieferich primes up to $x$, for some constant $c>0$ depending on the base $a.$ A number field analogue of the same problem was considered by K. Srinivas and M. Subramani in \cite{Srini}. In the preliminary section, we shall recall their notation and main results, which are the starting point of our article.
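The classical condition above is cheap to test by machine. The sketch below (illustrative only, not part of the paper) lists the rare exceptions, the Wieferich primes; every other prime in the range is then non-Wieferich to the given base:

```python
def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def wieferich(a, bound):
    # Primes p <= bound (with p not dividing a) satisfying
    # a^(p-1) ≡ 1 (mod p^2); the first congruence a^(p-1) ≡ 1 (mod p)
    # is automatic by Fermat's little theorem.
    return [p for p in primes_up_to(bound)
            if a % p != 0 and pow(a, p - 1, p * p) == 1]
```

For base $a=2$ the only Wieferich primes known are $1093$ and $3511$, so `wieferich(2, 4000)` returns exactly these two, and all other primes below $4000$ are non-Wieferich to base $2$.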
For number fields with class number one, they showed that there are infinitely many non-Wieferich primes with base $\eta,$ under certain conditions on $\eta$ and assuming the ABC conjecture over number fields. In this article, we first extend their result to arbitrary number fields while relaxing some of the conditions on the unit $\eta.$ More precisely, in Section 3 we prove the following. \begin{theorem} \label{Thm 1} Let $K$ be an arbitrary number field and assume ABC holds over $K.$ Let $\eta$ be a unit such that $|\sigma(\eta)|<1$ for all but exactly one embedding $\sigma.$ Then there are infinitely many non-Wieferich prime ideals with respect to $\eta.$ \end{theorem} \noindent In the same section, we give an asymptotic lower bound, generalizing Silverman's ideas over number fields. In other words, we prove \begin{theorem} \label{Thm 2} Under the same assumptions as in the previous theorem, there are at least $c\log x$ many non-Wieferich prime ideals with respect to $\eta$ of norm at most $x,$ where $c>0$ is a constant depending on $K$ and the unit $\eta.$ \end{theorem} \noindent In Section $4$ we discuss non-Wieferich primes in certain congruence classes. Under ABC, Graves and Murty (see \cite{Murty}) first showed that there are at least $c \frac{\log x}{\log \log x}$ many non-Wieferich primes up to $x.$ Later Ding and Chen [6] generalized this result by showing the existence of at least $c \frac{\log x}{\log \log x}(\log\log x)^M$ many such primes up to $x$, for any positive integer $M.$ We generalize both of these results to arbitrary number fields with an asymptotic lower bound of order $c\log x.$ More precisely, we prove \begin{theorem} \label{Thm 3} Let $k$ be a fixed natural number.
Under the same assumptions as in Theorem \ref{Thm 1}, there are at least $c\log x$ many non-Wieferich prime ideals $\pi$ with respect to $\eta,$ such that $N(\pi) \leq x$ and $N(\pi) \equiv 1 \pmod k,$ where the constant $c>0$ depends only on $K$ and the unit $\eta.$ \end{theorem} \noindent In the last section, we discuss a more general problem. Let $G$ be a commutative algebraic group, and let $P \in G(K)$ be a point of infinite order. An analogous problem in this generalized situation asks whether $N_{\mathfrak{p}}P \equiv 1 \pmod {\mathfrak{p}^2},$ where $N_{\mathfrak{p}}=|G(\mathbb{F}_{\mathfrak{p}})|.$ For instance, when we take $G$ to be the multiplicative group $\mathbb{G}_m$ and $P \in \mathbb{G}_m(K)$ to be a \textit{non-torsion} unit, the problem asks about the order of the unit when reduced modulo $\mathfrak{p}^2.$ In this sense, Sections 3 and 4 are devoted to $\mathbb{G}_m.$ Silverman studied this general problem for elliptic curves. He showed that, under ABC, there are infinitely many (in fact, an asymptotic lower bound of order $c\sqrt{\log x}$ holds) non-Wieferich primes for elliptic curves with $j$-invariant $0$ or $1728$. The same arguments could not be applied to other elliptic curves due to the unavailability of an inequality involving the height of points. This case was later settled by K\"uhn and M\"uller \cite{Kuhn}. We shall discuss their result and prove the following. \begin{theorem} Let $E$ be an elliptic curve defined over an arbitrary number field $K$ and $P \in E(K)$ be a point of infinite order. If the ABC conjecture over $K$ is true, then there are at least $c \sqrt{\log x}$ many non-Wieferich primes with respect to $P$ of norm at most $x.$ Here $c>0$ is a constant depending only on $K$ and $P.$ \end{theorem} \section{Preliminaries} \subsection{Notations} Let $K$ be a number field, and let $\mathcal{O}_K$ be its ring of integers. Denote by $M_K$ the set of valuations of $K$ (up to equivalence).
We further write $M_{K, \infty}$ for the set of archimedean places, and $M'_K$ for the set of finite places of $K,$ up to equivalence. For any $\alpha \in \mathcal{O}_K$ and $\mathfrak{p}\in M'_K,$ we set \[ ||\alpha||_{\mathfrak{p}}= N_{K/\mathbb{Q}}(\mathfrak{p})^{-v_{\mathfrak{p}}(\alpha)},\] where $N_{K/\mathbb{Q}}(\mathfrak{p})$ is the index of $\mathfrak{p}$ in $\mathcal{O}_K$ and $v_{\mathfrak{p}}(\alpha)$ is the maximal power of $\mathfrak{p}$ dividing $\alpha.$ For an element $\alpha \in \mathcal{O}_K$ we write $$N_{K/\mathbb{Q}}(\alpha)=\prod_{\sigma} \sigma(\alpha),$$ the product running over the embeddings of $K$ into $\mathbb{C}$. Throughout this article we shall write $N(I) := N_{K/\mathbb{Q}}(I)$ for any ideal $I$ of $\mathcal{O}_K.$ \begin{definition}[Weil height] Let $\alpha \in \mathbb{P}^1(K)$. The Weil height of $\alpha$ is defined to be \[H(\alpha)=\Big(\prod_{v \in M_K} \max \{||\alpha||_v,1\}\Big)^{\frac{1}{[K:\mathbb{Q}]}}.\] In particular, for a point $P=[a:c] \in \mathbb{P}^1(K)$ we have \[H(P)=\Big(\prod_{v \in M_K} \max\{||a||_v, ||c||_v\}\Big)^{\frac{1}{[K:\mathbb{Q}]}}.\] Moreover, for a triple $a,b,c \in K$ we define \[H(a,b,c)=\Big(\prod_{v \in M_K} \max\{||a||_v, ||b||_v, ||c||_v\}\Big)^{\frac{1}{[K:\mathbb{Q}]}}.\] \end{definition} \subsection{ABC over number fields} Let $a,b,c$ be elements of $K$ such that $a+b+c=0.$ Define the radical of an element $\alpha \in \mathcal{O}_K$ by \[\text{rad}(\alpha)= \Big(\prod_{\mathfrak{p} \hspace{0.07cm} \text{prime in} \hspace{0.1cm} \mathcal{O}_K, \hspace{0.07cm}\mathfrak{p} \mid \alpha} N_{K/\mathbb{Q}}(\mathfrak{p})^{v_{\mathfrak{p}}(p)}\Big)^{\frac{1}{[K:\mathbb{Q}]}}.\] \begin{conjecture}[ABC over number fields]\label{ABC} Let $a,b,c$ be a triple in $K$ such that $a+b+c=0$.
Then for any $\epsilon>0,$ \[H(a,b,c) \ll_{\epsilon, K} \big(\mathrm{rad}(abc)\big)^{1+\epsilon}.\] \end{conjecture} \begin{remark} We omit the details, but one can check that the definitions of $H(\cdot)$ and $\mathrm{rad}(\cdot)$ are well-defined; in other words, they are invariant under considering $a,b$ or $c$ in some bigger extension. Moreover, the inequality $H(a,c,a+c) \ll_{K} H([a:c])$ holds for any $a,c \in K.$ \end{remark} \subsection{Non-Wieferich primes and units} A prime ideal $\pi$ of $\mathcal{O}_K$ is said to be a non-Wieferich prime with respect to base $\eta$ if \[\eta^{N(\pi)-1} \equiv 1 \pmod \pi \hspace{0.1cm} \text{and} \hspace{0.1cm} \eta^{N(\pi)-1} \not\equiv 1 \pmod {\pi^2}\] hold simultaneously. It is known that $\mathcal{O}^{*}_K$, the group of units of $\mathcal{O}_K$, has rank $r+s-1$ as a $\mathbb{Z}$-module, where $r$ is the number of real embeddings and $s$ the number of conjugate pairs of complex embeddings of $K.$ Some of the theorems we wish to prove concern certain units, namely those whose absolute value is less than one in all but exactly one embedding. So we need to ensure the existence of such a unit.
To this end we can use the following lemma from \cite{Ram}. \begin{lemma} \label{Ram book} Let $\sigma \in M_K$ be a real archimedean place of $K.$ Then there exists a unit $\eta \in \mathcal{O}_K$ such that \[|\sigma(\eta)|>1, \quad |\tau(\eta)|<1 \hspace{0.1cm}\text{for all} \hspace{0.15cm}\tau \in M_{K, \infty}\setminus\{\sigma\}.\] \end{lemma} \noindent We can say even more. \begin{proposition} The density of the units satisfying the condition of Lemma \ref{Ram book} is \[\frac{r+s-1}{2^{r+s-1}}.\] \end{proposition} \begin{proof} Consider the embedding used in the proof of Dirichlet's unit theorem, \[\mathcal{O}^*_K \hookrightarrow \mathbb{R}^{r+s-1},\] given by $\eta \mapsto \Big(\log\big|\sigma_i(\eta)\big|\Big)_{1\leq i\leq r+s-1}.$ The problem is to count \[\{\eta \in \mathcal{O}^{*}_K \mid H(\eta) \leq x, \ |\sigma(\eta)|<1 \hspace{0.1cm} \text{for all but one embedding}\},\] which amounts to estimating \[\{(x_1,x_2,\cdots, x_{r+s-1}) \in \mathbb{Z}^{r+s-1} \mid |x_i| \leq \log x, \hspace{0.1cm} x_i<0 \hspace{0.1cm}\text{for all but one coordinate}\}.\] The latter quantity is about $c_K (r+s-1)(\log x)^{r+s-1}.$ On the other hand, the total number of units of height at most $x$ is about $c_K2^{r+s-1}(\log x)^{r+s-1},$ where $c_K$ is the co-volume of $\mathcal{O}_K^{*},$ and the result follows. \end{proof} \noindent The reason for mentioning this result is that we are working with a certain set of units, and the proposition above shows that this set has positive density, so our domain is not too restrictive. One may also ask why we do not consider non-units. Following the proofs in the next section, one can see that there is really no problem with arbitrary algebraic integers once we have a property like Lemma \ref{Ram book}. But the following proposition says that this phenomenon is not very likely to happen. \begin{proposition} For any number field $K$, the set of $\alpha \in \mathcal{O}_K$ such that $|\sigma(\alpha)|<1$ for all but one embedding has zero density.
\end{proposition} \begin{proof} The map \[\alpha \mapsto \big(\sigma(\alpha)\big)_{\sigma}\] embeds $\mathcal{O}_K$ into $\mathbb{R}^{\deg(K)}$ as a lattice of rank $\deg(K).$ We have \[\#\{\alpha \in \mathcal{O}_K \mid H(\alpha) \leq x\} \sim C_K x^{\deg(K)}\] and also \[\#\{\alpha \in \mathcal{O}_K \mid H(\alpha) \leq x, \ |\sigma(\alpha)|<1 \hspace{0.1cm}\text{for all but one embedding}\}\sim C_K x.\] Hence the desired set has zero density. \end{proof} \begin{remark} The constant $C_K$ above is the co-volume of the lattice $\mathcal{O}_K$ in $\mathbb{R}^{\deg(K)}.$ It is well known that this co-volume is given explicitly in terms of $\mathrm{disc}(\mathcal{O}_K).$ See page 7 in \cite{Shan} for more details. \end{remark} \subsection{Nuts and bolts to fix the problem concerning class number} K. Srinivas and M. Subramani proved their result in the case of class number one, under certain conditions on the unit $\eta.$ The assumption on the class number was required to write certain elements as a product of primes uniquely (see their proof of Lemma 4.1 in \cite{Srini}). Class number one guarantees this because $\mathcal{O}_K$ is then a PID, and hence a UFD. In general, $K$ need not have class number one while some extension of it does; we could then work over that extension and still get our job done, as illustrated by the next lemma. \begin{lemma} \label{lifting} Let $L$ be an extension of $K$ and $\mathfrak{P}$ be a prime ideal of $\mathcal{O}_L$ lying over a prime ideal $\mathfrak{p}$ in $\mathcal{O}_K$. If $\mathfrak{P}$ is a non-Wieferich prime in $\mathcal{O}_L$ with respect to a unit $\eta \in \mathcal{O}_K$, then $\mathfrak{p}$ is a non-Wieferich prime in $K$ with respect to $\eta$.
Conversely, if $\mathfrak{p}$ splits completely in $L/K$ and is a non-Wieferich prime with respect to a unit $\eta \in \mathcal{O}_K,$ then $\mathfrak{P}$ is a non-Wieferich prime with respect to $\eta$ for any prime $\mathfrak{P}$ in $\mathcal{O}_L$ lying over $\mathfrak{p}$. \end{lemma} \begin{proof} Let $\mathfrak{P}$ be a non-Wieferich prime ideal in $\mathcal{O}_L$ with respect to a unit $\eta \in \mathcal{O}_K,$ and let $\mathfrak{p}$ be the prime in $\mathcal{O}_K$ lying below it. For the sake of contradiction, suppose $\mathfrak{p}$ is Wieferich in $\mathcal{O}_K$ with respect to $\eta$. Then \[\eta^{N_{K/\mathbb{Q}}(\mathfrak{p})-1} \equiv 1 \pmod {\mathfrak{p}^2}.\] Now $N_{L/\mathbb{Q}}(\mathfrak{P})$ is a power of $N_{K/\mathbb{Q}}(\mathfrak{p})$, so $N_{K/\mathbb{Q}}(\mathfrak{p})-1$ divides $N_{L/\mathbb{Q}}(\mathfrak{P})-1$, and hence \[\eta^{N_{L/\mathbb{Q}}(\mathfrak{P})-1} \equiv 1 \pmod {\mathfrak{P}^2},\] contradicting our assumption that $\mathfrak{P}$ is non-Wieferich in $\mathcal{O}_L$.\\ \newline \noindent For the converse, since $\mathfrak{p}$ is non-Wieferich with respect to $\eta$, we have \[\eta^{N_{K/\mathbb{Q}}(\mathfrak{p})-1} \not\equiv 1 \pmod{\mathfrak{p}^2}.\] But $\mathfrak{p}$ splits completely in $L/K$, which implies $N_{L/\mathbb{Q}}(\mathfrak{P})=N_{K/\mathbb{Q}}(\mathfrak{p})$, and hence $\mathfrak{P}$ is indeed non-Wieferich with respect to $\eta.$ \end{proof} \noindent So one could try to pass to a finite extension of $K$ with trivial class group. But this is not a very common phenomenon, and in fact not always possible: Golod and Shafarevich \cite{Gol} showed that for every $n$ there are infinitely many number fields $K$ of degree $n$ which do not have a finite extension with trivial class group. Pollak (see \cite{Pol}, p. 175) has given an example of such a number field of degree $2.$\\ \newline \noindent Instead, we shall invoke the theory of the Hilbert class field. So let us note down the necessary tools from this theory.
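The failure of unique factorization that makes the class number an obstruction can be seen very concretely in $\mathbb{Z}[\sqrt{-5}]$, a ring of class number $2$. The following sketch (illustrative only, not part of the argument) exhibits the classical example $6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$:

```python
# Elements a + b*sqrt(-5) represented as pairs (a, b);
# the norm is N(a + b*sqrt(-5)) = a^2 + 5*b^2.

def norm(x):
    a, b = x
    return a * a + 5 * b * b

def mul(x, y):
    # (a + b s)(c + d s) with s^2 = -5.
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# Two genuinely different factorizations of 6.
assert mul((2, 0), (3, 0)) == mul((1, 1), (1, -1)) == (6, 0)

# None of the four factors is a unit (units have norm 1), and 2 is
# irreducible: a proper factor would have norm 2, but a^2 + 5b^2 = 2
# has no integer solution.
assert all(norm(x) > 1 for x in [(2, 0), (3, 0), (1, 1), (1, -1)])
assert not any(a * a + 5 * b * b == 2
               for a in range(-2, 3) for b in range(-2, 3))
```

Passing to the Hilbert class field, introduced next, is exactly the device that repairs this defect at the level of ideals.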
\begin{definition}[Hilbert class field] \label{Hil} Consider the maximal unramified abelian extension $K'$ of $K$, which contains all other unramified abelian extensions of $K$. This finite field extension $K'$ is called the Hilbert class field of $K$. \end{definition} \noindent It is known that $K'$ is a finite Galois extension of $K$ with $[K' : K]=h_K$, where $h_K$ is the class number of $K$; in fact, the ideal class group of $K$ is isomorphic to the Galois group of $K'$ over $K.$ It follows from class field theory that every ideal of $\mathcal{O}_K$ extends to a principal ideal of the ring extension $\mathcal{O}_{K'}$. \subsection{Siegel's theorem and one application} \noindent This short subsection introduces some notation and results on elliptic curves which will be needed in the last section. Let $E$ be an elliptic curve defined over $K$, and denote by $\Delta_E$ the discriminant of $E.$ Let $P \in E(K)$ be a $K$-rational point. We define the Weil height of $P$ to be $h(P)=\frac{h(x(P))}{2}$ and the canonical height \[\hat{h}(P)=\frac{1}{2}\lim_{n \to \infty}\frac{h(nP)}{n^2}.\] \noindent Let us now recall a version of Siegel's theorem which suits our main purpose. \begin{proposition}[A version of Siegel's theorem] \label{Siegel} Fix the point $\infty \in E(\overline{K})$ and an absolute value $v \in M_{K, \infty}$. Then \[\lim_{P \in E(\overline{K}),\, h(x(P)) \to \infty} \frac{\log \big(\min \big\{||x(P)||_v,1\big\}\big)}{h(x(P))} = 0.\] \end{proposition} \begin{proof} This follows at once by taking \[f=x, \quad Q= \infty\] in Theorem 3.1 of \cite{Silverman}.
\end{proof} \begin{corollary} \label{Siegel corollary} Let $P_n=\frac{a_n}{b_n}$ be an infinite sequence of points in $E(K).$ Then the following holds: \[N_{K/\mathbb{Q}}(b_n)^{1-\epsilon} \ll_{K,\epsilon} N_{K/\mathbb{Q}}(a_n) \ll_{K, \epsilon} N_{K/\mathbb{Q}}(b_n)^{1+\epsilon}.\] \end{corollary} \begin{proof} Siegel's theorem implies \[\lim_{n \to \infty} \log \big(\min \big\{||a_n||_v, ||b_n||_v\big\}\big)=0\] for any $v \in M_{K, \infty}.$ In particular, for any $v \in M_{K, \infty}$, \[\lim_{n \to \infty} \frac{\log(||a_n||_v)}{\log(||b_n||_v)}=1,\] and so \[||b_n||_v^{1-\epsilon} \ll_{K, \epsilon} ||a_n||_v \ll_{K, \epsilon} ||b_n||_v^{1+\epsilon}.\] Multiplying over all $v \in M_{K, \infty},$ we get the desired result. \end{proof} \subsection{Growth of cyclotomic polynomials} Let $\phi_n(x)$ be the $n^{th}$ cyclotomic polynomial. It is a polynomial of degree $\phi(n)$ in $\mathbb{Z}[X].$ We have the following estimate. \begin{proposition} \label{Growth} Let $z \in \mathbb{C}$ be a complex number with $|z| \geq 2$. Then \[|\phi_n(z)| \geq \frac{1}{3}\Big(\frac{|z|-1}{3}\Big)^{\phi(n)-1}\] for any $n \geq 1.$ \end{proposition} \begin{proof} Write $z=xe^{i\theta}$ with $x \in \mathbb{R}$ and $0<\theta \leq 2\pi,$ and let $\omega_n$ be a primitive $n^{th}$ root of unity. We can write \[\frac{\phi_n(z)}{(z+e^{i\theta})^{\phi(n)}}=\sideset{}{'}\prod_{i=1}^{n}\frac{z-\omega_n^i}{z+e^{i\theta}}.\] In particular, \[\Big|\frac{\phi_n(z)}{(z+e^{i\theta})^{\phi(n)}}\Big|=\Big|\sideset{}{'}\prod_{i=1}^{n} \frac{x-e^{i(\frac{2\pi}{n}-\theta)}}{x+1}\Big|,\] where both restricted products run over those $i$ coprime to $n.$ Now $1-\Big|\frac{x-e^{i(\frac{2\pi}{n}-\theta)}}{x+1}\Big|$ is asymptotic to $\frac{2}{x^2}\big(1+\cos(\frac{2\pi}{n}-\theta)\big);$ in particular, it is positive and decreasing in $x.$
In particular, \[\Big|\frac{\phi_n(z)}{(z+e^{i\theta})^{\phi(n)}}\Big| \geq \frac{\phi_n(2)}{3^{\phi(n)}}(|z|-1)^{\phi(n)-1} \geq \frac{1}{3}\Big(\frac{|z|-1}{3}\Big)^{\phi(n)-1}\] since $|\phi_n(2)| \geq 1.$ \end{proof} \section{Effective Srinivas--Subramani} \noindent The first aim of this section is to remove the assumption on the class number and a mild condition on the units; we shall do both simultaneously. For all but finitely many units, we may assume there exists $\sigma \in M_{K, \infty}$ such that $1<|\sigma(\eta)|.$ The hypothesis on the class number was needed in \cite{Srini} to ensure that $\eta^n-1$ can be written as a product of primes uniquely. For an arbitrary number field, we can still write $$(\eta^n-1)=U_nV_n,$$ where $U_n$ is a square-free and $V_n$ a square-full ideal in $\mathcal{O}_K.$ \begin{claim} \label{prime tracker} Let $\pi$ be a prime ideal dividing $U_n.$ Then $\pi$ is a non-Wieferich prime with respect to $\eta.$ \end{claim} \begin{proof} The proof is exactly as the proof of Lemma 5.1 in \cite{Srini}. \end{proof} \noindent We can write \[\sigma(\eta)^n=1+\sigma(U_n)\sigma(V_n)\] and hope to show $N(U_n)=N(\sigma(U_n)) \to \infty$ as done in \cite{Srini}. This hope is reasonable because the assumption on $\eta$ was crucial at exactly this stage, and we are now working with $\sigma(\eta)$ instead. The main problem, however, is that we cannot apply ABC anymore, since the $U_n, V_n$ are not always elements of $\mathcal{O}_K.$\\ \newline \noindent Recall that we have an extension $K'$ of $K$ of degree $h_K$ such that all ideals of $\mathcal{O}_K$ become principal in $\mathcal{O}_{K'}.$ In particular, we can factorize $\eta^n-1$ as $u_nv_n$ in $\mathcal{O}_{K'}$, where $u_n$ and $v_n$ are obtained by lifting $U_n$ and $V_n$ respectively.
It is then evident that $v_{\mathfrak{P}}(v_n)$ is at least $2$ for any prime ideal $\mathfrak{P}$ in $\mathcal{O}_{K'}$ dividing $v_n.$ To show the infinitude of non-Wieferich primes in $\mathcal{O}_K,$ it is then enough to show that $\{N_{K'/\mathbb{Q}}(u_n)\}$ is unbounded, because there are only finitely many prime ideals in $\mathcal{O}_{K'}$ lying over a given prime ideal of $\mathcal{O}_{K}.$ \begin{lemma} \label{norm ineq 1} With the same notation, we have \[N_{K'/\mathbb{Q}}(u_n)^{(2\deg(K)h_K-1)(1+\epsilon)} \gg_{K, \epsilon} |\sigma(\eta)|^{n(1-\epsilon)}\] for any $\epsilon>0.$ \end{lemma} \begin{proof} We start by writing \[\sigma(\eta)^n=1+\sigma(u_n)\sigma(v_n).\] \noindent Modifying equation (13) in \cite{Srini}, we have \[\prod_{\mathfrak{P} \mid \sigma(u_n)}N(\mathfrak{P})^{v_{\mathfrak{P}}(p)} \leq N_{K'/\mathbb{Q}}(u_n)^{\deg(K)h_K}.\] The factor $h_K$ appears because $K'$ has degree $\deg(K)h_K$ over $\mathbb{Q}.$ On the other hand, as discussed earlier, the maximal power of $\mathfrak{P}$ dividing $\sigma(v_n)$ is always at least $2$ (if non-zero). Arguing as in \cite{Srini}, \[\prod_{\mathfrak{P} \mid \sigma(v_n)}N(\mathfrak{P})^{2v_{\mathfrak{P}}(p)}\leq \Big(\sideset{}{'}\prod_{\mathfrak{P} \mid \sigma(u_n)}N(\mathfrak{P})^{2h_K}\Big)\sqrt{N(v_n)},\] where the restricted product runs over the ramified primes in $K',$ of which there are only finitely many. Hence \[|\sigma(\eta)^n|\leq \Big(N(u_n)^{\deg(K)h_K}\sqrt{N(v_n)}\Big)^{1+\epsilon}.\] Arguing as on page $7$ of \cite{Srini}, and keeping the condition on $\eta$ in mind, we have $N(v_n)< C|\sigma(\eta)|^n,$ and hence \[N_{K'/\mathbb{Q}}(u_n)^{(2\deg(K)h_K-1)(1+\epsilon)} \gg |\sigma(\eta)|^{n(1-\epsilon)}\] for any $\epsilon>0.$ \end{proof} \noindent We now want to give a lower bound for the number of non-Wieferich primes. First observe that $\{N_{K'/\mathbb{Q}}(u_n)\}$ is unbounded, so after some stage we may assume $1<|N_{K'/\mathbb{Q}}(u_n)|$.
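The coprimality mechanics behind the quantities $\eta^{N(\pi)}-1$ used in this section mirror the classical integer identity $\gcd(a^m-1,\,a^n-1)=a^{\gcd(m,n)}-1$, which can be checked numerically (a quick illustrative check, not part of the proof):

```python
from math import gcd

def check_gcd_identity(a, max_exp):
    # Verify gcd(a^m - 1, a^n - 1) = a^gcd(m, n) - 1 for all
    # exponents 1 <= m, n <= max_exp.
    return all(
        gcd(a ** m - 1, a ** n - 1) == a ** gcd(m, n) - 1
        for m in range(1, max_exp + 1)
        for n in range(1, max_exp + 1)
    )
```

In particular, when $m$ and $n$ are coprime, the common part collapses to $a-1$, which is the arithmetic reason distinct primes of coprime norm contribute essentially disjoint prime factors below.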
It is clear that for any two primes $\pi_1 \neq \pi_2$ in $\mathcal{O}_K$ not lying over the same rational prime, the quantities $\eta^{N(\pi_1)}-1$ and $\eta^{N(\pi_2)}-1$ cannot have a common prime factor, because $\mathrm{gcd}(N(\pi_1), N(\pi_2))=1.$ One then only needs to find primes $\pi \in \mathcal{O}_K$ such that \[ N_{K'/\mathbb{Q}}(\eta^{N(\pi)}-1) \leq x^{h_K},\] because $N(\pi)^{h_K} \leq N_{K/\mathbb{Q}}(\eta^{N(\pi)}-1)^{h_K}=N_{K'/\mathbb{Q}}(\eta^{N(\pi)}-1).$ By assumption, $|\sigma(\eta)| \leq 1$ for all but one embedding $\sigma \in M_K.$ In particular, \[N(\eta^{N(\pi)}-1)\leq |\sigma(\eta)|^{N(\pi)}2^d.\] So we need primes $\pi \in \mathcal{O}_K$ with $N(\pi) \ll_{K} \log_{|\sigma(\eta)|} x,$ and it is well known that there are at least $\frac{\log x}{\log \log x}$ many such primes. This gives a lower bound of order $\frac{\log x}{\log\log x}.$ However, we should aim for $\log x,$ since this is what Silverman obtained over $\mathbb{Q}.$\\ \newline Let $\phi_n(x)$ be the $n^{\text{th}}$ cyclotomic polynomial. Since $|\eta|>1,$ it is clear that $\phi_n(\eta) \neq 0$. If the class group of $K$ is trivial, then we are fine; otherwise, arguing exactly as before, we can still carry out the factorization \[\phi_n(\eta)=u'_nv'_n\] over $K'.$ The right-hand side makes sense because $\phi_n(\eta)$ divides $\eta^n-1$ in $\mathcal{O}_K$, and so $(\phi_n(\eta))=U'_nV_n'$ with $U'_n \mid U_n$ and $V'_n \mid V_n.$ One then obtains $u'_n, v'_n$ by lifting $U'_n, V'_n$ respectively. \begin{lemma} \[N_{K'/\mathbb{Q}}(u'_n) \leq N_{K'/\mathbb{Q}}(u_n) \hspace{0.15cm} \text{and} \hspace{0.15cm} N_{K'/\mathbb{Q}}(v'_n) \leq N_{K'/\mathbb{Q}}(v_n).\] \end{lemma} \begin{proof} We know $U_n' \mid U_n$ in $\mathcal{O}_K$, and so by definition $(u'_n) \mid (u_n)$ in $\mathcal{O}_{K'}.$ In particular, \[N_{K'/\mathbb{Q}}(u'_n)=N\big((u'_n)\big) \leq N\big((u_n)\big) \leq N_{K'/\mathbb{Q}}(u_n).\] The other part follows similarly.
\end{proof} \begin{lemma} \label{cycl 1} Let $\pi$ be a prime dividing $u'_n.$ Then $\pi$ is a non-Wieferich prime in $\mathcal{O}_{K'}$ with respect to $\eta.$ If $(n, N(\pi))=1,$ then \[N_{K'/\mathbb{Q}}(\pi) \equiv 1 \pmod n.\] In other words, the order of $\eta$ modulo $\pi$ is exactly $n.$ \end{lemma} \begin{proof} It is already known from \cite{Srini} that $\pi$ is a non-Wieferich prime, because $\pi \mid u'_n \mid \eta^n-1.$ On the other hand, \[\eta^n-1=\prod_{d \mid n} \phi_d(\eta).\] If $\pi$ divides $\eta^{d}-1$ for some proper divisor $d$ of $n,$ then $\pi^2\mid\eta^n-1.$ In other words, the polynomial $x^n-1$ has multiple roots in the residue field of $\pi,$ which has characteristic $p.$ This is not possible since $(n, N(\pi))=1.$ Hence the order of $\eta$ in the residue field of $\pi$ is exactly $n,$ and the result follows. \end{proof} \noindent From this point on we shall take $\sigma$ to be trivial, since the choice of $\sigma$ does not affect any of the arguments. We now need to use Proposition \ref{Growth}, and for that we will eventually need $|\eta|$ to be large enough. Since we only know $|\eta|>1,$ we carry everything out with $\eta'=\eta^M$ for a sufficiently large $M.$ None of the arguments are affected if we work with $\eta'$ instead.
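The order argument in the lemma above can be tested numerically over $\mathbb{Z}$: if a prime $p$ divides $\phi_n(a)$ and $(n,p)=1$, then $a$ has order exactly $n$ modulo $p$, so $p \equiv 1 \pmod n$. A small sketch (the helper names are ours, and $\phi_n(a)$ is evaluated via $a^n-1=\prod_{d \mid n}\phi_d(a)$):

```python
def cyclotomic_value(n, a):
    """phi_n(a) for an integer a, via a^n - 1 = prod_{d | n} phi_d(a)."""
    val = a**n - 1
    for d in range(1, n):       # divide out phi_d(a) for every proper divisor d
        if n % d == 0:
            val //= cyclotomic_value(d, a)
    return val

def multiplicative_order(a, p):
    """Least k >= 1 with a^k = 1 (mod p), assuming gcd(a, p) = 1."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

a, n = 2, 5
p = cyclotomic_value(n, a)      # phi_5(2) = 31, which happens to be prime
```

Here $\phi_5(2)=31$ satisfies $31 \equiv 1 \pmod 5$, and $2$ has order $5$ modulo $31$, exactly as the lemma predicts.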
\begin{lemma} \label{norm ineq 2} \[N_{K'/\mathbb{Q}}(u'_n)\gg_{K, \epsilon} \frac{\Big(\frac{|\eta|-1}{3}\Big)^{\phi(n)-1}}{|\eta|^{\frac{n\epsilon}{1+\epsilon}}}.\] \end{lemma} \begin{proof} Arguing as on page 7 of \cite{Srini}, we get the following variant of their equation (15): \begin{equation} \label{eqn 1} |\eta|^n \ll_{\epsilon} \Big( N_{K'/\mathbb{Q}}(u_n)^{\deg(K)h_K}\sqrt{N_{K'/\mathbb{Q}}(v_n)}\Big)^{1+\epsilon}, \end{equation} where the factor $h_K$ appears because $[K':K]=h_K.$ From $\eta^n-1=u_nv_n,$ we get $N(u_n)N(v_n) \ll_{K} |\eta|^n.$ In particular, \begin{equation} \label{eqn 2} N_{K'/\mathbb{Q}}(v'_n)\leq N_{K'/\mathbb{Q}}(v_n) \ll_{K} |\eta|^{\frac{2n\epsilon}{(2\deg(K)h_K-1)(1+\epsilon)}}. \end{equation} On the other hand, \[N_{K'/\mathbb{Q}}(u'_n)N_{K'/\mathbb{Q}}(v'_n)=N_{K'/\mathbb{Q}}(\phi_n(\eta))=\prod_{\sigma \in M_{K', \infty}}\prod_{i=1}^{\phi(n)} \sigma((\eta-\omega^i)).\] By Lemma \ref{Ram book}, we have \[|\eta^{(j)}-\omega^i| \leq 2 \hspace{0.1cm} \text{for all} \hspace{0.1cm} 2 \leq j \leq \deg(K'), \hspace{0.05cm} 1 \leq i\leq \phi(n).\] In particular, we then have \begin{equation} \label{eqn 3} N_{K'/\mathbb{Q}}(v'_n)N_{K'/\mathbb{Q}}(u'_n)= N(\phi_n(\eta)) \gg_{K} \frac{1}{3}\Big(\frac{|\eta|-1}{3}\Big)^{\phi(n)-1}.
\end{equation} Combining equations (\ref{eqn 2}) and (\ref{eqn 3}), we finally have \[N_{K'/\mathbb{Q}}(u'_n) \gg_{K,\epsilon} \frac{\Big(\frac{|\eta|-1}{3}\Big)^{\phi(n)-1}}{|\eta|^{\frac{n\epsilon}{1+\epsilon}}}.\] \end{proof} \begin{lemma} \label{imp estimate} Let $\mathcal{S}(x)$ denote the set of non-Wieferich prime ideals in $\mathcal{O}_K$ of norm at most $x,$ with respect to the base $\eta.$ Then we have the estimate \[ |\mathcal{S}(x)| \geq |\{n \leq \log_{|\eta|}x \mid |N(U'_n)|>n^{\deg(K)}\}|.\] \end{lemma} \begin{proof} For every $n$ counted on the right-hand side, we have \[N(u'_n)N(v'_n)=N(\phi_n(\eta)) \ll |\eta|^n \leq x.\] On the other hand, $N(U'_n)\leq N(u'_n)N(v'_n) \leq x.$ Let $\pi_n$ be a prime divisor of $U'_n,$ so that $N(\pi_n) \leq x.$\\ \newline \noindent We now choose $\pi_n$ to be a prime dividing $U'_n$ with $(N(\pi_n),n)=1,$ which is possible because $N(U_n')>n^{\deg(K)}.$ Lemma \ref{cycl 1} completes the proof, because these $\pi_n$'s are pairwise distinct. \end{proof} \noindent We are now done with all the preparations for the lower bound. \begin{proof}[\textbf{Proof of Theorem 1.0.2}] From Lemma \ref{norm ineq 2} we have \[N_{K'/\mathbb{Q}}(u'_n)\gg_{K, \epsilon} \frac{\Big(\frac{|\eta|-1}{3}\Big)^{\phi(n)-1}}{|\eta|^{\frac{n\epsilon}{1+\epsilon}}}.\] On the other hand, for any $\eta$ with $|\eta|-1 \geq 3e,$ the bound \[N(U'_n)^{h_K} \geq N_{K'/\mathbb{Q}}(u'_n) \gg_{K,\epsilon} \frac{\Big(\frac{|\eta|-1}{3}\Big)^{\phi(n)-1}}{|\eta|^{\frac{n\epsilon}{1+\epsilon}}} \gg_{K, \epsilon} n^{\deg(K)}\] holds whenever $\phi(n) \geq \epsilon n +C_{K, \epsilon}$ for a positive constant $C_{K, \epsilon}$ not depending on $n.$ The desired result now follows by combining Lemma \ref{imp estimate}, Lemma \ref{cycl 1} and Lemma 6 of \cite{Sil}. \end{proof} \section{Generalization to primes in congruence classes} \noindent Consider rational primes $p$ congruent to $1$ modulo $k$.
It was first proved by Graves and Murty in \cite{Murty} that there are infinitely many non-Wieferich primes of this form, with a lower bound of order $\frac{\log x}{\log \log x}.$ Their idea was, roughly, to study the squarefree parts of $\phi_{nk}(a)$. The main point is that if $p\mid \phi_{nk}(a),$ then either $p \mid nk$ or $p \equiv 1 \pmod {nk}.$ Later, Ding and Cheng improved that bound in \cite{Chen}, establishing a lower bound of order $\frac{\log x}{\log \log x}(\log \log \log x)^M$ for any natural number $M.$ \\ \newline \noindent Analogously, consider primes $\pi \in \mathcal{O}_K$ with $N(\pi) \equiv 1 \pmod k.$ As a generalization of \cite{Murty} and \cite{Chen}, one may ask whether there are infinitely many non-Wieferich primes $\pi$ of this form. In this section, we give an affirmative answer to that question over arbitrary number fields, with a lower bound of order $\log x.$ \begin{proof}[\textbf{Proof of Theorem 1.0.3}] Denote by $\mathcal{S}_k(x)$ the set of non-Wieferich primes (with respect to the unit $\eta$) in $\mathcal{O}_K$ whose norm is at most $x$ and congruent to $1$ modulo $k.$ From Lemma \ref{norm ineq 2} we have \[N_{K'/\mathbb{Q}}(u'_{nk})\gg_{K, \epsilon} \frac{\Big(\frac{|\eta|-1}{3}\Big)^{\phi(nk)-1}}{|\eta|^{\frac{nk\epsilon}{1+\epsilon}}}.\] From Lemma \ref{cycl 1}, if $\pi$ is a prime dividing $u'_{nk}$ and $(nk, N(\pi))=1,$ then \[N(\pi) \equiv 1 \pmod{nk}.\] Lemma \ref{imp estimate} gives \[|\mathcal{S}_k(x)| \geq |\{nk \leq \log_{|\eta|}(x) \mid N(U'_{nk})>(nk)^{\deg(K)}\}|.\] Following the proof of Theorem 1.0.2, we only need to count \[\{nk \leq Y \mid \phi(nk) \geq \epsilon nk\}.\] Note that $\phi(nk)=\phi(n)\phi(k)\frac{d}{\phi(d)},$ where $d=(n,k).$ We then need to count \[\Big\{n \leq \frac{Y}{k} \mid \phi(n) \geq \epsilon \frac{k}{\phi(k)}\frac{\phi(d)}{d} n\Big\}.\] Since $k$ is fixed and all such $d$'s divide $k$, the set of numbers $\{\frac{k}{\phi(k)}\frac{\phi(d)}{d}\}$ is
finite. For small enough $\epsilon$ it is then clear that \[\big|\{nk \leq Y \mid \phi(nk) \geq \epsilon nk\}\big| \gg \frac{Y}{k}.\] We can now complete the proof of the theorem using Lemma 6 of \cite{Sil}. \end{proof} \section{In other algebraic groups} \noindent Continuing our discussion from the last paragraph of the preliminaries, we first generalize the work of K\"uhn and M\"uller. Let $E$ be an elliptic curve defined over $K$ and let $P \in E(K)$ be a point of infinite order. Let $\pi$ be a prime ideal in $\mathcal{O}_K$ and $N_{\pi}=|E \pmod{\pi}|$. The question is whether $N_{\pi}P \equiv 0 \pmod {\pi^2}.$ If the class group is trivial, any point $P \in E(K)$ can be written uniquely as $\Big(\frac{a_P}{d^2_P},\frac{b_P}{d^3_P}\Big)$. If not, we can still write \[(x_P)=\frac{\mathfrak{a}_P}{\mathfrak{d}^2_P}\hspace{0.15cm}\text{and}\hspace{0.15cm}(y_P)=\frac{\mathfrak{b}_P}{\mathfrak{d}^3_P},\] where $\mathfrak{a}_P, \mathfrak{b}_P, \mathfrak{d}_P$ are ideals in $\mathcal{O}_K.$ We can then take $d_P, a_P, b_P$ to be the lifts of $\mathfrak{d}_P, \mathfrak{a}_P$ and $\mathfrak{b}_P$ respectively to $K',$ and argue as in the previous section. So let us assume the class group is trivial. To fix notations once and for all, write \[P=\Big(\frac{a_P}{d^2_P}, \frac{b_P}{d^3_P}\Big).\] Following Silverman's approach in \cite{Sil}, fix a non-torsion point $P \in E(K)$ and write $nP=\Big(\frac{a_n}{d^2_n}, \frac{b_n}{d^3_n}\Big).$ Since the class group is trivial, we can write $$d_n=u_nv_n,$$ where $u_n$ is the squarefree part and $v_n$ the squarefull part of $d_n.$ We further denote by $D_n$ the greatest divisor of $d_n$ not dividing $d_1d_2 \cdots d_{n-1},$ and write $D_n=U_nV_n$ as before. All of this is possible because the class group is trivial.
If not, we could still define $D_n$ as an ideal in $\mathcal{O}_K$ and work with its lift to $K'.$\\ \newline \noindent Let us first introduce the uniform ABC conjecture for curves over number fields, as proposed by Vojta. \begin{definition} Let $X$ be a smooth, proper, geometrically connected curve over a number field $K$. Let $D \subset X$ be an effective reduced divisor, and let $\omega_X$ be the canonical sheaf on $X$. Fix a proper regular model $\mathfrak{X}$ of $X$ over $\mathrm{Spec}(\mathcal{O}_K)$ and extend $D$ to an effective horizontal divisor $\mathfrak{D}$ on $\mathfrak{X}$. For any point $P \in X(K),$ define \[\mathrm{cond}_{\mathfrak{X}, \mathfrak{D}}(P)= \prod_{\mathfrak{p} \in S} N(\mathfrak{p})^{\frac{v_{\mathfrak{p}}(p)}{[K:\mathbb{Q}]}},\] where $S$ is the set of finite primes $\mathfrak{p}$ of $K$ such that the intersection multiplicity $(\mathfrak{P}, \mathfrak{D})_{\mathfrak{p}} \neq 0.$ \end{definition} \begin{conjecture}[ABC for curves]\label{ABC for curves} Suppose that $\omega_X(D)$ is ample, and let $h_{\omega_X(D)}$ be a Weil height function on $X$ with respect to $\omega_X(D)$. Then for any $ \epsilon > 0$ and $d \in \mathbb{N}$, there exists a constant $c = c(\epsilon, d, X , D)$ such that \[h_{\omega_X(D)}(P) \leq (1 + \epsilon) \big(\log \mathrm{disc}(k(P)) + \log \mathrm{cond}_{\mathfrak{X},\mathfrak{D}}(P)\big) + c\] for all $P \in X(\overline{K})-\mathrm{supp}(D)$ satisfying $[k(P) : \mathbb{Q}] \leq d,$ where $k(P)$ is the residue field at $P.$ \end{conjecture} \noindent Interestingly, we then have the following. \begin{proposition} \label{equiv} ABC for number fields is equivalent to Conjecture \ref{ABC for curves}. \end{proposition} \begin{proof} We first show that Conjecture \ref{ABC for curves} implies ABC for number fields.
Let $X=\mathbb{P}^1$ and take $a,b,c \in K$ with $a=b+c.$ Consider the point $P=[a:c] \in X(K).$ Let $D=(0)+(1)+(\infty) \in \text{Div}(X)$ be an effective divisor, so that $\deg(\omega_X(D))=2g(X)-2+\deg(D)=1.$ In particular, $\omega_X(D)$ is ample of degree $1.$ Therefore, up to a bounded constant, $h_{\omega_X(D)}$ is the Weil height $h.$ The conjecture above now implies \[H(a,b,c) \ll_{K} H(P) \ll_{\epsilon} \Big(\text{cond}_{\mathfrak{X}, \mathfrak{D}} (P)\Big)^{1+\epsilon},\] where the first inequality comes from Remark 2.1.1. For any prime $\mathfrak{p},$ note that $\big(\mathfrak{P}, (0)\big)_{\mathfrak{p}} \neq 0, \big(\mathfrak{P}, (\infty)\big)_{\mathfrak{p}} \neq 0$ and $\big(\mathfrak{P}, (1)\big)_{\mathfrak{p}} \neq 0$ if and only if $\mathfrak{p} \mid a, \mathfrak{p} \mid c$ and $\mathfrak{p} \mid b=a-c$ holds, respectively. In particular, \[\text{cond}_{\mathfrak{X}, \mathfrak{D}} (P)= \Big( \prod_{\mathfrak{p} \mid abc}N_{K/\mathbb{Q}}(\mathfrak{p})^{v_{\mathfrak{p}}(p)}\Big)^{\frac{1}{[K:\mathbb{Q}]}}=\text{rad}(abc),\] which completes the proof of one direction. For the converse, see Theorem 2.1 of \cite{Moc}. \end{proof} \noindent Let us now prove the main result of this section; we first need some key lemmas. \begin{lemma} \label{norm ineq 4} With the previously introduced notations, for any $\epsilon>0$ we have \[\log N_{K/\mathbb{Q}}(v_P) \ll_{\epsilon} \epsilon \log N_{K/\mathbb{Q}}(d_P)+c.\] \end{lemma} \begin{proof} Consider $X=E$ and $D=(0) \in \text{Div}(X).$ Then $\omega_X(D)$ has degree $1$ and so is ample. Therefore, $h_{\omega_X(D)}$ is the Weil height $h$ on $E(K)$ up to a bounded constant.
On the other hand, \[\text{cond}_{\mathfrak{X}, \mathfrak{D}}(P)=\prod_{\mathfrak{p}\in S} N(\mathfrak{p})^{\frac{v_{\mathfrak{p}}(p)}{[K:\mathbb{Q}]}},\] where $S$ is the set of primes $\mathfrak{p}$ with $\big(\mathfrak{P}, (0)\big)_{\mathfrak{p}} \neq 0.$ If $E$ has good reduction at $\mathfrak{p},$ this holds if and only if $\mathfrak{p} \mid d_P.$ By Conjecture \ref{ABC for curves}, we then get \begin{equation} \label{11} h(P) \leq (1+\epsilon) \log (\text{rad}(d_P))+c_E. \end{equation} Note that \begin{equation}\label{2} \sum_{v \in M_K} \log \max\Big\{\Big\|\frac{a_P}{d^2_P}\Big\|_v, 1\Big\} \geq \sum_{v \in M'_K} \log \max\Big\{\Big\|\frac{a_P}{d^2_P}\Big\|_v, 1\Big\} \geq -2\sum_{v\mid d_P} \log \min\Big\{\|d_P\|_v, 1\Big\}. \end{equation} \noindent On the other hand, \begin{equation}\label{3} -2\sum_{v\mid d_P} \log \min\Big\{\|d_P\|_v, 1\Big\}=2\log N_{K/\mathbb{Q}}(d_P)=2\big(\log N_{K/\mathbb{Q}}(u_P)+ \log N_{K/\mathbb{Q}}(v_P)\big) \end{equation} and \begin{equation}\label{4} \log(\mathrm{rad}(d_P)) \leq \log N_{K/\mathbb{Q}}(u_P)+ \frac{1}{2} \log N_{K/\mathbb{Q}}(v_P). \end{equation} Combining (\ref{11}), (\ref{2}), (\ref{3}) and (\ref{4}), we get \[\frac{1-\epsilon}{2} \log N_{K/\mathbb{Q}}(v_P) \leq \epsilon \log N_{K/\mathbb{Q}}(u_P)+c',\] and this finishes the proof of the lemma, since $d_P=u_Pv_P.$ \end{proof} \noindent We have already discussed why we may assume the class group to be trivial. Following the notations in \cite{Sil}, we define $u_n, v_n, U_n, V_n$ analogously. \begin{lemma} \label{track 2} Let $\mathfrak{p}$ be a prime ideal in $\mathcal{O}_K$ dividing $U_n$ and not dividing $d_2\Delta_E.$ Then \[m_{\mathfrak{p}}=n \hspace{0.15cm}\text{and}\hspace{0.15cm} N_{\mathfrak{p}}P \not\equiv 0 \pmod{\mathfrak{p}^2},\] where $m_{\mathfrak{p}}$ is the least positive integer $m$ such that $mP \equiv 0 \pmod {\mathfrak{p}}.$ \end{lemma} \begin{proof} Let $\mathcal{F}_{\mathfrak{p}}, \mathcal{F}_{\mathfrak{p}^2}$ be the formal groups of $E$ at $\mathfrak{p}$ and $\mathfrak{p}^2$ respectively.
Now the proof is exactly analogous to that of Lemma 11 in \cite{Sil}, because \[\mathcal{F}_{\mathfrak{p}}/\mathcal{F}_{\mathfrak{p}^2} \sim k(\mathfrak{p}),\] where $k(\mathfrak{p})$ is the residue field at $\mathfrak{p},$ and Hasse--Weil gives $N_{\mathfrak{p}} \leq (\sqrt{N(\mathfrak{p})}+1)^2$. The rest is exactly the same. \end{proof} \begin{lemma} \label{norm height 1} We have \[n^2 \hat{h}(P) \geq \log N_{K/\mathbb{Q}}(d_n) \gg_{E, \epsilon} (1-\epsilon)n^2 \hat{h}(P).\] \end{lemma} \begin{proof} After using Corollary \ref{Siegel} from the preliminaries, the proof is exactly the same as that of Lemma 8 of \cite{Sil}. \end{proof} \noindent Furthermore, we have the following crucial lower bound. \begin{lemma} \label{norm height 2} \[\log N_{K/\mathbb{Q}}(D_n) \gg_{E,\epsilon} \Big( \frac{1}{3}-\epsilon\Big)n^2\hat{h}(P)-\log n.\] \end{lemma} \begin{proof} The first part of Silverman's argument in the proof of Lemma 9 in \cite{Sil} uses some facts about the formal group of $E.$ All of these carry over to number fields, because the main required fact is Proposition VII.2.2 of \cite{Silverman}, which is valid for any DVR. The second part of the proof, namely the estimate for $\log N(D_n),$ follows immediately from the previous lemma. \end{proof} \begin{proof}[\textbf{Proof of Theorem 1.0.4}] From Lemma \ref{norm ineq 4}, Lemma \ref{norm height 1} and Lemma \ref{norm height 2} we obtain \[\log N_{K/\mathbb{Q}}(U_n)= \log N_{K/\mathbb{Q}}(D_n)-\log N_{K/\mathbb{Q}} (V_n) \gg_{E, \epsilon} \Big(\frac{1}{3}-\epsilon\Big)n^2\hat{h}(P)-\log n.\] Since $P$ is non-torsion, $\hat{h}(P) \neq 0,$ and so we may assume \[N_{K/\mathbb{Q}}(U_n)>N_{K/\mathbb{Q}}(d_2\Delta_E)\] for all but finitely many $n$.
For all such $n$'s we can therefore pick a prime ideal $\pi_n$ dividing $U_n$ and coprime to $d_2\Delta_E.$ Lemma \ref{track 2} then shows that $\pi_n$ is a non-Wieferich prime for $P.$ On the other hand, we have \[\log N_{K/\mathbb{Q}}(U_n) \leq \log N_{K/\mathbb{Q}}(D_n) \ll_{E} n^2 \hat{h}(P).\] So for all $n \ll_{E,P} \sqrt{\log x}$ we have $N_{K/\mathbb{Q}}(U_n) \leq x$; in particular, $\pi_n$ is a non-Wieferich prime for $P$ of norm at most $x.$ Once again, Lemma \ref{track 2} shows that these $\pi_n$'s are pairwise distinct, and the proof is complete. \end{proof} \end{document}
\begin{document} \title{Quantum density peak clustering} \author{Duarte Magano} \affiliation{Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal} \affiliation{Instituto de Telecomunica\c{c}\~{o}es, Portugal} \author{Lorenzo Buffoni} \affiliation{Portuguese Quantum Institute, Portugal} \author{Yasser Omar} \affiliation{Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal} \affiliation{Portuguese Quantum Institute, Portugal} \affiliation{Centro de Física e Engenharia de Materiais Avançados (CeFEMA), Physics of Information and Quantum Technologies Group, Portugal} \date{July 21, 2022} \begin{abstract} Clustering algorithms are of fundamental importance when dealing with large unstructured datasets and discovering new patterns and correlations therein, with applications ranging from scientific research to medical imaging and marketing analysis. In this work, we introduce a quantum version of the density peak clustering algorithm, built upon a quantum routine for minimum finding. We prove a quantum speedup for a decision version of density peak clustering depending on the structure of the dataset. Specifically, the speedup is dependent on the heights of the trees of the induced graph of nearest-highers, i.e., the graph of connections to the nearest elements with higher density. We discuss this condition, showing that our algorithm is particularly suitable for high-dimensional datasets. Finally, we benchmark our proposal with a toy problem on a real quantum device. \end{abstract} \maketitle \section{Introduction} Machine Learning (ML) \cite{bishop_pattern_2011,hastie2009elements} is a field with an exceptional cross-disciplinary breadth of applications and studies. The aim of ML is to develop computer algorithms that improve automatically through experience by learning from data, so as to identify distinctive patterns and make decisions with minimal human intervention. 
The applications of ML that are already possible today are extremely compelling and diverse \cite{sutton2018reinforcement,graves2013speech,sebe2005machine}, and still growing at a steady pace. However, the training and deployment of these models, involving an ever-increasing amount of data, face computational challenges \cite{tieleman2008training} that are only partially met by the development of special-purpose classical computing units such as GPUs. This challenge posed by ML algorithms has led to a recent interest in applying quantum computing to machine learning tasks \cite{schuld2015introduction,wittek2014quantum,adcock2015advances,arunachalam2017survey,Biamonte:2017db}, sparking the field of Quantum Machine Learning (QML). So far, there have been many proposals for different QML algorithms. Several of them \cite{lloyd2020quantum,vinci2020path,otterbach2017unsupervised} have shown the potential to accelerate ML tasks, but largely rely on heuristic methods, and \emph{proving} an advantage in terms of computational complexity for these particular algorithms is generally hard. On the other hand, there have been a number of works that, by using quantum algorithms with known complexity as subroutines \cite{wiebe2012quantum, lloyd2014quantum}, could prove the existence of a quantum advantage in QML. Indeed, it has been conjectured that QML algorithms could provide the first breakthrough algorithms on near-term quantum devices, given the inherent robustness of these algorithms to noise and perturbations. Most of the literature on QML has focused on \emph{supervised} learning problems \cite{goodfellow2016deep}. This particular subset of ML has a number of appealing characteristics, as it is more immediate to implement and allows for feedback loops and the minimization of well-known and well-behaved cost functions. Unlike supervised learning, unsupervised learning is a much harder, and still largely unsolved, problem.
And yet, it has the appealing potential to learn the hidden statistical correlations of large unlabeled datasets \cite{vincent2008extracting,hinton1995wake}, which constitute the vast majority of data available today. Amongst all unsupervised learning problems, clustering is one of the most popular. In clustering, we consider a dataset $D$ composed of $n$ elements, \begin{equation} D = \{ x_0, \ldots, x_{n-1}\}, \end{equation} in which we are given a notion of \emph{distance} between every two elements in the dataset $x_i, x_j \in D$, \begin{equation} \dist{x_i}{x_j}. \end{equation} Interpreting this distance as a measure of dissimilarity, the (ill-defined) problem of clustering can be formulated as follows. \begin{Clustering} Given a dataset $D$ and a distance $\dist{\cdot}{\cdot}$ between each pair of elements of $D$, partition $D$ into sets called \emph{clusters}, such that similar elements belong to the same cluster and dissimilar elements belong to distinct clusters. \end{Clustering} Clustering algorithms thus need to separate unlabeled data into different classes (or clusters) without any external labeling or supervision. Solutions to the clustering problem based on quantum computing have been proposed for a long time, resorting to a plethora of different strategies \cite{Aimeur2007,Yu2010,Li2011,Aimeur2013,LloydMohseniRebentrost,otterbach2017unsupervised,Bauckhage2017,Daskin2017,qmeans,li2021quantum,QSpectralClustering,pires2}. For example, in Ref. \cite{Li2011} the data points are treated as interacting quantum walkers on a lattice, whereas in Ref. \cite{qmeans} quantum subroutines for distance estimation and matrix arithmetic are employed to develop an efficient quantum version of a standard classical algorithm, $k$-means clustering.
In \cite{otterbach2017unsupervised}, the clustering problem was reformulated as an optimisation problem and solved by applying a hybrid optimisation algorithm on a Rigetti quantum processor, hinting at the possibility of realising such an advantage on near-term quantum devices. In this work, we adopt the black-box/oracular model of clustering introduced in \cite{Aimeur2007}, in which the information concerning the distances between points in the dataset is available only through oracle queries. Under this framework, references \cite{Aimeur2007,Aimeur2013}, using variants of Grover's search \cite{GroverSearch}, quantize typical subroutines of learning algorithms, such as finding the largest distance in a dataset, computing the median, or constructing the $c$-neighbourhood graph. These subroutines are then used to accelerate standard clustering methods, namely divisive clustering, $k$-medians clustering, and clustering via minimum spanning tree. Nevertheless, the state of the art in clustering has been steadily evolving in recent years, and so the question arises: can we also use quantum computing to speed up modern clustering algorithms? We consider a recent and very popular clustering method, usually referred to as density peak clustering (DPC) \cite{Rodriguez2014}. In a nutshell, the idea is to attribute a ``density'' value to every element of the dataset based on its distances to all other elements, and then assign each element to the same cluster as the nearest neighbour that has a density greater than itself (referred to as its \emph{nearest-higher}). Relying on a variant of quantum search known as quantum minimum finding \cite{DurrHoyer1996}, we show that, for any element of the dataset, we can find its nearest-higher (up to bounded-error probability) in time $\mathcal{O}\left(n^{3/2}\right)$, as opposed to the classical $\mathcal{O}\left(n^2\right)$ complexity. Unfortunately, to fully solve DPC we would need to repeat this subroutine $n$ times.
This would provide no advantage, as we can classically implement DPC in $\mathcal{O}\left(n^2\right)$ time. Motivated by this, in this work we consider a simpler variant of the clustering problem, which we refer to as \emph{decision} clustering. \begin{DecisionClustering} Given a dataset $D$, a distance $\dist{\cdot}{\cdot}$ between each pair of elements of $D$, and two elements $x_i, x_j \in D$, decide if $x_i$ and $x_j$ are in the same cluster. \label{prob:decisionclustering} \end{DecisionClustering} \noindent Evidently, any solution of the clustering problem contains the answer to decision clustering, but not the other way around. This problem is relevant in situations where one is interested in establishing connections between particular elements but does not need to know the cluster structure of the entire dataset. For example, we may want to know if two users of a social media platform are friends, or if an individual fits a particular consumer segment. Moreover, this can also be useful when one has to decide to which cluster a new data point belongs without having to re-cluster all the data. Finally, decision clustering can be iteratively applied to cluster small subsets of the data. Classically solving decision clustering with DPC has the same complexity as solving the full clustering problem, $\mathcal{O}\left(n^2\right)$. However, in the quantum setting we show that we can solve it in $\tilde{\mathcal{O}} \left(n^{3/2} H \right)$ time, where $H$ is the maximum height of all trees in the graph of nearest-highers (to be properly defined in Section \ref{sec:DPC}). The factor $H$ is not known \emph{a priori}, as it depends on the specific structure of the dataset. Nevertheless, we argue that for high-dimensional datasets $H$ scales as $\mathcal{O}(n^{1 / d_{\mathrm{eff}}})$, where $d_{\mathrm{eff}}$ is a constant greater than $2$, and confirm this hypothesis with numerical simulations.
When this holds, quantum density peak decision clustering provides a speedup over its classical counterpart. Finally, we benchmark our quantum algorithm on a toy problem on a real quantum device, the ibm-perth 7-qubit quantum processor. As expected, the hardware errors severely limit any possible quantum advantage. Nevertheless, the noiseless simulations confirm the potential of our approach. The article is structured as follows. Section \ref{sec:preliminaries} provides a summary of the background material necessary for understanding our quantum algorithm. Namely, we introduce the data model and review the quantum minimum finding algorithm. Section \ref{sec:DPC} explains the classical DPC algorithm, assuming that the reader is not yet familiar with it. Then, in Section \ref{sec:qalg} we present our quantum algorithm. The complexity of the algorithm depends on the factor $H$, and so in Section \ref{sec:height} we study how $H$ behaves for different datasets. In Section \ref{sec:experiment} we show the results of the implementation of our proposal on a real quantum device. Section \ref{sec:conclusion} concludes the article with a brief summary and discussion. \section{Preliminaries} \label{sec:preliminaries} \subsection{Data model} In this article, we work in a black-box model. We assume that our knowledge about the data comes uniquely from querying an ``oracle'' that returns the distance between pairs of points \begin{equation} (i,j) \xrightarrow[]{\text{query}} \dist{x_i}{x_j}. \label{eq:querymodel} \end{equation} We make no assumptions about this distance besides it being non-negative and symmetric. That is, it does not need to be a distance in the proper mathematical sense. Our complexity measure is the number of performed queries (this is known as query complexity). This model is an abstraction that reasonably fits a number of problems.
For example, the oracle may represent accessing a database with the distances between a group of cities, or a routine that estimates the dissimilarity between pairs of images. The query complexity is a good estimate of the total time complexity whenever determining the distance between elements is the most computationally intensive part of the algorithm. Nevertheless, we also point out that this model may not be a good description of other common situations. For example, if we know the coordinates of a set of points in $\R^d$ (for some integer $d$), we already have more structure than in the black-box model, and there are geometrical methods that allow significant speedups for certain tasks. In the quantum setting, we assume that we can query the distances between elements in quantum superposition. That is, we assume access to a unitary $Q$ (the quantum oracle) such that \begin{equation} Q \ket{i,j,0^{\otimes q}} = \ket{i, j, \dist{x_i}{x_j}}, \label{eq:quantumquerymodel} \end{equation} where $q$ is the number of qubits necessary to store the distances up to the desired accuracy. In particular, given a superposition $\sum_{i j} \alpha_{ij} \ket{i, j}$ (for any set of normalized complex amplitudes $\{\alpha_{ij}\}_{ij}$), $Q$ acts as \begin{equation} Q \bigg( \sum_{i j} \alpha_{ij} \ket{i, j} \bigg) \ket{0^{\otimes q}} = \sum_{i j} \alpha_{ij} \ket{i, j, \dist{x_i}{x_j}}. \end{equation} The quantum query complexity is counted as the number of applications of the unitary $Q$ (for more details on the quantum query model, refer to \cite{Ambainis_2017}). When describing classical data, the oracle may be realized with a quantum random access memory (qRAM) architecture \cite{QRAM}. In the end, our results are critically dependent on the existence of a qRAM with the above-mentioned properties; this is a common setting in theoretical work on quantum algorithms, and in quantum machine learning in particular.
Nevertheless, even though there have been proposals of physical architectures for implementing qRAM \cite{QRAM_architectures,Park_Petruccione_Rhee_2019}, there are still significant challenges to overcome before such a device can be practically realized \cite{Matteo_Gheorghiu_Mosca_2020}. \subsection{Quantum minimum finding \label{sec:quantumminimumfinding}} The key quantum routine we use in our work is the well-known quantum minimum finding algorithm of D\"{u}rr and H\o{}yer \cite{DurrHoyer1996}, which is itself a specific application of the quantum search algorithm \cite{GroverSearch}. Consider a Boolean function $F$ defined on a domain of size $n$, $F: \{0, 1, \ldots, n-1\} \rightarrow \zo$. Our goal is to find an element $x$ such that $F(x)=1$, assuming one exists. $F$ is provided as a black box, that is, we can only gain information about the function by evaluating it on given elements. In the worst-case scenario, we may need to query all $n$ possible inputs before succeeding. Now suppose that we have access to a unitary $O_F$ that marks the $1$-inputs with a $-1$ phase, \begin{equation} O_F \ket{i} = \begin{cases} +\ket{i}, \text{ if } F(i) = 0 \\ -\ket{i}, \text{ if } F(i) = 1 \end{cases}. \label{eq:unitaryF} \end{equation} The unitary $O_F$ is an oracle for $F$, and an application of $O_F$ is referred to as a quantum query to $F$, as introduced in the section above. Let us now take $\ket{\Psi}$ to be the uniform superposition of all input states, \begin{equation} \ket{\Psi} = \frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} \ket{i}. \end{equation} If we measure $\ket{\Psi}$, we obtain every input with equal probability. Grover's algorithm \cite{GroverSearch} is based on the observation that we can amplify the probability of measuring $1$-input states via the repeated application of the operator \begin{equation} G_F = \left(2 \ket{\Psi}\bra{\Psi} - I \right) \cdot O_F \end{equation} to an initial state $\ket{\Psi}$.
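For small $n$, the effect of $G_F$ can be simulated classically with a real-valued state vector. The following sketch is our own illustration (the helper names are ours, not part of the algorithm's quantum implementation); it amplifies a single marked element out of $n=8$:

```python
import math

def grover_iteration(amps, marked):
    # Oracle O_F: flip the phase of the marked inputs
    amps = [-a if i in marked else a for i, a in enumerate(amps)]
    # Diffusion operator 2|Psi><Psi| - I: reflect amplitudes about their mean
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

n, marked = 8, {5}
amps = [1 / math.sqrt(n)] * n   # uniform superposition |Psi>
for _ in range(2):              # ~ (pi/4) * sqrt(n) iterations in general
    amps = grover_iteration(amps, marked)
p_marked = amps[5] ** 2         # probability of measuring the marked input
```

After two iterations the marked element is measured with probability roughly $0.95$, in line with the quadratic amplification just described.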
Roughly, the amplitudes of the $1$-input states grow linearly with each application of $G_F$, while their measurement probabilities grow quadratically. This means that $\sim\sqrt{n}$ quantum queries are sufficient to measure a $1$-input state with high probability. Boyer, Brassard, H\o{}yer, and Tapp \cite{Boyer1996}, generalizing Grover's algorithm \cite{GroverSearch}, proved the following theorem. \begin{theorem}[Quantum search, \cite{Boyer1996}] \label{thm:quantumsearch} Let $F:\{0, \ldots, n-1\} \rightarrow \zo$ and let $t = \vert \{x \in \{0, \ldots, n-1\} : F(x)=1\} \vert$ (which does not need to be known a priori). Then, we can find an element $x$ such that $F(x) = 1$ with an expected number of $\mathcal{O}(\sqrt{n / t})$ quantum queries to $F$. \end{theorem} Quantum minimum finding calls quantum search as a subroutine. Suppose that we want to find a minimizer of a black-box function $f: \{0, \ldots, n-1\} \rightarrow \N$. For any element $i \in [n]$, define the Boolean function \begin{equation} F_i(j)= \begin{cases} 0, \text{ if } f(j) \geq f(i) \\ 1, \text{ if } f(j) < f(i) \end{cases}, \end{equation} which can be evaluated with two queries to $f$. Let $O_{F_i}$ be a unitary that evaluates $F_i$, as in expression \eqref{eq:unitaryF}. The quantum minimum finding algorithm starts by choosing a threshold element $i$ uniformly at random between $0$ and $n-1$. Employing quantum search with $O_{F_i}$ as oracle, we look for an element $j$ such that $f(j) < f(i)$. We then repeat this process, updating $j$ as the new threshold element, until the probability that the selected threshold is the minimum of $f$ is sufficiently large. With this algorithm, D\"{u}rr and H\o{}yer \cite{DurrHoyer1996} reach the following result. \begin{theorem}[Quantum minimum finding, \cite{DurrHoyer1996}] \label{thm:quantumminimumfinding} Let $f:\{0, \ldots, n-1\} \rightarrow \N$ and let $\epsilon \in [0, 1[$.
Then, we can find the minimizer of $f$ with probability at least $1 - \epsilon$ using $\mathcal{O}(\sqrt{n} \log(1 / \epsilon) )$ quantum queries to $f$. \end{theorem} \section{Density peak clustering \label{sec:DPC}} Since its introduction in 2014 \cite{Rodriguez2014}, density peak clustering (DPC) has been widely studied and applied \cite{tu2019spatial,cheng2016large,shi2019unsupervised}. This algorithm, albeit quite simple in concept and implementation, presents some interesting features that are absent from most of the simpler clustering algorithms discussed in the QML literature. As an example, it does not make assumptions on the number of clusters present in the data (unlike simpler algorithms such as $k$-means \cite{macqueen1967some}), but infers this information from the data itself. Linked to this first property, DPC does not require any prior hypothesis on the shape of the dataset (e.g., Gaussian-distributed or symmetric), and works well with datasets of virtually any shape \cite{Rodriguez2014,fang2020adaptive}. Finally, DPC is able to detect outliers in the dataset, that is, elements that do not belong to any cluster. This last property opens the possibility of using DPC beyond the usual scope of clustering problems, such as anomaly detection \cite{tu2020hyperspectral,shi2019unsupervised}. We will now introduce this algorithm from a mathematical point of view and have a deeper look at its implementation to understand its capabilities and its limitations. In density peak clustering, the first step is to compute the density $\rho(x_i)$ of every element $x_i \in D$, \begin{equation} \rho(x_i) = \sum_{x_j \in D} \chi(\dist{x_i}{x_j}), \label{eq:densitydefinition} \end{equation} where $\chi$ is a convolutional kernel that can be optimized according to the specific application. Common choices include the step kernel $\chi(x) = \Theta(d_c - x)$, which counts the neighbours within distance $d_c$, or the Gaussian kernel $\chi(x) = \exp(- x^2 / d_c^2)$, for some normalization parameter $d_c$.
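For concreteness, the densities of Eq.~\eqref{eq:densitydefinition} with the two kernels can be computed as in the following sketch (Euclidean distances assumed; note that either kernel gives every point a self-contribution $\chi(0) = 1$, a harmless constant offset):

```python
import numpy as np

def densities(points, d_c, kernel="gaussian"):
    """rho(x_i) = sum_j chi(dist(x_i, x_j)) for an (n, d) array of points."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    if kernel == "step":
        chi = (dist < d_c).astype(float)       # Theta(d_c - x): count close points
    else:
        chi = np.exp(-((dist / d_c) ** 2))     # Gaussian kernel
    return chi.sum(axis=1)
```

Both kernels are $\mathcal{O}(n^2)$ to evaluate over the whole dataset, which is the cost referred to below.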
Next, for each element $x_i$ we find its ``nearest-higher'' $h(x_i)$, defined as the closest element to $x_i$ with higher density than $x_i$, \begin{equation} h(x_i) = \argmin_{x_j: \rho(x_j) > \rho(x_i)} \dist{x_i}{x_j}. \label{eq:NHdefinition} \end{equation} The nearest-higher separation $\delta(x_i)$ is the distance between $x_i$ and its nearest-higher, \begin{equation} \delta(x_i) = \dist{x_i}{h(x_i)}. \label{eq:deltadefinition} \end{equation} Naturally, if the point in question is the one with the highest density in the entire dataset, then it does not have a nearest-higher. In that case, by convention, the nearest-higher separation is $+\infty$. The key observation at the basis of DPC is that, for the elements that are local maxima of the density, the nearest-higher separation is much larger than the typical nearest-neighbour distance. This idea is illustrated in Figure \ref{fig:DPCillustration}, where we consider a small dataset embedded in $\R^2$ with the similarity measure provided by the Euclidean distance. In Figure \ref{fig:rhodelta_plot}, by plotting the density versus the nearest-higher separation for all elements in the dataset, it becomes clear that the elements $7$, $11$, and $15$ stand out from their neighbours. These three elements are classified as \emph{roots}, and each root will originate its own cluster. The elements $0$ and $16$, despite also having large nearest-higher separations, have low densities and so are classified as \emph{outliers}. In more detail, for some choice of thresholds $\rho_c$ and $\delta_c$ (to be specified according to the typical scales of the dataset), we promote to roots the elements $x_i$ satisfying \begin{equation} \rho(x_i) > \rho_c \quad \text{and} \quad \delta(x_i) > \delta_c, \end{equation} and demote to outliers those that obey \begin{equation} \rho(x_i) < \rho_c \quad \text{and} \quad \delta(x_i) > \delta_c. \end{equation} The remaining elements are then assigned to the same cluster as their nearest-higher.
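The nearest-higher assignments of Eqs.~\eqref{eq:NHdefinition} and \eqref{eq:deltadefinition}, together with the root/outlier classification, can be sketched as follows (a brute-force $\mathcal{O}(n^2)$ reference implementation with Euclidean distances; the function names are ours):

```python
import numpy as np

def nearest_highers(points, rho):
    """For each i return (h[i], delta[i]); the global density maximum gets
    h = -1 and delta = +inf by convention."""
    n = len(points)
    h = np.full(n, -1)
    delta = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if rho[j] > rho[i]:                        # candidates for h(x_i)
                d = np.linalg.norm(points[i] - points[j])
                if d < delta[i]:
                    delta[i], h[i] = d, j
    return h, delta

def classify(rho, delta, rho_c, delta_c):
    """Return (roots, outliers) index lists from the two thresholds."""
    roots = [i for i in range(len(rho)) if rho[i] > rho_c and delta[i] > delta_c]
    outliers = [i for i in range(len(rho)) if rho[i] < rho_c and delta[i] > delta_c]
    return roots, outliers
```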
As evidenced in Figure \ref{fig:clustering_forest}, the directed graph of nearest-highers (with edges originating from roots removed) forms a forest, and each tree corresponds to a distinct cluster, its root node being the element that was promoted to root. The main steps of density peak clustering are summarized in Algorithm \ref{algo:DPclustering}. The complexity of the algorithm becomes immediately clear from the first step. Indeed, in order to compute the density of a given element, we need to query the distance to every other element in the dataset. Repeating this for all elements in the dataset means that we end up querying all $(n^2 - n) / 2$ distances. \begin{claim} The density peak clustering algorithm (Algorithm \ref{algo:DPclustering}) has $\mathcal{O}(n^2)$ query complexity, where $n$ is the size of the dataset. \label{thm:DDclassicalcomplexity} \end{claim} Now consider a \emph{decision} version of the clustering problem, where the goal is, given two elements $x_i, x_j \in D$, to decide if they belong to the same cluster. Evidently, we could solve the full clustering problem and then verify if $x_i$ and $x_j$ were assigned to the same cluster. But in the context of DPC there is a more direct approach, relying on the following simple observation: the two elements belong to the same cluster if and only if the sequences of nearest-highers starting from $x_i$ and $x_j$ lead to the same root. In other words, the problem is reduced to finding the roots of the respective trees in the graph of nearest-highers. This can be solved with Algorithm \ref{algo:decisionclustering}, where we just walk up the trees node by node by computing the corresponding nearest-higher. We know that we have reached a root when the nearest-higher separation is larger than $\delta_c$. Unfortunately, this approach comes with no significant complexity advantage, as every step we take up the tree has essentially the same computational cost as the full clustering.
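The walk up the tree is a short loop once the nearest-higher arrays are available; in this sketch (names ours), `h` and `delta` are precomputed and a root is detected by its separation exceeding $\delta_c$:

```python
def find_root(i, h, delta, delta_c):
    """Follow nearest-highers from element i until the separation
    exceeds delta_c, i.e. until a root is reached."""
    while delta[i] <= delta_c:
        i = h[i]
    return i

def same_cluster(i, j, h, delta, delta_c, outliers=()):
    """Decision version: i and j are clustered together iff neither is an
    outlier and their nearest-higher chains reach the same root."""
    if i in outliers or j in outliers:
        return False
    return find_root(i, h, delta, delta_c) == find_root(j, h, delta, delta_c)
```

Classically, each lookup of `h` hides a full density computation, which is where the quadratic cost enters.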
To see this, just note that computing the nearest-higher of an element $x_i$ requires knowing how the densities of all other elements compare to $\rho(x_i)$ (\textit{cf.} Eq.~\eqref{eq:NHdefinition}). So, it seems unavoidable to compute all the densities. \begin{claim} The decision version of density peak clustering (Algorithm \ref{algo:decisionclustering}) has $\mathcal{O}(n^2)$ query complexity, where $n$ is the size of the dataset. \label{thm:DDclassicaldecisioncomplexity} \end{claim} In the next section, we show that we can circumvent this cost with quantum search, establishing a quantum advantage for the decision version of density peak clustering. \begin{figure} \caption{Density peak clustering. Figure \ref{fig:clustering_illustration} shows a small example dataset embedded in $\R^2$, and Figure \ref{fig:rhodelta_plot} plots the density versus the nearest-higher separation for all its elements.} \label{fig:clustering_illustration} \label{fig:rhodelta_plot} \label{fig:DPCillustration} \end{figure} \begin{algorithm}[p] \SetArgSty{} \caption{Density peak clustering (originally proposed in Ref. \cite{Rodriguez2014})} \label{algo:DPclustering} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{dataset $D$, density threshold $\rho_c$, and nearest-higher separation threshold $\delta_c$ (and possibly normalization parameter $d_c$, depending on the form of the kernel)} \Output{clusters} For all $x_i \in D$, compute density $\rho(x_i) = \sum_{j} \chi(\dist{x_i}{x_j})$ \label{step:densities}\; For all $x_i \in D$, compute nearest-higher $h(x_i) = \argmin_{j: \rho(x_j) > \rho(x_i)} \dist{x_i}{x_j}$ and respective separation $\delta(x_i) = \dist{x_i}{h(x_i)}$\label{step:NHs}\; For all $x_i \in D$, promote to root if $\rho(x_i) > \rho_c \land \delta(x_i) > \delta_c$ and to outlier if $\rho(x_i) < \rho_c \land \delta(x_i) > \delta_c$\label{step:promotions}\; For each $x_i \in D$ that is neither a root nor an outlier, assign $x_i$ to the same cluster as $h(x_i)$\label{step:assignclusters}\; \end{algorithm} \begin{algorithm}[p] \SetArgSty{} \caption{Decision version of density peak clustering} \label{algo:decisionclustering}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \SetKwData{no}{no} \SetKwData{yes}{yes} \SetKwFunction{findroot}{FindRoot} \SetKwProg{Fn}{def}{\string:}{} \Input{dataset $D$, $x_i, x_j \in D$, density threshold $\rho_c$, and nearest-higher separation threshold $\delta_c$ (and possibly normalization parameter $d_c$, depending on the form of the kernel)} \Output{yes/no} \Fn{\findroot{$x$}}{ \If{$x$ is a root}{ output $x$\; } \Else{ output \findroot{$h(x)$}\; } } \If{$x_i$ or $x_j$ are outliers}{ output \no\; } \If{\findroot{$x_i$} = \findroot{$x_j$}}{ output \yes\; } \Else{ output \no\; } \end{algorithm} \section{Quantum algorithm \label{sec:qalg}} We now have all the elements to introduce a quantum algorithm to solve the decision version of density peak clustering. The quantum algorithm will use the same strategy as in Algorithm \ref{algo:decisionclustering}, calling the quantum minimum finding routine (section \ref{sec:quantumminimumfinding}) to determine the nearest-highers. For each $i \in \{0, \ldots, n-1\}$, consider the function \begin{align} f_i(j) &= \begin{cases} \dist{x_i}{x_j} , & \text{if } \rho_j > \rho_i\\ + \infty, & \text{if } \rho_j \leq \rho_i. \end{cases} \label{eq:ffunction} \end{align} From the definition of nearest-higher (Eq. \eqref{eq:NHdefinition}), it is clear that finding the minimizer of $f_i$ is equivalent to determining the nearest-higher of $x_i$, \begin{equation} h(x_i) = \argmin_j f_i(j). \label{eq:NH} \end{equation} For any element in the dataset, we can evaluate its density (Eq.~\eqref{eq:densitydefinition}) with $n-1$ distance queries. So, we can compute $f_i(j)$, for any $j \in \{0, \ldots, n-1\}$, with $\mathcal{O}(n)$ queries. 
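As a classical reference (a sketch, with names of our choosing): the function $f_i$ of Eq.~\eqref{eq:ffunction} can be transcribed directly, and the D\"{u}rr--H\o{}yer threshold loop emulated by sampling uniformly among the elements below the current threshold, which matches the outcome distribution of quantum search; only the query count, not the result, would differ on a quantum machine.

```python
import math
import random

def make_f(i, rho, dist):
    """f_i(j) = dist(x_i, x_j) if rho_j > rho_i, else +infinity; its
    minimizer is the nearest-higher h(x_i)."""
    def f_i(j):
        return dist(i, j) if rho[j] > rho[i] else math.inf
    return f_i

def minimum_finding_skeleton(f, n, seed=0):
    """Classical stand-in for the Durr-Hoyer loop: keep a threshold j and
    replace it with a uniformly random element k satisfying f(k) < f(j),
    the distribution one run of quantum search would produce, until no
    better element remains."""
    rng = random.Random(seed)
    j = rng.randrange(n)                  # random initial threshold
    while True:
        better = [k for k in range(n) if f(k) < f(j)]
        if not better:                    # j minimizes f
            return j
        j = rng.choice(better)            # outcome of one quantum search

# Nearest-higher of x_0 for a toy 1-d dataset with densities rho:
rho = [1.0, 3.0, 2.0]
dist = lambda a, b: abs(a - b)
h_0 = minimum_finding_skeleton(make_f(0, rho, dist), 3)
```

Here `h_0` is element $1$, the closest element denser than $x_0$; on a quantum computer each loop iteration would instead cost $\mathcal{O}(\sqrt{n/t})$ applications of the unitary $U_i$ introduced below.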
Since any classical computation can be simulated on a quantum computer \cite{NielsenChuang}, there exists a unitary $U_i$ that uses $\mathcal{O}(n)$ quantum queries and achieves the transformation \begin{equation} U_i \ket{j} \ket{0^{\otimes q + 1 }} = \ket{j} \ket{f_i(j)} \end{equation} for any $j \in \{0, \ldots, n-1\}$, where $q$ is the number of qubits needed to store the distance up to the desired accuracy and we add an extra flag qubit to indicate if $f_i$ evaluates to $+\infty$. Therefore, by Theorem \ref{thm:quantumminimumfinding}, we can use the quantum minimum finding algorithm to find the minimizer of $f_i$ (i.e., the nearest-higher of $x_i$) with probability at least $1 - \epsilon$ with $\mathcal{O}(\sqrt{n} \log (1 / \epsilon))$ applications of $U_i$, amounting to a total of \begin{equation} \mathcal{O}\left(n^{3/2} \log (1 / \epsilon)\right) \label{eq:NHquantumcomplexity} \end{equation} quantum queries. Let $H$ be the maximum height of any tree in the graph of nearest-highers. In the worst-case scenario, one may need to call quantum minimum finding $2 H$ times before reaching the root node for both inputs. Moreover, we want to make sure that every single call finds the correct nearest-higher with high probability, which can be done by setting the constant $\epsilon$ in equation \eqref{eq:NHquantumcomplexity} to be sufficiently small. A simple calculation (setting the failure probability of each call to $\epsilon / (2H)$ and applying a union bound over the $2H$ calls) concludes the following. \begin{claim} Let $D$ be a dataset of $n$ elements and let $H$ be the maximum height of any tree in the graph of nearest-highers. For any two elements $x_i, x_j \in D$, we can solve density peak decision clustering with success probability at least $1 - \epsilon$ (for any $\epsilon > 0$) with quantum query complexity \begin{equation} \mathcal{O} \left( n^{3/2} H \log (H / \epsilon) \right).
\end{equation} \end{claim} Recall that the classical complexity of the decision version of density peak clustering is $\mathcal{O}(n^2)$ (Claim \ref{thm:DDclassicaldecisioncomplexity}), irrespective of the value of $H$. \section{Height of trees \label{sec:height}} We have seen that we can reach a lower quantum query complexity for the density peak decision clustering problem depending on the factor $H$, the maximum height of the trees in the graph of nearest-highers. Indeed, there is a quantum speedup when $H$ scales as $\mathcal{O}(n^a)$ for some $a<1/2$. In contrast, there is no speedup when $H = \Omega(n^a)$ with $a > 1/2$, as we may solve clustering classically in $\mathcal{O}(n^2)$ time. To understand how the factor $H$ scales, we can start by considering a very simple data model. We assume that the dataset is generated by sampling $n$ points uniformly at random from a bounded region of $\R^d$. In this case, we expect to see only one cluster that spans the entire region. A straightforward calculation reveals that the expected nearest-neighbour distance $\expval{\mathit{nn}}$ scales as \begin{equation} \expval{\mathit{nn}} = \mathcal{O} \left( \frac{1}{n^{1/d}} \right). \end{equation} While the nearest-higher does not always coincide with the nearest-neighbour, the nearest-higher separation is most likely not much larger than the typical nearest-neighbour distance (except for the root node). So, we should have \begin{equation} \expval{\delta} \sim \expval{\mathit{nn}}. \label{eq:deltascaling} \end{equation} Now consider a leaf node of the graph of nearest-highers at a distance, say, $L$ from the root node. As it probably lies close to the edge of the region, $L$ characterizes the ``size'' of the cluster. Its nearest-higher is found roughly along the direction of the centre of the cluster. That is, the nearest-higher is at a distance $\sim \expval{\delta}$ closer to the root node.
Repeating this reasoning for all nodes in the path to the root, we conclude that \begin{equation} \expval{H} \sim \frac{L}{\expval{\delta}} \sim \frac{L}{\expval{\mathit{nn}}} = \mathcal{O} \left( n^{1/d} \right). \label{eq:Hscaling} \end{equation} Admittedly, we have merely provided an informal argument for the expression \eqref{eq:Hscaling}, not a fully rigorous proof. Still, we can verify this behaviour numerically. We generate such artificial datasets by sampling uniformly from $d$-balls of radius one, for different dimensions $d$. The results, shown in Figure \ref{fig:heights}a, confirm that indeed $\expval{H}$ scales as $n^{1/d}$. To study how $H$ behaves in more general settings, we consider two other types of datasets: \begin{itemize} \item \emph{Well-separated, Gaussian-shaped clusters (Figure \ref{fig:heights}b).} For different values of $d$, we pick ten centroids at random in the range $[0, 100]^d$, associating to each a randomly chosen covariance matrix. We then generate artificial datasets by drawing from the corresponding Gaussian distributions. With high probability, the clusters will be well-separated. \item \emph{Real-world dataset (Figure \ref{fig:heights}c).} We randomly sample entries from the Forest Cover Type dataset \cite{ForestType}. This dataset originally contains fifty-five features, both numerical and categorical, conveying information about the Roosevelt National Forest in Colorado, namely, tree types, shadow coverage, distance to nearby landmarks, soil type, and local topography. We preprocess the dataset by numerically encoding the categorical data, and then rescaling the numerical variables such that each has mean zero and variance one. We define the clustering distance as the Euclidean distance between the first ten principal components of the data. \end{itemize} In both cases, we observe that \begin{equation} \expval{H} \sim n^{1 / d_{\mathrm{eff}}}, \end{equation} for some parameter $d_{\mathrm{eff}}$.
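Such a parameter can be extracted from measured heights by a least-squares fit in log-log space, whose slope is $1/d_{\mathrm{eff}}$ (a sketch of the kind of fit involved; the actual fits in the paper may differ in detail):

```python
import numpy as np

def effective_dimension(ns, heights):
    """Fit <H> ~ n^(1/d_eff) by linear regression of log H on log n
    and return d_eff, the reciprocal of the fitted slope."""
    slope, _ = np.polyfit(np.log(ns), np.log(heights), 1)
    return 1.0 / slope
```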
We interpret $d_{\mathrm{eff}}$ as an ``effective dimension'' of the dataset, which can be smaller than the number of features of the data. For example, for the Gaussian datasets a simple polynomial fit reveals $d_{\mathrm{eff}} = 1.94, 2.36, 2.80, 3.22$ for $d = 2,3,4,5$, respectively. For the Forest Cover Type dataset, we find $d_{\mathrm{eff}} = 3.71$. An interesting research question (outside the scope of the present article) would be to properly understand how this effective dimension arises from the structure of the data. For the cases considered where the datasets had more than two features, we have verified that the effective dimension was greater than two, entering the regime where our density peak decision clustering algorithm shows a quantum speedup. This is evidence that our quantum algorithm could be suitable for high-dimensional, real-world problems. Our conclusions are summarized in the following claim. \begin{claim} Let $D$ be a dataset of $n$ elements and let the maximum height of any tree in the graph of nearest-highers scale as $\mathcal{O}\left(n^{1 / d_{\mathrm{eff}}} \right)$ for some parameter $d_{\mathrm{eff}}$. Then, for any two elements $x_i, x_j \in D$, we can solve density peak decision clustering with constant error probability with quantum query complexity \begin{equation} \tilde{\mathcal{O}} \left( n^{3/2 + 1 / d_{\mathrm{eff}}} \right). \end{equation} In particular, if $d_{\mathrm{eff}} > 2$, the quantum query complexity is better than the classical query complexity $\mathcal{O} \left( n^2 \right)$. \end{claim} \begin{figure} \caption{Heights of trees. We plot the maximum height of any tree in the graphs of nearest-highers, $H$, for different types of datasets (we show the average over five runs). Both axes are shown on a logarithmic scale. For plots (a) and (b) we artificially generate uniform and Gaussian datasets (respectively) for several dimensions $d$ with a varying number of elements $n$.
In the upper left corners, we show examples of such datasets in two dimensions. For plot (c), we take $n$ random samples from the Forest Cover Type dataset \cite{ForestType}.} \label{fig:clustering_ball} \label{fig:clustering_gaussian} \label{fig:clustering_forest} \label{fig:heights} \end{figure} We would like to stress that our speedup shows a dependence on a geometric property of the dataset that, to the best of our knowledge, has not yet been seen in the literature. While it is a common idea that quantum speedups in machine learning may rely on the structure of the dataset, it is usually hard to rigorously characterize the necessary structure. In contrast, in this work we were able to prove that we have a speedup if $H$ scales better than $\sqrt{n}$ (or, in other words, if the effective dimension is larger than $2$). \section{Experimental implementation \label{sec:experiment}} In this section, we test the proposed quantum density peak clustering on a real quantum processor. Specifically, we implement the main quantum routine, minimum finding. Since we are limited by the size of the devices, we solve a synthetic clustering problem involving just eight elements -- see the dataset depicted in Figure \ref{fig:toy_dataset}. We can encode each element of the dataset in $\lceil \log_2(8) \rceil = 3$ qubits, making the problem suitable to run on NISQ machines. The first problem to address is implementing the oracle in a suitable manner. \begin{figure} \caption{Experimental implementation of the toy problem. Figure \ref{fig:toy_dataset} shows the synthetic eight-element dataset, and Figure \ref{fig:simul_results} the average number of oracle calls for the classical and quantum strategies.} \label{fig:toy_dataset} \label{fig:simul_results} \end{figure} Given the density $\rho_i$ computed for each $i \in \{0, \ldots, 7\}$ in our synthetic dataset, consider the function introduced in Eq.~\eqref{eq:ffunction}.
The quantum minimum finding calls an oracle $O_{i,j}$ that implements the Boolean function \begin{equation} F_{i,j}(k)= \begin{cases} 0, \text{ if } f_i(k) \geq f_i(j) \\ 1, \text{ if } f_i(k) < f_i(j) \end{cases} \end{equation} as \begin{equation} O_{i,j} \vert k \rangle= \begin{cases} +\vert k \rangle, \text{ if } F_{i,j}(k)=0 \\ -\vert k \rangle, \text{ if } F_{i,j}(k)=1 \end{cases}. \label{eq:experimental_oracle} \end{equation} Present-day quantum machines do not have qRAM access, nor can they perform complex arithmetic operations such as computing densities. So, we classically pre-compute $F_{i,j}(k)$ for all values of $i$, $j$, and $k$ and construct a circuit for each oracle $O_{i,j}$ following the scheme outlined in \cite{figgatt2017complete}. While this is not how the algorithm is meant to be implemented in practice (cf. section \ref{sec:qalg}), we believe that this approach is suited for the purposes of a proof-of-concept. At this point, we have all the ingredients to start the quantum nearest-higher search. Given a point $i$, we start by selecting a random threshold $j$ and call the oracle $O_{i,j}$. We then mark the states that have $F_{i,j}(k)=1$, following the same marking scheme outlined in \cite{figgatt2017complete}, and apply the amplitude amplification subroutine. At the end, we measure a state $\vert k \rangle$ that is selected as the new threshold $j'=k$ if $f_i(k) < f_i(j)$; otherwise, a new threshold $j'$ is selected at random. This procedure is iterated until the nearest-higher is found for every point $i$ in the dataset. We benchmark this strategy against the classical method of nearest-higher search, which is simply repeated random sampling of $j$. In either case, the figure of merit is the average number of oracle calls before finding the nearest-higher, $\langle n_O \rangle$, i.e., the quantum query complexity.
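Since the oracles are built from classically pre-computed truth tables, each acts as a diagonal phase matrix; the following NumPy stand-in (not the actual circuit construction of Ref.~\cite{figgatt2017complete}) applies one amplitude-amplification round to the uniform state over the eight basis states:

```python
import numpy as np

def amplified_probs(F_table):
    """One amplitude-amplification round for a pre-computed oracle table:
    |k> -> (-1)^{F(k)} |k>, then the reflection about the uniform state.
    Returns the measurement probabilities over the basis states."""
    dim = len(F_table)
    psi = np.full(dim, 1 / np.sqrt(dim))                 # uniform initial state
    state = np.where(np.array(F_table) == 1, -psi, psi)  # diagonal phase oracle
    state = 2 * psi * (psi @ state) - state              # diffusion operator
    return np.abs(state) ** 2
```

With a single marked state among eight, one round raises its measurement probability from $1/8$ to $25/32 \approx 0.78$, consistent with running a single amplification round per step in this small instance.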
In Figure \ref{fig:simul_results} we show the results for $\langle n_O \rangle$ taken over $1000$ runs of the algorithm, using both the classical strategy and the quantum routine. As expected, the classical nearest-higher algorithm (i.e., random search) always takes on average $\sim 3.5$ iterations (blue bars in Figure \ref{fig:simul_results}), irrespective of the point, as one would expect given the size of the dataset. Regarding the quantum search, we first used the Qiskit package \cite{Qiskit} to simulate the algorithm without noise. In general, we can have multiple rounds of amplitude amplification in each step of our quantum minimum finding subroutine. However, here we opted to always run just one round. The simulations (green bars in Figure \ref{fig:simul_results}) clearly demonstrate a quantum speedup, as the average number of oracle calls before convergence is lower than in the classical case for all points. We can observe that the speedup is more pronounced for some points than for others as, for each specific datapoint, it depends on the number of states marked by the oracle. Finally, we ran our Qiskit program on a real quantum computer by IBM, the ibm-perth 7-qubit processor (see red bars in Figure \ref{fig:simul_results}). In this case, the analysis is more subtle because of the real-world noise on top of our ideal quantum algorithm. Indeed, for points whose oracles mark several states, the depth of the search circuits is quite large, making the computations more sensitive to noise. On the other hand, there are some points for which the number of states marked by the oracle is low, and thus the circuit is small enough that we can still observe an advantage over the classical case (even if it is less pronounced than in the ideal quantum simulation due to noise).
This observation is very relevant, as it is generally difficult to observe such quantum advantages when running quantum machine learning subroutines on real hardware, even for a toy problem such as the one explored here. We can thus hope that, with improvements in coherence times and perhaps error-correcting codes built into real quantum processors, we can start observing advantages for real-world problems. \section{Conclusions \label{sec:conclusion}} In this work, we have introduced a quantum version of the density peak clustering algorithm, specifically aimed at its decision version. Our proposed algorithm builds upon the well-known quantum minimum finding algorithm, giving us the possibility of computing the query complexity of quantum density peak clustering. Indeed, while the classical query complexity of density peak clustering is $\mathcal{O} \left( n^2 \right)$, we prove that our proposed quantum algorithm has complexity $\tilde{\mathcal{O}}\left( n^{3/2 + 1 / d_{\mathrm{eff}}} \right)$, for a parameter $d_{\mathrm{eff}}$ that depends on the structure of the dataset. For values of $d_{\mathrm{eff}}>2$, we have a quantum speedup, albeit a modest one. This provable dependence of the complexity on a geometric property of the dataset constitutes by itself a notable result. Indeed, while it is widely accepted that quantum speedups for machine learning may depend on the structure of the data, it is often difficult to precisely characterize this dependence. As discussed in section \ref{sec:height}, in our case we interpret the parameter $d_{\mathrm{eff}}$ as an ``effective dimension'' of the dataset, making quantum density peak clustering especially suited for high-dimensional problems.
The successful implementation of a toy problem in a real quantum computer and the observation of an advantage, even in the presence of the noise typical of a NISQ device, hints at the concrete possibility of exploiting the capabilities of quantum density peak clustering in near-term quantum computers. To conclude, we would like to raise two points that are relevant not only for quantum density peak clustering, but also for the quantum machine learning community at large. The first one regards the efficient implementation of the classical computation routines, which in this work were included in the oracles. For example, while comparing two distances consumes $\mathcal{O}(1)$ time, the overhead of this calculation prohibits its implementation on present-day quantum hardware. The second point is to understand how the effective dimension of the dataset $d_{\mathrm{eff}}$ scales in general for an arbitrary dataset (and if we can always define an effective dimension given the scaling of nearest neighbours). Going even deeper, one could ask if $d_{\mathrm{eff}}$ represents some fundamental property of the data and its structure (and can thus be exploited further) or if it is just a scaling parameter. We believe that answering these questions could be crucial for quantum density peak clustering and its future as a viable clustering algorithm for quantum machine learning on near-term quantum devices. \section*{Acknowledgments} We would like to thank Bruno Coutinho for his valuable insights on the graph of nearest-highers. We would also like to thank Diogo Cruz, Akshat Kumar, Jo\~{a}o Moutinho, Sagar Pratapsi, and Mathieu Roget for fruitful discussions. Furthermore, we acknowledge the support from FCT, namely through projects UIDB/50008/2020 and UIDB/04540/2020. DM acknowledges the support from FCT through scholarship 2020.04677.BD.
\begin{thebibliography}{50} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Bishop}(2011)}]{bishop_pattern_2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Bishop}},\ }\href@noop {} {\emph {\bibinfo {title} {Pattern Recognition and Machine Learning}}},\ \bibinfo {edition} {1st}\ ed.\ (\bibinfo {publisher} {Springer},\ \bibinfo {address} {New York},\ \bibinfo {year} {2011})\BibitemShut {NoStop} \bibitem [{\citenamefont 
{Hastie}\ \emph {et~al.}(2009)\citenamefont {Hastie}, \citenamefont {Tibshirani},\ and\ \citenamefont {Friedman}}]{hastie2009elements} \BibitemOpen \BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document} \title{Growth sequences for circle diffeomorphisms} \author{Nobuya Watanabe} \address{Department of Mathematics, School of Commerce, Waseda University, Shinjuku, Tokyo 169-8050, Japan.} \email{[email protected]}
\begin{abstract} We obtain results on the growth sequences of the differential for iterations of circle diffeomorphisms without periodic points. \end{abstract} \maketitle

\section{Introduction and statement of results}
\noindent Let $f:S^1 \rightarrow S^1$ be a $C^{1}$-diffeomorphism, where $S^1={\Bbb R}/{\Bbb Z}$. We define the {\it growth sequence} of $f$ by
\[ \Gamma_n(f)= \max \{ \lVert Df^n\rVert, \lVert Df^{-n}\rVert \},\ \ \ n \in \Bbb{N},\]
where $f^n$ is the $n$-th iterate of $f$ and $\lVert Df^n\rVert = {\displaystyle \max_{x \in S^1}}|Df^n(x)|$. If $f$ has periodic points, then the study of growth sequences reduces to the case of interval diffeomorphisms, which was studied in \cite{B}, \cite{PS}, \cite{W}. If $f$ has no periodic points, then by the theorem of Gottschalk and Hedlund, $\Gamma_n(f)$ is bounded if and only if $f$ is $C^1$-conjugate to a rotation. Notice that if $\Gamma_n(f)$ is bounded then $f$ is minimal. So it is natural to ask how rapidly the sequence $\Gamma_n(f)$ can grow if it is unbounded. In this paper we give an answer to this question:
\begin{thm} Let $f:S^1 \rightarrow S^1$ be a $C^2$-diffeomorphism without periodic points.
Then
\[\lim_{n\rightarrow \infty}\frac{\Gamma_n(f)}{n^2}=0.\]
\end{thm}
\begin{thm} For any increasing unbounded sequence of positive real numbers $\theta_n =o(n^2)$ as $n \rightarrow \infty$ and any $\varepsilon >0$, there exists an analytic diffeomorphism $f:S^1 \rightarrow S^1$ without periodic points such that
\[1- \varepsilon \le \limsup_{n\rightarrow \infty} \frac{\Gamma_n(f)}{\theta_n} \le 1.\]
\end{thm}

\section{Preliminaries}
\noindent Given an orientation-preserving homeomorphism $f:S^1\rightarrow S^1$, its {\it rotation number} is defined by
\[ \rho (f) = \lim_{n \rightarrow \infty}\frac{\tilde{f}^n(x)-x}{n} \mod {\Bbb Z}, \]
where $\tilde{f}$ denotes a lift of $f$ to $\Bbb{R}$. The limit exists and is independent of $x \in \Bbb{R}$ and of the choice of lift $\tilde{f}$. Put $\alpha = \rho(f)$, and let $R_{\alpha}$ be the rigid rotation by $\alpha$,
\[ R_{\alpha}(x)=x+\alpha \mod {\Bbb Z}. \]
For the basic properties of circle homeomorphisms and the combinatorics of orbits of circle rotations, general references are \cite{MS}, chapter I, and \cite{KH}, chapters 11 and 12. By the classical theorem of Poincar\'{e}, the order structures of orbits of $f$ and of $R_{\alpha}$ on $S^1$ are essentially the same. In particular, if $\rho (f) = \frac{p}{q} \in \Bbb{Q}/\Bbb{Z}$ then $f$ has periodic points of period $q$, and every periodic orbit of $f$ has the same order as the orbits of $R_{\frac{p}{q}}$ on $S^1$. Moreover, $\rho (f) \in (\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}$ if and only if $f$ has no periodic points; in this case, if $f$ is of class $C^2$, then by the well-known theorem of Denjoy, $f$ is topologically conjugate to $R_{\alpha}$.

Suppose $\alpha \in (\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}$. Let
\[ \alpha = [a_1,a_2,a_3,\ldots]=\dfrac{1}{a_1+ \dfrac{1}{a_2+\dfrac{1}{a_3+{}_{\ddots}}}}, \qquad a_i \in \Bbb{N},\ a_{i} \ge 1, \]
be the continued fraction expansion of $\alpha$, and let
\[ \frac{p_n}{q_n}=[a_1,a_2,\ldots,a_n] \]
be its $n$-th convergent.
Then $p_n$ and $q_n$ satisfy
\[ p_{n+1}=a_{n+1}p_n+p_{n-1},~~p_0=0,~p_1=1, \]
\[ q_{n+1}=a_{n+1}q_n+q_{n-1},~~q_0=1,~q_1=a_1, \]
\[ \frac{p_0}{q_0} < \frac{p_2}{q_2} < \frac{p_4}{q_4} < \cdots < \alpha < \cdots < \frac{p_5}{q_5} < \frac{p_3}{q_3} < \frac{p_1}{q_1}. \]
The rational numbers $\{ \frac{p_n}{q_n}\}$ are the best rational approximations of $\alpha$. This can be expressed in terms of the dynamics of $R_{\alpha}$ as follows: $R^{q_n}_{\alpha}(0) \in [0,R^{-q_{n-1}}_{\alpha}(0)]$, and if $k >q_{n-1}$ and $R^{k}_{\alpha}(0) \in [R^{q_{n-1}}_{\alpha}(0),R^{-q_{n-1}}_{\alpha}(0)]$, then $k \ge q_{n}$. Note that for $0 \le k \le a_{n+1}$, $R_{\alpha}^{kq_n}(0) \in [0,R^{-q_{n-1}}_{\alpha}(0)]$, while $R_{\alpha}^{(a_{n+1}+1)q_n}(0) \notin [0,R_{\alpha}^{-q_{n-1}}(0)]$.

For $\alpha \in (\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}$ the continued fraction expansion is unique. On the other hand, for $\beta \in \Bbb{Q}/\Bbb{Z}$ the expression by continued fractions is not unique: $\beta = [b_1,b_2,\ldots,b_n+1] = [b_1,b_2,\ldots,b_n,1]$. For $\alpha = [a_1,a_2,\ldots]$ and $i, j \in \Bbb{N}$, $1 \le i \le j$, we denote $\alpha|[i,j] = [a_i,a_{i+1},\ldots,a_j]$. When we wish to emphasize the dependence on $\alpha$, we write $a_{i}(\alpha), p_i(\alpha), q_i(\alpha)$.

For $x \in S^1$, $I_n(x)$ denotes the smaller of the two intervals with endpoints $x$ and $f^{q_n}(x)$, and for an interval $J \subset S^1$, $|J|$ denotes the length of $J$. The following is well known; see \cite{MS}, chapter I, section 2a.
\begin{lem} {\rm (Denjoy)} Let $f$ be a $C^1$-diffeomorphism of $S^1$ without periodic points such that $\log Df:S^1\rightarrow {\Bbb R}$ has bounded variation. Then there exists a positive constant $C_1 = C_1(f)$ with the following properties.

$(1)$~ For any $0\le l \le q_{n+1}$ and every $x_1,x_2 \in I_n(x)$,
\[ \frac{1}{C_1}\le \frac{Df^l(x_1)}{Df^l(x_2)}\le C_1. \]

$(2)$~{\rm (Denjoy inequality)} For every $n \in {\Bbb N}$,
\[ \frac{1}{C_1}\le \lVert Df^{q_n} \rVert \le C_1.
\]
\end{lem}

As stated in section 1, growth sequences play a significant role in the problem of the smooth linearization of circle diffeomorphisms, where the arithmetic properties of the rotation number and the regularity of the diffeomorphism are important. This problem has a rich history; see e.g. \cite{A}, \cite{H}, \cite{Y}, \cite{KS}, \cite{St}, \cite{KO}. In this paper we particularly need the following improvement of the Denjoy inequality, which is due to Katznelson and Ornstein. The statement of Lemma 2 is obtained by merging results in \cite{KO}: for $(1)$, see (1.16), lemma 3.2 (3.6) and proposition 3.3 (a); for $(2)$, see theorem 3.7.
\begin{lem} Let $f$ be a $C^2$-diffeomorphism of $S^1$ without periodic points. Set
\[ E_n = \max \{ \lVert \log Df^{q_n}\rVert,~ \max_{x\in S^1} \{|D\log Df^{q_n}(x)||I_{n-1}(x)|\} \}. \]
Then the following hold.

$(1)$~~~$\lim_{n \rightarrow \infty} E_n = 0.$

$(2)$ If $f$ is of class $C^{2+\delta}$, $\delta > 0$, then there exist $C>0$ and $0 < \lambda <1$ such that $\lVert \log Df^{q_n}\rVert \le C\lambda^n$ for any $n \in \Bbb{N}$.
\end{lem}
The conclusion of Lemma 2 (2), combined with a suitable arithmetic condition on $\rho (f)$, is sufficient to provide the $C^{1}$-linearization of $f$. We need the following, which is a special case of the main theorem in \cite{KO}. For $C^{3+\delta}$-diffeomorphisms it is originally due to Herman \cite{H}.

\noindent {\bf Corollary of Lemma 2 (2).}\ \ {\it If} $f$ {\it is of class} $C^{2+\delta}$ {\it and the rotation number} $\alpha = \rho(f)$ {\it is of bounded type, i.e.} $a_i(\alpha)$ {\it is uniformly bounded, then} $\lVert Df^n \rVert$ {\it is uniformly bounded.}

\section{Proof of Theorem 1}
\noindent Let $f:S^1 \rightarrow S^1$ be a $C^2$-diffeomorphism without periodic points, with rotation number $\rho(f) = [a_{1},a_{2},\ldots]$ and convergents $\{\frac{p_{n}}{q_{n}}\}$. The following crucial and fundamental lemma is due to Polterovich and Sodin (\cite{PS}, lemma 2.3).
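The estimates below are organized along the denominators $q_n$ of the convergents of $\rho(f)$. As a purely illustrative numerical aside (it plays no role in the argument, and the helper name `convergents` is ours), the following Python sketch computes $p_n, q_n$ from the partial quotients via the recursions of Section 2, for the golden mean $\alpha = [1,1,1,\ldots] = (\sqrt{5}-1)/2$, whose $q_n$ are the Fibonacci numbers; it checks the best-approximation bound $|q_n\alpha - p_n| < 1/q_{n+1}$ and the elementary bound $q_i/q_{i+2} < 1/2$ (for $i \ge 1$) used later in the proof.

```python
from math import sqrt

def convergents(partial_quotients):
    """Convergents p_n/q_n of alpha = [a_1, a_2, ...] via the recursions
    p_{n+1} = a_{n+1} p_n + p_{n-1}, q_{n+1} = a_{n+1} q_n + q_{n-1},
    with p_0 = 0, p_1 = 1, q_0 = 1, q_1 = a_1."""
    a = partial_quotients
    p, q = [0, 1], [1, a[0]]
    for n in range(1, len(a)):
        p.append(a[n] * p[n] + p[n - 1])
        q.append(a[n] * q[n] + q[n - 1])
    return p, q

alpha = (sqrt(5) - 1) / 2          # golden mean, [1, 1, 1, ...]
p, q = convergents([1] * 15)

# the q_n are the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, ...
assert q[:7] == [1, 1, 2, 3, 5, 8, 13]

# best rational approximation: |q_n * alpha - p_n| < 1 / q_{n+1}
for n in range(14):
    assert abs(q[n] * alpha - p[n]) < 1 / q[n + 1]

# bound used in the proof of Theorem 1 (valid for i >= 1)
for i in range(1, 13):
    assert q[i] / q[i + 2] < 1 / 2
```

For a rotation number of bounded type such as this one, the $q_n$ grow exponentially, which is the regime in which the telescoping product below converges rapidly.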
\begin{lem} {\rm (Growth lemma)} Let $\lbrace A(k) \rbrace_{k\ge 0}$ be a sequence of real numbers such that $A(0)=0$ and for each $k \ge 1$ \[ 2A(k)-A(k-1)-A(k+1)\le C\exp (-A(k)),\quad C > 0. \] Then either for each $k \ge 0$ \[ A(k) \le 2\log \left ( k\sqrt{\frac{C}{2}}+1 \right ), \quad {\rm or} \quad \liminf_{k\rightarrow \infty}\frac{A(k)}{k}>0. \] \end{lem} \begin{lem} For $0 \le k \le a_{n+1}+1$ we set $A_n(k)=\log \lVert Df^{kq_n}\rVert$. Then there exists a positive constant $C=C(f)$ independent of $n$ such that for $1\le k \le a_{n+1}$, \[ 2A_n(k)-A_n(k-1)-A_n(k+1)\le CE_n \exp(-A_n(k)). \] \end{lem} \begin{proof}~~Let $x_0 \in S^1$ be a point at which $A_n(k) = \log Df^{kq_n}(x_0)$, and set $x_i=f^{iq_n}(x_0)$. Then we have \[ 2A_n(k)-A_n(k-1)-A_n(k+1) \] \[ \le 2\log Df^{kq_n}(x_0)-\log Df^{(k-1)q_n}(x_1) -\log Df^{(k+1)q_n}(x_{-1}) \] \[ \le |\log Df^{q_n}(x_0)-\log Df^{q_n}(x_{-1})| = |D\log Df^{q_n}(y_0)||I_n(x_{k-1})|\frac{|I_n(x_{-1})|}{|I_n(x_{k-1})|} , \] where $y_0\in I_n(x_{-1})$. Notice that the intervals $I_n(x_{-1}), I_n(x_{0}), I_n(x_{1}),\ldots, I_n(x_{a_{n+1}-1})$ are adjacent in this order and ${\displaystyle \cup_{i=0}^{a_{n+1}-1}} I_n(x_i) \subset I_{n-1}(f^{-q_{n-1}}(x_0))$. Since $y_0 \in I_n(x_{-1})$, we have for $1\le k \le a_{n+1}-1$, $I_n(x_{k-1})\subset I_{n-1}(f^{-q_{n-1}}(y_0))$. So by the Denjoy inequality (Lemma 1 (2)) we have \[ |I_n(x_{k-1})|\le C_1^2|I_{n-1}(y_0)|, \] and using Lemma 1 (1) we have \[ \frac{|I_n(x_{-1})|}{|I_n(x_{k-1})|} \le C_1\frac{1}{Df^{kq_n}(x_0)}. \] Hence we have \[ 2A_n(k)-A_n(k-1)-A_n(k+1) \] \[ \le C_1^3|D\log Df^{q_n}(y_0)||I_{n-1}(y_0)| \frac{1}{Df^{kq_n}(x_0)} \le C_1^3E_n\exp(-A_n(k)). \] \end{proof} We extend $A_n(k)$ for $k \ge a_{n+1}+2$ by $A_n(k)=A_n(a_{n+1}+1)$.
Then by Lemma 1 (2) and the definition of $E_{n}$ we have \[ 2A_n(a_{n+1}+1)-A_n(a_{n+1})-A_n(a_{n+1}+2) \] \[\le \log Df^{(a_{n+1}+1)q_n}(x_0)-\log Df^{a_{n+1}q_n}(x_0) \le \lVert \log Df^{q_n}\rVert \] \[ \le E_n \exp (-A_n(a_{n+1}+1))\lVert Df^{(a_{n+1}+1)q_n}\rVert \] \[ \le E_n \exp (-A_n(a_{n+1}+1))\lVert Df^{q_{n+1}}\rVert \lVert Df^{q_{n}}\rVert \lVert Df^{-q_{n-1}}\rVert \] \[ \le C_1^3E_n \exp (-A_n(a_{n+1}+1)). \] For $k \ge a_{n+1}+2$, $2A_n(k)-A_n(k-1)-A_n(k+1)=0$. Then, since $A_n(k)$ satisfies the condition of Lemma 3 with the constant $C_1^3E_n$ and obviously $ \lim_{k \rightarrow \infty}\frac{A_n(k)}{k}=0$, we have \[ \lVert Df^{kq_n}\rVert \le \left (\sqrt{\frac{CE_n}{2}}k+1 \right) ^2, ~~0\le k \le a_{n+1},\] where $C=C_1^3$. For $q_n\le l < q_{n+1}$, we define $0\le k_{i+1}\le a_{i+1}$ $(i=0,1,\ldots,n)$ inductively by \[ r_{n+1}=l,~~ r_{i+1}=k_{i+1}q_i+r_i,~~ 0\le r_i < q_i. \] Then, using $\frac{q_{i+1}}{q_i} \ge a_{i+1} \ge k_{i+1}$, \[ \frac{\lVert Df^l\rVert}{l^2}\le \frac{\prod^n_{i=0}\lVert Df^{k_{i+1}q_i}\rVert}{(k_{n+1}q_n)^2} \le \frac{\prod^n_{i=0}\left (\sqrt{\frac{CE_i}{2}} k_{i+1}+1\right )^2} {\left (k_{n+1}\prod^{n-1}_{i=0}\frac{q_{i+1}}{q_i}\right )^2} \] \[ \le \left ( \sqrt{\frac{CE_n}{2}}+1\right ) ^2 \prod^{n-1}_{i=0}\left( \sqrt{\frac{CE_i}{2}} +\frac{q_i}{q_{i+1}}\right)^2. \] Since $\frac{q_i}{q_{i+2}}<\frac{1}{2}$, for sufficiently small $E_i$ and $E_{i+1}$ we have \[ \left( \sqrt{\frac{CE_i}{2}}+\frac{q_i}{q_{i+1}}\right) \left( \sqrt{\frac{CE_{i+1}}{2}}+\frac{q_{i+1}}{q_{i+2}}\right) \le \frac{1}{2}. \] By Lemma 2 (1), $E_n \rightarrow 0$ as $n \rightarrow \infty$. Consequently we have \[ \lim_{l \rightarrow \infty}\frac{\lVert Df^l\rVert}{l^2}=0. \] For the case $\lVert Df^{-l} \rVert, l >0$, the argument is the same. \section{Proof of Theorem 2} \noindent Let $\{\theta_n\}_{n\ge 1}$ be any increasing unbounded sequence of positive real numbers such that $\theta_n=o(n^2)$ as $n \rightarrow \infty$.
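As a side remark, the convergent recursions recalled in section 2 are easy to experiment with numerically. The following Python sketch (ours, purely illustrative and not part of the proofs) generates the convergents $p_n/q_n$ from the partial quotients and checks the alternating best-approximation pattern for the golden rotation number $[1,1,1,\ldots]=\frac{\sqrt{5}-1}{2}$:

```python
from fractions import Fraction

def convergents(a):
    """Convergents p_n/q_n of the continued fraction [a_1, a_2, ...],
    computed via p_{n+1} = a_{n+1} p_n + p_{n-1},
                 q_{n+1} = a_{n+1} q_n + q_{n-1},
    with p_0 = 0, p_1 = 1, q_0 = 1, q_1 = a_1."""
    p_prev, p_cur = 0, 1        # p_0, p_1
    q_prev, q_cur = 1, a[0]     # q_0, q_1
    cv = [Fraction(p_cur, q_cur)]
    for an in a[1:]:
        p_prev, p_cur = p_cur, an * p_cur + p_prev
        q_prev, q_cur = q_cur, an * q_cur + q_prev
        cv.append(Fraction(p_cur, q_cur))
    return cv

# For [1,1,1,...] the convergents are ratios of consecutive Fibonacci
# numbers: 1/1, 1/2, 2/3, 3/5, 5/8, ...
cv = convergents([1] * 15)
alpha = (5 ** 0.5 - 1) / 2
# alternating pattern: p_2/q_2 < p_4/q_4 < alpha < p_3/q_3 < p_1/q_1
assert cv[1] < cv[3] < alpha < cv[2] < cv[0]
# successive convergents are Farey neighbours: p_{n+1}q_n - p_n q_{n+1} = +-1
assert all(b.numerator * a.denominator - a.numerator * b.denominator
           in (1, -1) for a, b in zip(cv, cv[1:]))
```

Exact rational arithmetic (`fractions.Fraction`) avoids the floating-point drift that would otherwise spoil the Farey-neighbour check once the denominators $q_n$ grow.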
We consider the two-parameter family of rational functions on the Riemann sphere $\hat{\Bbb{C}}= \Bbb{C}\cup \{\infty\}$, \[ J_{a,t}: \hat{\Bbb{C}} \rightarrow \hat{\Bbb{C}}, \ \ J_{a,t}(z)= \exp (2\pi i t)z^2\frac{z+a}{az+1} \] where $a \in \Bbb{R}, a > 3$ and $t \in \Bbb{R}/\Bbb{Z}$. For each $a, t$ the map $J_{a,t}$ leaves the unit circle $\partial \Bbb{D} = \{z\in \Bbb{C}; \lvert z \rvert =1\}$ invariant, $J_{a,t}(\partial \Bbb{D})= \partial \Bbb{D}$; moreover, the restriction of $J_{a,t}$ to $\partial \Bbb{D}$ is an orientation preserving diffeomorphism. The set of critical points of $J_{a,t}$ consists of four points, including $0$ and $\infty$, which are fixed by $J_{a,t}$. Notice that as $a \rightarrow \infty$, $J_{a,t}$ converges uniformly on any compact tubular neighbourhood of the unit circle in $\Bbb{C}\setminus \{0\}$ to the rotation $z \mapsto \exp (2\pi i t)z$. Put $\psi : \Bbb{R}/\Bbb{Z} \rightarrow \partial\Bbb{D}, \psi (x)= \exp (2\pi i x)$. Conjugating $J_{a,t}|\partial \Bbb{D}$ by $\psi$ we obtain the family of analytic circle diffeomorphisms $\{f_{a,t}\}$, \[ f_{a,t}: \Bbb{R}/\Bbb{Z} \rightarrow \Bbb{R}/\Bbb{Z},~~ f_{a,t}(x)=\psi^{-1}\circ J_{a,t}\circ \psi (x) =f_{a,0}(x)+t \mod{\Bbb{Z}}. \] Temporarily we fix $a > 3$ and abbreviate $f_{a,t}$ to $f_{t}$. The following properties of this family are standard. See e.g. \cite{MS} chapter I, section 4, where the Arnold family $x \mapsto x+ a\sin (2\pi x) +t$ is mainly dealt with, but the argument is valid for our family. Also see \cite{KH} chapter 11, section 1. The map $F: S^1 \rightarrow S^1 , t \mapsto \rho (f_t)$ is continuous and monotone increasing. We set \[ K=\{t\in S^1; \rho (f_t) ~~{\rm is~ irrational} \}. \] We denote by Cl($K$) the closure of $K$. $F|K$ is a one-to-one map. For $t \in K$ with $F(t)=\alpha$, we denote $f_{t}=\hat{f}_{\alpha}$. Notice that $f_{t}$ is never conjugate to a rational rotation.
Hence for $\frac{p}{q} \in \Bbb{Q}/\Bbb{Z}$, $F^{-1}(\frac{p}{q})$ is a closed interval, say, $[\frac{p}{q}_{-},\frac{p}{q}_{+}] $. Moreover, $F^{-1}|(\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}: (\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z} \rightarrow K$ is continuous and \[ \lim_{\alpha \rightarrow \frac{p}{q}-0}F^{-1}| (\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}(\alpha) = \frac{p}{q}_{-}, ~\lim_{\alpha \rightarrow \frac{p}{q}+0}F^{-1}|(\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}(\alpha) = \frac{p}{q}_{+}. \] Note that for every $\frac{p}{q}\in \Bbb{Q}/\Bbb{Z}$ and every $x \in S^{1}$, there exists $t \in [\frac{p}{q}_{-},\frac{p}{q}_{+}]$ such that $f_{t}^{q}(x)=x$. For $\frac{p}{q} \in \Bbb{Q}/\Bbb{Z}$, put $t_{*} = \frac{p}{q}_{-}$. The case $t_{*}= \frac{p}{q}_{+}$ is similar. Then the graph of $f^q_{t_{*}}$ touches the graph of the identity map from below; in particular, there exists $x_0 \in S^1$ such that \[ f^q_{t_{*}}(x_0)=x_0, ~~Df^q_{t_{*}}(x_0)=1. \] Then the following holds. \begin{lem} $D^2f^q_{t_{*}}(x_0) \ne 0.$ \end{lem} \begin{proof} By contradiction, we suppose $D^2f_{t_{*}}^q(x_0)=0$. Then in our case $D^3f_{t_{*}}^q(x_0)=0$, since otherwise $x_0$ would be a topologically transversal fixed point of $f^q_{t_{*}}$ and would persist under perturbation of $f_{t_{*}}$, which contradicts $t_{*} \in \mathrm{Cl}(K)\setminus K$. Set $z_{0} = \psi (x_{0}) \in \partial \Bbb{D}$. Since the order of tangency to the identity map is an invariant of $C^{\infty}$-conjugacy \cite{T}, we have for $J_{t_{*}}=J_{a,t_{*}}$ \[ J_{t_{*}}^{q}(z_{0})=z_{0}, \ DJ_{t_{*}}^{q}(z_{0})=1, \ D^{2}J_{t_{*}}^{q}(z_{0})= D^{3}J_{t_{*}}^{q}(z_{0})= 0. \] So $z_{0}$ is a parabolic fixed point of $J_{t_{*}}^{q}$ with multiplicity at least four. See \cite{M} chapter 7. By the Leau-Fatou flower theorem (\cite{M} th.7.2) $z_0$ has at least three basins of attraction for $J^q_{t_{*}}$. Let $B$ be one of the immediate attracting basins of $z_0$ for $J^q_{t_{*}}$.
Then $B$ must contain at least one critical point of $J_{t_{*}}^{q}$ (\cite{M} corollary 7.10). So each basin of the cycle $\{z_0,J_{t_{*}}(z_0),\ldots, J_{t_{*}}^{q-1}(z_{0})\}$ contains at least one critical point of $J_{t_{*}}$. But $J_{t_{*}}$ has exactly four critical points and two of them are fixed points. We obtain a contradiction. \end{proof} Hence, for example, by comparison with a fractional linear transformation (see also \cite{B} theorem 1 (A)), we can see that there exist $C >0$ and $\{x_l\}_{l\ge 1} \subset S^1 $ with $\lim_{l\rightarrow \infty}x_l=x_0$ such that \[ Df^{lq}_{t_{*}}(x_l)\ge Cl^2 , ~~ \mathrm{for~any}~l\in \Bbb{N}. \] Since $\theta_n=o(n^2)$, we have \noindent {\bf Corollary of Lemma 5.}\ \ {\it For sufficiently large} $l$, {\it we have} $\lVert Df^{lq}_{t_{*}} \rVert > \theta_{lq}$. \noindent {\bf Remark.} For each $k \in {\Bbb N}$ we set \[ U_k=\{ t\in \mathrm{Cl}(K); \mathrm{There~ exist}\ m\ge k ~ \mathrm{and}~ x\in S^1 \ \mathrm{such\ that}\ Df^m_t(x) > m\sqrt{\theta_m} \}. \] Obviously $U_k$ is an open set in Cl($K$). By the corollary and the denseness in Cl($K$) of preimages of rational numbers under $F$, $U_k$ is dense in Cl($K$). So the following set is a residual subset of Cl($K$): \[ \{t \in \mathrm{Cl}(K); \limsup_{n\rightarrow \infty} \frac{\Gamma_n(f_{t})}{\theta_n}=\infty\}. \] We seek the desired diffeomorphism in this family $\{ f_t \}$ by specifying its rotation number $\alpha_{\infty} = \rho(f_{t_{\infty}}) \in (\Bbb{R}\setminus \Bbb{Q})/\Bbb{Z}$. We will define an increasing sequence of even numbers $0 < n_1 < n_2 < n_3< \cdots$ and a sequence of positive integers $A_1,A_2,A_3,\ldots$ inductively. The continued fraction expansion of $\alpha_{\infty}$ is the following.
\[ \alpha_{\infty} =[a_1(\alpha_{\infty}),a_2(\alpha_{\infty}),a_3(\alpha_{\infty}),\ldots] \] \[ = [1,1,\ldots,1,A_{1},1,\ldots,1,A_{2},1,\ldots,1,A_{k},1,\ldots] \] where $a_i(\alpha_{\infty})=A_k$ if $i=n_k$, and $a_i(\alpha_{\infty})=1$ if $i \ne n_k$ for every $k$. For $m, A \ge 1, m, A \in \Bbb{N}$, we set \[ \alpha_{m}^{A} = [a_1(\alpha_{m}^{A}),a_2(\alpha_{m}^{A}),a_{3}(\alpha_{m}^{A}),\ldots] \] \[ = [1,1,\ldots,1,A_{1},1,\ldots,1,A_{m-1},1,\ldots,1,A,1,1,1,\ldots] \] where $a_i(\alpha_{m}^{A})=A_k$ if $i= n_k \le n_{m-1}$, $a_i(\alpha_{m}^{A})=A$ if $i=n_m$, and $a_i(\alpha_{m}^{A})=1$ otherwise. Set $\alpha_m = \alpha_{m}^{A_m}$. Notice that $\alpha_{m}^{A}|[1,n_{m}-1] = \alpha_{\infty}|[1,n_{m}-1]$ and that $\alpha_{m}^{A}$ is of bounded type. Unless otherwise stated we write $p_{n}, q_{n}$ for $p_{n}(\alpha_{\infty}), q_{n}(\alpha_{\infty})$. \begin{lem} There exist a sequence of even numbers $0 < n_1 < n_2 < n_3< \cdots$ and a sequence of positive integers $A_1,A_2,A_3,\ldots$ such that for each $m \ge 1$ the following properties hold. $(1)$ For any $j \in \Bbb{Z}$ with $q_{n_{m}-1} \le |j| \le A_mq_{n_{m}-1}$, $\lVert D\hat{f}_{\alpha_{m}}^{j} \rVert < \theta_{|j|}$. $(2)$ There exists $j_{m}\in \Bbb{Z}$ such that \[ q_{n_{m}-1} \le |j_{m}| \le (A_m+1)q_{n_{m}-1} ,\ \lVert D\hat{f}_{\alpha_{m}^{A_{m}+1}}^{j_{m}} \rVert \ge \theta_{|j_{m}|}. \] $(3)$ For any $t \in F^{-1}(\alpha)$ with $\alpha|[1,n_{m+1}-1] = \alpha_{m}|[1,n_{m+1}-1]$ and any $j \in \Bbb{Z}$ with $\lvert j \rvert \le q_{n_{m}}$, \[ \lVert Df_{t}^{j} \rVert-1 \le \lVert D\hat{f}_{\alpha_{m}}^{j}\rVert \le \lVert Df_{t}^{j}\rVert +1 . \] \end{lem} \begin{proof} Let $\alpha_0 = [1,1,1,\ldots] = \frac{\sqrt{5}-1}{2}$. Since $\alpha_0$ is of bounded type, by the Corollary of Lemma 2 (2) there exists $C_0 > 0$ such that for any $l \in \Bbb{Z}$, $\lVert D\hat{f}^{l}_{\alpha_{0}}\rVert \le C_{0}$.
Let $n_1$ be a sufficiently large even number such that if $|i| \ge q_{n_1-1}(\alpha_0)$ then $\theta_{|i|} \ge C_0$. Let $\beta_1 = \alpha_0|[1,n_{1}-1] = \frac{p_{n_1-1}(\alpha_0)}{q_{n_1-1}(\alpha_0)} = [1,1,\ldots,1]=[1,1,\ldots,1,\infty] \in \Bbb{Q}/\Bbb{Z}$. Then by the Corollary of Lemma 5 there exists $d \in \Bbb{N}$ such that $\lVert Df_{\beta_{1-}}^{dq_{n_1-1}}\rVert > \theta_{dq_{n_1-1}}$, where $F^{-1}(\beta_{1})=[\beta_{1-},\beta_{1+}]$. Since $\alpha_{1}^{A} \rightarrow \beta_1-0$ as $A \rightarrow \infty$, $F^{-1}(\alpha_{1}^{A}) \rightarrow \beta_{1-}$ as $A \rightarrow \infty$. So for sufficiently large $A$ we have $\lVert D\hat{f}_{\alpha_{1}^{A}}^{dq_{n_1-1}}\rVert > \theta_{dq_{n_1-1}}$. Hence the following is well defined. \[ A_{1}=\max\{ A; \mathrm{for\ any\ } j \in \Bbb{Z}\ \mathrm{with}\ q_{n_1-1} \le |j| \le Aq_{n_1-1},\ \lVert D\hat{f}_{\alpha_{1}^{A}}^{j}\rVert < \theta_{\lvert j\rvert}\}. \] Therefore there exists $j_{1} \in \Bbb{Z}$ such that \[ q_{n_1-1} \le |j_{1}| \le (A_{1}+1)q_{n_1-1}, \ \lVert D\hat{f}_{\alpha_{1}^{A_{1}+1}}^{j_{1}}\rVert \ge \theta_{\lvert j_{1}\rvert}. \] Suppose we have $n_{1}, n_{2},\ldots,n_{m-1}$ and $A_{1},A_{2},\ldots,A_{m-1}$ satisfying the conditions of the lemma. Notice that $\alpha_{m-1}$ is of bounded type and that (3) is satisfied merely by requiring that $n_{m}-n_{m-1}$ is sufficiently large. So by exactly the same procedure as above we choose a sufficiently large even number $n_{m}$ and set \[ A_{m}=\max\{ A; \mathrm{for\ any\ } j \in \Bbb{Z}\ \mathrm{with}\ q_{n_m-1} \le |j| \le Aq_{n_m-1},\ \lVert D\hat{f}_{\alpha_{m}^{A}}^{j}\rVert < \theta_{\lvert j\rvert}\}. \] \end{proof} \begin{lem} Let $\beta_{0}, \beta_{1}, \beta_{2} \in \Bbb{Q}/\Bbb{Z}$ be \[ \beta_{i} = [b_{1}(\beta_{i}),b_{2}(\beta_{i}),\ldots,b_{2n}(\beta_{i})] = \frac{p_{2n}(\beta_{i})}{q_{2n}(\beta_{i})}, \ i=0,1,2 \] such that $\beta_{0}|[1,2n-1]=\beta_{1}|[1,2n-1]=\beta_{2}|[1,2n-1] $ and for some $B \ge 1, B \in \Bbb{N},\ b_{2n}(\beta_{i})=B+i$.
Then for any $s_{1}, s_{2} \in F^{-1}((\beta_{0},\beta_{2}))$ and any $x \in S^1$ we have \[ \sum_{i=1}^{q_{2n}(\beta_{2})}\lvert ( f_{s_{1}}^{i}(x), f_{s_{2}}^{i}(x) ) \rvert \le 7. \] \end{lem} \begin{proof} The argument of the proof is the same as \'Swi\c atek's proof of lemma 3 in \cite{Sw}. We recall the notion of a Farey interval. A Farey interval is an interval $I = (\frac{p}{q},\frac{p'}{q'}), p,p',q,q'\in \Bbb{Z}, q,q' > 0$ with $pq'-p'q=1$. Then the following holds. $(*)$ All rationals in $I$ have the form \ $\displaystyle \frac{kp+lp'}{kq+lq'}, \ k, l \ge 1, k,l \in \Bbb{N}$. Since $q_{2n}(\beta_{i})=(B+i)q_{2n-1}(\beta_{0})+q_{2n-2}(\beta_{0})$ and $p_{2n}(\beta_{i})=(B+i)p_{2n-1}(\beta_{0})+p_{2n-2}(\beta_{0})$, the two intervals $(\beta_{0},\beta_{1}), (\beta_{1},\beta_{2})$ are Farey intervals, $q_{2n}(\beta_{0})<q_{2n}(\beta_{1})<q_{2n}(\beta_{2})$, and by $(*)$ the cardinality of the set of rationals in $(\beta_{0},\beta_{2})$ with denominator less than $2q_{2n}(\beta_{2})$ is at most six (three if $B \ge 3$). For given $x \in S^1$ we define \[ t_{1}=\sup\{t \in [\beta_{0-},\beta_{0+}]; f_{t}^{q_{2n}(\beta_{0})}(x)=x\}, \] \[ t_{2}=\inf\{t \in [\beta_{2-},\beta_{2+}]; f_{t}^{q_{2n}(\beta_{2})}(x)=x\}. \] We define a diffeomorphism $G: S^{1}\times [t_{1},t_{2}] \rightarrow S^{1}\times [t_{1},t_{2}]$ by $G(y, t)=(f_{t}(y),t)$. Then we have \[ DG^{i}(y,t)= \left( \begin{array}{@{\,}cc@{\,}} Df_{t}^{i}(y)&\frac{d}{dt}(f_{t}^{i}(y))\\ 0&1 \end{array} \right) = \left( \begin{array}{@{\,}cc@{\,}} Df_{t}^{i}(y)&1+\sum_{k=1}^{i-1}Df_{t}^{i-k}(f_{t}^{k}(y))\\ 0&1 \end{array} \right). \] So $G$ monotonically twists the $S^1$-direction to the right. More precisely, let $\tilde{G}: \Bbb{R}\times [t_{1},t_{2}] \rightarrow \Bbb{R}\times [t_{1},t_{2}], \tilde{G}(\tilde{y},t)= (\tilde{f_{t}}(\tilde{y}),t)$ be a lift of $G$; then for any $i \ge 1$ the slope of the image of a vertical segment $\{ \tilde{y}\} \times [t_{1},t_{2}]$ under $\tilde{G}^{i}$ is everywhere positive and finite.
Let $P : S^{1}\times [t_{1},t_{2}] \rightarrow S^1$ be the projection onto the first coordinate. By contradiction, we assume $\sum_{i=1}^{q_{2n}(\beta_{2})} \lvert ( f_{s_{1}}^{i}(x), f_{s_{2}}^{i}(x) ) \rvert > 7$. We consider the interval $\gamma = \{x\}\times [t_{1},t_{2}]$ and its images by $G^{i}$. Since $[s_{1},s_{2}] \subset (t_{1},t_{2})$, the intervals $P(G^{i}(\gamma)), 1 \le i \le q_{2n}(\beta_{2})$, overlap somewhere with multiplicity at least eight. Then, by the twist condition of $G$, there exist distinct natural numbers $i_{k}$ ($0 \le k \le 7, k \in \Bbb{Z}$) with $1 \le i_{k} \le q_{2n}(\beta_{2})$ such that for each $k$ ($1 \le k \le 7$), \[ (\{f_{t_{2}}^{i_{0}}(x)\} \times [t_{1},t_{2}]) \cap G^{i_{k}}(\gamma)\ne \emptyset. \] Moreover, using the preservation of order by $\tilde{f_{t}}: \Bbb{R}\times \{t\} \rightarrow \Bbb{R}\times \{t\}$ and the twist condition of $G$, we can see that for any $j \ge 0$, \[ (\{f_{t_{2}}^{i_{0}+j}(x)\} \times [t_{1},t_{2}]) \cap G^{i_{k}+j}(\gamma)\ne \emptyset. \] In particular, for $j=q_{2n}(\beta_{2})-i_{0}$, by the definition of $t_{2}$ we have \[ \gamma \cap G^{i_{k}+q_{2n}(\beta_{2})-i_{0}}(\gamma)\ne \emptyset. \] This implies that there exists a parameter value $u_{k} \in (t_{1},t_{2})$ such that $f_{u_{k}}^{q_{2n}(\beta_{2})+i_{k}-i_{0}}(x)=x$. For each $k$ ($1 \le k \le 7$) the denominator of $\rho(f_{u_{k}})$, which divides $q_{2n}(\beta_{2})+i_{k}-i_{0}$, is less than $2q_{2n}(\beta_{2})$. This is a contradiction. \end{proof} \noindent {\it Proof of Theorem 2.} \noindent $\star$ {\bf Lower bound.} Let $j_{m} \in \Bbb{Z}$ be as in Lemma 6 (2). Then $\lvert j_{m}\rvert \le (A_{m}+1)q_{n_{m}-1} < q_{n_{m}}(\alpha_{m}^{A_{m}+2})$. We assume $j_{m} > 0$.
Then since the three rational numbers \[ \alpha_{m}^{A_{m}}|[1,n_{m}], \alpha_{m}^{A_{m}+1}|[1,n_{m}], \alpha_{m}^{A_{m}+2}|[1,n_{m}] \] satisfy the condition of Lemma 7 and \[ \alpha_{\infty} \in (\alpha_{m}^{A_{m}}|[1,n_{m}], \alpha_{m}^{A_{m}+1}|[1,n_{m}]), \] \[ \alpha_{m}^{A_{m}+1} \in (\alpha_{m}^{A_{m}+1}|[1,n_{m}], \alpha_{m}^{A_{m}+2}|[1,n_{m}]), \] we have for any $x \in S^{1}$ \[ \lvert \log D\hat{f}_{\alpha_{\infty}}^{j_{m}}(x) - \log D\hat{f}_{\alpha_{m}^{A_{m}+1}}^{j_{m}}(x) \rvert \] \[ = \left \lvert \sum_{i=1}^{j_{m}-1} \log Df_{0}(\hat{f}_{\alpha_{\infty}}^{i}(x)) - \sum_{i=1}^{j_{m}-1}\log Df_{0}(\hat{f}_{\alpha_{m}^{A_{m}+1}}^{i}(x)) \right \rvert \] \[ \le \lVert D\log Df_{0} \rVert \sum_{i=1}^{j_{m}-1}\ \lvert ( \hat{f}_{\alpha_{\infty}}^{i}(x), \hat{f}_{\alpha_{m}^{A_{m}+1}}^{i}(x) ) \rvert \le 7\lVert D\log Df_{0} \rVert . \] Since there exists $x_{*} \in S^1$ such that $\lvert D\hat{f}_{\alpha_{m}^{A_{m}+1}}^{j_{m}}(x_{*})\rvert \ge \theta_{j_{m}}$, we have \[ \frac{\lVert D\hat{f}_{\alpha_{\infty}}^{j_{m}}\rVert}{\theta_{j_{m}}} \ge \frac{\lvert D\hat{f}_{\alpha_{\infty}}^{j_{m}}(x_{*})\rvert} {\lvert D\hat{f}_{\alpha_{m}^{A_{m}+1}}^{j_{m}}(x_{*})\rvert } \ge \exp (-7\lVert D\log Df_{0} \rVert). \] For the case $j_{m} < 0$, using the chain rule $D\hat{f}_{\alpha}^{j_{m}}(x) = (D\hat{f}_{\alpha}^{-j_{m}} (\hat{f}_{\alpha}^{j_{m}}(x)))^{-1}$, we can obtain the same estimates. As stated above, by making the parameter $a$ sufficiently large we can assume that $\lVert D\log Df_{0} \rVert = \lVert D\log Df_{a,0} \rVert$ is smaller than any given positive value. \noindent $\star$ {\bf Upper bound.} Let $l \in \Bbb{Z}$ with $q_{n} \le l < q_{n+1}$. The case $q_{n} \le -l < q_{n+1}$ is similar. Let $n_{m} =\max \{n_{i}; n_{i}\le n\}$.
As in the proof of Theorem 1 we expand $l$ as follows, \[ l = k_{n+1}q_{n} + \cdots + k_{n_{m}+1}q_{n_{m}}+cq_{n_{m}-1}+r, \] where $0 \le k_{i} \le a_{i}(\alpha_{\infty})=1$ ($n_{m}+1 \le i \le n+1$) and we choose $c \in \{-1,0,1\}$ so that $q_{n_{m}-1} \le r \le A_{m}q_{n_{m}-1}$. By Lemma 2 (2) and Lemma 6 (1), (3) we have \[ \lVert D\hat{f}_{\alpha_{\infty}}^{l}\rVert \le \lVert D\hat{f}_{\alpha_{\infty}}^{q_{n}}\rVert \cdots \lVert D\hat{f}_{\alpha_{\infty}}^{cq_{n_{m}-1}}\rVert \lVert D\hat{f}_{\alpha_{\infty}}^{r}\rVert \] \[ \le \exp(C\sum_{i=n_{m}-1}^{n}\lambda^{i})(1+\lVert D\hat{f}_{\alpha_{m}}^{r}\rVert) \le \exp(C\sum_{i=n_{m}-1}^{n}\lambda^{i})(1+\theta_{r}). \] Therefore we have \[ \limsup_{l \rightarrow \infty} \frac{\lVert D\hat{f}_{\alpha_{\infty}}^{l}\rVert}{\theta_{l}} \le \limsup_{l \rightarrow \infty} \frac{\exp(C\sum_{i=n_{m}-1}^{n}\lambda^{i})(1+\theta_{r})}{\theta_{l}} \le 1. \] \end{document}
\begin{document} \pagestyle{headings} \title{A note on the existence of solutions to Markovian superquadratic BSDEs with an unbounded terminal condition} \author{ Federica Masiero\\ Dipartimento di Matematica e Applicazioni, Università di Milano Bicocca\\ via Cozzi 53, 20125 Milano, Italy\\ e-mail: [email protected]\\ \\ Adrien Richou\\ Univ. Bordeaux, IMB, UMR 5251, F-33400 Talence, France.\\ CNRS, IMB, UMR 5251, F-33400 Talence, France.\\ INRIA, \'Equipe ALEA, F-33400 Talence, France.\\ e-mail: [email protected]} \selectlanguage{english} \maketitle \begin{abstract} In \cite{Richou-12}, the author proved the existence and the uniqueness of solutions to Markovian superquadratic BSDEs with an unbounded terminal condition when the generator and the terminal condition are locally Lipschitz. In this paper, we prove that the existence result remains true for these BSDEs when the regularity assumptions on the terminal condition are weakened. \end{abstract} \selectlanguage{english} \section{Introduction} Since the early nineties and the work of Pardoux and Peng \cite{Pardoux-Peng-90}, there has been an increasing interest in backward stochastic differential equations (BSDEs for short) because of their wide range of applications. A particular class of BSDEs has been studied in recent years: BSDEs with generators of quadratic growth with respect to the variable $z$ (quadratic BSDEs for short). See e.g. \cite{Kobylanski-00,Briand-Hu-06,Delbaen-Hu-Richou-09} for existence and uniqueness results and \cite{Rouge-ElKaroui-00,Hu-Imkeller-Muller-05,Mania-Schweizer-05} for applications. Naturally, one may also wonder what happens when the generator has superquadratic growth with respect to the variable $z$. To the best of our knowledge, the case of superquadratic BSDEs was first investigated in the recent paper \cite{Delbaen-Hu-Bao-09}.
In this article, the authors consider superquadratic BSDEs when the terminal condition is bounded and the generator is convex in $z$. First, they show that in general the problem is ill-posed: given a superquadratic generator, there exists a bounded terminal condition such that the associated BSDE does not admit any bounded solution and, on the other hand, if the BSDE admits a bounded solution, there exist infinitely many bounded solutions for this BSDE. In the same paper, the authors also show that the problem becomes well-posed in a Markovian framework: when the terminal condition and the generator are deterministic functions of a forward SDE, we have an existence result. More precisely, let us consider $(X,Y,Z)$ the solution to the (decoupled) forward backward system \begin{eqnarray*} X_t &=& x +\int_0^t b(s,X_s)ds+\int_0^t \sigma(s) dW_s,\\ Y_t &=& g(X_T) +\int_t^T f(s,X_s,Y_s,Z_s)ds-\int_t^T Z_s dW_s, \end{eqnarray*} with growth assumptions \begin{eqnarray*} \abs{f(t,x,y,z)} &\leqslant& C(1+\abs{x}^{p_f}+\abs{y}+\abs{z}^{l+1}), \quad l>1,\\ \abs{g(x)} &\leqslant& C(1+\abs{x}^{p_g}). \end{eqnarray*} In \cite{Delbaen-Hu-Bao-09}, the authors obtain an existence result by assuming that $p_g=p_f=0$, $f$ is a convex function that depends only on $z$ and $g$ is a lower (or upper) semi-continuous function. As in the quadratic case, it is possible to show that the boundedness of the terminal condition is too strong an assumption: in \cite{Richou-12}, the author shows an existence and uniqueness result by assuming that $p_g \leqslant 1+1/l$, $p_f \leqslant 1+1/l$, and that $f$ and $g$ are locally Lipschitz functions with respect to $x$ and $z$. When we consider this result, two questions arise: \begin{itemize} \item Could we have an existence result when $p_g$ or $p_f$ is greater than $1+1/l$ ?
\item Could we have an existence result when $f$ or $g$ is less smooth with respect to $x$ or $z$, that is to say, is it possible to have assumptions on the growth of $g$ and $f$ but not on the growth of their derivatives with respect to $x$ and $z$ ? \end{itemize} For the first question, the answer is clearly ``no'' in the quadratic case: see e.g. \cite{Delbaen-Hu-Richou-09}. In the superquadratic case, the authors of \cite{Gladkov-Guedda-Kersner-08} have obtained the same limitation on the growth of the initial condition for the so-called generalized deterministic KPZ equation $u_t=u_{xx}+\lambda\abs{u_x}^q$, and they show that this bound is sharp for power-type initial conditions. So, it seems that the answer to the first question is also ``no'' in the superquadratic case. For the second question, the answer is clearly ``yes'' in the quadratic case. Indeed, a smoothness assumption on $f$ is required for uniqueness results (see e.g. \cite{Briand-Hu-08,Delbaen-Hu-Richou-09}) but not for existence results (see e.g. \cite{Briand-Hu-08,Barrieu-ElKaroui-11}). In the superquadratic case, the authors of \cite{Delbaen-Hu-Bao-09} show an existence result when $g$ is only lower (or upper) semi-continuous but also bounded. Nevertheless, $f(z)$ is assumed to be convex, which implies that it is a locally Lipschitz function. The aim of this note is to combine results of the articles \cite{Delbaen-Hu-Bao-09,Richou-12} to obtain an existence result when the terminal condition is only lower (or upper) semi-continuous and unbounded. Let us remark that we answer the second question only partially because we do not relax the smoothness assumptions on $f$. For completeness, in the recent paper \cite{Cheridito-Stadje-12}, Cheridito and Stadje show an existence and uniqueness result for superquadratic BSDEs in a Lipschitz or bounded ``path-dependent'' framework: the terminal condition and the generator are Lipschitz or bounded functions of Brownian motion paths.
To the best of our knowledge, \cite{Delbaen-Hu-Bao-09,Richou-12,Cheridito-Stadje-12} are the only papers that deal with superquadratic BSDEs. The paper is organized as follows. In section 2 we obtain some general a priori estimates on $Y$ and $Z$ for Markovian superquadratic BSDEs whereas section 3 is devoted to the existence result described before. \paragraph{Notations} Throughout this paper, $(W_t)_{t \geqslant 0}$ will denote a $d$-dimensional Brownian motion, defined on a probability space $(\Omega,\mathcal{F}, \mathbb P)$. For $t \geqslant 0$, let $\mathcal{F}_t$ denote the $\sigma$-algebra $\sigma(W_s; 0\leqslant s\leqslant t)$, augmented with the $\mathbb P$-null sets of $\mathcal{F}$. The Euclidean norm on $\mathbb R^d$ will be denoted by $|.|$. The operator norm induced by $|.|$ on the space of linear operators is also denoted by $|.|$. The notation $\mathbb{E}_t$ stands for the conditional expectation given $\mathcal{F}_t$. For $p \geqslant 2$, $m \in \mathbb N$, we denote further \begin{itemize} \item $\mathcal{S}^p$ the space of real-valued, adapted and càdlàg processes $(Y_t)_{t \in [0,T]}$ normed by $\norm{Y}_{\mathcal{S}^p}=\mathbb E [(\sup_{t \in [0,T]} \abs{Y_t})^p]^{1/p}$; \item $\mathcal{M}^p(\mathbb R^m)$, or $\mathcal{M}^p$, the space of all progressively measurable processes $(Z_t)_{t \in [0,T]}$ with values in $\mathbb R^m$ normed by $\norm{Z}_{\mathcal{M}^p}=\mathbb E[(\int_0^T \abs{Z_s}^2ds)^{p/2}]^{1/p}$. \end{itemize} In the following, we keep the same notation $C$ for all finite, nonnegative constants that appear in our computations. In this paper we consider $X$ the solution to the SDE \begin{equation} \label{EDS} X_t=x+\int_0^t b(s,X_s)ds+\int_0^t \sigma(s) dW_s, \end{equation} and $(Y,Z) \in \mathcal{S}^2\times \mathcal{M}^2$ the solution to the Markovian BSDE \begin{equation} \label{EDSR} Y_t=g(X_T)+\int_t^T f(s,X_s,Y_s,Z_s)ds-\int_t^T Z_sdW_s. 
\end{equation} By a solution to the BSDE (\ref{EDSR}) we mean a pair $(Y_t,Z_t)_{t \in [0,T]}$ of predictable processes with values in $\mathbb R \times \mathbb R^{1 \times d}$ such that $\mathbb P$-a.s., $t \mapsto Y_t$ is continuous, $t \mapsto Z_t$ belongs to $L^2([0,T])$, $t \mapsto f(t,X_t,Y_t,Z_t)$ belongs to $L^1([0,T])$ and the equation (\ref{EDSR}) is verified. \section{Some a priori estimates on $Y$ and $Z$} For the SDE (\ref{EDS}) we use a standard assumption. \paragraph{Assumption (F.1).} Let $b : [0,T] \times \mathbb{R}^d \rightarrow \mathbb{R}^d$ and $\sigma : [0,T] \rightarrow \mathbb{R}^{d \times d}$ be continuous functions and let us assume that there exists $K_b \geqslant 0$ such that: \begin{enumerate}[(a)] \item $\forall t \in [0,T]$, $\abs{b(t,0)} \leqslant C$, \item $\forall t \in [0,T]$, $\forall (x,x') \in \mathbb{R}^d \times \mathbb{R}^d$, $\abs{b(t,x)-b(t,x')} \leqslant K_b \abs{x-x'}.$ \end{enumerate} Let us now consider the following assumptions on the generator and on the terminal condition of the BSDE (\ref{EDSR}).
\paragraph{Assumption (B.1).} Let $f: [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^{1\times d} \rightarrow \mathbb{R}$ be a continuous function and let us assume that there exist five constants, $l > 1$, $0 \leqslant r_f < \frac{1}{l}$, $\beta \geqslant 0$, $\gamma \geqslant 0$ and $\delta \geqslant 0$, such that: \begin{enumerate}[(a)] \item for each $(t,x,y,y',z) \in [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}^{1\times d}$, $$ \abs{f(t,x,y,z)-f(t,x,y',z)} \leqslant \delta \abs{y-y'};$$ \item for each $(t,x,y,z,z') \in [0,T] \times \mathbb R^d \times \mathbb R \times \mathbb R^{1\times d} \times \mathbb R^{1\times d}$, $$\abs{ f(t,x,y,z)-f(t,x,y,z')} \leqslant \left(C+\frac{\gamma}{2}(\abs{z}^l+\abs{z'}^l)\right)\abs{z-z'};$$ \item for each $(t,x,x',y,z) \in [0,T] \times \mathbb R^d \times \mathbb R^d \times \mathbb R \times \mathbb R^{1\times d}$, $$\abs{ f(t,x,y,z)-f(t,x',y,z)} \leqslant \left(C+\frac{\beta}{2}(\abs{x}^{r_f}+\abs{x'}^{r_f})\right)\abs{x-x'}.$$ \end{enumerate} \paragraph{Assumption (TC.1).} Let $g:\mathbb{R}^d \rightarrow \mathbb{R}$ be a continuous function and let us assume that there exist $0 \leqslant r_g < \frac{1}{l}$ and $\alpha \geqslant 0$ such that: for each $(x,x') \in \mathbb R^d \times \mathbb R^d$, $$\abs{g(x)-g(x')} \leqslant \left(C+\frac{\alpha}{2}(\abs{x}^{r_g}+\abs{x'}^{r_g})\right)\abs{x-x'}.$$ We also use more general growth assumptions that are more natural for existence results.
\paragraph{Assumptions (B.2).} Let $f: [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^{1\times d} \rightarrow \mathbb{R}$ be a continuous function and let us assume that there exist constants $l > 1$, $0 \leqslant r_f < \frac{1}{l}$, $\bar{\beta} \geqslant 0$, $\bar{\gamma} \geqslant 0$, $\bar{\delta} \geqslant 0$, $0\leqslant \eta < l+1$, $\varepsilon>0$ such that one of these inequalities holds, for all $(t,x,y,z) \in [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^{1\times d}$: \begin{enumerate}[(a)] \item $\abs{f(t,x,y,z)} \leqslant C+\bar{\beta}\abs{x}^{r_f+1}+\bar{\delta}\abs{y}+\bar{\gamma} \abs{z}^{l+1}$, \item $-C-\bar{\beta} \abs{x}^{r_f+1}-\bar{\delta}\abs{y}-\bar{\gamma} \abs{z}^{\eta} \leqslant f(t,x,y,z) \leqslant C+\bar{\beta}\abs{x}^{r_f+1}+\bar{\delta}\abs{y}+\bar{\gamma} \abs{z}^{l+1}$, \item $-C-\bar{\beta} \abs{x}^{r_f+1}-\bar{\delta}\abs{y}+\varepsilon \abs{z}^{l+1} \leqslant f(t,x,y,z) \leqslant C+\bar{\beta}\abs{x}^{r_f+1}+\bar{\delta}\abs{y}+\bar{\gamma} \abs{z}^{l+1}$. \end{enumerate} \paragraph{Assumption (TC.2).} Let $g:\mathbb{R}^d \rightarrow \mathbb{R}$ be a lower semi-continuous function and let us assume that there exist $0 \leqslant p_g < 1+1/l$ and $\bar{\alpha} \geqslant 0$ such that for each $x \in \mathbb{R}^d$, $$\abs{g(x)} \leqslant C+\bar{\alpha} \abs{x}^{p_g}.$$ \begin{rem} The following relations hold true: \begin{itemize} \item (B.2)(c) $\Rightarrow$ (B.2)(b) $\Rightarrow$ (B.2)(a). \item (B.1) $\Rightarrow$ (B.2)(a). \item (TC.1) $\Rightarrow$ (TC.2) with $p_g=r_g+1$. \item We only consider superquadratic BSDEs, so $l > 1$; $l=1$ corresponds to the quadratic case. \end{itemize} \end{rem} Firstly, let us recall the existence and uniqueness result shown in \cite{Richou-12}. \begin{prop} \label{existence unicite localement lipschitz} We assume that (F.1), (B.1) and (TC.1) hold.
There exists a solution $(Y,Z)$ of the Markovian BSDE (\ref{EDSR}) in $\mathcal{S}^2\times \mathcal{M}^2$ such that \begin{equation} \label{estimee croissance Z} \abs{Z_t} \leqslant A+B(\abs{X_t}^{r_g}+(T-t)\abs{X_t}^{r_f}), \quad \forall t \in [0,T]. \end{equation} Moreover, this solution is unique amongst solutions $(Y,Z)$ such that \begin{itemize} \item $Y \in \mathcal{S}^2$, \item there exists $\eta >0$ such that $$\mathbb{E} \left[e^{(\frac{1}{2}+\eta)\frac{\gamma^2}{4}\int_0^T \abs{Z_s}^{2l} ds}\right] < +\infty.$$ \end{itemize} \end{prop} \begin{rem} To be precise, in Proposition 2.2 of the article \cite{Richou-12} the author shows the estimate $$\abs{Z_t} \leqslant A+B\abs{X_t}^{r_g \vee r_f}, \quad \forall t \in [0,T],$$ but it is rather easy to redo the proof to obtain the estimate (\ref{estimee croissance Z}) given in Proposition \ref{existence unicite localement lipschitz}. \end{rem} Such a result allows us to obtain a comparison result. \begin{prop} \label{comparison result} We assume that (F.1) holds. Let $f_1$, $f_2$ be two generators and $g_1$, $g_2$ be two terminal conditions such that (B.1) and (TC.1) hold. Let $(Y^1,Z^1)$ and $(Y^2,Z^2)$ be the associated solutions given by Proposition \ref{existence unicite localement lipschitz}. We assume that $g_1 \leqslant g_2$ and $f_1 \leqslant f_2$. Then we have that $Y^1 \leqslant Y^2$ almost surely. \end{prop} \paragraph*{Proof of the proposition} The proof is the same as the classical one that can be found in \cite{ElKaroui-Peng-Quenez-97} for example. Let us set $\delta Y:=Y^1-Y^2$ and $\delta Z:=Z^1-Z^2$.
The usual linearization trick gives us $$\delta Y_t=g_1(X_T)-g_2(X_T)+\int_t^T f_1(s,X_s,Y^1_s,Z^1_s)-f_2(s,X_s,Y^1_s,Z^1_s)+\delta Y_s U_s +\delta Z_s V_s ds -\int_t^T \delta Z_s dW_s,$$ with $\abs{U_s}\leqslant \delta$ and $$\abs{V_s} \leqslant C+\frac{\gamma}{2}\left( \abs{Z^1_s}^l +\abs{Z^2_s}^l\right) \leqslant C(1+\abs{X_s}^{(r_g\vee r_f)l}).$$ Since $(r_g\vee r_f)l<1$, Novikov's condition is fulfilled and we are allowed to apply Girsanov's transformation: \begin{eqnarray*} \delta Y_t &=& \mathbb E^{\mathbb Q}_t \left[ e^{\int_t^T U_udu}(g_1(X_T)-g_2(X_T))+\int_t^T e^{\int_t^s U_udu}(f_1(s,X_s,Y^1_s,Z^1_s)-f_2(s,X_s,Y^1_s,Z^1_s))ds \right]\\ &\leqslant& 0, \end{eqnarray*} with $$\frac{d\mathbb Q}{d\mathbb P}=\exp \left( \int_0^T V_s dW_s-\frac{1}{2} \int_0^T \abs{V_s}^2ds \right).$$ \cqfd Now we are ready to prove estimates on $Y$ and $Z$. \begin{prop} \label{estimation Y} Let us assume that (F.1), (B.1), (B.2), (TC.1) and (TC.2) hold. Let $(Y,Z)$ be the solution of the BSDE (\ref{EDSR}) given by Proposition \ref{existence unicite localement lipschitz}. Then we have, for all $t \in [0,T]$, $$\abs{Y_t} \leqslant C(1+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1})$$ with a constant $C$ that depends on constants that appear in assumptions (F.1), (B.2) and (TC.2) but not in assumptions (B.1) and (TC.1). \end{prop} \paragraph*{Proof of the proposition} Let us consider the terminal condition $$\bar{g}(x)=C+\bar{\alpha} (\abs{x}+1)^{p_g},$$ and the generator $$\bar{f}(t,x,y,z) =C+\bar{\beta}\abs{x}^{r_f+1}+\bar{\delta}\abs{y}+\bar{\gamma} \abs{z}^{l+1},$$ with $C$ such that $g \leqslant \bar{g}$ and $f \leqslant \bar{f}$. 
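These domination inequalities are immediate from (TC.2) and (B.2)(a); spelled out (a one-line check, not in the original):

```latex
% One-line check that (TC.2) yields g \leqslant \bar{g}:
\[
g(x) \;\leqslant\; C+\bar{\alpha}\abs{x}^{p_g}
\;\leqslant\; C+\bar{\alpha}(\abs{x}+1)^{p_g} \;=\; \bar{g}(x),
\qquad x \in \mathbb{R}^d,
\]
% since p_g \geqslant 0, while f \leqslant \bar{f} is exactly the upper
% bound in (B.2)(a).
```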
(B.1) holds for $\bar{f}$ and (TC.1) holds for $\bar{g}$, so, according to Proposition \ref{existence unicite localement lipschitz}, there exists a unique solution $(\bar{Y},\bar{Z})$ to the BSDE $$ \bar{Y}_t=\bar{g}(X_T)+\int_t^T \bar{f}(s,X_s,\bar{Y}_s,\bar{Z}_s)ds-\int_t^T \bar{Z}_sdW_s.$$ Thanks to Proposition \ref{comparison result}, we know that $$Y \leqslant \bar{Y}, \quad \textrm{and} \quad \bar{Y}\geqslant 0.$$ Moreover, since $\abs{\bar{Z}_s} \leqslant C(1+\abs{X_s}^{(p_g-1)\vee r_f})$, $(p_g-1)l<1$ and $r_f l<1$, we have \begin{eqnarray*} \bar{Y}_t &\leqslant&\mathbb E_t \left[e^{\bar{\delta}(T-t)} (C+\bar{\alpha}(\abs{X_t}+1)^{p_g})+\int_t^T e^{\bar{\delta}(s-t)}(C+\bar{\beta}\abs{X_s}^{r_f+1}+\bar{\gamma}\abs{\bar{Z}_s}^{l+1})ds\right]\\ &\leqslant & C\left(1+\mathbb E_t\left[\sup_{t \leqslant s\leqslant T}\abs{X_s}^{p_g}\right]+(T-t)\mathbb E_t\left[\sup_{t \leqslant s\leqslant T}\abs{X_s}^{r_f+1}\right]\right). \end{eqnarray*} Let us remark that the constant $C$ in the a priori estimate for $\bar{Z}$ depends on constants that appear in assumptions (F.1), (B.2) and (TC.2) but not in assumptions (B.1) and (TC.1). Thanks to classical estimates on SDEs we have, for all $p \geqslant 1$, $$\mathbb E_t\left[\sup_{t \leqslant s\leqslant T}\abs{X_s}^{p}\right] \leqslant C(1+\abs{X_t}^{p}),$$ so we obtain $$Y_t \leqslant \bar{Y}_t \leqslant C(1+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1}).$$ By the same type of argument we easily show that $$ -C(1+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1})\leqslant Y_t,$$ and this concludes the proof. \cqfd \begin{prop} \label{estimation Z} Let us assume that (F.1), (B.1), (B.2)(c), (TC.1) and (TC.2) hold. Let $(Y,Z)$ be the solution of the BSDE (\ref{EDSR}) given by Proposition \ref{existence unicite localement lipschitz}. 
Then, for all $t \in [0,T]$, we have $$\mathbb E_t\left[\int_t^T\abs{Z_s}^{l+1}ds\right] \leqslant C(1+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1}),$$ with a constant $C$ that depends on constants that appear in assumptions (F.1), (B.2)(c) and (TC.2) but not in assumptions (B.1) and (TC.1). \end{prop} \paragraph*{Proof of the proposition} To prove the proposition we just have to write \begin{eqnarray*} \mathbb E_t\left[\int_t^T\abs{Z_s}^{l+1}ds\right] &\leqslant& \frac{1}{\varepsilon}\left(\mathbb E_t\left[\int_t^T f(s,X_s,Y_s,Z_s)ds +\int_t^T \left( C+\bar{\beta} \abs{X_s}^{r_f+1}+\bar{\delta}\abs{Y_s}\right)ds\right]\right)\\ &\leqslant& \frac{1}{\varepsilon}\left(\mathbb E_t\left[Y_t-g(X_T) +\int_t^T C+\bar{\beta} \abs{X_s}^{r_f+1}+\bar{\delta}\abs{Y_s}ds\right]\right)\\ &\leqslant& C(1+(T-t)\abs{X_t}^{r_f+1}+\abs{X_t}^{p_g}) \end{eqnarray*} thanks to Proposition \ref{estimation Y}. \cqfd \begin{rem} \label{inversion hypotheses croissance f} Proposition \ref{estimation Z} remains true if we replace assumption (B.2)(c) by $$-C-\bar{\beta} \abs{x}^{r_f+1}-\bar{\delta}\abs{y}- \bar{\gamma} \abs{z}^{l+1} \leqslant f(t,x,y,z) \leqslant C+\bar{\beta}\abs{x}^{r_f+1}+\bar{\delta}\abs{y}-\varepsilon\abs{z}^{l+1}.$$ \end{rem} \begin{rem} In Propositions \ref{estimation Y} and \ref{estimation Z} we insist on the fact that $C$ does not depend on the constants that appear in assumptions (B.1) and (TC.1), in which the local Lipschitz regularity of the coefficients is stated. Thanks to this property, we can use these a priori estimates on $Y$ and $Z$ in the following section, where we obtain an existence result when the terminal condition is not locally Lipschitz. \end{rem} \section{An existence result} Let us now introduce new assumptions. \paragraph{Assumption (F.2).} $b$ is differentiable with respect to $x$ and $\sigma$ is differentiable with respect to $t$.
There exists $\lambda \in \mathbb R^+$ such that $\forall \eta \in \mathbb{R}^d$ \begin{equation*} \label{hypothese sur nabla b} \abs{\tr{\eta}\sigma(s)[\tr{\sigma(s)}\tr{\nabla b(s,x)}-\tr{\sigma'(s)}]\eta} \leqslant \lambda\abs{\tr{\eta}\sigma(s)}^2, \quad \forall (s,x) \in [0,T] \times \mathbb R^d. \end{equation*} \begin{rem} It is shown in part 5.5.1 of \cite{Richou-10} that if $\sigma$ does not depend on time, assumption (F.2) is equivalent to the following kind of commutativity assumption: \begin{itemize} \item there exist $A : [0,T] \times \mathbb R^d \rightarrow \mathbb R^{d \times d}$ and $B : [0,T] \rightarrow \mathbb R^{d \times d}$ such that $A$ is differentiable with respect to $x$, $\nabla_x A$ is bounded and $\forall x \in \mathbb R^d$, $\forall s \in [0,T]$, $b(s,x)\sigma=\sigma A(s,x)+B(s).$ \end{itemize} It is also noticed in \cite{Richou-10} that this assumption allows us to weaken the regularity assumption on $b$ by a standard smooth approximation of $A$. \end{rem} \paragraph{Assumption (B.3).} $f$ is differentiable with respect to $z$ and for all $(t,x,y,z) \in [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^{1\times d}$, $$f(t,x,y,z)-\langle \nabla_zf(t,x,y,z),z\rangle \leqslant C-\varepsilon \abs{z}^{l+1}.$$ \begin{rem} Let us give some representative examples of functions such that (B.3) holds.
If we assume that $f(t,x,y,z):=f_1(t,x,y,z)+f_2(t,x,y,z)$ with $f_1$ a differentiable function with respect to $z$ such that, $\exists p \in[0,l[$, $\forall (t,x,y,z) \in [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^{1\times d}$, $$\abs{\nabla_z f_1(t,x,y,z)} \leqslant (1+\abs{z}^{p}),$$ and $f_2$ is a twice differentiable function with respect to $z$ such that, $\forall (t,x,y,z) \in [0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^{1\times d}$, $\forall u \in \mathbb R^d$, $$^tu\nabla^2_{zz}f_2(t,x,y,z)u \geqslant (-C+\varepsilon\abs{z}^{l-1})\abs{u}^2,$$ then we easily see that $$f_1(t,x,y,z)-\langle \nabla_zf_1(t,x,y,z),z\rangle \leqslant C+C \abs{z}^{p+1},$$ and a direct application of Taylor's formula with integral remainder gives us $$f_2(t,x,y,z)-\langle \nabla_zf_2(t,x,y,z),z\rangle \leqslant C-C' \abs{z}^{l+1},$$ so (B.3) holds. For example, (B.3) holds for the function $z \mapsto C\abs{z}^{l+1}+h(\abs{z}^{l+1-\eta})$ with $C>0$, $0<\eta\leqslant l+1$ and $h$ a differentiable function with a bounded derivative. \end{rem} \begin{prop} \label{prop estimation temporelle Z} Let us assume that (F.1), (F.2), (B.1), (B.3), (TC.1) and (TC.2) hold. Let $(Y,Z)$ be the solution of the BSDE (\ref{EDSR}) given by Proposition \ref{existence unicite localement lipschitz}. If we assume that $0 \leqslant p_g l <1$, then we have, for all $t \in [0,T[$, $$\abs{Z_t} \leqslant \frac{C(1+\abs{X_t}^{p_g/(l+1)})}{(T-t)^{1/(l+1)}}+C\abs{X_t}^{\frac{r_f+1}{l+1} }.$$ The constant $C$ depends on constants that appear in assumptions (F.1), (F.2), (B.1), (B.3) and (TC.2) but not in assumption (TC.1). \end{prop} \paragraph*{Proof of the proposition} First, we approximate our Markovian BSDE by another one.
Let $(Y^M,Z^M)$ be the solution of the BSDE \begin{equation} \label{EDSR approchee} Y^M_t = g_M(X_T)+\int_t^T f_M(s,X_s,Y^M_s,Z_s^M)ds-\int_t^T Z_s^M dW_s, \end{equation} with $g_M=g \circ \rho_M$ and $f_M=f(.,\rho_M(.),.,.)$ where $\rho_M$ is a smooth modification of the projection on the centered Euclidean ball of radius $M$ such that $\abs{\rho_M}\leqslant M$, $\abs{\nabla \rho_M} \leqslant 1$ and $\rho_M(x)=x$ when $\abs{x}\leqslant M-1$. It is now easy to see that $g_M$ and $f_M$ are Lipschitz functions with respect to $x$. Proposition 2.3 in \cite{Richou-12} gives us that $Z^M$ is bounded by a constant $C_0$ that depends on $M$. So, $f_M$ is a Lipschitz function with respect to $z$ and BSDE (\ref{EDSR approchee}) is a classical Lipschitz BSDE. Now we use the following lemma, which will be proved afterwards. \begin{lem} \label{lemme recurrence} Let us assume that (F.1), (F.2), (B.1), (B.3), (TC.1) and (TC.2) hold. We also assume that $0 \leqslant p_g l <1$. Then we have, for all $t \in [0,T[$, $$\abs{Z_t^M} \leqslant \frac{A_n+B_n\abs{X_t}^{p_g/(l+1)}}{(T-t)^{1/(l+1)}}+D_n\abs{X_t}^{\frac{r_f+1}{l+1} },$$ with $(A_n,B_n,D_n)_{n \in \mathbb N}$ defined by recursion: $B_0=0$, $D_0=0$, $A_0=C_0T^{1/(l+1)}$, $$A_{n+1}=C(1+A_n^{al}+B_n^{alp}+D_n^{al\bar{p}}), \quad B_{n+1}=C, \quad D_{n+1}=C,$$ where $a:=(p_g\vee (r_f+1))/(l+1)$, $p>1$, $\bar{p}>1$ and $C$ is a constant that does not depend on $M$ or on the constants in assumption (TC.1). \end{lem} Since $al<1$, the map that defines the sequence $(A_n)_{n \geqslant 0}$ is a contraction, so $A_n \rightarrow A_{\infty}$ when $n \rightarrow +\infty$, where $A_{\infty}$ does not depend on $M$ or on the constants in assumption (TC.1). Finally, we have, for all $t \in [0,T[$, $$\abs{Z_t^M} \leqslant \frac{C(1+\abs{X_t}^{p_g/(l+1)})}{(T-t)^{1/(l+1)}}+C\abs{X_t}^{\frac{r_f+1}{l+1}}.$$ The constant $C$ depends on constants that appear in assumptions (F.1), (F.2), (B.1), (B.3) and (TC.2) but not in assumption (TC.1).
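As a quick numerical illustration of this contraction argument, here is a sketch of the recursion with purely illustrative constants (the values of $C$, $al$, $p$, $\bar p$ below are placeholders, not the ones from the proof); the point is that, because $al<1$, the limit is insensitive to the possibly huge (and $M$-dependent) starting value $A_0$.

```python
# Sketch of the recursion A_{n+1} = C(1 + A_n^{al} + B_n^{al*p} + D_n^{al*pbar}),
# B_{n+1} = D_{n+1} = C, with illustrative constants; al < 1 is the key point.
def iterate_bounds(A0, C=2.0, al=0.5, p=1.5, pbar=1.5, n_iter=200):
    A, B, D = A0, 0.0, 0.0  # A_0 = C_0 T^{1/(l+1)} may depend on M and be large
    for _ in range(n_iter):
        A, B, D = C * (1.0 + A**al + B**(al * p) + D**(al * pbar)), C, C
    return A
```

Starting from two very different values of $A_0$ yields the same limit up to machine precision, mirroring the claim that $A_\infty$ does not depend on $M$.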
Moreover, $C$ does not depend on $M$. Now, we want to come back to the initial BSDE (\ref{EDSR}). It is already shown in the proof of Proposition 2.2 of the article \cite{Richou-12} that $(Y^M,Z^M) \rightarrow (Y,Z)$ in $\mathcal{S}^2 \times \mathcal{M}^2$ when $M \rightarrow +\infty$. So our estimate on $Z^M$ stays true for a version of $Z$. \cqfd \paragraph*{Proof of Lemma \ref{lemme recurrence}} Let us prove the result by induction. For $n=0$ we have already shown the result. Let us assume that the result is true for some $n \in \mathbb N$ and let us show that it remains true for $n+1$. First, we suppose that $f$ and $g$ are differentiable with respect to $x$ and $y$. Then $(Y^M,Z^M)$ is differentiable with respect to $x$ and $(\nabla Y^M,\nabla Z^M)$ is the solution of the BSDE \begin{eqnarray*} \nabla Y_t^M &=& \nabla g_M(X_T)\nabla X_T - \int_t^T \nabla Z_s^M dW_s\\ & & +\int_t^T \nabla_x f_M(s,X_s,Y_s^M,Z_s^M) \nabla X_s + \nabla_y f_M(s,X_s,Y_s^M,Z_s^M) \nabla Y_s^M + \nabla_z f_M(s,X_s,Y_s^M,Z_s^M) \nabla Z_s^M ds, \end{eqnarray*} and a version of $Z^M$ is given by $(\nabla Y_t^M(\nabla X_t)^{-1} \sigma(t))_{t \in[0,T]}$. Let us introduce some notations: we set \begin{eqnarray*} d\tilde{W}_t &:=& dW_t-\nabla_z f_M(t,X_t,Y_t^M,Z_t^M)dt,\\ \alpha_t &:= & \int_0^t e^{\int_0^s\nabla_y f_M(u,X_u,Y_u^M,Z_u^M)du}\nabla_x f_M(s,X_s,Y_s^M,Z_s^M)\nabla X_s ds (\nabla X_t)^{-1}\sigma(t),\\ \tilde{Z}_t^M &:=& e^{\int_0^t\nabla_y f_M(s,X_s,Y_s^M,Z_s^M)ds} Z_t^M +\alpha_t. \end{eqnarray*} By applying Girsanov's theorem we know that there exists a probability $\mathbb Q^M$ under which $\tilde{W}$ is a Brownian motion with $$\frac{d\mathbb Q^M}{d\mathbb P} = \exp\left( \int_0^T \nabla_z f_M(t,X_t,Y_t^M,Z_t^M)dW_t-\frac{1}{2}\int_0^T \abs{\nabla_z f_M(t,X_t,Y_t^M,Z_t^M)}^2dt\right).$$ Then, exactly as in the proof of Theorem 3.3 in \cite{Richou-11}, we can show the following lemma. \begin{lem} \label{Qm sous martingale} $\abs{e^{\lambda t} \tilde{Z}_t^M}^2$ is a $\mathbb{Q}^M$-submartingale.
\end{lem} For the reader's convenience, we recall this proof in the appendix. It follows that $\abs{e^{\lambda t} \tilde{Z}_t^M}^{l+1}$ is also a $\mathbb{Q}^M$-submartingale and we have: \begin{eqnarray*} \mathbb{E}^{\mathbb{Q}^M}_t \left[\int_t^T e^{2\lambda s}\abs{\tilde{Z}^M_s}^{l+1}ds \right] & \geqslant & e^{2\lambda t}\abs{\tilde{Z}_t^M}^{l+1}(T-t)\\ &\geqslant & e^{2\lambda t}\abs{e^{\int_0^t\nabla_y f_M(s,X_s,Y_s^M,Z_s^M)ds}Z_t^M+\alpha_t}^{l+1}(T-t) , \end{eqnarray*} which implies \begin{eqnarray} \nonumber \abs{Z_t^M}^{l+1}(T-t) & \leqslant & C\left(e^{2\lambda t}\abs{e^{\int_0^t\nabla_y f_M(s,X_s,Y_s^M,Z_s^M)ds}Z_t^M+\alpha_t}^{l+1}+\abs{\alpha_t}^{l+1}\right)(T-t)\\ \nonumber &\leqslant & C\left( \mathbb{E}^{\mathbb{Q}^M}_t \left[\int_t^T e^{2\lambda s}\abs{\tilde{Z}_s^M}^{l+1}ds\right] +(T-t)\left(1+\abs{X_t}^{(l+1)r_f}\right)\right)\\ \nonumber &\leqslant & C\left( 1+\mathbb{E}^{\mathbb{Q}^M}_t \left[\int_t^T \abs{Z_s^M}^{l+1}ds\right] +\mathbb{E}^{\mathbb{Q}^M}_t \left[\int_t^T \abs{X_s}^{(l+1)r_f}ds\right]+(T-t)\abs{X_t}^{(l+1)r_f}\right).\\ \label{inegalite1 Z} & & \end{eqnarray} Let us recall that $(Y^M,Z^M)$ is the solution of the BSDE $$Y_t^M=g_M(X_T)+\int_t^T \tilde{f}_M(s,X_s,Y_s^M,Z_s^M)ds-\int_t^T Z_s^Md\tilde{W}_s,$$ with $$\tilde{f}_M(s,x,y,z):=f_M(s,x,y,z)-\langle z,\nabla_zf_M(s,x,y,z)\rangle.$$ Since assumption (B.3) holds for $f$, assumption (B.2)(c) holds for $-\tilde{f}_M$ with constants that do not depend on $M$.
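This last implication amounts to the following one-line computation, spelled out here for clarity (the upper-bound half of (B.2)(c) comes from the growth of $f_M$ and $\nabla_z f_M$):

```latex
% Why (B.3) for f gives the lower bound of (B.2)(c) for -\tilde{f}_M:
\[
-\tilde{f}_M(s,x,y,z)
= \langle z,\nabla_z f_M(s,x,y,z)\rangle - f_M(s,x,y,z)
\;\geqslant\; -C+\varepsilon\abs{z}^{l+1},
\]
% by applying assumption (B.3) to f_M; the constants C and \varepsilon do not
% depend on M since \rho_M only modifies the x-variable.
```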
Then we can mimic the proof of Proposition \ref{estimation Z} (see also Remark \ref{inversion hypotheses croissance f}) to show that \begin{equation} \label{inegalite2 Z} \mathbb{E}^{\mathbb{Q}^M}_t \left[\int_t^T \abs{Z_s^M}^{l+1}ds\right] \leqslant C\left(1+\mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_T}^{p_g}\right]+\int_t^T \left(\mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_s}^{p_g}\right]+\mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_s}^{r_f+1}\right]\right)ds\right), \end{equation} with a constant $C$ that does not depend on $M$ or on the constants that appear in assumption (TC.1). Then, by putting (\ref{inegalite2 Z}) in (\ref{inegalite1 Z}), we see that we just have to obtain an a priori estimate for $\mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_s}^{c}\right]$ with $c \in \mathbb R^{+*}$. We have \begin{eqnarray*} \abs{X_s} &=& \abs{X_t+\int_t^s b(u,X_u)du+\int_t^s \sigma(u) d\tilde{W}_u+\int_t^s \sigma(u) \nabla_z f_M(u,X_u,Y_u^M,Z_u^M)du}\\ &\leqslant& \abs{X_t}+C+C\int_t^s\abs{X_u}du+\abs{\int_t^s \sigma(u) d\tilde{W}_u}+C\int_t^s \abs{Z_u^M}^ldu, \end{eqnarray*} with $C$ that does not depend on $M$. Now we use the induction hypothesis to obtain \begin{eqnarray*} \int_t^s \abs{Z_u^M}^ldu & \leqslant& C\int_t^s \left(\frac{A_n^l}{(T-u)^{l/(l+1)}} +\frac{B_n^l}{(T-u)^{l/(l+1)}}\abs{X_u}^{lp_g/(l+1)}+D_n^l\abs{X_u}^{(r_f+1)l/(l+1)}\right)du. \end{eqnarray*} Obviously we have $\int_t^T \frac{A_n^l}{(T-u)^{l/(l+1)}}du \leqslant CA_n^l$. For the other terms we use Young's inequality: since $lp_g/(l+1) <1$ and $(r_f+1)l/(l+1)<1$, we have \begin{eqnarray*} \int_t^s \abs{Z_u^M}^ldu & \leqslant& CA_n^l+C\int_t^s \left(\frac{B_n^{lp}}{(T-u)^{lp/(l+1)}}+D_n^{l\bar{p}}+\abs{X_u}\right)du, \end{eqnarray*} with $p=1/(1-lp_g/(l+1))$ and $\bar{p}>1$. Since we assume that $lp_g <1$, then $lp/(l+1)<1$ and $\int_t^s \frac{B_n^{lp}}{(T-u)^{lp/(l+1)}}du \leqslant CB_n^{lp}$.
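For completeness, the Young step above (when $p_g>0$; the corresponding term is trivial otherwise) reads as follows, with $q=(l+1)/(lp_g)$ and $p=1/(1-lp_g/(l+1))$ conjugate exponents:

```latex
% Young's inequality ab \leqslant a^p/p + b^q/q with
% a = B_n^l (T-u)^{-l/(l+1)} and b = |X_u|^{l p_g/(l+1)}, so that b^q = |X_u|:
\[
\frac{B_n^l}{(T-u)^{l/(l+1)}}\abs{X_u}^{lp_g/(l+1)}
\;\leqslant\;
\frac{1}{p}\,\frac{B_n^{lp}}{(T-u)^{lp/(l+1)}}+\frac{1}{q}\abs{X_u},
\]
% and similarly D_n^l |X_u|^{(r_f+1)l/(l+1)} \leqslant D_n^{l\bar{p}}/\bar{p}
% + |X_u|/\bar{q}, with \bar{q} = (l+1)/((r_f+1)l) > 1 and \bar{p} its conjugate.
```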
Finally, we obtain $$\int_t^s \abs{Z_u^M}^ldu \leqslant CA_n^l+CB_n^{lp}+CD_n^{l\bar{p}}+C\int_t^s\abs{X_u}du,$$ and $$\abs{X_s} \leqslant \abs{X_t}+C+C\int_t^s\abs{X_u}du+\sup_{t \leqslant r \leqslant T} \abs{\int_t^r \sigma(u) d\tilde{W}_u}+CA_n^l+CB_n^{lp}+CD_n^{l\bar{p}}.$$ Gronwall's lemma gives us \begin{eqnarray*} \abs{X_s} &\leqslant& C\left(1+\sup_{t \leqslant r \leqslant T} \abs{\int_t^r \sigma(u) d\tilde{W}_u}+A_n^l+B_n^{lp}+D_n^{l\bar{p}}+\abs{X_t}\right) \end{eqnarray*} which implies \begin{eqnarray} \label{inegalite3 Z} \mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_s}^{c}\right] &\leqslant& C\left(1+A_n^{cl}+B_n^{clp}+D_n^{cl\bar{p}}+\abs{X_t}^c\right). \end{eqnarray} By putting (\ref{inegalite3 Z}) in (\ref{inegalite2 Z}) and (\ref{inegalite1 Z}), we obtain \begin{eqnarray*} \abs{Z_t^M}^{l+1}(T-t) &\leqslant& C\left(1+\mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_T}^{p_g}\right]+\int_t^T \mathbb{E}^{\mathbb{Q}^M}_t \left[\abs{X_s}^{p_g\vee (r_f+1)}\right]ds + (T-t)\abs{X_t}^{(l+1)r_f}\right)\\ &\leqslant& C\left( 1+A_n^{(l+1)al}+B_n^{(l+1)alp}+D_n^{(l+1)al\bar{p}}+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1} \right), \end{eqnarray*} with $a=(p_g\vee (r_f+1))/(l+1)$ and $C$ that does not depend on $M$ or on the constants that appear in assumption (TC.1). So, we easily see that we can take $$A_{n+1}=C(1+A_n^{al}+B_n^{alp}+D_n^{al\bar{p}}), \quad B_{n+1}=C, \quad D_{n+1}=C,$$ and then the result is proved. When $f$ and $g$ are not differentiable we can prove the result by standard approximation and stability results for BSDEs with linear growth. \cqfd Since the estimate on $Z$ given by Proposition~\ref{prop estimation temporelle Z} does not depend on constants that appear in assumption (TC.1), we can use it to show an existence result for superquadratic BSDEs with a quite general terminal condition. \begin{thm} \label{theoreme final} Let us assume that (F.1), (F.2), (B.1), (B.2)(b), (B.3) and (TC.2) hold.
We also assume that $0 \leqslant p_g l <1$. Then there exists a solution $(Y,Z)$ to the BSDE (\ref{EDSR}) such that $(Y,Z) \in \mathcal{S}^2 \times \mathcal{M}^2$. Moreover, we have for all $t \in [0,T[$, \begin{equation} \label{estimee deterministe 2 pour Z} \abs{Z_t} \leqslant \frac{C(1+\abs{X_t}^{p_g/(l+1)})}{(T-t)^{1/(l+1)}}+C\abs{X_t}^{\frac{r_f+1}{l+1}}, \end{equation} and, if we assume that (B.2)(c) holds, \begin{equation*} \label{moment Mp pour Z} \mathbb E\left[\int_0^T\abs{Z_s}^{l+1}ds\right] <+\infty. \end{equation*} \end{thm} \paragraph*{Proof of Theorem \ref{theoreme final}} The proof is based on the proof of Proposition 4.3 in \cite{Delbaen-Hu-Bao-09}. For each integer $n\geqslant 0$, we construct the sup-convolution of $g$ defined by $$g_n(x):=\sup_{u \in \mathbb R^d} \set{g(u)-n\abs{x-u}}.$$ Let us recall some well-known facts about sup-convolution: \begin{lem} For $n\geqslant n_0$ with $n_0$ large enough, we have, \begin{itemize} \item $g_n$ is well defined, \item (TC.1) holds for $g_n$ with $r_g=0$, \item (TC.2) holds for $g_n$ with the same constants $C$ and $\bar{\alpha}$ as for $g$ (they do not depend on $n$), \item $(g_n)_n$ is decreasing, \item $(g_n)_n$ converges pointwise to $g$. \end{itemize} \end{lem} Since (TC.1) holds, we can consider $(Y^n,Z^n)$ the solution given by Proposition \ref{existence unicite localement lipschitz}. It follows from Propositions \ref{comparison result} and \ref{estimation Y} that, for all $n \geqslant n_0$, \begin{equation} \label{croissance Yn} -C(1+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1}) \leqslant Y^{n+1}_t \leqslant Y^n_t \leqslant Y^{n_0}_t \leqslant C(1+\abs{X_t}^{p_g}+(T-t)\abs{X_t}^{r_f+1}), \end{equation} with $C$ that does not depend on $n$: indeed, the constant in Proposition \ref{estimation Y} just depends on the growth of the terminal condition and here the growth of $g_n$ can be chosen independently of $n$ (see the previous lemma).
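As a side illustration (not part of the proof), the sup-convolution is easy to compute numerically; the one-dimensional grid sketch below (the function name and discretization are ours, purely illustrative) exhibits the facts used here: $g_n \geqslant g$, $(g_n)_n$ decreasing, and $g_n$ being $n$-Lipschitz, which is why (TC.1) holds with $r_g=0$.

```python
import numpy as np

# Grid sketch of the sup-convolution g_n(x) = sup_u { g(u) - n|x - u| },
# with the sup restricted to the grid (illustrative 1-d discretization).
def sup_convolution(g_vals, grid, n):
    return np.array([np.max(g_vals - n * np.abs(x - grid)) for x in grid])
```

Since $g_n$ is a supremum of $n$-Lipschitz functions of $x$, the computed values are exactly $n$-Lipschitz on the grid.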
So $(Y^n)_n$ converges almost surely and we can define $$Y=\lim_{n \rightarrow +\infty} Y^n.$$ Passing to the limit into (\ref{croissance Yn}), we obtain that the estimate of Proposition \ref{estimation Y} stays true for $Y$. Now the aim is to show that $(Z^n)_n$ converges in the appropriate space. For any $T' \in]0,T[$, $(Y^n,Z^n)$ satisfies \begin{equation} \label{etoile5} Y_t^n=Y_{T'}^n+\int_t^{T'} f(s,X_s,Y_s^n,Z_s^n)ds-\int_t^{T'} Z_s^ndW_s, \quad 0 \leqslant t \leqslant T'. \end{equation} Let us denote $\delta Y^{n,m}:=Y^n-Y^m$ and $\delta Z^{n,m}:=Z^n-Z^m$. The classical linearization method gives us that $(\delta Y^{n,m}, \delta Z^{n,m})$ is the solution of the BSDE $$\delta Y^{n,m}_t = \delta Y^{n,m}_{T'}+\int_t^{T'} U_s^{n,m} \delta Y^{n,m}_s+V_s^{n,m} \delta Z^{n,m}_s ds -\int_t^{T'} \delta Z_s^{n,m} dW_s,$$ where $\abs{U^{n,m}} \leqslant C$ and, by using estimates of Proposition \ref{prop estimation temporelle Z}, \begin{equation} \label{etoile1} \abs{V^{n,m}} \leqslant C(1+\abs{Z^n}^l+\abs{Z^m}^l) \leqslant C(1+\abs{X}^{p}), \end{equation} with $p<1$ and $C$ that depends on $T'$ but does not depend on $n$ and $m$. Since $p<1$, Novikov's condition is fulfilled and we can apply Girsanov's theorem: there exists a probability $\mathbb Q^{n,m}$ such that $d\tilde{W}_t:=dW_t-V_t^{n,m}dt$ is a Brownian motion under this probability. By classical transformations, we have that $(\delta Y^{n,m}, \delta Z^{n,m})$ is the solution of the BSDE $$\delta Y^{n,m}_t = \delta Y^{n,m}_{T'}e^{\int_t^{T'}U_s^{n,m}ds}-\int_t^{T'} e^{\int_t^{s}U_u^{n,m}du}\delta Z_s^{n,m} d\tilde{W}_s.$$ Since $U^{n,m}$ is bounded, classical estimates on BSDEs give us (see e.g. \cite{ElKaroui-Peng-Quenez-97}) \begin{equation} \label{etoile2} \mathbb E^{\mathbb Q^{n,m}} \left[\left( \int_0^{T'} \abs{\delta Z^{n,m}_s}^2ds\right)^2 \right] \leqslant C\mathbb E^{\mathbb Q^{n,m}} \left[ \abs{\delta Y^{n,m}_{T'}}^4 \right].
\end{equation} Now, we would like to have the same type of estimate as (\ref{etoile2}), but with the classical expectation instead of $\mathbb{E}^{\mathbb Q^{n,m}}$. To do so, we define the exponential martingale $$\mathcal{E}^{n,m}_{T'}:= \exp \left( \int_0^{T'} V_s^{n,m} dW_s-\frac{1}{2}\int_0^{T'} \abs{V_s^{n,m}}^2ds \right).$$ Then, for all $p \in \mathbb R$, \begin{equation} \label{etoile3et4} \mathbb E\left[ (\mathcal{E}^{n,m}_{T'})^p \right] <C_p, \end{equation} with $C_p$ that does not depend on $n$ or $m$: indeed, by applying (\ref{etoile1}) and Gronwall's lemma we have \begin{eqnarray*} \mathbb E \left[e^{p \int_0^{T'} V_s^{n,m} dW_s-\frac{p}{2}\int_0^{T'} \abs{V_s^{n,m}}^2ds}\right] &=& \mathbb E \left[e^{\frac{1}{2}\left( \int_0^{T'} 2pV_s^{n,m} dW_s-\frac{1}{2}\int_0^{T'} \abs{2pV_s^{n,m}}^2ds\right) +\left(p^2-\frac{p}{2}\right) \int_0^{T'} \abs{V_s^{n,m}}^2ds}\right]\\ &\leqslant& \mathbb E \left[e^{\int_0^{T'} 2pV_s^{n,m} dW_s-\frac{1}{2}\int_0^{T'} \abs{2pV_s^{n,m}}^2ds}\right]^{1/2}\mathbb E\left[ e^{\left(2p^2-p\right) \int_0^{T'} \abs{V_s^{n,m}}^2ds}\right]^{1/2}\\ &\leqslant& \mathbb E\left[ e^{C\abs{2p^2-p}\left(1+\sup_{0 \leqslant s \leqslant T} \abs{X_s}^{2p}\right)}\right]^{1/2}\\ & < & + \infty, \end{eqnarray*} because $2p<2$.
By applying the Cauchy-Schwarz inequality and by using (\ref{etoile3et4}) and (\ref{etoile2}), we obtain \begin{eqnarray*} \mathbb E \left[\int_0^{T'} \abs{\delta Z^{n,m}_s}^2ds \right] &=& \mathbb E \left[(\mathcal{E}^{n,m}_{T'})^{-1/2}(\mathcal{E}^{n,m}_{T'})^{1/2}\int_0^{T'} \abs{\delta Z^{n,m}_s}^2ds\right]\\ &\leqslant& \mathbb E\left[ (\mathcal{E}^{n,m}_{T'})^{-1} \right]^{1/2}\mathbb E^{\mathbb Q^{n,m}} \left[\left( \int_0^{T'} \abs{\delta Z^{n,m}_s}^2ds\right)^2 \right]^{1/2}\\ &\leqslant& C\mathbb E^{\mathbb Q^{n,m}} \left[ \abs{\delta Y^{n,m}_{T'}}^4 \right]^{1/2}\\ &\leqslant& C\mathbb E\left[ (\mathcal{E}^{n,m}_{T'})^{2} \right]^{1/2}\mathbb E \left[ \abs{\delta Y^{n,m}_{T'}}^8 \right]^{1/4}\\ &\leqslant& C\mathbb E \left[ \abs{\delta Y^{n,m}_{T'}}^8 \right]^{1/4} \xrightarrow{n,m \to +\infty} 0. \end{eqnarray*} Since $\mathcal{M}^2$ is a Banach space, we can define $$Z=\lim_{n \to +\infty} Z^n, \quad d\mathbb P \times dt\textrm{-a.e.}$$ If we apply Proposition \ref{estimation Z}, we have that $\norm{Z^n}_{\mathcal{M}^2}<C$ with a constant $C$ that does not depend on $n$. So, Fatou's lemma gives us that $Z \in \mathcal{M}^2$. Moreover, the estimate on $Z^n$ given by Proposition \ref{prop estimation temporelle Z} stays true for $Z$ and, if we assume that (B.2)(c) holds, then Proposition \ref{estimation Z} gives us that $$\mathbb E \left[ \int_0^T \abs{Z_s^n}^{l+1}ds\right]<C$$ with a constant $C$ that does not depend on $n$ and so $$\mathbb E \left[ \int_0^T \abs{Z_s}^{l+1}ds\right]<C.$$ Finally, by passing to the limit when $n \to +\infty$ in (\ref{etoile5}) and by using the dominated convergence theorem, we obtain that for any fixed $T' \in [0,T[$, $(Y,Z)$ satisfies \begin{equation} \label{etoile6} Y_t=Y_{T'}+\int_t^{T'} f(s,X_s,Y_s,Z_s)ds-\int_t^{T'} Z_sdW_s, \quad 0 \leqslant t \leqslant T'. \end{equation} To conclude, we just have to prove that we can pass to the limit when $T' \to T$ in (\ref{etoile6}).
Let us show that $Y_{T'} \xrightarrow{T' \to T} g(X_T)$ a.s. First, we have $$\overline{\lim}_{s \to T} Y_s \leqslant \overline{\lim}_{s \to T} Y_s^n =g_n(X_T) \textrm{ a.s.} \quad \textrm{ for any } n \geqslant n_0, $$ which implies $\overline{\lim}_{s \to T} Y_s \leqslant g(X_T)$ a.s. On the other hand, we use assumption (B.2)(b) and we apply Propositions \ref{estimation Y} and \ref{prop estimation temporelle Z} to deduce that, a.s., \begin{eqnarray*} Y_t^n &=& g_n(X_T)+\int_t^T f(s,X_s,Y_s^n,Z_s^n)ds -\int_t^T Z_s^n dW_s\\ &\geqslant& g_n(X_T)-C\int_t^T 1+\abs{X_s}^{r_f+1}+\abs{Y_s^n}+\abs{Z_s^n}^{\eta}ds-\int_t^T Z_s^n dW_s\\ &\geqslant& \mathbb E_t \left[ g_n(X_T) -C\int_t^T 1+\abs{X_s}^{(r_f+1) \vee p_g }+\frac{1+\abs{X_s}^{\eta p_g/(l+1)}}{(T-s)^{\eta/(l+1)}}ds\right]\\ &\geqslant& \mathbb E_t \left[ g_n(X_T)\right] -C(T-t)(1+\abs{X_t}^{(r_f+1) \vee p_g })-C(T-t)^{1-\eta/(l+1)}(1+\abs{X_t}^{\eta p_g/(l+1)}), \end{eqnarray*} and $$Y_t=\lim_{n \to +\infty} Y_t^n \geqslant \mathbb E_t\left[ g(X_T)\right] -C(T-t)(1+\abs{X_t}^{(r_f+1) \vee p_g })-C(T-t)^{1-\eta/(l+1)}(1+\abs{X_t}^{\eta p_g/(l+1)}),$$ which implies $$\underline{\lim}_{t \to T} Y_t \geqslant \underline{\lim}_{t \to T} \mathbb E_t\left[ g(X_T)\right] = g(X_T).$$ Hence, $\lim_{t \to T} Y_t=g(X_T)$ a.s. Now, let us come back to BSDE (\ref{etoile6}). Since we have $$\int_t^T \abs{f(s,X_s,Y_s,Z_s)}ds \leqslant \int_t^T C(1+\abs{X_s}^{r_f+1}+\abs{Y_s}+\abs{Z_s}^{l+1})ds<+\infty \textrm{ a.s.},$$ then $$\int_t^{T'} f(s,X_s,Y_s,Z_s)ds \xrightarrow{T' \to T} \int_t^{T} f(s,X_s,Y_s,Z_s)ds<+\infty\textrm{ a.s.}$$ Finally, passing to the limit when $T' \to T$ in (\ref{etoile6}), we conclude that $(Y,Z)$ is a solution to BSDE (\ref{EDSR}). \cqfd \begin{rem} The function $z \mapsto C\abs{z}^{l+1}+h(\abs{z}^{l+1-\eta})$ with $C>0$, $0<\eta\leqslant l+1$ and $h$ a differentiable function with a bounded derivative is an example of a generator such that (B.1), (B.2)(b) and (B.3) hold.
\end{rem} \begin{rem} The estimate $$\abs{Z_t} \leqslant \frac{C(1+\abs{X_t}^{p_g})}{\sqrt{T-t}}+C\abs{X_t}^{r_f+1}$$ is already known in the Lipschitz framework as a consequence of the Bismut-Elworthy formula (see e.g. \cite{Fuhrman-Tessitore-02}). For the superquadratic case, the same estimate was obtained when $p_g=0$ and $f$ does not depend on $x$ and $y$ in \cite{Delbaen-Hu-Bao-09} (see also \cite{Richou-11} for the quadratic case). In \cite{Delbaen-Hu-Bao-09}, Remark 4.4 gives the same type of estimate as (\ref{estimee deterministe 2 pour Z}) for the example $f(z)=\abs{z}^{l}$. This result was already obtained by Gilding et al. in \cite{Gilding-Guedda-Kersner-03} using Bernstein's technique when $f(z)=\abs{z}^{l}$, $b=0$ and $\sigma$ is the identity. \end{rem} \begin{rem} In this article, estimate (\ref{estimee deterministe 2 pour Z}) for the process $Z$ allows us to obtain an existence result. But this type of deterministic bound is also interesting for the numerical approximation of BSDEs (see e.g. \cite{Richou-11}) or for studying stochastic optimal control problems in infinite dimension (see e.g. \cite{Masiero-10}).
\end{rem} \appendix \section{Appendix} \subsection{Proof of Lemma \ref{Qm sous martingale}} Let us set $$F_t^M := e^{\int_0^t\nabla_y f_M(s,X_s,Y_s^M,Z_s^M)ds}\nabla Y_t^M+\int_0^t e^{\int_0^s\nabla_y f_M(u,X_u,Y_u^M,Z_u^M)du}\nabla_x f_M(s,X_s,Y_s^M,Z_s^M)\nabla X_s ds,$$ and $$\tilde{F}_t^M := e^{\lambda t} F_t^M (\nabla X_t)^{-1}.$$ Since $d\nabla X_t=\nabla b(t,X_t)\nabla X_t dt$, then $d(\nabla X_t)^{-1} = - (\nabla X_t)^{-1}\nabla b(t,X_t)dt$ and thanks to Itô's formula, $$d\tilde{Z}_t^M=dF_t^M(\nabla X_t)^{-1} \sigma(t)-F_t^M (\nabla X_t)^{-1}\nabla b(t,X_t)\sigma(t)dt+F_t^M(\nabla X_t)^{-1}\sigma'(t)dt,$$ and $$d(e^{\lambda t}\tilde{Z}_t^M)=\tilde{F}_t^M(\lambda Id-\nabla b(t,X_t))\sigma(t)dt+\tilde{F}_t^M\sigma'(t)dt+e^{\lambda t} dF_t^M(\nabla X_t)^{-1} \sigma(t).$$ Finally, $$d\abs{e^{\lambda t} \tilde{Z}_t^M}^2=d\langle N \rangle_t+2\left[\lambda\abs{\tilde{F}_t^M\sigma(t)}^2-\tilde{F}_t^M\sigma(t)[\tr{\sigma(t)}\tr{\nabla b(t,X_t)}-\tr{\sigma'(t)}]\tr{\tilde{F}_t^M}\right]dt+dN_t^*,$$ with $N_t:=\int_0^t e^{\lambda s} dF_s^M(\nabla X_s)^{-1}\sigma(s)$ and $N_t^*$ a $\mathbb{Q}^M$-martingale. Thanks to the assumption (F.2) we are able to conclude that $\abs{e^{\lambda t} \tilde{Z}_t^M}^2$ is a $\mathbb{Q}^M$-submartingale. \cqfd \end{document}
\begin{document} \title{Block Rigidity: Strong Multiplayer Parallel Repetition implies Super-Linear Lower Bounds for Turing Machines} \author{Kunal Mittal\thanks{Department of Computer Science, Princeton University. Research supported by the Simons Collaboration on Algorithms and Geometry, by a Simons Investigator Award and by the National Science Foundation grants No. CCF-1714779, CCF-2007462.} \and Ran Raz\footnotemark[1]} \date{} \maketitle \begin{abstract} We prove that a sufficiently strong parallel repetition theorem for a special case of multiplayer (multiprover) games implies super-linear lower bounds for multi-tape Turing machines with advice. To the best of our knowledge, this is the first connection between parallel repetition and lower bounds for time complexity and the first major potential implication of a parallel repetition theorem with more than two players. Along the way to proving this result, we define and initiate a study of {\it block rigidity}, a weakening of Valiant's notion of {\it rigidity}~\cite{Val77}. While rigidity was originally defined for matrices, or, equivalently, for (multi-output) linear functions, we extend and study both rigidity and block rigidity for general (multi-output) functions. Using techniques of Paul, Pippenger, Szemer{\'{e}}di and Trotter \cite{PPST83}, we show that a block-rigid function cannot be computed by multi-tape Turing machines that run in linear (or slightly super-linear) time, even in the non-uniform setting, where the machine gets an arbitrary advice tape. We then describe a class of multiplayer games, such that, a sufficiently strong parallel repetition theorem for that class of games implies an explicit block-rigid function. 
The games in that class have the following property that may be of independent interest: for every random string for the verifier (which, in particular, determines the vector of queries to the players), there is a unique correct answer for each of the players, and the verifier accepts if and only if all answers are correct. We refer to such games as {\it independent games}. The theorem that we need is that parallel repetition reduces the value of games in this class from $v$ to $v^{\Omega(n)}$, where $n$ is the number of repetitions. As another application of block rigidity, we show conditional size-depth tradeoffs for boolean circuits, where the gates compute arbitrary functions over large sets. \end{abstract} \section{Introduction} We study relations between three seemingly unrelated topics: parallel repetition of multiplayer games, a variant of Valiant's notion of rigidity, that we refer to as block rigidity, and proving super-linear lower bounds for Turing machines with advice. \subsection{Super-Linear Lower Bounds for Turing Machines} Deterministic multi-tape Turing machines are the standard model of computation for defining time-complexity classes. Lower bounds for the running time of such machines are known by the time-hierarchy theorem~\cite{HS65}, using a diagonalization argument. Moreover, the seminal work of Paul, Pippenger, Szemer{\'{e}}di and Trotter gives a separation of non-deterministic linear time from deterministic linear time, for multi-tape Turing machines~\cite{PPST83} (using ideas from \cite{HPV77} and \cite{PR80}). That is, it shows that $\mathrm{DTIME}(n) \subsetneq \mathrm{NTIME}(n)$. This result has been slightly improved to give $\mathrm{DTIME}(n\sqrt{\log^*{n}}) \subsetneq \mathrm{NTIME}(n\sqrt{\log^*{n}})$~\cite{San01}. However, the above mentioned lower bounds do not hold in the non-uniform setting, where the machines are allowed to use arbitrary advice depending on the length of the input.
In the non-uniform setting, no super-linear lower bound is known for the running time of deterministic multi-tape Turing machines. Moreover, such bounds are not known even for a multi-output function. \subsection{Block Rigidity} The concept of matrix rigidity was introduced by Valiant as a means to prove super-linear lower bounds against circuits of logarithmic depth~\cite{Val77}. Since then, it has also found applications in communication complexity~\cite{Raz89} (see also~\cite{Wun12,Lok01}). We extend Valiant's notion of rigid matrices to the concept of {\it rigid functions}, and further to {\it block-rigid functions}. We believe that these notions are of independent interest. We note that block-rigidity is a weaker condition than rigidity and hence it may be easier to find explicit block-rigid functions. Further, our result gives a new application of rigidity. Over a field $\mathbb F$, a matrix $A\in \mathbb F^{n\times n}$ is said to be an $(r,s)$-rigid matrix if it is not possible to reduce the rank of $A$ to at most $r$, by changing at most $s$ entries in each row of $A$. Valiant showed that if $A\in \mathbb F^{n\times n}$ is $(\epsilon n, n^{\epsilon})$-rigid for some constant $\epsilon >0$, then $A$ is not computable by a linear-circuit of logarithmic depth and linear size. As in many problems in complexity, the challenge is to find explicit rigid matrices. By explicit, we mean that a polynomial time deterministic Turing machine should be able to output a rigid matrix $A\in \mathbb F^{n\times n}$ on input~$1^n$. The best known bounds on explicit rigid matrices are far from what is needed to get super-linear circuit lower bounds (see \cite{Fri93,SS97,SSS97,Lok00,Lok06,KLPS14,AW17,GT18,AC19,BHPT20}). 
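To make the definition of rigidity concrete, the following brute-force check (an illustrative sketch of ours, not part of the paper) tests $(r,s)$-rigidity of a tiny matrix over $\mathbb F_2$, with each row stored as an integer bitmask. Since $A-C = A\oplus C$ over $\mathbb F_2$, it enumerates all row-wise $s$-sparse corrections $C$ and asks whether any brings the rank down to $r$; the search is exponential and only meant for toy sizes.

```python
from itertools import combinations, product

def rank_f2(rows):
    """Rank over F2 of a matrix whose rows are integer bitmasks."""
    basis = {}  # leading bit position -> basis row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in basis:
                basis[lead] = row
                break
            row ^= basis[lead]  # eliminate the current leading bit
    return len(basis)

def sparse_rows(n, s):
    """All n-bit rows with at most s non-zero entries."""
    rows = [0]
    for weight in range(1, s + 1):
        for positions in combinations(range(n), weight):
            rows.append(sum(1 << p for p in positions))
    return rows

def is_rigid(A, r, s):
    """True iff A (a list of n bitmask rows) is an (r, s)-rigid matrix,
    by brute force over all corrections C with <= s changes per row."""
    n = len(A)
    for C in product(sparse_rows(n, s), repeat=n):
        # A = B + C with B = A xor C; A is non-rigid if rank(B) <= r.
        if rank_f2([a ^ c for a, c in zip(A, C)]) <= r:
            return False
    return True
```

For instance, the $3\times 3$ identity matrix is not $(0,1)$-rigid: taking $C=A$ (one change per row) leaves the zero matrix.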
We extend the above definition to functions $f:\set{0,1}^n\to \set{0,1}^n$, by saying that $f$ is \emph{not} an $(r,s)$-rigid function if there exists a subset $X\subseteq \set{0,1}^n$ of size at least $2^{n-r}$, such that over $X$, each output bit of $f$ can be written as a function of some $s$ input bits (see Definition~\ref{def:rig_func}). By a simple counting argument (see Proposition~\ref{prop:rig_func_exist}), it follows that random functions are rigid with good probability. We further extend this definition to what we call block-rigid functions (see Definition~\ref{def:block_rig_func}). For this, we will consider vectors $x\in \set{0,1}^{nk}$, which are thought of as composed of $k$ blocks, each of size $n$. We say that a function $f:\set{0,1}^{nk}\to \set{0,1}^{nk}$ is \emph{not} an $(r,s)$-block-rigid function, if there exists a subset $X\subseteq \set{0,1}^{nk}$ of size at least $2^{nk-r}$, such that over $X$, each \emph{output block} of $f$ can be written as a function of some $s$ \emph{input blocks}. We conjecture that it is possible to obtain large block-rigid functions using smaller rigid functions. For a function $f:\set{0,1}^k\to \set{0,1}^k$, we define the function $f^{\otimes n}:\set{0,1}^{nk}\to \set{0,1}^{nk}$ as follows. For each $x = (x_{ij})_{i\in[n],j\in [k]} \in \set{0,1}^{nk}$, we define $f^{\otimes n}(x)$ to be the vector obtained by applying $f$ to $(x_{i1},\dots,x_{ik})$, in place, for each $i\in[n]$ (see Definition~\ref{def:f_tens_idn}). \begin{conjecture}\label{intro_conj:rig_amp_to_block_rig} There exists a universal constant $c>0$ such that the following is true. Let $f: \set{0,1}^k\to \set{0,1}^k$ be an $(r, s)$-rigid function, and $n\in \mathbb{N}$. Then, $f^{\otimes n}: \set{0,1}^{nk}\to \set{0,1}^{nk}$ is a $(cnr, cs)$-block-rigid function. \end{conjecture} We prove the following theorem. It is restated and proved as Theorem \ref{thm:tm_lb_explicit} in Section \ref{sec:tms}.
\begin{theorem}\label{intro_thm:tm_lb_explicit} Let $t:\mathbb{N}\to\mathbb{N}$ be any function such that $t(n) = \omega(n)$. Assuming Conjecture $\ref{intro_conj:rig_amp_to_block_rig}$, there exists an (explicitly given) function $f:\set{0,1}^*\to \set{0,1}^*$ such that \begin{enumerate} \item On inputs $x$ of length $n$ bits, the output $f(x)$ is of length at most $n$ bits. \item The function $f$ is computable by a multi-tape deterministic Turing machine that runs in time $O(t(n))$ on inputs of length $n$. \item The function $f$ is not computable by any multi-tape deterministic Turing machine that takes advice and runs in time $O(n)$ on inputs of length $n$. \end{enumerate} \end{theorem} More generally, we show that families of block-rigid functions cannot be computed by non-uniform Turing machines running in linear-time. This makes it interesting to find such families that are computable in polynomial time. The following theorem is restated as Theorem \ref{thm:tm_lb_blk_rig}. \begin{theorem}\label{intro_thm:tm_lb_blk_rig} Let $k: \mathbb{N} \to \mathbb{N}$ be a function such that $k(n)=\omega(1)$ and $k(n)=2^{o(n)}$, and $f_n:\set{0,1}^{nk(n)}\to\set{0,1}^{nk(n)}$ be a family of $(\epsilon nk(n), \epsilon k(n))$-block-rigid functions, for some constant $\epsilon >0$. Let $M$ be any multi-tape deterministic linear-time Turing machine that takes advice. Then, there exists $n\in \mathbb{N}$, and $x\in \set{0,1}^{nk(n)}$, such that $M(x)\not= f_n(x)$. \end{theorem} As another application, based on Conjecture \ref{intro_conj:rig_amp_to_block_rig}, we show size-depth tradeoffs for boolean circuits, where the gates compute arbitrary functions over large (with respect to the input size) sets (see Section \ref{sec:ckt_lb}). \subsection{Parallel Repetition} In a $k$-player game $\mathcal G$, questions $(x_1,\dots,x_k)$ are chosen from some joint distribution $\mu$. For each $j\in [k]$, player $j$ is given $x_j$ and gives an answer $a_j$ that depends only on $x_j$. 
The players are said to win if their answers satisfy a fixed predicate $V(x_1,\dots,x_k,a_1,\dots,a_k)$. We note that $V$ might be randomized, that is, it might depend on some random string that is sampled independently of $(x_1,\dots,x_k)$. The value of the game $\text{val}(\mathcal G)$ is defined to be the maximum winning probability over the possible strategies of the players. It is natural to consider the parallel repetition $\mathcal G^{\otimes n}$ of such a game $\mathcal G$. Now, the questions $(x_1^{(i)}, \dots, x_k^{(i)})$ are chosen from $\mu$, independently for each $i\in[n]$. For each $j\in [k]$, player $j$ is given $(x_j^{(1)}, \dots, x_j^{(n)})$ and gives answers $(a_j^{(1)}, \dots, a_j^{(n)})$. The players are said to win if the answers satisfy $V(x_1^{(i)},\dots,x_k^{(i)},a_1^{(i)},\dots,a_k^{(i)})$ for every $i\in [n]$. The value of the game $\text{val}(\mathcal G^{\otimes n})$ is defined to be the maximum winning probability over the possible strategies of the players. Note that the players are allowed to correlate their answers to different repetitions of the game. Parallel repetition of games was first studied in~\cite{FRS94}, owing to its relation with multiprover interactive proofs~\cite{BOGKW88}. It was hoped that the value $\text{val}(\mathcal G^{\otimes n})$ of the repeated game goes down as $\text{val}(\mathcal G)^n$. However, this is not the case, as shown in \cite{For89,Fei91,FV02,Raz11}. A lot is known about parallel repetition of 2-player games. The so-called parallel repetition theorem, first proved by Raz~\cite{Raz98} and further simplified and improved by Holenstein~\cite{Hol09}, shows that if $\text{val}(\mathcal G) < 1$, then $\text{val}(\mathcal G^{\otimes n}) \leq 2^{-\Omega(n/s)}$, where $s$ is the length of the answers given by the players. The bounds in this theorem were later made tight even for the case when the initial game has small value (see~\cite{DS14} and~\cite{BG15}).
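As a toy illustration of these definitions (ours, not the paper's), the following sketch computes the value of a small 2-player game and of its 2-fold repetition by exhaustive search over deterministic strategies; deterministic strategies suffice, since the winning probability is linear in each player's mixed strategy. The CHSH-style predicate below is just an example game.

```python
from itertools import product

def game_value(questions, answers, predicate):
    """Value of a 2-player game with uniformly distributed question pairs,
    by brute force over all deterministic strategies of both players."""
    qx = sorted({x for x, _ in questions})
    qy = sorted({y for _, y in questions})
    pairs = [(qx.index(x), qy.index(y), x, y) for x, y in questions]
    best = 0.0
    for f in product(answers, repeat=len(qx)):      # player 1's strategy
        for g in product(answers, repeat=len(qy)):  # player 2's strategy
            wins = sum(predicate(x, y, f[i], g[j]) for i, j, x, y in pairs)
            best = max(best, wins / len(questions))
    return best

# Example base game (CHSH): win iff a xor b = x and y; classical value 3/4.
base = [(x, y) for x in (0, 1) for y in (0, 1)]
V = lambda x, y, a, b: (a ^ b) == (x & y)
v1 = game_value(base, (0, 1), V)

# 2-fold parallel repetition: questions and answers become pairs, the
# players win iff they win both coordinates, and a player may correlate
# its two answers using both of its questions.
base2 = [((x1, x2), (y1, y2)) for (x1, y1) in base for (x2, y2) in base]
ans2 = list(product((0, 1), repeat=2))
V2 = lambda x, y, a, b: all(V(x[i], y[i], a[i], b[i]) for i in range(2))
v2 = game_value(base2, ans2, V2)
# v1**2 <= v2 <= v1 always: independent play gives the lower bound.
```

Running such a search is how one checks, for tiny games, whether the repeated value actually behaves like $\text{val}(\mathcal G)^n$.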
Much less is known for $k$-player games with $k\geq 3$. Verbitsky \cite{Ver96} showed that if $\text{val}(\mathcal G)<1$, then the value of the repeated game goes down to zero as $n$ grows larger. The result shows a very weak rate of decay, approximately equal to $\frac{1}{\alpha(n)}$, where $\alpha$ is the inverse-Ackermann function, owing to the use of the density Hales-Jewett theorem (see \cite{FK91} and \cite{Pol12}). A recent result by Dinur, Harsha, Venkat and Yuen \cite{DHVY17} shows exponential decay, but only in the special case of what they call \emph{expanding games}. This approach fails when the inputs to the players have strong correlations. In this paper (see Section \ref{sec:par_rep}), we show that a sufficiently strong parallel repetition theorem for multiplayer games implies Conjecture~\ref{intro_conj:rig_amp_to_block_rig}. The following theorem is proved formally as Theorem \ref{thm:parrep_implies_rigidity} in Section \ref{sec:par_rep}. \begin{theorem} There exists a family $\set{\mathcal G_{\mathcal S,k}}$ of $k$-player games (where $\mathcal S$ is some parameter), such that a strong parallel repetition theorem for all games in $\set{\mathcal G_{\mathcal S,k}}$ implies Conjecture~\ref{intro_conj:rig_amp_to_block_rig}. \end{theorem} Although the games in this family do not fit into the framework of \cite{DHVY17}, they satisfy some very special properties. Every $k$-player game in the family satisfies the following: \begin{enumerate} \item The questions to the $k$ players are chosen as follows: First, $k$ bits, $x_1,\dots,x_k \in \set{0,1}$, are drawn uniformly and independently. Each of the $k$ players sees some subset of these $k$ bits. \item The predicate $V$ satisfies the condition that on fixing the bits $x_1,\dots,x_k$, there is a unique accepting answer for each player (independently of all other answers) and the verifier accepts if every player answers with the accepting answer.
We refer to games that satisfy this property as {\it independent games}. \end{enumerate} We believe that these properties may allow us to prove strong upper bounds on the value of parallel repetition of such games, despite our lack of understanding of multiplayer parallel repetition. The bounds that we need are that parallel repetition reduces the value of such games from $v$ to $v^{\Omega(n)}$, where $n$ is the number of repetitions (as is proved in~\cite{DS14} and~\cite{BG15} for 2-player games). \subsection{Open Problems} \begin{enumerate} \item The main open problem is to make progress towards proving Conjecture \ref{intro_conj:rig_amp_to_block_rig}, possibly using the framework of parallel repetition. The remarks after Theorem \ref{thm:tm_lb_explicit} mention some weaker statements that suffice for our applications. The examples of matrix-transpose and matrix-product in Section \ref{sec:matrix_problems} also serve as interesting problems. \item Our techniques, which are based on \cite{PPST83}, heavily exploit the fact that the Turing machines have one-dimensional tapes. Time-space lower bounds for satisfiability in the case of multi-tape Turing machines with random access \cite{FLvMV05}, and Turing machines with one $d$-dimensional tape \cite{vMR05}, are known. Extending such results to the non-uniform setting is an interesting open problem. \item The question of whether a rigid matrix $A\in \mathbb F_2^{n\times n}$ is rigid when seen as a function $A:\set{0,1}^n\to\set{0,1}^n$ is very interesting (see Section \ref{sec:rig_mat_vs_func}). This question is closely related to a conjecture of Jukna and Schnitger \cite{JS11} on the linearization of depth-2 circuits. This is also related to the question of whether data structures for linear problems can be optimally linearized (see \cite{DGW19}).
We note that there are known examples of linear problems for which the best known data-structures are non-linear, without any known linear data-structure achieving the same bounds (see \cite{KU11}). \end{enumerate} \section{Preliminaries} Let $\mathbb{N}=\set{1,2,3,\dots}$ be the set of all natural numbers. For any $k\in \mathbb{N}$, we use $[k]$ to denote the set $\set{1,2,\dots,k}$. We use $\mathbb F$ to denote an arbitrary finite field, and $\mathbb F_2$ to denote the finite field on two elements. Let $x\in \set{0,1}^k$. For $i\in [k]$, we use $x_i$ to denote the $i$\textsuperscript{th} coordinate of $x$. For $S\subseteq [k]$, we denote by $x|_S$ the vector $(x_i)_{i\in S}$, which is the restriction of $x$ to coordinates in $S$. We also consider vectors $x\in \set{0,1}^{nk}$, for some $n\in \mathbb{N}$. We think of these as composed of $k$ blocks, each consisting of a vector in $\set{0,1}^n$. That is, $x = (x_{ij})_{i\in [n], j\in [k]}$. By abuse of notation, for $S\subseteq [k]$, we denote by $x|_S$ the vector $(x_{ij})_{i\in [n], j\in S}$, which is the restriction of $x$ to the blocks indexed by $S$. Let $A\in \mathbb F^{nk\times nk}$ be an $nk\times nk$ matrix. We think of $A$ as a block-matrix consisting of $k^2$ blocks, each block being an $n\times n$ matrix. That is, $A = (A_{ij})_{i, j\in [k]}$, where for all $i, j\in [k]$, $A_{ij}\in \mathbb F^{n\times n}$. For each $i\in [k]$, we call $(A_{ij})_{j\in [k]}$ the $i$\textsuperscript{th} block-row of $A$. For every $n\in \mathbb{N}$, we define $\log^* n = \min\{\ell\in \mathbb{N}\cup\set{0} : \underbrace{\log_2 \log_2 \dots \log_2}_{\ell \text{ times}}{n} \leq 1 \}.$ \section{Rigidity and Block Rigidity} \label{sec:rig_and_block_rig} \subsection{Rigidity}\label{subsec:rig} The concept of matrix rigidity was introduced by Valiant \cite{Val77}. It is defined as follows. 
\begin{definition}\label{def:rig_mat} A matrix $A \in \mathbb F^{n\times n}$ is said to be an $(r, s)$-rigid matrix if it cannot be written as $A = B+C$, where $B$ has rank at most $r$, and $C$ has at most $s$ non-zero entries in each row. \end{definition} Valiant \cite{Val77} showed the existence of rigid matrices by a simple counting argument. For the sake of completeness, we include this proof. \begin{proposition}\label{prop:rig_mat_exist} For any constant $0<\epsilon\leq\frac{1}{8}$, and any $n\in \mathbb{N}$, there exists a matrix $A\in \mathbb F^{n\times n}$ that is an $(\epsilon n, \epsilon n)$-rigid matrix. \end{proposition} \begin{proof} Fix any $0<\epsilon\leq\frac{1}{8}$. We bound the number of $n\times n$ matrices that are not $(\epsilon n, \epsilon n)$-rigid matrices. \begin{enumerate} \item Any $n\times n$ matrix with rank at most $r$ can be written as the product of an $n\times r$ and an $r \times n$ matrix. Hence, the number of matrices of rank at most $\epsilon n$ is at most $\norm{\mathbb F}^{2\epsilon n^2} \leq \norm{\mathbb F}^{\frac{n^2}{4}}$. \item The number of matrices that have at most $\epsilon n$ non-zero entries in each row is at most \[\brac{\binom{n}{\epsilon n}\norm{\mathbb F}^{\epsilon n}}^n \leq \brac{\frac{e}{\epsilon}}^{\epsilon n^2}\norm{\mathbb F}^{\epsilon n^2} = \norm{\mathbb F}^{\epsilon n^2(1 + \log_{\norm{\mathbb F}}{ \frac{e}{\epsilon}} )} < \norm{\mathbb F}^{\frac{3n^2}{4}}.\] We used the binomial estimate $\binom{n}{r} \leq \brac{\frac{en}{r}}^r$. \end{enumerate} Since each matrix that is not an $(r, s)$-rigid matrix can be written as the sum of a matrix with rank at most $r$, and a matrix with at most $s$ non-zero entries in each row, the number of matrices that are not $(\epsilon n,\epsilon n)$-rigid matrices is strictly less than $\norm{\mathbb F}^{\frac{n^2}{4}} \cdot \norm{\mathbb F}^{\frac{3n^2}{4}} = \norm{\mathbb F}^{n^2}$, which is the total number of $n\times n$ matrices.
\end{proof} Observe that a matrix $A \in \mathbb F_2^{n\times n}$ is not an $(r, s)$-rigid matrix if and only if there is a subspace $X\subseteq \mathbb F_2^n$ of dimension at least $n-r$, and a matrix $C$ with at most $s$ non-zero entries in each row, such that $Ax=Cx$ for all $x\in X$. We use this formulation to extend the concept of rigidity to general functions. \begin{definition}\label{def:rig_func} A function $f:\set{0,1}^n\to \set{0,1}^n$ is said to be an $(r, s)$-rigid function if for every subset $X\subseteq \set{0,1}^n$ of size at least $2^{n-r}$, and subsets $S_1,\dots,S_n\subseteq[n]$ of size $s$, and functions $g_1,\dots,g_n:\set{0,1}^s\to \set{0,1}$, there exists $x\in X$ such that $f(x) \not= (g_1(x|_{S_1}), g_2(x|_{S_2}), \dots, g_n(x|_{S_n}))$. \end{definition} Using a counting argument similar to that of Proposition \ref{prop:rig_mat_exist}, we show the existence of rigid functions. \begin{proposition}\label{prop:rig_func_exist} For any constant $0<\epsilon\leq\frac{1}{8}$, and any (large enough) integer $n$, there exists a function $f:\set{0,1}^n \to \set{0,1}^n$ that is an $(\epsilon n, \epsilon n)$-rigid function. \end{proposition} \begin{proof} Fix any constant $0<\epsilon\leq\frac{1}{8}$, and any integer $n \geq \frac{2}{\epsilon}$. We count the number of functions $f:\set{0,1}^n\to \set{0,1}^n$ that are not $(\epsilon n, \epsilon n)$-rigid functions. \begin{enumerate} \item Note that in Definition \ref{def:rig_func}, it is without loss of generality to assume that $\norm{X} = 2^{n-\epsilon n}$. The number of subsets $X\subseteq \set{0,1}^n$ of size $2^{n-\epsilon n}$ is $\binom{2^n}{2^{n-\epsilon n}} \leq \brac{e2^{\epsilon n}}^{2^{n-\epsilon n}} < 2^{2\epsilon n 2^{n-\epsilon n}} \leq 2^{\frac{n}{4} 2^{n-\epsilon n}}.$ \item The number of subsets $S_1,\dots,S_n \subseteq[n]$ of size $\epsilon n$ is at most $ \binom{n}{\epsilon n}^n < n^{\epsilon n^2} < 2^{\frac{n}{4} 2^{n-\epsilon n}}$.
\item The number of functions $g_1,\dots,g_n: \set{0,1}^{\epsilon n}\to \set{0,1}$ is at most $ 2^{n2^{\epsilon n}} < 2^{\frac{n}{4} 2^{n-\epsilon n}}$. \item The number of choices for values of $f$ on $\set{0,1}^n \setminus X$ is at most $2^{n(2^n-\norm{X})} \leq 2^{n(2^n-2^{n-\epsilon n})}$. \end{enumerate} Hence, the total number of functions $f: \set{0,1}^n\to \set{0,1}^n$ that are not $(\epsilon n, \epsilon n)$-rigid functions is strictly less than $ \brac{2^{\frac{n}{4} 2^{n-\epsilon n}}}^3 2^{n2^n-n2^{n-\epsilon n}} < 2^{n2^n}.$ \end{proof} \subsection{Block Rigidity}\label{subsec:block_rig} In this section, we introduce the notion of block rigidity. \begin{definition}\label{def:block_rig_mat} A matrix $A \in \mathbb F^{nk\times nk}$ is said to be an $(r,s)$-block-rigid matrix if it cannot be written as $A = B+C$, where $B$ has rank at most $r$, and $C$ has at most $s$ non-zero matrices in each block-row. \end{definition} Observe that if $A\in \mathbb F^{nk\times nk}$ is an $(r,ns)$-rigid matrix, then it is also an $(r,s)$-block-rigid matrix. Combining this with Proposition \ref{prop:rig_mat_exist}, we get the following. \begin{observation} For any constant $0<\epsilon\leq\frac{1}{8}$, and positive integers $n,k$, there exists an $(\epsilon nk, \epsilon k)$-block-rigid matrix $A\in \mathbb F^{nk\times nk}$. \end{observation} Following the definition of rigid functions in Section \ref{subsec:rig}, we define block-rigid functions as follows. \begin{definition}\label{def:block_rig_func} A function $f: \set{0,1}^{nk}\to \set{0,1}^{nk}$ is said to be an $(r,s)$-block-rigid function if for every subset $X\subseteq \set{0,1}^{nk}$ of size at least $2^{nk-r}$, and subsets $S_1,\dots,S_k\subseteq[k]$ of size $s$, and functions $g_1,\dots,g_k: \set{0,1}^{ns}\to \set{0,1}^n$, there exists $x\in X$ such that $f(x) \not= (g_1(x|_{S_1}), g_2(x|_{S_2}), \dots, g_k(x|_{S_k}))$.
\end{definition} Observe that if $f : \set{0,1}^{nk}\to \set{0,1}^{nk}$ is an $(r,ns)$-rigid function, then it is also an $(r,s)$-block-rigid function. Combining this with Proposition \ref{prop:rig_func_exist}, we get the following. \begin{observation} For any constant $0<\epsilon\leq\frac{1}{8}$, and (large enough) integers $n,k$, there exists an $(\epsilon nk, \epsilon k)$-block-rigid function $f: \set{0,1}^{nk}\to \set{0,1}^{nk}$. \end{observation} Note that $n=1$ in the definition of block-rigid matrices (functions) gives the usual definition of rigid matrices (functions). For our applications, we will mostly be interested in the case when $n$ is much larger than $k$. \subsection{Rigidity Amplification} A natural question is whether there is a way to amplify rigidity. That is, given a rigid matrix (function), is there a way to obtain a larger matrix (function) which is rigid, or even block-rigid? \begin{definition}\label{def:f_tens_idn} Let $f: \set{0,1}^k\to \set{0,1}^k$ be any function. Define $f^{\otimes n}: \set{0,1}^{nk}\to \set{0,1}^{nk}$ as follows. Let $x = (x_{ij})_{i\in[n],j\in[k]} \in \set{0,1}^{nk}$ and $i\in [n], j\in [k]$. The $(i,j)$\textsuperscript{th} coordinate of $f^{\otimes n}(x)$ is defined to be the $j$\textsuperscript{th} coordinate of $f(x_{i1}, x_{i2},\dots, x_{ik})$. \end{definition} In other words, applying $f^{\otimes n}$ on $x\in \set{0,1}^{nk}$ is the same as applying $f$ on $(x_{i1}, x_{i2},\dots, x_{ik})$, in place, for each $i\in [n]$. For a linear function given by a matrix $A\in \mathbb F_2^{k\times k}$, this operation corresponds to $A\otimes I_n$, where $I_n$ is the $n\times n$ identity matrix, and $\otimes$ denotes the Kronecker product of matrices. It is easy to see that if $f$ is not rigid, then $f^{\otimes n}$ is not block-rigid. \begin{observation}\label{obs:rig_amp_converse} Suppose $f: \set{0,1}^k\to \set{0,1}^k$ is not an $(r,s)$-rigid function. Then $f^{\otimes n}$ is not an $(nr,s)$-block-rigid function.
\end{observation} The converse of Observation \ref{obs:rig_amp_converse} is more interesting. We believe that it is true, and restate Conjecture \ref{intro_conj:rig_amp_to_block_rig} below. \begin{conjecture}\label{conj:rig_amp_to_block_rig} There exists a universal constant $c>0$ such that the following is true. Let $f: \set{0,1}^k\to \set{0,1}^k$ be an $(r, s)$-rigid function, and $n\in \mathbb{N}$. Then, $f^{\otimes n}: \set{0,1}^{nk}\to \set{0,1}^{nk}$ is a $(cnr, cs)$-block-rigid function. \end{conjecture} \section{Parallel Repetition}\label{sec:par_rep} In this section, we show an approach to proving Conjecture \ref{conj:rig_amp_to_block_rig} regarding rigidity amplification. This is based on proving a strong parallel repetition theorem for a $k$-player game. Fix some $k\in \mathbb{N}$, a function $f: \set{0,1}^{k} \to \set{0,1}^{k}$, an integer $1\leq s<k$, and $\mathcal S = (S_1,\dots,S_k)$, where each $S_i \subseteq[k]$ is of size $s$. We define a $k$-player game $\mathcal G_{\mathcal S}$ as follows: The $k$ players choose functions $g_1,\dots,g_k: \set{0,1}^s\to \set{0,1}$, which we call a strategy. A verifier chooses $x_1,\dots,x_k \in \set{0,1}$ uniformly and independently. Let $x = (x_1,\dots,x_k)\in \set{0,1}^k$. For each $j\in[k]$, Player $j$ is given the input $x|_{S_j}$, and they answer $a_j = g_j(x|_{S_j}) \in \set{0,1}$. The verifier accepts if and only if $f(x) = (a_1,\dots,a_k)$. The goal of the players is to maximize the winning probability. Formally, the value of the game is defined as \[\text{val}(\mathcal G_{\mathcal S}) := \max_{g_1,\dots,g_k} \Pr_{x_1,\dots,x_k\in \set{0,1}}\sqbrac{f(x) = \brac{g_1(x|_{S_1}),\dots, g_k(x|_{S_k}) }}.\] The $n$-fold repetition of $\mathcal G_{\mathcal S}$, denoted by $\mathcal G_{\mathcal S}^{\otimes n}$, is defined as follows. The players choose a strategy $g_1,\dots,g_k: \set{0,1}^{ns}\to \set{0,1}^n$. The verifier chooses $x_1,\dots,x_k \in \set{0,1}^n$ uniformly and independently.
Let $x = (x_1,\dots,x_k)\in \set{0,1}^{nk}$. Player $j$ is given the input $x|_{S_j}$, and they answer $a_j=g_j(x|_{S_j}) \in \set{0,1}^n$. The verifier accepts if and only if $f^{\otimes n}(x) = (a_1,\dots,a_k)$. That is, for each $i\in[n]$, $j\in[k]$, the $j$\textsuperscript{th} bit of $f(x_{i1},\dots,x_{ik})$ equals the $i$\textsuperscript{th} bit of $a_j$. The value of this repeated game is \[\text{val}(\mathcal G_{\mathcal S}^{\otimes n}) := \max_{g_1,\dots,g_k} \Pr_{x_1,\dots,x_k\in \set{0,1}^n}\sqbrac{f^{\otimes n}(x) = \brac{g_1(x|_{S_1}),\dots, g_k(x|_{S_k}) }}.\] From Definition \ref{def:block_rig_func}, we get the following: \begin{observation}\label{obs:block_rig_func_game_val} Let $f: \set{0,1}^{k} \to \set{0,1}^{k}$ be a function, and $n\in \mathbb{N}$. Then, $f^{\otimes n}$ is an $(r,s)$-block-rigid function if and only if for every $\mathcal S = (S_1,\dots,S_k)$ where each $S_i\subseteq[k]$ has size $s$, $\text{val}(\mathcal G_{\mathcal S}^{\otimes n}) < 2^{-r}$. \end{observation} \begin{proof} Let $f^{\otimes n}$ be an $(r,s)$-block-rigid function. Suppose, for the sake of contradiction, that $\mathcal S = (S_1,\dots,S_k)$ is such that $\text{val} (\mathcal G_{\mathcal S}^{\otimes n}) \geq 2^{-r}$. Let the functions $g_1,\dots,g_k: \set{0,1}^{ns}\to \set{0,1}^n$ be an optimal strategy for the players. Define $X:= \set{x\in \set{0,1}^{nk} \,|\, f^{\otimes n}(x) = \brac{g_1(x|_{S_1}),\dots, g_k(x|_{S_k}) } }$. Then, $\norm{X} = \text{val}(\mathcal G_{\mathcal S}^{\otimes n}) \cdot 2^{nk} \geq 2^{nk-r}$, which contradicts the block rigidity of $f^{\otimes n}$. Conversely, suppose that $f^{\otimes n}$ is not $(r,s)$-block-rigid. Then, there exists $X\subseteq \set{0,1}^{nk}$ with $\norm{X}\geq 2^{nk-r}$, subsets $S_1,\dots,S_k\subseteq [k]$ of size $s$, and functions $g_1,\dots,g_k: \set{0,1}^{ns}\to \set{0,1}^n$, such that for all $x\in X$, $f^{\otimes n}(x) = (g_1(x|_{S_1}),\dots,g_k(x|_{S_k}))$. Let $\mathcal S = (S_1,\dots,S_k)$, and suppose the players use strategy $g_1,\dots,g_k$.
Then, $\text{val}(\mathcal G_{\mathcal S}^{\otimes n}) \geq \norm{X} \cdot 2^{-nk} \geq 2^{-r}$. \end{proof} In particular, for $n=1$, Observation \ref{obs:block_rig_func_game_val} gives the following: \begin{observation}\label{obs:rig_func_game_val} A function $f: \set{0,1}^{k} \to \set{0,1}^{k}$ is an $(r,s)$-rigid function if and only if for every $\mathcal S = (S_1,\dots,S_k)$ where each $S_i\subseteq[k]$ has size $s$, $\text{val}(\mathcal G_{\mathcal S}) < 2^{-r}$. \end{observation} We conjecture the following strong parallel repetition theorem. \begin{conjecture}\label{conj:par_rep} There exists a constant $c>0$ such that the following is true. Let $f: \set{0,1}^k\to \set{0,1}^k$ be any function, and $\mathcal S = (S_1,\dots,S_k)$ be such that for each $i\in [k]$, $S_i\subseteq[k]$ is of size $s$. Then, for all $n\in \mathbb{N}$, $\text{val}(\mathcal G_{\mathcal S}^{\otimes n}) \leq (\text{val}(\mathcal G_{\mathcal S}))^{cn}$. \end{conjecture} Combining Observations \ref{obs:block_rig_func_game_val} and \ref{obs:rig_func_game_val}, we get the following: \begin{theorem} \label{thm:parrep_implies_rigidity} Conjecture \ref{conj:par_rep} $\Longrightarrow$ Conjecture \ref{conj:rig_amp_to_block_rig}. \end{theorem} \begin{remark} \begin{enumerate}[label=(\roman*)] \item By looking only at some particular player, it can be shown that if ${\text{val}(\mathcal G_{\mathcal S})<1}$, then $\text{val}(\mathcal G_{\mathcal S}^{\otimes n}) \leq 2^{-\Omega(n)}$. In fact, such a result holds for all \emph{independent games}. The harder part seems to be showing strong parallel repetition when the initial game has small value. \item Observe that the game $\mathcal G_{\mathcal S}$ has a randomized predicate in the case $\cup_{j=1}^k S_j \not= [k]$. This condition can be removed (even for general independent games) by introducing a new player. This player is given the random string used by the verifier, and is always required to answer a single bit equal to zero.
This maintains the independent game property, and ensures that the predicate used by the verifier is a deterministic function of the vector of input queries to the players. \end{enumerate} \end{remark} \section{Turing Machine Lower Bounds}\label{sec:tms} In this section, we show a conditional super-linear lower bound for multi-tape deterministic Turing machines that can take advice. Without loss of generality, we only consider machines that have a separate read-only input tape. We assume that the advice string, which is a function of the input length, is written on a separate advice tape at the beginning of computation. We are interested in machines that compute multi-output functions. For this, we assume that at the end of computation, the machine writes the entire output on a separate write-only output tape, and then halts. We consider the following problem. \begin{definition} Let $k:\mathbb{N}\to\mathbb{N}$ be a function. We define the problem $\text{Tensor\textsubscript{k}}$ as follows: \begin{itemize} \item[-] \emph{Input:} $(f,x)$, where $f:\set{0,1}^k\to\set{0,1}^k$ is a function, and $x\in \set{0,1}^{nk}$, for some $n\in\mathbb{N}$ and $k = k(n)$. \item[-] \emph{Output:} $f^{\otimes n}(x) \in \set{0,1}^{nk}$. \end{itemize} The function $f$ is given as input in the form of its entire truth table. The input $x = (x_{ij})_{i\in [n],j\in[k]}$ is given in the order $(x_{11},\dots,x_{n1},x_{12},\dots,x_{n2},\dots,x_{1k},\dots,x_{nk})$. The total length of the input is $m(n) := 2^kk + nk$. \end{definition} We observe that if the function $k:\mathbb{N}\to \mathbb{N}$ grows very slowly with $n$, the problem $\text{Tensor\textsubscript{k}}$ can be solved by a deterministic Turing machine in slightly super-linear time. \begin{observation}\label{obs:sup_lin_tm_ub} Let $k:\mathbb{N}\to\mathbb{N}$ be a function. 
There exists a deterministic Turing machine that solves the problem $\text{Tensor\textsubscript{k}}$ in time $O(nk2^k)$, on input $(f,x)$ of length $nk+2^kk$, where $k = k(n)$. \end{observation} \begin{proof} We note that applying $f^{\otimes n}$ on $x = (x_{ij})_{i\in[n],j\in[k]}$ is the same as applying $f$ on $(x_{i1}, x_{i2},\dots, x_{ik})$, in place for each $i\in [n]$. A Turing machine can do the following: \begin{enumerate} \item Find $n$ and $k$, using the fact that the description of $f$ is of length $2^kk$, and that of $x$ is of length $nk$. \item Rearrange the input so that for each $i\in [n]$, the part $(x_{i1}, x_{i2},\dots, x_{ik})$ is written consecutively on the tape. \item Using the truth table of $f$, compute the output for each such part. \item Rearrange the entire output back to the desired form. \end{enumerate} The total time taken is $O(nk+nk^2 + nk2^k+nk^2) = O(nk2^k)$. \end{proof} We now state and prove the main technical theorem of this section. \begin{theorem}\label{thm:tm_advice_lb} Let $k:\mathbb{N}\to\mathbb{N}$ be a function such that $k(n) = \omega(1)$ and $k(n) = o(\log_2n)$. Let $m = m(n):= 2^kk + nk$, where $k = k(n)$. Suppose $M$ is a deterministic multi-tape Turing machine that takes advice, and runs in linear time in the length of its input. Assuming Conjecture \ref{conj:rig_amp_to_block_rig}, the machine $M$ does not solve the problem $\text{Tensor\textsubscript{k}}$ correctly for all inputs. That is, there exists $n\in \mathbb{N}$, and $y = (f,x) \in \set{0,1}^m$ such that $M(y) \not= f^{\otimes n}(x)$. \end{theorem} There are two main technical ideas that will be useful to us. The first is the notion of block-respecting Turing machines, defined by Hopcroft, Paul and Valiant \cite{HPV77}. The second is a graph theoretic result, which was proven by Paul, Pippenger, Szemer{\'{e}}di and Trotter \cite{PPST83}, and was used to show a separation between deterministic and non-deterministic linear time. 
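Before turning to these tools, the rearrange-and-lookup procedure from the proof of Observation \ref{obs:sup_lin_tm_ub} can be made concrete. The sketch below (illustrative Python, not the paper's machine model) takes the truth table of $f$ and the input $x$ in the block order $(x_{11},\dots,x_{n1},\dots,x_{1k},\dots,x_{nk})$, applies $f$ in place for each $i\in[n]$, and writes the output back in the same order.

```python
from itertools import product

def tensor_apply(truth_table, x, n, k):
    """Compute f^{tensor n}(x). truth_table maps each k-bit tuple v to f(v);
    x is laid out block-by-block, block j holding (x_{1j}, ..., x_{nj})."""
    assert len(x) == n * k
    out = [0] * (n * k)
    for i in range(n):
        row = tuple(x[j * n + i] for j in range(k))  # gather (x_{i1},...,x_{ik})
        y = truth_table[row]                         # one truth-table lookup
        for j in range(k):
            out[j * n + i] = y[j]                    # scatter back in place
    return tuple(out)

# Example with k = 2, n = 3: f swaps its two input bits, so f^{tensor 3}
# swaps the two blocks of x.
tt = {v: (v[1], v[0]) for v in product((0, 1), repeat=2)}
x = (1, 0, 1, 0, 0, 1)  # block 1 = (1,0,1), block 2 = (0,0,1)
assert tensor_apply(tt, x, 3, 2) == (0, 0, 1, 1, 0, 1)
```

The gather and scatter steps correspond to the two rearrangement passes in the proof, and the lookup to the truth-table step; the $O(nk2^k)$ bound accounts for scanning the table once per part in the worst case.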
\begin{definition}\label{def:block_resp} Let $M$ be a Turing machine, and let $b:\mathbb{N}\to\mathbb{N}$ be a function. Partition the computation of $M$, on any input $y$ of length $m$, into time \emph{segments} of length $b(m)$, with the last segment having length at most $b(m)$. Also, partition each of the tapes of $M$ into \emph{blocks}, each consisting of $b(m)$ contiguous cells. We say that $M$ is block-respecting with respect to block size $b$, if on inputs of length $m$, the tape heads of $M$ cross blocks only at times that are integer multiples of $b(m)$. \end{definition} \begin{lemma}\label{lemma:hpv_block_resp}\cite{HPV77} Let $t:\mathbb{N}\to\mathbb{N}$ be a function, and $M$ be a multi-tape deterministic Turing machine running in time $t(m)$ on inputs of length $m$. Let $b:\mathbb{N}\to\mathbb{N}$ be a function such that $b(m)$ is computable from $1^m$ by a multi-tape deterministic Turing machine running in time $O(t(m))$. Then, the language recognized by $M$ is also recognized by a multi-tape deterministic Turing Machine $M'$, which runs in time $O(t(m))$, and is block-respecting with respect to $b$. \end{lemma} The rest of this section is devoted to the proof of Theorem \ref{thm:tm_advice_lb}. Let $k:\mathbb{N}\to\mathbb{N}$ be a function such that $k(n) = \omega(1)$ and $k(n) = o(\log_2n)$. Let $m = m(n) := 2^kk+nk$, where $k = k(n)$. Suppose that $M$ is a deterministic multi-tape Turing Machine, which on input $y = (f,x) \in \set{0,1}^m$, takes advice, runs in time $O(m)$, and outputs $f^{\otimes n}(x)$. Let $b:\mathbb{N}\to\mathbb{N}$ be a function such that $b(m) = n$. By our assumption that $k(n) = o(\log_2 n)$, we can assume that the input $y = (f,x) \in \set{0,1}^m$ consists of $k+1$ blocks, where the first block contains $f$ (possibly padded by blank symbols to the left), and the remaining $k$ blocks contain $x$. By Lemma \ref{lemma:hpv_block_resp}, we can further assume that $M$ is block-respecting with respect to $b$. 
Note that we can assume $n$ to be a part of the advice, and hence we need not worry about the computability of $b$. For an input $y = (f,x) \in \set{0,1}^m$, the number of time segments for which $M$ runs on $y$ is at most $\frac{O(m)}{b(m)} = \frac{O(nk)}{n} = O(k) =: a(n)$. We define the computation graph $G_M(y)$, for input $y\in\set{0,1}^m$, as follows. \begin{definition}\label{def:comp_graph} The vertex set of $G_M(y)$ is defined to be $V_M(y) = \set{1,\dots,a(n)}$. For each $1\leq i <j\leq a(n)$, the edge set $E_M(y)$ has the edge $(i,j)$, if either \begin{enumerate}[label=(\roman*)] \item $j=i+1$, or \item there is some tape, such that, during the computation on $y$, $M$ visits the same block on that tape in both time-segments $i$ and $j$, and never visits that block during any time-segment strictly between $i$ and $j$. \end{enumerate} \end{definition} In a directed acyclic graph $G$, we say that a vertex $u$ is a predecessor of a vertex $v$, if there exists a directed path from $u$ to $v$. \begin{lemma}\label{lemma:ppst_graph_result} \cite{PPST83} For every $y$, the graph $G_M(y)$ satisfies the following: \begin{enumerate} \item Each vertex in $G_M(y)$ has degree $O(1)$. \item There exists a set of vertices $J\subset V_M(y)$ in $G_M(y)$, of size $O\brac{\frac{a(n)}{\log^*{a(n)}}}$, such that every vertex of $G_M(y)$ has at most $O\brac{\frac{a(n)}{\log^*{a(n)}}}$ many predecessors in the induced subgraph on the vertex set $V_M(y)\setminus J$. \end{enumerate} We note that the constants here might depend on the number of tapes of $M$. \end{lemma} \begin{lemma}\label{lemma:out_small_no_inp_blocks} Let $\epsilon>0$ be any constant and $\mathcal Y\subseteq \set{0,1}^m$ be any subset of the inputs.
For all (large enough) $n$, there exists a subset $Y\subseteq \mathcal Y$ of size $\norm{Y} \geq \norm{\mathcal Y}\cdot 2^{-\epsilon nk}$, and subsets $S_1,\dots,S_k\subseteq [k]$ of size $\epsilon k$, such that for each $y = (f,x)\in Y$, and each $i\in[k]$, the $i$\textsuperscript{th} block (of length $n$) of $f^{\otimes n}(x)$ can be written as a function of $x|_{S_i}$ and the truth-table of $f$. \end{lemma} \begin{proof} For input $y = (f,x)\in \set{0,1}^m$, let $J(y) \subset V_M(y)$ be a set as in Lemma \ref{lemma:ppst_graph_result}. Let $C(y)$ denote the following information about the computation of $M$: \begin{enumerate}[label=(\roman*)] \item The internal state of $M$ at the end of each time-segment. \item The position of all tape heads at the end of each time-segment. \item For each time segment in $J(y)$, and for each tape of $M$, the final transcription (of length $n$) of the block that was visited on this tape during this segment. \end{enumerate} Let $g:\mathcal Y\to\set{0,1}^*$ be the function given by $g(y) = (G_M(y),J(y),C(y))$. Observe that the output of $g$ can be described using $O\brac{k\log_2{k} +\frac{nk}{\log^*{k}}}$ bits. By our assumption that $k(n) = \omega(1)$ and $k(n) = o(\log_2 n)$, we have that for large $n$, this is at most $\epsilon nk$ bits. Hence, there exists a set $Y \subseteq \mathcal Y$ of size $\norm{Y}\geq \norm{\mathcal Y}\cdot 2^{-\epsilon nk}$, such that for each $y\in Y$, $g(y)$ takes on some fixed value $(G = (V,E),J,C)$. Now, consider any $y = (f,x)\in Y$. The machine writes the $k$ blocks of the output $f^{\otimes n}(x)$ on the output tape in the last $k$ time segments before halting. For each of these time segments, the corresponding vertex in $G$ has at most $O\brac{\frac{k}{\log^*{k}}} \leq \epsilon k$ predecessors in the induced subgraph on $V\setminus J$. These further correspond to at most $\epsilon k$ distinct blocks of $y$ that are visited (on the input tape) during these predecessor time segments. 
Since the relevant block transcriptions at the end of time segments for vertices in $J$ are fixed in $C$, each output block can be written as a function of at most $\epsilon k$ blocks of $y$. For the $i$\textsuperscript{th} block of output, without loss of generality, this includes the first block of $y$, which contains the truth table of $f$, and blocks of $x$ which are indexed by some subset $S_i\subseteq [k]$ of size $\epsilon k$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:tm_advice_lb}] Let $\delta=\frac{1}{8}$. Fix some sufficiently large $n$, and a $(\delta k,\delta k)$-rigid function $f_0:\set{0,1}^k\to\set{0,1}^k$. The existence of such a function is guaranteed by Proposition \ref{prop:rig_func_exist}. By Conjecture \ref{conj:rig_amp_to_block_rig}, the function $f_0^{\otimes n}$ is an $(\epsilon nk, \epsilon k)$-block-rigid function for some constant $\epsilon >0$. On the other hand, Lemma \ref{lemma:out_small_no_inp_blocks}, with $\mathcal Y = \set{(f_0,x): x\in\set{0,1}^{nk}}$, shows that $f_0^{\otimes n}$ is not an $(\epsilon nk, \epsilon k)$-block-rigid function for any constant $\epsilon > 0$. \end{proof} We now restate and prove Theorem \ref{intro_thm:tm_lb_explicit}. \begin{theorem}\label{thm:tm_lb_explicit} Let $t:\mathbb{N}\to\mathbb{N}$ be any function such that $t(n) = \omega(n)$. Assuming Conjecture \ref{conj:rig_amp_to_block_rig}, there exists a function $f:\set{0,1}^*\to \set{0,1}^*$ such that \begin{enumerate} \item On inputs $x$ of length $n$ bits, the output $f(x)$ is of length at most $n$ bits. \item The function $f$ is computable by a multi-tape deterministic Turing machine that runs in time $O(t(n))$ on inputs of length $n$. \item The function $f$ is not computable by any multi-tape deterministic Turing machine that takes advice and runs in time $O(n)$ on inputs of length $n$.
\end{enumerate} \end{theorem} \begin{proof} Let $k:\mathbb{N}\to\mathbb{N}$ be a function such that $k(n) = \omega(1)$, $k(n) = o(\log_2{n})$, and $nk2^k \leq t(2^kk+nk)$. The theorem then follows from Observation \ref{obs:sup_lin_tm_ub} and Theorem $\ref{thm:tm_advice_lb}$. \end{proof} \begin{remark} \begin{enumerate}[label=(\roman*)] \item We note that for the proof of Theorem \ref{thm:tm_advice_lb}, it suffices to find, for infinitely many $n$, a single function $f:\set{0,1}^{k(n)}\to\set{0,1}^{k(n)}$ such that $f^{\otimes n}$ is an $(\epsilon nk, \epsilon k)$-block-rigid function, where $\epsilon >0$ is a constant. This would show that $M$ cannot give the correct answer to $\text{Tensor\textsubscript{k}}$ for inputs of the form $(f,x)$, where $x\in \set{0,1}^{nk}$. \item For the proof of Theorem \ref{thm:tm_advice_lb}, it can be shown that it suffices for the following condition to hold for infinitely many $n$, and some constant $\epsilon >0$. Let $S_1,\dots,S_k\subseteq [k]$ be fixed sets of size $\epsilon k$, and $f:\set{0,1}^k\to\set{0,1}^k$ be a function chosen uniformly at random. Then, with probability at least $1-2^{-\omega(k\log_2{k})}$, the function $f^{\otimes n}$ is an $(\epsilon nk, \epsilon k)$-block-rigid function against the fixed sets $S_1,\dots,S_k$. We note that the probability here is not good enough to be able to union bound over $S_1,\dots,S_k$ and get a single function as mentioned in the previous remark. \end{enumerate} \end{remark} Essentially the same argument as that of Theorem \ref{thm:tm_advice_lb} also proves Theorem \ref{intro_thm:tm_lb_blk_rig}, which we restate below. \begin{theorem}\label{thm:tm_lb_blk_rig} Let $k: \mathbb{N} \to \mathbb{N}$ be a function such that $k(n)=\omega(1)$ and $k(n)=2^{o(n)}$, and $f_n:\set{0,1}^{nk(n)}\to\set{0,1}^{nk(n)}$ be a family of $(\epsilon nk(n), \epsilon k(n))$-block-rigid functions, for some constant $\epsilon >0$. 
Let $M$ be any multi-tape deterministic linear-time Turing machine that takes advice. Then, there exists $n\in \mathbb{N}$, and $x\in \set{0,1}^{nk(n)}$, such that $M(x)\not= f_n(x)$. \end{theorem} The above theorem makes it interesting to find families of block-rigid functions that are computable in polynomial time. \section{Size-Depth Tradeoffs}\label{sec:ckt_lb} In this section, we will consider boolean circuits over a set $F$. These are directed acyclic graphs with each node $v$ labelled either as an input node or by an arbitrary function $g_v:F\times F\to F$. The input nodes have in-degree 0 and all other nodes have in-degree 2. Some nodes are further labelled as output nodes, and they compute the outputs (in the usual manner), when the inputs are from the set $F$. The size of the circuit is defined to be the number of edges in the graph. The depth of the circuit is defined to be the length of a longest directed path from an input node to an output node. Valiant \cite{Val77} showed that if $A\in \mathbb F^{n\times n}$ is an $(\epsilon n, n^{\epsilon})$-rigid matrix for some constant $\epsilon >0$, then the corresponding function cannot be computed by an $O(n)$-size and $O(\log_2{n})$-depth linear circuit over $\mathbb F$. By a linear circuit, we mean that each gate computes a linear function (over $\mathbb F$) of its inputs. A similar argument can be used to prove the following. \begin{lemma} \cite{Val77} \label{lemma:rgd_fn_ckt_lb} Suppose $f:\set{0,1}^{nk}\to \set{0,1}^{nk}$ is an $(\epsilon nk, k^{\epsilon})$-block-rigid function, for some constant $\epsilon >0$. Then, the function $g:(\set{0,1}^{n})^k\to (\set{0,1}^{n})^k$ corresponding to $f$ cannot be computed by an $O(k)$-size and $O(\log_2{k})$-depth circuit over the set $F = \set{0,1}^n$. \end{lemma} \begin{theorem}\label{thm:ckt_lb_explicit} Let $k:\mathbb{N}\to\mathbb{N}$ be a function such that $k(n) = \omega(1)$ and $k(n) = o(\log_2{n})$. Let $m = m(n):= 2^kk + nk$, where $k = k(n)$. 
Assuming Conjecture \ref{conj:rig_amp_to_block_rig}, the problem $\text{Tensor\textsubscript{k}}$ is not solvable by $O(k)$-size and $O(\log_2{k})$-depth circuits over the set $F = \set{0,1}^n$. Here, the input $(f,x)$ to the circuit is given in the form of $k+1$ elements in $\set{0,1}^n$, the first one being the truth table of $f$, and the remaining $k$ being the blocks of $x$. \end{theorem} \begin{proof} Let $\delta = \frac{1}{8}$. Fix some large $n$, and a $(\delta k,\delta k)$-rigid function $f_0:\set{0,1}^k\to\set{0,1}^k$, where $k = k(n)$. The existence of such a function is guaranteed by Proposition \ref{prop:rig_func_exist}. Assuming Conjecture \ref{conj:rig_amp_to_block_rig}, the function $f_0^{\otimes n}$ is an $(\epsilon nk, \epsilon k)$-block-rigid function, for some universal constant $\epsilon >0$. By Lemma \ref{lemma:rgd_fn_ckt_lb}, the corresponding function on $(\set{0,1}^n)^k$ cannot be computed by an $O(k)$-size and $O(\log_2{k})$-depth circuit over $F = \set{0,1}^n$. Since $f_0$ can be hard-wired in any circuit solving $\text{Tensor\textsubscript{k}}$, we have the desired result. \end{proof} \section{Rigid Matrices and Rigid Functions}\label{sec:rig_mat_vs_func} A natural question to ask is whether the functions corresponding to rigid matrices are rigid functions or not. \begin{conjecture}\label{conj:rig_mat_are_rig_func} There exists a universal constant $c>0$ such that whenever $A\in \mathbb F_2^{n\times n}$ is an $(r,s)$-rigid matrix, the corresponding function $A: \mathbb F_2^n\to \mathbb F_2^n$ is a $(cr,cs)$-rigid function. \end{conjecture} We show that a positive answer to the above resolves a closely related conjecture by Jukna and Schnitger \cite{JS11}. \begin{definition} Consider a depth-2 circuit, with $x=(x_1,\dots, x_n)$ as the input variables, $w$ gates in the middle layer, computing boolean functions $h_1,\dots,h_w$ and $m$ output gates, computing boolean functions $g_1,\dots,g_m$. 
The circuit computes a function $f=(f_1,\dots,f_m): \mathbb F_2^n\to \mathbb F_2^m$ satisfying $f_i(x_1,\dots,x_n) = g_i(x,h_1(x),\dots,h_w(x))$, for each $i\in[m]$. The width of the circuit is defined to be $w$. The degree of the circuit is defined to be the maximum, over all gates $g_i$, of the number of wires going directly from the inputs $x_1,\dots,x_n$ to $g_i$. \end{definition} We remark that Lemma \ref{lemma:out_small_no_inp_blocks} essentially shows that any function computable by a deterministic linear-time Turing machine has a depth-2 circuit of small width and small `block-degree'. \begin{conjecture}\cite{JS11}\label{conj:lin_depth_two_ckt} Suppose $f: \mathbb F_2^n\to \mathbb F_2^n$ is a linear function computable by a depth-2 circuit with width $w$ and degree $d$. Then, $f$ is computable by a depth-2 circuit, with width $O(w)$ and degree $O(d)$, each of whose gates computes a linear function. \end{conjecture} \begin{observation} Conjecture \ref{conj:rig_mat_are_rig_func} $\Longrightarrow$ Conjecture \ref{conj:lin_depth_two_ckt}. \end{observation} \begin{proof} Suppose $f: \mathbb F_2^n\to \mathbb F_2^n$ is a linear function computable by a depth-2 circuit with width $w$ and degree $d$. Then, there exists a set $X\subseteq \mathbb F_2^n$ of size at least $2^{n-w}$, such that for each $x\in X$, the values of the functions computed by the gates in the middle layer are the same. Hence, for each $x\in X$, each element of $f(x)$ can be written as a function of at most $d$ elements of $x$. This shows that $f$ is not a $(w,d)$-rigid function. Assuming Conjecture \ref{conj:rig_mat_are_rig_func}, the matrix $A\in \mathbb F_2^{n\times n}$ for the function $f$ is not a $(cw,cd)$-rigid matrix, for some constant $c>0$. Then, $A = B+C$, where the rank of $B$ is at most $cw$, and $C$ has at most $cd$ non-zero entries in each row. Now, there exist matrices $B_1\in \mathbb F_2^{n\times cw}$ and $B_2\in \mathbb F_2^{cw\times n}$, such that $B=B_1B_2$.
Then, $f$ is computable by a linear depth-2 circuit with width $cw$ and degree $cd$, where the middle layer computes the output of the function corresponding to $B_2$. \end{proof} \section{Future Directions}\label{sec:matrix_problems} In this section, we state some well known problems related to matrices. It seems interesting to study the block-rigidity of these functions. \subsection{Matrix Transpose} The matrix-transpose problem is described as follows: \begin{itemize} \item[-] \emph{Input:} A matrix $X\in \mathbb F_2^{n\times n}$ as a vector of length $n^2$ bits, in row-major order, for some $n\in\mathbb{N}$. \item[-] \emph{Output:} The matrix $X$ in column-major order (or equivalently, the transpose of $X$ in row-major order). \end{itemize} It is well known (see \cite{DMS91} for a short proof) that the above problem can be solved on a 2-tape Turing machine in time $O(N\log{N})$, on inputs of length $N=n^2$. We believe that this cannot be solved by Turing machines in linear time, and that the notion of block-rigidity might be a viable approach to prove this. Next, we observe some structural details about the problem. The matrix-transpose problem computes a linear function, whose $N\times N$ matrix $A$ on inputs of length $N=n^2$ is described as follows. For each $i,j\in[n]$, let $e_{ij}\in\mathbb F_2^{n\times n}$ denote the matrix whose $(i,j)$\textsuperscript{th} entry is 1 and the rest of the entries are zero. The matrix $A$ is an $N\times N$ matrix made up of $n^2$ blocks, with the $(i,j)$\textsuperscript{th} block equal to $e_{ji}$. Using a similar argument as in Observation \ref{obs:block_rig_func_game_val}, one can show that the value of the following game captures the block-rigidity of the matrix-transpose function. Fix integers $n\in \mathbb{N}$, $1\leq s<n$, and a collection $\mathcal S = (S_1,\dots,S_n)$, where each $S_i \subseteq[n]$ is of size $s$.
We define an $n$-player game $\mathcal G_{\mathcal S}$ as follows: A verifier chooses a matrix $X\in \mathbb F_2^{n\times n}$, with each entry being chosen uniformly and independently. For each $j\in[n]$, player $j$ is given the rows of the matrix indexed by $S_j$, and they answer $y_j \in \mathbb F_2^n$. The verifier accepts if and only if for each $j\in[n]$, $y_j$ equals the $j$\textsuperscript{th} column of $X$. \begin{conjecture} There exists a constant $c>0$ such that the function given by the matrix $A$ is a $(c n^2, c n)$-block-rigid function. Equivalently, for each collection $\mathcal S$ with sets of size $cn$, the value of the game $\mathcal G_{\mathcal S}$ is at most $2^{-cn^2}$. \end{conjecture} We note that the above game is of independent interest from a combinatorial point of view as well. In essence, it asks whether there exists a large family of $n\times n$ matrices in which each column can be represented as some function of a small fraction of the rows. The problem of whether the matrix $A$ is a block-rigid matrix is also interesting. This corresponds to the players in the above game using strategies which are linear functions. \subsection{Matrix Product} The matrix-product problem is described as follows: \begin{itemize} \item[-] \emph{Input:} Matrices $X, Y\in \mathbb F_2^{n\times n}$ as vectors of length $n^2$ bits, in row-major order, for some $n\in\mathbb{N}$. \item[-] \emph{Output:} The matrix $Z=XY$ in row-major order. \end{itemize} The block-rigidity of the matrix-product function is captured by the following game: Fix integers $n\in \mathbb{N}$, $1\leq s<n$, and collections $\mathcal S = (S_1,\dots,S_n)$, $\mathcal T = (T_1,\dots,T_n)$, where each $S_i,T_i \subseteq[n]$ is of size $s$. We define an $n$-player game $\mathcal G_{\mathcal S, \mathcal T}$ as follows: A verifier chooses matrices $X,Y\in \mathbb F_2^{n\times n}$, with each entry being chosen uniformly and independently.
For each $j\in[n]$, player $j$ is given the rows of the matrices $X$ and $Y$ indexed by $S_j$ and $T_j$ respectively, and they answer $y_j \in \mathbb F_2^n$. The verifier accepts if and only if for each $j\in[n]$, $y_j$ equals the $j$\textsuperscript{th} row of $XY$. \begin{conjecture} There exists a constant $c>0$ such that for each $\mathcal S, \mathcal T$ with set sizes as $cn$, the value of the game $\mathcal G_{\mathcal S, \mathcal T}$ is at most $2^{-cn^2}$. \end{conjecture} One may change the row-major order for some (or all) of the matrices to column-major order. It is easy to modify the above game in such a case. \end{document}
\begin{document} \date{\today} \title{Hodge theory on transversely symplectic foliations} \author{Yi Lin } \date{\today} \maketitle \begin{abstract} In this paper, we develop symplectic Hodge theory on transversely symplectic foliations. In particular, we establish the symplectic $d\delta$-lemma for any such foliations with the (transverse) $s$-Lefschetz property. As transversely symplectic foliations include many geometric structures, such as contact manifolds, co-symplectic manifolds, symplectic orbifolds, and symplectic quasi-folds as special examples, our work provides a unifying treatment of symplectic Hodge theory in these geometries. As an application, we show that on compact $K$-contact manifolds, the $s$-Lefschetz property implies a general result on the vanishing of cup products, and that the cup length of a $2n+1$ dimensional compact $K$-contact manifold with the (transverse) $s$-Lefschetz property is at most $2n-s$. For any even integer $s\geq 2$, we also apply our main result to produce examples of $K$-contact manifolds that are $s$-Lefschetz but not $(s+1)$-Lefschetz. \end{abstract} \setcounter{section}{0} \setcounter{subsection}{0} \section{Introduction} Hodge theory on symplectic manifolds was introduced by Ehresmann and Libermann \cite{EL49}, \cite{L55}, and was rediscovered by Brylinski \cite{brylinski;differential-poisson}. Brylinski proved that on a compact K\"ahler manifold, any de Rham cohomology class admits a symplectic harmonic representative, and further conjectured that on a compact symplectic manifold, any de Rham cohomology class has a symplectic harmonic representative. 
However, Mathieu proved that for a $2n$ dimensional symplectic manifold $(M,\omega)$, the Brylinski conjecture is true if and only if $(M,\omega)$ satisfies the Hard Lefschetz property, i.e., for any $0\leq k\leq n$, the Lefschetz map \begin{equation}\label{Lefschetz-map}L^k: H^{n-k}(M)\rightarrow H^{n+k}(M),\,\,\,\,[\alpha]\mapsto [\omega^k\wedge \alpha]\end{equation} is surjective. Mathieu's result was sharpened by Merkulov \cite{Mer98} and Guillemin \cite{Gui01}, who independently established the symplectic $d\delta$-lemma. On symplectic manifolds, Fern\'{a}ndez, Mu\~{n}oz and Ugarte \cite{FMU04} introduced a notion of weak Lefschetz property. More precisely, for any $0\leq s\leq n-1$, a $2n$ dimensional symplectic manifold $(M,\omega)$ is said to satisfy the $s$-Lefschetz property if and only if for any $0\leq k\leq s$, the Lefschetz map (\ref{Lefschetz-map}) is surjective. Fern\'{a}ndez, Mu\~{n}oz and Ugarte extended the symplectic $d\delta$-lemma to symplectic manifolds with the $s$-Lefschetz property; moreover, for any even integer $s\geq 2$, they also produced examples of symplectic manifolds which are $s$-Lefschetz but not $(s+1)$-Lefschetz, c.f. \cite{FMU04}, \cite{FMU07}. The foliated version of symplectic Hodge theory has also been studied in the literature. Pak \cite{Pak08} extended Mathieu's Lefschetz theorem to the basic cohomology groups of transversely symplectic flows. In the context of odd dimensional symplectic manifolds, He \cite{He10} further developed Hodge theory on transversely symplectic flows, and established the symplectic $d\delta$-lemma in this framework. Replacing the Riemannian Hodge theory used in \cite{CNY13} by the symplectic Hodge theory developed by He \cite{He10}, Lin \cite{L13} proved that for a compact $K$-contact manifold, the transverse Hard Lefschetz property on basic cohomology groups is equivalent to the Hard Lefschetz property on de Rham cohomology groups introduced in \cite{CNY13}.
In particular, this implies immediately that on a compact Sasakian manifold, the two existing versions of Hard Lefschetz theorems, established in \cite{ka90} and \cite{CNY13} respectively, are mathematically equivalent to each other. Boyer and Galicki \cite{BG08} raised the open question of whether there exist simply-connected $K$-contact manifolds which do not admit any Sasakian structures. Lin's Hodge theoretic methods shed new light on the Lefschetz property of a $K$-contact manifold, and allowed him to produce such examples in any dimension $\geq 9$. However, many singular symplectic spaces naturally arise as the leaf space of higher dimensional transversely symplectic foliations. Examples include effective symplectic orbifolds in the sense of Satake \cite{S57} (c.f. Example \ref{orbifolds}) and symplectic quasi-folds introduced by Prato \cite{P01} (c.f. Example \ref{quasi-folds}). Thus it would be natural to consider Hodge theory on transversely symplectic foliations of arbitrary dimension as well. In the present paper, for any transversely symplectic foliation, we introduce the notion of transverse $s$-Lefschetz property on its basic cohomology groups (see Definition \ref{weak-lefschetz-property}), and generalize the machinery of symplectic Hodge theory to this framework. Among other things, we prove the symplectic $d\delta$-lemma in this setup. We then apply our results to the study of $K$-contact manifolds, and prove that the transverse $s$-Lefschetz property imposed on the basic cohomology of a $K$-contact manifold is equivalent to the $s$-Lefschetz property imposed on its de Rham cohomology (see Theorem \ref{main-result1}). As a first application, we show that on compact $K$-contact manifolds, the $s$-Lefschetz property implies a fairly general result on the vanishing of cup products; in particular, the cup length of a $2n+1$ dimensional compact $K$-contact manifold with the $s$-Lefschetz property is at most $2n-s$.
It is well known that the cup length of a $2n+1$ dimensional compact $K$-contact manifold is at most $2n$, c.f. \cite[Theorem 7.4.1]{BG08}. More recently, using the methods of rational homotopy theory, it is shown in \cite{MT15} that a simply-connected $7$ dimensional compact Sasakian manifold has vanishing cup product $H^2\times H^2\rightarrow H^4$. Our result does not assume the $K$-contact manifold under consideration to be simply-connected, and simultaneously generalizes all these known results in the literature. As a second application, for any even integer $s\geq 2$, we produce examples of $2s+5$ dimensional compact $K$-contact manifolds that are $s$-Lefschetz but not $(s+1)$-Lefschetz. In particular, our construction provides new examples of simply-connected compact $K$-contact manifolds that do not admit any Sasakian structures in dimension $\geq 9$. Our paper is organized as follows. Section \ref{transverse-sym} reviews preliminaries on transversely symplectic foliations. Section \ref{transverse-sym-Hodge} explains how to do Hodge theory on transversely symplectic foliations. Section \ref{ddelta-lemma} proves the symplectic $d\delta$-lemma for a transversely symplectic foliation. Section \ref{review-contact} recalls necessary background materials on $K$-contact and Sasakian geometries. Section \ref{Kcontact-s-lefschetz} proves that the transverse $s$-Lefschetz condition imposed on the basic cohomology groups of a compact $K$-contact manifold is equivalent to the $s$-Lefschetz condition imposed on its de Rham cohomology groups. Section \ref{cup-length} shows that the cup length of a compact $2n+1$ dimensional $K$-contact manifold with the $s$-Lefschetz property is at most $2n-s$. Section \ref{main-examples} constructs, for any even integer $s\geq 2$, a compact $K$-contact manifold that is $s$-Lefschetz but not $(s+1)$-Lefschetz. 
\section{ Review of transversely symplectic foliations }\label{transverse-sym} Let $\mathcal{F}$ be a foliation of co-dimension $q$ on a smooth manifold $M$ of dimension $n$, and let $P$ be the integrable subbundle of $TM$ associated to $\mathcal{F}$. A \emph{foliation chart} of $(M,\mathcal{F})$ is a coordinate chart $(\varphi: U\rightarrow \mathbf{R}^{n-q}\times \mathbf{R}^{q})$ on $M$, such that for any $z\in U$, a vector $X_z\in T_zM$ lies in the fiber $P_z$ if and only if $\varphi_{*z}(X_z)$ is tangent to the vertical plaque $\mathbf{R}^{n-q}\times \{ \pi\circ \varphi(z)\}$, where $\pi: \mathbf{R}^{n-q}\times\mathbf{R}^q\rightarrow \mathbf{R}^q$ is the projection map. Let $\varphi: U\rightarrow \mathbf{R}^{n-q}\times \mathbf{R}^{q}$ be a foliation chart, and let $y_i$ be the $i$-th coordinate of the function $\pi\circ \varphi$. Then we call $\{y_1,\cdots, y_q\}$ \emph{transverse coordinates} on $U$. On the foliated manifold $(M,\mathcal{F})$, the spaces of \emph{horizontal forms} $\Omega_{hor}(M)$ and \emph{basic forms} $\Omega_{bas}(M)$ are defined, respectively, as follows. \begin{equation}\begin{split} &\Omega_{\textmd{hor}}(M)=\{\alpha\in\Omega(M)\mid\iota_{X}\alpha=0,\,\forall\, X\in \Gamma(P)\}, \\ & \Omega_{\textmd{bas}}(M)=\{\alpha\in\Omega(M)\mid\iota_{X}\alpha=0,\quad\mathcal{L}_{X}\alpha=0,\,\forall\, X\in \Gamma(P)\}. \end{split}\end{equation} Here $\Gamma(P)$ denotes the space of smooth sections of the integrable subbundle $P$ associated to the foliation.
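To make these definitions concrete, consider a minimal example (our own illustration, not from the source): take $M=\mathbf{R}^3$ with coordinates $(t,x,y)$, and let $\mathcal{F}$ be the foliation by lines parallel to the $t$-axis, so that $\Gamma(P)$ is spanned by $\partial/\partial t$.

```latex
% For a one-form \alpha = f\,dt + g\,dx + h\,dy on M = R^3:
%   horizontal:  \iota_{\partial_t}\alpha = f = 0;
%   basic:       in addition \mathcal{L}_{\partial_t}\alpha
%                  = (\partial_t g)\,dx + (\partial_t h)\,dy = 0,
% i.e. the coefficients depend only on the transverse coordinates (x,y):
\alpha = g(x,y)\,dx + h(x,y)\,dy \ \in\ \Omega^{1}_{bas}(M).
% Note that d\alpha = (\partial_x h - \partial_y g)\,dx\wedge dy is again
% basic, illustrating that d preserves basic forms.
```

Here $\{x,y\}$ are transverse coordinates in the sense just defined, and the computation of $d\alpha$ illustrates why the basic forms constitute a subcomplex of the de Rham complex.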
Since the exterior differential operator $d$ preserves basic forms, we obtain a subcomplex of the de Rham complex $\{\Omega^{*}(M),d\}$, which is called the \emph{basic de Rham complex} $$ \xymatrix@C=0.5cm{ \cdot\cdot\cdot \ar[r] & \Omega^{k-1}_{\textmd{bas}}(M) \ar[r]^{d} & \Omega^{k}_{\textmd{bas}}(M) \ar[r]^{d} & \Omega^{k+1}_{\textmd{bas}}(M) \ar[r]^{\,\,\,\,\,d} & \cdot\cdot\cdot } $$ The cohomology of the basic de Rham complex $\{\Omega^{*}_{\textmd{bas}}(M),d\}$, denoted by $H^{*}_{B}(M,\R)$, is called the \emph{basic de Rham cohomology} of $M$. If $M$ is connected\footnote{The manifolds we consider in this paper are all connected.}, then $H^{0}_{B}(M,\R)\cong\mathbb{R}$. Moreover, the inclusion $\Omega_{bas}^1(M)\hookrightarrow \Omega^1(M)$ induces an injective map $H^1_B(M)\rightarrow H^1(M)$ (\cite[Prop. 4.1]{T97}). In general, the group $H^{k}_{B}(M,\R)$ may be infinite-dimensional for $k\geq2$. \begin{definition}[Transversely symplectic foliation] Let $M$ be a manifold with a foliation $\mathcal{F}$. A closed two form $\omega$ is said to be transversely symplectic with respect to $\mathcal{F}$, if for any $p\in M$, the kernel of $\omega_p$ coincides with the tangent space of the leaf passing through $p$. A \emph{transversely symplectic foliation} is a triple $(M,\mathcal{F},\omega)$, such that $\omega$ is transversely symplectic with respect to the foliation $\mathcal{F}$. \end{definition} \begin{example}\label{Contact-example}(\textbf{Contact manifolds}) On any co-oriented contact manifold $(M,\eta)$ with a contact one form $\eta$, there is a nowhere vanishing Reeb vector field $\xi$ uniquely determined by the equations $\iota_{\xi}\eta=1,\,\,\,\,\,\iota_{\xi}d\eta=0$. The one dimensional foliation $\mathcal{F}_{\xi}$ associated to $\xi$ is called the Reeb characteristic foliation. It is easy to see that $\mathcal{F}_{\xi}$ is transversely symplectic with $\omega:=d\eta$ being the transversely symplectic form.
\end{example} \begin{example}\label{Co-sympl-example}(\textbf{Co-symplectic manifolds}) A $2n+1$ dimensional manifold $M$ is co-symplectic if it is equipped with a closed two form $\omega$ and a closed one form $\eta$, such that $\omega^n\wedge \eta\neq 0$. The Reeb vector field $\xi$ is uniquely determined by $\iota_{\xi}\eta=1$, $\iota_{\xi}\omega=0$. Let $\mathcal{F}_{\xi}$ be the associated Reeb characteristic foliation. Then $(M,\mathcal{F}_{\xi},\omega)$ is transversely symplectic. \end{example} \begin{example} (\textbf{Odd dimensional symplectic manifolds}, \cite{He10}) Let $M$ be a $2n+1$ dimensional manifold with a volume form $\Omega$ and a closed 2-form $\omega$, such that $\omega^{n}\neq0$ everywhere. Then the triple $(M,\omega,\Omega)$ is called a $(2n+1)$ dimensional symplectic manifold. In particular, there exists a unique canonical vector field $\xi$ defined by $$ \iota_{\xi}\omega=0,\quad\quad\quad\iota_{\xi}\Omega=\frac{\omega^{n}}{n!} $$ which is called the Reeb vector field on $(M,\omega,\Omega)$. The Reeb vector field yields a canonical 1-dimensional foliation $\mathcal{F}_{\xi}$. It is straightforward to check that $(M,\mathcal{F}_{\xi},\omega)$ is transversely symplectic. \end{example} \begin{example}(\textbf{Symplectic orbifolds})\label{orbifolds} Let $(X,\sigma)$ be a $2n$ dimensional effective symplectic orbifold in the sense of Satake \cite{S57}. Then the total space of the orthogonal frame orbi-bundle $\pi:P\longrightarrow X$ is a smooth manifold, on which the structure group $O(2n)$ acts locally freely. Let $\mathcal{F}$ be the foliation induced by the $O(2n)$ action, whose leaves are precisely given by the orbits of the $O(2n)$ action. Set $\omega:=\pi^{*}\sigma$. Then it is easy to see that $(P,\mathcal{F},\omega)$ is a transversely symplectic foliation. In this case, the leaf space of $\mathcal{F}$ can be naturally identified with the orbifold $X$.
\end{example} \begin{example}(\textbf{Symplectic quasi-folds} \cite{P01})\label{quasi-folds} Suppose that a torus $T$ acts on a symplectic manifold $(X,\sigma)$ in a Hamiltonian fashion with a moment map $ \phi:X\longrightarrow\mathfrak{t}^{*}. $ Let $N\subset T$ be a non-closed subgroup with Lie algebra $\mathfrak{n}$. Then the action of $N$ on $X$ has a moment map $ \varphi:X\longrightarrow\mathfrak{n}^{*} $, which is given by the composition of $\phi: X\rightarrow \mathfrak{t}^*$ and the natural projection map $\mathfrak{t}^*\rightarrow \mathfrak{n}^*$. Let $a$ be a regular value of $\varphi$. Consider the submanifold $M=\varphi^{-1}(a)\subset X$. The $N$-action on $M$ yields a transversely symplectic foliation $\mathcal{F}$ with the transversely symplectic form $\omega:=i^{*}\sigma$, where $i: M\hookrightarrow X$ is the inclusion map. When $N$ is a connected subgroup, the leaf space of $\mathcal{F}$ is exactly the so-called symplectic quasi-fold introduced by E. Prato \cite{P01}. \end{example} \begin{theorem} \label{Darboux-thm}(Darboux theorem) Let $(\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$ on a $2n+l$ dimensional manifold $M$. Then for any $p\in M$, there exists a foliation chart $(U, \psi)$ around $p$ equipped with transverse coordinates $\{ x_1, y_1,\cdots, x_n,y_n\}$, such that \[ (\psi^{-1})^*\omega\vert_{\psi(U)}= \displaystyle \sum_{i=1}^n dx_i\wedge dy_i.\] For a transversely symplectic foliation, such transverse coordinates will be called \textbf{transverse Darboux coordinates}. \end{theorem} \begin{proof} Given any $p\in M$, choose a foliation coordinate chart $(U,\varphi)$ around $p$. Without loss of generality, we may assume that $\varphi(U)=V_1\times V_2 \subset \mathbf{R}^l\times \mathbf{R}^{2n}$ for two open subsets $V_1$ and $V_2$ in $\mathbf{R}^l$ and $\mathbf{R}^{2n}$ respectively.
Let $\pi: \varphi(U)\rightarrow V_2$ be the projection map, and let $\{z_1, \cdots, z_l, w_1, \cdots, w_{2n}\}$ be the foliation coordinates on $\varphi(U)$. Note that by definition, $\omega$ is a basic form. It follows easily that $(\varphi^{-1})^*\omega$ induces a symplectic form $\sigma$ on $V_2$. Replacing $V_2$ by a smaller open subset if necessary, we may assume that there are Darboux coordinates $\{x_1,y_1,\cdots, x_n,y_n\}$ on $V_2$, such that \[ \sigma =\displaystyle \sum_{i=1}^n dx_i\wedge dy_i, \,\,\,\text{ on }\,\, V_2.\] Now let $F: V_1\times V_2\rightarrow V_1\times V_2$ be the diffeomorphism given by the change of coordinates map \[ (z_1, \cdots, z_l, w_1, w_2,\cdots,w_{2n-1}, w_{2n})\mapsto (z_1, \cdots, z_l, x_1, y_1, \cdots, x_n, y_n),\] and let $\psi= F\circ \varphi$. Then it is easy to see that the foliation chart $(U,\psi)$ has the desired property stated in Theorem \ref{Darboux-thm}. \end{proof} \section{Transverse symplectic Hodge theory}\label{transverse-sym-Hodge} In this section we develop the machinery of symplectic Hodge theory on a transversely symplectic foliation. We refer to \cite{brylinski;differential-poisson} and \cite{Yan96} for general background on symplectic Hodge theory. We need to explain how to define the symplectic Hodge star operator on the space of basic forms. To this end, we first review the construction of the symplectic Hodge star operator on a symplectic vector space. Let $(V,\sigma)$ be a symplectic vector space, where $\sigma$ is a non-degenerate skew-symmetric bilinear form on $V$. Since $\sigma$ is non-degenerate, it induces a linear isomorphism \[ V\rightarrow V^*, \,\,\, X\mapsto \iota_X\sigma.\] Its inverse map extends linearly to a linear isomorphism $ \sharp: \wedge^k V^*\rightarrow \wedge^k V$.
Thus we get a bi-linear pairing \[B(\cdot, \cdot): \wedge^k V^*\times \wedge^kV^* \rightarrow \mathbf{R},\,\,\,\,B(\alpha, \beta)=<\sharp (\alpha),\beta>,\] where $\alpha$ and $\beta$ are $k$-forms on $V$, and $<\cdot, \cdot>$ is the natural dual pairing between $k$-forms and $k$-vectors. It is straightforward to check that this pairing is non-degenerate. In this context, the symplectic Hodge star of a $k$-form $\alpha$ on $V$ is uniquely determined by the following equation. \[ \beta\wedge \star \alpha =B(\beta, \alpha)\dfrac{\sigma^n}{n!}, \,\,\,\, \forall\,\beta \in \wedge^k V^*.\] We recall that the following identity holds for the symplectic Hodge star operator. \begin{equation}\label{star-square} \star ^2 =\text{id}. \end{equation} Now let $(\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$ on a manifold $M$, let $P$ be the integrable sub-bundle of $TM$ associated to the foliation $\mathcal{F}$, and let $Q=TM/ P$. Then the projection map $\pi: TM\rightarrow Q$ induces a pullback map $\pi^*: \wedge^r Q^*\rightarrow \wedge^r T^*M$. Clearly, if $s$ is a section of $\wedge^rQ^*$, then $\pi^* (s)$ is a horizontal $r$-form on $M$. Conversely, if $\alpha$ is a horizontal $r$-form on $M$, then there exists a unique section $s$ of $\wedge^rQ^*$, such that $\alpha=\pi^* s$. In particular, since $\omega$ is horizontal, there is a section $\sigma$ of $\wedge^2Q^*$ such that $\pi^*\sigma=\omega$. It is easy to see that at any point $x\in M$, $(Q_x,\sigma_x)$ is a symplectic vector space. Thus there is a (point-wise defined) symplectic Hodge star operator on the space of sections of $\wedge^*Q^*$. By identifying horizontal forms on $M$ with sections of $\wedge^*Q^*$, we get a symplectic Hodge star operator \[\star: \Omega^k_{hor}(M)\rightarrow \Omega^{2n-k}_{hor}(M).\] \begin{lemma} \label{basic-star} If $\alpha$ is a basic $k$-form, then $\star \alpha$ is a basic $(2n-k)$-form.
\end{lemma} \begin{proof} To show that $\star \alpha$ is basic, it suffices to show that for any $p\in M$, there exists a foliation coordinate neighborhood $U$ of $p$, such that $\star \alpha$ has the following local expression on $U$. \[ \star \alpha = \displaystyle \sum_{I,J} g_{I,J}(x_1, y_1, \cdots, x_n, y_n) dx_I\wedge dy_J,\] where $\{x_1, y_1,\cdots, x_n,y_n\}$ are transverse Darboux coordinates on $U$, and $I$ and $J$ are multi-indices. Since $\alpha$ is a basic form, for any $p\in M$, there exists a foliation coordinate neighborhood $U$ of $p$, equipped with transverse Darboux coordinates $\{x_1, y_1, \cdots, x_n, y_n\}$, such that \[ \alpha \vert_U= \displaystyle \sum_{I,J} f_{I,J}(x_1,y_1,\cdots, x_n, y_n)dx_I\wedge dy_J,\] where $I$ and $J$ are multi-indices. Without loss of generality, we may assume that $\alpha\vert_U=f(x_1,y_1,\cdots, x_n, y_n)dx_I\wedge dy_J$ for some given multi-indices $I$ and $J$. Since \[\omega\vert_U=\displaystyle \sum_{i=1}^n dx_i\wedge dy_i,\] a straightforward calculation shows that \[ \star \alpha=c f(x_1, y_1, \cdots, x_n, y_n) dx_{I'}\wedge dy_{J'},\] where $c$ is a constant, and $I'$ and $J'$ are multi-indices. Therefore $\star \alpha$ must also be a basic form. \end{proof} There are three important operators, the Lefschetz map $L$, the dual Lefschetz map $\Lambda$, and the degree counting map $H$, which are defined on horizontal forms as follows. \begin{equation} \label{three-canonical-maps} \begin{aligned} & L :\Omega_{hor}^*(M) \rightarrow \Omega_{hor}^{*+2}(M), \,\,\,\alpha \mapsto \alpha \wedge \omega,\\ & \Lambda: \Omega_{hor}^*(M) \rightarrow \Omega_{hor}^{*-2}(M),\,\,\,\alpha \mapsto \star L\star \alpha,\\ & H: \Omega^k_{hor}(M)\rightarrow \Omega^k_{hor}(M),\,\,\,H(\alpha)=(n-k)\alpha,\,\,\,\alpha \in \Omega^{k}_{hor}(M).\end{aligned}\end{equation} Using transverse Darboux coordinates, we have the following local description of the operator $\Lambda$. We refer to \cite[Lemma 2.1.4]{He10} for a proof.
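As a simple illustration (a sketch of our own, in the lowest co-dimension case $n=1$), suppose that $\mathcal{F}$ has co-dimension $2$, and let $(x,y)$ be transverse Darboux coordinates, so that locally $\omega=dx\wedge dy$. For basic functions $f$, $g$, $h$ and $k$, a direct computation from the definition of $\star$ gives \[ \star f= f\, dx\wedge dy, \,\,\,\, \star( g\, dx+h\, dy)=g\, dx+h\, dy,\,\,\,\, \star (k\, dx\wedge dy)=k. \] In particular, the identity (\ref{star-square}) is manifest in this case, and $\star$ visibly maps basic forms to basic forms, as asserted in Lemma \ref{basic-star}.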
\begin{lemma} \label{canonical-local-form-Lambda} Let $(U, x_1,\cdots, x_n,y_1,\cdots, y_n)$ be a transverse Darboux coordinate chart on $M$. Then for any form $\alpha\in \Omega^*(U)$, we have \[ \Lambda \alpha=\displaystyle \sum_{i=1}^n\iota_{\frac{\partial}{\partial x_i}} \iota_{\frac{\partial}{\partial y_i}}\alpha.\] \end{lemma} Exploiting the one-to-one correspondence between horizontal forms on $M$ and sections of $\wedge^* Q^*$, it is easy to see that the actions of $L$, $\Lambda$ and $H$ on horizontal forms satisfy the following commutator relations. \begin{equation} \label{sl2-module-on-forms} [ \Lambda, L]=H, \,\,\,[H, \Lambda]=2\Lambda,\,\,\,[H, L]=-2L. \end{equation} Note that $\omega$ is a basic form itself. As an immediate consequence of Lemma \ref{basic-star}, we have the following result. \begin{lemma} \label{basic-sl2} The operators $L$, $\Lambda$ and $H$ map basic forms to basic forms, and satisfy all three commutator relations in Equation (\ref{sl2-module-on-forms}). \end{lemma} Therefore, these three operators define a representation of the Lie algebra $sl_2$ on $\Omega_{bas}(M)$. Although the $sl_2$-module $\Omega_{bas}(M)$ is infinite dimensional, there are only finitely many eigenvalues of the operator $H$. Such an $sl_2$-module is said to be of finite $H$-type, and has been studied in great detail in \cite{Ma95} and \cite{Yan96}. Applying \cite[Corollary 2.6]{Yan96} to the present situation, we have the following result. \begin{lemma} \label{yan's-result} Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$. For any $0\leq k\leq n$, a form $\alpha\in \Omega_{bas}^k(M)$ is said to be primitive if $L^{n-k+1}\alpha=0$. Then we have that \begin{enumerate} \item [a)] a basic $k$-form $\alpha$ is primitive if and only if $\Lambda \alpha=0$; \item [b)] for any $0\leq k\leq n$, the Lefschetz map \[ L^{n-k}: \Omega_{bas}^k(M)\rightarrow \Omega_{bas}^{2n-k}(M),\,\,\,\,\alpha\mapsto \omega^{n-k}\wedge\alpha\] is a linear isomorphism.
\item[c)] any basic form $\alpha_k \in \Omega_{bas}^k(M)$ admits a unique Lefschetz decomposition \begin{equation}\label{lefschetz-decompose-forms} \alpha_k =\displaystyle \sum_{r\geq \text{max}(\frac{k-n}{2}, 0)} \dfrac{L^r}{r!}\beta_{k-2r},\end{equation} where $\beta_{k-2r}$ is a primitive basic form of degree $k-2r$. \end{enumerate} \end{lemma} \begin{remark}Throughout the rest of this paper, we will denote the space of primitive basic $k$-forms on $M$ by $\mathcal{P}_{bas}^k(M)$. \end{remark} Applying \cite[Theorem 3.16]{W80} to the present situation, we have the following Weil identity. \begin{lemma}\label{Weil-identity}Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$, and let $\alpha$ be a primitive $k$-form in $\Omega_{bas}^k(M)$. Then for any $r\leq n-k$, \[\star L^r\alpha=(-1)^{\frac{k(k-1)}{2}}\dfrac{r!}{(n-k-r)!}L^{n-k-r}\alpha.\] \end{lemma} The symplectic Hodge star operator gives rise to the symplectic adjoint operator $\delta$ of the exterior differential $d$ as follows. \[ \delta \alpha_k=(-1)^{k+1}\star d\star\alpha_k,\,\,\,\alpha_k\in\Omega_{bas}^k(M).\] It is easy to see that $\delta \circ \delta=0$. Thus we have the following chain complex. $$ \xymatrix@C=0.5cm{ \cdot\cdot\cdot \ar[r] & \Omega^{k+1}_{\textmd{bas}}(M) \ar[r]^{\delta} & \Omega^{k}_{\textmd{bas}}(M) \ar[r]^{\delta} & \Omega^{k-1}_{\textmd{bas}}(M) \ar[r]^{\,\,\,\,\,\delta} & \cdot\cdot\cdot } $$ The homology of the above complex, denoted by $H_{\delta}(\Omega_{bas}(M),\R)$, is called the \emph{basic $\delta$-homology} of the foliated manifold $M$. We note that the symplectic Hodge star operator maps a $\delta$-closed basic form of degree $k$ to a $d$-closed basic form of degree $2n-k$. Using the identity $\star^2=\text{id}$, it is straightforward to show the following result.
\begin{lemma}\label{homology-cohomology} The symplectic Hodge star operator induces a linear isomorphism \[ \star: H_{\delta}(\Omega_{bas}^k(M),\R) \cong H^{2n-k}_B(M,\R) .\] \end{lemma} It is noteworthy that the symplectic adjoint operator $\delta$ anti-commutes with $d$. In this context, a basic form $\alpha$ is said to be symplectic harmonic if and only if $d\alpha=\delta\alpha=0$. We define \[\begin{split}&\Omega_{hr}^k(M)=\{\alpha\in \Omega^k_{bas}(M)\,\vert\, d\alpha=\delta\alpha=0\}, \\& H_{hr}^k(M,\R)=\dfrac{ \text{Im}\left(\Omega^k_{hr}(M)\hookrightarrow\Omega^k_{bas}(M)\right)}{\text{Im} (\Omega_{bas}^{k-1}(M)\xrightarrow{d}\Omega^k_{bas}(M))}.\end{split}\] It is easy to see that the operators $L$, $\Lambda$ and $H$ map harmonic forms to harmonic forms, and therefore turn $\Omega_{hr}^*(M)$ into an $sl_2$-module of finite $H$-type. Applying \cite[Corollary 2.6]{Yan96} again, we have the following result. \begin{lemma}\label{har-forms} Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$. Then for any $0\leq k\leq n$, the Lefschetz map \[ L^k: \Omega^{n-k}_{hr}(M)\rightarrow \Omega^{n+k}_{hr}(M)\] is an isomorphism. \end{lemma} \begin{definition} \label{weak-lefschetz-property} Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$. It is said to satisfy the {\bf transverse Hard Lefschetz property}, if for any $0\leq k\leq n$, the Lefschetz map \begin{equation}\label{foliation-Lefschetz-map} L^{n-k}: H^{k}_B(M,\R)\rightarrow H_B^{2n-k}(M,\R),\, \,\,[\alpha]_B\mapsto [\omega^{n-k}\wedge \alpha]_B \end{equation} is an isomorphism. More generally, for any integer $0\leq s\leq n-1$, $(M,\mathcal{F},\omega)$ is said to satisfy the {\bf transverse $s$-Lefschetz property}, if for any $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is an isomorphism. \end{definition} \begin{remark} Let $(X,\omega)$ be a $2n$ dimensional symplectic manifold.
In the literature of symplectic Hodge theory, cf. \cite{Ma95}, \cite{Yan96}, $X$ is said to satisfy the Hard Lefschetz property if for any $0\leq k\leq n$, the Lefschetz map (\ref{Lefschetz-map}) is surjective. By Mathieu's theorem \cite{Ma95}, this is equivalent to saying that the Brylinski conjecture holds for $X$. However, sometimes the Hard Lefschetz property is also used in a slightly stronger sense, cf. \cite{Gui01}, in which $(X,\omega)$ is said to satisfy this property if for each $0\leq k\leq n$, the Lefschetz map (\ref{Lefschetz-map}) is an isomorphism. This strengthening is necessary in order to establish the symplectic $d\delta$-lemma. Clearly, when the symplectic manifold $X$ is compact, these two versions of the Hard Lefschetz property are equivalent to each other by Poincar\'e duality. Analogously, on a transversely symplectic foliation, there are two slightly different versions of the transverse Hard Lefschetz property. However, Poincar\'e duality may not hold for basic cohomology groups even if $M$ is compact. For this reason, in Definition \ref{weak-lefschetz-property}, we use the Hard Lefschetz property in the strong sense, so that we can establish the symplectic $d\delta$-lemma. \end{remark} \begin{definition}\label{primitive1}Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$. For any $0\leq r \leq n$, the $r$-th primitive basic cohomology group, $PH_B^{r}(M,\R)$, is defined as follows. \[ PH_B^r (M,\R)= \text{ker}(L^{n-r+1}: H_B^r(M,\R)\rightarrow H_B^{2n-r+2}(M,\R)) .\] \end{definition} Finally, we collect here a few commutator relations which we will use later in this paper. We refer to \cite{He10} for a detailed proof.
\begin{lemma} \label{commutator} \[ [d, \Lambda]=\delta,\,\,\, [\delta, L]=d, \,\,\,[d\delta, L]=0,\,\,\,[d\delta,\Lambda]=0.\] \end{lemma} \section{ Symplectic $d\delta$-lemma on transversely symplectic foliations } \label{ddelta-lemma} In this section, we extend the symplectic Hodge theoretic results to transversely symplectic foliations. Analogously to the treatment used in \cite{He10}, our proof follows closely the original arguments used in \cite{Gui01}. Nevertheless, for completeness, we have made our proof as self-contained as possible. It is worth mentioning that He proved the symplectic $d\delta$-lemma on odd dimensional symplectic manifolds under one extra condition, namely that there exists a so-called connection one form on the foliated manifold, cf. \cite[Theorem 3.4.1]{He10}. In the present paper, we use a slightly different argument, which allows us to establish the $d\delta$-lemma without this assumption. \begin{theorem}\label{Mathieu's-theoremv2} Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$, and let $0\leq s \leq n - 1$. Then for any $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is surjective if and only if either one of the following equivalent statements holds. \begin{itemize} \item[1)] $H^k_{hr}(M,\R) = H^k_B(M,\R)$, for any $0\leq k \leq s + 2$, and $ H^{2n-k}_{hr}(M,\R) = H_B^{2n-k}(M,\R)$, for any $0\leq k \leq s$. \item[2)] $H^{2n-k}_{hr}(M,\R) = H_B^{2n-k}(M,\R)$ for any $0\leq k \leq s$. \end{itemize} \end{theorem} \begin{proof} \textbf{Step 1}. Clearly, 1) implies 2). Assume that 2) holds. We show that for any $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is surjective. To this end, we consider the following commutative diagram.
\begin{equation}\label{commutative-diagram}\xymatrix{\ar @{} [dr] \Omega_{hr}^k(M)\ar[d]\ar[r]^-{\wedge\omega^{n-k}} &\Omega_{hr}^{2n-k}(M)\ar[d]\\ H_B^k(M,\R)\ar[r]^-{\wedge[\omega^{n-k}]} &H^{2n-k}_B(M,\R)}\end{equation} Here the two vertical maps are given by the composition of the inclusion map $\Omega_{hr}^*(M)\hookrightarrow \text{ker}\left(\Omega^*_{bas}(M)\xrightarrow{d}\Omega_{bas}^{*+1}(M)\right)$ and the natural quotient map \[\text{ker}\left(\Omega^*_{bas}(M)\xrightarrow{d}\Omega_{bas}^{*+1}(M)\right)\rightarrow H_B^*(M,\R).\] Since by Lemma \ref{har-forms}, the map $\Omega_{hr}^k(M)\xrightarrow{\wedge \omega^{n-k}} \Omega_{hr}^{2n-k}(M)$ is an isomorphism, it follows easily from Condition 2) that the bottom horizontal map must be surjective. \textbf{Step 2}. Assume that for any $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is surjective. We first show that \begin{equation}\label{induction-harmonic}H^k_{hr}(M,\R) = H^k_B(M,\R), \,\,\,\forall\, 0\leq k\leq s+2.\end{equation} We claim that for any $0\leq k\leq s+2$, \begin{equation}\label{primitive-decom-step1} H_B^k(M,\R)= PH_B^k(M,\R) +\text{Im}\,L .\end{equation} To see this, note that $\forall\, [\alpha]_B\in H^k_B(M,\R)$, by the transverse $s$-Lefschetz property, there exists $[\beta]_B\in H^{k-2}_B(M,\R)$, such that $L^{n-k+1}[\alpha]_B=L^{n-k+2}[\beta]_B$. Thus $[\alpha]_B-L[\beta]_B\in PH^k_B(M,\R)$. This proves that \[ [\alpha]_B=\left([\alpha]_B-L[\beta]_B\right)+L[\beta]_B\in PH_B^k(M,\R) +\text{Im}\,L,\] and establishes our claim. To complete the proof, we proceed by induction on the degree of the cohomology group $H_B^k(M,\R)$. When $k=0, 1$, it is easy to see that any closed basic $0$-form or $1$-form is harmonic, which implies that (\ref{induction-harmonic}) holds for $k=0,1$. Assume that (\ref{induction-harmonic}) is true for any $k<p$, where $p\leq s+2$. To show that it holds for $k=p$, it suffices to show that any cohomology class in $PH_B^p(M,\R)$ admits a harmonic representative.
Let $[\alpha]_B\in PH^p_B(M,\R)$, where $p\leq s+2$. Then $L^{n-p+1}[\alpha]_B=0$. Thus there exists $\gamma\in \Omega^{2n-p+1}_{bas}(M)$, such that $L^{n-p+1} (\alpha)=d\gamma$. Since $L^{n-p+1}: \Omega_{bas}^{p-1}(M)\rightarrow \Omega_{bas}^{2n-p+1}(M)$ is an isomorphism, there exists $\beta\in \Omega^{p-1}_{bas}(M)$ such that $\gamma=L^{n-p+1}(\beta)$. This implies that $L^{n-p+1}(\alpha-d \beta)=0$. Equivalently, we have that $\Lambda(\alpha-d\beta)=0$. Thus $\delta(\alpha-d \beta )= [d,\Lambda](\alpha-d\beta)=0$, which shows that $[\alpha]_B$ has a harmonic representative $\alpha-d\beta$. The other half of the statement in Condition 1) follows easily from the $s$-Lefschetz property and the commutative diagram (\ref{commutative-diagram}). \end{proof} \begin{remark} \label{harmonic-primitive-class} In the second step of the proof of Theorem \ref{Mathieu's-theoremv2}, we actually proved that any primitive cohomology class has a symplectic harmonic representative which is a closed primitive basic form. \end{remark} \begin{theorem} (cf. \cite{Yan96})\label{primitive-decomposition} Let $(M,\mathcal{F},\omega)$ be a co-dimension $2n$ transversely symplectic foliation that satisfies the transverse $s$-Lefschetz property. Then for any $0\leq i\leq s+2$ or $2n-s\leq i\leq 2n$, we have that \begin{equation} \label{lef-decomp-identity} H_B^i(M,\R)=\bigoplus_r L^r PH_B^{i-2r}(M,\R). \end{equation}\end{theorem} \begin{proof} We first claim that (\ref{lef-decomp-identity}) holds for $0\leq i\leq s+2$. In view of (\ref{primitive-decom-step1}), it suffices to show that $PH_B^i(M,\R)\cap L(H^{i-2}_B(M,\R))=\{0\}$. To see this, assume that $[\alpha]_B=L([\beta]_B)\in PH_B^i(M,\R)\cap L(H^{i-2}_B(M,\R))$. Then by primitivity, $L^{n-i+1}([\alpha]_B)=L^{n-i+2}([\beta]_B)=0$. However, since by assumption the Lefschetz map $L^{n-i+2}: H^{i-2}_B(M,\R)\rightarrow H_B^{2n-i+2}(M,\R)$ is an isomorphism, it follows that $[\beta]_B=0$.
This implies that $[\alpha]_B=L([\beta]_B)=0$, from which our claim follows. Since for each $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is an isomorphism, (\ref{lef-decomp-identity}) also holds for any $2n-s\leq i\leq 2n$. \end{proof} \begin{definition} Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$, and let $0\leq s \leq n - 1$. We say that $(M,\mathcal{F},\omega)$ satisfies the symplectic $d\delta$-lemma up to degree $s$ if \begin{equation}\label{s-ddelta-lemma}\begin{split} & \text{Im}\,d \cap\text{ker}\,\delta=\text{Im}\, d\delta=\text{Im}\, \delta\cap\text{ker}\,d ,\,\text{ on }\,\Omega^{k}_{bas}(M)\,\,\,\forall\,k\leq s,\\&\text{Im}\,d \cap\text{ker}\,\delta=\text{Im}\, d\delta, \,\,\,\text{on}\, \Omega^{s+1}_{bas}(M).\end{split} \end{equation} \end{definition} \begin{lemma} \label{decomp-1}Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$ that satisfies the transverse $s$-Lefschetz property, and let $0\leq s \leq n - 1$. Suppose that $\alpha\in \Omega_{bas}^k(M)$, where $0\leq k\leq s+2$ or $2n-s\leq k\leq 2n$, and write the Lefschetz decomposition of $\alpha$ (cf. Equation (\ref{lefschetz-decompose-forms})) as $\alpha=\sum_{r} L^r\alpha_r$, where each $\alpha_r$ is a primitive basic form of degree $k-2r$. If $\alpha$ is both harmonic and $d$-exact, then each $\alpha_r$ is $d$-exact. \end{lemma} \begin{proof} \textbf{Step 1}. We first prove the case when $0\leq k\leq s+2$. Write \begin{equation}\label{L-decomposition-2} \alpha=\displaystyle\sum_{r\geq 0} L^r\alpha_{r},\end{equation} where $\alpha_{r}$ is a closed primitive basic form of degree $k-2r$. We will use downward induction on $r$. Note that when $r>\frac{k}{2}$, $\alpha_{r}=0$ is $d$-exact. Let us assume by induction that $\alpha_r$ is $d$-exact for $r>q$, where $q>0$, and conclude that $\alpha_q$ is $d$-exact. By the induction hypothesis, \[ \alpha'=\alpha-\displaystyle\sum_{r> q} L^r\alpha_r=\displaystyle\sum_{r\leq q}L^r\alpha_r \] is $d$-exact.
Applying $L^{n-k+q}$ to the above identity, we get \[ L^{n-k+q}\alpha'=L^{n-k+2q}\alpha_q+ \displaystyle\sum_{r< q}L^{n-(k-2r)+(q-r)}\alpha_r.\] Since $\alpha_r$ is a primitive form of degree $k-2r$, we have that \[L^{n-(k-2r)+(q-r)}\alpha_r=0,\,\,\,\,\forall\,r<q.\] It follows that \begin{equation}\label{HLP-eq1} L^{n-k+q}\alpha'=L^{n-k+2q}\alpha_q.\end{equation} The left hand side of (\ref{HLP-eq1}) is $d$-exact. Since $L^{n-k+2q}=L^{n-(k-2q)}$ and $k-2q\leq s$, it follows from the transverse $s$-Lefschetz property that $\alpha_q$ is $d$-exact as well. This proves that for each $r>0$, $\alpha_r$ is $d$-exact. Finally, if $\alpha_0$ appears in the right hand side of Equation (\ref{L-decomposition-2}), then by the above argument, \[ \alpha_0=\alpha-\displaystyle \sum_{r>0}L^{r}\alpha_r\] must be $d$-exact as well. This finishes the proof of the case when $0\leq k\leq s+2$. \textbf{Step 2.} We prove the case when $2n-s\leq k\leq 2n$. Set $p=2n-k$. Since $\alpha \in\Omega_{bas}^{2n-p}(M)$ is harmonic, by Lemma \ref{har-forms}, there exists a harmonic $p$-form $\beta \in \Omega_{bas}^p(M)$ such that $\alpha=L^{n-p} \beta$, where $0\leq p\leq s$. Lefschetz decompose $\beta$ as follows. \[ \beta =\displaystyle \sum_{r\geq \text{max}(\frac{p-n}{2}, 0)}L^r\beta_{r},\] where $\beta_{r}\in \Omega_{bas}^{p-2r}(M)$. Since $\alpha=L^{n-p}\beta$ is $d$-exact and $M$ satisfies the transverse $s$-Lefschetz property, $\beta$ must be a $d$-exact basic form in $\Omega_{bas}(M)$. By our work in Step 1, each $\beta_r$ must be $d$-exact. However, we have \[\alpha=\displaystyle \sum_{r\geq \text{max}(\frac{p-n}{2}, 0)}L^{n-p+r}\beta_{r}.\] Since the Lefschetz decomposition of $\alpha$ is unique, this completes the proof of Lemma \ref{decomp-1}. \end{proof} \begin{proposition} \label{ddelta-relation}Suppose that $(M,\mathcal{F},\omega)$ is a transversely symplectic foliation of co-dimension $2n$ that satisfies the transverse $s$-Lefschetz property, where $0\leq s \leq n - 1$.
Then we have that \begin{itemize} \item [a)] If $\alpha\in \Omega_{bas}(M)$ is both $d$-exact and $\delta$-closed, and if either $0\leq \text{deg}\, \alpha\leq s+2$ or $2n-s\leq \text{deg}\, \alpha\leq 2n$, then $\alpha$ is $\delta$-exact. \item[b)] If $\alpha\in \Omega_{bas}(M)$ is both $\delta$-exact and $d$-closed, and if either $0\leq \text{deg}\, \alpha\leq s$ or $2n-s-2\leq \text{deg} \, \alpha\leq 2n$, then $\alpha$ is $d$-exact. \item[c)]\[\,\,\,\text{on}\,\left( \bigoplus_{k=0}^s\Omega_{bas}^k(M)\right)\oplus\left(\bigoplus_{k=2n-s}^{2n}\Omega_{bas}^k(M)\right), \,\,\,\text{Im}\, d\cap \text{ker}\, \delta=\text{ker}\, d\cap \text{Im}\, \delta. \] \end{itemize} \end{proposition} \begin{proof} We first prove a). Let $\alpha\in \Omega_{bas}^k(M)$ be both $d$-exact and $\delta$-closed, where $0\leq k\leq s+2$ or $2n-s\leq k\leq 2n$. Lefschetz decompose $\alpha$ into \[\alpha=\displaystyle\sum_{r\geq 0} L^r\alpha_{r},\] where $\alpha_{r}$ is a closed primitive basic form of degree $k-2r$. Without loss of generality, we may assume that $r\leq n-(k-2r)$, for otherwise $L^r\alpha_r=0$ due to the primitivity of $\alpha_r$. Since $\alpha$ is both $d$-exact and $\delta$-closed, by Lemma \ref{decomp-1}, each term $\alpha_r$ is $d$-exact. Now by Lemma \ref{Weil-identity}, \[\star \alpha=\displaystyle\sum_{r\geq 0}C_r L^{n-k+r}\alpha_{r},\] where $C_r$ is a constant that depends on $k$ and $r$. It follows that $\star \alpha$ is $d$-exact. Therefore $\alpha$ must be $\delta$-exact. Next we prove b). Suppose that $\alpha\in \Omega_{bas}(M)$ is both $\delta$-exact and $d$-closed, and that either $0\leq \text{deg}\, \alpha\leq s$ or $2n-s-2\leq \text{deg} \, \alpha\leq 2n$. Then $\star \alpha$ is both $d$-exact and $\delta$-closed. Moreover, either $0\leq \text{deg}\, \star\alpha\leq s+2$ or $2n-s\leq \text{deg}\, \star \alpha\leq 2n$. It follows from a) that $\star\alpha$ is $\delta$-exact. Therefore $\alpha$ itself is $d$-exact.
Finally, we note that c) is an immediate consequence of a) and b). \end{proof} Suppose that $\tau\in \Omega_{bas}^k(M)$ has the property that $d\tau$ is harmonic, where $0\leq k\leq s+2$ or $2n-s\leq k\leq 2n$. Lefschetz decompose $\tau$ as follows. \begin{equation}\label{summand-1} \tau=\sum_{r\geq \text{max}(\frac{k-n}{2}, 0)}L^r\alpha_r,\end{equation} where the $\alpha_r$ are primitive basic forms. Then \begin{equation}\label{summand-2}d\tau=\sum_{r\geq \text{max}(\frac{k-n}{2}, 0)}L^rd\alpha_r.\end{equation} The argument given in the proof of \cite[Lemma 3.2.4, 3.2.5, Cor. 3.2.6]{He10} extends verbatim to the present situation, and gives us the following strengthening of Proposition \ref{ddelta-relation}. \begin{lemma}\label{exact1} Suppose that $(M,\mathcal{F},\omega)$ is a transversely symplectic foliation of co-dimension $2n$ that satisfies the transverse $s$-Lefschetz property, where $0\leq s \leq n - 1$. Then in Equation (\ref{summand-2}), each term $d\alpha_r=\beta_r+\omega\wedge \beta_r'$, where $\beta_r$ and $\beta_r'$ are $d$-exact primitive basic forms. As a result, each summand $L^r d\alpha_r$ in Equation (\ref{summand-2}) is $\delta$-exact. \end{lemma} We are ready to prove the symplectic $d\delta$-lemma on basic differential forms. \begin{theorem}\label{weak-ddelta-lemma} Let $(M,\mathcal{F},\omega)$ be a transversely symplectic foliation of co-dimension $2n$ on a connected manifold $M$, and let $0\leq s \leq n - 1$. Then the following statements are equivalent: \begin{itemize}\item [1)] $M$ has the transverse $s$-Lefschetz property. \item[2)] $M$ satisfies the $d\delta$-lemma up to degree $s$. \item[3)] The identities (\ref{s-ddelta-lemma}) hold on $\Omega_{bas}^{\geq 2n-s}(M)$, and $\text{Im}\, \delta\cap \text{ker}\, d = \text{Im}\, d\delta$ holds on $\Omega_{bas}^{2n-s-1}(M)$.\end{itemize} \end{theorem} \begin{proof} \textbf{Step 1}. We show that $1)$ implies $2)$.
Since by Proposition \ref{ddelta-relation}, \[\text{Im}\, d\cap \text{ker}\, \delta=\text{ker}\, d\cap\text{Im}\,\delta\,\,\,\text{ on}\,\,\bigoplus_{k=0}^s\Omega_{bas}^k(M),\] it suffices to show that for any $0\leq k \leq s+1$, if a basic $k$-form $\alpha$ is both $d$-exact and $\delta$-closed, then $\alpha \in \text{Im}\,d\delta$. We proceed by induction on the degree $k$ of $\alpha$. The case of degree 0 is trivial. Assume that $\alpha\in\Omega^{1}_{\textmd{bas}}(M)$ satisfies $\alpha=df$ for some basic function $f$ and $\delta\alpha=0$. Note that $f$ represents a homology class in $H_{\delta}(\Omega_{bas}^0(M),\R)$. However, it follows from Lemma \ref{homology-cohomology} and the transverse $s$-Lefschetz property, that \[H_{\delta}(\Omega^0_{bas}(M),\R)\cong H^{2n}_B(M,\R)\cong H^0_B(M,\R)\cong \R.\] Here the last isomorphism holds since $M$ is connected. Therefore there exists a constant $a$ and a basic one form $\gamma$ such that $f=a+\delta \gamma$. It follows that $\alpha=d(a+\delta \gamma)=d\delta \gamma $. Assume by induction that this is true for basic forms up to degree $<k$, where $k\leq s+1$. Now suppose that a $k$-form $\alpha \in \text{Im}\,d \cap\text{ker}\,\delta$. By assumption, $\alpha= d\beta$ for some $\beta\in \Omega^{k-1}_{bas}(M)$. Lefschetz decompose $\beta$ as \[ \beta=\displaystyle \sum_{r\geq 0} L^r\beta_r,\] where $\beta_r\in \Omega_{bas}^{k-1-2r}(M)$ are primitive basic forms. Then \[ d\beta=\displaystyle \sum_{r\geq 0} L^r d\beta_r.\] By Lemma \ref{exact1}, $d\beta_r=v_r+Lv_r'$, where $v_r$ and $v_r'$ are $d$-exact primitive basic forms of degree $k-2r$ and $k-2r-2$ respectively. As a result, each term $L^rd\beta_r$ is both $d$-exact and $\delta$-closed. So we only need to prove that $d\beta \in \text{Im}\, d\delta$ in the case that $\beta=L^r v$, where $v\in \Omega_{bas}^{k-2r-1}(M)$ is a primitive form, and $d\beta$ is $\delta$-closed. By Lemma \ref{Weil-identity}, we get \[ \star L^r v=CL^{n-k+r+1} v,\] where $C$ is a constant.
By Lemma \ref{exact1}, \[ d\star L^r v=C \left(L^{n-k+r+1} u_1+L^{n-k+r+2} u_2\right),\] where $u_1$ and $u_2$ are both primitive forms, and are both $d$-exact. Applying $\star$ to both sides of the equation again, we get \[ \delta L^r v=C_1L^{r-1} u_1+C_2L^r u_2.\] Hence $\delta L^rv $ is both $\delta$-exact and $d$-exact. By the induction hypothesis, there exists $\sigma \in \Omega_{bas}^{k-2}(M)$ such that $\delta L^rv=\delta d\sigma$. Now set $\gamma= L^rv- d\sigma$. Then $\alpha= d\gamma$ and $\delta \gamma=0$. Since $\gamma$ is $\delta$-closed, $\star \gamma$ is a closed basic form of degree $2n-(k-1)$. Since $M$ satisfies the transverse $s$-Lefschetz property, by Theorem \ref{Mathieu's-theoremv2} there exists a harmonic form $\eta$ such that $\star \gamma-\eta$ is $d$-exact. Consequently, $\gamma-\star \eta=\star(\star \gamma-\eta)=\delta \mu$ for some $\mu\in \Omega^k_{bas}(M)$. Note that $\star \eta$ is also harmonic. It follows that \[ \alpha =d\gamma=d(\star \eta+\delta \mu)=d\delta \mu.\] \textbf{Step 2}. We show that 2) implies 3). Note that by Proposition \ref{ddelta-relation}, we have that \[\text{Im}\, d\cap \text{ker}\, \delta=\text{ker}\, d\cap\text{Im}\,\delta\,\,\,\text{ on}\,\,\bigoplus_{k=2n-s}^{2n}\Omega_{bas}^k(M).\] Thus it suffices to show that for any $0\leq k\leq s+1$, if a basic $(2n-k)$-form $\alpha$ is both $\delta$-exact and $d$-closed, then it must lie in the image of $d\delta$. Indeed, given such a basic form $\alpha$, $\star \alpha$ must be both $d$-exact and $\delta$-closed. Since $\star\alpha$ is of degree $k$, it follows from 2) that there exists a basic form $\eta$ such that $\star \alpha=d\delta \eta$. As a result, we have that \[\alpha=\star(\star \alpha)= \star d\delta\eta =(\star d\star) (\star \delta \star) (\star \eta)=\pm\, \delta d (\star\eta)\in\text{Im}\,d\delta.\] \textbf{Step 3}. We show that 3) implies 1). We first show that for any $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is surjective.
In view of Theorem \ref{Mathieu's-theoremv2}, it suffices to show that any cohomology class $[\alpha]_B\in H^{2n-k}_B(M,\R)$ has a harmonic representative, where $0\leq k\leq s$. Note that $\delta \alpha$ is both $\delta$-exact and $d$-closed, and has degree $2n-k-1$. It follows from 3) that $\delta \alpha=\delta d\gamma$ for some basic form $\gamma$. Note that $\delta(\alpha-d\gamma)=0$. Thus $\alpha-d\gamma$ is both $d$-closed and $\delta$-closed, and therefore is harmonic. Clearly, $\alpha-d\gamma$ is a harmonic representative for the cohomology class $[\alpha]_B$. Next, we show that for any $0\leq k\leq s$, the Lefschetz map (\ref{foliation-Lefschetz-map}) is injective. Suppose that $[\alpha]_B\in H^{k}_B(M,\R)$ satisfies $L^{n-k}[\alpha]_B=0$. By the same argument as given in the previous paragraph, it is easy to see that $[\alpha]_B$ admits a harmonic representative. Without loss of generality, we may assume that $\alpha$ is a harmonic form. It follows that $L^{n-k}\alpha$ is both $d$-exact and $\delta$-closed. Thus 3) implies that $L^{n-k}\alpha = d\delta \gamma$ for some basic form $\gamma$ of degree $2n-k$. By Lemma \ref{yan's-result}, $\gamma=L^{n-k}\eta$ for some basic $k$-form $\eta$. Since $d\delta$ commutes with $L$, \[L^{n-k}\alpha=d\delta (L^{n-k}\eta)=L^{n-k}(d\delta\eta).\] Recall that by Lemma \ref{yan's-result}, the map $L^{n-k}: \Omega_{bas}^k(M)\rightarrow \Omega^{2n-k}_{bas}(M)$ is an isomorphism. It follows that $\alpha=d\delta \eta$. This completes the proof of Theorem \ref{weak-ddelta-lemma}. \end{proof} \section{Review of contact and Sasakian geometry}\label{review-contact} Let $(M,\eta)$ be a co-oriented contact manifold with a contact one-form $\eta$. We say that $(M,\eta)$ is \textbf{$K$-contact} if there is an endomorphism $\Phi: TM\rightarrow TM$ such that the following conditions are satisfied.
\begin{itemize} \item[1)] $\Phi^2=-Id+\xi\otimes \eta$, where $\xi$ is the Reeb vector field of $\eta$; \item [2)] the contact one form $\eta$ is compatible with $\Phi$ in the sense that \[ d\eta(\Phi(X),\Phi(Y))=d\eta(X,Y)\] for all $X$ and $Y$; moreover, $d\eta(\Phi(X),X)>0$ for all non-zero $X \in \text{ker}\,\eta$; \item[3)] the Reeb field of $\eta$ is a Killing field with respect to the Riemannian metric defined by the formula \[ g(X,Y)=d\eta(\Phi(X),Y)+\eta(X)\eta(Y).\] \end{itemize} Given a $K$-contact structure $(M,\eta,\Phi,g)$, one can define a metric cone \[ (C(M), g_C)=(M\times \R_+, r^2g+dr^2),\] where $r$ is the radial coordinate. The $K$-contact structure $(M,\eta,\Phi)$ is called Sasakian if this metric cone is a K\"ahler manifold with K\"ahler form $\dfrac{1}{2} d(r^2\eta)$. Let $(M,\eta)$ be a contact manifold with contact one form $\eta$ and Reeb vector field $\xi$, let $\omega=d\eta$, and let $\mathcal{F}_{\xi}$ be the Reeb characteristic foliation. As we explained in Example \ref{Contact-example}, $(M,\mathcal{F}_{\xi},\omega)$ is a one dimensional transversely symplectic foliation. The following result relates $H^*_B(M)$ to $H^*(M)$. \begin{proposition}\label{exact-sequence}(\cite[Sec. 7.2]{BG08})\begin{itemize} \item[1)] On any $K$-contact manifold $(M,\eta)$, there is a long exact cohomology sequence \begin{equation}\label{Gysin} \cdots \rightarrow H^k_B(M,\R) \xrightarrow{i_*} H^k(M, \R)\xrightarrow{j_k} H^{k-1}_B(M,\R)\xrightarrow{\wedge[d\eta]} H^{k+1}_B(M,\R)\xrightarrow{i_*} \cdots,\end{equation} where $i_*$ is the map induced by the inclusion, and $j_k$ is the map induced by $\iota_{\xi}$.
\item[2)] If $(M,\eta)$ is a compact $K$-contact manifold of dimension $2n+1$, then for any $r\geq 0$ the basic cohomology $H_B^r(M,\R)$ is finite dimensional, and for $r>2n$, the basic cohomology $H_B^r(M,\R)=0$; moreover, for any $0\leq r\leq 2n$, there is a non-degenerate pairing \[ H^r_B(M,\R)\otimes H^{2n-r}_B(M,\R)\rightarrow \R,\,\,\,([\alpha]_B,[\beta]_B)\mapsto \int_M\, \eta\wedge\alpha\wedge\beta. \] \end{itemize} \end{proposition} On a compact Sasakian manifold $M$, the following Hard Lefschetz theorem is due to El Kacimi-Alaoui \cite{ka90}. \begin{theorem}(\cite{ka90})\label{trans-Kahler} Let $(M,\eta,g)$ be a compact Sasakian manifold with a contact one form $\eta$ and a Sasakian metric $g$. Then $M$ satisfies the transverse Hard Lefschetz property introduced in Definition \ref{weak-lefschetz-property}. \end{theorem} More recently, Cappelletti-Montano, De Nicola, and Yudin \cite{CNY13} established a Hard Lefschetz theorem for the de Rham cohomology of a compact Sasakian manifold. \begin{theorem}(\cite{CNY13}) \label{HLP-sasakian} Let $(M,\eta,g)$ be a $2n+1$ dimensional compact Sasakian manifold with a contact one form $\eta$ and a Sasakian metric $g$, and let $\Pi: \Omega^*(M)\rightarrow \Omega_{har}^*(M)$ be the projection onto the space of harmonic forms. Then for any $0\leq k\leq n$, the map \[Lef_k: H^{k}(M,\R)\rightarrow H^{2n+1-k}(M,\R), [\beta]\mapsto [\eta\wedge (d\eta)^{n-k}\wedge \Pi \beta]\] is an isomorphism. Moreover, for any $[\beta]\in H^k(M,\R)$, and for any closed basic primitive $k$-form $\beta'\in [\beta]$, $[\eta\wedge (d\eta)^{n-k}\wedge \beta']=Lef_k([\beta])$. In particular, the Lefschetz map $Lef_k$ does not depend on the choice of a compatible Sasakian metric. \end{theorem} This result motivated them to propose the following definition of the Hard Lefschetz property for a contact manifold. \begin{definition}\label{HLP-contact} Let $(M,\eta)$ be a $2n+1$ dimensional compact contact manifold with a contact $1$-form $\eta$.
For any $0\leq k\leq n$, define the Lefschetz relation between the cohomology group $H^{k}(M,\R)$ and $H^{2n+1-k}(M,\R)$ to be \begin{equation}\label{Lef-relation} \mathcal{R}_{Lef_k}=\{([\beta],[\eta\wedge L^{n-k}\beta])\,\vert\, \iota_{\xi}\beta=0, d\beta=0, L^{n-k+1}\beta=0\}.\end{equation} If, for any $0\leq k\leq n$, (\ref{Lef-relation}) is the graph of an isomorphism $Lef_k: H^{k}(M,\R)\rightarrow H^{2n+1-k}(M,\R)$, then the contact manifold $(M,\eta)$ is said to have the hard Lefschetz property. \end{definition} We introduce the following refinement of Definition \ref{HLP-contact}. \begin{definition} \label{HLP-contactv2} Let $(M,\eta)$ be a $2n+1$ dimensional compact contact manifold with a contact $1$-form $\eta$, and let $0\leq s\leq n-1$. If for any $0\leq k\leq s$, (\ref{Lef-relation}) is the graph of an isomorphism $Lef_k: H^{k}(M,\R)\rightarrow H^{2n+1-k}(M,\R)$, then the contact manifold $(M,\eta)$ is said to have the $s$-Lefschetz property. \end{definition} \begin{remark} Note that by the Poincar\'e duality, every compact contact manifold is $0$-Lefschetz. For the same reason, a simply-connected compact contact manifold is $1$-Lefschetz. \end{remark} \section{K-contact manifolds with the transverse $s$-Lefschetz property}\label{Kcontact-s-lefschetz} Throughout this section, we assume $(M,\eta)$ to be a $2n+1$ dimensional compact $K$-contact manifold with a contact $1$-form $\eta$, and a Reeb vector field $\xi$. Let $\omega=d\eta$, and let $\mathcal{F}_{\xi}$ be the Reeb characteristic foliation. We will apply the machinery developed in Section \ref{ddelta-lemma} to the transverse symplectic flow $(M,\mathcal{F}_{\xi},\omega)$, and prove that $M$ satisfies the transverse $s$-Lefschetz property if and only if it satisfies the $s$-Lefschetz property stated in Definition \ref{HLP-contactv2}. \begin{lemma}\label{tech-lemma1}Let $(M,\eta)$ be a $2n+1$ dimensional compact $K$-contact manifold with a contact $1$-form $\eta$.
Assume that $M$ satisfies the transverse $s$-Lefschetz property, $0\leq s\leq n-1$. Then for any $0\leq k\leq s+1$, the map \[ i_*: H^k_B(M, \R)\rightarrow H^k(M,\R)\] is surjective; moreover, its image equals \begin{equation} \label{image} \{i_*[\alpha]_B\,\vert\, \alpha \in \Omega^k_{bas}(M), d\alpha=0, \omega^{n-k+1}\wedge \alpha=0\}.\end{equation} As a result, the restriction map $i_*: PH^k_{B}(M,\R)\rightarrow H^k(M,\R)$ is an isomorphism. \end{lemma} \begin{proof} Consider the long exact sequence (\ref{Gysin}). By assumption, $M$ satisfies the transverse $s$-Lefschetz property. Thus the map \[H^{i}_B(M,\R)\xrightarrow{\wedge[\omega]} H_B^{i+2}(M,\R)\] is injective for any $ 0\leq i \leq s$. It then follows from the exactness of the sequence (\ref{Gysin}) that the map \[ i_*: H^{k}_B(M,\R)\rightarrow H^{k}(M,\R)\] is surjective for any $ 0\leq k\leq s+1$. This proves the first assertion in Lemma \ref{tech-lemma1}. Since $M$ satisfies the transverse $s$-Lefschetz property, by Theorem \ref{primitive-decomposition}, for any $0\leq k\leq s+1$, \[ H^{k}_B(M,\R)= PH^k_B(M)\oplus L H^{k-2}_B(M,\R).\] It is clear from the exactness of the sequence (\ref{Gysin}) that \[i_*\left(H^k_B(M,\R)\right)=i_*\left(PH^k_B(M,\R)\right),\,\,\,\text{ker}\, i_* \cap PH^k_B(M,\R)=0.\] Therefore the restriction map $i_*:PH^k_B(M,\R)\rightarrow H^k(M,\R)$ is an isomorphism. Finally, the fact that $i_*\left( H^k_B(M,\R)\right)$ equals (\ref{image}) follows easily from Remark \ref{harmonic-primitive-class}. This completes the proof of Lemma \ref{tech-lemma1}. \end{proof} We are ready to define the Lefschetz map on the cohomology groups. In \cite{CNY13}, such maps are introduced using Riemannian Hodge theory associated to a compatible Sasakian metric. In contrast, we define these maps here using the symplectic Hodge theory on the space of basic forms. For any $ 0\leq k\leq s+1$, define $Lef_k : H^{k}(M,\R)\rightarrow H^{2n+1-k}(M,\R)$ as follows.
For any cohomology class $[\gamma] \in H^{k}(M,\R)$, by Lemma \ref{tech-lemma1} there exists a closed primitive basic $k$-form $\alpha \in \mathcal{P}_{bas}^k(M)$ such that $i_*[\alpha]_B=[\gamma]$. Since $d\alpha=0$ and $\alpha$ is primitive, observe that $d \left( \eta\wedge L^{n-k} \alpha\right)= d\eta\wedge L^{n-k}\alpha= L^{n-k+1} \alpha=0$. We define \begin{equation}\label{main-map} Lef_k[\gamma]= [\eta\wedge L^{n-k} \alpha].\end{equation} \begin{lemma}\label{tech-lemma2} Assume that $M$ satisfies the transverse $s$-Lefschetz property. Then for any $0\leq k\leq s+1$, the map (\ref{main-map}) does not depend on the choice of closed primitive basic forms. \end{lemma} \begin{proof} Suppose that there are two closed primitive basic $k$-forms $\alpha_1$ and $\alpha_2$ such that $i_*[ \alpha_1 ]_B=i_*[\alpha_2]_B\in H^{k}(M,\R)$. It follows from the exactness of the sequence (\ref{Gysin}) that $ [\alpha_1]_B=[\alpha_2]_B +L [\beta]_B$ for some closed basic $(k-2)$-form $\beta$. Since $M$ satisfies the transverse $s$-Lefschetz property, by Theorem \ref{Mathieu's-theoremv2} we may assume that $\beta$ is symplectic harmonic. Therefore, $\alpha_1-\alpha_2 -L \beta$ is both $d$-exact and $\delta$-closed. By Theorem \ref{weak-ddelta-lemma}, the symplectic $d\delta$-lemma, there exists a basic $k$-form $\varphi$ such that \begin{equation} \label{difference} \alpha_1-\alpha_2 -L \beta= d\delta \varphi. \end{equation} Lefschetz decompose $\beta$ and $\varphi$ as follows: \[ \begin{split} &\beta=\beta_{k-2}+L\beta_{k-4}+L^2\beta_{k-6}+\cdots \\ & \varphi= \varphi_{k}+L\varphi_{k-2}+L^2\varphi_{k-4}+\cdots \end{split} \] Here $\varphi_{k-i}\in \mathcal{P}_{bas}^{k-i}(M)$, $i=0, 2,\cdots$, and $\beta_{k-i}\in \mathcal{P}_{bas}^{k-i}(M)$, $i=2,4,\cdots$.
Since $d\delta$ commutes with $L$, it follows from (\ref{difference}) that \[ \alpha_1-\alpha_2=d\delta \varphi_k +L(\beta_{k-2}+d\delta \varphi_{k-2})+L^2(\beta_{k-4}+d\delta\varphi_{k-4})+\cdots .\] Since $d\delta$ commutes with $\Lambda$, $d\delta$ maps primitive forms to primitive forms. It then follows from the uniqueness of the Lefschetz decomposition that \[\alpha_1-\alpha_2=d\delta \varphi_k.\] Observe that \[\begin{split} \eta\wedge \left(\omega^{n-k}\wedge (\alpha_1-\alpha_2)\right)&= \eta\wedge \left(\omega^{n-k}\wedge d\delta\varphi_k\right)\\&= -d\left( \eta\wedge \omega^{n-k}\wedge \delta\varphi_k\right)+ L^{n-k+1} \delta\varphi_k. \end{split}\] Since $d$ commutes with $L$, iterating the commutator relation $[L, \delta]=-d$ gives \[ L^{n-k+1}\delta\varphi_k=\delta L^{n-k+1}\varphi_k-(n-k+1)\, dL^{n-k}\varphi_k=-(n-k+1)\, dL^{n-k}\varphi_k,\] where the last equality holds because $\varphi_k$ is a primitive $k$-form and so $L^{n-k+1}\varphi_{k}=0$. In particular, $L^{n-k+1}\delta\varphi_k$ is $d$-exact. It follows immediately that $\eta\wedge L^{n-k} (\alpha_1-\alpha_2)$ must be $d$-exact. This completes the proof of Lemma \ref{tech-lemma2}. \end{proof} \begin{theorem} \label{main-result1}Let $M$ be a $2n+1$ dimensional compact $K$-contact manifold with a contact one form $\eta$, and let $0\leq s\leq n-1$. Then $M$ satisfies the transverse $s$-Lefschetz property as introduced in Definition \ref{weak-lefschetz-property} if and only if it satisfies the $s$-Lefschetz property as introduced in Definition \ref{HLP-contactv2}. \end{theorem} \begin{proof} {\bf Step 1.} \,Assume that $M$ satisfies the transverse $s$-Lefschetz property. We show that $M$ satisfies the $s$-Lefschetz property. Since $M$ is oriented and compact, in view of the Poincar\'e duality, it suffices to show that for any $0\leq k\leq s$, the map given in (\ref{main-map}) is injective. Suppose that $Lef_k[\gamma]=[\eta\wedge L^{n-k} \alpha]=0$, where $\alpha \in \mathcal{P}_{bas}^k(M)$ is such that $d\alpha=0$ and $i_*[\alpha]_B=[\gamma]$.
Since the group homomorphism $j_{2n+1-k}:H^{2n+1-k}(M,\R)\rightarrow H_B^{2n-k}(M,\R)$ is induced by $\iota_{\xi}$, it follows that \[0=j_{2n+1-k}(0)=j_{2n+1-k}([\eta\wedge (L^{n-k} \alpha)])=[ L^{n-k} \alpha]_B.\] Since $M$ has the transverse $s$-Lefschetz property, and since $0\leq k\leq s$, $[\alpha]_B=0$. Thus $[\gamma]=i_*([\alpha]_B)=0$. {\bf Step 2.}\, Assume that $M$ satisfies the $s$-Lefschetz property, i.e., for any $0\leq k\leq s$, \[ \mathcal{R}_{Lef_k}=\{([\beta],[\eta\wedge L^{n-k}\beta])\,\vert \iota_{\xi}\beta=0, d\beta=0, L^{n-k+1}\beta=0\}\] is the graph of an isomorphism $Lef_k: H^k(M)\rightarrow H^{2n-k+1}(M)$. In particular, it implies that if $\alpha$ is a closed primitive basic $k$-form, $0\leq k\leq s$, then $Lef_k([\alpha])=[\eta\wedge L^{n-k}\alpha]$. We first claim that if $[\alpha]_B\in PH^k_B(M,\R)$ satisfies $L^{n-k}[\alpha]_B=0\in H^{2n-k}_B(M,\R)$, then $[\alpha]_B\in \text{im}\,L$. By Remark \ref{harmonic-primitive-class}, we may assume that $\alpha$ is a closed primitive basic $k$-form. Then for any closed primitive basic $k$-form $\beta$, \[j_{2n+1}\left(Lef_k(i_*[\alpha]_B)\cup i_*[\beta]_B\right)= j_{2n+1}([\eta\wedge L^{n-k}\alpha \wedge \beta])= L^{n-k}[\alpha]_B\cup [\beta]_B=0.\] We observe that the map $j_{2n+1}: H^{2n+1}(M,\R)\rightarrow H_B^{2n}(M,\R)$ is an isomorphism. Indeed, this is an immediate consequence of the exactness of the sequence (\ref{Gysin}) at stage $2n+1$. Since $H_B^{i}(M,\R)=0$ when $i\geq 2n+1$, we have that \begin{equation}\label{Gysin-final-stage}\cdots \rightarrow 0 \xrightarrow{i_*} H^{2n+1}(M, \R)\xrightarrow{j_{2n+1}} H^{2n}_B(M,\R)\xrightarrow{\wedge[\omega]} 0\rightarrow \cdots. \end{equation} As a result, $Lef_k(i_*[\alpha]_B) \cup i_*[\beta]_B=0$. Since $\beta$ is arbitrarily chosen, by the Poincar\'e duality, we must have $Lef_k(i_*[\alpha]_B)=0$. Since $Lef_k$ is an isomorphism, $i_*[\alpha]_B=0$.
By the exactness of the sequence (\ref{Gysin}), $[\alpha]_B=L[\lambda]_B$ for some $[\lambda]_B\in H_B^{k-2}(M,\R)$. This proves our claim. Now we show that for any $0\leq k\leq s$, the map \begin{equation}\label{induction} L^{n-k}: H^k_B(M)\rightarrow H_B^{2n-k}(M),\,\,\,[\alpha]_B\mapsto [\omega^{n-k}\wedge \alpha]_B\end{equation} is an isomorphism by induction on $k$. By Part 2) in Proposition \ref{exact-sequence}, it suffices to show that for any $0\leq k\leq s$, the map (\ref{induction}) is injective. Note that when $k=0,1$, for degree reasons, every closed basic $k$-form is primitive; furthermore, $\text{im}\, L\cap \Omega^k_{bas}(M)=\{0\}$. As a result, when $k=0,1$, the injectivity of the map (\ref{induction}) is a simple consequence of the claim we established above. Assume that $M$ satisfies the transverse $p$-Lefschetz property for all $p<k$. Then it follows from Theorem \ref{primitive-decomposition} that $H^k_B(M,\R)=PH^k_B(M,\R)\oplus\text{im}\,L$. Suppose that $L^{n-k}([\alpha]_B+L[\sigma]_B)=0$, where $[\alpha]_B\in PH_B^{k}(M,\R)$ and $[\sigma]_B\in H_B^{k-2}(M,\R)$. Then we must have $L^{n-k+1}([\alpha]_B+L[\sigma]_B)=L^{n-k+2}[\sigma]_B=0$ since $[\alpha]_B\in PH^k_B(M)$. It follows from our inductive hypothesis that $[\sigma]_B=0$. As a result, $L^{n-k}[\alpha]_B=0$. By the claim we established earlier, we must have that $[\alpha]_B=L[\beta]_B$ for some $[\beta]_B\in H^{k-2}_B(M,\R)$. However, by Theorem \ref{primitive-decomposition}, $PH^k_B(M,\R)\cap \text{im}\,L=0$. Therefore we must have $[\alpha]_B=0$. This completes the proof of Theorem \ref{main-result1}. \end{proof} \section{Cup length of Lefschetz $K$-contact manifolds}\label{cup-length} In this section, we show that for compact $K$-contact manifolds, the weak Lefschetz condition implies a fairly general result on the vanishing of cup products.
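The mechanism behind these vanishing results is already implicit in the long exact sequence (\ref{Gysin}): exactness at $H^{k+1}_B(M,\R)$ gives \[ \text{im}\left(\wedge[d\eta]: H^{k-1}_B(M,\R)\rightarrow H^{k+1}_B(M,\R)\right)=\text{ker}\left(i_*: H^{k+1}_B(M,\R)\rightarrow H^{k+1}(M,\R)\right), \] so that every basic cohomology class in the image of $L$ is annihilated by $i_*$.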
\begin{theorem}\label{vanishing-cup-prod}Let $(M,\eta)$ be a $2n + 1$ dimensional compact $K$-contact manifold that satisfies the $s$-Lefschetz property, where $0\leq s\leq n-1$, and let $y_i\in H^{k_i}(M,\R)$, $1\leq i\leq p$. If $1\leq k_i\leq s+1$ for all $1\leq i \leq p$, and if $k_1+\cdots+k_p\geq 2n-s$, then the cup product \begin{equation} \label{vanishing-cup-eq}y_1\cup\cdots\cup y_p=0.\end{equation} \end{theorem} \begin{proof} For each $1\leq i\leq p$, since $1\leq k_i\leq s+1$, by Lemma \ref{tech-lemma1} there exists $[\alpha_i]_B \in H_B^{k_i}(M)$ such that $i_*([\alpha_i]_B)=y_i$. By assumption, $M$ satisfies the $s$-Lefschetz property. It follows from Theorem \ref{main-result1} that $M$ must satisfy the transverse $s$-Lefschetz property as well. Since $k_1+\cdots +k_p\geq 2n-s$, there exists $ [\beta]_B \in H_B^{2n-k_1-\cdots-k_p}(M)$ such that \[ [\alpha_1 \wedge \cdots \wedge \alpha_p]_B= L^{k_1+\cdots +k_p-n}([\beta]_B).\] Clearly, we have that $k_1+\cdots+k_p-n\geq n-s\geq 1$. It follows immediately from the exactness of the sequence (\ref{Gysin}) that \[ \begin{split} y_1\cup \cdots \cup y_p&=i_*([\alpha_1]_B)\cup \cdots \cup i_*([\alpha_p]_B) \\&=i_*([\alpha_1\wedge \cdots \wedge \alpha_p]_B)\\&=i_*(L^{k_1+\cdots+k_p-n}([\beta]_B))=0.\end{split}\] \end{proof} The following result is an immediate consequence of Theorem \ref{vanishing-cup-prod}. \begin{theorem}\label{cup-length-lefschetz} Let $M$ be a $2n+1$ dimensional compact $K$-contact manifold that satisfies the $s$-Lefschetz property, where $0\leq s\leq n-1$. Then the cup length of $M$ is at most $2n-s$. \end{theorem} \begin{proof} Let $y_i \in H^{k_i}(M)$, $1\leq i\leq p$. To establish Theorem \ref{cup-length-lefschetz}, it suffices to show that if $p>2n-s$, and if $k_i\geq 1$, then (\ref{vanishing-cup-eq}) holds.
Indeed, if there is a cohomology class, say $y_1$, such that $k_1>s+1$, then we have that \[ k_1+\cdots+k_p> s+1+ (2n-s)= 2n+1.\] So in this case (\ref{vanishing-cup-eq}) holds for degree reasons. Therefore we may assume that $1\leq k_i\leq s+1$, $\forall\, 1\leq i\leq p$. Since $k_1+\cdots+k_p\geq p> 2n-s$, in this case (\ref{vanishing-cup-eq}) follows directly from Theorem \ref{vanishing-cup-prod}. \end{proof} Since by the Poincar\'e duality, any compact connected $K$-contact manifold is $0$-Lefschetz, Theorem \ref{cup-length-lefschetz} immediately implies the following result of Boyer and Galicki \cite[Theorem 7.4.1]{BG08}. \begin{corollary}\label{cup-length-k-contact} The cup length of a $2n+1$ dimensional compact connected $K$-contact manifold is at most $2n$. \end{corollary} On the other hand, by Theorem \ref{trans-Kahler}, any $2n+1$ dimensional compact Sasakian manifold satisfies the transverse hard Lefschetz property. Theorem \ref{cup-length-lefschetz} gives us the following upper bound for the cup length of a compact Sasakian manifold. \begin{corollary}\label{cup-length-sasakian} The cup length of a $2n+1$ dimensional compact Sasakian manifold is at most $n+1$. \end{corollary} \section{Boothby-Wang fibration over weakly Lefschetz symplectic manifolds}\label{main-examples} In this section, we apply the main result obtained in Section \ref{Kcontact-s-lefschetz} to a Boothby-Wang fibration, and use it to construct examples of $K$-contact manifolds without any Sasakian structures in dimension $\geq 9$. We first briefly review the Boothby-Wang construction here, and refer to \cite{B76} for more details. A co-oriented contact structure on a $2n+1$ dimensional compact manifold $P$ is said to be regular if it is given as the kernel of a contact one form $\eta$, whose Reeb field $\xi$ generates a free effective $S^1$ action on $P$.
Under this assumption, $P$ is the total space of a principal circle bundle $ \pi: P\rightarrow M:=P/S^1$, and the base manifold $M$ is equipped with an integral symplectic form $\omega$ such that $\pi^* \omega =d\eta$. Conversely, let $(M,\omega)$ be a compact symplectic manifold with an integral symplectic form $\omega$, and let $\pi:P\rightarrow M$ be the principal circle bundle over $M$ with Euler class $[\omega]$ and a connection one form $\eta$ such that $\pi^*\omega=d\eta$. Then $\eta$ is a contact one form on $P$ whose characteristic Reeb vector field generates the right translations of the structure group $S^1$ of this bundle. It is easy to deduce the following result as a direct consequence of Theorem \ref{main-result1}. \begin{theorem} \label{regular-Lef-contact}Let $\pi: P\rightarrow M$ be a Boothby-Wang fibration as we described above. Then $(P,\eta)$ satisfies the $s$-Lefschetz property if and only if the base symplectic manifold $(M,\omega)$ satisfies the $s$-Lefschetz property. \end{theorem} Next, we recall a useful result \cite{Ha13} on when the total space of a Boothby-Wang fibration is simply-connected. Let $X$ be a compact oriented manifold of dimension $m$. We say that $c \in H^2(X,\Z)$ is indivisible if the map \[ c \cup : H^{m-2}(X,\Z)\rightarrow H^m(X,\Z)\] is surjective. \begin{lemma} \label{boothby-wang-l1} (\cite[Lemma 15]{Ha13}) Let $\pi: P\rightarrow M$ be a Boothby-Wang fibration, and let $\omega$ be an integral symplectic form on $M$ which represents the Euler class of the Boothby-Wang fibration. Then $P$ is simply-connected if and only if $M$ is simply-connected, and the Euler class $[\omega]$ is indivisible.\end{lemma} We also need the following result, concerning the existence of symplectic manifolds which are $s$-Lefschetz but not $(s+1)$-Lefschetz, proved in \cite[Prop. 5.2]{FMU07}. \begin{theorem} \label{example-weak-lef}Let $s \geq 2$ be an even integer.
Then there is a simply-connected symplectic manifold $(W_s,\omega)$ of dimension $2(s+2)$ which is $s$-Lefschetz but not $(s+1)$-Lefschetz. Moreover, the symplectic form $\omega$ is integral, and $b_{s+1}(W_s)=3$. \end{theorem} \begin{remark} By \cite[Theorem 4.2]{FMU07}, the symplectic form on $M_s$ constructed in \cite[Prop. 5.1]{FMU07} can be chosen to be integral. A careful reading of the proof of \cite[Prop. 5.2]{FMU07} shows that the symplectic form on $W_s$ can also be chosen to be integral; moreover, $b_{s+3}(W_s)=3$. Thus by the Poincar\'e duality, we have that $b_{s+1}(W_s)=3$. \end{remark} We are ready to prove the main result of Section \ref{main-examples}. \begin{theorem}\label{high-dim-example-1} For any even integer $s\geq 2$, there exists a $2s+5$ dimensional simply-connected compact $K$-contact manifold $(M,\eta)$, such that $(M,\eta)$ is $s$-Lefschetz but not $(s+1)$-Lefschetz, and such that $b_{s+1}(M)$ is odd and at most $3$. In particular, $M$ does not support any Sasakian structure. \end{theorem} \begin{proof} By Theorem \ref{example-weak-lef}, there is a closed simply-connected symplectic manifold $(W_s,\omega)$ of dimension $2(s+2)$ that is $s$-Lefschetz but not $(s+1)$-Lefschetz. Moreover, the symplectic form $\omega$ is integral, and $b_{s+1}(W_s)=3$. Without loss of generality, we may also assume that $[\omega]$ is indivisible. Let $(M,\eta)$ be the Boothby-Wang fibration over $(W_s,\omega)$ whose Euler class is $[\omega]$. Then by Lemma \ref{boothby-wang-l1}, $M$ is simply-connected, and by Theorem \ref{regular-Lef-contact}, $(M,\eta)$ is $s$-Lefschetz but not $(s+1)$-Lefschetz. Consider the following portion of the Gysin sequence for the principal circle bundle $\pi:M\rightarrow W_s$. \begin{equation}\label{Gysin-2} \cdots H^{s-1}(W_s,\R) \xrightarrow{\wedge[\omega]} H^{s+1}(W_s,\R) \xrightarrow{\pi^*} H^{s+1}(M, \R)\xrightarrow{\pi_*} H^{s}(W_s,\R)\xrightarrow{\wedge[\omega]} H^{s+2}(W_s,\R)\xrightarrow{\pi^*} \cdots,\end{equation} where $\pi_*:H^*(M,\R)\rightarrow H^{*-1}(W_s,\R)$ is the map induced by integration along the fibre.
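By the exactness of (\ref{Gysin-2}), as for any principal circle bundle, we obtain \[ b_{s+1}(M)=\dim \text{coker}\left(\wedge[\omega]: H^{s-1}(W_s,\R)\rightarrow H^{s+1}(W_s,\R)\right)+\dim \text{ker}\left(\wedge[\omega]: H^{s}(W_s,\R)\rightarrow H^{s+2}(W_s,\R)\right). \]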
Since $W_s$ is $s$-Lefschetz with $s$ being an even integer, by \cite[Prop. 2.6]{FMU07}, $b_{s-1}(W_s)$ must be even as $s-1$ is odd. Moreover, the map $H^{j}(W_s,\R)\xrightarrow{\wedge[\omega]} H^{j+2}(W_s,\R)$ must be injective for $j=s-1$ and $j=s$. As a result, $b_{s+1}(M)=b_{s+1}(W_s)-b_{s-1}(W_s)=3-b_{s-1}(W_s)$ must be odd as well. It follows from \cite[Theorem 7.4.11]{BG08} that $M$ cannot support any Sasakian structure. \end{proof} \noindent Yi Lin \\ Department of Mathematical Sciences \\ Georgia Southern University\\ 203 Georgia Ave., Statesboro, GA, 30460 \\ {\em E\--mail}: [email protected] \end{document}
\begin{document} \ifluatex \directlua{adddednatlualoader = function () require = function (stem) local fname = dednat6dir..stem..".lua" package.loaded[stem] = package.loaded[stem] or dofile(fname) or fname end end} \catcode`\^^J=10 \directlua{dofile "dednat6load.lua"} \else \def\ifnextchar/{\toop}{\toop/>/}{\ifnextchar/{\toop}{\toop/>/}} \def\rightarrow{\rightarrow} \def\defded#1#2{\expandafter\def\csname ded-#1\endcsname{#2}} \def\ifdedundefined#1{\expandafter\ifx\csname ded-#1\endcsname\relax} \def\ded#1{\ifdedundefined{#1} \errmessage{UNDEFINED DEDUCTION: #1} \else \csname ded-#1\endcsname \fi } \def\defdiag#1#2{\expandafter\def\csname diag-#1\endcsname{\bfig#2\efig}} \def\defdiagprep#1#2#3{\expandafter\def\csname diag-#1\endcsname{{#2\bfig#3\efig}}} \def\ifdiagundefined#1{\expandafter\ifx\csname diag-#1\endcsname\relax} \def\diag#1{\ifdiagundefined{#1} \errmessage{UNDEFINED DIAGRAM: #1} \else \csname diag-#1\endcsname \fi } \newlength{\celllower} \newlength{\lcelllower} \def\cellfont{} \def\lcellfont{} \def\cell #1{\lower\celllower\hbox to 0pt{\hss\cellfont${#1}$\hss}} \def\lcell#1{\lower\celllower\hbox to 0pt {\lcellfont${#1}$\hss}} \def\expr#1{\directlua{output(tostring(#1))}} \def\eval#1{\directlua{#1}} \def\directlua{pu()}{\directlua{pu()}} \defdiag{changeofthebaselaxcommacategorieschangeofbase}{ \morphism(0,0)|b|/{@{->}@/_18pt/}/<825,0>[{\mathbb{A}{//z}}`{\mathbb{A}{/y}};{c^\Leftarrow}] \morphism(825,0)|a|/{@{->}@/_18pt/}/<-825,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//z}};{c\overline{!}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{compositionoftwoadjunctionschangeofbasecommacompositionofpullbackcomma}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<675,0>[{\mathbb{A}{//z}}`{\mathbb{A}{/z}};{\id_z^\Leftarrow}] \morphism(675,0)|a|/{@{->}@/_16pt/}/<-675,0>[{\mathbb{A}{/z}}`{\mathbb{A}{//z}};{\id_z\overline{!}}] \morphism(675,0)|b|/{@{->}@/_16pt/}/<675,0>[{\mathbb{A}{/z}}`{\mathbb{A}{/y}};{c^\ast}] 
\morphism(1350,0)|a|/{@{->}@/_16pt/}/<-675,0>[{\mathbb{A}{/y}}`{\mathbb{A}{/z}};{c!}] \morphism(0,0)|b|/{@{->}@/_47pt/}/<1350,0>[{\mathbb{A}{//z}}`{\mathbb{A}{/y}};{c^\Leftarrow}] \morphism(1350,0)|a|/{@{->}@/_47pt/}/<-1350,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//z}};{c\overline{!}}] \morphism(338,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] \morphism(1012,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{whiskering1}{ \morphism(0,0)|r|/->/<0,-300>[{w}`{x};{f}] \morphism(0,-300)|l|/{@{->}@/_30pt/}/<0,-600>[{x}`{y};{h'}] \morphism(0,-300)|r|/{@{->}@/^30pt/}/<0,-600>[{x}`{y};{h}] \morphism(0,-900)|r|/->/<0,-300>[{y}`{z};{g}] \morphism(-225,-600)|a|/<=/<450,0>[{\phantom{O}}`{\phantom{O}};{\xi}] } \defdiag{whiskering2}{ \morphism(0,0)|r|/{@{->}@/_30pt/}/<0,-450>[{w}`{x};{f}] \morphism(0,0)|r|/{@{->}@/^30pt/}/<0,-450>[{w}`{x};{f}] \morphism(0,-450)|l|/{@{->}@/_30pt/}/<0,-600>[{x}`{y};{h}] \morphism(0,-450)|r|/{@{->}@/^30pt/}/<0,-600>[{x}`{y};{h'}] \morphism(0,-1050)|r|/{@{->}@/^30pt/}/<0,-450>[{y}`{z};{g}] \morphism(0,-1050)|r|/{@{->}@/_30pt/}/<0,-450>[{y}`{z};{g}] \morphism(-225,-750)|a|/=>/<450,0>[{\phantom{O}}`{\phantom{O}};{\xi}] \morphism(-90,-225)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(-90,-1275)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{twofoldtwofunctors}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}}`{\mathbb{A}};{F}] } \defdiag{triangleidentityadjunctiondiagram1}{ \morphism(0,0)|a|/->/<600,0>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(0,0)/=/<0,-600>[{\mathbb{A}}`{\mathbb{A}};] \morphism(600,0)/=/<0,-600>[{\mathbb{B}}`{\mathbb{B}};] \morphism(0,-600)|b|/->/<600,0>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(600,0)|m|/->/<-600,-600>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(225,-450)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(0,-150)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\varepsilon}] } \defdiag{triangleidentityadjunctiondiagram2}{ 
\morphism(600,0)|a|/->/<-600,0>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,0)/=/<0,-600>[{\mathbb{A}}`{\mathbb{A}};] \morphism(600,0)/=/<0,-600>[{\mathbb{B}}`{\mathbb{B}};] \morphism(600,-600)|b|/->/<-600,0>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,-600)|m|/->/<600,600>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(225,-450)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(0,-150)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\varepsilon}] } \defdiag{basicadjunction}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(412,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] } \defdiag{leftsideoftheequationassociativityofmonad}{ \morphism(0,0)|a|/<-/<600,0>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|r|/->/<0,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,-600)|m|/->/<600,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|b|/<-/<-600,0>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,-600)|l|/->/<0,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,-150)|a|/{@{<=}@<-5pt>}/<375,0>[{\phantom{O}}`{\phantom{O}};{\mu}] \morphism(225,-450)|a|/{@{<=}@<5pt>}/<375,0>[{\phantom{O}}`{\phantom{O}};{\mu}] } \defdiag{rightsideoftheequationassociativityofmonad}{ \morphism(0,0)|a|/<-/<600,0>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,-600)|l|/->/<0,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|b|/<-/<-600,0>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|r|/->/<0,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|m|/->/<-600,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,-450)|a|/{@{<=}@<5pt>}/<375,0>[{\phantom{O}}`{\phantom{O}};{\mu}] \morphism(225,-150)|a|/{@{<=}@<-5pt>}/<375,0>[{\phantom{O}}`{\phantom{O}};{\mu}] } \defdiag{firstsideoftheequationidenityofamonad}{ \morphism(0,0)|m|/->/<600,0>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,0)|r|/->/<0,-600>[{\mathbb{B}}`{\mathbb{B}};{T}] 
\morphism(0,0)|b|/->/<600,-600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,0)/{@{=}@/^40pt/}/<600,0>[{\mathbb{B}}`{\mathbb{B}};] \morphism(300,322)|r|/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(300,-38)|r|/{@{=>}@<10pt>}/<0,-330>[{\phantom{O}}`{\phantom{O}};{\mu}] } \defdiag{secondsideoftheequationidenityofamonad}{ \morphism(600,0)|m|/->/<-600,0>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|r|/->/<0,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(600,-600)|b|/->/<-600,600>[{\mathbb{B}}`{\mathbb{B}};{T}] \morphism(0,0)/{@{=}@/^40pt/}/<600,0>[{\mathbb{B}}`{\mathbb{B}};] \morphism(300,322)|r|/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(300,-38)|r|/{@{=>}@<10pt>}/<0,-330>[{\phantom{O}}`{\phantom{O}};{\mu}] } \defdiag{compositionofadjunctionsrarilali}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<900,0>[{w}`{x};{g}] \morphism(900,0)|a|/{@{->}@/_16pt/}/<-900,0>[{x}`{w};{f}] \morphism(900,0)|b|/{@{->}@/_16pt/}/<900,0>[{x}`{y};{g'}] \morphism(1800,0)|a|/{@{->}@/_16pt/}/<-900,0>[{y}`{x};{f'}] \morphism(450,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left({v}{,}n\right)}] \morphism(1350,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(v'{,}n'\right)}] } \defdiag{proofodthelalicancellation}{ \morphism(750,-675)|b|/->/<750,-225>[{x}`{y};{g'}] \morphism(0,-450)|b|/->/<750,-225>[{w}`{x};{g}] \morphism(750,-225)|a|/->/<-750,-225>[{x}`{w};{f}] \morphism(1500,0)|a|/->/<-750,-225>[{y}`{x};{f'}] \morphism(1500,0)/=/<0,-900>[{y}`{y};] \morphism(750,-225)/=/<0,-450>[{x}`{x};] \morphism(938,-450)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{n'}] \morphism(262,-450)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{n}] } \defdiag{compositionofadjunctionsrarilalicorollaryforisomorphisms}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<900,0>[{w}`{x};{g'}] \morphism(900,0)|a|/{@{->}@/_16pt/}/<-900,0>[{x}`{w};{f'}] \morphism(900,0)|b|/{@{->}@/_16pt/}/<900,0>[{x}`{y};{g}] \morphism(1800,0)|a|/{@{->}@/_16pt/}/<-900,0>[{y}`{x};{f}] 
\morphism(1800,0)|b|/{@{->}@/_16pt/}/<900,0>[{y}`{z};{g''}] \morphism(2700,0)|a|/{@{->}@/_16pt/}/<-900,0>[{z}`{y};{f''}] } \defdiag{existingadjunction1}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<900,0>[{w}`{x};{g}] \morphism(900,0)|a|/{@{->}@/_16pt/}/<-900,0>[{x}`{w};{f}] \morphism(450,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left({v}{,}n\right)}] } \defdiag{existingadjunction2}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<900,0>[{w}`{y};{\hat{g}}] \morphism(900,0)|a|/{@{->}@/_16pt/}/<-900,0>[{y}`{w};{ff'}] \morphism(450,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left({\hat{v}}{,}\hat{n}\right)}] } \defdiag{counitofthecancellationtheoremlaris}{ \morphism(0,0)|a|/->/<600,0>[{x}`{w};{f}] \morphism(600,0)|a|/->/<600,0>[{w}`{y};{\hat{g}}] \morphism(1200,0)|r|/->/<0,-600>[{y}`{x};{f'}] \morphism(1200,-600)|m|/->/<-600,0>[{x}`{w};{f}] \morphism(600,-600)|m|/->/<-600,0>[{w}`{x};{g}] \morphism(1200,-600)/{@{=}@/^60pt/}/<-1200,0>[{x}`{x};] \morphism(0,0)/=/<0,-600>[{x}`{x};] \morphism(600,0)/=/<0,-600>[{w}`{w};] \morphism(712,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\hat{v}}] \morphism(210,-300)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(600,-772)/=/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{exampleofnocancellationpropertyoflalis}{ \morphism(600,0)|b|/->/<-600,0>[{\mathsf{2}}`{\mathsf{1}};{s^0}] \morphism(1200,0)|a|/{@{->}@/_20pt/}/<-1200,0>[{\mathsf{1}}`{\mathsf{1}};{s^0d^0}] } \defdiag{pullbackdiagramdefinition}{ \morphism(0,0)|a|/->/<0,-600>[{x\times_{(a,b)}w}`{x};{a^\ast{(b)}}] \morphism(0,0)|a|/->/<600,0>[{x\times_{(a,b)}w}`{w};{b^\ast{(a)}}] \morphism(0,-600)|l|/->/<600,0>[{x}`{y};{a}] \morphism(600,0)|r|/->/<0,-600>[{w}`{y};{b}] } \defdiag{twocellofpullbackdefinitionleftside}{ \morphism(0,0)|l|/{@{->}@/_28pt/}/<0,-750>[{z}`{x};{h_0}] \morphism(0,0)|r|/{@{->}@/^28pt/}/<0,-750>[{z}`{x};{h_0'}] \morphism(900,0)|r|/->/<0,-750>[{w}`{y};{b}] \morphism(0,0)|a|/->/<900,0>[{z}`{w};{h_1'}] 
\morphism(0,-750)|r|/->/<900,0>[{x}`{y};{a}] \morphism(510,-375)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(-150,-375)|a|/=>/<300,0>[{\phantom{O}}`{\phantom{O}};{\xi_0}] } \defdiag{twocellpullbackdefinitionrightside}{ \morphism(0,0)|a|/{@{->}@/^28pt/}/<900,0>[{z}`{w};{h_1'}] \morphism(0,-750)|b|/->/<900,0>[{x}`{y};{a}] \morphism(0,0)|b|/{@{->}@/_28pt/}/<900,0>[{z}`{w};{h_1}] \morphism(0,0)|l|/->/<0,-750>[{z}`{x};{h_0}] \morphism(900,0)|r|/->/<0,-750>[{w}`{y};{b}] \morphism(450,-435)/=/<0,-180>[{\phantom{O}}`{\phantom{O}};] \morphism(450,150)|l|/<=/<0,-300>[{\phantom{O}}`{\phantom{O}};{\xi_1}] } \defdiag{commadiagramdefinition}{ \morphism(0,0)|a|/->/<600,0>[{a\downarrow{b}}`{x};{a^\Rightarrow{(b)}}] \morphism(0,0)|a|/->/<0,-600>[{a\downarrow{b}}`{w};{b^\Leftarrow{(a)}}] \morphism(600,0)|r|/->/<0,-600>[{x}`{y};{a}] \morphism(0,-600)|r|/->/<600,0>[{w}`{y};{b}] \morphism(112,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{a\downarrow{b}}}] } \defdiag{morphismcommadiagramdefinition}{ \morphism(0,0)|r|/->/<375,-375>[{z}`{a\downarrow{b}};{h}] \morphism(375,-375)|a|/->/<600,0>[{a\downarrow{b}}`{x};{a^\Rightarrow{(b)}}] \morphism(375,-375)|a|/->/<0,-600>[{a\downarrow{b}}`{w};{b^\Leftarrow{(a)}}] \morphism(375,-975)|r|/->/<600,0>[{w}`{y};{b}] \morphism(975,-375)|r|/->/<0,-600>[{x}`{y};{a}] \morphism(1575,-375)|a|/->/<600,0>[{z}`{x};{h_0}] \morphism(1575,-375)|a|/->/<0,-600>[{z}`{w};{h_1}] \morphism(1575,-975)|r|/->/<600,0>[{w}`{y};{b}] \morphism(2175,-375)|r|/->/<0,-600>[{x}`{y};{a}] \morphism(488,-675)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{a\downarrow{b}}}] \morphism(1688,-675)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\gamma }] \morphism(1185,-675)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{commatwocellofpullbackdefinitionleftside}{ \morphism(900,0)|r|/{@{->}@/^28pt/}/<0,-750>[{z}`{x};{h_0}] \morphism(900,0)|l|/{@{->}@/_28pt/}/<0,-750>[{z}`{x};{h_0'}] \morphism(0,0)|l|/->/<0,-750>[{w}`{y};{b}] 
\morphism(900,0)|a|/->/<-900,0>[{z}`{w};{h_1'}] \morphism(900,-750)|b|/->/<-900,0>[{x}`{y};{a}] \morphism(38,-375)|a|/<=/<525,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{a\downarrow{b}}\ast\id{_{h'}}}] \morphism(750,-375)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\xi_0}] } \defdiag{commatwocellpullbackdefinitionrightside}{ \morphism(900,0)|b|/{@{->}@/^28pt/}/<-900,0>[{z}`{w};{h_1}] \morphism(900,-750)|b|/->/<-900,0>[{x}`{y};{a}] \morphism(900,0)|a|/{@{->}@/_28pt/}/<-900,0>[{z}`{w};{h_1'}] \morphism(900,0)|r|/->/<0,-750>[{z}`{x};{h_0}] \morphism(0,0)|l|/->/<0,-750>[{w}`{y};{b}] \morphism(450,-375)|r|/<=/<0,-300>[{\phantom{O}}`{\phantom{O}};{\chi ^{a\downarrow{b}}\ast\id{_{h}}}] \morphism(450,150)|r|/<=/<0,-300>[{\phantom{O}}`{\phantom{O}};{\xi_1}] } \defdiag{modificationofthedefinitionoflaxidempotenttwomonad}{ \morphism(0,0)|a|/=/<1050,0>[{T^2}`{T^2};{\id_{T^2}}] \morphism(0,0)|l|/=>/<525,-525>[{T^2}`{T};{\mu}] \morphism(525,-525)|r|/=>/<525,525>[{T}`{T^2};{\eta{T}}] \morphism(525,-38)|r|/=>/<0,-375>[{\phantom{O}}`{\phantom{O}};{\Gamma}] } \defdiag{modificationofthedefinitionoflaxidempotenttwomonadtriangleidentityoone}{ \morphism(0,0)/=/<750,0>[{T^2(z)}`{T^2(z)};] \morphism(0,0)|l|/->/<375,-375>[{T^2(z)}`{T(z)};{\mu_z}] \morphism(375,-375)|r|/->/<375,375>[{T(z)}`{T^2(z)};{\eta_{T(z)}}] \morphism(750,0)|a|/->/<450,0>[{T^2(z)}`{T(z)};{\mu_z}] \morphism(375,-15)|r|/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};{\Gamma_z}] } \defdiag{modificationofthedefinitionoflaxidempotenttwomonadtriangleidentityotwo}{ \morphism(450,0)/=/<750,0>[{T^2(z)}`{T^2(z)};] \morphism(450,0)|l|/->/<375,-375>[{T^2(z)}`{T(z)};{\mu_z}] \morphism(825,-375)|r|/->/<375,375>[{T(z)}`{T^2(z)};{\eta_{T(z)}}] \morphism(0,0)|a|/->/<450,0>[{T(z)}`{T^2(z)};{\eta_{T(z)}}] \morphism(825,-15)|r|/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};{\Gamma_z}] } \defdiag{equationforidempotentmonadinterchangelaw}{ \morphism(0,0)|l|/->/<0,-375>[{\mathbb{A}}`{\mathbb{A}};{T}] 
\morphism(0,-375)|r|/->/<600,-300>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(600,-675)|r|/->/<-600,-300>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(0,-375)|l|/->/<0,-600>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(0,0)/{@{=}@/^45pt/}/<0,-375>[{\mathbb{A}}`{\mathbb{A}};] \morphism(75,-675)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\mu}] \morphism(-22,-188)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\eta}] } \defdiag{equationforidempotentmonadinterchangelawtwo}{ \morphism(0,0)|l|/->/<0,-375>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(0,-375)|m|/->/<600,-300>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(600,-675)|r|/->/<-600,-300>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(0,-375)|l|/->/<0,-600>[{\mathbb{A}}`{\mathbb{A}};{T}] \morphism(0,-375)/{@{=}@/^35pt/}/<600,-300>[{\mathbb{A}}`{\mathbb{A}};] \morphism(75,-675)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\mu}] \morphism(300,-248)|l|/{@{=>}@<7pt>}/<0,-300>[{\phantom{O}}`{\phantom{O}};{\eta}] } \defdiag{pastinginordertogetmorphismofalgebrasvertical}{ \morphism(0,0)/=/<0,-510>[{T(x)}`{T(x)};] \morphism(0,-510)|a|/->/<0,-510>[{T(x)}`{T(y)};{T(f)}] \morphism(0,-1020)|r|/->/<1200,0>[{T(y)}`{y};{b}] \morphism(0,0)|a|/->/<1200,0>[{T(x)}`{x};{a}] \morphism(1200,0)|m|/->/<-1200,-510>[{x}`{T(x)};{\eta_x}] \morphism(1200,-510)|m|/->/<-1200,-510>[{y}`{T(y)};{\eta_y}] \morphism(1200,-510)/=/<0,-510>[{y}`{y};] \morphism(1200,0)|r|/->/<0,-510>[{x}`{y};{f}] \morphism(202,-128)/=/<195,0>[{\phantom{O}}`{\phantom{O}};] \morphism(802,-892)/=/<195,0>[{\phantom{O}}`{\phantom{O}};] \morphism(502,-510)/=/<195,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{equationforidempotentmonadinterchangelawtwoadjunctiontwo}{ \morphism(0,-900)/=/<0,-600>[{\mathbb{A}}`{\mathbb{A}};] \morphism(0,-900)|m|/->/<450,-300>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(450,-1200)|b|/->/<-450,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,0)|l|/->/<0,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,-300)|l|/->/<0,-300>[{\mathbb{A}}`{\mathbb{B}};{G}] 
\morphism(0,-600)|l|/->/<0,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,-600)/{@{=}@/^20pt/}/<450,-600>[{\mathbb{B}}`{\mathbb{B}};] \morphism(52,-900)|a|/<=/<345,0>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(0,-1200)|a|/<=/<330,0>[{\phantom{O}}`{\phantom{O}};{\varepsilon}] } \defdiag{equationforidempotentmonadinterchangelawtwoadjunction}{ \morphism(450,0)/=/<0,-600>[{\mathbb{B}}`{\mathbb{B}};] \morphism(450,-900)/=/<0,-600>[{\mathbb{A}}`{\mathbb{A}};] \morphism(450,-900)|a|/->/<450,-300>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(900,-1200)|b|/->/<-450,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(450,0)|a|/->/<-450,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,-300)|b|/->/<450,-300>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(450,-600)|r|/->/<0,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(120,-300)|a|/<=/<330,0>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(450,-1200)|a|/<=/<330,0>[{\phantom{O}}`{\phantom{O}};{\varepsilon}] } \defdiag{equivalencesplitepifullyfaithful}{ \morphism(0,0)|a|/->/<900,0>[{\mathbb{A}{\left({w},x\right)}}`{\mathbb{A}{\left({FG(w)},x\right)}};{\mathbb{A}({\varepsilon_w},x)}] \morphism(900,0)|a|/->/<900,0>[{\mathbb{A}{\left({FG(w)},x\right)}}`{\mathbb{B}{\left({G(w)},G(x)\right)}};{\cong}] \morphism(0,0)|b|/{@{->}@/_20pt/}/<1800,0>[{\mathbb{A}{\left({w},x\right)}}`{\mathbb{B}{\left({G(w)},G(x)\right)}};{G}] } \defdiag{coequalizerofthecounitforBecktheorem}{ \morphism(0,0)|a|/{@{->}@/^20pt/}/<900,0>[{FGFG(x)}`{FG(x)};{\varepsilon_{FG(x)}}] \morphism(0,0)|b|/{@{->}@/_20pt/}/<900,0>[{FGFG(x)}`{FG(x)};{FG\left(\varepsilon_x\right)}] \morphism(900,0)|a|/->/<600,0>[{FG(x)}`{x};{\varepsilon_x}] } \defdiag{compositionof2adjunctionsfirstdiagram}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<900,0>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(900,0)|a|/{@{->}@/_16pt/}/<-900,0>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(900,0)|b|/{@{->}@/_16pt/}/<900,0>[{\mathbb{B}}`{\mathbb{C}};{J}] \morphism(1800,0)|a|/{@{->}@/_16pt/}/<-900,0>[{\mathbb{C}}`{\mathbb{B}};{H}] 
\Loop(900,0){{\mathbb{B}}}(ur,ul)_{\mathcal{T}} \morphism(450,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] \morphism(1350,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\delta{,}\rho\right)}] } \defdiag{compositionof2adjunctionsseconddiagram}{ \morphism(0,0)|b|/{@{->}@/_20pt/}/<1800,0>[{\mathbb{A}}`{\mathbb{C}};{J\circ{G}}] \morphism(1800,0)|a|/{@{->}@/_20pt/}/<-1800,0>[{\mathbb{C}}`{\mathbb{A}};{F\circ{H}}] \Loop(1800,0){{\mathbb{C}}}(rd,ru)_\mathcal{R} \morphism(900,112)|r|/{@{-|}@<-7pt>}/<0,-225>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon\cdot\left(F\delta{G}\right){,\,\,}\left(J\eta{H}\right)\cdot\rho\right)}] } \defdiag{compositionoftwoadjunctionscompactHausdorfftopset}{ \morphism(0,0)/{@{->}@/_16pt/}/<900,0>[{\CmpHaus}`{\Top};] \morphism(900,0)/{@{->}@/_16pt/}/<-900,0>[{\Top}`{\CmpHaus};] \morphism(900,0)/{@{->}@/_16pt/}/<900,0>[{\Top}`{\Set};] \morphism(1800,0)/{@{->}@/_16pt/}/<-900,0>[{\Set}`{\Top};] \morphism(450,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] \morphism(1350,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{twocellidentityfortheproofofthecriterionforsimplicity}{ \morphism(0,0)|a|/->/<750,0>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(750,0)|r|/->/<0,-450>[{\mathbb{B}}`{\mathbb{C}};{J}] \morphism(750,-450)/=/<0,-1200>[{\mathbb{C}}`{\mathbb{C}};] \morphism(750,-450)|m|/->/<-375,-300>[{\mathbb{C}}`{\mathbb{B}};{H}] \morphism(375,-750)|a|/->/<-375,-300>[{\mathbb{B}}`{\mathbb{A}};{F}] \morphism(0,-1050)|l|/->/<375,-300>[{\mathbb{A}}`{\mathbb{B}};{G}] \morphism(375,-1350)|b|/->/<375,-300>[{\mathbb{B}}`{\mathbb{C}};{J}] \morphism(750,0)/{@{=}@/_30pt/}/<-375,-750>[{\mathbb{B}}`{\mathbb{B}};] \morphism(375,-750)/=/<0,-600>[{\mathbb{B}}`{\mathbb{B}};] \morphism(75,-1050)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\eta}] \morphism(412,-1050)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\rho}] \morphism(398,-375)|a|/<=/<330,0>[{\phantom{O}}`{\phantom{O}};{\delta}] } 
\defdiag{compositionofadjunctionsepsilondelta}{ \morphism(0,0)|b|/{@{=>}@/_15pt/}/<1350,0>[{JG}`{JGFG};{J\eta{G}}] \morphism(1350,0)|a|/{@{=>}@/_15pt/}/<-1350,0>[{JGFG}`{JG};{JG\varepsilon}] \morphism(1350,0)|b|/{@{=>}@/_15pt/}/<1350,0>[{JGFG}`{JGFHJG};{\vartheta}] \morphism(2700,0)|a|/{@{=>}@/_15pt/}/<-1350,0>[{JGFHJG}`{JGFG};{JGF\delta{G}}] \morphism(0,0)|b|/{@{=>}@<-5pt>@/_40pt/}/<2700,0>[{JG}`{JGFHJG};{\alpha_{JG}}] \morphism(2700,0)|a|/{@{=>}@<-5pt>@/_40pt/}/<-2700,0>[{JGFHJG}`{JG};{JG\left(\varepsilon\left({F}\delta{G}\right)\right)}] } \defdiag{laxmorphismofcoalgebras1cell}{ \morphism(0,0)|a|/->/<300,0>[{w}`{x};{f}] } \defdiag{twocelloflaxcommamorphism}{ \morphism(0,0)|a|/->/<600,0>[{w}`{x};{f}] \morphism(0,0)|l|/->/<300,-420>[{w}`{y};{a}] \morphism(600,0)|r|/->/<-300,-420>[{x}`{y};{b}] \morphism(128,-0)|a|/{@{<=}@<-20pt>}/<345,0>[{\phantom{O}}`{\phantom{O}};{\gamma }] } \defdiag{compositionofmorphismslaxcommatwocategories}{ \morphism(0,0)|a|/->/<675,0>[{w}`{x};{f}] \morphism(675,0)|a|/->/<675,0>[{x}`{z};{g}] \morphism(0,0)|l|/->/<675,-600>[{w}`{y};{a}] \morphism(1350,0)|r|/->/<-675,-600>[{z}`{y};{c}] \morphism(675,0)|m|/->/<0,-600>[{x}`{y};{b}] \morphism(240,-150)|a|/<=/<345,0>[{\phantom{O}}`{\phantom{O}};{\gamma }] \morphism(765,-150)|a|/<=/<345,0>[{\phantom{O}}`{\phantom{O}};{\chi }] } \defdiag{leftsideequationtwocellforlaxcommacategor}{ \morphism(0,0)|a|/{@{->}@/^18pt/}/<750,0>[{w}`{x};{f}] \morphism(0,0)|b|/{@{->}@/_18pt/}/<750,0>[{w}`{x};{f'}] \morphism(0,0)|l|/->/<0,-675>[{w}`{y};{a}] \morphism(750,0)|r|/->/<-750,-675>[{x}`{y};{b}] \morphism(375,150)|l|/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};{\zeta }] \morphism(-8,-338)|a|/<=/<390,0>[{\phantom{O}}`{\phantom{O}};{\gamma {'}}] } \defdiag{rightsideequationtwocellforlaxcommacategory}{ \morphism(0,0)|a|/->/<690,0>[{w}`{x};{f}] \morphism(0,0)|l|/->/<0,-645>[{w}`{y};{a}] \morphism(690,0)|r|/->/<-690,-645>[{x}`{y};{b}] \morphism(22,-322)|a|/{@{<=}@<15pt>}/<375,0>[{\phantom{O}}`{\phantom{O}};{\gamma }] } 
\defdiag{changeofthebasecommacategories}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{/z}}`{\mathbb{A}{/y}};{c^\ast}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{A}{/y}}`{\mathbb{A}{/z}};{c!}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{changeofthebasecommatwocategories}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{/z}}`{\mathbb{A}{/y}};{c^\ast}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{A}{/y}}`{\mathbb{A}{/z}};{c!}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{cstaronmorphismm}{ \morphism(0,0)|l|/->/<0,-600>[{w\times_{(a,c)}y}`{x\times_{(b,c)}y};{c^\ast{\left(f\right)}}] \morphism(0,0)/->/<600,0>[{w\times_{(a,c)}y}`{w};] \morphism(0,-600)|m|/->/<600,0>[{x\times_{(b,c)}y}`{x};{b^\ast{(c)}}] \morphism(600,0)|r|/->/<0,-600>[{w}`{x};{f}] \morphism(600,-600)|r|/->/<0,-600>[{x}`{z};{b}] \morphism(0,-600)|l|/->/<0,-600>[{x\times_{(b,c)}y}`{y};{c^\ast{(b)}}] \morphism(0,-1200)|b|/->/<600,0>[{y}`{z};{c}] \morphism(600,0)|r|/{@{->}@/^60pt/}/<0,-1200>[{w}`{z};{a}] \morphism(0,0)|l|/{@{->}@/_60pt/}/<0,-1200>[{w\times_{(a,c)}y}`{y};{c^\ast(a)}] } \defdiag{rightrsidefirstequationimagetwocellofthepbchangeofbase}{ \morphism(0,0)|m|/->/<0,-600>[{w\times_{(a,c)}{y}}`{w};{a^\ast{(c)}}] \morphism(0,-600)|r|/{@{->}@/^22pt/}/<0,-600>[{w}`{x};{f}] \morphism(0,-600)|l|/{@{->}@/_22pt/}/<0,-600>[{w}`{x};{f'}] \morphism(-188,-900)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\zeta }] } \defdiag{leftsidefirstequationimagetwocellofthepbchangeofbase}{ \morphism(0,-600)|m|/->/<0,-600>[{x\times_{(b,c)}{y}}`{x};{b^\ast{(c)}}] \morphism(0,0)|r|/{@{->}@/^22pt/}/<0,-600>[{w\times_{(a,c)}{y}}`{x\times_{(b,c)}{y}};{c^\ast{\left(f\right)}}] \morphism(0,0)|l|/{@{->}@/_22pt/}/<0,-600>[{w\times_{(a,c)}{y}}`{x\times_{(b,c)}{y}};{c^\ast{\left(f'\right)}}] \morphism(-188,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{c^\ast{\left(\zeta \right)}}] } \defdiag{rightrsidesecondequationimagetwocellofthepbchangeofbase}{ 
\morphism(0,0)|r|/{@{->}@/^15pt/}/<0,-1200>[{w\times_{(a,c)}{y}}`{y};{c^\ast{(a)}}] \morphism(0,0)|l|/{@{->}@/_15pt/}/<0,-1200>[{w\times_{(a,c)}{y}}`{y};{c^\ast{(a)}}] \morphism(-90,-600)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{leftsidesecondequationimagetwocellofthepbchangeofbase}{ \morphism(0,-600)|m|/->/<0,-600>[{x\times_{(b,c)}{y}}`{y};{c^\ast{(b)}}] \morphism(0,0)|r|/{@{->}@/^22pt/}/<0,-600>[{w\times_{(a,c)}{y}}`{x\times_{(b,c)}{y}};{c^\ast{(f)}}] \morphism(0,0)|l|/{@{->}@/_22pt/}/<0,-600>[{w\times_{(a,c)}{y}}`{x\times_{(b,c)}{y}};{c^\ast{(f')}}] \morphism(-188,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{c^\ast{\left(\zeta \right)}}] } \defdiag{directimagedefinitiondiagram}{ \morphism(1350,0)|a|/->/<-675,0>[{\mathbb{A}{/y}}`{\mathbb{A}{/z}};{c!}] \morphism(675,0)/->/<-675,0>[{\mathbb{A}{/z}}`{\mathbb{A}{//z}};] \morphism(1350,0)|m|/{@{->}@/_35pt/}/<-1350,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//z}};{c\overline{!}}] } \defdiag{commadiagramdefinitioncommaadjunction}{ \morphism(0,0)|a|/->/<600,0>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(0,0)|l|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(600,0)|r|/->/<0,-600>[{x}`{z};{b}] \morphism(0,-600)|b|/->/<600,0>[{y}`{z};{c}] \morphism(112,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] } \defdiag{twocelloflaxcommamorphismmmm}{ \morphism(0,0)|a|/->/<540,0>[{w}`{x};{f}] \morphism(0,0)|l|/->/<270,-420>[{w}`{z};{a}] \morphism(540,0)|r|/->/<-270,-420>[{x}`{z};{b}] \morphism(98,-0)|a|/{@{<=}@<-20pt>}/<345,0>[{\phantom{O}}`{\phantom{O}};{\gamma }] } \defdiag{morphismfforthedefinitioncLeftarrow}{ \morphism(0,0)|a|/->/<255,0>[{w}`{x};{f}] } \defdiag{cleftarrowonmorphismm}{ \morphism(0,0)|m|/->/<600,-600>[{a\downarrow{c}}`{b\downarrow{c}};{c^\Leftarrow{\left(f,\gamma \right)}}] \morphism(600,-600)|m|/->/<600,0>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(1200,-600)|r|/->/<0,-600>[{x}`{z};{b}] 
\morphism(600,-600)|m|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(600,-1200)|b|/->/<600,0>[{y}`{z};{c}] \morphism(0,0)|l|/{@{->}@/_30pt/}/<600,-1200>[{a\downarrow{c}}`{y};{c^\Leftarrow{(a)}}] \morphism(0,0)|a|/{@{->}@/^30pt/}/<1200,-600>[{a\downarrow{c}}`{x};{f\,\cdot\,{a^\Rightarrow{(c)}}}] \morphism(210,-600)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(712,-900)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] \morphism(510,-300)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{cleftarrowonmorphismmrightside}{ \morphism(1200,-600)|r|/->/<-600,-600>[{x}`{z};{b}] \morphism(0,-1200)|b|/->/<600,0>[{y}`{z};{c}] \morphism(0,0)|l|/->/<0,-1200>[{a\downarrow{c}}`{y};{c^\Leftarrow{(a)}}] \morphism(600,0)|r|/->/<600,-600>[{w}`{x};{f}] \morphism(600,0)|m|/->/<0,-1200>[{w}`{z};{a}] \morphism(0,0)|a|/->/<600,0>[{a\downarrow{c}}`{w};{a^\Rightarrow{(c)}}] \morphism(712,-600)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\gamma }] \morphism(112,-600)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{a\downarrow{c}}}] } \defdiag{rightrsidefirstequationimagetwocellofthecommachangeofbase}{ \morphism(0,0)|m|/->/<0,-600>[{a\downarrow{c}}`{w};{a^\Rightarrow{(c)}}] \morphism(0,-600)|r|/{@{->}@/^22pt/}/<0,-600>[{w}`{x};{f}] \morphism(0,-600)|l|/{@{->}@/_22pt/}/<0,-600>[{w}`{x};{f'}] \morphism(-188,-900)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\zeta }] } \defdiag{leftsidefirstequationimagetwocellofthecommachangeofbase}{ \morphism(0,-600)|m|/->/<0,-600>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(0,0)|r|/{@{->}@/^22pt/}/<0,-600>[{a\downarrow{c}}`{b\downarrow{c}};{c^\Leftarrow{\left(f,\gamma \right)}}] \morphism(0,0)|l|/{@{->}@/_22pt/}/<0,-600>[{a\downarrow{c}}`{b\downarrow{c}};{c^\Leftarrow{\left(f',\gamma {'}\right)}}] \morphism(-188,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{c^\Leftarrow{\left(\zeta \right)}}] } \defdiag{rightrsidesecondequationimagetwocellofthecommachangeofbase}{ 
\morphism(0,0)|r|/{@{->}@/^15pt/}/<0,-1200>[{a\downarrow{c}}`{y};{c^\Leftarrow{(a)}}] \morphism(0,0)|l|/{@{->}@/_15pt/}/<0,-1200>[{a\downarrow{c}}`{y};{c^\Leftarrow{(a)}}] \morphism(-90,-600)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{leftsidesecondequationimagetwocellofthecommachangeofbase}{ \morphism(0,-600)|m|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(0,0)|r|/{@{->}@/^22pt/}/<0,-600>[{a\downarrow{c}}`{b\downarrow{c}};{c^\Leftarrow{\left(f,\gamma \right)}}] \morphism(0,0)|l|/{@{->}@/_22pt/}/<0,-600>[{a\downarrow{c}}`{b\downarrow{c}};{c^\Leftarrow{\left(f',\gamma {'}\right)}}] \morphism(-188,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{c^\Leftarrow{\left(\zeta \right)}}] } \defdiag{changeofthebaselaxcommacategories}{ \morphism(0,0)|b|/{@{->}@/_18pt/}/<825,0>[{\mathbb{A}{//z}}`{\mathbb{A}{/y}};{c^\Leftarrow}] \morphism(825,0)|a|/{@{->}@/_18pt/}/<-825,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//z}};{c\overline{!}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{commadiagramdefinitionproofcommaobject}{ \morphism(0,0)|a|/->/<600,0>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(0,0)|l|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(600,0)|r|/->/<0,-600>[{x}`{z};{b}] \morphism(0,-600)|b|/->/<600,0>[{y}`{z};{c}] \morphism(112,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] } \defdiag{commaalongca}{ \morphism(0,0)|a|/->/<600,0>[{ca\downarrow{c}}`{w};{\left({ca}\right)^\Rightarrow{(c)}}] \morphism(0,0)|l|/->/<0,-600>[{ca\downarrow{c}}`{y};{c^\Leftarrow{c\overline{!}}{(a)}}] \morphism(600,0)|r|/->/<0,-600>[{w}`{z};{ca}] \morphism(0,-600)|b|/->/<600,0>[{y}`{z};{c}] \morphism(112,-300)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{ca\downarrow{c}}}] } \defdiag{morphismcommadiagramdefinitionrholinha}{ \morphism(0,0)|r|/->/<375,-375>[{w}`{ca\downarrow{c}};{\rho_{(w,a)}'}] \morphism(375,-375)|a|/->/<600,0>[{ca\downarrow{c}}`{w};{(ca)^\Rightarrow{(c)}}] 
\morphism(375,-375)|l|/->/<0,-600>[{ca\downarrow{c}}`{y};{c^\Leftarrow{(ca)}}] \morphism(375,-975)|b|/->/<600,0>[{y}`{z};{c}] \morphism(975,-375)|r|/->/<0,-600>[{w}`{z};{ca}] \morphism(1575,-375)|a|/->/<600,0>[{w}`{w};{\id_w}] \morphism(1575,-375)|l|/->/<0,-600>[{w}`{y};{a}] \morphism(1575,-975)|b|/->/<600,0>[{y}`{z};{c}] \morphism(2175,-375)|r|/->/<0,-600>[{w}`{z};{ca}] \morphism(488,-675)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{ca\downarrow{c}}}] \morphism(1785,-675)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(1185,-675)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{definitionofdeltalinhaleft}{ \morphism(0,0)|m|/->/<600,-600>[{c\cdot{c^\Leftarrow{(b)}}\downarrow{c}}`{b\downarrow{c}};{\updelta{'}}] \morphism(600,-600)|m|/->/<600,0>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(1200,-600)|r|/->/<0,-600>[{x}`{z};{b}] \morphism(600,-600)|m|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(600,-1200)|b|/->/<600,0>[{y}`{z};{c}] \morphism(0,0)|l|/{@{->}@/_30pt/}/<600,-1200>[{c\cdot{c^\Leftarrow{(b)}}\downarrow{c}}`{y};{c^\Leftarrow{(a)}}] \morphism(0,0)|a|/{@{->}@/^30pt/}/<1200,-600>[{c\cdot{c^\Leftarrow{(b)}}\downarrow{c}}`{x};{f\,\cdot\,{a^\Rightarrow{(c)}}}] \morphism(210,-600)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(712,-900)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] \morphism(510,-300)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{definitionofdeltalinharight}{ \morphism(1425,-600)|r|/->/<-600,-600>[{x}`{z};{b}] \morphism(0,-1200)|b|/->/<825,0>[{y}`{z};{c}] \morphism(0,0)|l|/->/<0,-1200>[{c\cdot{c^\Leftarrow{(b)}}\downarrow{c}}`{y};{c^\Leftarrow{c\overline{!}}{c^\Leftarrow}(b)}] \morphism(825,0)|r|/->/<600,-600>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(825,0)|m|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(825,-600)|m|/->/<0,-600>[{y}`{z};{c}] 
\morphism(0,0)|a|/->/<825,0>[{c\cdot{c^\Leftarrow{(b)}}\downarrow{c}}`{b\downarrow{c}};{\left({c}\cdot{c^\Leftarrow}(b)\right)^\Rightarrow{(c)}}] \morphism(938,-600)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] \morphism(150,-600)|a|/<=/<525,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{c\cdot{c^\Leftarrow}(b)\downarrow{c}}}] } \defdiag{secontriangleidentitycoomaobjecttwoadjunction}{ \morphism(0,0)|r|/->/<300,-300>[{b\downarrow{c}}`{c\cdot{c}^\Leftarrow{(b)}\downarrow{c}};{\rho_{c^\Leftarrow{(x,b)}}'}] \morphism(300,-300)|r|/->/<300,-300>[{c\cdot{c}^\Leftarrow{(b)}\downarrow{c}}`{b\downarrow{c}};{\updelta{'}}] \morphism(600,-600)|a|/->/<600,0>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(600,-600)|l|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(600,-1200)|r|/->/<600,0>[{y}`{z};{c}] \morphism(1200,-600)|r|/->/<0,-600>[{x}`{z};{b}] \morphism(2100,-600)|a|/->/<600,0>[{b\downarrow{c}}`{x};{b^\Rightarrow{(c)}}] \morphism(2100,-600)|l|/->/<0,-600>[{b\downarrow{c}}`{y};{c^\Leftarrow{(b)}}] \morphism(2100,-1200)|r|/->/<600,0>[{y}`{z};{c}] \morphism(2700,-600)|r|/->/<0,-600>[{x}`{z};{b}] \morphism(712,-900)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] \morphism(2212,-900)|a|/<=/<375,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{c}}}] \morphism(1560,-900)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicdiagramofcompositionwithcandinclusion}{ \morphism(1050,0)|b|/->/<-525,0>[{\mathbb{A}{/y}}`{\mathbb{A}{/z}};{c!}] \morphism(525,0)|b|/->/<-525,0>[{\mathbb{A}{/z}}`{\mathbb{A}{//z}};{\id_z\overline{!}}] \morphism(1050,0)|a|/{@{->}@/_30pt/}/<-1050,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//z}};{c\overline{!}}] } \defdiag{compositionoftwoadjunctionschangeofbasecomma}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<675,0>[{\mathbb{A}{//z}}`{\mathbb{A}{/z}};{\id_z^\Leftarrow}] \morphism(675,0)|a|/{@{->}@/_16pt/}/<-675,0>[{\mathbb{A}{/z}}`{\mathbb{A}{//z}};{\id_z\overline{!}}] 
\morphism(675,0)|b|/{@{->}@/_16pt/}/<675,0>[{\mathbb{A}{/z}}`{\mathbb{A}{/y}};{c^\ast}] \morphism(1350,0)|a|/{@{->}@/_16pt/}/<-675,0>[{\mathbb{A}{/y}}`{\mathbb{A}{/z}};{c!}] \morphism(0,0)|b|/{@{->}@/_47pt/}/<1350,0>[{\mathbb{A}{//z}}`{\mathbb{A}{/y}};{c^\Leftarrow}] \morphism(1350,0)|a|/{@{->}@/_47pt/}/<-1350,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//z}};{c\overline{!}}] \morphism(338,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] \morphism(1012,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{changeofthebaseidentitycoherencelaxidempotent}{ \morphism(0,0)|b|/{@{->}@/_18pt/}/<825,0>[{\mathbb{A}{//y}}`{\mathbb{A}{/y}};{\id_y^\Leftarrow}] \morphism(825,0)|a|/{@{->}@/_18pt/}/<-825,0>[{\mathbb{A}{/y}}`{\mathbb{A}{//y}};{\id_y\overline{!}}] \morphism(412,90)|r|/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\delta{,}\rho\right)}] } \defdiag{counittwocelloftheidentitytwoadjunctionlaxcoomacomma}{ \morphism(0,0)|a|/->/<675,0>[{b\downarrow{\id_y}}`{x};{b^\Rightarrow{(\id_y)}}] \morphism(675,0)|r|/->/<0,-675>[{x}`{y};{b}] \morphism(0,0)|l|/->/<0,-675>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(0,-675)|b|/->/<675,0>[{y}`{y};{\id_y}] \morphism(128,-338)|a|/<=/<420,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{\id_y}}}] } \defdiag{rholparaademonstracaodelaxidempotency}{ \morphism(0,0)|m|/->/<900,-450>[{x}`{b\downarrow{\id_y}};{\overline{\underline{\rho}}_{(x,b)}}] \morphism(900,-450)|m|/->/<900,0>[{b\downarrow{\id_y}}`{x};{b^\Rightarrow{(\id_y)}\,{=}\,\overline{\underline{\delta}}_{(x,b)}}] \morphism(1800,-450)|r|/->/<0,-900>[{x}`{y};{b}] \morphism(900,-450)|m|/->/<0,-900>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(900,-1350)|r|/->/<900,0>[{y}`{y};{\id_y}] \morphism(0,0)|l|/{@{->}@/_30pt/}/<900,-1350>[{x}`{y};{b}] \morphism(0,0)|r|/{@{->}@/^30pt/}/<1800,-450>[{x}`{x};{\id_x}] \morphism(360,-675)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(1125,-900)|a|/<=/<450,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{\id_y}}}] 
\morphism(810,-225)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{identityonbtodefinerho}{ \morphism(0,0)|r|/{@{->}@/^28pt/}/<0,-1350>[{x}`{y};{b}] \morphism(0,0)|l|/{@{->}@/_28pt/}/<0,-1350>[{x}`{y};{b}] \morphism(-90,-675)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{firsttwocellinordertodefinecounitoftheadjunctionthatgivesarariii}{ \morphism(0,0)|a|/->/<300,-450>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(0,0)|b|/{@{->}@/_60pt/}/<600,-1500>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(300,-450)|a|/->/<300,-450>[{x}`{b\downarrow{\id_y}};{\overline{\underline{\rho}}_{(x,b)}}] \morphism(600,-900)|m|/->/<0,-600>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(300,-450)|l|/{@{->}@/_30pt/}/<300,-1050>[{x}`{y};{b}] \morphism(285,-975)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(-150,-750)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{\id_y}}}] } \defdiag{secondtwocellinordertodefinecounitoftheadjunctionthatgivesarariiii}{ \morphism(0,0)|l|/->/<600,-900>[{b\downarrow{\id_y}}`{b\downarrow{\id_y}};{\id_{b\downarrow{\id_y}}}] \morphism(0,0)|a|/{@{->}@/^40pt/}/<1200,-900>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(600,-900)|a|/->/<600,0>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(510,-450)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{firsttwocellinordertodefinecounitoftheadjunctionthatgivesarari}{ \morphism(0,0)|a|/->/<300,-450>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(0,0)|b|/{@{->}@/_60pt/}/<600,-1500>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(300,-450)|a|/->/<300,-450>[{x}`{b\downarrow{\id_y}};{\overline{\underline{\rho}}_{(x,b)}}] \morphism(600,-900)|a|/->/<600,0>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(1200,-900)|r|/->/<0,-600>[{x}`{y};{b}] 
\morphism(600,-900)|m|/->/<0,-600>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(600,-1500)|b|/->/<600,0>[{y}`{y};{\id_y}] \morphism(300,-450)|l|/{@{->}@/_30pt/}/<300,-1050>[{x}`{y};{b}] \morphism(285,-975)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] \morphism(750,-1200)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{\id_y}}}] \morphism(-150,-750)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{\id_y}}}] } \defdiag{secondtwocellinordertodefinecounitoftheadjunctionthatgivesarari}{ \morphism(0,0)|l|/->/<600,-900>[{b\downarrow{\id_y}}`{b\downarrow{\id_y}};{\id_{b\downarrow{\id_y}}}] \morphism(0,0)|a|/{@{->}@/^40pt/}/<1200,-900>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(600,-900)|a|/->/<600,0>[{b\downarrow{\id_y}}`{x};{\overline{\underline{\delta}}_{(x,b)}}] \morphism(1200,-900)|r|/->/<0,-600>[{x}`{y};{b}] \morphism(600,-900)|m|/->/<0,-600>[{b\downarrow{\id_y}}`{y};{\id_y^\Leftarrow{(b)}}] \morphism(600,-1500)|b|/->/<600,0>[{y}`{y};{\id_y}] \morphism(750,-1200)|a|/<=/<300,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{b\downarrow{\id_y}}}] \morphism(510,-450)/=/<180,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicliftingtwoadjunction}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{/y}}`{\mathbb{B}{/G(y)}};{\check{G}}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}{/G(y)}}`{\mathbb{A}{/y}};{\varepsilon_{y}!\circ\check{F}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicliftingtwoadjunctionlaxcommatwocategory}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{//y}}`{\mathbb{B}{//G(y)}};{\check{G}}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}{//G(y)}}`{\mathbb{A}{//y}};{\varepsilon_{y}\underline{!}\circ\check{F}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicliftingtwoadjunctionn}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{/y}}`{\mathbb{B}{/G(y)}};{\check{G}}] 
\morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}{/G(y)}}`{\mathbb{A}{/y}};{\varepsilon_{y}!\circ\check{F}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicliftingtwoadjunctionlaxcommatwocategoryy}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{//y}}`{\mathbb{B}{//G(y)}};{\check{G}}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}{//G(y)}}`{\mathbb{A}{//y}};{\varepsilon_{y}\underline{!}\circ\check{F}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicliftingtwoadjunctionlaxcommatwocategoryagain}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{//y}}`{\mathbb{B}{//G(y)}};{\check{G}}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}{//G(y)}}`{\mathbb{A}{//y}};{\varepsilon_{y}\overline{!}\circ\check{F}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicadjunctionliftingagain}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{/y}}`{\mathbb{B}{/G(y)}};{\check{G}}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{B}{/G(y)}}`{\mathbb{A}{/y}};{\varepsilon_{y}!\circ\check{F}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{compositionof2adjunctionsfirstdiagramlaxcommatwocategorycomma}{ \morphism(0,0)|b|/{@{->}@/_20pt/}/<1200,0>[{\mathbb{A}{//F(y)}}`{\mathbb{B}{//GF(y)}};{\check{G}}] \morphism(1200,0)|a|/{@{->}@/_20pt/}/<-1200,0>[{\mathbb{B}{//GF(y)}}`{\mathbb{A}{//F(y)}};{\varepsilon_{F(y)}\underline{!}\circ\check{F}}] \morphism(1200,0)|b|/{@{->}@/_20pt/}/<1200,0>[{\mathbb{B}{//GF(y)}}`{\mathbb{B}{/y}};{\eta_y^\Leftarrow}] \morphism(2400,0)|a|/{@{->}@/_20pt/}/<-1200,0>[{\mathbb{B}{/y}}`{\mathbb{B}{//GF(y)}};{\eta_y\overline{!}}] \Loop(1200,0){{\mathbb{B}{//GF(y)}}}(ur,ul)_{\mathcal{T}} \morphism(600,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] \morphism(1800,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\delta{,}\rho\right)}] } 
\defdiag{compositionof2adjunctionsseconddiagrametacommalaxcomma}{ \morphism(0,0)|b|/{@{->}@/_23pt/}/<2400,0>[{\mathbb{A}{//F(y)}}`{\mathbb{B}{/y}};{\eta_y^\Leftarrow\circ\check{G}}] \morphism(2400,0)|a|/{@{->}@/_23pt/}/<-2400,0>[{\mathbb{B}{/y}}`{\mathbb{A}{//F(y)}};{\check{F}}] \Loop(2400,0){{\mathbb{B}{/y}}}(rd,ru)_\mathcal{R} \morphism(1200,112)|r|/{@{-|}@<-15pt>}/<0,-225>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon\cdot\left(\id_{\check{F}}\ast\delta\ast\id_{\check{G}}\right){,}\,\,\,\alpha\right)}] } \defdiag{definitionofalphalinha}{ \morphism(0,0)|r|/->/<450,-450>[{x}`{GF(b)\downarrow{\eta_y}};{\alpha_{(x,b)}}] \morphism(450,-450)|a|/->/<900,0>[{GF(b)\downarrow{\eta_y}}`{GF(x)};{\left({G}F(b)\right)^\Rightarrow\left(\eta_y\right)}] \morphism(450,-450)|a|/->/<0,-750>[{GF(b)\downarrow{\eta_y}}`{y};{\eta_y^\Leftarrow\left({G}F(b)\right)}] \morphism(450,-1200)|b|/->/<900,0>[{y}`{GF(y)};{\eta_y}] \morphism(1350,-450)|r|/->/<0,-750>[{GF(x)}`{GF(y)};{GF(b)}] \morphism(2100,-450)|a|/->/<900,0>[{x}`{GF(x)};{\eta_x}] \morphism(2100,-450)|l|/->/<0,-750>[{x}`{y};{b}] \morphism(2100,-1200)|b|/->/<900,0>[{y}`{GF(y)};{\eta_y}] \morphism(3000,-450)|r|/->/<0,-750>[{GF(x)}`{GF(y)};{GF(b)}] \morphism(675,-825)|a|/<=/<450,0>[{\phantom{O}}`{\phantom{O}};{\chi ^{GF(b)\downarrow{\eta_y}}}] \morphism(2445,-825)/=/<210,0>[{\phantom{O}}`{\phantom{O}};] \morphism(1672,-825)/=/<210,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicliftingtwoadjunctionlaxcommatwocategoryylefttwoadjoint}{ \morphism(675,0)|b|/->/<-675,0>[{\mathbb{B}{//GF(y)}}`{\mathbb{A}{//F(y)}};{\varepsilon_{F(y)}\underline{!}\circ\check{F}}] \morphism(1350,0)|b|/->/<-675,0>[{\mathbb{B}{/y}}`{\mathbb{B}{//GF(y)}};{\eta_y\overline{!}}] \morphism(1350,0)|m|/{@{->}@/_28pt/}/<-1350,0>[{\mathbb{B}{/y}}`{\mathbb{A}{//F(y)}};{\check{F}}] } \defdiag{basicliftingtwoadjunctionpullbacktwocategoryylefttwoadjoint}{ \morphism(675,0)|b|/->/<-675,0>[{\mathbb{B}{/GF(y)}}`{\mathbb{A}{/F(y)}};{\varepsilon_{F(y)}!\circ\check{F}}] 
\morphism(1350,0)|b|/->/<-675,0>[{\mathbb{B}{/y}}`{\mathbb{B}{/GF(y)}};{\eta_y!}] \morphism(1350,0)|m|/{@{->}@/_28pt/}/<-1350,0>[{\mathbb{B}{/y}}`{\mathbb{A}{/F(y)}};{\check{F}}] } \defdiag{2adjunctionidentityforthesouthafricantheorem}{ \morphism(0,0)|b|/{@{->}@/_16pt/}/<825,0>[{\mathbb{A}{//F(y)}}`{\mathbb{A}{/F(y)}};{\id_{F(y)}^{\Leftarrow}}] \morphism(825,0)|a|/{@{->}@/_16pt/}/<-825,0>[{\mathbb{A}{/F(y)}}`{\mathbb{A}{//F(y)}};{\id_{F(y)}\overline{!}}] \morphism(412,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{compositionof2adjunctionsadmissiblewrtbasifibration}{ \morphism(0,0)|b|/{@{->}@/_20pt/}/<1200,0>[{\mathbb{A}{/F(y)}}`{\mathbb{B}{/GF(y)}};{\check{G}}] \morphism(1200,0)|a|/{@{->}@/_20pt/}/<-1200,0>[{\mathbb{B}{/GF(y)}}`{\mathbb{A}{/F(y)}};{\varepsilon_{F(y)}!\circ\check{F}}] \morphism(1200,0)|b|/{@{->}@/_20pt/}/<1200,0>[{\mathbb{B}{/GF(y)}}`{\mathbb{B}{/y}};{\eta_y^\ast}] \morphism(2400,0)|a|/{@{->}@/_20pt/}/<-1200,0>[{\mathbb{B}{/y}}`{\mathbb{B}{/GF(y)}};{\eta_y!}] \morphism(0,0)|b|/{@{->}@<-5pt>@/_45pt/}/<2400,0>[{\mathbb{A}{/F(y)}}`{\mathbb{B}{/y}};{\eta_y^{\ast}\,\circ\,\check{G}}] \morphism(2400,0)|a|/{@{->}@<-5pt>@/_45pt/}/<-2400,0>[{\mathbb{B}{/y}}`{\mathbb{A}{/F(y)}};{\check{F}}] \morphism(600,90)|r|/{@{-|}@<-7pt>}/<0,-180>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] \morphism(1800,90)/-|/<0,-180>[{\phantom{O}}`{\phantom{O}};] } \defdiag{theoremthatusespreservationofcommafirstleft}{ \morphism(0,0)|r|/{@{->}@/^20pt/}/<0,-900>[{\mathbb{A}{//F(y)}}`{\mathbb{B}{//GF(y)}};{\check{G}}] \morphism(0,-900)|l|/{@{->}@/^20pt/}/<0,900>[{\mathbb{B}{//GF(y)}}`{\mathbb{A}{//F(y)}};{\varepsilon_{F(y)}\underline{!}\circ\check{F}}] \morphism(0,-900)|r|/{@{->}@/^20pt/}/<0,-900>[{\mathbb{B}{//GF(y)}}`{\mathbb{B}{/y}};{\eta_y^\Leftarrow}] \morphism(0,-1800)|l|/{@{->}@/^20pt/}/<0,900>[{\mathbb{B}{/y}}`{\mathbb{B}{//GF(y)}};{\eta_y\overline{!}}] 
\morphism(-105,-450)|a|/{@{-|}@<-7pt>}/<210,0>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] \morphism(-105,-1350)|a|/{@{-|}@<-7pt>}/<210,0>[{\phantom{O}}`{\phantom{O}};{\left(\delta{,}\rho\right)}] } \defdiag{theoremthatusespreservationofcommasecond}{ \morphism(0,0)|r|/{@{->}@/^20pt/}/<0,-750>[{\mathbb{A}{//F(y)}}`{\mathbb{B}{//GF(y)}};{\check{G}}] \morphism(0,-750)|l|/{@{->}@/^20pt/}/<0,750>[{\mathbb{B}{//GF(y)}}`{\mathbb{A}{//F(y)}};{\varepsilon_{F(y)}\underline{!}\circ\check{F}}] \morphism(0,-750)|r|/{@{->}@/^20pt/}/<0,-750>[{\mathbb{B}{//GF(y)}}`{\mathbb{B}{/GF(y)}};{\id_{GF(y)}^\Leftarrow}] \morphism(0,-1500)|l|/{@{->}@/^20pt/}/<0,750>[{\mathbb{B}{/GF(y)}}`{\mathbb{B}{//GF(y)}};{\id_{GF(y)}\overline{!}}] \morphism(0,-1500)|r|/{@{->}@/^20pt/}/<0,-750>[{\mathbb{B}{/GF(y)}}`{\mathbb{B}{/y}};{\eta_y^\ast}] \morphism(0,-2250)|l|/{@{->}@/^20pt/}/<0,750>[{\mathbb{B}{/y}}`{\mathbb{B}{/GF(y)}};{\eta_y!}] \morphism(-105,-375)|a|/{@{-|}@<-7pt>}/<210,0>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] \morphism(-105,-1125)/-|/<210,0>[{\phantom{O}}`{\phantom{O}};] \morphism(-105,-1875)/-|/<210,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{theoremthatusespreservationofcommathird}{ \morphism(0,0)|r|/{@{->}@/^20pt/}/<0,-750>[{\mathbb{A}{//F(y)}}`{\mathbb{A}{/F(y)}};{\id_{F(y)}^\Leftarrow}] \morphism(0,-750)|l|/{@{->}@/^20pt/}/<0,750>[{\mathbb{A}{/F(y)}}`{\mathbb{A}{//F(y)}};{\id_{F(y)}\overline{!}}] \morphism(0,-750)|r|/{@{->}@/^20pt/}/<0,-750>[{\mathbb{A}{/F(y)}}`{\mathbb{B}{/GF(y)}};{\check{G}}] \morphism(0,-1500)|l|/{@{->}@/^20pt/}/<0,750>[{\mathbb{B}{/GF(y)}}`{\mathbb{A}{/F(y)}};{\varepsilon_{F(y)}!\circ\check{F}}] \morphism(0,-1500)|r|/{@{->}@/^20pt/}/<0,-750>[{\mathbb{B}{/GF(y)}}`{\mathbb{B}{/y}};{\eta_y^\ast}] \morphism(0,-2250)|l|/{@{->}@/^20pt/}/<0,750>[{\mathbb{B}{/y}}`{\mathbb{B}{/GF(y)}};{\eta_y!}] \morphism(-105,-375)/-|/<210,0>[{\phantom{O}}`{\phantom{O}};] 
\morphism(-105,-1125)|a|/{@{-|}@<-7pt>}/<210,0>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] \morphism(-105,-1875)/-|/<210,0>[{\phantom{O}}`{\phantom{O}};] } \defdiag{definingtheimageofthelefttwoadjointinfamsecond}{ \morphism(0,0)/{@{->}@/^30pt/}/<975,0>[{\displaystyle\coprod_{j=1}^{n}x_j}`{\displaystyle\coprod_{j=1}^{m}y_j};] \morphism(0,0)/{@{->}@/_30pt/}/<975,0>[{\displaystyle\coprod_{j=1}^{n}x_j}`{\displaystyle\coprod_{j=1}^{m}y_j};] \morphism(488,150)/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};] } \defdiag{definingtheimageofthelefttwoadjointinfam}{ \morphism(0,0)|a|/{@{->}@/^30pt/}/<750,0>[{x_i}`{y_{t_0(i)}};{t_i}] \morphism(0,0)|b|/{@{->}@/_30pt/}/<750,0>[{x_i}`{y_{t_0(i)}};{t_i'}] \morphism(750,0)/->/<675,0>[{y_{t_0(i)}}`{\displaystyle\coprod_{j=1}^{m}y_j};] \morphism(375,150)/=>/<0,-300>[{\phantom{O}}`{\phantom{O}};] } \defdiag{basicadjunctionoffam}{ \morphism(0,0)|b|/{@{->}@/_25pt/}/<825,0>[{\mathbb{A}}`{{\mathsf{Fam}}_{\mathsf{fin}}\left({\mathbb{A}}\right)};{I}] \morphism(825,0)/{@{->}@/_25pt/}/<-825,0>[{{\mathsf{Fam}}_{\mathsf{fin}}\left({\mathbb{A}}\right)}`{\mathbb{A}};] \morphism(412,105)|r|/{@{-|}@<-7pt>}/<0,-210>[{\phantom{O}}`{\phantom{O}};{\left(\varepsilon{,}\eta\right)}] } \defdiag{basiccommutativitydiagramoftheextensivity}{ \morphism(0,-525)|b|/->/<1125,0>[{\displaystyle\mathbb{A}{/}{\coprod_{j=1}^{n}y_j}}`{{\mathsf{Fam}}_{\mathsf{fin}}\left({\mathbb{A}}\right)/\left({y}_j\right)_{j\in\left\{{1},\ldots{,}n\right\}}};{\eta_{Y}^\ast\circ\,{\check{I}_\mathbb{A}}}] \morphism(0,-525)|l|/<-/<0,525>[{\displaystyle\mathbb{A}{/}{\coprod_{j=1}^{n}y_j}}`{\displaystyle\prod_{j=1}^n\mathbb{A}/y_j};{\simeq}] \morphism(1125,-525)|r|/<-/<0,525>[{{\mathsf{Fam}}_{\mathsf{fin}}\left({\mathbb{A}}\right)/\left({y}_j\right)_{j\in\left\{{1},\ldots{,}n\right\}}}`{\displaystyle\prod_{j=1}^n{\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A}/y_j\right)};{\simeq}] 
\morphism(0,0)|a|/->/<1125,0>[{\displaystyle\prod_{j=1}^n\mathbb{A}/y_j}`{\displaystyle\prod_{j=1}^n{\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A}/y_j\right)};{\prod_{j=1}^n{I_{\mathbb{A}/y_j}}}] \morphism(382,-262)|m|//<210,0>[{\phantom{O}}`{\phantom{O}};{\cong}] } \def\directlua{pu()}{} \fi \title{Lax comma $2$-categories and admissible $2$-functors} \author{Maria Manuel Clementino and Fernando Lucatelli Nunes} \dedication{In memory of Marta Bunge} \address{(1,2): University of Coimbra, CMUC, Department of Mathematics, 3000-143 Coimbra, Portugal.\\ (2): Departement Informatica, Universiteit Utrecht, Nederland} \eaddress{[email protected] and [email protected]} \amsclass{18N10, 18N15, 18A05, 18A22, 18A40} \keywords{change-of-base functor, comma object, Galois theory, Kock-Z\"{o}berlein monads, semi-left exact functor, lax comma $2$-categories, simple $2$-adjunctions, $2$-admissible $2$-functor} \thanks{This work was supported through the programme ``Oberwolfach Leibniz Fellows'' by the Mathematisches Forschungsinstitut Oberwolfach in 2022. This research was partially supported by the CMUC, Centre for Mathematics of the University of Coimbra - UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES, and by the Institut de Recherche en Math\'{e}matique et Physique (IRMP, UCLouvain, Belgium).} \maketitle \begin{abstract} This paper is a contribution towards a two dimensional extension of the basic ideas and results of Janelidze's Galois theory. In the present paper, we give a suitable counterpart notion to that of \textit{absolute admissible Galois structure} for the lax idempotent context, compatible with the context of \textit{lax orthogonal factorization systems}. As part of this work, we study lax comma $2$-categories, giving analogue results to the basic properties of the usual comma categories. 
We show that each morphism of a $2$-category induces a $2$-adjunction between lax comma $2$-categories and comma $2$-categories, playing the role of the usual \textit{change-of-base functors}. With these induced $2$-adjunctions, we are able to show that each $2$-adjunction induces $2$-adjunctions between lax comma $2$-categories and comma $2$-categories, which are our analogues of the usual lifting to the comma categories used in Janelidze's Galois theory. We give sufficient conditions under which these liftings are $2$-premonadic and induce a lax idempotent $2$-monad, which corresponds to our notion of $2$-admissible $2$-functor. In order to carry out this work, we analyse when a composition of $2$-adjunctions induces a lax idempotent $2$-monad, and when it is $2$-premonadic. We then give examples of our $2$-admissible $2$-functors (and, in particular, simple $2$-functors), especially using a result that says that all admissible ($2$-)functors in the classical sense are also $2$-admissible (and hence simple as well).
\end{abstract}
\tableofcontents
\setcounter{secnumdepth}{-1}
\section{Introduction}
Categorical Galois theory, originally developed by Janelidze~\cite{MR1061480, MR1822890}, gives a unifying setting for most of the formerly introduced Galois-type theorems, even generalizing most of them. It neatly gives a common ground for Magid's Galois theory of commutative rings, Grothendieck's theory of \'{e}tale coverings of schemes, and central extensions of groups. Furthermore, since its genesis, Janelidze's Galois theory has found several developments, applications and examples in new settings (see, for instance, \cite{MR1397399}, \cite{MR3275274}, \cite{MR3207214}, \cite[Theorem~4.2]{MR1245796}, and \cite[Theorem~9.8]{2016arXiv160604999L}).
The most elementary observation on factorization systems and Janelidze's Galois theory is that, in the suitable setting of finitely complete categories, the notion of absolute admissible Galois structure coincides with that of a semi-left exact reflective functor/adjunction (see, for instance, \cite[Section~5.5]{MR1822890} or \cite{zbMATH01024334}). Motivated by the fact above and the theory of \textit{lax orthogonal factorization systems}~\cite{zbMATH07249998, MR3708821, MR3545937}, we have started a project whose aim is to investigate a two dimensional extension of the basic ideas and results of (absolute) Janelidze's Galois theory. We deal herein with a key step of this endeavor, that is to say, we develop the basics in order to give a suitable counterpart notion to that of \textit{absolute admissible Galois structure}. We adopt the \textit{usual} viewpoint that the $2$-dimensional analogue of an idempotent monad (full reflective functor) is that of a lax idempotent monad (pre-Kock-Z\"{o}berlein $2$-functor). Therefore the concept of an admissible Galois structure within our context should be a lax idempotent counterpart to the notion of \textit{semi-left exact reflective functor}; namely, an appropriate notion of semi-left exact functor for the context of \cite{MR3545937}. We study the lifting of $2$-adjunctions to comma-type $2$-categories. We find two possible liftings that deserve interest. The underlying adjunction of the first type of lifting is the usual $1$-dimensional case, while the other one, more relevant to our context, is a counterpart to the lifting of the $2$-monad given in \cite{MR3545937} by comma objects. The latter requires us to study the lax analogue of comma categories: the \textit{lax comma $2$-categories} of the title. We study the basic aspects of lax comma $2$-categories.
Among them are the $2$-adjunction between the usual comma $2$-category and the lax comma $2$-category (for each object), and a counterpart of the usual change-of-base $2$-functors, which comes into play as a fundamental aspect of our work, especially in introducing the definition of \textit{$2$-admissible $2$-adjunction}. With these analogues of the change-of-base $2$-functors, we are able to introduce the lifting of each $2$-adjunction to a $2$-adjunction between the lax comma $2$-category and the comma $2$-category as a composition of $2$-adjunctions: namely, the composition of a straightforward lifting to the lax comma $2$-categories with a change-of-base $2$-functor induced by the appropriate component of the unit. Fully relying on the study of properties of compositions of $2$-adjunctions, we investigate the properties of these liftings of the $2$-adjunctions. Namely, we show under which conditions these liftings induce lax idempotent $2$-monads (the simple $2$-adjunctions of \cite{MR3545937}), recovering one characterization given in \cite{MR3545937} of their \textit{simple $2$-adjunctions}. We also give a characterization of the $2$-functors whose introduced lifting is lax idempotent and $2$-premonadic, the \textit{$2$-admissible $2$-functors} within our context. In Section \ref{section1 Preliminaries} we recall basic aspects and terminology of $2$-categories, such as $2$-adjunctions and $2$-monads, finishing the section with aspects of \textit{raris}, right-adjoint right-inverses (see Definition \ref{ralirarilalilaridefinition}), within a $2$-category. Taking the opportunity to fix notation, we also recall the universal properties of the main two dimensional limits used in our work in Section \ref{twodimensionallimits section2}, that is to say, the definitions of conical $2$-limits and comma objects.
In Section \ref{sectionkockzoberlein} we recall and establish aspects of idempotent and lax idempotent $2$-monads needed for our work on admissible and $2$-admissible $2$-functors, also introducing a characterization of the $2$-adjunctions that induce lax idempotent $2$-monads, called herein lax idempotent $2$-adjunctions (see, for instance, Theorem \ref{characterizationlaxidempotent2adjunction}). In Section \ref{compositionof2adjunctionssection} we develop the main concepts and results on composition of $2$-adjunctions in order to introduce the notions of simple, admissible and $2$-admissible $2$-adjunctions (see, for instance, Definitions \ref{Admissibletwoadjunction}, \ref{SIMPLEtwoadjunction}, and \ref{twoadmissibledefinition}). The results focus on characterizing and giving conditions under which the composition of $2$-adjunctions is an idempotent/lax idempotent (full reflective/pre-Kock-Z\"{o}berlein) $2$-adjunction ($2$-functor). Most of them are analogues for the simpler case of idempotent $2$-adjunctions (see, for instance, Theorem \ref{admissibilitywrtJH}, which characterizes when the composition of right $2$-adjoints is pre-Kock-Z\"{o}berlein). In Section \ref{basicdefinitionsarticle} we recall the notion of lax comma $2$-categories $\mathbb{A} //y $, for each $2$-category $\mathbb{A} $ and object $y\in\mathbb{A} $ (see Definition \ref{definitionoflaxcommacategory}). This notion has already appeared in the literature (see, for instance, \cite[I,5]{MR0371990}, \cite[\S~6]{MR0249483}, \cite[Exercise~5, p.~115]{MR1712872} and \cite[p.~305]{MR558494}). We then introduce the change-of-base $2$-functors for lax comma $2$-categories. More precisely, we show that, for each morphism $c: y\to z $ in a $2$-category $\mathbb{A} $ with comma objects, we have an induced $2$-adjunction \directlua{pu()} \begin{equation*}\label{changeofthebaselaxcommacategoriesequationchangeofbase} \diag{changeofthebaselaxcommacategorieschangeofbase}
\end{equation*} between the lax comma $2$-category $\mathbb{A} //z $ and the comma $2$-category $\mathbb{A} /y$. We give an explicit construction of this $2$-adjunction: see Theorem \ref{teoremamudancadebaselassa}. Provided that $\mathbb{A} $ has pullbacks and comma objects, these induced $2$-adjunctions, together with the classical change-of-base $2$-functors, give the $2$-adjunctions \directlua{pu()} \begin{equation*} \diag{compositionoftwoadjunctionschangeofbasecommacompositionofpullbackcomma} \end{equation*} in which the composition of $c!\dashv c^\ast : \mathbb{A} /z \to \mathbb{A} /y $ with $\id _z\overline{!}\dashv \id _z^\Leftarrow : \mathbb{A} //z \to \mathbb{A} /z $ is, up to $2$-natural isomorphism, the $2$-adjunction $c\overline{!}\dashv c^\Leftarrow : \mathbb{A} //z \to \mathbb{A} /y $ (see Theorem \ref{relationofchangeofbasecomma}). We finish Section \ref{change-of-base functor} by showing that, whenever it is well defined, $\id_ y^\Leftarrow $ is pre-Kock-Z\"{o}berlein (Theorem \ref{laxidempotentcoherencelaxcommacomma}). The main point of Section \ref{secao de admissibilidade} is to introduce our notions of admissibility and $2$-admissibility (Definition \ref{maindefinition}), relying on the definitions previously introduced in Section \ref{compositionof2adjunctionssection}. We also use the main results of Section \ref{compositionof2adjunctionssection} to characterize and give conditions under which a $2$-functor is $2$-admissible (see, for instance, Corollaries \ref{oneofthemaincorollaries} and \ref{preKockZoberleinFdashvG}). We finish Section \ref{secao de admissibilidade} with a fundamental observation on admissibility and $2$-admissibility, namely, Theorem \ref{southafricantheorem}. It says that, provided that $\mathbb{A} $ has comma objects, if $F\dashv G $ is admissible in the classical sense (called herein \textit{admissible w.r.t.
the basic fibration}), meaning that $G$ itself is full reflective and the compositions $$ \eta_y^\ast\,\circ \check{G}: \mathbb{A} /F(y) \to \mathbb{B} /y $$ are full reflective for all $y$, then $G$ is $2$-admissible, which means that the compositions $$ \eta_y^\Leftarrow\,\circ \check{G}: \mathbb{A} //F(y) \to \mathbb{B} /y $$ are pre-Kock-Z\"{o}berlein for all objects $y$. We discuss examples of $2$-admissible $2$-functors (and hence also simple $2$-functors) in Section \ref{sectionexamplesexamples}. Most examples are about cocompletion of $2$-categories, making use of Theorem \ref{southafricantheorem}.
\setcounter{secnumdepth}{5}
\section{Preliminaries}\label{section1 Preliminaries}
Let $\Cat $ be the cartesian closed category of categories in some universe. We denote the internal hom by $$\Cat (-,-): \Cat ^\op\times \Cat\to \Cat .$$ A $2$-category $\mathbb{A} $ herein is the same as a $\Cat $-enriched category. We denote the enriched hom of a $2$-category $\mathbb{A} $ by $$\mathbb{A} (-,-): \mathbb{A} ^\op \times \mathbb{A} \to \Cat $$ which, again, is of course a $2$-functor. As usual, the composition of $1$-cells (morphisms) is denoted by $\circ $ or $\cdot $, or omitted whenever it is clear from the context. The vertical composition of $2$-cells is denoted by $\cdot $ or omitted when it is clear, while the horizontal composition is denoted by $\ast$. Recall that, from the vertical and horizontal compositions, we construct the fundamental operation of \textit{pasting}~\cite{MR0357542, MR1040947}. Finally, if $f: w\to x $, $g:y\to z $ are $1$-cells of $\mathbb{A} $, given a $2$-cell $\xi : h\Rightarrow h' : x\to y $, motivated by the case of $\mathbb{A} = \Cat $, we use interchangeably the notations \directlua{pu()} \directlua{pu()} \begin{equation} \id_g\ast \xi\ast \id_ f\quad =\quad \diag{whiskering1} \quad =\quad g\xi f \end{equation} to denote the whiskering of $\xi $ with $f$ and $g$.
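For orientation, we spell out the whiskering in the basic case $\mathbb{A} = \Cat $ (a standard componentwise computation, recorded only for the reader's convenience): for each object $a$ of $w$, the component of $g\xi f$ at $a$ is
$$\left(g\xi f\right)_a \;=\; g\left(\xi_{f(a)}\right)\colon\, ghf(a)\to gh'f(a),$$
so that $g\xi f$ is indeed a $2$-cell (natural transformation) $ghf\Rightarrow gh'f : w\to z $.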
Henceforth, we consider the $3$-category of $2$-categories, $2$-functors, $2$-natural transformations and modifications, denoted by $2$-$\Cat $. We refer to \cite{MR0357542, MR0299653} for the basics on $2$-dimensional category theory, and, more particularly, to the definitions of adjunctions, monads and Kan extensions inside a $2$-category. Moreover, we also make extensive use of $2$-monad theory. The pioneering reference is \cite{MR1007911}, while we mostly follow the terminology (and results) of \cite{2016arXiv160703087L}. In this paper, we consider the \textit{strict} versions of $2$-dimensional adjunctions and monads: the concepts coincide with the $\Cat $-enriched ones. A \textit{$2$-adjunction}, denoted by $$(F\dashv G, \varepsilon, \eta): \mathbb{A}\to\mathbb{B} ,$$ consists of $2$-functors \directlua{pu()} $$ \diag{twofoldtwofunctors} $$ with $2$-natural transformations $\varepsilon : FG\Longrightarrow \id _ \mathbb{A} $ and $\eta : \id _ \mathbb{B} \Longrightarrow GF $ playing the role of the \textit{counit} and the \textit{unit} respectively. More precisely, the equations of $2$-natural transformations \directlua{pu()} \directlua{pu()} \begin{equation*}\tag{triangle identities} \diag{triangleidentityadjunctiondiagram1}\, =\, \id _ G\quad\mbox{ and }\quad \diag{triangleidentityadjunctiondiagram2}\, =\, \id _F \end{equation*} hold. We usually denote a $2$-adjunction $(F\dashv G, \varepsilon, \eta): \mathbb{A}\to\mathbb{B} $ by \directlua{pu()} $$ \diag{basicadjunction} $$ or by $F\dashv G: \mathbb{A}\to \mathbb{B} $ for short, when the counit and unit are already given. A \textit{$2$-monad} on a $2$-category $\mathbb{B} $ is a triple $\mathcal{T} = (T, \mu , \eta) $ in which $T: \mathbb{B} \to \mathbb{B} $ is an endo-$2$-functor and $\mu, \eta $ are $2$-natural transformations playing the role of the multiplication and the unit respectively.
That is to say, $\mu $ and $\eta $ are $2$-natural transformations such that the equations \directlua{pu()} \directlua{pu()} \directlua{pu()} \directlua{pu()} \begin{equation*}\tag{associativity of a $2$-monad} \diag{leftsideoftheequationassociativityofmonad} \quad =\quad \diag{rightsideoftheequationassociativityofmonad} \end{equation*} \begin{equation*}\tag{identity of a $2$-monad} \diag{firstsideoftheequationidenityofamonad}\quad = \quad \diag{secondsideoftheequationidenityofamonad}\quad = \quad \id _T \end{equation*} hold. Since the notions above coincide with the $\Cat $-enriched ones, it should be noted that the formal theory of monads applies to this case. More precisely, every $2$-adjunction does induce a $2$-monad, and we have the usual Eilenberg-Moore and Kleisli factorizations of a right $2$-adjoint functor (\textit{e.g.}~\cite[Section~2]{MR0299653} or \cite[Section~3]{2019arXiv190201225L}), which give rise respectively to the notions of $2$-monadic and Kleisli $2$-functors. Furthermore, we also have (the enriched version of) Beck's monadicity theorem~\cite[Theorem~II.2.1]{MR0280560}. In this direction, we use expressions like \textit{equivalence (or $2$-equivalence)}, and \textit{fully faithful $2$-functor} to mean the (strict) $\Cat $-enriched notions: that is to say, respectively, \textit{equivalence} in the $2$-category of $2$-categories, and a $2$-functor that is \textit{locally an isomorphism}.
\subsection{Lalis and ralis} To refer to adjunctions where the unit or counit is an identity, we adopt a terminology similar to the one introduced by Gray in \cite[0.3.B]{zbMATH03305157}. More precisely: \begin{defi}\label{ralirarilalilaridefinition} Assume that $(f\dashv g, v, n )$ is an adjunction in a $2$-category $\mathbb{A} $. \begin{itemize} \renewcommand\labelitemi{--} \item If the counit $ v $ is the identity $2$-cell, $(f\dashv g, v, n )$ is called a \textit{rari adjunction (or rari pair)}, or a \textit{lali adjunction}.
If there is a rari adjunction $f\dashv g $, the morphism $f$ is called a \textit{lali (left-adjoint left-inverse)}, while the morphism $g$ is called a \textit{rari (right-adjoint right-inverse)}. \item If the unit $ n $ is the identity $2$-cell, $(f\dashv g, v, n )$ is called a \textit{rali adjunction}, or a \textit{lari adjunction}. If there is a rali adjunction $f\dashv g $, the morphism $f$ is called a \textit{lari (left-adjoint right-inverse)}, while the morphism $g$ is called a \textit{rali (right-adjoint left-inverse)}. \end{itemize} \end{defi} Laris (ralis) are closed under composition, and have specific cancellation properties. We recall them below. \begin{lem}\label{preparacaoparacancelamentolaris} Assume that \directlua{pu()} \begin{equation} \diag{compositionofadjunctionsrarilali} \end{equation} are adjunctions in $\mathbb{A} $. \begin{enumerate}[a)] \item Assuming that $f\dashv g $ is a lari adjunction: we have that $ f f'\dashv g' g $ is a lari adjunction if, and only if, $f'\dashv g' $ is a lari adjunction as well.\label{dualaralilaricancellation} \item Assuming that $f'\dashv g' $ is a lali adjunction: the adjunction $f f'\dashv g' g $ is a lali adjunction if, and only if, $f\dashv g $ is a lali adjunction as well. \label{aralilaricancellation} \end{enumerate} \end{lem} \begin{proof} Assuming that $ n $ is an isomorphism, we have that the unit \directlua{pu()} \begin{equation} \diag{proofodthelalicancellation} \end{equation} of the composition $ff'\dashv g'g$ is invertible if, and only if, $ n' $ is invertible. This proves \ref{dualaralilaricancellation} and, dually, we get \ref{aralilaricancellation}. \end{proof} Of course, the situation is simpler when we consider isomorphisms. That is to say: \begin{coro}\label{precisecancellationisomorphism} Assume that \directlua{pu()} \begin{equation} \diag{compositionofadjunctionsrarilalicorollaryforisomorphisms} \end{equation} are morphisms in $\mathbb{A} $ such that $ (f') ^{-1} = g' $ and $ (f'') ^{-1} = g'' $.
There is a lali (rali) adjunction $f'\cdot f\cdot f''\dashv g''\cdot g \cdot g' $ if and only if there is a lali (rali) adjunction $f\dashv g $. \end{coro} \begin{proof} If $f\dashv g $ is a lali (rali) adjunction, since $f'\dashv g' $ and $f''\dashv g'' $ are of course lali and rali adjunctions, it follows that the composite $$ f'\cdot f\cdot f''\dashv g''\cdot g \cdot g' $$ is a lali (rali) adjunction by Lemma \ref{preparacaoparacancelamentolaris}. Conversely, if $f'\cdot f\cdot f''\dashv g''\cdot g \cdot g' $ is a lali (rali) adjunction, since $g'\dashv f' $ and $g''\dashv f'' $ are lali and rali adjunctions, we get that the composite $$ g'\cdot f'\cdot f\cdot f''\cdot g''\dashv f''\cdot g''\cdot g \cdot g'\cdot f', $$ which is $f\dashv g $, is a lali (rali) adjunction. \end{proof} But we also have a stronger cancellation property: \begin{theo}[Left cancellation property]\label{cancellationpropertyralis} Let $f: x\to w, f': y\to x $ be morphisms of a $2$-category $\mathbb{A} $. \begin{enumerate}[a)] \item Assuming that $f: x\to w $ is a lari: the composite $ f f': y\to w $ is a lari if, and only if, $f': y\to x $ is a lari as well.\label{cancellationpropertyralisA} \item Assuming that $f $ is a rari: the composite $ff'$ is a rari if and only if $f' $ is a rari. \label{cancellationpropertyralisB} \end{enumerate} \end{theo} \begin{proof} By Lemma \ref{preparacaoparacancelamentolaris}, if $f $ and $f' $ are laris, the composite $ff' $ is a lari as well. Conversely, assume that $f $ and $ff'$ are laris. This means that there are adjunctions \directlua{pu()} \directlua{pu()} \begin{equation*} \diag{existingadjunction1}\quad \diag{existingadjunction2} \end{equation*} in $\mathbb{A} $ such that $n=\id _{gf} $ and $\hat{n} = \id _{\hat{g}ff'}$. We claim that \directlua{pu()} \begin{equation} \left(f'\dashv \hat{g}f,\, \diag{counitofthecancellationtheoremlaris} , \id_{\hat{g}ff'}\right) \end{equation} is a (lari) adjunction.
In fact, the triangle identities follow from the facts that the equations $ \hat{v}ff' = \id_{ff'} $ and $\hat{g}\hat{v} = \id_{\hat{g}} $ hold. Finally, the statement \ref{cancellationpropertyralisB} is the codual of \ref{cancellationpropertyralisA}. \end{proof} On the one hand, the \textit{left cancellation property} of Theorem \ref{cancellationpropertyralis} does not hold for lalis or ralis. For instance, in $\Cat $, we consider the terminal category $\mathsf{1} $ and the category $\mathsf{2} $ with two objects and only one nontrivial morphism between them. The morphisms \directlua{pu()} \begin{equation} \diag{exampleofnocancellationpropertyoflalis} \end{equation} are lalis. But the inclusion $d^0: \mathsf{1}\to \mathsf{2} $ of the terminal object of $\mathsf{2} $ is not a lali, since it does not have a right adjoint. On the other hand, the dual of Theorem \ref{cancellationpropertyralis} gives a right cancellation property for ralis and lalis. \begin{coro}[Right cancellation property]\label{leftcancellationpropertyralis} Let $f: x\to w, f': y\to x $ be morphisms of a $2$-category $\mathbb{A} $. If $f': y\to x $ is a lali (rali): we have that $f: x\to w $ is a lali (rali) if, and only if, the composite $ f f': y\to w $ is a lali (rali) as well.\label{leftcancellationpropertyralisA} \end{coro} \section{Two dimensional limits}\label{twodimensionallimits section2} In this section, we recall basic universal constructions related to the results of this paper. Two dimensional limits are the same as weighted limits in the $\Cat $-enriched context~\cite{MR0280560}. We refer, for instance, to \cite{MR0401868} for the basics on $2$-dimensional limits. We are particularly interested in \textit{conical $2$-(co)limits} and \textit{comma objects}. \subsection{Conical $2$-limits} Two dimensional conical (co)limits are just weighted (co)limits with a weight constantly equal to the terminal category $\mathsf{1} $. 
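For instance (a standard unpacking, not specific to the constructions of this paper), the conical $2$-product of objects $x$ and $y$ of $\mathbb{A} $ is an object $x\times y$ for which the canonical comparison functor
$$\mathbb{A}\left(z, x\times y\right)\;\longrightarrow\;\mathbb{A}\left(z,x\right)\times\mathbb{A}\left(z,y\right)$$
is invertible for every object $z$; that is, the usual $1$-dimensional universal property holds not only for morphisms into $x\times y$ but also for $2$-cells between them.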
Henceforth, the words \textit{(co)product}, \textit{pullback/pushout} and \textit{(co)equalizer} refer to the $2$-dimensional versions of each of those (co)limits. For instance, if $a :x\to y $, $b : w\to y $ are morphisms of a $2$-category $\mathbb{A} $, assuming its existence, the \textit{pullback} of $b $ along $a $ is an object $\displaystyle x\times _{(a,b)} w $ together with $1$-cells $ a^\ast (b):x\times _{(a,b)} w \to x $ and $b^\ast (a) : x\times _{(a,b)} w\to w $ making the diagram \directlua{pu()} \begin{equation}\label{definitionpullbackdiagram} \diag{pullbackdiagramdefinition} \end{equation} commutative, and satisfying the following universal property. For every object $z $ and every pair of $2$-cells $$(\xi _ 0 : h_0\Rightarrow h_0' : z\to x ,\, \xi _ 1 : h_1\Rightarrow h_1' : z\to w ) $$ such that the equation \directlua{pu()} \directlua{pu()} \begin{equation} \diag{twocellpullbackdefinitionrightside} \quad =\quad \diag{twocellofpullbackdefinitionleftside} \end{equation} holds, there is a unique $2$-cell $\xi : h\Rightarrow h' : z\to x\times_{(a,b)}w $ satisfying the equations \begin{center} $\id _ {a^\ast (b)}\ast \xi = \xi _ 0 $ and $\id _ {b^\ast (a) }\ast\xi = \xi _ 1 $. \end{center} \begin{rem} It is clear that the concept of \textit{pullback} in locally discrete $2$-categories coincides with the concept of ($1$-dimensional) \textit{pullback} in the underlying categories. Moreover, when a \textit{pullback} exists in a $2$-category, it is isomorphic to the ($1$-dimensional) \textit{pullback} in the underlying category. Finally, both the statements above are also true if \textit{pullback} is replaced by any type of conical $2$-limit with a locally discrete \textit{shape} (domain). \end{rem} \subsection{Comma objects}\label{definitionofcommaobjects} If $a :x\to y $, $b : w\to y $ are morphisms of a $2$-category $\mathbb{A} $, the comma object of $a$ along $b$, if it exists, is an object $a\downarrow b $ with the following universal property. 
There are $1$-cells $a^{\Rightarrow } (b) : a\downarrow b\to x $ and $ b^{\Leftarrow } (a) : a\downarrow b\to w $ and a $2$-cell \directlua{pu()} \begin{equation}\label{diagramdefinitioncommaobject} \diag{commadiagramdefinition} \end{equation} such that: \begin{enumerate} \item For every triple $(h_0: z\to x, h_1: z\to w, \gamma : a h_0 \Rightarrow b h_1 ) $ in which $h_0, h_1 $ are morphisms and $\gamma $ is a $2$-cell of $\mathbb{A} $, there is a unique morphism $h: z\to a\downarrow b $ such that the equations $h_0 = a^\Rightarrow (b) \cdot h $, $h_1 = b^\Leftarrow (a)\cdot h $ and \directlua{pu()} \begin{equation} \diag{morphismcommadiagramdefinition} \end{equation} hold. \item For every pair of $2$-cells $(\xi _ 0 : h_0\Rightarrow h_0' :z\to x,\, \xi _ 1 : h_1\Rightarrow h_1' : z\to w ) $ such that \directlua{pu()} \directlua{pu()} \begin{equation} \diag{commatwocellpullbackdefinitionrightside} \quad =\quad \diag{commatwocellofpullbackdefinitionleftside} \end{equation} holds, there is a unique $2$-cell $\xi : h\Rightarrow h' : z\to a\downarrow b $ such that $\id _ {a^\Rightarrow (b)}\ast \xi = \xi _ 0 $ and $\id _ {b^\Leftarrow (a)}\ast \xi = \xi _ 1 $. \end{enumerate} \begin{rem} If $\mathbb{A} $ is a locally discrete $2$-category, the comma object of a morphism $a$ along $b$ has the same universal property as the pullback of $a$ along $b$. \end{rem}
\section{Lax idempotent $2$-adjunctions}\label{sectionkockzoberlein}
Herein, our standpoint is that the notion of \textit{pre-Kock-Z\"{o}berlein $2$-functor} is the $2$-dimensional counterpart of the notion of \textit{full reflective functor}. In this section, we recall the basic definitions and give basic characterizations, but we refer to \cite{MR1359690, MR1432190} and \cite[Ch.~4]{zbMATH05036792} for fundamental aspects and examples of lax idempotent $2$-monads.
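Before giving the formal definition, we record a guiding example (standard in the literature on lax idempotent monads, and of the same flavour as the cocompletion examples appearing later in this paper): the $2$-monad on $\Cat $ induced by the free finite-coproduct completion
$$\mathsf{Fam}_{\mathsf{fin}}\colon \Cat\to\Cat$$
is lax idempotent. An algebra structure $a\colon \mathsf{Fam}_{\mathsf{fin}}\left(\mathbb{A}\right)\to\mathbb{A} $ amounts to a choice of finite coproducts in $\mathbb{A} $; in particular, being an algebra is property-like structure, unique up to isomorphism whenever it exists.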
\begin{defi}[Lax idempotent $2$-monad] A \textit{lax idempotent $2$-monad} is a $2$-monad $\mathcal{T} = (T, \mu , \eta) $ such that we have a rari adjunction $\mu \dashv \eta \ast \id_T $. An \textit{idempotent $2$-monad} is a $2$-monad $\mathcal{T} = (T, \mu , \eta) $ such that $\mu $ is invertible or, in other words, it is a lax idempotent $2$-monad such that $\mu \dashv \eta \ast \id_T $ is a rali adjunction as well. \end{defi} More explicitly, a $2$-monad $\mathcal{T} = (T, \mu , \eta) $ on a $2$-category $\mathbb{B} $ is lax idempotent if there is a modification \directlua{pu()} $$\diag{modificationofthedefinitionoflaxidempotenttwomonad} $$ such that, for each object $z\in \mathbb{B} $, \directlua{pu()} \directlua{pu()} \begin{equation*} \diag{modificationofthedefinitionoflaxidempotenttwomonadtriangleidentityotwo}\qquad\qquad \diag{modificationofthedefinitionoflaxidempotenttwomonadtriangleidentityoone} \end{equation*} are respectively the identity $2$-cells on $\eta_{T(z)} $ and on $\mu _z $. \begin{rem}[Dualities and self-duality]\label{dualitylaxidempotentremark} The concepts of lax idempotent and idempotent $2$-monads are actually notions that can be defined inside any $3$-category (or, more generally, tricategory~\cite{MR1261589}). Therefore they have eight dual notions each (counting the concept itself). However, the notions of lax idempotent and idempotent $2$-monads are self-dual, that is to say, the dual notion coincides with itself. More precisely, a triple $\mathcal{T} = (T, \mu , \eta) $ is a (lax) idempotent $2$-monad in the $3$-category $2\textrm{-}\Cat $ if and only if the corresponding triple is also a (lax) idempotent $2$-monad in the $3$-category $\left( 2\textrm{-}\Cat\right) ^\op $. Furthermore, the notion of idempotent $2$-monad is self-$3$-dual, meaning that the notion does not change when we invert the directions of the $3$-cells (which are, in our case, the modifications). 
However the $3$-dual of the notion of lax idempotent $2$-monad is that of colax idempotent $2$-monad. Finally, the notions obtained from the inversion of the directions of the $2$-cells, that is to say, the codual (or $2$-dual) concepts, are those of lax idempotent and idempotent $2$-comonads. \end{rem} \textit{Henceforth, throughout this section, we always assume that a $2$-adjunction $$\diag{basicadjunction} $$ is given, and we denote by $\mathcal{T} = (T, \mu , \eta) $ the induced $2$-monad $(GF, G \varepsilon F, \eta ) $ on $\mathbb{B} $. } \subsection{Idempotency} There are several useful well-known characterizations of idempotent ($2$-)monads (see, for instance, \cite[p.~196]{MR1313497}). \begin{lem}[Idempotent $2$-monad]\label{characterizationidempotentmonad} The following statements are equivalent. \begin{enumerate}[i)] \item $\mathcal{T}$ is idempotent; \label{idempotentdefinitionbasictheorem} \item $T \eta $ (or $\eta T $) is an epimorphism; \item $\mu $ is a monomorphism; \label{trivialmonomorphismidempotent} \item $T\eta =\eta T $; \label{idempotentequalunitsidempotent} \item $a: T(x)\to x $ is a $\mathcal{T}$-algebra structure if, and only if, $a\cdot \eta _ x = \id _x$;\label{novoestruturadealgebrapara2categoriadealgebrasestritas} \item $a: T(x)\to x $ is a $\mathcal{T}$-algebra structure if, and only if, $a $ is the inverse of $\eta _ x $;\label{2novoestruturadealgebrapara2categoriadealgebrasestritas} \item the forgetful $2$-functor $ \mathcal{T}\textrm{-}\Alg _{\mathsf{s}}\to \mathbb{B} $ between the $2$-category of strict $\mathcal{T}$-algebras and strict $\mathcal{T}$-morphisms (with modifications as $2$-cells) and the $2$-category $\mathbb{B}$ is fully faithful (that is to say, locally an isomorphism).\label{fullyfaithfulstrictalgebras} \end{enumerate} \end{lem} \begin{proof} Since $\mu\cdot (\eta T) = \mu\cdot (T \eta ) = \id _ T $, we have the following chain of equivalences: $\mu $ is a monomorphism $\Leftrightarrow $ $ \mu $ is invertible 
$\Leftrightarrow $ $\eta T$ or $T\eta $ is invertible $\Leftrightarrow $ $\eta T$ or $T\eta $ is an epimorphism. This proves the equivalence of the first three statements. By the definition of monomorphism, \ref{trivialmonomorphismidempotent} implies \ref{idempotentequalunitsidempotent}. Conversely, assuming that $T\eta =\eta T $, we have that $ T^2 \eta = T \eta T $ and, thus, we get that \directlua{pu()} \directlua{pu()} $$ \left(T\eta\right)\cdot \mu \quad = \quad \diag{equationforidempotentmonadinterchangelaw} \quad = \quad \diag{equationforidempotentmonadinterchangelawtwo}\quad = \quad \id _{T^2}. $$ Therefore $T \eta $ is the inverse of $\mu $ and, hence, $\mu $ is a monomorphism. Assuming one of the first four equivalent statements (and hence all of them), we have that, given a morphism $a: T(x)\to x $ such that $a\cdot \eta _ x = \id _x $, the equation \begin{equation}\label{rightinverseinversealgebrastructure} \eta _ {x}\cdot a = T(a) \cdot \eta _ {T(x)} = T(a\cdot \eta _x) = \id _ {T(x)} \end{equation} holds. Thus, since $ \eta_ {T(x)}\cdot \eta_ x = T(\eta _x)\cdot \eta_ x $ and $\mu = \left( T \eta \right)^{-1} $, we conclude that \begin{equation}\label{associativity trivially holds for algebras} a\cdot \mu _x = \left(\eta_ {T(x)}\cdot \eta_ x\right)^{-1} = \left(T\left(\eta_ {x}\right)\cdot \eta_ x\right)^{-1} = a\cdot T(a). \end{equation} This proves that \ref{novoestruturadealgebrapara2categoriadealgebrasestritas} holds. Conversely, \ref{novoestruturadealgebrapara2categoriadealgebrasestritas} trivially implies \ref{trivialmonomorphismidempotent} (and, hence, all of the first four equivalent statements), since, for each $x\in\mathbb{B} $, $\mu _x $ is a (free) $\mathcal{T}$-algebra structure for $x$. Moreover, by Equations \eqref{rightinverseinversealgebrastructure} and \eqref{associativity trivially holds for algebras}, we conclude that the first four statements are also equivalent to \ref{2novoestruturadealgebrapara2categoriadealgebrasestritas}.
Finally, recall that, for every $2$-monad $\mathcal{T}$ on a $2$-category $\mathbb{B}$, the forgetful functor $ \mathcal{T}\textrm{-}\Alg _{\mathsf{s}}\to \mathbb{B} $ between the $2$-category of strict $\mathcal{T}$-algebras and strict $\mathcal{T}$-morphisms (with modifications as $2$-cells) and the $2$-category $\mathbb{B}$ is faithful. Assuming \ref{2novoestruturadealgebrapara2categoriadealgebrasestritas}, in order to verify that the forgetful functor is full, it is enough to see that, for any morphism $f: x\to y $ of $\mathbb{B}$, if $a: T(x)\to x $, $b: T(y)\to y $ are $\mathcal{T}$-algebra structures, we have that the pasting \directlua{pu()} $$\diag{pastinginordertogetmorphismofalgebrasvertical} $$ is the identity $2$-cell and, hence, the morphism $f$ induces a morphism of algebras between $(x,a)$ and $(y,b)$. Assuming \ref{fullyfaithfulstrictalgebras}, we get that, for any object $x\in\mathbb{B}$, $\eta_ {T(x)} $ induces a morphism between the free $\mathcal{T}$-algebras $\left( T(x),\mu _x \right)$ and $\left( T^2(x), \mu _{T(x)} \right) $. That is to say, $$\eta _ {T(x) }\cdot \mu _x = \mu _{T(x)}\cdot T(\eta _{T(x)} ) $$ and, since the right side of the equation above is equal to the identity on $T^2(x) $, we conclude that $\mu _x $ is a split monomorphism. This proves that \ref{trivialmonomorphismidempotent} holds. \end{proof} A $2$-adjunction induces an idempotent $2$-monad if, and only if, the induced $2$-comonad is also idempotent. More generally: \begin{prop}\label{inducedidempotent} The following statements are equivalent. 
\begin{enumerate}[i)] \item $\mathcal{T}$ is idempotent;\label{1oftheidempotentadjunction} \item $F\eta $ (or $\eta G $) is an epimorphism; \label{equvalentdualidempotent2} \item $\varepsilon F $ (or $G \varepsilon $) is a monomorphism;\label{equvalentdualidempotent} \item The induced $2$-comonad is idempotent.\label{codualityequivalent4statement} \end{enumerate} \end{prop} \begin{proof} Since, by the triangle identities, we have that \begin{center} $\left( \varepsilon F\right) \cdot \left(F\eta\right) = \id _ F $ and $\left( G\varepsilon \right) \cdot \left(\eta G\right) = \id _ G $, \end{center} we get that \ref{equvalentdualidempotent2} implies that $\varepsilon F $ or $G\varepsilon $ is invertible and, therefore, $G\varepsilon F = \mu $ is invertible. Analogously, \ref{equvalentdualidempotent} implies \ref{1oftheidempotentadjunction}. Moreover, if we assume that $\mathcal{T}$ is idempotent, by Lemma \ref{characterizationidempotentmonad}, we have that $$ GF \eta = \eta GF $$ which, together with one of the triangle identities, implies that \directlua{pu()} \directlua{pu()} \begin{equation*} \left(F \eta\right)\cdot \left( \varepsilon F\right)\quad = \diag{equationforidempotentmonadinterchangelawtwoadjunction} =\quad \diag{equationforidempotentmonadinterchangelawtwoadjunctiontwo} =\quad \id_{FGF}. \end{equation*} This proves that \ref{1oftheidempotentadjunction} implies \ref{equvalentdualidempotent2} and \ref{equvalentdualidempotent}. Therefore we proved that \ref{1oftheidempotentadjunction}, \ref{equvalentdualidempotent2} and \ref{equvalentdualidempotent} are equivalent statements. Finally, since condition \ref{equvalentdualidempotent} is codual and equivalent to condition \ref{equvalentdualidempotent2}, we conclude that \ref{1oftheidempotentadjunction} is equivalent to its codual -- that is to say, to condition \ref{codualityequivalent4statement}. 
\end{proof} Motivated by the result above, we say that a $2$-adjunction is \textit{idempotent} if it induces an idempotent $2$-(co)monad. \begin{rem}\label{thinidempotency} If the $2$-adjunction $F\dashv G: \mathbb{A}\to\mathbb{B} $ is such that the underlying category of $\mathbb{A} $ (or $\mathbb{B} $) is \textit{thin}, then the induced $2$-monad is idempotent by Proposition \ref{inducedidempotent}. In particular, seeing categories as locally discrete $2$-categories and contravariant $2$-functors as covariant ones defined in the dual of the respective domains, any \textit{Galois connection} induces an idempotent ($2$-)(co)monad. \end{rem} If the $2$-adjunction $F\dashv G $ is idempotent and $G$ is $2$-monadic, $G$ is called a \textit{full reflective $2$-functor}. This terminology is justified by the well-known characterization below. \begin{prop}[Full reflective $2$-functor]\label{fullreflective2functorcharacterization} The following statements are equivalent. \begin{enumerate}[i)] \renewcommand\labelitemi{--} \item $G$ is a full reflective $2$-functor;\label{1melhoradoidempotente} \item $F\dashv G $ is idempotent and $G$ is $2$-premonadic;\label{2melhoradoidempotente} \item $G$ is fully faithful;\label{3melhoradoidempotente} \item $\varepsilon $ is invertible.\label{4melhoradoidempotente} \end{enumerate} \end{prop} \begin{proof} Recall that a $2$-functor is $2$-premonadic if the (Eilenberg-Moore) comparison $2$-functor is fully faithful (that is to say, locally an isomorphism). We have that \ref{1melhoradoidempotente} trivially implies \ref{2melhoradoidempotente}. Moreover, since the forgetful $2$-functor $ \mathcal{T}\textrm{-}\Alg _{\mathsf{s}} \to \mathbb{B} $ is fully faithful whenever $ \mathcal{T}$ is idempotent, we have that \ref{2melhoradoidempotente} implies \ref{3melhoradoidempotente}. 
Since, for every pair of objects $w,x\in\mathbb{A} $, the diagram \directlua{pu()} $$ \diag{equivalencesplitepifullyfaithful} $$ commutes, \ref{3melhoradoidempotente} and \ref{4melhoradoidempotente} are equivalent. Assuming \ref{4melhoradoidempotente}, we have in particular that $\varepsilon $ is a split epimorphism and that $G$ reflects isomorphisms; hence, $G$ is $2$-monadic (see the Proposition in \cite[p.~236]{MR2056584}). Furthermore, clearly, we also get that $G\varepsilon $ is a (split) monomorphism, which implies that $F\dashv G $ is idempotent by Proposition \ref{inducedidempotent}. Therefore \ref{4melhoradoidempotente} implies \ref{1melhoradoidempotente}. \end{proof} The dual notion of full reflective $2$-functor in $2$-$\Cat $ is called \textit{full co-reflective $2$-functor}. As a consequence of Proposition \ref{fullreflective2functorcharacterization}, we have: \begin{coro}\label{coreflectiveplusreflectiveimpliesequivalence} If $F\dashv G $ is such that $F$ is full co-reflective and $G$ is full reflective, then $F\dashv G$ is a $2$-adjoint equivalence. \end{coro} \begin{rem}[Idempotent $2$-adjunction vs. full reflective $2$-functor] It should be noted that there are non-$2$-monadic idempotent $2$-adjunctions. Remark \ref{thinidempotency} gives a way of constructing easy examples. For instance, given a $2$-category $\mathbb{A} $, the unique $2$-functor $\mathbb{A} \to \mathsf{1} $ has a left $2$-adjoint if and only if $\mathbb{A} $ has an initial object. Assuming that $\mathbb{A} $ has an initial object and $\mathbb{A} $ is not ($2$-)equivalent to $\mathsf{1}$, the $2$-functor $\mathbb{A} \to \mathsf{1} $ is not a full reflective $2$-functor, although the $2$-adjunction is idempotent. More generally, by Corollary \ref{coreflectiveplusreflectiveimpliesequivalence}, any full reflective $2$-functor which is not an equivalence gives an example of an idempotent $2$-adjunction such that the left $2$-adjoint is not $2$-comonadic.
Dually, any non-equivalence full co-reflective $2$-functor gives an idempotent $2$-adjunction such that the right $2$-adjoint is not a full reflective $2$-functor. \end{rem} \subsection{Kleisli vs. idempotent adjunctions} Recall that a $2$-adjunction \textit{$F\dashv G $ is Kleisli if the Kleisli comparison $2$-functor} is an equivalence. This fact holds if, and only if, $F$ is essentially surjective on objects. Moreover, a Kleisli $2$-adjunction is always premonadic, since the Kleisli $2$-category is equivalent to the full sub-$2$-category of free algebras of the $2$-category $\mathcal{T}\textrm{-}\Alg _{\mathsf{s}} $ of the strict algebras of the induced $2$-monad. It should be noted that, by Proposition \ref{fullreflective2functorcharacterization}, we have that, whenever a $2$-adjunction $F\dashv G $ is idempotent, \textit{$G$ is $2$-premonadic if and only if $G$ is $2$-monadic}. Therefore by Lemma \ref{observationKleisliequivalenttoalgebras} below, this means that, whenever $\mathcal{T}$ is idempotent, the Kleisli $2$-category is ($2$-)equivalent to the $2$-category $\mathcal{T}\textrm{-}\Alg _{\mathsf{s}}$. \begin{lem}\label{observationKleisliequivalenttoalgebras} The following statements are equivalent. \begin{enumerate}[i)] \item The Kleisli $2$-category w.r.t. $\mathcal{T}$ is $2$-equivalent to the $2$-category of (strict) $\mathcal{T} $-algebras. \item If $F'\dashv G' $ induces $\mathcal{T}$, then $G' $ is $2$-premonadic if, and only if, $G' $ is $2$-monadic. \end{enumerate} \end{lem} By Proposition \ref{fullreflective2functorcharacterization}, we conclude the following well-known result: \begin{coro}\label{KelislivsMonadicIdempotentcase} An idempotent $2$-adjunction $F\dashv G $ is $2$-monadic if, and only if, it is Kleisli. \end{coro} \subsection{Lax idempotency} For this part, we assume the definition of strict algebras and lax $\mathcal{T}$-morphisms between them, which can be found, for instance, in \cite[Definition~2.2]{arXiv:1711.02051}. 
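A guiding example to keep in mind throughout this subsection is the downset $2$-monad $D$ on the $2$-category of posets (with hom-categories given by the pointwise order), which is lax idempotent but not idempotent. The sketch below (our own illustrative Python, not notation from the paper; the finite poset and all helper names are our choices) checks, over the two-element antichain, the pointwise inclusion $D(\eta_x)\le\eta_{D(x)}$ that decategorifies the modification $T\eta\Rightarrow\eta T$, and exhibits a downset at which the inclusion is strict.

```python
from itertools import chain, combinations

# Illustrative sketch (ours, not from the paper): the downset monad D on
# posets is a standard example of a lax idempotent 2-monad that is not
# idempotent.  We work over the two-element antichain X = {a, b}, where
# every subset is a downset; D(X) is then the powerset of X ordered by
# inclusion, and downsets of D(X) are inclusion-downclosed families.
X = ("a", "b")
downsets_X = [frozenset(s) for s in
              chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def eta(a):
    """Unit of D at X: an element goes to its principal downset."""
    return frozenset({a})

def D_eta(S):
    """D(eta_X) applied to a downset S: the downward closure, inside the
    poset D(X), of the family {eta(a) : a in S}."""
    return frozenset(T for T in downsets_X if any(T <= eta(a) for a in S))

def eta_DX(S):
    """Unit of D at D(X): a downset S goes to its principal downset."""
    return frozenset(T for T in downsets_X if T <= S)

# Lax idempotency, decategorified: the modification T eta => eta T becomes
# the pointwise inclusion D(eta_X)(S) <= eta_{D(X)}(S).
assert all(D_eta(S) <= eta_DX(S) for S in downsets_X)

# The inclusion is strict at S = {a, b}: S lies below itself but below no
# principal downset, so D is lax idempotent without being idempotent.
S = frozenset(X)
assert D_eta(S) < eta_DX(S)
```

The strict inclusion at $S=\{a,b\}$ is precisely the failure of idempotency: at this decategorified level, $\mu$ has a right adjoint but no inverse.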
Given a $2$-monad $ \mathcal{T} $, we denote by $ \mathcal{T}\textrm{-}\Alg _{\ell }$ the $2$-category of strict algebras, lax $\mathcal{T}$-morphisms and modifications. In this case, $\mathcal{T}\textrm{-}\Alg _{\mathsf{s}}$ is the locally full sub-$2$-category of $ \mathcal{T}\textrm{-}\Alg _{\ell }$ consisting of strict $\mathcal{T}$-algebras and strict $\mathcal{T}$-morphisms between them. Theorem \ref{characterizationlaxidempotentmonad} is a well-known characterization of lax idempotent $2$-monads~\cite{MR1359690}. We refer to \cite{MR1432190, MR1476422} for the proofs. \begin{theo}[Lax idempotent $2$-monad]\label{characterizationlaxidempotentmonad} The following statements are equivalent. \begin{enumerate}[i)] \item $\mathcal{T}$ is lax idempotent; \item $\id _T\ast \eta \dashv \mu $ is a rali adjunction; \item $a: T(x)\to x $ is a $\mathcal{T}$-algebra structure if, and only if, there is a \textit{rari} adjunction $ a\dashv \eta_ x $; \item $a: T(x)\to x $ is a $\mathcal{T}$-pseudoalgebra structure if, and only if, there is an adjunction $ a\dashv \eta_ x $; \item the forgetful $2$-functor $ \mathcal{T}\textrm{-}\Alg _{\ell }\to \mathbb{B} $ between the $2$-category of strict $\mathcal{T}$-algebras and lax $\mathcal{T}$-morphisms and the $2$-category $\mathbb{B}$ is fully faithful. \end{enumerate} \end{theo} Similarly to the idempotent case, a $2$-adjunction induces a lax idempotent $2$-monad if and only if it induces a lax idempotent $2$-comonad. Furthermore, we give below a lax idempotent analogue of Proposition \ref{inducedidempotent}. \begin{theo}[Lax idempotent $2$-adjunction]\label{characterizationlaxidempotent2adjunction} The following statements are equivalent. 
\begin{enumerate}[i)] \item $\mathcal{T}$ is lax idempotent;\label{laxidempotentadjunctioncharacterization1} \item $G\varepsilon \dashv \eta G $ is a lali adjunction;\label{laxidempotentadjunctioncharacterization2} \item $F\eta \dashv \varepsilon F $ is a rali adjunction;\label{laxidempotentadjunctioncharacterization3} \item The induced $2$-comonad is lax idempotent.\label{laxidempotentadjunctioncharacterization4} \end{enumerate} \end{theo} \begin{proof} By Theorem \ref{characterizationlaxidempotentmonad}, it is clear that \ref{laxidempotentadjunctioncharacterization2} or \ref{laxidempotentadjunctioncharacterization3} implies \ref{laxidempotentadjunctioncharacterization1}. Conversely, assuming \ref{laxidempotentadjunctioncharacterization1}, we have by Theorem \ref{characterizationlaxidempotentmonad} that $ \id _{GF}\ast \eta \dashv \id _G\ast\varepsilon \ast \id _ F $. By \textit{doctrinal adjunction} (\textit{e.g.} \cite{MR0360749}), we conclude that $F\left( \eta _ x\right)\dashv \varepsilon _{F(x)} $ for every object $x$ of $\mathbb{B} $. Finally, again by doctrinal adjunction, we conclude that $ \id _F\ast \eta\dashv \varepsilon \ast \id _ {F} $. This proves that \ref{laxidempotentadjunctioncharacterization1} implies \ref{laxidempotentadjunctioncharacterization3}. Analogously, by doctrinal adjunction, we get that \ref{laxidempotentadjunctioncharacterization1} implies \ref{laxidempotentadjunctioncharacterization2}. Hence we have proved that the first three statements are equivalent. Since condition \ref{laxidempotentadjunctioncharacterization2} is codual and equivalent to \ref{laxidempotentadjunctioncharacterization3}, we get that \ref{laxidempotentadjunctioncharacterization1} is equivalent to its codual -- which means \ref{laxidempotentadjunctioncharacterization4}. \end{proof} We follow Kelly's definition of regular epimorphism (also known as strict epimorphism) as outlined in \cite{zbMATH03271557}.
We recall that, whenever a ($2$-)category $\mathbb{A}$ has kernel pairs, a morphism is a regular epimorphism if and only if it is effective (that is, it is the coequalizer of its kernel pair). \begin{defi}[pre-Kock-Z\"{o}berlein $2$-functor] If the induced $2$-monad $\mathcal{T}$ is lax idempotent, the $2$-adjunction $F\dashv G $ is \textit{lax idempotent}. In this case, if, furthermore, $G$ is $2$-premonadic, then $G$ is called a \textit{pre-Kock-Z\"{o}berlein $2$-functor}. Finally, if it is also $2$-monadic, $G$ is a \textit{Kock-Z\"{o}berlein $2$-functor}. \end{defi} \begin{prop}\label{preKockZoberleintwofunctor} Assume that $ F\dashv G : \mathbb{A}\to\mathbb{B} $ is lax idempotent. The following statements are equivalent. \begin{enumerate}[i)] \item $G$ is a pre-Kock-Z\"{o}berlein $2$-functor; \item For each object $x\in \mathbb{A} $, $\varepsilon _x $ is a regular epimorphism; \item For each object $x\in \mathbb{A} $, \directlua{pu()} \begin{equation} \diag{coequalizerofthecounitforBecktheorem} \end{equation} is a coequalizer. \end{enumerate} \end{prop} \begin{proof} The result follows directly from the well-known characterization of ($2$-)premonadic ($2$-)functors due to Beck (see, for instance, \cite[p.~226]{MR2056584}). \end{proof} \begin{theo} Assume that $ F\dashv G : \mathbb{A}\to\mathbb{B} $ is lax idempotent. The following statements are equivalent. \begin{enumerate}[i)] \item $G$ is a Kock-Z\"{o}berlein $2$-functor; \item $G$ creates absolute coequalizers; \item $G$ is a pre-Kock-Z\"{o}berlein $2$-functor, and, whenever $\eta _ y $ is a \textit{rari}, there is $x\in \mathbb{A} $ such that $G(x)\cong y $. \end{enumerate} \end{theo} \begin{proof} The result follows from Proposition \ref{preKockZoberleintwofunctor} and the characterization of algebra structures for lax idempotent $2$-monads recalled in Theorem \ref{characterizationlaxidempotentmonad}.
\end{proof} \begin{rem}[Algebras and free algebras] Corollary \ref{KelislivsMonadicIdempotentcase} says that, whenever $F\dashv G$ induces an idempotent $2$-monad, the $2$-adjunction $F\dashv G$ is Kleisli if, and only if, it is $2$-monadic. This is not the case when $\mathcal{T}$ is only lax idempotent. The reference \cite{MR3673245} provides several counterexamples in this direction. Moreover, in our context, in Section \ref{change-of-base functor}, Theorem \ref{laxidempotentcoherencelaxcommacomma} also provides several examples: more precisely, given any $2$-category $\mathbb{A} $ and object $z\in\mathbb{A} $, the $2$-adjunction between the \textit{lax comma $2$-category} $\mathbb{A} //z $ (see Definition \ref{definitionoflaxcommacategory}) and the corresponding comma $2$-category $\mathbb{A}{/z} $ is usually a Kleisli $2$-adjunction which is not $2$-monadic. \end{rem} Finally, it should be noted that: \begin{lem}\label{equivalenciadefullreflectivelaxidempotent} If $\mathbb{A} $ and $\mathbb{B} $ are locally discrete, we have that $F\dashv G $ is lax idempotent ($G$ is pre-Kock-Z\"{o}berlein) if and only if $F\dashv G $ is idempotent ($G$ is full reflective). \end{lem} \begin{proof} It is enough to note that a $2$-monad defined on a locally discrete $2$-category is lax idempotent if and only if it is idempotent. The rest follows from Proposition \ref{fullreflective2functorcharacterization}; more precisely, it follows from the fact that $2$-premonadicity and $2$-monadicity are equivalent properties for idempotent $2$-adjunctions. \end{proof} \section{Composition of $2$-adjunctions}\label{compositionof2adjunctionssection} Throughout this section, \directlua{pu()} \begin{equation}\label{compositionof2adjunctions} \diag{compositionof2adjunctionsfirstdiagram} \end{equation} are given $2$-adjunctions, and $\mathcal{T} = (T, \mu , \eta ) = (GF, G\varepsilon F , \eta ) $ is the $2$-monad induced by the $2$-adjunction $F\dashv G $.
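A decategorified shadow of this setting may help to fix ideas. Posets, viewed as thin, locally discrete $2$-categories, always yield idempotent (co)monads from Galois connections (Remark \ref{thinidempotency}), and this persists under composition of Galois connections, in contrast with the topological composite considered below, whose induced monad is the ultrafilter monad. A minimal Python sketch (ours; the particular posets, maps and names are our own choices, not part of the paper):

```python
import math

# Decategorified sketch (ours, not from the paper): composing two Galois
# connections.  Since posets are thin, locally discrete 2-categories, the
# composite (co)monad is automatically idempotent, unlike the
# Top/CmpHaus composite studied in this section.

def G(x):
    """Right adjoint of the inclusion Z -> R: the floor map."""
    return math.floor(x)

def J(n):
    """Right adjoint of the inclusion 2Z -> Z: greatest even integer <= n."""
    return 2 * (n // 2)

def JG(x):
    """Composite right adjoint R -> 2Z."""
    return J(G(x))

reals = [y / 4 for y in range(-20, 21)]

# Hom-set condition for the composite Galois connection: for even m,
#   m <= x  (in R)   iff   m <= JG(x)  (in 2Z).
assert all((m <= x) == (m <= JG(x))
           for m in range(-6, 7, 2) for x in reals)

# The induced comonad t = (inclusion) . JG on R is idempotent: t(t(x)) = t(x).
assert all(JG(JG(x)) == JG(x) for x in reals)
```

Here $G$ and $J$ are the right adjoints of the two inclusions, and the composite operator on $\mathbb{R}$ is an (idempotent) interior operator, as the thinness of the underlying categories predicts.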
Recall that we have the composition of $2$-adjunctions above given by \directlua{pu()} \begin{equation} \diag{compositionof2adjunctionsseconddiagram} \end{equation} where $\mathcal{R} = ( R, v, \alpha ) $ denotes the $2$-monad induced by $FH\dashv JG $. \subsection{Idempotent $2$-adjunctions} If $J$ and $G$ are full reflective $2$-functors, $JG$ is a full reflective $2$-functor and, in particular, $FH\dashv JG $ induces an idempotent $2$-monad. However, if $F\dashv G $ and $H\dashv J $ are only idempotent $2$-adjunctions, we cannot conclude that the composite is idempotent. For instance, consider the $2$-adjunctions \directlua{pu()} \begin{equation}\label{Exampleofcompositionoffullreflectivenotfullreflective} \diag{compositionoftwoadjunctionscompactHausdorfftopset} \end{equation} in which $\Top $ is the locally discrete $2$-category of topological spaces and continuous functions, $\CmpHaus $ is the full sub-$2$-category of compact Hausdorff spaces, and the right adjoints are the usual forgetful functors. Both $2$-adjunctions are idempotent, but the composition induces the ultrafilter ($2$-)monad which is not idempotent. Proposition \ref{1dimensionalsimple} characterizes when the composition of the $2$-adjunctions is idempotent. It corresponds to the characterization of the simple (reflective) functors in the $1$-dimensional case. \begin{prop}\label{1dimensionalsimple} Assume that $F\dashv G$ is idempotent. The following statements are equivalent. \begin{enumerate}[i)] \item $FH\dashv JG $ is idempotent;\label{idmepotentcompositioncharacterization1} \item $JGF\delta G $ (or $F \delta GFH $) is a monomorphism;\label{idmepotentcompositioncharacterization2} \item $FH \alpha $ (or $\alpha JG $) is an epimorphism.\label{idmepotentcompositioncharacterization3} \end{enumerate} \end{prop} \begin{proof} Since $F\dashv G$ is idempotent, $G\varepsilon $, $\varepsilon F$, $F\eta $ and $\eta G $ are invertible. 
By Proposition \ref{inducedidempotent}, the $2$-adjunction $FH\dashv JG $ is idempotent if, and only if, \begin{center} $JG\left(\varepsilon\cdot\left(F\delta{G}\right)\right) = \left( JG \varepsilon\right)\cdot \left(JT\delta{G}\right) $, or $\left(\varepsilon\cdot\left(F\delta{G}\right)\right) FH =\left(\varepsilon FH\right) \cdot \left(F\delta TH\right) $, \end{center} is a monomorphism. Therefore, since $JG \varepsilon $ and $\varepsilon FH $ are invertible, we get that $FH\dashv JG $ is idempotent if, and only if, $JT\delta G $, or $ F\delta TH$, is a monomorphism. This proves that \ref{idmepotentcompositioncharacterization1} is equivalent to \ref{idmepotentcompositioncharacterization2}. Finally, \ref{idmepotentcompositioncharacterization1} is equivalent to \ref{idmepotentcompositioncharacterization3} by Proposition \ref{inducedidempotent}. \end{proof} \begin{coro}\label{criterion1dimensionalsimple} If $J$ is full reflective and $F\dashv G $ is idempotent, then the composition is idempotent. \end{coro} \begin{proof} In this case, since $\delta $ is invertible, we have that $JGF\delta G $ is an isomorphism and, hence, a monomorphism. \end{proof} \begin{defi}[Admissible $2$-functor]\label{Admissibletwoadjunction} The $2$-adjunction $F\dashv G $ is \textit{admissible} w.r.t. $H\dashv J$ if $JG$ is a full reflective $2$-functor. \end{defi} If $G$ is full reflective, and the composition $JG $ is full reflective, we generally cannot conclude that $J$ is full reflective. More precisely, in this case, we have: \begin{prop}\label{1dimensionalsemileftexact} Assuming that $G$ is full reflective, the horizontal composition $F\delta G $ is invertible if and only if the $2$-adjunction $F\dashv G $ is admissible w.r.t. $H\dashv J $. 
\end{prop} \begin{proof} Since $\varepsilon $ is invertible (by Proposition \ref{fullreflective2functorcharacterization}), we get that $\left(F\delta{G}\right)$ is invertible if and only if the counit $\varepsilon\cdot\left(F\delta{G}\right)$ of $FH\dashv JG $ is invertible. By Proposition \ref{fullreflective2functorcharacterization}, this fact completes the proof. \end{proof} \subsection{Lax idempotent $2$-adjunctions} We now turn our attention to analogous results for the lax idempotent case. The main point is to investigate when the composition of the $2$-adjunctions is lax idempotent and premonadic. \begin{defi}[Simplicity]\label{SIMPLEtwoadjunction} The $2$-adjunction $F\dashv G $ is \textit{simple} w.r.t. $H\dashv J$ if the composition $FH\dashv JG $ is lax idempotent. \end{defi} As a consequence of the characterization of lax idempotent $2$-adjunctions, we get: \begin{theo}[Simplicity]\label{simplicityofFGwrtJH} Assume that $G$ is locally fully faithful. The $2$-adjunction $F\dashv G $ is simple w.r.t. $H\dashv J$ if and only if $$\left(\id _{TH}\ast \alpha\right) \dashv \left(\mu\ast \id _{H} \right)\cdot \left( \id _ T\ast \delta \ast \id _ {TH}\right) $$ is a rali adjunction. \end{theo} \begin{proof} By Theorem \ref{characterizationlaxidempotent2adjunction}, we conclude that the $2$-adjunction $FH\dashv JG $ is lax idempotent if and only if $$\left( FH \alpha\right) \dashv \left(\varepsilon FH \right)\cdot \left( F \delta TH \right) $$ is a rali adjunction. Since $G$ is locally fully faithful, we have the rali adjunction above if, and only if, there is a rali adjunction $ TH\alpha \dashv \left(\mu H \right)\cdot \left( T \delta TH\right) $. \end{proof} The characterization of Theorem \ref{simplicityofFGwrtJH} turns out to be difficult to apply in most examples, since it involves several units and counits of the given $2$-adjunctions. Therefore, it seems useful to have suitable sufficient conditions for simplicity.
\begin{theo}\label{criterionsufficientforsimplicity} \begin{enumerate}[a)] \item Assume that $JGF\delta G $ is invertible: $FH\dashv JG $ is lax idempotent if and only if there is a lali adjunction $JG\varepsilon\dashv J\eta G $. \item Assume that $ F \delta GFH $ is invertible: $FH\dashv JG $ is lax idempotent if and only if there is a rali adjunction $F\eta H\dashv \varepsilon FH $. \end{enumerate} \end{theo} \begin{proof} We assume that $JGF\delta G $ is invertible. The other case is entirely analogous and, in fact, dual ($3$-dimensional codual). By hypothesis, there is a $2$-natural transformation $\vartheta : JGFG\Longrightarrow JGFHJG $ which is the inverse of $JGF\delta G $. Therefore, since \directlua{pu()} \begin{equation} \left( JGF\delta G\right)\cdot \left( \alpha JG\right)\quad =\quad \diag{twocellidentityfortheproofofthecriterionforsimplicity} \quad = \quad J\eta G, \end{equation} we conclude that \begin{equation} \vartheta\cdot \left( J\eta G\right) = \vartheta\cdot \left( JGF\delta G\right)\cdot \left( \alpha JG\right) = \alpha JG. \end{equation} Therefore we have the following situation \directlua{pu()} \begin{equation} \diag{compositionofadjunctionsepsilondelta} \end{equation} in which $\vartheta ^{-1} = JGF\delta{G} $. This is the hypothesis of Corollary \ref{precisecancellationisomorphism} and, thus, there is a lali adjunction $$JG\left(\varepsilon\cdot\left({F}\delta{G}\right)\right)\dashv \alpha_{JG} $$ if, and only if, there is a lali adjunction $JG\varepsilon\dashv J\eta{G} $. By Theorem \ref{characterizationlaxidempotent2adjunction}, this completes the proof. \end{proof} \begin{coro}\label{corollarytrivialforlaxidempotent} Assume that $F\dashv G $ is lax idempotent. \begin{enumerate}[a)] \item If $JGF\delta G $ is invertible, then $FH\dashv JG $ is lax idempotent. \item If $ F \delta GFH $ is invertible, then $FH\dashv JG $ is lax idempotent. 
\end{enumerate} \end{coro} \begin{proof} In fact, if $F\dashv G $ is lax idempotent, we have in particular that there are a rali adjunction $F\eta H\dashv \varepsilon FH $ and a lali adjunction $JG\varepsilon\dashv J\eta G $. Therefore the result follows from Theorem \ref{criterionsufficientforsimplicity}. \end{proof} It should be noted that the $2$-adjunctions in \eqref{Exampleofcompositionoffullreflectivenotfullreflective} show in particular that $FH\dashv JG $ might not be lax idempotent, even if $F\dashv G$ and $H\dashv J $ are. However, analogously to the idempotent case (see Corollary \ref{criterion1dimensionalsimple}), we have a nicer situation whenever $J $ is full reflective. \begin{coro} If $J$ is full reflective, then $F\dashv G $ is lax idempotent if, and only if, $FH\dashv JG $ is lax idempotent. \end{coro} \begin{proof} Assuming that $J$ is full reflective, we get that $\delta $ is invertible and, thus, $JGF\delta G $ is invertible. If $F\dashv G $ is lax idempotent, we get that the composite is lax idempotent by Corollary \ref{corollarytrivialforlaxidempotent}. Conversely, if $FH\dashv JG $ is lax idempotent, by Theorem \ref{criterionsufficientforsimplicity}, there is a lali adjunction $$JG\varepsilon\dashv J\eta G .$$ Since $J$ is locally an isomorphism, this implies that there is a lali adjunction $G\varepsilon\dashv \eta G $ which proves that $F\dashv G $ is lax idempotent by Theorem \ref{characterizationlaxidempotent2adjunction}. \end{proof} \begin{defi}[$2$-admissibility]\label{twoadmissibledefinition} The $2$-adjunction $F\dashv G $ is \textit{$2$-admissible} w.r.t. $H\dashv J$ if the composition $FH\dashv JG $ is lax idempotent and premonadic (that is to say, $JG$ is pre-Kock-Z\"{o}berlein). \end{defi} As a consequence of Proposition \ref{preKockZoberleintwofunctor} and Theorem \ref{simplicityofFGwrtJH}, we have: \begin{theo}[$2$-admissibility]\label{admissibilitywrtJH} Assume that $G$ is pre-Kock-Z\"{o}berlein. 
The $2$-adjunction $F\dashv G $ is \textit{$2$-admissible} w.r.t. $H\dashv J$ if, and only if, the two conditions below hold. \begin{itemize} \renewcommand\labelitemi{--} \item $TH \alpha \dashv \left(\mu\ast \id _{H} \right)\cdot \left( \id _ T\ast \delta \ast \id _ {TH}\right)$ is a lari adjunction (or, equivalently, $F\dashv G $ is simple w.r.t. $H\dashv J $); \item For each object $z\in\mathbb{C} $, $\left(\varepsilon\cdot \left( F\delta G\right)\right)_z $ is a regular epimorphism. \end{itemize} \end{theo} Recall that the composition of a regular epimorphism with a split epimorphism is always a regular epimorphism (\textit{cf.}~\cite{zbMATH03271557}). Therefore, we also have that: \begin{coro} If $F\dashv G $ is simple w.r.t. $H\dashv J $ and $F\delta G $ is a split epimorphism, then $F\dashv G $ is $2$-admissible w.r.t. $H\dashv J $. \end{coro} \begin{proof} It follows directly from Theorem \ref{admissibilitywrtJH} and the observation above. \end{proof} Since the composition of a regular epimorphism with an isomorphism is always a regular epimorphism, we get: \begin{coro}\label{themaincase} If $F\delta G $ is an isomorphism and $G$ is pre-Kock-Z\"{o}berlein, then $F\dashv G $ is $2$-admissible w.r.t. $H\dashv J $. In particular, if $J$ is full reflective and $G$ is pre-Kock-Z\"{o}berlein, we conclude that $JG$ is pre-Kock-Z\"{o}berlein. \end{coro} \begin{proof} Since $F\delta G $ is invertible, we get that $JG F\delta G $ is invertible. Therefore, by Corollary \ref{corollarytrivialforlaxidempotent}, we obtain simplicity. Moreover, $\varepsilon \cdot\left( F\delta G\right) $ is a regular epimorphism, since $\varepsilon $ is a regular epimorphism and $F\delta G $ is invertible.
\end{proof} \section{Lax comma $2$-categories and change-of-base $2$-functors}\label{change-of-base functor}\label{basicdefinitionsarticle} The notion of lax comma $2$-category is well known and has been considered in the literature in many contexts (see, for instance, \cite[I,5]{MR0371990}, \cite[\S~6]{MR0249483}, \cite[Exercise~5, p.~115]{MR1712872} or \cite[p.~305]{MR558494}). We recall the definition in an elementary manner below, following the perspective of our setting. The main aim is to introduce the respective notions of change-of-base $2$-functors. In a $2$-category $\mathbb{A} $ with products, given any object $y\in \mathbb{A} $, the endofunctor $$ \left( y\times - \right) : \mathbb{A} \to \mathbb{A} $$ has a unique comonad structure. In this context, the usual $2$-category of strict coalgebras $\left({y}\times{-}\right)\textrm{-}\CoAlg_{\mathsf{s}} $ is isomorphic to the comma $2$-category $\mathbb{A} / y$. Moreover, the $2$-category $\left( y\times - \right)\textrm{-} \CoAlg _{\ell }$ of strict $ \left( y\times - \right)$-coalgebras and lax morphisms is what we call the \textit{lax comma $2$-category} $\mathbb{A} // y $.\footnote{See, for instance, \cite[Def.~4.1]{2016arXiv160703087L}.} More generally, we explicitly define the lax comma $2$-categories below. \begin{defi}[Lax comma $2$-category]\label{definitionoflaxcommacategory} Given an object $y$ of a $2$-category $\mathbb{A} $, we denote by $\mathbb{A} // y $ the $2$-category defined by the following. \begin{itemize} \renewcommand\labelitemi{--} \item The objects are pairs $ (w, a) $ in which $w$ is an object of $\mathbb{A} $ and $$\xymatrix{ w\ar[r]|-{a} &y }$$ is a morphism of $\mathbb{A} $.
\item A morphism in $\mathbb{A} // y $ between objects $(w, a) $ and $ (x, b) $ is a pair \directlua{pu()} \directlua{pu()} $$\left( \diag{laxmorphismofcoalgebras1cell}, \diag{twocelloflaxcommamorphism} \right) $$ in which $f: w\to x $ is a morphism of $\mathbb{A} $ and $\gamma $ is a $2$-cell of $\mathbb{A} $. If $(f, \gamma ): (w, a)\to (x, b) $ and $ (g, \chi ): (x, b)\to (z, c) $ are morphisms of $\mathbb{A} // y $, the composition is defined by $(g\circ f, \gamma \cdot \left(\chi \ast \id _f\right) ) $, that is to say, the composition of the morphisms $g$ and $f$ with the pasting \directlua{pu()} $$\diag{compositionofmorphismslaxcommatwocategories} $$ of the $2$-cells $\chi $ and $\gamma $. Finally, with the definitions above, the identity on the object $(w,a)$ is of course the morphism $(\id _w, \id _a ) $. \item A $2$-cell between morphisms $(f, \gamma ) $ and $(f', \gamma ' ) $ is given by a $2$-cell $\zeta : f\Rightarrow f' $ such that the equation \directlua{pu()} \directlua{pu()} \begin{equation*} \diag{leftsideequationtwocellforlaxcommacategor}\quad =\quad \diag{rightsideequationtwocellforlaxcommacategory} \end{equation*} holds. \end{itemize} The $2$-category $\mathbb{A} // y$ is called the \textit{lax comma $2$-category} of $\mathbb{A} $ over $y $, while the $2$-category $\mathbb{A} ^\co //y $ is called the \textit{colax comma $2$-category} of $\mathbb{A} $ over $y$. \end{defi} The concept of (co)lax comma $2$-category, possibly under other names, has already appeared in the literature. See, for instance, \cite[Exercise~5, p.~115]{MR1712872} or \cite[p.~305]{MR558494}. As for our choice of the direction of the $2$-cells for the notion of lax comma $2$-categories, although we do not follow \cite[p.~305]{MR558494}, our choice is compatible with the usual definition of lax natural transformation. 
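\begin{rem}
For the reader's convenience, we record the routine verification, using only the associativity of vertical composition and the functoriality of whiskering in $\mathbb{A} $, that the composition defined above is associative. Given morphisms $(f,\gamma ): (w,a)\to (x,b) $, $(g,\chi ): (x,b)\to (z,c) $ and $(h,\theta ): (z,c)\to (v,d) $ of $\mathbb{A} // y $, we have that
\begin{align*}
\left( (h,\theta )\circ (g,\chi )\right)\circ (f,\gamma ) &= \left( (h\circ g)\circ f,\, \gamma\cdot\left(\left(\chi\cdot\left(\theta\ast\id _g\right)\right)\ast\id _f\right)\right) \\
&= \left( h\circ (g\circ f),\, \left(\gamma\cdot\left(\chi\ast\id _f\right)\right)\cdot\left(\theta\ast\id _{g\circ f}\right)\right) \\
&= (h,\theta )\circ\left( (g,\chi )\circ (f,\gamma )\right) ,
\end{align*}
while the unit laws follow from the equations $\id _a\cdot\left(\gamma\ast\id _{\id _w}\right) = \gamma = \gamma\cdot\left(\id _b\ast\id _f\right) $.
\end{rem}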
\begin{defi}[(Strict) comma $2$-category] Given an object $y$ of a $2$-category $\mathbb{A} $, we denote by $\mathbb{A} / y $ the \textit{comma $2$-category} over $y$, defined to be the locally full \textit{wide} sub-$2$-category of $\mathbb{A} // y $ in which a morphism from $\left( w, a\right) $ to $ \left( x, b\right) $ is a morphism $$(f, \chi ): \left(w, a\right) \to \left(x, b\right) $$ such that $\chi $ is the identity $2$-cell. \end{defi} \begin{rem}\label{Fcategoryforthefirsttime} We have an obvious inclusion $2$-functor $\mathbb{A} / y\to \mathbb{A} // y $. The morphisms in the image of this inclusion are called \textit{strict} (or \textit{tight}) morphisms of $ \mathbb{A} // y $. The $2$-category $ \mathbb{A} // y $ endowed with this inclusion forms an enhanced $2$-category, or, more precisely, an $\mathfrak{F}$-category as defined in \cite{MR2854177}. \end{rem} \subsection{Classical (strict) change-of-base functor} It is well known that, if $\mathbb{A} $ is a $2$-category with pullbacks, any morphism $c:y\to z $ of $\mathbb{A} $ induces a $2$-adjunction \directlua{pu()} \begin{equation}\label{equationtwoadjunctionchangeofthebasetwofunctorbasic} \diag{changeofthebasecommacategories} \end{equation} between the (strict) comma $2$-categories in which the right $2$-adjoint is called the \textit{change-of-base $2$-functor} induced by the morphism $c$ (see, for instance, \cite{MR1173011}). Recall that $c^\ast $ is defined by the pullback along $c$, and the left adjoint is defined by the composition with $c$, the so-called direct image $2$-functor $c! (w, a) = (w, ca)$. In the present section, we give the analogue for lax comma $2$-categories, that is to say, the \textit{change-of-base $2$-functors for the lax comma $2$-categories}, given in Definition \ref{def:change-of-base-lax}.
Firstly, we recall the classical case: \begin{prop}[Change-of-base $2$-functor]\label{teoremamudancadebase} Let $\mathbb{A} $ be a $2$-category with pullbacks. If $c: y\to z $ is any morphism, we get a $2$-adjunction \directlua{pu()} \begin{equation}\label{equationtwoadjunctionchangeofthebasetwofunctorpullbackbasic} \diag{changeofthebasecommatwocategories} \end{equation} in which $c^\ast $ is defined by the pullback along $c$. Explicitly, the assignment of objects of $c^\ast $ is given by $$(w, a) \mapsto (w\times_{(a,c)}y, c^\ast (a) : w\times_{(a,c)}y \to y )$$ while the action of $c^\ast $ on morphisms is given by \begin{equation}\label{definitionofactiononmorphismstwofunctorcoalgebraslaxcommaa} \left( w\xrightarrow{f} x,\, \id _a \right) : (w, a)\to (x, b) \,\,\mapsto\, \left( w\times_{(a,c)}y \xrightarrow{c^\ast\left(f, \id_a\right)} x\times_{(b,c)}y,\, \id _{c^\ast (a)} \right) : c^\ast (a)\to c^\ast( b) \end{equation} in which all the squares of \directlua{pu()} \begin{equation}\label{definitionpullbackonmorphismss} \diag{cstaronmorphismm} \end{equation} are pullbacks. Finally, the image of a $2$-cell $\zeta : f\Rightarrow f' : (w,a)\to (x,b) $ is defined by the unique $2$-cell $c^\ast \left(\zeta \right)$ such that the equations \directlua{pu()} \directlua{pu()} \directlua{pu()} \directlua{pu()} \begin{equation} \diag{leftsidefirstequationimagetwocellofthepbchangeofbase} = \diag{rightrsidefirstequationimagetwocellofthepbchangeofbase}\quad\mbox{ and }\quad \diag{leftsidesecondequationimagetwocellofthepbchangeofbase} =\quad \diag{rightrsidesecondequationimagetwocellofthepbchangeofbase} \end{equation} hold. \end{prop} By considering the comma object, every morphism $c: y\to z $ in a $2$-category $\mathbb{A} $ induces a $2$-functor $ c^\Leftarrow : \mathbb{A} //z \to \mathbb{A} / y $. This gives what we call, herein, the (lax) \textit{change-of-base $2$-functor}. 
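\begin{rem}
To fix ideas, let us spell out, for illustration only, what the comma object amounts to when $\mathbb{A} = \mathsf{Cat} $ is the $2$-category of categories. Given functors $b: x\to z $ and $c: y\to z $, the comma object $b\downarrow c $ is the familiar comma category: its objects are the triples $(x_0 , y_0 , \beta ) $ in which $x_0 $ is an object of $x$, $y_0 $ is an object of $y$, and $\beta : b(x_0 )\to c(y_0 ) $ is a morphism of $z$, while a morphism $(x_0 , y_0 , \beta )\to (x_1 , y_1 , \beta ' ) $ is a pair of morphisms $x_0\to x_1 $ and $y_0\to y_1 $ making the evident square in $z$ commute. The two projections forget, respectively, the second and the first coordinates, and the universal $2$-cell is the natural transformation whose component at $(x_0 , y_0 , \beta ) $ is $\beta $ itself.
\end{rem}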
We start by defining the appropriate analogue of the direct image $2$-functor in our setting, that is to say, the $2$-functor $c\overline{!} : \mathbb{A} / y \to \mathbb{A} // z $ induced by each morphism $c: y\to z $ of $\mathbb{A} $. \begin{defi}[Direct image]\label{definitiondirectimage} If $c: y\to z $ is any morphism of a $2$-category $\mathbb{A} $, we define the commutative diagram \directlua{pu()} \begin{equation}\label{directimagedefinitiondiagramequation} \diag{directimagedefinitiondiagram} \end{equation} in which the unlabeled arrow is the obvious inclusion, and $$c! : \mathbb{A} /y \to \mathbb{A} / z $$ is defined by $$(x, a)\mapsto (x, ca), \, (f, \id )\mapsto (f, \id _ c \ast \id ), \, \zeta \mapsto \zeta , $$ that is to say, the usual direct image $2$-functor. \end{defi} Theorem \ref{teoremamudancadebaselassa} shows that, in the presence of suitable comma objects, for each morphism $c: y\to z $ in a $2$-category, the direct image $2$-functor $c\overline{!} : \mathbb{A} / y \to \mathbb{A} // z $ has a right $2$-adjoint -- the (lax) change-of-base $2$-functor $c^\Leftarrow $ defined below. \begin{defi}[$c^\Leftarrow $]\label{def:change-of-base-lax} Let $\mathbb{A} $ be any $2$-category, and $c: y\to z $ a morphism of $\mathbb{A} $. Assume that $\mathbb{A} $ has comma objects along $c$. We denote by $$ c^\Leftarrow : \mathbb{A} //z \to \mathbb{A} / y $$ the $2$-functor defined by the comma object along the morphism $c$.
Explicitly, the action on objects of $c^\Leftarrow $ is given by \begin{equation} (x, b) \mapsto (b\downarrow c, c^\Leftarrow (b) : b\downarrow c\to y ) \end{equation} in which \directlua{pu()} \begin{equation}\label{definitionofthetwofunctorcommaobject} \diag{commadiagramdefinitioncommaadjunction} \end{equation} is the comma object as in \ref{definitionofcommaobjects}, while the action on morphisms is given by \directlua{pu()} \directlua{pu()} \begin{equation}\label{definitiononmorphismsofcommaalongc} \left( \diag{morphismfforthedefinitioncLeftarrow}, \diag{twocelloflaxcommamorphismmmm}\right)\quad\mapsto\quad \left( a\downarrow{c}\xrightarrow{c^\Leftarrow\left(f, \gamma \right) } b\downarrow{c},\quad\id _{c^\Leftarrow (a) } \right) \end{equation} in which $c^\Leftarrow (f, \gamma ) $, sometimes only denoted by $c^\Leftarrow (f )$, is the unique morphism of $\mathbb{A} $ such that the equations $$ b^\Rightarrow (c)\,\cdot\,{c^\Leftarrow{\left( f, \gamma \right)}} = f\,\cdot\,{a^\Rightarrow{(c)}}, \qquad c^\Leftarrow (b)\,\cdot\,{c^\Leftarrow{\left( f, \gamma \right)}} = c^\Leftarrow (a), $$ \directlua{pu()} \directlua{pu()} \begin{equation} \diag{cleftarrowonmorphismm}\quad =\quad \diag{cleftarrowonmorphismmrightside} \end{equation} hold. Finally, if $\zeta : f\Rightarrow f' : (w,a)\to (x,b) $ is a $2$-cell between morphisms $(f, \gamma ) $ and $(f', \gamma ' ) $ in $\mathbb{A} // z $, the $2$-cell $c^\Leftarrow (\zeta ) $ is the unique $2$-cell such that the equations \directlua{pu()} \directlua{pu()} \directlua{pu()} \directlua{pu()} \begin{equation} \diag{leftsidefirstequationimagetwocellofthecommachangeofbase} = \diag{rightrsidefirstequationimagetwocellofthecommachangeofbase}\quad\mbox{ and }\quad \diag{leftsidesecondequationimagetwocellofthecommachangeofbase} =\quad \diag{rightrsidesecondequationimagetwocellofthecommachangeofbase} \end{equation} hold. 
\end{defi} \begin{theo}\label{teoremamudancadebaselassa} Let $\mathbb{A} $ be any $2$-category, and $c: y\to z $ a morphism in $\mathbb{A} $. If $\mathbb{A} $ has comma objects along $c $, then we have a $2$-adjunction \directlua{pu()} \begin{equation}\label{changeofthebaselaxcommacategoriesequation} \diag{changeofthebaselaxcommacategories}. \end{equation} \end{theo} \begin{proof} We define below the counit, denoted by $\delta $, and the unit, denoted by $\rho $, of the $2$-adjunction $c\overline{!}\dashv c^\Leftarrow $. For each object \begin{equation*} \left( x, x\xrightarrow{b} z \right) \end{equation*} of $\mathbb{A} // z $, we have the comma object \directlua{pu()} \begin{equation} \diag{commadiagramdefinitionproofcommaobject} \end{equation} as in \eqref{definitionofthetwofunctorcommaobject}. We define the counit on $(x,b) $, denoted by $\delta_ {(x,b)} $, to be the morphism between $c\overline{!} c^\Leftarrow (x,b) $ and $(x, b) $ in $\mathbb{A} //z $ given by the pair $(b^\Rightarrow (c) , \chi ^{b\downarrow c} ) $. Moreover, for each object \begin{equation*} \left( w, w\xrightarrow{a} y \right) \end{equation*} in $\mathbb{A} /y $, we have the comma object \directlua{pu()} \begin{equation} \diag{commaalongca} \end{equation} in $\mathbb{A} $. By the universal property of the comma object, there is a unique morphism $\rho _ {(w, a)} '$ of $\mathbb{A} $ such that the equations \directlua{pu()} \begin{equation} \diag{morphismcommadiagramdefinitionrholinha} \end{equation} \begin{equation*} \left(ca\right)^\Rightarrow (c)\, \cdot \, \rho _ {(w, a)}' = \id _w \qquad\mbox{and}\qquad c^\Leftarrow c\overline{!} (a)\,\cdot\, \rho _ {(w, a)} '= a \end{equation*} hold. By the equation above, the pair $ (\rho _ {(w, a)}' , \id _a ) $ gives a morphism between $(w,a) $ and $(ca\downarrow c, c^\Leftarrow c\overline{!} (a)) $ in $\mathbb{A} / y $. 
We claim that the component $\rho _ {(w, a)} $ of the unit of $c\overline{!} \dashv c^\Leftarrow $ on $(w, a) $ is the morphism defined by the pair $ (\rho _ {(w, a)}' , \id _a ) $. It is straightforward to see that the definitions above actually give $2$-natural transformations $\delta : c\overline{!} c^\Leftarrow\longrightarrow \id _{\mathbb{A}//z } $ and $\rho : \id _ {\mathbb{A} /y }\longrightarrow c^\Leftarrow c\overline{!} $. We prove below that $\delta $ and $\rho $ satisfy the triangle identities. Let $(w,a) $ be an object of $\mathbb{A} / y $. The image of the morphism $\rho _{(w, a)} $ by the $2$-functor $c\overline{!} : \mathbb{A} /y\to \mathbb{A} //z $ is the morphism $ (\rho _{(w,a)} ', \id _{ca} ) $ between $c\overline{!} (w, a ) = (w, ca) $ and $(ca\downarrow c, c\overline{!} c^\Leftarrow c\overline{!} (a)) $ in $\mathbb{A} //z $, while the component $\delta _ { c\overline{!} (w,a) } = \delta _ {(w, ca)}$ is the morphism $\left( (ca)^\Rightarrow (c), \chi ^{ca\downarrow c} \right) $. By the definition of $\rho ' _{(w,a)} $, we have that $(ca)^\Rightarrow (c) \cdot \rho _{(w,a)} ' = \id _w $ and $ \chi ^{ca\downarrow c} \ast \id _{\rho _{(w,a)} '} = \id _{ca} $. Therefore $\delta _ { c\overline{!} (w,a) }\, \cdot \, c\overline{!} \left( \rho _{(w,a)} \right) $ is the identity on $c\overline{!} (a) $. This proves the first triangle identity. Let $(x, b) $ be an object of $\mathbb{A} //z $. 
Denoting by $(c\cdot c^\Leftarrow (b)\downarrow c , \chi ^{c\cdot c^\Leftarrow (b)\downarrow c } ) $ the comma object of $c\cdot c^\Leftarrow (b) $ along $c $, we have that the morphism $$ c^\Leftarrow \left(\delta _{(x,b)}\right) : c^\Leftarrow c\overline{!} c^\Leftarrow (x,b) \to c^\Leftarrow (x,b) $$ in $\mathbb{A} / y $ is defined by the pair $(\updelta ' , \id _{c^\Leftarrow c\overline{!} c^\Leftarrow (b) } ) $ in which $\updelta '$ is the unique morphism in $\mathbb{A} $ making the diagrams \begin{equation*} \vcenter{ \xymatrix@=4em{ x && b\downarrow c \ar[ll]_-{b^\Rightarrow (c) } \\ & c\cdot c^\Leftarrow (b) \downarrow c \ar[ru]_-{\updelta ' } \ar[lu]^-{b^\Rightarrow (c) \cdot \left( c\cdot c^ \Leftarrow (b) \right) ^\Rightarrow (c) } & } } \qquad \vcenter{ \xymatrix@=3em{ b\downarrow c \ar[rd]^-{c^\Leftarrow (b) }& \\ & y \\ c\cdot c^\Leftarrow (b)\downarrow c \ar[ru]_-{c^\Leftarrow c\overline{!} c^\Leftarrow (b) } \ar[uu]^-{\updelta ' }& } } \end{equation*} commute, and the equation \directlua{pu()} \directlua{pu()} \begin{equation} \diag{definitionofdeltalinhaleft}\quad =\quad \diag{definitionofdeltalinharight} \end{equation} holds.
Since, by the definition of $\rho $, the underlying morphism $\rho _{c^\Leftarrow (x,b) }' $ of the component of $\rho $ on $ c^\Leftarrow (x, b) $ is such that the equations \begin{equation*} \chi ^ {c\cdot c^\Leftarrow (b)\downarrow c} \ast \id _ {\rho ' _{c^\Leftarrow (x,b) } } = \id _{c\cdot c^\Leftarrow (b) },\, \left( c\cdot c^ \Leftarrow (b) \right)^\Rightarrow (c)\cdot \rho ' _{c^\Leftarrow (x,b) } = \id _{b\downarrow c }, \, c^\Leftarrow c\overline{!} c^\Leftarrow (b) \cdot \rho ' _{c^\Leftarrow (x, b) } = c^\Leftarrow (b) \end{equation*} hold, we get that the equations \directlua{pu()} \begin{equation} \diag{secontriangleidentitycoomaobjecttwoadjunction} \end{equation} \begin{equation*} c^\Leftarrow (b)\cdot \updelta ' \cdot \rho ' _{c^\Leftarrow (x,b) } = c^\Leftarrow (b), \qquad b^\Rightarrow (c) \cdot \updelta ' \cdot \rho ' _{c^\Leftarrow (x,b) } = b^ \Rightarrow (c) \end{equation*} hold. Since, by the universal property of the comma object of $b$ along $c$, the morphism satisfying the three equations above is unique, we conclude that $\updelta ' \cdot \rho _{c^\Leftarrow (x,b) }'$ is the identity on $b\downarrow c $. This proves that $$c^\Leftarrow (\delta _{(x,b)} ) \cdot \rho _{c^\Leftarrow (x,b) } = \id _{c^\Leftarrow (x,b)} $$ which establishes the second triangle identity. \end{proof} \begin{coro}\label{itisactuallythecoherencetwoadjunctioncorollary} If $\mathbb{A} $ has comma objects (along identities), then $\mathbb{A} /y \to \mathbb{A} // y $ has a right $2$-adjoint which is defined by the comma object along the identity $\id _ y $.
\end{coro} \begin{proof} It follows from Theorem \ref{teoremamudancadebaselassa} and the fact that the inclusion $\mathbb{A} /y \to \mathbb{A} // y $ is actually given by the $2$-functor $\id _ y \overline{!} : \mathbb{A} /y\to \mathbb{A} // y $ and, hence, it is left $2$-adjoint to the $2$-functor $$\id _y ^\Leftarrow : \mathbb{A} //y \to\mathbb{A} /y .$$ \end{proof} By Theorem \ref{teoremamudancadebaselassa} and the fact that, given a morphism $c: y\to z $ of a $2$-category $\mathbb{A} $, \directlua{pu()} \begin{equation}\label{basicdiagramofcompositionwithcandinclusionequation} \diag{basicdiagramofcompositionwithcandinclusion} \end{equation} commutes, we get that: \begin{theo}\label{relationofchangeofbasecomma} Let $\mathbb{A}$ be a $2$-category, and $c: y\to z $ a morphism of $\mathbb{A} $. If $\mathbb{A} $ has comma objects and pullbacks along $c$, we have the following commutative diagram of $2$-adjunctions \directlua{pu()} \begin{equation}\label{compositionoftwoadjunctionschangeofbasecommaequation} \diag{compositionoftwoadjunctionschangeofbasecomma} \end{equation} which means that the composition of the $2$-adjunction $c!\dashv c^\ast : \mathbb{A} /z \to \mathbb{A} /y $ with $\id _z\overline{!}\dashv \id _z ^\Leftarrow : \mathbb{A} //z \to \mathbb{A} /z $ is, up to $2$-natural isomorphism, the $2$-adjunction $$c\overline{!}\dashv c^\Leftarrow : \mathbb{A} //z \to \mathbb{A} /y .$$ \end{theo} Given a $2$-category $\mathbb{A} $, it is clear that, for any object $y$ of $\mathbb{A} $, the $2$-adjunction $ \id _y ! \dashv \id_y ^\ast : \mathbb{A} /y \to \mathbb{A} /y $ is $2$-naturally isomorphic to the identity $2$-adjunction $ \id_{\mathbb{A} /y }\dashv \id_{\mathbb{A} /y } $ and, in particular, is an idempotent $2$-adjunction.
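\begin{rem}
For illustration, the factorization of Theorem \ref{relationofchangeofbasecomma} can be checked directly when $\mathbb{A} = \mathsf{Cat} $. Given an object $(x, b) $ of $\mathsf{Cat} // z $, the category underlying $\id _z ^\Leftarrow (x,b) $ is the comma category $b\downarrow \id _z $, whose objects are the triples $(x_0 , z_0 , \beta : b(x_0 )\to z_0 ) $. Pulling back the projection $b\downarrow \id _z \to z $ along $c: y\to z $ replaces the coordinate $z_0 $ by an object $y_0 $ of $y$ with $z_0 = c(y_0 ) $, yielding precisely the triples $(x_0 , y_0 , \beta : b(x_0 )\to c(y_0 )) $, that is, the comma category $b\downarrow c $ underlying $c^\Leftarrow (x,b) $.
\end{rem}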
In the setting of Theorem \ref{teoremamudancadebaselassa}, that is to say, the comma version of the change-of-base $2$-functor, the $2$-adjunction $$ \id _y \overline{!} \dashv \id_y ^\Leftarrow : \mathbb{A} //y \to \mathbb{A} /y $$ is far from being isomorphic to the identity $2$-adjunction; in general, it is not even idempotent. It is, however, always lax idempotent and a Kleisli $2$-adjunction. More precisely: \begin{theo}\label{laxidempotentcoherencelaxcommacomma} Let $\mathbb{A} $ be a $2$-category, and $y $ an object of $\mathbb{A} $. If $\mathbb{A} $ has comma objects along $\id _y $, then the $2$-adjunction \directlua{pu()} \begin{equation}\label{changeofthebaseidentitycoherencelaxidempotentequationn} \diag{changeofthebaseidentitycoherencelaxidempotent} \end{equation} is lax idempotent. Moreover, it is a Kleisli $2$-adjunction and, hence, $\id_y^\Leftarrow $ is a pre-Kock-Z\"{o}berlein $2$-functor. \end{theo} \begin{proof} In order to verify that \eqref{changeofthebaseidentitycoherencelaxidempotentequationn} is a Kleisli $2$-adjunction, it is enough to see that $\id_y\overline{!} $ is bijective on objects. In particular, we conclude that $\id_y^\Leftarrow $ is $2$-premonadic. Therefore, in order to prove that $\id_y^\Leftarrow $ is a pre-Kock-Z\"{o}berlein $2$-functor, it remains only to prove that the $2$-adjunction \eqref{changeofthebaseidentitycoherencelaxidempotentequationn} is lax idempotent. We prove below that \begin{equation}\label{proofofthelaxidempotencyofthetwoadjunctionoftheidentity} \id_{\id_y\overline{!}} \ast \rho\dashv \delta \ast\id _{\id_y\overline{!} } \end{equation} is a rari adjunction and, hence, it satisfies the condition \ref{laxidempotentadjunctioncharacterization3} of Theorem \ref{characterizationlaxidempotent2adjunction}, which implies that the $2$-adjunction \eqref{changeofthebaseidentitycoherencelaxidempotentequationn} is lax idempotent.
For short, throughout this proof, we denote $\id_{\id_y\overline{!}} \ast \rho$ by $ \overline{\rho } $, and $\delta \ast\id _{\id_y\overline{!} }$ by $\overline{\delta } $. Recall that, given an object $(x, b)\in{\mathbb{A} /y} $, we have that $\overline{\delta } _{(x,b) } $ is defined by the pair \directlua{pu()} \begin{equation*} \left(\overline{\underline{\delta}}_{(x,b)},\, \diag{counittwocelloftheidentitytwoadjunctionlaxcoomacomma}\right) \end{equation*} in which, as suggested by the notation, the $2$-cell is the comma object in $\mathbb{A} $, and $$\overline{\underline{\delta}}_{(x,b)}:= b^\Rightarrow{(\id_y)}.$$ Moreover, recall that, given an object $(x, b)\in{\mathbb{A} /y} $, we have that $\overline{\rho } _{(x,b) } = \left(\overline{\underline{\rho}}_ {(x,b)}, \id_b \right) $ in which $\overline{\underline{\rho}}_ {(x,b)}$ is the unique morphism of $\mathbb{A} $ such that the equations \directlua{pu()} \directlua{pu()} \begin{center} $\overline{\underline{\delta}}_{(x,b)}\cdot \overline{\underline{\rho}}_{(x,b)} = \id _x$,\quad $ \id_y^\Leftarrow{(b)} \cdot \overline{\underline{\rho}}_{(x,b)} =b$,\quad and \end{center} \begin{equation}\label{rhodef} \diag{identityonbtodefinerho} \quad = \quad \diag{rholparaademonstracaodelaxidempotency} \end{equation} hold. 
For each object $(x,b)\in\mathbb{A} /y $, the pair of $2$-cells $\left( \chi ^{b\downarrow{\id_y}}, \id _ {\overline{\underline{\delta}}_{(x,b)}} \right) $ satisfies the equation \directlua{pu()} \directlua{pu()} \normalsize \directlua{pu()} \directlua{pu()} \begin{equation} \diag{firsttwocellinordertodefinecounitoftheadjunctionthatgivesarari}\, =\,\diag{secondtwocellinordertodefinecounitoftheadjunctionthatgivesarari} \end{equation} and, hence, by the universal property of the comma object, there is a unique $2$-cell $\Gamma _ {(x,b)} $ such that the equations $$\id_{\id_y^\Leftarrow{(b)} }\ast \Gamma _ {(x,b)} = \chi ^{b\downarrow{\id_y}} \quad\mbox{ and }\quad \id _ {\overline{\underline{\delta}}_{(x,b)}}\ast \Gamma _ {(x,b)} = \id _ {\overline{\underline{\delta}}_{(x,b)}} $$ hold. The $2$-cells $\Gamma _ {(x,b)} $ define a modification $$\Gamma : \overline{\rho }\cdot \overline{\delta }\Longrightarrow \id _{\id_y\overline{!}\id_y^\Leftarrow\id_y\overline{!}} $$ which we claim to be the counit of the adjunction \eqref{proofofthelaxidempotencyofthetwoadjunctionoftheidentity}. The first triangle identity holds, since, by the definition of $\Gamma $ above, $$ \id _ {\overline{\underline{\delta}}_{(x,b)}}\ast \Gamma _ {(x,b)} = \id _ {\overline{\underline{\delta}}_{(x,b)}} $$ for every object $(x,b)\in\mathbb{A} /y $. 
Finally, for each object $(x,b)\in\mathbb{A} /y $, $\Gamma _ {(x,b)}\ast \id _{\overline{\underline{\rho}}_{(x,b)} } $ is such that $$\id_{\id_y^\Leftarrow{(b)} }\ast \Gamma _ {(x,b)}\ast \id _{\overline{\underline{\rho}}_{(x,b)} } = \chi ^{b\downarrow{\id_y}}\ast \id _{\overline{\underline{\rho}}_{(x,b)} } = \id _b $$ by \eqref{rhodef}, and, of course, $$\id _ {\overline{\underline{\delta}}_{(x,b)}}\ast \Gamma _ {(x,b)}\ast \id _{\overline{\underline{\rho}}_{(x,b)} } = \id _ {\overline{\underline{\delta}}_{(x,b)}\cdot \overline{\underline{\rho}}_{(x,b)} } .$$ Therefore, by the universal property of the comma object $b\downarrow \id_y $, we get that $\Gamma _ {(x,b)}\ast \id _{\overline{\underline{\rho}}_{(x,b)} } = \id _{\overline{\underline{\rho}}_{(x,b)} } $. This completes the proof that the second triangle identity holds. \end{proof} \section{Admissibility}\label{secao de admissibilidade} Throughout this section, $$\diag{basicadjunction} $$ is a given $2$-adjunction. By abuse of language, given any $2$-functor $H: \mathbb{A}\to \mathbb{B} $, for each object $x $ in $\mathbb{A} $, we denote by the same symbol $\check {H} $ the $2$-functors $$ \check{H} : \mathbb{A}/x \to \mathbb{B} /H(x), \quad \check{H} : \mathbb{A} /x \to \mathbb{B} // H(x), \quad \check{H} : \mathbb{A} // x \to \mathbb{B} // H(x) $$ pointwise defined by $H$. Moreover, given a morphism $f:w\to x $ of $\mathbb{A} $, we denote by $$ f\underline{!} : \mathbb{A} //w\to \mathbb{A} //x $$ the $2$-functor defined by the \textit{direct image} between the lax comma $2$-categories, whose restriction to $ \mathbb{A} /w $ is equal to $f\overline{!} $. \begin{prop} If $G$ is a locally fully faithful $2$-functor then, for each object $x $ of $\mathbb{A}$, both $\check {G}: \mathbb{A} /x\to \mathbb{B}/ G(x) $ and $\check {G}: \mathbb{A} //x \to \mathbb{B}// G(x) $ are locally fully faithful.
\end{prop} \begin{theo} For any object $y\in \mathbb{A} $, we have two $2$-adjunctions \directlua{pu()} \directlua{pu()} \begin{equation} \diag{basicliftingtwoadjunction}\quad\mbox{ and }\quad\diag{basicliftingtwoadjunctionlaxcommatwocategory} \end{equation} where the counit and the unit of these $2$-adjunctions are defined pointwise by the counit and unit of $F\dashv G $. \end{theo} \begin{coro}\label{followsfromthepointwiseproperty} For each object $y\in \mathbb{A} $, the $2$-adjunctions \directlua{pu()} \directlua{pu()} \begin{equation} \diag{basicliftingtwoadjunctionn}\quad\mbox{ and }\quad\diag{basicliftingtwoadjunctionlaxcommatwocategoryy} \end{equation} are lax idempotent (premonadic) if, and only if, $F\dashv G $ is lax idempotent (premonadic). \end{coro} Henceforth, we further assume that $\mathbb{B} $ has comma objects and pullbacks whenever necessary. Recall that, in this case, by Section \ref{change-of-base functor}, for each object $y $ of $\mathbb{B} $, we have $2$-adjunctions \begin{center} $\eta_ y !\dashv \eta _y ^\ast : \mathbb{B} / GF (y)\to \mathbb{B}/ y $ and $\eta_ y \overline{!}\dashv \eta _y ^{\Leftarrow } : \mathbb{B} // GF (y) \to \mathbb{B}/ y $ \end{center} in which the right $2$-adjoints are given respectively by the pullback and the comma object along $\eta_ y$. \begin{defi}[Simple, admissible and $2$-admissible $2$-functors]\label{maindefinition} The $2$-functor $G$ is called \textit{simple/$2$-admissible} if $F\dashv G $ is lax idempotent/pre-Kock-Z\"{o}berlein, and, for every $y\in \mathbb{B}$, \directlua{pu()} \begin{equation} \diag{basicliftingtwoadjunctionlaxcommatwocategoryagain} \end{equation} is simple/$2$-admissible w.r.t. $\eta_ y \overline{!}\dashv \eta _y ^\Leftarrow $ (see Definitions \ref{SIMPLEtwoadjunction} and \ref{twoadmissibledefinition}). We say that $G$ is \textit{admissible w.r.t.
the basic fibration} if $G$ is fully faithful, and, for every $y\in \mathbb{B}$, \directlua{pu()} \begin{equation} \diag{basicadjunctionliftingagain} \end{equation} is admissible w.r.t. $\eta_ y !\dashv \eta _y ^\ast $. \end{defi} \begin{rem} The notion of admissibility w.r.t. the basic fibration is just the direct strict $2$-dimensional generalization of the classical notion of admissibility (also called a semi-left-exact reflective functor)~\cite{MR779198, MR1822890}, while the notion of simplicity coincides with that introduced in \cite{MR3545937}. \end{rem} In order to establish the direct consequences of the results of Section \ref{compositionof2adjunctionssection} for the case of $2$-admissibility and simplicity, we set some notation below. For each object $y$ of $\mathbb{B} $, we consider the $2$-adjunctions \directlua{pu()} \begin{equation}\label{SIMPLECOMPOSITION} \diag{compositionof2adjunctionsfirstdiagramlaxcommatwocategorycomma} \end{equation} in which, by abuse of language, we denote respectively by $\varepsilon $ and $\eta $ the counit and unit defined pointwise, and by $\mathcal{T} = (T, \mu , \eta ) $ the $2$-monad induced by $\varepsilon _ {F(y)}! \circ \check{F}\dashv \check{G} $. In this case, the composition of $2$-adjunctions above is given by \directlua{pu()} \begin{equation} \diag{compositionof2adjunctionsseconddiagrametacommalaxcomma} \end{equation} where $\alpha = \left(\id _ {\eta_ y ^\Leftarrow }\ast \eta \ast \id _{\eta _ y \overline{!} }\right)\cdot \rho $, and we denote by $\mathcal{R} = ( R, v, \alpha ) $ the $2$-monad induced by $\check{F}\dashv \eta_y^\Leftarrow\circ\check{G}$.
\begin{rem}[$\alpha $] Given an object $\left(x, b\right)\in \mathbb{B} / y $, $$\alpha _{\left(x, b\right) } : \left(x, b\right)\to \eta_ y ^\Leftarrow \check{G}\check{F} \left(x, b\right) $$ is defined by the unique morphism $\alpha _b : x\to GF (b)\downarrow \eta _y $ in $\mathbb{B}$ such that the equations \directlua{pu()} \begin{equation} \diag{definitionofalphalinha} \end{equation} \begin{equation*} \left(GF(b)\right)^\Rightarrow (\eta _ y )\, \cdot \, \alpha _b = \eta _x \qquad\mbox{and}\qquad \eta _ y ^\Leftarrow \left( GF(b) \right) \,\cdot\, \alpha _b = b \end{equation*} hold. \end{rem} \begin{rem} The composition of $\varepsilon _ {F(y)}!\circ \check{F} $ with $\eta_ y !$ is given by $\check{F} $. More precisely, the diagrams \directlua{pu()} \directlua{pu()} $$ \diag{basicliftingtwoadjunctionlaxcommatwocategoryylefttwoadjoint}\qquad \diag{basicliftingtwoadjunctionpullbacktwocategoryylefttwoadjoint} $$ commute. \end{rem} As direct consequences of the main results of Section \ref{compositionof2adjunctionssection}, we get the following corollaries. \begin{coro}[Simplicity~\cite{MR3545937}] Let $G$ be pre-Kock-Z\"{o}berlein.
The $2$-adjunction $$\left( F\dashv G, \varepsilon , \eta \right) : \mathbb{A}\to \mathbb{B} $$ is simple if, and only if, for each $y\in \mathbb{B} $, $$\id _ T \ast\alpha \dashv \mu \cdot \left(\id _ T \ast \delta \ast \id _ T\right) $$ is a lari adjunction, in which $\left(\id _ T \ast\alpha \right) $ is pointwise defined by $\left(\id _ T \ast\alpha \right) _b: = T( \alpha _ {(x,b)} ) $, and $\mu\cdot \left(\id _ T \ast \delta \ast \id _ T\right) $ is pointwise defined by \begin{equation*} \left(\mu\cdot \left(\id _ T \ast \delta \ast \id _ T\right)\right) _ b\quad :=\quad \vcenter{ \xymatrix@=3.5em{ T\left(T(b)\downarrow \eta _ y \right)\ar[r]^-{T\left( \delta _{T(b) } \right) } \ar@{}[rd]|-{\xLeftarrow{T\left( \chi ^{ T(b)\downarrow \eta _ y }\right) }} \ar[d]_-{T\left(\eta _ y ^\Leftarrow \left(T(b) \right)\right) } & TT(x) \ar[d]|-{TT(b) } \ar[r]^-{\mu _ {x }} & T(x)\ar[d]^-{T(b)} \\ T(y) \ar[r]_-{ T(\eta _y ) } & TT(y) \ar[r]_-{\mu_y} & T(y) \ar@{{}{ }{}}[lu]|-{=} } } \end{equation*} \end{coro} \begin{proof} The result follows from Corollary \ref{followsfromthepointwiseproperty} and Theorem \ref{simplicityofFGwrtJH}. \end{proof} \begin{coro} Assume that $F\dashv G $ is lax idempotent. We have that $F\dashv G $ is simple provided that, for each $y\in \mathbb{B} $, $\eta_y^{\Leftarrow }T\delta \check{G} $ or $ F \delta T $ is invertible. \end{coro} \begin{proof} It follows from Corollary \ref{followsfromthepointwiseproperty} and Corollary \ref{corollarytrivialforlaxidempotent}. \end{proof} \begin{coro}[$2$-admissibility]\label{oneofthemaincorollaries} Assume that $G$ is pre-Kock-Z\"{o}berlein.
The $2$-adjunction $\left( F\dashv G, \varepsilon , \eta \right) : \mathbb{A}\to \mathbb{B} $ is $2$-admissible if and only if it is simple and, for every object $y\in \mathbb{B}$ and every object $a: w\to F(y) $ of $\mathbb{A} // F(y) $, the morphism defined by $$\xymatrix@=3.5em{ F\left(G(a)\downarrow \eta _ y \right) \ar[d]_-{F\left(\eta _ y ^\Leftarrow \left(G(a) \right)\right) } \ar[r]^-{F\left( \delta _{G(a) } \right) } \ar@{}[rd]|-{\xLeftarrow{F\left( \chi ^{ G(a)\downarrow \eta _ y }\right) }} &FG(w)\ar[d]|-{FG(a) } \ar[r]^-{\varepsilon _ {w }} & w \ar[d]^-{a} \\ F( y) \ar[r]_-{ F(\eta _ y) } & FGF(y)\ar[r]_-{\varepsilon _ {F(y)} } & F(y) \ar@{{}{ }{}}[lu]|-{=} } $$ in $\mathbb{A} //F(y) $ is a regular epimorphism, \textit{i.e.} the morphism defined by $$\left(\varepsilon _ w\cdot F\left( \delta _{G(a) } \right) , \id _ {\varepsilon _{F(y)}}\ast F\left( \chi ^{ G(a)\downarrow \eta _ y }\right) \right):\varepsilon _ {F(y)}!\, \check{F}\, \eta _y ^\Leftarrow\, \check{G} (a)\to a $$ is a regular epimorphism in $\mathbb{A} //F(y) $. \end{coro} \begin{proof} The result follows from Corollary \ref{followsfromthepointwiseproperty} and Theorem \ref{admissibilitywrtJH}. \end{proof} \begin{coro}\label{preKockZoberleinFdashvG} If $G$ is pre-Kock-Z\"{o}berlein then $F\dashv G $ is $2$-admissible, provided that, for each $y\in\mathbb{B} $, $\check{F}\delta \check{G} $ is invertible. \end{coro} \begin{proof} It follows from Corollary \ref{followsfromthepointwiseproperty} and Corollary \ref{themaincase}. \end{proof} It should be noted that by Lemma \ref{equivalenciadefullreflectivelaxidempotent} we can conclude that the notion of simplicity w.r.t. the basic fibration (admissibility w.r.t. the basic fibration) coincides with the notion of simplicity ($2$-admissibility) if $\mathbb{A} $ and $\mathbb{B} $ are locally discrete. 
This shows that the notions of simplicity and $2$-admissibility can be seen as generalizations of the classical notions of simplicity and admissibility/semi-left exact reflective functors~\cite{MR779198, MR1822890} when categories are seen as locally discrete $2$-categories. Furthermore, Theorem \ref{southafricantheorem} shows that classical admissibility implies $2$-admissibility in the presence of comma objects. \begin{prop}\label{theoremthatfollowsfrompreservationandidentity} Assume that $F\dashv G $ is pre-Kock-Z\"{o}berlein, and $\mathbb{A}$ has comma objects. The $2$-adjunction $F\dashv G $ is simple ($2$-admissible) if, and only if, for each object $y\in\mathbb{B} $, the $2$-adjunction \directlua{pu()} \begin{equation}\label{idenittyonfycommaobject} \diag{2adjunctionidentityforthesouthafricantheorem} \end{equation} is simple ($2$-admissible) w.r.t. the composite of the $2$-adjunctions \directlua{pu()} \begin{equation} \diag{compositionof2adjunctionsadmissiblewrtbasifibration} \end{equation} \end{prop} \begin{proof} By definition, $F\dashv G $ is simple ($2$-admissible) if, and only if, for each object $y\in\mathbb{B}$, the composition of the $2$-adjunctions of \eqref{SIMPLECOMPOSITION} is lax idempotent (pre-Kock-Z\"{o}berlein). Since $G$ is a right $2$-adjoint, it preserves comma objects and, hence, we get that \directlua{pu()} \directlua{pu()} \directlua{pu()} \begin{equation} \diag{theoremthatusespreservationofcommathird} \qquad \cong \qquad \diag{theoremthatusespreservationofcommasecond}\qquad \cong\qquad \diag{theoremthatusespreservationofcommafirstleft} \end{equation} in which the second $2$-natural isomorphism follows from Theorem \ref{relationofchangeofbasecomma}. By the definitions of simplicity and $2$-admissibility (see Definitions \ref{twoadmissibledefinition} and \ref{SIMPLEtwoadjunction}), the proof is complete.
\end{proof} \begin{theo}\label{southafricantheorem} Provided that $\mathbb{A} $ has comma objects, if $\left(F\dashv G\right): \mathbb{A}\to \mathbb{B} $ is admissible w.r.t. the basic fibration, then it is $2$-admissible. \end{theo} \begin{proof} By Theorem \ref{laxidempotentcoherencelaxcommacomma}, the $2$-functor $\id_{F(y)}^{\Leftarrow } $ (the right $2$-adjoint of \eqref{idenittyonfycommaobject}) is a pre-Kock-Z\"{o}berlein $2$-functor for every $y\in \mathbb{B} $. If $F\dashv G $ is admissible w.r.t. the basic fibration, we get that, for every $y\in\mathbb{B}$, $\eta _ y ^\ast \circ \check{G} $ is full reflective. Therefore $\eta _ y ^\ast \circ \check{G} \circ \id_ {F(y)} ^{\Leftarrow } $ is a pre-Kock-Z\"{o}berlein $2$-functor by Corollary \ref{themaincase}. By Proposition \ref{theoremthatfollowsfrompreservationandidentity}, this means that $F\dashv G $ is $2$-admissible. \end{proof} \section{Examples}\label{sectionexamplesexamples} The references \cite{MR3545937, MR3708821} provide several examples of simple $2$-adjunctions/monads. In this section, we give examples of $2$-admissible $2$-adjunctions which, in particular, are also examples of simple $2$-adjunctions. Our first example of a $2$-admissible $2$-adjunction is the identity. The result below follows directly from Theorem \ref{laxidempotentcoherencelaxcommacomma}. \begin{lem}\label{firstexampleofadmissible} Let $\mathbb{A} $ be any $2$-category with comma objects. The $2$-adjunction $\id_\mathbb{A} \dashv \id _\mathbb{A} $ is $2$-admissible. \end{lem} Of course, the identity is also an example of an admissible $2$-functor w.r.t. the basic fibration. Moreover, by Theorem \ref{southafricantheorem}, examples of admissible $2$-functors w.r.t. the basic fibration give us a wide class of examples of $2$-admissible $2$-functors. \begin{theo} Let $\ord $ be the $2$-category of preordered sets, and $\cat $ the $2$-category of small categories.
The inclusion $2$-functor $\ord\to\cat $ has a left $2$-adjoint and it is admissible w.r.t. the basic fibration (and, hence, also $2$-admissible). \end{theo} \begin{proof} It is known that the underlying adjunction is admissible (w.r.t. the basic fibration)~\cite{MR2075604}. Since $\cat $ is a complete $2$-category, we get that the $2$-adjunction is admissible w.r.t. the basic fibration. \end{proof} Free cocompletions of $2$-categories also give us a good source of examples of admissibility w.r.t. the basic fibration. In particular, the most basic cocompletion is the free addition of an initial object. \begin{theo} Let $\mathbb{A} $ be a $2$-category with pullbacks and an initial object $0$. We denote by $\overline{\mathbb{A} } $ the free addition of an initial object. If $\mathbb{A} (-, 0) : \mathbb{A} ^\op\to\Cat $ is constantly equal to the empty category, the canonical $2$-functor $$G:\mathbb{A}\to \overline{\mathbb{A} } $$ is admissible w.r.t. the basic fibration (and, hence, if $\mathbb{A} $ has comma objects, it is $2$-admissible as well). \end{theo} \begin{proof} In fact, $\mathbb{A}\to \overline{\mathbb{A} } $ has a left $2$-adjoint if and only if $\mathbb{A} $ has an initial object. Moreover, provided that $\mathbb{A} $ has an initial object, we denote by $\eta $ the unit of this $2$-adjunction and by $\overline{0} $ the initial object freely added. We have that $\eta _x $ is invertible whenever $x\neq \overline{0} $. Therefore, in this case, $$\eta _{x} ^\ast\circ \check{G} : \mathbb{A} /x \to \overline{\mathbb{A} } /x $$ is fully faithful. Moreover, $\eta _{\overline{0}} ^\ast\circ \check{G} : \mathbb{A} /0 \to \overline{\mathbb{A} } /\overline{0} $ is clearly an isomorphism, since $\mathbb{A} /0$ and $\overline{\mathbb{A} } /\overline{0}$ are both empty. This completes the proof that $G$ is admissible w.r.t. the basic fibration and, hence, $2$-admissible provided that $\mathbb{A} $ has comma objects.
\end{proof} Another example is the free cocompletion of a $2$-category under (finite) coproducts. \begin{defi}\label{cocompletionunderfinitecoproducts} Let $\mathbb{A} $ be a $2$-category. We define the $2$-category ${\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A} \right) $ as follows. The objects of ${\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A} \right) $ are finite families of objects of $\mathbb{A} $, which can be seen as (possibly empty) lists of objects $$\left( x_1, \ldots , x_n\right). $$ In this case, a morphism $\left( x_1, \ldots , x_n \right)\to \left( y_1, \ldots , y_m\right)$ is a list $t = \left( t_0, \ldots , t_n \right) $ in which $$t_0 :\left\{ 1, \ldots , n \right\} \to \left\{ 1, \ldots , m \right\} $$ is a function, and, for $j>0 $, $$ t_j : x_j\to y _{t_0(j) } $$ is a morphism of $\mathbb{A} $. The composition and, hence, the identities are defined pointwise. Finally, given morphisms $$t = \left( t_0, \ldots , t_n \right), t' = \left( t'_0, \ldots , t'_n \right): \left( x_1, \ldots , x_n \right)\to \left( y_1, \ldots , y_m\right)$$ of ${\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A} \right) $, there is no $2$-cell $t\Rightarrow t' $ whenever $t_0\neq t_0' $. Otherwise, a $2$-cell $\tau : t\Rightarrow t' $ is a finite family of $2$-cells $$ \left( \tau _ j : t_j \Rightarrow t'_j: x_j\to y_ {t_0 (j) } \right)_{j\in\left\{ 1,\ldots , n\right\} } $$ of $\mathbb{A} $. The horizontal and vertical compositions are again defined pointwise. \end{defi} There is an obvious fully faithful $2$-functor $ I_\mathbb{A}: \mathbb{A} \to {\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A} \right)$ which takes each object $x$ to the family $(x) $. As observed above, the $2$-category $ {\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A}\right) $ is the \textit{free cocompletion }of $\mathbb{A} $ under finite coproducts.
In particular, we have: \begin{prop} The fully faithful $2$-functor $$I_\mathbb{A} : \mathbb{A} \to {\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A} \right)$$ has a left $2$-adjoint if and only if $\mathbb{A} $ has finite coproducts. In this case, the left $2$-adjoint is given by the coproduct. More precisely, a $2$-cell $$ \left(\tau _ 1, \ldots , \tau _n\right): (t_0, \ldots , t_n)\Longrightarrow (t_0', \ldots , t_n') : (x_1, \ldots , x_n)\to (y_1, \ldots , y_m) $$ in $ {\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A} \right) $ is taken to the unique $2$-cell \directlua{pu()} \begin{equation}\label{definingtheimageofthelefttwoadjointinfamlabelsecond} \diag{definingtheimageofthelefttwoadjointinfamsecond} \end{equation} induced by the $2$-cells \directlua{pu()} \begin{equation}\label{definingtheimageofthelefttwoadjointinfamlabel} \left(\diag{definingtheimageofthelefttwoadjointinfam}\right)_{i\in\left\{1,\ldots , n\right\} } \end{equation} in which the second arrows are the components of the universal cocone that gives the coproduct. \end{prop} \begin{rem} If we replace \textit{finite families} with \textit{arbitrary families} in Definition \ref{cocompletionunderfinitecoproducts}, we get the concept of $\mathsf{Fam}\left(\mathbb{A} \right) $, which corresponds to the free cocompletion of $\mathbb{A} $ under coproducts. \end{rem} We say that a $2$-category \textit{$\mathbb{A} $ has finite limits }if it has finite products, pullbacks and comma objects. The well-known notion of extensive category has an obvious (strict) $2$-dimensional analogue. In order to simplify the completeness hypotheses on the $2$-category $\mathbb{A} $, we are going to consider lextensive $2$-categories.
\begin{defi}[Lextensive $2$-category]\label{definitionofextensivetwocategory} A $2$-category $\mathbb{A} $ is lextensive if it has finite limits and coproducts, and, for every finite family of objects $\left( y_1, \ldots , y_n\right) $, the $2$-functor \begin{eqnarray*} \displaystyle\prod _{j=1}^n \mathbb{A}/y_j &\to & \mathbb{A} / \coprod _{j=1}^n y_j \\ \left(a_j : w_j\to y_j \right)_{j\in\left\{1, \ldots , n\right\}} &\mapsto &\coprod_{j=1}^n a_j \end{eqnarray*} defined pointwise by the coproduct is a ($\Cat $-)equivalence. \end{defi} \begin{theo} Let $\mathbb{A} $ be a lextensive $2$-category. We consider the $2$-adjunction \directlua{pu()} $$ \diag{basicadjunctionoffam} $$ in which the right $2$-adjoint is the canonical inclusion. For each finite family $Y=\left( y_j \right) _ {j\in\left\{ 1, \ldots , n\right\} } $ of objects in $\mathbb{A} $, there is a (canonical) $2$-natural isomorphism \directlua{pu()} \begin{equation}\label{diagramwhichprovestheadmissibilityofthecaseofFam} \diag{basiccommutativitydiagramoftheextensivity} \end{equation} \end{theo} \begin{proof} The equivalence $2$-functor $$\displaystyle\prod_{j=1}^n{\mathsf{Fam}}_{\mathsf{fin}}\left(\mathbb{A}/y_j\right)\to{\mathsf{Fam}}_{\mathsf{fin}}\left({\mathbb{A}}\right)/\left({y}_j\right)_{j\in\left\{1,\ldots , n\right\}} $$ is such that each object $$A= \left( \left( a_{(1,1)}, \ldots, a_{(1,m_1)} \right), \ldots , \left( a_{(n,1)}, \ldots, a_{(n,m_n)}\right)\right) $$ is taken to $$ t^A = \left( t^A_ l \right)_ {l\in \left\{ 0, \left(1,1\right),\ldots , \left(1,m_1\right), \ldots , \left(n,m_n\right)\right\} }$$ in which $ t^A_0(j,k) := j $ and $t^A_{(j,k)}:= a_{(j,k)} $. The action on morphisms and $2$-cells is then defined pointwise. \end{proof} \begin{coro} Let $\mathbb{A} $ be a lextensive $2$-category. The $2$-functor $I_\mathbb{A} : \mathbb{A} \to {\mathsf{Fam}}_{\mathsf{fin}}\left( \mathbb{A} \right)$ is admissible w.r.t. the basic fibration and, hence, $2$-admissible.
\end{coro} \begin{proof} In fact, since products of fully faithful $2$-functors are fully faithful, we get that $\eta _ Y^\ast I_ \mathbb{A} $ is fully faithful by the $2$-natural isomorphism \eqref{diagramwhichprovestheadmissibilityofthecaseofFam}. \end{proof} \begin{rem} Definition \ref{definitionofextensivetwocategory} has an obvious infinite analogue, the definition of \textit{infinitary lextensive $2$-category}. For an infinitary lextensive $2$-category $\mathbb{A} $, we have an analogous result w.r.t. $\mathsf{Fam}\left( \mathbb{A} \right) $. More precisely, $$I_\mathbb{A} : \mathbb{A} \to \mathsf{Fam}\left( \mathbb{A} \right) $$ is admissible w.r.t. the basic fibration (and, hence, $2$-admissible) whenever $\mathbb{A} $ is infinitary lextensive. \end{rem} \section*{Acknowledgments} We express our gratitude to George Janelidze for kindly hosting us at the University of Cape Town in Nov/2019 and for providing us with valuable suggestions on admissible functors. We also extend our thanks to Marino Gran and Tim Van der Linden for hosting us at the UCL, Louvain-la-Neuve, in May/2018, where we commenced this project. Lastly, we appreciate the anonymous referee's input, including suggestions and key bibliographic references. \directlua{pu()} \end{document}
\begin{document} \title{Score-based Generative Neural Networks for Large-Scale Optimal Transport} \begin{abstract} We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions. In certain cases, the optimal transport plan takes the form of a one-to-one mapping from the source support to the target support, but learning or even approximating such a map is computationally challenging for large and high-dimensional datasets due to the high cost of linear programming routines and an intrinsic curse of dimensionality. We study instead the Sinkhorn problem, a regularized form of optimal transport whose solutions are couplings between the source and the target distribution. We introduce a novel framework for learning the Sinkhorn coupling between two distributions in the form of a score-based generative model. Conditioned on source data, our procedure iterates Langevin Dynamics to sample target data according to the regularized optimal coupling. Key to this approach is a neural network parametrization of the Sinkhorn problem, and we prove convergence of gradient descent with respect to network parameters in this formulation. We demonstrate its empirical success on a variety of large scale optimal transport tasks. \end{abstract} \section{Introduction} \label{sec:intro} It is often useful to compare two data distributions by computing a distance between them in some appropriate metric. For instance, statistical distances can be used to fit the parameters of a distribution to match some given data. Comparison of statistical distances can also enable distribution testing, quantification of distribution shifts, and provide methods to correct for distribution shift through domain adaptation \cite{Kouw_Loog_2019}. Optimal transport theory provides a rich set of tools for comparing distributions in \textit{Wasserstein Distance}. 
Intuitively, an optimal transport plan from a source distribution $\sigma \in \mathcal{M}_+(\mathcal{X})$ to a target distribution $\tau \in \mathcal{M}_+(\mathcal{Y})$ is a blueprint for transporting the mass of $\sigma$ to match that of $\tau$ as cheaply as possible with respect to some ground cost. Here, $\mathcal{X}$ and $\mathcal{Y}$ are compact metric spaces and $\mathcal{M}_+(\mathcal{X})$ denotes the set of positive Radon measures over $\mathcal{X}$, and it is assumed that $\sigma$, $\tau$ are supported over all of $\mathcal{X}$, $\mathcal{Y}$ respectively. The Wasserstein Distance between two distributions is defined to be the cost of an optimal transport plan. Because the ground cost can incorporate underlying geometry of the data space, optimal transport plans often provide a meaningful correspondence between points in $\mathcal{X}$ and $\mathcal{Y}$. A famous example is given by Brenier's Theorem, which states that, when $\mathcal{X}, \mathcal{Y} \subseteq {\mathbb R}^d$ and $\sigma$, $\tau$ have finite variance, the optimal transport plan under a squared-$l_2$ ground cost is realized by a map $T: \mathcal{X} \to \mathcal{Y}$ \cite[Theorem~2.12]{Villani_Society_2003}. However, it is often computationally challenging to exactly compute optimal transport plans, as one must exactly solve a linear program requiring time which is super-quadratic in the size of input datasets \cite{Cuturi_2013}. \begin{figure} \caption{We use SCONES to sample the mean-squared-$L^2$ cost, entropy regularized optimal transport mapping between 2x downsampled CelebA images (Source) and unmodified CelebA images (Target) at $\lambda = 0.005$ regularization. } \label{fig:celeba32px-celeba} \end{figure} Instead, we opt to study a regularized form of the optimal transport problem whose solution takes the form of a joint density $\pi(x,y)$ with marginals $\pi_X(x) = \sigma(x)$ and $\pi_Y(y) = \tau(y)$. 
A correspondence between points is given by the conditional distribution $\pi_{Y \mid X = x}(y)$, which relates each input point to a distribution over output points. In recent work \cite{Seguy_Damodaran_Flamary_Courty_Rolet_Blondel_2018}, the authors propose a large-scale stochastic dual approach in which $\pi(x, y)$ is parametrized by two continuous dual variables that may be represented by neural networks and trained at large scale via stochastic gradient ascent. Then, with access to $\pi(x, y)$, they approximate an optimal transport \textit{map} using a barycentric projection of the form $T : x \mapsto \argmin_{y} {\mathbb E}_{\pi_{Y \mid X = x}}[d(y, Y)]$, where $d:\mathcal{Y} \times \mathcal{Y} \to {\mathbb R}$ is a convex cost on $\mathcal{Y}$. Their method is extended by \cite{Li_Genevay_Yurochkin_Solomon_2020} to the problem of learning regularized Wasserstein barycenters. In both cases, the barycentric projection is observed to induce averaging artifacts such as those shown in Figure \ref{fig:methods-comparison}. Instead, we propose a direct sampling strategy to generate samples from $\pi_{Y \mid X=x}(y)$ using a \textit{score-based generative model}. Score-based generative models are trained to sample a generic probability density by iterating a stochastic dynamical system known as \textit{Langevin dynamics} \citep{Song_Ermon_2020}. In contrast to projection methods for large-scale optimal transport, we demonstrate that pre-trained score based generative models can be naturally applied to the problem of large-scale regularized optimal transport. Our main contributions are as follows: \begin{enumerate} \item We show that pretrained score based generative models can be easily adapted for the purpose of sampling high dimensional regularized optimal transport plans. Our method eliminates the need to estimate a barycentric projection and it results in sharper samples because it eliminates averaging artifacts incurred by such a projection.
\item Score based generative models have been used for unconditional data generation and for conditional data generation in settings such as inpainting. We demonstrate how to adapt pretrained score based generative models for the more challenging conditional sampling problem of \textit{regularized optimal transport}. \item Our method relies on a neural network parametrization of the dual regularized optimal transport problem. Under assumptions of large network width, we prove that gradient descent w.r.t. neural network parameters converges to a global maximizer of the dual problem. We also prove optimization error bounds based on a stability analysis of the dual problem. \item We demonstrate the empirical success of our method on a synthetic optimal transport task and on optimal transport of high dimensional image data. \end{enumerate} \section{Background and Related Work}\label{sec:background} We will briefly review some key facts about optimal transport and generative modeling. For a more expansive background on optimal transport, we recommend the references \citep{Villani_Society_2003} and \citep{Thorpe}. \begin{figure} \caption{Samples generated by SCONES for entropy regularized optimal transport, including the samples shown in Figure \ref{fig:celeba32px-celeba}.} \label{fig:my_label} \end{figure} \subsection{Regularized Optimal Transport}\label{sec:background-reg-ot} We begin by reviewing the formulation of the regularized OT problem. \begin{dfn}[Regularized OT] \label{def:reg-ot} Let $\sigma \in \mathcal{M}_+(\mathcal{X})$ and $\tau \in \mathcal{M}_+(\mathcal{Y})$ be probability measures supported on compact sets $\mathcal{X}$, $\mathcal{Y}$. Let $c : \mathcal{X} \times \mathcal{Y} \to {\mathbb R}$ be a convex, lower semi-continuous function representing the cost of transporting a point $x \in \mathcal{X}$ to $y \in \mathcal{Y}$.
The regularized optimal transport distance $\text{ \fontfamily{cmr}\selectfont OT}_\lambda(\sigma, \tau)$ is given by \begin{align}\label{eqn:reg-ot} \text{ \fontfamily{cmr}\selectfont OT}_\lambda(\sigma, \tau)& = \min_{\pi} {\mathbb E}_\pi[c(x, y)] + \lambda H(\pi) \\ \text{subject to} & \quad \pi_X = \sigma, \quad \pi_Y = \tau \nonumber \\ & \quad \pi(x, y) \geq 0 \nonumber \end{align} where $H:\mathcal{M}_+(\mathcal{X} \times \mathcal{Y}) \to {\mathbb R}$ is a convex regularizer and $\lambda \geq 0$ is a regularization parameter. \end{dfn} We are mainly concerned with optimal transport of empirical distributions, where $\mathcal{X}$ and $\mathcal{Y}$ are finite and $\sigma$, $\tau$ are empirical probability vectors. In most of the following theorems, we will work in the \textit{empirical setting} of Definition \ref{def:reg-ot}, so that $\mathcal{X}$ and $\mathcal{Y}$ are finite subsets of ${\mathbb R}^d$ and $\sigma$, $\tau$ are vectors in the probability simplices of dimension $|\mathcal{X}|$ and $|\mathcal{Y}|$, respectively. We refer to the objective $K_\lambda(\pi) = {\mathbb E}_\pi[c(x, y)] + \lambda H(\pi)$ as the \textit{primal objective}, and we will use $J_\lambda(\phi, \psi)$ to refer to the associated \textit{dual objective}, with dual variables $\phi$, $\psi$. 
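In the empirical setting, small entropy-regularized instances of Definition \ref{def:reg-ot} can be solved with Sinkhorn's fixed-point iterations. The following NumPy sketch is our own toy illustration (problem size, cost, regularization strength, and iteration count are arbitrary choices), not the large-scale neural approach used later in this paper:

```python
import numpy as np

def sinkhorn(C, sigma, tau, lam, iters=2000):
    """Entropy-regularized OT on a small discrete problem via Sinkhorn's
    fixed-point iterations: alternately rescale the Gibbs kernel
    K = exp(-C / lam) so that the coupling matches the marginals."""
    K = np.exp(-C / lam)
    u = np.ones_like(sigma)
    for _ in range(iters):
        v = tau / (K.T @ u)
        u = sigma / (K @ v)
    return u[:, None] * K * v[None, :]          # the coupling pi

# Toy instance: 4 source and 5 target points, squared-l2 ground cost.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (4, 1)), rng.uniform(-1, 1, (5, 1))
C = (x - y.T) ** 2
sigma, tau = np.full(4, 0.25), np.full(5, 0.2)
pi = sinkhorn(C, sigma, tau, lam=1.0)
assert np.allclose(pi.sum(axis=1), sigma)       # marginal pi_X = sigma
assert np.allclose(pi.sum(axis=0), tau)         # marginal pi_Y = tau
```

These finite-dimensional iterates are what the neural dual parametrization described below replaces at image scale.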
Two common regularizers are $H(\pi) = \text{ \fontfamily{cmr}\selectfont KL}(\pi || \sigma \times \tau)$ and $H(\pi) = \chi^2(\pi || \sigma \times \tau)$, sometimes called \textit{entropy} and \textit{$l_2$} regularization respectively: \begin{align*} \text{ \fontfamily{cmr}\selectfont KL}(\pi || \sigma \times \tau) & = {\mathbb E}_{\pi} \left[\log\left( \frac{d\pi(x, y)}{d\sigma(x)d\tau(y)}\right) \right], \quad \chi^2(\pi || \sigma \times \tau) = {\mathbb E}_{\sigma \times \tau} \left[ \left( \frac{d \pi(x, y)}{ d \sigma(x) d\tau(y)} \right)^2 \right] \end{align*} where $\frac{d \pi(x, y)}{d \sigma(x) d \tau(y)}$ is the Radon-Nikodym derivative of $\pi$ with respect to the product measure $\sigma \times \tau$. These regularizers contribute useful optimization properties to the primal and dual problems. For example, $\text{ \fontfamily{cmr}\selectfont KL}(\pi || \sigma \times \tau)$ is exactly the mutual information $I_\pi(X;Y)$ of the coupling $(X, Y) \sim \pi$, so intuitively speaking, entropy regularization explicitly prevents $\pi_{Y \mid X=x}$ from concentrating on a point by stipulating that the conditional measure retain some bits of uncertainty after conditioning. The effects of this regularization are described by Propositions \ref{prop:kl-dual-1} and \ref{prop:kl-dual-2}. First, regularization induces convexity properties which are useful from an optimization perspective. \begin{prop}\label{prop:kl-dual-1} In the empirical setting of Definition \ref{def:reg-ot}, the entropy regularized primal problem $K_\lambda(\pi)$ is $\lambda$-strongly convex in $l_1$ norm. The dual problem $J_\lambda(\phi, \psi)$ is concave, unconstrained, and $\frac{1}{\lambda}$-strongly smooth in $l_\infty$ norm. 
Additionally, these objectives witness strong duality: $\inf_{\pi \in \mathcal{M}_+(\mathcal{X} \times \mathcal{Y})} K_\lambda(\pi) = \sup_{\phi \in {\mathbb R}^{|\mathcal{X}|},\, \psi \in {\mathbb R}^{|\mathcal{Y}|}} J_\lambda(\phi, \psi)$, and the extrema of each objective are attained over their respective domains. \end{prop} In addition to these optimization properties, regularizing the OT problem induces a specific form of the dual objective and resulting optimal solutions. \begin{prop}\label{prop:kl-dual-2} In the setting of Proposition \ref{prop:kl-dual-1}, the KL-regularized dual objective takes the form \begin{align*} J_\lambda(\phi, \psi) & \coloneqq {\mathbb E}_{\sigma} [\phi(x)] + {\mathbb E}_{\tau} [\psi(y)] \\ & - \lambda {\mathbb E}_{\sigma \times \tau}\left[ \frac{1}{e} \exp\left( \frac{1}{\lambda} \left(\phi(x) + \psi(y) - c(x, y) \right) \right) \right]. \end{align*} The optimal solutions $\phi^*, \psi^* = \argmax_{\phi \in {\mathbb R}^{|\mathcal{X}|},\, \psi \in {\mathbb R}^{|\mathcal{Y}|}} J_\lambda(\phi, \psi)$ and $\pi^* = \argmin_{\pi \in \mathcal{M}_+(\mathcal{X} \times \mathcal{Y})} K_\lambda(\pi)$ satisfy \begin{align*} \pi^*(x, y) = \frac{1}{e} \exp\left( \frac{1}{\lambda} \left(\phi^*(x) + \psi^*(y) - c(x, y) \right) \right) \sigma(x)\tau(y). \end{align*} \end{prop} These propositions are specializations of Proposition \ref{prop:f-div-dual} and they are well known in the literature on entropy regularized optimal transport \citep{Cuturi_2013, Blondel_Seguy_Rolet_2018}. The solution $\pi^*(x, y)$ of the entropy regularized problem is often called the \textit{Sinkhorn coupling} between $\sigma$ and $\tau$ in reference to Sinkhorn's Algorithm \citep{Sinkhorn_1966}, a popular approach to efficiently solving the discrete entropy regularized OT problem. For \textit{arbitrary} choices of regularization, we call $\pi^*(x, y)$ a Sinkhorn coupling.
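Proposition \ref{prop:kl-dual-2} can be checked numerically on a toy discrete problem: gradient ascent on the finite-dimensional dual variables, followed by recovery of the primal coupling from the stated formula. A minimal sketch (the problem size, step size, and iteration count are our own illustrative choices):

```python
import numpy as np

# Gradient ascent on the KL-regularized dual J_lam(phi, psi) of a small
# discrete OT problem, then recovery of the primal coupling from the
# optimal potentials as in Prop. (kl-dual-2).
rng = np.random.default_rng(1)
n, m, lam = 4, 5, 1.0
C = rng.random((n, m))
sigma, tau = np.full(n, 1 / n), np.full(m, 1 / m)
phi, psi = np.zeros(n), np.zeros(m)

for _ in range(5000):
    # pi(x, y) = (1/e) exp((phi(x) + psi(y) - c(x, y)) / lam) sigma(x) tau(y)
    P = np.exp((phi[:, None] + psi[None, :] - C) / lam - 1) \
        * sigma[:, None] * tau[None, :]
    # the gradients of J are the marginal-constraint residuals
    phi += 0.5 * (sigma - P.sum(axis=1))
    psi += 0.5 * (tau - P.sum(axis=0))

# at the maximizer, the recovered coupling has the prescribed marginals
assert np.allclose(P.sum(axis=1), sigma, atol=1e-6)
assert np.allclose(P.sum(axis=0), tau, atol=1e-6)
```

At the maximizer the marginal residuals vanish, so the recovered matrix is a genuine coupling of $\sigma$ and $\tau$; replacing the vectors `phi`, `psi` with neural networks gives the large-scale version discussed in Section \ref{sec:method}.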
Propositions \ref{prop:kl-dual-1} and \ref{prop:kl-dual-2} illustrate the main desiderata when choosing the regularizer: that $H(\pi)$, and hence $K_\lambda(\pi)$, be strongly convex in $l_1$ norm and that $H(\pi)$ induce a nice analytic form of $\pi^*$ in terms of $\phi^*$, $\psi^*$. With regard to the former, $H(\pi)$ is akin to barrier functions used by interior point methods \citep{Cuturi_2013}. Prior work \citep{Blondel_Seguy_Rolet_2018} is an example of the latter, in which it is shown that for discrete optimal transport, $\chi^2$ regularization yields an analytic form of $\pi^*$ having a thresholding operation that promotes sparsity. Conveniently, the KL and $\chi^2$ regularizers both belong to the class of $f$-Divergences, which are statistical divergences of the form \begin{align*} D_f(p || q) = {\mathbb E}_q \left[ f\left(\frac{p(x)}{q(x)}\right) \right], \end{align*} where $f: {\mathbb R} \to {\mathbb R}$ is convex, $f(1) = 0$, $p, q$ are probability measures, and $p$ is absolutely continuous with respect to $q$. For example, the KL regularizer has $f_{\text{ \fontfamily{cmr}\selectfont KL}}(t) = t\log(t)$ and the $\chi^2$ regularizer has $f_{\chi^2}(t) = t^2 - 1$. The $f$-Divergences are good choices for regularizing optimal transport: strong convexity of $f$ is a sufficient condition for strong convexity of $H_f(\pi) \coloneqq D_f(\pi || \sigma \times \tau)$ in $l_1$ norm, and the form of $f$ is the aspect which determines the form of $\pi^*$ in terms of $\phi^*$, $\psi^*$. This relationship is captured by the following generalization of Propositions \ref{prop:kl-dual-1} and \ref{prop:kl-dual-2}, which we prove in Section \ref{supp:proofs} of the Supplemental Materials. \begin{prop}\label{prop:f-div-dual} Consider the empirical setting of Definition \ref{def:reg-ot}. Let $f(v) : {\mathbb R} \to {\mathbb R}$ be a differentiable $\alpha$-strongly convex function with convex conjugate $f^*(v)$. Set $f^{*\prime}(v) = \partial_v f^*(v)$.
Define the violation function $V(x, y ; \phi, \psi) = \phi(x) + \psi(y) - c(x, y)$. Then, \begin{enumerate} \item The $D_f$ regularized primal problem $K_\lambda(\pi)$ is $\lambda \alpha$-strongly convex in $l_1$ norm. With respect to dual variables $\phi \in {\mathbb R}^{|\mathcal{X}|}$ and $\psi \in {\mathbb R}^{|\mathcal{Y}|}$, the dual problem $J_\lambda(\phi, \psi)$ is concave, unconstrained, and $\frac{1}{\lambda \alpha}$-strongly smooth in $l_\infty$ norm. Strong duality holds: $K_\lambda(\pi) \geq J_\lambda(\phi, \psi)$ for all $\pi$, $\phi$, $\psi$, with equality for some triple $\pi^*, \phi^*, \psi^*$. \item $J_\lambda(\phi, \psi)$ takes the form $\displaystyle J_\lambda(\phi, \psi) = {\mathbb E}_\sigma[\phi(x)] + {\mathbb E}_\tau[\psi(y)] - {\mathbb E}_{\sigma \times \tau} [ H^*_f(V(x, y; \phi, \psi))]$, where $H^*_f(v) = \lambda f^* (\lambda^{-1} v)$. \item The optimal solutions $(\pi^*, \phi^*, \psi^*)$ satisfy \begin{align*} \pi^*(x, y) = M_f(V(x, y; \phi^*, \psi^*)) \sigma(x) \tau(y) \end{align*} where $M_f(v) = f^{*\prime}(\lambda^{-1}v)$. \end{enumerate} \end{prop} For this reason, we focus in this work on $f$-Divergence-based regularizers. Where it is clear, we will drop subscripts on the regularizer $H(\pi)$ and the so-called \textit{compatibility function} $M(v)$ and we will omit the dual variable arguments of $V(x, y)$. The specific form of these terms for $\text{ \fontfamily{cmr}\selectfont KL}$ regularization, $\chi^2$ regularization, and a variety of other regularizers may be found in Section \ref{supp:reg-via-fdiv} of the Appendix. \subsection{Langevin Sampling and Score Based Generative Modeling}\label{sec:background-langevin} Given access to the optimal dual variables $\phi^*(x)$, $\psi^*(y)$, it is easy to evaluate the density of the corresponding optimal coupling according to Proposition \ref{prop:f-div-dual}. To generate samples distributed according to this coupling, we apply \textit{Langevin Sampling}.
The key quantity used in Langevin sampling of a generic (possibly unnormalized) probability measure $p(x)$ is its \textit{score function}, given by $\nabla_x \log p(x)$ for $x \in \mathcal{X}$. The algorithm is an iterative Monte Carlo method which generates approximate samples $\tilde{x}_t$ by iterating the map \begin{align*} \tilde{x}_t = \tilde{x}_{t-1} + \varepsilon \nabla_x \log p(\tilde{x}_{t-1}) + \sqrt{2\varepsilon} z_t \end{align*} where $\varepsilon > 0$ is a step size parameter and where $z_t \sim \mathcal{N}(0, I)$ independently at each time step $t \geq 0$. In the limit $\varepsilon \to 0$ and $T \to \infty$, the samples $\tilde{x}_T$ converge in distribution to $p(x)$. \citet{Song_Ermon_2020} introduce a method to estimate the score with a neural network $s_{\vartheta}(x)$, trained on samples from $p(x)$, so that it approximates $s_\vartheta(x) \approx \nabla_x \log p(x)$ for a given $x \in \mathcal{X}$. To generate samples, one may iterate Langevin dynamics with the score estimate in place of the true score. To scale this method to high dimensional image datasets, \citet{Song_Ermon_2020} propose an annealing scheme which samples noised versions of $p(x)$ as the noise is gradually reduced. One first samples a noised distribution $p(x) \ast \mathcal{N}(0, \tau_1)$ at noise level $\tau_1$. The noisy samples, which are presumed to lie near high density regions of $p(x)$, are used to initialize additional rounds of Langevin dynamics at diminishing noise levels $\tau_2 > \ldots > \tau_N > 0$. At the final round, Annealed Langevin Sampling outputs approximate samples according to the noiseless distribution. \citet{Song_Ermon_2020} demonstrate that Annealed Langevin Sampling (ALS) with score estimation can be used to generate sample images that rival the quality of popular generative modeling tools like GANs or VAEs.
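The Langevin iteration above can be sketched in a few lines. As a toy illustration (ours, not the annealed image sampler), we sample a standard one-dimensional Gaussian, whose score $\nabla_x \log p(x) = -x$ is available in closed form:

```python
import math, random

# Unadjusted Langevin dynamics for the toy target p = N(0, 1), whose
# score is grad_x log p(x) = -x.  With step size eps, the iterates follow
#     x_t = x_{t-1} + eps * score(x_{t-1}) + sqrt(2 * eps) * z_t.
# Step size, chain length, and chain count are illustrative choices.
random.seed(0)
eps, T, n_chains = 0.01, 1000, 1000
score = lambda x: -x

samples = []
for _ in range(n_chains):
    x = random.uniform(-5, 5)                   # arbitrary initialization
    for _ in range(T):
        x = x + eps * score(x) + math.sqrt(2 * eps) * random.gauss(0, 1)
    samples.append(x)

mean = sum(samples) / n_chains
var = sum((s - mean) ** 2 for s in samples) / n_chains
assert abs(mean) < 0.15         # target mean is 0
assert abs(var - 1.0) < 0.2     # target variance is 1 (up to O(eps) bias)
```

In practice the closed-form score is replaced by the learned estimate $s_\vartheta$, which is exactly the substitution made in the next section.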
\section{Conditional Sampling of Regularized Optimal Transport Plans} \label{sec:method} Our approach can be split into two main steps. First, we approximate the density of the optimal Sinkhorn coupling $\pi^*(x, y)$ which minimizes $K_\lambda(\pi)$ over the data. To do so, we apply the large-scale stochastic dual approach introduced by \citet{Seguy_Damodaran_Flamary_Courty_Rolet_Blondel_2018}, which involves instantiating neural networks $\phi_{\theta} : \mathcal{X} \to {\mathbb R}$ and $\psi_{\theta} : \mathcal{Y} \to {\mathbb R}$ that serve as parametrized dual variables. We then maximize $J_\lambda(\phi_{\theta}, \psi_{\theta})$ with respect to $\theta$ via gradient ascent and take the resulting parameters $\theta^*$ and the associated transport plan $\hat{\pi}(x, y) = M(V(x, y; \phi_{\theta^*}, \psi_{\theta^*})) \sigma(x) \tau(y)$. This procedure is shown in Algorithm \ref{algo:part1}. Note that when the dual problem is only approximately maximized, $\hat{\pi}$ need not be a normalized density. We therefore call $\hat{\pi}$ the \textit{pseudo-coupling} which approximates the true Sinkhorn coupling $\pi^*$. After optimizing $\theta^*$, we sample the conditional $\hat{\pi}_{Y \mid X=x}(y)$ using Langevin dynamics. The score estimator for the conditional distribution is, \begin{align*} \nabla_y \log \hat{\pi}_{Y \mid X = x}(y) & = \nabla_y [ \log( M(V(x, y ; \phi_{\theta^*}, \psi_{\theta^*})) \sigma(x) \tau(y) ) - \log( \sigma(x) )] \\ & \approx \nabla_y \log(M(V(x, y ; \phi_{\theta^*}, \psi_{\theta^*}))) + s_{\vartheta}(y). \end{align*} We therefore approximate $\nabla_y \log \hat{\pi}_{ Y \mid X = x}(y)$ by directly differentiating $\log M(V(x, y))$ using standard automatic differentiation tools and adding the result to an unconditional score estimate $s_\vartheta(y)$. The full Langevin sampling algorithm for general regularized optimal transport is shown in Algorithm \ref{algo:part2}.
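The conditional score decomposition can be checked numerically in a toy case. For entropic (KL) regularization, $\log M(v) = v/\lambda - 1$, so $\nabla_y \log M(V(x, y)) = (\nabla_y \psi(y) - \nabla_y c(x, y))/\lambda$. The linear dual potential $\psi$, the Gaussian target, and all constants below are illustrative stand-ins of ours, with analytic gradients in place of automatic differentiation:

```python
import numpy as np

lam = 2.0
a = np.array([0.3, -0.7])     # toy dual potential psi(y) = <a, y>
mu = np.array([1.0, -1.0])    # toy target tau = N(mu, I), score -(y - mu)

def c(x, y):           return np.sum((x - y) ** 2)      # L2 cost
def V(x, y):           return a @ y - c(x, y)           # phi(x) drops out in grad_y
def log_pi_cond(x, y): return V(x, y) / lam - 0.5 * np.sum((y - mu) ** 2)

def cond_score(x, y):
    """grad_y log pi_hat(y | x) = grad_y log M(V(x, y)) + grad_y log tau(y)."""
    return (a - 2.0 * (y - x)) / lam - (y - mu)

# Central finite differences of log_pi_cond as a check on the analytic score.
x, y = np.array([0.5, 0.2]), np.array([-0.3, 0.8])
h, g = 1e-5, np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = h
    g[i] = (log_pi_cond(x, y + e) - log_pi_cond(x, y - e)) / (2 * h)
err = np.max(np.abs(g - cond_score(x, y)))
print(err)  # close to zero
```

In practice the analytic gradient of $\log M(V)$ is replaced by automatic differentiation through the trained dual networks, and the target score by $s_\vartheta(y)$.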
We note that our method has the effect of biasing the Langevin iterates towards the region where $\hat \pi_{Y|X=x}$ is localized. This may be beneficial for Langevin sampling, which enjoys exponentially fast mixing when sampling log-concave distributions. In the supplementary material, we prove a known result for the entropy regularized problem given in Proposition \ref{prop:kl-dual-2}: the compatibility $M(V(x, y ; \phi^*, \psi^*)) = \frac{1}{e} \exp\left( \frac{1}{\lambda} \left(\phi^*(x) + \psi^*(y) - c(x, y) \right) \right)$ is log-concave with respect to $y$. For $\lambda\to 0$, this localizes around the optimal transport image $T(x)$ of $x$, and so heuristically should lead to faster mixing. \begin{figure} \caption{Density Estimation.} \label{algo:part1} \caption{SCONES Sampling Procedure} \label{algo:part2} \end{figure} \section{Theoretical Analysis} \label{sec:theory} In principle, the empirical setting of Definition \ref{def:reg-ot} poses an unconstrained optimization problem over ${\mathbb R}^{|\mathcal{X}|}\times {\mathbb R}^{|\mathcal{Y}|}$, which could be optimized directly by gradient ascent on the vectors $(\phi, \psi)$. The point of a more expensive neural network parametrization $\phi_\theta$, $\psi_\theta$ is to learn a \textit{continuous} distribution that agrees with the empirical optimal transport plan between discrete $\sigma$, $\tau$, and that generalizes to a continuous space containing $\mathcal{X} \times \mathcal{Y}$. By training $\phi_\theta$, $\psi_\theta$, we approximate the underlying continuous data distribution up to optimization error and up to statistical estimation error between the empirical coupling and the population coupling. In the present section, we justify this approach by proving convergence of Algorithm \ref{algo:part1} to the global maximizer of $J_\lambda(\phi, \psi)$, under assumptions of large network width, along with a quantitative bound on optimization error.
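Direct gradient ascent on the dual vectors can be sketched for a small entropic problem; here $M(v) = \exp(v/\lambda - 1)$ and the dual gradients are exactly the marginal violations of the induced pseudo-coupling. Problem sizes, learning rate, and iteration count are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, lam, lr = 4, 5, 1.0, 1.0
sigma = np.full(nx, 1.0 / nx)                 # uniform source marginal
tau = np.full(ny, 1.0 / ny)                   # uniform target marginal
C = rng.uniform(0.0, 1.0, size=(nx, ny))      # cost matrix c(x_i, y_j)
phi, psi = np.zeros(nx), np.zeros(ny)

for _ in range(2000):
    Vm = phi[:, None] + psi[None, :] - C       # violation V(x, y)
    pi = np.exp(Vm / lam - 1.0) * sigma[:, None] * tau[None, :]
    # Ascent direction for phi (resp. psi) is sigma - row sums (resp. tau - col sums).
    phi += lr * (sigma - pi.sum(axis=1))
    psi += lr * (tau - pi.sum(axis=0))

print(np.abs(pi.sum(axis=1) - sigma).max(), np.abs(pi.sum(axis=0) - tau).max())
```

At a maximizer the marginal violations vanish, so the recovered $\hat{\pi}$ is a valid coupling; the neural parametrization plays the same role while generalizing off the data points.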
In Section \ref{supp:stat-est} of the Appendix, we provide a cursory analysis of rates of statistical estimation of entropy regularized Sinkhorn couplings. We make the following main assumptions on the neural networks $\phi_{\theta}$ and $\psi_{\theta}$. \begin{assn}[Approximate Linearity]\label{assn:nets} Let $f_\theta(x)$ be a neural network with parameters $\theta \in \Theta$, where $\Theta$ is a set of feasible weights, for example those reachable by gradient descent. Fix a dataset $\{X_i\}_{i=1}^N$ and let $\mathcal{K}_\theta \in {\mathbb R}^{N \times N}$ be the Gram matrix of coordinates $[\mathcal{K}_\theta]_{ij} = \langle \nabla_\theta f_\theta(X_i), \nabla_\theta f_\theta(X_j) \rangle $. Then $f_\theta(x)$ must satisfy, \begin{enumerate} \item There exists $R \gg 0$ so that $\Theta \subseteq B(0, R)$, where $B(0, R)$ is the Euclidean ball of radius $R$. \item There exist $\rho_M > \rho_m > 0$ such that for $\theta \in \Theta$, \begin{align*} \rho_M \geq \lambda_{\text{max}}(\mathcal{K}_\theta) \geq \lambda_{\text{min}}(\mathcal{K}_\theta) \geq \rho_m > 0. \end{align*} \item For $\theta \in \Theta$ and for all data points $\{X_i\}_{i=1}^N$, the Hessian matrix $D^2_\theta f_\theta(X_i)$ is bounded in spectral norm: $ \| D^2_\theta f_\theta(X_i) \| \leq \frac{\rho_M}{C_h}$, where $C_h \gg 0$ depends only on $R$, $N$, and the regularization $\lambda$. \end{enumerate} \end{assn} The dependencies of $C_h$ are made clear in the Supplemental Materials, Section \ref{supp:proofs}. The quantity $\mathcal{K}_\theta$ is called the \textit{neural tangent kernel} (NTK) associated with the network $f_\theta(x)$. It has been shown for a variety of nets that, at sufficiently large width, the NTK is well conditioned and nearly constant on the set of weights reachable by gradient descent on a convex objective function \citep{Liu_Zhu_Belkin_2020, Du_Hu_2019, Du_Lee_Li_Wang_Zhai_2019}. 
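The NTK Gram matrix of Assumption \ref{assn:nets} can be computed explicitly for a toy one-hidden-layer network $f_\theta(x) = w_2^\top \tanh(W_1 x)$, using analytic parameter gradients; width, data, and initialization below are illustrative, not the networks used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
width, dim, N = 64, 3, 6
W1 = rng.standard_normal((width, dim)) / np.sqrt(dim)
w2 = rng.standard_normal(width) / np.sqrt(width)
X = rng.standard_normal((N, dim))

def param_grad(x):
    """Gradient of f_theta(x) = w2 @ tanh(W1 @ x) w.r.t. (W1, w2), flattened."""
    h = np.tanh(W1 @ x)
    gW1 = np.outer(w2 * (1.0 - h ** 2), x)   # d f / d W1
    return np.concatenate([gW1.ravel(), h])  # d f / d w2 = h

G = np.stack([param_grad(x) for x in X])     # N x (#params) Jacobian
K = G @ G.T                                  # NTK Gram matrix [K]_ij
eigs = np.linalg.eigvalsh(K)
print(eigs.min(), eigs.max())
```

As a Gram matrix of parameter gradients, $\mathcal{K}_\theta$ is always symmetric positive semi-definite; the substantive content of Assumption \ref{assn:nets} is the uniform two-sided eigenvalue bound over the feasible set $\Theta$, which the wide-network results cited above provide.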
For instance, fully-connected networks with smooth and Lipschitz-continuous activations fall into this class and hence satisfy Assumption \ref{assn:nets} when the width of all layers is sufficiently large \citep{Liu_Zhu_Belkin_2020}. First, we show in Theorem \ref{thm:opt-neural-nets} that when $\phi$, $\psi$ are parametrized by neural networks satisfying Assumption \ref{assn:nets}, gradient ascent converges to \textit{global} maximizers of the dual objective. This provides additional justification for the large-scale approach of \citet{Seguy_Damodaran_Flamary_Courty_Rolet_Blondel_2018}. \begin{thm}[Optimizing Neural Nets]\label{thm:opt-neural-nets} Suppose $J_\lambda(\phi, \psi)$ is $\frac{1}{s}$-strongly smooth in $l_\infty$ norm. Let $\phi_{\theta}$, $\psi_{\theta}$ be neural networks satisfying Assumption \ref{assn:nets} for the dataset $\{(x_i, y_i)\}_{i=1}^N$, $N = |\mathcal{X}| \cdot |\mathcal{Y}|$. Then gradient ascent on $J_\lambda(\phi_{\theta}, \psi_{\theta})$ with respect to $\theta$ at learning rate $\eta = \frac{\lambda}{2 \rho_M}$ converges to an $\varepsilon$-approximate global maximizer of $J_\lambda$ in at most $\left(\frac{2\kappa R^2}{s}\right) \varepsilon^{-1}$ iterations, where $\kappa = \frac{\rho_M}{\rho_m}$. \end{thm} Given outputs $\hat{\phi}$, $\hat{\psi}$ of Algorithm \ref{algo:part1}, we may assume by Theorem \ref{thm:opt-neural-nets} that the networks are $\varepsilon$-approximate global maximizers of $J_\lambda(\phi, \psi)$. Due to $\lambda \alpha$-strong convexity of the primal objective, the optimization error $\varepsilon$ bounds the distance of the underlying pseudo-plan $\hat{\pi}$ from the true minimizer $\pi^*$. We make this bound concrete in Theorem \ref{thm:stability}, which guarantees that approximately maximizing $J_\lambda(\phi, \psi)$ is sufficient to produce a close approximation of the true empirical Sinkhorn coupling.
\begin{thm}[Stability of the OT Problem] \label{thm:stability} Suppose $K_\lambda(\pi)$ is $s$-strongly convex in $l_1$ norm and let $\mathcal{L}(\phi, \psi, \pi)$ be the Lagrangian of the regularized optimal transport problem. For $\hat{\phi}$, $\hat{\psi}$ which are $\varepsilon$-approximate maximizers of $J_\lambda(\phi, \psi)$, the pseudo-plan $\hat{\pi} = M_f(V(x, y; \hat{\phi}, \hat{\psi})) \sigma(x) \tau(y)$ satisfies \begin{align*} |\hat{\pi} - \pi^* |_1 \leq \sqrt{\frac{2\varepsilon}{s}} \leq \frac{1}{s} \left| \nabla_{\hat{\pi}} \mathcal{L}(\hat{\phi}, \hat{\psi},\hat{\pi}) \right|_1. \end{align*} \end{thm} Theorem \ref{thm:stability} guarantees that if one can approximately optimize the dual objective using Algorithm \ref{algo:part1}, then the corresponding coupling $\hat{\pi}$ is close in $l_1$ norm to the true optimal transport coupling. This approximation guarantee justifies the choice to draw samples from $\hat{\pi}_{Y \mid X = x}$ as an approximation to sampling from $\pi^*_{Y \mid X = x}$. Both Theorems \ref{thm:opt-neural-nets} and \ref{thm:stability} are proven in Section \ref{supp:proofs} of the Appendix. \section{Experiments} \label{sec:expts} Our main point of comparison is the barycentric projection method proposed by \citet{Seguy_Damodaran_Flamary_Courty_Rolet_Blondel_2018}, which trains a neural network $T_\theta : \mathcal{X} \to \mathcal{Y}$ to map source data to target data by optimizing the objective $ \theta \coloneqq \argmin_{\theta} {\mathbb E}_{(x, Y) \sim \hat{\pi}}[|T_\theta(x)-Y|^2]$. For transportation experiments between USPS \cite{usps} and MNIST \cite{mnist} datasets, we scale both datasets to 16px and parametrize the dual variables and barycentric projections by fully connected ReLU networks. We train score estimators for MNIST and USPS at 16px resolution using the method and architecture of \cite{Song_Ermon_2020}.
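For a discrete coupling, the minimizer of the barycentric projection objective is simply the conditional mean $T(x) = {\mathbb E}_{\pi}[Y \mid X = x]$, which a short sketch makes concrete (the tiny coupling and support points are illustrative, not data from our experiments):

```python
import numpy as np

Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # target support points
pi = np.array([[0.2, 0.1, 0.1],                       # toy coupling pi(x_i, y_j)
               [0.0, 0.3, 0.3]])

def barycentric_projection(pi, Y):
    """Conditional means E[Y | X = x_i]: normalize rows, then average Y."""
    cond = pi / pi.sum(axis=1, keepdims=True)   # rows of pi_{Y|X=x_i}
    return cond @ Y

T = barycentric_projection(pi, Y)
print(T)   # row i is the weighted average of target points under pi_{Y|X=x_i}
```

This averaging is exactly why barycentric outputs blur when $\pi_{Y \mid X = x}$ is multimodal: the conditional mean of several distinct target modes need not lie near any of them.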
For transportation experiments using CelebA \cite{liu2015faceattributes}, dual variables $\phi$, $\psi$ are parametrized as ReLU FCNs with eight 2048-dimensional hidden layers. Both the barycentric projection and the score estimators use the U-Net based image-to-image architecture introduced in \cite{Song_Ermon_2020}. Numerical hyperparameters like learning rates, optimizer parameters, and annealing schedules, along with additional details of our neural network architectures, are tabulated in Section \ref{sup:exp-details} of the Appendix. \subsection{Optimal Transportation of Image Data} \begin{figure} \caption{Comparison of Barycentric Projection \cite{Seguy_Damodaran_Flamary_Courty_Rolet_Blondel_2018} and SCONES samples on transportation between USPS and MNIST digits.} \label{fig:usps-mnist} \end{figure} We show in Figure \ref{fig:usps-mnist} a qualitative plot of SCONES samples on transportation between MNIST and USPS digits. We also show in Section \ref{sec:intro}, Figure \ref{fig:celeba32px-celeba} a qualitative plot of transportation of CelebA images. Because barycentric projection averages $\pi_{Y \mid X=x}$, output images are blurred and show visible mixing of multiple digits. By directly sampling the optimal transport plan, SCONES can separate these modes and generate more realistic images. At low regularization levels, Algorithm \ref{algo:part1} becomes more expensive and can become numerically unstable. As shown in Figures \ref{fig:usps-mnist} and \ref{fig:celeba32px-celeba}, SCONES can be used to sample the Sinkhorn coupling in intermediate regularization regimes, where optimal transport has a nontrivial effect despite $\pi_{Y | X =x}$ not concentrating on a single image. \begin{wrapfigure}[37]{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.445\textwidth]{figures/BP,SCONES_Qualitative.pdf} \end{center} \caption{Entropy regularized, $\lambda=2$, $L^2$ cost SCONES and BP samples on transportation from a unit Gaussian source distribution to the Swiss Roll target distribution.
For many samples, the Barycentric average lies off the manifold of high target density, whereas SCONES can separate multiple modes of the conditional coupling and correctly recover the target distribution.} \end{wrapfigure} To quantitatively assess the quality of images generated by SCONES, we compute the FID scores of generated CelebA images on two optimal transport problems: transporting 2x downsampled CelebA images to CelebA (the `Super-res.' task) and transporting CelebA to CelebA (the 'Identity' task) for a variety of regularization parameters. The FID score is a popular measurement of sample quality for image generative models and it is a proxy for agreement between the distribution of SCONES samples of the marginal $\pi_Y(y)$ and the true distribution $\tau(y)$. In both cases, we partition CelebA into two datasets of equal size and optimize Algorithm \ref{algo:part1} using separated partitions as source and target data, resizing the source data in the superresolution task. As shown in Table \ref{tbl:FID}, SCONES has a significantly lower FID score than samples generated by barycentric projection. However, under ideal tuning, the unconditional score network generates CelebA samples with FID score 10.23 \cite{Song_Ermon_2020}, so there is some cost in sample quality incurred when using SCONES. \begin{table}[t] \centering \setstretch{1.3} \begin{tabular}{|l|ccc|ccc|} \hline & \makecell[l{p{1.5cm}}]{\mbox{KL regularization}, \\ $\lambda = 0.1$} & \makecell[l{p{1.5cm}}]{~\\$\lambda = 0.01$} & \makecell[l{p{1.5cm}}]{~\\$\lambda = 0.005$} & \makecell[l{p{1.5cm}}]{\mbox{$\chi^2$ regularization}, \\ $\lambda = 0.1$} & \makecell[l{p{1.5cm}}]{~\\$\lambda = 0.01$} & \makecell[l{p{1.5cm}}]{~\\$\lambda = 0.001$} \\ \hline \setstretch{1} \rule{0pt}{1.5\normalbaselineskip} \makecell[l]{ SCONES, \\ Super-res.} & 35.59 & 35.77 & 43.80 & 25.84 & 25.64 & 25.59 \\ \setstretch{1} \rule{0pt}{1.5\normalbaselineskip} \makecell[l]{Bary. 
Proj., \\ Super-res.} & 193.92 & 230.85 & 228.78 & 190.10 & 216.54 & 212.72 \\[0.6em] \hline \setstretch{1} \rule{0pt}{1.5\normalbaselineskip} \makecell[l]{SCONES, \\ Identity} & 36.62 & 34.84 & 43.99 & 25.51 & 25.65 & 27.88 \\ \setstretch{1} \rule{0pt}{1.5\normalbaselineskip} \makecell[l]{Bary. Proj., \\ Identity} & 195.64 & 217.24 & 217.67 & 188.29 & 219.96 & 214.90 \\[0.6em] \hline \end{tabular} \setstretch{1} \rule{0pt}{0.3\normalbaselineskip} \caption{FID metric of samples generated by barycentric projection and SCONES, computed on $n=5000$ samples from each model. For comparison to unregularized OT methods, we also trained a Wasserstein-2 GAN ($W_2$ GAN) \cite{leygonie2019adversarial} and a Wasserstein-2 Generative Network ($W_2$ Gen) \cite{korotin2021wasserstein}. $W_2$ GAN achieves FIDs 55.77 on the super-res. task and 32.617 on the identity task. $W_2$ Gen achieves FIDs 32.80 on the super-res. task and 20.57 on the identity task.} \label{tbl:FID} \end{table} \subsection{Sampling Synthetic Data} To compare SCONES to a ground truth Sinkhorn coupling in a continuous setting, we consider entropy regularized optimal transport between Gaussian measures on ${\mathbb R}^d$. Given $\sigma = \mathcal{N}(\mu_1, \Sigma_1)$ and $\tau = \mathcal{N}(\mu_2, \Sigma_2)$, the Sinkhorn coupling of $\sigma$, $\tau$ is itself a Gaussian measure and it can be written in closed form in terms of the regularization $\lambda$ and the means and covariances of $\sigma$, $\tau$ \cite{janati2020}. In dimensions $d \in \{2, 16, 64, 128, 256\}$, we consider $\Sigma_1, \Sigma_2$ whose eigenvectors are uniform random (i.e. drawn from the Haar measure on $SO(d)$) and whose eigenvalues are sampled i.i.d. uniformly from $[1, 10]$. In all cases, we set means $\mu_1, \mu_2$ equal to zero and choose regularization $\lambda = 2d$.
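The random covariance construction can be sketched directly: eigenvectors from the QR decomposition of a Gaussian matrix (Haar-distributed up to column signs, which do not affect the covariance), eigenvalues i.i.d. uniform on $[1, 10]$. The seed and dimension below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_covariance(d):
    """Covariance with Haar-uniform eigenvectors and Unif[1, 10] eigenvalues."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthogonal eigenbasis
    eigs = rng.uniform(1.0, 10.0, size=d)
    return (Q * eigs) @ Q.T                           # Q diag(eigs) Q^T

S = random_covariance(16)
ev = np.linalg.eigvalsh(S)
print(ev.min(), ev.max())   # spectrum lies inside [1, 10]
```

Bounding the spectrum away from zero keeps both the closed-form Sinkhorn covariance and the ground-truth score $\Sigma_2^{-1}$ well conditioned across dimensions.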
In the Gaussian setting, ${\mathbb E}[\|x-y\|_2^2]$ is of order $d$, so this choice of scaling ensures a fair comparison across problem dimensions by fixing the relative magnitudes of the cost and regularization terms. We evaluate performance on this task using the Bures-Wasserstein Unexplained Variance Percentage \cite{chen-2021}, BW-UVP$(\hat{\pi}, \pi^{\lambda})$, which is measured on a scale from 0 to 100, lower is better. Here $\pi^\lambda$ is the closed-form solution given by \citet{janati2020} and $\hat{\pi}$ is the $2d$-by-$2d$ joint empirical covariance of $k=10000$ samples, either SCONES samples $(x, y) \sim \hat{\pi}$ or BP samples $(x, T_\theta(x))$, $x \sim \sigma$. We train SCONES according to Algorithm \ref{algo:part1} and generate samples according to Algorithm \ref{algo:part2}. In place of a score estimate, we use the ground truth target score $\nabla_y \log \tau(y) = -\Sigma_2^{-1}(y - \mu_2)$ and omit annealing. \begin{table}[t] \setstretch{1.5} \begin{tabular}{|l|l|l|l|l|l|} \hline & $d=2$ & $d=16$ & $d=64$ & $d=128$ & $d=256$ \\ \hline SCONES & $0.025 \pm 0.0014$ & $0.52 \pm 0.0086$ & $1.2 \pm 0.014$ & $1.4 \pm 0.066$ & $2.0 \pm 0.047$ \\ \hline BP & $7.1 \pm 0.13$ & $35 \pm 0.23$ & $42 \pm 0.14$ & $41 \pm 0.088 $ & $41 \pm 0.098$ \\ \hline \end{tabular} \setstretch{1} \caption{Comparison of SCONES to BP on KL-regularized optimal transport between random high-dimensional Gaussians. In each cell, we report the average BW-UVP between a sample empirical covariance and the analytical solution. We report the average over $n=10$ independent random source, target Gaussians and the standard error of the mean.
} \label{fig:methods-comparison} \end{table} \section{Discussion and Future Work} We introduce and analyze the SCONES method for learning and sampling large-scale optimal transport plans. Our method takes the form of a conditional sampling problem for which the conditional score decomposes naturally into a prior, unconditional score $\nabla_y \log \tau(y)$ and a ``compatibility term'' $\nabla_y \log M(V(x, y))$. This decomposition illustrates a key benefit of SCONES: one score network may be re-used to cheaply transport many source distributions to the same target. In contrast, learned forward-model-based transportation maps require an expensive training procedure for each distinct pair of source and target distributions. This benefit comes in exchange for the increased computational cost of iterative sampling. For example, generating 1000 samples requires roughly 3 hours using one NVIDIA 2080 Ti GPU. The cost to sample score-based models may fall with future engineering advances, but iterative sampling intrinsically requires multiple forward pass evaluations of the score estimator as opposed to a single evaluation of a learned transportation mapping. There is much future work to be done. First, we study only simple fully connected ReLU networks as parametrizations of the dual variables. Interestingly, we observe that under $L^2$ transportation cost, parametrizations by multi-layer convolutional networks perform comparably to or worse than their FCN counterparts when optimizing Algorithm \ref{algo:part1}. One explanation may be the \textit{permutation invariance} of $L^2$ cost: applying a permutation of coordinates to the source and target distribution does not change the optimal objective value and the optimal coupling is simply conjugated by a coordinate permutation. As a consequence, the optimal coupling may depend non-locally on input data coordinates, violating the inductive biases of localized convolutional filters.
Understanding which network parametrizations or inductive biases are best for a particular choice of transportation cost, source distribution, and target distribution, is one direction for future investigation. Second, it remains to explore whether there is a potential synergistic effect between Langevin sampling and optimal transport. Heuristically, as $\lambda \to 0$ the conditional plan $\pi_{Y | X = x}$ concentrates around the transport image of $x$, which should improve the mixing time required by Langevin dynamics to explore high density regions of space. In Section \ref{supp:proofs} of the Appendix, we prove a known result, that the entropy regularized $L^2$ cost compatibility term $M(V(x, y)) = e^{V(x, y) / \lambda}$ is a log-concave function of $y$ for fixed $x$. If the target distribution is itself log-concave, the conditional coupling $\pi_{Y | X = x}$ is also log-concave and hence Langevin sampling enjoys exponentially fast mixing time. However, more work is required to understand the impacts of non-log-concavity of the target and of optimization errors when learning the compatibility and score functions in practice. We look forward to future developments on these and other aspects of large-scale regularized optimal transport.
\end{ack} \setcounter{section}{0} \renewcommand{\thesection}{\Alph{section}} \section{Regularizing Optimal Transport with $f$-Divergences} \label{supp:reg-via-fdiv} \begin{table}[h] \setstretch{1.5} \centering \begin{adjustbox}{center} \begin{tabular}{|l|l|l|l|l|} \hline Name & $f(v)$ & $f^*(v)$ & $f^{*\prime}$ & Dom$(f^*(v))$ \\ \hline \hline Kullback-Leibler & $ v\log(v)$ & $\exp(v-1)$ & $\exp(v-1) $ & $v \in {\mathbb R}$\\ Reverse KL & $-\log(v)$ & $\log(-\frac{1}{v}) - 1$ & $-\frac{1}{v}$ & $v < 0$\\ Pearson $\chi^2$ & $(v-1)^2$ & $\frac{v^2}{4}+v$ & $\frac{v}{2} + 1$ & $v \in {\mathbb R}$ \\ Squared Hellinger & $(\sqrt{v} - 1)^2$ & $\frac{v}{1-v}$ & $(1-v)^{-2}$ & $v < 1$ \\ Jensen-Shannon & $-(v+1) \log(\frac{1+v}{2}) + v \log v$ & $-\log(2-e^v)$ & $\frac{e^v}{2-e^v}$ & $v < \log(2)$\\ GAN & $v \log(v) - (v+1) \log(v+1)$ & $-v - \log(e^{-v}-1)$ & $(e^{-v}-1)^{-1}$ & $v < 0$ \\ \hline \end{tabular} \end{adjustbox} \setstretch{1} \caption{A list of $f$-Divergences, their Fenchel-Legendre conjugates, and the derivative of their conjugates. These functions determine the corresponding dual regularizers $H^*_f(v)$ and compatibility functions $M_f(v)$. We take definitions of each divergence from \cite{Nowozin_Cseke_Tomioka_2016}. Note that there are many equivalent formulations as each $f(v)$ is defined only up to additive $c(v-1)$, $c \in {\mathbb R}$, and the resulting optimization problems are defined only up to shifting and scaling the objective.} \label{tab:f-div} \end{table} Here are some general properties of $f$-Divergences which are also used in Section \ref{supp:proofs}. We provide examples of $f$-Divergences in Table \ref{tab:f-div}. The specific forms of $H_f^*(v)$ and $M_f(v)$ are determined by $f(v)$, $f^*(v)$, and $f^{*\prime}(v)$, which can in turn be used to formulate Algorithms \ref{algo:part1} and \ref{algo:part2} for each divergence.
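Two of the conjugate pairs in Table \ref{tab:f-div} can be verified numerically by brute-force maximization of $vp - f(p)$ over a fine grid of $p \geq 0$ (grid bounds and test points below are our own illustrative choices):

```python
import numpy as np

p = np.linspace(1e-6, 6.0, 1_000_001)   # fine grid over the relevant p range

def conjugate(f, v):
    """Numerical Fenchel conjugate f*(v) = max_p [v * p - f(p)]."""
    return np.max(v * p - f(p))

vs = [-0.5, 0.0, 0.5, 1.0]
# KL: f(p) = p log p, so f*(v) = exp(v - 1); chi^2: f(p) = (p-1)^2, f*(v) = v^2/4 + v.
kl_err = max(abs(conjugate(lambda q: q * np.log(q), v) - np.exp(v - 1.0)) for v in vs)
chi2_err = max(abs(conjugate(lambda q: (q - 1.0) ** 2, v)
                   - (v ** 2 / 4.0 + v)) for v in vs)
print(kl_err, chi2_err)   # both close to zero
```

The same check applies to the other rows on their stated domains; outside Dom$(f^*)$ the supremum is infinite and the grid maximum diverges as the bound grows.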
\begin{dfn}[$f$-Divergences] Let $f: {\mathbb R} \to {\mathbb R}$ be convex with $f(1) = 0$ and let $p, q$ be probability measures such that $p$ is absolutely continuous with respect to $q$. The corresponding $f$-Divergence is defined $D_f(p || q) = {\mathbb E}_q[f(\frac{dp(x)}{dq(x)})]$ where $\frac{dp(x)}{dq(x)}$ is the Radon-Nikodym derivative of $p$ w.r.t. $q$. \end{dfn} \begin{prop}[Strong Convexity of $D_f$] \label{prop:convex-div} Let $\mathcal{X}$ be a countable compact metric space. Fix $q \in \mathcal{M}_+(\mathcal{X})$ and let $\mathcal{P}_q(\mathcal{X})$ be the set of probability measures on $\mathcal{X}$ that are absolutely continuous with respect to $q$ and which have bounded density over $\mathcal{X}$. Let $f: {\mathbb R} \to {\mathbb R}$ be $\alpha$-strongly convex with corresponding $f$-Divergence $D_f(p || q)$. Then, the function $H_f(p) \coloneqq D_f(p || q)$ defined over $p \in \mathcal{P}_q(\mathcal{X})$ is $\alpha$-strongly convex in 1-norm: for $p_0, p_1 \in \mathcal{P}_q(\mathcal{X})$, \begin{align} \label{eqn:1-sc} H_f(p_1) \geq H_f(p_0) + \langle \nabla_{p} H_f (p_0), p_1 - p_0 \rangle + \frac{\alpha}{2} | p_1 - p_0 |_1^2. \end{align} \end{prop} \begin{proof} Define the measure $p_t = tp_1 + (1-t)p_0$. Then $H_f$ satisfies the following convexity inequality (\citet{Melbourne_2020}, Proposition 2). \begin{align*} H_f(p_t) \leq t H_f(p_1) + (1-t) H_f(p_0) - \alpha \left( t | p_1 - p_t|_{\text{TV}}^2 + (1-t) | p_0 - p_t |_{\text{TV}}^2 \right) \end{align*} By assumption that $\mathcal{X}$ is countable, $|p - q|_{\text{TV}} = \frac{1}{2}|p - q|_1$. It follows that, \begin{align*} H_f(p_1) & \geq H_f(p_0) + \frac{H_f(p_0 + t(p_1 - p_0)) - H_f(p_0)}{t} + \frac{\alpha}{2} \left( | p_1 - p_t |_1^2 + (t^{-1} - 1) | p_0 - p_t |_1^2 \right) \\ & \geq H_f(p_0) + \frac{H_f(p_0 + t(p_1 - p_0)) - H_f(p_0)}{t} + \frac{\alpha}{2} | p_1 - p_t |_1^2 \end{align*} and, taking the limit $t \to 0$, the inequality \eqref{eqn:1-sc} follows. 
\end{proof} For the purposes of solving empirical regularized optimal transport, the technical conditions of Proposition \ref{prop:convex-div} hold. Additionally, note that $\alpha$-strong convexity of $f$ is sufficient but not necessary for strong convexity of $H_f$. For example, entropy regularization uses $f_{\text{KL}}(v) = v \log(v)$ which is not strongly convex over its domain, ${\mathbb R}_+$, but which yields a regularizer $H_{\text{KL}}(p) = \mathrm{KL}(p || q)$ that is $1$-strongly convex in $l_1$ norm when $q$ is uniform. This follows from Pinsker's inequality as shown in \cite{Seguy_Damodaran_Flamary_Courty_Rolet_Blondel_2018}. Also, if $f$ is $\alpha$-strongly convex over a subinterval $[a, b]$ of its domain, then Proposition \ref{prop:convex-div} holds under the additional assumption that $a \leq \frac{dp}{dq}(x) \leq b$ uniformly over $x \in \mathcal{X}$. \section{Proofs} \label{supp:proofs} For convenience, we repeat the main assumptions and statements of theorems alongside their proofs. First, we prove the following properties about $f$-divergences. \begin{prop-}[\ref{prop:f-div-dual} -- Regularization with $f$-Divergences] Consider the empirical setting of Definition \ref{def:reg-ot}. Let $f(v) : {\mathbb R} \to {\mathbb R}$ be a differentiable $\alpha$-strongly convex function with convex conjugate $f^*(v)$. Set $f^{*\prime}(v) = \partial_v f^*(v)$. Define the violation function $V(x, y ; \phi, \psi) = \phi(x) + \psi(y) - c(x, y)$. Then, \begin{enumerate} \item The $D_f$ regularized primal problem $K_\lambda(\pi)$ is $\lambda \alpha$-strongly convex in $l_1$ norm. With respect to dual variables $\phi \in {\mathbb R}^{|\mathcal{X}|}$ and $\psi \in {\mathbb R}^{|\mathcal{Y}|}$, the dual problem $J_\lambda(\phi, \psi)$ is concave, unconstrained, and $\frac{1}{\lambda \alpha}$-strongly smooth in $l_\infty$ norm.
Strong duality holds: $K_\lambda(\pi) \geq J_\lambda(\phi, \psi)$ for all $\pi$, $\phi$, $\psi$, with equality for some triple $\pi^*, \phi^*, \psi^*$. \item $J_\lambda(\phi, \psi)$ takes the form \begin{align*} J_\lambda(\phi, \psi) = {\mathbb E}_\sigma[\phi(x)] + {\mathbb E}_\tau[\psi(y)] - {\mathbb E}_{\sigma \times \tau} [ H^*_f(V(x, y; \phi, \psi))] \end{align*} where $H^*_f(v) = \lambda f^* (\lambda^{-1} v)$. \item The optimal solutions $(\pi^*, \phi^*, \psi^*)$ satisfy \begin{align*} \pi^*(x, y) = M_f(V(x, y; \phi^*, \psi^*)) \sigma(x) \tau(y) \end{align*} where $M_f(v) = f^{*\prime}(\lambda^{-1} v)$. \end{enumerate} \end{prop-} \begin{proof} By assumption that $f$ is differentiable, $K_\lambda(\pi)$ is continuous and differentiable with respect to $\pi \in \mathcal{M}_+(\mathcal{X} \times \mathcal{Y})$. By Proposition \ref{prop:convex-div}, it is $\lambda \alpha$-strongly convex in $l_1$ norm. By the Fenchel-Moreau theorem, $K_\lambda(\pi)$ therefore has a unique minimizer $\pi^*$ satisfying strong duality, and by \cite[Theorem 6]{Kakade_Shalev-Shwartz_Tewari}, the dual problem is $\frac{1}{\lambda \alpha}$-strongly smooth in $l_\infty$ norm. The primal and dual are related by the Lagrangian $\mathcal{L}(\pi, \phi, \psi)$, \begin{align} \label{eq:lagrangian} \mathcal{L}(\phi, \psi, \pi) = & \ {\mathbb E}_\pi [c(x, y)] + \lambda H_f(\pi) + {\mathbb E}_\sigma[\phi(x)] - {\mathbb E}_\pi[\phi(x)] + {\mathbb E}_\tau[\psi(y)] - {\mathbb E}_\pi[\psi(y)] \end{align} which has $K_\lambda(\pi) = \max_{\phi, \psi} \mathcal{L}(\phi, \psi, \pi)$ and $J_\lambda(\phi, \psi) = \min_{\pi} \mathcal{L}(\phi, \psi, \pi)$. In the empirical setting, $\pi$, $\sigma$, $\tau$ may be written as finite dimensional vectors with coordinates $\pi_{x, y}$, $\sigma_x$, $\tau_y$ for $(x, y) \in \mathcal{X} \times \mathcal{Y}$.
Minimizing the $\pi$-dependent terms of the Lagrangian, \begin{align*} \min_{\pi \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})} & \left\{ {\mathbb E}_{\pi}[c(x, y) - \phi(x) - \psi(y) ] + \lambda {\mathbb E}_{\sigma \times \tau} \left[ f\left( \frac{ d \pi(x, y)}{d\sigma(x) d \tau(y) }\right) \right] \right\} \\ & = \sum_{x, y \in \mathcal{X} \times \mathcal{Y}} - \max_{\pi_{x, y} \geq 0} \left\{ \pi_{x, y} \cdot (\phi(x) + \psi(y) - c(x, y)) - \lambda \sigma_x \tau_y f \left( \frac{\pi_{x, y}}{\sigma_x \tau_y}\right) \right\} \\ & = \sum_{x, y \in \mathcal{X} \times \mathcal{Y}} -h^*_{x, y}( \phi(x) + \psi(y) - c(x, y)) \end{align*} where $h^*_{x, y}$ is the convex conjugate of $(\lambda \sigma_x \tau_y) \cdot f( p/ (\sigma_x \tau_y) )$ w.r.t. the argument $p$. For general convex $f(p)$, it is true that $[\lambda f(p)]^*(v) = \lambda f^*(\lambda^{-1} v)$ \cite[Chapter 3]{Boyd_Boyd_Vandenberghe_Press_2004}. Applying twice, \begin{align*} [(\lambda \sigma_x \tau_y) \cdot f(p/(\sigma_x \tau_y))]^*(v) = \lambda [(\sigma_x \tau_y) f(p/(\sigma_x \tau_y))]^*(\lambda^{-1} v) = (\lambda \sigma_x \tau_y) \cdot f^*(v/\lambda) \end{align*} so that \begin{align*} \min_{\pi \in \mathcal{M}_+(\mathcal{X} \times \mathcal{Y})} & {\mathbb E}_{\pi}[c(x, y) - \phi(x) - \psi(y) ] + \lambda {\mathbb E}_{\sigma \times \tau} \left[ f\left( \frac{ d \pi(x, y)}{d\sigma(x) d \tau(y) }\right) \right] \\ = & - \sum_{x, y \in \mathcal{X} \times \mathcal{Y}} \sigma_x \tau_y \lambda f^*(\lambda^{-1} V(x, y; \phi, \psi)) \\ = & - {\mathbb E}_{\sigma \times \tau} [ H^*_f(V(x, y; \phi, \psi))] \end{align*} for $H^*_f(v) = \lambda f^*(\lambda^{-1} v)$. The claimed form of $J_\lambda(\phi, \psi) $ follows. Additionally, for general convex $f(p)$, it is true that $\partial_v f^*(v) = \argmax_{p} \left\{ \langle v, p \rangle - f(p)\right\}$, \cite[Chapter 3]{Boyd_Boyd_Vandenberghe_Press_2004}.
For $\phi^*$, $\psi^*$ maximizing $J_\lambda(\phi, \psi)$, it follows by strong duality that \begin{align*} \pi^*_{x, y} & = \argmin_{\pi \in \mathcal{M}_+(\mathcal{X} \times \mathcal{Y})} \mathcal{L}(\phi^*, \psi^*, \pi) \\ & = \partial_V H^*_f(V(x, y; \phi^*, \psi^*)) \, \sigma_x \tau_y = M_f(V(x, y; \phi^*, \psi^*)) \sigma_x \tau_y. \end{align*} as claimed. \end{proof} We proceed to proofs of the theorems stated in Section \ref{sec:theory}. \begin{assn-}[\ref{assn:nets} -- Approximate Linearity] Let $f_\theta(x)$ be a neural network with parameters $\theta \in \Theta$, where $\Theta$ is a set of feasible weights, for example those reachable by gradient descent. Fix a dataset $\{X_i\}_{i=1}^N$ and let $\mathcal{K}_\theta \in {\mathbb R}^{N \times N}$ be the Gram matrix of coordinates $[\mathcal{K}_\theta]_{ij} = \langle \nabla_\theta f_\theta(X_i), \nabla_\theta f_\theta(X_j) \rangle $. Then $f_\theta(x)$ must satisfy, \begin{enumerate} \item There exists $R \gg 0$ so that $\Theta \subseteq B(0, R)$, where $B(0, R)$ is the Euclidean ball of radius $R$. \item There exist $\rho_M > \rho_m > 0$ such that for $\theta \in \Theta$, \begin{align*} \rho_M \geq \lambda_{\text{max}}(\mathcal{K}_\theta) \geq \lambda_{\text{min}}(\mathcal{K}_\theta) \geq \rho_m > 0. \end{align*} \item For $\theta \in \Theta$ and for all data points $\{X_i\}_{i=1}^N$, the Hessian matrix $D^2_\theta f_\theta(X_i)$ is bounded in spectral norm: \begin{align*} \| D^2_\theta f_\theta(X_i) \| \leq \frac{\rho_M}{C_h} \end{align*} where $C_h \gg 0$ depends only on $R$, $N$, and the regularization $\lambda$. \end{enumerate} \end{assn-} The constant $C_h$ may depend on the dataset size $N$, the upper bound of $\rho_M$ for eigenvalues of the NTK, the regularization parameter $\lambda$, and it may also depend indirectly on the bound $R$. \begin{thm-}[\ref{thm:opt-neural-nets} -- Optimizing Neural Nets] Suppose $J_\lambda(\phi, \psi)$ is $\frac{1}{s}$-strongly smooth in $l_\infty$ norm.
Let $\phi_{\theta}$, $\psi_{\theta}$ be neural networks satisfying Assumption \ref{assn:nets} for the dataset $\{(x_i, y_i)\}_{i=1}^N$, $N = |\mathcal{X}| \cdot |\mathcal{Y}|$. Then gradient ascent on $J_\lambda(\phi_{\theta}, \psi_{\theta})$ with respect to $\theta$ at learning rate $\eta = \frac{s}{2 \rho_M}$ converges to an $\varepsilon$-approximate global maximizer of $J_\lambda$ in at most $\left(\frac{2\kappa R^2}{s}\right) \varepsilon^{-1}$ iterations, where $\kappa = \frac{\rho_M}{\rho_m}$. \end{thm-} \begin{proof} For indices $i$, let $S_{\theta_i} = (\phi_{\theta_i}, \psi_{\theta_i})$ so that Assumption \ref{assn:nets} applies with $S_{\theta}$ in place of $f_\theta$. \begin{lem}[Smoothness] \label{lem:smooth} $J_\lambda(S_\theta)$ is $\frac{2\rho_M}{s}$-strongly smooth in $l_2$ norm with respect to $\theta$: \begin{align*} J_\lambda(S_{\theta_2}) \leq J_\lambda(S_{\theta_1}) + \langle \nabla_\theta J_\lambda(S_{\theta_1}), \theta_2 - \theta_1 \rangle + \frac{\rho_M}{s} \|\theta_2 - \theta_1\|_2^2. \end{align*} \end{lem} \begin{proof} It is assumed that $J_\lambda(S)$ is $(\frac{1}{s}, l_\infty)$-strongly smooth and that $K_\lambda(\pi)$ is $(s, l_1)$-strongly convex. Note that $(\frac{1}{s}, l_2)$-strong smoothness is the \textit{weakest} such condition, in the sense that it is implied, via the norm ordering $\|x\|_q \le \|x\|_2$, by $(\frac{1}{s}, l_q)$-strong smoothness for $2 \leq q \leq \infty$: \begin{align*} J_\lambda(S_2) & \leq J_\lambda(S_1) + \langle \nabla_{S} J_\lambda(S_1), S_2 - S_1 \rangle + \frac{1}{2s} \| S_2 - S_1\|_q^2 \\ \implies J_\lambda(S_2) & \leq J_\lambda(S_1) + \langle \nabla_{S} J_\lambda(S_1), S_2 - S_1 \rangle + \frac{1}{2s} \| S_2 - S_1\|_2^2 \end{align*} A symmetric property holds for $(s, l_2)$-strong convexity of $K_\lambda(\pi)$, which is implied by $(s, l_p)$-strong convexity for $1 \leq p \leq 2$.
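The norm-equivalence step uses only the elementary ordering of vector $l_q$ norms; a quick numerical sanity check (our own illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=10)
    norms = {q: np.linalg.norm(x, ord=q) for q in (1, 2, 4, np.inf)}
    # l_q norms of a fixed vector are non-increasing in q; this is what lets
    # (1/s, l_q)-smoothness for q >= 2 imply (1/s, l_2)-smoothness, and
    # (s, l_p)-strong convexity for p <= 2 imply the l_2 version.
    assert norms[np.inf] <= norms[4] <= norms[2] <= norms[1]
```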
By Assumption \ref{assn:nets}, \begin{align} \label{ineq} J_\lambda(S_{\theta_2}) - J_\lambda (S_{\theta_1}) - \langle \nabla_{S} J_\lambda(S_{\theta_1}), S_{\theta_2} - S_{\theta_1}\rangle \leq \frac{1}{2s} \|S_{\theta_2} - S_{\theta_1} \|_2^2 \leq \frac{\rho_M}{2s} \| \theta_2 - \theta_1 \|_2^2. \end{align} To establish smoothness, it remains to bound $\langle \nabla_{S} J_\lambda(S_{\theta_1}), S_{\theta_2} - S_{\theta_1}\rangle$. Set $v = \nabla_{S} J_\lambda(S_{\theta_1}) \in {\mathbb R}^N$ and consider the first-order Taylor expansion in $\theta$ of $\langle v, S_{\theta} \rangle$ around $\theta_1$, evaluated at $\theta = \theta_2$; here $J^S_\theta$ denotes the Jacobian of $S_\theta$ with respect to $\theta$ at $\theta_1$. Applying Lagrange's form of the remainder, there exists $0 < c < 1$ such that \begin{align*} \langle v, S_{\theta_2} \rangle = & \ \langle v, S_{\theta_1} \rangle + \langle v, J^S_\theta (\theta_2 - \theta_1)\rangle \\ & +\frac{1}{2} \sum_{i=1}^N v_i (\theta_2 - \theta_1)^T \left[D^2_\theta S_{\theta}(x_i)\right]_{\theta = \theta_1 + c(\theta_2 - \theta_1)} (\theta_2 - \theta_1) \end{align*} and so by the Cauchy--Schwarz inequality, \begin{align*} \langle v, S_{\theta_2} - S_{\theta_1} \rangle \leq \langle v, J^S_\theta (\theta_2 - \theta_1)\rangle + \frac{\max_i \|D^2_\theta S_\theta(x_i)\|}{2} \sqrt{N} \|v\|_2 \|\theta_2 - \theta_1 \|_2^2 \leq \langle v, J^S_\theta (\theta_2 - \theta_1)\rangle + \frac{\rho_M}{2s} \|\theta_2 - \theta_1 \|_2^2. \end{align*} The final inequality follows by taking $C_h \geq s \sqrt{N} \sup_v \|v\|_2 $. This supremum is bounded by the assumption that $\Theta \subseteq B(0, R)$. Plugging in $v = \nabla_{S} J_\lambda(S_{\theta_1})$, we have \begin{align*} \langle \nabla_{S} J_\lambda(S_{\theta_1}), S_{\theta_2} - S_{\theta_1}\rangle & \leq \langle \nabla_{S} J_\lambda(S_{\theta_1}), J^S_\theta (\theta_2 - \theta_1) \rangle + \frac{\rho_M}{2s} \|\theta_2 - \theta_1 \|_2^2 \\ & = \langle \nabla_\theta J_\lambda(S_{\theta_1}), \theta_2 - \theta_1\rangle + \frac{\rho_M}{2s} \|\theta_2 - \theta_1 \|_2^2.
\end{align*} Returning to \eqref{ineq}, we have \begin{align*} J_\lambda(S_{\theta_2}) - J_\lambda (S_{\theta_1}) \leq \langle \nabla_\theta J_\lambda(S_{\theta_1}), \theta_2 - \theta_1\rangle + \frac{\rho_M}{s} \|\theta_2 - \theta_1 \|_2^2, \end{align*} from which Lemma \ref{lem:smooth} follows. \end{proof} \begin{lem}[Gradient Ascent] \label{lem:gd} Gradient ascent over the parameters $\theta$ with learning rate $\eta = \frac{s}{2\rho_M}$ converges in $T$ iterations to parameters $\theta_T$ satisfying $J_\lambda(S^*) - J_\lambda(S_{\theta_T}) \leq \left(\frac{2 \kappa R^2}{s} \right) \frac{1}{T} $ where $\kappa = \frac{\rho_M}{\rho_m}$ is the condition number. \end{lem} \begin{proof} Fix $\theta_0$ and set $\theta_{t+1} = \theta_t + \eta \nabla_\theta J_\lambda(S_{\theta_t})$. The step size $\eta$ is chosen so that, by Lemma \ref{lem:smooth}, $J_\lambda(S_{\theta_{t+1}}) - J_\lambda(S_{\theta_t}) \geq \frac{s}{2\rho_M}\|\nabla_{\theta} J_\lambda(S_{\theta_t})\|_2^2$. By concavity, $J_\lambda(S^*) \leq J_\lambda(S_{\theta_t}) + \langle \nabla_S J_\lambda(S_{\theta_t}), S^* - S_{\theta_t} \rangle$, so that \begin{align*} \|\nabla_{\theta} J_\lambda(S_{\theta_t})\|_2^2 \geq \rho_m \|\nabla_{S} J_\lambda(S_{\theta_t})\|_2^2 \geq (J_\lambda(S^*) - J_\lambda(S_{\theta_t}))^2 \left(\frac{\rho_m}{\|S_{\theta_t} - S^*\|_2^2 }\right). \end{align*} Setting $\Delta_t = J_\lambda(S^*) - J_\lambda(S_{\theta_t})$, this implies $\Delta_t \geq \Delta_{t+1} + \Delta_t^2 \left( \frac{s \rho_m}{2 \rho_M\|S_{\theta_t} - S^*\|_2^2 } \right)$; since $\|S_{\theta_t} - S^*\|_2 < R$, the standard recursion argument yields $\Delta_T \leq \left[ T \left( \frac{s \rho_m}{2 \rho_M R^2 } \right) \right]^{-1}.$ \end{proof} Theorem \ref{thm:opt-neural-nets} follows immediately from Lemmas \ref{lem:smooth} and \ref{lem:gd}.
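The $O(1/T)$ behavior in Lemma \ref{lem:gd} can be illustrated on a toy problem. The sketch below runs gradient ascent with step size $1/L$ on a smooth concave quadratic (a hypothetical stand-in for $J_\lambda(S_\theta)$; the bound checked is the standard smooth-case guarantee $L\|\theta_0 - \theta^*\|^2/(2T)$, not the lemma's exact constant):

```python
import numpy as np

rng = np.random.default_rng(1)
# Concave quadratic J(theta) = b^T theta - 0.5 theta^T A theta, with A positive definite.
Q = rng.normal(size=(5, 5))
A = Q @ Q.T + 0.1 * np.eye(5)
b = rng.normal(size=5)
L = np.linalg.eigvalsh(A).max()          # smoothness constant of J
theta_star = np.linalg.solve(A, b)       # unique maximizer
J = lambda th: b @ th - 0.5 * th @ A @ th

theta = np.zeros(5)
T = 200
for _ in range(T):
    theta = theta + (1.0 / L) * (b - A @ theta)   # gradient ascent, eta = 1/L

# Standard guarantee: J(theta*) - J(theta_T) <= L * ||theta_0 - theta*||^2 / (2T).
gap = J(theta_star) - J(theta)
bound = L * np.linalg.norm(theta_star) ** 2 / (2 * T)
assert -1e-12 <= gap <= bound + 1e-12
```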
\end{proof} \begin{thm-}[\ref{thm:stability} -- Stability of Regularized OT Problem] Suppose $K_\lambda(\pi)$ is $s$-strongly convex in $l_1$ norm and let $\mathcal{L}(\phi, \psi, \pi)$ be the Lagrangian of the regularized optimal transport problem. For $\hat{\phi}$, $\hat{\psi}$ which are $\varepsilon$-approximate maximizers of $J_\lambda(\phi, \psi)$, the pseudo-plan $\hat{\pi} = M_f(V(x, y; \hat{\phi}, \hat{\psi})) \sigma(x) \tau(y)$ satisfies \begin{align*} |\hat{\pi} - \pi^* |_1 \leq \sqrt{\frac{2\varepsilon}{s}} \leq \frac{1}{s} \left| \nabla_{\hat{\pi}} \mathcal{L}(\hat{\phi}, \hat{\psi},\hat{\pi}) \right|_1. \end{align*} \end{thm-} \begin{proof} For indices $i$, denote by $S_i$ the tuple $(\phi_i, \psi_i, \pi_i)$. The regularized optimal transport problem has Lagrangian $\mathcal{L}(\phi, \psi, \pi)$ given by \begin{align*} \mathcal{L}(\phi, \psi, \pi) = & \ {\mathbb E}_\pi [c(x, y)] + \lambda H_f(\pi) + {\mathbb E}_\sigma[\phi(x)] - {\mathbb E}_\pi[\phi(x)] + {\mathbb E}_\tau[\psi(y)] - {\mathbb E}_\pi[\psi(y)]. \end{align*} Because $\mathcal{L}(\phi, \psi, \pi)$ is a sum of $K_\lambda(\pi)$ and terms linear in $\pi$, the Lagrangian inherits $s$-strong convexity w.r.t. the argument $\pi$: \begin{align*} \mathcal{L}(S_2) \geq & \ \mathcal{L}(S_1) + \langle \nabla \mathcal{L}(S_1), S_2 - S_1 \rangle + \frac{s}{2} |\pi_2 - \pi_1|_1^2. \end{align*} Letting $S^* = (\phi^*, \psi^*, \pi^*)$ be the optimal solution and $\hat{S} = (\hat{\phi}, \hat{\psi}, \hat{\pi})$ be an $\varepsilon$-approximation, it follows that \begin{align} \label{eqn:stability-err-bound} \varepsilon \geq \mathcal{L}(\hat{S}) - \mathcal{L}(S^*) \geq \frac{s}{2}| \hat{\pi} - \pi^* |_1^2 \implies | \hat{\pi} - \pi^* |_1 \leq \sqrt{\frac{2 \varepsilon }{s} }. \end{align} Additionally, note that strong convexity implies a Polyak-{\L}ojasiewicz (PL) inequality w.r.t. $\hat{\pi}$:
\begin{align} \label{eqn:pl-ineq} s \left( \mathcal{L}(\hat{S}) - \mathcal{L}(S^*) \right) \leq \frac{1}{2} |\nabla_{\pi} \mathcal{L}(\hat{S}) |_1^2. \end{align} The second inequality follows from \eqref{eqn:stability-err-bound} and the PL inequality \eqref{eqn:pl-ineq}. \end{proof} \subsection{Statistical Estimation of Sinkhorn Plans} \label{supp:stat-est} We consider estimating an entropy regularized OT plan when $\mathcal{Y} = \mathcal{X}$. Let $\hat{\sigma}$, $\hat{\tau}$ be empirical distributions generated by drawing $n \geq 1$ i.i.d. samples from $\sigma$, $\tau$ respectively. Let $\pi^\lambda_n$ be the Sinkhorn plan between $\hat \sigma$ and $\hat \tau$ at regularization $\lambda$, and let $\mathsf{D} := \diam(\mathcal{X})$. For simplicity, we also assume that $\sigma$ and $\tau$ are sub-Gaussian and that $n$ is fixed. Under these assumptions, we will show that $W_1(\pi_n^\lambda , \pi^\lambda) \lesssim n^{-1/2}$. The following result follows from Propositions E.4 and E.5 of~\citet{Luise_Salzo_Pontil_Ciliberto_2019} and will be useful in deriving the statistical error between $\pi^\lambda_n$ and $\pi^\lambda$. This result characterizes fast statistical convergence of the Sinkhorn potentials as long as the cost is sufficiently smooth. \begin{prop} \label{prop:sink-pot} Suppose that $c \in \mathcal C^{s+1}(\mathcal X \times \mathcal X)$. Then, for any probability measures $\sigma, \tau$ supported on $\mathcal X$, with probability at least $1 - \delta$, \begin{align*} \| v - v_n \|_\infty, \| u - u_n \|_\infty \lesssim \frac{\lambda e^{3 \mathsf D / \lambda} \log (1/\delta)}{\sqrt{n}}, \end{align*} where $(u,v)$ are the Sinkhorn potentials for $ \sigma, \tau $ and $(u_n, v_n)$ are the Sinkhorn potentials for $\hat \sigma, \hat \tau $.
\end{prop} Let $\pi_n^\lambda = M_n \sigma_n \tau_n$ and $\pi^\lambda = M \sigma \tau$. We recall that \[ M(x, y) = \frac{1}{e} \exp \left(\frac{1}{\lambda} (\phi(x) + \psi(y) - c(x, y) )\right), \] \[ M_n(x, y) = \frac{1}{e} \exp \left(\frac{1}{\lambda} (\phi_n(x) + \psi_n(y) - c(x, y) )\right). \] We note that $M$ and $M_n$ are uniformly bounded by $e^{3\mathsf{D}/\lambda}$~\cite{Luise_Salzo_Pontil_Ciliberto_2019} and $M$ inherits smoothness properties from $\phi$, $\psi$, and $c$. We can write (for some optimal, bounded, 1-Lipschitz $f_n$) \begin{align}\nonumber W_1(\pi_n^\lambda , \pi^\lambda) &= \left|\int f_n \, d\pi_n^\lambda - \int f_n \, d\pi^\lambda\right| \\ \nonumber &\leq \left|\int f_n (M_n - M)\, d(\sigma_n \tau_n)\right| + \left|\int f_n M \, d(\sigma_n \tau_n - \sigma \tau)\right| \\ & \leq |f_n|_\infty |M_n - M|_\infty + \left|\int f_n M \, d(\sigma_n \tau_n - \sigma \tau)\right|.\label{eq:w1bound} \end{align} If $\sigma$ and $\tau$ are $\beta^2$-sub-Gaussian, then we can bound the second term with high probability: \begin{align*} \mathbb{P}\left(\left|\frac{1}{n^2} \sum_i \sum_j f_n(X_i, Y_j) M(X_i, Y_j) - \mathbb{E}_{\sigma\times \tau} f_n(X, Y) M(X, Y)\right| > t\right) < e^{-n^2 \frac{t^2}{2 \beta^2}}. \end{align*} Setting $t = \beta\sqrt{2\log(1/\delta)}/n$ in this expression, we get that w.p.~at least $1-\delta$, \[ \left|\frac{1}{n^2} \sum_i \sum_j f_n(X_i, Y_j) M(X_i, Y_j) - \mathbb{E}_{\sigma\times \tau} f_n(X, Y) M(X, Y)\right| < \frac{\beta\sqrt{2\log(1/\delta)}}{n}. \] Now to bound the first term in~\eqref{eq:w1bound}, we use the fact that $f_n$ is 1-Lipschitz and bounded by $\mathsf{D}$.
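For concreteness, the empirical plan $\pi^\lambda_n$ on discrete samples can be computed with the standard Sinkhorn fixed-point iteration; the sketch below is the textbook algorithm on toy data (our own illustration, not the implementation of the cited works):

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 6, 0.5
X, Y = rng.uniform(size=(n, 1)), rng.uniform(size=(n, 1))
sigma = np.full(n, 1.0 / n)             # empirical marginal of sigma-hat
tau = np.full(n, 1.0 / n)               # empirical marginal of tau-hat
C = (X - Y.T) ** 2                      # squared-distance cost matrix
K = np.exp(-C / lam)                    # Gibbs kernel

u = np.ones(n)
for _ in range(1000):                   # Sinkhorn fixed-point iterations
    v = tau / (K.T @ u)
    u = sigma / (K @ v)

plan = u[:, None] * K * v[None, :]      # pi = diag(u) K diag(v)
# At convergence the plan's marginals match sigma and tau.
assert np.allclose(plan.sum(axis=1), sigma, atol=1e-10)
assert np.allclose(plan.sum(axis=0), tau, atol=1e-10)
```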
For the optimal potentials $\phi$ and $\psi$ in the original Sinkhorn problem for $\sigma$ and $\tau$, we use the result of Proposition~\ref{prop:sink-pot} to yield \begin{align*} |M_n(x, y) - M(x, y)| &= \left| \frac{1}{e} \exp \left(\frac{1}{\lambda} (\phi_n(x) + \psi_n(y) - c(x, y) )\right) - \frac{1}{e} \exp \left(\frac{1}{\lambda} (\phi(x) + \psi(y) - c(x, y) )\right) \right| \\ \nonumber &= \frac{1}{e} \left|\exp \left(\frac{1}{\lambda} (\phi(x) + \psi(y) - c(x, y) ) \right) \left( 1 - \exp \left(\frac{\phi(x) - \phi_n(x) }{\lambda}\right) \exp\left( \frac{\psi(y) - \psi_n(y)}{\lambda} \right) \right)\right| \\ &\lesssim e^{3 \mathsf{D}/\lambda} \left|1 - e^{\frac{2}{\lambda \sqrt{n}}}\right| \\ & \lesssim \frac{e^{3 \mathsf{D}/\lambda}}{\lambda \sqrt{n}}. \end{align*} Thus, putting this all together, \begin{align*} W_1(\pi_n^\lambda , \pi^\lambda) & \lesssim \frac{\mathsf{D}}{\sqrt{n}} + \frac{1}{n}. \end{align*} Interestingly, this rate of estimation of the Sinkhorn plan breaks the curse of dimensionality. It must be noted, however, that the exponential dependence of Proposition \ref{prop:sink-pot} on $\lambda^{-1}$ implies that these fast rates are attained only in appropriately large regularization regimes. \subsection{Log-concavity of Sinkhorn Factor} The optimal entropy regularized Sinkhorn plan is given by \[ \pi^*(x, y) = \frac{1}{e} \exp\left( \frac{1}{\lambda} \left(\phi^*(x) + \psi^*(y) - c(x, y) \right) \right) \sigma(x)\tau(y). \] This implies that the conditional Sinkhorn density of $Y|X$ is \[ \pi^*(y|x) = \frac{1}{e} \exp\left( \frac{1}{\lambda} \left(\phi^*(x) + \psi^*(y) - c(x, y) \right) \right) \tau(y). \] The optimal potentials satisfy fixed point equations. In particular, \[ \psi^*(y) = -\lambda \log \int \exp\left[- \frac{1}{\lambda} \left(c(x,y) - \phi^*(x)\right) \right] d\sigma(x). \] Using this result, one can prove the following lemma.
\begin{lem}[\cite{benamou2020capacity}] For the cost $\|x-y\|^2$, the map $$h(y) = \exp\left( \frac{1}{\lambda} \left(\phi^*(x) + \psi^*(y) - \|x-y\|^2 \right) \right)$$ is log-concave. \end{lem} \begin{proof} The proof comes by differentiating the map. We calculate the gradient, \begin{align*} \nabla \log h(y) = -2\frac{y-x}{\lambda} + \frac{2}{\lambda} \frac{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x) d\sigma(x)}{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x)} \end{align*} and the Hessian, \begin{align*} &\nabla^2 \log h(y) = -2\frac{I}{\lambda} \\ &+\frac{4}{\lambda^2} \frac{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x) d\sigma(x) \int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x)^\top d\sigma(x)}{(\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x))^2} \\ &-\frac{4}{\lambda^2} \frac{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x) (y-x)^\top d\sigma(x)}{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x)} \\ &+2 I/\lambda \frac{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x)}{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x)} \\ &= -\frac{4}{\lambda^2} \Big(- \frac{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x) d\sigma(x) \int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x)^\top d\sigma(x)}{(\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x))^2} \\ &+ \frac{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] (y-x) (y-x)^\top d\sigma(x)}{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x)} \Big) \end{align*} In the last term, we recognize that \[ \rho(x) = 
\frac{\exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right]}{\int \exp\left[- \frac{1}{\lambda} \left(\|x-y\|^2 - \phi^*(x)\right) \right] d\sigma(x)} \] forms a valid density with respect to $\sigma$, and thus \[ \nabla^2 \log h(y) = -\frac{4}{\lambda^2} \mathsf{Cov}_{\rho d\sigma}(X-y) \] where the covariance matrix of $X-y$ is taken with respect to the density $\rho \, d\sigma$. \end{proof} Suppose, for the sake of argument, that $\tau(y)$ is $\alpha$-strongly log-concave and that the function $h(y)$ is $\beta$-strongly log-concave. Then $\pi_{Y|X=x}\propto h(y) \tau(y)$ is $(\alpha + \beta)$-strongly log-concave. In particular, standard results on the mixing time of the Langevin diffusion imply that the diffusion for $\pi_{Y|X=x}$ mixes faster than the diffusion for the marginal $\tau$ alone. Also, as $\lambda \to 0$, the mass of $h(y)$ concentrates where $\phi_{OT}(x) + \psi_{OT}(y) - \|x-y\|^2$ is maximized, where $\phi_{OT}$ and $\psi_{OT}$ are the optimal transport potentials. In particular, if there exists an optimal transport map between $\sigma$ and $\tau$, then $h(y)$ concentrates around the unregularized optimal transport image $y=T(x)$. \section{Experimental Details}\label{sup:exp-details} \subsection{Network Architectures} Our method integrates separate neural networks playing the roles of \textit{unconditional score estimator}, \textit{compatibility function}, and \textit{barycentric projector}. In our experiments each of these networks uses one of two main architectures: a fully connected network with ReLU activations, and an image-to-image architecture introduced by \citet{Song_Ermon_2020} that is inspired by architectures for image segmentation. For the first network type, we write ``ReLU FCN, Sigmoid output, $w_0 \to w_1 \to \ldots \to w_k \to w_{k+1}$,'' for integers $w_i \geq 1$, to indicate a $k$-hidden-layer fully connected network whose internal layers use ReLU activations and whose output layer uses sigmoid activation.
The hidden layers have dimension $w_1, w_2, \ldots, w_k$ and the network has input and output with dimension $w_0, w_{k+1}$ respectively. For the second network type, we replicate the architectures listed in \citet[Appendix B.1, Tables 2 and 3]{Song_Ermon_2020} and refer to them by name, for example ``NCSN \px{32}'' or ``NCSNv2 \px{32}.'' Our implementation of these experiments may be found in the supplementary code submission. \subsection{Image Sampling Parameter Sheets} \textbf{MNIST $\leftrightarrow$ USPS}: details for qualitative transportation experiments between MNIST and USPS in Figure \ref{fig:usps-mnist} are given in Table \ref{tab:expt-digits-training}. \textbf{CelebA, Blur-CelebA $\to$ CelebA}: we sample \px{64} CelebA images. The Blur-CelebA dataset is composed of CelebA images which are first resized to \px{32} and then resized back to \px{64}, creating a blurred effect. The FID computations in Table \ref{tbl:FID} used a shared set of training parameters given in Table \ref{tab:expt-faces-training}. The sampling parameters for each FID computation are given in Table \ref{tab:expt-faces-sampling}. \textbf{Synthetic Data}: details for the synthetic data experiment shown in Figure \ref{fig:methods-comparison} are given in Table \ref{tab:expt-synth-sampling}. \begin{table}[ht] \setstretch{1.4} \begin{tabular}{|l|l|p{7.5cm}|} \hline Problem Aspect & Hyperparameters & Numbers and details \\ \hline \hline \multirow{2}{*}{Source} & Dataset & USPS \cite{usps} \\ \cline{2-3} & Preprocessing & None \\ \hline \multirow{2}{*}{Target} & Dataset & MNIST \cite{mnist} \\ \cline{2-3} & Preprocessing & Nearest neighbor resize to \px{16}. \\ \hline \multirow{4}{*}{Score Estimator} & Architecture & NCSN \px{32}, applied as-is to \px{16} images. \\ \cline{2-3} & Loss & Denoising Score Matching \\ \cline{2-3} & Optimization & \makecell[l]{Adam, lr $=10^{-4}$, $\beta_1=0.9$, $\beta_2=0.999$. 
\\ No EMA of model parameters.} \\ \cline{2-3} & Training & \makecell[l]{40000 training iterations, \\ 128 samples per minibatch. } \\ \hline \multirow{4}{*}{Compatibility} & Architecture & \makecell[l]{ReLU network with ReLU output activation, \\$256 \to 1024 \to 1024 \to1$ } \\ \cline{2-3} & Regularization & $\chi^2$ Regularization, $\lambda = 0.001$. \\ \cline{2-3} & Optimization & Adam, lr $=10^{-6}$, $\beta_1=0.9$, $\beta_2=0.999$ \\ \cline{2-3} & Training & \makecell[l]{5000 training iterations, \\ 1000 samples per minibatch. } \\ \hline \multirow{3}{*}{\makecell[l]{Barycentric \\Projection}} & Architecture & \makecell[l]{ReLU network with sigmoid output activation, \\ $256 \to 1024 \to 1024 \to 256$. \\Input pixels are scaled to $[-1, 1]$ by $x \mapsto 2x-1$. } \\ \cline{2-3} & Optimization & Adam, lr $=10^{-6}$, $\beta_1=0.9$, $\beta_2=0.999$ \\ \cline{2-3} & Training & \makecell[l]{5000 training iterations, \\ 1000 samples per minibatch. } \\ \hline \multirow{5}{*}{Sampling} & Annealing Schedule & \makecell[l]{7 noise levels decaying geometrically, \\ $\tau_0 = 0.2154, \ldots, \tau_6 = 0.01$.} \\ \cline{2-3} & Step size & $\varepsilon = 5 \cdot 10^{-6}$ \\ \cline{2-3} & Steps per noise level & $T=20$ \\ \cline{2-3} & Denoising? \cite{jolicoeur-martineau2021adversarial} & Yes \\ \cline{2-3} & $\chi^2$ SoftPlus threshold & $\alpha = 1000$ \\ \hline \end{tabular} \caption{ Data and model details for the \textbf{USPS $\to$ MNIST} qualitative experiment shown in Figure \ref{fig:usps-mnist}.
For \textbf{MNIST $\to$ USPS}, we use the same configuration with source and target datasets swapped.} \label{tab:expt-digits-training} \setstretch{1} \end{table} \begin{table}[ht] \setstretch{1.4} \begin{tabular}{|l|l|p{8.1cm}|} \hline Problem Aspect & Hyperparameters & Numbers and details \\ \hline \hline \multirow{2}{*}{Source} & Dataset & CelebA or Blur-CelebA \cite{liu2015faceattributes} \\ \cline{2-3} & Preprocessing & \makecell[l]{\px{140} center crop. \\ If Blur-CelebA: nearest neighbor resize to \px{32}. \\ Nearest neighbor resize to \px{64}. \\ Horizontal flip with probability 0.5. } \\ \hline \multirow{2}{*}{Target} & Dataset & CelebA \cite{liu2015faceattributes} \\ \cline{2-3} & Preprocessing & \makecell[l]{\px{140} center crop. \\ Nearest neighbor resize to \px{64}. \\ Horizontal flip with probability 0.5. } \\ \hline \multirow{4}{*}{Score Estimator} & Architecture & NCSNv2 \px{64}. \\ \cline{2-3} & Loss & Denoising Score Matching \\ \cline{2-3} & Optimization & \makecell[l]{Adam, lr $=10^{-4}$, $\beta_1=0.9$, $\beta_2=0.999$. \\ Parameter EMA at rate $0.999$. } \\ \cline{2-3} & Training & \makecell[l]{210000 training iterations, \\ 128 samples per minibatch. } \\ \hline \multirow{4}{*}{Compatibility} & Architecture & \makecell[l]{ReLU network with ReLU output activation, \\$3 \cdot 64^2 \to 2048 \to \ldots \to 2048 \to1$ (8 hidden layers). } \\ \cline{2-3} & Regularization & \makecell[l]{Varies in $\chi^2$ reg., $\lambda \in \{0.1, 0.01, 0.001\}$,\\ and KL reg., $\lambda \in \{0.1, 0.01, 0.005\}$. }\\ \cline{2-3} & Optimization & Adam, lr $=10^{-6}$, $\beta_1=0.9$, $\beta_2=0.999$ \\ \cline{2-3} & Training & \makecell[l]{5000 training iterations, \\ 1000 samples per minibatch. } \\ \hline \multirow{3}{*}{\makecell[l]{Barycentric \\ Projection}} & Architecture & \makecell[l]{NCSNv2 \px{64} applied as-is for image generation.
} \\ \cline{2-3} & Optimization & Adam, lr $=10^{-7}$, $\beta_1=0.9$, $\beta_2=0.999$ \\ \cline{2-3} & Training & \makecell[l]{20000 training iterations, \\ 64 samples per minibatch. } \\ \hline \end{tabular} \caption{Training details for the \textbf{CelebA, Blur-CelebA $\to$ CelebA} FID experiment (Figure \ref{fig:methods-comparison}).} \label{tab:expt-faces-training} \setstretch{1} \end{table} \begin{table}[ht] \setstretch{1.4} \begin{tabular}{|l|l|l|l|l|l|} \hline Problem & Noise ($\tau_1, \tau_k$) & Step Size & Steps & Denoising? \cite{jolicoeur-martineau2021adversarial} & $\chi^2$ SoftPlus Param. \\ \hline $\chi^2$, $\lambda = 0.1$ & $(9, 0.01)$ & $15 \cdot 10^{-7}$ & $k=500$ & Yes & $\alpha=10$ \\ \hline $\chi^2$, $\lambda = 0.01$ & \multicolumn{5}{l|}{\ditto} \\ \hline $\chi^2$, $\lambda = 0.001$ & \multicolumn{5}{l|}{\ditto} \\ \hline KL, $\lambda=0.1$ & $(90, 0.1)$ & $15 \cdot 10^{-7}$ & $k=500$ & Yes & -- \\ \hline KL, $\lambda=0.01$ & \multicolumn{5}{l|}{\ditto} \\ \hline KL, $\lambda=0.005$ & $(90, 0.1)$ & $1 \cdot 10^{-7}$ & $k=500$ & Yes & -- \\ \hline \end{tabular} \caption{Sampling details for the \textbf{CelebA, Blur-CelebA $\to$ CelebA} FID experiment (Figure \ref{fig:methods-comparison}).} \label{tab:expt-faces-sampling} \setstretch{1} \end{table} \begin{table}[ht] \setstretch{1.4} \begin{tabular}{|l|l|p{7.5cm}|} \hline Problem Aspect & Hyperparameters & Numbers and details \\ \hline \hline \multirow{2}{*}{Source} & Dataset & \makecell[l]{Gaussian in ${\mathbb R}^{784}$, \\ Mean and covariance are that of MNIST} \\ \cline{2-3} & Preprocessing & None \\ \hline \multirow{2}{*}{Target} & Dataset & Unit gaussian in ${\mathbb R}^{784}$. 
\\ \cline{2-3} & Preprocessing & None \\ \hline Score Estimator & Architecture & None (score is given by closed form) \\ \hline \multirow{4}{*}{Compatibility} & Architecture & \makecell[l]{ReLU network with ReLU output activation, \\$784 \to 2048 \to 2048 \to 2048 \to 2048 \to1$ } \\ \cline{2-3} & Regularization & KL Regularization, $\lambda \in \{1, 0.5, 0.25\}$. \\ \cline{2-3} & Optimization & Adam, lr $=10^{-6}$, $\beta_1=0.9$, $\beta_2=0.999$ \\ \cline{2-3} & Training & \makecell[l]{5000 training iterations, \\ 1000 samples per minibatch. } \\ \hline \multirow{5}{*}{Sampling} & Annealing Schedule & No annealing. \\ \cline{2-3} & Step size & $\varepsilon = 5 \cdot 10^{-3}$ \\ \cline{2-3} & Mixing steps & $T=1000$ \\ \cline{2-3} & Denoising? \cite{jolicoeur-martineau2021adversarial} & Not applicable. \\ \hline \end{tabular} \caption{ Sampling and model details for the synthetic experiment shown in Figure \ref{fig:methods-comparison}. } \label{tab:expt-synth-sampling} \setstretch{1} \end{table} \end{document}
\begin{document} \title[Coefficient estimates for some classes of functions associated with \(q\)-function theory] {Coefficient estimates for some classes of functions associated with \(q\)-function theory} \author{Sarita Agrawal} \thanks{ Discipline of Mathematics, Indian Institute of Technology Indore, Simrol, Khandwa Road, Indore 453 552, India\\ {\em Email: [email protected]}\\ {\bf Acknowledgement}: I thank my PhD supervisor Dr. Swadesh Kumar Sahoo for his valuable suggestions and careful reading of this manuscript. } \begin{abstract} In this paper, for every $q\in(0,1)$, we obtain the Herglotz representation theorem and discuss the Bieberbach type problem for the class of $q$-convex functions of order $\alpha$, $0\le\alpha<1$. In addition, we discuss the Fekete-Szeg\"o problem and the Hankel determinant problem for the class of $q$-starlike functions, leading to a couple of conjectures for the class of $q$-starlike functions of order $\alpha$, $0\le\alpha<1$. \noindent {\bf 2010 Mathematics Subject Classification}. 28A25; 30C45; 30C50; 33B10. \noindent {\bf Key words and phrases.} $q$-starlike functions of order $\alpha$, $q$-convex functions, Herglotz representation, Bieberbach's conjecture, the Fekete-Szeg\"o problem, Hankel determinant. \end{abstract} \maketitle \pagestyle{myheadings} \markboth{Sarita Agrawal}{Coefficient problems} \section{Introduction} Throughout the present investigation, we denote by $\mathbb{C}$ the set of complex numbers and by $\mathcal{H}({\mathbb D})$ the set of all analytic (or holomorphic) functions in ${\mathbb D}$. We use the symbol $\mathcal{A}$ for the class of functions $f \in \mathcal{H}({\mathbb D})$ with the standard normalization $f(0)=0=f'(0)-1$, i.e.
the functions $f\in\mathcal{A}$ have the power series representation of the form \begin{equation}\label{e1} f(z)=z+\sum_{n=2}^\infty a_nz^n. \end{equation} The set $\mathcal{S}$ denotes the class of {\em univalent} functions in $\mathcal{A}$. We denote by $\mathcal{S}^*$ and $\mathcal{C}$ the classes of starlike and convex functions in $\mathcal{A}$ respectively. These classes are extensively studied in the literature; see \cite{Dur83,Goo83}. The principal value of the logarithmic function $\log z$ for $z\neq 0$ is denoted by ${\operatorname{Log}\,} z:=\ln |z|+i {\operatorname{Arg}\,}(z)$, where $-\pi\le {\operatorname{Arg}\,}(z)<\pi$. In geometric function theory, finding bounds for the coefficients $a_n$ of functions of the form (\ref{e1}) is an important problem, as such bounds reveal geometric properties of the corresponding functions. For example, the bound for the second coefficient $a_2$ of functions in the class $\mathcal{S}$ gives growth and distortion properties as well as covering theorems. Bieberbach proposed a conjecture in the year $1916$ that {\em ``among all functions in $\mathcal{S}$, the Koebe function has the largest coefficient''}; for instance see \cite{Dur83,Goo83}. This conjecture was a challenging open problem for mathematicians for several decades. Initial attempts to prove the conjecture were made for subclasses of univalent functions such as $\mathcal{S}^*$, $\mathcal{C}$, etc. Many new techniques were developed in order to settle the conjecture. One of the important techniques is the {\em Herglotz representation theorem}, which gives an integral representation of analytic functions with positive real part in ${\mathbb D}$. Finally, the complete proof of Bieberbach's conjecture was given by de Branges in $1985$ \cite{deB85}. Another interesting coefficient functional is the Hankel determinant.
The $k^{th}$ order Hankel determinant ($k\ge 1$) of $f\in \mathcal{A}$ is defined by $$H_k(n)=\left|\begin{array}{cccc} a_n & a_{n+1} & \cdots & a_{n+k-1}\\ a_{n+1} & a_{n+2} & \cdots & a_{n+k}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n+k-1} & a_{n+k} & \cdots & a_{n+2k-2} \end{array} \right|. $$ For our discussion, in this paper, we consider the Hankel determinants $H_2(1)$ (whose associated functional is also called the Fekete-Szeg\"o functional) and $H_2(2)$. Also in 1916, Bieberbach proved that if $f\in\mathcal{S}$, then $|a_2^2-a_3|\le 1$. In 1933, Fekete and Szeg\"o \cite{FS33} proved that $$|a_3-\mu a_2^2|\le \left \{ \begin{array}{ll} 4\mu-3 & \mbox{if } \mu \ge 1\\ 1+2\exp [-2\mu/(1-\mu)] & \mbox{if } 0\le \mu \le 1\\ 3-4\mu & \mbox{if } \mu \le 0 \end{array}\right. . $$ The result is sharp in the sense that for each $\mu$ there is a function in the class under consideration for which equality holds. The coefficient functional $a_3-\mu a_2^2$ has many applications in function theory. For example, the functional $a_3-a_2^2$ is equal to $S_f(0)/6$, where $S_f(z)$ is the Schwarzian derivative of the locally univalent function $f$, defined by $S_f(z)=(f''(z)/f'(z))'-(1/2)(f''(z)/f'(z))^2$. Finding the maximum value of the functional $a_3-\mu a_2^2$ is called the {\em Fekete-Szeg\"o problem}. Koepf solved the Fekete-Szeg\"o problem for close-to-convex functions and showed that the largest real number $\mu$ for which $a_3-\mu a_2^2$ is maximized by the Koebe function $z/(1-z)^2$ is $\mu=1/3$ (see \cite{Koe87}). Later, in \cite{Koe87-II} (see also \cite{Lon93}), this result was generalized to functions that are close-to-convex of order $\beta$, $\beta\ge 0$. In \cite{Pfl85}, Pfluger employed the variational method to give another treatment of the Fekete-Szeg\"o inequality which includes a description of the image domains under extremal functions.
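As a concrete illustration of the definition (our own numerical aside, not part of the cited results), for the Koebe function $k(z)=z/(1-z)^2=\sum_{n\ge 1} n z^n$ one has $a_n=n$, so $H_2(1)=a_1a_3-a_2^2=-1$ and $H_2(2)=a_2a_4-a_3^2=-1$:

```python
import numpy as np

# Coefficients of the Koebe function k(z) = z/(1-z)^2: a_n = n.
a = {n: n for n in range(1, 7)}

def hankel(k, n):
    """k-th order Hankel determinant H_k(n) with (i, j) entry a_{n+i+j}."""
    M = np.array([[a[n + i + j] for j in range(k)] for i in range(k)], dtype=float)
    return np.linalg.det(M)

h21 = hankel(2, 1)   # a_1*a_3 - a_2^2 = 1*3 - 2*2 = -1
h22 = hankel(2, 2)   # a_2*a_4 - a_3^2 = 2*4 - 3*3 = -1
assert abs(h21 - (-1.0)) < 1e-12 and abs(h22 - (-1.0)) < 1e-12
```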
Later, Pfluger \cite{Pfl86} used Jenkins' method to show that for $f\in \mathcal{S}$, $$|a_3-\mu a_2^2|\le 1+2|\exp(-2\mu/(1-\mu))| $$ holds for complex $\mu$ such that ${\operatorname{Re}\,}(1/(1-\mu))\ge 1$. The inequality is sharp if and only if $\mu$ lies in a certain pear shaped subregion of the disk given by $$\mu =1-\frac{u+itv}{u^2+v^2}, \quad -1\le t\le 1, $$ where $u=1-\log(\cos \varphi)$ and $v=\tan \varphi- \varphi$, $0<\varphi <\pi/2$. In recent years, the study of $q$-analogs of subclasses of univalent functions has been widely adopted among function theorists. Bieberbach type problems for functions belonging to classes associated with $q$-function theory are discussed in \cite{IMS90,AS14-2,SS15}. In the sequel, we discuss the Bieberbach type problem for the $q$-analog of convex functions of order $\alpha$, $0\le \alpha<1$. Bounds for the Hankel determinant and the Fekete-Szeg\"o functional for subclasses of univalent functions are widely available in the literature; see, for instance, \cite{KM69,Koe87, Koe87-II}. But these types of problems have not been considered for classes involving $q$-theory. This motivates us to discuss the Hankel determinant and Fekete-Szeg\"o problems for the $q$-analog of starlike functions. \section{Preliminaries and Main Theorems}\label{prelm} For $0<q<1$, {\em the $q$-difference operator} (see \cite{AS14-2}), denoted as $D_qf$, is defined by the equation $$(D_qf)(z)=\frac{f(z)-f(qz)}{z(1-q)},\quad z\neq 0, \quad (D_qf)(0)=f'(0). $$ Now, recall the definition of the class of {\em $q$-starlike functions of order $\alpha$, $0\le\alpha<1$,} denoted by $\mathcal{S}_q^*(\alpha)$. \begin{definition}\cite[Definition~1.1]{AS14-2} A function $f\in\mathcal{A}$ is said to be in the class $\mathcal{S}_q^*(\alpha)$, $0\le \alpha<1$, if $$\left|\frac{\displaystyle\frac{z(D_qf)(z)}{f(z)}-\alpha}{1-\alpha}-\frac{1}{1-q}\right|\leq \frac{1}{1-q}, \quad z\in \mathbb{D}.
$$ \end{definition} Note that the choice $\alpha=0$ gives the definition of the class of $q$-starlike functions, denoted by $\mathcal{S}_q^*$ (see \cite[Definition~1.3]{IMS90}). Indeed, a function $f\in\mathcal{A}$ is said to belong to $\mathcal{S}_q^*$ if $|(z(D_qf)(z))/f(z)-1/(1-q)|\leq 1/(1-q), z\in \mathbb{D}$. By using the idea of the well-known Alexander's theorem \cite[Theorem~2.12]{Dur83}, Baricz and Swaminathan in \cite{BS14} defined a $q$-analog of convex functions, denoted by $\mathcal{C}_q$, in the following way. \begin{definition}\cite[Definition~3.1]{BS14} A function $f\in\mathcal{A}$ is said to belong to $\mathcal{C}_q$ if and only if $z(D_qf)(z)\in \mathcal{S}_q^*$. \end{definition} We call the functions of the class $\mathcal{C}_q$ {\em $q$-convex functions}. The class $\mathcal{C}_q$ is non-empty, as shown in \cite[Theorem~3.2]{BS14}. Note that as $q\to 1$, the classes $\mathcal{S}_q^*$ and $\mathcal{C}_q$ reduce to $\mathcal{S}^*$ and $\mathcal{C}$, respectively. It is natural to define {\em the $q$-convex functions of order $\alpha$}, $0\le \alpha< 1$, denoted by $\mathcal{C}_q(\alpha)$, in the following way: \begin{definition}\label{def} A function $f\in\mathcal{A}$ is said to be in the class $\mathcal{C}_q(\alpha)$, $0\le \alpha<1$, if and only if $z(D_qf)(z)\in \mathcal{S}_q^*(\alpha)$. \end{definition} We can see that as $q\to 1$, the class $\mathcal{C}_q(\alpha)$ reduces to the class $\mathcal{C}(\alpha)$ of convex functions of order $\alpha$ (for the definition of $\mathcal{C}(\alpha)$ see \cite{Goo83}). The Bieberbach type problem was treated for the classes $\mathcal{S}_q^*$ and $\mathcal{S}_q^*(\alpha)$ in \cite{IMS90} and \cite{AS14-2}, respectively, but the Fekete-Szeg\"o problem and the Hankel determinant were not considered there.
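As a small numerical sanity check (our own Python sketch, not part of the original text), the $q$-difference operator above sends $z^n$ to $[n]_q\, z^{n-1}$, where $[n]_q=(1-q^n)/(1-q)$, and recovers the ordinary derivative as $q\to 1$:

```python
# The q-difference operator (D_q f)(z) = (f(z) - f(qz)) / (z (1 - q)).
def Dq(f, z, q):
    return (f(z) - f(q * z)) / (z * (1 - q))

q, z, n = 0.7, 0.5, 3
f = lambda w: w ** n

# D_q z^n = [n]_q z^{n-1}, where [n]_q = (1 - q^n)/(1 - q) is the q-bracket.
bracket_n = (1 - q ** n) / (1 - q)
lhs = Dq(f, z, q)
rhs = bracket_n * z ** (n - 1)

# As q -> 1, D_q f approaches the ordinary derivative f'(z) = n z^{n-1}.
near_limit = Dq(f, z, 0.999999)
```

The second computation illustrates the limit $D_qf\to f'$ that is used throughout when letting $q\to 1$.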
In this article, we first discuss these two problems for the class $\mathcal{S}_q^*$ and pose two conjectures on the Fekete-Szeg\"o problem and Hankel determinant for the class $\mathcal{S}_q^*(\alpha)$. Since the Bieberbach type problem for the class $\mathcal{C}_q(\alpha)$, $0\le \alpha<1$, is not available in the literature, we solve it here. In addition, we find the Herglotz representation formula for functions belonging to the class $\mathcal{C}_q(\alpha)$. One can also consider the Hankel determinant and Fekete-Szeg\"o problems for $\mathcal{C}_q(\alpha)$. The concept of the $q$-integral is useful in this setting. Thomae, a pupil of Heine, introduced the so-called $q$-integral \cite{Tho69} $$\int_0^1 f(t)\, d_q t = (1-q)\sum_{n=0}^{\infty} q^n f(q^n), $$ provided the $q$-series converges. In 1910, Jackson defined the general $q$-integral \cite{Jac10} (see also \cite{GR90,Tho69}) in the following manner: $$\int_a^b f(t)\, d_q t:=\int_0^b f(t)\, d_q t - \int_0^a f(t)\, d_q t, $$ where $$I_q(f(x)):=\int_0^x f(t)\, d_q t = x(1-q)\sum_{n=0}^{\infty} q^n f(xq^n), $$ provided the $q$-series converges. Observe that $$D_q I_q f(x)=f(x)~~\mbox{ and }~~ I_q D_q f(x)=f(x)-f(0), $$ where the second equality holds if $f$ is continuous at $x=0$. For more background on $q$-integrals, we refer to \cite{GR90}. Now, we state our main results. The Fekete-Szeg\"o problem for the class $\mathcal{S}_q^*$ is solved as follows: \begin{theorem}\label{T3} Let $f\in\mathcal{S}_q^*$ be of the form (\ref{e1}) and let $\mu$ be any complex number. Then $$ |a_3-\mu a_2^2|\le \max\left\{\left|2(1-2\mu)\left(\frac{\ln q}{q-1}\right)^2+2\left(\frac{\ln q}{q^2-1}\right)\right|, 2\left(\frac{\ln q}{q^2-1}\right)\right\}.
$$ Equality occurs for the functions \begin{equation}\label{e2} F_1(z):=z\left\{\exp \left[\displaystyle \sum_{n=1}^\infty \frac{2\ln q}{q^n-1}z^n\right]\right\} \end{equation} and \begin{equation}\label{e3} F_2(z):=z\left\{\exp \left[\displaystyle \sum_{n=1}^\infty \frac{2\ln q}{q^{2n}-1}z^{2n}\right]\right\}. \end{equation} \end{theorem} The next result is the estimate of the second order Hankel determinant for the class $\mathcal{S}_q^*$. \begin{theorem}\label{T4} Let $f\in\mathcal{S}_q^*$ be of the form (\ref{e1}). Then $$ |H_2(2)|=|a_2a_4-a_3^2|\le 4\left(\frac{\ln q}{q^2-1}\right)^2. $$ Equality occurs for the function $F_2(z)$ defined in $(\ref{e3})$. \end{theorem} \begin{remark} For $q\to 1$, Theorem~\ref{T3} gives the Fekete-Szeg\"o problem for the class $\mathcal{S}^*$ \cite[Theorem~1]{KM69}. \end{remark} \begin{remark} For $q\to 1$, Theorem~\ref{T4} gives the Hankel determinant for the class $\mathcal{S}^*$ \cite[Theorem~3.1]{Jan07}. \end{remark} Now we present the Herglotz representation of functions belonging to the class $\mathcal{C}_q(\alpha)$: \begin{theorem}\label{thm2} Let $f\in \mathcal{A}$. Then $f\in\mathcal{C}_q(\alpha)$, $0\le \alpha<1$, if and only if there exists a probability measure $\mu$ supported on the unit circle such that $$\frac{z(D_qf)'(z)}{(D_qf)(z)}=\int_{|\sigma|=1}\sigma z F_{q, \alpha}^{'}(\sigma z)\,{\rm d}\mu(\sigma), $$ where \begin{equation}\label{MainThm1:eq1} F_{q,\alpha}(z)=\displaystyle \sum_{n=1}^\infty \frac{(-2)\left(\ln \frac{q}{1-\alpha(1-q)}\right)}{1-q^n}z^n, \quad z\in {\mathbb D}. \end{equation} \end{theorem} \begin{remark} It is clear that when $q\to1$, $$F_{q, \alpha}^{'}(z)\to 2(1-\alpha)/(1-z) \mbox{ and } z(D_qf)'(z)/(D_qf)(z)\to zf''(z)/f'(z). $$ Hence, when $q$ approaches $1$, Theorem~\ref{thm2} leads to the Herglotz representation of convex functions of order $\alpha$ (see for instance \cite[pp. 172, Problem~3]{Goo83}).
\end{remark} The Bieberbach type problem for the class $\mathcal{C}_q(\alpha)$, $0\le \alpha<1$, is stated below: \begin{theorem}\label{sec2-thm7} Let \begin{equation}\label{MainThm2:eq} E_q(z):=I_q\{\exp [F_{q,\alpha}(z)]\}=z+\displaystyle \sum_{n=2}^\infty\left(\frac{1-q}{1-q^n}\right) c_n z^n, \end{equation} where $c_n$ is the $n$-th coefficient of the function $z\exp [F_{q,\alpha}(z)]$. Then $E_q\in \mathcal{C}_q(\alpha)$, $0\le \alpha<1$. Moreover, if $f(z)=z+\sum_{n=2}^\infty a_n z^n\in \mathcal{C}_q(\alpha)$, then $|a_n|\le((1-q)/(1-q^n)) c_n$, with equality holding for all $n$ if and only if $f$ is a rotation of $E_q$. \end{theorem} \begin{remark} It would be interesting to obtain an explicit form of the extremal function, independent of the $q$-integral, in Theorem~\ref{sec2-thm7}. \end{remark} \begin{remark} It is clear that when $q\to 1$, $$F_{q,\alpha}(z)\to -2(1-\alpha)\log(1-z), $$ and hence $z\exp [F_{q,\alpha}(z)]\to z/(1-z)^{2(1-\alpha)}$. Therefore, as $q\to 1$, the coefficient $c_n\to {\prod_{k=2}^n (k-2\alpha)}/(n-1)!$, so that $|a_n|$ is bounded by ${\prod_{k=2}^n (k-2\alpha)}/n!$ for $f\in \mathcal{C}(\alpha)$. That is, when $q\to 1$, Theorem~\ref{sec2-thm7} leads to the Bieberbach type problem for the class $\mathcal{C}(\alpha)$ (see for instance \cite[Theorem~2, pp. 140]{Goo83}). \end{remark} \section{Properties of the class $\mathcal{C}_q(\alpha), 0\le \alpha<1$}\label{sec2} This section is devoted to the study of some basic properties of the class $\mathcal{C}_q(\alpha)$. The following proposition says that a function $f\in\mathcal{C}_q(\alpha)$ can be written in terms of a function $g$ in $\mathcal{S}^*_q(\alpha)$. The proof is immediate from the definition of $\mathcal{C}_q(\alpha)$. \begin{proposition}\label{sec1-prop1} Let $f \in \mathcal{C}_q(\alpha)$, $0\le \alpha<1$.
Then there exists a unique function $g \in \mathcal{S}^*_q(\alpha)$, $0\le \alpha<1$, such that \begin{equation}\label{prop1-eqn} g(z)=z(D_qf)(z) \end{equation} holds. Similarly, for a given function $g\in \mathcal{S}_q^*(\alpha)$ there exists a unique function $f\in \mathcal{C}_q(\alpha)$ satisfying $(\ref{prop1-eqn})$. \end{proposition} The next result is a characterization for a function to be in the class $\mathcal{C}_q(\alpha)$. \begin{theorem}\label{sec2-thm1} Let $f\in \mathcal{A}$. Then $f\in\mathcal{C}_q(\alpha)$, $0\le \alpha<1$, if and only if $$\left|q\frac{(D_qf)(qz)}{(D_qf)(z)}-\alpha q\right|\leq {1-\alpha}, \quad z\in \mathbb{D}. $$ \end{theorem} \begin{proof} By Definition~\ref{def}, we have $f\in\mathcal{C}_q(\alpha)$ if and only if $z(D_qf)(z)\in \mathcal{S}_q^*(\alpha)$. The result then follows immediately from \cite[Theorem~2.2]{AS14-2}. \end{proof} \begin{corollary} The class $\mathcal{C}_q(\alpha)$ satisfies the inclusion relations $$\bigcap_{q<p<1}\mathcal{C}_p(\alpha)\subseteq \mathcal{C}_q(\alpha) ~~\mbox{ and }~~ \bigcap_{0<q<1}\mathcal{C}_q(\alpha) = \mathcal{C}(\alpha). $$ \end{corollary} \begin{proof} If $f\in\mathcal{C}_p(\alpha)$ for all $p\in (q,1)$, then letting $p\to q$ we get $f\in\mathcal{C}_q(\alpha)$. Hence the inclusion $$\bigcap_{q<p<1}\mathcal{C}_p(\alpha)\subseteq \mathcal{C}_q(\alpha) $$ holds. Similarly, if $f\in\mathcal{C}_q(\alpha)$ for all $q\in (0,1)$, then letting $q\to 1$ we get $f\in\mathcal{C}(\alpha)$. That is, $$ \bigcap_{0<q<1}\mathcal{C}_q(\alpha) \subseteq \mathcal{C}(\alpha) $$ holds. It remains to show that $$\mathcal{C}(\alpha)\subseteq \bigcap_{0<q<1}\mathcal{C}_q(\alpha). $$ To this end, let $f\in \mathcal{C}(\alpha)$; we show that $f\in \mathcal{C}_q(\alpha)$ for all $q\in (0,1)$. Since $f\in \mathcal{C}(\alpha)$, $zf'\in \mathcal{S}^*(\alpha)$. Since, by \cite[Corollary~2.3]{AS14-2}, $\mathcal{S}^*(\alpha)=\cap_{0<q<1}\mathcal{S}^*_q(\alpha)$, it follows that $zf'\in \mathcal{S}^*_q(\alpha)$ for all $q\in(0,1)$.
Thus, by Proposition~\ref{sec1-prop1}, there exists a unique $h\in \mathcal{C}_q(\alpha)$ satisfying the identity (\ref{prop1-eqn}), namely $h(z)=f(z)$. The proof now follows immediately. \end{proof} We now define two sets and establish some basic results which will also be used to prove our main theorems. Define $$B_q=\{g:g\in \mathcal{H}({\mathbb D}),~g(0)=q \mbox{ and } g:{\mathbb D} \to {\mathbb D}\} ~~\mbox{ and }~~ B_q^0=\{g:g\in B_q \mbox{ and } 0\notin g({\mathbb D}) \}. $$ \begin{lemma}{\label{lm2}} \cite[Lemma~2.4]{AS14-2} If $h\in B_q$ then the infinite product $\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}$ converges uniformly on compact subsets of ${\mathbb D}$. \end{lemma} \begin{lemma}{\label{lm3}} If $h\in B_q^0$ then the infinite product $\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}$ converges uniformly on compact subsets of ${\mathbb D}$ to a zero-free function in $\mathcal{H}({\mathbb D})$. Furthermore, the function $f$ satisfying the relation \begin{equation}\label{eq3} z(D_qf)(z)=\frac{z}{\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}} \end{equation} belongs to $\mathcal{C}_q(\alpha)$ and $h(z)=\displaystyle \left(q\frac{(D_qf)(qz)}{(D_qf)(z)}-\alpha q\right)/(1-\alpha)$. \end{lemma} \begin{proof} The convergence of the infinite product is due to Lemma \ref{lm2}. Since $h\in B_q^0$, we have $h(z)\neq 0$ in ${\mathbb D}$, and the infinite product does not vanish in ${\mathbb D}$. Thus, the function $z(D_qf)(z)\in \mathcal{A}$ and we find the relation $$ q\frac{(D_qf)(qz)}{(D_qf)(z)}=q\lim_{k\to\infty}\prod_{n=0}^k\frac{(1-\alpha)h(zq^n)+\alpha q}{(1-\alpha)h(zq^{n+1})+\alpha q} =(1-\alpha)h(z)+\alpha q. $$ Since $h\in B_q^0$, we get $f\in \mathcal{C}_q(\alpha)$, and the proof of our lemma is complete.
\end{proof} Let $\mathcal{P}$ be the family of all functions $p\in \mathcal{H}({\mathbb D})$ for which ${\operatorname{Re}\,} \{p(z)\}\ge 0$ and \begin{equation}\label{e6} p(z)=1+p_1z+p_2z^2+\ldots \end{equation} for $z\in{\mathbb D}$. \begin{lemma}\label{lm}\cite[Lemma~2.4]{IMS90} A function $g\in B_q^0$ if and only if it has the representation \begin{equation}\label{eq4} g(z)=\exp\{(\ln q) p(z)\}, \end{equation} where $p(z)$ belongs to the class $\mathcal{P}$. \end{lemma} \begin{theorem}\label{thm1} The mapping $\rho:\mathcal{C}_q(\alpha) \to B_q^0$ defined by $$\rho(f)(z)=\left(q\frac{(D_qf)(qz)}{(D_qf)(z)}-\alpha q\right)/(1-\alpha) $$ is a bijection. \end{theorem} \begin{proof} For $ h \in B_q^0 $, define a mapping $\sigma:\,B_q^0 \to \mathcal{A}$ by $$ z(D_q\sigma(h))(z)=\frac{z}{\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}}. $$ It is clear from Lemma~\ref{lm3} that $\sigma(h) \in \mathcal{C}_q(\alpha)$ and $(\rho\circ \sigma)(h)=h$. Considering the composition mapping $\sigma\circ \rho$, we compute that \begin{eqnarray*} z(D_q(\sigma\circ \rho)(f))(z) &=&\frac{z}{\prod_{n=0}^\infty \{((1-\alpha)\rho(f)(zq^n)+\alpha q)/q\}}\\ &=&\frac{z}{\prod_{n=0}^\infty \{q(D_qf)(zq^{n+1})/q(D_qf)(zq^n)\}} =z(D_qf)(z), \end{eqnarray*} that is, $$ (\sigma\circ \rho)(f)=f. $$ Hence $\sigma\circ \rho$ and $\rho\circ \sigma$ are identity mappings and $\sigma$ is the inverse of $\rho$, i.e., the map $\rho$ is invertible. Hence $\rho$ is a bijection. This completes the proof of our theorem. \end{proof} \section{Proof of the main theorems} In this section we prove our main theorems stated in Section~\ref{prelm}. The following lemmas are useful for the proof of the Fekete-Szeg\"o problem and for finding the Hankel determinant. \begin{lemma}\label{l1}\cite[Theorem~1.13]{IMS90} The mapping $\rho:\mathcal{S}_q^* \to B_q^0$ defined by $$\rho(f)(z)=\displaystyle \frac{f(qz)}{f(z)} $$ is a bijection. \end{lemma} \begin{lemma}\label{l3}\cite[Theorem~1.15]{IMS90} Let $f\in \mathcal{A}$.
Then $f\in\mathcal{S}_q^*$ if and only if there exists a probability measure $\mu$ supported on the unit circle such that $$\frac{zf'(z)}{f(z)}=1+\int_{|\sigma|=1}\sigma z F_q^{'}(\sigma z)\,{\rm d}\mu(\sigma), $$ where \begin{equation} F_q(z)=\displaystyle \sum_{n=1}^\infty \frac{2\ln q}{q^n-1}z^n, \quad z\in {\mathbb D} . \end{equation} \end{lemma} \begin{lemma}\label{l4}\cite[pp.~254-256]{LZ83} Let the function $p\in\mathcal{P}$ be given by the power series (\ref{e6}). Then $$ 2p_2=p_1^2+x(4-p_1^2), $$ $$4p_3=p_1^3+2(4-p_1^2)p_1x-p_1(4-p_1^2)x^2+2(4-p_1^2)(1-|x|^2)z, $$ for some $x$ and $z$ satisfying $|x|\le 1$, $|z|\le 1$, and $p_1\in [0,2]$. \end{lemma} \begin{lemma}\label{l5}\cite[Lemma~1]{MM92} Let the function $p\in\mathcal{P}$ be given by the power series (\ref{e6}). Then for any real number $\lambda$, $$ |p_2-\lambda p_1^2|\le 2 \max\{1, |2\lambda-1|\}, $$ and the result is sharp. \end{lemma} \begin{proof}[\bf Proof of Theorem~\ref{T3}] Let $f\in \mathcal{S}^*_q$. Then by Lemma~\ref{l1}, there exists a function $g\in B_q ^0$ such that $g(z)=f(qz)/f(z)$. Since $g\in B_q ^0$, by Lemma~\ref{lm}, $g(z)$ has the representation (\ref{eq4}). That is, $$ \frac{f(qz)}{f(z)}=\exp\{(\ln q)p(z)\}. $$ Define the function $\phi(z)={\operatorname{Log}\,}\{f(z)/z\}$ and write $$\phi(z)={\operatorname{Log}\,}\frac{f(z)}{z}=\sum_{n=1}^\infty \phi_n z^n. $$ On solving, we get $$ \ln q+\phi(qz)=\phi(z)+(\ln q)p(z). $$ This implies \begin{equation}\label{e5} \phi_n=p_n\left(\frac{\ln q}{q^n-1}\right). \end{equation} So, $f(z)$ can be written as \begin{equation}\label{e5.5} f(z)=z\exp\left[\sum_{n=1}^\infty \phi_n z^n\right], \end{equation} where $\phi_n$ is defined in (\ref{e5}) and $f(z)$ has the form (\ref{e1}).
Equating the coefficients on both sides of (\ref{e5.5}) and using the value of $\phi_n$ given in $(\ref{e5})$, we obtain \begin{equation}\label{e5.6} a_2=\phi_1=p_1\left(\frac{\ln q}{q-1}\right),\quad a_3=\phi_2+\frac{\phi_1^2}{2}=p_2\left(\frac{\ln q}{q^2-1}\right)+\frac{p_1^2}{2}\left(\frac{\ln q}{q-1}\right)^2. \end{equation} Thus, \begin{eqnarray*} |a_3-\mu a_2^2|&=&\left|p_2\left(\frac{\ln q}{q^2-1}\right)+\frac{p_1^2}{2}\left(\frac{\ln q}{q-1}\right)^2-\mu p_1^2 \left(\frac{\ln q}{q-1}\right)^2\right|\\ &=&\left(\frac{\ln q}{q^2-1}\right)\left|p_2-(2\mu -1)\displaystyle\frac{\displaystyle\left(\frac{\ln q}{q-1}\right)^2}{\displaystyle\left(\frac{2\ln q}{q^2-1}\right)}p_1^2\right|\\ &\le& \max\left\{\left|2(1-2\mu)\left(\frac{\ln q}{q-1}\right)^2+2\left(\frac{\ln q}{q^2-1}\right)\right|, 2\left(\frac{\ln q}{q^2-1}\right)\right\}, \end{eqnarray*} where the last inequality follows from Lemma~\ref{l5}. It remains to prove sharpness. It follows easily from the definition of $\mathcal{S}^*_q$ that the functions $F_1$ and $F_2$ defined in the statement of Theorem~\ref{T3} belong to the class $\mathcal{S}^*_q$. One can also see that $F_1 \in \mathcal{S}^*_q$ as a special case of Lemma~\ref{l3}, when the measure has a unit mass. The functions $F_1$ and $F_2$ show the sharpness of the result. This completes the proof of the theorem. \end{proof} We now pose the following conjecture on the Fekete-Szeg\"o problem for $\mathcal{S}_q^*(\alpha)$. \begin{conjecture}\label{sqsa-f} Let $f\in\mathcal{S}_q^*(\alpha)$, $0\le\alpha<1$, be of the form (\ref{e1}) and let $\mu$ be any complex number. Then $$ |a_3-\mu a_2^2|\le \max\left\{\left|2(1-2\mu)\left(\frac{\ln \frac{q}{1-\alpha(1-q)}}{q-1}\right)^2+2\left(\frac{\ln \frac{q}{1-\alpha(1-q)}}{q^2-1}\right)\right|,2\left(\frac{\ln \frac{q}{1-\alpha(1-q)}}{q^2-1}\right)\right\}.
$$ Equality occurs for the functions \begin{equation}\label{e7} F_1(z):=z\left\{\exp \left[\displaystyle \sum_{n=1}^\infty \frac{2\ln \frac{q}{1-\alpha(1-q)}}{q^n-1}z^n\right]\right\} \end{equation} and \begin{equation}\label{e8} F_2(z):=z\left\{\exp \left[\displaystyle \sum_{n=1}^\infty \frac{2\ln \frac{q}{1-\alpha(1-q)}}{q^{2n}-1}z^{2n}\right]\right\}. \end{equation} \end{conjecture} \begin{proof}[\bf Proof of Theorem~\ref{T4}] Let $f\in \mathcal{S}_q^*$ have the form \eqref{e1}. In (\ref{e5.6}), we already obtained the values of $a_2$ and $a_3$. In a similar way one can find the value of $a_4$. Indeed, $$a_4=\phi_3+\phi_1\phi_2+\frac{\phi_1^3}{6}=p_3\left(\frac{\ln q}{q^3-1}\right)+p_1p_2\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^2-1}\right)+\frac{p_1^3}{6}\left(\frac{\ln q}{q-1}\right)^3. $$ Hence, $$|a_2a_4-a_3^2|=\left|-\frac{p_1^4}{12}\left(\frac{\ln q}{q-1}\right)^4+p_1p_3\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)-p_2^2\left(\frac{\ln q}{q^2-1}\right)^2\right|. $$ Suppose now that $p_1=c$ and $0\le c\le 2$.
Using Lemma~\ref{l4}, we obtain \begin{eqnarray*} |a_2a_4-a_3^2|&=&\left|-\frac{c^4}{12}\left[\left(\frac{\ln q}{q-1}\right)^4-3\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)+3\left(\frac{\ln q}{q^2-1}\right)^2\right]\right.\\ && \left.+\frac{c^2}{2}(4-c^2)x\left[\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)-\left(\frac{\ln q}{q^2-1}\right)^2\right]\right.\\ &&\left.+\frac{(4-c^2)(1-|x|^2)cz}{2}\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)\right.\\ &&\left.-\left[\frac{c^2}{4}(4-c^2)\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)+\frac{(4-c^2)^2}{4}\left(\frac{\ln q}{q^2-1}\right)^2\right]x^2\right|\\ &\le & \frac{c^4}{12}\left|\left(\frac{\ln q}{q-1}\right)^4-3\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)+3\left(\frac{\ln q}{q^2-1}\right)^2\right| +\frac{(4-c^2)c}{2}\left(\frac{\ln q}{q-1}\right)\\ && \left(\frac{\ln q}{q^3-1}\right)+\frac{c^2}{2}(4-c^2)\left[\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)-\left(\frac{\ln q}{q^2-1}\right)^2\right]\rho\\ &&+\left(\frac{4-c^2}{4}\right)\left[(4-c^2)\left(\frac{\ln q}{q^2-1}\right)^2+c(c-2)\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)\right]\rho^2\\ &=& F(\rho) \end{eqnarray*} with $\rho=|x|\le 1$. Furthermore, $$ F'(\rho)\ge 0. $$ This implies that $F$ is an increasing function of $\rho$ and thus the upper bound for $|a_2a_4-a_3^2|$ corresponds to $\rho=1$. Hence, $$|a_2a_4-a_3^2|\le F(1)=G(c)\, \mbox{ (say)}. $$ We can see that $$\left(\frac{\ln q}{q-1}\right)^4-3\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)+3\left(\frac{\ln q}{q^2-1}\right)^2> 0, \mbox{ for } 0<q<1. 
$$ Now, a simple calculation gives \begin{eqnarray*} G(c)=\frac{c^4}{12}\left[\left(\frac{\ln q}{q-1}\right)^4-12\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)+12\left(\frac{\ln q}{q^2-1}\right)^2\right]\\ && \hspace*{-7cm} +c^2\left[3\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)-4\left(\frac{\ln q}{q^2-1}\right)^2\right]+4\left(\frac{\ln q}{q^2-1}\right)^2. \end{eqnarray*} The equation $G'(c)=0$ gives either $c=0$ or $$c^2=\frac{6\displaystyle \left[4\left(\frac{\ln q}{q^2-1}\right)^2-3\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)\right]}{\displaystyle \left(\frac{\ln q}{q-1}\right)^4-12\left(\frac{\ln q}{q-1}\right)\left(\frac{\ln q}{q^3-1}\right)+12\left(\frac{\ln q}{q^2-1}\right)^2}. $$ We can verify that $G''(c)$ is negative at $c=0$ and positive at the other roots of $G'(c)=0$. Hence the maximum of $G(c)$ occurs at $c=0$. Thus, we obtain $$ |a_2a_4-a_3^2|\le 4\left(\frac{\ln q}{q^2-1}\right)^2. $$ The function $F_2$ defined in the statement of the theorem shows the sharpness of the result. This completes the proof of the theorem. \end{proof} We pose one more conjecture, which concerns the Hankel determinant for the class $\mathcal{S}_q^*(\alpha)$. \begin{conjecture}\label{sqsa-h} Let $f\in\mathcal{S}_q^*(\alpha)$, $0\le \alpha<1$, be of the form (\ref{e1}). Then $$|a_2a_4-a_3^2|\le 4\left(\frac{\ln \frac{q}{1-\alpha(1-q)}}{q^2-1}\right)^2. $$ Equality occurs for the function $F_2$ defined in $(\ref{e8})$. \end{conjecture} \begin{remark} We remark that the proofs of Conjectures~\ref{sqsa-f} and~\ref{sqsa-h} should follow in a similar manner as the proofs of Theorems~\ref{T3} and~\ref{T4}, respectively. The open part of the conjectures is to identify the extremal functions, which we believe to be \eqref{e7} and \eqref{e8}. \end{remark} \begin{proof}[\bf Proof of Theorem~\ref{thm2}] Let $f\in \mathcal{C}_q(\alpha)$, $0\le \alpha<1$.
By definition of $\mathcal{C}_q(\alpha)$, $z(D_qf)(z)\in \mathcal{S}_q^*(\alpha)$. Then by \cite[Theorem~1.1]{AS14-2}, we have $$\frac{z(z(D_qf)(z))'}{z(D_qf)(z)}=1+\int_{|\sigma|=1}\sigma z F_{q,\alpha}^{'}(\sigma z)\,{\rm d}\mu(\sigma), $$ that is, $$1+\frac{z(D_qf)'(z)}{(D_qf)(z)}=1+\int_{|\sigma|=1}\sigma z F_{q,\alpha}^{'}(\sigma z)\,{\rm d}\mu(\sigma), $$ where $F_{q,\alpha}$ is defined in (\ref{MainThm1:eq1}). Hence the proof is complete. \end{proof} \begin{proof}[\bf Proof of Theorem~\ref{sec2-thm7}] Let $f(z)=z+\sum_{n=2}^\infty a_n z^n\in \mathcal{C}_q(\alpha)$. By definition of $\mathcal{C}_q(\alpha)$, $z(D_qf)(z)=z+\sum_{n=2}^\infty \frac{1-q^n}{1-q}a_n z^n\in \mathcal{S}_q^*(\alpha)$. Then by \cite[Theorem~1.3]{AS14-2}, we have $$\left|\frac{1-q^n}{1-q}a_n\right|\leq c_n. $$ Next, we show that equality holds for the function $E_{q}\in \mathcal{C}_q(\alpha)$. As a special case of Theorem~\ref{thm2}, when the measure has a unit mass, it is clear that $E_{q}\in \mathcal{C}_q(\alpha)$. Let $E_q(z)=z+\sum_{n=2}^\infty b_n z^n$. From this representation of $E_q$ and the definition of $D_qf$, we get \begin{equation}\label{e9} z(D_q E_q)(z)=z+\sum_{n=2}^\infty b_n\frac{1-q^n}{1-q} z^n. \end{equation} Since $E_q(z)=I_q\{\exp [F_{q,\alpha}(z)]\}$, we have $z(D_q E_q)(z)=z\exp [F_{q,\alpha}(z)]$, and since $c_n$ is the $n$-th coefficient of the function $z\exp [F_{q,\alpha}(z)]$, we have \begin{equation}\label{e10} z(D_q E_q)(z)=z+\sum_{n=2}^\infty c_n z^n. \end{equation} Comparing (\ref{e9}) and (\ref{e10}), we get $b_n=c_n(1-q)/(1-q^n)$, i.e., $$E_q(z)=z+\displaystyle \sum_{n=2}^\infty\left(\frac{1-q}{1-q^n}\right) c_n z^n. $$ This completes the proof of our theorem. \end{proof} \end{document}
\begin{document} \begin{center} {\Large \bf Quantitative Evaluation of Decoherence and{}\\{}\vskip0.16cm{}Applications for Quantum-Dot Charge Qubits} \end{center} \begin{center}{\bf Leonid Fedichkin}\ and\ {\bf Vladimir Privman} \end{center} \begin{center}{Center for Quantum Device Technology, Department of Physics and{}\\{}Department of Electrical and Computer Engineering, Clarkson University, Potsdam, New York 13699--5721, USA }\end{center} \section*{Abstract} We review results on evaluation of loss of information in quantum registers due to their interactions with the environment. It is demonstrated that an optimal measure of the level of quantum noise effects can be introduced via the maximal absolute eigenvalue norm of the deviation of the density matrix of a quantum register from that of ideal, noiseless dynamics. For semiconductor quantum-dot charge qubits interacting with acoustic phonons, explicit expressions for this measure are derived. For a broad class of environmental modes, this measure is shown to have the property that for small levels of quantum noise it is additive and scales linearly with the size of the quantum register. \section{Introduction} In recent years, there has been significant progress in quantum computation and the design of solid-state quantum information processors~\cite{Shor97,Grover97,Loss98,Privman98,Kane98,Vrijen,Nano00,Makhlin01,IV3,IV4,Vion02,Chiorescu03,Hanson03,Pashkin03,IV2,Shlimak04}. Quantum computers promise an enormous speed-up in the computation of certain very important problems, including factorization of large numbers~\cite{Shor97} and search~\cite{Grover97}. However, practically useful quantum information processing devices have not been built yet. One of the major obstacles to scalability has been decoherence. This is due to the fact that the effect of quantum speed-up is crucially dependent upon the coherence of quantum registers.
Therefore, understanding the dynamics of coherence loss has drawn significant experimental and theoretical effort. In general, decoherence~\cite{vanKampen,nonMarkov,Anastopoulos,Ford,Braun,Lewis,Wang,PMV,Privman,Lutz,Khaetskii,OConnell,Strunz,Haake,short,VP6,VPAF5,VPDS3,VPDT4,VPDT2,VPDT1,Solenov} reveals itself in most experiments with quantum objects. It is a process whereby the quantum coherent physical system of interest interacts with the environment and, because of this interaction, changes its evolution from unperturbed ``ideal'' dynamics. The change of the dynamics is reflected by the corresponding change of the density matrix~\cite{Neumann,Abragam,therm2,therm1,Louisell} of the system. The time-dependence of the system's density matrix should be evaluated for an appropriate model of the system and its environment. If a multi-particle quantum system is considered, then the respective density matrix becomes rather large and difficult to deal with. This occurs even for relatively small quantum registers containing just a few quantum bits (qubits). In this paper, we review evaluation of decoherence effects starting from the system Hamiltonian and followed by the definition and estimation of a decoherence error-measure in a quantum information processing ``register'' composed of several qubits. The paper is organized as follows: In Section~\ref{Sec2}, we consider a specific example of a solid state nanostructure. As a representative model for a qubit, we consider an electron in a semiconductor double quantum dot system. We derive the evolution of the density matrix of the electron, which loses coherence due to interaction with phonons. In Section~\ref{Sec3}, we define a measure characterizing decoherence and show how to calculate it from the density matrix elements for a semiconductor double quantum dot system introduced earlier.
Finally, in Section~\ref{Sec4}, we establish that the measure of decoherence introduced is additive for several-qubit registers, i.e., the total ``computational error'' scales linearly with the number of qubits. \section{Semiconductor Quantum Dot Charge Qubit}\label{Sec2} Solid-state nanostructures have attracted much attention recently as a possible basis for large scale quantum information processing~\cite{roadmap}. Most stages of their fabrication can be borrowed from existing fabrication steps in the microelectronics industry. Also, only microelectronics technology has demonstrated the ability to create and control locally the evolution of thousands of nano-objects, which is required for quantum computation. There have been several proposals for semiconductor qubits, reviewed, e.g., in~\cite{PMV}. In particular, the encoding of quantum information in the position of the electron was investigated in~\cite{Barenco,Hawrylak,Alex1,Alex2,Openov}. In~\cite{Ekert} it was argued that an electron in a typical quantum dot will lose coherence very fast, which will prevent it from being a good qubit. However, this problem can be resolved with sophisticated designs of quantum-dot arrangements; e.g., arrays of several quantum dots, if properly designed~\cite{Zanardi}, can form a coherent quantum register. It was also shown that a symmetric layout of just two quantum dots can strongly diminish decoherence effects due to phonons and other environmental noises~\cite{Fedichkin,dd,dd_ieee}. Recent successful observations~\cite{Hayashi,Marcus,Fuji1,Fuji2,Fuji3} of the spatial evolution of an electron in symmetric semiconductor double dot systems have experimentally confirmed that such a system is capable of maintaining coherence at least on time scales sufficient for observation of several cycles of quantum dynamics. In the above experiments, measurements were performed at very low substrate temperatures of a few tens of mK, in order to avoid additional thermally activated sources of decoherence.
Theoretical results on the influence of the temperature on the first-order phonon relaxation rates in double dot systems were presented in~\cite{Barrett,Ahn}. In view of the above experimental advances, we have chosen a single electron in a semiconductor double quantum dot system, whose dynamics is affected by vibrations of the crystal lattice, as a representative example of a quantum coherent system interacting with the environment. In the range of parameters corresponding to the experiments~\cite{Hayashi,Fuji1,Fuji2,Fuji3}, phonons dominate decoherence. Of course, for different systems, or for similar systems in different ranges of external conditions, other sources of decoherence may prevail, for example, noise due to hopping of charge carriers on nearby traps, studied in~\cite{Pashkin,Martin}, or due to the electron-electron interaction~\cite{Vorojtsov}. \begin{figure} \caption{Double well potential.} \label{fig:1} \end{figure} A semiconductor double quantum dot creates a three-dimensional double-well confinement potential for the electron in it. Let us denote the line connecting the centers of the dots as the $x$-axis. Then the electron confining potential along $x$ is schematically shown in Fig.~\ref{fig:1}. The nanostructure is composed of two quantum dots with a potential barrier between them. Parameters of the structure are properly adjusted so that the two lower energy levels of spatial quantization lie very close to each other compared to the external temperature and to the distances to the higher energy levels. Therefore hopping of the electron to higher levels is suppressed. The electron is treated as a superposition of two basis states, $|0\rangle$ and $|1\rangle$, corresponding to ``false'' and ``true'' in Boolean logic, \begin{equation} \psi = \alpha \psi_0 + \beta \psi_1 . \end{equation} It should be noted that the states that define the ``logical'' basis are not the ground and first excited states of the double-dot system.
Instead, $\psi_0$ (the ``0'' state of the qubit) is chosen to be localized at the first quantum dot and, in a zeroth order approximation, to be similar to the ground state of that dot if it were isolated. Similarly, $\psi_1$ (the ``1'' state) resembles the ground state of the second dot (if it were isolated). This assumes that the dots are sufficiently (but not necessarily exactly) symmetric. We denote the coordinates of the potential minima of the dots (dot centers) as vectors $\mathbf R_0$ and $\mathbf R_1$, respectively. The separation between the dot centers is \begin{equation} \mathbf L = \mathbf R_1-\mathbf R_0 . \end{equation} The Hamiltonian of an electron interacting with a phonon bath consists of three terms \begin{equation} \label{H} H = H_e + H_p + H_{ep}. \end{equation} The electron term is \begin{equation}\label{He} H_e = - \frac{1}{2}\varepsilon _{A}(t)\sigma _x- \frac{1}{2}\varepsilon _{P}(t)\sigma _z, \end{equation} where $ \sigma_x$ and $\sigma _z$ are Pauli matrices, whereas $\varepsilon _{A}(t)$ and $\varepsilon _{P}(t)$ can have time-dependence, as determined by the unitary single-qubit quantum gate-functions to be implemented for a specific quantum algorithm. This can be achieved by adjusting the potentials on the metallic nanogates surrounding the double-dot system. For constant $\varepsilon _{A}$ and $\varepsilon _{P}$, the energy splitting between the electron energy levels is \begin{equation} \varepsilon=\sqrt{\varepsilon _{A}^2+\varepsilon _{P}^2}. \end{equation} The Hamiltonian term of the phonon bath is described by \begin{equation} H_p = \sum\limits_{\mathbf{q},\lambda } \hbar \omega_q {\kern 1pt} b_{\mathbf{ q},\lambda }^{\dagger} b_{\mathbf{q},\lambda }, \end{equation} where $ b^{\dagger}_{\mathbf{q},\lambda}$ and $ b_{\mathbf{q},\lambda}$ are the creation and annihilation operators of phonons, respectively, with the wave vector $\mathbf q\,$ and polarization $\lambda$.
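As an illustrative cross-check (a Python sketch of our own; the numerical values are arbitrary), diagonalizing the $2\times 2$ matrix $H_e$ for constant $\varepsilon_A$ and $\varepsilon_P$ reproduces the splitting $\varepsilon=\sqrt{\varepsilon_A^2+\varepsilon_P^2}$:

```python
import numpy as np

# Arbitrary illustrative values of the couplings (any energy units).
eps_A, eps_P = 0.3, 0.4

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Qubit Hamiltonian H_e = -(1/2) eps_A sigma_x - (1/2) eps_P sigma_z.
H_e = -0.5 * eps_A * sigma_x - 0.5 * eps_P * sigma_z

# eigvalsh returns the eigenvalues of a Hermitian matrix in ascending order.
levels = np.linalg.eigvalsh(H_e)
splitting = levels[1] - levels[0]   # equals sqrt(eps_A**2 + eps_P**2)
```

Here `splitting` equals $\sqrt{0.09+0.16}=0.5$, in agreement with the formula for $\varepsilon$ above.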
We approximate the acoustic phonon spectrum as an isotropic one with linear dispersion \begin{equation}\omega_q = s q,\end{equation} where $s$ is the speed of sound in the semiconductor crystal. In the next few paragraphs we show that the electron-phonon interaction can be expressed as \begin{equation} \label{int} H_{ep}=\sum\limits_{\mathbf{q},\lambda } \sigma_z {\kern 1pt}\left( g_{\mathbf{q},\lambda } b_{\mathbf{q}, \lambda}^{\dagger} + g_{\mathbf{q},\lambda }^* b_{\mathbf{q}, \lambda}\right), \end{equation} with the coupling constants $g_{\mathbf{q},\lambda}$ determined by the geometry of the double dot and the properties of the material. The derivation follows \cite{dd,dd_ieee}. The piezoacoustic electron-phonon interaction \cite{Mahan} is given by \begin{equation} \label{p1} H_{ep}=i{\sum\limits_{\mathbf q,\lambda} }\sqrt{\frac \hbar {2\rho s q V }}\, M_\lambda ({\mathbf q})F (\mathbf q)(b_{\mathbf q}+b_{-\mathbf q}^{\dagger}), \end{equation} where $\rho$ is the density of the semiconductor, $V$ is the volume of the sample, and for the matrix element $M_\lambda ({\mathbf q})$ one can derive \begin{equation} \label{p2} M_\lambda ({\mathbf q})=\frac 1{2q^2}{\sum\limits_{ijk} }(\xi _i^{\vphantom{Q}}q_j^{\vphantom{Q}}+\xi _j^{\vphantom{Q}}q_i^{\vphantom{Q}})q_k^{\vphantom{Q}} M_{ijk}. \end{equation} Here $\xi _j$ are the polarization vector components for polarization $\lambda$, while $M_{ijk}$ express the electric field as a linear response to the stress, \begin{equation} \label{p3} E_k={\sum\limits_{ij} }M_{ijk}S_{ij}. \end{equation} For a crystal with a zinc-blende lattice, like GaAs, the tensor $M_{ijk}$ has non-zero components only when all three indices $i$, $j$, $k$ are different; furthermore, all these components are equal, $M_{ijk}=M$.
Thus, we have \begin{equation} \label{p4} M_\lambda (\mathbf q)=\frac M{q^2}(\xi^{\vphantom{Q}}_1q_2^{\vphantom{Q}}q_3^{\vphantom{Q}}+ \xi^{\vphantom{Q}}_2 q_1^{\vphantom{Q}}q_3^{\vphantom{Q}}+\xi^{\vphantom{Q}}_3q_1^{\vphantom{Q}}q_2^{\vphantom{Q}}). \end{equation} The form factor $F(\mathbf q)$, which accounts for the fact that the electron states are not plane waves, is given by \begin{equation}\label{e-density} F (\mathbf q)= \sum\limits_{j,k} c_j^{\dagger}c_k\int d^3r \phi_j^{*}(\mathbf r)\phi_k(\mathbf r)e^{-i\mathbf q\cdot \mathbf r}, \end{equation} where $c_k$, $c^{\dagger }_j$ are the annihilation and creation operators of the basis states $k,j=0,1$. In quantum dots formed by a repulsive potential of nearby gates, an electron is usually confined near the potential minima, which are approximately parabolic. Therefore the ground states in each dot have a Gaussian shape \begin{equation} \label{1} \phi_j(\mathbf r)=\displaystyle\frac{e^{-|\mathbf r-\mathbf R_j|^2/2a^2}}{a^{3/2}\pi ^{3/4}}\,, \end{equation} where $2a$ is a characteristic size of the dots. We assume that the distance between the dots, $L = | \mathbf L |$, is sufficiently large compared to $a$, so that the wave functions of different dots do not overlap strongly, \begin{equation} \bigg| \int d^3r \phi_j^{*}(\mathbf r)\phi_k(\mathbf r)e^{-i \mathbf q\cdot \mathbf r} \bigg| \ll 1, \quad {\rm for} \quad j\neq k. \end{equation} In other words, tunneling between the dots is weak, as is the case for the recently studied experimental structures \cite{Hayashi,Fujisawa,Dzurak1,Dzurak2}, where the splitting due to tunneling, measured by $\varepsilon _{A}$, was just several tens of $\mu$eV, while the electron quantization energy in each dot was at least several meV.
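The smallness of the cross overlap can be checked explicitly in a one-dimensional analog of (\ref{1}): two unit-normalized Gaussians of width $a$ centered a distance $L$ apart overlap only by $e^{-L^2/4a^2}$. A numerical sketch (illustrative only; the quadrature grid and parameter values are arbitrary):

```python
import math

def cross_overlap_1d(L, a, n=20000, span=12.0):
    """Midpoint quadrature of the 1D analog of the cross overlap
       int dx phi_0(x) phi_1(x), with Gaussians of width a centered L apart."""
    norm = 1.0 / (math.sqrt(a) * math.pi ** 0.25)   # 1D version of Eq. (1)
    lo, hi = -span * a, L + span * a
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += (norm * math.exp(-x * x / (2 * a * a))
                  * norm * math.exp(-(x - L) ** 2 / (2 * a * a)) * dx)
    return total

a = 1.0
for L in (3.0, 5.0):
    # the overlap is exponentially small: exp(-L^2 / 4 a^2)
    assert abs(cross_overlap_1d(L, a) - math.exp(-L * L / (4 * a * a))) < 1e-8
```

For $L = 5a$ the overlap is $e^{-25/4}\approx 2\times10^{-3}$, consistent with the weak-tunneling regime assumed in the text.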
For $j=k$, we obtain \begin{eqnarray}\label{3}\nonumber \int d^3 r \phi_j^{*}(\mathbf r)\phi_j(\mathbf r) e^{-i\mathbf q\cdot \mathbf r}= \frac{1}{a^3\pi^{3/2} }\int d^3 r e^{-|{\mathbf r} -{\mathbf R}_j|^2/a^2}e^{-i\mathbf q\cdot \mathbf r} \end{eqnarray} \begin{eqnarray} =e^{-i{{\mathbf q}\cdot {{\mathbf R}_j}}}e^{-a^2q^2/4}. \end{eqnarray} The resulting form factor is \begin{equation} F (q)=e^{-a^2q^2/4}e^{-i\mathbf q\cdot \mathbf R}(c_0^{\dagger }c_0e^{i\mathbf q\cdot \mathbf L/2}+c_1^{\dagger }c_1e^{-i\mathbf q\cdot \mathbf L/2}), \end{equation} where $\mathbf R=\left(\mathbf R_0+\mathbf R_1\right)/2$. Therefore \begin{equation}\label{form} F (q)=e^{-a^2q^2/4}e^{-i\mathbf q\cdot \mathbf R}\left[\cos (\mathbf q\cdot \mathbf L /2)I+i\sin (\mathbf q\cdot \mathbf L /2)\sigma _z\right] \label{4}, \end{equation} where $I$ is the identity operator. Only the last term in (\ref{form}) represents an interaction affecting the qubit states. It leads to a Hamiltonian term of the form (\ref{int}), with the coupling constants \begin{eqnarray} \label{p6}\nonumber g_{\mathbf q, \lambda}&=&-\sqrt{\frac{\hbar }{2 \rho q s V }} \, M e^{-a^2q^2/4 - i\mathbf q\cdot \mathbf R}\\ &&\times(\xi_1^{\vphantom{Q}}e_2^{\vphantom{Q}}e_3^{\vphantom{Q}}+ \xi_2^{\vphantom{Q}}e_1^{\vphantom{Q}}e_3^{\vphantom{Q}} +\xi_3^{\vphantom{Q}}e_1^{\vphantom{Q}}e_2^{\vphantom{Q}}) \sin (\mathbf q\cdot \mathbf L/2), \end{eqnarray} where $e_k=q_k/q$. In general, the qubit evolution controlled by the Hamiltonian term (\ref{He}) is governed by time-dependent parameters. Decoherence estimates for some solid-state systems with certain shapes of the time dependence of the system Hamiltonian were reported recently \cite{Solenov,Brandes,Brandes2}. However, such estimates are rather involved.
To avoid this difficulty we observe that all single-qubit rotations required for quantum algorithms can be performed by using two constant-Hamiltonian gates without loss of quantum speed-up, e.g., by the amplitude-rotation gate and the phase-shift gate \cite{Kitaev3}. To implement these gates one can keep the Hamiltonian term (\ref{He}) constant during the implementation of each gate, adjusting the parameters $\varepsilon _{A}$ and $\varepsilon _{P}$ as appropriate for each gate and for the idling qubit in between gate functions. In the next paragraphs we consider decoherence during the implementation of the NOT (amplitude) gate; the $\pi$-phase-shift gate is considered later in this section. The quantum NOT gate is a unitary operator which transforms the states $|0\rangle$ and $|1\rangle$ into each other. Any superposition of $|0\rangle$ and $|1\rangle$ transforms accordingly, \begin{equation} {\rm NOT} \left(\alpha |0\rangle + \beta |1\rangle\right) = \beta |0\rangle + \alpha |1\rangle . \end{equation} The NOT gate can be implemented by properly choosing $\varepsilon_A$ and $\varepsilon_P$ in the Hamiltonian term (\ref{He}). Specifically, with constant \begin{equation} \varepsilon_A=\varepsilon \end{equation} and \begin{equation} \varepsilon_P=0, \end{equation} the ``ideal'' NOT gate function is carried out over the time interval \begin{equation} \tau=\frac{\pi\hbar}{\varepsilon}. \end{equation} The major source of quantum noise for a double-dot qubit subject to the NOT-gate-type coupling is relaxation involving energy exchange with the phonon bath (i.e., emission and absorption of phonons). Here it is more convenient to study the evolution of the density matrix in the energy basis, $\left\{ \left|+\right\rangle,\left|-\right\rangle\right\}$, where \begin{equation} \left|\pm\right\rangle=\left(\left|0\right\rangle \pm \left|1\right\rangle\right)/\sqrt{2}.
\end{equation} Then, assuming that the time interval of interest is $[0,\tau]$, the qubit density matrix can be expressed \cite{therm2} in the energy basis as \begin{eqnarray} \label{rho_rel} \rho(t)=\left( \begin{array}{cc} \rho_{++}^{th}+\left[\rho _{++}(0)-\rho_{++}^{th}\right]e^{ - \Gamma t} & \rho _{+-}(0)e^{ - (\Gamma/2-i\varepsilon/\hbar ) t} \\\\ \rho _{-+}(0)e^{ - (\Gamma/2+i\varepsilon/\hbar) t} & \rho_{--}^{th}+\left[\rho _{--}(0)-\rho_{--}^{th}\right]e^{ - \Gamma t} \\ \end{array} \right)\! . \end{eqnarray} This is a standard Markovian approximation for the evolution of the density matrix. For large times, this type of evolution would in principle result in the thermal state, with the off-diagonal density matrix elements decaying to zero, while the diagonal ones approach the thermal values proportional to the Boltzmann factors corresponding to the energies $\pm \varepsilon /2$. However, here we are only interested in such evolution over the relatively short time interval, $\tau$, of a NOT gate. The rate parameter $\Gamma$ is simply the sum \cite{therm2} of the phonon emission rate, $W^{e}$, and the absorption rate, $W^{a}$, \begin{equation}\label{Gamma} \Gamma=W^{e}+W^{a}. \end{equation} The rate of absorption of a phonon, accompanied by excitation from the ground state to the upper level, is \begin{equation} w^{\lambda}=\frac{2 \pi}{\hbar}|\langle f|H_{ep}|i\rangle|^2 \delta(\varepsilon -\hbar s q), \end{equation} where $|i\rangle$ is the initial state containing the extra phonon with energy $\hbar s q$, $|f\rangle$ is the final state, $\mathbf q$ is the wave vector, and $\lambda$ is the phonon polarization. Thus, we have to calculate \begin{equation}\label{W} W^a=\sum\limits_{\bf q, \lambda} w^{\lambda}=\frac{V}{(2\pi)^3}\sum\limits_{\lambda}\int d^3 q\, w^{\lambda}.
\end{equation} For the interaction (\ref{int}) one can derive \begin{equation} w^{\lambda}=\frac{2 \pi}{\hbar}|g_{\mathbf q,\lambda}|^2 N^{th} \delta(\varepsilon-\hbar sq), \end{equation} where \begin{equation} N^{th}=\frac{1}{\exp(\hbar sq /k_B T)-1} \end{equation} is the phonon occupation number at temperature $T$, and $k_B$ is the Boltzmann constant. The coupling constant in (\ref{p6}) depends on the polarization if the interaction is piezoelectric. For longitudinal phonons, the polarization vector has Cartesian components, expressed in terms of the spherical-coordinate angles, \begin{equation} \label{cpl1} \xi_1^{\parallel}=e_1=\sin\theta \cos\phi, \quad \xi_2^{\parallel}=e_2=\sin\theta \sin\phi,\quad \xi_3^{\parallel}=e_3=\cos\theta, \end{equation} where $e_j=q_j/q$. For transverse phonons, it is convenient to define the two polarization vectors $\xi_i^{\perp1}$ and $\xi_i^{\perp2}$ to have \begin{equation} \label{cpl2} \xi_1^{\perp1}=\sin\phi, \quad \xi_2^{\perp1}=-\cos\phi, \quad \xi_3^{\perp1}=0, \end{equation} \begin{equation}\label{cpl4} \xi_1^{\perp2}=-\cos \theta\cos\phi, \quad \xi_2^{\perp2}=-\cos \theta\sin\phi, \quad \xi_3^{\perp2}=\sin \theta. \end{equation} Then for longitudinal phonons, one obtains \cite{dd_ieee} \begin{eqnarray} w^{\parallel}&=&\frac{\pi}{\rho s V q }M^2e^{-a^2q^2/4}\\ &&\times\, 9\sin^4\theta\cos^2\theta\sin^2\phi\cos^2\phi\sin^2 (qL\cos\theta/2).\nonumber \end{eqnarray} For transverse phonons, one gets \begin{eqnarray}\nonumber w^{\perp1}&=&\frac{\pi}{\rho s V q }M^2e^{-a^2q^2/4}(-2\sin\theta\cos^2\theta\sin\phi\cos\phi\\ &&+\sin^3\theta\cos\phi\sin\phi)^2 \sin^2(qL\cos\theta/2), \end{eqnarray} \begin{eqnarray}\nonumber w^{\perp2}&=&\frac{\pi}{\rho s V q }M^2e^{-a^2q^2/4}(-2\sin\theta\cos\theta\cos^2\phi\\ &&+\sin\theta\cos\theta\sin^2\phi)^2 \sin^2 (qL\cos\theta/2). 
\end{eqnarray} By combining these contributions and substituting them in (\ref{W}), one can obtain the phonon absorption rate summed over all polarizations, \begin{eqnarray}\label{r1} W_{\rm piezo}^a&=&\displaystyle\frac{M^2}{20\pi\rho s^2\hbar L^5 k^4}\frac{\exp{ \left(-\frac{a^2 k^2}{2}\right)}}{\exp\left(\frac{\hbar s k}{k_B T}\right)-1} \\\nonumber &&\times\left\{\left(kL\right)^5+ 5 kL\left[ 2 \left(kL\right)^2 - 21\right] \cos\left(kL\right)\right.\\ &&+\left.\nonumber 15\left[7 - 3\left(kL\right)^2\right]\sin\left(kL\right)\right\} , \end{eqnarray} where \begin{equation} k=\frac{\varepsilon}{\hbar s} \end{equation} is the wave vector of the absorbed phonon. Finally, the expression for the phonon emission rate, $W^e$, can be obtained by multiplying the above expression, (\ref{r1}), by $(N^{th}+1)/N^{th}$. The $\pi$-phase gate is a unitary operator which does not change the absolute values of the probability amplitudes of a qubit in a superposition of the $|0\rangle$ and $|1\rangle$ basis states; instead, it increases the relative phase between the probability amplitudes by $\pi$. Consequently, a superposition of $|0\rangle$ and $|1\rangle$ transforms according to \begin{equation} {\Pi} \left(\alpha|0\rangle + \beta |1\rangle\right) = \alpha |0\rangle - \beta |1\rangle . \end{equation} Over a time interval $\tau$, the $\pi$ gate can be carried out with constant interaction parameters, \begin{equation} \varepsilon_A=0 \end{equation} and \begin{equation} \varepsilon_P = \varepsilon = \frac{\pi\hbar}{\tau}. \end{equation} Charge qubit dynamics during the implementation of phase gates was investigated in \cite{dd}. Relaxation is suppressed during the $\pi$ gate, because there is no tunneling between the dots. The main quantum noise then results from pure dephasing. It leads to the decay of the off-diagonal qubit density matrix elements, while keeping the diagonal density matrix elements unchanged.
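Before proceeding, we note that the relaxation rate (\ref{r1}) derived above is straightforward to evaluate numerically. A sketch in SI units follows; the material and geometry parameters below are order-of-magnitude placeholders, not fitted GaAs values. The emission rate is recovered from the absorption rate via the factor $(N^{th}+1)/N^{th}=e^{\hbar s k/k_B T}$.

```python
import math

HBAR = 1.0545718e-34   # J s
KB = 1.380649e-23      # J / K

def w_absorb_piezo(eps, T, L, a, M, rho, s):
    """Phonon absorption rate of Eq. (r1); k = eps / (hbar s) is the wave
       vector of the absorbed phonon. All quantities in SI units."""
    k = eps / (HBAR * s)
    x = k * L
    bracket = (x ** 5
               + 5 * x * (2 * x * x - 21) * math.cos(x)
               + 15 * (7 - 3 * x * x) * math.sin(x))
    n_th = 1.0 / math.expm1(HBAR * s * k / (KB * T))       # Bose factor
    pref = M * M / (20 * math.pi * rho * s * s * HBAR * L ** 5 * k ** 4)
    return pref * math.exp(-0.5 * (a * k) ** 2) * n_th * bracket

# placeholder parameters: eps ~ 0.1 meV, L = 100 nm, a = 20 nm, T = 0.1 K
eps = 1.6e-23                                   # J
W_a = w_absorb_piezo(eps, T=0.1, L=1e-7, a=2e-8, M=1.0, rho=5300.0, s=5000.0)
W_e = W_a * math.exp(eps / (KB * 0.1))          # emission: (N+1)/N factor
assert W_a > 0.0 and W_e > W_a
```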
The qubit density matrix can be represented in this regime as \cite{basis,Palma} \begin{equation}\label{mm0} \rho(t) =\left( \begin{array}{cc} \rho _{00} {(0)}& \rho _{01} {(0)}e^{ - B^2(t)+i\varepsilon t/\hbar} \\\\ \rho _{10} {(0)}e^{ - B^2(t)-i\varepsilon t/\hbar} & \rho _{11} {(0)} \\ \end{array} \right), \end{equation} with the spectral function \begin{eqnarray} \label{cd02}\nonumber B^2(t)&=&\frac{8}{\hbar^2} {\sum\limits_{\mathbf q, \lambda} } \frac{\left| g_{\mathbf q, \lambda}\right| ^2}{\omega _q^2}\sin ^2 \frac{\omega _qt}2\coth \frac{\hbar \omega _q}{2 k_B T}\\ &=&\frac{V}{\hbar^2 \pi^3}\int d^3 q \sum\limits_{\lambda} \frac{\left| g_{\mathbf q, \lambda} \right| ^2}{q^2 s^2}\sin ^2\frac{qst}2\coth \frac{\hbar q s}{2 k_B T}. \end{eqnarray} For the piezoelectric interaction, the coupling constant $g_{\mathbf q, \lambda}$ was obtained in (\ref{p6}), and the expression for the spectral function is \begin{eqnarray} \label{cp1}\nonumber B^2_{\rm piezo}(t)&=&\displaystyle \frac{M^2}{2\pi ^3\hbar \rho s^3 }\!\int_0^{\infty } q^2dq\int_0^{\pi} \sin \theta d\theta \int_0^{2 \pi} d\varphi \\ &&\times\sum\limits_{\lambda}\frac {(\xi^{\lambda} _1e_2e_3+\xi_2^{\lambda}e_1e_3+\xi_3^{\lambda}e_1e_2)^2}{ q^3}\exp(-a^2q^2/2){}\nonumber{} \\ &&\times\sin^2 (q L \cos\theta)\sin ^2\displaystyle \frac{qst} 2\coth \displaystyle\frac{\hbar q s}{2 k_B T}. \end{eqnarray} In summary, in this section we obtained the leading-order expressions for the semiconductor double-dot qubit density matrix in the presence of decoherence due to the piezoelectric interaction with acoustic phonons during the implementation of the amplitude and phase gates. \section{Quantification of Decoherence}\label{Sec3} Quantum information processing at the level of qubits and few-qubit registers assumes nearly coherent evolution, which is at best achievable at short to intermediate times.
Therefore attention has recently shifted from large-time system dynamics in the regime of the onset of thermalization, to almost perfectly coherent dynamics at shorter times. Since the many quantum systems proposed as qubit candidates for practical realizations of quantum computing require estimates of their coherence, quantitative characterization of decoherence is crucially important for quantum information processing~\cite{Privman98,Kane98,Vrijen, Hawrylak,Alex1,Alex2,Openov,Ekert,Fedichkin,Hayashi,Barrett,Ahn,Fujisawa, Dzurak1,Dzurak2,Kitaev3,Shnirman,Loss,Imamoglu,Rossi,Nakamura,Tanamoto,Platzman, Sanders,Burkard,Bandyopadhyay,Larionov, Cain,Smith,Ben1,Ben2, qec,Steane,Bennett,Calderbank,SteanePRA,Gottesman,Knill,Kitaev, Kitaev2,Preskill,DiVincenzo,norm,additivity,jctn}. A single measure characterizing decoherence is highly desirable for comparison of different qubit designs. Besides the evaluation of single-qubit performance, one also has to analyze the scaling of decoherence as the register size (the number of qubits involved) increases. Direct quantitative calculations of decoherence of even few-qubit quantum registers are not feasible. Therefore, a practical approach has been to explore quantitative measures of decoherence~\cite{norm}, develop techniques to calculate such measures at least approximately for realistic one- and two-qubit systems~\cite{dd,dd_ieee}, and then establish scaling (additivity)~\cite{additivity,jctn} for several-qubit quantum systems. In this section, we outline different approaches to defining and quantifying decoherence. We argue that a measure defined as a suitable operator norm of the deviation of the density matrix from the ideal one is the most appropriate for quantifying decoherence in quantum registers. We consider several approaches to quantifying the degree of decoherence due to interactions with the environment. We first mention the approach based on the asymptotic relaxation time scales.
The entropy and idempotency-defect measures are then reviewed. The fidelity measure of decoherence is considered next. Finally, we introduce our operator-norm measure of decoherence. Furthermore, we discuss an approach to eliminate the initial-state dependence of the decoherence measures. Markovian approximation schemes typically yield exponential approach to the limiting values of the density matrix elements for large times \cite{Abragam,therm2,therm1}. For a two-state system, this defines the time scales $T_1$ and $T_2$, associated, respectively, with the approach by the diagonal (thermalization) and off-diagonal (dephasing, decoherence) density-matrix elements to their limiting values. More generally, for large times we approximate deviations from stationary values of the diagonal and off-diagonal density matrix elements as \begin{equation} \rho _{kk} (t) - \rho _{kk}(\infty) \propto e^{ - t/T_{kk} } , \end{equation}\par\noindent \begin{equation} \rho _{jk} (t) \propto e^{ - t/T_{jk} } \qquad (j \ne k) . \end{equation}\par\noindent The shortest time among the $T_{kk}$ is often identified as $T_1$. Similarly, $T_2$ can be defined as the shortest time among the $T_{jk}$ with $j \ne k$. These definitions yield the characteristic times of thermalization and decoherence (dephasing). Unfortunately, the exponential behavior of the density matrix elements in the energy basis applies only at large times, whereas for quantum computing applications the short-time behavior is usually relevant \cite{short}. Moreover, while the energy basis is natural for large times, the choice of the preferred basis is not obvious for short and intermediate times \cite{short,basis}. Therefore, the time scales $T_1$ and $T_2$ have limited applicability for evaluating coherence in quantum computing.
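As a simple illustration of these asymptotic definitions, a relaxation time can be extracted from two large-time samples of an exponential tail (a toy sketch with synthetic data; the numbers are arbitrary):

```python
import math

def extract_time(t1, y1, t2, y2):
    """Extract T from y(t) ~ C * exp(-t/T), given two samples at large times;
       the prefactor C cancels in the ratio."""
    return (t2 - t1) / math.log(y1 / y2)

# synthetic off-diagonal decay with T_2 = 2.5 (arbitrary units)
T2 = 2.5
samples = [(t, 0.37 * math.exp(-t / T2)) for t in (8.0, 12.0)]
(t1, y1), (t2, y2) = samples
assert abs(extract_time(t1, y1, t2, y2) - T2) < 1e-9
```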
An alternative approach is based on the calculation of the entropy \cite{Neumann} of the system, \begin{equation} S(t)=- {\rm Tr}\left( \rho \ln \rho \right), \end{equation}\par\noindent or the first order entropy (idempotency defect) \cite{Kim,Zurek,Zagur}, \begin{equation} \label{trace} s(t)=1 - {\rm Tr} \left( \rho ^2 \right). \end{equation}\par\noindent Both expressions are basis independent, have a minimum at pure states and effectively describe the degree of the state's ``purity.'' Any deviation from a pure state leads to the deviation from the minimal values, 0, for both measures, \begin{equation} S_{\,\rm pure\ state}(t)= s_{\,\rm pure\ state}(t)= 0. \end{equation} Unfortunately, entropy measures the deviation from pure-state evolution rather than deviation from a specific ideal evolution. The fidelity measure, considered presently, has been widely used. If the Hamiltonian of the system and environment is \begin{equation}\label{f0} H=H_S+H_B+H_I, \end{equation}\par\noindent where $H_S$ is the internal system dynamics, $H_B$ gives the evolution of environment (bath), and $H_I$ describes system-bath interaction, then the fidelity measure \cite{Dalton,Fidelity2} can be defined as, \begin{equation}\label{f1} F(t)={\rm Tr}_{\,S} \left[ \, \rho _{\rm ideal}(t) \, \rho (t) \, \right]. \end{equation}\par\noindent Here the trace is over the system degrees of freedom, and $\rho_{\rm ideal}(t)$ represents the pure-state evolution of the system under $H_S$ only, without interaction with the environment ($H_I=0$). In general, the Hamiltonian term $H_S$ governing the system dynamics can be time dependent. For the sake of simplicity throughout this review we consider constant $H_S$ over time intervals of quantum gates, cf.\ Section~\ref{Sec2}. In this case \begin{equation}\label{f2} \rho _{\rm ideal}(t)= e^{-iH_S t}\rho(0)\, e^{iH_S t}. 
\end{equation} More sophisticated scenarios with qubits evolving under time-dependent $H_S$ were considered in~\cite{Solenov,Brandes,Brandes2}. The fidelity provides a measure of decoherence in terms of the difference between the ``real,'' environmentally influenced evolution, $\rho (t)$, and the ``ideal'' evolution, $\rho_{\rm ideal} (t)$. It attains its maximal value, 1, only provided $\rho (t) = \rho_{\rm ideal} (t)$. This property relies on the added assumption that $ \rho_{\rm ideal} (t)$ remains a projection operator (pure state) for all times $t \geq 0$. As a simple example, consider a two-level system decaying to the ground state, with no internal system dynamics, \begin{equation} \rho _{\rm ideal} (t) =\left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right), \end{equation}\par\noindent \begin{equation} \rho (t)=\left( \begin{array}{cc} 1-e^{-\Gamma t} & 0 \\ 0 & e^{-\Gamma t} \end{array} \right), \end{equation}\par\noindent and the fidelity is monotonic, \begin{equation}\label{f3} F(t)=e^{-\Gamma t}. \end{equation}\par\noindent Note that the requirement that $\rho_{\rm ideal}(t)$ is a pure state (projection operator) excludes, in particular, any $T>0$ thermalized state as the initial system state. Consider the application of the fidelity measure for the infinite-temperature initial state of our two-level system. We get \begin{equation} \rho (0)=\rho _{\rm ideal}(t)=\left( \begin{array}{cc} 1/2 & 0 \\ 0 & 1/2 \end{array} \right), \end{equation}\par\noindent which is not a projection operator. The spontaneous-decay density matrix is then \begin{equation} \rho (t)=\left( \begin{array}{cc} 1-(e^{-\Gamma t}/2) & 0 \\ 0 & e^{-\Gamma t}/2 \end{array}\right). \end{equation}\par\noindent The fidelity remains constant, \begin{equation}\label{f3.1} F(t)=1/2, \end{equation}\par\noindent and it does not provide any information about the time dependence of the decay process.
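The entropy, idempotency-defect, and fidelity measures are easy to compare side by side for the spontaneous-decay example. A minimal sketch (since all density matrices here are diagonal, the eigenvalues are just the diagonal entries):

```python
import math

def entropy(p):                      # von Neumann entropy of a diagonal rho
    return -sum(x * math.log(x) for x in p if x > 0)

def idempotency_defect(p):           # s = 1 - Tr(rho^2)
    return 1.0 - sum(x * x for x in p)

def fidelity(p_ideal, p):            # F = Tr(rho_ideal rho), diagonal states
    return sum(a * b for a, b in zip(p_ideal, p))

G, t = 1.0, 0.3                      # arbitrary decay rate and time
p = [1 - math.exp(-G * t), math.exp(-G * t)]   # decaying two-level system

# pure-state checks: both entropy measures vanish
assert entropy([0.0, 1.0]) == 0.0 and idempotency_defect([0.0, 1.0]) == 0.0
# fidelity against the ideal ground state reproduces Eq. (f3) ...
assert abs(fidelity([0.0, 1.0], p) - math.exp(-G * t)) < 1e-12
# ... while the infinite-temperature initial state gives the constant 1/2
p_inf = [1 - math.exp(-G * t) / 2, math.exp(-G * t) / 2]
assert abs(fidelity([0.5, 0.5], p_inf) - 0.5) < 1e-12
```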
Let us now consider the operator norms \cite{Kato} that measure the deviation of the system state from the ideal one, to quantify the degree of decoherence, as proposed in~\cite{norm,additivity,jctn}. Such measures do not require the initial density matrix to be a pure state. We define the deviation according to \begin{equation}\label{deviation} \sigma(t) \equiv \rho(t) - \rho_{\rm ideal} (t) . \end{equation}\par\noindent We can use, for instance, the eigenvalue norm \cite{Kato}, \begin{equation}\label{n11} \left\|\sigma \right\|_{\lambda} = {\max_i} \left| {\lambda _i } \right|, \end{equation}\par\noindent or the trace norm, \begin{equation}\label{tracenorm} \left\| \sigma \right\|_{{\rm Tr}} = \sum\limits_i {\left| {\lambda _i } \right|}, \end{equation}\par\noindent etc., where $\lambda_i$ are the eigenvalues of the deviation operator (\ref{deviation}). Since density operators are Hermitian and bounded, their norms, as well as the norm of the deviation, can always be defined and evaluated by using the expressions shown, avoiding the more formal mathematical definitions. We also note that $\left\| A \right\|=0$ implies that $A=0$. The calculation of these norms is sometimes simplified by the observation that $\sigma(t)$ is traceless. Specifically, for two-level systems, we get \begin{equation} \left\| \sigma \right\|_{\lambda} = \sqrt {\left| {\sigma _{00} } \right|^2 + \left| {\sigma_{01} } \right|^2 } = {1 \over 2} \left\| \sigma \right\|_{{\rm Tr}}. \end{equation}\par\noindent For our example of the two-level system undergoing spontaneous decay, the norm is \begin{equation} \left\| \sigma \right\|_{\lambda} = 1 - e^{-\Gamma t} . \end{equation}\par\noindent The measures considered above quantify decoherence of a system provided that its initial state is given. However, in quantum computing, it is impractical to keep track of all the possible initial states for each quantum register that might be needed for implementing a particular quantum algorithm.
Furthermore, even the preparation of the initial state can introduce additional noise. Therefore, for the evaluation of fault-tolerance (scalability), it is necessary to obtain an upper-bound estimate of decoherence for an arbitrary initial state. To characterize decoherence for an arbitrary initial state, pure or mixed, we proposed~\cite{norm} to use the maximal norm, $D$, which is defined as an operator norm maximized over all the initial density matrices (the worst-case-scenario error estimate), \begin{equation}\label{normD} D(t) = \sup_{\rho (0)}\bigg(\left\| \sigma (t,\rho (0))\right\|_{\lambda} \bigg). \end{equation}\par\noindent For realistic two-level systems coupled to various types of environmental modes, the expressions for the maximal norm are surprisingly elegant and compact. They are usually monotonic and contain no oscillations due to the internal system dynamics. Most importantly, in the next section we will establish the {\it additivity\/} property of the maximal-norm-of-deviation measure. Here we conclude by presenting the expressions for this measure for the two gates for the semiconductor double-dot system introduced in the preceding section. The qubit error measure, $D$, was obtained from the deviation of the density matrix from the ``ideal'' evolution by using the operator norm approach~\cite{norm}. After lengthy calculations, one gets \cite{dd} relatively simple expressions for the NOT gate, \begin{equation}\label{Da(t)} D_{\rm NOT}=\frac{1 - e^{ - \Gamma \tau}}{1+e^{ -\varepsilon / k_B T }}, \end{equation} and for the $\pi$ gate, \begin{equation}\label{Dp(t)} D_{\pi} = \frac{1}{2}\left[ 1 - e^{ - B^2 (\tau)}\right], \end{equation} where all the parameters were defined in Section~\ref{Sec2}. A realistic ``general'' noise estimate per typical quantum-gate cycle time $\tau$ could be taken as the larger of these two expressions.
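Both gate-error expressions, as well as the two-level norm relation quoted above, are simple enough to check numerically. A sketch (the rate, time, and temperature values are arbitrary, chosen only for illustration):

```python
import math

def norm_lambda(s00, s01):
    """Eigenvalue norm of a traceless Hermitian 2x2 deviation
       [[s00, s01], [conj(s01), -s00]]; the trace norm is twice this."""
    return math.sqrt(abs(s00) ** 2 + abs(s01) ** 2)

def d_not(gamma, tau, eps, kT):
    """Eq. (Da(t)): worst-case NOT-gate error."""
    return (1 - math.exp(-gamma * tau)) / (1 + math.exp(-eps / kT))

def d_pi(b2_tau):
    """Eq. (Dp(t)): worst-case pi-gate error, given B^2(tau)."""
    return 0.5 * (1 - math.exp(-b2_tau))

# spontaneous-decay example: sigma is diagonal, norm is 1 - exp(-Gamma t)
G, t = 1.0, 0.7
assert abs(norm_lambda(1 - math.exp(-G * t), 0.0)
           - (1 - math.exp(-G * t))) < 1e-12

# error-free limits, and a combined per-gate-cycle estimate
assert d_not(0.0, 1.0, 1.0, 1.0) == 0.0 and d_pi(0.0) == 0.0
err = max(d_not(0.05, 1.0, 2.0, 1.0), d_pi(0.02))
assert 0.0 < err < 0.5
```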
\section{Additivity of the Decoherence Measure}\label{Sec4} In the study of decoherence of several-qubit systems, one has to consider the degree to which the noisy environments of different qubits are correlated \cite{Palma,additivity,JPCM}. Furthermore, if all the constituent qubits interact with the same bath, then there are methods to reduce decoherence without quantum error correction, by instead encoding the state of one logical qubit in a decoherence-free subspace of the states of several physical qubits \cite{Zanardi,Palma,DFS,Lidar,Wubs}. In this section, we consider a several-qubit system and assume the ``worst case scenario,'' i.e., that the qubits experience uncorrelated noise, each coupled to a separate bath. Since analytical calculations for several qubits are impractical, we have to find some ``additivity'' properties that allow us to estimate the error measure for the whole system from the error measures of the constituent qubits. For a general class of decoherence processes, including those occurring in the semiconductor qubits considered in Section~\ref{Sec2}, we argue that the maximal deviation-norm measure introduced in Section~\ref{Sec3} is additive. The decoherence dynamics of a multiqubit system is rather complicated. The loss of quantum coherence also results in the loss of two-particle and several-particle entanglement in the system. The higher-order (multi-qubit) entanglements are ``encoded'' in the far off-diagonal elements of the multi-qubit register density matrix, and therefore these quantum correlations will decay at least as fast as the products of the decay factors for the qubits involved, as exemplified by several explicit calculations \cite{VPDT2,Eberly1,Storcz,Eberly2}. This observation supports the conclusion that at large times the \emph{rates\/} of decay of coherence of the qubits are additive. However, here we seek a different result.
We look for an additivity property which is valid not in the regime of the asymptotic large-time decay of quantum coherence, but for the short times, $\tau$, of quantum gate functions, when the noise level, namely the value of the measure $D(\tau)$ for each qubit, is relatively small. In this regime, we will establish \cite{additivity} that even for strongly entangled qubits$\,$---$\,$which are important for the utilization of the power of quantum computation$\,$---$\,$the error measures $D$ of the individual qubits in a quantum register are additive. Thus, the error measure for a register made of similar qubits scales up linearly with their number, consistent with other theoretical and experimental observations \cite{Dalton,Suter1,Suter2}. To characterize decoherence for an arbitrary initial state, pure or mixed, we thus use the maximal norm, $D$, which was defined in (\ref{normD}) as an operator norm maximized over all the possible initial density matrices. One can show that $0 \leq D(t) \leq 1$. This measure of decoherence will typically increase monotonically from zero at $t=0$, saturating at large times at a value $D(\infty) \leq 1$. The definition of the maximal decoherence measure $D(t)$ looks rather complicated for a general multiqubit system. However, it can be evaluated in closed form for short times, appropriate for quantum computing, for a single-qubit (two-state) system. We then establish an approximate additivity that allows us to estimate $D(t)$ for several-qubit systems as well. The evolution of the reduced density operator of the system (\ref{f1}) and that of the ideal density matrix (\ref{f2}) can be formally expressed \cite{Kitaev3,Kitaev,Kitaev2} in the superoperator notation as \begin{equation}\label{T1} \rho(t)=T(t)\rho(0), \end{equation} \begin{equation}\label{Ti} \rho^{(i)}(t)=T^{(i)}(t)\rho(0), \end{equation} where $T$, $T^{(i)}$ are linear superoperators.
The deviation matrix can be expressed as \begin{equation}\label{t1} \sigma(t)=\left[T(t)-T^{(i)}(t)\right]\rho(0). \end{equation} The initial density matrix can be decomposed as follows, \begin{equation}\label{mixture} \rho(0)=\sum_{j} p_j |\psi_j\rangle\langle\psi_j|, \end{equation} where $\sum_j p_j=1$ and $0 \leq p_j\leq1$. Here the wavefunction set $|\psi_j\rangle$ is not assumed to have any orthogonality properties. Then, we get \begin{equation} \sigma\left(t, \rho(0)\right)=\sum_{j} p_j \left[T(t)-T^{(i)}(t)\right]\left|\psi_j\right\rangle\left\langle\psi_j\right|. \end{equation} The deviation norm can thus be bounded, \begin{equation}\label{proj} \|\sigma(t, \rho(0))\|_{\lambda} \; \leq \; \left\| \left[T(t)-T^{(i)}(t)\right] |\phi\rangle\langle\phi|\right\|_{\lambda}. \end{equation} Here $|\phi\rangle$ is defined according to \begin{equation}\nonumber \left\| \left[T-T^{(i)}\right] |\phi\rangle\langle\phi|\right\|_{\lambda}= \max_j\left\| \left[T-T^{(i)}\right] |\psi_j\rangle\langle\psi_j|\right\|_{\lambda}. \end{equation} For any initial density operator which is a statistical mixture, one can always find a pure-state density operator, $|\phi\rangle\langle\phi|$, such that $\|\sigma(t, \rho(0))\|_{\lambda}\leq\|\sigma(t, |\phi\rangle\langle\phi|)\|_{\lambda}$. Therefore, the evaluation of the supremum over the initial density operators in order to find $D(t)$, see (\ref{normD}), can be carried out over only pure-state density operators, $\rho (0)$. Let us briefly consider strategies for evaluating $D(t)$ for a single qubit.
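As a concrete single-qubit illustration of (\ref{T1})--(\ref{t1}), consider the pure-dephasing map of Section~\ref{Sec2}, cf.\ (\ref{mm0}), written as a superoperator acting on $2\times2$ density matrices (a sketch; the channel choice and parameter values are ours, for illustration):

```python
import cmath, math

def apply_T(rho, b2, phase):
    """Pure-dephasing superoperator, cf. Eq. (mm0): diagonal elements are
       untouched, off-diagonal ones are damped by exp(-b2) and rotated."""
    d = math.exp(-b2)
    return [[rho[0][0], rho[0][1] * d * cmath.exp(1j * phase)],
            [rho[1][0] * d * cmath.exp(-1j * phase), rho[1][1]]]

def deviation(rho0, b2, phase):
    """sigma(t) = [T(t) - T_ideal(t)] rho(0); the ideal map has b2 = 0."""
    real = apply_T(rho0, b2, phase)
    ideal = apply_T(rho0, 0.0, phase)
    return [[real[i][j] - ideal[i][j] for j in range(2)] for i in range(2)]

rho0 = [[0.5, 0.5], [0.5, 0.5]]        # pure state (|0> + |1>)/sqrt(2)
sig = deviation(rho0, b2=0.2, phase=1.3)
assert sig[0][0] == 0 and sig[1][1] == 0        # deviation is off-diagonal
assert abs(abs(sig[0][1]) - 0.5 * (1 - math.exp(-0.2))) < 1e-12
```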
We can parameterize $\rho(0)$ as \begin{equation}\label{parametriazation1} \rho(0)=U \left( \begin{array}{cc} P & 0 \\ 0 & 1-P \\ \end{array} \right)U^{\dagger}, \end{equation} where $0\leq P \leq 1$, and $U$ is an arbitrary $2 \times 2$ unitary matrix, \begin{equation} U=\left( \begin{array}{cc} e^{i(\alpha+\gamma)}\cos\theta & e^{i(\alpha-\gamma)}\sin\theta \\ -e^{i(\gamma-\alpha)}\sin\theta & e^{- i(\alpha+\gamma)}\cos\theta \\ \end{array}\right). \end{equation} Then, one should find a supremum of the norm of deviation (\ref{n11}) over all the possible real parameters $P$, $\alpha$, $\gamma$ and $\theta$. As shown above, it suffices to consider the density operator in the form of a projector and put $P=1$. Thus, one should search for the maximum over the remaining three real parameters $\alpha$, $\gamma$ and $\theta$. Another parameterization of the pure-state density operators, $\rho(0)=|\phi\rangle\langle\phi|$, is to express an arbitrary wave function $|\phi\rangle=\sum_j (a_j+i b_j)|j\rangle$ in some convenient orthonormal basis $|j\rangle$, where $j=1,\ldots,N$. For a two-level system, \begin{equation}\label{parametrization2} \rho(0)=\left( \begin{array}{cc} a_1^2+b_1^2 & (a_1+i b_1)(a_2-i b_2) \\ (a_1-i b_1)(a_2+i b_2) & a_2^2+b_2^2 \\ \end{array} \right), \end{equation} where the four real parameters $a_{1,2},b_{1,2}$ satisfy $a_1^2+b_1^2+a_2^2+b_2^2=1$, so that the maximization is again over three independent real numbers. The final expressions (\ref{Da(t)}) and (\ref{Dp(t)}) for $D(t)$, for our selected single-qubit systems considered in Section \ref{Sec2}, are actually quite compact and tractable. In quantum computing, the error rates can be significantly reduced by using several physical qubits to encode each logical qubit \cite{Zanardi,DFS,Lidar}. 
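For the pure-dephasing channel the supremum in (\ref{normD}) can also be found by brute force over this pure-state parameterization, recovering (\ref{Dp(t)}): writing $|\phi\rangle=\cos\theta\,|0\rangle+e^{i\chi}\sin\theta\,|1\rangle$, the deviation is purely off-diagonal with $\|\sigma\|_{\lambda}=|\rho_{01}(0)|\,(1-e^{-B^2})$, and the phase $\chi$ drops out. A sketch (the value of $B^2$ is arbitrary):

```python
import math

def dev_norm(theta, b2):
    """||sigma||_lambda for pure dephasing acting on the pure state
       cos(theta)|0> + exp(i chi) sin(theta)|1>: sigma is off-diagonal,
       with |sigma_01| = |rho_01(0)| (1 - exp(-b2)), independent of chi."""
    r01 = abs(math.cos(theta) * math.sin(theta))       # |rho_01(0)|
    return r01 * (1 - math.exp(-b2))

b2 = 0.4
d_numeric = max(dev_norm(math.pi * k / 2000, b2) for k in range(2001))
d_exact = 0.5 * (1 - math.exp(-b2))                    # Eq. (Dp(t))
assert abs(d_numeric - d_exact) < 1e-9                 # maximum at theta = pi/4
```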
Therefore, even before active quantum error correction is incorporated \cite{Ben1,Ben2,qec,Steane,Bennett,Calderbank,SteanePRA,Gottesman,Knill}, evaluation of decoherence of several qubits is an important, but formidable task. Here our aim is to prove the approximate additivity of $D_q(t)$, including the case of initially \emph{entangled\/} qubits, labeled by $q$, whose dynamics is governed by \begin{equation} H=\sum_q H_q=\sum_q \left(H_{Sq}+H_{Bq}+H_{Iq}\right), \end{equation} where $H_{Sq}$ is the Hamiltonian of the $q$th qubit itself, $H_{Bq}$ is the Hamiltonian of the environment of the $q$th qubit, and $H_{Iq}$ is the corresponding qubit-environment interaction. We consider a more complicated (for actual evaluation) diamond norm \cite{Kitaev3,Kitaev,Kitaev2} as an auxiliary quantity, used to establish the additivity of the more easily calculable operator norm $D(t)$. The establishment of the upper-bound estimate for the maximal deviation norm of a multiqubit system involves several steps. We first derive a bound for this norm in terms of the diamond norm. Actually, for single qubits, in several models the diamond norm can be expressed via the corresponding maximal deviation norm. At the same time, the diamond norm for the whole quantum system is bounded by the sum of the norms of the constituent qubits, owing to a specific stability property of the diamond norm, $K(t)$. This norm is defined as \begin{equation}\label{supernormK} K(t) =\|T- T^{(i)}\|_{\diamond}=\sup_{\varrho} \| \{[T-T^{(i)}]{\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} I\} {\varrho} \|_{\rm Tr}. \end{equation} The superoperators $T$, $T^{(i)}$ characterize the actual and ideal evolutions according to (\ref{T1}), (\ref{Ti}). Here $I$ is the identity superoperator in a Hilbert space $G$ whose dimension is the same as that of the corresponding space of the superoperators $T$ and $T^{(i)}$, and $\varrho$ is an arbitrary density operator in the product space of twice the number of qubits.
The diamond norm has an important stability property, proved in \cite{Kitaev3,Kitaev,Kitaev2}, \begin{equation}\label{stability} \|B_1 {\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} B_2\|_{\diamond}=\|B_1\|_{\diamond} \|B_2\|_{\diamond}. \end{equation} Note that (\ref{stability}) is a property of the superoperators rather than that of the operators. Consider a composite system consisting of two subsystems $S_1$, $S_2$, with the noninteracting Hamiltonian \begin{equation} H_{S_1S_2}=H_{S_1}+H_{S_2}. \end{equation} The evolution superoperator of the system will be \begin{equation} T_{S_1S_2}=T_{S_1}{\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} T_{S_2}, \end{equation} and the ideal one \begin{equation} T_{S_1S_2}^{(i)}=T_{S_1}^{(i)}{\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} T_{S_2}^{(i)}. \end{equation} The diamond measure for the system can be expressed as \begin{eqnarray} &&K_{S_1S_2}^{\vphantom{(i)}}=\|T_{S_1S_2}^{\vphantom{(i)}} - T_{S_1S_2}^{(i)}\|_{\diamond}= \|(T_{S_1}^{\vphantom{(i)}}-T_{S_1}^{(i)}){\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} T_{S_2}^{\vphantom{(i)}}+T_{S_1}^{(i)}{\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} (T_{S_2}^{\vphantom{(i)}}-T_{S_2}^{(i)})\|_{\diamond}\nonumber\\ &&\leq\|(T_{S_1}^{\vphantom{(i)}}-T_{S_1}^{(i)}){\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} T_{S_2}^{\vphantom{(i)}}\|_{\diamond}+\|T_{S_1}^{(i)}{\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} (T_{S_2}^{\vphantom{(i)}}-T_{S_2}^{(i)})\|_{\diamond} . 
\label{justbelow} \end{eqnarray} By using the stability property (\ref{stability}), we get \begin{eqnarray} K_{S_1S_2}^{\vphantom{(i)}}\leq\|(T_{S_1}^{\vphantom{(i)}}-T_{S_1}^{(i)}){\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} T_{S_2}^{\vphantom{(i)}}\|_{\diamond}+\|T_{S_1}^{(i)}{\raise2pt\hbox{$\scriptscriptstyle{\otimes}$}} (T_{S_2}^{\vphantom{(i)}}-T_{S_2}^{(i)})\|_{\diamond}=\cr \|T_{S_1}^{\vphantom{(i)}}-T_{S_1}^{(i)}\|_{\diamond}\| T_{S_2}^{\vphantom{(i)}}\|_{\diamond} +\|T_{S_1}^{(i)}\|_{\diamond}\|T_{S_2}^{\vphantom{(i)}}- T_{S_2}^{(i)}\|_{\diamond}=\nonumber\\ \|T_{S_1}^{\vphantom{(i)}}-T_{S_1}^{(i)}\|_{\diamond}+\|T_{S_2}^{\vphantom{(i)}}- T_{S_2}^{(i)}\|_{\diamond}= K_{S_1}^{\vphantom{(i)}}+K_{S_2}^{\vphantom{(i)}}. \end{eqnarray} The inequality \begin{equation}\label{Kbound} K\le \sum_q K_{q}, \end{equation} for the diamond norm $K(t)$ has thus been obtained. Let us emphasize that the subsystems can be initially entangled. This property is particularly useful for quantum computing, the power of which is based on qubit entanglement. However, even in the simplest case of the diamond norm of one qubit, the calculations are extremely cumbersome. Therefore, the use of the measure $D(t)$ is preferable for actual calculations. For short times, characteristic of quantum gate functions, we can use (\ref{Kbound}) as an approximate inequality for order-of-magnitude estimates of decoherence measures, even when the qubits are interacting. Indeed, for short times, the interaction effects will not modify the quantities entering both sides significantly. The key point is that while the interaction effects are small, this inequality can be used for {\it strongly entangled\/} qubits. The two deviation-operator norms considered are related by the following inequality \begin{equation}\label{a1} \left\| \sigma \right\|_{{\lambda}}\leq\frac 1 2 \left\| \sigma \right\|_{{\rm Tr}}\leq 1.
\end{equation} Here the left-hand side follows from \begin{equation} \rm{Tr} \,\sigma =\sum_j\lambda_j =0. \end{equation} Therefore the $\ell$th eigenvalue of the deviation operator $\sigma$ that has the maximum absolute value, $\lambda_\ell=\lambda_{\rm{max}}$, can be expressed as \begin{equation}\lambda_{\ell}=-\sum_{j\neq \ell}\lambda_j.\end{equation} Thus, we have \begin{equation}\label{a2} \left\| \sigma \right\|_{{\lambda}}=\frac 1 2\left(2 |\lambda_\ell|\right) \leq \frac 1 2\left(|\lambda_\ell|+\sum_{j\neq \ell}|\lambda_j|\right)= \frac 1 2\left(\sum_j|\lambda_j|\right)=\frac 1 2\left\|\sigma \right\|_{\rm Tr}. \end{equation} The right-hand side of (\ref{a1}) then also follows, because any density matrix has trace norm 1, \begin{equation}\label{a3} \| \sigma \|_{{\rm Tr}} = \| \rho-\rho^{(i)} \|_{{\rm Tr}}\leq \| \rho \|_{{\rm Tr}}+ \|\rho^{(i)} \|_{{\rm Tr}}=2. \end{equation} From the relation (\ref{a3}) it follows that \begin{equation}\label{prop} K(t)\le 2. \end{equation} By taking the supremum of both sides of the relation (\ref{a2}) we get \begin{equation}\label{prop1} D(t)=\sup_{\rho(0)}\left\| \sigma \right\|_{{\lambda}}\le \frac 12 \sup_{\rho(0)}\left\| \sigma \right\|_{\rm Tr} \le\frac 12 K(t), \end{equation} where the last step involves technical derivation details \cite{additivity} not reproduced here. In fact, for a single qubit, calculations for typical qubit models \cite{additivity} give \begin{equation} D_q(t)={\frac 1 2} K_q(t). \end{equation} Since $D$ is generally bounded by (or equal to) $K/2$, it follows that the multiqubit norm $D$ is approximately bounded from above by the sum of the single-qubit norms even for the \emph{initially entangled\/} qubits, \begin{equation}\label{DN1} D(t) \le \frac 12 K(t) \le \frac 12 \sum_q K_{q}(t)= \sum_q D_{q}(t), \end{equation} where $q$ labels the qubits. 
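The chain of inequalities (\ref{a1})--(\ref{a3}) can be spot-checked numerically. The sketch below is an illustration of ours (real symmetric $3\times 3$ density matrices and the standard trigonometric eigenvalue formula for a traceless symmetric matrix), verifying $\|\sigma\|_{\lambda}\le \frac12 \|\sigma\|_{\rm Tr}\le 1$ for random pairs $\rho$, $\rho^{(i)}$:

```python
import math
import random

def sym3_eigs(A):
    """Eigenvalues of a traceless symmetric 3x3 matrix A via the
    trigonometric formula: lambda^3 - 3*lambda - det(A/p) = 0 for A/p
    normalized so that tr((A/p)^2) = 6."""
    tr_sq = sum(A[i][j] * A[j][i] for i in range(3) for j in range(3))
    p = math.sqrt(tr_sq / 6.0)
    if p < 1e-15:
        return [0.0, 0.0, 0.0]
    B = [[A[i][j] / p for j in range(3)] for i in range(3)]
    detB = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
            - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
            + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
    r = max(-1.0, min(1.0, detB / 2.0))     # clamp rounding noise
    phi = math.acos(r) / 3.0
    return [2.0 * p * math.cos(phi + 2.0 * math.pi * k / 3.0)
            for k in range(3)]

def random_density3():
    """Random real symmetric 3x3 density matrix rho = M M^T / tr(M M^T)."""
    M = [[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
    G = [[sum(M[i][k] * M[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = G[0][0] + G[1][1] + G[2][2]
    return [[G[i][j] / t for j in range(3)] for i in range(3)]

random.seed(0)
for _ in range(500):
    rho, rho_i = random_density3(), random_density3()
    sigma = [[rho[i][j] - rho_i[i][j] for j in range(3)] for i in range(3)]
    eigs = sym3_eigs(sigma)                  # traceless: eigenvalues sum to 0
    op_norm = max(abs(l) for l in eigs)
    tr_norm = sum(abs(l) for l in eigs)
    assert op_norm <= 0.5 * tr_norm + 1e-10  # left inequality in (a1)
    assert tr_norm <= 2.0 + 1e-10            # right inequality in (a1)
```

For a qubit the left inequality is in fact an equality, since a traceless $2\times 2$ Hermitian deviation has eigenvalues $\pm\lambda$.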
For specific models of decoherence of the type encountered in Section~\ref{Sec2}, as well as those formulated for general studies of short-time decoherence \cite{norm}, a stronger property has been demonstrated by deriving additional bounds not reviewed here \cite{additivity}, namely, that the noise measures are actually equal for low levels of noise, \begin{equation}\label{DN4-b} D(t)=\sum_q D_{q}(t)+o\left(\sum_q D_{q}(t)\right). \end{equation} Thus, in this section we considered the maximal operator norm suitable for evaluation of decoherence for a quantum register consisting of qubits immersed in noisy environments. We established the approximate additivity property of this measure of decoherence for multi-qubit registers at short times, for which the level of quantum noise is low and the qubit-qubit interaction effects are small, but without any limitation on the initial entanglement of the qubit register. In conclusion, we surveyed the theory of evaluation of quantum noise effects for quantum registers. The maximal deviation norm was proposed for error estimation, and its expressions were presented for a realistic model of a semiconductor double-dot qubit interacting with acoustic phonons. The maximal deviation norm has a unique additivity property which facilitates error-rate estimation for several-qubit registers. \section*{Acknowledgments} We are grateful to A.~Fedorov, D.~Mozyrsky, D.~Solenov, I.~Vagner, and D.~Tolkunov for collaborations and instructive discussions. This research was supported by the National Science Foundation, grant DMR-0121146. \end{document}
\begin{document} \title{{\textbf{SDP Duals without Duality Gaps for a Class of Convex Minimax Programs}\footnote{Research was partially supported by a grant from the Australian Research Council.}}} \author{\textsc{V. Jeyakumar}\thanks{Corresponding author. Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail: [email protected]} \qquad and \qquad \textsc{J. Vicente-P\'erez}\thanks{Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. This author has been partially supported by the MICINN of Spain, Grant MTM2011-29064-C03-02. E-mail: [email protected]} } \date{May 5, 2013} \maketitle \begin{abstract} \noindent In this paper we introduce a new dual program, which is representable as a semi-definite linear programming problem, for a primal convex minimax programming model problem and show that there is no duality gap between the primal and the dual whenever the functions involved are SOS-convex polynomials. Under a suitable constraint qualification, we derive strong duality results for this class of minimax problems. Consequently, we present applications of our results to robust SOS-convex programming problems under data uncertainty and to minimax fractional programming problems with SOS-convex polynomials. We obtain these results by first establishing sum of squares polynomial representations of non-negativity of a convex max function over a system of SOS-convex constraints. The new class of SOS-convex polynomials is an important subclass of convex polynomials and it includes convex quadratic functions and separable convex polynomials. The SOS-convexity of polynomials can numerically be checked by solving semi-definite programming problems whereas numerically verifying convexity of polynomials is generally very hard. 
\noindent{\footnotesize \textsl{Keywords:} SOS-convex polynomials, sum of squares polynomials, minimax programming, semidefinite programming, zero duality gap.} \end{abstract} \section{Introduction} \label{SEC1} Consider the minimax programming problem \begin{equation*} \begin{array}{ccl} (P) & \inf\limits_{x\in\mathbb{R}^n} & \max\limits_{j\in \mathbb{N}_r}\,p_j(x)\\ & \text{s.t.} & g_i(x) \leq 0, \ i\in \mathbb{N}_m, \end{array} \end{equation*} where $p_j$, for $j\in \mathbb{N}_r:=\{1,\ldots,r\}$, and $g_i$, for $i\in \mathbb{N}_m:=\{1,\ldots,m\}$, are real polynomials on $\mathbb{R}^n$. Discrete minimax model problems of the form $(P)$ arise in many areas of applications in engineering and commerce as resource allocation and planning problems (\cite{ibaraki} and other references therein). More recently, these models have appeared in robust optimization \cite{robust,bert} which is becoming increasingly important in optimization due to the reality of uncertainty in many real-world optimization problems and the importance of finding solutions that are immunized against data uncertainty. For instance, consider the optimization model problem with the data uncertainty in the constraints and in the objective function: \begin{equation*} \inf \{f_0(x,v_0) \ : \ f_i(x,v_i) \leq 0,\ \forall i=1,\ldots,k\}, \end{equation*} where $v_i\in\mathbb{R}^{n_i}$ is an uncertain parameter belonging to a finite uncertainty set $\mathcal V_i:=\{ v_i^1,\ldots,v_i^{s_i}\}$ for each $i\in\{0\}\cup\mathbb{N}_k$. 
The robust counterpart of the uncertain problem, which finds a robust solution that is immunized against all the possible uncertain scenarios, is then given by the minimax model problem of the form $(P)$, \begin{equation*} \inf\limits_{x\in\mathbb{R}^n} \left\{ \max\limits_{v_0\in \mathcal V_0} f_0(x,v_0) : f_i(x,v_i^j) \leq 0,\ \forall j = 1,\ldots,s_i, \forall i=1,\ldots,k \right\} , \end{equation*} where the uncertain constraints are enforced for every possible value of the parameters $v_i^j$ within their uncertainty sets $\mathcal V_i$. In the case of the standard convex polynomial programming problem, where $r=1$ and the functions involved in our model problem $(P)$ are convex polynomials, it is known that there is no duality gap between $(P)$ and its Lagrangian dual \cite{Auslender}. However, the Lagrangian dual, in general, may not easily be solvable. Recent research has shown that whenever $r=1$ and the functions involved in $(P)$ are \emph{SOS-convex polynomials} (see Definition 2.1), there is no duality gap between $(P)$ and a dual problem which is representable as a semidefinite programming problem (SDP). Such a duality result is of great interest in optimization because SDPs can efficiently be solved by interior-point methods, and so the optimal value of the original model $(P)$ can be found by solving its dual problem \cite{jeya-li-siam}. The new class of SOS-convex polynomials from algebraic geometry \cite{HeNie10,Lasserre} is an important subclass of convex polynomials, and it includes convex quadratic functions and separable convex polynomials. The SOS-convexity of polynomials can numerically be checked by solving semidefinite programming problems, whereas deciding convexity of polynomials is generally very hard \cite{Parrilo,Parrilo1}. This raises the very basic issue of which \emph{convex minimax programming problems} can be presented with zero duality gap where the duals can be represented as semidefinite linear programming problems.
In this paper we address this issue by way of examining minimax programming problems $(P)$ with SOS-convex polynomials. We make the following contributions to minimax optimization. I. Without any qualifications, we establish dual characterizations of non-negativity of max functions of convex polynomials over a system of convex polynomial inequalities and then derive sum-of-squares-polynomial representations of non-negativity of max functions of SOS-convex polynomials over a system of SOS-convex polynomial inequalities. II. Using the sum-of-squares-polynomial representations, we introduce a dual program for $(P)$, which is representable as a semidefinite linear programming problem, and show that there is no duality gap between $(P)$ and its dual whenever the functions $p_j$'s and $g_i$'s are SOS-convex polynomials. Under a constraint qualification, we prove that strong duality holds between $(P)$ and its dual problem. As an application, we prove that the value of a robust convex programming problem under polytopic data uncertainty is equal to its SDP dual program. The significance of our duality theorems is that the value of our model problem $(P)$ can easily be found by solving its SDP dual problem. III. Under a constraint qualification, we establish that strong duality continues to hold for SOS-convex minimax fractional programming problems with their corresponding SDP duals, including minimax linear fractional programming problems for which the SDP dual problems reduce to linear programming problems. The outline of the paper is as follows. Section \ref{SEC2} provides dual characterizations and representations of non-negativity of max functions of convex polynomials as well as SOS-convex polynomials over a system of inequalities. Section \ref{SEC3} presents zero duality gaps and strong duality results for our model problem $(P)$. 
Section \ref{SEC4} gives applications of our duality results to classes of robust convex optimization problems and minimax fractional programming problems. The Appendix provides a basic re-formulation of our dual problem as a semidefinite linear programming problem. \section{Dual Characterizations and Representations of Non-negativity} \label{SEC2} In this Section, we present dual characterizations of solvability of inequality systems involving convex as well as SOS-convex polynomials. Firstly, we shall recall a few basic definitions and results which will be needed in the sequel. We say that a real polynomial $f$ is a \emph{sum of squares} \cite{Laurent_survey} if there exist real polynomials $f_j$, $j=1,\ldots,s$, such that $f=\sum_{j=1}^{s}{f_j^2}$. The set of all sum of squares real polynomials is denoted by $\Sigma^2$, whereas the set consisting of all sum of squares real polynomials with degree at most $d$ is denoted by $\Sigma^2_d$. Similarly, we say that a matrix polynomial $F \in \mathbb{R}[x]^{n \times n}$ is a SOS-matrix polynomial if $F(x)=H(x)H(x)^T$, where $H(x) \in \mathbb{R}[x]^{n \times s}$ is a matrix polynomial for some $s \in \mathbb{N}$. We now introduce the definition of a SOS-convex polynomial. \begin{definition}[{\cite{Parrilo,HeNie10}}] A real polynomial $f$ on $\mathbb{R}^n$ is called \textit{SOS-convex} if the Hessian matrix function $x\mapsto \nabla^2 f(x)$ is a SOS-matrix polynomial. \end{definition} Clearly, a SOS-convex polynomial is convex. However, the converse is not true: there exists a convex polynomial which is not SOS-convex \cite{Parrilo}. It is known that any convex quadratic function and any separable convex polynomial are SOS-convex polynomials. Moreover, a SOS-convex polynomial can be non-quadratic and non-separable. For instance, $f(x)=x_1^8+x_1^2+x_1x_2+x_2^2$ is a SOS-convex polynomial (see \cite{Nie0}) which is non-quadratic and non-separable.
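An SOS-matrix certificate for this example would come from an SDP solver; as a quick hand check of convexity, the sketch below (ours, not from the paper) verifies that the hand-computed Hessian $\nabla^2 f(x) = \left(\begin{smallmatrix} 56x_1^6+2 & 1 \\ 1 & 2 \end{smallmatrix}\right)$ is positive definite at random points via its leading principal minors:

```python
import random

def hessian_f(x1):
    """Hessian of f(x) = x1**8 + x1**2 + x1*x2 + x2**2, computed by hand;
    it does not depend on x2."""
    return ((56.0 * x1 ** 6 + 2.0, 1.0),
            (1.0, 2.0))

random.seed(0)
for _ in range(10000):
    x1 = random.uniform(-10.0, 10.0)
    (h11, h12), (_, h22) = hessian_f(x1)
    # 2x2 positive definiteness via leading principal minors:
    # h11 = 56*x1^6 + 2 > 0 and det = h11*h22 - h12^2 = 112*x1^6 + 3 > 0.
    assert h11 > 0.0 and h11 * h22 - h12 * h12 > 0.0
```

Since the determinant $112x_1^6+3$ is positive for every $x_1$, the Hessian is positive definite everywhere, confirming (ordinary) convexity of $f$.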
The following basic known results on convex polynomials play key roles throughout the paper. \begin{lemma}[{\cite[Lemma 8]{HeNie10}}] \label{polysos} Let $f$ be a SOS-convex polynomial. If $f(u)=0$ and $\nabla f(u)=0$ for some $u\in\mathbb{R}^n$, then $f$ is a sum of squares polynomial. \end{lemma} \begin{lemma}[{\cite[Theorem 3]{BeKl02}}] \label{minattain} Let $f_0,f_1,\ldots,f_m$ be convex polynomials on $\mathbb{R}^n$. Suppose that $\inf_{x\in C}f_0(x)>-\infty$ where $C:=\{x \in \mathbb{R}^n : f_i(x) \leq 0, i\in\mathbb{N}_m \} \neq \emptyset$. Then, $\argmin_{x\in C} f_0(x) \neq \emptyset$. \end{lemma} \begin{corollary} \label{polyissos} Any nonnegative SOS-convex polynomial on $\mathbb{R}^n$ is a sum of squares polynomial. \end{corollary} \begin{proof} Let $f$ be a nonnegative SOS-convex polynomial on $\mathbb{R}^n$. In virtue of Lemma \ref{minattain}, we know that $\min_{x\in\mathbb{R}^n} f(x) = f(x^*)$ for some $x^*\in\mathbb{R}^n$. Therefore, $h := f - f(x^*) $ is a nonnegative SOS-convex polynomial such that $h(x^*) = 0$ and $\nabla h(x^*)=0$. By applying Lemma \ref{polysos} we get that $h$ is a sum of squares polynomial, so $ f - f(x^*) = \sigma $ for some $\sigma \in \Sigma^2$. Therefore, $f = \sigma + f(x^*) $ is a sum of squares polynomial since $f(x^*)\geq 0$. \end{proof} Let $\Delta$ be the simplex in $\mathbb{R}^r$, that is, $\Delta := \left\{\delta\in \mathbb{R}^r_+ : \sum_{j=1}^r \delta_j = 1 \right\}$. \begin{theorem}[{Dual characterization of non-negativity}] \label{qfreeconvexeq} Let $p_j$ and $g_i$ be convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, with $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. Then, the following statements are equivalent: \begin{enumerate} \item[\rm($i$)] $g_i(x) \leq 0, \, i\in \mathbb{N}_m \Rightarrow \max\limits_{j\in \mathbb{N}_r} p_j(x) \geq 0 $. 
\item[\rm($ii$)] $(\forall \varepsilon>0)$ $(\exists\,\bar{\delta}\in\Delta, \bar{\lambda}\in\mathbb{R}^m_+)$ \ $\sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon > 0 $. \end{enumerate} \end{theorem} \begin{proof} $(ii) \Rightarrow (i)$ Suppose that for each $\varepsilon>0$, there exist $\bar{\delta}\in\Delta$ and $\bar{\lambda}\in\mathbb{R}^m_+$ such that $\sum_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon > 0 $. Then, for any $x\in\mathcal F$ we have $$ \max\limits_{\delta\in\Delta} \sum\limits_{j=1}^{r}{\delta_j p_j(x)} + \varepsilon \geq \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j(x)} + \varepsilon \geq \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j(x)} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i(x)} + \varepsilon > 0. $$ Letting $\varepsilon\rightarrow 0$, we see that $ \max\limits_{j\in \mathbb{N}_r} p_j(x) = \max\limits_{\delta\in\Delta} \sum\limits_{j=1}^{r}{\delta_j p_j} \geq 0$ for all $x\in\mathcal F$. $(i) \Rightarrow (ii)$ Assume that $(i)$ holds. Let $\varepsilon>0$ be arbitrary and let $f_j:=p_j+\varepsilon$ for all $j\in \mathbb{N}_r$. Then, one has \begin{equation*} \max\limits_{j\in \mathbb{N}_r} f_j(x) = \max\limits_{j\in \mathbb{N}_r} \{p_j(x)\} + \varepsilon > 0 \quad \forall x\in\mathcal F. \end{equation*} Now, we will show that the set \begin{equation*} G:=\left\{z=\left(\underline{z},\overline{z}\right)\in \mathbb{R}^{r+m} : \exists\,x \in\mathbb{R}^n \text{ such that } f_j(x) \leq \underline{z}_j, j\in\mathbb{N}_r, g_i(x) \leq \overline{z}_i, i\in\mathbb{N}_m \right\} \end{equation*} is a closed and convex set. As $f_j$ and $g_i$ are all convex polynomials, then $G$ is clearly a convex set. To see that it is closed, let $\{z^k\}_{k\in\mathbb{N}} \subset G$ be such that $\{z^k\} \rightarrow z^*$ as $k \rightarrow \infty$. 
Then, for each $k\in\mathbb{N}$, there exists $x^k \in \mathbb{R}^n$ such that $f_j(x^k) \leq \underline{z}^k_j$ and $g_i(x^k) \leq \overline{z}^k_i$, for all $j\in\mathbb{N}_r$ and $i\in\mathbb{N}_m$. Now, consider the convex optimization problem \begin{equation*} \begin{array}{ccl} (\bar{P}) & \min\limits_{x\in\mathbb{R}^n,u\in\mathbb{R}^{r+m}} & \left\|u-z^*\right\|^2 \\ & \text{s.t.} & f_j(x)-\underline{u}_j \leq 0, j\in\mathbb{N}_r, \\ & & g_i(x)-\overline{u}_i \leq 0, i\in\mathbb{N}_m. \end{array} \end{equation*} Obviously, $0 \leq \inf(\bar{P}) \leq \left\|z^k-z^*\right\|^2 $ for all $k\in\mathbb{N}$. Since $\left\|z^k-z^*\right\|^2 \rightarrow 0$ as $k \rightarrow \infty$, we get $\inf(\bar{P}) = 0$. Moreover, Lemma \ref{minattain} implies that $\inf(\bar{P})$ is attained, and so, there exists $x^* \in \mathbb{R}^n$ such that $f_j(x^*) \leq \underline{z}^*_j$, $j\in\mathbb{N}_r$, and $g_i(x^*) \leq \overline{z}^*_i$, $i\in\mathbb{N}_m$. So $z^* \in G$, and consequently, $G$ is closed. Since $\max_{j\in \mathbb{N}_r} f_j(x) > 0$ for all $x\in\mathcal F$, $0\notin G$. Hence, by the strict separation theorem \cite[Theorem 1.1.5]{Za2002}, there exist $v = \left(\underline{v},\overline{v}\right)\in\mathbb{R}^{r+m} \backslash \{0\}$, $\alpha \in \mathbb{R}$ and $\xi>0$ such that \begin{equation*} 0 = v^T 0 \leq \alpha < \alpha + \xi \leq \underline{v}^T \underline{z} + \overline{v}^T \overline{z} \end{equation*} for all $z \in G$. Since $G + \left(\mathbb{R}^r_+ \times \mathbb{R}^m_+\right) \subset G$, $\underline{v}_j \geq 0$ and $\overline{v}_i \geq 0$, for all $j\in\mathbb{N}_r$ and $i\in\mathbb{N}_m$. Observe that, for each $x \in \mathbb{R}^n$, $(f_1(x),\ldots,f_r(x),g_1(x),\ldots,g_m(x)) \in G$. So, for each $x \in \mathbb{R}^n$, \begin{equation} \label{equ:1} \sum_{j=1}^{r} \underline{v}_j f_j(x) + \sum_{i=1}^m \overline{v}_i g_i(x) \geq \alpha+\xi \geq \xi > 0 . \end{equation} Now, we claim $\underline{v}\in\mathbb{R}^r_+ \backslash\{0\}$. 
Otherwise, if $\underline{v} = 0$, then we get from \eqref{equ:1} that $\sum_{i=1}^m \overline{v}_i g_i(\bar{x}) > 0$ for any $\bar{x}\in\mathcal F$ (recall that $\mathcal F$ is nonempty). Since $g_i(\bar{x}) \leq 0$ and $\overline{v}_i \geq 0$ for all $i\in\mathbb{N}_m$, we have $\sum_{i=1}^m \overline{v}_i g_i(\bar{x}) \leq 0$, which is a contradiction. So, $\kappa :=\sum_{j=1}^{r} \underline{v}_j > 0$. Therefore, \eqref{equ:1} implies that \begin{equation*} \sum\limits_{j=1}^{r}{\bar{\delta}_j f_j(x)} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i(x)} \geq \bar{\xi} > 0 \end{equation*} for all $x \in \mathbb{R}^n$, where $\bar{\delta}_j:= \kappa^{-1}\underline{v}_j \geq 0$ for all $j\in\mathbb{N}_r$, $\bar{\lambda}_i:=\kappa^{-1}\overline{v}_i \geq 0 $ for all $i\in\mathbb{N}_m$, and $ \bar{\xi}:= \kappa^{-1} \xi > 0$. Since $\sum_{j=1}^{r}\bar{\delta}_j = 1$, we can write \begin{equation*} \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon > 0. \end{equation*} Thus, the conclusion follows. \end{proof} Let $d$ be the smallest even number such that $d\geq \max\{ \max\limits_{j\in \mathbb{N}_r}\deg p_j, \max\limits_{i\in \mathbb{N}_m}\deg g_i \}$. \begin{theorem}[{SOS-Convexity \& representation of non-negativity}] \label{qfreesosconvexeq} Let $p_j$ and $g_i$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, with $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. Then, the following statements are equivalent: \begin{enumerate} \item[\rm($i$)] $g_i(x) \leq 0, \, i\in \mathbb{N}_m \Rightarrow \max\limits_{j\in \mathbb{N}_r} p_j(x) \geq 0 $. \item[\rm($ii$)] $(\forall \varepsilon>0)$ $(\exists\,\bar{\delta}\in\Delta, \bar{\lambda}\in\mathbb{R}^m_+,\bar{\sigma}\in\Sigma^2_d)$ $\sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon = \bar{\sigma} $.
\end{enumerate} \end{theorem} \begin{proof} $(ii) \Rightarrow (i)$ Suppose that for each $\varepsilon>0$, there exist $\bar{\delta}\in\Delta$, $\bar{\lambda}\in\mathbb{R}^m_+$ and $\bar{\sigma}\in\Sigma^2_d$ such that $\sum_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon = \bar{\sigma}$. Then, for any $x\in\mathcal F$ we have $$ \max\limits_{\delta\in\Delta} \sum\limits_{j=1}^{r}{\delta_j p_j(x)} + \varepsilon \geq \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j(x)} + \varepsilon = \bar{\sigma} - \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i(x)} \geq 0. $$ Letting $\varepsilon\rightarrow 0$, we see that $\max\limits_{j\in \mathbb{N}_r} p_j(x) \geq 0$ for all $x\in\mathcal F$. $(i) \Rightarrow (ii)$ Assume that $(i)$ holds and let $\varepsilon>0$ be arbitrary. Then, by Theorem \ref{qfreeconvexeq}, there exist $\bar{\delta}\in\Delta$ and $\bar{\lambda}\in\mathbb{R}^m_+$ such that $$L:=\sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon > 0 .$$ Since $p_j$ and $g_i$ are all SOS-convex polynomials, $L$ is a (nonnegative) SOS-convex polynomial too. Hence, Corollary \ref{polyissos} ensures that $L$ is a sum of squares polynomial (of degree at most $d$), that is, there exists $\bar{\sigma}\in\Sigma^2_d$ such that $$ \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum\limits_{i=1}^{m}{\bar{\lambda}_i g_i} + \varepsilon = \bar{\sigma} .$$ Thus, the conclusion follows. \end{proof} \section{Duality for Minimax Programs with SOS-convex Polynomials} \label{SEC3} In this Section we introduce the dual problem for our minimax model problem and establish duality theorems whenever the functions involved are SOS-convex polynomials.
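It is worth recalling why sum-of-squares constraints of the kind appearing above are semidefinite-representable: $\sigma(x)=z(x)^T Q z(x)$ for the vector of monomials $z(x)$ and some positive semidefinite Gram matrix $Q$. A minimal illustration of ours (the polynomial $x^4+2x^2+1=(x^2+1)^2$ and its Gram matrix are our example, not from the paper):

```python
import random

# q(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2, monomial vector z(x) = (1, x, x^2).
# A Gram matrix Q with q(x) = z(x)^T Q z(x):
Q = ((1.0, 0.0, 1.0),
     (0.0, 0.0, 0.0),
     (1.0, 0.0, 1.0))

def z(x):
    return (1.0, x, x * x)

def gram_value(x):
    zx = z(x)
    return sum(Q[i][j] * zx[i] * zx[j] for i in range(3) for j in range(3))

random.seed(0)
for _ in range(1000):
    x = random.uniform(-5.0, 5.0)
    q = x ** 4 + 2.0 * x ** 2 + 1.0
    assert abs(gram_value(x) - q) < 1e-8 * max(1.0, q)
# Q is positive semidefinite: Q = v v^T with v = (1, 0, 1), which certifies
# q(x) = (z(x) . v)^2 = (x^2 + 1)^2 >= 0.
```

Searching for a positive semidefinite Gram matrix matching given polynomial coefficients is exactly a semidefinite feasibility problem, which is what makes the dual below an SDP.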
Consider the minimax programming problem \begin{equation} \label{primal} \begin{array}{ccl} (P) & \inf\limits_{x\in\mathbb{R}^n} & \max\limits_{j\in \mathbb{N}_r}\,p_j(x)\\ & \text{s.t.} & g_i(x) \leq 0, \ i\in \mathbb{N}_m, \end{array} \end{equation} and its associated dual problem \begin{equation} \label{dual} \begin{array}{ccl} (D) & \sup & \mu \\ & \text{s.t.} & \sum\limits_{j=1}^{r}{\delta_j p_j} + \sum\limits_{i=1}^{m}{\lambda_i g_i} -\mu \in \Sigma^2_d \\ & & \delta\in\Delta, \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}, \end{array} \end{equation} where $p_j$ and $g_i$ are real polynomials on $\mathbb{R}^n$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$ and $d$ is the smallest even number such that $d\geq \max\{ \max\limits_{j\in \mathbb{N}_r}\deg p_j, \max\limits_{i\in \mathbb{N}_m}\deg g_i \}$. It is well known that optimization problems of the form $(D)$ can equivalently be re-formula\-ted as semidefinite programming problem \cite{Lasserre}. See Appendix for details. For instance, consider the quadratic optimization problem $(P^{cq})$ where $p_j$ and $g_i$ are all quadratic functions, that is, $p_j(x) = x^T A_j x + a_j^T x + \alpha_j$ and $g_i(x) = x^T C_i x + c_i^T x + \gamma_i$ for all $x\in\mathbb{R}^n$, with $A_j, C_i \in \mathbb{S}^{n}$, the space of all symmetric $(n\times n)$ matrices, $a_j, c_i \in \mathbb{R}^n$ and $\alpha_j,\gamma_i \in \mathbb{R}$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, that is \begin{equation} \label{eq:0} \begin{array}{ccl} (P^{cq}) & \inf\limits_{x\in\mathbb{R}^n} & \max\limits_{j\in \mathbb{N}_r}\,x^T A_j x + a_j^T x + \alpha_j \\ & \text{s.t.} & x^T C_i x + c_i^T x + \gamma_i \leq 0,\ \ i\in \mathbb{N}_m. \end{array} \end{equation} In this case, the sum of squares constraint in its associated dual problem $\sum_{j=1}^{r}{\delta_j p_j} + \sum_{i=1}^{m}{\lambda_i g_i} -\mu \in \Sigma^2_2$ is equivalent to the inequality $\sum_{j=1}^{r}{\delta_j p_j} + \sum_{i=1}^{m}{\lambda_i g_i} -\mu \geq 0$. 
This, in turn (see \cite[p. 163]{bental-nemirovski}), is equivalent to \begin{equation*} \begin{pmatrix} \sum\limits_{j=1}^r\delta_j \alpha_j + \sum\limits_{i=1}^m\lambda_i\gamma_i -\mu & \frac{1}{2} (\sum\limits_{j=1}^r \delta_j a_j^T + \sum\limits_{i=1}^m\lambda_i c_i^T ) \\ \frac{1}{2} (\sum\limits_{j=1}^r \delta_j a_j + \sum\limits_{i=1}^m\lambda_i c_i ) & \sum\limits_{j=1}^r \delta_j A_j + \sum\limits_{i=1}^m\lambda_i C_i \end{pmatrix} \succeq 0. \end{equation*} Therefore, the dual problem of $(P^{cq})$ becomes \begin{equation} \label{eq:1} \begin{array}{ccl} (D^{cq}) & \sup & \mu \\ & \text{s.t.} & \sum\limits_{j=1}^r\delta_j \begin{pmatrix} 2\alpha_j & a_j^T \\ a_j & 2A_j \end{pmatrix} + \sum\limits_{i=1}^m \lambda_i \begin{pmatrix} 2\gamma_i & c_i^T \\ c_i & 2C_i\end{pmatrix} - \mu \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} \succeq 0, \\ & & \delta\in\Delta, \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}, \end{array} \end{equation} which is clearly a semidefinite programming problem. \begin{lemma} \label{infsup} Let $p_j$ and $g_i$ be convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, with $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. Then, \begin{equation} \label{zero:gap:1} \begin{array}{ccc} \inf(P) & = & \sup\limits_{\delta\in\Delta , \lambda\in\mathbb{R}^m_+} \inf\limits_{x\in\mathbb{R}^{n}} \left\{ \sum\limits_{j=1}^{r}{\delta_j p_j(x)} + \sum\limits_{i=1}^{m}{\lambda_i g_i(x)} \right\} . 
\end{array} \end{equation} \end{lemma} \begin{proof} Note that, for any $\bar{x}\in\mathcal F$, $\bar{\delta}\in\Delta$ and $\bar{\lambda}\in\mathbb{R}^m_+$, one has $$ \max\limits_{j\in \mathbb{N}_r} p_j(\bar{x}) \geq \sum\limits_{j=1}^r \bar{\delta}_j p_j(\bar{x}) \geq \sum\limits_{j=1}^r \bar{\delta}_j p_j(\bar{x}) + \sum\limits_{i=1}^m \bar{\lambda}_i g_i(\bar{x}) \geq \inf\limits_{x\in\mathbb{R}^{n}} \left\{ \sum\limits_{j=1}^r \bar{\delta}_j p_j(x) + \sum\limits_{i=1}^m \bar{\lambda}_i g_i(x) \right\}. $$ Therefore, $\inf(P) \geq \sup_{\delta\in\Delta , \lambda\in\mathbb{R}^m_+} \inf_{x\in\mathbb{R}^{n}} \{ \sum_{j=1}^{r}{\delta_j p_j(x)} + \sum_{i=1}^{m}{\lambda_i g_i(x)} \} $. To see the reverse inequality, we may assume without loss of generality that $\inf(P) > -\infty$, otherwise the conclusion follows immediately. Since $\mathcal F\neq \emptyset$, we have $\mu^*:=\inf(P) \in \mathbb{R}$. Then, for $\varepsilon > 0$ arbitrary, as $\max_{j\in \mathbb{N}_r} \{ p_j(x) -\mu^* \} \geq 0$ for all $x\in\mathcal F$, by Theorem \ref{qfreeconvexeq} we get that there exist $\bar{\delta}\in\Delta$ and $\bar{\lambda}\in\mathbb{R}^m_+$ such that $ \sum_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} > \mu^* - \varepsilon $. Consequently, $$ \sup\limits_{\delta\in\Delta , \lambda\in\mathbb{R}^m_+} \inf\limits_{x\in\mathbb{R}^{n}} \left\{ \sum\limits_{j=1}^{r}{\delta_j p_j(x)} + \sum\limits_{i=1}^{m}{\lambda_i g_i(x)} \right\} \geq \mu^* - \varepsilon.$$ Since the above inequality holds for any $\varepsilon>0$, passing to the limit we obtain the desired inequality, which concludes the proof. \end{proof} As a consequence of Lemma \ref{infsup}, we derive the following zero-duality gap result for $(P)$. \begin{theorem}[{Zero duality gap}] \label{strong:01} Let $p_j$ and $g_i$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, with $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. 
Then, $$\inf(P) = \sup(D).$$ \end{theorem} \begin{proof} For any $\bar{x}\in\mathcal F$ and any $\bar{\delta}\in\Delta$, $\bar{\lambda}\in\mathbb{R}^m_+$ and $\bar{\mu}\in\mathbb{R}$ such that $\sum_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} -\bar{\mu} = \bar{\sigma} \in \Sigma^2_d$, one has $$ \sum\limits_{j=1}^r \bar{\delta}_j \left(p_j(\bar{x}) - \bar{\mu} \right) = \sum\limits_{j=1}^r \bar{\delta}_j p_j(\bar{x}) - \bar{\mu} = \bar{\sigma}(\bar{x}) - \sum\limits_{i=1}^m \bar{\lambda}_i g_i(\bar{x}) \geq 0.$$ Then, there exists $j_0\in \mathbb{N}_r$ such that $p_{j_0}(\bar{x})-\bar{\mu} \geq 0$, and so, $ \bar{\mu} \leq \max\limits_{j\in \mathbb{N}_r} p_j(\bar{x}) $. Thus, $\sup(D) \leq \inf(P)$. To see the reverse inequality, we may assume without loss of generality that $\inf(P)>-\infty$, otherwise the conclusion follows immediately. Since $\mathcal F\neq \emptyset$, we have $\mu^*:=\inf(P) \in \mathbb{R}$. Then, as a consequence of Lemma \ref{infsup}, for $\varepsilon > 0$ arbitrary we have $$ \sup\limits_{\delta\in\Delta , \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}} \left\{ \mu : \sum\limits_{j=1}^{r}{\delta_j p_j} + \sum\limits_{i=1}^{m}{\lambda_i g_i} -\mu \geq 0 \right\} \geq \mu^* - \varepsilon. $$ As $p_j$ and $g_i$ are all SOS-convex polynomials, $L:=\sum_{j=1}^{r}{\delta_j p_j} + \sum_{i=1}^{m}{\lambda_i g_i} -\mu$ is a SOS-convex polynomial too. So, by Corollary \ref{polyissos}, $L$ is nonnegative if and only if $L\in \Sigma^2_d$. Hence, $\mu^* - \varepsilon \leq \sup(D) $. Since the previous inequality holds for any $\varepsilon>0$, passing to the limit we get $\mu^* \leq \sup(D) $, which concludes the proof. \end{proof} We now see that whenever the Slater condition, $$\left\{x\in\mathbb{R}^n : g_i(x) < 0, i\in \mathbb{N}_m \right\} \neq \emptyset ,$$ is satisfied, strong duality between $(P)$ and $(D)$ holds.
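As a numerical sanity check of the zero-duality-gap result above, the following Python sketch compares a grid approximation of $\inf(P)$ with the sup-inf value of Lemma \ref{infsup} on a small illustrative instance (the data below is ours, chosen only for the example, and the grid search is a rough substitute for actually solving $(D)$):

```python
# Toy instance (illustrative data, not from the paper):
#   (P)  inf_x  max{ x^2 - 1, (x - 1)^2 }   s.t.  g(x) = x^2 - 4 <= 0.
# Both objectives and the constraint are convex quadratics, hence SOS-convex,
# and the Slater condition clearly holds (take x = 0).

def p1(x): return x * x - 1.0
def p2(x): return (x - 1.0) ** 2
def g(x):  return x * x - 4.0

xs = [-3.0 + 0.005 * k for k in range(1201)]  # grid covering [-3, 3]

# Primal side: minimize the pointwise max over the feasible grid points.
primal = min(max(p1(x), p2(x)) for x in xs if g(x) <= 0.0)

# Dual side (sup-inf of Lemma-type): sup over (delta, lam) on a coarse grid
# of inf_x { delta*p1(x) + (1 - delta)*p2(x) + lam*g(x) }.
dual = max(
    min(d * p1(x) + (1.0 - d) * p2(x) + lam * g(x) for x in xs)
    for d in [0.05 * i for i in range(21)]
    for lam in [0.25 * j for j in range(9)]
)
```

For this instance the optimum is $0$, attained at $x^*=1$ where both objectives vanish, and the dual value is also $0$ (take $\delta=(0,1)$ and $\lambda=0$), so the two grid values agree up to the grid resolution.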
\begin{theorem}[{Strong duality}] \label{strong:02} Let $p_j$ and $g_i$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, with $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. If the Slater condition holds, then $$\inf(P) = \max(D).$$ \end{theorem} \begin{proof} Let $f:=\max_{j\in \mathbb{N}_r} p_j$ and $\mu^* := \inf(P) \in \mathbb{R}$. Thus, since the Slater condition is fulfilled, by the usual convex programming duality and the convex-concave minimax theorem, we get \begin{equation*} \mu^* = \inf(P) = \inf\limits_{x\in\mathbb{R}^n}\left\{ f(x) : g_i(x)\leq 0, i\in \mathbb{N}_m\right\} = \max\limits_{\lambda\in\mathbb{R}^m_+} \, \inf\limits_{x\in\mathbb{R}^n} \left\{ f(x) + \sum_{i=1}^m \lambda_i g_i(x) \right\} = \end{equation*} \begin{equation*} = \max\limits_{\lambda\in\mathbb{R}^m_+} \, \inf\limits_{x\in\mathbb{R}^n} \, \max_{\delta\in\Delta} \left\{ \sum_{j=1}^{r}{\delta_j p_j(x)} + \sum_{i=1}^m \lambda_i g_i(x) \right\} = \max\limits_{\lambda\in\mathbb{R}^m_+, \delta\in\Delta} \, \inf\limits_{x\in\mathbb{R}^n} \left\{ \sum_{j=1}^{r}{\delta_j p_j(x)} + \sum_{i=1}^m \lambda_i g_i(x) \right\}. \end{equation*} Hence, there exist $\bar{\lambda}\in \mathbb{R}^m_+$ and $\bar{\delta}\in\Delta$ such that \begin{equation*} L:=\sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} -\mu^* \geq 0. \end{equation*} As $p_j$ and $g_i$ are all SOS-convex polynomials, $L$ is a (nonnegative) SOS-convex polynomial too, and consequently, by virtue of Corollary \ref{polyissos}, $L$ is a sum of squares polynomial (of degree at most $d$). Hence, $(\bar{\delta},\bar{\lambda},\mu^*)$ is a feasible point of $(D)$, so $\mu^* \leq \sup(D) $. Since weak duality always holds, we conclude $\inf(P) = \max(D)$. \end{proof} Recall the minimax quadratic programming problem $(P^{cq})$ introduced in \eqref{eq:0} and its dual problem $(D^{cq})$ given in \eqref{eq:1}.
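The derivation of $(D^{cq})$ rests on the fact that a quadratic is nonnegative on all of $\mathbb{R}^n$ exactly when the associated block matrix is positive semidefinite. This can be sanity-checked directly in one variable; the following Python sketch uses hypothetical data ($p_1(x)=x^2$, $p_2(x)=(x-1)^2$, $g(x)=x-2$) and a fixed pair $(\delta,\lambda)$:

```python
# For fixed delta = (1/2, 1/2) and lam = 1/2, the weighted Lagrangian is
#   L(x) = 0.5*x^2 + 0.5*(x - 1)^2 + 0.5*(x - 2) - mu = x^2 - 0.5*x - 0.5 - mu,
# whose minimum over R is -0.5625 - mu (attained at x = 0.25).  So L >= 0 on R
# exactly when mu <= -0.5625; the 2x2 LMI of (D^cq) should agree.

def L(x, mu):
    return 0.5 * x**2 + 0.5 * (x - 1.0)**2 + 0.5 * (x - 2.0) - mu

def lmi_matrix(mu):
    # sum_j delta_j [[2*alpha_j, a_j], [a_j, 2*A_j]]
    #   + lam [[2*gamma, c], [c, 2*C]] - mu [[2, 0], [0, 0]]
    # For this one-variable data the blocks collapse to a single 2x2 matrix:
    return [[2.0 * (-0.5 - mu), -0.5],
            [-0.5, 2.0]]

def is_psd_2x2(M, tol=1e-12):
    # A symmetric 2x2 matrix is PSD iff both diagonal entries and the
    # determinant are nonnegative.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] >= -tol and M[1][1] >= -tol and det >= -tol

feasible_mu, infeasible_mu = -0.5625, -0.5  # the exact threshold is -0.5625
```

At $\mu=-0.5625$ the determinant of the $2\times 2$ matrix is exactly $0$, so the LMI holds; for any larger $\mu$ the determinant is negative, matching the pointwise nonnegativity threshold of $L$.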
Note that the set of all $(n\times n)$ positive semi-definite matrices is denoted by $\mathbb{S}^{n}_{+}$. \begin{corollary} \label{convex:quadratic} Let $A_j, C_i \in \mathbb{S}^{n}_{+}$, $a_j, c_i \in \mathbb{R}^n$, and $\alpha_j,\gamma_i \in \mathbb{R}$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$. If there exists $\bar{x}\in\mathbb{R}^n$ such that $\bar{x}^T C_i \bar{x} + c_i^T \bar{x} + \gamma_i < 0$ for all $i\in \mathbb{N}_m$, then $$\inf(P^{cq}) = \max(D^{cq}).$$ \end{corollary} \begin{proof} As $A_j, C_i \in \mathbb{S}^{n}_{+}$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, all the quadratic functions involved in $(P^{cq})$ are convex. Hence, since the Slater condition holds and any convex quadratic function is a SOS-convex polynomial, by applying Theorem \ref{strong:02} we get $\inf(P^{cq}) = \max(D^{cq})$. \end{proof} \begin{remark}[{Attainment of the optimal value}] \label{minimax:attain} For the problem $(P)$ introduced in \eqref{primal}, note that if $f:=\max\limits_{j\in\mathbb{N}_r}\,p_j$ (which is not a polynomial, in general) is bounded from below on the nonempty set $\mathcal F$, then $f$ attains its minimum on $\mathcal F$. In other words, if $\inf(P) \in \mathbb{R}$, then there exists $x^*\in\mathcal F$ such that $f(x^*)=\min(P)$. To see this, consider the following convex polynomial optimization problem. \begin{equation*} \begin{array}{ccl} (P_{e}) & \inf\limits_{(x,z)\in\mathbb{R}^{n}\times\mathbb{R}} & z \\ & \text{s.t.} & p_j(x) -z \leq 0,\ \forall j\in\mathbb{N}_r, \\ & & g_i(x) \leq 0, \ \forall i\in\mathbb{N}_m. \end{array} \end{equation*} Let $\mathcal F_e$ be the (nonempty) feasible set of $(P_e)$. Observe that $x_0\in\mathcal F$ implies $(x_0,z_0) \in \mathcal F_e$ for all $z_0 \geq f(x_0)$, and conversely, $(x_0,z_0) \in\mathcal F_e$ implies $x_0\in\mathcal F$. Moreover, one has $\inf(P)=\inf(P_{e})$.
Thus, Lemma \ref{minattain} can be applied to problem $(P_e)$ and then there exists $(x^*,z^*)\in \mathcal F_e$ such that $ z^* = \min(P_e)$. Since $z^* \leq z$ for all $(x,z)\in \mathcal F_e$ and $(x,f(x))\in \mathcal F_e$ for all $x\in \mathcal F$, we get \begin{equation} \label{att1} z^* \leq f(x) \qquad \forall x\in \mathcal F. \end{equation} On the other hand, as $(x^*,z^*)\in \mathcal F_e$ we get $x^*\in \mathcal F$ and \begin{equation} \label{att2} f(x^*) \leq z^*. \end{equation} Combining \eqref{att1} and \eqref{att2} we conclude $f(x^*) \leq f(x)$ for all $x\in \mathcal F$, and so, $x^*$ is a minimizer of $(P)$. \end{remark} Recall that the subdifferential of the (convex) function $f$ at $x\in\mathbb{R}^n$ is defined to be the set $$ \partial f(x):= \left\{ v\in\mathbb{R}^n : f(y) \geq f(x) + v^T(y-x),\ \forall y\in\dom f \right\}.$$ For a convex set $C\subset \mathbb{R}^{n}$, the normal cone of $C$ at $x\in C$ is given by $$ N_C(x):=\left\{v\in \mathbb{R}^n : v^T(y-x)\leq 0,\ \forall y\in C\right\}.$$ Let $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. We will say that the \emph{normal cone condition} holds for $\mathcal F$ at $x\in \mathcal F$ provided that $$ N_\mathcal F(x) = \left\{ \sum_{i=1}^m \lambda_i \nabla g_i(x) : \lambda \in \mathbb{R}^{m}_{+}, \sum_{i=1}^{m}{\lambda_i g_i(x)} = 0 \right\}.$$ It is known that the normal cone condition holds whenever the Slater condition is satisfied. \begin{theorem}[{Min-max duality}] \label{strong:03} Let $p_j$ and $g_i$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, with $\mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. Let $x^*\in\mathcal F$ be an optimal solution of $(P)$ and assume that the normal cone condition for $\mathcal F$ at $x^*$ holds. Then, $$\min(P) = \max(D).$$ \end{theorem} \begin{proof} Let $f:=\max_{j\in \mathbb{N}_r} p_j$ and $\mu^* := \min(P) \in \mathbb{R}$.
If $x^*\in\mathcal F$ is an optimal solution of $(P)$, that is, $f(x^*) = \mu^*$, then by optimality conditions we have $0\in \partial f(x^*) + N_{\mathcal F}(x^*)$. As a consequence of the normal cone condition for $\mathcal F$ at $x^*$ and \cite[Proposition 2.3.12]{Clarke}, we get $$ 0 = \sum\limits_{j=1}^{r}{\bar{\delta}_j \nabla p_j(x^*)} + \sum\limits_{i=1}^m \bar{\lambda}_i \nabla g_i(x^*) $$ for some $\bar{\lambda}\in\mathbb{R}^m_+$ with $\bar{\lambda}_i g_i(x^*) = 0$ for all $i\in \mathbb{N}_m$, and $\bar{\delta}\in \Delta$ with $\bar{\delta}_j=0$ for those $j\in\mathbb{N}_r$ such that $p_j(x^*) \neq \mu^*$. Note that the polynomial \begin{equation*} L :=\sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum\limits_{i=1}^m \bar{\lambda}_i g_i - \mu^* \end{equation*} satisfies $L(x^*)=0$ and $\nabla L(x^*)=0$. Moreover, $L$ is a SOS-convex polynomial since $p_j$ and $g_i$ are all SOS-convex polynomials. Then, as a consequence of Lemma \ref{polysos}, $L$ is a sum of squares polynomial (of degree at most $d$). Hence, $(\bar{\delta},\bar{\lambda},\mu^*)$ is a feasible point of $(D)$, so $\mu^* \leq \sup(D) $. Since weak duality always holds, we conclude $\min(P) = \max(D)$. \end{proof} It is worth noting that, in the case where $r=1$, our min-max duality Theorem \ref{strong:03} collapses to the corresponding strong duality Theorem 4.1 shown in \cite{jeya-li-siam}. The following simple example illustrates the above min-max duality theorem. \begin{example} Consider the optimization problem \begin{equation*} (P_1) \quad \min\limits_{x\in\mathbb{R}} \left\{ \max\{ 2x^4-x, 5x^2+x \} : x\geq -2 \right\}. \end{equation*} It is easy to check that $x^*=0$ is a minimizer of $(P_1)$ and $\min(P_1)=0$.
The corresponding dual problem of $(P_1)$ is \begin{equation*} (D_1) \quad \max\limits_{\delta\geq 0,\lambda \geq 0, \mu\in\mathbb{R}} \left\{ \mu : \delta(2x^4-x) + (1-\delta)(5x^2+x) - \lambda (x+2) - \mu \in \Sigma^2_4 \right\}. \end{equation*} As $ x^4 + \frac{5}{2}x^2 \in \Sigma^2_4 $, $\delta = \frac{1}{2}$, $\lambda=0$ and $\mu =0$ is a feasible point of $(D_1)$. So, $\sup(D_1) \geq 0$. On the other hand, evaluating the sum of squares constraint in $(D_1)$ at $x=0$ gives us $-2\lambda -\mu \geq 0$. Consequently, $\mu \leq -2\lambda \leq 0$, which implies $\max(D_1) = 0$. \end{example} \section{Applications to Robust Optimization \& Rational Programs} \label{SEC4} In this section, we provide applications of our duality theorems to robust SOS-convex programming problems under data uncertainty and to rational programming problems. Let us consider the following optimization problem with data uncertainty in both the constraints and the objective function. \begin{equation*} \begin{array}{ccl} (UP) & \inf & f_0(x,v_0) \\ & \text{s.t.} & f_i(x,v_i) \leq 0,\ \forall i=1,\ldots,k, \end{array} \end{equation*} where, for each $i\in\{0\}\cup\mathbb{N}_k$, $v_i$ is an uncertain parameter and $v_i \in \mathcal V_i$ for some $\mathcal V_i \subset \mathbb{R}^{n_i}$. The robust counterpart of $(UP)$, which finds a robust solution to $(UP)$ that is immunized against all the possible uncertain scenarios, is given by \begin{equation*} \begin{array}{ccl} (RP) & \inf & \sup\limits_{v_0\in \mathcal V_0} f_0(x,v_0) \\ & \text{s.t.} & f_i(x,v_i) \leq 0,\ \forall v_i\in\mathcal V_i, \forall i=1,\ldots,k. \end{array} \end{equation*} \begin{theorem}[{Finite data uncertainty}] Let $f_i(\cdot,v_i)$ be a SOS-convex polynomial for each $v_i\in \mathcal V_i :=\{v_i^1,\ldots,v_i^{s_i} \}$ and each $i\in\{0\}\cup\mathbb{N}_k$ and let $r:=s_0$. Assume that there exists $\bar{x}\in\mathbb{R}^n$ such that $f_i(\bar{x},v_i^j) < 0$ for all $j\in\mathbb{N}_{s_i}$ and $i\in\mathbb{N}_k$.
Then $\inf(RP) = \max(RD)$, where \begin{equation} \label{concl:robust} \begin{array}{ccl} (RD) & \sup & \mu \\ & \text{s.t.} & \sum\limits_{l=1}^{r}{\delta_l f_0(\cdot,v_0^l)} + \sum\limits_{i=1}^{k}{\sum\limits_{j=1}^{s_i}{ \lambda_i^j f_i(\cdot,v_i^j) }} - \mu \in \Sigma^2_t \\ & & \delta \in \Delta, \lambda_i\in\mathbb{R}^{s_i}_{+}\ (\forall i\in\mathbb{N}_k), \mu \in \mathbb{R}, \end{array} \end{equation} and $t$ is the smallest even number such that $t\geq \max\{ \max\limits_{l\in \mathbb{N}_{r}}\deg f_0(\cdot,v_0^l), \max\limits_{i\in \mathbb{N}_k}\max\limits_{j\in \mathbb{N}_{s_i}}\deg f_i(\cdot,v_i^j) \}$. \end{theorem} \begin{proof} It is easy to see that problem $(RP)$ is equivalent to \begin{equation} \label{dual:aux:rob} \begin{array}{ccl} (RP_e) & \inf & \max\limits_{j\in \mathbb{N}_{r}} f_0(x,v_0^j) \\ & \text{s.t.} & f_i(x,v_i^j) \leq 0,\ \forall j\in\mathbb{N}_{s_i}, \forall i=1,\ldots,k. \end{array} \end{equation} Since the Slater condition holds, by applying Theorem \ref{strong:02} we get $\inf(RP_e) = \max(RD)$. \end{proof} \begin{theorem}[{Polytopic data uncertainty}] Suppose that, for each $i\in\{0\}\cup\mathbb{N}_k$, $x \mapsto f_i(x,v_i)$ is a SOS-convex polynomial for each $v_i \in \mathcal V_i:=\co\{v_i^1,\ldots,v_i^{s_i} \}$ with $r:=s_0$, and $v_i \mapsto f_i(x,v_i)$ is affine for each $x\in\mathbb{R}^{n}$. Assume there exists $\bar{x}\in\mathbb{R}^n$ such that $f_i(\bar{x},v_i^j) < 0$ for all $j\in\mathbb{N}_{s_i}$ and $i\in\mathbb{N}_k$. Then, $\inf(RP) = \max(RD)$ where the problem $(RD)$ is defined in \eqref{concl:robust}. \end{theorem} \begin{proof} Let $i\in\mathbb{N}_k$. As $f_i(x,\cdot)$ is affine for each $x\in\mathbb{R}^{n}$, then $f_i(x,v_i) \leq 0$ for all $v_i\in\mathcal V_i:=\co\{v_i^1,\ldots,v_i^{s_i}\}$ if and only if $f_i(x,v_i^j) \leq 0$ for all $j\in\mathbb{N}_{s_i}$. 
Moreover, we see that $$ \sup\limits_{v_0\in \mathcal V_0} f_0(x,v_0) = \max\limits_{j\in \mathbb{N}_{r}} f_0(x,v_0^j) .$$ Hence, problem $(RP)$ is equivalent to $(RP_e)$ introduced in \eqref{dual:aux:rob}. Reasoning as in the proof of the above theorem we conclude $\inf(RP) = \max(RD)$. \end{proof} Now, consider the following minimax rational programming problem, \begin{equation*} \begin{array}{ccl} (\mathcal P) & \inf\limits_{x\in\mathbb{R}^n} & \max\limits_{j\in \mathbb{N}_r}\,\frac{p_j(x)}{q(x)}\\ & \text{s.t.} & g_i(x) \leq 0,\ \ i\in \mathbb{N}_m, \end{array} \end{equation*} where $p_j$, for $j\in \mathbb{N}_r$, $q$, and $g_i$, for $i\in \mathbb{N}_m$, are real polynomials on $\mathbb{R}^n$, and for each $j\in \mathbb{N}_r$, $p_j(x)\geq 0$ and $q(x) > 0 $ over the feasible set. This is a generalization of problem $(P)$ introduced in \eqref{primal}. For related minimax fractional programs, see \cite{crouzeix1,Lai99}. Minimax fractional programs often appear in resource allocation and planning problems of management science, where the objective functions involve ratios such as cost or profit per unit of time, return on capital, or earnings per share (see \cite{ibraki1}). We associate with $(\mathcal P)$ the following SDP dual problem \begin{equation} \label{dualfrac} \begin{array}{ccl} (\mathcal D) & \sup & \mu \\ & \text{s.t.} & \sum\limits_{j=1}^{r}{\delta_j p_j} + \sum\limits_{i=1}^{m}{\lambda_i g_i} -\mu q \in\Sigma^2_d \\ & & \delta\in\Delta, \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}, \end{array} \end{equation} where $d$ is the smallest even number such that $d\geq \max\{ \deg q, \max\limits_{j\in \mathbb{N}_r}\deg p_j, \max\limits_{i\in \mathbb{N}_m}\deg g_i \}$. It is worth noting that, in general, problem $(\mathcal P)$ may not attain its optimal value when it is finite, even when $r=1$. To see this, consider the rational programming problem $(\mathcal P_1)$ $\inf_{x\in\mathbb{R}}\left\{ \frac{1}{x} : 1-x \leq 0\right\}$.
Obviously, $\inf(\mathcal P_1) = 0$, however, for any feasible point $x$, one has $\frac{1}{x}>0$. Thus, the optimal value of $(\mathcal P_1)$ is not attained. \begin{theorem}[{Strong duality for minimax rational programs}] \label{strong:frac} Let $p_j$, $g_i$ and $-q$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, such that $p_j(x)\geq 0$ and $q(x) > 0 $ for all $x\in \mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. If the Slater condition holds, then $$\inf(\mathcal P) = \max(\mathcal D).$$ \end{theorem} \begin{proof} Note that for any $\mu\in\mathbb{R}_+$, one has $\inf(\mathcal P)\geq \mu$ if and only if $\inf(P_\mu)\geq 0$, where \begin{equation} \label{aux:problem} (P_{\mu}) \quad \inf\limits_{x\in\mathcal F}\,\max\limits_{j\in \mathbb{N}_r}\,\{p_j(x)-\mu q(x)\}. \end{equation} Since $p_j(x)\geq 0$ and $q(x)>0$ on the nonempty set $\mathcal F$, $\inf(\mathcal P)$ is finite. So, it follows easily that $\mu^* := \inf(\mathcal P) \in\mathbb{R}_+ $ and then $\inf(P_{\mu^*}) \geq 0$. Since, for each $j\in\mathbb{N}_r$, $p_j - \mu^* q$ is a SOS-convex polynomial and the Slater condition holds, by Theorem \ref{strong:02} we have that $\inf(P_{\mu^*}) = \max(D_{\mu^*})$ where \begin{equation} \begin{array}{ccl} (D_{\mu^*}) & \sup & \theta \\ & \text{s.t.} & \sum\limits_{j=1}^{r}{\delta_j p_j} + \sum\limits_{i=1}^{m}{\lambda_i g_i} -\mu^* q -\theta \in\Sigma^2_d \\ & & \delta\in\Delta, \lambda\in\mathbb{R}^m_+, \theta\in\mathbb{R}. \end{array} \label{ax:dl} \end{equation} As $\max(D_{\mu^*})=\inf(P_{\mu^*}) \geq 0$, there exist $\bar{\delta}\in\Delta$, $\bar{\lambda}\in \mathbb{R}^m_+$ and $\bar{\theta}\in \mathbb{R}_+$ such that $$ \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} -\mu^*q \in (\bar{\theta} + \Sigma^2_d) \subset \Sigma^2_d. $$ Therefore, $(\bar{\delta},\bar{\lambda},\mu^*)$ is a feasible point of $(\mathcal D)$, so $\mu^* \leq \sup(\mathcal D) $.
Since weak duality always holds, we conclude $\inf(\mathcal P) = \max(\mathcal D)$. \end{proof} Let us consider the particular problem $(\mathcal P^{cq})$ where $p_j$, $q$ and $g_i$ are all quadratic functions, that is, $p_j(x) = x^T A_j x + a_j^T x + \alpha_j$, $q(x) = x^T B x + b^T x + \beta$ and $g_i(x) = x^T C_i x + c_i^T x + \gamma_i$ for all $x\in\mathbb{R}^n$, with $A_j, B, C_i \in \mathbb{S}^{n}$, $a_j, b, c_i \in \mathbb{R}^n$ and $\alpha_j,\beta,\gamma_i \in \mathbb{R}$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, so that the problem reads \begin{equation*} \begin{array}{ccl} (\mathcal P^{cq}) & \inf\limits_{x\in\mathbb{R}^n} & \max\limits_{j\in \mathbb{N}_r}\,\displaystyle\frac{x^T A_j x + a_j^T x + \alpha_j}{x^T B x + b^T x + \beta} \\ & \text{s.t.} & x^T C_i x + c_i^T x + \gamma_i \leq 0,\ \ i\in \mathbb{N}_m. \end{array} \end{equation*} Assume that $p_j(x)\geq 0$ and $q(x) > 0 $ over the feasible set. The dual problem of $(\mathcal P^{cq})$ is given by \begin{equation*} \begin{array}{ccl} (\mathcal D^{cq}) & \sup & \mu \\ & \text{s.t.} & \sum\limits_{j=1}^r\delta_j \begin{pmatrix} 2\alpha_j & a_j^T \\ a_j & 2A_j \end{pmatrix} + \sum\limits_{i=1}^m \lambda_i \begin{pmatrix} 2\gamma_i & c_i^T \\ c_i & 2C_i\end{pmatrix} - \mu \begin{pmatrix} 2\beta & b^T \\ b & 2B \end{pmatrix} \succeq 0, \\ & & \delta\in\Delta, \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}, \end{array} \end{equation*} which is clearly a semidefinite programming problem. \begin{corollary} \label{convex:quadratic:2} Consider the problem $(\mathcal P^{cq})$ such that $A_j, -B, C_i \in \mathbb{S}^{n}_{+}$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$.
If there exists $\bar{x}\in\mathbb{R}^n$ such that $\bar{x}^T C_i \bar{x} + c_i^T \bar{x} + \gamma_i < 0$ for all $i\in \mathbb{N}_m$, then $$\inf(\mathcal P^{cq}) = \max(\mathcal D^{cq}).$$ \end{corollary} \begin{proof}Note that the sum of squares constraint in its associated dual problem $\sum_{j=1}^{r}{\delta_j p_j} + \sum_{i=1}^{m}{\lambda_i g_i} -\mu q \in \Sigma^2_2$ is equivalent to the inequality $\sum_{j=1}^{r}{\delta_j p_j} + \sum_{i=1}^{m}{\lambda_i g_i} -\mu q \geq 0$. This is equivalent to \begin{equation*} \begin{pmatrix} \sum\limits_{j=1}^r\delta_j \alpha_j + \sum\limits_{i=1}^m\lambda_i\gamma_i -\mu\beta & \frac{1}{2} (\sum\limits_{j=1}^r \delta_j a_j^T + \sum\limits_{i=1}^m\lambda_i c_i^T -\mu b^T) \\ \frac{1}{2} (\sum\limits_{j=1}^r \delta_j a_j + \sum\limits_{i=1}^m\lambda_i c_i -\mu b) & \sum\limits_{j=1}^r \delta_j A_j + \sum\limits_{i=1}^m\lambda_i C_i -\mu B \end{pmatrix} \succeq 0. \end{equation*} So, our dual problem $(\mathcal D)$ collapses to $(\mathcal D^{cq})$. Since the Slater condition holds and any convex quadratic function is a SOS-convex polynomial, by applying Theorem \ref{strong:frac} we get $\inf(\mathcal P^{cq}) = \max(\mathcal D^{cq})$. \end{proof} \begin{corollary} Let $p$, $g_i$ and $-q$ be SOS-convex polynomials for all $i\in \mathbb{N}_m$, such that $p(x)\geq 0$ and $q(x) > 0 $ for all $x\in \mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. If the Slater condition holds, then $$ \inf\limits_{x\in\mathcal F} \frac{p(x)}{q(x)} = \max\limits_{\mu\in\mathbb{R},\lambda\in\mathbb{R}^m_+}\left\{ \mu : p + \sum_{i=1}^{m}{\lambda_i g_i} -\mu q \in \Sigma^2_k \right\}$$ where $k$ is the smallest even number such that $k\geq \max\{ \deg p, \deg q, \max\limits_{i\in \mathbb{N}_m}\deg g_i \}$. \end{corollary} \begin{proof} It is a straightforward consequence of Theorem \ref{strong:frac} when $r=1$. 
\end{proof} Next we show that the non-negativity of the polynomials $p_j$ can be dropped whenever $q$ is an affine function. \begin{corollary} \label{strong:4} Let $p_j$ and $g_i$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, $b\in\mathbb{R}^n$ and $\beta\in\mathbb{R}$ such that $b^Tx+\beta>0$ for all $x\in \mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. If the Slater condition holds, then \begin{equation*} \inf\limits_{x\in\mathcal F} \, \max\limits_{j\in \mathbb{N}_r} \, \frac{p_j(x)}{b^Tx+\beta} = \max\limits_{\substack{\mu\in\mathbb{R} \\ \delta\in\Delta, \lambda\in\mathbb{R}^m_+}} \left\{ \mu : \sum\limits_{j=1}^{r}{\delta_j p_j(x)} + \sum\limits_{i=1}^{m}{\lambda_i g_i(x)} -\mu (b^Tx + \beta) \in \Sigma^2_d\right\}. \end{equation*} \end{corollary} \begin{proof} The proof follows the same line of arguments as the proof of Theorem \ref{strong:frac}, except that, when $q(x):=b^Tx+\beta$ for all $x\in\mathbb{R}^{n}$, each polynomial $p_j -\mu^*q$ is SOS-convex without requiring the non-negativity of the $p_j$'s (and hence of $\mu^*$). \end{proof} For the particular problem \begin{equation*} \begin{array}{ccl} (\mathcal P^{l}) & \inf\limits_{x\in\mathbb{R}^n} & \max\limits_{j\in \mathbb{N}_r}\,\frac{a_j^T x + \alpha_j}{b^T x + \beta} \\ & \text{s.t.} & c_i^T x + \gamma_i \leq 0, \ i\in \mathbb{N}_m, \end{array} \end{equation*} the corresponding dual problem can be stated as the following linear programming problem \begin{eqnarray} (\mathcal D^{l}) & \max & \mu \nonumber \\ & \text{s.t.} & \sum\limits_{j=1}^r \delta_j a_j + \sum\limits_{i=1}^m\lambda_i c_i -\mu b = 0, \label{const:lin1} \\ & & \sum\limits_{j=1}^r \delta_j \alpha_j + \sum\limits_{i=1}^m\lambda_i\gamma_i - \mu\beta \geq 0, \label{const:lin2} \\ & & \delta\in\Delta, \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}.
\nonumber \end{eqnarray} \begin{corollary} Let $\alpha_j,\beta,\gamma_i \in \mathbb{R}$ and $a_j,b,c_i \in \mathbb{R}^n$ for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$. Assume that $b^T x + \beta > 0$ for every feasible point $x$ of $(\mathcal P^{l})$. Then, $$ \inf(\mathcal P^{l}) = \max(\mathcal D^{l}).$$ \end{corollary} \begin{proof} By applying Corollary \ref{strong:4}, we get that $\inf(\mathcal P^{l})$ equals \begin{equation*} \max\limits_{\substack{\mu\in\mathbb{R} \\ \delta\in\Delta, \lambda\in\mathbb{R}^m_+}} \left\{ \mu : \sum\limits_{j=1}^r \delta_j \left( a_j^T x + \alpha_j \right) + \sum\limits_{i=1}^m\lambda_i \left(c_i^T x + \gamma_i\right) - \mu \left( b^T x + \beta\right) \in \Sigma^2_2 \right\}. \end{equation*} Since the sum of squares constraint in the above dual problem is equivalent to \eqref{const:lin1} and \eqref{const:lin2}, we conclude $\inf(\mathcal P^{l}) = \max(\mathcal D^{l})$. \end{proof} If a minimizer $x^*$ of $(\mathcal P)$ is known, then the Slater condition in Theorem \ref{strong:frac} can be replaced by a weaker condition in order to derive strong duality between $(\mathcal P)$ and $(\mathcal D)$. \begin{theorem} \label{strong:frac:min} Let $p_j$, $g_i$ and $-q$ be SOS-convex polynomials for all $j\in \mathbb{N}_r$ and $i\in \mathbb{N}_m$, such that $p_j(x)\geq 0$ and $q(x) > 0 $ for all $x\in \mathcal F:= \{ x\in\mathbb{R}^n : g_i(x) \leq 0, i\in \mathbb{N}_m\} \neq \emptyset$. Let $x^*\in\mathcal F$ be an optimal solution of $(\mathcal P)$ and assume that the normal cone condition for $\mathcal F$ at $x^*$ holds. Then, $\min(\mathcal P) = \max(\mathcal D).$ \end{theorem} \begin{proof} Let $\mu^* := \min(\mathcal P) \in \mathbb{R}_{+}$. Note that $(\mathcal P)$ has optimal solution $x^*$ with optimal value $\mu^*$ if and only if $x^*$ is an optimal solution of $(P_{\mu^*})$ with optimal value $0$ (cf. \cite[Lemma 2.3]{Lai99}), where $(P_{\mu^*})$ is stated in \eqref{aux:problem}.
Since, for each $j\in\mathbb{N}_r$, $p_j - \mu^* q$ is a SOS-convex polynomial and the normal cone condition for $\mathcal F$ at $x^*$ holds, by Theorem \ref{strong:03} we have that $\min(P_{\mu^*}) = \max(D_{\mu^*})$ where $(D_{\mu^*})$ has been stated in \eqref{ax:dl}. As $\max(D_{\mu^*}) = 0$, there exist $\bar{\delta}\in\Delta$ and $\bar{\lambda}\in \mathbb{R}^m_+$ such that $$ \sum\limits_{j=1}^{r}{\bar{\delta}_j p_j} + \sum_{i=1}^{m}{\bar{\lambda}_i g_i} -\mu^*q \in \Sigma^2_d. $$ Therefore, $(\bar{\delta},\bar{\lambda},\mu^*)$ is a feasible point of $(\mathcal D)$, so $\mu^* \leq \sup(\mathcal D) $. Since weak duality always holds, we conclude $\min(\mathcal P) = \max(\mathcal D)$. \end{proof} \section*{Appendix: SDP Representations of Dual Programs} \label{APP} Finally, for the sake of completeness, we show how our dual problem $(\mathcal D)$ given in \eqref{dualfrac} can be represented by a semidefinite linear programming problem. To this end, let us recall some basic facts on the relationship between sums of squares polynomials and semidefinite programming problems. We denote by $\mathbb{S}^{n}$ the space of symmetric $n\times n$ matrices. For any $A,B\in \mathbb{S}^{n}$, we write $A \succeq 0$ if and only if $A$ is positive semidefinite, and $\left\langle A, B\right\rangle$ stands for $\operatorname{trace}(AB)$. Let $\mathbb{S}^{n}_{+}:=\{ A \in \mathbb{S}^{n} : A\succeq 0\}$ be the closed convex cone of positive semidefinite $n\times n$ (symmetric) matrices. The space of all real polynomials on $\mathbb{R}^n$ of degree at most $d$ is denoted by $\mathbb{R}_d[x_1,\ldots,x_n]$ and its canonical basis is given by \begin{equation*} y(x) \equiv (x_{\alpha})_{\left|\alpha\right| \leq d} := (1,x_1,x_2,\ldots,x_n,x_1^2,x_1x_2,\ldots,x_2^2,\ldots,x_n^2,\ldots,x_1^{d},\ldots,x_n^d)^T, \end{equation*} which has dimension $e(d,n):=\binom{n+d}{d}$, and $\alpha \in \mathbb{N}^n$ is a multi-index such that $\left|\alpha\right|:=\sum_{i=1}^{n}{\alpha_i}$.
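The dimension count $e(d,n)=\binom{n+d}{d}$ can be confirmed by enumerating the multi-indices directly; a short Python sketch (the helper name is ours):

```python
# Enumerate all multi-indices alpha in N^n with |alpha| <= d and compare
# the count with the binomial formula e(d, n) = C(n + d, d).
from itertools import product
from math import comb

def monomial_exponents(n, d):
    """Exponent vectors alpha = (alpha_1, ..., alpha_n) with sum <= d."""
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

sizes = {(n, d): len(monomial_exponents(n, d))
         for n in (1, 2, 3) for d in (2, 3, 4)}
# e.g. n = 2, d = 2 gives the 6 monomials 1, x1, x2, x1^2, x1*x2, x2^2.
```

Each count matches the formula; for instance, with $n=d=2$ one gets $\binom{4}{2}=6$ basis monomials.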
Let $\mathcal N:=\{ \alpha\in \mathbb{N}^n : \left|\alpha\right| \leq d \}$. Thus, if $f$ is a polynomial on $\mathbb{R}^n$ with degree at most $d$, one has $$ f(x) = \sum\limits_{\alpha\in \mathcal{N}} f_{\alpha}x_{\alpha}. $$ Assume that $d$ is an even number, and let $k:=d/2$. Then, according to \cite[Proposition 2.1]{Lasserre}, $f$ is a sum of squares polynomial if and only if there exists $Q \in \mathbb{S}_{+}^{e(k,n)}$ such that $f(x)= y(x)^T Q \,y(x)$. By writing $y(x) y(x)^T = \sum_{\alpha\in \mathcal{N}} B_{\alpha}x_{\alpha}$ for appropriate matrices $(B_{\alpha})\subset \mathbb{S}^{e(k,n)}$, one has that $f$ is a sum of squares polynomial if and only if there exists $Q \in \mathbb{S}_{+}^{e(k,n)}$ such that $\left\langle Q,B_{\alpha}\right\rangle = f_{\alpha}$ for all $\alpha \in \mathcal{N}$. Using the above characterization, we see that our dual problem $(\mathcal D)$ can be equivalently rewritten as the following semidefinite programming problem. \begin{equation*} \begin{array}{ccl} (\mathcal{SD}) & \sup & \mu \\ & \text{s.t.} & \sum\limits_{j=1}^{r}{\delta_j (p_j)_{\alpha}} + \sum\limits_{i=1}^{m}{\lambda_i (g_i)_{\alpha}} - \mu q_{\alpha} = \left\langle Q,B_{\alpha}\right\rangle \qquad \forall \alpha \in \mathcal{N}, \\ & & \sum\limits_{j=1}^{r}{\delta_j} = 1, \\ & & \delta\in\mathbb{R}^r_+, \lambda\in\mathbb{R}^m_+, \mu\in\mathbb{R}, Q \in \mathbb{S}^{e(k,n)}_{+}. \end{array} \end{equation*} Letting $q_{\alpha}=1$ for $\alpha=(0,\ldots,0)$ and $q_{\alpha}=0$ otherwise, we get the SDP representation for problem $(D)$ in \eqref{dual}. \end{document}
\begin{document} \begin{abstract} We prove that the Khovanov-Lauda-Rouquier algebras $R_{\alpha}$ of finite type are (graded) affine cellular in the sense of Koenig and Xi. In fact, we establish a stronger property, namely that the affine cell ideals in $R_{\alpha}$ are generated by idempotents. This in particular implies the (known) result that the global dimension of $R_{\alpha}$ is finite. \end{abstract} \title[Affine cellularity of finite type KLR algebras]{Affine Cellularity of Khovanov-Lauda-Rouquier Algebras of Finite Types} \maketitle \section{Introduction}\label{SIntro} The goal of this paper is to establish (graded) affine cellularity in the sense of Koenig and Xi \cite{KoXi} for the Khovanov-Lauda-Rouquier algebras $R_{\alpha}$ of finite Lie type. In fact, we construct a chain of affine cell ideals in $R_{\alpha}$ which are generated by idempotents. This stronger property is analogous to quasi-heredity for finite dimensional algebras, and by a general result of Koenig and Xi \cite[Theorem 4.4]{KoXi}, it also implies finiteness of the global dimension of $R_{\alpha}$. Thus we obtain a new proof of (a slightly stronger version of) a recent result of Kato \cite{Kato} and McNamara \cite{McN} (see also \cite{BKM}). As another application, one gets a theory of standard and proper standard modules, cf. \cite{Kato},\cite{BKM}. It would be interesting to apply this paper to prove the conjectural (graded) cellularity of cyclotomic KLR algebras of finite types. Our approach is independent of the homological results in \cite{McN}, \cite{Kato} and \cite{BKM} (which relies on \cite{McN}). The connection between the theory developed in \cite{BKM} and this paper is explained in \cite{KK}. This paper generalizes \cite{KLM}, where analogous results were obtained for finite type $A$. We now give a definition of (graded) affine cellular algebra from \cite[Definition 2.1]{KoXi}.
Throughout the paper, unless otherwise stated, we assume that all algebras are (${\mathbb Z}$)-graded, all ideals, subspaces, etc. are homogeneous, and all homomorphisms are homogeneous degree zero homomorphisms with respect to the given gradings. For this introduction, we fix a noetherian domain $k$ (later on it will be sufficient to work with $k={\mathbb Z}$). Let $A$ be a (graded) unital $k$-algebra with a $k$-anti-involution $\tau$. A (two-sided) ideal $J$ in $A$ is called an \emph{affine cell ideal} if the following conditions are satisfied: \begin{enumerate} \item $\tau(J) = J$; \item there exists an affine $k$-algebra $B$ with a $k$-involution ${\sigma}$ and a free $k$-module $V$ of finite rank such that $\Delta:=V \otimes_k B$ has an $A$-$B$-bimodule structure, with the right $B$-module structure induced by the regular right $B$-module structure on $B$; \item let $\Delta' := B \otimes_k V$ be the $B$-$A$-bimodule with left $B$-module structure induced by the regular left $B$-module structure on $B$ and right $A$-module structure defined by \begin{equation}\label{ERightCell} (b\otimes v)a = \operatorname{s}(\tau(a)(v \otimes b)), \end{equation} where $\operatorname{s}:V\otimes_k B\to B\otimes_k V,\ v\otimes b\mapsto b\otimes v$; then there is an $A$-$A$-bimodule isomorphism $\mu: J \to \Delta \otimes_B\Delta'$, such that the following diagram commutes: \begin{equation}\label{ECellCD} \xymatrix{ J \ar^-{\mu}[rr] \ar^{\tau}[d]&& \Delta \otimes_B\Delta' \ar^{v \otimes b \otimes b' \otimes w \mapsto w \otimes {\sigma}(b') \otimes {\sigma}(b) \otimes v }[d] \\ J \ar^-{\mu}[rr]&&\Delta \otimes_B\Delta'.} \end{equation} \end{enumerate} The algebra $A$ is called \emph{affine cellular} if there is a $k$-module decomposition $A= J_1' \oplus J_2' \oplus \cdots \oplus J_n'$ with $\tau(J_l')=J_l'$ for $1 \leq l
\leq n$, such that, setting $J_m:= \bigoplus_{l=1}^m J_l'$, we obtain an ideal filtration
$$0=J_0 \subset J_1 \subset J_2 \subset \cdots \subset J_n=A$$
so that each $J_m/J_{m-1}$ is an affine cell ideal of $A/J_{m-1}$.

To describe our main results we introduce some notation, referring the reader to the main body of the paper for details. Fix a Cartan datum of finite type, and denote by $\Phi_+ = \{\beta_1, \dots, \beta_N\}$ the set of positive roots, and by $Q_+$ the positive part of the root lattice. For ${\alpha} \in Q_+$ we have the KLR algebra $R_{\alpha}$ with standard idempotents $\{e(\boldsymbol{i})\mid\boldsymbol{i}\in{\langle I \rangle}_{\alpha}\}$. We denote by $\Pi({\alpha})$ the set of root partitions of ${\alpha}$. This set is partially ordered with respect to a certain bilexicographic order `$\leq$'. To any $\pi\in\Pi({\alpha})$ one associates a proper standard module $\bar\Delta(\pi)$ and a word $\boldsymbol{i}_\pi\in {\langle I \rangle}_{\alpha}$. We fix a distinguished vector $v_\pi^+ \in \bar\Delta(\pi)$, and choose a set $\mathfrak{B}_\pi \subseteq R_{\alpha}$ so that $\{bv_\pi^+ \mid b \in \mathfrak{B}_\pi\}$ is a basis of $\bar\Delta(\pi)$. We define polynomial subalgebras $\Lambda_\pi\subseteq R_{\alpha}$; these are isomorphic to tensor products of algebras of symmetric polynomials. We also explicitly define elements $\delta_\pi, D_\pi\in R_{\alpha}$ and set $e_\pi := D_\pi \delta_\pi$. Then we set
\begin{align*}
I_\pi' &:= \text{$k$-span}\{b e_\pi \Lambda_\pi D_\pi (b')^\tau \mid b,b' \in \mathfrak{B}_\pi\},
\end{align*}
$I_\pi := \sum_{\sigma \geq \pi}{I_{\sigma}'}$, and $I_{>\pi} := \sum_{\sigma > \pi}{I_{\sigma}'}$.
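Before turning to the main results, it may help to keep in mind the simplest instance of the definition above (a standard observation in the spirit of \cite{KoXi}, included here only for orientation):

```latex
% Take A = B = k[x] with \tau = \sigma = \mathrm{id} and V = k (free of rank 1).
% Then J = A is itself an affine cell ideal: \Delta = V\otimes_k B \cong k[x]
% and \Delta' = B\otimes_k V \cong k[x], and multiplication gives an
% A-A-bimodule isomorphism
\mu: J \stackrel{\sim}{\longrightarrow} \Delta\otimes_B\Delta' \cong k[x],
% under which the diagram (ECellCD) commutes because A is commutative.
% Thus k[x] is affine cellular with a one-step cell chain 0 \subset J_1 = A.
```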
Our main results are as follows.

\noindent
{\bf Main Theorem.} {\em
The algebra $R_{\alpha}$ is graded affine cellular with cell chain given by the ideals $\{I_\pi\mid \pi \in \Pi({\alpha})\}$. Moreover, for a fixed $\pi\in \Pi({\alpha})$, set $\bar R_{\alpha} := R_{\alpha} / I_{>\pi}$ and $\bar h:=h+I_{>\pi}$ for any $h\in R_{\alpha}$. We have:
\begin{enumerate}
\item[{\rm (i)}] $I_\pi = \sum_{\sigma \geq \pi} R_{\alpha} e(\boldsymbol{i}_{\sigma}) R_{\alpha}$;
\item[{\rm (ii)}] $\bar e_\pi$ is an idempotent in $\bar R_{\alpha}$;
\item[{\rm (iii)}] the map $\Lambda_\pi \to \bar e_\pi \bar R_{\alpha} \bar e_\pi,\ f\mapsto \bar e_\pi \bar f\bar e_\pi$ is an isomorphism of graded algebras;
\item[{\rm (iv)}] $\bar R_{\alpha} \bar e_\pi$ is a free right $\bar e_\pi \bar R_{\alpha} \bar e_\pi$-module with basis $\{\bar b \bar e_\pi \mid b \in \mathfrak{B}_\pi\}$;
\item[{\rm (v)}] $\bar e_\pi \bar R_{\alpha}$ is a free left $\bar e_\pi \bar R_{\alpha} \bar e_\pi$-module with basis $\{\bar e_\pi \bar D_\pi \bar b^\tau \mid b \in \mathfrak{B}_\pi\}$;
\item[{\rm (vi)}] multiplication provides an isomorphism
\[
\bar R_{\alpha} \bar e_\pi \otimes_{\bar e_\pi \bar R_{\alpha} \bar e_\pi} \bar e_\pi \bar R_{\alpha} \stackrel{\sim}{\longrightarrow} \bar R_{\alpha} \bar e_\pi \bar R_{\alpha};
\]
\item[{\rm (vii)}] $\bar R_{\alpha} \bar e_\pi \bar R_{\alpha} = I_\pi / I_{>\pi}$.
\end{enumerate}
}

Part (vii) of the Main Theorem shows that each affine cell ideal $I_\pi/I_{>\pi}$ in $R_{\alpha}/I_{>\pi}$ is generated by an idempotent. This, together with the fact that each algebra $\Lambda_\pi$ is a polynomial algebra, is enough to invoke \cite[Theorem 4.4]{KoXi} to get:

\noindent
{\bf Corollary.} {\em
If the ground ring $k$ has finite global dimension, then the algebra $R_{\alpha}$ has finite global dimension.
}

This seems to be a slight generalization of \cite{Kato},\cite{McN},\cite{BKM} in two ways: \cite{Kato} assumes that $k$ is a field of characteristic zero (and the Lie type is simply-laced), while \cite{McN},\cite{BKM} assume that $k$ is a field; moreover, \cite{Kato},\cite{McN},\cite{BKM} deal with categories of graded modules only, while our corollary holds for the algebra $R_{\alpha}$ even as an ungraded algebra.

The paper is organized as follows. Section 2 contains preliminaries needed for the rest of the paper. The first subsection fixes the general conventions that will be used. Subsection~\ref{SLie} reviews the Lie theoretic notation that we employ. We move on in subsection~\ref{SSKLR} to the definition and basic properties of Khovanov-Lauda-Rouquier (KLR) algebras. The next two subsections are devoted to recalling results on the representation theory of KLR algebras. Then, in subsection~\ref{SSQG}, we introduce our notation for quantum groups and recall some well-known basis theorems. The next subsection is devoted to the connection between KLR algebras and quantum groups, namely the categorification theorems. Finally, subsection~\ref{SSDF} contains an easy direct proof of a graded dimension formula for the KLR algebras, cf. \cite[Corollary 3.15]{BKM}.

Section 3 is devoted to constructing a basis of the KLR algebra that is amenable to checking affine cellularity. We begin in subsection~\ref{SSSSWId} by choosing some special weight idempotents and establishing their basic properties. Subsection~\ref{SNota} introduces the notation needed to define our affine cellular structure; it also contains the crucial Hypothesis~\ref{HProp}. Next, in subsection~\ref{SSPower}, we construct an affine cellular basis in the special case corresponding to a root partition that is a power of a single root. Finally, we use this in the last subsection to obtain our affine cellular basis in full generality.
In Section 4 we show how the affine cellular basis is used to prove that the KLR algebras are affine cellular. Finally, in Section 5 we verify Hypothesis~\ref{HProp} for all positive roots in all finite types. We begin in subsection~\ref{SSHomog} by recalling some results concerning homogeneous representations. In subsection~\ref{SSSLO} we recall the definition of special Lyndon orders and Lyndon words, which serve as the special weights of subsection~\ref{SSSSWId}. The next subsection is devoted to verifying Hypothesis~\ref{HProp} in the special case when the cuspidal representation corresponding to the positive root is homogeneous. We then employ this in subsection~\ref{SSADE} to show that the hypothesis holds in simply-laced types. Finally, in subsection~\ref{SSBCFG} we verify the hypothesis by hand in the non-symmetric types.

\section{Preliminaries and a dimension formula}
In this section we set up the theory of KLR algebras and their connection to quantum groups, following mainly \cite{KL1} and also \cite{KR}. Only subsection~\ref{SSDF} contains some new material.

\subsection{Generalities}
Throughout the paper we work over the ground ring ${\mathcal O}$, which is assumed to be either ${\mathbb Z}$ or an arbitrary field $F$. Most of the time we work over $F$ and then deduce the corresponding result for ${\mathbb Z}$ using the following standard lemma.

\begin{Lemma}\label{PFieldToZ}
Let $M$ be a finitely generated ${\mathbb Z}$-module, and $\{x_{\alpha}\}_{{\alpha} \in A}$ a subset of $M$. Then $\{x_{\alpha}\}$ is a spanning set (resp.\ basis) of $M$ if and only if $\{1_F \otimes x_{\alpha}\}$ is a spanning set (resp.\ basis) of $F \otimes_{\mathbb Z} M$ for every field $F$.
\end{Lemma}

Let $q$ be an indeterminate, ${\mathbb Q}(q)$ the field of rational functions in $q$, and ${\mathcal A}:={\mathbb Z}[q,q^{-1}]\subseteq {\mathbb Q}(q)$.
Let $\bar{\ }:{\mathbb Q}(q)\to {\mathbb Q}(q)$ be the ${\mathbb Q}$-algebra involution with $\bar q=q^{-1}$, referred to as the {\em bar-involution}.

For a graded vector space $V=\bigoplus_{n\in {\mathbb Z}} V_n$ with finite dimensional graded components, its {\em graded dimension} is $\dim_q V:=\sum_{n \in {\mathbb Z}} (\dim V_n)q^n\in{\mathbb Z}[[q,q^{-1}]]$.

For any graded $F$-algebra $H$ we denote by $\operatorname{Mod}(H)$ the abelian category of all graded left $H$-modules, with morphisms being {\em degree-preserving} module homomorphisms, which we denote by ${\operatorname{hom}}$. Let $\operatorname{mod}(H)$ denote the abelian subcategory of all {\em finite dimensional} graded $H$-modules and $\operatorname{Proj}(H)$ denote the additive subcategory of all {\em finitely generated projective} graded $H$-modules. Denote the corresponding Grothendieck groups by $[\operatorname{mod}(H)]$ and $[\operatorname{Proj}(H)]$, respectively. These Grothendieck groups are ${\mathcal A}$-modules via $q^m[M]:=[M\langle m\rangle]$, where $M\langle m\rangle$ denotes the module obtained by shifting the grading up by $m$: $M\langle m\rangle_n:=M_{n-m}$. For $n \in {\mathbb Z}$, let $\operatorname{Hom}_H(M, N)_n := {\operatorname{hom}}_H(M \langle n \rangle, N)$ denote the space of homomorphisms of degree $n$, and set $\operatorname{Hom}_H(M,N) := \bigoplus_{n \in {\mathbb Z}} \operatorname{Hom}_H(M,N)_n$.

\subsection{Lie theoretic data}\label{SLie}
A {\em Cartan datum} is a pair $(I,\cdot)$ consisting of a set $I$ and a ${\mathbb Z}$-valued symmetric bilinear form $i,j\mapsto i\cdot j$ on the free abelian group ${\mathbb Z}[I]$ such that $i\cdot i\in \{2,4,6,\dots\}$ for all $i\in I$ and $2(i\cdot j)/(i\cdot i)\in\{0,-1,-2,\dots\}$ for all $i\neq j$ in $I$. Set $a_{ij}:=2(i\cdot j)/(i\cdot i)$ for $i,j\in I$ and define the {\em Cartan matrix} $A:=(a_{ij})_{i,j\in I}$.
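For concreteness, here is the smallest simply-laced rank-two example of a Cartan datum, which follows directly from the definitions above:

```latex
% Type A_2: I = \{1,2\}, with symmetric bilinear form
%   1\cdot 1 = 2\cdot 2 = 2, \qquad 1\cdot 2 = 2\cdot 1 = -1.
% Then a_{11} = a_{22} = 2 and a_{12} = a_{21} = -1, so
A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},
% the Cartan matrix of type A_2.
```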
Throughout the paper, unless otherwise stated, we assume that $A$ has {\em finite type}, see \cite[\S 4]{Kac}. We have simple roots $\{{\alpha}_i\mid i\in I\}$, and we identify ${\alpha}_i$ with $i$. Let $Q_+ := \bigoplus_{i \in I} {\mathbb Z}_{\geq 0} {\alpha}_i$. For ${\alpha} \in Q_+$, we write ${\operatorname{ht}}({\alpha})$ for the sum of its coefficients when expanded in terms of the ${\alpha}_i$'s. Denote by $\Phi_+\subset Q_+$ the set of {\em positive} roots, cf. \cite[\S 1.3]{Kac}, and by $W$ the corresponding Weyl group. A total order on $\Phi_+$ is called {\em convex} if ${\beta},{\gamma},{\beta}+{\gamma}\in \Phi_+$ and ${\beta}<{\gamma}$ imply ${\beta}<{\beta}+{\gamma}<{\gamma}$.

Given ${\beta}\in {\mathbb Z}[I]$, denote
\begin{align*}
q_{\beta}:=q^{({\beta}\cdot {\beta})/2},\quad [n]_{\beta}:=(q_{\beta}^{n}-q_{\beta}^{-n})/(q_{\beta}-q_{\beta}^{-1}),\quad [n]^!_{\beta}:=[n]_{\beta}[n-1]_{\beta}\dots[1]_{\beta}.
\end{align*}
In particular, for $i\in I$, we have $q_i,[n]_i,[n]_i^!$. Let $A$ be a $Q_+$-graded ${\mathbb Q}(q)$-algebra, $\theta\in A_{{\alpha}}$ for ${\alpha}\in Q_+$, and $n\in{\mathbb Z}_{\geq 0}$. We use the standard notation for quantum divided powers: $\theta^{(n)}:= \theta^n/[n]_{\alpha}^!$.

Denote by ${\langle I \rangle}:=\bigsqcup_{d\geq 0} I^d$ the set of all tuples $\boldsymbol{i}=i_1\dots i_d$ of elements of $I$, which we refer to as {\em words}. We consider ${\langle I \rangle}$ as a monoid under the concatenation product. If $\boldsymbol{i}\in{\langle I \rangle}$, we can write it in the form $\boldsymbol{i}=j_1^{m_1}\dots j_r^{m_r}$ for $j_1,\dots,j_r\in I$ such that $j_s\neq j_{s+1}$ for all $s=1,2,\dots,r-1$. We then denote
\begin{equation}\label{EIFact}
[\boldsymbol{i}]!:=[m_1]^!_{j_1}\dots[m_r]^!_{j_r}.
\end{equation}
For $\boldsymbol{i}=i_1\dots i_d$ set $|\boldsymbol{i}|:={\alpha}_{i_1}+\dots+{\alpha}_{i_d}\in Q_+$. The symmetric group $S_d$ with simple transpositions $s_1,\dots,s_{d-1}$ acts on $I^d$ on the left by place permutations. The $S_d$-orbits on $I^d$ are the sets ${\langle I \rangle}_{\alpha} := \{\boldsymbol{i}\in I^d \:|\:|\boldsymbol{i}| = {\alpha}\}$ parametrized by the elements ${\alpha} \in Q_+$ of height $d$.

\subsection{Khovanov-Lauda-Rouquier algebras}\label{SSKLR}
Let $A$ be a Cartan matrix. Choose signs ${\varepsilon}_{ij}$ for all $i,j \in I$ with $a_{ij} < 0$ so that ${\varepsilon}_{ij}{\varepsilon}_{ji} = -1$, and define the polynomials $\{Q_{ij}(u,v)\in F[u,v]\mid i,j\in I\}$ by
\begin{equation}\label{EArun}
Q_{ij}(u,v):=
\left\{
\begin{array}{ll}
0 &\hbox{if $i=j$;}\\
1 &\hbox{if $a_{ij}=0$;}\\
{\varepsilon}_{ij}(u^{-a_{ij}}-v^{-a_{ji}}) &\hbox{if $a_{ij}<0$.}
\end{array}
\right.
\end{equation}
In addition, fix ${\alpha}\in Q_+$ of height $d$.
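Before introducing the KLR algebra itself, we note that the quantum integers and factorials of subsection~\ref{SLie} are concrete Laurent polynomials that can be computed with directly. The following sketch (our illustration, not part of the development) models elements of ${\mathbb Z}[q,q^{-1}]$ as dictionaries from exponents to coefficients and checks the defining identity $[n]_{\beta}(q_{\beta}-q_{\beta}^{-1})=q_{\beta}^{n}-q_{\beta}^{-n}$.

```python
# Laurent polynomials in q as {exponent: coefficient} dicts.
def lmul(f, g):
    """Multiply two Laurent polynomials."""
    h = {}
    for a, x in f.items():
        for b, y in g.items():
            h[a + b] = h.get(a + b, 0) + x * y
    return {e: c for e, c in h.items() if c != 0}

def qint(n, d=1):
    """[n]_beta = q_b^{n-1} + q_b^{n-3} + ... + q_b^{1-n}, where q_b = q^d
    and d = (beta.beta)/2."""
    return {d * (n - 1 - 2 * k): 1 for k in range(n)}

def qfact(n, d=1):
    """[n]!_beta = [n]_beta [n-1]_beta ... [1]_beta."""
    f = {0: 1}
    for m in range(1, n + 1):
        f = lmul(f, qint(m, d))
    return f

# Check [n]_beta (q_b - q_b^{-1}) == q_b^n - q_b^{-n} for small n, with q_b = q.
for n in range(1, 6):
    assert lmul(qint(n), {1: 1, -1: -1}) == {n: 1, -n: -1}

print(qint(3))    # [3] = q^2 + 1 + q^{-2}
print(qfact(2))   # [2]! = [2][1] = q + q^{-1}
```

For a word $\boldsymbol{i}=j_1^{m_1}\dots j_r^{m_r}$, the factorial \eqref{EIFact} is just the product of the corresponding `qfact` values, one factor per run of equal letters.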
Let $R_{\alpha}=R_{\alpha}(A,{\mathcal O})$ be the associative graded unital ${\mathcal O}$-algebra given by the generators
\begin{equation*}\label{EKLGens}
\{e(\boldsymbol{i})\mid \boldsymbol{i}\in {\langle I \rangle}_{\alpha}\}\cup\{y_1,\dots,y_{d}\}\cup\{\psi_1, \dots,\psi_{d-1}\}
\end{equation*}
and the following relations for all $\boldsymbol{i},\boldsymbol{j}\in {\langle I \rangle}_{\alpha}$ and all admissible $r,t$:
\begin{equation}
e(\boldsymbol{i}) e(\boldsymbol{j}) = \delta_{\boldsymbol{i},\boldsymbol{j}} e(\boldsymbol{i}), \quad{\textstyle\sum_{\boldsymbol{i} \in {\langle I \rangle}_{\alpha}}} e(\boldsymbol{i}) = 1;\label{R1}
\end{equation}
\begin{equation}\label{R2PsiY}
y_r e(\boldsymbol{i}) = e(\boldsymbol{i}) y_r;\qquad y_r y_t = y_t y_r;
\end{equation}
\begin{equation}
\psi_r e(\boldsymbol{i}) = e(s_r\boldsymbol{i}) \psi_r;\label{R2PsiE}
\end{equation}
\begin{equation}\label{R3YPsi}
y_r \psi_s = \psi_s y_r\qquad (r \neq s,s+1);
\end{equation}
\begin{equation}
(y_t\psi_r-\psi_r y_{s_r(t)})e(\boldsymbol{i}) = \delta_{i_r,i_{r+1}}(\delta_{t,r+1}-\delta_{t,r})e(\boldsymbol{i});
\label{R6}
\end{equation}
\begin{equation}
\psi_r^2e(\boldsymbol{i}) = Q_{i_r,i_{r+1}}(y_r,y_{r+1})e(\boldsymbol{i});
\label{R4}
\end{equation}
\begin{equation}
\psi_r \psi_t = \psi_t \psi_r\qquad (|r-t|>1);\label{R3Psi}
\end{equation}
\begin{equation}
\begin{split}
&(\psi_{r+1}\psi_{r} \psi_{r+1}-\psi_{r} \psi_{r+1} \psi_{r}) e(\boldsymbol{i}) \\
= &\ \delta_{i_r,i_{r+2}}\frac{Q_{i_r,i_{r+1}}(y_{r+2},y_{r+1})-Q_{i_r,i_{r+1}}(y_r,y_{r+1})}{y_{r+2}-y_r}e(\boldsymbol{i}).
\end{split}
\label{R7}
\end{equation}
The {\em grading} on $R_{\alpha}$ is defined by setting:
$$
\deg(e(\boldsymbol{i}))=0,\quad \deg(y_re(\boldsymbol{i}))=i_r\cdot i_r,\quad\deg(\psi_r e(\boldsymbol{i}))=-i_r\cdot i_{r+1}.
$$
In this paper {\em grading} always means {\em ${\mathbb Z}$-grading}, ideals are assumed to be homogeneous, and modules are assumed to be graded, unless otherwise stated.

It is pointed out in \cite{KL2} and \cite[\S3.2.4]{R} that, up to isomorphism, the graded ${\mathcal O}$-algebra $R_{\alpha}$ depends only on the Cartan datum and ${\alpha}$. We refer to the algebra $R_{\alpha}$ as an {\em (affine) Khovanov-Lauda-Rouquier algebra}. It is convenient to consider the direct sum of algebras $R:=\bigoplus_{{\alpha}\in Q_+} R_{\alpha}$. Note that $R$ is non-unital, but it is locally unital since each $R_{\alpha}$ is unital.

The algebra $R_{\alpha}$ possesses a graded anti-automorphism
\begin{equation}\label{star}
\tau:R_{\alpha} \rightarrow R_{\alpha},\ x \mapsto x^\tau,
\end{equation}
which is the identity on the generators. For each element $w\in S_d$ fix a reduced expression $w=s_{r_1}\dots s_{r_m}$ and set $\psi_w:=\psi_{r_1}\dots \psi_{r_m}$. In general, $\psi_w$ depends on the choice of the reduced expression of $w$.

\begin{Theorem}\label{TBasis}{\rm \cite[Theorem 2.5]{KL1}, \cite[Theorem 3.7]{R}}
The following set is an ${\mathcal O}$-basis of $R_{\alpha}$:
$\{\psi_w y_1^{m_1}\dots y_d^{m_d}e(\boldsymbol{i})\mid w\in S_d,\ m_1,\dots,m_d\in{\mathbb Z}_{\geq 0}, \ \boldsymbol{i}\in {\langle I \rangle}_{\alpha}\}$.
\end{Theorem}

In view of the theorem, we have a polynomial subalgebra
\begin{equation}\label{EPol}
P_d={\mathcal O}[y_1,\dots,y_d]\subseteq R_{\alpha}.
\end{equation}
Let ${\gamma}_1,\dots,{\gamma}_l$ be elements of $Q_+$ with ${\gamma}_1+\dots+{\gamma}_l={\alpha}$.
Then we have a natural embedding
\begin{equation}\label{EIotaPi}
\iota_{{\gamma}_1,\dots,{\gamma}_l}:R_{{\gamma}_1}\otimes\dots\otimes R_{{\gamma}_l}{\hookrightarrow} R_{\alpha}
\end{equation}
of algebras, whose image is the {\em parabolic subalgebra} $R_{{\gamma}_1,\dots,{\gamma}_l}\subseteq R_{\alpha}$. This is not a unital subalgebra; the image of the identity element of $R_{{\gamma}_1}\otimes\dots\otimes R_{{\gamma}_l}$ is
$$
\textstyle 1_{{\gamma}_1,\dots,{\gamma}_l} = \sum_{\boldsymbol{i}^{(1)} \in {\langle I \rangle}_{{\gamma}_1},\dots, \boldsymbol{i}^{(l)} \in {\langle I \rangle}_{{\gamma}_l}} e(\boldsymbol{i}^{(1)}\dots\boldsymbol{i}^{(l)}).
$$
An important special case is where ${\alpha}=d{\alpha}_i$ is a multiple of a simple root, in which case $R_{d{\alpha}_i}$ is the $d^{th}$ nilHecke algebra $H_d$, generated by $\{y_1, \dots, y_d, \psi_1, \dots, \psi_{d-1}\}$ subject to the relations
\begin{align}
\psi_{r}^2 &= 0 \label{eq:HeckeRel1} \\
\psi_{r} \psi_{s} &= \psi_{s} \psi_{r} \qquad \textup{if $|r-s| > 1$} \label{eq:HeckeRel2} \\
\psi_{r} \psi_{r+1} \psi_{r} &= \psi_{r+1} \psi_{r} \psi_{r+1} \label{eq:HeckeRel3} \\
\psi_{r} y_{s} &= y_{s} \psi_{r} \qquad \textup{if $s\neq r,r+1$} \label{eq:HeckeRel4} \\
\psi_{r} y_{r+1} &= y_{r} \psi_{r} + 1 \label{eq:HeckeRel5} \\
y_{r+1} \psi_{r} &= \psi_{r} y_{r} + 1. \label{eq:HeckeRel6}
\end{align}
The grading is such that $\deg(y_r) = {\alpha}_i\cdot{\alpha}_i$ and $\deg(\psi_r) = -{\alpha}_i\cdot{\alpha}_i$. Note that here the elements $\psi_w$ do not depend on the choice of reduced decompositions. Let $w_0 \in S_d$ be the longest element, and define the following elements of $H_d$:
\[
\delta_d := y_2 y_3^2 \dots y_d^{d-1}, \quad e_d := \psi_{w_0} \delta_d.
\]
It is known that
\begin{equation}\label{EnH}
e_d \psi_{w_0} = \psi_{w_0},
\end{equation}
and in particular $e_d$ is an idempotent, see for example \cite[\S2.2]{KL1}. It is also known that the center $Z(H_d)$ is the algebra of symmetric polynomials ${\mathcal O}[y_1,\dots,y_d]^{S_d}$. The following result, which will be used in the proof of our main theorem, is a special case of it corresponding to ${\alpha}=d{\alpha}_i$.

\begin{Theorem}\label{TnilHeckeCellBasis} {\rm \cite[Theorem 4.16]{KLM}}
Let $X$ be an ${\mathcal O}$-basis of ${\mathcal O}[y_1,\dots,y_d]^{S_d}$ and let $\mathfrak{B}$ be a basis of ${\mathcal O}[y_1,\dots,y_d]$ as an ${\mathcal O}[y_1,\dots,y_d]^{S_d}$-module. Then $\{b e_d f \psi_{w_0} (b')^\tau\ |\ b,b' \in \mathfrak{B}, f \in X\}$ is an ${\mathcal O}$-basis of $H_d$.
\end{Theorem}

\subsection{Basic representation theory of $R_{\alpha}$}
By \cite{KL1}, every irreducible graded $R_{\alpha}$-module is finite dimensional, and there are finitely many irreducible $R_{\alpha}$-modules up to isomorphism and grading shift. For $\boldsymbol{i}\in {\langle I \rangle}_{\alpha}$ and $M\in\operatorname{Mod}(R_{\alpha})$, the {\em $\boldsymbol{i}$-word space} of $M$ is $M_{\boldsymbol{i}}:=e(\boldsymbol{i})M$. We have a decomposition of (graded) vector spaces $M=\bigoplus_{\boldsymbol{i}\in {\langle I \rangle}_{\alpha}}M_{\boldsymbol{i}}$. We say that $\boldsymbol{i}$ is a {\em word of $M$} if $M_{\boldsymbol{i}}\neq 0$. We identify in a natural way:
$$
[\operatorname{mod}(R)]=\bigoplus_{{\alpha}\in Q_+}[\operatorname{mod}(R_{\alpha})],\quad [\operatorname{Proj}(R)]=\bigoplus_{{\alpha}\in Q_+}[\operatorname{Proj}(R_{\alpha})].
$$
Recall the anti-automorphism $\tau$ from (\ref{star}).
This allows us to introduce a left $R_{\alpha}$-module structure on the graded dual of a finite dimensional $R_{\alpha}$-module $M$; the resulting left $R_{\alpha}$-module is denoted $M^\circledast$. On the other hand, given any left $R_{\alpha}$-module $M$, denote by $M^\tau$ the right $R_{\alpha}$-module with the action given by $mx=\tau(x)m$ for $x\in R_{\alpha},m\in M$. Following \cite[(14)]{KL2}, define the {\em Khovanov-Lauda pairing} to be the ${\mathcal A}$-linear pairing
$$(\cdot,\cdot) : [\operatorname{Proj}(R_{\alpha})]\times[\operatorname{Proj}(R_{\alpha})] \rightarrow {\mathcal A}\cdot \prod_{i\in I}\prod_{a=1}^{m_i}\frac{1}{(1-q_i^{2a})}
$$
such that $([P],[Q]) = \dim_q(P^\tau\otimes_{R_{\alpha}} Q)$, where we write ${\alpha}=\sum_{i\in I}m_i{\alpha}_i$.

Let ${\alpha},{\beta}\in Q_+$. Recalling the isomorphism $\iota_{{\alpha},{\beta}}:R_{\alpha}\otimes R_{\beta}\to R_{{\alpha},{\beta}}\subseteq R_{{\alpha}+{\beta}}$, consider the functors
\begin{align*}
{\operatorname{Ind}}_{{\alpha},{\beta}} &:= R_{{\alpha}+{\beta}} 1_{{\alpha},{\beta}} \otimes_{R_{{\alpha},{\beta}}} ?:\operatorname{Mod}(R_{{\alpha},{\beta}}) \rightarrow \operatorname{Mod}(R_{{\alpha}+{\beta}}),\\
\operatorname{Res}_{{\alpha},{\beta}} &:= 1_{{\alpha},{\beta}} R_{{\alpha}+{\beta}} \otimes_{R_{{\alpha}+{\beta}}} ?:\operatorname{Mod}(R_{{\alpha}+{\beta}})\rightarrow \operatorname{Mod}(R_{{\alpha},{\beta}}).
\end{align*}
For $M\in\operatorname{mod}(R_{\alpha})$ and $N\in \operatorname{mod}(R_{\beta})$, we denote $M\circ N:={\operatorname{Ind}}_{{\alpha},{\beta}} (M\boxtimes N)$. The functors of induction define products on the Grothendieck groups $[\operatorname{mod}(R)]$ and $[\operatorname{Proj}(R)]$, and the functors of restriction define coproducts on $[\operatorname{mod}(R)]$ and $[\operatorname{Proj}(R)]$. These products and coproducts make $[\operatorname{mod}(R)]$ and $[\operatorname{Proj}(R)]$ into twisted unital and counital bialgebras \cite[Proposition 3.2]{KL1}.

Let $i\in I$ and $n\in{\mathbb Z}_{> 0}$.
As explained in \cite[$\S$2.2]{KL1}, the algebra $R_{n {\alpha}_i}$ has a representation on the polynomial ring $F[y_1,\dots,y_n]$ such that each $y_r$ acts as multiplication by $y_r$ and each $\psi_r$ acts as the divided difference operator
$\partial_r: f \mapsto \frac{{}^{s_r} f - f}{y_{r}-y_{r+1}}$.
Let $P(i^{(n)})$ denote this representation of $R_{n{\alpha}_i}$, viewed as a graded $R_{n{\alpha}_i}$-module with grading defined by
$$
\deg(y_1^{m_1} \cdots y_n^{m_n}) := ({\alpha}_i\cdot {\alpha}_i )(m_1+\cdots+m_n - n(n-1)/4).
$$
By \cite[$\S$2.2]{KL1}, the left regular $R_{n{\alpha}_i}$-module decomposes as $P(i^n) \cong [n]^!_i \cdot P(i^{(n)})$. In particular, $P(i^{(n)})$ is projective. Set
\begin{align*}
\theta_{i}^{(n)}&:= {\operatorname{Ind}}_{{\alpha},n {\alpha}_i} (? \boxtimes P(i^{(n)})):\operatorname{Mod}(R_{\alpha}) \rightarrow \operatorname{Mod}(R_{{\alpha}+n{\alpha}_i}),\\
(\theta_{i}^*)^{(n)}&:= \operatorname{Hom}_{R'_{n {\alpha}_i}}(P(i^{(n)}), ?): \operatorname{Mod}(R_{{\alpha}+n{\alpha}_i}) \rightarrow \operatorname{Mod}(R_{{\alpha}}),
\end{align*}
where $R'_{n{\alpha}_i} := 1 \otimes R_{n{\alpha}_i} \subseteq R_{{\alpha},n{\alpha}_i}$. These functors induce ${\mathcal A}$-linear maps on the corresponding Grothendieck groups:
$$\theta_i^{(n)}:[\operatorname{Proj}(R_{\alpha})] \rightarrow [\operatorname{Proj}(R_{{\alpha}+n{\alpha}_i})],\quad (\theta_{i}^*)^{(n)}:[\operatorname{mod}(R_{{\alpha}+n{\alpha}_i})] \rightarrow [\operatorname{mod}(R_{{\alpha}})].
$$

\subsection{Cuspidal and standard modules}\label{SSSMT}
Standard module theory for $R_{\alpha}$ has been developed in \cite{KR,HMM,BKOP,McN}. Here we follow the most general approach, that of McNamara \cite{McN}. Fix a reduced decomposition $w_0 = s_{i_1} \dots s_{i_N}$ of the longest element $w_0\in W$. This gives a convex total order on the positive roots
$$\Phi_+ = \{ {\beta}_1 > \dots > {\beta}_N\},$$
with ${\beta}_{N+1-k} = s_{i_1} \dots s_{i_{k-1}}({\alpha}_{i_k})$.
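The convex order coming from a reduced decomposition can be computed mechanically. The following sketch (our illustration, not part of the paper's setup) does this in type $A_2$, using the reflection action $s_i({\alpha}_j)={\alpha}_j-a_{ij}{\alpha}_i$ on the root lattice, and then enumerates the root partitions of a sample ${\alpha}$, anticipating the definition given below.

```python
# Type A_2 Cartan matrix; roots are integer vectors in the basis (alpha_1, alpha_2).
A = [[2, -1], [-1, 2]]

def s(i, v):
    """Simple reflection s_i acting on a root-lattice vector v; on a simple
    root alpha_j this is alpha_j - a_{ij} alpha_i."""
    c = sum(A[i][j] * v[j] for j in range(2))  # pairing <v, alpha_i^vee>
    w = list(v)
    w[i] -= c
    return tuple(w)

# Reduced decomposition w_0 = s_1 s_2 s_1 (0-based indices here).
word = [0, 1, 0]
N = len(word)
betas = [None] * N
for k in range(1, N + 1):
    v = (1, 0) if word[k - 1] == 0 else (0, 1)   # alpha_{i_k}
    for i in reversed(word[:k - 1]):             # apply s_{i_1} ... s_{i_{k-1}}
        v = s(i, v)
    betas[N - k] = v                             # beta_{N+1-k}
# beta_1 > beta_2 > beta_3, i.e. alpha_2 > alpha_1+alpha_2 > alpha_1:
print(betas)   # [(0, 1), (1, 1), (1, 0)]
# Convexity check: alpha_1 + alpha_2 sits strictly between alpha_1 and alpha_2.
assert betas.index((1, 1)) == 1

# Root partitions of alpha = alpha_1 + alpha_2: tuples (p_1, p_2, p_3) with
# p_1 beta_1 + p_2 beta_2 + p_3 beta_3 = alpha (coefficients at most 1 here).
alpha = (1, 1)
parts = [(p1, p2, p3)
         for p1 in range(2) for p2 in range(2) for p3 in range(2)
         if tuple(p1 * b1 + p2 * b2 + p3 * b3
                  for b1, b2, b3 in zip(*betas)) == alpha]
print(parts)   # [(0, 1, 0), (1, 0, 1)]
```

In this example $({\beta}_2)=({\alpha}_1+{\alpha}_2)$ and $({\beta}_1,{\beta}_3)=({\alpha}_2,{\alpha}_1)$ give the only two root partitions of ${\alpha}_1+{\alpha}_2$.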
To every positive root ${\beta}\in\Phi_+$ of the corresponding root system $\Phi$, one associates a {\em cuspidal module} $L({\beta})$. This irreducible module is uniquely determined by the following property: if $\delta,{\gamma}\in Q_+$ are non-zero elements such that ${\beta}=\delta+{\gamma}$ and $\operatorname{Res}_{\delta,{\gamma}}L({\beta})\neq 0$, then $\delta$ is a sum of positive roots less than ${\beta}$ and ${\gamma}$ is a sum of positive roots greater than ${\beta}$.

A standard argument involving the Mackey Theorem from \cite{KL1} and convexity, as in the proof of \cite[Lemma 2.11]{BKM}, yields:

\begin{Lemma}\label{LBKM}
Let ${\beta}\in\Phi_+$ and $a_1,\dots,a_n\in{\mathbb Z}_{\geq 0}$. All composition factors of $\operatorname{Res}_{a_1{\beta},\dots,a_n{\beta}}L({\beta})^{\circ (a_1+\dots+a_n)}$ are of the form $L({\beta})^{\circ a_1}\boxtimes \dots\boxtimes L({\beta})^{\circ a_n}$.
\end{Lemma}

Let ${\alpha}\in Q_+$. A tuple $\pi=(p_1, \dots, p_N) \in {\mathbb Z}_{\geq 0}^N$ is called a {\em root partition of ${\alpha}$} if $p_1{\beta}_1+\dots+p_N{\beta}_N={\alpha}$. We also use the notation $\pi=({\beta}_1^{p_1},\dots,{\beta}_N^{p_N})$. Denote by $\Pi({\alpha})$ the set of all root partitions of ${\alpha}$. For example, if ${\alpha}=n{\beta}$ for ${\beta}\in\Phi_+$, we have a root partition $({\beta}^n)\in\Pi({\alpha})$. The set $\Pi({\alpha})$ has two total orders, $\leq_l$ and $\leq_r$, defined as follows: $(p_1,\dots,p_N)<_l (s_1,\dots,s_N)$ (resp. $(p_1,\dots,p_N)<_r (s_1,\dots,s_N)$) if there exists $1\leq k\leq N$ such that $p_k<s_k$ and $p_m=s_m$ for all $m<k$ (resp. $m>k$). Finally, we have a {\em bilexicographic partial order}:
\begin{equation}\label{EBilex}
\pi\leq \sigma \Longleftrightarrow \pi\leq_l \sigma\ \text{and}\ \pi\leq_r\sigma\qquad(\pi,\sigma\in\Pi({\alpha})).
\end{equation}
The following lemma is implicit in \cite{McN}; see also \cite[Lemma 2.5]{BKM}.

\begin{Lemma}\label{LMinRP}
Given any $\pi \in \Pi(p{\beta})$, we have $\pi \geq ({\beta}^p)$.
\end{Lemma}

For a root partition $\pi=(p_1,\dots,p_N)\in\Pi({\alpha})$ as above, set $\operatorname{sh}(\pi):=\sum_{k=1}^N ({\beta}_k\cdot {\beta}_k)p_k(p_k-1)/4$, and define the corresponding {\em proper standard module}
\begin{equation}\label{EStand}
\bar\Delta(\pi):=L({\beta}_1)^{\circ p_1}\circ\dots\circ L({\beta}_N)^{\circ p_N}\langle\operatorname{sh}(\pi)\rangle.
\end{equation}
For $\pi=({\beta}_1^{p_1},\dots,{\beta}_N^{p_N})$, we denote
$$
\operatorname{Res}_\pi:=\operatorname{Res}_{p_1{\beta}_1,\dots,p_N{\beta}_N}.
$$

\begin{Theorem}\label{TStand} {\rm \cite{McN}}
For any convex order there exists a cuspidal system $\{L({\beta})\mid {\beta}\in \Phi_+\}$. Moreover:
\begin{enumerate}
\item[{\rm (i)}] For every $\pi\in\Pi({\alpha})$, the proper standard module $\bar\Delta(\pi)$ has an irreducible head; denote this irreducible module by $L(\pi)$.
\item[{\rm (ii)}] $\{L(\pi)\mid \pi\in \Pi({\alpha})\}$ is a complete and irredundant system of irreducible $R_{\alpha}$-modules up to isomorphism.
\item[{\rm (iii)}] $L(\pi)^\circledast\simeq L(\pi)$.
\item[{\rm (iv)}] $[\bar\Delta(\pi):L(\pi)]_q=1$, and $[\bar\Delta(\pi):L(\sigma)]_q\neq 0$ implies $\sigma\leq \pi$.
\item[{\rm (v)}] $L({\beta})^{\circ n}$ is irreducible for every ${\beta}\in \Phi_+$ and every $n\in{\mathbb Z}_{>0}$.
\item[{\rm (vi)}] $\operatorname{Res}_\pi\bar\Delta(\sigma)\neq 0$ implies $\sigma\geq \pi$, and $\operatorname{Res}_\pi\bar\Delta(\pi)\simeq L({\beta}_1)^{\circ p_1}\boxtimes\dots\boxtimes L({\beta}_N)^{\circ p_N}$.
\end{enumerate}
\end{Theorem}

Note that the algebra $R_{\alpha}(F)$ is defined over ${\mathbb Z}$, i.e.
$R_{\alpha}(F)\simeq R_{\alpha}({\mathbb Z})\otimes_{\mathbb Z} F$. We will use the corresponding indices when we need to distinguish between modules defined over different rings. The following result shows that cuspidal modules are also defined over ${\mathbb Z}$:

\begin{Lemma}\label{LCuspInt}
Let ${\beta}\in\Phi_+$, and let $v\in L({\beta})_{\mathbb Q}$ be a non-zero homogeneous vector. Then $L({\beta})_{\mathbb Z}:=R_{\beta}({\mathbb Z})\cdot v\subset L({\beta})_{\mathbb Q}$ is an $R_{\beta}({\mathbb Z})$-invariant lattice such that $L({\beta})_{\mathbb Z}\otimes_{\mathbb Z} F\simeq L({\beta})_F$ as $R_{\beta}(F)$-modules for any field $F$.
\end{Lemma}

\begin{proof}
Considering degrees, we see that $L({\beta})_{\mathbb Z}$ is finitely generated over ${\mathbb Z}$, hence it is a lattice in $L({\beta})_{\mathbb Q}$. Furthermore, $\operatorname{ch}_q (L({\beta})_{\mathbb Z}\otimes_{\mathbb Z} F)=\operatorname{ch}_q L({\beta})_{\mathbb Q}$, whence, by the definition of the cuspidal modules, all composition factors of $L({\beta})_{\mathbb Z}\otimes_{\mathbb Z} F$ are of the form $L({\beta})_F$. But, thanks to \cite[Lemma 4.7]{Kcusp}, a reduction modulo $p$ of any irreducible module over a KLR algebra always has a multiplicity-one composition factor.
\end{proof}

\subsection{Quantum groups}\label{SSQG}
Following \cite[Section 1.2]{Lu1}, we define the algebra $\pf$ to be the free ${\mathbb Q}(q)$-algebra with generators $\ptheta_i$ for $i \in I$ (our $q$ is Lusztig's $v^{-1}$, in keeping with the conventions of \cite{KL1}). This algebra is $Q_+$-graded by assigning the degree ${\alpha}_i$ to $\ptheta_i$ for each $i \in I$, so that $\pf = \bigoplus_{{\alpha} \in Q_+} \pf_{\alpha}$. If $x \in \pf_{\alpha}$, we write $|x| = {\alpha}$. For $\boldsymbol{i}=(i_1, \dots, i_n) \in {\langle I \rangle}$, write $\ptheta_{\boldsymbol{i}} := \ptheta_{i_1} \dots \ptheta_{i_n}$.
Then $ \{\ptheta_\text{\boldmath$i$} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\} $ is a basis for $\pf_{\alpha}$. In particular, each $\pf_{\alpha}$ is finite dimensional. Consider the graded dual $\pf^* := \operatorname{op}lus_{{\alpha} \in Q_+} (\pf_{\alpha})^*$. We consider words $\text{\boldmath$i$} \in {\langle I \rangle}$ as elements of $\pf^*$, so that $\text{\boldmath$i$}(\ptheta_\text{\boldmath$j$}) = {\delta}_{\text{\boldmath$i$}, \text{\boldmath$j$}}$. That is to say, $ \{\text{\boldmath$i$} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\} $ is the basis of $\pf_{\alpha}^*$ dual to the basis $\{\ptheta_\text{\boldmath$i$} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\}$. Let $\pAf$ be the ${\mathcal A}$-subalgebra of $\pf$ generated by $\{(\ptheta_i)^{(n)} \mid i \in I, n \in {\mathbb Z}_{{\mathfrak g}eq 0} \}$. This algebra is $Q_+$-graded by $\pAf = \operatorname{op}lus_{{\alpha} \in Q_+}\pAf_{\alpha}$, where $\pAf_{\alpha} := \pAf \cap \pf_{\alpha}$. Given $\text{\boldmath$i$} = j_1^{r_1}\dots j_m^{r_m} \in {\langle I \rangle}$ with $j_n {\mathfrak n}eq j_{n+1}$ for $1 \leq n < m$, denote $\ptheta_{(\text{\boldmath$i$})} := (\ptheta_{j_1})^{(r_1)} \dots (\ptheta_{j_m})^{(r_m)} \in \pAf$. Then {\beta}gin{equation}{\lambda}bel{EMBasisA} \{\ptheta_{(\text{\boldmath$i$})} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\} \end{equation} is an ${\mathcal A}$-basis of $\pAf_{\alpha}$. We also define $\pAf^* := \{x \in \pf^* \mid x(\pAf) \subseteq {\mathcal A}\}$, and assign it the induced $Q_+$-grading. For every ${\alpha} \in Q_+$, the ${\mathcal A}$-module $\pAf_{\alpha}^*$ is free with basis {\beta}gin{equation}{\lambda}bel{EDualMBasisA} \{[\text{\boldmath$i$}]! \text{\boldmath$i$} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\} \end{equation} dual to (\ref{EMBasisA}). 
There is a {\em twisted} multiplication on $\pf \otimes \pf$ given by $(x \otimes y)(z \otimes w) = q^{-|y|\cdot|z|} xz \otimes yw$ for homogeneous $x,y,z,w \in \pf$. Let $r\operatorname{col}on \pf \to \pf \otimes \pf$ be the algebra homomorphism determined by $r(\ptheta_i) = \ptheta_i \otimes 1 + 1 \otimes \ptheta_i$ for all $i \in I$. By \cite[Proposition 1.2.3]{Lu1} there is a unique symmetric bilinear form $(\cdot, \cdot)$ on $\pf$ such that $(1,1)=1$ and {\beta}gin{align*} (\ptheta_i, \ptheta_j) &= \mathbf{f}rac{{\delta}_{i,j}}{1-q_i^2} \quad\text{for } i,j \in I,\\ (xy, z) &= (x \otimes y, r(z)),\\ (x, yz) &= (r(x), y \otimes z), \end{align*} where the bilinear form on $\pf \otimes \pf$ is given by $(x \otimes x', y \otimes y') = (x,y)(x',y')$. Define $\mathbf{f}$ to be the quotient of $\pf$ by the radical of $(\cdot,\cdot)$. Denote the image of $\ptheta_i$ in $\mathbf{f}$ by $\theta_i$. The $Q_+$-grading on $\pf$ descends to a $Q_+$-grading on $\mathbf{f}$ with $|\theta_i| = {\alpha}_i$. Let ${\mathcal A}f$ be the ${\mathcal A}$-subalgebra of $\mathbf{f}$ generated by $\theta_i^{(n)}$ for $i \in I, n \in {\mathbb Z}_{{\mathfrak g}eq 0}$. This algebra is $Q_+$-graded by ${\mathcal A}f_{\alpha} := {\mathcal A}f \cap \mathbf{f}_{\alpha}$. Given $\text{\boldmath$i$} = j_1^{r_1}\dots j_m^{r_m} \in {\langle I \rangle}$ with $j_n {\mathfrak n}eq j_{n+1}$ for $1 \leq n < m$, denote $\theta_\text{\boldmath$i$} := \theta_{j_1}^{r_1} \dots \theta_{j_m}^{r_m}$ and {\beta}gin{equation}{\lambda}bel{EThetaI} \theta_{(\text{\boldmath$i$})} := \theta_{j_1}^{(r_1)} \dots \theta_{j_m}^{(r_m)} \in {\mathcal A}f. \end{equation} We recall the definition of the {\em PBW basis} of ${\mathcal A}f$ from \cite[Part VI]{Lu1}. Recall that a reduced decomposition $w_0 = s_{i_1} \dots s_{i_N}$ yields a total order on the positive roots ${\mathtt P}hi_+ = \{ {\beta}_1 > \dots > {\beta}_N\}$, with ${\beta}_{N+1-k} = s_{i_1} \dots s_{i_{k-1}}({\alpha}_{i_k})$.
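To fix ideas, we record this order in the smallest non-trivial case; the computation is immediate from the formula for ${\beta}_{N+1-k}$. {\beta}gin{Example} {\rm Let $C=A_2$ and take the reduced decomposition $w_0=s_1s_2s_1$, so that $N=3$. Then ${\beta}_3={\alpha}_1$, ${\beta}_2=s_1({\alpha}_2)={\alpha}_1+{\alpha}_2$ and ${\beta}_1=s_1s_2({\alpha}_1)={\alpha}_2$, so the resulting total order is ${\alpha}_2>{\alpha}_1+{\alpha}_2>{\alpha}_1$. } \end{Example}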
Now, embed ${\mathcal A}f$ into the upper half of the full quantum group via $\theta_i \mapsto E_i$ and take the braid group generators $T_i := T''_{i,+}$ from \cite[37.1.3]{Lu1}. For $1 \leq k \leq N$, we define \[ E_{{\beta}_{N+1-k}} := T_{i_1} \dots T_{i_{k-1}}(\theta_{i_k}) \in {\mathcal A}f_{{\beta}_{N+1-k}}. \] For a sequence $\pi = (p_1, \dots, p_N) \in {\mathbb Z}_{{\mathfrak g}eq 0}^N$, we set $$E_\pi := E_{{\beta}_1}^{(p_1)} \dots E_{{\beta}_N}^{(p_N)}$$ and also define {\beta}gin{equation}{\lambda}bel{ELPi} l_\pi:={\operatorname{pr}}od_{r=1}^N {\operatorname{pr}}od_{s=1}^{p_r} \mathbf{f}rac{1}{1-q_{{\beta}_r}^{2s}}. \end{equation} The next theorem gives a PBW basis of ${\mathcal A}f_{\alpha}$. {\beta}gin{Theorem}{\lambda}bel{TPBW} The set $ \{E_\pi \mid \pi \in {\mathtt P}i({\alpha})\} $ is an ${\mathcal A}$-basis of ${\mathcal A}f_{\alpha}$. Furthermore: \[ (E_\pi, E_{\sigma}) = {\delta}_{\pi, {\sigma}} l_\pi. \] \end{Theorem} {\beta}gin{proof} This follows from Corollary 41.1.4(b), Propositions 41.1.7, 38.2.3, and Lemma 1.4.4 of Lusztig \cite{Lu1}. \end{proof} Consider the graded dual $\mathbf{f}^* := \operatorname{op}lus_{{\alpha} \in Q_+} \mathbf{f}_{\alpha}^*$. The map $r^*: \mathbf{f}^* \otimes \mathbf{f}^* \to \mathbf{f}^*$ gives $\mathbf{f}^*$ the structure of an associative algebra. Let {\beta}gin{equation}{\lambda}bel{EIota} \kappa: \mathbf{f}^* {\hookrightarrow} \pf^* \end{equation} be the map dual to the quotient map $\xi:\pf {\twoheadrightarrow} \mathbf{f}$. Set ${\mathcal A}f^* := \{x \in \mathbf{f}^* \mid x({\mathcal A}f) \subseteq {\mathcal A}\}$ with the induced $Q_+$-grading. Given $i \in I$, we denote by $\theta_i^*: \mathbf{f}^* \to \mathbf{f}^*$ the dual map to the map $\mathbf{f} \to \mathbf{f},\ x \mapsto x \theta_i$. Then the divided power $(\theta_i^*)^{(n)}:\mathbf{f}^*\to\mathbf{f}^*$ is dual to the map $x \mapsto x \theta_i^{(n)}$. Clearly $(\theta_i^*)^{(n)}$ stabilizes ${\mathcal A}f^*$.
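The scalars $l_\pi$ will reappear in the dimension formula of \S\ref{SSDF}, so we illustrate (\ref{ELPi}) with a small computation. {\beta}gin{Example} {\rm Let $C=A_2$ with positive roots ordered ${\alpha}_2>{\alpha}_1+{\alpha}_2>{\alpha}_1$, so that $q_{\beta}=q$ for every ${\beta}\in{\mathtt P}hi_+$. For $\pi=({\alpha}_2,{\alpha}_1)$, i.e. $(p_1,p_2,p_3)=(1,0,1)$, formula (\ref{ELPi}) gives $l_\pi=\mathbf{f}rac{1}{(1-q^2)^2}$, while for $\pi=({\alpha}_1+{\alpha}_2)$, i.e. $(p_1,p_2,p_3)=(0,1,0)$, it gives $l_\pi=\mathbf{f}rac{1}{1-q^2}$. More generally, for $\pi=({\beta}^p)$ we have $l_\pi={\operatorname{pr}}od_{s=1}^{p}\mathbf{f}rac{1}{1-q_{{\beta}}^{2s}}$. } \end{Example}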
For ${\beta} \in {\mathtt P}hi_+$, define $E_{\beta}^* \in {\mathcal A}f_{\beta}^*$ to be dual to $E_{\beta}$. We define {\beta}gin{equation}{\lambda}bel{EDualPBW} (E_{\beta}^*)^{{\lambda}ngle m\rangle} := q_{\beta}^{m(m-1)/2} (E_{\beta}^*)^m \quad\text{and}\quad E_\pi^* := (E_{{\beta}_1}^*)^{{\lambda}ngle p_1 \rangle} \dots (E_{{\beta}_N}^*)^{{\lambda}ngle p_N\rangle} \end{equation} for $m {\mathfrak g}eq 0$, and any sequence $\pi = (p_1, \dots p_N) \in {\mathbb Z}_{{\mathfrak g}eq 0}^N$. The next well-known result gives the {\em dual PBW basis} of ${\mathcal A}f^*$. {\beta}gin{Theorem}{\lambda}bel{TDualPBW} The set $ \{E_\pi^* \mid \pi \in {\mathtt P}i({\alpha})\} $ is the ${\mathcal A}$-basis of ${\mathcal A}f_{\alpha}^*$ dual to the PBW basis of Theorem~\ref{TPBW}. \end{Theorem} {\beta}gin{proof} It easily follows from the properties of the Lusztig bilinear form, the definition of the product on $\mathbf{f}^*$ and Theorem~\ref{TPBW} that the linear functions $(E_\pi,-)$ and $l_\pi E_\pi^*$ on $\mathbf{f}$ are equal. It remains to apply Theorem~\ref{TPBW} one more time. \end{proof} {\beta}gin{Example} {\rm Let $C=A_2$, and $w_0=s_1s_2s_1$. Then $E_{{\alpha}_1+{\alpha}_2}=T_{1,+}''(E_2)=E_1E_2-qE_2E_1$, and, switching back to $\theta$'s, the PBW basis of ${\mathcal A}f_{{\alpha}_1+{\alpha}_2}$ is $\{\theta_2\theta_1, \theta_1\theta_2-q\theta_2\theta_1\}$. Using the defining properties of Lusztig's bilinear form, one can easily check that $(E_{{\alpha}_2}E_{{\alpha}_1},E_{{\alpha}_2}E_{{\alpha}_1})=\mathbf{f}rac{1}{(1-q^2)^2}$, $(E_{{\alpha}_1+{\alpha}_2},E_{{\alpha}_1+{\alpha}_2})=\mathbf{f}rac{1}{(1-q^2)}$, and $(E_{{\alpha}_2}E_{{\alpha}_1},E_{{\alpha}_1+{\alpha}_2})=0$. Finally the dual basis is $\{(12),(21)+q(12)\}$. } \end{Example} \subsection{Categorification of ${\mathcal A}f$ and ${\mathcal A}f^*$} Now we state the fundamental categorification theorem proved in \cite{KL1,KL2}, see also \cite{R}. 
We denote by $[R_0]$ the class of the left regular representation of the trivial algebra $R_0\cong F$. {\beta}gin{Theorem}{\lambda}bel{klthm} There is a unique ${\mathtt L}aurent$-linear isomorphism $ {{\mathfrak g}amma}mma:{\mathcal A}f \stackrel{{\sigma}m}{\rightarrow} [{\mathtt P}roj{R}] $ such that $1 \mapsto [R_0]$ and ${{\mathfrak g}amma}mma( x \theta_i^{(n)}) = \theta_i^{(n)}({{\mathfrak g}amma}mma(x))$ for all $x \in {\mathcal A}f$, $i \in I$, and $n {\mathfrak g}eq 1$. Under this isomorphism: {\beta}gin{itemize} \item[(1)] ${{\mathfrak g}amma}mma({\mathcal A}f_{\alpha})=[{\mathtt P}roj{R_{\alpha}}]$; \item[(2)] the multiplication ${\mathcal A}f_{\alpha}pha \otimes {\mathcal A}f_{\beta}ta \rightarrow {\mathcal A}f_{{\alpha}pha+{\beta}ta}$ corresponds to the product on $[{\mathtt P}roj{R}]$ induced by the exact functor ${\operatorname{Ind}}_{{\alpha}pha,{\beta}ta}$; \item[(3)] for $\text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}$ we have ${{\mathfrak g}amma}mma(\theta_\text{\boldmath$i$}) = [R_{\alpha} e(\text{\boldmath$i$})]$; \item[(4)] for $x,y \in {\mathcal A}f$ we have $(x,y) = ({{\mathfrak g}amma}(x),{{\mathfrak g}amma}(y))$. \end{itemize} \end{Theorem} Let $M$ be a finite dimensional graded $R_{\alpha}$-module. Define the {\em $q$-character} of $M$ as follows: {\beta}gin{equation*} {\operatorname{ch}_q\:} M:=\sum_{\text{\boldmath$i$}\in {\langle I \rangle}_{\alpha}}({\operatorname{dim}_q}\, M_\text{\boldmath$i$}) \text{\boldmath$i$}\in \pAf^*. \end{equation*} The $q$-character map ${\operatorname{ch}_q\:}: \mod{R_{\alpha}}\to \pAf^*$ factors through to give an $ {\mathtt L}aurent$-linear map from the Grothendieck group {\beta}gin{equation}{\lambda}bel{EChMap} {\operatorname{ch}_q\:}: [\mod{R_{\alpha}}]\to \pAf^*. \end{equation} We now state a dual result to Theorem~\ref{klthm}, see \cite[Theorem 4.4]{KR}. 
{\beta}gin{Theorem}{\lambda}bel{Dklthm} There is a unique ${\mathtt L}aurent$-linear isomorphism $ {{\mathfrak g}amma}mma^*: [\mod{R}]\stackrel{{\sigma}m}{\rightarrow}{\mathcal A}f^* $ with the following properties: {\beta}gin{itemize} \item[(1)] ${{\mathfrak g}amma}^*([R_0])= 1$; \item[(2)] ${{\mathfrak g}amma}mma^*((\theta_i^*)^{(n)}(x)) = (\theta_i^*)^{(n)}({{\mathfrak g}amma}mma^*(x))$ for all $x \in [\mod{R}],\ i \in I,\ n {\mathfrak g}eq 1$; \item[(3)] the following triangle is commutative: $$ {\beta}gin{pb-diagram} {\mathfrak n}ode{}{\mathfrak n}ode{\pAf^*} {\mathfrak n}ode{} \\ {\mathfrak n}ode{[\mod{R}]} \arrow[2]{e,t}{{{\mathfrak g}amma}^*} \arrow{ne,t}{{\operatorname{ch}_q\:}} {\mathfrak n}ode{}{\mathfrak n}ode{{\mathcal A}f^*} \arrow{nw,t}{\kappa} \end{pb-diagram} $$ \item[(4)] ${{\mathfrak g}amma}mma^*([\mod{R_{\alpha}}])={\mathcal A}f^*_{\alpha}$ for all ${\alpha}\in Q_+$; \item[(5)] under the isomorphism ${{\mathfrak g}amma}^*$, the multiplication ${\mathcal A}f^*_{\alpha}pha \otimes {\mathcal A}f^*_{\beta}ta \rightarrow {\mathcal A}f^*_{{\alpha}pha+{\beta}ta}$ corresponds to the product on $[\mod{R}]$ induced by ${\operatorname{Ind}}_{{\alpha}pha,{\beta}ta}$. \end{itemize} \end{Theorem} We conclude with McNamara's result on the categorification of the dual PBW-basis (see also \cite{Kato} for simply laced Lie types): {\beta}gin{Lemma}{\lambda}bel{LStdPBW} For every $\pi \in {\mathtt P}i({\alpha})$ we have ${{\mathfrak g}amma}^*([\bar {\mathtt D}e(\pi)]) = E_\pi^*$. \end{Lemma} {\beta}gin{proof} By \cite[Theorem 3.1(1)]{McN}, we have ${{\mathfrak g}amma}^*([L({\beta})]) = E_{\beta}^*$ for all ${\beta}\in{\mathtt P}hi_+$. The general case then follows from Theorem~\ref{Dklthm}(5) and the definition (\ref{EStand}) of $\bar{\mathtt D}e(\pi)$.
\end{proof} \subsection{A dimension formula}{\lambda}bel{SSDF} In this section we obtain a dimension formula for $R_{\alpha}$, which can be viewed as a combinatorial shadow of the affine quasi-hereditary structure on it. The idea of the proof comes from \cite[Theorem 4.20]{BKgrdec}. An independent but much less elementary proof can be found in \cite[Corollary 3.15]{BKM}. Recall the element $\theta_{(\text{\boldmath$i$})}\in {\mathcal A}f$ from (\ref{EThetaI}) and the scalar $[\text{\boldmath$i$}]!\in{\mathcal A}$ from (\ref{EIFact}). We note that Lemma~\ref{LMonPBWGen} and Theorem~\ref{TDimGen} {\em do not require the assumption that the Cartan matrix $A$ is of finite type}, adopted elsewhere in the paper. {\beta}gin{Lemma}{\lambda}bel{LMonPBWGen} Let $V^1,\dots,V^m\in \mod{R_{\alpha}}$, and let $v^n:={{\mathfrak g}amma}^*([V^n])\in{\mathcal A}f^*_{\alpha}$ for $n=1,\dots,m$. Assume that $\{v^1,\dots,v^m\}$ is an ${\mathcal A}$-basis of ${\mathcal A}f^*_{\alpha}$. Let $\{v_1,\dots,v_m\}$ be the dual basis of ${\mathcal A}f_{\alpha}$. Then for every $\text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}$, we have \[ \theta_{(\text{\boldmath$i$})} = \sum_{n=1}^m\mathbf{f}rac{{\operatorname{dim}_q}\, V^n_\text{\boldmath$i$}}{[\text{\boldmath$i$}]!} v_n. \] \end{Lemma} {\beta}gin{proof} Recall the map $\kappa$ from (\ref{EIota}) dual to the natural projection $\xi:\pf {\twoheadrightarrow} \mathbf{f}$. By Theorem~\ref{Dklthm} we have for any $1\leq n\leq m$: {\beta}gin{align*} \kappa(v^n) = \kappa({{\mathfrak g}amma}^*([V^n]))={\operatorname{ch}_q\:}([V^n])=\sum_{\text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}}({\operatorname{dim}_q}\, V^n_\text{\boldmath$i$}) \text{\boldmath$i$} = \sum_{\text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}}\mathbf{f}rac{{\operatorname{dim}_q}\, V^n_\text{\boldmath$i$}}{[\text{\boldmath$i$}]!} [\text{\boldmath$i$}]! \text{\boldmath$i$}. 
\end{align*} Recalling (\ref{EMBasisA}) and (\ref{EDualMBasisA}), $\{\ptheta_{(\text{\boldmath$i$})} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\}$ and $\{[\text{\boldmath$i$}]! \text{\boldmath$i$} \mid \text{\boldmath$i$} \in {\langle I \rangle}_{\alpha}\}$ is a pair of dual bases in $\pAf_{\alpha} $ and $\pAf_{\alpha} ^*$. So, using our expression for $ \kappa(v^n)$, we can now get by dualizing: {\beta}gin{align*} \theta_{(\text{\boldmath$i$})}&=\xi(\ptheta_{(\text{\boldmath$i$})})=\sum_{n=1}^m v^n(\xi(\ptheta_{(\text{\boldmath$i$})}))v_n \\ &=\sum_{n=1}^m\kappa(v^n)(\ptheta_{(\text{\boldmath$i$})})v_n =\sum_{n=1}^m \sum_{\text{\boldmath$j$} \in {\langle I \rangle}_{\alpha}}\mathbf{f}rac{{\operatorname{dim}_q}\, V^n_\text{\boldmath$j$}}{[\text{\boldmath$j$}]!} [\text{\boldmath$j$}]! \text{\boldmath$j$} (\ptheta_{(\text{\boldmath$i$})})v_n \\ &= \sum_{n=1}^m \sum_{\text{\boldmath$j$} \in {\langle I \rangle}_{\alpha}}\mathbf{f}rac{{\operatorname{dim}_q}\, V^n_\text{\boldmath$j$}}{[\text{\boldmath$j$}]!} {\delta}_{\text{\boldmath$i$},\text{\boldmath$j$}}v_n = \sum_{n=1}^m\mathbf{f}rac{{\operatorname{dim}_q}\, V^n_\text{\boldmath$i$}}{[\text{\boldmath$i$}]!} v_n, \end{align*} as required. \end{proof} {\beta}gin{Theorem}{\lambda}bel{TDimGen} With the assumptions of Lemma~\ref{LMonPBWGen}, for every $\text{\boldmath$i$}, \text{\boldmath$j$} \in {\langle I \rangle}_{\alpha}$, we have \[ {\operatorname{dim}_q}\,(e(\text{\boldmath$i$})R_{\alpha} e(\text{\boldmath$j$})) = \sum_{n,k=1}^m ({\operatorname{dim}_q}\, V^n_\text{\boldmath$i$})({\operatorname{dim}_q}\, V^k_\text{\boldmath$j$})(v_n,v_k). \] In particular, \[ {\operatorname{dim}_q}\,(R_{\alpha}) = \sum_{n,k=1}^m ({\operatorname{dim}_q}\, V^n)({\operatorname{dim}_q}\, V^k)(v_n,v_k). \] \end{Theorem} {\beta}gin{proof} Theorem~\ref{klthm}(3) shows that $[R_{\alpha} e(\text{\boldmath$i$})] = {{\mathfrak g}amma}(\theta_\text{\boldmath$i$}) = {{\mathfrak g}amma}([\text{\boldmath$i$}]! 
\theta_{(\text{\boldmath$i$})})$. Using the definitions and Theorem~\ref{klthm}(4), we have {\beta}gin{align*} {\operatorname{dim}_q}\,(e(\text{\boldmath$i$})R_{\alpha} e(\text{\boldmath$j$})) &= {\operatorname{dim}_q}\,((R_{\alpha} e(\text{\boldmath$i$}))^\tau \otimes_{R_{\alpha}} R_{\alpha} e(\text{\boldmath$j$}))\\ &= ([R_{\alpha} e(\text{\boldmath$i$})], [R_{\alpha} e(\text{\boldmath$j$})])= ([\text{\boldmath$i$}]! \theta_{(\text{\boldmath$i$})}, [\text{\boldmath$j$}]! \theta_{(\text{\boldmath$j$})}). \end{align*} Now, by Lemma~\ref{LMonPBWGen} we see that {\beta}gin{align*} ([\text{\boldmath$i$}]! \theta_{(\text{\boldmath$i$})}, [\text{\boldmath$j$}]! \theta_{(\text{\boldmath$j$})}) &= {\mathcal B}ig( \sum_{n=1}^m({\operatorname{dim}_q}\, V^n_\text{\boldmath$i$}) v_n, \sum_{k=1}^m({\operatorname{dim}_q}\, V^k_\text{\boldmath$j$}) v_k {\mathcal B}ig), \end{align*} which implies the theorem. \end{proof} Recall the scalar $l_\pi$ from (\ref{ELPi}), the module $\bar{\mathtt D}e(\pi)$ from (\ref{EStand}), and PBW-basis elements $E_\pi$ from \S\ref{SSQG}. {\beta}gin{Corollary}{\lambda}bel{TDim} For every $\text{\boldmath$i$}, \text{\boldmath$j$} \in {\langle I \rangle}_{\alpha}$, we have \[ {\operatorname{dim}_q}\,(e(\text{\boldmath$i$})R_{\alpha} e(\text{\boldmath$j$})) = \sum_{\pi \in {\mathtt P}i({\alpha})} ({\operatorname{dim}_q}\, \bar {\mathtt D}e(\pi)_\text{\boldmath$i$})({\operatorname{dim}_q}\, \bar {\mathtt D}e(\pi)_\text{\boldmath$j$}) l_\pi. \] In particular, \[ {\operatorname{dim}_q}\,(R_{\alpha}) = \sum_{\pi \in {\mathtt P}i({\alpha})} ({\operatorname{dim}_q}\, \bar {\mathtt D}e(\pi))^2 l_\pi. \] \end{Corollary} {\beta}gin{proof} By Lemma~\ref{LStdPBW}, we have ${{\mathfrak g}amma}^*([\bar{\mathtt D}e(\pi)])=E^*_\pi$ for all $\pi\in{\mathtt P}i({\alpha})$.
Moreover, by Theorem~\ref{TDualPBW}, $\{E_\pi^*\mid \pi\in {\mathtt P}i({\alpha})\}$ and $\{E_\pi\mid \pi\in {\mathtt P}i({\alpha})\}$ is a pair of dual bases in ${\mathcal A}f_{\alpha}^*$ and ${\mathcal A}f_{\alpha}$. Finally, $(E_\pi,E_{\sigma})={\delta}_{\pi,{\sigma}}l_\pi$ by Theorem~\ref{TPBW}. It remains to apply Theorem~\ref{TDimGen}. \end{proof} \section{Affine cellular structure} Throughout this section we fix ${\alpha}\in Q_+$ and a total order $\leq$ on the set ${\mathtt P}i({\alpha})$ of root partitions of ${\alpha}$, which refines the bilexicographic partial order (\ref{EBilex}). \subsection{Some special word idempotents}{\lambda}bel{SSSSWId} Recall from Section~\ref{SSSMT} that for each ${\beta}ta\in {\mathtt P}hi_+$, we have a cuspidal module $L({\beta})$. Every irreducible $R_{\alpha}$-module $L$ has a word space $L_\text{\boldmath$i$}$ such that the lowest degree component of $L_\text{\boldmath$i$}$ is one-dimensional, see for example \cite[Lemma 2.30]{Kcusp} or \cite[Lemma 4.5]{BKM} for two natural choices. From now on, for each ${\beta}\in{\mathtt P}hi_+$ we make an arbitrary choice of such word $\text{\boldmath$i$}_{\beta}$ for the cuspidal module $L({\beta})$. For $\pi=({\beta}_1^{p_1},\dots,{\beta}_N^{p_N})\in {\mathtt P}i({\alpha})$, define {\beta}gin{align*} \text{\boldmath$i$}_\pi&:=\text{\boldmath$i$}_{{\beta}_1}^{p_1}\dots \text{\boldmath$i$}_{{\beta}_N}^{p_N}, \\ I_\pi &:= \sum_{{\sigma} {\mathfrak g}eq \pi} R_{\alpha} e(\text{\boldmath$i$}_{\sigma}) R_{\alpha}, \\ I_{>\pi} &:= \sum_{{\sigma} > \pi} R_{\alpha} e(\text{\boldmath$i$}_{\sigma}) R_{\alpha}, \end{align*} the sums being over ${\sigma} \in {\mathtt P}i({\alpha})$. We also consider the (non-unital) embedding of algebras: \[ \iota_\pi := \iota_{p_1{\beta}_1,\dots,p_N{\beta}_N} : R_{p_1 {\beta}_1} \otimes \dots \otimes R_{p_N {\beta}_N} {\hookrightarrow} R_{\alpha}, \] whose image is the parabolic subalgebra $$ R_\pi := R_{p_1{\beta}_1,\dots,p_N{\beta}_N}. 
$$ {\beta}gin{Lemma} {\lambda}bel{LAllIdem} If a two-sided ideal $J$ of $R_{\alpha}$ contains all idempotents $e(\text{\boldmath$i$}_\pi)$ with $\pi\in{\mathtt P}i({\alpha})$, then $J=R_{\alpha}$. \end{Lemma} {\beta}gin{proof} If $J{\mathfrak n}eq R_{\alpha}$, let $I$ be a maximal left ideal containing $J$. Then $R_{\alpha} / I \cong L(\pi)$ for some $\pi$, and so $e(\text{\boldmath$i$}_\pi)L(\pi){\mathfrak n}eq 0$, which contradicts the assumption that $e(\text{\boldmath$i$}_\pi)\in J$. This argument proves the lemma over any field, and then it also follows for ${\mathbb Z}$. \end{proof} {\beta}gin{Lemma}{\lambda}bel{LBadWordsNew} Let $\pi\in{\mathtt P}i({\alpha})$ and let $e \in R_{\alpha}$ be a homogeneous idempotent. If $e L({\sigma}) = 0$ for all ${\sigma}\leq \pi$, then $e \in I_{>\pi}$. \end{Lemma} {\beta}gin{proof} Let $I$ be any maximal (graded) left ideal containing $I_{>\pi}$. Then $R_{{\alpha}} / I \cong L({\sigma})$ for some ${\sigma} \in {\mathtt P}i({\alpha})$ such that ${\sigma}\leq \pi$. Indeed, if we had ${\sigma} > \pi$ then by definition $e(\text{\boldmath$i$}_{\sigma}) \in I_{>\pi} \subseteq I$, and so $e(\text{\boldmath$i$}_{\sigma}) L({\sigma}) = e(\text{\boldmath$i$}_{\sigma})(R_{\alpha}/I)=0$, which is a contradiction. We have shown that $e$ is contained in every maximal left ideal containing $I_{>\pi}$. By a standard argument, explained in \cite[Lemma 5.8]{KLM}, we conclude that $e \in I_{>\pi}$. \end{proof} {\beta}gin{Corollary}{\lambda}bel{LBadWords} Suppose that ${\alpha} = p{\beta}$ for some $p {\mathfrak g}eq 1$ and ${\beta} \in {\mathtt P}hi_+$. Let $\text{\boldmath$i$} \in {\langle I \rangle}_{{\alpha}}$. If $e(\text{\boldmath$i$}) L({\beta}^p) = 0$, then $e(\text{\boldmath$i$}) \in I_{>({\beta}^p)}$. \end{Corollary} {\beta}gin{proof} This follows from Lemma~\ref{LMinRP} together with Lemma~\ref{LBadWordsNew}.
\end{proof} {\beta}gin{Lemma}{\lambda}bel{LIotaImage} Let $\pi = ({\beta}_1^{p_1}\dots {\beta}_N^{p_N})\in {\mathtt P}i({\alpha})$. Then $R_\pi \subseteq I_\pi$. \end{Lemma} {\beta}gin{proof} By Lemma~\ref{LAllIdem}, we have $$ R_{p_n{\beta}_n}=\sum_{\pi^{(n)}\in{\mathtt P}i(p_n{\beta}_n)}R_{p_n{\beta}_n}e(\text{\boldmath$i$}_{\pi^{(n)}})R_{p_n{\beta}_n} $$ for all $n=1,\dots,N$. Therefore the image of $\iota_\pi$ equals {\beta}gin{equation}{\lambda}bel{EIotaImage} \sum R_\pi e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}}) R_\pi \end{equation} where the sum is over all $\pi^{(1)}\in{\mathtt P}i(p_1{\beta}_1),\dots, \pi^{(N)}\in{\mathtt P}i(p_N{\beta}_N)$. Fix $\pi^{(n)}\in{\mathtt P}i(p_n{\beta}_n)$ for all $n=1,\dots,N$. If $\pi^{(n)} = ({\beta}_n^{p_n})$ for every $n$, then $\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}} = \text{\boldmath$i$}_\pi$, and the corresponding term of (\ref{EIotaImage}) is in $I_\pi$ by definition. Let us now assume that $\pi^{(k)} {\mathfrak n}eq ({\beta}_k^{p_k})$ for some $k$. In view of Lemma~\ref{LMinRP}, we have $\pi^{(k)} > ({\beta}_k^{p_k})$. For any ${\sigma} \in {\mathtt P}i({\alpha})$ we have $e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}})\bar{\mathtt D}e({\sigma}) \subseteq {\mathbb R}es_\pi \bar{\mathtt D}e({\sigma})$, and by Theorem~\ref{TStand}(vi), if ${\sigma} < \pi$ then ${\mathbb R}es_\pi \bar{\mathtt D}e({\sigma})=0$. 
Furthermore, for ${\sigma} = \pi$ we have, by Theorem~\ref{TStand}(vi) applied again {\beta}gin{align*} e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}})\bar{\mathtt D}e(\pi) &= e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}}) {\mathbb R}es_\pi \bar {\mathtt D}e(\pi)\\ &= e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}})(\bar{\mathtt D}e({\beta}_1^{p_1}) \boxtimes \dots \boxtimes \bar{\mathtt D}e({\beta}_N^{p_N}))\\ &\subseteq ({\mathbb R}es_{\pi^{(1)}} \bar{\mathtt D}e({\beta}_1^{p_1})) \boxtimes \dots \boxtimes ({\mathbb R}es_{\pi^{(N)}} \bar{\mathtt D}e({\beta}_N^{p_N})), \end{align*} which is zero since ${\mathbb R}es_{\pi^{(k)}} \bar{\mathtt D}e({\beta}_k^{p_k}) = 0$ by Theorem~\ref{TStand}(vi) again. We have shown that for all ${\sigma} \leq \pi$ we have $e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}})\bar{\mathtt D}e({\sigma})=0$, and consequently $e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}}) L({\sigma})=0$. Applying Lemma~\ref{LBadWordsNew}, we have that $e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}}) \in I_{>\pi} \subseteq I_\pi$. \end{proof} The following result will often allow us to reduce to the case of a smaller height. {\beta}gin{Proposition}{\lambda}bel{LParaIdeal1New} Let ${{\mathfrak g}amma}_1,\dots,{{\mathfrak g}amma}_m\in Q_{+}$, $1\leq k\leq m$, and $\pi_0\in{\mathtt P}i({{\mathfrak g}amma}_k)$. Assume that $\pi\in{\mathtt P}i({{\mathfrak g}amma}_1+\dots+{{\mathfrak g}amma}_m)$ is such that all idempotents from the set $$ E=\{e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(m)}})\mid \pi^{(n)}\in{\mathtt P}i({{\mathfrak g}amma}_n)\ \text{for all $n=1,\dots,m$ and $\pi^{(k)}>\pi_0$}\} $$ annihilate the irreducible modules $L({\sigma})$ for all ${\sigma}\leq \pi$. 
Then {\beta}gin{equation}{\lambda}bel{EIotaOfR} \iota_{{{\mathfrak g}amma}_1,\dots,{{\mathfrak g}amma}_m}(R_{{{\mathfrak g}amma}_1} \otimes\dots\otimes R_{{{\mathfrak g}amma}_{k-1}}\otimes I_{>\pi_0}\otimes R_{{{\mathfrak g}amma}_{k+1}} \otimes\dots\otimes R_{{{\mathfrak g}amma}_m}) \subseteq I_{>\pi}. \end{equation} \end{Proposition} {\beta}gin{proof} We may assume that ${{\mathfrak g}amma}_k{\mathfrak n}eq 0$ since otherwise $I_{>\pi_0}=0$, and the result is clear. By Lemma~\ref{LAllIdem}, we have $ R_{{{\mathfrak g}amma}_n}=\sum_{\pi^{(n)}\in{\mathtt P}i({{\mathfrak g}amma}_n)}R_{{{\mathfrak g}amma}_n}e(\text{\boldmath$i$}_{\pi^{(n)}})R_{{{\mathfrak g}amma}_n} $ for all $n=1,\dots,m$, and by definition, we have $ I_{>\pi_0}=\sum_{\pi^{(k)}>\pi_0}R_{{{\mathfrak g}amma}_k}e(\text{\boldmath$i$}_{\pi^{(k)}})R_{{{\mathfrak g}amma}_k}. $ Therefore the left hand side of (\ref{EIotaOfR}) equals $ \sum_{e\in E} R_{{{\mathfrak g}amma}_1,\dots,{{\mathfrak g}amma}_m} e R_{{{\mathfrak g}amma}_1,\dots,{{\mathfrak g}amma}_m}. $ The result now follows by applying Lemma~\ref{LBadWordsNew}. \end{proof} Recall from Lemma~\ref{LIotaImage} that ${\operatorname{im}} (\iota_\pi)\subseteq I_\pi$. {\beta}gin{Corollary}{\lambda}bel{LParaIdeal2} Let $\pi = ({\beta}ta_1^{p_1}, \dots, {\beta}ta_N^{p_N})\in{\mathtt P}i({\alpha})$ and $1\leq k\leq N$. Then \[ \iota_\pi(R_{p_1 {\beta}_1} \otimes \dots \otimes R_{p_{k-1}{\beta}_{k-1}} \otimes I_{>({\beta}_k^{p_k})} \otimes R_{p_{k+1} {\beta}_{k+1}} \otimes \dots \otimes R_{p_N {\beta}_N}) \subseteq I_{>\pi}. \] In particular, the composite map $R_\pi \stackrel{\iota_\pi}{\longrightarrow}I_\pi\longrightarrow I_\pi / I_{>\pi}$ factors through the quotient $R_{p_1 {\beta}_1} / I_{>({\beta}_1^{p_1})} \otimes \dots \otimes R_{p_N {\beta}_N} / I_{>({\beta}_N^{p_N})}$. 
\end{Corollary} {\beta}gin{proof} Apply Proposition~\ref{LParaIdeal1New} with $m=N$, ${{\mathfrak g}amma}_n=p_n{\beta}_n$ for $1\leq n \leq N$, $\pi_0=({\beta}_k^{p_k})$, and with the given $\pi$. We have to prove that any $e= e(\text{\boldmath$i$}_{\pi^{(1)}}\dots \text{\boldmath$i$}_{\pi^{(N)}}) \in E$ annihilates all $L({\sigma})$ for ${\sigma}\leq \pi$. We prove more, namely that $e$ annihilates $\bar{\mathtt D}e({\sigma})$ for all ${\sigma}\leq\pi$. By Theorem~\ref{TStand}(vi): {\beta}gin{align*} e\bar{\mathtt D}e({\sigma})&=e{\mathbb R}es_\pi \bar{\mathtt D}e({\sigma})=e{\delta}_{\pi,{\sigma}}(L({{\beta}_1})^{\circ p_1}\boxtimes\dots\boxtimes L({{\beta}_N})^{\circ p_N})\\ &={\delta}_{\pi,{\sigma}}\,e(\text{\boldmath$i$}_{\pi^{(1)}})L({{\beta}_1})^{\circ p_1}\boxtimes \dots\boxtimes e(\text{\boldmath$i$}_{\pi^{(N)}})L({{\beta}_N})^{\circ p_N}, \end{align*} which is zero since $$ e(\text{\boldmath$i$}_{\pi^{(k)}})L({{\beta}_k})^{\circ p_k}=e(\text{\boldmath$i$}_{\pi^{(k)}}){\mathbb R}es_{\pi^{(k)}}L({{\beta}_k})^{\circ p_k}=0 $$ by Theorem~\ref{TStand}(vi) again. \end{proof} {\beta}gin{Corollary}{\lambda}bel{LParaIdeal1} For ${\beta}\in {\mathtt P}hi_+$ and $a, b, c \in {\mathbb Z}_{{\mathfrak g}eq 0}$ we have \[ \iota_{a{\beta},b{\beta},c{\beta}}(R_{a {\beta}} \otimes I_{>({\beta}^b)} \otimes R_{c{\beta}}) \subseteq I_{>({\beta}^{a+b+c})}. \] \end{Corollary} {\beta}gin{proof} We apply Proposition~\ref{LParaIdeal1New} with $m=3$, $k=2$, ${{\mathfrak g}amma}_1=a{\beta}$, ${{\mathfrak g}amma}_2=b{\beta}$, ${{\mathfrak g}amma}_3=c{\beta}$, $\pi_0=({\beta}^b)$ and $\pi=({\beta}^{a+b+c})$. Pick an idempotent $e=e(\text{\boldmath$i$}_{\pi^{(1)}} \text{\boldmath$i$}_{\pi^{(2)}} \text{\boldmath$i$}_{\pi^{(3)}})\in E$. Since $\pi$ is the minimal element of ${\mathtt P}i((a+b+c){\beta})$, it suffices to prove that $eL(\pi)=0$.
Note that $eL(\pi)=e{\mathbb R}es_{a{\beta},b{\beta},c{\beta}}L(\pi)$, so using Lemma~\ref{LBKM}, we just need to show that $e(L({\beta})^{\circ a}\boxtimes L({\beta})^{\circ b}\boxtimes L({\beta})^{\circ c})=0$. But $$ e(L({\beta})^{\circ a}\boxtimes L({\beta})^{\circ b}\boxtimes L({\beta})^{\circ c})=e(\text{\boldmath$i$}_{\pi^{(1)}})L({\beta})^{\circ a}\boxtimes e(\text{\boldmath$i$}_{\pi^{(2)}})L({\beta})^{\circ b}\boxtimes e(\text{\boldmath$i$}_{\pi^{(3)}})L({\beta})^{\circ c} $$ is zero, since $e(\text{\boldmath$i$}_{\pi^{(2)}})L({\beta})^{\circ b}=e(\text{\boldmath$i$}_{\pi^{(2)}}){\mathbb R}es_{\pi^{(2)}}L({\beta})^{\circ b}=0 $ by Theorem~\ref{TStand}(vi). \end{proof} Repeated application of Corollary~\ref{LParaIdeal1} gives the following result. {\beta}gin{Corollary}{\lambda}bel{COneRootIdeal} For ${\beta}\in {\mathtt P}hi_+$ and $p \in {\mathbb Z}_{> 0}$ we have \[ \iota_{{\beta},\dots,{\beta}}(R_{{\beta}} \otimes \dots \otimes I_{>({\beta})} \otimes \dots \otimes R_{{\beta}}) \subseteq I_{>({\beta}^p)}. \] \end{Corollary} \subsection{Basic notation concerning cellular bases}{\lambda}bel{SNota} Let ${\beta}$ be a fixed positive root of height $d$. Recall that we have made a choice of $\text{\boldmath$i$}_{\beta}$ so that in the word space $e(\text{\boldmath$i$}_{\beta}) L({\beta})$ of the cuspidal module, the lowest degree part is $1$-dimensional. We fix its spanning vector $v_{\beta}^-$ defined over ${\mathbb Z}$, see Lemma~\ref{LCuspInt}. Similarly, the highest degree part is spanned over ${\mathbb Z}$ by some $v_{\beta}^+$. For $p\in{\mathbb Z}_{>0}$ and $1\leq r<p$, we consider the element $w_{{\beta},r}\in \mathfrak{S}_{pd}$ of the symmetric group $$ w_{{\beta},r}:={\operatorname{pr}}od_{k=1}^d((r-1)d+k,rd+k), $$ which permutes the $r$th and the $(r+1)$st `$d$-blocks'. Now define $$ \psi_{{\beta},r}:=\psi_{w_{{\beta},r}}\in R_{p{\beta}}.
$$ \iffalse{ In other words, $\psi_{\beta}$ is a `permutation of two ${\beta}$-blocks' and corresponding to the following element of $\mathfrak{S}_{2d}$: $$ {\beta}gin{braid}\tikzset{scale=0.8,baseline=12mm} \draw(1,8)--(7,0); \draw(2,8)--(8,0); \draw[dotted](2.5,8)--(4.5,8); \draw[dotted](5.5,4)--(7.5,4); \draw[dotted](2.5,0)--(4.5,0); \draw(5,8)--(11,0); \draw(6,8)--(12,0); \draw(7,8)--(1,0); \draw(8,8)--(2,0); \draw[dotted](8.5,8)--(10.5,8); \draw[dotted](8.5,0)--(10.5,0); \draw(11,8)--(5,0); \draw(12,8)--(6,0); \end{braid} $$ }\mathbf{f}i Moreover, for $u\in\mathfrak{S}_p$ with a fixed reduced decomposition $u=s_{r_1}\dots s_{r_m}$, define the elements {\beta}gin{align*} w_{{\beta},u}&:=w_{{\beta},r_1}\dots w_{{\beta},r_m}\in\mathfrak{S}_{pd}, \\ \psi_{{\beta},u}&:=\psi_{{\beta},r_1}\dots\psi_{{\beta},r_m}\in R_{p{\beta}}. \end{align*} In Section~\ref{SVerif}, we will explicitly define homogeneous elements $$ {\delta}_{\beta},\ D_{\beta},\ y_{\beta}\ \in\ e(\text{\boldmath$i$}_{\beta})R_{\beta} e(\text{\boldmath$i$}_{\beta}) $$ and $e_{\beta} := D_{\beta} {\delta}_{\beta}$ so that the following hypothesis is satisfied: {\beta}gin{Hypothesis}{\lambda}bel{HProp} We have: {\beta}gin{enumerate} \item[\textrm{(i)}] $e_{\beta}^2 - e_{\beta} \in I_{>({\beta})}$. \item[\textrm{(ii)}] ${\delta}_{\beta}, D_{\beta}$ and $y_{\beta}$ are $\tau$-invariant. \item[\textrm{(iii)}] ${\delta}_{\beta} v_{\beta}^- = v_{\beta}^+$ and $D_{\beta} v_{\beta}^+ = v_{\beta}^-$, \item[\textrm{(iv)}] $y_{\beta}$ has degree ${\beta}\cdot {\beta}$ and commutes with ${\delta}_{\beta}$ and $D_{\beta}$, \item[\textrm{(v)}] The algebra $(e_{\beta} R_{\beta} e_{\beta} + I_{>({\beta})})/I_{>({\beta})}$ is generated by $e_{\beta} y_{\beta} e_{\beta} + I_{>({\beta})}$. \item[\textrm{(vi)}] $\iota_{{\beta},{\beta}}(D_{\beta} \otimes D_{\beta}) \psi_{{\beta},1} = \psi_{{\beta},1} \iota_{{\beta},{\beta}}(D_{\beta} \otimes D_{\beta})$. 
\end{enumerate} \end{Hypothesis} From now on until we verify it in Section~\ref{SVerif}, we will work under the assumption that Hypothesis~\ref{HProp} holds. It turns out that this hypothesis is sufficient to construct affine cellular bases. {\beta}gin{Lemma}{\lambda}bel{CProp} $R_{\beta} e_{\beta} R_{\beta} + I_{>({\beta})} = R_{\beta}$. \end{Lemma} {\beta}gin{proof} This follows as in the proof of Lemma~\ref{LAllIdem} using $e_{\beta} L({\beta}){\mathfrak n}eq 0$. \end{proof} Using Lemma~\ref{LCuspInt}, we can choose a set $$ \mathfrak{B}_{\beta}\subseteq R_{\beta} $$ of elements defined over ${\mathbb Z}$ such that $$ \{b v_{\beta}^-\mid b\in\mathfrak{B}_{\beta}\} $$ is an ${\mathcal O}$-basis of $L({\beta})_{\mathcal O}$. Fix $p\in{\mathbb Z}_{>0}$ and define the set {\beta}gin{align*} \mathfrak{B}_{{\beta}^{\boxtimes p}}:=\{\iota_{{\beta},\dots,{\beta}}(b_1\otimes\dots\otimes b_p)\mid b_1,\dots,b_p\in\mathfrak{B}_{\beta}\}, \end{align*} and the element $$ y_{{\beta},r}:=\iota_{(r-1){\beta},{\beta},(p-r){\beta}}(1\otimes y_{\beta}\otimes 1)\in R_{p{\beta}} \qquad(1\leq r\leq p). $$ Further, define the elements of $R_{p{\beta}}$ {\beta}gin{align} e_{{\beta}^{\boxtimes p}}&:=\iota_{{\beta},\dots,{\beta}}(e_{\beta}\otimes\dots\otimes e_{\beta}),\\ {\delta}_{({\beta}^p)}&:=y_{{\beta},2}y_{{\beta},3}^2\dots y_{{\beta},p}^{p-1} \iota_{{\beta},\dots,{\beta}}({\delta}_{\beta} \otimes \dots \otimes {\delta}_{\beta}),\\ D_{({\beta}^{p})}&:=\psi_{{\beta},w_0} \iota_{{\beta},\dots,{\beta}}(D_{\beta} \otimes \dots \otimes D_{\beta}),\\ e_{({\beta}^p)}&:=D_{({\beta}^p)} {\delta}_{({\beta}^p)} = \psi_{{\beta},w_0}y_{{\beta},2}y_{{\beta},3}^2\dots y_{{\beta},p}^{p-1} e_{{\beta}^{\boxtimes p}},{\lambda}bel{EDeSi} \end{align} where $w_0 \in \mathfrak{S}_p$ is the longest element. It will be proved in Corollary~\ref{C030413_3} that $e_{({\beta}^p)}^2-e_{({\beta}^p)}\in I_{>({\beta}^p)}$, generalizing part (i) of Hypothesis~\ref{HProp}.
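Although the elements ${\delta}_{\beta}$, $D_{\beta}$ and $y_{\beta}$ will only be constructed in Section~\ref{SVerif}, the following special case may help to orient the reader; it is included for illustration only and anticipates what the construction yields for a simple root. {\beta}gin{Example} {\rm Let ${\beta}={\alpha}_i$ be a simple root, so that $L({\alpha}_i)$ is the $1$-dimensional cuspidal module and $\text{\boldmath$i$}_{\beta}=(i)$. In this case one can take ${\delta}_{\beta}=D_{\beta}=e(i)$ and $y_{\beta}=y_1$, so that $e_{\beta}=e(i)$ and (\ref{EDeSi}) specializes to $e_{({\alpha}_i^p)}=\psi_{w_0}y_2y_3^2\dots y_p^{p-1}$ in the nil-Hecke algebra $R_{p{\alpha}_i}$, which is, up to conventions, the familiar idempotent of \cite{KL1}. Note also that ${\mathtt P}i(p{\alpha}_i)=\{({\alpha}_i^p)\}$, so that $I_{>({\alpha}_i^p)}=0$ and $e_{({\alpha}_i^p)}$ is then a genuine idempotent. } \end{Example}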
It is easy to see, as in \cite[Lemma~2.4]{KLM}, that there is always a choice of reduced decomposition of $w_0$ such that
\begin{equation}\label{ETauInv1}
\psi_{{\beta},w_0}^\tau=\psi_{{\beta},w_0}.
\end{equation}
We have the algebras of polynomials and symmetric polynomials
\begin{equation}\label{EPolSi}
P_{({\beta}^p)}={\mathcal O}[y_{{\beta},1},\dots,y_{{\beta},p}]\quad\text{and}\quad \Lambda_{({\beta}^p)}=P_{({\beta}^p)}^{\mathfrak{S}_p}.
\end{equation}
While it is clear that the $y_{{\beta},r}$ commute, we do not yet know that they are algebraically independent; this will turn out to be the case. For now, one can interpret $\Lambda_{({\beta}^p)}$ as the algebra generated by the elementary symmetric functions in $y_{{\beta},1},\dots,y_{{\beta},p}$. Note, using Hypothesis~\ref{HProp}(iv), that
\begin{equation}\label{EUpperBoundp}
\operatorname{dim}_q \Lambda_{({\beta}^p)}\leq \prod_{s=1}^p\frac{1}{1-q_{\beta}^{2s}}.
\end{equation}
Given ${\alpha} \in Q_+$ of height $d$ and a root partition $\pi=({\beta}_1^{p_1}, \dots, {\beta}_N^{p_N}) \in \Pi({\alpha})$, we define the parabolic subgroups
\begin{align*}
\mathfrak{S}_\pi &:= \mathfrak{S}_{\operatorname{ht}({\beta}_1)}^{\times p_1} \times \dots \times \mathfrak{S}_{\operatorname{ht}({\beta}_N)}^{\times p_N} \subseteq \mathfrak{S}_d, \\
\mathfrak{S}_{(\pi)} &:= \mathfrak{S}_{p_1\operatorname{ht}({\beta}_1)} \times \dots \times \mathfrak{S}_{p_N\operatorname{ht}({\beta}_N)} \subseteq \mathfrak{S}_d,
\end{align*}
and we denote by $\mathfrak{S}^\pi$ (resp.\ $\mathfrak{S}^{(\pi)}$) the set of minimal left coset representatives of $\mathfrak{S}_\pi$ (resp.\ $\mathfrak{S}_{(\pi)}$) in $\mathfrak{S}_d$. Set
\begin{align*}
\mathfrak{B}_\pi &:= \{\psi_w \iota_\pi(b_1\otimes \dots\otimes b_N)\mid w\in \mathfrak{S}^\pi,\ b_n\in\mathfrak{B}_{{\beta}_n^{\boxtimes p_n}}\ \text{for $n=1,\dots,N$}\}.
\end{align*}
Using the natural embedding $L({{\beta}_1})^{\boxtimes p_1}\boxtimes\dots\boxtimes L({{\beta}_N})^{\boxtimes p_N} \subseteq \bar\Delta(\pi)$, we define the element
$$
v_\pi^- := (v_{{\beta}_1}^-)^{\otimes p_1}\otimes\dots\otimes (v_{{\beta}_N}^-)^{\otimes p_N} \in \bar\Delta(\pi),
$$
which belongs to the word space corresponding to the word
$$
\text{\boldmath$i$}_\pi:=\text{\boldmath$i$}_{{\beta}_1}^{p_1}\dots \text{\boldmath$i$}_{{\beta}_N}^{p_N}.
$$
From the definitions we get:
\begin{Lemma}\label{LDeBasis}
Let $\pi\in\Pi({\alpha})$. Then $\{bv_\pi^-\mid b\in \mathfrak{B}_\pi\}$ is a basis for $\bar\Delta(\pi)$.
\end{Lemma}
Define
\begin{align*}
\delta_\pi &:= \iota_\pi(\delta_{({\beta}_1^{p_1})}\otimes \dots \otimes \delta_{({\beta}_N^{p_N})}), \\
D_\pi &:= \iota_\pi(D_{({\beta}_1^{p_1})}\otimes \dots \otimes D_{({\beta}_N^{p_N})}), \\
e_\pi &:= \iota_\pi(e_{({\beta}_1^{p_1})} \otimes \dots \otimes e_{({\beta}_N^{p_N})}) = D_\pi \delta_\pi,\\
\Delta(\pi) &:= ((R_{\alpha} e_\pi + I_{>\pi})/I_{>\pi})\langle\deg(v_\pi^-)\rangle,\\
\Delta'(\pi) &:= ((e_\pi R_{\alpha} + I_{>\pi})/I_{>\pi})\langle\deg(v_\pi^+)\rangle,\\
\Lambda_\pi &:= \iota_\pi(\Lambda_{({\beta}_1^{p_1})}\otimes\dots\otimes \Lambda_{({\beta}_N^{p_N})}).
\end{align*}
Note by (\ref{EUpperBoundp}) and (\ref{ELPi}) that
\begin{equation}\label{EDimUpper}
\operatorname{dim}_q \Lambda_\pi\leq l_\pi.
\end{equation}
Choose also a homogeneous basis $X_\pi$ of $\Lambda_\pi$. The following lemma is a consequence of Hypothesis~\ref{HProp}(ii),(vi) and (\ref{ETauInv1}).
\begin{Lemma}\label{LDDelTau}
We have $D_\pi^\tau = D_\pi$ and $\delta_\pi^\tau = \delta_\pi$.
\end{Lemma}
\subsection{Powers of a single root}\label{SSPower}
Throughout this subsection, ${\beta} \in \Phi_+$ and $p \in {\mathbb Z}_{>0}$ are fixed.
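As an aside (not needed for the proofs, and using the standard convention $q_\beta = q^{(\beta\cdot\beta)/2}$), the bound \eqref{EUpperBoundp} can be unpacked as follows: each elementary symmetric polynomial $e_s(y_{\beta,1},\dots,y_{\beta,p})$ is homogeneous of degree $s({\beta}\cdot{\beta})$, so a free polynomial algebra on $e_1,\dots,e_p$ has graded dimension

```latex
\operatorname{dim}_q \mathcal{O}[e_1,\dots,e_p]
  \;=\; \prod_{s=1}^{p} \frac{1}{1-q^{s(\beta\cdot\beta)}}
  \;=\; \prod_{s=1}^{p} \frac{1}{1-q_\beta^{2s}} .
```

Until algebraic independence is established, $\Lambda_{(\beta^p)}$ is only known to be a quotient of such a free polynomial algebra, whence the inequality.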
Define ${\alpha} := p{\beta}$ and ${\sigma}:=({\beta}^p) \in \Pi({\alpha})$. Define $\bar R_{{\alpha}} := R_{{\alpha}} / I_{>{\sigma}}$, and given $r \in R_{{\alpha}}$ write $\bar r$ for its image in $\bar R_{{\alpha}}$. The following proposition is the main result of this subsection.
\begin{Proposition}\label{LPowerBasis}
We have:
\begin{enumerate}
\item[\textrm{(i)}] $\{\bar b \bar f \bar e_{{\sigma}} \mid b \in \mathfrak{B}_{\sigma},\ f \in X_{\sigma}\}$ is an ${\mathcal O}$-basis for $\Delta({\sigma})$.
\item[\textrm{(ii)}] $\{\bar e_{{\sigma}} \bar f \bar D_{{\sigma}}\, \bar b^\tau \mid b \in \mathfrak{B}_{\sigma},\ f \in X_{\sigma}\}$ is an ${\mathcal O}$-basis for $\Delta'({\sigma})$.
\item[\textrm{(iii)}] $\{\bar b \bar e_{{\sigma}} \bar f \bar D_{{\sigma}} (\bar b')^\tau \mid b,b' \in \mathfrak{B}_{\sigma},\ f \in X_{\sigma}\}$ is an ${\mathcal O}$-basis for $\bar R_{{\alpha}}$.
\item[\textrm{(iv)}] The elements $\bar y_{{\beta},1},\dots,\bar y_{{\beta},p}$ are algebraically independent.
\end{enumerate}
\end{Proposition}
The proof of the proposition will occupy the rest of this subsection. It goes by induction on $p\,\operatorname{ht}({\beta})$. If ${\beta}$ is a simple root, then $R_{\alpha}=\bar R_{\alpha}$ is exactly the nilHecke algebra, and we are done by Theorem~\ref{TnilHeckeCellBasis}. For the rest of the section, we assume that the proposition holds with ${\sigma} = ({\gamma}^s) \in \Pi(s{\gamma})$ whenever ${\gamma}\in\Phi_+$ and $s\,\operatorname{ht}({\gamma}) < p\,\operatorname{ht}({\beta})$, and prove that it also holds for ${\sigma} = ({\beta}^p)$. We shall also assume that ${\mathcal O}=F$ is a field, and then use Lemma~\ref{PFieldToZ} to lift to ${\mathbb Z}$-forms.
\begin{Lemma}\label{L050413_1}
Assume that $p=1$. Then Proposition~\ref{LPowerBasis} holds.
\end{Lemma}
\begin{proof}
Since $L({\beta})$ is the unique simple module in $\mod{\bar R_{\beta}}$ and
$$\operatorname{Hom}_{\bar R_{\beta}}(\Delta({\beta}), L({\beta})) = e_{\beta} L({\beta}) = Fv_{\beta}^-$$
is one-dimensional by Hypothesis~\ref{HProp}, it follows that $\Delta({\beta})$ is the projective cover of $L({\beta})$ in $\mod{\bar R_{\beta}}$ under the map $\bar e_{\beta} \mapsto v_{\beta}^-$. All composition factors of $\Delta({\beta})$ are isomorphic to $L({\beta})$. Therefore, lifting the basis $\{b v_{\beta}^- \mid b \in \mathfrak{B}_{\beta}\}$ of $L({\beta})$ to $\Delta({\beta})$, we see that $\Delta({\beta})$ is spanned by
$$\{\bar b\, {\varphi}(\bar e_{\beta}) \mid b \in \mathfrak{B}_{\beta},\ {\varphi} \in \operatorname{End}_{\bar R_{\beta}}(\Delta({\beta}))\}.$$
By Hypothesis~\ref{HProp}(v), $\operatorname{End}_{\bar R_{\beta}}(\Delta({\beta})) \simeq \bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by $\bar e_{\beta}\bar y_{\beta} \bar e_{\beta}=\bar y_{\beta} \bar e_{\beta}$. Thus
$$\Delta({\beta}) = F\text{-span}\{\bar b \bar f \bar e_{\beta} \mid b \in \mathfrak{B}_{\beta},\ f \in X_{\beta} \}.$$
Analogously, $\Delta'({\beta})$ is the projective cover of $L({\beta})^\tau$ as a right $\bar R_{\beta}$-module under the map $\bar e_{\beta} \mapsto v_{\beta}^+$.
As above, lifting the basis $\{v_{\beta}^+ D_{\beta} b^\tau \mid b \in \mathfrak{B}_{\beta}\}$ of $L({\beta})^\tau$ to $\Delta'({\beta})$, we see that
$$\Delta'({\beta}) = F\text{-span}\{\bar e_{\beta} \bar f \bar D_{\beta}\, \bar b^\tau \mid b \in \mathfrak{B}_{\beta},\ f \in X_{\beta}\}.$$
Therefore, by Lemma~\ref{CProp} and Hypothesis~\ref{HProp}(iv),
$$\bar R_{\beta} = \bar R_{\beta} \bar e_{\beta} \bar R_{\beta} = F\text{-span}\{\bar b \bar e_{\beta} \bar f \bar D_{\beta} (\bar b')^\tau \mid b,b' \in \mathfrak{B}_{\beta},\ f \in X_{\beta} \}.$$
Let $\pi = ({\beta}_1^{p_1},\dots,{\beta}_N^{p_N}) > ({\beta})$. By definition and \cite[Proposition 2.16]{KL1} we have
\begin{align*}
I_\pi &= R_{\beta} e(\text{\boldmath$i$}_\pi) R_{\beta} + I_{>\pi}\\
&= \sum_{u,v \in \mathfrak{S}^{(\pi)}} \psi_u R_\pi e(\text{\boldmath$i$}_\pi) R_\pi \psi_v^\tau + I_{>\pi} \subseteq \sum_{u,v \in \mathfrak{S}^{(\pi)}} \psi_u R_\pi \psi_v^\tau + I_{>\pi},
\end{align*}
because $e(\text{\boldmath$i$}_\pi) \in R_\pi$. The opposite inclusion follows from Lemma~\ref{LIotaImage}. For $n=1,\dots,N$, define
$$B_n := \{b e_{({\beta}_n^{p_n})} f D_{({\beta}_n^{p_n})} (b')^\tau \mid b,b' \in \mathfrak{B}_{({\beta}_n^{p_n})},\ f \in X_{({\beta}_n^{p_n})}\}.$$
By part (iii) of the induction hypothesis, for $n=1,\dots,N$, the image of $B_n$ in $\bar R_{p_n {\beta}_n}$ is a basis. Let
$$
B_\pi:=\{\iota_\pi(b_1\otimes\dots\otimes b_N)\mid b_1\in\mathfrak{B}_{({\beta}_1^{p_1})},\dots, b_N\in\mathfrak{B}_{({\beta}_N^{p_N})}\}.
$$
By Corollary~\ref{LParaIdeal2} and the definitions from Section~\ref{SNota},
\begin{align*}
R_\pi + I_{>\pi} &= F\text{-span}\{\iota_\pi(r_1\otimes \dots\otimes r_N) \mid r_n \in B_n \text{ for } n=1,\dots,N\} + I_{>\pi}\\
&= F\text{-span}\{b e_\pi f D_\pi (b')^\tau \mid b, b' \in B_\pi,\ f \in X_\pi\} + I_{>\pi},
\end{align*}
and therefore
$$
I_\pi = F\text{-span}\{\psi_u b e_\pi f D_\pi (b')^\tau \psi_v^\tau \mid u,v \in \mathfrak{S}^{(\pi)},\ b, b' \in B_\pi,\ f \in X_\pi\} + I_{>\pi}.
$$
By the definition of $\mathfrak{B}_\pi$ we have
\begin{align}
I_\pi &= F\text{-span}\{b e_\pi f D_\pi (b')^\tau \mid b, b' \in \mathfrak{B}_\pi,\ f \in X_\pi\} + I_{>\pi},\label{EIpi}\\
R_{\beta} &= \sum_{\pi \in \Pi({\beta})} F\text{-span}\{b e_\pi f D_\pi (b')^\tau \mid b, b' \in \mathfrak{B}_\pi,\ f \in X_\pi\}.
\end{align}
Using (\ref{EDimUpper}) and the equality $\deg(D_\pi) = 2\deg(v_\pi^-)$ for all $\pi \in \Pi({\beta})$, we get
\begin{align*}
\dim_q(R_{\beta}) &= \sum_{\pi\in\Pi({\beta})}\dim_q(F\text{-span}\{b e_\pi f D_\pi (b')^\tau \mid b, b' \in \mathfrak{B}_\pi,\ f \in X_\pi\})\\
&\leq \sum_{\pi\in\Pi({\beta})}\Big(\sum_{b \in \mathfrak{B}_\pi}q^{\deg(b)}\Big) \dim_q(\Lambda_\pi)\, q^{\deg(D_\pi)} \Big(\sum_{b \in \mathfrak{B}_\pi}q^{\deg(b)}\Big)\\
&\leq \sum_{\pi\in\Pi({\beta})}\Big(\sum_{b \in \mathfrak{B}_\pi}q^{\deg(b v_\pi^-)}\Big)^2 l_\pi\\
&= \sum_{\pi\in\Pi({\beta})}\dim_q(\bar\Delta(\pi))^2 l_\pi = \dim_q(R_{\beta}),
\end{align*}
by Corollary~\ref{TDim}.
The inequalities are therefore equalities, and this implies that the spanning set $\{b e_\pi f D_\pi (b')^\tau \mid \pi \in \Pi({\beta}),\ b, b' \in \mathfrak{B}_\pi,\ f \in X_\pi\}$ of $R_{\beta}$ is a basis, and that $\operatorname{dim}_q \Lambda_{\pi}=l_\pi$ for all $\pi$. This yields (iii) and (iv) of Proposition~\ref{LPowerBasis} in our special case $p=1$. To show (i) and (ii), we have already noted that the claimed bases span $\Delta({\beta})$ and $\Delta'({\beta})$, respectively. We now apply part (iii) to see that they are linearly independent.
\end{proof}
\begin{Corollary}\label{CFree}
We have:
\begin{enumerate}
\item[\textrm{(i)}] $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is a polynomial algebra in the variable $\bar y_{\beta} \bar e_{\beta}$.
\item[\textrm{(ii)}] $\Delta({\beta})$ is a free right $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$-module with basis $\{\bar b \bar e_{\beta} \mid b \in \mathfrak{B}_{\beta}\}$.
\item[\textrm{(iii)}] $\Delta'({\beta})$ is a free left $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$-module with basis $\{\bar e_{\beta} \bar D_{\beta} \bar b^\tau \mid b \in \mathfrak{B}_{\beta}\}$.
\end{enumerate}
\end{Corollary}
\begin{proof}
By the lemma, Proposition~\ref{LPowerBasis} holds for $p=1$. Now (i) follows from parts (i) and (iv) of the proposition, and the remaining statements follow from parts (i) and (ii).
\end{proof}
\begin{Corollary}\label{CGrot}
In the Grothendieck group, we have $[\Delta({\beta})] = [L({\beta})]/(1-q_{\beta}^2)$.
\end{Corollary}
\begin{Lemma}\label{LInduct}
Up to a degree shift, $\Delta({\beta})^{\circ p} \cong \bar R_{p{\beta}} \bar e_{{\beta}^{\boxtimes p}}$.
\end{Lemma}
\begin{proof}
By Corollary~\ref{COneRootIdeal} we have a map
$$
\Delta({\beta})^{\boxtimes p} \to \operatorname{Res}_{{\beta}, \dots, {\beta}}(\bar R_{p{\beta}} \bar e_{{\beta}^{\boxtimes p}}),\quad \bar e_{\beta}^{\otimes p} \mapsto \bar e_{{\beta}^{\boxtimes p}}.
$$
By Frobenius reciprocity, we obtain a map
$$
\mu: \Delta({\beta})^{\circ p} \to \bar R_{p{\beta}} \bar e_{{\beta}^{\boxtimes p}},\quad 1_{{\beta}, \dots, {\beta}} \otimes \bar e_{\beta}^{\otimes p} \mapsto \bar e_{{\beta}^{\boxtimes p}}.
$$
We now show that $I_{>({\beta}^p)} \Delta({\beta})^{\circ p} = 0$. It is enough to prove that $\operatorname{Res}_\pi \Delta({\beta})^{\circ p} = 0$ for all $\pi > ({\beta}^p)$. Since all composition factors of $\Delta({\beta})$ are isomorphic to $L({\beta})$, it follows that all composition factors of $\Delta({\beta})^{\circ p}$ are isomorphic to $L({\beta})^{\circ p} \cong L({\beta}^p)$. By Theorem~\ref{TStand}(vi), $\operatorname{Res}_\pi(L({\beta})^{\circ p}) = 0$, which proves the claim. Since $e_{{\beta}^{\boxtimes p}}\, 1_{{\beta}, \dots, {\beta}} \otimes \bar e_{\beta}^{\otimes p} = 1_{{\beta}, \dots, {\beta}} \otimes \bar e_{\beta}^{\otimes p}$, we obtain a map
\begin{align*}
\nu: \bar R_{p{\beta}} \bar e_{{\beta}^{\boxtimes p}} &\to \Delta({\beta})^{\circ p},\quad \bar e_{{\beta}^{\boxtimes p}} \mapsto 1_{{\beta}, \dots, {\beta}} \otimes \bar e_{\beta}^{\otimes p}.
\end{align*}
The homomorphisms $\mu$ and $\nu$ map the evident cyclic generators to each other, and so are inverse isomorphisms.
\end{proof}
\begin{Lemma}\label{LMackeyPsi}
There exists an endomorphism of $\Delta({\beta}) \circ \Delta({\beta})$ which sends $1_{{\beta},{\beta}} \otimes (\bar e_{\beta} \otimes \bar e_{\beta})$ to $\psi_{{\beta},1} 1_{{\beta},{\beta}} \otimes (\bar e_{\beta} \otimes \bar e_{\beta})$.
\end{Lemma}
\begin{proof}
Apply the Mackey theorem to $\operatorname{Res}_{{\beta},{\beta}}(\Delta({\beta}) \circ \Delta({\beta}))$. We get a short exact sequence of $R_{\beta} \boxtimes R_{\beta}$-modules
$$
0 \to \Delta({\beta}) \boxtimes \Delta({\beta}) \to \operatorname{Res}_{{\beta},{\beta}}(\Delta({\beta}) \circ \Delta({\beta})) \to (\Delta({\beta}) \boxtimes \Delta({\beta}))\langle-{\beta}\cdot{\beta}\rangle \to 0,
$$
where $\psi_{{\beta},1} 1_{{\beta},{\beta}} \otimes (\bar e_{\beta} \otimes\bar e_{\beta})\in \operatorname{Res}_{{\beta},{\beta}}(\Delta({\beta}) \circ \Delta({\beta}))$ is a preimage of the standard generator of $(\Delta({\beta}) \boxtimes \Delta({\beta}))\langle-{\beta}\cdot{\beta}\rangle$. We now show that this is actually a sequence of $\bar R_{\beta} \boxtimes \bar R_{\beta}$-modules. It is sufficient to show that for any $\pi > ({\beta})$, we have
$$
\operatorname{Res}_{\pi,{\beta}} \circ \operatorname{Res}_{{\beta},{\beta}}(\Delta({\beta}) \circ \Delta({\beta})) = 0 = \operatorname{Res}_{{\beta},\pi} \circ \operatorname{Res}_{{\beta},{\beta}}(\Delta({\beta}) \circ \Delta({\beta})).
$$
We show the first equality, the second being similar. All composition factors of $\Delta({\beta})$ are isomorphic to $L({\beta})$, so all composition factors of $\Delta({\beta}) \circ \Delta({\beta})$ are isomorphic to $L({\beta}) \circ L({\beta})$, and thus all composition factors of $\operatorname{Res}_{{\beta},{\beta}}(\Delta({\beta}) \circ \Delta({\beta}))$ are isomorphic to $L({\beta}) \boxtimes L({\beta})$. Theorem~\ref{TStand} now tells us that $\operatorname{Res}_{\pi}(L({\beta})) = 0$ for all $\pi > ({\beta})$. By the projectivity of $\Delta({\beta})$ as a $\bar R_{\beta}$-module, the short exact sequence splits, giving the required endomorphism by Frobenius reciprocity.
\end{proof}
\begin{Corollary}\label{C030413_1}
$\bar \psi_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}} = \bar e_{{\beta}^{\boxtimes 2}} \bar \psi_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}}$.
\end{Corollary}
\begin{proof}
Let ${\varphi}$ be the endomorphism of $\Delta({\beta}) \circ \Delta({\beta})$ constructed in Lemma~\ref{LMackeyPsi}, regarded as an endomorphism of $\bar R_{2{\beta}} \bar e_{{\beta}^{\boxtimes 2}}$ via Lemma~\ref{LInduct}. Then
$$
\bar \psi_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}} = {\varphi}(\bar e_{{\beta}^{\boxtimes 2}}) = {\varphi}(\bar e_{{\beta}^{\boxtimes 2}}^2) = \bar e_{{\beta}^{\boxtimes 2}} {\varphi}(\bar e_{{\beta}^{\boxtimes 2}}) = \bar e_{{\beta}^{\boxtimes 2}} \bar \psi_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}},
$$
as required.
\end{proof}
\begin{Corollary}\label{C030413_2}
We have $\bar e_{{\beta}^{\boxtimes p}}\bar e_{({\beta}^p)} \bar e_{{\beta}^{\boxtimes p}}=\bar e_{({\beta}^p)}$.
\end{Corollary}
\begin{proof}
This follows from (\ref{EDeSi}), Corollary~\ref{C030413_1} and Hypothesis~\ref{HProp}.
\end{proof}
\begin{Lemma}\label{LEndBasis}
The set $\{\bar e_{{\beta}^{\boxtimes p}} \bar y_{{\beta},1}^{a_1} \dots \bar y_{{\beta},p}^{a_p} \bar \psi_{{\beta},w} \bar e_{{\beta}^{\boxtimes p}} \mid w \in \mathfrak{S}_p,\ a_1, \dots, a_p \geq 0 \}$ is a linear basis of $\bar e_{{\beta}^{\boxtimes p}} \bar R_{{\alpha}} \bar e_{{\beta}^{\boxtimes p}}$.
\end{Lemma}
\begin{proof}
The elements above are linearly independent by Lemmas~\ref{LInduct} and \ref{LMackeyPsi}, and Corollary~\ref{CFree}.
We use Frobenius reciprocity, Corollary~\ref{CGrot}, and \cite[Lemma 2.11]{BKM} to see that
\begin{align*}
\operatorname{dim}_q \operatorname{End}_{R_{\alpha}}(\Delta({\beta})^{\circ p}) &= \operatorname{dim}_q \operatorname{Hom}_{R_{{\beta},\dots,{\beta}}}(\Delta({\beta})^{\boxtimes p}, \operatorname{Res}_{{\beta},\dots,{\beta}} \Delta({\beta})^{\circ p})\\
&\leq [\operatorname{Res}_{{\beta},\dots,{\beta}} \Delta({\beta})^{\circ p} : L({\beta})^{\boxtimes p}]\\
&= [\operatorname{Res}_{{\beta},\dots,{\beta}} L({\beta})^{\circ p} : L({\beta})^{\boxtimes p}]/(1-q_{\beta}^2)^p\\
&= q_{\beta}^{-\frac{1}{2}p(p-1)} [p]_{\beta}^! / (1-q_{\beta}^2)^p.
\end{align*}
By the formula for the Poincar\'e polynomial of $\mathfrak{S}_p$, we have shown that
\[
\operatorname{dim}_q \bar e_{{\beta}^{\boxtimes p}} \bar R_{{\alpha}} \bar e_{{\beta}^{\boxtimes p}} \leq \frac{\sum_{w \in \mathfrak{S}_p}q_{\beta}^{-2l(w)}}{(1-q_{\beta}^2)^p},
\]
so the proposed basis also spans.
\end{proof}
The next two lemmas are proved using ideas that already appeared in the proofs of \cite[Lemmas]{BKM}.
\begin{Lemma}\label{LNHRel1}
We have
\begin{align*}
\bar \psi_{{\beta}, r}^2 \bar e_{{\beta}^{\boxtimes p}} &= 0, &&\text{for } 1 \leq r \leq p-1,\\
\bar \psi_{{\beta}, r} \bar \psi_{{\beta}, s} \bar e_{{\beta}^{\boxtimes p}} &= \bar \psi_{{\beta}, s} \bar \psi_{{\beta}, r} \bar e_{{\beta}^{\boxtimes p}}, &&\text{for } |r-s| > 1, \text{ and}\\
\bar \psi_{{\beta}, r} \bar \psi_{{\beta}, r+1} \bar \psi_{{\beta}, r} \bar e_{{\beta}^{\boxtimes p}} &= \bar \psi_{{\beta}, r+1} \bar \psi_{{\beta}, r} \bar \psi_{{\beta}, r+1} \bar e_{{\beta}^{\boxtimes p}}, &&\text{for } 1 \leq r \leq p-2.
\end{align*}
\end{Lemma}
\begin{proof}
We use Lemma~\ref{LInduct} to identify $\bar R_{p{\beta}} \bar e_{{\beta}^{\boxtimes p}}$ with $\Delta({\beta})^{\circ p}$. It is enough to prove the first relation in the case $p=2$.
The Mackey theorem analysis in the proof of Lemma~\ref{LMackeyPsi} shows that, as a graded vector space,
\begin{equation}\label{EDeSquare}
(\Delta({\beta}) \circ \Delta({\beta}))_{\text{\boldmath$i$}_{\beta}^2} = e(\text{\boldmath$i$}_{\beta}^2) \otimes (\Delta({\beta}) \boxtimes \Delta({\beta})) \oplus \psi_{{\beta},1} e(\text{\boldmath$i$}_{\beta}^2) \otimes (\Delta({\beta}) \boxtimes \Delta({\beta})).
\end{equation}
The vector $\bar e_{\beta} \in \Delta({\beta})_{\text{\boldmath$i$}_{\beta}}$ is of minimal degree, and thus $\psi_{{\beta},1} e(\text{\boldmath$i$}_{\beta}^2) \otimes (\bar e_{\beta} \otimes \bar e_{\beta})$ is of minimal degree in $(\Delta({\beta}) \circ \Delta({\beta}))_{\text{\boldmath$i$}_{\beta}^2}$. The degree of $\psi_{{\beta},1}^2 e(\text{\boldmath$i$}_{\beta}^2) \otimes (\bar e_{\beta} \otimes \bar e_{\beta})$ is smaller by ${\beta}\cdot{\beta}$, so this vector is zero. The second relation is clear from the definitions. To prove the third relation, it is sufficient to consider $p=3$. Let $w_r := w_{{\beta},r}$, and set $w_0:=w_1w_2w_1$. Using the defining relations of $R_{3{\beta}}$, we deduce that $(\psi_{{\beta}, 2} \psi_{{\beta}, 1} \psi_{{\beta}, 2} - \psi_{{\beta}, 1} \psi_{{\beta}, 2} \psi_{{\beta}, 1}) e(\text{\boldmath$i$}_{\beta}^3) \otimes (\bar e_{\beta} \otimes \bar e_{\beta} \otimes \bar e_{\beta})$ is an element of degree $3\deg(v_{\beta}^-)-6{\beta}\cdot{\beta}$ in $S:=\sum_{w < w_0} \psi_w e(\text{\boldmath$i$}_{\beta}^3) \otimes (\Delta({\beta}) \boxtimes \Delta({\beta}) \boxtimes \Delta({\beta}))$, where $<$ denotes the Bruhat order.
By a Mackey theorem analysis as in the proof of Lemma~\ref{LMackeyPsi}, we see that
$$S = \sum_{w \in \{1,w_1,w_2,w_1w_2,w_2w_1\}} \psi_w e(\text{\boldmath$i$}_{\beta}^3) \otimes (\Delta({\beta}) \boxtimes \Delta({\beta}) \boxtimes \Delta({\beta})).$$
The lowest degree of an element of $S$ is therefore $3\deg(v_{\beta}^-)-4{\beta}\cdot{\beta}$, and the third relation is proved.
\end{proof}
\begin{Lemma}\label{LNHRel2}
There exists a unique choice of ${\varepsilon}_{\beta} = \pm 1$ such that
\begin{align*}
\bar \psi_{{\beta}, r}\, \bar y_{{\beta}, s} \bar e_{{\beta}^{\boxtimes p}} &= \bar y_{{\beta}, s} \bar \psi_{{\beta}, r} \bar e_{{\beta}^{\boxtimes p}}, &&\text{for } s \neq r, r+1,\\
\bar \psi_{{\beta}, r}\, {\varepsilon}_{\beta} \bar y_{{\beta}, r+1} \bar e_{{\beta}^{\boxtimes p}} &= ({\varepsilon}_{\beta} \bar y_{{\beta}, r} \bar \psi_{{\beta}, r} + 1) \bar e_{{\beta}^{\boxtimes p}}, &&\text{for } 1 \leq r < p, \text{ and}\\
{\varepsilon}_{\beta} \bar y_{{\beta}, r+1} \bar \psi_{{\beta}, r} \bar e_{{\beta}^{\boxtimes p}} &= (\bar \psi_{{\beta}, r}\, {\varepsilon}_{\beta} \bar y_{{\beta}, r} + 1) \bar e_{{\beta}^{\boxtimes p}}, &&\text{for } 1 \leq r < p.
\end{align*}
\end{Lemma}
\begin{proof}
The first relation is clear from the definitions. It is enough to prove the remaining relations for $p=2$. Using the defining relations of $R_{2{\beta}}$ and a Mackey theorem analysis as in the proof of Lemma~\ref{LMackeyPsi}, we deduce that
\begin{align*}
(\bar \psi_{{\beta}, 1} \bar y_{{\beta}, 2} - \bar y_{{\beta}, 1} \bar \psi_{{\beta}, 1})\bar e_{{\beta}^{\boxtimes 2}} &\in \sum_{w<w_{{\beta},1}}\psi_w e(\text{\boldmath$i$}_{\beta}^2) \otimes (\Delta({\beta}) \boxtimes \Delta({\beta}))\\
&= e(\text{\boldmath$i$}_{\beta}^2) \otimes (\Delta({\beta}) \boxtimes \Delta({\beta})),
\end{align*}
and the only vector of the correct degree is $\bar e_{{\beta}^{\boxtimes 2}}$.
Therefore (working over ${\mathbb Z}$) we must have
$$(\bar \psi_{{\beta}, 1} \bar y_{{\beta}, 2} - \bar y_{{\beta}, 1} \bar \psi_{{\beta}, 1}) \bar e_{{\beta}^{\boxtimes 2}} = c_+ \bar e_{{\beta}^{\boxtimes 2}}$$
for some $c_+ \in {\mathbb Z}$. Similarly, we obtain
$$(\bar \psi_{{\beta}, 1} \bar y_{{\beta}, 1} - \bar y_{{\beta}, 2} \bar \psi_{{\beta}, 1}) \bar e_{{\beta}^{\boxtimes 2}} = c_- \bar e_{{\beta}^{\boxtimes 2}}$$
for some $c_- \in {\mathbb Z}$. We compute
\begin{align*}
(\bar \psi_{{\beta},1} \bar y_{{\beta},1} \bar y_{{\beta},2} - \bar y_{{\beta},1} \bar y_{{\beta},2} \bar \psi_{{\beta},1}) \bar e_{{\beta}^{\boxtimes 2}} &= \big((\bar y_{{\beta},2} \bar \psi_{{\beta},1} + c_-)\bar y_{{\beta},2} - \bar y_{{\beta},2} (\bar \psi_{{\beta},1} \bar y_{{\beta},2} - c_+)\big)\bar e_{{\beta}^{\boxtimes 2}}\\
&= (c_-+c_+)\bar y_{{\beta},2} \bar e_{{\beta}^{\boxtimes 2}},\\
(\bar \psi_{{\beta},1} \bar y_{{\beta},1} \bar y_{{\beta},2} - \bar y_{{\beta},1} \bar y_{{\beta},2} \bar \psi_{{\beta},1}) \bar e_{{\beta}^{\boxtimes 2}} &= \big((\bar y_{{\beta},1} \bar \psi_{{\beta},1} + c_+)\bar y_{{\beta},1} - \bar y_{{\beta},1} (\bar \psi_{{\beta},1} \bar y_{{\beta},1} - c_-)\big)\bar e_{{\beta}^{\boxtimes 2}}\\
&= (c_++c_-)\bar y_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}},
\end{align*}
and since $\bar y_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}}$ and $\bar y_{{\beta},2} \bar e_{{\beta}^{\boxtimes 2}}$ are linearly independent by Lemma~\ref{LEndBasis}, we must have $c_-=-c_+$. Set ${\varepsilon}_{\beta} := c_+$. We now fix a prime $p$ and extend scalars to ${\mathbb F}_p$. Suppose that ${\varepsilon}_{\beta} = 0 \in {\mathbb F}_p$, so that
\begin{align*}
\bar \psi_{{\beta}, 1} \bar y_{{\beta}, 2} \bar e_{{\beta}^{\boxtimes 2}} &= \bar y_{{\beta}, 1} \bar \psi_{{\beta}, 1} \bar e_{{\beta}^{\boxtimes 2}},\\
\bar \psi_{{\beta}, 1} \bar y_{{\beta}, 1} \bar e_{{\beta}^{\boxtimes 2}} &= \bar y_{{\beta}, 2} \bar \psi_{{\beta}, 1} \bar e_{{\beta}^{\boxtimes 2}}.
\end{align*}
Define $S$ to be the submodule of $\Delta({\beta}) \circ \Delta({\beta})$ generated by $\bar y_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}}$ and $\bar y_{{\beta},2} \bar e_{{\beta}^{\boxtimes 2}}$. The above equations show that the endomorphism defined by right multiplication by $\bar \psi_{{\beta},1} \bar e_{{\beta}^{\boxtimes 2}}$ leaves $S$ invariant. On the other hand, $\Delta({\beta}) \circ \Delta({\beta}) / S \cong L({\beta}) \circ L({\beta})$ is irreducible. Since the endomorphism algebra of an irreducible module is one-dimensional, we have a contradiction. Therefore ${\varepsilon}_{\beta} \neq 0$ when reduced modulo any prime, i.e.\ ${\varepsilon}_{\beta} = \pm 1$.
\end{proof}
\begin{Corollary}\label{C030413_3}
The homomorphism from the nilHecke algebra $H_p$ determined by
$$
\zeta: H_p \to \bar e_{{\beta}^{\boxtimes p}} \bar R_{{\alpha}} \bar e_{{\beta}^{\boxtimes p}},\quad y_r \mapsto {\varepsilon}_{\beta} \bar y_{{\beta}, r} \bar e_{{\beta}^{\boxtimes p}},\quad \psi_r \mapsto \bar \psi_{{\beta}, r} \bar e_{{\beta}^{\boxtimes p}}
$$
is an isomorphism. Under this isomorphism, the idempotent $e_p\in H_p$ is mapped onto $\bar e_{\sigma}$.
\end{Corollary}
\begin{proof}
Using Lemmas~\ref{LNHRel1} and~\ref{LNHRel2}, we see that the map exists. By Lemma~\ref{LEndBasis}, the map is an isomorphism. The second statement now follows using Corollary~\ref{C030413_2}.
\end{proof}
\begin{Corollary}\label{CSymm}
Given $f \in \Lambda_{\sigma}$, the element $\bar f$ commutes with $\bar \delta_{\sigma}$, $\bar e_{\sigma}$, and $\bar e_{\sigma} \bar D_{\sigma}$.
\end{Corollary}
\begin{proof}
It follows directly from Hypothesis~\ref{HProp}(iv) and the definitions that $\delta_{\sigma}$ commutes with every element of $P_{\sigma}$, and in particular with every element of the subalgebra $\Lambda_{\sigma}$. Denote by $w_0$ the longest element of $\mathfrak{S}_p$.
Then, by Corollaries~\ref{C030413_1} and~\ref{C030413_3},
\begin{align*}
\bar e_{\sigma} \bar D_{\sigma} &= \bar \psi_{{\beta},w_0} \bar y_{{\beta},2} \dots \bar y_{{\beta},p}^{p-1} \bar e_{{\beta}^{\boxtimes p}} \bar \psi_{{\beta},w_0}\, \iota(D_{\beta} \otimes \dots \otimes D_{\beta})\\
&= (\bar e_{{\beta}^{\boxtimes p}} \bar\psi_{{\beta},w_0} \bar e_{{\beta}^{\boxtimes p}})(\bar y_{{\beta},2} \dots \bar y_{{\beta},p}^{p-1})(\bar e_{{\beta}^{\boxtimes p}} \bar\psi_{{\beta},w_0} \bar e_{{\beta}^{\boxtimes p}})\,\iota(D_{\beta} \otimes \dots \otimes D_{\beta})\\
&= \zeta(\psi_{w_0})(\bar y_{{\beta},2} \dots \bar y_{{\beta},p}^{p-1})\zeta(\psi_{w_0})\,\iota(D_{\beta} \otimes \dots \otimes D_{\beta}).
\end{align*}
Any $f \in \Lambda_{\sigma}$ commutes with $\iota(D_{\beta} \otimes \dots \otimes D_{\beta})$ by Hypothesis~\ref{HProp}(iv). It is well known that the center of the nilHecke algebra $H_p$ is given by the symmetric functions $\Lambda_p$. In particular, every element of $\Lambda_p$ commutes with $\psi_{w_0}$. Let $g \in \Lambda_p$ be such that $\zeta(g) = \bar f \bar e_{{\beta}^{\boxtimes p}}$. Then $\zeta(\psi_{w_0}) \bar f = \zeta(\psi_{w_0} g) = \zeta(g \psi_{w_0}) = \bar f \zeta(\psi_{w_0})$. This implies the claim.
\end{proof}
We can now finish the proof of Proposition~\ref{LPowerBasis}. Corollary~\ref{C030413_3} provides an isomorphism $H_p \cong \operatorname{End}_{R_{\alpha}}(\bar R_{\alpha} \bar e_{{\beta}^{\boxtimes p}})$ under which the idempotent $e_p$ corresponds to right multiplication by $\bar e_{\sigma}$. But $e_p$ is a primitive idempotent, so the image $\bar R_{\alpha} \bar e_{\sigma} = \Delta({\sigma})$ of this endomorphism is an indecomposable projective $\bar R_{\alpha}$-module.
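For illustration (a standard nilHecke computation, assuming the usual relations $\psi_1^2=0$ and $y_2\psi_1=\psi_1 y_1+1$ in $H_2$), for $p=2$ the idempotent is $e_2=\psi_1 y_2$, matching, up to the sign ${\varepsilon}_{\beta}$, the element $\bar e_{({\beta}^2)}=\bar\psi_{{\beta},1}\bar y_{{\beta},2}\bar e_{{\beta}^{\boxtimes 2}}$ from \eqref{EDeSi}, and its idempotency can be checked directly:

```latex
e_2^2 \;=\; \psi_1\,(y_2\,\psi_1)\,y_2
      \;=\; \psi_1\,(\psi_1 y_1 + 1)\,y_2
      \;=\; \psi_1^2\, y_1 y_2 + \psi_1 y_2
      \;=\; \psi_1 y_2 \;=\; e_2 .
```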
We may identify
\begin{equation}\label{EEndo}
\operatorname{End}_{R_{\alpha}}(\Delta({\sigma})) \cong \bar e_{\sigma} \bar R_{\alpha} \bar e_{\sigma} = \zeta(e_p H_p e_p) = \zeta(\Lambda_p e_p) = \bar e_{\sigma} \bar \Lambda_{\sigma} \bar e_{\sigma} \cong \Lambda_{\sigma},
\end{equation}
where the action of $\Lambda_{\sigma}$ on $\Delta({\sigma})=\bar R_{\alpha} \bar e_{\sigma}$ is given by right multiplication, which makes sense in view of Corollary~\ref{CSymm}. Therefore $\Delta({\sigma}) {\twoheadrightarrow} L({\sigma}),\ \bar e_{\sigma} \mapsto v_{\sigma}^-$, is a projective cover in $\mod{\bar R_{{\alpha}}}$. Furthermore, since $\mod{\bar R_{{\alpha}}}$ has only one irreducible module, every composition factor of $\Delta({\sigma})$ is isomorphic to $L({\sigma})$ with an appropriate degree shift. We can lift the basis $\{b v_{\sigma}^- \mid b \in \mathfrak{B}_{\sigma}\}$ of $L({\sigma})$ to the set $\{\bar b \bar e_{\sigma} \mid b \in \mathfrak{B}_{\sigma}\} \subseteq \Delta({\sigma})$. Using the basis $X_{\sigma}$ of $\Lambda_{\sigma}$, we get a basis $\{\bar b \bar f \bar e_{\sigma} \mid b \in \mathfrak{B}_{\sigma},\ f \in X_{\sigma}\}$ of $\Delta({\sigma})$. Similarly, $\Delta'({\sigma}) {\twoheadrightarrow} L({\sigma})^\tau,\ \bar e_{\sigma} \mapsto v_{\sigma}^+$, is a projective cover in $\mod{\bar R_{{\alpha}}^{op}}$. It is immediate that $\{v_{\sigma}^+ D_{\sigma} b^\tau \mid b \in \mathfrak{B}_{\sigma}\}$ is a basis of $L({\sigma})^\tau$. Lifting as above, we see that $\{\bar e_{\sigma} \bar f\bar D_{\sigma} \bar b^\tau \mid b \in \mathfrak{B}_{\sigma},\ f \in X_{\sigma}\}$ is a basis of $\Delta'({\sigma})$. Finally, applying the multiplication map and Corollary~\ref{CSymm}, we see that $\{\bar b \bar e_{\sigma} \bar f \bar D_{\sigma} (\bar b')^\tau \mid b,b' \in \mathfrak{B}_{\sigma},\ f \in X_{\sigma}\}$ spans $\bar R_{{\alpha}}$.
Therefore, by induction,
$$R_{{\alpha}} = F\text{-span}\{b e_\pi f D_\pi (b')^\tau \mid \pi \in \Pi({\alpha}),\ b,b' \in \mathfrak{B}_\pi,\ f \in X_\pi\},$$
and comparing graded dimensions with Corollary~\ref{TDim} as in the proof of Lemma~\ref{L050413_1}, this set is therefore a basis. \qed
\subsection{General case}
In this subsection we use the results of the previous subsections to obtain affine cellular bases of the KLR algebras of finite type. Fix ${\alpha} \in Q_+$ and $\pi=({\beta}_1^{p_1}, \dots, {\beta}_N^{p_N}) \in \Pi({\alpha})$. Define $\bar R_{\alpha} := R_{\alpha} / I_{>\pi}$, and write $\bar r \in \bar R_{\alpha}$ for the image of an element $r \in R_{\alpha}$. We begin with some easy consequences of the previous section.
\begin{Corollary}\label{CSymmGen}
We have:
\begin{enumerate}
\item[\textrm{(i)}] Given $f \in \Lambda_\pi$, the element $\bar f$ commutes with $\bar \delta_\pi$, $\bar e_\pi$, and $\bar e_\pi \bar D_\pi$.
\item[\textrm{(ii)}] Up to a grading shift, $\Delta(\pi) \cong \Delta({\beta}_1^{p_1}) \circ \dots \circ \Delta({\beta}_N^{p_N})$.
\item[\textrm{(iii)}] The map $\Lambda_\pi \to \operatorname{End}_{\bar R_{\alpha}}(\Delta(\pi))$ sending $f$ to right multiplication by $\bar e_\pi \bar f \bar e_\pi$ is an isomorphism of algebras.
\item[\textrm{(iv)}] The map $\Lambda_\pi \to \operatorname{End}_{\bar R_{\alpha}}(\Delta'(\pi))$ sending $f$ to left multiplication by $\bar e_\pi \bar f \bar e_\pi$ is an isomorphism of algebras.
\end{enumerate}
\end{Corollary}
\begin{proof}
Claim (i) follows directly from Corollary~\ref{CSymm} and the definitions. The proof of claim (ii) is similar to that of Lemma~\ref{LInduct}.
To be precise, by Corollary~\ref{LParaIdeal2} we have a map $$ {\mathtt D}e({\beta}_1^{p_1}) \boxtimes \dots \boxtimes {\mathtt D}e({\beta}_N^{p_N}) \to {\mathbb R}es_\pi{\mathtt D}e(\pi),\; \bar e_{({\beta}_1^{p_1})}\otimes \dots \otimes \bar e_{({\beta}_N^{p_N})} \mapsto \bar e_\pi, $$ which by Frobenius reciprocity determines a homomorphism $$ \mu: {\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N}) \to {\mathtt D}e(\pi),\; 1_\pi \otimes (\bar e_{({\beta}_1^{p_1})}\otimes \dots \otimes \bar e_{({\beta}_N^{p_N})}) \mapsto \bar e_\pi. $$ We now claim that $I_{>\pi} ({\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N})) = 0$. It is enough to prove that ${\mathbb R}es_{\sigma}({\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N})) = 0$ for all ${\sigma} > \pi$. By exactness of induction, it follows that ${\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N})$ has an exhaustive filtration whose factors are grading shifts of $L({\beta}_1^{p_1}) \circ \dots \circ L({\beta}_N^{p_N}) = \bar {\mathtt D}e(\pi)$. By Theorem~\ref{TStand}\textrm{(vi)}, ${\mathbb R}es_{\sigma}(\bar {\mathtt D}e(\pi)) = 0$, which proves the claim. Since $$e_\pi 1_\pi \otimes (\bar e_{({\beta}_1^{p_1})}\otimes \dots \otimes \bar e_{({\beta}_N^{p_N})}) = 1_\pi \otimes (\bar e_{({\beta}_1^{p_1})}\otimes \dots \otimes \bar e_{({\beta}_N^{p_N})}),$$ we obtain a map {\beta}gin{align*} {\mathfrak n}u: {\mathtt D}e(\pi) &\to {\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N}),\; \bar e_\pi \mapsto 1_\pi \otimes (\bar e_{({\beta}_1^{p_1})}\otimes \dots \otimes \bar e_{({\beta}_N^{p_N})}). \end{align*} The homomorphisms $\mu, {\mathfrak n}u$ map the evident cyclic generators to each other, and so are inverse isomorphisms. We use claim (ii) to identify ${\mathtt D}e(\pi)$ with ${\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N})$.
As noted in the proof of claim (ii), ${\mathtt D}e(\pi)$ has an exhaustive filtration by $$\bar {\mathtt D}e(\pi) = \operatorname{op}lus_{w \in \mathfrak{S}^{(\pi)}} \psi_w 1_\pi \otimes (\bar {\mathtt D}e({\beta}_1^{p_1}) \boxtimes \dots \boxtimes \bar {\mathtt D}e({\beta}_N^{p_N})).$$ By Theorem~\ref{TStand}(vi), ${\mathbb R}es_\pi\bar {\mathtt D}e(\pi)$ picks out the summand corresponding to $w=1$. Therefore ${\mathbb R}es_\pi{\mathtt D}e(\pi) \cong {\mathtt D}e({\beta}_1^{p_1}) \boxtimes \dots \boxtimes {\mathtt D}e({\beta}_N^{p_N})$. Applying Frobenius reciprocity and (\ref{EEndo}), we obtain {\beta}gin{align*} {\operatorname{End}}_{R_{\alpha}}({\mathtt D}e(\pi)) &= {\mathcal H}om_{R_{\alpha}}({\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N}), {\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N}))\\ &{\sigma}meq {\mathcal H}om_{R_\pi}({\mathtt D}e({\beta}_1^{p_1}) \boxtimes \dots \boxtimes {\mathtt D}e({\beta}_N^{p_N}), {\mathbb R}es_\pi({\mathtt D}e({\beta}_1^{p_1}) \circ \dots \circ {\mathtt D}e({\beta}_N^{p_N})))\\ &{\sigma}meq {\operatorname{End}}_{R_\pi}({\mathtt D}e({\beta}_1^{p_1}) \boxtimes \dots \boxtimes {\mathtt D}e({\beta}_N^{p_N}))\\ &{\sigma}meq {\operatorname{End}}_{R_{p_1{\beta}_1}}({\mathtt D}e({\beta}_1^{p_1})) \otimes \dots \otimes {\operatorname{End}}_{R_{p_N{\beta}_N}}({\mathtt D}e({\beta}_N^{p_N}))\\ &{\sigma}meq {\mathtt L}ambda_{({\beta}_1^{p_1})} \otimes \dots \otimes {\mathtt L}ambda_{({\beta}_N^{p_N})} {\sigma}meq {\mathtt L}ambda_\pi. \end{align*} This proves claim (iii), and claim (iv) is shown similarly.
\end{proof} {\beta}gin{Proposition}{\lambda}bel{LGenBasis} We have that {\beta}gin{enumerate} \item[\textrm{(i)}] $\{\bar b \bar f \bar e_\pi \mid b \in \mathfrak{B}_\pi, f \in X_\pi\}$ is an ${\mathcal O}$-basis for ${\mathtt D}e(\pi)$, \item[\textrm{(ii)}] $\{\bar e_{\pi} \bar f \bar D_{\pi} \bar b^\tau \mid b \in \mathfrak{B}_\pi, f \in X_\pi\}$ is an ${\mathcal O}$-basis for ${\mathtt D}e'(\pi)$, and \item[\textrm{(iii)}] $\{\bar b \bar e_{\pi} \bar f \bar D_{\pi} (\bar b')^\tau \mid b,b' \in \mathfrak{B}_\pi, f \in X_\pi\}$ is an ${\mathcal O}$-basis for $\bar I_\pi$. \end{enumerate} \end{Proposition} {\beta}gin{proof} For $n=1,\dots,N$, define $$B_n := \{\bar b \bar f \bar e_{({\beta}_n^{p_n})} \mid b \in \mathfrak{B}_{({\beta}_n^{p_n})}, f \in X_{({\beta}_n^{p_n})}\}.$$ By Proposition~\ref{LPowerBasis}, $B_n$ is a basis of ${\mathtt D}e({\beta}_n^{p_n})$ for each $n=1,\dots,N$. Let $\bar \iota_\pi: \bar R_{p_1 {\beta}_1} \otimes \dots \otimes \bar R_{p_N {\beta}_N} \to \bar R_{\alpha}$ be the map induced by $\iota_\pi$, as in Corollary~\ref{LParaIdeal2}. Using \cite[Proposition 2.16]{KL1}, and computing as in the proof of Lemma~\ref{L050413_1}, we have {\beta}gin{align*} {\mathtt D}e(\pi) &= \sum_{w \in \mathfrak{S}^{(\pi)}} \bar \psi_w \bar R_\pi \bar e_\pi = \sum_{w \in \mathfrak{S}^{(\pi)}} \bar \psi_w \bar \iota_\pi({\mathtt D}e({\beta}_1^{p_1}) \otimes \dots \otimes {\mathtt D}e({\beta}_N^{p_N}))\\ &= {\mathcal O}\operatorname{-span}\{\bar \psi_w \bar \iota_\pi(b_1 \otimes \dots \otimes b_N) \mid w \in \mathfrak{S}^{(\pi)}, b_n \in B_n\}\\ &= {\mathcal O}\operatorname{-span}\{\bar b \bar f \bar e_\pi \mid b \in \mathfrak{B}_\pi, f \in X_\pi\}. \end{align*} We have shown that the set in (i) spans ${\mathtt D}e(\pi)$. A similar argument shows that the set in (ii) spans ${\mathtt D}e'(\pi)$.
Now, applying the multiplication map ${\mathtt D}e(\pi) \otimes {\mathtt D}e'(\pi) {\twoheadrightarrow} \bar I_\pi$ and using Corollary~\ref{CSymmGen}(i) yields the spanning set of (iii). Letting $\pi$ vary over ${\mathtt P}i({\alpha})$, we have $$ R_{\alpha} = \sum_{\pi \in {\mathtt P}i({\alpha})} {\mathcal O}\operatorname{-span}\{b e_\pi f D_\pi (b')^\tau \mid b, b' \in \mathfrak{B}_\pi, f \in X_\pi\}. $$ Using (\ref{EDimUpper}) and the equality ${\delta}g(D_\pi) = 2{\delta}g(v_\pi^-)$ for all $\pi \in {\mathtt P}i({\alpha})$, we get {\beta}gin{align*} \dim_q(R_{\alpha}) &= \sum_{\pi\in{\mathtt P}i({\alpha})}\dim_q({\mathcal O}\operatorname{-span}\{b e_\pi f D_\pi (b')^\tau \mid b, b' \in \mathfrak{B}_\pi, f \in X_\pi\})\\ &\leq \sum_{\pi\in{\mathtt P}i({\alpha})}{\mathcal B}ig(\sum_{b \in \mathfrak{B}_\pi}q^{{\delta}g(b)}{\mathcal B}ig) \dim_q({\mathtt L}ambda_\pi) q^{{\delta}g(D_\pi)} {\mathcal B}ig(\sum_{b \in \mathfrak{B}_\pi}q^{{\delta}g(b)}{\mathcal B}ig)\\ &\leq \sum_{\pi\in{\mathtt P}i({\alpha})}{\mathcal B}ig(\sum_{b \in \mathfrak{B}_\pi}q^{{\delta}g(b v_\pi^-)}{\mathcal B}ig)^2 l_\pi\\ &= \sum_{\pi\in{\mathtt P}i({\alpha})}\dim_q(\bar {\mathtt D}e(\pi))^2 l_\pi = \dim_q(R_{\alpha}), \end{align*} by Corollary~\ref{TDim}. The inequalities are therefore equalities, and this implies that the spanning set $\{b e_\pi f D_\pi (b')^\tau \mid \pi \in {\mathtt P}i({\alpha}), b, b' \in \mathfrak{B}_\pi, f \in X_\pi\}$ of $R_{\alpha}$ is a basis and ${\operatorname{dim}_q}\, {\mathtt L}ambda_{\pi}=l_\pi$ for all $\pi$. To show (i) and (ii), we have already noted that the claimed bases span ${\mathtt D}e(\pi)$ and ${\mathtt D}e'(\pi)$, respectively. We now apply part (iii) to see that they are linearly independent.
\end{proof} {\beta}gin{Corollary} {\lambda}bel{CCellBasis} The set $\{ b e_{\pi} f D_{\pi} ( b')^\tau \mid \pi\in {\mathtt P}i({\alpha}),\ b,b' \in \mathfrak{B}_\pi, f \in X_\pi\}$ is an ${\mathcal O}$-basis for $R_{\alpha}$. \end{Corollary} {\beta}gin{proof} Apply Proposition~\ref{LGenBasis}(iii) and the fact that the filtration by the ideals $I_\pi$ exhausts $R_{\alpha}$, which follows from Lemma~\ref{LAllIdem}. \end{proof} \section{Affine cellularity} Recall the notion of an affine cellular algebra from the introduction. In this section, we fix ${\alpha} \in Q_+$ and prove that $R_{\alpha}$ is affine cellular over ${\mathbb Z}$ (which then implies that it is affine cellular over any $k$). For any $\pi \in {\mathtt P}i({\alpha})$, we define $$I_\pi' := {\mathbb Z}\text{-span}\{b e_\pi {\mathtt L}ambda_\pi D_\pi (b')^\tau \mid b, b' \in \mathfrak{B}_\pi\}.$$ By Corollary~\ref{CCellBasis}, we have $R_{\alpha}=\operatorname{op}lus_{\pi\in{\mathtt P}i({\alpha})}I_\pi'$. Moreover, $\tau(I_\pi') = I_\pi'$. Indeed, ${\delta}_\pi$ commutes with elements of ${\mathtt L}ambda_\pi$ in view of Hypothesis~\ref{HProp}(iv). So by Lemma~\ref{LDDelTau}, we have {\beta}gin{align*} \tau(I_\pi') &= {\mathbb Z}\text{-span}\{b' D_\pi^\tau {\mathtt L}ambda_\pi^\tau {\delta}_\pi^\tau D_\pi^\tau b^\tau \mid b, b' \in \mathfrak{B}_\pi\} \\ &= {\mathbb Z}\text{-span}\{b' D_\pi {\delta}_\pi {\mathtt L}ambda_\pi D_\pi b^\tau \mid b, b' \in \mathfrak{B}_\pi\} = I_\pi'. \end{align*} By Proposition~\ref{LGenBasis}, we have $I_\pi=\operatorname{op}lus_{{\sigma} {\mathfrak g}eq \pi} I_{\sigma}'$, and we have a nested family of ideals $(I_\pi)_{\pi\in{\mathtt P}i({\alpha})}$. To check that $R_{\alpha}$ is affine cellular, we need to verify that $\bar I_\pi:=I_\pi/I_{>\pi}$ is an affine cell ideal in $\bar R_{\alpha}:=R_{\alpha}/I_{>\pi}$. As usual we denote $\bar x:=x+I_{>\pi}\in \bar R_{\alpha}$ for $x\in R_{\alpha}$. 
The affine algebra $B$ in the definition of a cell ideal will be the algebra ${\mathtt L}ambda_\pi$, with the automorphism ${\sigma}$ being the identity map. The ${\mathbb Z}$-module $V$ will be the free ${\mathbb Z}$-module $V_\pi$ on the basis $\mathfrak{B}_\pi$. By Corollary~\ref{CSymmGen}(i) and Proposition~\ref{LGenBasis}, the following maps are isomorphisms of ${\mathtt L}ambda_\pi$-modules. {\beta}gin{align*} \eta_\pi&: V_\pi \otimes_{\mathbb Z} {\mathtt L}ambda_\pi \to {\mathtt D}e(\pi), b \otimes f \mapsto \bar b \bar f \bar e_\pi,\\ \eta'_\pi&: {\mathtt L}ambda_\pi \otimes_{\mathbb Z} V_\pi \to {\mathtt D}e'(\pi), f \otimes b \mapsto \bar e_\pi \bar f \bar D_\pi \bar b^\tau. \end{align*} This allows us to endow $V_\pi \otimes_{\mathbb Z} {\mathtt L}ambda_\pi$ with a structure of an $(R_{\alpha},{\mathtt L}ambda_\pi)$-bimodule and ${\mathtt L}ambda_\pi \otimes_{\mathbb Z} V_\pi$ with a structure of an $({\mathtt L}ambda_\pi,R_{\alpha})$-bimodule. In view of Corollary~\ref{CSymmGen}(iii),(iv) we see that ${\mathtt D}e(\pi)$ (resp. ${\mathtt D}e'(\pi)$) is a right (resp. left) ${\mathtt L}ambda_\pi$-module, and so we may define an $R_{\alpha}$-bimodule homomorphism $${\mathfrak n}u_\pi: {\mathtt D}e(\pi) \otimes_{{\mathtt L}ambda_\pi} {\mathtt D}e'(\pi) \to I_\pi / I_{>\pi}, \bar r \bar e_\pi \otimes \bar e_\pi \bar r' \mapsto \bar r \bar e_\pi \bar r'.$$ By Proposition~\ref{LGenBasis}, ${\mathfrak n}u_\pi$ is an isomorphism. Let $\mu_\pi:={\mathfrak n}u_\pi^{-1}$. This will be the map $\mu$ in the definition of a cell ideal. {\beta}gin{Theorem} The above data make $R_{\alpha}$ into an affine cellular algebra.
\end{Theorem} {\beta}gin{proof} To verify that $\bar I_\pi$ is a cell ideal in $\bar R_{\alpha}$, we first check that our $({\mathtt L}ambda_\pi,R_{\alpha})$-bimodule structure on ${\mathtt L}ambda_\pi\otimes_{\mathbb Z} V_\pi$ comes from our $(R_{\alpha},{\mathtt L}ambda_\pi)$-bimodule structure on $V_\pi \otimes_{\mathbb Z} {\mathtt L}ambda_\pi$ via the rule (\ref{ERightCell}). Let $\operatorname{s}_\pi: V_\pi \otimes_{\mathbb Z} {\mathtt L}ambda_\pi \stackrel{\sim}{\longrightarrow} {\mathtt L}ambda_\pi \otimes_{\mathbb Z} V_\pi$ be the swap map. The required compatibility is equivalent to the fact that the composition map {\beta}gin{equation}{\lambda}bel{EDeSwap} {\varphi}: {\mathtt D}e'(\pi) \stackrel{(\eta'_\pi)^{-1}}{\longrightarrow} {\mathtt L}ambda_\pi \otimes_{\mathbb Z} V_\pi \stackrel{\operatorname{s}_\pi^{-1}}{\rightarrow} V_\pi \otimes_{\mathbb Z} {\mathtt L}ambda_\pi \stackrel{\eta_\pi}{\to} {\mathtt D}e(\pi) = {\mathtt D}e(\pi)^\tau, \end{equation} is an isomorphism of right $R_{\alpha}$-modules. We already know that this is an isomorphism of ${\mathbb Z}$-modules, and so it suffices to check that $$ {\varphi}(\bar e_\pi\bar f \bar D_\pi \bar c^\tau \bar r)=\bar r^\tau {\varphi}(\bar e_\pi\bar f \bar D_\pi \bar c^\tau) $$ for all $f \in {\mathtt L}ambda_\pi$, $c \in \mathfrak{B}_\pi$, and $r\in R_{\alpha}$. Note that ${\varphi}(\bar e_\pi\bar f \bar D_\pi \bar c^\tau)=\bar c\bar f\bar e_\pi$. So we have to check {\beta}gin{equation}{\lambda}bel{E120613} {\varphi}(\bar e_\pi\bar f \bar D_\pi \bar c^\tau \bar r)=\bar r^\tau\bar c\bar f\bar e_\pi. \end{equation} By Proposition~\ref{LGenBasis}(ii) we can find $\{f_b \mid b \in \mathfrak{B}_\pi\} \subseteq {\mathtt L}ambda_\pi$ such that {\beta}gin{equation}{\lambda}bel{EAction} \bar e_\pi\bar f \bar D_\pi \bar c^\tau \bar r = \sum_{b \in \mathfrak{B}_\pi}{\bar e_\pi \bar f_b \bar D_\pi \bar b^\tau}.
\end{equation} Also, by Corollary~\ref{CSymmGen}(i), we have $$ \bar e_\pi\bar f \bar D_\pi \bar c^\tau = \bar f \bar e_\pi\bar D_\pi \bar c^\tau= \bar e_\pi\bar D_\pi \bar f \bar c^\tau $$ Using this and the $\tau$-invariance of $D_\pi$ and ${\delta}_\pi$, we get (\ref{E120613}) as follows: {\beta}gin{align*} \bar r^\tau\bar c\bar f\bar e_\pi &= \bar r^\tau\bar c\bar f \bar e_\pi^2 = \bar r^\tau\bar c\bar f \bar D_\pi^\tau \bar {\delta}_\pi^\tau \bar D_\pi^\tau \bar {\delta}_\pi^\tau = (\bar {\delta}_\pi \bar D_\pi \bar {\delta}_\pi \bar D_\pi \bar f \bar c^\tau \bar r)^\tau\\ &= (\bar {\delta}_\pi \bar e_\pi \bar D_\pi \bar f \bar c^\tau \bar r)^\tau = (\bar {\delta}_\pi \bar e_\pi \bar f \bar D_\pi \bar c^\tau \bar r)^\tau = (\bar {\delta}_\pi \sum_{b \in \mathfrak{B}_\pi} \bar e_\pi \bar f_b \bar D_\pi \bar b^\tau)^\tau\\ &= (\sum_{b \in \mathfrak{B}_\pi} \bar {\delta}_\pi \bar D_\pi \bar {\delta}_\pi \bar D_\pi \bar f_b \bar b^\tau)^\tau = \sum_{b \in \mathfrak{B}_\pi} \bar b \bar f_b \bar D_\pi \bar {\delta}_\pi \bar D_\pi \bar {\delta}_\pi = \sum_{b \in \mathfrak{B}_\pi} \bar b \bar f_b \bar e_\pi^2\\ &= \sum_{b \in \mathfrak{B}_\pi} \bar b \bar f_b \bar e_\pi, \end{align*} which equals the left hand side of (\ref{E120613}) by definition of ${\varphi}$. To complete the proof, it remains to verify the commutativity of (\ref{ECellCD}). This is equivalent to $$ \tau \circ {\mathfrak n}u_\pi \circ (\eta_\pi \otimes \eta'_\pi) ((b \otimes f) \otimes (f' \otimes b'))={\mathfrak n}u_\pi \circ (\eta_\pi \otimes \eta'_\pi) ((b' \otimes f') \otimes (f \otimes b)) $$ for all $b,b'\in \mathfrak{B}_\pi$ and $f,f'\in{\mathtt L}ambda_\pi$. 
The left hand side equals {\beta}gin{align*} &\tau \circ {\mathfrak n}u_\pi(\bar b \bar f \bar e_\pi \otimes \bar e_\pi \bar f' \bar D_\pi (\bar b')^\tau)= \tau(\bar b \bar f \bar e_\pi \bar f' \bar D_\pi (\bar b')^\tau) = \tau(\bar b \bar e_\pi \bar f \bar f' \bar D_\pi (\bar b')^\tau)\\ = &\ \bar b' \bar D_\pi \bar f' \bar f \bar e_\pi^\tau \bar b^\tau = \bar b' \bar D_\pi \bar f' \bar f \bar {\delta}_\pi \bar D_\pi \bar b^\tau = \bar b' \bar D_\pi \bar {\delta}_\pi \bar f' \bar f \bar D_\pi \bar b^\tau = \bar b' \bar e_\pi \bar f' \bar f \bar D_\pi \bar b^\tau \\ = &\ \bar b' \bar f' \bar e_\pi \bar f \bar D_\pi \bar b^\tau = {\mathfrak n}u_\pi(\bar b' \bar f' \bar e_\pi \otimes \bar e_\pi \bar f \bar D_\pi \bar b^\tau), \end{align*} which equals ${\mathfrak n}u_\pi \circ (\eta_\pi \otimes \eta'_\pi) ((b' \otimes f') \otimes (f \otimes b))$, as required. \end{proof} \section{Verification of the Hypothesis}{\lambda}bel{SVerif} In this section we verify Hypothesis~\ref{HProp} for all finite types. In ADE types (with one exception) this can be done using the theory of homogeneous representations developed in \cite{KRhomog}. This theory is reviewed in the next subsection. We use the cuspidal modules of \cite{HMM}. Throughout the section ${\beta}$ is a positive root, and $\bar R_{\beta}:=R_{\beta}/I_{>({\beta})}$, $\bar r:=r+I_{>({\beta})}$ for $r\in R_{\beta}$. \subsection{Homogeneous representations} {\lambda}bel{SSHomog} In this section we assume that the Cartan matrix $A$ is symmetric. In this subsection we fix ${\alpha}\in Q_+$ with $d={\operatorname{ht}}({\alpha})$. A graded $R_{\alpha}$-module is called {\em homogeneous} if it is concentrated in one degree. Let $\text{\boldmath$i$}\in {\langle I \rangle}_{\alpha}$. We call $s_r\in \mathfrak{S}_d$ an {\em admissible transposition} for $\text{\boldmath$i$}$ if $a_{i_r, i_{r+1}}=0$.
The {\em word graph} $G_{\alpha}$ is the graph with the set of vertices ${\langle I \rangle}_{\alpha}$, and with $\text{\boldmath$i$},\text{\boldmath$j$}\in {\langle I \rangle}_{\alpha}$ connected by an edge if and only if $\text{\boldmath$j$}=s_r \text{\boldmath$i$}$ for some admissible transposition $s_r$ for $\text{\boldmath$i$}$. A connected component $C$ of $G_{\alpha}$ is called {\em homogeneous} if for some $\text{\boldmath$i$}=(i_1,\dots,i_d)\in C$ the following condition holds: {\beta}gin{equation}{\lambda}bel{ENC} {\beta}gin{split} \text{if $i_r=i_s$ for some $r<s$ then there exist $t,u$} \\ \text{such that $r<t<u<s$ and $a_{i_r,i_t}=a_{i_r,i_u}=-1$.} \end{split} \end{equation} {\beta}gin{Theorem}{\lambda}bel{Thomog} {\rm \cite[Theorems 3.6, 3.10, (3.3)]{KRhomog}} Let $C$ be a homogeneous connected component of $G_{\alpha}$. Let $L(C)$ be the vector space concentrated in degree $0$ with basis $\{v_\text{\boldmath$i$}\mid \text{\boldmath$i$}\in C\}$ labeled by the elements of $C$. The formulas {\beta}gin{align*} 1_\text{\boldmath$j$} v_\text{\boldmath$i$}&={\delta}_{\text{\boldmath$i$},\text{\boldmath$j$}}v_\text{\boldmath$i$} \qquad (\text{\boldmath$j$}\in {\langle I \rangle}_{\alpha},\ \text{\boldmath$i$}\in C),\\ y_r v_\text{\boldmath$i$}&=0\qquad (1\leq r\leq d,\ \text{\boldmath$i$}\in C),\\ \psi_rv_{\text{\boldmath$i$}}&= \left\{ {\beta}gin{array}{ll} v_{s_r\text{\boldmath$i$}} &{\mathfrak h}box{if $s_r\text{\boldmath$i$}\in C$,}\\ 0 &{\mathfrak h}box{otherwise;} \end{array} \right. \quad(1\leq r<d,\ \text{\boldmath$i$}\in C) \end{align*} define an action of $R_{\alpha}$ on $L(C)$, under which $L(C)$ is a homogeneous irreducible $R_{\alpha}$-module. Furthermore, $L(C){\mathfrak n}ot\cong L(C')$ if $C{\mathfrak n}eq C'$, and every homogeneous irreducible $R_{\alpha}$-module, up to a degree shift, is isomorphic to one of the modules $L(C)$. \end{Theorem} We need to push the theory of homogeneous modules a little further. 
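Before doing so, we remark that the combinatorics of Theorem~\ref{Thomog} is easy to experiment with by machine. The following Python sketch (our own illustration, not part of the original text; all function names are ours) builds the connected component of a word in the word graph $G_{\alpha}$ and tests condition (\ref{ENC}); for instance, in type $A_3$ the word $(1,2,3)$ admits no admissible transpositions, so its component is a singleton and the corresponding homogeneous module $L(C)$ is one-dimensional.

```python
from collections import deque

def cartan_A(n):
    # Cartan matrix of type A_n, indexed by vertices 1..n
    return {(i, j): (2 if i == j else -1 if abs(i - j) == 1 else 0)
            for i in range(1, n + 1) for j in range(1, n + 1)}

def component(word, a):
    # connected component of `word` in the word graph G_alpha: edges swap
    # adjacent letters i_r, i_{r+1} with a_{i_r, i_{r+1}} = 0
    seen, queue = {word}, deque([word])
    while queue:
        w = queue.popleft()
        for r in range(len(w) - 1):
            if a[(w[r], w[r + 1])] == 0:  # admissible transposition
                v = w[:r] + (w[r + 1], w[r]) + w[r + 2:]
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen

def satisfies_ENC(word, a):
    # condition (ENC): between any two equal letters i_r = i_s there must be
    # two intermediate letters i_t, i_u with a_{i_r,i_t} = a_{i_r,i_u} = -1
    for r in range(len(word)):
        for s in range(r + 1, len(word)):
            if word[r] == word[s]:
                links = [t for t in range(r + 1, s)
                         if a[(word[r], word[t])] == -1]
                if len(links) < 2:
                    return False
    return True

a3 = cartan_A(3)
print(component((1, 2, 3), a3))      # singleton component
print(satisfies_ENC((1, 2, 3), a3))  # True: no repeated letters
```

By contrast, the word $(2,1,3)$ in type $A_3$ has the single admissible transposition $s_2$, giving a two-element component and a two-dimensional homogeneous module.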
In Proposition~\ref{PHomGenRel} below we give a presentation for a homogeneous module as a cyclic module generated by a word vector. Let $C$ be a homogeneous component of $G_{\alpha}$ and $\text{\boldmath$i$}\in C$. An element $w\in \mathfrak{S}_d$ is called {\em $\text{\boldmath$i$}$-admissible} if it can be written as $w=s_{r_1}\dots s_{r_b}$, where $s_{r_a}$ is an admissible transposition for $s_{r_{a+1}}\dots s_{r_b}\text{\boldmath$i$}$ for all $a=1,\dots,b$. We denote the set of all $\text{\boldmath$i$}$-admissible elements by $\mathfrak{D}_\text{\boldmath$i$}$. {\beta}gin{Lemma} {\lambda}bel{LAdm} Let $C$ be a homogeneous component of $G_{\alpha}$ and $\text{\boldmath$i$}\in C$. Then $\{\psi_wv_\text{\boldmath$i$}\mid w\in\mathfrak{D}_\text{\boldmath$i$}\}$ is a basis of $L(C)$. \end{Lemma} {\beta}gin{proof} Note that if $w,w'$ are $\text{\boldmath$i$}$-admissible elements, then $w=w'$ if and only if $w\text{\boldmath$i$}=w'\text{\boldmath$i$}$. Indeed, it suffices to prove that $w\text{\boldmath$i$}=\text{\boldmath$i$}$ implies $w=1$, which follows from the property (\ref{ENC}). The lemma follows. \end{proof} {\beta}gin{Proposition}{\lambda}bel{PHomGenRel} Let $C$ be a homogeneous component of $G_{\alpha}$ and $\text{\boldmath$i$}\in C$. Let $J(\text{\boldmath$i$})$ be the left ideal of $R_{\alpha}$ generated by {\beta}gin{align}{\lambda}bel{EJ(C)} \{y_r,1_\text{\boldmath$j$},\psi_w1_\text{\boldmath$i$}\mid\, 1\leq r\leq d,\ \text{\boldmath$j$}\in {\langle I \rangle}_{\alpha}\setminus \{\text{\boldmath$i$}\},\ w\in\mathfrak{S}_d\setminus \mathfrak{D}_\text{\boldmath$i$}\}. \end{align} Then $R_{\alpha}/J(\text{\boldmath$i$}){\sigma}meq L(C)$ as (graded) left $R_{\alpha}$-modules. \end{Proposition} {\beta}gin{proof} Note that the elements in (\ref{EJ(C)}) annihilate the vector $v_\text{\boldmath$i$}\in L(C)$, which generates $L(C)$, whence we have a (homogeneous) surjection $$ R_{\alpha}/J(\text{\boldmath$i$}){\twoheadrightarrow} L(C),\ h+J(\text{\boldmath$i$})\mapsto hv_\text{\boldmath$i$}.
$$ To prove that this surjection is an isomorphism it suffices to prove that the dimension of $R_{\alpha}/J(\text{\boldmath$i$})$ is at most $\dim L(C)=|C|$, which follows easily from Lemma~\ref{LAdm}. \end{proof} \subsection{Special Lyndon orders}{\lambda}bel{SSSLO} Recall the theory of standard modules reviewed in \S\ref{SSSMT}. We now specialize to the case of a {\em Lyndon}\, convex order on ${\mathtt P}hi_+$ as studied in \cite{KR}. For this we first need to fix a total order `$\leq$' on $I$. This gives rise to a lexicographic order `$\leq$' on the set ${\langle I \rangle}$. In particular, each finite dimensional $R_{\alpha}$-module has its (lexicographically) highest word, and the highest word of an irreducible module determines the irreducible module uniquely up to an isomorphism. This leads to the natural notion of {\em dominant words} (called good words in \cite{KR}), namely the elements of ${\langle I \rangle}_{\alpha}$ which occur as highest words of finite dimensional $R_{\alpha}$-modules. The dominant words of cuspidal modules are characterized among all dominant words by the property that they are {\em Lyndon words}, so we refer to them as {\em dominant Lyndon words}. There is an explicit bijection $$ {\mathtt P}hi_+\to\{\text{dominant Lyndon words}\},\ {\beta}\mapsto \text{\boldmath$i$}_{\beta}, $$ uniquely determined by the property $|\text{\boldmath$i$}_{\beta}|={\beta}$. Note that this notation $\text{\boldmath$i$}_{\beta}$ will be consistent with the same notation used in \S\ref{SSSSWId}. Setting ${\beta}\leq{{\mathfrak g}amma}$ if and only if $\text{\boldmath$i$}_{\beta}\leq\text{\boldmath$i$}_{{\mathfrak g}amma}$ for ${\beta},{{\mathfrak g}amma}\in{\mathtt P}hi_+$ defines a total order on ${\mathtt P}hi_+$ called a {\em Lyndon order}. It is known that each Lyndon order is convex, and the theory of standard modules for Lyndon orders, developed in \cite{KR}, fits into the general theory described in \S\ref{SSSMT}.
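As a small orienting example (ours; it is not needed in the sequel): in type $A_\ell$ with the order $1<2<\dots<\ell$ on $I$, the dominant Lyndon words are known (see \cite{KR}) to be the increasing segments:

```latex
% dominant Lyndon words in type A_ell, with the order 1 < 2 < ... < ell on I:
\text{\boldmath$i$}_{{\alpha}_i+{\alpha}_{i+1}+\dots+{\alpha}_j}=(i,i+1,\dots,j)
    \qquad (1\leq i\leq j\leq \ell).
% For ell = 2 the resulting Lyndon order on Phi_+ is
{\alpha}_1 \;<\; {\alpha}_1+{\alpha}_2 \;<\; {\alpha}_2,
% since the words satisfy (1) < (1,2) < (2) lexicographically.
```

In this small case the convexity of the Lyndon order is visible directly, since the intermediate root ${\alpha}_1+{\alpha}_2$ sits between its two summands.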
However, working with Lyndon orders allows us to be a little more explicit. In particular, given a root partition $\pi=({\beta}_1^{p_1},\dots,{\beta}_N^{p_N})\in{\mathtt P}i({\alpha})$, set {\beta}gin{equation}{\lambda}bel{EIPi} \text{\boldmath$i$}_\pi:=\text{\boldmath$i$}_{{\beta}_1}^{p_1}\dots\text{\boldmath$i$}_{{\beta}_N}^{p_N}\in{\langle I \rangle}_{\alpha}. \end{equation} {\beta}gin{Lemma} {\lambda}bel{LHighestWt} {\rm \cite[Theorem 7.2]{KR}} Let $\pi\in{\mathtt P}i({\alpha})$. Then $\text{\boldmath$i$}_\pi$ is the highest word of $L(\pi)$. \end{Lemma} From now on, we fix the notation for the Dynkin diagrams as follows: \[ {\beta}gin{dynkin} \draw (.5,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$A_\ell\quad(\ell {\mathfrak g}eq 1)$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,0)--(0.85,0); \draw (1,0)node[above]{$2$} circle (0.10); \draw[ dotted] (1.15,0)--(2.85,0); \draw (3,0)node[above]{$\ell-1$} circle (0.10); \draw (3.15,0)--(3.85,0); \draw (4,0)node[above]{$\ell$} circle (0.10); \end{dynkin}\quad {\beta}gin{dynkin} \draw (.5,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$B_\ell\quad(\ell {\mathfrak g}eq 2)$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,0)--(0.85,0); \draw (1,0)node[above]{$2$} circle (0.10); \draw[ dotted] (1.15,0)--(2.85,0); \draw (3,0)node[above]{$\ell-1$} circle (0.10); \draw (3.15,.05)--(3.85,.05); \draw (3.15,-.05)--(3.85,-.05); \draw (3.5,0)node{$>$}; \draw (4,0)node[above]{$\ell$} circle (0.10); \end{dynkin}\quad {\beta}gin{dynkin} \draw (.5,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$C_\ell\quad(\ell {\mathfrak g}eq 3)$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,0)--(0.85,0); \draw (1,0)node[above]{$2$} circle (0.10); \draw[ dotted] (1.15,0)--(2.85,0); \draw (3,0)node[above]{$\ell-1$} circle (0.10); \draw (3.15,.05)--(3.85,.05); \draw (3.15,-.05)--(3.85,-.05); \draw (3.5,0)node{$<$}; \draw (4,0)node[above]{$\ell$} circle (0.10); \end{dynkin} \] \[ {\beta}gin{dynkin} \draw
(.5,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$D_\ell\quad(\ell {\mathfrak g}eq 4)$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,0)--(0.85,0); \draw (1,0)node[above]{$2$} circle (0.10); \draw[ dotted] (1.15,0)--(2.85,0); \draw (3,0)node[above]{$\ell-2$} circle (0.10); \draw (3,-.15)--(3,-.85); \draw (3,-1)node[right]{$\ell$} circle (0.10); \draw (3.15,0)--(3.85,0); \draw (4,0)node[above]{$\ell-1$} circle (0.10); \end{dynkin} {\beta}gin{dynkin} \draw (.75,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$E_\ell\quad(\ell = 6,7,8)$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,0)--(0.85,0); \draw (1,0)node[above]{$2$} circle (0.10); \draw[ dotted] (1.15,0)--(2.85,0); \draw (3,0)node[above]{$\ell-3$} circle (0.10); \draw (3,-.15)--(3,-.85); \draw (3,-1)node[right]{$\ell$} circle (0.10); \draw (3.15,0)--(3.85,0); \draw (4,0)node[above]{$\ell-2$} circle (0.10); \draw (4.15,0)--(4.85,0); \draw (5,0)node[above]{$\ell-1$} circle (0.10); \end{dynkin} {\beta}gin{dynkin} \draw (1,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$F_4$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,0)--(0.85,0); \draw (1,0)node[above]{$2$} circle (0.10); \draw (1.15,.05)--(1.85,.05); \draw (1.15,-.05)--(1.85,-.05); \draw (1.5,0)node{$>$}; \draw (2,0)node[above]{$3$} circle (0.10); \draw (2.15,0)--(2.85,0); \draw (3,0)node[above]{$4$} circle (0.10); \end{dynkin} {\beta}gin{dynkin} \draw (0,.8)node[anchor=west,font={\mathfrak n}ormalsize]{$G_2$}; \draw (0,0)node[above]{$1$} circle (0.10); \draw (0.15,.05)--(0.85,.05); \draw (0.15,0)--(0.85,0); \draw (0.15,-.05)--(0.85,-.05); \draw (0.5,0)node{$<$}; \draw (1,0)node[above]{$2$} circle (0.10); \end{dynkin} \] Also, we choose the signs ${\varepsilon}_{ij}$ as in \S\ref{SSKLR} and the total order $\leq$ on $I$ so that ${\varepsilon}_{ij}=1$ and $i< j$ if the corresponding labels $i$ and $j$ satisfy $i< j$ as integers. \subsection{Homogeneous roots} We stick with the choices made in \S\ref{SSSLO}. 
Throughout the subsection, we assume that the Cartan matrix is of $ADE$ type and ${\beta} \in {\mathtt P}hi_+$ is such that $\text{\boldmath$i$}_{\beta}$ is homogeneous. Let $d:={\operatorname{ht}}({\beta})$. The module $L({\beta})$ is concentrated in degree 0, and each of its word spaces is one dimensional. Set $\mathfrak{D}_{\beta} := \mathfrak{D}_{\text{\boldmath$i$}_{\beta}}$. Then we can take $\mathfrak{B}_{\beta} = \{\psi_w e(\text{\boldmath$i$}_{\beta}) \mid w \in \mathfrak{D}_{\beta}\}$. Let ${\delta}_{\beta} = D_{\beta} = e(\text{\boldmath$i$}_{\beta})$, and define $y_{\beta} := y_d e(\text{\boldmath$i$}_{\beta})$. All parts of Hypothesis~\ref{HProp} are trivially satisfied, except (v). In the rest of this subsection we verify Hypothesis~\ref{HProp}(v). {\beta}gin{Lemma}{\lambda}bel{LBadInBlock} Let $w \in \mathfrak{S}_d \setminus \mathfrak{D}_{{\beta}}$. Then $\psi_w P_{d} e(\text{\boldmath$i$}_{{\beta}}) \subseteq I_{>({\beta})}.$ \end{Lemma} {\beta}gin{proof} We have $\psi_w=\psi_{r_1} \dots \psi_{r_m}$ for a reduced decomposition $w=s_{r_1} \dots s_{r_m}$. Let $k$ be the largest index such that $s_{r_k}$ is not an admissible transposition for $s_{r_{k+1}} \dots s_{r_m} \text{\boldmath$i$}_{\beta}$. By Theorem~\ref{Thomog}, $s_{r_k} \dots s_{r_m} \text{\boldmath$i$}_{\beta}$ is not a word of $L({\beta})$. So by Corollary~\ref{LBadWords}, $$\psi_{r_{k}}\dots \psi_{r_m}P_d e( \text{\boldmath$i$}_{\beta}) =e(s_{r_k} \dots s_{r_m} \text{\boldmath$i$}_{\beta})\psi_{r_{k}}\dots \psi_{r_m}P_d e( \text{\boldmath$i$}_{\beta}) \subseteq I_{>({\beta})},$$ whence $\psi_wP_d e( \text{\boldmath$i$}_{\beta})\subseteq I_{>({\beta})}$. \end{proof} {\beta}gin{Lemma}{\lambda}bel{LyEq} Given $1 \leq r,s \leq d$, we have $(y_s - y_r) e(\text{\boldmath$i$}_{{\beta}}) \in I_{>({\beta})}$. \end{Lemma} {\beta}gin{proof} We prove by induction on $s=1,\dots,d$ that $(y_s - y_r) e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$ for all $1 \leq r \leq s$.
The base case $s=1$ is trivial. Let $s>1$, and write $\text{\boldmath$i$}_{\beta} = (i_1, \dots, i_d)$. If $i_r \cdot i_s = 0$ for all $1 \leq r < s$, then \[ (i_s, i_1, i_2, \dots, i_{s-1}, i_{s+1}, \dots, i_d) \] is a word of $L({\beta})$. On the other hand, Lemma~\ref{LHighestWt} says that $\text{\boldmath$i$}_{\beta}$ is the largest word of $L({\beta})$ and so $i_s < i_1$. But then $\text{\boldmath$i$}_{\beta}$ is not a Lyndon word, which is a contradiction. Thus there exists some $r < s$ with $i_r \cdot i_s {\mathfrak n}eq 0$. Since the Cartan matrix is assumed to be of ADE type, either $i_r \cdot i_s = -1$ or $i_r = i_s$. In the second case, by homogeneity (\ref{ENC}) we can find $r < r' < s$ with $i_{r'} \cdot i_s = -1$. This shows that the definition $ t:= \max\{r \mid r < s \text{ and } i_r \cdot i_s = -1\} $ makes sense. Once again by homogeneity we must have that $i_r \cdot i_s = 0$ for any $r$ with $t < r < s$. Therefore, using defining relations in $R_{\alpha}$, we get \[ (\psi_{s-1} \dots \psi_t)(\psi_t \dots \psi_{s-1}) e(\text{\boldmath$i$}_{\beta}) = \pm (y_s - y_t) e(\text{\boldmath$i$}_{\beta}). \] On the other hand, the cycle $(t, t+1, \dots, s)$ is not an element of $\mathfrak{D}_{\beta}$. By Lemma~\ref{LBadInBlock} we must have $\psi_t \dots \psi_{s-1} e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$. This shows that $(y_s - y_t) e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$, and therefore by induction that $(y_s - y_r) e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$ for every $r$ with $1 \leq r \leq s$. \end{proof} Recall the notation $\bar R_{\beta}:= R_{\beta}/I_{>({\beta})}$ and $\bar r:=r+I_{>({\beta})}\in\bar R_{\beta}$ for $r\in R_{\beta}$. {\beta}gin{Corollary} We have that $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by $\bar y_{\beta}$. 
\end{Corollary} {\beta}gin{proof} By Theorem~\ref{TBasis}, an element of $e_{\beta} R_{\beta} e_{\beta}$ is a linear combination of terms of the form $\psi_w y_1^{a_1} \dots y_d^{a_d} e(\text{\boldmath$i$}_{\beta})$ such that $w\text{\boldmath$i$}_{\beta}=\text{\boldmath$i$}_{\beta}$. If $w {\mathfrak n}otin \mathfrak{D}_{\beta}$, then $\psi_w e_{\beta} \in I_{>({\beta})}$ by Lemma~\ref{LBadInBlock}. Otherwise, Lemma~\ref{LAdm} shows that $w=1$. Therefore, $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is spanned by terms of the form $\bar y_1^{a_1} \dots \bar y_d^{a_d} \bar e_{\beta}$. In view of Lemma~\ref{LyEq}, we see that $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by $\bar y_{\beta}=\bar y_d\bar e_{\beta}$. \end{proof} \subsection{Types $ADE$}{\lambda}bel{SSADE} Throughout the subsection, we assume again that the Cartan matrix is of $ADE$ type. By \cite{HMM}, with a correction made in \cite[Lemma A7]{BKM}, if ${\beta} \in {\mathtt P}hi_+$ is any positive root, except the highest root in type $E_8$, then $\text{\boldmath$i$}_{\beta}$ is homogeneous. We have proved in the previous subsection that Hypothesis~\ref{HProp} holds in this case. Now, we deal with the highest root $$\theta := 2{\alpha}_1 + 3{\alpha}_2 + 4{\alpha}_3 + 5{\alpha}_4 + 6{\alpha}_5 + 4{\alpha}_6 + 2{\alpha}_7 + 3{\alpha}_8$$ in type $E_8$. By \cite[Example A.5]{BKM}, the corresponding Lyndon word is $$\text{\boldmath$i$}_\theta = 12345867564534231234586756458.$$ Define the positive roots {\beta}gin{align} \theta_1&:={\alpha}_1 + {\alpha}_2 + {\alpha}_3 + 2{\alpha}_4 + 3{\alpha}_5 + 2{\alpha}_6 + {\alpha}_7 + 2{\alpha}_8, {\lambda}bel{ETheta1} \\ {\theta_2}&:={\alpha}_1 + 2{\alpha}_2 + 3{\alpha}_3 + 3{\alpha}_4 + 3{\alpha}_5 + 2{\alpha}_6 + {\alpha}_7 + {\alpha}_8. {\lambda}bel{ETheta2} \end{align} Then the root partition $(\theta_1,{\theta_2})$ is a minimal element of ${\mathtt P}i(\theta)\setminus \{(\theta)\}$.
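This numerology can be double-checked mechanically. The following Python sketch (ours, not part of the paper) encodes the $E_8$ Cartan matrix in the labeling fixed in \S\ref{SSSLO} and checks that $\theta=\theta_1+\theta_2$ with ${\operatorname{ht}}(\theta_1)=13$ and ${\operatorname{ht}}(\theta_2)=16$, that the letter content of $\text{\boldmath$i$}_\theta$ matches $\theta$ and that $\langle\theta,{\alpha}_i^\vee\rangle$ is non-negative for all $i$ (so $\theta$ is dominant), and that condition (\ref{ENC}) fails for the word $\text{\boldmath$i$}_\theta$ itself, consistent with $\theta$ being the one root in type $E_8$ whose word is not homogeneous. It also verifies the factorization $\text{\boldmath$i$}_\theta=\text{\boldmath$i$}_{\theta_2}\text{\boldmath$i$}_{\theta_1}$ into the two cuspidal words recorded in the next paragraph, both of which do satisfy (\ref{ENC}).

```python
from collections import Counter

# E8 Dynkin diagram in the labeling fixed above: a chain 1-2-3-4-5-6-7,
# with node 8 attached to node 5 (= ell - 3 for ell = 8)
EDGES = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (5, 8)}

def a(i, j):
    # entries of the (symmetric) E8 Cartan matrix
    if i == j:
        return 2
    return -1 if (i, j) in EDGES or (j, i) in EDGES else 0

# coefficient vectors of theta, theta_1, theta_2 on alpha_1, ..., alpha_8
theta  = [2, 3, 4, 5, 6, 4, 2, 3]
theta1 = [1, 1, 1, 2, 3, 2, 1, 2]
theta2 = [1, 2, 3, 3, 3, 2, 1, 1]

i_theta  = "12345867564534231234586756458"
i_theta1 = "1234586756458"
i_theta2 = "1234586756453423"

def content(word):
    # letter multiplicities of a word, as a coefficient vector on alpha_i
    c = Counter(int(ch) for ch in word)
    return [c[i] for i in range(1, 9)]

def satisfies_ENC(word):
    # condition (ENC), tested for the given word itself
    w = [int(ch) for ch in word]
    for r in range(len(w)):
        for s in range(r + 1, len(w)):
            if w[r] == w[s]:
                links = [t for t in range(r + 1, s) if a(w[r], w[t]) == -1]
                if len(links) < 2:
                    return False
    return True

def pairing(c):
    # <c, alpha_i^vee> for i = 1..8 (symmetric type)
    return [sum(a(i, j) * c[j - 1] for j in range(1, 9)) for i in range(1, 9)]
```

Running the checks confirms, in particular, that (\ref{ENC}) breaks down for $\text{\boldmath$i$}_\theta$ across the boundary of the factorization, where two equal letters are separated by only one neighbor in the diagram.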
Moreover, $\text{\boldmath$i$}_{\theta_2} = 1234586756453423$ and $\text{\boldmath$i$}_{\theta_1} = 1234586756458$. Indeed, one sees by inspection that these words are highest words in the corresponding homogeneous representations and are Lyndon. Finally, we have $\text{\boldmath$i$}_\theta = \text{\boldmath$i$}_{\theta_2} \text{\boldmath$i$}_{\theta_1}$. Denote by $v_{\theta_1}$ and $v_{\theta_2}$ non-zero vectors in the $\text{\boldmath$i$}_{\theta_1}$- and $\text{\boldmath$i$}_{\theta_2}$-word spaces in the homogeneous modules $L({\theta_1})$ and $L({\theta_2})$, respectively. Note that $L({\theta_1})\boxtimes L({\theta_2})$ is naturally a submodule of $L({\theta_1})\circ L({\theta_2})$, so we can consider $v_{\theta_1}\otimes v_{\theta_2}$ as a cyclic vector of $L({\theta_1})\circ L({\theta_2})$, and similarly $v_{\theta_2}\otimes v_{\theta_1}$ as a cyclic vector of $L({\theta_2})\circ L({\theta_1})$. By definition, $L({\theta_1})\circ L({\theta_2})$ is the proper standard module $\bar{\mathtt D}e({\theta_1},{\theta_2})$; let $v_{{\theta_1},{\theta_2}}$ be the image of $v_{\theta_1}\otimes v_{\theta_2}$ under the natural projection $\bar{\mathtt D}e({\theta_1},{\theta_2}){\twoheadrightarrow} L({\theta_1},{\theta_2})$. Denote by $w(\theta)$ the element of $\mathfrak{S}_{29}$ which sends $(1,\dots,29)$ to $(17,\dots,29,1,\dots,16)$. The following has been established in \cite{BKM}, see especially the proof of \cite[Theorem A.9]{BKM}, but we sketch its very easy proof for the reader's convenience. {\beta}gin{Lemma}{\lambda}bel{LSES} The multiplicity of the highest word $\text{\boldmath$i$}_\theta$ in $L(\theta)$ is one.
Moreover, there is a non-zero vector $v_\theta$ in the $\text{\boldmath$i$}_\theta$-word space of $L(\theta)$ and homogeneous $R_\theta$-module maps
\begin{align*}
\mu&: L({\theta_1},{\theta_2})\langle 1\rangle \to L({\theta_2})\circ L({\theta_1}),\ v_{{\theta_1},{\theta_2}}\mapsto \psi_{w(\theta)}(v_{\theta_2}\otimes v_{\theta_1}), \\
\nu&: L({\theta_2}) \circ L({\theta_1}) \to L(\theta),\ v_{\theta_2}\otimes v_{\theta_1}\mapsto v_\theta,
\end{align*}
such that the sequence
\[
0 \to L({\theta_1},{\theta_2})\langle 1\rangle \stackrel{\mu}{\to} L({\theta_2}) \circ L({\theta_1}) \stackrel{\nu}{\to} L(\theta) \to 0
\]
is exact. Finally,
$${\operatorname{ch}_q\:} L(\theta)=({\operatorname{ch}_q\:} L({\theta_2})\circ {\operatorname{ch}_q\:} L({\theta_1}) - q\,{\operatorname{ch}_q\:} L({\theta_1})\circ {\operatorname{ch}_q\:} L({\theta_2}))/(1-q^2).$$
\end{Lemma}

\begin{proof}
By \cite[Theorem 7.2(ii)]{KR}, the multiplicity of the word $\text{\boldmath$i$}_{\theta_1}\text{\boldmath$i$}_{\theta_2}$ in $L({\theta_1})\circ L({\theta_2})$ is $1$. Moreover, an explicit check shows that the multiplicity of $\text{\boldmath$i$}_\theta$ in $L({\theta_1})\circ L({\theta_2})$ is $q$. We conclude using Theorem~\ref{TStand} and the minimality of $({\theta_1},{\theta_2})$ in $\Pi(\theta)\setminus\{(\theta)\}$ that the standard module $L({\theta_1})\circ L({\theta_2})$ is uniserial with head $L({\theta_1},{\theta_2})$ and socle $L(\theta)\langle 1\rangle$. The result follows from these observations since $L({\theta_1},{\theta_2})$ is $\circledast$-self-dual and $(L({\theta_1})\circ L({\theta_2}))^\circledast \simeq L({\theta_2})\circ L({\theta_1})\langle -1\rangle$ in view of \cite[Theorem 2.2]{LV}.
\end{proof}

Consider the parabolic subgroup $\mathfrak{S}_{{\operatorname{ht}}(\theta_2)}\times\mathfrak{S}_{{\operatorname{ht}}(\theta_1)}\subseteq \mathfrak{S}_d$ and define
$$
\mathfrak{D}_{\theta_2,\theta_1}:=\{(w_2,w_1)\in \mathfrak{S}_{{\operatorname{ht}}(\theta_2)}\times\mathfrak{S}_{{\operatorname{ht}}(\theta_1)}\mid w_2\in\mathfrak{D}_{\theta_2},\ w_1\in\mathfrak{D}_{\theta_1}\}.
$$
With this notation we finally have:

\begin{Lemma}\label{LIrrE8}
The cuspidal module $L(\theta)$ is generated by a degree $0$ vector $v_\theta$ subject only to the relations:
\begin{align}
\label{Rel1} (e(\text{\boldmath$j$})-{\delta}_{\text{\boldmath$j$}, \text{\boldmath$i$}_\theta}) v_\theta &=0, \quad \textup{for all $\text{\boldmath$j$}\in {\langle I \rangle}_\theta$},\\
\label{Rel2} y_r v_\theta &= 0, \quad \textup{for all $r=1,\dots,{\operatorname{ht}}(\theta)$},\\
\label{RE_81} \psi_w v_\theta&=0, \quad \textup{for all $w\in (\mathfrak{S}_{{\operatorname{ht}}(\theta_2)}\times\mathfrak{S}_{{\operatorname{ht}}(\theta_1)})\setminus \mathfrak{D}_{\theta_2,\theta_1}$},\\
\psi_{w(\theta)}v_\theta&=0.
\end{align}
\end{Lemma}

\begin{proof}
The lemma follows easily from Proposition~\ref{PHomGenRel} applied to the homogeneous modules $L({\theta_1})$ and $L({\theta_2})$, and Lemma~\ref{LSES}.
\end{proof}

We now define ${\delta}_\theta = D_\theta = e(\text{\boldmath$i$}_\theta)$ and $y_\theta = y_{{\operatorname{ht}}(\theta)} e(\text{\boldmath$i$}_\theta)$. All parts of Hypothesis~\ref{HProp} are trivially satisfied, except (v), which we now verify.

\begin{Lemma}\label{PParaE8}
We have
\[
\iota_{\theta_2, \theta_1}(I_{>(\theta_2)} \otimes R_{\theta_1} + R_{\theta_2} \otimes I_{>(\theta_1)}) \subseteq I_{>(\theta)}.
\]
\end{Lemma}

\begin{proof}
Apply Proposition~\ref{LParaIdeal1New} twice with $m=2$, $\gamma_1=\theta_2$, $\gamma_2=\theta_1$, $\pi=(\theta)$, and either $k=1$ and $\pi_0=(\theta_2)$, or $k=2$ and $\pi_0=(\theta_1)$.
\end{proof}

\begin{Lemma}
We have that $\bar e_\theta \bar R_\theta \bar e_\theta$ is generated by $\bar y_\theta$.
\end{Lemma}

\begin{proof}
By Theorem~\ref{TBasis}, an element of $e_\theta R_\theta e_\theta$ is a linear combination of terms of the form $\psi_w y_1^{a_1} \dots y_d^{a_d} e(\text{\boldmath$i$}_\theta)$ such that $w\text{\boldmath$i$}_\theta=\text{\boldmath$i$}_\theta$. If $w\in (\mathfrak{S}_{{\operatorname{ht}}(\theta_2)}\times\mathfrak{S}_{{\operatorname{ht}}(\theta_1)})\setminus \mathfrak{D}_{\theta_2,\theta_1}$, then $\psi_w e_\theta \in I_{>(\theta)}$ by Lemmas~\ref{PParaE8} and~\ref{LBadInBlock}. So we may assume that $w = uv$ with $u \in \mathfrak{S}^{{\operatorname{ht}}(\theta_2),{\operatorname{ht}}(\theta_1)}$ and $v \in \mathfrak{D}_{\theta_2, \theta_1}$. It is easy to check that the only such permutation that fixes $\text{\boldmath$i$}_\theta$ is the identity. We therefore see that $\bar e_\theta \bar R_\theta \bar e_\theta$ is generated by $\bar y_1, \dots, \bar y_{{\operatorname{ht}}(\theta)}$. Note that ${\operatorname{ht}}(\theta_2)=16$ and ${\operatorname{ht}}(\theta_1)=13$. Using the cases ${\beta}=\theta_2$ and ${\beta}=\theta_1$ proved above and Lemma~\ref{PParaE8}, we have that $(y_r - y_s)e(\text{\boldmath$i$}_\theta) \in I_{>(\theta)}$ if $1 \leq r,s \leq 16$ or $17 \leq r,s \leq 29$. It remains to show that $(y_r - y_s)e(\text{\boldmath$i$}_\theta) \in I_{>(\theta)}$ for some $1 \leq r \leq 16$ and $17 \leq s \leq 29$. Let $w \in \mathfrak{S}_{29}$ be the cycle $(27, 26, \dots, 16)$.
By considering words and using Corollary~\ref{LBadWords}, one can verify that
\begin{align*}
\psi_w^\tau \psi_w e(\text{\boldmath$i$}_\theta) &\equiv (y_{16} - y_{27})e(\text{\boldmath$i$}_\theta) \pmod{I_{>(\theta)}}.
\end{align*}
On the other hand, by the formula for the character of $L(\theta)$ from Lemma~\ref{LSES}, we have that $w \text{\boldmath$i$}_\theta$ is not a word of $L(\theta)$. Therefore, by Corollary~\ref{LBadWords}, we have that $\psi_w e(\text{\boldmath$i$}_\theta) \in I_{>(\theta)}$, so $(y_{16} - y_{27})e(\text{\boldmath$i$}_\theta) \in I_{>(\theta)}$, and we are done.
\end{proof}

\subsection{Non-symmetric types}\label{SSBCFG}
Now we deal with non-symmetric Cartan matrices, i.e.\ Cartan matrices of $BCFG$ types.

\begin{Lemma}\label{LHelper}
Suppose that ${\delta}_{\beta}, D_{\beta} \in e(\text{\boldmath$i$}_{\beta}) R_{\beta} e(\text{\boldmath$i$}_{\beta})$ have been chosen so that Hypothesis~\ref{HProp}(iii) is satisfied. If the minimal degree component of $e(\text{\boldmath$i$}_{\beta}) R_{\beta} e(\text{\boldmath$i$}_{\beta})$ is spanned by $D_{\beta}$, then Hypothesis~\ref{HProp}(i) and (vi) are satisfied.
\end{Lemma}

\begin{proof}
Since $D_{\beta} {\delta}_{\beta} D_{\beta}$ has the same degree as $D_{\beta}$, the assumption above implies that $D_{\beta} {\delta}_{\beta} D_{\beta}$ is proportional to $D_{\beta}$. Acting on $v_{\beta}^+$ and using Hypothesis~\ref{HProp}(iii) gives $D_{\beta} {\delta}_{\beta} D_{\beta} = D_{\beta}$, which upon multiplication by ${\delta}_{\beta}$ on the right gives the property $e_{\beta}^2=e_{\beta}$, which is even stronger than (i). To see (vi), we look at the lowest degree component in $e(\text{\boldmath$i$}_{\beta}\text{\boldmath$i$}_{\beta})R_{2{\beta}}e(\text{\boldmath$i$}_{\beta}\text{\boldmath$i$}_{\beta})$ using \cite[Lemma 5.3(ii)]{KR} and commutation relations in the algebra $R_{2{\beta}}$.
\end{proof}

It will be clear in almost all cases that the condition of Lemma~\ref{LHelper} is satisfied, and moreover Hypothesis~\ref{HProp}(ii) and (iv) are easy to verify by inspection. This leaves Hypothesis~\ref{HProp}(v) to be shown in each case.

\subsubsection{Type $B_l$}\label{SSTypeB}
The set of positive roots is broken into two types. For $1 \leq i \leq j \leq l$ we have the root ${\alpha}_i + \dots + {\alpha}_j$, and for $1 \leq i < j \leq l$ we have the root ${\alpha}_i + \dots + {\alpha}_{j-1} + 2{\alpha}_j + \dots + 2{\alpha}_l$.

Let ${\beta} := {\alpha}_i + \dots + {\alpha}_j$. Then $\text{\boldmath$i$}_{{\beta}} := (i, \dots, j)$, and the irreducible module $L({{\beta}})$ is one-dimensional with character $\text{\boldmath$i$}_{{\beta}}$. Define ${\delta}_{{\beta}} := D_{{\beta}} := e(\text{\boldmath$i$}_{{\beta}})$ and $y_{{\beta}} := y_{d} e(\text{\boldmath$i$}_{\beta})$. Using Corollary~\ref{LBadWords} one sees that $\psi_r e_{{\beta}} \in I_{>({\beta})}$ for all $r$, which by Theorem~\ref{TBasis} shows that $\bar R_{\beta} \bar e_{\beta} = F[\bar y_1, \dots, \bar y_d] \bar e_{\beta}$. This also shows that for $1 \leq r \leq d-1$ we have the following elements of $I_{>({\beta})}$:
\[
\psi_r^2 e_{{\beta}} = \begin{cases} (y_r-y_{r+1}^2)e_{{\beta}}, & \text{if $j=l$ and $r=d-1$;}\\ (y_r - y_{r+1})e_{{\beta}}, & \text{otherwise.} \end{cases}
\]
It follows that $\bar R_{\beta} \bar e_{\beta} = F[\bar y_{\beta}] \bar e_{\beta}$, and thus $\bar e_{{\beta}} \bar R_{{\beta}} \bar e_{{\beta}}$ is generated by $\bar e_{{\beta}} \bar y_{{\beta}} \bar e_{{\beta}}$.

Consider ${\beta} := {\alpha}_i + \dots + {\alpha}_{j-1} + 2{\alpha}_j + \dots + 2{\alpha}_l$. In this case, $\text{\boldmath$i$}_{\beta} = (i, \dots, l, l, \dots, j)$, and ${\operatorname{ch}_q\:} L({{\beta}}) = (q+q^{-1}) \text{\boldmath$i$}_{{\beta}}$.
Define ${\delta}_{{\beta}} := y_{l-i+2} e(\text{\boldmath$i$}_{{\beta}})$, $D_{{\beta}} := \psi_{l-i+1} e(\text{\boldmath$i$}_{{\beta}})$, and $y_{{\beta}} := y_1 e(\text{\boldmath$i$}_{\beta})$. Using Corollary~\ref{LBadWords}, one sees that $\psi_r e(\text{\boldmath$i$}_{{\beta}}) \in I_{>({\beta})}$ for $r \neq l-i+1$. It is also clear that $\psi_{l-i+1} e_{{\beta}} = 0$, and therefore by Theorem~\ref{TBasis}, $\bar R_{{\beta}} \bar e_{{\beta}} = F[\bar y_1, \dots, \bar y_d] \bar e_{{\beta}}$. We also have the following elements of $I_{>({\beta})}$:
\[
\psi_r^2 e(\text{\boldmath$i$}_{{\beta}}) = \begin{cases} (y_r - y_{r+1})e(\text{\boldmath$i$}_{{\beta}}), & \text{for $1 \leq r \leq l-i-1$;}\\ (y_{l-i} - y_{l-i+1}^2)e(\text{\boldmath$i$}_{{\beta}}), & \text{for $r = l-i$;}\\ (y_{l-i+3} - y_{l-i+2}^2)e(\text{\boldmath$i$}_{{\beta}}), & \text{for $r = l-i+2$;}\\ (y_{r+1} - y_r)e(\text{\boldmath$i$}_{{\beta}}), & \text{for $l-i+3 \leq r \leq d-1$}. \end{cases}
\]
Taken together, these show that $\bar R_{{\beta}} \bar e(\text{\boldmath$i$}_{{\beta}}) = F[\bar y_{l-i+1}, \bar y_{l-i+2}] \bar e(\text{\boldmath$i$}_{{\beta}})$. Multiplying on both sides by $\bar e_{{\beta}}$ and using the KLR / nil-Hecke relations, we have
$$\bar e_{{\beta}} \bar R_{{\beta}} \bar e_{{\beta}} = F[\bar y_{l-i+1} + \bar y_{l-i+2}, \bar y_{l-i+1} \bar y_{l-i+2}] \bar e_{{\beta}}.$$
Furthermore, $(y_{l-i+1} + y_{l-i+2})\psi_{l-i+1}e(\text{\boldmath$i$}_{\beta}) = \psi_{l-i+1} \psi_{l-i}^2 \psi_{l-i+1} e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$, and so in fact
$$\bar e_{{\beta}} \bar R_{{\beta}} \bar e_{{\beta}} = F[\bar y_{l-i+1}^2] \bar e_{{\beta}} = F[\bar y_1] \bar e_{{\beta}}.$$

\subsubsection{Type $C_l$}
The set of positive roots is broken into three types.
For $1 \leq i \leq j \leq l$ we have the root ${\alpha}_i + \dots + {\alpha}_j$, for $1 \leq i < j < l$ we have the root ${\alpha}_i + \dots + {\alpha}_{j-1} + 2{\alpha}_j + \dots + 2{\alpha}_{l-1} + {\alpha}_l$, and for $1 \leq i < l$ we have the root $2{\alpha}_i + \dots + 2{\alpha}_{l-1} + {\alpha}_l$.

Consider ${\beta} = {\alpha}_i + \dots + {\alpha}_j$. Then $\text{\boldmath$i$}_{\beta} = (i, \dots, j)$ and ${\operatorname{ch}_q\:} L({\beta}) = \text{\boldmath$i$}_{{\beta}}$. Define ${\delta}_{\beta} = D_{\beta} := e(\text{\boldmath$i$}_{\beta})$ and $y_{\beta} := y_1 e(\text{\boldmath$i$}_{\beta})$. Using Corollary~\ref{LBadWords} one sees that $\psi_r e_{{\beta}} \in I_{>({\beta})}$ for all $r$, which by Theorem~\ref{TBasis} shows that $\bar R_{\beta} \bar e_{\beta} = F[\bar y_1, \dots, \bar y_d] \bar e_{\beta}$. This also shows that for $1 \leq r \leq d-1$ we have the following elements of $I_{>({\beta})}$:
\[
\psi_r^2 e_{{\beta}} = \begin{cases} (y_r^2-y_{r+1})e_{{\beta}}, & \text{if $j=l$ and $r=d-1$;}\\ (y_r - y_{r+1})e_{{\beta}}, & \text{otherwise.} \end{cases}
\]
Consequently, $\bar e_{{\beta}} \bar R_{{\beta}} \bar e_{{\beta}}$ is generated by $\bar e_{{\beta}} \bar y_{{\beta}} \bar e_{{\beta}}$.

Consider ${\beta} = {\alpha}_i + \dots + {\alpha}_{j-1} + 2{\alpha}_j + \dots + 2{\alpha}_{l-1} + {\alpha}_l$. Then $\text{\boldmath$i$}_{\beta} = (i, \dots, l-1, l, l-1, \dots, j)$ and ${\operatorname{ch}_q\:} L({\beta}) = \text{\boldmath$i$}_{\beta}$. Define ${\delta}_{\beta} = D_{\beta} := e(\text{\boldmath$i$}_{\beta})$ and $y_{\beta} := y_1 e(\text{\boldmath$i$}_{\beta})$. Using Corollary~\ref{LBadWords} one sees that $\psi_r e_{{\beta}} \in I_{>({\beta})}$ for all $r$, which by Theorem~\ref{TBasis} shows that $\bar R_{\beta} \bar e_{\beta} = F[\bar y_1, \dots, \bar y_d] \bar e_{\beta}$.
This also shows that for $1 \leq r \leq d-1$ we have the following elements of $I_{>({\beta})}$:
\[
\psi_r^2 e_{{\beta}} = \begin{cases} (y_r - y_{r+1}) e_{{\beta}}, & \text{for $1 \leq r \leq l-i-1$;}\\ (y_{l-i}^2 - y_{l-i+1})e_{{\beta}}, & \text{for $r = l-i$;}\\ (y_{l-i+2}^2 - y_{l-i+1})e_{{\beta}}, & \text{for $r = l-i+1$;}\\ (y_{r+1} - y_r)e_{{\beta}}, & \text{for $l-i+2 \leq r \leq d-1$}. \end{cases}
\]
It follows that $\bar R_{\beta} \bar e_{\beta} = F[\bar y_{l-i}, \bar y_{l-i+2}] \bar e_{\beta}$. Furthermore, by the relation (\ref{R7}),
$$
(y_{l-i} + y_{l-i+2}) e_{\beta} = (\psi_{l-i+1} \psi_{l-i} \psi_{l-i+1} - \psi_{l-i} \psi_{l-i+1} \psi_{l-i}) e_{\beta} \in I_{>({\beta})}
$$
and therefore $\bar R_{\beta} \bar e_{\beta} = F[\bar y_{l-i}] \bar e_{\beta} = F[\bar y_{\beta}] \bar e_{\beta}$.

Consider ${\beta} = 2{\alpha}_i + \dots + 2{\alpha}_{l-1} + {\alpha}_l$. Then $\text{\boldmath$i$}_{{\beta}} = (i, \dots, l-1, i, \dots, l)$ and
$${\operatorname{ch}_q\:} L({\beta}) = q((i,\dots,l-1) \circ (i,\dots,l-1))\cdot(l).$$
Let $w \in \mathfrak{S}_{d}$ be the permutation that sends $(1,2,\dots, d)$ to $(l-i+1, \dots, d-1, 1, \dots, l-i, d)$, and define $D_{{\beta}} := \psi_w e(\text{\boldmath$i$}_{{\beta}})$. Define also ${\delta}_{{\beta}} := y_{d-1} e(\text{\boldmath$i$}_{{\beta}})$ and $y_{{\beta}} := y_d e(\text{\boldmath$i$}_{\beta})$. Set $\gamma = {\alpha}_i + \dots + {\alpha}_{l-1}$. Since $I_{>(\gamma^2)}$ is generated by idempotents $e(\text{\boldmath$i$})$ with $\text{\boldmath$i$} > \text{\boldmath$i$}_{\gamma}^2$, and $\text{\boldmath$i$}_{\beta} = \text{\boldmath$i$}_{\gamma}^2 i_l$ is the highest word of $L({\beta})$, we see that
$$
\iota_{2\gamma,{\alpha}_l}(I_{>(\gamma^2)} \otimes R_{{\alpha}_l}) \subseteq I_{>({\beta})}.
$$
Let $\mu: \bar R_{2\gamma} \boxtimes R_{{\alpha}_l} \to \bar R_{\beta}$ be the induced map.
Note that every word of $L({\beta})$ ends with $l$, so that $\psi_u e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$ unless $u \in \mathfrak{S}_{d-1,1}$, by Corollary~\ref{LBadWords}. Therefore, applying (\ref{EEndo}) in the type $A$ case of $(\gamma^2)$ (which has already been verified), we obtain
$$
\bar e_{\beta} \bar R_{\beta} \bar e_{\beta} = \mu(\bar e_{(\gamma^2)} \bar R_{2\gamma} \bar e_{(\gamma^2)} \otimes R_{{\alpha}_l}) = \bar e_{\beta} {\mathcal O}[\bar y_{l-i} + \bar y_{2l-2i}, \bar y_{l-i} \bar y_{2l-2i}, \bar y_d] \bar e_{\beta}.
$$
Furthermore,
$$
(\bar y_{l-i} + \bar y_{2l-2i}) \bar e(\text{\boldmath$i$}_{\beta}) = \bar \psi_{l-i} \dots \bar \psi_{2l-2i-1} \bar \psi_{2l-2i}^2 \bar \psi_{2l-2i-1} \dots \bar \psi_{l-i} \bar e(\text{\boldmath$i$}_{\beta}) = 0
$$
and $(\bar y_{2l-2i}^2 - \bar y_d) \bar e(\text{\boldmath$i$}_{\beta}) = \bar \psi_{2l-2i}^2 \bar e(\text{\boldmath$i$}_{\beta}) = 0.$ Thus $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by $\bar e_{\beta} \bar y_d \bar e_{\beta}$.

\subsubsection{Type $F_4$}
We write ${\beta} = c_1 {\alpha}_1 + c_2 {\alpha}_2 + c_3 {\alpha}_3 + c_4 {\alpha}_4 \in \Phi_+$. If $c_4 = 0$, then this root lies in a subsystem of type $B_3$ with the {\em same order} as in Section~\ref{SSTypeB} and we are done. If ${\beta} = {\alpha}_i + \dots +{\alpha}_j$ for some $1\leq i \leq j \leq 4$, then $\text{\boldmath$i$}_{\beta} = (i,\dots,j)$ and ${\operatorname{ch}_q\:} L({\beta}) = (i,\dots,j)$. In this case we take $D_{\beta} = {\delta}_{\beta} = e(\text{\boldmath$i$}_{\beta})$ and set $y_{\beta} = y_{{\operatorname{ht}}({\beta})} e(\text{\boldmath$i$}_{\beta})$.
\iffalse
\fi
The following table shows the choice of data for the remaining roots, except for the highest root ${\beta} = 2{\alpha}_1 + 3{\alpha}_2 + 4{\alpha}_3 + 2{\alpha}_4$, which we discuss separately. In each of these cases, the hypotheses may be verified by the same methods used above; in particular, Hypothesis~\ref{HProp}(i)--(iv) and (vi) may be verified either directly or with the help of Lemma~\ref{LHelper} when it applies.

\begin{tabular}{|c|c|c|c|}
\hline
$\text{\boldmath$i$}_{\beta}$ & $D_{\beta}$ & ${\delta}_{\beta}$ & $y_{\beta}$ \\
\hline
$2343$ & $e(\text{\boldmath$i$}_{\beta})$ & $e(\text{\boldmath$i$}_{\beta})$ & $y_3 e(\text{\boldmath$i$}_{\beta})$ \\
$12343$ & $e(\text{\boldmath$i$}_{\beta})$ & $e(\text{\boldmath$i$}_{\beta})$ & $y_5 e(\text{\boldmath$i$}_{\beta})$ \\
$23434$ & $\psi_3\psi_2\psi_4\psi_3e(\text{\boldmath$i$}_{\beta})$ & $y_5e(\text{\boldmath$i$}_{\beta})$ & $y_1e(\text{\boldmath$i$}_{\beta})$ \\
$123432$ & $e(\text{\boldmath$i$}_{\beta})$ & $e(\text{\boldmath$i$}_{\beta})$ & $y_5e(\text{\boldmath$i$}_{\beta})$ \\
$123434$ & $\psi_4\psi_3\psi_5\psi_4e(\text{\boldmath$i$}_{\beta})$ & $y_6e(\text{\boldmath$i$}_{\beta})$ & $y_2e(\text{\boldmath$i$}_{\beta})$ \\
$1234323$ & $e(\text{\boldmath$i$}_{\beta})$ & $e(\text{\boldmath$i$}_{\beta})$ & $y_5 e(\text{\boldmath$i$}_{\beta})$ \\
$1234342$ & $\psi_4\psi_3\psi_5\psi_4e(\text{\boldmath$i$}_{\beta})$ & $y_6e(\text{\boldmath$i$}_{\beta})$ & $y_2e(\text{\boldmath$i$}_{\beta})$ \\
$12343423$ & $\psi_4\psi_3\psi_5\psi_4e(\text{\boldmath$i$}_{\beta})$ & $y_6e(\text{\boldmath$i$}_{\beta})$ & $y_8 e(\text{\boldmath$i$}_{\beta})$ \\
$123434233$ & $\psi_4\psi_3\psi_5\psi_4\psi_8e(\text{\boldmath$i$}_{\beta})$ &
$y_6y_9e(\text{\boldmath$i$}_{\beta})$ & $y_7e(\text{\boldmath$i$}_{\beta})$ \\
$1234342332$ & $\psi_4\psi_3\psi_5\psi_4\psi_8e(\text{\boldmath$i$}_{\beta})$ & $y_6y_9e(\text{\boldmath$i$}_{\beta})$ & $y_{10}e(\text{\boldmath$i$}_{\beta})$ \\
\hline
\end{tabular}

Consider now ${\beta} = 2{\alpha}_1 + 3{\alpha}_2 + 4{\alpha}_3 + 2{\alpha}_4$, where $\text{\boldmath$i$}_{\beta} = (12343123432)$. Let $w \in \mathfrak{S}_{11}$ be the permutation that sends $(1,\dots, 11)$ to $(6,7,8,9,10,1,2,3,4,5,11)$, and set $D_{\beta} = \psi_w e(\text{\boldmath$i$}_{\beta})$. Let ${\delta}_{\beta} = y_{10} e(\text{\boldmath$i$}_{\beta})$ and $y_{\beta} = y_{11} e(\text{\boldmath$i$}_{\beta})$. Define $\gamma = {\alpha}_1 + {\alpha}_2 + 2{\alpha}_3 + {\alpha}_4$. There is a map $\mu: \bar R_{2\gamma} \boxtimes R_{{\alpha}_2} \to \bar R_{\beta}$. This map is not surjective, but one can show that
$$
\bar e(\text{\boldmath$i$}_{\beta}) \bar R_{\beta} \bar e(\text{\boldmath$i$}_{\beta}) = \mu(\bar e(\text{\boldmath$i$}_{\gamma}^2) \bar R_{2\gamma} \bar e(\text{\boldmath$i$}_{\gamma}^2) \otimes R_{{\alpha}_2})
$$
and thus
\begin{align*}
\bar e_{\beta} \bar R_{\beta} \bar e_{\beta} &= \mu(\bar e_{(\gamma^2)} \bar R_{2\gamma} \bar e_{(\gamma^2)} \otimes R_{{\alpha}_2}) = \mu({\mathcal O}[\bar y_5 + \bar y_{10}, \bar y_5 \bar y_{10}] \bar e_{(\gamma^2)} \otimes R_{{\alpha}_2}) \\
&= {\mathcal O}[\bar y_5 + \bar y_{10}, \bar y_5 \bar y_{10}, \bar y_{11}] \bar e_{\beta}.
\end{align*}
We also compute (cf. \cite[\S5]{McN}):
$$
(\bar y_5 + \bar y_{10}) \bar e(\text{\boldmath$i$}_{\beta}) = - \bar \psi_5 \bar \psi_6 \bar \psi_7 \bar \psi_8 \bar \psi_9 \bar \psi_{10}^2 \bar \psi_9 \bar \psi_8 \bar \psi_7 \bar \psi_6 \bar \psi_5 \bar e(\text{\boldmath$i$}_{\beta}),
$$
which is zero because it involves the word $(12341234323)$, which is not a word of $L({\beta})$.
Since $\bar y_{11} \bar e(\text{\boldmath$i$}_{\beta}) = \bar y_{10}^2 \bar e(\text{\boldmath$i$}_{\beta})$, we see that $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta} = {\mathcal O}[\bar y_{11}] \bar e_{\beta}$, as required.

\subsubsection{Type $G_2$}
${\beta} = {\alpha}_1$: $\text{\boldmath$i$}_{\beta} = (1)$, $D_{\beta} = {\delta}_{\beta} = e(\text{\boldmath$i$}_{\beta})$, and $y_{\beta} = y_1 e(\text{\boldmath$i$}_{\beta})$.

${\beta} = {\alpha}_2$: $\text{\boldmath$i$}_{\beta} = (2)$, $D_{\beta} = {\delta}_{\beta} = e(\text{\boldmath$i$}_{\beta})$, and $y_{\beta} = y_1 e(\text{\boldmath$i$}_{\beta})$.

${\beta} = {\alpha}_1 + {\alpha}_2$: $\text{\boldmath$i$}_{\beta} = (12)$, $D_{\beta} = {\delta}_{\beta} = e(\text{\boldmath$i$}_{\beta})$, and $y_{\beta} = y_1 e(\text{\boldmath$i$}_{\beta})$.

${\beta} = 2{\alpha}_1 + {\alpha}_2$: $\text{\boldmath$i$}_{\beta} = (112)$, $D_{\beta} = \psi_1 e(\text{\boldmath$i$}_{\beta})$, ${\delta}_{\beta} = y_2 e(\text{\boldmath$i$}_{\beta})$, and $y_{\beta} = (y_1 + y_2) e(\text{\boldmath$i$}_{\beta})$.

${\beta} = 3{\alpha}_1 + {\alpha}_2$: $\text{\boldmath$i$}_{\beta} = (1112)$, $D_{\beta} = \psi_1 \psi_2 \psi_1 e(\text{\boldmath$i$}_{\beta})$, ${\delta}_{\beta} = y_2 y_3^2 e(\text{\boldmath$i$}_{\beta})$, and $y_{\beta} = y_1 y_2 y_3 e(\text{\boldmath$i$}_{\beta})$. Let $\mu$ be the composition $R_{3{\alpha}_1} \boxtimes R_{{\alpha}_2} \hookrightarrow R_{\beta} \twoheadrightarrow \bar R_{\beta}$. If $w \notin \mathfrak{S}_{3,1}$, then $\psi_w e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$, and so $\mu$ is surjective. Furthermore, $\bar e_{\beta} = \mu(e_{({\alpha}_1^3)} \otimes 1)$. Thus $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta} = {\mathcal O}[\bar y_1, \bar y_2, \bar y_3, \bar y_4]^{\mathfrak{S}_{3,1}} \bar e_{\beta}$.
Since $(\bar y_3^3 - \bar y_4) \bar e(\text{\boldmath$i$}_{\beta}) = \bar \psi_3^2 \bar e(\text{\boldmath$i$}_{\beta}) = 0$, we have that $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by ${\mathcal O}[\bar y_1, \bar y_2, \bar y_3]^{\mathfrak{S}_3} \bar e_{\beta}$. Observe using \cite[Theorem 4.12(i)]{KLM} that
$$
(\bar y_1 + \bar y_2 + \bar y_3) \bar e_{\beta} = \bar e_{\beta} \bar \psi_1 \bar \psi_2 \bar \psi_3^2 \bar e_{\beta} = 0
$$
and
$$
((\bar y_1 + \bar y_2 + \bar y_3)^2 - (\bar y_1 \bar y_2 + \bar y_1 \bar y_3 + \bar y_2 \bar y_3)) \bar e_{\beta} = \bar e_{\beta} \bar \psi_2 \bar \psi_3^2 \bar e_{\beta} = 0.
$$
Therefore $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by $\bar y_1 \bar y_2 \bar y_3$.

${\beta} = 3{\alpha}_1 + 2{\alpha}_2$: $\text{\boldmath$i$}_{\beta} = (11212)$, $D_{\beta} = \psi_1 \psi_3 \psi_2 \psi_4 \psi_1 \psi_3 e(\text{\boldmath$i$}_{\beta})$, ${\delta}_{\beta} = y_2 y_4^2 e(\text{\boldmath$i$}_{\beta})$, and $y_{\beta} = y_1 y_2 y_4 e(\text{\boldmath$i$}_{\beta})$. We first prove the following.

\noindent {\em Claim:} If $w \neq 1$ then $e(\text{\boldmath$i$}_{\beta}) \psi_w e_{\beta} \in I_{>({\beta})}$.

\noindent This is clearly true unless $w$ is one of the twelve permutations that stabilize the word $\text{\boldmath$i$}_{\beta}$. Of these, six produce a negative degree; since $D_{\beta}$ spans the smallest degree component of $e(\text{\boldmath$i$}_{\beta}) R_{\beta} e(\text{\boldmath$i$}_{\beta})$, the Claim holds for these six permutations. Two of the remaining six permutations end with the cycle $(12)$; since $\psi_1 D_{\beta} = 0$, the Claim holds for them too. Finally, reduced decompositions for the remaining non-identity permutations may be chosen so that $e(\text{\boldmath$i$}_{\beta}) \psi_w \in I_{>({\beta})}$ by Lemmas~\ref{LBadWordsNew} and \ref{LHighestWt}.
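The count of twelve stabilizing permutations can be checked by brute force. The following Python aside (not part of the proof) enumerates $\mathfrak{S}_5$ and counts the permutations fixing the word $(11212)$, i.e.\ those permuting positions carrying equal letters:

```python
from itertools import permutations

# The word i_beta = (11212) for beta = 3*alpha_1 + 2*alpha_2 in type G_2.
word = (1, 1, 2, 1, 2)

# w fixes the word iff the letter at every position is preserved, i.e. w
# permutes the positions of the 1's ({0,1,3}) and of the 2's ({2,4}) separately.
stab = [w for w in permutations(range(5))
        if all(word[w[r]] == word[r] for r in range(5))]

assert len(stab) == 12  # 3! * 2! = 12, the "twelve permutations" in the Claim
print(len(stab))        # prints 12
```

The count is $3!\cdot 2! = 12$ since the three positions carrying the letter $1$ and the two positions carrying the letter $2$ may be permuted independently.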
Now we combine the Claim with Theorem~\ref{TBasis} to see that $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta}$ is generated by $\bar e_{\beta} {\mathcal O}[\bar y_1, \dots, \bar y_5] \bar e_{\beta}$. Next, by weights and quadratic relations, $(y_3-y_2^3) e(\text{\boldmath$i$}_{\beta}), (y_5-y_4^3) e(\text{\boldmath$i$}_{\beta}) \in I_{>({\beta})}$. Thus $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta} = \bar e_{\beta} {\mathcal O}[\bar y_1, \bar y_2, \bar y_4] \bar e_{\beta}$. This can then be seen to be equal to ${\mathcal O}[\bar y_1, \bar y_2, \bar y_4]^{\mathfrak{S}_3} \bar e_{\beta}$, arguing as in the case of the root $3{\alpha}_1 + {\alpha}_2$ above. Again as in the case of the root $3{\alpha}_1 + {\alpha}_2$, one then shows using specific elements of $I_{>({\beta})}$ that $(\bar y_1 + \bar y_2 + \bar y_4) \bar e_{\beta} = 0$ and $(\bar y_1 \bar y_2 + \bar y_1 \bar y_4 + \bar y_2 \bar y_4) \bar e_{\beta} = 0$, so that $\bar e_{\beta} \bar R_{\beta} \bar e_{\beta} = {\mathcal O}[\bar y_1 \bar y_2 \bar y_4] \bar e_{\beta}$.

\begin{thebibliography}{ABC}

\bibitem{BKgrdec} J. Brundan and A. Kleshchev, Graded decomposition numbers for cyclotomic Hecke algebras, {\em Adv. Math.} {\bf 222} (2009), 1883--1942.

\bibitem{BKM} J. Brundan, A. Kleshchev and P.J. McNamara, Homological properties of finite type Khovanov-Lauda-Rouquier algebras, {\em Duke Math. J.}, to appear; \arXiv{1210.6900}.

\bibitem{BKOP} G. Benkart, S.-J. Kang, S.-J. Oh, and E. Park, Construction of irreducible representations over Khovanov-Lauda-Rouquier algebras of finite classical type, {\em IMRN}, to appear; \arXiv{1108.1048}.

\bibitem{HMM} D. Hill, G. Melvin, and D. Mondragon, Representations of quiver Hecke algebras via Lyndon bases, {\em J. Pure Appl. Algebra} {\bf 216} (2012), 1052--1079.

\bibitem{Kac} V. G. Kac, {\em Infinite Dimensional Lie Algebras}, Cambridge University Press, 1990.

\bibitem{Kato} S.
Kato, PBW bases and KLR algebras, \arXiv{1203.5254}.

\bibitem{KL1} M. Khovanov and A. Lauda, A diagrammatic approach to categorification of quantum groups I, {\em Represent. Theory} {\bf 13} (2009), 309--347.

\bibitem{KL2} M. Khovanov and A. Lauda, A diagrammatic approach to categorification of quantum groups II, {\em Trans. Amer. Math. Soc.} {\bf 363} (2011), 2685--2700.

\bibitem{Kcusp} A. Kleshchev, Cuspidal systems for affine Khovanov-Lauda-Rouquier algebras, {\em Math. Z.}, to appear; \arXiv{1210.6556}.

\bibitem{KLM} A. Kleshchev, J. Loubert, and V. Miemietz, Affine cellularity of Khovanov-Lauda-Rouquier algebras in type $A$, {\em J. Lond. Math. Soc.}, to appear; \arXiv{1210.6542}.

\bibitem{KR} A. Kleshchev and A. Ram, Representations of Khovanov-Lauda-Rouquier algebras and combinatorics of Lyndon words, {\em Math. Ann.} {\bf 349} (2011), no. 4, 943--975.

\bibitem{KRhomog} A. Kleshchev and A. Ram, Homogeneous representations of Khovanov-Lauda algebras, {\em J. Eur. Math. Soc.} {\bf 12} (2010), 1293--1306.

\bibitem{KK} S. Koenig and A. Kleshchev, Affine highest weight categories and affine quasihereditary algebras, preprint, 2013.

\bibitem{KoXi} S. Koenig and C. Xi, Affine cellular algebras, {\em Adv. Math.} {\bf 229} (2012), no. 1, 139--182.

\bibitem{LV} A.D. Lauda and M. Vazirani, Crystals from categorified quantum groups, {\em Adv. Math.} {\bf 228} (2011), 803--861.

\bibitem{Lec} B. Leclerc, Dual canonical bases, quantum shuffles and $q$-characters, {\em Math. Z.} {\bf 246} (2004), 691--732.

\bibitem{Lu1} G. Lusztig, {\em Introduction to Quantum Groups}, Progress in Mathematics 110, Birkh\"auser, Boston, 1993.

\bibitem{Man} L. Manivel, {\em Symmetric Functions, Schubert Polynomials and Degeneracy Loci}, vol. 6 of SMF/AMS Texts and Monographs,
AMS, Providence, RI, 2001.

\bibitem{McN} P.J. McNamara, Finite dimensional representations of Khovanov-Lauda-Rouquier algebras I: finite type, {\em J. reine angew. Math.}, to appear; \arXiv{1207.5860}.

\bibitem{R} R. Rouquier, $2$-Kac-Moody algebras; \arXiv{0812.5023}.

\end{thebibliography}

\end{document}
\begin{document} \begin{frontmatter} \title{Existence of Kazdan-Warner equation with sign-changing prescribed function\tnoteref{SZ}} \author[whu1,whu2]{Linlin Sun} \address[whu1]{School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China} \address[whu2]{Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan, 430072, China} \ead{[email protected]} \author[mpi]{Jingyong Zhu\texorpdfstring{\corref{zjy}}{}} \address[mpi]{Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany} \ead{[email protected]} \cortext[zjy]{Corresponding author.} \tnotetext[SZ]{This research is partially supported by the National Natural Science Foundation of China (Grant No. 11971358, 11801420). The first author would like to thank Prof. Chen Xuezhang for his useful suggestions and support.} \begin{abstract} In this paper, we study the following Kazdan-Warner equation with sign-changing prescribed function $h$ \begin{align*} -\Delta u=8\pi\left(\dfrac{he^{u}}{\int_{\Sigma}he^{u}}-1\right) \end{align*} on a closed Riemann surface whose area equals one. The solutions are the critical points of the functional $J_{8\pi}$ which is defined by \begin{align*} J_{8\pi}(u)=\dfrac{1}{16\pi}\int_{\Sigma}\abs{\nabla u}^2+\int_{\Sigma}u-\ln\abs{\int_{\Sigma}he^{u}},\quad u\in H^1\left(\Sigma\right). \end{align*} We prove the existence of minimizer of $J_{8\pi}$ by assuming \begin{equation*} \Delta \ln h^++8\pi-2K>0 \end{equation*} at each maximum point of $2\ln h^++A$, where $K$ is the Gaussian curvature, $h^+$ is the positive part of $h$ and $A$ is the regular part of the Green function. This generalizes the existence result of Ding, Jost, Li and Wang [Asian J. Math. 1(1997), 230-248] to the sign-changing prescribed function case. 
We are also interested in the blow-up behavior of a sequence $u_{\varepsilon}$ of critical points of $J_{8\pi-\varepsilon}$ with $\int_{\Sigma}he^{u_{\varepsilon}}=1, \lim\limits_{\varepsilon\searrow 0}J_{8\pi-\varepsilon}\left(u_{\varepsilon}\right)<\infty$ and obtain the following identity during the blow-up process \begin{equation*} -\varepsilon=\frac{16\pi}{(8\pi-\varepsilon)h(p_\varepsilon)}\left[\Delta \ln h(p_\varepsilon)+8\pi-2K(p_\varepsilon)\right]\lambda_{\varepsilon}e^{-\lambda_{\varepsilon}}+O\left(e^{-\lambda_{\varepsilon}}\right), \end{equation*} where $p_\varepsilon$ and $\lambda_\varepsilon$ are the maximum point and maximum value of $u_\varepsilon$, respectively. Moreover, $p_{\varepsilon}$ converges to the blow-up point which is a critical point of the function $2\ln h^{+}+A$. \end{abstract} \begin{keyword} Kazdan-Warner equation \sep sign-changing prescribed function \sep existence. \MSC[2020] 35B33 \sep 58J05 \end{keyword} \end{frontmatter} \section{Introduction} Let $\Sigma$ be a closed Riemann surface whose area equals one. Let $h$ be a nonzero smooth function on $\Sigma$ such that $\max\limits_{\Sigma}h>0$. For each positive number $\rho$, we consider the following functional \begin{align*} J_{\rho}(u)=\dfrac{1}{2\rho}\int_{\Sigma}\abs{\nabla u}^2+\int_{\Sigma}u-\ln\abs{\int_{\Sigma}he^{u}},\quad u\in H^1\left(\Sigma\right). \end{align*} The critical points of $J_{\rho}$ are solutions to the following mean field equation \begin{align}\label{eq:KW} -\Delta u=\rho\left(\dfrac{he^{u}}{\int_{\Sigma}he^{u}}-1\right) \end{align} where $\Delta$ is the Laplace operator on $\Sigma$. Mean field equation has a strong relationship with Kazdan-Warner equation. Forty years ago, Kazdan and Warner \cite{kazdan1974curvature} considered the solvability of the equation \begin{align*} -\Delta u=he^u-\rho, \end{align*} where $\rho$ is a constant and $h$ is some smooth prescribed function. 
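For $\rho>0$ the two equations differ only by an additive constant; here is a brief check, using that the area of $\Sigma$ equals one and assuming $\int_{\Sigma}he^{u}>0$: integrating $-\Delta v=he^{v}-\rho$ over $\Sigma$ yields $\int_{\Sigma}he^{v}=\rho$, so that
\begin{align*}
-\Delta v=he^{v}-\rho=\rho\left(\dfrac{he^{v}}{\int_{\Sigma}he^{v}}-1\right),
\end{align*}
which is \eqref{eq:KW}; conversely, if $u$ solves \eqref{eq:KW}, then $v=u+\ln\rho-\ln\int_{\Sigma}he^{u}$ satisfies $he^{v}=\rho\,he^{u}/\int_{\Sigma}he^{u}$ and hence $-\Delta v=he^{v}-\rho$.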
When $\rho>0$, the equation above is equivalent to the mean field equation \eqref{eq:KW}. The special case $\rho=8\pi$ is sometimes called the Kazdan-Warner equation. In particular, when $\Sigma$ is the standard sphere $\mathbb{S}^2$, it is called the Nirenberg problem, which comes from conformal geometry. It has been studied by Moser \cite{moser1971sharp}, Kazdan and Warner \cite{kazdan1974curvature}, Chen and Ding \cite{chen1987scalar}, Chang and Yang \cite{chang1987prescribing} and others. The mean field equation \eqref{eq:KW} appears in various contexts such as the abelian Chern-Simons-Higgs models. The existence of solutions of \eqref{eq:KW} and its evolution problem has been widely studied in recent decades (see for example \cite{BreMer91uniform,casteras2015mean,CheLin02sharp,chen2003topological,DinJosLiWan97differential,ding1999existence,djadli2008existence,DM2008existence,LiZhu19convergence,Lin00topological,Mal08morse,struwe2002curvature,SunZhu20global} and the references therein). In this paper, we consider the existence theory of the Kazdan-Warner equation ($\rho=8\pi$) with sign-changing prescribed function. The key is to analyze the asymptotic behavior of the blow-up solutions $u_\varepsilon$ (see \eqref{blow-up sequence}) and the functional $J_{8\pi}$. We prove the following identity near the blow-up point, whose analogue was proved by Chen and Lin in \cite{CheLin02sharp} when the prescribed function $h$ is positive. \begin{theorem}\label{blow-up id} Let $h$ be positive somewhere on $\Sigma$ and $u_{\varepsilon}$ a blow-up sequence satisfying \begin{align}\label{blow-up sequence} -\Delta u_{\varepsilon}=\left(8\pi-\varepsilon\right)\left(he^{u_{\varepsilon}}-1\right),\quad\text{in}\ \Sigma \end{align} and \begin{align}\label{eq:upper-J} \lim_{\varepsilon\searrow 0}J_{8\pi-\varepsilon}\left(u_{\varepsilon}\right)<\infty. 
\end{align} Then, up to a subsequence, for $p_{\varepsilon}\in\Sigma$ with \begin{align*} \lambda_{\varepsilon}=\max_{\Sigma}u_{\varepsilon}=u_{\varepsilon}\left(p_{\varepsilon}\right), \end{align*} we have \begin{equation*} -\varepsilon=\frac{16\pi}{(8\pi-\varepsilon)h(p_\varepsilon)}[\Delta \ln h^+(p_\varepsilon)+8\pi-2K(p_\varepsilon)]\lambda_{\varepsilon}e^{-\lambda_{\varepsilon}}+O\left(e^{-\lambda_{\varepsilon}}\right), \end{equation*} where $K$ denotes the Gaussian curvature of $\Sigma$. \end{theorem} This yields a uniform bound of minimizers as $\varepsilon\searrow 0$ provided that $\Delta \ln h^++8\pi-2K>0$ at all blow-up points. Let $G(q,p)$ be the Green function on $\Sigma$ with singularity at $p$, i.e., \begin{align*} \Delta G(\cdot,p)=1-\delta_{p},\quad\int_{\Sigma}G(\cdot,p)=0. \end{align*} Under a local normal coordinate $x$ centered at $p$, we have \begin{equation}\label{expansion G} 8\pi G(x,p)=-4\ln\abs{x}+A(p)+b_1x_1+b_2x_2+c_1x_1^2+2c_2x_1x_2+c_3x_2^2+O\left(\abs{x}^3\right). \end{equation} By \autoref{cp position}, we know the blow-up point has to be a critical point of $2\ln h^+(p)+A(p)$. Thus, we get an existence result. \begin{cor} Let $\Sigma$ be a compact Riemann surface and $K(p)$ be its Gaussian curvature. Suppose $h(p)$ is a smooth function which is positive somewhere on $\Sigma$. If \begin{equation*} \Delta \ln h^++8\pi-2K>0 \end{equation*} holds at every critical point of $2\ln h^++A$, then equation \eqref{eq:KW} has a solution for $\rho=8\pi$. \end{cor} Furthermore, if $u_{\varepsilon}$ is a minimizer of $J_{8\pi-\varepsilon}$, we can show the blow-up point is actually a maximum point of $2\ln h^++A$. \begin{theorem}\label{functional value} If $u_{\varepsilon}$ is a minimizer of $J_{8\pi-\varepsilon}$ and blows up as $\varepsilon\searrow 0$, then the blow-up point $p_0$ is a maximum point of the function $2\ln h^++A$. 
Moreover, \begin{align*} \inf_{u\in H^1\left(\Sigma\right)}J_{8\pi}=-1-\ln\pi-\left(\ln h(p_0)+\frac12 A(p_0)\right), \end{align*} and there is a sequence $\phi_{\varepsilon}\in H^1\left(\Sigma\right)$ such that \begin{align*} \begin{split} J_{8\pi}\left(\phi_{\varepsilon}\right)&=-1-\ln\pi-\left(\ln h(p_0)+\frac12 A(p_0)\right)\\ &\quad-\dfrac{1}{4}\left(\Delta\ln h(p_0)+8\pi-2K(p_0)\right)\varepsilon\ln\varepsilon^{-1} +o\left(\varepsilon\ln\varepsilon^{-1}\right). \end{split} \end{align*} \end{theorem} Hence, we obtain a minimizing solution of the functional $J_{8\pi}$. \begin{theorem}\label{thm:DJLW} Let $\Sigma$ be a compact Riemann surface and $K$ be its Gaussian curvature. Suppose $h$ is a smooth function which is positive somewhere on $\Sigma$. If the following holds at the maximum points of $2\ln h^++A$ \begin{equation*} \Delta \ln h+8\pi-2K>0, \end{equation*} then equation \eqref{eq:KW} has a minimizing solution for $\rho=8\pi$. \end{theorem} \begin{rem}The condition mentioned in \autoref{thm:DJLW} cannot hold on the $2$-sphere with an arbitrary metric. Assume $g=e^{2\phi}g_0$ and solve \begin{align*} -\Delta_{g_0}\psi=\dfrac{1}{\abs{\Sigma}_{g_0}}-\dfrac{e^{2\phi}}{\abs{\Sigma}_{g}},\quad \int_{\Sigma}\psi\dif\mu_{g_0}=0, \end{align*} where $\abs{\Sigma}_g$ stands for the area of $\Sigma$ with respect to the metric $g$. Set $h_0=he^{2\phi+\rho \psi}$. Then \begin{align*} J_{\rho,h,g}(u)=J_{\rho,h_0,g_0}\left(u-\rho \psi\right)-\dfrac{\rho}{2}\int_{\Sigma}\abs{\dif \psi}_{g_0}^2\dif\mu_{g_0}. \end{align*} If the condition mentioned in \autoref{thm:DJLW} holds, then there is a minimizer of $J_{8\pi,h,g}$. Hence, there is also a minimizer of $J_{8\pi, h_0,g_0}$. If $\Sigma$ is a $2$-sphere, we choose $g_0$ such that the Gaussian curvature is constant; then $h_0$ must be a constant (see \cite{Han90prescribing}). 
Thus $h$ is a positive function and \begin{align*} \Delta_{g}\ln h+\dfrac{8\pi}{\abs{\Sigma}_{g}}-2K_{g}=e^{-2\phi}\left(\Delta_{g_0}\ln h_0+\dfrac{8\pi}{\abs{\Sigma}_{g_0}}-2K_{g_0}\right)=0, \end{align*} which is a contradiction. \end{rem} \begin{rem}Zhu \cite{Zhu18generalized} also obtained the infimum of the functional $J_{8\pi}$ if there is no minimizer (when $h$ is non-negative). He pointed out that the blow-up point must be a point where $h$ is positive and used the maximum principle to estimate the lower bound of the functional $J_{8\pi}$ when $h$ is non-negative. In our case, the maximum principle does not work since $h$ is sign-changing. We will use the method of energy estimates to give the lower bound of the functional $J_{8\pi}$. Such a method can also be used to treat the flow case (cf. \cite{SunZhu20global,LiZhu19convergence}) and the Palais-Smale sequence. \end{rem} \begin{rem} The method in the proof of \autoref{thm:DJLW} can be used to prove the convergence of the Kazdan-Warner flow. In other words, under the same condition mentioned in \autoref{thm:DJLW}, there exists an initial datum $u_0$ such that the following flow \begin{align*} \dfrac{\partial u}{\partial t}=\Delta u+8\pi\left(\dfrac{he^{u}}{\int_{\Sigma}he^{u}}-1\right),\quad u(0)=u_0 \end{align*} converges to a minimizer of $J_{8\pi}$. This gives a generalization of the previous results \cite{LiZhu19convergence} (positive prescribed function case) and \cite{SunZhu20global} (non-negative prescribed function case). Recently, Chen, Li, Li and Xu \cite{CheLiLiXu20gaussian} considered another flow approach to the Gaussian curvature flow on the sphere and reproved the existence result for a sign-changing prescribed function, which was obtained by Han \cite{Han90prescribing}. \end{rem} \section{Preliminary} Recall the strong Trudinger-Moser inequality (cf. 
\cite[Theorem 1.7]{Fon93sharp}) \begin{align*} \sup_{u\in H^1\left(\Sigma\right), \int_{\Sigma}\abs{\nabla u}^2\leq 1, \int_{\Sigma}u=0}\int_{\Sigma}\exp\left(4\pi u^2\right)<\infty, \end{align*} which implies the Trudinger-Moser inequality \begin{align}\label{eq:TM} \ln\int_{\Sigma}e^{u}\leq\dfrac{1}{16\pi}\int_{\Sigma}\abs{\nabla u}^2+\int_{\Sigma}u+c \end{align} where $c$ is a uniform constant depending only on the geometry of $\Sigma$. We may assume $h$ is positive somewhere. If $0<\rho<8\pi$, then, applying the Trudinger-Moser inequality \eqref{eq:TM}, Kazdan and Warner (\cite[Theorem 7.2]{kazdan1974curvature}) proved that the Kazdan-Warner equation \eqref{eq:KW} admits a solution $u$ which minimizes the functional $J_{\rho}$ and satisfies \begin{align*} \int_{\Sigma}he^{u}=1. \end{align*} We consider the critical case $\rho=8\pi$. For every $\varepsilon\in(0,8\pi)$, let $u_{\varepsilon}$ be a minimizer of $J_{8\pi-\varepsilon}$ which satisfies \begin{align*} \int_{\Sigma}he^{u_{\varepsilon}}=1. \end{align*} Thus $u_{\varepsilon}$ satisfies \eqref{blow-up sequence}. It is clear that the function \begin{align*} \rho\mapsto\inf_{u\in H^1\left(\Sigma\right)}J_{\rho}(u) \end{align*} is a decreasing function on $(0,+\infty)$. In particular, $u_{\varepsilon}$ satisfies \eqref{eq:upper-J}. By the Trudinger-Moser inequality \eqref{eq:TM}, we have \begin{align}\label{eq:lower-J} J_{8\pi-\varepsilon}\left(u_{\varepsilon}\right)\geq&\ln\int_{\Sigma}e^{u_{\varepsilon}}-c. \end{align} Thus \eqref{eq:upper-J} and \eqref{eq:lower-J} give \begin{align}\label{eq:upper-energy} \int_{\Sigma}e^{u_{\varepsilon}}\leq C,\quad\forall \varepsilon\in(0,4\pi). \end{align} One can check that \begin{align*} \lim_{\varepsilon\to0}J_{8\pi}\left(u_{\varepsilon}\right)=\inf_{u\in H^1\left(\Sigma\right)}J_{8\pi}(u). 
\end{align*} If \begin{align*} \limsup_{\varepsilon\to0}\max_{\Sigma}u_{\varepsilon}<+\infty, \end{align*} then up to a subsequence $u_{\varepsilon}$ converges smoothly to a minimizer of $J_{8\pi}$. In the rest of this section, we only assume $u_{\varepsilon}$ is a solution to \eqref{blow-up sequence} and satisfies the condition \eqref{eq:upper-energy}. Assume now $\set{u_{\varepsilon}}$ is a blow-up sequence, i.e., \begin{align*} \limsup_{\varepsilon\to0}\max_{\Sigma}u_{\varepsilon}=+\infty. \end{align*} Without loss of generality, we may assume $h^{\pm}e^{u_{\varepsilon}}\dif\mu_{\Sigma}$ converges to a nonzero Radon measure $\mu^{\pm}$ as $\varepsilon\to0$. Define the singular set $S$ of the sequence $\set{u_{\varepsilon}}$ by \begin{align*} S=\set{x\in\Sigma: \abs{\mu}\left(\set{x}\right)\geq \dfrac12}, \end{align*} where $\abs{\mu}=\mu^++\mu^-$. It is clear that $S$ is a finite set. Applying Brezis-Merle's estimate \cite[Theorem 1]{BreMer91uniform}, one can obtain that for each compact subset $K\subset \Sigma\setminus S$ (cf. \cite[Lemma 2.8]{DinJosLiWan97differential}) \begin{align}\label{eq:out} \norm{u_{\varepsilon}-\int_{\Sigma} u_{\varepsilon}}_{L^{\infty}\left(K\right)}\leq C_{K}. \end{align} Then one obtains a characterization of $S$ by the blow-up sets of $\set{u_{\varepsilon}}$ (cf. \cite[Page 1240]{BreMer91uniform}) \begin{align*} S=\set{p\in\Sigma: \exists \ p_\varepsilon\in\Sigma,\ s.t.\ \lim_{\varepsilon\to0}p_\varepsilon=p,\ \lim_{\varepsilon\to0}u_{\varepsilon}\left(p_\varepsilon\right)=\infty}. \end{align*} Moreover, $S$ is nonempty and \begin{align*} \lim_{\varepsilon\to0}\int_{\Sigma} u_{\varepsilon}=-\infty, \end{align*} which implies that $u_{\varepsilon}$ goes to $-\infty$ uniformly on each compact subset $K\subset\Sigma\setminus S$. Thus, $\abs{\mu}$ is a Dirac measure. By using blow-up analysis (cf. 
\cite[Lemma 1]{LiSha94blowup}) together with the classification result of Chen-Li \cite[Theorem 1]{CheLi91classification}, one can show that $\mu^{-}=0$ and \begin{align*} S=\set{p\in\Sigma : \mu^+\left(\set{p}\right)\geq 1, h(p)>0}. \end{align*} Notice that $he^{u_{\varepsilon}}\dif\mu_{\Sigma}$ converges to the nonzero Radon measure $\mu^+$ as $\varepsilon\to0$. We conclude that $S=\set{p_0}$ is a single point set and $\abs{\mu}=\mu^+=\delta_{p_0}$. Thus we have \begin{lem}[cf. Lemma 2.6 in \cite{DinJosLiWan97differential}]\label{convergence} $u_\varepsilon-\int_{\Sigma}u_\varepsilon$ converges to $8\pi G(\cdot,p_0)$ weakly in $W^{1,q}\left(\Sigma\right)$ and strongly in $L^q\left(\Sigma\right)$ for every $q\in(1,2)$, and converges in $C^2_{loc}\left(\Sigma\setminus\set{p_0}\right)$. \end{lem} For a fixed small $\delta_0>0$, we define $\rho_\varepsilon$ to be \begin{equation*} \rho_\varepsilon=(8\pi-\varepsilon)\int_{B_{\delta_0}(p_0)}he^{u_\varepsilon} \end{equation*} and \begin{align*} \lambda_{\varepsilon}=u_{\varepsilon}(p_\varepsilon)=\max_{\overline{B_{\delta_0}(p_0)}}u_{\varepsilon}\to +\infty. \end{align*} We may assume \begin{align*} h\vert_{B_{\delta_0}(p_0)}\geq\dfrac12h(p_0)>0,\quad \max_{\partial B_{\delta_0}(p_0)}u_{\varepsilon}-\min_{\partial B_{\delta_0}(p_0)}u_{\varepsilon}\leq C,\quad\int_{B_{\delta_0}(p_0)}e^{u_{\varepsilon}}\leq C. \end{align*} Li \cite[Theorem 0.3]{Li99harnack} obtained the following local estimate \begin{equation}\label{eq:local} \abs{u_\varepsilon(p)-\ln{\frac{e^{\lambda_\varepsilon}}{\left(1+\frac{(8\pi-\varepsilon)h(p_\varepsilon)}{8}e^{\lambda_\varepsilon}|p-p_\varepsilon|^2\right)^2}}}\leq C \end{equation} for $p\in B_{\delta_0}(p_0)$, where $|p-p_\varepsilon |$ stands for the distance between $p$ and $p_\varepsilon$. Together with \autoref{convergence}, the above local estimate \eqref{eq:local} gives the following \begin{lem}[cf. 
Corollary 2.4 in \cite{CheLin02sharp}]\label{outside} There exists a constant $C>0$ such that \begin{equation*} \abs{u_\varepsilon+\lambda_\varepsilon}\leq C \ \text{in} \ \Sigma\setminus B_{\delta_0}(p_0). \end{equation*} \end{lem} \begin{lem}[cf. Estimate A in \cite{CheLin02sharp}]\label{estimate B} Set $\omega_\varepsilon$ to be the error term defined by \begin{equation*} \omega_\varepsilon(q)=u_\varepsilon(q)-\rho_\varepsilon G(q,p_\varepsilon)-\bar{u}_\varepsilon \end{equation*} on $\Sigma\setminus B_{\delta_0/2}(p_0)$. Then we have \begin{equation*} \norm{\omega_\varepsilon}_{C^1\left(\Sigma\setminus B_{\delta_0}(p_0)\right)}=O\left(e^{-\lambda_\varepsilon/2}\right). \end{equation*} \end{lem} \begin{proof} Notice that $h$ may be non-positive outside $B_{\delta_0/2}(p_0)$; the estimate still holds in this case, and we give a proof here. By the Green representation formula, for every $q\in\Sigma\setminus B_{\delta_0}(p_0)$ \begin{align*} u_{\varepsilon}(q)-\bar u_{\varepsilon}=&\left(8\pi-\varepsilon\right)\int_{\Sigma}G(q,p)\left[h(p)e^{u_{\varepsilon}(p)}-1\right]\dif\mu_{\Sigma}(p)\\ =&\left(8\pi-\varepsilon\right)\int_{\Sigma}\left(G(q,p)-G\left(q,p_{\varepsilon}\right)\right)\left[h(p)e^{u_{\varepsilon}(p)}-1\right]\dif\mu_{\Sigma}(p)\\ =&\left(8\pi-\varepsilon\right)\int_{\Sigma\setminus B_{\delta_0/2}(p_0)}\left(G(q,p)-G\left(q,p_{\varepsilon}\right)\right)h(p)e^{u_{\varepsilon}(p)}\dif\mu_{\Sigma}(p)\\ &+\left(8\pi-\varepsilon\right)\int_{ B_{\delta_0/2}(p_0)}\left(G(q,p)-G\left(q,p_{\varepsilon}\right)\right)h(p)e^{u_{\varepsilon}(p)}\dif\mu_{\Sigma}(p)+(8\pi-\varepsilon)G(q,p_{\varepsilon})\\ =&(8\pi-\varepsilon)G(q,p_{\varepsilon})+O\left(e^{-\lambda_{\varepsilon}/2}\right). \end{align*} Here we used estimate \eqref{eq:out} and Li's local estimate \eqref{eq:local}. 
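To see that the integral over $B_{\delta_0/2}(p_0)$ contributes only $O\left(e^{-\lambda_{\varepsilon}/2}\right)$, note that $G(q,\cdot)$ is smooth on $B_{\delta_0/2}(p_0)$ when $q\in\Sigma\setminus B_{\delta_0}(p_0)$, so $\abs{G(q,p)-G\left(q,p_{\varepsilon}\right)}\leq C\abs{p-p_{\varepsilon}}$ there; writing $c=\frac{(8\pi-\varepsilon)h(p_\varepsilon)}{8}$ and substituting $p=p_{\varepsilon}+e^{-\lambda_{\varepsilon}/2}z$, Li's local estimate \eqref{eq:local} (with the quadratic decay $\left(1+c\,e^{\lambda_{\varepsilon}}\abs{p-p_{\varepsilon}}^{2}\right)^{-2}$) gives the sketch
\begin{align*}
\int_{B_{\delta_0/2}(p_0)}\abs{p-p_{\varepsilon}}e^{u_{\varepsilon}(p)}\dif\mu_{\Sigma}(p)
\leq C\int_{\mathbb{R}^2}\frac{\abs{p-p_{\varepsilon}}\,e^{\lambda_{\varepsilon}}}{\left(1+c\,e^{\lambda_{\varepsilon}}\abs{p-p_{\varepsilon}}^2\right)^2}\dif p
=Ce^{-\lambda_{\varepsilon}/2}\int_{\mathbb{R}^2}\frac{\abs{z}}{\left(1+c\abs{z}^2\right)^2}\dif z
=O\left(e^{-\lambda_{\varepsilon}/2}\right),
\end{align*}
since the last integral converges.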
By definition, \begin{align*} \rho_{\varepsilon}=(8\pi-\varepsilon)-(8\pi-\varepsilon)\int_{\Sigma\setminus B_{\delta_0}(p_0)}he^{u_{\varepsilon}}=(8\pi-\varepsilon)+O\left(e^{-\lambda_{\varepsilon}}\right). \end{align*} Thus \begin{align*} u_{\varepsilon}(q)-\bar u_{\varepsilon}-\rho_{\varepsilon}G(q,p_{\varepsilon})=O\left(e^{-\lambda_{\varepsilon}/2}\right),\quad\forall q\in \Sigma\setminus B_{\delta_0}(p_0). \end{align*} Notice that \begin{align*} -\Delta\left(u_{\varepsilon}-\bar u_{\varepsilon}-\rho_{\varepsilon}G(\cdot,p_{\varepsilon})\right)=(8\pi-\varepsilon)he^{u_{\varepsilon}}+\rho_{\varepsilon}-(8\pi-\varepsilon)=O\left(e^{-\lambda_{\varepsilon}}\right),\quad\text{in}\ \Sigma\setminus B_{\delta_0}(p_0) \end{align*} and \begin{align*} u_{\varepsilon}-\bar u_{\varepsilon}-\rho_{\varepsilon}G(\cdot,p_{\varepsilon})=O\left(e^{-\lambda_{\varepsilon}/2}\right),\quad\text{on}\ \partial B_{\delta_0}(p_0). \end{align*} The standard elliptic estimate gives \begin{align*} \norm{u_{\varepsilon}-\bar u_{\varepsilon}-\rho_{\varepsilon}G(\cdot,p_{\varepsilon})}_{C^1\left(\Sigma\setminus B_{\delta_0}(p_0)\right)}=O\left(e^{-\lambda_{\varepsilon}/2}\right). \end{align*} \end{proof} Based on these facts, we then have the following local estimates. The proofs are the same as those in \cite{CheLin02sharp}, so we omit them here. \begin{lem}[cf. Estimate B in \cite{CheLin02sharp}]\label{cp position} By using the local normal coordinate $x$ centered at $p_{\varepsilon}$, we set the regular part of the Green function $G(x,p_\varepsilon)$ to be \begin{equation*} \tilde{G}_{\varepsilon}(x)=G(x,p_\varepsilon)+\dfrac{1}{2\pi}\ln\abs{x}, \end{equation*} and set \begin{equation*} G_{\varepsilon}^*(x)=\rho_\varepsilon\tilde{G}_\varepsilon(x). \end{equation*} Then we get \begin{equation*} \abs{\nabla\left(\ln h^++G_{\varepsilon}^*\right)(p_{\varepsilon})}=O\left(e^{-\lambda_\varepsilon/2}\right). 
\end{equation*} \end{lem} Notice that the Green function is symmetric and we conclude that \begin{align*} \abs{\nabla\left(2\ln h^++\dfrac{8\pi-\varepsilon}{8\pi}A\right)\left(p_{\varepsilon}\right)}=O\left(e^{-\lambda_{\varepsilon}/2}\right). \end{align*} In $B_{\delta_0}(p_\varepsilon)$, we define the following function as in \cite{CheLin02sharp} \begin{equation*} v_\varepsilon(p)=\ln\dfrac{e^{\lambda_{\varepsilon}}}{\left(1+\frac{(8\pi-\varepsilon)h(p_\varepsilon)}{8}e^{\lambda_{\varepsilon}}|p-q_{\varepsilon}|^2\right)^2}, \end{equation*} where $q_{\varepsilon}$ is chosen to satisfy \begin{equation*} \nabla v_{\varepsilon}(p_{\varepsilon})=\nabla\ln h(p_{\varepsilon}), \end{equation*} which implies $\abs{p_\varepsilon-q_\varepsilon}=O\left(e^{-\lambda_\varepsilon}\right)$. We also set the error term as \begin{equation*} \eta_\varepsilon(p)=u_\varepsilon(p)-v_\varepsilon(p)-(G_{\varepsilon}^*(p)-G_{\varepsilon}^*(p_\varepsilon)) \end{equation*} and \begin{equation*} R_{\varepsilon}=\left(\frac{(8\pi-\varepsilon)h(p_\varepsilon)}{8}e^{\lambda_{\varepsilon}}\right)^{\frac12}\delta_0. \end{equation*} Then we have the following estimate for the scaled function $\tilde{\eta}_\varepsilon(z)=\eta_\varepsilon\left(\delta_0R_{\varepsilon}^{-1}z\right)$ for $|z|\leq R_{\varepsilon}$. \begin{lem}[cf. 
Estimates C, D and E in \cite{CheLin02sharp}]\label{remainder} For any $\tau\in(0,1)$, there exists a constant $C=C_\tau$ such that \begin{equation*} \eta_\varepsilon(p)=\left(4-\dfrac{\rho_\varepsilon}{2\pi}\right)\ln{|p-p_\varepsilon|}+O\left(\lambda_\varepsilon e^{-\frac{\tau\lambda_\varepsilon}{2}}\sup_{\frac{\delta_0}{2}\leq|p-p_\varepsilon|\leq\delta_0}|\eta_\varepsilon|+e^{-\frac{\lambda_\varepsilon}{2}}\right) \end{equation*} and \begin{equation*} \abs{\tilde{\eta}_\varepsilon(z)}\leq C\left(1+|z|\right)^\tau\left(e^{-\tau\lambda_\varepsilon}+e^{-\frac{\tau}{2}\lambda_\varepsilon}|8\pi-\rho_\varepsilon|\right) \end{equation*} hold for $p\in \bar{B}_{\delta_0}(p_\varepsilon)\setminus{B_{\delta_0/2}(p_\varepsilon)}$ and $|z|\leq R_{\varepsilon}$. \end{lem} The following lemma shows the relationship between $\rho_\varepsilon-8\pi$ and $\eta_\varepsilon$. \begin{lem}[cf. Estimate F in \cite{CheLin02sharp}]\label{boundary term} \begin{equation*} \rho_\varepsilon-8\pi=-\int_{\partial B_{\delta_0}(p_\varepsilon)}\frac{\partial \eta_\varepsilon}{\partial\nu}d\sigma+O\left(e^{-\lambda_\varepsilon}\right), \end{equation*} where $\nu$ denotes the unit outer normal of $\partial B_{\delta_0}(p_\varepsilon)$. \end{lem} \section{Proof of \autoref{blow-up id}} In this section, we prove \autoref{blow-up id} as in \cite{CheLin02sharp}. \begin{proof} By \autoref{outside}, we have \begin{equation}\label{differnce} \rho_\varepsilon=8\pi-\varepsilon+O\left(e^{-\lambda_\varepsilon}\right). \end{equation} This implies that we need to control $\rho_\varepsilon-8\pi$, which is equivalent to computing $-\int_{\partial B_{\delta_0}(p_\varepsilon)}\frac{\partial \eta_\varepsilon}{\partial\nu}d\sigma$ by \autoref{boundary term}. To do so, we set \begin{equation*} \psi=\dfrac{1-a|x-y_{\varepsilon}|^2}{1+a|x-y_{\varepsilon}|^2} \quad \text{for} \ x\in\mathbb{R}^2, \end{equation*} where $a=\frac{(8\pi-\varepsilon)h(p_\varepsilon)}{8}e^{\lambda_\varepsilon}$. 
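Writing $r=\abs{x-y_{\varepsilon}}$ (with $y_{\varepsilon}$, as the definition of $v_{\varepsilon}$ suggests, the coordinate of $q_{\varepsilon}$), a direct radial computation gives
\begin{align*}
\Delta_0\psi=\psi''+\frac{1}{r}\psi'=-\frac{8a\left(1-ar^2\right)}{\left(1+ar^2\right)^3},
\qquad
(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon}=\frac{8a}{\left(1+ar^2\right)^2},
\end{align*}
so that $\Delta_0\psi=-(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon}\psi$, i.e., $\psi$ lies in the kernel of the linearized Liouville operator.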
Then $\psi$ satisfies \begin{equation}\label{eq:psi} \Delta_0\psi+(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon}\psi=0, \end{equation} where $\Delta_0$ is the standard Laplacian in $\mathbb{R}^2$. On the other hand, by \eqref{differnce}, we have \begin{equation}\label{eq:eta} \begin{split} \Delta_0\eta_\varepsilon&=\Delta_0 u_\varepsilon-\Delta_0 v_\varepsilon-\Delta_0 G_{\varepsilon}^*\\ &=-(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon(x)}H(x,\eta_\varepsilon)+O(e^{-\lambda_\varepsilon}), \end{split} \end{equation} where \begin{equation*} H(x,t)=\frac{h^*(x)}{h(p_\varepsilon)}e^{t+G_{\varepsilon}^*(x)-G_{\varepsilon}^*(0)}-1 \end{equation*} and $h^*(x)=h(x)e^{2\phi(x)}$, $\phi(x)$ comes from the metric $ds^2=e^{2\phi(x)}dx^2$ with $\phi(0)=0$ and $\nabla\phi(0)=0$. By using \eqref{eq:psi}, \eqref{eq:eta} and integration by parts, we get \begin{equation*} \begin{split} \int_{\partial B_{\delta_0}(p_\varepsilon)}\left(\psi\dfrac{\partial\eta_\varepsilon}{\partial\nu}-\eta_\varepsilon\dfrac{\partial\psi}{\partial\nu}\right)d\sigma&=\int_{B_{\delta_0}(p_\varepsilon)}(\psi\Delta_0\eta_\varepsilon-\eta_\varepsilon\Delta_0\psi)dx\\ &=-\int_{B_{\delta_0}(p_\varepsilon)}\psi(x)(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon(x)}(H(x,\eta_\varepsilon)-\eta_\varepsilon(x))+O\left(e^{-\lambda_\varepsilon}\right). \end{split} \end{equation*} Since $\psi$ satisfies \begin{equation*} \psi(x)=-1+\dfrac{2}{1+a|x-y_{\varepsilon}|^2}=-1+O\left(e^{-\lambda_\varepsilon}\right) \ \text{and} \ |\nabla\psi(x)|=O\left(e^{-\lambda_\varepsilon}\right) \end{equation*} for $x\in\partial B_{\delta_0}(p_\varepsilon)$, we have \begin{equation*} \begin{split} -\int_{\partial B_{\delta_0}(p_\varepsilon)}\dfrac{\partial \eta_\varepsilon}{\partial\nu}d\sigma=-\int_{B_{\delta_0}(p_\varepsilon)}\psi(x)(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon(x)}(H(x,\eta_\varepsilon)-\eta_\varepsilon(x))+O\left(e^{-\lambda_\varepsilon}\right). 
\end{split} \end{equation*} Recall \begin{equation*} \begin{split} H(x,\eta_\varepsilon)-\eta_\varepsilon(x)&=\dfrac{h^*(x)}{h(p_\varepsilon)}e^{\eta_\varepsilon+G_{\varepsilon}^*(x)-G_{\varepsilon}^*(0)}-1-\eta_\varepsilon(x)\\ &=H(x,0)+H(x,0)\eta_\varepsilon+O(1)|\eta_\varepsilon|^2, \end{split} \end{equation*} where \begin{equation*} \begin{split} H(x,0)&=\frac{h^*(x)}{h(p_\varepsilon)}e^{G_{\varepsilon}^*(x)-G_{\varepsilon}^*(0)}-1\\ &=\frac{1}{h(p_\varepsilon)}e^{2\phi(x)+\ln{h}(x)+G_{\varepsilon}^*(x)-G_{\varepsilon}^*(0)}-1\\ &=\langle b_\varepsilon,x\rangle+\langle B_\varepsilon x,x\rangle+O(1)|x|^{2+\beta}, \end{split} \end{equation*} where $b_\varepsilon$ is the gradient and $B_\varepsilon$ is one half of the Hessian of $H(x,0)$ at $x=0$. By \autoref{cp position}, we have $|b_\varepsilon|=O\left(e^{-\lambda_\varepsilon/2}\right)$. Let $z$ and $z_\varepsilon$ satisfy \begin{align*} \begin{cases} x=e^{-\frac{\lambda_\varepsilon}{2}}\left(\dfrac{h(p_\varepsilon)(8\pi-\varepsilon)}{8}\right)^{-\frac12}z,\\ y_{\varepsilon}=e^{-\frac{\lambda_\varepsilon}{2}}\left(\dfrac{h(p_\varepsilon)(8\pi-\varepsilon)}{8}\right)^{-\frac12}z_\varepsilon. 
\end{cases} \end{align*} Then we get \begin{align*} \left|\int_{ B_{\delta_0}(p_\varepsilon)}e^{v_\varepsilon}\langle b_\varepsilon,x\rangle dx\right|\leq& Ce^{-\lambda_\varepsilon}\int_{|z|\leq R_{\varepsilon}}\left(1+|z-z_\varepsilon|^2\right)^{-2}|z|dz=O\left(e^{-\lambda_\varepsilon}\right),\\ \int_{ B_{\delta_0}(p_\varepsilon)}e^{v_\varepsilon}|x|^{2+\beta} dx\leq& Ce^{-\frac{2+\beta}{2}\lambda_\varepsilon}\int_{|z|\leq R_{\varepsilon}}\left(1+|z|^2\right)^{-2}|z|^{2+\beta}dz=O\left(e^{-\lambda_\varepsilon}\right) \end{align*} and \begin{equation*} \begin{split} \int_{ B_{\delta_0}(p_\varepsilon)}e^{v_\varepsilon}(x_\alpha-p_{\varepsilon,\alpha})(x_\beta-p_{\varepsilon,\beta}) dx=&\left((8\pi-\varepsilon)\frac{h(p_\varepsilon)}{8}\right)^{-2}e^{-\lambda_\varepsilon}\int_{|z|\leq R_{\varepsilon}}\left(1+|z-z_\varepsilon|^2\right)^{-2}z_\alpha z_\beta dz\\ =&\left((8\pi-\varepsilon)\frac{h(p_\varepsilon)}{8}\right)^{-2}e^{-\lambda_\varepsilon}\pi\left[\delta_{\alpha\beta}\ln{R_{\varepsilon}}+O\left(e^{-\frac{\lambda_\varepsilon}{2}}\right)\right], \end{split} \end{equation*} where $x_\alpha$ stands for the $\alpha$-th coordinate of $x$ and $1\leq\alpha,\beta\leq2$. Putting these estimates together, we have \begin{equation*} \begin{split} \int_{ B_{\delta_0}(p_\varepsilon)}(8\pi-\varepsilon)h(p_\varepsilon)e^{v_\varepsilon}H(x,0)dx&=\frac{32\pi}{(8\pi-\varepsilon)h(p_\varepsilon)}\left(B_\varepsilon^{11}+B_\varepsilon^{22}\right)e^{-\lambda_\varepsilon}\lambda_\varepsilon+O(1)e^{-\lambda_\varepsilon}. \end{split} \end{equation*} Note that $\Delta_0 G_{\varepsilon}^*(0)=\rho_\varepsilon=(8\pi-\varepsilon)+O\left(e^{-\lambda_\varepsilon}\right)$ and $-\Delta_0\phi(0)=K(p_\varepsilon)$. By \autoref{cp position}, we know \begin{equation*} \begin{split} B_\varepsilon^{11}+B_\varepsilon^{22}&=\dfrac12\Delta_0 H(0,0)\\ &=\frac12(\Delta\ln{h}(p_\varepsilon)+8\pi-\varepsilon-2K(p_\varepsilon))+O\left(e^{-\lambda_\varepsilon}\right). 
\end{split} \end{equation*} For the remainder terms, we use \autoref{remainder} to get \begin{equation*} \begin{split} &\int_{B_{\delta_0}(p_{\varepsilon})} e^{v_\varepsilon}H(x,0)\eta_\varepsilon(x)dx=O\left(e^{-\lambda_\varepsilon}\right),\\ &\int_{B_{\delta_0}(p_{\varepsilon})} e^{v_\varepsilon}\eta_\varepsilon^2(x)dx=O\left(e^{-\lambda_\varepsilon}+e^{-\tau\lambda_\varepsilon}|8\pi-\rho_\varepsilon|\right). \end{split} \end{equation*} Therefore, \begin{equation*} \rho_\varepsilon-8\pi=\dfrac{16\pi}{(8\pi-\varepsilon)h(p_\varepsilon)}\left[\Delta \ln h(p_\varepsilon)+8\pi-2K(p_\varepsilon)\right]\lambda_{\varepsilon}e^{-\lambda_{\varepsilon}}+O\left(e^{-\lambda_{\varepsilon}}\right) \end{equation*} which, combined with \eqref{differnce}, completes the proof. \end{proof} \section{Proof of Theorem \ref{functional value}} \begin{proof}On the one hand, checking the proof in \cite[Theorem 1.2]{SunZhu20global} step by step, we have \begin{align}\label{eq:lower-functional} \begin{split} \inf_{u\in H^1\left(\Sigma\right)}J_{8\pi}(u)=\lim_{\varepsilon\to 0}J_{8\pi}\left(u_{\varepsilon}\right)&\geq-1-\ln\pi-\left(\ln h(p_0)+\dfrac12 A(p_0)\right)\\ &\geq-1-\ln\pi-\max_{p\in\Sigma}\left(\ln h^+(p)+\dfrac12 A(p)\right). \end{split} \end{align} We sketch the proof here. Without loss of generality, up to a conformal change of the metric, we may assume that the metric is the Euclidean metric around $p_0$ and we also assume $p_0$ is the origin $o\in\mathbb{B}\subset\Sigma$. Choose $p_\varepsilon\to p_0$ such that \begin{align*} \lambda_{\varepsilon}=u_{\varepsilon}\left(p_\varepsilon\right)=\max_{\Sigma}u_{\varepsilon}\to+\infty. \end{align*} Set $r_{\varepsilon}=e^{-\lambda_{\varepsilon}/2}$ and \begin{align*} \tilde u_{\varepsilon}=u_{\varepsilon}\left(p_\varepsilon+r_{\varepsilon}x\right)+2\ln r_{\varepsilon},\quad\abs{x}< r_{\varepsilon}^{-1}\left(1-\abs{p_\varepsilon}\right). 
\end{align*} Then $\tilde u_{\varepsilon}$ converges to $w$ in $C^{\infty}_{loc}\left(\mathbb{R}^2\right)$ where \begin{align*} w(x)=-2\ln\left(1+\pi h(p_0)\abs{x}^2\right). \end{align*} We denote by $o_{\varepsilon}(1)$ (resp. $o_{R}(1), o_{\delta}(1)$) terms which tend to zero as $\varepsilon\to0$ (resp. $R\to\infty, \delta\to0$). Moreover, $o_{\varepsilon}(1)$ may depend on $R,\delta$, while $o_{R}(1)$ may depend on $\delta$. We have \begin{align*} \dfrac{1}{16\pi}\int_{\mathbf{B}_{r_{\varepsilon}R}(p_\varepsilon)}\abs{\nabla u_{\varepsilon}}^2=\dfrac{1}{16\pi}\int_{\mathbf{B}_{R}}\abs{\nabla\tilde{u}_{\varepsilon}}^2=\ln\left(\pi h(p_0)R^2\right)-1+o_{\varepsilon}(1)+o_{R}(1). \end{align*} According to \autoref{convergence}, a direct calculation yields \begin{align*} \dfrac{1}{16\pi}\int_{\Sigma\setminus\mathbf{B}_{\delta}(p_{\varepsilon})}\abs{\nabla u_{\varepsilon}}^2=-2\ln\delta+\dfrac12 A(p_0)+o_{\varepsilon}(1)+o_{\delta}(1). \end{align*} Under polar coordinates $(r,\theta)$, set \begin{align*} u^*_{\varepsilon}(r)=\dfrac{1}{2\pi}\int_{0}^{2\pi} u_{\varepsilon}\left(p_{\varepsilon}+re^{\sqrt{-1}\theta}\right)\dif\theta. \end{align*} Then \begin{align*} u^*_{\varepsilon}(\delta)=&\int_{\Sigma}u_{\varepsilon}-4\ln\delta+ A(p_0)+o_{\varepsilon}(1)+o_{\delta}(1),\\ u^*_{\varepsilon}\left(r_{\varepsilon}R\right)=&-2\ln r_{\varepsilon}-2\ln\left(\pi h(p_0)R^2\right)+o_{\varepsilon}(1)+o_{R}(1). \end{align*} Solve \begin{align*} \begin{cases} -\Delta \xi_{\varepsilon}=0,&\text{in}\ \mathbf{B}_{\delta}\left(p_{\varepsilon}\right)\setminus\mathbf{B}_{r_{\varepsilon}R}\left(p_{\varepsilon}\right),\\ \xi_{\varepsilon}=u_{\varepsilon}^*,&\text{on}\ \partial\left(\mathbf{B}_{\delta}\left(p_{\varepsilon}\right)\setminus\mathbf{B}_{r_{\varepsilon}R}\left(p_{\varepsilon}\right)\right). 
\end{cases} \end{align*} We have \begin{align*} \begin{split} \dfrac{1}{16\pi}\int_{\mathbf{B}_{\delta}\left(p_{\varepsilon}\right)\setminus\mathbf{B}_{r_{\varepsilon}R}\left(p_{\varepsilon}\right)}\abs{\nabla u_{\varepsilon}}^2&\geq\dfrac{1}{16\pi}\int_{\mathbf{B}_{\delta}\left(p_{\varepsilon}\right)\setminus\mathbf{B}_{r_{\varepsilon}R}\left(p_{\varepsilon}\right)}\abs{\nabla u^*_{\varepsilon}}^2\\ &\geq\dfrac{1}{16\pi} \int_{\mathbf{B}_{\delta}\left(p_{\varepsilon}\right)\setminus\mathbf{B}_{r_{\varepsilon}R}\left(p_{\varepsilon}\right)}\abs{\nabla\xi_{\varepsilon}}^2=\dfrac{\left(u_{\varepsilon}^*(\delta)-u_{\varepsilon}^*(r_{\varepsilon}R)\right)^2}{8\left(\ln\delta-\ln\left(r_{\varepsilon}R\right)\right)}. \end{split} \end{align*} Thus \begin{align*} \begin{split} \dfrac{1}{16\pi}\int_{\mathbf{B}_{\delta}\left(p_{\varepsilon}\right)\setminus\mathbf{B}_{r_{\varepsilon}R}\left(p_{\varepsilon}\right)}\abs{\nabla u_{\varepsilon}}^2\geq&\dfrac{\left(u_{\varepsilon}^*(\delta)-u_{\varepsilon}^*(r_{\varepsilon}R)\right)^2}{-8\ln r_{\varepsilon}}\left(1+\dfrac{\ln\left(R/\delta\right)}{-\ln r_{\varepsilon}}\right)\\ =&\dfrac{\left(\tau_{\varepsilon}+\int_{\Sigma}u_{\varepsilon}-2\ln r_{\varepsilon}\right)^2}{-8\ln r_{\varepsilon}}+\dfrac{1}{8}\left(2+\dfrac{\tau_{\varepsilon}}{\ln r_{\varepsilon}}+\dfrac{\int_{\Sigma}u_{\varepsilon}}{\ln r_{\varepsilon}}\right)^2\ln(R/\delta)\\ &-\int_{\Sigma}u_{\varepsilon}-4\ln\left(R/\delta\right)-A(p_0)-2\ln(\pi h(p_0))\\ &+o_R(1)+o_\delta(1), \end{split} \end{align*} where \begin{align*} \begin{split} \tau_{\varepsilon}&=u^*_{\varepsilon}(\delta)-u^*_{\varepsilon}\left(r_{\varepsilon}R\right)-\int_{\Sigma}u_{\varepsilon}+2\ln r_{\varepsilon}\\ &=4\ln\left(R/\delta\right)+ A(p_0)+2\ln\left(\pi h(p_0)\right)+o_{\varepsilon}(1)+o_{\delta}(1)+o_{R}(1). 
\end{split} \end{align*} Hence, we get \begin{align*} \begin{split} C\geq J_{8\pi}\left(u_{\varepsilon}\right)\geq&-1-\ln\pi-\ln h(p_0)-\dfrac12A(p_0)\\ &+\dfrac{\left(\tau_{\varepsilon}+\int_{\Sigma}u_{\varepsilon}-2\ln r_{\varepsilon}\right)^2}{-8\ln r_{\varepsilon}}+\dfrac{1}{8}\left(\left(2+\dfrac{\tau_{\varepsilon}}{\ln r_{\varepsilon}}+\dfrac{\int_{\Sigma}u_{\varepsilon}}{\ln r_{\varepsilon}}\right)^2-16\right)\ln(R/\delta)\\ &+o_{\varepsilon}(1)+o_{R}(1)+o_{\delta}(1) \end{split} \end{align*} which implies \begin{align*} \int_{\Sigma}u_{\varepsilon}=-\lambda_{\varepsilon}+O\left(\sqrt{\lambda_{\varepsilon}}\right) \end{align*} and we obtain \eqref{eq:lower-functional}. On the other hand, checking the proof in \cite[Theorem 1.2]{DinJosLiWan97differential} step by step, for each $p$ with $h(p)>0$, there exists a sequence $\phi_{\varepsilon}\in H^1\left(\Sigma\right)$ such that \begin{align*} \begin{split} J_{8\pi}\left(\phi_{\varepsilon}\right)=&-1-\ln\pi-\left(\ln h(p)+\dfrac12A(p)\right)\\ &-\dfrac{1}{4}\left(\Delta\ln h(p)+8\pi-2K(p)+\abs{\nabla\left(\ln h+\dfrac12A\right)(p)}^2\right)\varepsilon\ln\varepsilon^{-1}\\ &+o\left(\varepsilon\ln\varepsilon^{-1}\right). \end{split} \end{align*} Here we used the fact that the Green function $G$ is symmetric. 
These test functions $\phi_{\varepsilon}$ can be constructed as follows: without loss of generality, assume $p=0$ and \begin{align*} 8\pi G(x,0)=&-2\ln\abs{x}+A(p)+b_1x_1+b_2x_2+\beta(x), \end{align*} and take \begin{align*} \phi_{\varepsilon}(x)=\begin{cases} -2\ln\left(\abs{x}^2+\varepsilon\right)+b_1x_1+b_2x_2+\ln\varepsilon,&\abs{x}<\alpha_{\varepsilon}\sqrt{\varepsilon},\\ 8\pi G(x,0)-\eta\left(\alpha_{\varepsilon}\sqrt{\varepsilon}\abs{x}\right)\beta(x)+C_{\varepsilon}+\ln\varepsilon,&\alpha_{\varepsilon}\sqrt{\varepsilon}\leq\abs{x}<2\alpha_{\varepsilon}\sqrt{\varepsilon},\\ 8\pi G(x,0)+C_{\varepsilon}+\ln\varepsilon,&\abs{x}\geq 2\alpha_{\varepsilon}\sqrt{\varepsilon}, \end{cases} \end{align*} where $\eta$ is a cutoff function supported in $[0,2]$ with $\eta=1$ on $[0,1]$, and the positive constants $\alpha_{\varepsilon}$ and $C_{\varepsilon}$ are chosen carefully. The assumption in \cite{DinJosLiWan97differential} that $h$ is positive is used only to ensure that \begin{align*} \lim_{\varepsilon\searrow0}\int_{\Sigma}he^{\phi_{\varepsilon}}>0. \end{align*} If $p$ is a critical point (e.g., a maximum point) of the function $2\ln h^++A$, then \begin{align*} J_{8\pi}\left(\phi_{\varepsilon}\right)=&-1-\ln\pi-\left(\ln h(p)+\dfrac12A(p)\right)-\dfrac{1}{4}\left(\Delta\ln h(p)+8\pi-2K(p)\right)\varepsilon\ln\varepsilon^{-1}+o\left(\varepsilon\ln\varepsilon^{-1}\right). \end{align*} This gives \begin{align*} \inf_{u\in H^1\left(\Sigma\right)}J_{8\pi}(u)=-1-\ln\pi-\max_{p\in\Sigma}\left(\ln h^+(p)+\dfrac12A(p)\right)=-1-\ln\pi-\left(\ln h(p_0)+\dfrac12 A(p_0)\right). \end{align*} In particular, the blow-up point $p_0$ must be a maximum point of the function $2\ln h^++A$. \end{proof} \begin{rem} One can write down the $o_\varepsilon(1)$ as follows.
By \autoref{estimate B} and \eqref{expansion G}, direct computations give us \begin{align*} \begin{split} \dfrac{1}{16\pi}\int_{\Sigma\setminus{B}_{\delta}(p_{\varepsilon})}\abs{\nabla u_{\varepsilon}}^2=&\left(1-\dfrac{\varepsilon}{4\pi}+\frac{\varepsilon^2}{64\pi^2}+O\left(e^{-\lambda_\varepsilon}\right)\right)\left(-2\ln\delta+\dfrac12 A(p_\varepsilon)+O\left(e^{-\lambda_\varepsilon}\right)+o_{\delta}(1)\right)+O\left(e^{-\lambda_\varepsilon}\right)\\ =&-2\ln\delta+\dfrac12 A(p_\varepsilon) -\dfrac{\varepsilon}{4\pi}\left(-2\ln\delta+\dfrac12 A(p_\varepsilon)+O\left(e^{-\lambda_\varepsilon}\right)+o_{\delta}(1)\right)\\ &\quad+O\left(\varepsilon^2\right)+O\left(e^{-\lambda_\varepsilon}\right)+o_{\delta}(1). \end{split} \end{align*} From the proof of \autoref{blow-up id}, we also get the following \begin{align*} \int_{\mathrm{B}_{\delta}(p_{\varepsilon})}\abs{\nabla \eta_{\varepsilon}}^2=O\left(\varepsilon^2\delta\right)+O\left(e^{-\lambda_\varepsilon}\right), \end{align*} \begin{align*} \dfrac{1}{16\pi}\int_{{B}_{r_{\varepsilon}R}(p_\varepsilon)}\abs{\nabla v_{\varepsilon}}^2=\ln\left(\pi h(p_0)R^2\right)-1+o_{R}(1), \end{align*} \begin{align*} \int_{B_{\delta}(p_{\varepsilon})}\abs{\nabla G^*}^2=O\left(\delta^2\right) \end{align*} and \begin{align*} \dfrac{1}{16\pi}\int_{{B}_{r_{\varepsilon}R}(p_\varepsilon)}\abs{\nabla G^*}^2=O\left(r_{\varepsilon}^2\right)=O(e^{-\lambda_\varepsilon}). \end{align*} These imply that \begin{align*} \dfrac{1}{16\pi}\int_{B_{r_{\varepsilon}R}(p_{\varepsilon})}\abs{\nabla u_{\varepsilon}}^2=\ln\left(\pi h(p_0)R^2\right)-1+O\left(\varepsilon^2 e^{-\frac{\lambda_\varepsilon }{2}}\right)+O\left(e^{-\lambda_\varepsilon}\right)+o_{R}(1). \end{align*} On the neck, the $o_\varepsilon(1)$ terms are the convergence rates in \autoref{convergence} and of $\tilde u_{\varepsilon}\to w$. \end{rem} \biboptions{longnamesfirst,sort&compress} \end{document}
\begin{document} \title{\textbf{Gelfand-Kirillov dimension for rings}} \begin{abstract} \noindent The classical Gelfand-Kirillov dimension for algebras over fields has been extended recently by J. Bell and J.J. Zhang to algebras over commutative domains. However, the behavior of this new notion has not been sufficiently investigated for the principal algebraic constructions such as polynomial rings, matrix rings, localizations, filtered-graded rings, skew $PBW$ extensions, etc. In this paper we present complete proofs of the computation of this more general dimension for the mentioned algebraic constructions for algebras over commutative domains. The Gelfand-Kirillov dimension for modules and the Gelfand-Kirillov transcendence degree will also be considered. The obtained results can be applied in particular to algebras over the ring of integers, i.e., to arbitrary rings. \noindent \textit{Key words and phrases.} Gelfand-Kirillov dimension, Gelfand-Kirillov transcendence degree. \noindent 2010 \textit{Mathematics Subject Classification.} Primary: 16P90. Secondary: 16S80, 16S85, 16W70, 16S36. \end{abstract} \section{Introduction} Let $R$ be a commutative domain and $Q$ be the field of fractions of $R$. Let $B$ be an $R$-algebra; then $Q\otimes B$ is a $Q$-algebra and its classical Gelfand-Kirillov dimension is denoted by ${\rm GKdim} (Q\otimes B)$ (see \cite{Krause}). Recall that if $M$ is a finitely generated $R$-module, then the \textit{rank} of $M$ is defined by \begin{center} ${\rm rank}M:=\dim_Q(Q\otimes_R M)<\infty$. \end{center} From now on in this paper all tensors are over $R$. \begin{definition}[\cite{BellZhang}, Section 1]\label{definitionGKforrings} Let $R$ be a commutative domain and $B$ be an $R$-algebra.
The Gelfand-Kirillov dimension of $B$ is defined to be \begin{equation} {\rm GKdim}(B):=\sup_{V}\overline{\lim_{n\to \infty}}\log_n{\rm rank} V^n, \end{equation} where $V$ varies over all frames of $B$ and $V^n:=\, _R\langle v_1\cdots v_n\mid v_i\in V, 1\leq i\leq n\rangle$; a frame of $B$ is a finitely generated $R$-submodule of $B$ containing $1$. A frame $V$ generates $B$ if $B$ is generated by $V$ as an $R$-algebra. \end{definition} \begin{proposition}[\cite{BellZhang}, Lemma 3.1]\label{proposition17.3.8} Let $B$ be an $R$-algebra. Then, \begin{center} ${\rm GKdim}(B)={\rm GKdim}(Q\otimes B)$. \end{center} \end{proposition} \begin{proof} Let $V$ be a frame of $B$; then $Q\otimes V$ is a frame of the $Q$-algebra $Q\otimes B$. In fact, if $V=\,_R\langle v_1,\dots,v_m\rangle$, then $Q\otimes V=\,_Q\langle 1\otimes v_1,\dots,1\otimes v_m \rangle$ and $1\otimes 1\in Q\otimes V$. Observe that for every $n\geq 0$, \begin{center} $Q\otimes V^n=(Q\otimes V)^n$, \end{center} hence \begin{center} ${\rm GKdim}(B)=\sup_{V}\overline{\lim_{n\to \infty}}\log_n{\rm rank} V^n=\sup_{V}\overline{\lim_{n\to \infty}}\log_n(\dim_Q(Q\otimes V^n))=\sup_{V}\overline{\lim_{n\to \infty}}\log_n(\dim_Q(Q\otimes V)^n)\leq {\rm GKdim}(Q\otimes B)$. \end{center} Now let $W$ be a frame of $Q\otimes B$; then there exist finitely many $v_1,\dots,v_m\in B$ such that $V_W:=\,_R\langle 1,v_1,\dots,v_m\rangle$ is a frame of $B$ and $W\subseteq Q\otimes V_W$. Indeed, let $W=\,_Q\langle z_1,\dots,z_k\rangle$; then for every $1\leq i\leq k$, \begin{center} $z_i=q_{i1}\otimes v_{i1}+\cdots +q_{im_i}\otimes v_{im_i}$, \end{center} with $q_{ij}\in Q$ and $v_{ij}\in B$; hence, $W\subseteq\, _Q\langle 1\otimes v_{11},\dots,1\otimes v_{km_k}\rangle$ and we have found elements $v_1,\dots,v_m\in B$ such that \begin{center} $W\subseteq\,_Q\langle 1\otimes 1,1\otimes v_1,\dots,1\otimes v_m \rangle\subseteq Q\otimes V_W$, \end{center} with $V_W:=\,_R\langle 1,v_1,\dots,v_m\rangle$. This proves the claim.
Therefore, \begin{center} ${\rm GKdim}(Q\otimes B)=\sup_{W}\overline{\lim_{n\to \infty}}\log_n\dim_Q W^n\leq \sup_{V_W}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes V_W)^n=\sup_{V_W}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes V_W^n)=\sup_{V_W}\overline{\lim_{n\to \infty}}\log_n{\rm rank} V_W^n\leq {\rm GKdim}(B)$. \end{center} \end{proof} Observe that $B$ is finitely generated if and only if $B$ has a generator frame. In fact, if $x_1,\dots,x_m$ generate $B$ as an $R$-algebra, then $V:=\,_R\langle 1,x_1,\dots,x_m\rangle$ is a generator frame of $B$; the converse is trivial from Definition \ref{definitionGKforrings}. \begin{proposition} Let $B$ be an $R$-algebra and $V$ be a frame that generates $B$. Then, \begin{equation*} {\rm GKdim}(B)=\overline{\lim_{n\to \infty}}\log_n({\rm rank}V^n). \end{equation*} Moreover, this equality is independent of the generator frame. \end{proposition} \begin{proof} Notice that $Q\otimes V$ generates $Q\otimes B$, so from Proposition \ref{proposition17.3.8} \begin{center} ${\rm GKdim}(B)={\rm GKdim}(Q\otimes B)=\overline{\lim_{n\to \infty}}\log_n(\dim_Q (Q\otimes V)^n)=\overline{\lim_{n\to \infty}}\log_n(\dim_Q (Q\otimes V^n))=\overline{\lim_{n\to \infty}}\log_n{\rm rank} V^n$. \end{center} The second part is trivial since the proof was independent of the chosen frame. \end{proof} We conclude this preliminary section by recalling the definition of the skew $PBW$ extensions that we will consider in the main theorem of the next section. Skew $PBW$ extensions cover many noncommutative rings and algebras coming from mathematical physics (see \cite{Lezama-reyes}). In \cite{Reyes5} the Gelfand-Kirillov dimension of skew $PBW$ extensions that are algebras over fields was computed. In the next section this result will be extended to the case of algebras over commutative domains, i.e., for arbitrary rings that are skew $PBW$ extensions.
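As a concrete illustration of Definition \ref{definitionGKforrings} (our own sketch, not part of the paper's development; the chosen algebra, frame, and function names are for illustration only), take $B=\mathbb{Z}[x_1,\dots,x_m]$ with the generator frame $V=\,_R\langle 1,x_1,\dots,x_m\rangle$. Then $V^n$ is the module of polynomials of degree at most $n$, which is free of rank $\binom{n+m}{m}$, so $\log_n{\rm rank}\,V^n\to m$, in agreement with item (ii) of the theorem in the next section. The following snippet checks this numerically:

```python
from math import comb, log

# Sketch (not from the paper): for B = Z[x_1, ..., x_m] with the generator
# frame V = <1, x_1, ..., x_m>_R, the R-module V^n is free with
# rank V^n = C(n + m, m), so log_n(rank V^n) -> m = GKdim(B).

def rank_Vn(m: int, n: int) -> int:
    """Rank of V^n for the polynomial algebra in m variables."""
    return comb(n + m, m)

def log_base_n(value: float, n: int) -> float:
    """Logarithm of value in base n."""
    return log(value) / log(n)

m = 2
for n in (10, 100, 10**4, 10**6):
    print(n, round(log_base_n(rank_Vn(m, n), n), 3))
# The printed values increase toward m = 2.
```

The convergence is slow, since $\log_n$ of a degree-$m$ polynomial in $n$ approaches $m$ only logarithmically; this is one reason the definition is stated with an upper limit rather than an ordinary limit.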
Another generalization of the Gelfand-Kirillov dimension for skew $PBW$ extensions was studied in \cite{lezamalatorre}, interpreting these extensions as finitely semi-graded rings. The computation presented here agrees with \cite{Reyes5} and \cite{lezamalatorre}, choosing properly the ring of coefficients of the extension. \begin{definition}[\cite{LezamaGallego}]\label{gpbwextension} Let $R$ and $A$ be rings. We say that $A$ is a \textit{skew $PBW$ extension of $R$} $($also called a $\sigma$-$PBW$ extension of $R$$)$ if the following conditions hold: \begin{enumerate} \item[\rm (i)]$R\subseteq A$. \item[\rm (ii)]There exist finitely many elements $x_1,\dots ,x_n\in A$ such that $A$ is a free left $R$-module with basis \begin{center} ${\rm Mon}(A):= \{x^{\alpha}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}\mid \alpha=(\alpha_1,\dots ,\alpha_n)\in \mathbb{N}^n\}$, with $\mathbb{N}:=\{0,1,2,\dots\}$. \end{center} The set ${\rm Mon}(A)$ is called the set of standard monomials of $A$. \item[\rm (iii)]For every $1\leq i\leq n$ and $r\in R-\{0\}$ there exists $c_{i,r}\in R-\{0\}$ such that \begin{equation}\label{sigmadefinicion1} x_ir-c_{i,r}x_i\in R. \end{equation} \item[\rm (iv)]For every $1\leq i,j\leq n$ there exists $c_{i,j}\in R-\{0\}$ such that \begin{equation}\label{sigmadefinicion2} x_jx_i-c_{i,j}x_ix_j\in R+Rx_1+\cdots +Rx_n. \end{equation} Under these conditions we will write $A:=\sigma(R)\langle x_1,\dots ,x_n\rangle$. \end{enumerate} \end{definition} Associated to a skew $PBW$ extension $A=\sigma(R)\langle x_1,\dots ,x_n\rangle$ there are $n$ injective endomorphisms $\sigma_1,\dots,\sigma_n$ of $R$ and $n$ $\sigma_i$-derivations $\delta_1,\dots,\delta_n$, as the following proposition shows. \begin{proposition}[\cite{LezamaGallego}, Proposition 3]\label{sigmadefinition} Let $A$ be a skew $PBW$ extension of $R$.
Then, for every $1\leq i\leq n$, there exist an injective ring endomorphism $\sigma_i:R\rightarrow R$ and a $\sigma_i$-derivation $\delta_i:R\rightarrow R$ such that \begin{center} $x_ir=\sigma_i(r)x_i+\delta_i(r)$, \end{center} for each $r\in R$. \end{proposition} A particular case of a skew $PBW$ extension occurs when all the $\sigma_i$ are bijective and the constants $c_{ij}$ are invertible. \begin{definition}[\cite{LezamaGallego}]\label{sigmapbwderivationtype} Let $A$ be a skew $PBW$ extension. $A$ is bijective if $\sigma_i$ is bijective for every $1\leq i\leq n$ and $c_{i,j}$ is invertible for any $1\leq i<j\leq n$. \end{definition} If $A=\sigma(R)\langle x_1,\dots,x_n\rangle$ is a skew $PBW$ extension of the ring $R$, then, as was observed in Proposition \ref{sigmadefinition}, $A$ induces injective endomorphisms $\sigma_k:R\to R$ and $\sigma_k$-derivations $\delta_k:R\to R$, $1\leq k\leq n$. Moreover, from Definition \ref{gpbwextension}, there exists a unique finite set of constants $c_{ij}, d_{ij}, a_{ij}^{(k)}\in R$, $c_{ij}\neq 0$, such that \begin{equation}\label{equation1.2.1} x_jx_i=c_{ij}x_ix_j+a_{ij}^{(1)}x_1+\cdots+a_{ij}^{(n)}x_n+d_{ij}, \ \text{for every}\ 1\leq i<j\leq n.
\end{equation} Many important algebras and rings coming from mathematical physics are particular examples of skew $PBW$ extensions: the usual ring of polynomials in several variables, Weyl algebras, enveloping algebras of finite dimensional Lie algebras, algebra of $q$-differential operators, many important types of Ore algebras, algebras of diffusion type, additive and multiplicative analogues of the Weyl algebra, dispin algebra $\mathcal{U}(osp(1,2))$, quantum algebra $\mathcal{U}'(so(3,K))$, Woronowicz algebra $\mathcal{W}_{\nu}(\mathfrak{sl}(2,K))$, Manin algebra $\mathcal{O}_q(M_2(K))$, coordinate algebra of the quantum group $SL_q(2)$, $q$-Heisenberg algebra \textbf{H}$_n(q)$, Hayashi algebra $W_q(J)$, differential operators on a quantum space $D_{\textbf{q}}(S_{\textbf{q}})$, Witten's deformation of $\mathcal{U}(\mathfrak{sl}(2,K))$, multiparameter Weyl algebra $A_n^{Q,\Gamma}(K)$, quantum symplectic space $\mathcal{O}_q(\mathfrak{sp}(K^{2n}))$, some quadratic algebras in 3 variables, some 3-dimensional skew polynomial algebras, particular types of Sklyanin algebras, homogenized enveloping algebra $\mathcal{A}(\mathcal{G})$, Sridharan enveloping algebra of 3-dimensional Lie algebra $\mathcal{G}$, among many others. For a precise definition of any of these rings and algebras see \cite{Lezama-reyes}. \section{Computation of the Gelfand-Kirillov dimension\\ for the principal algebraic constructions} Next we will compute the Gelfand-Kirillov dimension for the most important algebraic constructions. For this we will apply the corresponding properties of the Gelfand-Kirillov dimension over fields (see \cite{Krause}). \begin{theorem}\label{theorem17.4.4} Let $B$ be an $R$-algebra. \begin{enumerate} \item[\rm (i)]If $B$ is finitely generated, then \begin{center} ${\rm GKdim}(B)=0$ if and only if ${\rm rank} B<\infty$. \end{center} Moreover, if $B$ is a domain with ${\rm GKdim}(B)<\infty$, then $B$ is an Ore domain.
\item[\rm (ii)]${\rm GKdim}(B[x_1,\dots,x_m])={\rm GKdim}(B)+m$. \item[\rm (iii)]For $m\geq 2$, ${\rm GKdim}(R\{x_1,\dots,x_m\})=\infty$. \item[\rm (iv)]${\rm GKdim}(M_n(B))={\rm GKdim}(B)$. \item[\rm (v)]If $I$ is a proper two-sided ideal of $B$, then ${\rm GKdim}(B/I)\leq {\rm GKdim}(B)$. \item[\rm (vi)]Let $C$ be a subalgebra of $B$. Then, ${\rm GKdim}(C)\leq {\rm GKdim}(B)$. Moreover, if $C\subseteq Z(B)$ $($the center of $B$$)$ and $B$ is finitely generated as a $C$-module, then ${\rm GKdim}(C)={\rm GKdim}(B)$. \item[\rm (vii)]Let $C$ be an $R$-algebra. Then, \begin{center} ${\rm GKdim}(B\times C)=\max\{{\rm GKdim}(B),{\rm GKdim}(C)\}$. \end{center} \item[\rm (viii)]Let $C$ be an $R$-algebra. Then, \begin{center} $\max\{{\rm GKdim}(B),{\rm GKdim}(C)\}\leq {\rm GKdim}(B\otimes C)\leq {\rm GKdim}(B)+{\rm GKdim}(C)$. \end{center} In addition, suppose that $C$ contains a finitely generated subalgebra $C_0$ such that ${\rm GKdim}(C_0)={\rm GKdim}(C)$, then \begin{center} ${\rm GKdim}(B\otimes C)={\rm GKdim}(B)+{\rm GKdim}(C)$. \end{center} \item[\rm (ix)]Let $S$ be a multiplicative system of $B$ consisting of central regular elements. Then, \begin{center} ${\rm GKdim}(BS^{-1})={\rm GKdim}(B)$. \end{center} \item[\rm (x)]If $G$ is a finite group, then \[ {\rm GKdim}(B[G])={\rm GKdim}(B). \] \item[\rm (xi)]Let $B$ be $\mathbb{N}$-filtered and locally finite, i.e., for every $p\in \mathbb{N}$, $F_p(B)$ is a finitely generated $R$-submodule of $B$. If $Gr(B)$ is finitely generated, then \[ {\rm GKdim}(Gr(B))={\rm GKdim}(B). \] \item[\rm (xii)]Suppose that $B$ has a generator frame $V$. If $\sigma$ is an $R$-linear automorphism of $B$ such that $\sigma(V)\subseteq V$ and $\delta$ is an $R$-linear $\sigma$-derivation of $B$, then \[ {\rm GKdim}(B[x;\sigma,\delta])={\rm GKdim}(B)+1. \] \item[\rm (xiii)]Suppose that $B$ has a generator frame $V$ and let $A=\sigma(B)\langle x_1,\dotsc,x_t\rangle$ be a bijective skew $PBW$ extension of $B$.
If for $1\leq i\leq t$, $\sigma_i,\delta_i$ are $R$-linear and $\sigma_i(V)\subseteq V$, then \[ {\rm GKdim}(A)={\rm GKdim}(B)+t. \] \end{enumerate} \end{theorem} \begin{proof} (i) Notice that $Q\otimes B$ is finitely generated, so ${\rm GKdim}(Q\otimes B)=0$ if and only if $\dim_Q(Q\otimes B)<\infty$. Since ${\rm rank}B=\dim_Q(Q\otimes B)$, we get \begin{center} ${\rm GKdim}(B)=0$ if and only if ${\rm GKdim}(Q\otimes B)=0$ if and only if ${\rm rank}B<\infty$. \end{center} For the second statement, we will show that $B$ is a right Ore domain; the proof on the left side is similar. Suppose, on the contrary, that there exist $0\neq s\in B$ and $b\in B$ such that $sB\cap bB=0$. Since $B$ is a domain, the following sum is direct \begin{center} $bB+sbB+s^{2}bB+s^{3}bB+\cdots $. \end{center} Let $V$ be a frame of $B$, then $W_V:=\,_R\langle V,s,b\rangle$ is a frame of $B$ and \begin{center} $W_V^{2n}\supseteq\, _R\langle V,s,b\rangle^nV^n\supseteq bV^n+sbV^n+s^2bV^n+\cdots+s^{n-1}bV^n$ \end{center} and since the last sum is direct, then \begin{center} ${\rm GKdim}(B)\geq \sup_{W_V}\overline{\lim_{n\to \infty}}\log_n{\rm rank} W_V^{2n}\geq \sup_{V}\overline{\lim_{n\to \infty}}\log_n(n\,{\rm rank} V^n)=1+{\rm GKdim}(B)$, \end{center} but this is impossible since ${\rm GKdim}(B)<\infty$. \noindent (ii) It is enough to consider the case $m=1$. We have the isomorphism of $R$-algebras $B[x]\cong B\otimes R[x]$, whence we have the following isomorphism of $Q$-algebras: \begin{center} $Q\otimes B[x]\cong Q\otimes (B\otimes R[x])\cong (Q\otimes B)\otimes R[x]\cong (Q\otimes B)[x]$. \end{center} Therefore, \begin{center} ${\rm GKdim}(B[x])={\rm GKdim}(Q\otimes B[x])={\rm GKdim}((Q\otimes B)[x])={\rm GKdim}(Q\otimes B)+1={\rm GKdim}(B)+1$. \end{center} (iii) We have the isomorphism of $Q$-algebras \begin{align*} Q\otimes (R\{x_1,\dots,x_m\}) & \cong Q\{x_1,\dots,x_m\}\\ q\otimes \sum_{\alpha}r_\alpha x^\alpha & \mapsto \sum_{\alpha}(qr_\alpha)x^\alpha.
\end{align*} Therefore, \begin{center} ${\rm GKdim}(R\{x_1,\dots,x_m\})={\rm GKdim}(Q\otimes (R\{x_1,\dots,x_m\}))={\rm GKdim}(Q\{x_1,\dots,x_m\})=\infty$. \end{center} (iv) We have the isomorphism of $R$-algebras $M_n(B)\cong B\otimes M_n(R)$, whence we have the following isomorphism of $Q$-algebras: \begin{center} $Q\otimes M_n(B)\cong Q\otimes (B\otimes M_n(R))\cong (Q\otimes B)\otimes M_n(R)\cong M_n(Q\otimes B)$. \end{center} Therefore, \begin{center} ${\rm GKdim}(M_n(B))={\rm GKdim}(Q\otimes M_n(B))={\rm GKdim}(M_n(Q\otimes B))={\rm GKdim}(Q\otimes B)={\rm GKdim}(B)$. \end{center} (v) Let $W$ be a frame of $B/I$; we may assume that $W=\,_R\langle \overline{1},\overline{w_1},\dots,\overline{w_t}\rangle$, then $V_W:=\,_R\langle 1, w_1,\dots,w_t\rangle$ is a frame of $B$, and hence, $Q\otimes W$ is a frame of $Q\otimes (B/I)$ and $Q\otimes V_W$ is a frame of $Q\otimes B$. Observe that for every $n\geq 0$, \begin{center} $W^n=\,_R\langle \overline{w_{i_1}}\cdots \overline{w_{i_n}}\mid w_{i_j}\in \{1,w_1,\dots,w_t\}\rangle=\overline{V_W^n}$, $Q\otimes W^n=Q\otimes \overline{V_W^n}$, $\dim_Q(Q\otimes W^n)=\dim_Q(Q\otimes \overline{V_W^n})\leq \dim_Q(Q\otimes V_W^n)$. \end{center} The inequality can be justified in the following way. Both $Q$-vector spaces $Q\otimes \overline{V_W^n}$ and $Q\otimes V_W^n$ have finite dimension and we have the surjective homomorphism of $Q$-vector spaces $Q\otimes V_W^n\to Q\otimes \overline{V_W^n}$, $q\otimes z\mapsto q\otimes \overline{z}$, with $q\in Q$, $z\in V_W^n$. Therefore, ${\rm GKdim}(B/I)\leq {\rm GKdim}(B)$. \noindent (vi) Since $Q$ is $R$-flat, then $Q\otimes C$ is a $Q$-subalgebra of $Q\otimes B$, hence \begin{center} ${\rm GKdim}(C)={\rm GKdim}(Q\otimes C)\leq {\rm GKdim}(Q\otimes B )={\rm GKdim}(B)$.
\end{center} For the second statement, $Q\otimes C$ is a $Q$-subalgebra of $Q\otimes Z(B)\subseteq Z(Q\otimes B)$; moreover, if $B=\,_C\langle b_1,\dots,b_t\rangle$, with $b_i\in B$, $1\leq i\leq t$, then \begin{center} $Q\otimes B=\,_{Q\otimes C}\langle 1\otimes b_1,\dots,1\otimes b_t\rangle$. \end{center} Therefore, \begin{center} ${\rm GKdim}(C)={\rm GKdim}(Q\otimes C)={\rm GKdim}(Q\otimes B)={\rm GKdim}(B)$. \end{center} (vii) We have the following isomorphism of $Q$-algebras \begin{align*} Q\otimes (B\times C)& \cong (Q\otimes B)\times (Q\otimes C)\\ q\otimes (b,c) & \mapsto (q\otimes b,q\otimes c). \end{align*} Hence, \begin{center} ${\rm GKdim}(B\times C)={\rm GKdim}(Q\otimes (B\times C))={\rm GKdim}((Q\otimes B)\times (Q\otimes C))=\max\{{\rm GKdim}(Q\otimes B),{\rm GKdim}(Q\otimes C)\}=\max\{{\rm GKdim}(B),{\rm GKdim}(C)\}$. \end{center} (viii) We have the following isomorphisms of $Q$-algebras \begin{center} $(Q\otimes B)\otimes (Q\otimes C)\cong Q\otimes (B\otimes Q)\otimes C\cong (Q\otimes Q)\otimes (B\otimes C)\cong Q\otimes (B\otimes C)$. \end{center} Therefore, \begin{center} ${\rm GKdim}(B\otimes C)={\rm GKdim}(Q\otimes (B\otimes C))={\rm GKdim}((Q\otimes B)\otimes (Q\otimes C))\leq {\rm GKdim}(Q\otimes B)+{\rm GKdim}(Q\otimes C)={\rm GKdim}(B)+{\rm GKdim}(C)$; $\max\{{\rm GKdim}(B),{\rm GKdim}(C)\}=\max\{{\rm GKdim}(Q\otimes B),{\rm GKdim}(Q\otimes C)\}\leq {\rm GKdim}((Q\otimes B)\otimes (Q\otimes C))={\rm GKdim}(Q\otimes (B\otimes C))={\rm GKdim}(B\otimes C)$. 
\end{center} For the second part, if $C$ contains a finitely generated subalgebra $C_0$ such that ${\rm GKdim}(C_0)={\rm GKdim}(C)$, then $Q\otimes C$ contains the finitely generated $Q$-algebra $Q\otimes C_0$, ${\rm GKdim}(Q\otimes C_0)={\rm GKdim}(Q\otimes C)$, and since $Q\otimes Q\cong Q$, we obtain {\small \begin{center} ${\rm GKdim}(B\otimes C)={\rm GKdim}(Q\otimes (B\otimes C))={\rm GKdim}((Q\otimes B)\otimes (Q\otimes C))={\rm GKdim}(Q\otimes B)+{\rm GKdim}(Q\otimes C)={\rm GKdim}(B)+{\rm GKdim}(C)$. \end{center}} \noindent (ix) Let $W:=\,_R\langle \frac{w_1}{s_1},\dots,\frac{w_t}{s_t}\rangle$ be a frame of $BS^{-1}$; taking a common denominator $s$ we can assume $W=\,_R\langle \frac{w_1}{s},\dots,\frac{w_t}{s}\rangle$. Then $sW\subseteq\, _R\langle w_1,\dots,w_t\rangle\subseteq B$ and observe that $V_W:=\,_R\langle 1,s,w_1,\dots,w_t\rangle$ is a frame of $B$. For every $n\geq 0$, $W^n\subseteq V_W^n\frac{1}{s^n}$ and since $Q$ is $R$-flat, then $Q\otimes W^n\subseteq Q\otimes V_W^n\frac{1}{s^n}\cong Q\otimes V_W^n$ (isomorphism of $Q$-vector spaces), so $\dim_Q(Q\otimes W^n)\leq \dim_Q(Q\otimes V_W^n)$, i.e., ${\rm rank}(W^n)\leq {\rm rank}(V_W^n)$, whence ${\rm GKdim}(BS^{-1})\leq {\rm GKdim}(B)$. Since $B\subseteq BS^{-1}$, (vi) completes the proof. \noindent (x) We have the following isomorphism of $R$-algebras \begin{align*} B\otimes R[G] & \cong B[G]\\ b\otimes (\sum_{g\in G}r_g g) & \mapsto \sum_{g\in G}(b r_g)\cdot g \end{align*} and from this we obtain the isomorphism of $Q$-algebras \begin{center} $Q\otimes B[G]\cong Q\otimes (B\otimes R[G])\cong (Q\otimes B)\otimes R[G]\cong (Q\otimes B)[G]$. \end{center} Therefore, \begin{center} ${\rm GKdim}(B[G])={\rm GKdim}(Q\otimes B[G])={\rm GKdim}((Q\otimes B)[G])={\rm GKdim}(Q\otimes B)={\rm GKdim}(B)$. \end{center} (xi) $Q\otimes B$ has an induced natural $\mathbb{N}$-filtration given by $\{Q\otimes F_p(B)\}_{p\in \mathbb{N}}$, moreover, $Q\otimes B$ is locally finite. 
Since $Gr(B)$ is finitely generated, $Q\otimes Gr(B)$ is finitely generated. We have the following isomorphism of $Q$-algebras: \begin{center} $Q\otimes Gr(B)\cong Gr(Q\otimes B)$, $q\otimes \overline{b_p}\mapsto \overline{q\otimes b_p}$, with $q\in Q$ and $b_p\in F_p(B)$ \end{center} (this isomorphism is induced by the isomorphisms $Q\otimes \frac{F_p(B)}{F_{p-1}(B)}\cong \frac{Q\otimes F_p(B)}{Q\otimes F_{p-1}(B)}$). Hence, $Gr(Q\otimes B)$ is finitely generated and we have \begin{center} ${\rm GKdim}(Gr(B))={\rm GKdim}(Q\otimes Gr(B))={\rm GKdim}(Gr(Q\otimes B))={\rm GKdim}(Q\otimes B)={\rm GKdim}(B)$. \end{center} (xii) We know that $Q\otimes V$ generates the $Q$-algebra $Q\otimes B$. Observe that \begin{center} $Q\otimes (B[x;\sigma,\delta])\cong (Q\otimes B)[x;\overline{\sigma},\overline{\delta}]$ \end{center} is an isomorphism of $Q$-algebras, where $\overline{\sigma},\overline{\delta}:Q\otimes B\to Q\otimes B$ are defined in the following natural way, $\overline{\sigma}:=i_Q\otimes \sigma$, $\overline{\delta}:=i_Q\otimes \delta$. It is easy to check that $\overline{\sigma}$ is a bijective homomorphism of $Q$-algebras, $\overline{\delta}$ is a $Q$-linear $\overline{\sigma}$-derivation and $\overline{\sigma}(Q\otimes V)\subseteq Q\otimes V$. Therefore, \begin{center} ${\rm GKdim}(B[x;\sigma,\delta])={\rm GKdim}((Q\otimes B)[x;\overline{\sigma},\overline{\delta}])={\rm GKdim}(Q\otimes B)+1={\rm GKdim}(B)+1$. \end{center} (xiii) Note first that $A$ is an $R$-algebra. Let $V:=\,_R\langle 1,v_1,\dots,v_{l}\rangle$ be a generator frame of $B$, then $\{V^n\}_{n\geq 0 }$ is a locally finite $\mathbb{N}$-filtration of $B$, in particular, $B=\bigcup_{n\geq 0}V^n$. Let $m:={\rm max}\{m_{11},\dots,m_{tl}\}\geq 1$ with $\delta_i(v_j)\in V^{m_{ij}}$, $1\leq i\leq t$ and $1\leq j\leq l$. Then, $\delta_i(V)\subseteq V^m$ for every $1\leq i\leq t$. In addition, $m$ can be chosen such that all parameters in (\ref{equation1.2.1}) belong to $V^m$.
By induction we can show that for $n\geq 0$, $\delta_i(V^n)\subseteq V^{n+m}$. In fact, $\delta_i(V^0)=\delta_i(R)=0\subseteq V^m$; $\delta_i(V)\subseteq V^m\subseteq V^{m+1}$; assume that $\delta_i(V^n)\subseteq V^{n+m}$ and let $z\in V^n$ and $v\in V$, then $\delta_i(zv)=\sigma_i(z)\delta_i(v)+\delta_i(z)v$, but $\sigma_i(z)\in V^n$, whence $\delta_i(zv)\in V^{n+1+m}$. Thus, $\delta_i(V^{n+1})\subseteq V^{n+1+m}$. From this we obtain in particular that $\delta_i(V^m)\subseteq V^{2m}$. Since $V^m$ is also a generator frame, then we can assume that the generator frame $V$ satisfies $\delta_i(V)\subseteq V^2$ and all parameters in (\ref{equation1.2.1}) belong to $V$. From this, in turn, we conclude that this generator frame $V=\,_R\langle 1,v_1,\dots,v_{l}\rangle$ satisfies \begin{center} $\delta_i(V^n)\subseteq V^{n+1}$, for every $n\geq 0$. \end{center} Let $X:=\,_R\langle 1,x_1,\dotsc,x_{t} \rangle$, and for every $n\geq 1$ let \begin{center} $X_n:=\,_R\langle x^\alpha\in Mon(A)\mid \deg(x^\alpha)\leq n\rangle$ (see Definition \ref{gpbwextension}). \end{center} Note that for every $n\geq 1$, \begin{center} $X_n\subseteq X^n\subseteq V^{n-1}X_n$. \end{center} The first inclusion is trivial and the second can be proved by induction. Indeed, $X^1=X=X_1=RX_1=V^{1-1}X_1$; assume that $X^{n-1}\subseteq V^{n-2}X_{n-1}$ and let $z\in X^n$; we can suppose that $z$ has the form $z=z_1\cdots z_n$ with $z_i\in \{1,x_1,\dots,x_t\}$, $1\leq i\leq n$. If at least one $z_i$ is equal to $1$, then $z\in X^{n-1}$, and by induction $z\in V^{n-2}X_{n-1}\subseteq V^{n-1}X_n$. Thus, we can suppose that every $z_i\in \{x_1,\dots,x_t\}$. If $z\in Mon(A)$, then $z\in X_n\subseteq V^{n-1}X_n$. Assume that $z\notin Mon(A)$; then at least one factor of $z$ should be moved in order to represent $z$ through the $B$-basis $Mon(A)$ of $A$. But the maximum number of permutations in order to do this is $\leq n-1$ (and this is true for every factor to be moved).
Notice that in every permutation the parameters of $A$ arise, and, as was observed above, these parameters belong to $V$. Hence, once the factor is in the right position, we can apply induction and get that $z$, represented in the standard form through the basis $Mon(A)$, belongs to $V^{n-1}X_n$. $X_n$ is a free left $R$-module with \begin{equation*} \dim_R X_n=\sum_{i=0}^n\binom{t+i-1}{i}, \end{equation*} and for $n\gg 0$ \begin{equation*} \frac{n^t}{t!}\leq \dim_R X_n\leq (n+1)^t. \end{equation*} Now we can complete the proof dividing it into two steps. For this, let $W:=V+X$, then $W$ is a generator frame of $A$. \textit{Step 1}. ${\rm GKdim}(A)\leq {\rm GKdim}(B)+t$. We will show first that for every $n\geq 0$, \begin{center} $W^n\subseteq V^{n}X^{n}$. \end{center} For $n=0$, $W^0=R=V^0X^0$; for $n=1$, $W^1=V+X\subseteq VX$. Suppose that $W^n\subseteq V^{n}X^{n}$ and let $w\in W$ and $z\in W^n$, then denoting by $\delta$ any of the elements of $\{\delta_1,\dots,\delta_t\}$, we get $wz\in (V+X)V^{n}X^{n}\subseteq V^{n+1}X^n+XV^nX^n\subseteq V^{n+1}X^{n+1}+(V^nX+\delta(V^n))X^n\subseteq V^{n+1}X^{n+1}+V^nX^{n+1}+V^{n+1}X^n\subseteq V^{n+1}X^{n+1}$. Thus, $W^{n+1}\subseteq V^{{n+1}}X^{{n+1}}$. Hence, ${\rm rank}W^{n}\leq {\rm rank}V^{n}X^{n}\leq {\rm rank}V^{n}V^{n-1}X_n={\rm rank}V^{2n-1}X_n\leq {\rm rank}V^{2n}X_n$, but since $V^{2n}\subseteq B$ and the $R$-basis of $X_n$ consists of standard monomials with $\dim_R X_n\leq (n+1)^t$, then ${\rm rank}V^{2n}X_n\leq (n+1)^t{\rm rank}V^{2n}$. In fact, let $l:=\dim_R X_n$ and $\{x^{\alpha_1},\dots,x^{\alpha_l}\}$ be an $R$-basis of $X_n$, then we have the $R$-isomorphism \begin{center} $V^{2n}\oplus \cdots \oplus V^{2n}\to V^{2n}X_n$ $(b_1,\dots,b_l)\mapsto b_1x^{\alpha_1}+\cdots+b_lx^{\alpha_l}$, \end{center} and hence ${\rm rank}V^{2n}X_n={\rm rank}(V^{2n}\oplus \cdots \oplus V^{2n})=l{\rm rank}V^{2n}\leq (n+1)^t{\rm rank}V^{2n}$.
Therefore, \begin{center} ${\rm GKdim}(A)=\overline{\lim_{n\to \infty}}\log_n({\rm rank} W^{n})\leq \overline{\lim_{n\to \infty}}\log_n({\rm rank} V^{2n}X_n)\leq \overline{\lim_{n\to \infty}}\log_n((n+1)^t\dim_Q(Q\otimes V^{2n}))=t+\overline{\lim_{n\to \infty}}\log_n(\dim_Q(Q\otimes V^n))=t+\overline{\lim_{n\to \infty}}\log_n({\rm rank}V^n)=t+{\rm GKdim}(B)$. \end{center} \textit{Step 2}. ${\rm GKdim}(A)\geq {\rm GKdim}(B)+t$. Observe that for every $n\geq 0$, \begin{center} $V^{n}X^n\subseteq W^{2n}$. \end{center} In fact, $V^0X^0=R=W^0$ and for $n\geq 1$, $V^nX^n\subseteq (V+X)^n(V+X)^n=W^{2n}$. Therefore, $W^{2n}\supseteq V^{n}X^n\supseteq V^{n}X_n$, and as in Step 1, we get ${\rm rank}W^{2n}\geq {\rm rank}V^{n}X_{n}\geq \frac{n^t}{t!}{\rm rank}V^{n}$, whence \begin{center} ${\rm GKdim}(A)=\overline{\lim_{n\to \infty}}\log_n({\rm rank} W^{2n})\geq \overline{\lim_{n\to \infty}}\log_n({\rm rank} V^{n}X_n)\geq\overline{\lim_{n\to \infty}}\log_n\left(\frac{n^t}{t!}\dim_Q(Q\otimes V^n)\right)=t+\overline{\lim_{n\to \infty}}\log_n(\dim_Q(Q\otimes V^n))=t+\overline{\lim_{n\to \infty}}\log_n({\rm rank}V^n)=t+{\rm GKdim}(B)$. \end{center} \end{proof} \begin{remark} Comparing with the classical Gelfand-Kirillov dimension over fields (see \cite{Krause}), we have the following remarks about Definition \ref{definitionGKforrings}. (i) If $V$ is a generator frame of $B$, then $\{V^n\}_{n\geq 0}$ is an $\mathbb{N}$-filtration of $B$. Note that ${\rm GKdim}(B)$ measures the asymptotic behavior of the sequence $\{{\rm rank} V^n\}_{n\geq 0}$. (ii) From Proposition \ref{proposition17.3.8} we get that \begin{center} there exists an $R$-algebra $B$ with ${\rm GKdim}(B)=r$ if and only if $r\in \{0\}\cup \{1\}\cup [2,\infty]$. \end{center} Indeed, if $r:={\rm GKdim}(B)={\rm GKdim}(Q\otimes B)$, then it is well-known (see \cite{Krause}) that $r\in \{0\}\cup \{1\}\cup [2,\infty]$. Conversely, suppose that $r$ is in this union; then there exists a $Q$-algebra $A$ such that ${\rm GKdim}(A)=r$. Notice that $A$ is an $R$-algebra.
Let $X$ be a $Q$-basis of $A$ and let $B$ be the $R$-subalgebra of $A$ generated by $X$. We have the surjective homomorphism of $Q$-algebras \begin{center} $Q\otimes B\xrightarrow{\alpha} A$, $q\otimes b\mapsto q\cdot b$, with $q\in Q$ and $b\in B$. \end{center} Hence, $A\cong (Q\otimes B)/\ker(\alpha)$, so $r={\rm GKdim}(A)\leq {\rm GKdim}(Q\otimes B)={\rm GKdim}(B)$. On the other hand, since $Q$ is $R$-flat we have the injective homomorphism of $Q$-algebras $Q\otimes B\hookrightarrow Q\otimes A$, and moreover, $Q\otimes A\cong A$, with $q\otimes a\mapsto q\cdot a$ (isomorphism of $Q$-algebras). Therefore, $Q\otimes B$ is a $Q$-subalgebra of $A$, and hence, ${\rm GKdim}(B)={\rm GKdim}(Q\otimes B)\leq {\rm GKdim}(A)=r$. Thus, ${\rm GKdim}(B)=r$. (iii) If $B$ is finitely generated and commutative, then ${\rm GKdim}(B)$ is a nonnegative integer. Indeed, this property is well-known (see \cite{Krause}, Theorem 4.5) for commutative finitely generated algebras over fields, since in that situation ${\rm GKdim}={\rm cl.Kdim}$. Hence, since $Q\otimes B$ is finitely generated and commutative, ${\rm GKdim}(B)={\rm GKdim}(Q\otimes B)$ is a nonnegative integer. \end{remark} \section{Gelfand-Kirillov dimension for modules} Definition \ref{definitionGKforrings} can be extended to modules. Let $M$ be a right $B$-module; then $M$ is an $R$-$B$-bimodule: indeed, for $r\in R$, $m\in M$ and $b\in B$ we define $r\cdot m:=m\cdot (r\cdot 1_B)$, and from this it is easy to check that $(r\cdot m )\cdot b=r\cdot (m\cdot b)$. \begin{definition} Let $B$ be an $R$-algebra and let $M$ be a right $B$-module. The \textit{Gelfand-Kirillov dimension of $M$} is defined by \begin{equation*} {\rm GKdim}(M):=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n{\rm rank} FV^n, \end{equation*} where $V$ varies over all frames of $B$ and $F$ over all finitely generated $R$-submodules of $M$. In addition, ${\rm GKdim}(0):=-\infty$.
\end{definition} Notice that $FV^n$ is a finitely generated $R$-submodule of $M$: this follows from the identity $(r\cdot m)\cdot (r'\cdot b)=(rr')\cdot(m\cdot b)$, with $r,r'\in R$, $m\in M$ and $b\in B$. Moreover, $Q\otimes M$ is a right module over the $Q$-algebra $Q\otimes B$, with product \begin{center} $(q\otimes m)\cdot (q'\otimes b):=(qq')\otimes (m\cdot b)$, where $q,q'\in Q$, $m\in M$ and $b\in B$. \end{center} \begin{proposition}\label{proposition17.4.7} Let $B$ be an $R$-algebra and let $M$ be a right $B$-module. Then, \begin{center} ${\rm GKdim}(M)={\rm GKdim}(Q\otimes M)$. \end{center} \end{proposition} \begin{proof} Let $V$ be a frame of $B$ and $F$ be a finitely generated $R$-submodule of $M$; then $Q\otimes F$ is a finite-dimensional $Q$-vector subspace of the right $Q\otimes B$-module $Q\otimes M$, and $Q\otimes V$ is a frame of $Q\otimes B$. In addition, $(Q\otimes F)(Q\otimes V)^n=Q\otimes FV^n$ is a finite-dimensional $Q$-subspace of $Q\otimes M$. Therefore, {\small \begin{center} ${\rm GKdim}(M)=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n{\rm rank} FV^n=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n(\dim_Q(Q\otimes FV^n))=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n(\dim_Q((Q\otimes F)(Q\otimes V)^n))\leq {\rm GKdim}(Q\otimes M)$. \end{center}} Now let $W$ be a frame of $Q\otimes B$; we showed in the proof of Proposition \ref{proposition17.3.8} that there exist finitely many $v_1,\dots,v_m\in B$ such that $V_W:=\,_R\langle 1,v_1,\dots,v_m\rangle$ is a frame of $B$ and $W\subseteq Q\otimes V_W$. Similarly, if $G$ is a $Q$-subspace of $Q\otimes M$ of finite dimension, then there exist finitely many $m_1,\dots,m_t\in M$ such that $F_G:=\,_R\langle m_1,\dots,m_t\rangle$ satisfies $G\subseteq Q\otimes F_G$. Moreover, for every $n\geq 0$, \begin{center} $GW^n\subseteq (Q\otimes F_G)(Q\otimes V_W)^n=(Q\otimes F_G)(Q\otimes V_W^n)=Q\otimes F_GV_W^n$.
\end{center} Therefore, \begin{center} ${\rm GKdim}(Q\otimes M)=\sup_{W,G}\overline{\lim_{n\to \infty}}\log_n\dim_Q GW^n\leq \sup_{V_W,F_G}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes F_GV_W^n)=\sup_{V_W,F_G}\overline{\lim_{n\to \infty}}\log_n{\rm rank} F_GV_W^n\leq {\rm GKdim}(M)$. \end{center} \end{proof} In the next theorem we present some basic properties of the Gelfand-Kirillov dimension of modules. \begin{theorem}\label{theoremformodules} Let $B$ be an $R$-algebra and let $M$ be a right $B$-module. Then, \begin{enumerate} \item[\rm (i)]${\rm GKdim}(B_B)={\rm GKdim}(B)$. \item[\rm (ii)]${\rm GKdim}(M)\leq {\rm GKdim}(B)$. \item[\rm (iii)]Let $0\to K\to M\to L\to 0$ be an exact sequence of right $B$-modules; then \begin{center} ${\rm GKdim}(M)\geq \max\{{\rm GKdim}(K),{\rm GKdim}(L)\}$. \end{center} \item[\rm (iv)]Let $I$ be a two-sided ideal of $B$ with $MI=0$; then \begin{center} ${\rm GKdim}(M_B)={\rm GKdim}(M_{B/I})$. \end{center} \item[\rm (v)]${\rm GKdim}(\sum_{i=1}^n M_i)=\max\{{\rm GKdim}(M_i)\}_{i=1}^n={\rm GKdim}(\bigoplus_{i=1}^n M_i)$. \end{enumerate} \end{theorem} \begin{proof} (i) Let $V$ be a frame of $B$; then for every $n\geq 0$, $V^n\subseteq V^{n+1}=VV^n$, hence \begin{equation*} {\rm GKdim}(B)=\sup_{V}\overline{\lim_{n\to \infty}}\log_n{\rm rank} V^n\leq \sup_{V}\overline{\lim_{n\to \infty}}\log_n{\rm rank} VV^{n}\leq {\rm GKdim}(B_B). \end{equation*} On the other hand, if $F$ is a finitely generated $R$-submodule of $B$, then for every frame $V$ of $B$ we have that $W_V:=V+F$ is a frame of $B$; moreover, for every $n$, $W_V^{n+1}=(V+F)^{n+1}\supseteq FV^n$, so \begin{equation*} {\rm GKdim}(B_B)=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n{\rm rank}FV^n\leq \sup_{W_V}\overline{\lim_{n\to \infty}}\log_n{\rm rank}W_V^{n+1}\leq {\rm GKdim}(B). \end{equation*} (ii) From Proposition \ref{proposition17.4.7} and (i) we get \begin{center} ${\rm GKdim}(M)={\rm GKdim}(Q\otimes M)\leq {\rm GKdim}(Q\otimes B)={\rm GKdim}(B)$.
\end{center} (iii) Since $Q$ is an $R$-flat module, we get the exact sequence of $Q\otimes B$-modules $0\to Q\otimes K\to Q\otimes M\to Q\otimes L\to 0$, whence \begin{center} ${\rm GKdim}(M)={\rm GKdim}(Q\otimes M)\geq \max\{{\rm GKdim}(Q\otimes K),{\rm GKdim}(Q\otimes L)\}=\max\{{\rm GKdim}(K),{\rm GKdim}(L)\}$. \end{center} (iv) As in part (v) of Theorem \ref{theorem17.4.4}, let $W$ be a frame of $B/I$; we can assume that $W=\,_R\langle \overline{1},\overline{w_1},\dots,\overline{w_t}\rangle$, and then $V_W:=\,_R\langle 1, w_1,\dots,w_t\rangle$ is a frame of $B$. Let $G$ be a finitely generated $R$-submodule of $M_{B/I}$; then $F_G:=G$ is also a finitely generated $R$-submodule of $M_B$. For every $n\geq 0$ we have $\dim_Q(Q\otimes GW^n)=\dim_Q(Q\otimes G\overline{V_W^n})=\dim_Q(Q\otimes F_GV_W^n)$. The last equality can be justified in the following way. The $Q$-vector spaces $Q\otimes G\overline{V_W^n}$ and $Q\otimes F_GV_W^n$ have finite dimension and we have the following homomorphisms of $Q$-vector spaces: \begin{center} $Q\otimes F_GV_W^n\to Q\otimes G\overline{V_W^n}$, $q\otimes g\cdot z\mapsto q\otimes g\cdot \overline{z}$, with $q\in Q$, $g\in F_G$ and $z\in V_W^n$, $Q\otimes G\overline{V_W^n}\to Q\otimes F_GV_W^n$, $q\otimes g\cdot \overline{z}\mapsto q\otimes g\cdot z$, with $q\in Q$, $g\in G$ and $z\in V_W^n$. \end{center} The last homomorphism is well-defined since $MI=0$. It is clear that the composites of these homomorphisms are the identities. Hence, \begin{center} ${\rm GKdim}(M_{B/I})=\sup_{W,G}\overline{\lim_{n\to \infty}}\log_n{\rm rank} GW^n=\sup_{W,G}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes GW^n)= \sup_{V_W,F_G}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes F_GV_W^n)=\sup_{V_W,F_G}\overline{\lim_{n\to \infty}}\log_n{\rm rank} F_GV_W^n\leq {\rm GKdim}(M_B)$.
\end{center} Conversely, let $V:=\,_R\langle 1, v_1,\dots,v_t\rangle$ be a frame of $B$; then $W_V:=\,_R\langle \overline{1}, \overline{v_1},\dots,\overline{v_t}\rangle$ is a frame of $B/I$. Let $F$ be a finitely generated $R$-submodule of $M_B$; then $G_F:=F$ is a finitely generated $R$-submodule of $M_{B/I}$. As above, for every $n\geq 0$ we have $\dim_Q(Q\otimes FV^n)= \dim_Q(Q\otimes G_FW_V^n)$. Hence, \begin{center} ${\rm GKdim}(M_{B})=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n{\rm rank} FV^n=\sup_{V,F}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes FV^n)= \sup_{W_V,G_F}\overline{\lim_{n\to \infty}}\log_n\dim_Q (Q\otimes G_FW_V^n)=\sup_{W_V,G_F}\overline{\lim_{n\to \infty}}\log_n{\rm rank} G_FW_V^n\leq {\rm GKdim}(M_{B/I})$. \end{center} Therefore, ${\rm GKdim}(M_{B})={\rm GKdim}(M_{B/I})$. (v) The equalities can be proved by tensoring with $Q$. \end{proof} \section{Gelfand-Kirillov transcendence degree} The Gelfand-Kirillov transcendence degree for algebras over fields was defined in \cite{GK} (see also \cite{Zhangtranscendence}). This notion can be extended to algebras over commutative domains. \begin{definition} Let $B$ be an $R$-algebra. The \textit{Gelfand-Kirillov transcendence degree} of $B$ is defined by \[ {\rm Tdeg}(B):=\sup_{V}\inf_{b}{\rm GKdim}(R[bV]), \] where $V$ ranges over all frames of $B$ and $b$ ranges over all regular elements of $B$. \end{definition} \begin{theorem}\label{theorem4.2} Let $B$ be an $R$-algebra. Then, \begin{enumerate} \item[\rm (i)]${\rm Tdeg}(B)\leq {\rm GKdim}(B)$. \item[\rm (ii)]If $B$ is commutative, then ${\rm Tdeg}(B)={\rm GKdim}(B)$. \end{enumerate} \end{theorem} \begin{proof} Since $R[bV]$ is an $R$-subalgebra of $B$, we have ${\rm GKdim}(R[bV])\leq {\rm GKdim}(B)$ for all $V$ and $b$, whence \begin{center} ${\rm Tdeg}(B)\leq {\rm GKdim}(B)$.
\end{center} If $B$ is commutative, then $Q\otimes B$ is commutative, and it is known that for commutative algebras over fields the equality holds; therefore, ${\rm Tdeg}(Q\otimes B)={\rm GKdim}(Q\otimes B)$ (see \cite{Zhangtranscendence}, Proposition 2.2). But note that ${\rm Tdeg}(Q\otimes B)\leq {\rm Tdeg}(B)$. In fact, \[ {\rm Tdeg}(Q\otimes B):=\sup_{W}\inf_{z}{\rm GKdim}(Q[zW])\leq \sup_{W}\inf_{u}{\rm GKdim}(Q[uW]), \] where $W$ ranges over all frames of $Q\otimes B$, $z$ ranges over all regular elements of $Q\otimes B$, and $u$ over all regular elements of $Q\otimes B$ of the form $1\otimes b$, with $b$ any regular element of $B$ (if $b$ is a regular element of $B$, then $1\otimes b$ is a regular element of $Q\otimes B$: indeed, this follows from the fact that $Q\otimes B\cong BS_0^{-1}$, with $S_0:=R-\{0\}$, $1\otimes b\mapsto \frac{b}{1}$). As we saw in the proof of Proposition \ref{proposition17.3.8}, given a frame $W$ there exists a frame $V_W$ of $B$ such that $W\subseteq Q\otimes V_W$, hence $uW\subseteq u(Q\otimes V_W)=Q\otimes bV_W\subseteq Q\otimes R[bV_W]$, whence $Q[uW]\subseteq Q\otimes R[bV_W]$, and from this \[ {\rm Tdeg}(Q\otimes B)\leq \sup_{V_W}\inf_{b}{\rm GKdim}(Q\otimes R[bV_W])=\sup_{V_W}\inf_{b}{\rm GKdim}(R[bV_W])\leq {\rm Tdeg}(B). \] This proves the claim. \end{proof} \begin{remark} (i) Taking $R=\mathbb{Z}$ in Definition \ref{definitionGKforrings} we get the notion of Gelfand-Kirillov dimension for arbitrary rings, and hence all properties in Theorems \ref{theorem17.4.4}, \ref{theoremformodules} and \ref{theorem4.2} hold in this particular situation. (ii) The proof of Theorem 2.2 in \cite{LezamaHelbert3} can be easily adapted to the case of algebras over commutative domains. Indeed, we can replace vector subspaces over the field $K$ and their dimension by finitely generated submodules and their rank over the commutative domain $R$.
Thus, Theorem 2.2 in \cite{LezamaHelbert3} can be extended in the following way: Let $R$ be a commutative domain and $A$ be a right Ore domain. If $A$ is a finitely generated $R$-algebra such that ${\rm GKdim}(A)<{\rm GKdim}(Z(A))+1$, then \begin{center} $Z(Q_r(A))=\{\frac{p}{q}\mid p,q\in Z(A), q\neq 0\}\cong Q(Z(A))$. \end{center} \end{remark} \end{document}
\begin{document}
\title{{\huge\bf Spectral measure of Laplacean operators in Paley-Wiener space}}
\newtheorem{Theorem}{THEOREM}
\newtheorem{Lemma}[Theorem]{LEMMA}
\newtheorem{Corollary}[Theorem]{COROLLARY}
\author{{\bf Dang Vu Giang}\\
Hanoi Institute of Mathematics\\
18 Hoang Quoc Viet, 10307 Hanoi, Vietnam\\
{\footnotesize e-mail: $\langle$[email protected]$\rangle$}\\
}
\date{9/13/'03}
\maketitle
\noindent {\bf Abstract.} We are interested in computing the spectral measure of Laplacean operators in the Paley-Wiener space, the Hilbert space of all square integrable functions having Fourier transforms
supported in a compact set $K$, the closure of an open bounded set in $\mathbf R^N$. It is well-known that every differential operator is bounded in this space. Among others, we will prove that the spectrum of the Laplace operator is the set $$\{-|x|^2: x\in K\}.$$ \noindent {\bf\sc 2000 AMS Subject Classification: } 46E30 35B40 \noindent {\bf\sc Key Words: } {interior points, approximate point spectrum} \eject \section{Introduction} \noindent Let $K_0$ be a bounded open set in $\mathbf R^N$ and let $K$ be the closure of $K_0$. Then $K$ is a compact set. Let $\mathcal H$ be the Hilbert space of all $f\in L^2(\mathbf R^N)$ such that the Fourier transform $\hat f$ of $f$ is supported in $K$. Let $\Delta$ denote the Laplace operator in $\mathbf R^N$. Then $\Delta$ generates a contraction $C_0$-semigroup in $L^2(\mathbf R^N)$ and $$e^{t\Delta}f(x)=\frac1{(4\pi t)^{N/2}}\int_{\mathbf R^N}\exp\Bigl(-\frac{|x-y|^2}{4t}\Bigr)f(y)dy,$$ for every $f\in L^2(\mathbf R^N)$. (See [2,3,4].) On the other hand, $\Delta$ is a bounded self-adjoint operator on $\mathcal H$, so $$e^{t\Delta}f(x)=\sum_{k=0}^\infty\frac{t^k}{k!}\Delta^k f(x),$$ for every $f\in\mathcal H$. Let $\rho(\Delta)$ denote the resolvent set of $\Delta$ over $\mathcal H$ and let $R(\lambda,\Delta):=\bigl(\lambda-\Delta\bigr)^{-1}$ denote the resolvent operator. By the Hille-Yosida theorem, $(0,\infty)\subset\rho(\Delta)$ and $$||R(\lambda,\Delta)|| \leqslant\frac1\lambda\qquad\hbox{ for all }\quad\lambda>0.$$ Moreover, \begin{equation}\label{resolvent} \begin{array}{cl} R(\lambda,\Delta) &\displaystyle=\frac1\lambda\sum_{k=0}^\infty\frac{\Delta^k}{\lambda^k}\quad\hbox{ (for } \lambda>||\Delta||) \\ &\displaystyle=\sum_{k=0}^\infty(\mu-\lambda)^kR(\mu,\Delta)^{k+1}\quad\hbox{ (for } 0<\lambda<\mu) \\ &\displaystyle=\int_0^\infty e^{-\lambda t} e^{t\Delta}dt. \end{array} \end{equation} Let $$\sigma(\Delta):=\mathbf C\setminus\rho(\Delta)$$ denote the spectrum of $\Delta$ over $\mathcal H$. Then, by a well-known theorem of Gelfand, $\sigma(\Delta)$ is a non-empty compact subset of the complex plane $\mathbf C$.
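As a quick consistency check (using only the fact that on the Fourier side all of these operators act as multipliers), note that $\widehat{e^{t\Delta}f}(x)=e^{-t|x|^2}\hat f(x)$ for $f\in\mathcal H$, so the last formula in (\ref{resolvent}) can be verified directly: for $\lambda>0$,
$$\int_0^\infty e^{-\lambda t}e^{-t|x|^2}\,dt=\frac1{\lambda+|x|^2},$$
and therefore $R(\lambda,\Delta)$ acts on $\mathcal H$ as multiplication of $\hat f$ by $\bigl(\lambda+|x|^2\bigr)^{-1}$, a bounded function on $K$. In particular this is consistent with the estimate $||R(\lambda,\Delta)||\leqslant 1/\lambda$ above.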
Let $$r(\Delta):=\sup\bigl\{|z|:z\in\sigma(\Delta)\bigr\}$$ denote the spectral radius of $\Delta$ over $\mathcal H$. Then $$r(\Delta)=\lim_{n\to\infty}||\Delta^n||^{1/n}.$$ Let $$\hat f(x)=\frac1{(2\pi)^{N/2}}\int_{\mathbf R^N}e^{-ix\cdot y}f(y)dy$$ denote the Fourier transform of $f\in L^1(\mathbf R^N)$. Classical Fourier analysis gives $$\widehat{\Delta f}(x)=-|x|^2\hat f(x),$$ and consequently, $$||\Delta||=\sup_{x\in K}|x|^2=r(\Delta)=:R.$$ \noindent {\bf Theorem 1. } {\it We have} $$\sigma(\Delta)=\{-|x|^2: x\in K\}.$$ \noindent{\it Proof: } First, $\Delta$ is self-adjoint, so $\sigma(\Delta)\subset\mathbf R$. But the resolvent set of $\Delta$ contains the positive semi-axis, so $\sigma(\Delta)\subset (-\infty,0]$. On the other hand, the spectral radius of $\Delta$ is $R$, so $\sigma(\Delta)\subseteq[-R,0].$ But $\Delta$ has no eigenvector in ${\mathcal H}$ (an eigenvector with eigenvalue $-\lambda$ would have its Fourier transform supported in the null set $\{x:|x|^2=\lambda\}$, hence would vanish), so $\Delta$ has approximate point spectrum only. This means that $-\lambda\in\sigma(\Delta)$ if and only if there is a sequence $f_1,f_2,\dots$ of unit vectors of ${\mathcal H}$ such that $$\lim_{n\to\infty}||\Delta f_n+\lambda f_n||=0$$ (see \cite{arendt}, page 64, for details). We will prove that if $\lambda=|x|^2$ for some $x\in K$, there is a sequence $f_1,f_2,\dots$ of unit vectors of ${\mathcal H}$ such that $$\lim_{n\to\infty}||\Delta f_n+\lambda f_n||=0.$$ Indeed, let $x_0$ be an interior point of $K$ and $\lambda=|x_0|^2$. For $n$ large enough, the ball $B_n$ centered at $x_0$ with radius $1/n$ is contained in $K$. Let $\varphi_n=:\hat{f_n}$ be a unit vector of $L^2(\mathbf R^N)$ supported in $B_n$ which is constant in this ball.
Then, $f_n$ is a unit vector of ${\mathcal H}$, and \[\begin{aligned} ||\Delta f_n+\lambda f_n||^2=||\widehat{\Delta f_n}+\lambda\hat f_n||^2 &=\int_{B_n}\bigl(|x_0|^2-|x|^2\bigr)^2|\varphi_n(x)|^2 dx\\ &\leqslant\frac 1{n^2}\cdot\Bigl(2|x_0|+\frac1n\Bigr)^2, \end{aligned}\] which certainly tends to $0$ as $n\to\infty$. A similar proof holds if $x_0$ is a boundary point. Therefore, $$\{-|x|^2: x\in K\}\subseteq\sigma(\Delta).$$ Conversely, let $-\lambda\in\sigma(\Delta)$ and let $f_1,f_2,\dots$ be a sequence of unit vectors of ${\mathcal H}$ such that $$\lim_{n\to\infty}||\Delta f_n+\lambda f_n||=0.$$ Let $\varphi_n=\hat f_n$. We have $$||\Delta f_n+\lambda f_n||^2=||\widehat{\Delta f_n}+\lambda\hat f_n||^2 =\int_{K}\bigl(\lambda-|x|^2\bigr)^2|\varphi_n(x)|^2 dx \ge\delta,$$ where $\delta=\min_{x\in K} \bigl(\lambda-|x|^2\bigr)^2= (\lambda-|x_0|^2)^2$ for some $x_0\in K$. But $||\Delta f_n+\lambda f_n||\to 0$ as $n\to\infty$, so $\delta=0$, and consequently, $\lambda=|x_0|^2$ for some $x_0\in K$. The proof is now complete. \noindent{\bf Remark. } The assumption that $K$ is the closure of an open set is essential: if $K$ has an isolated point or a cluster point, the assertion of this theorem does not hold. \section{Moments of Spectral Measures} \noindent Let $H_0=-\Delta$. Then $H_0$ is a bounded, positive, self-adjoint operator on ${\mathcal H}$, because $$\langle\Delta f, f\rangle=-\int_{\mathbf R^N}\bigl|\nabla f(x)\bigr|^2dx\leqslant 0\quad\hbox{ for all } f\in{\mathcal H}.
$$ Consider the spectral decomposition of $\Delta$: $$\Delta=\int_{\sigma(\Delta)}\lambda dE_\lambda.$$ Let $\mu$ denote the spectral measure of $\Delta$ with respect to some function $v\in\mathcal H$ with $||v||=1$. Then $\mu$ is a probability measure supported in $[-R,0]$ and $$\langle R(\lambda,\Delta)v,v\rangle=\int_{\sigma(\Delta)}\frac{d\mu(x)}{\lambda-x}$$ and $$\langle e^{it\Delta}v,v\rangle=\int_{\sigma(\Delta)} e^{itx}d\mu(x).$$ Let $\{p_0,p_1,p_2,\dots\}$ be the system of orthonormal polynomials with respect to $\mu$. Then there are two sequences $\{a_0,a_1,a_2,\dots\}$ and $\{b_0,b_1,b_2,\dots\}$ such that $$xp_n(x)=a_{n-1}p_{n-1}(x)+b_np_n(x)+a_np_{n+1}(x)\qquad\hbox{ for all } n=1,2,\dots.$$ Let $A$ be the tridiagonal Jacobi matrix with main diagonal $\{b_0,b_1,b_2,\dots\}$ and two other diagonals $\{a_0,a_1,a_2,\dots\}$: \begin{equation*} A=\left( \begin{matrix} b_0&a_0&0&0&\dots\\ a_0&b_1&a_1&0&\dots\\ 0&a_1&b_2&a_2&\dots \\ 0&0&a_2&b_3& \dots\\ \vdots& \vdots& \vdots& \vdots& \ddots \end{matrix} \right) \end{equation*} Let $\underline e_0=(1,0,0,\dots)$ be the first vector in $\ell^2(\mathbf N_0)$.
Then $A$, as a bounded linear operator in $\ell^2(\mathbf N_0)$, has $\mu$ as its spectral measure with respect to $\underline e_0$, i.e., $$\langle e^{itA}\underline e_0,\underline e_0\rangle=\int_{\sigma(\Delta)} e^{itx}d\mu(x).$$ Hence, $\Delta$ and $A$ are spectrally equivalent. Since $\Delta$ has no eigenvalue, $$\sum_{n = 0}^{\infty}| p_n(x)|^2 = \infty \qquad \mbox{for all } x \in \mathbf C.$$ It is well known that the measure $\mu$ is absolutely continuous, i.e. $d\mu(\xi) = \varphi(\xi)d\xi$, where $\varphi(\xi)> 0$ on $\sigma(\Delta)$. If $K$ is the closed unit ball of $\mathbf R^N$ and $$v(x)=\frac1{(2\pi)^{N/2}\sqrt{|K|}}\int_{K}e^{-ix\cdot y}dy,$$ then $\sigma(\Delta)=[-1,0]$ and $$\int_{-1}^0\xi^k\varphi(\xi)d\xi = (-1)^k\frac{N}{N + 2k}, \quad\mbox{for } k = 0, 1, 2, \dots.$$ Hence, $\varphi (\xi )=\frac{N}{2}{{\left| \xi \right|}^{\frac{N}{2}-1}}$ is the density of the spectral measure. \par\noindent{\bf Acknowledgement.} Deepest appreciation is extended towards NAFOSTED (the National Foundation for Science and Technology Development in Vietnam) for the financial support. \end{document}
\begin{document} \title[Elastic shells with incompatible strains] {Models for elastic shells with incompatible strains} \author{Marta Lewicka, L. Mahadevan and Mohammad Reza Pakzad} \address{Marta Lewicka, University of Pittsburgh, Department of Mathematics, 301 Thackeray Hall, Pittsburgh, PA 15260} \address{L. Mahadevan, Harvard University, School of Engineering and Applied Sciences, and Department of Physics, Cambridge, MA 02138} \address{Mohammad Reza Pakzad, University of Pittsburgh, Department of Mathematics, 301 Thackeray Hall, Pittsburgh, PA 15260} \email{[email protected], [email protected], [email protected]} \subjclass{74K20, 74B20} \keywords{non-Euclidean plates, nonlinear elasticity, Gamma convergence, calculus of variations} \begin{abstract} The three-dimensional shapes of thin lamina such as leaves, flowers, feathers, wings etc, are driven by the differential strain induced by the relative growth. The growth takes place through variations in the Riemannian metric, given on the thin sheet as a function of location in the central plane and also across its thickness. The shape is then a consequence of elastic energy minimization on the frustrated geometrical object. Here we provide a rigorous derivation of the asymptotic theories for shapes of residually strained thin lamina with nontrivial curvatures, i.e. growing elastic shells in both the weakly and strongly curved regimes, generalizing earlier results for the growth of nominally flat plates. The different theories are distinguished by the scaling of the mid-surface curvature relative to the inverse thickness and growth strain, and also allow us to generalize the classical F\"oppl-von K\'arm\'an energy to theories of prestrained shallow shells. 
\end{abstract} \maketitle \section{Introduction} The physical basis for {morphogenesis} is now classical and elegantly presented in D'Arcy Thompson's opus \lq\lq On growth and form'' as follows: {\em ``An organism is so complex a thing, and growth so complex a phenomenon, that for growth to be so uniform and constant in all the parts as to keep the whole shape unchanged would indeed be an unlikely and an unusual circumstance. Rates vary, proportions change, and the whole configuration alters accordingly.''} From a mathematical and mechanical perspective, this reduces to a simple principle: differential growth in a body leads to residual strains that will generically result in changes in the shape of a tissue, organ or body. Eventually, the growth patterns are expected to themselves be regulated by these strains, so that this principle might well be the basis for the physical self-organization of biological tissues. Recent interest in characterizing the morphogenesis of low-dimensional structures such as filaments, laminae and their assemblies, is driven by the twin motivations of understanding the origin of shape in biological systems and the promise of mimicking them in artificial mimics \cite{klein, Koehl2008, Sharon2}. The results lie at the interface of biology, physics and engineering, but they also have a deeply geometric character. Indeed, the basic question may be characterized in terms of a variation on a classical theme in differential geometry, that of embedding a shape with a given metric in a space of possibly different dimension \cite{Nash1, Nash2}. However, the goal now is not only to state the conditions when it might be done (or not), but also to constructively determine the resulting shapes in terms of an appropriate mechanical theory.
While these issues arise in three-dimensional tissues, the combination of the separation of scales that arises naturally in slender structures and the constraints associated with the prescription of growth laws that are functions of space (and time) leads to the expectation that the resulting theories ought to be variants of classical elastic plate and shell theories such as the F\"oppl-von K\'arm\'an or the Donnell-Mushtari-Vlasov theories \cite{Calladine}. That this is the case, has been shown for bodies that are initially flat and thin i.e. elastic plates with no initial curvature, using analogies to thermoelasticity \cite{Mansfield,Maha}, perturbation analysis \cite{BenAmar, Sharon2}, and rigorous asymptotic analysis \cite{lemapa} that follows a program similar to the derivation of the equations for the nonlinear elasticity of thin plates and shells \cite{FJMgeo, FJMM, lemopa7, lemopa_convex, lepa} and a linearized theory \cite{lepa_noneuclid} for residually strained Kirchhoff plates \cite{kirchhoff}. However, most laminae are naturally curved in their strain-free configurations, as a consequence of slow relaxation, perhaps following a previous growth history. Since even infinitesimal deformations of a curved shell will potentially violate isometry relative to its rest state, one expects that differential growth of such an object will likely lead to a variety of possible low dimensional theories depending on the relative size of the metric changes imposed on the system. This potential multiplicity of asymptotic theories is of course presaged by a similar state of affairs for the derivation of a nonlinear theory of elastic shells \cite{FJMhier, lepa}. Here, we build on the discussion in \cite{Maha, Maha2, lemapa} and present a rigorous derivation of a set of asymptotic theories for the shape of {\em residually strained} thin lamina with nontrivial curvatures, i.e. growing elastic shells. 
As our starting point for a similar theory for growing curved shells, we use the observation that it is possible to change the shape of a lamina such as a blooming lily petal by driving it via excess growth of the margins relative to the interior, rather than via midrib deformations \cite{doorn}. Previously, a thermoelastic analogy \cite{Mansfield} suggested a natural generalization of the Donnell-Mushtari-Vlasov shell theory \cite{Calladine} to growing shells \cite{Maha2}, proposed as a mathematical model for blooming, formulated in terms of the initial (transverse) out-of-plane displacement $v_0$ of a petal's mid-surface. When $v_0=0$ the equations (\ref{new_Karman}) reduce to the prestrained von K\'arm\'an equations (\ref{old_Karman}) proposed in \cite{Maha}. These were rigorously derived in \cite{lemapa} from {\em non-Euclidean elasticity}, where the imposed 3d prestrain is given via a Riemannian metric, whose components encode the appropriate linear target stretching tensor $\epsilon_g$ (of order $2$ in the shell's thickness $h$) and the bending tensor $\kappa_g$ (of order $1$ in $h$, see (\ref{ah})). This leads us to focus on a particular regime of scaling for the prestrain tensor \eqref{qh-gamma} which corresponds, a posteriori, in all the different regimes of shallowness studied here, to von K\'arm\'an type theories. It is pertinent to start with a few comments regarding this particular choice of the scaling regime. From a mathematical point of view, the von K\'arm\'an regime, where the nonlinear elastic energy per unit thickness scales like $h^4$, usually corresponds to sublinear theories, i.e. the first nonlinear theories which arise when the magnitude of forces or of prestrain allows the elastic lamina to cross the threshold of linear behavior and to manifest phenomena such as buckling. 
Since these sublinear theories are also the least complicated among the nonlinear theories of plates and shells arising in the literature, and are relevant for many applications, they are popular with engineers, physicists and applied mathematicians. Therefore, in the analysis of nonlinear shallow shell models with growth, it is reasonable to start with the von K\'arm\'an regime as here. In contrast, there are a number of technical challenges that must be addressed when deriving lower order nonlinear theories using $\Gamma$-convergence. Our current study is thus potentially the first of a series that considers the various possible shell theories that result in different limiting cases of the growth strain, the boundary loading etc. Indeed, in a forthcoming paper \cite{LMP-new}, we address a shallow shell model that arises in a forcing regime equivalent to the energy scaling $h^\beta$ for $\beta<4$, where, analogous to \cite{FJMhier}, technical obstacles regarding properties of the Sobolev solutions to the Monge-Amp\`ere equations must be addressed, before establishing the corresponding $\Gamma$-limit result. In Section 2, we formulate our main results, in terms of a scaling analysis that leads to the hierarchy of limiting models as a function of the various prestrain and shallowness regimes. In Section 3 we argue that for a non-flat mid-surface $S$ (of arbitrary curvature, or for the referential out-of-plane displacement $v_0\neq 0$), the variationally correct 2d theory coincides with the extension of the classical {\em von K\'arm\'an energy} to shells, derived in \cite{lemopa7}. In the special case $v_0=0$, this energy still reduces to the functional whose Euler-Lagrange equations are those derived for growing elastic plates in \cite{Maha}. In Section 4, we discuss a new shallow shell model, valid when the radius of curvature of the mid-surface is relatively large compared to the thickness. 
This limit leads to a prestrained plate model which inherits the geometric structure of the shallow shell. In Section 5, we consider the case where the radius of curvature and the thickness are comparable in magnitude, and appropriately compatible with the order of the prestrain tensor. We show that equations for a growing elastic shell can be formally derived by pulling back the in-plane and out-of-plane growth tensors $\epsilon_g$ and $\kappa_g$, from {\em shallow shells} $(S_h)^h$ with reference mid-surface $S_h$ given by the scaled out-of-plane displacement $hv_0$, onto a flat reference configuration. Furthermore, we argue that this theory for growing elastic shells is also the Euler-Lagrange equation of the variational limit for 3d nonlinear elastic energies on $(S_h)^h$. In Section 6 we discuss the model where the effects of shallowness are dominated by the growth-induced prestrain. In this case the limiting energy is impervious to the influence of the shell geometry, but the effects of growth may not be neglected. This leads to the generalized von K\'arm\'an equations for a growing flat plate. In Section 7, we justify that under our prestrain or growth scaling assumptions, the derived models are the relevant ones when the boundaries are free and no external forces are present. Finally, in Section 8, we conclude with a discussion of the present results and prospects for the future. Since the proofs of the theorems consist of tedious yet minor (though necessary) modifications of the arguments detailed in \cite{lemopa7, lemapa, lemopa_convex}, we refer the interested reader to the Appendix, where they are given for completeness. 
\section{Preliminaries and scaling limits} Let $ v_0\in \mathcal{C}^{1,1}(\bar\Omega)$ be an out-of-plane displacement on an open, bounded subset $\Omega\subset\mathbb{R}^2$, associated with a family of surfaces, parametrized by $\gamma\in [0,1]$: \begin{equation}\label{shallow-alpha-gamma} S_\gamma=\phi_\gamma(\Omega), \mbox{ where } \phi_\gamma(x) = \big(x, \gamma v_0(x)\big) \qquad \forall x=(x_1, x_2)\in\Omega. \end{equation} The unit normal vector to $S_\gamma$ at $\phi_\gamma(x)$ is given by: $$\vec n^\gamma(x) = \frac{\partial_1\phi_\gamma(x) \times \partial_2\phi_\gamma(x)} {|\partial_1\phi_\gamma(x) \times \partial_2\phi_\gamma(x)|} = \frac{1}{\sqrt{1+\gamma^2|\nabla v_0|^2}} \big(-\gamma\partial_1v_0(x), -\gamma \partial_2v_0(x), 1\big) \qquad\forall x\in\Omega. $$ For small $h>0$, we now consider thin plates $\Omega^h = \Omega\times (-h/2, h/2)$ and 3d shells $(S_\gamma)^h$: \begin{equation}\label{shallow-gamma} (S_\gamma)^h = \big\{\tilde \phi_{\gamma}(x, x_3); ~ x\in\Omega, ~ x_3\in(-h/2, h/2)\big\}, \end{equation} where the extension $\tilde \phi_\gamma:\Omega^h\rightarrow \mathbb{R}^3$ of $\phi_\gamma$ on $\Omega^h$ in (\ref{shallow-alpha-gamma}) is given by the formula: \begin{equation}\label{kl-gamma} \tilde\phi_{\gamma}(x, x_3) = \phi_\gamma(x) + x_3\vec n^\gamma(x) \qquad\forall (x, x_3)\in\Omega^h. \end{equation} For an elastic body with the reference configuration $(S_\gamma)^h$ we assume that its elastic energy density $W:\mathbb{R}^{3\times 3}\longrightarrow \mathbb{R}_{+}$ is $\mathcal{C}^2$ regular in a neighborhood of $SO(3)$. Moreover, we assume that $W$ satisfies the normalization, frame indifference and nondegeneracy conditions: \begin{equation}\label{en_as} \begin{split} \exists c>0\quad \forall F\in \mathbb{R}^{3\times 3} \quad \forall R\in SO(3) \quad &W(R) = 0, \quad W(RF) = W(F),\\ &W(F)\geq c~ \mathrm{dist}^2(F, SO(3)), 
\end{split} \end{equation} where $F=\nabla u$ is the deformation gradient relative to the reference configuration $(S_\gamma)^h$. For prestrained structures characterized by the Riemannian metric: $$p^h = (q^h)^Tq^h \quad \mbox{ on } ~~(S_\gamma)^h,$$ the tensor $F=\nabla u$ is replaced by $F =\nabla u (q^h)^{-1}$, so that the thickness averaged elastic energy is given by: \begin{equation}\label{IhW-gamma} I^{\gamma,h}(u) = \frac{1}{h}\int_{(S_\gamma)^h} W(F) ~\mbox{d}z = \frac{1}{h}\int_{(S_\gamma)^h} W(\nabla u (q^h)^{-1}) ~\mbox{d}z, \qquad \forall u \in W^{1,2}((S_\gamma)^h,\mathbb{R}^3). \end{equation} Letting $\epsilon_g,\kappa_g:\bar\Omega \rightarrow \mathbb{R}^{3\times 3}$ be two given smooth tensors, for each small $h$ we define the growth tensors $q^h$ on $(S_\gamma)^h$ by: \begin{equation}\label{qh-gamma} q^h(\phi_\gamma(x) + x_3\vec n^\gamma(x)) = \mbox{Id} + h^2\epsilon_g(x) + hx_3\kappa_g(x) \qquad \forall (x, x_3)\in\Omega^h. \end{equation} The corresponding metric $p^h=(q^h)^Tq^h$ on $(S_\gamma)^h$ is then: $$p^h(\phi_\gamma(x) + x_3\vec n^\gamma(x)) = \mbox{Id} + 2h^2\mbox{sym }\epsilon_g(x) + 2hx_3\mbox{sym }\kappa_g(x) +\mathcal{O}(h^3).$$ An important part of our study focuses on the asymptotic behavior in the limit of vanishing thickness $h\to 0$ of the variational models $I^{\gamma, h}$ in (\ref{IhW-gamma}), when $\gamma= \gamma(h)= h^\alpha$ for a given exponent $0 \le \alpha < +\infty$. The regime $\alpha>0$ corresponds to the study of a {\it shallow} shell. However, we will identify {\em three distinct shallow shell limit models}, depending on the asymptotic behavior of the ratio $\gamma/h$, which in our setting depends only on the value of $\alpha$. 
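The quadratic expansion of the metric $p^h=(q^h)^Tq^h$ displayed above can be verified symbolically. The following SymPy sketch is purely illustrative and not part of the derivation: generic constant matrices stand in for $\epsilon_g(x)$ and $\kappa_g(x)$, and the thickness variable is written as $x_3=th$ with $|t|<1/2$, consistent with $x_3\in(-h/2,h/2)$.

```python
import sympy as sp

# Check that p^h = (q^h)^T q^h = Id + 2h^2 sym(eps_g) + 2h x3 sym(kappa_g) + O(h^3),
# writing x3 = t*h so that the error can be measured in powers of h alone.
h, t = sp.symbols('h t')
x3 = t * h
E = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'e{i}{j}'))  # stands in for epsilon_g(x)
K = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'k{i}{j}'))  # stands in for kappa_g(x)

sym = lambda M: (M + M.T) / 2
q = sp.eye(3) + h**2 * E + h * x3 * K                # growth tensor, cf. (qh-gamma)
p = sp.expand(q.T * q)                               # induced metric p^h
claimed = sp.eye(3) + 2*h**2*sym(E) + 2*h*x3*sym(K)  # the stated expansion

remainder = sp.expand(p - claimed)
# every entry of the remainder vanishes together with its first two h-derivatives at h=0
is_Oh3 = all(sp.diff(entry, h, k).subs(h, 0) == 0
             for entry in remainder for k in range(3))
print(is_Oh3)
```

The computation confirms that the mixed terms of orders $h^2$ and $hx_3$ are exactly the symmetrized tensors, with the remainder of order $h^3$ under the stated scaling.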
This allows us to rigorously derive the $\Gamma$-limits: ${\Gamma\mbox{-}}\lim_{h\to 0} \frac1{h^4} I^{h^\alpha, h}$, and show that under suitable incompatibility conditions on the strain tensors $\epsilon_g$ or $\kappa_g$, the infimum of energies $I^{h^\alpha,h}$ scales like $h^4$ irrespective of the value of $\alpha$. This justifies our choice of the energy scaling and lends credibility to the limiting models as physically relevant in the corresponding scaling regimes. To get a sense of our results, it is useful to summarize our analysis in terms of the $\Gamma$-limit of $\frac1{h^4} I^{h^\alpha, h},$ which can be identified as follows: \begin{equation}\label{energylimits} \Gamma\mbox{-}\lim_{h\to 0} \frac1{h^4} I^{h^\alpha, h} = \left \{ \begin{array}{ll} {\mathcal I}_4 & \mbox{if} \quad \alpha=0 \\ \\ {\mathcal I}^\infty_4 & \mbox{if} \quad 0<\alpha <1 \\ \\ {\mathcal I}^1_4 & \mbox{if} \quad \alpha=1 \\ \\ {\mathcal I}^0_4 & \mbox{if} \quad \alpha>1. \end{array} \right. \end{equation} The above four theories collapse into one and the same theory when $v_0=0$. Otherwise we must deal with four distinct potential limits depending on the choice of parameters, in the following order: {\bf Case 1. $\alpha=0$.} This corresponds to $\gamma = 1$ where the 3d model is that of the prestrained non-linear elastic shell of arbitrarily large curvature (no shallowness involved). We will show that the $\Gamma$-limit in this case leads to a prestrained von K\'arm\'an model ${\mathcal I}_4$ for the 2d mid-surface $S_1$. This will be described in a more general framework in Section \ref{sec2}. {\bf Case 2. $0<\alpha<1$.} This corresponds to the flat limit $\gamma\to 0$ when the energy can be conceived as a limit of the von K\'arm\'an models $\mathcal I_4$ for shallow shells $S_\gamma$. 
In other words, this limiting model corresponds to the case when: $ \displaystyle \lim_{h\to 0} \frac{\gamma(h)}{h} = \infty, $ and it can also be identified as: $$ \displaystyle {\mathcal I}^\infty_4 = \Gamma\mbox{-}\lim_{\gamma\to 0} \Big (\Gamma\mbox{-}\lim_{h\to 0} \frac 1{h^4} I^{\gamma,h}\Big ), $$ by choosing the distinguished sequence of limits, first $h \rightarrow 0$ and then $\gamma \rightarrow 0$. In Section \ref{sec4} we will see that ${\mathcal I}^\infty_4$ is formulated for displacements of a plate, but it inherits certain geometric properties of the shallow shells $S_\gamma$, such as the first-order infinitesimal isometry constraint. {\bf Case 3. $\alpha=1$.} This corresponds to the case $\displaystyle \lim_{h\to 0} \gamma(h) /h = 1$. The limit model $\mathcal{I}_4^1$, derived in Section \ref{sec3}, is an unconstrained energy minimization, reflecting both the effect of shallowness and that of the prestrain. It corresponds to a simultaneous passing to the limit $(0,0)$ of the pair $(\gamma, h)$ in (\ref{IhW-gamma}). The Euler-Lagrange equations (\ref{new_Karman}) of $\mathcal{I}_4^1$ were suggested in \cite{Maha2} for the description of the deployment of petals during the blooming of a flower. {\bf Case 4. $\alpha>1$.} Finally, the $\Gamma$-limit for all values of $\alpha>1$, i.e. when $\displaystyle \lim_{h\to 0} \frac {\gamma(h)}{h} =0,$ coincides with the zero thickness limit of the degenerate case $\gamma=0$, which is the prestrained plate von K\'arm\'an model, discussed in \cite{lemapa}. 
This limiting energy can be obtained by taking the consecutive limits: $$ \displaystyle {\mathcal I}^0_4= \Gamma\mbox{-}\lim_{h\to 0} \Big (\Gamma\mbox{-}\lim_{\gamma\to 0} \frac 1{h^4} I^{\gamma,h}\Big ).$$ \section{The prestrained von K\'arm\'an energy for shells of arbitrary curvature: $\alpha =0$} \label{sec2} When the parameter $\alpha=0$, the 3d variational problem associated with (\ref{IhW-gamma}) is reduced to the 3d nonlinear elastic energy on the thin shell $S^h_1$, where $S_1$ is the graph of $v_0$. It is useful to discuss this model in a more general framework. Let $S$ be an arbitrary 2d surface embedded in $\mathbb{R}^3$, that is compact, connected, oriented, and of class $\mathcal{C}^{1,1}$. The boundary $\partial S$ of $S$ is assumed to be the union of finitely many (possibly none) Lipschitz continuous curves. We consider the family $\{S^h\}_{h>0}$ of thin shells of thickness $h$ around $S$: $$ S^h = \left\{z=x + t\vec n(x); ~ x\in S, ~ -\frac{h}{2}< t < \frac{h}{2}\right\}, \qquad 0<h<h_0 \ll 1 $$ where we use the following notation: $\vec n(x)$ for the unit normal, $T_x S$ for the tangent space, and $\Pi(x) = \nabla \vec n(x)$ for the shape operator on $S$, at a given $x\in S$. The projection onto $S$ along $\vec n$ is denoted by $\pi$, so that $\pi(z) =x$ for all $ z=x+t\vec n(x)\in S^h,$ and we assume that $h_0 \ll 1$ is small enough for $\pi$ to be well defined on each $S^h$. The instantaneous growth of $S^h$ is described by smooth tensors $\epsilon_g,\kappa_g:\overline S\longrightarrow \mathbb{R}^{3\times 3}$, via: \begin{equation}\label{ah} a^h=[a_{ij}^h] :\overline{S^h}\longrightarrow\mathbb{R}^{3\times 3}, \qquad a^h(x + t\vec n)=\mathrm{Id} + h^2\epsilon_g(x) + ht\kappa_g(x). \end{equation} The growth tensor $a^h$ is as in \cite{Maha, lemapa}, now in a general non-flat geometry setting. 
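For small $h$ the growth tensor $a^h$ in (\ref{ah}) is invertible, with $(a^h)^{-1}=\mathrm{Id} - h^2\epsilon_g - ht\kappa_g + \mathcal{O}(h^3)$; this is the expansion implicitly used when evaluating $W(\nabla u^h (a^h)^{-1})$ below. A purely illustrative SymPy check of this Neumann-series truncation (generic matrices standing in for $\epsilon_g(x)$, $\kappa_g(x)$, and $t$ treated as order $h$):

```python
import sympy as sp

# Check that Id - h^2*E - h*t*K inverts a^h = Id + h^2*E + h*t*K up to O(h^3),
# measuring orders of h after the substitution t -> t*h (since |t| < h/2 in S^h).
h, t = sp.symbols('h t')
E = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'e{i}{j}'))  # stands in for epsilon_g(x)
K = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'k{i}{j}'))  # stands in for kappa_g(x)

a = sp.eye(3) + h**2 * E + h * t * K          # growth tensor (ah)
approx_inv = sp.eye(3) - h**2 * E - h * t * K # claimed inverse to leading orders

residual = sp.expand(a * approx_inv - sp.eye(3))
is_small = all(sp.diff(entry.subs(t, t * h), h, k).subs(h, 0) == 0
               for entry in residual for k in range(3))
print(is_small)
```

The residual is $-(h^2\epsilon_g+ht\kappa_g)^2$, which is of order $h^3$ once $t$ is of order $h$, so the truncation is consistent with the $h^4$ energy scaling.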
Given the elastic energy density $W:\mathbb{R}^{3\times 3}\longrightarrow \mathbb{R}_{+}$ as in (\ref{en_as}), the thickness averaged elastic energy induced by the prestrain $a^h$ is given by: \begin{equation}\label{IhW} I^h(u^h) = \frac{1}{h}\int_{S^h} W(\nabla u^h(a^h)^{-1}) ~\mbox{d}z, \qquad \forall u^h\in W^{1,2}(S^h,\mathbb{R}^3). \end{equation} Taking the asymptotic limit (the $\Gamma$-limit as $h\to 0$, see Theorem \ref{thmainuno} and Theorem \ref{thmaindue} below) of the energies $I^h$ (note that $I^h=I^{1,h}$ in the notation of (\ref{IhW-gamma})) then leads to the variationally correct model for weakly prestrained shells. It corresponds to the following nonlinear energy functional $\mathcal{I}_4$ acting on the admissible limiting pairs $(V, B)$: \begin{equation}\label{vonK} \begin{split} \forall V\in\mathcal{V} \quad \forall B\in\mathcal{B} \qquad \mathcal{I}_4(V,B)= &\frac{1}{2} \int_S \mathcal{Q}_2\left(x,B - \frac{1}{2} (A^2)_{tan} - (\mathrm{sym}~\epsilon_g)_{tan}\right)\\ &+ \frac{1}{24} \int_S \mathcal{Q}_2\Big(x,(\nabla(A\vec n) - A\Pi)_{tan} - (\mathrm{sym}~\kappa_g)_{tan}\Big). \end{split} \end{equation} Here, {\underline{the space $\mathcal{V}$ consists of {\em the first-order infinitesimal isometries} on $S$}}, defined by: \begin{equation}\label{spaceV} \mathcal{V} = \left\{V\in W^{2,2}(S,\mathbb{R}^3); ~~ \tau\cdot \partial_\tau V(x) = 0 \quad \forall {\rm{a.e.}} \,\, x\in S \quad \forall\tau\in T_x S\right\}, \end{equation} that is those $W^{2,2}$ regular displacements $V$ for which the change of metric on $S$ due to the deformation $\mbox{id} + \epsilon V$ is of order $\epsilon^2$, as $\epsilon\to 0$. Furthermore, for a matrix field $A\in L^2(S, \mathbb{R}^{3\times 3})$, let $A_{tan}(x)$ denote the tangential minor of $A$ at $x\in S$, that is $[(A(x)\tau)\eta]_{\tau,\eta\in T_x S}$. 
The skew-symmetric gradient of $V$ as in (\ref{spaceV}) then uniquely determines a $W^{1,2}$ matrix field $A:S\longrightarrow so(3)$ so that: $\partial_\tau V(x) = A(x)\tau$ for all $\tau\in T_x S$. Hence, we equivalently write: \begin{equation*}\label{Adef-intro} \begin{split} \mathcal{V} = \big\{V\in W^{2,2}(S,\mathbb{R}^3); \quad &\exists A\in W^{1,2}(S,\mathbb{R}^{3\times 3}) \quad \forall {\rm{a.e.}} \,\, x\in S \quad \forall \tau\in T_x S\\ & \qquad\qquad \partial_\tau V(x) = A(x)\tau \mbox{ and } A(x)^T= -A(x) \big\}. \end{split} \end{equation*} For a plate, that is when $S\subset\mathbb{R}^2$, an equivalent analytic characterization for $V=(V^1, V^2, V^3) \in {\mathcal V}$ is given by: $(V^1, V^2) = (-\omega y, \omega x) + (b_1, b_2)$ with constants $\omega\in\mathbb{R}$ and $(b_1, b_2)\in\mathbb{R}^2$, while the out-of-plane displacement $V^3\in W^{2,2}(S,\mathbb{R})$ remains unconstrained. {\underline{The space $\mathcal{B}$ in (\ref{vonK}) consists of {\em finite strains}}}: \begin{equation}\label{fss} \mathcal{B} = \Big\{L^2 - \lim_{\epsilon\to 0}\mathrm{sym }\nabla w^\epsilon; ~~ w^\epsilon\in W^{1,2}(S,\mathbb{R}^3)\Big\}, \end{equation} which are all limits of symmetrized gradients of sequences of displacements on $S$. By $\mathrm{sym } \nabla w (x)$ we mean here a bilinear form on $T_xS$ given by: $(\mathrm{sym }\nabla w(x)\tau)\eta = \frac{1}{2}[(\partial_\tau w(x))\eta+(\partial_\eta w(x))\tau]$ for all $\tau, \eta \in T_xS$. It follows (via Korn's inequality) that for a flat plate $S\subset \mathbb{R}^2$, the space $\mathcal{B}$ consists precisely of symmetrized gradients of all the in-plane displacements: $\mathcal{B} = \{\mathrm{sym }\nabla w; ~~ w\in W^{1,2}(S,\mathbb{R}^2)\}$. When $S$ is strictly convex, rotationally symmetric, or developable without flat regions, it has been proven in \cite{lemopa7, schmidt} that $\mathcal{B}=L^2(S,\mathbb{R}^{2\times 2}_{sym})$, i.e. it contains all symmetric matrix fields on $S$ with square integrable entries. 
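The analytic characterization of $\mathcal{V}$ for a flat plate can be checked directly: the in-plane part of a first-order infinitesimal isometry is an infinitesimal rigid motion, whose symmetrized gradient vanishes identically. An illustrative SymPy sketch:

```python
import sympy as sp

# For a plate S in R^2, check that (V^1, V^2) = (-omega*y, omega*x) + (b1, b2)
# has vanishing symmetrized gradient, so that only the skew part survives and
# the out-of-plane component V^3 remains unconstrained.
x, y, omega, b1, b2 = sp.symbols('x y omega b1 b2')
V1 = -omega * y + b1
V2 = omega * x + b2

grad = sp.Matrix([[sp.diff(V1, x), sp.diff(V1, y)],
                  [sp.diff(V2, x), sp.diff(V2, y)]])
sym_grad = (grad + grad.T) / 2
print(sym_grad == sp.zeros(2, 2))
```

Here $\nabla(V^1,V^2)$ is the constant skew-symmetric matrix with entries $\pm\omega$, consistent with the field $A\in so(3)$ in the equivalent description of $\mathcal{V}$ above.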
Finally, in (\ref{vonK}), {\underline{the quadratic forms}} are defined by: \begin{equation}\label{quad} \mathcal{Q}_3(F) = D^2 W(\mbox{Id})(F,F), \quad \mathcal{Q}_2(x, F_{tan}) = \min\{\mathcal{Q}_3(\tilde F); ~~ \tilde F\in\mathbb{R}^{3\times 3}, ~~ (\tilde F - F)_{tan} = 0\}, \end{equation} where the form $\mathcal{Q}_3$ is defined for all $F\in\mathbb{R}^{3\times 3}$, while $\mathcal{Q}_2(x,\cdot)$ for a given $x\in S$ is defined on tangential minors $F_{tan}$ of such matrices. The form $\mathcal{Q}_3$ and all the forms $\mathcal{Q}_2(x,\cdot)$ are nonnegative definite and depend only on the symmetric parts of their arguments. We now have the following results, stating in particular that the functional $\mathcal{I}_4$ is the $\Gamma$-limit \cite{dalmaso} of the scaled energies $h^{-4}I^h$: \begin{theorem}\label{thmainuno} Let a sequence of deformations $u^h\in W^{1,2}(S^h,\mathbb{R}^3)$ satisfy $I^h(u^h) \leq Ch^4$. Then there exist proper rotations $\bar R^h\in SO(3)$ and translations $c^h\in\mathbb{R}^3$ such that for the renormalized deformations: $$ y^h(x+t\vec n(x)) = (\bar R^h)^T u^h(x+t\frac{h}{h_0}\vec n) - c^h:S^{h_0}\longrightarrow\mathbb{R}^3 $$ defined on the common thin shell $S^{h_0}$, the following holds. \begin{itemize} \item[(i)] $y^h$ converge in $W^{1,2}(S^{h_0},\mathbb{R}^3)$ to $\pi$. \item[(ii)] The scaled displacements: \begin{equation}\label{Vh} V^h(x)={h}^{-1}\fint_{-h_0/2}^{h_0/2}\big(y^h(x+t\vec n) - x\big)~\mathrm{d}t \end{equation} converge (up to a subsequence) in $W^{1,2}(S,\mathbb{R}^3)$ to some $V\in \mathcal{V}$. \item[(iii)] The scaled averaged strains: \begin{equation}\label{Bh} B^h(x) = {h}^{-1} \mathrm{sym}\nabla V^h(x) \end{equation} converge (up to a subsequence) weakly in $L^{2}(S,\mathbb{R}^{2\times 2})$ to a limit $B\in\mathcal{B}$. 
\item[(iv)] The lower bound holds: $$\liminf_{h\to 0} {h^{-4}} I^h(u^h) \geq \mathcal{I}_4(V,B).$$ \end{itemize} \end{theorem} \begin{theorem}\label{thmaindue} For every couple $V\in \mathcal{V}$ and $B\in\mathcal{B}$, there exists a sequence of deformations $u^h\in W^{1,2}(S^{h},\mathbb{R}^3)$ such that: \begin{itemize} \item[(i)] The rescaled sequence $y^h(x + t\vec n) = u^h(x + t\frac{h}{h_0}\vec n)$ converges in $W^{1,2}(S^{h_0},\mathbb{R}^3)$ to $\pi$. \item[(ii)] The displacements $\displaystyle V^h$ as in (\ref{Vh}) converge in $W^{1,2}(S,\mathbb{R}^3)$ to $V$. \item[(iii)] The strains $B^h$ as in (\ref{Bh}) converge in $L^{2}(S,\mathbb{R}^{2\times 2})$ to $B$. \item[(iv)] There holds: $$\lim_{h\to 0} h^{-4} I^h(u^h) = \mathcal{I}_4(V,B).$$ \end{itemize} \end{theorem} The proofs follow from a combination of arguments in \cite{lemapa} and \cite{lemopa7}, which we do not repeat here; instead, we comment on the functional (\ref{vonK}) and its relation with the prestrained von K\'arm\'an equations for plates. Here, in analogy with the theory for flat plates $S\subset \mathbb{R}^2$ with incompatible strains \cite{lemapa}, in (\ref{ah}) we have assumed that the target metric is $2$nd order in thickness $h$ for the in-plane stretching $(\mbox{sym } \epsilon_g)$, and $1$st order in $h$ for bending $(\mbox{sym } \kappa_g)$. Due to this particular choice of scalings, the limit energy $\mathcal{I}_4$ is composed of exactly two terms, corresponding to stretching and bending. The argument of the integrand in the first term, namely $B - \frac{1}{2} (A^2)_{tan} - (\mathrm{sym}~\epsilon_g)_{tan}$, represents the difference of the second order stretching induced by the deformation $v^h=\mbox{id} + h V + h^2 w^h$ from the target stretching $(\mbox{sym}~ \epsilon_g)$, with $V\in \mathcal V$ and $\mbox{sym} \nabla w^h \to B$. 
The argument of the integrand in the second term, $(\nabla(A\vec n) - A\Pi)_{tan} -(\mathrm{sym}~\kappa_g)_{tan}$, represents the difference of the first order bending induced by $v^h$ from the target bending $(\mbox{sym}~ \kappa_g)$. In general, the second order displacement $w^h$ can be very oscillatory. Due to the non-trivial geometry of the mid-surface $S$, the finite strain space $\mathcal B$ is usually large and hence a bound on the $L^2$ norm of the symmetric gradients $\mbox{sym} \nabla w^h$ implies only a very weak bound on $w^h$. The limiting tensor $B$ can hence be written only as the symmetric gradient of a very weakly regular distribution (not a classical higher order displacement). \begin{remark} When the mid-surface $S$ is elliptic, then for any first-order isometry $V\in \mathcal{V}$ there exists $B\in \mathcal{B} = L^2(S,\mathbb{R}_{sym}^{2\times 2})$ such that $B- \frac{1}{2} (A^2)_{tan} - (\mbox{sym } \epsilon_g)_{tan} = 0$ (see \cite{lemopa_convex}). This implies that for any $V$ there exists a higher order modification $w^h$ for which in the limit, the second order target stretching is realized. Thus, the energy $\mathcal{I}_4$ reduces to: $$\mathcal{I}_4(V) = \frac{1}{24} \int_S \mathcal{Q}_2\Big(x,(\nabla(A\vec n) - A\Pi)_{tan} - (\mathrm{sym}~\kappa_g)_{tan}\Big)~\mbox{d}x, $$ i.e. the bending term, which is to be minimized over the space $\mathcal{V}$. Note that this variational problem is convex (minimizing a convex integral over a linear space $\mathcal{V}$), and hence it admits only one solution (up to rigid motions). Following the analysis in \cite{lemopa_convex}, we see that for elliptic surfaces, all limiting theories for $h^{-\beta}I^h$ under the energy scaling $\beta>2$ coincide with the linear theory $\mathcal{I}_4$ as above, while the sublinear theory, to be used in the description of buckling, is the Kirchhoff-like (nonlinear bending) theory corresponding to $\beta=2$ and derived in \cite{lepa_noneuclid}. 
\end{remark} \section{The prestrained shallow shell with a first-order isometry constraint: $0 <\alpha<1$}\label{sec4} When the parameter $0<\alpha<1,$ the highest order terms (of order $h^{2\alpha}$) in the prestrain metric $p^h$ on $(S_\gamma)^h$ pulled back on the flat reference configuration $\Omega^h$, turn out to be \lq \lq compatible'', i.e. entirely generated by the reference displacement $h^\alpha v_0$. In other words, the shallow shell will easily compensate for these terms by rigidly keeping its structure at the $h^\alpha$ order, and will make adjustments to the prestrain induced by $\epsilon_g$ and $\kappa_g$ only at higher orders. In the limit as $h\to 0$ we therefore expect that the effective energy functional on $\Omega$ will depend only on the out-of-plane and the in-plane displacements of respective orders $h$ and $h^2$. Yet, as we shall see below, the residual curvature of mid-surfaces will appear in a twofold manner: as a linearized first-order isometry constraint (\ref{constr2}) on the out-of-plane displacement, and also as a defining constraint on the space of admissible in-plane displacements. The mid-plate $\Omega$ will inherit the space of first-order infinitesimal isometries (\ref{spaceV}) and the finite strain space (\ref{fss}), in the asymptotic limit of vanishing curvature shells. {\underline{The space of {\em finite strains} ${\mathcal B}_{v_0}\subset L^2(\Omega,\mathbb{R}^{2\times 2}_{sym})$}} is defined as: $$ \mathcal{B}_{v_0}= \Big\{L^2 - \lim_{\epsilon\to 0}\big(\mathrm{sym }\nabla w^\epsilon + \mathrm{sym}(\nabla v^\epsilon \otimes \nabla v_0)\big); ~~ w^\epsilon\in W^{1,2}(\Omega,\mathbb{R}^2), ~v^\epsilon \in W^{1,2}(\Omega,\mathbb{R}) \Big\}.$$ We now identify ${\mathcal B}_{v_0}$ with each of the finite strain spaces of the shallow surfaces $S_\gamma$: \begin{lemma}\label{fssh} Let the surfaces $S_\gamma$ be as in (\ref{shallow-alpha-gamma}). 
Then for all $\gamma\neq 0$, the finite strain spaces: \begin{equation*} \mathcal{B}^\gamma = \Big\{L^2 - \lim_{\epsilon\to 0}\mathrm{sym }\nabla w^\epsilon; ~~ w^\epsilon\in W^{1,2}(S_\gamma,\mathbb{R}^3)\Big\}, \end{equation*} are each isomorphic to ${\mathcal B}_{v_0}$ via the linear isomorphism: $$ {\mathcal T^\gamma}: L^2(S_\gamma, {\mathcal L}^2_{sym} (TS_\gamma,\mathbb R)) \to L^2(\Omega, \mathbb R^{2\times 2}_{sym}). $$ Here, $L^2(S_\gamma, {\mathcal L}^2_{sym} (TS_\gamma,\mathbb R))$ is the space of all $L^2$-sections of the bundle of symmetric bilinear forms on $S_\gamma$, and ${\mathcal T^\gamma}$ is naturally defined by: $$ [{\mathcal T}^\gamma(\sigma)(x)]_{ij}= \sigma(\phi_\gamma(x)) (\partial_i \phi_\gamma(x), \partial_j \phi_\gamma(x)) \quad \forall \,{\rm{a.e.}} \, x\in \Omega \quad \forall \sigma \in L^2(S_\gamma, {\mathcal L}^2_{sym} (TS_\gamma,\mathbb R)). $$ \end{lemma} \begin{proof} Let $w\in W^{1,2}(S_\gamma, \mathbb{R}^3)$ and write $\tilde w =(\tilde w_1, \tilde w_2, \tilde w_3) = w\circ\phi_\gamma\in W^{1,2}(\Omega,\mathbb{R}^3)$. Then, for $i,j=1,2$ we have: $$(\mbox{sym}\nabla w)(\partial_i \phi_\gamma, \partial_j\phi_\gamma) = \frac{1}{2}\left(\partial_i\tilde w\cdot \partial_j\phi_\gamma + \partial_j\tilde w\cdot \partial_i\phi_\gamma\right) = \Big[\mbox{sym}\nabla (\tilde w_1, \tilde w_2) + \gamma~\mbox{sym}(\nabla \tilde w_3\otimes \nabla v_0)\Big]_{ij}.$$ Take now a sequence $w^\epsilon\in W^{1,2}(S_\gamma, \mathbb{R}^3)$ such that $\lim_{\epsilon\to 0}\mbox{sym}\nabla w^\epsilon = B_\gamma\in \mathcal{B}^\gamma$. Then: $$\mathcal{T}^\gamma(B_\gamma) = \lim_{\epsilon\to 0}\mathcal{T}^\gamma(\mbox{sym}\nabla w^\epsilon) = \lim_{\epsilon\to 0}\Big(\mbox{sym}\nabla (\tilde w_1^\epsilon, \tilde w_2^\epsilon) + \mbox{sym}(\nabla(\gamma\tilde w_3^\epsilon)\otimes \nabla v_0)\Big) \in\mathcal{B}_{v_0},$$ which proves the claim. 
\end{proof} The following is a consequence of Lemma \ref{fssh}, \cite[Lemma 5.6]{lemopa7} and \cite[Lemma 3.3]{schmidt}: \begin{corollary}\label{cor-ellip} Assume that: \begin{itemize} \item[{(i)}] either: $v_0\in {\mathcal C}^{2,1}(\Omega)\cap \mathcal{C}^{1,1}(\bar\Omega)$ and $\det \nabla^2 v_0 \geq c>0$ in $\Omega$, \item[{(ii)}] or: $v_0\in\mathcal C^{2}(\bar \Omega)$ with $\det\nabla^2 v_0=0$ in $\Omega$, and $\nabla^2v_0$ does not vanish identically on any open region in $\Omega$. \end{itemize} Then: \begin{equation}\label{B=all} {\mathcal B}_{v_0} = L^2(\Omega, \mathbb R^{2\times 2}_{sym}). \end{equation} \end{corollary} Indeed, in \cite{lemopa_convex} we proved that for any strictly elliptic surface $S$, its finite strain space $\mathcal{B}$ equals $L^2(\Omega, \mathbb R^{2\times 2}_{sym})$. Since every $S_\gamma$ is strictly elliptic under the assumption (i), the result follows by the equivalence of spaces $\mathcal{B}^\gamma$ and $\mathcal{B}_{v_0}$ in Lemma \ref{fssh}. The same observation can be derived directly, as follows. Given $B:\Omega\rightarrow \mathbb{R}_{sym}^{2\times 2}$ smooth enough, we first solve for $v$ in: \begin{equation}\label{equation-mystery} \left\{\begin{array}{ll} \mbox{cof}\,\, \nabla^2 v_0 : \nabla^2 v= -\textrm{curl}^T \textrm{curl} \,B & \mbox{ in } \Omega,\\ v=0 & \mbox{ on } \partial\Omega. \end{array} \right. \end{equation} Then we have: $$\textrm{curl}^T \textrm{curl}\, B = -\mbox{cof}\nabla^2 v : \nabla^2v_0 = \mbox{curl}^T\mbox{curl} (\nabla v\otimes\nabla v_0) = \textrm{curl}^T \textrm{curl} \Big ( \mbox{sym} (\nabla v \otimes \nabla v_0)\Big )$$ (see also Remark \ref{rem4.7}), and therefore: $$ B = \mbox{sym} \nabla (v_1, v_2) + \mbox{sym} (\nabla v \otimes \nabla v_0), $$ for some in-plane displacement $(v_1, v_2):\Omega\rightarrow \mathbb{R}^2$. The density of smooth fields $B$ in the space $L^2(\Omega, \mathbb R^{2\times 2}_{sym})$ now yields the result. 
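The identity chain above rests on the pointwise relation $\mathrm{curl}^T\mathrm{curl}\,\big(\mathrm{sym}(\nabla v\otimes\nabla v_0)\big) = -\,\mathrm{cof}\,\nabla^2 v_0 : \nabla^2 v$, which underlies (\ref{equation-mystery}). It can be verified symbolically for generic smooth $v, v_0$; an illustrative SymPy sketch:

```python
import sympy as sp

# Check: curl^T curl( sym(grad v ⊗ grad v0) ) = - cof(Hess v0) : Hess v
# for generic smooth functions v, v0 of (x, y).
x, y = sp.symbols('x y')
v = sp.Function('v')(x, y)
v0 = sp.Function('v0')(x, y)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
hess = lambda f: sp.Matrix([[sp.diff(f, x, 2), sp.diff(f, x, y)],
                            [sp.diff(f, x, y), sp.diff(f, y, 2)]])
cof = lambda M: sp.Matrix([[M[1, 1], -M[1, 0]], [-M[0, 1], M[0, 0]]])

B = (grad(v) * grad(v0).T + grad(v0) * grad(v).T) / 2   # sym(grad v ⊗ grad v0)
# for a symmetric field B: curl^T curl B = d22 B11 + d11 B22 - 2 d12 B12
lhs = sp.diff(B[0, 0], y, 2) + sp.diff(B[1, 1], x, 2) - 2 * sp.diff(B[0, 1], x, y)
rhs = -(cof(hess(v0)).T * hess(v)).trace()              # -cof(Hess v0) : Hess v
print(sp.simplify(lhs - rhs) == 0)
```

Note that for $2\times 2$ symmetric matrices the pairing $\mathrm{cof}\,A:B$ is symmetric in $A$ and $B$, which is why the two forms of the middle term in the displayed chain agree.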
\begin{remark} We expect that the property (\ref{B=all}) is satisfied for a generic $v_0$, whenever $\nabla^2 v_0$ does not vanish identically on any open region of $\Omega$. The argument requires studying very weak solutions of the mixed-type equation (\ref{equation-mystery}). When this equation is degenerate ($v_0\equiv 0$), ${\mathcal B}_{v_0}$ coincides with the space of all matrix fields in the kernel of the operator $\mathrm{curl}^T \mathrm{curl}$ and hence it is only a proper subset of $L^2(\Omega, \mathbb R^{2\times 2}_{sym})$, consisting of symmetric gradients. \end{remark} We now present the main $\Gamma$-convergence result for the shallow shell regime $0<\alpha<1$. The proofs, which consist of tedious modifications of the arguments in \cite{lemopa7, lemapa}, are outlined in the Appendix. \begin{theorem}\label{shallow<1-liminf} Let $0<\alpha<1$. Assume $u^h\in W^{1,2}((S_{h^\alpha})^h,\mathbb{R}^3)$ satisfies $I^{h^\alpha, h} (u^h)\leq Ch^{4}$, where $I^{\gamma,h}$ is given as in (\ref{IhW-gamma}). Then there exist $\bar R^h\in SO(3)$ and $c^h\in\mathbb{R}^3$ such that for the normalized deformations: \begin{equation*}\label{resc-gamma} y^h(x,t) = (\bar R^h)^T (u^h\circ \tilde \phi_{h^\alpha})(x, ht) - c^h : \Omega^1\longrightarrow\mathbb{R}^3 \end{equation*} with $\tilde\phi_\gamma$ and $\gamma=h^\alpha$ as in (\ref{kl-gamma}), we have: \begin{itemize} \item[(i)] $y^h(x,t)$ converge in $W^{1,2}(\Omega^1,\mathbb{R}^3)$ to $x$. \item[(ii)] The scaled displacements $V^h(x) = h^{-1} \fint_{-1/2}^{1/2}\big(y^h(x,t) - x - h^\alpha v_0 (x)e_3\big) ~\mathrm{d}t$ converge (up to a subsequence) in $W^{1,2}(\Omega,\mathbb{R}^3)$ to $(0,0,v)^T$ where $v\in W^{2,2}(\Omega, \mathbb{R})$ and: \begin{equation}\label{constr2} \mathrm{cof}\,\,\nabla^2 v_0 : \nabla^2 v=0 \quad \mbox{ in } \Omega. 
\end{equation} \item[(iii)] The scaled strains: $$ \displaystyle B^h= \frac 1h \Big (\mathrm{sym} \nabla (V^h_1, V^h_2) + h^\alpha \mathrm{sym} (\nabla V^h_3 \otimes \nabla v_0) \Big ) $$ converge (up to a subsequence) weakly in $L^2$ to some $B \in {\mathcal B}_{v_0}$. \item[(iv)] Moreover: $\liminf_{h\to 0} h^{-4} I^{h^\alpha, h}(u^h) \geq \mathcal{I}_4^\infty (v, B)$, where: \begin{equation}\label{energy-shallow<1} \mathcal{I}_4^\infty(v, B)= \int_\Omega \mathcal{Q}_2\left(B + \frac{1}{2} \nabla v \otimes \nabla v - (\mathrm{sym }~\epsilon_g)_{tan}\right) + \frac{1}{24} \int_\Omega \mathcal{Q}_2\Big(\nabla^2 v + (\mathrm{sym }~\kappa_g)_{tan} \Big), \end{equation} with $\mathcal{Q}_2$ defined in (\ref{quad}). \end{itemize} \end{theorem} \begin{theorem}\label{shallow<1-limsup} Let $0<\alpha<1$. For every $v\in W^{2,2}(\Omega,\mathbb{R})$ satisfying (\ref{constr2}) and every $B\in {\mathcal B}_{v_0}$, there exists a sequence of deformations $u^h\in W^{1,2}((S_{h^\alpha})^h,\mathbb{R}^3)$ such that: \begin{itemize} \item[(i)] The sequence $y^h(x,t) = u^h(x+h^\alpha v_0(x)e_3 + ht\vec n^{\gamma}(x))$ converges in $W^{1,2}(\Omega^1)$ to $x$. \item[(ii)] The scaled displacements $V^h$ as in (ii) of Theorem \ref{shallow<1-liminf} converge in $W^{1,2}$ to $(0,0,v)$. \item[(iii)] The scaled strains $B^h$ as in (iii) of Theorem \ref{shallow<1-liminf} converge weakly in $L^2$ to $B$. \item[(iv)] $\lim_{h\to 0} h^{-4}I^{h^\alpha, h}(u^h) = \mathcal{I}_4^\infty(v,B).$ \end{itemize} \end{theorem} In the special cases of Corollary \ref{cor-ellip}, we have: \begin{theorem}\label{shallow<1-limsup2} Assume additionally that $v_0$ is such that (\ref{B=all}) holds. 
Then, for every $v\in W^{2,2}(\Omega,\mathbb{R})$ satisfying (\ref{constr2}), there exists a sequence $u^h\in W^{1,2}((S_{h^\alpha})^h,\mathbb{R}^3)$ such that (i) and (ii) of Theorem \ref{shallow<1-limsup} hold, and moreover: $$\displaystyle \lim_{h\to 0} h^{-4}I^{h^\alpha, h}(u^h) = \frac{1}{24} \int_\Omega \mathcal{Q}_2\Big(\nabla^2 v + (\mathrm{sym}~\kappa_g)_{tan}\Big ).$$ \end{theorem} \begin{remark}\label{rem4.7} Comparing functionals (\ref{energy-shallow<1}) with (\ref{vonK}), note that the space $\mathcal{V}(S_\gamma)$ of first-order infinitesimal isometries on $S_\gamma$ is made of displacements $V:S_\gamma\rightarrow \mathbb{R}^3$ of the form: \begin{equation}\label{VSh} \begin{split} &V(\phi_\gamma(x)) = (\gamma v_1(x), \gamma v_2(x), v_3(x)) \qquad \forall x\in\Omega,\\ &\mbox{such that }~ (v_1, v_2, v_3)\in W^{2,2}(\Omega,\mathbb{R}^3) ~\mbox{ and }~ \mbox{sym}\nabla(v_1, v_2) + \mbox{sym}(\nabla v_3\otimes \nabla v_0)=0. \end{split} \end{equation} Indeed, similarly as in the proof of Lemma \ref{fssh}, the condition $\mbox{sym}\nabla V=0$ on $S_\gamma$ becomes: $$ 0 = \frac{1}{2} \big(\partial_i(V\circ \phi_\gamma) \cdot\partial_j\phi_\gamma + \partial_j(V\circ \phi_\gamma) \cdot\partial_i\phi_\gamma\big) = ~ \mbox{sym}[\nabla (v_1, v_2) + \nabla v_3\otimes \nabla v_0]_{ij}. $$ We also see that $v_3$ can be completed by $(v_1, v_2)$ to $V\in\mathcal{V}(S_\gamma)$ as in (\ref{VSh}) only if: \begin{equation}\label{VShv3} \mbox{cof}\nabla^2v_0 : \nabla^2v_3 = 0, \end{equation} the latter being also a sufficient condition when $\Omega$ is simply connected.
This follows from: \begin{equation*} \begin{split} \mbox{curl}^T&\mbox{curl}\Big(\mbox{sym}(\nabla v_3\otimes\nabla v_0)\Big) = \mbox{curl}^T\mbox{curl}\Big(\nabla v_3\otimes\nabla v_0\Big) \\ & = \partial_{22}(\partial_1v_3\cdot \partial_1v_0) + \partial_{11}(\partial_2v_3\cdot\partial_2v_0) - \partial_{12}(\partial_1v_3\cdot\partial_2v_0+\partial_2v_3\cdot\partial_1v_0)\\ & = -\big(\partial_{11}v_3\cdot\partial_{22}v_0 + \partial_{22}v_3\cdot\partial_{11}v_0 - 2\partial_{12}v_3\cdot\partial_{12}v_0\big) = - \mbox{cof} \nabla^2v_0:\nabla^2v_3. \end{split} \end{equation*} Hence, the admissible out-of-plane displacements $v_3$ relevant in (\ref{vonK}) must at least obey the constraint (\ref{VShv3}), which appears in the 2-scale limiting theory (\ref{energy-shallow<1}) as the constraint (\ref{constr2}). This is in contrast with the unconstrained 2-scale limiting theory (\ref{vonKnew1}) developed in the next section. \end{remark} \begin{remark} To put the last two results in another context, we draw the reader's attention to the forthcoming paper \cite{LMP-new}, where we analyze the $\Gamma$-limit of the shallow shell energies $\frac {1}{h^{2\alpha+2}} I^{h^\alpha,h}$ on shells with curvature of order $h^\alpha$. This energy scaling is produced by forces of appropriate magnitude or by prestrains of a different order than those considered in the present paper. Our main result in \cite{LMP-new} concerns the case $\alpha<1$, where we establish that, in the special case $\det \nabla^2 v_0 \equiv c_0>0$, the $\Gamma$-limit is a linearized Kirchhoff model with a Monge-Amp\`ere curvature constraint: \begin{equation}\label{constr} \det \nabla^2 v= \det \nabla^2 v_0 \end{equation} on the admissible out-of-plane displacements $v\in W^{2,2}(\Omega)$. The constraint (\ref{constr2}) can be interpreted as a linearization of (\ref{constr}), thereby highlighting the relationship between the two models for elliptic shallow shells.
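The linearization claim can be verified directly: the first variation of the Monge-Amp\`ere operator $v\mapsto\det\nabla^2 v$ at $v_0$ in the direction $v$ is precisely $\mathrm{cof}\,\nabla^2 v_0:\nabla^2 v$, the left-hand side of (\ref{constr2}). A minimal symbolic sketch of this computation (assuming the SymPy library; illustrative only):

```python
import sympy as sp

x1, x2, eps = sp.symbols('x1 x2 eps')
v0 = sp.Function('v0')(x1, x2)
v = sp.Function('v')(x1, x2)
d = sp.diff

def det_hess(f):
    # Monge-Ampere operator: determinant of the Hessian of f
    return d(f, x1, x1)*d(f, x2, x2) - d(f, x1, x2)**2

# first variation of det(Hess) at v0 in the direction v
linearization = d(det_hess(v0 + eps*v), eps).subs(eps, 0)

# cof(Hess v0) : Hess v, the left-hand side of the linearized constraint
cof_contraction = (d(v0, x1, x1)*d(v, x2, x2) + d(v0, x2, x2)*d(v, x1, x1)
                   - 2*d(v0, x1, x2)*d(v, x1, x2))

assert sp.expand(linearization - cof_contraction) == 0
```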
\end{remark} \section{The generalized Donnell-Mushtari-Vlasov model for a prestrained shallow shell: $\alpha=1$} \label{sec3} When the parameter $\alpha =1$, the curvature of the mid-surface co-varies with the thickness, so that $\gamma=h$. For small $h$, the growth tensors on $(S_h)^h$ are then defined by (\ref{qh-gamma}) and the corresponding metric $p^h=(q^h)^Tq^h$ is given by: $$p^h(\phi_h(x) + x_3\vec n^h(x)) = \mbox{Id} + 2h^2\mbox{sym }\epsilon_g(x) + 2hx_3\mbox{sym }\kappa_g(x) +\mathcal{O}(h^3).$$ Let $v^h=u^h\circ\tilde\phi_h\in W^{1,2}(\Omega^h,\mathbb{R}^3)$, defined via the diffeomorphisms $\tilde\phi_h$ in (\ref{kl-gamma}). By this simple change of variables, we see that: \begin{equation*} \begin{split} I^{h,h}(u^h) & = \frac{1}{h}\int_{(S_h)^h} W(\nabla u^h(q^h)^{-1}) = \frac{1}{h}\int_{\Omega^h} W\Big((\nabla v^h)(\nabla\tilde\phi_h)^{-1} (q^h\circ \tilde\phi_h)^{-1}\Big) \cdot \det\nabla\tilde\phi_h~\mbox{d}(x, x_3) \\ & = \frac{1}{h}\int_{\Omega^h} W\Big((\nabla v^h) (b^h)^{-1}\Big) \cdot \det\nabla\tilde\phi_h~\mbox{d}(x, x_3), \end{split} \end{equation*} where: $$b^h = (q^h\circ \tilde \phi_h)\nabla \tilde\phi_h.$$ In order to understand the structure of $b^h$ we need the following result: \begin{lemma}\label{lemik} The pull-back of the metric $p^h$ through $\tilde\phi_h$ satisfies: \begin{equation*} \begin{split} \forall (x, x_3)\in \Omega^h\qquad g^h(x, x_3) & = (\nabla\tilde\phi_h)^T (p^h\circ \tilde\phi_h) (\nabla \tilde\phi_h) \\ & = \mathrm{Id} + h^2\Big(2\mathrm{sym}~\epsilon_g(x) + (\nabla v_0(x)\otimes\nabla v_0(x))^\ast \Big) \\ & \qquad + 2hx_3\Big(\mathrm{sym}~\kappa_g(x) - (\nabla^2 v_0(x))^\ast\Big) + \mathcal{O}(h^3), \end{split} \end{equation*} where $F^\ast\in\mathbb{R}^{3\times 3}$ denotes the matrix whose only non-zero entries are in its $2\times 2$ principal minor given by $F\in\mathbb{R}^{2\times 2}$.
\end{lemma} \begin{proof} By a direct calculation, we obtain: \begin{equation*} \begin{split} \partial_1\tilde\phi_h & = \big(1-x_3h\partial_{11}^2v_0, -x_3h\partial_{12}^2v_0, h \partial_{1}v_0\big) + \mathcal{O}(h^3),\\ \partial_2\tilde\phi_h & = \big(-x_3h\partial_{12}^2v_0, 1-x_3h\partial_{22}^2v_0, h \partial_{2}v_0\big) + \mathcal{O}(h^3),\\ \partial_3\tilde\phi_h & = \vec n^h = \big(-h\partial_{1}v_0, -h\partial_{2}v_0, 1-\frac{1}{2}h^2 |\nabla v_0|^2\big) + \mathcal{O}(h^3). \end{split} \end{equation*} Hence: \begin{equation*} \begin{split} &(\nabla\tilde\phi_h)^T (\nabla\tilde\phi_h) = \mbox{Id}_3 -2x_3h (\nabla^2v_0)^\ast + h^2 (\nabla v_0\otimes \nabla v_0)^\ast + \mathcal{O}(h^3)\\ &(\nabla\tilde\phi_h)^T \big(2h^2\mbox{sym }\epsilon_g + 2hx_3\mbox{sym }\kappa_g\big)(\nabla\tilde\phi_h) = 2h^2\mbox{sym }\epsilon_g + 2hx_3\mbox{sym }\kappa_g+ \mathcal{O}(h^3), \end{split} \end{equation*} in view of $\nabla\tilde\phi_h = \mbox{Id}_3 + \mathcal{O}(h)$, and the result follows. \end{proof} Note that: $ (b^h)^T b^h = g^h $ and therefore by the polar decomposition of matrices: $$ b^h = R(x,x_3) a^h \qquad \mbox{on } \Omega^h $$ for some $R(x,x_3)\in SO(3)$ and the symmetric growth tensor $a^h$ given by: \begin{equation}\label{ahnew} a^h = \sqrt{g^h} = \mathrm{Id} + h^2\Big(\mbox{sym }\epsilon_g + \frac{1}{2} (\nabla v_0\otimes\nabla v_0)^\ast \Big) + hx_3\Big(\mbox{sym }\kappa_g - (\nabla^2 v_0)^\ast\Big) + \mathcal{O}(h^3). \end{equation} For isotropic $W$ it directly follows that: \begin{equation}\label{changev} \begin{split} I^{h,h}(u^h) & = \frac{1}{h}\int_{\Omega^h} W\Big((\nabla v^h) (a^h)^{-1}R(x,x_3)^{-1}\Big) \cdot \det\nabla\tilde\phi_h~\mbox{d}(x, x_3) \\ & = \frac{1}{h}\int_{\Omega^h} W\Big((\nabla v^h) (a^h)^{-1}\Big)\cdot (1+ \mathcal{O}(h))~\mbox{d}(x, x_3).
\end{split} \end{equation} Heuristically, modulo the change of variable $\tilde \phi_h$, the problem then reduces to the study of deformations of the flat thin film $\Omega^h$ with the prestrain $a^h$. Indeed, by exactly the same analysis as in Theorems 1.2 and 1.3 of \cite{lemapa}, we obtain, in the general (not necessarily isotropic) case, the following result: \begin{theorem}\label{thnew} Assume that $u^h\in W^{1,2}((S_h)^h,\mathbb{R}^3)$ satisfies $I^{h,h}(u^h)\leq Ch^4$. Then there exist proper rotations $\bar R^h\in SO(3)$ and translations $c^h\in\mathbb{R}^3$ such that for the normalized deformations: \begin{equation*}\label{resc} y^h(x,t) = (\bar R^h)^T (u^h\circ \tilde \phi_h)(x, ht) - c^h : \Omega^1\longrightarrow\mathbb{R}^3 \end{equation*} defined by means of (\ref{kl-gamma}) on the common domain $\Omega^1=\Omega\times (-1/2, 1/2)$, the following holds: \begin{itemize} \item[(i)] $y^h(x,t)$ converge in $W^{1,2}(\Omega^1,\mathbb{R}^3)$ to $x$. \item[(ii)] The scaled displacements $V^h(x) = h^{-1} \fint_{-1/2}^{1/2}y^h(x,t) - x ~\mathrm{d}t$ converge (up to a subsequence) in $W^{1,2}(\Omega,\mathbb{R}^3)$ to a vector field of the form $(0,0,v)^T$, where $v\in W^{2,2}(\Omega, \mathbb{R})$. \item[(iii)] The scaled in-plane displacements $h^{-1}V_{tan}^h$ converge (up to a subsequence) weakly in $W^{1,2}$ to $w\in W^{1,2}(\Omega,\mathbb{R}^2)$. \item[(iv)] Moreover: $\liminf_{h\to 0} h^{-4} I^{h,h}(u^h) \geq \mathcal{I}_4^1 (w,v)$ where: \begin{equation}\label{vonKnew1} \begin{split} \mathcal{I}_4^1(w,v)= \frac{1}{2} \int_\Omega \mathcal{Q}_2&\left(\mathrm{sym }\nabla w +\frac{1}{2}\nabla v\otimes \nabla v - \frac{1}{2}\nabla v_0\otimes\nabla v_0 - (\mathrm{sym}~\epsilon_g)_{tan}\right)\\ &\qquad\qquad + \frac{1}{24} \int_\Omega \mathcal{Q}_2\Big(\nabla^2 v - \nabla^2v_0 + (\mathrm{sym}~\kappa_g)_{tan}\Big).
\end{split} \end{equation} \end{itemize} \end{theorem} In the same manner, applying the proof of Theorem 1.4 of \cite{lemapa} to (\ref{changev}) yields: \begin{theorem}\label{recsec_sth} For every $v\in W^{2,2}(\Omega,\mathbb{R})$ and $w\in W^{1,2}(\Omega,\mathbb{R}^2)$, there exists a sequence of deformations $u^h\in W^{1,2}((S_h)^h,\mathbb{R}^3)$ such that: \begin{itemize} \item[(i)] The sequence $y^h(x,t) = u^h(x+h v_0(x)e_3 + ht\vec n^h(x))$ converges in $W^{1,2}(\Omega^1,\mathbb{R}^3)$ to $x$. \item[(ii)] The displacements $V^h$ as in (ii) of Theorem \ref{thnew} converge in $W^{1,2}$ to $(0,0,v)$. \item[(iii)] The in-plane displacements $h^{-1}V^h_{tan}$ converge in $W^{1,2}$ to $w$. \item[(iv)] $\lim_{h\to 0} h^{-4}I^{h,h}(u^h) = \mathcal{I}_4^1(w,v).$ \end{itemize} \end{theorem} \section{The prestrained plate model and the Euler-Lagrange equations: $\alpha >1$} When the parameter $\alpha > 1$, we calculate the pull-back of the induced metric $p^h = (q^h)^Tq^h$, to the flat plate $\Omega^h$, via the change of variable $\tilde \phi_\gamma$ as in (\ref{kl-gamma}). Just as in Lemma \ref{lemik}, we obtain: \begin{equation}\label{metric-alpha} \begin{split} g^h = (\tilde \phi_{h^\alpha})^\ast p^h= \mathrm{Id}_3 & + h^{2\alpha} (\nabla v_0\otimes\nabla v_0)^\ast - 2h^\alpha x_3 (\nabla^2 v_0)^\ast \\ & + 2h^2\mbox{sym }\epsilon_g + 2hx_3 \mbox{sym }\kappa_g + \mathcal{O}(h^3).
\end{split} \end{equation} It is therefore clear that the prestrain terms $(\epsilon_g, \kappa_g)$ take over the effect of shallowness and hence the limiting theory in the scaling regime $h^4$ is that derived in \cite{lemapa}, coinciding with the results of Theorem \ref{thnew} and Theorem \ref{shallow<1-liminf} for the case $v_0=0$ and with the results of Theorem \ref{thmainuno} for $S\subset \mathbb R^2$: \begin{equation}\label{vonKnew10} \begin{split} \forall v\in W^{2,2}(\Omega, \mathbb{R}) \quad \forall &w\in W^{1,2}(\Omega, \mathbb{R}^2) \\ \mathcal{I}_4^0(w,v)= \frac{1}{2} \int_\Omega \mathcal{Q}_2&\left(\mathrm{sym }\nabla w +\frac{1}{2}\nabla v\otimes \nabla v - (\mathrm{sym}~\epsilon_g)_{tan}\right)\\ &\qquad\qquad \qquad\qquad \qquad + \frac{1}{24} \int_\Omega \mathcal{Q}_2\Big(\nabla^2 v + (\mathrm{sym}~\kappa_g)_{tan}\Big). \end{split} \end{equation} Indeed, consider the prestrained von K\'arm\'an shell model $\mathcal{I}_4$ discussed in Section \ref{sec2} for a degenerate situation $S\subset \mathbb R^2$. The term $B- \frac{1}{2} (A^2)_{tan}$ reduces to $\frac{1}{2} \left(\nabla w+ (\nabla w)^T + \nabla v \otimes \nabla v\right)$, where $w$ and $v=V^3$ are respectively the in-plane and the out-of-plane displacements of $S$. The term $(\nabla(A\vec n) - A\Pi)_{tan} $ also reduces to $-\nabla^2 v$. Therefore, when $S\subset \mathbb R^2$, $\mathcal{I}_4$ coincides with the model $\mathcal I^0_4$ and with the models $\mathcal {I}^\infty_4$ and $\mathcal{I}^1_4$ in the degenerate case $v_0=0$. \begin{remark} We point out a qualitative difference between the out-of-plane displacements $v$ in the arguments of $\mathcal{I}^0_4$ and $\mathcal{I}^1_4$ and those appearing as the arguments of $\mathcal{I}^\infty_4$.
The former are the net lowest-order out-of-plane displacements of the limiting deformations, which are of order $h$, as suggested by Theorem \ref{thnew} (ii), but, according to Theorem \ref{shallow<1-liminf} (ii), when $\alpha<1$, the latter are the second highest-order term in the expansion of the deformation after $h^\alpha v_0$. Hence, one should replace $v$ in (\ref{vonKnew1}) or (\ref{new_Karman}) by $v+ h^{\alpha-1} v_0$, through a change of variables, in order to quantitatively compare this model with the variational model $\mathcal{I}^\infty_4$ in (\ref{energy-shallow<1}). \end{remark} As shown in \cite{lemapa}, under the assumption of $W$ being isotropic, the Euler-Lagrange equations of $\mathcal{I}_4$ under this degeneracy condition (or equivalently, the Euler-Lagrange equations of $\mathcal{I}^0_4$) can then be written in terms of the displacement $v$ and the Airy stress potential $\Phi$: \begin{equation}\label{old_Karman} \left \{ \begin{split} \Delta^2\Phi & = -Y(\det \nabla^2 v + \lambda_g)\\ Z\Delta^2v &= [v,\Phi] - Z\Omega_g~, \end{split}\right. \end{equation} where $Y$ is the Young modulus, $Z$ the bending stiffness, $\nu$ the Poisson ratio (given in terms of the Lam\'e constants $\mu$ and $\lambda$), and: \begin{equation}\label{lambdag} \begin{split} \lambda_g & = \mbox{curl}^T\mbox{curl }(\epsilon_g)_{2\times 2} = \partial_{22}(\epsilon_g)_{11} + \partial_{11}(\epsilon_g)_{22} - \partial_{12}\Big((\epsilon_g)_{12}+ (\epsilon_g)_{21}\Big),\\ \Omega_g & = \mbox{div}^T\mbox{div }\Big((\kappa_g)_{2\times 2} +\nu\mbox{ cof }(\kappa_g)_{2\times 2}\Big) \\ & = \partial_{11} \Big((\kappa_g)_{11}+ \nu(\kappa_g)_{22}\Big) + \partial_{22}\Big((\kappa_g)_{22}+ \nu(\kappa_g)_{11}\Big) + (1- \nu)\partial_{12}\Big((\kappa_g)_{12}+ (\kappa_g)_{21}\Big). \end{split} \end{equation} Equations (\ref{old_Karman}) are based on a thermoelastic analogy to growth \cite{Mansfield, Maha} and can also be derived using a formal perturbation theory \cite{BenAmar}.
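The reductions used below rest on two pointwise identities: applied to the quadratic strain $\frac12\nabla v_0\otimes\nabla v_0$, the operator $\mathrm{curl}^T\mathrm{curl}$ in (\ref{lambdag}) returns $-\det\nabla^2 v_0$, while the cofactor of a Hessian is row-wise divergence-free (a Piola-type identity). A minimal symbolic check (assuming the SymPy library; illustrative only):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
v0 = sp.Function('v0')(x1, x2)
d = sp.diff

# quadratic strain induced by the out-of-plane displacement v0
E = sp.Rational(1, 2) * sp.Matrix([[d(v0, x1)**2,         d(v0, x1)*d(v0, x2)],
                                   [d(v0, x1)*d(v0, x2),  d(v0, x2)**2]])

# curl^T curl E = d22 E11 + d11 E22 - d12 (E12 + E21), as in the formula for lambda_g
curl2E = d(E[0, 0], x2, x2) + d(E[1, 1], x1, x1) - d(E[0, 1] + E[1, 0], x1, x2)
ma = d(v0, x1, x1)*d(v0, x2, x2) - d(v0, x1, x2)**2   # determinant of the Hessian
assert sp.expand(curl2E + ma) == 0   # curl^T curl (1/2 grad v0 x grad v0) = -det(Hess v0)

# Piola identity: the cofactor of a Hessian is row-wise divergence-free
cof = sp.Matrix([[d(v0, x2, x2), -d(v0, x1, x2)],
                 [-d(v0, x1, x2), d(v0, x1, x1)]])
div_cof = [d(cof[i, 0], x1) + d(cof[i, 1], x2) for i in range(2)]
assert all(sp.expand(expr) == 0 for expr in div_cof)
```

The first identity governs the shift of $\lambda_g$ under a pulled-back strain, and the second one reduces $\mathrm{div}^T\mathrm{div}$ of the modified curvature to a bilaplacian.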
On the other hand, the following system was introduced in \cite{Maha2}, as a mathematical model of blooming activated by differential lateral growth from an initial non-zero transverse displacement field $v_0$: \begin{equation}\label{new_Karman} \left \{ \begin{split} \Delta^2\Phi & = -Y(\det \nabla^2 v -\det\nabla^2 v_0+ \lambda_g)\\ Z(\Delta^2v-\Delta^2 v_0) &= [v,\Phi] - Z\Omega_g~. \end{split}\right. \end{equation} A similar calculation as in \cite{lemapa} then shows that (\ref{new_Karman}) can be viewed as the Euler-Lagrange equations corresponding to the energy functional $\mathcal{I}^1_4$. We will now show that (\ref{new_Karman}) can be directly derived from the equations (\ref{old_Karman}). \begin{proposition}\label{derive} The system (\ref{new_Karman}) can be derived from the equations (\ref{old_Karman}) by pulling back the prestrain tensors $\epsilon_g$ and $\kappa_g$ from a sequence of shallow shells $(S_h)^h$ generated by the vanishing out-of-plane displacements $hv_0$. \end{proposition} \begin{proof} By Lemma \ref{lemik} we see that the growth tensor on $\Omega^h$ is given by (\ref{ahnew}). Applying (\ref{lambdag}) to the modified strain and curvature in $a^h$, to the leading order, we obtain: \begin{equation*} \begin{split} \lambda_g(v_0) & = \mbox{curl}^T\mbox{curl }\Big((\mbox{sym }\epsilon_g)_{tan} + \frac{1}{2}\nabla v_0\otimes \nabla v_0\Big) = \lambda_g - \det\nabla^2v_0\\ \Omega_g(v_0) & = \mbox{div}^T\mbox{div }\Big(((\mbox{sym }\kappa_g)_{tan}-\nabla^2v_0) +\nu\mbox{ cof }((\mbox{sym }\kappa_g)_{tan}-\nabla^2v_0)\Big) \\ & = \Omega_g -\Delta^2v_0, \end{split} \end{equation*} where the last equality follows from $\mbox{div }\mbox{cof }\nabla^2v_0=0$. Consequently, (\ref{old_Karman}) for the growth tensor (\ref{ahnew}) becomes exactly (\ref{new_Karman}).
\end{proof} \section{The energy scaling}\label{sec5} A straightforward consequence of our results is the following assertion about the scaling of the infimum elastic energies of the thin prestrained shallow shells in the von K\'arm\'an regime (\ref{qh-gamma}). \begin{theorem} Let $\alpha> 0$ and let the sequence of thin shells $(S_\gamma)^h$ be given as in (\ref{shallow-gamma}) with the elastic energies of deformations $I^{\gamma,h}$ as in (\ref{IhW-gamma}). Assume that: \begin{equation}\label{knot0} \mathrm{curl} \, (\mathrm{sym}~\kappa_g)_{tan} \not\equiv 0 \quad \mbox { in } \Omega. \end{equation} Then there exist constants $c, C>0$ for which: \begin{equation}\label{teho} \displaystyle \forall 0<h\ll 1 \qquad c \le \inf_{u\in W^{1,2}((S_{h^\alpha})^h, {\mathbb R}^3)} \frac{1}{h^4} I^{h^\alpha, h}(u) \le C. \end{equation} \end{theorem} Indeed, the condition $\mbox{curl}(\mbox{sym }\kappa_g)_{tan}\equiv 0$ is equivalent to $(\mbox{sym }\kappa_g)_{tan} = \nabla^2 v$, for some $v:\Omega\rightarrow \mathbb{R}$. If this condition is not satisfied, the bending term in (\ref{energy-shallow<1}) is always positive, yielding the lower bound in (\ref{teho}). The existence of recovery sequences in Theorems \ref{shallow<1-limsup} and \ref{recsec_sth}, and in \cite{lemapa}, implies the upper bound. \begin{remark} The incompatibility condition (\ref{knot0}) can be relaxed depending on the specific value of $\alpha$, and the assumed energy level, see e.g. \cite{lemapa} for a more involved scaling analysis when $\alpha>1$. Heuristically, conditions of similar type imply that the Riemann curvature tensor of the induced metric $p^h$ is non-zero and hence, in view of \cite[Theorem 2.2]{lepa_noneuclid}, they guarantee the positivity of the infimum of $I^{\gamma,h}$. In a further step we observe that, when $p^h$ is close to being flat, the scaling regime depends on the magnitude of the first non-zero term of the expansion of its curvature tensor.
Note also that when $\alpha<1$, the first two non-zero terms after the identity in (\ref{metric-alpha}) have no bearing on the first non-zero terms in the expansion of the curvature. Analogously, the induced prestrains $\kappa_g'= \nabla^2 v_0$ and $\epsilon_g'= \frac 12 (\nabla v_0 \otimes \nabla v_0)$ corresponding to the scalings $h^\alpha$ and $h^{2\alpha}$ satisfy neither condition (1.13) nor condition (1.14) of \cite{lemapa}. Therefore the energy infimum naturally falls in the regime $h^{2\alpha+2}$, rather than $h^4$. \end{remark} \section{Discussion} Our analysis has rigorously derived a general theory of shells with residual strain arising from relative growth, inhomogeneous swelling, plasticity, etc. In fact, there are many such theories; each is a consequence of the scalings of the shell's curvature relative to the magnitude of the strain incompatibility induced by the growth tensors. Indeed, for any exponent $\alpha\geq 0$ we have considered the following energies of deformations on weakly prestrained shallow shells: \begin{equation*} I^{h}(u) = \frac{1}{h}\int_{(S_{h^\alpha})^h} W((\nabla u) (q^h)^{-1}) \qquad \forall u\in W^{1,2}((S_{h^\alpha})^h, \mathbb{R}^3), \end{equation*} with the growth tensor $q^h$ given by (\ref{qh-gamma}) on thin shells of the form (\ref{shallow-gamma}) around the mid-surface: $$S_{h^\alpha} = \phi_{h^\alpha} (\Omega), \qquad \phi_{h^\alpha} (x)=(x, h^\alpha v_0(x)), \qquad v_0\in\mathcal{C}^{1,1} (\bar\Omega,\mathbb{R}).$$ We have established that, independently of the value of $\alpha$, the scaling for the infimum of the energy is always determined by the prestrain and is of order $h^4$ under our current assumption (\ref{knot0}). When $\alpha>1$, the prestrain overwhelms the role of shallowness, so that the limiting theory is the one derived in \cite{lemapa}, coinciding with the results of Theorem \ref{thnew} for the case $v_0=0$ and yielding the Euler-Lagrange equations (\ref{old_Karman}).
When $\alpha=1$ one recovers the recently postulated model \cite{Maha2}, discussed in the present paper. For the case $0<\alpha<1$, the limiting theory reduces to a new constrained theory and can be viewed as a plate theory where the non-trivial geometric structure of the shallow shell is inherited by the plate, or equivalently it can be considered as the natural limit of the generalized von K\'arm\'an theories (\ref{vonK}) on the shallow midsurface $S_\gamma$ as $\gamma\to 0$. This may be contrasted with a similar problem considered by the authors in \cite{LMP-new}, where the $\Gamma$-limit is discussed under the energy regime of order $h^{2\alpha+2}$. This order is compatible with the case where the role of shallowness is affected by the relative scaled magnitude of the body forces or prestrains, so that the choice of $\alpha$ has a bearing on the limiting model. Our analysis in the present paper and in \cite{LMP-new} is thus the beginning of an exploration of a vast range of possible scenarios. A natural generalization of our results would be to allow for different scaling regimes for the growth tensors, in search of other possible limiting theories. Overall, there are three independent parameters: one associated with the scaling of the shallowness, and two incompatible strains characterized in terms of their dependence on the thickness $h$ in the form $h^\alpha$. The resulting theories depend on the choice of scalings for these three parameters. Thus, there is no single {\it correct} model in general, but of course when dealing with a concrete situation, a choice of particular scalings for the relative magnitude of the thickness, the shallowness and the differential growth determines the effective theory, as we have shown here. \noindent{\bf Acknowledgments.} This project is based upon work supported by, among others, the National Science Foundation. M.L. is partially supported by the NSF grants DMS-0707275 and DMS-0846996, and by the Polish MN grant N N201 547438. L.M.
is supported by the MacArthur Foundation, M.R.P. is partially supported by the NSF grants DMS-0907844 and DMS-1210258. \begin{center} {\bf \Large Appendices} \end{center} Here we provide the proofs of Theorems \ref{shallow<1-liminf} and \ref{shallow<1-limsup}. We first derive bounds on families of vector mappings $\{u^h\}_{h>0}$, defined on $(S_\gamma)^h$ as in (\ref{shallow-alpha-gamma}) and (\ref{shallow-gamma}), under a smallness assumption on their energy and when $\gamma=h^\alpha$ for the scaling regime $0<\alpha<1$. In what follows, by $C$ we denote an arbitrary positive constant, depending on $v_0$, but not on $h$ or the vector mapping under consideration. In all proofs, the convergences are understood up to a subsequence, unless stated otherwise. \appendix \section{Proof of Theorem \ref{shallow<1-liminf}} Let a sequence of deformations $u^h\in W^{1,2}((S_\gamma)^h, \mathbb{R}^3)$ satisfy: $$\displaystyle \frac{1}{h} \int_{(S_\gamma)^h}W((\nabla u^h)(q^h)^{-1})~\mbox{d}z \leq C h^4,$$ where, recalling the definition of $\tilde\phi_\gamma:\Omega\times (-h/2, h/2)\to (S_\gamma)^h$ in (\ref{kl-gamma}), we have: $$ q^h \circ \tilde \phi_\gamma = \mbox{Id} + h^2 \epsilon_g + hx_3 \kappa_g.$$ The following approximation results can be obtained by combining arguments in \cite{lemopa7} and \cite{lemapa}, in view of the seminal work \cite{FJMhier}. \begin{lemma} \label{appr} (Parallel to \cite[Lemma 3.1]{lemopa7}, \cite[Theorem 1.6]{lemapa}.) There exist a matrix field $R^h\in W^{1,2}(S_\gamma, \mathbb{R}^{3\times 3})$ with values in $SO(3)$, and a matrix $Q^h\in SO(3)$, such that: \begin{itemize} \item[(i)] $\displaystyle \frac 1h \|\nabla u^h - R^h\|^2_{L^2(S_\gamma^h)} \leq Ch^4,$ \item[(ii)] $\|\nabla R^h\|_{L^2(S_\gamma)} \leq Ch,$ \item[(iii)] $\|(Q^h)^TR^h - \mathrm{Id}\|_{L^p(S_\gamma)} \leq Ch,~$ for all $p\in [1,\infty)$. \end{itemize} \end{lemma} \begin{lemma} \label{lem3.2} (Parallel to \cite[Lemma 3.2]{lemopa7}.)
Let $R^h, Q^h$ be as in Lemma \ref{appr}. There holds, for all $p\in [1,\infty)$: \begin{itemize} \item[(i)] $\displaystyle \lim_{h\to 0}~ (Q^{h})^TR^{h} \circ \phi_\gamma =\mathrm{Id}$, in $W^{1,2}(\Omega)$ and in $L^p(\Omega)$. \end{itemize} Moreover, there exists a $W^{1,2}$ skew-symmetric field $A:S\longrightarrow so(3)$, such that: \begin{itemize} \item[(ii)] $\displaystyle \lim_{h\to 0}~ \frac{1}{h} \left( (Q^{h})^TR^{h} - \mathrm{Id}\right) \circ \phi_\gamma = A$, weakly in $W^{1,2}(\Omega)$ and (strongly) in $L^p(\Omega)$. \item[(iii)] $\displaystyle \lim_{h\to 0}~ \frac{1}{h^2} \mathrm{sym }\left( (Q^{h})^TR^{h} - \mathrm{Id}\right) \circ \phi_\gamma = \frac{1}{2}A^2$, in $L^p(\Omega)$. \end{itemize} \end{lemma} \begin{proof} The convergences in (i) follow from Lemma \ref{appr}. To prove (ii), notice that: $$A^h = \frac{1}{h} \left((Q^h)^T R^h - \mathrm{Id}\right) \circ \phi_\gamma$$ is bounded in $W^{1,2}(\Omega)$ and so it has a weakly converging subsequence as $h\to 0$. Consequently, the convergence is strong in $L^p(\Omega)$. One has: \begin{equation}\label{symAh} A^h + (A^h)^T = \frac{1}{h} \left((Q^h)^T R^h + (R^h)^TQ^h - 2\mathrm{Id} \right) = - {h}(A^h)^TA^h. \end{equation} The latter converges to $0$ in $L^p(\Omega)$, and therefore the limit matrix field $A$ is skew-symmetric. The above equality proves as well that: $$\lim_{h\to 0} \frac{1}{h} \mbox{ sym } A^h = \frac{1}{2} A^2$$ in $L^p(\Omega)$, which implies (iii). \end{proof} Consider (and compare with Theorem \ref{shallow<1-liminf}) the rescaling: \begin{equation*} y^h(x + t\vec n^\gamma(x)) = u^h\left(x + t{h}/{h_0}\vec n^\gamma(x)\right) \qquad \forall x\in S_\gamma\quad \forall t\in (-h_0/2, h_0/2), \end{equation*} so that $y^h\in W^{1,2}((S_\gamma)^{h_0}, \mathbb{R}^3)$. Also, define: $\nabla_h y^h(x + t\vec n^\gamma(x)) = \nabla u^h\left(x + t{h}/{h_0}\vec n^\gamma(x)\right)$. In what follows, $\Pi_\gamma=\nabla{\vec n}^\gamma$ denotes the second fundamental form of $S_\gamma$.
By a straightforward calculation we obtain: \begin{proposition}\label{formule} (Parallel to \cite[Proposition 3.3]{lemopa7}.) For each $x\in S_\gamma$, $t\in (-h_0/2, h_0/2)$ and $\tau\in T_xS_\gamma$ there hold: \begin{equation*} \begin{split} \partial_{\tau} y^h(x+t\vec n^\gamma) &= \nabla_h y^h\left(x + t\vec n^\gamma\right)\left(\mathrm{Id} + t{h}/{h_0}\Pi_\gamma(x)\right) (\mathrm{Id} + t\Pi_\gamma(x))^{-1} \tau \\ {\partial_{\vec n^\gamma}} y^h(x+t\vec n^\gamma) &= \frac{h}{h_0} \nabla_h y^h\left(x + t\vec n^\gamma\right)\vec n^\gamma(x). \end{split} \end{equation*} Moreover, for $I^h(y^h) = \frac{1}{h} \int_{(S_\gamma)^h}W((\nabla u^h)(q^h)^{-1})$ one has: \begin{equation*} \begin{split} I^h(y^h) &= \frac{1}{h_0}\int_{(S_\gamma)^{h_0}} W(\nabla_h y^h (x+t\vec n^\gamma) (q^h)^{-1})\cdot \det \left[\left(\mathrm{Id} + t{h}/{h_0}\Pi_\gamma\right) (\mathrm{Id} + t\Pi_\gamma)^{-1}\right]\\ &= \int_{S_\gamma} \fint_{-h_0/2}^{h_0/2} W(\nabla_h y^h (x+t\vec n^\gamma) (q^h)^{-1})\cdot \det \left[\mathrm{Id} + t{h}/{h_0}\Pi_\gamma(x)\right]~\mathrm{d}t~\mathrm{d}x. \end{split} \end{equation*} \end{proposition} Directly from Lemma \ref{appr} (i) and Lemma \ref{lem3.2} (ii) there follows: \begin{proposition} \label{help} (Parallel to \cite[Proposition 3.4]{lemopa7}.) \begin{itemize} \item[(i)] $\displaystyle \|\nabla_h y^h - R^h\|_{L^2((S_\gamma)^{h_0})}\leq Ch^2$. \item[(ii)] $\displaystyle \lim_{h\to 0} \frac{1}{h}\left((Q^h)^T\nabla_h y^h - \mathrm{Id}\right) \circ \tilde \phi_\gamma = A$, in $L^{2}(\Omega^{h_0})$. 
\end{itemize} \end{proposition} We consider the deformations $\tilde y^h\in W^{1,2}((S_\gamma)^{h_0},\mathbb{R}^3)$, corrected by rigid motions, and the averaged displacements $V^h\in W^{1,2}(S_\gamma, \mathbb{R}^3)$: \begin{equation*} \tilde y^h = (Q^h)^T y^h - c^h,\qquad V^h = V^h[\tilde y^h] = \frac{1}{h}\fint_{-h_0/2}^{h_0/2} \tilde y^h(x+t\vec n^\gamma) - x ~\mathrm{d}t, \end{equation*} where the constants $c^h$ are chosen so that $\fint_{\Omega} V^h \circ \phi_\gamma = 0$. \begin{lemma} \label{lem3.4} (Parallel to \cite[Lemma 3.5]{lemopa7}.) \begin{itemize} \item[(i)] $\displaystyle \lim_{h\to 0} (\tilde y^{h}\circ \tilde \phi_\gamma - \phi_\gamma) =0$ in $W^{1,2}(\Omega^{h_0}).$ \item[(ii)] $\displaystyle \lim_{h\to 0} (V^{h}\circ \phi_\gamma) = V$ in $W^{1,2}(\Omega)$. \end{itemize} The vector field $V$ in (ii) has regularity $W^{2,2}(\Omega, \mathbb{R}^3)$ and it satisfies $\partial_\tau V (x) = A(x) \tau$ for all $\tau\in \mathbb{R}^2$. The $W^{1,2}$ skew-symmetric matrix field $A:S\longrightarrow so(3)$ is as in Lemma \ref{lem3.2}. \end{lemma} \begin{proof} {\bf 1.} In view of Proposition \ref{formule} and Proposition \ref{help} we have: \begin{equation}\label{important} \begin{split} &\left\|\nabla_{tan}\tilde y^h - \left((Q^h)^T R^h\right)_{tan}\cdot (\mathrm{Id} + th/h_0 \Pi_\gamma) (\mathrm{Id} + t\Pi_\gamma)^{-1} \right\|_{L^2(S_\gamma^{h_0})} \leq Ch^2\\ &\left\|\partial_{\vec n^\gamma}\tilde y^h \right\|_{L^2((S_\gamma)^{h_0})} \leq Ch\|\nabla_h y^h\|_{L^2(S_\gamma^{h_0})} \leq Ch.
\end{split} \end{equation} To prove convergence of $V^h \circ \phi_\gamma$, consider for $x\in S_\gamma$: \begin{equation}\label{nabla_Vh} \begin{split} \nabla V^h(x) &= \frac{1}{h} \fint_{-h_0/2}^{h_0/2} \nabla_{tan}\tilde y^h(x+t\vec n^\gamma) (\mathrm{Id} + t\Pi_\gamma) - \mathrm{Id} ~\mbox{d}t\\ &= \frac{1}{h} \fint_{-h_0/2}^{h_0/2} \Big(\nabla_{tan}\tilde y^h - \left((Q^h)^T R^h\right)_{tan} (\mathrm{Id} + t\Pi_\gamma)^{-1}\Big) (\mathrm{Id} + t\Pi_\gamma) ~\mbox{d}t\\ &\quad + \frac{1}{h} \Big((Q^h)^T R^h (x) - \mathrm{Id}\Big)_{tan} = A^h \circ (\phi_\gamma)^{-1}+ {\mathcal O}(h). \end{split} \end{equation} We also have: $ \nabla (V^h\circ \phi_\gamma) = (\nabla V^h \circ \phi_\gamma ) (\nabla \phi_\gamma),$ hence: \begin{equation}\label{nabla_Vh-local} \begin{split} \nabla (&V^h \circ \phi_\gamma) = \Big [( \frac{1}{h} \fint_{-h_0/2}^{h_0/2} \nabla_{tan}\tilde y^h (\mathrm{Id} + t\Pi_\gamma) - \mathrm{Id} ~\mbox{d}t )\circ \phi_\gamma \Big] \nabla \phi_\gamma \\ &= \Big [( \frac{1}{h} \fint_{-h_0/2}^{h_0/2} \Big(\nabla_{tan}\tilde y^h - \left((Q^h)^T R^h\right)_{tan} (\mathrm{Id} + t\Pi_\gamma)^{-1}\Big) (\mathrm{Id} + t\Pi_\gamma) ~\mbox{d}t) \circ \phi_\gamma \Big ] \nabla \phi_\gamma \\ &\quad + \Big [( \frac{1}{h} \Big((Q^h)^T R^h (x) - \mathrm{Id}\Big)_{tan})\circ \phi_\gamma \Big ] \nabla \phi_\gamma = A^h \nabla \phi_\gamma + \mathcal O(h). \end{split} \end{equation} Therefore, $\nabla (V^h\circ \phi_\gamma)$ converges to $A_{tan}$ in $L^2(\Omega)$ and since $\fint_{\Omega} V^h \circ \phi_\gamma = 0$, we may use Poincar\'e inequality on $\Omega$ to deduce (ii). 
{\bf 2.} To prove (i), notice that by (\ref{important}) and Lemma \ref{lem3.2} we obtain the following convergences in $L^2(\Omega^{h_0})$: \begin{equation*} \begin{split} &\lim_{h\to 0}\nabla (\tilde y^h \circ \tilde \phi_\gamma) - \nabla \tilde \phi_\gamma =\lim_{h\to 0} \Big ( (\nabla \tilde y^h - \mbox{Id})\circ \tilde \phi_\gamma\Big ) \nabla \tilde \phi_\gamma =0,\\ &\lim_{h\to 0}{\partial_3}(\tilde y^h \circ \tilde \phi_\gamma)=\lim_{h\to 0} (\partial_{\vec n^\gamma} \tilde y^h)\circ \tilde \phi_\gamma = 0. \end{split} \end{equation*} Therefore $\nabla (\tilde y^h\circ \tilde \phi_\gamma ) -\nabla\phi_\gamma$ converges to $0$ in $L^2(\Omega^{h_0})$. Since the sequence $\{V^h \circ \phi_\gamma\}$ is bounded in $L^2(\Omega)$, it also follows that: \begin{equation}\label{conve} \lim_{h\to 0} \left\|\int_{-h_0/2}^{h_0/2}\tilde y^h (\tilde \phi_\gamma) - x ~\mbox{d}t\right\|_{L^2(S_\gamma)} = 0. \end{equation} Now, let $g(x+t\vec n^\gamma) = |\mbox{det } (\mathrm{Id} +t\Pi_\gamma(x))|^{-1}$. Consider the two terms in the right hand side of: $$\|\tilde y^h - \pi\|_{L^2((S_\gamma)^{h_0})} \leq \left\|(\tilde y^h - \pi) - \int_{(S_\gamma)^{h_0}}(\tilde y^h - \pi)\cdot \frac{g}{\scriptstyle \int_{(S_\gamma)^{h_0}} g} \right\|_{L^2((S_\gamma)^{h_0})} + ~\left|\int_{S_\gamma^{h_0}}(\tilde y^h - \pi)\cdot \frac{g}{\scriptstyle\int_{(S_\gamma)^{h_0}} g}\right|.$$ The first term can be bounded by means of the weighted Poincar\'e inequality, by $\|\nabla (\tilde y^h -\pi)\|_{L^2(S_\gamma^{h_0})}$ and therefore it converges to $0$ as $h\to 0$. 
The second term converges to $0$ as well, in view of (\ref{conve}) and: $$\left|\int_{(S_\gamma)^{h_0}} (\tilde y^h - \pi)\cdot g\right| = \left|\int_{S_\gamma}\int_{-h_0/2}^{h_0/2} \tilde y^h - \pi ~\mbox{d}t~\mbox{d}x\right| \leq C \left\|\int_{-h_0/2}^{h_0/2}\tilde y^h - \pi ~\mbox{d}t\right\|_{L^2(S_\gamma)}.$$ \end{proof} \begin{proposition}\label{new-constraint} We have: \begin{itemize} \item[(i)] $ \displaystyle \lim_{h\to 0} \frac{1}{h^\alpha} \Big ( (\tilde y^h \circ \tilde \phi_\gamma) - x \Big ) = v_0 e_3$ in $W^{1,2}(\Omega, \mathbb{R}^3)$, \item[(ii)] $\displaystyle \mathrm{cof}\,\,\nabla^2 v_0: \nabla^2 V^3 = \mathrm{cof}\,\,\nabla^2 v_0: \nabla^2 v =0$ in $\Omega$. \end{itemize} \end{proposition} \begin{proof} The statement (i) easily follows from Lemma \ref{lem3.4} (i). By (\ref{nabla_Vh-local}) and \eqref{symAh} we calculate: \begin{equation*} \begin{split} \forall i,j=1,2 \quad & 2 \left\langle\partial_i \phi_\gamma, (\mbox{sym} \nabla V^h) \partial_j \phi_\gamma\right\rangle = \left\langle \partial_i (V^h \circ \phi_\gamma), \partial_j \phi_\gamma \right\rangle + \left\langle\partial_j (V^h \circ \phi_\gamma), \partial_i \phi_\gamma\right\rangle \\ & = \left\langle \partial_j \phi_\gamma, A^h \partial_i \phi_\gamma\right\rangle + \left\langle\partial_i \phi_\gamma, A^h \partial_j \phi_\gamma\right\rangle + \mathcal O(h) = \mathcal O(h), \end{split} \end{equation*} and hence, denoting: $V^h\circ \phi_\gamma= (v_1^h, v_2^h, v_3^h)$, we get: \begin{equation*} \begin{split} \mathcal O(h) = 2 \left\langle\partial_i \phi_\gamma, (\mbox{sym} \nabla V^h) \partial_j \phi_\gamma\right\rangle & = \left\langle\partial_i (V^h \circ \phi_\gamma), \partial_j \phi_\gamma\right\rangle + \left\langle \partial_j (V^h \circ \phi_\gamma), \partial_i \phi_\gamma\right\rangle \\ & = 2\left\langle e_i, \big(\mbox{sym} \nabla(v^h_1, v^h_2) + h^\alpha \mbox{sym} (\nabla v^h_3 \otimes \nabla v_0)\big) e_j\right\rangle. 
\end{split} \end{equation*} Dividing by $h^\alpha$ and letting $h\to 0$, we obtain: $$ \lim_{h\to 0} \Big (\mbox{sym} (\nabla v^h_3 \otimes \nabla v_0) + \frac {1}{h^\alpha} \mbox{sym} \nabla(v^h_1, v^h_2) \Big)=0. $$ On the other hand, $V^h \circ \phi_\gamma$ converges to $V$ in $\Omega$, and $\mbox{sym} \nabla V=0$ implies that $(v^h_1, v^h_2)$ converge to a constant, while $v^h_3$ converges to $V^3=v\in W^{2,2}(\Omega)$. Passing to the limit we obtain: $$ \mbox{sym} \big(\nabla v \otimes \nabla v_0\big) = - \lim_{h\to 0} \frac {1}{h^\alpha} \mbox{sym} \nabla(v^h_1, v^h_2) = - \mbox{sym} \nabla \tilde w, $$ for some $\tilde w \in W^{1,2}(\Omega)$ (we used Korn's inequality to deduce the existence of $\tilde w$). Applying the operator $\mbox{curl}^T\mbox{curl}$ to both sides, we conclude: $$ \displaystyle \mbox{cof }\nabla^2 v_0: \nabla^2 v = 0,$$ as claimed in (ii). \end{proof} We now need to study the following sequence of matrix fields on $(S_\gamma)^{h_0}$: $$G^h = \frac{1}{{h}} \Big((R^h)^T \nabla_h y^h - \mathrm{Id}\Big) \circ \tilde \phi_\gamma.$$ In view of Proposition \ref{help} (i), the tensor $2\mbox{sym } G^h$ is the $h^2$ order term in the expansion of the nonlinear strain $(\nabla u^h)^T \nabla u^h$ at $\mbox{Id}$. \begin{lemma} \label{lem3.6} (Parallel to \cite[Lemma 3.6]{lemopa7}.) The sequence $\{G^h\}$ as above has a subsequence, converging weakly in $L^2(\Omega^{h_0})$ to a matrix field $G$. The tangential minor of $G$ is affine in the $e_3$ direction. More precisely: $$ G(x,t)_{2\times 2} = G_0(x)_{2\times 2} - \frac{t}{h_0} \nabla^2 v(x), ~~\mbox{ with }~~ G_0(x) = \fint_{-h_0/2}^{h_0/2} G(x, t)~\mathrm{d}t.$$ \end{lemma} \begin{proof} {\bf 1.} The sequence $\{G^h\}$ is bounded in $L^2(\Omega^{h_0})$ by Proposition \ref{help} (i). Therefore it has a subsequence (which we do not relabel) converging weakly to some $G$.
For a fixed $s>0$, consider now the sequence of vector fields $f^{s,h}\in W^{1,2}((S_\gamma)^{h_0}, \mathbb{R}^3)$: $$f^{s,h}(x+t\vec n^\gamma) = \frac{1}{sh^2} \Big[\Big(h_0\tilde y^h(x+(t+s)\vec n^\gamma) - h(x+(t+s)\vec n^\gamma)\Big) - \Big(h_0\tilde y^h(x+t\vec n^\gamma) - h(x+t\vec n^\gamma)\Big)\Big]$$ We claim that $f^{s,h} \circ \tilde \phi_\gamma$ converges in $L^2(\Omega^{h_0})$ to $Ae_3$. Indeed, by Proposition \ref{formule} one has: \begin{equation*} \begin{split} f^{s,h}(x+t\vec n^\gamma) &= \frac{1}{{h^2}} \fint_t^{t+s} \Big(h_0\partial_{\vec n^\gamma} \tilde y^h(x+\sigma \vec n^\gamma) - h\vec n^\gamma\Big) ~\mbox{d}\sigma\\ &= \frac{1}{h} \fint_{t}^{t+s} \Big((Q^h)^T\nabla_h y^h (x+\sigma\vec n^\gamma) - \mathrm{Id}\Big)\vec n^\gamma~\mbox{d}\sigma, \end{split} \end{equation*} and the convergence follows by Proposition \ref{help} (ii). {\bf 2.} We claim that this convergence is actually weak in $W^{1,2}(\Omega^{h_0})$. First, notice that the $x_3$ derivatives converge to $0$ in $L^2(\Omega^{h_0})$ by Proposition \ref{help} (ii): \begin{equation*} \partial_{3}(f^{s,h}\circ \tilde \phi_\gamma) = \frac{1}{sh} ~\Big [ (Q^h)^T \Big(\nabla_h y^h(x+(t+s)\vec n^\gamma) - \nabla_h y^h(x+t\vec n^\gamma)\Big) \circ \tilde \phi_\gamma\Big ] \vec n^\gamma(x). \end{equation*} We now find the weak limit of the tangential gradients of $f^{s,h}$. 
By Proposition \ref{formule}: \begin{equation*} \begin{split} &\partial_{i} (f^{s,h} \circ \tilde \phi_\gamma) = \frac{1}{sh^2} ~ \Big [\Big(h_0\nabla \tilde y^h(x+(t+s)\vec n^\gamma)(\mathrm{Id} + (t+s)\Pi_\gamma)(\mathrm{Id} + t\Pi_\gamma)^{-1} \\ & \qquad\qquad\qquad\qquad\qquad - h_0\nabla \tilde y^h(x+t\vec n^\gamma) - hs\Pi_\gamma (\mathrm{Id} + t\Pi_\gamma)^{-1}\Big) \Big] \circ \tilde \phi_\gamma \partial_i \phi_\gamma\\ &=\frac{h_0}{sh^2} ~\Big [ (Q^h)^T \Big(\nabla_h y^h(x+(t+s)\vec n^\gamma) - \nabla_h y^h(x+t\vec n^\gamma)\Big) (\mathrm{Id} + th/h_0\Pi_\gamma)(\mathrm{Id} + t\Pi_\gamma)^{-1} \Big] \circ \tilde \phi_\gamma \partial_i \phi_\gamma \\ &\quad + \frac{1}{sh}~\Big [ \Big((Q^h)^T\nabla_h y^h(x+(t+s)\vec n^\gamma) - \mathrm{Id}\Big) s \Pi_\gamma(\mathrm{Id} + t\Pi_\gamma)^{-1} \Big] \circ \tilde \phi_\gamma \partial_i \phi_\gamma . \end{split} \end{equation*} By Proposition \ref{help} (ii), the following expression in the right hand side above: $$\frac{1}{h}~\Big [ \Big((Q^h)^T\nabla_h y^h(x+(t+s)\vec n^\gamma) - \mathrm{Id}\Big) \Pi_\gamma(\mathrm{Id} + t\Pi_\gamma)^{-1} \Big] \circ \tilde \phi_\gamma $$ converges in $L^2(\Omega^{h_0})$ to $0$. On the other hand, the first term in this expression converges weakly in $L^2(\Omega^{h_0})$ to: $$\frac{h_0}{s} \big(G(x,t+s) - G(x,t)\big),$$ by Lemma \ref{lem3.2} (i). This establishes the (weak) convergence of $f^{s,h}$ in $W^{1,2}(\Omega^{h_0})$. {\bf 3.} Equating the weak limits of tangential derivatives, we obtain: \begin{equation*} \begin{split} \partial_i (A e_3) (x) &= \frac{h_0}{s}\Big(G(x,t+s) - G(x,t)\Big)e_i. \end{split} \end{equation*} This proves the lemma. \end{proof} Finally, we have the following bound for convergence of the scaled energies $I^h$. \begin{lemma}\label{liminf} (Parallel to \cite[Lemma 3.7]{lemopa7} and \cite[Theorem 1.3]{lemapa}.)
$$\liminf_{h\to 0} \frac{1}{h^4} I^h(y^h) \geq \frac{1}{2} \int_S \mathcal{Q}_2\left(\mathrm{sym }~ (G_0)_{2 \times 2} - (\epsilon_g)_{2\times 2}\right) + \frac{1}{24} \int_S \mathcal{Q}_2\left(\nabla^2 v + (\kappa_g)_{2\times 2} \right).$$ \end{lemma} In view of Lemma \ref{lem3.4} and Lemma \ref{liminf}, it remains to understand the structure of $G_0$. \begin{lemma}\label{lem3.9} (Parallel to \cite[Lemma 3.7]{lemopa7}.) Let $G_0$ be as in Lemma \ref{lem3.6}. Then we have the following convergence (up to a subsequence) weakly in $L^2(\Omega)$: \begin{equation}\label{symG0tan} \lim_{h\to 0} \frac{1}{h} \left\langle\partial_j \phi_\gamma, ( [\mathrm{sym }~\nabla V^h] \circ \phi_\gamma) \partial_i \phi_\gamma\right\rangle = \left\langle e_i, \left(\mathrm{sym }~G_0 + \frac{1}{2} A^2\right)_{2\times 2} e_j\right\rangle. \end{equation} \end{lemma} \begin{proof} We use the formula (\ref{nabla_Vh}) composed with $\phi_\gamma$ to calculate $\frac{1}{h}(\mathrm{sym }~ \nabla V^h) \circ \phi_\gamma$. The last term in the right hand side gives: $$\frac{1}{h^2} \mbox{sym}\left((Q^h)^T R^h - \mbox{Id}\right)_{tan} \circ \phi_\gamma,$$ which converges in $L^2(\Omega)$ to $\frac{1}{2} (A^2)_{tan}$ by Lemma \ref{lem3.2} (iii).
To treat the first term in the right hand side of (\ref{nabla_Vh}), notice that for every $\tau\in T_xS_\gamma$: \begin{equation*} \begin{split} &\langle \frac{1}{h^2} \Bigg[\fint_{-h_0/2}^{h_0/2} \nabla\tilde y^h(x+t\vec n^\gamma) (\mbox{Id} + t\Pi_\gamma) - (Q^h)^T R^h(x) ~\mbox{d}t\Bigg] \circ \phi_\gamma, \tau\rangle \\ &\quad = \frac{1}{h^2}\langle (Q^h)^T \Bigg[\fint_{-h_0/2}^{h_0/2}\nabla_h y^h(x+t\vec n^\gamma) - R^h(x) ~\mbox{d}t + \fint_{-h_0/2}^{h_0/2} t h/h_0 \nabla_h y^h \Pi_\gamma~\mbox{d}t\Bigg] \circ \phi_\gamma, \tau\rangle\\ &\quad= \frac{1}{h^2}\langle (Q^h)^T R^h(x) \Bigg[\fint_{-h_0/2}^{h_0/2}(R^h)^T\nabla_h y^h - \mbox{Id}~\mbox{d}t\Bigg] \circ \phi_\gamma, \tau\rangle \\ &\qquad\qquad\qquad\qquad + \frac{h_0}{h}\langle\Big ( (Q^h)^T\Bigg[\fint_{-h_0/2}^{h_0/2} t \left(\nabla_h y^h- R^h\pi\right) ~\mbox{d}t \Bigg]\Pi_\gamma(x) \Big )\circ \phi_\gamma, \tau\rangle, \end{split} \end{equation*} where we used Proposition \ref{formule}. Now, the second term in the right hand side above converges in $L^2(\Omega)$ to $0$, by Proposition \ref{help} (i). Further, the matrix in the first term equals: $$\Big((Q^h)^T R^h(x) \fint_{-h_0/2}^{h_0/2}G^h(x+t\vec n^\gamma)~\mbox{d}t \Big) \circ \phi_\gamma,$$ and by Lemma \ref{lem3.2} (i) and Lemma \ref{lem3.6}, it converges weakly in $L^2(\Omega)$ to $G_0$. This completes the proof, since $\nabla \phi_\gamma$ converges to $(e_1, e_2)$.
\end{equation} Therefore: $$ \mbox{sym} (G_0)_{2\times 2} = B - \frac{A^2}{2},$$ where $B$ is given by the following limit: $$ B= \lim_{h\to 0} B^h= \lim_{h\to 0} \frac 1h \Big [\mbox{sym} \nabla(v^h_1, v^h_2) + h^\alpha \mbox{sym} (\nabla v^h_3 \otimes \nabla v_0) \Big ] \in {\mathcal B_{v_0}}, $$ whose existence is assured by Lemma \ref{lem3.9}. \hspace*{\fill}\mbox{\ \rule{.1in}{.1in}} \section{Proof of Theorem \ref{shallow<1-limsup}} Let $v$ and $B$ be given as in the statement of Theorem \ref{shallow<1-limsup}. Since $v$ satisfies the constraint (\ref{constr2}) on a simply connected $\Omega$, there exists a displacement field $\tilde w\in W^{2,2}(\Omega, {\mathbb R}^2)$ for which: $$ \mbox{sym} \nabla \tilde w+ \mbox{sym} (\nabla v \otimes \nabla v_0) =0. $$ Let $V_\gamma= (\gamma \tilde w, v)\circ (\phi_\gamma)^{-1}$. Then, arguing as in \eqref{strange}, it is straightforward to verify that $V_\gamma$ is a first order isometry of class $W^{2,2}$ on $S_\gamma$, i.e. $\mbox{sym} \nabla V_\gamma=0$ on $S_\gamma$. Also, using the isomorphism ${\mathcal T}^\gamma$ in Lemma 4.1, we let $B_\gamma= ({\mathcal T}^\gamma)^{-1}(B)\in {\mathcal B}^\gamma$, where the latter space is identified in \cite{lemopa7} as the finite strain space of $S_\gamma$. Given $V_\gamma$ and $B_\gamma$ as above, we proceed as in \cite[Theorem 2.2]{lemopa7}, as follows. With a slight abuse of notation, we write: \begin{equation}\label{marta} \mathcal{Q}_2(x,F_{tan}) = \min\left\{\mathcal{Q}_3(F_{tan} + c\otimes e_3 + e_3\otimes c); ~~ c\in\mathbb{R}^3\right\}. \end{equation} The unique vector $c$ attaining the above minimum will be denoted by $c(x,F_{tan})$. By uniqueness, the map $c$ is linear in its second argument. Also, for all $F\in {\mathbb R}^{3\times3}$, by $l(F)$ we denote the unique vector in ${\mathbb R}^3$, linearly depending on $F$, for which: $$\mbox{sym}\big(F - (F_{2\times 2})^*\big) = \mbox{sym}\big(l(F) \otimes e_3\big).$$ Recall that $\gamma=h^\alpha$.
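As a concrete illustration of the minimization \eqref{marta} (a standard computation added here for the reader's convenience, not part of the argument; we assume the isotropic form $\mathcal{Q}_3(F)=2\mu|\mathrm{sym}\, F|^2+\lambda(\mathrm{tr}\, F)^2$ with Lam\'e constants $\mu,\lambda>0$):

```latex
\begin{align*}
\mathcal{Q}_3\big(F_{tan} + c\otimes e_3 + e_3\otimes c\big)
 &= 2\mu\,|\mathrm{sym}\, F_{tan}|^2 + 4\mu\,(c_1^2+c_2^2) + 8\mu\, c_3^2
    + \lambda\,\big(\mathrm{tr}\, F_{tan} + 2c_3\big)^2,
\end{align*}
which is minimized at $c_1=c_2=0$ and
$c_3 = -\frac{\lambda}{2(2\mu+\lambda)}\,\mathrm{tr}\, F_{tan}$, so that:
\begin{align*}
\mathcal{Q}_2(F_{tan}) = 2\mu\,|\mathrm{sym}\, F_{tan}|^2
 + \frac{2\mu\lambda}{2\mu+\lambda}\,(\mathrm{tr}\, F_{tan})^2,
\qquad
c(x,F_{tan}) = -\frac{\lambda\,\mathrm{tr}\, F_{tan}}{2(2\mu+\lambda)}\, e_3 .
\end{align*}
```

In particular, in this isotropic case one sees directly that $c$ is linear in $F_{tan}$, as asserted above.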
Given $B_{\gamma}\in\mathcal{B}^\gamma$, there exists a sequence of vector fields $w^h\in W^{1,2}(S_\gamma,\mathbb{R}^3)$ such that $\|\mbox{sym }\nabla w^h - B_\gamma\|_{L^2(S_\gamma)}$ converges to $0$. Without loss of generality, we may assume that $w^h$ are smooth, and (by possibly reparameterizing the sequence) that: \begin{equation}\label{norm} \lim_{h\to 0} \sqrt{h}\|w^h\|_{W^{2,\infty}(S_\gamma)} = 0. \end{equation} In the same manner, we approximate $V_\gamma$ by a sequence $v^h\in W^{2,\infty}(S_\gamma,\mathbb{R}^3)$ such that, for a sufficiently small, fixed $\epsilon_0>0$: \begin{equation}\label{vhapprox} \begin{split} & \lim_{h\to 0} \|v^h - V_\gamma\|_{W^{2,2}(S_\gamma)} = 0, \qquad h\|v^h\|_{W^{2,\infty}(S_\gamma)} \leq \epsilon_0,\\ & \lim_{h\to 0}\frac{1}{h^2}~ \mu\left\{x\in S_\gamma; ~~ v^h(x) \neq V_\gamma(x)\right\} =0. \end{split} \end{equation} The existence of such $v^h$ follows by partition of unity and a truncation argument, as a special case of the Lusin-type result for Sobolev functions (see \cite[Proposition 2]{FJMhier}). Finally, define the sequence of deformations $u^h\in W^{1,2}((S_\gamma)^{h},\mathbb{R}^3)$ by: \begin{equation}\label{rec_seq} \begin{split} u^h(x+t\vec n^\gamma) & = x + h v^h(x) + h^2 w^h(x) \\ & \qquad + {t}\vec n^\gamma(x) + {t}h\Big(\Pi_\gamma v^h_{tan} - \nabla (v^h\vec n^\gamma)\Big)(x)\\ & \qquad - t h^2 (\vec n^\gamma)^T \nabla w^h + {t} h^2 d^{0,h}(x) + \frac 12 t^2h d^{1,h}(x).
\end{split} \end{equation} The vector fields $d^{0,h}, d^{1,h}\in W^{1,\infty}(S_\gamma,\mathbb{R}^3)$ are defined so that: \begin{equation}\label{nd01h} \lim_{h\to 0} \sqrt{h} \left(\|d^{0,h}\|_{W^{1,\infty}(S_\gamma)} + \|d^{1,h}\|_{W^{1,\infty}(S_\gamma)}\right) = 0 \end{equation} and that, in $L^2(\Omega)$: \begin{equation}\label{warp} \begin{split} \lim_{h\to 0} d^{0,h} \circ \phi_\gamma & = l(\epsilon_g) - \frac 12 |\nabla v|^2 e_3 + c \Big(B - \frac 12 \nabla v \otimes \nabla v - (\mbox{sym } \epsilon_g)_{2\times2} \Big),\\ \lim_{h\to 0} d^{1,h} \circ \phi_\gamma& = l(\kappa_g) + c\Big(-\nabla^2 v - (\mbox{sym }\kappa_g)_{2\times 2}\Big). \end{split} \end{equation} Now, the convergence statements of Theorem \ref{shallow<1-limsup} are verified by straightforward calculations as in the proofs of \cite[Theorem 2.2]{lemopa7} and \cite[Theorem 1.4]{lemapa}. \begin{remark} One may define the recovery sequence explicitly on $\Omega$, without the diagonal argument presented in the proof above. Namely, we proceed as follows. We approximate $V=(\tilde w, v)$ by a sequence $V^h = (\tilde w^h, v^h) \in W^{2,\infty}(\Omega,\mathbb{R}^3)$ such that, for a sufficiently small, fixed $\epsilon_0>0$: \begin{equation} \begin{split} & \lim_{h\to 0} \|V^h - V\|_{W^{2,2}(\Omega)} = 0, \qquad h\|V^h\|_{W^{2,\infty}(\Omega)} \leq \epsilon_0,\\ & \lim_{h\to 0}\frac{1}{h^2}~ \mu\left\{x\in \Omega; ~~ V^h(x) \neq V(x)\right\} =0. \end{split} \end{equation} Also, let $w^h:\Omega\to {\mathbb R}^3$ be such that $$ \displaystyle B = \lim_{h\to 0} \Big [\mbox{sym} \nabla(w^h_1, w^h_2) + \mbox{sym} (\nabla w^h_3 \otimes \nabla v_0) \Big ]. $$ Without loss of generality, we may assume that $w^h$ are smooth, and that: \begin{equation} \lim_{h\to 0} \sqrt{h}\|w^h\|_{W^{2,\infty}(\Omega)} = 0.
\end{equation} The recovery sequence is then given by: \begin{equation}\label{recoveryseq} \begin{split} y^h(x, t) & = u^h (x+ h^\alpha v_0(x) e_3 + th\vec n^\gamma) \\ & = \left[\begin{array}{c}x \\ h^\alpha v_0 (x) \end{array}\right] + h \left[\begin{array}{c} h^{\alpha} \tilde w^h(x) \\ v^h(x)\end{array}\right] + h^2 \left[\begin{array}{c} w^h_{tan} \\ h^{-\alpha} w^h_3 \end{array}\right] \\ & \qquad \qquad + th \left[\begin{array}{c}-h^\alpha\nabla v_0(x)\\1\end{array}\right] + th \left[\begin{array}{c}-h\nabla v^h(x)\\1\end{array}\right] \\ & \qquad \qquad -th^3 \left[\begin{array}{c} 0 \\ h^{-\alpha} \nabla w^h_3 \end{array}\right] + h^3 t d^{0,h}(x) + \frac{1}{2}h^3 t^2 d^{1,h}(x), \end{split} \end{equation} where the vector fields $d^{0,h}, d^{1,h}\in W^{1,\infty}(\Omega,\mathbb{R}^3)$ are defined similarly as before. \end{remark} \end{document}
\begin{document} \title{Simple Dynamic Spanners with Near-optimal Recourse against an Adaptive Adversary} \begin{abstract} Designing dynamic algorithms against an adaptive adversary whose performance matches that of algorithms assuming an oblivious adversary is a major research program in the field of dynamic graph algorithms. One of the prominent examples whose oblivious-vs-adaptive gap remains maximally large is the \emph{fully dynamic spanner} problem: there exist algorithms assuming an oblivious adversary with near-optimal size-stretch trade-off using only $\operatorname{polylog}(n)$ update time {[}Baswana, Khurana, and Sarkar TALG'12; Forster and Goranci STOC'19; Bernstein, Forster, and Henzinger SODA'20{]}, while against an adaptive adversary, even when we allow infinite time and only count recourse (i.e.~the number of edge changes per update in the maintained spanner), all previous algorithms with stretch at most $\log^{5}(n)$ require at least $\Omega(n)$ amortized recourse {[}Ausiello, Franciosa, and Italiano ESA'05{]}. In this paper, we completely close this gap with respect to recourse by showing algorithms against an adaptive adversary with near-optimal size-stretch trade-off and recourse. More precisely, for any $k\ge1$, our algorithm maintains a $(2k-1)$-spanner of size $O(n^{1+1/k}\log n)$ with $O(\log n)$ amortized recourse, which is optimal in all parameters up to an $O(\log n)$ factor. As a step toward algorithms with small update time (not just recourse), we show another algorithm that maintains a $3$-spanner of size $\tilde{O}(n^{1.5})$ with $\operatorname{polylog}(n)$ amortized recourse \emph{and} simultaneously $\tilde{O}(\sqrt{n})$ worst-case update time.
\begin{comment} Our technique also implies as a by-product a distributed $3$-spanner algorithm using 2 rounds in the CONGEST model, improving the best algorithm with $\operatorname{polylog}(n)$ rounds {[}Rozho\v{n} and Ghaffari STOC'20{]} \end{comment} \end{abstract} \section{Introduction} Increasingly, algorithms are used interactively: for data analysis, for decision making, and, classically, as data structures. Often it is not realistic to assume that a user or an adversary is \emph{oblivious} to the outputs of the algorithms; they can be \emph{adaptive} in the sense that their updates and queries to the algorithm may depend on the previous outputs they saw. Unfortunately, many classical algorithms give strong guarantees only when assuming an oblivious adversary. This calls for the design of algorithms that work against an adaptive adversary whose performance matches that of algorithms assuming an oblivious adversary. Driven by this question, there have been exciting lines of work across different communities in theoretical computer science, including streaming algorithms against an adaptive adversary \cite{ben2020framework,hasidim2020adversarially,woodruff2020tight,alon2021adversarial,kaplan2021separating,Braverman2021adversarial}, statistical algorithms against an adaptive data analyst \cite{hardt2014preventing,dwork2015preserving,bassily2021algorithmic,steinke2017tight}, and very recent algorithms for machine unlearning \cite{gupta2021adaptive}. In the area of this paper, namely dynamic graph algorithms, a continuous effort has also been put into designing algorithms against an adaptive adversary.
This is witnessed by dynamic algorithms for maintaining spanning forests \cite{holm2001poly,NanongkaiS17,Wulff-Nilsen17,NanongkaiSW17,ChuzhoyGLNPS19}, shortest paths \cite{BernsteinC16,Bernstein17,BernsteinChechikSparse,ChuzhoyK19,ChuzhoyS20,gutenberg2020decremental,gutenberg2020deterministic,GutenbergWW20,Chuzhoy21}, matching \cite{BhattacharyaHI15,BhattacharyaHN16,BhattacharyaHN17,BhattacharyaK19,Wajc19,BhattacharyaK21deterministic}, and more. This development led to new powerful tools, such as the expander decomposition and hierarchy \cite{SaranurakW19,GoranciRST20,liS2021} applicable beyond dynamic algorithms \cite{Li21,li2021nearly,abboud2021apmf,zhang2021faster}, and other exciting applications such as the first almost-linear time algorithms for many flow and cut problems \cite{BrandLNPSSSW20,BrandLLSS0W21,Chuzhoy21,BernsteinGS21}. Nevertheless, for many fundamental dynamic graph problems, including graph sparsifiers \cite{AbrahamDKKP16}, reachability \cite{BernsteinPW19}, directed shortest paths \cite{gutenberg2020decremental}, the performance gap between algorithms against an oblivious and adaptive adversary remains large, waiting to be explored and, hopefully, closed. One of the most prominent dynamic problems whose oblivious-vs-adaptive gap is maximally large is the \emph{fully dynamic spanner} problem \cite{AusielloFI06,Elkin11,BaswanaKS12,BodwinK16,ForsterG19,BernsteinFH19,Bernstein2020fully}. 
Given an unweighted undirected graph $G=(V,E)$ with $n$ vertices, an \emph{$\alpha$-spanner} $H$ is a subgraph of $G$ whose pairwise distances between vertices are preserved up to the \emph{stretch} factor of $\alpha$, i.e., for all $u,v\in V$, we have $\mathrm{dist}_{G}(u,v)\le\mathrm{dist}_{H}(u,v)\le\alpha\cdot\mathrm{dist}_{G}(u,v)$.\footnote{Here, $\mathrm{dist}_G(u,v)$ denotes the distance between $u$ and $v$ in graph $G$.} \begin{comment} For any $k\ge1$, there exists a $(2k-1)$-spanner of size $O(n^{1+1/k})$ and this is tight assuming Erdos's conjecture. \end{comment} {} In this problem, we want to maintain an $\alpha$-spanner of a graph $G$ while $G$ undergoes both edge insertions and deletions, and for each edge update, spend as little update time as possible. Assuming an oblivious adversary, near-optimal algorithms have been shown: for every $k\ge1$, there are algorithms maintaining a $(2k-1)$-spanner containing $\tilde{O}(n^{1+1/k})$ edges\footnote{$\tilde{O}(\cdot)$ hides a $\operatorname{polylog}(n)$ factor.}, which is nearly tight with the $\Omega(n^{1+1/k})$ bound from Erd\H{o}s' girth conjecture (proven for the cases where $k=1,2,3,5$ \cite{wenger1991extremal}). Their update times are either $O(k\log^{2}n)$ amortized \cite{BaswanaKS12,ForsterG19} or $O(1)^{k}\log^{3}n$ worst-case \cite{BernsteinFH19}, both of which are polylogarithmic when $k$ is a constant. In contrast, the only known dynamic spanner algorithm against an adaptive adversary, due to \cite{AusielloFI06}, requires $O(n)$ amortized update time and it can maintain a $(2k-1)$-spanner of size $O(n^{1+1/k})$ only for $k\le3$. Whether the $O(n)$ bound can be improved remained open until very recently. Bernstein~et~al.~\cite{Bernstein2020fully} show that a $\log^6(n)$-spanner can be maintained against an adaptive adversary using $\operatorname{polylog}(n)$ amortized update time.
The drawback, however, is that their expander-based technique is too crude to give any stretch smaller than $\operatorname{polylog}(n)$. Hence, for $k\le \log^6(n)$, it is still unclear if the $\Theta(n)$ bound is inherent. Surprisingly, this holds even if we allow infinite time, and only count \emph{recourse}, i.e., the number of edge changes per update in the maintained spanner. The stark difference in performance between the two adversarial settings motivates the main question of this paper: \begin{center} \emph{Is the $\Omega(n)$ recourse bound inherent for fully dynamic spanners against an adaptive adversary? } \par\end{center} \begin{comment} Fixing this drawback has recently become a major research program \begin{itemize} \item Against adaptive adversary an active research program in the whole TCS. \begin{itemize} \item Computation is more and more interactive... that is why. \begin{itemize} \item Example of areas: Streaming against adaptive, adaptive data analysis, machine unlearning. \end{itemize} \item In dynamic graph algorithms, a lot of effort: \begin{itemize} \item examples of problem ... \begin{itemize} \item Bonus: many fast static algorithms. \end{itemize} \item Great problems remains open. \begin{itemize} \item Cut sparsifiers, Spanner, Lower stretch spanning tree, Reachability, Directed SSSP \end{itemizes} \end{itemize} \end{itemize} \item Among these dynamic problem, spanners is one of the most prominent problem. \begin{itemize} \item Say the problem. \item Gap is maximally large. \begin{itemize} \item Series of work culminated in 2008: where near-opt oblivious algorithm was shown. \item Adaptive: \begin{itemize} \item The only work is the first paper on dynamic spanners \cite{AusielloFI06}: $O(n)$ amortized \item Until very recently, \cite{Bernstein2020fully} give the first $o(n)$ update time, but only when stretch $k=\operatorname{polylog}(n)$ is very big. 
\item For smaller $k$, $\Omega(n)$ bound remains the best even if we care only about recourse. \end{itemize} \end{itemize} \end{itemize} \end{itemize} \begin{center} \emph{Can we close the gap for fully dynamic spanners, even in the recourse setting?} \par\end{center} \begin{itemize} \item Recourse is worth studying by itself. \begin{itemize} \item People directly try to minimize as a main goal recourse \begin{itemize} \item balanced graph partitioning, steiner tree/forest, facility location, \item expander hierarchy, Low stretch tree, sometimes it is crucial to separately show a recourse bound stronger than update time bound. \item Problems where current barrier is on recourse. \begin{itemize} \item topo sort, edge coloring \end{itemize} \end{itemize} \item We completely close the gap w.r.t. recourse \end{itemize} \end{itemize} \end{comment} Recourse is an important performance measure of dynamic algorithms. There are dynamic settings where changes in solutions are costly while computation itself is considered cheap, and so the main goal is to directly minimize recourse \cite{gupta2014maintaining,gupta2014online,avin2020dynamic,gupta2020fully}. Even when the final goal is to minimize update time, there are many dynamic algorithms that crucially require the design of subroutines with recourse bounds \emph{stronger than} update time bounds to obtain small final update time \cite{chechik2020dynamic,GoranciRST20,chen2020fast}. Historically, there are dynamic problems, such as planar embedding \cite{HolmR20soda,HolmR20stoc} and maximal independent set \cite{Censor-HillelHK16,BehnezhadDHSS19,ChechikZ19}, where low recourse algorithms were first discovered and later led to fast update-time algorithms. Similar to dynamic spanners, there are other fundamental problems, including topological sorting \cite{BernsteinC18cycle} and edge coloring \cite{bhattacharya2021online}, for which low recourse algorithms remain the crucial bottleneck to faster update time. 
In this paper, we successfully break the $O(n)$ recourse barrier and completely close the oblivious-vs-adaptive gap with respect to recourse for fully dynamic spanners against an adaptive adversary. \begin{theorem} \label{thm:main greedy}There exists a deterministic algorithm that, given an unweighted graph $G$ with $n$ vertices undergoing edge insertions and deletions and a parameter $k\ge1$, maintains a $(2k-1)$-spanner of $G$ containing $O(n^{1+1/k}\log n)$ edges using $O(\log n)$ amortized recourse. \end{theorem} As the above algorithm is deterministic, it automatically works against an adaptive adversary. Each update can be processed in polynomial time. Both the recourse and stretch-size trade-off of \Cref{thm:main greedy} are optimal up to an $O(\log n)$ factor. When ignoring the update time, it even dominates the current best algorithm assuming an oblivious adversary \cite{BaswanaKS12,ForsterG19} that maintains a $(2k-1)$-spanner of size $O(n^{1+1/k}\log n)$ using $O(k\log^{2}n)$ recourse. Therefore, the oblivious-vs-adaptive gap for amortized recourse is closed. The algorithm of \Cref{thm:main greedy} is as simple as possible. As it turns out, a variant of the classical greedy spanner algorithm \cite{AlthoferDDJS93} simply does the job! Although the argument is short and ``obvious'' in hindsight, for us, it was very surprising. This is because the greedy algorithm \emph{sequentially} inspects edges in some fixed order, and its output solely depends on this order. Generally, long chains of dependencies in algorithms are notoriously hard to analyze in the dynamic setting. More recently, a similar greedy approach was dynamized in the context of dynamic maximal independent set \cite{BehnezhadDHSS19} by choosing a \emph{random order} for the greedy algorithm. Unfortunately, the random order must be kept secret from the adversary, and so this fails completely in our adaptive setting.
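For concreteness, the static greedy construction of \cite{AlthoferDDJS93} referred to above can be sketched as follows (a minimal illustrative sketch with our own function names, not the dynamic algorithm of this paper): scan the edges in a fixed order and keep an edge only if its endpoints are currently far apart in the spanner.

```python
from collections import defaultdict

def greedy_spanner(edges, k):
    """Classical greedy (2k-1)-spanner: scan edges in a fixed order
    and keep (u, v) only if dist_H(u, v) > 2k - 1 in the current
    spanner H. The output depends solely on the scan order."""
    adj = defaultdict(set)  # adjacency lists of the spanner H

    def within(u, v, limit):
        # Truncated BFS: is dist_H(u, v) <= limit?
        if u == v:
            return True
        depth, frontier = {u: 0}, [u]
        while frontier:
            nxt = []
            for x in frontier:
                for y in adj[x]:
                    if y not in depth:
                        depth[y] = depth[x] + 1
                        if y == v:
                            return True
                        if depth[y] < limit:
                            nxt.append(y)
            frontier = nxt
        return False

    spanner = []
    for u, v in edges:  # any fixed scan order
        if not within(u, v, 2 * k - 1):
            adj[u].add(v)
            adj[v].add(u)
            spanner.append((u, v))
    return spanner
```

For example, on the complete graph $K_4$ with $k=2$, scanning the edges incident to vertex $0$ first keeps only a spanning star: every later edge already has a path of length $2\le 2k-1=3$ in $H$. Since the output never closes a cycle of length at most $2k$, its girth exceeds $2k$, which is what yields the $O(n^{1+1/k})$ size bound.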
Despite these intuitive difficulties, our key insight is that we can \emph{adaptively choose the order} for the greedy algorithm after each update. This simple idea is enough; see \Cref{sec:greedy} for details. \begin{comment} because, algorithmically, Greedy algorithm seems very hard to dynamize. \begin{thm} $(2k-1)$-spanner optimal amortized recourse \end{thm} \begin{itemize} \item Selling points \begin{itemize} \item First adaptive spanner with $o(n)$ recourse for $k<\operatorname{polylog}(n)$. \item Everything is optimal (close the gap from \cite{BaswanaKS12,ForsterG19}). In fact, the recourse is better! \item As simple as possible. \begin{itemize} \item Greedy! \item Although the argument is short, clean and so ``obvious'' in hindsight, it was surprising for us because, algorithmically, Greedy algorithm seems very hard to dynamize. \begin{itemize} \item Example: maximal matching, MIS (by soheil): Greedy is so sequential, Randomized Greedy is dynamic, but inherently oblivious. \item (Which is why no previous dynamic is nowhere close to Greedy). \end{itemize} \item It turns out that when the computation is ignored, Greedy spanners are very stable. \end{itemize} \end{itemize} \end{itemize} \end{comment} \Cref{thm:main greedy} leaves open the oblivious-vs-adaptive gap for the update time. Below, we make partial progress in this direction by showing an algorithm with near-optimal recourse and simultaneously non-trivial update time. \begin{theorem} \label{thm:main proactive}There exists a randomized algorithm that, given an unweighted graph $G$ with $n$ vertices undergoing edge insertions and deletions, with high probability maintains against an adaptive adversary a $3$-spanner of $G$ containing $\tilde{O}(n^{1.5})$ edges using $\tilde{O}(1)$ amortized recourse \emph{and} $\tilde{O}(\sqrt{n})$ worst-case update time.
\end{theorem} We note again that, prior to the above result, there was no algorithm against an adaptive adversary with $o(n)$ \emph{amortized update time} that could maintain a spanner of stretch less than $\log^6(n)$. \Cref{thm:main proactive} shows that for $3$-spanners, the update time can be $\tilde{O}(\sqrt{n})$ worst-case, while guaranteeing near-optimal recourse. \begin{comment} In the light of \ref{thm:main proactive}, we are quite optimistic that there exists a near-optimal algorithms with polylogarithmic update time against an adaptive adversary. \end{comment} {} We prove \Cref{thm:main proactive} by employing a technique called {\em proactive resampling}, which was recently introduced in \cite{Bernstein2020fully} for handling an adaptive adversary. We apply this technique to a modification of a spanner construction by Grossman and Parter \cite{GrossmanP17} from the distributed computing community. The modification is small, but seems inherently necessary for bounding the recourse. To successfully apply proactive resampling, we refine the technique in two ways. First, we present a simple abstraction in terms of a certain load balancing problem that captures the power of proactive resampling. Previously, the technique was presented and applied specifically to the dynamic cut sparsifier problem \cite{Bernstein2020fully}. But actually, this technique is conceptually simple and quite generic, so our new abstraction will likely facilitate future applications. Our second technical contribution is to generalize and make the proactive resampling technique more flexible. At a very high level, in \cite{Bernstein2020fully}, there is a single parameter governing the sampling probability that is \emph{fixed} throughout the whole process, and their analysis requires this fact. In our load-balancing abstraction, we need to work with multiple sampling probabilities and, moreover, they change through time.
We manage to analyze this generalized process using probabilistic tools about \emph{stochastic domination}, which in turn simplifies the whole analysis.

If a strong recourse bound is not needed, then proactive resampling can be bypassed and the algorithm becomes very simple, deterministic, and has slightly improved bounds, as follows.

\begin{theorem}
\label{thm:main det worst case}There exists a deterministic algorithm that, given an unweighted graph $G$ with $n$ vertices undergoing edge insertions and deletions, maintains a $3$-spanner of $G$ containing $O(n^{1.5})$ edges using $O(\min\{\Delta,\sqrt{n}\}\log n)$ worst-case update time, where $\Delta$ is the maximum degree of $G$.
\end{theorem}

Despite its simplicity, the above result improves the update time of the fastest deterministic dynamic $3$-spanner algorithm \cite{AusielloFI06} from $O(\Delta)$ amortized to $O(\min\{\Delta,\sqrt{n}\}\log n)$ worst-case. In fact, all previous dynamic spanner algorithms with worst-case update time either assume an oblivious adversary \cite{Elkin11,BodwinK16,BernsteinFH19} or have a very large stretch of $n^{o(1)}$ \cite{Bernstein2020fully}. See \Cref{tab:state of the art} for a detailed comparison.

\begin{table}
\begin{adjustwidth}{-.5cm}{}
\footnotesize{
\begin{centering}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\textbf{Ref.} & \textbf{Stretch} & \textbf{Size} & \textbf{Recourse} & \textbf{Update Time} & \textbf{Deterministic?}\tabularnewline
\hline
\hline
\multicolumn{6}{|l|}{\textbf{Against an oblivious adversary}}\tabularnewline
\hline
\cite{BaswanaKS12} & $2k-1$ & $O(k^{8}n^{1+1/k}\log^{2}n)$ & \multicolumn{2}{c|}{$O(7^{k/2})$ amortized} & rand. oblivious\tabularnewline
\hline
\cite{BaswanaKS12,ForsterG19} & $2k-1$ & $O(n^{1+1/k}\log n)$ & \multicolumn{2}{c|}{$O(k\log^{2}n)$ amortized} & rand.
oblivious\tabularnewline
\hline
\cite{BernsteinFH19} & $2k-1$ & $\tilde{O}(n^{1+1/k})$ & \multicolumn{2}{c|}{$O(1)^{k}\log^{3}n$ worst-case} & rand. oblivious\tabularnewline
\hline
\hline
\multicolumn{6}{|l|}{\textbf{Against an adaptive adversary}}\tabularnewline
\hline
\multirow{2}{*}{\cite{AusielloFI06}} & $3$ & $O(n^{1+1/2})$ & \multicolumn{2}{c|}{$O(\Delta)$ amortized} & deterministic \tabularnewline
\cline{2-6}
 & $5$ & $O(n^{1+1/3})$ & \multicolumn{2}{c|}{$O(\Delta)$ amortized} & deterministic \tabularnewline
\hline
\multirow{2}{*}{\cite{Bernstein2020fully}} & \textbf{$\tilde{O}(1)$ } & \textbf{$\tilde{O}(n)$ } & \multicolumn{2}{c|}{\textbf{$\tilde{O}(1)$ }amortized} & rand. adaptive\tabularnewline
\cline{2-6}
 & $n^{o(1)}$ & $\tilde{O}(n)$ & \multicolumn{2}{c|}{$n^{o(1)}$ worst-case} & deterministic \tabularnewline
\hline
\multirow{3}{*}{\textbf{Ours}} & $2k-1$ & $O(n^{1+1/k}\log n)$ & $O(\log n)$ amortized & $\operatorname{poly}(n)$ worst-case & deterministic \tabularnewline
\cline{2-6}
 & $3$ & $\tilde{O}(n^{1+1/2})$ & $\tilde{O}(1)$ amortized & $\tilde{O}(\sqrt{n})$ worst-case & rand. adaptive\tabularnewline
\cline{2-6}
 & $3$ & $O(n^{1+1/2})$ & \multicolumn{2}{c|}{$O(\min\{\Delta,\sqrt{n}\}\log n)$ worst-case} & deterministic\tabularnewline
\hline
\end{tabular}
\par\end{centering}
}
\end{adjustwidth}
\caption{\label{tab:state of the art}The state of the art of fully dynamic spanner algorithms.}
\end{table}

\noindent \textbf{Organization.} In \Cref{sec:greedy}, we give a very short proof of \Cref{thm:main greedy}. In \Cref{sec:3spanner}, we prove \Cref{thm:main proactive} assuming a crucial lemma (\Cref{th:proactive:resampling:main}) needed for bounding the recourse. To prove this lemma, we show a new abstraction for the proactive resampling technique in \Cref{sect:job-machine} and complete the analysis in \Cref{sec:reduce to load}.
Our side result, \Cref{thm:main det worst case}, is based on the static construction presented in \Cref{sub:sec:static}, and its simple proof is given in \Cref{sub:sec:dynamic:1}.

\section{Deterministic Spanner with Near-optimal Recourse}
\label{sec:greedy}

Below, we show a decremental algorithm that \emph{handles edge deletions only} with near-optimal recourse. This will imply \Cref{thm:main greedy} by a known reduction formally stated in \Cref{lemma:fully_dyn_reduction}. To describe our decremental algorithm, let us first recall the classic greedy algorithm.

\noindent \textbf{The Greedy Algorithm.} Alth\"{o}fer et al.~\cite{dcg/Althofer93} gave the following algorithm for computing $(2k-1)$-spanners. Given a graph $G=(V,E)$ with $n$ vertices, fix some \emph{order} of the edges in $E$. Then, we inspect the edges one by one according to this order. Initially, $E_H = \emptyset$. When we inspect $e=(u,v)$, if $\mathrm{dist}_H(u,v) \geq 2k$, then we add $e$ into $E_H$; otherwise, we ignore it. We have the following:

\begin{theorem}[\cite{dcg/Althofer93}]
\label{thm:greedy classic} The greedy algorithm above outputs a $(2k-1)$-spanner $H=(V,E_H)$ of $G$ containing $O(n^{1+1/k})$ edges.
\end{theorem}

It is widely believed that the greedy algorithm behaves extremely badly in the dynamic setting: a single edge update can drastically change the greedy spanner. In contrast, when we allow the order in which greedy scans the edges to change adaptively, we can avoid removing a spanner edge until it is deleted by the adversary. This key insight leads to optimal recourse. When recourse is the only concern, prior to our work this result was known only for spanners with polylog stretch, which is a much easier problem.

\noindent \textbf{The Decremental Greedy Algorithm.} Now we describe our deletion-only algorithm. Let $G$ be an initial graph with $m$ edges and $H = (V,E_H)$ be a $(2k-1)$-spanner with $O(n^{1+1/k})$ edges.
Suppose an edge $e = (u,v)$ is deleted from the graph $G$. If $(u,v) \notin E_H$, then we do nothing. Otherwise, we do the following. We first remove $e$ from $E_H$. Then we inspect only the remaining non-spanner edges $E \setminus E_H$, one by one in an arbitrary order. (Note that the order is \emph{adaptively} defined and not fixed through time, because it is defined only on $E \setminus E_H$.) When we inspect $(x,y) \in E \setminus E_H$, as in the greedy algorithm, we ask whether $\mathrm{dist}_H(x,y) \geq 2k$ and add $(x,y)$ to $E_H$ if and only if this is the case. This completes the description of the algorithm.

\noindent \textbf{Analysis.} We start with the most crucial point. We claim that the new output after removing $e$ is as if we ran the greedy algorithm that first inspects the edges in $E_{H}$ (with the order within $E_{H}$ preserved) and later inspects the edges in $E\setminus E_{H}$. To see the claim, we argue that if the greedy algorithm inspects $E_H$ first, then the whole set $E_{H}$ must be included, just as $E_H$ remains in the new output. To see this, note that, for each $(x,y)\in E_{H}$, we had $\mathrm{dist}_{H}(x,y)\ge2k$ when $(x,y)$ was inspected according to the original order. But if we move the whole set $E_{H}$ to the prefix of the order (while preserving the order within $E_H$), it must still be the case that $\mathrm{dist}_{H}(x,y)\ge2k$ when $(x,y)$ is inspected, and so $(x,y)$ must be added into the spanner by the greedy algorithm. So our algorithm indeed ``simulates'' inspecting $E_H$ first, and then it explicitly implements the greedy algorithm on the remaining part $E\setminus E_{H}$. We conclude that it simulates the greedy algorithm, and therefore, by \Cref{thm:greedy classic}, the new output is a $(2k-1)$-spanner with $O(n^{1+1/k})$ edges. The next important point is that, whenever an edge $e$ is added into the spanner $H$, the algorithm never tries to remove $e$ from $H$. So $e$ remains in $H$ until it is deleted by the adversary.
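To make the procedure concrete, the following is a minimal Python sketch of the decremental greedy algorithm just described. It is an illustrative reference implementation only: distances are recomputed by a plain depth-bounded BFS, so it does not reflect any update-time bound, but it does exhibit the recourse behavior (a spanner edge is never evicted; on a deletion, only non-spanner edges are re-inspected).

```python
from collections import deque

def dist_at_most(H, u, v, limit):
    # BFS in the spanner H (dict: vertex -> set of neighbors),
    # stopping once the distance would exceed `limit`.
    if u == v:
        return 0
    seen = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if seen[x] >= limit:
            break
        for y in H.get(x, ()):
            if y not in seen:
                seen[y] = seen[x] + 1
                if y == v:
                    return seen[y]
                q.append(y)
    return limit + 1  # stands in for "distance > limit"

class DecrementalGreedySpanner:
    def __init__(self, edges, k):
        self.k = k
        self.E = set(frozenset(e) for e in edges)
        self.H = {}            # adjacency sets of the spanner
        self.EH = set()        # spanner edge set
        for e in self.E:       # classic greedy pass, arbitrary order
            self._try_add(e)

    def _try_add(self, e):
        u, v = tuple(e)
        # Greedy rule: keep e iff the current spanner distance is >= 2k.
        if dist_at_most(self.H, u, v, 2 * self.k - 1) >= 2 * self.k:
            self.EH.add(e)
            self.H.setdefault(u, set()).add(v)
            self.H.setdefault(v, set()).add(u)

    def delete(self, u, v):
        e = frozenset((u, v))
        self.E.discard(e)
        if e not in self.EH:
            return             # non-spanner edge: nothing to do
        self.EH.remove(e)
        self.H[u].discard(v)
        self.H[v].discard(u)
        # Re-run greedy only over the remaining non-spanner edges.
        for f in list(self.E - self.EH):
            self._try_add(f)
```

Note that `delete` never removes a surviving spanner edge, matching the analysis above: each edge enters the spanner at most once over the whole deletion sequence.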
Therefore, the total recourse is $O(m)$. With this, we conclude the following key lemma:

\begin{lemma}
\label{thm:greedy_spanner} Given a graph $G$ with $n$ vertices and $m$ initial edges undergoing only edge deletions, the algorithm above maintains a $(2k-1)$-spanner $H$ of $G$ of size $O(n^{1+1/k})$ with $O(m)$ total recourse.
\end{lemma}

By plugging \Cref{thm:greedy_spanner} into the fully-dynamic-to-decremental reduction by \cite{BaswanaKS12} below, we conclude \Cref{thm:main greedy}. We also include the proof of \Cref{lemma:fully_dyn_reduction} in \Cref{sec:reduction} for completeness.

\begin{lemma}[\cite{BaswanaKS12}]
\label{lemma:fully_dyn_reduction} Suppose that, for a graph $G$ with $n$ vertices and $m$ initial edges undergoing only edge deletions, there is an algorithm that maintains a $(2k-1)$-spanner $H$ of size $O(S(n))$ with $O(F(m))$ total recourse, where $F(m) = \Omega(m)$. Then there exists an algorithm that maintains a $(2k-1)$-spanner $H'$ of size $O(S(n) \log n)$ in a fully dynamic graph with $n$ vertices using $O(F(U \log n))$ total recourse, where $U$ is the total number of updates, starting from an empty graph.
\end{lemma}

\section{$3$-Spanner with Near-optimal Recourse and Fast Update Time}
\label{sec:3spanner}

In this section, we prove \Cref{thm:main proactive} by showing an algorithm for maintaining a $3$-spanner with small update time {\em in addition} to small recourse. We start by explaining a basic static construction and the needed data structures in \Cref{sub:sec:static}, and we present the dynamic algorithm in \Cref{sub:sec:dynamic:1}. Assuming our key lemma (\Cref{th:proactive:resampling:main}) about proactive resampling, most details here are quite straightforward. Hence, due to space constraints, most proofs are either omitted or deferred to \Cref{sec:app:missing:3spanner}.
Throughout this section, we let $N_G(u) = \{ v \in V : (u, v) \in E \}$ denote the set of neighbors of a node $u \in V$ in a graph $G = (V, E)$, and we let $\text{deg}_G(u) = |N_G(u)|$ denote the degree of the node $u$ in the graph $G$. \subsection{A Static Construction and Basic Data Structures} \label{sub:sec:static} \textbf{A Static Construction.} We now describe our static algorithm. Though our presentation is different, our algorithm is almost identical to \cite{GrossmanP17}. The only difference is that we do not distinguish small-degree vertices from large-degree vertices. We first arbitrarily partition $V$ into $\sqrt{n}$ equal-sized \emph{buckets} $V_1, \ldots, V_{\sqrt{n}}$. We then construct three sets of edges $E_1, E_2, E_3$. For every bucket $V_i, i \in [1,\sqrt{n}]$, we do the following. First, for all $v \in V \setminus V_i$, if $V_i \cap N_G(v)$ is not empty, we choose a neighbor $c_i(v) \in V_i \cap N_G(v)$ and add $(v,c_i(v))$ to $E_1$. We call $c_i(v)$ an \emph{$i$-partner} of $v$. Next, for every edge $e=(u,v)$, where both $u,v \in V_i$, we add $e$ to $E_2$. Lastly, for $u,u' \in V_i$ with overlapping neighborhoods, we pick an arbitrary common neighbor $w_{uu'} \in N_G(u) \cap N_G(u')$ and add $(u,w_{uu'}),(w_{uu'},u')$ to $E_3$. We refer to the node $w_{uu'}$ as the \emph{witness} for the pair $u,u'$. \begin{claim} \label{cl:static:stretch-size} The subgraph $H = (V, E_1 \cup E_2 \cup E_3)$ is a $3$-spanner of $G$ consisting of at most $O(n\sqrt{n})$ edges. \end{claim} \noindent \textbf{Dynamizing the Construction.} Notice that it suffices to separately maintain $E_1, E_2, E_3$, in order to maintain the above dynamic $3$-spanner. Maintaining $E_1$ and $E_2$ is straightforward and can be done in a fully-dynamic setting in $O(1)$ worst-case update time. Indeed, if $e = (u,c_i(u))\in E_1$ is deleted, then we pick a new $i$-partner $c_i(u) \in V_i \cap N_G(u)$ for $u$. 
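The static construction above can be sketched in a few lines of Python. This is a simplified, unoptimized rendering for illustration only: the near-equal bucket partition and the `min(...)` tie-breaking stand in for the arbitrary choices of $i$-partners and witnesses in the text.

```python
import math

def static_3_spanner(adj):
    """Sketch of the bucket-based static 3-spanner construction.

    `adj` maps each vertex to the set of its neighbors (undirected, simple).
    Returns the spanner edge set E1 | E2 | E3 as a set of frozensets.
    """
    V = sorted(adj)
    n = len(V)
    b = max(1, math.isqrt(n))                 # ~sqrt(n) buckets
    buckets = [V[i::b] for i in range(b)]     # arbitrary near-equal partition
    E1, E2, E3 = set(), set(), set()
    for B in buckets:
        Bset = set(B)
        # E1: every outside vertex keeps one edge into the bucket (i-partner).
        for v in V:
            if v not in Bset:
                inter = Bset & adj[v]
                if inter:
                    E1.add(frozenset((v, min(inter))))
        # E2: all intra-bucket edges.
        for u in B:
            for w in adj[u] & Bset:
                E2.add(frozenset((u, w)))
        # E3: one witness (common neighbor) per intra-bucket pair.
        for u in B:
            for w in B:
                if u < w:
                    common = adj[u] & adj[w]
                    if common:
                        x = min(common)       # arbitrary witness w_{uw}
                        E3.add(frozenset((u, x)))
                        E3.add(frozenset((x, w)))
    return E1 | E2 | E3
```

For an edge $(u,v)$ with $u$ and $v$ in different buckets, the spanner path of length at most $3$ is $v \to c_i(v) \to w \to u$, where $w$ is the witness for the intra-bucket pair $u, c_i(v)$; this is exactly the stretch argument behind the claim above.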
Maintaining $V_i \cap N_G(u)$ for all $u$ allows us to update $c_i(u)$ efficiently. If $e=(u,u') \in E_2$, where $u,u' \in V_i$, is deleted, then we do nothing. The remaining task, maintaining $E_3$, is the most challenging part of our dynamic algorithm. Before we proceed, let us define a subroutine and a data structure needed to implement our algorithm.

\noindent \textbf{Resampling Subroutine.} We define {\sc Resample}$(u,u')$ as a subroutine that \emph{uniformly samples} a witness $w_{uu'}$ (i.e.~a common neighbor of $u$ and $u'$), if one exists. Notice that we can obtain $E_3$ by calling {\sc Resample}$(u,u')$ for all $u,u' \in V_i$ and for all $i \in [1,\sqrt{n}]$.

\noindent \textbf{Partnership Data Structures.} The subroutine above hints that we need a data structure for maintaining the common neighborhoods of all pairs of vertices that are in the same bucket. For vertices $u$ and $v$ within the same bucket, we let $P(u,v) = N_G(u) \cap N_G(v)$ be the \emph{partnership} between $u$ and $v$. To maintain these structures dynamically, when an edge $(u,v)$ is inserted, if $u \in V_i$ and $v \in V_j$, we add $v$ to $P(u,u')$ for all $u' \in V_i \cap N_G(v) \setminus \{u\}$, and symmetrically add $u$ to $P(v,v')$ for all $v' \in V_j \cap N_G(u) \setminus \{v\}$. This clearly takes $O(\sqrt{n} \log{n})$ worst-case time per edge insertion, and edge deletions are handled symmetrically. As we want to prove that our final update time is $\tilde{O}(\sqrt{n})$, we can assume from now on that $E_1$, $E_2$, and all partnerships are maintained in the background.

\subsection{Maintaining Witnesses via Proactive Resampling}
\label{sub:sec:dynamic:1}

\textbf{Remark.} For clarity of exposition, we will present an amortized update time analysis. Using a standard approach, we can make the update time worst-case; we discuss this issue at the end of this section. Our dynamic algorithm runs in {\em phases}, where each phase lasts for $n \sqrt{n}$ consecutive updates (edge insertions/deletions).
As a spanner is decomposable\footnote{Let $G_1 = (V,E_1)$ and $G_2 = (V,E_2)$. If $H_1$ and $H_2$ are $\alpha$-spanners of $G_1$ and $G_2$ respectively, then $H_1 \cup H_2$ is an $\alpha$-spanner of $G_1 \cup G_2$.}, it suffices to maintain a $3$-spanner $H$ of the graph undergoing only edge deletions within this phase and then include all edges inserted within this phase into $H$, which increases the size of $H$ by at most $n\sqrt{n}$ edges. Henceforth, \emph{we only need to present how to initialize a phase and how to handle edge deletions within each phase.} The reason for this reduction is that our proactive resampling technique naturally works for decremental graphs.

\noindent \textbf{Initialization.} At the start of the phase, since our partnership structures have only processed the edge deletions from the previous phase, we first update the partnerships with all the $O(n\sqrt{n})$ edges inserted during the previous phase. Then, we call {\sc Resample}$(u,u')$ for all $u,u' \in V_i$ for all $i \in [1,\sqrt{n}]$ to replace all witnesses and initialize $E_3$ for this phase.

\noindent \textbf{Difficulty of Bounding Recourse.} Maintaining $E_3$ (equivalently, the witnesses) in $\tilde{O}(\sqrt{n})$ worst-case time is straightforward because the partnership data structure has $O(\sqrt{n} \log{n})$ update time. However, our goal is to show $\tilde{O}(1)$ amortized recourse, which is the most challenging part of our analysis. To see the difficulty, suppose $(u,v)$ is deleted and $u \in V_i$; the vertex $v$ may serve as the witness $w_{uu'}$ for every $u' \in V_i$. In this case, deleting $(u,v)$ forces the algorithm to find a new witness $w_{uu'}$ for all $u' \in V_i$, which implies a recourse of $|V_i| = \Omega(\sqrt {n})$. To circumvent this issue, we apply the technique of {\em proactive resampling}, as described below.

\noindent \textbf{Proactive Resampling.} We keep track of a {\em time variable} $T$: the number of updates to $G$ that have occurred in this phase so far.
$T$ is initially $0$, and we increment $T$ each time an edge is deleted from $G$. In addition, for all $i \in [1, \sqrt{n}]$ and $u, u' \in V_i$ with $u \neq u'$, we maintain: (1) $w_{uu'}$, the {\em witness} for the pair $u$ and $u'$, and (2) a set $\text{{\sc Schedule}}[u, u']$ of positive integers, which is the set of timesteps at which our algorithm intends to {\em proactively resample} a new witness for $u,u'$. This set grows adaptively each time the adversary deletes $(u,w_{uu'})$ or $(w_{uu'}, u')$. Finally, to ensure that the update time of our algorithm remains small, for each $\lambda \in [1, n \sqrt{n}]$ we maintain a list $\text{{\sc List}}[\lambda]$, which consists of all those pairs of nodes $(x, x')$ such that $\lambda \in \text{{\sc Schedule}}[x, x']$. When an edge $(u,v)$ with $u \in V_i$ and $v \in V_j$ is deleted, we perform the following operations. First, for all $u' \in V_i$ that had $v = w_{uu'}$ as a common neighbor with $u$ before deleting $(u,v)$, we add the timesteps $\{T + 2^k \mid T + 2^k \leq n \sqrt{n}, k \in \mathbb N \}$ to $\text{{\sc Schedule}}[u,u']$. Second, analogously, for all $v' \in V_j$ that had $u = w_{vv'}$ as a common neighbor with $v$ before deleting $(u,v)$, we add the timesteps $\{T + 2^k \mid T + 2^k \leq n \sqrt{n}, k \in \mathbb N \}$ to $\text{{\sc Schedule}}[v,v']$. Third, we set $T \leftarrow T+1$. Lastly, for each $(x,x') \in \text{{\sc List}}[T]$, we call the subroutine $\text{{\sc Resample}}(x,x')$. The key lemma below summarizes a crucial property of this dynamic algorithm; its proof appears in \Cref{sect:job-machine}.

\begin{lemma}
\label{th:proactive:resampling:main} During a phase consisting of $n\sqrt{n}$ edge deletions, our dynamic algorithm makes at most $\tilde{O}(\sqrt{n})$ calls to the {\sc Resample} subroutine after each edge deletion. Moreover, the {\em total} number of calls to the {\sc Resample} subroutine during an {\em entire phase} is at most $\tilde{O}(n \sqrt{n})$ w.h.p.
Both these guarantees hold against an adaptive adversary.
\end{lemma}

\noindent \textbf{Analysis of Recourse and Update Time.} Our analysis is given in the lemmas below.

\begin{lemma}[Recourse]
\label{lm:amortized:recourse} The amortized recourse of our algorithm is $\tilde{O}(1)$ w.h.p., against an adaptive adversary.
\end{lemma}

\begin{proof}
To maintain the edge sets $E_1$ and $E_2$, we pay a worst-case recourse of $O(1)$ per update. For maintaining the edge set $E_3$, our total recourse during the entire phase is at most $O(1)$ times the number of calls made to the {\sc Resample}$(.,.)$ subroutine, which in turn is at most $\tilde{O}(n \sqrt{n})$ w.h.p.~(see Lemma~\ref{th:proactive:resampling:main}). Finally, while computing $E_3$ at the beginning of a phase, we pay $O(n \sqrt{n})$ recourse. Therefore, the overall total recourse during an entire phase is $\tilde{O}(n \sqrt{n})$ w.h.p. Since a phase lasts for $n \sqrt{n}$ time steps, we conclude the lemma.
\end{proof}

\begin{lemma}[Worst-case Update Time within a Phase]
\label{lm:worstcase} Within a phase, our algorithm handles a given update in $\tilde{O}(\sqrt{n})$ worst-case time w.h.p.
\end{lemma}

\begin{proof}
Recall that the sets $E_1, E_2$ can be maintained in ${O}(1)$ worst-case update time. Henceforth, we focus on the time required to maintain the edge set $E_3$ after a given update in $G$. Excluding the time spent on maintaining the partnership data structure (which is $\tilde{O}(\sqrt{n})$ in the worst case anyway), this is proportional to $\tilde{O}(1)$ times the number of calls made to the {\sc Resample}$(.,.)$ subroutine, {\em plus} $\tilde{O}(1)$ times the number of pairs $(u,u')$ or $(v,v')$ for which we need to adjust $\text{{\sc Schedule}}[u,u']$. The former is w.h.p.~at most $\tilde{O}(\sqrt{n})$ according to Lemma~\ref{th:proactive:resampling:main}, while the latter is also at most $\tilde{O}(\sqrt{n})$ since $|V_i|, |V_j| \leq \sqrt{n}$.
Thus, within a phase, we can also maintain the set $E_3$ w.h.p.~in $\tilde{O}(\sqrt{n})$ worst-case update time.
\end{proof}

Although the above lemma says that we can handle each edge deletion in $\tilde{O}(\sqrt{n})$ worst-case update time, our current algorithm does not guarantee worst-case update time yet, because the initialization time exceeds the $\tilde{O}(\sqrt{n})$ bound. In more detail, observe that the total initialization time is $O(n\sqrt n) \times O(\sqrt{n} \log{n})$, because we need to insert $O(n\sqrt{n})$ edges into the partnership data structures, which have $O(\sqrt{n} \log{n})$ update time. Over a phase of $n\sqrt{n}$ steps, this implies only $\tilde{O}(\sqrt{n})$ amortized update time. However, since the algorithm takes a long time only at the initialization of a phase, but takes $\tilde{O}(\sqrt{n})$ worst-case time for each update during the phase, we can apply the standard building-in-the-background technique (see~\Cref{sub:app:spread:work}) to de-amortize the update time. We conclude the following:

\begin{lemma}[Worst-case Update Time for the Whole Update Sequence]
\label{lm:worstcase:updatetime} W.h.p., the worst-case update time of our dynamic algorithm is $\tilde{O}(\sqrt{n})$.
\end{lemma}

\section{Proactive Resampling: Abstraction}
\label{sect:job-machine}

The goal of this section is to prove \Cref{th:proactive:resampling:main}, which bounds the recourse of our $3$-spanner algorithm. This is the most technical part of this paper. To ease the analysis, we abstract the situation of \Cref{th:proactive:resampling:main} as a dynamic problem of assigning jobs to machines in which an adversary keeps deleting machines and the goal is to minimize the total number of reassignments. Below, we formalize this problem and show how to use it to bound the recourse of our $3$-spanner algorithm.
Our abstraction has two technical contributions: (1) it allows us to easily work with multiple sampling probabilities, whereas \cite{Bernstein2020fully} fixed a single sampling-probability parameter; (2) the simplicity of this abstraction exposes the generality of the proactive resampling technique itself; it is not specific to the cut sparsifier problem considered in \cite{Bernstein2020fully}.

\noindent \textbf{Jobs, Machines, Routines, Assignments, and Loads.} Let $J$ denote a set of \emph{jobs} and $M$ denote a set of \emph{machines}. We think of them as the two vertex sets of the (hyper)graph $G=(J,M,R)$.\footnote{This graph is different from the graph in which we maintain a spanner in the previous sections.} A \emph{routine} $r \in R$ is a hyperedge of $G$ such that $r$ contains exactly one job-vertex from $J$, denoted by $\mathrm{job}(r)\in J$, and may contain several machine-vertices from $M$, denoted by $M(r) \subseteq M$. Each routine $r$ in $G$ means that there is a routine for handling $\mathrm{job}(r)$ by \emph{simultaneously} calling the machines in $M(r)$. Note that $r = \{\mathrm{job}(r)\}\cup M(r)$. We say that $r$ is a \emph{routine for} $\mathrm{job}(r)$. For each machine $x \in M(r)$, we say that routine $r$ \emph{involves} machine $x$, or that $r$ \emph{contains} $x$. The set $R(x)$ is then defined as the set of routines involving machine $x$. Observe that there are $\mathrm{deg}_G(u)$ different routines for handling job $u$. An \emph{assignment} $A = (J,M, F \subseteq R)$ is simply a subgraph of $G$. We say that an assignment $A$ is \emph{feasible} iff $\mathrm{deg}_A(u) = 1$ for every job $u \in J$ with $\mathrm{deg}_G(u)>0$. That is, every job is handled by some routine, if one exists. When $r \in A$, we say that $\mathrm{job}(r)$ is \emph{handled by} routine $r$. Finally, given an assignment $A$, the \emph{load} of a machine $x$ is the number of routines in $A$ involving $x$; in other words, it is the degree of $x$ in $A$, namely $\mathrm{deg}_A(x)$.
We note explicitly that our end goal is not to optimize the loads of the machines. Rather, we want to minimize the number of reassignments needed to maintain a feasible assignment throughout the process. In this section, we usually use $u,v,w$ to denote jobs, $x,y,z$ to denote machines, and $e$ or $r$ to denote routines or (hyper)edges.

\noindent \textbf{The Dynamic Problem.} Our problem is to maintain a feasible assignment $A$ while the graph $G$ undergoes a sequence of machine deletions (which might stop before all machines are deleted). More specifically, when a machine $x$ is deleted, all routines $r$ containing $x$ are deleted as well. But when routines in $A$ are deleted, $A$ might no longer be feasible, and we need to add new edges to $A$ to make it feasible again. Our goal is to minimize the total number of routines ever added to $A$. To be more precise, write the graph $G$ and the assignment $A$ after $t$ machine deletions as $G^t = (J,M,R^t)$ and $A^t =(J,M,F^t)$, respectively. Here, we define the \emph{recourse} at timestep $t$ to be $|F^t \setminus F^{t-1}|$, the number of routines added into $A$ at timestep $t$. When the adversary deletes $T$ machines, the goal is then to minimize the total recourse $\sum_{t = 1}^{T} |F^t \setminus F^{t-1}|$.

\noindent \textbf{The Algorithm: Proactive Resampling.} To describe our algorithm, first let $\textsc{Resample}(u)$ denote the process of reassigning job $u$ to a uniformly random routine for $u$. In the graph language, $\textsc{Resample}(u)$ removes the edge $r$ with $\mathrm{job}(r) = u$ from $A$, samples an edge $r'$ from $\{r \in R~|~\mathrm{job}(r) = u\}$, and then adds $r'$ into $A$. At timestep $0$, we initialize a feasible assignment $A^0$ by calling $\textsc{Resample}(u)$ for every job $u \in J$, i.e., we assign each job $u$ to a random routine for $u$. Below, we describe how to handle deletions. Let $T$ be the total number of machine deletions.
For each job $u$, we maintain $\textsc{Schedule}(u) \subseteq [T]$ containing all timesteps at which we will invoke $\textsc{Resample}(u)$. That is, at any timestep $t$, before the adversary takes any action, we call $\textsc{Resample}(u)$ if $t \in \textsc{Schedule}(u)$. We say that the adversary \emph{touches} $u$ at timestep $t$ if the routine $r \in A^t$ handling $u$ at time $t$ is deleted. When $u$ is touched, we call $\textsc{Resample}(u)$ and, very importantly, we put the timesteps $t+1, t+2, t+4, \ldots$, i.e., all $t + 2^i \le T$, into $\textsc{Schedule}(u)$. This is the action that we call \emph{proactive resampling}: we do not just resample a routine for $u$ when $u$ is touched, but also do so proactively in the future. This completes the description of the algorithm.

Clearly, $A$ remains a feasible assignment throughout, because whenever a job $u$ is touched, we immediately call $\textsc{Resample}(u)$. The key lemma below states that the algorithm has low recourse, even if the adversary is adaptive in the sense that each deletion at time $t$ may depend on the assignments before time $t$.
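The abstract algorithm just described can be sketched compactly. The following Python rendering is for illustration only; the routine identifiers and the explicit `horizon` parameter (our $T$) are artifacts of the sketch, and no attention is paid to efficiency.

```python
import random
from collections import defaultdict

class ProactiveResampling:
    """Sketch of the job-machine proactive resampling algorithm.

    `routines` maps a routine id to a pair (job, frozenset_of_machines),
    i.e., a hyperedge {job(r)} | M(r) of the graph G = (J, M, R).
    """

    def __init__(self, routines, horizon):
        self.routines = dict(routines)     # live routines R^t
        self.T = horizon                   # total number of machine deletions
        self.t = 0                         # current timestep
        self.assigned = {}                 # job -> id of the routine handling it
        self.schedule = defaultdict(set)   # timestep -> jobs to resample then
        for u in {job for job, _ in self.routines.values()}:
            self.resample(u)               # initialize a feasible assignment A^0

    def resample(self, u):
        # Reassign job u to a uniformly random live routine for u, if any.
        opts = [r for r, (job, _) in self.routines.items() if job == u]
        if opts:
            self.assigned[u] = random.choice(opts)
        else:
            self.assigned.pop(u, None)     # deg(u) = 0: u cannot be handled

    def delete_machine(self, x):
        self.t += 1
        # Scheduled (proactive) resamples happen before the adversary's action.
        for u in self.schedule.pop(self.t, ()):
            self.resample(u)
        # Delete every routine involving machine x; record touched jobs.
        touched = set()
        for r in [r for r, (_, ms) in self.routines.items() if x in ms]:
            job, _ = self.routines.pop(r)
            if self.assigned.get(job) == r:
                touched.add(job)
        # Touched jobs are resampled now and proactively at t+1, t+2, t+4, ...
        for u in touched:
            self.resample(u)
            i = 0
            while self.t + (1 << i) <= self.T:
                self.schedule[self.t + (1 << i)].add(u)
                i += 1
```

Feasibility is maintained exactly as argued above: a touched job is resampled immediately, and the geometric schedule $t+1, t+2, t+4, \ldots$ implements the proactive future resamples.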
\begin{lemma}
\label{lemma:second_guarantee} Let $T$ be the total number of machine deletions. The total recourse of the algorithm running against an adaptive adversary is $O\big( |J|\log(\Delta)\log^2|M| + T\log^2 |M|\big)$ with high probability, where $\Delta$ is the maximum degree of any job. Moreover, if the load of a machine never exceeds $\lambda$, then our algorithm has $O(\lambda \log{T})$ worst-case recourse.
\end{lemma}

We will prove \Cref{lemma:second_guarantee} in \Cref{sec:reduce to load}. Before proceeding any further, however, we argue why \Cref{lemma:second_guarantee} directly bounds the recourse of our $3$-spanner algorithm.

\noindent \textbf{Back to $3$-spanners: Proof of \Cref{th:proactive:resampling:main}.} It is easy to see that maintaining $E_3$ in our $3$-spanner algorithm can be framed exactly as the job-machine load-balancing problem. Suppose the given graph is $G = (V,E)$ where $n = |V|$ and $m = |E|$. We create a job $j_{uu'}$ for each pair of vertices $u,u' \in V_i$ with $u \neq u'$. For each edge $e \in E$, we create a machine $x_e$. Hence, $|J| = O(n^{1.5})$ and $|M| = |E| = m$. For each job, as we want to have a witness $w_{uu'}$, this witness corresponds to the two edges $e=(u,w_{uu'})$ and $e' = (u',w_{uu'})$. Hence, we create a routine $r = (j_{uu'}, e, e')$ for each $u,u' \in V_i$ and common neighbor $w_{uu'}$. Since there are at most $n$ common neighbors of $u$ and $u'$, we have $\Delta = O(n)$. A feasible assignment then corresponds to finding a witness for each job, and our algorithm for maintaining the spanner is exactly this load-balancing algorithm.
Hence, the recourse bound for the $3$-spanner construction follows from Lemma~\ref{lemma:second_guarantee}, where we delete $T=O(n^{1.5})$ machines. As $|J| = O(n^{1.5})$, the total recourse bound becomes $O(n^{1.5} \log^3 n)$, and averaging this bound over all $T = O(n^{1.5})$ timesteps yields $O(\log^3 n )$ amortized recourse.

\section{Proactive Resampling: Analysis (Proof of \Cref{lemma:second_guarantee})}
\label{sec:reduce to load}

The first step in proving \Cref{lemma:second_guarantee} is to bound the loads of the machines, because whenever a machine $x$ is deleted, its load $\mathrm{deg}_A(x)$ contributes to the total recourse. What would be the expected load of each machine? For intuition, suppose that the adversary was \emph{oblivious}. Recall that $R(x)$ denotes the set of all routines involving machine $x$. Then the expected load of machine $x$ would be $\sum_{r\in R(x)} 1 / \mathrm{deg}_G (\mathrm{job}(r))$, because each job samples its routine uniformly at random, and this is concentrated with high probability by a Chernoff bound. Although in reality our adversary is \emph{adaptive}, our plan is to prove that the loads of the machines still do not exceed their expectations in the oblivious setting by too much. This motivates the following definitions.

\begin{definition}
The \emph{target load} of machine $x$ is $\mathrm{target}(x) = \sum_{r \in R(x)} 1 / \mathrm{deg}_G (\mathrm{job}(r))$. The target load of $x$ at time $t$ is $\mathrm{target}^t(x) = \sum_{r \in R^t(x)} 1 / \mathrm{deg}_{G^t} (\mathrm{job}(r))$. An assignment $A$ has \emph{overhead} $(\alpha, \beta)$ iff $ \mathrm{deg}_A(x) \leq \alpha \cdot \mathrm{target}(x) + \beta$ for every machine $x \in M$.
\end{definition}

Our key technical lemma shows that, via proactive resampling, the loads of the machines are indeed close to their expectations in the oblivious setting; that is, the maintained assignment has small overhead.
Recall that $T$ is the total number of machine-deletions. \begin{lemma} \label{lemma:overhead} With high probability, the assignment $A$ maintained by our algorithm always has overhead $\left (O(\log{T}), O(\log |M|) \right)$, even when the adversary is adaptive. \end{lemma} Due to space limits, the proof of \Cref{lemma:overhead} is given in \Cref{sec:proof_overhead}. Assuming \Cref{lemma:overhead}, we formally show how the bound on machine loads implies the bound on total recourse, which proves \Cref{lemma:second_guarantee}. \begin{proof}[Proof of \Cref{lemma:second_guarantee}] Let $T$ be the total number of deletions. Observe that the total recourse up to time $T$ is precisely the total number of $\textsc{Resample}(.)$ calls up to time $T$, which in turn is at most the total number of $\textsc{Resample}(.)$ calls put into $\textsc{Schedule}(.)$ from time $1$ until time $T$. Therefore, our strategy is to bound, for each time $t$, the number of $\textsc{Resample}(.)$ calls \emph{newly generated} at time $t$. Let $x^t$ be the machine deleted at time $t$. Observe that this is at most $O(\log T) \times \deg_{A^t}(x^t)$, where $\deg_{A^t}(x^t)$ is $x^t$'s load at time $t$ and the $O(\log T)$ factor is due to proactive resampling. By \Cref{lemma:overhead}, we have $\deg_{A^t}(x^t) \le O\left(\log{(T)}\mathrm{target}^t(x^t) + \log {|M|}\right)$. Also, we claim that $\sum_{t=1}^T \mathrm{target}^t(x^t) = O(|J| \log {\Delta})$ where $\Delta$ is the maximum degree of jobs (to be proven below). Therefore, the total recourse up to time $T$ is at most \begin{align*} O(\log T)\sum_{t=1}^{T}\deg_{A^{t}}(x^{t}) & \le O(\log T)\sum_{t=1}^{T}O\left(\log(T)\mathrm{target}^{t}(x^{t})+\log|M|\right) \\&\le O\left(|J|\log{(\Delta)}\log^{2}{|M|}+T\log^{2}{|M|}\right) \end{align*} as $T \le |M|$. It remains to show that $\sum_{t=1}^T \mathrm{target}^t(x^t) = O(|J| \log \Delta)$.
Recall that $\mathrm{target}^t(x) = \sum_{r \ni x} \frac{1}{\deg_{G^t} (\mathrm{job}(r))}$. Imagine that machine $x^t$ is deleted at time $t$. We will show how to charge $\mathrm{target}^t(x^t)$ to the jobs with edges connecting to $x^t$. For each job $u$ with $c$ (hyper)edges connecting to $x^t$, $u$'s contribution to $\mathrm{target}^t(x^t)$ is $c/\deg_{G^t}(u)$. So we distribute the charge of $c/\deg_{G^t}(u) \leq \frac{1}{\deg_{G^t}(u)} +\frac{1}{\deg_{G^t}(u) - 1} + \ldots + \frac{1}{\deg_{G^t}(u)-c+1}$ to $u$. Since these edges are charged from machine $x^t$ to job $u$ only once, the total charge of each job $u$ is at most $\frac{1}{\deg_G(u)}+ \frac{1}{\deg_G(u)-1} + \dots + \frac{1}{2} + 1 = O(\log \Delta).$ Since there are $|J|$ jobs, the bound $\sum_{t=1}^T \mathrm{target}^t(x^t) = O(|J| \log \Delta)$ follows. To see that we have worst-case recourse, consider any timestep $t$. There are $O(\log{t})$ timesteps that can cause $\textsc{Resample}$ to be invoked at timestep $t$, namely, $t-1, t-2, t-4, \ldots$. At each of these timesteps $t'$, one machine is deleted, so the number of $\textsc{Resample}$ calls added from timestep $t'$ is also bounded by the load of the deleted machine $x_{t'}$, which does not exceed $\lambda$. Summing this up, the number of calls we make at timestep $t$ is at most $O(\lambda \log{t}) = O(\lambda \log{T})$. This concludes our proof. \end{proof} \section{Conclusion} In this paper, we study fully dynamic spanner algorithms against an adaptive adversary. Our algorithm in \Cref{thm:main greedy} maintains a spanner with a near-optimal stretch-size trade-off using only $O(\log n)$ amortized recourse. This closes the current oblivious-vs-adaptive gap with respect to amortized recourse. Whether the gap can be closed for \emph{worst-case recourse} is an interesting open problem.
The ultimate goal is to obtain algorithms against an adaptive adversary with polylogarithmic amortized, or even worst-case, update time. Via the multiplicative weight update framework \cite{Fleischer00,GargK07}, such algorithms would imply an $O(k)$-approximate multi-commodity flow algorithm with $\tilde{O}(n^{2+1/k})$ running time, which would in turn improve the state of the art. We made partial progress toward this goal by showing the first dynamic 3-spanner algorithms against an adaptive adversary with $\tilde{O}(\sqrt{n})$ update time in \Cref{thm:main det worst case} and \emph{simultaneously} with $\tilde{O}(1)$ amortized recourse in \Cref{thm:main proactive}, improving upon the $O(n)$ amortized update time of the 15-year-old work of \cite{AusielloFI06}. Generalizing our \Cref{thm:main det worst case} to dynamic $(2k-1)$-spanners of size $\tilde{O}(n^{1+1/k})$, for any $k \ge 2$, is also a very interesting and challenging open question. \begin{comment} \begin{itemize} \item Future: \begin{itemize} \item Oblivious-vs-adaptive gap \begin{itemize} \item Greedy: shave $O(\log n)$ factor to $O(1)$ \item Worst-case recourse?? \item Update time: \begin{itemize} \item Application to multicommodity flow: $O(k)$-approx in $O(m+n^{1+1/k})$ time. \end{itemize} \end{itemize} \item $(2k-1)$-stretch with $\tilde{O}(n^{1+1/k})$-size on weight graphs, light spanners \begin{itemize} \item Is this known even for oblivious and recourse! \end{itemize} \end{itemize} \end{itemize} \end{comment} \appendix \section{Proof of \Cref{lemma:overhead}} \label{sec:proof_overhead} Here, we show that the load $\deg_{A^t}(x)$ of every machine $x$ at each time $t$ is small. Some basic notions are needed for the analysis.
\noindent \textbf{Experiments and Relevant Experiments.} An \emph{experiment} $X$ is a binary random variable associated with an edge/routine $e$ and a time step $t$, where $X = 1$ iff $\textsc{Resample}(\mathrm{job}(e))$ is called at time $t$ and $e$ is chosen to handle $\mathrm{job}(e)$ among all edges incident to $\mathrm{job}(e)$. Observe that $\mathbb{P}[X=1] = 1/\deg_{G^t}(\mathrm{job}(e))$. Note that each call to $\textsc{Resample}(u)$ at time $t$ creates $\deg_{G^t}(u)$ new experiments. We order all experiments $X_1,X_2,X_3,\dots$ by the time of their creation. For convenience, for each experiment $X$, we let $e(X)$, $t(X)$, and $\mathrm{job}(X)$ denote its edge, time of creation, and job, respectively. Next, we define the most important notion of the whole analysis. \begin{definition}\label{def:rel} For any time $t$ and edge $e \in R^t$ at time $t$, an experiment $X$ is \emph{$(t,e)$-relevant} if \begin{itemize} \item $e(X) = e$, and \item there is no $t(X) < t' < t$ such that $t' \in \textsc{Schedule}^{t(X)}(\mathrm{job}(e))$. \end{itemize} Moreover, $X$ is $(t,x)$-relevant if it is $(t,e)$-relevant and the edge $e \in R^{t}(x)$ is incident to $x$. \end{definition} Intuitively, $X$ is a $(t,e)$-relevant experiment if $X$ could cause $e$ to appear in the assignment $A^t$ at time $t$. To see why, clearly if $e(X) \neq e$, then $X$ cannot cause $e$ to appear. Otherwise, if $e(X) = e$ but there is $t' \in (t(X),t)$ where $t' \in \textsc{Schedule}^{t(X)}(\mathrm{job}(e))$, then $X$ cannot cause $e$ to appear at time $t$ either. This is because even if $X$ is successful and $e$ appears at time $t(X)$, later at time $t' > t(X)$, $e$ will be resampled again, so $X$ has nothing to do with whether $e$ appears at time $t > t'$. By the same intuition, $X$ is $(t,x)$-relevant if $X$ could contribute to the load $\deg_{A^t}(x)$ of machine $x$ at time $t$.
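The relevance condition in the definition above is a simple predicate on the schedule snapshot taken at the experiment's creation time. A sketch under an assumed encoding (a set of scheduled timesteps; the names are ours, not the paper's):

```python
def is_te_relevant(t_created, schedule_at_creation, t):
    """(t,e)-relevance test for an experiment on edge e created at time
    t_created: relevant iff no time t' with t_created < t' < t appears
    in Schedule^{t_created}(job(e)).

    schedule_at_creation: set of timesteps, a hypothetical encoding of
    Schedule^{t(X)}(job(e)) for illustration only.
    """
    return not any(t_created < s < t for s in schedule_at_creation)
```

Note that the check uses only information available at time $t_{\text{created}}$, which is exactly why relevance can be decided without peeking at future randomness.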
It is important to note that we decide whether $X$ is $(t,e)$-relevant based on $\textsc{Schedule}^{t(X)}(\mathrm{job}(e))$ at time $t(X)$. If it were based on $\textsc{Schedule}^{t}(\mathrm{job}(e))$ at time $t$, then there would be only a single experiment $X$ that is $(t,e)$-relevant (namely the one with $e(X)=e$ and maximum $t(X)<t$). According to \Cref{def:rel}, there can be more than one experiment that is $(t,e)$-relevant. For example, suppose $X$ is $(t,e)$-relevant. At time $t(X) + 1$, the adversary could touch $\mathrm{job}(e)$, hence adding $t(X) + 2, t(X) + 4, \ldots$ into $\textsc{Schedule}(\mathrm{job}(e))$. Because of this action, there is another experiment $X'$ that is $(t,e)$-relevant with $t(X') > t(X)$. This motivates the following definition. \begin{definition} Let $Rel(t,e)$ be the random variable denoting the number of $(t,e)$-relevant experiments, and let $Rel(t,x) = \sum_{e \in R^t(x)} Rel(t,e)$ denote the total number of $(t,x)$-relevant experiments. \end{definition} To simplify the notation in the proof of \Cref{lemma:overhead} below, we make the following assumption. \begin{assumption}[The Machine-disjoint Assumption]\label{assump:disjoint} For any routines $e,e'$ with $\mathrm{job}(e) = \mathrm{job}(e')$, we have $M(e) \cap M(e') = \emptyset$. That is, the edges adjacent to the same job are machine-disjoint. \end{assumption} Note that this assumption indeed holds for our $3$-spanner application. This is because any two paths of length $2$ between a pair of centers $u$ and $u'$ must be edge-disjoint in any simple graph. We show how to remove this assumption in \Cref{sec:lift_assumption}, at the cost of more complicated notation. \noindent \textbf{Roadmap for Bounding Loads.} We are now ready to describe the key steps for bounding the load $\deg_{A^{t}}(x)$, for any time $t$ and machine $x$.
First, we write $\mathcal X^{(t,x)} = X_1^{(t,x)}, X_2^{(t,x)}, \ldots, X_{Rel(t,x)}^{(t,x)}$ for the sequence of all $(t, x)$-relevant experiments (ordered by the time step at which the experiments are taken). The order in $\mathcal X^{(t,x)}$ will become important only later. For now, we write $$\overline{\deg}_{A^t}(x) = \sum_{X \in \mathcal X^{(t,x)}} X$$ for the total number of successful $(t, x)$-relevant experiments. As any edge $e$ adjacent to $x$ in $A^t$ may appear only because of some successful $(t,x)$-relevant experiment $X \in \mathcal X^{(t,x)}$, we conclude the following: \begin{lemma}[Key Step 1] \label{lem:bound deg} $\deg_{A^{t}}(x)\le\overline{\deg}_{A^{t}}(x)$. \end{lemma} \Cref{lem:bound deg} reduces the problem to bounding $\overline{\deg}_{A^{t}}(x)$. If all $(t, x)$-relevant experiments $\mathcal X^{(t,x)}= \{X_i^{(t,x)}\}_i$ were independent, then we could easily apply standard concentration bounds to $\overline{\deg}_{A^{t}}(x) = \sum_{X \in \mathcal X^{(t,x)}} X$. Unfortunately, they are not independent, as the outcome of earlier experiments can affect the adversary's actions, which in turn affect later experiments. Our strategy is to relate the sequence $\mathcal X^{(t,x)}$ of $(t, x)$-relevant experiments to another sequence $\hat{\mathcal X}^{(t,x)} = \hat{X}_1^{(t,x)}, \hat{X}_2^{(t,x)}, \ldots, \hat{X}_{Rel(t,x)}^{(t,x)}$ of \emph{independent} random variables, defined as follows. For each $(t, x)$-relevant experiment $X_i^{(t,x)}$ with $e = e(X_i^{(t,x)})$ and $u = \mathrm{job}(e)$, we carefully define $\hat{X}_i^{(t,x)}$ as an \emph{independent} binary random variable such that $$\mathbb{P}[\hat{X}_i^{(t,x)} = 1 ]= 1 /\deg_{G^t}(u),$$ which is the probability that $\textsc{Resample}(u)$ would choose $e$ at time $t$.
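The independent proxy sequence can be sampled directly: one Bernoulli per relevant experiment, each with success probability $1/\deg_{G^t}(\mathrm{job}(e))$ under the current degrees. A minimal sketch with our own helper names (assumptions, not the paper's code):

```python
import random

def proxy_load(relevant_jobs, deg_now, rng=None):
    """Sample the proxy load: one independent Bernoulli per (t,x)-relevant
    experiment, each succeeding with probability 1/deg_{G^t}(job), using
    the degrees at the current time t.

    relevant_jobs: list with one job entry per relevant experiment.
    deg_now: dict job -> deg_{G^t}(job).
    """
    rng = rng or random.Random(0)
    return sum(rng.random() < 1.0 / deg_now[u] for u in relevant_jobs)
```

Using the current (smaller) degrees only raises each success probability, which is what makes the proxy an upper bound in the dominance argument that follows.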
We similarly define $$\widehat{\deg}_{A^t}(x) = \sum_{\hat{X} \in \hat{\mathcal X}^{(t,x)}} \hat X,$$ a sum of independent random variables in which each term is closely related to the corresponding $(t, x)$-relevant experiment. By our careful choice of $\mathbb{P}[\hat{X}_i^{(t,x)} = 1 ]$, we can relate $\widehat{\deg}_{A^t}(x)$ to $\overline{\deg}_{A^t}(x)$ via the notion of \emph{stochastic dominance} defined below. \begin{definition} Let $Y$ and $Z$ be two random variables, not necessarily defined on the same probability space. We say that $Z$ \emph{stochastically dominates} $Y$, written as $Y \preceq Z$, if for all $\lambda \in \mathbb R$, we have $\mathbb{P}[Y \geq \lambda] \leq \mathbb{P}[Z \geq \lambda]$. \end{definition} Our second important step is to prove the following: \begin{lemma}[Key Step 2] \label{lem:bound deg bar} $\overline{\deg}_{A^t}(x) \preceq \widehat{\deg}_{A^t}(x)$. \end{lemma} \Cref{lem:bound deg bar}, which will be proven in \Cref{sub:key 2}, reduces the problem to bounding $\widehat{\deg}_{A^{t}}(x)$, which is indeed relatively easy to bound because it is a sum of independent random variables. The last key step of our proof does exactly this: \begin{lemma}[Key Step 3] \label{lem:bound deg hat} $ \widehat{\deg}_{A^t}(x) \le 2 \log{(t)} \mathrm{target}^t(x) + O(\log{|M|})$ with probability $1-1/{|M|}^{10}$. \end{lemma} We prove \Cref{lem:bound deg hat} in \Cref{sub:key 3}. Here, we only mention one important point about the proof. The $\log(t)$ factor above follows from the fact that the number of $(t,e)$-relevant experiments is always at most $Rel(t,e) \le \log(t)$ for any time $t$ and edge $e$. This property is crucial; indeed, it is exactly what the proactive resampling technique is designed for.
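The $\log(t)$ bound on relevant experiments comes from a halving argument (proven later): each subsequent relevant time $t''$ satisfies $t'' \ge (t'+t)/2$, so the gap to $t$ at least halves each step. A small numerical illustration of that worst case (our own helper, not part of the paper's algorithm):

```python
def max_relevant_times(t, first=0):
    """Worst-case count of (t,e)-relevant time steps implied by the halving
    argument: starting from `first`, each subsequent relevant time is at
    least (cur + t)/2, so the gap to t at least halves each iteration."""
    count, cur = 0, first
    while t - cur >= 1:      # a new relevant time needs an integer gap left
        count += 1
        cur = (cur + t) / 2  # next relevant time is at least this
    return count
```

The loop runs about $\log_2 t$ times, matching the $Rel(t,e) \le \log(t)$ bound up to an additive constant.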
Given the three key steps above (\Cref{lem:bound deg,lem:bound deg bar,lem:bound deg hat}), we can conclude the proof of \Cref{lemma:overhead}. \noindent \textbf{Proof of \Cref{lemma:overhead}.} Recall that we ultimately want to show that, for every timestep $t$, the maintained assignment $A^t$ has overhead $\left(O(\log{T}), O(\log|M|)\right)$. In other words, for every $t \in [T]$ and every $x \in M$, we want to show that $$\deg_{A^t}(x) \leq \mathrm{target}^t(x) \cdot O(\log{T}) + O(\log |M|).$$ By \Cref{lem:bound deg}, it suffices to show that $$\overline{\deg}_{A^t}(x) \leq \mathrm{target}^t(x) \cdot O(\log{T}) + O(\log |M|).$$ By \Cref{lem:bound deg bar,lem:bound deg hat}, \begin{align*} &~\mathbb{P}[\overline{\deg}_{A^t}(x) \geq 2 \log(t) \mathrm{target}^t(x) + O(\log |M|) ]\\ \leq&~\mathbb{P}[\widehat{\deg}_{A^t}(x) \geq 2 \log(t) \mathrm{target}^t(x) + O(\log |M|)] & \text{~(\Cref{lem:bound deg bar})} \\ \leq&~1/{|M|}^{10}. & \text{~(\Cref{lem:bound deg hat})} \end{align*} Now we apply a union bound to the probability above. There are $T\leq |M|$ timesteps and $|M|$ machines, hence the probability that a bad event happens is bounded by $ \frac{T|M| }{{|M|}^{10}} \le \frac{1}{{|M|}^8}.$ This concludes the proof of \Cref{lemma:overhead}. \subsection{Key Step 2} \label{sub:key 2} The goal of this subsection is to prove \Cref{lem:bound deg bar}. The following lemma reduces our problem to proving that a certain probabilistic condition holds. \begin{lemma}[Lemma 1.8.7(a) in \cite{Doerr2020}] \label{lem:doerr} Let $X_1, \ldots, X_n$ be arbitrary boolean random variables and let $X^*_1, \ldots, X^*_n$ be independent binary random variables.
If we have $$\mathbb{P}[X_i = 1| X_1 = x_1, \ldots, X_{i-1} = x_{i-1}] \leq \mathbb{P}[X^*_i = 1]$$ for all $i \in [n]$ and all $x_1, \ldots, x_{i-1} \in \{0,1\}$ with $\mathbb{P}[X_1 = x_1, \ldots, X_{i-1} = x_{i-1}] >0$, then $$ \sum_{i=1}^n X_i \preceq \sum_{i=1}^n X^*_i.$$ \end{lemma} In light of the above lemma, we will prove that $$\mathbb{P}[X_i^{(t,x)} = 1 | X_1^{(t,x)}, \ldots, X_{i-1}^{(t,x)}] \leq \mathbb{P}[{X}_i^{(t,x)} = 1] \leq \mathbb{P}[\hat{X}_i^{(t,x)} = 1]$$ in Claims~\ref{claim:history_wont_help} and \ref{claim:upperbound_prob_exp}, respectively. This would imply $\sum_{X^{(t,x)}_i\in{\mathcal{X}}^{(t,x)}}{X^{(t,x)}_i} \preceq \sum_{\hat{X}^{(t,x)}_i\in\hat{\mathcal{X}}^{(t,x)}}\hat{X}^{(t,x)}_i$ by \Cref{lem:doerr} above, and hence $\overline{\deg}_{A^t}(x) \preceq \widehat{\deg}_{A^t}(x)$, completing the proof of \Cref{lem:bound deg bar}. \begin{claim} \label{claim:upperbound_prob_exp} For any $(t,x)$-relevant experiment $X^{(t,x)}_i$, $\mathbb{P}[X_i^{(t,x)} = 1] \leq \mathbb{P}[\hat{X}_i^{(t,x)} = 1]$. \end{claim} \begin{proof} Let $e = e(X_i^{(t,x)})$ and let $t' = t(X_i^{(t,x)})$ be the time at which the experiment $X_i^{(t,x)}$ is taken. Note that $t' \le t$ as $X_i^{(t,x)}$ is $(t,x)$-relevant. The claim follows because $$ \mathbb{P}[X_i^{(t,x)} = 1] = \frac{1}{ \deg_{G^{t'}}(\mathrm{job}(e)) } \leq \frac{1}{ \deg_{G^{t}}(\mathrm{job}(e)) } = \mathbb{P}[\hat{X}_i^{(t,x)} = 1]. $$ The first equality holds because $X_i^{(t,x)}$ succeeds iff $\textsc{Resample}(\mathrm{job}(e))$ chooses $e$ at time $t'$, which happens with probability $1/ \deg_{G^{t'}}(\mathrm{job}(e))$. (Note that knowing that $X_i^{(t,x)}$ is $(t,x)$-relevant does not change the probability that the experiment $X_i^{(t,x)}$ succeeds, because whether $X_i^{(t,x)}$ is $(t,x)$-relevant is determined by information from the past, including $X_1, X_2, \ldots, X_{i-1}$ and the adversary's actions.)
The inequality holds because $t' \le t$ and $G$ undergoes deletions only. The last equality is by the definition of $\hat{X}_i^{(t,x)}$. \end{proof} \begin{claim} \label{claim:history_wont_help} $\mathbb{P}[X_i^{(t,x)} = 1 | X_1^{(t,x)}, \ldots, X_{i-1}^{(t,x)}] \leq \mathbb{P}[X_i^{(t,x)} = 1].$ \end{claim} \begin{proof} By \Cref{assump:disjoint}, this is true simply because knowing the results of the past experiments, and of other experiments taken at the same timestep as $X_i^{(t,x)}$, cannot increase the probability of $X_i^{(t,x)}$ being $1$. Without the assumption, for some $i$ and $j<i$, it is possible that $\mathbb{P}[X_i^{(t,x)} = 1 | X_{j }^{(t,x)} = 0] > \mathbb{P}[X_i^{(t,x)} = 1]$ if $t(X_i^{(t,x)}) = t(X_j^{(t,x)})$ and $\mathrm{job}(X_i^{(t,x)}) = \mathrm{job}(X_j^{(t,x)})$, i.e., $X_i^{(t,x)}$ and $X_j^{(t,x)}$ are sampled by the same $\textsc{Resample}(\mathrm{job}(X_i^{(t,x)}))$ call. \end{proof} \subsection{Key Step 3} \label{sub:key 3} In this section, we prove \Cref{lem:bound deg hat}. To simplify our proofs, we say that \emph{time $t'$ is $(t,e)$-relevant} if there is a $(t,e)$-relevant experiment $X$ created at time $t(X) = t'$. Since, at each time step $t'$, the algorithm can create only one $(t,e)$-relevant experiment, we have the following observation: \begin{observation}\label{obs:rel time} The number of $(t,e)$-relevant experiments $Rel(t,e)$ is exactly the number of $(t,e)$-relevant time steps. \end{observation} Now, we state the following crucial lemma. It says that there are at most $\log (t)$ experiments that are $(t,e)$-relevant. \begin{lemma} \label{claim:log_relevant} $Rel(t,e) \leq \log (t)$ for every $t$ and $e\in R^t$. \end{lemma} \begin{proof} By \Cref{obs:rel time}, we bound the total number of $(t,e)$-relevant time steps. Suppose $t'$ is $(t,e)$-relevant. It suffices to show that if there is another time $t''>t'$ which is $(t,e)$-relevant, then $t'' \geq (t' + t)/2$.
This means that each consecutive pair of $(t,e)$-relevant time steps at least halves the gap to the fixed time step $t$, so there can be at most $\log(t)$ of them. To prove the claim, observe that $t'' \notin \textsc{Schedule}^{t'}(\mathrm{job}(e))$ as $t'$ is $(t,e)$-relevant. Hence, the adversary must touch $\mathrm{job}(e)$ at some timestep $s\geq t'$. When that happens, we add $s+1, s+2, s+4, \ldots$ into $\textsc{Schedule}^{s}(\mathrm{job}(e))$. Let $s' = s +2^{\lceil \log {(t-s)} \rceil -1}$. It is clear that $$s' = s + 2^{\lceil \log (t-s) \rceil -1} \geq (s+t)/2 \geq (t' + t )/2.$$ Because any timestep in $(t',s']$ cannot be $(t,e)$-relevant, $t''$ must be greater than $s'$. Hence, $t'' \geq (t' + t)/2$ as claimed. \end{proof} The above implies that the expected value of $\widehat{\deg}_{A^t}(x)$ is not too far from the target load of $x$ at time $t$. \begin{lemma}\label{lem:expect deg hat} $\mathbb{E} [\widehat{\deg}_{A^t}(x) ] \leq \log{(t)}\mathrm{target}^t(x)$. \end{lemma} \begin{proof} We have $$\mathbb{E}[\widehat{\deg}_{A^t}(x) ] = \mathbb{E}\Big[ \sum_{ \hat X \in \hat {\mathcal X} ^{(t,x)}} \hat X \Big] \leq \log{(t)} \sum_{e \in R^t(x)} 1/\deg_{G^t}(\mathrm{job}(e)) = \log(t)\cdot \mathrm{target}^t(x),$$ where the first and last equalities are by the definitions of $\widehat{\deg}_{A^t}(x)$ and $\mathrm{target}^t(x)$, respectively. It remains to prove the inequality. Observe that $\sum_{\hat{X}_i^{(t,x)}\in\hat{\mathcal{X}}^{(t,x)}\mid e(X_{i}^{(t,x)})=e}\mathbb{E}[\hat{X}_i^{(t,x)}]=Rel(t,e)/\deg_{G^{t}}(\mathrm{job}(e))$. This is because the number of terms in the sum is exactly the number $Rel(t,e)$ of $(t,e)$-relevant experiments, and we defined each $\hat{X}_{i}^{(t,x)}\in\hat{\mathcal{X}}^{(t,x)}$ with $e=e(X_{i}^{(t,x)})$ precisely so that $\mathbb{E}[\hat{X}_{i}^{(t,x)}]=\mathbb{P}[\hat{X}_{i}^{(t,x)}=1]=1/\deg_{G^{t}}(\mathrm{job}(e))$.
Therefore, by \Cref{claim:log_relevant}, we have $$ \mathbb{E}\Big[\sum_{\hat{X}\in\hat{\mathcal{X}}^{(t,x)}}\hat{X}\Big]=\sum_{e\in R^{t}(x)}\sum_{\hat{X}\in\hat{\mathcal{X}}^{(t,x)}\mid e(X_{i}^{(t,x)})=e}\mathbb{E}[\hat{X}]\le \log(t)\cdot\sum_{e\in R^{t}(x)}1/\deg_{G^{t}}(\mathrm{job}(e)).$$ \end{proof} The last step is to show that the expectation bound from \Cref{lem:expect deg hat} is concentrated. However, since $\mathrm{target}(x)$ for a machine $x$ can be very small ($\ll1$), the standard multiplicative Chernoff bound does not suffice. Instead, we apply the version with both additive and multiplicative error stated below. \begin{lemma}[Additive-multiplicative Chernoff bound~\cite{soda/BadanidiyuruV14}] Let $X_1, \ldots, X_n$ be independent binary random variables and let $S = \sum_{i =1}^n X_i$. Then for all $\delta \in[0,1]$ and $\alpha > 0$, $$\mathbb{P}[S \geq (1+\delta) \mathbb{E}[S] + \alpha ]\leq \exp\Big( - \frac{\delta \alpha}{3}\Big).$$ \end{lemma} \paragraph*{Proof of \Cref{lem:bound deg hat}.} By plugging $\delta = 1$ and $\alpha = 30\log{|M|}$ into the above bound, we have $$\mathbb{P}\left[\widehat{\deg}_{A^t}(x) \geq 2 \mathbb{E}[\widehat{\deg}_{A^t}(x)] + 30\log{|M|}\right] \leq \exp( - 30\log{|M|} / 3) = |M|^{-10}.$$ This completes the proof of \Cref{lem:bound deg hat} because $\mathbb{E} [\widehat{\deg}_{A^t}(x) ] \leq \log{(t)}\mathrm{target}^t(x)$ by \Cref{lem:expect deg hat}. \endinput
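The additive-multiplicative Chernoff bound can be sanity-checked numerically. A Monte Carlo sketch (our own helper, assuming i.i.d. Bernoulli$(p)$ summands) that compares the empirical tail against the stated bound:

```python
import math
import random

def check_am_chernoff(n, p, delta, alpha, trials=5000, seed=1):
    """Estimate P[S >= (1+delta)*E[S] + alpha] for S a sum of n independent
    Bernoulli(p) variables, and return (empirical tail, exp(-delta*alpha/3)),
    the bound's right-hand side."""
    rng = random.Random(seed)
    threshold = (1 + delta) * (n * p) + alpha
    hits = sum(sum(rng.random() < p for _ in range(n)) >= threshold
               for _ in range(trials))
    return hits / trials, math.exp(-delta * alpha / 3)
```

For moderate $\alpha$ the empirical tail is typically far below $\exp(-\delta\alpha/3)$, consistent with the bound being quite loose in this regime.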
In Claim~\ref{claim:log_relevant}, we show that, if we fix the timestep $t$ and an edge $e$, then there are at most $\log(t)$ \emph{relevant} experiments, Intuitively, this means that only these $\log(t)$ experiments could ``cause'' $e$ to appear in $A$ at time $t$. We want to use these experiments to bound the overhead. However, we cannot apply the concentration bounds like Chernoff's bound directly to the sum of these experiments as they are not independent. The outcome of past experiments affect the adversary's actions, which in turn affect the latter experiments. To simplify that, we define another process of \textbf{independent} variables, in which the outcome of the process \emph{dominates} the sum of the experiments. By applying the concentration bound to such a process, we conclude that the overhead of the jobs must not be large. \subsubsection*{Boiling down the algorithm: defining relevant experiments} We first describe the stochastic process of sampling edges as experiments. When we call $\textsc{Resample}(u)$, we instantiate $\mathrm{deg}eg_G(u)$ experiments, one on each edge adjacent to $u$. These experiments are negatively correlated as only one of the edges can be chosen. Let $X_1, X_2, X_3, \ldots$ each be a binary random variable. Each variable is corresponding to an experiment of sampling on an edge at a certain time. Initially, we do $|J|$ samplings, hence there are $|R|$ variables corresponding to sampling edges at timestep $0$. At timestep $t$, suppose we call $\textsc{Resample}(u)$, then there are $\mathrm{deg}eg_{G^t}(u)$ variables corresponding to experiments of edges adjacent to $u$. Each of this variable will be $1$ if the corresponding sampling is successful and will be $0$ otherwise. For our convenient, for a fix experiment $X$, we let $e(X)$, $t(X)$, and $\mathrm{job}(X)$ be the corresponding edge, the corresponding timestep, and the corresponding job respectively. 
\begin{definition} We say that an experiment $X$ is $(t,e)$-relevant if \begin{itemize} \item $e(X) = e$, and \item there is no $t(X) < t' < t$ such that $t' \in \textsc{Schedule}^{t(X)}(\mathrm{job}(e))$. \end{itemize} We say that $X$ is $(t,x)$-relevant if it is $(t,e)$-relevant and $e \in R^t(x)$. \end{definition} Intuitively, $X$ is $(t,e)$-relevant if $e$ might be in $A^t$ \textbf{because of} $X$, based on information at time $t(X)$. Similarly, $X$ is $(t,x)$-relevant if $X$ might cause the edge $e(X)$ to be in $A^t(x)$. By this definition, we can ask if $X$ is $(t,e)$-relevant ($(t,x)$-relevant) for any $t >t(X)$ without knowing the outcome of future experiments and the adversary's action. Also, there could be more than one experiments that are $(t,e)$-relevant. For example, we know at time $t(X)$ that $X$ is $(t,e)$-relevant. At time $t(X) + 1$, the adversary could touch $\mathrm{job}(e)$, hence, adding $t(X) + 2, t(X) + 4, \ldots$ into $\textsc{Schedule}(\mathrm{job}(e))$. Because of this action, there is another experiment $X'$ that is $(t,e)$-relevant and $t(X') > t(X)$. \begin{definition} Let $Rel(t,e)$ be the random variable denoting the number of $(t,e)$-relevant experiments. Note that there is at most one $(t,e)$-relevant experiment per timestep. Similarly, $Rel(t,x) = \sum_{e \in R(x)} Rel(t,e)$ denotes the total number of $(t,x)$-relevant experiments. \end{definition} The following claim is the key property of proactive resampling. This property allows us to bound the overhead, as well as prevent the adversary from accumulating load on one machine. \begin{claim} \label{claim:log_relevant} For a fixed timestep $t$ and a fixed edge $e$, There are at most $\log (t)$ experiments that are $(t,e)$-relevant. In the otherword, $Rel(t,e) \leq \log (t)$. \end{claim} \begin{proof} In this proof, we say that time $t'$ is $(t,e)$-relevant if there is a $(t,e)$-relevant experiment $X$ where $t(X) = t'$. Suppose $t'$ is $(t,e)$-relevant. 
If $t'' \in (t', t)$ is $(t,e)$-relevant, then we claim that $t'' \geq (t' + t)/2$. Since $t'$ is $(t,e)$-relevant, $t'' \notin \textsc{Schedule}^{t'}(\mathrm{job}(e))$. Hence, the adversary must touch $\mathrm{job}(e)$ at some timestep $s\geq t'$. When that happens, we add $s+1, s+2, \ldots$ into $\textsc{Schedule}^{s}(\mathrm{job}(e))$. Let $s' = s +2^{\log {(t-s)}-1}$. It is clear that $$s' \geq s + 2^{\lceil \log (t-s) \rceil -1} \geq (s+t)/2 \geq (t' + t )/2.$$ Because any timestep in $(t',s']$ cannot be $(t,e)$-relevant, $t''$ must be greater than $s'$. Hence, $t'' \geq (t' + t)/2$ as claimed. At each consecutive relevant timesteps decrease the gap to the fixed timestep $t$ by at least half, there is at most $O(\log t)$ such timesteps. \end{proof} To actually bound the overhead, we need to look at the set of edges $F$ rather than a single edge $e$. \begin{definition} Let $\mathcal X^{(t,x)} = X_1^{(t,x)}, X_2^{(t,x)}, \ldots, X_{Rel(t,x)}^{(t,x)}$ denote the sequence of all $(t, x)$-relevant experiments (ordered by time step the experiments are taken). \end{definition} Notice that for the process above, the number of experiments itself is a random variable. As we will use $\mathcal X^{(t,R(x))}$ to bound $\mathrm{deg}eg_{A^t}(x)$, the load of job $x$ at time $t$, it is useful to define the following. \begin{definition} We let $\overline{\mathrm{deg}eg}_{A^t}(x) = \mathrm{deg}isplaystyle \sum_{X \in \mathcal X^{(t,x)}} X$. \end{definition} We now show that this is indeed an upper bound of the load. In fact, the bound on relevant experiments controls approximation quality of $\overline{\mathrm{deg}eg}_{A^t}(x)$ for $\mathrm{deg}eg_{A^t}(x)$. \begin{lemma} \label{lemma:overline_is_ub} $\mathrm{deg}eg_{A^{t}}(x)\le\overline{\mathrm{deg}eg}_{A^{t}}(x) \leq \mathrm{deg}eg_{A^t}(x)\log{(T)}$ \end{lemma} \begin{proof} Each edge $e$ adjacent to $x$ in $A^t$ appears because of some successful $(t,x)$-relevant experiment $X_e$. 
This experiment must appear in $X^{(t,x)}$. As $X^{(t,x)}$ also includes other unsuccessful experiments and successful experiments that got resampled, the left inequality holds. By \mathcal{C}ref{claim:log_relevant}, there are at most $\log{T}$ relevant experiments per an edge, so we never over-count successful experiments more than $\log{T}$ times. Hence, the right inequality holds. \end{proof} We now claim the following, which if true, immediately implies \mathcal{C}ref{lemma:overhead}. \begin{lemma} \label{lemma:overline_deg_bound} For some constant $k>0$, throughout $T$ timesteps, with high probability, $\overline{\mathrm{deg}eg}_{A^t}(x) \leq 2 \log{(t)} \mathrm{target}(x) + k \log |M|$. \end{lemma} \begin{proof}[Proof of \mathcal{C}ref{lemma:overhead}] By \mathcal{C}ref{lemma:overline_is_ub} and \mathcal{C}ref{lemma:overline_deg_bound}, we have that $\mathrm{deg}eg_{A^{t}}(x)\le\overline{\mathrm{deg}eg}_{A^{t}}(x) \le 2 \log{(t)} \mathrm{target}(x) + k \log |M|$. Hence, by definition of overhead, the overhead of $x$ at time $t$ is $(\log(t),\log |M|) = (O(\log T) , \log |M|)$. \end{proof} Notice that, if $\{X_i^{(t,x)}\}$ are independent, then we can apply a concentration bound to $\sum_{X \in \mathcal X^{(t,x)}} X$ to get \mathcal{C}ref{lemma:overline_deg_bound}. Unfortunately, this is not the case as the outcome of experiments can affect the adversary's actions and the adversary's decisions affect $\mathcal X^{(t,x)}$. \subsection{Dealing with dependent process through independent dominating process} For a fixed machine $x$, let $R(x)$ be the set of edges adjacent to $x$, we want to bound the load of $x$ at time $t$, hence we are naturally interested in upper bounding $\overline{\mathrm{deg}eg}_{A^t}(x)$. Let us first inspect the probability that an edge gets sampled. \begin{definition} For a fixed job $u$ and a fixed timestep $t$, we let $p(t,u) = 1 /\mathrm{deg}eg_{G^t}(u)$. be the probability we sampling an edge $e$ adjacent to $u$ at time $t$. 
We also define $p(t,e) = p(t,\mathrm{job}(e))$. \end{definition} \begin{claim} \label{claim:upperbound_prob_exp} For any $(t,x)$-relevant experiment $X^{(t,x)}_i$ where $e = e(X^{(t,x)}_i)$, we have $\mathbb{P}[X_i^{(t,x)} = 1] \leq p(t,e)$. \end{claim} \begin{proof} Notice that since we can determine if $X_i$ is $(t,x)$-relevant depends on information in the past, including $X_1, X_2, \ldots, X_{i-1}$ and the adversary's actions. Knowing that $X_i$ is $(t,x)$-relevant does not change the probability that the experiment $X_i$ succeeds, i.e., $\mathbb{P}[X_i | X_i \text{ is } (t,x)\text{-relevant}] = \mathbb{P}[X_i]$. Notice that, since $X_i^{(t,x)}$ is $(t,x)$-relevant, $t' = t(X_i^{(t,x)}) \leq t$. Since $\mathrm{deg}eg_{G^{t'}}(\mathrm{job}(e)) \geq \mathrm{deg}eg_{G^t}(\mathrm{job}(e))$, the claim holds. \end{proof} To simplify things further we define another sequence of random variables. \begin{definition} Given $\mathcal X^{(t,x)}$, for each random variable $X_i^{(t,x)}$, we let $\hat{X}_i^{(t,x)}$ be a binary random variable that is true with probability $p\left(t,e (X_i^{(t,x)}) \right)$. Let $\hat{\mathcal X}^{(t,x)} = \hat{X}_1^{(t,x)}, \hat{X}_2^{(t,x)} \ldots, \hat{X}_{Rel(t,x)}^{(t,x)}$ be a sequence of \emph{independent} random variables derived from $\mathcal X^{(t,x)}$. We also let $\widehat{\mathrm{deg}eg}_{A^t}(x) = \sum_{\hat{X} \in \hat{\mathcal X}^{(t,x)}} \hat X$. \end{definition} We want to use $\widehat{\mathrm{deg}eg}_{A^t}(x)$ as a proxy to bound the overhead. Because of Claim~\ref{claim:upperbound_prob_exp}, we know that $\mathbb{P}[\hat{X}_i^{(t,x)} = 1] \geq \mathbb{P}[X_i^{(t,x)} = 1]$. To show that bounding $\widehat{\mathrm{deg}eg}_{A^t}(x)$ suffices, we use the notion of \emph{stochastic domination}. \begin{definition} Let $X$ and $Y$ be two random variables not necessarily defined on the same probability space. 
We say that $Y$ \emph{stochastically dominates} $X$, written as $X \preceq Y$, if for all $\lambda \in \mathbb{R}$, we have $\mathbb{P}[X \geq \lambda] \leq \mathbb{P}[Y \geq \lambda]$. \end{definition} \begin{lemma}[\cite{Doerr2020}] \label{lem:doerr} Let $X_1, \ldots, X_n$ be arbitrary boolean random variables and let $X^*_1, \ldots, X^*_n$ be independent binary random variables. If we have $$\mathbb{P}[X_i = 1| X_1 = x_1, \ldots, X_{i-1} = x_{i-1}] \leq \mathbb{P}[X^*_i = 1]$$ for all $i \in [n]$ and all $x_1, \ldots, x_{i-1} \in \{0,1\}$ with $\mathbb{P}[X_1 = x_1, \ldots, X_{i-1} = x_{i-1}] >0$, then $$ \sum_{i=1}^n X_i \preceq \sum_{i=1}^n X^*_i.$$ \end{lemma} \paragraph*{Additional assumption:} We assume that, for any two hyperedges $e,e'$ with $\mathrm{job}(e) = \mathrm{job}(e') = u$, we have $M(e) \cap M(e') = \emptyset$. In other words, the edges adjacent to any job $u$ are machine-disjoint. This assumption holds for the $3$-spanner, but might not hold if we want to apply the job-machine problem elsewhere (e.g., to the $(2k-1)$-spanner). However, we assume it here to simplify the presentation, and we remark later on how to lift it. \begin{claim} \label{claim:ind_dominate} $ \overline{\deg}_{A^t}(x) \preceq \widehat{\deg}_{A^t}(x).$ \end{claim} \begin{proof} By Lemma~\ref{lem:doerr}, it suffices to prove that $$\mathbb{P}[X_i^{(t,x)} = 1 | X_1^{(t,x)}, \ldots, X_{i-1}^{(t,x)}] \leq \mathbb{P}[\hat{X}_i^{(t,x)} = 1].$$ This holds because $\hat{X}_i^{(t,x)}$ is defined precisely so that $\mathbb{P}[\hat{X}_i^{(t,x)} = 1]$ is at least $\mathbb{P}[X_i^{(t,x)} = 1]$, and, by the assumption above, knowing the results of the past experiments does not increase the probability that $X_i^{(t,x)}$ is $1$.
\end{proof} \paragraph*{Lifting the machine-disjoint assumption:} If we want to remove this assumption, we can bundle together the experiments that are negatively correlated, i.e., the experiments belonging to the same job and taken at the same timestep. Since the sum of each such bundle is a boolean random variable, we can then carry out the analysis on these sums instead. Now we have $\hat{\mathcal X}^{(t,x)}$, to which we can apply a Chernoff bound to control the probability that too many edges get realized from a set of edges. Since $\mathrm{target}(x)$ for the machine $x$ can be very small ($\ll1$), the multiplicative Chernoff bound alone is not enough. We will use the version that has both additive and multiplicative error, as stated below. \begin{lemma}[Additive-multiplicative Chernoff bound~\cite{soda/BadanidiyuruV14}] Let $X_1, \ldots, X_n$ be independent binary random variables. Let $S = \sum_{i =1}^n X_i$. Then for all $\delta \in[0,1]$ and $\alpha > 0$, $$\mathbb{P}[S \geq (1+\delta) \mathbb{E}[S] + \alpha ]\leq \exp\left( - \frac{\delta \alpha}{3}\right).$$ \end{lemma} \begin{claim} \label{claim:machine_load_concentration} Let $x$ be a machine. We have that, for any $k> 0$, $$\mathbb{P}\left[ \widehat{\deg}_{A^t}(x) \geq 2 \log{(t)} \mathrm{target}(x) + k\log{|M|}\right] \leq |M|^{-k/3} .$$ \end{claim} \begin{proof} We first show that $\mathbb{E}[ \widehat{\deg}_{A^t}(x) ] \leq \log{(t)}\mathrm{target}(x)$. By Claim~\ref{claim:log_relevant}, each edge $e$ has at most $O(\log{(t)})$ $(t,e)$-relevant experiments. Hence, for any job $u$ adjacent to $x$, there are at most $O(\log{(t)})$ different $t'$ such that $t'$ is $(t,x)$-relevant.
Since $\mathbb{E}[\hat{X}_i^{(t,x)}] = p\left(t,e(\hat{X}_i^{(t,x)})\right)$, it follows that $$\mathbb{E}[\widehat{\deg}_{A^t}(x) ] = \mathbb{E}\left[ \displaystyle \sum_{ \hat X \in \hat {\mathcal X} ^{(t,x)}} \hat X \right] \leq O(\log{(t)}) \sum_{e \in R(x)} p(t,e) = O\left(\log{(t)} \mathrm{target}(x) \right).$$ Since $\mathbb{E}[\widehat{\deg}_{A^t}(x) ] \leq \log{(t)} \mathrm{target}(x)$, we can apply the Chernoff bound above with $\delta = 1$ and $\alpha = k\log{|M|}$. We have $$\mathbb{P}\left[\widehat{\deg}_{A^t}(x) \geq 2 \mathbb{E}[\widehat{\deg}_{A^t}(x)] + k\log{|M|}\right] \leq \exp( - k\log{|M|} / 3) = |M|^{-k/3}.$$ \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:overline_deg_bound}] To prove the overhead bound, the idea is to combine Claim~\ref{claim:ind_dominate} and Claim~\ref{claim:machine_load_concentration} and apply a union bound. Because of Claim~\ref{claim:ind_dominate}, we can use $\mathbb{P}[\widehat{\deg}_{A^t}(x) > \lambda]$ as an upper bound on $\mathbb{P}[\overline{\deg}_{A^t}(x)> \lambda]$. Claim~\ref{claim:machine_load_concentration} shows that this probability at any one timestep is small. There are $|M|$ machines and $T \leq |M|$ timesteps, so the probability that the overhead exceeds $O(\log{(T)}, \log{|M|})$ anywhere is bounded above by $|M|T$ times the probability that a single machine's overhead is too high at a single timestep. Hence, the bound holds with high probability as long as $|M|T \leq |M|^2$ is considerably smaller than $|M|^{k/3}$. We can set $k$ to be $9$, and the probability of the bad event occurring is then bounded above by $1/|M|$.
\end{proof} \section{Lifting the Machine-Disjoint Assumption} \label{sec:lift_assumption} In the proof of \Cref{lemma:overhead} in \Cref{sec:reduce to load}, we assume \Cref{assump:disjoint}, which says that, for a fixed job $u$, the edges of $u$ are machine-disjoint. In this section, we show how to remove it. \paragraph*{Reiterating the pain point.} Each experiment $X_i$ is defined \textbf{with respect to} the sampling of a single edge. In any sequence $\mathcal X^{(t,x)}$, there could be two $(t,x)$-relevant experiments $X_i^{(t,x)}$ and $X_{j}^{(t,x)}$ with $i<j$ such that $t(X_i^{(t,x)}) = t(X_j^{(t,x)})$ and $\mathrm{job}(X_i^{(t,x)}) = \mathrm{job} (X_j^{(t,x)})$. By our sampling process, these two variables are negatively correlated, as knowing $X_i^{(t,x)} = 1$ would immediately imply $X_j^{(t,x)} = 0$ and vice versa. To see why the condition in \Cref{lem:doerr} may fail in this case, consider two random variables $X$ and $Y$ with $\mathbb{P}[X=1] = p \in (0,1)$. If these two variables are correlated in such a way that $X+Y = 1$ always, then we cannot obtain independent $\hat X , \hat Y$ with $X + Y \preceq \hat X + \hat Y$ from \Cref{lem:doerr} unless $\mathbb{P}[\hat Y = 1] = 1$, since $\mathbb{P}[Y = 1 \mid X = 0] = 1$. We want to make sure that the variables in the sequence we analyze do not have such correlations. \paragraph*{Proving strategy.} Here, we will define two other sequences $\mathcal Y^{(t,x)}$ and $\hat{\mathcal Y}^{(t,x)}$, defined analogously to $\mathcal X^{(t,x)}$ and $\hat{\mathcal X}^{(t,x)}$. We then define $\widetilde{\deg}_{A^t}(x) = \sum_{\hat Y \in \hat{\mathcal Y}^{(t,x)}} \hat Y$. We will replace $\widehat{\deg}_{A^t}(x)$ in Key Steps 2 and 3 by $\widetilde{\deg}_{A^t}(x)$.
More precisely, instead of \Cref{lem:bound deg bar}, we will prove $\overline{\deg}_{A^t}(x) \preceq \widetilde{\deg}_{A^t}(x)$, and instead of \Cref{lem:bound deg hat}, we will prove that, w.h.p., $\widetilde{\deg}_{A^t}(x) \leq 2\log{(t)}\,\mathrm{target}^t(x) + O( \log |M|) $. \paragraph*{Redefining relevant experiments.} For a fixed $\mathcal X^{(t,x)}$, we let $\pi^{(t,x)}(t',u) = \{ X_i^{(t,x)} : t(X_i^{(t,x)}) = t', \mathrm{job}(X_i^{(t,x)}) = u \}$ be the set of $(t,x)$-relevant experiments containing $u$ taken at time $t'$. We then let $Y^{(t,x)}(t',u)$ be a boolean random variable that is one if any of the experiments in $\pi^{(t,x)}(t',u)$ succeeds. In other words, $$ Y^{(t,x)} (t',u) = \sum_{X \in \pi^{(t,x)}(t',u)} X.$$ Note that $Y^{(t,x)}(t',u)$ is indeed a boolean random variable, because at most one variable in $\pi^{(t,x)}(t',u)$ can be true: they are sampled within the same $\textsc{Resample}(u)$ call. We say that $Y^{(t,x)}(t',u)$ is \textbf{well-defined} if $|\pi^{(t,x)}(t',u)|>0.$ Note that $$\mathbb{P}[Y^{(t,x)}(t',u) = 1] = \frac{|\pi^{(t,x)}(t',u)|} { \deg_{G^{t'}}(u)}.$$ Now let us introduce some more notation to help count the number of relevant experiments of the form $Y^{(t,x)}(\cdot,\cdot)$. \begin{definition} We say that a timestep $t'$ is $(t,x,u)$-relevant if $Y^{(t,x)}(t',u)$ is well-defined. Let $Rel(t,x,u)$ be the number of $(t,x,u)$-relevant timesteps; this is exactly the number of possible $t'$ such that $Y^{(t,x)}(t',u)$ is well-defined. We also let $\widetilde{Rel}(t,x) = \sum_u Rel(t,x,u)$ be the number of possible pairs $(t',u)$ such that $Y^{(t,x)}(t',u)$ is well-defined. \end{definition} We then define a sequence $\mathcal Y^{(t,x)} = Y_1^{(t,x)}, Y_2^{(t,x)}, \ldots, Y_{\widetilde{Rel}(t,x)}^{(t,x)}$ to be the sequence of well-defined $Y^{(t,x)}(\cdot,\cdot)$, ordered by the time of these experiments.
Notice that $\overline{\deg}_{A^t}(x) = \sum_{X \in \mathcal X^{(t,x)}} X = \sum_{Y \in \mathcal Y^{(t,x)}} Y.$ Now we discuss the replacement of $\hat{\mathcal X}^{(t,x)}$. For each $Y_i^{(t,x)}$, we let $\hat{Y}_i^{(t,x)}$ be an independent random variable that is true with probability $ \frac{ |\pi^{(t,x)}(t',u)| }{\deg_{G^t}(u)}$, where $(t',u)$ is the pair defining $Y_i^{(t,x)}$. Then the sequence $\hat{\mathcal Y}^{(t,x)}$ is $\hat{Y}_1^{(t,x)}, \hat{Y}_2^{(t,x)}, \ldots, \hat{Y}_{\widetilde{Rel}(t,x)}^{(t,x)}$. As mentioned above, $\widetilde{\deg}_{A^t}(x) = \sum_{\hat Y \in \hat{\mathcal Y}^{(t,x)}} \hat Y$. We state the two main claims below. \begin{claim}[Key Step $2^\prime$] $\overline{\deg}_{A^t}(x) \preceq \widetilde{\deg}_{A^t}(x)$. \end{claim} \begin{proof} As $\overline{\deg}_{A^t}(x) = \sum_{Y \in \mathcal Y^{(t,x)}} Y$, by \Cref{lem:doerr}, we need to show that $\mathbb{P}[Y_i^{(t,x)} = 1 | Y_1^{(t,x)}, \ldots, Y_{i-1}^{(t,x)}] \leq \mathbb{P}[\hat Y_i^{(t,x)} = 1].$ By replacing $\mathcal X^{(t,x)}$ and $\hat{\mathcal X}^{(t,x)}$ with $\mathcal Y^{(t,x)}$ and $\hat{\mathcal Y}^{(t,x)}$, the proof can be carried out with the same arguments we used in \Cref{sub:key 2}. Note that the proof works because the random variables from $\mathcal Y^{(t,x)}$ are \emph{not} negatively correlated. \end{proof} \begin{claim}[Key Step $3^\prime$] $ \widetilde{\deg}_{A^t}(x) \le 2 \log{(t)} \mathrm{target}^t(x) + O(\log{|M|})$ with probability $1-1/{|M|}^{10}$. \end{claim} \begin{proof} We can follow exactly the proof in \Cref{sub:key 3}. The only crucial part is to show that $\mathbb{E}[\widetilde{\deg}_{A^t}(x)] = \mathbb{E}[\widehat{\deg}_{A^t}(x)]$; the other parts can be argued in exactly the same way. This equality holds by the way we define the process.
For each $\hat Y_i^{(t,x)}$ that equals $1$ with probability $c/\deg_{G^t}(\mathrm{job}(Y_i^{(t,x)}))$ for some integer $c$, we have $c$ different boolean variables $\hat X_1,\hat X_2,\ldots,\hat X_c$ in $\hat{\mathcal X}^{(t,x)}$, each equal to $1$ with probability $1/\deg_{G^t}(\mathrm{job}(Y_i^{(t,x)})).$ Hence, the two expectations are identical by linearity of expectation. \end{proof} \section{A Fully-dynamic-to-decremental Reduction for Spanners}\label{sec:reduction} In this section, we give a reduction from a fully-dynamic spanner to a decremental spanner. This reduction is due to~\cite{BaswanaKS12}, and we provide it here for the completeness of our paper. Let $E_1, \ldots, E_j$ be a partition of $E$; the observation below states that the union of spanners of $E_1, \ldots, E_j$ is a spanner of $E$. \begin{observation}[Observation~5.2 in~\cite{BaswanaKS12}] \label{obs:union_spanners} For a given graph $G = (V,E)$, let $E_1, \ldots, E_j$ be a partition of the set of edges $E$, and let $\mathcal{E}_1, \ldots, \mathcal{E}_j$ be, respectively, $t$-spanners of the subgraphs $G_1 = (V,E_1), \ldots, G_j = (V,E_j)$. Then $\bigcup_i \mathcal{E}_i$ is a $t$-spanner of the original graph $G=(V,E)$. \end{observation} With this observation, the idea behind the reduction is to split the edge set into $O(\log n)$ subgraphs in such a way that every subgraph, except one, is a decremental instance. Formally, let $\ell_0$ be the greatest integer such that $2^{\ell_0} \leq n^{1+1/k}$. We do the following. \begin{enumerate} \item We partition $E$ into $E_0, \ldots, E_j$, $j = \lceil \log_2 n^{1-1/k} \rceil$, such that $|E_i| \leq 2^{\ell_0 + i}$. Each edge belongs to exactly one set $E_i$, and we keep track of this information. \item For each $E_i$, $i>0$, we maintain $H_i = (V,\mathcal E_i)$, which is a $(2k-1)$-spanner of $(V,E_i)$. \item We maintain a binary counter $\mathbf{C}$ which counts from $0$ to $\frac{n(n-1)}{2}$.
This counter will be used to decide when to rebuild $E_i$. \end{enumerate} In the beginning, we set $E_j = E$ and $E_i = \emptyset$ for all $i < j$, and the counter $\mathbf C$ is set to $0$. Any deletion of an edge $e \in E_i$, for any $i$, is handled as in the decremental case. When an edge $e$ is inserted, we increment the counter $\mathbf{C}$ by one. Let $g$ be the highest bit of $\mathbf C$ that gets flipped. If $g \leq \ell_0$, then we put $e$ in $E_0$ and $\mathcal E_0$. Otherwise, we insert $e$ into $E_h$, where $h = g - \ell_0$, and move all edges from $E_i$, $i<h$, to $E_h$; at this moment $E_i = \emptyset$ for all $i <h$. We then rebuild the spanner $\mathcal E_h$. From Observation~\ref{obs:union_spanners}, $\bigcup_i \mathcal E_i$ is a $(2k-1)$-spanner of $G$. \begin{lemma}[Restatement of Lemma~\ref{lemma:fully_dyn_reduction}] \label{lemma:fully_dyn_reduction_restate} Suppose that, for a graph $G$ with $n$ vertices and $m$ initial edges undergoing only edge deletions, there is an algorithm that maintains a $(2k-1)$-spanner $H$ of size $O(S(n))$ with $O(F(m))$ total recourse, where $F(m) = \Omega(m)$. Then there exists an algorithm that maintains a $(2k-1)$-spanner $H'$ of size $O(S(n) \log n)$ in a fully dynamic graph with $O(F(U \log n))$ total recourse, where $U$ is the number of updates made throughout the algorithm, starting from an empty graph. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:fully_dyn_reduction}] We use the reduction above to partition the graph into $E_1, \ldots, E_j$. We then use the decremental algorithm to maintain each $\mathcal E_i$ for all $i > 0$. We first show that $|H'| = |\bigcup_i \mathcal E_i| = O(S(n) \log n)$. As we have $O(\log n)$ subgraphs, $|\bigcup_i \mathcal E_i| \leq \sum_i S(|E_i|) = O( S(n)\log n)$. Now we bound the total recourse. Let $\mathcal G_1, \mathcal G_2, \ldots, \mathcal G_k$ be all the graphs we rebuild throughout all the timesteps.
Then the total recourse is bounded by $\sum_i F(|\mathcal G_i|) \leq F(\sum_i |\mathcal G_i|)$. Notice that the level of any edge $e$ can only go up, so $e$ can contribute recourse to only $\log n$ different graphs. Hence, this inequality becomes $$F(\sum_i |\mathcal G_i|) \leq F( U \log n ).$$ \end{proof} \section{Missing Proofs from \Cref{sec:3spanner}} \label{sec:app:missing:3spanner} \begin{claim}[Restatement of \Cref{cl:static:stretch-size}] The subgraph $H = (V, E_1 \cup E_2 \cup E_3)$ is a $3$-spanner of $G$ consisting of at most $O(n\sqrt{n})$ edges. \end{claim} \begin{proof} We need to show that (1) $H$ is a $3$-spanner and (2) $E_H = E_1 \cup E_2 \cup E_3$ has size at most $O(n \sqrt{n})$. \paragraph*{Stretch.} Consider an edge $e = (u,v)$ where $u \in V_i, v \in V_j$. We show that $H$ has a path of length at most $3$ between $u$ and $v$. The easy case is when $(u,v) \in E_H$, which gives us a path of one edge; it happens when $u = c_i(v)$, $v = c_j(u)$, or $i = j$. Suppose $(u,v ) \notin E_H$, and consider $v' = c_j(u)$. Since $u$ is a common neighbor of $v$ and $v'$, $P(v,v')$ is not empty. As $e \notin E_H$, $u \neq w_{vv'}$. Since the path $u, v', w_{vv'}, v$ has length exactly $3$, the stretch part is concluded. \paragraph*{Size.} Each vertex $u$ has up to $\sqrt{n}$ partners. Since we have $n$ vertices, $|E_1| = O(n \sqrt{n})$. For $E_2$, the graph induced on each $V_i$ has at most $O(n)$ edges. Since we have $\sqrt{n}$ buckets, $|E_2| = O(n \sqrt{n})$. For $E_3$, $|E_3|$ is bounded by the number of witnesses we need. Since we have $O(\sqrt{n})$ buckets, and $O(n)$ pairs of vertices within the same bucket, over all buckets $|E_3|$ is bounded by $O(n \sqrt{n})$. We conclude that $|E_H| = O(|E_1| + |E_2| + |E_3|) = O(n\sqrt{n})$. \end{proof} \subsection{Making the update time worst case} \label{sub:app:spread:work} The update time in Lemma~\ref{lm:worstcase} is already worst-case.
The whole algorithm is amortized only because we are {\em replacing} $E_3$ from scratch at the start of each phase, which takes $\tilde{O}(n^2)$ time, and this time needs to be amortized over the length of the phase. Using very standard techniques from the existing literature on dynamic algorithms (see e.g.~\cite{thorup2005worst,baswana2016dynamic,NanongkaiSW17,kiss2021deterministic}), we can easily convert this into an overall worst-case update time guarantee. The idea is to {\em spread out} the $\tilde{O}(n^2)$ cost of rebuilding at the start of a given phase over a sufficiently large chunk of updates in the phase preceding it. For completeness, we describe the idea in more detail here. Let $G_i$ be the graph after $i$ updates. We will use one instance of our algorithm to handle $L = \tilde{\Theta}(n\sqrt{n})$ updates; that is, the $i$-th instance will be used to handle the time steps $[(i-1)L, iL)$. For our idea to work, any copy must be able to handle $2L$ updates (which is fine in our case). At the beginning, we initiate the first copy $D_1$ in time $\tilde{O}(n^2)$. We want to initiate $D_2$, as well as feed $D_2$ with $L$ updates, so that $D_2$ is ready to use at time step $L$. This can be done in the following manner. \begin{itemize} \item During time steps $[0,L/3)$, we initiate $D_2$ with the graph $G_0$, \item During time steps $[L/3, 2L/3)$, we carefully feed the surviving\footnote{Some edges in the spanner of $D_2$ might be deleted during the time steps $[0,2L/3]$; we must not add deleted edges to our actual output.} output from $D_2$ into our actual output, \item During time steps $[2L/3,L)$, we update $D_2$ with the updates from time steps $[0,L)$, $3$ updates at a time. \end{itemize} Hence, at time step $L$, we can switch from $D_1$ to $D_2$, disregard $D_1$, and start initiating $D_3$. More generally, suppose that at time step $iL$ we have initiated $D_i$, which we will use for the time steps $[iL, (i+1)L)$.
In the following $L$ time steps, to disregard $D_{i-1}$ and initiate $D_{i+1}$, we do the following. \begin{itemize} \item During time steps $[iL,iL + L/3)$, we initiate $D_{i+1}$ with the graph $G_{iL}$ and slowly disregard the output from $D_{i-1}$, \item During time steps $[iL + L/3,iL + 2L/3)$, we carefully feed the surviving output from $D_{i+1}$ into our actual output, \item During time steps $[iL + 2L/3,(i+1)L)$, we update $D_{i+1}$ with the updates from time steps $[iL,(i+1)L)$, $3$ updates at a time. \end{itemize} Then, at time step $(i+1)L$, we have completely disregarded $D_{i-1}$ and completely initiated $D_{i+1}$, maintaining all the desired properties. Our actual output at time step $t$ consists of $E_{D_i}$ and $E_{D_{i+1}}$, the outputs of $D_i$ and $D_{i+1}$, respectively. Notice that $(V,E_{D_i})$ is a $3$-spanner of $G_t$; hence $(V, E_{D_i} \cup E_{D_{i+1}})$ is also a $3$-spanner of $G_t$, because a spanner with extra edges is still a spanner. Hence, as long as $|E_{D_i} \cup E_{D_{i+1}}|$ is not too large, we can keep both of them in our output at the same time. We conclude by noting that this idea of maintaining multiple copies, along with our \Cref{lm:worstcase:updatetime}, implies \Cref{thm:main det worst case}. \textbf{Note.} The idea above is not specific to spanners. Rather, it is applicable to any dynamic problem, as long as it can handle batch updates in worst-case time and the bottleneck of the algorithm is the initialization time. \end{document}
\begin{document} \title{Some invariant subalgebras are graded isolated singularities} \author{Ruipeng Zhu} \address{Department of Mathematics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China} \email{[email protected]} \begin{abstract} In this note, we prove that the invariant subalgebra of the skew polynomial algebra $\kk \langle x_0, x_1, \cdots, x_{n-1} \rangle / (\{x_ix_j+x_jx_i \mid i \neq j\})$ under the action $x_i \mapsto x_{i+1}\,(i \in \Z/{n\Z})$ is a graded isolated singularity, thus confirming a conjecture of Chan-Young-Zhang. \end{abstract} \subjclass[2020]{ 16S35 16S38 16W22 } \keywords{Graded isolated singularity, group action, pertinency, Gelfand-Kirillov dimension} \thanks{} \maketitle \section{Introduction} Noncommutative graded isolated singularities were defined by Ueyama \cite[Definition 2.2]{Uey2013}. A noetherian connected graded algebra $B$ is called a {\it graded isolated singularity} if the associated noncommutative projective scheme $\mathrm{Proj} (B)$ (in the sense of \cite{AZ1994}) has finite global dimension. See \cite{CKWZ2018, BHZ2018, GKMW2019, CKZ2020} for some examples of graded isolated singularities. Let $A$ be a noetherian Artin-Schelter regular algebra and $G$ be a finite subgroup of $\Aut_{\mathrm{gr}}(A)$. To prove a version of the noncommutative Auslander theorem, an invariant called the {\it pertinency} of the $G$-action on $A$ was introduced in \cite{BHZ2018} and \cite{BHZ2019}. We recall it here. The {\it pertinency} of the $G$-action on $A$ \cite[Definition 0.1]{BHZ2019} is defined to be $$\mathbf{p}(A, G):= \GKdim(A) - \GKdim(A\#G/(e_0)),$$ where $(e_0)$ is the ideal of the skew group algebra $A\#G$ generated by $e_0 := 1 \# \frac{1}{|G|} \sum_{g \in G} g$. Then, by \cite[Theorem 3.10]{MU2016}, $A^G$ is a graded isolated singularity if and only if $\mathbf{p}(A, G) = \GKdim(A)$.
Unlike in the commutative case, it is difficult to determine when the invariant subalgebra is a graded isolated singularity. Let $\kk$ be an algebraically closed field of characteristic zero. Let $A = \kk_{-1}[x_0, \dots, x_{n-1}] \,(n \geqslant 2)$ be the ($-1$)-skew polynomial algebra, which is generated by $\{x_0, \dots, x_{n-1}\}$ subject to the relations $$x_ix_j = (-1) x_jx_i\,(\forall i \neq j).$$ Let $G:=C_n$ be the cyclic group of order $n$ acting on $A$ by permuting the generators of the algebra cyclically; namely, $C_n$ is generated by $\sigma = (0 \, 1 \, 2 \, \cdots \, n-1)$ of order $n$, which acts on the generators by $$\sigma x_i = x_{i+1}, \; \forall \, i \in \Z_n:= \Z/n\Z.$$ In \cite[Theorem 0.4]{CYZ2020}, Chan, Young and Zhang prove the following result on graded isolated singularities. \begin{thm}\label{CYZ-thm} If either $3$ or $5$ divides $n$, then $\mathbf{p}(A, G) < \GKdim A = n$. Consequently, the invariant subalgebra $A^G$ is not a graded isolated singularity. \end{thm} Based on this theorem and \cite[Theorem 0.2]{CYZ2020}, Chan, Young and Zhang give the following conjecture \cite[Conjecture 0.5]{CYZ2020}. \begin{conj}\label{conj} The invariant subalgebra $A^G$ is a graded isolated singularity if and only if $n$ is not divisible by $3$ or $5$. \end{conj} To prove Conjecture \ref{conj}, it suffices to prove the following theorem, which is the main result of this note. \begin{thm}\label{main-thm} If $n$ is not divisible by $3$ or $5$, then $\mathbf{p}(A, G) = \GKdim A = n$. As a consequence, $A^G$ is a graded isolated singularity. \end{thm} \section{Preliminaries} Before giving a proof of Theorem \ref{main-thm}, let us recall some notation and results from \cite{CYZ2020}. Let $\omega$ be a primitive $n$th root of unity. For any $\gamma = 0, 1, \dots, n-1 \in \Z_n$, let $$b_{\gamma} := \frac{1}{n} \sum_{i=0}^{n-1} \omega^{i\gamma} x_i \in A \subseteq A \# C_n.$$ Then $b_{\gamma}$ is an $\omega^{- \gamma}$-eigenvector of $\sigma$.
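Indeed, the eigenvector property follows from a direct computation: reindexing with $j = i+1$ modulo $n$ and using $\omega^n = 1$, we get
$$\sigma \cdot b_{\gamma} = \frac{1}{n} \sum_{i=0}^{n-1} \omega^{i\gamma} x_{i+1} = \frac{1}{n} \sum_{j=0}^{n-1} \omega^{(j-1)\gamma} x_{j} = \omega^{-\gamma} b_{\gamma}.$$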
Let $$e_{\gamma} := \frac{1}{n} \sum_{i=0}^{n-1} (\omega^{\gamma} \sigma)^{i} \in \kk C_n \subseteq A\#C_n,$$ which are idempotent elements. Suppose $\deg(x_i) = 1$ and $\deg(e_i) = 0$ for all $i \in \Z_n$. As usual, $[-,-]$ denotes the graded commutator of the graded ring $A\#C_n$, that is, $[u,v] = uv - (-1)^{\deg(u)\deg(v)} vu$ for any homogeneous elements $u,v \in A\#C_n$. \begin{lem}\cite[Lemma 1.1]{CYZ2020} The graded algebras $A$ and $A\#C_n$ can be presented as $$A \cong \frac{\kk \langle b_0, \dots, b_{n-1} \rangle}{([b_0, b_k] - [b_l, b_{k-l}])}\, \text{ and }\, A \# C_n \cong \frac{\kk \langle b_0, \dots, b_{n-1}, e_0, \dots, e_{n-1} \rangle}{(e_{\alpha}b_{\gamma} - b_{\gamma}e_{\alpha-\gamma}, e_ie_j-\delta_{ij}e_i, [b_0, b_k] - [b_l, b_{k-l}])}$$ respectively, where $\delta_{ij}$ is the Kronecker delta and indices are taken modulo $n$. \end{lem} For each $j \in \Z_n$, let $$c_j := [b_k, b_{j-k}] = b_kb_{j-k} + b_{j-k}b_k = \frac{2}{n^2}\sum_{i=0}^{n-1}\omega^{ij}x_i^2.$$ Then $c_j$ is an $\omega^{-j}$-eigenvector of $\sigma$. For any vector $\mathbf{i} = (i_0, \dots, i_{n-1}) \in \N^n$, we use the following notations: $$\mathbf{b^i} = b_0^{i_0} \cdots b_{n-1}^{i_{n-1}} \,\textrm{ and }\, \mathbf{c^i} = c_0^{i_0} \cdots c_{n-1}^{i_{n-1}}.$$ Let $R_{\gamma}$ be the subspace of $A$ spanned by the elements $\mathbf{b^ic^j}$ such that $\sum_{s=0}^{n-1} (i_s+j_s)s = \gamma \mod n$; that is, $R_{\gamma}$ consists of $\omega^{-\gamma}$-eigenvectors of $\sigma$. This gives an $R_0$-module decomposition $$A = R_0 \oplus R_1 \oplus \cdots \oplus R_{n-1}.$$ \begin{defn} \begin{enumerate} \item Let $\Phi_n:= \{ k \mid c_k^{N_k} \in (e_0) \text{ for some } N_k \geq 0 \}$, where $(e_0)$ is the two-sided ideal of $A\#C_n$ generated by $e_0$. \item Let $\phi_2(n) := \{ k \mid 0 \leq k \leq n-1, \gcd(k, n) = 2^{w} \text{ for some } w \geq 0 \}$. \item Let $\Psi_{j}^{[n]}:= \{ i \mid c_i^N \in R_jA \text{ for some } N \geq 0 \}$.
\item \cite[Definition 5.2 and Lemma 5.3(1)]{CYZ2020} We say $n$ is {\it admissible} if, for any $i$ and $j$, $i \in \Psi_j^{[n]}$, or equivalently, $\GKdim (A\#C_n/(e_0)) = 0$. \item Let $\overline{A} := A/\langle c_k \mid k \in \Phi_n \rangle$, and $\overline{\Psi}_{j}^{[n]}:= \{ i \mid c_i^N \in \overline{R}_j\overline{A} \text{ for some } N \geq 0 \}$ where $\overline{R}_j = \frac{R_j + \langle c_k \mid k \in \Phi_n \rangle}{\langle c_k \mid k \in \Phi_n \rangle} \subseteq \overline{A}$. \end{enumerate} \end{defn} Let $\Z_n^{\times}$ be the set of invertible elements in $\Z_n$. \begin{lem}\label{lem1} \begin{enumerate} \item \cite[Definition 6.1]{CYZ2020} $\Phi_n$ is a special subset of $\Z_n$, that is, $k \in \Phi_n$ if and only if $\lambda k \in \Phi_n$ for all $\lambda \in \Z_n^{\times}$. \item \cite[Proposition 2.3]{CYZ2020} $\phi_2(n) \subseteq \Phi_n$. \end{enumerate} \end{lem} The following proposition follows from the proof of \cite[Proposition 6.6]{CYZ2020}. \begin{prop}\label{prop1} Let $n \geq 2$ be such that $3, 5 \nmid n$. If $1, \dots, n-1 \in \overline{\Psi}_1^{[n]}$, then $0 \in \overline{\Psi}_1^{[n]}$. \end{prop} \begin{prop}\cite[Proposition 6.8]{CYZ2020}\label{prop2} Let $n \geq 2$. Suppose that \begin{enumerate} \item every proper factor of $n$ is admissible, and \item for each $0 \leq i \leq n-1$, $i \in \overline{\Psi}_1^{[n]}$. \end{enumerate} Then $n$ is admissible. \end{prop} \section{Proof of Theorem \ref{main-thm}}\label{sec2} \begin{proof}[Proof of Theorem \ref{main-thm}] We prove the theorem by induction on $n$. Assume that every proper factor of $n$ is admissible. By Proposition \ref{prop2}, it suffices to prove that $$\text{ for each } 0 \leq i \leq n-1, \; i \in \overline{\Psi}_1^{[n]}.$$ Suppose this is not true; that is, there is $ 0 \leq m \leq n-1$ such that $m \notin \overline{\Psi}_1^{[n]}$.
Then we may assume that \begin{enumerate} \item $m \neq 0$, by Proposition \ref{prop1}; \item $m \mid n$, by Lemma \ref{lem1} (2) as $\Z_n^{\times} \subseteq \phi_2(n) \subseteq \Phi_n$; \item $m > 5$, by Lemma \ref{lem1} (2) and the assumption $3,5 \nmid n$. \end{enumerate} Write $n = mq$ with $q>1$. Since $c_m$ is an eigenvector of $\sigma$, the group $C_n$ acts on the localization $A[c_m^{-1}]$, and $A[c_m^{-1}] \# C_n / (e_0) \cong (A \# C_n / (e_0))[c_m^{-1}]$. Let $$\widetilde{A} = \frac{\kk \langle b_0, \dots, b_{m-1} \rangle}{([b_0, b_k] - [b_l, b_{k-l}] \mid l, k \in \Z_m)}$$ be a subalgebra of $A$, and let $\widetilde{R}_{\gamma}$ be the subspace of $\widetilde{A}$ spanned by the elements $\mathbf{b^ic^j}$ such that $\sum_{s=0}^{m-1} (i_s+j_s)s = \gamma \mod m$. For any $\mathbf{b^ic^j} = b_0^{i_0} \cdots b_{m-1}^{i_{m-1}}c_0^{j_0} \cdots c_{m-1}^{j_{m-1}} \in \widetilde{R}_1$ with $$\sum_{s=0}^{m-1} (i_s + j_s)s = mk + 1 \text{ for some } k \geq 0,$$ there exists $l > 0$ such that $(l-1)q \leq k < lq$. Hence $\mathbf{b^ic^j}c_{m}^{lq-k} \in R_1A$, and $\mathbf{b^ic^j} \in R_1A[c_m^{-1}]$. It follows that $$\widetilde{R}_1\widetilde{A} \subseteq R_1A[c_m^{-1}].$$ Write $\widetilde{\omega} = \omega^q$. Note that $\widetilde{A} \cong \kk_{-1}[\widetilde{x}_0, \dots, \widetilde{x}_{m-1}]$ via $b_{\gamma} \mapsto \frac{1}{m} \sum_{i=0}^{m-1}\widetilde{\omega}^{i \gamma} \widetilde{x_i}$. Then the cyclic group $C_m$ of order $m$ acts on $\widetilde{A}$ by permuting the generators of the algebra cyclically; namely, $C_m$ is generated by $\widetilde{\sigma} = (0 \, 1 \, 2 \, \cdots \, m-1)$ of order $m$, which acts on the generators by $$\widetilde{\sigma} \widetilde{x_i} = \widetilde{x_{i+1}}, \; \forall \, i \in \Z_m.$$ Then $\widetilde{R}_{\gamma}$ consists of $\widetilde{\omega}^{-\gamma}$-eigenvectors of $\widetilde{\sigma}$.
By assumption, $m$ is admissible, so for any $0 \leq i \leq m-1$, there exists $N_i$ such that $$c_i^{N_i} \in \widetilde{R}_1\widetilde{A} \subseteq R_1A[c_m^{-1}].$$ Let $\Gamma$ be the right ideal $R_1A[c_m^{-1}] + \sum\limits_{\exists \, N_k, \, {c_k^{N_k}} \in R_1A[c_m^{-1}]} c_kA[c_m^{-1}]$ of $A[c_m^{-1}]$. Next we prove that $\Gamma = A[c_m^{-1}]$. The following proof is quite similar to the proof of \cite[Proposition 6.6]{CYZ2020}. ~\\ Claim 1. Let $0 \leq j < \frac{m-1}{2}$. If $c_m^sb_j \in \Gamma$ for some $s>0$, then $c_m^{s+1}b_{j+1} \in \Gamma$. \begin{proof}[Proof of Claim 1] First of all, $b_{j+1}b_{m-j} \in \widetilde{R}_1\widetilde{A} \subseteq R_1A[c_m^{-1}]$ since $(j+1) + (m-j) = 1 \mod m$. Since $c_m^sb_j \in \Gamma$, we have \begin{align*} \Gamma \ni & [b_{j+1}b_{m-j}, c_m^sb_j] \\ & = c_m^sb_{j+1}b_{m-j}b_j - c_m^sb_jb_{j+1}b_{m-j} \\ & = c_m^{s}b_{j+1}b_{m-j}b_j + c_m^sb_{j+1}b_jb_{m-j} - c_m^sc_{2j+1}b_{m-j} \\ & = c_m^{s+1}b_{j+1} - c_m^sc_{2j+1}b_{m-j}. \end{align*} Since $2j+1 < m$, there exists $N_{2j+1} > 0$ such that $c_{2j+1}^{N_{2j+1}} \in \widetilde{R}_1\widetilde{A}$, and hence $c_m^{s+1}b_{j+1} \in \Gamma$. \end{proof} ~\\ Claim 2. Suppose that $m = 2k +1$. If $c_m^sb_k \in \Gamma$, then $c_m^{s+2}b_{k+2} \in \Gamma$. \begin{proof}[Proof of Claim 2] Note that $b_{k+1}b_{k+2}b_{m-1} \in \widetilde{R}_1\widetilde{A} \subseteq \Gamma$ as $(k+1) + (k+2) + (m-1) = 1 \mod m$. \begin{align*} \Gamma \ni & [c_m^sb_k, b_{k+1}b_{k+2}b_{m-1}] \\ & = c_m^sb_kb_{k+1}b_{k+2}b_{m-1} + c_m^sb_{k+1}b_{k+2}b_{m-1}b_k \\ & = c_m^sb_kb_{k+1}b_{k+2}b_{m-1} + c_m^sc_{3k} b_{k+1}b_{k+2} - c_m^sb_{k+1}b_{k+2}b_kb_{m-1} \\ & = c_m^sb_kb_{k+1}b_{k+2}b_{m-1} + c_m^sc_{3k} b_{k+1}b_{k+2} - c_m^sc_{m+1}b_{k+1}b_{m-1} + c_m^sb_{k+1}b_kb_{k+2}b_{m-1} \\ & = c_m^{s+1} b_{k+2}b_{m-1} + c_m^sc_{3k} b_{k+1}b_{k+2} - c_m^sc_{m+1}b_{k+1}b_{m-1}. \end{align*} Since $c_{m+1}c_m^{q-1} \in R_1 A$, we have $c_{m+1} \in R_1A[c_m^{-1}]$.
Hence $c_m^{s+1} b_{k+2}b_{m-1} + c_m^sc_{3k} b_{k+1}b_{k+2} \in \Gamma$. \begin{align*} \Gamma \ni & [c_m^{s+1} b_{k+2}b_{m-1} + c_m^sc_{3k} b_{k+1}b_{k+2}, b_1] \\ & = c_m^{s+1} b_{k+2}b_{m-1}b_1 - c_m^{s+1} b_1b_{k+2}b_{m-1} + c_m^sc_{3k} b_{k+1}b_{k+2}b_1 - c_m^sc_{3k} b_1b_{k+1}b_{k+2} \\ & = c_m^{s+2} b_{k+2} - c_m^{s+1} b_{k+2}b_1b_{m-1} - c_m^{s+1} b_1b_{k+2}b_{m-1} \\ & \;\; + c_m^sc_{3k} b_{k+1}b_{k+2}b_1 + c_m^sc_{3k} b_{k+1}b_1b_{k+2} - c_m^sc_{3k} c_{k+2}b_{k+2} \\ & = c_m^{s+2} b_{k+2} - c_m^{s+1} c_{k+3}b_{m-1} + c_m^sc_{3k} c_{k+3} b_{k+1} - c_m^sc_{3k} c_{k+2}b_{k+2}. \end{align*} By the assumption $m > 5$, we have $k > 2$. Since $k+2 < k+3 < 2k+1 = m$, both $c_{k+2}$ and $c_{k+3}$ lie in $\Gamma$. It follows that $c_m^{s+2} b_{k+2} \in \Gamma$. \end{proof} ~\\ Claim 3. $c_m^{m-1} \in \Gamma$. \begin{proof}[Proof of Claim 3] First assume that $m$ is even. Starting with $b_1$ and applying Claim 1 $(\frac{m}{2}-1)$ times, we get $c_m^{\frac{m}{2}-1} b_{\frac{m}{2}} \in \Gamma$. Hence $c_m^{m-1} = [c_m^{\frac{m}{2}-1} b_{\frac{m}{2}}, c_m^{\frac{m}{2}-1} b_{\frac{m}{2}}] \in \Gamma$. If $m = 2k+1$ is odd, then by applying Claim 1 $(k-2)$ and $(k-1)$ times we get $c_m^{k-2} b_{k-1} \in \Gamma$ and $c_m^{k-1} b_{k}\in \Gamma$, respectively. By applying Claim 2 we get $c_m^{k+1} b_{k+2} \in \Gamma$. Therefore, $c_m^{2k} = [c_m^{k-2} b_{k-1}, c_m^{k+1} b_{k+2}] \in \Gamma.$ \end{proof} By Claim 3, $\Gamma = A[c_m^{-1}]$. Recall that $\Gamma = R_1A[c_m^{-1}] + \sum\limits_{\exists \, N_k, \, {c_k^{N_k}} \in R_1A[c_m^{-1}]} c_kA[c_m^{-1}]$. It is not difficult to see that $A[c_m^{-1}] = R_1A[c_m^{-1}]$. So there exists $N \geq 0$ such that $c_m^N \in R_1A$, which is a contradiction (as $m \notin \overline{\Psi}_1^{[n]}$). This implies $\overline{\Psi}_1^{[n]} = \{0, 1, \dots, n-1 \}$, that is, $n$ is admissible. Hence $\GKdim(A\#C_n / (e_0)) = 0$, and $\mathbf{p}(A, G) = n$.
\end{proof}
\section*{Acknowledgments}
The author is very grateful to Professors Quanshui Wu and James Zhang, who read the paper and made numerous helpful suggestions.
\end{document}
\begin{document} \title{Degeneracy of Angular Voronoi Diagram} \author{ Hidetoshi Muta\\ Department of Computer Science, \\ University of Tokyo \\ [email protected] \and Kimikazu Kato\\ Nihon Unisys, Ltd. \\ \& \\ Department of Computer Science, \\ University of Tokyo \\ [email protected] } \date{} \maketitle
\begin{abstract}
The angular Voronoi diagram was introduced by Asano et al.\ as fundamental research toward mesh generation. In an angular Voronoi diagram, the edges are curves of degree three. From the viewpoint of computational robustness, we need to treat these curves carefully, because they might have singularities. We enumerate all the possible types of curves that can appear as an edge of an angular Voronoi diagram, which tells us what kinds of degeneracy are possible and shows the necessity of considering singularities for computational robustness.
\end{abstract}
\section{Introduction}
The Voronoi diagram has developed into one of the most important research objects in computational geometry~\cite{vdsurvey, spatialTessellations}. Although Voronoi diagrams were originally investigated in terms of Euclidean space, research about Voronoi diagrams in other distance spaces has developed recently. Voronoi diagrams can be considered a powerful tool to analyze the structure of an unnaturally distorted distance space~\cite{kato05,kato06a,kato06b}. Asano et al.~\cite{avd} first introduced the angular Voronoi diagram. It was proposed as fundamental research applicable to mesh generation. They showed that the edges of an angular Voronoi diagram are curves whose degree is at most three. Generally, understanding the degeneracy of a Voronoi diagram is very important, especially from a computational point of view. It is the basis for assuring robust computation of the actual diagram. To deal with other distance spaces while keeping computational robustness, we extend the meaning of ``degeneracy.'' When we say a ``degenerate case,'' it is a case which requires special care for computational robustness.
Especially for an angular Voronoi diagram, we have to consider singularities of curves. We show a symbolic example in Fig.~\ref{sample}. In this example, the two edges have an intersection at their singular points, both of which are nodes. In this paper, we show a classification of the possible curves which can appear as edges of an angular Voronoi diagram. Although we have not completed the whole classification, we have shown the possible variety of curves. This is a first step toward investigating the degeneracy of angular Voronoi diagrams. It also gives a good indication of the structure of a general Voronoi diagram with respect to a non-Euclidean distance.
\begin{figure} \caption{Example where two edges cross at their singular points} \label{sample} \end{figure}
The rest of this paper is organized as follows. First, in Sect.~\ref{preliminary}, we give some mathematical properties of equations of degree three and properties of edges of an angular Voronoi diagram. In Sect.~\ref{threeDegreeCase} and Sect.~\ref{lowerDegreeCase}, we show the degeneracy of angular Voronoi diagrams. We give a conclusion in Sect.~\ref{conclusion}.
\section{Preliminaries}\label{preliminary}
\subsection{Curves of degree three and their singularities}
We explain basic mathematical facts about algebraic curves of degree three. For more general theory about algebraic curves, see \cite{Hartshorn77}. As for singularities, degree three has a special meaning, because curves of degree two have only trivial singularities. Actually, a curve of degree two is singular only when its equation factors into two linear equations, and the only singularity that may appear is the cross point of the two lines. In general, a point on a curve $ F(x, y) = 0 $ at which $ \frac{\partial F}{\partial x} = 0 $ and $ \frac{\partial F}{\partial y} = 0 $ is called a singular point. Otherwise, the point is called a regular point. We show three examples of singular points.
Let $ F(x, y) = y^2 - x^2 (x + a) = 0 $, that is, $ y = \pm x \sqrt{x + a} $. Since $ \frac{\partial F}{\partial x} = - 3 x^2 - 2 a x$ and $ \frac{\partial F}{\partial y} = 2 y $, the point $ (0, 0) $ is a singular point. Depending on the sign of $a$, the singular point is classified into one of the following three types, as shown in Fig.~\ref{sng}.
\begin{figure} \caption{Three types of singular points of curves of degree three} \label{sng} \end{figure}
$ ( 1 ) $ $ a > 0 $. Two smooth curves, which are symmetric with respect to the $x$-axis, pass through the point $(0, 0)$, as shown in the left of Fig.~\ref{sng}, because $ x + a > 0 $ in a neighborhood of $ x = 0 $. In this case, $ (0, 0) $ is called a node.
$ ( 2 ) $ $ a < 0 $. There is no corresponding real value of $ y $, as shown in the middle of Fig.~\ref{sng}, since $ x + a < 0 $ in a neighborhood of $ x = 0 $; hence $ (0, 0) $ is an isolated point.
$ ( 3 ) $ $ a = 0 $. Then $ y = \pm x^{\frac{3}{2}} $. If $ x > 0 $, there are two curves which are symmetric with respect to the $x$-axis, as shown in the right of Fig.~\ref{sng}. The two curves have a common tangent line at $ (0, 0) $. This singular point is called a cusp.
\subsection{Angular Voronoi diagram} \label{avdedge}
In this section, we define the angular Voronoi diagram and show that its edges have degree at most three. When a line segment $ s $ and a point $ p $ in a plane are given, the visual angle of $ s $ from $ p $ is the angle formed by the two rays emanating from $ p $ through the two endpoints of $ s $, and it is denoted by $ \theta_p(s) $. Since we do not use the direction of the angle, the range of the angle is always between $ 0 $ and $ \pi $.
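The visual angle can be computed directly from the two rays; a minimal sketch in Python (the function name and coordinate conventions are ours, not from \cite{avd}):

```python
import math

def visual_angle(p, s):
    """Visual angle theta_p(s) of segment s = (e1, e2) seen from point p:
    the undirected angle between the rays p->e1 and p->e2, in [0, pi]."""
    (x1, y1), (x2, y2) = s
    v1 = (x1 - p[0], y1 - p[1])
    v2 = (x2 - p[0], y2 - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    # atan2 of |cross| and dot yields the angle in [0, pi],
    # independent of the orientation of the segment.
    return math.atan2(abs(cross), dot)

# From (0, 1), the segment with endpoints (-1, 0) and (1, 0)
# subtends a right angle.
theta = visual_angle((0.0, 1.0), ((-1.0, 0.0), (1.0, 0.0)))  # pi/2
```

Using the absolute value of the cross product makes the result independent of which endpoint is listed first, matching the convention that the angle is undirected.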
Given a set $ S $ of $ n $ line segments $ s_1, s_2, \dots, s_n $ in a plane, we define an angular Voronoi diagram $ AVD(S) $ for $ S $ as follows:
\begin{description}
\item[Voronoi region:] Each line segment $ s_i $ is associated with a region, called a Voronoi region $ V(s_i) $, consisting of all points $ p $ such that the visual angle of $ s_i $ from $ p $ is smaller than that of any other line segment $ s_j $:
\begin{equation} V(s_i) = \{ p \in \mathbb{R}^2 \mid \theta_p(s_i) < \theta_p(s_j) \text{ for any } j \neq i \}. \end{equation}
\item[Voronoi edge:] Voronoi edges form the boundary of Voronoi regions. Thus, they are defined for pairs of line segments:
\begin{equation} \begin{split} E(s_i, s_j) = \{ p \in \mathbb{R}^2 \mid \theta_p(s_i) = \theta_p(s_j) < \theta_p(s_k) \\ \text{ for any } k \neq i , j\}. \end{split} \end{equation}
\end{description}
The following theorem was proved by Asano et al.
\begin{theorem}[\cite{avd}] \label{asano} Edges of an angular Voronoi diagram are described by polynomial curves of degree at most three. \end{theorem}
Let us consider a pair of line segments $ (s_1, s_2) $. Without loss of generality, we assume that $ s_1 $ is fixed on the $x$-axis with one endpoint at $(1, 0)$ and the other at $ (-1, 0) $, and we assume that the length of $ s_2 $ is $ 2 l $, the midpoint of $ s_2 $ is at $ ( a, b ) $, and the angle between the $x$-axis and $s_2$ is $ \alpha $ (see Fig.~\ref{arrangement2seg}). Then, we have the equation of the edge of the angular Voronoi diagram as follows:
\begin{figure} \caption{The arrangement of the two line segments} \label{arrangement2seg} \end{figure}
\begin{equation} \begin{split} & y \{ (x-a)^2 + (y-b)^2 - l^2 \} \\ & -l \{ (x-a) \sin \alpha - (y-b) \cos \alpha \} ( x^2 + y^2 - 1 ) \\ & = 0\label{eq:avdeq} \end{split} \end{equation}
For the details of the proof of this theorem, see~\cite{avd}. \iffalse \begin{proof} Each Voronoi edge is defined by a pair of line segments.
They consider the edge for a pair $ ( s_1, s_2 ) $ of line segments. \begin{equation} x^2 + (y-h)^2 = 1 + h^2 \end{equation} \begin{equation} h = \frac{x^2 + y^2 - 1}{2 y} \end{equation} \begin{equation} \begin{split} & (x - a - l h \sin \alpha)^2 + (y - b + l h \cos \alpha )^2 \\ = &l^2 (1 + h^2) \end{split} \end{equation} \begin{equation} \begin{split} & (x - a)^2 + (y - b)^2 - l^2 \\ = & 2 l \{ (x - a) \sin \alpha - (y - b) \cos \alpha \} h \end{split} \end{equation} \begin{equation} \begin{split} & y \{ (x-a)^2 + (y-b)^2 - l^2 \} \\ & -l \{ (x-a) \sin \alpha - (y-b) \cos \alpha \} ( x^2 + y^2 - 1 ) \\ & = 0 \end{split} \end{equation} \end{proof} \fi
\section{Degree three case}\label{threeDegreeCase}
In this section, we show special cases even when the edge is of degree three. First, we show the condition for an edge to be a union of two curves. Second, we show an example of an edge which is irreducible but has a singularity.
\begin{theorem} \label{th:d3f} Whenever an equation of an edge of an angular Voronoi diagram is of degree exactly three and factorable, the equation is a product of the equation of a circle and the equation of a line. \end{theorem}
\begin{proof} We expand Eq.~\eqref{eq:avdeq}, which is an equation of an edge of an angular Voronoi diagram. Then we have (we show only the degree three terms, because the full equation is too long)
\begin{equation} \begin{split} & ( l \cos \alpha + 1) y^3 + ( - l \sin \alpha ) y^2 x \\ & + ( l \cos \alpha + 1) y x^2 + ( - l \sin \alpha) x^3 + \dotsb = 0. \label{eq:avdeqex} \\ \end{split} \end{equation}
The general form of a product of an expression of degree two and an expression of degree one is
\begin{equation} \begin{split} & ( a_1 y^2 + a_2 y x + a_3 x^2 + a_4 y + a_5 x + a_6 ) \\ & \times ( b_1 y + b_2 x + b_3)= 0. \label{eq:gened2d1} \\ \end{split} \end{equation}
We expand Eq.~\eqref{eq:gened2d1}.
Then we have (only the degree three terms)
\begin{equation} \begin{split} & a_1 b_1 y^3 + (a_1 b_2 + a_2 b_1) y^2 x \\ & + (a_2 b_2 + a_3 b_1) y x^2 + a_3 b_2 x^3 + \dotsb = 0. \label{eq:gened2d1ex} \\ \end{split} \end{equation}
If Eq.~\eqref{eq:avdeq} is factorable, then the degree three terms of Eq.~\eqref{eq:avdeqex} and Eq.~\eqref{eq:gened2d1ex} must have the same coefficients:
\begin{align} l \cos \alpha + 1 &= a_1 b_1, & -l \sin \alpha &= a_1 b_2 + a_2 b_1, \notag \\ l \cos \alpha + 1 &= a_2 b_2 + a_3 b_1, & -l \sin \alpha &= a_3 b_2. \label{simueq} \end{align}
Eliminating $ l $ and $ \alpha $ from Eq.~\eqref{simueq}, we have
\begin{align} a_1 b_1 &= a_2 b_2 + a_3 b_1, & a_3 b_2 &= a_1 b_2 + a_2 b_1. \label{a5} \end{align}
Solving Eq.~\eqref{a5}, we have
\begin{gather} b_1 = b_2 = 0 \qquad \text{or} \label{b1b20} \\ a_1 = a_3, \qquad a_2 = 0. \label{a1eqa3a20} \end{gather}
If Eq.~\eqref{b1b20} holds, then we have
\begin{equation} b_3( a_1 y^2 + a_2 y x + a_3 x^2 + a_4 y + a_5 x + a_6 ) = 0 \label{eqd2} \\ \end{equation}
We do not consider this case in this proof, because Eq.~\eqref{eqd2} is of degree two. If Eq.~\eqref{a1eqa3a20} holds, then we have
\begin{equation} ( a_1 y^2 + a_1 x^2 + a_4 y + a_5 x + a_6 ) ( b_1 y + b_2 x + b_3)= 0 \\ \end{equation}
$ a_1 y^2 + a_1 x^2 + a_4 y + a_5 x + a_6 = 0 $ is the equation of a circle. \end{proof}
Note that we do not claim the radius of the circle is positive; it may be zero or an imaginary number. We found the following remarkable examples.
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_000.eps} \end{center} \caption{Example 1: circular degeneracy} \label{circleex1} \end{figure}
\begin{example} If the four endpoints of the two line segments are on the same circle and the two line segments have the same length, the equation of the edge of the angular Voronoi diagram is a product of the equation of a circle and the equation of a line, as in Fig.~\ref{circleex1}. \end{example}
\begin{proof} We use the same setting as Section~\ref{avdedge}.
We consider a circle $ C $ whose center is $ (0, h) $ and whose radius is $ \sqrt{h^2 + 1} $. The two endpoints of $ s_1 $ are on $ C $. We change the parameters:
\begin{align} a &= h \cos \theta, & b &= h + h \sin \theta, & l &= 1, & \alpha &= \theta - \frac{\pi}{2}. \label{circlep1} \end{align}
Then the two endpoints of $ s_2 $ are on $ C $ and the length of $ s_2 $ is also $ 2 $. Substituting Eq.~\eqref{circlep1} into Eq.~\eqref{eq:avdeq} and factorizing it, we have
\begin{equation} \begin{split} & (y^2 + x^2 - 2hy - 1) \\ & \times \{ ( 1 + \sin \theta) y + (\cos \theta) x - h ( 1 + \sin \theta) \} = 0 \label{circleeq2}. \end{split} \end{equation}
Eq.~\eqref{circleeq2} is a product of an equation of a circle and an equation of a line. \end{proof}
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_004.eps} \end{center} \caption{Arrangement of the two line segments} \label{circlese} \end{figure}
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_001.eps}\\ \end{center} \caption{Example 2: circular degeneracy} \label{circleex2} \end{figure}
\begin{example} Suppose that two line segments AB and CD are given and O is the crosspoint of AC and BD. If AC crosses BD orthogonally, the length of AO is equal to the length of CO, and the length of BO is not equal to the length of DO, as in Fig.~\ref{circlese}, then the equation of the edge of the angular Voronoi diagram is a product of the equation of a circle and the equation of a line, as in Fig.~\ref{circleex2}. \end{example}
\begin{proof} We place line segment $ s_1 $ on $ (2, 0) , (0, - 2 \tan \theta_1) $ and line segment $ s_2 $ on $ (-2, 0) , (0, 2 \tan \theta_2) $. Suppose $(x, y)$ is a point from which the visual angle of $ s_1 $ is equal to that of $ s_2 $. Then there are two circles $C_1$ and $C_2$ such that $C_1$ passes through $(x, y)$ and the two endpoints of $s_1$, and $C_2$ passes through $(x, y)$ and the two endpoints of $s_2$. The center of $C_1$ can be written as $ (1 - h_1 \sin \theta_1, - \tan \theta_1 + h_1 \cos \theta_1) $ for some parameter $h_1$, because $C_1$ passes through the two endpoints of $s_1$.
In the same way, the center of $C_2$ is $ (-1 - h_2 \sin \theta_2, \tan \theta_2 + h_2 \cos \theta_2) $ for some parameter $h_2$. Because both $ (2, 0) $ and $ (x, y) $ are on $ C_1 $,
\begin{equation} \begin{split} & (1 - h_1 \sin \theta_1 - 2)^2 + ( - \tan \theta_1 + h_1 \cos \theta_1 )^2 \\ = & (1 - h_1 \sin \theta_1 - x)^2 + ( - \tan \theta_1 + h_1 \cos \theta_1 - y)^2. \label{p1} \end{split} \end{equation}
Because both $ (-2, 0) $ and $ (x, y) $ are on $ C_2 $,
\begin{equation} \begin{split} & (-1 - h_2 \sin \theta_2 - (-2))^2 + ( \tan \theta_2 + h_2 \cos \theta_2 )^2 \\ = & (-1 - h_2 \sin \theta_2 - x)^2 + ( \tan \theta_2 + h_2 \cos \theta_2 - y)^2. \label{p2} \end{split} \end{equation}
The ratio between the distance from the center of $C_1$ to $s_1$ and the length of $s_1$ is equal to the ratio between the distance from the center of $C_2$ to $s_2$ and the length of $s_2$, since the visual angle of $s_1$ from $(x, y)$ is equal to that of $s_2$ from $(x, y)$:
\begin{equation} \begin{split} h_1 : h_2 = \left| \frac{2}{\cos \theta_1} \right| : \left| \frac{2}{\cos \theta_2} \right|. \label{p3} \end{split} \end{equation}
We expand Eq.~\eqref{p1} and Eq.~\eqref{p2} and substitute the results into Eq.~\eqref{p3}; then we have
\begin{equation} \begin{split} (x^3 + x y^2-4 x ) \sin(\theta_1 - \theta_2) - 4 x y \cos(\theta_1 - \theta_2) = 0. \label{p4} \end{split} \end{equation}
We can divide Eq.~\eqref{p4} by $ \sin(\theta_1 - \theta_2) ( \neq 0 )$, because the length of BO is not equal to that of DO:
\begin{equation} \begin{split} x \Big\{ x^2 + \left(y - 2\tan(\frac{\pi}{2} - \theta_1 + \theta_2)\right)^2 \\ - \frac{4}{\cos^2(\frac{\pi}{2} - \theta_1 + \theta_2)} \Big\} = 0. \label{p5} \end{split} \end{equation}
Eq.~\eqref{p5} is a product of an equation of a circle and an equation of a line.
\end{proof}
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_002.eps} \end{center} \caption{Example 3: circular degeneracy} \label{circleex3} \end{figure}
\begin{example} If the two line segments are on the same line and their lengths are not equal, the equation of the edge of the angular Voronoi diagram is a product of the equation of a circle and the equation of a line (see Fig.~\ref{circleex3}). \end{example}
\begin{proof} We use the same setting as Section~\ref{avdedge}; $ s_1 $ is on the $x$-axis. Suppose $b, l$ and $ \alpha$ satisfy the following:
\begin{align} b & = 0, & \alpha & = 0, \pi, & l & \neq 1. \label{circlep2} \end{align}
Then $ s_2 $ is on the $x$-axis and its length is not equal to the length of $ s_1 $. Substituting Eq.~\eqref{circlep2} into Eq.~\eqref{eq:avdeq} and factorizing, we have
\begin{equation} \begin{split} y \{ (\pm l + 1) y^2 + (\pm l + 1) x^2 - 2 a x + (a^2 - l^2 \mp l)\} = 0 \label{p6}. \end{split} \end{equation}
Eq.~\eqref{p6} is a product of an equation of a circle and an equation of a line. \end{proof}
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_003.eps} \end{center} \caption{Example 4: circular degeneracy} \label{circleex4} \end{figure}
\begin{example} If the two line segments share an endpoint, the equation of the edge of the angular Voronoi diagram is a product of the equation of a circle and the equation of a line, as in Fig.~\ref{circleex4}. \end{example}
\begin{proof} We use the same setting as Section~\ref{avdedge}. Suppose $a, b, l$ and $ \alpha$ satisfy the following:
\begin{align} a & = l \cos \alpha - 1, & b & = l \sin \alpha. \label{circlep3} \end{align}
Then one of the endpoints of $ s_2 $ is also $ (-1, 0 ) $. Substituting Eq.~\eqref{circlep3} into Eq.~\eqref{eq:avdeq} and factorizing, we have
\begin{equation} \begin{split} \{y^2 + (x+1)^2 \} \{y(l \cos \alpha - 1) - l (x -1) \sin \alpha \} = 0. \end{split} \end{equation}
This is a product of an equation of a circle (in this case, a single point) and an equation of a line.
\end{proof}
We have discussed the case when the equation of the edge can be divided into two polynomials. That is not the only ``unusual'' case: even when an edge is an irreducible curve, an ``unusual'' case can occur, as the following example shows.
\begin{example} \label{th:d3u} The following curve, which appears as an edge of an angular Voronoi diagram, is irreducible but has a node:
\begin{align} a &= 2, & b &= \frac{4}{3}, & l &= \frac{5}{3}, & \cos \alpha &= \frac{3}{5}, & \sin \alpha &= - \frac{4}{5}. \label{nodepara} \end{align}
\end{example}
\begin{proof} Substituting Eq.~\eqref{nodepara} into Eq.~\eqref{eq:avdeq}, we have
\begin{equation} \begin{split} f(x, y) = 2y^3 + \frac{4}{3} y^2 x + 2 y x^2 + \frac{4}{3} x^3 \\ - \frac{20}{3} y^2 - 4 y x - 4 x^2 + 2 y - \frac{4}{3} x + 4. \label{nodeeq1} \end{split} \end{equation}
We partially differentiate Eq.~\eqref{nodeeq1}:
\begin{equation} \begin{split} \frac{\partial f}{\partial x} & = \frac{4}{3} y^2 + 4 y x + 4 x^2 - 4 y - 8 x - \frac{4}{3}, \\ \frac{\partial f}{\partial y} & = 6 y^2 + \frac{8}{3} y x + 2 x^2 - \frac{40}{3} y - 4 x + 2. \label{nodeeq3} \end{split} \end{equation}
Substituting $ y = 2 $ and $ x = - 1 $ into Eq.~\eqref{nodeeq1} and Eq.~\eqref{nodeeq3}, we have
\begin{align} f = \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0. \end{align}
The point $(-1, 2)$ is a singularity, and we find that it is a node by drawing the graph of Eq.~\eqref{nodeeq1} (see Fig.~\ref{fignode}).
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_005.eps} \end{center} \caption{Example of a node} \label{fignode} \end{figure}
\end{proof}
\section{Lower degree case} \label{lowerDegreeCase}
In the previous section, we discussed the degree three cases. In this section, we discuss the cases of degree less than three. The degree of an edge of an angular Voronoi diagram can be less than three. The three theorems in this section give a classification of all possible edges of degree less than three.
In Th.~\ref{th:d2u} and Th.~\ref{th:d2f}, we show that the possible shapes for an edge of degree two are a hyperbola or two lines. Th.~\ref{th:d1} shows that a curve of degree one cannot appear as an edge.
\begin{theorem} \label{th:d2u} An equation of an edge of an angular Voronoi diagram is of degree exactly two and irreducible if and only if it is the equation of an orthogonal hyperbola. \end{theorem}
\begin{proof} The condition for the degree three terms of Eq.~\eqref{eq:avdeqex} to vanish is $ l \cos \alpha = -1 $ and $ l \sin \alpha = 0 $. Solving this system, we derive
\begin{align} l & = 1, & \alpha & = \pi. \label{p8} \end{align}
Eq.~(\ref{p8}) means that the two line segments have the same length and are parallel. Substituting Eq.~\eqref{p8} into Eq.~\eqref{eq:avdeq}, we have
\begin{equation} -b y^2 -2 a y x + b x^2 + (a^2 + b^2) y - b = 0. \label{d2eq} \end{equation}
If $ b = 0 $, Eq.~\eqref{d2eq} is $ -2ayx + (a^2+b^2)y = 0 $. In this case the equation of the edge is $ y \{ 2 a x - (a^2+b^2) \} = 0 $. It is factorable and becomes a product of two orthogonal lines (see the right of Fig.~\ref{twoline}). Otherwise,
\begin{equation} y^2 + \frac{2 a}{b} y x - x^2 - \frac{a^2 + b^2}{b} y + 1 = 0. \label{d2normal} \end{equation}
If Eq.~\eqref{d2normal} is irreducible, it is an orthogonal hyperbola (see Fig.~\ref{hyperbola}). If Eq.~\eqref{d2normal} is factorable,
\begin{equation} (c_1 y + c_2 x + c_3) \left( \frac{1}{c_1} y - \frac{1}{c_2} x + \frac{1}{c_3} \right) = 0. \label{p9} \end{equation}
These two lines are orthogonal to each other, as in the left of Fig.~\ref{twoline}. \end{proof}
\begin{theorem} \label{th:d2f} An equation of an edge of an angular Voronoi diagram is of degree exactly two and factorable if and only if it is a product of the equations of two orthogonal lines. \end{theorem}
\begin{proof} See the proof of Th.~\ref{th:d2u}.
\end{proof}
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_006.eps} \includegraphics[width=11em,clip]{eps/isvd07_007.eps} \end{center} \caption{Examples of two orthogonal lines} \label{twoline} \end{figure}
\begin{figure} \begin{center} \includegraphics[width=11em,clip]{eps/isvd07_008.eps} \end{center} \caption{Example of an orthogonal hyperbola} \label{hyperbola} \end{figure}
\begin{theorem} \label{th:d1} An equation of an edge of an angular Voronoi diagram is never of degree one. \end{theorem}
\begin{proof} Eliminating the degree two terms from Eq.~\eqref{d2eq}, we obtain $ a = b = 0 $. This means that the two line segments are exactly the same, which is a meaningless case. \end{proof}
\section{Conclusions} \label{conclusion}
\begin{table} \caption{The types of angular Voronoi diagram edges} \label{typesAVD} \begin{center} \begin{tabular}{|l|l|l|} \hline degree 3 & irreducible & no singularity \\ & & (most general case) \\ \cline{3-3} & & one singularity (Ex.~\ref{th:d3u}) \\ \cline{2-3} & \multicolumn{2}{|l|}{factorable (circle $ \times $ line) (Th.~\ref{th:d3f})} \\ \hline degree 2 & \multicolumn{2}{|l|}{irreducible (hyperbola) (Th.~\ref{th:d2u})} \\ \cline{2-3} & \multicolumn{2}{|l|}{factorable (two lines) (Th.~\ref{th:d2f})} \\ \hline degree 1 & \multicolumn{2}{|l|}{unrealizable (Th.~\ref{th:d1})} \\ \hline \end{tabular} \end{center} \end{table}
We classified the possible curves that can appear as an edge of an angular Voronoi diagram. Although we have not obtained a necessary and sufficient condition for each case, we successfully enumerated the cases where the edge becomes somewhat ``unusual.'' A summary of our results is shown in Table \ref{typesAVD}. We showed that a Voronoi edge can be of degree three or two, but cannot be of degree one. A degree three edge can have a singularity, and if it can be factored into two curves, they are always a circle and a line. A degree two curve is a hyperbola when it is irreducible, and two lines when it is factorable.
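The degree bound of Theorem~\ref{asano} and the node of Example~\ref{th:d3u} can be spot-checked with a computer algebra system; a minimal SymPy sketch (the symbol names are ours):

```python
import sympy as sp

x, y, a, b, l, alpha = sp.symbols('x y a b l alpha', real=True)

# Edge equation for the normalized configuration of Sect. 2.2:
# s1 = [(-1,0),(1,0)]; s2 has midpoint (a,b), half-length l, slope angle alpha.
F = (y * ((x - a)**2 + (y - b)**2 - l**2)
     - l * ((x - a) * sp.sin(alpha) - (y - b) * sp.cos(alpha))
         * (x**2 + y**2 - 1))

# The edge is a polynomial curve of total degree at most three in x, y.
degree = sp.Poly(sp.expand(F), x, y).total_degree()

# The node example: a = 2, b = 4/3, l = 5/3, cos(alpha) = 3/5, sin(alpha) = -4/5.
f = F.subs({a: 2, b: sp.Rational(4, 3), l: sp.Rational(5, 3)})
f = f.subs({sp.sin(alpha): sp.Rational(-4, 5), sp.cos(alpha): sp.Rational(3, 5)})

# f and both partial derivatives vanish at (-1, 2): the point is singular.
pt = {x: -1, y: 2}
vals = (f.subs(pt), sp.diff(f, x).subs(pt), sp.diff(f, y).subs(pt))
```

All three values in `vals` come out exactly zero, confirming that $(-1, 2)$ is a singular point of the curve.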
Now we are interested in a further analysis of the conditions for unusual curves. For example, ``What is the necessary and sufficient condition for an edge to be a circle and a line?'' is a problem to consider. Additionally, the possible relative positions of multiple edges are also important. Determining what kinds of ill-conditioned configurations (for example, multiple curves crossing at their singular points) are possible is essential to achieve computational robustness.
\end{document}
\begin{document} \maketitle
\begin{abstract} For a domain $D$, the ring $\Int(D)$ of integer-valued polynomials over $D$ is atomic if $D$ satisfies the ascending chain condition on principal ideals. However, even for a discrete valuation domain $V$, the ring $\IntR(V)$ of integer-valued rational functions over $V$ is antimatter. We introduce a family of atomic rings of integer-valued rational functions and study various factorization properties of these rings. \end{abstract}
\section{Introduction}
\indent\indent Although the ring of integer-valued rational functions over a domain has a definition that is closely related to the much more well-studied ring of integer-valued polynomials, the two rings can behave very differently depending on the base ring. Some of these differences are revealed through the lens of factorization. Given a domain $D$ with field of fractions $K$ and $E$ some subset of $K$, we first define \[ \IntR(D) \coloneqq \{ \varphi \in K(x) \mid \varphi(d) \in D,\, \forall d \in D \} \quad \text{and} \quad \IntR(E,D) \coloneqq \{ \varphi \in K(x) \mid \varphi(a) \in D,\,\forall a \in E \} \] the \textbf{ring of integer-valued rational functions over $D$} and the \textbf{ring of integer-valued rational functions on $E$ over $D$}, respectively. Note that $\IntR(D,D) = \IntR(D)$. Compare these definitions to \[ \Int(D) \coloneqq \{ f \in K[x]\mid f(d) \in D,\, \forall d \in D \} \quad \text{and} \quad \Int(E,D) \coloneqq \{ f \in K[x] \mid f(a) \in D,\,\forall a \in E \} \] the \textbf{ring of integer-valued polynomials over $D$} and \textbf{ring of integer-valued polynomials on $E$ over $D$}, respectively. The ring of integer-valued rational functions has ideals that can be defined using the ideals of the base ring.
The ideals of $\IntR(E,D)$ relevant to this work are \[ \mathfrak{M}_{ \mathfrak{m}, a } \coloneqq \{\varphi \in \IntR(E,D) \mid \varphi(a) \in \mathfrak{m} \} \quad \text{and} \quad \IntR(E,\mathfrak{m}) \coloneqq \{\varphi \in \IntR(E,D) \mid \varphi(a) \in \mathfrak{m},\,\forall a \in E\} \] where $\mathfrak{m}$ is some maximal ideal of $D$ and $a$ is some element of $E$. Moreover, $\mathfrak{M}_{ \mathfrak{m}, a }$ is a maximal ideal of $\IntR(E,D)$.
Whenever we refer to a ring, we are indicating a commutative ring with identity. Likewise, whenever we refer to a monoid, we are indicating a commutative monoid. For a ring $R$, we denote by $R^\bullet$ the multiplicative monoid of nonzero elements of $R$ and by $R^\times$ the multiplicative monoid of units of $R$. We also use ${\mathbb N}$ to denote the set of natural numbers including $0$. For a totally ordered abelian group $\Gamma$, we view it as being embedded in its divisible closure ${\mathbb Q}\Gamma \coloneqq \Gamma \otimes_{{\mathbb Z}} {\mathbb Q}$. We also define $\Gamma_{\geq 0} \coloneqq \{\gamma \in \Gamma \mid \gamma \geq 0\}$.
To study factorization in a ring is to study how elements of the ring can be written as a product of irreducible elements, or \textbf{atoms}. A ring in which every nonzero, nonunit element can be written as a product of finitely many atoms is \textbf{atomic}. On the other end of the spectrum, if a ring has no atoms, we call it \textbf{antimatter}. Factorization over rings of integer-valued polynomials has been studied extensively. This can take the form of factorization over general rings of integer-valued polynomials, such as in \cite{RingsBetween,GottiLi}.
Factorization over the classical ring of integer-valued polynomials $\Int({\mathbb Z})$ has also been studied, with Frisch showing that given any finite multiset of integers greater than or equal to 2, there exists some polynomial in $\Int({\mathbb Z})$ whose multiset of factorization lengths is exactly the given multiset \cite{Frisch}. This result has been extended to $\Int(D)$, where $D$ is a Dedekind domain with finite residue fields and infinitely many maximal ideals \cite{LengthsDedekind}. There is seemingly no content in investigating factorization in rings of integer-valued rational functions, since even for any discrete valuation domain $V$, the ring $\IntR(V)$ is antimatter \cite[Proposition X.3.3]{Cahen}. However, we introduce a family of domains $D$ with field of fractions $K$ such that $\IntR(K,D)$ is atomic. This allows us to study factorization invariants over this family of rings of integer-valued rational functions.
Let $R$ be a ring and $r \in R$ be a nonzero, nonunit element. Since only the multiplicative structure of the ring is considered in the following definitions, they also apply to a commutative monoid. We say two elements $a, b \in R$ are \textbf{associates} if there exists a unit $u \in R$ such that $a = ub$, and we write $a \sim b$. We denote by $\mathcal{A}(R) \coloneqq \{a \in R \mid \text{$a$ is an atom}\}/\sim$. The \textbf{set of factorizations} of $r$ in $R$ is given by \[ \mathsf{Z}(r) = \left\{ \left(e_{[a]}\right)_{[a] \in \mathcal{A}(R)} \in \bigoplus_{[a] \in \mathcal{A}(R)} {\mathbb N} \,\middle\vert\, \prod_{[a] \in \mathcal{A}(R)} a^{e_{[a]}} \sim r \right\}.
\] Note that the product is a finite product, since for each $\left(e_{[a]}\right)$, there are only finitely many elements $[a] \in \mathcal{A}(R)$ such that $e_{[a]}$ is nonzero. The product is also well-defined up to association. For each factorization $\left(e_{[a]}\right) \in \mathsf{Z}(r)$, we define $\abs{\left(e_{[a]}\right)} \coloneqq \sum\limits_{[a] \in \mathcal{A}(R)} e_{[a]}$ to be the \textbf{length} of the factorization. The \textbf{set of lengths} of $r$ is then defined to be $\mathcal{L}(r) \coloneqq \{\abs{z} \mid z \in \mathsf{Z}(r) \}$. We also set $L(r) \coloneqq \sup \mathcal{L}(r)$ and $\ell(r) \coloneqq \inf \mathcal{L}(r)$ to be the \textbf{longest factorization length} and the \textbf{shortest factorization length} of $r$ in $R$. We say that $R$ is of \textbf{bounded factorization} if $R$ is atomic and $\abs{\mathcal{L}(s)} < \infty$ for each nonzero, nonunit element $s \in R$.
Let $z = \left(e_{[a]}\right)$ and $z' = \left(e_{[a]}'\right)$ be two factorizations in $\mathsf{Z}(r)$. We define $\gcd(z, z') = (\min\{e_{[a]}, e_{[a]}'\})$. The \textbf{distance} between the two factorizations of $r$ in $R$ is defined to be $d(z, z') \coloneqq \max\{\abs{z-\gcd(z,z')}, \abs{z' - \gcd(z,z')} \}$. For $N \in {\mathbb N}$, an \textbf{$N$-chain} between $z$ and $z'$ is a finite sequence of factorizations $z = z_0, \dots, z_n = z'$ such that each $z_i \in \mathsf{Z}(r)$ and $d(z_i, z_{i+1}) \leq N$ for all $i < n$.
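These invariants can be computed explicitly in small examples; a sketch for the additive numerical monoid $\langle 2, 3\rangle$ (a toy monoid of our choosing, not one of the rings studied in this paper), whose atoms are $2$ and $3$:

```python
from itertools import product

ATOMS = (2, 3)  # atoms of the additive numerical monoid <2, 3>

def factorizations(n):
    """All exponent tuples (e_2, e_3) with 2*e_2 + 3*e_3 = n, i.e. Z(n)."""
    bounds = [range(n // a + 1) for a in ATOMS]
    return [e for e in product(*bounds)
            if sum(c * a for c, a in zip(e, ATOMS)) == n]

def length(z):
    """|z|: the sum of the exponents of a factorization."""
    return sum(z)

def distance(z, zp):
    """d(z, z') = max{|z - gcd(z,z')|, |z' - gcd(z,z')|}."""
    g = [min(c, cp) for c, cp in zip(z, zp)]
    return max(length([c - gc for c, gc in zip(z, g)]),
               length([cp - gc for cp, gc in zip(zp, g)]))

Z12 = factorizations(12)                    # [(0, 4), (3, 2), (6, 0)]
lengths = sorted({length(z) for z in Z12})  # L(12) = 6, ell(12) = 4
```

For instance, $12 = 2+2+2+2+2+2 = 2+2+2+3+3 = 3+3+3+3$, so $\mathcal{L}(12) = \{4, 5, 6\}$, and the distance between the all-$2$s and all-$3$s factorizations is $6$.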
Then we define the \textbf{catenary degree} of $r$ in $R$, denoted by $c(r)$, to be the minimal $N$ such that there exists an $N$-chain between any two elements of $\mathsf{Z}(r)$. For all of the factorization invariants, a subscript may be used to emphasize the ring. For example, we can use $\mathsf{Z}_R(r)$ to denote the set of factorizations of $r$ in $R$.
Let $D$ be a domain with field of fractions $K$. Suppose that $A$ is a domain such that $D \subseteq A \subseteq K[x]$ and $A \cap K = D$. For example, $A = \Int(E,D)$ for any subset $E$ of $K$ satisfies these conditions. In this case, for any $d \in D$ and any atoms $a_1, \dots, a_n$ of $A$ such that $d = a_1 \cdots a_n$, we have that $a_1, \dots, a_n$ are all atoms of $D$. This is because we may view each $d \in D$ as a degree zero polynomial in $A$, and to write a degree zero polynomial as a product of elements of $A$, we must write it as a product of degree zero polynomials, which are elements of $D$. In other words, under this setup, factoring degree zero polynomials in $A$ is exactly the same as factoring them in $D$. However, if we have $D \subseteq A$ but $A$ is not contained in $K[x]$, then it is possible that an element $d \in D$ can be written as a product of elements of $A$, not all of which are in $D$. For this reason, for our family of domains $D$ such that $\IntR(K,D)$ is atomic, we will mostly focus on the factorization of elements $d \in D$ viewed as elements of $\IntR(K,D)$.
In Section \ref{Sect:Valuation}, we discuss factorization in $\IntR(E,V)$ for $V$ a valuation domain and $E$ a subset of its field of fractions. In Section \ref{Sect:Atomicity}, we introduce some computational tools and use them to prove that for a certain family of domains $D$ with field of fractions $K$, the ring $\IntR(K,D)$ is atomic.
In Section \ref{Sect:Lengths}, we show that the set of factorization lengths of an element $d \in D$ changes if we view $d$ as an element of these atomic rings $\IntR(K,D)$. Lastly, in Section \ref{Sect:1toInfinity}, we focus on factorization in atomic rings of the form $\IntR(K,D)$, where $D$ is a localization of a monoid domain whose underlying monoid has a gap when embedded into its difference group, which is an additive subgroup of ${\mathbb R}$ with no smallest strictly positive element. \section{Factorization in rings of integer-valued rational functions over valuation domains}\label{Sect:Valuation} \indent\indent For a domain $D$, if $D$ satisfies the ascending chain condition on principal ideals, then the ring $\Int(D)$ of integer-valued polynomials over $D$ is atomic \cite[Corollary 7.6]{RingsBetween}. However, even for a discrete valuation domain $V$, the ring $\IntR(V)$ is not atomic. In fact, $\IntR(V)$ is an antimatter domain \cite[Proposition X.3.3]{Cahen}. The result in \cite[Proposition X.3.3]{Cahen} states that for any subset $E$ of the field of fractions of a discrete valuation domain $V$, the ring $\IntR(E,V)$ is an antimatter domain. We will show that this is slightly incorrect: it is possible for $\IntR(E,V)$ to have atoms, and even to be atomic, under certain conditions. Another reason to consider integer-valued rational functions over valuation domains is that we have tools such as continuity to construct integer-valued rational functions with control over the values the functions attain. \begin{proposition}\label{Prop:RationalFunctionsContinuousInValuationTopology}\cite[Proposition 2.1]{Liu} Let $D$ be a domain with field of fractions $K$ and $E$ a subset of $K$. Let $v$ be a valuation on $K$ with value group $\Gamma$ such that the associated valuation ring $V$ contains $D$. Then each element of $\IntR(E,D)$ is a continuous function from $E$ to $D$ with respect to the topology induced by the valuation.
\end{proposition} We now give a lemma that generalizes the result of \cite[Proposition X.3.3]{Cahen} in the setting of a valuation domain with principal maximal ideal. \begin{lemma}\label{Lem:StoneWeierstrass} Let $V$ be a valuation domain with principal maximal ideal. Suppose that $E$ is some nonempty subset of the field of fractions of $V$. Let $\varphi \in \IntR(E,V)$ be a nonzero, nonunit element. If there exists an $a \in E$ such that $\varphi(a)$ is not a unit and $a$ is not an isolated point of $E$ with respect to the topology induced by the valuation, then $\varphi$ is strictly divisible by a nonunit element of $\IntR(E,V)$. \end{lemma} \begin{proof} Let $\varphi \in \IntR(E,V)$ and suppose that there exists an $a \in E$ such that $\varphi(a)$ is not a unit and $a$ is not an isolated point of $E$. By applying the isomorphism $\IntR(E,V) \to \IntR(E-a,V)$ defined by $\rho(x) \mapsto \rho(x + a)$, we can assume without loss of generality that $a = 0$, $0 \in E$, and $0$ is not an isolated point of $E$. Let $v$ be the valuation associated with $V$, with value group $\Gamma$, and let $\mathfrak{m}$ be the maximal ideal of $V$, generated by some $t \in V$. Since $\varphi(0) \in \mathfrak{m}$, there exists some $\delta \in \Gamma$ such that $v(\varphi(d)) \geq v(t)$ for all $d \in E$ such that $v(d) > \delta$, due to Proposition \ref{Prop:RationalFunctionsContinuousInValuationTopology}. Because $0$ is not an isolated point, we know that there exists some $b \in E$ such that $v(b) > \delta$. Construct \[\psi(x) = \frac{x^3+b^3t^2}{x^3+b^3t}.\] Note that $\frac{v(b^3t)}{3} = v(b) + \frac{v(t)}{3} \notin \Gamma$ and similarly $\frac{v(b^3t^2)}{3} \notin \Gamma$. Also notice that there is no element of $\Gamma$ between $\frac{v(b^3t)}{3}$ and $\frac{v(b^3t^2)}{3}$ since there is no element of $\Gamma$ between $\frac{v(t)}{3}$ and $\frac{v(t^2)}{3}$.
Then for each $d \in E$, we calculate \[ v(\psi(d)) = \begin{cases} 0, &\text{if $v(d) < \frac{v(b^3t)}{3}$},\\ v(b^3t^2)-v(b^3t) = v(t) > 0, &\text{if $v(d) > \frac{v(b^3t^2)}{3}$}. \end{cases} \] Using the fact that $\psi(0) \in \mathfrak{m}$, we have that $\psi \in \IntR(E,V)$ and $\psi$ is not a unit of $\IntR(E,V)$. Additionally, for $d \in E$ such that $v(d) < \delta$, we have that $v(\psi(d)) = 0$ since $\delta < v(b) < \frac{v(b^3t)}{3}$. We also know that for $d \in E$ with $v(d) \geq \delta$ that $v(\psi(d)) \leq v(t)$. This implies that $\frac{\varphi}{\psi} \in \IntR(E,V)$ since $v\left(\frac{\varphi}{\psi}(d)\right) \geq 0$ regardless of whether $v(d) < \delta$ or $v(d) \geq \delta$. Lastly, we check that $v\left(\frac{\varphi}{\psi}(b)\right) = v(\varphi(b)) > 0$. Thus, $\frac{\varphi}{\psi}$ is not a unit of $\IntR(E,V)$ either, which means that $\varphi$ is strictly divisible by $\psi$. \end{proof} What follows is a technical lemma that constructs integer-valued rational functions over a valuation domain with even more control on the values attained. \begin{lemma}\label{Lem:ZigZag} Let $V$ be a valuation domain with a maximal ideal that is not principal, and suppose that the residue field of $V$ is not algebraically closed or the value group is not divisible. Denote by $K$ the field of fractions of $V$ and $\Gamma$ the associated value group.
For any $\alpha, \alpha' \in \Gamma_{\geq 0}$ with $\alpha < \alpha'$ and $\varepsilon \in {\mathbb Q}\Gamma$ with $\varepsilon > 0$, there exists $\varphi_{\alpha, \alpha',\varepsilon} \in \IntR(K,V)$ such that there exist some $\beta, \beta' \in {\mathbb Q}\Gamma$ and some $\alpha'' \in \Gamma$ with $\alpha' - \varepsilon < \alpha'' \leq \alpha'$ so that for all $d \in K$, we have \[ v(\varphi_{\alpha, \alpha', \varepsilon}(d)) = \begin{cases} \alpha, & \text{if } v(d) < \beta, \\ \alpha'', & \text{if } v(d) > \beta', \end{cases} \] and $\alpha \leq v(\varphi_{\alpha, \alpha', \varepsilon}(d)) \leq \alpha''$ if $\beta \leq v(d) \leq \beta'$. If the residue field of $V$ is not algebraically closed and $\Gamma$ is divisible, we can take $\alpha'' = \alpha'$. \end{lemma} \begin{proof} Fix $\alpha, \alpha' \in \Gamma_{\geq 0}$ with $\alpha < \alpha'$ and $\varepsilon \in {\mathbb Q}\Gamma$ with $\varepsilon > 0$. We split into two cases. The first case is concerned with when the residue field of $V$ is not algebraically closed and $\Gamma$ is divisible. The second case is concerned with when $\Gamma$ is not divisible. In both cases, we construct $\varphi_{\alpha, \alpha',\varepsilon}$ directly, but in the first case, the value of $\varepsilon$ is not used in the construction of $\varphi_{\alpha, \alpha',\varepsilon}$. \begin{enumerate} \item We assume that the residue field of $V$ is not algebraically closed and $\Gamma$ is divisible. Since the residue field of $V$ is not algebraically closed, there exists a monic, nonconstant polynomial over the residue field with no roots in the residue field. This polynomial lifts to a monic, nonconstant polynomial $f \in V[x]$ such that $f(V) \subseteq V^\times$. Let $n \coloneqq \deg(f)$. Take $b \in V$ such that $v(b) = \alpha$ and $c \in V$ such that $v(c) = \frac{\alpha'-\alpha}{n}$. Then we construct \[ \varphi_{\alpha, \alpha', \varepsilon}(x) = \frac{bc^nf\left(\frac{x}{c}\right)}{f(x)}.
\] Since $v\left(a_1^n f\left(\frac{a_2}{a_1}\right)\right) = \min\{nv(a_1), nv(a_2)\}$ for each $a_1, a_2 \in K$ with $a_1 \neq 0$ \cite[Corollary 2.3]{PruferNonDRings}, we calculate that for each $d \in K$, we have \[ v(\varphi_{\alpha, \alpha', \varepsilon}(d)) = \begin{cases} \alpha, & \text{if $v(d) < 0$},\\ \alpha + nv(d), & \text{if $0 \leq v(d)\leq \frac{\alpha'-\alpha}{n}$},\\ \alpha + nv(c) = \alpha', & \text{if $v(d) > \frac{\alpha'-\alpha}{n}$}. \end{cases} \] We can confirm that for $d \in K$ such that $0 \leq v(d) \leq \frac{\alpha' - \alpha}{n}$, we have that $0 \leq nv(d) \leq \alpha' - \alpha$ and therefore $\alpha \leq \alpha + nv(d) \leq \alpha'$. This gives the desired function by taking $\beta = 0$, $\beta' = \frac{\alpha'-\alpha}{n}$, and $\alpha'' = \alpha'$. \item Now we assume that $\Gamma$ is not divisible. Then there exist some prime $p$ and $\mu \in \Gamma$ with $\mu > 0$ such that $\frac{\mu}{p} \notin \Gamma$. Since the maximal ideal of $V$ is not principal, we have that there exists some $\alpha'' \in \Gamma$ such that $\max\{\alpha' - \varepsilon, \alpha\} < \alpha'' \leq \alpha'$ and $\frac{\alpha''-\alpha}{p} \in \Gamma$. Note that $\frac{\mu + \alpha''-\alpha}{p} \notin \Gamma$. We now consider \[ \varphi_{\alpha,\alpha',\varepsilon}(x) = \frac{b(x^p+cc')}{x^p+c'}, \] where $b \in V$ is some element such that $v(b) = \alpha$, $c \in V$ is some element such that $v(c) = \alpha'' - \alpha$, and $c' \in V$ is such that $v(c') = \mu$.
For each $d \in K$, we have \[ v(\varphi_{\alpha, \alpha',\varepsilon}(d)) = \begin{cases} \alpha, & \text{if $v(d) < \frac{\mu}{p}$},\\ \alpha + pv(d) - \mu, & \text{if $\frac{\mu}{p} < v(d) < \frac{\mu+\alpha''-\alpha}{p}$},\\ \alpha + \alpha'' - \alpha + \mu - \mu = \alpha'',& \text{if $v(d) > \frac{\mu+\alpha''-\alpha}{p}$}. \end{cases} \] Notice that $\frac{\mu}{p} < v(d) < \frac{\mu+\alpha''-\alpha}{p}$ implies that $\alpha < \alpha + pv(d) - \mu < \alpha''$. Thus, we have the desired function taking $\beta = \frac{\mu}{p}$ and $\beta' = \frac{\mu+\alpha''-\alpha}{p}$. \end{enumerate} \end{proof} Now we use the previous two lemmas to prove that for a large class of valuation domains $V$, rings of the form $\IntR(E,V)$ are antimatter domains. \begin{proposition}\label{Prop:Antimatter} Let $V$ be a valuation domain and let $E$ be a nonempty subset of the field of fractions of $V$. If the residue field of $V$ is not algebraically closed or the value group is not divisible, then $\IntR(E,V)$ is an antimatter domain, unless $E$ has an isolated point with respect to the topology induced by the valuation and the maximal ideal of $V$ is principal. \end{proposition} \begin{proof} Let $\mathfrak{m}$ denote the maximal ideal of $V$ and let $\Gamma$ denote the value group. We first argue without making any assumptions about the valuation domain $V$ and the subset $E$. Let $\varphi \in \IntR(E,V)$ be a nonzero, nonunit element. Then there exists some $a \in E$ such that $\varphi(a) \in \mathfrak{m} \setminus \{0\}$. By setting $\psi(x) \coloneqq \varphi(x+a) \in \IntR(E-a,V)$, we see that $\psi(0) \in \mathfrak{m} \setminus \{0\}$, so we can assume without loss of generality that $\varphi(0) \in \mathfrak{m} \setminus \{0\}$ and $0 \in E$.
Since $\varphi(0) \in \mathfrak{m}\setminus \{0\}$, there exists $\delta \in \Gamma$ with $\delta \geq 0$ such that for all $d \in E$ with $v(d) \geq \delta$, we have that $v(\varphi(d)) = v(\varphi(0)) > 0$, due to Proposition \ref{Prop:RationalFunctionsContinuousInValuationTopology}. Let $r \in V$ be such that $v(r) = \delta$. Our goal is to construct a nonunit $\psi \in \IntR(E, V)$ such that $\frac{\varphi}{\psi} \in \IntR(E,V)$ and $\frac{\varphi}{\psi}$ is not a unit. This will show that $\IntR(E,V)$ is antimatter. We split the proof into three cases. Case 1 is when $V/\mathfrak{m}$ is not algebraically closed and $\Gamma$ is divisible. Case 2 is when $\Gamma$ is not divisible and $\mathfrak{m}$ is not principal. Case 3 is when $\mathfrak{m}$ is principal and $E$ has no isolated points. \begin{enumerate} \item We assume that $V/\mathfrak{m}$ is not algebraically closed and $\Gamma$ is divisible. We take $\psi(x) = \varphi_{\alpha,\alpha', \varepsilon}\left(\frac{x}{r}\right)$ from Lemma \ref{Lem:ZigZag} using $\alpha = 0$, $\alpha' = \frac{v(\varphi(0))}{2}$, and any value of $\varepsilon \in \Gamma$ such that $\varepsilon > 0$. We see that $\psi \in \IntR(E,V)$. Since $\psi(0) \in \mathfrak{m}$, the rational function $\psi$ is not a unit of $\IntR(E,V)$. Furthermore, if we let $a \in E$, then $v(\psi(a)) = 0$ if $v(a) < \delta$ and $v(\psi(a)) \leq \frac{v(\varphi(0))}{2} < v(\varphi(0)) = v(\varphi(a))$ if $v(a) \geq \delta$. Therefore, $v\left(\frac{\varphi}{\psi}(a)\right) \geq 0$ in both cases, so $\frac{\varphi}{\psi} \in \IntR(E,V)$.
Also, $\frac{\varphi}{\psi}$ is not a unit because \[v\left(\frac{\varphi}{\psi}(0)\right) = v(\varphi(0)) - \frac{v(\varphi(0))}{2}=\frac{v(\varphi(0))}{2} > 0. \] \item Suppose that $\Gamma$ is not divisible and $\mathfrak{m}$ is not principal. Take $\psi(x) = \varphi_{\alpha, \alpha', \varepsilon}\left(\frac{x}{r}\right)$ as in Lemma \ref{Lem:ZigZag} using $\alpha = 0$, $\alpha' = \frac{v(\varphi(0))}{2}$, and $\varepsilon \in \Gamma$ such that $0 <\varepsilon < \frac{v(\varphi(0))}{2}$. From this, we see that $\psi \in \IntR(E,V)$. Since $\psi(0) \in \mathfrak{m}$, we also know that $\psi$ is not a unit of $\IntR(E,V)$. Moreover, we have $v\left(\frac{\varphi}{\psi}(a)\right) \geq 0$ for each $a \in E$, since $v(\psi(a)) = 0$ if $v(a) < \delta$ and $v(\psi(a)) \leq \frac{v(\varphi(0))}{2} < v(\varphi(0)) = v(\varphi(a))$ if $v(a) \geq \delta$. Thus, $\frac{\varphi}{\psi} \in \IntR(E,V)$. Furthermore, $v\left(\frac{\varphi}{\psi}(0)\right) \geq v(\varphi(0)) - \frac{v(\varphi(0))}{2} = \frac{v(\varphi(0))}{2} > 0$, so $\frac{\varphi}{\psi}$ is not a unit of $\IntR(E,V)$. \item Now we assume that $\mathfrak{m}$ is principal and $E$ has no isolated points with respect to the topology induced by the valuation. Then $\varphi$ is strictly divisible by another nonunit element of $\IntR(E,V)$ by Lemma \ref{Lem:StoneWeierstrass}.
\end{enumerate} \end{proof} If $V$ is a valuation domain with principal maximal ideal and $E$ is some subset of the field of fractions with an isolated point, then the factorization in $\IntR(E,V)$ is more interesting. \begin{lemma}\label{Lem:AtomsInIntREV} Let $V$ be a valuation domain with field of fractions $K$ and associated valuation $v$. Suppose that the maximal ideal $\mathfrak{m}$ of $V$ is principal, generated by some $t \in V$. Take $E$ to be some subset of $K$. Then \[ \mathcal{A}(\IntR(E,V)) = \{[\psi_s] \mid s \in S \}, \] where $S$ is the set of isolated points of $E$ with respect to the topology induced by the valuation and for each $s \in S$, the rational function $\psi_s$ is given by \[ \psi_s(x) = \frac{(x-s)^3+c_s^3t^2}{(x-s)^3+c_s^3t}, \] with $c_s \in K$ such that there does not exist any $b \in E \setminus \{s\}$ with $v(b-s) > v(c_s)$. \end{lemma} \begin{proof} Let $s \in S$. Then for each $d \in E$, we calculate that \[ v(\psi_s(d)) = \begin{cases} 0,&\text{if $d \neq s$}, \\ v(t),&\text{if $d = s$}. \end{cases} \] We can deduce that $\psi_s$ is an atom. If $\psi_s = \rho_1\rho_2$ for some $\rho_1, \rho_2 \in \IntR(E,V)$, then $v(t) = v(\psi_s(s)) = v(\rho_1(s)) + v(\rho_2(s))$ implies that $\rho_1(s)$ or $\rho_2(s)$ is a unit of $V$. This implies that $\rho_1$ or $\rho_2$ is a unit of $\IntR(E,V)$, since $\rho_1$ and $\rho_2$ are unit-valued on $E \setminus \{s\}$. Now suppose that $\varphi \in \IntR(E,V)$ is an atom. Since $\varphi$ is not a unit, there exists $a \in E$ such that $\varphi(a) \in \mathfrak{m}$. If $a \notin S$, then Lemma \ref{Lem:StoneWeierstrass} implies that $\varphi$ is not an atom. Thus, there exists $s \in S$ such that $\varphi(s) \in \mathfrak{m}$. Now note that $\psi_s$ divides $\varphi$, since $v(\varphi(s)) \geq v(t) = v(\psi_s(s))$ and $\psi_s$ is unit-valued on $E \setminus \{s\}$.
Since $\varphi$ is an atom, we have that $\varphi \sim \psi_s$. \end{proof} Therefore, in the case of a valuation domain $V$ with principal maximal ideal, it is possible for a ring of the form $\IntR(E,V)$ to have atoms. In fact, it is possible for a ring of the form $\IntR(E,V)$ to be atomic. \begin{proposition}\label{Prop:DiscreteAtomic} Let $V$ be a valuation domain with principal maximal ideal. Also let $E$ be a subset of the field of fractions of $V$, and $\Gamma$ be the value group of $V$. Then $\IntR(E,V)$ is atomic if and only if $E$ is finite and $\Gamma \cong {\mathbb Z}$. Moreover, in this case, we have \[\IntR(E,V)^\bullet/\IntR(E,V)^\times \cong {\mathbb N}^{\abs{E}},\] where $\IntR(E,V)^\bullet$ is the multiplicative monoid of nonzero elements, making $\IntR(E,V)$ a unique factorization domain. \end{proposition} \begin{proof} Let $\mathfrak{m}$ denote the maximal ideal of $V$, generated by some $t \in V$. Denote by $\psi_s$ the atom in $\IntR(E,V)$ such that $v(\psi_s(d)) = 0$ for $d \in E \setminus \{s\}$ and $v(\psi_s(s)) = v(t)$. Every atom of $\IntR(E,V)$ is associate to $\psi_s$ for some $s \in E$ by Lemma \ref{Lem:AtomsInIntREV}. Suppose that $\IntR(E,V)$ is atomic. If there exists $a \in E$ that is not an isolated point with respect to the topology induced by the valuation, then Lemma \ref{Lem:StoneWeierstrass} implies that every $\varphi \in \IntR(E,V)$ with $\varphi(a) \in \mathfrak{m}$ is strictly divisible by a nonunit element of $\IntR(E,V)$. In particular, $t \in \IntR(E,V)$ cannot be written as the product of finitely many atoms. If on the contrary $t = \varphi_1 \cdots \varphi_n$, where $\varphi_1, \dots, \varphi_n$ are atoms of $\IntR(E,V)$, then there exists an $i$ such that $\varphi_i(a) \in \mathfrak{m}$ for some $a \in E$ that is not an isolated point.
However, we have just shown that $\varphi_i$ would not be an atom, a contradiction. Therefore, $\IntR(E,V)$ being atomic implies that $E$ consists of only isolated points. A finite product of atoms in $\IntR(E,V)$ is associate to $\psi_{s_1}\cdots \psi_{s_n}$ for some $s_1, \dots, s_n \in E$ due to Lemma \ref{Lem:AtomsInIntREV}. The rational function $\psi_{s_1}\cdots \psi_{s_n}$ is valued in $\mathfrak{m}$ only on the finite set $\{s_1, \dots, s_n\}$. Therefore, the constant function $t \in \IntR(E,V)$ can only be written as a product of finitely many atoms if $E$ is finite. If $\Gamma \not\cong {\mathbb Z}$, since $\mathfrak{m}$ is principal, there exists $c \in V$ such that there does not exist $n \in {\mathbb N}$ with $nv(t) = v(c)$. Therefore, if $\Gamma \not\cong {\mathbb Z}$, the constant function $c \in \IntR(E,V)$ cannot be written as the product of finitely many atoms of $\IntR(E,V)$. This means that $\IntR(E,V)$ being atomic implies that $E$ is finite and $\Gamma \cong {\mathbb Z}$. Now we suppose that $E$ is finite and $\Gamma \cong {\mathbb Z}$. If we are able to show that $\IntR(E,V)^\bullet/\IntR(E,V)^\times \cong {\mathbb N}^{\abs{E}}$ as monoids, then it will follow that $\IntR(E,V)$ is atomic, since ${\mathbb N}^{\abs{E}}$ is atomic. Write $E = \{s_1, \dots, s_n\}$ for distinct elements $s_1, \dots, s_n$. The isomorphism is given by \[ \varphi \mapsto (v(\varphi(s_1)), \dots, v(\varphi(s_n))), \] for any nonzero $\varphi \in \IntR(E,V)$, where $v$ is the associated valuation of $V$ normalized so that $v(t) = 1$. One can check that this map is a well-defined injective monoid homomorphism. The elements $\psi_{s_1}, \dots, \psi_{s_n}$, given by Lemma \ref{Lem:AtomsInIntREV}, map to a generating set of ${\mathbb N}^{\abs{E}}$, showing that the homomorphism is surjective.
\end{proof} We summarize the atomicity of the ring $\IntR(E,V)$ for a valuation domain $V$ and a subset $E$ of the field of fractions of $V$. The ring $\IntR(E,V)$ is atomic only when $E$ is finite and $V$ is a discrete valuation domain (the value group is isomorphic to ${\mathbb Z}$). When $\IntR(E,V)$ is atomic, it is also a unique factorization domain. If we only require $\IntR(E,V)$ to have atoms, then $V$ needs to have a principal maximal ideal and $E$ needs to have isolated points with respect to the topology induced by the valuation. Outside of this case, the ring $\IntR(E,V)$ is an antimatter domain. \section{Atomicity}\label{Sect:Atomicity} \indent\indent Even for a valuation domain $V$, rings of integer-valued rational functions of the form $\IntR(E,V)$ are antimatter most of the time. Proposition \ref{Prop:DiscreteAtomic} gives an instance of a ring of the form $\IntR(E,V)$ that is atomic, but $\IntR(E,V)$ is a unique factorization domain in this case. In this section, we introduce another family of atomic rings of integer-valued rational functions. One way to look for atomic domains of integer-valued rational functions is to find a bounded factorization domain $D$ such that $\IntR(E,D)$ is local (so $D$ is necessarily local) for some subset $E$ of the field of fractions of $D$, and then make use of the following lemma. \begin{lemma}\label{Lem:LocalImpliesAtomic} Let $D$ be a bounded factorization domain with field of fractions $K$. Suppose $E$ is a subset of $K$ such that $\IntR(E,D)$ is local. Then $\IntR(E,D)$ is atomic. \end{lemma} \begin{proof} We first show that $D$ must also be local. Let $a, b \in D$ be nonunits of $D$. Then $a+b$ is not a unit in $\IntR(E,D)$ since $\IntR(E,D)$ is local, so $a+b$ is also not a unit of $D$. This shows that $\IntR(E,D)$ being local implies that $D$ is local. Denote the maximal ideal of $D$ by $\mathfrak{m}$.
We know that $\mathfrak{M}_{ \mathfrak{m}, a }$ is a maximal ideal of $\IntR(E,D)$ for each $a \in E$. Since $\IntR(E,D)$ has a unique maximal ideal, the maximal ideals $\{\mathfrak{M}_{ \mathfrak{m}, a }\}_{a \in E}$ all coincide, meaning that the maximal ideal of $\IntR(E,D)$ is $\IntR(E,\mathfrak{m})$. This implies that if $\varphi \in \IntR(E,D)$ is such that $\varphi(a) \in D^\times$ for some $a \in E$, then $\varphi \notin \IntR(E,\mathfrak{m})$, so $\varphi$ is a unit of $\IntR(E,D)$. Now let $\varphi \in \IntR(E,D)$ be a nonzero, nonunit element. We want to factor $\varphi$ into a product of finitely many atoms. We induct on $\inf\{L_D(\varphi(a)) \mid a \in E\}$, which is finite since $D$ is a bounded factorization domain. If $\inf\{L_D(\varphi(a)) \mid a \in E\} = 1$, then there exists $a \in E$ such that $\varphi(a)$ is an atom of $D$. If we write $\varphi = \psi_1\psi_2$ for some $\psi_1, \psi_2 \in \IntR(E,D)$, then $\varphi(a) = \psi_1(a)\psi_2(a)$ implies that $\psi_1(a)$ or $\psi_2(a)$ is a unit of $D$, which means that $\psi_1$ or $\psi_2$ is a unit of $\IntR(E,D)$. Therefore, $\varphi$ is an atom, and in particular a product of finitely many atoms. Now suppose that $n \coloneqq \inf\{L_D(\varphi(a)) \mid a \in E\} > 1$. Then there exists $a \in E$ such that $L_D(\varphi(a)) = n$. If $\varphi$ is an atom, we are done. If $\varphi$ is not an atom, we can write $\varphi = \psi_1\psi_2$ for some $\psi_1, \psi_2 \in \IntR(E,\mathfrak{m})$. Then $\varphi(a) = \psi_1(a)\psi_2(a)$.
Since neither $\psi_1(a)$ nor $\psi_2(a)$ is a unit of $D$, we must have that $L_D(\psi_1(a)), L_D(\psi_2(a)) < n$. This implies that both $\inf\{L_D(\psi_1(a)) \mid a \in E\}$ and $\inf\{L_D(\psi_2(a)) \mid a \in E\}$ are strictly less than $n$. By induction, we can write $\psi_1$ and $\psi_2$ as the product of finitely many atoms of $\IntR(E,D)$, so we can do the same for $\varphi$. \end{proof} One way to force a ring of integer-valued rational functions to be local is to have a local domain $D$ such that there exists a valuation overring whose valuation separates the units of $D$ from the maximal ideal of $D$. To reach this result, we use the idea of minimum valuation functions. \begin{definition}\cite{Liu} Let $V$ be a valuation domain with value group $\Gamma$, valuation $v$, and field of fractions $K$. Take a nonzero polynomial $f \in K[x]$ and write it as $f(x) = a_nx^n + \cdots + a_1x+ a_0$ for $a_0, a_1, \dots, a_n \in K$. We define the \textbf{minimum valuation function of $f$} as $\minval_{f,v}\colon \Gamma \to \Gamma$ given by \[\gamma \mapsto \min\{v(a_0), v(a_1)+\gamma, v(a_2)+2\gamma, \dots, v(a_n) + n\gamma\}\] for each $\gamma \in \Gamma$. We will denote $\minval_{f,v}$ as $\minval_f$ if the valuation $v$ is clear from context. It is oftentimes helpful to think of $\minval_f$ as a function from ${\mathbb Q}\Gamma$ to ${\mathbb Q}\Gamma$ defined as $\gamma \mapsto \min\{v(a_0), v(a_1)+\gamma, v(a_2)+2\gamma, \dots, v(a_n) + n\gamma\}$ for each $\gamma \in {\mathbb Q}\Gamma$. For a nonzero rational function $\varphi \in K(x)$, we write $\varphi = \frac{f}{g}$ for some $f, g \in K[x]$.
Then for each $\gamma \in \Gamma$, we define $\minval_\varphi(\gamma) = \minval_f(\gamma) - \minval_g(\gamma)$. \end{definition} The purpose of the minimum valuation function of a rational function is that it predicts the valuation of the outputs of the rational function most of the time. The minimum valuation function also has the nice property of being piecewise linear. The following lemma showcases these facts. \begin{lemma}\cite[Proposition 2.24 and Lemma 2.26]{Liu}\label{Lem:MinvalForm} Let $V$ be a valuation domain with value group $\Gamma$, valuation $v$, maximal ideal $\mathfrak{m}$, and field of fractions $K$. For a nonzero $\varphi \in K(x)$, the function $\minval_\varphi$ has the following form when evaluated at $\gamma \in {\mathbb Q}\Gamma$: \[ \minval_\varphi(\gamma) = \begin{cases} c_1 \gamma + \beta_1, & \gamma \leq \delta_1,\\ c_2 \gamma + \beta_2, & \delta_1 \leq \gamma \leq \delta_2,\\ \vdots\\ c_{k-1} \gamma + \beta_{k-1}, & \delta_{k-2} \leq \gamma \leq \delta_{k-1},\\ c_k \gamma + \beta_k, & \delta_{k-1} \leq \gamma, \end{cases} \] where $c_1, \dots, c_k \in {\mathbb Z}$; $\beta_1, \dots, \beta_k \in \Gamma$; and $\delta_1, \dots, \delta_{k-1} \in {\mathbb Q}\Gamma$ are such that $\delta_1 < \cdots < \delta_{k-1}$. Furthermore, for all but finitely many $\gamma \in \Gamma$, we have that $v(\varphi(t)) = \minval_\varphi(v(t))$ for all $t \in K$ such that $v(t) = \gamma$. \end{lemma} Let $D$ be a local domain with field of fractions $K$. We now develop a technical lemma using the minimum valuation function: if every nonzero rational function in $\IntR(K,D)$ has a minimum valuation function that is identically $0$ or always strictly positive, then $\IntR(K,D)$ is a local domain. \begin{lemma}\label{Lem:MinvalDichotomyImpliesLocal} Let $D$ be a local domain with maximal ideal $\mathfrak{m}$ and field of fractions $K$.
Suppose that there is a valuation overring $V$ of $D$ with valuation $v$ such that for all $d \in \mathfrak{m}$, we have $v(d) > 0$. Also let $\Gamma$ be the value group of $v$. Suppose that for each nonzero $\varphi \in \IntR(K,D)$, either $\minval_\varphi(\gamma) = 0$ for all $\gamma \in \Gamma$ or $\minval_\varphi(\gamma) > 0$ for all $\gamma \in \Gamma$. Then $\IntR(K,D)$ is a local domain with maximal ideal $\IntR(K,\mathfrak{m})$. \end{lemma} \begin{proof} Let $\varphi \in \IntR(K,D)$. Suppose that $\minval_\varphi(\gamma) = 0$ for all $\gamma \in \Gamma$. Assume for a contradiction that there exists $a \in K \setminus \{0\}$ such that $v(\varphi(a)) > 0$. Then form \[ \psi(x) \coloneqq \varphi(a(1+x)) \in \IntR(K,D). \] There exists $\delta \in \Gamma$ so that for all $\gamma \geq \delta$, we have $v(\varphi(a)) = v(\psi(0)) = \minval_\psi(\gamma) > 0$. However, there also exists $\alpha \in \Gamma$ with $\alpha < 0$ such that $\minval_\psi(\alpha) = v(\psi(b))$ for all $b \in K$ with $v(b) = \alpha$ and $\minval_\varphi(v(a)+\alpha) = v(\varphi(c))$ for all $c \in K$ with $v(c) = v(a) + \alpha$ by Lemma \ref{Lem:MinvalForm}. Let $b \in K$ be such that $v(b) = \alpha$. Then \[ \minval_\psi(\alpha) = v(\psi(b)) = v(\varphi(a(1+b))) = \minval_\varphi(v(a)+\alpha) = 0. \] This gives us $\minval_\psi(\delta) > 0$ but $\minval_{\psi}(\alpha) = 0$, which is disallowed by assumption. Thus, for all $a \in K$, we have $v(\varphi(a)) = 0$, and since $\varphi(a) \in D$, we get $\varphi(a) \in D^\times$.
Now if we assume that $\varphi \in \IntR(K,D)$ is such that $\minval_\varphi(\gamma) > 0$ for all $\gamma \in \Gamma$, then we can similarly conclude that for all $a \in K$, we have $v(\varphi(a)) > 0$, so $\varphi(a) \in \mathfrak{m}$. Therefore, $\IntR(K,D) = \IntR(K,D)^\times \sqcup \IntR(K,\mathfrak{m})$, meaning that $\IntR(K,D)$ is a local domain with maximal ideal $\IntR(K,\mathfrak{m})$. \end{proof} Next, we can construct local domains $D$ that satisfy the previous lemma by requiring the valuation on $D$ to give every element of the maximal ideal of $D$ a strictly positive value. \begin{lemma}\label{Lem:GapImpliesLocal} Let $D$ be a local domain with maximal ideal $\mathfrak{m}$ and field of fractions $K$. Suppose there exists a valuation overring $V$ of $D$, with associated valuation $v$, such that the maximal ideal of $V$ is not principal. Let $\Gamma$ be the value group of $V$. Suppose that there exists $\gamma \in \Gamma$ with $\gamma > 0$ such that $v(d) > \gamma$ for each $d \in \mathfrak{m}$. Then $\IntR(K,D)$ is a local domain with maximal ideal $\IntR(K,\mathfrak{m})$. \end{lemma} \begin{proof} Let $\varphi \in \IntR(K,D)$ be nonzero. By Lemma \ref{Lem:MinvalForm}, we have that $\minval_\varphi$ is a piecewise linear function that maps $\Gamma$ to $\{0\} \sqcup v(\mathfrak{m})$. Since the maximal ideal of $V$ is not principal, the value group $\Gamma$ has no smallest strictly positive element. Thus, since $\minval_\varphi$ never takes a value in the gap $(0, \gamma]$, we must have $\minval_\varphi = 0$ identically or $\minval_\varphi(\gamma') > 0$ for all $\gamma' \in \Gamma$. Then $\IntR(K,D)$ is a local domain with maximal ideal $\IntR(K,\mathfrak{m})$ by Lemma \ref{Lem:MinvalDichotomyImpliesLocal}. \end{proof} We still need the base ring to be a bounded factorization domain in order to use Lemma \ref{Lem:LocalImpliesAtomic}. We get this if the image of the base ring under the separating valuation is a bounded factorization monoid.
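To make the piecewise linear behavior of $\minval$ described in Lemma \ref{Lem:MinvalForm} concrete, the following sketch (our own illustration, not from the text) tabulates $\minval_{f/g}$ for a rank-one value group embedded in ${\mathbb Q}$; the coefficient valuations below are invented choices.

```python
from fractions import Fraction

# Sketch (illustrative, not from the text): the minimum valuation
# function of a rational function f/g for a rank-one value group
# embedded in Q.  A polynomial is encoded by the list
# [v(a_0), v(a_1), ..., v(a_n)] of its coefficient valuations.

def minval_poly(coeff_vals, gamma):
    # minval_f(gamma) = min_i ( v(a_i) + i * gamma )
    return min(v + i * gamma for i, v in enumerate(coeff_vals))

def minval_rat(num_vals, den_vals, gamma):
    # minval_{f/g}(gamma) = minval_f(gamma) - minval_g(gamma)
    return minval_poly(num_vals, gamma) - minval_poly(den_vals, gamma)

# phi(x) = (x + t)/(x + t^2) with v(t) = 1: coefficient valuations
# [1, 0] for the numerator and [2, 0] for the denominator.  The result
# is piecewise linear with integer slopes: 0 for gamma <= 1, then
# 1 - gamma for 1 <= gamma <= 2, then constantly -1 for gamma >= 2.
f_vals, g_vals = [1, 0], [2, 0]
for gamma in [Fraction(-1), Fraction(1, 2), Fraction(3, 2), Fraction(3)]:
    print(gamma, minval_rat(f_vals, g_vals, gamma))
```

For instance, at $\gamma = \tfrac{3}{2}$ the sketch returns $-\tfrac{1}{2}$, matching the middle linear piece, and it is constant on each of the two outer pieces, mirroring the breakpoints $\delta_i$ of the lemma.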
\begin{lemma}\label{Lem:BFMtoLocalAtomic}
Let $D$ be a local domain with maximal ideal $\mathfrak{m}$ and field of fractions $K$. Suppose there exists a valuation overring $V$ of $D$ such that the maximal ideal of $V$ is not principal. Let $\Gamma$ be the value group of $V$ and $v$ the associated valuation. Suppose that there exists $\gamma \in \Gamma$ with $\gamma > 0$ such that $v(d) > \gamma$ for each $d \in \mathfrak{m}$ and that $M \coloneqq \{0\} \cup v(\mathfrak{m})$ is a bounded factorization monoid. Then $\IntR(K,D)$ is a local, atomic domain.
\end{lemma}

\begin{proof}
We want to use Lemma \ref{Lem:LocalImpliesAtomic}, so we will show that $D$ is a bounded factorization domain. Let $d \in \mathfrak{m} \setminus \{0\}$. We show that $d$ can be written as a product of finitely many atoms of $D$ by inducting on $L_M(v(d))$, which is finite since $M$ is a bounded factorization monoid. If $L_M(v(d)) = 1$, then $v(d)$ is an atom of $M$. Whenever we write $d = b_1b_2$ for some $b_1, b_2 \in D$, we have $v(d) = v(b_1) + v(b_2)$, so $v(b_1) = 0$ or $v(b_2) = 0$. Since every element of $\mathfrak{m}$ has value greater than $\gamma$, this means that $b_1$ or $b_2$ is a unit of $D$. Therefore, $d$ is an atom of $D$. Suppose now that $n \coloneqq L_M(v(d)) > 1$. If $d$ is an atom of $D$, there is nothing to do. If $d$ is not an atom of $D$, then we can write $d = b_1b_2$ for some $b_1, b_2 \in \mathfrak{m}$. Since $v(d) = v(b_1) + v(b_2)$, we must have $L_M(v(b_1)), L_M(v(b_2)) < n$. By induction, both $b_1$ and $b_2$ can be written as a product of finitely many atoms of $D$, so $d$ can be as well. Now write $d = a_1 \cdots a_m$ for some atoms $a_1, \dots, a_m \in D$. Since $v(d) = v(a_1) + \cdots + v(a_m)$ with no $v(a_i) = 0$, we see that $m \leq L_M(v(d)) < \infty$. Thus, $D$ is a bounded factorization domain.
We now use Lemma \ref{Lem:GapImpliesLocal} to show that $\IntR(K,D)$ is local, so Lemma \ref{Lem:LocalImpliesAtomic} tells us that $\IntR(K,D)$ is atomic.
\end{proof}

We will provide examples of domains $D$ that satisfy the conditions of the previous lemma by starting with the monoid $M$. We can form a ring out of this monoid, called the monoid ring. A collection of results on these rings can be found in \cite{GilmerSemigroupRings}. We then localize these monoid rings to satisfy the locality condition.

\begin{definition}
Let $R$ be a domain and $M$ be a commutative, cancellative, additive monoid. We form the \textbf{monoid ring} $R[t;M]$, whose elements are finite sums of the form $rt^{\alpha}$, where $r \in R$ and $\alpha \in M$. The multiplication in $R[t;M]$ is determined by $r_1t^{\alpha_1} \cdot r_2t^{\alpha_2} = r_1r_2t^{\alpha_1+\alpha_2}$ for $r_1, r_2 \in R$ and $\alpha_1, \alpha_2 \in M$. Denote by $(t;M)$ the ideal of $R[t;M]$ generated by $\{t^\alpha \mid \alpha \text{ is a nonzero nonunit of $M$}\}$.
\end{definition}

The ring ${\mathbb F}_2[t;M]_{(t;M)}$, where $M = \{q \in {\mathbb Q} \mid q = 0 \text{ or } q \geq 1\}$, has been used to produce an SFT ring of finite Krull dimension whose power series ring has infinite Krull dimension \cite{CoykendallSFT}. Here, we use localized monoid domains to produce a family of local, atomic rings of integer-valued rational functions. If the underlying monoid is totally ordered, these localized monoid domains carry a useful valuation, which produces the valuation domain required for Lemma \ref{Lem:BFMtoLocalAtomic}.

\begin{definition}
Let $M$ be a monoid. The monoid $M$ is \textbf{totally ordered} if there is a total order $\leq$ on $M$ such that for all $\alpha, \alpha', \beta, \beta' \in M$, having $\alpha \leq \alpha'$ and $\beta \leq \beta'$ implies $\alpha + \beta \leq \alpha' + \beta'$. Let $M$ be a totally ordered monoid.
We say that $M$ is \textbf{positive} if $\alpha \geq 0$ for all $\alpha \in M$. The \textbf{difference group} of $M$ is $\mathsf{gp}(M) \coloneqq \{\alpha - \alpha' \mid \alpha, \alpha' \in M \}$.

Take $k$ to be a field and $M$ to be a totally ordered positive monoid, and set $D$ to be $k[t;M]_{(t;M)}$. Let $K$ be the field of fractions of $D$. The \textbf{$M$-valuation} $v$ on $D$ is given by
\[
v\left( a_{\alpha_{i_1}} t^{\alpha_{i_1}} + \cdots + a_{\alpha_{i_n}}t^{\alpha_{i_n}} \right) = \min\{\alpha_{i_1}, \dots, \alpha_{i_n} \},
\]
where $\alpha_{i_1}, \dots, \alpha_{i_n} \in M$ are distinct and $a_{\alpha_{i_1}}, \dots, a_{\alpha_{i_n}} \in k$ are nonzero elements. This valuation on $D$ extends uniquely to a valuation on $K$ with value group $\mathsf{gp}(M)$.

For a totally ordered, positive monoid $M$, we say that $M$ is \textbf{bounded away from $0$} if there exists $\gamma \in \mathsf{gp}(M)$ with $\gamma > 0$ such that $\alpha \geq \gamma$ for each nonzero $\alpha \in M$. We say that $M$ is \textbf{integrally terminal for $\gamma$} for some $\gamma \in \mathsf{gp}(M)$ if every $\gamma' \in \mathsf{gp}(M)$ with $\gamma' > \gamma$ lies in $M$ and if for each $\gamma' \in \mathsf{gp}(M)$ with $\gamma' > 0$, there exists $n \in {\mathbb N}$ such that $n\gamma' > \gamma$. Note that the latter condition implies that $n\gamma' \in M$. We say that $M$ is \textbf{integrally terminal} if there exists some $\gamma \in \mathsf{gp}(M)$ such that $M$ is integrally terminal for $\gamma$.
\end{definition}

\begin{theorem}\label{Thm:Atomic}
Let $M$ be a totally ordered, positive, bounded factorization monoid that is bounded away from $0$. Suppose that $\Gamma \coloneqq \mathsf{gp}(M)$ has no minimal strictly positive element. Let $k$ be a field and take $D$ to be $k[t;M]_{(t;M)}$.
Also let $K$ be the field of fractions of $D$. Then $\IntR(K,D)$ is a local, atomic domain.
\end{theorem}

\begin{proof}
Since $M$ is bounded away from $0$, there exists $\gamma \in \Gamma$ with $\gamma > 0$ such that $\alpha \geq \gamma$ for each $\alpha \in M\setminus \{0\}$. Let $v$ be the $M$-valuation on $K$ and call the associated valuation domain $V$. Since $\Gamma$ has no minimal strictly positive element, the maximal ideal of $V$ is not principal, and we may pick $\gamma' \in \Gamma$ with $0 < \gamma' < \gamma$. Moreover, $\{0\} \cup v((t;M)k[t;M]_{(t;M)}) = M$ is a bounded factorization monoid. Furthermore, for each nonzero $d \in (t;M)k[t;M]_{(t;M)}$, we have $v(d) \in M$, so $v(d) \geq \gamma > \gamma'$. By Lemma \ref{Lem:BFMtoLocalAtomic}, we conclude that $\IntR(K,D)$ is a local, atomic domain.
\end{proof}

In some cases, the atomicity of these local, atomic domains of the form $\IntR(K,D)$, where $D = k[t;M]_{(t;M)}$, is fragile in the sense that the integral closure of $\IntR(K,D)$ is an antimatter domain. In particular, this happens when we require $M$ to be integrally terminal in addition to the conditions in Theorem \ref{Thm:Atomic}. The notion of $M$ being integrally terminal can be thought of as saying that $M$ contains all sufficiently large elements of $\mathsf{gp}(M)$ and that this sufficiently large portion makes up most of $\mathsf{gp}(M)$. Before we show that the integral closure of $\IntR(K,D)$ is antimatter in this case, we give a criterion, in terms of the $M$-valuation, for elements of $K$ to be in $D$.

\begin{lemma}\label{Lem:BiggestFilter}
Let $M$ be a totally ordered, positive monoid that is integrally terminal for some $\delta \in \Gamma \coloneqq \mathsf{gp}(M)$. Let $k$ be a field and take $D$ to be $k[t;M]_{(t;M)}$. Let $K$ be the field of fractions of $D$ and let $v$ be the $M$-valuation on $K$. If $c \in K$ is such that $v(c) > \delta$, or if $v(c) = \delta$ and $\delta \in M$, then $c \in D$.
\end{lemma}

\begin{proof}
Let $c \in K$ be such that $v(c) > \delta$.
We can write $c$ as
\[
c = \frac{\sum\limits_{\alpha \in \Gamma_{\geq 0}} a_{\alpha} t^\alpha}{\sum\limits_{\beta \in \Gamma_{\geq 0}} b_\beta t^\beta},
\]
where each $a_\alpha, b_\beta \in k$ and $a_\alpha, b_\beta = 0$ for all but finitely many $\alpha, \beta \in \Gamma_{\geq 0}$. We write $c$ so that $b_0 \neq 0$ and $\min\{\alpha \in \Gamma_{\geq 0} \mid a_\alpha \neq 0 \} = v(c)$. Let $\varepsilon = \min\{\beta \in \Gamma_{\geq 0} \setminus \{0\} \mid b_\beta \neq 0 \}$. If $\varepsilon$ does not exist, then $c \in k[t;M] \subseteq D$ and we are done. Now there is some odd $n\in {\mathbb N}$ such that $n\varepsilon > \delta$. Since $n$ is odd, $x+b_0$ divides $x^n+b_0^n$ in $k[x]$. Let $f(x) = \frac{x^n+b_0^n}{x+b_0} \in k[x]$. Then we have
\[
c = \frac{f\left(-b_0 + \sum\limits_{\beta \in \Gamma_{\geq 0}} b_\beta t^\beta \right)\sum\limits_{\alpha \in \Gamma_{\geq 0}} a_{\alpha} t^\alpha}{f\left(-b_0 + \sum\limits_{\beta \in \Gamma_{\geq 0}} b_\beta t^\beta \right)\sum\limits_{\beta \in \Gamma_{\geq 0}} b_\beta t^\beta} = \frac{f\left(-b_0 + \sum\limits_{\beta \in \Gamma_{\geq 0}} b_\beta t^\beta \right)\sum\limits_{\alpha \in \Gamma_{\geq 0}} a_{\alpha} t^\alpha}{\left(-b_0 + \sum\limits_{\beta \in \Gamma_{\geq 0}} b_\beta t^\beta \right)^n + b_0^n}.
\]
Each term in $\left(\sum\limits_{\beta \in \Gamma_{\geq 0} \setminus \{0\}} b_\beta t^\beta\right)^n$ has value at least $n\varepsilon > \delta$.
Therefore, the entire denominator $\left(\sum\limits_{\beta \in \Gamma_{\geq 0} \setminus \{0\}} b_\beta t^\beta\right)^n + b_0^n$ is in $k[t;M] \setminus (t;M)$. As for the numerator, each term, when expanded out, has value at least $v(c) > \delta$, so the numerator is in $k[t;M]$. Thus, $c \in D$. If $v(c) = \delta$ and $\delta \in M$, the proof is similar.
\end{proof}

\begin{example}
Let $M$ be a positive submonoid of ${\mathbb R}_{\geq 0}$ such that there exists $\delta \in {\mathbb R}_{> 0}$ with $[\delta, \infty) \subseteq M$. Note that $M$ is integrally terminal. Also let $k$ be a field and $D = k[t;M]_{(t;M)}$. Denote by $K$ the field of fractions of $D$ and by $v$ the $M$-valuation on $K$. Then $\frac{t^{13\delta} + t^{7\delta} + t^{3\delta}}{t^{\delta/2}+1} \in D$ since $v\left(\frac{t^{13\delta} + t^{7\delta} + t^{3\delta}}{t^{\delta/2}+1}\right) = 3\delta > \delta$.
\end{example}

Now we compute the integral closure of $\IntR(K,D)$ under certain assumptions and prove that this ring is an antimatter domain.

\begin{proposition}
Let $M$ be a totally ordered, integrally terminal, positive monoid bounded away from $0$. Suppose that $\Gamma \coloneqq \mathsf{gp}(M)$ has no minimal strictly positive element. Let $k$ be a field, take $D$ to be the ring $k[t;M]_{(t;M)}$, and let $K$ be the field of fractions of $D$. Denote by $V$ the valuation domain associated with the $M$-valuation $v$ on $K$. The integral closure of $\IntR(K,D)$ is the local, antimatter domain $\IntR(K,D)^\times + \mathfrak{M}$, where
\[
\mathfrak{M} = \{\varphi \in \IntR(K,V) \mid \exists \gamma \in \Gamma_{> 0} \text{ such that } \,\forall a \in K,\ v(\varphi(a)) \geq \gamma \}
\]
and $\mathfrak{M}$ is the maximal ideal of the integral closure of $\IntR(K,D)$.
\end{proposition}

\begin{proof}
Since $M$ is bounded away from $0$, there exists some $\gamma_1 \in \Gamma_{> 0}$ such that $\alpha \geq \gamma_1$ for all nonzero $\alpha \in M$. Since $M$ is integrally terminal, there exists some $\gamma_2 \in \Gamma$ such that every $\gamma \in \Gamma$ with $\gamma > \gamma_2$ lies in $M$ and such that for each $\gamma \in \Gamma$ with $\gamma > 0$, there exists $n \in {\mathbb N}$ with $n\gamma > \gamma_2$. Let $\IntR(K,D)'$ be the integral closure of $\IntR(K,D)$. First, we notice that $\IntR(K,D)^\times \subseteq \IntR(K,D) \subseteq \IntR(K,D)'$. If we take $\varphi \in \mathfrak{M}$, then there exists $\gamma \in \Gamma$ with $\gamma > 0$ such that for all $a \in K$, we have $v(\varphi(a)) \geq \gamma$. Then there exists $n \in {\mathbb N}$ with $n > 0$ such that $n \gamma > \gamma_2$. Thus, $v(\varphi^n(a)) \geq n\gamma > \gamma_2$ for all $a \in K$. From this, we see that $\varphi^n \in \IntR(K,D)$, so $\varphi \in \IntR(K,D)'$. Thus, $\IntR(K,D)'$ contains $\IntR(K,D)^\times + \mathfrak{M}$.

Now let $\varphi \in \IntR(K,D)'$. Then $\varphi^n + \psi_{n-1}\varphi^{n-1} + \cdots + \psi_1 \varphi + \psi_0 = 0$ for some $\psi_0, \dots, \psi_{n-1} \in \IntR(K,D)$. Then for every $a \in K$, we see that $\varphi(a)$ satisfies a monic polynomial with coefficients in $D$, so $\varphi(a)$ is in the integral closure of $D$, which is contained in $V$. Thus, $\varphi \in \IntR(K,V)$. We consider the case where $\varphi$ is not a unit of $\IntR(K,V)$ and the case where $\varphi$ is a unit of $\IntR(K,V)$.

First, consider the case where $\varphi \notin \IntR(K,V)^\times$. Suppose that there does not exist $\gamma \in \Gamma$ with $\gamma > 0$ such that $v(\varphi(a)) \geq \gamma$ for all $a \in K$.
We look at the monic polynomial relation over $\IntR(K,D)$ that $\varphi$ satisfies: $\psi_{n}\varphi^n + \psi_{n-1}\varphi^{n-1} + \cdots + \psi_1 \varphi + \psi_0 = 0$ for some $\psi_0, \dots, \psi_{n-1} \in \IntR(K,D)$ and $\psi_{n} = 1$. There exists some $i \in \{ 1, \dots, n\}$ such that $\psi_{1}, \dots, \psi_{i-1}$ are all in the maximal ideal of $\IntR(K,D)$ and $\psi_i \in \IntR(K,D)^\times$. Then we take $a \in K$ such that $0 < v(\varphi(a)) < \frac{\gamma_1}{i}$. We get that
\[
v(\psi_0(a)) = v(\varphi(a)^n + \psi_{n-1}(a)\varphi(a)^{n-1} + \cdots + \psi_1(a)\varphi(a) ) = iv(\varphi(a)) < \gamma_1,
\]
since we have $v(\psi_j(a)\varphi(a)^j) = v(\psi_j(a)) + jv(\varphi(a)) > \gamma_1 > iv(\varphi(a))$ for $j$ such that $1 \leq j < i$ and we have $v(\psi_j(a)\varphi(a)^j) \geq jv(\varphi(a)) > iv(\varphi(a))$ for $j$ such that $i < j \leq n$. On the other hand,
\begin{align*}
v(\psi_0(a)) & = v\left(\sum_{j=1}^{n}\psi_j(a)\varphi(a)^j \right) \\
& = v(\varphi(a)) + v\left(\sum_{j=1}^{n}\psi_j(a)\varphi(a)^{j-1}\right) \\
& \geq v(\varphi(a)) > 0.
\end{align*}
This implies that $0 < v(\psi_0(a)) < \gamma_1$, which is impossible since $\psi_0(a) \in D$. This means that there must be a $\gamma \in \Gamma$ with $\gamma > 0$ such that $v(\varphi(a)) \geq \gamma$ for all $a \in K$. Therefore, $\varphi \in \mathfrak{M}$.

If $\varphi \in \IntR(K,V)^\times$, we have that $v(\varphi(0) - c) > 0$ for some $c \in k^\times$.
Then set $\Phi(x) := \varphi(x) - c$. Now, $\Phi \notin \IntR(K,V)^\times$ since $v(\Phi(0)) > 0$. We know $\Phi \in \IntR(K,D)'$, so we have $\Phi \in \mathfrak{M}$ by the previous case. Therefore, we conclude that $\varphi \in c + \mathfrak{M} \subseteq \IntR(K,D)^\times + \mathfrak{M}$.

Now that $\IntR(K,D)' = \IntR(K,D)^\times + \mathfrak{M}$, we see that the set of nonunits of $\IntR(K,D)'$ is $\mathfrak{M}$ and that $\mathfrak{M}$ is closed under addition. Thus, $\IntR(K,D)'$ is local with maximal ideal $\mathfrak{M}$.

To see that $\IntR(K,D)'$ is antimatter, take a nonzero $\varphi \in \mathfrak{M}$. There is some $\gamma \in \Gamma$ with $\gamma > 0$ such that $v(\varphi(a)) \geq \gamma$ for all $a \in K$. There exists some $\gamma' \in \Gamma$ such that $0 < \gamma' < \gamma$ since $\Gamma$ has no minimal strictly positive element. Then $\varphi = t^{\gamma'}\frac{\varphi}{t^{\gamma'}}$ with $t^{\gamma'}, \frac{\varphi}{t^{\gamma'}} \in \mathfrak{M}$. Thus, any nonzero, nonunit element of $\IntR(K,D)'$ is strictly divisible by a nonunit element.
\end{proof}

\section{Factorization lengths}\label{Sect:Lengths}
\indent\indent For the atomic family of integer-valued rational functions over localizations of monoid domains constructed in the previous section, the factorization of constant rational functions differs from the factorization of the same constant considered as an element of the base ring. One way this difference in factorization behavior is captured is by comparing the sets of factorization lengths. We first prove that atoms in the base ring remain atoms in the ring of integer-valued rational functions.

\begin{lemma}\label{Lem:AtomTransfer}
Let $D$ be an atomic domain with field of fractions $K$. Let $E$ be a nonempty subset of $K$ such that $\IntR(E,D)$ is a local domain. If $d \in D$ is an atom of $D$, then $d$ is an atom of $\IntR(E,D)$.
\end{lemma}

\begin{proof}
We have seen in the proof of Lemma \ref{Lem:LocalImpliesAtomic} that $\IntR(E,D)$ being local implies that $D$ is local. Furthermore, $\IntR(E,\mathfrak{m})$ is the unique maximal ideal of $\IntR(E,D)$, where $\mathfrak{m}$ is the unique maximal ideal of $D$. Let $d \in D$ be an atom of $D$ and suppose that $d = \varphi_1\varphi_2$ for some $\varphi_1, \varphi_2 \in \IntR(E,D)$. Then $d = \varphi_1(a)\varphi_2(a)$ for all $a \in E$. If we fix an $a \in E$, we see that $\varphi_1(a)$ or $\varphi_2(a)$ is a unit of $D$. Since the maximal ideal of $\IntR(E,D)$ is $\IntR(E,\mathfrak{m})$, we know that $\varphi_1$ or $\varphi_2$ is a unit of $\IntR(E,D)$. This implies that $d$ is an atom of $\IntR(E,D)$.
\end{proof}

Let $k$ be a field and $M$ be a totally ordered, positive monoid. Take $D$ to be $k[t;M]_{(t;M)}$ and $K$ to be its field of fractions. Lemma \ref{Lem:BiggestFilter} shows that if $M$ is integrally terminal, then an element of $K$ with sufficiently large value is in $D$. The part of $M$ that is not sufficiently large does not behave as nicely. For example, if $\varphi \in \IntR(K,D)$ is such that $v(\varphi(a))$ is not in the sufficiently large part of $M$ for some $a \in K$, then $\varphi$ is restricted in its form.

\begin{lemma}\label{Lem:Flat}
Let $k$ be a field and $M$ be a totally ordered, positive monoid. Set $\Gamma \coloneqq \mathsf{gp}(M)$ and suppose $\Gamma$ has no minimal strictly positive element. Let $D = k[t;M]_{(t;M)}$, let $K$ be its field of fractions, and let $v$ be the $M$-valuation on $K$. Take some $\varphi \in \IntR(K,D)$. Suppose there exist $b \in K$ and $\delta, \delta' \in \Gamma_{\geq 0}$ such that $0 < v(\varphi(b)) < \delta < \delta'$ and there does not exist $\alpha \in M$ with $\delta < \alpha < \delta'$. Then $\{v(\varphi(a)) \mid a \in K \} = \{v(\varphi(b))\}$.
\end{lemma}

\begin{proof}
Without loss of generality, we can assume that $b = 0$.
Consider the function $\psi(x) \coloneqq \varphi(x) - \varphi(0)$. Since $\psi(0) = 0$, we calculate that $\minval_\psi(\gamma) = v(\psi(0)) \geq \delta'$ for all sufficiently large values of $\gamma$. Since $\minval_\psi$ has image contained in $M$ and due to the form of $\minval_\psi$ given by Lemma \ref{Lem:MinvalForm}, we know that $\minval_\psi(\gamma) \geq \delta'$ for all $\gamma \in \Gamma$.

Suppose there exists $a \in K$ such that $v(\psi(a)) < \delta'$; then in fact $v(\psi(a)) \leq \delta$, since $\psi(a) \in D$ has value in $M$ and $M$ has no element strictly between $\delta$ and $\delta'$. Consider $\rho(x) \coloneqq \psi(a(1+x))$. By Lemma \ref{Lem:MinvalForm}, there exists $d \in K$ with $v(d) < 0$ such that
\[
\minval_\rho(v(d)) = v(\rho(d)) = v(\psi(a(1+d))) = \minval_\psi(v(a) + v(d)) \geq \delta'.
\]
However, $v(\rho(0)) = v(\psi(a)) \leq \delta$, so $\minval_\rho(\gamma) \leq \delta$ for $\gamma \in \Gamma$ sufficiently large. This is a contradiction, since $\minval_\rho$ cannot attain both values greater than or equal to $\delta'$ and values at most $\delta$. Thus, $v(\psi(a)) \geq \delta'$ for all $a \in K$.

Now we see that $v(\varphi(a) - \varphi(0)) \geq \delta'$ for all $a \in K$. Since $v(\varphi(0)) < \delta'$, we have that $v(\varphi(a)) = v(\varphi(0))$ for all $a \in K$.
\end{proof}

\begin{example}
Let $M = \{0\} \cup [2,3] \cup [4, \infty)$ be a submonoid of ${\mathbb R}$ and let $k$ be any field. Take $D$ to be $k[t;M]_{(t;M)}$ and $K$ to be its field of fractions. Let $v$ be the $M$-valuation on $K$ and take $\varphi \in \IntR(K,D)$. Suppose that there exists $b \in K$ such that $v(\varphi(b)) = 2$. Since the interval $(3,4)$ is missing from $M$, we must have $v(\varphi(a)) = 2$ for all $a \in K$.
\end{example}

The previous example also satisfies the following theorem if $k$ is not algebraically closed. Another example that satisfies the following theorem is obtained by replacing $M$ with $\left(\{0,1,\sqrt{2}\} \cup [2\sqrt{2}-1,\infty)\right)\cap {\mathbb Z}[\sqrt{2}]$. In this case, the field $k$ can be algebraically closed and the conditions of the theorem still hold, since $\mathsf{gp}(M) = {\mathbb Z}[\sqrt{2}]$ is not divisible.

\begin{theorem}\label{Thm:ExtendSetOfLengths}
Let $M$ be a totally ordered, integrally terminal, positive, bounded factorization monoid. Suppose that $\Gamma \coloneqq \mathsf{gp}(M)$ has no minimal strictly positive element. Further suppose that there exist some $\alpha \in \Gamma$ for which $M$ is integrally terminal and some $\beta \in \Gamma$ with $0 < \beta < \alpha$ such that no $\gamma \in M$ satisfies $\beta < \gamma < \alpha$. Let $k$ be a field and suppose that $k$ is not algebraically closed or that $\Gamma$ is not divisible. Take $D = k[t;M]_{(t;M)}$, $K$ to be its field of fractions, and $v$ to be the $M$-valuation on $K$. Then for any nonzero, nonunit $d \in D$ with $v(d) > 3\alpha$, we have
\[
\mathcal{L}_D(d) \cup \{2, \dots, N-2 \} \subseteq \mathcal{L}_{\IntR(K,D)}(d) \subseteq \{2, \dots, L_D(d)\},
\]
where $N \coloneqq \min\{n\in {\mathbb N} \mid n\alpha > v(d)\}$.
\end{theorem}

\begin{proof}
We observe from Lemma \ref{Lem:AtomTransfer} that atoms of $D$ are atoms of $\IntR(K,D)$. Thus, $\mathcal{L}_D(d) \subseteq \mathcal{L}_{\IntR(K,D)}(d)$. The existence of $\beta \in \Gamma$ with $0 < \beta < \alpha$ such that no $\gamma \in M$ satisfies $\beta < \gamma < \alpha$ means that there does not exist $\gamma \in M$ with $0 < \gamma < \alpha - \beta$: otherwise, all multiples of such a $\gamma$ lie in $M$, and the first multiple exceeding $\beta$ would lie in $(\beta, \alpha)$. This implies that $M$ is bounded away from $0$.
Since $M$ is bounded away from $0$, there exists $\gamma_1 \in \Gamma$ with $\gamma_1 > 0$ such that $\gamma \geq \gamma_1$ for all nonzero $\gamma \in M$. We now split into two cases. For the first case, we assume that $k$ is not algebraically closed and $\Gamma$ is divisible. For the second case, we assume that $\Gamma$ is not divisible.
\begin{enumerate}
\item Assume that $k$ is not algebraically closed and $\Gamma$ is divisible. We now fix an $\ell \in \{2, \dots, N-2\}$ and set $\alpha' = \frac{v(d)-\alpha}{\ell - 1}$. Then we write
\[
d = \underbrace{\varphi_{\alpha, \alpha',\varepsilon} \cdots \varphi_{\alpha, \alpha',\varepsilon}}_{\text{$\ell - 1$ times}} \cdot \frac{d}{\varphi_{\alpha, \alpha', \varepsilon}^{\ell-1}},
\]
where $\varphi_{\alpha, \alpha', \varepsilon}\in K(x)$ is such that there exist $\delta, \delta' \in {\mathbb Q}\Gamma$ so that for all $a \in K$, we have
\[
v(\varphi_{\alpha, \alpha', \varepsilon}(a)) =
\begin{cases}
\alpha, & \text{if } v(a) < \delta, \\
\alpha', & \text{if } v(a) > \delta',
\end{cases}
\]
and $\alpha \leq v(\varphi_{\alpha, \alpha', \varepsilon}(a)) \leq \alpha'$ if $\delta \leq v(a) \leq \delta'$, guaranteed by Lemma \ref{Lem:ZigZag}, using any value for $\varepsilon \in {\mathbb Q}\Gamma$ such that $\varepsilon > 0$. We will show that each factor is an atom of $\IntR(K,D)$. We have that $\varphi_{\alpha, \alpha', \varepsilon} \in \IntR(K,D)$ by Lemma \ref{Lem:BiggestFilter}. Suppose that $\varphi_{\alpha, \alpha', \varepsilon} = \psi_1\psi_2$ for some $\psi_1, \psi_2 \in \IntR(K,D)$. Take $a \in K$ with $v(a) < 0$. Then $\alpha = v(\varphi_{\alpha, \alpha', \varepsilon}(a)) = v(\psi_1(a)) + v(\psi_2(a))$.
If $v(\psi_1(a))$ or $v(\psi_2(a))$ is $0$, then $\psi_1$ or $\psi_2$ is a unit of $\IntR(K,D)$ by Theorem \ref{Thm:Atomic}. If not, then $0 < v(\psi_i(a)) \leq \beta$ for each $i \in \{1,2\}$, but then Lemma \ref{Lem:Flat} makes each set $\{v(\psi_i(a)) \mid a \in K \}$ a singleton, so $\{v(\varphi_{\alpha, \alpha', \varepsilon}(a)) \mid a \in K \}$ is a singleton as well, a contradiction. Thus, $\varphi_{\alpha, \alpha', \varepsilon}$ is an atom of $\IntR(K,D)$.

Now we consider $\rho \coloneqq \frac{d}{\varphi_{\alpha, \alpha', \varepsilon}^{\ell-1}}$. Letting $a \in K$, we calculate that
\[
v(\rho(a)) =
\begin{cases}
v(d) - (\ell -1)\alpha, & v(a) < \delta,\\
v(d)-(\ell-1)\alpha' = \alpha, & v(a) > \delta',
\end{cases}
\]
and $\alpha \leq v(\rho(a)) \leq v(d) - (\ell - 1)\alpha$ if $\delta \leq v(a) \leq \delta'$. We also calculate that $v(d) - (\ell-1)\alpha \geq v(d) - (N-3)\alpha \geq 2\alpha$. This shows that $\rho \in \IntR(K,D)$ by Lemma \ref{Lem:BiggestFilter}. We can make a similar argument as the one for $\varphi_{\alpha, \alpha', \varepsilon}$ to show that $\rho$ is an atom. Since we have provided a factorization of $d$ in $\IntR(K,D)$ of length $\ell$ for each $\ell \in \{2, \dots, N-2\}$, we have that $\{2, \dots, N-2\} \subseteq \mathcal{L}_{\IntR(K,D)}(d)$.

\item Now assume that $\Gamma$ is not divisible. Fix $\ell \in \{2, \dots, N-2\}$. Let $\alpha' = \frac{v(d)-\alpha}{\ell-1}$ and $\varepsilon = \frac{\gamma_1}{\ell-1}$.
By Lemma \ref{Lem:ZigZag}, there exists $\varphi_{\alpha, \alpha', \varepsilon} \in K(x)$ such that there exist $\delta, \delta' \in {\mathbb Q}\Gamma$ and some $\alpha'' \in \Gamma$ with $\alpha' - \varepsilon < \alpha'' \leq \alpha'$ so that for all $a \in K$, we have
\[
v(\varphi_{\alpha, \alpha', \varepsilon}(a)) =
\begin{cases}
\alpha, & \text{if } v(a) < \delta, \\
\alpha'', & \text{if } v(a) > \delta',
\end{cases}
\]
and $\alpha \leq v(\varphi_{\alpha, \alpha', \varepsilon}(a)) \leq \alpha''$ if $\delta \leq v(a) \leq \delta'$. Now write again
\[
d = \underbrace{\varphi_{\alpha, \alpha', \varepsilon} \cdots \varphi_{\alpha, \alpha', \varepsilon}}_{\text{$\ell - 1$ times}} \cdot \frac{d}{\varphi_{\alpha, \alpha', \varepsilon}^{\ell-1}}.
\]
We can show that $\varphi_{\alpha, \alpha', \varepsilon}$ is an atom of $\IntR(K,D)$ as before. Set $\rho = \frac{d}{\varphi_{\alpha, \alpha', \varepsilon}^{\ell-1}}$. Letting $a \in K$, we calculate that
\[
v(\rho(a)) =
\begin{cases}
v(d) - (\ell -1)\alpha, & v(a) < \delta,\\
v(d)-(\ell-1)\alpha'' \geq \alpha, & v(a) > \delta',
\end{cases}
\]
and $\alpha \leq v(\rho(a)) \leq v(d) - (\ell - 1)\alpha$ if $\delta \leq v(a) \leq \delta'$. Since $\alpha' - \varepsilon < \alpha'' \leq \alpha'$, we have that $\alpha \leq v(d) - (\ell-1)\alpha'' < \alpha+\gamma_1$. Utilizing Lemma \ref{Lem:BiggestFilter} shows that $\rho \in \IntR(K,D)$. Now write $\rho = \psi_1\psi_2$ for some $\psi_1, \psi_2 \in \IntR(K,D)$. For all $a \in K$ with $v(a)> \delta'$, we have that $\alpha \leq v(\psi_1(a)) + v(\psi_2(a)) < \alpha + \gamma_1$. If for such an $a$ neither $v(\psi_1(a))$ nor $v(\psi_2(a))$ is $0$, then we must have $0 < v(\psi_1(a)) \leq \beta$ and $0 < v(\psi_2(a)) \leq \beta$.
Otherwise, we would have $v(\psi_i(a)) \geq \alpha$ for some $i \in \{1,2\}$, and then $0 < v(\psi_j(a)) < \gamma_1$ for the $j \in \{1,2\}$ with $j \neq i$, a contradiction. However, if $0 < v(\psi_1(a)), v(\psi_2(a)) \leq \beta$, then Lemma \ref{Lem:Flat} makes each set $\{v(\psi_i(a)) \mid a \in K\}$ a singleton, so $\{v(\rho(a)) \mid a \in K\}$ is a singleton, a contradiction. Thus, for some $i \in \{1,2\}$, we have $v(\psi_i(a)) = 0$ for $a \in K$ such that $v(a)$ is sufficiently large. Therefore, $\psi_1$ or $\psi_2$ is a unit of $\IntR(K,D)$ by Theorem \ref{Thm:Atomic}.
\end{enumerate}

Lastly, if we write $d = \psi_1 \cdots \psi_n$ for atoms $\psi_1, \dots, \psi_n \in \IntR(K,D)$, then we see that $d = \psi_1(0) \cdots \psi_n(0)$ is a product of $n$ nonunit elements of $D$. Therefore, $n \leq L_D(d)$.
\end{proof}

\section{Positive monoids with one gap}\label{Sect:1toInfinity}
\indent\indent Let $\Gamma$ be an additive subgroup of ${\mathbb R}$ with no minimal strictly positive element. We can scale the elements so that, without loss of generality, $1 \in \Gamma$. In this section, we want to focus on integer-valued rational functions over the ring $k[t;M]_{(t;M)}$, where $k$ is a field and $M$ is of the form $M = \{0\} \cup \{\gamma \in \Gamma \mid \gamma \geq 1\}$. The reason for this is that we can always utilize Lemma \ref{Lem:BiggestFilter} to understand the factorization in the base ring $k[t;M]_{(t;M)}$.

\begin{proposition}\label{Prop:FactoringOneInfinity}
Suppose that $\Gamma$ is an additive subgroup of ${\mathbb R}$ with no minimal strictly positive element and $1 \in \Gamma$. Let $M = \{0\} \cup \{\gamma \in \Gamma \mid \gamma \geq 1\}$. Take $D$ to be $k[t;M]_{(t;M)}$ for some field $k$ and let $v$ be the $M$-valuation on $D$.
Then for any nonzero, nonunit element $d \in D$, we have
\[
\mathcal{L}_D(d) = \left\{ \lim_{r \to 2^-} \left\lceil \frac{v(d)}{r} \right\rceil, \dots, \lfloor v(d) \rfloor - 1, \lfloor v(d) \rfloor \right\}
\]
and
\[
c_D(d) =
\begin{cases}
0,& \text{if $1 \leq v(d) < 2$},\\
2,& \text{if $2 \leq v(d) < 3$},\\
3,& \text{if $v(d) \geq 3$}.
\end{cases}
\]
\end{proposition}

\begin{proof}
First note that the atoms of $D$ are exactly the elements $d \in D$ such that $1 \leq v(d) < 2$. If $v(d) \geq 2$, then $t$ strictly divides $d$ by Lemma \ref{Lem:BiggestFilter}. Let $d \in D$ be a nonzero, nonunit element. Next, we fix a value for $\ell$ that is in $\left\{ \lim\limits_{r \to 2^-} \left\lceil \frac{v(d)}{r} \right\rceil, \dots, \lfloor v(d) \rfloor - 1, \lfloor v(d) \rfloor \right\}$. We can write
\[
d = \underbrace{t^{v(d)/\ell} \cdots t^{v(d)/\ell}}_{\text{$\ell - 1$ times}} \cdot \frac{d}{t^{\frac{(\ell-1)v(d)}{\ell}}}.
\]
We want to show each factor is irreducible. We know that $v(t^{v(d)/\ell}) = \frac{v(d)}{\ell}$ and $v\left(\frac{d}{t^{\frac{(\ell-1)v(d)}{\ell}}} \right) = v(d) - \frac{(\ell-1)v(d)}{\ell} = \frac{v(d)}{\ell}$. Since $\ell \leq \lfloor v(d) \rfloor$, we have
\[
\frac{v(d)}{\ell} \geq \frac{v(d)}{\lfloor v(d) \rfloor} \geq 1.
\] On the other hand, since $\lim\limits_{r \to 2^-} \left\lceil \frac{v(d)}{r} \right\rceil \leq \ell$, there exists a value $s \in (0, 2)$ such that $\lim\limits_{r \to 2^-} \left\lceil \frac{v(d)}{r} \right\rceil = \left\lceil\frac{v(d)}{s} \right\rceil \leq \ell$. This implies that $\frac{v(d)}{s} \leq \ell$ and thus \[ \frac{v(d)}{\ell} \leq s < 2. \] Since $1 \leq \frac{v(d)}{\ell} < 2$, we have that $t^{v(d)/\ell}$ and $\frac{d}{t^{\frac{(\ell-1)v(d)}{\ell}}}$ are irreducible. Now we want to show that $d$ has no other factorization lengths. Suppose that $d = a_1\cdots a_\ell$ for some $\ell \in {\mathbb N}$ and each $a_i \in D$ irreducible. Then $v(a_i) \in [1,2)$ for each $i$, so $v(d) \in [\ell, 2\ell)$. Since $\ell\leq v(d)$, we must have $\ell \leq \lfloor v(d) \rfloor$. On the other hand, we have $\frac{v(d)}{2} < \ell$. Thus, $\lim\limits_{r \to 2^-} \left\lceil \frac{v(d)}{r} \right\rceil \leq \ell$. This shows that \[ \mathcal{L}_D(d) = \left\{ \lim_{r \to 2^-} \left\lceil \frac{v(d)}{r} \right\rceil, \dots, \lfloor v(d) \rfloor - 1, \lfloor v(d) \rfloor \right\}. \] Now we calculate the catenary degree. If $1 \leq v(d) < 2$, then $d$ is an atom of $D$, so the catenary degree is $0$. If $2 \leq v(d) < 3$, then $\mathcal{L}_D(d) = \{2\}$. Moreover, we have that for any distinct $\alpha, \beta \in M$ such that $1 < \alpha, \beta < 2$, the elements $t + t^\alpha$ and $t + t^\beta$ are not associates.
Otherwise, $\frac{t+t^\alpha}{t+t^\beta}-1 = \frac{t^{\alpha-1}-t^{\beta-1}}{1+t^{\beta-1}} \in D$, but the valuation of this element is $\min\{\alpha-1,\beta-1\}$, which is not in $M$, a contradiction. Thus, for $d \in D$ such that $2 \leq v(d) < 3$, since the atoms $t + t^{\alpha}$ for $\alpha \in M$ with $1 < \alpha < 2$ are pairwise non-associate divisors of $d$, we know that $\abs{\mathsf{Z}_D(d)} > 1$, meaning that $c_D(d) = 2$. Now we assume that $d \in D\setminus\{0\}$ is such that $v(d) \geq 3$. We will establish a $3$-chain from any factorization of $d$ to the factorization of $d$ given by \[d = \underbrace{t\cdots t}_{\text{$\lfloor v(d) \rfloor-1$ times}} \cdot \frac{d}{t^{\lfloor v(d) \rfloor-1}}.\] We have that $\abs{\mathcal{L}_D(d)} > 1$, so this will imply that $c_D(d) = 3$. Take a factorization $d = a_1 \cdots a_n$, where each $a_i \in D$ is irreducible. If there do not exist distinct $i, j \in \{1, \dots, n\}$ such that $a_i, a_j \not\sim t$, then the factorization $a_1 \cdots a_n$ is the same as $t \cdots t \cdot \frac{d}{t^{\lfloor v(d) \rfloor-1}}$ up to reordering and association. Now suppose that there exist distinct $i, j \in \{1, \dots, n\}$ such that $a_i, a_j \not\sim t$. Note that $2 \leq v(a_ia_j) < 4$. We know that \[ a_i a_j = \begin{cases} t \cdot t \cdot \frac{a_i a_j}{t^2}, & \text{if $3 \leq v(a_ia_j)< 4$},\\ t \cdot \frac{a_ia_j}{t}, & \text{if $2 \leq v(a_ia_j) < 3$}. \end{cases} \] In either case, there is a factorization of $d$ that is of distance at most $3$ from $a_1 \cdots a_n$ and has strictly more factors associate to $t$. Thus, there is a $3$-chain between any two factorizations of $d$. As established before, this implies that $c_D(d) = 3$. \end{proof} Let $D$ be as in the previous proposition and let $K$ be its field of fractions. We compare the factorization of elements $d \in D$ viewed as elements of $D$ and viewed as elements of $\IntR(K,D)$.
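To illustrate Proposition \ref{Prop:FactoringOneInfinity} in a concrete case, take $\Gamma = {\mathbb Z}[\frac{1}{2}]$, the group of dyadic rationals, which contains $1$ and has no minimal strictly positive element, and let $d = t^{7/2}$, so that $v(d) = \frac{7}{2}$. Since $\lim_{r \to 2^-} \left\lceil \frac{7/2}{r} \right\rceil = \left\lceil \frac{7}{4} \right\rceil = 2$ and $\lfloor \frac{7}{2} \rfloor = 3$, the proposition gives $\mathcal{L}_D(d) = \{2,3\}$ and, since $v(d) \geq 3$, also $c_D(d) = 3$. The two lengths are realized by \[ d = t^{7/4} \cdot t^{7/4} = t \cdot t \cdot t^{3/2}, \] where each displayed factor is an atom of $D$ because its valuation lies in $[1,2)$.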
We see that the set of lengths extends downward to $2$, but the catenary degree remains the same. \begin{theorem} Suppose that $\Gamma$ is an additive subgroup of ${\mathbb R}$ with no minimal strictly positive element and $1 \in \Gamma$. Let $M = \{0\} \cup \{\gamma \in \Gamma \mid \gamma \geq 1\}$. Take $D = k[t;M]_{(t;M)}$ for some field $k$, and denote by $K$ the field of fractions of $D$. Suppose that $v$ is the $M$-valuation on $K$. Also suppose that $k$ is not algebraically closed or $\mathsf{gp}(M)$ is not divisible. Then for any nonzero, nonunit element $d \in D$, we have \[ \mathcal{L}_{\IntR(K,D)}(d) = \begin{cases} \left\{ 2, 3, \dots, \lfloor v(d) \rfloor - 1, \lfloor v(d) \rfloor \right\}, & \text{if $v(d) \geq 2$},\\ \left\{1 \right\}, & \text{if $1 \leq v(d) < 2$} \end{cases} \] and \[ c_{\IntR(K,D)}(d) = \begin{cases} 0,& \text{if $1 \leq v(d) < 2$},\\ 2,& \text{if $2 \leq v(d) < 3$},\\ 3,& \text{if $v(d) \geq 3$}. \end{cases} \] Moreover, for a nonzero, nonunit element $\varphi \in \IntR(K,D)$, we have \[\mathcal{L}_{\IntR(K,D)}(\varphi) =\begin{cases} \left\{2,3, \dots, \lfloor \alpha(\varphi)\rfloor \right\}, & \text{if $\alpha(\varphi) \geq 2$},\\ \{1\}, & \text{if $1 \leq \alpha(\varphi) <2$}, \end{cases}\] where $\alpha(\varphi) = \inf\{v(\varphi(a)) \mid a \in K \}$. \end{theorem} \begin{proof} Let $d \in D$ be a nonzero, nonunit element. Suppose $1 \leq v(d) < 2$. Then Lemma \ref{Lem:AtomTransfer} implies that $d$ is irreducible in $\IntR(K,D)$. Suppose that $2 \leq v(d) \leq 3$.
Then we have $\{2, \dots, \left\lfloor v(d) \right\rfloor \} = \mathcal{L}_{D}(d) \subseteq \mathcal{L}_{\IntR(K,D)}(d)$ by Proposition \ref{Prop:FactoringOneInfinity}. Furthermore, if $d = \psi_1 \cdots \psi_n$ for some irreducible elements $\psi_1, \dots, \psi_n \in \IntR(K,D)$, we have $d = \psi_1(0) \cdots \psi_n(0)$, so we have $n \leq L_D(d) = \left\lfloor v(d) \right\rfloor$. Therefore, \[\mathcal{L}_{\IntR(K,D)}(d) = \left\{ 2, 3, \dots, \lfloor v(d) \rfloor - 1, \lfloor v(d) \rfloor \right\}.\] If $d \in D$ is nonzero and has the property that $v(d) > 3$, then we use Theorem \ref{Thm:ExtendSetOfLengths}. We see that $\min\{n \in {\mathbb N} \mid n\cdot 1 > v(d) \} \geq \lceil v(d) \rceil$. Thus, we have \[ \mathcal{L}_D(d) \cup \{2, \dots, \lceil v(d) \rceil - 2 \} \subseteq \mathcal{L}_{\IntR(K,D)}(d) \subseteq \{2, \dots, \left\lfloor v(d) \right\rfloor \}. \] Since Proposition \ref{Prop:FactoringOneInfinity} gives $\mathcal{L}_D(d)$, we have that \[ \mathcal{L}_{\IntR(K,D)}(d) = \left\{ 2, 3, \dots, \lfloor v(d) \rfloor - 1, \lfloor v(d) \rfloor \right\}.\] Now we note that for any nonzero, nonunit element $\varphi \in \IntR(K,D)$, the rational function $\varphi$ is an atom if and only if $1 \leq \alpha(\varphi) < 2$. If $\varphi \in \IntR(K,D)$ is such that $1 \leq \alpha(\varphi) < 2$, then there exists $a \in K$ such that $1 \leq v(\varphi(a)) < 2$.
Thus, whenever we write $\varphi = \psi_1\psi_2$ for some $\psi_1, \psi_2 \in \IntR(K,D)$, we have that $\varphi(a) = \psi_1(a)\psi_2(a)$. Since $\varphi(a)$ is an atom of $D$, we must have that $\psi_1(a)$ or $\psi_2(a)$ is a unit of $D$, which means that $\psi_1$ or $\psi_2$ is a unit of $\IntR(K,D)$ by Theorem \ref{Thm:Atomic}. On the other hand, suppose that $\varphi \in \IntR(K,D)$ is an atom. Then $\varphi(0)$ is an atom of $D$, which means that $1 \leq v(\varphi(0)) < 2$. Thus, $1 \leq \alpha(\varphi) < 2$. Suppose that $\varphi \in \IntR(K,D)$ is not an atom of $\IntR(K,D)$. Then $\lfloor \alpha(\varphi) \rfloor \geq 2$. We can write $t^{\lfloor \alpha(\varphi)\rfloor}$ as the product of $\ell$ atoms of $\IntR(K,D)$, where $\ell \in \{2, 3, \dots, \lfloor \alpha(\varphi)\rfloor \}$. Fix such an $\ell$. We have $t^{\lfloor \alpha(\varphi)\rfloor} = \psi_1\cdots \psi_\ell$, where $\psi_1, \dots, \psi_\ell$ are atoms of $\IntR(K,D)$. We also know that there exists $a \in K$ such that $\lfloor \alpha(\varphi) \rfloor \leq v(\varphi(a)) < \lfloor \alpha(\varphi) \rfloor + 1$. From the proof of Theorem \ref{Thm:ExtendSetOfLengths}, we can write the factorization in a way so that $v(\psi_1(a)) =1$. Then $1 \leq \alpha\left(\frac{\varphi}{\psi_2\cdots\psi_\ell}\right) \leq v(\varphi(a)) - (\lfloor \alpha(\varphi) \rfloor - 1) < 2$, meaning that $\frac{\varphi}{\psi_2\cdots\psi_\ell}$ is an atom of $\IntR(K,D)$.
Therefore, we can write $\varphi = \frac{\varphi}{\psi_2\cdots\psi_\ell} \cdot \psi_2 \cdots \psi_\ell$. Now, we obtain $\{2, 3, \dots, \lfloor \alpha(\varphi)\rfloor \} \subseteq \mathcal{L}_{\IntR(K,D)}(\varphi)$. Write $\varphi = \psi_1 \cdots \psi_n$ for atoms $\psi_1, \dots, \psi_n \in \IntR(K,D)$. There exists $a \in K$ such that $\lfloor \alpha(\varphi) \rfloor \leq v(\varphi(a)) < \lfloor \alpha(\varphi) \rfloor + 1$. Thus, $v(\varphi(a)) = v(\psi_1(a)) + \cdots + v(\psi_n(a))$ implies that $n \leq \lfloor \alpha(\varphi) \rfloor$ since $v(\psi_i(a)) \geq 1$ for each $i$. We can then conclude that $\mathcal{L}_{\IntR(K,D)}(\varphi) = \left\{2,3, \dots, \lfloor \alpha(\varphi)\rfloor \right\}$. Lastly, we calculate catenary degrees. Let $d \in D$ be a nonzero, nonunit element. As we have seen before, if $1 \leq v(d) < 2$, then $d$ is an atom of $\IntR(K,D)$ and therefore $c_{\IntR(K,D)}(d) = 0$. If $2 \leq v(d) < 3$, then $\mathcal{L}_{\IntR(K,D)}(d) = \{2\}$. Moreover, we can show that $t + t^\gamma$ and $t + t^{\gamma'}$ for distinct $\gamma, \gamma' \in M$ such that $1 < \gamma, \gamma' < 2$ are not associates in $\IntR(K,D)$. As a consequence, we have that $\abs{\mathsf{Z}_{\IntR(K,D)}(d)} > 1$, so $c_{\IntR(K,D)}(d) = 2$. Now we assume that $d \in D$ is a nonzero element such that $v(d) \geq 3$.
We want to show that there exists a $3$-chain from any factorization of $d$ in $\IntR(K,D)$ to the factorization of $d$ given by \[d = \underbrace{t\cdots t}_{\text{$\lfloor v(d) \rfloor-1$ times}} \cdot \frac{d}{t^{\lfloor v(d) \rfloor-1}}.\] Write $d = \psi_1 \cdots \psi_n$ for irreducible elements $\psi_1, \dots, \psi_n$ of $\IntR(K,D)$. If there do not exist distinct $i, j$ in $\{1, \dots, n\}$ such that $\psi_i$ and $\psi_j$ are both not associate to $t$, then the factorization $d = \psi_1 \cdots \psi_n$ is the same as $t \cdots t \cdot \frac{d}{t^{\lfloor v(d) \rfloor-1}}$ up to reordering and association. Now suppose that there are distinct $i, j$ in $\{1, \dots, n\}$ such that $\psi_i$ and $\psi_j$ are both not associate to $t$. Note that $\alpha(\psi_i\psi_j) \geq 2$. Thus, $\frac{\psi_i\psi_j}{t} \in \IntR(K,D)$. We have that $\alpha\left(\frac{\psi_i\psi_j}{t} \right) = \alpha(\psi_i\psi_j) - 1 \geq 1$. If $2 \leq \alpha(\psi_i\psi_j) < 3$, then $\frac{\psi_i\psi_j}{t}$ is irreducible. If $\alpha(\psi_i\psi_j) \geq 3$, then $2 \in \mathcal{L}_{\IntR(K,D)}\left(\frac{\psi_i\psi_j}{t}\right)$, so we can write $\frac{\psi_i\psi_j}{t} = \rho_1\rho_2$ for some irreducible elements $\rho_1, \rho_2 \in \IntR(K,D)$.
Now, we observe that \[ \psi_i\psi_j = \begin{cases} t \cdot \frac{\psi_i\psi_j}{t}, & \text{if $2 \leq \alpha(\psi_i\psi_j) < 3$},\\ t \cdot \rho_1 \cdot \rho_2, & \text{if $\alpha(\psi_i\psi_j) \geq 3$}. \end{cases} \] This means that there is a factorization of distance at most $3$ from $d = \psi_1 \cdots \psi_n$ such that there are strictly more factors associate to $t$. This shows that there is a $3$-chain between any two factorizations of $d$ in $\IntR(K,D)$, showing that $c_{\IntR(K,D)}(d) \leq 3$. Since $\abs{\mathcal{L}_{\IntR(K,D)}(d)} > 1$, we must have $c_{\IntR(K,D)}(d) = 3$, as the distance between two factorizations of different lengths is at least $3$. \end{proof} \end{document}
\begin{document} \begin{abstract} Recently, Naruse discovered a hook length formula for the number of standard Young tableaux of a skew shape. Morales, Pak and Panova found two $q$-analogs of Naruse's hook length formula over semistandard Young tableaux (SSYTs) and reverse plane partitions (RPPs). As an application of their formula, they expressed certain $q$-Euler numbers, which are generating functions for SSYTs and RPPs of a zigzag border strip, in terms of weighted Dyck paths. They found a determinantal formula for the generating function for SSYTs of a skew staircase shape and proposed two conjectures related to RPPs of the same shape. One conjecture is a determinantal formula for the number of \emph{pleasant diagrams} in terms of Schr\"oder paths and the other conjecture is a determinantal formula for the generating function for RPPs of a skew staircase shape in terms of $q$-Euler numbers. In this paper, we show that the results of Morales, Pak and Panova on the $q$-Euler numbers can be derived from previously known results due to Prodinger by manipulating continued fractions. These $q$-Euler numbers are naturally expressed as generating functions for alternating permutations with certain statistics involving \emph{maj}. It has been proved by Huber and Yee that these $q$-Euler numbers are generating functions for alternating permutations with certain statistics involving \emph{inv}. By modifying Foata's bijection we construct a bijection on alternating permutations which sends the statistics involving \emph{maj} to the statistic involving \emph{inv}. We also prove the aforementioned two conjectures of Morales, Pak and Panova. 
\end{abstract} \title{Reverse plane partitions of skew staircase shapes and $q$-Euler numbers} \tableofcontents \section{Introduction} The classical hook length formula due to Frame et al.~\cite{Frame1954} states that, for a partition $\lambda$ of $n$, the number $f^{\lambda}$ of standard Young tableaux of shape $\lambda$ is given by \[ f^{\lambda} = n! \prod_{u\in \lambda}\frac{1}{h(u)}, \] where $h(u)$ is the \emph{hook length} of the square $u$. See Section~\ref{sec:preliminaries} for all undefined terminology in the introduction. Recently, Naruse \cite{Naruse} generalized the hook length formula to standard Young tableaux of a skew shape as follows. For partitions $\mu\subset \lambda$, the number $f^{\lm}$ of standard Young tableaux of shape $\lm$ is given by \begin{equation} \label{eq:naruse} f^{\lm} = |\lm|! \sum_{D\in\EE(\lm)} \prod_{u\in \lambda\setminus D}\frac{1}{h(u)}, \end{equation} where $\EE(\lm)$ is the set of \emph{excited diagrams} of $\lm$. Morales et al.~\cite{MPP1} found two natural $q$-analogs of Naruse's hook length formula over \emph{semistandard Young tableaux} (SSYTs) and \emph{reverse plane partitions} (RPPs): \begin{equation} \label{eq:MPP1} \sum_{\pi\in\SSYT(\lm)}q^{|\pi|} =\sum_{D\in\EE(\lm)} \prod_{(i,j)\in\lambda\setminus D}\frac{q^{\lambda_j'-i}}{1-q^{h(i,j)}} \end{equation} and \begin{equation} \label{eq:MPP2} \sum_{\pi\in\RPP(\lm)}q^{|\pi|} = \sum_{P\in\PP(\lm)} \prod_{u\in P} \frac{q^{h(u)}}{1-q^{h(u)}}, \end{equation} where $\SSYT(\lm)$ is the set of SSYTs of shape $\lm$, $\RPP(\lm)$ is the set of RPPs of shape $\lm$ and $\PP(\lm)$ is the set of \emph{pleasant diagrams} of $\lm$. Let $\delta_n$ denote the staircase shape partition $(n-1,n-2,\dots,1)$. Morales et al. \cite{MPP2} found interesting connections between SSYTs and RPPs of shape $\lm$ and $q$-Euler numbers when $\lm$ is the skew staircase shape $\delta_{n+2k}/\delta_n$. To explain their results, we need some definitions.
Let $\Alt_{n}$ be the set of alternating permutations of $\{1,2,\dots,n\}$, where $\pi=\pi_1\pi_2\dots\pi_n$ is called \emph{alternating} if $\pi_1<\pi_2>\pi_3<\pi_4>\cdots$. The \emph{Euler number} $E_n$ is defined to be the cardinality of $\Alt_{n}$. It is well known that \[ \sum_{n\ge0} E_n\frac{x^n}{n!} = \sec x + \tan x. \] The even-indexed Euler numbers $E_{2n}$ are called the \emph{secant numbers} and the odd-indexed Euler numbers $E_{2n+1}$ are called the \emph{tangent numbers}. We refer the reader to \cite{Stanley2010} for many interesting properties of alternating permutations and Euler numbers. When $\lambda/\mu=\delta_{n+2}/\delta_n$, the right hand sides of \eqref{eq:MPP1} and \eqref{eq:MPP2} are closely related to the $q$-Euler numbers $E_{n}(q)$ and $E^*_{n}(q)$ defined by \begin{equation} \label{eq:En_maj} E_{n}(q)=\sum_{\pi\in\Alt_{n}} q^{\maj(\pi^{-1})} \qand E^*_{n}(q)=\sum_{\pi\in\Alt_{n}} q^{\maj(\kappa_{n}\pi^{-1})}, \end{equation} where $\kappa_{n}$ is the permutation \[ (1)(2,3)(4,5)\dots(2\flr{(n-1)/2}, 2\flr{(n-1)/2}+1) \] in cycle notation and $\maj(\pi)$ is the \emph{major index} of $\pi$. As corollaries of \eqref{eq:MPP1} and \eqref{eq:MPP2} for $\lambda/\mu=\delta_{n+2}/\delta_n$, Morales et al. \cite[Corollaries~1.7 and 1.8]{MPP2} obtained that \begin{equation} \label{eq:MPP_Euler} \frac{E_{2n+1}(q)}{(q;q)_{2n+1}} = \sum_{D\in\Dyck_{2n}} \prod_{(a,b)\in D} \frac{q^b}{1-q^{2b+1}} \end{equation} and \begin{equation} \label{eq:MPP_Euler*} \frac{E^*_{2n+1}(q)}{(q;q)_{2n+1}} = \sum_{D\in\Dyck_{2n}} q^{H(D)} \prod_{(a,b)\in D} \frac{1}{1-q^{2b+1}}, \end{equation} where $\Dyck_{2n}$ is the set of \emph{Dyck paths} of length $2n$, $H(D)=\sum_{(a,b)\in\HP(D)}(2b+1)$, $\HP(D)$ is the set of \emph{high peaks} in $D$ and \[ (a;q)_n = (1-a)(1-aq)\cdots(1-aq^{n-1}). \] The $q$-Euler numbers $E_n(q)$ and $E_n^*(q)$ have been studied by Prodinger \cite{Prodinger2008} in different contexts. 
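For example, when $n=3$ we have $\Alt_3 = \{132, 231\}$ and $\kappa_3 = 132$ in one-line notation. For $\pi = 132$ we get $\pi^{-1} = 132$ with $\maj(\pi^{-1}) = 2$ and $\kappa_3\pi^{-1} = 123$ with $\maj(\kappa_3\pi^{-1}) = 0$, while for $\pi = 231$ we get $\pi^{-1} = 312$ with $\maj(\pi^{-1}) = 1$ and $\kappa_3\pi^{-1} = 213$ with $\maj(\kappa_3\pi^{-1}) = 1$. Hence \[ E_3(q) = q + q^2 \qand E^*_3(q) = 1 + q, \] and both specialize to $E_3 = 2$ at $q=1$.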
In this paper we give different proofs of \eqref{eq:MPP_Euler} and \eqref{eq:MPP_Euler*} using Prodinger's results and continued fractions. In fact, the $q$-Euler numbers $E_n(q)$ were first introduced by Jackson \cite{Jackson1904} and they have the following combinatorial interpretation, see \cite{Stanley76, Stanley2010}: \begin{equation} \label{eq:En_inv} E_{n}(q)=\sum_{\pi\in\Alt_{n}} q^{\inv(\pi)}, \end{equation} where $\inv(\pi)$ is the number of \emph{inversions} of $\pi$. Observe that, by \eqref{eq:En_maj} and \eqref{eq:En_inv}, we have \begin{equation} \label{eq:inv_maj} \sum_{\pi\in\Alt_{n}} q^{\maj(\pi^{-1})} = \sum_{\pi\in\Alt_{n}} q^{\inv(\pi)}. \end{equation} Foata's bijection \cite{Foata68} gives a bijective proof of \eqref{eq:inv_maj}. The $q$-Euler numbers $E^*_n(q)$ have also been studied by Huber and Yee \cite{Huber2010}, who showed that \begin{equation} \label{eq:Huber} E_{2n+1}^*(q)=\sum_{\pi\in\Alt_{2n+1}} q^{\inv(\pi)-\ndes(\pi_e)}, \end{equation} where $\ndes(\pi_e)$ is the number of \emph{non-descents} in $\pi_e=\pi_2\pi_4\cdots\pi_{2n}$. Again, by \eqref{eq:En_maj} and \eqref{eq:Huber}, we have \begin{equation} \label{eq:inv_maj*} \sum_{\pi\in\Alt_{2n+1}} q^{\maj(\kappa_{2n+1}\pi^{-1})} = \sum_{\pi\in\Alt_{2n+1}} q^{\inv(\pi)-\ndes(\pi_e)}. \end{equation} In this paper we give a bijective proof of \eqref{eq:inv_maj*} by modifying Foata's bijection. There are similar $q$-Euler numbers studied in \cite{Prodinger2008, Huber2010}. We show that these $q$-Euler numbers have similar properties, see Theorems~\ref{thm:q-tan} and \ref{thm:q-sec}. Note that in \eqref{eq:MPP1} the generating function for SSYTs of shape $\lm$ is expressed in terms of the excited diagrams of $\lm$. Morales et al.
\cite[Corollaries~8.1~and~8.8]{MPP2} proved the following determinantal formulas for the number $e(\lm)$ of excited diagrams of $\lm$ and the generating function for SSYTs of shape $\lm$ for $\lm=\delta_{n+2k}/\delta_n$: \begin{equation} \label{eq:excited} e(\delta_{n+2k}/\delta_n) = \det(C_{n+i+j-2})_{i,j=1}^k = \prod_{1\le i<j\le n} \frac{2k+i+j-1}{i+j-1}, \end{equation} where $C_n=\frac{1}{n+1}\binom{2n}{n}$ is the Catalan number, and \begin{equation} \label{eq:ssyt} \sum_{\pi\in\SSYT(\delta_{n+2k}/\delta_n)} q^{|\pi|} =\det\left( \frac{E_{2n+2i+2j-3}(q)}{(q;q)_{2n+2i+2j-3}} \right)_{i,j=1}^k. \end{equation} Let $p(\lm)$ be the number of pleasant diagrams of $\lm$. Morales et al. \cite{MPP2} showed that $p(\delta_{n+2}/\delta_n)=\mathfrak{s}_n$, where $\mathfrak{s}_n=2^{n+2} s_n$ for the \emph{little Schr\"oder number} $s_n$. They proposed the following conjectures on $p(\lm)$ and the generating function for RPPs of shape $\lm$ for $\lm=\delta_{n+2k}/\delta_n$. \begin{thm} \cite[Conjecture 9.3]{MPP2} \label{conj:9.3} We have \[ p(\delta_{n+2k}/\delta_n) = 2^{\binom k2} \det(\mathfrak{s}_{n-2+i+j})_{i,j=1}^k. \] \end{thm} \begin{thm}\cite[Conjecture 9.6]{MPP2} \label{conj:9.6} We have \[ \sum_{\pi\in \RPP(\delta_{n+2k}/\delta_n)} q^{|\pi|} =q^{-\frac{k(k-1)(6n+8k-1)}{6}} \det\left( \frac{E^*_{2n+2i+2j-3}(q)}{(q;q)_{2n+2i+2j-3}} \right)_{i,j=1}^k. \] \end{thm} In this paper we prove their conjectures. We remark that the determinants in \eqref{eq:excited} and \eqref{eq:ssyt} can be expressed in terms of non-intersecting paths using the Lindstr\"om--Gessel--Viennot lemma \cite{Lindstrom, GesselViennot}. However, the determinants in Theorems~\ref{conj:9.3} and \ref{conj:9.6} are not of a form to which we can directly apply the Lindstr\"om--Gessel--Viennot lemma. Therefore, we need extra work to relate them to non-intersecting paths. To this end we first express the determinants in terms of weakly non-intersecting paths.
Then we resolve each pair of weakly non-intersecting paths into a pair of strictly non-intersecting paths one at a time. The remainder of this paper is organized as follows. In Section~\ref{sec:preliminaries} we provide necessary definitions and some known results. In Section~\ref{sec:q-euler} we give different proofs of \eqref{eq:MPP_Euler} and \eqref{eq:MPP_Euler*} using Prodinger's results, lattice paths and continued fractions. In Section~\ref{sec:foata} we state several properties of Prodinger's $q$-Euler numbers and give a bijective proof of \eqref{eq:inv_maj*} by constructing a Foata-type bijection. In Section~\ref{sec:MPP conjectures} we prove Theorems~\ref{conj:9.3} and \ref{conj:9.6}. In Section~\ref{sec:lascoux-pragacz-type}, we find a determinantal formula for $p(\lm)$ and the generating function for the reverse plane partitions of shape $\lm$ for a certain class of skew shapes $\lm$ including $\delta_{n+2k}/\delta_{n}$ and $\delta_{n+2k+1}/\delta_{n}$. \section{Preliminaries} \label{sec:preliminaries} In this section we give basic definitions and some known results. \subsection{Permutation statistics and alternating permutations} The set of integers is denoted by $\ZZ$ and the set of nonnegative integers is denoted by $\NN$. Let $[n]:=\{1,2,\dots,n\}$. A \emph{permutation} of $[n]$ is a bijection $\pi:[n]\to[n]$. The set of permutations of $[n]$ is denoted by $\Sym_n$. We also consider a permutation $\pi\in \Sym_n$ as the word $\pi=\pi_1\pi_2\dots\pi_n$ of integers, where $\pi_i=\pi(i)$ for $i\in[n]$. For $\pi,\sigma\in \Sym_n$, the product $\pi\sigma$ is defined as the usual composition of functions, i.e., $\pi\sigma(i) = \pi(\sigma(i))$ for $i\in[n]$. For a permutation $\pi=\pi_1\pi_2\dots\pi_n\in \Sym_n$ we define \[ \pi_o = \pi_1\pi_3\cdots \pi_{2\flr{(n-1)/2}+1} \qand \pi_e = \pi_2\pi_4\cdots \pi_{2\flr{n/2}}. \] Let $w=w_1w_2\dots w_n$ be a word of length $n$ consisting of integers. 
A \emph{descent} (resp.~\emph{ascent}) of $w$ is an integer $i\in[n-1]$ satisfying $w_i>w_{i+1}$ (resp.~$w_i<w_{i+1}$). A \emph{non-descent} (resp.~\emph{non-ascent}) of $w$ is an integer $i\in[n]$ that is not a descent (resp.~ascent). In other words, $i$ is a non-descent (resp.~non-ascent) of $w$ if and only if either $i$ is an ascent (resp.~descent) of $w$ or $i=n$. We denote by $\Des(w)$, $\Asc(w)$, $\NDes(w)$ and $\NAsc(w)$ the sets of descents, ascents, non-descents and non-ascents of $w$, respectively. The \emph{major index} $\maj(w)$ of $w$ is defined to be the sum of the descents of $w$. An \emph{inversion} of $w$ is a pair $(i,j)$ of integers $1\le i<j\le n$ satisfying $w_i>w_j$. The number of inversions of $w$ is denoted by $\inv(w)$. An \emph{alternating permutation} (resp.~\emph{reverse alternating permutation}) is a permutation $\pi=\pi_1\pi_2\dots\pi_n\in \Sym_n$ satisfying $\pi_1<\pi_2>\pi_3<\pi_4>\cdots$ (resp.~$\pi_1>\pi_2<\pi_3>\pi_4<\cdots$). The set of alternating permutations (resp.~reverse alternating permutations) in $\Sym_n$ is denoted by $\Alt_{n}$ (resp.~$\Ralt_{n}$). Note that, for $\pi\in \Sym_n$, we have $\pi\in \Alt_{n}$ (resp.~$\pi\in \Ralt_{n}$) if and only if $\Des(\pi)=\{2,4,6,\dots\}\cap [n-1]$ (resp.~$\Des(\pi)=\{1,3,5,\dots\}\cap [n-1]$). For $n\ge1$, we define \begin{align*} \eta_{2n} &= (1,2)(3,4)\cdots (2n-1,2n)\in \Sym_{2n},\\ \eta_{2n+1} &= (1,2)(3,4)\cdots (2n-1,2n)(2n+1) \in \Sym_{2n+1},\\ \kappa_{2n} &= (1)(2,3)(4,5)\cdots (2n-2,2n-1)(2n)\in \Sym_{2n},\\ \kappa_{2n+1} &= (1)(2,3)(4,5)\cdots (2n,2n+1) \in \Sym_{2n+1}. \end{align*} \subsection{$(P,\omega)$-partitions} $(P,\omega)$-partitions are generalizations of partitions introduced by Stanley \cite{Stanley72}. Here we recall basic properties of $(P,\omega)$-partitions. We refer the reader to \cite[Chapter 3]{EC1} for more details on the theory of $(P,\omega)$-partitions. Let $P$ be a poset with $n$ elements. A \emph{labeling} of $P$ is a bijection $\omega:P\to[n]$.
A pair $(P,\omega)$ of a poset $P$ and its labeling $\omega$ is called a \emph{labeled poset}. A \emph{$(P,\omega)$-partition} is a function $\sigma:P\to \{0,1,2,\dots\}$ satisfying \begin{itemize} \item $\sigma(x)\ge \sigma(y)$ if $x\le_P y$, \item $\sigma(x)>\sigma(y)$ if $x\le_P y$ and $\omega(x)> \omega(y)$. \end{itemize} We denote by $\PP(P,\omega)$ the set of $(P,\omega)$-partitions. The \emph{size} $|\sigma|$ of $\sigma\in\PP(P,\omega)$ is defined by \[ |\sigma| = \sum_{x\in P}\sigma(x). \] A \emph{linear extension} of $P$ is an arrangement $(t_1,t_2,\dots,t_n)$ of the elements in $P$ such that if $t_i<_P t_j$ then $i<j$. The \emph{Jordan-H\"older set} $\LL(P,\omega)$ of $P$ with labeling $\omega$ is the set of permutations of the form $\omega(t_1)\omega(t_2)\cdots \omega(t_n)$ for some linear extension $(t_1,t_2,\dots,t_n)$ of $P$. It is well known \cite[Theorem~3.15.7]{EC1} that the $(P,\omega)$-partition generating function can be written in terms of linear extensions: \begin{equation} \label{eq:lin_ext} \sum_{\sigma\in\PP(P,\omega)} q^{|\sigma|} =\frac{\sum_{\pi\in\LL(P,\omega)}q^{\maj(\pi)}}{(q;q)_n}. \end{equation} \subsection{Semistandard Young tableaux and reverse plane partitions} A \emph{partition} of $n$ is a weakly decreasing sequence $\lambda=(\lambda_1,\dots,\lambda_k)$ of positive integers summing to $n$. If $\lambda$ is a partition of $n$, we write $\lambda\vdash n$. We denote the staircase partition by $\delta_n = (n-1,n-2,\dots,1)$. For $a,b\in\{0,1\}$, we define \[ \delta_n^{(a,b)} =(n-1-a,n-2,n-3,\dots,2, 1-b). \] In other words, \begin{align*} \delta_n^{(0,0)} & =\delta_n = (n-1,n-2,n-3,\dots,1),\\ \delta_n^{(1,0)} & =(n-2,n-2,n-3,\dots,1),\\ \delta_n^{(0,1)} & =(n-1,n-2,\dots,2),\\ \delta_n^{(1,1)} & =(n-2,n-2,n-3,\dots,2). \end{align*} Let $\lambda=(\lambda_1,\dots,\lambda_n)$ be a partition. 
The \emph{Young diagram} of $\lambda$ is the left-justified array of squares in which there are $\lambda_i$ squares in row $i$, see Figure~\ref{fig:young}. We identify a partition $\lambda$ with its Young diagram. Considering $\lambda$ as the set of squares in the Young diagram, the notation $(i,j)\in \lambda$ means that the Young diagram of $\lambda$ has a square in the $i$th row and $j$th column. The \emph{transpose} $\lambda'$ of $\lambda$ is the partition $\lambda'=(\lambda'_1,\dots,\lambda_k')$, where $k=\lambda_1$ and $\lambda_i'$ is the number of squares in the $i$th column of the Young diagram of $\lambda$. For $(i,j)\in\lambda$, the \emph{hook length} of $(i,j)$ is defined by \[ h(i,j)=\lambda_i+\lambda_j'-i-j+1. \] For two partitions $\lambda$ and $\mu$, we write $\mu\subset\lambda$ if the Young diagram of $\mu$ is contained in that of $\lambda$. In this case we define the \emph{skew shape} $\lm$ to be the set-theoretic difference $\lambda-\mu$ of the Young diagrams, see Figure~\ref{fig:young}. The \emph{size} $|\lm|$ of a skew shape $\lm$ is the number of squares in $\lm$. A partition $\lambda$ is also considered as a skew shape by $\lambda=\lambda/\emptyset$. \begin{figure} \caption{The Young diagram of $(4,3,1)$ on the left and the skew shape $(4,3,1)/(2,1)$ on the right.} \label{fig:young} \end{figure} A \emph{semistandard Young tableau} (or SSYT for short) of shape $\lm$ is a filling of $\lm$ with nonnegative integers such that the integers are weakly increasing in each row and strictly increasing in each column. A \emph{reverse plane partition} (or RPP for short) of shape $\lm$ is a filling of $\lm$ with nonnegative integers such that the integers are weakly increasing in each row and each column. A \emph{strict tableau} (or ST for short) of shape $\lm$ is a filling of $\lm$ with nonnegative integers such that the integers are strictly increasing in each row and each column. See Figure~\ref{fig:ssyt}.
\begin{figure} \caption{A semistandard Young tableau of shape $(6,5,3)/(2,1)$ on the left, a reverse plane partition of shape $(6,5,3)/(2,1)$ in the middle and a strict tableau of shape $(6,5,3)/(2,1)$ on the right.} \label{fig:ssyt} \end{figure} We denote by $\SSYT(\lm)$, $\RPP(\lm)$ and $\ST(\lm)$ the sets of SSYTs, RPPs and STs of shape $\lm$, respectively. For an SSYT, RPP or ST $\pi$, the \emph{size} $|\pi|$ of $\pi$ is defined to be the sum of the entries in $\pi$. SSYTs, RPPs and STs can be considered as $(P,\omega)$-partitions as follows. Let $\lm$ be a skew shape. Denote by $P_\lm$ the poset whose elements are the squares in $\lm$ with relation $x\le y$ if $x$ is weakly southeast of $y$. We denote by $\omega_\lm^\SSYT$, $\omega_\lm^\RPP$ and $\omega_\lm^\ST$ the labelings of $P_\lm$ which are uniquely determined by the following properties, see Figure~\ref{fig:labelings}. \begin{itemize} \item For $(i,j),(i',j')\in\lm$, we have $\omega_\lm^\SSYT((i,j))\le \omega_\lm^\SSYT((i',j'))$ if and only if $j>j'$, or $j=j'$ and $i\le i'$. \item For $(i,j),(i',j')\in\lm$, we have $\omega_\lm^\RPP((i,j))\le \omega_\lm^\RPP((i',j'))$ if and only if $j>j'$, or $j=j'$ and $i\ge i'$. \item For $(i,j),(i',j')\in\lm$, we have $\omega_\lm^\ST((i,j))\le \omega_\lm^\ST((i',j'))$ if and only if $i<i'$, or $i=i'$ and $j\le j'$. \end{itemize} \begin{figure} \caption{The labelings $\omega_{\lm}^\SSYT$, $\omega_{\lm}^\RPP$ and $\omega_{\lm}^\ST$.} \label{fig:labelings} \end{figure} We can naturally identify the elements in $\PP(P_\lm,\omega_\lm^\SSYT)$ (resp.~$\PP(P_\lm,\omega_\lm^\RPP)$ and $\PP(P_\lm,\omega_\lm^\ST)$) with the SSYTs (resp.~RPPs and STs) of shape $\lm$.
Therefore, we have \begin{equation} \label{eq:SSYT->P} \sum_{\pi\in\SSYT(\lm)} q^{|\pi|} = \sum_{\pi\in\PP(P_\lm,\omega_\lm^\SSYT)} q^{|\pi|}, \end{equation} \begin{equation} \label{eq:RPP->P} \sum_{\pi\in\RPP(\lm)} q^{|\pi|} = \sum_{\pi\in\PP(P_\lm,\omega_\lm^\RPP)} q^{|\pi|} \end{equation} and \begin{equation} \label{eq:ST->P} \sum_{\pi\in\ST(\lm)} q^{|\pi|} = \sum_{\pi\in\PP(P_\lm,\omega_\lm^\ST)} q^{|\pi|}. \end{equation} It is easy to check that $\pi\in\LL(P_{\delta_{n+2}/\delta_{n}}, \omega_{\delta_{n+2}/\delta_{n}}^\SSYT)$ if and only if $\pi^{-1}\in\Alt_{2n+1}$. Thus we have \begin{equation} \label{eq:SSYT->Alt} \sum_{\pi\in\LL(P_{\delta_{n+2}/\delta_{n}}, \omega_{\delta_{n+2}/\delta_{n}}^\SSYT)} q^{\maj(\pi)} =\sum_{\pi\in\Alt_{2n+1}}q^{\maj(\pi^{-1})} = E_{2n+1}(q). \end{equation} Similarly, we have $\pi\in\LL(P_{\delta_{n+2}/\delta_{n}}, \omega_{\delta_{n+2}/\delta_{n}}^\RPP)$ if and only if $\pi=\kappa_{2n+1}\sigma^{-1}$ for some $\sigma\in\Alt_{2n+1}$. Thus, \begin{equation} \label{eq:RPP->Alt} \sum_{\pi\in\LL(P_{\delta_{n+2}/\delta_{n}}, \omega_{\delta_{n+2}/\delta_{n}}^\RPP)} q^{\maj(\pi)} =\sum_{\pi\in\Alt_{2n+1}}q^{\maj(\kappa_{2n+1}\pi^{-1})} = E^*_{2n+1}(q). \end{equation} By \eqref{eq:lin_ext}, \eqref{eq:SSYT->P} and \eqref{eq:SSYT->Alt}, we have \begin{equation} \label{eq:SSYT->E} \sum_{\pi\in\SSYT(\delta_{n+2}/\delta_{n})}q^{|\pi|} = \frac{E_{2n+1}(q)}{(q;q)_{2n+1}}. \end{equation} Similarly, by \eqref{eq:lin_ext}, \eqref{eq:RPP->P} and \eqref{eq:RPP->Alt}, we have \begin{equation} \label{eq:RPP->E} \sum_{\pi\in\RPP(\delta_{n+2}/\delta_n)}q^{|\pi|} = \frac{E^*_{2n+1}(q)}{(q;q)_{2n+1}}. \end{equation} Since every ST of shape $\delta_{n+2}/\delta_n$ is obtained from an RPP of shape $\delta_{n+2}/\delta_n$ by adding 1 to each entry in $\delta_{n+2}/\delta_{n+1}$, we have \begin{equation} \label{eq:RPP=ST} \sum_{\pi\in\ST(\delta_{n+2}/\delta_n)}q^{|\pi|} = \sum_{\pi\in\RPP(\delta_{n+2}/\delta_n)}q^{|\pi|+n+1}. 
\end{equation} \subsection{Excited diagrams and pleasant diagrams} Excited diagrams were introduced by Ikeda and Naruse \cite{IkedaNaruse09} to describe a combinatorial formula for the Schubert classes in the equivariant cohomology ring of the Grassmannians, and Naruse \cite{Naruse} derived a hook length formula for standard Young tableaux of a skew shape, given in \eqref{eq:naruse}, using equivariant cohomology theory and excited diagrams. In \cite{MPP1}, two $q$-analogs of Naruse's hook length formula are given, and one of them, namely \eqref{eq:MPP2}, is defined as a sum over pleasant diagrams. Here we give the definitions of excited diagrams and pleasant diagrams. Consider a skew shape $\lm$. Let $D$ be a set of squares in $\lambda$. Suppose that $(i,j)\in D$, $(i+1,j), (i,j+1), (i+1,j+1)\not \in D$ and $(i+1,j+1)\in\lambda$. Then replacing $(i,j)$ by $(i+1,j+1)$ in $D$ is called an \emph{excited move}. An \emph{excited diagram} of $\lm$ is a set of squares obtained from $\mu$ by a sequence of excited moves. We denote by $\EE(\lm)$ the set of excited diagrams of $\lm$. See Figure~\ref{fig:excited}. \begin{figure} \caption{There are $8$ excited diagrams of $(4,4,3,3)/(2,1)$. The squares in each excited diagram are shaded.} \label{fig:excited} \end{figure} A \emph{pleasant diagram} of $\lm$ is a subset of $\lambda\setminus D$ for some excited diagram $D$ of $\lm$. The set of pleasant diagrams of $\lm$ is denoted by $\PP(\lm)$. We note that the original definition of a pleasant diagram is different from the one given here; in \cite[Theorem~6.10]{MPP1} it is shown that the two definitions are equivalent. \subsection{Dyck paths and Schr\"oder paths} A \emph{nonnegative lattice path} is a set of points $(x_i,y_i)$ in $\ZZ\times\NN$ for $0\le i\le m$ satisfying $x_0<x_1<\dots<x_m$. The set $\{(x_i,y_i),(x_{i+1},y_{i+1})\}$ of two consecutive points is called a \emph{step}. 
The step $\{(x_i,y_i),(x_{i+1},y_{i+1})\}$ is called an \emph{up step} (resp.~\emph{down step} and \emph{horizontal step}) if $(x_{i+1},y_{i+1})-(x_{i},y_{i})$ is equal to $(1,1)$ (resp.~$(1,-1)$ and $(2,0)$). A \emph{(little) Schr\"oder path} from $(a,0)$ to $(b,0)$ is a nonnegative lattice path $\{(x_i,y_i): 0\le i\le m\}$ such that $(x_0,y_0)=(a,0)$, $(x_m,y_m)=(b,0)$, every step is an up step, a down step or a horizontal step, and there is no horizontal step $\{(t,0),(t+2,0)\}$ on the $x$-axis. The set of Schr\"oder paths from $(a,0)$ to $(b,0)$ is denoted by $\Sch((a,0)\to(b,0))$. We also denote $\Sch_{2n}=\Sch((-n,0)\to(n,0))$. A \emph{Dyck path} is a Schr\"oder path without horizontal steps. The set of Dyck paths from $(a,0)$ to $(b,0)$ is denoted by $\Dyck((a,0)\to(b,0))$. We also denote $\Dyck_{2n}=\Dyck((-n,0)\to(n,0))$. Let $D=\{(x_i,y_i):0\le i\le 2n\}\in\Dyck_{2n}$. Note that $x_i=-n+i$. A \emph{valley} (resp.~\emph{peak}) of $D$ is a point $(x_i,y_i)$ such that $0<i<2n$ and $y_{i-1}>y_{i}<y_{i+1}$ (resp.~$y_{i-1}<y_{i}>y_{i+1}$). A \emph{high peak} is a peak $(x_i,y_i)$ with $y_i\ge2$. \section{$q$-Euler numbers and continued fractions} \label{sec:q-euler} Prodinger \cite{Prodinger2000} considered the probability $\tau^{\ge\le}_n(q)$ that a random word $w_1\dots w_n$ of positive integers of length $n$ satisfies the relations $w_1\ge w_2 \le w_3 \ge w_4 \le \cdots$, where each $w_i$ is chosen independently at random with probability $\mathrm{Pr}(w_i=k)=q^{k-1}(1-q)$ for $0<q<1$. For other choices of inequalities, for example $\ge$ and $<$, the probability $\tau^{\ge <}_n(q)$ is defined similarly. 
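For orientation (a side computation, with our own helper name \texttt{count\_paths}): the number of Dyck paths in $\Dyck_{2n}$ is the Catalan number $C_n$, and the number of little Schr\"oder paths in $\Sch_{2n}$ is the little Schr\"oder number $s_n$, whose first values are $1,1,3,11,45$. A direct enumeration from the definitions confirms this for small $n$:

```python
from functools import lru_cache

def count_paths(n, allow_horizontal):
    """Count nonnegative lattice paths from (-n, 0) to (n, 0) built from
    up and down steps (and, optionally, horizontal steps of length 2),
    with horizontal steps on the x-axis forbidden."""
    @lru_cache(maxsize=None)
    def go(x, y):
        if x == n:
            return 1 if y == 0 else 0
        if x > n:                              # a horizontal step overshot
            return 0
        total = go(x + 1, y + 1)               # up step
        if y > 0:
            total += go(x + 1, y - 1)          # down step, staying nonnegative
            if allow_horizontal:
                total += go(x + 2, y)          # horizontal step, not on x-axis
        return total
    return go(-n, 0)

dyck = [count_paths(n, False) for n in range(5)]      # Catalan numbers
schroeder = [count_paths(n, True) for n in range(5)]  # little Schroeder numbers
```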
From the definition, one can easily see that \begin{equation} \label{eq:SSYT->tau} \sum_{\pi\in\SSYT(\delta_{n+2}/\delta_{n})}q^{|\pi|} = \frac{\tau^{\ge<}_{2n+1} (q)}{(1-q)^{2n+1}} , \end{equation} \begin{equation} \label{eq:RPP->tau} \sum_{\pi\in\RPP(\delta_{n+2}/\delta_{n})}q^{|\pi|} = \frac{\tau^{\ge \le}_{2n+1}(q)}{(1-q)^{2n+1}} \end{equation} and \begin{equation} \label{eq:ST->tau} \sum_{\pi\in\ST(\delta_{n+2}/\delta_{n})}q^{|\pi|} = \frac{\tau^{><}_{2n+1}(q)}{(1-q)^{2n+1}}. \end{equation} Observe that, by \eqref{eq:SSYT->E} and \eqref{eq:SSYT->tau}, \begin{equation} \label{eq:E->tau} \frac{E_{2n+1}(q)}{(q;q)_{2n+1}} = \frac{\tau^{\ge<}_{2n+1} (q)}{(1-q)^{2n+1}}. \end{equation} Similarly, by \eqref{eq:RPP->E} and \eqref{eq:RPP->tau}, \begin{equation} \label{eq:E*->tau} \frac{E^*_{2n+1}(q)}{(q;q)_{2n+1}} = \frac{\tau^{\ge \le}_{2n+1}(q)}{(1-q)^{2n+1}}. \end{equation} In this section we show \eqref{eq:MPP_Euler} and \eqref{eq:MPP_Euler*} using Prodinger's results. Prodinger \cite{Prodinger2008} found a continued fraction expression for the generating functions of $\tau^{\ge<}_{2n+1}(q)$ and $\tau^{\ge\le}_{2n+1}(q)$. Using Flajolet's theory \cite{Flajolet1980} of continued fractions we show that \eqref{eq:MPP_Euler} is equivalent to Prodinger's continued fraction. We prove \eqref{eq:MPP_Euler*} in a similar fashion. However, unlike \eqref{eq:MPP_Euler}, the weight of a Dyck path in \eqref{eq:MPP_Euler*} is not a usual weight used in Flajolet's theory. To remedy this we first express $E^*_{2n+1}(q)$ as a generating function for weighted Schr\"oder paths and change it to a generating function of weighted Dyck paths. We now recall Flajolet's theory \cite{Flajolet1980}, which interprets a continued fraction expansion as a generating function of weighted Dyck paths. Let $u=(u_0,u_1,\dots)$, $d=(d_1,d_2,\dots)$ and $w=(w_0,w_1,\dots)$ be sequences satisfying $w_i=u_id_{i+1}$ for $i\ge0$. 
For a Dyck path $P\in\Dyck_{2n}$, we define the weight $\wt_w(P)$ with respect to $w$ to be the product of the weights of the steps in $P$, where the weight of an up step $\{(i,j),(i+1,j+1)\}$ is $u_j$ and the weight of a down step $\{(i,j),(i+1,j-1)\}$ is $d_j$. Flajolet \cite{Flajolet1980} showed that the generating function for weighted Dyck paths has a continued fraction expansion: \begin{equation} \label{eq:flajolet} \sum_{n\ge0}\sum_{P\in\Dyck_{2n}} \wt_w(P) x^{2n} = \cfrac{1}{1-\cfrac{w_0 x^2}{1-\cfrac{w_1 x^2}{1-\cfrac{w_2 x^2}{1-\cdots}}}} . \end{equation} \subsection{The $q$-Euler numbers $E_{2n+1}(q)$} We give a new proof of \eqref{eq:MPP_Euler} using \eqref{eq:flajolet}. \begin{prop}\cite[Corollary 1.7]{MPP2}\label{prop:qEuler} We have \begin{equation}\label{eqn:qEuler} \frac{E_{2n+1}(q)}{(q;q)_{2n+1}}= \sum_{P\in \Dyck_{2n}}\prod_{(a,b)\in P}\frac{q^b}{1-q^{2b+1}}. \end{equation} \end{prop} \begin{proof} By the result of Prodinger \cite[Theorem 4.1]{Prodinger2000} (replacing $z$ by $x/(1-q)$), we have the following continued fraction expansion: $$\sum_{n\ge 0}E_{2n+1} (q) \frac{x^{2n+1}}{(q;q)_{2n+1}} = \frac{x}{1-q}\cdot \cfrac{1}{1-\cfrac{qx^2/(1-q)(1-q^3)}{1-\cfrac{q^3 x^2/(1-q^3)(1-q^5)}{1-\cfrac{q^5 x^2/(1-q^5)(1-q^7)}{1-\cdots}}}}. $$ If we set $u_i=d_i=\frac{q^i}{1-q^{2i+1}}$ and $w_i=u_id_{i+1}$, then \eqref{eq:flajolet} implies $$\frac{x}{1-q}\sum_{n\ge 0}\left(\sum_{P\in\Dyck_{2n}}\wt_w(P) \right)x^{2n}=\sum_{n\ge 0}E_{2n+1}(q)\frac{x^{2n+1}}{(q;q)_{2n+1}}.$$ By comparing the coefficients of $x^{2n+1}$, we get $$\frac{E_{2n+1}(q)}{(q;q)_{2n+1}}=\frac{1}{1-q}\sum_{P\in\Dyck_{2n}}\wt_w(P).$$ Rewriting the weight of a Dyck path on the right hand side of \eqref{eqn:qEuler}, which is defined on the points of the path, as a weight defined on the steps, we obtain $$\sum_{P\in \Dyck_{2n}}\prod_{(a,b)\in P}\frac{q^b}{1-q^{2b+1}} = \frac{1}{1-q}\sum_{P\in\Dyck_{2n}}\wt_w(P),$$ which finishes the proof. 
\end{proof} By applying a similar argument, we can express the $q$-secant number $E_{2n}(q)$ as a generating function of weighted Dyck paths whose weights are given in terms of principal specializations $s_{\lm}(q)=s_{\lm}(1,q,q^2,\dots)$ of skew Schur functions. \begin{prop}\label{prop:q_Euler_even} Let $w=(w_0,w_1,\dots)$ be the sequence given by \[ w_i = (-1)^i \Delta_{i-2}\Delta_{i+1}/\Delta_{i-1}\Delta_i, \] where $\Delta_i = 1$ for $i<0$, $\Delta_0 = s_{(0)} (q)$, $\Delta_2 = s_{(4)/(0)}(q)$, $\Delta_{2n}=s_{(3n+1,\dots, 2n+2)/(n-1,\dots, 1,0)}(q)$ and $\Delta_1 = s_{(2)}(q)$, $\Delta_3 = s_{(5,4)/(1,0)}(q)$, $\Delta_{2n-1}=s_{(3n-1,\dots, 2n)/(n-1,\dots, 1,0)}(q)$. Then $$ \frac{E_{2n}(q)}{(q;q)_{2n}}=\sum_{P\in \Dyck_{2n}}\wt_w(P). $$ \end{prop} \begin{proof} In \cite[Theorem 2.5]{Huber2010}, it was shown that $$\sum_{n=0}^\infty \frac{E_{2n}(q) x^{2n}}{(q;q)_{2n}}=\frac{1}{\cos _q (x)}=\left( \sum_{n = 0}^\infty (-1)^n\frac{x^{2n}}{(q;q)_{2n}} \right)^{-1}.$$ The following continued fraction expansion of $1/\cos _q (x)$ was obtained in \cite[Lemma 5.5]{Lascoux1988}: \begin{align*} 1/\cos _q (x) &= 1+ s_2 (q) x^2 + s_{32/1}(q) x^4 +\cdots + s_{(n+1,\dots, 2)/(n-1,\dots, 1,0)}(q) x^{2n}+\cdots\\ & =\cfrac{1}{1-\cfrac{\Delta_1 x^2}{1+\cfrac{\Delta_2/\Delta_1 x^2}{1-\cfrac{\Delta_3/\Delta_1\Delta_2 x^2}{ \cfrac{\cdots}{1+\cfrac{(-1)^n x^{2}\Delta_n \Delta_{n+3}/\Delta_{n+1}\Delta_{n+2}}{\cdots }}}}}}. \end{align*} By \eqref{eq:flajolet}, $1/\cos_q (x)$ can be interpreted as a generating function of weighted Dyck paths. This finishes the proof. \end{proof} \subsection{The $q$-Euler numbers $E_{2n+1}^*(q)$} By using Prodinger's result on $E_{2n+1}^\ast(q)$, we give a new proof of \eqref{eq:MPP_Euler*}. \begin{prop}\cite[Corollary 1.8]{MPP2} We have $$ \frac{E^*_{2n+1}(q)}{(q;q)_{2n+1}} = \sum_{P\in\Dyck_{2n}} q^{H(P)} \prod_{(a,b)\in P} \frac{1}{1-q^{2b+1}}. 
$$ \end{prop} \begin{proof} By Prodinger's result \cite[Theorem 2.2]{Prodinger2000}, we have $$ \sum_{n=0}^\infty\frac{E_{2n+1}^{\ast}(q)x^{2n+1}}{(q;q)_{2n+1}}= \sum_{n=0}^\infty\frac{(-1)^n q^{n(n+1)}x^{2n+1}}{(q;q)_{2n+1}}\left/ \sum_{n=0}^\infty\frac{(-1)^n q^{n(n-1)}x^{2n}}{(q;q)_{2n}}\right. . $$ We rewrite the right-hand side as \begin{align*} & \sum_{n=0}^\infty\frac{(-1)^n q^{n(n+1)}x^{2n+1}}{(q;q)_{2n+1}}\left/ \sum_{n=0}^\infty\frac{(-1)^n q^{n(n-1)}x^{2n}}{(q;q)_{2n}}\right. \\ &= \frac{x}{1-q}\cdot \frac{1}{ \displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n-1)}x^{2n}}{(q;q)_{2n}}\left/\sum_{n=0}^\infty\frac{(-1)^n q^{n(n+1)}x^{2n}}{(q^2;q)_{2n}}\right. .} \end{align*} We use Euler's approach \cite{Euler1788} of using Euclid's algorithm to obtain a continued fraction expansion, namely, we apply \begin{equation}\label{eqn:cf_exp} \frac{N}{D}=1+a +\frac{N-(1+a)D}{D} \end{equation} iteratively (cf.~\cite{Bhatnagar}, where this method is used to prove a number of continued fractions due to Ramanujan). By applying it once with $a=0$, we have \begin{align*} \dfrac{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n-1)}x^{2n}}{(q;q)_{2n}}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}x^{2n}}{(q^2;q)_{2n}}} &= 1+\frac{ \displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n-1)}x^{2n}}{(q;q)_{2n}}-\sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}x^{2n}}{(q^2;q)_{2n}}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}x^{2n}}{(q^2;q)_{2n}}}\\ &= 1+\frac{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n-1)}(1-q^{2n})x^{2n}}{(q;q)_{2n+1}}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}x^{2n}}{(q^2;q)_{2n}}}\\ &= 1-\frac{\dfrac{x^2}{(1-q)(1-q^3)}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}x^{2n}}{(q^2;q)_{2n}}\left/ \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}(1-q^{2n+2})x^{2n}}{(1-q^2)(q^4;q)_{2n}}\right.}. 
\end{align*} After applying \eqref{eqn:cf_exp} $k-1$ times (with appropriate $a$'s), we would have $$\sum_{n=0}^\infty\frac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k-1}x^{2n}}{(q^{2};q^2)_{k-1}(q^{2k};q)_{2n}} \left/ \sum_{n=0}^\infty\frac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k}x^{2n}}{(q^2;q^2)_{k}(q^{2k+2};q)_{2n}} \right. $$ in the last denominator. If we apply \eqref{eqn:cf_exp} one more time with $a=\frac{x^2}{1-q^{2k+1}}$, then we get \begin{align*} &\frac{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k-1}x^{2n}}{(q^2;q^2)_{k-1}(q^{2k};q)_{2n}}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k}x^{2n}}{(q^2;q^2)_{k}(q^{2k+2};q)_{2n}}}\\ &= 1+\frac{x^2}{1-q^{2k+1}} +\frac{\displaystyle \sum_{n=1}^\infty\dfrac{(-1)^n q^{n(n-1)}(q^{2n};q^2)_{k+1}x^{2n}}{(q^2;q^2)_{k-1}(q^{2k};q)_{2n+2}}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k}x^{2n}}{(q^2;q^2)_{k}(q^{2k+2};q)_{2n}}}\\ &= 1+\dfrac{x^2}{1-q^{2k+1}}-\frac{\dfrac{x^2}{(1-q^{2k+1})(1-q^{2k+3})}}{\displaystyle \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k}x^{2n}}{(q^2;q^2)_{k}(q^{2k+2};q)_{2n}}\left/ \sum_{n=0}^\infty\dfrac{(-1)^n q^{n(n+1)}(q^{2n+2};q^2)_{k+1}x^{2n}}{(q^2;q^2)_{k+1}(q^{2k+4};q)_{2n}}\right.}. \end{align*} Eventually we obtain \begin{equation}\label{eqn:cf_E*} \sum_{n=0}^\infty\frac{E_{2n+1}^{\ast}(q)x^{2n+1}}{(q;q)_{2n+1}} = \frac{x}{1-q} \cdot \cfrac{1}{1-\cfrac{\dfrac{x^2}{(1-q)(1-q^3)}}{1+\dfrac{x^2}{1-q^3}-\cfrac{\dfrac{x^2}{(1-q^3)(1-q^5)}}{1+\dfrac{x^2}{1-q^5}-\cfrac{\dfrac{x^2}{(1-q^5)(1-q^7)}}{\cdots}}}}. 
\end{equation} Similarly to \eqref{eq:flajolet}, the right hand side of \eqref{eqn:cf_E*} can be interpreted as a generating function of weighted little Schr\"{o}der paths with the following weights on each step: $$\begin{cases} a_i = 1 &\text{ for the up steps going from the level $y=i$ to the level $y=i+1$,}\\ c_i = \frac{-1}{1-q^{2i+1}} & \text{ for the horizontal steps on the level $y=i$,}\\ b_i = \frac{1}{(1-q^{2i-1})(1-q^{2i+1})}& \text{ for the down steps going from the level $y=i$ to the level $y=i-1$}. \end{cases}$$ Note that $c_0 = 0$, since we consider little Schr\"{o}der paths. To change the weighted little Schr\"{o}der paths to weighted Dyck paths, we combine the weight of a horizontal step with the weights of a pair of up and down steps, and define the result as a weight on a pair of up and down steps. In other words, we add the weights defined on the steps $$\includegraphics{./figures/steps.pdf}$$ which gives $$\frac{-1}{1-q^{2i+1}}+\frac{1}{(1-q^{2i+1})(1-q^{2i+3})} = \frac{q^{2i+3}}{(1-q^{2i+1})(1-q^{2i+3})}$$ and we define this as the weight of a pair of up and down steps $$\includegraphics{./figures/peak.pdf}.$$ This procedure converts the weights defined on little Schr\"{o}der paths to weights on Dyck paths, keeping the weights on the up and down steps. Then, considering the extra factor of $1/(1-q)$ in front of the right hand side of \eqref{eqn:cf_E*}, it is not hard to see that the weights defined on the Dyck paths are consistent with the weights used in the right hand side of \eqref{eq:MPP_Euler*}. 
\end{proof} We remark that, by applying Flajolet's theory, we can derive the continued fraction expansion which gives the generating function of weighted Dyck paths: $$ \sum_{n=0}^\infty\frac{E_{2n+1}^{\ast}(q)x^{2n+1}}{(q;q)_{2n+1}}=\frac{x}{1-q}\cdot \cfrac{1}{1-\cfrac{w_0 x^2}{1-\cfrac{w_1 x^2}{1-\cfrac{w_2 x^2}{1-\cdots}}}}, $$ where \begin{equation}\label{eqn:w_i} w_i =\begin{cases} \dfrac{q^i}{(1-q^{2i+1})(1-q^{2i+3})}, & \text{if $i$ is even,} \\ \dfrac{q^{3i+2}}{(1-q^{2i+1})(1-q^{2i+3})}, & \text{if $i$ is odd}. \end{cases} \end{equation} Note that this continued fraction expansion was conjectured in \cite[Conjecture 4.5]{Prodinger2000} (and proved later in \cite{Fulmek2000,Prodinger2008}), but it can be easily obtained by applying Euler's approach using the identity $$\frac{N}{D}=1+\frac{N - D}{D}$$ iteratively. Then, \eqref{eq:flajolet} implies that $$\sum_{n\ge 0}E_{2n+1}^{\ast}(q)\frac{x^{2n+1}}{(q;q)_{2n+1}}= \frac{x}{1-q}\sum_{n\ge 0}\left(\sum_{P\in\Dyck_{2n}}\wt_w(P) \right)x^{2n},$$ where $w=(w_0,w_1,\dots)$ is the sequence defined in \eqref{eqn:w_i}. \begin{cor} We have $$ \sum_{P\in \Dyck_{2n}}q^{H(P)}\prod_{(a,b)\in P}\frac{1}{1-q^{2b+1}}= \frac{1}{1-q}\sum_{P\in\Dyck_{2n}}\wt_w(P), $$ where $w=(w_0,w_1,\dots)$ is the sequence defined in \eqref{eqn:w_i}. \end{cor} \subsection{Other continued fractions} In this subsection we find continued fraction expressions for various $q$-tangent numbers. For each row $\tau^{\alpha\beta}_{2n+1}$, $\frac{(A,B)}{(C,D)}$ and $w_i$ in Table \ref{tab:cfs}, we assume the following form of continued fraction expansion $$ \sum_{n=0}^\infty\frac{\tau^{\alpha\beta}_{2n+1} (q)}{(1-q)^{2n+1}} \cdot \frac{x^{2n+1}}{(q;q)_{2n+1}}=\frac{\displaystyle\sum_{n=0}^{\infty}\dfrac{(-1)^n q^{A n^2 +Bn}x^{2n+1}}{(q;q)_{2n+1}}}{\displaystyle\sum_{n=0}^{\infty}\dfrac{(-1)^n q^{Cn^2+Dn}x^{2n}}{(q;q)_{2n}}}=\frac{x}{1-q}\cdot \cfrac{1}{1-\cfrac{w_0 x^2}{1-\cfrac{w_1 x^2}{1-\cfrac{w_2 x^2}{1-\cdots}}}}. 
$$ The proofs of these continued fraction expansions can be obtained by applying Euler's approach via Euclid's algorithm, and we omit the details. \begin{table}[h] \renewcommand{\arraystretch}{2} \begin{tabular}{ccc} \hline $\tau^{\alpha\beta}_{2n+1} (q)$& $\frac{(A,B)}{(C,D)}$& $w_i$\\ \hline $\tau^{\ge<}_{2n+1}(q)$ \text{ or } $\tau^{\le>}_{2n+1}(q)$ & $\frac{(0,0)}{(0,0)}$ \text{ or } $\frac{(2,1)}{(2,-1)}$ & $\dfrac{q^{2i+1}}{(1-q^{2i+1})(1-q^{2i+3})}$ \\ $\tau^{\ge\le}_{2n+1}(q)$ & $\frac{(1,1)}{(1,-1)}$ & $\begin{cases} \dfrac{q^i}{(1-q^{2i+1})(1-q^{2i+3})}, & \text{if $i$ is even,} \\ \dfrac{q^{3i+2}}{(1-q^{2i+1})(1-q^{2i+3})}, & \text{if $i$ is odd}. \end{cases}$\\ $\tau^{<>}_{2n+1}(q)$ & $\frac{(1,1)}{(1,0)}$ & $-$\\ $\tau^{><}_{2n+1}(q)$ & $\frac{(1,0)}{(1,0)}$ & $\begin{cases} \dfrac{q^{3i+2}}{(1-q^{2i+1})(1-q^{2i+3})}, & \text{if $i$ is even,} \\ \dfrac{q^{i}}{(1-q^{2i+1})(1-q^{2i+3})}, & \text{if $i$ is odd}. \end{cases}$ \\ $-$ & $\frac{(0,1)}{(0,1)}$ & $\dfrac{q^{2i+2}}{(1-q^{2i+1})(1-q^{2i+3})}$ \\ $-$ & $\frac{(2,0)}{(2,-2)}$ & $\dfrac{q^{2i}}{(1-q^{2i+1})(1-q^{2i+3})}$\\ $\tau^{\le\ge}_{2n+1}(q)$ & $\frac{(1,0)}{(1,-1)}$ & $-$\\ \hline \end{tabular} \caption{Continued fraction expressions for the generating functions of various $q$-tangent numbers.}\label{tab:cfs} \end{table} \section{Prodinger's $q$-Euler numbers and Foata-type bijections} \label{sec:foata} Recall from the previous section that we have \begin{equation} \label{eq:1} \frac{\tau^{\ge<}_{2n+1} (q)}{(1-q)^{2n+1}} = \sum_{\pi\in\SSYT(\delta_{n+2}/\delta_{n})}q^{|\pi|} = \frac{\sum_{\pi\in\Alt_{2n+1}} q^{\maj(\pi^{-1})} }{(q;q)_{2n+1}} \end{equation} and \begin{equation} \label{eq:2} \frac{\tau^{\ge \le}_{2n+1}(q)}{(1-q)^{2n+1}} = \sum_{\pi\in\RPP(\delta_{n+2}/\delta_{n})}q^{|\pi|} = \frac{\sum_{\pi\in\Alt_{2n+1}} q^{\maj(\kappa_{2n+1}\pi^{-1})}}{(q;q)_{2n+1}}. 
\end{equation} Using recurrence relations and generating functions, Huber and Yee \cite{Huber2010} showed that \begin{equation} \label{eq:3} \frac{\tau^{\ge<}_{2n+1} (q)}{(1-q)^{2n+1}} = \frac{\sum_{\pi\in\Alt_{2n+1}} q^{\inv(\pi^{-1})} }{(q;q)_{2n+1}} \end{equation} and \begin{equation} \label{eq:4} \frac{\tau^{\ge\le}_{2n+1} (q)}{(1-q)^{2n+1}} = \frac{\sum_{\pi\in\Alt_{2n+1}} q^{\inv(\pi^{-1})-\ndes(\pi_e)} }{(q;q)_{2n+1}}. \end{equation} Observe that \eqref{eq:2} and \eqref{eq:4} imply \begin{equation} \label{eq:5} \sum_{\pi\in\Alt_{2n+1}} q^{\maj(\kappa_{2n+1}\pi^{-1})} =\sum_{\pi\in\Alt_{2n+1}} q^{\inv(\pi^{-1})-\ndes(\pi_e)}. \end{equation} In Section~\ref{sec:prodingers-q-euler} we explore various versions of $q$-Euler numbers considered by Prodinger \cite{Prodinger2000} and show that they satisfy identities similar to \eqref{eq:1}, \eqref{eq:2}, \eqref{eq:3} and \eqref{eq:4}. In Section~\ref{sec:foata-type-bijection} we give a bijective proof of \eqref{eq:5} by finding a Foata-type bijection. In Section~\ref{sec:more_foata} we give various Foata-type bijections which imply identities similar to \eqref{eq:5}. \subsection{Prodinger's $q$-Euler numbers} \label{sec:prodingers-q-euler} Prodinger \cite{Prodinger2000} showed that the generating function for $\tau^{\alpha\beta}_{n} (q)$ for any choice of alternating inequalities $\alpha$ and $\beta$, i.e., \[ (\alpha,\beta)\in \{(\ge,\le), (\ge,<),(>,\le),(>,<),(\le,\ge),(\le,>),(<,\ge),(<,>)\}, \] has a nice expression as a quotient of series. Observe that, by reversing a word of length $2n+1$, we have $\tau_{2n+1}^{\ge<}(q) = \tau_{2n+1}^{>\le}(q)$ and $\tau_{2n+1}^{\le>}(q) = \tau_{2n+1}^{<\ge}(q)$. Also, by reversing a word of length $2n$, we have $\tau_{2n}^{\ge\le}(q) = \tau_{2n}^{\le\ge}(q)$, $\tau_{2n}^{\ge<}(q) = \tau_{2n}^{\le>}(q)$, $\tau_{2n}^{>\le}(q) = \tau_{2n}^{<\ge}(q)$ and $\tau_{2n}^{><}(q) = \tau_{2n}^{<>}(q)$. 
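These reversal symmetries can be checked by brute force: since $\mathrm{Pr}(w_i=k)=q^{k-1}(1-q)$, each $\tau^{\alpha\beta}_n(q)$ is determined by the number of pattern words of each size, and reversing a word preserves both its size and any bound on its letters. A small sketch over the bounded alphabet $\{1,\dots,m\}$ (illustrative only; the helper name is ours):

```python
from itertools import product
from collections import Counter

RELS = {'>=': lambda a, b: a >= b, '>': lambda a, b: a > b,
        '<=': lambda a, b: a <= b, '<': lambda a, b: a < b}

def pattern_sizes(n, m, alpha, beta):
    """Multiset of sizes |w| = w_1 + ... + w_n over words w in {1,...,m}^n
    satisfying w_1 alpha w_2 beta w_3 alpha w_4 ..."""
    rel_a, rel_b = RELS[alpha], RELS[beta]
    sizes = Counter()
    for w in product(range(1, m + 1), repeat=n):
        if all((rel_a if i % 2 == 0 else rel_b)(w[i], w[i + 1])
               for i in range(n - 1)):
            sizes[sum(w)] += 1
    return sizes

# Reversing an odd-length word swaps the pattern (>=, <) with (>, <=):
checks_odd = all(pattern_sizes(n, 4, '>=', '<') == pattern_sizes(n, 4, '>', '<=')
                 for n in (3, 5))
# Reversing an even-length word swaps (>=, <=) with (<=, >=):
checks_even = all(pattern_sizes(n, 4, '>=', '<=') == pattern_sizes(n, 4, '<=', '>=')
                  for n in (2, 4))
```

The same check applies verbatim to the other pairs of patterns listed above.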
Therefore, we only need to consider $6$ $q$-tangent numbers $\tau^{\alpha\beta}_{2n+1}$ and $4$ $q$-secant numbers $\tau^{\alpha\beta}_{2n}$. The following lemma is straightforward to check. \begin{lem}\label{lem:reverse_complement} For $\pi=\pi_1\pi_2\dots\pi_n\in \Sym_n$, let \[ \phi(\pi) = (n+1-\pi_n)(n+1-\pi_{n-1})\cdots(n+1-\pi_1). \] Then the map $\phi$ induces a bijection $\phi:\Alt_{2n+1}\to\Ralt_{2n+1}$ and a bijection $\phi:\Alt_{2n}\to\Alt_{2n}$. Moreover, for $\pi\in\Alt_{2n+1}$ and $\sigma\in\Alt_{2n}$, we have \[ (\inv(\phi(\pi)),\des(\phi(\pi)_o),\des(\phi(\pi)_e)) = (\inv(\pi),\des(\pi_o),\des(\pi_e)) \] and \[ (\inv(\phi(\sigma)),\des(\phi(\sigma)_o)) = (\inv(\sigma),\des(\sigma_e)). \] \end{lem} Now we state a unifying theorem for Prodinger's $q$-tangent numbers. \begin{thm}\label{thm:q-tan} For each row $\tau_{2n+1}^{\alpha\beta}(q), \TAB, M, I, (A,B)/(C,D)$ in Table~\ref{tab:q-tan}, we have \[ f_{2n+1}:=\frac{\tau_{2n+1}^{\alpha\beta}(q)}{(1-q)^{2n+1}}=\sum_{\pi\in \TAB}q^{|\pi|} = \frac{M}{(q;q)_{2n+1}} = \frac{I}{(q;q)_{2n+1}} , \] whose generating function is \[ \sum_{n\ge0} f_{2n+1} x^{2n+1} = \frac{\sum_{n\ge0} (-1)^nq^{An^2+Bn}x^{2n+1}/(q;q)_{2n+1}} {\sum_{n\ge0}(-1)^nq^{Cn^2+Dn}x^{2n}/(q;q)_{2n}}. \] \end{thm} \begin{proof} This is obtained by combining known results. The connection between $\tau^{\alpha\beta}_{2n+1}(q)$ and $\TAB$ is obvious from their definitions. The connection between $\TAB$ and $M$ follows from the $P$-partition theory \eqref{eq:lin_ext}. The connection between $\tau^{\alpha\beta}_{2n+1}(q)$ and $(A,B)/(C,D)$ is due to Prodinger \cite{Prodinger2000}. The connection between $I$ and $(A,B)/(C,D)$ is due to Huber and Yee \cite{Huber2010}. See Figure~\ref{fig:diagram} for an illustration of these connections. We note that Huber and Yee \cite{Huber2010} only considered $\Alt_{2n+1}$ for $I$, but Lemma~\ref{lem:reverse_complement} implies that $\Alt_{2n+1}$ can be replaced by $\Ralt_{2n+1}$. 
\end{proof} \begin{figure} \caption{The connections in Theorems~\ref{thm:q-tan} and \ref{thm:q-sec}.} \label{fig:diagram} \end{figure} \begin{table} \renewcommand{\arraystretch}{2} \centering \begin{tabular}{c l l l c} \hline $\tau_{2n+1}^{\alpha\beta}(q)$ & \qquad$\TAB$ & \qquad\qquad$M$ & \qquad\qquad$I$ & $\frac{(A,B)}{(C,D)}$\\ \hline $\tau^{\ge<}_{2n+1}(q)$ & $\SSYT(\delta_{n+2}/\delta_n)$ & $\displaystyle\sum_{\pi\in\Alt_{2n+1}}q^{\maj(\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt^*_{2n+1}}q^{\inv(\pi)}$ & $\frac{(0,0)}{(0,0)}$ \\ $\tau^{\ge\le}_{2n+1}(q)$ & $\RPP(\delta_{n+2}/\delta_n)$ & $\displaystyle\sum_{\pi\in\Alt_{2n+1}}q^{\maj(\kappa_{2n+1}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt^*_{2n+1}}q^{\inv(\pi)-\ndes(\pi_e)}$ & $\frac{(1,1)}{(1,-1)}$ \\ $\tau^{><}_{2n+1}(q)$ & $\ST(\delta_{n+2}/\delta_n)$ & $\displaystyle\sum_{\pi\in\Alt_{2n+1}}q^{\maj(\eta_{2n+1}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt^*_{2n+1}}q^{\inv(\pi)+\nasc(\pi_e)}$ & $\frac{(1,0)}{(1,0)}$ \\ $\tau^{<\ge}_{2n+1}(q)$ & $\SSYT(\delta_{n+3}^{(1,1)}/\delta_{n+1})$ & $\displaystyle\sum_{\pi\in\Ralt_{2n+1}}q^{\maj(\pi^{-1})}$& $\displaystyle \sum_{\pi\in\Alt^*_{2n+1}}q^{\inv(\pi)}$ & $\frac{(0,0)}{(0,0)}$ \\ $\tau^{\le\ge}_{2n+1}(q)$ & $\RPP(\delta_{n+3}^{(1,1)}/\delta_{n+1})$ & $\displaystyle\sum_{\pi\in\Ralt_{2n+1}}q^{\maj(\eta_{2n+1}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt^*_{2n+1}}q^{\inv(\pi)-\asc(\pi_o)}$ & $\frac{(1,0)}{(1,-1)}$ \\ $\tau^{<>}_{2n+1}(q)$ & $\ST(\delta_{n+3}^{(1,1)}/\delta_{n+1})$ & $\displaystyle\sum_{\pi\in\Ralt_{2n+1}}q^{\maj(\kappa_{2n+1}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt^*_{2n+1}}q^{\inv(\pi)+\des(\pi_o)}$ & $\frac{(1,1)}{(1,0)}$ \\ \hline \end{tabular} \caption{Interpretations for Prodinger's $q$-tangent numbers. The notation $\Alt^*_{2n+1}$ means it can be either $\Alt_{2n+1}$ or $\Ralt_{2n+1}$.} \label{tab:q-tan} \end{table} By the same arguments, we obtain a unifying theorem for Prodinger's $q$-secant numbers. 
Note that we consider $6$ $q$-secant numbers even though two pairs of them are equal, because their natural combinatorial interpretations are different. \begin{thm}\label{thm:q-sec} For each row $\tau_{2n}^{\alpha\beta}(q), \TAB, M, I, 1/(C,D)$ in Table~\ref{tab:q-sec}, we have \[ f_{2n}:= \frac{\tau_{2n}^{\alpha\beta}(q)}{(1-q)^{2n}} = \sum_{\pi\in \TAB}q^{|\pi|} = \frac{M}{(q;q)_{2n}} = \frac{I}{(q;q)_{2n}}, \] whose generating function is \[ \sum_{n\ge0} f_{2n} x^{2n} = \frac{1}{\sum_{n\ge0}(-1)^nq^{Cn^2+Dn}x^{2n}/(q;q)_{2n}}. \] \end{thm} \begin{table} \renewcommand{\arraystretch}{2} \centering \begin{tabular}{c l l l c} \hline $\tau_{2n}^{\alpha\beta}(q)$ & \qquad $\TAB$ & \qquad \qquad$M$ & \qquad\qquad $I$ & $\frac{1}{(C,D)}$\\ \hline $\tau^{\ge<}_{2n}(q)$ & $\SSYT(\delta^{(0,1)}_{n+2}/\delta_n)$ & $\displaystyle\sum_{\pi\in\Alt_{2n}}q^{\maj(\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt_{2n}}q^{\inv(\pi)}$ & $\frac{1}{(0,0)}$ \\ $\tau^{\ge\le}_{2n}(q)$ & $\RPP(\delta^{(0,1)}_{n+2}/\delta_n)$ & $\displaystyle\sum_{\pi\in\Alt_{2n}}q^{\maj(\kappa_{2n}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt_{2n}}q^{\inv(\pi)-\asc(\pi_*)}$ & $\frac{1}{(1,-1)}$ \\ $\tau^{><}_{2n}(q)$ & $\ST(\delta^{(0,1)}_{n+2}/\delta_n)$ & $\displaystyle\sum_{\pi\in\Alt_{2n}}q^{\maj(\eta_{2n}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Alt_{2n}}q^{\inv(\pi)+\nasc(\pi_*)}$ & $\frac{1}{(1,0)}$ \\ $\tau^{<\ge}_{2n}(q)$ & $\SSYT(\delta_{n+2}^{(1,0)}/\delta_{n})$ & $\displaystyle\sum_{\pi\in\Ralt_{2n}}q^{\maj(\pi^{-1})}$& $\displaystyle \sum_{\pi\in\Ralt_{2n}}q^{\inv(\pi)}$ & $\frac{1}{(2,-1)}$ \\ $\tau^{\le\ge}_{2n}(q)$ & $\RPP(\delta_{n+2}^{(1,0)}/\delta_{n})$ & $\displaystyle\sum_{\pi\in\Ralt_{2n}}q^{\maj(\eta_{2n}\pi^{-1})}$ & $\displaystyle \sum_{\pi\in\Ralt_{2n}}q^{\inv(\pi)-\ndes(\pi_*)}$ & $\frac{1}{(1,-1)}$ \\ $\tau^{<>}_{2n}(q)$ & $\ST(\delta_{n+2}^{(1,0)}/\delta_{n})$ & $\displaystyle\sum_{\pi\in\Ralt_{2n}}q^{\maj(\kappa_{2n}\pi^{-1})}$ & $\displaystyle 
\sum_{\pi\in\Ralt_{2n}}q^{\inv(\pi)+\des(\pi_*)}$ & $\frac{1}{(1,0)}$ \\ \hline \end{tabular} \caption{Interpretations for Prodinger's $q$-secant numbers. The notation $\pi_*$ means it can be either $\pi_o$ or $\pi_e$.} \label{tab:q-sec} \end{table} It is natural to ask for a direct bijective proof of $M=I$ in Theorems~\ref{thm:q-tan} and \ref{thm:q-sec}. In the next two subsections we accomplish this by finding Foata-type bijections. \begin{remark} The last two expressions for $I$ in Table~\ref{tab:q-sec} have not been considered in \cite{Huber2010}. Therefore, in order to complete the proof of Theorem~\ref{thm:q-sec} we need to find a connection between these expressions and other known results. In Section~\ref{sec:more_foata} we find a bijective proof of $M=I$ for all cases in Tables~\ref{tab:q-tan} and \ref{tab:q-sec}. Note that in Table~\ref{tab:q-sec} the second and fifth rows are equal and the third and sixth rows are equal, which imply \[ \sum_{\pi\in\Alt_{2n}}q^{\inv(\pi)-\asc(\pi_*)} = \sum_{\pi\in\Ralt_{2n}}q^{\inv(\pi)-\ndes(\pi_*)} \] and \[ \sum_{\pi\in\Alt_{2n}}q^{\inv(\pi)+\nasc(\pi_*)} = \sum_{\pi\in\Ralt_{2n}}q^{\inv(\pi)+\des(\pi_*)}. \] It would be interesting to find a direct bijective proof of the above two identities. \end{remark} \subsection{Foata-type bijection for $E^*_{2n+1}(q)$} \label{sec:foata-type-bijection} We denote by $\Alt^{-1}_n$ the set of permutations $\pi\in \Sym_n$ with $\pi^{-1}\in \Alt_n$. Note that $\pi\in\Alt^{-1}_n$ if and only if the relative position of $2i-1$ and $2i$ in $\pi$, for all $1\le i\le \flr{n/2}$, is \[ \pi= \cdots (2i-1) \cdots (2i) \cdots \] and the relative position of $2i$ and $2i+1$ in $\pi$, for all $1\le i\le \flr{(n-1)/2}$, is \[ \pi= \cdots (2i+1) \cdots (2i) \cdots. \] Let $\prec$ be a total order on $\NN$. For a word $w_1\dots w_k$ consisting of distinct positive integers, we define $f(w_1\dots w_k,\prec)$ as follows. 
Let $b_0,b_1,\dots,b_m$ be the integers such that \begin{itemize} \item $0=b_0< b_1<\dots<b_m=k-1$, \item if $w_{k-1}\prec w_k$, then $w_{b_1},\dots,w_{b_m}\prec w_k \prec w_j$ for all $j\in[k-1]\setminus\{b_1,\dots,b_m\}$, and \item if $w_{k}\prec w_{k-1}$, then $w_j \prec w_k \prec w_{b_1},\dots,w_{b_m} $ for all $j\in[k-1]\setminus\{b_1,\dots,b_m\}$. \end{itemize} For $1\le j\le m$, let $B_j=w_{b_{j-1}+1}\dots w_{b_j}$. We denote \[ B(w_1\dots w_k,\prec) = (B_1,B_2,\dots, B_m). \] Note that $w_1\dots w_{k-1}w_k$ is the concatenation $B_1B_2\dots B_m w_k$. Let $B_j'=w_{b_j}w_{b_{j-1}+1}\dots w_{b_{j}-1}$. Then we define \[ f(w_1\dots w_k,\prec)=B'_1B'_2\dots B'_m w_k. \] For example, \[ f(496318725,<) =f(|4|963|1|872|5,<)= 439612875, \] where $|4|963|1|872|5$ means that $B(496318725,<)=(4,963,1,872)$. For a permutation $\pi=\pi_1\dots \pi_n\in \Sym_n$ and a total order $\prec$ on $\NN$, we define $F(\pi,\prec)$ as follows. Let $w^{(1)}=\pi_1$. For $2\le k\le n$, let $w^{(k)}=f(w^{(k-1)}\pi_k,\prec)$. Finally $F(\pi,\prec)=w^{(n)}$. Note that for the natural order $1<2<\cdots$, the map $F(\pi,<)$ is the same as the Foata map. For $i\ge1$, we define $<_i$ to be the total order on $\NN$ obtained from the natural ordering by reversing the order of $i$ and $i+1$, i.e., for $a<b$ with $(a,b)\ne(i,i+1)$, we have $a<_ib$ and $i+1<_i i$. For $\pi\in \Alt^{-1}_{2n+1}$, we define $\FA(\pi)$ as follows. First, we set $w^{(1)}=\pi_1$. For $2\le k\le 2n+1$, there are two cases: \begin{itemize} \item If $\pi_k=2i$ and $\pi_1\dots\pi_{k-1}$ does not have $2i+2$, then $w^{(k)}=f(w^{(k-1)}\pi_k, <_{2i})$. \item Otherwise, $w^{(k)}=f(w^{(k-1)}\pi_k, <)$. \end{itemize} Then $\FA(\pi)$ is defined to be $w^{(2n+1)}$. \begin{exam} Let $\pi=317295486 \in\Alt^{-1}_9$. 
Then \begin{align*} w^{(1)} &= 3, \\ w^{(2)} &= f(|3|1,<)=31, \\ w^{(3)} &= f(|3|1|7,<) =317 , \\ w^{(4)} & =f(|\mathbf{3}17|\mathbf{2},<_2)=7312, &\mbox{(There is no 4 before 2.)}\\ w^{(5)} &= f(|7|3|1|2|9,<)=73129,\\ w^{(6)} &= f(|7|3129|5,<)=793125,\\ w^{(7)} &= f(|793|1|2|\mathbf{5}|\mathbf{4},<_4)=3791254, &\mbox{(There is no 6 before 4.)}\\ w^{(8)} &= f(|3|7|\mathbf{9}|1|2|5|4|\mathbf{8},<_8)=37912548, &\mbox{(There is no 10 before 8.)}\\ w^{(9)} &= f(|37|9|12548|6,<)=739812546, &\mbox{(There is 8 before 6.)} \end{align*} where the two integers whose order is reversed are written in bold face. Thus $\FA(317295486)=739812546$. \end{exam} \begin{thm}\label{thm:foata} The map $\FA$ induces a bijection $\FA:\Alt^{-1}_{2n+1}\to \Alt^{-1}_{2n+1}$. Moreover, if $\pi\in \Alt^{-1}_{2n+1}$ and $\sigma=\FA(\pi)$, then \[ \maj(\kappa_{2n+1} \pi) = \inv(\sigma)-\ndes((\sigma^{-1})_e). \] \end{thm} \begin{proof} One can easily see that each step in the construction of $\FA$ is invertible. Thus $\FA:\Sym_n\to \Sym_n$ is a bijection. By the construction of $\FA$, the relative positions of consecutive integers never change. Thus $\Des(\pi^{-1})=\Des(\FA(\pi)^{-1})$, which implies that $\FA:\Alt^{-1}_{2n+1}\to \Alt^{-1}_{2n+1}$ is also a bijection. For the proof of the second statement, let $\pi=\pi_1\dots\pi_{2n+1}\in\Alt_{2n+1}^{-1}$ and $w^{(k)}$ the word in the construction of $\FA(\pi)$ for $k\in[2n+1]$. We claim that for every $k\in[2n+1]$, \begin{equation} \label{eq:claim} \maj(\kappa\pi^{(k)}) = \inv(w^{(k)})-t(w^{(k)}), \end{equation} where $\pi^{(k)}=\pi_1\dots\pi_k$, $\kappa\pi^{(k)}$ is the word obtained from $\pi^{(k)}$ by interchanging $2i$ and $2i+1$ for all $i$ for which $\pi^{(k)}$ contains both $2i$ and $2i+1$, and $t(w^{(k)})$ is the number of even integers $2i$ for which $w^{(k)}$ contains $2i$ and there is no $2i+2$ before $2i$ in $w^{(k)}$. Note that if $w$ is a permutation then $t(w)=\ndes(w^{-1}_e)$. 
Thus \eqref{eq:claim} for $k=2n+1$ implies the second statement of the theorem.

We prove \eqref{eq:claim} by induction on $k$. It is true for $k=0$ since both sides are equal to $0$. Assuming the claim \eqref{eq:claim} for $k-1\ge 0$, we consider the claim for $k$. There are several cases as follows.
\begin{description}
\item[Case 1] $\pi_k=2i+1$ for some $i$. Then $w^{(k)}=f(w^{(k-1)}\pi_k, <)$. Since $\pi\in\Alt_{2n+1}^{-1}$, $\pi^{(k)}$ does not have $2i$ and we have $\maj(\kappa\pi^{(k)})=\maj((\kappa\pi^{(k-1)})\pi_k)$.
\begin{description}
\item[Subcase 1-(a)] $\pi_{k-1}>2i+1$. Since the last element of $\kappa\pi^{(k-1)}$ remains greater than $\pi_k$, we have $\maj(\kappa\pi^{(k)})=\maj(\kappa\pi^{(k-1)})+k-1$. Let
\[
B(w^{(k-1)}\pi_k,<)=(B_1,\dots,B_\ell).
\]
Since the last element of $w^{(k-1)}$ is $\pi_{k-1}>\pi_k$, for all $j\in[\ell]$, the last element of $B_j$ is the only element in $B_j$ which is greater than $\pi_k$. For every element $x$ in $B_j$, if it is the last element in $B_j$, then $x$ and $\pi_k$ form a new inversion in $w^{(k)}$, and otherwise $x$ and the last element of $B_j$ form a new inversion in $w^{(k)}$. Thus $\inv(w^{(k)})=\inv(w^{(k-1)})+k-1$. Moreover, we have $t(w^{(k)}) = t(w^{(k-1)})$ because the relative position of $2r$ and $2r+2$ does not change in $w^{(k)}$: it could change only if $2r<2i+1<2r+2$, i.e., $r=i$, which contradicts the fact that $2i$ is not in $\pi^{(k)}$. Thus $\inv(w^{(k)})-t(w^{(k)})=\inv(w^{(k-1)}) -t(w^{(k-1)})+k-1$ and the claim is also true for $k$.
\item[Subcase 1-(b)] $\pi_{k-1}<2i$. In this case $\maj(\kappa\pi^{(k)})=\maj(\kappa\pi^{(k-1)})$. By the same arguments one can show that $\inv(w^{(k)}) = \inv(w^{(k-1)})$ and $t(w^{(k)})=t(w^{(k-1)})$, which imply the claim for $k$.
\end{description}
\item[Case 2] $\pi_k=2i$ for some $i$. Since $\pi\in\Alt^{-1}_{2n+1}$, $\pi^{(k)}$ contains $2i+1$. There are $5$ subcases.
\begin{description} \item [Subcase 2-(a)] $\pi_{k-1}=2i+1$. In this case $\pi^{(k)}$ does not have $2i+2$ since $\pi\in\Alt^{-1}_{2n+1}$. Thus $w^{(k)}=f(w^{(k-1)}\pi_k,<_{2i})$. Observe that the last two elements of $\kappa\pi^{(k)}$ are $2i$ and $2i+1$ in this order, which implies $\maj(\kappa\pi^{(k)}) = \maj(\kappa\pi^{(k-1)})$. Let \[ B(w^{(k-1)}\pi_k,<_{2i})=(B_1,\dots,B_\ell). \] The last element of $w^{(k-1)}$ is $\pi_{k-1}=2i+1$, which is smaller than $\pi_k=2i$ with respect to the order $<_{2i}$. Hence, for every $j\in[\ell]$, the last element in $B_j$ is the only element there which is smaller than $2i$ with respect to $<_{2i}$. If $x\in B_j$ is the last element in $B_j$, then $x<_{2i} 2i$. In this case $x$ and $2i$ form a new inversion in $w^{(k)}$ if and only if $x=2i+1$. On the other hand, if $x\in B_j$ is not the last element $y$ in $B_j$, then $y<_{2i} 2i <_{2i} x$. In this case when we construct $w^{(k)}$ we lose the inversion formed by $x$ and $y$ in $w^{(k-1)}$ and get a new inversion formed by $x$ and $2i$. Therefore, in total, $\inv(w^{(k)}) = \inv(w^{(k-1)}) +1$. Since the relative position of $2r$ and $2r+2$ does not change, the last element of $w^{(k)}$ is $2i$ and $w^{(k)}$ does not have $2i+2$, we have $t(w^{(k)}) = t(w^{(k-1)})+1$. Therefore, $\inv(w^{(k)})-t(w^{(k)}) = \inv(w^{(k-1)})-t(w^{(k-1)})$, which implies the claim for $k$. \item [Subcase 2-(b)] $\pi_{k-1}>2i+1$ and $\pi^{(k)}$ has $2i+2$. In this case we have $w^{(k)}=f(w^{(k-1)}\pi_k,<)$. By similar arguments as in Subcase 1-(a), we have $\maj(\kappa\pi^{(k)})=\maj(\kappa\pi^{(k-1)})+k-1$, $\inv(w^{(k)}) = \inv(w^{(k-1)})+k-1$ and $t(w^{(k)}) = t(w^{(k-1)})$. Thus $\inv(w^{(k)})-t(w^{(k)}) = \inv(w^{(k-1)})-t(w^{(k-1)})+k-1$, which implies the claim for $k$. \item [Subcase 2-(c)] $\pi_{k-1}>2i+1$ and $\pi^{(k)}$ does not have $2i+2$. In this case, $w^{(k)}=f(w^{(k-1)}\pi_k,<_{2i})$. 
By similar arguments as in Subcase 2-(a), we obtain $\maj(\kappa\pi^{(k)})=\maj(\kappa\pi^{(k-1)})+k-1$, $\inv(w^{(k)}) = \inv(w^{(k-1)})+k$ and $t(w^{(k)}) = t(w^{(k-1)})+1$. Thus $\inv(w^{(k)})-t(w^{(k)}) = \inv(w^{(k-1)})-t(w^{(k-1)})+k-1$, which implies the claim for $k$.
\item [Subcase 2-(d)] $\pi_{k-1}< 2i$ and $\pi^{(k)}$ has $2i+2$. In this case, $w^{(k)}=f(w^{(k-1)}\pi_k,<)$. By similar arguments as in Subcase 1-(b), we obtain $\maj(\kappa\pi^{(k)})=\maj(\kappa\pi^{(k-1)})$, $\inv(w^{(k)}) = \inv(w^{(k-1)})$ and $t(w^{(k)}) = t(w^{(k-1)})$. Thus we have $\inv(w^{(k)})-t(w^{(k)}) = \inv(w^{(k-1)})-t(w^{(k-1)})$, which implies the claim for $k$.
\item [Subcase 2-(e)] $\pi_{k-1}< 2i$ and $\pi^{(k)}$ does not have $2i+2$. In this case, we have $w^{(k)}=f(w^{(k-1)}\pi_k, <_{2i})$. By similar arguments as in Subcase 2-(a), we obtain $\maj(\kappa\pi^{(k)})=\maj(\kappa\pi^{(k-1)})$, $\inv(w^{(k)}) = \inv(w^{(k-1)})+1$ and $t(w^{(k)}) = t(w^{(k-1)})+1$. Thus $\inv(w^{(k)})-t(w^{(k)}) = \inv(w^{(k-1)})-t(w^{(k-1)})$, which implies the claim for $k$.
\end{description}
\end{description}
In any case, the claim is true for $k$, which completes the proof.
\end{proof}
\begin{cor}
We have
\[
\sum_{\pi\in \Alt_{2n+1}} q^{\maj(\kappa_{2n+1} \pi^{-1})}
=\sum_{\pi\in \Alt_{2n+1}} q^{\inv(\pi)-\ndes(\pi_e)}.
\]
\end{cor}
\begin{proof}
By Theorem~\ref{thm:foata},
\[
\sum_{\pi\in \Alt_{2n+1}} q^{\maj(\kappa_{2n+1} \pi^{-1})} = \sum_{\pi\in \Alt^{-1}_{2n+1}} q^{\maj(\kappa_{2n+1} \pi)}= \sum_{\sigma\in \Alt^{-1}_{2n+1}} q^{\inv(\sigma)-\ndes((\sigma^{-1})_e)}.
\]
Since
\[
\sum_{\sigma\in \Alt^{-1}_{2n+1}} q^{\inv(\sigma)-\ndes((\sigma^{-1})_e)}
=\sum_{\sigma\in \Alt_{2n+1}} q^{\inv(\sigma)-\ndes(\sigma_e)},
\]
we are done.
\end{proof}
\subsection{Foata-type bijections for other $q$-Euler numbers}
\label{sec:more_foata}
We give Foata-type bijections which imply the identities $M=I$ in Tables~\ref{tab:q-tan} and \ref{tab:q-sec}.
\begin{thm}\label{thm:foata_mod}
For each row $\mathrm{Set}$, $N$, $A(\pi)$, $B(\sigma)$, $C$, $D$, $\prec$ in Table~\ref{tab:foata}, the modified Foata map $F_{\mathrm{mod}}$ is defined as follows. For $\pi=\pi_1\dots\pi_N\in \mathrm{Set}$, $F_{\mathrm{mod}}(\pi)=w^{(N)}$, where $w^{(1)}=\pi_1$ and, for $2\le k\le N$,
\[
w^{(k)}=
\begin{cases}
f(w^{(k-1)}\pi_k, \prec) & \mbox{if $\pi_k=C$ and $\pi_1\dots\pi_{k-1}$ does not have $D$,}\\
f(w^{(k-1)}\pi_k, <) & \mbox{otherwise.}
\end{cases}
\]
Then $F_{\mathrm{mod}}: \mathrm{Set}\to\mathrm{Set}$ is a bijection such that if $\sigma=F_{\mathrm{mod}}(\pi)$, then $A(\pi)=B(\sigma)$.
\end{thm}
We omit the proof of Theorem~\ref{thm:foata_mod} since it is similar to the proof of Theorem~\ref{thm:foata}.
\begin{table}
\centering
\begin{tabular}{cclllll}
\hline
$\mathrm{Set}$ & $N$ & $A(\pi)$ & $B(\sigma)$ & $C$ & $D$ & $\prec$ \\
\hline
$\Alt^{-1}_{N}$ & $2n+1$ & $\maj(\pi)$ & $\inv(\sigma)$ & $-$ & $-$ & $-$ \\
 & & $\maj(\kappa_N\pi)$ & $\inv(\sigma)-\ndes((\sigma^{-1})_e)$ & $2i$ & $2i+2$ & $<_{2i}$\\
 & & $\maj(\eta_N\pi)$ & $\inv(\sigma)+\nasc((\sigma^{-1})_e)$ & $2i$ & $2i-2$ & $<_{2i-1}$ \\
 & $2n$ & $\maj(\pi)$ & $\inv(\sigma)$ & $-$ & $-$ & $-$ \\
 & & $\maj(\kappa_N\pi)$ & $\inv(\sigma)-\asc((\sigma^{-1})_*)$ & $2i$ & $2i+2$ & $<_{2i}$ \\
 & & $\maj(\eta_N\pi)$ & $\inv(\sigma)+\nasc((\sigma^{-1})_*)$ & $2i$ & $2i-2$ & $<_{2i-1}$ \\
\hline
$\Ralt^{-1}_{N}$ & $2n+1$ & $\maj(\pi)$ & $\inv(\sigma)$ & $-$ & $-$ & $-$\\
 & & $\maj(\kappa_N\pi)$ & $\inv(\sigma)+\des((\sigma^{-1})_o)$ & $2i+1$ & $2i-1$ & $<_{2i}$ \\
 & & $\maj(\eta_N\pi)$ & $\inv(\sigma)-\asc((\sigma^{-1})_o)$ & $2i-1$ & $2i+1$ & $<_{2i-1}$ \\
 & $2n$ & $\maj(\pi)$ & $\inv(\sigma)$ & $-$ & $-$ & $-$ \\
 & & $\maj(\kappa_N\pi)$ & $\inv(\sigma)+\des((\sigma^{-1})_*)$ & $2i+1$ & $2i-1$ & $<_{2i}$ \\
 & & $\maj(\eta_N\pi)$ & $\inv(\sigma)-\ndes((\sigma^{-1})_*)$ & $2i-1$ & $2i+1$ & $<_{2i-1}$ \\
\hline
\end{tabular}
\caption{Each row defines the modified
Foata map $F_{\mathrm{mod}}$ described in Theorem~\ref{thm:foata_mod}. If $(C,D,\prec)=(-,-,-)$, $F_{\mathrm{mod}}$ is the original Foata map.}
\label{tab:foata}
\end{table}

\section{Proof of Theorems~\ref{conj:9.3} and \ref{conj:9.6}}
\label{sec:MPP conjectures}
In this section we prove the two conjectures of Morales et al., Theorems~\ref{conj:9.3}~and~\ref{conj:9.6}. Let us briefly outline our proof. Recall that both Theorems~\ref{conj:9.3} and \ref{conj:9.6} are of the form $A=Q\det(c_{ij})$. In Section~\ref{sec:pleas-diagr-delt} we interpret pleasant diagrams of $\delta_{n+2k}/\delta_n$ as non-intersecting marked Dyck paths. This interpretation can be used to express $A$ as a generating function for non-intersecting Dyck paths. In Section~\ref{sec:modif-lindstr-gess} we show a modification of the Lindstr\"om--Gessel--Viennot lemma which allows us to express $\det(c_{ij})$ as a generating function for weakly non-intersecting Dyck paths. In Section~\ref{sec:conn-betw-weakly} we find a connection between the generating function for weakly non-intersecting Dyck paths and the generating function for (strictly) non-intersecting Dyck paths. Using these results we prove Theorems~\ref{conj:9.3}~and~\ref{conj:9.6} in Sections~\ref{sec:proof-th1} and \ref{sec:proof-thm2}.

Recall that $\Dyck_{2n}$ (resp.~$\Sch_{2n}$) is the set of Dyck (resp.~Schr\"oder) paths from $(-n,0)$ to $(n,0)$. Since a Dyck path is a Schr\"oder path without horizontal steps, we have $\Dyck_{2n}\subseteq\Sch_{2n}$. For a Schr\"oder path $S\in\Sch_{2n}$ and $-n\le i\le n$, we define $S(i)=j$ if $(i,j)\in S$ or $\{(i-1,j),(i+1,j)\}$ is a horizontal step of $S$. For $S_1\in\Sch_{2n}$ and $S_2\in\Sch_{2n+4k}$, we write $S_1\le S_2$ if $S_1(i)\le S_2(i)$ for all $-n\le i\le n$ and there is no $i$ such that $S_1(i)=S_2(i)$ and $S_1(i+1)= S_2(i+1)$. Similarly, we write $S_1< S_2$ if $S_1(i)< S_2(i)$ for all $-n\le i\le n$. Note that if $S_1<S_2$, then $S_2$ is strictly above $S_1$.
If $S_1\le S_2$, then $S_2$ is weakly above $S_1$ and in addition $S_1$ and $S_2$ do not share any steps. For a Dyck path $D\in\Dyck_{2n}$, we denote by $\VV(D)$ (resp.~$\HP(D)$) the set of valleys (resp.~high peaks) of $D$. We denote by $\Dyck_{2n}^k$ the set of $k$-tuples $(D_1,\dots,D_k)$ of Dyck paths, where for $i\in[k]$, \[ D_i\in\Dyck_{2n+4i-4} = \Dyck((-n-2i+2,0)\to(n+2i-2,0)). \] For brevity, if $(D_1,\dots,D_k)\in\Dyck_{2n}^k$ and $D_1\prec D_2\prec \dots\prec D_k$, where $\prec$ can be $<$ or $\le$, we will simply write $(D_1\prec D_2\prec \dots\prec D_k)\in\Dyck_{2n}^k$. For example, $(D_1\le D_2<D_3)\in\Dyck_{2n}^3$ means $(D_1,D_2,D_3)\in\Dyck_{2n}^3$ and $D_1\le D_2<D_3$. \subsection{Pleasant diagrams of $\delta_{n+2k}/\delta_n$ and non-intersecting marked Dyck paths}\ \label{sec:pleas-diagr-delt} For a point $p=(i,j)\in\ZZ\times\NN$, the \emph{height} $\HT(p)$ of $p$ is defined to be $j$. We identify the square $u=(i,j)$ in the $i$th row and $j$th column in $\delta_{n+2k}$ with the point $p=(j-i,n+2k-i-j)\in\ZZ\times\NN$. Under this identification one can easily check that if a square $u\in \delta_{n+2k}$ corresponds to a point $p\in \ZZ\times\NN$ then the hook length $h(u)$ in $\delta_{n+2k}$ is equal to $2\HT(p)+1$. We define the set $\ND_{2n}^{k}$ of non-intersecting Dyck paths by \[ \ND_{2n}^{k} = \{(D_1,\dots,D_k)\in\Dyck_{2n}^k : D_1<\dots<D_k\}. \] Morales et al. showed that there is a natural bijection between $\ND_{2n}^k$ and $\EE(\delta_{n+2k}/\delta_n )$ as follows. \begin{prop}\cite[Corollary~8.4]{MPP2} \label{prop:excited} The map $\rho:\ND_{2n}^k\to \EE(\delta_{n+2k}/\delta_n)$ defined by \[ \rho(D_1,\dots,D_k) = \delta_{n+2k}\setminus (D_1\cup \dots \cup D_k) \] is a bijection. \end{prop} A \emph{marked Dyck path} is a Dyck path in which each point that is not a valley may or may not be marked. Let \[ \ND_{2n}^{*k} = \left\{(D_1,\dots,D_k,C): (D_1,\dots,D_k)\in\ND_{2n}^k, C\subset \bigcup_{i=1}^k (D_i \setminus \VV(D_i))\right\}. 
\]
Note that an element $(D_1,\dots,D_k,C)\in \ND_{2n}^{*k}$ can be considered as non-intersecting marked Dyck paths $D_1,\dots,D_k$ with the set $C$ of marked points. The following proposition allows us to consider pleasant diagrams of $\delta_{n+2k}/\delta_n$ as non-intersecting marked Dyck paths.
\begin{prop}\label{prop:pleasant}
The map $\rho^*:\ND_{2n}^{*k}\to \PP(\delta_{n+2k}/\delta_n)$ defined by
\[
\rho^*(D_1,\dots,D_k,C) = (D_1\cup \dots \cup D_k)\setminus C
\]
is a bijection.
\end{prop}
\begin{proof}
By Proposition~\ref{prop:excited}, $\rho^*$ is a well-defined map from $\ND_{2n}^{*k}$ to $\PP(\delta_{n+2k}/\delta_n)$. We need to show that $\rho^*$ is surjective and injective.

Consider an arbitrary pleasant diagram $P\in \PP(\delta_{n+2k}/\delta_n)$. By Proposition~\ref{prop:excited}, there is $(D_1,\dots,D_k)\in\ND_{2n}^k$ such that $P\subset (D_1\cup \dots\cup D_k)$. We can take $(D_1,\dots,D_k)$ so that the squares in $D_1,\dots, D_k$ lie as far to the northwest as possible. Then $P$ contains all valleys in $D_1,\dots,D_k$, for otherwise we could move $D_1,\dots,D_k$ further to the northwest. Therefore $\rho^*$ is surjective.

Now suppose that
\[
\rho^*(D_1,\dots,D_k,C) = \rho^*(D_1',\dots,D_k',C').
\]
If $D_1\ne D_1'$, we can find a valley $p$ in one of $D_1$ and $D_1'$ that is not contained in the other Dyck path. We can assume $p\in \VV(D_1)$ and $p\not\in D'_1$. Then we have $p\in \rho^*(D_1,\dots,D_k,C)$ and $p\not\in\rho^*(D_1',\dots,D'_k,C')$, which is a contradiction. Thus we must have $D_1=D_1'$. Similarly we can show that $D_i=D_i'$ for all $i\in[k]$. Then we also have $C=C'$. Thus $\rho^*$ is injective.
\end{proof}
\begin{remark}
In Proposition~\ref{prop:pleasant}, we can also use high peaks instead of valleys. This was shown for $k=1$ in \cite[Corollary~9.2]{MPP2}.
\end{remark}

\subsection{A modification of the Lindstr\"om--Gessel--Viennot lemma}\
\label{sec:modif-lindstr-gess}
Let $\wt$ and $\wtext$ be fixed weight functions defined on $\ZZ\times\NN$.
We define \[ \wt_\VV(D) = \prod_{p\in D} \wt(p) \prod_{p\in\mathcal{V}(D)} \wtext(p) \] and \[ \wt_\HP(D) = \prod_{p\in D} \wt(p) \prod_{p\in\HP(D)} \wtext(p). \] One can regard $\wt_\VV(D)$ as a weight of a Dyck path $D$ in which every point $p$ of $D$ has the weight $\wt(p)$ and every valley $p$ of $D$ has the extra weight $\wtext(p)$. For Dyck paths $D_1, \dots, D_k$, we define \[ \wt_\VV(D_1, \dots, D_k) = \wt_\VV(D_1)\cdots \wt_\VV(D_k). \] The next lemma is a modification of Lindstr\"om--Gessel--Viennot lemma. \begin{lem}\label{lem:det} For $1\le i,j\le k$, let $A_i=(-n-2i+2,0)$, $B_j=(n+2j-2,0)$ and \[ d_n^{i,j}(q) = \sum_{D\in\Dyck(A_i\to B_j)} \wt_\VV(D). \] Then \begin{equation}\label{eqn:lem_det} \det(d_{n}^{i,j}(q))_{i,j=1}^k =\sum_{(D_1\le \dots \le D_k)\in\Dyck_{2n}^k} \wt_\VV(D_1, \dots, D_k) \prod_{i=1}^{k-1} \prod_{p\in D_i\cap D_{i+1}} \left(1-\frac{1}{\wtext(p)}\right). \end{equation} \end{lem} Note that if $\wt$ and $\wtext$ depend only on the $y$-coordinates, i.e., $\wt(a,b)=f(b)$ and $\wtext(a,b)=g(b)$ for some functions $f$ and $g$, then $d_{n}^{i,j}(q)$ can be written as $d_{n+i+j-2}(q)$, where \[ d_n(q) = \sum_{D\in\Dyck_{2n}} \wt_\VV(D). \] \begin{proof} We have \[ \det(d_{n}^{i,j}(q))_{i,j=1}^k =\sum_{(D_1,\dots,D_k)\in T} w(D_1,\dots,D_k), \] where $T$ is the set of $k$-tuples $(D_1,\dots,D_k)$ of Dyck paths such that $D_i\in\Dyck(A_i\to B_{\pi_i})$ for some permutation $\pi\in \Sym_k$ and $w(D_1,\dots,D_k) = \sgn(\pi) \wt_\VV(D_1, \dots, D_k) $. Let $T_1$ be the subset of $T$ consisting of $(D_1,\dots,D_k)$ such that $D_i$ and $D_j$ do not share any common step for all $1\le i<j\le k$. Suppose that $(D_1,\dots,D_k)\in T\setminus T_1$. Then we can find the lexicographically largest index $(i,j)$ such that $D_i$ and $D_j$ have a common step. Let $p_1$ and $p_2$ be the rightmost two consecutive points contained in $D_i$ and $D_j$. 
Let $D_i'$ and $D_j'$ be the Dyck paths obtained from $D_i$ and $D_j$, respectively, by exchanging the points after $p_2$. For $t\ne i,j$, let $D_t'=D_t$. Then the map $(D_1,\dots,D_k)\mapsto (D_1',\dots,D_k')$ is a sign-reversing involution on $T\setminus T_1$ with no fixed points. Therefore, we have
\[
\sum_{(D_1,\dots,D_k)\in T} w(D_1,\dots,D_k) = \sum_{(D_1,\dots,D_k)\in T_1} w(D_1,\dots,D_k).
\]

Now consider $(D_1,\dots,D_k)\in T_1$. By the condition on the elements in $T_1$, we can find a unique $k$-tuple $(P_1\le \dots\le P_k)\in \Dyck_{2n}^k$ such that the steps in $D_1,\dots,D_k$ are the same as the steps in $P_1,\dots,P_k$. Note that we also have $(P_1, \dots, P_k)\in T_1$. We define the labeling $L$ of the intersection points in $P_1,\dots,P_k$ as follows. Let $p$ be an intersection point in $P_1,\dots,P_k$. Then $p$ is also an intersection point of two paths in $(D_1,\dots,D_k)$. Let $D_r$ and $D_s$ be the Dyck paths containing $p$. Since $P_1\le \dots\le P_k$ and $(P_1, \dots, P_k)\in T_1$, there is a unique $i$ for which $p$ is contained in $P_i$ and $P_{i+1}$. Then $p$ is both a peak of $P_{i}$ and a valley of $P_{i+1}$. On the other hand, $p$ may or may not be a peak or a valley in $D_r$ or $D_s$. Observe that $p$ is a peak (resp.~valley) of $D_r$ if and only if it is a valley (resp.~peak) of $D_s$. We define the label $L(p)$ of $p$ by
\[
L(p) =
\begin{cases}
1 & \mbox{if $p$ is a valley of $D_r$ or $D_s$,}\\
-1/\wtext(p) & \mbox{otherwise.}
\end{cases}
\]
For example, if $D_1$ and $D_2$ are the paths in Figure~\ref{fig:D1D2}, the paths $P_1$ and $P_2$ and the labeling $L$ are determined as shown in Figure~\ref{fig:P1P2}.
\begin{figure}
\caption{An example of $(D_1,D_2)\in T_1$ in the proof of Lemma~\ref{lem:det}.}
\label{fig:D1D2}
\end{figure}
\begin{figure}
\caption{The gray path is $P_1$ and the black path is $P_2$.
The letter $W$ above an intersection point $p$ means that $L(p)=-1/\wtext(p)$.}
\label{fig:P1P2}
\end{figure}

Let $w'(P_1,\dots,P_k,L)$ be the product of $\wt_\VV(P_1, \dots, P_k)$ and the labels $L(p)$ of all intersection points $p$. It is not hard to see that $w(D_1,\dots,D_k)=w'(P_1,\dots,P_k,L)$. Moreover, given $(P_1\le\cdots\le P_k)\in \Dyck_{2n}^k$ and any labeling $L$ of their intersection points, there is a unique $(D_1,\dots,D_k)\in T_1$ which gives rise to $P_1,\dots,P_k$ and the labeling $L$. Thus,
\[
\sum_{(D_1,\dots,D_k)\in T_1} w(D_1,\dots,D_k) =\sum_{P_1,\dots,P_k,L} w'(P_1,\dots,P_k,L),
\]
where the sum is over all $(P_1\le\cdots\le P_k)\in \Dyck_{2n}^k$ and all possible labelings $L$ of their intersection points. Since
\[
\sum_{P_1,\dots,P_k,L} w'(P_1,\dots,P_k,L) = \sum_{(D_1\le \dots \le D_k)\in\Dyck_{2n}^k} \wt_\VV(D_1, \dots, D_k) \prod_{i=1}^{k-1} \prod_{p\in D_i\cap D_{i+1}} \left(1-\frac{1}{\wtext(p)}\right),
\]
we obtain the desired identity.
\end{proof}
\begin{remark}\label{rmk:LGV}
The Lindstr\"om--Gessel--Viennot lemma \cite{Lindstrom,GesselViennot} expresses a determinant as a sum over non-intersecting lattice paths. In our case, due to the extra weights on the valleys, the paths which have common points are not completely cancelled. Therefore the right-hand side of \eqref{eqn:lem_det} is a sum over \emph{weakly} non-intersecting lattice paths. If $\wtext(p)=1$ for all points $p$, then Lemma~\ref{lem:det} reduces to the Lindstr\"om--Gessel--Viennot lemma.
\end{remark}

\subsection{Weakly and strictly non-intersecting Dyck paths}\
\label{sec:conn-betw-weakly}
For a constant $c$ and a Schr\"oder path $S$, we define
\[
\wt_\Sch(c,S) = c^{\hs(S)} \prod_{p\in S} \wt(p),
\]
where $\hs(S)$ is the number of horizontal steps in $S$. In other words, we give extra weight $c$ to each horizontal step. For a Schr\"oder path $S$, we define $\phi_\VV(S)$ (resp.
$\phi_\HP(S)$) to be the Dyck path obtained from $S$ by changing each horizontal step to a down step followed by an up step (resp.~an up step followed by a down step). For a Dyck path $D$, we denote by $\phi_\VV^{-1}(D)$ (resp.~$\phi_\HP^{-1}(D)$) the inverse image of $D$, i.e., the set of all Schr\"oder paths $S$ with $\phi_\VV(S)=D$ (resp.~$\phi_\HP(S)=D$). Note that for a given Dyck path $D$, there can be several Schr\"oder paths $S$ mapping to $D$ via $\phi_\VV$ or $\phi_\HP$. The following proposition tells us that if $\wt(p)\left( \wtext(p)-1 \right)$ is constant, then the weight $\wt_\VV(D)$ or $\wt_\HP(D)$ is equal to the sum of weights of such Schr\"oder paths.
\begin{prop}\label{prop:MPP_weight}
Suppose that the weight functions $\wt$ and $\wtext$ satisfy $\wt(p)\left( \wtext(p)-1 \right)=c$ for all $p\in\ZZ\times\NN$. Then, for every Dyck path $D$, we have
\[
\wt_\VV(D) = \sum_{S\in \phi_\VV^{-1}(D)} \wt_\Sch(c,S) \qand \wt_\HP(D) = \sum_{S\in \phi_\HP^{-1}(D)} \wt_\Sch(c,S).
\]
\end{prop}
\begin{proof}
The elements in $\phi_\VV^{-1}(D)$ (resp.~$\phi_\HP^{-1}(D)$) are those obtained from $D$ by doing the following: for each point $p$ in $\VV(D)$ (resp.~$\HP(D)$), either do nothing or remove $p$. Here, if $p$ is in $\VV(D)$ (resp.~$\HP(D)$), removing $p$ can be understood as replacing the pair of a down step and an up step (resp.~an up step and a down step) at $p$ by a horizontal step. Thus
\begin{align*}
\sum_{S\in \phi_\VV^{-1}(D)} \wt_\Sch(c,S)
&= \prod_{v\in\mathcal{V}(D)} (\wt(v) + c) \prod_{p\in D\setminus\mathcal{V}(D)} \wt(p) \\
&= \prod_{v\in\mathcal{V}(D)} \left( \frac{\wt(v) + c}{\wt(v)} \right) \prod_{p\in D} \wt(p) \\
&= \prod_{v\in\mathcal{V}(D)} \left(1 + \frac{\wt(v)(\wtext(v)-1)}{\wt(v)} \right) \prod_{p\in D} \wt(p) \\
&= \prod_{v\in\mathcal{V}(D)} \wtext(v) \prod_{p\in D} \wt(p)\\
&= \wt_\VV(D).
\end{align*}
The second identity can be proved in the same way using high peaks instead of valleys.
\end{proof} The following proposition is the key ingredient for the proofs of Theorems~\ref{conj:9.3} and \ref{conj:9.6}. \begin{prop} \label{prop:MPP_key1} Suppose that the weight functions $\wt$ and $\wtext$ satisfy $\wt(p)\left( \wtext(p)-1 \right)=c$ for all $p\in\ZZ\times\NN$. Let $A\in\Dyck_{2n}$ and $B\in\Dyck_{2n+8}$ be fixed Dyck paths with $A<B$. Then \begin{multline*} \sum_{(A\le D<B)\in\Dyck_{2n}^3} \wt_\VV(D) \prod_{p\in A\cap D} \left( 1 - \frac{1}{\wtext(p)} \right) \\ = \sum_{(A<D\le B)\in\Dyck_{2n}^3} \wt_\HP(D) \prod_{p\in D\cap B} \left( 1 - \frac{1}{\wtext(p)} \right). \end{multline*} \end{prop} \begin{proof} Let \begin{align*} X&=\{D\in\Dyck_{2n+4}:A\le D<B\}, \\ Y&=\{D\in\Dyck_{2n+4}:A< D\le B\},\\ Z&=\{S\in\Sch_{2n+4}:A< S< B\}. \end{align*} It suffices to show the following identities \begin{equation} \label{eq:XZ} \sum_{S\in Z} \wt_\Sch(c,S) =\sum_{D\in X} \wt_\VV(D) \prod_{p\in A\cap D} \left( 1 - \frac{1}{\wtext(p)} \right) \end{equation} and \begin{equation} \label{eq:YZ} \sum_{S\in Z} \wt_\Sch(c,S) =\sum_{D\in Y} \wt_\HP(D) \prod_{p\in D\cap B} \left( 1 - \frac{1}{\wtext(p)} \right). \end{equation} Note that if $D\in Y$, every peak of $D$ is a high peak. It is easy to see that for all $S\in Z$, we have $\phi_\VV(S)\in X$ and $\phi_\HP(S)\in Y$. Consider the restrictions $\phi_\VV |_Z:Z\to X$ and $\phi_\HP |_Z:Z\to Y$, which are surjective. Observe that for $D\in X$ (resp.~$D\in Y$), we have $A\cap D\subset \VV(D)$ (resp.~$D\cap B\subset \HP(D)$). The elements in $(\phi_\VV|_Z)^{-1}(D)$ (resp.~$(\phi_\HP|_Z)^{-1}(D)$) are those obtained from $D$ by doing the following: for each point $p$ in $\VV(D)\setminus A$ (resp.~$\HP(D)\setminus B$), either do nothing or remove $p$, and for each point $p$ in $A\cap D$ (resp.~$D\cap B$), remove $p$. Now we prove \eqref{eq:XZ}. Let $D\in X$. 
By the same arguments used in the proof of Proposition~\ref{prop:MPP_weight}, we have \begin{align*} \sum_{S\in (\phi_\VV|_Z)^{-1}(D)} \wt_\Sch(c,S) &= \prod_{p\in D\setminus\mathcal{V}(D)} \wt(p) \prod_{v\in\mathcal{V}(D)\setminus A} (\wt(v) + c) \prod_{p\in A\cap D} c\\ &=\prod_{p\in D\setminus\mathcal{V}(D)} \wt(p) \prod_{v\in\mathcal{V}(D)} \wt(v)\wtext(v) \prod_{p\in A\cap D} \frac{\wt(p)\left( \wtext(p)-1 \right)}{\wt(p)\wtext(p)}\\ &= \wt_\VV(D) \prod_{p\in A\cap D} \left( 1 - \frac{1}{\wtext(p)} \right). \end{align*} Thus, we obtain \[ \sum_{S\in Z} \wt_\Sch(c,S) =\sum_{D\in X} \sum_{S\in (\phi_\VV|_Z)^{-1}(D)} \wt_\Sch(c,S) =\sum_{D\in X} \wt_\VV(D) \prod_{p\in A\cap D} \left( 1 - \frac{1}{\wtext(p)} \right), \] which is \eqref{eq:XZ}. The second identity~\eqref{eq:YZ} can be proved similarly. \end{proof} If $\wt$ and $\wtext$ satisfy certain conditions, we have a connection between weakly non-intersecting Dyck paths and strictly non-intersecting Dyck paths as follows. \begin{prop} \label{prop:MPP_key2} Suppose that $\wt$ and $\wtext$ satisfy the following conditions \begin{itemize} \item $\wt(p)\left( \wtext(p)-1 \right)=c$ for all $p\in\ZZ\times\NN$, and \item $\wt_\HP(D) = t_j \wt_\VV(D)$ for all $D\in\Dyck_{2j}$ such that every peak in $D$ is a high peak. \end{itemize} Then we have \begin{multline*} \sum_{(D_1\le \cdots \le D_k)\in\Dyck_{2n}^k} \wt_\VV(D_1, \dots, D_k) \prod_{i=1}^{k-1} \prod_{p\in D_i\cap D_{i+1}} \left( 1 - \frac{1}{\wtext(p)} \right) \\ = \prod_{i=1}^{k-1} t_{n+2i}^{i} \sum_{(D_1 < \cdots < D_k)\in\Dyck_{2n}^k} \wt_\VV(D_1, \dots, D_k). \end{multline*} \end{prop} \begin{proof} Throughout this proof, we assume that $D_i\in \Dyck_{2n+4i-4}$ for $i\in[k]$ and $D_{k+1}$ is fixed to be the highest Dyck path in $\Dyck_{2n+4k}$ consisting of $n+2k$ up steps followed by $n+2k$ down steps. 
For a sequence $(\prec_1,\dots,\prec_k)$ of inequalities $<$ and $\le$, let \[ g(\prec_1,\dots,\prec_k) = \sum_{D_1\prec_1 D_2 \prec_2\cdots \prec_k D_{k+1}} \wt_\VV(D_1, \dots, D_k) \prod_{i=1}^{k-1} \prod_{p\in D_i\cap D_{i+1}} \left( 1 - \frac{1}{\wtext(p)} \right). \] Note that we always have \[ g(\prec_1,\dots,\prec_{k-1},<) = g(\prec_1,\dots,\prec_{k-1},\le). \] The identity in this proposition can be restated as \begin{equation} \label{eq:8} g(\le,\dots,\le) = \prod_{i=1}^{k-1} t_{n+2i}^{i} \cdot g(<,\dots,<) . \end{equation} Suppose that $D_{i-1}$ and $D_{i+1}$ are fixed, where $2\le i\le k$. Then, by Proposition~\ref{prop:MPP_key1} and the assumptions on $\wt$ and $\wtext$, we have \begin{multline*} \sum_{D_{i-1}\le D_i<D_{i+1}} \wt_\VV(D_i) \prod_{p\in D_{i-1}\cap D_i} \left( 1 - \frac{1}{\wtext(p)} \right)\\ = t_{n-2+2i} \sum_{D_{i-1}< D_i\le D_{i+1}} \wt_\VV(D_i) \prod_{p\in D_{i}\cap D_{i+1}} \left( 1 - \frac{1}{\wtext(p)} \right). \end{multline*} This implies that for any $2\le i\le k$ and $\prec_1,\dots,\prec_{i-2},\prec_{i+1},\dots,\prec_k$, \[ g(\prec_1,\dots,\prec_{i-2},\le,<,\prec_{i+1},\dots,\prec_k) = t_{n+2i-2}\cdot g(\prec_1,\dots,\prec_{i-2},<,\le,\prec_{i+1},\dots,\prec_k). \] Therefore, \begin{align*} g(\le,\dots,\le) &= g(\le,\dots,\le,<) = t_{n+2k-2}\cdot g(\le,\dots,\le,<,\le)\\ &= t_{n+2k-2}t_{n+2k-4}\cdot g(\le,\dots,\le,<,\le,\le)\\ &= \cdots \\ & = \prod_{i=1}^{k-1} t_{n+2i} \cdot g(<,\le,\dots,\le). \end{align*} By applying this process repeatedly we obtain \eqref{eq:8}. \end{proof} \subsection{Proof of Theorem~\ref{conj:9.3}} \label{sec:proof-th1} Recall the first conjecture of Morales, Pak and Panova (Theorem~\ref{conj:9.3}) : \begin{equation} \label{eq:6} p(\delta_{n+2k}/\delta_n) = 2^{\binom k2} \det \left( \mathfrak{s}_{n-2+i+j} \right)_{i,j=1}^k, \end{equation} where $\mathfrak{s}_{n} = p(\delta_{n+2}/\delta_n)$. 
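The identity \eqref{eq:6} can be checked by brute force for small $n$ and $k$: by Proposition~\ref{prop:pleasant}, each non-valley point of the paths may independently be marked or not, so $p(\delta_{n+2k}/\delta_n)$ is the sum of $\prod_{i=1}^k 2^{|D_i\setminus\VV(D_i)|}$ over $(D_1<\dots<D_k)\in\ND_{2n}^k$. The following Python sketch (all function names are ours) verifies \eqref{eq:6} for small parameters.

```python
from itertools import permutations

def dyck_paths(m):
    """All Dyck paths from (-m,0) to (m,0), encoded as height sequences of length 2m+1."""
    paths = [(0,)]
    for _ in range(2 * m):
        paths = [p + (p[-1] + s,) for p in paths for s in (1, -1) if p[-1] + s >= 0]
    return [p for p in paths if p[-1] == 0]

def valleys(D):
    """Number of valleys (interior local minima) of a height sequence."""
    return sum(1 for i in range(1, len(D) - 1) if D[i - 1] > D[i] < D[i + 1])

def strictly_below(D1, D2):
    """D1 < D2 for D1 in Dyck_{2n}, D2 in Dyck_{2n+4}: D1(i) < D2(i) on D1's domain."""
    off = (len(D2) - len(D1)) // 2
    return all(D1[i] < D2[i + off] for i in range(len(D1)))

def p_skew(n, k):
    """p(delta_{n+2k}/delta_n): each strict tuple contributes 2^(#points - #valleys) per path."""
    def rec(prev, i):
        if i == k:
            return 1
        return sum(2 ** (len(D) - valleys(D)) * rec(D, i + 1)
                   for D in dyck_paths(n + 2 * i)
                   if prev is None or strictly_below(prev, D))
    return rec(None, 0)

def det(M):
    """Integer determinant by permutation expansion (fine for tiny matrices)."""
    total = 0
    for sigma in permutations(range(len(M))):
        sign = (-1) ** sum(sigma[a] > sigma[b]
                           for a in range(len(M)) for b in range(a + 1, len(M)))
        prod = 1
        for r, c in enumerate(sigma):
            prod *= M[r][c]
        total += sign * prod
    return total
```

For instance, $\mathfrak{s}_1=p(\delta_3/\delta_1)=8$, $\mathfrak{s}_2=48$, $\mathfrak{s}_3=352$, and $p(\delta_5/\delta_1)=1024=2(\mathfrak{s}_1\mathfrak{s}_3-\mathfrak{s}_2^2)$, as \eqref{eq:6} predicts for $k=2$.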
By Proposition~\ref{prop:pleasant},
\begin{align*}
p(\delta_{n+2k}/\delta_n) &= \sum_{(D_1< \dots < D_k)\in\Dyck_{2n}^k} \prod_{i=1}^k 2^{|D_i\setminus \VV(D_i)|}\\
&= 2^{k(2n-3)+4\binom{k+1}2} \sum_{(D_1< \dots < D_k)\in\Dyck_{2n}^k} \left(\frac{1}{2}\right)^{v(D_1)+\dots+v(D_k)}
\end{align*}
and
\[
\mathfrak{s}_{n} = p(\delta_{n+2}/\delta_n) = 2^{2n+1}\sum_{D\in\Dyck_{2n}} \left(\frac 12\right)^{v(D)}.
\]
Let
\[
d_n(q) = \sum_{D\in\Dyck_{2n}}q^{v(D)}
\]
and
\[
d_{n,k}(q)=\sum_{(D_1<D_2<\dots<D_k)\in \Dyck_{2n}^k}q^{v(D_1)+\dots+v(D_k)}.
\]
Then \eqref{eq:6} can be rewritten as
\[
2^{k(2n-3)+4\binom{k+1}2} d_{n,k}(1/2)= 2^{\binom k2} \det(2^{2n-3+2i+2j}d_{n+i+j-2}(1/2))_{i,j=1}^k
\]
or
\[
2^{-\binom k2} d_{n,k}(1/2)= \det(d_{n+i+j-2}(1/2))_{i,j=1}^k.
\]
Thus Theorem~\ref{conj:9.3} is obtained from the following theorem by substituting $q=1/2$.
\begin{thm}\label{thm:MPP1_restated}
For $n,k\ge1$, we have
\[
\det(d_{n+i+j-2}(q))_{i,j=1}^k = q^{\binom k2} d_{n,k}(q).
\]
\end{thm}
\begin{proof}
By Lemma~\ref{lem:det} with $\wt(p) = 1$ and $\wtext(p) = q$ for all $p\in\ZZ\times\NN$, we can rewrite Theorem~\ref{thm:MPP1_restated} as
\begin{multline}\label{eqn:MPP1}
\sum_{(D_1\le \dots \le D_k)\in\Dyck_{2n}^k} q^{v(D_1)+\dots+v(D_k)} \left(1-\frac{1}{q}\right)^{|D_1\cap D_2|+\dots+|D_{k-1}\cap D_k|} \\
=q^{\binom k2} \sum_{(D_1< \dots < D_k)\in\Dyck_{2n}^k} q^{v(D_1)+\dots+v(D_k)}.
\end{multline}
Then \eqref{eqn:MPP1} follows immediately from Proposition~\ref{prop:MPP_key2} and Lemma~\ref{lem:10} below.
\end{proof}
\begin{lem} \label{lem:10}
Let $\wt(p) = 1$ and $\wtext(p) = q$ for a lattice point $p$. Then
\begin{itemize}
\item $\wt(p)\left( \wtext(p)-1 \right)=q-1$ for all lattice points $p$, and
\item $\wt_\HP(D) = q \wt_\VV(D)$ for all $D\in\Dyck_{2j}$ such that every peak in $D$ is a high peak.
\end{itemize}
\end{lem}
\begin{proof}
The first statement is clear.
The second one follows from the fact that in a Dyck path the number of peaks is always one more than the number of valleys. \end{proof} \subsection{Proof of Theorem~\ref{conj:9.6}} \label{sec:proof-thm2} Recall the second conjecture of Morales, Pak and Panova (Theorem~\ref{conj:9.6}) : \begin{equation} \label{eq:7} \sum_{\pi\in \RPP(\delta_{n+2k}/\delta_n)} q^{|\pi|} =q^{-\frac{k(k-1)(6n+8k-1)}{6}} \det\left( \frac{E^*_{2n+2i+2j-3}(q)}{(q;q)_{2n+2i+2j-3}} \right)_{i,j=1}^k. \end{equation} By \eqref{eq:MPP2} and Proposition~\ref{prop:pleasant}, we have \begin{align*} &\sum_{\pi\in \RPP(\delta_{n+2k}/\delta_n)} q^{|\pi|} =\sum_{P\in\PP(\delta_{n+2k}/\delta_n)} \prod_{u\in P} \frac{q^{h(u)}}{1-q^{h(u)}}\\ &= \sum_{(D_1< \dots < D_k)\in\Dyck_{2n}^k} \prod_{i=1}^k \left( \prod_{p\in \VV(D_i)} \frac{ q^{2\HT(p)+1}}{1-q^{2\HT(p)+1}} \prod_{p\in D_i\setminus\VV(D_i)} \left( 1+ \frac{ q^{2\HT(p)+1}}{ 1-q^{2\HT(p)+1}}\right) \right)\\ &= \sum_{(D_1< \dots < D_k)\in\Dyck_{2n}^k} \prod_{i=1}^k \left( \prod_{p\in \VV(D_i)} q^{2\HT(p)+1} \prod_{p\in D_i} \frac{1}{1-q^{2\HT(p)+1}} \right) \end{align*} and \[ \frac{E^*_{2n+1}(q)}{(q;q)_{2n+1}} = \sum_{\pi\in \RPP(\delta_{n+2}/\delta_n)} q^{|\pi|} = \sum_{D\in\Dyck_{2n}} \prod_{p\in \VV(D)} q^{2\HT(p)+1} \prod_{p\in D} \frac{1}{1-q^{2\HT(p)+1}}. \] Thus, by Lemma~\ref{lem:det} with $\wt(p) = 1/(1-q^{2\HT(p)+1})$ and $\wtext(p) = q^{2\HT(p)+1}$, we can rewrite \eqref{eq:7} as follows. \begin{thm}\label{thm:MPP2} We have \begin{multline*} \sum_{(D_1\le \dots \le D_k)\in\Dyck_{2n}^k} \prod_{i=1}^k \left( \prod_{p\in \VV(D_i)} q^{2\HT(p)+1} \prod_{p\in D_i} \frac{1}{1-q^{2\HT(p)+1}} \right)\prod_{j=1}^{k-1} \prod_{p\in D_j\cap D_{j+1}} \left(1 - \frac{1}{q^{2\HT(p)+1}}\right) \\ =q^{\frac{k(k-1)(6n+8k-1)}{6}} \sum_{(D_1< \dots < D_k)\in\Dyck_{2n}^k} \prod_{i=1}^k \left( \prod_{p\in \VV(D_i)} q^{2\HT(p)+1} \prod_{p\in D_i} \frac{1}{1-q^{2\HT(p)+1}} \right). 
\end{multline*}
\end{thm}
Theorem~\ref{thm:MPP2} follows immediately from Proposition~\ref{prop:MPP_key2} with $t_j=q^{2j+1}$, Lemma~\ref{lem:last} below and the fact
\[
\sum_{i=1}^{k-1} i(2n+4i+1) = \frac{k(k-1)(6n+8k-1)}{6}.
\]
\begin{lem} \label{lem:last}
Let $\wt(p) = 1/(1-q^{2\HT(p)+1})$ and $\wtext(p) = q^{2\HT(p)+1}$. Then
\begin{itemize}
\item $\wt(p)\left( \wtext(p)-1 \right)=-1$ for all $p$, and
\item $\wt_\HP(D) = q^{2n+1} \wt_\VV(D)$ for all $D\in\Dyck_{2n}$ such that every peak in $D$ is a high peak.
\end{itemize}
\end{lem}
\begin{proof}
The first statement is clear. For the second one, consider a Dyck path $D\in\Dyck_{2n}$ all of whose peaks are high peaks. Let $p_1=(x_1,y_1),\dots,p_k=(x_k,y_k)$ be the peaks of $D$ with $x_1<\dots<x_k$. Then the valleys of $D$ are the points $v_i=(x'_i,y'_i)$ for $i\in[k-1]$ satisfying $y_i'-y_i = -(x_i'-x_i)$ and $y_i'-y_{i+1} = x_i'-x_{i+1}$. Solving the system of equations for $x_i'$ and $y_i'$ gives
\[
x'_i = \frac{x_i+x_{i+1}+y_i-y_{i+1}}2 \qand y'_i = \frac{x_i-x_{i+1}+y_i+y_{i+1}}2.
\]
Thus
\[
\wt_\VV(D)=q^{\sum_{p\in\VV(D)}(2\HT(p)+1)} \wt(D) = q^{\sum_{i=1}^{k-1}(2y_i'+1)} \wt(D) =q^{(x_1-x_k-y_1-y_k) + 2(y_1+\dots+y_k)+k-1} \wt(D),
\]
where we write $\wt(D)=\prod_{p\in D}\wt(p)$. Since $(x_1,y_1)$ is the first peak and $(x_k,y_k)$ is the last peak, $x_1=y_1$ and $x_k+y_k=2n$. Therefore,
\[
\wt_\VV(D)=q^{-2n-1} q^{2(y_1+\dots+y_k)+k} \wt(D) = q^{-2n-1}\wt_\HP(D) . \qedhere
\]
\end{proof}

\section{A determinantal formula for a certain class of skew shapes}
\label{sec:lascoux-pragacz-type}
Lascoux and Pragacz \cite{Lascoux1988} found a determinantal formula for a skew Schur function in terms of ribbon Schur functions. Morales et al. \cite{MPP2} found a similar determinantal formula for the number of excited diagrams.
In this section, applying the same methods used in the previous section, we find a determinantal formula for $p(\lm)$ and the generating function for the reverse plane partitions of shape $\lm$ for a certain class of skew shapes $\lm$ including $\delta_{n+2k}/\delta_{n}$ and $\delta_{n+2k+1}/\delta_{n}$. Consider a partition $\lambda$. Recall that $u=(i,j)\in\lambda$ means that $u$ is a cell in the $i$th row and $j$th column of the Young diagram of $\lambda$. Let $L=(u_0,u_1,\dots,u_m)$ be a sequence of cells in $\lambda$. Each pair $(u_{i-1},u_i)$ is called a \emph{step} of $L$. A step $(u_{i-1},u_i)$ is called an \emph{up step} (resp.~\emph{down step}) if $u_i-u_{i-1}$ is equal to $(-1,0)$ (resp.~$(0,1)$). We say that $L$ is a \emph{$\lambda$-Dyck path} if every step is either an up step or a down step. The set of $\lambda$-Dyck paths starting at a cell $s$ and ending at a cell $t$ is denoted by $\Dyck_{\lambda}(s,t)$. If $s$ is the southmost cell of a column of $\lambda$ and $t$ is the eastmost cell of a row of $\lambda$, we denote by $L_\lambda(s,t)$ the lowest Dyck path in $\Dyck_{\lambda}(s,t)$; see Figure~\ref{fig:lowest} for an example. \begin{figure} \caption{The lowest path $L_\lambda(s,t)$ in $\Dyck_{\lambda}(s,t)$.} \label{fig:lowest} \end{figure} Let $D=(u_0,u_1,\dots,u_m)$ be a $\lambda$-Dyck path. A cell $u_i$, for $1\le i\le m-1$, is called a \emph{peak} (resp.~\emph{valley}) if $(u_{i-1},u_i)$ is an up step (resp.~down step) and $(u_{i},u_{i+1})$ is a down step (resp.~up step). A peak $u_i$ is called a \emph{$\lambda$-high peak} if $u_i+(1,1)\in\lambda$. The set of valleys in $D$ is denoted by $\VV(D)$. Suppose that $D_1$ and $D_2$ are $\lambda$-Dyck paths. We say that $D_1$ is \emph{weakly below} $D_2$, denoted $D_1\le D_2$, if the following conditions hold. \begin{itemize} \item For every cell $u\in D_1$, there is an integer $k\ge0$ satisfying $u-(k,k)\in D_2$. \item If $u\in D_1\cap D_2$, then $u$ is a peak of $D_1$ and a valley of $D_2$.
\end{itemize} We say that $D_1$ is \emph{strictly below} $D_2$, denoted $D_1<D_2$, if $D_1\le D_2$ and $D_1\cap D_2=\emptyset$. The \emph{Kreiman outer decomposition} of $\lm$ is a sequence $L_1,\dots,L_k$ of mutually disjoint nonempty $\lambda$-Dyck paths satisfying the following conditions. \begin{itemize} \item Each $L_i$ starts at the southmost cell of a column of $\lambda$ and ends at the eastmost cell of a row of $\lambda$. \item $L_1\cup \dots\cup L_k = \lm$. \end{itemize} See Figure~\ref{fig:Kreiman} for an example of the Kreiman outer decomposition. It is known \cite{Kreiman} that there is a unique (up to permutation) Kreiman outer decomposition of $\lm$. \begin{figure} \caption{The left diagram shows the Kreiman outer decomposition $L_1,\dots,L_7$ of $\lm$ for $\lambda=(9,8,8,8,5,5,4)$ and $\mu=(4,3,1)$. The label $L_i$ is written below its starting cell. The right diagram shows the poset of $L_1,\dots,L_7$ with relation $<$. } \label{fig:Kreiman} \end{figure} A poset $P$ is called \emph{ranked} if there exists a rank function $r:P\to\NN$ satisfying the following conditions. \begin{itemize} \item If $x$ is a minimal element, $r(x)=0$. \item If $y$ covers $x$, then $r(y)=r(x)+1$. \end{itemize} Note that if $P$ is a ranked poset, there is a unique rank function $r:P\to\NN$. The \emph{rank} of $x\in P$ is defined to be $r(x)$. For example, the poset in Figure~\ref{fig:Kreiman} is a ranked poset and the ranks of $L_1,L_2,\dots,L_7$ are $3,2,1,0,0,1,0$, respectively. However, if we remove the element $L_7$ from this poset, the resulting poset is not a ranked poset. The following two theorems can be proved by the same arguments as in the previous section. We omit the details. \begin{thm}\label{thm:LP1} Let $L_1,\dots,L_k$ be the Kreiman outer decomposition of $\lm$. Let $P$ be the poset of $L_1,\dots,L_k$ with relation $<$. Suppose that the following conditions hold. \begin{itemize} \item $P$ is a ranked poset.
\item If $L_i<L_j$, then in $L_j$ the first step is an up step, the last step is a down step and every peak is a $\lambda$-high peak. \end{itemize} Let $s_i$ (resp.~$t_i$) be the first (resp.~last) cell in $L_i$ and $r_i$ the rank of $L_i$ in the poset $P$. Then we have \[ \sum_{\pi\in\RPP(\lm)} q^{|\pi|} = q^{-\sum_{i=1}^k r_i|L_i|} \det \left( E_\lambda(s_i,t_j;q)\right)_{i,j=1}^k, \] where \[ E_\lambda(s_i,t_j;q) = \sum_{\pi\in \RPP(L_\lambda(s_i,t_j))} q^{|\pi|} = \sum_{D\in\Dyck_\lambda(s_i,t_j)} \prod_{u\in D} \frac1{1-q^{h(u)}} \prod_{u\in\VV(D)} q^{h(u)}. \] \end{thm} \begin{thm}\label{thm:LP2} Under the same conditions in Theorem~\ref{thm:LP1}, we have \[ \sum_{\substack{D_i\in \Dyck_{\lambda}(s_i,t_i) \\ D_1<\dots<D_k}} q^{\sum_{i=1}^k |\VV(D_i)|} = q^{-\sum_{i=1}^k r_i} \det \left( F_\lambda(s_i,t_j;q)\right)_{i,j=1}^k, \] where \[ F_\lambda(s_i,t_j;q) = \sum_{D\in\Dyck_\lambda(s_i,t_j)} q^{|\VV(D)|}. \] \end{thm} Since \[ p(\lm) = 2^{|\lm|}\sum_{\substack{D_i\in \Dyck_{\lambda}(s_i,t_i) \\ D_1<\dots<D_k}} 2^{-\sum_{i=1}^k |\VV(D_i)|}, \] by substituting $q=1/2$ in Theorem~\ref{thm:LP2}, we obtain \[ p(\lm) = 2^{|\lm|+\sum_{i=1}^k r_i} \det \left( F_\lambda(s_i,t_j;1/2)\right)_{i,j=1}^k, \] which can be rewritten as follows. \begin{cor}\label{cor:LP2} Under the same conditions in Theorem~\ref{thm:LP1}, we have \[ p(\lm) = 2^{\sum_{i=1}^k r_i} \det \left( p(L_\lambda(s_i,t_j))\right)_{i,j=1}^k. \] \end{cor} If $\lm=\delta_{n+2k}/\delta_n$ in Theorem~\ref{thm:LP1}, we obtain Theorem~\ref{conj:9.6}. If $\lm=\delta_{n+2k+1}/\delta_n$ in Theorem~\ref{thm:LP1}, using the Kreiman outer decomposition as shown in Figure~\ref{fig:E_odd}, we obtain the following corollary. 
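As a brief numerical aside on the $q=1/2$ substitution above, the following plain-Python sketch (not part of the paper; the function names are ours) brute-forces $p(\lm)$ in the simplest case of a single path family, namely a reverse hook $\lm=(b^a)/((b-1)^{a-1})$ inside the rectangle $\lambda=(b^a)$: a $\lambda$-Dyck path from the southmost cell of the first column to the eastmost cell of the first row is then a word with $a-1$ up steps and $b-1$ down steps, and a valley is a \texttt{DU} factor.

```python
from itertools import permutations
from math import comb

def p_brute(a, b):
    # p(alpha) = 2^{|alpha|} * sum over paths D of 2^{-|V(D)|} for the
    # reverse hook alpha = (b^a)/((b-1)^{a-1}); here |alpha| = a + b - 1.
    cells = a + b - 1
    words = set(permutations('U' * (a - 1) + 'D' * (b - 1)))
    total = 0
    for w in words:
        valleys = sum(1 for x, y in zip(w, w[1:]) if (x, y) == ('D', 'U'))
        total += 2 ** (cells - valleys)   # an integer: valleys <= cells
    return total

def p_closed(a, b):
    # binomial closed form for reverse hooks, recorded later in this section:
    # sum_t 2^{a+b-t-1} C(a-1, t) C(b-1, t)
    return sum(2 ** (a + b - t - 1) * comb(a - 1, t) * comb(b - 1, t)
               for t in range(min(a, b)))
```

Both functions agree, for instance giving $12$ for $a=b=2$.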
\begin{figure} \caption{The Kreiman outer decomposition $L_1,\dots,L_{n+k}$ of $\delta_{n+2k+1}/\delta_n$.} \label{fig:E_odd} \end{figure} \begin{cor}\label{cor:E_odd} We have \[ \sum_{\pi\in \RPP(\delta_{n+2k+1}/\delta_n)} q^{|\pi|} =q^{-\frac{k(k+1)(6n+8k+1)}{6}} \det\left( \frac{E^*_{2i+2\overline{j}+1}(q)}{(q;q)_{2i+2\overline{j}+1}} \right)_{i,j=1}^{n+k}, \] where $E^*_t(q)=0$ if $t<0$ and \[ \overline j = \begin{cases} j-n-1 & \mbox{if $j>n$},\\ -j & \mbox{if $j\le n$}. \end{cases} \] \end{cor} Similarly, as a special case of Corollary~\ref{cor:LP2}, we obtain a determinant formula for the number of pleasant diagrams of shape $\delta_{n+2k+1}/\delta_n$. \begin{cor}\label{cor:E_odd2} We have \[ p(\delta_{n+2k+1}/\delta_n) =2^{\binom{k+1}2} \det\left( \mathfrak{s}_{i+\overline{j}} \right)_{i,j=1}^{n+k}, \] where \[ \mathfrak{s}_m = \begin{cases} 0 & \mbox{if $m<0$,}\\ 2 & \mbox{if $m=0$,}\\ 2^{m+2}s_m & \mbox{if $m>0$,}\\ \end{cases} \] and $s_m$ is the little Schr\"oder number. \end{cor} \begin{exam} If $n=2$ and $k=1$, we have \[ p(\delta_{n+2k+1}/\delta_{n}) = p(\delta_{5}/\delta_2) = 2^8+2^9=3\cdot 2^8, \] and \[ 2^{\binom{k+1}2} \det\left( \mathfrak{s}_{i+\overline{j}} \right)_{i,j=1}^{n+k} =2 \det\left( \begin{matrix} \mathfrak{s}_0 & \mathfrak{s}_{-1} & \mathfrak{s}_1 \\ \mathfrak{s}_1 & \mathfrak{s}_{0} & \mathfrak{s}_2 \\ \mathfrak{s}_2 & \mathfrak{s}_{1} & \mathfrak{s}_3\\ \end{matrix} \right) =2 \det\left( \begin{matrix} 2&0&8\\ 8&2&48\\ 48&8&352\\ \end{matrix} \right) = 3\cdot 2^8. \] \end{exam} Note that if $\alpha$ is a reverse hook, i.e., $\alpha=(b^a)/((b-1)^{a-1})$, it is easy to see that \[ \sum_{\pi\in \RPP(\alpha)} q^{|\pi|} = \sum_{t\ge0} q^t \qbinom{a+t-1}{t}\qbinom{b+t-1}{t}, \] where \[ \qbinom{n}{m} = \frac{(q;q)_n}{(q;q)_m(q;q)_{n-m}}. \] Therefore, if $\lm$ is a thick reverse hook $((b+k)^{a+k})/(b^a)$, we obtain the following formula as a corollary of Theorem~\ref{thm:LP1}. \begin{cor}\label{cor:rhook} Let $\lambda=((b+k)^{a+k})$ and $\mu=(b^a)$.
Then \[ \sum_{\pi\in \RPP(\lm)} q^{|\pi|} = q^{-\frac{k(k-1)(3a+3b+4k+1)}6} \det\left( \sum_{t\ge0} q^t \qbinom{a+t+i-1}{t}\qbinom{b+t+j-1}{t} \right)_{i,j=1}^k. \] \end{cor} \begin{remark} Theorem~\ref{thm:LP1} is not applicable to $\lm$ for arbitrary rectangular shapes $\lambda$ and $\mu$. For example, if $\lm=(6,6,6,6)/(3,3)$, the Kreiman outer decomposition of $\lm$ has three $\lambda$-Dyck paths $L_1<L_2<L_3$ as shown in Figure~\ref{fig:rect}. Since $L_2$ does not start with an up step, the conditions in Theorem~\ref{thm:LP1} are not satisfied. \end{remark} \begin{figure} \caption{The Kreiman outer decomposition of $(6,6,6,6)/(3,3)$.} \label{fig:rect} \end{figure} Let $\lambda=(b^a)$, $s$ the southmost cell in the first column of $\lambda$ and $t$ the eastmost cell in the first row of $\lambda$. Then every $\lambda$-Dyck path in $\Dyck_\lambda(s,t)$ is completely determined by its valleys. By considering the valleys of the $\lambda$-Dyck paths, one can prove that, for a reverse hook $\alpha=(b^a)/((b-1)^{a-1})$, \[ p(\alpha)=\sum_{t\ge0} 2^{a+b-t-1}\binom{a-1}t \binom{b-1}t. \] Thus, as a special case of Corollary~\ref{cor:LP2}, we have a determinant formula for $p(\lm)$ when $\lm$ is a thick reverse hook $((b+k)^{a+k})/(b^a)$. \begin{cor}\label{cor:rhook2} Let $\lambda=((b+k)^{a+k})$ and $\mu=(b^a)$. Then \[ p(\lm) = 2^{\binom k2} \det\left( \sum_{t\ge0} 2^{a+b+i+j-t-1}\binom{a+i-1}t \binom{b+j-1}t \right)_{i,j=1}^k. \] \end{cor} \end{document}
\begin{document} \title{Fair allocation of indivisible goods and chores} \titlerunning{Fair allocation of combinations of indivisible goods and chores} \author{Haris~Aziz \and Ioannis~Caragiannis \and Ayumi~Igarashi$^{*}$ \and Toby~Walsh } \institute{ H. Aziz \at UNSW Sydney and Data61 CSIRO, Australia\\ \email{[email protected]} \and I. Caragiannis \at Aarhus University, Denmark\\ \email{[email protected]} \and A. Igarashi$^{*}$ (corresponding author) \at National Institute of Informatics, Japan\\ \email{[email protected]} \and T. Walsh \at UNSW Sydney and Data61 CSIRO, Australia\\ \email{[email protected]} } \maketitle \begin{abstract} We consider the problem of fairly dividing a set of items. Much of the fair division literature assumes that the items are ``goods'', i.e., they yield positive utility for the agents. There is also some work where the items are ``chores'' that yield negative utility for the agents. In this paper, we consider a more general scenario where an agent may have positive or negative utility for each item. This framework captures, e.g., fair task assignment, where agents can have both positive and negative utilities for each task. We show that whereas some of the positive axiomatic and computational results extend to this more general setting, others do not. We present several new and efficient algorithms for finding fair allocations in this general setting. We also point out several gaps in the literature regarding the existence of allocations satisfying certain fairness and efficiency properties and further study the complexity of computing such allocations. \end{abstract} \section{Introduction} Consider a group of students who are assigned to a certain set of coursework tasks. Students may have subjective views regarding how enjoyable each task is. For some people, solving a mathematical problem may be fulfilling and rewarding. For others, it may be nothing but torture.
A student who gets more cumbersome chores may be compensated by giving her some valued goods so that she does not feel hard done by. This example can be viewed as an instance of a classic fair division problem. The agents have different preferences over the items and we want to allocate the items to agents as fairly as possible. The twist we consider is that whether an agent has positive or negative utility for an item is subjective. Our setting is general enough to encapsulate two well-studied settings: (1) ``good allocation'' in which agents have positive utilities for the items and (2) ``chore allocation'' in which agents have negative utilities for the items. The setting we consider also covers a third setting (3) ``allocation of objective goods and chores'' in which the items can be partitioned into chores (that yield negative utility for all agents) and goods (that yield positive utility for all agents). Setting (3) covers several scenarios where an agent could be compensated by some goods for doing some chores. In this paper, we suggest a very simple yet general model of allocation of indivisible items that properly includes chore and good allocation. For this model, we present some case studies that highlight that whereas some existence and computational results can be extended to our general model, in other cases the combination of good and chore allocation poses interesting challenges not faced in subsettings. Our central technical contributions are several new efficient algorithms for finding fair allocations. In particular: \begin{itemize} \item We formalize fairness concepts for the general setting. Some fairness concepts directly extend from the setting of good allocation to our setting. Other fairness concepts such as ``envy-freeness up to one item'' (EF1) and ``proportionality up to one item'' (PROP1) need to be generalized appropriately. 
\item We show that the round robin sequential allocation algorithm that returns an EF1 allocation for the case of goods does not work in general. Nevertheless, we present a careful generalization of the decentralized round robin algorithm that finds an EF1 allocation when utilities are additive. \item Turning our attention to an efficient and fair allocation, we show that for the case of two agents, there exists a polynomial-time algorithm that finds an EF1 and Pareto-optimal (PO) allocation for our setting. The algorithm can be viewed as an interesting generalization of the Adjusted Winner rule ~\citep{BT96a,BT96b} that is designed for divisible goods. \item Weakening EF1 to PROP1, we show that there exists an allocation that is not only PROP1 but also contiguous (assuming that items are placed in a line). We further give a polynomial-time algorithm that finds such an allocation. \end{itemize} \subsection{Related Work} Fair allocation of indivisible items is a central problem in several fields including computer science and economics~\citep{BT96a,BCM15a}. Fair allocation has been extensively studied for allocation of divisible goods, commonly known as cake cutting~\citep{BT96a}. There are several established notions of fairness, including envy-freeness and proportionality. The recently introduced \emph{maximin share} (MMS) notion is weaker than envy-freeness and proportionality and has been heavily studied in the computer science literature. \citet{KPW18} showed that an MMS allocation of goods may not always exist; on the positive side, there exists a polynomial-time algorithm that returns a 2/3-approximate MMS allocation~\citep{KPW18,AMNS17}. Subsequent papers have presented simpler \citep{BK20} or even better \citep{SGHSY18} approximation algorithms for MMS allocations. In general, checking whether there exists an envy-free and Pareto-optimal allocation for goods is $\Sigma_2^p$-complete~\citep{KBKZ09a}. 
The idea of envy-freeness up to one good (EF1) was implicit in the paper by \citet{LMMS04a}. Today, it has become a well-studied fairness concept in its own right~\citep{Budi11a}. \citet{CKMPS19} further popularized it, showing that a natural modification of the Nash welfare maximizing rule satisfies EF1 and PO for the case of goods. \citet{BMV17a} recently presented a pseudo-polynomial-time algorithm for computing an allocation that is PO and EF1 for goods. A stronger fairness concept, {\em envy-freeness up to the least valued good} (EFX), was introduced by \citet{CKMPS19}. \citet{Aziz16a} noted that the work on multi-agent chore allocation is less developed than that of goods and that results from one may not necessarily carry over to the other. \citet{ARSW17a} considered fair allocation of indivisible chores and showed that there exists a simple polynomial-time algorithm that returns a 2-approximate MMS allocation for chores. \citet{BK20} presented a better approximation algorithm. \citet{CKKK12} studied the efficiency loss in order to achieve several fair allocations in the context of both good and chore divisions. Allocation of a mixture of goods and chores has received recent attention in the context of divisible items~\citep{BMSY16a,BMSY17a}. Here, we focus on indivisible items. \paragraph{Subsequent Work.} Our study of a general setting for goods and chores and our formalization of general definitions for fairness concepts (that apply well to hybrid settings) has spurred further work on the topic. In his survey, \citet{Moul19a} discusses the subtle differences between the treatment of goods and chores. \citet{GaMc20a} and \citet{CGMM21a} focus on algorithms for computing competitive equilibrium for goods and chores. \citet{AlWa20a} consider variations of concepts and algorithms that we propose. \citet{AzRe20a} consider a stronger concept of group envy-freeness for goods and chores.
\citet{AMS20a} focus on our formulation of PROP1 (that is a weakening of proportionality) and propose a polynomial-time algorithm for computing allocations that are PROP1 and Pareto optimal. \citet{brczi2020envyfree} pointed out that the natural extension of envy-cycle elimination algorithm of \citet{LMMS04a} does not give an EF1 allocation for a mix of goods and chores. \citet{BSV20a} provide further insights into the issue and show that restricting the envy-graph to edges involving maximum envy results in an EF1 allocation for doubly-monotonic valuations that are more general than additive utilities. \section{Our Model and Fairness Concepts}\label{sec:model} We now define a fair division problem of indivisible items where agents may have both positive and negative utilities. For a natural number $s \in \mathbb{N}$, we write $[s]=\{1,2,\ldots,s\}$. An \emph{instance} is a triple $I=(N,O,U)$ where \begin{itemize} \item $N=[n]$ is a set of {\em agents}, \item $O=\{o_1,o_2,\ldots,o_m\}$ is a set of {\em indivisible items}, and \item $U$ is an $n$-tuple of utility functions $u_i:2^O \rightarrow \mathbb{R}$. \end{itemize} Each subset $X \subseteq O$ is referred to as a {\em bundle} of items. We may abuse the notation and write $u_i(o)= u_i(\{o\})$. We note that under this model, an item can be a good for one agent $i$ (i.e., $u_i(o)>0$) but a chore for another agent $j$ (i.e., $u_j(o)<0$). We assume that the utilities in this paper are \emph{additive}, namely, $u_i(X)=\sum_{o \in X}u_i(o)$ for each bundle $X \subseteq O$. We say that agent $i$ \emph{weakly prefers} (respectively, \emph{strictly prefers}) item $o$ to item $o'$ if $u_i(o) \geq u_i(o')$ (respectively, $u_i(o)> u_i(o')$). An {\em allocation} $\pi$ is a function $\pi \colon N\rightarrow 2^O$ such that $\bigcup_{i \in N}\pi(i)=O$, and $\pi(i) \cap \pi(j)=\emptyset$ for every pair of distinct agents $i,j \in N$. 
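In code, this model admits a direct encoding. The following minimal Python sketch is illustrative only (the names and the toy instance are ours, not the authors'), with utilities stored per agent so that one item can be a good for one agent and a chore for another:

```python
# A hypothetical encoding of an instance I = (N, O, U) with additive
# utilities: U[i][o] is agent i's utility for item o
# (positive: good for i, negative: chore for i).
N = [0, 1]
O = ['o1', 'o2', 'o3']
U = [{'o1': 3, 'o2': -2, 'o3': 1},    # agent 0
     {'o1': -1, 'o2': -2, 'o3': 4}]   # agent 1

def utility(U, i, bundle):
    """Additive utility of agent i for a bundle (any iterable of items)."""
    return sum(U[i][o] for o in bundle)

def is_allocation(pi, N, O):
    """pi maps each agent to her bundle; bundles must partition O
    (empty bundles are allowed)."""
    assigned = [o for i in N for o in pi[i]]
    return len(assigned) == len(O) and set(assigned) == set(O)
```

Here \texttt{o1} is a good for agent $0$ but a chore for agent $1$, matching the remark above.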
We first observe that the definitions of standard fairness concepts can be naturally extended to this general model. The most classical fairness principle is {\em envy-freeness}, requiring that agents do not envy each other. Specifically, given an allocation $\pi$, we say that $i$ {\em envies} $j$ if $u_i(\pi(i)) < u_i(\pi(j))$. An allocation $\pi$ is \emph{envy-free} (EF) if no agent envies another agent. Another appealing notion of fairness is {\em proportionality}, which guarantees each agent a $1/n$ fraction of her utility for the whole set of items. Formally, an allocation $\pi$ is \emph{proportional} (PROP) if each agent $i \in N$ receives a bundle $\pi(i)$ of utility at least her {\em proportional fair share} $u_{i}(O)/n$. The following implication, which is well-known for the case of goods, holds in our setting as well. \begin{proposition}\label{prop:relation:EF} For additive utilities, an envy-free allocation satisfies proportionality. \end{proposition} \begin{proof} Suppose that $\pi$ is an envy-free allocation. Consider any agent $i \in N$. Then, by envy-freeness, $u_i(\pi(i))\geq u_i(\pi(j))$ for all $j\in N$. Thus, by summing up all the inequalities, $n\cdot u_i(\pi(i))\geq \sum_{j\in N}u_i(\pi(j))=u_{i}(O)$. Hence, each $i\in N$ receives a bundle of utility at least $u_{i}(O)/n$, so $\pi$ satisfies proportionality. \qed\end{proof} A simple example of one good with two agents already suggests the impossibility of achieving envy-freeness and proportionality. The recent literature on indivisible allocation has therefore focused on approximations of these fairness concepts. A prominent relaxation of envy-freeness, introduced by \citet{Budi11a}, is {\em envy-freeness up to one good} (EF1), which requires that an agent's envy towards another bundle can be eliminated by removing some good from the envied bundle.
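Proposition~\ref{prop:relation:EF} can also be checked mechanically. Below is a small self-contained Python sketch (ours, not the authors'), which states proportionality as $n\cdot u_i(\pi(i))\geq u_i(O)$ to avoid division:

```python
def utility(U, i, bundle):
    # additive utilities: U[i][o] is agent i's utility for item o
    return sum(U[i][o] for o in bundle)

def is_envy_free(U, pi):
    # no agent i prefers another agent's bundle to her own
    return all(utility(U, i, pi[i]) >= utility(U, i, pi[j])
               for i in pi for j in pi)

def is_proportional(U, pi):
    # n * u_i(pi(i)) >= u_i(O), i.e., the proportional fair share
    items = [o for bundle in pi.values() for o in bundle]
    n = len(pi)
    return all(n * utility(U, i, pi[i]) >= utility(U, i, items) for i in pi)
```

On any instance, whenever \texttt{is\_envy\_free} holds, \texttt{is\_proportional} holds as well, matching the proposition.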
We will present a generalized definition of EF1 for our setting: the envy can disappear by removing either one ``good'' from the other's bundle or one ``chore'' from their own bundle. Given an allocation $\pi$, we say that $i$ {\em envies} $j$ \emph{by more than one item} if $i$ envies $j$, and $u_i(\pi(i)\setminus \{o\}) < u_i(\pi(j)\setminus \{o\})$ for every item $o \in \pi(i) \cup \pi(j)$. \begin{definition}[EF1] An allocation $\pi$ is \emph{envy-free up to one item (EF1)} if for all $i,j \in N$, $i$ does not envy $j$ by more than one item. \end{definition} Obviously, envy-freeness implies EF1. \citet{CFS17a} introduced a novel relaxation of proportionality, which is referred to as {\em PROP1}. In the context of good allocation, this fairness relaxation is a weakening of both EF1 and proportionality, requiring that each agent gets her proportional fair share if she obtains one additional good from the others' bundles. Now we will extend this definition to our setting: under our definition, each agent receives her proportional fair share by obtaining an additional good or removing some chore from her bundle. \begin{definition}[PROP1] An allocation $\pi$ satisfies \emph{proportionality up to one item (PROP1)} if for each agent $i\in N$, \begin{itemize} \item $u_i(\pi(i))\geq u_{i}(O)/n$; or \item $u_i(\pi(i))+u_i(o)\geq u_{i}(O)/n$ for some $o\in O\setminus \pi(i)$; or \item $u_i(\pi(i))-u_i(o)\geq u_{i}(O)/n$ for some $o\in \pi(i)$. \end{itemize} \end{definition} We can verify that EF1 implies PROP1. \begin{proposition}\label{prop:relation:EF1Prop1} For additive utilities, an EF1 allocation satisfies PROP1. \end{proposition} \begin{proof} The claim clearly holds when $|N| \leq 1$ or $O=\emptyset$; thus suppose $|N| \geq 2$ and $O \neq \emptyset$. Consider any allocation $\pi$ that satisfies EF1, and any agent $i \in N$. First, consider the case when $\pi(i)=O$. If $u_i(O) \geq 0$, then it is clear that $u_i(\pi(i))=u_i(O) \geq u_i(O)/n$.
If $u_i(O) < 0$, consider any $j \in N \setminus \{i\}$ (this set is nonempty since $|N| \geq 2$). Since $\pi$ is EF1, there is an item $o \in \pi(i)$ such that $u_i(\pi(i))- u_i(o) \geq u_i(\pi(j))= u_i(\emptyset)=0$. Thus, $u_i(\pi(i)) -u_i(o) \geq 0> u_i(O)/n$ for some $o \in \pi(i)$. Next, consider the case when $\pi(i)=\emptyset$. If $u_i(O) \leq 0$, then it is clear that $u_i(\pi(i))=u_i(\emptyset)=0 \geq u_i(O)/n$. Thus, suppose $u_i(O) > 0$. Then, by EF1, for every $j \in N \setminus \{i\}$, $i$ does not envy $j$, or there is an item $o \in \pi(j)$ such that $u_i(\pi(i))= u_i(\emptyset) =0 \geq u_i(\pi(j))- u_i(o)$. Let $o^* \in \mbox{argmax}_{o \in O}u_i(o)$. Note that $u_i(o^*)>0$ since $u_i(O) > 0$. Then, $u_i(\pi(i)) + u_i(o^*) \geq u_i(\pi(j))$ for every $j \in N$, which implies that $u_i(\pi(i)) + u_i(o^*) \geq u_i(O)/n$. Finally, consider the case when $O\setminus \pi(i) \neq \emptyset$ and $\pi(i) \neq \emptyset$. Let $x=\max_{o\in O\setminus \pi(i)}u_i(o)$ and $y= \min_{o\in \pi(i)}u_i(o)$. Since $\pi$ satisfies EF1, for any agent $j \in N \setminus \{i\}$, \begin{itemize} \item $i$ does not envy $j$; or \item there exists an item $o \in \pi(j)$ such that $u_i(\pi(i)) \geq u_i(\pi(j)) - u_i(o)$; or \item there exists an item $o \in \pi(i)$ such that $u_i(\pi(i))- u_i(o) \geq u_i(\pi(j))$, \end{itemize} which implies \begin{itemize} \item $u_i(\pi(i)) \geq u_i(\pi(j))$; or \item $u_i(\pi(i)) +x \geq u_i(\pi(j))$; or \item $u_i(\pi(i))- y \geq u_i(\pi(j))$. \end{itemize} Thus, if $i$ gets bonus utility $b^* :=\max \{x,-y,0\}$ by getting some good or removing some chore, her updated utility is such that $u_i(\pi(i)) + b^* \geq u_i(\pi(j))$ for any agent $j \in N \setminus \{i\}$. This implies that $$ n(u_i(\pi(i))+b^*)\geq \sum_{j\in N} u_i(\pi(j))=u_i(O), $$ which implies that $u_i(\pi(i))+b^* \geq u_i(O)/n$. Hence PROP1 is satisfied.
\qed \end{proof} Figure \ref{fig:part-relations} illustrates the relations between fairness concepts introduced above. \begin{figure} \caption{Relations between fairness concepts.} \label{fig:part-relations} \end{figure} Besides fairness, we will also consider an efficiency criterion. The most commonly used efficiency concept is {\em Pareto-optimality}. Given an allocation $\pi$, another allocation $\pi'$ is a {\em Pareto-improvement} of $\pi$ if $u_i(\pi'(i)) \geq u_i(\pi(i))$ for all $i \in N$ and $u_j(\pi'(j)) > u_j(\pi(j))$ for some $j \in N$. We say that an allocation $\pi$ is {\em Pareto-optimal} (PO) if there is no allocation that is a Pareto-improvement of $\pi$. \section{Finding an EF1 Allocation} In this section, we focus on EF1, a very permissive fairness concept that admits a polynomial-time algorithm in the case of good allocation. For instance, consider a {\em round robin rule} in which agents take turns, and choose their most preferred unallocated item. The round robin rule finds an EF1 allocation if all the items are goods~(see e.g., \citealp{CKMPS19}). By a very similar argument, it can be shown that the algorithm also finds an EF1 allocation if all the items are chores. However, we will show that the round robin rule already fails to find an EF1 allocation if we have some items that are goods and others that are chores. \begin{proposition} The round robin rule does not satisfy EF1. \end{proposition} \begin{proof} Suppose there are two agents and four items with identical utilities described below. \begin{center} \upshape \setlength{\tabcolsep}{6.4pt} \begin{tabular}{rccccc} \toprule & \multicolumn{4}{l}{\!\!\! \begin{tikzpicture}[scale=0.57, transform shape, every node/.style={minimum size=7mm, inner sep=1.2pt, font=\huge}] \node[draw, circle](2) at (1.2,0) {$1$}; \node[draw, circle](3) at (2.4,0) {$2$}; \node[draw, circle](4) at (3.6,0) {$3$}; \node[draw, circle](5) at (4.8,0) {$4$}; \end{tikzpicture}\!\!\!\! } \\ \midrule Alice, Bob:\!\! 
& 2 & -3 & -3 & -3 \\ \bottomrule \end{tabular} \end{center} Consider the order in which Alice chooses the only good and then the remaining chores of equal value are allocated accordingly. In that case, Alice gets the positively valued good and one chore, whereas Bob gets two chores. So even if one item is removed from the bundle of either Alice or Bob, Bob will still remain envious. \qed \end{proof} Nevertheless, a careful adaptation of the round robin method to our setting, which we call the {\em double round robin algorithm}, constructs an EF1 allocation. In essence, the algorithm will apply the round robin method twice: clockwise and anticlockwise. In the first phase, the round-robin algorithm allocates {\em chores} to agents (i.e., the items for which each agent has non-positive utility), while in the second phase, the reversed round-robin algorithm allocates the remaining {\em goods} to agents, in the opposite order starting with the agent who chooses last in the first phase. Intuitively, each agent $i$ may envy agent $j$ who comes earlier than her at the end of one phase, but $i$ does not envy $j$ with respect to the items allocated in the other round; hence the envy of $i$ towards $j$ can be bounded up to one item. We present a formal description of the algorithm in Algorithm~\ref{alg:double}; see Figure \ref{fig:double} for an illustration. \begin{algorithm} \caption{Double Round Robin Algorithm} \label{alg:double} \begin{algorithmic}[1] \REQUIRE An instance $I=(N,O,U)$. \ENSURE An allocation $\pi$. \STATE Initialize $\pi(i)=\emptyset$ for each agent $i \in N$. \STATE Partition $O$ into $O^+=\{o\in O\mid \exists i\in N \text{ s.t. } u_i(o)> 0\}$, $O^-=\{o\in O\mid \forall i\in N, u_i(o) \leq 0\}$ and suppose $|O^-|=an-k$ for some positive integer $a$ and $k\in \{0,1,\ldots, n-1\}$. \label{step:partition} \STATE Create $k$ dummy chores for which each agent has utility $0$, and add them to $O^-$ (hence, $|O^-|=an$).
\STATE Let the agents come in a round robin sequence $(1,2,\ldots, n)^{*}$ and pick their most preferred item in $O^-$ until all items in $O^-$ are allocated.\label{step:chores} \STATE Let the agents come in a round robin sequence $(n, n-1,\ldots,1)^{*}$ and pick their most preferred item in $O^+$ until all items in $O^+$ are allocated. If an agent has no available item which gives her strictly positive utility, she does not get a real item but pretends to \emph{pick} a dummy one for which she has utility $0$.\label{step:goods} \STATE Remove the dummy items from the current allocation $\pi$ and return the resulting allocation $\pi^*$. \end{algorithmic} \end{algorithm} \begin{figure} \caption{Illustration of Double Round Robin Algorithm. The dotted line corresponds to the picking order when allocating chores. The thick line corresponds to the picking order when allocating goods. The solid black circle indicates the agent who starts the picking. For the dotted (chores) round, agent $1$ is the first agent to pick. For the solid (goods) round, agent $n$ is the first agent to pick.} \label{fig:double} \end{figure} In the following, for an allocation $\pi$ and a bundle $X$, we say that $i$ {\em envies} $j$ {\em with respect to} $X$ if $u_i(\pi(i) \cap X) < u_i(\pi(j) \cap X)$. \begin{theorem} For additive utilities, the double round robin algorithm returns an EF1 allocation in $O(\max \{m\log m,mn\})$ time. \end{theorem} \begin{proof} We note that the algorithm ensures that all agents receive the same number of chores, by introducing $k$ dummy chores. Now let $\pi$ be the output of Algorithm \ref{alg:double}. To see that $\pi$ satisfies EF1, consider any pair of two agents $i$ and $j$ where $i<j$. We will show that by removing one item, these agents do not envy each other. We denote by $c^{i}_t$ and $c^{j}_t$ the $t$-th items allocated to agent $i$ and agent $j$ for $t=1,2,\ldots,a$ in Line \ref{step:chores}, respectively. 
We denote by $g^{i}_t$ and $g^{j}_t$ the $t$-th items allocated to agent $i$ and agent $j$ for $t=1,2,\ldots,b$ in Line \ref{step:goods}, respectively, where $b$ denotes the number of rounds each agent chooses an item (including a dummy item) in Line \ref{step:goods}. First, consider $i$'s envy for $j$. We first observe that the $t$-th item $c^{i}_t$ in $O^-$ allocated to $i$ is weakly preferred by $i$ to the $t$-th item $c^{j}_t$ in $O^-$ allocated to $j$. Hence, agent $i$ does not envy $j$ with respect to $O^-$. Namely, \begin{align} &u_i(\pi(i) \cap O^-)= \sum^a_{t=1}u_i(c^{i}_t) \geq \sum^a_{t=1}u_i(c^{j}_t) =u_i(\pi(j) \cap O^-). \label{eq:chore} \end{align} As for the good allocation, agent $i$ may envy agent $j$ with respect to $O^+$. But if the first item $g^{j}_1$ picked by $j$ from $O^+$ is removed from $j$'s bundle, then the envy will disappear, i.e., $i$ does not envy $j$ with respect to $O^+\setminus \{g^{j}_1\}$. Namely, \begin{align} &u_i(\pi(i) \cap O^+)= \sum^b_{t=1}u_i(g^{i}_t) \geq \sum^b_{t=2}u_i(g^{j}_t) = u_i((\pi(j) \cap O^+)\setminus \{g^{j}_1\}). \label{eq:good} \end{align} The reason is that for each item $g^{j}_t$ picked by $j$ where $t=2,3,\ldots,b$, there is a corresponding item $g^{i}_{t-1}$ picked by $i$ before $j$'s turn that $i$ weakly prefers to $g^{j}_t$. Combining \eqref{eq:chore} and \eqref{eq:good} yields $u_i(\pi(i)) \geq u_i(\pi(j)\setminus \{g^{j}_1\})$. Second, consider $j$'s envy for $i$. Similarly to the above, agent $j$ does not envy agent $i$ with respect to $O^+$ because agent $j$ takes the first pick among $i$ and $j$; that is, for every item $g^{i}_t$ chosen by $i$, agent $j$ picks an item $g^{j}_t$ before $i$ that she weakly prefers to $g^{i}_t$. Thus, \begin{align} &u_j(\pi(j) \cap O^+)= \sum^b_{t=1}u_j(g^{j}_t) \geq \sum^b_{t=1}u_j(g^{i}_t) =u_j(\pi(i) \cap O^+).
\label{eq:good:ji} \end{align} As for the items in $O^-$, for each item $c^{i}_{t}$ picked by $i$ where $t=2,3,\ldots,a$, there is an item $c^{j}_{t-1}$ picked by $j$ before $i$ that $j$ weakly prefers to $c^{i}_{t}$, which implies that $j$ does not envy $i$ with respect to $O^-\setminus \{c^{j}_a\}$. Thus \begin{align} &u_j(\pi(j)\setminus \{c^{j}_a\})= \sum^{a-1}_{t=1}u_j(c^{j}_t) \geq \sum^a_{t=2}u_j(c^{i}_t) \geq \sum^a_{t=1}u_j(c^{i}_t) = u_j(\pi(i)). \label{eq:chore:ji} \end{align} Note that the last inequality holds since $u_j(c^{i}_1) \leq 0$. Combining \eqref{eq:good:ji} and \eqref{eq:chore:ji} yields $u_j(\pi(j)\setminus \{c^{j}_a\}) \geq u_j(\pi(i))$. In either case, the agents do not envy each other by more than one item. We conclude that $\pi$ is EF1, and so is the final allocation $\pi^*$, since removing the dummy items does not affect the agents' utilities. It remains to analyze the running time of Algorithm \ref{alg:double}. Line \ref{step:partition} requires $O(mn)$ time as each item needs to be examined by all agents. Lines \ref{step:chores} and \ref{step:goods} require $O(m\log m)$ time as there are at most $m$ iterations, and in each iteration an agent has to choose her most preferred item out of at most $m$ items, which can be done by sorting all the items according to each agent's preference at the beginning. Thus, the total running time is bounded by $O(\max \{m\log m,mn\})$, which completes the proof.\qed \end{proof} \section{Finding an EF1 and PO allocation} We move on to the next question as to whether fairness is achievable together with efficiency. In the context of good allocation where agents have non-negative additive utilities, \citet{CKMPS19} proved that an outcome that maximizes the {\em Nash welfare} (i.e., the product of utilities) satisfies EF1 and Pareto-optimality simultaneously. The question regarding whether a Pareto-optimal and EF1 allocation exists for chores is unresolved.
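For concreteness, the double round robin procedure analyzed above can be sketched in a few lines of code. The sketch below is illustrative only: the function name and data layout are our own, and the dummy chores of the formal algorithm are left implicit (in the chores round every agent is simply forced to pick while chores remain, and in the goods round an agent passes when nothing of positive value to her is left).

```python
# Illustrative sketch (not the paper's formal pseudocode) of the double
# round robin procedure; util[i][o] is agent i's utility for item o.

def double_round_robin(agents, items, util):
    # O^-: items no agent values positively; O^+: the remaining items.
    o_minus = [o for o in items if all(util[i][o] <= 0 for i in agents)]
    o_plus = [o for o in items if any(util[i][o] > 0 for i in agents)]
    alloc = {i: [] for i in agents}

    # Chores round: order (1, 2, ..., n)*; every agent must pick while
    # chores remain (dummy chores are implicit in this sketch).
    remaining = set(o_minus)
    turn = 0
    while remaining:
        i = agents[turn % len(agents)]
        best = max(remaining, key=lambda o: util[i][o])
        alloc[i].append(best)
        remaining.remove(best)
        turn += 1

    # Goods round: reversed order (n, n-1, ..., 1)*; an agent passes
    # (a "dummy pick") when no remaining item has positive utility.
    remaining = set(o_plus)
    rev = list(reversed(agents))
    turn = 0
    while remaining:
        i = rev[turn % len(rev)]
        liked = [o for o in remaining if util[i][o] > 0]
        if liked:
            best = max(liked, key=lambda o: util[i][o])
            alloc[i].append(best)
            remaining.remove(best)
        turn += 1
    return alloc
```

The goods loop terminates because every item in $O^+$ is valued positively by some agent, and that agent never passes while such an item remains.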
Starting from an EF1 allocation and finding Pareto improvements, one runs into two challenges: first, Pareto improvements may not necessarily preserve EF1; second, finding Pareto improvements is NP-hard~\citep{ABL+16a,KBKZ09a}. Even if we ignore the second challenge, this route has not settled the existence of a Pareto-optimal and EF1 allocation for chores. Next, we show that the problem of finding an EF1 and Pareto-optimal allocation is completely resolved for the restricted but important case of two agents. The main theorem in this section is stated as follows. \begin{theorem}\label{thm:AW} For two agents with additive utilities, a Pareto-optimal and EF1 allocation always exists and can be computed in $O(m^2)$ time. \end{theorem} Our algorithm for the problem can be viewed as a discrete version of the well-known Adjusted Winner (AW) rule~\citep{BT96a,BT96b}. Just like the Adjusted Winner rule, our algorithm finds a Pareto-optimal and EF1 allocation. In contrast to AW, which is designed for goods, our algorithm can handle both goods and chores. The algorithm begins by giving each subjective item to the agent who considers it a good. So, in the following, we assume that we have objective items only, i.e., for each item $o \in O$, either $o$ is a good ($u_i(o)>0$ for each $i \in N$) or a chore ($u_i(o)<0$ for each $i \in N$). Now we call one of the two agents the {\em winner} (denoted by $w$) and the other the {\em loser} (denoted by $\ell$). \begin{enumerate} \item Initially, all goods are allocated to the winner and all chores to the loser. \item We sort the items in monotone non-increasing order of $|u_{\ell}(o)|/|u_w(o)|$, and consider reallocating the items according to this ordering (from the left-most to the right-most item). \item When considering a good, we move it from the winner to the loser. When considering a chore, we move it from the loser to the winner. We stop when the loser does not envy the winner by more than one item.
\end{enumerate} We present a formal description of the algorithm in Algorithm \ref{alg:AW}. \begin{algorithm} \caption{Generalized Adjusted Winner Algorithm} \label{alg:AW} \begin{algorithmic}[1] \REQUIRE An instance $I=(N,O,U)$ where $N=\{w,\ell \}$. \ENSURE An allocation $\pi$. \STATE Initialize $\pi(i)=\emptyset$ for each agent $i \in N$. \STATE Let $O^*_{w}=\{\, o \in O \mid u_{w}(o)\geq 0~\mbox{and}~u_{\ell}(o) \leq 0 \,\}$ and $O^*_{\ell}=\{\, o \in O \mid u_{\ell}(o)\geq 0~\mbox{and}~u_{w}(o) < 0 \,\}$. \STATE Let $O^+=\{\, o \in O \mid u_i(o) > 0\quad \forall i \in N\,\}$ and $O^-=\{\,o \in O \mid u_i(o) <0\quad \forall i \in N\,\}$. \STATE For each item $o \in O^+ \cup O^*_w$, allocate $o$ to agent $w$. For each item $o \in O^{-} \cup O^*_{\ell}$, allocate $o$ to agent $\ell$.\label{line:initialization} \STATE Sort the items in $O^+ \cup O^-=\{o_1,o_2,\ldots,o_r \}$ where $|u_{\ell}(o_1)|/|u_w(o_1)| \ge |u_{\ell}(o_2)|/|u_w(o_2)| \ge \cdots \ge |u_{\ell}(o_r)|/|u_w(o_r)|$. \STATE Set $t=1$. \WHILE{agent $\ell$ envies agent $w$ by more than one item} \IF{$o_t \in O^+$}\label{line:while} \STATE Set $\pi(w)=\pi(w) \setminus \{o_t\}$ and $\pi(\ell)=\pi(\ell) \cup \{o_t\}$. \ELSIF{$o_t \in O^-$} \STATE Set $\pi(w)=\pi(w) \cup \{o_t\}$ and $\pi(\ell)=\pi(\ell) \setminus \{o_t\}$. \ENDIF \STATE Update $t = t +1$. \ENDWHILE \end{algorithmic} \end{algorithm} We will first prove that at any point of the algorithm, the allocation $\pi$ is Pareto-optimal, and so is the final allocation. \begin{lemma}\label{lem:PO:AW} During the execution of Algorithm \ref{alg:AW}, the allocation $\pi$ is Pareto-optimal at any point after Line \ref{line:initialization}. \end{lemma} \begin{proof} It can be easily verified that the allocation $\pi$ just after Line \ref{line:initialization} is Pareto-optimal. Thus, consider some time step after the algorithm enters the {\bf while}-loop of Line \ref{line:while}. Assume towards a contradiction that $\pi'$ is a Pareto-improvement of $\pi$. 
We assume without loss of generality that all items in $O^*_w$ remain assigned to $w$ under $\pi'$, because transferring an item in $O^*_w$ from $w$ to $\ell$ improves neither the utility of $w$ nor that of $\ell$. Likewise, we assume that all items in $O^*_{\ell}$ remain assigned to $\ell$ under $\pi'$. In the following, we call each item $o \in O^+$ a \emph{good} and each item $o \in O^-$ a \emph{chore}. For each $i,j \in \{ w,\ell\}$ with $i \neq j$, let \begin{itemize} \item $G_{ii}$ be the set of goods in $\pi(i) \cap \pi'(i)$; \item $C_{ii}$ be the set of chores in $\pi(i) \cap \pi'(i)$; \item $G_{ij}$ be the set of goods in $\pi(i) \cap \pi'(j)$; \item $C_{ij}$ be the set of chores in $\pi(i) \cap \pi'(j)$. \end{itemize} Consider first the case when, under $\pi'$, the winner's utility is at least as high as under $\pi$, while the loser is strictly better off. Taking into account that the bundles of goods $G_{ww}$ and $G_{\ell \ell}$ and the bundles of chores $C_{ww}$ and $C_{\ell \ell}$ are allocated to the same agent in both allocations, this means \begin{align}\label{eq:pareto-dom1} u_{w}(G_{\ell w})+u_{w}(C_{\ell w})-u_{w}(G_{w \ell})-u_{w}(C_{w \ell}) &\geq 0;~\mbox{and}\\\label{eq:pareto-dom2} u_{\ell}(G_{w \ell})+u_{\ell}(C_{w \ell})-u_{\ell}(G_{\ell w})-u_{\ell}(C_{\ell w}) &>0. \end{align} The crucial observation now is that the algorithm considered all items in $G_{\ell w}$ and $C_{w \ell}$ before the items in $G_{w \ell}$ and $C_{\ell w}$ in the ordering. Indeed, recall that all the goods are initially assigned to the winner, $G_{\ell w} \subseteq \pi(\ell)$, and $G_{w \ell} \subseteq \pi(w)$. Thus the goods in $G_{\ell w}$ are those transferred from the winner $w$ to the loser $\ell$ in the {\bf while}-loop of Line \ref{line:while}, while the goods in $G_{w \ell}$ are those that stay in the winner's bundle. Similarly, recall that all the chores are initially assigned to the loser, $C_{w \ell} \subseteq \pi(w)$, and $C_{\ell w} \subseteq \pi(\ell)$.
Thus, the chores in $C_{w \ell}$ are those transferred from the loser $\ell$ to the winner $w$, while the chores in $C_{\ell w}$ are those that stay in the loser's bundle. Now, let $\alpha$ be such that \begin{align*} \max_{o\in G_{w \ell}\cup C_{\ell w}}{|u_{\ell}(o)|/|u_{w}(o)|} \leq \alpha \leq \min_{o\in G_{\ell w}\cup C_{w \ell}}{|u_{\ell}(o)|/|u_{w}(o)|}. \end{align*} This definition implies the inequalities \begin{align*} u_{\ell}(G_{w \ell})\leq \alpha u_{w}(G_{w \ell}); \qquad u_{\ell}(G_{\ell w})\geq \alpha u_{w}(G_{\ell w}); \\ -u_{\ell}(C_{w \ell})\geq -\alpha u_{w}(C_{w \ell}); \qquad -u_{\ell}(C_{\ell w})\leq -\alpha u_{w}(C_{\ell w}), \end{align*} which, together with inequality (\ref{eq:pareto-dom2}), yield \begin{align*} 0 &< u_{\ell}(G_{w \ell}) + u_{\ell}(C_{w \ell})- u_{\ell}(G_{\ell w}) - u_{\ell}(C_{\ell w}) \\ &\leq-\alpha(u_{w}(G_{\ell w})+u_{w}(C_{\ell w})-u_{w}(G_{w \ell}) - u_{w}(C_{w \ell}))\leq 0, \end{align*} a contradiction. The last inequality follows by (\ref{eq:pareto-dom1}) and by the fact that $\alpha$ is non-negative. A similar argument applies when, under $\pi'$, the loser's utility is at least as high as under $\pi$, while the winner is strictly better off. \qed \end{proof} We are now ready to prove Theorem \ref{thm:AW}. \begin{proof}[of Theorem \ref{thm:AW}] We will prove that the final output $\pi$ of Algorithm \ref{alg:AW} satisfies EF1. Together with Lemma \ref{lem:PO:AW}, this proves the desired claim. Now observe that at the final allocation $\pi$, at most one agent envies the other: if the loser still envies the winner and the winner also envies the loser, then exchanging the bundles would result in a Pareto improvement, contradicting Lemma~\ref{lem:PO:AW}. Thus, if the loser is the one who envies the winner at $\pi$, then $\pi$ is EF1, since the {\bf while}-loop terminates only when the loser's envy is bounded by one item. Consider then the case when, at $\pi$, the loser does not envy the winner but the winner envies the loser. Let $\pi'$ be the previous allocation just before the final transfer in the {\bf while}-loop of Line \ref{line:while}.
Let $W=\pi'(w) \cap \pi(w)$ and $L=\pi'(\ell) \cap \pi(\ell)$. Namely, $W$ (respectively, $L$) is the set of items in the winner's bundle (respectively, the loser's bundle), excluding the last transferred item, under both $\pi'$ and $\pi$. By construction, the loser envies the winner by more than one item at $\pi'$, which implies $u_{\ell}(L) < u_{\ell}(W)$. Suppose towards a contradiction that the winner envies the loser by more than one item at $\pi$, which implies $u_{w}(W) < u_{w}(L)$. \begin{itemize} \item If $g$ is the last good that has been moved from the winner to the loser, then allocating $W$ to $\ell$ and $L \cup \{g\}$ to $w$ would be a Pareto-improvement of $\pi'$, a contradiction. \item If $c$ is the last chore that has been moved from the loser to the winner, then allocating $W\cup \{c\}$ to $\ell$ and $L$ to $w$ would be a Pareto-improvement of $\pi'$, a contradiction. \end{itemize} Hence, the winner does not envy the loser by more than one item at $\pi$; we conclude that $\pi$ is EF1. It remains to analyze the running time of the algorithm. First, the items can be sorted in $O(m \log m)$ time. The adjustment process takes $O(m^2)$ time: each iteration checks whether the allocation is EF1 from the viewpoint of the loser, which requires at most $m$ comparisons of utilities, and there are at most $m$ iterations. Thus, the number of operations is bounded by $O(m^2)$.\qed \end{proof} The example below illustrates our discrete adaptation of AW. \begin{example}[Example of the generalized AW] Consider two agents, Alice and Bob, and seven items with the following additive utilities, where the gray circles correspond to goods and the white circles correspond to chores. \begin{center} \upshape \setlength{\tabcolsep}{6.4pt} \begin{tabular}{rcccccccc} \toprule & \multicolumn{7}{l}{\!\!\!
\begin{tikzpicture}[scale=0.57, transform shape, every node/.style={minimum size=7mm, inner sep=1.3pt, font=\huge}] \node[draw, circle, fill=gray!70](1) at (1.2,0) {$1$}; \node[draw, circle](2) at (2.4,0) {$2$}; \node[draw, circle,fill=gray!70](3) at (3.6,0) {$3$}; \node[draw, circle, fill=gray!70](4) at (4.8,0) {$4$}; \node[draw, circle](5) at (6,0) {$5$}; \node[draw, circle](6) at (7.3,0) {$6$}; \node[draw, circle](7) at (8.6,0) {$7$}; \end{tikzpicture}\!\!\!\! } \\ \midrule Alice (winner) :\!\! & 1 & -1 & 2 & 1 &-2&-4& -6 \\ Bob (loser) :\!\! & 4 & -3 & 6 & 2 & -2 & -2 & -2\\ $|u_{\ell}(o)|/|u_w(o)|$ :\!\! & 4 & 3 & 3 & 2 & 1 & 1/2 & 1/3\\ \bottomrule \end{tabular} \end{center} The generalized AW initially allocates the goods to Alice and the chores to Bob. Then, it transfers item $1$ (a good) from Alice to Bob and moves item $2$ (a chore) from Bob to Alice. After item $3$ (a good) is moved from Alice to Bob, Bob no longer envies Alice by more than one item. Hence the final allocation gives the items $2$ and $4$ to Alice and the rest to Bob.\qed \end{example} A natural question is whether PO and EF1 allocations exist for three or more agents; we leave this as an interesting open question. We remark that Pareto-optimality by itself is easy to achieve in $O(nm)$ time: it suffices to give each item to the agent who values it the most. \section{Finding a Connected PROP1 Allocation} We saw that there always exists an EF1 allocation for subjective goods and chores. If we weaken EF1 to PROP1, one can achieve one additional requirement besides fairness, that is, {\em connectivity}. In this section, we will consider a situation when items are placed on a path, and each agent desires a connected bundle of the path. Finding a connected set of items is important in many scenarios. For example, the items can be a set of rooms in a corridor and the agents could be research groups, each of which wants to get adjacent rooms (see e.g., \citealp{BCE17a,BCFIMPVZ19}).
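Before turning to connected allocations, we record a compact sketch of the generalized adjusted winner rule from the previous section. The code is illustrative only: the data layout and names are our own, and ties between equal ratios are broken by item index, which matches the ordering used in the seven-item example above.

```python
# Illustrative sketch of the generalized adjusted winner rule for two
# agents 'w' (winner) and 'l' (loser); util[agent][item] is a float.

def loser_envies_by_more_than_one(bundle, ul):
    own = sum(ul[o] for o in bundle['l'])
    other = sum(ul[o] for o in bundle['w'])
    if own >= other:
        return False  # no envy at all
    # EF1 from the loser's viewpoint: some single removal, from either
    # bundle, must eliminate the envy.
    if any(own >= other - ul[o] for o in bundle['w']):
        return False
    if any(own - ul[o] >= other for o in bundle['l']):
        return False
    return True

def generalized_adjusted_winner(items, util):
    uw, ul = util['w'], util['l']
    goods = [o for o in items if uw[o] > 0 and ul[o] > 0]
    chores = [o for o in items if uw[o] < 0 and ul[o] < 0]
    # Subjective items go to whoever sees them as a good.
    bundle = {'w': {o for o in items if uw[o] >= 0 >= ul[o]} | set(goods),
              'l': {o for o in items if ul[o] >= 0 > uw[o]} | set(chores)}
    # Objective items in non-increasing order of |u_l(o)| / |u_w(o)|,
    # ties broken by item index for determinism.
    order = sorted(goods + chores,
                   key=lambda o: (-abs(ul[o]) / abs(uw[o]), o))
    for o in order:
        if not loser_envies_by_more_than_one(bundle, ul):
            break
        if o in goods:            # move a good: winner -> loser
            bundle['w'].remove(o)
            bundle['l'].add(o)
        else:                     # move a chore: loser -> winner
            bundle['l'].remove(o)
            bundle['w'].add(o)
    return bundle
```

On the seven-item example above, this sketch ends with items $2$ and $4$ in the winner's bundle and the rest with the loser, matching the example.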
We will show that a connected PROP1 allocation exists and can be found efficiently. In what follows, we assume that the path is given by a sequence of items $(o_1,o_2,\ldots,o_m)$. Formally, we say that an allocation $\pi$ is \emph{connected} if for each agent $i \in N$, $\pi(i)$ is connected in the path $(o_1,o_2,\ldots,o_m)$. We will consider a slightly more stringent notion of PROP1: A connected allocation $\pi$ is PROP1$_{outer}$ if for each agent $i \in N$, \begin{itemize} \item agent $i$ receives a bundle of utility at least her proportional fair share, i.e., $u_i(\pi(i))\geq u_{i}(O)/n$; or \item $u_i(\pi(i))+u_i(o)\geq u_{i}(O)/n$ for some item $o\in O\setminus \pi(i)$ such that $\pi(i) \cup \{o\}$ is connected; or \item $u_i(\pi(i))-u_i(o)\geq u_{i}(O)/n$ for some $o\in \pi(i)$ such that $\pi(i) \setminus \{o\}$ is connected. \end{itemize} We first prove a result in the cake-cutting setting that is of independent interest. In the following, a {\em mixed cake} is the interval $[0,m]$. Each agent $i \in N$ has a utility function ${\hat u}_i$ that assigns a real value to each subinterval of the cake; agent $i$'s utility is uniform on each interval $[j-1,j]$, with ${\hat u}_i([j-1,j])=u_i(o_j)$ for each $j \in [m]$. The \emph{proportional fair share} of agent $i$ for a mixed cake $[0,m]$ is given by ${\hat u}_i([0,m])/n$. A {\em contiguous allocation} of a mixed cake assigns each agent a disjoint sub-interval of the cake where the union of the intervals equals the entire cake $[0,m]$; it satisfies {\em proportionality} if each agent $i$ gets an interval of utility at least her proportional fair share. \begin{theorem}\label{thm:proportional} For additive utilities, a contiguous proportional allocation of a mixed cake exists and can be computed in polynomial time. \end{theorem} \begin{proof} Let $N^+$ be the set of agents with strictly positive total utility for $O$. We combine the moving-knife algorithms for goods and chores as follows.
First, if there is an agent who has positive proportional fair share, i.e., $N^+ \neq \emptyset$, we apply the moving-knife algorithm only to the agents in $N^+$. Our algorithm moves a knife from \emph{left} to \emph{right}, and agents shout whenever the left part of the cake has utility exactly equal to their proportional fair share. The first agent who shouts is allocated the left bundle, and the algorithm recurs on the remaining instance. Second, if no agent has a positive proportional fair share, our algorithm moves a knife from \emph{right} to \emph{left}, and agents shout whenever the left part of the cake has utility exactly equal to their proportional fair share. Again, the first agent who shouts is allocated the left bundle, and the algorithm recurs on the remaining instance. \begin{algorithm} \caption{Generalized Moving-knife Algorithm $\mathcal{A}$} \label{alg:movingknife} \begin{algorithmic}[1] \REQUIRE A sub-interval $[\ell,r]$, agent set $N'$, utility functions ${\hat u}_i$ for each $i \in N'$. \ENSURE An allocation ${\hat \pi}$ of a mixed cake $[\ell,r]$ to $N'$. \STATE Initialize ${\hat \pi}(i)=\emptyset$ for each $i \in N'$. \STATE Set $N^+=\{\, i \in N' \mid {\hat u}_i([\ell,r]) > 0\,\}$. \IF{$N^+ \neq \emptyset$} \IF{$|N^+|=1$} \STATE Allocate $[\ell,r]$ to the unique agent in $N^+$. \ELSE \STATE Let $x_i$ be the minimum point where ${\hat u}_i([\ell,x_i])={\hat u}_i([\ell,r])/|N^+|$ for $i \in N^+$. \STATE Find agent $j$ with minimum $x_j$ among all agents in $N^+$. \RETURN ${\hat \pi}$ where ${\hat \pi}(j)=[\ell,x_{j}]$ and ${\hat \pi}|_{N'\setminus \{j\}}=\mathcal{A}([x_{j},r],N' \setminus \{j\},({\hat u}_i)_{i \in N'\setminus \{j\}})$ \ENDIF \ELSE \STATE Let $x_i$ be the maximum point where ${\hat u}_i([\ell,x_i])={\hat u}_i([\ell,r])/|N'|$ for $i \in N'$. \STATE Find agent $j$ with maximum $x_j$ among all agents in $N'$.
\RETURN ${\hat \pi}$ where ${\hat \pi}(j)=[\ell,x_{j}]$ and ${\hat \pi}|_{N'\setminus \{j\}}=\mathcal{A}([x_{j},r],N' \setminus \{j\},({\hat u}_i)_{i \in N'\setminus \{j\}})$ \ENDIF \end{algorithmic} \end{algorithm} Algorithm \ref{alg:movingknife} formalizes the idea. To prove its correctness, we will prove by induction on the number of agents $|N'|$ that the allocation of a mixed cake $[\ell,r]$ produced by $\mathcal{A}$ satisfies the following: \begin{itemize} \item if $N^+ \neq \emptyset$, then each agent in $N^+$ receives an interval of utility at least her proportional fair share ${\hat u}_i([\ell,r])/|N'|$ and each agent not in $N^+$ receives an empty piece; and \item if $N^+ = \emptyset$, then each agent receives an interval of utility at least her proportional fair share ${\hat u}_i([\ell,r])/|N'|$. \end{itemize} The claim is clearly true when $|N'|=1$. Suppose that $\mathcal{A}$ returns a proportional allocation of a mixed cake with the desired properties when $|N'| = k-1$; we will prove it for $|N'|=k$. Suppose that some agent has positive proportional fair share, i.e., $N^+ \neq \emptyset$. Note that each agent $i$ not in $N^+$ has non-positive proportional fair share and gets nothing; thus it suffices to show that the agents in $N^+$ receive bundles of utility at least their proportional fair shares. If $|N^+|=1$, the claim is trivial; thus assume otherwise. Clearly, agent $j$ receives an interval of utility at least her proportional fair share. Further, all other agents in $N^+$ have utility at most their proportional fair shares for the left piece $[\ell,x_j]$. Indeed, if there were an agent $i' \in N^+$ whose utility for the left piece $[\ell,x_j]$ is greater than her proportional fair share ${\hat u}_{i'}([\ell,r])/k$, then $i'$ would have shouted before the knife reached $x_j$, by the continuity of ${\hat u}_{i'}$; that is, $x_{i'}<x_j$, contradicting the minimality of $x_j$.
Thus, the remaining agents in $N^+$ have at least $(k-1) \cdot {\hat u}_i([\ell,r])/k$ utility for the rest of the cake $[x_j,r]$; hence, by the induction hypothesis, each agent in $N^+$ gets an interval of utility at least her proportional fair share, and each of the remaining agents gets an empty piece. Suppose now that no agent has positive proportional fair share. Again, if there were an agent $i'$ whose utility for the left piece $[\ell,x_j]$ is greater than her proportional fair share ${\hat u}_{i'}([\ell,r])/k$, then $i'$ would have shouted before the knife reached $x_j$, by the continuity of ${\hat u}_{i'}$; that is, $x_{i'}>x_j$, contradicting the maximality of $x_j$. Thus, all the remaining agents have utility at least $(k-1) \cdot {\hat u}_i([\ell,r])/k$ for the rest of the cake $[x_j,r]$, and hence, by the induction hypothesis, each agent gets an interval of utility at least her proportional fair share ${\hat u}_i([\ell,r])/k$. It can be easily verified that Algorithm \ref{alg:movingknife} runs in polynomial time. \qed \end{proof} The theorem stated above also applies to a general cake-cutting model in which information about the agents' utility functions over intervals can be inferred through a series of queries. We note that in contrast with proportionality, the existence of a contiguous envy-free allocation of a mixed cake remains elusive: it is known to exist only when the number $n$ of agents is four or a prime number \citep{Halevi2018,MeSh18a}. Next, we show how a fractional proportional allocation can be used to achieve a contiguous PROP1 division of indivisible items. \begin{theorem} For additive utilities, a connected PROP1$_{outer}$ allocation of a path always exists and can be computed in polynomial time. \end{theorem} \begin{proof} Given a path $(o_1,o_2,\ldots,o_m)$, consider a mixed cake $[0,m]$ where each agent $i$'s utility is uniform on each interval $[j-1,j]$, with ${\hat u}_i([j-1,j])=u_i(o_j)$ for each $j \in [m]$.
We know that this instance admits a contiguous and proportional allocation ${\hat \pi}$ from Theorem \ref{thm:proportional}. Suppose without loss of generality that under such an allocation ${\hat \pi}$, agents $1, 2,\ldots, n$ receive the 1st, 2nd, $\ldots$, and $n$-th bundles from left to right. That is, each agent $i=1,2,\ldots,n$ receives the sub-interval $[x_{i-1},x_i]$ of the mixed cake, where $0=x_0 \leq x_1 \leq \ldots \leq x_{n-1} \leq x_n=m$. Without loss of generality, we also assume that no agent gets the empty bundle under this fractional allocation, i.e., $x_{i-1}< x_i$ for each $i=1,2,\ldots, n$. From left to right, we show how to allocate each item $o_j$ for $j=1,2,\ldots,m$ to construct an integral allocation ${\pi}$. If item $o_j$ is fully contained in some agent's bundle, namely, $x_{i-1} \leq j-1 \leq j \leq x_i$ for some $i \in N$, then we assign item $o_j$ to agent $i$. If not, i.e., the item $o_j$ is on a boundary, we allocate it according to the left-most/right-most agents' preferences. Formally, suppose that $j-1 \leq x_{\ell} \leq x_{\ell+1} \leq \ldots \leq x_r \leq j$, where $x_{\ell}=\min \{\, x_i \mid x_i \geq j-1 \,\}$ and $x_r=\max \{\, x_i \mid x_i \leq j \,\}$. Then we do the following: \begin{enumerate} \item If the two agents $\ell$ and $r$ disagree on the sign of $o_j$, i.e., $\min \{u_{\ell}(o_j), u_{r}(o_j)\} <0 < \max \{u_{\ell}(o_j), u_{r}(o_j)\}$, we give the item $o_j$ to the agent $i \in \{\ell,r\}$ who has positive utility for it. \item If the two agents $\ell$ and $r$ agree on the sign of $o_j$, i.e., $\min \{u_{\ell}(o_j), u_{r}(o_j)\} \geq 0$ or $\max \{u_{\ell}(o_j), u_{r}(o_j)\} < 0$, we allocate the item $o_j$ in such a way that: \begin{itemize} \item the left-agent $\ell$ takes $o_j$ if both agents have non-negative utility, i.e., $\min \{u_{\ell}(o_j), u_{r}(o_j)\} \geq 0$; \item the right-agent $r$ takes $o_j$ if both agents have negative utility, i.e., $\max \{u_{\ell}(o_j), u_{r}(o_j)\} < 0$.
\end{itemize} \end{enumerate} Note that if there is an agent who gets a fraction of one item only under the proportional fractional division, the agent gets nothing under our final allocation. The resulting integral allocation $\pi$ is PROP1$_{outer}$. To see this, take any agent $i$. Clearly, when one of the knife positions $x_{i-1}$ and $x_i$ is integral, the bundle satisfies PROP1$_{outer}$. Further, if $[x_{i-1},x_i] \subseteq [j-1,j]$ for some $j \in [m]$, agent $i$ satisfies the PROP1$_{outer}$ condition, whether she receives the item $o_j$ or the empty bundle. Thus, assume otherwise, i.e., $x_{i-1},x_i \not \in \{0,1,\ldots,m\}$ and $x_{i-1}$ and $x_i$ lie in two distinct unit intervals. We will show that such an agent obtains her proportional fair share after either receiving the item adjacent to her bundle or deleting the left-most item of her bundle. Let $o_{\ell}$ and $o_r$ be the left and right boundary items, where $x_{i-1} \in (\ell-1,\ell)$ and $x_{i} \in (r-1,r)$. Note that we have $\{o_{\ell+1},o_{\ell+2},\ldots,o_{r-1}\} \subseteq \pi(i)$. Consider the following four cases. \begin{itemize} \item Both $o_{\ell}$ and $o_r$ are goods for $i$, i.e., $\min \{u_i(o_{\ell}),u_i(o_r)\} \geq 0$. In this case, agent $i$ receives at least $o_r$. Thus, if $o_{\ell} \in \pi(i)$, agent $i$ obtains her proportional fair share. If not, agent $i$ obtains her proportional fair share by receiving the item $o_{\ell}$. \item Both $o_{\ell}$ and $o_r$ are chores for $i$, i.e., $\max \{u_i(o_{\ell}),u_i(o_r)\} < 0$. In this case, agent $i$ does not receive $o_r$. Thus, if $o_{\ell} \not \in \pi(i)$, agent $i$ obtains her proportional fair share. If not, agent $i$ obtains her proportional fair share by removing the item $o_{\ell}$. \item The item $o_{\ell}$ is a good but $o_{r}$ is a chore for $i$, i.e., $u_i(o_{\ell}) \geq 0$ and $u_i(o_r) <0$. In this case, agent $i$ does not receive $o_r$. Thus, if $o_{\ell} \in \pi(i)$, agent $i$ obtains her proportional fair share. If not, agent $i$ obtains her proportional fair share by receiving the item $o_{\ell}$.
\item The item $o_{\ell}$ is a chore but $o_{r}$ is a good for $i$, i.e., $u_i(o_{\ell}) < 0$ and $u_i(o_r) \geq 0$. In this case, agent $i$ receives at least $o_r$. Thus, if $o_{\ell} \not \in \pi(i)$, agent $i$ obtains her proportional fair share. If not, agent $i$ obtains her proportional fair share by removing the item $o_{\ell}$. \end{itemize} We conclude that $\pi$ is a connected PROP1$_{outer}$ allocation. By Theorem \ref{thm:proportional}, it is immediate that one can compute a connected PROP1$_{outer}$ allocation in polynomial time.\qed \end{proof} \section{Discussion} In this paper, we have formally studied fair allocation when the items are a combination of subjective goods and chores. Our work paves the way for a detailed examination of the allocation of goods and chores, and opens up an interesting line of research, with many problems left to explore. Perhaps the most intriguing open question arising from our study is the existence of an EF1 allocation under arbitrary non-monotonic utilities. There are further fairness concepts that could be studied with respect to both existence and computational complexity, most notably {\em envy-freeness up to the least valued item} (EFX) \citep{CKMPS19}. In our setting, one can define an allocation $\pi$ to be \emph{EFX} if for any pair of agents $i$ and $j$, agent $i$ does not envy agent $j$, or the following two conditions hold: \begin{enumerate} \item $\forall o\in \pi(i)$ s.t. $u_i(\pi(i)\setminus \{o\})>u_i(\pi(i))$: $u_i(\pi(i)\setminus \{o\})\geq u_i(\pi(j))$; and \item $\forall o\in \pi(j)$ s.t. $u_i(\pi(j)\setminus \{o\})<u_i(\pi(j))$: $u_i(\pi(i))\geq u_i(\pi(j)\setminus \{o\})$. \end{enumerate} That is, $i$'s envy towards $j$ can be eliminated by either removing $i$'s least valuable good from $j$'s bundle or removing $i$'s favorite chore from $i$'s bundle. \citet{CKMPS19} mentioned the following `enigmatic' problem: does an EFX allocation exist for goods?
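The EFX condition just defined is straightforward to check for a given pair of bundles. The following sketch is our own illustrative code (additive utilities stored as a dict is an assumed data layout); it tests agent $i$'s envy toward agent $j$:

```python
# Illustrative check of the EFX condition defined above; ui maps items
# to agent i's additive utilities (the data layout is our assumption).

def efx_toward(ui, bundle_i, bundle_j):
    vi = sum(ui[o] for o in bundle_i)
    vj = sum(ui[o] for o in bundle_j)
    if vi >= vj:
        return True  # agent i does not envy agent j at all
    # Condition 1: removing ANY chore from i's own bundle kills the envy.
    cond1 = all(vi - ui[o] >= vj for o in bundle_i if ui[o] < 0)
    # Condition 2: removing ANY good from j's bundle kills the envy.
    cond2 = all(vi >= vj - ui[o] for o in bundle_j if ui[o] > 0)
    return cond1 and cond2
```

The binding cases in the two conditions are, respectively, $i$'s "favorite" (least harmful) chore in her own bundle and $i$'s least valuable good in $j$'s bundle, as described above.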
It would be interesting to investigate the same question for subjective or objective goods/chores under additive utilities. We also note that while the relationship between Pareto-optimality and most fairness notions is still unclear, \citet{CFS17a} proposed a fairness concept called {\em round robin share} (RRS) that can be achieved together with Pareto-optimality. In our context, RRS can be formalized as follows. Given an instance $I=(N,O, U)$, consider running the round robin procedure on the modified instance in which every agent has the same utilities as agent $i$; the minimum utility achieved by any of the agents in this run is RRS$_i(I)$. An allocation satisfies RRS if each agent $i$ gets utility at least RRS$_i(I)$. It would then be very natural to ask what is the computational complexity of finding an allocation satisfying both properties. Finally, recent papers of \citet{BCE17a} and Bil\`{o} et al. (2019) showed that a connected allocation satisfying several fairness notions, such as MMS and EF1, is guaranteed to exist for some restricted domains. These existence results crucially rely on the fact that the agents have monotonic valuations, and it remains open whether similar results can be obtained in fair division of indivisible goods and chores. \end{document}
\begin{document} \title{Quasilinear Schr\"odinger Equations} \author{Nicholas P. Michalowski} \maketitle \begin{abstract} In this paper we prove local well-posedness for quasi-linear Schr\"odinger equations with initial data in unweighted Sobolev spaces. For small initial data with minimal smoothness this has been addressed by J.~Marzuola, J.~Metcalfe and D.~Tataru \cite{JMJMDT2012}, \cite{JMJMDT2012-2}. This work does not attempt to address the minimal regularity for initial data, but instead builds on the previous results of C.~Kenig, G.~Ponce, and L.~Vega \cite{CKGPLV1998}, \cite{CKGPLV2004} to remove the smallness condition in unweighted spaces. This is accomplished by developing a non-centered version of Doi's Lemma, which allows one to prove Kato type smoothing estimates. These estimates make it possible to achieve the necessary a priori linear results. \end{abstract} \section{Introduction} We are interested in the local solvability of the IVP \begin{equation}\label{theproblem} \left\{\begin{aligned} &\partial_t u = ia_{jk}(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\partial_{x_j}\partial_{x_k}u + \vec b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla u\\ &\qquad + \vec{b}_2(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla \bar u +c_1(\ensuremath{x, t, u, \bar{u}})u + c_2(\ensuremath{x, t, u, \bar{u}})\bar u +f(x,t)\\ &u(x,0)=u_0(x). \end{aligned}\right. \end{equation} Quasi-linear Schr\"odinger equations have been studied extensively in recent years. The aim of the current work is to extend some results of C.~Kenig, G.~Ponce, and L.~Vega in \cite{CKGPLV2004}. In particular we aim to remove the assumption that the initial data satisfies $\jap{x}^2\partial_x^\alpha u_0\in L^2$ for suitable $\alpha.$ As pointed out in \cite{CKGPLV2004}, other forms of these equations have been extensively studied.
In \cite{CKGPLV1993}, the same authors show that the equation \begin{equation}\label{constcoeffNLS} \partial_t u +i\scr{L}u=P(u,\bar u, \nabla u, \nabla \bar u)\end{equation} with \[\scr{L}=\sum_{i=1}^{k}\partial_{x_i}^2-\sum_{i=k+1}^{n}\partial_{x_i}^2\] and $P(\cdot)$ a non-linearity, is locally well posed for small initial data in $H^s.$ The smallness condition was first removed for $n=1$ by N.~Hayashi and T.~Ozawa in \cite{NHTO1994}. After a change of variables they were able to write the equation as an equivalent system that did not involve first order terms in $u$. Such a system can be handled by the energy method. For the elliptic case, when $\scr{L}=\Delta$, H.~Chihara \cite{HC1995} was able to remove the smallness condition in all dimensions. Again, the main idea here was to use a transformation which eliminates the first order terms in $u$ so that the energy method applies. For the change of variables to cancel the first order terms it was necessary to first diagonalize the system for $(u,\bar u).$ In order to diagonalize the system, as we will see below, the ellipticity of $\scr{L}$ is essential. In \cite{CKGPLV1998}, Kenig et al.\ removed the smallness condition in all dimensions. They construct a pseudo-differential operator $C$ so that $\overline{Cv}=C\overline{v}$, and because of this they are able to avoid the diagonalization argument needed in \cite{HC1995}. The construction of $C$ produces a symbol in the Calder\'on-Vaillancourt class. As one moves to variable coefficient second order terms it becomes necessary to introduce non-trapping conditions on the coefficients. Consider the equation \begin{equation} \left\{ \begin{aligned} &\partial_tu=i\partial_{x_k}a_{kj}(x)\partial_{x_j}u + \vec b_1(x)\cdot \nabla u +f\\ &u|_{t=0}=u_0 \end{aligned} \right. \end{equation} where $a_{kj}$ is elliptic and asymptotically flat.
Ichinose \cite{WI1984} showed that \[\sup_{\substack{x_0\in\mathbb{R}^n\\ \xi_0\in S^{n-1}\\ t_0\in\mathbb{R}}} \abs{\int_0^{t_0}\Imag \vec b_1(X(s,x_0,\xi_0))\cdot\Xi(s,x_0,\xi_0)\,ds}<\infty\] is a necessary condition for the estimate \[\sup_{0<t<T}\norm{u}_{L^2}\leq C_T\paren{\norm{u_0}_{L^2}+ \norm{f}_{L^1_T L^2_x}}.\] The non-trapping assumption is closely related to local smoothing estimates, which are key to the linear theory. This can be seen from the work of S.~Doi (\cite{SD1994},\cite{SD1996}), Craig et al.\ \cite{WCTKWS1995}, and others. From their work it can be seen that, under appropriate smoothness, ellipticity and asymptotic flatness assumptions, the non-trapping condition for \begin{equation} \left\{ \begin{aligned} &\partial_tu=i\partial_{x_k}a_{kj}(x)\partial_{x_j}u \\ & u|_{t=0}=u_0 \end{aligned}\right. \end{equation} implies local smoothing estimates, that is, estimates of the form\\ $\norm{J^{1/2}u}_{L^2(\mathbb{R}^n\times[0,T],\jap{x}^m\,dxdt)}\leq C_T\norm{u_0}_{L^2}$. Conversely, Doi \cite{SD2000} showed that, under the same conditions, if the above estimate holds then the non-trapping assumption must hold. C.~Kenig et al.\ (\cite{CKGPCRLV2005}, \cite{CKGPCRLV2006}) have extended the results of their previous work in the variable coefficient case by removing ellipticity assumptions. Their work assumes that the initial data is in a weighted Sobolev space. It will be the subject of future work to extend the methods here to remove the weights in these cases. Recently, in both the elliptic and hyperbolic settings, J.~Marzuola, J.~Metcalfe and D.~Tataru \cite{JMJMDT2012-2} have established low regularity local well-posedness for small initial data in $H^s$ for $s>(n+5)/2$. Having a smallness condition on the initial data allows the authors to avoid explicit non-trapping assumptions.
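The non-trapping condition that recurs throughout the discussion above can be probed numerically for a concrete coefficient matrix: integrate the bicharacteristic system and check that the spatial projection leaves every bounded set. The sketch below is our own illustration (the scalar coefficient $a(x)=1+e^{-\abs{x}^2}$, which is elliptic and asymptotically flat, and all function names are our assumptions, not taken from the cited works); it integrates Hamilton's equations for $h(x,\xi)=a(x)\abs{\xi}^2$ in $\mathbb{R}^2$ with a classical Runge--Kutta step.

```python
import math

def a(x):
    # scalar coefficient a(x) = 1 + exp(-|x|^2): elliptic, asymptotically flat
    return 1.0 + math.exp(-(x[0]**2 + x[1]**2))

def grad_a(x):
    g = -2.0 * math.exp(-(x[0]**2 + x[1]**2))
    return [g * x[0], g * x[1]]

def rhs(state):
    # Hamilton's equations for h(x, xi) = a(x)|xi|^2:
    #   X' = dh/dxi = 2 a(X) Xi,    Xi' = -dh/dx = -(grad a)(X) |Xi|^2
    x, xi = state[:2], state[2:]
    ax, ga = a(x), grad_a(x)
    n2 = xi[0]**2 + xi[1]**2
    return [2.0*ax*xi[0], 2.0*ax*xi[1], -ga[0]*n2, -ga[1]*n2]

def flow(x0, xi0, T=20.0, steps=4000):
    # classical fourth-order Runge-Kutta integration of the bicharacteristic flow
    s = list(x0) + list(xi0)
    h = T / steps
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs([v + 0.5*h*k for v, k in zip(s, k1)])
        k3 = rhs([v + 0.5*h*k for v, k in zip(s, k2)])
        k4 = rhs([v + h*k for v, k in zip(s, k3)])
        s = [v + (h/6.0)*(p + 2.0*q + 2.0*r + w)
             for v, p, q, r, w in zip(s, k1, k2, k3, k4)]
    return s[:2], s[2:]

# start outside the Gaussian bump, moving tangentially; the ray escapes
X, Xi = flow([2.0, 0.0], [0.0, 1.0])
escaped = math.hypot(X[0], X[1]) > 10.0
```

Since $h$ is conserved along the true flow, the drift in $a(X)\abs{\Xi}^2$ gives a built-in accuracy check for the integrator.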
In \cite{JMJMDT2012} the above authors also considered the situation in which quadratic interactions are present and established low regularity well-posedness results. Our own contribution to this body of research is to remove the smallness condition from the results of Kenig, Ponce, and Vega; that is, we prove local well-posedness in unweighted Sobolev spaces without imposing any smallness condition on the initial data. Specifically, we assume the following conditions on the coefficients. Let $B_M^k(0)=\{z\in \mathbb{C}^k\mid \abs{z}<M\}.$ \begin{enumerate}[labelsep=\parindent, leftmargin=*, label=(NL\arabic*)] \item\label{NLRegularity} There exists $\tilde N=\tilde N(n)\in\mathbb{N}$ such that, for any $M>0$, $a_{jk}, b_{1,j}, b_{2,j}\in C_b^{\tilde N}(\mathbb{R}^{n}\times\mathbb{R}\times B_M^{2n+2}(0))$ for $j, k=1, 2, \ldots, n$, and $c_1, c_2\in C_b^{\tilde N}(\mathbb{R}^n\times \mathbb{R} \times B^2_M(0)).$ \item\label{NLReal2ndOrder} Let $(x, t, \vec z)\in \mathbb{R}^n\times \mathbb{R}\times \mathbb{C}^{2n+2}.$ The matrix $A(x, t, \vec z) = \paren{a_{jk}(x, t, \vec z)}_{j,k=1, \ldots, n}$ is real valued. \item\label{NLSymmetric} The matrix $A(x,t,\vec z)$ is symmetric for all $x\in \mathbb{R}^n$, $t\in \mathbb{R}$ and $\vec z\in B_M^{2n+2}(0).$ \item\label{NLElliptic} For $\vec z \in B_M^{2n+2}(0)$ the matrix $A(x,t,\vec z)$ is uniformly elliptic. That is, there exists $\gamma_M>0$ such that \[\gamma_M\abs{\xi}^2\leq \sum_{j,k=1}^n a_{jk}(x,t,\vec z)\xi_j\xi_k\leq \gamma_M^{-1}\abs{\xi}^2,\] for all $x\in \mathbb{R}^n$, $t\in \mathbb{R}$ and $\vec z\in B_M^{2n+2}(0).$ \item\label{NLFlat} The matrix $A(x,t,\vec z)$ is asymptotically flat. That is, there exists a constant $C_M$ such that $I-A(x,t,\vec z)$, together with any derivatives of $A(x,t,\vec z)$ up to order 2 (not including $\partial_t^2 A(x,t,\vec z)$), are bounded in absolute value by $\frac{C_M}{\jap{x}^2}$.
\item\label{NLFirstOrder} Here and throughout we let $\mathbb{R}^n=\bigcup_{\mu\in\mathbb{Z}^n} Q_\mu$ where $Q_\mu$ are unit cubes with vertices in $\mathbb{Z}^n$ and centers $x_\mu$. Suppose that, for $j=1,2$, $b_j(x,t,0,0,\vec 0,\vec 0)=0$, and $\partial_{z_i} b_j(x,t,0,0,\vec 0,\vec 0)=0$. Also, for some $C_M>0$, we have that $\partial_{x_l}a_{jk}(x,t,\vec z) = \sum_{\mu\in\mathbb{Z}^n}\alpha_\mu\phi_\mu^{ljk}(x,t,\vec z)$ with $\alpha_\mu>0$, $\sum \alpha_\mu<C_M$, $\phi_\mu^{ljk}(\cdot,t,\vec z)\in C^{\tilde N}(\mathbb{R}^n)$ with $\norm{\phi_\mu^{ljk}}_{C^{\tilde N}}\leq 1$ and uniformly for $t\in \mathbb{R}$ and $\vec z\in B_M^{2n+2}(0)$ we have $\supp \phi_{\mu}^{ljk}\subseteq Q_\mu^*$ (the double of $Q_\mu$) for $l,k,j=1\ldots n$. Similarly for $\partial_ta_{jk}$, $\partial_{z_m}a_{jk}$, and $\partial_t\partial_{z_m}a_{jk}$. \item\label{NLNoTrap} We associate to our coefficients and our initial data the symbol $$h(x,\xi)=a_{jk}(x,0,u_0,\bar u_0,\nabla u_0,\nabla\bar u_0)\xi_j\xi_k .$$ We assume the bicharacteristic flow obtained from $h$ is non-trapping. That is, the solution to the system of ODEs \[\left\{ \begin{aligned} &\frac{d}{dt}X_j(s,x,\xi)=\diff{h}{\xi_j}(X,\Xi) \\ &\frac{d}{dt}\Xi_j(s,x,\xi)=-\diff{h}{x_j}(X,\Xi) \\ & X(0,x,\xi)=x \quad\text{and}\quad\Xi(0,x,\xi)=\xi \end{aligned} \right.\] is such that the sets $\{X(s,x,\xi)\mid s>0\}$ and $\{X(s,x,\xi) \mid s<0\}$ are unbounded for all $(x,\xi)\in\mathbb{R}^n\times\paren{\mathbb{R}^n\setminus\{0\}}$.
\end{enumerate} \begin{thm}\label{solution} Under these assumptions there exist $\tilde N$ and $s$, depending only on the dimension, so that if we are given $u_0\in H^s$ and $f\in L^\infty([0,1];H^s)$, then there is a $T_0<1$, depending on the norms of $u_0$ and $f$ and the constants in \nlinref{NLRegularity}-\nlinref{NLNoTrap}, so that there is a unique solution to \eqref{theproblem}, $u(x,t)$, on the interval $[0,T_0]$ satisfying \[u\in C([0,T_0];H^{s-1})\cap L^\infty([0,T_0];H^s).\] \end{thm} The remainder of the paper is organized as follows. In section~\ref{chap:doi} we establish an uncentered version of Doi's lemma necessary to later results. In section~\ref{sec:linear} we establish a priori linear results. Finally, in section~\ref{sec:nonlinear}, we give the proof of Theorem~\ref{solution}. \section{Doi's Lemma}\label{chap:doi} Doi's lemma is a key estimate that allows us to obtain local smoothing. It is the local smoothing estimates that allow us to handle the first order terms in the linear theory. In this section we present two variants of Doi's lemma, one due to S.~Doi, which holds in the elliptic setting, and one due to C.~Kenig, G.~Ponce, C.~Rolvung, and L.~Vega in \cite{CKGPCRLV2005}, which also holds when the coefficients are not necessarily elliptic. We then show how to extend these results to corresponding ``non-centered'' versions that we need for the precise form of our local smoothing estimates. We consider the symbol $h(x,\xi)\in S^2_{1,0}$ defined by $h(x,\xi)=\sum_{j,k=1}^na_{jk}(x)\xi_j\xi_k$. Let $A(x)$ denote the matrix $\left(a_{jk}(x)\right)_{j,k=1}^n$. We impose the following assumptions on $A(x)$. \begin{enumerate}[labelindent=0in, leftmargin=*, label=(D\arabic*)] \item\label{D1} There exist $N=N(n)\in\mathbb{N}$ and $C>0$ so that $a_{jk}(x)\in C_b^N(\mathbb{R}^n)$ for $j, k = 1, 2,\ldots, n,$ with norm controlled by $C$. \item\label{D2} The functions $a_{jk}(x)$ are real valued and the matrix $A(x)=\big(a_{jk}(x)\big)_{j,k=1,\ldots,n}$ is symmetric.
\item\label{D3} The matrix $A(x)$ is uniformly elliptic. That is, there is a constant $C>0$ so that for all $x, \xi \in \mathbb{R}^n$ $$C^{-1} |\xi|^2 \leq \sum_{j, k = 1}^n a_{jk}(x)\xi_j\xi_k \leq C|\xi|^2.$$ \item\label{D4} The matrix $A(x)$ is asymptotically flat. That is $$\abs{I-A(x)} \leq \frac{C}{\jap{x}^2} \qquad \text{and}\qquad \abs{\nabla_x a_{jk}(x)} \leq \frac{C}{\jap{x}^2}.$$ \item\label{D5} Let $X(s,x,\xi)$ and $\Xi(s,x,\xi)$ be the Hamiltonian flow associated to $h$. That is $X$ and $\Xi$ are solutions to the following ODEs: \[\left\{ \begin{aligned} &\frac{d}{dt}X_j(s,x,\xi)=2a_{jk}\big(X(s,x,\xi)\big)\Xi_k(s,x,\xi) \\ &\frac{d}{dt}\Xi_j(s,x,\xi)=-\frac{\partial a_{lk}}{\partial x_j}\big(X(s,x,\xi)\big)\Xi_l(s,x,\xi)\Xi_k(s,x,\xi)\\ & X(0,x,\xi)=x \quad\text{and}\quad\Xi(0,x,\xi)=\xi. \end{aligned}\right.\] Then for each pair $x,\xi$ with $\xi\neq 0$ we assume that the sets $\{X(s, x, \xi) \mid s>0\}$ and $\{X(s,x,\xi) \mid s<0\}$ are unbounded. \end{enumerate} \begin{lem}[S.~Doi \cite{SD1996}]\label{doilemma} Suppose $a_{jk}(x)$ satisfies \ref{D1}-\ref{D5}. Then there exist a symbol $p\in S^0_{1,0}$, with semi-norms bounded in terms of $C$, and a constant $B\in (0,1)$, depending on $C$ and \ref{D5}, such that $$H_hp\mathrel{\mathop:}=\{h,p\}=\sum_{i=1}^n\frac{\partial h}{\partial \xi_i}\frac{\partial p}{\partial x_i} - \frac{\partial h}{\partial x_i}\frac{\partial p}{\partial \xi_i}\geq \frac{B\abs{\xi}}{\jap{x}^2} - \frac{1}{B}\quad \text{for all }x, \xi \in \mathbb{R}^n.$$ \end{lem} It is worth noting that in the case that the coefficients $a_{jk}(x)$ are elliptic, one can use the fact that the symbol $h$ is preserved under the Hamiltonian flow together with the ellipticity to deduce that $C^{-2}\abs{\xi}^2 \leq \abs{\Xi(s,x,\xi)}^2\leq C^2\abs{\xi}^2$. This implies that the solutions $X$ and $\Xi$ exist for all times. For our purposes we need a version of Doi's lemma that is not centered at the origin.
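To illustrate the mechanism behind Lemma~\ref{doilemma}, the following model computation (a heuristic of ours, not the construction of \cite{SD1996}) exhibits an explicit symbol for the flat case $\tilde h(\xi)=\abs{\xi}^2$; it only achieves the weaker weight $\jap{x}^{-3}$:

```latex
% For $\tilde h(\xi)=\abs{\xi}^2$ one has $H_{\tilde h}=2\xi\cdot\nabla_x$.
% Away from $\xi=0$ take
\[
  p(x,\xi)=\frac{x\cdot\xi}{\abs{\xi}\jap{x}},
\]
% which lies in $S^0_{1,0}$ on $\abs{\xi}\geq 1$. Since
% $\nabla_x\paren{\tfrac{x\cdot\xi}{\jap{x}}}
%   =\tfrac{\xi}{\jap{x}}-\tfrac{(x\cdot\xi)x}{\jap{x}^3}$, we get
\[
  H_{\tilde h}p
  =\frac{2\abs{\xi}}{\jap{x}}-\frac{2(x\cdot\xi)^2}{\abs{\xi}\jap{x}^3}
  \geq \frac{2\abs{\xi}}{\jap{x}}\paren{1-\frac{\abs{x}^2}{\jap{x}^2}}
  =\frac{2\abs{\xi}}{\jap{x}^3},
\]
% using $(x\cdot\xi)^2\leq\abs{x}^2\abs{\xi}^2$ (Cauchy--Schwarz)
% and $\jap{x}^2=1+\abs{x}^2$.
```

Cutting off near $\xi=0$ changes $p$ by a bounded error, which can be absorbed into the constant $1/B$; obtaining the sharper weight $\jap{x}^{-2}$, and handling variable non-trapping coefficients, requires Doi's full construction.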
As before we let $\mathbb{R}^n=\bigcup_{\mu\in\mathbb{Z}^n}Q_\mu$ with $Q_\mu$ unit cubes with vertices in the lattice $\mathbb{Z}^n$ (indexed by some corner), and let $x_\mu$ be the center of $Q_\mu$. \begin{lem}\label{uncentereddoi1} Suppose $a_{jk}$ satisfies \emph{\ref{D1}-\ref{D5}}. Then there exists a symbol $p_\mu\in S^0_{1,0}$ such that $H_hp_\mu(x,\xi)\geq C_1\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_2$, where $C_1$, $C_2$ and the semi-norms of $p_\mu$ can be bounded independently of $\mu$. \end{lem} \begin{proof} For $\abs{x_\mu}<10$ we take $p_{\mu}=p$. The content of the lemma is for $\abs{x_\mu}\gg 0$. Let $\tilde h(x,\xi)=\abs{\xi}^2$; applying Lemma~\ref{doilemma} to the Laplacian we can find a symbol $r(x,\xi)$ so that $H_{\tilde h}r(x,\xi)\geq \tilde C_1 \frac{\abs{\xi}}{\jap{x}^2}-\tilde C_2$. We take $p_\mu(x,\xi)=Np(x,\xi)+r(x-x_\mu,\xi),$ with $N$ to be determined. Let $r_\mu(x,\xi)=r(x-x_\mu,\xi)$. We calculate \[\begin{split} H_hr_\mu(x,\xi)&=\sum_{i=1}^n (2a_{ik}(x)\xi_k) \frac{\partial r_\mu}{\partial x_i} - \frac{\partial a_{jk}}{\partial x_i}(x)\xi_j\xi_k\frac{\partial r_\mu}{\partial \xi_i}\\ &= \sum_{i=1}^n2\xi_i\frac{\partial r_\mu}{\partial x_i} + 2(a_{ik}(x)-\delta_{ik})\xi_k \frac{\partial r_\mu}{\partial x_i} - \frac{\partial a_{jk}}{\partial x_i}(x)\xi_j\xi_k\frac{\partial r_\mu}{\partial \xi_i}. \end{split}\] For the second term we have that $\abs{2(a_{ik}-\delta_{ik})\xi_k\frac{\partial r_\mu}{\partial x_i}}\leq C\frac{\abs{\xi}}{\jap{x}^2}$ from \ref{D4} and the bounds for the semi-norms of $r_\mu$. Similarly for the third term we have that \\ $\abs{\frac{\partial a_{jk}}{\partial x_i}\xi_j\xi_k\frac{\partial r_\mu}{\partial \xi_i}} \leq \frac{C\abs{\xi}}{\jap{x}^2}.$ The first term gives that $H_{\tilde h}r_\mu \geq \tilde C_1\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-\tilde C_2$.
Finally, \[H_h(p_\mu)=NH_h(p)+H_h(r_\mu)\geq NC_1\frac{\abs{\xi}}{\jap{x}^2}-C\frac{\abs{\xi}}{\jap{x}^2} + \tilde C_1\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-NC_2-\tilde C_2.\] So we choose $N$ large enough so that $NC_1\frac{\abs{\xi}}{\jap{x}^2}-C\frac{\abs{\xi}}{\jap{x}^2}\geq 0$, and, after relabeling constants, we get \[H_h(p_\mu)\geq C_1\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_2.\] \end{proof} \begin{rem}\label{doibump} We remark that, perhaps by increasing our choice of $N$ to $2N$, we can ensure that $H_hp_\mu\geq C_1\frac{\abs{\xi}}{\jap{x-x_\mu}^2} + C_2\frac{\abs{\xi}}{\jap{x}^2}-C_3$. This will be important to us when we want to consider linear estimates where the coefficients of the second order term depend on time. \end{rem} \section{Linear Results}\label{sec:linear} In this section we consider the system \begin{equation}\label{LinGenS} \left\{\begin{aligned} &\begin{aligned}\partial_t u =& -\epsilon\Delta^2u+i\partial_{x_j}a_{jk}(x,t)\partial_{x_k}u + b_1(x,t,D)u + \vec{b}_2(x,t)\cdot\nabla \bar u\\ &+c_1(x,t,D)u+c_2(x,t,D)\bar u +f(x,t),\end{aligned}\\ &u(x,0)=u_0(x). \end{aligned}\right. \end{equation} First we set some notation. We denote by $A(x,t)$ the matrix $(a_{jk}(x,t))_{j,k=1}^n$ and we set $h(x,t,\xi)=\sum_{j,k=1}^na_{jk}(x,t)\xi_j\xi_k$. For a function $u(x,t)$ the Fourier transform of $u$ in the $x$ variable will be denoted by $\hat u(\xi,t)$. For a time varying symbol $q(x,t,\xi)$ we use the following notation \[\Psi_qu(x,t) = \frac{1}{(2\pi)^n}\int e^{ix\cdot\xi}q(x,t,\xi)\hat{u}(\xi,t)\,d\xi.\] We let $\mathbb{R}^n=\bigcup_{\mu\in\mathbb{Z}^n}Q_\mu$ with $Q_\mu$ unit cubes with vertices in the lattice $\mathbb{Z}^n$. We let $x_\mu$ denote the center of $Q_\mu$ and $Q_\mu^*$ denote its concentric double. When we use the linear estimates in the non-linear problem we will evaluate our coefficients at some local solution.
For this reason it will be important for the constant appearing in our final inequality to depend only on the coefficients at $t=0.$ We therefore take the convention that constants related to our coefficients at $t=0$ will be denoted by $C_0$ and constants depending on our coefficients at times other than 0 will be generically denoted by $C.$ We place the following assumptions on the coefficients. \begin{enumerate}[labelindent=0in, leftmargin=*, label=(L\arabic*)] \item\label{LRegularity} There exist $M_L=M_L(n)\in\mathbb{N}$ and $C>0$ so that $a_{jk}(\cdot,t)$, $b_{2,j}(\cdot,t)$, $\partial_ta_{jk}(\cdot,t)$, $\partial_t b_{2,j}(\cdot,t) \in C_b^{M_L}(\mathbb{R}^n)$ for $j, k = 1, 2,\ldots, n,$ with norm controlled by $C$. We assume that, uniformly in $t$, the symbols $c_1(x,t,\xi)$, $c_2(x,t,\xi)\in S^0_{1,0}$ and $b_1(x,t,\xi)\in S^1_{1,0}$ with seminorms controlled by $C$. In addition, we have that the norms of $a_{jk}(x,0)$, $b_{2,j}(x,0)$ for $j,k=1, \ldots, n$ in $C^{M_L}_b$ together with the seminorms of $b_1(x,0,\xi), c_1(x,0,\xi)$ and $c_2(x,0,\xi)$ are controlled by $C_0$. \item\label{LElliptic} The matrix $A(x,t)=\big(a_{jk}(x,t)\big)_{j,k=1\ldots n}$ has real valued entries, is symmetric, and is uniformly elliptic. That is, there is a positive number $C$ so that $$C^{-1} |\xi|^2 \leq \sum_{j, k = 1}^n a_{jk}(x,t)\xi_j\xi_k \leq C|\xi|^2.$$ Further, at $t=0$, we have $$C_0^{-1} |\xi|^2 \leq \sum_{j, k = 1}^n a_{jk}(x,0)\xi_j\xi_k \leq C_0|\xi|^2.$$ \item\label{LFlat} The matrix $A(x,t)$ is asymptotically flat.
That is, \[\abs{I-A(x,t)} + \abs{\nabla_x A(x,t)} + \abs{\partial_ta_{jk}(x,t)} + \abs{\partial_t\partial_{x_i}a_{jk}(x,t)} \leq \frac{C}{\jap{x}^2} \] and \[\abs{I-A(x,0)} + \abs{\nabla_x A(x,0)} + \abs{\partial_ta_{jk}(x,0)} + \abs{\partial_t\partial_{x_i}a_{jk}(x,0)} \leq \frac{C_0}{\jap{x}^2}.\] \item\label{LFirstOrder} The symbol $b_1(x,t,\xi)$ satisfies an estimate of the form \[\abs{\Re b_1(x,0,\xi)}\leq \sum_{\mu\in\mathbb{Z}^n} \beta_\mu^0\varphi_\mu(x)\abs{\xi},\] where $\beta_\mu^0\geq 0$, $\sum_{\mu\in\mathbb{Z}^n}\beta_\mu^0\leq C_0$, $\|\varphi_\mu\|_{C^{M_L}}\leq 1$ and $\supp \varphi_\mu\subseteq Q_\mu^*$. Also assume that \[\partial_t \paren{\Re b_1(x,t,\xi)} = \sum_{\mu\in \mathbb{Z}^n} \tilde\beta_\mu(t)\varphi_\mu(x,t,\xi),\] where $\tilde\beta_\mu(t)\geq 0$, and $\sum_{\mu\in\mathbb{Z}^n}\tilde\beta_\mu(t)\leq C$. For all $t\in \mathbb{R}_+$ the time varying symbols $\varphi_\mu(x,t,\xi)\in S^1_{1,0}$ with seminorms bounded by 1, independently of $t$ and $\mu$, and $\supp\varphi_\mu(\cdot,t,\xi)\subseteq Q_\mu^*$. \item\label{LNoTrap} We assume that the Hamiltonian flow associated to $h_0(x,\xi):=h(x,0,\xi)$ is non-trapping. Let $p_\mu(x,\xi)$ be the Doi symbol for the cube $Q_\mu$ associated to $h_0$, as constructed in the previous section. We assume that these symbols satisfy \[H_{h_0}p_\mu \geq \frac{1}{C_0}\paren{\frac{\abs{\xi}}{\jap{x}^2} + \frac{\abs{\xi}}{\jap{x-x_\mu}^2}}-C_0.\] The bounds in our arguments also depend on a finite number of seminorms of $p_\mu$ in $S^0_{1,0}$ and we assume these seminorms are controlled by $C_0$. See Remark \ref{doibump} in Section \ref{chap:doi} for this version of Doi's Lemma. \end{enumerate} \begin{thm}\label{aplin} Suppose that $u_0\in L^2$ and that there is a solution $u(x,t)$ to \eqref{LinGenS} in $C([0,T];L^2)$, where the coefficients satisfy \linref{LRegularity}-\linref{LNoTrap}.
Then there exist real numbers $T=T(C,C_0,\paren{\beta_\mu^0}_{\mu\in\mathbb{Z}^n})$ and $A=A(C_0,\paren{\beta_\mu^0}_{\mu\in\mathbb{Z}^n})$ so that if $f\in L^1([0,T];L^2)$, then $u$ satisfies \[\sup_{0\leq t\leq T}\|u(\cdot,t)\|_2^2+\sup_{\mu\in\mathbb{Z}^n} \|J^{1/2}u\|_{L^2(Q_\mu\times [0,T])}^2 \leq A\paren{\|u_0\|_2^2+\paren{\int_0^T\|f(\cdot,t)\|_2\,dt}^2}.\] \end{thm} \begin{proof} We break the proof of this theorem into several steps. \noindent\underline{Step 1. Reduction to a system.} Let $\vec w = \begin{pmatrix} u\\ \bar u \end{pmatrix}$, $\vec f = \begin{pmatrix} f\\ \bar f \end{pmatrix}$, and $\vec{w}_0=\begin{pmatrix} u_0\\ \bar u_0 \end{pmatrix}.$ Let $\scr{L}(x,t)$ denote the operator $\partial_{x_j}(a_{jk}(x,t)\partial_{x_k}\cdot)$. Then using the equations for $u$ and $\bar u$, we see that $\vec w$ satisfies \begin{equation} \left\{\begin{aligned} &\partial_t \vec{w} = -\epsilon\Delta^2 I \vec{w}+\paren{iH+B+C}\vec{w} + \vec{f}(x,t),\\ &\vec{w}(x,0)=\vec{w}_0(x),\end{aligned}\right. \end{equation} where \[H= \begin{pmatrix} \scr{L}(x,t) & 0\\ 0 & -\scr{L}(x,t) \end{pmatrix},\quad B= \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}:= \begin{pmatrix} b_1(x,t,D) & b_2(x,t)\cdot\nabla \\ \overline{b_2(x,t)}\cdot \nabla & \overline{b_1(x,t,D)} \end{pmatrix}\] \[\text{and }C= \begin{pmatrix} c_1(x,t,D) & c_2(x,t,D) \\ \overline{c_2(x,t,D)} & \overline{c_1(x,t,D)} \end{pmatrix}.\] Note that for the rest of this section $\ip{\vec u}{\vec v}=\int u_1\bar v_1+u_2\bar v_2\,dx$ and $\norm{\vec u}_2^2=\ip{\vec u}{\vec u}$. \noindent \underline{Step 2. Diagonalize the first order terms.} We now define an operator $S= \begin{pmatrix} 0 & s_{12} \\ s_{21} & 0 \end{pmatrix}$ where $s_{12}$ and $s_{21}$ will be defined to be time varying $\Psi$DO's and have symbols in $S_{1,0}^{-1}$ uniformly in $t$. We begin by choosing $\phi \in C_0^\infty(\mathbb{R}^n)$ so that $\phi(y)=1$ for $\abs{y}<1$ and $\phi(y)=0$ for $\abs{y}\geq 2$.
Let $\theta_R(\xi)=1-\phi(\xi/R)$ and $\theta(\xi)=\theta_1(\xi).$ Let $\tilde h(x,t,\xi)=\theta_R(\xi)h^{-1}(x,t,\xi).$ Notice that, by ellipticity, \mbox{$h(x,t,\xi)\geq C^{-1}\abs{\xi}^2$}; hence $\tilde h$ is a smooth function. Let $\tilde{\scr{L}}=\Psi_{\tilde h}$, then we see that $\tilde{\scr{L}}\scr{L}=I+\Psi_{r_1}$ with $r_1\in S_{1,0}^{-1}$ uniformly in $t$. We define $S_{12}=\frac{1}{2}iB_{12}\tilde{\scr{L}}$ and $S_{21}=-\frac{1}{2}iB_{21}\tilde{\scr{L}}$. We denote the symbols of $S_{12}$ and $S_{21}$ by $s_{12}(x,t,\xi)$ and $s_{21}(x,t,\xi)$ respectively. Clearly $s_{ij}(x,t,\xi)\in S_{1,0}^{-1}$ uniformly in $t$. Let $\Lambda = I-S$. If we choose $R$ large enough, then we can control the norms of $\Lambda$ and $\Lambda^{-1}$ by constants (see Kenig's Park City Lecture 2 \cite{CK2005}). We will use $\Lambda$ to change variables, and the resulting system will have diagonal first order terms. We first perform some calculations that are necessary to rewrite the system in terms of $\Lambda \vec w$. \[\begin{split}&-iH\Lambda +i\Lambda H= -iHI+iHS+iIH-iSH=-i\paren{SH-HS}\\=& -i\paren{ \begin{pmatrix} 0 & S_{12}\\ S_{21} & 0 \end{pmatrix} \begin{pmatrix} \scr{L} & 0 \\ 0 & -\scr{L} \end{pmatrix} - \begin{pmatrix} \scr{L} & 0 \\ 0 & -\scr{L} \end{pmatrix} \begin{pmatrix} 0 & S_{12}\\ S_{21} & 0 \end{pmatrix} }\\=& \begin{pmatrix} 0 & iS_{12}\scr{L}+i\scr{L}S_{12} \\ -iS_{21}\scr{L}-i\scr{L}S_{21} & 0 \end{pmatrix}. \end{split} \] Notice that $\scr{L}S_{12}=S_{12}\scr{L}+E^0_1$, where $E^0_1$ is an error of order $0$. Similarly $\scr{L}S_{21}=S_{21}\scr{L}+E_2^0$ with $E_2^0$ of order $0.$ Hence $iS_{12}\scr{L}+i\scr{L}S_{12}=2iS_{12}\scr{L}+iE_1^0=-B_{12}\tilde{\scr{L}}\scr{L}+iE_1^0=-B_{12}+E_3^0$ and $-iS_{21}\scr{L}-i\scr{L}S_{21}=-2iS_{21}\scr{L}-iE_2^0=-B_{21}\tilde{\scr{L}}\scr{L}-iE_2^0=-B_{21}+E_4^0$ with $E_3^0$ and $E_4^0$ errors of order $0$.
We write $B=B_d+B_{ad}$ where $B_d= \paren{\begin{smallmatrix} B_{11} & 0 \\ 0 & B_{22} \end{smallmatrix}} $ and $B_{ad}=\paren{ \begin{smallmatrix} 0 & B_{12} \\ B_{21} & 0 \end{smallmatrix}}. $ Now, \[\Lambda B_{ad}=IB_{ad}-SB_{ad}=B_{ad}- \begin{pmatrix} S_{12}B_{21} & 0 \\ 0 & S_{21}B_{12} \end{pmatrix} =B_{ad}+E^0_{ad}\] with $E_{ad}^0$ of order 0. For the other terms we will want to commute $\Lambda$ and our operators, in order to derive the equation for $\Lambda \vec w$. Starting with $\Lambda\partial_t=\partial_t\Lambda-[\partial_t,\Lambda]=\partial_t\Lambda+\partial_t S$, where this last expression is the matrix of $\Psi$DO's whose symbols are given by $\partial_t s_{jk}(x,t,\xi)$. Using the bounds for $\partial_t b_2(x,t)$ and $\partial_ta_{jk}(x,t)$ we again see that these symbols are uniformly in $S_{1,0}^{-1}$. Notice that $\Lambda B_d=B_d-SB_d$ and $B_d=B_d\Lambda+B_dS$, so that $\Lambda B_d=B_d\Lambda+B_dS-SB_d=B_d\Lambda+E_d^0$ with $E_d^0$ of order 0. \[\Lambda \epsilon \Delta^2 I=\epsilon \Delta^2\Lambda + \epsilon \begin{pmatrix} 0 & \Delta^2S_{12}-S_{12}\Delta^2 \\ \Delta^2S_{21}-S_{21}\Delta^2 & 0 \end{pmatrix} =\epsilon \Delta^2 I \Lambda + \epsilon \tilde{R} \] where $\tilde{R}$ is a matrix whose entries are operators of order 2. We set $R=\tilde{R}\Lambda^{-1}$, which is still of order 2. We write \[\Lambda C + \partial_tS+E^0_{ad}+E^0_d= \paren{\Lambda C \Lambda^{-1} + \partial_tS\Lambda^{-1}+E^0_{ad}\Lambda^{-1}+E^0_d\Lambda^{-1}}\Lambda =: \tilde{C}\Lambda\] with $\tilde{C}$ of order 0. Lastly set $\vec F = \Lambda \vec f$. Define $\vec z=\Lambda \vec w$ and apply $\Lambda$ to our equation.
We have \[\Lambda \partial_t\vec w=-\epsilon\Lambda\Delta^2 I \vec w +\paren{i\Lambda H +\Lambda B +\Lambda C}\vec w+\Lambda \vec f.\] Using our calculations above we have \[\partial_t \vec z=-\epsilon \Delta^2 I \vec z + \epsilon R\vec z+ iH\vec z - \begin{pmatrix} 0 & B_{12}\\ B_{21}& 0 \end{pmatrix}\vec w +B_d \vec z+B_{ad}\vec w+ \tilde{C}\vec{z}+\vec F. \] Hence if we set $\vec z_0=\Lambda \vec w_0$ we arrive at a system with diagonal first order terms, namely \begin{equation*} \left\{\begin{aligned} &\partial_t \vec{z} = -\epsilon\Delta^2 I \vec{z}+\epsilon R\vec z + i H \vec z +B_d\vec z +\tilde{C}\vec z +\vec F, \\ &\vec{z}(x,0)=\vec{z}_0(x).\end{aligned}\right. \end{equation*} As we pointed out earlier we have control of the norms of $\Lambda$ and $\Lambda^{-1},$ so deriving our desired estimates for $\vec z$ will imply the estimates for $\vec w$. Since we work in slightly unusual norms it seems a good time to recall them and justify this last statement. \begin{mydef} Let $\mathbb{R}^n=\bigcup_{\mu\in \mathbb{Z}^n}Q_\mu$ as usual. Let $f:\mathbb{R}^n\times \mathbb{R}\to \mathbb{C}$ be a measurable function. We define \[\tnorm{f}_T=\sup_{\mu\in \mathbb{Z}^n}\norm{f}_{L^2(Q_{\mu}\times [0,T])}\] and \[\tnorm{f}'_T=\sum_{\mu\in\mathbb{Z}^n}\norm{f}_{L^2(Q_\mu\times[0,T])}.\] \end{mydef} \begin{thm} For $a\in S_{1,0}^0$ there is an $N(n)$ so that $\tnorm{\Psi_af}_T\leq C\tnorm{f}_{T}$ and $\tnorm{\Psi_af}'_{T}\leq C \tnorm{f}'_{T}$. \end{thm} \begin{proof} See Kenig's Park City Lecture notes, Lecture 2 \cite{CK2005}. \end{proof} Now we return to our linear estimates. It is important for the non-linear theory that we only make our non-trapping assumptions at $t=0.$ The following lemma allows us to handle time varying leading coefficients. \begin{lem}\label{timedepdoi} Let $h(x,\xi)=a_{jk}(x,0)\xi_j\xi_k$ and let $p_\mu$ be the Doi symbol corresponding to $h$ centered at cube $Q_\mu$.
Then there exists a $T_1=T_1(C,C_0)$ so that uniformly for all $t<T_1$ the time varying symbol $h_t(x,\xi)=a_{jk}(x,t)\xi_j\xi_k$ satisfies \[H_{h_t}p_\mu(x,\xi)\geq \frac{1}{C_0}\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_0.\] \end{lem} \begin{proof} By direct calculation we have \[ \begin{aligned} H_{h_t}p_\mu&=\sum_{i=1}^n \diff{h}{\xi_i}\diff{p_\mu}{x_i} - \diff{h}{x_i}\diff{p_\mu}{\xi_i}+ \paren{\diff{h_t}{\xi_i} - \diff{h}{\xi_i}}\diff{p_\mu}{x_i} - \paren{\diff{h_t}{x_i}-\diff{h}{x_i}}\diff{p_\mu}{\xi_i}\\ &= H_hp_\mu+ \sum_{i=1}^n 2\paren{a_{ik}(x,t)-a_{ik}(x,0)}\xi_k\diff{p_\mu}{x_i}\\ &\qquad\qquad\qquad\qquad-\paren{\diff{a_{jk}(x,t)}{x_i} -\diff{a_{jk}(x,0)}{x_i}}\xi_j\xi_k\diff{p_\mu}{\xi_i}. \end{aligned} \] From the asymptotic flatness condition \linref{LFlat} there is a $T_1=T_1(C,C_0)$ so that if $t<T_1$ we have that \[ \abs{\sum_{i=1}^n 2\paren{a_{ik}(x,t)-a_{ik}(x,0)}\xi_k\diff{p_\mu}{x_i} - \paren{\diff{a_{jk}(x,t)}{x_i}-\diff{a_{jk}(x,0)}{x_i}} \xi_j\xi_k\diff{p_\mu}{\xi_i}}\leq \frac{1}{C_0}\frac{\abs{\xi}}{\jap{x}^2}. \] Now using \linref{LNoTrap} we get that \[H_{h_t}p_\mu\geq H_h p_\mu-\frac{1}{C_0}\frac{\abs{\xi}}{\jap{x}^2}\geq \frac{1}{C_0}\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_0.\] \end{proof} \noindent \underline{Step 3. Energy Estimates.} The goal of this step is to conclude the proof. The program is to again introduce an invertible change of variables, this time based on Doi's Lemma. It is Doi's lemma that allows us to absorb the first order terms. Set \[\Psi_M= \begin{pmatrix} \Psi_{q_1} & 0 \\ 0 & \Psi_{q_2} \end{pmatrix} \] where $\Psi_{q_1}$, $\Psi_{q_2}$ are invertible $\Psi$DO's of order 0 that will be defined below. First we compute the necessary commutators that arise in the change of variables. For the leading order terms \[\Psi_M\epsilon\Delta^2I = \epsilon\Delta^2I\Psi_M + \epsilon \tilde R^3\Psi_M\] with $\tilde R^3$ of order 3. The second order remainder term yields
$\Psi_M\epsilon R=\epsilon R \Psi_M+\epsilon \tilde R^1\Psi_M$, with $\tilde R^1$ of order 1. We collect these remainder terms by setting $R^3=\tilde R^3+R+\tilde R^1$, which is of order 3. The remaining terms pose no difficulty. The first and zeroth order terms simply give $\Psi_M B_d=B_d\Psi_M+E^0_5$, $\Psi_M\tilde C=\tilde C\Psi_M+E^0_6$ and lastly we set $\vec G=\Psi_M\vec F$. Again we absorb the error terms of order 0 into $\tilde C$. By setting $\vec \alpha = \Psi_M\vec z$ and $\vec \alpha_0=\Psi_M\vec z_0$ we arrive at the system \begin{equation} \left\{\begin{aligned} &\partial_t \vec{\alpha} = -\epsilon\Delta^2 I \vec{\alpha}+\epsilon R^3\vec\alpha - i [H,\Psi_M] \vec z +iH\vec \alpha +B_d\vec \alpha +\tilde{C}\vec \alpha +\vec G \\ &\vec{\alpha}(x,0)=\vec{\alpha}_0(x).\end{aligned}\right. \end{equation} To construct $\Psi_M$ we let $\mathbb{R}^n=\bigcup_{\mu\in\mathbb{Z}^n}Q_\mu$ as usual. Fix a cube $Q_{\mu_0}$ and let \[\gamma_{\mu_0}(x,\xi)=p_{\mu_0}(x,\xi)+\sum_{\mu\in\mathbb{Z}^n}\beta_\mu^0 p_\mu(x,\xi)\] with $\beta_\mu^0$ as in \linref{LFirstOrder}. Notice that $\gamma_{\mu_0}\in S_{1,0}^0$ with seminorms controlled in terms of $C_0$. Let $q_1(x,\xi)=\exp(\theta_R(\xi)\tilde{C}_0\gamma_{\mu_0}(x,\xi))$ and $q_2(x,\xi) = \exp(-\theta_R(\xi)\tilde{C}_0\gamma_{\mu_0}(x,\xi)),$ where $\tilde{C}_0$, depending on $C_0$, will be chosen below. Notice that again, if we take $R$ large we may ensure that $\Psi_M$ is invertible uniformly in $\mu_0$. We now compute \[-i[H,\Psi_M] = -i \begin{pmatrix} \scr{L}\Psi_{q_1}-\Psi_{q_1}\scr{L} & 0 \\ 0 & -\scr{L}\Psi_{q_2}+\Psi_{q_2}\scr{L} \end{pmatrix}. \] Let $\ell(x,t,\xi)$ be the symbol for $\scr{L}$, then $\ell(x,t,\xi) = a_{jk}(x,t)\xi_j\xi_k + \partial_{x_j}a_{jk}(x,t)\xi_k = h_t(x,\xi) + \ell_1(x,t,\xi)$. Note that $\{\ell_1,q_i\}\in S^0_{1,0}$ uniformly in $t$ for $i=1,2.$ Hence, \[-i\paren{\scr{L}\Psi_{q_1}-\Psi_{q_1}\scr{L}}=\Psi_{\{h_t,q_1\}}+E^0_7,\] with $E^0_7$ an operator of order 0.
It follows that \[\{h_t,q_1\}=\paren{\diff{h_t}{\xi_i}\theta_R(\xi) \diff{\gamma_{\mu_0}}{x_i} - \diff{h_t}{x_i}\theta_R(\xi)\diff{\gamma_{\mu_0}}{\xi_i} - \diff{h_t}{x_i}\diff{\theta_R}{\xi_i}\gamma_{\mu_0}} e^{\theta_R(\xi)\gamma_{\mu_0}},\] where the last term in the parentheses is in $S^{-\infty}_{1,0}$. Therefore \[-i\paren{\scr{L}\Psi_{q_1}-\Psi_{q_1}\scr{L}} = -\Psi_{\theta_RH_{h_t}\gamma_{\mu_0}}\Psi_{q_1} + E^0_8,\] with $E_8^0$ of order 0. In the same way \[-i\paren{-\scr{L}\Psi_{q_2}+\Psi_{q_2}\scr{L}} = -\Psi_{\theta_RH_{h_t}\gamma_{\mu_0}}\Psi_{q_2} + E^0_9.\] Thus our system (after absorbing errors into $\tilde C$) looks like \begin{equation*} \left\{\begin{aligned} &\begin{aligned}\partial_t \vec{\alpha} =& -\epsilon\Delta^2 I \vec{\alpha}+\epsilon R^3\vec\alpha + \begin{pmatrix} -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}} & 0\\ 0 & -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}} \end{pmatrix} \vec \alpha \\ &\qquad +iH\vec \alpha +B_d\vec \alpha +\tilde{C}\vec \alpha +\vec G, \end{aligned} \\ &\vec{\alpha}(x,0)=\vec{\alpha}_0(x).\end{aligned}\right.
\end{equation*} We now proceed to derive energy estimates for $\vec\alpha.$ Consider \[ \begin{aligned} \partial_t\ip{\vec\alpha}{\vec\alpha} &= \ip{\partial_t\vec\alpha}{\vec\alpha} +\ip{\vec\alpha}{\partial_t\vec\alpha} \\&=\ip{-\epsilon\Delta^2I\vec\alpha}{\vec\alpha} + \ip{\vec\alpha}{-\epsilon\Delta^2I\vec\alpha} + \ip{\epsilon R^3\vec\alpha}{\vec\alpha} + \ip{\vec\alpha}{\epsilon R^3\vec\alpha} + \\ &\ip{iH\vec\alpha}{\vec\alpha} +\ip{\vec\alpha}{iH\vec\alpha} + \ip{B_d\vec\alpha}{\vec\alpha} + \ip{\vec\alpha}{B_d\vec\alpha} +\\& \ip{\tilde{C}\vec\alpha}{\vec\alpha} + \ip{\vec\alpha}{\tilde{C}\vec\alpha} + \ip{\vec G}{\vec\alpha} + \ip{\vec\alpha}{\vec G} + \\ & \ip{ \begin{pmatrix} -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}} & 0\\ 0 & -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}} \end{pmatrix}\vec\alpha}{\vec\alpha} +\\& \ip{\vec\alpha}{\begin{pmatrix} -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}} & 0\\ 0 & -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}} \end{pmatrix}\vec\alpha}. \end{aligned} \] In the first two terms we have that $-\epsilon\ip{\Delta^2I\vec\alpha}{\vec\alpha} - \epsilon\ip{\vec\alpha}{\Delta^2I\vec\alpha} = -2\epsilon\ip{\Delta I\vec\alpha}{\Delta I\vec\alpha} = -2\epsilon\|\Delta\vec\alpha\|_2^2$. \pagebreak[1] The second two terms contribute $\ip{\epsilon R^3\vec\alpha}{\vec\alpha} + \ip{\vec\alpha}{\epsilon R^3\vec\alpha} = 2\epsilon\Re\ip{R^3\vec\alpha}{\vec\alpha} = 2\epsilon\Re\ip{J^{-3/2}IR^3\vec\alpha}{J^{3/2}I\vec\alpha}$.
As both $J^{3/2}$ and $J^{-3/2}IR^3$ are operators of order $3/2$ we can bound this by $C\|\vec\alpha\|^2_{H^{3/2}}.$ Now by interpolation we have that $\|\vec\alpha\|^2_{H^{3/2}}\leq \eta_0\norm{\Delta I\vec\alpha}_2^2 + \frac{2}{\eta_0}\|\vec\alpha\|^2_2.$ Hence \[\abs{2\epsilon\Re\ip{R^3\vec\alpha}{\vec\alpha}}\leq 2\epsilon C\eta_0\|\Delta I\vec\alpha\|_2^2 + \frac{4\epsilon C}{\eta_0}\|\vec\alpha\|_2^2.\] By setting $\eta_0=1/(2C)$ we can absorb the first term into $-2\epsilon\|\Delta\vec\alpha\|^2_2$ to get that the first four terms are bounded by $-\epsilon \|\Delta I\vec\alpha\|^2_2+8\epsilon C^2\|\vec\alpha\|^2_2$. We now turn our attention to the first order terms. That is, the last two terms and the terms involving $B_d$. Consider the matrix of symbols \[F:= \begin{pmatrix} -\theta_R(\xi)H_{h_t}\gamma_{\mu_0}(x,\xi)+b_1(x,t,\xi) & 0 \\ 0 & -\theta_R(\xi)H_{h_t}\gamma_{\mu_0}(x,\xi)+\overline{b_1(x,t,-\xi)} \end{pmatrix}. \] We will need to control $F+F^*$ to apply the vector-valued G\aa rding inequality. Let $\phi_{Q_\mu^*}$ be a smooth cut-off function equal to 1 on $Q_\mu$ and supported in the double $Q_\mu^*$, and let $\chi_{Q_\mu^*}=\phi^2_{Q_\mu^*}$. By our construction of $\gamma_{\mu_0}$ we have that \[ \begin{aligned} -\tilde{C}_0\theta_R(\xi)H_{h_t}\gamma_{\mu_0}\leq& \tilde{C}_0\theta_R(\xi)\Bigg(-\frac{1}{C_0}\frac{\abs{\xi}}{\jap{x-x_{\mu_0}}^2}+C_0 \\& \qquad\qquad\qquad\qquad -\sum_{\mu\in\mathbb{Z}^n}\beta_\mu^0\paren{\frac{1}{C_0}\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_0}\Bigg)\\ \leq &\tilde{C}_0\theta_R(\xi)\paren{-C_0'\abs{\xi}\chi_{Q_{\mu_0}^*} - \sum_{\mu\in\mathbb{Z}^n}\beta_\mu^0 C_0'\abs{\xi}\chi_{Q_{\mu}^*}} + \tilde{C}_0C_0''\\ \leq & \tilde{C}_0\theta_R(\xi)\paren{-C_0'\abs{\xi}\chi_{Q_{\mu_0}^*}- C_0'\abs{\Re b_1(x,0,\xi)}} + \tilde{C}_0C_0''. \end{aligned} \] Here we choose $\tilde C_0$ so that $\tilde C_0C_0'\geq 2$.
Now we have that \begin{multline*} -\theta_R(\xi)H_{h_t}\gamma_{\mu_0}+2\Re b_1(x,t,\xi) =\\ \paren{-\theta_R(\xi)H_{h_t}\gamma_{\mu_0}+2\Re b_1(x,0,\xi)} + 2\Re \int_0^t\partial_tb_1(x,s,\xi)\,ds\\ \leq -\tilde C_0'\theta_R(\xi)\abs{\xi}\chi_{Q_{\mu_0}^*}+\tilde C_0''+ 2\int_0^t\paren{\sum_{\mu\in \mathbb{Z}^n} \tilde\beta_\mu(s)\varphi_\mu(x,s,\xi)}\,ds. \end{multline*} Let $p(x,t,\xi)= 2\int_0^t\paren{\sum_{\mu\in \mathbb{Z}^n} \tilde\beta_\mu(s)\varphi_\mu(x,s,\xi)}\,ds.$ Apply the vector valued G\aa rding inequality (see \cite{HK1981}, \cite{MT1991}) to get \begin{multline*} \Re\ip{ \begin{pmatrix} -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}}+b_1(x,t,D) & 0\\ 0 & -\Psi_{\theta_R(\xi)H_{h_t}\gamma_{\mu_0}}+\overline{b_1(x,t,D)} \end{pmatrix} \vec\alpha}{\vec\alpha}\\ \leq C\norm{\vec\alpha}_2^2-\Re\ip{ \begin{pmatrix} \Psi_{\theta_R \abs{\xi}\chi_{Q_{\mu_0}^*}} & 0 \\ 0 & \Psi_{\theta_R \abs{\xi}\chi_{Q_{\mu_0}^*}} \end{pmatrix} \vec\alpha}{\vec\alpha}+\\\Re\ip{ \begin{pmatrix} \Psi_{p(x,t,\xi)} & 0 \\ 0 & \Psi_{p(x,t,-\xi)} \end{pmatrix} \vec\alpha}{\vec\alpha}. \end{multline*} We denote this last term by $\Re\ip{E_M\vec\alpha}{\vec\alpha}$. The symbol of the operator \\ $\Psi_{\theta_R(\xi)\abs{\xi}\chi_{Q_{\mu_0}^*}}-J^{1/2}\chi_{Q_{\mu_0}^*}J^{1/2}$ is in $S^0_{1,0}$. Hence we have \begin{multline*}\Re\ip{ \begin{pmatrix} \Psi_{\tilde C_0\theta_R \abs{\xi}\chi_{Q_{\mu_0}^*}} & 0 \\ 0 & \Psi_{\tilde C_0 \theta_R \abs{\xi}\chi_{Q_{\mu_0}^*}} \end{pmatrix} \vec\alpha}{\vec\alpha} \\ \geq \ip{\phi_{Q_{\mu_0}^*}J^{1/2}I\vec\alpha}{\phi_{Q_{\mu_0}^*}J^{1/2}I\vec\alpha} -C\norm{\vec\alpha}_{L^2}^2 \\\geq\; \|\phi_{Q_{\mu_0}^*}J^{1/2}I\vec\alpha\|^2_{L^2(Q_{\mu_0})}-C\norm{\vec\alpha}_{L^2}^2.
\end{multline*} To handle the terms involving $H$ notice that $\int \scr{L}\alpha_1\alpha_1=\int\alpha_1\scr{L}\alpha_1$, and hence $i\ip{H\vec\alpha}{\vec\alpha}-i\ip{\vec\alpha}{H\vec\alpha}=0.$ For the terms involving $\tilde C$ we use the Cauchy-Schwarz inequality: $\abs{\ip{\tilde C \vec\alpha}{\vec\alpha}}\leq \norm{\tilde C\vec\alpha}_2\norm{\vec\alpha}_2\leq C\norm{\vec\alpha}_2^2$. Putting all this together we see that $$\frac{d}{dt}\norm{\vec\alpha}_2^2\leq -\epsilon\|\Delta I\vec\alpha\|_2^2+C\norm{\vec\alpha}_2^2-\norm{J^{1/2}I\vec\alpha}_{L^2(Q_{\mu_0})}^2 + 2\abs{\Re\ip{\vec\alpha}{\vec G}}+\Re\ip{E_M\vec\alpha}{\vec\alpha}.$$ Integrating in time we find that \begin{multline*}\norm{\vec\alpha(t)}_2^2 + \norm{\phi_{Q_{\mu_0}^*}J^{1/2}I\vec\alpha}_{L^2(Q_{\mu_0}\times [0,t])}^2\leq \norm{\vec\alpha(0)}_2^2 +C\int_0^t\norm{\vec\alpha}_2^2\,ds +\\ 2\int_0^t\abs{\Re\ip{\vec\alpha}{\vec G}}\,ds + \int_0^t\Re\ip{E_M\vec\alpha}{\vec\alpha}\,ds.\end{multline*} In order to handle the term $\int_0^t\Re\ip{E_M\vec\alpha}{\vec\alpha}\,ds$, we note that \begin{multline*} \int_0^t \int_0^s \int\sum_{\mu\in\mathbb{Z}^n}\tilde\beta_\mu(r)\Psi_{\varphi_\mu(x,r,\xi)}\alpha_1(s)\overline{\alpha_1(s)}\,dx\,dr\,ds=\\ \int_0^t \sum_{\mu\in\mathbb{Z}^n}\tilde\beta_\mu(r) \int_r^t\int\Psi_{\varphi_\mu(x,r,\xi)} \alpha_1(s)\overline{\alpha_1(s)}\,dx\,ds\,dr. \end{multline*} Our estimates on $\varphi_\mu(x,s,\xi)$ give us that $\int\Psi_{\varphi_\mu(x,s,\xi)}\alpha_1\overline{\alpha_1}\,dx\leq \norm{J^{1/2}\alpha_1}_{L^2(Q_\mu^*)}^2+C\norm{\alpha_1}_2^2$.
Thus we have $$\abs{\int_0^t \ip{E_M\vec\alpha}{\vec\alpha}\,ds}\leq Ct\sup_{\mu\in \mathbb{Z}^n}\norm{J^{1/2}I\vec\alpha}^2_{L^2([0,t]\times Q_\mu)}+Ct\sup_{0\leq s \leq t}\norm{\vec\alpha}_2^2.$$ Hence, after taking a supremum over $0\leq t \leq T$, we arrive at \begin{multline*}\sup_{0\leq t\leq T}\norm{\vec\alpha(t)}_2^2 + \norm{\phi_{Q_{\mu_0}^*}J^{1/2}I\vec\alpha}_{L^2(Q_{\mu_0}\times [0,T])}^2\leq \norm{\vec\alpha(0)}_2^2 +CT\sup_{0\leq t\leq T}\norm{\vec\alpha}_2^2 + \\ 2\int_0^T\abs{\Re\ip{\vec\alpha}{\vec G}}\,ds + CT\sup_{\mu\in \mathbb{Z}^n}\norm{J^{1/2}I\vec\alpha}^2_{L^2([0,T]\times Q_\mu)}. \end{multline*} By choosing $T$ small we may make $CT\sup_{0\leq t\leq T}\norm{\vec\alpha}_2^2\leq \frac{1}{2} \sup_{0\leq t\leq T}\norm{\vec\alpha}_2^2$ and absorb this term into the left hand side. In this way we get \begin{multline}\label{basicest} \sup_{0\leq t\leq T}\norm{\vec\alpha(t)}_2^2 + \norm{J^{1/2}I\vec\alpha}_{L^2(Q_{\mu_0}\times [0,T])}^2\leq 2\norm{\vec\alpha(0)}_2^2 + \\ 4\int_0^T\abs{\Re\ip{\vec\alpha}{\vec G}}\,ds + CT\sup_{\mu\in\mathbb{Z}^n}\norm{J^{1/2}I\vec\alpha}^2_{L^2([0,T]\times Q_\mu)}. \end{multline} In terms of $\vec z$ our estimates now tell us \begin{multline*}\sup_{0\leq t\leq T}\norm{\Psi_M\vec z(t)}_2^2 + \norm{\phi_{Q_{\mu_0}^*}J^{1/2}I\Psi_M\vec z}_{L^2(Q_{\mu_0}\times [0,T])}^2\leq 2\norm{\Psi_M\vec z(0)}_2^2 + \\ 4\int_0^T\abs{\Re\ip{\Psi_M\vec z}{\vec G}}\,ds + CT\sup_{\mu\in\mathbb{Z}^n}\norm{J^{1/2}I\Psi_M\vec z}^2_{L^2([0,T]\times Q_\mu)}. \end{multline*} But notice that $J^{1/2}I\Psi_M=\Phi_MJ^{1/2}I+E$, where $E$ is of order $0$.
Hence \begin{multline*}\int_0^T \norm{\phi_{Q_{\mu_0}^*}J^{1/2}I\Psi_M\vec z}_{L^2(Q_{\mu_0})}^2\,dt\geq\\ \int_0^TC_0\norm{J^{1/2}I\vec z}_{L^2(Q_{\mu_0})}^2\,dt - CT\sup_{0\leq t\leq T}\norm{\vec z}_2^2.\end{multline*} Thus, possibly after another restriction in $T$, we arrive at \begin{multline*}\sup_{0\leq t\leq T}\norm{\vec z(t)}_2^2 + \norm{J^{1/2}I\vec z}_{L^2(Q_{\mu_0}\times [0,T])}^2\leq C_0\Bigg(\norm{\vec z(0)}_2^2 + \int_0^T\abs{\Re\ip{\Psi_M\vec z}{\vec G}}\,ds \\ + CT\sup_{\mu\in\mathbb{Z}^n}\norm{J^{1/2}I\vec z}^2_{L^2([0,T]\times Q_\mu)}\Bigg). \end{multline*} We now estimate the term involving $\vec G$: \begin{multline*}\label{L1intest} \int_0^T\abs{\Re\ip{\Psi_M\vec z}{\vec G}}\,ds\leq \int_0^TC_0\norm{\vec z}_2\norm{\vec G}_2\,dt\leq C_0\sup_{0\leq s\leq T}\norm{\vec z(s)}_2\norm{\vec G}_{L^1_t L^2_x}\\\leq C_0\eta\sup_{0\leq s \leq T}\norm{\vec z(s)}_2^2+\frac{C_0}{\eta}\norm{\vec G}_{L^1_tL^2_x}^2, \end{multline*} where the last step is Young's inequality $ab\leq \eta a^2+\frac{1}{4\eta}b^2$. Choosing $\eta$ small enough, we may absorb the term involving $\vec z$ into the left hand side. Our estimate now is of the form \begin{multline*}\sup_{0\leq t\leq T}\norm{\vec z(t)}_2^2 + \norm{J^{1/2}I\vec z}_{L^2(Q_{\mu_0}\times [0,T])}^2\leq C_0\Bigg(\norm{\vec z(0)}_2^2 + \norm{\vec G}^2_{L^1_tL^2_x}+ \\ CT\sup_{\mu\in\mathbb{Z}^n}\norm{J^{1/2}I\vec z}^2_{L^2([0,T]\times Q_\mu)}\Bigg).\end{multline*} Finally, to get Theorem~\ref{aplin} we take a supremum in $\mu_0$; then, after a suitable restriction in $T$, we may absorb $CT\sup_{\mu\in\mathbb{Z}^n}\norm{J^{1/2}I\vec z}^2_{L^2([0,T]\times Q_\mu)}$ into the left hand side. Keep in mind that the estimates for $\vec z$ imply the corresponding estimates for $u$. \end{proof} We now turn to a perturbation result. It is possible to weaken the non-trapping condition \linref{LNoTrap}: it is enough to assume that the second order coefficients are ``close'' to coefficients that are non-trapping. To this end, we again consider equation \eqref{LinGenS}.
We still assume that the coefficients satisfy conditions \linref{LRegularity}--\linref{LFirstOrder}. Instead of \linref{LNoTrap}, suppose that $A(x,t)=A_0(x,t)+\eta A_1(x,t)$. Assume that $h_0(x,\xi)=\ip{A_0(x,0)\xi}{\xi}$ satisfies the non-trapping condition \linref{LNoTrap}. In addition, assume that $\abs{A_1(x,t)}+\abs{\nabla_xA_1(x,t)}\leq\frac{C}{\jap{x}^2}$ uniformly in $t$. Then for $\eta$ sufficiently small, depending on $C$ and $C_0$, the conclusion of Theorem~\ref{aplin} holds. To see this, notice that we only used the non-trapping condition \linref{LNoTrap} in the proof of Lemma~\ref{timedepdoi}. We now prove this lemma under these slightly more general assumptions. \begin{lem} Let $h_0(x,\xi)$ be as above and let $p_\mu$ be the Doi symbol corresponding to $h_0$ centered at the cube $Q_\mu$. Then there exists a $T_1=T_1(C,C_0)$ so that, uniformly for all $t<T_1$, the time varying symbol $h_t(x,\xi)=a_{jk}(x,t)\xi_j\xi_k$ satisfies \[H_{h_t}p_\mu(x,\xi)\geq \frac{1}{C_0}\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_0.\] \end{lem} \begin{proof} To facilitate the calculations we write $(a^0_{jk}(x,t))_{j,k=1\ldots n}:=A_0(x,t)$ and $(a^1_{jk}(x,t))_{j,k=1\ldots n}:=A_1(x,t)$ for the matrix entries. It is also convenient to denote $k_0(x,t,\xi)=\ip{A_0(x,t)\xi}{\xi}$ and $k_1(x,t,\xi)=\ip{A_1(x,t)\xi}{\xi}$. Proceeding with our calculation as before we have \[ \begin{aligned} H_{h_t}p_\mu&=\sum_{i=1}^n \diff{h_0}{\xi_i}\diff{p_\mu}{x_i} - \diff{h_0}{x_i}\diff{p_\mu}{\xi_i}+ \paren{\diff{k_0}{\xi_i} -\diff{h_0}{\xi_i}+\eta\diff{k_1}{\xi_i}}\diff{p_\mu}{x_i}\\ &\qquad\qquad\qquad\qquad - \paren{\diff{k_0}{x_i}-\diff{h_0}{x_i} +\eta\diff{k_1}{x_i}}\diff{p_\mu}{\xi_i}\\ &= H_{h_0}p_\mu+ \sum_{i=1}^n 2\paren{a^0_{ik}(x,t)-a^0_{ik}(x,0)+\eta a^1_{ik}(x,t)}\xi_k\diff{p_\mu}{x_i}\\ &\qquad\qquad\qquad\qquad-\paren{\diff{a^0_{jk}(x,t)}{x_i} -\diff{a^0_{jk}(x,0)}{x_i}+\eta\diff{a^1_{jk}(x,t)}{x_i}}\xi_j\xi_k\diff{p_\mu}{\xi_i}.
\end{aligned} \] As before, by the asymptotic flatness condition \linref{LFlat} there is a $T_1=T_1(C,C_0)$ so that if $t<T_1$ we have that \begin{multline*} \abs{\sum_{i=1}^n 2\paren{a^0_{ik}(x,t)-a^0_{ik}(x,0)}\xi_k\diff{p_\mu}{x_i} \right.\\\left. - \paren{\diff{a^0_{jk}(x,t)}{x_i}-\diff{a^0_{jk}(x,0)}{x_i}} \xi_j\xi_k\diff{p_\mu}{\xi_i}}\leq \frac{1}{2C_0}\frac{\abs{\xi}}{\jap{x}^2}. \end{multline*} Our conditions on $A_1$, together with the control of the seminorms of $p_\mu$, give that \[ \eta\abs{\sum_{i=1}^na^1_{ik}(x,t)\xi_k\diff{p_\mu}{x_i}+\diff{a^1_{jk}(x,t)}{x_i} \xi_j\xi_k\diff{p_\mu}{\xi_i}}\leq \frac{\eta C \abs{\xi}}{\jap{x}^2}.\] We choose $\eta$ so that $\frac{\eta C \abs{\xi}}{\jap{x}^2}\leq\frac{1}{2C_1}\frac{\abs{\xi}}{\jap{x}^2}$. Now using the assumption that $h_0$ satisfies \linref{LNoTrap} we get that \[H_{h_t}p_\mu\geq H_{h_0} p_\mu-\frac{1}{C_1}\frac{\abs{\xi}}{\jap{x}^2}\geq \frac{1}{C_1}\frac{\abs{\xi}}{\jap{x-x_\mu}^2}-C_1.\] The lemma now follows from the same version of Doi's lemma as before. \end{proof} \section{Nonlinear Results}\label{sec:nonlinear} In this section we approach \eqref{theproblem} by the artificial viscosity method. Hence, we are interested in the system \begin{equation}\label{NonLinGenS1} \left\{\begin{aligned} & \begin{aligned}\partial_t u =&-\epsilon\Delta^2u+ia_{jk}(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\partial_{x_j}\partial_{x_k}u \\& + \vec b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla u +\vec{b}_2(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla \bar u\\ & +c_1(\ensuremath{x, t, u, \bar{u}})u + c_2(\ensuremath{x, t, u, \bar{u}})\bar u +f(x,t), \end{aligned}\\ &u(x,0)=u_0(x). \end{aligned}\right. \end{equation} We assume the coefficients satisfy the conditions set out in the introduction. We take $s>N+n+4$ even, with $N$ as in \linref{LRegularity}. We take $\tilde N> s+2$.
We abbreviate our system to \begin{equation*} \left\{ \begin{aligned} &\partial_t u=-\epsilon\Delta^2u+\scr{L}(u)u+f(x,t)\\ &u(x,0)=u_0(x), \end{aligned} \right. \end{equation*} where \begin{multline*}\scr{L}(u)v=ia_{jk}(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\partial_{x_j}\partial_{x_k}v + \vec b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla v + \\\vec{b}_2(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla \bar v + c_1(\ensuremath{x, t, u, \bar{u}})v + c_2(\ensuremath{x, t, u, \bar{u}})\bar v.\end{multline*} \begin{thm}\label{contractmapping} Take $s> n+3$. For $v_0\in H^s$ and $f\in L^\infty([0,1];H^s),$ define $\lambda=\|v_0\|_s + \int_0^1\|f(t)\|_{H^s}\,dt$ and $$X_{M_0,T}=\{v:\mathbb{R}^n\times[0,T]\to \mathbb{C} \mid v(x,0)=v_0, v\in C([0,T];H^s), \norm{v}_{L^\infty_tH^s_x}\leq M_0\}.$$ If $\lambda<M_0/2$, then there exists $T_\epsilon$, $1>T_\epsilon>0$, so that equation \eqref{NonLinGenS1} with initial data $v_0$ has a unique solution $v^\epsilon\in X_{M_0,T_\epsilon}.$ \end{thm} \begin{proof} For $t<1$, consider the integral form of the equation \[\Gamma v(t)=e^{-\epsilon t\Delta^2}v_0 + \int_0^t e^{-\epsilon (t-t')\Delta^2}\paren{\scr{L}(v)v(t')+f(\cdot,t')}\,dt'.\] We show that $\Gamma$ is a contraction mapping on the space $X_{M_0,T}$ after a suitable restriction of $T$. So let $\alpha$ be a multi-index such that $\abs{\alpha}=s$ and consider \[\partial_x^\alpha \Gamma v(t)=e^{-\epsilon t\Delta^2}\partial_x^\alpha v_0 + \int_0^t\partial_x^\alpha \paren{e^{-\epsilon(t-t')\Delta^2}\scr{L}(v)v+f}(t')\,dt'.\] Choose multi-indices $\beta$ and $\beta'$ so that $\abs{\beta'}=2$, $\abs{\beta}=s-2$, and $\alpha=\beta+\beta'$.
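The reason for splitting $\alpha=\beta+\beta'$ with $\abs{\beta'}=2$ is the standard parabolic smoothing estimate for the semigroup $e^{-\epsilon t\Delta^2}$, which we record for convenience: by Plancherel's theorem,
\[
\norm{\partial_x^{\beta'}e^{-\epsilon t\Delta^2}g}_2\leq \sup_{\xi\in\mathbb{R}^n}\abs{\xi}^{2}e^{-\epsilon t\abs{\xi}^{4}}\,\norm{g}_2\leq \frac{C}{(\epsilon t)^{1/2}}\norm{g}_2,
\]
since $r\mapsto r^{2}e^{-\epsilon t r^{4}}$ attains its maximum, $(2e\epsilon t)^{-1/2}$, at $r^{2}=(2\epsilon t)^{-1/2}$.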
\begin{multline*} \partial_x^\alpha \Gamma v(t)= e^{-\epsilon t\Delta^2}\partial_x^\alpha v_0 + \int_0^t\partial_x^{\beta'}e^{-\epsilon(t-t')\Delta^2} \partial_x^\beta\paren{\scr{L}(v)v(t')}\,dt'\\ + \int_0^te^{-\epsilon(t-t')\Delta^2}\partial_x^\alpha f(\cdot,t')\,dt'. \end{multline*} Hence, \begin{multline*} \norm{\partial_x^\alpha\Gamma v}_2 \leq C\paren{\norm{\partial_x^\alpha v_0}_2 +\int_0^t\norm{\partial_x^{\beta'}e^{-\epsilon (t-t')\Delta^2}\partial_x^\beta\paren{\scr{L}(v)v(t')}}_2\,dt' + \int_0^t\norm{\partial_x^\alpha f(t')}_2\,dt'} \\ \leq C\paren{\norm{v_0}_{H^s} + \int_0^t\frac{1}{(t-t')^{1/2}\epsilon^{1/2}} \norm{\partial_x^\beta\paren{\scr{L}(v)v(t')}}_2\,dt' + \int_0^t\norm{f(t')}_{H^s}\,dt'}. \end{multline*} In order to proceed further we need to turn our attention to $\partial^\beta_x\scr{L}(v)v$. \begin{lem}\label{gammacontract} Let $u,v\in X_{M,T}$ and suppose that $\abs{\beta}=s-2$ for $s>n+3$. Then there exists a $P\in \mathbb{N}$ so that $\norm{\partial^\beta_x\scr{L}(u)v}_2\leq C\norm{v}_{H^s}\paren{1+\norm{u}_{H^s}+\norm{u}_{H^s}^P}$ for $0\leq t \leq T$ and $C=C(M,n,s)$. \end{lem} \begin{proof} We estimate term by term: \begin{multline*} \partial^\beta_x\scr{L}(u)v=\partial_x^\beta\paren{a_{jk}(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\partial_{x_j}\partial_{x_k}v}+ \partial_x^{\beta}\paren{\vec b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla v}+\\ \partial_x^\beta\paren{\vec{b}_2(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla \bar v} + \partial_x^\beta \paren{c_1(\ensuremath{x, t, u, \bar{u}})v} + \partial_x^\beta\paren{c_2(\ensuremath{x, t, u, \bar{u}})\bar v}. \end{multline*} We start with $c_1(\ensuremath{x, t, u, \bar{u}})$.
Let $\tilde c_1(\ensuremath{x, t, u, \bar{u}})=c_1(\ensuremath{x, t, u, \bar{u}})-c_1(x,t,0,0)$ so that $\tilde c_1(x,t,0,0)=0.$ Then $\partial_x^\beta\paren{c_1(\ensuremath{x, t, u, \bar{u}})v} = \partial_x^\beta \paren{\tilde c_1(\ensuremath{x, t, u, \bar{u}})v} + \partial_x^\beta\paren{c_1(x,t,0,0)v}$, and the $H^s$ norm of the second term is clearly bounded by $C\norm{v}_{H^s}$, where $C$ depends on $c_1$ and $\beta$. We have \[\norm{\partial_x^\beta\paren{\tilde c_1(\ensuremath{x, t, u, \bar{u}})v}}_2 \leq \sum_{\gamma+\delta=\beta}\norm{\partial_x^\gamma\paren{\tilde c_1(\ensuremath{x, t, u, \bar{u}})}\partial_x^\delta v}_2.\] If $\abs{\delta} < s-n/2$, then $\norm{\partial_x^\delta v}_\infty\leq C\|\partial_x^\delta v\|_{H^{s-\abs{\delta}}}\leq C\norm{v}_{H^s}.$ It follows that $\norm{\partial_x^\gamma \tilde c_1(\ensuremath{x, t, u, \bar{u}})\partial_x^\delta v}_2\leq C\norm{v}_{H^s}\norm{\partial_x^\gamma \tilde c_1(\ensuremath{x, t, u, \bar{u}})}_2$. As $\tilde c_1(x,t,0,0)=0$ and $\tilde c_1\in C_b^{\tilde N}$, it follows that $\tilde c_1(\ensuremath{x, t, u, \bar{u}})\in H^s$. Hence $\norm{\partial_x^\gamma\tilde c_1(\ensuremath{x, t, u, \bar{u}})}_2\leq \norm{\tilde c_1(\ensuremath{x, t, u, \bar{u}})}_{H^s}\leq C\paren{\norm{u}_{H^s}+\norm{u}_{H^s}^p}$ for some $p\in \mathbb{N}$. On the other hand, if $\abs{\delta} \geq s-n/2$, then we may not estimate $\partial_x^\delta v$ in $L^\infty$. Instead we estimate the other factor in $L^\infty$. Because $\abs{\gamma}+\abs{\delta}=\abs{\beta}=s-2$, we have that $\abs{\gamma}\leq n/2-2$. Since $s>n-2$, we have that $s-\abs{\gamma}>n/2$. Therefore, \begin{multline*} \norm{\partial_x^\gamma\tilde c_1(\ensuremath{x, t, u, \bar{u}})}_\infty \leq C\norm{\partial_x^\gamma\tilde c_1(\ensuremath{x, t, u, \bar{u}})}_{H^{s-\abs{\gamma}}}\\\leq C\norm{\tilde c_1(\ensuremath{x, t, u, \bar{u}})}_{H^s}\leq C\paren{\norm{u}_{H^s}+\norm{u}_{H^s}^p} \end{multline*} with $p$ as before.
The estimates for $c_2$ work in exactly the same way. To estimate $\partial_x^{\beta}\paren{\vec b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla v}$ note that our assumptions imply\\ $b_1(x,t,0,0,\vec 0, \vec 0)=0.$ Again we have \[\norm{\partial_x^{\beta}\paren{\vec b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\cdot\nabla v}}_2 \leq \sum_{i=1}^n\sum_{\gamma+\delta=\beta}\norm{\partial_x^\gamma b_{1,i}(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\partial_x^\delta\partial_{x_i}v}_2.\] In this case, if $\abs{\delta} < s-n/2-1$, then we proceed by estimating $\partial_x^\delta\partial_{x_i}v$ in $L^\infty$. If instead $\abs{\delta}\geq s-n/2-1$, then we get that $\abs{\gamma}\leq n/2-1$. We have that $s>n+2$, so that $s-\abs{\gamma}>n/2+1$. Hence we may estimate $\partial_x^\gamma b_{1,i}(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})$ in $L^\infty$. Again the estimates for the terms involving $b_2$ work in the same way as the terms involving $b_1$. The estimates for the terms involving $a_{jk}$ are essentially identical to those for $c_i$ and $b_i$, except that we need to require $s>n+3$. \end{proof} So we know that \[\norm{\partial_x^\alpha\Gamma v}_2 \leq C\norm{v_0}_{H^s}+C_{M_0}\frac{2t^{1/2}}{\epsilon^{1/2}}M_0(1+M_0^p)+ \int_0^1\|f(t)\|_{H^s}\,dt\] and therefore \[\norm{\Gamma v}_{H^s} \leq C\norm{v_0}_{H^s}+C_{M_0}\paren{\frac{2t^{1/2}}{\epsilon^{1/2}}+t}M_0(1+M_0^p)+ \int_0^1\|f(t)\|_{H^s}\,dt.\] By choosing $T$ so that $C_{M_0}\paren{T^{1/2}/\epsilon^{1/2}+T}M_0(1+M_0^p)<\lambda$, we get that $\Gamma$ maps $X_{M_0,T}$ to itself. Now take $u,v\in X_{M_0,T}$. We wish to show that $\Gamma$ is a contraction mapping. We have that \begin{multline*}\Gamma u(t) - \Gamma v(t) = \int_0^te^{-\epsilon(t-t')\Delta^2}\paren{\scr{L}(u)u-\scr{L}(v)v}(t')\,dt' =\\ \int_0^t e^{-\epsilon(t-t')\Delta^2} \paren{\paren{\scr{L}(u)-\scr{L}(v)}u+\scr{L}(v)\paren{u-v}}\,dt'.
\end{multline*} To estimate the terms that arise from $\scr{L}(v)\paren{u-v}$ we may use Lemma \ref{gammacontract} to conclude that \[\norm{\int_0^te^{-\epsilon(t-t')\Delta^2}\scr{L}(v)\paren{u-v}\,dt'}_{H^s}\leq C\norm{u-v}_{H^s}\paren{\frac{t^{1/2}}{\epsilon^{1/2}}+t}\paren{1+M_0+M_0^p}.\] So by choosing $T_\epsilon < T$ this last expression is less than $\frac{1}{4}\norm{u-v}_{H^s}.$ To estimate the terms involving $\paren{\scr{L}(u)-\scr{L}(v)}u$ we proceed in essentially the same way. For example, to estimate $\norm{\paren{c_1(\ensuremath{x, t, u, \bar{u}})-c_1(\ensuremath{x, t, v, \bar{v}})}u}_{H^s}$, rewrite the difference as follows: \begin{multline*} \paren{c_1(x,t,u,\bar u)-c_1(x,t,v,\bar u)+c_1(x,t,v,\bar u) - c_1(x,t,v,\bar v)}u=\\ \partial_{z_1}c_1(x,t,su+(1-s)v,\bar u)u\paren{u-v}+ \partial_{z_2}c_1(x,t,v,r\bar u+(1-r)\bar v)u\paren{\bar u-\bar v} \end{multline*} for some $s,r\in[0,1]$. We can see that the above two terms are bounded by $C(M_0+M_0^p)\norm{u-v}_{H^s}$. The other terms work similarly. We conclude that \[\norm{\paren{\Gamma u-\Gamma v}(t)}_{H^s}\leq C\paren{\frac{T^{1/2}}{\epsilon^{1/2}}+T}\paren{1+M_0+M_0^p}\norm{u-v}_{H^s}.\] Choosing $T_\epsilon<T$ appropriately, we see $\Gamma$ is a contraction mapping. Hence there is a unique $v^\epsilon\in X_{M_0,T_\epsilon}$ such that $v^\epsilon$ solves \eqref{NonLinGenS1} with initial data $v_0$. \end{proof} The following lemma is useful in verifying the conditions for our linear estimates, which help us get a uniform time of existence. \begin{lem}\label{2ndorderassumps} Let $v\in X_{M_0,T}$ with $v(0)=u_0$, and suppose that $v$ satisfies \eqref{NonLinGenS1}. Then for $s>N+n/2+4$ the coefficients $a_{jk}(x,t,v,\bar v, \nabla v, \nabla \bar v)$ satisfy \linref{LRegularity}, \linref{LElliptic}, \linref{LFlat}, and \linref{LNoTrap}, where the constant $C$ that appears in these conditions depends on $M_0$ and $C_1$ depends on $u_0$.
\end{lem} \begin{proof} Take $s>N+n/2+4$; then $v$ together with all of its derivatives up to order $N+1$ are in $L^\infty$. This, together with \nlinref{NLRegularity}, allows us to verify \linref{LRegularity}. The assumptions \linref{LElliptic} and \linref{LNoTrap} follow immediately from \nlinref{NLReal2ndOrder}, \nlinref{NLSymmetric}, \nlinref{NLElliptic}, and \nlinref{NLNoTrap}. It remains to verify \linref{LFlat} and \linref{LFirstOrder}. Clearly, $\abs{I-a_{jk}(x,t,v,\bar v,\nabla v, \nabla\bar v )}\leq C/\jap{x}^2$ follows from \nlinref{NLFlat} and our $L^\infty$ bounds just as in the cases above. Let $*$ denote $(x,t,v,\bar v,\nabla v, \nabla\bar v )$, and consider \[\partial_{x_i}a_{jk}(*)=\diff{a_{jk}}{x_i}(*) + \diff{a_{jk}}{v}(*)\diff{v}{x_i} + \cdots + \diff{a_{jk}}{\partial_{x_n}\bar v}(*)\frac{\partial^2 \bar v}{\partial x_i \partial x_n}.\] The first and second order derivatives of $v$ are in $L^\infty$ because $s> n/2+2$. Hence by using \nlinref{NLFlat} we can bound each term by $C/\jap{x}^2$. The estimate for $\partial_t a_{jk}(x,t,v,\bar v,\nabla v, \nabla\bar v)$ is similar. The primary difference is that we have to estimate terms of the form $\partial_tv$ and $\partial_t(\partial_{x_i} v)$ in $L^\infty$. To handle $\partial_tv$ it is enough to notice that $\partial_t v$ is equal to the right hand side of \eqref{NonLinGenS1}. Each term of $\scr{L}(v)v$ is in $L^\infty$ by \nlinref{NLRegularity} and our $L^\infty$ bounds on $v$ and its derivatives. Since $f(x,t)\in L^\infty_tH^s_x$, it is in $L^\infty_{t,x}$.
To handle the final term $\partial_t\partial_{x_i} v$ we apply $\partial_{x_i}$ to our equation and get \begin{equation*} \begin{aligned} &\partial_t \partial_{x_i}v = -\epsilon\Delta^2 \partial_{x_i} v + ia_{jk}(*)\partial_{x_j x_k}(\partial_{x_i}v) + i\diff{a_{jk}}{x_i}(*)\partial_{x_j x_k}v + i\diff{a_{jk}}{v}(*)\partial_{x_i}v\,\partial_{x_j x_k}v\\ &+i\diff{a_{jk}}{\bar v}(*)\partial_{x_i}\bar v\,\partial_{x_j x_k}v +i \sum_{l=1}^n \paren{\diff{a_{jk}}{\partial_l v}(*)\partial_{x_j x_k}v}\partial_{x_l x_i}v +i\sum_{l=1}^n \paren{\diff{a_{jk}}{\partial_l \bar v}(*)\partial_{x_j x_k}v}\partial_{x_l x_i}\bar v\\ &+\vec b_1(*)\cdot\nabla \partial_{x_i}v + \cdots + \partial_{x_i} f(x,t). \end{aligned} \end{equation*} We find that each of these terms may again be handled by our $L^\infty$ bounds for $v$ and its derivatives and \nlinref{NLRegularity}. Again $\partial_{x_i}f\in L^\infty_t H^{s-1}_x$, so we may bound $\norm{\partial_{x_i} f}_\infty.$ Lastly, to bound $\partial_t\partial_{x_i}a_{jk}(x,t,v,\bar v, \nabla v, \nabla \bar v)$ we proceed in the same way. We additionally have to estimate terms of the form $\partial_t\partial_{x_i}\partial_{x_j}v$ in $L^\infty$, but we simply apply $\partial_{x_j}\partial_{x_i}$ to our equation, and again we only encounter terms involving the derivatives of our coefficients evaluated at $v$ multiplied by derivatives of $v$ of order less than $4$; so for $s>n/2+4$ we may estimate these terms as before. \end{proof} To see that our first order terms satisfy the conditions of our linear theory we first need two lemmas. \begin{lem}\label{bintegrable} Suppose $b(x,t, z_1,\ldots, z_{2n+2})\in C^{\tilde N}_b(\mathbb{R}^n\times\mathbb{R}\times B^{2n+2}_M(0))$ satisfies \\$b(x,t,0,0,\vec 0,\vec 0)=0$ and $\partial_{z_i} b(x,t,0,0,\vec 0,\vec 0)=0$.
For any $M\in \mathbb{N}$, if $s>n/2+M+1$ and $\tilde N>s+2$, then $b(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})\in W^{1,M}$ for $u\in L^\infty_tH^s_x$. \end{lem} \begin{proof} First, to see that it is in $L^1$, we set $f(r)=b(x,t,ru,r\bar u,r\nabla u,r\nabla \bar u).$ Then we calculate \begin{multline*} f'(r)=\diff{b}{z_1}(x,t,ru,r\bar u,r\nabla u,r\nabla \bar u)u + \diff{b}{z_2}(x,t,ru,r\bar u,r\nabla u,r\nabla \bar u)\bar u +\\ \sum_{i=1}^n\diff{b}{z_{i+2}}(x,t,ru,r\bar u,r\nabla u,r\nabla \bar u)\diff{u}{x_i} + \sum_{i=1}^n\diff{b}{z_{i+n+2}}(x,t,ru,r\bar u,r\nabla u,r\nabla \bar u)\diff{\bar u}{x_i}. \end{multline*} Clearly $f(0)=f'(0)=0,$ and $f(1)=b(x,t,u,\bar u,\nabla u,\nabla\bar u)$. Hence, by Taylor's theorem with integral remainder, $f(1)=\int_0^1(1-r)f''(r)\,dr$, and so \[\norm{b}_1=\norm{f(1)}_1=\norm{\int_0^1(1-r)f''(r)\,dr}_1\leq \int_0^1(1-r)\norm{f''(r)}_1\,dr.\] Within $f''(r)$ are terms of the form $(\partial^2 b) u^2$, $(\partial^2 b) u\bar u$, $(\partial^2 b) u\partial u$, $(\partial^2 b) \bar u\partial u$,\\ $(\partial^2 b) u\partial \bar u$, etc. The key observation is that they are all exactly of degree two when viewed as polynomials in $u$ and the derivatives of $u$. So we may apply the Cauchy-Schwarz inequality and integrate in $r$. For example, \begin{multline*}\int \abs{\frac{\partial^2 b}{\partial z_1\partial z_3}(x,t,u,\bar u,\nabla u, \nabla \bar u)u\diff{u}{x_1}}\,dx\leq \norm{\frac{\partial^2 b}{\partial z_1 \partial z_3}}_\infty\int\abs{u\diff{u}{x_1}}\,dx\\ \leq \norm{\frac{\partial^2 b}{\partial z_1\partial z_3}}_\infty \norm{u}_2\norm{\diff{u}{x_1}}_2\leq C_{b,u}\norm{u}_{H^s}^2. \end{multline*} Estimates of $\partial_x^\alpha b$ work similarly.
In fact, \[\partial_{x_i} b=\diff{b}{x_i}(\cdot)+\diff{b}{z_1}(\cdot)\diff{u}{x_i}+ \diff{b}{z_2}(\cdot)\diff{\bar u}{x_i} + \sum_{j=1}^n\diff{b}{z_{j+2}}(\cdot)\frac{\partial^2u}{\partial x_i\partial x_j} + \sum_{j=1}^n\diff{b}{z_{j+n+2}}(\cdot)\frac{\partial^2 \bar u}{\partial x_i\partial x_j}.\] Each term has the property that it vanishes when evaluated at $u=0$, as do its first derivatives in $z_1,\ldots,z_{2n+2}$. Hence we may apply the argument above to bound the $L^1$ norm of each of these. \end{proof} The above lemma, together with the following observation of Kenig et al.\ \cite{CKGPLV1998}, allows us to see that $b_1(\ensuremath{x, t, u, \bar{u}, \grad u, \grad \bar u})$ satisfies our linear assumptions. \begin{lem}\label{bdecay} For $M>N+n$, if $b(x,t)\in W^{1,M}$ uniformly in $t$, then one can find $\varphi_\mu(x,t)$ so that $\supp \varphi_\mu(\cdot,t) \subset Q_\mu^*$,\\ $\norm{\varphi_\mu(\cdot,t)}_{C^N_b}\leq 1$, and \[b(x,t)=\sum_{\mu\in\mathbb{Z}^n}\alpha_\mu(t)\varphi_\mu(x,t)\text{ with }\sum_{\mu\in\mathbb{Z}^n}\abs{\alpha_\mu}\leq c\norm{b}_{W^{1,M}}. \] \end{lem} \begin{proof} By the Sobolev embedding theorem, if $M>N+n$ then $\norm{b(\cdot,t)}_{C^N(Q_\mu^*)}\leq C\norm{b}_{W^{1,M}(Q_\mu^*)}$ with $C$ independent of $\mu$. Let $\eta_\mu$ be a $C^\infty$ partition of unity subordinate to $Q^*_\mu$ with $\norm{\eta_\mu}_{C^N}$ independent of $\mu$. Then $b(x,t)=\sum_{\mu\in\mathbb{Z}^n}\eta_\mu b(x,t)$ and $\norm{\eta_\mu b}_{C^N}\leq C \norm{b}_{W^{1,M}(Q_\mu^*)}$. Since the $Q_\mu^*$ have bounded overlap, $\sum_{\mu\in \mathbb{Z}^n}\norm{b}_{W^{1,M}(Q_\mu^*)}\leq C\norm{b}_{W^{1,M}}$. Hence we just have to set \[\varphi_\mu(x,t)=\frac{b(x,t)\eta_\mu(x)}{\norm{\eta_\mu b(\cdot,t)}_{C^N}}\text{ and } \alpha_\mu(t) = \norm{\eta_\mu b(\cdot,t)}_{C^N}.\] \end{proof} Let $J=(1+\Delta)^{\frac{1}{2}}$. In order to get the necessary estimates on $\norm{u(t)}_{H^s}$ we inductively estimate $J^{2m}u$.
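Schematically, the systems below arise from the usual commutator bookkeeping: applying $J^{2m}$ to the second order term and commuting,
\[
J^{2m}\paren{a_{jk}\partial^2_{jk}u} = a_{jk}\partial^2_{jk}J^{2m}u + [J^{2m},a_{jk}]\partial^2_{jk}u,
\]
where $[J^{2m},a_{jk}]$ is a pseudodifferential operator of order $2m-1$ whose principal symbol is a multiple of $\nabla_x a_{jk}\cdot\nabla_\xi\jap{\xi}^{2m}$; this commutator is the source of the additional first order terms that appear below.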
Again let $*$ denote $(x,t,u^\epsilon,\bar u^\epsilon,\nabla u^\epsilon,\nabla \bar u^\epsilon)$. As in \cite{CKGPLV2004} we consider the following systems, for $m=1, 2, \ldots, s/2$: \[ \begin{aligned} \partial_t J^{2m} u^\epsilon =& -\epsilon \Delta^2 J^{2m}u^\epsilon + \scr{L}(u^\epsilon)J^{2m}u^\epsilon + 2mi\partial_{x_l}\paren{a_{jk}(*)}\partial^3_{jkl}J^{2\paren{m-1}}u^\epsilon \\ +& i\partial_{jk}^2u^\epsilon\partial_{\partial_l u}a_{jk}(*) \partial_lJ^{2m}u^\epsilon + i\partial^2_{jk}u^\epsilon\partial_{\partial_l \bar u}a_{jk}(*)\partial_lJ^{2m}\bar u^\epsilon\\ +&\partial_j u^\epsilon \paren{\partial_{\partial_l u}b_{1,j}(*)\partial_lJ^{2m}u^\epsilon +\partial_{\partial_l \bar u}b_{1,j}(*)\partial_lJ^{2m}\bar u^\epsilon}\\ +&\partial_j \bar u^\epsilon \paren{\partial_{\partial_l u}b_{2,j}(*)\partial_lJ^{2m}u^\epsilon +\partial_{\partial_l \bar u}b_{2,j}(*)\partial_lJ^{2m}\bar u^\epsilon} \\ +&c_{1,2m}(x,t,\paren{\partial^\beta u^\epsilon}_{\abs{\beta}\leq 4},\paren{\partial^\beta\bar u^\epsilon}_{\abs{\beta}\leq 4})R_{2m,1}J^{2m}u^\epsilon\\ +& c_{2,2m}(x,t,\paren{\partial^\beta u^\epsilon}_{\abs{\beta}\leq 4},\paren{\partial^\beta\bar u^\epsilon}_{\abs{\beta}\leq 4})R_{2m,2}J^{2m}\bar u^\epsilon \\ +&f_{2m}(x,t,\paren{\partial^\beta u^\epsilon}_{\abs{\beta}\leq 2m-2},\paren{\partial^\beta \bar u^\epsilon}_{\abs{\beta}\leq 2m-2}) +J^{2m}f(x,t). \end{aligned} \] More briefly, \begin{multline*} \partial_t J^{2m}u^\epsilon = -\epsilon\Delta^2 J^{2m}u^\epsilon+\scr{L}_{2m}(u^\epsilon)J^{2m}u^\epsilon \\+f_{2m}(x,t,\paren{\partial^\beta u^\epsilon}_{\abs{\beta}\leq 2m-2},\paren{\partial^\beta \bar u^\epsilon}_{\abs{\beta}\leq 2m-2})+J^{2m}f(x,t), \end{multline*} where \begin{align*} &\scr{L}_{2m}(u)v=ia_{jk}(x,t,u,\bar u, \nabla u,\nabla \bar u)\partial^2_{jk}v + b_{1,1_j}(x,t,\paren{\partial^\alpha u}_{\abs{\alpha}\leq 2},\paren{\partial^\alpha \bar u}_{\abs{\alpha}\leq 1})\partial_{x_j}v \\& +\tilde b_{{lk,1}_j}(x,t,u,\bar u, \nabla u,\nabla \bar u)R_{lk}\partial_{x_j}v + \vec b_{{2m,2}}(x,t,\paren{\partial^\alpha u}_{\abs{\alpha}\leq 2},\paren{\partial^\alpha \bar u}_{\abs{\alpha}\leq 1})\cdot \nabla\bar v\\& + c_{1,2m}(x,t,\paren{\partial^\beta u}_{\abs{\beta}\leq 4},\paren{\partial^\beta\bar u}_{\abs{\beta}\leq 4})R_{2m,1}v\\& + c_{2,2m}(x,t,\paren{\partial^\beta u}_{\abs{\beta}\leq 4},\paren{\partial^\beta\bar u}_{\abs{\beta}\leq 4})R_{2m,2}\bar v, \end{align*} with \begin{multline*} b_{1,1_j}(x,t,u,\bar u, \nabla u,\nabla \bar u) = b_{1_j}(x,t,u,\bar u, \nabla u,\nabla \bar u) + i\partial_{lk}^2u\partial_{\partial_j u}a_{lk}(x,t,u,\bar u, \nabla u,\nabla \bar u) \\+\partial_l u\partial_{\partial_j u}b_{1_l}(x,t,u,\bar u, \nabla u,\nabla \bar u) + \partial_l\bar u\partial_{\partial_j u}b_{2_l}(x,t,u,\bar u, \nabla u,\nabla \bar u), \end{multline*} $$\tilde b_{lk,1_j}=2mi\partial_{x_j}(a_{lk}(x,t,u,\bar u, \nabla u,\nabla \bar u)),$$ $$R_{lk}=\partial^2_{lk}J^{-2}, \quad \text{ and }$$ \begin{multline*} b_{2m,2_j}(x,t,u,\bar u, \nabla u,\nabla \bar u) = b_{2_j}(x,t,u,\bar u, \nabla u,\nabla \bar u) + i\partial_{lk}^2u\partial_{\partial_j \bar u}a_{lk}(x,t,u,\bar u, \nabla u,\nabla \bar u) \\+\partial_l u\partial_{\partial_j \bar u}b_{1_l}(x,t,u,\bar u, \nabla u,\nabla \bar u) + \partial_l\bar u\partial_{\partial_j \bar u}b_{2_l}(x,t,u,\bar u, \nabla u,\nabla \bar u). \end{multline*} The same observations as in \cite{CKGPLV2004} apply. The principal part of $\scr{L}_{2m}(u^\epsilon)$ is independent of $m$. The coefficients $b_{{1,1}_j}$, $b_{{2m,2}_j}$, and $\tilde b_{lk,1_j}$ depend on the coefficients $a_{jk}$, $\vec b_l$ and their first derivatives, on $u$ and the derivatives of $u$, and on $m$ only through a multiplicative constant. Notice that here both $a_{jk}$ and $b_2$ generate first order terms, but the $\Psi$DOs $R_{lk}$ are independent of $m$. We need to verify that these coefficients satisfy the conditions for our linear theory when we evaluate them at any solution $v\in X_{M_0,T}$ with $v(0)=u_0$.
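We also note in passing (with the convention that $J^{2}$ has symbol $\jap{\xi}^{2}=1+\abs{\xi}^{2}$) why the operators $R_{lk}$ are harmless: the symbol of $R_{lk}=\partial^2_{lk}J^{-2}$ is
\[
-\,\frac{\xi_l\xi_k}{\jap{\xi}^{2}},
\]
which is bounded together with all of its derivatives, so it belongs to $S^0_{1,0}$; in particular, $R_{lk}$ is bounded on $L^2$ and on each $H^s$.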
Since the leading order coefficients have not changed, Lemma~\ref{2ndorderassumps} still assures us that our linear assumptions are verified. Because $s>N+n/2+4$, our $H^s$ bounds on $v$ together with \nlinref{NLRegularity} give us that the other coefficients verify \linref{LRegularity}. Now we just need to verify \linref{LFirstOrder}. Notice that in our linear theory we had the equation in divergence form, and hence we have to add an additional first order term to be able to apply the theory. \begin{lem} The first order coefficients $\vec b_{1,1_j}(x,t,v,\bar v,\nabla v, \nabla\bar v)$, \\ $\tilde b_{lk,1_j}(x,t,v,\bar v,\nabla v, \nabla\bar v)R_{lk}$ and $\partial_{x_l}\paren{a_{jk}(x,t,v,\bar v,\nabla v,\nabla\bar v)}$ satisfy \linref{LFirstOrder}. \end{lem} \begin{proof} Let $M$ be as in Lemma~\ref{bdecay}. Then by \nlinref{NLFirstOrder} we may directly apply Lemma~\ref{bintegrable} to get that $b_{1_j}\in W^{1,M}$. Similarly we may apply Lemma~\ref{bintegrable} to $\partial_t b_{1_j}$. Hence $b_{1_j}$ has the necessary decomposition for \linref{LFirstOrder}. For the terms involving $\partial_{\partial_ju}b_{k_l}$ $(k=1,2)$, notice that if $b_k(x,t,0,0,\vec 0, \vec 0)=\partial_{z_i}b_k(x,t,0,0,\vec 0,\vec 0) = 0$, then $\paren{z_i\partial_{z_l}b_k(x,t,\vec z)}|_{z=0} = \partial_{z_m}\paren{z_i\partial_{z_l}b_k(x,t,\vec z)}|_{z=0}=0$. So we may again apply Lemma~\ref{bintegrable} and, in the same way as for $b_{1_j}$, get the desired decomposition for these terms. The bounds for $\partial_{x_l}\paren{a_{jk}(x,t,v,\bar v, \nabla v, \nabla\bar v)}$ and $\partial_t\partial_{x_l}\paren{a_{jk}(x,t,v,\bar v,\nabla v, \nabla\bar v)}$ follow from \nlinref{NLFirstOrder} together with the $L^\infty$ bounds for $\partial_{x_l} v$, $\partial_{x_l} \bar v$, $\partial_{x_lx_i}^2 v$ and $\partial_{x_lx_i}^2 \bar v$.
Similarly with $\tilde b_{lk,1_j}(x,t,v,\bar v, \nabla v, \nabla\bar v)R_{lk}$ and $i\partial_{lk}^2v\partial_{\partial_j u}a_{lk}(x,t,v,\bar v, \nabla v,\nabla \bar v)$. \end{proof} For $J^{2m}u^\epsilon$, observe that if we evaluate our coefficients at any $v\in X_{T,M_0}$ with $v(0)=u_0$ we arrive at a linear equation whose solution satisfies Theorem~\ref{aplin} with $A_m$ depending on $u_0$ and the behavior of the coefficients for the system of $J^{2m}u^{\epsilon}$ at $t=0$. Let $A=\max A_m$ and take $M_0=20A\lambda$. Notice that at each stage the terms that come from $f_{2m}$ depend only on terms of order strictly less than $2(m-1)$, which have been estimated in a previous step in $L^\infty_TL^2_x$ and so appear with a factor of $T$ in front when we apply our a priori estimate. Thus there are a $T'$ independent of $\epsilon$ and a fixed increasing function $R$ such that \[\sup_{[0,T']}\norm{u^\epsilon(\cdot,t)}_s\leq A\paren{\lambda+T'R(M_0)}.\] We may choose $T'$ small enough so that $A\paren{\lambda+T'R(M_0)}\leq M_0/4=5A\lambda$. Then, by our remark after Theorem~\ref{contractmapping}, we can reapply our contraction mapping theorem with initial data $u(T_\epsilon)$. We obtain a solution up to time $2T_\epsilon$, and if we apply our linear theory again (on the whole interval $[0,2T_\epsilon]$) we see that $\norm{u(2T_\epsilon)}_s\leq M_0/4.$ Then we may continue $k$ times as long as $kT_\epsilon < T'$. We thereby extend $u^\epsilon$ to a solution on $[0,T_0]$ with $u^\epsilon \in X_{T_0,M_0}$ for any $\epsilon$. Finally we come to the last result. \begin{thm} There exists $u\in C([0,T^*];H^{s-1}(\mathbb{R}^n))\cap L^\infty([0,T^*];H^s(\mathbb{R}^n))$ such that $u^\epsilon\to u$ as $\epsilon\to 0$ in $C([0,T^*]; H^{s-1})$. \end{thm} \begin{proof} Let $\epsilon, \epsilon'\in (0,1)$ with $\epsilon'<\epsilon$. Let $v=u^\epsilon-u^{\epsilon'}$.
Then $v$ satisfies \begin{equation*} \left\{\begin{aligned} & \partial_t v=-(\epsilon-\epsilon')\Delta^2 u^\epsilon - \epsilon'\Delta^2 v +\scr{L}(u^{\epsilon'})v +\paren{\scr{L}(u^\epsilon)-\scr{L}(u^{\epsilon'})}u^\epsilon \\ & v(0,x)=0 \end{aligned}\right. \end{equation*} To rewrite $(\scr{L}(u^{\epsilon})-\scr{L}(u^{\epsilon'}))u^{\epsilon}$ we proceed term by term: \[ \begin{aligned} &\paren{ia_{jk}(x,t,u^\epsilon,\bar u^\epsilon,\nabla u^\epsilon, \nabla\bar u^\epsilon)- ia_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'},\nabla u^{\epsilon'}, \nabla\bar u^{\epsilon'})}\partial^2_{jk}u^\epsilon\\ =&\frac{ia_{jk}(x,t,u^\epsilon,\bar u^\epsilon,\nabla u^\epsilon, \nabla\bar u^\epsilon) - ia_{jk}(x,t,u^{\epsilon'},\bar u^\epsilon,\nabla u^\epsilon, \nabla\bar u^\epsilon)}{u^\epsilon - u^{\epsilon'}}\partial^2_{jk}u^\epsilon v \\ +&\frac{ia_{jk}(x,t,u^{\epsilon'},\bar u^\epsilon,\nabla u^\epsilon, \nabla\bar u^\epsilon) - ia_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'},\nabla u^\epsilon, \nabla\bar u^\epsilon)}{\bar u^\epsilon - \bar u^{\epsilon'}}\partial^2_{jk}u^\epsilon \bar v \\ +&\frac{ia_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'},\nabla u^\epsilon, \nabla\bar u^\epsilon) - ia_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'},\nabla u^{\epsilon'}, \nabla\bar u^\epsilon)}{\partial_l u^\epsilon - \partial_l u^{\epsilon'}}\partial^2_{jk}u^\epsilon \partial_l v \\ +& \frac{ia_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'},\nabla u^{\epsilon'}, \nabla\bar u^\epsilon) - ia_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'},\nabla u^{\epsilon'}, \nabla\bar u^{\epsilon'})}{\partial_l \bar u^\epsilon - \partial_l \bar u^{\epsilon'}}\partial^2_{jk}u^\epsilon \partial_l \bar v \\ \end{aligned} \] So we get zeroth and first order terms in $v$. The first order terms have coefficients $\partial_{z_k}a_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'}, \cdot, \nabla \bar u^\epsilon)$ and $\partial_{z_k}a_{jk}(x,t,u^{\epsilon'},\bar u^{\epsilon'}, \nabla u^{\epsilon'},\cdot)$.
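The telescoping identity behind this term-by-term rewriting (changing one argument of the coefficient at a time) is elementary; the sketch below, with a toy smooth coefficient of our own choosing standing in for $a_{jk}$, verifies it numerically:

```python
import math

# Toy smooth coefficient a(u, p), standing in for a_{jk}(x,t,u,grad u).
# (Hypothetical example function; the paper's a_{jk} is only assumed smooth.)
def a(u, p):
    return math.sin(u) + u * p + 0.5 * p ** 2

u, p = 0.7, -0.3      # plays the role of (u^eps, grad u^eps)
up, pp = 0.2, 0.5     # plays the role of (u^eps', grad u^eps')

# Difference quotients, changing one slot at a time as in the display above.
q_u = (a(u, p) - a(up, p)) / (u - up)
q_p = (a(up, p) - a(up, pp)) / (p - pp)

lhs = a(u, p) - a(up, pp)              # difference of the coefficients
rhs = q_u * (u - up) + q_p * (p - pp)  # zeroth + first order terms in "v"
assert abs(lhs - rhs) < 1e-12          # the telescoping is exact
```
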
By \nlinref{NLFirstOrder} we assumed the necessary decomposition of $\partial_{z_l} a_{jk}$ and $\partial_t\partial_{z_l}a_{jk}$, so that these terms satisfy \linref{LFirstOrder}. We apply the same idea to the $b_l$, $l=1,2$, and also get zeroth and first order terms in $v$. To see that our first order terms still satisfy the required estimate we remark that the conclusion of Lemma 4.4 still holds for $\partial_{z_k}b_{1,j}(x,t,u^{\epsilon'},\bar u^{\epsilon'}, \cdot, \nabla\bar u^{\epsilon})\partial_ju^\epsilon$. Indeed, when we estimate the $L^1$ norm we will still have the product of two elements of $H^s$ whose norms are controlled by $M_0.$ Similarly with the other first order terms. Thus we arrive at a system whose coefficients satisfy the conditions for our linear estimates. Applying our linear estimates we conclude that \[\sup_{[0,T^*]}\norm{v}_2\leq C(\epsilon-\epsilon')\int_0^{T^*}\norm{\Delta^2 u^\epsilon}_2\,dt\leq C(\epsilon-\epsilon')T_0M_0.\] Hence as $\epsilon-\epsilon'\to 0$ we have $u^{\epsilon}-u^{\epsilon'}\to 0$ in $C([0,T^*];L^2).$ So there is a $u\in C([0,T^*];L^2)$ such that $u^{\epsilon}\to u.$ Since $u^\epsilon\in L^\infty([0,T^*];H^s)$ and $L^\infty([0,T^*];H^s)$ is the dual of $L^1([0,T^*]; H^{-s})$, we know there is a subsequence that has a weak-$*$ limit in $L^\infty([0,T^*];H^s).$ But by our first estimate this limit could only be $u.$ To see that $u\in C([0,T^*];H^{s-1})$ we simply notice that \[\norm{u(t)-u(t')}_{H^{s-1}}\leq \norm{u(t)-u(t')}_2^{1/s}\norm{u(t)-u(t')}_{H^s}^{(s-1)/s}.\] The first term on the right-hand side tends to 0 and the second is bounded. Hence $u\in C([0,T^*];H^{s-1})$. To see that $u$ is unique we reapply the last argument with $\epsilon=\epsilon'=0$. We end up with \[\sup_{[0,T^*]}\norm{v}_2=0,\] and therefore $u$ is unique. \end{proof} \begin{bibdiv} \begin{biblist} \bibselect{qlsbib} \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{ Geometric partition categories: \\ On short Brauer algebras and their blob subalgebras} \author{Z. K\'AD\'AR and P. P. MARTIN \\ School of Mathematics, University of Leeds} \date{} \maketitle \noindent {\small Abstract: The main result here gives an algebra(/linear category) isomorphism between a geometrically defined subcategory ${\mathcal J}^1_0$ of a short Brauer category ${\mathcal J}_0$ and a certain one-parameter specialisation of the blob category ${\mathfrak b}$. That is, we prove the Conjecture in Remark 6.7 of \cite{KadarMartinYu}. We also define a sequence of generalisations ${\mathcal J}^i_{i-1}$ of the category ${\mathcal J}^1_0$. The connection of $ {\mathcal J}_0$ with the blob category inspires a search for connections also with its beautiful representation theory. Here we obtain formulae determining the non-semisimplicity condition (generalising the classical `root-of-unity' condition). } \noindent {\small {\em Keywords}: diagram algebra, topological spin chain.} \section{Introduction} A motivating aim here is to study the structure of the $k$-linear categories ${\mathcal B}^{l}$ from \cite{KadarMartinYu}, and in particular the representation theory of the corresponding $k$-algebras (with $k$ a field) ${\mathcal J}_{l,n}$ in the non-semisimple cases. These structures are of intrinsic interest (cf. \cite{jk,james,Martin0915,CoxDeVisscher}); and see also \cite{KadarMartinYu} for a discussion of some of the extrinsic motivations for this study --- in short one seeks generalisations of the intriguing examples of Kazhdan--Lusztig theory \cite{KazhdanLusztig79,Soergel97a,AndersenJantzenSoergel94} observed \cite{Martin0915} in the representation theory of the Brauer category ${\mathcal B} = {\mathcal B}^{\infty}$ \cite{Brauer37}. Another motivating aim is to study module categories over monoidal categories (see e.g. \cite{Ostrik01} for a review) beyond the usual `semisimple' setting.
The study strategy in Part~1 (\S\ref{ss:pre1}-\ref{ss:main01}) can be seen as trying to relate the problem to the representation theory of the blob category ${\mathfrak b}$ and the blob algebra ${\mathfrak b}_n$ \cite{MartinSaleur94a}, which is contrastingly very well understood (see e.g. \cite{CoxGrahamMartin03}), itself with deep and tantalising connections to Kazhdan--Lusztig theory \cite{MartinWoodcock03}. (More recently see e.g. \cite{BowmanCoxSpeyer}.) This also allows us to make contact with the original physical motivations for these algebras, as the algebras of physical systems with boundaries and interfaces \cite{MartinSaleur94a}. Indeed the blob algebras have been of renewed interest recently in several areas, not only of physics but also for example the study of KLR algebras \cite{KLRI,KLRII}, Soergel bimodules \cite{Soergel07} and monoidal categories \cite{JoyalStreet}. As we shall see, in the simplest non-trivial case the algebras are (at least) related by inclusions of the form ${\mathfrak b}_m \hookrightarrow {\mathcal J}_{0,n}$. Inclusion is not in general a directly helpful relationship in representation theory. (For example the Temperley--Lieb algebra $T_n$ \cite{tl} is a subalgebra of ${\mathfrak b}_n$, but the representation theories of these algebras are radically different: cf. \cite{Martin91} and \cite{CoxGrahamMartin03}.) However the inclusion here is of `high index', so there is hope that it will indeed shed light on the open problem. In Part~2 (\S\ref{ss:other1}) we include some indicative results on ${\mathcal J}_{0,n}$ representation theory. These are obtained by working directly with ${\mathcal J}_{0,n}$, but serve as a first step in this direction (full analysis of these results is deferred to a separate paper). In Section~\ref{ss:pre1} we introduce concepts and notations. In \S\ref{ss:s4} we define for each category ${\mathcal B}^{l}$ a new subcategory. In \S\ref{ss:main01} we examine the relationship to the blob category.
In particular in Section~\ref{ss:main01} we state and prove the main theorem. In Section~\ref{ss:other1} we consider consequences for the algebras ${\mathcal B }^l_n ={\mathcal J}_{l,n}$ themselves. In Section~\ref{ss:discusstar} we discuss related open problems. \input 4point6sec2 \input 4point6sec3 \section{The blob isomorphism Theorem} \label{ss:main01} \input paul46/p36s4a \input 4point6sec4 \section{On representation theory consequences for short Brauer algebras} \label{ss:other1} \newcommand{\Specht}[2]{{\mathcal S}^{#1}_{#2}} \newcommand{\Deltam}[2]{\Delta^{#1}_{#2}} \newcommand{\Deltamd}[2]{D^{#1}_{#2}} \subsection{Summary of relevant results for ${\mathfrak b}_n$} Let us restrict to the case $k=\C$. From a representation theory perspective the natural parameterisation of ${\mathfrak b}_n$ is $\delta=[2]$ (recall $[n]=(q^n - q^{-n})/(q-q^{-1})$) and $\delta' = \frac{[m+1]}{[m]}$. Then if $m \notin \Z$ we know that ${\mathfrak b}_n$ is semisimple, with a well-known structure \cite{MartinSaleur94a}. If $m \in \Z $ but $q$ is not a root of unity then the algebras are no longer semisimple (for sufficiently large $n$), but the structure is still relatively simple to describe. The most interesting case is $m \in \Z$ and $q$ a root of unity. The structure in this case is quite complicated. See e.g. \cite{CoxGrahamMartin03} for a full description. With this summary in mind, note that due to (\ref{th:Theta}) we are interested in the cases when \[ \frac{[m+1]}{[m]} \; = \; \frac{[2] + 1}{2} . \] This is solved for example by $m=1$ when $[2]=1$. For our present purposes the key point here has a precursor already even from the arithmetically simpler Temperley--Lieb case, as follows. \mdef \label{de:cheby} Recall (see e.g. \cite{Martin91}) that the Chebyshev\ polynomials are the polynomials $d_n$ determined by the recurrence $ d_{n+2}=x d_{n+1}-d_n$, with initial conditions $d_0=1, d_1=x$.
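As an aside, the recurrence is easy to check in exact arithmetic; the sketch below (our own illustration, not part of the development) also verifies the tridiagonal-determinant interpretation and the parameterisation $d_{n-1}=[n]$ used below:

```python
from fractions import Fraction

def cheb(n, x):
    """d_n from the recurrence d_{n+2} = x*d_{n+1} - d_n, d_0 = 1, d_1 = x."""
    d0, d1 = 1, x
    for _ in range(n):
        d0, d1 = d1, x * d1 - d0
    return d0

def tridiag(n, x):
    """The n x n Gram-type matrix with x on the diagonal and 1 next to it."""
    return [[x if i == j else (1 if abs(i - j) == 1 else 0) for j in range(n)]
            for i in range(n)]

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(e) for e in row] for row in M]
    n, sign, out = len(M), 1, Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
        out *= M[i][i]
    return sign * out

x = Fraction(7, 3)                      # arbitrary rational test point
for n in range(1, 8):                   # d_n equals the n x n determinant
    assert cheb(n, x) == det(tridiag(n, x))

# Fourier form: d_{n-1} = [n] = (q^n - q^{-n})/(q - q^{-1}), with x = q + 1/q.
q = Fraction(3, 2)
xq = q + 1 / q
for n in range(1, 8):
    assert cheb(n - 1, xq) == (q ** n - q ** (-n)) / (q - 1 / q)
```
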
(We write $x$ for $\delta$ here, simply for reasons of familiarity.) The first few are $d_n = 1,x , x^2 -1, x^3 -2x, x^4 -3x^2 +1, ...$ ($n=0,1,2,3,...$). These arise for example as the determinants of Gram matrices such as: \[ \Deltam{n}{n-2} = \mat{cccccccc} \delta & 1 & 0 & 0 & 0\\ 1 & \delta & 1 & 0 &0\\ 0 & 1 & \delta & 1 &0\\ 0 & 0 & 1 & \delta & 1 \\ 0 & 0 & 0 & 1 & \delta \tam \] The obvious translational symmetry of this structure (arising from the local geometrical translational symmetry, i.e. the monoidal structure, of the TL diagram `particles') gives rise to the natural Fourier parameterisation $d_{n-1} = [n]$. Loosely speaking, the geometrical boundary conditions here pick out a pure Fourier sine series (fixing one end); and then the $n$ value (fixing the other end --- hence the special behaviour at roots of unity of $q$). The blob algebra generalises this essentially by changing the boundary conditions. Next we look for evidence of similar phenomena in the short Brauer case. \ignore{{ \subsection{Summary of results for ${\mathcal B }l{l,n}$} ... \subsection{Standard Bratteli diagrams} The difference between ${\mathcal B }l{l,n}$ and ${\mathcal B }Bl{l,n}$ is that ... }} \subsection{Gram matrices, towers of recollement} \begin{figure} \caption{(a) Indicative labelling scheme for standard modules for height $l=0$ Brauer algebras. \quad (b) Bratteli diagram with dimensions of standard modules up to $n=5$.\label{fig:bratt1}} \end{figure} We assume familiarity with the representation theory as treated in \cite{KadarMartinYu}, including the construction of standard modules. Here we restrict consideration to height 0. Our labeling scheme for Gram matrices $\Deltam{n}{\lambda}$ of the standard modules $\Specht{n}{\lambda}$ is $\Deltam{n}{\lambda} = \Deltam{n}{m,\pm}$ (superscript: algebra rank $n$; subscript: number $m$ of propagating lines and (for $m>1$) $\pm$ is the symmetric / antisymmetric label from $S_2$). See Fig.\ref{fig:bratt1}.
For example, the diagram basis for the $n=6$ standard module corresponding to $\lambda = (4,+)$ can be drawn as: \[ \includegraphics[width=5.4in]{xfig/base64.eps} \] where we do not draw the $(2)$-symmetrizer sitting on the first two propagating lines (thus we can draw the $\lambda=(4,-)$ case similarly, provided we keep in mind the omission, which affects calculations). Note that the basis (so drawn) contains one extra diagram compared to the $l=-1$/Temperley--Lieb case. The extra diagram has an interesting effect on the Gram matrix of the natural contravariant form (see \cite{KadarMartinYu}). As for the TL case this can be computed in terms of Chebyshev\ polynomials (or equivalently Fourier transforms). But here the initial conditions are different. We have \[ \Deltam{3}{1} = \mat{cccccccc} \delta & 1 & 1 \\ 1 & \delta & 1 \\ 1 & 1 & \delta \tam, \hspace{.351in} \Deltam{4}{2,\pm} = \mat{cccccccc} \delta & 1 & 1 & 0 \\ 1 & \delta & 1 & \pm 1 \\ 1 & 1 & \delta & 1 \\ 0 & \pm 1 & 1 & \delta \tam, \hspace{.351in} \Deltam{n}{n-2,+} = \mat{cccccccc} \delta & 1 & 1 & 0 & 0 & 0 \\ 1 & \delta & 1 & 1 & 0 & 0 \\ 1 & 1 & \delta & 1 & 0 & 0 \\ 0 & 1 & 1 & \delta & 1 & 0 \\ 0 & 0 & 0 & 1 & \delta & 1 \\ 0 & 0 & 0 & 0 & 1 & \delta \tam \] (we give the $n=6$ example, but the general pattern will be clear). Laplace expanding $\Deltamd{n}{\lambda} = |\Deltam{n}{\lambda}|$ with respect to the bottom row we get a Chebyshev\ recurrence \[ \Deltamd{n}{n-2,\pm} = \delta \Deltamd{n-1}{n-3,\pm} - \Deltamd{n-2}{n-4,\pm} \] where the initial conditions are $\Deltamd{3}{1} = (\delta-1)^2 (\delta+2) $ and $\Deltamd{4}{2,+}= \delta(\delta-1)(\delta^2+\delta-4)$ and $\Deltamd{4}{2,-}= (\delta-1)(\delta+1)(\delta-2)(\delta+2)$.
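As a numerical sanity check (our own sketch, in exact rational arithmetic), the recurrence with these initial conditions reproduces the rank-5 factorisations listed below:

```python
from fractions import Fraction

def D_seq(x, d3, d4, n):
    """Iterate D_m = x*D_{m-1} - D_{m-2} from the rank-3 and rank-4 values up to rank n."""
    a, b = d3, d4
    for _ in range(n - 4):
        a, b = b, x * b - a
    return b

for num in (1, 2, 5, -3, 7):                       # several rational test points
    x = Fraction(num, 3)
    d3 = (x - 1) ** 2 * (x + 2)                    # Delta^3_1
    d4p = x * (x - 1) * (x ** 2 + x - 4)           # Delta^4_{2,+}
    d4m = (x - 1) * (x + 1) * (x - 2) * (x + 2)    # Delta^4_{2,-}
    # rank-5 factorisations quoted in the list of determinants:
    d5p = (x - 1) * (x ** 4 + x ** 3 - 5 * x ** 2 - x + 2)
    d5m = (x - 1) * (x + 2) * (x ** 3 - x ** 2 - 3 * x + 1)
    assert D_seq(x, d3, d4p, 5) == d5p
    assert D_seq(x, d3, d4m, 5) == d5m
```
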
Note from Theorem~1.1(ii) of \cite{CoxMartinParkerXi06} (the tower-of-recollement method) and Proposition~5.3 of \cite{KadarMartinYu} (standard restriction rules) that the other Gram determinants and indeed the `reductive' representation theory can be determined from this subset of Gram determinants. We will address this task in a separate paper. Here we restrict to some of the key preliminary observations. The Chebyshev\ polynomials $d_n$ from (\ref{de:cheby}) are a basis for the space of polynomials; and the recurrence is linear, so we can express our recurrence in terms of them, and hence make use of their more `Fourier-like' formulations: $d_{n-1} = [n] = \frac{q^n -q^{-n}}{q-q^{-1}}$, where $\delta = x = q+q^{-1}$. The determinants $D_n^\pm$ of the key subset of Gram matrices of form $\Deltam{n}{n-2,\pm}$ can be expressed as \begin{eqnarray} D_n^+&=&(x-1) \left[ (x+2) (x-1) d_{n-3}-2x d_{n-4}\right]\\ D_n^-&=&(x-1) (x+2) \left[ (x-1) d_{n-3} -2 d_{n-4} \right] \end{eqnarray} Explicitly, the low rank cases of the Gram determinants are as follows: \begin{eqnarray*} D_1^3&=&(x-1)^2(x+2)\\ D_0^4&=&(x-1)^2x^3(x+2)\\ D_2^{4+}&=&(x-1)x(x^2+x-4)\\ D_2^{4-}&=&(x-1)(x+1)(x-2)(x+2)\\ D^5_1&=&(x-1)^{12}(x+1)(x-2)(x+2)^6(x^2 +x-4)\\ D^{5+}_3&=&(x-1)(x^4+x^3-5x^2-x+2)\\ D^{5-}_3&=&(x-1)(x+2)(x^3-x^2-3x+1)\\ D^6_0&=&(x-1)^{12}x^{11}(x+1)(x-2)(x+2)^6(x^2+x-4)\\ D^{6+}_2&=&(x-1)^8x^5(x+1)(x-2)(x+2)(x^2+x-4)^6(x^4+x^3-5x^2-x+2)\\ D^{6-}_2&=&(x-1)^8(x+1)^6(x-2)^6(x+2)^7(x^2+x-4)(x^3-x^2-3x+1)\\ D^{6+}_4&=&(x-1)^2 x(x^3+2x^2-4x-6)\\ D^{6-}_4&=&(x-1)^2(x+2)(x^3-4x-2) \end{eqnarray*} (the cases not computed by recursion may be computed by brute force, see below). A key point to take from this is that the short Brauer algebras manifest some similarities with the root-of-unity paradigm for non-semisimplicity, but move beyond it.
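A sketch (our own illustration, exact rational arithmetic) checking that the closed forms for $D_n^\pm$ satisfy the Chebyshev\ recurrence and agree with the stated rank-3 and rank-4 values, using the convention $d_{-1}=0$:

```python
from fractions import Fraction

def cheb(n, x):
    """d_n with d_{-1} = 0, d_0 = 1 and d_{n+2} = x*d_{n+1} - d_n."""
    d0, d1 = 0, 1
    for _ in range(n + 1):
        d0, d1 = d1, x * d1 - d0
    return d0

def closed_plus(n, x):
    return (x - 1) * ((x + 2) * (x - 1) * cheb(n - 3, x) - 2 * x * cheb(n - 4, x))

def closed_minus(n, x):
    return (x - 1) * (x + 2) * ((x - 1) * cheb(n - 3, x) - 2 * cheb(n - 4, x))

x = Fraction(9, 4)
# seed with the closed forms at n = 3, 4 and iterate the recurrence
Dp = [closed_plus(3, x), closed_plus(4, x)]
Dm = [closed_minus(3, x), closed_minus(4, x)]
for n in range(5, 10):
    Dp.append(x * Dp[-1] - Dp[-2])
    Dm.append(x * Dm[-1] - Dm[-2])
    assert Dp[-1] == closed_plus(n, x)   # closed form satisfies the recurrence
    assert Dm[-1] == closed_minus(n, x)
# and the n = 3, 4 values agree with the stated factorisations
assert Dp[0] == (x - 1) ** 2 * (x + 2)
assert Dp[1] == x * (x - 1) * (x ** 2 + x - 4)
assert Dm[1] == (x - 1) * (x + 1) * (x - 2) * (x + 2)
```
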
As noted, taken in combination with tower of recollement methods these results `seed' the reductive representation theory (the determination of decomposition matrices). We address this analysis fully in a separate paper, but the programme may be illustrated as follows. \newcommand{\ing}[1]{\includegraphics[width=.31in]{xfig/#1.eps}} The contravariant form corresponds to the map from the standard module $\Specht{n}{\lambda}$ to its contravariant dual which, on general grounds, maps the simple head to the socle \cite{KadarMartinYu}. Thus when the form is non-singular we deduce that the standard module is simple. And on the other hand when it is singular the standard module will have a corresponding submodule. It is not generally easy to determine the rank of the form, and hence the dimension of the simple head, from the Gram determinant. For example the rank of $\Deltam{5}{1}$ is easily seen to be 1, while the dimension of $\Specht{5}{1}$ is 11 (see Fig.\ref{fig:bratt1} or below) and the determinant factor is $(x-1)^{12}$. To illustrate, first consider $D^3_1$. The basis here is $\{ \ing{S31a} , \ing{S31bb} , \ing{S31c} \}$. For example the action of the generators on the element $ \ing{S31bb} - \ing{S31a} $ at the singular point $\delta = x =1$ is: \[ \stackrel{\ing{S31d} }{ \ing{S31bb}} - \stackrel{\ing{S31d} }{ \ing{S31a}} \quad = \; 0, \mbox{ }\quad\quad \stackrel{\ing{S31e} }{ \ing{S31bb}} - \stackrel{\ing{S31e} }{ \ing{S31a}} \quad = \; 0, \mbox{ and }\quad \stackrel{\ing{S31ff} }{ \ing{S31bb}} - \stackrel{\ing{S31ff} }{ \ing{S31a}} \quad = \; - (\ing{S31bb} - \ing{S31a} ) \] That is, when $\delta=x=1$ this element spans a submodule isomorphic to $\Specht{3}{3,-}$.
Meanwhile for the element $\ing{S31a} + \ing{S31bb} -2 \ing{S31c}$: \[ \stackrel{\ing{S31d} }{ \ing{S31a}} + \stackrel{\ing{S31d} }{ \ing{S31bb}} -2\stackrel{\ing{S31d} }{ \ing{S31c}} \quad =\quad \stackrel{\ing{S31e} }{ \ing{S31a}} + \stackrel{\ing{S31e} }{ \ing{S31bb}} -2\stackrel{\ing{S31e} }{ \ing{S31c}} \quad = \; 0, \] \[ \stackrel{\ing{S31ff} }{ \ing{S31a}} + \stackrel{\ing{S31ff} }{ \ing{S31bb}} -2\stackrel{\ing{S31ff} }{ \ing{S31c}} \quad = \; \ing{S31a} + \ing{S31bb} -2 \ing{S31c} \] So this element spans a submodule isomorphic to $\Specht{3}{3,+}$. We deduce that the simple head is one-dimensional. On the other hand consider $\ing{S31a} + \ing{S31bb} + \ing{S31c}$ in case $\delta=x=-2$. This spans a submodule isomorphic to $\Specht{3}{3,+}$. Here the simple head is two-dimensional. By the module-category embedding property \cite[(4.26)]{KadarMartinYu} these standard module morphisms have images in higher ranks; thus when $x=1$ our map $\Specht{3}{3,-} \rightarrow \Specht{3}{1}$ gives a map $\Specht{5}{3,-} \rightarrow \Specht{5}{1}$ and so on. The embedding functor is not exact so we cannot tell {\em directly} from the Gram matrix if an image map has a kernel. So (comparing also with the dimensions from Fig.\ref{fig:bratt1}), a naive lower bound on the exponent in the factor $(x-1)^{12}$ in $D^5_1$ is 4+4, corresponding to the dimensions of the simple heads of $\Specht{5}{3,+}$ and $\Specht{5}{3,-}$ when $x=1$. It is intriguing to compare with the blob case \cite{CoxGrahamMartin03}. There the embedded standard module morphisms are injective, but if that is the case here the naive bound is still only lifted to 5+5, so we see that there will be some nice subtleties here. As a further illustration, the basis for $n=6$ and $\lambda=0$ is: \[ \includegraphics[width=5.94in]{xfig/base60.eps} \] (N.B. the basis for $n=5$, $\lambda=1$ is combinatorially identical). (As noted, we do not strictly need such cases for the `Cox criterion'.
It is enough to use $\lambda=n-2$. We include it for curiosity's sake.) The corresponding Gram matrix then comes from the array in Fig.\ref{fig:arrayz}. \begin{figure} \caption{Gram matrix calculation for $n=6$ and $\lambda=0$.\label{fig:arrayz}} \end{figure} Thus, writing $j$ for $\delta^j$ (with $j$ the number of connected components in a diagram), the Gram matrix is given by \[ \mat{ccccc|ccccc|c} 3 & 2 & 2 & 1 & 2 & 2 & 1 & 2 & 1 & 2 & 1 \\ 2 & 3 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 2 & 1 \\ 2 & 1 & 3 & 2 & 1 & 2 & 1 & 1 & 2 & 1 & 1 \\ 1 & 2 & 2 & 3 & 2 & 1 & 2 & 1 & 2 & 1 & 1 \\ 2 & 1 & 1 & 2 & 3 & 1 & 1 & 2 & 1 & 1 & 2 \\ \hline 2 & 1 & 2 & 1 & 1 & 3 & 2 & 1 & 1 & 1 & 2 \\ 1 & 2 & 1 & 2 & 1 & 2 & 3 & 2 & 1 & 1 & 2 \\ 2 & 1 & 1 & 1 & 2 & 1 & 2 & 3 & 2 & 1 & 1 \\ 1 & 1 & 2 & 2 & 1 & 1 & 1 & 2 & 3 & 2 & 2 \\ 2 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 3 & 2 \\ \hline 1 & 1 & 1 & 1 & 2 & 2 & 2 & 1 & 2 & 2 & 3 \tam \] The determinant here can still be computed by brute force. \section{Discussion} \label{ss:discusstar} Some notable open questions follow. \noindent Q1. How to generalise the `short Brauer' construction to the BMW algebra \cite{BirmanWenzl89,Murakami87}? \noindent Q2. How to relate the usual two-parameter version of the blob algebra to the short Brauer algebras, which by the original construction have only a single parameter? Recall that there is, essentially trivially, a two-parameter version of $T_n$. First recall that $T_n$ has a basis of non-crossing Brauer diagrams \cite{Weyl46o,brown} up to ambient isotopy (see \S\ref{ss:pre1} for a summary of Brauer diagram concepts --- ambient isotopy does not include, for example, the Reidemeister moves included in general Brauer diagram equivalence, but it is sufficient in the non-crossing case, and this is key here). The elements of the basis can be seen as partitioning the interval into alcoves.
These alcoves can be shaded black or white with the property that \\ (A1) the colour changes across each boundary; and \\ (A2) the leftmost alcove is white, say. \\ (NB Another way of saying this is that arcs have a well-defined `height' in the sense of this paper, which is either odd or even.) \\ Thus in composition both black and white loops may form. The number of each separately is an invariant of ambient isotopy. It follows that we may associate a different parameter to each. Thus we have an algebra $T_n(\delta_b, \delta_w)$, say. It is easy to see that $T_n(\delta_b, \delta_w) \cong T_n(\alpha\delta_b, \delta_w /\alpha)$ for any unit $\alpha$, so the difference can usually be scaled away. For example recall the following. {\theo{\label{th:TLgen1} {\rm \cite{Martin91}} Consider the algebra defined by generators $U = \{ U_1,U_2,...,U_{n-1} \}$ and relations $\tau = \{ \mbox{ $U_i^2 = \delta U_i$, $U_i U_{i\pm 1} U_i = U_i$, $U_i U_j = U_j U_i$, $j\neq i\pm 1$ } \}$. The map $$ U_i \mapsto u_i = \{\{1,1' \}, \{2,2' \}, ... , \{i , i+1 \},\{ i',i+1' \}, ..., \{n,n' \}\} \qquad (i=1,2,...,n-1) $$ extends to an algebra isomorphism $ k\langle U \rangle/\tau \cong T_n$. \qed }} To see the isomorphic two-parameter version consider the effect on the relations of the map $U_i \mapsto \alpha U_i$ ($i$ odd), $U_i \mapsto \alpha^{-1} U_i$ ($i$ even). The blob algebra ${\mathfrak b}_n$ can be seen as the subalgebra of $T_{2n}(\delta_b, \delta_w)$ generated by diagrams with a lateral-flip symmetry. In this subalgebra, however, it is {\em not } possible to scale away the second parameter. The short Brauer algebras are, from one perspective, generalisations of $T_n$. It is interesting to consider if there are analogous generalisations of the two-parameter version that (like the blob) have the property that the second parameter becomes material. This is not obvious. The generalisation destroys the two-tone alcove construction.
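The diagram calculus behind Theorem \ref{th:TLgen1} can be checked mechanically. In the sketch below (our own encoding: a diagram is a perfect matching on $\{0,\dots,2n-1\}$, with closed loops counted on composition) the defining relations are verified in $T_4$:

```python
def u(i, n):
    """The diagram u_i: nodes 0..n-1 (bottom row) and n..2n-1 (top row)."""
    pairs = {frozenset((i - 1, i)), frozenset((n + i - 1, n + i))}
    for j in range(n):
        if j not in (i - 1, i):
            pairs.add(frozenset((j, n + j)))
    return frozenset(pairs)

def compose(a, b, n):
    """Stack a under b, glue a's top row to b's bottom row, count closed loops."""
    adj = {}
    def edge(x, y):
        adj.setdefault(x, []).append(y)
        adj.setdefault(y, []).append(x)
    for e in a:
        edge(*tuple(e))
    for e in b:                        # shift b up by n so the middle rows meet
        x, y = tuple(e)
        edge(x + n, y + n)
    norm = lambda k: k - n if k >= 2 * n else k
    seen, pairs, loops = set(), set(), 0
    for s in list(range(n)) + list(range(2 * n, 3 * n)):   # boundary nodes
        if s in seen:
            continue
        seen.add(s)
        prev, cur = s, adj[s][0]
        while n <= cur < 2 * n:        # walk through the glued middle row
            seen.add(cur)
            step = [w for w in adj[cur] if w != prev] or [prev]
            prev, cur = cur, step[0]
        seen.add(cur)
        pairs.add(frozenset((norm(s), norm(cur))))
    for k in range(n, 2 * n):          # leftover middle components are loops
        if k in seen:
            continue
        loops += 1
        seen.add(k)
        prev, cur = k, adj[k][0]
        while cur != k:
            seen.add(cur)
            step = [w for w in adj[cur] if w != prev] or [prev]
            prev, cur = cur, step[0]
    return frozenset(pairs), loops

n = 4
for i in (1, 2, 3):                                  # U_i^2 = delta U_i
    d, loops = compose(u(i, n), u(i, n), n)
    assert d == u(i, n) and loops == 1
for i, j in ((1, 2), (2, 1), (2, 3), (3, 2)):        # U_i U_{i+-1} U_i = U_i
    d1, l1 = compose(u(i, n), u(j, n), n)
    d2, l2 = compose(d1, u(i, n), n)
    assert d2 == u(i, n) and l1 + l2 == 0
a1, _ = compose(u(1, n), u(3, n), n)                 # U_1 U_3 = U_3 U_1
a2, _ = compose(u(3, n), u(1, n), n)
assert a1 == a2
```

Tracking black and white loops separately (for $T_n(\delta_b,\delta_w)$) would additionally require recording the parity of each loop's height, which this sketch does not attempt.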
How does the two-tone construction look in the categorical setting? Here we write $T(n,m)$ for the subset $J_{-1}(n,m)$ of $J(n,m)$ of non-crossing pair-partitions. We fix $\delta \in k$ and note that ${\mathcal T } = (\N_0, k T(n,m),*)$ is a subcategory of ${\mathcal B }$. Indeed ${\mathcal T } = {\mathcal B }^{-1}$. The inclusion is of $k$-linear categories, and also of monoidal $k$-linear categories. As in the algebra case we note that in the non-crossing setting we can count the number of black and white loops separately (i.e. these numbers are separately well-defined). Note however that the monoidal structure on ${\mathcal T }$ does not preserve this property. It is the axiom (A2) that is the problem. \noindent {\bf Acknowledgements.} We thank EPSRC for funding under grant EP/I038683/1. We thank Shona Yu, Azat Gaynutdinov and Peter Finch for useful conversations. \appendix \section*{Appendix} \section{Colour pictures for Lemma~\ref{mj}} \label{ss:colour} \input 4point6sec3Ap \end{document}
\begin{document} \title{Quantum Digital Signature Based on Quantum One-way Functions \thanks{This work was supported by the Natural Science Foundation of China under Grant No. 60273027 and the National Grand Fundamental Research 973 Program of China under Grant No. G1999035802}} \iffalse \author{ Xin L\"u, Deng-Guo Feng \\ State Key Laboratory of Information Security, \\Graduate School of Chinese Academy of Sciences, Beijing, 100039, China\\ \\ email: [email protected]} \fi \author{Xin L\"u, Deng-Guo Feng } \institute{State Key Laboratory of Information Security, \\Graduate School of Chinese Academy of Sciences, Beijing, 100039, China\\ email: [email protected]} \maketitle \textbf{Abstract}: A quantum digital signature scheme based on quantum mechanics is proposed in this paper. The security of the protocol relies on the existence of quantum one-way functions, which is guaranteed by fundamental quantum principles. Our protocol involves a so-called arbitrator who validates and authenticates the signed message. The scheme uses public quantum keys to sign messages and uses the quantum one-time pad to ensure the security of quantum information on the channel. To guarantee the authenticity of the transmitted quantum states, a family of quantum stabilizer codes is employed. The proposed scheme presents a novel method to construct secure quantum signature systems for future secure communications. \textbf{Key words}: Digital signature; Quantum cryptography; Error correction code; Quantum one-way functions \section{Introduction} Quantum cryptography aims at providing information security that relies on the fundamental properties of quantum mechanics. The most successful topic in quantum cryptography is quantum key distribution (QKD), which was first proposed by Bennett and Brassard in 1984 \cite{BB84}. QKD is believed to be the first practical quantum information processor and its unconditional security has been proven \cite{Mayers01,Shor-simple}.
Other than QKD, quantum cryptographic protocols have been widely studied in recent years, for example quantum digital signatures and quantum message authentication. The digital signature is a central task in modern cryptography and is widely used in today's communication systems. A digital signature protects the ``authenticity'' of data on the channel \cite{Goldreich}. Informally, an unforgeable signature scheme requires that each user be able to efficiently generate his or her own signature and verify the validity of another user's signature on a specific document, and that no one be able to efficiently generate the signatures of other users on documents that those users did not sign. Gottesman and Chuang proposed a quantum digital signature system \cite{QDS} based on quantum mechanics, and claimed that the scheme was absolutely secure, even against an adversary with unlimited computational resources. The scheme, however, can only sign classical bit strings and cannot deal with general quantum superposition states. Zeng presented an arbitrated quantum signature scheme, the security of which is due to the correlation of the GHZ triplet states and the utilization of the quantum one-time pad \cite{Zeng}. In an arbitrated signature scheme, all communications involve a so-called arbitrator who has access to the contents of the messages \cite{DS81}. The security of most arbitrated signature schemes depends heavily on the trustworthiness of the arbitrators. Zeng's protocol signs quantum messages which are known to the signatory. It seems impossible to sign a general unknown quantum state \cite{QDS,Zeng,Authentication}. In this paper, we present a novel arbitrated quantum digital signature scheme which can sign general quantum states; its security is based on a family of quantum one-way functions rooted in quantum information theory. This article is organised as follows. Section 2 introduces some definitions and preliminaries used in the article. Section 3 describes the proposed quantum signature scheme.
The security is considered in Section 4. Section 5 gives discussions and conclusions. \section{Preliminaries} \subsection{Quantum one-way function} This section introduces a class of quantum one-way functions based on the fundamental principles of quantum mechanics, which was proposed by Gottesman and Chuang \cite{QDS}; the definitions are presented below. \begin{definition}[quantum one-way function ] A function \begin{math}f:|x\rangle_{n_{1}}\mapsto|f(x)\rangle_{n_{2}} \end{math} where \begin{math}x\in F_{2}^{n_{1}}\end{math} and \begin{math}n_{1}\gg n_{2}\end{math}, is called a quantum one-way function if (1) Easy to compute: there is a quantum polynomial-time algorithm A such that on input \begin{math}|x\rangle\end{math} it outputs \begin{math}|f(x)\rangle\end{math}. (2) Hard to invert: given \begin{math}|f(x)\rangle\end{math}, it is impossible to recover \textit{x}, by virtue of fundamental quantum information theory. \end{definition} What should be pointed out about the above definition is that the condition \begin{math}n_{1}\gg n_{2}\end{math} is necessary. By Holevo's theorem \cite{Nielson}, no more than $n$ classical bits of information can be obtained by measuring an $n$-qubit quantum state. Several means to construct quantum one-way functions were introduced by Gottesman and Chuang \cite{QDS}, and here we choose the quantum fingerprinting function \cite{finger} as the candidate. The quantum fingerprinting function of a bit string $u\in F_{2}^{w}$ is \begin{equation}|f(u)\rangle=\frac{1}{\sqrt{m}}\sum_{l=1}^m(-1)^{E_{l}(u)}\cdot|l\rangle\end{equation} where \begin{math}E:\{0,1\}^{w}\rightarrow\{0,1\}^{m}\end{math} is a family of error correcting codes with fixed \begin{math}c>1,0<\delta<1\end{math} and \begin{math}m=cw\end{math}. $E_{l}(u)$ denotes the $l$th bit of $E(u)$. The distance between distinct code words $E(u_{1})$ and $E(u_{2})$ is at least $(1-\delta)m$.
Since two distinct code words can be equal in at most $\delta m$ positions, for any $u_{1}\neq u_{2}$ we have $\langle f(u_{1})|f(u_{2})\rangle\leq \delta m/m=\delta$. Here \begin{math}f(u)\end{math} can be regarded as a class of quantum one-way functions, which are easy to compute but difficult to invert. \subsection{Quantum stabilizer codes} A quantum error correction code (QECC) is a way of encoding quantum data (having $m$ qubits) into $n$ qubits ($m<n$), which protects quantum states against the effects of noise. Quantum stabilizer codes are an important class of QECC and have been applied to other subjects in quantum information, such as quantum cryptography \cite{Nielson}. The Pauli operators $\{\pm 1,\pm i\}\cdot\{I, \sigma_{x}, \sigma_{y}, \sigma_{z}\}$ constitute a group of order 16. The $n$-fold tensor products of single qubit Pauli operators also form a group $G_{n}=\{\pm 1,\pm i\}\cdot\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}^{\otimes n}$, of order $2^{2n+2}$. We refer to $G_{n}$ as the $n$-qubit Pauli group. Let $S$ denote an abelian subgroup of the $n$-qubit Pauli group $G_{n}$. Then the stabilizer code $H_{S}\subseteq H_{2^{n}}$ satisfies \begin{equation}|\psi\rangle \in H_{S} \mbox{ iff } M|\psi\rangle=|\psi\rangle \mbox{ for all } M\in S. \end{equation} The group $S$ is called the stabilizer of the code, since it preserves all of the codewords. For a stabilizer code $[[n,k,d]]$ with generators $M_{i}$ and errors $E_{a}$, we write \begin{equation}M_{i}E_{a}=(-1)^{s_{ia}}E_{a}M_{i},\quad i=1,\cdots,n-k. \end{equation} The $s_{ia}$'s constitute a syndrome for the error $E_{a}$, as $(-1)^{s_{ia}}$ will be the result of measuring $M_{i}$ if the error $E_{a}$ happens. For a nondegenerate code, the $s_{ia}$'s will be distinct for all $E_{a}\in \varepsilon$, so that measuring the $n-k$ stabilizer generators will diagnose the error completely.
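As an illustration of the fingerprinting bound of Section 2.1 (our own sketch, with the classical $[7,4,3]$ Hamming code standing in for $E$; here $m=7$ and $\delta=4/7$):

```python
import itertools
import numpy as np

# [7,4,3] Hamming code as a stand-in for the code E (our choice; the
# construction only assumes a code of minimum distance (1-delta)m).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
m = 7
msgs = list(itertools.product((0, 1), repeat=4))
codewords = [np.dot(u, G) % 2 for u in msgs]

# minimum distance is 3, so (1-delta)m = 3, i.e. delta = 4/7
dmin = min(int(np.sum((c1 + c2) % 2))
           for i, c1 in enumerate(codewords)
           for c2 in codewords[i + 1:])
assert dmin == 3
delta = 1 - dmin / m

# fingerprint states |f(u)> = m^{-1/2} sum_l (-1)^{E_l(u)} |l>
states = [(-1.0) ** c / np.sqrt(m) for c in codewords]

# pairwise overlap <f(u1)|f(u2)> = 1 - 2 d(E(u1),E(u2))/m
worst = max(float(np.dot(s1, s2))
            for i, s1 in enumerate(states) for s2 in states[i + 1:])
assert worst <= 1 - 2 * (1 - delta) + 1e-12   # here 1/7
assert worst <= delta                         # the bound quoted above
```
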
\section{The Proposed Protocol} \subsection{Security requirements} The proposed scheme is a cryptographic protocol involving three entities: a signatory Alice, a receiver Bob, and an arbitrator Trent who authenticates and validates the signed message. The security of the signature scheme depends heavily on the trustworthiness of the arbitrator, who has access to the contents of the messages. The quantum digital signature discussed in this article should meet the following security conditions: \begin{enumerate} \item Each user (Alice) can efficiently generate her own signature on messages of her choice; \item A receiver Bob can efficiently verify, with Trent's help, whether a given string is another user's signature on a specific message; \item The signatory cannot disavow a message that she has signed; \item It is infeasible to produce signatures on messages that other users have not signed. \end{enumerate} \subsection{The protocol} \subsubsection{Key generation} \begin{enumerate} \item Key distribution. Alice, Bob and Trent agree on some random bits $K_{AT}$, $K_{AB}$ and $K_{TB}$ as their private keys: $K_{AT}$ is shared between Alice and Trent, $K_{AB}$ between Alice and Bob, and $K_{TB}$ between Trent and Bob. $~~$ To ensure that the scheme is unconditionally secure, the keys can be generated using quantum key distribution protocols, such as the BB84 or EPR protocol \cite{BB84,Nielson}. \item Signature key generation. Alice generates $4n$ random secret strings \begin{math}u_{i,j}\in F_{2}^{w}\end{math} and computes \begin{equation}|y_{i,j}\rangle=|f(u_{i,j})\rangle,\quad 1\leq i\leq 2n,\ j\in\{0,1\}\end{equation} Here \begin{math}f:|x\rangle\mapsto|f(x)\rangle \end{math} is a class of quantum one-way functions introduced in section 2.
Alice thus obtains $4n$ key pairs \begin{math}\{u_{i,j},|y_{i,j} \rangle\}_{j\in\{0,1\}}^{1\leq i\leq 2n}\end{math} and then publicly announces $\{|y_{i,j} \rangle\}_{j \in\{0,1\}}^{1\leq i\leq 2n}$ as her public key and keeps $\{u_{i,j} \}_{j \in\{0,1\}}^{1\leq i\leq 2n}$ as her private key. \end{enumerate} \subsubsection{Signature} \begin{enumerate} \item Suppose Alice has a quantum state $|\psi\rangle \in \mathcal{H}_{2^{n}}$ and wants to send it to Bob. Alice randomly selects bit strings $x \in F_{2}^{2n}$, $k$ (indexing the stabilizer code $Q_{k}$) and $s$. She $q$-encrypts $|\psi\rangle$ as $\rho$ using $x$. Alice encodes $\rho$ according to $Q_{k}$ with syndrome $s$ and obtains $\pi$. \item Alice computes \begin{equation}X=(x_{pre_{|s|}}\oplus s)||(x_{suf_{2n-|s|}})\footnote{Suppose $|s|<2n$ in the algorithm. Here, $x_{pre_{|s|}}$ denotes the first $|s|$ bits of $x$ and $x_{suf_{2n-|s|}}$ denotes the last $2n-|s|$ bits of $x$; $a\oplus b$ means the bit-by-bit XOR of the strings $a$ and $b$, namely $a\oplus b=a_{1}\oplus b_{1},\cdots, a_{m}\oplus b_{m}$. The symbol $``||''$ means concatenation of two binary strings.}\end{equation} and generates four copies of $X$'s signature $|\Sigma_{K}(X)\rangle$ according to her key $K\in \{u_{i,j},|y_{i,j}\rangle \mid 1\leq i\leq 2n, j\in \{0,1\}\}$ \begin{equation}|\Sigma_{K}(X)\rangle=|y_{1,X_{1}}\otimes\ldots\otimes y_{2n,X_{2n}}\rangle=|a_{1}\otimes\dots\otimes a_{2n}\rangle=|a\rangle \end{equation} Alice sends $\pi$ and two copies of $|\Sigma_{K}(X)\rangle$ to Bob. At the same time, she encrypts $\{s,k,x\}$ as $C_{1}$ using $K_{AT}$\footnote{In this algorithm, we select the classical one-time pad to encrypt classical messages, to ensure unconditional security.} and sends $C_{1}$ and two copies of $|\Sigma_{K}(X)\rangle$ to Trent. We assume that each run of the protocol has a unique sequence number.
\end{enumerate} \subsubsection{Verification} \begin{enumerate} \item Trent receives $C'_{1}$ and two copies of $|\Sigma'_{K}(X)\rangle=|a'\rangle$. Trent checks whether the two copies of $|\Sigma'_{K}(X)\rangle$ he received are equivalent by performing a quantum swap test circuit (QSTC \cite{finger}). If any one of the $|a'_{i}\rangle$'s fails the test, Trent aborts the protocol. Trent decrypts $C'_{1}$ using his secret key $K_{AT}$ and obtains $\{s_{T},k_{T},x_{T}\}$. He computes $|\Sigma_{K}(X)_{(T)}\rangle$ according to $x_{T}$ and Alice's public keys. Trent compares $|\Sigma_{K}(X)_{(T)}\rangle=|a\rangle_{T}$ with $|\Sigma'_{K}(X)\rangle$. If any one of them fails the test, Trent aborts the protocol. Trent encrypts $\{k_{T},x_{T}\}$ as $C_{2}$ using $K_{TB}$ and sends the ciphertext to Bob. $~~$The comparison of two quantum states is less straightforward than in the classical case because of the statistical properties of quantum measurements. Another serious problem is that quantum measurements usually introduce a non-negligible disturbance of the measured state. Here, we use the quantum swap test circuit (QSTC) proposed in \cite{finger} to decide whether $|a_{i}\rangle_{T}$ and $|a'_{i}\rangle$ are equivalent or not. The QSTC is a comparison strategy with one-sided error probability $(1+\delta^{2})/2$ when each pair of compared states has an inner product of absolute value at most $\delta$. Because there are $2n$ sets of qubits to be compared, the error probability of the test can be reduced to $(\frac{1+\delta^{2}}{2})^{2n}$, where $\langle f_{i}|f_{j}\rangle\leq\delta$ for $i\neq j$, and $n$ is the security parameter. Let $e_{j}$ denote the number of incorrect keys; Bob rejects the signature as invalid if $e_{j}>cM$, where $c$ is a threshold for rejection and acceptance in the protocol. \item Bob has now received Alice's information $[\pi' ,|\Sigma''_{K}(X)\rangle=|a''\rangle]$ and Trent's message $C'_{2}$.
He deciphers $C'_{2}$ as $\{k_{B},x_{B}\}$ and computes $X_{B}$ according to Eq.~(5). He measures the syndrome $s_{B}$ of the stabilizer code $Q_{k}$ on $\pi'$ and decodes the qubits as $\rho'$. He encrypts $s_{B}$ as $C_{3}$ using part of $K_{TB}$ and sends it to Trent. \item Trent encrypts $s_{T}$ as $C_{4}$ using part of $K_{TB}$ and sends it to Bob. \item Bob deciphers $C'_{4}$ and obtains $s_{T}$. He compares $s_{B}$ with $s_{T}$ and aborts if any error is detected. Bob checks whether the two copies of $|\Sigma''_{K}(X)\rangle$ are equivalent by performing the QSTC. He computes the quantum states $|\Sigma(X)\rangle_{B}=|a\rangle_{B}$ using $X_{B}$ and Alice's public keys $\{|y_{i,j}\rangle\}_{j \in\{0,1\}}^{1\leq i\leq 2n}$. He verifies Alice's signature according to \begin{equation}V_{K}(X_{B},|\Sigma''_{K}(X)\rangle)=True\Leftrightarrow \{|a''_{i}\rangle=|y_{i,X_{i}}\rangle=|a_{i}\rangle_{B} \}_{1\leq i\leq 2n}\end{equation} Bob $q$-decrypts $\rho'$ as $|\psi'\rangle$ according to $x_{B}$. \end{enumerate} \section{Security Analysis} \subsection{Correctness} \begin{theorem}[Correctness] Suppose all the entities involved in the scheme follow the protocol; then Eq.~(7) holds. \end{theorem} \textbf{Proof}. The correctness of the scheme can be seen by inspection. In the absence of intervention, Trent will obtain Alice's key $s,x,k$ and her signature of $X$. Trent verifies the signature and sends $x,k$ secretly to Bob. Bob can successfully decode and decipher the quantum states and verify Alice's signature. Because Alice signs her message according to Eq.~(6), it is easy to verify that Eq.~(7) holds. \subsection{Security against repudiation} Alice cannot deny her signature. If Alice disavows her signature, Bob will resort to Trent: Bob sends one copy of the signature $|\Sigma''_{K}(X)\rangle$ to Trent, and Trent compares $s_{B}$ and $|\Sigma''_{K}(X)\rangle$ with $s_{T}$ and the kept copy of the signature $|\Sigma'_{K}(X)\rangle$ that Alice sent to him.
If all these pass the test, Trent reveals that Alice is cheating, because $|\Sigma_{K}(X)\rangle$ contains Alice's signature on her private keys $x$ and $s$. Otherwise, Trent concludes that the signature has been forged by Bob or another attacker. \subsection{Security against forgery} \begin{theorem} Other entities can forge Alice's signature with success probability at most $2^{-[(w-t\lceil \log_{2}m\rceil)+2n]}$. \end{theorem} \textbf{Proof.} Consider an adversary (Eve or Bob) who controls the communication channels connecting Alice, Trent and Bob and wants to forge Alice's signature. Here we present two strategies that the attacker Eve (or Bob) can apply. \begin{enumerate} \item The first is to try to alter the signed quantum states. Eve intercepts $[\pi' ,|\Sigma'_{K}(X)\rangle]$. She keeps $\pi'$, selects a random key $x_{E}$ to encrypt another quantum state $|\phi\rangle$ as $\tau$, and sends $[\tau ,|\Sigma'_{K}(X)\rangle]$ to Bob. Because Eve knows nothing about the stabilizer code $Q_{k}$ and the syndrome $s$, her cheating will be detected by Bob in the fourth step of the verification phase when he compares the syndrome $s_{B}$ with $s_{T}$. \item The second strategy is for the attacker to try to recover Alice's private keys and generate a ``legal'' signature. She knows nothing about Alice's private keys $x,s,k,K_{AT}$ and $\{u_{i,j} \}_{j \in\{0,1\}}^{1\leq i\leq 2n}$, so she cannot compute $x,s,k$ from the mixed state $\pi'$. According to Holevo's theorem \cite{Nielson}, Eve can obtain at most $t\lceil \log_{2}m\rceil$ bits of classical information about any one of Alice's signature keys $\{u_{i,j} \}$ from Alice's public key. Here, $t$ is a small natural number, and we let $c=4$ in our scheme. Since she lacks $w-t\lceil \log_{2}m\rceil$ bits of information about any private key that Alice has not revealed, she can guess it correctly with probability at most $2^{-[w-t\lceil \log_{2}m\rceil]}$.
Therefore, the attacker can forge Alice's signature only with success probability less than $2^{-[(w-t\lceil \log_{2}m\rceil)+2n]}$. \end{enumerate} \section{Concluding Remarks} Designing a quantum digital signature protocol is not trivial because of several fundamental properties of quantum messages. The first and most important property of quantum information is the no-cloning theorem, which forbids the reproduction of unknown qubits. For digital signatures, how can we verify that a signature is indeed the signature on a specific state without generating copies of the original message? The second is the probabilistic and irreversible nature of quantum measurement, which makes it troublesome to decide whether a state is a legal signature without changing that state. The last is that a secure quantum signature scheme must also be a secure encryption scheme, as shown by Barnum $et~al.$ in \cite{Authentication}. In this article, we investigate how to overcome these obstacles and present a quantum digital signature scheme. The security of the scheme relies on the existence of a family of quantum one-way functions guaranteed by quantum principles. The authenticity of the quantum information is obtained by quantum error correction codes, and the security of the quantum information on the channel is ensured by the quantum one-time pad. \end{document}
\begin{document} \title[Regularity of Fourier Integral Operators]{Global and local regularity of Fourier integral operators on weighted and unweighted spaces} \author{David Dos Santos Ferreira} \address{Universit\'e Paris 13, Cnrs, umr 7539 Laga, 99 avenue Jean-Baptiste Cl\'ement, F-93430 Villetaneuse, France} \email{[email protected]} \thanks{D.~DSF. is partially supported by ANR grant Equa-disp.} \author{Wolfgang Staubach} \address{Department of Mathematics and the Maxwell Institute for Mathematical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom} \email{[email protected]} \thanks{W.~S. is partially supported by the Engineering and Physical Sciences Research Council First Grant Scheme, reference number EP/H051368/1.} \subjclass[2000]{35S30} \begin{abstract} We investigate the global continuity on $L^p$ spaces with $p\in [1,\infty]$ of Fourier integral operators with smooth and rough amplitudes and/or phase functions subject to certain non-degeneracy conditions. We initiate the investigation of the continuity of smooth and rough Fourier integral operators on weighted $L^{p}$ spaces, $L_{w}^p$ with $1< p < \infty$ and $w\in A_{p}$ (i.e. the Muckenhoupt weights), and establish weighted norm inequalities for operators with rough and smooth amplitudes and phase functions satisfying a suitable rank condition.
These results are then applied to prove weighted and unweighted estimates for the commutators of Fourier integral operators with functions of bounded mean oscillation BMO, then to some estimates on weighted Triebel-Lizorkin spaces, and finally to global unweighted and local weighted estimates for the solutions of the Cauchy problem for $m$-th and second order hyperbolic partial differential equations on $\mathbf{R}^n .$ \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section*{Introduction and main results} A Fourier integral operator is an operator that can be written locally in the form \begin{align}\label{Intro:Fourier integral operator} T_{a, \varphi}u(x)= (2\pi)^{-n} \int_{\mathbf{R}^n} e^{i\varphi(x,\xi)} a(x,\xi) \widehat{u}(\xi) \, \mathrm{d}\xi, \end{align} where $a(x,\xi)$ is the {\it{amplitude}} and $\varphi(x,\xi)$ is the {\it{phase function}}. Historically, a systematic study of smooth Fourier integral operators with amplitudes in $C^{\infty}(\mathbf{R}^{n} \times \mathbf{R}^n)$ with $|\partial^{\alpha}_{\xi} \partial^{\beta}_{x} a(x,\xi)| \leq C_{\alpha \beta} (1+|\xi|)^{m-\varrho|\alpha|+\delta|\beta|}$ (i.e. $a(x, \xi)\in S^{m}_{\varrho, \delta}$), and phase functions in $C^{\infty}(\mathbf{R}^{n} \times \mathbf{R}^n\setminus 0)$ homogeneous of degree $1$ in the frequency variable $\xi$ and with non-vanishing determinant of the mixed Hessian matrix (i.e. {\it{non-degenerate phase functions}}), was initiated in the classical paper of L. H\"ormander \cite{H3}. Furthermore, G. Eskin \cite{Esk} (in the case $a(x,\xi)\in S^{0}_{1,0}$) and H\"ormander \cite{H3} (in the case $a(x,\xi) \in S^{0}_{\varrho, 1-\varrho},$ $\frac{1}{2} <\varrho \leq 1$) showed the local $L^2$ boundedness of Fourier integral operators with non-degenerate phase functions. Later on, H\"ormander's local $L^2$ result was extended by R. Beals \cite{RBE} and A. Greenleaf and G.
Uhlmann \cite{GU} to the case of amplitudes in $S^{0}_{\frac{1}{2},\frac{1}{2}}.$ \\ After the pioneering investigations by M. Beals \cite{Be}, the optimal results concerning local continuity properties of smooth Fourier integral operators (with non-degenerate and homogeneous phase functions) in $L^{p}$ for $1\leq p\leq \infty$, were obtained in the seminal paper of A. Seeger, C. D. Sogge and E.M. Stein \cite{SSS}. This also paved the way for further investigations by G. Mockenhaupt, Seeger and Sogge in \cite{MSS1} and \cite{MSS2}, see also \cite{So} and \cite{So1}. In these investigations the boundedness, from $L^{p}_{\text{comp}}$ to $L^{p}_{\text{loc}}$ and from $L^{p}_{\text{comp}}$ to $L^{q}_{\text{loc}}$, of smooth Fourier integral operators with non-degenerate phase functions has been established, and furthermore it was shown that the maximal operators associated with certain Fourier integral operators (and in particular constant and variable coefficient hypersurface averages) are $L^p$ bounded.\\ In the context of H\"ormander type amplitudes and non-degenerate homogeneous phase functions which are most frequently used in applications in partial differential equations, there has been a comparatively small amount of activity concerning global $L^p$ boundedness of Fourier integral operators. Among these, we would like to mention the global $L^2$ boundedness of Fourier integral operators with homogeneous phases in $C^{\infty}(\mathbf{R}^{n} \times \mathbf{R}^n \setminus 0)$ and amplitudes in the H\"ormander class $S^{0}_{0,0},$ due to D. Fujiwara \cite{Fuji}; the global $L^2$ boundedness of operators with inhomogeneous phases in $C^{\infty}(\mathbf{R}^{n} \times \mathbf{R}^n)$ and amplitudes in $S^{0}_{0,0},$ due to K. Asada and D. Fujiwara \cite{AF}; the global $L^p$ boundedness of operators with smooth amplitudes in the so called $\mathbf{SG}$ classes, due to E. Cordero, F. Nicola and L. Rodino in \cite{CNR}; and finally, S. Coriasco and M.
Ruzhansky's global $L^{p}$ boundedness of Fourier integral operators \cite{CR}, with smooth amplitudes in a suitable subclass of the H\"ormander class $S^{0}_{1,0},$ where certain decay of the amplitudes in the spatial variables is assumed. We should also mention that before the appearance of the paper \cite{CR}, M. Ruzhansky and M. Sugimoto had already proved in \cite{Ruz 2} certain weighted $L^2$ boundedness (with some power weights) as well as the global unweighted $L^2$ boundedness of Fourier integral operators with phases in $C^{\infty}(\mathbf{R}^{n} \times \mathbf{R}^{n})$ that are not necessarily homogeneous in the frequency variables, and amplitudes that are in the class $S^{0}_{0,0}.$ In all the aforementioned results, one has assumed certain bounds on the derivatives of the phase functions and also a stronger non-degeneracy condition than the one required in the local $L^p$ estimates. \\ In this paper, we shall take all these results as our point of departure and make a systematic study of the global $L^p$ boundedness of Fourier integral operators with amplitudes in $S^{m}_{\varrho, \delta}$ with $\varrho$ and $\delta$ in $[0,1],$ which cover all the possible ranges of $\varrho$'s and $\delta$'s. Furthermore we initiate the study of weighted norm inequalities for Fourier integral operators with weights in the $A_p$ class of Muckenhoupt and use our global unweighted $L^p$ results to prove a sharp weighted $L^p$ boundedness theorem for Fourier integral operators. The weighted results in turn will be used to establish the validity of certain vector-valued inequalities and more importantly to prove the weighted and unweighted boundedness of commutators of Fourier integral operators with functions of bounded mean oscillation BMO. Thus, all the results of this paper are connected and each chapter uses the results of the previous ones.
This has been reflected in the structure of the paper and the presentation of the results.\\ Concerning the specific conditions that are put in this paper on the phase functions, it has been known at least since the appearance of the papers \cite{Fuji}, \cite{AF}, \cite{Ruz 2} and \cite{CR}, that one has to assume stronger conditions, than mere non-degeneracy, on the phase function in order to obtain global $L^{p}$ boundedness results. In fact it turns out that the assumption on the phase function, referred to in this paper as the {\it{strong non-degeneracy condition}}, which requires a nonzero lower bound on the modulus of the determinant of the mixed Hessian of the phase, is actually necessary for the validity of global regularity of Fourier integral operators, see section \ref{necessity of strong non-degeneracy}. Furthermore, we also introduce the class $\Phi^k$ of homogeneous (of degree 1) phase functions with a specific control over the derivatives of orders greater than or equal to $k,$ and assume our phases to be strongly non-degenerate and belong to $\Phi^k$ for some $k$. At first glance, these conditions might seem restrictive, but fortunately they are general enough to accommodate the phase functions arising in the study of hyperbolic partial differential equations and will still apply to the most generic phases in practical applications.\\ Concerning our choice of amplitudes, there are some features that set our investigations apart from those made previously, for example partly motivated by the investigation of C.E.
Kenig and the second author of the present paper \cite{KS}, of the $L^{p}$ boundedness of the so called {\it{pseudo-pseudodifferential operators}}, we consider the global and local $L^p$ boundedness of Fourier integral operators when the amplitude $a(x,\xi)$ belongs to the class $L^{\infty}S^{m}_{\varrho}$, wherein $a(x,\xi)$ behaves in the spatial variable $x$ like an $L^\infty$ function, and in the frequency variable $\xi,$ the behaviour is that of a symbol in the H\"ormander class $S^{m}_{\varrho,0}.$\\ It is worth mentioning that the conditions defining the classes $\Phi^k$, $L^{\infty}S^{m}_{\varrho}$ and the assumption of strong non-degeneracy make the global results obtained here natural extensions of the local boundedness results of Seeger, Sogge and Stein in \cite{SSS}. Apart from the obvious local to global generalizations, this is because on one hand, our methods can handle the singularity of the phase function in the frequency variables at the origin and therefore the usual assumption that $\xi\neq 0$ in the support of the amplitude becomes obsolete. On the other hand, we do not require any regularity (and therefore no decay of derivatives) in the spatial variables of the amplitudes. Therefore, our amplitudes are close to, and in fact are spatially non-smooth versions of, those in the Seeger-Sogge-Stein paper \cite{SSS}. Indeed, in \cite{SSS} the authors, although dealing with spatially smooth amplitudes, assume neither any decay in the spatial variables nor the vanishing of the amplitude in a neighbourhood of the singularity of the phase function.\\ There are several steps involved in the proof of the results of the paper and there are discussions about various conditions that we impose on the operators as well as some motivations for the study of rough operators. Moreover, giving examples and counterexamples when necessary, we have strived to give motivations for our assumptions in the statements of the theorems.
Here we will not mention all the results that have been proven in this paper; instead we choose to highlight those that are the main outcomes of our investigations.\\ \noindent In Chapter 1, we set up the notation and give the definitions of the classes of amplitudes, phase functions and weights that will be used throughout the paper. We also include in this chapter the tools that we need in proving our global boundedness results. We close the chapter with a discussion about the connections between rough amplitudes and global boundedness of Fourier integral operators.\\ Chapter 2 is devoted to the investigation of the global boundedness of Fourier integral operators with smooth or rough phases, and smooth or rough amplitudes. To achieve our global boundedness results, we split the operators into low and high frequency parts and show the boundedness of each of them separately. In proving the $L^p$ boundedness of the low frequency portion, see Theorem \ref{general low frequency boundedness for rough Fourier integral operator}, we utilise Lemma \ref{main low frequency estim} which yields a favourable kernel estimate for the low frequency operator, thereafter we use the phase reduction of Lemma \ref{phase reduction} to bring the operator to a canonical form, and finally we use the $L^p$ boundedness of the non-smooth substitution in Corollary \ref{cor:main substitution estim} to conclude the proof. Thus, we are able to establish the global $L^p$ boundedness of the frequency-localised portion of the operator, for all $p$'s in $[1,\infty]$ simultaneously.\\ The global boundedness of the high frequency portion of the Fourier integral operator needs to be investigated in three steps.
First we show the $L^1-L^1$ boundedness, then we proceed to the $L^2-L^2$ boundedness, and finally we prove the $L^{\infty}-L^{\infty}$ boundedness.\\ In order to show the $L^1$ boundedness of Theorem \ref{Intro:L1Thm}, we use a semi-classical reduction from Subsection \ref{Semiclasical reduction subsec} and Lemma \ref{Lp:semiclassical}, which will be used throughout the paper. Thereafter we use the semiclassical version of the Seeger-Sogge-Stein decomposition, which was introduced in a microlocal form in \cite{SSS}.\\ For our global $L^2$ boundedness result, we also consider amplitudes with $\varrho<\delta$ which appear in the study of Fourier integral operators on nilpotent Lie groups and also in scattering theory. In Theorem \ref{Calderon-Vaillancourt for FIOs}, we show a global $L^2$ boundedness result for operators with smooth H\"ormander class amplitudes in $S^{\min(0,\frac{n}{2}(\varrho-\delta))}_{\varrho,\delta},$ $\varrho\in [0,1],$ $\delta \in [0,1).$ Also, in Theorem \ref{Intro:L2Thm} we prove the $L^2$ boundedness of operators with amplitudes belonging to $S^{m}_{\varrho,1},$ with $m<\frac{n}{2}(\varrho-1)$. In both of these theorems, the phase functions are assumed to satisfy the strong non-degeneracy condition and both of these results are sharp. These can be viewed as extensions of the celebrated Calder\'on-Vaillancourt theorem \cite{CV} to the case of Fourier integral operators.
Indeed, the phase function of a pseudodifferential operator, which is $\langle x, \xi \rangle,$ is in the class $\Phi^2$ and satisfies the strong non-degeneracy condition, and therefore our $L^2$ boundedness result completes the $L^2$ boundedness theory of smooth Fourier integral operators with homogeneous non-degenerate phases.\\ Finally, in Theorem \ref{Intro:LinftyThm} we prove the sharp global $L^{\infty}$ boundedness of Fourier integral operators, where in the proof we follow almost the same line of argument as in the proof of the $L^1$ boundedness case, but to obtain the sharp result which we desire, we make a more detailed analysis of the kernel estimates which brings us beyond the result implied by the mere utilisation of the Seeger-Sogge-Stein decomposition. Furthermore, in this case, no non-degeneracy assumption on the phase is required. Our results above are summarised in the following global $L^p$ boundedness theorem, see Theorem \ref{main L^p thm for smooth Fourier integral operators}: \subsection*{A.
Global $L^{p}$ boundedness of smooth Fourier integral operators.} Let $T$ be a Fourier integral operator given by \ref{Intro:Fourier integral operator} with amplitude $a \in S^m_{\varrho, \delta}$ and a strongly non-degenerate phase function $\varphi(x,\xi) \in \Phi^2.$ Setting $\lambda:=\min(0,n(\varrho-\delta)),$ suppose that either of the following conditions hold: \begin{enumerate} \item[(a)] $1 \leq p \leq 2,$ $0\leq \varrho \leq 1,$ $0\leq \delta \leq 1,$ and $$ m<n(\varrho -1)\bigg (\frac{2}{p}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg)+ \lambda\bigg(1-\frac{1}{p}\bigg); $$ or \item[(b)] $2 \leq p \leq \infty,$ $0\leq \varrho \leq 1,$ $0\leq \delta \leq 1,$ and $$ m<n(\varrho -1)\bigg (\frac{1}{2}-\frac{1}{p}\bigg)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg) +\frac{\lambda}{p};$$ or \item[(c)] $p=2,$ $0\leq \varrho\leq 1,$ $0\leq \delta<1,$ and $$m= \frac{\lambda}{2}.$$ \end{enumerate} Then there exists a constant $C>0$ such that $ \Vert Tu\Vert_{L^{p}} \leq C \Vert u\Vert_{L^{p}}.$ For Fourier integral operators with rough amplitudes we show in Theorem \ref{main L^p thm for Fourier integral operators with smooth phase and rough amplitudes} the following: \subsection*{B.
Global $L^{p}$ boundedness of rough Fourier integral operators.} \noindent Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_{\varrho}, 0\leq \varrho \leq 1$ and a strongly non-degenerate phase function $\varphi(x,\xi) \in \Phi^2.$ Suppose that either of the following conditions hold: \begin{enumerate} \item[(a)] $1 \leq p \leq 2$ and $$ m<\frac{n}{p}(\varrho -1)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg); $$ or \item[(b)] $2 \leq p \leq \infty$ and $$ m<\frac{n}{2}(\varrho -1)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg).$$ \end{enumerate} Then there exists a constant $C>0$ such that $ \Vert Tu\Vert_{L^{p}} \leq C \Vert u\Vert_{L^{p}}.$ We also extend both of the results above, i.e. the $L^p-L^p$ regularity of smooth and rough operators, to the $L^p-L^q$ regularity, in Theorem \ref{main LpLq thm for Fourier integral operators}.\\ After proving the global regularity of Fourier integral operators with smooth phase functions, we turn to the problem of local and global boundedness of operators which are merely bounded in the spatial variables in both their phases and amplitudes. A motivation for this investigation stems from the study of maximal estimates involving Fourier integral operators, where a standard stopping time argument (here referred to as linearisation) reduces the problem to a Fourier integral operator with a non-smooth phase and sometimes also a non-smooth amplitude.
For instance, estimates for the maximal spherical average operator \begin{align*} A u(x) = \sup_{t \in [0,1]} \Big| \int_{S^{n-1}} u(x+t\omega) \, \mathrm{d}\sigma(\omega) \Big| \end{align*} are directly related to the rough Fourier integral operator $$ T u(x) =(2\pi)^{-n} \int_{\mathbf{R}^n} a(x,\xi) e^{it(x)|\xi|+i\langle x,\xi \rangle} \widehat{u}(\xi) \, \mathrm{d}\xi$$ where $t(x)$ is a measurable function in $x$, with values in $[0,1]$, and $a(x,\xi) \in L^{\infty} S_{1}^{-\frac{n-1}{2}}.$ Here, the phase function of the Fourier integral operator is $\varphi(x,\xi)=t(x)|\xi|+\langle x,\xi \rangle$, which is merely an $L^{\infty}$ function in the spatial variable $x$, but is smooth outside the origin in the frequency variables $\xi.$ As we shall see later, according to Definition \ref{defn of rough phases}, this phase belongs to the class $L^{\infty} \Phi^{2}.$\\ In our investigation of local or global $L^p$ boundedness of the rough Fourier integral operators above for $p\neq 2$, the results obtained are similar to the local results for amplitudes in the class $S^{m}_{1,0}$ obtained in \cite{MSS1}, \cite{MSS2} and \cite{So1} for \eqref{Intro:LinWave}. However, we consider more general classes of amplitudes (i.e. the $S^{m}_{\varrho, \delta}$ class) and also require only measurability and boundedness in the spatial variables (i.e. the $L^{\infty}S^{m}_{\varrho}$ class). The main results in this context are the $L^2$ boundedness results which, apart from the case of Fourier integral operators in dimension $n=1,$ yield a problem of considerable difficulty in case one desires to prove an $L^2$ regularity result under the sole assumption of rough non-degeneracy, see Definition \ref{defn of rough nondegeneracy}.
Using the geometric conditions (imposed on the phase functions) which are the rough analogues of the non-degeneracy and corank conditions for smooth phases (the rough corank condition \ref{Rough corank condition}), we are able to prove a local $L^2$ boundedness result with a certain loss of derivatives depending on the rough corank of the phase. More explicitly we prove in Theorem \ref{Intro:L2ThmDeg}: \subsection*{C. Local $L^2$ boundedness of Fourier integral operators with rough amplitudes and phases.} \noindent Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_\varrho$ and phase function $\varphi \in L^{\infty}\Phi^2$. Suppose that the phase satisfies the rough corank condition \ref{Rough corank condition}; then $T$ can be extended as a bounded operator from $L^{2}_{\rm comp}$ to $L^{2}_{\rm loc}$ provided $m<-\frac{n+k-1}{4}+\frac{(n-k)(\varrho-1)}{2}.$\\ \noindent Despite the lack of sharpness in the above theorem, the proof is rather technical. However, in case $n=1$ this theorem can be improved to yield a local $L^2$ boundedness result with $m<-\frac{1}{2},$ and if the assumptions on the phase function are also strengthened by a Lipschitz condition on the $\xi$ derivatives of order $2$ and higher of the phase, then the above theorem holds with a loss $m<-\frac{k}{2}+\frac{(n-k)(\varrho-1)}{2}.$\\ In Chapter 3 we turn to the problem of weighted norm inequalities for Fourier integral operators. To our knowledge this question has never been investigated previously in the context of Muckenhoupt's $A_p$ weights, which are the most natural class of weights in harmonic analysis. Here we start this investigation by establishing sharp boundedness results for a fairly wide class of Fourier integral operators, somehow modeled on the parametrices of hyperbolic partial differential equations.
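To fix ideas, a standard computation (our sketch, not part of the paper's statements) shows how the model wave phase meets a rank-type condition of the kind imposed on the frequency Hessian in the weighted estimates:

```latex
% For the half-wave phase \varphi(x,\xi)=\langle x,\xi\rangle+|\xi|, one has
\[
  \partial^{2}_{\xi\xi}\,\varphi(x,\xi)
  \;=\;
  \partial^{2}_{\xi\xi}\,|\xi|
  \;=\;
  \frac{1}{|\xi|}\Big(I-\frac{\xi\otimes\xi}{|\xi|^{2}}\Big),
  \qquad \xi\neq 0,
\]
% i.e. |\xi|^{-1} times the orthogonal projection onto \xi^{\perp}, so that
\[
  \operatorname{rank}\,\partial^{2}_{\xi\xi}\varphi(x,\xi)=n-1,
  \qquad
  \det\partial^{2}_{x\xi}\varphi(x,\xi)=\det I=1 .
\]
```

The Hessian of $|\xi|$ is degenerate exactly in the radial direction, which is why $n-1$ (and not $n$) is the natural rank in estimates modeled on hyperbolic parametrices.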
One notable feature of our investigation is that we also prove the results for Fourier integral operators whose phase functions and amplitudes are only bounded and measurable in the spatial variables and exhibit suitable symbol type behavior in the frequency variables.\\ As before, we begin by discussing the weighted estimates for the low frequency portion of the Fourier integral operators, which can be handled by Lemma \ref{main low frequency estim}. As a matter of fact, the weighted $L^p$ boundedness of low frequency parts of Fourier integral operators is merely an analytic issue involving the right decay rates of the phase function and does not involve any rank condition on the phase. The situation in the high frequency case is entirely different. Here, there is also a significant distinction between the weighted and unweighted cases, in the sense that, if one desires to prove sharp weighted estimates, then a rank condition on the phase function is absolutely crucial. This fact has been discussed in detail in Section \ref{counterexamples in weighted setting}, where one finds various examples, including one related to the wave equation, and counterexamples which will lead us to the correct condition on the phase. Then we will proceed with the local high frequency and global high frequency boundedness estimates. As a rule, in the investigation of boundedness of Fourier integral operators, the local estimates require somewhat milder conditions on the phase functions compared to the global estimates, and our case study of the weighted norm inequalities here is no exception to this rule. Furthermore, we are able to formulate the local weighted boundedness results in an invariant way involving the canonical relation of the Fourier integral operator in question. Our main results in this context are contained in Theorem \ref{endpointweightedboundthm}: \subsection*{D.
Weighted $L^{p}$ boundedness of Fourier integral operators.} Let $a(x,\xi)\in L^{\infty}S^{-\frac{n+1}{2}\varrho+n(\varrho -1)}_{\varrho}$ and $\varrho \in [0,1].$ Suppose that either \begin{enumerate} \item[(a)] $a(x,\xi)$ is compactly supported in the $x$ variable and the phase function $\varphi(x,\xi)\in C^{\infty} (\mathbf{R}^ n \times \mathbf{R}^{n}\setminus 0)$ is positively homogeneous of degree $1$ in $\xi$ and satisfies $\det\partial^2_{x\xi}\varphi(x,\xi) \neq 0$ as well as $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$; or \item[(b)] $\varphi (x,\xi)-\langle x,\xi\rangle \in L^{\infty}\Phi^1,$ $\varphi $ satisfies the rough non-degeneracy condition as well as $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$. \end{enumerate} Then the operator $T_{a,\varphi}$ is bounded on $L^{p}_{w}$ for $p\in (1,\infty)$ and all $w\in A_p$. Furthermore, for $\varrho=1$ this result is sharp.\\ Here, it is worth mentioning that in the non-endpoint case, i.e. if $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ with $m<-\frac{n+1}{2}\varrho+n(\varrho -1),$ we can prove a result that requires no non-degeneracy assumption on the phase function. The proofs of these statements are long and technical and use several steps involving careful kernel estimates, uniform pointwise estimates on certain oscillatory integrals, unweighted local and global $L^p$ boundedness, interpolation, and extrapolation.\\ In Chapter 4 we are motivated by the fact that weighted norm inequalities with $A_p$ weights can be used as an efficient tool in proving vector valued inequalities and also boundedness of commutators of operators with {\it{functions of bounded mean oscillation}} BMO. Therefore, we start the chapter by showing boundedness of certain Fourier integral operators in {\it{weighted Triebel-Lizorkin spaces}} (see \eqref{Tribliz definition}).
This is based on a vector valued inequality for Fourier integral operators.\\ But more importantly we prove for the first time, in Theorems \ref{Commutator estimates for FIO} and \ref{k-th commutator estimates}, the boundedness and weighted boundedness of BMO commutators of Fourier integral operators, namely \subsection*{E. $L^{p}$ boundedness of BMO commutators of Fourier integral operators.} \noindent Suppose either \begin{enumerate} \item [(a)] $T\in I^{m}_{\varrho, \mathrm{comp}}(\mathbf{R}^{n}\times \mathbf{R}^{n}; \mathcal{C})$ with $\frac{1}{2} \leq \varrho\leq 1$ and $m<(\varrho-n)|\frac{1}{p}-\frac{1}{2}|,$ satisfies all the conditions of Theorem \ref{weighted boundedness for true amplitudes with power weights}; or \item [(b)] $T_{a,\varphi}$ with $a\in S^{m}_{\varrho, \delta},$ $0\leq \varrho \leq 1$, $0\leq \delta\leq 1,$ $\lambda= \min(0, n(\varrho-\delta))$ and $\varphi(x,\xi)$ is a strongly non-degenerate phase function with $\varphi(x,\xi)-\langle x,\xi\rangle \in \Phi^1,$ where in the range $1<p\leq 2,$ $$ m<n(\varrho -1)\bigg (\frac{2}{p}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg)+ \lambda\bigg(1-\frac{1}{p}\bigg);$$ and in the range $2 \leq p <\infty$ $$ m<n(\varrho -1)\bigg (\frac{1}{2}-\frac{1}{p}\bigg)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg) +\frac{\lambda}{p};$$ or \item [(c)] $T_{a,\varphi}$ with $a\in L^{\infty}S^{m}_{\varrho},$ $0\leq \varrho \leq 1$ and $\varphi$ is a strongly non-degenerate phase function with $\varphi(x,\xi) -\langle x, \xi\rangle \in \Phi^1,$ where in the range $1<p\leq 2,$ $$ m<\frac{n}{p}(\varrho -1)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg),$$ and for the range $2 \leq p <\infty$ $$ m<\frac{n}{2}(\varrho -1)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg).$$ \end{enumerate} Then for $b\in \mathrm{BMO}$, the commutators $[b, T]$ and $[b, T_{a,\varphi}]$ are bounded on $L^p$ with $1<p<\infty.$ Here we would like to mention that once again, the global
$L^p$ boundedness result in Theorem \textbf{A} above is used in the proof of the $L^p$ boundedness of the BMO commutators. Finally, the weighted norm inequalities with weights in all $A_p$ classes have the advantage of implying weighted boundedness of repeated commutators, namely one has \subsection*{F. Weighted $L^{p}$ boundedness of k-th BMO commutators of Fourier integral operators.} \noindent Let $a(x,\xi)\in L^{\infty}S^{-\frac{n+1}{2}\varrho+n(\varrho -1)}_{\varrho}$ and $\varrho \in[0,1].$ Suppose that either \begin{enumerate} \item[(a)] $a(x,\xi)$ is compactly supported in the $x$ variable and the phase function $\varphi(x,\xi)\in C^{\infty} (\mathbf{R}^ n \times \mathbf{R}^{n}\setminus 0)$ is positively homogeneous of degree $1$ in $\xi$ and satisfies $\det\partial^2_{x\xi}\varphi(x,\xi) \neq 0$ as well as $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$; or \item[(b)] $\varphi (x,\xi)-\langle x,\xi\rangle \in L^{\infty}\Phi^1,$ $\varphi $ satisfies the rough non-degeneracy condition as well as $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$. \end{enumerate} Then, for $b \in \mathrm{BMO}$ and $k$ a positive integer, the $k$-th commutator defined by \begin{equation*} T_{a, b,k} u(x):= T_{a}\big((b(x)-b(\cdot))^{k}u\big)(x) \end{equation*} is bounded on $L^{p}_w$ for each $w \in A_p$ and $p\in(1, \infty)$.\\ These BMO estimates have no predecessors in the literature and are useful in connection to the study of hyperbolic partial differential equations with rough coefficients.\\ In the last section of Chapter 4, we also briefly discuss global unweighted and local weighted estimates for the solutions of the Cauchy problem for $m$-th order and second order hyperbolic partial differential equations. \section{Prolegomena} In this chapter, we gather some results which will be useful in the study of boundedness of Fourier integral operators.
We also illustrate some of the connections between global boundedness results for operators with smooth phases and amplitudes and local boundedness results for operators with rough phases and amplitudes, thus justifying a joint study of those operators. \subsection{Definitions, notations and preliminaries} \label{prelim} \subsubsection{Phases and amplitudes} In our investigation of the regularity properties of Fourier integral operators, we will be concerned with both smooth and non-smooth amplitudes and phase functions. Below, we shall recall some basic definitions and fix some notation which will be used throughout the paper. Also, in the sequel we use the notation $\langle \xi\rangle$ for $(1+|\xi|^2)^{\frac{1}{2}}.$ The following definition, which is due to H\"ormander \cite{H0}, yields one of the most widely used classes of smooth symbols/amplitudes. \begin{defn}\label{defn of hormander amplitudes} Let $m\in \mathbf{R}$, $0\leq \delta\leq 1$, $0\leq \varrho\leq 1.$ A function $a(x,\xi)\in C^{\infty}(\mathbf{R}^{n} \times\mathbf{R}^{n})$ belongs to the class $S^{m}_{\varrho,\delta}$, if for all multi-indices $\alpha, \, \beta$ it satisfies \begin{align*} \sup_{(x,\,\xi) \in \mathbf{R}^n \times\mathbf{R}^n} \langle \xi\rangle ^{-m+\varrho\vert \alpha\vert- \delta |\beta|} |\partial_{\xi}^{\alpha}\partial_{x}^{\beta}a(x,\xi)|< +\infty. \end{align*} \end{defn} We shall also deal with the class $L^{\infty}S^m_{\varrho}$ of rough symbols/amplitudes introduced by Kenig and Staubach in \cite{KS}. \begin{defn}\label{defn of amplitudes} Let $m\in \mathbf{R}$ and $0\leq\varrho \leq 1$.
A function $a(x,\xi)$ which is smooth in the frequency variable $\xi$ and bounded measurable in the spatial variable $x$, belongs to the symbol class $L^{\infty}S^{m}_{\varrho}$, if for all multi-indices $\alpha$ it satisfies \begin{align*} \sup_{\xi \in \mathbf{R}^n} \langle \xi\rangle ^{-m+\varrho\vert \alpha\vert} \Vert \partial_{\xi}^{\alpha}a(\cdot\,,\xi)\Vert_{L^{\infty}(\mathbf{R}^{n})}< +\infty. \end{align*} \end{defn} We also need to describe the type of phase functions that we will deal with. To this end, the class $\Phi^{k}$ defined below will play a significant role in our investigations. \begin{defn}\label{Phik phases} A real valued function $\varphi(x,\xi)$ belongs to the class $\Phi^{k}$, if $\varphi (x,\xi)\in C^{\infty}(\mathbf{R}^n \times\mathbf{R}^n \setminus 0)$, is positively homogeneous of degree $1$ in the frequency variable $\xi$, and satisfies the following condition: for any pair of multi-indices $\alpha$ and $\beta$ satisfying $|\alpha|+|\beta|\geq k$, there exists a positive constant $C_{\alpha, \beta}$ such that \begin{align*} \sup_{(x,\,\xi) \in \mathbf{R}^n \times\mathbf{R}^n \setminus 0} |\xi| ^{-1+\vert \alpha\vert}\vert \partial_{\xi}^{\alpha}\partial_{x}^{\beta}\varphi(x,\xi)\vert \leq C_{\alpha , \beta}. \end{align*} \end{defn} In connection to the problem of local boundedness of Fourier integral operators, one considers phase functions $\varphi(x,\xi)$ that are positively homogeneous of degree $1$ in the frequency variable $\xi$ for which $\det [\partial^{2}_{x_{j}\xi_{k}} \varphi(x,\xi)]\neq 0.$ The latter is referred to as the {\it{non-degeneracy condition}}. However, for the purpose of proving global regularity results, we require a stronger condition than the aforementioned weak non-degeneracy condition.
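Before turning to that stronger condition, a quick numerical illustration (ours, not part of the text) of the symbol estimates just defined: the standard example $a(\xi)=\langle\xi\rangle^{m}$ satisfies the bounds of the class $S^{m}_{1,0}$, and the first-derivative quotient can be checked exactly in one dimension.

```python
import numpy as np

# Illustration (ours): the standard example a(xi) = <xi>^m = (1+xi^2)^{m/2}
# satisfies the estimates of the Hormander class with rho = 1, delta = 0.
# In one dimension the derivative is exact: d_xi <xi>^m = m * xi * <xi>^{m-2},
# so the quotient <xi>^{-m+1} |d_xi a(xi)| equals |m| |xi| / <xi> <= |m|.

def scaled_first_derivative(xi, m):
    bracket = np.sqrt(1 + xi**2)            # <xi>
    da = m * xi * bracket**(m - 2)          # exact first derivative
    return bracket**(-m + 1) * np.abs(da)   # the S^m_{1,0} quotient

xi = np.linspace(-1e4, 1e4, 100001)
m = -0.75
qmax = scaled_first_derivative(xi, m).max()
print(qmax)  # stays below |m| = 0.75
```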
\begin{defn}\label{strong non-degeneracy} $($The strong non-degeneracy condition$).$ A real valued phase $\varphi\in C^{2}(\mathbf{R}^n \times\mathbf{R}^n \setminus 0)$ satisfies the strong non-degeneracy condition, if there exists a positive constant $c$ such that $$ \Big|\det \frac{\partial^{2}\varphi(x,\xi)}{\partial x_j \partial \xi_k}\Big| \geq c, $$ for all $(x,\,\xi) \in \mathbf{R}^n \times\mathbf{R}^n \setminus 0$. \end{defn} Phases in the class $\Phi^2$ satisfying the strong non-degeneracy condition arise naturally in the study of hyperbolic partial differential equations; indeed, a phase function closely related to that of the wave operator, namely $\varphi(x,\xi)= |\xi|+ \langle x, \xi\rangle$, belongs to the class $\Phi^2$ and is strongly non-degenerate.\\ We also introduce the non-smooth version of the class $\Phi^k$ which will be used throughout the paper. \begin{defn}\label{defn of rough phases} A real valued function $\varphi(x,\xi)$ belongs to the phase class $L^{\infty}\Phi^{k}$, if it is positively homogeneous of degree $1$ and smooth on $\mathbf{R}^n \setminus 0$ in the frequency variable $\xi$, bounded measurable in the spatial variable $x,$ and if for all multi-indices $\alpha$ with $|\alpha|\geq k$ it satisfies \begin{align*} \sup_{\xi \in \mathbf{R}^n \setminus 0} |\xi| ^{-1+\vert \alpha\vert}\Vert \partial_{\xi}^{\alpha}\varphi(\cdot\,,\xi)\Vert_{L^{\infty}(\mathbf{R}^{n})}< +\infty. \end{align*} \end{defn} We observe that if $t(x) \in L^{\infty}$ then the phase function $\varphi(x,\xi)= t(x)|\xi|+ \langle x, \xi\rangle$ belongs to the class $L^\infty\Phi^2,$ hence phase functions originating from the linearisation of the maximal functions associated with averages on surfaces can be considered as members of the $L^{\infty} \Phi^2$ class. We will also need a rough analogue of the non-degeneracy condition, which we define below.
\begin{defn}\label{defn of rough nondegeneracy} $($The rough non-degeneracy condition$).$ A real valued phase $\varphi$ satisfies the rough non-degeneracy condition, if it is $C^1$ on $\mathbf{R}^n \setminus 0$ in the frequency variable $\xi$, bounded measurable in the spatial variable $x,$ and there exists a constant $c>0$ $($depending only on the dimension$)$ such that for all $x,y\in \mathbf{R}^n$ and $\xi\in \mathbf{R}^n \setminus 0$ \begin{equation}\label{lower bound on gradients} |\partial_{\xi}\varphi(x,\xi)-\partial_{\xi}\varphi(y,\xi)| \geq c |x-y|. \end{equation} \end{defn} \subsubsection{Basic notions of weighted inequalities} Our main references for the material in this section are \cite{G} and \cite{S}. Given $u\in L^{p}_{\mathrm{loc}}$, the $L^p$ maximal function $M_p(u)$ is defined by \begin{equation} M_p(u)(x) = \sup_{B\ni x} \left({\frac{1}{\vert B\vert}} \int_{B} \vert u(y)\vert^{p} \, \mathrm{d} y\right)^{\frac{1}{p}} \end{equation} where the supremum is taken over balls $B$ in $\mathbf{R}^{n}$ containing $x$. Clearly then, the Hardy-Littlewood maximal function is given by \[ M(u) := M_{1}(u). \] An immediate consequence of H\"older's inequality is that $M(u)(x)\leq M_{p}(u)(x)$ for $p\geq 1$. We shall use the notation \[ u_{B}:= {\frac{1}{\vert B\vert}} \int_{B} \vert u(y)\vert \, \mathrm{d} y \] for the average of the function $u$ over $B$. One can then define the class of Muckenhoupt $A_p$ weights as follows. \begin{defn} \label{weights} Let $w\in L^{1}_{\mathrm{loc}}$ be a positive function. One says that $w\in A_1$ if there exists a constant $C>0$ such that \begin{equation} M w (x)\leq C w(x), \quad \text{ for almost all } x \in \mathbf{R}^{n}. \end{equation} One says that $w\in A_p$ for $p\in(1,\infty)$ if \begin{equation} \sup_{B\, \textrm{balls in}\,\, \mathbf{R}^{n}}\,w_{B}\big(w^{-\frac{1}{p-1}}\big)_{B}^{p-1}<\infty.
\end{equation} The $A_p$ constants of a weight $w\in A_p$ are defined by \begin{equation} [w]_{A_1}:= \sup_{B\, \textrm{balls in}\,\, \mathbf{R}^{n}}\,w_{B}\Vert w^{-1}\Vert_{L^{\infty}(B)}, \end{equation} and \begin{equation}\label{ap constant} [w]_{A_p}:= \sup_{B\, \textrm{balls in}\,\, \mathbf{R}^{n}}\,w_{B}\big(w^{-\frac{1}{p-1}}\big)_{B}^{p-1}. \end{equation} \end{defn} \begin{ex}\label{examples of weights} The function $|x|^\alpha$ is in $A_1$ if and only if $-n<\alpha\leq 0$, and is in $A_p$ with $1<p<\infty$ if and only if $-n<\alpha<n(p-1)$. Also $u(x)= \log{\frac{1}{|x|}} $ when $|x|<\frac{1}{e}$ and $u(x)=1$ otherwise, is an $A_1$ weight. \end{ex} \subsubsection{Additional conventions} As is common practice, we will denote constants which can be determined by known parameters in a given situation, but whose value is not crucial to the problem at hand, by $C$. Such parameters in this paper would be, for example, $m$, $\varrho$, $p$, $n$, $[w]_{A_p}$, and the constants $C_\alpha$ in Definition \ref{defn of amplitudes}. The value of $C$ may differ from line to line, but in each instance could be estimated if necessary. We sometimes write $a\lesssim b$ as shorthand for $a\leq Cb$. Our goal is to prove estimates of the form $$ \Vert Tu\Vert_{L^p} \leq C \Vert u\Vert_{L^p}, \quad u \in \mathscr{S}(\mathbf{R}^n) $$ when $a \in L^{\infty}S^m_{\varrho}$, $\varphi \in L^{\infty}\Phi^{k}$ and $m < -\sigma \leq 0$, or equivalently $$ \Vert Tu\Vert_{L^p} \leq C \Vert u\Vert_{H^{s,p}}, \quad u \in \mathscr{S}(\mathbf{R}^n) $$ when $a \in L^{\infty}S^0_{\varrho}$ and $s > \sigma$, where $H^{s,p}:=\{u\in \mathscr{S}';\, (I-\Delta)^{\frac{s}{2}}u\in L^{p}\}$. We will use these two equivalent formulations interchangeably, and we will refer to $\sigma$ as the loss of derivatives in the $L^p$ boundedness of $T$.
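As a numerical companion (ours, not part of the text) to Example \ref{examples of weights}: in one dimension, $w(x)=|x|^{\alpha}$ lies in $A_2$ precisely when $-1<\alpha<1$, and the product $w_{B}\,(w^{-1})_{B}$ then stays bounded uniformly over intervals $B$. The averages can be evaluated in closed form on intervals $[a,b]$ with $0\leq a<b$; by symmetry, intervals containing the origin behave comparably.

```python
# Sketch (ours): closed-form A_2 averages for the power weight |x|^alpha on
# intervals [a, b] with 0 <= a < b, illustrating that for alpha = 1/2 the
# quotient w_B * (w^{-1})_B is uniformly bounded (worst case on [0, L], where
# it equals 1/((1+alpha)(1-alpha)) = 4/3 independently of L).

def power_average(a, b, alpha):
    """Average of x^alpha over [a, b], with 0 <= a < b and alpha > -1."""
    return (b**(alpha + 1) - a**(alpha + 1)) / ((alpha + 1) * (b - a))

def a2_quotient(a, b, alpha):
    return power_average(a, b, alpha) * power_average(a, b, -alpha)

alpha = 0.5
vals = [a2_quotient(a, a + L, alpha)
        for a in [0.0, 1e-6, 1e-3, 1.0, 100.0]
        for L in [1e-6, 1e-3, 1.0, 100.0]]
print(max(vals))  # bounded: |x|^{1/2} is an A_2 weight on the line
```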
\subsection{Tools in proving $L^p$ boundedness} \subsubsection{Semi-classical reduction and decomposition of the operators}\label{Semiclasical reduction subsec} It is convenient to work with semi-classical estimates: let $A$ be the annulus $$ A=\big\{ \xi \in \mathbf{R}^n; \tfrac{1}{2} \leq |\xi| \leq 2 \big\} $$ and let $\chi \in C^{\infty}_0(A)$ be a cutoff function. We will prove estimates on the following semi-classical Fourier integral operator $$ T_hu = (2\pi h)^{-n} \int_{\mathbf{R}^n} e^{\frac{i}{h} \varphi(x,\xi)} \chi(\xi) a(x,\xi/h) \widehat{u}(\xi/h) \, \mathrm{d}\xi $$ with $h \in (0,1]$. We will also need to investigate the low frequency component of the operator $$ T_0u = (2\pi)^{-n} \int_{\mathbf{R}^n} e^{i \varphi(x,\xi)} \chi_{0}(\xi) a(x,\xi) \widehat{u}(\xi) \, \mathrm{d}\xi $$ where $\chi_{0} \in C^{\infty}_0(B(0,2))$. The following lemma shows how semi-classical estimates translate into classical ones. We choose to state the result in the realm of weighted $L^p$ spaces with weights in the Muckenhoupt $A_p$ class. This degree of generality will be needed when we deal with the weighted boundedness of Fourier integral operators. \begin{lem} \label{Lp:semiclassical} Let $a \in L^{\infty}S^m_{\varrho}$ and $\varphi \in L^{\infty}\Phi^{k}$, and suppose that for all $h \in (0,1]$ and $w\in A_p$, there exist constants $C_{0},C_{1}>0$ (depending only on the $A_p$ constants of $w$) such that the following estimates hold $$ \Vert T_0 u\Vert_{L_{w}^p} \leq C_{0} \Vert u\Vert_{L_{w}^p}, \quad \Vert T_hu\Vert_{L_{w}^p} \leq C_{1} h^{-m-s} \Vert u\Vert_{L_{w}^p}, \quad u \in \mathscr{S}(\mathbf{R}^n). $$ This implies the bound $$ \Vert Tu\Vert_{L_{w}^p} \leq C_{2} \Vert u\Vert_{L_{w}^{p}}, \quad u \in \mathscr{S}(\mathbf{R}^n) $$ for some constant $C_{2}>0$, provided $m<-s$.
\end{lem} \begin{proof} We start by taking a dyadic partition of unity \begin{align*} \chi_{0}(\xi) +\sum_{j=1}^{+\infty}\chi_{j}(\xi)=1, \end{align*} where $\chi_{0} \in C^{\infty}_0(B(0,2))$, $\chi_{j}(\xi) =\chi(2^{-j}\xi)$ when $j \geq 1$ with $\chi \in C^{\infty}_0(A)$, and we decompose the operator $T$ as \begin{align} \label{Intro:Tdecomp} T = T \chi_{0}(D) + \sum_{j=1}^{+\infty} T \chi_{j}(D). \end{align} The first term in \eqref{Intro:Tdecomp} is bounded from $L^p_{w}$ to itself by assumption. After a change of variables, we have \begin{align*} T \chi_{j}(D)u = (2\pi)^{-n} 2^{jn} \int_{\mathbf{R}^n} e^{i 2^{j}\varphi(x,\xi)} \chi(\xi) a(x,2^{j}\xi) \widehat{u}(2^{j}\xi) \, \mathrm{d}\xi \end{align*} therefore using the semi-classical estimate with $h=2^{-j}$ we obtain $$ \Vert T \chi_{j}(D)u\Vert_{L_{w}^p}\leq C_{1} 2^{(m+s)j} \Vert u\Vert_{L_{w}^p}. $$ This finally gives \begin{align*} \Vert Tu\Vert_{L_{w}^p} \leq C_{0} \Vert u\Vert_{L_{w}^p} + C_{1} \sum_{j=1}^{+\infty} 2^{(m+s)j} \Vert u\Vert_{L_{w}^p}, \end{align*} where the series converges when $m<-s$. This completes the proof of our lemma. \end{proof} \subsubsection{Seeger-Sogge-Stein decomposition}\label{SSS decomposition} To get useful estimates for the symbol and the phase function, one imposes a second microlocalization on the former semi-classical operator, in such a way that the annulus $A$ is partitioned into truncated cones of thickness roughly $\sqrt{h}$. Roughly $h^{-(n-1)/2}$ such pieces are needed to cover the annulus $A$. For each $h \in (0,1]$ we fix a collection of unit vectors $\{\xi^{\nu}\}_{1 \leq \nu \leq J}$ which satisfy: \begin{enumerate} \item $\vert \xi^{\nu}-\xi^{\mu}\vert \geq h^{\frac{1}{2}},$ if $\nu \neq \mu$, \\ \item If $\xi \in \mathbf{S}^{n-1}$, then there exists a $\xi^{\nu}$ so that $\vert \xi -\xi^{\nu}\vert \leq h^{\frac{1}{2}}$.
\\ \end{enumerate} Let $\Gamma^{\nu}$ denote the cone in the $\xi$ space with aperture $\sqrt{h}$ whose central direction is $\xi^{\nu}$, i.e. \begin{equation} \Gamma^{\nu}=\Big\{ \xi \in \mathbf{R}^n;\, \Big\vert \frac{\xi}{\vert\xi\vert}-\xi^{\nu} \Big\vert \leq \sqrt{h} \Big\}. \end{equation} One can construct an associated partition of unity given by functions $\psi^{\nu}$, each homogeneous of degree $0$ in $\xi$ and supported in $\Gamma^{\nu}$ with \begin{equation} \sum_{\nu=1}^J \psi^{\nu}(\xi)=1,\quad \text{ for all}\, \xi \neq 0 \end{equation} and \begin{align} \label{Linfty:PsiBounds} \sup_{\xi \in \mathbf{R}^n}|\partial^{\alpha} \psi^{\nu}(\xi)| \leq C_{\alpha} h^{-\frac{|\alpha|}{2}}. \end{align} We decompose the operator $T_{h}$ as \begin{align} T_{h} = \sum_{\nu=1}^J T_{h}\psi^{\nu}(D) = \sum_{\nu=1}^J T_{h}^{\nu} \end{align} where the kernel of the operator $T_h^{\nu}$ is given by \begin{align}\label{kernel of Thnu} T_h^{\nu}(x,y)&= (2\pi h)^{-n} \int_{\mathbf{R}^n} e^{\frac{i}{h} \varphi(x,\xi)-\frac{i}{h}\langle y,\xi \rangle} \chi(\xi)\psi^{\nu}(\xi) a(x,\xi/h) \, \mathrm{d}\xi \\ \nonumber &= (2\pi h)^{-n} \int_{\mathbf{R}^n} e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi^{\nu})-y,\xi \rangle} b^{\nu}(x,\xi,h) \, \mathrm{d}\xi \end{align} with amplitude $b^{\nu}(x,\xi,h)=e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu}),\xi \rangle} \chi(\xi)\psi^{\nu}(\xi) a(x,\xi/h)$. We choose our coordinates on $\mathbf{R}^n=\mathbf{R} \xi^{\nu}\oplus{\xi^{\nu}}^{\perp}$ in the following way $$ \xi = \xi_{1} \xi^{\nu}+\xi', \quad \xi' \perp \xi^{\nu}. $$ Also it is worth noticing that the symbol $\chi(\xi) a(x,\xi/h)$ satisfies the following bound \begin{equation} \label{Linfty:Symbolsc} \sup_{\xi}\Vert\partial_{\xi}^{\alpha}\big(\chi(\xi)\, a(\cdot,\xi /h)\big)\Vert_{L^{\infty}} \leq C_{\alpha} h^{-m-|\alpha|(1-\varrho)}.
\end{equation} \begin{lem} \label{Linfty:bLemma} Let $a \in L^{\infty}S^m_{\varrho}$ and $\varphi(x,\xi)\in L^{\infty} \Phi^2.$ Then the symbol \begin{align*} b^{\nu}(x,\xi,h)=e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu}),\xi \rangle}\psi^{\nu}(\xi) \chi(\xi) a(x,\xi/h) \end{align*} satisfies the estimates \begin{align*} \sup_{\xi} \big\Vert\partial_{\xi}^{\alpha}b^{\nu}(\cdot,\xi,h)\big\Vert_{L^{\infty}} \leq C_{\alpha} h^{-m-|\alpha|(1-\varrho)-\frac{|\alpha'|}{2}}. \end{align*} \end{lem} \begin{proof} We first observe that the bounds \eqref{Linfty:PsiBounds} may be improved to \begin{align} \label{Linfty:SymbolPsi} \sup_{\xi \in A} \big|\partial_{\xi}^{\alpha}\psi^{\nu}(\xi)\big| \leq C_{\alpha} h^{-\frac{|\alpha'|}{2}}. \end{align} This can be seen by induction on $|\alpha|$; by Euler's identity, we have \begin{align*} \partial_{\xi_{1}}\partial^{\alpha}_{\xi}\psi^{\nu} = - \langle |\xi|^{-1}\xi-\xi^{\nu},\nabla \partial^{\alpha}_{\xi}\psi^{\nu} \rangle + |\alpha| \partial^{\alpha}_{\xi}\psi^{\nu} \end{align*} from which we deduce \begin{align*} |\partial_{\xi_{1}}\partial^{\alpha}_{\xi}\psi^{\nu}| &\leq \Big\vert \frac{\xi}{\vert\xi\vert}-\xi^{\nu} \Big\vert \, |\nabla \partial^{\alpha}_{\xi}\psi^{\nu}| +|\alpha| |\partial^{\alpha}_{\xi}\psi^{\nu}| \\ &\lesssim h^{\frac{1}{2}} h^{-\frac{1+|\alpha'|}{2}}+h^{-\frac{|\alpha'|}{2}}. \end{align*} This ends the induction. Similarly we have \begin{align} \label{Linfty:SymbolPhase} \sup_{\xi \in A \cap \Gamma^{\nu}} \big\Vert\partial_{\xi}^{\alpha}\big(e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(\cdot,\xi)-\nabla_{\xi}\varphi(\cdot,\xi^{\nu}),\xi \rangle}\big)\big\Vert_{L^{\infty}} \lesssim h^{-\frac{|\alpha'|}{2}}. 
\end{align} To prove this bound we proceed by induction on $|\alpha|$: we have \begin{multline*} \nabla_{\xi}\partial_{\xi}^{\alpha}\Big(e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu}),\xi \rangle}\Big)= \\ \frac{i}{h}\partial_{\xi}^{\alpha}\Big( \big(\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu})\big) e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu}),\xi \rangle}\Big) \end{multline*} and by the Leibniz rule, it suffices to verify that for $|\beta| \leq 1$ \begin{align*} &\sup_{\xi \in A \cap \Gamma^{\nu}} \big\Vert\partial_{\xi}^{\beta}\big( \partial_{\xi'}\varphi(\cdot,\xi)-\partial_{\xi'}\varphi(\cdot,\xi^{\nu})\big)\big\Vert_{L^{\infty}} \lesssim h^{\frac{1-|\beta'|}{2}} \\ &\sup_{\xi \in A \cap \Gamma^{\nu}} \big\Vert\partial_{\xi}^{\beta}\big( \partial_{\xi_{1}}\varphi(\cdot,\xi)-\partial_{\xi_{1}}\varphi(\cdot,\xi^{\nu})\big)\big\Vert_{L^{\infty}} \lesssim h^{1-\frac{|\beta'|}{2}}, \end{align*} where for the case $\beta=0$ one simply uses the mean value theorem on $\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu})$, which, due to the condition $\varphi\in L^{\infty}\Phi^2$, yields the desired estimates. We note that a homogeneous function which vanishes at $\xi=\xi^{\nu}$ may be written in the form $$ \Big(\frac{\xi}{|\xi|}-\xi^{\nu}\Big)r(x,\xi)=\mathcal{O}(\sqrt{h}) \quad \textrm{ on } A \cap \Gamma^{\nu} $$ and this gives the first bound for $\beta_{1} \neq 1$. We also have $\partial_{\xi_{1}}\partial_{\xi}\varphi(x,\xi^{\nu})=0$ by Euler's identity, therefore the former remark yields $\partial_{\xi_{1}}\partial_{\xi}\varphi(x,\xi)=\mathcal{O}(\sqrt{h})$, which is the first bound for $\beta_{1}=1$ (as well as the second bound for $\beta'\neq 0$).
It remains to prove the second bound for $\beta'=0$: by the mean value theorem and the bounds we have already obtained $$ |\partial_{\xi_{1}}\varphi(x,\xi)-\partial_{\xi_{1}}\varphi(x,\xi^{\nu})| \lesssim \sqrt{h} \Big|\frac{\xi}{|\xi|}-\xi^{\nu}\Big| \lesssim h. $$ The estimates on $b^{\nu}$ are consequences of \eqref{Linfty:Symbolsc}, \eqref{Linfty:SymbolPsi} and \eqref{Linfty:SymbolPhase} and of Leibniz's rule. \end{proof} \subsubsection{Phase reduction}\label{phase reduction} In our definition of the class $L^\infty \Phi^{k}$ we have only required control of those frequency derivatives of the phase function of order greater than or equal to $k$. This restriction is motivated by the simple model case phase function $\varphi(x,\xi)=t(x)|\xi|+ \langle x,\xi\rangle$, $t(x)\in L^\infty$, for which the first order $\xi$-derivatives of the phase are not bounded, but all the derivatives of order $2$ or higher are indeed bounded, so that $\varphi(x,\xi)\in L^\infty \Phi^2$. However, in order to deal with low frequency portions of Fourier integral operators one also needs to control the first order $\xi$ derivatives of the phase. The following phase reduction lemma will reduce the phase of the Fourier integral operators to a linear term plus a phase for which the first order frequency derivatives are bounded.
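As a quick sanity check (ours, not part of the text) on the Euler-homogeneity step used in the proof of the phase reduction lemma that follows: for a phase positively homogeneous of degree $1$ in $\xi$ one has $\varphi(x,\zeta)=\langle\nabla_{\xi}\varphi(x,\zeta),\zeta\rangle$, which is what collapses Taylor's formula to $\varphi(x,\xi)=\theta(x,\xi)+\langle\nabla_{\xi}\varphi(x,\zeta),\xi\rangle$. The identity can be verified numerically for the model phase by finite differences.

```python
import numpy as np

# Numerical check (ours) of the Euler identity phi(x, zeta) =
# <grad_xi phi(x, zeta), zeta> for the degree-1 homogeneous model phase
# phi(x, xi) = |xi| + <x, xi>, with the gradient computed by central
# finite differences.

def phi(x, xi):
    return np.linalg.norm(xi) + x @ xi

def grad_xi_phi(x, xi, h=1e-6):
    g = np.zeros_like(xi)
    for k in range(len(xi)):
        e = np.zeros_like(xi); e[k] = h
        g[k] = (phi(x, xi + e) - phi(x, xi - e)) / (2 * h)
    return g

rng = np.random.default_rng(3)
x, zeta = rng.normal(size=2), rng.normal(size=2)
zeta /= np.linalg.norm(zeta)          # a point on the unit sphere
lhs = phi(x, zeta)
rhs = grad_xi_phi(x, zeta) @ zeta
print(abs(lhs - rhs))  # zero up to finite-difference error
```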
\begin{lem} Any Fourier integral operator $T$ of the type \eqref{Intro:Fourier integral operator} with amplitude $\sigma(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ and phase function $\varphi(x,\xi)\in L^{\infty}\Phi^2$, can be written as a finite sum of operators of the form \begin{equation}\label{reduced rep of Fourier integral operator} \frac{1}{(2\pi)^{n}} \int a(x,\xi)\, e^{i\theta(x,\xi)+i\langle \nabla_{\xi}\varphi(x,\zeta),\xi\rangle}\, \widehat{u}(\xi) \, \mathrm{d}\xi \end{equation} where $\zeta$ is a point on the unit sphere $\mathbf{S}^{n-1}$, $\theta(x,\xi)\in L^{\infty}\Phi^{1},$ and $a(x,\xi) \in L^{\infty} S^{m}_{\varrho}$ is localized in the $\xi$ variable around the point $\zeta$. \end{lem} \begin{proof} We start by localizing the amplitude in the $\xi$ variable by introducing an open convex covering $\{U_l\}_{l=1}^{M}$ of the unit sphere $\mathbf{S}^{n-1}$, whose diameters do not exceed $d$. Let $\Xi_{l}$ be a smooth partition of unity subordinate to the covering $\{U_l\}$ and set $a_{l}(x,\xi)=\sigma(x,\xi)\, \Xi_{l}(\frac{\xi}{|\xi|}).$ We set \begin{equation} T_{l}u(x):= \frac{1}{(2\pi)^n} \int \, a_l(x,\xi)\, e^{i\varphi(x,\xi)}\,\widehat{u}(\xi) \, \mathrm{d}\xi, \end{equation} and fix a point $\zeta \in U_l.$ Then for any $\xi$ with $\frac{\xi}{|\xi|}\in U_l$, Taylor's formula and Euler's homogeneity formula yield \begin{align*} \varphi(x,\xi) &= \varphi(x,\zeta) + \langle \nabla_{\xi}\varphi (x,\zeta), \xi-\zeta\rangle +\theta (x,\xi) \\ &= \theta(x,\xi)+\langle \nabla_{\xi}\varphi(x,\zeta),\xi\rangle. \end{align*} Furthermore, for such $\xi$, $\partial_{\xi_k} \theta(x,\xi)= \partial_{\xi_k} \varphi(x,\frac{\xi}{|\xi|})-\partial_{\xi_k} \varphi(x,\zeta)$, so the mean value theorem and the definition of the class $L^{\infty}\Phi^2$ yield $|\partial_{\xi_k} \theta(x,\xi)|\leq Cd$ and, for $|\alpha|\geq 2$, $|\partial^{\alpha}_{\xi} \theta(x,\xi)|\leq C |\xi|^{1-|\alpha|}.$ Here we remark in passing that in dealing with the function $\theta(x,\xi)$ we only needed to control the second
and higher order $\xi$-derivatives of the phase function $\varphi(x,\xi)$, and this gives a further motivation for the definition of the class $L^\infty \Phi^2.$ We shall now extend the function $\theta(x,\xi)$ to the whole of $\mathbf{R}^{n}\times \mathbf{R}^{n}\setminus 0$, preserving its properties, and we denote this extension by $\theta(x,\xi)$ again. Hence the Fourier integral operators $T_l$ defined by \begin{equation} T_{l}u(x):=\frac{1}{(2\pi)^{n}} \int a_l(x,\xi)\,e^{i\theta(x,\xi)+i\langle \nabla_{\xi}\varphi(x,\zeta),\xi\rangle}\, \widehat{u}(\xi) \, \mathrm{d}\xi, \end{equation} are the localized pieces of the original Fourier integral operator $T$, and therefore $T=\sum_{l=1}^{M} T_l$ as claimed. \end{proof} \subsubsection{Necessary and sufficient conditions for the non-degeneracy of smooth phase functions} \label{phase nondegeneracy} The smoothness of the phases of Fourier integral operators makes the study of boundedness considerably easier, in the sense that the conditions that a phase be strongly non-degenerate and belong to the class $\Phi^{2}$ are enough to secure $L^p$ boundedness for a wide range of rough amplitudes. The following proposition, which is useful in proving global $L^2$ boundedness of Fourier integral operators, establishes a relationship between strongly non-degenerate phases and lower bound estimates for the gradients of the phases in question.
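Before the formal statement, a numerical illustration (ours, with a diffeomorphism of our own choosing): for the linear-in-$\xi$ phase $\varphi(x,\xi)=\langle\kappa(x),\xi\rangle$ one has $\nabla_{\xi}\varphi(x,\xi)=\kappa(x)$ and $\det\partial^{2}_{x\xi}\varphi=\det\kappa'(x)$, so the gradient lower bound amounts to $\kappa$ being expansive, which is easy to test by sampling.

```python
import numpy as np

# Sketch (ours): take the componentwise diffeomorphism
#   kappa_i(x) = x_i + tanh(x_i)/2,  with kappa_i' in [1, 3/2],
# so det kappa'(x) >= 1 (strong non-degeneracy of phi = <kappa(x), xi>) and
# |kappa(x) - kappa(y)| >= |x - y|, i.e. the gradient lower bound of the
# proposition holds with C = 1.

rng = np.random.default_rng(4)

def kappa(x):
    return x + 0.5 * np.tanh(x)

ratios = []
for _ in range(2000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    ratios.append(np.linalg.norm(kappa(x) - kappa(y)) / np.linalg.norm(x - y))
print(min(ratios), max(ratios))  # between 1 and 3/2, matching kappa'
```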
\begin{prop}\label{equivalence of lowerbound and non degeneracy} Let $\varphi (x,\xi)\in C^{\infty}(\mathbf{R}^n \times\mathbf{R}^n \setminus 0)$ be a real valued phase. Then the following statements hold true: \begin{enumerate} \item Assume that $$ \Big|\det \frac{\partial^{2}\varphi(x,\xi)}{\partial x_j \partial \xi_k} \Big| \geq C_1, $$ for all $(x,\,\xi) \in \mathbf{R}^n \times\mathbf{R}^n \setminus 0,$ and that $$ \Big\Vert\frac{\partial^{2}\varphi(x,\xi)}{\partial x \partial \xi}\Big\Vert\leq C_2, $$ for all $(x,\,\xi) \in \mathbf{R}^n \times\mathbf{R}^n \setminus 0$ and some constant $C_2>0,$ where $\Vert \cdot\Vert$ denotes the matrix norm. Then \begin{equation}\label{lowerbound on gradient} |\nabla_{\xi} \varphi(x,\xi)- \nabla_{\xi}\varphi(y,\xi)|\geq C |x-y|, \end{equation} for $x,y\in\mathbf{R}^{n}$ and $\xi\in \mathbf{R}^{n}\setminus 0$ and some $C>0.$ \item Assume that $|\nabla_{\xi} \varphi(x,\xi)- \nabla_{\xi}\varphi(y,\xi)|\geq C |x-y|$ for $x,y\in\mathbf{R}^{n}$ and $\xi\in \mathbf{R}^{n}\setminus 0$ and some $C>0.$ Then there exists a constant $C_1$ such that $$ \Big|\det \frac{\partial^{2}\varphi(x,\xi)}{\partial x_j \partial \xi_k}\Big| \geq C_1, $$ for all $(x,\,\xi) \in \mathbf{R}^n \times\mathbf{R}^n \setminus 0$. \end{enumerate} \end{prop} \begin{proof} \setenumerate[0]{leftmargin=0pt,itemindent=20pt} \begin{enumerate} \item We consider the map $\mathbf{R}^{n}\ni x \to \nabla_{\xi}\varphi(x,\xi) \in\mathbf{R}^{n}$; using our assumptions on $\varphi$, Schwartz's global inverse function theorem \cite{Sch} yields that this map is a global $C^1$-diffeomorphism whose inverse $\lambda_{\xi}$ satisfies \begin{equation} |\lambda_{\xi}(z)-\lambda_{\xi}(w)|\leq \sup_{[z,w]} \Vert\lambda'_{\xi}\Vert \times |z-w|. \end{equation} Furthermore, $\lambda'_{\xi}(z)= [(\lambda^{-1}_{\xi})^{'}]^{-1}\circ \lambda_{\xi}(z)=[ \partial^{2}_{x,\xi} \varphi(\lambda_{\xi}(z) , \xi)]^{-1} $.
Therefore, using the well-known matrix inequality $\Vert A^{-1}\Vert \leq c_{n} |\det A|^{-1} \Vert A\Vert^{n-1},$ valid for all $A\in \mathrm{GL}\,(n,\mathbf{R})$, together with the assumption $\Vert\frac{\partial^{2}\varphi(x,\xi)}{\partial x \partial \xi}\Vert\leq C_2,$ we obtain \begin{equation*} \Vert \lambda'_{\xi}(z)\Vert \leq c_n |\det [\partial^{2}_{x,\xi} \varphi(\lambda_{\xi}(z), \xi)]|^{-1} \Vert \partial^{2}_{x,\xi} \varphi(\lambda_{\xi}(z), \xi)\Vert^{n-1}\leq \frac{c_n C_2^{n-1}}{C_1}=:\frac{1}{C}. \end{equation*} This yields that $ |\lambda_{\xi}(z)-\lambda_{\xi}(w)|\leq \frac{1}{C}|z-w|$, and setting $z=\nabla_{\xi} \varphi(x,\xi)$ and $w=\nabla_{\xi} \varphi(y,\xi)$, we obtain \eqref{lowerbound on gradient}. \item Given the lower bound on the difference of the gradients as in the statement of the second part of the proposition, setting $y=x+hv$ with $v\in \mathbf{R}^n$ yields $$\frac{|\nabla_{\xi} \varphi(x+hv,\xi)- \nabla_{\xi}\varphi(x,\xi)|}{h}\geq C|v|,$$ and letting $h$ tend to zero we have for any $v\in \mathbf{R}^n$ \begin{equation}\label{invertibility of hessian} |\partial^2_{x,\xi}\varphi(x,\xi)\cdot v|\geq C |v|. \end{equation} This means that $\partial^2_{x,\xi}\varphi(x,\xi)$ is invertible and $|[\partial^2_{x,\xi}\varphi(x,\xi)]^{-1}\cdot w|\leq \frac{|w|}{C}.$ Therefore, taking the supremum over $|w|=1$ we obtain $\frac{1}{\Vert [\partial^2_{x,\xi}\varphi(x,\xi)]^{-1}\Vert^n} \geq C^n.$ Now the well-known matrix inequality \begin{equation}\label{variant of hadamard inequality} \frac{1}{\gamma_{n} \Vert A^{-1}\Vert^{n}} \leq |\det A|\leq \gamma_{n} \Vert A\Vert^n, \end{equation} which is a consequence of the Hadamard inequality, yields for $A=\frac{\partial^{2}\varphi(x,\xi)}{\partial x_j \partial \xi_k}$ \begin{equation} \bigg|\det\frac{\partial^{2}\varphi(x,\xi)}{\partial x_j \partial \xi_k}\bigg|\geq \frac{C^n}{\gamma_{n}}. \end{equation} \end{enumerate} This completes the proof.
\end{proof} \begin{rem} Proposition \ref{equivalence of lowerbound and non degeneracy} gives a motivation for our rough non-degeneracy condition in Definition \ref{defn of rough phases}, when there is no differentiability in the spatial variables. \end{rem} \subsubsection{Necessity of strong non-degeneracy for global regularity}\label{necessity of strong non-degeneracy} We shall now discuss a simple example which illustrates the necessity of the strong non-degeneracy condition for the validity of global $L^p$ boundedness of Fourier integral operators. To this end, we take a smooth diffeomorphism $\kappa:\mathbf{R}^{n} \to \mathbf{R}^{n}$ with everywhere nonzero Jacobian determinant, i.e. $\det \kappa'(x)\neq 0$ for all $x\in \mathbf{R}^{n}.$ Now, if we let $\varphi(x,\xi)= \langle \kappa(x),\xi\rangle$ and take $a(x,\xi)=1 \in S^{0}_{1,0},$ then the Fourier integral operator $T_{a,\varphi} u(x)$ is nothing but the composition operator $u\circ \kappa(x).$ Therefore \begin{equation}\label{necessity counterexample Lp bound} \Vert T_{a, \varphi}u\Vert_{L^p} = \Vert u\circ \kappa\Vert_{L^p}=\Big\{\int_{\mathbf{R}^{n}} |u(y)|^{p} \, |\det\kappa'(\kappa^{-1} (y))|^{-1}\, \mathrm{d} y\Big\}^{\frac{1}{p}}, \end{equation} from which we see that $T_{a,\varphi}$ is $L^p$ bounded for any $p$ if and only if there exists a constant $C>0$ such that $|\det \kappa'(x)|^{-1}\leq C$ for all $x\in\mathbf{R}^{n}.$ The latter is equivalent to $|\det \kappa'(x)|\geq \frac{1}{C} >0.$ Now since $|\det \frac{\partial^{2}\varphi(x,\xi)}{\partial x \partial \xi}|=|\det \kappa'(x)|,$ it follows at once that a necessary condition for the $L^p$ boundedness of the operator $T_{a, \varphi}$ is the strong non-degeneracy of the phase function $\varphi$.
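The change-of-variables identity behind this counterexample can be checked numerically in one dimension; the particular diffeomorphism $\kappa(x)=2x+\sin x$ (with $\kappa'(x)=2+\cos x\geq 1$) and the Gaussian test function are illustrative assumptions, not part of the argument:

```python
import numpy as np

# Sanity check, in dimension one, of
#   int |u(kappa(x))|^p dx = int |u(y)|^p |kappa'(kappa^{-1}(y))|^{-1} dy,
# with the hypothetical diffeomorphism kappa(x) = 2x + sin(x) and a Gaussian u.
def trapezoid(f, x):
    # composite trapezoidal rule on a (possibly nonuniform) grid
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

p = 3.0
u = lambda y: np.exp(-y**2)

x = np.linspace(-20.0, 20.0, 400001)
kx, dk = 2.0 * x + np.sin(x), 2.0 + np.cos(x)

lhs = trapezoid(np.abs(u(kx))**p, x)        # int |u(kappa(x))|^p dx
# Parametrize y = kappa(x); on this grid kappa'(kappa^{-1}(y)) is just dk.
rhs = trapezoid(np.abs(u(kx))**p / dk, kx)  # int |u(y)|^p / kappa'(kappa^{-1}(y)) dy
print(lhs, rhs)
```

The two quadratures use different grids (uniform in $x$, nonuniform in $y=\kappa(x)$), so their agreement is a genuine, if modest, check of the substitution formula.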
We observe that if we had instead chosen $a(x,\xi)$ to be a smooth compactly supported function in $x,$ then the $L^p$ boundedness of $T_{a, \varphi}$ would have followed from the mere non-degeneracy condition $|\det \frac{\partial^{2}\varphi(x,\xi)}{\partial x \partial \xi}|=|\det \kappa'(x)|\neq 0.$ \subsubsection{Non-smooth changes of variables}\label{nonsmooth change of variables} In dealing with rough Fourier integral operators we need, at some point, to make changes of variables in which the substitution is not differentiable. This issue is problematic in general, but in our setting, thanks to the rough non-degeneracy assumption on the phase, we can show that the substitution is indeed valid and furthermore has the desired boundedness properties. The discussion below is an abstract approach to the problem of non-smooth substitution, and we refer the reader interested in related substitution results to De Guzman~\cite{Guz}. \begin{lem} \label{Lem:Substitution} Let $U$ be a measurable set and let $t: U \to \mathbf{R}^n$ be a bounded measurable map satisfying \begin{align} \label{InjectivityHyp} |t(x)-t(y)| \geq c |x-y| \end{align} for almost every $x,y \in U$. Then there exists a function $J_{t} \in L^{\infty}(\mathbf{R}^n)$ supported in $t(U)$ such that the substitution formula \begin{align}\label{rough substitution formula} \int_{U} u\circ t(x) \, \mathrm{d} x = \int u(z) J_{t}(z) \, \mathrm{d} z \end{align} holds for all $u \in L^1(\mathbf{R}^n)$, and the Jacobian $J_t$ satisfies the estimate $\Vert J_t\Vert_{L^{\infty}} \leq \frac{2 \sqrt{n}}{c}.$ \end{lem} \begin{rem} \label{BijectivityRem} If one works with a representative $t$ in the equivalence class of functions equal almost everywhere, then possibly after replacing $U$ with $U \setminus N$ (where $N$ is a null-set where \eqref{InjectivityHyp} does not hold), one may assume that $t$ is an injective map with \eqref{InjectivityHyp} holding everywhere on $U$.
\end{rem} For the convenience of the reader, we provide a proof of this simple lemma. \begin{proof} As observed in Remark \ref{BijectivityRem}, we may assume that $t$ is an injective map from $U$ to $\mathbf{R}^n$ for which \eqref{InjectivityHyp} holds on $U$. The formula \begin{align*} \mu_{t}(f) = \int_{U} f \circ t(x) \, \mathrm{d} x, \quad f \in C^0_{0}(\mathbf{R}^n) \end{align*} defines a non-negative Radon measure, which by the Riesz representation theorem is associated to a Borel measure. In this case, the latter measure is explicitly given by \begin{align*} \mu_{t}(A) = |t^{-1}(A) \cap U | \end{align*} on all Lebesgue measurable sets $A \subset \mathbf{R}^n$, where we use the notation $|\cdot|$ for the Lebesgue measure of a set. By the Lebesgue decomposition theorem, this measure can be split into an absolutely continuous and a singular part, i.e. $$ \mu_{t} = \mu_{t}^{\rm ac} + \mu_{t}^{\rm sing}.$$ Now assumption \eqref{InjectivityHyp} yields \begin{align*} t^{-1}\big(B_{\infty}(w,r)\big) \subset B_{\infty}(x,2\sqrt{n}r/c), \quad \textrm{ if } t(x) \in B_{\infty}(w,r), \end{align*} where $B_{\infty}(w,r)$ is the ball of center $w$ and radius $r$ for the supremum norm. This implies that whenever $$ A \cap t(U) \subset \bigcup_{k=0}^{\infty} B_{\infty}(w_{k},r_{k}), $$ it follows that $$ t^{-1}(A) \cap U \subset \bigcup_{k=0}^{\infty} B_{\infty}(x_{k},2\sqrt{n}r_{k}/c), $$ where the centers $x_{k}$ have been chosen in $t^{-1}(B_{\infty}(w_{k},r_{k}))$ when this set is nonempty. Furthermore, it is well-known that the Lebesgue measure of a set can be computed using \begin{align*} |\Omega|=\inf \bigg\{ \sum_{k=0}^{\infty} |Q_{k}|, \; \Omega \subset \bigcup_{k=0}^{\infty} Q_{k}\bigg\}, \end{align*} where the infimum is taken over all possible sequences $(Q_{k})_{k \in \mathbf{N}}$ of cubes with faces parallel to the axes.
Therefore \begin{align} \label{MeasureBound} \mu_{t}(A) \leq \frac{2 \sqrt{n}}{c} |A \cap t(U)| \leq \frac{2 \sqrt{n}}{c} |A| \end{align} for all Lebesgue measurable sets $A$ in $\mathbf{R}^n$. In particular, Lebesgue null-sets are also null-sets with respect to $\mu_{t},$ which in turn implies that the measure $\mu_{t}$ is absolutely continuous with respect to the Lebesgue measure. By the Radon-Nikodym theorem, there exists a positive Lebesgue measurable function $J_{t} \in L^1_{\rm loc}$ such that $\mu_{t}$ has density $J_{t}$, i.e. $$ \mu_{t}(A) = \int_{A} J_{t}(x) \, \mathrm{d} x. $$ By Lebesgue's differentiation theorem, we may compute the Jacobian function $J_{t}$ from the measure $\mu_{t}$ by a limiting process on balls $B$, namely \begin{align}\label{the jacobian of mu} J_{t}(x) = \lim_{B \to \{x\}} \frac{1}{|B|} \int_{B} J_{t}(y) \, \mathrm{d} y = \lim_{B \to \{x\}} \frac{\mu_{t}(B)}{|B|}. \end{align} Equality \eqref{the jacobian of mu} together with the estimate \eqref{MeasureBound} yields that $J_{t}$ is bounded and \begin{align*} \Vert J_{t}\Vert_{L^{\infty}} \leq \frac{2 \sqrt{n}}{c}. \end{align*} Moreover, from the definition of $\mu_{t}$ it is clear that it is supported in $t(U)$. Finally, \eqref{rough substitution formula} follows from \begin{align*} \int_{U} u \circ t(x) \, \mathrm{d} x = \mu_{t}(u) = \int u \, \mathrm{d} \mu_{t} = \int u(z) J_{t}(z) \, \mathrm{d} z \end{align*} for all $u \in C^0_{0}(U)$, and this extends to functions $u \in L^1(\mathbf{R}^n)$. \end{proof} \begin{rem} Note that if there is a representative $t$ in the equivalence class such that \eqref{InjectivityHyp} holds everywhere on $U$ and such that $t(U)$ is an open subset of $\mathbf{R}^n$, then $t^{-1}: t(U) \to U$ is a Lipschitz bijection. Furthermore, any open subset $W \subset t(U)$ is open in $\mathbf{R}^n$, and by Brouwer's theorem on the invariance of the domain, $t^{-1}(W)$ is open. This means that the map $t$ is actually continuous.
\end{rem} \begin{cor}\label{cor:main substitution estim} Let $t:\mathbf{R}^n \to \mathbf{R}^n$ be a map satisfying the assumptions in $\mathrm{Lemma\, \ref{Lem:Substitution}}$ with $U=\mathbf{R}^n$. Then $u \mapsto u \circ t$ is a bounded map on $L^p$ for $p\in [1,\infty]$. \end{cor} \begin{proof} This easily follows from Lemma \ref{Lem:Substitution}: \begin{align*} \int |u\circ t(x)|^p \, \mathrm{d} x = \int |u(z)|^p J_{t}(z) \, \mathrm{d} z \leq \Vert J_{t}\Vert_{L^{\infty}} \Vert u\Vert_{L^p}^{p} \end{align*} when $p\in [1,\infty)$. The $L^{\infty}$ estimate is similar. \end{proof} \subsubsection{$L^p$ boundedness of the low frequency portion of rough Fourier integral operators}\label{small frequency Lp boundedness} Here we will prove the $L^p$ boundedness for $p\in [1,\infty]$ of Fourier integral operators whose amplitude contains a smooth compactly supported factor, the support of which lies in a neighbourhood of the origin in the frequency variable. There are a couple of difficulties to overcome here, the first being the singularity of the phase function in the frequency variable at the origin. The second is caused by the lack of smoothness in the spatial variables. In order to handle these problems we need the following lemma. \begin{lem}\label{main low frequency estim} Let $b(x,\xi)$ be a bounded function which is $C^{n+1}(\mathbf{R}^n_{\xi} \setminus 0)$ and compactly supported in the frequency variable $\xi$, and $L^{\infty}(\mathbf{R}^n_{x})$ in the space variable $x$, satisfying \begin{align*} \sup_{\xi \in \mathbf{R}^n \setminus 0} |\xi|^{-1+|\alpha|} \Vert\partial^{\alpha}_{\xi}b(\cdot\,,\xi)\Vert_{L^{\infty}} < +\infty, \quad |\alpha| \leq n+1. \end{align*} Then for all $0 \leq \mu<1$ we have \begin{align} \label{LowFreq:KernelEst1} \sup_{(x,y) \in \mathbf{R}^{2n}} \langle y \rangle^{n+\mu} \Big| \int e^{-i \langle y,\xi \rangle} b(x,\xi) \, \mathrm{d} \xi \Big| < +\infty.
\end{align} \end{lem} \begin{proof} Since $b(x,\xi)$ is assumed to be bounded, the integral in \eqref{LowFreq:KernelEst1}, which we denote by $B(x,y),$ is uniformly bounded, and therefore it suffices to consider the case $|y| \geq 1.$ Integrations by parts yield \begin{align*} B(x,y)=|y|^{-2n} \int e^{-i \langle y,\xi \rangle} \langle y,D_{\xi} \rangle^n b(x,\xi) \, \mathrm{d} \xi, \end{align*} and therefore we have the estimate \begin{align*} |B(x,y)| \leq C |y|^{-n} \int_{|\xi|<M} \frac{\mathrm{d} \xi}{|\xi|^{n-1}}. \end{align*} We would like to gain an extra factor of $|y|^{-\mu};$ to this end consider the function $\beta(x,y,\xi)= |y|^{-n}\langle y,D_{\xi} \rangle^n b(x,\xi)$, which is smooth in $\xi$ on $\mathbf{R}^n \setminus 0$ and satisfies \begin{align*} \sup_{\xi \in \mathbf{R}^n \setminus 0} |\xi|^{n+|\alpha|-1} \Vert\partial_{\xi}^{\alpha}\beta(\cdot,\cdot,\xi)\Vert_{L^{\infty}} < +\infty, \quad |\alpha| \leq 1. \end{align*} Let $\chi$ be a $C^{\infty}_{0}(\mathbf{R}^n)$ function which is one on the unit ball and zero outside the ball of radius $2$. Taking $0<\varepsilon \leq 1$, we have \begin{align*} |y|^nB(x,y)&= \int e^{-i \langle y,\xi \rangle} \chi(\xi/\varepsilon) \beta(x,y,\xi) \, \mathrm{d} \xi \\ &\quad + \int e^{-i \langle y,\xi \rangle} \big(1-\chi(\xi/\varepsilon)\big) \beta(x,y,\xi) \, \mathrm{d} \xi. \end{align*} The first term is bounded by a constant times $\varepsilon$, while the second term is equal to \begin{align*} i |y|^{-2} \int e^{-i \langle y,\xi \rangle} \big(\varepsilon^{-1}\langle y,\partial_{\xi}\rangle \chi(\xi/\varepsilon) \beta - \big(1-\chi(\xi/\varepsilon)\big) \langle y,\partial_{\xi}\rangle \beta \big) \, \mathrm{d} \xi, \end{align*} which may be bounded by \begin{align*} |y|^{-1} (C_{1}-C_{2} \log \varepsilon).
\end{align*} We minimize the bound $C_{0}\varepsilon+ |y|^{-1} (C_{1}-C_{2} \log \varepsilon)$ by taking $\varepsilon=|y|^{-1}$, and obtain \begin{align*} |B(x,y)| \leq C |y|^{-n-1} \big(1+\log |y|\big) \leq C' |y|^{-n-\mu} \end{align*} for all $0 \leq \mu<1$. This is the desired estimate. \end{proof} Having this at our disposal, we can show that the low frequency portion of a Fourier integral operator is $L^p$ bounded; more precisely we have \begin{thm}\label{general low frequency boundedness for rough Fourier integral operator} Let $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ with $m\in \mathbf{R}$ and $\varrho \in [0,1]$, and let the phase function $\varphi(x,\xi)\in L^{\infty}\Phi^2$ satisfy the rough non-degeneracy condition $($according to $\mathrm{Definition\, \ref{defn of rough nondegeneracy}}$$).$ Then for all $\chi_{0}(\xi)\in C_{0}^{\infty}$ supported around the origin, the Fourier integral operator $$T_{0} u(x)= \frac{1}{(2\pi)^{n}}\int_{\mathbf{R}^{n}}e^{i\varphi(x,\xi)}a(x,\xi) \chi_{0}(\xi) \, \widehat{u}(\xi)\, \mathrm{d}\xi $$ is bounded on $L^p$ for $p\in [1,\infty]$.
\end{thm} \begin{proof} In proving the $L^p$ boundedness, according to the phase reduction procedure of Lemma \ref{reduced rep of Fourier integral operator}, there is no loss of generality in assuming that our Fourier integral operator is of the form $$T_{0} u(x)= \frac{1}{(2\pi)^{n}} \int a(x,\xi)\,\chi_{0}(\xi) e^{i\theta(x,\xi)+i\langle \nabla_{\xi}\varphi(x,\zeta),\xi\rangle}\, \widehat{u}(\xi) \, \mathrm{d}\xi,$$ for some $\zeta\in \mathbf{S}^{n-1}$, $a\in L^{\infty} S^{m}_{\varrho}$ and $\theta \in L^{\infty} \Phi^1.$ In the proof of the $L^p$ boundedness of $T_0$ we only need to analyze the kernel of the operator $$\int a(x,\xi) \chi_{0}(\xi) e^{i\theta(x,\xi)+i\langle \nabla_{\xi}\varphi(x,\zeta),\xi\rangle} \widehat{u}(\xi) \, \mathrm{d}\xi,$$ which is given by $$T_0 (x,y):= \int e^{i\langle \nabla_{\xi}\varphi (x,\zeta)-y,\xi\rangle} \,e^{i\theta (x,\xi)}\, a(x,\xi) \chi_{0}(\xi)\, \mathrm{d}\xi.$$ Now the estimates on the $\xi$ derivatives of $\theta(x,\xi)$ above yield $$\sup_{|\xi|\neq 0} |\xi|^{-1+|\alpha|} |\partial_{\xi}^{\alpha} \theta(x, \xi)|<\infty,$$ for $|\alpha|\geq 1$ uniformly in $x$. Therefore, setting $b(x,\xi):=a(x,\xi) \chi_{0}(\xi) e^{i\theta(x,\xi)}$ we have that $b(x,\xi)$ is bounded and $\sup_{|\xi|\neq 0} |\xi|^{-1+|\alpha|} |\partial_{\xi}^{\alpha} b(x, \xi)|<\infty,$ for $|\alpha|\geq 1$ uniformly in $x$, and using Lemma \ref{main low frequency estim}, we have for all $\mu\in [0,1)$ \begin{equation*} |T_{0} (x,y)|\leq C\langle \nabla_{\xi}\varphi (x,\zeta)-y\rangle ^{-n-\mu}.
\end{equation*} From this it follows that $$\sup_{x} \int |T_{0} (x,y)|\, \mathrm{d} y<\infty ,$$ and using our rough non-degeneracy assumption and Corollary \ref{cor:main substitution estim} in the case $p=1,$ we also have $$\int |T_{0} (x,y)|\, \mathrm{d} x \lesssim \int \langle \nabla_{\xi}\varphi (x,\zeta)-y\rangle ^{-n-\mu}\, \mathrm{d} x \lesssim \int \langle z\rangle ^{-n-\mu}\, \mathrm{d} z<\infty,$$ uniformly in $y$. This estimate and Young's inequality yield the $L^p$ boundedness of the operator $T_0$. \end{proof} \subsection{Some links between nonsmoothness and global boundedness}\label{nonsmoothness and global estimates} In this paragraph, we illustrate some of the relations between boundedness for rough Fourier integral operators and the global boundedness of operators with smooth amplitudes and phases. Our observation is that local estimates for non-smooth Fourier integral operators imply global estimates for certain classes of Fourier integral operators. This can be done either by compactification or by using a dyadic decomposition. To see the relation between compactification and global boundedness, consider the operator \begin{align} Tu(x) = (2\pi)^{-n} \int e^{i \varphi(x,\xi)} a(x,\xi) \widehat{u}(\xi) \, \mathrm{d} \xi. \end{align} Let $\chi \in C^{\infty}_0(B(0,2))$ be equal to one on the unit ball $B(0,1)$, and let $\omega = 1-\chi$ be supported away from zero. Then $$ T = T_{0} + T_{1}, \quad T_{0}=\chi T, \quad T_{1}=\omega T. $$ For the global continuity of $T$, we are only interested in $T_{1}$, since the amplitude of $T_{0}$ is compactly supported in the space variable and the boundedness of that operator follows from the local theory.
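The Jacobian factor produced by the compactifying change of variables below can be verified numerically in dimension $n=1$, where $x=z/|z|^{1+\theta}$ reduces to $x=\operatorname{sign}(z)\,|z|^{-\theta}$; the test function $g(x)=e^{-|x|}$ and the value $\theta=1/2$ are illustrative assumptions:

```python
import numpy as np

# One-dimensional (n = 1) check of the compactification identity
#   int_{|x|>=1} g(x) dx = theta * int_{0<|z|<=1} g(z/|z|^{1+theta}) |z|^{-(1+theta)} dz,
# i.e. of the Jacobian of x = z/|z|^{1+theta}. Here g(x) = exp(-|x|) and
# theta = 0.5 are chosen purely for illustration; the exact value is 2/e.
def trapezoid(f, x):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

theta = 0.5
g = lambda x: np.exp(-np.abs(x))

x = np.linspace(1.0, 60.0, 200001)
lhs = 2.0 * trapezoid(g(x), x)  # int_{|x|>=1} e^{-|x|} dx = 2/e

z = np.linspace(1e-9, 1.0, 200001)  # by symmetry, twice the integral over (0,1]
rhs = 2.0 * theta * trapezoid(g(z**(-theta)) * z**(-(1.0 + theta)), z)
print(lhs, rhs)  # both approximately 2/e
```

The singular weight $|z|^{-(1+\theta)}$ is tamed by the rapid decay of $g(z^{-\theta})$ as $z\to 0$, mirroring how, in the text, decay of the amplitude at spatial infinity compensates the Jacobian singularity at the origin.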
Concerning $T_{1}$, we make the change of variables \begin{align} z = \frac{x}{|x|^{1+\frac{1}{\theta}}}, \quad x = \frac{z}{|z|^{1+\theta}}, \quad \theta \in (0,1], \end{align} so that \begin{align} \int |T_{1}u(x)|^p \, \mathrm{d} x = \theta \int \Big|T_{1}u\Big(\frac{z}{|z|^{1+\theta}}\Big)\Big|^p |z|^{-n(1+\theta)} \, \mathrm{d} z. \end{align} Therefore it suffices to study the $L^p$ boundedness of the Fourier integral operator \begin{align} \tilde{T}_{1}u(z) = (2\pi)^{-n} \int e^{i \varphi(z/|z|^{1+\theta},\zeta)} \underbrace{|z|^{-\frac{n}{p}(1+\theta)} \omega a\Big(\frac{z}{|z|^{1+\theta}},\zeta\Big)}_{=\tilde{a}(z,\zeta)} \widehat{u}(\zeta) \, \mathrm{d} \zeta. \end{align} The amplitude $\tilde{a}(z,\zeta)$ is compactly supported (in the unit ball), and for a suitable choice of $\theta$ belongs to $L^{\infty}S^m$ provided \footnote{This decay assumption is due to the singularity at $0$ of the factor $|z|^{-n(1+\theta)/p}$ coming from the Jacobian. Note that any improvement on the regularity of $\tilde{a}, \tilde{\varphi}$ should translate into decay properties at infinity of the original amplitude and phase $a,\varphi$.} \begin{align} \label{DecayAssum} \langle x \rangle^{s} a(x,\xi) \in L^{\infty}S^m, \quad s>\frac{n}{p}. \end{align} Now suppose that $\varphi$ satisfies the following (global) non-degeneracy assumption: \begin{align} |\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(y,\xi)| \geq c |x-y| \end{align} for all $x,y \in \mathbf{R}^n$. Then since \footnote{In the case of the Kelvin transform $\theta=1$, it is easy to get a better lower bound (in fact an equality): $$ \Big|\frac{z}{|z|^{2}}-\frac{w}{|w|^{2}} \Big|^2 = |z|^{-2}+|w|^{-2} - \frac{2\langle w,z\rangle}{|z|^{2}|w|^{2}}=\frac{|z-w|^2}{|z|^2|w|^2}.
$$} \begin{align} \Big|\frac{z}{|z|^{1+\theta}}-\frac{w}{|w|^{1+\theta}} \Big|^2 &= |z|^{-2\theta}+|w|^{-2\theta} - \frac{2\langle w,z\rangle}{|z|^{1+\theta}|w|^{1+\theta}} \\ \nonumber &\geq \frac{1}{\max(|w|,|z|)^{1+\theta}} \, |z-w|^2, \end{align} the phase $\tilde{\varphi}(z,\zeta)=\varphi(z/|z|^{1+\theta},\zeta)$ satisfies a similar non-degeneracy condition, namely \begin{align} |\nabla_{\zeta}\tilde{\varphi}(z,\zeta)-\nabla_{\zeta}\tilde{\varphi}(w,\zeta)|&=\Big|\nabla_{\zeta}\varphi\Big(\frac{z}{|z|^{1+\theta}},\zeta\Big) -\nabla_{\zeta}\varphi\Big(\frac{w}{|w|^{1+\theta}},\zeta\Big)\Big| \\ \nonumber & \geq \frac{c}{\max(|w|,|z|)^{\frac{1+\theta}{2}}} \, |z-w| \geq c |z-w|, \end{align} when $|w|,|z| \leq 1$. In order to improve the decay assumption \eqref{DecayAssum} on the amplitude, one can consider more general changes of variables which do not affect the angular coordinate in the polar decomposition, i.e. coordinate changes of the form $$ z=f(|x|)\frac{x}{|x|}, \textrm{ where } f:(0,\infty) \to (0,1) \textrm{ is a diffeomorphism.} $$ Then $x=g(|z|)z/|z|$, where $g$ is the inverse function of $f,$ and the Jacobian of such a change of variables is given by $$ |g'(|z|)|\,g^{n-1}(|z|)\,|z|^{1-n}. $$ We would like to choose $g$ in such a way that the singularities of its Jacobian become weaker than those in the case of $g(s)=s^{-\theta}.$ One possible choice is to take $$ g(s) = -\log(1-s), $$ for which we have $$ |g'(s)|\,g^{n-1}(s)\,s^{1-n}= \frac{(-\log(1-s))^{n-1}}{s^{n-1}(1-s)} = \mathcal{O}\big((1-s)^{-1-\theta}\big) $$ if $s \in (0,1)$. For this choice, we need the following decay \begin{align} \langle x \rangle^{s} a(x,\xi) \in L^{\infty}S^m, \quad s>\frac{1}{p}.
\end{align} Furthermore, if one assumes that $g(s)/s$ is increasing, then the phase $\tilde{\varphi}$ satisfies our non-degeneracy assumption, because \begin{align} \Big|\frac{z}{|z|}g(|z|)&-\frac{w}{|w|}g(|w|) \Big|^2 \\ \nonumber &= g(|z|)^{2}+g(|w|)^{2} - \frac{2\langle w,z\rangle}{|z||w|}g(|z|)g(|w|) \\ \nonumber &\geq \Big(\frac{g}{s}\Big)^{2}\big(\min(|w|,|z|)\big) \, |z-w|^2 \geq \big(g'(0)\big)^{2} \, |z-w|^2 . \end{align} Alternatively, in order to investigate global boundedness using a dyadic decomposition, one takes a Littlewood-Paley partition of unity $1=\chi(x) + \sum_{j=1}^{\infty}\psi(2^{-j} x)$, which yields \begin{align} T = \chi T + \sum_{j=1}^{\infty} T_{j}, \quad T_j:=\psi(2^{-j}\cdot)T. \end{align} Once again we are only interested in the $T_{j}$, and following a change of variables, we want to prove \begin{align} \int \big|T_{j}\big(u(2^{-j}\cdot)\big)(2^jz) \big|^p \, \mathrm{d} z \leq C_{p} \int |u(z)|^p \, \mathrm{d} z. \end{align} This leads us to the study of the operator \begin{align} \tilde{T}_{j}u(z) & = T_{j}\big(u(2^{-j}\cdot)\big)(2^j z) \\ \nonumber &= (2\pi)^{-n} \int e^{i 2^{-j}\varphi(2^j z,\zeta)} \underbrace{\psi(z) a\big(2^j z,2^{-j}\zeta\big)}_{=\tilde{a}_{j}(z,\zeta)} \widehat{u}(\zeta) \, \mathrm{d} \zeta. \end{align} The estimate \begin{align} |\partial^{\alpha}_{\zeta} \tilde{a}_{j}(z,\zeta)| &\leq \underbrace{2^{-j|\alpha|}(1+2^{j}|z|)^{m}}_{\simeq 2^{j(m-|\alpha|)}} (1+2^{-j}|\zeta|)^{m-|\alpha|} \\ \nonumber &\leq C_{\alpha} \, (1+|\zeta|)^{m-|\alpha|} \end{align} yields that the amplitude $\tilde{a}_{j}(z,\zeta)$ belongs (uniformly with respect to $j$) to the class $L^{\infty}S^m$ provided \begin{align} \langle x \rangle^{-m} a(x,\xi) \in L^{\infty}S^m . \end{align} The phase $\tilde{\varphi}_{j}(z,\zeta)=2^{-j}\varphi(2^j z,\zeta)$ satisfies the non-degeneracy assumption \begin{align} |\nabla_{\zeta}\tilde{\varphi}_{j}(z,\zeta)-\nabla_{\zeta}\tilde{\varphi}_{j}(w,\zeta)| \geq c |z-w|.
\end{align} Therefore, once again the problem of establishing the global $L^p$ boundedness is reduced to a local problem concerning operators with rough amplitudes. \section{Global boundedness of Fourier integral operators} In this chapter, partly motivated by the investigation in \cite{KS} of the $L^{p}$ boundedness of the so-called pseudo-pseudodifferential operators, whose symbols are merely bounded and measurable in the spatial variable $x$, we consider the global and local boundedness in Lebesgue spaces of Fourier integral operators of the form \begin{align}\label{Fourier integral operator} Tu(x) = (2\pi)^{-n} \int_{\mathbf{R}^n} e^{i\varphi(x,\xi)} a(x,\xi) \widehat{u}(\xi) \, \mathrm{d}\xi, \end{align} in the case when the phase function $\varphi(x,\xi)$ is smooth and homogeneous of degree 1 in the frequency variable $\xi,$ and the amplitude $a(x,\xi)$ either belongs to some H\"ormander class $S^{m}_{\varrho, \delta},$ or is an $L^{\infty}$ function in the spatial variable $x$ and belongs to some $L^\infty S^{m}_\varrho$ class. We shall also investigate the $L^p$ boundedness problem for Fourier integral operators with rough phases that are $L^{\infty}$ functions in the spatial variable. In the case of a rough phase, the standard notion of non-degeneracy of the phase function has no meaning, due to the lack of differentiability in the $x$ variables. However, there is a non-smooth analogue of the non-degeneracy condition, which has already been introduced in Definition \ref{defn of rough nondegeneracy} and will be exploited further here.\\ We start by investigating the question of $L^1$ boundedness of Fourier integral operators with rough amplitudes but smooth phase functions satisfying the strong non-degeneracy condition. Thereafter we turn to the problem of $L^2$ boundedness of Fourier integral operators with smooth phases, but rough or smooth amplitudes.
In the case of smooth amplitudes, we show the analogue of the Calder\'on-Vaillancourt theorem on $L^2$ boundedness of pseudodifferential operators in the realm of Fourier integral operators. Next, we consider Fourier integral operators with rough amplitudes and rough phase functions and show a global and a local $L^2$ result in that context. We also give a fairly general discussion of the symplectic aspects of the $L^2$ boundedness of Fourier integral operators.\\ After concluding our investigation of the $L^2$ boundedness, we proceed by proving a sharp $L^\infty$ boundedness theorem for Fourier integral operators with rough amplitudes and rough phases in the class $L^\infty\Phi^2,$ without any non-degeneracy assumption on the phase. Finally, we close this chapter by proving $L^p-L^p$ and $L^p-L^q$ estimates for operators with smooth phase functions, and smooth or rough amplitudes. \subsection{Global $L^1$ boundedness of rough Fourier integral operators} As will be shown below, the global $L^1$ boundedness of Fourier integral operators is a consequence of Theorem \ref{general low frequency boundedness for rough Fourier integral operator}, the Seeger-Sogge-Stein decomposition, and elementary kernel estimates. \begin{thm} \label{Intro:L1Thm} Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_{\varrho}$ and phase function $\varphi \in L^{\infty}\Phi^2$ satisfying the rough non-degeneracy condition. Then there exists a constant $C>0$ such that $$ \Vert Tu\Vert_{L^{1}} \leq C \Vert u\Vert_{L^{1}}, \quad u \in \mathscr{S}(\mathbf{R}^n), $$ provided $m<-\frac{n-1}{2} +n(\varrho-1)$ and $0\leq \varrho\leq 1$. \end{thm} \begin{proof} Using the semiclassical reduction of Subsection \ref{Semiclasical reduction subsec}, we decompose $T$ into low and high frequency portions $T_0$ and $T_h$.
Then we use the Seeger-Sogge-Stein decomposition of Subsection \ref{SSS decomposition} to decompose $T_h$ into the sum $\sum_{\nu=1}^{J} T_{h}^{\nu}.$ The boundedness of $T_0$ follows at once from Theorem \ref{general low frequency boundedness for rough Fourier integral operator}, so it remains to establish suitable semiclassical estimates for $T_{h}^{\nu}$. To this end we consider the differential operator \begin{align*} L = 1-\partial_{\xi_{1}}^2-h \partial_{\xi'}^2, \end{align*} for which we have, according to Lemma \ref{Linfty:bLemma}, \begin{equation} \sup_{\xi} \Vert L^{N}b^{\nu}(\cdot,\xi,h)\Vert_{L^{\infty}} \lesssim h^{-m-2N(1-\varrho)}. \end{equation} Integrations by parts yield \begin{align*} |T_{h}^{\nu}(x,y)| \leq (2\pi h)^{-n} \big(1+g(y-\nabla_{\xi}\varphi(x,\xi^{\nu}))\big)^{-N} \int |L^N b^{\nu}(x,\xi,h) | \, \mathrm{d} \xi \end{align*} for all integers $N$, with \begin{equation} g(z)=h^{-2}z_{1}^2+h^{-1}|z'|^{2}. \end{equation} This further gives \begin{align*} |T_{h}^{\nu}(x,y)| \leq C_{N} h^{-m-\frac{n+1}{2}-2N(1-\varrho)} \big(1+g(y-\nabla_{\xi}\varphi(x,\xi^{\nu}))\big)^{-N}, \end{align*} since the volume of the portion of the cone $|A \cap \Gamma_{\nu}|$ is of the order of $h^{(n-1)/2}$. By interpolation, it is easy to obtain the former bound when the integer $N$ is replaced by $M/2$ where $M$ is any given positive number; indeed write $M/2=N+\theta$ where $N=[\frac{M}{2}]$ and $\theta \in [0,1)$, so that \begin{align}\label{T(x,y) estim} |T^{\nu}_{h}(x,y)| &= |T^{\nu}_{h}(x,y)|^{\theta} |T^{\nu}_{h}(x,y)|^{1-\theta} \nonumber \\ &\leq C^{1-\theta}_{N} C^{\theta}_{N+1}h^{-m-\frac{n+1}{2}-(1-\varrho)M} \big(1+g(y-\nabla_{\xi}\varphi(x,\xi^{\nu}))\big)^{-\frac{M}{2}}. \end{align} This implies that for any real number $M>n$ \begin{align*} \sup_{x} \int |T_h^{\nu}(x,y)| \, \mathrm{d} y \leq C_{M} h^{-m-M(1-\varrho)}.
\end{align*} Furthermore, the rough non-degeneracy assumption on the phase function $\varphi(x,\xi)$ and Corollary \ref{cor:main substitution estim} with $p=1$ yield \begin{align*} \sup_{y} \int |T_h^{\nu}(x,y)| \, \mathrm{d} x &\leq C^{1-\theta}_{N} C^{\theta}_{N+1} h^{-m-\frac{n+1}{2}-(1-\varrho)M} \int\big(1+g(\nabla_{\xi}\varphi(x,\xi^{\nu})-y)\big)^{-\frac{M}{2}}\, \mathrm{d} x \\ \nonumber &\leq C_{M} h^{-m-M(1-\varrho)}, \end{align*} thus using Young's inequality and summing in $\nu$, $$ \Vert T_h u\Vert_{L^{1}} \leq \sum_{\nu=1}^J \Vert T_h^{\nu} u\Vert_{L^{1}} \leq C_{M} h^{-m-\frac{n-1}{2}-M(1-\varrho)}\Vert u\Vert_{L^{1}}, $$ since $J$ is bounded (from above and below) by a constant times $h^{-\frac{n-1}{2}}$. By Lemma \ref{Lp:semiclassical} one has $$ \Vert T u\Vert_{L^{1}} \lesssim \Vert u\Vert_{L^{1}} $$ provided $m<-\frac{n-1}{2}-M(1-\varrho)$ for some $M>n$, i.e. if $m<-\frac{n-1}{2}+n(\varrho -1)$. This completes the proof of Theorem \ref{Intro:L1Thm}. \end{proof} \subsection{Local and global $L^2$ boundedness of Fourier integral operators} In this section we study the local and global $L^2$ boundedness properties of Fourier integral operators. Here we complete the global $L^2$ theory of Fourier integral operators with smooth strongly non-degenerate phase functions in the class $\Phi^2$ and smooth amplitudes in the H\"ormander class $S^{m}_{\varrho, \delta}$ for all ranges of $\varrho$'s and $\delta$'s.
As a first step we establish the global $L^2$ boundedness of Fourier integral operators with smooth phases and rough amplitudes in $L^\infty S^{m}_{\varrho}$; then we proceed by investigating the $L^2$ boundedness of Fourier integral operators with smooth phases and amplitudes; and finally we consider the $L^2$ regularity of operators with rough amplitudes in $L^{\infty}S^{m}_{\varrho}$ and rough non-degenerate phase functions in $L^{\infty} \Phi^2.$ \subsubsection{$L^2$ boundedness of Fourier integral operators with phases in $\Phi^2$} The global $L^2$ boundedness of Fourier integral operators which we aim to prove below yields, on one hand, a global version of Eskin's and H\"ormander's local $L^2$ boundedness theorem for amplitudes in $S^{0}_{1,0}$, and on the other hand generalises Fujiwara's global $L^2$ result for amplitudes in $S^{0}_{0,0}$ to the case of rough amplitudes. Furthermore, as we shall see later, our result is sharp. \begin{thm} \label{global L2 boundedness smooth phase rough amplitude} Let $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ and let the phase $\varphi(x,\xi) \in \Phi^2$ be strongly non-degenerate. Then the Fourier integral operator $$T_{a,\varphi}u(x)=\frac{1}{(2\pi)^n} \int a(x,\xi)\, e^{i\varphi (x,\xi)}\,\widehat{u}(\xi)\, \mathrm{d}\xi$$ is a bounded operator from $L^2$ to itself provided $m<\frac{n}{2}(\varrho-1).$ The bound on $m$ is sharp.
\end{thm} \begin{proof} In light of Theorem \ref{general low frequency boundedness for rough Fourier integral operator}, we can confine ourselves to the high frequency component $T_{h}$ of $T_{a,\varphi},$ hence we can assume that $\xi\neq 0$ on the support of the amplitude $a(x,\xi).$ Here we shall use a $T_{h}T_{h}^*$ argument; the kernel of the operator $S_{h}=T_{h}T_{h}^*$ reads \begin{align*} S_{h}(x,y) = \frac{1}{(2\pi h)^{n}} \int e^{\frac{i}{h}(\varphi(x,\xi)-\varphi(y,\xi))} \chi^2(\xi) a(x,\xi/h) \overline{a}(y,\xi/h) \, \mathrm{d}\xi. \end{align*} Now the strong non-degeneracy assumption on the phase and Proposition \ref{equivalence of lowerbound and non degeneracy} yield that there is a constant $C>0$ such that $|\nabla_{\xi} \varphi(x,\xi)- \nabla_{\xi}\varphi(y,\xi)|\geq C |x-y|,$ for $x,y\in\mathbf{R}^{n}$ and $\xi\in \mathbf{R}^{n}\setminus 0.$ This enables us to use the non-stationary phase estimate of Theorem 7.7.1 in \cite{H1}, which together with the smoothness of the phase function $\varphi(x,\xi)$ in the spatial variable yields that for all integers $N$ \begin{align*} |S_{h}(x,y)| \leq C_N h^{-2m-n-(1-\varrho)N} \langle h^{-1}(x-y) \rangle^{-N}, \end{align*} for some constant $C_N >0.$ Let $M$ be a positive real number; write $M=N+\theta$ where $N$ is the integer part of $M$ and $\theta \in [0,1)$, so that \begin{align} \nonumber |S_{h}(x,y)| &= |S_{h}(x,y)|^{\theta} |S_{h}(x,y)|^{1-\theta} \\ &\leq C^{1-\theta}_{N} C^{\theta}_{N+1}h^{-2m-n-(1-\varrho)M} \langle h^{-1}(x-y) \rangle^{-M}. \end{align} This implies \begin{align} \sup_{x} \int |S_{h}(x,y)| \, \mathrm{d} y \leq C_{M} h^{-2m-(1-\varrho)M} \end{align} for all $M>n$. By the Cauchy-Schwarz and Young inequalities, we obtain \begin{align}\label{semiclassical L2 pieces} \Vert T_{h}^*u\Vert_{L^2}^2 \leq \Vert S_{h}u\Vert_{L^2} \Vert u\Vert_{L^2} \leq C h^{-2m-(1-\varrho)M} \Vert u\Vert^2_{L^2}.
\end{align} Therefore, by Lemma \ref{Lp:semiclassical} we have the $L^2$ bound $$ \Vert Tu\Vert_{L^2} \lesssim \Vert u\Vert_{L^2} $$ provided $m<-(1-\varrho)M/2$ and $M>n$, which yields the boundedness claim of Theorem \ref{global L2 boundedness smooth phase rough amplitude}. For the sharpness of this result we consider the phase function $\varphi(x,\xi)= \langle x,\xi\rangle \in \Phi^2,$ which is strongly non-degenerate. It was shown in \cite{Rod} that for $m=\frac{n}{2}(\varrho-1)$ there are symbols $a(x,\xi)\in S^{m}_{\varrho, 1}$ such that the pseudodifferential operator $$ a(x,D)u(x)= \frac{1}{(2\pi)^{n}} \int a(x,\xi) \, e^{i\langle x, \xi\rangle}\, \widehat{u}(\xi)\, \mathrm{d} \xi $$ is not $L^2$ bounded. Since $S^{m}_{\varrho, 1}\subset L^{\infty}S^{m}_{\varrho}$, there are therefore amplitudes in $L^{\infty}S^{m}_{\varrho}$ which yield an $L^2$ unbounded operator for a non-degenerate phase function in the class $\Phi^2.$ Hence the order $m$ in the theorem is sharp. \end{proof} As a consequence, we obtain an alternative proof of the $L^2$ boundedness of the pseudo-pseudodifferential operators introduced in \cite{KS}. More precisely, we have \begin{cor} \label{Intro:psipsi} Let $a(x,D)$ be a pseudo-pseudodifferential operator, i.e. an operator defined on the Schwartz class by \begin{equation} a(x,D)u=\frac{1}{(2\pi)^{n}} \int_{\mathbf{R}^n} e^{i\langle x, \xi\rangle} a(x,\xi) \widehat{u}(\xi) \, \mathrm{d} \xi, \end{equation} with symbol $a \in L^{\infty} S^{m}_{\varrho},$ $0\leq \varrho \leq 1$. If $m<n(\varrho -1)/2$, then $a(x,D)$ extends to an $L^2$ bounded operator.
\end{cor} Theorem \ref{global L2 boundedness smooth phase rough amplitude} can be used to show a simple local $L^2$ boundedness result for Fourier integral operators with smooth symbols in the H\"ormander class $S^{m}_{\varrho, \delta}$ in those cases when the symbolic calculus of Fourier integral operators, as defined in \cite{H3}, breaks down (e.g. in case $\delta \geq \varrho$). More precisely, we have \begin{cor} Let $a(x,\xi) \in S^{m}_{\varrho, \delta}$ have compact support in the $x$ variable and let $\varphi(x,\xi) \in \Phi^2$ be strongly non-degenerate. If $m<\frac{n}{2}(\varrho -\delta-1)$, $0\leq \varrho \leq 1$ and $0\leq \delta \leq 1,$ then the corresponding Fourier integral operator is bounded on $L^2 .$ \end{cor} \begin{proof} By the Sobolev embedding theorem, for a function $f(x,y)$ one has $$\int |f(x,x)|^2 \, \mathrm{d} x \leq C_{n} \sum_{|\alpha|\leq N} \iint |\partial^{\alpha}_{y} f(x,y)|^2 \,\mathrm{d} x \, \mathrm{d} y$$ with $N>n/2$. Now let $f(x,y):= \int a(y,\xi)\, e^{i\varphi(x,\xi)} \, \hat{u}(\xi) \, \mathrm{d}\xi .$ Since $a(x,\xi) \in S^{m}_{\varrho, \delta}$, we have that $\partial^{\alpha}_{y}a(y,\xi) \in L^{\infty}S^{m+\delta|\alpha|}_{\varrho}$. Therefore, Theorem \ref{global L2 boundedness smooth phase rough amplitude} yields $$\int |\partial^{\alpha}_{y} f(x,y)|^2 \,\mathrm{d} x \lesssim \Vert u\Vert^2_{L^2},$$ provided that $m+\delta|\alpha| < \frac{n}{2}(\varrho-1)$. Since $|\alpha|\leq N$ and $N>n/2,$ one sees that it suffices to take $m<\frac{n}{2}( \varrho-\delta-1).$ We also note that in the argument above, the integration in the $y$ variable causes no problem, due to the compact support assumption on the amplitude. \end{proof} However, as was shown by D.
Fujiwara in \cite{Fuji}, Fourier integral operators with phases in $\Phi^2$ and amplitudes in $S^{0}_{0,0}$ are bounded on $L^2.$ This result suggests the possibility of an analogue of the Calder\'{o}n-Vaillancourt theorem \cite{CV}, concerning $L^2$ boundedness of pseudodifferential operators with symbols in $S^{0}_{\varrho, \varrho}$ with $\varrho\in [0,1),$ in the realm of smooth Fourier integral operators. That this is indeed the case will be the content of Theorem \ref{Calderon-Vaillancourt for FIOs} below. However, before proceeding with the statement of that theorem, we will need two lemmas, the first of which is a continuous version of the Cotlar-Stein lemma, due to A. Calder\'{o}n and R. Vaillancourt; see e.g. \cite{CV} for a proof. \begin{lem}\label{calderon-vaillancourt lemma} Let $\mathscr{H}$ be a Hilbert space, and $A(\xi)$ a family of bounded linear endomorphisms of $\mathscr{H}$ depending on $\xi \in \mathbf{R}^n .$ Assume the following three conditions hold: \begin{enumerate} \item the operator norm of $A(\xi)$ is less than a number $C$ independent of $\xi;$ \item for every $u\in \mathscr{H}$ the function $\xi\mapsto A(\xi)u$ from $\mathbf{R}^{n}$ to $\mathscr{H}$ is continuous for the norm topology of $\mathscr{H};$ \item for all $\xi_{1}$ and $\xi_2$ in $\mathbf{R}^n$ \begin{equation}\label{cotlar estimates} \Vert A^{\ast}(\xi_1) A(\xi_2)\Vert \leq h(\xi_1 , \xi_2)^2,\,\,\,\text{and}\,\,\, \Vert A(\xi_1) A^{\ast}(\xi_2)\Vert \leq h(\xi_1 , \xi_2)^2 , \end{equation} where $h(\xi_1 , \xi_2 )\geq 0$ is the kernel of a bounded linear operator on $L^2$ with norm $K$.
\end{enumerate} Then for every $E\subset \mathbf{R}^n$, with $|E|<\infty$, the operator $A_{E}=\int_{E} A(\xi) \, \mathrm{d}\xi$ defined by $\langle A_{E} u, v\rangle _{\mathscr{H}} = \int _{E} \langle A(\xi) u, v\rangle _{\mathscr{H}}\, \mathrm{d}\xi,$ is a bounded linear operator on $\mathscr{H}$ with norm less than or equal to $K.$ \end{lem} We shall also use the following useful lemma. \begin{lem}\label{integration by parts lem} Let \begin{equation}\label{definition of integration by parts} Lu(x):= D^{-2} (1-i s(x)\langle\nabla_{x}F, \nabla_{x}\rangle)u(x), \end{equation} with $D:= (1+ s(x)|\nabla_{x} F|^{2})^{1/2}.$ Then \begin{enumerate} \item $L (e^{iF(x)})= e^{iF(x)}$ \item if ${}^{t}L$ denotes the formal transpose of $L,$ then for any positive integer $N,$ $({}^{t}L)^{N} u(x)$ is a finite linear combination of terms of the form \begin{equation}\label{mainterm} CD^{-k} \{\prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{x} s(x)\}\{\prod_{\nu =1}^{q}\partial^{\beta_{\nu}}_{x} F(x)\}\partial^{\gamma}_{x} u(x), \end{equation} with \begin{multline}\label{relations for order of derivatives} 2N\leq k\leq 4N ;\, k-2N\leq p\leq k-N ;\, |\alpha_{\mu}|\geq 0;\, \sum_{\mu=1}^{p} |\alpha_{\mu}|\leq N\\ k-2N\leq q\leq k-N;\, |\beta_{\nu}|\geq 1;\,\sum_{\mu=1}^{q} |\beta_{\nu}|\leq q+N;\, |\gamma| \leq N. \end{multline} \end{enumerate} \end{lem} \begin{proof} First one notes that \begin{equation*} \partial_{x_j} D^{-N}= -\frac{N}{2} D^{-N-2}\sum_{k=1}^{n}\{2s(x) \,\partial_{x_k}F\, \partial^{2}_{x_{j} x_{k}}F + \partial_{x_{j}}s\, (\partial_{x_{k}}F)^2\}. 
\end{equation*} This and Leibniz's rule yield \begin{align*} {}^{t}Lu(x) &= D^{-2} u(x)+i\sum_{j=1}^{n}\partial_{x_{j}}(D^{-2}\, s(x)\,u(x)\, \partial_{x_j}F ) \\ &=D^{-2} u(x)-iD^{-4}\sum_{k,\,j=1}^{n} u (x)\, s(x)\,\partial_{x_j}F \,\big\{2s(x) \,\partial_{x_k}F\, \partial^{2}_{x_{j} x_{k}}F \\ &\quad + \partial_{x_{j}}s\, (\partial_{x_{k}}F)^2\big\} +i D^{-2} \sum_{j=1}^{n} u(x)\, \partial_{x_{j}}s(x)\,\partial_{x_{j}}F \\ &\quad+i D^{-2} \sum_{j=1}^{n} s(x)\,u(x)\,\partial^{2}_{x_{j}}F + i D^{-2} \sum_{j=1}^{n} s(x)\,\partial_{x_{j}}u(x)\,\partial_{x_{j}}F. \end{align*} From this it follows that ${}^{t}L$ is a linear combination of operators of the form \begin{equation} \label{type1} D^{-2}\times \end{equation} \begin{equation} \label{type2} D^{-4}s^{2}(x)\,\partial_{x_j}F \,\partial_{x_k}F\, \partial^{2}_{x_{j} x_{k}}F \times \end{equation} \begin{equation} \label{type3} D^{-4}s\,\partial_{x_{j}}s\,\partial_{x_j}F \, (\partial_{x_{k}}F)^2 \times \end{equation} \begin{equation} \label{type4} D^{-2}\partial_{x_{j}}s(x)\,\partial_{x_{j}}F\times \end{equation} \begin{equation} \label{type5} D^{-2} s(x)\,\partial^{2}_{x_{j}}F\times \end{equation} \begin{equation} \label{type6} D^{-2} s(x)\,\partial_{x_{j}}F\,\partial_{x_{j}}. \end{equation} If we conventionally say that the term \eqref{mainterm} is of the type $$\bigg(k,p, \sum_{\mu=1}^{p}|\alpha_{\mu}|, q, \sum_{\nu=1}^q |\beta_{\nu}|, |\gamma|\bigg),$$ then ${}^{t} L$ is a sum of terms of the types $(2,0,0,0,0,0),$ $(4,2,0,3,4,0),$ $(4,2,1,3,3,0),$ $(2,1,1,1,1,0),$ $(2,1,0,1,2,0)$ and $(2,1,0,1,1,1).$ Now applying the operators in \eqref{type1}, \eqref{type2}, \eqref{type3}, \eqref{type4}, \eqref{type5} to a term \eqref{mainterm} of type $$\bigg(k,p, \sum_{\mu=1}^{p}|\alpha_{\mu}|, q, \sum_{\nu=1}^q |\beta_{\nu}|, |\gamma|\bigg)$$ increases the type by $(2,0,0,0,0,0),$ $(4,2,0,3,4,0),$ $(4,2,1,3,3,0),$ $(2,1,1,1,1,0),$ $(2,1,0,1,2,0)$ respectively.
To see how applying the operator \eqref{type6} to \eqref{mainterm} changes the type, we use the Leibniz rule to obtain \begin{multline*} D^{-2} s(x)\,\partial_{x_{j}}F\,\partial_{x_{j}} \bigg( D^{-k} \bigg\{\prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{x} s(x)\bigg\} \bigg\{\prod_{\nu =1}^{q}\partial^{\beta_{\nu}}_{x} F(x)\bigg\}\,\partial^{\gamma}_{x} u(x)\bigg)=\\ -\frac{k}{2} \bigg(D^{-k-4}\sum_{l=1}^{n}\partial_{x_{j}}F\,\Big\{2s(x) \,\partial_{x_l}F\, \partial^{2}_{x_{j} x_{l}}F + \partial_{x_{j}}s\, (\partial_{x_{l}}F)^2\Big\}\bigg)\times\\ \bigg\{\prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{x} s(x)\bigg\}\bigg\{\prod_{\nu =1}^{q}\partial^{\beta_{\nu}}_{x} F(x)\bigg\}\,\partial^{\gamma}_{x} u(x)\\ +D^{-k-2}\partial_{x_{j}}F \sum_{\mu'=1}^{p}\bigg\{\prod_{\mu\neq \mu'}\partial^{\alpha_\mu}_{x} s(x)\bigg\} \bigg\{\partial_{x_{j}} \partial ^{\alpha_{\mu'}}_{x} s(x)\bigg\} \,\prod_{\nu =1}^{q}\partial^{\beta_{\nu}}_{x} F(x)\,\partial^{\gamma}_{x} u(x)\\ +D^{-k-2}\partial_{x_{j}}F \, \prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{x} s(x)\sum_{\nu'=1}^{q}\bigg\{\prod_{\nu\neq \nu'}\partial^{\beta_{\nu}}_{x} F(x)\bigg\} \bigg\{\partial_{x_{j}} \partial ^{\beta_{\nu'}}_{x} F(x)\bigg\}\,\partial^{\gamma}_{x} u(x)\\ +D^{-k-2} \bigg\{\prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{x} s(x)\bigg\}\bigg\{\prod_{\nu =1}^{q}\partial^{\beta_{\nu}}_{x} F(x)\bigg\}\,\partial_{x_{j}}\,\partial^{\gamma}_{x} u(x).
\end{multline*} Therefore, upon application of ${}^{t} L$ to \eqref{mainterm}, the types of the resulting terms increase by $(2,0,0,0,0,0),$ $(4,2,0,3,4,0),$ $(4,2,1,3,3,0),$ $(2,1,1,1,1,0),$ $(2,1,0,1,2,0)$ and $(2,1,0,1,1,1).$ Iteration of this process yields \begin{equation*} ({}^{t} L)^{N} u(x)= \sum C\,D^{-k} \bigg\{\prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{x} s(x)\bigg\}\bigg\{\prod_{\nu =1}^{q}\partial^{\beta_{\nu}}_{x} F(x)\bigg\}\partial^{\gamma}_{x} u(x), \end{equation*} where the summation is taken over all non-negative integers $N_1 ,$ $N_2 ,$ $N_3 ,$ $N_4 ,$ $N_5 ,$ $N_6$ with $\sum_{j=1}^{6} N_j =N$ and \begin{multline} \bigg(k,p, \sum_{\mu=1}^{p}|\alpha_{\mu}|, q, \sum_{\nu=1}^q |\beta_{\nu}|, |\gamma|\bigg)= N_1 (2,0,0,0,0,0)+ N_2 (4,2,0,3,4,0)+\\ N_3 (4,2,1,3,3,0)+ N_4 (2,1,1,1,1,0)+ N_5 (2,1,0,1,2,0)+ N_6 (2,1,0,1,1,1). \end{multline} Hence, \begin{equation} \label{k} k= 2N_1 +4 N_2 + 4 N_3 + 2N_4 +2N_5 +2N_6 \end{equation} \begin{equation} \label{p} p= 2N_2 +2N_3 + N_4 + N_5 + N_6 \end{equation} \begin{equation} \label{sum alphamu} \sum_{\mu=1}^{p} |\alpha_{\mu}|= N_3 + N_4 \end{equation} \begin{equation} \label{q} q= 3N_2 + 3 N_3 + N_4 + N_5 +N_6 \end{equation} \begin{equation} \label{sum betanu} \sum_{\nu=1}^{q} |\beta_{\nu}|=4 N_2 + 3 N_3 + N_4 + 2 N_5 + N_6 \end{equation} \begin{equation} \label{gamma} |\gamma|= N_6 . \end{equation} From this it also follows that $(k,p, \sum_{\mu=1}^{p}|\alpha_{\mu}|, q, \sum_{\nu=1}^q |\beta_{\nu}|, |\gamma|)$ satisfy \eqref{relations for order of derivatives}. 
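Since this final claim is a purely arithmetic statement about the six type vectors, it can also be checked mechanically. The following small script (ours, not part of the original argument; the names \texttt{TYPES} and \texttt{satisfies\_constraints} are ad hoc) verifies that every non-negative combination of the elementary types satisfies the aggregate constraints in \eqref{relations for order of derivatives}; the per-factor condition $|\beta_{\nu}|\geq 1$ is not visible at this level of bookkeeping.

```python
import random

# Elementary type increments contributed by one application of tL, in the
# order (k, p, sum|alpha_mu|, q, sum|beta_nu|, |gamma|); these are the six
# tuples listed in the proof.
TYPES = [
    (2, 0, 0, 0, 0, 0),
    (4, 2, 0, 3, 4, 0),
    (4, 2, 1, 3, 3, 0),
    (2, 1, 1, 1, 1, 0),
    (2, 1, 0, 1, 2, 0),
    (2, 1, 0, 1, 1, 1),
]

def resulting_type(counts):
    """Type of a term arising from counts[j] applications of the j-th elementary type."""
    return tuple(sum(c * t[i] for c, t in zip(counts, TYPES)) for i in range(6))

def satisfies_constraints(N, typ):
    """Aggregate constraints of the lemma: 2N <= k <= 4N, k-2N <= p, q <= k-N, etc."""
    k, p, sum_alpha, q, sum_beta, gamma = typ
    return (2 * N <= k <= 4 * N
            and k - 2 * N <= p <= k - N
            and sum_alpha <= N
            and k - 2 * N <= q <= k - N
            and sum_beta <= q + N
            and gamma <= N)

random.seed(0)
for _ in range(1000):
    counts = [random.randint(0, 5) for _ in range(6)]
    N = sum(counts)
    if N > 0:
        assert satisfies_constraints(N, resulting_type(counts))
```

Exhausting random combinations of $(N_1,\dots,N_6)$ in this way confirms the inequalities \eqref{relations for order of derivatives} for the aggregate quantities.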
\end{proof} \begin{thm}\label{Calderon-Vaillancourt for FIOs} Let $m=\min(0,\frac{n}{2}(\varrho-\delta)),$ $0\leq \varrho\leq 1$, $0\leq \delta<1.$ If $a\in S^{m}_{\varrho, \delta}$ and $\varphi\in \Phi^{2}$ satisfies the strong non-degeneracy condition, then the operator $T_{a} u(x)= \int a(x,\xi)\, e^{i\varphi(x,\xi)} \hat{u}(\xi)\, \mathrm{d}\xi$ is bounded on $L^2.$ \end{thm} \begin{proof} First we observe that since $S^{0}_{\varrho, \delta} \subset S^{0}_{\varrho, \varrho}$ for $\delta \leq \varrho,$ it is enough to prove the theorem for $0\leq \varrho\leq \delta<1$ and $m=\frac{n}{2}(\varrho-\delta).$ Also, as we have done previously, we can assume without loss of generality that $a(x,\xi)=0$ when $\xi$ is in a small neighbourhood of the origin. Using the $TT^{\ast}$ argument, it is enough to show that the operator \begin{equation}\label{Tb amplitude presentation} T_{b} u(x)= \iint b(x,y,\xi)\, e^{i\varphi(x,\xi)-i\varphi(y,\xi)}\, u(y)\, \mathrm{d} y \, \mathrm{d}\xi, \end{equation} where $b$ satisfies the estimate \begin{equation}\label{derivtives of b} |\partial^{\alpha}_{\xi}\partial^{\beta}_{x} \partial^{\gamma}_{y} b(x,y,\xi)|\leq C_{\alpha\, \beta\,\gamma} \langle \xi \rangle ^{m_1-\varrho|\alpha|+\delta(|\beta| +|\gamma|)}, \end{equation} with $m_1=n(\varrho-\delta)$ and $0\leq \varrho\leq \delta<1,$ is bounded on $L^2 .$\\ \noindent We introduce a differential operator $$ L:= D^{-2} \Big\{1-i\langle \xi\rangle^{\varrho}\big(\langle\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(y,\xi), \nabla_{\xi}\rangle\big)\Big\},$$ with $D=(1+\langle \xi\rangle ^{\varrho}|\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(y,\xi)|^2)^{\frac{1}{2}}.$ It follows from Lemma \ref{integration by parts lem} that \begin{equation*} L (e^{i\varphi(x,\xi)-i\varphi(y,\xi)})= e^{i\varphi(x,\xi)-i\varphi(y,\xi)} \end{equation*} and that $({}^t L)^{N}(b(x,y,\xi))$ is a finite sum of terms of the form
\begin{equation} D^{-k} \bigg\{\prod_{\mu=1}^{p}\partial^{\alpha_\mu}_{\xi} \langle \xi\rangle^{\varrho}\bigg\}\bigg\{\prod_{\nu =1}^{q} \big(\partial^{\beta_{\nu}}_{\xi}\varphi(x,\xi)-\partial^{\beta_{\nu}}_{\xi}\varphi(y,\xi)\big)\bigg\}\,\partial^{\gamma}_{\xi}b(x,y,\xi). \end{equation} Furthermore, since $\varphi \in \Phi^2$ is assumed to be strongly non-degenerate, we can use Proposition \ref{equivalence of lowerbound and non degeneracy} to deduce that \begin{equation} \label{lowerbound for phi in x} |\nabla_{\xi} \varphi(x,\xi)-\nabla_{\xi}\varphi(y,\xi)|\geq c_1 |x-y| \end{equation} and \begin{equation} \label{lowerbound for phi in xi} |\nabla_{z} \varphi(z,\xi_2)-\nabla_{z}\varphi(z,\xi_1)|\geq c_2|\xi_1-\xi_2|. \end{equation} Using \eqref{lowerbound for phi in x}, \eqref{relations for order of derivatives} and \eqref{derivtives of b}, we have \begin{equation}\label{x derivative estimate} |\partial^{\sigma}_{x}({}^t L)^{N}(b(x,y,\xi))|\leq C \Lambda(\langle \xi\rangle ^{\varrho} (x-y))\langle \xi\rangle ^{m_1+\delta|\sigma|}, \end{equation} where $\Lambda$ is an integrable function with $\int \Lambda (x) \, \mathrm{d} x\lesssim 1.$ Integrating by parts $N$ times with $L$ in \eqref{Tb amplitude presentation}, one obtains \begin{equation} T_{b} u(x)= \iint c(x,y,\xi)\, e^{i\varphi(x,\xi)-i\varphi(y,\xi)}\, u(y)\, \mathrm{d} y \, \mathrm{d}\xi, \end{equation} with $c(x,y,\xi)=({}^t L)^{N}(b(x,y,\xi))$ and \begin{equation}\label{derivative estimates for c} |\partial^{\sigma}_{x}c(x,y,\xi)|\leq C \Lambda(\langle \xi\rangle ^{\varrho} (x-y))\langle \xi\rangle ^{m_1+\delta|\sigma|} \end{equation} and the same estimate is valid for $\partial^{\sigma}_{y}c(x,y,\xi).$ From this we get the representation \begin{equation} T_{b} = \int A(\xi)\, \mathrm{d}\xi, \end{equation} where $A(\xi) u(x):= \int c(x,y,\xi)\, e^{i\varphi(x,\xi)-i\varphi(y,\xi)}\, u(y)\, \mathrm{d} y.$ Noting that $A(\xi)=0$ for $\xi$ outside some compact set, we observe that condition (1) of
Lemma \ref{calderon-vaillancourt lemma} follows from Young's inequality and \eqref{derivative estimates for c} with $\sigma=0,$ and condition (2) of Lemma \ref{calderon-vaillancourt lemma} follows from the compact support assumption on the amplitude. To verify condition (3) we confine ourselves to the estimate of $\Vert A^{\ast}(\xi_1) A(\xi_2)\Vert$, since the one for $\Vert A(\xi_1) A^{\ast}(\xi_2)\Vert$ is similar. To this end, a calculation shows that the kernel of $A^{\ast}(\xi_1) A(\xi_2) $ is given by \begin{multline}\label{kernel of A star A} K(x,y,\xi_1 , \xi_2):= \\ \int \overline{c}(z,x,\xi_1)\, c(z,y,\xi_2 )\, e^{i[\varphi(z,\xi_{2})-\varphi(z,\xi_{1})+\varphi(x,\xi_{1})-\varphi(y,\xi_{2})]}\, \mathrm{d} z. \end{multline} The estimate \eqref{derivative estimates for c} yields \begin{multline}\label{first estimate for K} |K(x,y,\xi_1 , \xi_2)|\\\lesssim \langle \xi_1\rangle ^{m_1}\, \langle \xi_2\rangle ^{m_1} \int \Lambda(\langle \xi_{1}\rangle ^{\varrho} (x-z))\, \Lambda(\langle \xi_{2}\rangle ^{\varrho} (y-z))\, \mathrm{d} z. \end{multline} Therefore, choosing $N$ large enough and using Young's inequality together with the fact that $\int \Lambda (x) \, \mathrm{d} x \lesssim 1$, we obtain \begin{equation}\label{first estim for A star A} \Vert A^{\ast}(\xi_1) A(\xi_2)\Vert \lesssim \langle \xi_1\rangle ^{m_1 -n\varrho}\, \langle \xi_2\rangle ^{m_1 -n\varrho}.
\end{equation} At this point we introduce another first-order differential operator $M:= G^{-2} \{1-i(\langle\nabla_{z}\varphi(z,\xi_2)-\nabla_{z}\varphi(z,\xi_1),\nabla_{z}\rangle)\}$, with $G=(1+|\nabla_{z}\varphi(z,\xi_2)-\nabla_{z}\varphi(z,\xi_1)|^2)^{\frac{1}{2}}.$ Using the fact that $M e^{i(\varphi(z,\xi_{2})-\varphi(z,\xi_{1}))}= e^{i(\varphi(z,\xi_{2})-\varphi(z,\xi_{1}))},$ integration by parts in \eqref{kernel of A star A} yields \begin{equation} K(x,y,\xi_1 , \xi_2)= \int ({}^{t} M)^{N'}\{\overline{c}(z,x,\xi_1)\, c(z,y,\xi_2 )\}\, e^{i[\varphi(z,\xi_{2})-\varphi(z,\xi_{1})+\varphi(x,\xi_{1})-\varphi(y,\xi_{2})]}\, \mathrm{d} z. \end{equation} Using the second part of Lemma \ref{integration by parts lem}, we find that $({}^{t} M)^{N'}\{\overline{c}(z,x,\xi_1)\, c(z,y,\xi_2 )\}$ is a linear combination of terms of the form \begin{equation}\label{differential operator M} G^{-k}\bigg\{\prod_{\nu =1}^{q}(\partial^{\beta_{\nu}}_{z}\varphi(z,\xi_2)-\partial^{\beta_{\nu}}_{z}\varphi(z,\xi_1))\bigg\} \partial^{\gamma_1}_{z}\overline{c}(z,x,\xi_1)\, \partial^{\gamma_2}_{z} c(z,y,\xi_2), \end{equation} where $k,$ $q,$ $\beta_\nu$ satisfy the inequalities in \eqref{relations for order of derivatives} and $|\gamma_1|+|\gamma_2| \leq N'.$ Now, \eqref{lowerbound for phi in xi}, \eqref{derivative estimates for c} and \eqref{differential operator M} yield the following estimate for $K(x,y,\xi_1 , \xi_2)$ \begin{align}\label{second estimate for K} |K(x,y,\xi_1 , \xi_2)|\lesssim \langle \xi_1\rangle ^{m_1}\, \langle \xi_2\rangle ^{m_1}(1+|\xi_1|+|\xi_2|)^{\delta N'} |\xi_1 -\xi_2|^{-N'} \\ \nonumber \times \int \Lambda(\langle \xi_{1}\rangle ^{\varrho} (x-z))\, \Lambda(\langle \xi_{2}\rangle ^{\varrho} (y-z))\, \mathrm{d} z.
\end{align} Once again, choosing $N$ large enough, Young's inequality yields \begin{equation}\label{second estim for A star A} \Vert A^{\ast}(\xi_1) A(\xi_2)\Vert \lesssim \langle \xi_1\rangle ^{m_1-n\varrho}\, \langle \xi_2\rangle ^{m_1-n\varrho} \frac{(1+|\xi_1|+|\xi_2|)^{\delta N'}}{|\xi_1 -\xi_2|^{N'}}. \end{equation} Using the fact that for $x>0,$ $\min (1,x) \sim (1+\frac{1}{x})^{-1}$, one combines the estimates \eqref{first estim for A star A} and \eqref{second estim for A star A} into \begin{align}\label{optimal estim for A star A} \Vert A^{\ast}(\xi_1) A(\xi_2)\Vert &\lesssim \langle \xi_1\rangle ^{m_1 -n\varrho}\, \langle \xi_2\rangle ^{m_1 -n\varrho} \bigg(1+ \frac{|\xi_1 -\xi_2|^{N'}}{(1+|\xi_1|+|\xi_2|)^{\delta N'}}\bigg)^{-1} \\\nonumber &:= h^{2}(\xi_1 , \xi_2). \end{align} Therefore, recalling that $m_1=n(\varrho-\delta),$ in order to apply Lemma \ref{calderon-vaillancourt lemma} we need to show that \begin{equation} K(\xi_1 , \xi_2)= (1+|\xi_1|)^{-\frac{n\delta}{2}} (1+|\xi_2|)^{-\frac{n\delta}{2}} \bigg(1+ \frac{|\xi_1 -\xi_2|^{N'}}{(1+|\xi_1|+|\xi_2|)^{\delta N'}}\bigg)^{-\frac{1}{2}} \end{equation} is the kernel of a bounded operator on $L^2.$ At this point we use Schur's lemma, which yields the desired conclusion provided that $$ \sup_{\xi_1} \int K(\xi_1 , \xi_2)\, \mathrm{d} \xi_2, \quad \sup_{\xi_2} \int K(\xi_1 , \xi_2)\, \mathrm{d} \xi_1 $$ are both finite.
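For the reader's convenience, we recall the form of Schur's lemma invoked here (a standard fact, stated in our notation):

```latex
If $K(\xi_1 ,\xi_2)\geq 0$ and
\[
  A:=\sup_{\xi_1}\int K(\xi_1 ,\xi_2)\,\mathrm{d}\xi_2<\infty,
  \qquad
  B:=\sup_{\xi_2}\int K(\xi_1 ,\xi_2)\,\mathrm{d}\xi_1<\infty,
\]
then the integral operator with kernel $K$ is bounded on $L^2$ with norm
at most $\sqrt{AB}$.
```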
Due to the symmetry of the kernel, we only need to show the finiteness of one of these quantities.\\ \noindent To this end, we fix $\xi_1$ and consider the domains $\mathcal{A}=\{(\xi_1, \xi_2);\, |\xi_2| \geq 2 |\xi_1|\},$ $\mathcal{B}=\{(\xi_1, \xi_2);\, \frac{|\xi_1|}{2}\leq |\xi_2| \leq 2 |\xi_1|\},$ and $\mathcal{C}=\{(\xi_1, \xi_2);\, |\xi_2| \leq \frac{ |\xi_1|}{2}\}.$ Now we observe that on the set $\mathcal{A},$ $K(\xi_1, \xi_2)$ is dominated by \begin{equation}\label{estimate on A} (1+|\xi_1|)^{-\frac{n\delta}{2}} (1+|\xi_2|)^{-\frac{n\delta}{2}+\frac{N'}{2}(\delta -1)}, \end{equation} on $\mathcal{B},$ $K(\xi_1, \xi_2)$ is dominated by \begin{equation}\label{estimate on B} (1+|\xi_1|)^{-n \delta} \bigg(1+ \frac{|\xi_1 -\xi_2|^{N'}}{(1+|\xi_1|)^{\delta N'}}\bigg)^{-\frac{1}{2}}, \end{equation} and on $\mathcal{C},$ $K(\xi_1, \xi_2)$ is dominated by \begin{equation}\label{estimate on C} (1+|\xi_2|)^{-\frac{n\delta}{2}} (1+|\xi_1|)^{-\frac{n\delta}{2}+\frac{N'}{2}(\delta -1)}. \end{equation} Therefore, if $I_{\Omega}:= \int_{\Omega} K(\xi_1 , \xi_2 )\, \mathrm{d}\xi_2,$ then choosing $\frac{N'}{2}(\delta -1)<-n ,$ which is only possible if $\delta<1, $ we have that $I_{\mathcal{A}} <\infty$ uniformly in $\xi_1$. Also, \begin{equation} I_{\mathcal{C}} \leq (1+|\xi_1|)^{n-\frac{n\delta}{2}+\frac{N'}{2}(\delta -1)}\leq C, \end{equation} which is again possible by the fact that $\delta<1$ and a suitable choice of $N'.$ In $I_{\mathcal{B}}$ we make the change of variables $\xi_2 -\xi_1 = (1+|\xi_1| )^{\delta} y$; then \begin{equation} I_{\mathcal{B}} \leq \int (1+|y|^{N'}) ^{-\frac{1}{2}} \mathrm{d} y <\infty, \end{equation} by taking $N'$ large enough. These estimates yield the desired result and the proof of the theorem is therefore complete.
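As an illustrative sanity check (ours, not part of the proof), one can also confirm numerically that the Schur kernel above has uniformly bounded $\xi_2$-integrals; the sketch below takes $n=1$, $\delta=1/2$ and $N'=8$, so that $\frac{N'}{2}(\delta-1)<-n$, and approximates the integral by a Riemann sum.

```python
import numpy as np

# Parameters chosen so that N'(delta - 1)/2 < -n, as required in the proof.
n, delta, Np = 1, 0.5, 8

def schur_kernel(xi1, xi2):
    # K(xi1, xi2) = (1+|xi1|)^{-n delta/2} (1+|xi2|)^{-n delta/2}
    #               * (1 + |xi1 - xi2|^{N'} / (1+|xi1|+|xi2|)^{delta N'})^{-1/2}
    w = (1 + abs(xi1)) ** (-n * delta / 2) * (1 + np.abs(xi2)) ** (-n * delta / 2)
    ratio = np.abs(xi1 - xi2) ** Np / (1 + abs(xi1) + np.abs(xi2)) ** (delta * Np)
    return w / np.sqrt(1 + ratio)

# Riemann-sum approximation of \int K(xi1, xi2) d(xi2) for several fixed xi1.
xi2, dx = np.linspace(-400.0, 400.0, 800001, retstep=True)
integrals = [schur_kernel(x1, xi2).sum() * dx for x1 in (0.0, 1.0, 5.0, 25.0, 125.0)]
```

The computed integrals stay of comparable moderate size as $\xi_1$ grows, consistent with the uniform bound that Schur's lemma requires.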
\end{proof} \subsubsection{$L^2$ boundedness of Fourier integral operators with phases in $L^\infty \Phi^2$} Next we shall turn to the problem of $L^2$ boundedness of Fourier integral operators with non-smooth amplitudes and phases. As was mentioned in the introduction, a motivation for considering fully rough Fourier integral operators stems from a ``linearisation'' procedure which reduces certain maximal operators to Fourier integral operators with a non-smooth phase and sometimes also a non-smooth amplitude. For instance, estimates for the maximal spherical average operator \begin{align*} A u(x) = \sup_{t \in [0,1]} \bigg| \int_{S^{n-1}} u(x+t\omega) \, \mathrm{d}\sigma(\omega) \bigg| \end{align*} are related to those for the maximal wave operator $$ W u(x) = \sup_{t \in [0,1]} \big|e^{it\sqrt{-\Delta}}u(x)\big|, $$ and can for instance be deduced from those of the linearized operator \begin{align} \label{Intro:LinWave} e^{it(x)\sqrt{-\Delta}}u=(2\pi)^{-n} \int_{\mathbf{R}^n} e^{it(x)|\xi|+i\langle x,\xi \rangle} \widehat{u}(\xi) \, \mathrm{d}\xi, \end{align} where $t(x)$ is a measurable function in $x$ with values in $[0,1],$ and the phase here belongs to the class $L^\infty \Phi^2.$ As will be demonstrated later, the validity of the results in the rough case depends on the geometric conditions (imposed on the phase functions) which are the rough analogues of the non-degeneracy and corank conditions for smooth phases. In trying to understand the subtle interrelations between boundedness, smoothness and geometric conditions, we remark that even if one assumes the phase of the linearized operator \eqref{Intro:LinWave} to be smooth, there are cases for which the canonical relation of this operator ceases to be the graph of a symplectomorphism.
Indeed, contrary to the wave operator $e^{i t\sqrt{-\Delta}}$ at fixed time $t \in [0,1]$, the phase $\varphi(x,\xi)=\langle x,\xi \rangle+t(x)|\xi|$ of the linearized operator cannot, in certain cases, be a generating function of a canonical transformation (see \cite{D}), since \begin{align*} \frac{\partial^{2}\varphi}{\partial x \partial\xi}(x,\xi) &= {\rm Id} + \nabla t(x) \otimes \frac{\xi}{|\xi|}, \\ \ker \frac{\partial^{2}\varphi}{\partial x \partial\xi}(x,\xi) &= \mathop{\rm span} \nabla t(x) \quad \textrm{ when } \langle \xi,\nabla t(x)\rangle + |\xi|=0, \end{align*} and this happens when $|\nabla t(x)| \geq 1$ and $\xi = \varrho(-\frac{\nabla t(x)}{|\nabla t(x)|^{2}}+\eta)$ with $\varrho \in \mathbf{R}^*_{+}$ and $\eta$ a vector orthogonal to $\nabla t(x)$ of norm $(1-|\nabla t(x)|^{-2})^{1/2}$. Therefore, one cannot expect $L^2$ boundedness of \eqref{Intro:LinWave} even when the function $t(x)$ is smooth. Nevertheless, in this case the rank of the Hessian $\partial^2\varphi/\partial x\partial \xi$ drops by one with respect to its maximal possible value, and one could still establish $L^2$ estimates with loss of derivatives (see Section \ref{subsec:symplecticL2} for more details). The operators that we intend to study will fall into this category. Before we investigate the local $L^2$ boundedness of operators based on geometric conditions on their phase, we state and prove a purely analytic global $L^2$ boundedness result which will be used later.\\ \begin{thm} \label{Intro:L2Thm} Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_{\varrho}, 0\leq \varrho \leq 1$ and a phase function $\varphi(x,\xi) \in L^{\infty}\Phi^2$ satisfying the rough non-degeneracy condition. Then there exists a constant $C>0$ such that $$ \Vert T u\Vert_{L^{2}} \leq C\, \Vert u\Vert_{L^{2}}, $$ provided $m<n(\varrho -1)/2-(n-1)/4$.
\end{thm} \begin{proof} Using the semiclassical reduction of Subsection \ref{Semiclasical reduction subsec}, we decompose $T$ into low and high frequency portions $T_0$ and $T_h$. The boundedness of $T_0$ follows at once from Theorem \ref{general low frequency boundedness for rough Fourier integral operator}, so it remains to establish suitable semiclassical estimates for $T_{h}$. Once again we use the $TT^*$ argument. The kernel of the operator $S_{h}=T_{h}T_{h}^*$ reads \begin{align*} S_{h}(x,y) = (2\pi h)^{-n} \int e^{\frac{i}{h}(\varphi(x,\xi)-\varphi(y,\xi))} \chi^2(\xi) a(x,\xi/h) \overline{a}(y,\xi/h) \, \mathrm{d} \xi. \end{align*} We now use the Seeger-Sogge-Stein decomposition (Section \ref{SSS decomposition}) and split this operator as the sum $\sum_{\nu=1}^{J} S_h^{\nu},$ where the kernel of $S_{h}^{\nu}$ takes the form \begin{align*} S^{\nu}_{h}(x,y) = (2\pi h)^{-n} \int e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi^{\nu})-\nabla_{\xi}\varphi(y,\xi^{\nu}),\xi\rangle} b^{\nu}(x,\xi,h) \overline{b^{\nu}}(y,\xi,h) \, \mathrm{d} \xi. \end{align*} We consider the following differential operator $$ L = 1-\partial_{\xi_{1}}^2-h\partial_{\xi'}^2 $$ for which we have, according to Lemma \ref{Linfty:bLemma}, \begin{equation} \sup_{\xi} \Vert L^{N}b^{\nu}(\cdot,\xi,h)\Vert_{L^{\infty}} \lesssim h^{-m-2N(1-\varrho)}. \end{equation} Integration by parts yields \begin{multline*} |S_{h}^{\nu}(x,y)| \leq (2\pi h)^{-n} \big(1+g\big(\nabla_{\xi}\varphi(y,\xi^{\nu})-\nabla_{\xi}\varphi(x,\xi^{\nu})\big)\big)^{-N} \\ \times \int \big|L^N \big(b^{\nu}(x,\xi,h) \overline{b^{\nu}}(y,\xi,h)\big)\big| \, \mathrm{d} \xi \end{multline*} for all integers $N$, with \begin{equation} g(z)=h^{-2}z_{1}^2+h^{-1}|z'|^{2}.
\end{equation} The standard interpolation trick gives the same inequality for all positive real numbers $M$, and thus we have \begin{align*} |S_{h}^{\nu}(x,y)| \leq C h^{-2m-\frac{n+1}{2}-2M(1-\varrho)} \big(1+g\big(\nabla_{\xi}\varphi(y,\xi^{\nu})-\nabla_{\xi}\varphi(x,\xi^{\nu})\big)\big)^{-M} \end{align*} since the volume of the portion of the cone $|A \cap \Gamma_{\nu}|$ is of the order of $h^{(n-1)/2}$. By the non-degeneracy assumption and Lemma \ref{Lem:Substitution}, we get \begin{align*} \int |S_{h}^{\nu}(x,y)| \, \mathrm{d} y \leq C h^{-2m-\frac{n+1}{2}-2M(1-\varrho)} \underbrace{\int \big(1+g(z)\big)^{-M} \, \mathrm{d} z}_{= c h^{\frac{n+1}{2}}}. \end{align*} By Young's inequality (remembering that the kernel $S_{h}^{\nu}(x,y)$ is symmetric), we obtain \begin{align*} {\Vert}S_{h}^{\nu}u{\Vert}_{L^2} \leq C h^{-2m-2M(1-\varrho)} {\Vert}u{\Vert}_{L^2} \end{align*} and summing these inequalities \begin{align*} {\Vert}T_{h}^*u{\Vert}_{L^2}^2 \leq \sum_{\nu=1}^J {\Vert}S_{h}^{\nu}u{\Vert}_{L^2} {\Vert}u{\Vert}_{L^2} \leq C h^{-2m-\frac{n-1}{2}-2(1-\varrho)M} {\Vert}u{\Vert}^2_{L^2}, \end{align*} since there are roughly $h^{-(n-1)/2}$ terms in the sum. By Lemma \ref{Lp:semiclassical}, we have the $L^2$ bound $$ {\Vert}Tu{\Vert}_{L^2} \lesssim {\Vert}u{\Vert}_{L^2} $$ provided $m<-(n-1)/4+(\varrho-1)M$ and $M>n/2$, which yields the desired result. \end{proof} \begin{rem} The reason why we were led to perform the Seeger-Sogge-Stein decomposition is that under the rough non-degeneracy assumption (Definition \ref{defn of rough nondegeneracy}), the non-stationary phase estimate (Theorem 7.7.1 in \cite{H1}) only provides the bound \begin{align} |S_{h}(x,y)| &\leq C_{N} h^{-2m-n+N} |x-y|^{-2N} \\ \nonumber &\leq C_{N}h^{-2m-n}\big(1+h^{-1}|x-y|^2\big)^{-N} \end{align} leading, when say $\varrho=1$, to a loss of $n/4$ derivatives instead of the $(n-1)/4$ derivatives in our case.
This can, however, be improved to no loss of derivatives when one also assumes that there is a Lipschitz bound on the higher order derivatives $$ |\partial^{\alpha}_{\xi}\varphi(x,\xi)-\partial^{\alpha}_{\xi}\varphi(y,\xi)| \leq C_{\alpha}|x-y|, \quad |\alpha| \geq 2. $$ This is indeed the case in dimension $n=1$, or if the phase can be decomposed as $\varphi(x,\xi) =\varphi^{\sharp}(x,\xi)+ \varphi^{\flat}(x,\xi)$ where $ \varphi^{\sharp}$ is linear in $\xi$ and $\varphi^{\flat} \in \Phi^2$. \end{rem} Let $\pi_1$ denote the projection onto the spatial variables, i.e. \begin{align*} \pi_{1}:T^*\mathbf{R}^n &\to \mathbf{R}^n \\ (x,\xi) &\mapsto x. \end{align*} A geometric condition sufficient for the local $L^2$ boundedness of rough Fourier integral operators with phase functions $\varphi(x,\xi)$ and amplitudes $a(x,\xi)$ is as follows: \begin{hyp} \label{Rough corank condition} For each $x\in\pi_{1}(\mathop{\rm supp} a)$ and all $\xi\in \mathbf{S}^{n-1}$ there exists a linear subspace $V_{x,\xi}$ belonging to the Grassmannian ${\rm Gr}(n,n-k)$, varying continuously with $(x,\xi)$, and constants $c_{1},c_{2}>0$ such that if $\pi_{V_{x,\xi}}$ denotes the projection onto $V_{x,\xi}$, then $$ |\partial_{\xi}\varphi(x,\xi)-\partial_{\xi}\varphi(y,\xi)|+c_{1}|x-y|^2 \geq c_{2} |\pi_{V_{x,\xi}}(x-y)| $$ for all $x,y \in \pi_1(\mathop{\rm supp} a)$. \end{hyp} \begin{thm} \label{Intro:L2ThmDeg} Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_\varrho$ and phase function $\varphi \in L^{\infty}\Phi^2$.
Suppose that the phase satisfies the rough corank condition of Assumption \ref{Rough corank condition}; then $T$ can be extended as a bounded operator from $L^{2}_{\rm comp}$ to $L^{2}_{\rm loc}$ provided $m<-\frac{n+k-1}{4}+\frac{(n-k)(\varrho-1)}{2}.$ \end{thm} \begin{proof} Since we aim to prove a local $L^2$ boundedness result, we may assume that the amplitude $a$ is compactly supported in the spatial variable $x$. Then, since $S_{0}=T_{0}T^*_{0}$ has a bounded compactly supported kernel, it extends to a bounded operator on $L^2$. It remains to deal with the high frequency part of the operator. Given $(x^{\mu} , \xi^{\mu})\in \mathbf{R} ^n \times \mathbf{R}^n$, $\mu= 1, \dots,\, J,$ we consider a partition of unity $$ \sum_{\mu=1}^J \psi^{\mu}(x,\xi)=1, \quad \xi \neq 0, $$ given by functions $\psi^{\mu}$ homogeneous of degree $0$ in the frequency variable $\xi$ and supported in the cones \begin{align*} \Gamma^{\mu}=\Big\{ (x,\xi) \in T^*\mathbf{R}^n;\, |x-x^{\mu}|^2+\Big\vert \frac{\xi}{\vert\xi\vert}-\xi^{\mu} \Big\vert^2 \leq \varepsilon^2 \Big\} \end{align*} where $\varepsilon$ is yet to be chosen. We decompose the operator as \begin{align} T_{h} = \sum_{\mu=1}^J T_{h}^{\mu} \end{align} where the kernel of $T_{h}^{\mu}$ is given by \begin{align*} T_{h}^{\mu}(x,y)=(2\pi h)^{-n} \int_{\mathbf{R}^n} e^{\frac{i}{h} \varphi(x,\xi)-\frac{i}{h}\langle y,\xi \rangle} \psi^{\mu}(x,\xi)\chi(\xi) a(x,\xi/h) \, \mathrm{d}\xi. \end{align*} We have the direct sum $$ \mathbf{R}^n = V_{x^{\mu},\xi^{\mu}} \oplus V_{x^{\mu},\xi^{\mu}}^{\perp}, \quad \dim V_{x^{\mu},\xi^{\mu}}=n-k $$ and we decompose vectors $x=x'+x''$ (i.e. $x=(x' , x'')$) according to this sum.
Assumption \ref{Rough corank condition} implies \begin{align*} |\partial_{\xi}\varphi(x' , x'',\xi)&-\partial_{\xi}\varphi(y' , x'',\xi)| \\ &\geq c_{2}|\pi_{V_{x,\xi}}(x'-y')| -c_{1}|x'-y'|^2 \\ &\geq c_{2}|x'-y'| \Big(1-\Vert\pi_{V_{x,\xi}}-\pi_{V_{x^{\mu},\xi^{\mu}}}\Vert - \frac{c_{1}}{c_{2}}|x'-y'|\Big). \end{align*} Now since $(x,\xi) \mapsto \pi_{V_{x,\xi}}$ is continuous, we can choose $\varepsilon$ in the definition of the cone $\Gamma^{\mu}$ small enough so that \begin{align*} \Vert\pi_{V_{x,\xi}}-\pi_{V_{x^{\mu},\xi^{\mu}}}\Vert \leq \frac{1}{4}, \quad |x'-y'| \leq |x'-(x^{\mu})'| + |y'-(x^{\mu})'| \leq \frac{c_2}{4c_1} \end{align*} and therefore we have \begin{align} \label{L2:PartialNonDeg} |\partial_{\xi}\varphi(x', x'',\xi)-\partial_{\xi}\varphi(y',x'',\xi)| \geq \frac{c_{2}}{2}|x' -y'| \end{align} when $(x,\xi)$ and $(y,\xi)$ belong to $\Gamma^{\mu}$. We fix the $x''$ variable and use a $TT^*$ argument on the operator acting in the $x'$ variables. We consider \begin{align*} S^{\mu}_{h}(x',x'', y') = (2\pi h)^{-n} \int e^{\frac{i}{h}(\varphi(x,\xi)-\varphi(y',x'',\xi))} a^{\mu}_{h}(x,\xi) \overline{a^{\mu}_{h}}(y',x'',\xi) \, \mathrm{d} \xi.
\end{align*} Because of \eqref{L2:PartialNonDeg}, performing a Seeger-Sogge-Stein decomposition and reasoning as in the proof of Theorem \ref{Intro:L2Thm} we get \begin{multline*} \bigg(\int_{V_{x^{\mu},\xi^{\mu}}} \bigg| \int_{V_{x^{\mu},\xi^{\mu}}} S^{\mu}_{h}(x',x'', y') u(y') \, \mathrm{d} y' \bigg|^2 \, \mathrm{d} x'\bigg)^{\frac{1}{2}} \\ \leq C h^{-2m-\frac{n-k-1}{2}-k-2M(1-\varrho)} \bigg(\int |u(y')|^2 \, \mathrm{d} y'\bigg)^{\frac{1}{2}}, \end{multline*} with a constant $C$ that is independent of $x'',$ provided $M>\frac{n-k}{2}$, and therefore \begin{align*} \int \Big| \int_{V_{x^{\mu},\xi^{\mu}}} \overline{T^{\mu}_{h}(x',x'',y)}u(x) \, \mathrm{d} x' \Big|^{2} \, \mathrm{d} y \leq C h^{-2m-\frac{n-1}{2}-\frac{k}{2}-2M(1-\varrho)} {\Vert}u{\Vert}^2_{L^2}. \end{align*} Hence by Minkowski's integral inequality \begin{align*} \Vert T_{h}^*u\Vert_{L^2} &\leq \int_{V_{x^{\mu},\xi^{\mu}}^{\perp}}\bigg(\int \Big| \int_{V_{x^{\mu},\xi^{\mu}}} \overline{T^{\mu}_{h}(x',x'',y)}u(x) \, \mathrm{d} x' \Big|^{2} \, \mathrm{d} y \bigg)^{\frac{1}{2}} \mathrm{d} x'' \\ &\leq C h^{-m-\frac{n-1}{4}-\frac{k}{4}-M(1-\varrho)}\Vert u\Vert_{L^2} \end{align*} provided $M>\frac{n-k}{2}$ and the amplitude is compactly supported in $x''.$ This yields the $L^2$ bound for $m<-(n-1+k)/4-(1-\varrho)M$ provided $M>\frac{n-k}{2}$, and completes the proof of Theorem \ref{Intro:L2ThmDeg}. \end{proof} \begin{rem} The phase of the linearized maximal wave operator, which is $\varphi(x,\xi)=t(x)|\xi| + \langle x,\xi\rangle$, satisfies the assumptions of Theorem \ref{Intro:L2ThmDeg} since it belongs to $L^{\infty}\Phi^2$ and it also satisfies the rough corank condition~\ref{Rough corank condition}.
Indeed, if $\xi \in \mathbf{S}^{n-1}$ we can take $V_{x,\xi}=\xi^{\perp}$, and if $\pi_{\xi},\pi_{\xi^{\perp}}$ denote the projections onto $\mathop{\rm span} \xi$ and $V_{x,\xi}$ respectively, then it is clear that \begin{align*} |\partial_{\xi}\varphi(x,\xi)&-\partial_{\xi}\varphi(y,\xi)|^2 \\ &= |t(x)-t(y)|^2+|x-y|^2+2(t(x)-t(y)) \underbrace{\langle \xi,x-y \rangle}_{=\pm |\pi_{\xi}(x-y)|} \\ &\geq \big|\pi_{\xi^{\perp}}(x-y)\big|^2+\big| |t(x)-t(y)|- |\pi_{\xi}(x-y)| \big|^2 \\ &\geq |\pi_{\xi^{\perp}}(x-y)|^2. \end{align*} Therefore, as mentioned earlier, the Fourier integral operators under consideration include the linearized maximal wave operator. \end{rem} A consequence of this is a local $L^2$ boundedness result for Fourier integral operators with smooth phase functions and rough symbols. \begin{cor} Suppose that $\varphi(x,\xi)$ is a smooth phase function satisfying the non-degeneracy condition \begin{equation} \mathop{\rm rank} \frac{\partial^2 \varphi}{\partial x_{j} \partial \xi_{k}} \geq n-k, \quad \textrm{ on } \mathop{\rm supp} a, \end{equation} and that the entries of the Hessian matrix have bounded derivatives with respect to both $x$ and $\xi$ separately. Assume also that the symbol $a$ belongs to $L^{\infty}S^{m}_{\varrho}, 0\leq \varrho \leq 1$. Then the associated Fourier integral operator is bounded from $L^2_{\rm comp}$ to $L^2_{\rm loc}$ provided $m<-\frac{k}{2}+\frac{(n-k)(\varrho -1)}{2}$. \end{cor} This is sharp, for example in the case $k=0$ (i.e. pseudodifferential operators), since there exists $m_0$ with $m_0 > n(\varrho-\delta)/2$ such that the pseudodifferential operator with symbol belonging to $S^{m_0}_{\varrho,\delta}$ is not bounded from $L^2_{\text{comp}}$ to $L^2_{\text{loc}}$, see \cite{H4}.
Now since the phase of a pseudodifferential operator satisfies the condition of the above corollary with $k=0$, and since obviously $m_0 \geq n(\varrho -1)/2$ and $S^{m_0}_{\varrho,\delta}\subset L^{\infty}S^{m_0}_{\varrho},$ it follows that the above $L^2$ boundedness result is sharp. \subsubsection{Symplectic aspects of the $L^2$ boundedness} \label{subsec:symplecticL2} Here we shall discuss the symplectic aspects of the $L^2$ boundedness of Fourier integral operators, aiming to highlight the essentially geometric nature of the problem of $L^2$ regularity of Fourier integral operators. We begin by recalling some of the well known $L^{2}$ continuity results in the case of smooth phases and amplitudes. The kernel of the Fourier integral operator \begin{align} \label{L2:Fourier integral operator} Tu(x) = (2\pi)^{-n} \int_{\mathbf{R}^n} e^{i\varphi(x,\xi)} a(x,\xi) \widehat{u}(\xi) \, \mathrm{d} \xi \end{align} is an oscillatory integral whose wave front set is contained in the closed subset of $\dot{T}^*\mathbf{R}^{2n}=T^*\mathbf{R}^{2n} \setminus 0$ \begin{align} \label{L2:WFkernel} \mathop{\rm WF}(T) \subset \big\{ (x,\partial_{x}\varphi(x,\xi),\partial_{\xi}\varphi(x,\xi),-\xi) : (x,\xi) \in \mathop{\rm supp} a, \, \xi \neq 0 \big\}. \end{align} The cotangent space $T^*\mathbf{R}^n$ is endowed with the symplectic form $$ \sigma = \sum_{j=1}^n \mathrm{d}\xi_j \wedge \mathrm{d} x_j. $$ A canonical relation is a Lagrangian submanifold of the product $T^*\mathbf{R}^n \times T^*\mathbf{R}^n$ endowed with the symplectic form $\sigma \oplus (-\sigma)$; this means that the latter symplectic form vanishes on the canonical relation. In particular, by rearranging the terms in the closed cone \eqref{L2:WFkernel}, one obtains a canonical relation \begin{align*} \mathcal{C}_{\varphi} = \big\{ (x,\partial_{x}\varphi(x,\xi),\partial_{\xi}\varphi(x,\xi),\xi) : (x,\xi) \in \mathop{\rm supp} a \big\} \end{align*} in $T^*\mathbf{R}^n \times T^*\mathbf{R}^n$.
If $\mathcal{C}$ is a canonical relation, we consider the two maps $\pi_{1}: (x,\xi) \mapsto (x, \partial_{x}\varphi)$ and $\pi_{2}: (x,\xi)\mapsto(\partial_{\xi} \varphi , \xi),$ \begin{align*} \xymatrix @!0 @C=4pc @R=3pc { & \ar[ld]_{\pi_1} \mathcal{C} \subset T^*\mathbf{R}^n \times T^*\mathbf{R}^n \ar[rd]^{\pi_2} & \\ T^*\mathbf{R}^n & & T^*\mathbf{R}^n. } \end{align*} The canonical relation $\mathcal{C}$ is (locally) the graph of a smooth function $\chi$ if and only if $\pi_1$ is a (local) diffeomorphism, and in this case $\chi=\pi_2 \circ \pi_1^{-1}$. This function $\chi$ is a diffeomorphism if and only if $\pi_2$ is a diffeomorphism. Note that if this is the case, $\chi$ is a symplectomorphism because the submanifold $\mathcal{C}$ is {\it{Lagrangian}} for the symplectic form $\sigma \oplus (-\sigma)$, i.e. $$ \mathrm{d} \xi \wedge \mathrm{d} x - \mathrm{d} \eta \wedge \mathrm{d} y = 0 \quad \textrm{ when } (y,\eta) = \chi(x,\xi). $$ The canonical relation $\mathcal{C}_{\varphi}$ is locally the graph of a symplectomorphism in the neighbourhood of $(x_{0},\partial_{x}\varphi(x_{0},\xi_{0}),\partial_{\xi}\varphi(x_{0},\xi_{0}),\xi_{0})$ if and only if \begin{align} \label{L2:NonDegenerate} \det \frac{\partial^{2}\varphi}{\partial x \partial \xi}(x_{0},\xi_{0}) \neq 0. \end{align} It is well-known that Fourier integral operators of order $0$ whose canonical relation $\mathcal{C}_{\varphi}$ is locally the graph of a symplectic transformation $\chi$ are locally $L^2$ bounded. More precisely \begin{thm} \label{L2:L2Fourier integral operator} Let $a \in S^0_{1,0}$ and $\varphi$ be a real valued function in $C^{\infty}(\mathbf{R}^n \times \mathbf{R}^n \setminus 0)$ which is homogeneous of degree $1$ in $\xi$.
Assume that the homogeneous canonical relation $\mathcal{C}_{\varphi}$ is locally the graph \footnote{Or equivalently that \eqref{L2:NonDegenerate} holds on $\mathop{\rm supp} a$.} of a symplectomorphism between two open neighbourhoods in $\dot{T}^*\mathbf{R}^n=T^*\mathbf{R}^n \setminus 0$. Then the Fourier integral operator \eqref{L2:Fourier integral operator} defines a bounded operator from $L^2_{\rm comp}$ to $L^2_{\rm loc}$. \end{thm} \begin{proof} This is Theorem 25.3.1 in \cite{H2}. \end{proof} But in fact, there are boundedness results even when $\mathcal{C}$ is not the graph of a symplectomorphism, i.e. when either of the projections $\pi_1$ or $\pi_2$ fails to be a diffeomorphism. There is an important instance where this is the case and one can still prove local $L^2$ boundedness with a loss of derivatives. A suggestive example of this situation is the restriction operator to a linear subspace $H=\big\{x=(x',x'') \in \mathbf{R}^n=\mathbf{R}^{n'} \times \mathbf{R}^{n''} : x''=0 \big\}$ \begin{align*} R_H u = \langle D \rangle^{m}u(x',0) = (2\pi)^{-n} \int e^{i \langle x',\xi'\rangle} \langle \xi \rangle^m \widehat{u}(\xi) \, \mathrm{d}\xi \end{align*} where $m \leq 0$. We know that this operator is bounded from $L^2_{\rm comp}$ to $L^2_{\rm loc}$; indeed for all $a \in C^{\infty}_{0}(\mathbf{R}^n)$ there exists a constant $C_{m,n}$ such that \begin{align*} \Vert aR_Hu\Vert_{L^2} \leq C_{m,n} \Vert u\Vert_{L^2} \end{align*} provided $m \leq -\mathop{\rm codim}H/2$. The canonical relation of the Fourier integral operator $R_H$ is given by \begin{align*} \xymatrix @!0 @C=5pc @R=4pc { & \ar[ld]_{\pi_1} \mathcal{C}_H=\big\{(x,\xi',0;x',0,\xi), \, (x,\xi) \in T^*\mathbf{R}^n \big\} \ar[rd]^{\pi_2} & \\ \big\{ \xi''=0 \big\} \subset T^*\mathbf{R}^n & & \big\{ x''=0 \big\} \subset T^*\mathbf{R}^n.
} \end{align*} By $\sigma_{\mathcal{C}_{H}}$ we denote the pullback by $\pi_1$ of the symplectic form $\sigma$ to $\mathcal{C}_{H}$ (of course we could equally well consider the pullback $\pi_{2}^*\sigma$ without changing anything) $$ \sigma_{\mathcal{C}_{H}} = \pi_1^* \sigma = \mathrm{d}\xi' \wedge \mathrm{d} x'. $$ Then we have $$ \mathop{\rm corank} \sigma_{\mathcal{C}_{H}} = 2n''=2\mathop{\rm codim} H $$ and the condition for $L^2$ boundedness is therefore $m \leq -\mathop{\rm corank} \sigma_{\mathcal{C}_{H}}/4$. In fact, this example models the general situation, which is the content of Theorem~25.3.8 in \cite{H2}. \begin{thm} \label{L2:L2Fourier integral operatordeg} Let $a \in S^m_{1,0}$ and $\varphi$ be a real valued function in $C^{\infty}(\mathbf{R}^n \times \mathbf{R}^n \setminus 0)$ which is homogeneous of degree $1$ in $\xi$ such that \footnote{This ensures that $\mathcal{C}_{\varphi}$ is a homogeneous canonical relation to which the radial vectors of $\dot{T}^*\mathbf{R}^n \times 0$ and $0 \times \dot{T}^*\mathbf{R}^n$ are never tangential.} $d\varphi \neq 0$ on $\mathop{\rm supp} a$. Then the Fourier integral operator \eqref{L2:Fourier integral operator} defines a bounded operator from $L^2_{\rm comp}$ to $L^2_{\rm loc}$ provided $m \leq - \mathop{\rm corank} \sigma_{\mathcal{C}_{\varphi}}/4$. Here $\sigma_{\mathcal{C}_{\varphi}}$ is the two-form on $\mathcal{C}_{\varphi}$ obtained by lifting to $\mathcal{C}_{\varphi}$ the symplectic form $\sigma$ on $\dot{T}^*\mathbf{R}^n$ by one of the projections $\pi_{1}$ or $\pi_{2}$.
\end{thm} The fact that the canonical relation is parametrised by $$ F: (x,\xi) \mapsto (x,\partial_{x}\varphi(x,\xi),\partial_{\xi}\varphi(x,\xi),\xi) $$ allows us to compute \begin{align*} F^*(\pi_{1}^* \sigma) &= \mathrm{d}(\pi_{1}\circ F)^*(\xi \, \mathrm{d} x) = \mathrm{d}\big(\partial_{x} \varphi(x,\xi) \, \mathrm{d} x\big) \\ &= \underbrace{\sum_{j,k=1}^n\partial^2_{x_{j}x_{k}}\varphi(x,\xi) \, \mathrm{d} x_{j} \wedge \mathrm{d} x_{k}}_{=0} + \sum_{j,k=1}^n\partial^2_{\xi_{j}x_{k}}\varphi(x,\xi) \, \mathrm{d} \xi_{j} \wedge \mathrm{d} x_{k}. \end{align*} Therefore we have \begin{align*} F^*\sigma_{\mathcal{C}_{\varphi}} = \sum_{j,k=1}^n\partial^2_{\xi_{j}x_{k}}\varphi(x,\xi) \, \mathrm{d} \xi_{j} \wedge \mathrm{d} x_{k} \end{align*} which yields \begin{align*} \mathop{\rm corank} \sigma_{\mathcal{C}_{\varphi}} = 2 \mathop{\rm corank} \frac{\partial^{2}\varphi}{\partial x \partial \xi}. \end{align*} The geometric assumption in Theorem \ref{L2:L2Fourier integral operatordeg} (which is valid for general Fourier integral operators, not necessarily of the form \eqref{L2:Fourier integral operator}) is therefore equivalent to \begin{align} \label{L2:CorankHyp} m \leq -\frac{1}{2} \mathop{\rm corank} \frac{\partial^{2}\varphi}{\partial x \partial \xi}. \end{align} \begin{rem} If the function $t(x)$ in the linearized maximal wave operator \eqref{Intro:LinWave} were smooth, then that operator would fall into the category of Fourier integral operators satisfying the assumptions of Theorem \ref{L2:L2Fourier integral operatordeg}. Indeed, as already noted in the introduction, the corank of $\partial^2 \varphi/\partial x\partial \xi$ when $\varphi(x,\xi)=t(x)|\xi|+\langle x,\xi \rangle$ is at most $1$. Therefore $e^{it(x)\sqrt{-\Delta}}$ defines a bounded operator from $H^{1/2}_{\rm comp}$ to $L^2_{\rm loc}$ when $t(x)$ is a smooth function on $\mathbf{R}^n$.
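The corank claim can also be checked directly; the following short computation (a verification, for $\xi \neq 0$, and not part of the original argument) makes it explicit.

```latex
% Mixed Hessian of \varphi(x,\xi) = t(x)|\xi| + \langle x,\xi\rangle:
\frac{\partial^{2}\varphi}{\partial x_{j}\,\partial\xi_{k}}
  = \partial_{x_{j}}t(x)\,\frac{\xi_{k}}{|\xi|} + \delta_{jk},
\qquad\text{i.e.}\qquad
\frac{\partial^{2}\varphi}{\partial x\,\partial\xi}
  = I + \nabla t(x)\otimes\frac{\xi}{|\xi|}.
```

A rank-one perturbation of the identity has kernel of dimension at most one: if $(I+v\otimes w)u=0$ then $u=-\langle w,u\rangle\, v$ lies in the span of $v$. Hence the corank of $\partial^{2}\varphi/\partial x\,\partial\xi$ is at most $1$.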
\end{rem} Theorem \ref{Intro:L2Thm} for $\varrho=1$ is the non-smooth analogue of Theorem \ref{L2:L2Fourier integral operator}, where the non-degeneracy condition \eqref{L2:NonDegenerate}, which requires smoothness in $x$, has been replaced by Definition \ref{defn of rough nondegeneracy}. Note nevertheless that Theorem \ref{Intro:L2Thm} is a \textit{global} $L^2$ result. Similarly, Theorem \ref{Intro:L2ThmDeg} for $\varrho=1$ is the non-smooth analogue of Theorem \ref{L2:L2Fourier integral operatordeg} with \eqref{L2:CorankHyp} replaced by Assumption \ref{Rough corank condition}. \subsection{Global $L^\infty$ boundedness of rough Fourier integral operators} In this section, we establish the $L^\infty$ boundedness of Fourier integral operators. To prove the $L^\infty$ boundedness of the high frequency portion of the operator, we need to use the semiclassical estimates of Subsection \ref{SSS decomposition}. However, using only the Seeger-Sogge-Stein decomposition yields a loss of derivatives no better than $m<-\frac{n-1}{2}+ n(\varrho-1)$, and to obtain the sharp $L^\infty$ boundedness result claimed in Theorem \ref{Intro:LinftyThm}, further analysis is needed. \begin{thm} \label{Intro:LinftyThm} Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_{\varrho}$ and phase function $\varphi \in L^{\infty}\Phi^2.$ Then there exists a constant $C>0$ such that $$ \Vert Tu\Vert_{L^{\infty}} \leq C \Vert u\Vert_{L^{\infty}}, \quad u \in \mathscr{S}(\mathbf{R}^n), $$ provided $m<-\frac{n-1}{2} +\frac{n}{2}(\varrho-1)$ and $0\leq \varrho\leq 1$. Furthermore, this result is sharp. \end{thm} \begin{proof} As a first step, we use the semiclassical reduction of Subsection \ref{Semiclasical reduction subsec} to decompose $T$ into $T_0$ and $T_h$.
Thereafter we split the semiclassical piece $T_h$ further into $\sum_{\nu=1}^{J} T_{h}^{\nu}$ using the Seeger-Sogge-Stein decomposition of Subsection \ref{SSS decomposition} applied to the amplitude $a(x,\xi)$ and the phase $\varphi(x,\xi).$ Once again, the boundedness of $T_0$ follows from Theorem \ref{general low frequency boundedness for rough Fourier integral operator}, but here we do not need the rough non-degeneracy of the phase function, since the $L^\infty$ boundedness of $T_0$ only requires that the integral of the Schwartz kernel $T_0(x,y)$ with respect to the $y$ variable be finite. See Theorem \ref{general low frequency boundedness for rough Fourier integral operator} for further details. From \eqref{kernel of Thnu} one deduces that the kernel of the semiclassical high frequency operator $T_{h}^{\nu}$ is given by $$ T_{h}^{\nu}(x,y)=(2\pi h)^{-n} \int_{\mathbf{R}^n} e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi^{\nu})-y,\xi \rangle} b^{\nu}(x,\xi,h) \, \mathrm{d} \xi,$$ with \begin{equation*} b^{\nu}(x,\xi,h)=e^{\frac{i}{h}\langle \nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi^{\nu}),\xi \rangle}\psi^{\nu}(\xi) \chi(\xi) a(x,\xi/h). \end{equation*} Now since $$ \Vert T_h^{\nu} u\Vert_{L^{\infty}} \leq \Vert u\Vert_{L^{\infty}}\int | T_h^{\nu}(x,y)|\,\mathrm{d} y, $$ it remains to establish a suitable estimate for $\int | T_h^{\nu}(x,y)|\,\mathrm{d} y$. As in the proof of $L^1$ boundedness, we use the differential operator \begin{align*} L = 1-\partial_{\xi_{1}}^2-h \partial_{\xi'}^2 \end{align*} for which, according to Lemma \ref{Linfty:bLemma}, we have \begin{equation}\label{estimate on bnu} \sup_{\xi} \Vert L^{N}b^{\nu}(\cdot,\xi,h)\Vert_{L^{\infty}} \lesssim h^{-m-2N(1-\varrho)}.
\end{equation} Setting \begin{equation}\label{metric g} g(z)=h^{-2}z_{1}^2+h^{-1}|z'|^{2}, \end{equation} we have \begin{equation*} L^N\,e^{\frac{i}{h}\langle \nabla_{\xi}\varphi (x,\xi^{\nu})-y,\xi \rangle} =\big(1+g(y-\nabla_{\xi}\varphi (x,\xi^{\nu}))\big)^{N}\, e^{\frac{i}{h}\langle \nabla_{\xi}\varphi (x,\xi^{\nu})-y,\xi \rangle} \end{equation*} for all integers $N$. Now we observe that \begin{align*} (2\pi h)^{\frac{n}{2}}\int\vert T_h^{\nu}(x,y)\vert\, \mathrm{d} y &= (2\pi h)^{\frac{n}{2}}\int\vert T_h^{\nu}(x,y+\nabla_{\xi}\varphi (x,\xi^{\nu}))\vert\, \mathrm{d} y \\ \nonumber &= \int |\widehat{b^{\nu}} (x,y,h)| \, \mathrm{d} y \\ \nonumber&=\int_{\sqrt{g(y)}\leq h^{\varrho}}+\int_{\sqrt{g(y)}> h^{\varrho}}\vert \widehat{b^{\nu}} (x,y,h)\vert\, \mathrm{d} y :=\textbf{I}_1 + \textbf{I}_2 , \end{align*} where $$ \widehat{b^{\nu}} (x,y,h)=(2\pi h)^{-\frac{n}{2}}\int e^{-\frac{i}{h}\langle y,\xi\rangle}\,b^{\nu}(x,\xi,h)\, \mathrm{d} \xi $$ is the semiclassical Fourier transform of $b^\nu$. To estimate $\textbf{I}_1$ we use the Cauchy-Schwarz inequality, the semiclassical Plancherel theorem, the definition of $g$ in \eqref{metric g} and \eqref{estimate on bnu}. Hence, recalling that the measure of the $\xi$-support of $b^{\nu}(x,\xi, h)$ is $O(h^{\frac{n-1}{2}})$, we have \begin{align*} \textbf{I}_1 &\leq \bigg\{\int_{\sqrt{g(y)}\leq h^{\varrho}}\mathrm{d} y\bigg\}^{\frac{1}{2}}\bigg\{\int\vert\widehat{b^{\nu}} (x,y,h)\vert^{2} \mathrm{d} y\bigg\}^{\frac{1}{2}} \\ &\lesssim h^{\frac{n+1}{4}}\bigg\{\int_{|y| \leq h^\varrho}\mathrm{d} y\bigg\}^{\frac{1}{2}}\bigg\{\int |b^{\nu} (x,\xi,h)|^2 \, \mathrm{d}\xi\bigg\}^{\frac{1}{2}} \\ &\lesssim h^{\frac{n+1}{4}} h^{\frac{n\varrho}{2}} h^{-m+\frac{n-1}{4}} \lesssim h^{\frac{n}{2}} h^{-m+\frac{n\varrho}{2}}.
\end{align*} Before we proceed with the estimate of $\textbf{I}_2$, we observe that if $l$ is a non-negative integer then the semiclassical Plancherel theorem and \eqref{estimate on bnu} yield \begin{align}\label{integer l estimates for I2} \bigg(\int\vert \widehat{b^{\nu}} (x,y,h)\vert^{2}(1+g(y))^{2l}\,\mathrm{d} y\bigg)^{\frac{1}{2}} &\leq \bigg(\int |L^{l} b^{\nu} (x,\xi, h)|^2\, \mathrm{d} \xi\bigg)^{\frac{1}{2}} \\ \nonumber &\lesssim h^{-m-2l(1-\varrho)+\frac{n-1}{4}}. \end{align} Moreover, any positive real number $l$ which is not an integer can be written as $[l]+\{l\}$, where $[l]$ denotes the integer part of $l$ and $\{l\}$ its fractional part, which satisfies $0<\{l\}<1$. Therefore, H\"older's inequality with conjugate exponents $\frac{1}{\{l\}}$ and $\frac{1}{1-\{l\}}$ yields \begin{multline*} \int \vert \widehat{b^{\nu}}(x,y,h)\vert^{2}(1+g(y))^{2l}\,\mathrm{d} y \\ = \int\vert \widehat{b^{\nu}} \vert^{2\{l\}}\, \vert \widehat{b^{\nu}}\vert^{2(1-\{l\})} (1+g(y))^{2\{l\}([l]+1)} \, (1+g(y))^{2[l](1-\{l\})} \,\mathrm{d} y \\ \leq \bigg(\int \vert \widehat{b^{\nu}} \vert^{2} (1+g(y))^{2([l]+1)} \, \mathrm{d} y\bigg)^{\{l\}} \bigg(\int \vert \widehat{b^{\nu}} \vert^{2}(1+g(y))^{2[l]} \, \mathrm{d} y\bigg)^{1-\{l\}}. \end{multline*} Therefore, using \eqref{integer l estimates for I2} we obtain \begin{align*} \bigg(\int \vert \widehat{b^{\nu}} &(x,y,h)\vert^{2}(1+g(y))^{2l}\, \mathrm{d} y\bigg)^{\frac{1}{2}} \\ \nonumber &\leq \bigg(\int |L^{[l]+1} b^{\nu} (x,\xi, h)|^2\, \mathrm{d} \xi\bigg)^{\frac{\{l\}}{2}} \bigg(\int |L^{[l]} b^{\nu} (x,\xi, h)|^2\, \mathrm{d}\xi\bigg)^{\frac{1-\{l\}}{2}} \\ \nonumber &\lesssim h^{\{l\}(-m-2([l]+1)(1-\varrho)+\frac{n-1}{4})}h^{(1-\{l\})(-m-2[l](1-\varrho)+\frac{n-1}{4})} \\ \nonumber &= h^{-m-2l(1-\varrho)+\frac{n-1}{4}}, \end{align*} and hence \eqref{integer l estimates for I2} is in fact valid for all non-negative real numbers $l$.
Turning now to the estimate for $\textbf{I}_2$, we use the same tools as in the case of $\textbf{I}_1$ together with \eqref{integer l estimates for I2} for $l\in [0,\infty)$. This yields for any $l>\frac{n}{4}$ \begin{align*} \textbf{I}_2 &\leq \bigg\{\int_{\sqrt{g(y)}> h^{\varrho}}(1+g(y))^{-2l}\, \mathrm{d} y\bigg\}^{\frac{1}{2}} \times\bigg\{\int\vert \widehat{b^{\nu}} (x,y,h)\vert^{2}(1+g(y))^{2l} \, \mathrm{d} y\bigg\}^{\frac{1}{2}} \\ \nonumber &\lesssim h^{\frac{n+1}{4}}\bigg\{\int_{|y|> h^{\varrho}}|y|^{-4l} \, \mathrm{d} y\bigg\}^{\frac{1}{2}} h^{-m-2l(1-\varrho)+\frac{n-1}{4}} \\ \nonumber &\lesssim h^{\frac{n+1}{4}} h^{\varrho(\frac{n}{2}-2l)} h^{-m-2l(1-\varrho)+\frac{n-1}{4}}\lesssim h^{\frac{n}{2}} h^{-m+\frac{n\varrho}{2}-2l}. \end{align*} Therefore \begin{align}\label{integral of Thnu dy} \sup_{x} \int |T_h^{\nu}(x,y)| \, \mathrm{d} y \leq C_{l}h^{-m+\frac{n\varrho}{2}-2l} \end{align} and summing in $\nu$ yields $$ \Vert T_h u\Vert_{L^{\infty}} \leq \sum_{\nu=1}^J \Vert T_h^{\nu} u\Vert_{L^{\infty}} \leq C_{l} h^{-m+\frac{n\varrho}{2}-2l-\frac{n-1}{2}}\Vert u\Vert_{L^{\infty}}, $$ since $J$ is bounded (from above and below) by a constant times $h^{-\frac{n-1}{2}}$. By Lemma \ref{Lp:semiclassical} one has $$ \Vert T u\Vert_{L^{\infty}} \lesssim \Vert u\Vert_{L^{\infty}} $$ provided $m<-\frac{n-1}{2}+\frac{n\varrho}{2}-2l$ and $l>\frac{n}{4}$, i.e. if $m<-\frac{n-1}{2}+\frac{n}{2}(\varrho -1)$. This completes the proof of Theorem \ref{Intro:LinftyThm}. \end{proof} \subsection{Global $L^p$-$L^p$ and $L^p$-$L^q$ boundedness of Fourier integral operators} In this section we shall state and prove our main boundedness results for Fourier integral operators. Here, we prove results both for smooth and rough operators with phases satisfying various non-degeneracy conditions.
As a first step, interpolation yields the following global $L^p$ results: \begin{thm}\label{main L^p thm for smooth Fourier integral operators} Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in S^m_{\varrho, \delta}, 0\leq \varrho \leq 1,$ $0\leq \delta \leq 1,$ and a phase function $\varphi(x,\xi) \in \Phi^2$ satisfying the strong non-degeneracy condition. Setting $\lambda:=\min(0,n(\varrho-\delta)),$ suppose that either of the following conditions holds: \begin{enumerate} \item[$(a)$] $1 \leq p \leq 2$ and $$ m<n(\varrho -1)\bigg (\frac{2}{p}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg)+ \lambda\bigg(1-\frac{1}{p}\bigg); $$ or \item[$(b)$] $2 \leq p \leq \infty$ and $$ m<n(\varrho -1)\bigg (\frac{1}{2}-\frac{1}{p}\bigg)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg) +\frac{\lambda}{p};$$ or \item[$(c)$] $p=2,$ $0\leq \varrho\leq 1,$ $0\leq \delta<1,$ and $$m= \frac{\lambda}{2}.$$ \end{enumerate} Then there exists a constant $C>0$ such that $ \Vert Tu\Vert_{L^{p}} \leq C \Vert u\Vert_{L^{p}}.$ \end{thm} \begin{proof} The proof is a direct consequence of the interpolation of the $L^1$ boundedness result of Theorem \ref{Intro:L1Thm} with the $L^2$ boundedness of Theorem \ref{Calderon-Vaillancourt for FIOs} on the one hand, and of the interpolation of the latter with the $L^\infty$ boundedness result of Theorem \ref{Intro:LinftyThm} on the other. The details are left to the reader.
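As a consistency check (a verification, not part of the original argument), the threshold in case $(a)$ is the affine interpolation, in the variable $1/p$, of the thresholds obtained from $(a)$ at $p=1$ and $p=2$: with $\theta=\frac{2}{p}-1\in[0,1]$,

```latex
\theta\Big(n(\varrho-1)-\tfrac{n-1}{2}\Big)+(1-\theta)\,\tfrac{\lambda}{2}
 = n(\varrho-1)\Big(\tfrac{2}{p}-1\Big)
 + (n-1)\Big(\tfrac{1}{2}-\tfrac{1}{p}\Big)
 + \lambda\Big(1-\tfrac{1}{p}\Big),
```

which is exactly the exponent one expects from Riesz--Thorin interpolation between the $L^1$ and $L^2$ endpoints.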
\end{proof} \begin{thm}\label{main L^p thm for Fourier integral operators with smooth phase and rough amplitudes} Let $T$ be a Fourier integral operator given by \eqref{Intro:Fourier integral operator} with amplitude $a \in L^{\infty}S^m_{\varrho}, 0\leq \varrho \leq 1$ and a strongly non-degenerate phase function $\varphi(x,\xi) \in \Phi^2.$ Suppose that either of the following conditions holds: \begin{enumerate} \item[$(a)$] $1 \leq p \leq 2$ and $$ m<\frac{n}{p}(\varrho -1)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg); $$ or \item[$(b)$] $2 \leq p \leq \infty$ and $$ m<\frac{n}{2}(\varrho -1)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg).$$ \end{enumerate} Then there exists a constant $C>0$ such that $ \Vert Tu\Vert_{L^{p}} \leq C \Vert u\Vert_{L^{p}}.$ \end{thm} \begin{proof} The proof follows once again from the interpolation of the $L^1$ boundedness result of Theorem \ref{Intro:L1Thm} with the $L^2$ boundedness of Theorem \ref{global L2 boundedness smooth phase rough amplitude} on the one hand, and of the interpolation of the latter with the $L^\infty$ boundedness result of Theorem \ref{Intro:LinftyThm} on the other. \end{proof} As an immediate consequence of the theorem above one has \begin{cor}\label{LinftySm1 cor} For a Fourier integral operator $T$ with amplitude $a \in L^{\infty}S^m_{1}$ and a strongly non-degenerate phase function $\varphi(x,\xi) \in \Phi^2,$ one has $L^p$ boundedness for $p\in [1,\infty]$ provided $m<-(n-1)|\frac{1}{p}-\frac{1}{2}|.$ \end{cor} Using the Sobolev embedding theorem one can also show the following $L^{p}$-$L^{q}$ estimates for rough Fourier integral operators.
\begin{thm}\label{main LpLq thm for Fourier integral operators} Suppose that \begin{enumerate} \item [$(1)$] $T$ is a Fourier integral operator with an amplitude $a \in S^m_{\varrho, \delta},$ $0\leq \varrho \leq 1,$ $0\leq \delta \leq 1$ and a strongly non-degenerate phase function $\varphi(x,\xi) \in \Phi^2,$ with either of the following conditions: \begin{enumerate} \item [$(a)$] $1 \leq p\leq q \leq 2$ and $$m<n(\varrho -1)\bigg (\frac{2}{q}-1\bigg)- (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg)+ \lambda\bigg(1-\frac{1}{q}\bigg)+\frac{1}{q}-\frac{1}{p};$$ or \item [$(b)$] $2 \leq p \leq q\leq \infty$ and $$m<n(\varrho -1)\bigg (\frac{1}{2}-\frac{1}{q}\bigg)+ (n-1)\bigg(\frac{2}{q}-\frac{1}{p}-\frac{1}{2}\bigg)+\frac{\lambda}{q}+\frac{1}{q}-\frac{1}{p}.$$ \end{enumerate} \item [$(2)$] $T$ is a Fourier integral operator with an amplitude $a \in L^{\infty}S^m_{\varrho}, 0\leq \varrho \leq 1$ and a strongly non-degenerate phase function $\varphi(x,\xi) \in \Phi^2,$ with either of the following conditions: \begin{enumerate} \item [$(a)$] $1 \leq p\leq q \leq 2$ and $$m<\frac{n}{q}(\varrho -1)- (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg)+\frac{1}{q}-\frac{1}{p};$$ or \item [$(b)$] $2 \leq p \leq q\leq \infty$ and $$m<\frac{n}{2}(\varrho -1)+ (n-1)\bigg(\frac{2}{q}-\frac{1}{p}-\frac{1}{2}\bigg)+\frac{1}{q}-\frac{1}{p}.$$ \end{enumerate} \end{enumerate} Then there exists a constant $C>0$ such that $\Vert Tu\Vert _{L^{q}} \leq C\Vert u\Vert_{L^{p}}.$ \end{thm} \begin{proof} We give the details of the proof only for $(1)$(a). The remaining cases are proved similarly, using Theorem \ref{main L^p thm for smooth Fourier integral operators} part (b) or Theorem \ref{main L^p thm for Fourier integral operators with smooth phase and rough amplitudes}.
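Before giving the argument, we record a purely arithmetic identity (a verification, not part of the original text) linking the Sobolev admissibility requirement $s \geq n(\frac{1}{p}-\frac{1}{q})$ to the hypothesis on $m$ in $(1)$(a):

```latex
(n-1)\Big(\tfrac{1}{2}-\tfrac{1}{q}\Big) - n\Big(\tfrac{1}{p}-\tfrac{1}{q}\Big)
 = \tfrac{n-1}{2} + \tfrac{1}{q} - \tfrac{n}{p}
 = -(n-1)\Big(\tfrac{1}{p}-\tfrac{1}{2}\Big) + \tfrac{1}{q}-\tfrac{1}{p}.
```

Consequently, the interval of admissible Sobolev orders $s$ used in the proof is nonempty precisely when $m$ satisfies the bound assumed in $(1)$(a).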
Condition $m<n(\varrho -1)(\frac{2}{q}-1)- (n-1)(\frac{1}{p}-\frac{1}{2})+ \lambda (1-\frac{1}{q})+\frac{1}{q}-\frac{1}{p}$ yields the existence of a real number $s$ with \begin{equation}\label{bounds on s} n\bigg(\frac{1}{p}-\frac{1}{q}\bigg)\leq s<n(\varrho -1)\bigg (\frac{2}{q}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{q}\bigg)+ \lambda\bigg(1-\frac{1}{q}\bigg)-m. \end{equation} Therefore, writing $T= T(1-\Delta)^{\frac{s}{2}}(1-\Delta)^{-\frac{s}{2}}$, the Leibniz rule reveals that the amplitude of $T(1-\Delta)^{\frac{s}{2}}$ belongs to $L^{\infty}S^{m+s}_{\varrho}$ and since $$ m+s<n(\varrho -1)\bigg (\frac{2}{q}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{q}\bigg)+ \lambda\bigg(1-\frac{1}{q}\bigg), $$ Theorem \ref{main L^p thm for smooth Fourier integral operators} part (a) yields \begin{align*} \Vert T u\Vert _{L^{q}} = \Vert T(1-\Delta)^{\frac{s}{2}}(1-\Delta)^{-\frac{s}{2}} u\Vert _{L^{q}} \lesssim \Vert (1-\Delta)^{-\frac{s}{2}} u\Vert_{L^{q}} \lesssim \Vert u\Vert_{L^{p}}, \end{align*} where the very last estimate is a direct consequence of \eqref{bounds on s} and the Sobolev embedding theorem. Hence $\Vert Tu\Vert _{L^{q}} \lesssim \Vert u\Vert_{L^{p}}$ for the above ranges of $p$, $q$ and $m$, and the proof is complete. \end{proof} \section{Global and local weighted $L^p$ boundedness of Fourier integral operators} The purpose of this chapter is to establish boundedness results for a fairly wide class of Fourier integral operators on weighted $L^p$ spaces with weights belonging to Muckenhoupt's $A_p$ class. We also prove these results for Fourier integral operators whose phase functions and amplitudes are only bounded and measurable in the spatial variables and exhibit suitable symbol-type behavior in the frequency variable. We will start by recalling some facts from the theory of $A_p$ weights which will be needed in this section.
Thereafter we prove a couple of uniform stationary phase estimates for oscillatory integrals and then proceed with the weighted boundedness of the low frequency portions of Fourier integral operators. Before proceeding with our claims about the weighted boundedness of the high frequency part of Fourier integral operators, a discussion of a counterexample leads us to a rank condition on the phase function $\varphi(x,\xi)$ which is crucial for the validity of the weighted boundedness (with $A_p$ weights) of Fourier integral operators. Using interpolation and extrapolation, we can prove an endpoint weighted $L^p$ boundedness theorem for operators within a specific class of amplitudes and all $A_p$ weights, which is shown to be sharp in a case of particular interest and can also be invariantly formulated in the local case. Finally we show the $L^p$ boundedness of a much wider class of operators for some subclasses of the $A_p$ weights. \subsection{Tools in proving weighted boundedness} The following results are well-known and can be found, in their order of appearance, in \cite{GR}, \cite{J} and \cite{S}. \begin{thm} \label{open} Suppose $p > 1$ and $w \in A_p$. There exists an exponent $q < p$, which depends only on $p$ and $[w]_{A_p}$, such that $w \in A_q$. There exists $\varepsilon > 0$, which depends only on $p$ and $[w]_{A_p}$, such that $w^{1+\varepsilon} \in A_p$. \end{thm} \begin{thm} \label{maxweight} For $1 < q < \infty$, the Hardy-Littlewood maximal operator is bounded on $L^q_w$ if and only if $w \in A_q$. Consequently, for $1 \leq p < q < \infty$, $M_p$ is bounded on $L^q_w$ if and only if $w \in A_{q/p}$. \end{thm} \begin{thm} \label{convolve} Suppose that $\varphi \colon \mathbf{R}^n \to \mathbf{R}$ is integrable, radial, and non-increasing as a function of $|x|$. Then, for $u \in L^1$, we have \[ \int \varphi(y)u(x-y) \, \mathrm{d} y \leq \Vert\varphi\Vert_{L^1} Mu(x) \] for all $x \in \mathbf{R}^n$.
\end{thm} The following result of J. Rubio de Francia is also basic in the context of weighted norm inequalities. \begin{thm}[Extrapolation Theorem]\label{extrapolation} If $\Vert Tu\Vert_{L^{p_{0}}_{w}} \leq C \Vert u\Vert_{L^{p_{0}}_{w}}$ for some fixed $p_0\in (1,\infty)$ and all $w\in A_{p_0}$, then one has in fact $\Vert Tu\Vert_{L^p_{w}} \leq C \Vert u\Vert_{L^p_{w}}$ for all $p\in(1,\infty)$ and all $w\in A_{p}.$ \end{thm} \subsubsection{A pointwise uniform bound on oscillatory integrals} Before we proceed with the main estimates, we will need a stationary phase estimate which enables us to control certain integrals depending on various parameters, uniformly with respect to those parameters. Here and in the sequel we denote the Hessian in $\xi$ of the phase function $\varphi(x,\xi)$ by $\partial^2_{\xi\xi}\varphi(x,\xi)$. \begin{lem} \label{UniformStationaryPhase} For $\lambda\geq 1$, let $a_{\lambda}(x,\xi)\in L^{\infty} S^{0}_{0}$ with seminorms that are uniform in $\lambda$, and let $\mathop{\rm supp}_{\xi} a_{\lambda}(x,\xi) \subset B(0, \lambda ^{\mu})$ for some $\mu \geq 0$. Assume that $\varphi(x,\xi)\in L^{\infty}S^{0}_{0}$ and $|\det \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$ for all $(x,\xi) \in \mathop{\rm supp} a_{\lambda}$. Then one has \begin{equation}\label{uniform stationary phase} \sup_{x\in \mathbf{R}^n} \Big| \int e^{i\lambda\varphi(x,\xi)}\, a_{\lambda}(x,\xi)\, \mathrm{d} \xi \Big| \lesssim \lambda^{n\mu-\frac{n}{2}}. \end{equation} \end{lem} \begin{proof} We start with the case $\mu=0$. 
The matrix inequality $\Vert A^{-1}\Vert \leq C_n |\det A|^{-1} \Vert A\Vert^{n-1}$ with $A =\partial^2_{\xi\xi}\varphi(x,\xi)$ and the assumptions on $\varphi$ yield the uniform bound (in $x$ and $\xi$) \begin{equation}\label{uniform estim for the inverse hessian} \big\Vert [\partial^2_{\xi\xi}\varphi(x,\xi)]^{-1} \big\Vert \leq C_n |\det \partial^2_{\xi\xi}\varphi(x,\xi)|^{-1} \big\Vert \partial^2_{\xi\xi}\varphi(x,\xi)\big\Vert^{n-1}\lesssim 1. \end{equation} Looking at the map $\kappa_{x}: \xi\mapsto \nabla_{\xi} \varphi(x,\xi)$, we observe that $D\kappa_{x}(\xi)=\partial^2_{\xi\xi}\varphi(x,\xi)$, where $D\kappa_{x}(\xi)$ denotes the Jacobian matrix of the map $\kappa_{x}$, and that $\kappa_{x}$ is a diffeomorphism due to the condition on $\varphi$ in the lemma. Therefore $$D\kappa^{-1}_{x}(\tilde{\xi})= \big[\partial^2_{\xi\xi}\varphi(x,\kappa^{-1}_{x}(\tilde{\xi}))\big]^{-1},$$ which using \eqref{uniform estim for the inverse hessian} yields uniform bounds for $\Vert D\kappa^{-1}_{x}(\tilde{\xi})\Vert$, hence $$|\kappa^{-1}_{x}(\tilde{\xi})-\kappa^{-1}_{x}(\tilde{\eta})|\leq \big\Vert D\kappa^{-1}_{x} \big\Vert \times |\tilde{\xi}- \tilde{\eta}| \lesssim |\tilde{\xi}- \tilde{\eta}|.$$ This applied to $\tilde{\xi}=\kappa_x(\xi)$, $\tilde{\eta}=\kappa_x(\eta)$ implies that \begin{align} \label{uniform lower bound} |\xi -\eta| \lesssim |\kappa_x(\xi)-\kappa_x(\eta)| = |\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\eta)|. \end{align} We set $$ I(\lambda,x): = \int e^{i\lambda\varphi(x,\xi)}\, a_{\lambda}(x,\xi)\, \mathrm{d} \xi $$ and compute \begin{align*} |I(\lambda,x)|^2 = \iint e^{i\lambda (\varphi(x,\xi)-\varphi(x,\xi+\eta))}\, a_{\lambda}(x,\xi)\, \overline{a_{\lambda}}(x,\xi+\eta)\, \mathrm{d} \xi \, \mathrm{d} \eta. 
\end{align*} We decompose the integral in $\eta$ into two integrals, one over $|\eta|\leq\delta$ and the other over $|\eta|>\delta$, which yields the estimate \begin{multline*} |I(\lambda,x)|^2 \lesssim \\ \delta^n + \int_{\delta}^{\infty} \int_{\mathbf{S}^{n-1}} \bigg|\int e^{i\lambda r \frac{\varphi(x,\xi)-\varphi(x,\xi+r\theta)}{r}} \, a_{\lambda}(x,\xi)\, \overline{a_{\lambda}}(x,\xi+r\theta)\, \mathrm{d} \xi \bigg| \, \mathrm{d} \theta \, r^{n-1} \mathrm{d} r. \end{multline*} Using the uniform lower bound \eqref{uniform lower bound} on the gradient of the phase, we get the uniform lower bound $$ \bigg|\frac{\nabla_{\xi}\varphi(x,\xi)-\nabla_{\xi}\varphi(x,\xi+r\theta)}{r}\bigg| \gtrsim 1.$$ Therefore, applying the non-stationary phase estimate of \cite[Theorem 7.7.1]{H1} to the inner integral yields \begin{align*} |I(\lambda,x)|^2 &\lesssim \delta^n + \lambda^{-n-1} \int_{\delta}^{\infty} r^{-n-1} \, r^{n-1}\mathrm{d} r \\ &\lesssim \delta^n + \delta^{-1} \lambda^{-n-1}. \end{align*} We now optimize this estimate by choosing $\delta=\lambda^{-1}$ and obtain the bound $$ |I(\lambda,x)| \leq C \lambda^{-\frac{n}{2}}, $$ with a constant uniform in $x$. In the case $\mu>0$, we cover the ball $B(0, \lambda ^{\mu})$ with $O(\lambda^{n\mu})$ balls of radius $1$. This provides a covering of the $\xi$-support of $a_{\lambda}$ with balls of radius 1, and we take a finite smooth partition of unity $\theta_{j}(\xi),$ $j=1, \, \dots,\, O(\lambda^{n\mu}),$ subordinate to this covering with $|\partial^{\alpha}_{\xi} \theta_{j}|\leq C_{\alpha}$. Now by the first part of this proof we have \begin{equation}\label{partition pieces} \Big|\int e^{i\lambda\varphi(x,\xi)}\, a_{\lambda}(x,\xi)\,\theta_{j}(\xi)\, \mathrm{d} \xi \Big|\leq C \lambda^{-\frac{n}{2}} \end{equation} with a constant that is uniform in $x$ and $j$. 
Finally, summing in $j$ and remembering that there are $O(\lambda^{n\mu})$ terms involved yields the desired estimate. \end{proof} \subsubsection{Weighted local and global low frequency estimates} For the low frequency portion of the Fourier integral operators studied in this section, we are once again able to handle the weighted $L^p$ boundedness for all $p\in(1,\infty)$, using Lemma \ref{main low frequency estim} and imposing suitable conditions on the phases. \begin{prop}\label{weighted low frequency estimates} Let $\varrho \in [0,1]$ and suppose either: \begin{itemize} \item[(a)] $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ is compactly supported in the $x$ variable, $m\in\mathbf{R}$ and $\varphi (x,\xi) \in L^{\infty}\Phi^1$; or \item[(b)] $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$, $m\in \mathbf{R}$ and $\varphi(x,\xi)-\langle x,\xi\rangle\in L^{\infty}\Phi^1$. \end{itemize} Then for all $\chi_{0}(\xi)\in C^{\infty}_{0}$ supported near the origin, the Fourier integral operator $$ T_{0}u(x)=\frac{1}{(2\pi)^n} \int a(x,\xi)\,\chi_{0}(\xi)\, e^{i\varphi (x,\xi)}\,\widehat{u}(\xi)\, \mathrm{d}\xi$$ is bounded on $L^{p}_{w}$ for $1<p<\infty$ and all $w\in A_p$. \end{prop} \begin{proof} \setitemize[0]{leftmargin=0pt,itemindent=20pt} \begin{itemize} \item[(a)] The operator $T_0$ can be written as $T_0 u(x)= \int K_{0}(x,y) \, u(x-y) \, \mathrm{d} y$ with $$ K_{0}(x,y)= \frac{1}{(2\pi)^n} \int e^{i\psi(x,\xi)-i\langle y,\xi\rangle}\chi_{0}(\xi)\, a(x,\xi) \, \mathrm{d} \xi , $$ where $\psi(x,\xi):= \varphi(x,\xi)-\langle x,\xi\rangle$ satisfies the estimate $$\sup_{|\xi|\neq 0} |\xi|^{-1+|\alpha|} |\partial_{\xi}^{\alpha} \psi(x, \xi)|\leq C_\alpha,$$ for $|\alpha|\geq 1$, on the support of the amplitude $a$. 
Therefore, setting $b(x,\xi):=a(x,\xi) \chi_{0}(\xi) e^{i\psi(x,\xi)}$, we have that $b$ is bounded and that $$\sup_{|\xi|\neq 0} |\xi|^{-1+|\alpha|} |\partial_{\xi}^{\alpha} b(x, \xi)|<\infty,$$ for $|\alpha|\geq 1$, uniformly in $x$. Using Lemma \ref{main low frequency estim}, we have for all $\mu \in (0,1)$ \begin{equation} |K_{0} (x,y)|\leq C_{\mu} \langle y\rangle ^{-n-\mu}, \end{equation} for all $x$. From this and Theorem \ref{convolve}, it follows that $|T_0 u(x)|\lesssim Mu(x)$, and Theorem \ref{maxweight} yields the $L_{w}^{p}$ boundedness of $T_0$. \item[(b)] The only difference from the local case is that, instead of the assumption of compact support in $x$, the assumption $\varphi(x,\xi)-\langle x,\xi\rangle\in L^{\infty}\Phi^1$ yields that $b(x,\xi)$ above satisfies the very same estimate, whereupon the same argument concludes the proof. \end{itemize} \end{proof} \subsection{Counterexamples in the context of weighted boundedness}\label{counterexamples in weighted setting} The following counterexample, going back to \cite{KW}, shows that for smooth Fourier integral operators (smooth phases as well as amplitudes), the non-degeneracy of the phase function, i.e. the non-vanishing of the determinant of the mixed Hessian of the phase, is not enough to yield weighted $L^p$ boundedness, unless one is prepared to lose a rather unreasonable amount of derivatives. \begin{cex} Let $\varphi(x,\xi)= \langle x, \xi\rangle +\xi_1$, which is non-degenerate but satisfies $$\mathrm{rank}\, \partial^{2}_{\xi\xi}\varphi=0,$$ and let $a(x,\xi)= \langle\xi\rangle^{m}$ with $-n<m<0$. Then it has been shown in \cite{Yab} that for $1<p<\infty$ there exist $w\in A_p$ and $f\in L^{p}_{w}$ such that $Tf\notin L^{p}_{w}$, where $T$ is the Fourier integral operator $Tu(x)= (2\pi)^{-n} \int e^{i\langle x, \xi\rangle +i\xi_1} \langle\xi\rangle^{m} \widehat{u}(\xi) \, \mathrm{d}\xi.$ 
\end{cex} However, as the following proposition shows, even with a phase of the type above, one can prove weighted $L^p$ boundedness, provided one allows a certain (comparatively large) loss of derivatives. \begin{prop}\label{weighted boundedness without rank} Let $a(x,\xi)\in L^{\infty}S_{1}^{m}$, $m\leq -n$ and $\varphi(x,\xi)-\langle x, \xi\rangle \in L^{\infty}\Phi^1$. Then $T_{a,\varphi}u(x):=\int e^{i\varphi(x,\xi)} a(x,\xi) \widehat{u}(\xi)\, \mathrm{d}\xi$ is bounded on $L_{w}^{p}$ for $w\in A_p$ and $1<p<\infty$. This result is sharp. \end{prop} \begin{proof} For the low frequency part of the Fourier integral operator we can for example use Proposition \ref{weighted low frequency estimates}. For the high frequency part we may assume that $a(x,\xi)=0$ when $\xi$ is in a neighborhood of the origin. The proof in the case $m<-n$ can be done by a simple integration by parts argument in the integral defining the Schwartz kernel of the operator. Hence the main point of the proof is to deal with the case $m=-n.$ Now the Fourier integral operator $T_{a,\varphi}$ can be written as \begin{equation} T_{a,\varphi}u(x)=\int e^{i\varphi(x,\xi)} a(x,\xi) \widehat{u}(\xi)\, \mathrm{d}\xi=\int \sigma(x,\xi) e^{i\langle x,\xi\rangle} \widehat{u}(\xi)\, \mathrm{d}\xi \end{equation} with $\sigma(x,\xi)= a(x,\xi) e^{i (\varphi(x,\xi)-\langle x,\xi\rangle)}$, and we can assume that $\sigma(x,\xi)=0$ in a neighborhood of the origin. Therefore, since we have assumed that $\varphi(x,\xi)-\langle x, \xi\rangle \in L^{\infty}\Phi^1$, the operator $T_{a,\varphi}= \sigma(x,D)$ is a pseudo-pseudodifferential operator in the sense of \cite{KS}, belonging to the class $\mathrm{OP}L^{\infty}S^{-n}_{0}.$ We use the Littlewood-Paley partition of unity and decompose the symbol as \begin{equation*} \sigma(x,\xi)= \sum_{k=1}^{\infty}\sigma_{k}(x,\xi) \end{equation*} with $\sigma_k(x,\xi)= \sigma(x,\xi)\varphi_{k}(\xi)$, $k\geq 1$. 
We have $$ \sigma_{k}(x,D)(u)(x)=\frac{1}{(2\pi)^{n}}\int \sigma_{k}(x,\xi)\widehat{u}(\xi)e^{i\langle x, \xi\rangle} \, \mathrm{d}\xi$$ for $k\geq 1$. We note that $\sigma_k (x,D)u(x)$ can be written as \begin{equation*} \sigma_{k}(x,D)u(x)=\int K_{k}(x,y)u(x-y) \, \mathrm{d} y \end{equation*} with \begin{equation*} K_{k}(x,y)=\frac{1}{(2\pi)^{n}}\int \sigma_{k}(x,\xi) e^{i\langle y,\xi\rangle} \, \mathrm{d}\xi= \check{\sigma}_{k}(x,y), \end{equation*} where $\check{\sigma}_k$ denotes the inverse Fourier transform of $\sigma_{k}(x,\xi)$ with respect to $\xi$. One observes that \begin{align*} |\sigma_{k}(x,D)u(x)|^{p}&= \Big|\int K_{k}(x,y)u(x-y)\,\mathrm{d} y\Big|^{p} \\&= \Big|\int K_{k}(x,y)\omega(y)\frac{1}{\omega (y)}u(x-y)\, \mathrm{d} y\Big|^{p}, \end{align*} with a weight function $\omega(y)$ to be chosen momentarily. Therefore, H\"older's inequality yields \begin{equation}\label{eq2.14} |\sigma_{k}(x,D)u(x)|^{p}\leq \bigg\{ \int |K_{k}(x,y)|^{p'} | \omega (y)|^{p'} \, \mathrm{d} y\bigg\}^{\frac{p}{p'}} \bigg\{ \int \frac{| u(x-y)|^{p}}{|\omega (y)|^{p}} \, \mathrm{d} y\bigg\}, \end{equation} where $\frac{1}{p} +\frac{1}{p'}=1.$ Now for an $l>\frac{n}{p}$, we define $\omega$ by \begin{equation*} \omega(y)=\begin{cases} 1, &| y| \leq 1; \\ | y|^{l}, & | y| >1. 
\end{cases} \end{equation*} By the Hausdorff-Young theorem and the symbol estimates, first for $\alpha=0$ and then for $| \alpha|=l$, we have \begin{align}\label{eq2.17} \int |K_k (x,y)|^{p'} \, \mathrm{d} y &\leq \bigg\{\int |\sigma_k (x,\xi)|^p\, \mathrm{d}\xi\bigg\}^{\frac{p'}{p}} \lesssim \bigg\{\int_{| \xi| \sim 2^{k}}2^{-npk} \, \mathrm{d}\xi\bigg\}^{\frac{p'}{p}} \\ \nonumber &\lesssim 2^{kp'(\frac{n}{p}-n)}, \end{align} and \begin{align}\label{eq2.18} \int | K_k (x,y)|^{p'} |y|^{p'l}\,\mathrm{d} y &\lesssim \bigg\{\int |\nabla_{\xi}^{l}\sigma_k (x,\xi)|^p \, \mathrm{d}\xi\bigg\}^{\frac{p'}{p}} \\ \nonumber &\lesssim \bigg\{\int_{| \xi| \sim 2^{k}}2^{-kpn}\, \mathrm{d} \xi\bigg\}^{\frac{p'}{p}} \lesssim 2^{kp'(\frac{n}{p}-n)}. \end{align} Hence, splitting the integral into the regions $| y| \leq 1$ and $| y| > 1$ yields \begin{equation*} \bigg\{\int | K_k (x,y)|^{p'}| \omega(y)|^{p'} \, \mathrm{d} y\bigg\}^{\frac{p}{p'}} \lesssim \big\{2^{kp'(\frac{n}{p}-n)}\big\}^{\frac{p}{p'}}=2^{kp(\frac{n}{p}-n)}. \end{equation*} Furthermore, since $|\omega|^{-p}$ is radial, radially non-increasing and integrable thanks to $lp>n$, Theorem \ref{convolve} applied to $|u|^p$ yields \begin{equation*} \int \frac{| u(x-y)|^{p}}{| \omega (y)|^{p}} \, \mathrm{d} y \lesssim \big(M_{p} u(x)\big)^{p} \end{equation*} with a constant that only depends on the dimension $n$. Thus \eqref{eq2.14} yields \begin{equation}\label{pointwiseakestim} |\sigma_{k}(x,D) u(x)|^{p} \lesssim 2^{k(\frac{n}{p}-n)}\big(M_{p} u(x)\big)^{p}. \end{equation} Summing in $k$ using \eqref{pointwiseakestim} we obtain \begin{align*} |T_{a, \varphi} u(x)|&=|\sigma(x,D) u(x)|\leq \sum_{k=1}^{\infty}|\sigma_{k}(x,D)u(x)| \\ &\lesssim \big(M_{p} u(x)\big)\sum_{k=1}^{\infty}2^{\frac{k}{p}(\frac{n}{p}-n)}. \end{align*} We observe that the series above converges for any $p>1$, and therefore an application of Theorem \ref{maxweight} ends the proof. The sharpness of the result follows from Counterexample 1. 
\end{proof} The above discussion suggests that without further conditions on the phase, which as it will turn out amount to a rank condition, the weighted norm inequalities of the type considered in this paper will be false. The following counterexample shows that, even if the phase function satisfies an appropriate rank condition, there is a critical threshold on the loss of derivatives, below which the weighted norm inequalities will fail. \begin{cex} We consider the operator $$ T_m = e^{i|D|} \langle D \rangle^{m} $$ and the functions $$ w_{b}(x)=|x|^{-b}, \quad f_{\mu}(x)=\int e^{-i|\xi|+i x \cdot \xi} \langle \xi \rangle^{-\mu} \, \mathrm{d} \xi. $$ As was mentioned in Example \ref{examples of weights} in Chapter 1, $w_{b} \in A_1$ if $0 \leq b <n,$ from which it also follows that $w:=w_b\chi_{|x|<2}\in A_1$ for $0\leq b<n$, where $\chi_A$ denotes the characteristic function of the set $A$. Now if $\mu <m+n$ then we claim that $|T_m f_{\mu}(x)| \geq C_{m\mu} |x|^{\mu-m-n}$ on $|x| \leq 2$. Indeed, we have $$ T_m f_{\mu}(x) = \int e^{i x \cdot \xi} \langle \xi \rangle^{m-\mu} \, \mathrm{d} \xi, $$ which is a radial function equal to \begin{align*} |\mathbf{S}^{n-1}| \, |x|^{\mu-m-n} \int_0^{\infty} \widehat{\mathrm{d} \omega}(r) \big(|x|^2+r^2\big)^{\frac{m-\mu}{2}} r^{n-1} \, \mathrm{d} r. \end{align*} If we denote by $g_{\mu m}$ the function given by the integral, and take a dyadic partition of unity $1=\psi_{0}+\sum_{j=1}^{\infty}\psi(2^{-j}\cdot)$, then \begin{align*} g_{\mu m}(s)&=\int_0^{2} \widehat{\mathrm{d} \omega}(r) (s^2+r^2)^{\frac{m-\mu}{2}} r^{n-1} \psi_0(r) \, \mathrm{d} r \\ &\quad + \sum_{j=1}^{\infty} 2^{jn} \int_0^{\infty} \widehat{\mathrm{d} \omega}(2^jr) (s^2+2^{2j}r^2)^{\frac{m-\mu}{2}} r^{n-1} \psi(r) \, \mathrm{d} r. 
\end{align*} The first term is continuous if $m-\mu+n>0$, and writing down the integral representing $\widehat{\mathrm{d} \omega}(2^jr)$ and integrating by parts yields that the series in the second term of $g_{\mu m}$ is a smooth function of $s$. Moreover \begin{align*} g_{\mu m}(0)= \int e^{i\langle \xi, e_{1}\rangle} |\xi|^{m-\mu} \, \mathrm{d}\xi= C_n |e_1|^{-n-m+\mu}\neq 0, \end{align*} since the inverse Fourier transform of a radial homogeneous distribution of degree $\alpha$ is a radial homogeneous distribution of degree $-n-\alpha.$ This proves the claim. From this claim it follows that \begin{align*} \int |T_{m}f_{\mu}|^p w \, \mathrm{d} x \geq C_{\mu m} \int_{|x|\leq 2} |x|^{(\mu-m-n)p-b} \, \mathrm{d} x, \end{align*} and therefore $T_m f_{\mu} \notin L^p_w$ if $(\mu-m-n)p-b \leq -n.$ Now, we also have $|f_{\mu}(x)| \leq A_{\mu} \big|1-|x|\big|^{\mu-\frac{n+1}{2}}+B_{\mu}$ on $|x| \leq 2$. This is because the function $f_{\mu}$ is radial, \begin{align*} f_{\mu}(x) = |\mathbf{S}^{n-1}| \int_{0}^{\infty} \widehat{\mathrm{d} \omega}(r|x|) e^{-i r} (1+r^2)^{-\frac{\mu}{2}} r^{n-1} \, \mathrm{d} r, \end{align*} and using the information on the Fourier transform of the surface measure of the sphere, \begin{align} f_{\mu}(x) = |\mathbf{S}^{n-1}| \, \sum_{\pm} \int_{0}^{\infty} e^{-i r(1\pm|x|)} a_{\pm}(r|x|) (1+r^2)^{-\frac{\mu}{2}} r^{n-1} \, \mathrm{d} r \end{align} where $$ |\partial^{\alpha}a_{\pm}(r)| \leq C_{\alpha} \langle r \rangle^{-\frac{n-1}{2}-\alpha}. $$ We now use a dyadic partition of unity $1=\psi_0+\sum_{k=1}^{\infty} \psi(2^{-k}\cdot)$ in the integral and obtain a sum of terms of the form \begin{align*} 2^{kn} \int_{0}^{\infty} e^{-2^ki r(1\pm|x|)} \underbrace{a_{\pm}(2^kr|x|) \psi(r) (1+2^{2k}r^2)^{-\frac{\mu}{2}} r^{n-1}}_{=b^{\pm}_k(r,|x|)} \, \mathrm{d} r \end{align*} with $$ |\partial^{\alpha}_r b^{\pm}_k(r,|x|)| \leq C_{\alpha} 2^{-(\frac{n-1}{2}+\mu)k}. 
$$ Integration by parts yields \begin{align*} |f_{\mu}| &\leq C_1 + C_2 \sum_{2^k|1-|x|| \leq 1} 2^{(\frac{n+1}{2}-\mu)k} + C_3 \sum_{2^k|1-|x|| > 1} 2^{(\frac{n+1}{2}-\mu-N)k} \big|1-|x|\big|^{-N} \\ & \leq C_1 + C'_2 \big|1-|x|\big|^{\mu-\frac{n+1}{2}}. \end{align*} Hence one has \begin{align*} \int |f_{\mu}|^p w \, \mathrm{d} x \leq A_{\mu} \int_{|x| \leq 2} \big|1-|x|\big|^{\mu p-\frac{n+1}{2}p} |x|^{-b} \, \mathrm{d} x+B_{\mu} \int_{|x|\leq 2} |x|^{-b} \, \mathrm{d} x, \end{align*} which in turn yields $f_{\mu} \in L^p_w$ if $ \mu>\frac{n+1}{2}-\frac{1}{p}$ and $0 \leq b<n$. From the estimates above it follows that if $1<p<\infty$ and $T_{m}$ is bounded on $L^p_{w}$, then \begin{align}\label{estim on m} m \leq -\frac{n-1}{2}-\frac{1}{p}. \end{align} Indeed, if $T_{m}$ is bounded on $L^p_{w}$ then we must have \begin{align*} -m>\frac{b-n}{p}+n-\mu \end{align*} for all $0\leq b <n$ and all $\mu>\frac{n+1}{2}-\frac{1}{p}$. Letting $\mu$ tend to $\frac{n+1}{2}-\frac{1}{p}$ we obtain \begin{align*} m \leq -\frac{b-n}{p}-\frac{n-1}{2}-\frac{1}{p} \end{align*} for all $0 \leq b<n$, and letting $b$ tend to $n$ yields \eqref{estim on m}.\\ \noindent Now, if $T_{m}$ is bounded on $L^{q}_{w}$ for a fixed $q>1$ and for all $w \in A_{q},$ then by the Extrapolation Theorem \ref{extrapolation} it is bounded on $L^p_{w}$ for all $w\in A_{p}$ and all $1<p<\infty$; therefore, since $w \in A_{1}\subset A_{p}$, we conclude that $T_{m}$ is bounded on $L^p_{w}$ for all $1<p<\infty$, which implies that $m$ has to satisfy the inequality \begin{align*} m\leq-\frac{n-1}{2}-\frac{1}{p}, \end{align*} for all $1<p<\infty$. Letting $p$ tend to $1$, we obtain \begin{align*} m\leq-\frac{n+1}{2}, \end{align*} which is the desired critical threshold for the validity of the weighted $L^p$ boundedness of the class of Fourier integral operators under consideration in this paper. 
\end{cex} We end with an example showing that in the most general situation one cannot expect \emph{global} weighted $L^p$ estimates unless the phase satisfies some slightly stronger property than a rank condition. \begin{cex} Let $B$ be the unit ball in $\mathbf{R}^n$ and consider the operator $$ Tu(x) = (1_{B} * u)(2x), $$ and suppose that this operator is bounded on $L^p_{w}$ with a bound $C_{p}=C\big([w]_{A_{p}}\big)$ depending only on $[w]_{A_{p}}$: \begin{align*} {\Vert}Tu{\Vert}_{L^p_{w}} \leq C_{p} {\Vert}u{\Vert}_{L^p_{w}}, \quad u \in \mathscr{S}(\mathbf{R}^n). \end{align*} Note that the $A_{p}$-constant $[w]_{A_{p}}$ is scale invariant. If we apply the estimate to the function $u(\varepsilon \cdot)$ and the weight $w(\varepsilon \cdot)$ and scale it, we obtain \begin{align*} {\Vert}T_{\varepsilon}u{\Vert}_{L^p_{w}} \leq C_{p} {\Vert}u{\Vert}_{L^p_{w}}, \quad u \in \mathscr{S}(\mathbf{R}^n), \end{align*} with $T_{\varepsilon}u(x)=\varepsilon^{-n} (1_{\varepsilon B}*u)(2x)$. Since $T_{\varepsilon}u$ tends to $u(2\,\cdot\,)$ in $L^p_{w}$, by letting $\varepsilon$ tend to $0$ we deduce from the former that \begin{align*} {\Vert}u(2\,\cdot\,){\Vert}_{L^p_{w}} \leq C_{p} {\Vert}u{\Vert}_{L^p_{w}}, \quad u \in \mathscr{S}(\mathbf{R}^n). \end{align*} After a change of variables, this would imply \begin{align*} 2^{-n} \int |u(x)|^p w(x/2) \, \mathrm{d} x \leq C^p_{p} \int |u(x)|^p w(x) \, \mathrm{d} x \end{align*} for all $u \in \mathscr{S}(\mathbf{R}^n),$ hence \begin{align*} w(x/2) \leq C_{p}^p\, 2^n\, w(x). \end{align*} This means that one can expect weighted $L^p$ estimates for $T$ only if $w$ satisfies a doubling property. Note that $T$ can be written as a sum of Fourier integral operators with amplitudes in $S^{-\frac{n+1}{2}}_{1,0}$ with phases of the form $$ \varphi_{\pm} = 2\langle x, \xi \rangle \pm |\xi| $$ which satisfy a rank condition. 
Nevertheless, one has $$ \varphi_{\pm} - \langle x,\xi \rangle \notin L^{\infty}\Phi^1. $$ In particular, one cannot in general expect global weighted estimates for Fourier integral operators with phases such that $\varphi - \langle x,\xi \rangle \notin L^{\infty}\Phi^1$, unless the weight satisfies some doubling property. \end{cex} \subsection{Invariant formulation in the local boundedness} The aim of this section is to give an invariant formulation of the rank condition on the phase function, which will be used to prove our weighted norm inequalities for Fourier integral operators. In Counterexample 1 we saw that a rank condition is necessary for the validity of weighted $L^p$ estimates. The following discussion will enable us to give an invariant formulation of the local weighted norm inequalities for operators with amplitudes in $S^{-\frac{n+1}{2}\varrho+n(\varrho -1)}_{\varrho, 1-\varrho},$ $\varrho\in[\frac{1}{2},1].$ We refer the reader to \cite{H3} for the properties of Fourier integral operators with amplitudes in $S^{m}_{\varrho, 1-\varrho},$ $\varrho\in (\frac{1}{2}, 1]$, and to the paper by A. Greenleaf and G. Uhlmann for the case $\varrho=\frac{1}{2}.$ The central object in the theory of Fourier integral operators is the canonical relation \begin{align*} \mathcal{C}_{\varphi} = \big\{ (x,\partial_{x}\varphi(x,\xi),\partial_{\xi}\varphi(x,\xi),\xi) : (x,\xi) \in \mathop{\rm supp} a \big\} \end{align*} in $T^*\mathbf{R}^n \times T^*\mathbf{R}^n$. We consider the following projection onto the space variables \begin{align*} \xymatrix @!0 @C=7pc @R=4pc {& \mathcal{C}_{\varphi} \subset T^*\mathbf{R}^n_{x} \times T^*\mathbf{R}^n_{y} & \simeq T^{*}(\mathbf{R}^{2n}_{x,y}) \ar[d]^{\pi_0} & \\ & & \pi_{0}(\mathcal{C}) \subset \mathbf{R}^{2n}_{xy}& } \end{align*} with \begin{align*} \pi_{0}(x,\xi,y,\eta)= (x,y). 
\end{align*} The differential of this projection is given by \begin{align*} \mathrm{d} \pi_{0}(t_{x},t_{\xi},t_{y},t_{\eta}) = (t_{x},t_{y}), \quad t_{\xi}&=\partial^2_{x x}\varphi \, t_{x} + \partial^2_{x\xi}\varphi \, t_{\eta} \\ t_{y}&=\partial^2_{\xi x} \varphi \, t_{x} + \partial^2_{\xi\xi}\varphi \, t_{\eta}, \end{align*} so that its kernel is given by \begin{align*} \big\{(0,\partial^2_{x\xi}\varphi \, t_{\eta},0,t_{\eta}): t_{\eta} \in \ker \partial^2_{\xi\xi}\varphi \big\}. \end{align*} This implies $$ \mathop{\rm rank} \mathrm{d} \pi_{0} = \mathop{\rm codim} \ker \mathrm{d} \pi_{0} = \mathop{\rm codim} \ker \partial^2_{\xi \xi}\varphi = n+\mathop{\rm rank} \partial^2_{\xi \xi}\varphi. $$ Our assumption on the phase, $\mathop{\rm rank} \partial^2_{\xi \xi}\varphi=n-1$, can therefore be invariantly formulated as $$ \mathop{\rm rank} \mathrm{d} \pi_{0} = 2n-1. $$ Using these facts, we will later on be able to show that if $T$ is a Fourier integral operator with amplitude in $S^{-\frac{n+1}{2}\varrho +n(\varrho -1)}_{\varrho ,1-\varrho}$ with $\varrho\in [\frac{1}{2},1]$ whose canonical relation $\mathcal{C}$ is locally the graph of a symplectomorphism, and if $$ \mathop{\rm rank}\mathrm{d} \pi_{0} = 2n-1 $$ everywhere on $\mathcal{C}$, with $\pi_{0} : \mathcal{C} \to \mathbf{R}^{2n}$ defined by $\pi_{0}(x,\xi,y,\eta)=(x,y)$, then there exists a constant $C>0$ such that \begin{align*} {\Vert}Tu{\Vert}_{L^{p}_{w,{\rm loc}}} \leq C{\Vert}u{\Vert}_{L^p_{w,{\rm comp}}} \end{align*} for all $w \in A_{p}$ and all $1 <p <\infty$. 
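To illustrate the rank condition in a concrete case (a standard computation, recorded here only for the reader's convenience), consider the phase $\varphi(x,\xi)=\langle x,\xi\rangle+|\xi|$, which will reappear below. One has
\begin{align*}
\partial^2_{\xi\xi}\varphi(x,\xi)=\partial^2_{\xi\xi}|\xi| = \frac{1}{|\xi|}\Big(I-\frac{\xi\otimes\xi}{|\xi|^2}\Big), \quad \xi\neq 0,
\end{align*}
which is $|\xi|^{-1}$ times the orthogonal projection onto the hyperplane $\xi^{\perp}$. Hence $\ker \partial^2_{\xi\xi}\varphi=\mathrm{span}\,\{\xi\}$ and $\mathop{\rm rank} \partial^2_{\xi\xi}\varphi=n-1$, or equivalently $\mathop{\rm rank} \mathrm{d}\pi_{0}=2n-1$.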
However, we will actually prove local weighted $L^p$ boundedness of operators with amplitudes in the class $L^{\infty}S^{-\frac{n+1}{2}\varrho +n(\varrho -1)}_{\varrho}$ with $\varrho\in [0,1]$, for which the invariant formulation above lacks meaning, and therefore, to keep the presentation of the statements as simple as possible, we will not always (with the exception of Theorem \ref{weighted boundedness for true amplitudes with power weights}) state the local boundedness theorems in an invariant form. \subsection{Weighted local and global $L^p$ boundedness of Fourier integral operators} We start by showing the local weighted $L^p$ boundedness of Fourier integral operators. In Counterexample 1, which was related to Fourier integral operators with linear phases, the Hessian in the frequency variable $\xi$ of the phase function vanishes identically. This suggests that some kind of condition on the Hessian of the phase might be required. It turns out that the condition we need can be formulated in terms of the rank of the Hessian of the phase function in the frequency variable. Furthermore, Counterexample 2, which was related to the wave operator, suggests a condition on the order of the amplitudes involved. It turns out that these conditions, appropriately formulated, will indeed yield weighted boundedness of a wide range of Fourier integral operators, even those having rough phases and amplitudes. \begin{thm}\label{local weighted boundedness} Let $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ with $m<-\frac{n+1}{2}\varrho+ n(\varrho-1)$ and $\varrho \in[0,1]$ be compactly supported in the $x$ variable. Let the phase function $\varphi(x,\xi)\in C^{\infty}(\mathbf{R}^n\times \mathbf{R}^n\setminus\{0\})$ be homogeneous of degree $1$ in $\xi$ and satisfy $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$. Then the corresponding Fourier integral operator is bounded on $L^p_w$ for $1<p<\infty$ and all $w\in A_p$. 
\end{thm} \begin{proof} The low frequency part of the Fourier integral operator is handled by Proposition \ref{weighted low frequency estimates} part (a). For the high frequency portion, once again we use a Littlewood-Paley decomposition in the frequency variables as in Subsection \ref{Semiclasical reduction subsec} to reduce the operator to its semiclassical piece $$T_h u(x)= (2\pi)^{-n} \iint e^{i(\varphi(x,\xi)-\langle y, \xi\rangle)}\,\chi(h\xi)\, a(x,\xi)\, u(y)\, \mathrm{d} y\, \mathrm{d}\xi$$ with $\chi(\xi)$ smooth and supported in the annulus $\frac{1}{2}\leq |\xi|\leq 2.$ Now if we let $\theta(x,\xi):=\varphi(x,\xi)-\langle x, \xi\rangle$, then we have \begin{equation}\label{first derivative of theta} \vert\nabla_{\xi} \theta\vert \lesssim 1 \end{equation} on the support of $a$. Furthermore, if \begin{equation}\label{Kernel of local weihted Fourier integral operator} T_h (x,y)= (2\pi)^{-n} \int e^{i(\theta(x,\xi)-\langle y, \xi\rangle)}\, \chi(h\xi) a\big(x,\xi\big)\, \mathrm{d}\xi, \end{equation} then in order to get useful pointwise estimates for the operator $T_h$ we need to estimate the kernel $T_h(x,y)$. Localising in frequency and rotating the coordinates, we may assume that $a(x,\xi)$ is supported in a small conic neighbourhood of $\xi_0 =e_n$. At this point we split the modulus of $T_h$ into \begin{align*} |T_h u(x)| &\leq \int_{|y|> 1+2\Vert\nabla_{\xi} \theta\Vert_{L^\infty}} + \int_{|y|\leq 1+2\Vert\nabla_{\xi} \theta\Vert_{L^\infty}}| T_h (x,y)| \, |u(x-y)| \, \mathrm{d} y \\ &:=\textbf{I} + \textbf{II}, \end{align*} where there are obviously no critical points on the domain of integration for~\textbf{I}. \\ \textbf{Estimate of I.} Making the change of variables $\xi\to h^{-1} \xi$ and using the homogeneity of degree $1$ of $\theta$ in $\xi$, we obtain \begin{align*} T_h u(x)&= (2\pi)^{-n} h^{-n}\iint e^{ih^{-1}(\theta(x,\xi)-\langle y, \xi\rangle)}\,\chi(\xi)\, a\big(x,\xi/h\big)\, u(y)\, \mathrm{d} y\, \mathrm{d} \xi. 
\end{align*} Here, since $2\Vert \nabla_{\xi} \theta\Vert_{L^\infty} < 1+ 2\Vert \nabla_{\xi} \theta\Vert_{L^\infty} < |y|$, we have \begin{equation}\label{local weighted non stationary} |\nabla_{\xi} \theta(x,\xi) -y| \geq |y| -\Vert \nabla_{\xi} \theta\Vert_{L^\infty}> \frac{|y|}{2}. \end{equation} Also, $|\partial^{\alpha}_{\xi} (\theta(x,\xi)- \langle y, \xi\rangle)| \leq C_\alpha$ for all $|\alpha|\geq 2$, uniformly in $x$ and $y$. Therefore, applying the non-stationary phase estimate of \cite[Theorem 7.7.1]{H1} to \eqref{Kernel of local weihted Fourier integral operator}, the derivative estimates on $a(x,\xi/h)$ and \eqref{local weighted non stationary} yield for $N>0$ \begin{align*}\label{kernel estim local weighted I} | T_h (x,y)| &\lesssim h^{-n} h^{N} \sum_{\alpha \leq N} \sup\Big\{\big|\partial ^{\alpha}\big(\chi(\xi)\, a(x,\xi/h)\big)\big|\,|\nabla_{\xi} \theta(x,\xi) -y|^{|\alpha|-2N}\Big\}\\ \nonumber &\lesssim h^{-m-n+N\varrho} |y|^{-2N}\lesssim h^{-m-n+N\varrho} \langle y \rangle ^{-2N}, \end{align*} where we have used the fact that $|y|>1$ in \textbf{I}. Hence, taking $N>\frac{n}{2}$, Theorem \ref{convolve} yields \begin{equation} \textbf{I}\lesssim h^{-m-n+N \varrho} \int \langle y \rangle ^{-2N} |u(x-y)|\, \mathrm{d} y \lesssim h^{-m-n+N \varrho} Mu(x). \end{equation} \textbf{Estimate of II.} Making a change of variables $\xi\to h^{-\varrho} \xi$ we obtain \begin{align*} T_h(x,y)&= (2\pi)^{-n} h^{-n\varrho}\int e^{ih^{-\varrho}(\theta(x,\xi)-\langle y, \xi\rangle)}\,\chi(h^{1-\varrho}\xi)\, a\big(x,h^{-\varrho}\xi\big)\, \mathrm{d} \xi \\ \nonumber &:=(2\pi)^{-n} h^{-n\varrho}\int e^{ih^{-\varrho}(\theta(x,\xi)-\langle y, \xi\rangle)}\, a_{h}\big(x,\xi\big) \, \mathrm{d}\xi, \end{align*} where $a_{h}(x,\xi)$ is compactly supported in $x$, supported in $\xi$ in the annulus $\tfrac{1}{2}h^{\varrho-1} \leq |\xi| \leq 2 h^{\varrho-1}$, and is uniformly bounded, together with all its derivatives in $\xi$, by $h^{-m}$. 
Here the assumption $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$ for all $\xi$, together with Euler's homogeneity relation $\partial^2_{\xi\xi}\varphi(x,\xi)\,\xi=0$, yields that $\mathrm{ker}\,\partial^2_{\xi\xi}\varphi(x,\xi_{0}) =\mathrm{span}\, \{\xi_0\}=\mathrm{span}\, \{e_n\}.$ Therefore, by the definition of $\theta(x,\xi)$, \begin{equation} \det \partial^2_{\xi'\xi'}\theta (x,e_n)\neq 0. \end{equation} The assumption that $a$ has its $\xi$-support in a small conic neighborhood of $e_n$ implies that if that support is sufficiently small, then \begin{equation} |\det \partial^2_{\xi'\xi'}\theta (x,\xi)| \geq c>0, \quad (x,\xi) \in \mathop{\rm supp} a_h. \end{equation} Finally, due to the restriction $1+ 2\Vert \nabla_{\xi} \theta\Vert_{L^\infty} \geq |y|$ and \eqref{first derivative of theta}, one has \begin{equation} \label{boundedness of derivatives of theta} |\partial^{\alpha}_{\xi} (\theta(x,\xi)- \langle y, \xi\rangle)| \leq C_\alpha \end{equation} for all $|\alpha|\geq 1$, uniformly in $x$ and $y$. Hence $\theta(x,\xi)- \langle y', \xi'\rangle$ and $h^ma_{h}$ satisfy all the assumptions of the stationary phase estimate in Lemma \ref{UniformStationaryPhase} with $\lambda = h^{-\varrho}$ and $\lambda^{\mu}=h^{\varrho-1}$, and we obtain \begin{align*} \bigg|\int e^{ih^{-\varrho}(\theta(x,\xi)-\langle y, \xi\rangle)}\, a_{h}\big(x,\xi\big) \, \mathrm{d}\xi' \bigg| \lesssim h^{-m} h^{\frac{n-1}{2}\varrho} h^{(n-1)(\varrho-1)}, \end{align*} and using the fact that the integral in $\xi_{n}$ is over a segment of size $h^{\varrho-1}$, we get \begin{equation} |T_h(x, y)|\lesssim h^{-n\varrho} h^{-m} h^{\frac{n-1}{2}\varrho-(1-\varrho)n}\lesssim h^{-m-\frac{n+1}{2}\varrho-(1-\varrho)n}. 
\end{equation} This yields that \begin{align*} \textbf{II} &\lesssim h^{-m-\frac{n+1}{2}\varrho-(1-\varrho)n} \int_{|y|\leq 1+ 2\Vert \nabla_{\xi} \theta\Vert_{L^\infty}} |u(x-y)|\, \mathrm{d} y \\ &\lesssim h^{-m-\frac{n+1}{2}\varrho-(1-\varrho)n} Mu(x). \end{align*} Now adding \textbf{I} and \textbf{II}, taking $N>n$ large enough, and using Lemma \ref{Lp:semiclassical}, the assumption $m<-\frac{n+1}{2}\varrho +n(\varrho-1)$ and Theorem \ref{maxweight}, we obtain the desired result. \end{proof} Here we remark that the rank condition on the Hessian of the phase is quite natural and is satisfied by phases like $\langle x, \xi\rangle +|\xi|$ and $\langle x, \xi\rangle +\sqrt{|\xi'|^2 -|\xi''|^2}$, where $\xi=(\xi' , \xi'')$ (with an amplitude supported in $|\xi'|>|\xi''|$). However, if we impose a slightly stronger condition than the rank condition on the phase, then it turns out that we can not only extend the local result to a global one, but also lower the regularity assumption on the phase function to the sole assumption of measurability and boundedness in the spatial variable $x$. The estimates we provide below therefore aim at this level of generality. Having the uniform stationary phase estimate above at our disposal, we now proceed with the main high frequency estimates; before that, let us fix a piece of notation. \subsubsection*{Notation.} Given an $n\times n$ matrix $M$, we denote by $\mathrm{det}_{n-1} M$ the determinant of the matrix $PMP$, where $P$ is the projection onto the orthogonal complement of $\mathrm{ker}\, M$. \begin{thm}\label{global weighted boundedness} Let $a(x,\xi)\in L^{\infty}S^{m}_{\varrho}$ with $m<-\frac{n+1}{2}\varrho+ n(\varrho-1)$ and $\varrho \in[0,1]$. Let the phase function $\varphi(x,\xi)$ satisfy $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$.
Furthermore, suppose that $\varphi (x,\xi)-\langle x,\xi\rangle \in L^{\infty}\Phi^1$. Then the associated Fourier integral operator is bounded on $L_{w}^{p}$, for $1<p<\infty$ and $w\in A_p$. \end{thm} \begin{proof} As before, the low frequency part of the Fourier integral operator is treated using Proposition \ref{weighted low frequency estimates} part (b). For the high frequency part we follow the same line of argument as in the proof of Theorem \ref{local weighted boundedness}. More specifically, at the point where the estimate \eqref{first derivative of theta} is established, the lack of compact support in the $x$ variable leads us to use the assumption $\varphi(x,\xi)-\langle x,\xi\rangle \in L^{\infty}\Phi^1$ instead, which yields that $\Vert \nabla_{\xi}\theta(x,\xi)\Vert_{ L^{\infty}}\lesssim 1$. We split the kernel of the Fourier integral operator into the same pieces \textbf{I} and \textbf{II} as in the proof of Theorem \ref{local weighted boundedness}, estimate the piece \textbf{I} exactly as before, and for the piece \textbf{II} proceed as follows. First of all, the assumption $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$ for all $(x,\xi)$ yields in particular that $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}(\theta(x,\xi)- \langle y, \xi\rangle)|\geq c>0$.
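For orientation, the bookkeeping of the powers of $h$ behind the bound on $T_h(x,y)$ in the region \textbf{II}, obtained exactly as in the proof of Theorem \ref{local weighted boundedness}, is the following (the provenance indicated under the braces is our reading of that proof):

```latex
\begin{align*}
|T_h(x,y)| \lesssim
\underbrace{h^{-n\varrho}}_{\text{rescaling}}\;
\underbrace{h^{-m}}_{\text{size of } a_h}\;
\underbrace{h^{\frac{n-1}{2}\varrho}}_{\text{stationary phase in } \xi'}\;
\underbrace{h^{(n-1)(\varrho-1)}}_{\xi'\text{-support of } a_h}\;
\underbrace{h^{\varrho-1}}_{\xi_n\text{-integration}}
= h^{-m-\frac{n+1}{2}\varrho+n(\varrho-1)},
\end{align*}
```

and the total exponent of $h$ is positive precisely when $m<-\frac{n+1}{2}\varrho+n(\varrho-1)$, which is the standing assumption on $m$.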
Due to the restriction $1+ 2\Vert \nabla_{\xi} \theta\Vert_{L^\infty} \geq |y|$ and \eqref{first derivative of theta}, one also has $$|\partial^{\alpha}_{\xi} (\theta(x,\xi)- \langle y, \xi\rangle)| \leq C_\alpha$$ for all $|\alpha|\geq 1$, uniformly in $x$ and $y$, which yields that $\theta(x,\xi)- \langle y, \xi\rangle\in L^\infty S^{0}_{0}.$ This means that all the assumptions in Lemma \ref{UniformStationaryPhase} are satisfied and therefore we get \begin{multline*} \textbf{II} \lesssim h^{-m-\frac{n+1}{2}\varrho-(1-\varrho)n} \\ \times \int_{|y|\leq 1+ 2\Vert \nabla_{\xi} \theta\Vert_{L^\infty}} |u(x-y)|\, \mathrm{d} y \lesssim h^{-m-\frac{n+1}{2}\varrho-(1-\varrho)n} Mu(x). \end{multline*} Now adding \textbf{I} and \textbf{II}, using Lemma \ref{Lp:semiclassical}, the assumptions $N>n$ and $m<-\frac{n+1}{2}\varrho+n(\varrho-1)$, and Theorem \ref{maxweight}, we obtain the desired result. \end{proof} \subsubsection{Endpoint weighted boundedness of Fourier integral operators} The following interpolation lemma is the main tool in proving the endpoint weighted boundedness of Fourier integral operators. \begin{lem} \label{interpolgen} Let $0 \leq \varrho \leq 1$, $1 < p < \infty$ and $m_1 < m_2$. Suppose that \begin{enumerate} \item [$(a)$] the Fourier integral operator $T$ with amplitude $a(x,\xi)\in L^\infty S^{m_1}_{\varrho}$ and the phase $\varphi(x,\xi)$ is bounded on $L^p_w$ for a fixed $w \in A_p$, and \item [$(b)$] the Fourier integral operators $T$ with amplitudes $a(x,\xi)\in L^\infty S^{m_2}_{\varrho}$ and the same type of phases as in $($a$)$ are bounded on $L^p$, \end{enumerate} where the bounds depend only on a finite number of seminorms in Definition $\ref{defn of amplitudes}.$ Then, for each $m \in (m_1,m_2)$, operators with amplitudes in $L^\infty S^{m}_{\varrho}$ are bounded on $L^p_{w^\nu}$, where \begin{equation} \nu = \frac{m_2 - m}{m_2 - m_1}.
\end{equation} \end{lem} \begin{proof} For $a\in L^{\infty} S^{m}_{\varrho}$ we introduce a family of amplitudes $a_{z}(x,\xi):= \langle \xi \rangle ^{z} a(x,\xi)$, where $z\in \Omega:=\{z\in\mathbb{C};\, m_1 - m\leq \text{Re}\,z\leq m_2 - m \}$. It is easy to see that, for $| \alpha | \leq C_1$ with $C_1$ large enough and $z\in \Omega$, \begin{equation*} | \partial^{\alpha}_{\xi} a_z (x,\xi)| \lesssim (1+| \text{Im}\, z| )^{C_2}\langle \xi \rangle ^{\text{Re}\, z +m-\varrho | \alpha|}, \end{equation*} for some $C_2$. We introduce the operator $$T_z u:= w^{\frac{m_2 - m - z}{p(m_2 - m_1)}} T_{a_{z}, \varphi}\big(w^{-\frac{m_2 - m - z}{p(m_2 - m_1)}} u\big).$$ First we consider the case of $p\in [1,2]$. In this case, $A_p\subset A_2$ which in turn implies that both $w^{\frac{1}{p}}$ and $w^{-\frac{1}{p}}$ belong to $L^p_{\mathrm{loc}}$ and therefore for $z\in \Omega$, $T_z$ is an analytic family of operators in the sense of Stein-Weiss \cite{SW}. Now we claim that for $z_1\in \mathbb{C}$ with $\text{Re}\, z_1=m_{1} - m$, the operator $(1+| \text{Im}\,z_1 |)^{-C_2}T_{a_{z_{1}}, \varphi}$ is bounded on $L^{p}_{w}$ with bounds uniform in $z_1$. Indeed the amplitude of this operator is $(1+| \text{Im}\,z_1 |)^{-C_2}a_{z_{1}}(x,\xi)$ which belongs to $L^{\infty}S^{m_1}_{\varrho}$ with constants uniform in $z_1$. Thus, the claim follows from assumption (a). Consequently, we have \begin{align*} \Vert T_{z_1} u\Vert^{p}_{L^p} & = (1+| \text{Im}\,z_1 |)^{pC_2}\Big\Vert (1+| \text{Im}\,z_1|)^{-C_2}w^{\frac{m_2 - m - z_1}{p(m_2 - m_1)}} T_{a_{z_1},\varphi}\big(w^{-\frac{m_2 - m - z_1}{p(m_2 - m_1)}} u\big)\Big\Vert ^{p}_{L^{p}} \\ \nonumber & \lesssim (1+| \text{Im}\,z_1 |)^{pC_2} \Big\Vert w^{-\frac{m_2 - m - z_1}{p(m_2 - m_1)}} u \Big\Vert ^{p}_{L^{p}_{w}} \\ \nonumber & =(1+| \text{Im}\,z_1 |)^{pC_2}\Vert u\Vert^{p}_{L^p}, \end{align*} where we have used the fact that $\big|w^{\frac{m_2 - m - z_1}{(m_2 - m_1)}} \big|= w$. 
Similarly, if $z_2 \in \mathbb{C}$ with $\text{Re}\,z_2 = m_2 - m$, then $\big|w^{\frac{m_2 - m - z_2}{(m_2 - m_1)}} \big|= 1$ and the amplitude of the operator $(1+| \text{Im}\,z_2 |)^{-C_2}T_{a_{z_2}, \varphi}$ belongs to $L^\infty S^{m_2}_\varrho$ with constants uniform in $z_2$. Assumption (b) therefore implies that \begin{equation*} \Vert T_{z_2} u\Vert^{p}_{L^p}\lesssim (1+| \text{Im}\,z_2 |)^{pC_2}\Vert u\Vert^{p}_{L^p}. \end{equation*} Therefore the complex interpolation of operators in \cite{BS} implies that for $z = 0$ we have \begin{equation*} \Vert T_{0} u\Vert^{p}_{L^p}= \Big\Vert w^{\frac{m_2 - m}{p(m_2 - m_1)}} T_{a,\varphi}\big(w^{-\frac{m_2 - m}{p(m_2 - m_1)}}u\big)\Big\Vert^{p}_{L^p}\leq C\Vert u\Vert^{p}_{L^p}. \end{equation*} Hence, setting $v = w^{-\frac{m_2 - m}{p(m_2 - m_1)}}u$, this reads \begin{equation} \Vert T_{a,\varphi}v\Vert^{p}_{L^p_{w^\nu}}\leq C\Vert v\Vert^{p}_{L^p_{w^\nu}}, \end{equation} where $\nu = (m_2 - m)/(m_2 - m_1)$. This ends the proof in the case $1\leq p\leq 2$. At this point we recall the fact that if a linear operator $T$ is bounded on $L^p_w$, then its adjoint $T^{\ast}$ is bounded on $L^{p'}_{ w^{1-p'}}$. Therefore, in the case $p > 2$, we apply the above proof to $T^{\ast}_{a, \varphi}$, with $p'\in [1,2)$ and $v= w^{1-p'}$, which yields that $T^{\ast}_{a, \varphi}$ is bounded on $L^{p'}_ {v^\nu}$; since $w \in A_p$, we have $v \in A_{p'}$, and so $T_{a,\varphi}$ is bounded on $L^p_{v^{(1-p)\nu}} = L^p_{w^{(1-p')(1-p)\nu}}= L^p_{w^\nu}$, which concludes the proof of the lemma. \end{proof} Now we are ready to prove our main result concerning the weighted boundedness of Fourier integral operators. This is done by combining our previous results with a method based on the properties of the $A_p$ weights and complex interpolation.
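To make the appearance of the exponent $\nu$ in Lemma \ref{interpolgen} transparent, it may help to track the modulus of the conjugating weight along the two boundary lines of $\Omega$ and at the interior point $z=0$:

```latex
\[
\Big| w^{\frac{m_2-m-z}{p(m_2-m_1)}} \Big|
= w^{\frac{m_2-m-\mathrm{Re}\, z}{p(m_2-m_1)}}
=
\begin{cases}
w^{\frac{1}{p}}, & \mathrm{Re}\, z = m_1-m \quad (\text{the weighted } L^{p}_{w} \text{ endpoint}),\\[3pt]
w^{0}=1, & \mathrm{Re}\, z = m_2-m \quad (\text{the unweighted } L^{p} \text{ endpoint}),\\[3pt]
w^{\frac{\nu}{p}}, & z=0, \quad \text{with } \nu=\dfrac{m_2-m}{m_2-m_1}.
\end{cases}
\]
```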
\begin{thm}\label{endpointweightedboundthm} Let $a(x,\xi)\in L^{\infty}S^{-\frac{n+1}{2}\varrho+n(\varrho -1)}_{\varrho}$ and $\varrho \in [0,1].$ Suppose that either \begin{enumerate} \item[$(a)$] $a(x,\xi)$ is compactly supported in the $x$ variable and the phase function $\varphi(x,\xi)\in C^{\infty} (\mathbf{R}^ n \times \mathbf{R}^{n}\setminus 0)$ is positively homogeneous of degree $1$ in $\xi$ and satisfies $\det\partial^2_{x\xi}\varphi(x,\xi) \neq 0$ as well as $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$; or \item[$(b)$] $\varphi (x,\xi)-\langle x,\xi\rangle \in L^{\infty}\Phi^1,$ and $\varphi $ satisfies the rough non-degeneracy condition as well as $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$. \end{enumerate} Then the operator $T_{a,\varphi}$ is bounded on $L^{p}_{w}$ for $p\in (1,\infty)$ and all $w\in A_p$. Furthermore, for $\varrho=1$ this result is sharp. \end{thm} \begin{proof} The sharpness of this result for $\varrho=1$ is already contained in Counterexample 2 discussed above. The key point in the proof is that both assumptions in the statement of the theorem guarantee the weighted boundedness for $m<-\frac{n+1}{2}\varrho +n(\varrho -1).$ The rest of the argument is rather abstract and goes as follows. By the extrapolation Theorem \ref{extrapolation}, it is enough to show the boundedness of $T_{a,\varphi}$ on weighted $L^2$ spaces with weights in the class $A_2$. Let us fix $m_2$ such that $-\frac{n+1}{2}\varrho+n(\varrho -1)<m_2 < \frac{n}{2}(\varrho-1).$ By Theorem \ref{open}, given $w\in A_{2}$, choose $\varepsilon$ such that $w^{1+\varepsilon}\in A_{2}$. For this $\varepsilon$, take $m_1<-\frac{n+1}{2}\varrho+n(\varrho -1)$ in such a way that the straight line $L$ joining the points with coordinates $(m_1,1+ \varepsilon)$ and $(m_2,0)$ intersects the line $x=-\frac{n+1}{2}\varrho+n(\varrho -1)$ in the $(x,y)$ plane at the point with coordinates $(-\frac{n+1}{2}\varrho+n(\varrho -1),1)$.
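Writing $m_c:=-\frac{n+1}{2}\varrho+n(\varrho-1)$ for brevity, the condition that the line through $(m_2,0)$ and $(m_1,1+\varepsilon)$ meets the vertical line $x=m_c$ at height $y=1$ amounts to the elementary computation

```latex
\[
m_c=m_2+\frac{m_1-m_2}{1+\varepsilon}
\qquad\Longleftrightarrow\qquad
m_1=m_2+(1+\varepsilon)(m_c-m_2)=m_c-\varepsilon\,(m_2-m_c),
\]
```

so, since $m_2>m_c$, one automatically has $m_1<m_c$, and $m_1\to m_c$ as $\varepsilon\to 0$.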
Clearly this procedure is possible, since we can choose the point $m_1$ on the negative $x$ axis as close as we like to the point $-\frac{n+1}{2}\varrho+n(\varrho -1).$ By Theorem \ref{local weighted boundedness}, given $\varphi(x,\xi)\in C^{\infty} (\mathbf{R}^ n \times \mathbf{R}^{n}\setminus 0)$, positively homogeneous of degree $1$ in $\xi,$ satisfying $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1,$ and $a\in L^{\infty} S^{m_1}_{\varrho},$ the Fourier integral operators $T_{a, \varphi}$ are bounded operators on $L^2_{w^{1+\varepsilon}}$ for $w \in A_2$; and, by Theorem \ref{global L2 boundedness smooth phase rough amplitude}, or rather its proof, the Fourier integral operators with amplitudes in $L^{\infty}S^{m_2}_{\varrho}$ compactly supported in the spatial variable, and phases that are positively homogeneous of degree $1$ in the frequency variable and satisfy the non-degeneracy condition $\det\partial^2_{x\xi}\varphi(x,\xi) \neq 0,$ are bounded on $L^2$. Therefore, by Lemma \ref{interpolgen}, the Fourier integral operators $T_{a, \varphi}$ with phases and amplitudes as in the statement of the theorem are bounded operators on $L^2_w$. The proof of part (b) is similar and instead uses the interpolation between the weighted boundedness of Theorem \ref{global weighted boundedness} and the unweighted $L^2$ boundedness result of Theorem \ref{Intro:L2Thm}. The details are left to the interested reader. \end{proof} If we do not insist on proving weighted boundedness results valid for all $A_p$ weights, then it is possible to improve on the number of derivatives in the estimates and push the numerology almost all the way to the one that guarantees unweighted $L^p$ boundedness. Therefore, there is a trade-off between the generality of the weights and the loss of derivatives, as will be discussed below.
\begin{thm}\label{weighted boundedness for true amplitudes with power weights} Let $\mathcal{C} \subset (\mathbf{R}^{n} \times \mathbf{R}^{n}\setminus {0})\times (\mathbf{R}^{n}\times \mathbf{R}^{n}\setminus {0})$ be a conic manifold which is locally a canonical graph; see e.g. \cite{H3} for the definitions. Let $\pi: \mathcal{C}\to \mathbf{R}^{n}\times \mathbf{R}^{n}$ denote the natural projection. Suppose that for every $\lambda_{0}=(x_0, \xi _0 , y_0 , \eta_0 ) \in\mathcal {C}$ there is a conic neighborhood $U_{\lambda_{0}}\subset \mathcal{C}$ of $\lambda_{0}$ and a smooth map $\pi_{\lambda_{0}}: \mathcal{C}\cap U_{\lambda_{0}}\to \mathcal{C}$, homogeneous of degree $0$, with $\mathrm{rank}\,d\pi_{\lambda_{0}} = 2n-1,$ such that $$\pi= \pi\circ \pi_{\lambda_{0}}.$$ Let $T\in I^{m}_{\varrho, \mathrm{comp}}(\mathbf{R}^{n}\times \mathbf{R}^{n}; \mathcal{C})$ $($see \cite{H3}$)$ with $\frac{1}{2}\leq\varrho \leq 1$ and $m < (\varrho-n)|\frac{1}{p}-\frac{1}{2}|.$ Then for all $w \in A_p $ there exists $\alpha\in (0,1)$ depending on $m$, $\varrho$, $p$ and $[w]_{A_p}$ such that, for all $\varepsilon \in[0,\alpha],$ the operator $T$ is bounded on $L^{p}_{w^\varepsilon}$, where $1<p<\infty.$ \end{thm} \begin{proof} By the equivalence of phase function theorem, which for $\frac{1}{2}< \varrho \leq 1$ is due to H\"ormander \cite{H3} and for $\varrho=\frac{1}{2}$ is due to Greenleaf-Uhlmann \cite{GU}, we reduce the study of the operator $T$ to that of a finite linear combination of operators which in appropriate local coordinate systems have the form \begin{equation} T_{a}u(x)=(2\pi)^{-n} \iint e^{i\varphi(x,\xi)-i\langle y, \xi\rangle}\, a(x,\xi) \,u (y) dy\, d\xi, \end{equation} where $a\in S^{m}_{\varrho, 1-\varrho}$ has compact support in the $x$ variable, and $\varphi$ is homogeneous of degree 1 in $\xi,$ with $\det\partial^2_{x\xi}\varphi(x,\xi) \neq 0$ and $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1 .$ If $m\leq -\frac{n+1}{2}\varrho + n(\varrho -1),$ then Theorem \ref{endpointweightedboundthm} case $(a)$ yields the result, so we assume that $m> -\frac{n+1}{2}\varrho + n(\varrho -1).$ Also, by the assumption of the theorem we can find an $m_2,$ which we shall fix from now on, such that $m< m_2 < (\varrho-n)|\frac{1}{p}-\frac{1}{2}|$, and an $m_1 <-\frac{n+1}{2}\varrho + n(\varrho -1).$ Now a result of Seeger-Sogge-Stein, namely Theorem 5.1 in \cite{SSS}, yields that operators $T_a$ with amplitudes compactly supported in the $x$ variable in the class $S^{m_2}_{\varrho, 1-\varrho}$ and phase functions $\varphi(x,\xi)$ satisfying $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$ are bounded on $L^p.$ Furthermore, by Theorem \ref{endpointweightedboundthm} case $(a),$ the operators $T_a$ with $a\in S^{m_1}_{\varrho, 1-\varrho}$ are bounded on $L^{p}_{w},$ $p\in (1,\infty).$ Therefore, Lemma \ref{interpolgen} yields the desired result. \end{proof} A similar result also holds for operators with amplitudes in $S^{m}_{\varrho,\delta}$ without any rank condition on the phase function.
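As a quick numerical sanity check of the interpolation just performed (illustration only, not part of the proof; the helper names below are ours), one can verify that the endpoint order $-\frac{n+1}{2}\varrho+n(\varrho-1)$ stays strictly below the Seeger--Sogge--Stein threshold $(\varrho-n)\vert\frac{1}{p}-\frac{1}{2}\vert$ on the range $\frac{1}{2}\leq\varrho\leq1$, so that the interval $(m_1,m_2)$ used above is always nonempty:

```python
import numpy as np

def endpoint_order(n, rho):
    # the endpoint order -(n+1)/2 * rho + n*(rho - 1)
    return -(n + 1) / 2 * rho + n * (rho - 1)

def sss_threshold(n, rho, p):
    # the Seeger-Sogge-Stein critical order (rho - n)*|1/p - 1/2|
    return (rho - n) * abs(1 / p - 1 / 2)

# the gap needed in the proof: endpoint order < SSS threshold, on a grid
ok = all(
    endpoint_order(n, rho) < sss_threshold(n, rho, p)
    for n in (1, 2, 3, 5, 10)
    for rho in np.linspace(0.5, 1.0, 11)
    for p in (1.01, 1.5, 2.0, 4.0, 100.0)
)
print(ok)  # True: the interval (m_1, m_2) is nonempty
```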
\begin{thm}\label{weighted boundedness with power weights} Let $a(x,\xi)\in S^{m}_{\varrho,\delta},$ let $\varphi(x,\xi)$ be a strongly non-degenerate phase function with $\varphi(x,\xi)-\langle x,\xi\rangle \in \Phi^1,$ and set $\lambda:=\min(0,n(\varrho-\delta)),$ with either of the following ranges of parameters: \begin{enumerate} \item [$(1)$] $0\leq \varrho \leq 1$, $0\leq \delta\leq 1, $ $1 \leq p \leq 2$ and $$ m<n(\varrho -1)\bigg (\frac{2}{p}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg)+ \lambda\bigg(1-\frac{1}{p}\bigg); $$ or \item [$(2)$] $0\leq \varrho \leq 1$, $0\leq \delta\leq 1, $ $2 \leq p \leq \infty$ and $$ m<n(\varrho -1)\bigg (\frac{1}{2}-\frac{1}{p}\bigg)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg) +\frac{\lambda}{p}.$$ \end{enumerate} Then for all $w \in A_p $ there exists $\alpha\in (0,1)$ depending on $m$, $\varrho$, $\delta$, $p$ and $[w]_{A_p}$ such that, for all $\varepsilon \in[0,\alpha],$ the Fourier integral operator $T_{a, \varphi}$ is bounded on $L^{p}_{w^\varepsilon}$, where $1<p<\infty.$ \end{thm} \begin{proof} The proof is similar to that of Theorem \ref{weighted boundedness for true amplitudes with power weights}, and we only consider case (1), as case (2) is analogous. We observe that since $\Phi^1\subset \Phi^2$ and $\langle x, \xi \rangle \in \Phi^2,$ the assumption $\varphi (x,\xi)-\langle x,\xi\rangle \in \Phi^1$ implies that $\varphi(x,\xi) \in \Phi^2.$ To proceed with the proof we can assume that $m> -n$, because otherwise by Proposition \ref{weighted boundedness without rank} there is nothing to prove.
The assumption of the theorem enables us to find an $m_2$ such that $$ m< m_2 <n(\varrho -1)\bigg (\frac{2}{p}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg)+ \lambda\bigg(1-\frac{1}{p}\bigg) $$ and an $m_1 <-n.$ Now Theorem \ref{main L^p thm for smooth Fourier integral operators} yields that operators $T_a$ with amplitudes in the class $S^{m_2}_{\varrho, \delta}$ and strongly non-degenerate phase functions $\varphi(x,\xi) \in \Phi^2$ are bounded on $L^p.$ Furthermore, Proposition \ref{weighted boundedness without rank} yields that operators $T_a$ with $a\in S^{m_1}_{\varrho, \delta}$ are bounded on $L^{p}_{w}.$ Therefore, Lemma \ref{interpolgen} yields once again the desired result for the range $1<p\leq 2.$ \end{proof} Finally, for operators with non-smooth amplitudes we can prove the following: \begin{thm}\label{weighted boundedness with power weights nonsmooth symbols} Let $a(x,\xi)\in L^{\infty}S^{m}_{\varrho},$ $0\leq \varrho\leq 1,$ and let $\varphi$ be a strongly non-degenerate phase function with $\varphi(x,\xi)-\langle x,\xi\rangle \in \Phi^1,$ with either of the following ranges of parameters: \begin{enumerate} \item[$(a)$] $1 \leq p \leq 2$ and $$ m<\frac{n}{p}(\varrho -1)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg); $$ or \item[$(b)$] $2 \leq p \leq \infty$ and $$ m<\frac{n}{2}(\varrho -1)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg).$$ \end{enumerate} Then for all $w \in A_p $ there exists $\alpha\in (0,1)$ depending on $m$, $\varrho$, $p$ and $[w]_{A_p}$ such that, for all $\varepsilon \in[0,\alpha],$ the Fourier integral operator $T_{a, \varphi}$ is bounded on $L^{p}_{w^\varepsilon}.$ \end{thm} \begin{proof} The proof is a modification of that of Theorem \ref{weighted boundedness with power weights}, where one also uses Theorem \ref{main L^p thm for Fourier integral operators with smooth phase and rough amplitudes}. The straightforward modifications are left to the interested reader.
\end{proof} Here we remark that if, in the proofs of Theorems \ref{weighted boundedness with power weights} and \ref{weighted boundedness with power weights nonsmooth symbols}, we had used Theorem \ref{endpointweightedboundthm} case (b) instead of Proposition \ref{weighted boundedness without rank}, then we would obtain a similar result, under the condition $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$ on the phase, but with an improved $\alpha$ as compared to those in the statements of Theorems \ref{weighted boundedness with power weights} and \ref{weighted boundedness with power weights nonsmooth symbols}. \section{Applications in harmonic analysis and partial differential equations} In this chapter, we use the weighted estimates proved in the previous chapter to show the boundedness of constant coefficient Fourier integral operators in weighted Triebel-Lizorkin spaces. This is done using vector-valued inequalities for the aforementioned operators. We proceed by establishing weighted and unweighted $L^p$ boundedness of commutators of a wide class of Fourier integral operators with functions of bounded mean oscillation (BMO), where in some cases we also show the weighted boundedness of iterated commutators. The boundedness of the commutators is proven using the weighted estimates of the previous chapter and a rather abstract complex analytic method. Finally, in the last section, we prove global unweighted and local weighted estimates for the solutions of the Cauchy problem for $m$-th and second order hyperbolic partial differential equations on $\mathbf{R}^n .$ \subsection{Estimates in weighted Triebel-Lizorkin spaces} In this section, we investigate the problem of the boundedness of certain classes of Fourier integral operators in weighted Triebel-Lizorkin spaces.
The result obtained here can be viewed as an example of the application of weighted norm inequalities for FIOs. The main reference for this section is \cite{GR}, and we refer the reader to that monograph for the proofs of the statements concerning vector-valued inequalities. \begin{defn} An operator $T$ defined on $L^{p}(\mu)$ $($by which we denote the $L^{p}$ space with respect to the measure $\mu$$)$ is called {\it{linearizable}} if there exists a linear operator $U$ defined on $L^{p}(\mu)$, whose values are Banach space-valued functions, such that \begin{equation} \vert Tu(x)\vert =\Vert Uu(x)\Vert_{B},\quad u\in L^{p}(\mu). \end{equation} \end{defn} We shall use the following theorem due to Garcia-Cuerva and Rubio de Francia, whose proof can be found in \cite{GR}. \begin{thm}\label{vectorvalued thm} Let $\{T_{j}\}$ be a sequence of linearizable operators and suppose that for some fixed $r>1$ and all weights $w\in A_r$ \begin{equation}\label{vectorvalued estim} \int \vert T_{j} u(x) \vert ^{r} w(x) \, \mathrm{d} x \leq C_{r}(w) \int \vert u(x)\vert ^r w(x) \, \mathrm{d} x, \end{equation} with $C_{r}(w)$ depending on the weight $w$. Then for $1<p,\, q<\infty$ and $w\in A_{p}$ one has the following weighted vector-valued inequality: \begin{equation}\label{weightedvecvaluedineq} \Big\Vert \Big\{\sum_{j} \vert T_{j} u_{j} \vert ^{q}\Big\}^{\frac{1}{q}}\Big\Vert_{L^{p}_w} \leq C_{p,q}(w) \Big\Vert\Big \{\sum_{j} \vert u_{j} \vert ^{q}\Big\}^{\frac{1}{q}}\Big\Vert_{L^{p}_w}. \end{equation} \end{thm} Next we recall the definition of the weighted Triebel-Lizorkin spaces; see e.g. \cite{T}. \begin{defn} Start with a partition of unity $\sum_{j=0}^{\infty} \psi_{j}(\xi)=1$, where $\psi_{0}(\xi)$ is supported in $\vert \xi\vert\leq 2$, and where, for $j\geq 1$, $\psi_{j}(\xi)$ is supported in $2^{j-1} \leq \vert \xi\vert \leq 2^{j+1}$ and satisfies $ |\partial^{\alpha} \psi_{j}(\xi)| \leq C_{\alpha} 2^{-j|\alpha|}$.
Given $s\in \mathbb{R},$ $1\leq p\leq\infty,$ $1\leq q\leq \infty ,$ and $w\in A_p ,$ a tempered distribution $u$ belongs to the weighted {\it{Triebel-Lizorkin}} space $F^{s,p}_{q,\,w}$ if \begin{equation} \label{Tribliz definition} \Vert u\Vert_{F^{s,p}_{q,\,w}}:= \bigg\Vert \bigg\{\sum_{j=0}^{\infty}\vert 2^{js}\psi_{j}(D) u\vert^q\bigg\}^{\frac{1}{q}}\bigg\Vert_{L^p_{w}}<\infty. \end{equation} \end{defn} From this it follows that, for a linear operator $T$, the estimate \begin{equation} \Vert Tu\Vert_{F^{s',p}_{q,\,w}}\lesssim \Vert u\Vert_{F^{s,p}_{q,\,w}} \end{equation} is implied by \begin{equation}\label{trieblizorestim} \bigg\Vert \bigg\{\sum_{j=0}^{\infty}\vert 2^{js'}\psi_{j}(D) Tu\vert^q\bigg\}^{\frac{1}{q}}\bigg\Vert_{L^p_{w}} \lesssim \bigg\Vert \bigg\{\sum_{j=0}^{\infty}\vert 2^{js}\psi_{j}(D) u\vert^q\bigg\}^{\frac{1}{q}}\bigg\Vert_{L^p_{w}}. \end{equation} Now if one is in the situation where $[T,\psi_{j}(D)]=0,$ then \eqref{trieblizorestim} is equivalent to \begin{equation}\label{trieblizorestimequiv} \bigg\Vert \bigg\{\sum_{j=0}^{\infty}\vert 2^{j(s'-s)}T (2^{js}\psi_{j}(D)u)\vert^q\bigg\}^{\frac{1}{q}}\bigg\Vert_{L^p_{w}} \lesssim \bigg\Vert \bigg\{\sum_{j=0}^{\infty}\vert 2^{js}\psi_{j}(D) u\vert^q\bigg\}^{\frac{1}{q}}\bigg\Vert_{L^p_{w}}. \end{equation} Therefore, setting $T_j:=2^{j(s'-s)}T$ and $u_j := 2^{js}\psi_{j}(D)u$ and assuming that $s\geq s'$, \eqref{trieblizorestimequiv} has the same form as the vector-valued inequality \eqref{weightedvecvaluedineq} and follows from \eqref{vectorvalued estim}. Using these facts yields the following result. \begin{thm} Let $a(\xi)\in S_{1,0}^{-\frac{n+1}{2}}$ and $\varphi\in \Phi^{1}$ with $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(\xi)|\geq c>0$.
Then for $s\geq s',$ $1<p<\infty ,$ $1<q<\infty ,$ and $w\in A_p, $ the Fourier integral operator $$Tu(x) = \frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}} e^{i \varphi(\xi)+i \langle x, \xi\rangle} a(\xi)\hat{u}(\xi) \, d\xi$$ satisfies the estimate \begin{equation} \Vert Tu\Vert_{F^{s',p}_{q,\,w}}\lesssim \Vert u\Vert_{F^{s,p}_{q,\,w}}. \end{equation} \end{thm} \begin{proof} We only need to check that $T_{j}= 2^{j(s'-s)}T$ satisfies \eqref{vectorvalued estim}. But this follows from the assumption $s\geq s'$ and Theorem \ref{endpointweightedboundthm} part (b) concerning the global weighted boundedness of Fourier integral operators. \end{proof} \subsection{Commutators with BMO functions} In this section we show how our weighted norm inequalities can be used to derive the $L^p$ boundedness of commutators of functions of bounded mean oscillation with a wide range of Fourier integral operators. We start with the precise definition of a function of bounded mean oscillation. \begin{defn} \label{BMO} A locally integrable function $b$ is said to be of bounded mean oscillation if \begin{equation} \Vert b\Vert_{\mathrm{BMO}} := \sup_{B}{\frac{1}{\vert B\vert}} \int_{B} \vert b(x)-b_B\vert \, \mathrm{d} x<\infty, \end{equation} where the supremum is taken over all balls $B$ in $\mathbf{R}^n$ and $b_B$ denotes the average of $b$ over $B$. We denote the set of such functions by $\mathrm{BMO}$. \end{defn} For $b\in \mathrm{BMO}$ it is well known that, for any $\gamma<\frac{1}{2^ n e}$, there exists a constant $C_{n,\gamma}$ so that for all balls $B$, \begin{equation}\label{bmoestimate} \frac{1}{\vert B\vert}\int_{B} e^{\gamma\vert b(x)-b_B\vert/\Vert b\Vert_{\mathrm{BMO}}}\, \mathrm{d} x \leq C_{n,\gamma}. \end{equation} For this see \cite[p.~528]{G}. The following abstract lemma will enable us to prove the $L^p$ boundedness of the BMO commutators of Fourier integral operators.
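To make Definition \ref{BMO} concrete, here is a small numerical illustration (assuming \texttt{numpy}; the helper \texttt{mean\_oscillation} and the choice $b(x)=\log x$ are ours): the mean oscillation of $\log x$ over the intervals $(0,r)$ equals $2/e\approx 0.7358$ for every $r>0$, consistent with the standard fact that $\log|x|\in\mathrm{BMO}$ despite being unbounded.

```python
import numpy as np

def mean_oscillation(b, left, right, n=200_000):
    # midpoint-rule approximation of |B|^{-1} \int_B |b - b_B| dx on B = (left, right)
    x = left + (np.arange(n) + 0.5) * (right - left) / n
    vals = b(x)
    b_B = vals.mean()                 # the average b_B of b over B
    return np.abs(vals - b_B).mean()  # the mean oscillation of b on B

# scale invariance: the oscillation of log x on (0, r) is 2/e for every r
for r in (1e-3, 1.0, 1e3):
    print(r, mean_oscillation(np.log, 0.0, r))  # ~0.7358 in each case
```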
\begin{lem}\label{commutatorlem} For $1< p <\infty$, let $T$ be a linear operator which is bounded on $L^{p}_{w^\varepsilon}$ for all $w\in A_{p}$, for some fixed $\varepsilon\in (0,1]$. Then, given a function $b\in\mathrm{BMO}$, if the function $\Psi(z):= \int e^{zb(x)}T(e^{-z b(x)} u)(x) v(x)\, \mathrm{d} x$ is holomorphic in a disc $\vert z\vert <\lambda$, then the commutator $[b,T]$ is bounded on $L^{p}$. \end{lem} \begin{proof} Without loss of generality we can assume that $\Vert b\Vert_{\mathrm{BMO}}=1$. We take $u$ and $v$ in $C^{\infty}_{0}$ with $\Vert u\Vert_{L^p} \leq1$ and $\Vert v\Vert_{L^{p'}} \leq 1$; an application of H\"older's inequality to the holomorphic function $\Psi(z)$, together with the assumption on $v$, yields \begin{equation*}\label{Fp} | \Psi(z)|^p \leq \int e^{p\,\text{Re}\,z b(x)} | T(e^{-\,z\, b(x)}u) |^{p}\, \mathrm{d} x . \end{equation*} Our first goal is to show that the function $\Psi(z)$ defined above is bounded on a disc with centre at the origin and sufficiently small radius. At this point we recall a lemma due to Chanillo \cite{Chan}, which states that if $\Vert b\Vert_{\mathrm{BMO}}=1$, then for $2<s<\infty$ there is an $r_0$, depending on $s$, such that for all $r\in [-r_0 , r_0]$ one has $e^{rb(x)}\in A_{\frac{s}{2}}.$ Taking $s=2p$ in Chanillo's lemma, we see that there is some $r_1$ depending on $p$ such that for $| r| <r_1$, $e^{rb(x)}\in A_{p}$. Then we claim that if $R:=\text{min}\, (\lambda, \frac{\varepsilon r_{1}}{p})$ and $|z|< R$, then $|\Psi(z)| \lesssim 1$. Indeed, since $R\leq \frac{\varepsilon r_{1}}{p}$ we have $|\text{Re}\, z| < \frac{\varepsilon r_{1}}{p}$ and therefore $| \frac{p\,\text{Re}\,z}{\varepsilon}| <r_1$.
Therefore Chanillo's lemma yields that for $|z|< R$, $w:=e^{\frac{p\,\text{Re}\,z}{\varepsilon}b(x)}\in A_{p}$, and since $e^{p\,\text{Re}\,z b(x)}=w^{\varepsilon}$, the assumption of weighted boundedness of $T$ and the $L^{p}$ bound on $u$ imply that for $|z|<R$ \begin{equation*} \begin{split} | \Psi(z)|^{p} & \leq \int e^{p\,\text{Re}\,z b(x)} | T(e^{-\,z\, b(x)}u) |^{p}\, \mathrm{d} x \\ & = \int w^{\varepsilon} | T(e^{-\,z\, b(x)}u) |^{p}\, \mathrm{d} x \\ & \lesssim \int w^{\varepsilon} | e^{-\,z\, b(x)}u |^{p}\, \mathrm{d} x=\int w^{\varepsilon} w^{-\varepsilon}| u |^{p}\, \mathrm{d} x\lesssim 1, \end{split} \end{equation*} and therefore $|\Psi(z)|\lesssim 1$ for $|z| <R$. Finally, using the holomorphicity of $\Psi(z)$ in the disc $|z|<R$, Cauchy's integral formula applied to the circle $|\zeta|=R'<R$, and the estimate $| \Psi(z)|\lesssim 1$, we conclude that \begin{equation*} | \Psi'(0)| \leq \frac{1}{2\pi} \int_{| \zeta| =R'} \frac{| \Psi(\zeta)|}{| \zeta|^{2}} \,|\mathrm{d}\zeta| \lesssim 1. \end{equation*} By the construction of $\Psi(z)$, differentiating under the integral sign at $z=0$ gives $\frac{\mathrm{d}}{\mathrm{d} z}\big[e^{zb}T(e^{-zb}u)\big]\big|_{z=0}= b\,Tu-T(bu)=[b,T]u$, so that $\Psi'(0)=\int v(x) [b,T]u(x) \, \mathrm{d} x$; the definition of the $L^p$ norm of the operator $[b, T]$, together with the assumptions on $u$ and $v$, then yields at once that $[b, T]$ is a bounded operator from $L^p$ to itself. \end{proof} The following lemma guarantees the holomorphicity of $$ \Psi(z):= \int e^{zb(x)} T_{a, \varphi}(e^{-z b(x)} u)(x) v(x)\, \mathrm{d} x, $$ when $T_{a,\varphi}$ is an $L^2$ bounded Fourier integral operator.
\begin{lem}\label{holomorflem} Assume that $\varphi$ is a strongly non-degenerate phase function in the class $\Phi^2$ and suppose that either: \begin{enumerate} \item [$(a)$] $T_{a, \varphi}$ is a Fourier integral operator with $a\in S^{m}_{\varrho, \delta}$, $0\leq \varrho\leq 1,$ $0\leq \delta<1, $ $m=\min(0, \frac{n}{2}(\varrho-\delta))$; or \item [$(b)$] $T_{a, \varphi}$ is a Fourier integral operator with $a\in L^{\infty} S^{m}_{\varrho},$ $0\leq \varrho \leq 1,$ $m<\frac{n}{2}(\varrho-1).$ \end{enumerate} Then, given $b\in\mathrm{BMO}$ with $\Vert b\Vert_{\mathrm{BMO}}=1$ and $u$ and $v$ in $C^{\infty}_{0}$, there exists $\lambda>0$ such that the function $\Psi(z):= \int e^{zb(x)} T_{a, \varphi} (e^{-z b(x)} u)(x) v(x)\, \mathrm{d} x$ is holomorphic in the disc $\vert z\vert <\lambda$. \end{lem} \begin{proof} \setenumerate[0]{leftmargin=0pt,itemindent=20pt} \begin{enumerate} \item [(a)] From the explicit representation of $\Psi(z)$, \begin{equation}\label{psi(z) representation} \Psi(z)= \iiint a(x,\xi)\, e^{i\varphi(x,\xi)-i\langle y, \xi\rangle}\,e^{zb(x)-z b(y)}\,v(x)\,u(y) \,\mathrm{d} y\, \mathrm{d}\xi\, \mathrm{d} x, \end{equation} we can without loss of generality assume that $a(x,\xi)$ has compact $x$-support. For $f\in \mathscr{S}$ and $\varepsilon\in(0,1)$ we take $\chi(\xi)\in C^{\infty}_{0}(\mathbf{R}^{n})$ such that $\chi(0)=1$ and set \begin{equation} T_{a_{\varepsilon}, \varphi} f(x)=\int a(x,\xi)\,\chi(\varepsilon \xi)\, e^{i\varphi(x,\xi)}\,\hat{f}(\xi)\, d\xi.
\end{equation} Using this and the assumed compact $x$-support of the amplitude, one can see that for $f\in \mathscr{S}$, $\lim_{\varepsilon \to 0}T_{a_{\varepsilon}, \varphi} f= T_{a, \varphi}f$ in the Schwartz class $\mathscr{S}$, and also $\lim_{\varepsilon \to 0}\Vert T_{a_{\varepsilon}, \varphi} f- T_{a, \varphi}f\Vert _{L^2}=0.$ Since $ a(x,\xi)\,\chi(\varepsilon \xi)\in S^{m}_{\varrho, \delta}$ with seminorms that are independent of $\varepsilon,$ it follows from our assumptions on the amplitude and the phase and Theorem \ref{Calderon-Vaillancourt for FIOs} that $\Vert T_{a_{\varepsilon}, \varphi} f\Vert_{L^2} \lesssim \Vert f\Vert_{L^2}$ with an $L^2$ bound that is independent of $\varepsilon.$ Therefore, the density of $\mathscr{S}$ in $L^2$ yields \begin{equation}\label{L2 Convergence of T-epsilon to T} \lim_{\varepsilon \to 0}\Vert T_{a_{\varepsilon}, \varphi} f- T_{a, \varphi}f\Vert _{L^2}=0, \end{equation} for all $f\in L^2.$ Now if we define \begin{align}\label{defn psi epsilon} \Psi_{\varepsilon} (z)&:=\int e^{zb(x)} T_{a_{\varepsilon}, \varphi} (e^{-z b(x)} u)(x) v(x)\, \mathrm{d} x\\ \nonumber &=\iiint a(x,\xi)\,\chi(\varepsilon \xi)\, e^{i\varphi(x,\xi)-i\langle y, \xi\rangle}\,e^{zb(x)-z b(y)}\,v(x)\,u(y) \, \mathrm{d} y\, \mathrm{d}\xi\, \mathrm{d} x, \end{align} then the integrand in the last expression is a holomorphic function of $z$. Furthermore, from \eqref{bmoestimate} and the assumption $\Vert b\Vert_{\textrm{BMO}}=1$, one can deduce that for all $p\in [1, \infty)$ and $|z|<\frac{\gamma}{p}$, and all compact sets $K$, one has \begin{equation}\label{local integrability of exponentials} \int_{K} e^{\pm p\,\mathrm{Re}\, z\, b(x)} \, \mathrm{d} x\leq C_\gamma (K). \end{equation} This fact shows that $u e^{-z\, b}$ and $v e^{z\, b}$ both belong to $L^{p}$ for all $p\in [1,\infty)$, provided $|z|<\frac{\gamma}{p}$.
These facts, together with the compact $\xi$-support of the integrand defining $\Psi_{\varepsilon} (z)$ and the uniform bounds on the amplitude in $x$, yield the absolute convergence of the integral in \eqref{defn psi epsilon}, and therefore $\Psi_{\varepsilon}(z)$ is a holomorphic function in $|z|<1.$ Now we claim that for $\gamma$ as in \eqref{bmoestimate}, \begin{equation*} \lim_{\varepsilon\to 0} \sup_{\vert z\vert< \frac{\gamma}{2}} | \Psi_{\varepsilon}(z)-\Psi(z)|=0. \end{equation*} Indeed, since $\frac{\gamma}{2}<\frac{1}{2}$, one has for $\vert z\vert <\frac{\gamma}{2}$ \begin{align*} | \Psi_{\varepsilon}(z)-\Psi(z)|&=\left|\int v(x) e^{z b(x)}\,\big[T_{a_{\varepsilon}, \varphi}- T_{a, \varphi}\big](e^{-z b} u)(x)\, \mathrm{d} x\right|\\ &\leq {\Vert}v\, e^{zb}{\Vert}_{L^2} \big\Vert [T_{a_{\varepsilon}, \varphi}-T_{a, \varphi}]( u\, e^{-z b})\big\Vert_{L^2}\\ &\leq {\Vert}v{\Vert}_{L^{\infty}}\left \{\int_{\mathrm{supp}\,v} e^{2\mathrm{Re}\, z b(x)}\, \mathrm{d} x\right \}^{\frac{1}{2}} \Vert [T_{a_{\varepsilon}, \varphi}-T_{a, \varphi}]( u\, e^{-z b})\Vert_{L^2}. \end{align*} Therefore, using \eqref{local integrability of exponentials} with $p=2$ and \eqref{L2 Convergence of T-epsilon to T} yields $$\varlimsup_{\varepsilon\to 0} \sup_{\vert z\vert< \frac{\gamma}{2}}|\Psi_{\varepsilon}(z)-\Psi(z)|=0,$$ and hence $\Psi(z)$ is a holomorphic function in $\vert z\vert <\lambda$ for any $\lambda \in (0, \frac{\gamma}{2}).$ \item [(b)] Using the semiclassical reduction in the proof of Theorem \ref{global L2 boundedness smooth phase rough amplitude}, we decompose the operator $T_{a ,\varphi}$ into low and high frequency parts, $T_0$ and $T_h$.
From this it follows that $\Psi_{0}(z):= \int e^{zb(x)} T_{0} (e^{-z b(x)} u)(x) v(x)\, \mathrm{d} x$ can be written as \begin{multline}\label{Psi0} \Psi_{0}(z)= \\ \int \bigg\{\iint e^{i\varphi(x,\xi) -i\langle y, \xi \rangle}\, \chi_{0}(\xi)\, a(x,\xi)\, u(y)\, e^{ -zb(y)} \, \mathrm{d} y\, \mathrm{d} \xi\bigg\} v(x)\,e^{zb(x)}\, \mathrm{d} x \end{multline} and $\Psi_{h}(z):= \int e^{zb(x)} T_{h} (e^{-z b(x)} u)(x) v(x)\, \mathrm{d} x$ is given by \begin{multline}\label{Psih} \Psi_{h}(z)= \\ h^{-n}\int \bigg\{\iint e^{\frac{i}{h}\varphi(x,\xi)-\frac{i}{h}\langle y, \xi \rangle}\, \chi(\xi)\, a(x,\xi/h)\, u(y) \,e^{ -zb(y)}\, \mathrm{d} y\, \mathrm{d} \xi\bigg\} \, v(x)\,e^{zb(x)} \, \mathrm{d} x. \end{multline} Now we claim that $\Psi_{0}(z)$ and $\Psi_h(z)$ are holomorphic in $|z|<1$. To see this, we reason as in the proof of part (a): the compact support in $\xi$ of the integrands in \eqref{Psi0} and \eqref{Psih} and the uniform bounds on the amplitude in $x$ yield the absolute convergence of the integrals in \eqref{Psi0} and \eqref{Psih}, and therefore $\Psi_{0}(z)$ and $\Psi_{h}(z)$ are holomorphic functions in $|z|<1.$ Next we establish an estimate for $\Psi_{h}(z)$ that is uniform in $z$.
For this we use once again that $u e^{-z\, b}$ and $v e^{z\, b}$ both belong to $L^{2}$ provided $|z|<\frac{\gamma}{2}.$ Therefore the Cauchy--Schwarz inequality and \eqref{semiclassical L2 pieces} yield \begin{align}\label{main Psi_h z estim} | \Psi_{h}(z)|&=\left|\int v(x) e^{zb(x)}T_{h}(e^{-z b} u)(x)\, \mathrm{d} x\right| \\ \nonumber &\leq \Vert u\, e^{-z b} \Vert_{L^2} \Vert T^{\ast}_{h}(v\, e^{zb})\Vert_{L^2} \leq \Vert u\, e^{-z b} \Vert_{L^2} \Vert T_{h}T^{\ast}_{h}(v\, e^{zb})\Vert^{1/2}_{L^2}\Vert v\, e^{zb}\Vert^{1/2}_{L^2} \\ \nonumber &\leq h^{-m-(1-\varrho)M/2} \Vert u\, e^{-zb}\Vert_{L^2} \Vert v\, e^{zb}\Vert_{L^2} \lesssim h^{-m-(1-\varrho)M/2}. \end{align} Hence $\vert \Psi_{h}(z)\vert \lesssim h^{-m-(1-\varrho)M/2}$, and setting $h=2^{-j}$, using $m<\frac{n}{2}(\varrho-1)$ and summing in $j$, we obtain a uniformly convergent series of holomorphic functions, which therefore converges to a holomorphic function; taking $\lambda$ in the interval $(0,\frac{\gamma}{2})$ we conclude the holomorphicity of $\Psi(z)$ in $|z|<\lambda.$ \end{enumerate} \end{proof} Lemmas \ref{commutatorlem} and \ref{holomorflem} yield our main result concerning commutators with BMO functions, namely \begin{thm}\label{Commutator estimates for FIO} Suppose either \begin{enumerate} \item [$(a)$] $T\in I^{m}_{\varrho, \mathrm{comp}}(\mathbf{R}^{n}\times \mathbf{R}^{n}; \mathcal{C})$ with $\frac{1}{2} \leq \varrho\leq 1$ and $m<(\varrho-n)|\frac{1}{p}-\frac{1}{2}|,$ satisfies all the conditions of Theorem \ref{weighted boundedness for true amplitudes with power weights}; or \item [$(b)$] $T_{a,\varphi}$ with $a\in S^{m}_{\varrho, \delta},$ $0\leq \varrho \leq 1$, $0\leq \delta\leq 1,$ $\lambda= \min(0, n(\varrho-\delta))$ and $\varphi(x,\xi)$ a strongly non-degenerate phase function with $\varphi(x,\xi)-\langle x,\xi\rangle \in \Phi^1,$ where in the range $1<p\leq 2,$ $$ m<n(\varrho -1)\bigg
(\frac{2}{p}-1\bigg)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg)+ \lambda\bigg(1-\frac{1}{p}\bigg);$$ and in the range $2 \leq p <\infty$ $$ m<n(\varrho -1)\bigg (\frac{1}{2}-\frac{1}{p}\bigg)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg) +\frac{\lambda}{p};$$ or \item [$(c)$] $T_{a,\varphi}$ with $a\in L^{\infty}S^{m}_{\varrho},$ $0\leq \varrho \leq 1$ and $\varphi$ a strongly non-degenerate phase function with $\varphi(x,\xi) -\langle x, \xi\rangle \in \Phi^1,$ where in the range $1<p\leq 2,$ $$ m<\frac{n}{p}(\varrho -1)+\big(n-1\big)\bigg(\frac{1}{2}-\frac{1}{p}\bigg),$$ and in the range $2 \leq p <\infty$ $$ m<\frac{n}{2}(\varrho -1)+ (n-1)\bigg(\frac{1}{p}-\frac{1}{2}\bigg).$$ \end{enumerate} Then for $b\in \mathrm{BMO}$, the commutators $[b, T]$ and $[b, T_{a,\varphi}]$ are bounded on $L^p$ for $1<p<\infty.$ \end{thm} \begin{proof} \setenumerate[0]{leftmargin=0pt,itemindent=20pt} \begin{enumerate} \item [(a)] One reduces $T$ to a finite sum of operators of the form $T_a$ as in the proof of Theorem \ref{weighted boundedness for true amplitudes with power weights}. That theorem also yields the existence of an $\varepsilon \in (0,1)$ such that $T_a$ with $a\in S^{m}_{\varrho, 1-\varrho}$ and $m<(\varrho-n)|\frac{1}{p}-\frac{1}{2}|$ is $L^{p}_{w^\varepsilon}$-bounded. Moreover, since $m<(\varrho-n)|\frac{1}{p}-\frac{1}{2}|\leq 0$ and $1-\varrho\leq \varrho,$ Theorem \ref{Calderon-Vaillancourt for FIOs} yields that $T_a$ is $L^2$ bounded. Hence, if $$\Psi(z)=\int e^{z b(x)} T_{a} (e^{-z b(x)} u)(x) v(x) \, \mathrm{d} x$$ with $u$ and $v$ in $C^{\infty}_{0},$ then Lemma \ref{holomorflem} yields that $\Psi(z)$ is holomorphic in a neighbourhood of the origin. Therefore, Lemma \ref{commutatorlem} implies that the commutator $[b, T_a ]$ is bounded on $L^p$, and the linearity of the commutator in $T$ allows us to conclude the same result for finite linear combinations of operators of the same type as $T_a$.
This ends the proof of part (a). \item [(b)] The proof of this part is similar to that of part (a). Here we observe that for all the ranges of $p$ in the statement of the theorem, the order of the amplitude satisfies $m<\min(0,\frac{n}{2}(\varrho-\delta))$, and so $T_{a, \varphi}$ is $L^2$ bounded. Now, an application of Theorem \ref{weighted boundedness with power weights}, Theorem \ref{Calderon-Vaillancourt for FIOs} and Lemma \ref{holomorflem} part (a) concludes the proof. \item[(c)] The proof of this part is similar to that of part (b). For all the ranges of $p,$ the order of the amplitude satisfies $m<\frac{n}{2}(\varrho-1)$, and so $T_{a, \varphi}$ is $L^2$ bounded. Therefore, Theorem \ref{weighted boundedness with power weights nonsmooth symbols}, Theorem \ref{global L2 boundedness smooth phase rough amplitude} and Lemma \ref{holomorflem} part (b) yield the desired result. \end{enumerate} \end{proof} Finally, the weighted norm inequalities with weights in all $A_p$ classes have the advantage of implying weighted boundedness of repeated commutators. Namely, one has \begin{thm}\label{k-th commutator estimates} Let $a(x,\xi)\in L^{\infty}S^{-\frac{n+1}{2}\varrho+n(\varrho -1)}_{\varrho}$ and $\varrho \in[0,1].$ Suppose that either \begin{enumerate} \item[$(a)$] $a(x,\xi)$ is compactly supported in the $x$ variable and the phase function $\varphi(x,\xi)\in C^{\infty} (\mathbf{R}^ n \times \mathbf{R}^{n}\setminus 0)$ is positively homogeneous of degree $1$ in $\xi$ and satisfies $\det\partial^2_{x\xi}\varphi(x,\xi) \neq 0$ as well as $\mathrm{rank}\,\partial^2_{\xi\xi}\varphi(x,\xi) =n-1$; or \item[$(b)$] $\varphi (x,\xi)-\langle x,\xi\rangle \in L^{\infty}\Phi^1,$ $\varphi $ satisfies either the strong or the rough non-degeneracy condition $($depending on whether the phase is spatially smooth or not$)$, as well as $|\mathrm{det}_{n-1} \partial^2_{\xi\xi}\varphi(x,\xi)|\geq c>0$.
\end{enumerate} Then, for $b \in \mathrm{BMO}$ and $k$ a positive integer, the $k$-th commutator defined by \begin{equation*} T_{a, b,k} u(x):= T_{a}\big((b(x)-b(\cdot))^{k}u\big)(x) \end{equation*} is bounded on $L^{p}_w$ for each $w \in A_p$ and $p\in(1, \infty)$. \end{thm} \begin{proof} The claims in (a) and (b) are direct consequences of Theorem \ref{endpointweightedboundthm} and Theorem 2.13 in \cite{ABKP}. \end{proof} \subsection{Applications to hyperbolic partial differential equations} It is well known, see e.g. \cite{D}, that the Cauchy problem for a strictly hyperbolic partial differential equation \begin{equation}\label{hyp Cauchy prob} \begin{cases} \big(D^{m}_{t}+ \sum_{j=1}^{m} P_{j}(x,t,D_x ) D_{t}^{m-j}\big)u=0, \,\,\, t\neq 0\\ \partial_{t}^{j} u|_{t=0}=f_j (x),\,\,\, 0\leq j\leq m-1 \end{cases} \end{equation} can be solved locally in time, modulo smoothing operators, by \begin{equation}\label{FIO representation of the solution} u(x,t)= \sum_{j=0}^{m-1}\sum_{k=1}^{m} \int_{\mathbf{R}^n} e^{i\varphi_{k}(x,\xi,t)} a_{jk}(x,\xi,t) \, \widehat{f_{j}}(\xi)\, \mathrm{d}\xi, \end{equation} where the $a_{jk}(x,\xi,t)$ are suitably chosen amplitudes depending smoothly on $t$ and belonging to $S^{-j}_{1, 0}$, and the phases $\varphi_{k}(x,\xi,t)$ also depend smoothly on $t,$ are strongly non-degenerate and belong to the class $\Phi^2.$ This yields the following: \begin{thm} Let $u(x,t)$ be the solution of the hyperbolic Cauchy problem \eqref{hyp Cauchy prob} with initial data $f_j$. Let $m_{p}= (n-1)|\frac{1}{p}-\frac{1}{2}|$ for a given $p\in [1, \infty]$.
If $f_j \in H^{s+m_{p}-j, p}(\mathbf{R}^n)$ and $T\in (0, \infty)$ is fixed, then for any $\varepsilon >0$ the solution $u(\cdot, t) \in H^{s-\varepsilon,p}(\mathbf{R}^n)$ satisfies the global estimate \begin{equation} \Vert u(\cdot, t)\Vert_{H^{s-\varepsilon,p}}\leq C_{T} \sum_{j=0}^{m-1}\Vert f_{j}\Vert_{H^{s+m_p-j,p}}, \,\,\, t\in [-T,T],\,\, p\in [1, \infty]. \end{equation} \end{thm} \begin{proof} The result follows at once from the Fourier integral operator representation \eqref{FIO representation of the solution} and Corollary \ref{LinftySm1 cor}. \end{proof} The representation formula \eqref{FIO representation of the solution} also yields the following local weighted estimate for the solution of the Cauchy problem for the second order hyperbolic equation above, and in particular for the variable coefficient wave equation. In this connection we recall that $H_{w}^{s,p}:=\{ u\in \mathscr{S}';\, (1-\Delta)^{\frac{s}{2}}u \in L_{w}^{p}\}$ for $w\in A_{p}.$ \begin{thm} Let $u(x,t)$ be the solution of the hyperbolic Cauchy problem \eqref{hyp Cauchy prob} with $m=2$ and initial data $f_j .$ For $p\in (1,\infty),$ if $f_j \in H_{w}^{s+\frac{n+1}{2}-j, p}(\mathbf{R}^n)$ with $w\in A_p$, and if $T\in (0,\infty)$ is small enough, then the solution $u(\cdot, t)$ is in $H_{w}^{s,p}(\mathbf{R}^n)$ and satisfies the weighted estimate \begin{equation} \Vert \chi u(\cdot, t)\Vert_{H_{w}^{s,p}}\leq C_{T} \sum_{j=0}^{1}\Vert f_{j}\Vert_{H_{w}^{s+\frac{n+1}{2}-j,p}}, \,\,\, t\in [-T,T]\setminus \{0\},\,\forall w\in A_{p}, \end{equation} and all $\chi \in C^{\infty}_{0}(\mathbf{R}^n).$ \end{thm} \begin{proof} When $m=2$, one has the important property that $$\mathrm{rank}\, \partial^{2}_{\xi \xi} \varphi(x, \xi, t)=n-1$$ for $t \in [-T,T]\setminus \{0\}$ and small $T.$ This fact and the localization of the solution $u(x,t)$ in the spatial variable $x$ enable us to use Theorem \ref{endpointweightedboundthm} in the case $\varrho=1,$
from which the theorem follows. \end{proof} \end{document}
\begin{document} \title{The Clifford group, stabilizer states, and linear and quadratic operations over GF(2). } \author{Jeroen Dehaene} \email{[email protected]} \affiliation{Katholieke Universiteit Leuven, ESAT-SCD, Belgium} \author{Bart De Moor} \affiliation{Katholieke Universiteit Leuven, ESAT-SCD, Belgium} \date{\today} \begin{abstract} We describe stabilizer states and Clifford group operations using linear operations and quadratic forms over binary vector spaces. We show how the $n$-qubit Clifford group is isomorphic to a group with an operation that is defined in terms of a $(2n+1)\times (2n+1)$ binary matrix product and binary quadratic forms. As an application we give two schemes to efficiently decompose Clifford group operations into one and two-qubit operations. We also show how the coefficients of stabilizer states and Clifford group operations in a standard basis expansion can be described by binary quadratic forms. Our results are useful for quantum error correction, entanglement distillation and possibly quantum computing. \end{abstract} \pacs{03.67.-a} \maketitle \section{Introduction} Stabilizer states and Clifford group operations play a central role in quantum error correction, quantum computing, and entanglement distillation. A stabilizer state is a state of an $n$-qubit system that is a simultaneous eigenvector of a commutative subgroup of the Pauli group. The latter consists of all tensor products of $n$ single-qubit Pauli operations. The Clifford group is the group of unitary operations that map the Pauli group to itself under conjugation. In quantum error correction these concepts play a central role in the theory of stabilizer codes \cite{Got:97}. 
Although a quantum computer working with only stabilizer states and Clifford group operations can be simulated efficiently on a classical computer \cite{ChN:00, Got:98}, it is not unlikely that possible new quantum algorithms will exploit the rich structure of this group. In \cite{DVD:03}, we also showed the relevance of a quotient group of the Clifford group in mixed state entanglement distillation. In this paper, we link stabilizer states and Clifford operations with binary linear algebra and binary quadratic forms (over GF(2)). The connection between multiplication of Pauli group elements and binary addition is well known, as is the connection between commutability of Pauli group operations and a binary symplectic inner product \cite{Got:97}. In \cite{DVD:03} we extended this connection to a link between a quotient group of the Clifford group and binary symplectic matrices (there termed $P$ orthogonal). In this paper we give a binary characterization of the full Clifford group, by adding quadratic forms to the symplectic operations. In addition we show how the coefficients, with respect to a standard basis, of both stabilizer states and Clifford operations can also be described using binary quadratic forms. Our results also lead to efficient ways of decomposing Clifford group operations into a product of 2-qubit operations. \section{Clifford group operations and binary linear and quadratic operations} \label{secbin} In this section, we show how the Clifford group is isomorphic to a group that can be entirely described in terms of binary linear algebra, by means of symplectic linear operations and quadratic forms. We use the following notation for Pauli matrices.
\[ \begin{array}{lrrl} \sigma_{00} & = \tau_{00} & =\sigma_0 & = \left[\begin{array}{rr} 1 & 0\\ 0 & 1 \end{array}\right],\\ \sigma_{01} & = \tau_{01} & =\sigma_x & = \left[\begin{array}{rr} 0 & 1\\ 1 & 0 \end{array}\right],\\ \sigma_{10} & = \tau_{10} & =\sigma_z & = \left[\begin{array}{rr} 1 & 0\\ 0 & -1 \end{array}\right],\\ \sigma_{11} & & =\sigma_y & = \left[\begin{array}{rr} 0 & -i\\ i & 0 \end{array}\right],\\ & \tau_{11} & =i\sigma_y & = \left[\begin{array}{rr} 0 & 1\\ -1 & 0 \end{array}\right]. \end{array} \] We also use vector indices to indicate tensor products of Pauli matrices. If $v,w\in\mathbb{Z}_2^{n}$ and $a=\left[\begin{array}{c}v\\w\end{array}\right]\in\mathbb{Z}_2^{2n}$, then we denote \begin{equation} \label{eqsigtau} \begin{array}{ll} \sigma_a &=\sigma_{v_1w_1}\otimes\ldots\otimes\sigma_{v_nw_n},\\ \tau_a &=\tau_{v_1w_1}\otimes\ldots\otimes\tau_{v_nw_n} \end{array} \end{equation} If we define the Pauli group to contain all tensor products of Pauli matrices with an additional complex phase in $\{1,i,-1,-i\}$, an arbitrary Pauli group element can be represented as $i^\delta (-1)^\epsilon \tau_u$, where $\delta,\epsilon\in\mathbb{Z}_2$ and $u\in\mathbb{Z}_2^{2n}$. The separation of $\delta$ and $\epsilon$, rather than having $i^\gamma$ with $\gamma\in\mathbb{Z}_4$, is deliberate and will simplify formulas below. Throughout this paper exponents of $i$ will always be binary. As a result $i^{\delta_1}i^{\delta_2}=i^{\delta_1+\delta_2}(-1)^{\delta_1\delta_2}$. 
Multiplication of two Pauli group elements can now be translated into binary terms in the following way: \begin{lemma} \label{lemtautau} If $a_1,a_2\in\mathbb{Z}_2^{2n}$, $\delta_1,\delta_2,\epsilon_1,\epsilon_2\in\mathbb{Z}_2$ and $\tau$ is defined as in Eq.~(\ref{eqsigtau}), then \[ \begin{array}{l} i^{\delta_1} (-1)^{\epsilon_1} \tau_{a_1} i^{\delta_2} (-1)^{\epsilon_2} \tau_{a_2} =i^{\delta_{12}}(-1)^{\epsilon_{12}} \tau_{a_{12}}\\ \mbox{with} \begin{array}[t]{ll} \delta_{12} & = \delta_1+\delta_2\\ \epsilon_{12} & =\epsilon_1+\epsilon_2+\delta_1\delta_2 + a_2^TUa_1\\ a_{12} & = a_1+a_2,\\ U & =\left[\begin{array}{rr} 0_n & I_n\\ 0_n & 0_n \end{array}\right], \end{array} \end{array} \] where multiplication and addition of binary variables is modulo 2. \end{lemma} These formulas can easily be verified for $n=1$ and then generalized for $n>1$. The term $a_2^TUa_1$ ``counts'' (modulo 2) the number of positions $k$ where ${w_1}_k=1$ and ${v_2}_k=1$ (with $a_1=\left[\begin{array}{c}v_1\\w_1\end{array}\right]$ and $a_2=\left[\begin{array}{c}v_2\\w_2\end{array}\right]$), as only these positions get a minus sign in the following derivation: \[ \begin{array}{rl} \tau_{{v_1}_k {w_1}_k}\tau_{{v_2}_k {w_2}_k} & =\sigma_z^{{v_1}_k}\sigma_x^{{w_1}_k}\sigma_z^{{v_2}_k}\sigma_x^{{w_2}_k}\\ &=(-1)^{{w_1}_k{v_2}_k}\sigma_z^{{v_1}_k+{v_2}_k}\sigma_x^{{w_1}_k+{w_2}_k}\\ &=(-1)^{{w_1}_k{v_2}_k}\tau_{{v_1}_k+{v_2}_k,{w_1}_k+{w_2}_k}. \end{array} \] A Clifford group operation $Q$, by definition, maps the Pauli group to itself under conjugation: \[ Q \tau_a Q^\dag = i^\delta (-1)^\epsilon \tau_b \] for some $\delta$,$\epsilon$,$b$, function of $a$. Because $Q\tau_{a_1}\tau_{a_2}Q^\dag=(Q\tau_{a_1}Q^\dag)(Q\tau_{a_2}Q^\dag)$, it is sufficient to know the image of a generating set of the Pauli group to know the image of all Pauli group elements and define $Q$ (up to an overall phase). 
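The multiplication rule of Lemma~\ref{lemtautau} is easy to check numerically. The following sketch (our illustration, not part of the paper; it assumes numpy, and the helper names are ours) compares the binary formula with explicit matrix products for $n=2$:

```python
import numpy as np

# Single-qubit tau matrices, indexed by (v, w) as in Eq. (eqsigtau).
TAU = {
    (0, 0): np.eye(2, dtype=complex),                    # identity
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_x
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),  # sigma_z
    (1, 1): np.array([[0, 1], [-1, 0]], dtype=complex),  # i * sigma_y
}

def tau(a):
    """Tensor product tau_a for a = [v; w] in Z_2^{2n}."""
    n = len(a) // 2
    m = np.array([[1.0 + 0j]])
    for k in range(n):
        m = np.kron(m, TAU[(a[k], a[n + k])])
    return m

def U(n):
    """The binary matrix U with I_n in the upper-right block."""
    u = np.zeros((2 * n, 2 * n), dtype=int)
    u[:n, n:] = np.eye(n, dtype=int)
    return u

def multiply(d1, e1, a1, d2, e2, a2):
    """Binary multiplication rule of Lemma 1 (all arithmetic mod 2)."""
    n = len(a1) // 2
    d12 = (d1 + d2) % 2
    e12 = (e1 + e2 + d1 * d2 + a2 @ U(n) @ a1) % 2
    a12 = (a1 + a2) % 2
    return d12, e12, a12

# Brute-force comparison with actual matrix products for n = 2.
rng = np.random.default_rng(0)
for _ in range(100):
    a1, a2 = rng.integers(0, 2, 4), rng.integers(0, 2, 4)
    d1, e1, d2, e2 = rng.integers(0, 2, 4)
    lhs = ((1j ** d1) * ((-1) ** e1) * tau(a1)) @ ((1j ** d2) * ((-1) ** e2) * tau(a2))
    d12, e12, a12 = multiply(d1, e1, a1, d2, e2, a2)
    assert np.allclose(lhs, (1j ** d12) * ((-1) ** e12) * tau(a12))
```

For instance, $(\sigma_z\otimes\sigma_z)(\sigma_x\otimes\sigma_x)=\tau_{11}\otimes\tau_{11}$ with a plus sign, in agreement with $a_2^TUa_1=0$ for these vectors.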
In binary terms it is sufficient to know the image of $\tau_{b_k},~k=1,\ldots,2n$ where $b_k,~k=1,\ldots,2n$ form a basis of $\mathbb{Z}_2^{2n}$. For this purpose it is possible to work with Hermitian Pauli group elements only, as the image of a Hermitian matrix under $X\rightarrow QXQ^\dag$ will again be Hermitian (and the images of Hermitian Pauli group elements are sufficient to derive the images of non-Hermitian ones). In our binary language Hermitian Pauli group elements are described as \[ i^{a^TUa}(-1)^\epsilon\tau_a \] as $a^TUa$ counts (modulo 2) the number of factors $\tau_{11}$ in the tensor product $\tau_a$. Indeed, $\tau_{11}$ is the only non-Hermitian (in fact skew-Hermitian) one of the four $\tau$ matrices, and multiplication with $i$ makes it Hermitian. Now we take the standard basis $e_k,~k=1,\ldots,2n$ of $\mathbb{Z}_2^{2n}$, where $e_k$ is the $k$-th column of $I_{2n}$, and consider the generating set of Hermitian operators $\tau_{e_k}$. These correspond to single-qubit operations $\sigma_z$ and $\sigma_x$. We denote their images under $X\rightarrow QXQ^\dag$ by $i^{d_k}(-1)^{h_k} \tau_{c_k}$ and assemble the vectors $c_k$ in a matrix $C$ (with columns $c_k$) and the scalars $d_k$ and $h_k$ in the vectors $d$ and $h$. As the images are Hermitian, $d_k=c_k^TUc_k$ or $d=\mbox{diag}(C^TUC)$ (with $\mbox{diag}(X)$ the vector with the diagonal elements of $X$). Now, given $C$, $d$ and $h$, defining the Clifford operation $Q$, the image $i^{\delta_2}(-1)^{\epsilon_2}\tau_{b_2}$ of $i^{\delta_1}(-1)^{\epsilon_1}\tau_{b_1}$ under $X\rightarrow QXQ^\dag$ can be found by multiplying those operators $i^{d_k}(-1)^{h_k} \tau_{c_k}$ for which ${b_1}_k=1$. By repeated application of Lemma~\ref{lemtautau}, this yields \[ \begin{array}{ll} b_2 &=C b_1\\ \delta_2 &= \delta_1 + d^T b_1\\ \epsilon_2 &= \epsilon_1 + h^T b_1 + b_1^T\,\mbox{lows}(C^TUC+dd^T)\,b_1 + \delta_1 d^T b_1 \end{array} \] where $\mbox{lows}(X)$ is the strictly lower triangular part of $X$.
These formulas can be simplified by introducing the following notation: \[ \begin{array}{llll} \bar C &=\left[\begin{array}{ll} C & 0\\ d^T & 1 \end{array}\right]\\ \bar U &=\left[\begin{array}{rr} U & 0\\ 0 & 1 \end{array}\right]\\ \bar h &=\left[\begin{array}{r} h\\ 0 \end{array}\right]\\ \bar b_1 &=\left[\begin{array}{r} b_1\\ \delta_1 \end{array}\right] & \bar b_2 &=\left[\begin{array}{r} b_2\\ \delta_2 \end{array}\right]\\ \tau_{\bar b_1} &= i^{\delta_1} \tau_{b_1} & \tau_{\bar b_2} &= i^{\delta_2} \tau_{b_2} \end{array} \] We then get the following theorem. \begin{theorem} \label{theocb} Given $\bar C$ and $\bar h$, defining the Clifford operation $Q$ as above, the image under $X\rightarrow QXQ^\dag$ of $(-1)^{\epsilon_1}\tau_{\bar b_1}$ is $(-1)^{\epsilon_2}\tau_{\bar b_2}$ with \[ \begin{array}{ll} \bar b_2 &= \bar C \bar b_1\\ \epsilon_2 &= \epsilon_1 + \bar h^T \bar b_1 + \bar b_1^T \mbox{lows}(\bar C^T \bar U \bar C) \bar b_1 \end{array} \] \end{theorem} With this theorem we can also compose two Clifford operations using the binary language. To this end we have to find the images under the second operation of the images under the first operation of the standard basis vectors. This can be done using Theorem~\ref{theocb}: \begin{theorem} \label{theoQ21} Given $\bar C_1$, $\bar h_1$, $\bar C_2$ and $\bar h_2$, defining two Clifford operations $Q_1$ and $Q_2$ as above, the product $Q_{21}=Q_2 Q_1$ is represented by $\bar C_{21}$ and $\bar h_{21}$ given by \[ \begin{array}{ll} \bar C_{21}&=\bar C_2 \bar C_1\\ \bar h_{21}&=\bar h_1 + \bar C_1^T \bar h_2 + \mbox{diag}(\bar C_1^T \mbox{lows}(\bar C_2^T \bar U \bar C_2)\bar C_1) \end{array} \] \end{theorem} The next question is which $\bar C$ and $\bar h$ or $C$, $d$ and $h$ can represent a Clifford operation. The answer is that $C$ has to be a symplectic matrix (and $d$ has to be equal to $\mbox{diag}(C^TUC)$ as above). If we define $P$ to be $U+U^T$, we call a matrix $C$ symplectic if $C^TPC=P$.
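Theorem~\ref{theocb} can likewise be checked by brute force for small $n$. The sketch below (our illustration; the helper names are ours, and numpy is assumed) extracts $\bar C$ and $\bar h$ from an explicit one-qubit Clifford unitary and compares the binary image formula with actual conjugation, here for the Hadamard gate and the phase gate $S=\mathrm{diag}(1,i)$:

```python
import itertools
import numpy as np

# Single-qubit tau matrices, indexed by (v, w) as in Eq. (eqsigtau).
TAU = {
    (0, 0): np.eye(2, dtype=complex),
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_x
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),  # sigma_z
    (1, 1): np.array([[0, 1], [-1, 0]], dtype=complex),  # i * sigma_y
}

def tau(a):  # n = 1 here: a = (v, w)
    return TAU[tuple(a)]

def pauli_coords(M):
    """Find (b, delta, eps) with M = i^delta (-1)^eps tau_b (n = 1)."""
    for b in itertools.product((0, 1), repeat=2):
        for delta, eps in itertools.product((0, 1), repeat=2):
            if np.allclose(M, (1j ** delta) * ((-1) ** eps) * tau(b)):
                return np.array(b), delta, eps
    raise ValueError("not a Pauli group element")

def binary_rep(Q):
    """Extract (Cbar, hbar) from a 1-qubit Clifford unitary Q."""
    cols, d, h = [], [], []
    for k in range(2):
        e = np.eye(2, dtype=int)[:, k]          # images of tau_{e_k}
        b, delta, eps = pauli_coords(Q @ tau(e) @ Q.conj().T)
        cols.append(b); d.append(delta); h.append(eps)
    Cbar = np.zeros((3, 3), dtype=int)
    Cbar[:2, :2] = np.array(cols).T             # columns c_k
    Cbar[2, :2] = d
    Cbar[2, 2] = 1
    return Cbar, np.array(h + [0])

def image(Cbar, hbar, b1bar, eps1):
    """Image of (-1)^eps1 tau_{b1bar} under conjugation, per the theorem."""
    Ubar = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 1]])
    L = np.tril(Cbar.T @ Ubar @ Cbar, -1)       # lows(...)
    b2bar = Cbar @ b1bar % 2
    eps2 = (eps1 + hbar @ b1bar + b1bar @ L @ b1bar) % 2
    return b2bar, eps2

# Exhaustive check for Hadamard and the phase gate S = diag(1, i).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
for Q in (H, S):
    Cbar, hbar = binary_rep(Q)
    for b1 in itertools.product((0, 1), repeat=2):
        for d1, e1 in itertools.product((0, 1), repeat=2):
            X = (1j ** d1) * ((-1) ** e1) * tau(b1)
            b2bar, e2 = image(Cbar, hbar, np.array(b1 + (d1,)), e1)
            pred = (1j ** b2bar[2]) * ((-1) ** e2) * tau(b2bar[:2])
            assert np.allclose(Q @ X @ Q.conj().T, pred)
```

For the Hadamard gate this recovers $C=\left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$, $d=0$, $h=0$, while for $S$ it gives $C=\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$, $d=(0,1)$, $h=(0,1)$.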
One way to see that $C$ has to be symplectic is through the connection of the symplectic inner product $b^TPa$ with commutability of Pauli group elements: \[ \tau_a\tau_b=(-1)^{b^TPa}\tau_b\tau_a. \] Since the map $X\rightarrow QXQ^\dag$ preserves commutability, $a$ and $b$ have to represent commuting Pauli group elements ($b^TPa=0$) if and only if $Ca$ and $Cb$ represent commuting elements ($b^TC^TPCa=0$). This implies that $C$ has to be symplectic. That symplecticity is also sufficient was first implied by Theorem~1 of \cite{DVD:03} (almost, as this result was set in the context of entanglement distillation where the signs $\epsilon$ play no significant role). The idea is to give a constructive way of realizing the Clifford operation $Q$ given by $\bar C$ and $\bar h$. This can be done using only one and two-qubit operations, which makes the result also of practical use. In Sec.~\ref{sec2q} we give two such decompositions that are more transparent than the results of \cite{DVD:03}. First, to conclude this section, we complete the binary group picture with a formula, in binary terms, for the inverse of a Clifford group element. \begin{theorem} \label{theoinv} Given $\bar C_1$ and $\bar h_1$, defining a Clifford operation $Q_1$ as above, the inverse $Q_2=Q_1^{-1}$ is represented by \[ \begin{array}{ll} \bar C_2&=\bar C_1^{-1}= \left[\begin{array}{ll} C_1^{-1} & 0\\ d_1^T C_1^{-1} & 1 \end{array}\right] = \left[\begin{array}{ll} PC_1^TP & 0\\ d_1^TPC_1^TP & 1 \end{array}\right] \\ \bar h_2&=\bar C_1^{-T} \bar h_1 + \mbox{diag}(\bar C_1^{-T}\mbox{lows}(\bar C_1^T\bar U\bar C_1)\bar C_1^{-1}) \end{array} \] \end{theorem} These formulas can be verified using Theorem~\ref{theoQ21}.
Finally note that since the Clifford operations form a group and the matrices $\bar C$ are simply multiplied when composing Clifford group operations, the matrices $\bar C$ with $C$ symplectic and $d=\mbox{diag}(C^TUC)$ must form a group of $(2n+1)\times(2n+1)$ matrices that is isomorphic to the symplectic group of $2n\times 2n$ matrices. This can be easily verified by showing that \[ \mbox{diag}(C_1^TC_2^TUC_2C_1)=C_1^T\mbox{diag}(C_2^TUC_2)+\mbox{diag}(C_1^TUC_1). \] This follows from the fact that $C^TUC+U$ is symmetric when $C^TPC=P$ and that $x^TSx=x^T\mbox{diag}(S)$ when $S$ is symmetric. In a similar way it can be proven that $\mbox{diag}(C^{-T}UC^{-1})=C^{-T}\mbox{diag}(C^TUC)$. \section{Special Clifford operations in the binary picture} \label{secspec} In this section we consider a selected set of Clifford group operations and their representation in the binary picture of Sec.~\ref{secbin}. First, we consider the Pauli group operations $Q=\tau_a$ as Clifford operations. Note that a global phase cannot be represented, as it does not affect the action $X\rightarrow QXQ^\dag$. To construct $C$ and $h$ we have to consider the images of the operators $\tau_{e_k}$ representing one-qubit operations $\sigma_x$ and $\sigma_z$. One can easily verify that $\tau_a$ is represented by \begin{equation} \label{eqsc1} \begin{array}{ll} C&=I_{2n}\\ h&=Pa \end{array} \end{equation} Second, note that Clifford operations acting on a subset $\alpha \subset \{1,\ldots,n\}$ of the qubits consist of a symplectic matrix on the rows and columns with indices in $\alpha\cup(\alpha+n)$, embedded in an identity matrix (that is, with $C_{k,k}=1$ for $k\not\in\alpha\cup(\alpha+n)$ and $C_{k,l}=0$ if $k\neq l$ and $k$ or $l$ $\not\in\alpha\cup(\alpha+n)$). Also $h_k=0$ if $k\not\in\alpha\cup(\alpha+n)$. Third, qubit permutations are represented by \[ \begin{array}{ll} C&=\left[\begin{array}{ll} \Pi & 0\\ 0 & \Pi \end{array}\right]\\ h&=0 \end{array} \] where $\Pi$ is a permutation matrix.
Fourth, the conditional not or CNOT operation on two qubits is represented by \[ \begin{array}{ll} C&= \left[\begin{array}{llll} 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1\\ \end{array}\right]\\ h&=0 \end{array} \] Fifth, by composing qubit permutations and CNOT operations on selected qubits, any linear transformation of the index space $|x\rangle\rightarrow|Rx\rangle$ can be realized, where $x\in\mathbb{Z}_2^n$ labels the standard basis states $|x\rangle=|x_1\rangle\otimes\ldots\otimes|x_n\rangle$ and $R\in\mathbb{Z}_2^{n\times n}$ is an invertible matrix (modulo 2). This operation is represented in the symplectic picture by \begin{equation} \label{eqsc5} \begin{array}{ll} C&=\left[\begin{array}{ll} R^{-T} & 0\\ 0 & R \end{array}\right]\\ h&=0 \end{array} \end{equation} The qubit permutations and the CNOT operation discussed above are special cases of such operations, as qubit permutations can be represented as $|x\rangle\rightarrow |\Pi x\rangle$ and the CNOT operation as $|x\rangle\rightarrow |\left[\begin{array}{ll} 1 & 0\\ 1 & 1 \end{array}\right]x\rangle$. Decomposing a general linear transformation $R$ into CNOTs and qubit permutations can be done by Gauss elimination (a well known technique for the solution of systems of linear equations). In this process $R$ is operated on from the left by CNOTs and qubit permutations and gradually transformed into an identity matrix. The process operates on $R$, column by column, first moving a nonzero element into the diagonal position by a qubit permutation, then zeroing the rest of the column by CNOTs. The inverses of the applied operations yield a decomposition of $R$. Sixth, we consider Hadamard operations. The Hadamard operation on a single qubit, $Q=H=\frac{1}{\sqrt{2}}\left[\begin{array}{rr}1&1\\1&-1\end{array}\right]$, is represented by $C=\left[\begin{array}{ll}0 & 1\\1 & 0\end{array}\right]$ and $h=0$.
A Hadamard operation on a selected set of qubits is represented by the embedding of such matrices in an identity matrix as explained above. As a special case we mention the Hadamard operation on all qubits, which is represented by $C=P$ and $h=0$. Seventh, we consider operations $e^{i(\pi/4)\tau_{\bar a}}=\frac{1}{\sqrt{2}}(I+i\tau_{\bar a})$ where $a\in\mathbb{Z}_2^{2n}$, $\bar a=\left[\begin{array}{c}a\\a^TUa\end{array}\right]$, and $\tau_{\bar a}=i^{a^TUa}\tau_a$. These operations are represented by \begin{equation} \label{eqsc7} \begin{array}{ll} C&=I+aa^TP\\ h&=C^TUa \end{array} \end{equation} This is proved in the Appendix. Finally, we mention that real Clifford operations have $d=0$. \section{Decompositions of Clifford operations in one and two-qubit operations} \label{sec2q} In this section we write general Clifford group operations as products of one and two-qubit operations using the binary picture. This not only completes the results of Sec.~\ref{secbin}, showing that every symplectic $C$ and arbitrary $h$ represent a Clifford operation. It is also of practical use for quantum computing as well as entanglement distillation applications, since two-qubit operations can be realized relatively easily and the number of two-qubit operations needed is ``only'' quadratic in the number of qubits. We give two different schemes. First, for both schemes, we observe that the main problem is realizing $C$, not $h$. For once a Clifford operation represented by $C$ and $h'$ is realized, we can realize $h$ by applying an extra operation $Q=\tau_{CP(h+h')}$ on the left or $Q=\tau_{P(h+h')}$ on the right. This can be proved by using Eq.~(\ref{eqsc1}) and Theorem~\ref{theoQ21}. The first scheme realizes $C$ by two-qubit operations, acting on qubits $k$ and $l$, of the type $e^{i(\pi/4)\tau_{\bar a}}$ with symplectic matrices $(I+aa^TP)$ where $a$ can be nonzero (i.e. one) only at positions $k,l,n+k$ and $n+l$.
The scheme works by reducing a given symplectic matrix $C$ to the identity matrix by operating on the left with two-qubit operations. The product of the inverses of these two-qubit matrices is then equal to $C$. The reduction to the identity matrix is done by working on two columns $m$ and $n+m$ at a time, for $m=1,\ldots,n$. First columns $1$ and $n+1$ are reduced to columns $1$ and $n+1$ of the identity matrix. Because $C$ remains symplectic through all the operations, one can show that as a result also rows $1$ and $n+1$ are reduced to rows $1$ and $n+1$ of the identity matrix. Then one can repeat the same process on the submatrix of $C$ obtained by dropping rows and columns $1$ and $n+1$, until the whole matrix is reduced to the identity matrix. Let $\alpha=\{1,1+n\}$ and $\beta=\{l,l+n\}$. The first step in reducing columns $1$ and $n+1$ of $C$ to the corresponding columns of the identity matrix is a qubit permutation, exchanging qubit $1$ with some qubit $k$ to make $C_{\alpha,\alpha}$ invertible. This is possible because, if all $C_{\beta,\alpha}$ were rank deficient, we would have $c_1^TPc_{n+1}=0$, which is in conflict with the symplecticity of $C$. (Note that a $2\times 2$ matrix is invertible if and only if it is symplectic.) Next, we perform two-qubit operations $e^{i(\pi/4)\tau_{\bar a}}$ on qubits $1$ and $l$ with $a_{\alpha}=c_{\alpha,n+1}$ and $a_{\beta}=c_{\beta,1}$, for $l=2,\ldots,n$. Such an operation changes $C$ through multiplication with $I+aa^TP$. For the first column this means that $c_1$ is replaced by $c_1+a$, as $a^TPc_1= c_{\alpha,n+1}^TP_2c_{\alpha,1}+ c_{\beta,1}^TP_2c_{\beta,1}= 1+0=1$, where $P_2=\left[\begin{array}{cc}0&1\\1&0\end{array}\right]$. This way $c_{\beta,1}$ is reduced to $0$. $C_{\alpha,\alpha}$ is changed at every step but remains invertible (and symplectic). Note that through these operations also the other columns of $C$ are changed.
After the first column has been zeroed on all positions except $\alpha$, we tackle column $n+1$ with operations $e^{i(\pi/4)\tau_{\bar a}}$ on qubits $1$ and $l$ with $a_{\alpha}=c_{\alpha,1}$ and $a_{\beta}=c_{\beta,n+1}$, $l=2,\ldots,n$. These operations have no effect on $c_1$ because $a^TPc_1=c_{\alpha,1}^TP_2c_{\alpha,1}+0=0$, and reduce $c_{\beta,n+1}$ to $0$ in the same way as was done for the first column. After these operations, $c_1$ and $c_{n+1}$ are zero on all positions except the block $C_{\alpha,\alpha}$, which is an invertible matrix. This matrix can be transformed into an identity matrix by a one-qubit symplectic operation on qubit $1$. One-qubit Clifford operations can easily be composed from one-qubit operations of the type $e^{i(\pi/4)\tau_{\bar a}}$. An advantage of this scheme is that it is efficient if only some columns of $C$ (or rows, as one can also work on the right) are specified while the other columns do not matter. This is the case in the entanglement distillation protocols of \cite{DVD:03}. The second scheme also takes a number of steps that is quadratic in $n$. It is based on the following theorem, which will also be of importance in Sec.~\ref{secdesc}, and for which we give a constructive proof.
\begin{theorem} \label{theosympdec} If $C\in\mathbb{Z}_2^{2n\times 2n}$ is a symplectic matrix ($C^TPC=P$), it can be decomposed as \begin{eqnarray} C&=&\left[\begin{array}{cc} T_1^{-T} & 0\\ 0 & T_1 \end{array}\right]\times\label{eqsd1}\\ && \left[\begin{array}{llll} I_{n-r} & V_1 & Z_3+V_1V_2^T & V_2+V_1Z_2\\ 0 & Z_1 & V_1^T+Z_1V_2^T & I_r+Z_1Z_2\\ 0 & 0 & I_{n-r} & 0\\ 0 & I_r & V_2^T & Z_2 \end{array}\right] \left[\begin{array}{cc} T_2^{-T} & 0\\ 0 & T_2 \end{array}\right]\nonumber\\ &=&\left[\begin{array}{cc} T_1^{-T} & 0\\ 0 & T_1 \end{array}\right]\times\label{eqsd2}\\ &&\left[\begin{array}{llll} I_{n-r} & 0 & Z_3 & V_1\\ 0 & I_r & V_1^T & Z_1\\ 0 & 0 & I_{n-r} & 0\\ 0 & 0 & 0 & I_r \end{array}\right] \left[\begin{array}{llll} I_{n-r} & 0 & 0 & 0\\ 0 & 0 & 0 & I_r\\ 0 & 0 & I_{n-r} & 0\\ 0 & I_r & 0 & 0 \end{array}\right]\times\nonumber\\ &&\left[\begin{array}{llll} I_{n-r} & 0 & 0 & V_2\\ 0 & I_r & V_2^T & Z_2\\ 0 & 0 & I_{n-r} & 0\\ 0 & 0 & 0 & I_r \end{array}\right] \left[\begin{array}{cc} T_2^{-T} & 0\\ 0 & T_2 \end{array}\right]\nonumber \end{eqnarray} where $T_1$ and $T_2$ are invertible $n\times n$ matrices, $Z_1$ and $Z_2$ are symmetric $r\times r$ matrices, $Z_3$ is a symmetric $(n-r)\times(n-r)$ matrix, $V_1$ and $V_2$ are $(n-r)\times r$ matrices and the zero blocks have appropriate dimensions. \end{theorem} {\bf Proof:} To prove this theorem, we consider $C$ as a block matrix $C=\left[\begin{array}{cc} E' & F'\\ G' & H'\\ \end{array}\right]$. Then, we find invertible $R_1$ and $R_2$ in $\mathbb{Z}_2^{n\times n}$ such that $R_1^{-1}G'R_2= \left[\begin{array}{cc} 0 & 0\\ 0 & I_{r} \end{array}\right],$ where $r$ is the rank of $G'$.
This is a standard linear algebra technique and can be realized (for example) by (1) setting the first $n-r$ columns of $R_2$ equal to a basis of the kernel of $G'$, (2) choosing the other columns of $R_2$ as to make it invertible, (3) setting the last $r$ columns of $R_1$ equal to the last $r$ columns of $R_2$ multiplied on the left by $G'$ (this yields a basis of the range of $G'$), and (4) choosing the other columns of $R_1$ as to make it invertible. By construction, this implies $G'R_2=R_1\left[\begin{array}{cc} 0 & 0\\ 0 & I_{r} \end{array}\right]$. Now we set \begin{equation} \label{eqefh0} \begin{array}{l} \left[\begin{array}{cc} R_1^T & 0\\ 0 & R_1^{-1} \end{array}\right] C \left[\begin{array}{cc} R_2 & 0\\ 0 & R_2^{-T} \end{array}\right]=\\ ~\left[\begin{array}{llll} E_{11} & E_{12} & F_{11} & F_{12}\\ E_{21} & E_{22} & F_{21} & F_{22}\\ 0 & 0 & H_{11} & H_{12}\\ 0 & I_r & H_{21} & H_{22} \end{array}\right] \end{array} \end{equation} Because the three matrices in the left-hand side of Eq.~(\ref{eqefh0}) are symplectic, so is the right-hand side. This leads to the following relations between its submatrices: \begin{eqnarray} E_{21}^T=0\label{eqefh1}\\ E_{11}^TH_{11}+E_{21}^TH_{21}=I\label{eqefh2}\\ E_{11}^TH_{12}+E_{21}^TH_{22}=0\label{eqefh3}\\ E_{22}^T+E_{22}=0\label{eqefh4}\\ E_{12}^TH_{11}+E_{22}^TH_{21}+F_{21}=0\label{eqefh5}\\ E_{12}^TH_{12}+E_{22}^TH_{22}+F_{22}=I\label{eqefh6}\\ F_{11}^TH_{11}+F_{21}^TH_{21}+H_{11}^TF_{11}+H_{21}^TF_{21}=0\label{eqefh7}\\ F_{11}^TH_{12}+F_{21}^TH_{22}+H_{11}^TF_{12}+H_{21}^TF_{22}=0\label{eqefh8}\\ F_{12}^TH_{12}+F_{22}^TH_{22}+H_{12}^TF_{12}+H_{22}^TF_{22}=0\label{eqefh9} \end{eqnarray} With Eq.~(\ref{eqefh1}) and Eq.~(\ref{eqefh2}) we find $H_{11}=E_{11}^{-T}$. Now, if we replace $R_2$ by $R_2 \left[\begin{array}{ll} E_{11}^{-1} & 0\\ 0 & I_r \end{array}\right]$, both $H_{11}$ and $E_{11}$ are replaced by $I_{n-r}$. We will assume that this choice of $R_2$ was taken from the start.
Then, from Eq.~(\ref{eqefh1}) and Eq.~(\ref{eqefh3}) we find $H_{12}=0$. From Eq.~(\ref{eqefh4}) we learn that $E_{22}$ is symmetric. From Eq.~(\ref{eqefh5}) and Eq.~(\ref{eqefh6}) we find $F_{21}=E_{12}^T+E_{22}^TH_{21}$ and $F_{22}=I+E_{22}H_{22}$. Substituting these equations in Eqs.~(\ref{eqefh7}),~(\ref{eqefh8}) and~(\ref{eqefh9}), we find that $F_{11}+H_{21}^TE_{12}^T$ is symmetric, $F_{12}=H_{21}^T+E_{12}H_{22}$, and $H_{22}$ is symmetric. Setting $T_1=R_1$, $T_2=R_2^T$ (with $R_2$ chosen so as to make $E_{11}=H_{11}=I$), $V_1=E_{12}$, $V_2=H_{21}^T$, $Z_1=E_{22}$, $Z_2=H_{22}$ and $Z_3=F_{11}+V_1V_2^T$, we obtain Eq.~(\ref{eqsd1}). Note that $Z_3$ is symmetric because $F_{11}+V_2V_1^T$ and $V_2V_1^T+V_1V_2^T$ are symmetric. Finally, Eq.~(\ref{eqsd2}) can easily be verified. This completes the proof. $\square$ To find a decomposition of $C$ in one and two-qubit operations we concentrate on the five matrices in the right-hand side of Eq.~(\ref{eqsd2}), all of which are symplectic. Clearly the first and last matrix are linear index space transformations as discussed in Sec.~\ref{secspec}. These can be decomposed into CNOTs and qubit permutations. The middle matrix corresponds to Hadamard operations on the last $r$ qubits. We will now show that the second and fourth matrix can be realized by one and two-qubit operations of the type $e^{i(\pi/4)\tau_{\bar a}}$. First note that both matrices are of the form $\left[\begin{array}{ll}I&Z\\0&I\end{array}\right]$ with $Z$ symmetric. These matrices form a commutative subgroup of the symplectic matrices, with \[ \left[\begin{array}{cc}I&Z_a\\0&I\end{array}\right] \left[\begin{array}{cc}I&Z_b\\0&I\end{array}\right]= \left[\begin{array}{cc}I&Z_a+Z_b\\0&I\end{array}\right]. \] Now, we realize $\left[\begin{array}{ll}I&Z\\0&I\end{array}\right]$ with one and two-qubit operations by first realizing the ones in the off-diagonal positions of $Z$ and then realizing the diagonal.
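The group law above makes this construction concrete: by that law it suffices to write a symmetric $Z$ as a GF(2) sum of elementary symmetric pieces. The following sketch (our own illustration, with a made-up $Z$) checks that two-qubit contributions $(e_k+e_l)(e_k+e_l)^T$ for the off-diagonal ones, followed by one-qubit contributions $e_ke_k^T$ correcting the diagonal, indeed reproduce $Z$:

```python
# Decompose a symmetric Z over GF(2) into elementary symmetric pieces:
# v v^T with v = e_k + e_l for each off-diagonal 1, then e_k e_k^T to fix
# the diagonal. By the group law for [[I, Z], [0, I]] matrices, the
# corresponding operations multiply out to [[I, Z], [0, I]].
n = 3
Z = [[0, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]  # symmetric example (ours, not from the paper)

acc = [[0] * n for _ in range(n)]

def add_vvT(v):
    for i in range(n):
        for j in range(n):
            acc[i][j] = (acc[i][j] + v[i] * v[j]) % 2

# two-qubit pieces for the off-diagonal ones
for k in range(n):
    for l in range(k + 1, n):
        if Z[k][l]:
            add_vvT([int(m == k or m == l) for m in range(n)])
# one-qubit pieces to correct the diagonal by-product
for k in range(n):
    if acc[k][k] != Z[k][k]:
        add_vvT([int(m == k) for m in range(n)])

assert acc == Z
```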
Entries $Z_{k,l}=Z_{l,k}=1$ are realized by operations $e^{i(\pi/4)\tau_{\bar a}}$ with $a_k=a_l=1$ and $a_m=0$ if $m\neq k$ and $m\neq l$. These are two-qubit operations which realize the off-diagonal part of $Z$ and as a by-product produce some diagonal. This diagonal can then be replaced by the diagonal of $Z$ by one-qubit operations $e^{i(\pi/4)\tau_{\bar a}}$ with $a_k=1$ and $a_m=0$ if $m\neq k$, which affect only the diagonal entries $Z_{k,k}$. This completes the construction of $C$ by means of one and two-qubit operations. \section{Description of stabilizer states and Clifford operations using binary quadratic forms} \label{secdesc} In this section we use our binary language to get further results on stabilizer states and Clifford operations. First, we take the binary picture of stabilizer states and their stabilizers and show how Clifford operations act on stabilizer states in the binary picture. We also discuss the binary equivalent of replacing one set of generators of a stabilizer by another. Then we move to two seemingly unrelated results. One is the expansion of a stabilizer state in the standard basis, describing the coefficients with binary quadratic forms. The other is a similar description of the entries of the unitary matrix of a Clifford operation with respect to the same standard basis. A stabilizer state $|\psi\rangle$ is the simultaneous eigenvector, with eigenvalue $1$, of $n$ commuting Hermitian Pauli group elements $i^{f_k}(-1)^{b_k}\tau_{s_k}$, $k=1,\ldots,n$, where $s_k\in\mathbb{Z}_2^{2n}, k=1,\ldots,n$ are linearly independent, $f_k,b_k\in\mathbb{Z}_2$ and $f_k=s_k^TUs_k$. The $n$ Hermitian Pauli group elements generate an abelian subgroup of the Pauli group, called the stabilizer ${\cal S}$ of the state. We will assemble the vectors $s_k$ as the columns of a matrix $S\in\mathbb{Z}_2^{2n\times n}$ and the scalars $f_k$ and $b_k$ in vectors $f$ and $b\in\mathbb{Z}_2^{n}$.
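As a small concrete illustration (ours, assuming the convention that the upper half of each $s_k$ holds the $\sigma_z$ part and the lower half the $\sigma_x$ part), the two-qubit state stabilized by $\sigma_z\otimes\sigma_z$ and $\sigma_x\otimes\sigma_x$ has columns $s_1=(1,1,0,0)^T$ and $s_2=(0,0,1,1)^T$, and the commutation of the generators shows up as $S^TPS=0$:

```python
# Stabilizer generators Z(x)Z and X(x)X for a two-qubit state, written as
# columns of S in Z_2^{2n x n}; commuting generators <=> S^T P S = 0 (mod 2).
n = 2
S = [[1, 0],   # z-part, qubit 1
     [1, 0],   # z-part, qubit 2
     [0, 1],   # x-part, qubit 1
     [0, 1]]   # x-part, qubit 2
P = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0],
     [0, 1, 0, 0]]

def mul(A, B):  # GF(2) matrix product (rectangular allowed)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

ST = [list(r) for r in zip(*S)]
assert mul(mul(ST, P), S) == [[0, 0], [0, 0]]
```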
This binary representation of stabilizer states is common in the literature on stabilizer codes \cite{Got:97}. The fact that the Pauli group elements commute is reflected by $S^TPS=0$. One can think of $S$, $f^T$ and $b^T$ as the ``left half'' of $C$, $d^T$ and $h^T$ of Sec.~\ref{secbin}. In the style of that section we also define $\bar S=\left[\begin{array}{l}S\\f^T\end{array}\right]$. If $|\psi\rangle$ is operated on by a Clifford operation $Q$, then $Q|\psi\rangle$ is a new stabilizer state whose stabilizer is given by $Q{\cal S}Q^\dag$. As a result, the new set of generators, represented by $\bar S'$ and $b'$, can be found by acting with $\bar C$ and $h$, representing $Q$, as in Theorem~\ref{theocb} and Theorem~\ref{theoQ21}. One finds \[ \begin{array}{ll} \bar S'&=\bar C\bar S\\ b'&= b + S^T h + \mbox{diag}(\bar S^T\mbox{lows}(\bar C^T \bar U \bar C)\bar S) \end{array} \] The representation of ${\cal S}$ by $\bar S$ and $b$ is not unique as they only represent one set of generators of ${\cal S}$. In the binary language a change from one set of generators to another is represented by an invertible linear transformation $R$ acting on the right on $S$ and acting appropriately on $b$. By repeated application of Lemma~\ref{lemtautau} one finds that $\bar S$ and $b$ can be transformed as \[ \begin{array}{ll} \bar S'&=\bar S R\\ b'&=R^Tb+\mbox{diag}(R^T\mbox{lows}(\bar S^T\bar U\bar S)R) \end{array} \] Below we will refer to such a transformation as a stabilizer basis change. Before we state the main results of this section, we show how binary linear algebra can also be used to describe the action of a Pauli matrix on a state, expanded in the standard basis: \begin{equation} \label{eqtaubin} \tau_a \sum_{x\in\mathbb{Z}_2^n} \psi_x |x\rangle= \sum_{x\in\mathbb{Z}_2^n} (-1)^{v^Tx} \psi_{x+w} |x\rangle \end{equation} where $a=\left[\begin{array}{c}v\\w\end{array}\right]$. This is proved as follows.
From $\sigma_x |b\rangle = |b+1\rangle$ with $b\in\mathbb{Z}_2$, we have $\tau_{\scriptsize\left[\begin{array}{c}0\\w\end{array}\right]}\sum_x\psi_x|x\rangle= \sum_x\psi_x|x+w\rangle=\sum_x\psi_{x+w}|x\rangle$. From $\sigma_z |b\rangle=(-1)^b|b\rangle$, we then find Eq.~(\ref{eqtaubin}). Now we exploit our binary language to get results about the expansion in the standard basis of a stabilizer state as summarized in the following theorem, for which we give a constructive proof. \begin{theorem} \label{theosq} (i) If $\bar S$ and $b$ represent a stabilizer state $|\psi\rangle$ as described above, $\bar S$ and $b$ can be transformed by an invertible index space transformation $|x\rangle\rightarrow|T^{-1}x\rangle$ with $T\in\mathbb{Z}_2^{n\times n}$ and an invertible stabilizer basis change $R\in\mathbb{Z}_2^{n\times n}$ into the form \begin{equation} \label{eqsq1} \begin{array}{ll} \bar S'&= \left[\begin{array}{ccc} T^T & 0 & 0\\ 0 & T^{-1} & 0\\ 0 & 0 & 1 \end{array}\right] \bar S R= \left[\begin{array}{ccc} Z & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & I_{r_c}\\ I_{r_a} & 0 & 0\\ 0 & I_{r_b} & 0\\ 0 & 0 & 0\\ f_a^T& 0 & 0 \end{array}\right]\\ b'&=\left[\begin{array}{c} b_{ab}\\ b_c \end{array}\right] \end{array} \end{equation} where $Z$ is full rank and symmetric and $f_a=\mbox{diag}(Z)$. (ii) The state $|\psi\rangle$ can be expanded in the standard basis as \[ \begin{array}{l} |\psi\rangle= (1/\sqrt{(2^{(r_a+r_b)})})\times\\ \sum_{y\in\mathbb{Z}_2^{(r_a+r_b)}} (-i)^{f_a^Ty_a}(-1)^{(y_a^T\mbox{\small lows}(Z+f_af_a^T)y_a+b_{ab}^Ty)} |T{\scriptsize \left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle \end{array} \] where $y=\left[\begin{array}{c}y_a\\y_b\end{array}\right]$ with $y_a\in\mathbb{Z}_2^{r_a}$ and $y_b\in\mathbb{Z}_2^{r_b}$. \end{theorem} In words this theorem reads as follows. 
If the coefficients of a stabilizer state $|\psi\rangle$, with respect to the standard basis $\{|x\rangle|x\in\mathbb{Z}_2^n\}$, are considered as a function of the binary basis label $x$, this function is nonzero in an $r_a+r_b$ dimensional plane (a coset of a subspace of $\mathbb{Z}_2^n$) and the nonzero elements are (up to a global scaling factor) equal to $1$,$i$,$-1$ or $-i$, where the signs are given by a binary quadratic function over the plane and $i$'s appear either in a subplane of codimension one or nowhere (if $f_a=0$). {\bf Proof:} First we write $S$ as a block matrix \[ S=\left[\begin{array}{c} V\\W \end{array}\right] \] with $V,W\in\mathbb{Z}_2^{n\times n}$. Then we perform a first stabilizer basis change $R_1$, transforming $W$ to $W^{(1)}=WR_1=[W^{(1)}_{ab}~0]$, where $W^{(1)}_{ab}\in\mathbb{Z}_2^{n\times(r_a+r_b)}$ and $r_a+r_b=\mbox{rank}(W)$. This is achieved by setting the last columns of $R_1$ equal to a basis of the kernel of $W$ and choosing the other columns as to make it invertible. As a result the columns of $W^{(1)}_{ab}$ are a basis of the range of $W$. We also write the transformation of $V$ in block form as $V^{(1)}=VR_1=[V^{(1)}_{ab}~V^{(1)}_c]$. Because $S^{(1)}$ is full rank, $V^{(1)}_c$ must also be full rank. Now we perform a second stabilizer basis change $R_2=\left[\begin{array}{ll} R_{ab,ab} & 0\\ R_{c,ab} & I_{r_c} \end{array}\right]$, transforming $V^{(1)}=[V^{(1)}_{ab}~V^{(1)}_c] $ to $V^{(2)}=V^{(1)}R_2=[V^{(2)}_a~0~V^{(2)}_c]$, where $V^{(2)}_a\in\mathbb{Z}_2^{n\times r_a}$ and $r_a+r_c=\mbox{rank}(V)$. This is achieved by setting the columns $r_a+1$ till $r_a+r_b$ of $R_2$ equal to a basis of the kernel of $V^{(1)}$ and choosing the first $r_a$ columns as to make it invertible. (Note that the last $r_c$ columns of $R_2$ are equal to the corresponding columns of the identity matrix and no linear combination of them can be in the kernel of $V^{(1)}$ as $V^{(1)}_c$ is full rank). 
As a result the columns of $[V^{(2)}_a~V^{(2)}_c]$ are a basis of the range of $V$. We also write the transformation of $W^{(1)}$ in block form as $W^{(2)}=W^{(1)}R_2=[W^{(2)}_a~W^{(2)}_b~0]$. Next we perform an index space transformation $|x\rangle\rightarrow|T^{-1}x\rangle$ with $T=[W^{(2)}_a~W^{(2)}_b~W^{(2)}_c]$ where the columns $W^{(2)}_c$ are chosen as to make $T$ invertible. As a result $V^{(2)}$ is transformed to $V^{(3)}=T^TV^{(2)}=[V^{(3)}_a~0~V^{(3)}_c]$, $W^{(2)}$ is transformed to $W^{(3)}=T^{-1}W^{(2)}= \left[\begin{array}{cc} I_{r_a+r_b} & 0\\ 0 & 0 \end{array}\right]$. Because $S^{(3)}=\left[\begin{array}{c} V^{(3)}\\W^{(3)}\end{array}\right]$ satisfies ${S^{(3)}}^TPS^{(3)}=0$, one also finds $V^{(3)}=\left[\begin{array}{ccc} Z & 0 & 0\\ 0 & 0 & 0\\ V^{(3)}_{ca} & 0 & V^{(3)}_{cc} \end{array}\right]$ where $Z$ is symmetric and $V^{(3)}_{cc}$ is full rank. A final stabilizer basis change $R_3= \left[\begin{array}{ccc} I_{r_a} & 0 & 0\\ 0 & I_{r_b} & 0\\ {V^{(3)}_{cc}}^{-1} V^{(3)}_{ca} & 0 & {V^{(3)}_{cc}}^{-1} \end{array}\right]$ transforms $V^{(3)}$ to $V'=V^{(3)}R_3=\left[\begin{array}{ccc} Z & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & I_{r_c} \end{array}\right]$ and leaves $W^{(3)}=W'$ unchanged. Through all the transformations we also have to keep track of $f$ and $b$. We find $f'=\mbox{diag}(S'^TUS')= \left[\begin{array}{c}\mbox{diag}(Z)\\0\end{array}\right]$. Setting $R=R_1R_2R_3$ we find $\left[\begin{array}{c} b_{ab}\\ b_{c} \end{array}\right]= R^Tb+\mbox{diag}(R^T\mbox{lows}(V^TW+ff^T)R)$. We still have to prove that $Z$ is full rank. First note that $Z={W^{(2)}_a}^TV^{(2)}_a$. From ${S^{(2)}}^TPS^{(2)}=0$ and the fact that $[V^{(2)}_a~V^{(2)}_c]$ and $[W^{(2)}_a~W^{(2)}_b]$ are full rank, it follows that the columns of $W^{(2)}_b$ span the orthogonal complement of $[V^{(2)}_a~V^{(2)}_c]$ and the columns of $V^{(2)}_c$ span the orthogonal complement of $[W^{(2)}_a~W^{(2)}_b]$.
Assume now that there exists some $x\in\mathbb{Z}_2^{r_a}$ with $x\neq 0$ and $Zx=0$. Then $V^{(2)}_ax$ is orthogonal to the columns of $W^{(2)}_a$, and $V^{(2)}_ax$ is also orthogonal to the columns of $W^{(2)}_b$. Therefore $V^{(2)}_ax$ is a linear combination of the columns of $V^{(2)}_c$. This is in contradiction with the fact that $[V^{(2)}_a~V^{(2)}_c]$ is full rank. Therefore, $Z$ is full rank. This completes the proof of part (i). To prove part (ii), first observe that applying $|x\rangle\rightarrow |T^{-1}x\rangle$ to $|\psi\rangle$ simply replaces $|T{\scriptsize\left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle$ by $|{\scriptsize\left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle$, and stabilizer basis transformations only change the description of a stabilizer state but not the state itself. Therefore, we have to prove that \begin{equation} \label{eqsq2} \begin{array}{l} |\phi\rangle=\\ \sum_{y\in\mathbb{Z}_2^{(r_a+r_b)}} (-i)^{f_a^Ty_a}(-1)^{(y_a^T\mbox{\small lows}(Z+f_af_a^T)y_a+b_{ab}^Ty)} |{\scriptsize\left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle \end{array} \end{equation} is an eigenvector with eigenvalue one of the operators $i^{f'_k}(-1)^{b'_k}\tau_{s'_k}$ described by $\bar S'$ and $b'$. For $k=1,\ldots,r_a$, we have \[ \begin{array}{ll} s'_k &= \left[\begin{array}{c} Ze_k\\0\\e_k\\0\end{array}\right]\\ f'_k &= {f_a}_k = z_{k,k}\\ b'_k &= {b_{ab}}_k \end{array} \] where $e_k$ is the $k$-th column of $I_{r_a}$.
With Eq.~(\ref{eqtaubin}) we find \[ \begin{array}{l} i^{f'_k}(-1)^{b'_k}\tau_{s'_k}|\phi\rangle\\ =\sum_y[ i^{{f_a}_k}(-1)^{{b_{ab}}_k}(-1)^{(Ze_k)^Ty_a}(-i)^{f_a^T(y_a+e_k)}\times\\ (-1)^{((y_a+e_k)^T\mbox{\small lows}(Z+f_af_a^T)(y_a+e_k)+b_a^T(y_a+e_k)+b_b^Ty_b)}\times\\ |{\scriptsize\left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle]\\ = \sum_y[ i^{{f_a}_k}(-i)^{f_a^Ty_a}(-i)^{{f_a}_k}(-1)^{f_a^Ty_a{f_a}_k}\times\\ (-1)^{e_k^TZy_a+{b_{ab}}_k} (-1)^{(y_a^T\mbox{\small lows}(Z+f_af_a^T)y_a)}\times\\ (-1)^{(e_k^T(Z+f_af_a^T)y_a +b_a^Ty_a+{b_{ab}}_k+b_b^Ty_b)} |{\scriptsize\left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle]\\ = |\phi\rangle \end{array} \] For $k=r_a+1,\ldots,r_a+r_b$ we have \[ \begin{array}{ll} s'_k &= \left[\begin{array}{c} 0\\e_k\\0\end{array}\right]\\ f'_k &= 0 \\ b'_k &= {b_{ab}}_k \end{array} \] where now $e_k$ is the $k$-th column of $I_{(r_a+r_b)}$. With Eq.~(\ref{eqtaubin}) we find \[ \begin{array}{l} i^{f'_k}(-1)^{b'_k}\tau_{s'_k}|\phi\rangle\\ =\sum_y[(-1)^{{b_{ab}}_k}(-i)^{f_a^Ty_a}\times\\ (-1)^{(y_a^T\mbox{\small lows}(Z+f_af_a^T)y_a+b_{ab}^T(y+e_k))} |{\scriptsize \left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle]\\ =|\phi\rangle \end{array} \] For $k=r_a+r_b+1,\ldots,n$, we find with Eq.~(\ref{eqtaubin}) that $i^{f'_k}(-1)^{b'_k}\tau_{s'_k}|x\rangle=(-1)^{x_k+b'_k}|x\rangle$. The state $|\phi\rangle$ is clearly an eigenstate of this operator, as $x_k+b'_k=0$ for all states $|x\rangle=|{\scriptsize\left[\begin{array}{c}y\\ b_c\end{array}\right]}\rangle$ and $k=r_a+r_b+1,\ldots,n$. This completes the proof. $\square$ Finally, we show how the entries of a Clifford matrix can also be described with binary quadratic forms, by using Theorem~\ref{theosympdec}. This leads to the following theorem, for which we give a constructive proof.
\begin{theorem} \label{theocq} Given a Clifford operation $Q$, represented by $\bar C$ and $h$ (or $C$,$d$ and $h$) as in Sec.~\ref{secbin}, $Q$ can be written as \[ \begin{array}{ll} Q=& (1/\sqrt{2^r}) \sum_{x_b\in\mathbb{Z}_2^{n-r}} \sum_{x_r\in\mathbb{Z}_2^r} \sum_{x_c\in\mathbb{Z}_2^r}\\ & [(-i)^{d_{br}^Tx_{br}}(-i)^{d_{bc}^Tx_{bc}}(-1)^{(h_{bc}^Tx_{bc} +x_r^Tx_c)}\times\\ &(-1)^{x_{br}^T\mbox{\small lows}(Z_{br}+d_{br}d_{br}^T)x_{br}}\times\\ &(-1)^{x_{bc}^T\mbox{\small lows}(Z_{bc}+d_{bc}d_{bc}^T)x_{bc}} |T_1 x_{br} \rangle \langle T_2^{-1} x_{bc}+t|] \end{array} \] where $x_{br}=\left[\begin{array}{c}x_b\\x_r\end{array}\right]$ and $x_{bc}=\left[\begin{array}{c}x_b\\x_c\end{array}\right]$, $T_1,T_2\in\mathbb{Z}_2^{n\times n}$ are invertible matrices, $Z_{br},Z_{bc}\in\mathbb{Z}_2^{n\times n}$ are symmetric, $d_{br}=\mbox{diag}(Z_{br})$, $d_{bc}=\mbox{diag}(Z_{bc})$ and $h_{bc},t\in\mathbb{Z}_2^n$. \end{theorem} {\bf Proof:} The proof is based on the decomposition of $C$ as a product of five matrices as in Theorem~\ref{theosympdec}. Due to the isomorphism between the group of symplectic matrices $C$ and the extended matrices $\bar C$ as defined in Sec.~\ref{secbin}, this decomposition can be converted into a decomposition of $\bar C$ as follows. 
\[ \begin{array}{ll} \bar C &= \bar C^{(1)}\bar C^{(2)}\bar C^{(3)}\bar C^{(4)}\bar C^{(5)}\\ &= \left[\begin{array}{ccc} T_1^{-T} & 0 &0\\ 0 & T_1 &0\\ 0 & 0 &1 \end{array}\right] \left[\begin{array}{lll} I_{n} & Z_{br} & 0\\ 0 & I_{n} & 0\\ 0 & d_{br}^T & 1 \end{array}\right]\times\\ & \left[\begin{array}{lllll} I_{n-r} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & I_r & 0\\ 0 & 0 & I_{n-r} & 0 & 0\\ 0 & I_r & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 \end{array}\right] \left[\begin{array}{lll} I_{n} & Z_{bc} & 0\\ 0 & I_{n} & 0\\ 0 & d_{bc}^T & 1 \end{array}\right] \left[\begin{array}{ccc} T_2^{-T} & 0 & 0\\ 0 & T_2 & 0\\ 0 & 0 & 1\\ \end{array}\right], \end{array} \] where $Z_{br}=\left[\begin{array}{cc} Z_3 & V_1\\ V_1^T & Z_1 \end{array}\right]$, $Z_{bc}=\left[\begin{array}{cc} 0 & V_2\\ V_2^T & Z_2 \end{array}\right]$, $d_{br}=\mbox{diag}(Z_{br})$ and $d_{bc}=\mbox{diag}(Z_{bc})$. If we define Clifford operations $Q^{(k)}$ by $\bar C^{(k)}$ and $h^{(k)}=0$, $k=1,\ldots,5$, the operation $Q^{(1)}Q^{(2)}Q^{(3)}Q^{(4)}Q^{(5)}$ is represented by $\bar C$ and some vector $h'$, that can be found by repeated application of Theorem~\ref{theoQ21}. The vector $h$ of the given Clifford operation $Q$ can then be realized by an extra operation $Q^{(6)}$ to the right with $\bar C^{(6)}=I$ and $h^{(6)}=h+h'$. Now, $Q^{(3)}$ is a Hadamard operation on the last $r$ qubits. Because a Hadamard operation on one qubit can be written as $H_1=(1/\sqrt{2}) \sum_{b_r,b_c\in\mathbb{Z}_2} (-1)^{b_rb_c} |b_r\rangle \langle b_c|$, the Hadamard operation on $r$ qubits can be written as $H_r=(1/\sqrt{2^r}) \sum_{x_r,x_c\in\mathbb{Z}_2^r} (-1)^{x_r^Tx_c} |x_r\rangle \langle x_c|$ and, including the $n-r$ qubits that are not operated on, as \begin{equation} \label{eqq3} Q^{(3)}=(1/\sqrt{2^r}) \sum_{x_b\in\mathbb{Z}_2^{n-r}}\sum_{x_r,x_c\in\mathbb{Z}_2^r} (-1)^{x_r^Tx_c} |x_{br}\rangle \langle x_{bc}|.
\end{equation} Considered as a matrix, this is a block diagonal matrix with $2^{n-r}$ identical $2^r\times 2^r$ blocks with entries that are $1$ or $-1$. The index $x_b$ addresses the blocks and the indices $x_c$ and $x_r$ address the columns and rows inside the blocks. Now we will show that the matrix $Q$ can be derived from this matrix by multiplying on the left and the right with a diagonal matrix and a permutation matrix representing an affine index space transformation. First we concentrate on $Q^{(2)}$ and $Q^{(4)}$. $\bar C^{(2)}$ and $\bar C^{(4)}$ have the form \[ \bar{\tilde C}=\left[\begin{array}{ccc} I & \tilde Z & 0\\ 0 & I & 0\\ 0 & \tilde d^T & 1 \end{array}\right]. \] We show that such a matrix (together with $\tilde h=0$) represents a diagonal Clifford operation \begin{equation} \label{eqtq} \tilde Q=\sum_{x\in\mathbb{Z}_2^n} (-i)^{\tilde d^Tx} (-1)^{x^T\mbox{\small lows}(\tilde Z+\tilde d\tilde d^T)x}|x\rangle\langle x|. \end{equation} This result can be derived using the decomposition in (diagonal) one and two-qubit operations given in Sec.~\ref{sec2q}, but can more easily be proved by showing that the Pauli group elements $\tau_{e_k}$, with $e_k$ the $k$-th column of $I_{2n}$, are mapped to operators represented by the columns of $\bar{\tilde C}$ under $X\rightarrow\tilde QX\tilde Q^\dag$. Clearly, for $k=1,\ldots,n$, $\tilde Q \tau_{e_k} \tilde Q^\dag=\tau_{e_k}\tilde Q\tilde Q^\dag=\tau_{e_k}$ (as $\tilde Q$ and $\tau_{e_k}$ are diagonal). For $k=n+1,\ldots,2n$ let $e_k$ again be the $k$-th column of $I_{2n}$ and $e'_k$ the $k$-th column of $I_n$.
Then we have \[ \begin{array}{l} \tilde Q \tau_{e_k} \tilde Q^\dag \tau_{e_k}\\ =\sum_{x} [(-i)^{\tilde d^Tx} (-1)^{x^T\mbox{\small lows}(\tilde Z+\tilde d\tilde d^T)x}|x\rangle\langle x|]\times\\ ~\sum_{x} [(+i)^{\tilde d^T(x+e'_k)} (-1)^{(x+e'_k)^T\mbox{\small lows}(\tilde Z+\tilde d\tilde d^T)(x+e'_k)}|x\rangle\langle x|]\\ =\sum_{x}[ (-i)^{\tilde d^Tx}i^{\tilde d^Tx}i^{\tilde d^Te'_k}(-1)^{\tilde d^Tx\tilde d^Te'_k}\times\\ ~(-1)^{x^T(\tilde Z+\tilde d\tilde d^T)e'_k}|x\rangle\langle x|] =i^{\tilde d_k}\tau_{\scriptsize\left[\begin{array}{c} \tilde Ze'_k\\ 0\end{array}\right]}. \end{array} \] Bringing the second $\tau_{e_k}$ from the left-hand side to the right-hand side we finally prove Eq.~(\ref{eqtq}). Combining Eqs.~(\ref{eqq3}) and~(\ref{eqtq}), we find \[ \begin{array}{l} Q^{(2)}Q^{(3)}Q^{(4)}=\\ (1/\sqrt{2^r}) \sum_{x_b\in\mathbb{Z}_2^{n-r}} \sum_{x_r,x_c\in\mathbb{Z}_2^r} [(-i)^{d_{br}^Tx_{br}}(-i)^{d_{bc}^Tx_{bc}}\times\\ (-1)^{x_r^Tx_c} (-1)^{x_{br}^T\mbox{\small lows}(Z_{br}+d_{br}d_{br}^T)x_{br}}\times\\ (-1)^{x_{bc}^T\mbox{\small lows}(Z_{bc}+d_{bc}d_{bc}^T)x_{bc}} |x_{br} \rangle \langle x_{bc}|] \end{array} \] To take into account the index space transformation $Q^{(1)}$ we simply have to replace $|x_{br}\rangle$ by $|T_1x_{br}\rangle$. For $Q^{(5)}$ and $Q^{(6)}$ we first define $t$ and $h_{bc}\in\mathbb{Z}_2^n$ by writing $h^{(6)}$ as $h^{(6)}=\left[\begin{array}{c}t\\T_2^Th_{bc}\end{array}\right]$. Then, with Eqs.~(\ref{eqsc1}) and~(\ref{eqtaubin}) we find $\langle x_{bc}| Q^{(5)} Q^{(6)}=(-1)^{h_{bc}^Tx_{bc}}\langle T_2^{-1}x_{bc}+t|$. This completes the proof. $\square$ \section{Conclusion} We have shown the relevance of binary linear algebra (over GF(2)) for the theory of stabilizer states and Clifford group operations. We have described how the Clifford group is isomorphic to a group that can be entirely described in terms of binary linear algebra.
This has led to two schemes for the decomposition of Clifford group operations in a product of one and two-qubit operations, and to the description of standard basis expansions of both stabilizer states and Clifford group operations with binary quadratic forms. \appendix* \section{Proof of equation~(\ref{eqsc7})} \label{app} Let $e_k$ be the $k$-th column of $I_{2n}$, $k=1,\ldots,2n$. Then we have to find the images of $\tau_{e_k}$ (Hermitian matrices) under $X\rightarrow QXQ^\dag$ with $Q=e^{i(\pi/4)\tau_{\bar a}}=\frac{1}{\sqrt{2}}(I+i\tau_{\bar a})$ to yield the $k$-th column $c_k=C e_k$ of $C$ and the $k$-th entry $h_k=e_k^T h$ of $h$. We find \[ \begin{array}{l} i^{c_k^TUc_k}(-1)^{h_k}\tau_{c_k}\\ ~=\frac{1}{\sqrt{2}}(I+i\tau_{\bar a}) \tau_{e_k} \frac{1}{\sqrt{2}}(I-i\tau_{\bar a})\\ ~= \frac{1}{2}(\tau_{e_k}+\tau_{\bar a}\tau_{e_k}\tau_{\bar a}) +\frac{1}{2}i(\tau_{\bar a}\tau_{e_k}-\tau_{e_k}\tau_{\bar a})\\ ~= \frac{1}{2} (1+(-1)^{e_k^TPa})\tau_{e_k} +\frac{1}{2}i(1-(-1)^{e_k^TPa})\tau_{\bar a}\tau_{e_k}, \end{array} \] where in the last step we used $\tau_{\bar a}^2=I$ and $\tau_a\tau_b=(-1)^{b^TPa}\tau_b\tau_a$ as follows from Lemma~\ref{lemtautau}. When $e_k^TPa=0$ we find $c_k=e_k$ and $h_k=0$. When $e_k^TPa=1$ we find \[ \begin{array}{ll} i^{c_k^TUc_k}(-1)^{h_k}\tau_{c_k} &= i \tau_{\bar a}\tau_{e_k}\\ &= i i^{a^TUa} (-1)^{e_k^TUa} \tau_{a+e_k}, \end{array} \] from which one can read off that $c_k=a+e_k$. With $ii^{a^TUa}=i^{a^TUa+1}(-1)^{a^TUa}$ (with the addition in the exponents modulo $2$) and $(a+e_k)^TU(a+e_k)=a^TUa+e_k^TPa+e_k^TUe_k=a^TUa+1$, we also find that $h_k=a^TUa+e_k^TUa$. Combining the two cases $e_k^TPa=0$ and $e_k^TPa=1$ we find $c_k=e_k+a(e_k^TPa)=(I+aa^TP)e_k$, yielding $C=(I+aa^TP)$. For $h$ we find $h_k=(e_k^TPa)(a^TUa+e_k^TUa)$. With $(e_k^TPa)(e_k^TUa)=e_k^TUa$ this reduces to $h_k=e_k^T(Paa^TUa+Ua)$ and $h=(I+aa^TP)^TUa$. This completes the proof.
$\square$ \begin{acknowledgments} We thank Frank Verstraete for useful discussions. Our research is supported by grants from several funding agencies and sources: Research Council KULeuven: Concerted Research Action GOA-Mefisto~666 (Mathematical Engineering); Flemish Government: Fund for Scientific Research Flanders: several PhD/postdoc grants, projects G.0240.99 (multilinear algebra), G.0120.03 (QIT), research communities ICCoS, ANMMM; Belgian Federal Government: DWTC (IUAP IV-02 (1996-2001) and IUAP V-22 (2002-2006): Dynamical Systems and Control: Computation, Identification \& Modelling). \end{acknowledgments} \end{document}
\begin{document} \title{Correction to ``Fibers of tropicalization''} \author[Payne]{Sam Payne} \address{Yale University, Department of Mathematics, 10 Hillhouse Ave, New Haven, CT 06511} \email{[email protected]} \begin{abstract} This note explains an error in Proposition~5.1 of ``Fibers of tropicalization,'' Math. Z. 262 (2009), no. 2, 301--311, discovered by W.~Buczynska and F.~Sottile, and fills the resulting gap in the proof of the paper's main theorem. \end{abstract} \maketitle Part (3) of Proposition~5.1 in \cite{tropicalfibers} claims that if $X$ is a subvariety of a torus $T$ containing the identity then there is a split surjection $\varphi: T \rightarrow T'$ such that the image of $X$ is a hypersurface and the intersection of the initial degeneration $X_0$ with the kernel of $\varphi_0$ is $\{1_T\}$. This claim is false, and the following is a counterexample. \begin{example} Suppose the characteristic of $K$ is not 2, and let $X$ be a curve in a three-dimensional torus containing all eight $2$-torsion points of $T$. Then $X_0$ contains all eight $2$-torsion points of $T_0$ and, for any projection $\varphi$ from $T$ to a two-dimensional torus, the kernel of $\varphi_0$ contains four $2$-torsion points, all of which are in $X_0$. \end{example} \noindent The falsehood of part (3) of Proposition~5.1 leaves an essential gap in the proof of Theorem~4.1, which is the main result of \cite{tropicalfibers}. This result has now been proved independently by different means, including nonarchimedean analysis \cite[Pro\-position~4.14]{Gubler12} and noetherian approximation \cite[Theorem~4.2.5]{tropicallifting}. The original proposed method of proof using split surjections of tori to decrease the codimension may be of independent interest, but the error in Proposition~5.1 interferes with the reduction to the hypersurface case.
Here, we complete the proof of Theorem~4.1 via the original method of split surjections of tori by projecting even further, onto a torus of dimension equal to $\dim X$, and using the going-down theorem for finite extensions of integral domains. \begin{remark} The second main result of \cite{tropicalfibers} is Corollary~4.2, which says that the fibers of the classical tropicalization map from $X(K)$ to $N_G$ are Zariski dense. The error in Proposition~5.1 does not create a serious gap in the proof of this weaker result; the original arguments can be modified, as follows, to obtain Corollary~4.2 without deducing it from Theorem~4.1. Parts (1) and (2) of Proposition~5.1 reduce Corollary~4.2 to the hypersurface case, and Proposition~6.1 shows that if $X$ is a hypersurface then $\Trop^{-1}(v) \cap X(K)$ is infinite. The argument in Section~6.3 then goes through with $\underline x$ and $\TTrop^{-1}(\underline x)$ replaced by $v$ and $\Trop^{-1}(v)$, respectively. \end{remark} To prove Theorem~4.1, we first consider the special case when $X$ is a torus. \begin{lemma} \label{lem:torus} For any $v \in N_G$ and $\underline x \in T_v(k)$, the fiber $\TTrop^{-1}(\underline x)$ is Zariski dense in $T$. \end{lemma} \begin{proof} After translation, we may assume that $v = 0$ and $\underline x = 1_T$. If $T = K^*$ is one-dimensional then $\TTrop^{-1}(1_{T})$ is identified with the subset $1 + \ensuremath{\mathfrak{m}}$ of the valuation ring $R$, which is infinite and hence Zariski dense. For $T$ of arbitrary dimension $n$, choose an isomorphism from $T$ to $(K^*)^n$. Then $\TTrop^{-1}(1_T)$ is identified with the product of dense sets $(1+\ensuremath{\mathfrak{m}})^n$, and is therefore dense. \end{proof} Next, we choose a suitable projection from $X$ to a torus of dimension equal to $\dim X$. Say $X$ has dimension $d$.
Recall that the set of $v$ in $N_{\mathbb{R}}$ such that $X_v$ is nonempty is the underlying set of a finite polyhedral complex $\Delta$ of pure dimension $d$. Projecting along a general rational subspace of codimension $d$ in $N_{\mathbb{Q}}$ maps each face of $\Delta$ isomorphically onto its image. Such a projection corresponds to a split surjection of tori $\varphi: T \rightarrow T'$ with the property that, for each $v' \in N'_G$, the preimage \[ \phi^{-1}(v') \cap \Trop(X) \] is finite, where $\phi: N_G \rightarrow N'_G$ is the linear map induced by $\varphi$. Say $v_0, \ldots, v_s$ are the finitely many preimages in $\Delta$ of $\phi(v)$, where $v_0 = v$. Each of the schemes $T_{v_0}, \ldots, T_{v_s}$ over $\Spec R$ contains the torus $T_K$ as an open subscheme, and the morphisms $\varphi_{v_i} : T_{v_i} \rightarrow T'_{\phi(v)}$ all agree on $T_K$. Therefore, we can glue these schemes and morphisms to get \[ T_{v_0} \cup \cdots \cup T_{v_s} \rightarrow T'_{\phi(v)} \] over $\Spec R$. We write $\Phi$ for the restriction of this morphism to the closure of $X$ and prove Theorem~4.1 using the following technical result. \begin{proposition} \label{prop:finite} The morphism $\Phi:\mathcal{X}_{v_0} \cup \cdots \cup \mathcal{X}_{v_s} \rightarrow T'_{\phi(v)}$ is finite. \end{proposition} \noindent Here $\mathcal{X}_{v_i}$ denotes the closure of $X$ in $T_{v_i}$. The schemes $\mathcal{X}_{v_i}$ and $T'_{\phi(v)}$ over $\Spec R$ are not noetherian, but this does not create any additional difficulties. We deduce Theorem~4.1 from Proposition~\ref{prop:finite} using the going-down theorem for finite extensions of an integrally closed domain, which has no noetherian hypothesis.
\begin{proof}[Proof of Theorem~4.1] First, we show that $R[M']^{\phi(v)}$ is an integrally closed domain. Translating by a point in $\Trop^{-1}(\phi(v))$ induces an isomorphism from $R[M']^{\phi(v)}$ to $R[M']$, which is a localization of a polynomial ring over the integrally closed domain $R$, and hence an integrally closed domain, by \cite[Proposition~17.B(2)]{Matsumura80} and \cite[Example~9.3]{Matsumura89}. By Lemma~\ref{lem:torus}, the points $x'$ in $\TTrop^{-1}(\varphi_v(\underline x))$ are dense in $T'$. By the going-down theorem for finite extensions of an integrally closed domain \cite[Theorem~9.4(ii)]{Matsumura89}, for each such $x'$ there is a point $x \in X(K)$ specializing to $\underline x$ such that $\varphi(x) = x'$. This shows that the image of $\TTrop^{-1}(\underline x) \cap X(K)$ is Zariski dense in $T'$. Since $\varphi$ is finite, it follows that $\TTrop^{-1}(\underline x) \cap X(K)$ is Zariski dense in $X$, as required. \end{proof} It remains to prove Proposition~\ref{prop:finite}. To do this, we work with $G$-admissible fans and the associated toric schemes over $\Spec R$, as in \cite{Gubler12}, to which we refer the reader for details of these constructions. Choose a $G$-admissible fan structure $\Sigma'$ on $N'_{\mathbb{R}} \times \mathbb{R}_{\geq 0}$ that contains $\mathbb{R}_{\geq 0} \cdot (\phi(v), 1)$ as a 1-dimensional face. Let $\Sigma$ be a $G$-admissible fan on $N_{\mathbb{R}} \times \mathbb{R}_{\geq 0}$ such that \begin{enumerate} \item For each face $\tau \in \Delta$, the cone $\mathbb{R}_{\geq 0} \cdot (\tau \times 1)$ is a union of faces of $\Sigma$. \item For each face $\sigma \in \Sigma$, the image $(\phi \times 1)(\sigma)$ is contained in a face of $\Sigma'$.
\end{enumerate} Let $\mathcal{Y}_\Sigma$ and $\mathcal{Y}_{\Sigma'}$ be the corresponding toric schemes over $\Spec R$, as defined in \cite[Section~7]{Gubler12}, containing $T_K$ and $T'_K$ as dense open subschemes, respectively, and let $\mathcal{X}_\Sigma$ be the closure of $X$ in $\mathcal{Y}_\Sigma$. Since $\Sigma'$ contains $\mathbb{R}_{\geq 0} \cdot (\phi(v), 1)$, the toric scheme $\mathcal{Y}_{\Sigma'}$ contains $T'_{\phi(v)}$ as an open subscheme. Then, by (1) and (2), the cones $\mathbb{R}_{\geq 0} \cdot (v_i, 1)$ must be cones in $\Sigma$, for $i = 0, \ldots, s$, and hence $\mathcal{X}_\Sigma$ contains $\mathcal{X}_{v_0} \cup \cdots \cup \mathcal{X}_{v_s}$ as an open subscheme as well. The morphism $\varphi: T_K \rightarrow T'_K$ extends to a morphism of toric schemes from $\mathcal{Y}_\Sigma$ to $\mathcal{Y}_{\Sigma'}$, by \cite[11.9]{Gubler12}, and we write $\overline \Phi$ for the restriction of this morphism to $\mathcal{X}_\Sigma$. The morphism $\Phi$ in Proposition~\ref{prop:finite} is the restriction of $\overline \Phi$ to $\mathcal{X}_{v_0} \cup \cdots \cup \mathcal{X}_{v_s}$.
\begin{lemma} \label{lem:proper} The morphism $\overline \Phi: \mathcal{X}_\Sigma \rightarrow \mathcal{Y}_{\Sigma'}$ is proper and of finite presentation. \end{lemma} \begin{proof} By \cite[Proposition~11.12]{Gubler12} and (1), the scheme $\mathcal{X}_\Sigma$ is proper over $\Spec R$. Since $\mathcal{Y}_{\Sigma'}$ is separated over $\Spec R$ \cite[Lemma~7.8]{Gubler12}, it follows that $\mathcal{X}_\Sigma$ is proper over $\mathcal{Y}_{\Sigma'}$ \cite[Corollary~5.4.3(i)]{EGA2}. Since $\Sigma$ is $G$-admissible and $G$ is divisible, the toric scheme $\mathcal{Y}_\Sigma$ is of finite presentation over $\Spec R$ \cite[Proposition~6.7]{Gubler12}, and in particular it is of finite type over $\Spec R$. Therefore, $\mathcal{X}_\Sigma$ is of finite type over $\Spec R$. Since $\mathcal{X}_\Sigma$ is the closure of its generic fiber, by construction, it is also flat, and hence of finite presentation over $\Spec R$ \cite[Corollary~3.4.7]{RaynaudGruson71}. Hence $\mathcal{X}_\Sigma$ is also of finite presentation over $\mathcal{Y}_{\Sigma'}$ \cite[Proposition~1.6.2(v)]{EGA4.1}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:finite}] By \cite[Lemma~11.6]{Gubler12}, the union $\mathcal{X}_{v_0} \cup \cdots \cup \mathcal{X}_{v_s}$ is the full preimage of $T'_{\phi(v)}$ under $\overline \Phi$. Therefore, by Lemma~\ref{lem:proper}, $\Phi$ is proper and of finite presentation.
To show that $\Phi$ is finite, it remains to show that it has finite fibers \cite[Theorem~8.11.1]{EGA4.3}. We begin with the general fiber of $T'_{\phi(v)}$. If $x' \in T'(K)$, then the tropicalization of the fiber $\Phi^{-1}(x')$ must be contained in $\phi^{-1}(\Trop(x')) \cap \Trop(X)$, which is finite by construction. Therefore, the fiber must be zero-dimensional and hence finite, since $\Phi$ is of finite presentation. It remains to check the special fiber of $T'_{\phi(v)}$. Suppose $\underline x'$ is in $T'_{\phi(v)}(k)$. The preimage of $T'_{\phi(v)}(k)$ is the disjoint union $X_{v_0} \cup \cdots \cup X_{v_s}$. Therefore, the tropicalization of $\Phi^{-1}(\underline x') \cap X_{v_i}$ must be contained in the fiber over $0$ under the natural map from the star of $v_i$ in $\Trop(X)$ to $N'_{\mathbb{R}}$, which is, again, finite by construction. It follows that the fiber over $\underline x'$ must be zero-dimensional, and hence finite, as required. \end{proof} \begin{remark} In addition to the error in Proposition~5.1, there is an unrelated sign error in Section~3 of \cite{tropicalfibers}, which appears in four places. At the bottom of p.~305, the tilted group ring should be defined as \[ R[M]^v = \bigoplus_{u \in M} \mathfrak{m}^{-\langle u,v\rangle}. \] The quotient $k[T_v]$ of this ring by the ideal generated by $\mathfrak{m}$ is then $\bigoplus_{u \in M} k^{-\langle u,v\rangle}$, the weight function on monomials is given by $bx^u \mapsto \nu(b) + \langle u,v\rangle$, and the weight of the monomial $a_{u,i} x^u t^i$ in Example~3.3 is $i + \langle u,v\rangle$.
\end{remark} \noindent \textbf{Acknowledgments.} I am most grateful to W.~Bu\-czynska and F. Sottile for their careful reading and for bringing these mistakes to my attention. This work was supported in part by NSF DMS 1068689 and completed during a visit to the Max Planck Institute for Mathematics in Bonn, Germany. \end{document}
\begin{document} \title{On the $O(1/k)$ Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers$^*$\thanks{$^*$This work was supported by National Science Foundation under Career grant DMI-0545910, AFOSR MURI FA9550-09-1-0538, and ONR Basic Research Challenge No. N000141210997.}} \begin{abstract} We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases of this formulation and studied their distributed solution through either subgradient based methods with $O(1/\sqrt{k})$ rate of convergence (where $k$ is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM based distributed method for the general formulation and show that it converges at the rate $O\left(1/k\right)$. \end{abstract} \section{Introduction}\label{sec:intro} We consider the following optimization problem with a separable objective function and linear constraints: \begin{align}\label{Multisplit-formulation} \min_{x_i\in X_i,z\in Z}\quad &\sum_{i=1}^N f_i(x_i)\\ \nonumber s.t. \quad & Dx + Hz= 0. \end{align} Here each $f_i:\mathbb{R}^n\to \mathbb{R}$ is a (possibly nonsmooth) convex function, $X_i$ and $Z$ are closed convex subsets of $\mathbb{R}^{n}$ and $\mathbb{R}^{W}$, and $D$ and $H$ are matrices of dimensions $W \times nN$ and $W \times W$. The decision variable $x$ is given by the partition $x = [x_1', \ldots, x_{N}']' \in \mathbb{R}^{nN}$, where the $x_i\in \mathbb{R}^n$ are components (subvectors) of $x$.
We denote by set $X$ the product of sets $X_i$, hence the constraint on $x$ can be written compactly as $x\in X$. Our focus on this formulation is motivated by {\it distributed multi-agent optimization problems}, which have attracted much recent attention in the optimization, control and signal processing communities. Such problems involve resource allocation, information processing, and learning among a set $\{1,\ldots,N\}$ of distributed agents connected through a network $G=(V,E)$, where $E$ denotes the set of $M$ undirected edges between the agents. In such applications, each agent $i$ has access to a privately known local objective (or cost) function, which represents the negative utility or the loss agent $i$ incurs at the decision variable $x$. The goal is to collectively solve a global optimization problem\footnote{The usefulness of formulation (\ref{distopt-problem}) can be illustrated by, among other things, machine learning problems described as follows: \begin{align*} \min_x \sum_{i=1}^{N-1} l\left(W_ix-b_i\right)+\pi \norm{x}_1, \end{align*} where $W_i$ corresponds to the input sample data (and functions thereof), $b_i$ represents the measured outputs, $W_i x-b_i$ indicates the prediction error and $l$ is the loss function on the prediction error. The scalar $\pi$ is nonnegative and indicates the penalty parameter on the complexity of the model. The widely used Least Absolute Deviation (LAD) formulation, the Least-Absolute Shrinkage and Selection Operator (Lasso) formulation and $l_1$ regularized formulations can all be represented by the above formulation by varying the loss function $l$ and penalty parameter $\pi$ (see \cite{BishopML} for more details).
The above formulation is a special case of the distributed multi-agent optimization problem (\ref{distopt-problem}), where $f_i(x) = l\left(W_ix-b_i\right)$ for $i=1,\ldots, N-1$ and $f_{N} = \pi\norm{x}_1.$ In applications where the data pairs $\big(W_i, b_i\big)$ are collected and maintained by different sensors over a network, the functions $f_i$ are local to each agent and the need for a distributed algorithm arises naturally. } \begin{align}\label{distopt-problem} \min &\sum_{i=1}^N f_i(x)\\ \nonumber s.t. \quad & x\in X. \end{align} This problem can be reformulated in the general formulation of (\ref{Multisplit-formulation}) by introducing a local copy $x_i$ of the decision variable for each node $i$ and imposing the constraint $x_i=x_j$ for all agents $i$ and $j$ with edge $(i,j)\in E$. Under the assumption that the underlying network is connected, this condition ensures that all of the local copies are equal to each other. Using the edge-node incidence matrix of network $G$, denoted by $A \in \mathbb{R}^{Mn \times Nn }$, the reformulated problem can be written compactly as\footnote{ The edge-node incidence matrix of network $G$ is defined as follows: each $n$-row block of matrix $A$ corresponds to an edge in the graph and each $n$-column block represents a node. The $n$ rows corresponding to the edge $e=(i,j)$ have $I(n\times n)$ in the $i^{th}$ $n$-column block, $-I(n\times n)$ in the $j^{th}$ $n$-column block and $0$ in the other columns, where $I(n\times n)$ is the identity matrix of dimension $n$. } \begin{align}\label{distFormulationSync} \min_{x_i\in X}\quad &\sum_{i=1}^N f_i(x_i)\\ \nonumber s.t. \quad & A x = 0, \nonumber \end{align} where $x$ is the vector $[x_1', x_2', \ldots, x_N']'$. We will refer to this formulation as the {\it edge-based reformulation} of the multi-agent optimization problem. Note that this formulation is a special case of problem (\ref{Multisplit-formulation}) with $D=A$, $H=0$ and $X_i=X$ for all $i$.
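The block structure of the edge-node incidence matrix described in the footnote above can be sketched in a few lines of code. This is an illustrative construction only (function name and 0-based indexing are ours, not from the paper):

```python
def edge_node_incidence(num_nodes, edges, n=1):
    # Build the edge-node incidence matrix A (Mn x Nn) from the footnote:
    # for edge e = (i, j), the e-th n-row block carries +I(n x n) in the
    # i-th n-column block, -I(n x n) in the j-th, and zeros elsewhere.
    M = len(edges)
    A = [[0] * (num_nodes * n) for _ in range(M * n)]
    for e, (i, j) in enumerate(edges):
        for r in range(n):
            A[e * n + r][i * n + r] = 1
            A[e * n + r][j * n + r] = -1
    return A

# A path graph on 3 nodes with scalar variables (n = 1): the constraint
# A x = 0 reads x_0 - x_1 = 0 and x_1 - x_2 = 0, i.e. consensus on a
# connected graph, as in the edge-based reformulation.
A = edge_node_incidence(3, [(0, 1), (1, 2)])
```

Each row of $A$ sums to zero, reflecting that every edge constraint couples exactly two local copies of the decision variable.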
Since these problems often lack a centralized processing unit, it is imperative that iterative solutions of problem (\ref{distopt-problem}) involve decentralized computations, meaning that each node (processor) performs calculations independently and on the basis of local information available to it, and then communicates this information to its neighbors according to the underlying network structure. Though there have been many important advances in the design of decentralized optimization algorithms for multi-agent optimization problems, several challenges still remain. First, many of these algorithms are based on first-order subgradient methods, which have slow convergence rates (given by $O(1/\sqrt{k})$, where $k$ is the iteration number), making them impractical in many large scale applications. Second, with the exception of a few recent contributions, existing algorithms are synchronous, meaning that computations are performed simultaneously according to some global clock; this often goes against the highly decentralized nature of the problem, which precludes such global information being available to all nodes. In this paper, we focus on the more general formulation (\ref{Multisplit-formulation}) and propose an asynchronous decentralized algorithm based on the classical Alternating Direction Method of Multipliers (ADMM) (see \cite{ADMMBoyd}, \cite{Eckstein2012} for comprehensive tutorials). We adopt the following asynchronous implementation for our algorithm: at each iteration $k$, a random subset $\Psi^k$ of the constraints is selected, which in turn selects the components of $x$ that appear in these constraints. We refer to the selected constraints as {\it active constraints} and the selected components as the {\it active components (or agents)}.
We design an ADMM-type primal-dual algorithm which at each iteration updates the primal variables using partial information about the problem data, in particular using cost functions corresponding to active components and active constraints, and updates the dual variables corresponding to active constraints. In the context of the edge-based reformulated multi-agent optimization problem (\ref{distFormulationSync}), this corresponds to a fully decentralized and asynchronous implementation in which a subset of the edges are randomly activated (for example, according to local clocks associated with those edges) and the agents incident to those edges perform computations on the basis of their local objective functions, followed by communication of updated values with neighbors. Under the assumption that each constraint has a positive probability of being selected and the constraints have a decoupled structure (which is satisfied by reformulations of the distributed multi-agent optimization problem), our first result shows that the (primal) asynchronous iterates generated by this algorithm converge almost surely to an optimal solution. Our proof relies on relating the asynchronous iterates to {\it full-information} iterates that would be generated by an algorithm that uses full information about the cost functions and constraints at each iteration. In particular, we introduce a weighted norm, where the weights are given by the inverses of the probabilities with which the constraints are activated, and construct a Lyapunov function for the asynchronous iterates using this weighted norm. Our second result establishes a {\it performance guarantee of $O(1/k)$} for this algorithm under a compactness assumption on the constraint sets $X$ and $Z$, which to our knowledge is faster than the guarantees available in the literature for this problem.
More specifically, we show that the expected value of the difference of the objective function value and the optimal value as well as the expected feasibility violation converges to 0 at rate $O(1/k)$. Our paper is related to a large recent literature on distributed optimization methods for solving the multi-agent optimization problem. Most closely related is a recent stream which proposed distributed synchronous ADMM algorithms for solving problem (\ref{distopt-problem}) (or specialized versions of it) (see \cite{MotaColoring}, \cite{Mota2012}, \cite{JXMAug}, \cite{giannakisAdHoc}, \cite{GiannakisChannelDecoding}). These papers have demonstrated the excellent computational performance of ADMM algorithms in the context of several signal processing applications. A closely related work in this stream is our recent paper \cite{WeiCDC}, where we considered problem (\ref{distopt-problem}) under the general assumption that the $f_i$ are convex. In \cite{WeiCDC}, we presented an ADMM based algorithm which operates by updating the decision variable $x$ in $N$ steps in a synchronous manner using a deterministic cyclic order and showed that it converges at the rate $O(1/k)$. This algorithm however requires a synchronous implementation and a globally known order on the set of agents. The algorithm presented here selects a subset of the components of the decision variable $x$ randomly and updates the variables $(x,z)$ in two steps by first updating the selected components of $x$ and then updating the $z$ variable. Another strand of this literature uses first-order (sub)gradient methods for solving problem (\ref{distopt-problem}). Much of this work builds on the seminal works \cite{ParallelCompute} and \cite{TsitsiklisThes}, which proposed gradient methods that can parallelize computations across multiple processors. 
The more recent paper \cite{NOSubgradient} introduced a first-order primal subgradient method for solving problem (\ref{distopt-problem}) over deterministically varying networks. This method involves each agent maintaining and updating an estimate of the optimal solution by linearly combining a subgradient step along its local cost function with averaging of estimates obtained from its neighbors (also known as a single consensus step).\footnote{This work is clearly also related to the extensive literature on consensus and cooperative control, where the goal is to design local deterministic or random update rules to achieve global coordination (for deterministic update rules, see \cite{Blondel}, \cite{BCM2009distributed}, \cite{FCFZ}, \cite{ali}, \cite{KarMoura}, \cite{murray}, \cite{Consensus2}, \cite{OCB}, \cite{Tahbaz-SalehiJad08}; for random update rules, see \cite{ScaglioneGossip}, \cite{BoydGossip}, \cite{GeographicGossip}, \cite{Fagnani08}).} Several follow-up papers considered variants of this method for problems with local and global constraints \cite{JJSubgradient}, \cite{NOPConstrained}, randomly varying networks \cite{LORandomNetwork}, \cite{LobelOzdaglarFeijer}, \cite{MateiBaras} and random gradient errors \cite{SNVStochasticGradient}, \cite{NOOTQuantization}. A different distributed algorithm that relies on Nesterov's dual averaging algorithm \cite{NesterovDualAvg} for static networks has been proposed and analyzed in \cite{Duchi2012}. Such gradient methods typically have a convergence rate of $O(1/\sqrt{k})$. The more recent contribution \cite{MouraFastGrad} focuses on a special case of (\ref{distopt-problem}) under smoothness assumptions on the cost functions and availability of global information about some problem parameters, and provides gradient algorithms (with multiple consensus steps) which converge at the faster rate of $O(1/k^2)$.
With the exception of \cite{asyncGossip} and \cite{IutzelerRand}, all algorithms provided in the literature are synchronous and assume that computations at all nodes are performed simultaneously according to a global clock. \cite{asyncGossip} provides an asynchronous subgradient method that uses gossip-type activation and communication between pairs of nodes and shows (under a compactness assumption on the iterates) that the iterates generated by this method converge almost surely to an optimal solution. The recent independent paper \cite{IutzelerRand} provides an asynchronous randomized ADMM algorithm for solving problem (\ref{distopt-problem}) and establishes convergence of the iterates to an optimal solution by studying the convergence of randomized Gauss-Seidel iterations on non-expansive operators. Our paper instead proposes an asynchronous ADMM algorithm for the more general problem (\ref{Multisplit-formulation}) and uses a Lyapunov function argument for establishing the $O(1/k)$ rate of convergence. Our algorithm and analysis also build on and combine ideas from several important contributions in the study of ADMM algorithms. Earlier work in this area focuses on the case $C=2$, where $C$ refers to the number of sequential primal updates at each iteration, and studies convergence in the context of finding zeros of the sum of two maximal monotone operators (more specifically, the Douglas-Rachford operator); see \cite{DouglasRachford}, \cite{Eckstein1989}, \cite{LionsSplitting}. The recent contribution \cite{HeYuanRate} considered solving problem (\ref{Multisplit-formulation}) (with $C=2$) with ADMM and showed that the objective function values of the iterates converge at the rate $O(1/k)$. Other recent works analyzed the rate of convergence of ADMM and other related algorithms under smoothness conditions on the objective function (see \cite{DengYing}, \cite{MaFastSplitting}, \cite{Goldfarb2012}).
Another paper \cite{HanYuanNote2012} considered the case $C\ge 2$ and showed that the resulting ADMM algorithm converges under the more restrictive assumption that each $f_i$ is strongly convex. The recent paper \cite{Luo2012} focused on the general case $C\ge 2$ and established a global linear convergence rate using an error bound condition that estimates the distance from the dual optimal solution set in terms of the norm of a proximal residual. The paper is organized as follows: we start in Section \ref{sec:ADMMSt} by highlighting the main ideas of the standard ADMM algorithm. In Section \ref{sec:stochADMM2SplittingGeneral}, we focus on the more general formulation (\ref{Multisplit-formulation}), present the asynchronous ADMM algorithm and apply this algorithm to solve problem (\ref{distopt-problem}) in a distributed way. Section \ref{sec:2splitconvGeneral} contains our convergence and rate of convergence analysis. Section \ref{sec:con} concludes with closing remarks. \noindent\textbf{Basic Notation and Notions:} A vector is viewed as a column vector. For a matrix $A$, we write $[A]_i$ to denote the $i^{th}$ column of matrix $A$, and $[A]^j$ to denote the $j^{th}$ row of matrix $A$. For a vector $x$, $x_i$ denotes the $i^{th}$ component of the vector. For a vector $x$ in $\mathbb{R}^n$ and a subset $S$ of $\{1,\ldots, n\}$, we denote by $[x]_{S}$ the vector in $\mathbb{R}^n$ which places zeros for all components of $x$ outside set $S$, i.e., \begin{align*} [[x]_{S}]_i= \left\{\begin{array}{cc}x_i & \mbox{if}\quad i\in S,\\ 0& \mbox{otherwise.}\end{array}\right. \end{align*} We use $x'$ and $A'$ to denote the transpose of a vector $x$ and a matrix $A$, respectively. We use the standard Euclidean norm (i.e., 2-norm) unless otherwise noted, i.e., for a vector $x$ in $\mathbb{R}^n$, $\norm{x}=\left(\sum_{i=1}^n x_i^2\right)^{\frac{1}{2}}$.
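The masking operation $[x]_{S}$ defined above can be illustrated concretely. The sketch below uses 0-based indices and an illustrative function name of our choosing:

```python
def restrict(x, S):
    # [x]_S from the notation paragraph: keep the components of x indexed
    # by S and set all other components to zero (0-based indices here).
    return [xi if i in S else 0 for i, xi in enumerate(x)]

# With x = (3, 1, 4, 1, 5) and S containing the first and third components,
# [x]_S = (3, 0, 4, 0, 0): the entries outside S are zeroed out.
v = restrict([3, 1, 4, 1, 5], {0, 2})
```

This is exactly the operation the asynchronous algorithm uses to isolate the active components of the decision variable at each iteration.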
\section{Preliminaries: Standard ADMM Algorithm}\label{sec:ADMMSt} The standard ADMM algorithm solves a separable convex optimization problem where the decision vector decomposes into two variables and the objective function is the sum of convex functions over these variables that are coupled through a linear constraint:\footnote{Interested readers can find more details in \cite{ADMMBoyd} and \cite{Eckstein2012}.} \begin{align}\label{ADMMFormulation} \min_{x\in X,z\in Z}\quad & F_s(x)+G_s(z)\\ \nonumber s.t. \quad & D_s x + H_s z = c, \end{align} where $F_s:\mathbb{R}^n\to \mathbb{R}$ and $G_s:\mathbb{R}^m\to \mathbb{R}$ are convex functions, $X$ and $Z$ are nonempty closed convex subsets of $\mathbb{R}^n$ and $\mathbb{R}^m$, and $D_s$ and $H_s$ are matrices of dimension $w\times n$ and $w \times m$. We consider the augmented Lagrangian function of problem (\ref{ADMMFormulation}) obtained by adding a quadratic penalty for feasibility violation to the Lagrangian function: \begin{align}\label{eq:augL}L_\beta(x,z,p) = F_s(x)+G_s(z)-p'(D_s x+H_s z-c)+\frac{\beta}{2}\norm{D_s x+H_s z-c}^2,\end{align} where $p$ in $\mathbb{R}^w$ is the Lagrange multiplier corresponding to the constraint $D_s x+H_s z=c$ and $\beta$ is a positive penalty parameter. The standard ADMM algorithm is an iterative primal-dual algorithm, which can be viewed as an approximate version of the classical augmented Lagrangian method for solving problem (\ref{ADMMFormulation}). It proceeds by approximately minimizing the augmented Lagrangian function through updating the primal variables $x$ and $z$ sequentially within a single pass of block coordinate descent (in a Gauss-Seidel manner) at the current Lagrange multiplier (or dual variable), followed by updating the dual variable through a gradient ascent step (see \cite{Luo2012} and \cite{Eckstein2012}).
More specifically, starting from some initial vector $(x^0, z^0, p^0)$,\footnote{We use superscripts to denote the iteration number.} at iteration $k\ge 0$, the variables are updated as \begin{align} x^{k+1} &\in \mathop{\rm argmin}_{x\in X} L_\beta(x,z^k,p^k), \label{eq:yUpdate}\\ z^{k+1} & \in \mathop{\rm argmin}_{z\in Z} L_\beta(x^{k+1},z,p^k), \label{eq:zUpdate}\\ p^{k+1}& = p^k -\beta(D_s x^{k+1}+H_s z^{k+1}-c). \label{eq:muUpdate} \end{align} We assume that the minimizers in steps (\ref{eq:yUpdate}) and (\ref{eq:zUpdate}) exist; however, they need not be unique. Note that the stepsize used in updating the dual variable is the same as the penalty parameter $\beta$. The ADMM algorithm takes advantage of the separable structure of problem (\ref{ADMMFormulation}) and decouples the minimization of the functions $F_s$ and $G_s$, since the sequential minimization over $x$ and $z$ involves (quadratic perturbations of) these functions separately. This is particularly useful in applications where the minimizations over these component functions admit simple solutions and can be implemented in a parallel or decentralized manner. The analysis of the ADMM algorithm adopts the following standard assumption on problem (\ref{ADMMFormulation}). \begin{assumption}{\it (Existence of a Saddle Point)}\label{assm:saddlePointSt} The Lagrangian function of problem (\ref{ADMMFormulation}), given by \[ L(x,z,p) = F_s(x)+G_s(z)-p'(D_sx+H_sz-c),\] has a saddle point, i.e., there exists a solution-multiplier pair $(x^*, z^*, p^*)$ with \[ L(x^*,z^*,p)\leq L(x^*,z^*,p^*)\leq L(x,z,p^*),\] for all $x$ in $\mathbb{R}^{n}$, $z$ in $\mathbb{R}^{m}$ and $p$ in $\mathbb{R}^{w}$. \end{assumption} Note that the existence of a saddle point is equivalent to the existence of a primal-dual optimal solution pair.
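The updates (\ref{eq:yUpdate})-(\ref{eq:muUpdate}) can be made concrete on a minimal scalar instance. The sketch below (our illustrative choice, not from the paper) takes $F_s(x)=(x-a)^2$, $G_s(z)=(z-b)^2$, $X=Z=\mathbb{R}$ and the constraint $x-z=0$ (so $D_s=1$, $H_s=-1$, $c=0$); both argmin steps then have closed forms obtained by setting the gradients of $L_\beta$ to zero:

```python
def admm_scalar(a, b, beta=1.0, iters=200):
    # Scalar ADMM for min (x-a)^2 + (z-b)^2 subject to x = z.
    # Setting dL_beta/dx = 2(x-a) - p + beta*(x-z) = 0 and
    # dL_beta/dz = 2(z-b) + p - beta*(x-z) = 0 gives the closed forms below.
    x = z = p = 0.0
    for _ in range(iters):
        x = (2 * a + p + beta * z) / (2 + beta)   # x-update (Gauss-Seidel, first)
        z = (2 * b - p + beta * x) / (2 + beta)   # z-update (uses the new x)
        p = p - beta * (x - z)                    # dual ascent with stepsize beta
    return x, z, p

# The minimizer of (x-0)^2 + (z-2)^2 subject to x = z is x = z = 1,
# with multiplier p* = 2 from the x-update stationarity condition.
x, z, p = admm_scalar(0.0, 2.0)
```

For this instance the iteration is a linear map whose error contracts geometrically, so a few hundred iterations recover the optimum to machine precision; in general ADMM only guarantees the $O(1/k)$ rates discussed above.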
It is well known that under the given assumptions, the objective function value of the primal sequence $\{x^k,z^k\}$ generated by (\ref{eq:yUpdate})-(\ref{eq:zUpdate}) converges to the optimal value of problem (\ref{ADMMFormulation}) and the dual sequence $\{p^k\}$ generated by (\ref{eq:muUpdate}) converges to a dual optimal solution (see Section 3.2 of \cite{ADMMBoyd}). \section{Asynchronous ADMM Algorithm}\label{sec:stochADMM2SplittingGeneral} Extending the standard ADMM, we present in this section an asynchronous distributed ADMM algorithm. We present the problem formulation and assumptions in Section \ref{sec:formulation}. In Section \ref{sec:asyncAlg}, we discuss the asynchronous implementation considered in the rest of this paper, which involves updating a subset of the components of the decision vector at each time using partial information about problem data and without the need for a global coordinator. Section \ref{sec:generalAlg} contains the details of the asynchronous ADMM algorithm. In Section \ref{sec:specialCase}, we apply the asynchronous ADMM algorithm to solve the distributed multi-agent optimization problem (\ref{distopt-problem}). \subsection{Problem Formulation and Assumptions}\label{sec:formulation} We consider the optimization problem given in (\ref{Multisplit-formulation}), which is restated here for convenience: \begin{align*} \min_{x_i\in X_i,z\in Z}\quad &\sum_{i=1}^N f_i(x_i)\\ \nonumber s.t. \quad & Dx + Hz= 0. \end{align*} This problem formulation arises in large-scale multi-agent (or multi-processor) environments where problem data is distributed across $N$ agents, i.e., each agent has access only to the component function $f_i$ and maintains the decision variable component $x_i$. The constraints usually represent the coupling across components of the decision variable imposed by the underlying connectivity among the agents.
Motivated by such applications, we will refer to each component function $f_i$ as the {\it local objective function} and use the notation $F:\mathbb{R}^{nN}\to \mathbb{R}$ to denote the {\it global objective function} given by their sum: \begin{equation}\label{eq:defF} F(x)=\sum_{i=1}^N f_i(x_i).\end{equation} Similar to the standard ADMM formulation, we adopt the following assumption. \begin{assumption}{\it (Existence of a Saddle Point)}\label{assm:saddlePoint2Split} The Lagrangian function of problem (\ref{Multisplit-formulation}), \begin{equation} L(x,z,p) = F(x)-p'(Dx+Hz),\label{eq:LagGeneral}\end{equation} has a saddle point, i.e., there exists a solution-multiplier pair $(x^*, z^*, p^*)$ with \begin{equation} L(x^*,z^*,p)\leq L(x^*,z^*,p^*)\leq L(x,z,p^*)\label{ineq:saddlePoint2}\end{equation} for all $x$ in $X$, $z$ in $Z$ and $p$ in $\mathbb{R}^{W}$. \end{assumption} Moreover, we assume that the matrices have a special structure that enables solving problem (\ref{Multisplit-formulation}) in an asynchronous manner: \begin{assumption}\label{assm:matrices} ({\it Decoupled Constraints}) Matrix $H$ is diagonal and invertible. Each row of matrix $D$ has exactly one nonzero element and matrix $D$ has no columns of all zeros.\footnote{We assume without loss of generality that each $x_i$ is involved in at least one of the constraints; otherwise, we could remove it from the problem and optimize it separately. Similarly, the diagonal elements of matrix $H$ are assumed to be nonzero; otherwise, that component of variable $z$ can be dropped from the optimization problem.} \end{assumption} The diagonal structure of matrix $H$ implies that each component of vector $z$ appears in exactly one linear constraint. The conditions that each row of matrix $D$ has only one nonzero element and that matrix $D$ has no column of zeros guarantee that the columns of matrix $D$ are linearly independent and hence matrix $D'D$ is positive definite.
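Since the Decoupled Constraints assumption is purely structural, it can be checked mechanically. A small sketch with an assumed pair $(D,H)$ (the matrices are illustrative, not from the paper):

```python
# Assumed toy matrices (illustrative only) satisfying the Decoupled
# Constraints assumption: each row of D has exactly one nonzero entry,
# no column of D is all zeros, and H is diagonal and invertible.
D = [[ 1.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0],
     [ 0.0, 2.0, 0.0],
     [ 0.0, 0.0, -1.0]]
H_diag = [-1.0, -1.0, -1.0, -1.0]   # H = diag(H_diag)

assert all(sum(1 for v in row if v != 0) == 1 for row in D)   # one nonzero per row
assert all(any(row[j] != 0 for row in D) for j in range(3))   # no zero column
assert all(h != 0 for h in H_diag)                            # H invertible

# D'D is then diagonal with positive entries (columns of D have
# disjoint supports), hence positive definite.
G = [[sum(D[r][i] * D[r][j] for r in range(4)) for j in range(3)] for i in range(3)]
assert all(G[i][j] == 0 for i in range(3) for j in range(3) if i != j)
assert all(G[i][i] > 0 for i in range(3))
print([G[i][i] for i in range(3)])  # column-wise squared norms: [2.0, 4.0, 1.0]
```

Because the columns of $D$ have disjoint supports, $D'D$ is in fact diagonal, which is the mechanism behind the linear-independence claim.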
The condition on matrix $D$ implies that each row of the constraint $Dx+Hz=0$ involves exactly one $x_i$. We will see in Section \ref{sec:specialCase} that this assumption is satisfied by the distributed multi-agent optimization problem that motivates this work. \subsection{Asynchronous Algorithm Implementation}\label{sec:asyncAlg} In the large-scale multi-agent applications described above, it is essential that the iterative solution of the problem involves computations performed by agents in a decentralized manner (with access to local information) with as little coordination as possible. This necessitates an asynchronous implementation in which some of the agents become active (randomly) in time and update the relevant components of the decision variable using partial and local information about problem data, while keeping the rest of the components of the decision variable unchanged. This removes the need for a centralized coordinator or global clock, which is an unrealistic requirement in such decentralized environments. To describe more formally the asynchronous algorithm implementation we consider in this paper, we first introduce some notation. We call a partition of the set $\{1,\ldots, W\}$ a {\it proper partition} if it has the property that if $z_i$ and $z_j$ are coupled in the constraint set $Z$, i.e., the value of $z_i$ affects the constraint on $z_j$ for some $z$ in set $Z$, then $i$ and $j$ belong to the same element of the partition, i.e., $\{i,j\}\subset \psi$ for some $\psi$ in the partition. We let $\Pi$ be a proper partition of the set $\{1,\ldots, W\}$, which forms a partition of the set of $W$ rows of the linear constraint $Dx+Hz=0$. For each $\psi$ in $\Pi$, we define $\Phi(\psi)$ to be the set of indices $i$ such that $x_i$ appears in the linear constraints in set $\psi$. Note that $\Phi(\psi)$ is an element of the power set $2^{\{1,\ldots, N\}}$. At each iteration of the asynchronous algorithm, two random variables $\Phi^k$ and $\Psi^k$ are realized.
While the pair $(\Phi^k, \Psi^k)$ is correlated within each iteration $k$, these variables are assumed to be independent and identically distributed across iterations. At each iteration $k$, first the random variable $\Psi^k$ is realized. The realized value, denoted by $\psi^k$, is an element of the proper partition $\Pi$ and selects a subset of the linear constraints $Dx+Hz=0$. The random variable $\Phi^k$ then takes the realized value $\phi^k=\Phi(\psi^k)$. We can view this process as activating a subset of the coupling constraints and the components that are involved in these constraints. If $l\in \psi^k$, we say that constraint $l$, as well as its associated dual variable $p_l$, is {\it active} at iteration $k$. Moreover, if $i\in \Phi(\psi^k)$, we say that component $i$ or agent $i$ is {\it active} at iteration $k$. We use the notation $\bar{\phi}^k$ to denote the complement of set $\phi^k$ in set $\{1,\ldots,N\}$ and similarly $\bar{\psi}^k$ to denote the complement of set $\psi^k$ in set $\{1,\ldots, W\}$. Our goal is to design an algorithm in which at each iteration $k$, only active components of the decision variable and active dual variables are updated, using local cost functions of active agents and active constraints. To that end, we define $f^k:\mathbb{R}^{nN}\to \mathbb{R}$ as the sum of the local objective functions whose indices are in the subset $\phi^k$: \[f^k(x) = \sum_{i\in \phi^k}f_i(x_i). \] We denote by $D_i$ the matrix in $\mathbb{R}^{W\times nN}$ that picks up the columns corresponding to $x_i$ from matrix $D$ and has zeros elsewhere. Similarly, we denote by $H_l$ the diagonal matrix in $\mathbb{R}^{W\times W}$ which picks up the element in the $l^{th}$ diagonal position from matrix $H$ and has zeros elsewhere. Using this notation, we define the matrices \[D_{\phi^k}=\sum_{i\in \phi^k} D_i, \quad \mbox{and}\quad H_{\psi^k} = \sum_{l\in \psi^k} H_l.\] We impose the following condition on the asynchronous algorithm.
\begin{assumption}\label{assm:io}({\it Infinitely Often Update}) For all $k$ and all $\psi$ in the proper partition $\Pi$, \[\mathbb{P}(\Psi^k=\psi)>0.\] \end{assumption} This assumption ensures that each element of the partition $\Pi$ is active infinitely often with probability 1. Since matrix $D$ has no columns of all zeros, each of the $x_i$ is involved in some constraint, and hence $\cup_{\psi\in\Pi}\Phi(\psi) = \{1,\ldots,N\}$. The preceding assumption therefore implies that each agent $i$ belongs to at least one set $\Phi(\psi)$ and therefore is active infinitely often with probability $1$. From the definition of the partition $\Pi$, we have $\cup_{\psi\in\Pi} \psi= \{1,\ldots,W\}$. Thus, each constraint $l$ is active infinitely often with probability $1$. \subsection{Asynchronous ADMM Algorithm}\label{sec:generalAlg} We next describe the asynchronous ADMM algorithm for solving problem (\ref{Multisplit-formulation}). \begin{framed}\noindent \textbf{I. Asynchronous ADMM algorithm:} \begin{itemize} \item[A] Initialization: choose some arbitrary $x^0$ in $X$, $z^0$ in $Z$ and $p^0=0$. \item[B] At iteration $k$, the random variables $\Phi^k$ and $\Psi^k$ take realizations $\phi^k$ and $\psi^k$. The function $f^k$ and the matrices $D_{\phi^k}$, $H_{\psi^k}$ are generated accordingly. \begin{itemize} \item[a] The primal variable $x$ is updated as \begin{equation} \label{x2}x^{k+1}\in\mathop{\rm argmin}_{x\in X} f^k(x)-( p^k)' D_{\phi^k} x+\frac{\beta}{2} \norm{D_{\phi^k} x+ H z^k}^2,\end{equation} with $x_i^{k+1}=x_i^k$ for $i$ in $\bar\phi^k$. \item[b] The primal variable $z$ is updated as \begin{equation}\label{z2}z^{k+1} \in \mathop{\rm argmin}_{z\in Z} -(p^k)'H_{\psi^k}z+ \frac{\beta}{2}\norm{H_{\psi^k}z+ D_{\phi^k}x^{k+1}}^2,\end{equation} with $z_i^{k+1}=z_i^k$ for $i$ in $\bar\psi^k$.
\item[c] The dual variable $p$ is updated as \begin{equation}\label{eq:pUpdate2SplitGeneral} p^{k+1} = p^k-\beta[D_{\phi^k}x^{k+1}+H_{\psi^k}z^{k+1}]_{\psi^k}.\end{equation} \end{itemize} \end{itemize} \end{framed} We assume that the minimizers in updates (\ref{x2}) and (\ref{z2}) exist, but they need not be unique.\footnote{Note that the optimization problems in (\ref{x2}) and (\ref{z2}) are independent of the components of $x$ not in $\phi^k$ and the components of $z$ not in $\psi^k$, and thus the restriction $x_i^{k+1}=x_i^k$ for $i$ not in $\phi^k$ and $z_i^{k+1}=z_i^k$ for $i$ not in $\psi^k$ still preserves optimality of $x^{k+1}$ and $z^{k+1}$ with respect to the optimization problems in updates (\ref{x2}) and (\ref{z2}).} The term $\frac{\beta}{2}\norm{D_{\phi^k} x+ H z^k}^2$ in the objective function of the minimization problem in update (\ref{x2}) can be written as \[\frac{\beta}{2}\norm{D_{\phi^k} x+ H z^k}^2=\frac{\beta}{2}\norm{D_{\phi^k}x}^2+\beta( Hz^k)'D_{\phi^k}x+\frac{\beta}{2}\norm{Hz^k}^2,\] where the last term is independent of the decision variable $x$ and thus can be dropped from the objective function.
Therefore, the primal $x$ update can be written as \begin{align}\label{eq:xUpdate2SplitGeneral} x^{k+1} \in\mathop{\rm argmin}_{x \in X} f^k(x)-(p^k-\beta Hz^k)'D_{\phi^k}x+\frac{\beta}{2}\norm{D_{\phi^k}x}^2.\end{align} Similarly, the term $ \frac{\beta}{2}\norm{H_{\psi^k}z+ D_{\phi^k}x^{k+1}}^2$ in update (\ref{z2}) can be expressed equivalently as \[ \frac{\beta}{2}\norm{H_{\psi^k}z+ D_{\phi^k}x^{k+1}}^2= \frac{\beta}{2}\norm{H_{\psi^k}z}^2+ {\beta}( D_{\phi^k}x^{k+1})'H_{\psi^k}z+ \frac{\beta}{2}\norm{ D_{\phi^k}x^{k+1}}^2.\] We can drop the term $ \frac{\beta}{2}\norm{ D_{\phi^k}x^{k+1}}^2$, which is constant in $z$, and write update (\ref{z2}) as \begin{align}\label{eq:zUpdate2SplitGeneral} z^{k+1} \in \mathop{\rm argmin}_{z\in Z} -(p^k-\beta D_{\phi^k}x^{k+1})'H_{\psi^k}z+\frac{\beta}{2}\norm{H_{\psi^k}z}^2. \end{align} The updates (\ref{eq:xUpdate2SplitGeneral}) and (\ref{eq:zUpdate2SplitGeneral}) make the dependence on the decision variables $x$ and $z$ more explicit and will therefore be used in the convergence analysis. We refer to (\ref{eq:xUpdate2SplitGeneral}) and (\ref{eq:zUpdate2SplitGeneral}) as the {\it primal $x$ and $z$ updates}, respectively, and to (\ref{eq:pUpdate2SplitGeneral}) as the {\it dual update}.
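A minimal sketch (assumed toy data, $n=1$) of how the matrices $D_i$ and the masked matrix $D_{\phi^k}$ are assembled, and of the fact that the active rows of $Dx$ involve only active components, which is what allows the masked updates to ignore inactive agents:

```python
# Sketch with assumed toy data: assembling D_i and D_phi for the
# asynchronous updates (illustrative, not the paper's code).
D = [[ 1.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0],
     [ 0.0, 1.0, 0.0],
     [ 0.0, 0.0, 1.0]]
W, N = 4, 3

def D_i(i):
    # columns of D belonging to x_i (n = 1 here), zeros elsewhere
    return [[row[j] if j == i else 0.0 for j in range(N)] for row in D]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# Suppose the active constraints are psi = {0, 1}; they involve only x_0,
# so the active components are phi = {0}.
psi, phi = [0, 1], [0]
D_phi = [[0.0] * N for _ in range(W)]
for i in phi:
    D_phi = mat_add(D_phi, D_i(i))

x = [2.0, 3.0, 5.0]
Dx, D_phi_x = mat_vec(D, x), mat_vec(D_phi, x)
# active rows of Dx involve only active components: [Dx]_psi = [D_phi x]_psi
assert all(Dx[l] == D_phi_x[l] for l in psi)
print(D_phi_x)  # rows outside the active agent's constraints are zero
```

The asserted identity between the active rows of $Dx$ and $D_{\phi^k}x$ is exactly what lets the dual update touch only the active constraints.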
We next introduce another reformulation of this problem, used also in Example 4.4 of Section 3.4 in \cite{ParallelCompute}, so that each constraint only involves one component of the decision variable.\footnote{Note that this reformulation can be applied to any problem with a separable objective function and linear constraints to turn it into a problem of form (\ref{Multisplit-formulation}) that satisfies Assumption \ref{assm:matrices}.} More specifically, we let $N(e)$ denote the agents which are the endpoints of edge $e$ and introduce a variable $z=[z_{eq}]_{e=1,\ldots,M\atop q\in N(e)}$ of dimension $2M$, one for each endpoint of each edge. Using this variable, we can write the constraint $x_i=x_j$ for each edge $e=(i,j)$ as \[x_i=z_{ei}, \quad -x_j=z_{ej},\quad z_{ei}+z_{ej}=0.\] The variable $z_{ei}$ can be viewed as an estimate of the component $x_j$ which is known by node $i$. The transformed problem can be written compactly as \begin{align}\label{2splitFormulationEdge} \min_{x_i\in X,z\in Z}\quad &\sum_{i=1}^N f_i(x_i)\\ \nonumber s.t. \quad & A_{ei}x_{i} = z_{ei}, \quad \mbox{$e=1,\ldots,M$, $i \in \mathcal{N}(e)$,}\nonumber \end{align} where $Z$ is the set $\{z\in \mathbb{R}^{2M}\ |\ \sum_{q\in \mathcal{N}(e)} z_{eq}=0,\ \mbox{$e=1,\ldots,M$}\}$ and $A_{ei}$ denotes the entry in the $e^{th}$ row and $i^{th}$ column of matrix $A$, which is either $1$ or $-1$. This formulation is in the form of problem (\ref{Multisplit-formulation}) with matrix $H = -I$, where $I$ is the identity matrix of dimension $2M\times 2M$. Matrix $D$ is of dimension $2M\times N$, where each row contains exactly one entry equal to $1$ or $-1$. In view of the fact that each node is incident to at least one edge, matrix $D$ has no column of all zeros. Hence Assumption \ref{assm:matrices} is satisfied. One natural implementation of the asynchronous algorithm is to associate with each edge an independent Poisson clock with identical rates across the edges.
At iteration $k$, if the clock corresponding to edge $(i,j)$ ticks, then $\phi^k=\{i,j\}$ and $\psi^k$ picks the rows in the constraint associated with edge $(i,j)$, i.e., the constraints $x_i=z_{ei}$ and $-x_j=z_{ej}$.\footnote{Note that this selection is a proper partition of the constraints since the set $Z$ couples only the variables $z_{eq}$ for the endpoints of an edge $e$.} We associate a dual variable $p_{ei}$ in $\mathbb{R}$ with each of the constraints $A_{ei}x_{i} = z_{ei}$, and denote the vector of dual variables by $p$. The primal $z$ update and the dual update [Eqs.\ (\ref{z2}) and (\ref{eq:pUpdate2SplitGeneral})] for this problem are given by \begin{align}\label{QP} z_{ei}^{k+1}, z_{ej}^{k+1} = \mathop{\rm argmin}_{z_{ei}, z_{ej},\, z_{ei}+z_{ej}=0} -(p_{ei}^k)'(A_{ei}x_i^{k+1}-z_{ei})-(p_{ej}^k)'(A_{ej}x_j^{k+1}-z_{ej})\\ \nonumber+ \frac{\beta}{2}\left(\norm{A_{ei}x_i^{k+1}-z_{ei}}^2+\norm{A_{ej}x_j^{k+1}-z_{ej}}^2\right),\end{align} \[ p_{eq}^{k+1} = p_{eq}^k-\beta(A_{eq}x_q^{k+1}-z_{eq}^{k+1})\qquad \hbox{for } q=i,j.\] The primal $z$ update involves a quadratic optimization problem with linear constraints, which can be solved in closed form.
In particular, using first-order optimality conditions, we conclude \begin{equation} z_{ei}^{k+1} = \frac{1}{\beta}(-p_{ei}^k-v^{k+1})+A_{ei}x_i^{k+1},\qquad \qquad z_{ej}^{k+1} = \frac{1}{\beta}(-p_{ej}^k-v^{k+1})+A_{ej}x_j^{k+1},\label{eq:zej}\end{equation} where $v^{k+1}$ is the Lagrange multiplier associated with the constraint $z_{ei}+z_{ej}=0$ and is given by \begin{equation} \label{eq:v}v^{k+1} = \frac{1}{2}(-p_{ei}^k-p_{ej}^k)+\frac{\beta}{2}(A_{ei}x_i^{k+1}+A_{ej}x_j^{k+1}).\end{equation} Combining these steps yields the following asynchronous algorithm for problem (\ref{distFormulationSync}), which can be implemented in a decentralized manner: each node $i$ at each iteration $k$ has access only to his local objective function $f_i$, adjacency matrix entries $A_{ei}$, and his local variables $x_i^k$, $z_{ei}^k$, and $p_{ei}^k$, while exchanging information with one of his neighbors.\footnote{The asynchronous ADMM algorithm can also be applied to a node-based reformulation of problem (\ref{distopt-problem}), where we impose the local copy of each node to be equal to the average of those of its neighbors. This leads to another asynchronous distributed algorithm with a different communication structure, in which each node at each iteration broadcasts its local variables to all his neighbors; see \cite{WOLIDS} for more details.} \begin{framed}\noindent \textbf{II. Asynchronous Edge-Based ADMM algorithm:} \begin{itemize} \item[A] Initialization: choose some arbitrary $x_i^0$ in $X$ and $z^0$ in $Z$, which are not necessarily all equal. Initialize $p_{ei}^0=0$ for all edges $e$ and endpoints $i$.
\item[B] At time step $k$, when the local clock associated with edge $e =(i,j)$ ticks: \begin{itemize} \item[a] Agents $i$ and $j$ update their estimates $x_i^k$ and $x_j^k$ simultaneously as \[x_q^{k+1} = \mathop{\rm argmin}_{x_q \in X} f_q(x_q)-(p_{eq}^k)'A_{eq}x_q+\frac{\beta}{2} \norm{A_{eq}x_q-z_{eq}^k}^2\] for $q=i,j$. The updated values $x_i^{k+1}$ and $x_j^{k+1}$ are exchanged over the edge $e$. \item[b] Agents $i$ and $j$ exchange their current dual variables $p_{ei}^k$ and $p_{ej}^k$ over the edge $e$. Agents $i$ and $j$ use the obtained values to compute the variable $v^{k+1}$ as in Eq.\ (\ref{eq:v}), i.e., \[v^{k+1} = \frac{1}{2}(-p_{ei}^k-p_{ej}^k)+\frac{\beta}{2}(A_{ei}x_i^{k+1}+A_{ej}x_j^{k+1}),\] and, for $q=i,j$, update their estimates $z_{ei}^k$ and $z_{ej}^k$ according to Eq.\ (\ref{eq:zej}), i.e., \[z_{eq}^{k+1} = \frac{1}{\beta}(-p_{eq}^k-v^{k+1})+A_{eq}x_q^{k+1}.\] \item [c] Agents $i$ and $j$ update the dual variables $p_{ei}^{k+1}$ and $p_{ej}^{k+1}$ as \[p_{eq}^{k+1}=-v^{k+1}\qquad \hbox{for }q=i,j.\] \item [d] All other agents keep the same variables as in the previous time step.\end{itemize} \end{itemize} \end{framed} \section{Convergence Analysis for the Asynchronous ADMM Algorithm}\label{sec:2splitconvGeneral} In this section, we study the convergence behavior of the asynchronous ADMM algorithm under Assumptions \ref{assm:saddlePoint2Split}-\ref{assm:io}. We show that the primal iterates $\{x^k,z^k\}$ generated by (\ref{eq:xUpdate2SplitGeneral}) and (\ref{eq:zUpdate2SplitGeneral}) converge almost surely to an optimal solution of problem (\ref{Multisplit-formulation}). Under the additional assumption that the constraint sets $X$ and $Z$ are compact, we further show that the corresponding objective function values converge to the optimal value in expectation at rate $O(1/k)$.
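The steps of Algorithm II can be exercised directly. The sketch below runs one activation of edge $e_0=(0,1)$ on an assumed 3-node path graph with quadratic local objectives (all data illustrative); it checks that the new $(z_{ei},z_{ej})$ stays feasible for $Z$, that step (c) agrees with the generic dual update, and that the variables of the inactive edge are untouched.

```python
import math

# One activation of Algorithm II on an assumed path graph 0-1-2 with
# f_i(x) = 0.5*(x - a_i)^2; edges e0=(0,1), e1=(1,2).  Illustrative only.
beta = 1.0
a = [0.0, 2.0, 4.0]
A = {('e0', 0): 1.0, ('e0', 1): -1.0, ('e1', 1): 1.0, ('e1', 2): -1.0}
z = {k: 0.0 for k in A}          # one z_eq per edge endpoint
p = {k: 0.0 for k in A}          # one dual variable p_eq per edge endpoint
x = [0.0, 0.0, 0.0]

e, i, j = 'e0', 0, 1             # the clock of edge e0 ticks
# (a) closed-form x-update for quadratic f_q:
#     argmin_x 0.5*(x - a_q)^2 - p_eq*A_eq*x + (beta/2)*(A_eq*x - z_eq)^2
for q in (i, j):
    x[q] = (a[q] + A[e, q] * (p[e, q] + beta * z[e, q])) / (1.0 + beta)
# (b) multiplier of z_ei + z_ej = 0, then the z-update
v = 0.5 * (-p[e, i] - p[e, j]) + 0.5 * beta * (A[e, i] * x[i] + A[e, j] * x[j])
z_old, p_old = dict(z), dict(p)
for q in (i, j):
    z[e, q] = (-p[e, q] - v) / beta + A[e, q] * x[q]
# (c) dual update
for q in (i, j):
    p[e, q] = -v

assert math.isclose(z[e, i] + z[e, j], 0.0, abs_tol=1e-12)   # z stays in Z
for q in (i, j):                  # step (c) reproduces the generic dual update
    assert math.isclose(p[e, q], p_old[e, q] - beta * (A[e, q] * x[q] - z[e, q]))
assert z['e1', 1] == z_old['e1', 1] and p['e1', 1] == p_old['e1', 1]
print(x, v)
```

The feasibility of the updated $z$ and the identity $p_{eq}^{k+1}=-v^{k+1}$ follow from the closed-form expressions above, so the checks hold exactly up to rounding.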
We first recall the relationship between the sets $\phi^k$ and $\psi^k$ at a particular iteration $k$, which plays an important role in the analysis. Since the set of active components at time $k$, $\phi^k$, contains all components of the decision variable that appear in the active constraints defined by the set $\psi^k$, we can write \begin{equation}\label{eq:phipsi}[Dx]_{\psi^k}=[D_{\phi^k}x]_{\psi^k}.\end{equation} We next consider a sequence $\{y^k,v^k,\mu^k\}$, which is formed of iterates defined by a ``full information'' version of the ADMM algorithm in which all constraints (and therefore all components) are active at each iteration. We will show that under the Decoupled Constraints assumption (cf.\ Assumption \ref{assm:matrices}), the iterates $(x^k,z^k,p^k)$ generated by the asynchronous algorithm take the values of $(y^k,v^k,\mu^k)$ over the sets of active components and constraints and remain at their previous values otherwise. This association enables us to perform the convergence analysis using the sequence $\{y^k,v^k,\mu^k\}$ and then translate the results into bounds on the objective function value improvement along the sequence $\{x^k,z^k,p^k\}$. More specifically, at iteration $k$, we define $y^{k+1}$ by \begin{equation}\label{eq:defy} y^{k+1}\in\mathop{\rm argmin}_{y\in X} F(y)-(p^k-\beta Hz^k)'Dy+\frac{\beta}{2}\norm{Dy}^2.\end{equation} Due to the fact that each row of matrix $D$ has only one nonzero element [cf.\ Assumption \ref{assm:matrices}], the norm $\norm{Dy}^2$ can be decomposed as $\sum_{i=1}^N \norm{D_iy_i}^2$, where recall that $D_i$ is the matrix that picks up the columns corresponding to component $x_i$ and is equal to zero otherwise.
Thus, the preceding optimization problem can be written as a separable optimization problem over the variables $y_i$: \[ y^{k+1}\in \sum_{i=1}^N \mathop{\rm argmin}_{y_i\in X_i} f_i(y_i)-(p^k-\beta Hz^k)'D_iy_i+\frac{\beta}{2}\norm{D_iy_i}^2.\] Since $f^k(x)=\sum_{i\in \phi^k}f_i(x_i)$ and $D_{\phi^k}=\sum_{i\in \phi^k}D_i$, the minimization problem that defines the iterate $x^{k+1}$ [cf.\ Eq.\ (\ref{eq:xUpdate2SplitGeneral})] similarly decomposes over the variables $x_i$ for $i\in \phi^k$. Hence, the iterates $x^{k+1}$ and $y^{k+1}$ are identical over the components in set $\phi^k$, i.e., $[x^{k+1}]_{\phi^k} = [y^{k+1}]_{\phi^k}$. Using the definition of matrix $D_{\phi^k}$, i.e., $D_{\phi^k}=\sum_{i\in\phi^k}D_i$, this implies the following relation: \begin{equation}\label{eq:dx}D_{\phi^k}x^{k+1}=D_{\phi^k}y^{k+1}.\end{equation} The rest of the components of the iterate $x^{k+1}$ by definition remain at their previous values, i.e., $[x^{k+1}]_{\bar \phi^k}=[x^k]_{\bar \phi^k}$. Similarly, we define the vector $v^{k+1}$ in $Z$ by \begin{equation}\label{eq:defv} v^{k+1}\in\mathop{\rm argmin}_{v\in Z} -(p^k-\beta Dy^{k+1})'Hv+\frac{\beta}{2}\norm{Hv}^2.\end{equation} Using the diagonal structure of matrix $H$ [cf.\ Assumption \ref{assm:matrices}] and the fact that $\Pi$ is a proper partition of the constraint set [cf.\ Section \ref{sec:asyncAlg}], this problem can also be decomposed in the following way: \[ v^{k+1}\in \mathop{\rm argmin}_{v,\ [v]_{\psi} \in Z_{\psi}} \sum_{\psi\in \Pi}-(p^k-\beta Dy^{k+1})'H_\psi [v]_\psi+\frac{\beta}{2}\norm{H_\psi [v]_\psi}^2,\] where $H_\psi$ is a diagonal matrix that contains the $l^{th}$ diagonal element of the diagonal matrix $H$ for $l$ in set $\psi$ (and has zeros elsewhere) and the set $Z_\psi$ is the projection of set $Z$ onto the components $[v]_\psi$.
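The separability of the $y$-update can be verified numerically for assumed quadratic local objectives $f_i(y_i)=\frac{1}{2}(y_i-a_i)^2$ (an illustrative choice): the coupled problem has optimality condition $(I+\beta D'D)y=a+D'q$, and since each row of $D$ has one nonzero entry, $D'D$ is diagonal and the condition splits into scalar solves. The sketch compares the per-component solution with a structure-agnostic gradient descent on the coupled objective.

```python
# Assumed data (illustrative): q stands in for p^k - beta*H*z^k.
beta = 2.0
D = [[ 1.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0],
     [ 0.0, 1.0, 0.0],
     [ 0.0, 0.0, 3.0]]
a = [1.0, -2.0, 0.5]
q = [0.3, -0.1, 0.7, 0.2]

Dt_q = [sum(D[r][i] * q[r] for r in range(4)) for i in range(3)]   # D'q
d = [sum(D[r][i] ** 2 for r in range(4)) for i in range(3)]        # diag of D'D

# (i) per-component closed-form solves, exploiting that D'D is diagonal
y_sep = [(a[i] + Dt_q[i]) / (1.0 + beta * d[i]) for i in range(3)]

# (ii) structure-agnostic gradient descent on
#      g(y) = sum_i 0.5*(y_i - a_i)^2 - q'Dy + (beta/2)*||Dy||^2
y = [0.0, 0.0, 0.0]
for _ in range(2000):
    Dy = [sum(D[r][i] * y[i] for i in range(3)) for r in range(4)]
    grad = [y[i] - a[i] - Dt_q[i] + beta * sum(D[r][i] * Dy[r] for r in range(4))
            for i in range(3)]
    y = [y[i] - 0.05 * grad[i] for i in range(3)]

assert all(abs(y[i] - y_sep[i]) < 1e-9 for i in range(3))
print(y_sep)
```

Both routes reach the same minimizer; the per-component route is what the asynchronous algorithm exploits, since each agent can solve its own scalar problem.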
Since the diagonal matrix $H_{\psi^k}$ has nonzero entries only in the $l^{th}$ diagonal positions with $l\in \psi^k$, the update of $[v]_{\psi^k}$ is independent of the other components; hence we can express the update on the components of $v^{k+1}$ in set $\psi^k$ as \[[v^{k+1}]_{\psi^k}\in\Big[\mathop{\rm argmin}_{z\in Z} -(p^k-\beta D_{\phi^k}x^{k+1})'H_{\psi^k}z+\frac{\beta}{2}\norm{H_{\psi^k}z}^2\Big]_{\psi^k}.\] By the primal $z$ update [cf.\ Eq.\ (\ref{eq:zUpdate2SplitGeneral})], this shows that $[z^{k+1}]_{\psi^k}=[v^{k+1}]_{\psi^k}$. By definition, the rest of the components of $z^{k+1}$ remain at their previous values, i.e., $[z^{k+1}]_{\bar{\psi}^k} = [z^k]_{\bar{\psi}^k}$. Finally, we define the vector $\mu^{k+1}$ in $\mathbb{R}^W$ by \begin{equation}\label{eq:defMu} \mu^{k+1}= p^k-\beta (Dy^{k+1}+Hv^{k+1}). \end{equation} We relate this vector to the dual variable $p^{k+1}$ using the dual update [cf.\ Eq.\ (\ref{eq:pUpdate2SplitGeneral})]. We have \[[D_{\phi^k}x^{k+1}]_{\psi^k}=[D_{\phi^k}y^{k+1}]_{\psi^k}=[Dy^{k+1}]_{\psi^k},\] where the first equality follows from Eq.\ (\ref{eq:dx}) and the second is derived from Eq.\ (\ref{eq:phipsi}). Moreover, since $H$ is diagonal, we have $[H_{\psi^k}z^{k+1}]_{\psi^k}=[Hv^{k+1}]_{\psi^k}$. Thus, we obtain $[p^{k+1}]_{\psi^k}=[\mu^{k+1}]_{\psi^k}$ and $[p^{k+1}]_{\bar{\psi}^k}=[p^k]_{\bar{\psi}^k}$. A key term in our analysis will be the {\it residual}, defined at a given primal vector $(y,v)$ by \begin{equation}\label{eq:defrOthers}r=Dy+Hv.\end{equation} The residual term is important since its value at the primal vector $(y^{k+1},v^{k+1})$ specifies the update direction for the dual vector $\mu^{k+1}$ [cf.\ Eq.\ (\ref{eq:defMu})]. We will denote the residual at the primal vector $(y^{k+1},v^{k+1})$ by \begin{equation}\label{eq:defr} r^{k+1} = Dy^{k+1}+Hv^{k+1}.\end{equation} \subsection{Preliminaries} We proceed to the convergence analysis of the asynchronous algorithm.
We first present some preliminary results which will be used later to establish convergence properties of the asynchronous algorithm. In particular, we provide bounds on the difference of the objective function value of the vector $y^k$ from the optimal value, the distance between $\mu^k$ and an optimal dual solution, and the distance between $v^k$ and an optimal solution $z^*$. We also provide a set of sufficient conditions for a limit point of the sequence $\{x^k, z^k, p^k\}$ to be a saddle point of the Lagrangian function. The results of this section are independent of the probability distributions of the random variables $\Phi^k$ and $\Psi^k$. Due to space constraints, the proofs of the results in this section are omitted. We refer the reader to \cite{WOLIDS} for the missing details. The next lemma establishes primal feasibility (or the zero residual property) of a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}). \begin{lemma}\label{lemma:saddlePoint2} Let $(x^*, z^*, p^*)$ be a saddle point of the Lagrangian function, defined as in Eq.\ (\ref{eq:LagGeneral}), of problem (\ref{Multisplit-formulation}). Then \begin{equation} Dx^*+H z^*=0.\label{eq:saddleFeasAll}\end{equation} \end{lemma} The next theorem provides bounds on two key quantities, $F(y^{k+1})-\mu'r^{k+1}$ and $\frac{1}{2\beta}\norm{\mu^{k+1}-p^*}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^*)}^2$. {These quantities will be related to the iterates generated by the asynchronous ADMM algorithm via a weighted norm and a weighted Lagrangian function in Section \ref{sec:convRate}.
The weighted version of the quantity $\frac{1}{2\beta}\norm{\mu^{k+1}-p^*}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^*)}^2$ is used to show almost sure convergence of the algorithm, and the quantity $F(y^{k+1})-\mu'r^{k+1}$ is used in the convergence rate analysis.} \begin{theorem}\label{thm:deterBounds} Let $\{x^k,z^k, p^k\}$ be the sequence generated by the asynchronous ADMM algorithm (\ref{x2})-(\ref{eq:pUpdate2SplitGeneral}). Let $\{y^{k}, v^{k}, \mu^{k}\}$ be the sequence defined in Eqs.\ (\ref{eq:defy})-(\ref{eq:defMu}) and $(x^*, z^*, p^*)$ be a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}). The following hold at each iteration $k$: \begin{align}\label{ineq:fValue} F(x^*)-F(y^{k+1})&+\mu'r^{k+1}\geq \frac{1}{2\beta}\left(\norm{\mu^{k+1}-\mu}^2-\norm{p^k-\mu}^2\right)\\\nonumber&+\frac{\beta}{2}\left(\norm{H(v^{k+1}-z^*)}^2-\norm{H(z^k-z^*)}^2\right) +\frac{\beta}{2}\norm{r^{k+1}}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2, \end{align} for all $\mu$ in $\mathbb{R}^{W}$, and \begin{align}\label{ineq:normValue} 0\geq &\frac{1}{2\beta}\left(\norm{\mu^{k+1}-p^*}^2-\norm{p^k-p^*}^2\right)+\frac{\beta}{2}\left(\norm{H(v^{k+1}-z^*)}^2-\norm{H(z^k-z^*)}^2\right)\\ \nonumber &+\frac{\beta}{2}\norm{r^{k+1}}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2. \end{align} \end{theorem} The following lemma analyzes the limiting properties of the sequence $\{x^k, z^k, p^k\}$. The results will later be used in Lemma \ref{lemma:convergence}, which provides a set of sufficient conditions for a limit point of the sequence $\{x^k, z^k, p^k\}$ to be a saddle point. \begin{lemma}\label{lemma:limitPoint} Let $\{x^k,z^k, p^k\}$ be the sequence generated by the asynchronous ADMM algorithm (\ref{x2})-(\ref{eq:pUpdate2SplitGeneral}).
Let $\{y^{k}, v^{k}, \mu^{k}\}$ be the sequence defined in Eqs.\ (\ref{eq:defy}), (\ref{eq:defv}) and (\ref{eq:defMu}). Consider a sample path of $\Psi^k$ and $\Phi^k$ along which the sequence $\left\{\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right\}$ converges to $0$ and the sequence $\{z^k, p^k\}$ is bounded, where $r^k$ is the residual defined as in Eq.\ (\ref{eq:defr}). Then, the sequence $\{x^k, z^k, p^k\}$ has a limit point, which is a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}). \end{lemma} \subsection{Convergence and Rate of Convergence}\label{sec:convRate} The results of the previous section did not rely on the probability distributions of the random variables $\Phi^k$ and $\Psi^k$. In this section, we introduce a weighted norm and a weighted Lagrangian function whose weights are defined in terms of the probability distributions of the random variables $\Psi^k$ and $\Phi^k$ representing the active constraints and components. We will use the weighted norm to construct a nonnegative supermartingale along the sequence $\{x^k,z^k,p^k\}$ generated by the asynchronous ADMM algorithm and use it to establish the almost sure convergence of this sequence to a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}).
By relating the iterates generated by the asynchronous ADMM algorithm to the variables $(y^k, v^k, \mu^k)$, through taking expectations of the weighted Lagrangian function and using the results of Theorem \ref{thm:deterBounds}, we will show that under a compactness assumption on the constraint sets $X$ and $Z$, the asynchronous ADMM algorithm converges with rate $O(1/k)$ in expectation in terms of both the objective function value and the constraint violation. We use the notation $\alpha_i$ to denote the probability that component $x_i$ is active at an iteration, i.e., \begin{equation}\label{eq:defAlpha}\alpha_i= \mathbb{P}(i\in \Phi^k),\end{equation} and the notation $\lambda_l$ to denote the probability that constraint $l$ is active at an iteration, i.e., \begin{equation}\label{eq:defPi}\lambda_l = \mathbb{P}(l\in \Psi^k).\end{equation} Note that, since the random variables $\Phi^k$ (and $\Psi^k$) are independent and identically distributed for all $k$, these probabilities are the same across all iterations. We define a diagonal matrix $\Lambda$ in $\mathbb{R}^{W\times W}$ with the elements $\lambda_l$ on the diagonal, i.e., \[\Lambda_{ll}=\lambda_l\qquad \hbox{for each } l\in \{1,\ldots,W\}.\] Since each constraint is assumed to be active with strictly positive probability [cf.\ Assumption \ref{assm:io}], matrix $\Lambda$ is positive definite. We write $\bar \Lambda$ to denote the inverse of matrix $\Lambda$.
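The role of the inverse probabilities $1/\lambda_l$ can be seen coordinate-wise: if a coordinate is overwritten with probability $\lambda_l$ and kept otherwise, then weighting squared distances by $1/\lambda_l$ makes the conditional expectation split exactly into a ``new'' term plus a weighted-minus-unweighted ``old'' term, which is the mechanism behind the supermartingale construction. A toy numerical check (all numbers assumed for illustration):

```python
# Coordinate-wise check of the inverse-probability weighting trick
# (toy numbers, purely illustrative).  Suppose p_plus equals mu_new with
# probability lam and stays at p_old otherwise.  Then, exactly,
#   E[(1/lam)*(p_plus - mu)^2]
#     = (mu_new - mu)^2 + (1/lam)*(p_old - mu)^2 - (p_old - mu)^2.
lam = 0.3
p_old, mu_new, mu = 1.7, -0.4, 0.9

lhs = lam * (1 / lam) * (mu_new - mu) ** 2 \
    + (1 - lam) * (1 / lam) * (p_old - mu) ** 2
rhs = (mu_new - mu) ** 2 + (1 / lam) * (p_old - mu) ** 2 - (p_old - mu) ** 2
assert abs(lhs - rhs) < 1e-12
print(lhs)
```

Summing this identity over coordinates, with $\lambda_l$ on constraint $l$, is precisely how the $\bar\Lambda$-weighted quantities relate the asynchronous iterates to their full-information counterparts.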
Matrix $\bar \Lambda$ induces a {\it weighted vector norm} for $p$ in $\mathbb{R}^{W}$ as \[\norm{p}_{\bar \Lambda}^2 = p'\bar\Lambda p.\] We define a {\it weighted Lagrangian function} $\tilde L(x,z,\mu):\mathbb{R}^{nN}\times \mathbb{R}^W\times \mathbb{R}^W\to \mathbb{R}$ as \begin{equation}\label{eq:defTildeL}\tilde L(x,z,\mu)=\sum_{i=1}^N \frac{1}{\alpha_i}f_i(x_i)-\mu'\left(\sum_{i=1}^N \frac{1}{\alpha_i}D_ix+\sum_{l=1}^{W} \frac{1}{\lambda_l}H_lz\right).\end{equation} We use the symbol $\mathcal{J}_k$ to denote the filtration up to and including iteration $k$, which contains information on the random variables $\Phi^t$ and $\Psi^t$ for $t\leq k$. We have $\mathcal{J}_k\subset \mathcal{J}_{k+1}$ for all $k\geq 1$. The particular weights in the $\bar \Lambda$-norm and the weighted Lagrangian function are chosen to relate the expectation of the norm $\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-v)}^2_{\bar\Lambda}$ and the function $\tilde L(x^{k+1},z^{k+1},\mu)$ to $\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-v)}^2_{\bar\Lambda}$ and the function $\tilde L(x^{k},z^{k},\mu)$, as we will show in the following lemma. This relation will be used in Theorem \ref{thm:asyncDual} to show that the scalar sequence $\left\{\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-v)}^2_{\bar\Lambda}\right\}$ is a nonnegative supermartingale, and to establish almost sure convergence of the asynchronous ADMM algorithm. \begin{lemma}\label{lemma:martingale} Let $\{x^k,z^k, p^k\}$ be the sequence generated by the asynchronous ADMM algorithm (\ref{x2})-(\ref{eq:pUpdate2SplitGeneral}). Let $\{y^{k}, v^{k}, \mu^{k}\}$ be the sequence defined in Eqs.\ (\ref{eq:defy}), (\ref{eq:defv}), (\ref{eq:defMu}).
Then the following hold at each iteration $k$: \begin{align}\label{eq:exp1}\mathbb{E}&\left(\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-v)}^2_{\bar\Lambda}\bigg| \mathcal{J}_k\right)=\frac{1}{2\beta}\norm{\mu^{k+1}-\mu}^2\\ \nonumber &+\frac{\beta}{2} \norm{H(v^{k+1}-v)}^2 +\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-v)}^2_{\bar\Lambda}-\frac{1}{2\beta}\norm{p^{k}-\mu}^2-\frac{\beta}{2}\norm{H(z^{k}-v)}^2, \end{align} for all $\mu$ in $\mathbb{R}^W$ and $v$ in $Z$, and \begin{align}\label{eq:exp2} \mathbb{E}&\left(\tilde L(x^{k+1},z^{k+1},\mu)\bigg| \mathcal{J}_k\right)\nonumber\\&=\left(F(y^{k+1})-\mu'(Dy^{k+1}+Hv^{k+1})\right) +\tilde L(x^k,z^k,\mu)-\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right), \end{align} for all $\mu$ in $\mathbb{R}^W$. \end{lemma} \begin{proof} By the definition of $\lambda_l$ in Eq.\ (\ref{eq:defPi}), for each $l$, the element $p^{k+1}_l$ is either updated to $\mu^{k+1}_l$, with probability $\lambda_l$, or stays at its previous value $p_l^k$, with probability $1-\lambda_l$. Hence, we have the following expected value: \begin{align*}\mathbb{E}\left(\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}\bigg| \mathcal{J}_k\right)=& \sum_{l=1}^{W} \frac{1}{\lambda_l}\left[\lambda_l\left(\frac{1}{2\beta}\norm{\mu^{k+1}_l-\mu_l}^2\right)+(1-\lambda_l) \left(\frac{1}{2\beta}\norm{p_l^k-\mu_l}^2\right)\right]\\\nonumber =&\frac{1}{2\beta}\norm{\mu^{k+1}-\mu}^2 +\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}-\frac{1}{2\beta}\norm{p^{k}-\mu}^2, \end{align*} where the second equality follows from the definition of $\norm{\cdot}_{\bar\Lambda}$ and grouping the terms.
Similarly, $z^{k+1}_l$ is either equal to $v^{k+1}_l$ with probability $\lambda_l$ or $z^{k}_l$ with probability $1-\lambda_l$. Due to the diagonal structure of the $H$ matrix, the vector $H_lz$ has only one non-zero element, equal to $[Hz]_l$ at the $l^{th}$ position, and zeros elsewhere. Thus, we obtain \begin{align*}\mathbb{E}\left(\frac{\beta}{2}\norm{H(z^{k+1}-v)}^2_{\bar\Lambda}\bigg| \mathcal{J}_k\right)&=\sum_{l=1}^{W} \frac{1}{\lambda_l}\left[\lambda_l\left(\frac{\beta}{2}\norm{H_l(v^{k+1}-v)}^2\right)+(1-\lambda_l) \left(\frac{\beta}{2}\norm{H_l(z^k-v)}^2\right)\right] \\&=\frac{\beta}{2} \norm{H(v^{k+1}-v)}^2 +\frac{\beta}{2}\norm{H(z^{k}-v)}^2_{\bar\Lambda}-\frac{\beta}{2}\norm{H(z^{k}-v)}^2, \end{align*} where we used the definition of $\norm{\cdot}_{\bar\Lambda}$ once again. By summing the above two equations and using linearity of the expectation operator, we obtain Eq.\ (\ref{eq:exp1}). Using a similar line of argument, we observe that at iteration $k$, for each $i$, $x_i^{k+1}$ takes the value $y_i^{k+1}$ with probability $\alpha_i$ and its previous value $x_i^{k}$ with probability $1-\alpha_i$. The expectation of the function $\tilde L$ therefore satisfies \begin{align*} \mathbb{E}&\left(\tilde L(x^{k+1},z^{k+1},\mu)\bigg| \mathcal{J}_k\right)=\sum_{i=1}^N \frac{1}{\alpha_i}\left[\alpha_i\left( f_i(y^{k+1}_i)-\mu'D_i y^{k+1}\right)+(1-\alpha_i)\left( f_i(x^{k}_i)-\mu'D_i x^{k}\right)\right]\\ &-\sum_{l=1}^W \frac{1}{\lambda_l} \mu'\left[\lambda_l H_lv^{k+1}+(1-\lambda_l)H_lz^k\right]\\&=\left(\sum_{i=1}^N f_i(y_i^{k+1})-\mu'(Dy^{k+1}+Hv^{k+1})\right) +\tilde L(x^k,z^k,\mu)-\left(\sum_{i=1}^N f_i(x_i^{k})-\mu'(Dx^{k}+Hz^{k})\right), \end{align*} where we used the fact that $D=\sum_{i=1}^N D_i$. Using the definition $F(x) = \sum_{i=1}^N f_i(x_i)$ [cf.\ Eq.\ (\ref{eq:defF})], this shows Eq.\ (\ref{eq:exp2}).
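The coordinate-wise expectation used in the proof can be checked in exact arithmetic: for per-coordinate ``new'' values $a_l$, ``old'' values $b_l$ and activation probabilities $\lambda_l$, the weighted average $\sum_l \frac{1}{\lambda_l}\left[\lambda_l a_l + (1-\lambda_l) b_l\right]$ equals $\sum_l a_l + \sum_l \frac{b_l}{\lambda_l} - \sum_l b_l$, which is exactly the structure of Eq.\ (\ref{eq:exp1}). A hypothetical sketch (function names are ours):

```python
from fractions import Fraction

def conditional_expectation(a, b, lambdas):
    # Coordinate l is refreshed to a_l w.p. lambda_l, kept at b_l otherwise;
    # the 1/lambda_l factors are the barLambda weights.
    return sum((lam * a_l + (1 - lam) * b_l) / lam
               for a_l, b_l, lam in zip(a, b, lambdas))

def closed_form(a, b, lambdas):
    # "new" term + weighted "old" term - unweighted "old" term.
    return sum(a) + sum(b_l / lam for b_l, lam in zip(b, lambdas)) - sum(b)
```

Exact fractions make the identity hold with no rounding error.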
\end{proof} The next lemma builds on Lemma \ref{lemma:limitPoint} and establishes a sufficient condition for the sequence $\{x^k, z^k, p^k\}$ to converge to a saddle point of the Lagrangian. Theorem \ref{thm:asyncDual} will then show that this sufficient condition holds with probability 1 and thus the algorithm converges almost surely. \begin{lemma}\label{lemma:convergence} {Let $(x^*, z^*, p^*)$ be any saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}) and $\{x^k,z^k, p^k\}$ be the sequence generated by the asynchronous ADMM algorithm (\ref{x2})-(\ref{eq:pUpdate2SplitGeneral}). Along any sample path of $\Phi^k$ and $\Psi^k$, if the scalar sequence $\frac{1}{2\beta}\norm{p^{k+1}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-z^*)}^2_{\bar\Lambda}$ is convergent and the scalar sequence $\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right]$ converges to $0$, then the sequence $\{x^k, z^k, p^k\}$ converges to a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}).} \end{lemma} \begin{proof} Since the scalar sequence $\frac{1}{2\beta}\norm{p^{k+1}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-z^*)}^2_{\bar\Lambda}$ converges, matrix $\bar\Lambda$ is positive definite, and matrix $H$ is invertible [cf.\ Assumption \ref{assm:matrices}], it follows that the sequences $\{p^k\}$ and $\{z^k\}$ are bounded. Lemma \ref{lemma:limitPoint} then implies that the sequence $\{x^k, z^k, p^k\}$ has a limit point. We next show that the sequence $\{x^k, z^k, p^k\}$ has a unique limit point. Let $(\tilde x, \tilde z, \tilde p)$ be a limit point of the sequence $\{x^k, z^k, p^k\}$, i.e., the limit of the sequence $\{x^k, z^k, p^k\}$ along a subsequence $\kappa$. We first show that the components $\tilde z, \tilde p$ are uniquely defined.
By Lemma \ref{lemma:limitPoint}, the point $(\tilde x, \tilde z, \tilde p)$ is a saddle point of the Lagrangian function. Using the assumption of the lemma for $(p^*,z^*)=(\tilde p, \tilde z)$, this shows that the scalar sequence $\left\{\frac{1}{2\beta}\norm{p^{k+1}-\tilde p}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-\tilde z)}^2_{\bar\Lambda}\right\}$ is convergent. The limit of the sequence, therefore, is the same as the limit along any subsequence, implying \begin{eqnarray*} \lim_{k\to\infty}\frac{1}{2\beta}\norm{p^{k+1}-\tilde p}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-\tilde z)}^2_{\bar\Lambda} &=&\lim_{k\to\infty, k\in \kappa}\frac{1}{2\beta}\norm{p^{k+1}-\tilde p}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-\tilde z)}^2_{\bar\Lambda} \\ &=& \frac{1}{2\beta}\norm{\tilde p-\tilde p}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(\tilde z-\tilde z)}^2_{\bar\Lambda} = 0. \end{eqnarray*} Since matrix $\bar \Lambda$ is positive definite and matrix $H$ is invertible, this shows that $\lim_{k\to\infty}p^k=\tilde p$ and $\lim_{k\to\infty}z^k=\tilde z$. Next we prove that given $(\tilde z,\tilde p)$, the $x$ component of the saddle point is uniquely determined. By Lemma \ref{lemma:saddlePoint2}, we have $D\tilde x+H\tilde z=0$. Since matrix $D$ has full column rank [cf.\ Assumption \ref{assm:matrices}], the vector $\tilde x$ is uniquely determined by $\tilde x = -(D'D)^{-1}D'H\tilde z$. \end{proof} The next theorem establishes almost sure convergence of the asynchronous ADMM algorithm. Our analysis uses results related to supermartingales (interested readers are referred to \cite{Grimmett} and \cite{williamsMartingale} for a comprehensive treatment of the subject). \begin{theorem}\label{thm:asyncDual} Let $\{x^k,z^k, p^k\}$ be the sequence generated by the asynchronous ADMM algorithm (\ref{x2})-(\ref{eq:pUpdate2SplitGeneral}).
The sequence $\{x^k, z^k, p^k\}$ converges almost surely to a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}). \end{theorem} \begin{proof} We will show that the conditions of Lemma \ref{lemma:convergence} are satisfied almost surely. We will first focus on the scalar sequence $\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-v)}^2_{\bar\Lambda}$ and show that it is a nonnegative supermartingale. By the martingale convergence theorem, this shows that it converges almost surely. We next establish that the scalar sequence $\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right]$ converges to 0 almost surely by an argument similar to the one used to establish the Borel-Cantelli lemma. These two results imply that the set of events where $\frac{1}{2\beta}\norm{p^{k+1}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-z^*)}^2_{\bar\Lambda}$ is convergent and $\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right]$ converges to $0$ has probability $1$. Hence, by Lemma \ref{lemma:convergence}, the sequence $\{x^k, z^k, p^k\}$ converges to a saddle point of the Lagrangian function almost surely. We first show that the scalar sequence $\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-v)}^2_{\bar\Lambda}$ is a nonnegative supermartingale. Since it is a sum of two norms, it is immediately nonnegative. To see that it is a supermartingale, let the vectors $y^{k+1},v^{k+1},\mu^{k+1}$ and $r^{k+1}$ be those defined in Eqs.\ (\ref{eq:defy}), (\ref{eq:defv}), (\ref{eq:defMu}) and (\ref{eq:defr}). Recall that the symbol $\mathcal{J}_k$ denotes the filtration up to and including iteration $k$.
From Lemma \ref{lemma:martingale}, we have \begin{align*}&\mathbb{E}\left(\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-v)}^2_{\bar\Lambda}\bigg| \mathcal{J}_k\right)\\ \nonumber&\qquad =\frac{1}{2\beta}\norm{\mu^{k+1}-\mu}^2+\frac{\beta}{2} \norm{H(v^{k+1}-v)}^2 +\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-v)}^2_{\bar\Lambda}\\ \nonumber& \qquad \quad -\frac{1}{2\beta}\norm{p^{k}-\mu}^2-\frac{\beta}{2}\norm{H(z^{k}-v)}^2. \end{align*} Substituting $\mu=p^*$ and $v=z^*$ in the above expectation calculation and combining with the following inequality from Theorem \ref{thm:deterBounds}, \begin{align*} 0\geq &\frac{1}{2\beta}\left(\norm{\mu^{k+1}-p^*}^2-\norm{p^k-p^*}^2\right)+\frac{\beta}{2}\left(\norm{H(v^{k+1}-z^*)}^2-\norm{H(z^k-z^*)}^2\right)\\ &+\frac{\beta}{2}\norm{r^{k+1}}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2, \end{align*} we obtain \begin{align*}&\mathbb{E}\left(\frac{1}{2\beta}\norm{p^{k+1}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-z^*)}^2_{\bar\Lambda}\bigg| \mathcal{J}_k\right) \\&\leq\frac{1}{2\beta}\norm{p^{k}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-z^*)}^2_{\bar\Lambda} -\frac{\beta}{2}\norm{r^{k+1}}^2-\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2. \end{align*} Hence, the sequence $\frac{1}{2\beta}\norm{p^{k+1}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-z^*)}^2_{\bar\Lambda}$ is a nonnegative supermartingale in $k$ and by the martingale convergence theorem, it converges almost surely. We next establish that the scalar sequence $\left\{\frac{\beta}{2}\norm{r^{k+1}}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2\right\}$ converges to 0 almost surely.
Rearranging the terms {in the previous inequality} and taking iterated expectations with respect to the filtration $\mathcal{J}_k$, we obtain for all $T$ \begin{align}\label{ineq:expInner}\sum_{k=1}^T\mathbb{E}\left(\frac{\beta}{2}\norm{r^{k+1}}^2+\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2\right) \leq&\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\\\nonumber &-\mathbb{E}\left(\frac{1}{2\beta}\norm{p^{T+1}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{T+1}-z^*)}^2_{\bar\Lambda}\right)\\\nonumber \leq&\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}, \end{align} where the last inequality follows from relaxing the upper bound by dropping the non-positive expected value term. Thus, the sequence $\left\{\mathbb{E}\left(\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right]\right)\right\}$ is summable, implying \begin{equation}\label{eq:summable}\lim_{k\to \infty} \sum_{t=k}^\infty \mathbb{E}\left(\frac{\beta}{2}\left[\norm{r^{t+1}}^2+\norm{H(v^{t+1}-z^t)}^2\right]\right) = 0.\end{equation} By the Markov inequality, we have \[\mathbb{P}\left(\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right] \geq \epsilon\right)\leq \frac{1}{\epsilon}\mathbb{E}\left(\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right]\right),\] for any scalar $\epsilon>0$ and all iterations $k$.
Therefore, we have \begin{align*}\lim_{k\to\infty} \mathbb{P}&\left(\sup_{t\geq k}\frac{\beta}{2}\left[\norm{r^{t+1}}^2+\norm{H(v^{t+1}-z^t)}^2\right] \geq \epsilon\right)\\&=\lim_{k\to\infty}\mathbb{P}\left(\bigcup_{t=k}^\infty \left\{\frac{\beta}{2}\left[\norm{r^{t+1}}^2+\norm{H(v^{t+1}-z^t)}^2\right]\geq \epsilon\right\}\right)\\ &\leq \lim_{k\to \infty} \sum_{t=k}^\infty \mathbb{P}\left(\frac{\beta}{2}\left[\norm{r^{t+1}}^2+\norm{H(v^{t+1}-z^t)}^2\right] \geq \epsilon\right)\\ &\leq\lim_{k\to \infty} \frac{1}{\epsilon}\sum_{t=k}^\infty \mathbb{E}\left(\frac{\beta}{2}\left[\norm{r^{t+1}}^2+\norm{H(v^{t+1}-z^t)}^2\right]\right)=0,\end{align*} where the first inequality follows from the union bound, the second inequality follows from the preceding relation, and the last equality follows from Eq.\ (\ref{eq:summable}). This proves that the sequence $\frac{\beta}{2}\left[\norm{r^{k+1}}^2+\norm{H(v^{k+1}-z^k)}^2\right]$ converges to $0$ almost surely. \end{proof} We next analyze the convergence rate of the asynchronous ADMM algorithm. The rate analysis is done with respect to the time ergodic averages defined as $\bar x(T)$ in $\mathbb{R}^{nN}$, the time average of $x^k$ up to and including iteration $T$, i.e., \begin{equation}\label{eq:defbarX}\bar x_i(T)=\frac{\sum_{k=1}^T x_i^{k}}{T},\end{equation} for all $i=1,\ldots, N$,\footnote{Here the notation $\bar x_i(T)$ denotes the vector of length $n$ corresponding to agent $i$.} and $\bar z(T)$ in $\mathbb{R}^{W}$ as \begin{equation}\label{eq:defbarZ}\bar z_l(T)=\frac{\sum_{k=1}^T z_{l}^{k}}{T},\end{equation} for all $l=1, \ldots, W$. We next introduce scalars $Q(\mu)$, $\bar Q$, $\tilde L^0$ and a vector $\bar \theta$, all of which will be used to provide an upper bound on the constant term that appears in the rate analysis.
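For concreteness, the ergodic averages $\bar x(T)$ of Eq.\ (\ref{eq:defbarX}) can be computed with a single running sum. The sketch below is our own illustration (not the paper's code) and treats each iterate as a flat list of numbers:

```python
def ergodic_averages(iterates):
    """Given iterates x^1, x^2, ..., return bar x(T) = (1/T) sum_{k<=T} x^k for each T."""
    averages, running = [], None
    for T, x in enumerate(iterates, start=1):
        running = list(x) if running is None else [r + xi for r, xi in zip(running, x)]
        averages.append([r / T for r in running])
    return averages
```

Keeping the running sum avoids re-summing all previous iterates at every $T$.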
Scalar $Q(\mu)$ is defined by \begin{equation}\label{eq:defQ}Q(\mu)=\max_{x\in X, z\in Z}-\tilde L(x, z,\mu),\end{equation} which implies $Q(\mu)\geq -\tilde L(x^{k+1}, z^{k+1},\mu)$ for any realization of $\Psi^k$ and $\Phi^k$. For the rest of the section, we adopt the following assumption, which will be used to guarantee that the scalar $Q(\mu)$ is well defined and finite: \begin{assumption} \label{assm:compact} The sets $X$ and $Z$ are both compact. \end{assumption} Since the weighted Lagrangian function $\tilde L$ is continuous in $x$ and $z$ [cf.\ Eq.\ (\ref{eq:defTildeL})], and all iterates $(x^k, z^k)$ are in the compact set $X\times Z$, by the Weierstrass theorem the maximum in the preceding equality is attained and finite. Since the function $\tilde L$ is linear in $\mu$, the function $Q(\mu)$ is the maximum of linear functions and is thus convex and continuous in $\mu$. We define the scalar $\bar Q$ as \begin{equation}\label{eq:defBarQ} \bar Q=\max_{\mu=p^*-\alpha,\norm{\alpha}\leq 1} Q(\mu).\end{equation} The scalar $\bar Q<\infty$ exists once again by the Weierstrass theorem (maximization of a continuous function over a compact set). We define the vector $\bar\theta$ in $\mathbb{R}^W$ as \begin{equation}\label{eq:defbartheta}\bar\theta = p^* - \mathop{\rm argmax}_{\norm{u}\leq 1}\norm{p^0-(p^*-u)}_{\bar\Lambda}^2,\end{equation} where the maximizer exists by the Weierstrass theorem, since the set $\norm{u}\leq 1$ is compact and the function $\norm{p^0-(p^*-u)}_{\bar\Lambda}^2$ is continuous. The scalar $\tilde L^0$ is defined by \begin{equation}\label{eq:deftildeL0} \tilde L^0=\max_{\theta=p^*-\alpha, \norm{\alpha}\leq 1} \tilde L(x^0,z^0,\theta).\end{equation} This scalar is well defined because the constraint set is compact and the function $\tilde L$ is continuous in $\theta$.
\begin{theorem}\label{thm:rate} Let $\{x^k,z^k, p^k\}$ be the sequence generated by the asynchronous ADMM algorithm (\ref{x2})-(\ref{eq:pUpdate2SplitGeneral}) and $(x^*, z^*, p^*)$ be a saddle point of the Lagrangian function of problem (\ref{Multisplit-formulation}). Let the vectors $\bar x(T)$, $\bar z(T)$ be defined as in Eqs.\ (\ref{eq:defbarX}) and (\ref{eq:defbarZ}), the scalars $\bar Q$, $\tilde L^0$ and the vector $\bar \theta$ be defined as in Eqs.\ (\ref{eq:defBarQ}), (\ref{eq:defbartheta}) and (\ref{eq:deftildeL0}), and the function $\tilde L$ be defined as in Eq.\ (\ref{eq:defTildeL}). Then the following relations hold: \begin{align}\label{ineq:rateFeasiblity} \norm{\mathbb{E}(D \bar x(T)+H\bar z(T)) } \leq \frac{1}{T}\left[\bar Q +\tilde L^0 +\frac{1}{2\beta}\norm{p^{0}-\bar\theta}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right],\end{align} and \begin{align}\label{ineq:ratePrimal}\norm{\mathbb{E}(F(\bar x(T)))-F(x^*)} \leq& \frac{1}{T}\left[\bar Q +\tilde L^0 +\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right]\\\nonumber&+ \frac{\norm{p^*}_\infty}{T}\left[Q(p^*) +\tilde L(x^0,z^0,p^*) +\frac{1}{2\beta}\norm{p^{0}-\bar\theta}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].\end{align} \end{theorem} \begin{proof} The proof of the theorem relies on Lemma \ref{lemma:martingale} and Theorem \ref{thm:deterBounds}.
We combine these results with the law of iterated expectation, telescoping cancellation, and convexity of the function $F$ to establish the bound \begin{align}\label{ineq:rate2}\mathbb{E}\left[F(\bar x(T))\right.&\left.-\mu'(D \bar x(T)+H\bar z(T))\right]-F(x^*) \\ \nonumber&\leq \frac{1}{T}\left[Q(\mu) +\tilde L(x^0,z^0,\mu) +\frac{1}{2\beta}\norm{p^{0}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right], \end{align} for all $\mu$ in $\mathbb{R}^W$. Then by using different choices of the vector $\mu$, we obtain the desired results. We will first prove Eq.\ (\ref{ineq:rate2}). Recall Eq.\ (\ref{eq:exp2}): \begin{align*} \mathbb{E}&\left(\tilde L(x^{k+1},z^{k+1},\mu)\bigg| \mathcal{J}_k\right)\nonumber\\&=\left(F(y^{k+1})-\mu'(Dy^{k+1}+Hv^{k+1})\right) +\tilde L(x^k,z^k,\mu)-\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right). \end{align*} We rearrange Eq.\ (\ref{ineq:fValue}) from Theorem \ref{thm:deterBounds}, and obtain \begin{align*} F(y^{k+1})&-\mu'r^{k+1}\leq F(x^*)- \frac{1}{2\beta}\left(\norm{\mu^{k+1}-\mu}^2-\norm{p^k-\mu}^2\right)\\\nonumber&-\frac{\beta}{2}\left(\norm{H(v^{k+1}-z^*)}^2-\norm{H(z^k-z^*)}^2\right) -\frac{\beta}{2}\norm{r^{k+1}}^2-\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2.
\end{align*} Since $r^{k+1}=Dy^{k+1}+Hv^{k+1}$, we can apply this bound to the first term on the right-hand side of the preceding relation, which implies \begin{align*} \mathbb{E}&\left(\tilde L(x^{k+1},z^{k+1},\mu)\bigg| \mathcal{J}_k\right)\leq F(x^*)- \frac{1}{2\beta}\left(\norm{\mu^{k+1}-\mu}^2-\norm{p^k-\mu}^2\right)\\\nonumber&-\frac{\beta}{2}\left(\norm{H(v^{k+1}-z^*)}^2-\norm{H(z^k-z^*)}^2\right) -\frac{\beta}{2}\norm{r^{k+1}}^2-\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2\\ &+\tilde L(x^k,z^k,\mu)-\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right). \end{align*} Combining the above inequality with Eq.\ (\ref{eq:exp1}) and using the linearity of expectation, we have \begin{align*}\mathbb{E}&\left(\tilde L(x^{k+1},z^{k+1},\mu)+\frac{1}{2\beta}\norm{p^{k+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k+1}-z^*)}^2_{\bar\Lambda}\bigg| \mathcal{J}_k\right)\\ \leq& F(x^*) -\frac{\beta}{2}\norm{r^{k+1}}^2-\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2-\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right)\\ &+\tilde L(x^k,z^k,\mu) +\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-z^*)}^2_{\bar\Lambda}\\ \leq &F(x^*)-\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right)+\tilde L(x^k,z^k,\mu) +\frac{1}{2\beta}\norm{p^{k}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{k}-z^*)}^2_{\bar\Lambda}, \end{align*} where the last inequality follows from relaxing the upper bound by dropping the non-positive term $-\frac{\beta}{2}\norm{r^{k+1}}^2-\frac{\beta}{2}\norm{H(v^{k+1}-z^k)}^2$.
This relation holds for $k=1,\ldots, T$, and by the law of iterated expectation, the telescoping sum after term cancellation satisfies \begin{align}\label{ineq:rateExp}\mathbb{E}&\left(\tilde L(x^{T+1},z^{T+1},\mu)+\frac{1}{2\beta}\norm{p^{T+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{T+1}-z^*)}^2_{\bar\Lambda}\right) \leq TF(x^*)\\\nonumber &-\mathbb{E}\left[\sum_{k=1}^T\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right)\right]+\tilde L(x^0,z^0,\mu) +\frac{1}{2\beta}\norm{p^{0}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}. \end{align} By convexity of the functions $f_i$, we have \[\sum_{k=1}^T F(x^k) = \sum_{k=1}^T \sum_{i=1}^N f_i(x_i^k)\geq T \sum_{i=1}^N f_i(\bar x_i(T))=TF(\bar x(T)).\] The same result holds after taking expectations on both sides. By linearity of matrix-vector multiplication, we have $\sum_{k=1}^T Dx^k=TD\bar x(T),\ \sum_{k=1}^T Hz^k=TH\bar z(T).$ Relation (\ref{ineq:rateExp}) therefore implies that \begin{align*}T\mathbb{E}&\left[F(\bar x(T))-\mu'(D \bar x(T)+H\bar z(T))\right]-TF(x^*)\\ \leq& \mathbb{E}\left[\sum_{k=1}^T\left(F(x^{k})-\mu'(Dx^{k}+Hz^{k})\right)\right] -TF(x^*)\\ \leq&-\mathbb{E}\left(\tilde L(x^{T+1},z^{T+1},\mu)+\frac{1}{2\beta}\norm{p^{T+1}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{T+1}-z^*)}^2_{\bar\Lambda}\right) \\& +\tilde L(x^0,z^0,\mu) +\frac{1}{2\beta}\norm{p^{0}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}.
\end{align*} Using the definition of the scalar $Q(\mu)$ [cf.\ Eq.\ (\ref{eq:defQ})] and dropping the non-positive norm terms from the above upper bound, we obtain \begin{align*}T\mathbb{E}\left[F(\bar x(T))\right.&\left.-\mu'(D \bar x(T)+H\bar z(T))\right]-TF(x^*)\\& \leq Q(\mu) +\tilde L(x^0,z^0,\mu) +\frac{1}{2\beta}\norm{p^{0}-\mu}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}. \end{align*} We now divide both sides of the preceding inequality by $T$ and obtain Eq.\ (\ref{ineq:rate2}). We now use Eq.\ (\ref{ineq:rate2}) to first show that ${\norm{\mathbb{E}(D\bar x(T)+H\bar z(T))}}$ converges to 0 with rate $1/T$. For each iteration $T$, we define a vector $\theta(T)$ as $\theta(T) = p^*-\frac{\mathbb{E}(D\bar x(T)+H\bar z(T))}{\norm{\mathbb{E}(D\bar x(T)+H\bar z(T))}}$. By substituting $\mu=\theta(T)$ in Eq.\ (\ref{ineq:rate2}), we obtain for each $T$ \begin{align*} \mathbb{E}&\left[F(\bar x(T))-(\theta(T))'(D \bar x(T)+H\bar z(T))\right]-F(x^*) \\&\leq \frac{1}{T}\left[Q(\theta(T)) +\tilde L(x^0,z^0, \theta(T)) +\frac{1}{2\beta}\norm{p^{0}-\theta(T)}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].\end{align*} Since the vectors $\frac{\mathbb{E}(D\bar x(T)+H\bar z(T))}{\norm{\mathbb{E}(D\bar x(T)+H\bar z(T))}}$ all have norm $1$, and hence $\theta(T)$ lies in the unit ball centered at $p^*$, the definition of $\bar \theta$ yields $\norm{p^0-\theta(T)}_{\bar\Lambda}^2\leq \norm{p^0-\bar\theta}_{\bar\Lambda}^2$. Eqs.\ (\ref{eq:defBarQ}) and (\ref{eq:deftildeL0}) imply $Q(\theta(T))\leq \bar Q$ and $\tilde L(x^0, z^0, \theta(T))\leq \tilde L^0$ for all $T$.
Thus the above inequality implies that the following holds for all $T$: \begin{align*} \mathbb{E}(F(\bar x(T)))-(\theta(T))'\mathbb{E}(D \bar x(T)+H\bar z(T))-F(x^*) \leq \frac{1}{T}\left[\bar Q +\tilde L^0 +\frac{1}{2\beta}\norm{p^{0}-\bar\theta}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].\end{align*} From the definition of $\theta(T)$, we have $(\theta(T))'\mathbb{E}(D\bar x(T)+H\bar z(T)) = (p^*)'\mathbb{E}(D\bar x(T)+H\bar z(T)) - \norm{\mathbb{E}(D\bar x(T)+H\bar z(T))}$, and thus \begin{align*}\mathbb{E}(F(\bar x(T)))-(\theta(T))'\mathbb{E}(D \bar x(T)+H\bar z(T))-F(x^*) =&\mathbb{E}(F(\bar x(T)))-(p^*)'\mathbb{E}\left[(D \bar x(T)+H\bar z(T))\right]\\&-F(x^*)+\norm{\mathbb{E}(D \bar x(T)+H\bar z(T)) }.\end{align*} Since the point $(x^*, z^*, p^*)$ is a saddle point of the Lagrangian function, using Lemma \ref{lemma:saddlePoint2}, we have \begin{equation}\label{ineq:lag}0\le \mathbb{E}\, L(\bar x(T), \bar z(T), p^*) - L(x^*, z^*, p^*) = \mathbb{E}(F(\bar x(T))) - F(x^*)-(p^*)'\mathbb{E}\left[(D \bar x(T)+H\bar z(T))\right].\end{equation} The preceding three relations imply that \begin{align*} \norm{\mathbb{E}(D \bar x(T)+H\bar z(T)) } \leq \frac{1}{T}\left[\bar Q +\tilde L^0 +\frac{1}{2\beta}\norm{p^{0}-\bar\theta}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right],\end{align*} which shows the first desired inequality. To prove Eq.\ (\ref{ineq:ratePrimal}), we let $\mu=p^*$ in Eq.\ (\ref{ineq:rate2}) and obtain \begin{align*} \mathbb{E}(F(\bar x(T)))-&(p^*)'\mathbb{E}(D \bar x(T)+H\bar z(T))-F(x^*) \\&\leq \frac{1}{T}\left[Q(p^*) +\tilde L(x^0,z^0, p^*) +\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].
\end{align*} This inequality together with Eq.\ (\ref{ineq:lag}) implies \begin{align*}&\norm{\mathbb{E}(F(\bar x(T)))-(p^*)'\mathbb{E}(D \bar x(T)+H\bar z(T))-F(x^*)} \\ &\leq \frac{1}{T}\left[Q(p^*) +\tilde L(x^0,z^0, p^*) +\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].\end{align*} By the triangle inequality, we obtain \begin{align}\label{ineq:fNorm}\norm{\mathbb{E}(F(\bar x(T)))-F(x^*)} \leq& \frac{1}{T}\left[Q(p^*) +\tilde L(x^0,z^0,p^*) +\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right]\\\nonumber&+\norm{\mathbb{E}((p^*)'(D \bar x(T)+H\bar z(T)))}. \end{align} Using the definitions of the Euclidean and $l_\infty$ norms,\footnote{We use the standard notation that $\norm{x}_\infty = \max_i |x_i|$.} the last term $\norm{\mathbb{E}((p^*)'(D \bar x(T)+H\bar z(T)))}$ satisfies \begin{align*}\norm{\mathbb{E}((p^*)'(D \bar x(T)+H\bar z(T)))} &= \sqrt{\sum_{l=1}^W (p_l^*)^2[\mathbb{E}(D\bar x(T)+H\bar z(T))]_l^2} \\&\leq \sqrt{\sum_{l=1}^W \norm{p^*}_\infty^2[\mathbb{E}(D\bar x(T)+H\bar z(T))]_l^2} =\norm{p^*}_\infty\norm{\mathbb{E}(D\bar x(T)+H\bar z(T))}.\end{align*} The above inequality combined with Eq.\ (\ref{ineq:rateFeasiblity}) yields \[\norm{\mathbb{E}((p^*)'(D \bar x(T)+H\bar z(T)))} \leq \frac{\norm{p^*}_\infty}{T}\left[Q(p^*) +\tilde L(x^0,z^0,p^*) +\frac{1}{2\beta}\norm{p^{0}-\bar\theta}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].\] Hence, Eq.\ (\ref{ineq:fNorm}) implies \begin{align*}\norm{\mathbb{E}(F(\bar x(T)))-F(x^*)} \leq& \frac{1}{T}\left[\bar Q +\tilde L^0 +\frac{1}{2\beta}\norm{p^{0}-p^*}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right]\\\nonumber&+
\frac{\norm{p^*}_\infty}{T}\left[Q(p^*) +\tilde L(x^0,z^0,p^*) +\frac{1}{2\beta}\norm{p^{0}-\bar\theta}^2_{\bar\Lambda}+\frac{\beta}{2}\norm{H(z^{0}-z^*)}^2_{\bar\Lambda}\right].\end{align*} Thus we have established the desired relation (\ref{ineq:ratePrimal}). \end{proof} We remark that by Jensen's inequality and convexity of the function $F$, we have $F(\mathbb{E}(\bar x(T)))\leq \mathbb{E}(F(\bar x(T))),$ and the preceding results also hold when we replace $\mathbb{E}(F(\bar x(T)))$ by $F(\mathbb{E}(\bar x(T)))$. \section{Conclusions}\label{sec:con} We developed a fully asynchronous ADMM-based algorithm for a convex optimization problem with a separable objective function and linear constraints. This problem is motivated by distributed multi-agent optimization problems in which a (static) network of agents, each with access to a privately known local objective function, seeks to optimize the sum of these functions using computations based on local information and communication with neighbors. We showed that this algorithm converges almost surely to an optimal solution. Moreover, the rate of convergence of the objective function values and feasibility violation is $O(1/k)$. Future work includes investigating network effects (e.g., effects of communication noise and quantization) and time-varying network topologies on the performance of the algorithm. \end{document}
\begin{document} \title{Nijenhuis geometry II: Left-symmetric algebras and linearization problem for Nijenhuis operators} \section{Introduction} The Nijenhuis torsion \cite{nijenhuis} of an operator field $R$ on a manifold is a tensor defined on a pair of vector fields $v, w$ as \begin{equation}\label{torsion} \mathcal N_R(v, w) = R[Rv, w] + R[v, Rw] - R^2[v, w] - [Rv, Rw]. \end{equation} Here $[\, , \,]$ stands for the standard commutator of vector fields. The vanishing of the Nijenhuis torsion \eqref{torsion} in local coordinates $x^1, \dots, x^n$ is equivalent to the following system of partial differential equations on the components $R^{\alpha}_i$ of $R$: for $i \neq j$ and $1 \leq i, j, k \leq n$ \begin{equation}\label{torsion2} \pd{R^{\alpha}_i}{x^j} R_{\alpha}^k - \pd{R^{\alpha}_j}{x^i} R_{\alpha}^k -\pd{R^k_i}{x^{\alpha}} R^{\alpha}_j + \pd{R^k_j}{x^{\alpha}} R^{\alpha}_i = 0. \end{equation} An operator field $R$ with vanishing Nijenhuis torsion is called a \textbf{Nijenhuis operator}. Nijenhuis operators play an important role in finite and infinite dimensional integrable systems \cite{kossman}, \cite{mok2}, almost complex structures \cite{newlander} and projectively equivalent metrics \cite{proj}. The point $\mathrm{p}$ is \textbf{algebraically generic} if the Segre characteristic of $R$ at $\mathrm{p}$ is the same as the Segre characteristic of $R$ at all points in a sufficiently small neighbourhood of $\mathrm{p}$. If a point is not algebraically generic, then it is \textbf{singular}. The point $\mathrm{p}$ is \textbf{a singular point of scalar type} if $R = \lambda \operatorname{Id}$ for some constant $\lambda$ at this point.
The following Splitting Theorem \cite{bmk} plays an important role in the study of the local geometry of Nijenhuis operators: \begin{theor} Assume that the spectrum of a Nijenhuis operator $R$ at a point ${\mathrm{p}}$ consists of $k$ real (distinct) eigenvalues $\lambda_1,\dots, \lambda_k$ with multiplicities $\mathsf{m}_1,\dots, \mathsf{m}_k$ respectively and $s$ pairs of complex (non-real) conjugate eigenvalues $\mu_1,\bar \mu_1, \dots ,\mu_s,\bar \mu_s$ of multiplicities $\mathsf{l}_1,\dots, \mathsf{l}_s$. Then in a neighborhood of ${\mathrm{p}}$ there exists a local coordinate system $$ x_1=(x_{1}^1\dots x_{1}^{\mathsf{m}_1}), \dots , x_k=(x_{k}^1 \dots x_{k}^{\mathsf{m}_k}),\quad u_1=(u_{1}^1 \dots u_{1}^{2\mathsf{l}_1}), \dots , u_s=(u_{s}^1 \dots u_{s}^{2\mathsf{l}_s}), $$ in which $R$ takes the following block-diagonal form $$ R = \begin{pmatrix} Q_1(x_1)& & & & & \\ &\ddots & & & &\\ & & Q_k(x_k)& & & \\ & & & Q^{\mathbb C}_1 (u_1)& & \\ & & & & \ddots & \\ & & & & & Q^{\mathbb C}_s (u_s) \end{pmatrix} $$ where each block depends on its own group of variables and is a Nijenhuis operator w.r.t.\ these variables. \end{theor} This theorem reduces the study of Nijenhuis operators around an arbitrary point to the study of Nijenhuis operators around a point where all eigenvalues of the given operator coincide. The singular points of scalar type are the most singular among such points. In the present paper we follow the Nijenhuis geometry programme formulated in \cite{bmk} and study the local properties of Nijenhuis operators in a neighbourhood of a point of scalar type. Recall that all of the classical results about Nijenhuis operators \cite{nijenhuis}, \cite{haantjes}, \cite{thompson}, \cite{newlander} were obtained for algebraically generic points, often with extra regularity conditions. Before formulating the results of the paper we need a couple of definitions.
Let $\mathfrak a$ be an algebra of dimension $n$ over $\mathbb R$ with multiplication $\star$. The associator $\mathcal A$ is a trilinear operation on $\mathfrak a$, defined on an arbitrary triple $\xi, \eta, \zeta \in \mathfrak a$ as $\mathcal A(\xi, \eta, \zeta) = (\xi\star\eta)\star\zeta - \xi\star(\eta\star\zeta)$. The associator vanishes identically if and only if $\mathfrak a$ is associative. An algebra $\mathfrak a$ is called \textbf{left-symmetric or LSA} if its associator is symmetric in the first two arguments, that is, \begin{equation}\label{lsa} \mathcal A (\xi, \eta , \zeta) = \mathcal A (\eta, \xi, \zeta), \quad \forall \xi, \eta, \zeta \in \mathfrak a. \end{equation} The main property of these algebras is that the commutator $[\xi, \eta] = \xi\star\eta - \eta\star\xi$ defines a Lie algebra structure on $\mathfrak a$. We call this Lie algebra \textbf{the associated Lie algebra}. Other names for these algebras that appear in the literature are pre-Lie algebras and Vinberg--Koszul algebras. These algebras were introduced by Vinberg \cite{vinberg} in his study of homogeneous cones. Later they appeared in different frameworks of geometry, integrable systems and quantum mechanics (see \cite{burde} for an overview of the subject). In particular, the Novikov algebras, which play an important role in the theory of Poisson brackets of hydrodynamic type, are left-symmetric. The left-adjoint action $L_{\xi}$ and the right-adjoint action $R_{\xi}$ of $\mathfrak a$ on itself are defined as usual: $L_{\xi} \eta = \xi \star \eta = R_{\eta} \xi$. In terms of the left-adjoint action the condition \eqref{lsa} can be written as \begin{equation}\label{left} L_{\xi} L_{\eta} - L_{\eta} L_{\xi} = L_{[\xi, \eta]}. \end{equation} Assume now that $\mathfrak a$ is an arbitrary finite dimensional algebra over $\mathbb R$ of dimension $n$. Every such algebra has a natural structure of a smooth $n$-dimensional affine manifold.
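Because the associator is trilinear, the left-symmetry condition $\mathcal A(\xi,\eta,\zeta)=\mathcal A(\eta,\xi,\zeta)$ only needs to be checked on basis triples. The following sketch is our own illustration (not from the paper), with the assumed structure-constant convention that $a[i][j][k]$ is the $k$-th component of $\eta_i\star\eta_j$:

```python
from itertools import product

# Our own illustrative check. Convention (assumed):
# a[i][j][k] is the k-th component of the product eta_i * eta_j.
def multiply(a, x, y):
    n = len(a)
    return [sum(a[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

def associator(a, x, y, z):
    lhs = multiply(a, multiply(a, x, y), z)   # (x*y)*z
    rhs = multiply(a, x, multiply(a, y, z))   # x*(y*z)
    return [p - q for p, q in zip(lhs, rhs)]

def is_left_symmetric(a):
    # By trilinearity it suffices to test all basis triples.
    n = len(a)
    basis = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    return all(associator(a, e, f, g) == associator(a, f, e, g)
               for e, f, g in product(basis, repeat=3))
```

For instance, the two-dimensional algebra with $\eta_i\star\eta_i=\eta_i$ and all other products zero passes the check, while an algebra with the single nonzero product $\eta_1\star\eta_2=\eta_1$ fails it.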
We denote this manifold by the same symbol as the algebra itself. Consider a point $\eta$ of this manifold. The tangent space at $\eta$ is naturally identified with $\mathfrak a$ itself. Thus one can define a tensor field $R$ of type $(1, 1)$ as follows: $R$ acts on an element $\xi$ of the tangent space $T_{\eta} \mathfrak a$ by the right-adjoint action, that is, $R_{\eta} \xi = \xi\star\eta$. Now fix a basis $\eta_i$ in $\mathfrak a$ and denote by $a_{ij}^k$ the structure constants of $\mathfrak a$. The basis defines a natural coordinate system on $\mathfrak a$; we denote the corresponding coordinates by $x^i$, that is, $\eta = x^i \eta_i$. The components of $R_{\eta}$ are $(R_{\eta})^k_i = a^k_{ij} x^j$. In particular, the entries are homogeneous linear functions of $x^i$. Thus, the right-adjoint action of $\mathfrak a$ on itself induces an operator field with homogeneous linear components. We call the operator field $R_{\eta}$ \textbf{the right-adjoint operator of $\mathfrak a$}. In a similar fashion one constructs \textbf{the left-adjoint operator of $\mathfrak a$}; we denote it, like the left action, by $L_{\eta}$. Conversely, assume one has an operator field $R_{\eta}$ on a real affine space $\mathfrak a$ with given coordinates $x^i$ whose entries are homogeneous linear functions (we call such operator fields \textbf{linear operator fields}). Then $\mathfrak a$ carries a natural structure of an algebra over $\mathbb R$ with structure constants $a^k_{ij} = \pd{R^k_i}{x^j}$. In this construction $R$ becomes the right-adjoint operator of $\mathfrak a$. This yields a natural bijection between real algebras and linear operator fields on real affine spaces. We call a linear operator field on an affine space with vanishing Nijenhuis torsion a \textbf{linear Nijenhuis operator}. The following proposition establishes the relation between linear Nijenhuis operators and left-symmetric algebras. For the sake of simplicity we omit the subscript $\eta$ in $R_{\eta}, L_{\eta}$ and write simply $R, L$.
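The correspondence between linear operator fields and structure constants can be illustrated by a short symbolic computation. The sketch below (an illustration, not part of the paper) recovers the structure constants $a^k_{ij} = \pd{R^k_i}{x^j}$ from the diagonal operator field of Example \ref{dir} in dimension two:

```python
import sympy as sp

# Coordinates on a 2-dimensional real affine space.
x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]

# A linear operator field: the diagonal example R = diag(x^1, x^2).
R = sp.Matrix([[x1, 0], [0, x2]])

# Structure constants of the corresponding algebra: a^k_{ij} = dR^k_i/dx^j.
n = 2
a = {(i, j, k): sp.diff(R[k, i], xs[j])
     for i in range(n) for j in range(n) for k in range(n)}

# For the diagonal field a^k_{ij} = 1 iff i = j = k, i.e. a direct sum
# of one-dimensional algebras with eta * eta = eta.
print({key: val for key, val in a.items() if val != 0})
```

Running this prints only the constants with $i = j = k$, matching the direct-sum description in Example \ref{dir}.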
\begin{prop}\label{main1}(\cite{winterhalder}, Theorem 3.1, Remark 3.1) Let $\mathfrak a$ be an algebra over $\mathbb R$ of dimension $n$. The following conditions are equivalent: \begin{enumerate} \item $\mathfrak a$ is a left-symmetric algebra; \item the right-adjoint operator of $\mathfrak a$ is a Nijenhuis operator. \end{enumerate} \end{prop} Note also that the left-adjoint operator $L$ is not a Nijenhuis operator in general. \begin{example}\label{dir} Consider the linear Nijenhuis operator \begin{equation*} R = \left(\begin{array}{cccc} x^1 & 0 & \dots & 0 \\ 0 & x^2 & \dots & 0 \\ & & \ddots & \\ 0 & 0 & \dots & x^n \\ \end{array}\right). \end{equation*} The corresponding left-symmetric algebra $\mathfrak a$ in the corresponding basis $\eta_1, \dots, \eta_n$ has structure constants \begin{equation*} \begin{aligned} a^k_{ij} = \begin{cases} 1, \quad i = j = k, \\ 0, \quad \text{\rm{otherwise}}. \end{cases} \end{aligned} \end{equation*} The Lie algebra associated to $\mathfrak a$ is abelian, and $\mathfrak a$ is a direct sum of one-dimensional left-symmetric algebras with the operation $\star$ defined by $\eta \star \eta = \eta$. \end{example} Now we proceed to the results of the paper. First, we classify all real left-symmetric algebras in dimension two. Until now there has been only a partial (the $\mathfrak b$-series in our terminology) classification by Burde (Proposition 4.1, \cite{burde2}) of the left-symmetric algebras over $\mathbb C$. In our classification we adopt notation similar to that of \cite{burde2}. For $\mathfrak b_2, \mathfrak b^+_4, \mathfrak b^-_4$ we use a slightly different basis, and $\mathfrak b^+_4, \mathfrak b^-_4$ have the same complexification, denoted in \cite{burde2} as $\mathfrak b_4$.
\begin{theor}\label{class} Up to isomorphism there are two continuous families and $10$ exceptional two-dimensional real left-symmetric algebras. The complete list of normal forms is presented in Tables 1 and 2 below. The tables contain \begin{itemize} \item all non-zero structure relations for a given basis $\eta_1, \eta_2$; \item the right-adjoint operator of $\mathfrak a$ in the coordinates $x, y$ associated with the basis $\eta_1, \eta_2$, denoted by $R$; \item the left-adjoint operator of $\mathfrak a$ in the coordinates $x, y$ associated with the basis $\eta_1, \eta_2$, denoted by $L$. \end{itemize} Recall that in dimension two there are, up to isomorphism, only two Lie algebras. The letter $\mathfrak b$ stands for the algebras with non-abelian associated Lie algebra and $\mathfrak c$ for the algebras with abelian associated Lie algebra. \begin{equation*} \begin{aligned} & Table \, 1: \text{LSA in dimension two with non-commutative associated Lie algebra} \\ & \begin{tabular}{|l|l|l|l|} \hline Name & Structure relations & $L$ & $R$ \\ \hline $\mathfrak b_{1, \alpha}$ & $\begin{array}{ll} & \eta_2\star\eta_1 = \eta_1,\\ & \eta_2\star\eta_2 = \alpha \eta_2\\ \end{array}$ & $\left(\begin{array}{cc} y & 0 \\ 0 & \alpha y\\ \end{array}\right)$ & $\left(\begin{array}{cc} 0 & x \\ 0 & \alpha y\\ \end{array}\right)$ \\ \hline $\begin{array}{ll} & \mathfrak b_{2, \beta} \\ & \beta \neq 0\\ \end{array}$ & $\begin{array}{ll} & \eta_1\star\eta_2 = \eta_1, \\& \eta_2\star\eta_1 = \big(1 - \frac{1}{\beta}\big) \eta_1 \\ & \eta_2\star\eta_2 = \eta_2 \end{array}$ & $\left(\begin{array}{cc} \big(1 - \frac{1}{\beta}\big) y & x \\ 0 & y\\ \end{array}\right)$ & $\left(\begin{array}{cc} y & \big(1 - \frac{1}{\beta}\big) x \\ 0 & y\\ \end{array}\right)$\\ \hline
$\mathfrak b_3$ & $\begin{array}{ll} & \eta_2\star\eta_1 = \eta_1, \\ & \eta_2\star\eta_2 = \eta_1 + \eta_2\\ \end{array}$ & $\left(\begin{array}{cc} y & y \\ 0 & y\\ \end{array}\right)$ & $\left(\begin{array}{cc} 0 & x + y \\ 0 & y\\ \end{array}\right)$ \\ \hline $\mathfrak b_4^{+}$ & $\begin{array}{ll} & \eta_1\star\eta_1 = \eta_2, \\ & \eta_2\star\eta_1 = - \eta_1\\ & \eta_2\star\eta_2 = - 2 \eta_2 \end{array}$ & $\left(\begin{array}{cc} - y & 0 \\ x & - 2y\\ \end{array}\right)$ & $\left(\begin{array}{cc} 0 & -x \\ x & - 2y\\ \end{array}\right)$ \\ \hline $\mathfrak b_4^{-}$ & $\begin{array}{ll} & \eta_1\star\eta_1 = - \eta_2, \\ & \eta_2\star\eta_1 = - \eta_1\\ & \eta_2\star\eta_2 = - 2 \eta_2 \end{array}$ & $\left(\begin{array}{cc} - y & 0 \\ - x & - 2y\\ \end{array}\right)$ & $\left(\begin{array}{cc} 0 & - x \\ - x & -2y\\ \end{array}\right)$ \\ \hline $\mathfrak b_5$ & $\begin{array}{ll} & \eta_1\star\eta_2 = \eta_1, \\ & \eta_2\star\eta_2 = \eta_1 + \eta_2\\ \end{array}$ & $\left(\begin{array}{cc} 0 & x + y \\ 0 & y\\ \end{array}\right)$ & $\left(\begin{array}{cc} y & y \\ 0 & y\\ \end{array}\right)$ \\ \hline \end{tabular} \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} & Table \, 2: \text{LSA in dimension two with commutative associated Lie algebra} \\ & \begin{tabular}{|l|l|l|} \hline Name & Structure relations & $L = R$\\ \hline $\mathfrak c_1$ & & $\left(\begin{array}{cc} 0 & 0 \\ 0 & 0\\ \end{array}\right)$ \\ \hline $\mathfrak c_2$ & $\eta_2\star\eta_2 = \eta_2$ & $\left(\begin{array}{cc} 0 & 0 \\ 0 & y\\ \end{array}\right)$ \\ \hline $\mathfrak c_3$ & $\eta_2\star\eta_2 = \eta_1 $ & $\left(\begin{array}{cc} 0 & y \\ 0 & 0\\ \end{array}\right)$ \\ \hline $\mathfrak c_4$ & $\begin{aligned}&
\eta_2\star\eta_2 = \eta_2\\ & \eta_2\star\eta_1 = \eta_1 \\& \eta_1\star\eta_2 = \eta_1 \\ \end{aligned}$ & $\left(\begin{array}{cc} y & x \\ 0 & y\\ \end{array}\right)$ \\ \hline $\mathfrak c_5^{+}$ & $\begin{aligned}& \eta_2\star\eta_2 = \eta_2\\ & \eta_2\star\eta_1 = \eta_1 \\& \eta_1\star\eta_2 = \eta_1 \\ & \eta_1\star\eta_1 = \eta_2\\ \end{aligned}$ & $\left(\begin{array}{cc} y & x \\ x & y\\ \end{array}\right)$ \\ \hline $\mathfrak c_5^{-}$ & $\begin{aligned}& \eta_2\star\eta_2 = \eta_2\\ & \eta_2\star\eta_1 = \eta_1 \\& \eta_1\star\eta_2 = \eta_1 \\ &\eta_1\star\eta_1 = - \eta_2\\ \end{aligned}$ & $\left(\begin{array}{cc} y & x \\ - x & y\\ \end{array}\right)$ \\ \hline \end{tabular} \end{aligned} \end{equation*} \end{theor} Fix a singular point $\mathrm{p}$ of scalar type of a Nijenhuis operator $R$ on a manifold $M^n$ and coordinates $x^1, \dots, x^n$ with the coordinate origin at $\mathrm{p}$. Consider a pair of vectors $v_{\mathrm{p}}, w_{\mathrm{p}} \in T_{\mathrm{p}} M^n$. Denote by $v, w$ arbitrary smooth continuations of $v_{\mathrm{p}}, w_{\mathrm{p}}$ to a neighbourhood of ${\mathrm{p}}$. Define the following operation: \begin{equation}\label{isotropy} v_{\mathrm{p}} \star w_{\mathrm{p}} = \big ([Rv, w] - R [v, w] \big)\vert_{\mathrm{p}}. \end{equation} The following proposition is the second result of the paper: \begin{prop}\label{main2} Let $R$ be a Nijenhuis operator and ${\mathrm{p}}$ a singular point of scalar type. Then \begin{enumerate} \item the definition of $\star$ does not depend on the continuations $v, w$ and defines an algebra structure on $T_{\mathrm{p}} M^n$; \item the corresponding algebra is left-symmetric; \item in the natural basis of $T_{\mathrm{p}} M^n$ associated with the coordinates $x^1, \dots, x^n$, the structure constants of this algebra are $a^k_{ij} = \pd{R^k_i}{x^j}\vert_{\mathrm{p}}$.
\end{enumerate} \end{prop} The corresponding algebra $\mathfrak a$ is called \textbf{the isotropy algebra at ${\mathrm{p}}$}. Note that if $R$ is Nijenhuis, then $R - \lambda \operatorname{Id}$ for a constant $\lambda$ is also Nijenhuis. Moreover, if ${\mathrm{p}}$ is a singular point of scalar type for $R$, then ${\mathrm{p}}$ is a singular point of scalar type for $R - \lambda \operatorname{Id}$, and in the given coordinates the isotropy algebras coincide. We will therefore often assume that $R$ vanishes at ${\mathrm{p}}$. The tangent space $T_{\mathrm{p}} M^n$ has a natural structure of an affine space, and by Proposition \ref{main1} the algebra $\mathfrak a$ defines a natural Nijenhuis operator on it --- the right-adjoint operator. Thus we have two Nijenhuis operators: one on the tangent space $T_{\mathrm{p}} M^n$ and the other on the manifold, defined in a neighbourhood of $\mathrm{p}$. W.l.o.g. assume that $R$ vanishes at ${\mathrm{p}}$ and consider the Taylor expansion of $R$ at ${\mathrm{p}}$: $R_1 + R_2 + \dots$, where the entries of $R_i$ are homogeneous polynomials of degree $i$. By Proposition \ref{main2} the first term of the expansion $R_1$ in the given coordinates is $$ (R_1)^k_i = \pd{R^k_i}{x^j}\vert_{\mathrm{p}}\, x^j. $$ The term $R_1$ is not an operator field on $M^n$ --- it does not transform correctly under coordinate changes. But in every coordinate system it is the result of a simple lifting of the right-adjoint operator from $T_{\mathrm{p}} M^n$ onto a neighbourhood of ${\mathrm{p}}$, obtained by replacing the affine coordinates on $T_{\mathrm{p}} M^n$ with the coordinates on $M^n$. So a natural question arises: is there a coordinate system in which this lifting yields the entire $R$? In other words, is there a coordinate change after which $R$ coincides with $R_1$ on an entire neighbourhood of ${\mathrm{p}}$? This is \textbf{the linearization problem for Nijenhuis operators}.
Following the terminology of Weinstein \cite{weinstein}, \cite{weinstein2}, we call a left-symmetric algebra $\mathfrak a$ \textbf{non-degenerate} if the following property holds: if the isotropy algebra of a Nijenhuis operator $R$ at a singular point of scalar type ${\mathrm{p}}$ is isomorphic to $\mathfrak a$, then there exists a linearizing coordinate change. In \cite{bmk} it is proved that the left-symmetric algebra described in Example \ref{dir} is non-degenerate in both the formal and the analytic category. In this work we give the complete classification of real two-dimensional left-symmetric algebras in terms of non-degeneracy in the smooth category and an incomplete classification in the real-analytic category (this is the third result of our paper). Introduce the following subsets of $\mathbb R$: $$ \Sigma_0 = \{0\}, \quad \Sigma_1 = \{r \mid r \in \mathbb N,\ r \geq 3\}, \quad \Sigma_2 = \{\alpha \mid \alpha \in \mathbb R,\ \alpha < 0\}, \quad \Sigma_3 = \Big\{\frac{1}{m} \,\Big\vert\, m \in \mathbb N,\ m \geq 2\Big\} $$ and denote $\Sigma_{\mathrm{sm}} = \Sigma_0 \cup \Sigma_1 \cup \Sigma_2 \cup \Sigma_3$.
The following theorem holds: \begin{theor}\label{themain1} The following table provides the complete classification of two-dimensional left-symmetric algebras in terms of non-degeneracy in the smooth category: \begin{equation*} \begin{aligned} & Table \, 3: \text{Classification of LSA in dimension two in the smooth category} \\ & \begin{tabular}{|c|c|} \hline Degenerate LSA & Non-degenerate LSA \\ \hline $\begin{array}{c}\mathfrak c_1, \mathfrak c_2, \mathfrak c_3, \mathfrak c_4, \\ \mathfrak b_5, \mathfrak b_{2, \beta}\\ \mathfrak b_{1, \alpha} \, \text{\rm{for}} \, \alpha \in \Sigma_{\mathrm{sm}} \end{array}$ & $\begin{array}{c} \mathfrak b^+_4, \mathfrak b^-_4, \mathfrak c^+_5, \mathfrak c^-_5 \\ \mathfrak b_3, \mathfrak b_{1, \alpha} \, \text{\rm{for}} \, \alpha \notin \Sigma_{\mathrm{sm}}\end{array}$ \\ \hline \end{tabular}\\ \end{aligned} \end{equation*} \end{theor} Let $[q_0, q_1, q_2, \dots]$ be the expansion of an irrational number $\alpha$ into a continued fraction. If the series \begin{equation*} B(\alpha) = \sum \limits_{i = 0}^{\infty}\frac{\log q_{i + 1}}{q_i} \end{equation*} converges, then $\alpha$ is a \textbf{Brjuno number}. We denote by $\Omega$ \textbf{the set of negative Brjuno numbers}, and $\Sigma_{\mathrm{u}}$ is the set of negative irrational numbers that are not Brjuno numbers. Note that the Lebesgue measure of $\Sigma_{\mathrm{u}}$ is zero. Define $\widehat{\Sigma}_2 = \{- \frac{p}{q} \mid p, q \in \mathbb N\}$ and consider $\Sigma_{\mathrm{an}} = \Sigma_0 \cup \Sigma_1 \cup \widehat{\Sigma}_2 \cup \Sigma_3$. The following theorem holds.
\begin{theor}\label{themain2} The following table provides the (incomplete) classification of two-dimensional left-symmetric algebras in terms of non-degeneracy in the analytic category: \begin{equation*} \begin{aligned} & Table \, 4: \text{Classification of LSA in dimension two in the analytic category} \\ & \begin{tabular}{|c|c|c|} \hline Degenerate LSA & Non-degenerate LSA & Unknown \\ \hline $\begin{array}{c} \mathfrak c_1, \mathfrak c_2, \mathfrak c_3, \mathfrak c_4, \\ \mathfrak b_5, \mathfrak b_{2, \beta}, \mathfrak b_{1, \alpha}\, \text{for} \, \alpha \in \Sigma_{\mathrm{an}} \end{array}$ & $\begin{array}{c} \mathfrak b^+_4, \mathfrak b^-_4, \mathfrak c^+_5, \mathfrak c^-_5 \\ \mathfrak b_3, \mathfrak b_{1, \alpha} \, \text{for} \, \alpha \notin \Sigma_{\mathrm{an}} \cup \Sigma_{\mathrm{u}} \end{array}$ & $\begin{array}{c} \mathfrak b_{1, \alpha} \, \text{for} \, \alpha \in \Sigma_{\mathrm{u}} \end{array}$ \\ \hline \end{tabular} \\ \end{aligned} \end{equation*} \end{theor} The set $\Sigma_{\mathrm{u}}$ has an interesting history, directly related to the linearization problem for $R$ in dimension two. Consider an analytic vector field $v$ with a critical point ${\mathrm{p}}$ and denote by $\lambda_1, \lambda_2$ the eigenvalues of the linearization operator at ${\mathrm{p}}$. We are interested in the linearization problem for such a vector field (see Appendix A for details). If $\frac{\lambda_1}{\lambda_2}$ is a negative rational number, or $\frac{\lambda_1}{\lambda_2} = r$ or $\frac{1}{r}$ for $2 \leq r \in \mathbb N$, then there is no formal linearization and, thus, no analytic one. This is a classical result due to a variety of authors, including Poincar\'e himself (see Theorem \ref{ilyash}). If $\frac{\lambda_1}{\lambda_2}> 0$ and $\frac{\lambda_1}{\lambda_2} \neq r, \frac{1}{r}$ for $2 \leq r \in \mathbb N$, then there exists an analytic linearizing coordinate change (see Theorem \ref{poincare}).
For $\frac{\lambda_1}{\lambda_2} \in \Omega$ the linearization problem was solved by Brjuno (see Theorem \ref{brjn}). The question remained open only for $\frac{\lambda_1}{\lambda_2} \in \Sigma_{\mathrm{u}}$. In the late 1980s -- early 1990s this gap was closed by Yoccoz \cite{yoccoz1}, \cite{yoccoz2}, who showed that for every $\alpha \in \Sigma_{\mathrm{u}}$ there exists an analytic vector field with $\frac{\lambda_1}{\lambda_2} = \alpha$ such that among all formal linearizing coordinate changes there are no convergent ones. Thus there is no analytic linearizing coordinate change for such vector fields. In our case the only ``troublesome'' algebra is $\mathfrak b_{1, \alpha}$. At some point the linearization problem for this algebra is reduced to the linearization (see Lemma \ref{prenormal}) of a certain vector field $v$ in the special form $v = (f(x, y), y)^T$ (we write vector fields as transposed rows). The linearizing coordinate change must also be of a special form: $\bar x = g(x, y), \bar y = y$. We use the results mentioned above either to provide an example of a Nijenhuis operator without a linearizing coordinate change or to prove the existence of such a linearization. Unfortunately, for $\alpha \in \Sigma_{\mathrm{u}}$ the ``bad'' analytic vector field that Yoccoz constructs in his work is obtained from a complex one, and thus it is not in the triangular form we need. Hence we were not able to apply his result, and our classification in the analytic category is incomplete. It seems that the linearization problem in triangular form has not been considered by ODE specialists before. The work is organized as follows: Section \ref{local_geometry} provides some analytical results about Nijenhuis operators in dimension two. All the necessary definitions and results from the theory of ODEs concerning the linearization problem and the existence of first integrals around critical points are presented in Appendix A. Appendix B contains the proof of the so-called Morse lemma depending on parameters and its corollary.
We included it to keep the work self-contained. The proofs of Theorem \ref{class} and Proposition \ref{main2} are given in Sections \ref{proof1} and \ref{proof2}. The proofs of Theorems \ref{themain1} and \ref{themain2} are given in Section \ref{proof3}. The author would like to thank Ilya Schurov, Yuri Kudryashov and Alexei Bolsinov for help and fruitful discussions. He would also like to thank the referees for valuable comments and suggestions. The work was supported by the Russian Science Foundation (project No. 17-11-01303). \section{Local geometry of Nijenhuis operators in dimension 2}\label{local_geometry} Fix coordinates $x, y$ and consider a Nijenhuis operator \begin{equation*} R = \left(\begin{array}{cc} R^1_1 & R^1_2 \\ R^2_1 & R^2_2 \\ \end{array}\right). \end{equation*} The following proposition holds: \begin{prop}\label{local1} In dimension two the following conditions are equivalent:\\ 1) the operator $R$ is Nijenhuis;\\ 2) in local coordinates $x, y$: \begin{equation}\label{id1} \mathrm{d} \operatorname{det} R = \mathrm{d} \operatorname{tr} R \, \left(\begin{array}{cc} R^2_2 & - R^1_2 \\ - R^2_1 & R^1_1 \end{array}\right) \,, \end{equation} where $\operatorname{det} R, \operatorname{tr} R$ are the determinant and the trace of $R$ respectively. \end{prop} {\it Proof.} From \eqref{torsion2} for $i = 1, j = 2, k = 1$ and $x^1 = x, x^2 = y$ we get \begin{equation*} \begin{aligned} 0 & = \pd{R^1_1}{y} R^1_1 + \pd{R^2_1}{y} R^1_2 - \underline{\pd{R^1_2}{x} R_1^1} - \pd{R^2_2}{x} R^1_2 - \\ & - \pd{R^1_1}{x}R^1_2 - \pd{R^1_1}{y}R^2_2 + \underline{\pd{R^1_2}{x}R^1_1} + \pd{R^1_2}{y}R^2_1 \end{aligned} \end{equation*} The underlined terms cancel. Adding $\pd{R^2_2}{y}R^1_1$ to the r.h.s.
and subtracting it we get \begin{equation*} \begin{aligned} 0 & = \pd{}{y} \Big( - R^1_1 R^2_2 + R^2_1 R^1_2\Big) - R^1_2 \Big( \pd{R^1_1}{x} + \pd{R^2_2}{x}\Big) + R^1_1 \Big( \pd{R^1_1}{y} + \pd{R^2_2}{y}\Big). \end{aligned} \end{equation*} In a similar way, for $i = 1, j = 2, k = 2$ formula \eqref{torsion2} yields \begin{equation*} \begin{aligned} 0 & = \pd{}{x} \Big( - R^1_1 R^2_2 + R^2_1 R^1_2\Big) + R^2_2 \Big( \pd{R^1_1}{x} + \pd{R^2_2}{x}\Big) - R^2_1 \Big( \pd{R^1_1}{y} + \pd{R^2_2}{y}\Big). \end{aligned} \end{equation*} As $R^1_1 R^2_2 - R^2_1 R^1_2 = \operatorname{det} R$ and $R^1_1 + R^2_2 = \operatorname{tr} R$, these equations can be rewritten in the form \eqref{id1}. The Lemma is proved. $\blacksquare$\\ \begin{corr}\label{useful} Assume that $f(x, y)$ and $g(y)$ are arbitrary (smooth or analytic) functions of two and one variables respectively. It follows from Proposition \ref{local1} that \begin{equation*} R = \left( \begin{array}{cc} g(y) & f(x, y)\\ 0 & g(y) \end{array}\right) \end{equation*} is a Nijenhuis operator. \end{corr} \begin{corr}\label{useful2} Assume that $f(x, y)$ and $g(y)$ are arbitrary (smooth or analytic) functions of two and one variables respectively. It follows from Proposition \ref{local1} that \begin{equation*} R = \left( \begin{array}{cc} 0 & f(x, y) \\ 0 & g(y) \end{array}\right) \end{equation*} is a Nijenhuis operator. \end{corr} Consider a Nijenhuis operator $R$ with $\mathrm{d} \operatorname{tr} R \neq 0$ at a point ${\mathrm{p}}$. Fix $\alpha \neq 0$ and choose coordinates $x, y$ around ${\mathrm{p}}$ such that $\operatorname{tr} R = R^1_1 + R^2_2 = \alpha y$. Denote $\operatorname{det} R$ by $g (x, y)$.
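Corollary \ref{useful} can be double-checked symbolically. The following sketch (an illustration, not part of the paper, with $f$ and $g$ kept as abstract functions) verifies identity \eqref{id1} for this family of operators:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)   # arbitrary smooth f(x, y)
g = sp.Function('g')(y)      # arbitrary smooth g(y)

# Operator from Corollary "useful": upper-triangular with equal diagonal.
R = sp.Matrix([[g, f], [0, g]])

detR, trR = R.det(), R.trace()

# Identity (id1): d(det R) = d(tr R) * [[R^2_2, -R^1_2], [-R^2_1, R^1_1]].
adj = sp.Matrix([[R[1, 1], -R[0, 1]], [-R[1, 0], R[0, 0]]])
lhs = sp.Matrix([[sp.diff(detR, x), sp.diff(detR, y)]])   # row vector d(det R)
rhs = sp.Matrix([[sp.diff(trR, x), sp.diff(trR, y)]]) * adj

print(sp.simplify(lhs - rhs))  # Matrix([[0, 0]]): the identity holds
```

Here both sides reduce to the row vector $(0,\, 2 g g')$, so the difference vanishes identically for any choice of $f$ and $g$.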
Identity \eqref{id1} yields \begin{equation}\label{full} \begin{aligned} R^1_1 &R^2_2 - R^1_2 R^2_1 = g, \\ R^2_1 & = - \frac{1}{\alpha} \pd{g}{x}, \quad R^1_1 = \frac{1}{\alpha} \pd{g}{y}, \\ R^1_1 & + R^2_2 = \alpha y. \\ \end{aligned} \end{equation} We treat this system as a system with a functional parameter $g$ and obtain the following corollary. \begin{corr}\label{local2} Every solution of the system \eqref{full} can be written in the form \begin{equation}\label{solution} R = \left(\begin{array}{cc} \frac{1}{\alpha} \pd{g}{y} & R^1_2 \\ - \frac{1}{\alpha} \pd{g}{x} & \alpha y - \frac{1}{\alpha} \pd{g}{y} \\ \end{array}\right), \end{equation} where $R^1_2$ satisfies the following (implicit) condition: \begin{equation}\label{condition} \frac{1}{\alpha}\pd{g}{y} \Big (\alpha y - \frac{1}{\alpha}\pd{g}{y}\Big) + \frac{1}{\alpha}\pd{g}{x} R^1_2 - g = 0. \end{equation} \end{corr} Condition \eqref{condition} does not contain any derivatives of $R^1_2$. It can be solved for an arbitrary function $g$, but the component $R^1_2$ may not even be continuous. \begin{example}\label{bath} Consider the special case $\alpha = 1$ with $g$ depending only on $y$. In this case $\pd{g}{y} = g'$ and condition \eqref{condition} yields $$ \Big(y - g'\Big) g' - g = 0. $$ Differentiating it in $y$ we get $$ g''(y - 2 g') = 0. $$ This implies that $g$ is a polynomial in $y$ of degree at most two. Using the method of undetermined coefficients we get two solutions: $g = \frac{y^2}{4}$ and $g = \beta y - \beta^2$ for an arbitrary constant $\beta$. \end{example} The following proposition further explores the analytical properties of Nijenhuis operators that follow from formula \eqref{solution}.
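The two solutions found in Example \ref{bath} can be verified directly; the following symbolic sketch (an illustration, not part of the paper) substitutes both into the equation $(y - g')\,g' - g = 0$:

```python
import sympy as sp

y, beta = sp.symbols('y beta')

def residual(g):
    """Left-hand side of (y - g')*g' - g = 0 from Example `bath`."""
    return sp.expand((y - sp.diff(g, y)) * sp.diff(g, y) - g)

# Both solutions obtained by undetermined coefficients satisfy the equation:
print(residual(y**2 / 4))            # 0
print(residual(beta*y - beta**2))    # 0
```

For $g = y^2/4$ the two factors are both $y/2$, and for $g = \beta y - \beta^2$ the products cancel term by term, so both residuals vanish identically.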
\begin{prop}\label{normal} Consider a real plane with coordinates $x, y$ and an analytic (smooth) Nijenhuis operator $R$ with a singular point of scalar type at the coordinate origin with $\lambda = 0$. Assume that $\mathrm{d} \operatorname{tr} R \neq 0$ at the origin and that the determinant $g = \operatorname{det} R$ satisfies the following conditions at the origin: $$ g(0, 0) = \pd{g}{x}(0, 0) = \pd{g}{y}(0, 0) = \frac{\partial^2 g}{\partial x^2} (0, 0) = \frac{\partial^2 g}{\partial y^2} (0, 0) = \frac{\partial^2 g}{\partial x \partial y} (0, 0) = 0. $$ In other words, $g$ has no constant, linear or quadratic part in its Taylor expansion. Define the constant $\gamma$ by $\pd{R^1_2}{x} (0, 0) = \gamma$. We assume: \begin{enumerate} \item in the analytic category, that $$ \gamma \neq 0, \quad \gamma \neq - \frac{p}{q} \text{ with } p, q \in \mathbb N, \quad \gamma \neq \frac{1}{m} \text{ with } 3 \leq m \in \mathbb N; $$ \item in the smooth category, that $$ \gamma > 0, \quad \gamma \neq \frac{1}{m} \text{ with } 3 \leq m \in \mathbb N. $$ \end{enumerate} Under these assumptions there exists an analytic (smooth) coordinate change such that in the new coordinates $\tilde x, \tilde y$ the Nijenhuis operator $R$ takes the form \begin{equation}\label{normal_normal} R = \left ( \begin{array}{cc} 0 & f(\tilde x, \tilde y) \\ 0 & \tilde y \end{array}\right), \end{equation} with $f(0, 0) = 0$ and $\pd{f}{\tilde x} (0, 0) = \gamma$. \end{prop} {\it Proof. } We start with a lemma. \begin{lemma}\label{analytic} Consider the ODE \begin{equation}\label{op} r(x)\cdot k'(x) - k(x) = 0 \end{equation} in a neighbourhood of the point $x = 0$. Assume that $r(0) = 0$ and $r'(0) = \beta \neq 0$.
Then \begin{enumerate} \item if $\frac{1}{\beta} \notin \mathbb N$, then the only analytic (smooth) solution of \eqref{op} is $k(x) \equiv 0$; \item if $\frac{1}{\beta} \in \mathbb N$, then every analytic (smooth) solution in a sufficiently small neighbourhood of $x = 0$ can be written as $k(x) = c x^{\frac{1}{\beta}} F(x)$, where $F(x)$ is an analytic (smooth) function determined only by $r(x)$, $F(0) \neq 0$ and $c$ is an arbitrary constant. \end{enumerate} \end{lemma} {\it Proof. } Let $k(x)$ be an analytic (smooth) solution of \eqref{op} on a neighbourhood of $x = 0$. W.l.o.g. we assume that this neighbourhood is $x \in [-\epsilon, \epsilon]$ for some $\epsilon > 0$. As $r'(0) \neq 0$, in a sufficiently small neighbourhood of $x = 0$ the only zero of $r(x)$ is $x = 0$. Thus, w.l.o.g. we assume that $r(x) \neq 0$ for $x \in [-\epsilon, 0) \cup (0, \epsilon]$. Again, as $r'(0) \neq 0$, by the Morse lemma there exists an analytic (smooth) function $\bar r(x)$ such that $r(x) = \beta x + x^2 \bar r(x)$. For $x \in [-\epsilon, 0) \cup (0, \epsilon]$ equation \eqref{op} can be rewritten as $$ \frac{k'}{k} = \frac{1}{r(x)} = \frac{1}{\beta x + x^2 \bar r(x)} = \frac{1}{\beta x} - \frac{1}{\beta}\frac{\bar r}{\beta + x \bar r}. $$ Assume that $x \in (0, \epsilon]$. Integrating both sides of the equation and exponentiating, we get \begin{equation}\label{form} k(x) = c x^{\frac{1}{\beta}} \underline{\exp \Big( \int \limits^{\epsilon}_x \frac{1}{\beta}\frac{\bar r(t)}{\beta + t \bar r(t)} \mathrm{d} t \Big)} = c x^{\frac{1}{\beta}} F(x). \end{equation} Here $F(x)$ denotes the underlined factor. As $r(x) \neq 0$ for $x \in [-\epsilon, 0) \cup (0, \epsilon]$, we have $\beta + x \bar r(x) \neq 0$ for $x \in [-\epsilon, \epsilon]$.
Thus the integral defining $F(x)$ converges for all $x$ in the domain, and $F(x)$ is an analytic (smooth) function with the property $F(x) \neq 0$. In particular, $F(0) \neq 0$. Assume now that the first statement of Lemma \ref{analytic} is wrong, that is, $\frac{1}{\beta} \notin \mathbb N$ and $c \neq 0$ for some analytic (smooth) solution $k(x)$. Pick the smallest $m \in \mathbb Z^+$ such that $m > \frac{1}{\beta}$ and consider the $m$-th derivative of $k(x)$. We get that $\lim_{x \to 0+} k^{(m)}(x) = \infty$. This contradiction proves the first statement of the Lemma. Now consider $\frac{1}{\beta} \in \mathbb N$. Formula \eqref{form} defines an analytic (smooth) function on the entire interval $x \in [-\epsilon, \epsilon]$, and this function coincides with the solution on $x \in (0, \epsilon]$. By the uniqueness theorem for ODEs we get that $k(x)$ coincides with formula \eqref{form} on the entire domain, and the second statement of Lemma \ref{analytic} is proved. $\blacksquare$ \begin{lemma}\label{analytic2} Under the conditions of Proposition \ref{normal} there exists an analytic (smooth) function $\mu(x, y)$ such that $\mu(0, 0) = 0, \pd{\mu}{x} (0, 0) = 0, \pd{\mu}{y} (0, 0) = 1$ and \begin{equation}\label{astra} \operatorname{det} \big( R - \mu \operatorname{Id}\big) \equiv 0. \end{equation} In other words, $\mu$ is an eigenvalue of $R$ at each point. \end{lemma} {\it Proof. } The function $\operatorname{det} R = g(x, y)$ satisfies the conditions of Corollary \ref{morse2}. Thus, for $\alpha = 1$ there exist analytic (smooth) functions $h(x, y), k(x)$ such that \begin{equation}\label{rt3} h(0, 0) = \pd{h}{x}(0, 0) = 0, \quad \Big( \pd{h}{y}(0, 0)\Big)^2 = \frac{1}{4} \neq 0, \quad k(0) = k'(0) = k''(0) = 0 \end{equation} and \begin{equation}\label{rt2} g(x, y) = \frac{y^2}{4} - h^2(x, y) + k(x).
\end{equation} As $\pd{h}{y} (0, 0) \neq 0$, by the Implicit Function Theorem in a sufficiently small neighbourhood of the origin there exists an analytic (smooth) curve $s(x)$ such that $$ h(x, s(x)) = 0, \quad s(0) = 0, \quad s'(0) = - \pd{h}{x}(0, 0) \big( \pd{h}{y}(0, 0)\big)^{-1} = 0. $$ The last property follows from \eqref{rt3}. Substituting \eqref{rt2} into condition \eqref{condition} for $\alpha = 1$ yields $$ \Big( \frac{y}{2} - 2 h \pd{h}{y}\Big)\Big( \frac{y}{2} + 2 h \pd{h}{y}\Big) + \Big(k' - 2 h \pd{h}{x}\Big) R^1_2 - \frac{y^2}{4} + h^2 - k = 0. $$ Putting $y = s(x)$ and renaming $R^1_2(x, s(x)) = r(x)$ we get $$ r(x)k'(x) - k(x) = 0. $$ At the same time, $r'(0) = \pd{R^1_2}{x} (0, 0) + \pd{R^1_2}{y}(0, 0) s'(0) = \gamma$. The conditions of Proposition \ref{normal} imply that either $\gamma = 1$, $\gamma = \frac{1}{2}$, or $\frac{1}{\gamma} \notin \mathbb N$. If $\frac{1}{\gamma} \notin \mathbb N$, then by the first statement of Lemma \ref{analytic} $k(x) \equiv 0$. For $\gamma = 1, \frac{1}{2}$ recall that by \eqref{rt3} $k'(0) = k''(0) = 0$. Thus the constant $c$ for $k(x)$ from the second statement of Lemma \ref{analytic} must be zero. Hence for all admissible $\gamma$ the function $k \equiv 0$. Finally, consider the pair of analytic (smooth) functions $\mu_1 (x, y) = \frac{y}{2} + h(x, y)$ and $\mu_2 (x, y) = \frac{y}{2} - h(x, y)$. As $\mu_1 + \mu_2 = y$ and $\mu_1 \mu_2 = \frac{y^2}{4} - h^2 = g$, both functions are roots of the characteristic polynomial of $R$. Thus both functions satisfy condition \eqref{astra}. From conditions \eqref{rt3} we know that $\pd{h}{y} (0, 0)$ is either $\frac{1}{2}$ or $- \frac{1}{2}$. For $\frac{1}{2}$ take $\mu = \mu_1$ and for $- \frac{1}{2}$ take $\mu = \mu_2$. We have $\pd{\mu}{y} (0, 0) = 1$.
By the same conditions \eqref{rt3}, $\pd{\mu}{x} (0, 0) = \pm \pd{h}{x} (0, 0) = 0$. The Lemma is proved. $\blacksquare$ Condition \eqref{astra} implies that $\mu(x, y)$ is an eigenvalue of $R$ at every point. For a Nijenhuis operator $R$ such a function $\mu$ satisfies the following condition (\cite{bmk}, Proposition 2.3, Formula 9): $$ R^* \mathrm{d} \mu = \mu \mathrm{d} \mu. $$ We take the function $\mu$ provided by Lemma \ref{analytic2} and consider the analytic (smooth) coordinate change $\bar x = x, \bar y = \mu(x, y)$. In the new coordinates \begin{equation}\label{rt4} R = \left( \begin{array}{cc} R^1_1 & R^1_2 \\ 0 & \bar y \end{array}\right). \end{equation} The trace takes the form $\bar y + R^1_1$ and the determinant is $g = \bar y R^1_1$. From the conditions of Proposition \ref{normal} on $g$ we have $$ \pd{g}{\bar y} (0, 0) = R^1_1 (0, 0), \quad \frac{\partial^2 g}{\partial \bar y^2} (0, 0) = 2 \pd{R^1_1}{\bar y} (0, 0), \quad \frac{\partial^2 g}{\partial \bar x \partial \bar y}(0, 0) = \pd{R^1_1}{\bar x} (0, 0). $$ Also note that the Jacobian of the coordinate change $\bar x(x, y), \bar y (x, y)$ at the coordinate origin is $\operatorname{Id}$. Thus the coordinate change does not affect the linear part of the Taylor expansion of $R$ at the origin, and hence $\pd{R^1_2}{\bar x} (0, 0) = \gamma$. From Proposition \ref{local1} the vanishing of the Nijenhuis torsion of $R$ in the form \eqref{rt4} yields the single equation \begin{equation}\label{int} R^1_2 \pd{R^1_1}{\bar x} + \Big(\bar y - R^1_1 \Big) \pd{R^1_1}{\bar y} = 0. \end{equation} In the coordinates $\bar x, \bar y$ consider the vector field $v = \big(R^1_2, \bar y - R^1_1\big)^T$. The coordinate origin is a critical point of $v$, and the eigenvalues of the linearization operator at this point are $1, \gamma$. Equation \eqref{int} implies that $R^1_1$ is a first integral of the vector field $v$.
By Corollary \ref{nodes}, under our assumption on $\gamma$ there is no non-constant analytic (smooth) first integral of $v$ around the origin. Thus, $R^1_1 \equiv 0$. The Proposition is proved. $\blacksquare$ \begin{example} Lemma \ref{analytic2} implies that the eigenvalue of the Nijenhuis operator under the conditions of Proposition \ref{normal} is an analytic (smooth) function. In general, this is not the case. Consider the right-adjoint operator of $\mathfrak b_4^-$: \begin{equation}\label{example1} R = \left(\begin{array}{cc} 0 & - x \\ - x & - 2y \end{array}\right). \end{equation} The eigenvalues are $\lambda_{1, 2} = - y \pm \sqrt{x^2 + y^2}$. They are not even $C^1$ functions. Note that $R$ is diagonalizable at every point of the plane but, at the same time, because the eigenvalue functions are not smooth, there is no analytic (smooth) diagonalizing coordinate change around the origin. This is the simplest example showing that the Haantjes theorem \cite{haantjes} does not work at singular points. \end{example} \section{The proof of Theorem \ref{class}}\label{proof1} We start with a lemma. \begin{lemma}\label{lm0} The algebras in Table 1 and Table 2 in the statement of Theorem \ref{class} are left-symmetric algebras. Algebras with different names (including different values of the parameters in continuous series) are not isomorphic. \end{lemma} {\it Proof. } For the algebras $\mathfrak b_{1, \alpha}, \mathfrak b_{2, \beta}, \mathfrak b_3, \mathfrak b_5$, $\mathfrak c_1, \mathfrak c_2, \mathfrak c_3$ and $\mathfrak c_4$ the right-adjoint operators are Nijenhuis by Corollaries \ref{useful} and \ref{useful2}.
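Claims of this kind can also be cross-checked symbolically. The following sympy sketch (our own encoding, not part of the original argument) evaluates the components $N^k_{12}$ of the Nijenhuis torsion $N_R(u, v) = R^2[u,v] + [Ru, Rv] - R[Ru, v] - R[u, Rv]$, which determine the whole torsion in dimension two, and confirms that the operator \eqref{example1} is Nijenhuis:

```python
import sympy as sp

x, y = sp.symbols('x y')

def nijenhuis_torsion(R):
    # components N^k_{12} of the Nijenhuis torsion of a 2x2 operator field R(x, y);
    # in dimension two these components determine the whole torsion
    c = [x, y]
    N = []
    for k in range(2):
        t = 0
        for m in range(2):
            t += R[m, 0] * sp.diff(R[k, 1], c[m]) - R[m, 1] * sp.diff(R[k, 0], c[m])
            t -= R[k, m] * (sp.diff(R[m, 1], c[0]) - sp.diff(R[m, 0], c[1]))
        N.append(sp.expand(t))
    return N

# the operator from the example: the right-adjoint operator of b_4^-
R = sp.Matrix([[0, -x], [-x, -2*y]])
print(nijenhuis_torsion(R))  # -> [0, 0]
```

The same helper can be applied to any of the $2 \times 2$ operator fields appearing below.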
For $\mathfrak b^+_4, \mathfrak b^-_4, \mathfrak c^+_5, \mathfrak c^-_5$ we check the second condition of Proposition \ref{local1} by direct calculation: \begin{enumerate} \item For $\mathfrak b^+_4$ we have $\operatorname{det} R = x^2, \operatorname{tr} R = - 2y$ and: \begin{equation*} \begin{aligned} \left( \begin{array}{cc} 2x, & 0 \\ \end{array}\right) = \left( \begin{array}{cc} 0 ,& - 2 \\ \end{array}\right)\left( \begin{array}{cc} - 2y & x \\ - x & 0 \end{array}\right) \end{aligned} \end{equation*} \item For $\mathfrak b^-_4$ we have $\operatorname{det} R = - x^2, \operatorname{tr} R = - 2y$ and: \begin{equation*} \begin{aligned} \left( \begin{array}{cc} - 2x, & 0 \\ \end{array}\right) = \left( \begin{array}{cc} 0 ,& - 2 \\ \end{array}\right)\left( \begin{array}{cc} - 2y & x \\ x & 0 \end{array}\right) \end{aligned} \end{equation*} \item For $\mathfrak c^+_5$ we have $\operatorname{det} R = y^2 - x^2, \operatorname{tr} R = 2y$ and: \begin{equation*} \begin{aligned} \left( \begin{array}{cc} - 2x, & 2y \\ \end{array}\right) = \left( \begin{array}{cc} 0 ,& 2 \\ \end{array}\right)\left( \begin{array}{cc} y & - x \\ - x & y \end{array}\right) \end{aligned} \end{equation*} \item For $\mathfrak c^-_5$ we have $\operatorname{det} R = y^2 + x^2, \operatorname{tr} R = 2y$ and: \begin{equation*} \begin{aligned} \left( \begin{array}{cc} 2x, & 2y \\ \end{array}\right) = \left( \begin{array}{cc} 0 ,& 2 \\ \end{array}\right)\left( \begin{array}{cc} y & - x \\ x & y \end{array}\right) \end{aligned} \end{equation*} \end{enumerate} As all the right-adjoint operators in Table 1 and Table 2 are Nijenhuis, by Proposition \ref{main1} all the algebras are left-symmetric. Now let us proceed with the second claim of the Lemma.
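As an aside, the four row-vector identities displayed in the enumeration above can be verified symbolically; a minimal sympy sketch (our own encoding, with the matrices copied verbatim from the four items):

```python
import sympy as sp

x, y = sp.symbols('x y')

def grad(f):
    # row vector (df/dx, df/dy)
    return sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])

# (det R, tr R, matrix) triples copied from items 1-4
cases = [
    (x**2,        -2*y, sp.Matrix([[-2*y,  x], [-x, 0]])),
    (-x**2,       -2*y, sp.Matrix([[-2*y,  x], [ x, 0]])),
    (y**2 - x**2,  2*y, sp.Matrix([[y, -x], [-x, y]])),
    (y**2 + x**2,  2*y, sp.Matrix([[y, -x], [ x, y]])),
]
for det_R, tr_R, A in cases:
    assert (grad(det_R) - grad(tr_R) * A).applyfunc(sp.expand) == sp.zeros(1, 2)
print("all four identities hold")
```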
First, note that the algebras from Table 1 are not isomorphic to those from Table 2, as their associated Lie algebras are not isomorphic. We will need the following observation. Recall that $\mathfrak a$ carries a natural structure of an affine manifold. Consider left-symmetric algebras $\mathfrak a, \mathfrak a'$ and their left- and right-adjoint operators $L, R$ and $L', R'$ respectively. The algebras $\mathfrak a$ and $\mathfrak a'$ are isomorphic if and only if there exists a linear coordinate change that transforms $R$ into $R'$ and $L$ into $L'$ simultaneously. Assume now that there exists a function of four variables $F(y_1, y_2, y_3, y_4)$ such that $F(\operatorname{tr} L, \operatorname{det} L, \operatorname{det} R, \operatorname{tr}R) \equiv 0$ for $\mathfrak a$ and is not identically zero for $\mathfrak a'$. As linear coordinate changes preserve this property of $F$, the algebras $\mathfrak a$ and $\mathfrak a'$ are not isomorphic. Now let us proceed with the proof. Consider $F = \operatorname{det}L$. This function is identically zero for $\mathfrak b_5$ only; thus, this algebra is not isomorphic to $\mathfrak b_{1, \alpha}, \mathfrak b_{2, \beta}, \mathfrak b_3, \mathfrak b_4^+, \mathfrak b_4^-$. Consider $F = \operatorname{det} R$. It is identically zero for $\mathfrak b_{1, \alpha}$ and $\mathfrak b_3$. Thus, they are not isomorphic to $\mathfrak b_{2, \beta}, \mathfrak b_4^+, \mathfrak b_4^-$. Note that the operator field $L$ for $\mathfrak b_{1, \alpha}$ consists of diagonalizable operators, while $L$ for $\mathfrak b_3$ is a Jordan block almost everywhere. Thus, these algebras are not isomorphic. Fix $\alpha_0$ and consider $F = \alpha_0 \operatorname{det} L - (\operatorname{tr}R)^2$. This function vanishes identically for $\mathfrak b_{1, \alpha_0}$ and does not for $\alpha \neq \alpha_0$. Thus, the algebras $\mathfrak b_{1, \alpha}$ for different values of the parameter $\alpha$ are not isomorphic.
Consider $F = (\operatorname{tr} R)^2 - 4 \operatorname{det} R$ (the discriminant of the characteristic polynomial of $R$). It vanishes identically for $\mathfrak b_{2, \beta}$ and not for $\mathfrak b_4^+, \mathfrak b_4^-$. Thus, $\mathfrak b_{2, \beta}$ is not isomorphic to $\mathfrak b^+_4, \mathfrak b^-_4$. Fix $\beta_0 \neq 1$ and consider $F = \operatorname{det} L - \big( 1 - \frac{1}{\beta_0}\big) \operatorname{det} R$. It vanishes for $\mathfrak b_{2, \beta_0}$ and does not vanish for $\mathfrak b_{2, \beta}$ with $\beta \neq \beta_0$. Thus, the algebras $\mathfrak b_{2, \beta}$ are not isomorphic for different values of the parameter. Define the function \begin{equation*} T(x) = \left\{\begin{array}{cc} 0 & x \geq 0, \\ 1 & x < 0.\\ \end{array}\right. \end{equation*} Take $F = T((\operatorname{tr} R)^2 - 4\operatorname{det} R)$. It vanishes for $\mathfrak b_4^-$ and does not vanish for $\mathfrak b_4^+$. This completes the proof for Table 1. Now consider Table 2. For the algebra $\mathfrak c_1$ the operator field $R = L$ is zero, but it is not zero for the other algebras of Table 2. Thus, this algebra is not isomorphic to any other algebra from Table 2. Take $F = \operatorname{tr} R$. It vanishes for $\mathfrak c_3$, but not for $\mathfrak c_2, \mathfrak c_4, \mathfrak c_5^+, \mathfrak c_5^-$. Thus, $\mathfrak c_3$ is not isomorphic to $\mathfrak c_2, \mathfrak c_4, \mathfrak c_5^+, \mathfrak c_5^-$. Take $F = \operatorname{det} R$. It vanishes for $\mathfrak c_2$, but not for $\mathfrak c_4, \mathfrak c_5^+, \mathfrak c_5^-$. Thus, $\mathfrak c_2$ is not isomorphic to $\mathfrak c_4, \mathfrak c_5^+, \mathfrak c_5^-$. Take $F = T(\operatorname{det} R)$. It is zero for $\mathfrak c_5^-$ and nonzero for $\mathfrak c_5^+$ and $\mathfrak c_4$.
Thus, $\mathfrak c_5^-$ is not isomorphic to $\mathfrak c_5^+, \mathfrak c_4$. Finally, notice that the right-adjoint operator $R$ for $\mathfrak c_5^+$ has pairwise distinct eigenvalues almost everywhere, while $R$ for $\mathfrak c_4$ is a Jordan block almost everywhere. This completes the proof of the Lemma. $\blacksquare$ \begin{lemma}\label{lem1} Let $R, Q \in \mathfrak{gl}(n, \mathbb R)$ and $[R, Q] = \lambda Q$ for $\lambda \neq 0$. Then $Q$ is nilpotent. \end{lemma} {\it Proof.} The operator $\mathrm{ad}_R: \mathfrak{gl}(n, \mathbb R) \to \mathfrak{gl}(n, \mathbb R)$ is defined by the formula $\mathrm{ad}_R Q = [R, Q] = \lambda Q$. From the properties of the matrix commutator it follows that $\mathrm{ad}_R Q^n = n\lambda Q^n$. Suppose now that $Q^n \neq 0$ for all $n \in \mathbb N$. This means that the finite-dimensional operator $\mathrm{ad}_R$ has an infinite set of eigenvalues $\lambda, 2\lambda, 3\lambda, \dots$. This contradiction completes the proof. $\blacksquare$ \begin{lemma}\label{lem2} Every two-dimensional commutative subalgebra $\mathfrak h \subset \mathfrak{gl}(2, \mathbb R)$ contains the one-dimensional subspace spanned by the identity matrix. \end{lemma} {\it Proof}. First, recall some facts about $\mathfrak{gl}(n, \mathbb R)$. It has an invariant scalar product, which identifies $\mathfrak{gl}(n, \mathbb R)$ with its dual. Under this identification the adjoint orbits are identified with coadjoint orbits. The latter are symplectic manifolds and, thus, even-dimensional. At the same time the dimension of the adjoint orbit $\mathcal O_X$ of a given element $X$ is $$ \operatorname{dim} \mathcal O_X =\operatorname{dim} \mathfrak{gl}(n, \mathbb R) - \operatorname{dim} \mathfrak z_X, $$ where $\mathfrak z_X$ denotes the centralizer of $X$, that is, the set of all elements $Y \in \mathfrak{gl}(n, \mathbb R)$ that commute with $X$. For $n = 2$ we get that $4 - \operatorname{dim} \mathfrak z_X$ must be even; thus, the centralizer of any element $X$ is even-dimensional.
Now consider a non-zero element $X \in \mathfrak h$. By definition $\mathfrak h \subseteq \mathfrak z_X$. This implies that the dimension of $\mathfrak z_X$ is either $2$ or $4$. If $\operatorname{dim} \mathfrak z_X = 4$, then $X$ lies in the center of $\mathfrak{gl}(2, \mathbb R)$ and is proportional to the identity matrix. Thus, the statement of the Lemma holds in this case. If $\operatorname{dim} \mathfrak z_X = 2$, then $\mathfrak h$ coincides with $\mathfrak z_X$. In particular, the identity matrix commutes with $X$ and lies in $\mathfrak h$. The Lemma is proved. $\blacksquare$ For the rest of the proof let us forget about the affine structure on $\mathfrak a$ and treat $L$ simply as the left-adjoint action of $\mathfrak a$ on itself. Fixing a basis $\eta_1, \eta_2$, we get that $\xi \mapsto L_{\xi}$ defines a map from $\mathfrak a$ to the matrix algebra $\mathfrak{gl}(2, \mathbb R)$. Identity \eqref{left} implies that the image of this map is a subalgebra of $\mathfrak{gl}(2, \mathbb R)$ (in fact this is the image of the adjoint action of the associated Lie algebra). We denote the dimension of this subalgebra by $N$. \subsection{$N = 0$} This implies that $L_{\eta} = 0$ for every $\eta \in \mathfrak a$. Thus, $\mathfrak a$ is isomorphic to $\mathfrak c_1$ from Table 2. \subsection{$N = 1$} W.l.o.g. we may assume that in a basis $\xi_1, \xi_2$ the element $\xi_1$ spans the kernel of the map $L$, that is, $\xi_1 \star \xi_1 = 0$, $\xi_1 \star \xi_2 = 0$ or, in other words, $L_{\xi_1} = 0$. From identity \eqref{left} we get $L_{[\xi_1, \xi_2]} = [L_{\xi_1}, L_{\xi_2}] = 0$. This implies that \begin{equation}\label{gamma} [\xi_1, \xi_2] = \gamma \xi_1. \end{equation} If $\gamma = 0$ in \eqref{gamma}, we get $[\xi_1, \xi_2] = 0$.
Thus, in the given basis the structure relations for $\mathfrak a$ are \begin{equation}\label{pref0} \begin{aligned} & \xi_1\star\xi_1 = \xi_1\star\xi_2 = \xi_2\star\xi_1 = 0, \\ & \xi_2\star\xi_2 = b \xi_1 + a \xi_2 , \\ \end{aligned} \end{equation} where $a, b$ are some constants. If $a \neq 0$ in \eqref{pref0}, consider the change of basis $\eta_1 = \xi_1$ and $\eta_2 = \frac{b}{a^2} \xi_1 + \frac{1}{a} \xi_2$. The structure relations \eqref{pref0} take the form \begin{equation*} \begin{aligned} & \eta_1\star\eta_1 = \eta_1\star\eta_2 = \eta_2\star\eta_1 = 0, \\ & \eta_2\star\eta_2 = \frac{1}{a^2} \xi_2 \star \xi_2 = \Big(\frac{b}{a^2} \xi_1 + \frac{1}{a} \xi_2\Big) = \eta_2. \\ \end{aligned} \end{equation*} This yields the structure relations of the left-symmetric algebra $\mathfrak c_2$ from Table 2. If $a = 0$ in \eqref{pref0}, then $b \neq 0$ (otherwise $\operatorname{dim}\operatorname{Im} L = 0$). After the change of basis $\eta_1 = b \xi_1, \eta_2 = \xi_2$ the relations \eqref{pref0} take the form \begin{equation*} \begin{aligned} & \eta_1\star\eta_1 = \eta_1\star\eta_2 = \eta_2\star\eta_1 = 0, \\ & \eta_2\star\eta_2 = b \xi_1 = \eta_1. \\ \end{aligned} \end{equation*} This yields the structure relations of the left-symmetric algebra $\mathfrak c_3$ from Table 2. Assume now that $\gamma \neq 0$ in \eqref{gamma}. We have $[\xi_1, \xi_2] = \gamma \xi_1$. Changing the basis as $\xi'_1 = \xi_1$ and $\xi'_2 = - \frac{1}{\gamma} \xi_2$ we get $[\xi'_1, \xi'_2] = - \xi'_1$. So w.l.o.g. we may assume that if $\gamma \neq 0$ in \eqref{gamma}, then $\gamma = - 1$. The latter implies that $\xi_1 \star \xi_2 - \xi_2 \star \xi_1 = \xi_1$.
The structure relations in this case are \begin{equation}\label{pref} \begin{aligned} & \xi_1\star\xi_1 = \xi_1\star\xi_2 = 0, \\ & \xi_2\star\xi_1 = \xi_1, \\ & \xi_2\star\xi_2 = b \xi_1 + a \xi_2, \end{aligned} \end{equation} for some constants $a$ and $b$. If $a \neq 1$ in \eqref{pref}, then after the change of basis $\eta_1 = \xi_1, \eta_2 = - \frac{b}{1 - a} \xi_1 + \xi_2$ the relations \eqref{pref} take the form \begin{equation*} \begin{aligned} & \eta_1 \star \eta_1 = \eta_1 \star \eta_2 = 0, \\ & \eta_2 \star \eta_1 = \Big( - \frac{b}{1 - a} \xi_1 + \xi_2\Big) \star \xi_1 = \xi_1 = \eta_1, \\ & \eta_2 \star \eta_2 = \Big( - \frac{b}{1 - a} \xi_1 + \xi_2\Big) \star \Big( - \frac{b}{1 - a} \xi_1 + \xi_2\Big) = - \frac{b}{1 - a} \xi_1 + b\xi_1 + a\xi_2 = \\ & = a \Big( - \frac{b}{1 - a} \xi_1 + \xi_2\Big) = a \eta_2. \end{aligned} \end{equation*} Renaming $a$ as $\alpha$, we get the structure relations of the left-symmetric algebra $\mathfrak b_{1, \alpha}$ from Table 1. If in \eqref{pref} the constants are $a = 1$ and $b = 0$, then we get \begin{equation*} \begin{aligned} & \xi_1\star\xi_1 = \xi_1\star\xi_2 = 0, \\ & \xi_2\star\xi_1 = \xi_1, \\ & \xi_2\star\xi_2 = \xi_2. \end{aligned} \end{equation*} These are the structure relations of $\mathfrak b_{1, 1}$ from Table 1. Assume, finally, that in \eqref{pref} $a = 1$ and $b \neq 0$. After the change of basis $\eta_1 = b \xi_1, \eta_2 = \xi_2$ the relations \eqref{pref} take the form \begin{equation*} \begin{aligned} & \eta_1 \star \eta_1 = \eta_1 \star \eta_2 = 0, \\ & \eta_2 \star \eta_1 = b \xi_2 \star \xi_1 = b \xi_1 = \eta_1, \\ & \eta_2 \star \eta_2 = \xi_2 \star \xi_2 = b \xi_1 + \xi_2 = \eta_1 + \eta_2. \\ \end{aligned} \end{equation*} These are the structure relations of $\mathfrak b_3$ from Table 1 (here $L$ is a Jordan block almost everywhere, so this algebra is not $\mathfrak b_5$, whose $\operatorname{det} L$ vanishes identically).
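The products derived so far can be double-checked against the left-symmetry identity $\mathcal A(x, y, z) = \mathcal A(y, x, z)$. A minimal sympy sketch (our own encoding of the structure constants; $\alpha$ is kept symbolic, and the hypothetical helper names are ours):

```python
import sympy as sp

alpha = sp.Symbol('alpha')

def make_star(table):
    # table[i][j] = coordinates of e_i * e_j in the basis (e_1, e_2)
    def star(u, v):
        out = sp.zeros(2, 1)
        for i in range(2):
            for j in range(2):
                out += u[i] * v[j] * sp.Matrix(table[i][j])
        return out
    return star

def is_left_symmetric(star):
    e = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]
    A = lambda u, v, w: star(star(u, v), w) - star(u, star(v, w))
    return all((A(u, v, w) - A(v, u, w)).applyfunc(sp.expand) == sp.zeros(2, 1)
               for u in e for v in e for w in e)

# non-zero products only: e2*e2 = e2;  e2*e2 = e1;
# e2*e1 = e1, e2*e2 = alpha*e2 (the a != 1 family);  e2*e1 = e1, e2*e2 = e1 + e2 (a = 1, b != 0)
tables = [
    [[[0, 0], [0, 0]], [[0, 0], [0, 1]]],
    [[[0, 0], [0, 0]], [[0, 0], [1, 0]]],
    [[[0, 0], [0, 0]], [[1, 0], [0, alpha]]],
    [[[0, 0], [0, 0]], [[1, 0], [1, 1]]],
]
assert all(is_left_symmetric(make_star(t)) for t in tables)
print("all four products are left-symmetric")
```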
\subsection{$N = 2$ and the associated Lie algebra is abelian} $L$ defines a faithful representation of the associated Lie algebra of $\mathfrak a$. As the associated Lie algebra is abelian, by Lemma \ref{lem2} the image of the representation contains the identity matrix $\operatorname{Id}$. W.l.o.g. we may assume that $L_{\xi_2} = \operatorname{Id}$ in the given basis $\xi_1, \xi_2$, in other words, $\xi_2 \star \xi_1 = \xi_1$ and $\xi_2 \star \xi_2 = \xi_2$. Identity \eqref{left} implies that $L_{[\xi_1, \xi_2]} = [L_{\xi_1}, L_{\xi_2}] = [L_{\xi_1}, \operatorname{Id}] = 0$. As the representation is faithful and the associated Lie algebra is abelian, we have $[\xi_1, \xi_2] = 0$. The structure relations in this case are: \begin{equation}\label{pref1} \begin{aligned} & \xi_2 \star \xi_1 = \xi_1 \star \xi_2 = \xi_1, \\ & \xi_2 \star \xi_2 = \xi_2, \\ & \xi_1\star\xi_1 = a \xi_1 + b \xi_2, \\ \end{aligned} \end{equation} for some constants $a$ and $b$. Assume that in \eqref{pref1} the coefficients satisfy the relation $\frac{a^2}{4} + b = 0$. Then after the change of basis $\eta_1 = \xi_1 - \frac{a}{2}\xi_2, \eta_2 = \xi_2$ the relations \eqref{pref1} take the form \begin{equation*} \begin{aligned} & \eta_2 \star \eta_1 = \eta_1, \quad \eta_1 \star \eta_2 = \Big(\xi_1 - \frac{a}{2}\xi_2\Big) \star \xi_2 = \xi_1 - \frac{a}{2}\xi_2 = \eta_1, \\ & \eta_2 \star \eta_2 = \eta_2, \quad \eta_1 \star \eta_1 = \Big(\xi_1 - \frac{a}{2}\xi_2\Big) \star \Big( \xi_1 - \frac{a}{2}\xi_2 \Big) = a \xi_1 + b \xi_2 - a \xi_1 + \frac{a^2}{4} \xi_2 = 0. \end{aligned} \end{equation*} These are the structure relations of $\mathfrak c_4$ from Table 2. Assume now that the coefficients $a, b$ in \eqref{pref1} are such that $\frac{a^2}{4} + b \neq 0$.
After the change of basis $\eta_1 = \frac{1}{\sqrt{\vert \frac{a^2}{4} + b \vert}}\xi_1 - \frac{a}{2\sqrt{\vert\frac{a^2}{4} + b \vert}}\xi_2, \eta_2 = \xi_2$ the relations \eqref{pref1} take the form \begin{equation*} \begin{aligned} & \eta_2 \star \eta_1 = \eta_1, \quad \eta_1 \star \eta_2 = \eta_1, \quad \eta_2 \star \eta_2 = \eta_2, \\ & \eta_1 \star \eta_1 = \Big(\frac{1}{\sqrt{\vert \frac{a^2}{4} + b \vert}}\xi_1 - \frac{a}{2\sqrt{\vert\frac{a^2}{4} + b \vert}}\xi_2\Big) \star \Big( \frac{1}{\sqrt{\vert \frac{a^2}{4} + b \vert}}\xi_1 - \frac{a}{2\sqrt{\vert\frac{a^2}{4} + b \vert}}\xi_2 \Big) = \frac{\frac{a^2}{4} + b}{\vert \frac{a^2}{4} + b \vert} \eta_2. \end{aligned} \end{equation*} Depending on the sign of $\frac{a^2}{4} + b$ we get the structure relations of either $\mathfrak c^+_5$ or $\mathfrak c^-_5$. \subsection{$N = 2$ and the associated Lie algebra is non-abelian} As the associated Lie algebra of $\mathfrak a$ is not abelian, w.l.o.g. we may assume that $[\xi_1, \xi_2] = \xi_1\star\xi_2 - \xi_2\star\xi_1 = \xi_1$. Identity \eqref{left} implies that $[L_{\xi_1}, L_{\xi_2}] = L_{\xi_1}$. By Lemma \ref{lem1} the operator $L_{\xi_1}$ is nilpotent, and $L_{\xi_1} \neq 0$, as $L$ is a faithful representation of the associated Lie algebra. In dimension two, the kernel and the image of such an operator coincide. We have two cases. \subsubsection{The image of $L_{\xi_1}$ is spanned by $\xi_1$} The condition implies that $L_{\xi_1} \xi_1 = \xi_1 \star \xi_1 = 0$ and $L_{\xi_1} \xi_2 = \xi_1 \star \xi_2 = \gamma \xi_1$ with $\gamma \neq 0$. Assume $\xi_2 \star \xi_2 = a \xi_1 + b \xi_2$.
The left-symmetry of the associator yields: \begin{equation*} \begin{aligned} 0 = & \mathcal A(\xi_1, \xi_2, \xi_2) - \mathcal A(\xi_2, \xi_1, \xi_2) = \\ = &(\xi_1 \star \xi_2) \star \xi_2 - \xi_1 \star (\xi_2 \star \xi_2) - (\xi_2 \star \xi_1) \star \xi_2 + \xi_2 \star (\xi_1 \star \xi_2) = \\ = & \gamma^2 \xi_1 - b\gamma \xi_1 - \gamma (\gamma - 1)\xi_1 + \gamma (\gamma - 1) \xi_1 = \\ = & \gamma (\gamma - b)\xi_1. \end{aligned} \end{equation*} As $\gamma \neq 0$, we get $b = \gamma$ and the following structure relations \begin{equation}\label{pref2} \begin{aligned} & \xi_1 \star \xi_1 = 0, \quad \xi_1 \star \xi_2 = \gamma \xi_1, \\ & \xi_2 \star \xi_1 = (\gamma - 1)\xi_1, \quad \xi_2 \star \xi_2 = a \xi_1 + \gamma \xi_2. \end{aligned} \end{equation} Assume that in \eqref{pref2} $\gamma = 1$ and $a \neq 0$. After the change of basis $\eta_1 = a \xi_1, \eta_2 = \xi_2$ the structure relations \eqref{pref2} take the form \begin{equation*} \begin{aligned} & \eta_1 \star \eta_1 = 0, \quad \eta_1 \star \eta_2 = \eta_1, \\ & \eta_2 \star \eta_1 = 0, \quad \eta_2\star \eta_2 = \eta_1 + \eta_2. \end{aligned} \end{equation*} These are the structure relations of $\mathfrak b_5$. Assume that in \eqref{pref2} $\gamma = 1$ and $a = 0$. Renaming the basis as $\eta_1 = \xi_1, \eta_2 = \xi_2$, we get the structure relations \begin{equation*} \begin{aligned} & \eta_1 \star \eta_1 = 0, \quad \eta_1 \star \eta_2 = \eta_1, \\ & \eta_2 \star \eta_1 = 0, \quad \eta_2 \star \eta_2 = \eta_2. \end{aligned} \end{equation*} These are the structure relations of $\mathfrak b_{2, 1}$.
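The associator computation above can be cross-checked symbolically: with $b = \gamma$, the relations \eqref{pref2} are left-symmetric for all values of $a$ and $\gamma$. A minimal sympy sketch (our own encoding of the structure constants):

```python
import sympy as sp

a, g = sp.symbols('a gamma')

# structure constants of (pref2): e1*e2 = g*e1, e2*e1 = (g-1)*e1, e2*e2 = a*e1 + g*e2
table = {(0, 1): [g, 0], (1, 0): [g - 1, 0], (1, 1): [a, g]}

def star(u, v):
    out = sp.zeros(2, 1)
    for (i, j), coeffs in table.items():
        out += u[i] * v[j] * sp.Matrix(coeffs)
    return out

e = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]
A = lambda u, v, w: star(star(u, v), w) - star(u, star(v, w))
ok = all((A(u, v, w) - A(v, u, w)).applyfunc(sp.expand) == sp.zeros(2, 1)
         for u in e for v in e for w in e)
print(ok)  # -> True
```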
Assume that in \eqref{pref2} $\gamma \neq 1$. After the change of basis $\eta_1 = \xi_1, \eta_2 = \frac{a}{\gamma (1 - \gamma)}\xi_1 + \frac{1}{\gamma} \xi_2$ the relations \eqref{pref2} take the form \begin{equation*} \begin{aligned} & \eta_1 \star \eta_1 = 0, \quad \eta_1 \star \eta_2 = \xi_1 \star \Big( \frac{a}{\gamma (1 - \gamma)}\xi_1 + \frac{1}{\gamma} \xi_2\Big) = \xi_1 = \eta_1, \\ & \eta_2 \star \eta_1 = \Big( \frac{a}{\gamma (1 - \gamma)}\xi_1 + \frac{1}{\gamma} \xi_2\Big) \star \eta_1 = \frac{\gamma - 1}{\gamma} \xi_1 = \Big( 1 - \frac{1}{\gamma}\Big) \eta_1, \\ & \eta_2 \star \eta_2 = \Big( \frac{a}{\gamma (1 - \gamma)}\xi_1 + \frac{1}{\gamma} \xi_2\Big) \star \Big( \frac{a}{\gamma (1 - \gamma)}\xi_1 + \frac{1}{\gamma} \xi_2\Big) = \\ & = - \frac{a}{\gamma^2} \xi_1 + \frac{a}{\gamma (1 - \gamma)} \xi_1 + \frac{a}{\gamma^2} \xi_1 + \frac{1}{ \gamma} \xi_2 = \eta_2. \end{aligned} \end{equation*} Renaming $\gamma$ as $\beta$, we obtain the structure relations for $\mathfrak b_{2, \beta}$. \subsubsection{The image of $L_{\xi_1}$ is spanned by $a \xi_1 + \xi_2$ for some constant $a$} Recall that in dimension two the image and the kernel of a non-zero nilpotent operator coincide. As $\xi_1$ and $a \xi_1 + \xi_2$ are linearly independent for all $a \in \mathbb R$, we have $L_{\xi_1} \xi_1 = \xi_1 \star \xi_1 = b (a \xi_1 + \xi_2)$ for some $b \neq 0$.
Consider the change of basis $\eta_1 = \frac{1}{\sqrt{|b|}}\xi_1, \eta_2 = \xi_2 + a \xi_1$. By definition we have $$ L_{\eta_1} \eta_1 = \eta_1 \star \eta_1 = \frac{b}{|b|} (a\xi_1 + \xi_2) = \operatorname{sgn} (b) \, \eta_2. $$ At the same time by definition $L_{\eta_1} \eta_2 = \eta_1 \star \eta_2 = 0$ and \begin{equation*} \begin{aligned}{} [\eta_1, \eta_2] = \frac{1}{\sqrt{|b|}}[\xi_1, \xi_2] = \frac{1}{\sqrt{|b|}} \xi_1 = \eta_1. \end{aligned} \end{equation*} The left-symmetry of the associator for the triple $\eta_1, \eta_2, \eta_2$ yields: \begin{equation*} \begin{aligned} 0 = & \mathcal A(\eta_1, \eta_2, \eta_2) - \mathcal A(\eta_2, \eta_1, \eta_2) = \\ = &(\eta_1 \star \eta_2) \star \eta_2 - \eta_1 \star (\eta_2 \star \eta_2) - (\eta_2 \star \eta_1) \star \eta_2 + \eta_2 \star (\eta_1 \star \eta_2) = \\ = & - \eta_1 \star (\eta_2 \star \eta_2). \end{aligned} \end{equation*} This implies that $\eta_2 \star \eta_2 = \gamma \eta_2$ for some constant $\gamma$, since the kernel of $L_{\eta_1}$ is spanned by $\eta_2$. The left-symmetry of the associator for the triple $\eta_1, \eta_2, \eta_1$ yields: \begin{equation*} \begin{aligned} 0 = & \mathcal A(\eta_1, \eta_2, \eta_1) - \mathcal A(\eta_2, \eta_1, \eta_1) = \\ = & (\eta_1 \star \eta_2) \star \eta_1 - \eta_1 \star (\eta_2 \star \eta_1) - (\eta_2 \star \eta_1)\star \eta_1 + \eta_2 \star (\eta_1 \star \eta_1) = \\ = & 2 \eta_1 \star \eta_1 + \operatorname{sgn}(b)\, \eta_2 \star \eta_2 = \operatorname{sgn}(b) (2 + \gamma) \eta_2. \end{aligned} \end{equation*} As $\operatorname{sgn}(b) \neq 0$, we get $\gamma = - 2$. Thus, we obtain the following structure relations: \begin{equation*} \begin{aligned} & \eta_1 \star \eta_1 = \operatorname{sgn} (b) \, \eta_2, \quad \eta_1 \star \eta_2 = 0, \\ & \eta_2 \star \eta_1 = - \eta_1, \quad \eta_2 \star \eta_2 = - 2 \eta_2.
\end{aligned} \end{equation*} Depending on the sign of $b$ this yields the structure relations for either $\mathfrak b^+_4$ or $\mathfrak b^-_4$. The Theorem is proved. $\blacksquare$ \section{Proof of Proposition \ref{main2}}\label{proof2} Let us recall that we have a pair of vector fields $v, w$ with the properties $v({\mathrm{p}}) = v_{\mathrm{p}}$ and $w({\mathrm{p}}) = w_{\mathrm{p}}$, and the operation $$ v_{\mathrm{p}} \star w_{\mathrm{p}} = ([Rv, w] - R[v, w]) \vert_{\mathrm{p}}. $$ In local coordinates $x^1, \dots, x^n$ we have (the underlined terms cancel): \begin{equation*} \begin{aligned} & \big( [Rv, w] - R[v, w]\big)^k = \\ = & \pd{(R^k_{\alpha} v^{\alpha})}{x^{\beta}} w^{\beta} - \pd{w^k}{x^{\beta}} R^{\beta}_{\alpha} v^{\alpha} - R^k_{\alpha} \pd{v^{\alpha}}{x^{\beta}} w^{\beta} + R^k_{\alpha} \pd{w^{\alpha}}{x^{\beta}} v^{\beta} = \\ = & \pd{R^k_{\alpha}}{x^{\beta}} v^{\alpha} w^{\beta} + \underline{R^k_{\alpha} \pd{v^{\alpha}}{x^{\beta}} w^{\beta}} - \pd{w^k}{x^{\beta}} R^{\beta}_{\alpha} v^{\alpha} - \underline{R^k_{\alpha} \pd{v^{\alpha}}{x^{\beta}} w^{\beta}} + R^k_{\alpha} \pd{w^{\alpha}}{x^{\beta}} v^{\beta} = \\ = & \pd{R^k_{\alpha}}{x^{\beta}} v^{\alpha} w^{\beta} + R^k_{\alpha} \pd{w^{\alpha}}{x^{\beta}} v^{\beta} - \pd{w^k}{x^{\beta}} R^{\beta}_{\alpha} v^{\alpha}. \end{aligned} \end{equation*} Substituting the point ${\mathrm{p}}$, where $R^k_i = \lambda \delta^k_i$, into the last equation, we get $$ \big( [Rv, w] - R[v, w]\big)^k \vert_{\mathrm{p}} = \pd{R^k_{\alpha}}{x^{\beta}} \vert_{\mathrm{p}} \, v_{\mathrm{p}}^{\alpha} w_{\mathrm{p}}^{\beta}.
$$ Thus, we see that the definition of the operation on $v_{\mathrm{p}}, w_{\mathrm{p}}$ does not depend on the extensions $v, w$ and that the structure constants of the algebra are $\pd{R^k_{\alpha}}{x^{\beta}} \vert_{\mathrm{p}}$. Let us now show that the algebra is left-symmetric. Denote $a^k_{ij} = \pd{R^k_i}{x^j}\vert_{\mathrm{p}}$. Recall that if $R$ is Nijenhuis, then $R - \lambda \operatorname{Id}$ is Nijenhuis. So w.l.o.g. we may assume that $R = 0$ at the point ${\mathrm{p}}$ and consider \begin{equation*} \begin{aligned} 0 = & \pd{}{x^r} \Big( \mathcal N_R \Big) \vert_{\mathrm{p}} = \\ = & \pd{}{x^r} \Bigg(\pd{R^{\alpha}_i}{x^j} R_{\alpha}^k - \pd{R^{\alpha}_j}{x^i} R_{\alpha}^k -\pd{R^k_i}{x^{\alpha}} R^{\alpha}_j + \pd{R^k_j}{x^{\alpha}} R^{\alpha}_i\Bigg)\vert_{\mathrm{p}} = \\ = & a^{\alpha}_{ij} a_{\alpha r}^k - a^{\alpha}_{ji} a_{\alpha r}^k - a^k_{i \alpha} a^{\alpha}_{jr} + a^k_{j \alpha} a^{\alpha}_{ir}. \end{aligned} \end{equation*} In the natural basis $\eta_1, \dots, \eta_n$ of $T_{\mathrm{p}} M^n$ associated with the coordinates $x^1, \dots, x^n$ on a neighbourhood of $\mathrm{p}$, the last equation yields \begin{equation*} \begin{aligned} 0 = (\eta_i \star \eta_j) \star \eta_r - (\eta_j \star \eta_i)\star \eta_r - & \eta_i \star (\eta_j \star \eta_r) + \eta_j \star (\eta_i \star \eta_r) = \\ = & \mathcal A(\eta_i, \eta_j, \eta_r) - \mathcal A(\eta_j, \eta_i, \eta_r). \end{aligned} \end{equation*} Thus, the vanishing of the Nijenhuis torsion implies that the algebra $\mathfrak a$ is left-symmetric, and Proposition \ref{main2} is proved.
$\blacksquare$ \section{Proof of Theorem \ref{themain1} and Theorem \ref{themain2}}\label{proof3} \subsection{Algebras $\mathfrak c_1, \mathfrak c_2, \mathfrak c_3, \mathfrak c_4$}\label{romm} For each of these four left-symmetric algebras we provide a polynomial Nijenhuis operator $R$ whose linear part $R_1$ coincides with the right-adjoint operator of the corresponding algebra from Table 2 of Theorem \ref{class}. We also give a polynomial function $F (\operatorname{tr} R, \operatorname{det} R)$ that vanishes identically for $R_1$ but is not identically zero on any neighbourhood of the coordinate origin. This implies that there is no linearizing coordinate change for $R$ in either the analytic or the smooth category. \begin{equation*} \begin{aligned} \mathfrak c_1: & \quad R = \left(\begin{array}{cc} x^2 & 0 \\ 0 & y^2 \\ \end{array}\right), \quad F = \operatorname{tr} R = x^2 + y^2; \\ \mathfrak c_2: & \quad R = \left(\begin{array}{cc} x^2 & 0 \\ 0 & y \\ \end{array}\right), \quad F = \operatorname{det} R = x^2y; \\ \mathfrak c_3: & \quad R = \left(\begin{array}{cc} y^2 & y \\ 0 & y^2 \\ \end{array}\right), \quad F = \operatorname{det} R = y^4;\\ \mathfrak c_4: & \quad R = \left(\begin{array}{cc} y + yx^2 & x + x^3\\ -xy^2 & y - yx^2\\ \end{array}\right), \quad F = (\operatorname{tr} R)^2 - 4 \operatorname{det} R = - 4 x^2y^2. \end{aligned} \end{equation*} The operators $R$ for $\mathfrak c_1, \mathfrak c_2$ are Nijenhuis by the Haantjes theorem \cite{haantjes}, and $R$ for $\mathfrak c_3$ is Nijenhuis by Corollary \ref{useful}. Finally, the operator $R$ for $\mathfrak c_4$ is Nijenhuis by formula \eqref{local2} with $\operatorname{tr}R = 2y$ and $\operatorname{det} R = y^2 + x^2 y^2$. \subsection{Algebra $\mathfrak b_5$} Consider the operator field $$ R = \left(\begin{array}{cc} y & y - x^2 \\ 0 & y \\ \end{array}\right).
$$ Its linear part coincides with the right-adjoint operator of $\mathfrak b_5$ from Table 1 of Theorem \ref{class}. At the same time the curve $y = x^2$ through the coordinate origin consists, for $y \neq 0$, of operators proportional to $\operatorname{Id}$ with a non-zero coefficient. Thus, in every neighbourhood of the coordinate origin there exists a point at which $R$ is proportional to the identity with a non-zero coefficient. At the same time the algebraic type of the right-adjoint operator of $\mathfrak b_5$ at every point is either a Jordan block or the zero operator. Thus, there is no analytic or smooth linearizing coordinate change. \subsection{Algebras $\mathfrak c_5^+, \mathfrak c_5^-$} Consider the two-dimensional real plane with coordinates $x, y$ and a Nijenhuis operator field $R$ vanishing at the origin. We assume that the linear part of the Taylor expansion of $R$ at the origin is the right-adjoint operator of $\mathfrak c^+_5$ from Table 2 of Theorem \ref{class}. This implies that the Taylor expansion of $\operatorname{tr} R$ at the origin starts with the linear term $2y$ and the Taylor expansion of $g = \operatorname{det} R$ starts with the quadratic term $y^2 - x^2$. In particular, $\frac{\partial^2 g}{\partial x^2} (0, 0) = -2 \neq 0$, $\frac{\partial^2 g}{\partial x \partial y}(0, 0) = 0$ and $\frac{\partial^2 g}{\partial y^2} (0, 0) = 2$. W.l.o.g. assume that $\operatorname{tr} R = 2y$. Applying Lemma \ref{morse} we get that there exist new coordinates $\bar x, \bar y$ such that $$ \operatorname{tr} R = 2 \bar y, \quad g = - \bar x^2 + k(\bar y) $$ and $k''(0) = \frac{\partial^2 g}{\partial y^2} (0, 0) = 2$. Substituting this expression into the condition \eqref{condition} and taking $\bar x = 0$ we get $$ \frac{1}{2}k'(\bar y) \Big(2\bar y - \frac{1}{2}k'(\bar y)\Big) - k(\bar y) = 0. $$ Differentiating both sides in $\bar y$ we get $$ \frac{1}{2}k''(\bar y) \big(2 \bar y - k'(\bar y)\big) = 0.
$$ As $k(0) = 0$ and $k''(0) = 2 \neq 0$, we get $k'(\bar y) = 2 \bar y$ near the origin and, thus, $g = \operatorname{det} R = \bar y^2 - \bar x^2$. Applying formula \eqref{local2} to the given trace and determinant, we get that in the coordinates $\bar x, \bar y$ the Nijenhuis operator $R$ coincides with the right-adjoint operator of $\mathfrak c^+_5$. The proof for $\mathfrak c^-_5$ is similar, and both left-symmetric algebras are non-degenerate in the smooth and the analytic categories. \subsection{Algebras $\mathfrak b^+_4, \mathfrak b^-_4$} The proof for $\mathfrak b^+_4$ follows the same steps as the proof for $\mathfrak c^+_5$ until the equation $$ \frac{1}{2}k''(\bar y) \big(2 \bar y - k'(\bar y)\big) = 0 $$ is obtained. This time $k''(0) = \frac{\partial^2 g}{\partial y^2}(0, 0) = 0$. Denote $F = 2 \bar y - k'$. As $F'(0) = 2$, the function $F$ does not vanish on a sufficiently small punctured neighbourhood of $\bar y = 0$. This implies that $k''$ vanishes identically near the origin, and we get $g = \operatorname{det} R = \bar x^2$. Applying formula \eqref{local2} we get that the Nijenhuis operator $R$ in the coordinates $\bar x, \bar y$ is the right-adjoint operator of $\mathfrak b^+_4$. The proof for $\mathfrak b^-_4$ is similar. Thus, both $\mathfrak b^+_4$ and $\mathfrak b^-_4$ are non-degenerate in the smooth and the analytic categories. \subsection{Algebra $\mathfrak b_{2, \beta}$} Consider the two-dimensional real plane with coordinates $x, y$ and a Nijenhuis operator $R$ of the form \begin{equation}\label{fm} R = \left(\begin{array}{cc} y & f(x, y) \\ 0 & y \end{array}\right). \end{equation} Assume that $R$ vanishes at the coordinate origin and the linear part of its Taylor expansion is the right-adjoint operator of $\mathfrak b_{2, \beta}$ from Table 1 of Theorem \ref{class}. In particular, the linear part of the Taylor expansion of $f(x, y)$ is $\big(1 - \frac{1}{\beta}\big)x$. We have two cases. \subsubsection{Case $\beta \neq 1$} In this case $\pd{f}{x} (0, 0) \neq 0$.
By the Implicit Function Theorem there exists a curve $r(y)$ such that $f(r(y), y) = 0$ and $r(0) = 0$. This is exactly the curve consisting of the operators $R$ that are multiples of $\operatorname{Id}$. We call this curve \textbf{the characteristic curve}. The next lemma provides a necessary condition for linearization in the case $\beta \neq 1$ in both the analytic and smooth categories. \begin{lemma}\label{necessary} Consider an analytic (smooth) Nijenhuis operator $R$ in the form \eqref{fm}. Assume that there exists an analytic (smooth) linearizing coordinate change and denote the characteristic curve by $r(y)$. Then $\pd{f}{x} (r(y), y)$ is a constant function in a sufficiently small neighbourhood of $y = 0$. \end{lemma} {\it Proof. } Consider a linearizing coordinate change $\bar x = g(x, y), \bar y = h(x, y)$. By definition $\bar y = y = \frac{1}{2} \operatorname{tr} R$. Thus, the linearizing coordinate change has the triangular form $\bar x = g(x, y), \bar y = y$. The operator field $R$ transforms as \begin{equation*} R = \left(\begin{array}{cc} \frac{1}{\pd{g}{x}} & - \frac{\pd{g}{y}}{\pd{g}{x}} \\ 0 & 1 \end{array}\right) \left( \begin{array}{cc} y & \Big(1 - \frac{1}{\beta}\Big)g \\ 0 & y \end{array}\right) \left(\begin{array}{cc} \pd{g}{x} & \pd{g}{y} \\ 0 & 1 \end{array}\right) = \left( \begin{array}{cc} y & \Big(1 - \frac{1}{\beta}\Big) \frac{g}{\pd{g}{x}}\\ 0 & y \end{array}\right) \end{equation*} We have $$ f(x, y) = \big(1 - \frac{1}{\beta}\big)\frac{g}{\pd{g}{x}}. $$ Note that, as $\pd{g}{x} (0, 0) \neq 0$, for the curve $r(y)$ we have $g(r(y), y) = 0$ in a sufficiently small neighbourhood of $y = 0$. We get \begin{equation*} \pd{f}{x} = \big(1 - \frac{1}{\beta}\big)\Big( 1 - g \frac{1}{\big(\pd{g}{x}\big)^2} \frac{\partial^2 g}{\partial x^2}\Big). \end{equation*} Substituting $x = r(y)$ we get that the r.h.s.
of this equation identically equals $\big(1 - \frac{1}{\beta}\big)$. $\blacksquare$ Consider the polynomial operator field \begin{equation*} R = \left(\begin{array}{cc} y & \Big(1 - \frac{1}{\beta}\Big)x + x^2 + xy \\ 0 & y \end{array} \right) \end{equation*} By Corollary \ref{useful} $R$ is Nijenhuis and it is in the form \eqref{fm}. For sufficiently small $x, y$ the equation $\Big(\Big(1 - \frac{1}{\beta}\Big) + x + y \Big)x = 0$ defines the curve $x = 0$. Thus, this is the characteristic curve $r(y) \equiv 0$. At the same time $\pd{f}{x} (r(y), y) = 1 - \frac{1}{\beta} + y$, which is non-constant. Thus, by Lemma \ref{necessary} $R$ does not satisfy the necessary condition for linearization and has no linearizing coordinate change in both the analytic and smooth cases. \subsubsection{Case $\beta = 1$} In this case all right-adjoint operators of $\mathfrak b_{2, 1}$ are multiples of $\operatorname{Id}$. Consider the polynomial operator field \begin{equation*} R = \left(\begin{array}{cc} y & x^2 + y^2 \\ 0 & y \end{array} \right) \end{equation*} By Corollary \ref{useful} $R$ is Nijenhuis and the linear part of its Taylor expansion coincides with the right-adjoint operator of $\mathfrak b_{2, 1}$. At the same time for $(x, y) \neq (0, 0)$ the algebraic type of $R$ is a Jordan block, while the right-adjoint operator of $\mathfrak b_{2, 1}$ is diagonalizable at every point. Thus, $\mathfrak b_{2, 1}$ is degenerate in both the analytic and smooth categories. \subsection{Algebra $\mathfrak b_{1, \alpha}$ (analytic case)} We start with a lemma. \begin{lemma}\label{prenormal} Consider an analytic (smooth) Nijenhuis operator $R$ on the real plane with coordinates $x, y$ that vanishes at the origin.
Assume that $R$ is in the form \begin{equation*} R = \left(\begin{array}{cc} 0 & f(x, y) \\ 0 & \alpha y \end{array} \right), \end{equation*} for some constant $\alpha$, and that the linear part of the Taylor expansion of $f(x, y)$ is $x + \beta y$. There exists an analytic (smooth) linearizing coordinate change that transforms $R$ into \begin{equation*} R = \left(\begin{array}{cc} 0 & \bar x + \beta \bar y \\ 0 & \alpha \bar y \end{array} \right) \end{equation*} if and only if there exists a linearizing coordinate change for the vector field $v = (f(x, y), \alpha y)^T$. \end{lemma} {\it Proof. } First, assume that there exists a linearizing coordinate change $\bar x = g(x, y), \bar y = h(x, y)$ for the Nijenhuis operator $R$. Note that $\operatorname{tr} R = \alpha y = \alpha \bar y$; thus, the coordinate change has the ``triangular'' form $\bar x = g(x, y), \bar y = y$. The operator field transforms as \begin{equation}\label{swipe} \begin{aligned} \left(\begin{array}{cc} \pd{g}{x} & \pd{g}{y} \\ 0 & 1 \end{array}\right) \left(\begin{array}{cc} 0 & f(x, y) \\ 0 & \alpha y \\ \end{array}\right)& \left(\begin{array}{cc} \frac{1}{\pd{g}{x}} & - \frac{\pd{g}{y}}{\pd{g}{x}} \\ 0 & 1 \end{array}\right) = \left(\begin{array}{cc} 0 & \pd{g}{x} f(x, y) + \alpha \pd{g}{y} y \\ 0 & \alpha y \\ \end{array}\right) = \\ & = \left(\begin{array}{cc} 0 & g (x, y) + \beta y \\ 0 & \alpha y \end{array}\right) = \left(\begin{array}{cc} 0 & \bar x + \beta \bar y \\ 0 & \alpha \bar y \end{array}\right), \end{aligned} \end{equation} As $\pd{g}{x} f(x, y) + \alpha \pd{g}{y} y = v(g)$, this can be written as $v(y) = \alpha y, v(g) = g + \beta y$.
Thus, the analytic (smooth) linearizing coordinate change $\bar x = g(x, y), \bar y = y$ for $R$ is at the same time an analytic (smooth) linearizing coordinate change for $v$. Now, assume that there exists an analytic (smooth) linearizing coordinate change for the vector field $v$. Then, by Lemma \ref{triangular} there exists a linearizing coordinate change in the ``triangular'' form $\bar x = g(x, y), \bar y = y$. This implies that $v(y) = \alpha y$ and $v(g) = g + \beta y$. Substituting this coordinate change into \eqref{swipe} we get that this coordinate change is, in fact, a linearizing coordinate change for the Nijenhuis operator $R$. $\blacksquare$ Consider the following polynomial operator fields \begin{equation*} \begin{aligned} \alpha = 0: & \quad R = \left(\begin{array}{cc} y^2 & x \\ 0 & y^2 \\ \end{array}\right), \quad F = \operatorname{det} R = y^4; \\ \alpha = r, 3 \leq r \in \mathbb N: & \quad R = \left(\begin{array}{cc} 0 & x\\ x^{\alpha - 1}& \alpha y \\ \end{array}\right), \quad F = \operatorname{det} R = - x^{\alpha}; \\ \alpha = - \frac{p}{q},\, p, q \in \mathbb N: & \quad R = \left(\begin{array}{cc} 0 & x + x^{p}y^{q} \\ 0 & - \frac{p}{q}y \\ \end{array}\right); \\ \alpha = \frac{1}{m}, 2 \leq m \in \mathbb N: & \quad R = \left(\begin{array}{cc} 0 & x + \alpha y^{\frac{1}{\alpha}} \\ 0 & \alpha y \\ \end{array}\right). \end{aligned} \end{equation*} The first operator is Nijenhuis by Corollary \ref{useful}, the third and the fourth are Nijenhuis by Corollary \ref{useful2}. The second operator field is Nijenhuis by formula \eqref{local2} for $\operatorname{tr} R = \alpha y$ and $\operatorname{det} R = - x^{\alpha}$. The linear part of the Taylor expansion of each operator field is the right-adjoint operator of $\mathfrak b_{1, \alpha}$ for the corresponding $\alpha$.
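As a sanity check, the Nijenhuis property of such operator fields can also be confirmed by computing the Nijenhuis torsion $N^k_{ij} = R^a_i \partial_a R^k_j - R^a_j \partial_a R^k_i - R^k_a(\partial_i R^a_j - \partial_j R^a_i)$ directly. A minimal sympy sketch (the helper name \texttt{nijenhuis\_torsion} is ours) for the first operator field:

```python
import sympy as sp

x, y = sp.symbols('x y')

def nijenhuis_torsion(R, coords):
    """Coordinate formula N^k_{ij} = R^a_i d_a R^k_j - R^a_j d_a R^k_i
    - R^k_a (d_i R^a_j - d_j R^a_i); all indices are 0-based."""
    n = len(coords)
    N = [[[sp.S(0)] * n for _ in range(n)] for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                s = sp.S(0)
                for a in range(n):
                    s += R[a, i] * sp.diff(R[k, j], coords[a])
                    s -= R[a, j] * sp.diff(R[k, i], coords[a])
                    s -= R[k, a] * (sp.diff(R[a, j], coords[i])
                                    - sp.diff(R[a, i], coords[j]))
                N[k][i][j] = sp.simplify(s)
    return N

# first operator field from the list above (the alpha = 0 case)
R = sp.Matrix([[y**2, x], [0, y**2]])
torsion = nijenhuis_torsion(R, (x, y))
print(all(c == 0 for plane in torsion for row in plane for c in row))  # True
```

The torsion is antisymmetric in the lower indices, so in dimension two only the components $N^k_{12}$ carry information; the same helper run on a generic operator field produces non-zero components.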
At the same time the first and the second operator fields have no linearizing coordinate change in both the smooth and analytic cases by an argument similar to the one in Subsection \ref{romm}. The third and the fourth Nijenhuis operators do not have a linearizing coordinate change by Lemma \ref{prenormal}, as the corresponding vector field $v$ from the condition of this Lemma cannot be linearized by Theorem \ref{ilyash} (even in the formal category). This implies, obviously, that there is no analytic linearizing coordinate change. Theorem \ref{chen} implies that there is no smooth linearizing coordinate change as well. Thus, we have shown that $\mathfrak b_{1, \alpha}$ for $\alpha \in \Sigma_{\mathrm{an}}$ are degenerate in the analytic category. Now, assume that $\alpha \notin \Sigma_{\mathrm{an}} \cup \Sigma_{\mathrm{u}}$. Consider the two-dimensional real plane with coordinates $x, y$ and a Nijenhuis operator $R$ vanishing at the origin. We assume that $\operatorname{tr} R = y$ and the linear part of the Taylor expansion of $R$ is $$ \left(\begin{array}{cc} 0 & \frac{1}{\alpha} x \\ 0 & y \end{array}\right). $$ This is the right-adjoint operator of $\mathfrak b_{1, \alpha}$, written in the basis $\eta'_1 = \eta_1, \eta'_2 = \frac{1}{\alpha} \eta_2$ (here $\eta_1, \eta_2$ is the basis from Table 2 of Theorem \ref{class}). Consider $g = \operatorname{det} R$. Its Taylor expansion has no constant, linear, or quadratic part. At the same time $\pd{R^1_2}{x} (0, 0) = \frac{1}{\alpha}$, $\alpha \neq r$ for $3 \leq r \in \mathbb N$ and $\alpha \neq - \frac{p}{q}$ with $p, q \in \mathbb N$. Thus, $R$ satisfies the conditions of Proposition \ref{normal} and there exists a coordinate change that transforms $R$ into \begin{equation}\label{column} R = \left(\begin{array}{cc} 0 & f(x, y) \\ 0 & y \end{array}\right). \end{equation} Denote the vector field $v = (f(x, y), y)^T$. Recall that $\Omega$ stands for the set of negative Brjuno numbers.
Assume that $\alpha \neq 2$. Note that, together with the condition $\alpha \notin \Sigma_{\mathrm{an}} \cup \Sigma_{\mathrm{u}}$, this implies that for $\alpha < 0$ we have $\alpha \in \Omega$, and for $\alpha > 0$ we have $\alpha \neq r, \frac{1}{r}$ with $2 \leq r \in \mathbb N$. By Corollary \ref{analyticc} there exists a linearizing coordinate change for the vector field $v$. Thus, by Lemma \ref{prenormal} there exists a linearizing coordinate change for $R$. Now, consider the case $\alpha = 2$. By Theorem \ref{ilyash} there are three possible polynomial normal forms of $v$: \begin{equation*} \begin{cases} \dot x = \frac{1}{2}x\\ \dot y = y \\ \end{cases} \, \, \, \, \, \, \quad \begin{cases} \dot x = \frac{1}{2}x\\ \dot y = y - \frac{1}{2}x^2 \\ \end{cases} \quad \begin{cases} \dot x = x\\ \dot y = 2y + \frac{1}{2}x^2 \\ \end{cases} \end{equation*} By Theorem \ref{poincare} there exists an analytic coordinate change $\bar x = g(x, y), \bar y = h(x, y)$ that transforms $v$ into one of these forms. Note that in all three cases $v(g) = \frac{1}{2}g$. At the same time $v(y) = y$. Consider the pair of functions $g(x, y)$ and $y$. Denote the linearization operator of $v$ at the coordinate origin by $A$. Similarly to the proof of Lemma \ref{triangular} we get that the differentials $\mathrm{d} g (0, 0)$ and $\mathrm{d} y$ are eigenvectors of $A^*$ with eigenvalues $\frac{1}{2}$ and $1$ respectively. It is well known from linear algebra that these vectors are linearly independent. Thus, $\bar x = g(x, y), \bar y = y$ is a coordinate change, and it is a linearizing coordinate change for $v$. By Lemma \ref{prenormal} there exists a linearizing coordinate change for $R$. \subsection{Algebra $\mathfrak b_{1, \alpha}$ (smooth case)} Consider the linear vector field $v = (\alpha x, y)^T$ for $\alpha < 0$. Denote $s = - \frac{1}{\alpha}$ and let $f(x, y)$ be the smooth first integral of $v$ from Example \ref{smooth_int}.
Define a smooth function $g(x, y)$: $$ g(x, y) = \begin{cases} \frac{x}{s\, y} f(x, y) & xy \neq 0 \\ 0 & xy = 0. \end{cases} $$ For $xy \neq 0$ this function has derivatives of all orders. Similarly to Example \ref{smooth_int} we define all the derivatives of $g$ to be zero on $xy = 0$. As a result we get a smooth function, and by the definition of this function \begin{equation}\label{idd} g \pd{f}{x} = f\pd{f}{y}. \end{equation} Consider the operator field $$ R = \left(\begin{array}{cc} f(x, y) & \alpha x + g(x, y)\\ 0 & y \\ \end{array}\right). $$ We have $\operatorname{tr} R = f + y$, $\operatorname{det} R = yf$ and \begin{equation*} \begin{aligned} & \left( \begin{array}{cc} \pd{f}{x} ,& \pd{f}{y} + 1 \\ \end{array}\right)\left( \begin{array}{cc} y & - \alpha x - g \\ 0 & f \end{array}\right) = \left( \begin{array}{cc} y\pd{f}{x} ,& - \alpha x \pd{f}{x} - g \pd{f}{x} + f \pd{f}{y} + f \\ \end{array}\right) = \\ = & \left( \begin{array}{cc} y\pd{f}{x} ,& y\pd{f}{y} + f \\ \end{array}\right) = \mathrm{d} \operatorname{det} R. \end{aligned} \end{equation*} Here we used the identity \eqref{idd} and $-\alpha x \pd{f}{x} = y \pd{f}{y}$, which follows from the fact that $f$ is a first integral of $v = (\alpha x, y)^T$. By Proposition \ref{local1} this implies that $R$ is a Nijenhuis operator. At the same time $\operatorname{det} R$ vanishes only on $xy = 0$ in a neighbourhood of the coordinate origin, while the determinant of the right-adjoint operator of $\mathfrak b_{1, \alpha}$ vanishes identically. Thus, there is no linearizing coordinate change for $R$.
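For a concrete parameter value these identities can be verified symbolically away from $xy = 0$. A minimal sympy sketch with the sample choice $\alpha = -1$ (so $s = 1$), checking \eqref{idd}, the first-integral property, and the displayed differential identity:

```python
import sympy as sp

x, y = sp.symbols('x y', nonzero=True)
alpha = -1                         # sample value, so s = -1/alpha = 1
f = sp.exp(-1/(x**2 * y**2))       # first integral from the Example (xy != 0)
g = (x / y) * f                    # the auxiliary function (s = 1 here)

fx, fy = sp.diff(f, x), sp.diff(f, y)

# identity (idd): g f_x = f f_y
assert sp.simplify(g*fx - f*fy) == 0

# f is a first integral of v = (alpha x, y)^T: alpha x f_x + y f_y = 0
assert sp.simplify(alpha*x*fx + y*fy) == 0

# d(tr R) times the adjugate of R equals d(det R) = d(y f)
row = sp.Matrix([[fx, fy + 1]]) * sp.Matrix([[y, -alpha*x - g], [0, f]])
target = sp.Matrix([[sp.diff(y*f, x), sp.diff(y*f, y)]])
assert sp.simplify(row - target) == sp.zeros(1, 2)
print("all identities hold")
```

Here the row vector $(\pd{f}{x}, \pd{f}{y} + 1)$ is $\mathrm{d} \operatorname{tr} R$ and the matrix is the adjugate of $R$.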
For $\alpha = r, 3 \leq r \in \mathbb N$ and $\alpha = \frac{1}{r}, 2 \leq r \in \mathbb N$ the examples of Nijenhuis operators with no linearizing coordinate changes are identical to the analytic case: one only needs to replace Theorem \ref{poincare} with Theorem \ref{chen} in the argument about the absence of linearizing coordinate changes. Thus, we have shown that $\mathfrak b_{1, \alpha}$ for $\alpha \in \Sigma_{\mathrm{sm}}$ are degenerate in the smooth category. As in the analytic case, consider the two-dimensional real plane with coordinates $x, y$ and a smooth Nijenhuis operator $R$ vanishing at the origin. We assume that $\operatorname{tr} R = y$ and the linear part of the Taylor expansion of $R$ is $$ \left(\begin{array}{cc} 0 & \frac{1}{\alpha} x \\ 0 & y \end{array}\right). $$ Consider $\alpha \notin \Sigma_{\rm{sm}}$. This implies that $\alpha > 0$ and $\frac{1}{\alpha} \neq \frac{1}{r}$ with $3 \leq r \in \mathbb N$. Thus, by Proposition \ref{normal} there exists a smooth coordinate change that transforms $R$ into the form \eqref{column}. Consider the vector field $v = (f(x, y), y)^T$. Assume that $\alpha \neq 2$. As $\alpha \notin \Sigma_{\mathrm{sm}}$, this implies that $\alpha > 0$ and $\alpha \neq r, \frac{1}{r}$ for $2 \leq r \in \mathbb N$. By Corollary \ref{smoothc} we get that there exists a smooth linearizing coordinate change for $v$. By Lemma \ref{prenormal} there exists a smooth linearizing coordinate change for $R$. The case $\alpha = 2$ is treated the same way as in the analytic case, replacing Theorem \ref{poincare} with Theorem \ref{chen} in the argument. \subsection{Algebra $\mathfrak b_3$} Consider the two-dimensional real plane with coordinates $x, y$ and an analytic (smooth) Nijenhuis operator $R$ vanishing at the origin. We assume that $\operatorname{tr} R = y$ and the linear part of the Taylor expansion of $R$ is $$ \left(\begin{array}{cc} 0 & x + y\\ 0 & y \end{array}\right).
$$ We have that $\pd{R^1_2}{x} (0, 0) = 1$ and the Taylor expansion of the determinant has no constant, linear, or quadratic part. Applying Proposition \ref{normal}, we get that there exists an analytic (smooth) coordinate change that transforms $R$ into the form \eqref{column}. Consider the vector field $v = (f(x, y), y)^T$. The eigenvalues of the linearization matrix are $\lambda_1 = \lambda_2 = 1$. Thus, by Corollaries \ref{analyticc} and \ref{smoothc} there exists an analytic (smooth) linearizing coordinate change. Thus, by Lemma \ref{prenormal} such a coordinate change exists for the operator field $R$. \section{Appendix A: Vector fields on the plane} \subsection{Linearization problem for vector fields} Let $v$ be a vector field on the two-dimensional real plane with coordinates $x, y$. A point ${\mathrm{p}}$ is \textbf{a critical point of $v$} if $v = 0$ at ${\mathrm{p}}$. In this section we always assume that the coordinate origin is a critical point of $v$. Consider the Taylor expansion $v_1 + v_2 + \dots$ of the vector field $v$ at the coordinate origin. The term $v_1$ is linear in the coordinates, that is, $v^1 = A^1_1 x + A^1_2 y, v^2 = A^2_1 x + A^2_2 y$. We call $A$ with components $A^i_j$ \textbf{the linearization operator} of $v$. It is a simple exercise to show that $A^i_j$ defines an operator on the tangent space at the critical point. Given a vector field $v$ with non-zero linear part $v_1$ in its Taylor expansion, one may ask whether there is a coordinate change around the critical point that transforms $v$ into a linear vector field. This is the linearization problem for vector fields. It was studied by many great mathematicians, starting with Poincar\'e (see the overview in \cite{ilya} and the introduction in \cite{brjuno}). The answer to this question is given in terms of the eigenvalues of $A^i_j$. In this section we assume that the eigenvalues of $A^i_j$ are non-zero real numbers $\lambda_1, \lambda_2$.
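To illustrate the definition, the linearization operator is simply the Jacobian of $v$ evaluated at the critical point. A minimal sympy sketch (the vector field below is a toy example of our own, not one from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

# toy vector field with a critical point at the origin (hypothetical example)
v = sp.Matrix([2*x + x*y, y - x**2])

# linearization operator A^i_j: the Jacobian of v at the critical point
A = v.jacobian([x, y]).subs({x: 0, y: 0})
print(A)                      # Matrix([[2, 0], [0, 1]])
print(sorted(A.eigenvals()))  # eigenvalues lambda_1 = 2, lambda_2 = 1
```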
The pair is called \textbf{resonant} if there exist $m, n \in \mathbb Z^+$ such that $m\lambda_1 + n \lambda_2 = \lambda_1$ and $m + n \geq 2$. Otherwise the pair is said to be \textbf{non-resonant}. With the given restriction the pair $\lambda_1, \lambda_2$ is resonant if either $\frac{\lambda_1}{\lambda_2} = - \frac{p}{q}$, $\lambda_1 = r \lambda_2$ or $r \lambda_1 = \lambda_2$ with $p > q$ and $2 \leq r \in \mathbb N$. In particular, $\lambda_1 = \lambda_2$ is a non-resonant pair. We distinguish linearization in three categories: formal, analytic and smooth. It is known (\cite{ilya}, Theorem 4) that if the eigenvalues $\lambda_1, \lambda_2$ are non-resonant, then there exists a formal linearization. The proof of the theorem is based on the fact that for non-resonant eigenvalues a certain formal equation, known as the homological equation, can be solved. Note that this includes the case of $A^i_j$ being a Jordan block with non-zero eigenvalues (see Lemma 4.5 \cite{ilya}, also \cite{basto}). In the case of resonance one can obtain a polynomial normal form. \begin{theor}\label{ilyash}(\cite{ilya}, Proposition 4.29, Table 4.1) Let $v$ be an analytic (or formal) vector field on the plane with a critical point at the origin. Assume that the eigenvalues of the linearization operator at this point are non-zero real numbers $\lambda_1, \lambda_2$. Then there exists a formal coordinate change that transforms $v$ into its polynomial normal form.
The list of conditions and corresponding polynomial normal forms is given in the Table below: \begin{equation*} \begin{aligned} & \text{Table 5: Polynomial normal forms of vector fields in dimension two} \\ & \begin{tabular}{|c|c|c|} \hline Type & Conditions & Polynomial normal form \\ \hline No resonance & $\lambda_1, \lambda_2$ is non-resonant pair & linear \\ \hline Resonant node I & $\lambda_1 = r \lambda_2$ and $r \in \mathbb N, r \geq 2$ & $\begin{aligned} & \dot x = \lambda_1 x + a \lambda_2 y^r \\ & \dot y = \lambda_2 y, \\ & a = \{\pm 1, 0\}\end{aligned}$\\ \hline Resonant node II & $r \lambda_1 = \lambda_2$ and $r \in \mathbb N, r \geq 2$ & $\begin{aligned} & \dot x = \lambda_1 x \\ & \dot y = \lambda_2 y + a \lambda_1 x^r, \\ & a = \{\pm 1, 0\}\end{aligned}$\\ \hline Resonant saddle & $\lambda_1 = - \frac{p}{q}\lambda_2, p, q \in \mathbb N$ & $\begin{aligned} & \dot x = \lambda_1 x \\ & \dot y = \lambda_2 y (1 \pm x^{qm}y^{pm} + b x^{2qm}y^{2pm}), \\ & b \in \mathbb R, m \in \mathbb N \end{aligned}$ \\ \hline \end{tabular} \\ \end{aligned} \end{equation*} The parameters $a, r$ for the resonant node and $b, m$ for the resonant saddle are formal invariants of the normal forms. \end{theor} A critical point ${\mathrm{p}}$ of a vector field $v$ is called \textbf{elementary} if every eigenvalue of the linearization operator at ${\mathrm{p}}$ has a non-vanishing real part. The following Theorem deals with linearization in the smooth category. \begin{theor}\label{chen} (K.~T.~Chen, \cite{chen}) Let $v$ and $w$ be two smooth vector fields having the coordinate origin as an elementary critical point. Denote by $v_1 + v_2 + ...$ and $w_1 + w_2 + ...$ the respective Taylor expansions of $v$ and $w$.
Then there exists a $C^{\infty}$ transformation about $0$ which carries $v$ to $w$ if and only if there exists a formal transformation which carries the formal vector field $v_1 + v_2 + ...$ to $w_1 + w_2 + ...$. \end{theor} This theorem states that the linearization problem in the smooth category for elementary vector fields is the same as the formal linearization problem for their Taylor expansions. The following Corollary holds. \begin{corr}\label{smoothc} Consider a smooth vector field $v = (f(x, y), y)^T$ with a critical point at the origin and eigenvalues of the linearization operator at this point being $\alpha, 1$. Assume that $$ \alpha > 0, \quad \alpha \neq r, \frac{1}{r} \text{ with } 2 \leq r \in \mathbb N. $$ Then there exists a smooth linearizing coordinate change for $v$. \end{corr} {\it Proof. } For such $\alpha$ the polynomial normal form of $v$ in Table 5 of Theorem \ref{ilyash} is linear. From Theorem \ref{chen} it follows that there exists a smooth linearizing coordinate change. $\blacksquare$ In the analytic category the linearization problem for vector fields is more complicated. The following theorem is a corollary of the classical Poincar\'e--Dulac theorem. \begin{theor}\label{poincare}(\cite{ilya}, follows from Theorem 5.5) Consider a vector field $v$ on the plane with a critical point at the origin and linearization operator with real positive eigenvalues. Then there exists an analytic coordinate change that transforms $v$ into its polynomial normal form from Theorem \ref{ilyash}. \end{theor} Recall that $\Omega$ denotes the set of negative Brjuno numbers. We get the following theorem: \begin{theor}\label{brjn} (\cite{brjuno}, follows from Theorem II, see also \cite{ilya} Theorem 5.22) Consider an analytic vector field $v$ on the plane with a critical point at the origin and linearization operator with real non-zero eigenvalues $\lambda_1, \lambda_2$. Assume that $\frac{\lambda_1}{\lambda_2} \in \Omega$.
Then there exists a linearizing coordinate change. \end{theor} The following corollary holds. \begin{corr}\label{analyticc} Consider an analytic vector field $v = (f(x, y), y)^T$ with a critical point at the origin and eigenvalues of the linearization operator at this point being $\alpha, 1$. Assume that either $\alpha \in \Omega$, or $\alpha > 0$ and $\alpha \neq r, \frac{1}{r}$ with $2 \leq r \in \mathbb N$. Then there exists an analytic linearizing coordinate change for $v$. \end{corr} {\it Proof. } If $\alpha \in \Omega$, then the analytic linearizing coordinate change exists by Theorem \ref{brjn}. If $\alpha > 0$ and $\alpha \neq r, \frac{1}{r}$ with $2 \leq r \in \mathbb N$, then the polynomial normal form from Theorem \ref{ilyash} is linear. By Theorem \ref{poincare} there exists a linearizing coordinate change. $\blacksquare$ At the end of this subsection we prove the following Lemma. \begin{lemma}\label{triangular} Consider $\alpha \neq 0$ and an analytic (smooth) vector field $v = (f(x, y), y)$ on the plane with coordinates $x, y$ with a critical point at the origin. Assume that the eigenvalues of the linearization operator are $\alpha, 1$ and there exists a linearizing coordinate change around this point. Then there exists a linearizing coordinate change in the ``triangular'' form $\bar{x} = g(x, y), \bar{y} = y$. \end{lemma} {\it Proof. } The vector field $v$ is an operator that acts on analytic (smooth) functions by differentiation. The conditions of the Lemma imply that $v(y) = y$. The idea of the proof is to find among the functions a good partner for $y$ to define a proper coordinate change. We have three cases. \textbf{Case $\alpha \neq 1$. } In this case the linearization operator $A$ can be brought to diagonal form with $\alpha, 1$ on the diagonal. Consider the linearizing coordinate change that exists by the condition of the Lemma. Assume that in this change $\bar{x} = g(x, y)$. We have that $v(g) = \alpha g$ and $\mathrm{d} g \neq 0$ at the origin.
We have $$ \alpha (\mathrm{d} g)_j = (\mathrm{d} v(g))_j = \pd{v^a}{x^j} \pd{g}{x^{a}} + v^{a} \frac{\partial^2 g}{\partial x^{a} \partial x^j}. $$ Substituting $x = y = 0$ into the equation we get that $\mathrm{d} g$ at the origin is an eigenvector of $A^*$ with eigenvalue $\alpha$. From linear algebra we know that eigenvectors for different eigenvalues are linearly independent. Thus, $\mathrm{d} g$ and $\mathrm{d} y$ are linearly independent at the origin and locally define an analytic (smooth) coordinate change $\bar{x} = g(x, y), \bar{y} = y$. As $v(g) = \alpha g, v(y) = y$, we get that this is a linearizing coordinate change. \textbf{Case $\alpha = 1$ and $A = \operatorname{Id}$}. Consider the analytic (smooth) linearizing coordinate change $\bar{x} = h_1(x, y), \bar{y} = h_2 (x, y)$ that exists by the condition of the Lemma. We have that $\mathrm{d} h_1$ and $\mathrm{d} h_2$ are linearly independent at the origin. Thus, at least one of these differentials is linearly independent of $\mathrm{d} y$. W.l.o.g. assume that this is $\mathrm{d} h_1$. We define the analytic (smooth) coordinate change $\bar{x} = g(x, y) = h_1(x, y), \bar{y} = y$. By definition $v(g) = g, v(y) = y$; thus, this is a linearizing coordinate change. \textbf{Case $\alpha = 1$ and $A$ is a Jordan block}. Consider the linearizing coordinate change that exists by the conditions of the Lemma. In the new coordinates $\bar{x}, \bar{y}$ we have $v = (\bar x + \bar y, \bar y)^T$. Consider an analytic (smooth) solution $h$ of the equation $v(h) = h$ with the extra condition $h(0, 0) = 0$. In local coordinates the equation is $$ (\bar x + \bar y) \pd{h}{\bar x} + \bar y \pd{h}{\bar y} = h. $$ Differentiating the equation in $\bar x$ we get $$ 0 = (\bar x + \bar y) \frac{\partial^2 h}{\partial \bar x \partial \bar x} + \bar y \frac{\partial^2 h}{\partial \bar y \partial \bar x} = v \Big( \pd{h}{\bar x}\Big).
$$ By Lemma \ref{integral2} any analytic (smooth) first integral of $v = (\bar x + \bar y, \bar y)^T$ is constant around the origin. Thus, $v\big(\pd{h}{\bar x}\big) = 0$ implies that $h = \gamma \bar x + k(\bar y)$ for some analytic (smooth) function of one variable $k$ and a constant $\gamma$. For such $h$ the equation $h = v(h)$ takes the form $\gamma \bar x + k(\bar y) = \gamma (\bar x + \bar y) + \bar y k'(\bar y)$. Differentiating the latter in $\bar y$ yields $$ \gamma + \bar y k''(\bar y) = 0. $$ Substituting $\bar y = 0$ we get that $\gamma = 0$. This implies $\bar y k'' \equiv 0$ around the origin; thus, $k''(\bar y)$ vanishes identically for $\bar y \neq 0$. By continuity it vanishes on the entire neighbourhood of the coordinate origin. As $h(0,0) = 0$, we get $h = c \bar y$ for some constant $c$. We have shown that the space of solutions of the equation $v(h) = h$ around the origin with the condition $h(0, 0) = 0$ is one-dimensional. At the same time $v(y) = y$. Thus, the linearizing coordinate change that exists by the conditions of the Lemma has the form $\bar x = \bar{g}(x, y), \bar{y} = \beta y$ for some $\beta \neq 0$. Taking $\bar x = \frac{1}{\beta} \bar{g} (x, y) = g(x, y), \bar y = y$ yields the triangular linearizing analytic (smooth) coordinate change. The Lemma is proved. $\blacksquare$ \subsection{First integrals around critical points} Assume that $v$ is a vector field on the plane with a critical point at the origin and linear part $v_1$ with linearization operator $A^i_j$. Recall that a first integral of a vector field $v$ is a function $f$ such that $v(f) = 0$. We start with the following example. \begin{example}\label{smooth_int} Consider the linear vector field $v = (\alpha x, y)$ for $\alpha < 0$.
Fix the constant $s = - \frac{1}{\alpha} \in \mathbb R^+$. Define the function $f$ as follows: $$ f(x, y) = \begin{cases} \exp \big(- \frac{1}{x^{2s} y^{2}}\big), & \quad xy \neq 0 \\ 0, & \quad xy = 0. \end{cases} $$ Defining all the partial derivatives of $f(x, y)$ as zero on the coordinate cross $xy = 0$, we obtain a function that is smooth on the entire plane and flat on $xy = 0$. The partial derivatives of $f(x, y)$ satisfy the following identities: $$ \pd{f}{x}(x, y) = \frac{2s}{x^{2s + 1} y^{2}}f(x, y), \quad \pd{f}{y} (x, y) = \frac{2}{x^{2s} y^{3}} f(x, y). $$ In particular $$ \alpha x\pd{f}{x} + y \pd{f}{y} = 0 $$ and, thus, for $\alpha < 0$ the function $f(x, y)$ defines a smooth first integral for the linear vector field $v = (\alpha x, y)$. \end{example} The following two lemmas deal with first integrals of linear vector fields around the origin in dimension two. \begin{lemma}\label{integral2} Consider a real plane with coordinates $x, y$ and the linear vector field $v = (x + y, y)^T$. It has no non-constant first integrals in a neighbourhood of the origin in both the smooth and analytic cases. \end{lemma} {\it Proof. } The ODE $\dot x = x + y, \dot y = y$ can be explicitly integrated: \begin{equation*} \begin{aligned} & x(t) = c_2 t \exp{t} + c_1 \exp{t}, \\ & y(t) = c_2 \exp{t}, \end{aligned} \end{equation*} where $c_1, c_2$ are arbitrary constants. We have $\lim_{t \to - \infty} x(t) = \lim_{t \to - \infty} y(t) = 0$. As a result, the closure of every integral curve contains the coordinate origin. The integral must be constant along the curves; thus, by continuity the value of the integral on each curve coincides with the value of the integral at the origin. As integral curves pass through every point of the neighbourhood of the coordinate origin, we get that the first integral of $v$ is constant.
$\blacksquare$ \begin{lemma}\label{integral1} Consider a real plane with coordinates $x, y$ and the linear vector field $v = (\alpha x, y)^T$ with $\alpha \neq 0$. Then \begin{enumerate} \item It has a non-constant smooth first integral in a neighbourhood of the origin if and only if $\alpha < 0$ \item It has a non-constant analytic first integral in a neighbourhood of the origin if and only if $\alpha = - \frac{p}{q}$ with $p, q \in \mathbb N$ \end{enumerate} \end{lemma} {\it Proof.} We start with the smooth category. The ODE $\dot x = \alpha x, \dot y = y$ can be explicitly integrated: \begin{equation*} \begin{aligned} & x(t) = c_1 \exp{\alpha t}, \\ & y(t) = c_2 \exp{t}, \end{aligned} \end{equation*} where $c_1, c_2$ are arbitrary constants. Similarly to the proof of Lemma \ref{integral2}, the integral curves for $\alpha > 0$ tend to the coordinate origin as $t$ approaches $- \infty$. Thus, again, the closure of each curve contains the coordinate origin and every smooth first integral is constant. For $\alpha < 0$ one takes $f(x, y)$ from Example \ref{smooth_int} to be a non-constant smooth first integral of $v$. Now consider the analytic category and the decomposition of the first integral $f$ of $v$ into the convergent series $f_k + f_{k + 1} + \dots$. Here $f_k$ is the first non-zero term. Note that $v(f) = v(f_k) + v(f_{k + 1}) + \dots$, where $v(f_i)$ is a homogeneous polynomial of degree $i$. Thus, $v(f) = 0$ if and only if $v(f_i) = 0$ for all $i \geq k$. Let $V^i$ be the linear space spanned by the monomials $x^p y^q$ of degree $i = p + q$. We have $$ v(x^p y^q) = (p\alpha + q) x^p y^q. $$ This implies that in the standard monomial basis $v$ acts on $V^i$ as a diagonal operator with eigenvalues $(p\alpha + q)$. The first integrals lie in the kernel of such an operator.
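The diagonal action of $v$ on monomials is easy to confirm symbolically; a minimal sympy sketch with sample exponents of our choosing:

```python
import sympy as sp

x, y, a = sp.symbols('x y alpha')
p, q = 3, 2                      # sample exponents (any naturals work)

m = x**p * y**q
# action of v = (alpha x, y)^T on functions: v(f) = alpha x f_x + y f_y
vm = a*x*sp.diff(m, x) + y*sp.diff(m, y)
assert sp.simplify(vm - (p*a + q)*m) == 0   # eigenvalue p*alpha + q

# the eigenvalue of x^q y^p is q*alpha + p, which vanishes for alpha = -p/q,
# so x^q y^p is a first integral of v in that case
k = x**q * y**p
vk = (a*x*sp.diff(k, x) + y*sp.diff(k, y)).subs(a, -sp.Rational(p, q))
assert sp.simplify(vk) == 0
print("monomial eigenvalue relations verified")
```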
The kernel is non-trivial if and only if at least one of the eigenvalues is zero, that is, $p\alpha + q = 0$ for some monomial $x^p y^q$; equivalently, $\alpha = - \frac{p}{q}$ for some $p, q \in \mathbb N$. Thus, if $\alpha \neq - \frac{p}{q}$ there is no non-constant first integral. If $\alpha = - \frac{p}{q}$ one takes $f = x^q y^p$ to be a non-constant integral of the corresponding vector field $v$. The Lemma is proved. $\blacksquare$ The following Corollary holds. \begin{corr}\label{nodes} (Necessary condition for the existence of a first integral) On the plane with coordinates $x, y$ consider an analytic (smooth) vector field $v$ with a critical point at the origin. Assume that the eigenvalues of the linearization operator at this point are non-zero real numbers $\lambda_1, \lambda_2$. Assume also that there exists a non-constant analytic (smooth) first integral of $v$ around the origin. Then \begin{enumerate} \item In the analytic category $\frac{\lambda_1}{\lambda_2} = - \frac{p}{q}$ for $p \geq q \in \mathbb N$, \item In the smooth category $\frac{\lambda_1}{\lambda_2} < 0$. \end{enumerate} \end{corr} {\it Proof. } In the analytic category consider the decompositions $v = v_1 + v_2 + \dots$ and $f = f_k + f_{k + 1} + \dots$ for the vector field $v$ and the first integral $f$. Here $f_k, k \geq 1$, is the first non-zero term in the Taylor series of $f$. The formula for $v(f)$ takes the form $$ v(f) = v_1(f_k) + \big( v_2(f_k) + v_1(f_{k + 1})\big) + \dots $$ On the r.h.s. of this equation the term of degree $k$ is $v_1(f_k)$. As $v(f) = 0$ we get that $f_k$ is a first integral of $v_1$. If $\lambda_1 \neq \lambda_2$, then w.l.o.g. the linearization operator $A$ can be considered diagonal. Taking $\frac{1}{\lambda_2} v_1$ instead of $v_1$ (rescaling by a constant does not affect the fact that $f_k$ is a first integral of $v_1$) we get a vector field of the form $v_1 = (\alpha x, y)^T$ with $\alpha = \frac{\lambda_1}{\lambda_2}$.
Applying Lemma \ref{integral1} we get that $\frac{\lambda_1}{\lambda_2} = - \frac{p}{q}$. Now assume that $\lambda_1 = \lambda_2$. W.l.o.g. $v_1$ is either $(x , y)^T$ or $(x + y, y)^T$. Applying Lemmas \ref{integral1} and \ref{integral2} we get the statement of the Corollary in the analytic category.

Now consider the smooth category. Assume that $\frac{\lambda_1}{\lambda_2} > 0$ and $\lambda_1, \lambda_2$ is a non-resonant pair. By Theorem \ref{ilyash} the polynomial normal form of $v$ is linear. By Theorem \ref{chen} there exists a coordinate change that transforms $v$ into a linear vector field. W.l.o.g. $\frac{1}{\lambda_2} v$ is either $(\alpha x, y)^T$ or $(x + y, y)^T$, depending on $\frac{\lambda_1}{\lambda_2}$. In both cases, by Lemmas \ref{integral1} and \ref{integral2}, there are no non-constant first integrals.

Now assume that $\lambda_1, \lambda_2$ is a resonant pair of the first type, $\lambda_1 = r \lambda_2$. Note that any first integral of $v$ is a first integral of $\frac{1}{\lambda_2} v$. By Theorem \ref{ilyash} and Theorem \ref{chen} there exists a smooth coordinate change that transforms $\frac{1}{\lambda_2} v$ into $v = (rx + a y^r, y)^T$. Here $r = \frac{\lambda_1}{\lambda_2}$ and $a$ is either $0, 1$ or $- 1$. The system of ODEs $\dot x = rx + a y^r, \dot y = y$ can be explicitly integrated:
\begin{equation*}
\begin{aligned}
& x(t) = a c_2^r t \exp{rt} + c_1 \exp{rt}, \\
& y(t) = c_2 \exp{t}.
\end{aligned}
\end{equation*}
We get that $\lim_{t \to - \infty} x(t) = \lim_{t \to -\infty} y(t) = 0$. Following the same argument as in the proof of Lemma \ref{integral2}, we get that any first integral in this case is constant around the origin. The case of a resonant node of the second type, that is $r \lambda_1 = \lambda_2$, is treated the same way. $\blacksquare$

\section{Appendix B: Morse lemma depending on parameters}

The following Lemma is well-known (see \cite{horm}, p.~502).
To keep our work self-contained, we provide it with a proof. Recall that a point ${\mathrm{p}}$ is said to be \textbf{critical} for a function $f(x, y)$ on the plane with coordinates $x, y$ if $\pd{f}{x} \vert_{\mathrm{p}} = \pd{f}{y} \vert_{\mathrm{p}} = 0$.

\begin{lemma}\label{morse} (Parametric Morse Lemma) Consider an analytic (smooth) function $f(x, y)$ on the real plane with a critical point at the coordinate origin. Assume that
$$
\frac{\partial^2 f}{\partial x^2} (0, 0) = \gamma \neq 0, \quad \frac{\partial^2 f}{\partial x \partial y} (0, 0) = 0.
$$
Then there exists an analytic (smooth) coordinate change of the form $\bar{x} = g(x, y), \bar{y} = y$ with $g(0, 0) = 0$, such that
\begin{equation}\label{morse_normal}
f(\bar x, \bar y) = \operatorname{sgn}(\gamma) \, \bar x^2 + k(\bar y),
\end{equation}
where $k$ is an analytic (smooth) function of one variable with
$$
k(0) = k'(0) = 0, \quad k''(0) = \frac{\partial^2 f}{\partial y^2} (0, 0).
$$
\end{lemma}

{\it Proof. } As $\frac{\partial^2 f}{\partial x^2} (0, 0) \neq 0$, by the Implicit Function Theorem there exists an analytic (smooth) curve $r(y)$ such that
$$
\pd{f}{x} (r(y), y) \equiv 0, \quad r (0) = 0.
$$
The function $f(x, y)$ can be written in the following integral form
\begin{equation*}
\begin{aligned}
f(x, y) & = f(r(y), y) + \int \limits_{0}^1 \frac{\mathrm{d}}{\mathrm{d} t} f \big( tx + (1 - t) r(y), y \big) \mathrm{d} t = \\
& = f(r(y), y) + \big( x - r(y)\big) \int \limits_{0}^1 \pd{f}{x} \big( tx + (1 - t) r(y), y \big) \mathrm{d} t.
\end{aligned}
\end{equation*}
We denote
$$
\Phi(x, y) = \int \limits_{0}^1 \pd{f}{x} \big( tx + (1 - t) r(y), y \big) \mathrm{d} t.
$$
By the definition of $r(y)$ we have $\Phi(r(y), y) = \pd{f}{x}(r(y), y) \equiv 0$.
For $\Phi$ a similar integral formula holds:
$$
\Phi(x, y) = \Phi(r(y), y) + \int \limits_{0}^1 \frac{\mathrm{d}}{\mathrm{d} t}\Phi \big( tx + (1 - t)r(y), y\big) \mathrm{d} t = \big(x - r(y)\big) \int\limits_{0}^1 \pd{\Phi}{x} \big( tx + (1 - t)r(y), y\big) \mathrm{d} t.
$$
Denoting $F(x, y) = \int\limits_{0}^1 \pd{\Phi}{x} \big( tx + (1 - t)r(y), y\big) \mathrm{d} t$ and substituting this expression into the original integral formula for $f$ we get
$$
f(x, y) = f(r(y), y) + \big( x - r(y)\big)^2 F(x, y).
$$
In this formula $\frac{\partial^2 f}{\partial x^2} (0, 0) = 2 F(0, 0) = \gamma \neq 0$. Consider the functions
\begin{equation}\label{change}
\bar x = (x - r(y)) \sqrt{|F(x, y)|} = g(x, y), \quad \bar y = y.
\end{equation}
These are analytic (smooth) functions in a neighbourhood of the coordinate origin and $g(0, 0) = 0$. The Jacobian of $\bar x, \bar y$ at the origin is $\sqrt{\frac{|\gamma|}{2}} \neq 0$, thus \eqref{change} defines a coordinate change in a neighbourhood of the coordinate origin. In the new coordinates, after renaming $f(r(\bar y), \bar y) = k(\bar y)$, the function $f$ takes the form \eqref{morse_normal}. By the definition of the curve $r(y)$ we get
$$
k'(y) = \pd{f}{x} (r(y), y) r'(y) + \pd{f}{y}(r(y), y) = \pd{f}{y}(r(y), y), \quad k''(y) = \frac{\partial^2 f}{\partial x \partial y}(r(y), y)\, r'(y) + \frac{\partial^2 f}{\partial y^2}(r(y), y).
$$
This implies that $k(0) = k'(0) = 0$ and $k''(0) = \frac{\partial^2 f}{\partial y^2} (0, 0)$. The Lemma is proved. $\blacksquare$

\begin{corr}\label{morse2} Consider an analytic (smooth) function $f(x, y)$ on the real plane with a critical point at the coordinate origin. Assume that $f(0, 0) = 0$ and
$$
\frac{\partial^2 f}{\partial x^2} (0, 0) = \frac{\partial^2 f}{\partial y^2} (0, 0) = \frac{\partial^2 f}{\partial x \partial y} (0, 0) = 0.
$$
In other words, the Taylor expansion of $f$ at the coordinate origin has no constant, linear or quadratic part.
Then for every $\alpha \neq 0$ there exist analytic (smooth) functions $h(x, y)$ and $k(x)$ such that
$$
h(0, 0) = 0, \quad \Big(\pd{h}{y}(0, 0)\Big)^2 = \frac{\alpha^2}{4} \neq 0, \quad \pd{h}{x} (0, 0) = 0, \quad k(0) = k'(0) = k''(0) = 0
$$
and
\begin{equation}\label{morse2_normal}
f(x, y) = \frac{\alpha^2}{4} y^2 - h^2(x, y) + k(x).
\end{equation}
\end{corr}

{\it Proof. } Consider the function $\bar f = f - \frac{\alpha^2}{4} y^2$. The coordinate origin is still a critical point of $\bar f$. At the same time $\frac{\partial^2 \bar f}{\partial y^2} (0, 0) = - \frac{\alpha^2}{2} \neq 0$ and $\frac{\partial^2 \bar f}{\partial x \partial y} (0, 0) = 0$. Applying Lemma \ref{morse} (in our situation the coordinates $x, y$ are interchanged with respect to the coordinates in the statement of the Lemma) we get that there exists a coordinate change $\bar x = x, \bar y = g(x, y)$, such that $\bar f = - \bar y^2 + k(\bar x)$. We also have $g(0, 0) = 0$. In the old coordinates this formula takes the form
\begin{equation}\label{tr1}
f - \frac{\alpha^2}{4} y^2 = - g^2(x, y) + k(x).
\end{equation}
Renaming $g(x, y)$ as $h(x, y)$ we get the formula \eqref{morse2_normal} with the condition $h(0, 0) = 0$. Differentiating both sides of equation \eqref{tr1} we get
\begin{equation*}
\begin{aligned}
& \pd{f}{x} = - 2 h \pd{h}{x} + k'(x), \\
& \frac{\partial^2 f}{\partial x^2} = - 2 \Big( \pd{h}{x}\Big)^2 - 2 h \frac{\partial^2 h}{\partial x^2} + k''(x), \\
& \frac{\partial^2 f}{\partial x \partial y} = - 2 \pd{h}{x}\pd{h}{y} - 2 h \frac{\partial^2 h}{\partial x \partial y}, \\
& \frac{\partial^2 f}{\partial y^2} - \frac{\alpha^2}{2} = - 2 \Big( \pd{h}{y}\Big)^2 - 2 h \, \frac{\partial^2 h}{\partial y^2}.
\end{aligned}
\end{equation*}
Substituting $x = y = 0$, the fourth equation gives $\big(\pd{h}{y}(0, 0)\big)^2 = \frac{\alpha^2}{4} \neq 0$. The third equation then yields $\pd{h}{x}(0, 0) = 0$.
Substituting both into the first and second equations, we get $k'(0) = k''(0) = 0$. The Corollary is proved. $\blacksquare$

\begin{thebibliography}{99}
\bibitem{basto} J. Basto-Goncalves, Linearization of resonant vector fields, Transactions of the American Mathematical Society, Vol. 362, No. 12 (2010), pp. 6457-6476
\bibitem{bmk} A. Bolsinov, V. Matveev, A. Konyaev, Nijenhuis geometry, preprint, arXiv:1903.04603
\bibitem{proj} A. Bolsinov, V. Matveev, Geometrical interpretation of Benenti systems, Journal of Geometry and Physics, Vol. 44, 4, pp. 489-506 (2003)
\bibitem{brjuno} A. D. Brjuno, Analytic form of differential equations. I, II, Trudy Moskovskogo Matematicheskogo Obschestva, 25 (1971): 119–262, ISSN 0134-8663, MR 0377192
\bibitem{burde} D. Burde, Left-symmetric algebras, or pre-Lie algebras in geometry and physics, Central European Journal of Mathematics 4, Nr. 3, 323-357 (2006).
\bibitem{burde2} D. Burde, Simple left-symmetric algebras with solvable Lie algebra, Manuscripta Mathematica, 1998, Volume 95, Issue 1, pp. 397-411
\bibitem{chen} K. T. Chen, Equivalence and decomposition of vector fields about an elementary critical point. Amer. J. Math., 85, N 4, (1963), pp. 693-722.
\bibitem{horm} L.V. Hörmander, "The analysis of linear partial differential operators. III. Pseudodifferential operators", Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 1985. 525 pp. ISBN: 3-540-13828-5 (1985)
\bibitem{haantjes} J. Haantjes, On $X_{m}$-forming sets of eigenvectors, Proc. Kon. Ned. Acad. Amsterdam. 1955. Vol. 58. P. 158-162.
\bibitem{ilya} Y. Ilyashenko, S. Yakovenko, Lectures on Analytic Differential Equations, ISBN: 0-8218-3667-6, ISBN-13: 978-0-8218-3667-5, Graduate Studies in Mathematics, vol. 86, 625 pp (russian edition)
\bibitem{kossman} Y. Kosmann-Schwarzbach, F. Magri, Poisson-Nijenhuis structures. Ann. Inst. Henri Poincare, 53 (1990), 35–81.
\bibitem{mok2} O.
Mokhov, Compatible flat metrics, Journal of Applied Math., 2(7) (2002), 337-370.
\bibitem{newlander} A. Newlander, L. Nirenberg, Complex analytic coordinates in almost complex manifolds. Annals of Mathematics. Second Series. 65 (3): 391–404 (1957).
\bibitem{nijenhuis} A. Nijenhuis, $X_{n - 1}$-forming sets of eigenvectors, Indagationes Mathematicae, Vol. 13, No. 2, 1951, pp. 200–212
\bibitem{thompson} G. Thompson, The integrability of a field of endomorphisms, Mathematica Bohemica, Volume: 127, Issue: 4, pp. 605-611
\bibitem{vinberg} E. B. Vinberg, Convex homogeneous cones, Transl. Moscow Math. Soc. 12 (1963), pp. 340-403.
\bibitem{weinstein} A. Weinstein, The local structure of Poisson manifolds, J. Differential Geometry 18 (1983), pp. 523–557
\bibitem{weinstein2} A. Cannas da Silva, A. Weinstein, Geometric Models for Noncommutative Algebras, Berkeley Mathematics Lecture Notes, Volume: 10; 1999; 184 pp; ISBN: 978-0-8218-0952-5
\bibitem{winterhalder} A. Winterhalder, Linear Nijenhuis-Tensors and the Construction of Integrable Systems, arXiv.org:9709008, 1997
\bibitem{yoccoz1} J. C. Yoccoz, Linéarisation des germes de difféomorphismes holomorphes de (C, 0), C. R. Acad. Sci. Paris Série I, t. 306 (1988), 55-58.
\bibitem{yoccoz2} J. C. Yoccoz, Théorème de Siegel, nombres de Brjuno et polynômes quadratiques. Astérisque, 231 (1995), 3–88.
\end{thebibliography}

\end{document}
\begin{document} \title{FederatedScope: A Flexible Federated Learning Platform \\ for Heterogeneity} \author{ Yuexiang Xie$^*$, Zhen Wang$^*$, Dawei Gao, Daoyuan Chen$^\dagger$, Liuyi Yao$^\dagger$, Weirui Kuang$^\dagger$, \\ Yaliang Li$^\ddagger$, Bolin Ding$^\ddagger$, Jingren Zhou \\ Alibaba Group } \begin{abstract} Although remarkable progress has been made by existing federated learning (FL) platforms to provide infrastructures for development, these platforms may not well tackle the challenges brought by various types of heterogeneity, including the heterogeneity in participants' local data, resources, behaviors and learning goals. To fill this gap, in this paper, we propose a novel FL platform, named \emph{{\sf FederatedScope}\xspace}, which employs an event-driven architecture to provide users with great flexibility to independently describe the behaviors of different participants. Such a design makes it easy for users to describe participants with various local training processes, learning goals and backends, and coordinate them into an FL course with synchronous or asynchronous training strategies. Towards an easy-to-use and flexible platform, {\sf FederatedScope}\xspace enables rich types of plug-in operations and components for efficient further development, and we have implemented several important components to better help users with privacy protection, attack simulation and auto-tuning. We have released {\sf FederatedScope}\xspace at \href{https://github.com/alibaba/FederatedScope}{https://github.com/alibaba/FederatedScope} to promote academic research and industrial deployment of federated learning in a wide range of scenarios. 
\end{abstract} \maketitle \pagestyle{\vldbpagestyle} \renewcommand*{\thefootnote}{\fnsymbol{footnote}} \footnotetext[1]{Co-first authors.} \footnotetext[2]{Equal contribution, listed in alphabetical order.} \footnotetext[3]{Corresponding authors, email addresses: \{yaliang.li, bolin.ding\}@alibaba-inc.com} \renewcommand*{\thefootnote}{\arabic{footnote}} \section{Introduction} \label{sec:intro} As one of the feasible solutions to address the privacy leakage issue when utilizing isolated data from multiple sources, Federated Learning (FL)~\cite{mcmahan2017communication,konevcny2016federated,yang2019federated} has rapidly gained enormous popularity in both academia and industry~\cite{yang2019federated_app,hard2018federated,xu2021federated, leroy2019federated}. Such widespread adoption of FL is inextricably tied to the support of FL platforms, such as TFF~\cite{tff}, FATE~\cite{yang2019federated}, PySyft~\cite{pysyft} and FedML~\cite{fedml}, which provide users with functionalities to get started quickly and focus on developing new FL algorithms and applications. Although the existing FL platforms have made remarkable progress, there are more burgeoning demands from FL research and deployment, which are mainly brought by the heterogeneity of FL. Specifically, we summarize the heterogeneity of FL as the following four aspects. (1) \textbf{Heterogeneity in Local Data}.
The isolated data in FL, which are often owned by different parties or generated by different edge devices, vary a lot among the FL participants in terms of quality, quantity, underlying distributions, etc. Such heterogeneity in data can lead to sub-optimal performance when applying the vanilla FedAvg~\cite{mcmahan2017communication}, i.e., producing one global model for all the participants via the same local training process. Recent studies on Personalized FL~\cite{fallah2020personalized,tan2021towards} customize the local training processes of the participants according to their local data, including applying client-specific parameters, training schedules, submodules, and fusing approaches. Supporting these customizations brings challenges to the extensibility and flexibility of FL platforms. (2) \textbf{Heterogeneity in Participants' Resources}. Apart from the heterogeneity of data, the participants' resources can also be very different, including computation resources, storage resources, communication bandwidths, reliability, and so on. However, most of the existing FL platforms~\cite{tff, yang2019federated, fedml} adopt a synchronous training strategy for FL, which might lead to additional overhead caused by the heterogeneity in participants' resources. For example, with the synchronous training strategy, the whole FL system can suffer from slow clients, which might be caused by network congestion, sluggish local training, or even device crashes. Thus, it would be better if FL platforms allow users to implement/execute FL with asynchronous training strategies~\cite{chen2020asynchronous,wu2020safa} to ensure both efficiency and effectiveness in real-world FL applications. (3) \textbf{Heterogeneity in Participants' Behaviors}.
In the vanilla FedAvg~\cite{mcmahan2017communication}, the participants only exchange homogeneous information (e.g., model parameters or gradients) and have the same behaviors (e.g., updating models based on the local data via SGD). However, practical and recent FL applications often require exchanging various types of information among participants and executing diverse training processes, which leads to rich behaviors. For example, in the emerging federated graph learning applications~\cite{xie2021federated,zhang2021subgraph,wu2021fedgnn,wang2022federatedscopegnn}, participants exchange and handle multiple types of information, including model parameters, gradients, node embeddings, etc.; real-world FL applications might involve participants with different backends, which indicates that participants need backend-dependent computation graphs and accordingly perform diverse training processes. Besides, due to the heterogeneity in local data discussed above, participants could locally train with client-specific configurations that are suitable for their local data to achieve a better model utility. The heterogeneity in participants' behaviors, caused by handling various exchanged information, executing diverse training processes and applying client-specific configurations in local training, prompts FL platforms to support flexible expression of participants' rich behaviors. (4) \textbf{Heterogeneity in Learning Goals}. Towards a more general scope of utilizing isolated data, some recent FL studies~\cite{xie2021federated,fedem,smith2017federated} propose to allow participants to collaboratively learn common knowledge while optimizing for different learning goals. For example, several research institutes can federally train a graph neural network to capture the generalizable structural patterns of molecules by using their own molecule data that correspond to different learning goals, such as predicting solubility, enzyme type, penetration, etc.
Another example, which can benefit from such heterogeneous federated learning, is pre-training language models in natural language processing~\cite{tian2022fedbert}: participants can collaboratively train with different objectives based on their private corpora, which cannot be directly shared in some scenarios such as medicine and finance. To handle the heterogeneity in learning goals, FL platforms should allow participants to locally train with different learning objectives and only share parts of the local models for collaborative learning. The aforementioned aspects of heterogeneity are commonly observed in real-world FL applications. Although we discuss them in the above four aspects, they can appear jointly in a single application. Facing such mixed heterogeneity, users are eager for an FL platform that is \textbf{flexible}: Participants should be allowed to express their diverse behaviors and different learning goals according to their own local data and system resources, and these participants can be effortlessly coordinated with synchronous or asynchronous training strategies for completing the federal training procedure based on a pre-defined consensus. Towards such a purpose, in this paper we propose {\sf FederatedScope}\xspace, a novel FL platform that handles the heterogeneity of FL by employing the event-driven architecture~\cite{eventdriven, kafka} to frame FL courses into $<$event, handler$>$ pairs. Note that it is not trivial to build up a comprehensive FL platform with such a formalization. In particular, considering the heterogeneity of federated learning, such formalization is expected to express diverse behaviors of servers and clients for handling the heterogeneity, and be well-modularized so that users can conveniently develop new FL algorithms and applications.
To fulfill this goal, the events provided in {\sf FederatedScope}\xspace can be categorized into two types, i.e., \textit{events related to message passing} and \textit{events related to condition checking}, which are used to describe what happens in the FL courses from the perspective of an individual participant. For example, in a vanilla FedAvg, a typical event of clients is ``{\sf receiving\_models}'', which indicates that clients receive the global model from the server, and the corresponding handler can be ``\textit{train the received global model based on the local data, and then return the model updates}''. The handlers, triggered by events, describe what actions should be taken when a specific event happens. These events happen in the intended logical order and naturally trigger the corresponding handlers, which can precisely express various FL algorithms and procedures. All the participants can be coordinated with the pre-defined events related to message passing and condition checking to construct a suitable FL course for specific scenarios and applications.

\begin{figure}
\caption{{\sf FederatedScope}\xspace provides different levels of programming interfaces for users.}
\label{fig:programming_interfaces}
\end{figure}

Besides the flexibility, as an FL platform, {\sf FederatedScope}\xspace also provides great \textbf{usability}, that is, {\sf FederatedScope}\xspace provides different levels of programming interfaces to meet different requirements from users, as demonstrated in Figure~\ref{fig:programming_interfaces}. For the users who want to design new FL algorithms, as discussed above, {\sf FederatedScope}\xspace allows them to add new $<$event, handler$>$ pairs to implement their ideas.
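The $<$event, handler$>$ formalization can be sketched in a few lines; the class and method names below are our illustrative choices, not the actual {\sf FederatedScope}\xspace API:

```python
# Hypothetical sketch of the <event, handler> formalization: each participant
# registers handlers that are invoked when the corresponding event occurs.
class Participant:
    def __init__(self):
        self.handlers = {}  # event name -> list of registered callbacks

    def register(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def trigger(self, event, payload=None):
        # events occur in the intended logical order and invoke their handlers
        for handler in self.handlers.get(event, []):
            handler(payload)

client = Participant()
received = []

def on_receiving_models(global_model):
    # "train the received global model based on the local data,
    #  and then return the model updates"
    received.append(global_model)

client.register('receiving_models', on_receiving_models)
client.trigger('receiving_models', {'round': 0, 'model': [0.0, 0.0]})
assert received == [{'round': 0, 'model': [0.0, 0.0]}]
```

Because each participant only registers its own handlers, diverse client behaviors can be described independently and later coordinated by the order in which events occur.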
For the users who want to directly apply existing FL techniques to certain application scenarios, {\sf FederatedScope}\xspace provides rich sets of events and corresponding handlers, core functionalities and several important plug-in components, all of which can be directly called, and thus users only need to focus on a necessary set of interfaces to be integrated or implemented. For example, we have implemented several personalized federated learning algorithms, including applying client-wise configuration, maintaining client-wise sub-modules, global-local fusing, etc., for users' convenience. Besides personalization, {\sf FederatedScope}\xspace provides users with functionalities such as asynchronous training, privacy protection and cross-backend FL, and several important plug-ins such as attack simulation for protection verification, and auto-tuning to help users automatically seek suitable hyperparameters. Last but not least, {\sf FederatedScope}\xspace also has great \textbf{extensibility}, which is brought by the fact that the set of $<$event, handler$>$ pairs can be easily extended by adding new ones. Take personalization again as an example: to add a new personalization algorithm, users only need to add the new behaviors (e.g., adopting a client-specific training course) in the corresponding handlers. Such extension convenience also holds for all the other functions such as federated aggregators, asynchronous training, privacy protection, etc. In this way, {\sf FederatedScope}\xspace can be easily extended to include new functions or plug-ins to satisfy new requirements brought by new developments and support a variety of new scenarios. \stitle{Contributions.} Our contributions can be summarized as: (1) Motivated by the heterogeneity challenges from a wide range of FL applications, we propose and release {\sf FederatedScope}\xspace, a novel FL platform to handle heterogeneity in FL.
The proposed {\sf FederatedScope}\xspace promotes the development of FL techniques and the deployment of FL applications. (2) With the event-driven architecture, {\sf FederatedScope}\xspace provides users with rich yet extendable sets of events and corresponding handlers, core functionalities such as asynchronous training, personalization and cross-backend FL, and several important plug-in components. These implementations make it easy for users to apply FL algorithms in both academic and industrial applications. (3) {\sf FederatedScope}\xspace brings great flexibility, usability and extensibility to users, broadens the application scope and enables more tasks that would otherwise be infeasible due to challenges brought by various types of heterogeneity in FL. \section{Preliminary} \subsection{Problem Definition} \label{sec:preliminary} Federated Learning (FL)~\cite{konevcny2016federated, mcmahan2017communication, yang2019federated}, a learning paradigm for collaboratively training models from dispersed data without directly sharing private information, involves multiple participants who are willing to contribute their local data and computation resources. We use \textit{server} to denote the participant(s) who are responsible for coordinating and aggregating, while other participants are \textit{clients}. During a typical \textit{training round} of an FL course, clients update the global model received from the server by training it with local data, and send the model updates back to the server for collaborative aggregation. In repeated training rounds, the (possibly sensitive) training data is always kept locally in each client; the server and clients only exchange aggregated and meta information, such as model parameters, gradients, public keys, hyperparameters, etc., which, to some degree, alleviates concerns about data privacy.
To further satisfy different types of formal privacy protection requirements, various privacy protection techniques can be integrated into FL, such as Differential Privacy (DP)~\cite{triastcyn2019federated, wei2020nbafl}, Homomorphic Encryption (HE)~\cite{hardy2017private, fang2021large}, and Secure Multi-Party Computation (MPC)~\cite{melis2019exploiting, bonawitz2017practical}. In short, the goal of FL is to jointly train a global model in a privacy-preserving manner and achieve a better performance compared to that without collaboration. Formally, there are $M$ clients, and the $m$-th client owns a {\em private training dataset} $\mathcal{D}_m=\{(x_i^{(m)}, y_i^{(m)}) \in \mathcal{X} \times \mathcal{Y}, ~ i=1,2,\ldots,|\mathcal{D}_m|\}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the input feature space and the label space, respectively. $\mathcal{D}_m$ is stored in the $m$-th client's private space, and $n=\sum_{m=1}^{M}|\mathcal{D}_m|$ is the total number of training instances. Without sharing $\mathcal{D}_m$ directly with each other and the server, the $M$ clients together aim to train a model $h_{\theta}: \mathcal{X} \rightarrow \mathcal{Y}$ parameterized by $\theta$, with the {\em loss} $F: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}^{+} \cup \{0\}$. The {\em FL loss function} is: \begin{equation} \mathcal{L} = \frac{1}{n}\sum_{m=1}^{M}\sum_{(x_i^{(m)}, y_i^{(m)}) \in \mathcal{D}_m} F\left(h_{\theta}(x_i^{(m)}),y_i^{(m)}\right). \label{eq:loss} \end{equation} \begin{figure*} \caption{Overview of an FL round implemented with FederatedScope.} \label{fig:oberview} \end{figure*} \stitle{Extensions.} For the simplicity of presentation, we focus on a {\em vanilla FL} to minimize the loss function in Equation~\eqref{eq:loss} in most parts of this paper. 
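Concretely, the objective in Equation~\eqref{eq:loss} is the per-example average of $F$ over the union of the clients' private datasets; a toy sketch (the linear model, squared loss, and data below are our illustrative choices):

```python
# Sketch of the FL loss in Equation (1): average of per-example losses over
# all M clients' private datasets, with n the total number of examples.
def fl_loss(client_datasets, h, loss):
    """client_datasets: list (one per client) of lists of (x, y) pairs kept
    in each client's private space; h: shared model h_theta; loss: per-example
    loss F mapping (prediction, label) to a non-negative number."""
    n = sum(len(d) for d in client_datasets)
    total = sum(loss(h(x), y) for d in client_datasets for (x, y) in d)
    return total / n

h = lambda x: 2 * x                     # model h_theta with theta = 2
F = lambda pred, y: (pred - y) ** 2     # squared loss as an example of F
clients = [[(1, 2), (2, 4)], [(3, 7)]]  # M = 2 clients, n = 3 examples
print(fl_loss(clients, h, F))           # (0 + 0 + 1) / 3
```

In an actual FL course this quantity is never computed centrally; each client evaluates its own inner sum locally, and only aggregated updates reach the server.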
Our {\sf FederatedScope}\xspace easily supports different federated settings in real-world FL applications, with more complicated loss functions, in order to handle the heterogeneity as discussed in Section~\ref{sec:intro}. For example, for the purpose of personalization, the input feature spaces, the label spaces, and the underlying learning goals can be different for different clients. In {\sf FederatedScope}\xspace, clients can adopt different models and loss functions in local training, and only federally train the shared parts of the models. We will discuss more details in Section~\ref{subsec:p-m}. \subsection{Related Works} \stitle{Comparisons with existing FL platforms.} In recent years, growing along with the development of federated learning, federated learning platforms, including TFF~\cite{tff}, FATE~\cite{yang2019federated}, LEAF~\cite{leaf}, PySyft~\cite{pysyft}, FedML~\cite{fedml}, FedScale~\cite{lai2021fedscale}, etc., have been proposed to support various kinds of applications. These federated learning platforms provide data, models and algorithms, which saves users the effort of implementing from scratch and makes development easy. Most of these existing platforms adopt a procedural programming paradigm, which requires users to explicitly declare a sequential training process and computational graph from the global perspective. However, such a design makes the existing FL platforms unable to provide the required flexibility and extendability for the burgeoning demands from FL research and deployment, which limits the promotion and broadening of federated learning's application scenarios. Meanwhile, users also expect FL platforms to become more convenient and simpler for handling the aforementioned heterogeneity in real-world FL applications. To tackle these challenges, we propose {\sf FederatedScope}\xspace to provide great flexibility and extendability for users to handle the heterogeneity in FL.
{\sf FederatedScope}\xspace provides rich implementations of FL algorithms for convenient usage, and provides different levels of programming interfaces for users to develop new algorithms. \stitle{Comparisons with distributed machine learning.} In distributed machine learning, the server has the right to control the behaviors of clients, while in federated learning all the participants can have their own behaviors following the achieved consensus to collaboratively train the model. For example, given the consensus that participants only need to share parts of the model parameters, clients can apply client-wise training configurations (e.g., training steps, learning rate, regularizer, optimizer, and so on) to locally train the model, and keep the non-shared model parameters and the learning goals (which might be different among clients) private. To satisfy such requirements, {\sf FederatedScope}\xspace gives participants the right to describe their behaviors from their own perspectives. Besides, in federated learning, the quality, quantity, and distributions of clients' local data can be very diverse. Such heterogeneity in data makes it challenging to collaboratively learn, which motivates {\sf FederatedScope}\xspace to provide novel functionalities, such as asynchronous training strategies (in Section~\ref{subsec:asynchronous}) and personalized federated learning algorithms (in Section~\ref{subsec:p-m}), to make better use of the isolated data. Furthermore, there exist more privacy/security protection requirements in federated learning compared to distributed machine learning.
To tackle this, {\sf FederatedScope}\xspace provides Byzantine fault tolerance algorithms to defend against malicious participants (in Section~\ref{sec:implementation}), as well as a privacy protection component (in Section~\ref{subsec:privacy_protection}) and an attack simulation component (in Section~\ref{subsec:privacy_attack}) to enhance and verify the privacy protection strength of FL applications. \section{Design of FederatedScope} \label{sec:methodoloy} In this section, we introduce the design of {\sf FederatedScope}\xspace, showing how an FL training course can be framed and implemented using an event-driven architecture and why {\sf FederatedScope}\xspace makes it easy to handle heterogeneity in federated learning. The overall structure is illustrated in Appendix~\ref{appendix:overvall_architecture}. \subsection{Overview} An FL course consists of multiple rounds of training, and a typical round implemented with {\sf FederatedScope}\xspace is illustrated in Figure~\ref{fig:oberview}, which includes four major steps: (1) {\bf Broadcast Models:} the server broadcasts the up-to-date global model to the involved clients; (2) {\bf Local Training:} upon receiving the global model, clients perform local training using their trainers based on their private data; (3) {\bf Return Updates:} after local training, clients return the model updates to the server; (4) {\bf Federated Aggregation:} with the help of an aggregator, the server performs federated aggregation on the received model updates, and optimizes the global model. To facilitate efficient development and deployment of such an FL course with multiple computation/communication rounds and different roles, there are two important design principles of {\sf FederatedScope}\xspace.
\squishlist \item {\em Minimal dependency between different roles.} In {\sf FederatedScope}\xspace, each client or the server takes care of only the minimal portion of the job it needs to collaboratively accomplish, including the model to be collaboratively learned and the exchanged messages. While allowing both synchronous and asynchronous training, we want to avoid burdening the server with too many coordination and scheduling duties. This is especially important when we consider the heterogeneity of resources and learning goals of the clients. \item {\em Flexible and expressive programming interfaces for algorithm development and plug-in.} {\sf FederatedScope}\xspace aims to enable efficient development of FL algorithms via a proper abstraction of FL courses and a necessary set of interfaces that developers need to implement. Moreover, for the purpose of privacy protection and other functionalities, operators (e.g., for noise injection and encryption) and components (e.g., for auto-tuning) need to be plugged into the FL course in a flexible way. \squishend Based on these principles, we first give an overview of our design. \stitle{Basic infrastructure.} {\sf FederatedScope}\xspace employs an {\em event-driven} architecture within which the behaviors of different clients and the server in an FL course can be programmed (relatively) independently. The information exchange among participants and the conditions to be checked by participants during the FL course are described as {\em events} (trapezoids within pink areas in Figure~\ref{fig:oberview}); when an event occurs, the corresponding {\em handlers} (hexagons within pink areas) that describe the behavior of a participant are triggered. For example, when ``{\sf receiving\_models}'' occurs, ``local training'' in a client is triggered; when ``{\sf goal\_achieved}'' occurs, ``federated aggregation'' in the server is triggered.
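The event-to-handler triggering described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the mechanism (class and method names are our own, not the actual {\sf FederatedScope} interfaces):

```python
class Participant:
    """A server or client that reacts to events via registered handlers."""

    def __init__(self):
        self._handlers = {}

    def register_handler(self, event, handler):
        # Each event is linked to exactly one handler in this sketch.
        self._handlers[event] = handler

    def on(self, event, *args):
        # When an event occurs, the handler bound to it is triggered.
        handler = self._handlers.get(event)
        return handler(*args) if handler else None
```

For instance, a client would register a local-training callback under ``{\sf receiving\_models}'', and the server would register a federated-aggregation callback under ``{\sf goal\_achieved}''.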
It turns out that the pairs of events and handlers are sufficiently expressive to describe all the existing (both synchronous and asynchronous) FL algorithms, as well as new ones we implement in {\sf FederatedScope}\xspace. With such an infrastructure, {\sf FederatedScope}\xspace can easily support different machine learning backends (e.g., PyTorch~\cite{pytorch} and TensorFlow~\cite{tensorflow}). All users need to do is transform the exchanged information (called {\em messages}), which might be related to participants' local backends, into backend-independent representations before sharing, and parse the received messages according to the receiver's backend for further usage. We call this procedure {\em message translation}. \stitle{Programming interfaces.} Within the above infrastructure provided by {\sf FederatedScope}\xspace, for each client or server, we only need to implement a {\em Trainer} (green dashed rectangles in Figure~\ref{fig:oberview}) or an {\em Aggregator} (blue dashed rectangles), respectively, which encapsulates the details of local training or federated aggregation with well-defined interfaces, e.g., the loss function, optimizer, training step, aggregation algorithms, etc. A Trainer can be implemented as if a machine learning model were trained on the local data owned by a client. Besides the Trainer and the Aggregator, the design of {\sf FederatedScope}\xspace allows flexible plug-in operators and components. For example, in order to ensure differential privacy, noise injection operators can be plugged in to perturb the messages to be sent, where the amount of noise can be customized for different training tasks. More details of the programming interfaces can be found in Section~\ref{sec:implementation}. \stitle{Why event-driven?} Benefiting from such an event-driven design, {\sf FederatedScope}\xspace provides users with the expressiveness and flexibility to handle various types of heterogeneity in FL.
Users are not required to implement FL courses from a centralized perspective as in a procedural programming paradigm, which might be rather complicated for real-world FL tasks due to the heterogeneity. Instead, each individual participant (a server or a client) is instantiated with its own events and handlers to independently describe its behaviors, such as what actions to take when receiving a certain type of message from others, how to perform local training and what information should be returned (for a client), and when to perform federated aggregation and start/terminate the training process (for a server). In this way, the server performs federated aggregation under flexibly triggered conditions, which can prevent the training process from being blocked by unreliable or slow clients (more details in Section~\ref{subsec:asynchronous}). Different clients may customize their training configurations according to their own data distributions, tasks, and resources, such as training with different trainers for personalization (Section~\ref{subsub:personalized}), learning toward different goals (Section~\ref{subsub:multiple}), and running on different backends (Section~\ref{subsec:cross-backend}). {\sf FederatedScope}\xspace also provides some native plug-in modules (Section~\ref{sec:plugins}) for various important functionalities, including privacy protection, attack simulation, and auto-tuning. Before diving into these parts, we first provide more details about the event-driven design of {\sf FederatedScope}\xspace in Section~\ref{subsec:event_driven}. \subsection{Event-driven Architecture} \label{subsec:event_driven} Event-driven architectures are widely adopted in distributed systems~\cite{eventdriven, kafka}.
With such an architecture, an FL training course in {\sf FederatedScope}\xspace can be framed into $<$event, handler$>$ pairs: the participants wait for certain {\em events} (e.g., receiving model parameters broadcast from the server) to trigger corresponding {\em handlers} (e.g., training models based on the local data). Hence, developers can express the behaviors of a participant (a server or a client) independently from its own perspective, rather than sequentially from a global perspective (considering all the participants together), and the implementations can be better modularized. The events in {\sf FederatedScope}\xspace are categorized into two classes. One is related to message passing (e.g., exchanging information with others), which is also considered in previous FL platforms, e.g., receiving user-defined messages in FedML~\cite{fedml} and invoking requests in FedKeeper~\cite{chadha2020towards}. The other class of events checks the satisfaction of customizable conditions (e.g., whether a pre-defined percentage of feedback from clients has been received). Some examples of events provided in {\sf FederatedScope}\xspace are presented in Appendix~\ref{appendix:events}. Next, we introduce these two types of events in more detail. \ssstitle{Events Related to Message Passing.} The exchanged information among participants is abstracted as messages, and an FL training course consists of several rounds of message passing. Multiple types of messages are involved in an FL course, including but not limited to building up (e.g., \textit{join\_in} and \textit{id\_assignment}), training (e.g., \textit{model\_param} and \textit{gradients}), and evaluating (e.g., \textit{metrics}). For the participants, receiving a message can be regarded as an event, and their follow-up behaviors can be described in handling functions (i.e., the handlers) that handle the received messages.
A handling function can be invoked by the event of receiving one or more types of messages, while receiving a certain type of message should only trigger one handling function directly. Take vanilla FedAvg as an example: the clients' handling function for the event ``{\sf receiving\_models}'' can be described as ``\textit{train the received global model based on the local data, and then return the model updates}'', and the server's handling function for the event ``{\sf receiving\_updates}'' can be described as ``\textit{save the model updates, and check whether all the feedback has been received}''. Generally, by defining the events related to message passing, {\sf FederatedScope}\xspace provides users with the expressiveness to flexibly describe heterogeneous message exchange, such as exchanging model parameters, gradients, public keys, embeddings, generators, and so on. Meanwhile, through customizing the operations in the corresponding handlers, users can conveniently describe rich behaviors of participants, including training models based on the local data with personalized configurations, performing federated aggregation, predicting, clustering, generating, etc. We will discuss more about this in Section~\ref{subsec:p-m}. \ssstitle{Events Related to Condition Checking.} Apart from the events related to message passing, the events related to condition checking are also indispensable for FL implementations. These events and the corresponding handlers describe the participants' behaviors when certain conditions are satisfied. For example, in an FL course, for the purpose of synchronization in training, the server checks whether the updated gradients or model parameters have been received from all the clients; if yes, it invokes an event ``{\sf all\_received}'', and this event triggers the federated aggregation and pushes forward the training process.
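The two FedAvg handling functions quoted above can be sketched as plain Python callables. Function names and signatures here are illustrative assumptions for exposition, not the actual {\sf FederatedScope} API:

```python
def client_handle_receiving_models(global_model, local_data, local_train):
    # "train the received global model based on the local data,
    #  and then return the model updates"
    return local_train(global_model, local_data)


def server_handle_receiving_updates(update, buffer, num_sampled):
    # "save the model updates, and check whether all the feedback
    #  has been received"; returns True when 'all_received' should fire
    buffer.append(update)
    return len(buffer) == num_sampled
```

When the server's handler returns \textit{True}, the ``{\sf all\_received}'' event would be invoked and federated aggregation triggered.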
One important usage of the events related to condition checking is to express the customizable conditions for triggering the federated aggregation. Besides ``{\sf all\_received}'', in order to support asynchronous training, {\sf FederatedScope}\xspace also provides the events ``{\sf goal\_achieved}'' and ``{\sf time\_up}'' for this purpose. Specifically, ``{\sf goal\_achieved}'' indicates that a certain percentage of feedback (the so-called aggregation goal) has been received, and ``{\sf time\_up}'' denotes that the user-allocated time budget for each training round has run out. Unlike the event ``{\sf all\_received}'', which forces the server to wait for feedback from all the clients, ``{\sf goal\_achieved}'' allows the training process to move forward once the server has received enough feedback, while ``{\sf time\_up}'' encourages the server to collect as much feedback as possible within the time budget; both enable different asynchronous training strategies in FL. Furthermore, the events related to condition checking can also be used to describe the behaviors of participants. For example, the server can be equipped with the events ``{\sf all\_joined\_in}'' (i.e., all the clients have joined in the FL course) and ``{\sf early\_stop}'' (i.e., pre-defined early stop conditions are satisfied) to describe when to start and terminate the training process, respectively, while the clients can use the event ``{\sf performance\_drop}'' to trigger personalization when the received global model causes a performance drop, and use ``{\sf low\_bandwidth}'' to reduce the communication frequency when the available bandwidth is insufficient. {\sf FederatedScope}\xspace provides warnings if there exist conflicts, and adopts a default resolution following the ``overwriting'' principle. Specifically, in an FL course implemented with {\sf FederatedScope}\xspace, each event is only permitted to be linked with one handler directly during the execution process.
If an event is linked with more than one handler, which might cause conflicts in an FL course, {\sf FederatedScope}\xspace raises a warning for users, and the latest linked handler overwrites the older ones (e.g., the default handler is overwritten by user-customized handlers). Finally, the handlers that take effect in an FL course are printed out and recorded in the experimental logs. Users can remove some handlers or adjust the linking order to make sure the intended handlers take effect in the constructed FL courses. {\sf FederatedScope}\xspace provides a large number of predefined $<$event, handler$>$ pairs, which cover rich implementations of existing FL algorithms, such as FedAvg~\cite{mcmahan2017communication}, personalization~\cite{pFedME, fedbn, li2021ditto}, federated graph learning approaches~\cite{wang2022federatedscopegnn}, and so on. Users can implement their own algorithms based on these provided $<$event, handler$>$ pairs. However, it is beyond our scope here to exhaustively list all the possible events related to message passing and condition checking. The most important advantage is that the event-driven design of {\sf FederatedScope}\xspace provides users with the expressiveness and flexibility to implement and customize diverse FL algorithms. Next, with {\sf FederatedScope}\xspace, we will demonstrate how to execute asynchronous federated training (Section \ref{subsec:asynchronous}), how to describe rich behaviors of the participants (Section \ref{subsec:p-m}), and how to conduct cross-backend FL (Section \ref{subsec:cross-backend}) in order to handle the heterogeneity of FL.
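As a concrete illustration, the three aggregation-triggering conditions discussed in this section (``{\sf all\_received}'', ``{\sf goal\_achieved}'', and ``{\sf time\_up}'') could be checked as in the following sketch. This is a hedged illustration under our own naming conventions, not {\sf FederatedScope}'s actual implementation:

```python
def should_aggregate(mode, num_received, num_sampled,
                     goal=0.8, elapsed=0.0, budget=None):
    """Return True when the chosen condition for aggregation is met."""
    if mode == "all_received":      # wait for feedback from every client
        return num_received >= num_sampled
    if mode == "goal_achieved":     # a fraction (aggregation goal) suffices
        return num_received >= goal * num_sampled
    if mode == "time_up":           # the per-round time budget has run out
        return budget is not None and elapsed >= budget
    raise ValueError(f"unknown mode: {mode}")
```

In an event-driven FL course, such a predicate would be evaluated inside the server's condition-checking routine, and a \textit{True} result would invoke the corresponding event.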
\begin{figure*} \caption{An example of asynchronous training strategy in federated learning.} \label{fig:async} \end{figure*} \subsection{Supporting Asynchronous Training} \label{subsec:asynchronous} Asynchronous training strategies have been successfully applied in distributed machine learning to improve training efficiency~\cite{lian2015asynchronous, zhang2016staleness, chen2020asynchronous}. Considering the aforementioned heterogeneity of FL in Section \ref{sec:intro}, the asynchronous training strategy is important for balancing model performance and training efficiency, especially in cross-device scenarios that involve a large number of unreliable and diverse clients. With the provided events and handlers, which specify what actions to take (i.e., handlers) when certain customizable conditions are satisfied (i.e., events), {\sf FederatedScope}\xspace enables users to conveniently design and implement suitable asynchronous training strategies for their FL applications. \subsubsection{Behaviors for asynchronous FL} \label{subsubsec:async_behavior} Compared with the synchronous training strategy, several unique behaviors of participants might arise in asynchronous FL, which are modularized and provided in {\sf FederatedScope}\xspace as follows: (i) \textit{Tolerating staleness in federated aggregation}. The term ``staleness'' denotes the version difference between the up-to-date global model maintained at the server and the model that a client starts from for local training, which should be tolerable to some extent in asynchronous FL. Specifically, in the federated aggregation, the staled updates from slow clients might be discounted in the aggregator but they still contribute to the aggregation. Of course, when the staleness is larger than a pre-defined threshold, the updates become outdated and thus can be directly dropped. (ii) \textit{Sampling clients with responsiveness-related strategies}.
The uniform strategy for sampling clients~\cite{mcmahan2017communication} might bring model bias in asynchronous FL, since the clients with low response speeds would contribute staled updates with higher probabilities compared with those who respond fast, which implies that the contributions of slow clients would be discounted or even dropped in federated aggregation. Similar phenomena happen in synchronous FL using the over-selection mechanism~\cite{tff}, as pointed out by previous studies~\cite{huba2022papaya, nguyen2022fedbuff, li2020federated}. To tackle such an issue, with prior knowledge of response speeds (which can be estimated from device information or historical responses), {\sf FederatedScope}\xspace provides a responsiveness-related sampling strategy (i.e., the sampling probabilities are related to the response speeds) and a group sampling strategy (i.e., clients with similar response speeds are grouped). (iii) \textit{Broadcasting models after receiving updates}. With the synchronous training strategy, the server broadcasts the up-to-date model to the sampled clients after performing federated aggregation. Such a broadcasting manner, denoted as \textit{after aggregating} here, can also be adopted in asynchronous FL~\cite{wu2020safa}. We also provide another broadcasting manner to achieve asynchronous FL, named \textit{after receiving}~\cite{nguyen2022fedbuff}, in which the server sends out the current (up-to-date) model to a sampled idle client once feedback is received. Compared with \textit{after aggregating}, the \textit{after receiving} manner keeps the concurrency consistent and promotes an efficient FL system~\cite{huba2022papaya}. To the best of our knowledge, with the provided events and the corresponding handlers that describe the above behaviors, most of the existing studies on asynchronous FL can be conveniently implemented with {\sf FederatedScope}\xspace.
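Behaviors (i) and (ii) above can be sketched roughly as follows. The linear discount scheme and the proportional sampling weights are illustrative assumptions of ours, not the exact strategies shipped with {\sf FederatedScope}:

```python
import random


def responsiveness_sampling(clients, speeds, k, seed=None):
    # Sample k clients with probability proportional to response speed,
    # so that slow clients are less likely to contribute staled updates.
    rng = random.Random(seed)
    return rng.choices(clients, weights=speeds, k=k)


def aggregate_with_staleness(updates, current_round, max_staleness=5):
    # updates: list of (model_update, round_started) pairs. Staled updates
    # are discounted; updates beyond the threshold are dropped directly.
    weighted_sum, total_weight = 0.0, 0.0
    for update, round_started in updates:
        staleness = current_round - round_started
        if staleness > max_staleness:
            continue  # outdated, drop
        weight = 1.0 / (1.0 + staleness)  # illustrative discount
        weighted_sum += weight * update
        total_weight += weight
    return weighted_sum / total_weight if total_weight else None
```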
For example, FedBuff~\cite{nguyen2022fedbuff} proposes to register the event ``{\sf goal\_achieved}'' and apply the \textit{after receiving} broadcasting manner, while SAFA~\cite{wu2020safa} suggests equipping the \textit{after aggregating} broadcasting manner with the event ``{\sf goal\_achieved}'' and manages clients based on their staleness. In particular, a synchronous FL course with the over-selection mechanism can be easily implemented in {\sf FederatedScope}\xspace by using the event ``{\sf goal\_achieved}'' and setting the staleness tolerance to 0 (i.e., dropping all staled updates). In a nutshell, {\sf FederatedScope}\xspace is well-modularized toward flexibility and extensibility for handling the heterogeneity of FL via applying asynchronous training strategies. \subsubsection{An example of asynchronous FL with {\sf FederatedScope}\xspace } \begin{example} An example of an asynchronous training strategy for FL is demonstrated in Figure~\ref{fig:async}. At the beginning of the training process, the server samples a subset of the clients and broadcasts the model parameters (denoted as $\theta^{(i)}$ for the $i$-th round in the figure) to the sampled clients (e.g., Clients A/B/C in round $0$ and A/C/D in round $1$). These sampled clients perform local training based on their local data, and return the updated model parameters (e.g., $\theta^{(0)}_A$, $\theta^{(0)}_B$ and $\theta^{(0)}_C$) to the server once they finish training. If the server receives all the feedback from the sampled clients in time, as demonstrated in round $0$, the event ``{\sf all\_received}'' happens, the corresponding handler is triggered, and federated aggregation is performed to generate the new global model (e.g., $\theta^{(1)}$) for the next training round. However, in some cases, such as round $1$, some of the clients (e.g., Client D) fail to return the updated model in time due to exceptions such as sluggish local training or a device crash.
With the asynchronous training strategy implemented in {\sf FederatedScope}\xspace, when the allocated time budget has run out, the ``{\sf time\_up}'' event occurs, and the server starts performing federated aggregation if feedback has been received from a sufficient number of clients; otherwise the server executes ``remedial measures'', such as restarting the training round, additionally sampling reliable clients, or adaptively adjusting the time budget. The staled feedback, such as $\theta^{(1)}_D$ in round $2$, can be saved and contribute to the aggregation if such staleness is still tolerable according to the user-defined threshold in the corresponding events. $\triangle$ \end{example} As shown in this example, applying asynchronous training can improve the learning efficiency and ensure the effectiveness of the learned model when facing the aforementioned heterogeneity of FL. More advanced strategies for asynchronous machine learning and FL~\cite{lian2015asynchronous, zhang2016staleness, chen2020asynchronous, wu2020safa} have been integrated into {\sf FederatedScope}\xspace, such as discounting the staled updates, grouping clients according to their responsiveness, and so on. \subsubsection{Convergence analysis} We provide a theoretical analysis of convergence when applying the asynchronous training strategies in FL, under some widely-adopted assumptions~\cite{nguyen2022fedbuff, chai2021fedat, wang2021field} including smoothness and convexity of the loss function, bounded gradient variances, and bounded staleness. \begin{proposition} Consider the optimization problem defined in Equation~\eqref{eq:loss}. Each sampled client takes $Q$ local SGD steps with learning rate $\eta$ and returns its model update to the server for federated aggregation.
Assume that the loss function $F$ is $L$-smooth and $\mu$-strongly convex; when setting $0< \mu Q \eta < 1$, the convergence of the global model $\theta^{(t)}$ to the optimum $\theta^{(*)}$ after $T$ rounds satisfies: \begin{align} \quad \mathbb{E}\big[F(\theta^{(T)}) - F(\theta^{(*)})\big] \leq (1-\mu Q \eta)^T\mathbb{E}\big[ F(\theta^{(0)}) - F(\theta^{(*)})\big]& \nonumber\\ +\frac{3LQ\eta}{\mu} \left(\sigma_l^2 + \sigma_g^2 + C\right)\left[\eta Q L (\tau_{\max}^2 +1)+\frac{1}{2}\right],& \end{align} where $\tau_{\max}$ denotes the maximum staleness caused by the asynchronous training strategies. \end{proposition} The above result shows the value of implementing asynchronous training strategies with {\sf FederatedScope}\xspace. The detailed proof can be found in Appendix~\ref{appendix:proof}. Compared to previous studies~\cite{chai2021fedat, wang2021field}, we extend the convergence analysis to the more challenging asynchronous FL, where we quantify the effect of staled model updates on the federated aggregation and give a general analysis of the convergence rate rather than an ergodic version~\cite{nguyen2022fedbuff}. \subsection{Supporting Personalization \& Multi-Goal} \label{subsec:p-m} In many real-world applications, handling the heterogeneity of FL requires flexibility in participants' training behaviors. That is, clients need client-specific training processes and/or different formats of loss functions to meet their resource limitations, data properties, and learning goals, all of which can be diverse as discussed in Section \ref{sec:intro}. Formally speaking, for the $m$-th client, the local training dataset $\mathcal{D}_m$ might correspond to a client-specific feature space $\mathcal{X}_m$ and label space $\mathcal{Y}_m$, which can lead to sub-optimal performance of the global model $h_\theta$ or even make it unusable.
To tackle this, the client could (1) maintain a local model $h_{\theta_m}$ with personalized parameters $\theta_m$ (i.e., personalization) and/or (2) minimize a local loss function $F_m$ (i.e., multiple learning goals), while only sharing parts of the models with others for federated training. Therefore, the loss function in Equation~\eqref{eq:loss} can be extended as: \begin{equation} \mathcal{L}' = \frac{1}{n}\sum_{m=1}^{M}\sum_{(x_i^{(m)}, y_i^{(m)}) \in \mathcal{D}_m} F_m\left(h_{\theta_m}(x_i^{(m)}),y_i^{(m)}\right). \label{eq:pfl_loss} \end{equation} Note that there exist some shared parameters among clients, i.e., $\bigcap\limits_{m=1}^{M}\theta_m \neq \emptyset $, and all the clients collaboratively learn $\theta_1, \theta_2, \ldots, \theta_M$ to jointly minimize $\mathcal{L}'$. Benefiting from the event-driven architecture, {\sf FederatedScope}\xspace provides users with flexible expressiveness to describe the behavior of an individual participant (a server or a client) from its own perspective, which is crucial for handling the heterogeneity of FL via allowing differences among participants. In this section, we present how {\sf FederatedScope}\xspace supports such differences among participants for handling the heterogeneity of FL in the following two ways. \subsubsection{Personalized training behaviors} \label{subsub:personalized} As discussed in previous work~\cite{fallah2020personalized,tan2021towards,chen2022pfl}, the heterogeneity of FL (e.g., the heterogeneity in local data and participants' resources) might hurt the model performance for some clients and lead to sub-optimal performance when sharing the same global model among all participants, as in vanilla FedAvg~\cite{mcmahan2017communication}, which motivates the study of personalized federated algorithms~\cite{pFedME, fedbn, li2021ditto, fedem}.
Specifically, personalized federated algorithms apply client-specific local training courses based on clients' private data, including client-wise training configurations, sub-modules, global-local fusing weights, etc. The convergence of the federated learning process is no longer determined solely by the learning process of the global model. Each participant can independently choose the most suitable snapshot of the global model. Therefore, users are expected to describe diverse behaviors of clients to develop personalized federated algorithms, which might be rather complicated and inconvenient when using a procedural programming paradigm, since lots of effort is put into sequentially coordinating and describing the participants. With the event-driven architecture, {\sf FederatedScope}\xspace allows users to describe the behaviors of participants independently, which provides great flexibility for developing new personalization algorithms. Users are able, but not limited, to (1) specify the training configurations, such as local training steps and learning rate, for an individual client; (2) define new events related to new types of exchanged messages and/or events related to customized conditions to apply personalization algorithms (e.g., {\sf performance\_drop}); (3) add personalized behaviors into handlers that are triggered for local training, such as fusing the received global model with local models before performing local training. In most cases, such customization can be inherited from the general training behaviors and only needs to focus on the differences. Considering that clients might have different privacy protection requirements, some privacy protection techniques can be adopted. For example, clients might choose to inject noise into the model parameters before sharing them. More details of the privacy protection of messages can be found in Section~\ref{subsec:privacy_protection}.
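For instance, the ``fusing the received global model with local models'' behavior mentioned above might look like the following sketch, where the interpolation rule and the parameter-dictionary representation are hypothetical simplifications of ours:

```python
def fuse_models(global_params, local_params, alpha=0.5):
    # Interpolate between the received global parameters and the kept
    # local ones before local training; alpha controls the fusion weight.
    fused = dict(local_params)
    for name, value in global_params.items():
        if name in fused:
            fused[name] = alpha * value + (1 - alpha) * fused[name]
        else:
            fused[name] = value  # no local counterpart: adopt global
    return fused
```

Such a function would be called inside a client's handler for ``{\sf receiving\_models}'' before the Trainer starts local training.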
We provide several representative personalized federated algorithms~\cite{chen2022pfl} in {\sf FederatedScope}\xspace for handling the heterogeneity in FL, including pFedMe~\cite{pFedME}, FedBN~\cite{fedbn}, FedEM~\cite{fedem}, and Ditto~\cite{li2021ditto}. These built-in algorithms serve as examples of how to easily and flexibly develop new personalized federated algorithms, and can also be conveniently adopted via configuration by users in real-world applications. \subsubsection{Multiple Learning Goals} \label{subsub:multiple} Note that the scope of FL also covers scenarios where participants learn common knowledge while optimizing different learning goals~\cite{xie2021federated, fedem, smith2017federated, yao2022federated}. The participants of an FL course reach a consensus on what needs to be shared while keeping other learning parts private, especially in cross-silo scenarios. For example, several medical research institutes would like to collaboratively learn a graph neural network for capturing the common structure knowledge of molecules, but they will not disclose the usage of the learned structure knowledge. They might exchange the updates of the graph convolution layers while keeping the encoder layers, readout layers, and headers (such as classifiers) private. In this and similar scenarios, such as model pre-training, it can be difficult or even intractable for users to develop with a procedural programming paradigm via defining the static computation graph of the FL course. Fortunately, the event-driven design of {\sf FederatedScope}\xspace makes it easy to express and implement FL courses with multiple learning goals. Each participant owns its local model and private data, defines its computation graph, locally trains with its private learning objective, and only exchanges messages of the shared layers with others through FL.
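The sharing pattern in the molecular example above (exchange the graph convolution layers, keep encoders, readouts, and headers private) amounts to splitting the local parameter dictionary according to a consensus on shared layer names. A hypothetical sketch, with parameter names and prefixes chosen purely for illustration:

```python
def split_shared_private(params, shared_prefixes=("gnn.",)):
    # Only parameters whose names match the agreed prefixes are exchanged;
    # everything else (e.g., encoders, readouts, headers) stays local.
    shared = {name: value for name, value in params.items()
              if name.startswith(tuple(shared_prefixes))}
    private = {name: value for name, value in params.items()
               if name not in shared}
    return shared, private
```

Only the \textit{shared} part would be packed into messages for federated aggregation; the \textit{private} part never leaves the client.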
Currently, {\sf FederatedScope}\xspace provides three representative scenarios of FL with multiple learning goals, including graph classification, molecular property inference, and natural language understanding (NLU). In the graph classification scenario, clients own different graph classification tasks and aim to collaboratively improve their own performance due to the limitation of available training data. In the molecular property inference scenario, different clients have different property inference goals, such as the solubility (regression task), the enzyme type (classification task), and the penetration (classification task), which leads to heterogeneity in terms of task type. In the NLU scenario, clients are also heterogeneous in terms of task type, and they own different NLU tasks, including sentiment classification, reading comprehension, and sentence pair similarity prediction. Since the development of FL with multiple learning goals is still at an early stage, {\sf FederatedScope}\xspace provides these scenarios to broaden the scope of FL applications and promote the development of innovative methods. More details of these scenarios of FL with multiple learning goals can be found in our open-source repository~\cite{ExampleMultiLearningGoal}. In summary, {\sf FederatedScope}\xspace allows users to describe participants' behaviors from their respective perspectives and thus provides the flexibility to apply different training processes and learning goals to the participants to handle the heterogeneity of FL. \subsection{Supporting Cross-backend FL} \label{subsec:cross-backend} Motivated by the strong need from real-world applications, {\sf FederatedScope}\xspace supports constructing cross-backend FL courses. For example, in an FL task, some of the involved clients are equipped with {TensorFlow} while others might run with {PyTorch}.
Thanks to the event-driven architecture, {\sf FederatedScope}\xspace can conveniently provide such functionality via a mechanism called \textit{message translation}. Note that such support of cross-backend FL is different from that provided by universal languages such as ONNX~\cite{onnx} and existing FL platforms such as TFF~\cite{tff}. Conceptually, ONNX and TFF adopt a global perspective of constructing an FL course, which implies that the complete computation graph is globally defined and shared among all participants. In order to make it compatible with different (versions of) machine learning backends on different clients, the global computation graph is serialized into platform-independent and language-independent representations, sent to the clients, and interpreted or compiled accordingly for different backends. \stitle{Message translation.} {\sf FederatedScope}\xspace, in contrast, gives each participant the right to describe the computation graph on its own (for the portion it takes charge of). Hence, participants can define the computation graph based on their running backends. Following a pre-defined consensus on the format of messages, the participants transform the messages, e.g., gradients and model parameters, generated from the local backends into the pre-defined backend-independent format, e.g., an array of pairs of parameter names and values, before sharing them with others. This procedure is called {\em encoding}. In the other direction, once an encoded message is received, the participant parses the message, e.g., the above array, into backend-dependent tensors in its own computation graph and backend, which is called the {\em decoding} procedure. The encoding and decoding procedures are abstracted as two special programming interfaces in {\sf FederatedScope}\xspace with default implementations; they can also be customized for each participant based on its backend and the FL algorithm to be deployed.
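Under an assumed message format of (name, values) pairs, the encoding and decoding procedures described above could be sketched as follows; the function names and the injectable converters are illustrative assumptions, not the default {\sf FederatedScope} implementations:

```python
def encode(state_dict, to_list=lambda tensor: tensor):
    # Transform backend-dependent tensors into a backend-independent
    # array of (parameter name, plain values) pairs before sharing.
    return [(name, to_list(tensor))
            for name, tensor in sorted(state_dict.items())]


def decode(message, to_tensor=lambda values: values):
    # Parse a received message back into tensors of the local backend.
    return {name: to_tensor(values) for name, values in message}
```

For example, a PyTorch participant might pass something like \texttt{lambda t: t.detach().cpu().tolist()} as \texttt{to\_list} and \texttt{torch.tensor} as \texttt{to\_tensor}, while a TensorFlow participant would plug in the analogous converters of its own backend; these converter choices are ours for illustration.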
{\sf FederatedScope}\xspace provides several examples of constructing cross-backend federated learning~\cite{ExampleCrossBackend}. In supporting cross-backend FL, the advantage of {\sf FederatedScope}\xspace is two-fold: (1) {\sf FederatedScope}\xspace provides more flexibility to handle the heterogeneity of FL than other platforms that adopt a global perspective since each participant has the right to declare its computation graph independently. Specifically, the developer of each participant can focus on expressing its own computation graph, such as client-specific embedding layers and output layers, adapting to its input instance and task. There is no need to declare a super graph (i.e., the global perspective) and care about how to distribute it, reducing the implementation difficulty. (2) {\sf FederatedScope}\xspace follows the principle of information minimization, where participants only need to achieve a consensus on the format of messages and exchange necessary information. Thus, the exchanged model parameters will not leak the whole model architecture, the local training algorithm, or the personalization-related operators to other participants, which would otherwise be inferable from the global computation graph of ONNX and TFF. When such information leakage happens, malicious participants benefit from it because they can conduct a white-box attack rather than the more challenging black-box one in {\sf FederatedScope}\xspace. We will talk more about privacy attacks in Section~\ref{subsec:privacy_protection}. \subsection{Usage of FederatedScope} \label{sec:implementation} In this section, we give a full example of how to set up an FL course, so that users can gain a clear and vivid understanding of {\sf FederatedScope}\xspace. At a high level, users should define a series of events and their corresponding handlers, which characterize the behaviors of participants. 
As shown in Figure~\ref{code:event_handler}, the handlers are expressed as callable functions and bound to the corresponding events with a register mechanism. When an event happens, the corresponding handler will be called to handle it. The example is as follows: \begin{figure} \caption{Behaviors description with events and handlers.} \label{code:event_handler} \end{figure} \begin{example} Consider that a server and several clients would like to construct an FL course and they agree to exchange certain model parameters during the training process. For clients, the event related to message passing is ``{\sf receiving\_models}'', and the corresponding handler can be ``\textit{train the received global models based on local data, and then return the model updates}''. The local training process is executed by a \textit{Trainer} object held by the client. As illustrated in Figure~\ref{code:trainer}, the trainer encapsulates the training details, entirely decoupled from the client's behaviors. Hence, the training process can be described as in the centralized learning case, and the trainer can be flexibly extended with fancy optimizers, regularizers, personalized algorithms, etc. Such a design makes user customization easy. \begin{figure} \caption{The training behaviors and clients are decoupled for supporting flexible customization.} \label{code:trainer} \end{figure} For the server, the event related to message passing is ``{\sf receiving\_updates}'' and the corresponding handler can be ``\textit{save the model updates, and check the aggregation condition}'', which requires another event related to condition checking. For the synchronous training strategy, such an event can be ``{\sf all\_received}'' and the corresponding handler will be ``\textit{perform federated aggregation, and broadcast the updated global models}''.
For the asynchronous training strategies, the event ``{\sf all\_received}'' can be replaced with ``{\sf goal\_achieved}'' or ``{\sf time\_up}'', which enables flexible behaviors when sampling clients or performing aggregation (more details can be found in Section~\ref{subsec:asynchronous}). The federated aggregation is executed by an aggregator, which is also decoupled from the server for flexibly supporting various state-of-the-art (SOTA) aggregation algorithms, such as FedOpt~\cite{FedOPT2020Asad}, FedNova~\cite{FedNova2020Wang}, FedProx~\cite{blocal}, etc. Note that when events such as ``{\sf all\_received}'' or ``{\sf goal\_achieved}'' happen, the clients would receive the up-to-date global models after the server performs federated aggregation, which naturally causes the following event ``{\sf receiving\_models}'' and triggers the handlers for performing a new round of local training. In this way, although we have not explicitly declared a sequential training process, the events happen in the intended logical order to trigger the corresponding handlers, which can precisely express the FL procedure and promote modularization. Further, events such as ``{\sf maximum\_iterations\_reached}'' or ``{\sf early\_stopped}'' can be adopted to specify when the FL courses should be terminated. $\triangle$ \end{example} With such an event-driven architecture, {\sf FederatedScope}\xspace allows users to use existing or add new $<$event, handler$>$ pairs for flexible customization, rather than carefully inserting new behaviors into a sequential FL course as in the procedural programming paradigm. For example, by simply changing the event ``{\sf all\_received}'' to other events related to condition checking such as ``{\sf goal\_achieved}'', users can conveniently apply asynchronous training strategies.
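The register mechanism above can be sketched in a few lines. This is a minimal, self-contained illustration of the event-driven pattern, not {\sf FederatedScope}'s actual worker API; the class \texttt{Worker} and its methods are hypothetical names.

```python
from collections import defaultdict

class Worker:
    """Minimal event-driven participant: handlers are callables bound to events."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, event, handler):
        self._handlers[event].append(handler)

    def fire(self, event, payload=None):
        # Call every handler bound to this event and collect their results.
        return [h(payload) for h in self._handlers[event]]

server = Worker()
updates = []
server.register("receiving_updates", lambda m: updates.append(m))
# Switching from synchronous to asynchronous FL only changes which
# condition-checking event the aggregation handler is bound to:
server.register("goal_achieved", lambda _: f"aggregate {len(updates)} updates")

server.fire("receiving_updates", {"client": 1, "delta": [0.1]})
server.fire("receiving_updates", {"client": 2, "delta": [0.2]})
print(server.fire("goal_achieved")[0])  # -> aggregate 2 updates
```

Rebinding the aggregation handler from ``{\sf all\_received}'' to ``{\sf goal\_achieved}'' requires no change to the handler itself, which is the flexibility argued for above.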
Users can also add new events related to message passing to enable heterogeneous information exchange, such as node embeddings in graph federated learning~\cite{xie2021federated} and encrypted results in cross-silo federated learning~\cite{hardy2017private}. Several representative examples from the end-user perspective can be found on our website, including constructing FL courses via simple configuring~\cite{ExampleConfiguring} and developing new functionalities~\cite{ExampleStartOwnCase}. \stitle{Trainer \& Aggregator.} The details of the adopted algorithms in the trainer and aggregator are decoupled from the behaviors of participants. Therefore, when users develop their own trainer/aggregator with {\sf FederatedScope}\xspace, they only need to care about the details of the training/aggregating algorithms. For example, users are expected to implement several basic interfaces of trainers, including train, evaluation, update model, etc., which are the same as those in centralized training and serve as ``must-do'' items. For the aggregator, which takes the received messages as inputs and returns the aggregated results, users only need to implement how to aggregate. \stitle{Programming Interfaces and Completeness Checking.} {\sf FederatedScope}\xspace provides base classes to make users aware of the necessary interfaces for an FL course, such as {\sf BaseTrainer} and {\sf BaseWorker}. These base classes can be used to check the completeness of the defined FL courses, since a ``NotImplementedError'' would be raised to abort the execution if users fail to implement the necessary interfaces. With the base classes, {\sf FederatedScope}\xspace provides rich implementations of existing FL algorithms. Therefore, users can inherit the provided implementations and focus on the development of new functions and algorithms, which also ensures the completeness of FL courses.
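The base-class completeness check described above follows the standard abstract-interface pattern, sketched below. The interface names mirror the ``must-do'' items mentioned in the text, but the class bodies here are illustrative stand-ins, not {\sf FederatedScope}'s actual {\sf BaseTrainer}.

```python
class BaseTrainer:
    """Abstract trainer interface; unimplemented 'must-do' items abort execution."""
    def train(self):
        raise NotImplementedError("train() must be implemented")
    def evaluate(self):
        raise NotImplementedError("evaluate() must be implemented")
    def update_model(self, params):
        raise NotImplementedError("update_model() must be implemented")

class ToyTrainer(BaseTrainer):
    """A complete trainer: all necessary interfaces are implemented."""
    def __init__(self):
        self.params = [0.0]
    def train(self):
        self.params = [p + 0.1 for p in self.params]  # stand-in for local SGD
        return self.params
    def evaluate(self):
        return {"loss": sum(abs(p - 1.0) for p in self.params)}
    def update_model(self, params):
        self.params = list(params)

trainer = ToyTrainer()
trainer.update_model([0.5])
trainer.train()               # params become [0.6]
try:
    BaseTrainer().train()     # incomplete course: execution is aborted
except NotImplementedError as err:
    print("completeness check failed:", err)
```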
Besides, {\sf FederatedScope}\xspace provides a completeness checking mechanism to generate a directed graph to verify the flow of message transmission in the constructed FL course (an example is illustrated in Appendix~\ref{appendix:completeness}). \stitle{Robustness Against Malicious Participants.} To defend against malicious participants and make the system more robust, some Byzantine fault tolerance algorithms are provided in {\sf FederatedScope}\xspace. For example, we can apply the Krum~\cite{blanchard2017machine} aggregation rule in federated aggregation. Note that these Byzantine fault tolerance algorithms can be regarded as the aggregation behaviors of the server and implemented in the aggregator, which is decoupled from other behaviors to make it flexible and extendable for users to develop their own fault tolerance algorithms. \section{Important Plug-In Components} \label{sec:plugins} In this section, we present several important plug-in components in {\sf FederatedScope}\xspace for convenient usage. These components provide functionalities including privacy protection, attack simulation, and auto-tuning, all of which are tightly coupled with the design of {\sf FederatedScope}\xspace and serve as plug-ins. \subsection{Behavior Plug-In: Privacy Protection} \label{subsec:privacy_protection} Real-world FL applications might prefer different privacy protection algorithms due to their diversity in types of private information, protection strengths, computation and communication resources, etc., which motivates us to provide various privacy protection algorithms in {\sf FederatedScope}\xspace. With the design of {\sf FederatedScope}\xspace, privacy protection algorithms can be implemented as behavior plug-ins, which indicates that the privacy protection algorithms bring new behaviors of participants.
For example, before the participants share messages, encryption algorithms might be applied to the messages, or the messages would be partitioned into several frames, or certain noise can be injected into the messages. These behaviors have been pre-defined in {\sf FederatedScope}\xspace (the so-called behavior plug-ins), and can be easily invoked to protect privacy via simple configuration. \begin{figure} \caption{Behavior plug-in: injecting noise.} \label{code:inject_noise} \end{figure} Specifically, we implement the widely-used homomorphic encryption algorithm Paillier~\cite{paillier1999public} and apply it in a cross-silo FL task~\cite{hardy2017private}; and we develop a secret sharing mechanism for FedAvg. These provided examples demonstrate how to apply privacy protection algorithms with {\sf FederatedScope}\xspace. Furthermore, to satisfy the heterogeneity in privacy protection strengths, we provide tunable modules for applying Differential Privacy (DP) in FL, which has been a popular technique for privacy protection and has achieved great success in database and FL applications \cite{dp_survey, ding2011differentially, triastcyn2019federated, wei2020nbafl}. An example is illustrated in Figure~\ref{code:inject_noise}, from which we can see that users can utilize the configuration to modify the client's behavior: injecting certain noise into the messages before sharing. Users can combine different behaviors together to implement fancy DP algorithms such as NbAFL~\cite{wei2020nbafl}. Note that to achieve a theoretical guarantee of privacy protection, users still need to specify some necessary settings according to their own data and tasks, including the noise distribution~\cite{phan2017adaptive, dwork2014algorithmic} and privacy budget allocation~\cite{wang2019answering, li2021federated, luo2021privacy}.
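The noise-injection behavior can be sketched as the standard Gaussian mechanism: clip the update to bound its sensitivity, then add calibrated Gaussian noise. This is a minimal NumPy sketch of the general technique, not {\sf FederatedScope}'s actual plug-in code; calibrating \texttt{sigma} to a concrete $(\epsilon, \delta)$ budget is left to a privacy accountant, as the text notes.

```python
import numpy as np

def inject_gaussian_noise(update, clip_norm, sigma, rng):
    """Clip the model update to L2 norm <= clip_norm (bounding sensitivity),
    then add Gaussian noise scaled by sigma * clip_norm."""
    update = np.asarray(update, dtype=np.float64)
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    return update + rng.normal(0.0, sigma * clip_norm, size=update.shape)

rng = np.random.default_rng(0)
delta = np.array([3.0, 4.0])  # a raw model update with L2 norm 5
noisy = inject_gaussian_noise(delta, clip_norm=1.0, sigma=0.1, rng=rng)
assert noisy.shape == delta.shape  # same shape, ready to share with the server
```

Raising \texttt{sigma} strengthens protection at the cost of model utility, the trade-off examined experimentally in Section~\ref{sec:privacy}.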
\subsection{Participant Plug-In: Attack Simulation} \label{subsec:privacy_attack} Attacks, growing along with the development of FL, are important for users to verify the availability and the privacy protection strength of their FL systems and algorithms. Typical attacks include privacy attacks and performance attacks: the former aim to steal information related to clients' private data, while the latter aim to intentionally guide the learned model to misclassify a specific subset of data for malicious purposes such as planting backdoors. However, most of the existing FL platforms ignore such an important functional component. Note that it is non-trivial to provide attack simulation in an FL platform, since the diversity of privacy and performance attacks brings challenges to the platform's flexibility and extensibility. Benefiting from the design of {\sf FederatedScope}\xspace, the behaviors of malicious participants can be expressed independently, thus the attack simulation can be implemented as a participant plug-in in {\sf FederatedScope}\xspace. To be more specific, as shown in Figure~\ref{code:attck}, users can conveniently choose some of the participants to become malicious clients via configuration, and attack algorithms can be added to their own trainers. These malicious clients are able to collect or inject certain messages among victims, and further recover or infer the target information accordingly. The simulated attacks provided in {\sf FederatedScope}\xspace can be used to verify the privacy protection strength of FL systems and algorithms. For example, when users develop a new FL algorithm, they may want to know the protection level of the proposed algorithm from some perspectives, such as whether the dataset properties or private training samples could be inferred by attacks.
They can use several state-of-the-art attack algorithms, which have been provided in {\sf FederatedScope}\xspace for convenient usage, to check the privacy protection strength of their FL algorithms, and enhance the privacy protection strength if necessary according to the results of the simulated attacks. {\sf FederatedScope}\xspace provides rich types of attacks. For privacy attacks, {\sf FederatedScope}\xspace provides the implementation of the following algorithms: (i) Gradient inversion attack~\cite{nasr2019comprehensiveIG} for membership inference; (ii) PIA~\cite{melis2019exploitingPIA} for property inference attack; (iii) DMU-GAN~\cite{hitaj2017deep} for class representative attack; (iv) DLG~\cite{zhu2019dlg}, iDLG~\cite{zhao2020idlg}, GRADINV~\cite{geiping2020inverting} for training data/label inference attack. In terms of performance attacks, {\sf FederatedScope}\xspace currently focuses on the backdoor attack, a representative type of performance attack, whose objective is to mislead the model to classify some selected samples into the attacker-specified class. The implementations of SOTA backdoor attacks include: (i) Edge-case backdoor attacks~\cite{edge_case_bd}, BadNets~\cite{badnet}, Blended~\cite{blended}, WaNet~\cite{wanet}, NARCISSUS~\cite{narc}, which perform backdoor attacks by poisoning the dataset; (ii) Neurotoxin~\cite{neurotoxin} and DBA~\cite{dba}, which perform backdoor attacks by poisoning the model. \begin{figure} \caption{Participant plug-in: malicious client.} \label{code:attck} \end{figure} \subsection{Manager Plug-In: Auto-tuning} \label{subsec:auto} FL algorithms generally expose hyperparameters that can significantly affect their performance. Without suitable configurations, users cannot manage their FL applications well.
Hyperparameter optimization (HPO) methods, both traditional methods (e.g., Bayesian optimization~\cite{BO} and multi-fidelity methods~\cite{li2017hyperband, bohb, dehb, optuna}) and Federated-HPO methods~\cite{fedbo, fedex} (denoting very recent ones that deliberately take the FL setting into account) can help users manage FL applications by automatically seeking suitable hyperparameter configurations. Therefore, in {\sf FederatedScope}\xspace, we provide an auto-tuning plug-in, which incorporates various HPO methods. Conceptually, Bayesian optimization, multi-fidelity, and Federated-HPO methods treat a complete FL course, a few FL rounds, and client-wise local update procedures as black-box functions to be evaluated, respectively. {\sf FederatedScope}\xspace provides a unified interface to manage the underlying FL procedure in various granularities so that different HPO methods can interplay with their corresponding black-box functions. This unification is nontrivial for the last case, where we leverage our event-driven architecture to achieve the client-wise exploration of Federated-HPO methods. When they are plugged in, the exchanged messages are extended with HPO-related samples/models/feedback, and the participants would handle them with extended behaviors accordingly. \begin{figure} \caption{Manager plug-in: re-specify configuration.} \label{code:hpo} \end{figure} For Bayesian optimization methods, we showcase applying various open-sourced HPO packages to interact with {\sf FederatedScope}\xspace. Each time they propose a specific configuration, {\sf FederatedScope}\xspace executes an FL course accordingly and returns a specified metric (e.g., validation loss) as the function's output. As for multi-fidelity methods, we have implemented Hyperband~\cite{li2017hyperband} and PBT~\cite{pbt} in {\sf FederatedScope}\xspace. 
Specifically, {\sf FederatedScope}\xspace can export the snapshot of a training course to a corresponding checkpoint, from which another training course can restore. With such a checkpoint mechanism, these multi-fidelity methods can evaluate the configurations that have survived previous low-fidelity comparisons by restoring from the last checkpoints rather than learning from scratch. \begin{table*}[t] \centering \caption{The comparison between applying synchronous and asynchronous training strategies in federated learning, in terms of the virtual time cost (hours) to achieve the targeted test accuracy. } \begin{tabular}{l c c c c c c c} \toprule \multirow{2}{*}{Dataset (Target Acc.)} & \multicolumn{3}{c}{Sync.} & \multicolumn{4}{c}{Async.} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-8} & \multicolumn{1}{c}{Vanilla} & \multicolumn{1}{c}{OS} & \multicolumn{1}{c}{OS (FedScale)} & \multicolumn{1}{c}{Goal-Aggr-Unif} & \multicolumn{1}{c}{Goal-Rece-Unif} & \multicolumn{1}{c}{Time-Aggr-Unif} & \multicolumn{1}{c}{Goal-Aggr-Group}\\ \midrule FEMNIST (85\%) & $61.46$ & $27.34_{\;2.25\times}$ & $28.78_{\;2.14\times}$ & $11.29_{\;5.44\times}$ & $11.36_{\;5.41\times}$ & $11.70_{\;5.25\times}$ & $10.42_{\;5.90\times}$\\ CIFAR-10 (70\%) & $66.99$ & $26.42_{\;2.54\times}$ & $28.98_{\;2.31\times}$& $7.73_{\;8.67\times}$ & $7.98_{\;8.39\times}$ & $8.87_{\;7.55\times}$ & $7.54_{\;8.88\times}$ \\ Twitter (69\%) & $9.41$ & $3.84_{\;2.45\times}$ & $4.14_{\;2.27\times}$ & $0.78_{\;12.06\times}$ & $0.64_{\;14.70\times}$ & $0.50_{\;18.82\times}$ & $0.65_{\;14.48\times}$ \\ \bottomrule \end{tabular} \label{table:main_comparison} \end{table*} Furthermore, {\sf FederatedScope}\xspace provides FedEx~\cite{fedex} as an exemplary implementation of Federated-HPO methods. Specifically, once FedEx is plugged in, we sample configurations for each client independently in each FL round. 
Then each client re-specifies its native configuration and conducts local updates accordingly, as shown in Figure~\ref{code:hpo}. Finally, the client-wise feedback is aggregated to update the policies responsible for determining the optimal configuration(s). In summary, the auto-tuning plug-in can manage FL applications in various granularities. Traditional HPO methods interplay with {\sf FederatedScope}\xspace by configuring and running one or more complete FL rounds, while Federated-HPO methods explore client-wise configurations concurrently in a single FL round. With flexibility provided by the event-driven architecture, we have implemented these HPO methods in a unified way~\cite{wang2022fedhpo}, and novel HPO methods can be easily developed and contributed to {\sf FederatedScope}\xspace. \section{Experiments} \label{sec:exp} \subsection{DataZoo and ModelZoo} For convenient usage, we collect and preprocess ten widely-used datasets from various FL application scenarios, including computer vision datasets (FEMNIST~\cite{emnist}, CelebA~\cite{celeba} and CIFAR-10~\cite{cifar10}), natural language processing datasets (Shakespeare~\cite{mcmahan2017communication}, Twitter~\cite{twitter} and Reddit~\cite{Reddit}) from LEAF~\cite{leaf}, and graph learning datasets (DBLP~\cite{dblp_dataset}, Ciao~\cite{ciao_dataset} and MultiTask~\cite{xie2021federated}) from FederatedScope-GNN (FS-G)~\cite{wang2022federatedscopegnn}. The statistics of these datasets can be found in Appendix~\ref{appendix:datazoo}. Meanwhile, we provide off-the-shelf neural network models via our ModelZoo, which includes widely-adopted model architectures, such as ConvNet~\cite{convnet} and VGG~\cite{vgg} for computer vision tasks, BERT~\cite{bert} and LSTM~\cite{lstm} for natural language processing tasks, and various GNNs~\cite{gcn, gat, gin, sage, gpr-gnn} for graph learning. Such ModelZoo allows users to conveniently develop various trainers for clients. 
\begin{figure*} \caption{The comparison between synchronous and asynchronous strategies.} \label{fig:async_results} \caption{The distributions of the aggregated count of the clients.} \label{fig:agg_count} \caption{The distributions of the staleness in asynchronous strategies.} \label{fig:staleness} \end{figure*} \subsection{Experiment Settings} Here we conduct a series of experiments with {\sf FederatedScope}\xspace on three representative datasets as follows: \noindent \textbf{FEMNIST}. FEMNIST consists of 805,263 handwritten digits in 62 classes, which are partitioned into 3,597 clients according to the writers. With FL, a CNN with two convolutional layers is trained for the image classification task on this dataset. \noindent \textbf{CIFAR-10}. As suggested by previous studies~\cite{lda}, we partition the dataset into 1,000 clients with a Dirichlet distribution, and federally train a CNN with two convolutional layers for image classification. \noindent \textbf{Twitter}. We sample a subset from Twitter, which consists of 6,602 Twitter users' 16,077 texts. Each Twitter user can be regarded as a client for constructing an FL course. Following a previous study~\cite{leaf}, we embed the texts with a bag-of-words model~\cite{bagofword} and collaboratively train a logistic regression model on these sampled clients for sentiment analysis. More implementation details can be found in Appendix~\ref{appendix:implementation} and~\ref{appendix:programming_amount}. \subsection{Results and Analysis} \subsubsection{Asynchronous Federated Learning} \label{exp:async} We first conduct experiments to compare the performance of applying synchronous and asynchronous training strategies in FL. \textbf{Virtual Timestamp}. Following the best practice in prior FL works~\cite{lai2021fedscale}, we conduct the experiments by simulation while tracking the execution time with virtual timestamps.
Specifically, the server begins to broadcast messages containing initial model parameters at timestamp $0$. Then each client sends updates back with a timestamp equal to the received one plus the execution time of local computation and communication estimated by FedScale~\cite{lai2021fedscale}. The server handles the received messages in the order of their timestamps and lets the next broadcast inherit the timestamp from the message that triggers it, assuming the time cost of the server is negligible. Along with an FL course, we record the performance of the global model with respect to such virtual timestamps. \textbf{Baselines}. We implement FedAvg with two synchronous training strategies, including \textit{Sync-vanilla} (i.e., the vanilla synchronous strategy) and \textit{Sync-OS} (i.e., the synchronous strategy with the over-selection mechanism~\cite{tff}). As Sync-OS is originally proposed and implemented in FedScale~\cite{lai2021fedscale}, we also adapt it for our experiments and report its performance (denoted as \textit{Sync-OS (FedScale)}) for correctness verification. For asynchronous FL, we instantiate different asynchronous behaviors discussed in Section~\ref{subsubsec:async_behavior}, and different strategies are named in the format of \textit{Async-AdoptedEvent-BroadcastManner-SampleStrategy}. For example, \textit{Async-Goal-Rece-Unif} denotes that this strategy adopts the event ``{\sf goal\_achieved}'', the \textit{after receiving} broadcasting manner, and the uniform sampling strategy for asynchronous FL, which can be regarded as the implementation of FedBuff~\cite{nguyen2022fedbuff}; and \textit{Async-Time-Aggr-Group} denotes we adopt the event ``{\sf time\_up}'', the \textit{after aggregating} broadcasting manner, and a group sampling strategy (clients are sampled by groups according to their responsiveness~\cite{chai2021fedat}). \textbf{Analysis}.
We adopt the virtual time cost (hours) to achieve the targeted test accuracy as the performance metric for comparing synchronous and asynchronous FL. The experimental results are shown in Table~\ref{table:main_comparison} (more experimental results can be found in Appendix~\ref{appendix:iid_distribution} and \ref{appendix:more_asyn_results}), from which we can observe that asynchronous training strategies achieve significant efficiency improvements (5.25$\times$\textasciitilde 18.82$\times$) compared to the vanilla synchronous training strategy on all the benchmark datasets. Meanwhile, we plot the learning curves in Figure~\ref{fig:async_results}. Due to space limitations, we only show some asynchronous training strategies on the CIFAR-10 dataset and omit other similar results. From Figure~\ref{fig:async_results}, we can observe noticeable gaps between synchronous and asynchronous training strategies for a long time during the training process. These experimental results are consistent with previous studies~\cite{huba2022papaya, xie2019asynchronous} and confirm that the asynchronous training strategies provided in {\sf FederatedScope}\xspace can significantly improve the training efficiency while achieving competitive model performance. Both our implementation \textit{Sync-OS} and the original implementation in FedScale show that applying the over-selection mechanism in synchronous FL can improve the efficiency to some degree. However, it might cause unfairness among participants and then lead to model bias, as demonstrated in Figure~\ref{fig:agg_count}. From the figure we can observe that when applying the over-selection mechanism (\textit{Sync-OS}), some clients never contribute to the federated aggregation, i.e., $\Pr[\text{effective\_aggregation\_count}=0]>0$.
The reason is that these clients need more computation or communication time, and thus their feedback would always be dropped since the server has finished the federated aggregation with the feedback from those clients having faster response speeds. In other words, these clients always become the victims among the over-selected clients, which results in unfairness among participants, and then causes the learned models to bias towards those clients with fast response speeds. In contrast, the asynchronous learning strategies provided in {\sf FederatedScope}\xspace can improve the efficiency without introducing such unfairness and model bias, due to the fact that stale feedback would be tolerated in the federated aggregation. Hence the distribution of the effective aggregation count of asynchronous learning strategies plotted in Figure~\ref{fig:agg_count} is more concentrated and similar to that of the vanilla synchronous training strategy. \begin{figure*} \caption{Client-wise test accuracy on FEMNIST dataset.} \label{fig:pfl} \caption{Accuracy w.r.t. varying protection strength and recovered images.} \label{fig:dp} \caption{Best-seen validation loss over time on FEMNIST dataset.} \label{fig:hpo} \end{figure*} Further, in Figure~\ref{fig:staleness}, we illustrate the characteristics of different asynchronous training strategies in terms of staleness (i.e., the version difference between the up-to-date global model and the model used for local training) of the updates when performing federated aggregation. By comparing \textit{Async-Goal-Aggr-Unif} and \textit{Async-Goal-Rece-Unif}, we can see that the \textit{after aggregating} broadcasting manner causes less staleness than \textit{after receiving}. It implies that \textit{after aggregating} is more suitable for those FL tasks with a low staleness toleration threshold, but such a broadcasting manner requires more available bandwidth at the server since multiple messages are sent out at the same time.
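Tolerating stale feedback in the aggregation, as discussed above, is commonly implemented by down-weighting updates according to their staleness $\tau$. The sketch below uses a polynomial discount, one common choice in the asynchronous-FL literature (e.g., FedAsync-style strategies~\cite{xie2019asynchronous}); the function names and the exponent $\alpha$ are illustrative, not {\sf FederatedScope}'s exact aggregator.

```python
import numpy as np

def staleness_weight(tau, alpha=0.5):
    """Polynomial staleness discount: fresh updates (tau=0) get weight 1,
    stale updates are down-weighted as (1 + tau)^(-alpha)."""
    return (1.0 + tau) ** (-alpha)

def async_aggregate(global_model, buffered, alpha=0.5):
    """buffered: list of (update, staleness tau) pairs collected until
    the goal_achieved / time_up event fires."""
    weights = np.array([staleness_weight(tau, alpha) for _, tau in buffered])
    weights /= weights.sum()  # normalize to a convex combination
    updates = np.stack([np.asarray(u, dtype=np.float64) for u, _ in buffered])
    return np.asarray(global_model, dtype=np.float64) + weights @ updates

model = np.zeros(2)
# A fresh update (tau=0) and a stale one (tau=4) arrive in the same buffer;
# the fresh update receives a larger effective weight.
new_model = async_aggregate(model, [([1.0, 0.0], 0), ([0.0, 1.0], 4)])
```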
\subsubsection{Personalization} To demonstrate how personalization can handle the heterogeneity among participants, we compare FedAvg~\cite{mcmahan2017communication} with several built-in SOTA personalized FL algorithms, including FedBN~\cite{fedbn}, FedEM~\cite{fedem}, pFedMe~\cite{pFedME} and Ditto~\cite{li2021ditto}. The experimental results are illustrated in Figure~\ref{fig:pfl}, which shows the client-wise test accuracies on the FEMNIST dataset. We can observe that the average accuracy (denoted as the red dots) and the 90\% quantile accuracy (denoted as the red horizontal lines) of vanilla FedAvg are both significantly lower than those of the personalized FL algorithms. This indicates that applying personalized FL algorithms can improve the client-wise performance, including for the bottom clients, and thus lead to a better overall performance. Besides, in terms of the standard deviation of the client-wise accuracy (shown as $\sigma$ at the top of the figure), personalized FL algorithms can reduce the performance differences to some degree, which confirms the advantages of enabling personalization behaviors for handling the heterogeneity among participants in real-world FL applications. Personalized FL algorithms might need different computation and communication resources compared to vanilla FedAvg~\cite{mcmahan2017communication}. The computation and communication costs in each training round are determined by the adopted algorithms. Taking the comparison between vanilla FedAvg and two representative personalized FL algorithms, FedBN~\cite{fedbn} and Ditto~\cite{li2021ditto}, as an example, in each training round~\cite{chen2022pfl}: (i) FedBN needs the same computation but lower communication costs, since it proposes not to share the parameters of the BatchNorm layers; and (ii) Ditto needs the same communication but more local computation costs, since it additionally trains local personalized models.
Further, from the perspective of an FL course, i.e., iteratively performing the FL training rounds until termination, the communication and computation costs depend on the convergence of the learned models. \subsubsection{Privacy Protection and Attack} \label{sec:privacy} We conduct an experiment to show the effect of applying the privacy protection algorithms provided in {\sf FederatedScope}\xspace. We take DP as an example, and study its effect on the utility of the learned model and its effectiveness in defending against privacy attacks. Specifically, we train a ConvNet2 model on FEMNIST, and randomly choose some of the clients to inject Gaussian noise into the returned model updates to strengthen their privacy. We construct multiple FL courses by varying the percentage of clients that inject noise from 0\% to 100\%, and plot the performance of the learned models in Figure~\ref{fig:dp}. From this figure we can observe that as more and more clients choose to inject noise into the returned model updates, the test accuracy achieved by the learned global model decreases gradually, from 84\% to 65\%, which shows the trade-off between privacy protection strength and model utility. Moreover, we apply the DLG algorithm~\cite{zhu2019dlg} implemented in {\sf FederatedScope}\xspace to conduct a privacy attack, aiming to reconstruct the private training data of other users. As shown on the left-hand side of Figure~\ref{fig:dp}, the reconstructed images from the clients who have not injected noise are clear, and the privacy attacker successfully recovers clients' training data to the extent that the groundtruth digits are exposed. On the right-hand side of the figure, we plot the reconstructed images from those clients injecting noise, which confirms the effectiveness of the privacy protection provided by DP since the attacker fails to recover meaningful information.
\subsubsection{Auto-Tuning} As mentioned in Section~\ref{subsec:auto}, we have implemented several HPO methods in {\sf FederatedScope}\xspace, which enables users to auto-tune the hyperparameters of FL courses. Here we experimentally compare some representative HPO methods, including random search (RS)~\cite{bergstra2012random}, the successive halving algorithm (SHA)~\cite{li2017hyperband} and the recently proposed Federated-HPO method FedEx~\cite{fedex}, by applying them to optimize the hyperparameters of FedAvg on the FEMNIST dataset. We follow the protocol used in FedHPO-B~\cite{wang2022fedhpo}, where RS and SHA try configurations one by one, and FedEx wrapped by RS/SHA manages the search procedure at a fine granularity to explore the hyperparameter space concurrently. We present the results in Figure~\ref{fig:hpo}, where the best-seen validation loss is depicted, and the test accuracy of the searched optimal configuration is reported in the legend. The best-seen validation losses of wrapped FedEx decrease more slowly than those of their corresponding wrappers, where such a poorer regret seems to indicate poorer searched hyperparameter configurations. However, the test accuracies of their searched configurations are remarkably better than those of their wrappers, implying the superiority of managing the search procedure at a fine granularity. \section{Conclusions} \label{sec:conclu} In this paper, we introduce {\sf FederatedScope}\xspace, a novel federated learning platform, to provide users with comprehensive support for various FL development and deployment needs. Towards both convenient usage and flexible customization, {\sf FederatedScope}\xspace exploits an event-driven architecture to frame an FL course into $<$events, handlers$>$ pairs so that users can describe participants' behaviors from their respective perspectives.
Such an event-driven design makes {\sf FederatedScope}\xspace suitable for handling various types of heterogeneity in FL, due to the advantages that (i) {\sf FederatedScope}\xspace enables participants to exchange rich types of messages, express diverse training behaviors, and optimize different learning goals, and (ii) {\sf FederatedScope}\xspace offers rich condition-checking events to support various coordinations and cooperations among participants, such as different asynchronous training strategies. Further, the design of {\sf FederatedScope}\xspace allows us to conveniently implement and provide several important plug-in components, such as privacy protection, attack simulation, and auto-tuning, which are indispensable for practical usage. We have released {\sf FederatedScope}\xspace to help researchers and developers quickly get started, develop new FL algorithms, and build new FL applications, with the goal of promoting and accelerating the progress of FL. \appendix \section{Convergence Analysis} \label{appendix:proof} Without loss of generality, we assume the numbers of training instances are the same across clients and simplify Eq.~\eqref{eq:loss} as the following optimization problem: \begin{equation} \label{eq:objective} \min_{\theta \in \mathbb{R}^d} F(\theta):= \frac{1}{M}\sum_{i=1}^{M} F_i(\theta), \end{equation} where $M$ is the number of participating clients and $F_i$ denotes the loss function of client $i$. We use $\mathcal{S}^{(t)} \subseteq [M]$ to denote the index set of the clients that participate in the $t$-th training round. Each client $i\in\mathcal{S}^{(t)}$ takes $Q$ local SGD steps with learning rate $\eta$.
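As a sanity check on this setup, the synchronous special case ($\tau_i = 0$, full participation) can be simulated on quadratic losses $F_i(\theta)=\frac{1}{2}\|\theta-c_i\|^2$, whose exact gradients make the behavior easy to verify. The sketch below is ours, not part of the analyzed protocol:

```python
import numpy as np

def local_sgd(theta0, c_i, eta, Q):
    """Q gradient steps on F_i(theta) = 0.5 * ||theta - c_i||^2,
    whose gradient is theta - c_i; returns the client update Delta_i."""
    theta = theta0.copy()
    for _ in range(Q):
        theta -= eta * (theta - c_i)
    return theta - theta0

def fedavg_round(theta, centers, eta, Q):
    """One synchronous round: the server adds the average client update."""
    deltas = [local_sgd(theta, c, eta, Q) for c in centers]
    return theta + np.mean(deltas, axis=0)
```

Iterating the round drives $\theta$ to the minimizer $\frac{1}{M}\sum_i c_i$ of $F$ at a geometric rate, consistent with the convergence result derived below for the general case.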
The model update at the $t$-th round is: \begin{align} \theta^{(t+1)} - \theta^{(t)} & = \Delta^{(t)} \nonumber\\ & = \frac{1}{|\mathcal{S}^{(t)}|}\sum_{i\in \mathcal{S}^{(t)}} \Delta_i^{(t)} \nonumber\\ & = -\frac{1}{|\mathcal{S}^{(t)}|}\sum_{i\in \mathcal{S}^{(t)}} \sum_{q=1}^{Q} \eta g_i(\theta^{(t-\tau_i)}_{i,q}), \label{eq:theta_delta} \end{align} where $g_i(\cdot)$ denotes the stochastic gradient of client $i$. We use $\tau_i(t)$ to denote the staleness of the model used to initialize local training at client $i$ in the $t$-th training round, and we simplify it to $\tau_i$ when this causes no ambiguity. First, we state some widely-adopted assumptions~\cite{nguyen2022fedbuff, chai2021fedat, wang2021field}: \begin{assumption}[Smoothness] \label{assum:smoothness} The loss function $F$ has Lipschitz continuous gradients with a constant $L>0$, i.e., $\forall \theta_1, \theta_2$: \begin{equation} ||\nabla F(\theta_1) - \nabla F(\theta_2)||^2 \leq L^2 ||\theta_1 - \theta_2 ||^2, \end{equation} and further: \begin{equation} F(\theta_1) - F(\theta_2) \leq \langle \nabla F(\theta_2), \theta_1-\theta_2 \rangle + \frac{L}{2}|| \theta_1 - \theta_2 ||^2. \end{equation} \end{assumption} \begin{assumption}[Convexity] \label{assum:convexity} The loss function $F$ is $\mu$-strongly convex with a constant $\mu>0$, i.e., $\forall \theta_1, \theta_2$: \begin{equation} F(\theta_1) - F(\theta_2) \geq \langle \nabla F(\theta_2), \theta_1-\theta_2 \rangle + \frac{\mu}{2}|| \theta_1 - \theta_2 ||^2. \end{equation} \end{assumption} \begin{assumption}[Unbiasedness] \label{assm:unbiased} The estimation of the stochastic gradient is unbiased: $\mathbb{E}[{g_i(\theta)}] = \nabla F_i(\theta)$.
\end{assumption} \begin{assumption}[Bounded variances] \label{assm:bounded_var} The local and global variances are bounded for all clients, i.e., $\forall i$, there exist constants $\sigma_l$ and $\sigma_g$, s.t., \begin{align} \mathbb{E}\left[||g_i(\theta) - \nabla F_i(\theta)||^2\right] \leq \sigma_l^2,\\ \frac{1}{M}\sum_{i=1}^{M}||\nabla F_i(\theta) - \nabla F(\theta) ||^2 \leq \sigma_g^2. \end{align} \end{assumption} \begin{assumption}[Bounded gradients] \label{assm:bounded_grad} The gradients of clients are bounded, i.e., $||\nabla F_i ||^2 \leq C, \forall i\in [M]$. Specifically, $||\nabla F||^2 = ||\frac{1}{M}\sum_{i=1}^{M}\nabla F_i ||^2 \leq \frac{1}{M}\sum_{i=1}^{M}||\nabla F_i||^2 \leq C$. \end{assumption} \begin{assumption}[Bounded staleness] \label{ass:bounded_staleness} $\forall i \in [M]$, the staleness $\tau_i$ is not larger than $\tau_{\max}$. \end{assumption} Based on these assumptions and inspired by previous studies~\cite{chai2021fedat, nguyen2022fedbuff}, we have two lemmas: \begin{lemma} \label{lemma:bound_gradient_l2_norm} The expectation of the $L_2$ norm of the estimated gradients is bounded for all clients: \begin{align} \mathbb{E}\left[||g_i(\theta^{(t)})||^2\right] \leq 3\left(\sigma_l^2 + \sigma_g^2 + C\right). \end{align} \end{lemma} \begin{proof} \begin{align} &\thinspace\mathbb{E}\left[||g_i(\theta^{(t)})||^2\right] \nonumber\\ = &\thinspace\mathbb{E}\left[|| g_i(\theta^{(t)}) - \nabla F_i(\theta^{(t)}) + \nabla F_i(\theta^{(t)}) - \nabla F(\theta^{(t)}) + \nabla F(\theta^{(t)}) ||^2 \right] \nonumber\\ \leq& \thinspace3\mathbb{E}\left[||g_i(\theta^{(t)}) - \nabla F_i(\theta^{(t)})||^2 + ||\nabla F_i(\theta^{(t)}) - \nabla F(\theta^{(t)})||^2 \right. \nonumber\\ & \quad \:\left.+ ||\nabla F(\theta^{(t)})||^2 \right] \nonumber\\ \leq& \thinspace 3\left(\sigma_l^2 + \sigma_g^2 + C\right), \end{align} where the last inequality follows from Assumptions~\ref{assm:bounded_var} and~\ref{assm:bounded_grad}.
\end{proof} \begin{lemma} \label{lemma:bound_delta_l2_norm} The expectation of the $L_2$ norm of $\Delta^{(t)}, \forall t \in [T]$, is bounded: \begin{align} \mathbb{E}\left[||\Delta^{(t)}||^2\right] \leq 3 Q^2 \eta^2 \left(\sigma_l^2 + \sigma_g^2 + C\right), \end{align} where $Q$ denotes the number of local SGD steps and $\eta$ denotes the learning rate. \end{lemma} \begin{proof} According to Eq.~\eqref{eq:theta_delta} and the Cauchy-Schwarz inequality, we have: \begin{align} ||\Delta^{(t)}||^2 & = \frac{1}{|\mathcal{S}^{(t)}|^2} \Big|\Big|\sum_{i\in \mathcal{S}^{(t)}} \sum_{q=1}^{Q}\eta g_i(\theta_{i,q}^{(t-\tau_i)}) \Big|\Big|^2 \nonumber\\ & \leq \frac{Q\eta^2}{|\mathcal{S}^{(t)}|} \sum_{i\in \mathcal{S}^{(t)}} \sum_{q=1}^{Q} \big|\big| g_i(\theta_{i,q}^{(t-\tau_i)}) \big|\big|^2. \end{align} Taking the expectation over the randomness w.r.t. the client participation and the stochastic gradients: \begin{align} \mathbb{E}\big[||\Delta^{(t)}||^2\big] & \leq \frac{Q \eta^2}{|\mathcal{S}^{(t)}|} \sum_{i\in \mathcal{S}^{(t)}} \sum_{q=1}^{Q} \mathbb{E}\left[|| g_i(\theta_{i,q}^{(t-\tau_i)}) ||^2\right] \nonumber\\ & \leq 3 Q^2 \eta^2 \left(\sigma_l^2 + \sigma_g^2 + C\right), \end{align} where the last inequality follows from Lemma~\ref{lemma:bound_gradient_l2_norm}. \end{proof} Next we provide the convergence analysis for the proposed asynchronous training protocol in federated learning, inspired by previous studies~\cite{chai2021fedat, nguyen2022fedbuff, wang2021field}. With Eq.~\eqref{eq:theta_delta} and the smoothness assumption (Assumption~\ref{assum:smoothness}), we have: \begin{align} &F(\theta^{(t+1)}) - F(\theta^{(t)}) \nonumber \\ &\leq \langle \nabla F(\theta^{(t)}), \theta^{(t+1)} - \theta^{(t)} \rangle + \frac{L}{2} || \theta^{(t+1)} - \theta^{(t)} ||^2 \nonumber \\ &= \langle \nabla F(\theta^{(t)}), \Delta^{(t)} \rangle + \frac{L}{2} || \Delta^{(t)} ||^2.
\end{align} Taking the total expectation: \begin{align} & \quad \mathbb{E}\big[F(\theta^{(t+1)}) - F(\theta^{(t)})\big] \nonumber \\ & \leq \mathbb{E}\big[\langle \nabla F(\theta^{(t)}), \Delta^{(t)} \rangle\big] + \frac{L}{2} \mathbb{E}\big[|| \Delta^{(t)} ||^2\big] \nonumber \\ & \leq \mathbb{E}\big[\underbrace{\langle \nabla F(\theta^{(t)}), \Delta^{(t)} \rangle}_{H_1} \big] + \frac{3LQ^2 \eta^2}{2} \left(\sigma_l^2+\sigma_g^2 + C\right) \quad (\text{with Lemma}~\ref{lemma:bound_delta_l2_norm}). \label{eq:object} \end{align} Consider the bound of $H_1$: \begin{align} H_1 &= \Big\langle \nabla F(\theta^{(t)}), -\frac{1}{|\mathcal{S}^{(t)}|}\sum_{i\in \mathcal{S}^{(t)}}\sum_{q=1}^{Q}\eta g_i(\theta_{i,q}^{(t-\tau_i)}) \Big\rangle \nonumber\\ & = -\frac{1}{|\mathcal{S}^{(t)}|}\sum_{i\in \mathcal{S}^{(t)}}\sum_{q=1}^{Q}\eta \langle \nabla F(\theta^{(t)}), g_i(\theta_{i,q}^{(t-\tau_i)}) \rangle. \end{align} We take the total expectation $\mathbb{E}[\cdot]:=\mathbb{E}_{\mathcal{F}}\mathbb{E}_{i\sim [M]}\mathbb{E}_{g_i|i,\mathcal{F}}[\cdot]$ ($\mathcal{F}$ denotes the historical information): \begin{align} \mathbb{E}[H_1] &= -\mathbb{E}_{\mathcal{F}}\left[\frac{1}{M}\sum_{i=1}^{ M}\sum_{q=1}^{Q} \eta \mathbb{E}_{g_i|i\sim [M]}\big[\langle \nabla F(\theta^{(t)}), g_i(\theta_{i,q}^{(t-\tau_i)}) \rangle \big] \right] \nonumber \\ &= -\mathbb{E}_{\mathcal{F}}\left[\sum_{q=1}^{Q}\eta \Big\langle\nabla F(\theta^{(t)}), \frac{1}{M}\sum_{i=1}^{M}\nabla F_i(\theta_{i,q}^{(t-\tau_i)}) \Big\rangle \right].
\end{align} Given $\langle a,b \rangle = \frac{1}{2}(||a||^2 + ||b||^2 - ||a-b||^2)$, we have: \begin{align} \mathbb{E}[H_1] &= \sum_{q=1}^{Q}\frac{\eta}{2} \mathbb{E}_{\mathcal{F}}\Big[ -||\nabla F(\theta^{(t)})||^2 - ||\frac{1}{M}\sum_{i=1}^{M} \nabla F_i(\theta_{i,q}^{(t-\tau_i)})||^2 \nonumber \\ &\quad + ||\nabla F(\theta^{(t)})-\frac{1}{M}\sum_{i=1}^{M} \nabla F_i(\theta_{i,q}^{(t-\tau_i)})||^2 \Big] \nonumber \\ & \leq -\mu Q \eta\mathbb{E}_{\mathcal{F}}\left[ F(\theta^{(t)}) - F(\theta^{(*)})\right] \nonumber\\ & \quad + \sum_{q=1}^{Q}\frac{\eta}{2}\mathbb{E}_{\mathcal{F}}\Big[\underbrace{||\nabla F(\theta^{(t)}) - \frac{1}{M}\sum_{i=1}^{M} \nabla F_i(\theta_{i,q}^{(t-\tau_i)})||^2}_{H_2} \Big], \label{eq:h1} \end{align} where the last inequality follows from Assumption~\ref{assum:convexity} and $\theta^{(*)}$ denotes the optimum of minimizing $F(\cdot)$. Further, with Eq.~\eqref{eq:objective}, we have: \begin{align} H_2 &= \Big|\Big|\frac{1}{M}\sum_{i=1}^{M}\nabla F_i(\theta^{(t)}) - \frac{1}{M}\sum_{i=1}^{M} \nabla F_i(\theta_{i,q}^{(t-\tau_i)}) \Big|\Big|^2 \nonumber\\ & \leq \frac{1}{M}\sum_{i=1}^{M} \big|\big| \nabla F_i(\theta^{(t)}) - \nabla F_i(\theta_{i,q}^{(t-\tau_i)}) \big|\big|^2 \nonumber\\ & \leq \frac{2}{M}\sum_{i=1}^{M} \Big[\big|\big| \nabla F_i(\theta^{(t)}) - \nabla F_i(\theta^{(t-\tau_i)}) \big|\big|^2 + \big|\big| \nabla F_i(\theta^{(t-\tau_i)}) - \nabla F_i(\theta_{i,q}^{(t-\tau_i)}) \big|\big|^2\Big] \nonumber\\ & \leq \frac{2L^2}{M}\sum_{i=1}^{M} \left[\big|\big| \theta^{(t)} - \theta^{(t-\tau_i)} \big|\big|^2 + \big|\big| \theta^{(t-\tau_i)} - \theta_{i,q}^{(t-\tau_i)} \big|\big|^2 \right]. 
\end{align} Taking the expectation, we have: \begin{align} \mathbb{E}[H_2] &\leq \frac{2L^2}{M}\sum_{i=1}^{M} \Big[ \mathbb{E}\big[\big|\big|\theta^{(t)} - \theta^{(t-\tau_i)}\big|\big|^2\big] + \mathbb{E}\big[\big|\big|\theta^{(t-\tau_i)} - \theta_{i,q}^{(t-\tau_i)}\big|\big|^2\big] \Big] \nonumber\\ & \leq \frac{2L^2}{M}\sum_{i=1}^{M} \Big[ \mathbb{E}\big[\big|\big|\sum_{\rho=t-\tau_i}^{t-1}\frac{1}{|\mathcal{S}^{(\rho)}|}\sum_{j\in \mathcal{S}^{(\rho)}}\sum_{q=1}^{Q}\eta g_j(\theta_{j,q}^{(\rho)})\big|\big|^2 \big] \nonumber\\ & \qquad + \mathbb{E}\big[\big|\big| \sum_{q=1}^{Q}\eta g_i(\theta_{i,q}^{(t-\tau_i)}) \big|\big|^2\big] \Big] \nonumber\\ & \leq \frac{2L^2}{M}\sum_{i=1}^{M} \Big[ Q \eta^2 \tau_i \sum_{\rho=t-\tau_i}^{t-1}\frac{1}{|\mathcal{S}^{(\rho)}|}\sum_{j\in \mathcal{S}^{(\rho)}} \sum_{q=1}^{Q} \mathbb{E} \big[||g_j(\theta_{j,q}^{(\rho)})||^2\big] \nonumber\\ & \qquad + Q\eta^2\sum_{q=1}^{Q} \mathbb{E}\big[||g_i(\theta_{i,q}^{(t-\tau_i)})||^2\big] \Big] \nonumber\\ & \leq \frac{2L^2}{M}\sum_{i=1}^{M} \Big[ 3Q^2\eta^2\tau_{\max}^2(\sigma_l^2 + \sigma_g^2 + C) + 3Q^2\eta^2(\sigma_l^2 + \sigma_g^2 + C)\Big] \nonumber\\ & = 6L^2Q^2\eta^2(\tau_{\max}^2+1)(\sigma_l^2 + \sigma_g^2 + C), \label{eq:h2} \end{align} where the last inequality follows from Lemma~\ref{lemma:bound_gradient_l2_norm} and Assumption~\ref{ass:bounded_staleness}. By inserting Eq.~\eqref{eq:h2} and Eq.~\eqref{eq:h1} into Eq.~\eqref{eq:object}, we have: \begin{align} &\quad \mathbb{E}\left[F(\theta^{(t+1)}) - F(\theta^{(t)})\right] \nonumber\\ &\leq -\mu Q \eta\mathbb{E}\left[ F(\theta^{(t)}) - F(\theta^{(*)})\right] \nonumber\\ &\quad + 3LQ^2\eta^2 \left(\sigma_l^2 + \sigma_g^2 + C\right)\left[\eta Q L (\tau_{\max}^2 +1)+\frac{1}{2}\right].
\end{align} By rearranging, we have: \begin{align} &\quad \mathbb{E}\big[F(\theta^{(t+1)}) - F(\theta^{(*)})\big] - \frac{3LQ\eta}{\mu} (\sigma_l^2 + \sigma_g^2 + C)\left[\eta Q L (\tau_{\max}^2 +1)+\frac{1}{2}\right] \nonumber\\ &\leq (1-\mu Q \eta)\Big[\mathbb{E}\big[ F(\theta^{(t)}) - F(\theta^{(*)})\big] \nonumber\\ & \quad - \frac{3LQ\eta}{\mu} (\sigma_l^2 + \sigma_g^2 + C)\left[\eta Q L (\tau_{\max}^2 +1)+\frac{1}{2}\right]\Big], \end{align} which implies a geometric series with ratio $1-\mu Q \eta$. Thus we can obtain: \begin{align} &\quad \mathbb{E}\big[F(\theta^{(T)}) - F(\theta^{(*)})\big] \nonumber\\ &\leq (1-\mu Q \eta)^T\mathbb{E}\big[ F(\theta^{(0)}) - F(\theta^{(*)})\big] \nonumber\\ & \quad +\left[1-(1-\mu Q \eta)^T\right] \frac{3LQ\eta}{\mu} \left(\sigma_l^2 + \sigma_g^2 + C\right)\left[\eta Q L (\tau_{\max}^2 +1)+\frac{1}{2}\right]. \end{align} When $\mu Q \eta \leq 1$, we obtain the following convergence result: \begin{align} &\quad \mathbb{E}\big[F(\theta^{(T)}) - F(\theta^{(*)})\big] \nonumber\\ &\leq \left(1-\mu Q \eta\right)^T\mathbb{E}\big[ F(\theta^{(0)}) - F(\theta^{(*)})\big] \nonumber\\ & \quad +\frac{3LQ\eta}{\mu} \left(\sigma_l^2 + \sigma_g^2 + C\right)\left[\eta Q L (\tau_{\max}^2 +1)+\frac{1}{2}\right]. \end{align} \begin{table*}[t] \centering \caption{Examples of events in FederatedScope.} \begin{tabular}{l l l} \toprule Category & Event & Event Description \\ \midrule \multirow{5}{*}{Related to Message Passing} & {\sf receiving\_join\_in} & The server receives join-in requests from clients. \\ & {\sf receiving\_model} & Clients receive the global model from the server. \\ & {\sf receiving\_updates} & The server receives model updates from clients. \\ & {\sf receiving\_eval\_request} & Clients receive the request of evaluation from the server. \\ & ... ... & ... ...
\\ \midrule \multirow{5}{*}{Related to Condition Checking} & {\sf all\_received} & All the model updates have been received.\\ & {\sf time\_up} & The allocated time budget for the training round has run out. \\ & {\sf early\_stop} & The pre-defined early stop conditions are satisfied. \\ & {\sf performance\_drop} & The received global model causes a performance drop. \\ & ... ... & ... ... \\ \bottomrule \end{tabular} \label{tab:events} \end{table*} \begin{table*}[t] \centering \caption{The statistics of the datasets provided in DataZoo.} \begin{tabular}{l c c c} \toprule Dataset & Task & Number of Instances & Number of Clients \\ \midrule FEMNIST & Image Classification & 817,851 & 3,597 \\ CelebA & Image Classification & 200,288 & 9,323 \\ CIFAR-10 & Image Classification & 60,000 & 1,000 \\ \midrule Shakespeare & Next Character Prediction & 4,226,158 & 1,129 \\ Twitter & Sentiment Analysis & 1,600,498 & 660,120 \\ Reddit & Language Modeling & 56,587,343 & 1,660,820\\ \midrule DBLP (partitioned by venue) & Node Classification & 52,202 & 20\\ DBLP (partitioned by publisher) & Node Classification & 52,202 & 8\\ Ciao & Link Classification & 565,300 & 28 \\ MultiTask & Graph Classification & 18,661 & 7\\ \bottomrule \end{tabular} \label{table:datazoo} \end{table*} \section{Examples of Events} \label{appendix:events} Some examples of the events provided in {\sf FederatedScope}\xspace are presented in Table~\ref{tab:events}. These events and the corresponding handlers are used to describe participants' behaviors in {\sf FederatedScope}\xspace, as introduced in Section~\ref{subsec:event_driven}. \section{DataZoo} \label{appendix:datazoo} The statistics of the datasets provided in DataZoo are summarized in Table~\ref{table:datazoo}. Our DataZoo contains ten widely-used datasets collected from various FL applications and standardizes the data preprocessing.
\section{Overall Structure} \label{appendix:overvall_architecture} The overall structure is illustrated in Figure~\ref{fig:overall}. \begin{figure} \centering \caption{The overall structure of {\sf FederatedScope}\xspace.} \label{fig:overall} \end{figure} \section{Completeness Checking} \label{appendix:completeness} As shown in Figure~\ref{fig:completeness}, {\sf FederatedScope}\xspace provides completeness checking to verify the flow of message transmission in the constructed FL courses. A complete FL course should contain at least one path from the ``start'' node to the ``termination'' node. \begin{figure*} \centering \caption{The graphs to verify the flow of message transmission in a constructed FL course. In the left and middle subgraphs, there exists at least one path from the ``start'' node to the ``termination'' node, which indicates that the FL courses are complete. In the middle subgraph, the nodes that are unreachable from the ``start'' node, e.g., ``M3'' and ``M4'', are redundant.} \label{fig:completeness} \end{figure*} \section{Implementation Details} \label{appendix:implementation} We conduct the experiments shown in Section~\ref{sec:exp} on 8 GeForce RTX 2080Ti GPUs. For the experiments on FEMNIST and CIFAR-10, we train a CNN with two convolutional layers. We set the hidden size to 2048 and the dropout ratio to 0.5 to prevent overfitting. At each round of training, the clients execute 4 SGD steps with a batch size of 20, and the learning rate is set to 0.5. The concurrency number (i.e., the number of clients that perform local training at each training round) is 100. For the experiments on Twitter, we set the embedding size to 300 and the maximum length to 140 to embed the texts with a bag-of-words model, and train a logistic regression model. At each round of training, the clients execute 4 SGD steps with a batch size of 2, and the learning rate is set to 0.05. The concurrency number is set to 200.
For all the implemented synchronous and asynchronous strategies, we tune the hyperparameters on the validation set. For Sync-OS, at each training round, we over-select 30\% more clients upon the corresponding concurrency number, i.e., 130 on FEMNIST and CIFAR-10, and 260 on Twitter. For the asynchronous settings, we set the aggregation goal to 40, 20, and 40 for the experiments conducted on FEMNIST, CIFAR-10, and Twitter, respectively. The threshold of staleness toleration is set to 20 for all the datasets. For the asynchronous strategies equipped with ``{\sf time\_up}'', the time budget of each training round is set to the same value as the average time cost for achieving the defined aggregation goal when using ``{\sf goal\_achieved}''. \section{Datasets with IID Distribution versus Non-IID Distribution} \label{appendix:iid_distribution} We split CIFAR-10 into 100 clients, following the uniform distribution and the Dirichlet distribution (the Dirichlet factors $\alpha$ are set to 1.0, 0.5, and 0.2; a smaller factor implies a higher degree of heterogeneity) to synthesize distributed datasets with IID and non-IID distributions, respectively. Then we train a CNN with two convolutional layers in a federated manner, adopting vanilla FedAvg~\cite{mcmahan2017communication} and two representative personalized federated learning algorithms, i.e., FedBN~\cite{fedbn} and Ditto~\cite{li2021ditto}. The experimental results are demonstrated in Table~\ref{tab:iid_vs_noniid}.
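The Dirichlet-based split described above follows a standard recipe: for each class, draw client proportions from $\mathrm{Dir}(\alpha\mathbf{1})$ and partition the class's indices accordingly. A minimal sketch (our own helper, not the platform's code):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, rng=None):
    """Assign sample indices to clients with a per-class Dirichlet draw.
    A smaller alpha yields more skewed (more heterogeneous) label
    distributions across clients."""
    rng = np.random.default_rng() if rng is None else rng
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return [np.array(ix) for ix in client_idx]
```

With $\alpha=1.0$ the per-client label mixes stay close to uniform, while $\alpha=0.2$ concentrates most of a class's samples on a few clients.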
\begin{table}[t] \centering \caption{Experimental results (accuracy) on CIFAR-10 with IID and non-IID distributions.} \begin{tabular}{c c c c c} \toprule \multirow{2}{*}{Methods} & \multirow{2}{*}{IID Distribution} & \multicolumn{3}{c}{Non-IID Distribution}\\ \cmidrule(lr){3-5} & & \multicolumn{1}{c}{$\alpha=1.0$} & \multicolumn{1}{c}{$\alpha=0.5$} & \multicolumn{1}{c}{$\alpha=0.2$} \\ \midrule FedAvg & 0.8049 & 0.7929 & 0.7905 & 0.7700 \\ FedBN & 0.7908 & 0.8106 & 0.8311 & 0.8817 \\ Ditto & 0.7708 & 0.8087 & 0.8278 & 0.8840 \\ \bottomrule \end{tabular} \label{tab:iid_vs_noniid} \end{table} From the experimental results we can observe that, although vanilla FedAvg achieves competitive performance on CIFAR-10 with IID distribution, it cannot perform well on the datasets with non-IID distributions and suffers larger performance drops as the degree of heterogeneity increases. The heterogeneity in clients' data, which is widespread in FL applications, can lead to sub-optimal performance when participants learn a single global model as they do in distributed machine learning. Therefore, to handle such heterogeneity, clients are encouraged to apply client-specific training based on their private data, sharing parts of the global model while locally maintaining the others. As a result, FedBN and Ditto obtain noticeable improvements over FedAvg on the datasets with non-IID distributions, which indicates their effectiveness in handling such heterogeneity in FL.
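The idea of sharing parts of the global model while locally maintaining others can be sketched as a merge rule over parameter dictionaries. For FedBN the locally kept parameters are the batch-normalization layers; the keyword-based name matching below is our own simplifying assumption, not the algorithms' reference implementation:

```python
def fedbn_merge(local_params, global_params, local_keywords=("bn",)):
    """FedBN-style merge sketch: take the global value for every parameter
    except those whose name contains a 'local' keyword (assumed here to
    mark batch-normalization layers), which stay client-specific."""
    merged = dict(local_params)
    for name, value in global_params.items():
        if not any(kw in name for kw in local_keywords):
            merged[name] = value
    return merged
```

Each client would apply this merge after receiving the broadcast model, so normalization statistics adapt to the client's local data distribution.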
\begin{figure*} \centering \caption{The experimental results of different asynchronous strategies provided in {\sf FederatedScope}\xspace.} \label{fig:revision_asyn_results} \end{figure*} \begin{figure*} \centering \caption{The data distributions among different clustered clients (top clients respond faster than bottom clients) in CIFAR-10.} \label{fig:label_distribution} \end{figure*} \begin{figure*} \centering \caption{The data distributions among different clustered clients (top clients respond faster than bottom clients) in bias-CIFAR-10.} \label{fig:label_distribution_bias} \end{figure*} \begin{figure*} \centering \caption{The experimental results on bias-CIFAR-10.} \label{fig:asyn_bias} \end{figure*} \section{Example of the Amount of Programming} \label{appendix:programming_amount} Users who construct FL courses based on the built-in functionalities can adopt the provided data, models, and plug-in operations via simple configuration, or use their own data or models without changing the implementation of the federated behaviors in the provided FL courses. For example, excluding the code for dataset preprocessing, users only need to modify two lines of code to adopt a customized dataset: one line for registering the dataset class and one line for changing the dataset name in the configuration. Users who aim at further customization can inherit from clients or servers (e.g., from vanilla FedAvg) and add new $<$event, handler$>$ pairs to express the new behaviors of servers and clients accordingly. For example, to implement the personalized federated algorithms FedBN and Ditto based on vanilla FedAvg, users only need to add/modify 19 and around 100 lines of code, respectively. \section{Experiments and Analysis on Asynchronous Training Strategies} \label{appendix:more_asyn_results} We add more experimental results of the asynchronous training strategies provided in {\sf FederatedScope}\xspace, as shown in Figure~\ref{fig:revision_asyn_results}.
From the figures we can observe that asynchronous training strategies can achieve better performance compared to synchronous training strategies. Note that different asynchronous training strategies have different characteristics. For example, the {\it after aggregating} broadcasting manner causes less staleness than {\it after receiving} (as shown in Figure~\ref{fig:staleness}), but {\it after aggregating} requires more available bandwidth at the server since multiple messages are sent out at the same time. These different characteristics can guide users to choose the asynchronous training strategy most suitable for their own applications, considering model performance, system resources, staleness toleration, etc. Generally speaking, there is no general conclusion that one of the provided sampling strategies is better than another; the effectiveness of the sampling strategies is case-dependent. Instead of providing the ``best'' asynchronous training strategies (in fact this is impossible because of ``No Free Lunch''), {\sf FederatedScope}\xspace aims to provide a good abstraction and modularization of asynchronous federated learning, which can both cover most of the existing studies and provide flexibility for customization. Users can conveniently choose suitable asynchronous training strategies accordingly. As shown in Figure~\ref{fig:revision_asyn_results}, from the experiments conducted on CIFAR-10, we find that the performance of applying the response-related and group sampling strategies is similar to that of applying the uniform strategy for this specific case. This is not a surprising result, since both the response-related and group sampling strategies are proposed to alleviate the model bias caused by the heterogeneity in participants' system resources, e.g., clients with weak responsiveness might contribute little to federated aggregation as their (stale) updates might be discounted or even dropped out.
However, the distributions of the clients' data are independent of the distributions of their system resources; hence, when clustering the clients according to their system resources, the data distributions among different clusters are similar, as shown in Figure~\ref{fig:label_distribution}. Such similarities among clusters limit the effectiveness of the response-related and group sampling strategies compared to the uniform sampling strategy, since there might not exist model bias caused by the heterogeneity in participants' system resources. Furthermore, we re-split the CIFAR-10 dataset and make the distributions of clients' data related to their system resources: we randomly select some labels as ``rare'' labels, and instances with these ``rare'' labels are only owned by clients with weak responsiveness. The distributions among clients on this dataset (called bias-CIFAR-10) are shown in Figure~\ref{fig:label_distribution_bias}, and the experimental results are shown in Figure~\ref{fig:asyn_bias}. We can observe that applying the response-related and group sampling strategies achieves noticeable improvements compared to the uniform sampling strategy, which empirically confirms our analysis above. \end{document}
\begin{document} \textbf{\Large Dynamical models of dyadic interactions with delay}\\ \begin{center} \large Natalia Bielczyk, Urszula Fory\'s, Tadeusz P\l{}atkowski\\ \small Faculty of Mathematics, Informatics \& Mechanics, Inst. of Appl. Math. \& Mech.,\\ University of Warsaw, ul. Banacha 2, 02-097 Warsaw\\ \end{center} \begin{abstract} When interpersonal interactions between individuals are described by (discrete or continuous) dynamical systems, the interactions are usually assumed to be instantaneous: the rates of change of the actual states of the actors at a given instant of time are assumed to depend on their states at the same time. In reality, a natural time delay should be included in the corresponding models. We investigate a general class of linear models of dyadic interactions with a constant discrete time delay. We prove that in such models changes of stability of the stationary points, from instability to stability or vice versa, occur for various intervals of the parameters which determine the intensity of interactions. The conditions guaranteeing an arbitrary number (zero, one, or more) of switches are formulated and the relevant theorems are proved. A systematic analysis of all generic cases is carried out. The dynamics of interactions clearly depends both on the strength of the partners' reactions to their own states and on the strength of their reactions to the partner's state. The results presented in this paper suggest that the joint strength of the reactions of the partners to the partner's state, reflected by the product of the strengths of the reactions of both partners, has a greater impact on the dynamics of the relationship than the joint strength of the reactions to their own states. The dynamics is typically much simpler when the joint strength of the reactions to the partner's state is stronger than that for their own states.
Moreover, we have found that multiple stability switches are possible only in relationships in which one of the partners reacts with a delay to his or her own state. Some generalizations to triadic interactions are also presented. \end{abstract} Keywords: Dyadic interactions, discrete delay, Hopf bifurcation, stability switches \section{Introduction} Interpersonal relationships such as dyadic interactions are very complicated and difficult to predict. One of the possibilities of making some predictions is to use mathematical modeling. Mathematical modeling of interactions between humans is a relatively new branch of applied mathematics. Typically, in mathematical sociology, reactions of individuals to stimuli are assumed to be instantaneous, both in discrete and continuous models, cf. for example~\cite{Bar, Bud, Now, Fel, Got, Lat, Lie, Mee, Rin, Rus, Str1, Str2} and the references therein. In basic models of relationships between two partners a linear approach can be used, cf.~\cite{Fel, Str1, Str2}. Obviously, such simple models cannot describe the relationships precisely; however, they can capture some of their key features. On the other hand, it is well known that all natural phenomena are delayed with respect to their stimuli. In the case we study, taking the delay of reactions into account can better describe the real dynamics of interpersonal relations, because of the natural tendency of some people to analyze their own and/or their partner's reactions and act afterwards, which necessarily takes some time. In the simplest approach, the stimuli are the actual states of both partners. Instantaneous reactions correspond formally to the assumption that the rates of change of the states and the actual states of the partners are considered at the same instant of time.
However, because time delays are present in human interactions, the reactions to stimuli are naturally retarded. In biological applications delays were introduced many years ago: the Hutchinson model~\cite{hut}, proposed in~1948 as a delayed version of the logistic equation, was followed by numerous models with delays in the biomathematical literature, cf. for example \cite{For1, Gop, Hal, murray} and the references therein. In contrast, delays in systems modeling human behavior have not received as much attention as they arguably should. In this paper we study possible scenarios of the evolution of linear models in which a discrete delay, constant in time, is introduced. We carry out a systematic study of the models with the delay present in different stimulus terms corresponding to the states of the partners. Although in general nonlinear models are more appropriate for the description of complicated systems, the mathematical approach we use in this paper allows us to simplify the description to a linear one. More precisely, we study small deviations of the partners' states from some reference states (which are the equilibrium or stationary states of the system) and check whether these deviations tend to zero (stable case) or not (unstable case). Because the considered deviations are small, we are able to approximate the nonlinear dynamics using linear models, as is usual in the stability theory for differential equations without delays (ODEs) and with delays (DDEs), cf. e.g.~\cite{Hal, Hal1}. In general, different scenarios of the influence of a discrete delay on the stationary state of the model are possible. \begin{enumerate} \item The delay can have no effect on the stability of the stationary state. \item A sufficiently large delay can destabilize a stationary state which is stable without the delay. \item Appropriate values of the delay can stabilize a stationary state which is unstable without the delay.
\end{enumerate} The second case is the most typical. In this paper we show, however, that the third case occurs in systems modeled by two linear DDEs for various intervals of the coefficients determining the model. \subsection{Basic Romeo and Juliet model} As an introductory example, let us consider the model of the mutual interactions of two persons, Romeo (R) and Juliet (J), without the delay, cf.~\cite{Str1, Str2}. Let the functions $R(t)$ and $J(t)$ denote the states of R and J, respectively. The state of a partner can be interpreted as the satisfaction, the level of happiness, etc., derived from the mutual relation. Positive values reflect a positive attitude (good emotions) towards the partner, or positive experience from the relation; negative values reflect negative, bad experience. In~\cite{Str1,Str2}, the following model has been considered (we cite the author): ``R loves J, J is a fickle lover. The more R loves her, the more J wants to run away and hide. When R gets discouraged and backs off, J begins to find him strangely attractive. R on the other hand tends to echo her: he warms up when she loves him, and grows cold when she hates him.'' In~\cite{Str1,Str2}, this model is described by the system of two linear ODEs \begin{equation} \left\{ \begin{array}{lcr} \dot{R}(t) & = & aJ(t) ,\\ \dot{J}(t) & = & - bR(t), \end{array} \right. \label{eq:2} \end{equation} where $a$, $b > 0$, according to the interpretation above (other nontrivial cases lead to divergent solutions). The solutions to Eqs.~\eqref{eq:2} oscillate around the equilibrium at $(0,0)$, which is a center in the phase space $(R,J)$. \subsection{General R and J model} Now consider a more general situation. Let R and J have their fixed goals or preferences, ideals of their states in the relation, denoted by $R^*$ and $J^*$, respectively.
Assume that the rate of change of their actual states depends linearly on two factors: the differences between their own actual states and their own ideals, and the difference between the actual states of the partners. Such a model has been studied in~\cite{Fel}, and reads \begin{equation} \left\{ \begin{array}{lclcl} \dot{R}(t) & = & c[R^* - R(t)] & + & d[J(t) - R(t)] ,\\ \dot{J}(t) & = & e[J^* - J(t)] & + & f[R(t) - J(t)] .\\ \end{array} \right. \label{eq:248} \end{equation} The model makes it possible to clearly assign personal features to the persons engaged and to compute the evolution of the system easily. The coefficients $c$, $d$, $e$, $f$ can be positive or negative, and describe different psychological traits of the partners, for example a contrarian, a cooperator, etc. We refer the reader to~\cite{Fel} for a discussion of different aspects and interpretations of the model. We are interested in the temporal evolution of the states of the partners. Let us denote by $(R_{eq}, J_{eq})$ the stationary state of this linear inhomogeneous system. The stability properties of this state do not change under a linear transformation of the dependent variables. Substituting $$ r(t):=R(t)-R_{eq}, \ \ \quad j(t):=J(t)-J_{eq} ,$$ we obtain the homogeneous system \begin{equation} \left\{ \begin{array}{lcl} \dot{r}(t) & = & a_{11}r(t) + a_{12}j(t) ,\\ \dot{j}(t) & = & a_{21}r(t) + a_{22}j(t) , \end{array} \right. \label{eq:8} \end{equation} with the equilibrium $(0,0)$, and general coefficients $a_{kl} \in \mathbb{R}$, $k, l\in \{1, 2\}$. The conditions for stability of the equilibrium for this system can be found in textbooks on ODEs, cf.~\cite{Hal1}. In terms of the dyadic interactions they can be interpreted as conditions of stable relations. They strongly depend on the signs of the coefficients $a_{kl}$ and the relations between them. The choice of signs of the coefficients $a_{kl}$ specifies different psychological traits of the partners.
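As a quick numerical illustration (our own sketch; the function names and parameter values are ours, not from the paper), the stationary state of Eqs.~\eqref{eq:248} and the textbook $\tau=0$ stability criterion for Eqs.~\eqref{eq:8} can be checked directly:

```python
def equilibrium(c, d, e, f, R_star, J_star):
    """Stationary state (R_eq, J_eq) of the general R and J model
    0 = c (R* - R) + d (J - R),  0 = e (J* - J) + f (R - J),
    solved by Cramer's rule for the equivalent 2x2 linear system."""
    det = (c + d) * (e + f) - d * f          # assumed non-zero
    R_eq = (c * R_star * (e + f) + d * e * J_star) / det
    J_eq = ((c + d) * e * J_star + f * c * R_star) / det
    return R_eq, J_eq

def stable_without_delay(a11, a12, a21, a22):
    """Textbook criterion for the homogeneous system
    r' = a11 r + a12 j, j' = a21 r + a22 j: both eigenvalues of
    A = [[a11, a12], [a21, a22]] have negative real parts iff
    tr A < 0 and det A > 0."""
    return a11 + a22 < 0 and a11 * a22 - a12 * a21 > 0
```

For instance, the coefficient set $[-4, 1, -2, -2]$ used later in the figures satisfies the criterion (trace $-6$, determinant $10$).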
The case $a_{11} >0$, $a_{12} >0$ can be referred to as ''eager beaver'', cf.~\cite{Str2}: ''R gets excited by J's love for him, and is further spurred on by his own affectionate feelings for her.'' The case $a_{11}<0$, $a_{12}>0$ is referred to as ''cautious lover''. The case $a_{11}>0$, $a_{12}<0$ could be called ''narcissus'', and $a_{11}<0$, $a_{12}<0$ -- the doubly cautious, or ''stoic lover''. It should be noticed that the model is symmetric with respect to its variables; therefore, the same interpretation as we have given for R is also valid for J. \subsection{Delayed R and J model} The models of dyadic interactions described by Eqs.~\eqref{eq:8} do not take into account possible delays in the reactions of the partners to their instantaneous states. In other words, the rate of change of the personal state depends only on the actual state of this person and that of the partner. In this paper we consider situations in which the delays of the reactions of the partners to their own states and/or to the states of their partners are taken into account. In general, the delay can have various effects on the dynamics of the partnership; for example, we show that a delay can help two cautious lovers to stay in a stable relationship although their relationship would lose stability without it. We consider the case when the contributions to the rates of changes of the states $r(t)$ and/or $j(t)$ can be influenced by the past states of one or two of the partners, that is \begin{equation} \left\{ \begin{array}{lcl} \dot{r}(t) & = & a_{11}r(t-\tau_{11}) + a_{12}j(t-\tau_{12}) ,\\ \dot{j}(t) & = & a_{21}r(t-\tau_{21}) + a_{22}j(t-\tau_{22}) , \end{array} \right. \label{eq:ogolne} \end{equation} where $\tau_{kl} \geq 0$, $k, l\in \{1, 2\}$, are the delays of reactions, that is, for $\tau_{kk}>0$, $k\in \{1, 2\}$, the person reacts with a delay to his/her own state, while for $\tau_{kl}>0$, $k\ne l$, $k, l \in \{1, 2\}$, the delay is present in the reaction to the partner's state.
It is obvious that if $\tau_{kl}=0$ for some $k, l\in \{1, 2\}$, then this type of reaction is instantaneous. In general, it is difficult to study the dynamics of Eqs.~\eqref{eq:ogolne} for different values of delays $\tau_{kl} \ne \tau_{mn}$ for $kl\ne mn$. However, the case when the contribution or contributions from the past states are delayed by the same time interval can be analyzed almost completely. Thus, we consider Eqs.~\eqref{eq:ogolne} with one delay $\tau$ present in some of the reaction terms, while the other reactions remain instantaneous. Moreover, it turns out that even in such a drastically simplified model interesting effects occur; in particular, an appropriate delay can stabilize the equilibrium outcome of the evolution of the states of the couple. Such an effect has also been found in~\cite{Bod} for a model with an additional linear term. It is well known, cf.~\cite{Coo}, that if the equilibrium is unstable for at least one value of the delay $\tau \ge 0$, then the trajectories of the considered system diverge for delays large enough. In real applications the behavior of the partners for bounded delays is more relevant. Let us consider an arbitrary maximal value of the delay $\tau_{\max}$. We pose the following questions. \begin{enumerate} \item The equilibrium is unstable if there are no delays. Can a delay $0<\tau<\tau_{\max}$ stabilize it? \item The equilibrium is stable if there are no delays. If the delay is large enough, $0<\tau<\tau_{\max}$, it has a destabilizing effect. What conditions on the reactive parameters $a_{kl}$ enlarge the interval of stability? \item How many stability switches are possible in situations 1 and 2? \item For given parameters of one partner, what can the second person do in order to stabilize the relationship? When is there a possibility to gain stability by changing one's attitude towards oneself and the other person's state, or the relevant delays?
\end{enumerate} The last question can also be formulated as: can a relationship be under the command of one flexible person, when the partner is stiff in attitudes and level of reflexivity? \vskip 0.1cm In the following we address these issues, prove the relevant results and discuss numerical examples. \subsection{Notions of stability and instability} Now, we introduce the notions of stability, instability and stability switches. \begin{itemize} \item We say that the equilibrium is stable, when solutions to Eqs.~\eqref{eq:ogolne} tend to this state as time increases. \item We say that the equilibrium is unstable, when solutions to Eqs.~\eqref{eq:ogolne} diverge. \item We say that the system is stable when its unique equilibrium is stable. \item We say that the system is unstable, when the unique equilibrium is unstable. \item We say that a stability switch occurs if there is a change of stability of the equilibrium from unstable to stable or vice versa, caused by a change of a parameter of the system. In this case typically a Hopf bifurcation occurs. \end{itemize} For precise definitions we refer to any textbook on ODEs, cf.~\cite{Hal1}, or DDEs, cf.~\cite{Hal}. It should be noticed that for equations of the form~\eqref{eq:ogolne} there is a unique equilibrium $(0,0)$, while for more complicated, especially non-linear models, there can be more equilibria. In general, we can also consider another type of stability, Lyapunov stability, when solutions starting near the equilibrium stay near it but do not necessarily tend to it. Typically, Lyapunov stability is non-generic, which means that a small change of the model parameters leads to stability or instability of the equilibrium in the sense described above. In the paper we mainly focus on the generic cases, as is typical in the analysis of models of natural phenomena.
This approach reflects the fact that in reality the model parameters do not stay exactly the same all the time but can change in time, and therefore non-generic cases are very difficult to observe. The terms on the right-hand side (rhs) of Eqs.~(\ref{eq:ogolne}) are addressed as {\it{stimuli terms}} or reaction terms. \vskip 0.1cm In the next three sections we consider the models with the delay placed in one, two, three and four terms. In Section V we introduce a model of triadic interactions. In Section VI we conclude and discuss some open problems. In the Appendix, the proofs of the theorems stated in the paper and some other notes on the topic are presented. \section{Single delay} First we consider the relationships in which only one of the stimuli terms is delayed. There are two families of such models. In the first one the evolution of R's state depends on his state in the past, that is, instead of the general system~\eqref{eq:ogolne} we study \begin{equation} \left\{ \begin{array}{lcccl} \dot{r}(t) & = & a_{11}r(t-\tau) & + & a_{12}j(t) ,\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t) . \end{array} \right. \label{eq:12} \end{equation} In other words, R reacts immediately to the state of J, whereas he is reflexive about his own states and reacts to them only after the delay $\tau$. This is a reasonable model for a person who tends to analyze the environment rather than themselves, so that their own state affects them only after some deliberation. J is more self-sensitive in this model; she tends to update her attitude on the basis of the current level of her private satisfaction. Mathematical properties of this model are described by Theorems~1 and~2 below. In the second model the evolution of R's state depends on J's delayed stimulus, whereas his reaction to his own state is immediate, i.e.~occurs without delay.
It is described by the following form of Eqs.~\eqref{eq:ogolne} \begin{equation} \left\{ \begin{array}{lcccc} \dot{r}(t) & = & a_{11}r(t) & + & a_{12}j(t-\tau) ,\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t) . \end{array} \right. \label{eq:23} \end{equation} This system describes the situation when R is impulsive about his internal states and follows his satisfaction level consciously by reacting to it immediately, but at the same time he is cautious about J and reacts to her levels of satisfaction with a delay. This may result from carefulness as well as from a lack of knowledge about her feelings. Mathematical properties of this model are described by Theorem 3 below. It is obvious that exactly the same models can be considered when we replace the model variables, that is, when the roles of R and J are exchanged. We omit details for such models because their dynamical properties are the same as those presented below. We look for stability switches of the equilibrium for both models~(\ref{eq:12}) and~(\ref{eq:23}). We recall, cf.~for example~\cite{Coo}, that if at least one stability switch occurs, then $(0,0)$ is unstable for delays large enough. Note also that Eqs.~(\ref{eq:12}) and~(\ref{eq:23}) are stable for $\tau = 0$ if \begin{equation} \label{warunek0} a_{11}+a_{22} < 0 \quad \text{and} \quad a_{11}a_{22} - a_{12}a_{21} >0. \end{equation} Inequalities~\eqref{warunek0} can also be expressed by the properties of the matrix $\mathbf{A}=\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right)$ which is formed by the model coefficients, namely \[ \text{tr} \mathbf{A}=a_{11}+a_{22}<0 \quad \text{and} \quad \det \mathbf{A} =a_{11}a_{22}-a_{12}a_{21}>0 . \] Note also that for $\text{tr} \mathbf{A}=0$ there is Lyapunov stability if $\det \mathbf{A}>0$. In the Appendix we prove the following theorems.
\begin{tw} \label{tw1} For $|a_{12}a_{21}|<|a_{11}a_{22}|$ \begin{enumerate} \item if the system~(\ref{eq:12}) is unstable for $\tau=0$, then it is unstable for all $\tau \ge 0$; \item if the system~(\ref{eq:12}) is stable for $\tau=0$, then there is a unique stability switch (at which a Hopf bifurcation occurs). \end{enumerate} \end{tw} Thus, for $|a_{12}a_{21}|<|a_{11}a_{22}|$ there is at most one stability switch for all admissible reactive parameters $a_{kl}$. This result suggests that if the joint strength of reactions of partners to their own states, reflected by the product of both strengths $|a_{11}a_{22}|$, is larger than the joint strength of reactions to partners' states, there is no stability at all or the system loses stability once and for all. The conclusion is that, if such a relationship was stable and, as a result of a slight moderation of parameters (which usually accompanies the personal development of partners), starts to shiver, the delayed person should just accelerate their reactions and stability will return to the system. Note that the behavior of the system depends continuously on the parameters of the model, which implies that a decrease in delay will cure the relationship even if it loses stability as a result of a slight change in the attitude of the partners (i.e.~some of the parameters $a_{kl}$, $k, l \in \{1, 2\}$). This situation corresponds to reality and the natural development of people because it is common to moderate attitudes and reactions over long periods of time. A similar interpretation can be given to the results for the model~(\ref{eq:23}), cf.~Theorem~\ref{tw3} below. For $|a_{12}a_{21}|>|a_{11}a_{22}|$ we define \begin{equation} \Delta := (a_{11}^2 + a_{22}^2)^2 + 4a_{12}a_{21}(a_{22}^2 - a_{11}^2). \label{eq:18000} \end{equation} For $\Delta < 0$, any of the systems considered in this paper has no stability switches, which is proved in the Appendix. The case $\Delta = 0$ is non-generic and therefore we omit it in our analysis.
For $\Delta > 0$ and $i\in \{0, 1\}$ we denote \begin{equation} y_i := \frac{1}{2}\left(a_{11}^2 - a_{22}^2 - 2a_{12}a_{21} \pm \sqrt{\Delta}\right), \quad \omega_i:=\sqrt{y_i} \ \ \text{for} \ y_i > 0, \label{eq:18001} \end{equation} \begin{equation} a_i:=\frac{a_{12}a_{21}a_{22}}{a_{11}(\omega_i^2 + a_{22}^2)} . \label{eq:18002} \end{equation} \begin{tw} \label{tw2} If $|a_{12}a_{21}| > |a_{11}a_{22}|$, then for every $n \in \mathbb{N}$, there exist intervals of the $a_{kl}$, $k, l\in \{1, 2\}$, such that Eqs.~\eqref{eq:12} have $n$ stability switches. The set of sufficient conditions reads \begin{equation} \Delta > 0, \quad \quad a_{11}^2 - a_{22}^2 - 2a_{12}a_{21} > 0 \label{eq:tencozwykle} \end{equation} and for $i\in \{1, 2, \ldots ,2(\lceil \frac{n}{2} \rceil -1)\}$ \begin{equation} \left\{ \begin{array}{l} \frac{1}{\omega_1} \arccos a_1 + \frac{(2i - 2) \pi}{\omega_1} < \frac{1}{\omega_0} \arccos a_0 + \frac{(2i - 2) \pi}{\omega_0} , \\ \frac{1}{\omega_0} \arccos a_0 + \frac{(2i - 2) \pi}{\omega_0} < \frac{1}{\omega_1} \arccos a_1 + \frac{(2i) \pi}{\omega_1} . \end{array} \right. \label{eq:6000} \end{equation} \end{tw} The appearance of a stability switch implies instability of the system for $\tau \rightarrow + \infty.$ Thus, in the generic case, if the system is unstable for $\tau=0$, the number of switches must be even, while if it is stable for $\tau=0$, the number of switches must be odd. However, we should remember that in reality the delay is bounded, $\tau<\tau_{\max}$, and therefore the number of switches observed in the interval $[0,\tau_{\max}]$ can be different from that obtained from the analytical investigation when we let $\tau \to +\infty$. Hence, for $|a_{12}a_{21}| > |a_{11}a_{22}|$, there may be narrow segments of delay values for which the relationship is stable.
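These quantities can be evaluated numerically. The sketch below is our own illustration (names and thresholds are ours): it computes $\Delta$, the crossing frequencies $\omega_i$ and, for each, the smallest delay at which $\lambda = \text{i}\omega_i$ solves the characteristic equation of Eqs.~\eqref{eq:12}; the branch of the delay is fixed through the complex argument of $\text{e}^{-\text{i}\omega\tau}$ at the crossing, which avoids sign bookkeeping in $\arccos a_i$.

```python
import cmath
import math

def char_eq(lam, tau, a11, a12, a21, a22):
    # characteristic function of r'(t) = a11 r(t-tau) + a12 j(t),
    # j'(t) = a21 r(t) + a22 j(t), from solutions proportional to exp(lam*t)
    return (lam - a11 * cmath.exp(-lam * tau)) * (lam - a22) - a12 * a21

def crossing_delays(a11, a12, a21, a22):
    """Crossing frequencies omega_i (roots y_i = omega_i^2 of the quadratic
    discussed in the text) and the smallest delay tau_i at which
    lam = i*omega_i is a characteristic root; assumes a11 != 0
    (for a11 = 0 the delayed term disappears)."""
    delta = (a11**2 + a22**2)**2 + 4 * a12 * a21 * (a22**2 - a11**2)
    if delta <= 0:
        return []        # no purely imaginary roots: no stability switches
    out = []
    for sign in (1.0, -1.0):
        y = 0.5 * (a11**2 - a22**2 - 2 * a12 * a21 + sign * math.sqrt(delta))
        if y <= 0:
            continue
        w = math.sqrt(y)
        # on a crossing, exp(-i*w*tau) equals z below, with |z| = 1:
        z = (1j * w - a12 * a21 / (1j * w - a22)) / a11
        tau = (-cmath.phase(z)) % (2 * math.pi) / w
        out.append((w, tau))
    return out
```

For the one-switch example $[-4,1,-2,-2]$ this gives a single crossing family near $\tau\approx 0.37$, and for the two-switch example $[5,-4,3,-1]$ two families; subsequent crossings of each family occur at $\tau_i + 2k\pi/\omega_i$, producing the narrow stability windows mentioned above.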
This means the partners cannot know what will be the impact of decreasing or increasing the delay on the stability of the relationship, which implies that if the system loses stability, it is no longer easy to regain it on the basis of conscious acting. This system describes a relationship in which the joint strength of reactions of partners to their own states is weaker than the joint strength of reactions to partners' states. This contributes to a common belief that relationships in which partners are more emotionally focused on the partner's states than on their own are shaky, because it is essential to hold one's own internal source of satisfaction and emotional spine. \begin{rem} We can also consider other types of linear models, e.g.~the situation in which the evolution of $r(t)$ is influenced by its current value $r(t)$ and the past value $r(t-\tau)$ (and the partner's actual state $j(t)$) \begin{equation} \left\{ \begin{array}{lccclcl} \dot{r}(t) & = & a_{11}r(t-\tau) & + & a_{12}j(t) & + & a_{13}r(t) ,\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t) && . \end{array} \right. \label{eq:410} \end{equation} An example of two stability switches for this model has been considered in~\cite{Bod}. \end{rem} We note that for $a_{13} = 0$ the system~(\ref{eq:410}) is identical to~(\ref{eq:12}). Since the stability properties of the system depend continuously on the parameters, there exists an interval around $a_{13}=0$ for which the stability properties of both systems are the same. Since any number of switches can be obtained for Eqs.~(\ref{eq:12}), the same is true for Eqs.~(\ref{eq:410}) with $a_{13}$ small enough. We argue this more precisely in the Appendix. \vskip 0.2cm For the second model~(\ref{eq:23}), in which the rate of change of R's state depends on J's state in the past, unlike for Eqs.~(\ref{eq:12}), at most one stability switch can occur.
\begin{tw} \label{tw3} For $|a_{12}a_{21}|<|a_{11}a_{22}|$ there are no stability switches for the system~(\ref{eq:23}). For $|a_{12}a_{21}|>|a_{11}a_{22}|$ \begin{enumerate} \item if the system~(\ref{eq:23}) is unstable for $\tau=0$, then it is unstable for all $\tau \ge 0$; \item if the system~(\ref{eq:23}) is stable for $\tau=0$, then there is a unique stability switch. \end{enumerate} \end{tw} \begin{rem} We note that if we add the delay to the simplest model~(\ref{eq:2}), obtaining the system \begin{equation} \left\{ \begin{array}{lcc} \dot{r}(t) & = & a_{12}j(t-\tau) ,\\ \dot{j}(t) & = & - a_{21}r(t), \end{array} \right. \label{eq:4} \end{equation} with $a_{12}$, $a_{21} > 0$, which is a particular case of~(\ref{eq:23}), it turns out that, apart from particular initial data and a measure-zero set of discrete values of the delay, the states of the partners diverge. This system can be rewritten as $r''(t) = - \omega^2 r(t - \tau)$, where $\omega^2 = a_{12}a_{21}$, which only gives stable results for $\tau = 2k\pi/\omega$, $k \in \mathbb{N}$. The same is true if both partners react with different delays, $\tau_1$ and $\tau_2$, respectively, because then the system can be rewritten as $r''(t) = - \omega^2 r(t-(\tau_1+\tau_2))$, which means it behaves in the same manner, but with the delay $\tau=\tau_1 + \tau_2$. However, this is stability in the Lyapunov sense and therefore it is not preserved for the parameters near these critical values. Thus, in this case the delay cannot lead to stabilization of the relation. We omit details. \end{rem} Below we give an example of a system with a single delay, for which no stability switch occurs and the system is stable for every value of delay. Let us consider Eqs.~\eqref{eq:12} with $\Delta < 0$ such that the system is stable for $\tau = 0$, which provides the set of conditions \begin{equation} \left\{ \begin{array}{c} (a_{11}^2 + a_{22}^2)^2 + 4a_{12}a_{21}(a_{22}^2 - a_{11}^2) < 0 ,\\ a_{11} + a_{22} < 0 ,\\ a_{11}a_{22}- a_{12}a_{21} > 0 .
\end{array} \right. \label{eq:8003} \end{equation} Conditions~(\ref{eq:8003}) are satisfied e.g.~for the coefficients $a_{11} = a_{12} = 1$, $a_{21} = -3$, $a_{22} = -2$. In this example R is an eager beaver, and J is a stoic/doubly cautious lover who is more reactive to her partner's state than to her own. The delay does not affect the stability of the system; it affects, however, the range of time we have to wait for the system to stabilize. This means that in such a relationship R can be reflexive to any extent, and the relationship is still stable. \vskip 0.2cm In the figures below we present numerical examples of the stability switches and the Hopf bifurcation for different choices of the parameters of the first model~(\ref{eq:12}). In all figures the coefficients $a_{kl}$ of the system~(\ref{eq:12}) are represented as the coordinates of the vector $[a_{11},a_{12},a_{21},a_{22}]$. The initial data are $r(t)\equiv 1$ and $j(t)\equiv 1$, $t\in [-\tau,0].$ \begin{figure} \caption{Time evolution of solutions $r(t)$, $j(t)$ to Eqs.~(\ref{eq:12}).} \label{fig0} \end{figure} \begin{figure} \caption{Trajectories of Eqs.~(\ref{eq:12}).} \label{fig1} \end{figure} In Fig.~\ref{fig0} we show the trajectories $r(t)$, $j(t)$ for delay values close to the unique stability switch for the reactive parameters $a_{kl}$ corresponding to point~2 of Theorem~\ref{tw1}, that is $[a_{11}, a_{12}, a_{21}, a_{22}]=[-4, 1, -2, -2]$. For delays below the switch value the equilibrium is stable, for larger delays it is unstable. The left diagram shows the trajectory which converges to the equilibrium. The middle one visualizes the limit cycle appearing as a result of the Hopf bifurcation, and the right one presents the diverging trajectory. In Fig.~\ref{fig1} we show the trajectories in the space $(r, j)$, corresponding to the above values of the delay. Similar behavior is obtained e.g. for the reactive parameters $[-4,-2,-5,-6]$, $[-4,4,4,-9]$, $[-5,-2,2,-5]$.
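The runs behind Fig.~\ref{fig0} can be reproduced with a minimal constant-step Euler scheme for DDEs (our own sketch; the paper does not state which integrator was used). The delayed value is read from a history buffer $\tau/h$ steps back, and the constant initial data $r=j=1$ on $[-\tau,0]$ match the figures; numerically, the unique switch of Theorem~\ref{tw1} for $[-4,1,-2,-2]$ lies near $\tau\approx 0.37$.

```python
def simulate(a11, a12, a21, a22, tau, t_end=20.0, h=0.001):
    """Explicit Euler scheme for r'(t) = a11 r(t - tau) + a12 j(t),
    j'(t) = a21 r(t) + a22 j(t), with r = j = 1 on [-tau, 0]."""
    lag = round(tau / h)            # delay measured in integration steps
    r = [1.0] * (lag + 1)           # r[-1] is r(0); earlier entries: history
    j = [1.0]
    for _ in range(int(t_end / h)):
        r_new = r[-1] + h * (a11 * r[-1 - lag] + a12 * j[-1])
        j_new = j[-1] + h * (a21 * r[-1] + a22 * j[-1])
        r.append(r_new)
        j.append(j_new)
    return r, j
```

With $\tau=0.2$ the oscillations decay towards the equilibrium, while with $\tau=0.6$, beyond the switch, they grow, in line with the left and right panels of Fig.~\ref{fig0}.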
As can be seen, at least two of the $a_{kl}$, namely $a_{11}$ and $a_{22}$, are negative in this case. In Fig.~\ref{fig3} we show the trajectories around two stability switches, satisfying the assumptions of Theorem~\ref{tw2}, for the coefficients $[5,-4,3,-1]$. For delay values below the first switch the equilibrium is unstable, for delays between the two switches the equilibrium is stable. The left diagrams show the trajectories which converge to the equilibrium. The middle ones indicate the corresponding limit cycles, and the right ones show the divergent trajectories. Similar behavior is obtained e.g. for the reactive parameters $[9,4,-6,1]$, $[7,5,-3,2]$, $[5,-7,4,2]$. Now, at least two of the $a_{kl}$ are positive. \begin{figure} \caption{Trajectories of Eqs.~(\ref{eq:12}).} \label{fig3} \end{figure} In Fig.~\ref{fig2} we show the trajectories around three consecutive stability switches $\tau \approx 0.555$, $\tau \approx 1.11$, $\tau \approx 2.1$ for the coefficients $[-2,-4,3,-2]$. The left diagram in each row shows the trajectory which converges to the equilibrium. The middle one approximately visualizes the limit cycle, and the right one shows the diverging trajectories. The corresponding diagrams which present the time dependence of $r(t)$, $j(t)$ are similar to those in Fig.~\ref{fig0}, and therefore are not shown. Similar behavior can be obtained e.g. for the reactive parameters $[4,6,-5,-5]$, $[5,-7,7,-7]$, $[2,8,-1,-2]$. Here, at least one of the $a_{kl}$ is positive. \begin{figure} \caption{Trajectories of Eqs.~(\ref{eq:12}).} \label{fig2} \end{figure} \section{Delay in two terms} There are four types of models in which two of the four stimuli terms are delayed. \begin{equation} \left\{ \begin{array}{lcccc} \dot{r}(t) & = & a_{11}r(t - \tau) & + & a_{12}j(t-\tau),\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t), \end{array} \right.
\label{eq:27} \end{equation} \begin{equation} \left\{ \begin{array}{lclcl} \dot{r}(t) & = & a_{11}r(t - \tau) & + & a_{12}j(t) ,\\ \dot{j}(t) & = & a_{21}r(t - \tau) & + & a_{22}j(t), \end{array} \right. \label{eq:28} \end{equation} \begin{equation} \left\{ \begin{array}{lcccc} \dot{r}(t) & = & a_{11}r(t - \tau) & + & a_{12}j(t) ,\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t - \tau), \end{array} \right. \label{eq:31} \end{equation} \begin{equation} \left\{ \begin{array}{lcccc} \dot{r}(t) & = & a_{11}r(t) & + & a_{12}j(t-\tau) ,\\ \dot{j}(t) & = & a_{21}r(t-\tau) & + & a_{22}j(t). \end{array} \right. \label{eq:32} \end{equation} The first model corresponds to the situation in which R is reflexive towards both himself and J. In the second model both partners consider R's state with the delay. In the third model both partners are reflexive towards their own states. Finally, the fourth one corresponds to the situation in which R and J are reflexive towards their partners. As before, we can consider symmetric models with the roles of J and R exchanged, but for such models the dynamics is the same. We have proved the following \begin{tw}\label{tw4} \begin{enumerate} \item Eqs.~(\ref{eq:27}), (\ref{eq:28}) and~\eqref{eq:32} have at most one stability switch. \item If $a_{11} + a_{22} = 0$, then Eqs.~(\ref{eq:31}) have at most one stability switch. \item If $a_{22}<0$, $a_{12}a_{21}<0$ and $a_{11}$ is close enough to $0$, then Eqs.~\eqref{eq:31} can have an arbitrary number of stability switches. \end{enumerate} \end{tw} For Eqs.~(\ref{eq:31}) with $a_{11} + a_{22} \neq 0$, it is difficult to analyze stability switches with the method presented in the Appendix for the other systems, since the quasi-polynomial has two irreducible trigonometric terms. However, we are able to provide arguments that it displays the same features as Eqs.~\eqref{eq:12} and that multiple stability switches are possible. We present these arguments in the proof of Theorem~\ref{tw4} in the Appendix.
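Point 1 of Theorem~\ref{tw4} can be sanity-checked numerically. Writing each characteristic equation in the form $P(\lambda) + Q(\lambda)\text{e}^{-\lambda\sigma} = 0$ (with $\sigma = \tau$ for Eqs.~\eqref{eq:27}--\eqref{eq:28} and $\sigma = 2\tau$ for Eqs.~\eqref{eq:32}), the candidate crossing frequencies solve the quadratic $F(y)=|P(\text{i}\omega)|^2-|Q(\text{i}\omega)|^2=0$ in $y=\omega^2$. The reduction below is our own computation, to be compared with the Appendix; it happens that Eqs.~\eqref{eq:27} and~\eqref{eq:28} share the same quasi-polynomial, and in each case $F$ has at most one positive root, hence at most one crossing frequency:

```python
import math

def positive_roots(b, c):
    """Positive roots of F(y) = y^2 + b*y + c."""
    disc = b * b - 4 * c
    if disc < 0:
        return []
    return [y for s in (1.0, -1.0)
            for y in [(-b + s * math.sqrt(disc)) / 2] if y > 0]

def crossing_frequencies(model, a11, a12, a21, a22):
    """Candidate crossing frequencies omega = sqrt(y) for the two-delay
    models; 'model' selects which stimuli terms are delayed."""
    det = a11 * a22 - a12 * a21
    if model in ("27", "28"):      # same quasi-polynomial for both systems
        b, c = a22**2 - a11**2, -det**2
    elif model == "32":            # cross terms delayed: exp(-2*lam*tau)
        b, c = a11**2 + a22**2, (a11 * a22)**2 - (a12 * a21)**2
    else:
        raise ValueError(model)
    return [math.sqrt(y) for y in positive_roots(b, c)]
```

Since $F(0)\le 0$ in the first case and the linear coefficient is non-negative in the second, at most one positive root exists for any choice of the $a_{kl}$.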
\section{Delay in three and four terms} There are two families of models in which only one of the stimuli terms is not delayed. \begin{equation} \left\{ \begin{array}{lclcc} \dot{r}(t) & = & a_{11}r(t - \tau) & + & a_{12}j(t - \tau) ,\\ \dot{j}(t) & = & a_{21}r(t - \tau) & + & a_{22}j(t), \end{array} \right. \label{eq:40} \end{equation} and \begin{equation} \left\{ \begin{array}{lcccl} \dot{r}(t) & = & a_{11}r(t - \tau) & + & a_{12}j(t - \tau) ,\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t-\tau). \end{array} \right. \label{eq:41} \end{equation} In the first case J reacts immediately only to her own state, in the second case only to R's state. Finally, there is the case when both partners react with the delay to both states, that is \begin{equation} \left\{ \begin{array}{lclcl} \dot{r}(t) & = & a_{11}r(t - \tau) & + & a_{12}j(t - \tau) ,\\ \dot{j}(t) & = & a_{21}r(t-\tau ) & + & a_{22}j(t-\tau). \end{array} \right. \label{eq:4_del} \end{equation} The quasi-polynomials for systems~\eqref{eq:40} and~\eqref{eq:41} are presented in the Appendix. They contain irreducible trigonometric terms, as in the case of Eqs.~\eqref{eq:31}. This means that we are not able to formulate appropriate theorems, while for Eqs.~\eqref{eq:4_del} we have proved, cf.~the Appendix, the following \begin{tw}\label{tw5} If system~\eqref{eq:4_del} is unstable for $\tau=0$, then it remains unstable for any $\tau>0$. If system~\eqref{eq:4_del} is stable for $\tau=0$, then there is a unique stability switch, and a Hopf bifurcation occurs. \end{tw} The result of Theorem~\ref{tw5} implies that if all reactions of both partners are equally delayed, then the dynamics of the system is similar to that of a one-variable model, cf.~e.g.~\cite{Sko}. Coming back to systems~\eqref{eq:40} and~\eqref{eq:41}, we may proceed analogously to the previous models and check the dynamics for possible simplifications.
For both models all simplifications (cf.~the Appendix) give the same results, with at most one stability switch. On the other hand, we can provide some arguments that multiple stability switches are possible. We present them in the note on systems with three delays in the Appendix. However, numerical simulations have not brought any other possibilities than no stability switches or only one stability switch. We do not present such simulations in the paper because they are almost the same as those presented for the models with a single delay. \section{Triadic interactions} One of the possible extensions of the considered models may be obtained by introducing a third person. Such a generalization has already been suggested in \cite{Str1}, and recently investigated in \cite{Bag}. In particular, in the latter work the authors analyzed periodic and quasi-periodic solutions for linear and nonlinear models of three--person love relations without time delay. Let Paris (P) enter the relationship of R and J. There are many possible ways of modeling such a triad with delays. We discuss two examples described by the following systems of three DDEs \begin{equation} \left\{ \begin{array}{lccclcl} \dot{r}(t) & = & a_{11}r(t) & + & a_{12}j(t), &&\\ \dot{j}(t) & = & a_{21}r(t-\tau) & + & a_{22}j(t) & + & a_{23}p(t-\tau),\\ \dot{p}(t) & = & a_{32}j(t) & + & a_{33}p(t) , && \end{array} \right. \label{eq:301} \end{equation} \begin{equation} \left\{ \begin{array}{lclcccl} \dot{r}(t) & = & a_{11}r(t) & + & a_{12}j(t) , &&\\ \dot{j}(t) & = & a_{21}r(t) & + & a_{22}j(t-\tau) & + & a_{23}p(t) ,\\ \dot{p}(t) & = & a_{32}j(t) & + & a_{33}p(t) .&& \end{array} \right. \label{eq:302} \end{equation} Both models describe the fact that R and P do not interact directly, and therefore do not react to the state of their rival, whereas J needs time to reconsider the situation and react.
In the model~(\ref{eq:301}) the pragmatic J analyzes and reacts with delay to the states of both men, whereas in~(\ref{eq:302}) the reaction of the introverted J to their states is impulsive and immediate, while the influence of her own state on the rate of change of her actual state is delayed. The analysis of the relevant quasi-polynomials, which have third-order terms, becomes complex. In a particular case of Eqs.~(\ref{eq:301}) in which R and P react identically to their own states, $a_{11}=a_{33} = a$, we obtain an interesting result. Substituting $s(t):=a_{21}r(t) + a_{23}p(t)$ we obtain \begin{equation} \left\{ \begin{array}{lcccc} \dot{j}(t) & = & a_{22}j(t) & + & 1 \cdotp s(t-\tau) ,\\ \dot{s}(t) & = & (a_{12}a_{21}+ a_{23}a_{32})j(t) & + & as(t) . \end{array} \right. \label{eq:303} \end{equation} Thus, we can perceive this situation as if J interacted with an ''averaged'' man whose state is $s$, and were influenced by this state with the delay. We obtain the model~(\ref{eq:23}) with the roles of R and J interchanged, in which only one stability switch is possible. Thus, when both men have the same attitude towards themselves, there is at most one stability switch. Below we present an example of numerical results for the second model~\eqref{eq:302} for the case with three stability switches. \begin{figure} \caption{Solutions to Eqs.~(\ref{eq:302}).} \label{figtriada1} \end{figure} Analyzing the relevant quasi-polynomials we can take advantage of the fact that the stability of the system depends continuously on the parameters in order to find examples of triads with multiple stability switches. For example, in order to obtain a system with $n$ stability switches, we may consider a subsystem in which only $a_{11}$, $a_{12}$, $a_{21}$, $a_{22}$ are non-zero parameters and obtain an R--J system with $n$ stability switches. Then, we may put $a_{32}=a_{12}$, $a_{33}=a_{11}$, $a_{23}=a_{21}$ to create a system symmetric with respect to the R--P axis.
This system will share the number of switches with the subsystem. From this point we may try to change the values of chosen parameters until the number of stability switches jumps to another value. It can be shown that multiple stability switches can also occur for adequately chosen intervals of the parameters of the models. \vskip 0.1cm We have used this method to obtain a system with three stability switches for the parameters $a_{11}=-28$, $a_{12}=-74$, $a_{21}=76$, $a_{22}=-35$, $a_{23}=76$, $a_{32}=-42$, $a_{33}=-28$. For these parameters, the model~(\ref{eq:302}) describes a situation when after deliberation J reacts negatively to her own states, but is quick in response to both R and P, and follows their emotions. R and P are similar in reactions and tend to neutralize the relationship: the more satisfied J is, the more satisfaction they lose, and vice versa. They are equally ambivalent towards themselves. The behavior of this system is shown in Fig.~\ref{figtriada1}. \section{Summary and conclusions} In the paper we have studied a general family of linear models of dyadic relationships with one constant time delay present on the rhs of the systems. We have used the mathematical approach presented in~\cite{Coo}. The only linear systems of two DDEs which cannot be completely analyzed using the presented method are those for which the quasi-polynomial has the irreducible term $a \text{e}^{-\lambda\tau} + b\text{e}^{-2\lambda\tau}$, $a^2 + b^2 \ne 0$, which appears for Eqs.~\eqref{eq:31}, \eqref{eq:40} and~\eqref{eq:41} (since then the auxiliary function $F(y)$ contains trigonometric terms, and in general analytic considerations are very difficult).
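For concreteness, substituting $r, j \propto \text{e}^{\lambda t}$ into Eqs.~\eqref{eq:31} yields (our own computation, to be compared with the Appendix) the characteristic equation
\[
\lambda^2 - (a_{11}+a_{22})\lambda\, \text{e}^{-\lambda\tau} + a_{11}a_{22}\, \text{e}^{-2\lambda\tau} - a_{12}a_{21} = 0 ,
\]
in which both $\text{e}^{-\lambda\tau}$ and $\text{e}^{-2\lambda\tau}$ appear. Only for $a_{11}+a_{22}=0$ does the single-exponential term vanish, so that the equation reduces to the standard form with one exponential, in agreement with point 2 of Theorem~\ref{tw4}.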
The results presented in the paper, stated mainly in Theorems~\ref{tw1}--\ref{tw4}, show that the joint strength of the partners' reactions to each other's states (where the joint strength means the product of the appropriate strengths of both partners) has a greater impact on the system dynamics than the joint strength of their reactions to their own states. This is reflected in the fact that for $|a_{11}a_{22}|<|a_{12}a_{21}|$, that is, when the joint strength of reactions to one's own state is weaker than the joint strength of reactions to the partner's state, the dynamics is typically much richer. More precisely, Theorem~\ref{tw1} implies that for $|a_{11}a_{22}|> |a_{12}a_{21}|$ the system~\eqref{eq:12} can have at most one stability switch, while by Theorem~\ref{tw2} the same system with the opposite inequality can exhibit multiple stability switches. Similarly, for Eq.~\eqref{eq:23} Theorem~\ref{tw3} yields no stability switches for $|a_{11}a_{22}|>|a_{12}a_{21}|$ and the possibility of one stability switch for the opposite inequality. The fact that the stability of the system depends continuously on the parameters leads to a general observation that adding a term with an interaction parameter close enough to 0 to a given system does not change its stability properties. However, there is a substantial difference between systems depending on which term is delayed. Among all the analyzed two-person interactions, only those described by the system~\eqref{eq:12} provide the possibility of multiple stability switches. In other words, multiple stability switches are possible exclusively when the stimulus of a partner to his own state is delayed, and the remaining three stimuli are not. Otherwise, i.e. when only the stimulus to the state of the partner is delayed, or when at least two stimuli are delayed, at most one stability switch occurs. In this paper we have considered interactions modeled by linear systems.
It can be of interest to consider nonlinear interactions between the partners. On the other hand, one could consider the most general linear model with arbitrary discrete delays. Let $a_k$, $b_k$, $c_k$, $d_k$ denote the constant reactive coefficients, with $k=1,\dots,n_{rr}$ for $a_k$ and so on, and let $\tau_k^{rr}\ge 0$, $\tau_k^{rj}\ge 0$, $\tau_k^{jr}\ge 0$, $\tau_k^{jj}\ge 0$ denote the relevant delays, corresponding to the different interaction terms in the relationship R--J. The general model reads $$ \frac{d}{dt} r(t) = \sum_{k=1}^{n_{rr}} a_k r(t-\tau_k^{rr}) + \sum_{k=1}^{n_{rj}} b_k j(t-\tau_k^{rj}) , $$ $$ \frac{d}{dt} j(t) = \sum_{k=1}^{n_{jr}} c_k r(t-\tau_k^{jr}) + \sum_{k=1}^{n_{jj}} d_k j(t-\tau_k^{jj}) , $$ and covers all the situations considered in this paper. In particular, one could consider the situation in which the absolute values of the reactive coefficients decrease with increasing values of the delays, which would mean that the states most remote in the past have the least influence on the present dynamics of the partners' behavior. \begin{thebibliography}{99} \baselineskip=5pt \bibitem{Bag} Bagarello, F., Oliveri, F. (2010). An Operator-Like Description of Love Affairs, SIAM J. Appl. Math. 70(8), 3235-3251. \bibitem{Bar} Baron, R. M., Amazeen, P. G., Beek, P. J. (1994). Local and global dynamics of social interactions, in: Dynamical Systems in Social Psychology, Academic Press, San Diego, 111-138. \bibitem{Bod} Bodnar, M., Fory\'s, U., Poleszczuk, J., Bielczyk, N. (2010). \textit{Delay Can Stabilise: Love Affair Dynamics}, National Conference on Applications of Mathematics in Biology and Medicine, Krynica 2010. \bibitem{Bud} Buder, E. H. (1991). A nonlinear dynamic model of social interaction, Communication Research 18, 174-198. \bibitem{Coo} Cooke, K. L., van den Driessche, P. (1986). \textit{On Zeros of Some Transcendental Equations}, Funkcialaj Ekvacioj 29, 77-90. \bibitem{Fel} Felmlee, D. H., Greenberg, D. F. (1999).
\textit{A dynamic systems model of dyadic interaction}, Journal of Mathematical Sociology 23(3), 155-180. \bibitem{For1} Fory\'s, U. (2004). Biological delay systems and the Mikhailov criterion of stability, J. Biol. Sys. 12, 1-16. \bibitem{Gop} Gopalsamy, K. (1992). Stability and Oscillations in Delay Differential Equations of Populations, Springer. \bibitem{Got} Gottman, J. M., Murray, J. D., Swanson, C. C., Tyson, R., Swanson, K. R. (2002). The Mathematics of Marriage: Dynamic Nonlinear Models, MIT Press. \bibitem{Hal} Hale, J. (1977). Theory of Functional Differential Equations, Springer-Verlag, New York. \bibitem{Hal1} Hale, J. (1969). Ordinary Differential Equations, Springer-Verlag, New York. \bibitem{hut} Hutchinson, G. E. (1948). Circular causal systems in ecology, Ann. N.Y. Acad. Sci. 50, 221-246. \bibitem{Lat} Latane, B., Nowak, A. (1994). Attitudes as catastrophes: from dimensions to categories with increasing involvement, in: Dynamical Systems in Social Psychology, Academic Press, San Diego, 219-249. \bibitem{Lie} Liebovitch, L. S., Naudot, V., Vallacher, R., Nowak, A., Bui-Wrzosinska, L., Coleman, P. (2008). Dynamics of two-actor cooperation-conflict models, Physica A 387, 6360-6378. \bibitem{Mee} Meek, R., Meeker, B. (1996). Exploring nonlinear path models via computer simulation, Social Science Computer Review 14(3), 253-268. \bibitem{murray} Murray, J. D. (2002). Mathematical Biology. I: An Introduction, Springer-Verlag, New York. \bibitem{Now} Nowak, A., Vallacher, R. R. (1998). Dynamical Social Psychology, Guilford Press, New York. \bibitem{Rin} Rinaldi, S., Gragnani, A. (1998). Love dynamics between secure individuals: a modeling approach, Nonlinear Dynamics, Psychology, and Life Sciences 2(4). \bibitem{Rus} Rusbult, C., van Lange, P. (2003). Interdependence, interaction and relationships, Annu. Rev. Psychol. 54, 351-375. \bibitem{Sko} Skonieczna, J., Fory\'s, U. (2009).
Stability switches for some class of delayed population models, to appear in Applicationes Mathematicae (Warsaw). \bibitem{Str1} Strogatz, S. (1988). Love affairs and differential equations, Math. Magazine 61(1), 35. \bibitem{Str2} Strogatz, S. (1994). Nonlinear Dynamics and Chaos, Westview Press, 138-140. \end{thebibliography} \section{Appendix} In the proofs presented below we use the approach developed in~\cite{Coo}, cf.~also~\cite{Sko}. If $W$ is the quasi-polynomial for a system of DDEs with a single delay $\tau$, then $W(\lambda)=P(\lambda)+Q(\lambda)\text{e}^{-\lambda \tau}$, where $P$ and $Q$ are polynomials. If all zeros of $W$ (that is, the characteristic values, or eigenvalues, of the system) lie in the left-hand side of the complex plane, then the system is stable. If at least one zero lies in the right-hand side of the complex plane, then the system is unstable. Because the zeros of $W$ depend continuously on the model parameters, a change of stability is possible only when some zero crosses the imaginary axis, either from the left to the right or from the right to the left-hand side of the complex plane. This is the reason we look for purely imaginary zeros of $W$. Substituting $\lambda:=i\omega$ one gets \[ W(i\omega)=P(i\omega)+Q(i\omega)\text{e}^{-i\omega\tau} . \] Looking for purely imaginary eigenvalues we solve $W(i\omega)=0$, that is $P(i\omega)=-Q(i\omega)\text{e}^{-i\omega \tau}$. Taking the modulus of both sides and substituting $\omega^2=y$ we obtain the auxiliary function $F$ defined as \begin{equation}\label{F} F(y) := |P(i\sqrt{y})|^2-|Q(i\sqrt{y})|^2 , \end{equation} and we see that the positive zeros of $F$ give the purely imaginary eigenvalues $\pm i\sqrt{y}$ of the studied system of DDEs. We also know that the derivative of $F$ at its zero point determines the direction of movement of the corresponding eigenvalues in the complex plane, cf.~\cite{Coo, Sko}.
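This recipe can be sketched numerically. The following minimal Python illustration (not part of the original analysis; all parameter values below are hypothetical) assembles $F$ for the quasi-polynomial of Eqs.~(\ref{eq:12}), whose $P$ and $Q$ appear in the proof of Theorem~\ref{tw1} below, and locates the positive zeros of $F$:

```python
import numpy as np

# Quasi-polynomial of Eqs. (eq:12): W(l) = P(l) + Q(l) e^{-l*tau} with
#   P(l) = l^2 - a22*l - a12*a21,   Q(l) = -a11*l + a11*a22.
def F_closed_form(a11, a12, a21, a22):
    """Coefficients of F(y) = y^2 + (a22^2 - a11^2 + 2 a12 a21) y + (a12 a21)^2 - (a11 a22)^2."""
    return [1.0, a22**2 - a11**2 + 2.0 * a12 * a21, (a12 * a21)**2 - (a11 * a22)**2]

def F_from_definition(y, a11, a12, a21, a22):
    """F(y) = |P(i sqrt(y))|^2 - |Q(i sqrt(y))|^2, evaluated directly from the definition."""
    lam = 1j * np.sqrt(y)
    P = lam**2 - a22 * lam - a12 * a21
    Q = -a11 * lam + a11 * a22
    return abs(P)**2 - abs(Q)**2

def positive_zeros(a11, a12, a21, a22, tol=1e-9):
    """Positive real zeros of F; each yields a purely imaginary eigenvalue candidate i*sqrt(y)."""
    roots = np.roots(F_closed_form(a11, a12, a21, a22))
    return sorted(r.real for r in roots if abs(r.imag) < tol and r.real > tol)
```

For instance, for the hypothetical values $a_{11}=-1$, $a_{12}=3$, $a_{21}=-2$, $a_{22}=1$ one gets $F(y)=y^2-12y+35$ with the two positive zeros $y_0=5$ and $y_1=7$, and the closed form agrees with the definition at both zeros.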
In fact, if this derivative is positive, then the eigenvalue crosses the imaginary axis from the left to the right-hand side of the complex plane, while for a negative derivative the movement is in the opposite direction. \noindent {\bf{Proof of Theorem~\ref{tw1}.}} The quasi-polynomial for Eqs.~\eqref{eq:12} has the form \[ W(\lambda) = \lambda^2 - a_{22}\lambda - a_{12}a_{21} + \text{e}^{-\lambda\tau}(a_{11}a_{22} - \lambda a_{11}) . \] The auxiliary function $F$ reads \begin{equation}\label{F12} F(y) = y^2 + y(a_{22}^2 - a_{11}^2 + 2a_{12}a_{21}) + a_{12}^2a_{21}^2 - a_{11}^2a_{22}^2 . \end{equation} If the system is unstable for $\tau=0$, then the corresponding characteristic polynomial, which coincides with $W(\lambda)$ for $\tau=0$, has zeros in the right-hand side of the complex plane. As $\tau$ increases, the roots of the quasi-polynomial would have to move to the left-hand side of the complex plane in order to stabilise the system. Under the assumption $|a_{11}a_{22}|>|a_{12}a_{21}|$ we have $F(0)<0$, so the function $F$ has only one positive zero $y_0$, and its derivative $$F'(y_0) = 2y_0 + a_{22}^2 - a_{11}^2 + 2a_{12}a_{21} > 0 ,$$ which means that the transition from the right to the left-hand side is not possible, cf.~\cite{Sko}. This implies that if the system is unstable for $\tau=0$, then it remains unstable for all $\tau>0$. Moreover, if an eigenvalue enters the right-hand complex half-plane, it remains there forever, yielding that only one stability switch is possible. If the system is stable for $\tau=0$, i.e. $a_{11}a_{22} - a_{12}a_{21} > 0$ and $a_{11} + a_{22} < 0$ under the assumption of Theorem~\ref{tw1}, then all characteristic values lie in the left-hand complex half-plane and can cross to the right-hand complex half-plane, which implies destabilisation. There exists a critical $\tau_c$ for which a Hopf bifurcation (HB) occurs, and the system is unstable for larger delays. More precisely, let $\omega_0 = \sqrt{y_0}$, where $y_0$ is the unique positive zero of $F$.
Then the HB occurs for the smallest $\tau$ for which $W(i\omega_0) = 0$, i.e. for the smallest $\tau$ such that \begin{equation}\label{re-im} \left\{ \begin{array}{lcccl} \Re(W(i\omega_0)) & = & -\omega_0^2 -a_{12}a_{21} - a_{11}\omega_0 \sin(\omega_0 \tau) + a_{11}a_{22}\cos (\omega_0 \tau) & = & 0 ,\\ \Im(W(i\omega_0)) & = & -a_{22}\omega_0 - a_{11}a_{22}\sin(\omega_0 \tau) - a_{11}\omega_0 \cos(\omega_0 \tau) & = & 0, \end{array} \right. \end{equation} i.e. for $\tau_c$ determined by the values of sine and cosine \[ \cos(\tau_c\omega_0)=\frac{a_{12}a_{21}a_{22}}{a_{11}(a_{22}^2+\omega_0^2)}=:a_0, \quad \sin (\tau_c\omega_0)=- \frac{\omega_0(\omega_0^2 +a_{22}^2+a_{12}a_{21})}{a_{11}(a_{22}^2+\omega_0^2)} , \] which implies \[ \tau_c = \left\{ \begin{array}{ccc} \frac{1}{\omega_0} \arccos a_0 & \text{for} & a_{11}<0 ,\\ \frac{1}{\omega_0}\left(2\pi - \arccos a_0\right) & \text{for} & a_{11}>0 . \end{array} \right. \] $\Box$ \noindent {\bf{Proof of Theorem~\ref{tw2}.}} If $|a_{12}a_{21}|>|a_{11}a_{22}|$, then stability switches can occur only if the auxiliary function $F$ defined by Eq.~\eqref{F12} has two positive roots $0<y_0 < y_1$, i.e. if \[ a_{22}^2 - a_{11}^2 + 2a_{12}a_{21}<0 ,\ \ (a_{22}^2 + 2a_{12}a_{21} - a_{11}^2)^2 > 4\left((a_{12}a_{21})^2 - (a_{11}a_{22})^2\right) . \] Since $F'(y) = 2y + (a_{22}^2 - a_{11}^2 + 2a_{12}a_{21})$ we have \[ F'(y_0) = -\sqrt{\Delta} < 0\quad \text{and} \quad F'(y_1) = +\sqrt{\Delta} > 0, \] where $\Delta$ is the discriminant of $F$, and therefore we deduce that for $\omega_0$ an eigenvalue can move from the right half-plane to the left one, while for $\omega_1$ it moves in the opposite direction. Eqs.~\eqref{re-im} yield \begin{equation} \left\{ \begin{array}{lcc} \sin(\omega_j \tau_j) & = & \frac{-\omega_j(\omega_j^2 + a_{22}^2 + a_{12}a_{21})}{a_{11}(\omega_j^2 + a_{22}^2)} , \\ \cos(\omega_j\tau_j) & = & a_j , \end{array} \right.
\label{eq:19} \end{equation} with $a_j = \frac{a_{12}a_{21}a_{22}}{a_{11}(\omega_j^2 + a_{22}^2)}$ for $ j=0, 1$. For each $j$ let $\tau_{j0}>0$ be the first critical delay for which Eqs.~\eqref{eq:19} are satisfied. We obtain two sequences of critical delays for which eigenvalues can cross the imaginary axis \begin{equation} \tau_{jn} = \tau_{j0} + \frac{2n\pi}{\omega_j}, \quad n \in \mathbb{N}, \quad j=0, \; 1. \label{eq:20} \end{equation} There are two possibilities. \begin{enumerate} \item If the system is unstable for $\tau=0$, then at least one zero lies on the right-hand side of the imaginary axis. A stability switch occurs if an eigenvalue crosses the imaginary axis from the right to the left-hand side of the complex plane through the point $i\omega_0$. This can happen only for the sequence $(\tau_{0n})$. Thus, \[ \tau_{00} < \tau_{10}. \] In this case there are at least two switches, because for sufficiently large $\tau$ the system is unstable, cf.~\cite{Coo, Sko}. In order to have more switches, the system which is unstable for $\tau=0$ should satisfy \[ \tau_{00} < \tau_{10} < \tau_{01}, \] which gives the two corresponding inequalities in (\ref{eq:6000}). \item If the system is stable for $\tau=0$, then all its eigenvalues lie in the left-hand side of the complex plane. If $\tau_{10} < \tau_{00}$, then at least one stability switch occurs. In order to have three switches the following conditions (together with the assumptions listed before) should be satisfied \[ \tau_{10}<\tau_{00}<\tau_{11} . \] \end{enumerate} Analogously one can obtain a set of sufficient conditions for an arbitrary number $N$ of stability switches. \noindent $\Box$ Now, consider the possibility of $5$ stability switches.
Let the system be stable for $\tau=0$; then there are $5$ switches if \[ \left\{ \begin{array}{l} \Delta > 0,\\ |a_{12}a_{21}|>|a_{11}a_{22}|,\\ a_{11} + a_{22} <0 \ \wedge \ a_{11}a_{22} - a_{12}a_{21}>0,\\ a_{11}^2 - a_{22}^2 - 2a_{12}a_{21} > 0,\\ \omega_0\arccos a_1 < \omega_1\arccos a_0 < \omega_0\arccos a_1 + 2\pi\omega_0,\\ \omega_0\arccos a_1 +2\pi\omega_0 < \omega_1\arccos a_0 + 2\pi\omega_1 < \omega_0\arccos a_1 + 4\pi\omega_0. \end{array} \right. \] Five stability switches occur for example for the system~(\ref{eq:12}) with the coefficients $[-1, 3, -2, 1]$. The corresponding approximate values of the stability switches are $$\tau_1\approx 0.92, \ \tau_2\approx1.4, \ \tau_3\approx 3.3, \ \tau_4\approx 4.2, \ \tau_5\approx 5.46. $$ The system is stable for $\tau \in [0, \tau_1) \cup (\tau_2, \tau_3) \cup (\tau_4, \tau_5).$ \noindent {\bf{Note to Remark~1.}} The quasi-polynomial for Eqs.~(\ref{eq:410}) reads \begin{equation} W(\lambda) = \lambda^2 + (-a_{22} - a_{13})\lambda + (-a_{11}\lambda + a_{11}a_{22})\text{e}^{-\lambda \tau} + a_{13}a_{22} - a_{12}a_{21}, \label{eq:43} \end{equation} and the corresponding auxiliary function is $F(y) = y^2 + (a_{22}^2 + a_{13}^2 +2a_{12}a_{21} - a_{11}^2)y + (a_{13}a_{22} - a_{12}a_{21})^2 - (a_{11}a_{22})^2$. We see that for $a_{13}=0$ both functions are the same as for Eqs.~\eqref{eq:12}. This yields that for appropriate values of the model parameters there can be an arbitrary number of stability switches, as for Eqs.~\eqref{eq:12}. \noindent {\bf{Proof of Theorem~\ref{tw3}.}} The quasi-polynomial and the auxiliary function $F$ have the form \[ W(\lambda) = \lambda^2 - (a_{11} + a_{22})\lambda + a_{11}a_{22} - a_{12}a_{21}\text{e}^{-\lambda\tau}, \] \[ F(y) = y^2 + (a_{11}^2 + a_{22}^2)y + (a_{11}a_{22})^2 - (a_{12}a_{21})^2.
\] If $|a_{12}a_{21}| > |a_{11}a_{22}|$, then $F(0)<0$ and there is one positive zero $y_0$, for which $F'(y_0) = \sqrt{\Delta} > 0$; therefore eigenvalues can cross the imaginary axis only from the left to the right. Thus, \begin{enumerate} \item if the system is unstable for $\tau = 0$, then the same holds for all $\tau\geq 0$; \item if the system is stable for $\tau = 0$, then there exists $\tau_c$ for which the HB occurs, and for $\tau > \tau_c$ the system becomes unstable (the critical value $\tau_c$ can be calculated as in the proof of Theorem~\ref{tw1}). \end{enumerate} If $ |a_{12}a_{21}| < |a_{11}a_{22}|$, then stability switches are possible only if $F(y)$ has two positive roots. The necessary condition $a_{11}^2 + a_{22}^2 < 0$ for this situation excludes such a possibility. \noindent $\Box$ \noindent {\bf{Proof of Theorem~\ref{tw4}.}} \begin{enumerate} \item For the models~(\ref{eq:27}) and~(\ref{eq:28}), the quasi-polynomial is of the same form \[ W(\lambda) = \lambda^2 - a_{22}\lambda + (-a_{11}\lambda + a_{11}a_{22} - a_{12}a_{21}) \text{e}^{-\lambda \tau} ,\] and the corresponding auxiliary function is $F(y) = y^2 + (a_{22}^2 - a_{11}^2)y - (a_{11}a_{22} - a_{12}a_{21})^2$. If $a_{11}a_{22} - a_{12}a_{21} \neq 0$, then the free term of the function $F$ is always negative and $F$ has exactly one positive root, which implies that there is at most one stability switch. \item For the model~(\ref{eq:31}) the quasi-polynomial reads \begin{equation}\label{pol:31} W(\lambda) = \lambda^2 - a_{12}a_{21} + (-a_{11} - a_{22})\lambda \text{e}^{-\lambda \tau} + a_{11}a_{22}\text{e}^{-2\lambda\tau} . \end{equation} If in this particular relationship the partners are focused on their own states from the past but with opposite results, which means $a_{11} + a_{22} = 0$, the quasi-polynomial~\eqref{pol:31} reduces to \[ W(\lambda) = \lambda^2 - a_{12}a_{21} + a_{11}a_{22}\text{e}^{-2\lambda\tau} , \] which enables the analysis as above.
The relevant auxiliary function is of the form $F(y) = y^2 - 2a_{12}a_{21}y + (a_{12}a_{21})^2 - (a_{11}a_{22})^2$, and it has two real roots whenever $a_{11}a_{22}\ne 0$, since $\Delta(F) = 4(a_{11}a_{22})^2$ is then positive. Hence: a.~if $a_{12}a_{21} < a_{11}a_{22}$, the free term of the function $F$ is negative, which implies that the function has a single positive root, so that no more than a single switch is possible; b.~if $a_{12}a_{21} > a_{11}a_{22}$, the system never starts from stability, because this inequality means $a_{11}a_{22} - a_{12}a_{21} < 0$. Thus, if we look for more than one stability switch, the condition $\tau_{00} < \tau_{10} < \tau_{01}$ must be satisfied, which implies $\frac{\pi}{2\omega_0} < \frac{\pi}{2\omega_1} < \frac{\pi}{\omega_0}$, where $\omega_0 = (a_{12}a_{21} - a_{11}a_{22})^{\frac{1}{2}}$, $\omega_1 = (a_{12}a_{21} + a_{11}a_{22})^\frac{1}{2}$, but $0 < \omega_0 < \omega_1$, so that the first inequality is never satisfied. On the other hand, if $a_{11}=0$, then the quasi-polynomial~\eqref{pol:31} reduces to \[ W(\lambda) = \lambda^2 - a_{12}a_{21} - a_{22}\lambda \text{e}^{-\lambda \tau} , \] which enables the analysis as before. The auxiliary function $F(y)=y^2-(2a_{12}a_{21}+a_{22}^2)y+a_{12}^2a_{21}^2$ has two positive roots under the assumption $a_{22}^2>4|a_{12}||a_{21}|$, and then multiple stability switches can occur, as in Theorem~\ref{tw2}. Due to the continuous dependence on the parameters, multiple switches occur also for $a_{11}$ sufficiently close to $0$. \item For the model~(\ref{eq:32}) there is \[ W(\lambda) = \lambda^2 - (a_{11} + a_{22})\lambda + a_{11}a_{22} - \text{e}^{-2 \lambda \tau}a_{12}a_{21} . \] Here we have $F(y) = y^2 + (a_{11}^2 + a_{22}^2)y + (a_{11}a_{22})^2 - (a_{12}a_{21})^2$ and respectively $\Delta(F) = (a_{11}^2-a_{22}^2)^2 + 4(a_{12}a_{21})^2 > 0 $ for any parameters, which means that $F$ has two real roots.
Note that if both roots were positive, their sum would be positive, while the sum of the roots equals $-(a_{11}^2 + a_{22}^2)$ and is always non-positive. Thus, there is no more than one stability switch. \end{enumerate} Hence, due to the continuous dependence on the parameters, the system has a single stability switch at most. \noindent $\Box$ {\bf{Proof of Theorem~\ref{tw5}.}} The quasi-polynomial has the form \[ W(\lambda) = \lambda^2 - (a_{11} + a_{22})\lambda\text{e}^{-\lambda \tau} + (a_{11}a_{22} - a_{12}a_{21})\text{e}^{-2\lambda\tau} \] and the characteristic equation $W(\lambda)=0$ is equivalent to \begin{equation}\label{last} \lambda^2\text{e}^{2\lambda\tau} - (a_{11} + a_{22})\lambda\text{e}^{\lambda \tau} + a_{11}a_{22} - a_{12}a_{21}=0. \end{equation} Let us denote $\lambda \text{e}^{\lambda \tau}=:z$, $a=-(a_{11} + a_{22})=-\text{tr} \mathbf{A}$, $b= a_{11}a_{22} - a_{12}a_{21}=\det \mathbf{A}$. Then Eq.~\eqref{last} reads \begin{equation}\label{uwik} z^2+az+b=0, \end{equation} and therefore, \[ \lambda \text{e}^{\lambda \tau}= \frac{-a-\sqrt{a^2-4b}}{2} \quad \text{or} \quad \lambda \text{e}^{\lambda \tau}= \frac{-a+\sqrt{a^2-4b}}{2} . \] Looking for purely imaginary eigenvalues one solves \[ i\omega_1 \text{e}^{i\omega_1 \tau}= \frac{-a-\sqrt{a^2-4b}}{2}, \quad i\omega_2 \text{e}^{i\omega_2 \tau}= \frac{-a+\sqrt{a^2-4b}}{2} \] and obtains two sequences of critical delays at which stability switches can occur. Now, we need to know the direction of transition of the eigenvalues in the complex plane at the critical values of the delays. We use the implicit function theorem to calculate it.
Differentiating Eq.~\eqref{uwik} with respect to $\tau$ we obtain \[ 2z\frac{dz}{d\tau}+a\frac{dz}{d\tau}=0 \Longrightarrow \frac{dz}{d\tau}=0 \] (provided $2z+a\ne 0$, i.e. $a^2\ne 4b$), which yields \[ \frac{d\lambda}{d\tau}\text{e}^{\lambda \tau}+\lambda\text{e}^{\lambda\tau} \left(\frac{d\lambda}{d\tau}\tau+\lambda\right)=0 \Longrightarrow \frac{d\lambda}{d\tau}=-\frac{\lambda^2}{1+\lambda \tau} . \] Substituting $\lambda=i\omega_k$, $k\in \{1, 2\}$, $\tau=\tau_{cr}$, where $\tau_{cr}$ is any of the critical delays, one gets \[ \frac{d\Re \lambda}{d\tau}\Big|_{\lambda=i\omega_k}= \frac{\omega^2_k}{1+\omega^2_k\tau_{cr}^2} , \] which is positive independently of the values of $\omega_k$ and $\tau_{cr}$. This means that the system~\eqref{eq:4_del} can have at most one stability switch. $\Box$ \noindent {\bf{Note to systems with delay in three terms.}} For Eqs.~(\ref{eq:40}) the quasi-polynomial has the form \begin{equation} W(\lambda) = \lambda^2 - a_{22}\lambda + (-a_{11}\lambda + a_{11}a_{22})\text{e}^{-\lambda\tau} -a_{12}a_{21}\text{e}^{-2\lambda\tau} \label{eq:655} \end{equation} and we can reduce it in three ways: \begin{enumerate} \item $a_{11}=0$, yielding $W(\lambda) = \lambda^2 - a_{22}\lambda -a_{12}a_{21}\text{e}^{-2\lambda\tau}$, for which the auxiliary function $F$ always has one positive zero; \item $a_{12}a_{21}=0$, yielding $W(\lambda) = \lambda^2 - a_{22}\lambda + (-a_{11}\lambda + a_{11}a_{22})\text{e}^{-\lambda\tau}$, and $F$ has the same property as above; \item $a_{22}=0$, yielding $W(\lambda) = \lambda^2-a_{11}\lambda\text{e}^{-\lambda\tau} -a_{12}a_{21}\text{e}^{-2\lambda\tau}$, which has the same form as for Eqs.~\eqref{eq:4_del}. \end{enumerate} In all three simplifications at most one stability switch is possible.
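For the first two reductions the claim is elementary: in both cases the free term of $F$ is negative (for non-degenerate parameters), so $F$ has exactly one positive zero. A minimal numerical sketch (the parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

def count_positive_roots(coeffs, tol=1e-9):
    """Count the positive real roots of a polynomial given by its coefficient list."""
    return sum(1 for r in np.roots(coeffs) if abs(r.imag) < tol and r.real > tol)

def F_reduction_1(a22, a12, a21):
    # a11 = 0: W(l) = l^2 - a22*l - a12*a21*e^{-2*l*tau}, so
    # F(y) = y^2 + a22^2 * y - (a12*a21)^2  (negative free term).
    return [1.0, a22**2, -(a12 * a21)**2]

def F_reduction_2(a11, a22):
    # a12*a21 = 0: W(l) = l^2 - a22*l + (-a11*l + a11*a22)*e^{-l*tau}, so
    # F(y) = y^2 + (a22^2 - a11^2) * y - (a11*a22)^2  (negative free term).
    return [1.0, a22**2 - a11**2, -(a11 * a22)**2]
```

In both reductions the single sign change in the coefficient sequence already forces exactly one positive root (Descartes' rule of signs), which the numerical count confirms.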
On the other hand, one can rewrite the characteristic equation for $\lambda=i\omega$ in a similar way as when looking for the auxiliary function $F$ and obtain \begin{equation} \left\vert -\omega^2 - a_{22}i\omega \right\vert =\left\vert a_{11}i\omega - a_{11}a_{22} +a_{12}a_{21}(\cos \omega\tau-i\sin\omega\tau) \right\vert \end{equation} and define the auxiliary function \[ G_{\tau}(\omega)=\omega^4+ (a_{22}^2-a_{11}^2) \omega^2-a_{11}^2a_{22}^2 -a_{12}^2a_{21}^2+2a_{11}a_{12}a_{21}\,\omega\sin\omega\tau +2a_{11}a_{22}a_{12}a_{21}\cos\omega\tau , \] whose positive zeros give the purely imaginary eigenvalues. We see that \[ G_{\tau}(0)=-(a_{11}a_{22}-a_{12}a_{21})^2<0 \quad (\text{for } a_{11}a_{22}\ne a_{12}a_{21}) \quad \text{and} \quad \lim_{\omega \to +\infty}G_{\tau}(\omega)=+\infty , \] implying at least one positive zero of $G_{\tau}$. We also see that for $\tau=0$ there is $G_0(\omega)=\omega^4+ (a_{22}^2-a_{11}^2) \omega^2-(a_{11}a_{22}-a_{12}a_{21})^2$, and $G_0$ has one positive zero, yielding that for small $\tau>0$ only one positive zero exists. However, looking at the graphs of $\omega^4+ (a_{22}^2-a_{11}^2) \omega^2-a_{11}^2a_{22}^2$ and $a_{12}^2a_{21}^2-2a_{11}a_{12}a_{21}\,\omega\sin\omega\tau -2a_{11}a_{22}a_{12}a_{21}\cos\omega\tau$, one can easily see that for larger $\tau$ the number of positive zeros increases and can be made arbitrarily large for sufficiently large $\tau$. This shows that there can be more than one stability switch for some parameter values.
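The growth of the number of zeros with $\tau$ is easy to observe numerically. The sketch below (hypothetical parameter values, not taken from the paper) counts sign changes of $G_{\tau}$ on a fine grid, working directly with the two moduli in the identity above so that no algebraic simplification of $G_{\tau}$ is needed:

```python
import numpy as np

def G(omega, tau, a11, a12, a21, a22):
    """G_tau(omega) = |-omega^2 - a22*i*omega|^2
                      - |a11*i*omega - a11*a22 + a12*a21*e^{-i*omega*tau}|^2."""
    lhs = -omega**2 - 1j * a22 * omega
    rhs = 1j * a11 * omega - a11 * a22 + a12 * a21 * np.exp(-1j * omega * tau)
    return np.abs(lhs)**2 - np.abs(rhs)**2

def count_zeros(tau, a11, a12, a21, a22, omega_max=5.0, n=200001):
    """Count sign changes of G_tau on a fine grid over (0, omega_max]."""
    w = np.linspace(1e-6, omega_max, n)
    g = G(w, tau, a11, a12, a21, a22)
    return int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))
```

For example, with the hypothetical values $a_{11}=2$, $a_{12}=a_{21}=a_{22}=1$, the count is $1$ at $\tau=0$ and grows once $\tau$ is large, in line with the graphical argument above.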
For Eqs.~(\ref{eq:41}) the quasi-polynomial has the form \begin{equation} W(\lambda) = \lambda^2 + (-\lambda(a_{11} + a_{22}) - a_{12}a_{21}) \text{e}^{-\lambda\tau} + a_{11}a_{22}\text{e}^{-2\lambda\tau} \label{eq:655a} \end{equation} and we can also reduce it in two ways: \begin{itemize} \item $a_{11}a_{22}=0$, yielding $W(\lambda) = \lambda^2 + (-\lambda(a_{11} + a_{22}) - a_{12}a_{21}) \text{e}^{-\lambda\tau}$, for which $F$ has one positive zero; \item $a_{12}a_{21}=0$, yielding $W(\lambda) = \lambda^2-\lambda(a_{11} + a_{22}) \text{e}^{-\lambda\tau} + a_{11}a_{22}\text{e}^{-2\lambda\tau}$, and $W$ has the same form as for Eqs.~\eqref{eq:4_del}. \end{itemize} In both cases only one stability switch is possible. However, defining the auxiliary function $G$ one can also expect multiple stability switches. \end{document}
\begin{document} \title{On the geometry of an order unit space} \author{Anil Kumar Karn} \address{School of Mathematical Sciences, National Institute of Science Education and Research Bhubaneswar, An OCC of Homi Bhabha National Institute, P.O. - Jatni, District - Khurda, Odisha - 752050, India.} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \subjclass[2020]{Primary: 46B40; Secondary: 46B20, 46L05.} \keywords{Order unit space, matrix order unit space, canopy, periphery, $\infty$-orthogonality} \begin{abstract} We introduce the notion of \emph{canopy} of a linearly compact convex set containing $0$ in a real vector space. We prove that a canopy determines precisely the positive part of the closed unit ball of an order unit space. We also study its matricial version to characterize matrix order unit spaces. Next, we consider the notion of \emph{periphery} of a canopy, which consists of the maximal elements of a canopy in a certain sense. We discuss some elementary properties of the periphery and prove that any order unit space $(V, e)$ of dimension more than $1$ contains a copy of $\ell_{\infty}^2$ as an order unit subspace. Further, we find a condition under which $V$ contains a copy of $\ell_{\infty}^n$ for some $n \in \mathbb{N}$ as an order unit subspace. We also study its application in a unital C$^*$-algebra. \end{abstract} \maketitle \section{Introduction} Order unit spaces dominate the interface between commutative and non-commutative C$^*$-algebras. In the early 1940s, Stone, Kakutani, Krein and Yosida proved independently that if an order unit space $(V, e)$ is a vector lattice in its order structure, then it is unitally lattice isomorphic to a dense lattice subspace of $C_{\mathbb{R}}(X)$ for some suitable compact Hausdorff space $X$ \cite[Theorem II.1.10]{A71}. (See the notes after Section 1, Chapter II of \cite{A71} for the details.)
In 1951, Sherman proved that the self-adjoint part of a C$^*$-algebra $A$ is a vector lattice in its order structure if and only if $A$ is commutative \cite{S51}. The same year, Kadison proved that the infimum of a pair of self-adjoint operators on a complex Hilbert space exists if and only if they are comparable \cite{K51a}. The same year, in another paper, he proved that any unital self-adjoint subspace of a unital C$^*$-algebra is an order unit space \cite{K51b}. (Much later, in 1977, Choi and Effros proved that a unital self-adjoint subspace of a unital C$^*$-algebra is precisely a matrix order unit space \cite{CE77}.) Soon after Kadison underscored the importance of order unit spaces as a possible role model for non-commutative ordered spaces, there was a flurry of research in the study of order unit spaces and their duals. Some early prominent references are Bonsall, Edwards, Ellis, Asimov and Ng, besides many others. (See \cite{A68,B55,E64,El64,N69}. We refer to \cite{A71,Jam70} for more references and details.) The dual of an order unit space is a base normed space, which is defined through the geometric notion of a \emph{base} in an ordered vector space. On the other hand, the notion of an order unit is order theoretic. In this paper we propose to study a geometric characterization of order unit spaces. We introduce the notion of \emph{canopy} of a linearly compact convex set containing $0$ in a real vector space. We prove that a canopy precisely determines the positive part of the closed unit ball of an order unit space. We also study its matricial version to characterize matrix order unit spaces. Next, we consider the notion of \emph{periphery} of a canopy, which consists of the maximal elements of a canopy in a certain sense. The periphery contains projections whenever they exist. We discuss some elementary properties of the periphery.
Using these properties, we prove that any order unit space $(V, e)$ of dimension more than $1$ contains a copy of $\ell_{\infty}^2$ as an order unit subspace. We also prove that $V$ is a union of these copies in such a way that any two such subspaces meet in the \emph{axis} $\mathbb{R} e$. Further, we find a condition under which $V$ contains a copy of $\ell_{\infty}^n$ for some $n \in \mathbb{N}$ as an order unit subspace. We also study its application in a unital C$^*$-algebra and in an absolute order unit space. The scheme of the paper is as follows. In Section 2, we introduce the notion of \emph{canopy} of a linearly compact convex set in a real vector space containing $0$ and prove that the positive part of the closed unit ball can be characterized precisely as a linearly compact convex set containing $0$ and having a canopy. We next consider the notion of \emph{matricial canopy} in order to find a similar characterization for matrix order unit spaces. In Section 3, we consider the notion of \emph{periphery} of a canopy and study some of its properties. We discuss the canopy and periphery of $\ell_{\infty}^n$. In Section 4, we apply these notions to study some properties of order unit spaces and unital C$^*$-algebras. We prove that any order unit space $(V, e)$ of dimension more than $1$ contains a copy of $\ell_{\infty}^2$ as an order unit subspace. We further characterize order unit spaces which contain a copy of $\ell_{\infty}^n$ for some $n \in \mathbb{N}$ as an order unit subspace. \section{The positive part of the closed unit ball} In this section, we provide a geometric description of the positive part of the closed unit ball of an order unit space. We also discuss its matricial version. We first discuss some properties of the positive elements with norm one in an order unit space. \begin{proposition}\label{1} Let $(V, e)$ be an order unit space of dimension $\ge 2$. Put $C_V = \lbrace u \in V^+: \Vert u \Vert = 1 \rbrace$.
\begin{enumerate} \item Fix $u \in C_V$ with $u \ne e$ and consider the one dimensional affine subspace $$L(u) = \lbrace u_{\lambda} := e - \lambda (e - u): \lambda \in \mathbb{R} \rbrace$$ of $V$. Then \begin{enumerate} \item $\lbrace u_{\lambda}: \lambda \in \mathbb{R} \rbrace$ is decreasing; \item $u_{\lambda} \in V^+$ if and only if $\lambda \Vert e - u \Vert \le 1$; \item $\Vert u_{\lambda} \Vert = \max \lbrace 1, \vert \lambda \Vert e - u \Vert - 1 \vert \rbrace$ for every $\lambda \in \mathbb{R}$; \item there exists a unique $\bar{u} \in L(u)$ such that $\bar{u}, e - \bar{u} \in C_V$. \end{enumerate} \item For $u, v \in C_V$, we have either $L(u) \bigcap L(v) = \lbrace e \rbrace$ or $L(u) = L(v)$. \end{enumerate} \end{proposition} \begin{proof} (1)(a) follows from the construction of $u_{\lambda}$. (1)(b): Since $e - u \in V^+ \setminus \lbrace 0 \rbrace$, there exists $f_u \in S(V)$ such that $\Vert e - u \Vert = f_u(e - u) = 1 - f_u(u)$. Then $f_u(u) \le f(u)$ for all $f \in S(V)$, and $f_u(u) < 1$. Set $\bar{\lambda} := \Vert e - u \Vert^{-1} = \left(\frac{1}{1 - f_u(u)}\right)$ and $\bar{u} := u_{\bar{\lambda}} = e - \bar{\lambda} (e - u)$. For $f \in S(V)$, we have $$f(\bar{u}) = f(e) - \left( \frac{f(e) - f(u)}{1 - f_u(u)} \right) = \frac{f(u) - f_u(u)}{1 - f_u(u)} \ge 0$$ so that $\bar{u} \in V^+$. Now by (a), $u_{\lambda} \in V^+$ if $\lambda \le \bar{\lambda}$. Also, if $u_{\lambda} \in V^+$ for some $\lambda \in \mathbb{R}$, then $$0 \le f_u(u_{\lambda}) = f_u(e) - \lambda (f_u(e) - f_u(u)) = 1 - \lambda (1 - f_u(u)).$$ Thus $\lambda \Vert e - u \Vert \le 1$. (1)(c): Fix $\lambda \in \mathbb{R}$. Case 1. $\lambda \ge 0$. Then for $k \in \mathbb{R}$, we have $u_{\lambda} \le k e$, that is, $\lambda u \le (k - 1 + \lambda) e$ if and only if $k \ge 1$.
Next, for $l \in \mathbb{R}$, we have $l e + u_{\lambda} \in V^+$, that is, $(l + 1 - \lambda) e + \lambda u \in V^+$, if and only if $l + 1 - \lambda + \lambda f_u(u) \ge 0$, since $f_u(u) \le f(u)$ for all $f \in S(V)$ and $\lambda \ge 0$. In other words, $l e + u_{\lambda} \in V^+$ if and only if $l \ge \lambda \Vert e - u \Vert - 1$. Thus for $\lambda \ge 0$, we have $$\Vert u_{\lambda} \Vert = \inf \lbrace \alpha > 0: \alpha e \pm u_{\lambda} \in V^+ \rbrace = \max \lbrace 1, \lambda \Vert e - u \Vert - 1 \rbrace.$$ Case 2. $\lambda < 0$. Then $l e + u_{\lambda} \in V^+$ for all $l \ge 0$. Next, for $k \in \mathbb{R}$, we have $u_{\lambda} \le k e$, that is, $(k - 1 + \lambda) e - \lambda u \ge 0$, if and only if $k - 1 + \lambda - \lambda f_u(u) \ge 0$, since $f_u(u) \le f(u)$ for all $f \in S(V)$ and $- \lambda > 0$. Thus $u_{\lambda} \le k e$ if and only if $k \ge - \lambda \Vert e - u \Vert + 1$. Therefore, for $\lambda < 0$, we have $$\Vert u_{\lambda} \Vert = \inf \lbrace \alpha > 0: \alpha e \pm u_{\lambda} \in V^+ \rbrace = 1 - \lambda \Vert e - u \Vert.$$ Summing up, for any $\lambda \in \mathbb{R}$, we have $$\Vert u_{\lambda} \Vert = \max \lbrace 1, \vert \lambda \Vert e - u \Vert - 1 \vert \rbrace.$$ (1)(d): Put $\bar{u} = e - \Vert e - u \Vert^{-1} (e - u)$. Then by (c), $\Vert \bar{u} \Vert = 1$. Also by construction, $\Vert e - \bar{u} \Vert = 1$. Next, assume that $u_{\lambda} \in L(u)$ is such that $u_{\lambda}, e - u_{\lambda} \in C_V$. Then, as $\Vert e - u_{\lambda} \Vert = 1$, we get $\vert \lambda \vert \Vert e - u \Vert = 1$. If $\lambda = - \Vert e - u \Vert^{-1}$, then $\Vert u_{\lambda} \Vert = 2$, so we must have $\lambda = \Vert e - u \Vert^{-1}$. Thus $u_{\lambda} = \bar{u}$. (2): Let $w \in L(u) \bigcap L(v)$ with $w \ne e$. Then there are $\lambda, \mu \in \mathbb{R} \setminus \lbrace 0 \rbrace$ such that $w = e - \lambda (e - u) = e - \mu (e - v)$. Thus $\lambda (e - u) = \mu (e - v)$. Let $\alpha \in \mathbb{R}$.
Then $$e - \alpha (e - u) = e - \alpha \lambda \mu^{-1} (e - v)$$ so that $L(u) \subset L(v)$. Now by symmetry, we have $L(u) = L(v)$. \end{proof} The following statements can be verified easily. \begin{corollary}\label{2} Under the assumptions of Proposition \ref{1}, we have \begin{enumerate} \item $\Vert u_{\lambda} \Vert = 1$ if and only if $0 \le \lambda \Vert e - u \Vert \le 2$; \item $\lbrace \Vert u_{\lambda} \Vert: \lambda \in (- \infty, 0] \rbrace$ is strictly decreasing; \item $\lbrace \Vert u_{\lambda} \Vert: \lambda \in \left[ \frac{2}{\Vert e - u \Vert}, \infty \right) \rbrace$ is strictly increasing; \item $C_V \bigcap (e - C_V) = \lbrace \bar{u}: u \in C_V \rbrace$; \item $C(u) := L(u) \bigcap C_V = [e, \bar{u}]$; \item $L(u) \bigcap (e - C_V) = \lbrace \bar{u} \rbrace$. \end{enumerate} \end{corollary} In this section, we show that the set $C_V$ leads to a geometric characterization of the order unit space $(V, e)$. Towards this goal, let us recall the following notion from \cite[Definition 3.1]{GK20}. Let $C$ be a convex subset of a real vector space $X$ with $0 \in C$. An element $x \in C$ is called a \emph{lead point} of $C$ if for any $y \in C$ and $\lambda \in [0, 1]$ with $x = \lambda y$, we have $\lambda = 1$ and $y = x$. The set of all lead points of $C$ is denoted by $Lead(C)$. A non-empty subset $C$ of a real vector space $X$ is said to be \emph{linearly compact} if for any $x, y \in C$ with $x \ne y$, the set $\lbrace \lambda \in \mathbb{R}: (1 - \lambda) x + \lambda y \in C \rbrace$, which parametrizes the intersection of $C$ with the line through $x$ and $y$, is compact (in $\mathbb{R}$). Note that if $C$ is convex, this set is an interval. Following \cite[Proposition 3.2]{GK20}, we may conclude that if $C$ is a linearly compact convex set with $0 \in C$, then $Lead(C)$ is non-empty and for each $x \in C$, $x \ne 0$, there exist a unique $c \in Lead(C)$ and a unique $0 < \alpha \le 1$ such that $x = \alpha c$.
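The facts above can be tested concretely in the order unit space $\ell_{\infty}^2$, where $e = (1, 1)$, positivity is coordinatewise and the order unit norm is the sup norm. The following Python sketch (an illustration only, not part of any proof; the elements $u = (1, 0.3)$ and $x = (0.4, 0.2)$ are arbitrary choices) checks Proposition \ref{1}(1)(b) and (1)(c), together with the factorization $x = \alpha c$ with $c \in Lead(K_V)$:

```python
# The order unit space l_inf^2: e = (1, 1), the cone is the coordinatewise
# one, and the order unit norm is the sup norm.  The elements u and x below
# are arbitrary illustrative choices.

def sup_norm(v):
    return max(abs(t) for t in v)

def is_positive(v, tol=1e-12):
    return all(t >= -tol for t in v)

e = (1.0, 1.0)
u = (1.0, 0.3)                              # u in C_V, u != e
d = sup_norm(tuple(a - b for a, b in zip(e, u)))   # ||e - u||

for k in range(-30, 51):
    lam = k / 10.0
    u_lam = tuple(a - lam * (a - b) for a, b in zip(e, u))
    # Proposition (1)(b): u_lam >= 0 exactly when lam * ||e - u|| <= 1.
    assert is_positive(u_lam) == (lam * d <= 1.0 + 1e-12)
    # Proposition (1)(c): ||u_lam|| = max(1, |lam * ||e - u|| - 1|).
    assert abs(sup_norm(u_lam) - max(1.0, abs(lam * d - 1.0))) < 1e-12

# Lead points of K_V = [0, e]: every nonzero x in K_V factors uniquely as
# x = alpha * c with c in Lead(K_V) = {v in K_V : ||v|| = 1}, 0 < alpha <= 1.
x = (0.4, 0.2)                              # a nonzero element of K_V
alpha = sup_norm(x)
c = tuple(t / alpha for t in x)
assert 0.0 < alpha <= 1.0 and sup_norm(c) == 1.0 and is_positive(c)
```

Here the lead points of $K_V = [0, e]$ are exactly the norm-one positive elements, in accordance with the description of $C_V$.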
\begin{definition} Let $X$ be a real vector space and let $K$ be a linearly compact set in $X$. Assume that $0 \in K$ and $K \ne \lbrace 0 \rbrace$. Put $C := Lead(K)$ (so that $C$ is a non-empty subset of $X \setminus \lbrace 0 \rbrace$). We say that $C$ is a \emph{canopy} of $K$, if there exists $e \in C$ such that \begin{enumerate} \item $C$ is $*$-shaped at $e$, that is, $[e, v] \subset C$ for all $v \in C$; \item For each $v \in C$ with $v \ne e$, there exists $\bar{v} \in C \cap (e - C)$ such that $$C(v) := \lbrace e - \lambda (e - v): \lambda \ge 0 \rbrace \cap C = [e, \bar{v}];$$ \item $e$ is an extreme point of $K$; \item If $\alpha v + \beta w = e$ for some $v, w \in C$ and $\alpha, \beta \ge 0$, then $\alpha, \beta \le 1$. \end{enumerate} In this case, $e$ is called the \emph{summit} of the canopy $C$. We call $R := C \cap (e - C)$ the \emph{periphery} of $C$. \end{definition} \begin{theorem}\label{canopy} \begin{enumerate} \item Let $(V, V^+, e)$ be a (non-zero) order unit space. Put $K_V := [0, e] = \lbrace u \in V^+: \Vert u \Vert \le 1 \rbrace$ and $C_V := \lbrace u \in V^+: \Vert u \Vert = 1 \rbrace$. Then $C_V$ is a canopy of $K_V$ with summit $e$. \item Conversely, let $C$ be a canopy of a linearly compact set $K$ with summit $e$ in a real vector space $X$. Let $V$ be the linear span of $C$ and let $V^+$ be the cone generated by $C$. Then $e$ is an order unit for $(V, V^+)$ such that $(V, V^+, e)$ is an order unit space. Also then $C = \lbrace u \in V^+: \Vert u \Vert = 1 \rbrace =: C_V$. \end{enumerate} \end{theorem} \begin{proof} (1) (a) We show that $K_V$ is linearly compact. Let $u, v \in K_V$ with $u \ne v$. We need to show that the set $A := \lbrace \lambda \in \mathbb{R}: \lambda u + (1 - \lambda) v \in K_V \rbrace$ is a compact set in $\mathbb{R}$. First, we prove that $A$ is bounded. Assume, if possible, that $A$ is not bounded below. Then $\lambda u + (1 - \lambda) v \in K_V$ for all $\lambda < 0$.
Then $\lambda u + (1 - \lambda) v \ge 0$ for all $\lambda < 0$, or equivalently (putting $\mu = - \lambda$), $u \le \left(\frac{\mu + 1}{\mu}\right) v$ for all $\mu > 0$. Now, by the Archimedean property in $V$, we get $u \le v$. Next, $\lambda u + (1 - \lambda) v \in K_V$ for all $\lambda < 0$ also implies that $0 \le \lambda u + (1 - \lambda) v \le e$, or equivalently, $0 \le \lambda (e - u) + (1 - \lambda) (e - v) \le e$ for all $\lambda < 0$. Thus as above, we get $e - u \le e - v$, that is, $v \le u$. This leads to the contradiction $u = v$. Thus $A$ must be bounded below. Next, we assume, if possible, that $A$ is not bounded above. Then $0 \le \lambda u + (1 - \lambda) v \le e$, that is, $- \frac{1}{\lambda} v \le u - v \le \frac{1}{\lambda} (e - v)$, for arbitrarily large $\lambda > 0$. Applying the Archimedean property once again, we now deduce that $0 \le u - v \le 0$ so that $u = v$. Thus $A$ is bounded above as well. As $K_V$ is convex, $A$ is a bounded interval. Put $a = \inf A$ and $b = \sup A$. We show that $A = [a, b]$, that is, $a, b \in A$. In fact, we can find a sequence $\langle a_n \rangle$ in $A$ such that $a \le a_n \le a + \frac 1n$ for all $n \in \mathbb{N}$. Then $a_n u + (1 - a_n) v \in K_V$ for all $n \in \mathbb{N}$. As $\langle a_n \rangle$ converges to $a$ and as $K_V$ is norm closed, we may conclude that $a \in A$. In the same manner, we can also prove that $b \in A$. (b) $C_V$ is $*$-shaped at $e$: Let $v \in C_V$ and $\alpha \in [0, 1]$. Then $0 \le v \le e$ so that $0 \le v \le (1 - \alpha) e + \alpha v \le e$. Thus $1 = \Vert v \Vert \le \Vert (1 - \alpha) e + \alpha v \Vert \le \Vert e \Vert = 1$. That is, $(1 - \alpha) e + \alpha v \in C_V$ whenever $v \in C_V$ and $\alpha \in [0, 1]$. (c) By Proposition \ref{1}(1)(d) and Corollary \ref{2}(5) and (6), we note that for each $v \in C_V$ with $v \ne e$, there exists a unique $\bar{v} \in C_V \cap (e - C_V)$ such that $C(v) = [e, \bar{v}]$. (d) Let $u, v \in K_V$ and $0 < \alpha < 1$ be such that $e = \alpha u + (1 - \alpha) v$. Then $\alpha (e - u) + (1 - \alpha) (e - v) = 0$ with $e - u, e - v \in V^+$, so that $e - u = 0 = e - v$, that is, $u = e = v$.
(e) Let $u, v \in C_V$ and $\alpha, \beta \ge 0$ be such that $\alpha u + \beta v = e$. Find a state $f \in S(V)$ such that $f(u) = \Vert u \Vert = 1$. Then $f(e) = 1$ so that $$1 = f(e) = \alpha f(u) + \beta f(v) \ge \alpha$$ as $v \in V^+$ and $\beta \ge 0$. Similarly, we get $\beta \le 1$. Hence $C_V$ is a canopy in $V$. (2) (a) We note that $K = \lbrace \alpha v: v \in C ~ \mbox{and} ~ 0 \le \alpha \le 1 \rbrace$. We prove that $$K = \lbrace v \in V^+: 0 \le v \le e \rbrace. \qquad (*)$$ Let $v \in K$. Then $v \in V^+$ so that $0 \le v$. Let $v = \alpha u$ for some $u \in C$ and $\alpha \in [0, 1]$. If $u = e$, then $v = \alpha e \le e$. So let $u \ne e$. Then there exist $\bar{u} \in C \cap (e - C)$ and $\lambda \in [0, 1]$ such that $u = (1 - \lambda) e + \lambda \bar{u}$. Thus $$e - v = e - \alpha (1 - \lambda) e - \alpha \lambda \bar{u} = (1 - \alpha) e + \alpha \lambda (e - \bar{u}) \in V^+$$ for $e, e - \bar{u} \in C \subset V^+$ and $1 - \alpha, \alpha \lambda \ge 0$. Therefore, $K \subset \lbrace v \in V^+: 0 \le v \le e \rbrace$. Conversely, assume that $0 \le u \le e$. Then $u, e - u \in V^+$. Thus there exist $v, w \in C$ and $\alpha, \beta \ge 0$ such that $u = \alpha v$ and $e - u = \beta w$. Then $e = \alpha v + \beta w$ so by the definition of a canopy, we must have $\alpha, \beta \le 1$. Therefore, $u = \alpha v \in K$. Hence $(*)$ is proved. (b) Since $C$ spans $V$, we have $V = V^+ - V^+$. Thus, it follows from $(*)$ that $e$ is an order unit for $V$. We prove that $V^+$ is proper and Archimedean. Let $\pm u \in V^+$. Then there exist $v, w \in C$ and $\alpha, \beta \ge 0$ such that $u = \alpha v$ and $- u = \beta w$. Thus $\alpha v + \beta w = 0$. We show that $\alpha = 0 = \beta$. Assume, if possible, that $\alpha > 0$. Then $\beta > 0$ too, for $v \ne 0$. Put $k = \frac{\alpha}{\alpha + \beta}$. Then $0 < k < 1$ and we have $k v + (1 - k) w = 0$ so that $e = k (e - v) + (1 - k) (e - w)$. As $v, w \in C$, we have $e - v, e - w \in K$.
So, as $e$ is an extreme point of $K$, we deduce that $e - v = e = e - w$, that is, $v = 0 = w$, which is absurd. Thus $\alpha = 0$. Therefore, $V^+$ is proper. (c) Next, let $v \in V$ be such that $k e + v \in V^+$ for all $k > 0$. We show that $(v, e) \subset V^+$. Let $0 < \lambda < 1$. Put $k = \frac{1 - \lambda}{\lambda}$. Then $(1 - \lambda) e + \lambda v = \lambda (k e + v) \in V^+$ so that $(v, e) \subset V^+$. Let $\Vert \cdot \Vert$ be the seminorm on $V$ corresponding to the order unit $e$. Then $(v, e) \subset (1 + \Vert v \Vert) K$. Since $K$ is linearly compact, so is $(1 + \Vert v \Vert) K$; in particular, its intersection with the line through $v$ and $e$ is closed. Thus $v \in (1 + \Vert v \Vert) K \subset V^+$. Therefore $V^+$ is Archimedean. Now it follows that $(V, V^+, e)$ is an order unit space. We also note that $C = C_V$. In fact, we have $K = \lbrace v \in V^+: \Vert v \Vert \le 1 \rbrace$. Thus $C_V = Lead(K) = C$. \end{proof} \subsection{A matricial version} Now we describe a matricial version of the canopy associated with matrix order unit spaces. \begin{definition} Let $X$ be a $*$-vector space. Assume that $\lbrace K_n \rbrace$ is a matrix convex set in $X$ with $K_n \subset M_n(X)_{sa}$ such that $0 \in K_1$, $K_1 \ne \lbrace 0 \rbrace$ and $K_n$ is linearly compact for each $n \in \mathbb{N}$. Put $C_n := Lead(K_n)$ and assume that $C_n$ is a canopy of $K_n$ with its summit $e_n$ for all $n \in \mathbb{N}$. Then $\lbrace C_n \rbrace$ is called a \emph{matricial canopy} of the matrix convex set $\lbrace K_n \rbrace$ with \emph{matricial summit} $\lbrace e_n \rbrace$. \end{definition} \begin{theorem} Let $X$ be a $*$-vector space. Assume that $\lbrace K_n \rbrace$ is a matrix convex set in $X$ with $K_n \subset M_n(X)_{sa}$ such that $0 \in K_1$, $K_1 \ne \lbrace 0 \rbrace$ and $K_n$ is linearly compact for each $n \in \mathbb{N}$. Put $C_n := Lead(K_n)$ and assume that $\lbrace C_n \rbrace$ is a matricial canopy of the matrix convex set $\lbrace K_n \rbrace$ with matricial summit $\lbrace e_n \rbrace$.
\begin{enumerate} \item For each $n \in \mathbb{N}$, put $V_n^+ := cone(C_n)$ and let $V_n$ be the $*$-subspace of $M_n(X)$ generated by $V_n^+$. Then $V_n = M_n(V)$ for every $n$ where $V := V_1$ and $(V, \lbrace V_n^+ \rbrace)$ is a matrix ordered space. \item $V_n^+$ is proper and Archimedean for all $n$. \item Put $e := e_1$. Then $e$ is an order unit for $V$ and $e_n = e^n := e \oplus \dots \oplus e$ for every $n$. \item $(V, \lbrace V_n^+ \rbrace, e)$ is a matrix order unit space such that $K_n = [0, e^n]$ for all $n \in \mathbb{N}$. \end{enumerate} \end{theorem} \begin{proof} (1) Recall that $K_n = \lbrace k v: v \in C_n ~ \mbox{and} ~ k \in [0, 1] \rbrace$ so that $V_n^+ = \bigcup_{r \in \mathbb{N}} r K_n$ for each $n$. Since $0 \in K_1$, we have $0 \in K_n$ for each $n$. Thus for $v \in K_n$ and $\gamma \in \mathbb{M}_{n, p}$ with $\gamma^* \gamma \le \mathbb{I}_p$, we have $\gamma^* v \gamma \in K_p$. Now, it follows that for any $v \in V_n^+$ and $\gamma \in \mathbb{M}_{n, p}$, we have $\gamma^* v \gamma \in V_p^+$. Therefore, $\lbrace V_n^+ \rbrace$ is a matrix order in $X$. First, we show that for each $n$, $K_n \subset M_n(V)_{sa}$. Let $v \in K_n$, say $v = [v_{ij}]$ where $v_{ij} \in X$ for all $i, j \in \lbrace 1, \dots , n \rbrace$. Let $E_r \in \mathbb{M}_{1,n}$ be the row matrix with $1$ at the $r$-th place and $0$ elsewhere. Then $E_r E_r^* = 1$ so that $v_{rr} = E_r v E_r^* \in K_1 \subset V_{sa}$. Next, for $r, s \in \lbrace 1, \dots , n \rbrace$ with $r \ne s$, we consider $E_{rs} \in \mathbb{M}_{1,n}$ with $1$ at the $r$-th place, $c$ at the $s$-th place and $0$ elsewhere. Here $c \in \mathbb{C}$ with $\vert c \vert = 1$. Then $E_{rs} E_{rs}^* = 2$ so that $v_{rr} + v_{ss} + \bar{c} v_{rs} + c v_{sr} \in 2 K_1 \subset V_{sa}$. Thus for $r \ne s$ we have $v_{rs} + v_{sr}, i(v_{rs} - v_{sr}) \in V_{sa}$. Therefore, $v_{rs} \in V$ for all $r, s \in \lbrace 1, \dots , n \rbrace$. As $v = v^*$, we have $v_{sr} = v_{rs}^*$ so that $v \in M_n(V)_{sa}$. Hence $V_n^+ \subset M_n(V)_{sa}$.
In particular, $V_n \subset M_n(V)$. Conversely, let $u = [u_{rs}] \in M_n(V)$. For $s = 1, \dots, n$, put $F_s := \sum_{r=1}^n E_r^* E_s \in \mathbb{M}_n$, the matrix whose $s$-th column has all entries equal to $1$ and whose remaining entries are $0$. Since $E_r^* u_{rs} E_s = (E_r^* u_{rs} E_r)(E_r^* E_s)$ and $E_r E_{r'}^* = \delta_{r r'}$, we get $$u = \sum_{r,s=1}^n E_r^* u_{rs} E_s = \sum_{s=1}^n \left(\sum_{r=1}^n E_r^* u_{rs} E_r\right) F_s.$$ Let $u_{rs} = \sum_{k = 0}^3 i^k u_{rs}^{(k)}$ where $u_{rs}^{(k)} \in V_1^+$, $k = 0, 1, 2, 3$. Then $$u = \sum_{k=0}^3 i^k \sum_{s=1}^n \left(\sum_{r=1}^n E_r^* u_{rs}^{(k)} E_r \right) F_s. \qquad (\#)$$ We prove that for $w \in V_n^+$ and $\alpha \in \mathbb{M}_n$, we have $w \alpha \in V_n$. Let $c \in \mathbb{C}$ with $\vert c \vert = 1$ and put $\beta = c \alpha$. Then $$\begin{bmatrix} w & w \beta \\ \beta^* w & \beta^* w \beta \end{bmatrix} = \begin{bmatrix} \mathbb{I}_n \\ \beta^* \end{bmatrix} w \begin{bmatrix} \mathbb{I}_n & \beta \end{bmatrix} \in V_{2n}^+.$$ In other words, $\begin{bmatrix} w & c w \alpha \\ \bar{c} \alpha^* w & \alpha^* w \alpha \end{bmatrix} \in V_{2n}^+$ if $\vert c \vert = 1$. Thus $$2 \begin{bmatrix} 0 & w \alpha \\ \alpha^* w & 0 \end{bmatrix} = \begin{bmatrix} w & w \alpha \\ \alpha^* w & \alpha^* w \alpha \end{bmatrix} - \begin{bmatrix} w & - w \alpha \\ - \alpha^* w & \alpha^* w \alpha \end{bmatrix} \in (V_{2n})_{sa}$$ and $$2i \begin{bmatrix} 0 & w \alpha \\ - \alpha^* w & 0 \end{bmatrix} = \begin{bmatrix} w & i w \alpha \\ - i \alpha^* w & \alpha^* w \alpha \end{bmatrix} - \begin{bmatrix} w & - i w \alpha \\ i \alpha^* w & \alpha^* w \alpha \end{bmatrix} \in (V_{2n})_{sa}.$$ Therefore, $\begin{bmatrix} 0 & w \alpha \\ 0 & 0 \end{bmatrix} \in V_{2n}$. Now, letting $\begin{bmatrix} 0 & w \alpha \\ 0 & 0 \end{bmatrix} = \sum_{k=0}^3 i^k w_k$ for some $w_k \in V_{2n}^+$, we get $$w \alpha = \sum_{k=0}^3 i^k \begin{bmatrix} \mathbb{I}_n & \mathbb{I}_n \end{bmatrix} w_k \begin{bmatrix} \mathbb{I}_n \\ \mathbb{I}_n \end{bmatrix} \in V_n.$$ Now, by $(\#)$, we may conclude that $u \in V_n$ for any $u \in M_n(V)$. Hence $V_n = M_n(V)$ for all $n \in \mathbb{N}$.
In particular, $(V, \lbrace V_n^+ \rbrace)$ is a matrix ordered space. (2) It follows from the proof of Theorem \ref{canopy} that $V_n^+$ is proper and Archimedean for every $n$. (3) Fix $n \in \mathbb{N}$. By Theorem \ref{canopy}, $((V_n)_{sa}, V_n^+, e_n)$ is an order unit space such that $K_n = [0, e_n]$. As $\lbrace K_r \rbrace$ is a matrix convex set, we have $e^n \in K_n$ so that $e^n \le e_n$. Let $e_n = \left[ e_{rs}^{(n)} \right]$. Then $$e = E_r e^n E_r^* \le E_r e_n E_r^* = e_{rr}^{(n)}$$ for all $r = 1, \dots , n$. Also, as $E_r E_r^* = 1$, we have $e_{rr}^{(n)} \in K_1$ so that $e_{rr}^{(n)} \le e$ for all $r = 1, \dots , n$. Thus $e_{rr}^{(n)} = e$ for all $r = 1, \dots , n$. Now $e_n - e^n \in V_n^+$ with $0$ in its diagonal. Since $V_n^+$ is proper, we must have $e_{rs}^{(n)} = 0$ if $r \ne s$. To see this, fix $r \ne s$ and consider $c \in \mathbb{C}$. Then $$c e_{sr}^{(n)} + \bar{c} e_{rs}^{(n)} = (E_r + c E_s) (e_n - e^n) (E_r + c E_s)^* \in V_1^+.$$ For $c = \pm 1$, we get $\pm \left(e_{sr}^{(n)} + e_{rs}^{(n)} \right) \in V_1^+$ and for $c = \pm i$, we get $\pm i \left(e_{sr}^{(n)} - e_{rs}^{(n)} \right) \in V_1^+$. Since $V_1^+$ is proper, we have $e_{sr}^{(n)} + e_{rs}^{(n)} = 0$ and $i \left(e_{sr}^{(n)} - e_{rs}^{(n)}\right) = 0$ so that $e_{sr}^{(n)} = 0 = e_{rs}^{(n)}$. Hence $e_n = e^n$. (4) Since $V_n^+$ is proper and Archimedean for all $n$, we get that $(V, \lbrace V_n^+ \rbrace, e)$ is a matrix order unit space. Finally, $K_n = [0, e_n] = [0, e^n]$ for all $n$. \end{proof} \section{The periphery} Recall that if $(V, e)$ is an order unit space, then for $K_V := [0, e] = \lbrace v \in V^+: \Vert v \Vert \le 1 \rbrace$, we have $C_V := \lbrace v \in V^+: \Vert v \Vert = 1 \rbrace$ as the canopy of $K_V$ with its summit at the order unit $e$. The periphery of $C_V$ is denoted by $R_V$. 
Thus $$R_V = C_V \cap (e - C_V) = \lbrace v \in V^+: \Vert v \Vert = 1 = \Vert e - v \Vert \rbrace.$$ In this section, we discuss some of the properties and examples of the periphery corresponding to an order unit space. Let $X$ be a normed linear space and let $x, y \in X$. We say that $x$ is \emph{$\infty$-orthogonal} to $y$ (we write $x \perp_{\infty} y$) if $\Vert x + k y \Vert = \max \lbrace \Vert x \Vert, \Vert k y \Vert \rbrace$ for all $k \in \mathbb{R}$. It was proved in \cite{K14} that if $(V, e)$ is an order unit space and if $u, v \in V^+ \setminus \lbrace 0 \rbrace$, then $u \perp_{\infty} v$ if and only if $\Vert \Vert u \Vert^{- 1} u + \Vert v \Vert^{- 1} v \Vert = 1$. \begin{proposition}\label{periphery} Let $(V, e)$ be an order unit space and let $u \in C_V$. Then the following statements are equivalent: \begin{enumerate} \item $u \in R_V$; \item there exists $v \in C_V$ such that $u + v \in C_V$; \item $u$ has an $\infty$-orthogonal pair in $C_V$; \item there exists a state $f$ of $V$ such that $f(u) = 0$. \end{enumerate} \end{proposition} \begin{proof} If $u \in R_V$, then $u, e - u \in V^+$ with $\Vert u \Vert = 1 = \Vert e - u \Vert$. Also then $\Vert u + e - u \Vert = \Vert e \Vert = 1$ so that $u \perp_{\infty} (e - u)$. Conversely, let $u \perp_{\infty} v$ for some $v \in C_V$. Then $\Vert u + v \Vert = 1$ so that $u + v \le e$. Thus $v \le e - u \le e$ and we have $1 = \Vert v \Vert \le \Vert e - u \Vert \le \Vert e \Vert = 1$. Therefore, $u \in R_V$. Moreover, by the result from \cite{K14} quoted above, for $u, v \in C_V$ we have $u + v \in C_V$ if and only if $u \perp_{\infty} v$. Hence (1), (2) and (3) are equivalent. Again, if $u \in R_V$, then $e - u \in C_V$. Thus there exists a state $f$ of $V$ such that $1 = f(e - u) = 1 - f(u)$, or equivalently, $f(u) = 0$. Conversely, if $f(u) = 0$ for some state $f$ on $V$, then $f(e - u) = 1$ so that $\Vert e - u \Vert \ge 1$. Also, as $0 \le u \le e$, we have $0 \le e - u \le e$ so that $\Vert e - u \Vert \le 1$. Therefore, $e - u \in C_V$ whence $u \in R_V$.
\end{proof} The next result is an immediate consequence of Proposition \ref{periphery}. \begin{corollary} Let $(V, e)$ be an order unit space and let $u \in V^+$. Then $u \in R_V$ if and only if there exist $f, g \in S(V)$ such that $f(u) = 1$ and $g(u) = 0$. \end{corollary} \begin{proposition}\label{disjR} Let $(V, e)$ be an order unit space and let $u, v \in R_V$ with $u \ne v$. Then the following statements are equivalent: \begin{enumerate} \item $(u, v) \bigcap R_V \ne \emptyset$; \item there are states $f$ and $g$ of $V$ such that $f(u) = 1 = f(v)$ and $g(u) = 0 = g(v)$; \item $[u, v] \subset R_V$. \end{enumerate} \end{proposition} \begin{proof} Let $w \in (u, v) \bigcap R_V$. Then $w = (1 - \alpha) u + \alpha v \in R_V$ for some $\alpha \in (0, 1)$. Find $f, g \in S(V)$ such that $f((1 - \alpha) u + \alpha v) = 1$ and $g((1 - \alpha) u + \alpha v) = 0$. Thus $$1 = f((1 - \alpha) u + \alpha v) = (1 - \alpha) f(u) + \alpha f(v) \le (1 - \alpha) + \alpha = 1$$ so that $f(u) = 1 = f(v)$. Again $$0 = g((1 - \alpha) u + \alpha v) = (1 - \alpha) g(u) + \alpha g(v)$$ so that $(1 - \alpha) g(u) = 0 = \alpha g(v)$. Since $0 < \alpha < 1$, we get $g(u) = 0 = g(v)$. Next, let $\lambda \in [0, 1]$ and consider $x = (1 - \lambda) u + \lambda v$. Then $x \in V^+$ with $\Vert x \Vert \le 1$. Also as $$f(x) = (1- \lambda) f(u) + \lambda f(v) = 1 - \lambda + \lambda = 1,$$ we note that $x \in C_V$. Further, $$g(x) = (1- \lambda) g(u) + \lambda g(v) = 0$$ so that $x \in R_V$. Finally, as $u \ne v$, (3) trivially implies (1). \end{proof} The following result immediately follows from Proposition \ref{disjR}. \begin{corollary} Let $(V, e)$ be an order unit space and assume that $u_1, u_2 \in R_V$. Then \begin{enumerate} \item $[u_1, e - u_2] \bigcup [u_2, e - u_1] \subset R_V$; \item $\left((u_1, u_2) \bigcup (e - u_1, e - u_2)\right) \bigcap R_V = \emptyset$. \end{enumerate} \end{corollary} \begin{proposition}\label{disj} Let $(V, e)$ be an order unit space and let $u, v \in R_V$.
Then $u \ne v$ if and only if $[e, u] \bigcap [e, v] = \lbrace e \rbrace$. \end{proposition} \begin{proof} First, let $w \in [e, u] \bigcap [e, v]$ with $w \ne e$. Then we can find $0 < \lambda, \mu \le 1$ such that $w = e - \lambda (e - u) = e - \mu (e - v)$. Then $\lambda (e - u) = \mu (e - v)$. Since $u, v \in R_V$, we have $e - u, e - v \in C_V$. Thus $\Vert e - u \Vert = 1 = \Vert e - v \Vert$ so that $\lambda = \mu$. As $\lambda, \mu > 0$, we get $u = v$. Thus $u \ne v$ implies $[e, u] \bigcap [e, v] = \lbrace e \rbrace$. Evidently, $u = v$ implies $[e, u] \bigcap [e, v] = [e, u] \ne \lbrace e \rbrace$. \end{proof} \begin{remark} Recall that for $u \in R_V$, we have $[e, u] = C(u)$ by Corollary \ref{2}. Thus $C_V = \bigcup_{u \in R_V} C(u)$ is a union of the \emph{untangled strings} $C(u)$ of $C_V$ attached to $e$, any two of which meet only at $e$. Consequently, we also have $e - C_V = \bigcup_{u \in R_V} [0, u]$. \end{remark} \subsection{Direct sum of order unit spaces.} Next we turn to describe $C_{\ell_{\infty}^n}$ and $R_{\ell_{\infty}^n}$. \begin{lemma} Let $(V_1, e_1)$ and $(V_2, e_2)$ be any two order unit spaces. Consider $V = V_1 \times V_2$, $V^+ = V_1^+ \times V_2^+$ and $e = (e_1, e_2)$. Then $(V, e)$ is also an order unit space and we have \begin{enumerate} \item $C_V = (C_{V_1} \times [0, e_2]_o) \bigcup ([0, e_1]_o \times C_{V_2})$ and \item $R_V = (R_{V_1} \times [0, e_2]_o) \bigcup ([0, e_1]_o \times R_{V_2}) \bigcup (C_{V_1} \times (e_2 - C_{V_2})) \bigcup ((e_1 - C_{V_1}) \times C_{V_2})$. \end{enumerate} \end{lemma} \begin{proof} For $(v_1, v_2) \in V$, we have $\Vert (v_1, v_2) \Vert = \max \lbrace \Vert v_1 \Vert, \Vert v_2 \Vert \rbrace$. Thus $(u_1, u_2) \in C_V$ if and only if $u_1 \in V_1^+$, $u_2 \in V_2^+$ and $\max \lbrace \Vert u_1 \Vert, \Vert u_2 \Vert \rbrace = 1$. Therefore, $C_V = (C_{V_1} \times [0, e_2]_o) \bigcup ([0, e_1]_o \times C_{V_2})$.
Now, as $(u_1, u_2) \in R_V$ if and only if $(u_1, u_2), (e_1 - u_1, e_2 - u_2) \in C_V$, we may deduce that $R_V = (R_{V_1} \times [0, e_2]_o) \bigcup ([0, e_1]_o \times R_{V_2}) \bigcup (C_{V_1} \times (e_2 - C_{V_2})) \bigcup ((e_1 - C_{V_1}) \times C_{V_2})$. \end{proof} Now replace $V_2$ by $\mathbb{R}$. As $C_{\mathbb{R}} = \lbrace 1 \rbrace$ and $R_{\mathbb{R}} = \emptyset$, we may conclude the following: \begin{corollary}\label{oneup} Let $(V, e)$ be an order unit space. Consider $\hat{V} = V \times \mathbb{R}$, $\hat{V}^+ = V^+ \times \mathbb{R}^+$ and $\hat{e} = (e, 1)$. Then $(\hat{V}, \hat{e})$ is an order unit space and we have \begin{enumerate} \item $C_{\hat{V}} = (C_V \times [0, 1]) \bigcup ([0, e]_o \times \lbrace 1 \rbrace)$ and \item $R_{\hat{V}} = (R_V \times [0, 1]) \bigcup (C_V \times \lbrace 0 \rbrace) \bigcup ((e - C_V) \times \lbrace 1 \rbrace)$. \end{enumerate} \end{corollary} Again using $C_{\mathbb{R}} = \lbrace 1 \rbrace$ and $R_{\mathbb{R}} = \emptyset$ and applying induction on $n$, we can easily obtain the canopy and its periphery of $\ell_{\infty}^n$ with the help of Corollary \ref{oneup}. \begin{corollary} Fix $n \in \mathbb{N}$, $n \ge 2$. Put $C_n := C_{\ell_{\infty}^n}$ and $R_n := R_{\ell_{\infty}^n}$. Then \begin{enumerate} \item $C_n = \lbrace (\alpha_1, \dots, \alpha_n): \min \lbrace \alpha_i \rbrace \ge 0 ~ \mbox{and} ~ \max \lbrace \alpha_i \rbrace = 1 \rbrace$ and \item $R_n = \lbrace (\alpha_1, \dots, \alpha_n): \min \lbrace \alpha_i \rbrace = 0 ~ \mbox{and} ~ \max \lbrace \alpha_i \rbrace = 1 \rbrace$. \end{enumerate} \end{corollary} \section{Some applications} \begin{lemma}\label{l2} Let $(V, e)$ be an order unit space of dimension $\ge 2$ (so that $R_V \ne \emptyset$). Let $u \in R_V$ and consider $$P_u := \lbrace \alpha e + \beta u: \alpha, \beta \in \mathbb{R} \rbrace.$$ Then $P_u$ is unitally order isomorphic to $\ell_{\infty}^2$.
\end{lemma} \begin{proof} Consider the mapping $\chi: P_u \to \ell_{\infty}^2$ given by $\chi(\alpha e + \beta u) = (\alpha, \alpha + \beta)$ for all $\alpha, \beta \in \mathbb{R}$. Then $\chi$ is a unital bijection. We show that $\chi$ is an order isomorphism. We first assume that $\alpha e + \beta u \in V^+$. Since $u \in R_V$, we can find $f, g \in S(V)$ such that $f(u) = 1$ and $g(u) = 0$. Thus $0 \le f(\alpha e + \beta u) = \alpha + \beta$ and $0 \le g(\alpha e + \beta u) = \alpha$. Thus $(\alpha, \alpha + \beta) \in \ell_{\infty}^{2 +}$. Conversely, we assume that $(\alpha, \alpha + \beta) \in \ell_{\infty}^{2 +}$. Then $\alpha e + \beta u = \alpha (e - u) + (\alpha + \beta) u \in V^+$. \end{proof} \begin{remark} As $u \in R_V$, we have $u \perp_{\infty} (e - u)$. Thus, it is simple to show that $\chi$ is an isometry. In fact, if $v \in P_u$, say $v = \alpha e + \beta u = \alpha (e - u) + (\alpha + \beta) u$, then $$\Vert v \Vert = \Vert \alpha (e - u) + (\alpha + \beta) u \Vert = \max \lbrace \vert \alpha \vert, \vert \alpha + \beta \vert \rbrace.$$ \end{remark} \begin{theorem} Let $(V, e)$ be an order unit space with $\dim(V) \ge 2$. Then $V$ contains a copy of $\ell_{\infty}^2$ as an order unit subspace. Moreover, $V$ contains a copy of $\ell_{\infty}^n$ ($n \ge 2$) as an order unit subspace if and only if there exist $u_1, \dots, u_n \in R_V$ such that $u_i \perp_{\infty} u_j$ for all $i, j \in \lbrace 1, \dots, n \rbrace$ with $i \ne j$, $\sum_{i=1}^{n} u_i = e$ and $\perp_{\infty}$ is additive in the linear span of $u_1, \dots, u_n$. \end{theorem} \begin{proof} It follows from Lemma \ref{l2} that $V$ contains a copy of $\ell_{\infty}^2$ as an order unit subspace. Next, we assume that $W$ is an order unit subspace of $V$ and $\Gamma: \ell_{\infty}^n \to W$ is a surjective unital order isomorphism. Put $\Gamma(e_i) = u_i$ for $i = 1, \dots, n$ where $\lbrace e_1, \dots, e_n \rbrace$ is the standard unit basis of $\ell_{\infty}^n$.
Then $u_1, \dots, u_n \in C_V$ with $\sum_{i=1}^{n} u_i = e$. Consider the biorthonormal system $\lbrace f_1, \dots, f_n \rbrace$ in $\ell_1^n$ so that $f_i(e_j) = \delta_{ij}$. Then $\lbrace f_1 \circ \Gamma^{-1}, \dots, f_n \circ \Gamma^{-1} \rbrace$ is the set of pure states of $W$. We can extend $f_i \circ \Gamma^{-1}$ to a pure state $g_i$ of $V$ for each $i = 1, \dots, n$. Then $g_i(u_j) = \delta_{ij}$ so that $u_1, \dots, u_n \in R_V$ and we have $u_i \perp_{\infty} u_j$ for all $i, j \in \lbrace 1, \dots, n \rbrace$ with $i \ne j$. Also, if $\alpha_1, \dots, \alpha_n \in \mathbb{R}$, then \begin{eqnarray*} \left\Vert \sum_{i=1}^{n} \alpha_i u_i \right\Vert &=& \left\Vert \sum_{i=1}^{n} \alpha_i e_i \right\Vert_{\infty}\\ &=& \max \lbrace \vert \alpha_i \vert: 1 \le i \le n \rbrace\\ &=& \max \lbrace \Vert \alpha_i u_i \Vert: 1 \le i \le n \rbrace. \end{eqnarray*} Thus $\perp_{\infty}$ is additive in the linear span of $u_1, \dots, u_n$. Conversely, we assume that there exist $u_1, \dots, u_n \in R_V$ such that $u_i \perp_{\infty} u_j$ for all $i, j \in \lbrace 1, \dots, n \rbrace$ with $i \ne j$, $\sum_{i=1}^{n} u_i = e$ and $\perp_{\infty}$ is additive in the linear span $U$ of $u_1, \dots, u_n$. Define $\Phi: U \to \ell_{\infty}^n$ by $\Phi( \sum_{i=1}^{n} \alpha_i u_i) = (\alpha_i)$. Then $\Phi$ is a unital linear bijection. Also \begin{eqnarray*} \left\Vert \Phi \left( \sum_{i=1}^{n} \alpha_i u_i \right) \right\Vert &=& \Vert (\alpha_i) \Vert_{\infty} \\ &=& \max \lbrace \vert \alpha_i \vert: 1 \le i \le n \rbrace\\ &=& \max \lbrace \Vert \alpha_i u_i \Vert: 1 \le i \le n \rbrace \\ &=& \left\Vert \sum_{i=1}^{n} \alpha_i u_i \right\Vert \end{eqnarray*} as $\perp_{\infty}$ is additive on $U$. Thus $\Phi$ is an isometry. Now, being a unital linear surjective isometry, $\Phi$ is a unital order isomorphism. \end{proof} \begin{proposition}\label{uniqueu} Let $(V, e)$ be an order unit space and let $u, v \in R_V$.
Then, either $P_u = P_v$ or $P_u \bigcap P_v = \mathbb{R} e$. \end{proposition} \begin{proof} Let $w \in P_u \bigcap P_v$ be such that $w \notin \mathbb{R} e$. Without any loss of generality, we assume that $\Vert w \Vert = 1$. Consider $w_1 := \frac 12 (e + w)$. Then $w_1 \in V^+$. Also then $w_1 \in P_u \bigcap P_v$. So, without any loss of generality again, we further assume that $w \in V^+$, that is, $w \in C_V$. Since $w \in P_u$, we have $w = \alpha e + \beta u$ for some $\alpha, \beta \in \mathbb{R}$. As $w \in C_V$, we have $\alpha \ge 0$, $\alpha + \beta \ge 0$ and $\max \lbrace \alpha, \alpha + \beta \rbrace = 1$. Since $w \notin \mathbb{R} e$, we have $\beta \ne 0$. If $\beta > 0$, then we have $1 = \alpha + \beta > \alpha \ge 0$. Thus $w = \alpha e + (1 - \alpha) u \in [e, u]$. Next, if $\beta < 0$, then $1 = \alpha > \alpha + \beta \ge 0$ so that $- 1 \le \beta < 0$. Thus $w = e + \beta u = e - (- \beta) (e - (e - u)) \in [e, e - u]$. Summing up, we have $w \in [e, u] \bigcup [e, e - u]$. Similarly, as $w \in P_v$, we also have $w \in [e, v] \bigcup [e, e - v]$. Thus $w \in \left( [e, u] \bigcup [e, e - u] \right) \bigcap \left( [e, v] \bigcup [e, e - v] \right)$. Since $w \ne e$, using Proposition \ref{disj}, we conclude that one of the equalities $[e, u] = [e, v]$, or $[e, u] = [e, e - v]$, or $[e, e - u] = [e, v]$, or $[e, e - u] = [e, e - v]$ holds. In other words, either $u = v$ or $u = e - v$. In both situations, we have $P_u = P_v$. \end{proof} \begin{theorem} Let $(V, e)$ be an order unit space with $\dim V \ge 2$. Then $V = \bigcup \lbrace P_u: u \in R_V \rbrace$ in such a way that $\bigcap \lbrace P_u: u \in R_V \rbrace = \mathbb{R} e$ and if $v \in V$ with $v \notin \mathbb{R} e$, then there exists a unique $w \in R_V$ such that $v \in P_w = P_{(e - w)}$. \end{theorem} \begin{proof} Let $v \in V$ with $v \notin \mathbb{R} e$. For simplicity, we assume that $\Vert v \Vert = 1$. Put $v_1 = \frac 12 (e + v)$ and $v_2 = \frac 12 (e - v)$.
Then $v_1, v_2 \in V^+$ and $v_1, v_2 \notin \mathbb{R} e$. Also then $v = v_1 - v_2$ and $1 = \Vert v \Vert = \max \lbrace \Vert v_1 \Vert, \Vert v_2 \Vert \rbrace$. Replacing $v$ by $- v$, if required, we further assume that $0 < \Vert v_2 \Vert \le \Vert v_1 \Vert = 1$, that is, $\Vert e + v \Vert = 2$. Thus we can find $f \in S(V)$ such that $1 + f(v) = f(e + v) = 2$. Therefore, $f(v) = 1$. Put $w = \Vert e - v \Vert^{-1} (e - v)$. Then $w \in C_V$. Further, $f(e - w) = 1 - \Vert e - v \Vert^{-1} (1 - f(v)) = 1$ so that $e - w \in C_V$. Thus $w \in R_V$. Now $v = e - \Vert e - v \Vert w$ so that $v \in P_w = P_{(e - w)}$. Uniqueness of $w$ follows from Proposition \ref{uniqueu}. \end{proof} \begin{remark} If $v \in V^+$ (or more generally, if $\Vert v \Vert = f(v)$ for some $f \in S(V)$) with $v \notin \mathbb{R} e$, then $\Vert e + \Vert v \Vert^{-1} v \Vert = 2$. Thus $v$ has a unique representation $v = \lambda e + \mu w$ in $P_w$ where $w = \Vert \Vert v \Vert e - v \Vert^{-1} (\Vert v \Vert e - v) \in R_V$, $\lambda = \Vert v \Vert$ and $\mu = - \Vert \Vert v \Vert e - v \Vert$. \end{remark} Next, we prove an application in unital C$^*$-algebras. \begin{proposition}\label{algebra} Let $A$ be a unital C$^*$-algebra and let $a \in A_{sa}$. Then $\Vert a \Vert \le 1$ and $a \notin A^+ \cup - A^+$ if and only if there exist $x, y \in C_A \setminus \lbrace 1_A \rbrace$ such that $a = x - y$ and $\vert a \vert = 1_A - x y$. In this case, $x$ and $y$ commute. \end{proposition} \begin{proof} First, we assume that $\Vert a \Vert \le 1$ and $a \notin A^+ \cup - A^+$. Consider $a = a^+ - a^-$. Then $0 < \Vert a^+ \Vert, \Vert a^- \Vert \le 1$. Also, as $a^+ a^- = 0$, we have $a^+ \perp_{\infty} a^-$. Thus $\Vert a^+ \Vert^{-1} a^+, \Vert a^- \Vert^{-1} a^- \in R_A$. It follows that $a^+ \in [0, \Vert a^+ \Vert^{-1} a^+] \subset 1_A - C_A$. Similarly, $a^- \in 1_A - C_A$. Put $1_A - a^+ = y$ and $1_A - a^- = x$.
Then $x, y \in C_A \setminus \lbrace 1_A \rbrace$ with $x y = y x$ such that $a = x - y$ and $\vert a \vert = (1_A - y) + (1_A - x)$. Now, as $a^+ a^- = 0$, we deduce that $x + y = 1_A + x y$ so that $\vert a \vert = 1_A - x y$. Conversely, assume that $a = x - y$ and $\vert a \vert = 1_A - x y$ for some $x, y \in C_A \setminus \lbrace 1_A \rbrace$. Then $- 1_A \le - y \le x - y \le x \le 1_A$ so that $\Vert a \Vert \le 1$; moreover, $x y = (x y)^* = y x$, since $1_A - x y = \vert a \vert$ is self-adjoint. Since $a^2 = \vert a \vert^2$, we get that $(1_A - x^2) (1_A - y^2) = 0$. As $x, y \in A^+$, we note that $1_A + x$ and $1_A + y$ are invertible in $A$. Thus $(1_A - x) (1_A - y) = 0$. Now it follows that $a^+ = 1_A - y$ and $a^- = 1_A - x$; as $x, y \ne 1_A$, we conclude that $a \notin A^+ \cup - A^+$. \end{proof} \begin{remark} We observe that $x, y \in C_A \setminus \lbrace 1_A \rbrace$ are uniquely determined by a given $a \in A_{sa}$ with $\Vert a \Vert \le 1$ and $a \notin A^+ \cup - A^+$. To see this, let $u, v \in C_A$ with $u v = v u$ be such that $a = u - v$ and $\vert a \vert = 1_A - u v$. Then $a^2 = \vert a \vert^2$ yields $u^2 + v^2 = 1_A + u^2 v^2$ or, equivalently, $(1_A - u^2) (1_A - v^2) = 0$. As $u, v \in C_A$, we note that $1_A + u$ and $1_A + v$ are invertible in $A$. Thus we have $(1_A - u) (1_A - v) = 0$. As $a = u - v = (1_A - v) - (1_A - u)$, we get $1_A - y = a^+ = 1_A - v$ so that $y = v$. Consequently, $x = u$. \end{remark} We can generalize this result in the following sense. Let $A$ be a unital C$^*$-algebra. An operator system $M$ in $A$ (that is, a unital self-adjoint subspace of $A$) is said to be an \emph{absolute operator system} if $\vert a \vert \in M$ whenever $a \in M_{sa}$. Note that every unital C$^*$-subalgebra of $A$ is an absolute operator system. Let $M$ be an absolute operator system in a unital C$^*$-algebra $A$. For $u, v \in M_{sa}$, we define $$u \dot\wedge v := \frac 12 \left( u + v - \vert u - v \vert \right)$$ and $$u \dot\vee v := \frac 12 \left( u + v + \vert u - v \vert \right).$$ Then $u \dot\wedge v, u \dot\vee v \in M_{sa}$.
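For orientation, the following example records what these operations amount to in the commutative case.

\begin{remark}
Let $A = C(X)$ for a compact Hausdorff space $X$, regarded as an absolute operator system in itself. For $u, v \in C(X)_{sa}$ and every $t \in X$ we have
$$(u \dot\wedge v)(t) = \min \lbrace u(t), v(t) \rbrace \quad \text{and} \quad (u \dot\vee v)(t) = \max \lbrace u(t), v(t) \rbrace,$$
by the scalar identities $\frac 12 (a + b - \vert a - b \vert) = \min \lbrace a, b \rbrace$ and $\frac 12 (a + b + \vert a - b \vert) = \max \lbrace a, b \rbrace$ for $a, b \in \mathbb{R}$. In particular, $v^+ = v \dot\vee 0$ and $v^- = (- v) \dot\vee 0$ are the usual pointwise positive and negative parts of $v$, and $v^+ v^- = 0$.
\end{remark}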
Further, given $u, v \in M_{sa}$, $u \dot\wedge v$ is the unique element of $M_{sa}$ such that \begin{itemize} \item[(i)] $u \dot\wedge v \le u$ and $u \dot\wedge v \le v$, and \item[(ii)] $(u - u \dot\wedge v) (v - u \dot\wedge v) = 0$. \end{itemize} Similarly, $u \dot\vee v$ is the unique element of $M_{sa}$ such that \begin{itemize} \item[(i)] $u \le u \dot\vee v$ and $v \le u \dot\vee v$, and \item[(ii)] $(u \dot\vee v - u) (u \dot\vee v - v) = 0$. \end{itemize} (See \cite[Theorem 2.5]{K22}.) Given $v \in M_{sa}$, we write $v^+ := \frac 12 (\vert v \vert + v) = v \dot\vee 0$ and $v^- := \frac 12 (\vert v \vert - v) = (- v) \dot\vee 0$. Then $v^+ v^- = 0$. Following this discussion, we can generalize Proposition \ref{algebra}. \begin{proposition} Let $M$ be an absolute operator system in a unital C$^*$-algebra $A$ and let $v \in M_{sa}$. Then $\Vert v \Vert \le 1$ with $v \notin M^+ \cup - M^+$ if and only if there exists a unique pair $u, w \in C_M \setminus \lbrace 1 \rbrace$ such that $v = u - w$ and $u \dot\vee w = 1$. \end{proposition} \begin{proof} First, we assume that $\Vert v \Vert \le 1$ with $v \notin M^+ \cup - M^+$. Consider $v = v^+ - v^-$. Then $v^+, v^- \in M^+ \setminus \lbrace 0 \rbrace$ with $\max \lbrace \Vert v^+ \Vert, \Vert v^- \Vert \rbrace = \Vert v \Vert \le 1$. Also, $v^+ v^- = 0$ so that $v^+ \perp_{\infty} v^-$. Thus $\Vert v^+ \Vert^{-1} v^+, \Vert v^- \Vert^{-1} v^- \in R_M$. It follows that $v^+ \in [0, \Vert v^+ \Vert^{-1} v^+] \subset 1 - C_M$. Similarly, $v^- \in 1 - C_M$. Put $1 - v^+ = w$ and $1 - v^- = u$. Then $u, w \in C_M \setminus \lbrace 1 \rbrace$ such that $v = u - w$. Since $v^+ v^- = 0$, we have $0 = (1 - u) \dot\wedge (1 - w) = 1 - u \dot\vee w$. Thus $u \dot\vee w = 1$. Next, to prove uniqueness, assume that $v = x - y$ for some $x, y \in C_M$ such that $x \dot\vee y = 1$. Then $(1 - x) \dot\wedge (1 - y) = 1 - x \dot\vee y = 0$ so that $(1 - x) (1 - y) = 0$. Since $v = (1 - y) - (1 - x)$, we conclude that $1 - w = v^+ = 1 - y$. Thus $y = w$ and consequently, $x = u$.
Conversely, assume that there exist $u, w \in C_M \setminus \lbrace 1 \rbrace$ such that $v = u - w$ and $u \dot\vee w = 1$. Then $- 1 \le - w \le u - w \le u \le 1$ so that $\Vert v \Vert = \Vert u - w \Vert \le 1$. Since $u \dot\vee w = 1$, we get, as above, that $v^+ = 1 - w \ne 0$ and $v^- = 1 - u \ne 0$. Hence $v \notin M^+ \cup - M^+$. \end{proof} \begin{remark} We have $\Vert v \Vert = 1$ if and only if either $u$ or $w$ lies in $R_M$. For $v \in M_{sa}$ with $\Vert v \Vert = 1$ and $v \notin M^+ \cup - M^+$, we say that $v$ is \emph{positively tilted} if $u \notin R_M$, \emph{negatively tilted} if $w \notin R_M$, and \emph{neutral} if $u, w \in R_M$. \end{remark} \thanks{{\bf Acknowledgements:} The author was partially supported by the Science and Engineering Research Board, Department of Science and Technology, Government of India, under the Mathematical Research Impact Centric Support project (reference no. MTR/2020/000017).} \end{document}